
Showing papers on "LTE Advanced" published in 2014


Journal ArticleDOI
TL;DR: This paper extends the popular Wireless World Initiative for New Radio (WINNER) channel model with new features to make it as realistic as possible; the extended model accurately predicts the performance of an urban macro-cell setup with commercial high-gain antennas.
Abstract: Channel models are important tools to evaluate the performance of new concepts in mobile communications. However, there is a tradeoff between complexity and accuracy. In this paper, we extend the popular Wireless World Initiative for New Radio (WINNER) channel model with new features to make it as realistic as possible. Our approach enables more realistic evaluation results at an early stage of algorithm development. The new model supports 3-D propagation, 3-D antenna patterns, time-evolving channel traces of arbitrary length, scenario transitions and variable terminal speeds. We validated the model by measurements in a coherent LTE advanced testbed in downtown Berlin, Germany. We then reproduced the same scenario in the model and compared several channel parameters (delay spread, path gain, K-factor, geometry factor and capacity). The results match very well and we can accurately predict the performance for an urban macro-cell setup with commercial high-gain antennas. At the same time, the computational complexity does not increase significantly and we can use all existing WINNER parameter tables. These artificial channels, with characteristics equivalent to those of measured data, enable virtual field trials long before prototypes are available.

679 citations
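The validation above compares channel parameters such as delay spread and K-factor between measured and simulated traces. As a purely illustrative aside (not code from the paper), the sketch below computes the RMS delay spread of a power delay profile; the tap delays and powers are made-up example values.

```python
import numpy as np

def rms_delay_spread(delays_s, powers_lin):
    """RMS delay spread of a power delay profile.

    delays_s   : tap delays in seconds
    powers_lin : tap powers in linear scale (not dB)
    """
    delays_s = np.asarray(delays_s, dtype=float)
    powers_lin = np.asarray(powers_lin, dtype=float)
    p_norm = powers_lin / powers_lin.sum()          # normalize total power to 1
    mean_delay = np.sum(p_norm * delays_s)          # first moment
    second_moment = np.sum(p_norm * delays_s ** 2)  # second moment
    return np.sqrt(second_moment - mean_delay ** 2)

# Hypothetical urban macro-cell profile: taps up to 2.5 us with decaying power
delays = np.array([0.0, 0.2, 0.5, 1.1, 2.5]) * 1e-6                 # seconds
powers = 10 ** (np.array([0.0, -3.0, -6.0, -9.0, -15.0]) / 10)      # dB -> linear
print(f"RMS delay spread: {rms_delay_spread(delays, powers) * 1e9:.1f} ns")
```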


Journal ArticleDOI
TL;DR: A survey of the alternatives proposed in recent years to improve the operation of the random access channel of LTE and LTE-A is provided, identifying the strengths and weaknesses of each, and outlining future trends to steer research efforts in a common direction.
Abstract: The 3GPP has raised the need to revisit the design of next generations of cellular networks in order to make them capable of providing M2M services efficiently. One of the key challenges that has been identified is the need to enhance the operation of the random access channel of LTE and LTE-A. The current mechanism to request access to the system is known to suffer from congestion and overloading in the presence of a huge number of devices. For this reason, different research groups around the globe are working towards the design of more efficient ways of managing the access to these networks in such circumstances. This paper aims to provide a survey of the alternatives that have been proposed over the last years to improve the operation of the random access channel of LTE and LTE-A. A comprehensive discussion of the different alternatives is provided, identifying the strengths and weaknesses of each of them, while drawing future trends to steer research efforts in a common direction. In addition, while existing literature has focused on performance in terms of delay, the energy efficiency of the access mechanism of LTE will play a key role in the deployment of M2M networks. For this reason, a comprehensive performance evaluation of the energy efficiency of the random access mechanism of LTE is provided in this paper. The aim of this computer-based simulation study is to set a baseline performance upon which new and more energy-efficient mechanisms can be designed in the near future.

571 citations
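For readers unfamiliar with why the random access channel congests under massive M2M load, the following toy Monte-Carlo sketch (not taken from the survey; it assumes the commonly cited figure of 54 contention-based preambles per random access opportunity) estimates the probability that a device picks a collision-free preamble as the number of contending devices grows.

```python
import numpy as np

rng = np.random.default_rng(0)

def rach_success_prob(n_devices, n_preambles=54, trials=10_000):
    """Monte-Carlo estimate of the probability that a device's randomly chosen
    preamble is not picked by any other device in the same access opportunity."""
    successes = 0
    for _ in range(trials):
        choices = rng.integers(0, n_preambles, size=n_devices)
        # a device succeeds if its preamble is unique in this opportunity
        _, counts = np.unique(choices, return_counts=True)
        successes += np.sum(counts == 1)
    return successes / (trials * n_devices)

for n in (10, 50, 100, 300):
    print(f"{n:4d} devices -> per-device success probability {rach_success_prob(n):.3f}")
```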


Journal ArticleDOI
TL;DR: To sustain capacity and performance growth beyond today's LTE networks, attention is turning to future mobile broadband technologies and standards (i.e., 5G) as well as evolutions of the 3GPP's existing LTE standard and the IEEE 802.11 standards.
Abstract: Mobile services based on 4G LTE services are steadily expanding across global markets, providing subscribers with the type of responsive Internet browsing experience that previously was only possible on wired broadband connections. With more than 200 commercial LTE networks in operation as of August 2013 [1], LTE subscriptions are expected to exceed 1.3 billion by the end of 2018 [2]. LTE's rapid uptake, based on exponential growth in network data traffic, has opened the industry's eyes to an important reality: the mobile industry must deliver an economically sustainable capacity and performance growth strategy; one that offers increasingly better coverage and a superior user experience at lower cost than existing wireless systems, including LTE. This strategy will be based on a combination of network topology innovations and new terminal capabilities. Simple network economics also require that the industry's strategy enable new services, new applications, and ultimately new opportunities to monetize the user experience. To address these pressing requirements, many expert prognosticators are turning their attention to future mobile broadband technologies and standards (i.e., 5G) as well as evolutions of the 3GPP's existing LTE standard and IEEE 802.11 standards.

440 citations


Journal ArticleDOI
TL;DR: In this paper, the authors identify several emerging technologies which will change and define the future generations of telecommunication standards and look at some of the research problems that these new technologies pose.
Abstract: As the take-up of Long Term Evolution (LTE)/4G cellular accelerates, there is increasing interest in technologies that will define the next generation (5G) telecommunication standard. This article identifies several emerging technologies which will change and define the future generations of telecommunication standards. Some of these technologies are already making their way into standards such as 3GPP LTE, while others are still in development. Additionally, we will look at some of the research problems that these new technologies pose.

380 citations


Journal ArticleDOI
TL;DR: An overview of the security functionality of the LTE and LTE-A networks is presented, the security vulnerabilities existing in their architecture and design are explored, and potential issues for future research are identified.
Abstract: High demands for broadband mobile wireless communications and the emergence of new wireless multimedia applications have motivated the development of broadband wireless access technologies in recent years. The Long Term Evolution/System Architecture Evolution (LTE/SAE) system has been specified by the Third Generation Partnership Project (3GPP) on the way towards fourth-generation (4G) mobile systems, ensuring that 3GPP maintains its dominance in cellular communication technologies. Through the design and optimization of new radio access techniques and a further evolution of the LTE systems, the 3GPP is developing the future LTE-Advanced (LTE-A) wireless networks as its 4G standard. Since the 3GPP LTE and LTE-A architectures are designed to support flat Internet Protocol (IP) connectivity and full interworking with heterogeneous wireless access networks, these new features bring new challenges to the design of the security mechanisms. This paper makes a number of contributions to the security aspects of the LTE and LTE-A networks. First, we present an overview of the security functionality of the LTE and LTE-A networks. Second, the security vulnerabilities existing in the architecture and the design of the LTE and LTE-A networks are explored. Third, the existing solutions to these problems are reviewed. Finally, we discuss open issues for future research.

264 citations


Journal ArticleDOI
TL;DR: A device-to-device (D2D) communication-based load balancing algorithm is proposed, which utilizes D2D communications as bridges to flexibly offload traffic among cells of different tiers and achieve efficient load balancing according to their real-time traffic distributions.
Abstract: In LTE-Advanced networks, besides the overall coverage provided by traditional macrocells, various classes of low-power nodes (e.g., pico eNBs, femto eNBs, and relays) can be distributed throughout the macrocells as a more targeted underlay to further enhance the area's spectral efficiency, alleviate traffic hot zones, and thus improve the end-user experience. Considering the limited backhaul connections within low-power nodes and the imbalanced traffic distribution among different cells, it is highly possible that some cells are severely congested while adjacent cells are very lightly loaded. Therefore, it is of critical importance to achieve efficient load balancing among multi-tier cells in LTE-Advanced networks. However, available techniques such as smart cell and biasing, although able to alleviate congestion or distribute traffic to some extent, cannot respond or adapt flexibly to the real-time traffic distributions among multi-tier cells. Toward this end, we propose in this article a device-to-device communication-based load balancing algorithm, which utilizes D2D communications as bridges to flexibly offload traffic among cells of different tiers and achieve efficient load balancing according to their real-time traffic distributions. Besides identifying the research issues that deserve further study, we also present numerical results to show the performance gains that can be achieved by the proposed algorithm.

217 citations


Journal ArticleDOI
TL;DR: A comprehensive discussion is provided on the key aspects and research challenges of MM support in the presence of femtocells, with emphasis on the phases of a) cell identification, b) access control, c) cell search, d) cell selection/reselection, e) handover (HO) decision, and f) HO execution.
Abstract: Support of femtocells is an integral part of the Long Term Evolution - Advanced (LTE-A) system and a key enabler for its adoption on a broad scale. Femtocells are short-range, low-power and low-cost cellular stations which are installed by the consumers in an unplanned manner. Even though current literature includes various studies towards understanding the main challenges of interference management in the presence of femtocells, little light has been shed on the open issues of mobility management (MM) in the two-tier macrocell-femtocell network. In this paper, we provide a comprehensive discussion on the key aspects and research challenges of MM support in the presence of femtocells, with emphasis on the phases of a) cell identification, b) access control, c) cell search, d) cell selection/reselection, e) handover (HO) decision, and f) HO execution. A detailed overview of the respective MM procedures in the LTE-A system is also provided to better comprehend the solutions and open issues posed in real-life systems. Based on the discussion for the HO decision phase, we subsequently survey and classify existing HO decision algorithms for the two-tier macrocell-femtocell network, depending on the primary HO decision criterion used. For each class, we overview up to three representative algorithms and provide detailed flowcharts to describe their fundamental operation. A comparative summary of the main decision parameters and key features of selected HO decision algorithms concludes this work, providing insights for future algorithmic design and standardization activities.

217 citations
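To make the handover (HO) decision phase discussed above more concrete, here is a minimal sketch of a generic RSRP-based decision with hysteresis and time-to-trigger, loosely in the spirit of an LTE A3 event. It is not one of the surveyed algorithms; the margin, timer, and measurement trace are invented for illustration.

```python
def a3_handover_decision(serving_rsrp_dbm, target_rsrp_dbm,
                         hysteresis_db=3.0, time_to_trigger_ms=160, step_ms=40):
    """Toy RSRP-based handover decision: hand over when the target cell is
    better than the serving cell by the hysteresis margin for the whole
    time-to-trigger window.

    serving_rsrp_dbm / target_rsrp_dbm : per-measurement-period RSRP samples.
    Returns the index of the sample at which the handover would be triggered,
    or None if the condition is never met for long enough.
    """
    needed = max(1, int(time_to_trigger_ms / step_ms))
    run = 0
    for i, (s, t) in enumerate(zip(serving_rsrp_dbm, target_rsrp_dbm)):
        run = run + 1 if t > s + hysteresis_db else 0
        if run >= needed:
            return i
    return None

# Hypothetical measurement trace while a UE approaches a femtocell
serving = [-85, -86, -88, -90, -92, -94, -95, -96]
target  = [-100, -95, -92, -90, -88, -86, -85, -84]
print("handover triggered at sample:", a3_handover_decision(serving, target))
```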


Journal ArticleDOI
TL;DR: This paper presents a comprehensive survey of the RRM schemes that have been studied in recent years for LTE/LTE-A HetNets, with a particular focus on those for femtocells and relay nodes.
Abstract: As heterogeneous networks (HetNets) emerge as one of the most promising developments toward realizing the target specifications of Long Term Evolution (LTE) and LTE-Advanced (LTE-A) networks, radio resource management (RRM) research for such networks has, in recent times, been intensively pursued. Clearly, recent research mainly concentrates on the aspect of interference mitigation. Other RRM aspects, such as radio resource utilization, fairness, complexity, and QoS, have not been given much attention. In this paper, we aim to provide an overview of the key challenges arising from HetNets and highlight their importance. Subsequently, we present a comprehensive survey of the RRM schemes that have been studied in recent years for LTE/LTE-A HetNets, with a particular focus on those for femtocells and relay nodes. Furthermore, we classify these RRM schemes according to their underlying approaches. In addition, these RRM schemes are qualitatively analyzed and compared to each other. We also identify a number of potential research directions for future RRM development. Finally, we discuss the gaps in current RRM research and the importance of multi-objective RRM studies.

187 citations


Journal ArticleDOI
TL;DR: This paper offers a tutorial on scheduling in LTE and its successor LTE-Advanced, surveys representative schemes in the literature that have addressed the scheduling problem, and offers an evaluation methodology to be used as a basis for comparison between scheduling proposals in the Literature.
Abstract: The choice of OFDM-based multi-carrier access techniques for LTE marked a fundamental and farsighted parting from preceding 3GPP networks. With OFDMA in the downlink and SC-FDMA in the uplink, LTE possesses a robust and adaptive multiple access scheme that facilitates many physical layer enhancements. Despite this flexibility, scheduling in LTE is a challenging functionality to design, especially in the uplink. Resource allocation in LTE is made complex, especially when considering its target packet-based services and mobility profiles, both current and emerging, in addition to the use of several physical layer enhancements. In this paper, we offer a tutorial on scheduling in LTE and its successor LTE-Advanced. We also survey representative schemes in the literature that have addressed the scheduling problem, and offer an evaluation methodology to be used as a basis for comparison between scheduling proposals in the literature.

175 citations
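As a concrete, generic illustration of the scheduling problem the tutorial addresses, the sketch below applies the classic proportional-fair metric per resource block and then updates each UE's average throughput with exponential smoothing. It is not a scheme from the survey; the rates and smoothing factor are made up, and the SC-FDMA contiguity constraint of the LTE uplink is deliberately ignored.

```python
import numpy as np

def pf_schedule(inst_rates, avg_throughput):
    """Proportional-fair metric: for each resource block, pick the UE with the
    largest ratio of instantaneous achievable rate to its average throughput."""
    metric = inst_rates / avg_throughput[:, None]   # shape (n_ue, n_rb)
    return np.argmax(metric, axis=0)                # winning UE per RB

rng = np.random.default_rng(1)
n_ue, n_rb = 4, 6
inst_rates = rng.uniform(0.2, 2.0, size=(n_ue, n_rb))   # Mb/s, made-up CQI-derived rates
avg_tput = np.array([1.5, 0.4, 0.9, 0.6])               # Mb/s, past average per UE

winners = pf_schedule(inst_rates, avg_tput)
print("RB -> UE assignment:", winners)

# Exponential smoothing of the average throughput after the allocation (alpha assumed)
alpha = 0.1
served = np.zeros(n_ue)
for rb, ue in enumerate(winners):
    served[ue] += inst_rates[ue, rb]
avg_tput = (1 - alpha) * avg_tput + alpha * served
print("updated average throughput:", np.round(avg_tput, 3))
```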


Journal ArticleDOI
TL;DR: On-going research on the different RRM aspects and algorithms to support CA in LTE-Advanced is surveyed, following a review of spectrum aggregation techniques and of the requirements on radio resource management (RRM) functionality in support of CA.
Abstract: In order to satisfy the requirements of future IMT-Advanced mobile systems, the concept of spectrum aggregation is introduced by 3GPP in its new LTE-Advanced (LTE Rel. 10) standards. Spectrum aggregation allows the aggregation of component carriers (CCs) dispersed within and across different bands (intra/inter-band), as well as the combination of CCs having different bandwidths, and is expected to provide a powerful boost to the user throughput in LTE-Advanced (LTE-A). However, the introduction of spectrum aggregation, or carrier aggregation (CA) as it is referred to in LTE Rel. 10, has required some changes from the baseline LTE Rel. 8, although each CC in LTE-A remains backward compatible with LTE Rel. 8. This article provides a review of spectrum aggregation techniques, followed by the requirements on radio resource management (RRM) functionality in support of CA. On-going research on the different RRM aspects and algorithms to support CA in LTE-Advanced is surveyed. Technical challenges for future research on aggregation in LTE-Advanced systems are also outlined.

170 citations


Journal ArticleDOI
TL;DR: The limits of machine type communication traffic coexisting with human communication traffic in LTE-A networks are investigated, such that human customer churn is minimized; under proper design, the outage probability of human communication is only marginally impacted.
Abstract: Machine-to-machine wireless systems are being standardized to provide ubiquitous connectivity between machines without the need for human intervention. A natural concern of cellular operators and service providers is the impact that these machine type communications will have on current human type communications. Given the exponential growth of machine type communication traffic, it is of utmost importance to ensure that current voice and data traffic is not jeopardized. This article investigates the limits of machine type communication traffic coexisting with human communication traffic in LTE-A networks, such that human customer churn is minimized. We show that under proper design, the outage probability of human communication is marginally impacted whilst duty cycle and access delay of machine type communications are reasonably bounded to ensure viable M2M operations.

Journal ArticleDOI
TL;DR: There is a distance threshold beyond which relay-aided D2D communication significantly improves network performance when compared to direct communication between D2D peers, and the chance constraint approach is utilized to achieve a trade-off between robustness and optimality.
Abstract: Device-to-device (D2D) communication in cellular networks allows direct transmission between two cellular devices with local communication needs. Due to the increasing number of autonomous heterogeneous devices in future mobile networks, an efficient resource allocation scheme is required to maximize network throughput and achieve higher spectral efficiency. In this paper, performance of network-integrated D2D communication under channel uncertainties is investigated where D2D traffic is carried through relay nodes. Considering a multi-user and multi-relay network, we propose a robust distributed solution for resource allocation with a view to maximizing network sum-rate when the interference from other relay nodes and the link gains are uncertain. An optimization problem is formulated for allocating radio resources at the relays to maximize end-to-end rate as well as satisfy the quality-of-service (QoS) requirements for cellular and D2D user equipments under a total power constraint. Each of the uncertain parameters is modeled by a bounded deviation between its estimated and actual values. We show that the robust problem is convex and a gradient-aided dual decomposition algorithm is applied to allocate radio resources in a distributed manner. Finally, to reduce the cost of robustness, defined as the reduction of achievable sum-rate, we utilize the chance constraint approach to achieve a trade-off between robustness and optimality. The numerical results show that there is a distance threshold beyond which relay-aided D2D communication significantly improves network performance when compared to direct communication between D2D peers.
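The paper's distributed solution relies on dual decomposition with a gradient-aided update. As a heavily simplified sketch of that general technique (not the authors' algorithm, and ignoring relays, QoS constraints, and channel uncertainty), the code below maximizes a sum-rate under a single total power budget: each per-link subproblem has a water-filling closed form, and the dual variable is updated by a subgradient step.

```python
import numpy as np

def dual_decomposition_power_alloc(gains, p_total, iters=200, step=0.05):
    """Toy dual-decomposition sketch: maximize sum log(1 + g_i * p_i) subject to
    sum p_i <= p_total. Each subproblem has the closed-form water-filling
    solution p_i = max(0, 1/lambda - 1/g_i); lambda is the dual price on power."""
    lam = 1.0
    for _ in range(iters):
        p = np.maximum(0.0, 1.0 / lam - 1.0 / gains)        # solve subproblems
        lam = max(1e-6, lam + step * (p.sum() - p_total))   # subgradient update
    return p

gains = np.array([2.0, 0.8, 0.3, 1.5])   # made-up normalized link gains
p = dual_decomposition_power_alloc(gains, p_total=4.0)
print("allocated powers:", np.round(p, 3), " total:", round(p.sum(), 3))
print("sum-rate (bit/s/Hz):", round(np.log2(1 + gains * p).sum(), 3))
```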

Patent
15 Aug 2014
TL;DR: Various downlink procedures may be modified in order to handle communications across licensed and unlicensed spectrum in LTE/LTE-A deployments that use unlicensed spectrum.
Abstract: Long term evolution (LTE)/LTE- Advanced (LTE-A) deployments with unlicensed spectrum leverage more efficient LTE communication aspects over unlicensed spectrum, such as over WIFI radio access technology. In order to accommodate such communications, various downlink procedures may be modified in order to handle communications between licensed and unlicensed spectrum with LTE/LTE-A deployments with unlicensed spectrum.

Patent
10 Jun 2014
TL;DR: An uplink grant may be configured to trigger a clear channel assessment (CCA) to determine the availability of an unlicensed spectrum prior to a transmission associated with the uplink grant.
Abstract: Methods, systems, and apparatuses are described for wireless communications. In one method, an uplink grant may be received over a licensed spectrum. A clear channel assessment (CCA) may be performed in response to the uplink grant to determine availability of an unlicensed spectrum. The CCA may be performed prior to a transmission associated with the uplink grant. In another method, scheduling information may be received over a licensed spectrum. An uplink grant may be transmitted over the licensed spectrum. The uplink grant may be based at least in part on the scheduling information. The uplink grant may be configured to trigger a CCA to determine availability of an unlicensed spectrum prior to a transmission associated with the uplink grant.
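As background for how such a CCA might be realized, here is a toy energy-detection sketch; the threshold, the noise-floor mapping, and the signal model are illustrative assumptions, not values from the patent or from any regulation.

```python
import numpy as np

def energy_detect_cca(iq_samples, threshold_dbm=-72.0, noise_floor_dbm=-95.0):
    """Toy energy-detection clear channel assessment: declare the unlicensed
    channel busy if the measured energy exceeds a threshold. The mapping assumes
    the noise-only average power is normalized to 1 and sits at noise_floor_dbm."""
    power_lin = np.mean(np.abs(iq_samples) ** 2)            # average sample power
    power_dbm = 10 * np.log10(power_lin) + noise_floor_dbm  # map to an assumed dBm scale
    return ("idle", power_dbm) if power_dbm < threshold_dbm else ("busy", power_dbm)

rng = np.random.default_rng(2)
quiet = (rng.normal(size=1000) + 1j * rng.normal(size=1000)) / np.sqrt(2)  # noise only
busy = quiet + 30.0 * np.exp(1j * 2 * np.pi * rng.random(1000))            # plus interferer

for name, samples in (("quiet channel", quiet), ("occupied channel", busy)):
    state, p = energy_detect_cca(samples)
    print(f"{name}: {state} ({p:.1f} dBm measured)")
```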

Journal ArticleDOI
TL;DR: A comprehensive, state-of-the-art view of the key enabling technologies for LTE-Advanced systems is provided, while novel approaches and enhancements currently under consideration for Rel-12 are also discussed.

Proceedings ArticleDOI
28 Aug 2014
TL;DR: SimuLTE is an open-source system-level simulator for LTE and LTE-Advanced (LTE-A) networks based on OMNeT++, a well-known, widely-used modular simulation framework, which offers a high degree of experiment support.
Abstract: This paper describes SimuLTE, an open-source system-level simulator for LTE and LTE-Advanced (LTE-A) networks. SimuLTE is based on OMNeT++, a well-known, widely-used modular simulation framework, which offers a high degree of experiment support. As such, it can be seamlessly integrated with all the network-oriented modules of the OMNeT++ family, such as INET, thus enabling - among other things - credible simulation of end-to-end real-life applications across heterogeneous technologies. We describe the modeling choices and general architecture of the SimuLTE software, with particular emphasis on the MAC and scheduling functions, and show performance evaluation results obtained using the simulator.

Journal ArticleDOI
TL;DR: The results show that the proposed tradeoff scheme is efficient in keeping a balance between power saving and latency, and indicate that DRX short cycles are very effective in reducing latency for active traffic, while a shorter inactivity timer is desirable for background traffic to enhance power saving.
Abstract: Discontinuous reception (DRX) saves battery power of user equipment (UE), usually at the expense of a potential increase in latency, in Long Term Evolution (LTE) networks. Therefore, an optimization is needed to find the best tradeoff between latency and power saving. In this paper, we first develop an analytical model to estimate the power saving achieved and latency incurred by the DRX mechanism for active and background mobile traffic. A tradeoff scheme is then formulated to maintain a balance between these two performance parameters based on the operator's preference for power saving and the latency requirement of the traffic. The analytical model is validated using system-level simulation results obtained from OPNET Modeler. The results show that the proposed tradeoff scheme is efficient in keeping a balance between power saving and latency. The results also indicate that DRX short cycles are very effective in reducing latency for active traffic, while a shorter inactivity timer is desirable for background traffic to enhance power saving. We also propose a mechanism to switch the DRX configuration based on the traffic running at the UE, using the UE assistance procedure recently adopted by 3GPP in Release 11. DRX configuration switching increases the power saving significantly without any noticeable increase in the latency of active traffic.
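The paper builds a full analytical model for this tradeoff; the back-of-the-envelope sketch below is far cruder but illustrates the direction of the effect: longer DRX cycles increase the sleep fraction and the average extra latency. All parameter values are assumptions for illustration, not from the paper.

```python
def drx_tradeoff(long_cycle_ms, on_duration_ms, inactivity_timer_ms, bursts_per_s):
    """Back-of-the-envelope DRX trade-off (much simpler than the paper's model):
    - sleep opportunity per cycle with no traffic: cycle minus on-duration
    - each traffic burst keeps the receiver on for roughly the inactivity timer
    - a downlink burst arriving while the UE sleeps waits, on average, half of
      the sleep window before the next on-duration
    Returns (sleep fraction, average extra latency in ms)."""
    awake_baseline = on_duration_ms / long_cycle_ms
    awake_traffic = min(1.0, bursts_per_s * inactivity_timer_ms / 1000.0)
    awake_fraction = min(1.0, awake_baseline + awake_traffic)
    avg_extra_latency_ms = (1 - awake_fraction) * (long_cycle_ms - on_duration_ms) / 2.0
    return 1.0 - awake_fraction, avg_extra_latency_ms

for cycle in (80, 320, 1280):            # ms, typical LTE DRX long-cycle values
    saving, latency = drx_tradeoff(cycle, on_duration_ms=10,
                                   inactivity_timer_ms=100, bursts_per_s=0.2)
    print(f"cycle {cycle:5d} ms: sleep fraction {saving:.2f}, "
          f"avg extra latency ~{latency:.0f} ms")
```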

Proceedings ArticleDOI
01 Dec 2014
TL;DR: Simulation results show that WiFi performance is more vulnerable to LTE interference, while LTE performance is degraded only slightly; however, WiFi throughput degradation is lower for TDD configurations with a larger number of LTE uplink sub-frames and smaller path loss compensation factors.
Abstract: One of the effective ways to address the exponentially increasing traffic demand in mobile communication systems is to use more spectrum. Although licensed spectrum is always preferable for providing a better user experience, unlicensed spectrum can be considered as an effective complement. Before moving into unlicensed spectrum, it is essential to carry out proper coexistence performance evaluations. In this paper, we analyze WiFi 802.11n and Long Term Evolution (LTE) coexistence performance considering multi-layer cell layouts through system level simulations. We consider a time division duplexing (TDD)-LTE system with an FTP traffic model for performance evaluation. Simulation results show that WiFi performance is more vulnerable to LTE interference, while LTE performance is degraded only slightly. However, WiFi throughput degradation is lower for TDD configurations with a larger number of LTE uplink sub-frames and smaller path loss compensation factors.

Book
13 Mar 2014
TL;DR: This book provides an insight into the key practical aspects and best practices of 4G-LTE network design, deployment, and performance, and presents a realistic roadmap for the evolution of deployed 3G/4G networks.
Abstract: This book provides an insight into the key practical aspects and best practice of 4G-LTE network design, performance, and deployment. Design, Deployment and Performance of 4G-LTE Networks addresses the key practical aspects and best practice of 4G networks design, performance, and deployment. In addition, the book focuses on the end-to-end aspects of the LTE network architecture and different deployment scenarios of commercial LTE networks. It describes the air interface of LTE focusing on the access stratum protocol layers: PDCP, RLC, MAC, and Physical Layer. The air interface described in this book covers the concepts of LTE frame structure, downlink and uplink scheduling, and detailed illustrations of the data flow across the protocol layers. It describes the details of the optimization process including performance measurements and troubleshooting mechanisms in addition to demonstrating common issues and case studies based on actual field results. The book provides detailed performance analysis of key features/enhancements such as C-DRX for smartphone battery saving, the CSFB solution to support voice calls with LTE, and MIMO techniques. The book presents analysis of LTE coverage and link budgets alongside a detailed comparative analysis with HSPA+. Practical link budget examples are provided for data and VoLTE scenarios. Furthermore, the reader is provided with a detailed explanation of capacity dimensioning of the LTE systems. The LTE capacity analysis in this book is presented in a comparative manner with reference to the HSPA+ network to benchmark the LTE network capacity. The book describes the voice options for LTE including the VoIP protocol stack and IMS Single Radio Voice Call Continuity (SRVCC). In addition, key VoLTE features are presented: semi-persistent scheduling (SPS), TTI bundling, Quality of Service (QoS), VoIP with C-DRX, Robust Header Compression (RoHC), and VoLTE vocoders and de-jitter buffer. The book describes several LTE and LTE-A advanced features in the evolution from Release 8 to 10 including SON, eICIC, CA, CoMP, HetNet, Enhanced MIMO, Relays, and LBS. This book can be used as a reference for best practices in LTE networks design and deployment, performance analysis, and evolution strategy. The book:
- Conveys the theoretical background of 4G-LTE networks
- Presents key aspects and best practice of 4G-LTE networks design and deployment
- Includes a realistic roadmap for the evolution of deployed 3G/4G networks
- Addresses the practical aspects of designing and deploying commercial LTE networks
- Analyzes LTE coverage and link budgets, including a detailed comparative analysis with HSPA+
- References the best practices in LTE networks design and deployment, performance analysis, and evolution strategy
- Covers infrastructure-sharing scenarios for CAPEX and OPEX saving
- Provides key practical aspects for supporting voice services over LTE
It is written for all 4G engineers/designers working in networks design for operators, network deployment engineers, R&D engineers, telecom consulting firms, measurement/performance tools firms, deployment subcontractors, and senior undergraduate and graduate students interested in understanding the practical aspects of 4G-LTE networks as part of their classes, research, or projects.
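To illustrate the kind of link-budget arithmetic the book covers, here is a generic sketch that converts transmit power, antenna gains, noise figure, bandwidth, and required SINR into a maximum allowed path loss. The numbers are illustrative assumptions and are not reproduced from the book.

```python
import math

def max_allowed_path_loss(tx_power_dbm, tx_antenna_gain_dbi, rx_antenna_gain_dbi,
                          rx_noise_figure_db, bandwidth_hz, required_sinr_db,
                          interference_margin_db=3.0, body_loss_db=0.0,
                          shadow_fade_margin_db=8.0):
    """Generic link-budget sketch: thermal noise + noise figure + required SINR
    give the receiver sensitivity; subtracting it (and margins) from the EIRP
    gives the maximum allowed path loss."""
    thermal_noise_dbm = -174.0 + 10 * math.log10(bandwidth_hz)     # kTB at 290 K
    sensitivity_dbm = thermal_noise_dbm + rx_noise_figure_db + required_sinr_db
    eirp_dbm = tx_power_dbm + tx_antenna_gain_dbi - body_loss_db
    return (eirp_dbm + rx_antenna_gain_dbi - sensitivity_dbm
            - interference_margin_db - shadow_fade_margin_db)

# Illustrative VoLTE-like uplink: 23 dBm UE transmitting over 2 resource blocks (360 kHz)
mapl = max_allowed_path_loss(tx_power_dbm=23, tx_antenna_gain_dbi=0,
                             rx_antenna_gain_dbi=17, rx_noise_figure_db=2.5,
                             bandwidth_hz=360e3, required_sinr_db=-4.0,
                             body_loss_db=3.0)
print(f"Maximum allowed path loss: {mapl:.1f} dB")
```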

Journal ArticleDOI
TL;DR: A novel greedy-based scheme is proposed to maximize the system throughput while maintaining proportional fairness of radio resource allocation among all UEs and shows that this scheme can guarantee at least half of the performance of the optimal solution.
Abstract: In Long Term Evolution-Advanced (LTE-A) networks, the carrier aggregation technique is incorporated for user equipments (UEs) to simultaneously aggregate multiple component carriers (CCs) to achieve a higher transmission rate. Many research works for LTE-A systems with carrier aggregation configuration have concentrated on the radio resource management problem for downlink transmission, including mainly CC assignment and packet scheduling. Most previous studies have not considered that the CCs assigned to each UE can be changed. Furthermore, they also have not considered the modulation and coding scheme constraint, as specified in LTE-A standards. Therefore, their proposed schemes may limit the radio resource usage and are not compatible with LTE-A systems. In this paper, we assume that the scheduler can reassign CCs to each UE at each transmission time interval and formulate the downlink radio resource scheduling problem under the modulation and coding scheme constraint, which is proved to be NP-hard. Then, a novel greedy-based scheme is proposed to maximize the system throughput while maintaining proportional fairness of radio resource allocation among all UEs. We show that this scheme can guarantee at least half of the performance of the optimal solution. Simulation results show that our proposed scheme outperforms the schemes in previous studies.
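The key modeling point is the single-MCS constraint: all resource blocks (RBs) scheduled to a UE on one component carrier in a TTI must share one modulation and coding scheme, so the weakest included RB caps the rate. The sketch below is a toy greedy illustration of that effect only; it is not the scheme proposed in the paper, and the per-RB spectral efficiencies are invented.

```python
import numpy as np

def best_rb_subset_single_mcs(per_rb_eff_bps_hz, rb_bandwidth_hz=180e3):
    """Toy illustration of the single-MCS constraint on one component carrier:
    all RBs scheduled to a UE in a TTI share one MCS, so the common rate per RB
    is limited by the worst RB included. Greedily sort RBs by quality and pick
    the prefix that maximizes the transport block rate."""
    eff = np.sort(np.asarray(per_rb_eff_bps_hz))[::-1]       # best RBs first
    candidate_rates = [(k + 1) * eff[k] * rb_bandwidth_hz for k in range(len(eff))]
    best_k = int(np.argmax(candidate_rates))
    return best_k + 1, candidate_rates[best_k]

# Hypothetical per-RB spectral efficiencies (bit/s/Hz) for one UE on one CC
eff = [5.5, 5.2, 4.9, 3.0, 1.2, 0.8]
n_rbs, rate = best_rb_subset_single_mcs(eff)
print(f"schedule {n_rbs} RBs, transport block rate ~{rate / 1e6:.2f} Mb/s")
print(f"(naive per-RB sum would suggest {sum(eff) * 180e3 / 1e6:.2f} Mb/s)")
```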

Proceedings ArticleDOI
07 Sep 2014
TL;DR: This work presents OpenAirInterface (OAI) as a suitably flexible platform towards an open LTE ecosystem and playground, and demonstrates the use of OAI to deploy a low-cost open LTE network using commodity hardware with standard LTE-compatible devices.
Abstract: LTE 4G cellular networks are gradually being adopted by all major operators in the world and are expected to rule the cellular landscape at least for the current decade. They will also form the starting point for further progress beyond the current generation of mobile cellular networks, charting a path towards fifth-generation mobile networks. The lack of an open cellular ecosystem has limited applied research in this field to within the boundaries of vendor and operator R&D groups. Furthermore, several new approaches and technologies are being considered as potential elements making up such a future mobile network, including cloudification of the radio network, radio network programmability and APIs following SDN principles, native support of machine-type communication, and massive MIMO. Research on these technologies requires realistic and flexible experimentation platforms that offer a wide range of experimentation modes, from real-world experimentation to controlled and scalable evaluations, while at the same time retaining backward compatibility with current generation systems. In this work, we present OpenAirInterface (OAI) as a suitably flexible platform towards an open LTE ecosystem and playground [1]. We will demonstrate an example of the use of OAI to deploy a low-cost open LTE network using commodity hardware with standard LTE-compatible devices. We also show the reconfigurability features of the platform.

Journal ArticleDOI
TL;DR: This work investigates the relationship between energy efficiency (EE) and bandwidth efficiency (BE) in a mobile environment, and proposes an EE-BE-aware scheduling scheme with a dynamic relay selection strategy that is flexible enough to make transmission decisions, including relay selection, rate allocation, and routing.
Abstract: Energy efficiency (EE) and bandwidth efficiency (BE) are two critically important performance metrics for device-to-device (D2D) communications. In this work, we investigate how mobility impacts EE and BE in a general framework of an LTE-Advanced network. First, we deploy a simple but practical mobility model to capture the track of the mobile devices. In particular, unlike previous works focusing on mobility velocity, which is difficult to obtain in practical mobile D2D systems, we use the parameter of device density to describe device mobility. Next, we investigate the relationship between EE and BE in a mobile environment, and propose an EE-BE-aware scheduling scheme with a dynamic relay selection strategy that is flexible enough to make transmission decisions, including relay selection, rate allocation, and routing. Subsequently, through rigorous theoretical analysis, we derive a precise EE-BE trade-off curve for any device density and obtain the condition to attain the optimal EE and BE simultaneously. Finally, numerical simulation results are provided to validate the efficiency of the proposed scheduling scheme and the correctness of our analysis.

Proceedings ArticleDOI
01 Apr 2014
TL;DR: A carrier aggregation resource allocation algorithm is proposed to allocate the LTE-Advanced and radar carriers' resources optimally among users based on the type of user application, giving priority to users running inelastic traffic when allocating resources.
Abstract: Spectrum sharing is a promising solution to the problem of spectrum congestion. We consider a spectrum sharing scenario between a multiple-input multiple-output (MIMO) radar and a Long Term Evolution (LTE) Advanced cellular system. In this paper, we consider a resource allocation optimization problem with carrier aggregation. The LTE Advanced system has N_BS base stations (BSs) which operate in the radar band on a sharing basis. Our objective is to allocate resources from the LTE Advanced carrier and the MIMO radar carrier to each user equipment (UE) in an LTE Advanced cell based on the running application of the UE. Each user application is assigned a utility function based on the type of application. We propose a carrier aggregation resource allocation algorithm to allocate the LTE Advanced and radar carriers' resources optimally among users based on the type of user application. The algorithm gives priority to users running inelastic traffic when allocating resources. Finally, we present simulation results on the performance of the proposed carrier aggregation resource allocation algorithm.
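The algorithm hinges on assigning each application a utility function by traffic type, typically sigmoidal for inelastic (real-time) traffic and logarithmic for elastic traffic. The sketch below only tabulates those two generic utility shapes; the parameter values are assumptions and are not taken from the paper.

```python
import numpy as np

def sigmoid_utility(rate, inflection, steepness):
    """Sigmoidal utility typically used for inelastic (real-time) traffic:
    little value below the inflection rate, near-full value above it."""
    return 1.0 / (1.0 + np.exp(-steepness * (rate - inflection)))

def log_utility(rate, k=1.0):
    """Logarithmic utility typically used for elastic (best-effort) traffic:
    diminishing returns as the allocated rate grows."""
    return np.log(1.0 + k * rate)

rates = np.array([0.1, 0.5, 1.0, 2.0, 5.0])   # Mb/s, illustrative allocations
print("rate  sigmoid(inelastic)  log(elastic)")
for r in rates:
    print(f"{r:4.1f}  {sigmoid_utility(r, inflection=1.0, steepness=5.0):18.3f}"
          f"  {log_utility(r):12.3f}")
```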

Patent
Wanshi Chen, Aleksandar Damnjanovic, Srinivas Yerramalli, Tao Luo, Peter Gaal
22 Sep 2014
TL;DR: In this paper, uplink control channel management is disclosed for LTE/LTE-A communication systems with unlicensed spectrum in which two or more physical resource blocks (PRBs) are allocated for uplink channel transmission.
Abstract: Uplink control channel management is disclosed for LTE/LTE-A communication systems with unlicensed spectrum in which two or more physical resource blocks (PRBs) are allocated for uplink control channel transmission. The uplink control information (UCI) payload may be determined based on clear channel assessment (CCA) information associated with carriers scheduled for transmission of the UCI data. With the UCI payload determined, two or more uplink control channel messages may be generated according to at least one control channel format, wherein uplink control channel messages include the UCI payload. These generated uplink control channel messages may then be transmitted over the allocated PRBs.

Journal ArticleDOI
TL;DR: The characteristics of the LTE railway radio access network in terms of eNodeB (LTE base station) density and eNodeB transmission power are analyzed and a set of computer-based simulation scenarios is evaluated regarding the achieved transfer delay and data integrity of European Train Control System (ETCS) messages.
Abstract: The Global System for Mobile Communications-Railways (GSM-R) is an obsolete mobile technology with considerable shortcomings in terms of capacity and data transmission capabilities. Because of these shortcomings, GSM-R is becoming the element limiting the number of running trains in areas with high train concentration, such as major train stations. Moreover, GSM-R cannot support advanced data services. Hence, modern technologies, such as long-term evolution (LTE), have to be evaluated as possible railway communication technologies to replace GSM-R in the future. This article analyzes the characteristics of the LTE railway radio access network in terms of eNodeB (LTE base station) density and eNodeB transmission power. Based on this analysis, a set of computer-based simulation scenarios (e.g., OPNET) with varying numbers of eNodeBs is evaluated regarding the achieved transfer delay and data integrity of European Train Control System (ETCS) messages.

Proceedings ArticleDOI
04 May 2014
TL;DR: This paper proposes - to the best of the authors' knowledge - the first ASIC design for high-throughput data detection in single carrier frequency division multiple access (SC-FDMA)-based large-scale MIMO systems, such as systems building on future 3GPP LTE-Advanced standards.
Abstract: This paper proposes - to the best of our knowledge - the first ASIC design for high-throughput data detection in single carrier frequency division multiple access (SC-FDMA)-based large-scale MIMO systems, such as systems building on future 3GPP LTE-Advanced standards. In order to substantially reduce the complexity of linear soft-output data detection in systems having hundreds of antennas at the base station (BS), the proposed detector builds upon a truncated Neumann series expansion to compute the necessary matrix inverse at low complexity. To achieve high throughput in the 3GPP LTE-A uplink, we develop a systolic VLSI architecture including all necessary processing blocks. We present a corresponding ASIC design that achieves 3.8 Gb/s for a 128-antenna, 8-user 3GPP LTE-A based large-scale MIMO system, while occupying 11.1 mm^2 in a TSMC 45nm CMOS technology.
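The core idea, approximating the matrix inverse needed for linear detection by a truncated Neumann series around the diagonal of the regularized Gram matrix, can be sketched in a few lines. The code below is an illustrative NumPy version, not the paper's fixed-point VLSI datapath; the antenna and user counts match the abstract, but the SNR and the regularization constant are assumptions.

```python
import numpy as np

def neumann_inverse(A, terms=3):
    """Truncated Neumann series approximation of A^{-1} using the diagonal
    split A = D + E:  A^{-1} ~= sum_{k<K} (-D^{-1} E)^k D^{-1}.
    Accurate when A is diagonally dominant, as the uplink Gram matrix tends to
    be when the BS has many more antennas than users."""
    D_inv = np.diag(1.0 / np.diag(A))
    E = A - np.diag(np.diag(A))
    M = -D_inv @ E
    approx, term = D_inv.copy(), D_inv.copy()
    for _ in range(terms - 1):
        term = M @ term          # next power of (-D^{-1} E) applied to D^{-1}
        approx += term
    return approx

rng = np.random.default_rng(3)
n_bs, n_ue, snr = 128, 8, 10.0                      # antennas, users, linear SNR (assumed)
H = (rng.normal(size=(n_bs, n_ue)) + 1j * rng.normal(size=(n_bs, n_ue))) / np.sqrt(2)
A = H.conj().T @ H + (n_ue / snr) * np.eye(n_ue)    # regularized Gram matrix (MMSE-style)

exact = np.linalg.inv(A)
for K in (1, 2, 3):
    err = np.linalg.norm(neumann_inverse(A, K) - exact) / np.linalg.norm(exact)
    print(f"K={K} terms: relative error {err:.2e}")
```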

Proceedings ArticleDOI
06 Apr 2014
TL;DR: Two simple DPS schemes are proposed that take into account the UE's current channel conditions and the cell loading conditions to make the UE's TP switching decisions and improve the system performance under different practical and realistic settings.
Abstract: Dynamic Point Selection (DPS) is a key Downlink (DL) Coordinated Multipoint (CoMP) technique that switches the serving data Transmission Point (TP) of a User Equipment (UE) dynamically among the UE's cooperating set of TPs without requiring a cell handover. It provides performance improvement due to TP selection-diversity gains and dynamic UE load balancing benefits. In this paper, we propose two simple DPS schemes that take into account the UE's current channel conditions and the cell loading conditions to make the UE's TP switching decisions. We show that these schemes improve the system performance under different practical and realistic settings, such as cell handover margin, TP switching periods, bursty traffic conditions, and cooperation cell cluster sizes.

Proceedings ArticleDOI
17 Apr 2014
TL;DR: The objective is not to meet short term needs but to fulfill the future throughput requirements by revamping the transport network in the mobile backhaul, and moving most of the current LTE network elements to the cloud.
Abstract: The major challenge in future mobile networks is how to improve throughput to support the increasing demand in traffic. Software Defined Networking (SDN) is the new technology proposed for this, but so far it has only been considered for integration into the current 4G architecture without major changes. In contrast, we present a mobile network beyond 4G in which SDN is used in a more disruptive way to redesign the current architecture. The proposal includes a proper migration path to ensure a reasonable transition phase. The objective is not to meet short-term needs but to fulfill future throughput requirements by revamping the transport network in the mobile backhaul and moving most of the current LTE network elements to the cloud. In this approach we completely remove the core mobile network as it is known today. Instead, the mobile architecture beyond 4G consists of a simplified access network formed by base stations, i.e., eNodeBs, interconnected through a backhaul network composed of SDN switches managed from the cloud together with the rest of the LTE network elements.

Journal ArticleDOI
TL;DR: Different small cell deployment approaches are presented, either sharing the same frequency bands with the macrocell or using separate higher frequency bands; for each approach, the benefits of beamforming are discussed.
Abstract: Integrating the standard coverage of traditional macrocells with cells of reduced dimensions (i.e., small cells) is considered one of the main enhancements for Long Term Evolution systems. Small cells will be overlapped with macrocells and possibly also with each other in areas where wider coverage capability and higher throughput are needed. Such heterogeneous network deployment presents many potential benefits but also some challenges that need to be resolved. In particular, multi-antenna techniques and processing in the spatial domain will be key enabling factors to mitigate these issues. This article presents different small cell deployment approaches: sharing the same frequency bands with the macrocell, or using separated higher frequency bands. For each approach, the benefits of beamforming are discussed.

01 Jan 2014
TL;DR: An efficient D2D discovery mechanism is considered that takes advantage of the concept of proximity area (P-Area), a dynamic geographical region wherein UEs activate their D2D capabilities; results indicate that significant energy savings can be obtained using the considered discovery mechanism.
Abstract: Device-to-device (D2D) communications facilitate promising solutions for service optimization and spectrum/capacity efficiency in 3rd Generation Partnership Project (3GPP) networks. A key enabler for D2D communications and proximity services is the device discovery process. Currently, many aggressive peer discovery methods discussed in 3GPP could induce high energy consumption. Therefore, a critical objective in efficient D2D proximity service deployments is how to prolong the battery lifetime of User Equipments (UEs) by managing discovery cycles intelligently. In this paper, D2D discovery mechanisms discussed in 3GPP are studied in terms of energy consumption aspects. In particular, we consider an efficient D2D discovery mechanism that takes advantage of the concept of proximity area, a dynamic geographical region wherein UEs activate their D2D capabilities. The mechanism enables UEs to perform D2D discovery procedures only when there is a high probability of finding other UEs subscribed to the same service. The energy consumption profiles of various discovery mechanisms are evaluated using simulations and the results indicate that significant energy savings can be obtained using the considered discovery mechanism.