
Showing papers on "LTE Advanced" published in 2019


Journal ArticleDOI
TL;DR: A physical layer design is outlined for NR-MBMS, a system derived, with minor modifications, from the 5G-NR specifications and suitable for the transmission of linear TV and radio services in either single-cell or SFN operation.
Abstract: 3GPP LTE eMBMS Release 14 (Rel-14), also referred to as further evolved multimedia broadcast multicast service (FeMBMS) or enhanced TV (EnTV), is the first mobile broadband technology standard to incorporate a transmission mode designed to deliver terrestrial broadcast services from conventional high power high tower (HPHT) broadcast infrastructure. With respect to the physical layer, the main improvements in FeMBMS are the support of larger inter-site distances for single frequency networks (SFNs) and the ability to allocate 100% of a carrier’s resources to the broadcast payload, with self-contained signaling in the downlink. From the system architecture perspective, a receive-only mode enables free-to-air (FTA) reception with no need for an uplink or SIM card, thus receiving content without user equipment registration with a network. These functionalities are only available in the LTE-Advanced Pro specifications, as 5G new radio (NR), standardized in 3GPP from Rel-15, has so far focused entirely on unicast. This paper outlines a physical layer design for NR-MBMS, a system derived, with minor modifications, from the 5G-NR specifications and suitable for the transmission of linear TV and radio services in either single-cell or SFN operation. The paper evaluates the NR-MBMS proposition and compares it to LTE-based FeMBMS in terms of flexibility, performance, capacity, and coverage.
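
The SFN coverage argument above comes down to simple arithmetic: echoes from other sites must arrive within the cyclic prefix (CP) to combine constructively, so the CP length caps the inter-site distance. A minimal sketch of that calculation (the 200 µs CP of the 1.25 kHz numerology is the Rel-14 FeMBMS dedicated-carrier value; the comparison figure is the ordinary LTE extended CP):

```python
# Maximum SFN inter-site distance supported by a given cyclic prefix:
# echoes arriving within the CP duration combine constructively,
# so roughly ISD_max ~ c * T_cp.
C = 3e8  # speed of light, m/s

def max_sfn_isd_km(cp_us: float) -> float:
    """Rough upper bound on SFN inter-site distance for a CP of cp_us microseconds."""
    return C * cp_us * 1e-6 / 1e3

# Rel-14 FeMBMS dedicated-carrier numerology: 1.25 kHz subcarrier spacing, 200 us CP.
print(max_sfn_isd_km(200))    # ~60 km
# Ordinary LTE extended CP (~16.67 us) for comparison:
print(max_sfn_isd_km(16.67))  # ~5 km
```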

78 citations


Journal ArticleDOI
TL;DR: A low-complexity iterative suboptimal algorithm, the BTFA-based joint computation offloading and resource allocation algorithm (FAJORA), is proposed to solve the computation offloading problem in a mixed fog and cloud computing system.
Abstract: In order to enable low-latency computation-intensive applications for mobile user equipments (UEs), computation offloading becomes critically necessary. We tackle the computation offloading problem in a mixed fog and cloud computing system, which is composed of a Long Term Evolution-Advanced (LTE-A) small-cell based fog node, a powerful cloud center, and a group of UEs. The optimization problem is formulated as a mixed-integer non-linear programming problem, and through a joint optimization of offloading decision making, computation resource allocation, resource block (RB) assignment, and power distribution, the maximum delay among all the UEs is minimized. Due to its mixed combinatorial nature, we propose a low-complexity iterative suboptimal algorithm called the BTFA-based joint computation offloading and resource allocation algorithm (FAJORA) to solve it. In FAJORA, offloading decisions are first obtained via the binary tailored fireworks algorithm; computation resources are then allocated by a bisection algorithm. Limited by the uplink LTE-A constraints, we allocate feasible RB patterns instead of RBs, and then distribute transmit power among the RBs of each pattern, where Lagrangian dual decomposition is adopted. Since one UE may be allocated multiple feasible patterns, we propose a novel heuristic algorithm for each UE to extract the optimal pattern from its allocated patterns. Simulation results verify the convergence of the proposed iterative algorithms and show that significant performance gains can be obtained compared with other algorithms.
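
The bisection step lends itself to a compact illustration. The sketch below is not the paper's formulation; it assumes a simplified delay model (a fixed per-UE upload delay plus CPU time on a shared fog node of capacity F) and finds the minimum achievable max-delay bound by bisection, exploiting the monotone feasibility test:

```python
def feasible(T, tx_delay, cycles, F):
    """True if every UE can finish within T: UE i needs a CPU share of at
    least cycles[i] / (T - tx_delay[i]) cycles/s once its upload arrives."""
    need = 0.0
    for d, c in zip(tx_delay, cycles):
        if T <= d:
            return False
        need += c / (T - d)
    return need <= F

def min_max_delay(tx_delay, cycles, F, tol=1e-6):
    n = len(cycles)
    lo = max(tx_delay)                                         # cannot beat pure upload time
    hi = max(d + n * c / F for d, c in zip(tx_delay, cycles))  # equal CPU split is feasible
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if feasible(mid, tx_delay, cycles, F):
            hi = mid
        else:
            lo = mid
    return hi

# Three offloaded tasks: upload delays (s), CPU demands (cycles), 1 GHz fog node.
# All numbers are illustrative.
print(min_max_delay([0.01, 0.02, 0.05], [2e8, 5e8, 1e8], F=1e9))
```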

59 citations


Journal ArticleDOI
TL;DR: A quality-of-service aware scheme based on the existing handover procedures is proposed to support real-time vehicular services, and a case study based on a realistic vehicle mobility pattern for the Luxembourg scenario is carried out.
Abstract: Driven by the increasing number of connected vehicles and related services, powerful communication and computation capabilities are needed for vehicular communications, especially for real-time and safety-related applications. Cellular networks, comprising radio access technologies including the current Long-Term Evolution (LTE), LTE-Advanced, and the forthcoming 5th generation mobile communication systems, cover large areas and can provide high-data-rate and low-latency communication services to mobile users; they are considered the most promising access technology to support real-time vehicular communications. Meanwhile, fog is an emerging architecture for computing, storage, and networking, in which fog nodes can be deployed at base stations to deliver cloud services close to vehicular users. In fog computing-enabled cellular networks, mobility is one of the most critical challenges for vehicular communications to maintain service continuity and to satisfy the stringent service requirements, especially when the computing and storage resources are limited at the fog nodes. Service migration, relocating services from one fog server to another in a dynamic manner, has been proposed as an effective solution to the mobility problem. To support service migration, both computation and communication techniques need to be considered. Given the importance of protocol design to support the mobility of the vehicles and maintain high network performance, in this paper we investigate service migration in fog computing-enabled cellular networks. We propose a quality-of-service aware scheme based on the existing handover procedures to support real-time vehicular services. A case study based on a realistic vehicle mobility pattern for the Luxembourg scenario is carried out, where the proposed scheme, as well as the benchmarks, are compared by analyzing latency and reliability as well as migration cost.

49 citations


Journal ArticleDOI
TL;DR: Novel models to estimate PAD, PLR, and energy consumption for MIDs under the group paging mechanism are devised, and the proposed approach is shown to consume significantly less energy than the random grouping approach.
Abstract: The latest evolution of cellular technologies, i.e., 5G including Long Term Evolution-Advanced (LTE-A) Pro and 5G New Radio, promises enhancements to mobile technologies for the Internet of Things (IoT). Despite 5G’s vision to cater to IoT, some aspects are still optimized for human-to-human (H2H) communication. More specifically, the existing group paging mechanism in LTE-A Pro has not yet clearly defined approaches to group mobile IoT devices (MIDs) having diverse characteristics, such as discontinuous reception (DRX) and data transmission frequency (DTF), with various mobility patterns. Inappropriate grouping of MIDs may lead to increased energy consumption and degraded quality of service, especially in terms of packet arrival delay (PAD) and packet loss rate (PLR). Therefore, in this paper, we devise novel models to estimate PAD, PLR, and energy consumption for MIDs, specifically for the group paging mechanism. Based on the proposed models, we formulate an optimization problem with the objective of minimizing the energy consumption of MIDs while providing the required PAD and PLR. The nonlinear convex optimization problem addressed herein is solved using the Lagrangian approach, and the Karush–Kuhn–Tucker conditions are applied to derive the optimal characteristics, namely DRX and DTF, for MIDs to join a group. The extensive numerical results verify the effectiveness of the proposed method, and the mathematical models demonstrate the superiority of our approach over the random grouping approach in terms of the energy consumption of MIDs.
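
The grouping criterion rests on a classic DRX trade-off: longer cycles save energy but stretch the packet arrival delay. A toy model of that trade-off, picking the longest cycle whose expected PAD fits the budget (the wake-up cost, sleep power, and the mean-PAD ≈ T/2 approximation are illustrative assumptions, not the paper's derived expressions):

```python
# Illustrative trade-off behind the grouping criterion: a device on a DRX
# cycle of length T wakes once per cycle (cost E_WAKE) and, under steady
# arrivals, a downlink packet waits T/2 on average. Choose the longest
# cycle that meets the PAD budget -- the shape the paper's KKT conditions
# formalize exactly.
E_WAKE = 2.0     # mJ per wake-up (assumed)
P_SLEEP = 0.01   # mW sleep power (assumed)

def avg_power_mw(T):
    return E_WAKE / T + P_SLEEP

def best_drx(cycles, pad_budget):
    ok = [T for T in cycles if T / 2 <= pad_budget]  # mean PAD ~ T/2
    return max(ok) if ok else min(cycles)

cycles = [0.32, 0.64, 1.28, 2.56, 5.12, 10.24]  # s, LTE-like DRX cycle values
T = best_drx(cycles, pad_budget=2.0)
print(T, avg_power_mw(T))  # -> 2.56 s, lowest average power within the budget
```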

37 citations


Journal ArticleDOI
TL;DR: It is observed that ATSC 3.0 outperforms both eMBMS solutions, i.e., MBMS over Single Frequency Networks (MBSFN) and Single-Cell PTM (SC-PTM), in terms of spectral efficiency, peak data rate, and mobility, among others.
Abstract: This paper provides a detailed performance analysis of the physical layer of two state-of-the-art point-to-multipoint (PTM) technologies: evolved Multimedia Broadcast Multicast Service (eMBMS) and Advanced Television Systems Committee - Third Generation (ATSC 3.0). The performance of these technologies is evaluated and compared using link-level simulations, considering relevant identified scenarios. A selection of Key Performance Indicators for the International Mobile Telecommunications 2020 (IMT-2020) evaluation process has been considered. Representative use cases are also aligned to the test environments as defined in the IMT-2020 evaluation guidelines. It is observed that ATSC 3.0 outperforms both eMBMS solutions, i.e., MBMS over Single Frequency Networks (MBSFN) and Single-Cell PTM (SC-PTM), in terms of spectral efficiency, peak data rate, and mobility, among others. This performance evaluation serves as a benchmark for comparison with a potential 5G PTM solution.

33 citations


Proceedings ArticleDOI
12 Jun 2019
TL;DR: This work presents an experimental performance study on the wireless communication of a quadrocopter connected to an LTE-Advanced network; measurements of TCP traffic analyze how the received power level, signal-to-interference ratio, and throughput depend on flight height.
Abstract: This work presents an experimental performance study on the wireless communication of a quadrocopter connected to an LTE-Advanced network. Measurements of TCP traffic analyze how the received power level, signal-to-interference ratio, and throughput depend on the flight height. An average throughput of 20 Mb/s in the downlink and 40 Mb/s in the uplink is achieved at 150 m. We also show how the number of line-of-sight links to base stations rises with height and leads to an increased handover rate.

31 citations


Journal ArticleDOI
TL;DR: A closed-form expression and an efficient recursion to obtain this joint PDF are derived, and the results validate the effectiveness of the recursive formulation, showing that its computational cost is considerably lower than that of other related approaches.
Abstract: The deployment of machine-type communications (MTC) together with cellular networks has a great potential to create the ubiquitous Internet-of-Things environment. Nevertheless, the simultaneous activation of a large number of MTC devices (named UEs herein) is a situation difficult to manage at the evolved Node B (eNB). The knowledge of the joint probability distribution function (PDF) of the number of successful and collided access requests within a random access opportunity (RAO) is a crucial piece of information for contriving congestion control schemes. A closed-form expression and an efficient recursion to obtain this joint PDF are derived in this paper. Furthermore, we exploit this PDF to design estimators of the number of contending UEs in an RAO. Our numerical results validate the effectiveness of our recursive formulation and show that its computational cost is considerably lower than that of other related approaches. In addition, our estimators can be used by the eNBs to implement highly efficient congestion control methods.
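
The quantity being characterized is easy to pin down by simulation: each of n contending UEs picks one of R preambles uniformly; preambles chosen exactly once are successes, preambles chosen twice or more are collisions. A Monte Carlo stand-in for the paper's closed-form/recursive joint PMF, useful as a sanity check (parameters illustrative):

```python
import numpy as np
from collections import Counter

def joint_pmf_mc(n_ues, n_preambles, trials=100_000, rng=np.random.default_rng(1)):
    """Monte Carlo estimate of P(S = s, C = c): s preambles picked by exactly
    one UE (successes), c preambles picked by two or more UEs (collisions)."""
    pmf = Counter()
    for _ in range(trials):
        picks = rng.integers(0, n_preambles, size=n_ues)
        counts = np.bincount(picks, minlength=n_preambles)
        s = int(np.sum(counts == 1))
        c = int(np.sum(counts >= 2))
        pmf[(s, c)] += 1
    return {k: v / trials for k, v in pmf.items()}

# 54 contention preambles is the usual LTE configuration; 10 active UEs assumed.
pmf = joint_pmf_mc(n_ues=10, n_preambles=54)
print(sorted(pmf.items(), key=lambda kv: -kv[1])[:5])  # most likely (s, c) pairs
```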

31 citations


Journal ArticleDOI
TL;DR: A new RRM strategy called Fractional Frequency Reuse with Three Sectors and Three Layers (FFR-3SL) technique is proposed, which efficiently utilizes the radio resources and alleviates the effects of CCI in LTE-A HetNet systems, thereby improving the system performance.
Abstract: The Heterogeneous Network (HetNet) has emerged as one of the most promising developments toward achieving the targets of Long Term Evolution-Advanced (LTE-A) systems. However, Co-Channel Interference (CCI) is one of the critical challenges of HetNets, as it degrades the overall performance of a system. Therefore, an appropriate Radio Resource Management (RRM) mechanism is required to deploy and expand HetNets properly. In this paper, a new RRM strategy called the Fractional Frequency Reuse with Three Sectors and Three Layers (FFR-3SL) technique is proposed. FFR-3SL efficiently utilizes the radio resources and alleviates the effects of CCI in LTE-A HetNets, thereby improving system performance. To implement the proposed strategy, the entire macrocell coverage area is segmented into three sectors and three layers, while the total bandwidth is divided into seven subbands. Subsequently, the subbands are distributed among femtocells and macrocells by employing the proposed algorithm, so that the co-tier and cross-tier interferences are managed up front. A Monte Carlo simulation is performed to evaluate and compare the performance of the proposed method with existing methods. The simulation results show that the proposed method achieves higher throughput and better capacity in LTE-A HetNet systems. Furthermore, the efficiency of the system is improved with regard to user satisfaction in terms of signal-to-interference-and-noise ratio (SINR) values.
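
The abstract fixes the partition (three sectors × three layers, seven subbands) but not the exact assignment. One plausible mapping consistent with those counts, shown here purely as a hypothetical reconstruction, not the paper's actual algorithm:

```python
# Hypothetical (sector, layer) -> subband map consistent with "3 sectors x
# 3 layers, 7 subbands": the interior layer reuses one common subband in
# every sector, while each sector gets its own subband in the middle and
# outer layers (1 + 3 + 3 = 7). The paper's exact assignment may differ.
def subband(sector: int, layer: str) -> int:
    if layer == "inner":           # low interference near the macro eNB
        return 0                   # shared by all three sectors
    if layer == "middle":
        return 1 + sector          # subbands 1..3
    if layer == "outer":
        return 4 + sector          # subbands 4..6
    raise ValueError(layer)

for s in range(3):
    print(s, [subband(s, l) for l in ("inner", "middle", "outer")])
# Femtocells in a region would then use the complement of the local macro
# subband, keeping cross-tier interference off the macro's allocation.
```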

29 citations


Journal ArticleDOI
TL;DR: An overview of user effect on mobile terminal antennas in the last seven years is provided, followed by a discussion on the user's effects on MIMO parameters, channel capacity, and millimeter-wave applications prior to the presentation of recent technologies that are aimed at improving the antenna performance.
Abstract: A dramatic evolution can be observed in cellular wireless communication, from 2G (the global system for mobile communications) toward high-data-rate systems such as 3G (wideband code-division multiple access), 3.5G (high-speed packet access), and 4G (long-term evolution (LTE) and LTE Advanced), currently converging in a more optimized and compact 5G (orthogonal frequency-division multiple access) form. The main challenges in designing mobile terminal antennas are the compact size requirement of the built-in structure, multiband capabilities, and the number of integrated antennas that form a multiple-input multiple-output (MIMO) terminal system. Moreover, they are required to fulfill all performance and safety standards. For mobile antennas, radiation efficiency may be affected by interaction with the user's head and hand, which consequently affects the correlation of MIMO antenna systems. This paper provides an overview of user effects on mobile terminal antennas over the last seven years. An overview of the Cellular Telecommunications and Internet Association specifications for mobile terminal antennas will first be explained. This is followed by a discussion of the user's effects on MIMO parameters, channel capacity of mobile terminal antennas, and millimeter-wave applications, prior to the presentation of recent technologies that aim to improve antenna performance. Finally, a perspective and potential future investigations on mobile terminal antennas will be discussed in the conclusion.

28 citations


Journal ArticleDOI
TL;DR: In this paper, a new Licensed-Assisted Access DRX mechanism (LAA-DRX) over LTE networks is proposed, which can achieve almost 4 percent higher power savings and up to 58 percent reduction in resource usage.
Abstract: Ever-increasing demand for data rates and subscriber penetration is resulting in continuous growth of mobile data traffic. To meet this exponential growth in data traffic, 3GPP Release 13 enabled the operation of Long Term Evolution (LTE) in the unlicensed band, termed Licensed-Assisted Access (LAA). As the unlicensed spectrum is mainly used by WiFi, the LTE evolved NodeB (eNB) has to perform a Listen Before Talk (LBT) procedure to access the unlicensed channel. If the unlicensed spectrum is occupied by other services, the User Equipment (UE) should remain active and wait for the unlicensed channel to become idle. The UE consumes high power by remaining active when the unlicensed channel is not available. The UE's energy consumption could be reduced by configuring Discontinuous Reception (DRX) in the unlicensed band, similar to DRX in LTE. In this article, we introduce and analyze a new Licensed-Assisted Access DRX mechanism (LAA-DRX) over LTE networks. Using our novel four-state semi-Markov model, we give a probabilistic estimation of the power saving and wake-up latency associated with our proposed LAA-DRX process. Mathematical analysis and simulation results, using real wireless traces, show that compared to the existing LTE DRX process, LAA-DRX can achieve almost 4 percent higher power savings and up to 58 percent reduction in resource usage.
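
The long-run power figure of a semi-Markov model follows from the embedded chain's stationary distribution weighted by the mean holding times. A generic sketch of that computation, with made-up states, holding times, and powers standing in for the paper's parameters:

```python
import numpy as np

# Generic semi-Markov time-fraction computation with illustrative states:
# Active, LBT-wait (unlicensed channel busy), Light sleep, Deep sleep.
# Transition probabilities, holding times, and powers are assumptions.
P = np.array([             # embedded-chain transition probabilities
    [0.0, 0.4, 0.5, 0.1],  # from Active
    [0.6, 0.0, 0.4, 0.0],  # from LBT-wait
    [0.5, 0.1, 0.0, 0.4],  # from Light sleep
    [0.8, 0.2, 0.0, 0.0],  # from Deep sleep
])
tau = np.array([10.0, 8.0, 20.0, 80.0])      # mean holding times (ms)
power = np.array([500.0, 120.0, 20.0, 2.0])  # mW per state

# Stationary distribution of the embedded chain: pi = pi @ P.
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmin(np.abs(w - 1))])
pi /= pi.sum()

frac = pi * tau / (pi * tau).sum()  # long-run fraction of time in each state
print(frac, "mean power:", frac @ power, "mW")
```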

27 citations


Journal ArticleDOI
TL;DR: This paper employs four enabler SG applications for remote power control, namely demand response, advanced metering infrastructure, video surveillance, and wide area situational awareness, for the implementation of remote power-substation control, and evaluates the results in terms of throughput, fairness index, and spectral efficiency.
Abstract: In recent years, smart grid (SG) applications have proven to be a sophisticated technology of immense capability, comfort, and efficiency, not only for the power generation sector but also for other industrial purposes. The term SG describes a set of systems customized to rapidly and automatically monitor user demand, restore power, isolate faults, and maintain stability for more efficient transmission, generation, and delivery of electric power. Nevertheless, a quality of service (QoS) guarantee is essential for the networking technology used in the different stages and communications of the SG, and efficient distribution may be drastically obstructed as the number of application sensors increases. Undoubtedly, receiving and transmitting this information requires a two-way, high-speed, reliable, and secure communication infrastructure. In this paper, we propose a scheduling approach that guarantees the efficient utilization of existing network resources and satisfies the sensors’ demands sufficiently. The proposed approach is based on a hierarchical adaptive weighting method, which helps to overcome the issues of the studied scheduling approaches and is intended to aid SG sensor applications based on their QoS demands. We employ four enabler SG applications for remote power control, namely demand response, advanced metering infrastructure, video surveillance, and wide area situational awareness, for the implementation of remote power-substation control. Moreover, a cooperative game theory technique is incorporated into the solution for the optimal estimation and allocation of bandwidth among different sensors. The results are evaluated in terms of throughput, fairness index, and spectral efficiency, and compared with well-known scheduling approaches such as exponential/proportional fairness (EXP/PF), best channel quality indicator (Best-CQI), and exponential rules (EXP-Rule). The results demonstrate that the proposed approach provides better fairness-index performance by 25, 66, and 68% compared to EXP/PF, EXP/RULE, and Best-CQI, respectively.
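
At its core, a weighting-based scheduler of this kind can be sketched as weight-proportional resource-block sharing capped by per-class demand, with surplus redistributed. The weights and demands below are illustrative placeholders, not the paper's hierarchical values:

```python
def allocate_rbs(total_rbs, classes):
    """classes: {name: (weight, demand_rbs)} -> {name: rbs}.
    Weight-proportional shares, capped at each class's demand; any surplus
    from capped classes is redistributed among the rest."""
    alloc = {c: 0.0 for c in classes}
    active = set(classes)
    remaining = float(total_rbs)
    while remaining > 1e-9 and active:
        wsum = sum(classes[c][0] for c in active)
        done = set()
        for c in active:
            w, demand = classes[c]
            give = remaining * w / wsum          # proportional share this round
            alloc[c] += min(give, demand - alloc[c])
            if alloc[c] >= demand - 1e-9:
                done.add(c)                      # demand met: drop from sharing
        remaining = total_rbs - sum(alloc.values())
        active -= done
        if not done:                             # nobody capped: shares are final
            break
    return {c: round(a) for c, a in alloc.items()}

# Four SG application classes (weights and RB demands assumed for illustration).
demo = {"demand_response": (1.0, 20), "AMI": (2.0, 30),
        "video": (4.0, 60), "WASA": (3.0, 40)}
print(allocate_rbs(100, demo))
```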

Journal ArticleDOI
TL;DR: This paper promotes the use of a mechanism called Distributed Queueing (DQ), aided by a MAC-layer load estimation technique, to effectively resolve contention among MTDs and improve delay performance with minimal impact on the LTE access procedure and air interface.
Abstract: Thanks to their ubiquitous coverage, Long-Term Evolution (LTE) networks are considered the most promising enabler for the massive Machine-Type Communications (mMTC) service in the fifth-generation (5G) context. The LTE standard, however, was not designed for mMTC, and scenarios where a massive population of Machine-Type Devices (MTDs) tries to access a network over a short period may overload the Random Access CHannel (RACH). Furthermore, there is no mechanism to prioritize urgent MTDs in such overload situations. The baseline Access Class Barring (B-ACB) scheme is thus adopted by the 3GPP to address both issues, at a substantial cost in access delay. This paper follows a different approach and proposes a complete solution to the two main issues of cellular mMTC. We promote the use of a mechanism called Distributed Queueing (DQ), aided by a MAC-layer load estimation technique, to effectively resolve contention among the MTDs and improve delay performance with minimal impact on the LTE access procedure and air interface. Then, by exploiting information related to the congestion level from the DQ process, a dynamic access prioritization scheme can be realized without additional signaling overhead. Computer simulation under an mMTC-oriented traffic model shows that our framework outperforms B-ACB in terms of both access delay and energy consumption when all devices are of equal importance. On the other hand, when devices of different priorities coexist, our framework with proper tuning also offers lower delay for all classes and lower overall energy consumption compared to both the baseline and dynamic ACB solutions in massive bursty access scenarios.
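
DQ resolves collisions by tree splitting: a collided group is re-queued and re-contends in m sub-slots until every device ends up alone in a slot. A minimal simulation of that core mechanism (m = 3 and the traffic sizes are assumptions; the full DQ protocol adds data queues and feedback signaling not modeled here):

```python
import random

def dq_resolution_frames(n_devices, m=3, rng=random.Random(7)):
    """Count contention frames needed to resolve n devices with m-ary tree
    splitting, collided subgroups re-queued FIFO (the core of DQ)."""
    queue = [n_devices]  # each entry: number of devices contending together
    frames = 0
    while queue:
        group = queue.pop(0)
        frames += 1
        slots = [0] * m                 # devices pick one of m sub-slots
        for _ in range(group):
            slots[rng.randrange(m)] += 1
        for k in slots:
            if k >= 2:                  # collision: subgroup resolved later
                queue.append(k)
            # k == 1: success; k == 0: idle sub-slot
    return frames

for n in (10, 50, 200):
    print(n, dq_resolution_frames(n))   # frames grow roughly linearly in n
```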

Journal ArticleDOI
Ruhui Ma1, Jin Cao1, Dengguo Feng, Hui Li1, Yinghui Zhang, Xixiang Lv1 
01 May 2019
TL;DR: The security and performance evaluations demonstrate that the proposed secure handover authentication scheme can meet various security requirements, including perfect forward/backward secrecy and privacy preservation, while achieving ideal efficiency.
Abstract: Ensuring secure and seamless handovers within the Evolved Universal Terrestrial Radio Access Network (E-UTRAN) is a key issue in LTE-A networks. Due to the introduction of the Home Evolved NodeB (HeNB), there are three different access modes for the HeNB, and the mobility scenarios between the eNB and the HeNB in LTE-A networks are rather complicated. According to the 3GPP committee, different handover procedures are executed for different mobility scenarios, which makes the overall system complex. Thus, it is a key point to propose a unified and secure handover scheme that fits all the mobility scenarios in LTE-A networks. In this paper, we propose a secure handover authentication scheme based on the certificateless signcryption technique. Our proposed scheme achieves a unified and secure handover procedure without sacrificing efficiency. The security and performance evaluations demonstrate that our proposed scheme can meet various security requirements, including perfect forward/backward secrecy and privacy preservation, while achieving ideal efficiency.

Proceedings ArticleDOI
01 Feb 2019
TL;DR: A 28nm CMOS LTE-A transceiver is presented, capable of supporting up to 4 inter-band downlink CA (or 2-carrier 4×4 downlink MIMO) and 2 inter-band uplink CA concurrently, while adopting ADPLLs for RX and TX with 256-QAM capability.
Abstract: With the increasing popularity of social media, artificial intelligence (AI), and mobile computing, higher-data-rate wireless communication is required. In the existing limited 4G spectrum, 3GPP standards have defined several methods to increase the data rate, e.g., carrier aggregation (CA), high-order modulation (HOM), and MIMO techniques; however, this comes at the cost of extra power consumption and potential sensitivity degradation due to the spurious components generated by CA. In addition, high-power user equipment (HPUE) extends the coverage range and uplink throughput, but introduces more stringent linearity requirements for transmitter design. In this work, we present a 28nm CMOS LTE-A transceiver, capable of supporting up to 4 inter-band downlink (or 2-carrier 4×4 downlink MIMO) and 2 inter-band uplink CA concurrently, while adopting ADPLLs for RX and TX with 256-QAM capability. The techniques adopted to achieve these features are described in this paper.

Journal ArticleDOI
TL;DR: The proposed performance and security enhanced (PSE-AKA) protocol follows cocktail therapy to generate the authentication vectors, which improves performance in terms of computation and communication overhead and reduces the network's bandwidth consumption.
Abstract: In the mobile telecommunication network, Long Term Evolution (LTE) is the most successful technological development for industrial services and applications. The Evolved Packet System based Authentication and Key Agreement (EPS-AKA) was the first protocol proposed to authenticate the communicating entities in the LTE network. However, the EPS-AKA protocol suffers from the single key exposure problem and is susceptible to various security attacks. The protocol also incurs high bandwidth consumption and computation overhead over the communication network. Moreover, it does not support Internet of Things (IoT) based applications and has several security issues, such as privacy violation of the user identity and key set identifier (KSI). To resolve these problems, various AKA protocols have been proposed by researchers. Unfortunately, none of these protocols succeeds in eliminating both the privacy-preservation and single key exposure problems from the communication network. In this paper, we propose the performance and security enhanced (PSE-AKA) protocol for IoT enabled LTE/LTE-A networks. The proposed protocol follows cocktail therapy to generate the authentication vectors, which improves performance in terms of computation and communication overhead. The protocol preserves the privacy of objects, protects the KSI, and avoids the identified attacks on the communication network. The formal verification and security analysis of the proposed protocol are carried out using BAN logic and the AVISPA tool, respectively. The security analysis shows that the protocol achieves its security goals and is secure against various known attacks. Finally, the performance analysis shows that the proposed protocol incurs less overhead and reduces the bandwidth consumption of the network.

Journal ArticleDOI
TL;DR: The demand for linear and highly efficient RF amplifiers has continued to rise without showing signs of stopping, as the world looks to the implementation of 5G.
Abstract: The demand for linear and highly efficient RF amplifiers has continued to rise without showing signs of stopping, as the world looks to the implementation of 5G. Modern communication signals gave rise to the demand, as the desire to efficiently utilize the limited electromagnetic spectrum led to widespread use of amplitude- and phase-modulated signals, e.g., LTE and LTE Advanced, that use carrier aggregation to achieve broader bandwidths. The defense industry may provide even more applications, as electronic warfare techniques make use of multitone and sometimes noise-like signals that have statistics similar to communication signals. Radars are required to limit emissions in adjacent bands, but traditional rectangular pulses have high out-of-band emissions. Gaussian pulse shaping can be used to improve spectral efficiency, limiting emissions and sidelobes, while adding amplitude modulation [1].
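
The spectral benefit of Gaussian shaping is easy to see numerically: compare the out-of-band spectrum of a rectangular pulse with a Gaussian pulse of similar duration. A small sketch (all pulse parameters are illustrative):

```python
import numpy as np

fs, T = 1e6, 1e-3                            # sample rate (Hz), pulse width (s)
t = np.arange(-T, T, 1 / fs)
rect = (np.abs(t) <= T / 2).astype(float)    # traditional rectangular pulse
gauss = np.exp(-0.5 * (t / (T / 3)) ** 2)    # Gaussian-shaped pulse, sigma = T/3

def spectrum_db(x, nfft=1 << 14):
    """Normalized amplitude spectrum in dB."""
    X = np.fft.fftshift(np.fft.fft(x, nfft))
    return 20 * np.log10(np.abs(X) / np.abs(X).max() + 1e-12)

f = np.fft.fftshift(np.fft.fftfreq(1 << 14, 1 / fs))
oob = np.abs(f) > 1.2e3   # just past the rectangular pulse's first null (1/T)
print("rect  peak OOB:", spectrum_db(rect)[oob].max())   # ~ -13 dB sinc sidelobe
print("gauss peak OOB:", spectrum_db(gauss)[oob].max())  # ~ -27 dB, falling fast
```

The rectangular pulse's sidelobes decay slowly (its spectrum is a sinc), while the Gaussian's roll off super-exponentially, which is exactly the adjacent-band emission improvement described above, at the cost of a slightly wider main lobe and added amplitude modulation.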

Proceedings ArticleDOI
03 Jun 2019
TL;DR: LTE on Mars (namely, LTE-M) would provide a robust and flexible communication infrastructure, characterized by large bandwidth availability for rover and lander communications, and suitable also for efficient human-to-human data exchange when manned missions are planned.
Abstract: In recent years, the efforts spent by national and international space agencies to promote Mars exploration amount to a true race, whose final goal is a manned mission, with the active participation of human personnel in situ. In this paper, we deal with the communication aspects of Martian missions. The communication tasks of a space mission are twofold: first, the different unmanned (and, in the near future, manned) exploration entities should exchange information among themselves; then, the collected and processed data should be sent to Earth. For the inter-planetary connection, satellites will provide the necessary long-haul links. For the Martian planetary segment, the state-of-the-art solutions are mainly based on terrestrial WLAN and/or WSN standards, in order to let different sensors communicate at short range. Our solution is based on the deployment of a Martian wireless network infrastructure based on LTE. LTE on Mars (namely, LTE-M) would provide a robust and flexible communication infrastructure, characterized by large bandwidth availability for rover and lander communications, suitable also for efficient human-to-human data exchange when manned missions are planned. The adaptability of terrestrial LTE uplink and downlink transmission has been tested by simulating the RF Martian environment, with the most significant propagation impairments. The results focus on providing guidelines for a future actual LTE-M deployment.
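
A first-order sanity check for such a deployment is the link budget: with Mars' thin atmosphere, free-space path loss dominates, plus some margin for dust. A back-of-the-envelope sketch (the carrier frequency, cell ranges, and dust margin are assumptions, not values from the paper):

```python
import math

def fspl_db(d_m: float, f_hz: float) -> float:
    """Free-space path loss: 20*log10(4*pi*d*f/c)."""
    return 20 * math.log10(4 * math.pi * d_m * f_hz / 3e8)

f = 800e6               # sub-GHz LTE carrier assumed for range
DUST_MARGIN_DB = 2.0    # assumed allowance for dust-induced attenuation
for d_km in (1, 5, 10, 30):
    loss = fspl_db(d_km * 1e3, f) + DUST_MARGIN_DB
    print(f"{d_km:>3} km: {loss:.1f} dB path loss")
```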

Proceedings ArticleDOI
02 Jun 2019
TL;DR: This paper presents a broadband high-efficiency linear Doherty power amplifier (DPA) for LTE/LTE-A handset applications implemented in a 130nm SOI technology and packaged using flip-chip on a laminate substrate.
Abstract: This paper presents a broadband high-efficiency linear Doherty power amplifier (DPA) for LTE/LTE-A handset applications. The proposed PA is implemented in a 130nm SOI technology and packaged using flip-chip on a laminate substrate. At 2.5GHz, the PA shows a PAE of 44%, a power gain of 27dB, and an E-UTRA ACLR of −35dBc at 28dBm output power using a 10MHz LTE uplink signal without DPD. Moreover, the PA reaches a maximum FOM (PAE+|ACLR|) of 80 and maintains a FOM greater than 70 over a 31% fractional bandwidth around 2.3GHz without using DPD. When using DPD, the ACLR is improved by 10dB, leading to a maximum FOM of 90. To the best of our knowledge, these results represent the best linearity-efficiency performance among recently published LTE PAs for handset applications.

Journal ArticleDOI
06 Sep 2019
TL;DR: In this work, a hybrid of the firefly algorithm (FFA) and particle swarm optimization (PSO) is utilized to solve the 0-1 multiobjective knapsack problem; the proposed strategy achieves better QoS and lower interference in LTE-A resource allocation than state-of-the-art methods.
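
As a simplified stand-in for the FFA-PSO hybrid, a plain binary PSO on a single-objective 0-1 knapsack instance shows the mechanics (sigmoid-mapped velocities, personal and global bests). The instance, parameters, and single-objective reduction are all illustrative, not the paper's formulation:

```python
import random, math

random.seed(3)
values  = [10, 5, 15, 7, 6, 18, 3]   # toy knapsack instance
weights = [ 2, 3,  5, 7, 1,  4, 1]
CAP = 15

def fitness(bits):
    w = sum(wi for wi, b in zip(weights, bits) if b)
    return 0 if w > CAP else sum(vi for vi, b in zip(values, bits) if b)

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

n, dim, iters = 20, len(values), 100
X = [[random.random() < 0.5 for _ in range(dim)] for _ in range(n)]
V = [[0.0] * dim for _ in range(n)]
pbest = [x[:] for x in X]
gbest = max(pbest, key=fitness)[:]

for _ in range(iters):
    for i in range(n):
        for d in range(dim):
            # Standard PSO velocity update (bools act as 0/1 integers here).
            V[i][d] = (0.7 * V[i][d]
                       + 1.5 * random.random() * (pbest[i][d] - X[i][d])
                       + 1.5 * random.random() * (gbest[d] - X[i][d]))
            X[i][d] = random.random() < sigmoid(V[i][d])  # binary position
        if fitness(X[i]) > fitness(pbest[i]):
            pbest[i] = X[i][:]
            if fitness(X[i]) > fitness(gbest):
                gbest = X[i][:]

print(gbest, fitness(gbest))
```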

Book ChapterDOI
Qiyue Li1, Yuling Ge1, Yangzhao Yang, Yadong Zhu1, Wei Sun1, Jie Li1 
23 Feb 2019
TL;DR: The numerical experiment results show that, with limited resource blocks, the algorithm can maintain low packet-dropping ratios while achieving optimal energy efficiency for a large number of M2M nodes, compared with other typical counterparts.
Abstract: In future wireless communications, a large number of devices equipped with several different types of sensors will need to access networks with diverse quality of service requirements. In the cellular network evolution, Long Term Evolution-Advanced (LTE-A) networks have standardized Machine-to-Machine (M2M) features. Such M2M technology can provide a promising infrastructure for Internet of Things (IoT) sensing applications, which usually require real-time data reporting. However, LTE-A is not designed to directly support such low-data-rate devices with optimized energy efficiency, since it depends on core technologies of LTE that were originally designed for high-data-rate services. This paper investigates maximum-energy-efficiency transmission of M2M data packets over uplink channels in LTE-A networks. We formulate it as a joint problem of Modulation and Coding Scheme (MCS) assignment, resource allocation, and power control, which can be expressed as an NP-hard mixed-integer linear fractional programming problem. We then propose a global optimization scheme using the Charnes-Cooper transformation and Glover linearization. The numerical experiment results show that, with limited resource blocks, our algorithm can maintain low packet-dropping ratios while achieving optimal energy efficiency for a large number of M2M nodes, compared with other typical counterparts.
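
The Charnes-Cooper transformation the authors apply converts a linear-fractional objective into an ordinary linear program. A toy instance of that transformation (the numbers are illustrative, e.g. c as throughput coefficients and d as power coefficients), solved with scipy:

```python
import numpy as np
from scipy.optimize import linprog

# Maximize (c.x) / (d.x + beta)  s.t.  A x <= b, x >= 0.
# Charnes-Cooper: substitute y = t*x with t = 1/(d.x + beta), giving the LP
#   max c.y  s.t.  A y - b t <= 0,  d.y + beta*t = 1,  y >= 0, t >= 0.
c = np.array([3.0, 2.0]); d = np.array([1.0, 2.0]); beta = 1.0
A = np.array([[1.0, 1.0], [2.0, 1.0]]); b = np.array([4.0, 5.0])

n = len(c)
obj = np.concatenate([-c, [0.0]])            # linprog minimizes, so negate
A_ub = np.hstack([A, -b[:, None]])           # A y - b t <= 0
A_eq = np.concatenate([d, [beta]])[None, :]  # d.y + beta*t = 1
res = linprog(obj, A_ub=A_ub, b_ub=np.zeros(len(b)),
              A_eq=A_eq, b_eq=[1.0], bounds=[(0, None)] * (n + 1))

y, t = res.x[:n], res.x[n]
x = y / t                                    # recover the original variables
print("x* =", x, " ratio =", c @ x / (d @ x + beta))
```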

Proceedings ArticleDOI
30 Aug 2019
TL;DR: A measurement tool for the performance evaluation of wireless communications with drones over cellular networks is introduced; the Android software records various LTE parameters, evaluates the TCP and UDP throughput, and tracks the GPS position.
Abstract: We introduce a measurement tool for the performance evaluation of wireless communications with drones over cellular networks. The Android software records various LTE parameters, evaluates the TCP and UDP throughput, and tracks the GPS position. Example measurement results are presented.

Journal ArticleDOI
TL;DR: A channel-aware hybrid scheduling technique based on satellite-LTE spectrum sharing is introduced, and results clearly demonstrate the high performance of H-MUDoS in terms of spectral efficiency, in addition to improved quality-of-service satisfaction and capacity maximization.

Journal ArticleDOI
TL;DR: An intelligent QoS-aware bandwidth allocation solution is proposed for uplink traffic when the channel condition is uncertain, and the numerical results show that the proposed scheme provides reliable scheduling for real-time services without harming the performance of non-real-time QoS parameters.
Abstract: Nowadays, resource allocation is one of the major problems in cellular networks. Due to the increasing number of autonomous heterogeneous devices in future mobile networks, a proper scheduling scheme is required to provide adequate resources for the service flows. However, provisioning quality-of-service (QoS) for real-time applications under transport-delay constraints is hard to achieve without compromising other QoS parameters. In this paper, an intelligent QoS-aware bandwidth allocation solution is proposed for uplink traffic when the channel condition is uncertain. The system is designed around a specific maximum-latency assurance for real-time applications while maintaining fairness to the throughput of non-real-time services. The scheduling system employs a channel-aware, Kalman filter based interval type-2 fuzzy logic controller to estimate channel uncertainty and satisfy the QoS requirements of user equipment. Through simulations, the performance of the proposed system is analyzed in terms of optimal bandwidth allocation, bandwidth wastage, fairness, jitter, various delays, and throughput for delay-sensitive and delay-tolerant services. The numerical results show that the proposed scheme provides reliable scheduling for real-time services without harming the performance of non-real-time QoS parameters.
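
The estimation front-end of such a scheduler can be illustrated with the scalar Kalman recursion it would run per user: predict under a random-walk channel model, then correct with the noisy report. The signal model and noise variances below are assumptions, not the paper's parameters:

```python
import random

# Scalar Kalman filter tracking a slowly varying channel quality (e.g., SINR
# in dB) from noisy CQI-like reports. All model values are illustrative.
random.seed(11)
q, r = 0.05, 4.0      # process / measurement noise variances (assumed)
x_hat, p = 10.0, 1.0  # initial estimate and its variance
true = 10.0

for step in range(20):
    true += random.gauss(0, q ** 0.5)     # channel drifts slowly
    z = true + random.gauss(0, r ** 0.5)  # noisy report arrives
    p += q                                # predict (random-walk model)
    k = p / (p + r)                       # Kalman gain
    x_hat += k * (z - x_hat)              # correct toward the measurement
    p *= (1 - k)
    print(f"{step:2d}  z={z:6.2f}  est={x_hat:6.2f}  true={true:6.2f}")
```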

Proceedings ArticleDOI
01 Apr 2019
TL;DR: A new Fractional Frequency Reuse (FFR) method is proposed, where the whole coverage area of a macrocell is partitioned into three sectors and three layers, and the proposed FFR method divides the total bandwidth into seven subbands.
Abstract: Co-channel interference is a significant challenge in LTE-A heterogeneous networks (HetNets), as it degrades the overall performance of the system. Therefore, an adequate interference management strategy is necessary to deploy HetNets properly. In this study, a new Fractional Frequency Reuse (FFR) method is proposed, in which the whole coverage area of a macrocell is partitioned into three sectors and three layers. Moreover, the proposed FFR method divides the total bandwidth into seven subbands, which macrocells and femtocells efficiently utilize in different regions. A Monte Carlo simulation is run to evaluate and examine the system in terms of throughput. The simulation results show that the proposed method achieves higher throughput and improves the overall performance of the LTE-A HetNet system compared with existing FFR methods.

Journal ArticleDOI
TL;DR: This study proposes a novel scheme that integrates dynamic cell range expansion (CRE) and dynamic almost blank subframe (ABS) schemes based on 3GPP R10 eICIC, improving network system capacity and ensuring that offloaded users always have access to sufficient resources.
Abstract: With the ever-growing popularity of wireless multimedia services, wireless network base stations are increasingly likely to be overloaded in high population density areas. Establishing more picocells is an economical means for operators to reduce the user traffic load on base stations and increase system capacity; however, inter-cell interference is a challenge in such heterogeneous networks. This study proposes a novel scheme that integrates dynamic cell range expansion (CRE) and dynamic almost blank subframe (ABS) schemes based on 3GPP R10 eICIC. The proposed scheme automatically adjusts the CRE bias value according to the cell traffic load, thereby ensuring that users offloaded to a smaller cell always have access to sufficient resources to transmit data. Furthermore, the proposed ABS scheme determines the ABS ratio that yields the highest system capacity. The proposed CRE and ABS scheme improves network system capacity by more than 13.9% relative to 3GPP R10 and 11.7% relative to the competing algorithm.
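
The ABS-ratio selection step can be sketched as a one-dimensional search: under a proportional-fair utility, the best protected-subframe fraction lands near the offloaded users' share of the population. The rate model and user counts below are illustrative, not the paper's capacity model:

```python
import math

def pf_utility(r, n_macro, n_pico, rate_macro=40.0, rate_pico=15.0):
    """Proportional-fair utility when macro users share the (1-r) normal
    subframes and offloaded pico-edge users share the r protected ones."""
    return (n_macro * math.log((1 - r) * rate_macro / n_macro)
            + n_pico * math.log(r * rate_pico / n_pico))

candidates = [i / 8 for i in range(1, 8)]  # ABS patterns per 8-subframe period
best = max(candidates, key=lambda r: pf_utility(r, n_macro=30, n_pico=10))
print("best ABS ratio:", best)  # -> 0.25, matching n_pico / (n_macro + n_pico)
```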

Proceedings ArticleDOI
01 Jan 2019
TL;DR: In this paper, a three-tier LTE-Advanced air/ground HetNet is considered, with UAVs as user equipment (UEs) and base stations (BSs) co-existing with existing terrestrial nodes.
Abstract: Integrating unmanned aerial vehicles (UAVs) as user equipment (UE) and base stations (BSs) into an existing LTE-Advanced heterogeneous network (HetNet) can further enhance wireless connectivity and support emerging services. However, this requires effective configuration of system-level design parameters for interference management. This paper provides system-level insights into a three-tier LTE-Advanced air/ground HetNet, wherein the UAVs are deployed both as BSs and UEs and co-exist with existing terrestrial nodes. Moreover, this HetNet leverages cell range expansion (CRE), inter-cell interference coordination (ICIC), 3D beamforming, and enhanced support for UAVs. Through Monte-Carlo simulations, we compare the system-wide fifth-percentile spectral efficiency (5pSE) and coverage probability for different ICIC techniques, while jointly optimizing the ICIC and CRE parameters. Our results show that the reduced-power subframes defined in 3GPP Rel-11 can provide considerably better 5pSE and coverage probability than the almost blank subframes of 3GPP Rel-10.
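
The 5pSE metric itself is straightforward to compute from simulator output: map per-user SINR draws to Shannon spectral efficiency and take the 5th percentile. A sketch with synthetic SINR samples standing in for the Monte-Carlo drops:

```python
import numpy as np

rng = np.random.default_rng(0)
sinr_db = rng.normal(8, 6, size=10_000)  # per-user SINR draws (synthetic)
se = np.log2(1 + 10 ** (sinr_db / 10))   # Shannon spectral efficiency, bit/s/Hz

print("5pSE  :", np.percentile(se, 5), "bit/s/Hz")  # cell-edge performance
print("median:", np.percentile(se, 50), "bit/s/Hz")
```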

Journal ArticleDOI
TL;DR: This paper presents an adaptive self-organizing frequency reuse approach that is based on dividing every cell into two regions, namely, cell-inner and cell-outer regions; and minimizing the total interference encountered by all users in every region.
Abstract: Orthogonal frequency division multiple access (OFDMA) is extensively utilized for the downlink of cellular systems such as long term evolution (LTE) and LTE Advanced. In OFDMA cellular networks, orthogonal resource blocks can be used within each cell. However, the available resources are scarce, and so they have to be reused by adjacent cells in order to achieve high spectral efficiency, which leads to inter-cell interference (ICI). Thus, ICI coordination among neighboring cells is very important for the performance improvement of cellular systems. Fractional frequency reuse (FFR) has been widely adopted as an effective solution that improves the throughput of cell-edge users. However, FFR does not account for the varying nature of the channel; moreover, it favors cell-edge users at the expense of cell-inner users. Therefore, frequency reuse approaches that address these weak points of FFR are needed. In this paper, we present an adaptive self-organizing frequency reuse approach that is based on dividing every cell into two regions, namely cell-inner and cell-outer regions, and minimizing the total interference encountered by all users in every region. Unlike traditional FFR schemes, the proposed approach adjusts itself to the varying nature of the wireless channel. Furthermore, we derive the optimal value of the inner radius at which the total throughput of the inner users of the home cell is as close as possible to the total throughput of its outer users. Simulation results show that the proposed adaptive approach achieves better total throughput, for both the home cell and all 19 cells, than strict FFR, even when all cells are fully loaded, where other algorithms in the literature fail to outperform strict FFR. The improved throughput means that higher spectral efficiency can be achieved; i.e., the spectrum, the most precious resource in wireless communication, is utilized efficiently. In addition, the proposed algorithm can provide significant power savings, reaching 50% compared to strict FFR, while not penalizing throughput performance.
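
The inner-radius derivation can be mimicked numerically: as the radius grows, inner-region total throughput rises and outer-region throughput falls monotonically, so the balance point is found by bisection. The path-loss and rate model below is illustrative, not the paper's analytical derivation:

```python
import numpy as np

rng = np.random.default_rng(2)
R = 500.0                              # cell radius, m (assumed)
r_user = R * np.sqrt(rng.random(2000)) # users uniform over the disc

def rate(r):
    """Toy per-user Shannon rate from a log-distance path-loss model."""
    snr_db = 30 - 35 * np.log10(np.maximum(r, 1.0) / 10)
    return np.log2(1 + 10 ** (snr_db / 10))

def imbalance(r0):
    """Inner minus outer total throughput; increases monotonically in r0."""
    rr = rate(r_user)
    return rr[r_user <= r0].sum() - rr[r_user > r0].sum()

lo, hi = 1.0, R                        # imbalance(lo) < 0 < imbalance(hi)
for _ in range(40):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if imbalance(mid) < 0 else (lo, mid)
print("balanced inner radius ~", round((lo + hi) / 2, 1), "m")
```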

Journal ArticleDOI
TL;DR: A new QoS-aware downlink scheduling algorithm (QuAS) is proposed, which increases the QoS-fairness and overall throughput of edge users without causing a significant degradation in overall system throughput when compared with other schedulers in the literature.
Abstract: 4G/LTE-A (Long-Term Evolution Advanced) is the state-of-the-art wireless mobile broadband technology. It allows users to take advantage of high Internet speeds. It makes use of OFDM technology to offer high speed and provides the system resources in both the time and frequency domains. A scheduling algorithm running on the base station handles the allocation of these resources. In this paper, we investigate the performance of existing downlink scheduling algorithms in two ways. First, we look at the performance of the algorithms in terms of throughput and fairness metrics. Second, we suggest a new QoS-aware fairness criterion, which accepts that the system is fair if it can provide the users with the network traffic speeds that they demand, and we evaluate the performance of the algorithms according to this metric. We also propose a new QoS-aware downlink scheduling algorithm (QuAS) according to these two metrics, which increases the QoS-fairness and overall throughput of edge users without causing a significant degradation in overall system throughput when compared with other schedulers in the literature.
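
One way to turn the suggested QoS-fairness criterion into a number is to score each user by how much of its demanded rate it actually receives and average the scores; this is a plausible reading of the criterion, not necessarily the paper's exact formula:

```python
def qos_fairness(served, demanded):
    """Average per-user demand satisfaction, each capped at 1.0."""
    sat = [min(s / d, 1.0) for s, d in zip(served, demanded)]
    return sum(sat) / len(sat)

served   = [4.8, 1.0, 12.0, 0.4]  # Mb/s actually delivered (illustrative)
demanded = [5.0, 2.0,  8.0, 1.0]  # Mb/s each user's traffic demands
print(qos_fairness(served, demanded))  # 1.0 would mean every demand is met
```

Note the contrast with plain throughput fairness: the third user's surplus (12 over a demanded 8) earns no extra credit, so the metric rewards meeting demands rather than maximizing raw rate.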

Journal ArticleDOI
TL;DR: This work proposes a new random access channel scheme based on a Q-learning approach to reduce congestion on the RACH; it adaptively divides the available preambles between M2M and H2H devices in a way that provides acceptable service for the H2H devices and maximizes the number of active M2M devices.
Abstract: Due to the wide proliferation of the 3GPP long term evolution (LTE) and LTE-Advanced systems as the air interface for 4th generation (4G) wireless systems and beyond, telecommunication operators have become more interested in using the LTE infrastructure to meet the projected surge in demand for machine-to-machine (M2M) and IoT communications. As a result, the LTE-Advanced system must evolve to provide these services as an overlay over a network originally designed for human-to-human (H2H) communication. In this setup, M2M communication devices share the same random access channel (RACH) with the H2H communication devices. Since M2M communication is expected to be massive, the RACH becomes a new bottleneck in the LTE-A system. This triggered the need to enhance the operation of the RACH to support the low power and low data rates of M2M devices. The current standardized scheme for requesting access to the system is known to suffer from congestion and overloading in the presence of a huge number of devices. Accordingly, recent research points to the need for more efficient ways to manage the random access channel in such setups. Most previous works focused on solving the RACH congestion problem in the presence of M2M traffic alone, rather than with both M2M and H2H services. In this work, we propose a new random access channel scheme based on a Q-learning approach to reduce congestion. The scheme adaptively divides the available preambles between M2M and H2H devices in a way that provides acceptable service for the H2H devices and maximizes the number of active M2M devices. The adaptation is based on the current demand levels from both the H2H and M2M devices and the observed service levels. The results indicate that the proposed approach provides a high random-access success probability for both M2M and H2H devices, even with a huge number of M2M devices.
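
The preamble-splitting idea can be sketched as a stateless (bandit-style) Q-learning loop: actions are candidate M2M preamble shares out of LTE's 54 contention preambles, and the reward trades M2M successes against an H2H service floor. The traffic model and reward shaping are assumptions for illustration:

```python
import random

random.seed(5)
ACTIONS = [10, 18, 26, 34, 42]   # preambles reserved for M2M (rest serve H2H)
Q = {a: 0.0 for a in ACTIONS}
alpha, eps = 0.1, 0.1            # learning rate, exploration probability

def successes(n_devices, n_preambles):
    """Devices picking a preamble no one else picked succeed (one RAO)."""
    picks = [random.randrange(n_preambles) for _ in range(n_devices)]
    return sum(1 for p in set(picks) if picks.count(p) == 1)

for episode in range(5000):
    # Epsilon-greedy action selection over the preamble split.
    a = random.choice(ACTIONS) if random.random() < eps else max(Q, key=Q.get)
    m2m = successes(n_devices=random.randint(20, 60), n_preambles=a)
    h2h = successes(n_devices=random.randint(2, 8), n_preambles=54 - a)
    reward = m2m + (0 if h2h >= 2 else -10)  # penalize starving H2H
    Q[a] += alpha * (reward - Q[a])          # stateless Q-value update

print(Q)  # highest value marks the preferred M2M preamble share
```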

Journal ArticleDOI
TL;DR: A new definition of neighborhood relations was proposed based on the measurement report (MR) data in the actual network and the experimental results demonstrated that Greedy algorithm not only eliminates conflict and confusion completely, but also reduces the mod 3 interference.
Abstract: In recent years, interference has played an increasingly significant part in bulkier and denser Long Term Evolution (LTE/LTE-Advanced) networks. Though intra-cell interference is successfully improved by Orthogonal Frequency Division Multiple Access (OFDMA), inter-cell interference (ICI) could cause a degradation of throughput and significantly impact Signal-to-Noise-Ratio (SINR) in the downlink (DL) network. Physical Cell ID (PCI) planning, an effective approach to eliminate ICI, is required to reduce collision, confusion and mod $q$ interference, where $q=3$ for Single-Input Single-Output (SISO) system, and $q=6$ for Multiple-Input Multiple-Output (MIMO) system. In this study, a new definition of neighborhood relations was proposed based on the measurement report (MR) data in the actual network. Binary quadratic programming (BQP) model was built for PCI planning through a series of model deductions and mathematical proofs. Since BQP is known as NP-hard, a heuristic Greedy algorithm was proposed and its low complexity both in time and space can ensure large-scale computing. Finally, based on the raw data extracted from the actual SISO system network and the simulation calculation of MATLAB, the experimental results demonstrated that Greedy algorithm not only eliminates conflict and confusion completely, but also reduces the mod 3 interference of 26.213% more than the baseline scheme and far more than the improvement ratio of 4.436% given by the classical graph coloring algorithm.