
Showing papers on "Cellular network published in 2018"


Journal ArticleDOI
TL;DR: This paper presents a detailed survey on the emerging technologies to achieve low latency communications considering three different solution domains: 1) RAN; 2) core network; and 3) caching.
Abstract: The fifth generation (5G) wireless network technology is to be standardized by 2020, and its main goals are to improve capacity, reliability, and energy efficiency, while reducing latency and massively increasing connection density. An integral part of 5G is the capability to transmit touch-perception-type real-time communication empowered by applicable robotics and haptics equipment at the network edge. In this regard, we need drastic changes in the network architecture, including the core and radio access network (RAN), to achieve end-to-end latency on the order of 1 ms. In this paper, we present a detailed survey on the emerging technologies to achieve low latency communications considering three different solution domains: 1) RAN; 2) core network; and 3) caching. We also present a general overview of major 5G cellular network elements, such as software defined networking, network function virtualization, caching, and mobile edge computing, capable of meeting latency and other 5G requirements.

643 citations


Journal ArticleDOI
TL;DR: The preliminary outcomes of extensive research on mmWave massive MIMO are presented and emerging trends together with their respective benefits, challenges, and proposed solutions are highlighted to point out current trends, evolving research issues and future directions on this technology.
Abstract: Several enabling technologies are being explored for the fifth-generation (5G) mobile system era. The aim is to evolve a cellular network that remarkably pushes forward the limits of legacy mobile systems across all dimensions of performance metrics. One dominant technology that consistently features in the list of the 5G enablers is the millimeter-wave (mmWave) massive multiple-input-multiple-output (massive MIMO) system. It shows potential to significantly raise user throughput, enhance spectral and energy efficiencies and increase the capacity of mobile networks using the joint capabilities of the huge available bandwidth in the mmWave frequency bands and high multiplexing gains achievable with massive antenna arrays. In this survey, we present the preliminary outcomes of extensive research on mmWave massive MIMO (as research on this subject is still in the exploratory phase) and highlight emerging trends together with their respective benefits, challenges, and proposed solutions. The survey spans broad areas in the field of wireless communications, and the objective is to point out current trends, evolving research issues and future directions on mmWave massive MIMO as a technology that will open up new frontiers of services and applications for next-generation cellular networks.

491 citations


Journal ArticleDOI
TL;DR: An overview of the evolution of the various localization methods that were standardized from the first to the fourth generation of cellular mobile radio is provided, and what can be expected from the new radio and network aspects of the upcoming fifth generation is examined.
Abstract: Cellular systems evolved from a dedicated mobile communication system to an almost omnipresent system with unlimited coverage anywhere and anytime for any device. The growing ubiquity of the network stirred expectations to determine the location of the mobile devices themselves. Since the beginning of standardization, each cellular mobile radio generation has been designed for communication services, and satellite navigation systems, such as the Global Positioning System (GPS), have provided precise localization as an add-on service to the mobile terminal. Self-contained localization services relying on the mobile network elements have offered only rough position estimates. Moreover, satellite-based technologies suffer a severe degradation of their localization performance indoors and in urban areas. Therefore, only in subsequent cellular standard releases have more accurate cellular-based location methods been considered to accommodate more challenging localization services. This survey provides an overview of the evolution of the various localization methods that were standardized from the first to the fourth generation of cellular mobile radio, and examines what can be expected from the new radio and network aspects of the upcoming fifth generation.

418 citations
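The standardized cellular localization methods surveyed above ultimately reduce to estimating a terminal position from range measurements to known base-station sites. As a minimal illustration (not from the paper; the anchor coordinates and the helper name `toa_position` are invented), a time-of-arrival fix can be computed by linearizing the circle equations:

```python
# Hedged sketch: 2D time-of-arrival (ToA) positioning from three base
# stations. Subtracting the first circle equation from the other two
# leaves a 2x2 linear system, solved here by Cramer's rule.

def toa_position(anchors, dists):
    """Locate a terminal from three base-station positions and ToA ranges."""
    (x1, y1), (x2, y2), (x3, y3) = anchors
    d1, d2, d3 = dists
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    b2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a11 * a22 - a12 * a21
    return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det)

# Terminal at (3, 4) ranged by base stations at three known sites.
anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
true = (3.0, 4.0)
dists = [((ax - true[0])**2 + (ay - true[1])**2) ** 0.5 for ax, ay in anchors]
print(toa_position(anchors, dists))  # -> close to (3.0, 4.0)
```

Real deployments use noisy timing advance or observed-time-difference measurements and solve the over-determined system by least squares; the exact 2x2 solve above is the noise-free special case.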


Journal ArticleDOI
TL;DR: In this paper, a comprehensive overview of the most promising modulation and multiple access (MA) schemes for 5G networks is presented, including modulation techniques in orthogonal MA (OMA) and various types of non-OMA (NOMA).
Abstract: Fifth generation (5G) wireless networks face various challenges in order to support large-scale heterogeneous traffic and users; therefore, new modulation and multiple access (MA) schemes are being developed to meet the changing demands. As this research space is ever increasing, it becomes more important to analyze the various approaches; in this paper, we therefore present a comprehensive overview of the most promising modulation and MA schemes for 5G networks. Unlike other surveys of 5G networks, this paper focuses on multiplexing techniques, including modulation techniques in orthogonal MA (OMA) and various types of non-OMA (NOMA) techniques. Specifically, we first introduce the different types of modulation schemes with potential for OMA and compare their performance in terms of spectral efficiency, out-of-band leakage, and bit-error rate. We then pay close attention to various types of NOMA candidates, including power-domain NOMA, code-domain NOMA, and NOMA multiplexing in multiple domains. From this exploration, we identify the opportunities and challenges that will have the most significant impact on modulation and MA designs for 5G networks.

371 citations
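For a concrete feel of the power-domain NOMA idea the survey compares, the sketch below computes two-user downlink rates with successive interference cancellation against an orthogonal time-sharing baseline. Everything here is a textbook simplification with invented numbers (power split, channel gains, unit noise), not results from the paper; real systems add coding loss and imperfect SIC.

```python
from math import log2

def noma_rates(p, g_near, g_far, alpha, noise=1.0):
    """Two-user downlink power-domain NOMA rates (bit/s/Hz).
    alpha is the power fraction given to the far (weak) user. The far user
    treats the near user's signal as interference; the near user removes
    the far user's signal by successive interference cancellation (SIC)."""
    r_far = log2(1 + alpha * p * g_far / ((1 - alpha) * p * g_far + noise))
    r_near = log2(1 + (1 - alpha) * p * g_near / noise)
    return r_near, r_far

def oma_rates(p, g_near, g_far, noise=1.0):
    """Orthogonal baseline: each user gets half the slot at full power."""
    return 0.5 * log2(1 + p * g_near / noise), 0.5 * log2(1 + p * g_far / noise)

p, g_near, g_far = 10.0, 1.0, 0.1          # transmit SNR and channel gains
rn, rf = noma_rates(p, g_near, g_far, alpha=0.8)
on, of = oma_rates(p, g_near, g_far)
print(rn + rf, on + of)  # NOMA sum rate exceeds the orthogonal split here
```

With these numbers the far user also individually beats its OMA rate, which is the usual argument for serving a near/far pair non-orthogonally.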


Proceedings ArticleDOI
16 Apr 2018
TL;DR: In this paper, the authors investigated the problem of dynamic service caching in MEC-enabled dense cellular networks and proposed an efficient online algorithm, called OREO, which jointly optimizes service caching and task offloading to address service heterogeneity, unknown system dynamics, spatial demand coupling and decentralized coordination.
Abstract: Mobile Edge Computing (MEC) pushes computing functionalities away from the centralized cloud to the network edge, thereby meeting the latency requirements of many emerging mobile applications and saving backhaul network bandwidth. Although many existing works have studied computation offloading policies, service caching is an equally, if not more, important design topic of MEC, yet it receives much less attention. Service caching refers to caching application services and their related databases/libraries in the edge server (e.g., a MEC-enabled BS), thereby enabling corresponding computation tasks to be executed. Because only a small number of application services can be cached in a resource-limited edge server at the same time, which services to cache must be judiciously decided to maximize the edge computing performance. In this paper, we investigate the extremely compelling but much less studied problem of dynamic service caching in MEC-enabled dense cellular networks. We propose an efficient online algorithm, called OREO, which jointly optimizes dynamic service caching and task offloading to address a number of key challenges in MEC systems, including service heterogeneity, unknown system dynamics, spatial demand coupling, and decentralized coordination. Our algorithm is developed based on Lyapunov optimization and Gibbs sampling, works online without requiring future information, and achieves provably close-to-optimal performance. Simulation results show that our algorithm can effectively reduce computation latency for end users while keeping energy consumption low.

326 citations
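OREO combines Lyapunov optimization with Gibbs sampling; the toy below illustrates only the Gibbs-sampling ingredient on a single edge server. This is not the authors' code: the service names, demand values, and the simple "uncached demand goes to the cloud" cost model are all invented assumptions, and the Lyapunov queueing part is omitted entirely.

```python
import math
import random

def gibbs_cache(demand, capacity, iters=3000, temp=0.05, seed=0):
    """Choose which services to cache on one edge server via Gibbs sampling.
    demand maps service -> request rate; an uncached service costs its demand
    (those requests are served from the remote cloud). Each step proposes
    swapping a cached service for an uncached one and accepts with a Gibbs
    probability, so low-cost configurations dominate at low temperature."""
    rng = random.Random(seed)
    services = list(demand)
    cache = set(services[:capacity])
    cost = lambda c: sum(demand[s] for s in services if s not in c)
    for _ in range(iters):
        add_s = rng.choice([s for s in services if s not in cache])
        evict_s = rng.choice(sorted(cache))
        proposal = (cache - {evict_s}) | {add_s}
        delta = cost(proposal) - cost(cache)
        # P(accept) = exp(-cost_new/T) / (exp(-cost_new/T) + exp(-cost_old/T))
        if rng.random() < 1.0 / (1.0 + math.exp(delta / temp)):
            cache = proposal
    return cache

demand = {"ocr": 2.0, "db": 0.5, "maps": 9.0, "ar": 7.5, "video": 6.0, "asr": 1.5}
print(sorted(gibbs_cache(demand, capacity=3)))  # settles on the most-demanded set
```

At low temperature the sampler behaves like randomized hill-climbing; the point of the Gibbs form is that a nonzero temperature lets decentralized servers escape poor joint configurations, which matters in the paper's multi-server setting.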


Proceedings ArticleDOI
01 Feb 2018
TL;DR: An IF interface to the analog baseband is desired both for low power consumption in the handset or user equipment (UE) active antenna and to enable arrays of transceivers for customer premises equipment (CPE) or base-station (BS) antenna arrays, using a low-loss IF power-combining/splitting network implemented on an antenna backplane carrying multiple tiled antenna modules.
Abstract: Developing next-generation cellular technology (5G) in the mm-wave bands will require low-cost phased-array transceivers [1]. Even with the benefit of beamforming, space constraints in the mobile form factor make it important to increase TX output power, while maintaining acceptable PA PAE, LNA NF, and overall transceiver power consumption, in order to maximize the link budget's allowable path loss and minimize handset case temperature. Further, the phased-array transceiver will need to support dual-polarization communication. Finally, an IF interface to the analog baseband is desired both for low power consumption in the handset or user equipment (UE) active antenna and to enable arrays of transceivers for customer premises equipment (CPE) or base-station (BS) antenna arrays, with a low-loss IF power-combining/splitting network implemented on an antenna backplane carrying multiple tiled antenna modules.

285 citations


Journal ArticleDOI
TL;DR: In this paper, the authors investigated the feasibility of multi-tier drone network architecture over traditional single-tier UAV networks and identified the scenarios in which drone networks can potentially complement the traditional RF-based terrestrial networks.
Abstract: Drones (or unmanned aerial vehicles) are expected to be an important component of 5G/beyond-5G (B5G) cellular architectures that can potentially facilitate wireless broadcast or point-to-multipoint transmissions. The distinct features of various drones such as the maximum operational altitude, communication, coverage, computation, and endurance impel the use of a multi-tier architecture for future drone-cell networks. In this context, this article focuses on investigating the feasibility of multi-tier drone network architecture over traditional single-tier drone networks and identifying the scenarios in which drone networks can potentially complement the traditional RF-based terrestrial networks. We first identify the challenges associated with multi-tier drone networks as well as drone-assisted cellular networks. We then review the existing state-of-the-art innovations in drone networks and drone-assisted cellular networks. We then investigate the performance of a multi-tier drone network in terms of spectral efficiency of downlink transmission while illustrating the optimal intensity and altitude of drones in different tiers numerically. Our results demonstrate the specific network load conditions (i.e., ratio of user intensity and base station intensity) where deployment of drones can be beneficial (in terms of spectral efficiency of downlink transmission) for conventional terrestrial cellular networks.

283 citations


Journal ArticleDOI
TL;DR: The cutting-edge research efforts on service migration in MEC are reviewed, a taxonomy based on the various research directions for efficient service migration is presented, and a summary of three technologies for hosting services on edge servers, i.e., virtual machine, container, and agent, is provided.
Abstract: Mobile edge computing (MEC) provides a promising approach to significantly reduce network operational cost and improve the quality of service (QoS) of mobile users by pushing computation resources to the network edges, and enables a scalable Internet of Things (IoT) architecture for time-sensitive applications (e-healthcare, real-time monitoring, and so on). However, the mobility of mobile users and the limited coverage of edge servers can result in significant network performance degradation, a dramatic drop in QoS, and even interruption of ongoing edge services; therefore, it is difficult to ensure service continuity. Service migration, which decides when and where services should be migrated as user mobility and demand change, has great potential to address these issues. In this paper, two concepts similar to service migration, i.e., live migration in data centers and handover in cellular networks, are first discussed. Next, the cutting-edge research efforts on service migration in MEC are reviewed, and a taxonomy based on the various research directions for efficient service migration is presented. Subsequently, a summary of three technologies for hosting services on edge servers, i.e., virtual machine, container, and agent, is provided. Finally, open research challenges in service migration are identified and discussed.

264 citations


Journal ArticleDOI
TL;DR: A comprehensive tutorial on technologies, requirements, architectures, challenges, and potential solutions on means of achieving an efficient C-RAN optical fronthaul for the next-generation network such as the fifth generation network and beyond is presented.
Abstract: The exponential traffic growth, demand for high speed wireless data communications, as well as incessant deployment of innovative wireless technologies, services, and applications have put considerable pressure on mobile network operators (MNOs). Consequently, cellular access network performance in terms of capacity, quality of service, and network coverage needs further consideration. In order to address these challenges, MNOs, as well as equipment vendors, have given significant attention to small-cell schemes based on the cloud radio access network (C-RAN), owing to its beneficial features in terms of performance optimization, cost-effectiveness, easier infrastructure deployment, and network management. Nevertheless, the C-RAN architecture imposes stringent requirements on the fronthaul link for seamless connectivity. Digital radio over fiber-based common public radio interface (CPRI) is the fundamental means of distributing baseband samples in the C-RAN fronthaul. However, optical links based on CPRI have bandwidth and flexibility limitations. These limitations might constrain, or make CPRI impractical for, next generation mobile systems, which are envisaged not only to support carrier aggregation and multiple bands but also to integrate technologies like millimeter-wave (mm-wave) and massive multiple-input multiple-output antennas into the base stations. In this paper, we present a comprehensive tutorial on the technologies, requirements, architectures, and challenges of achieving an efficient C-RAN optical fronthaul for next-generation networks such as the fifth generation network and beyond, and we offer potential solutions. A number of viable fronthauling technologies such as mm-wave and wireless fidelity are considered, but this paper mainly focuses on optical technologies such as optical fiber and free-space optics. We also present feasible means of reducing system complexity, cost, bandwidth requirements, and latency in the fronthaul. Furthermore, means of achieving the goal of green communication networks through reducing the power consumption of the system are considered.

263 citations
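The CPRI bandwidth limitation the paper highlights is easy to quantify: the fronthaul line rate scales linearly with sample rate, sample width, and antenna count. A back-of-the-envelope calculator (the function name is ours; the overhead factors are the standard CPRI 16/15 control-word and 8b/10b line-coding terms):

```python
def cpri_rate_bps(sample_rate, bits_per_sample, antennas,
                  control=16 / 15, line_coding=10 / 8):
    """CPRI fronthaul line rate for digitized I/Q baseband samples.
    The factor 2 covers the I and Q components; 16/15 adds CPRI control
    words and 10/8 accounts for 8b/10b line coding."""
    return 2 * sample_rate * bits_per_sample * antennas * control * line_coding

# One 20 MHz LTE carrier (30.72 Msps, 15-bit samples), one antenna port:
print(cpri_rate_bps(30.72e6, 15, 1) / 1e9)   # -> 1.2288 (Gbit/s, CPRI option 2)
# The same carrier with 64 antenna ports, as in massive MIMO:
print(cpri_rate_bps(30.72e6, 15, 64) / 1e9)  # -> 78.6432 (Gbit/s)
```

The 64-antenna figure illustrates the paper's point: scaling bandwidth and antenna counts toward mm-wave massive MIMO pushes sample-based CPRI fronthaul into tens of Gbit/s per carrier, motivating functional splits and analog/compressed fronthaul alternatives.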


Journal ArticleDOI
TL;DR: A comprehensive survey of CR technology is conducted, and the key enabling technologies that may be closely related to the study of 5G in the near future are presented in terms of full-duplex spectrum sensing, spectrum-database-based spectrum sensing, auction-based spectrum allocation, and carrier-aggregation-based spectrum access.
Abstract: With the development of wireless communication technology, the need for bandwidth is increasing continuously, and the growing need makes wireless spectrum resources more and more scarce. Cognitive radio (CR) has been identified as a promising solution for the spectrum scarcity, and its core idea is dynamic spectrum access. It can dynamically utilize the idle spectrum without affecting the rights of primary users, so that multiple services or users can share a part of the spectrum, thus achieving the goal of avoiding the high cost of spectrum resetting and improving the utilization of spectrum resources. In order to meet the critical requirements of the fifth generation (5G) mobile network, especially the four application scenarios of Wider-Coverage, Massive-Capacity, Massive-Connectivity, and Low-Latency, the spectrum range used in 5G will be further expanded into the full spectrum era, possibly from 1 GHz to 100 GHz. In this paper, we conduct a comprehensive survey of CR technology and focus on the current significant research progress in full spectrum sharing towards the four scenarios. In addition, the key enabling technologies that may be closely related to the study of 5G in the near future are presented in terms of full-duplex spectrum sensing, spectrum-database-based spectrum sensing, auction-based spectrum allocation, and carrier-aggregation-based spectrum access. Subsequently, other issues that play a positive role in the development, research, and practical application of CR, such as the common control channel, energy harvesting, non-orthogonal multiple access, and CR-based aeronautical communication, are discussed. The comprehensive overview provided by this survey is expected to help researchers develop CR technology in the field of 5G further.

249 citations
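Energy detection, the simplest of the spectrum-sensing techniques behind dynamic spectrum access, can be sketched in a few lines. The simulation below is illustrative only: the sample count, noise variance, and constant-amplitude "primary signal" are invented, and the threshold uses the usual Gaussian approximation for the noise-only energy statistic.

```python
import random
from math import sqrt
from statistics import NormalDist

def sense(n, noise_var, target_pfa, signal_amp, rng):
    """One energy-detection sensing slot: collect n samples and compare
    their energy to a threshold set for the target false-alarm rate.
    Under noise only, the energy is approx. Normal(n*var, 2n*var^2)."""
    z = NormalDist().inv_cdf(1 - target_pfa)
    threshold = noise_var * (n + sqrt(2 * n) * z)
    sigma = sqrt(noise_var)
    energy = sum((signal_amp + rng.gauss(0, sigma)) ** 2 for _ in range(n))
    return energy > threshold

rng = random.Random(1)
trials = 500
# Empirical false-alarm rate (band idle) and detection rate (primary active):
pfa = sum(sense(200, 1.0, 0.05, 0.0, rng) for _ in range(trials)) / trials
pd = sum(sense(200, 1.0, 0.05, 0.7, rng) for _ in range(trials)) / trials
print(pfa, pd)  # pfa near the 5% target; pd well above 90% at this SNR
```

The constant false-alarm design protects primary users by construction; detection probability then depends on SNR and sensing time, which is where the paper's full-duplex and database-assisted sensing techniques come in.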


Journal ArticleDOI
TL;DR: Numerical results show that the proposed hybrid network with optimized spectrum sharing and cyclical multiple access design significantly improves the spatial throughput over the conventional GBS-only network; while the spectrum reuse scheme provides further throughput gains at the cost of slightly higher complexity for interference control.
Abstract: In conventional terrestrial cellular networks, mobile terminals (MTs) at the cell edge often pose a performance bottleneck due to their long distances from the serving ground base station (GBS), especially in the hotspot period when the GBS is heavily loaded. This paper proposes a new hybrid network architecture that leverages an unmanned aerial vehicle (UAV) as an aerial mobile base station, which flies cyclically along the cell edge to offload data traffic for cell-edge MTs. We aim to maximize the minimum throughput of all MTs by jointly optimizing the UAV's trajectory, bandwidth allocation, and user partitioning. We first consider orthogonal spectrum sharing between the UAV and GBS, and then extend to spectrum reuse where the total bandwidth is shared by both the GBS and UAV with their mutual interference effectively avoided. Numerical results show that the proposed hybrid network with optimized spectrum sharing and cyclical multiple access design significantly improves the spatial throughput over the conventional GBS-only network, while the spectrum reuse scheme provides further throughput gains at the cost of slightly higher complexity for interference control. Moreover, the proposed UAV offloading scheme is shown to outperform the conventional small-cell offloading scheme in terms of throughput, in addition to saving infrastructure cost.
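A simplified view of why offloading cell-edge MTs to a UAV lifts the minimum throughput: under max-min fair bandwidth allocation, the common rate equals the total bandwidth divided by the sum of inverse spectral efficiencies, so improving the worst users' spectral efficiency helps everyone. The function and all numbers below are illustrative assumptions, not the paper's joint trajectory/bandwidth/partitioning optimization.

```python
def maxmin_bandwidth(spectral_eff, total_bw):
    """Max-min fair split of bandwidth: every user gets the same rate when
    bandwidth is allocated inversely to spectral efficiency (bit/s/Hz).
    Returns the common rate (bit/s) and per-user bandwidth shares (Hz)."""
    common_rate = total_bw / sum(1.0 / s for s in spectral_eff)
    return common_rate, [common_rate / s for s in spectral_eff]

# Two cell-center users plus two cell-edge users sharing 10 MHz.
# Served only by the distant GBS, the edge users' spectral efficiency is
# poor; a UAV hovering near the cell edge raises it and lifts the common rate.
gbs_only, _ = maxmin_bandwidth([4.0, 4.0, 0.5, 0.5], 10e6)
with_uav, _ = maxmin_bandwidth([4.0, 4.0, 2.0, 2.0], 10e6)
print(gbs_only / 1e6, with_uav / 1e6)  # min rate in Mbit/s, before vs. after
```

This is the orthogonal-spectrum intuition only; the paper's spectrum-reuse variant additionally manages UAV-GBS mutual interference for further gains.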

Journal ArticleDOI
TL;DR: In this article, the authors investigate the various sources of end-to-end delay of current wireless networks by taking 4G LTE as an example and propose and evaluate several techniques to reduce the end to end latency from the perspectives of error control coding, signal processing, and radio resource management.
Abstract: Fifth-generation cellular mobile networks are expected to support mission-critical ultra-reliable low-latency communication (URLLC) services in addition to enhanced mobile broadband applications. This article first introduces three emerging mission-critical applications of URLLC and identifies their requirements on end-to-end latency and reliability. We then investigate the various sources of end-to-end delay in current wireless networks, taking 4G LTE as an example. We then propose and evaluate several techniques to reduce end-to-end latency from the perspectives of error control coding, signal processing, and radio resource management. We also briefly discuss other network design approaches with the potential for further latency reduction.

Journal ArticleDOI
TL;DR: A systematic survey of the state-of-the-art caching techniques recently developed in cellular networks, including macro-cellular networks, heterogeneous networks, device-to-device networks, cloud-radio access networks, and fog-radio access networks.
Abstract: Mobile data traffic is currently growing exponentially and these rapid increases have caused the backhaul data rate requirements to become the major bottleneck to reducing costs and raising revenue for operators. To address this problem, caching techniques have attracted significant attention since they can effectively reduce the backhaul traffic by eliminating duplicate transmission of popular content. In addition, other system performance metrics can also be improved through caching techniques, e.g., spectrum efficiency, energy efficiency, and transmission delay. In this paper, we provide a systematic survey of the state-of-the-art caching techniques that were recently developed in cellular networks, including macro-cellular networks, heterogeneous networks, device-to-device networks, cloud-radio access networks, and fog-radio access networks. In particular, we give a tutorial on the fundamental caching techniques and introduce caching algorithms from three aspects, i.e., content placement, content delivery, and joint placement and delivery. We provide comprehensive comparisons among different algorithms in terms of different performance metrics, including throughput, backhaul cost, power consumption, and network delay. Finally, we summarize the main research achievements in different networks, and highlight main challenges and potential research directions.
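The backhaul savings from caching follow directly from the skew of content popularity, commonly modeled as a Zipf distribution. A quick sketch of the simplest content-placement rule (catalog size, Zipf exponent, and the "cache the most popular" policy are illustrative choices, not results from the survey):

```python
def zipf_popularity(n_files, s=0.8):
    """Zipf content popularity: request probability of the rank-k file ~ k^-s."""
    weights = [1.0 / (k ** s) for k in range(1, n_files + 1)]
    total = sum(weights)
    return [w / total for w in weights]

def hit_ratio(popularity, cache_size):
    """Most-popular placement: the cache holds the top-ranked files, so the
    hit ratio is the summed request probability of those files."""
    return sum(popularity[:cache_size])

pop = zipf_popularity(1000, s=0.8)
print(hit_ratio(pop, 100))  # caching 10% of the catalog captures ~half the requests
```

Every cache hit is a backhaul transmission avoided, which is why even small edge caches pay off; the survey's more advanced placement and delivery schemes exploit coded and cooperative variants of this basic effect.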

Journal ArticleDOI
TL;DR: This article proposes a two-level edge computing architecture for automated driving services in order to make full use of the intelligence at the wireless edge (i.e., base stations and autonomous vehicles) for coordinated content delivery and investigates the research challenges of wireless edge caching and vehicular content sharing.
Abstract: Automated driving is coming with enormous potential for safer, more convenient, and more efficient transportation systems. Besides onboard sensing, autonomous vehicles can also access various cloud services, such as high definition maps and dynamic path planning, through cellular networks to precisely understand the real-time driving environments. However, these automated driving services, which have large content volume, are time-varying, location-dependent, and delay-constrained. Therefore, cellular networks will face the challenge of meeting this extreme performance demand. To cope with the challenge, by leveraging the emerging mobile edge computing technique, in this article we first propose a two-level edge computing architecture for automated driving services in order to make full use of the intelligence at the wireless edge (i.e., base stations and autonomous vehicles) for coordinated content delivery. We then investigate the research challenges of wireless edge caching and vehicular content sharing. Finally, we propose potential solutions to these challenges and evaluate them using real and synthetic traces. Simulation results demonstrate that the proposed solutions can significantly reduce the backhaul and wireless bottlenecks of cellular networks while ensuring the quality of automated driving services.

Journal ArticleDOI
TL;DR: By exploiting non-orthogonal multiple access (NOMA) for improving the efficiency of multi-access radio transmission, this paper studies the NOMA-enabled multi- access MEC and proposes efficient algorithms to find the optimal offloading solution.
Abstract: Multi-access mobile edge computing (MEC), which enables mobile users (MUs) to offload their computation-workloads to the computation-servers located at the edge of cellular networks via multi-access radio access, has been considered as a promising technique to address the explosively growing computation-intensive applications in mobile Internet services. In this paper, by exploiting non-orthogonal multiple access (NOMA) for improving the efficiency of multi-access radio transmission, we study the NOMA-enabled multi-access MEC. We aim at minimizing the overall delay of the MUs for finishing their computation requirements, by jointly optimizing the MUs’ offloaded workloads and the NOMA transmission-time. Despite the non-convexity of the formulated joint optimization problem, we propose efficient algorithms to find the optimal offloading solution. For the single-MU case, we exploit the layered structure of the problem and propose an efficient layered algorithm to find the MU's optimal offloading solution that minimizes its overall delay. For the multi-MU case, we propose a distributed algorithm (in which the MUs individually optimize their respective offloaded workloads) to determine the optimal offloading solution for minimizing the sum of all MUs’ overall delay. Extensive numerical results have been provided to validate the effectiveness of our proposed algorithms and the performance advantage of our NOMA-enabled multi-access MEC in comparison with conventional orthogonal multiple access enabled multi-access MEC.
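For intuition on the single-MU offloading problem, consider the stripped-down case where local computing and offloaded transmission run in parallel and remote compute time is negligible (assumptions we add for illustration; the paper's model also optimizes the NOMA transmission time across MUs). The overall delay is then minimized when both branches finish together:

```python
def optimal_split(total_bits, local_rate, offload_rate):
    """Minimize the overall delay max(t_local, t_offload) of a divisible
    workload: the optimum makes both branches finish at the same time,
    giving delay = total / (local_rate + offload_rate).
    Returns (fraction offloaded, resulting delay in seconds)."""
    frac = offload_rate / (local_rate + offload_rate)
    return frac, total_bits / (local_rate + offload_rate)

# 10 Mbit job; the device processes 2 Mbit/s locally and offloads at 8 Mbit/s.
frac, delay = optimal_split(10e6, 2e6, 8e6)
print(frac, delay)  # -> 0.8 1.0  (offload 80% of the job, finish in 1 s)
```

The equal-finish-time structure is what makes the layered algorithm in the paper tractable: once the NOMA transmission time (and hence the effective offload rate) is fixed, the per-MU workload split follows in closed form.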

Journal ArticleDOI
TL;DR: A new 5G wireless security architecture is proposed, based on which the analysis of identity management and flexible authentication is provided, and a handover procedure as well as a signaling load scheme are explored to show the advantages of the proposed security architecture.
Abstract: The advanced features of 5G mobile wireless network systems yield new security requirements and challenges. This paper presents a comprehensive study on the security of 5G wireless network systems compared with the traditional cellular networks. The paper starts with a review on 5G wireless networks particularities as well as on the new requirements and motivations of 5G wireless security. The potential attacks and security services are summarized with the consideration of new service requirements and new use cases in 5G wireless networks. The recent development and the existing schemes for the 5G wireless security are presented based on the corresponding security services, including authentication, availability, data confidentiality, key management, and privacy. This paper further discusses the new security features involving different technologies applied to 5G, such as heterogeneous networks, device-to-device communications, massive multiple-input multiple-output, software-defined networks, and Internet of Things. Motivated by these security research and development activities, we propose a new 5G wireless security architecture, based on which the analysis of identity management and flexible authentication is provided. As a case study, we explore a handover procedure as well as a signaling load scheme to show the advantages of the proposed security architecture. The challenges and future directions of 5G wireless security are finally summarized.

Journal ArticleDOI
TL;DR: This study explores this novel architecture of CIoV, as well as research opportunities in vehicular network, and highlights crucial cognitive design issues from three perspectives, namely, intra-vehicle network, inter-Vehicle network and beyond-vehicles network.

Proceedings Article
25 Apr 2018
TL;DR: This work presents a reinforcement learning (RL) based scheduler that can dynamically adapt to traffic variation, and to various reward functions set by network operators, to optimally schedule IoT traffic and can enable mobile networks to carry 14.7% more data with minimal impact on existing traffic.
Abstract: Modern mobile networks are facing unprecedented growth in demand due to a new class of traffic from Internet of Things (IoT) devices such as smart wearables and autonomous cars. Future networks must schedule delay-tolerant software updates, data backup, and other transfers from IoT devices while maintaining strict service guarantees for conventional real-time applications such as voice-calling and video. This problem is extremely challenging because conventional traffic is highly dynamic across space and time, so its performance is significantly impacted if all IoT traffic is scheduled immediately when it originates. In this paper, we present a reinforcement learning (RL) based scheduler that can dynamically adapt to traffic variation, and to various reward functions set by network operators, to optimally schedule IoT traffic. Using 4 weeks of real network data from downtown Melbourne, Australia, spanning diverse traffic patterns, we demonstrate that our RL scheduler can enable mobile networks to carry 14.7% more data with minimal impact on existing traffic, and outperforms heuristic schedulers by more than 2x. Our work is a valuable step towards designing autonomous, "self-driving" networks that learn to manage themselves from past data.
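The scheduling idea can be caricatured with tabular Q-learning on a two-state toy problem. Everything here (states, rewards, the synthetic load trace) is our invention and far simpler than the paper's RL scheduler; the point is only that the agent learns to send delay-tolerant IoT traffic when conventional load is low and defer it otherwise.

```python
import random

random.seed(0)
ACTIONS = (0, 1)                       # 0 = defer the IoT batch, 1 = send now
Q = {(s, a): 0.0 for s in ("low", "high") for a in ACTIONS}

def reward(state, action):
    """Operator-set reward: sending earns throughput, but sending while
    conventional load is high is penalized for hurting real-time QoS."""
    if action == 0:
        return 0.0
    return 1.0 if state == "low" else -2.0

alpha, eps = 0.1, 0.2                  # learning rate, exploration rate
for _ in range(5000):
    s = random.choice(("low", "high"))  # synthetic conventional-load trace
    if random.random() < eps:
        a = random.choice(ACTIONS)
    else:
        a = max(ACTIONS, key=lambda act: Q[(s, act)])
    # One-step (bandit-style) Q update; no bootstrapping needed in this toy.
    Q[(s, a)] += alpha * (reward(s, a) - Q[(s, a)])

policy = {s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in ("low", "high")}
print(policy)  # -> {'low': 1, 'high': 0}
```

The paper's reward-function knob corresponds to `reward` above: changing the penalty for impacting conventional traffic shifts how aggressively the learned policy fills low-load slots.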

Journal ArticleDOI
TL;DR: An analysis of self-organized network management, with an end-to-end perspective of the network, to survey how network management can significantly benefit from ML solutions.

Proceedings ArticleDOI
07 Aug 2018
TL;DR: This paper presents Agile-Link, a new protocol that can find the best mmWave beam alignment without scanning the space, and shows that it reduces beam alignment delay by orders of magnitude.
Abstract: There is much interest in integrating millimeter wave radios (mmWave) into wireless LANs and 5G cellular networks to benefit from their multi-GHz of available spectrum. Yet, unlike existing technologies, e.g., WiFi, mmWave radios require highly directional antennas. Since the antennas have pencil-beams, the transmitter and receiver need to align their beams before they can communicate. Existing systems scan the space to find the best alignment. Such a process has been shown to introduce up to seconds of delay, and is unsuitable for wireless networks where an access point has to quickly switch between users and accommodate mobile clients. This paper presents Agile-Link, a new protocol that can find the best mmWave beam alignment without scanning the space. Given all possible directions for setting the antenna beam, Agile-Link provably finds the optimal direction in a logarithmic number of measurements. Further, Agile-Link works within the existing 802.11ad standard for mmWave LAN, and can support both clients and access points. We have implemented Agile-Link in a mmWave radio and evaluated it empirically. Our results show that it reduces beam alignment delay by orders of magnitude. In particular, for highly directional mmWave devices operating under 802.11ad, the delay drops from over a second to 2.5 ms.

Journal ArticleDOI
TL;DR: This paper investigates the performance of aerial radio connectivity in a typical rural area network deployment using extensive channel measurements and system simulations, and introduces and evaluates a novel downlink inter-cell interference coordination mechanism applied to the aerial command and control traffic.
Abstract: Widely deployed cellular networks are an attractive solution to provide large scale radio connectivity to unmanned aerial vehicles. One main prerequisite is that co-existence and optimal performance for both aerial and terrestrial users can be provided. Today’s cellular networks are, however, not designed for aerial coverage, and deployments are primarily optimized to provide good service for terrestrial users. These considerations, in combination with the strict regulatory requirements, lead to extensive research and standardization efforts to ensure that the current cellular networks can enable reliable operation of aerial vehicles in various deployment scenarios. In this paper, we investigate the performance of aerial radio connectivity in a typical rural area network deployment using extensive channel measurements and system simulations. First, we highlight that downlink and uplink radio interference play a key role, and yield relatively poor performance for the aerial traffic, when load is high in the network. Second, we analyze two potential terminal side interference mitigation solutions: interference cancellation and antenna beam selection. We show that each of these can improve the overall, aerial and terrestrial, system performance to a certain degree, with up to 30% throughput gain, and an increase in the reliability of the aerial radio connectivity to over 99%. Further, we introduce and evaluate a novel downlink inter-cell interference coordination mechanism applied to the aerial command and control traffic. Our proposed coordination mechanism is shown to provide the required aerial downlink performance at the cost of 10% capacity degradation in the serving and interfering cells.

Journal ArticleDOI
TL;DR: In this paper, the authors provide a systematic overview of the state-of-the-art design of NOMA transmission based on a unified transceiver design framework, and some promising use cases in future cellular networks, based on which interested researchers can get a quick start in this area.
Abstract: Non-orthogonal multiple access (NOMA) as an efficient method of radio resource sharing has its roots in network information theory. For generations of wireless communication systems design, orthogonal multiple access schemes in the time, frequency, or code domain have been the main choices due to the limited processing capability in the transceiver hardware, as well as the modest traffic demands in both latency and connectivity. However, for the next generation radio systems, given its vision to connect everything and the much evolved hardware capability, NOMA has been identified as a promising technology to help achieve all the targets in system capacity, user connectivity, and service latency. This article provides a systematic overview of the state-of-the-art design of NOMA transmission based on a unified transceiver design framework, the related standardization progress, and some promising use cases in future cellular networks, from which interested researchers can get a quick start in this area.
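The core mechanism behind NOMA, superposition coding with successive interference cancellation (SIC), can be shown in a few lines for the two-user downlink case. This is a textbook-style sketch with arbitrary example parameters, not a design taken from the article.

```python
import math

# Two-user downlink NOMA rates versus an orthogonal (TDMA) baseline.
# The base station superposes both users' signals; the near (strong)
# user cancels the far user's signal via SIC before decoding its own.

def noma_rates(p_total, alpha_far, g_near, g_far, noise=1.0):
    """alpha_far: fraction of power allocated to the far (weaker) user."""
    p_far, p_near = alpha_far * p_total, (1 - alpha_far) * p_total
    # Far user decodes its signal treating the near user's as noise.
    r_far = math.log2(1 + p_far * g_far / (p_near * g_far + noise))
    # Near user removes the far user's signal (SIC), then decodes.
    r_near = math.log2(1 + p_near * g_near / noise)
    return r_near, r_far

def tdma_rates(p_total, share_far, g_near, g_far, noise=1.0):
    # Orthogonal baseline: each user gets a time fraction at full power.
    r_far = share_far * math.log2(1 + p_total * g_far / noise)
    r_near = (1 - share_far) * math.log2(1 + p_total * g_near / noise)
    return r_near, r_far

n_rate, f_rate = noma_rates(p_total=10.0, alpha_far=0.8,
                            g_near=10.0, g_far=0.5)
```

Because both users occupy the full band simultaneously, NOMA trades a small interference penalty at the far user for a large multiplexing gain at the near user, which is why it outperforms orthogonal schemes when channel gains are disparate.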

Journal ArticleDOI
TL;DR: Non-orthogonal multiple access (NOMA) is investigated for an aerial base station (BS), and results for various environment settings show that NOMA performs better in terms of sum-rate, coverage, and energy efficiency.
Abstract: The future wireless networks promise to provide ubiquitous connectivity to a multitude of devices with diversified traffic patterns wherever and whenever needed. For the sake of boosting resilience against faults, natural disasters, and unexpected traffic, unmanned aerial vehicle (UAV)-assisted wireless communication systems can provide a unique opportunity to cater for such demands in a timely fashion without relying on the overly engineered cellular network. However, for UAV-assisted communication, issues of capacity, coverage, and energy efficiency are of paramount importance. The case of non-orthogonal multiple access (NOMA) is investigated for an aerial base station (BS). NOMA’s viability is established by formulating the sum-rate problem as a function of power allocation and UAV altitude. The optimization problem is constrained so that each user achieves at least the rate it would attain under orthogonal multiple access (OMA), putting NOMA and OMA on an equal footing. The relationship between energy efficiency and altitude of a UAV inspires the solution to the aforementioned problem considering two cases, namely, altitude-fixed NOMA and altitude-optimized NOMA. The latter exploits the extra degrees of freedom of UAV-BS mobility to enhance spectral and energy efficiency, and hence reduces the energy component of the UAV’s operational cost. Finally, a constrained coverage expansion methodology, facilitated by the NOMA user-rate gain, is also proposed. Results are presented for various environment settings and show that NOMA performs better in terms of sum-rate, coverage, and energy efficiency.

Journal ArticleDOI
TL;DR: A novel mechanism to scale 5G core network resources by anticipating traffic load changes through forecasting via ML techniques is proposed, which outperforms the threshold-based solutions in terms of latency to react to traffic change, and delay to have new resources ready to be used by the VNF to react to traffic increase.
Abstract: 5G is expected to provide network connectivity to not only classical devices (i.e., tablets, smartphones, etc.) but also to the IoT, which will drastically increase the traffic load carried over the network. 5G will mainly rely on NFV and SDN to build flexible and on-demand instances of functional networking entities via VNFs. Indeed, 3GPP is devising a new architecture for the core network, which replaces point-to-point interfaces used in 3G and 4G by producer/consumer-based communication among 5G core network functions, facilitating deployment over a virtual infrastructure. One big advantage of using VNFs is the possibility of dynamic scaling, depending on traffic load (i.e., instantiate new resources to VNFs when the traffic load increases and reduce the number of resources when the traffic load decreases). In this article, we propose a novel mechanism to scale 5G core network resources by anticipating traffic load changes through forecasting via ML techniques. The traffic load forecast is achieved by using and training a neural network on a real dataset of traffic arrival in a mobile network. Two techniques were used and compared: RNN, more specifically LSTM; and DNN. Simulation results show that the forecast-based scalability mechanism outperforms the threshold-based solutions, in terms of latency to react to traffic change, and delay to have new resources ready to be used by the VNF to react to traffic increase.
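The proactive-versus-reactive scaling logic can be sketched as follows. The article's forecaster is an LSTM/DNN trained on real traffic; here a made-up linear-trend extrapolator stands in for the neural network so that the scaling decision itself is runnable, and all constants are illustrative.

```python
# Forecast-driven VNF scaling versus a reactive threshold policy.

CAPACITY_PER_INSTANCE = 100   # requests/s one VNF instance can absorb
LEAD_TIME = 3                 # slots needed to boot a new instance

def forecast(history, horizon=LEAD_TIME):
    # Stand-in predictor: extrapolate the recent trend linearly.
    # (The paper uses an LSTM/DNN trained on a real traffic dataset.)
    trend = (history[-1] - history[-4]) / 3
    return history[-1] + trend * horizon

def instances_needed(load):
    return max(1, -(-int(load) // CAPACITY_PER_INSTANCE))  # ceil division

def proactive_scale(history):
    # Scale for the load expected once the new instance is ready, so
    # capacity is already available when the traffic increase hits.
    return instances_needed(forecast(history))

def reactive_scale(history):
    # Threshold policy: scale only for the load already observed.
    return instances_needed(history[-1])

traffic = [100, 130, 160, 190]        # steadily rising load (req/s)
proactive = proactive_scale(traffic)  # provisions for the forecast load
reactive = reactive_scale(traffic)    # provisions for the current load
```

On this rising trace the proactive policy provisions a third instance before it is needed, while the reactive policy only reaches two and would lag the traffic increase by the instance boot time.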

Journal ArticleDOI
TL;DR: This article presents User-Level Online Offloading Framework (ULOOF), a lightweight and efficient framework for mobile computation offloading that can offload up to 73 percent of computations, and improve the execution time by 50 percent while at the same time significantly reducing the energy consumption of mobile devices.
Abstract: Mobile devices are equipped with limited processing power and battery charge. A mobile computation offloading framework is a software that provides better user experience in terms of computation time and energy consumption, while also taking advantage of edge computing facilities. This article presents User-Level Online Offloading Framework (ULOOF), a lightweight and efficient framework for mobile computation offloading. ULOOF is equipped with a decision engine that minimizes remote execution overhead, while not requiring any modification in the device’s operating system. By means of real experiments with Android systems and simulations using large-scale data from a major cellular network provider, we show that ULOOF can offload up to 73 percent of computations, and improve the execution time by 50 percent while at the same time significantly reducing the energy consumption of mobile devices.
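A decision engine of the kind ULOOF implements weighs local execution cost against the cost of shipping the input and executing remotely. The sketch below is hypothetical: the cost model, the equal time/energy weighting, and every constant are illustrative assumptions, not ULOOF's profiled values.

```python
# Toy offloading decision: run a method locally or on an edge server.

def should_offload(cycles, input_bytes,
                   local_speed, remote_speed,   # CPU cycles per second
                   bandwidth,                   # uplink bytes per second
                   local_power, tx_power):      # watts
    """Offload iff remote execution wins on a combined time+energy cost."""
    t_local = cycles / local_speed
    e_local = t_local * local_power
    t_remote = input_bytes / bandwidth + cycles / remote_speed
    e_remote = (input_bytes / bandwidth) * tx_power  # device only transmits
    # Equal weighting of latency (s) and energy (J) for this sketch;
    # a real engine would tune or learn these weights.
    return t_remote + e_remote < t_local + e_local

# Heavy computation with a small input: offloading pays off.
heavy = should_offload(cycles=5e9, input_bytes=1e5,
                       local_speed=1e9, remote_speed=1e10,
                       bandwidth=1e6, local_power=2.0, tx_power=1.0)
# Light computation with a large input: staying local is cheaper.
light = should_offload(cycles=1e8, input_bytes=5e7,
                       local_speed=1e9, remote_speed=1e10,
                       bandwidth=1e6, local_power=2.0, tx_power=1.0)
```

The two calls show why per-method decisions matter: the same device and network can favor opposite choices depending on the compute-to-input ratio of the method being invoked.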

Journal ArticleDOI
TL;DR: This paper discusses the recent advances in the techniques of mobile data offloading, and classifies the existing mobile data offloading technologies into four categories, i.e., data offloading through small cell networks, data offloading through WiFi networks, data offloading through opportunistic mobile networks, and data offloading through heterogeneous networks.
Abstract: Recently, due to the increasing popularity of enjoying various multimedia services on mobile devices (e.g., smartphones, iPads, and electronic tablets), the generated mobile data traffic has been explosively growing and has become a severe burden on mobile network operators. To address such a serious challenge in mobile networks, an effective approach is to manage data traffic by using complementary technologies (e.g., small cell networks, WiFi networks, and so on) to achieve mobile data offloading. In this paper, we discuss the recent advances in the techniques of mobile data offloading. Particularly, based on the initiator diversity of data offloading, we classify the existing mobile data offloading technologies into four categories, i.e., data offloading through small cell networks, data offloading through WiFi networks, data offloading through opportunistic mobile networks, and data offloading through heterogeneous networks. In addition, we present a detailed taxonomy of the related mobile data offloading technologies by discussing the pros and cons of various offloading technologies for different problems in mobile networks. Finally, we outline some open research issues and challenges, which can provide guidelines for future research work.

Journal ArticleDOI
TL;DR: This paper transforms the original optimization problem for NOMA to an equivalent problem which can be solved suboptimally via an iterative power control and time allocation algorithm, and shows that it is optimal for each machine type communication device (MTCD) to transmit with the minimum throughput.
Abstract: This paper studies energy efficient resource allocation for a machine-to-machine enabled cellular network with nonlinear energy harvesting, especially focusing on two different multiple access strategies, namely nonorthogonal multiple access (NOMA) and time division multiple access (TDMA). Our goal is to minimize the total energy consumption of the network via joint power control and time allocation while taking into account circuit power consumption. For both NOMA and TDMA strategies, we show that it is optimal for each machine type communication device (MTCD) to transmit with the minimum throughput, and the energy consumption of each MTCD is a convex function with respect to the allocated transmission time. Based on the derived optimal conditions for the transmission power of MTCDs, we transform the original optimization problem for NOMA to an equivalent problem which can be solved suboptimally via an iterative power control and time allocation algorithm. Through an appropriate variable transformation, we also transform the original optimization problem for TDMA to an equivalent tractable problem, which can be iteratively solved. Numerical results verify the theoretical findings and demonstrate that NOMA consumes less total energy than TDMA in the low circuit power regime of MTCDs, while in the high circuit power regime of MTCDs TDMA achieves better network energy efficiency than NOMA.
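The convexity of per-device energy in the allocated transmission time can be checked numerically by inverting the Shannon rate formula, as sketched below. All constants are arbitrary illustration values, not parameters from the paper.

```python
# Per-device energy versus allocated transmission time: transmit power
# falls as the time budget grows, but circuit energy grows linearly,
# giving a convex curve with an interior minimum.

def energy(t, bits=1e4, bw=1e5, gain=1e-3, n0=1e-9, p_circuit=0.1):
    # Transmit power needed to deliver `bits` in time t at Shannon
    # capacity over bandwidth bw, channel gain `gain`, noise density n0.
    p_tx = (2 ** (bits / (t * bw)) - 1) * n0 * bw / gain
    # Total energy: (transmit + circuit) power sustained for time t.
    return t * (p_tx + p_circuit)

ts = [0.01 * k for k in range(1, 201)]   # candidate times: 10 ms .. 2 s
es = [energy(t) for t in ts]
t_best = ts[es.index(min(es))]           # interior minimum: energy blows
                                         # up for both tiny and huge t
```

Transmitting too fast wastes energy on exponentially growing power, while transmitting too slowly wastes it on circuit consumption; the paper's joint power control and time allocation exploits exactly this trade-off.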

Journal ArticleDOI
TL;DR: This paper investigates the NOMA downlink relay-transmission, and proposes an optimal power allocation problem for the BS and relays to maximize the overall throughput delivered to the MU and proposes a hybrid N OMA (HB-NOMA) relay that adaptively exploits the benefit of NOMa relay and that of the interference-free TDMA relay.
Abstract: The emerging non-orthogonal multiple access (NOMA), which enables mobile users (MUs) to share same frequency channel simultaneously, has been considered as a spectrum-efficient multiple access scheme to accommodate tremendous traffic growth in future cellular networks. In this paper, we investigate the NOMA downlink relay-transmission, in which the macro base station (BS) first uses NOMA to transmit to a group of relays, and all relays then use NOMA to transmit their respectively received data to an MU. In specific, we propose an optimal power allocation problem for the BS and relays to maximize the overall throughput delivered to the MU. Despite the non-convexity of the problem, we adopt the vertical decomposition and propose a layered-algorithm to efficiently compute the optimal power allocation solution. Numerical results show that the proposed NOMA relay-transmission can increase the throughput up to 30 percent compared with the conventional time division multiple access (TDMA) scheme, and we find that increasing the relays’ power capacity can increase the throughput gain of the NOMA relay against the TDMA relay. Furthermore, to improve the throughput under weak channel power gains, we propose a hybrid NOMA (HB-NOMA) relay that adaptively exploits the benefit of NOMA relay and that of the interference-free TDMA relay. By using the throughput provided by the HB-NOMA relay for each individual MU, we study the multi-MUs scenario and investigate the multi-MUs scheduling problem over a long-term period to maximize the overall utility of all MUs. Numerical results demonstrate the performance advantage of the proposed multi-MUs scheduling that adopts the HB-NOMA relay-transmission.

Journal ArticleDOI
11 Apr 2018
TL;DR: A comprehensive overview of solutions proposed by the research community and the latest status of the 3GPP standardization process is provided and key topics that need to be further addressed during the following years are identified.
Abstract: 5G networks aim to support a number of vertical industries that are characterized by diverse performance requirements. Network slicing is considered to be the key enabler to enhance cellular networks with the desired flexibility to achieve this target. During the past years, the network slicing concept has been thoroughly studied, and the main operational principles have been established. However, the added complexity introduced by network slicing creates some issues that are still under investigation. This article provides a comprehensive overview of solutions proposed by the research community and, more importantly, the latest status of the 3GPP standardization process. This survey covers solutions for all network domains (i.e., access, transport, and core) as well as the management of network slices. The article also identifies key topics that need to be further addressed in the coming years.