
Showing papers on "Cellular network published in 2020"


Journal ArticleDOI
TL;DR: In this paper, the uplink spectral efficiencies of four different cell-free implementations are analyzed under spatially correlated fading and arbitrary linear processing, and it is shown that a centralized implementation with optimal minimum mean-square error (MMSE) processing not only maximizes the SE but also largely reduces the fronthaul signaling compared to the standard distributed approach.
Abstract: Cell-free Massive MIMO is considered a promising technology for satisfying the increasing number of users and high rate expectations in beyond-5G networks. The key idea is to let many distributed access points (APs) communicate with all users in the network, possibly by using joint coherent signal processing. The aim of this paper is to provide the first comprehensive analysis of this technology under different degrees of cooperation among the APs. Particularly, the uplink spectral efficiencies of four different cell-free implementations are analyzed, with spatially correlated fading and arbitrary linear processing. It turns out that it is possible to outperform conventional Cellular Massive MIMO and small cell networks by a wide margin, but only using global or local minimum mean-square error (MMSE) combining. This is in sharp contrast to the existing literature, which advocates for maximum-ratio combining. Also, we show that a centralized implementation with optimal MMSE processing not only maximizes the SE but also largely reduces the fronthaul signaling compared to the standard distributed approach. This makes it the preferred way to operate Cell-free Massive MIMO networks. Non-linear decoding is also investigated and shown to bring negligible improvements.
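For intuition, here is a minimal NumPy sketch of the MMSE combining the paper advocates, evaluated for one coherence block. The i.i.d. Rayleigh channels, dimensions, and noise power are illustrative stand-ins (the paper models spatially correlated fading), not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(0)
N, K, sigma2 = 64, 8, 0.1   # service antennas, users, noise power (illustrative)

# i.i.d. Rayleigh channels; the paper uses spatially correlated fading instead
H = (rng.standard_normal((N, K)) + 1j * rng.standard_normal((N, K))) / np.sqrt(2)

# MMSE combining vector for user k: v_k = (H H^H + sigma2 I)^(-1) h_k
Gram = H @ H.conj().T + sigma2 * np.eye(N)
V = np.linalg.solve(Gram, H)            # columns are (unnormalized) MMSE combiners

# Instantaneous SINR and spectral efficiency of user 0
k = 0
v = V[:, k]
desired = np.abs(v.conj() @ H[:, k]) ** 2
interference = sum(np.abs(v.conj() @ H[:, j]) ** 2 for j in range(K) if j != k)
noise = sigma2 * np.linalg.norm(v) ** 2
sinr = desired / (interference + noise)
print(f"SE of user {k}: {np.log2(1 + sinr):.2f} bit/s/Hz")
```

Replacing `V` with `H` itself gives the maximum-ratio combiner that the paper shows to be clearly inferior.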

546 citations


Journal ArticleDOI
TL;DR: In this paper, the authors proposed a new framework for scalable cell-free massive MIMO systems by exploiting the dynamic cooperation cluster concept from the Network-MIMO literature and provided a novel algorithm for joint initial access, pilot assignment, and cluster formation.
Abstract: Imagine a coverage area with many wireless access points that cooperate to jointly serve the users, instead of creating autonomous cells. Such a cell-free network operation can potentially resolve many of the interference issues that appear in current cellular networks. This ambition was previously called Network MIMO (multiple-input multiple-output) and has recently reappeared under the name Cell-Free Massive MIMO. The main challenge is to achieve the benefits of cell-free operation in a practically feasible way, with computational complexity and fronthaul requirements that are scalable to large networks with many users. We propose a new framework for scalable Cell-Free Massive MIMO systems by exploiting the dynamic cooperation cluster concept from the Network MIMO literature. We provide a novel algorithm for joint initial access, pilot assignment, and cluster formation that is proved to be scalable. Moreover, we adapt the standard channel estimation, precoding, and combining methods to become scalable. A new uplink and downlink duality is proved and used to heuristically design the precoding vectors on the basis of the combining vectors. Interestingly, the proposed scalable precoding and combining outperform conventional maximum ratio (MR) processing and also perform close to the best unscalable alternatives.
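The scalability mechanism can be sketched in a few lines: each arriving user appoints its strongest AP as master, picks the least-interfered pilot there, and each AP then serves at most one user per pilot. This toy reconstruction of the idea uses made-up gains and is not the paper's exact algorithm:

```python
import numpy as np

rng = np.random.default_rng(1)
L, K, tau_p = 20, 12, 4               # APs, users, orthogonal pilots (illustrative)
beta = rng.exponential(1.0, (L, K))   # stand-in large-scale fading gains

pilot = np.full(K, -1)
for k in range(K):                      # users appear one at a time (initial access)
    master = np.argmax(beta[:, k])      # strongest AP becomes the master AP
    # choose the pilot with the least pilot interference at the master AP
    interf = [beta[master, pilot == t].sum() for t in range(tau_p)]
    pilot[k] = int(np.argmin(interf))

# Cluster formation: each AP serves at most one user per pilot (the strongest one),
# so per-AP complexity stays bounded as the network grows.
serves = np.zeros((L, K), dtype=bool)
for l in range(L):
    for t in range(tau_p):
        users_t = np.where(pilot == t)[0]
        if users_t.size:
            serves[l, users_t[np.argmax(beta[l, users_t])]] = True

print("number of APs serving each user:", serves.sum(axis=0))
```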

428 citations


Journal ArticleDOI
TL;DR: The most significant use cases expected for 5G, including their corresponding scenarios and traffic models, are presented, along with guidelines that ease the performance evaluation of current and future 5G innovations as well as the dimensioning of future 5G deployments.
Abstract: The fifth-generation mobile initiative, 5G, is a tremendous and collective effort to specify, standardize, design, manufacture, and deploy the next cellular network generation. 5G networks will support demanding services such as enhanced Mobile Broadband, Ultra-Reliable and Low Latency Communications, and massive Machine-Type Communications, which will require data rates of tens of Gbps, latencies of a few milliseconds, and connection densities of millions of devices per square kilometer. This survey presents the most significant use cases expected for 5G, including their corresponding scenarios and traffic models. First, the paper analyzes the characteristics and requirements of 5G communications, considering aspects such as traffic volume, network deployments, and main performance targets. Second, emphasizing the definition of performance evaluation criteria for 5G technologies, the paper reviews related proposals from the principal standards development organizations and industry alliances. Finally, well-defined and significant 5G use cases are provided. As a result, these guidelines will ease the performance evaluation of current and future 5G innovations, as well as the dimensioning of future 5G deployments.

399 citations


Journal ArticleDOI
TL;DR: In this paper, the authors present a framework for the performance analysis of transmission scheduling with QoS support, along with the issues involved in short data packet transmission in the mMTC scenario, and provide a detailed overview of existing and emerging solutions toward addressing the RAN congestion problem.
Abstract: The ever-increasing number of resource-constrained machine-type communication (MTC) devices is leading to the critical challenge of fulfilling diverse communication requirements in dynamic and ultra-dense wireless environments. Among the different application scenarios that the upcoming 5G and beyond cellular networks are expected to support, such as enhanced mobile broadband (eMBB), massive machine type communications (mMTCs), and ultra-reliable and low latency communications (URLLCs), the mMTC brings the unique technical challenge of supporting a huge number of MTC devices in cellular networks, which is the main focus of this paper. The related challenges include quality of service (QoS) provisioning, handling highly dynamic and sporadic MTC traffic, huge signalling overhead, and radio access network (RAN) congestion. In this regard, this paper aims to identify and analyze the involved technical issues, to review recent advances, to highlight potential solutions and to propose new research directions. First, starting with an overview of mMTC features and QoS provisioning issues, we present the key enablers for mMTC in cellular networks. Along with the highlights on the inefficiency of the legacy random access (RA) procedure in the mMTC scenario, we then present the key features and channel access mechanisms in the emerging cellular IoT standards, namely, LTE-M and narrowband IoT (NB-IoT). Subsequently, we present a framework for the performance analysis of transmission scheduling with QoS support, along with the issues involved in short data packet transmission. Next, we provide a detailed overview of the existing and emerging solutions toward addressing the RAN congestion problem, and then identify potential advantages, challenges, and use cases for the applications of emerging machine learning (ML) techniques in ultra-dense cellular networks. Out of several ML techniques, we focus on the application of the low-complexity $Q$-learning approach in the mMTC scenario along with the recent advances toward enhancing its learning performance and convergence. Finally, we discuss some open research challenges and promising future research directions.
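To give a flavor of the low-complexity Q-learning the survey highlights, here is a toy, stateless Q-learning sketch in which MTC devices independently learn collision-free random access slots. All sizes, rewards, and the collision model are invented for illustration and are not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(2)
n_devices, n_slots, episodes = 12, 16, 3000   # illustrative sizes
alpha, eps = 0.1, 0.1                         # learning rate, exploration probability

# One Q-value per (device, RA slot); a stateless Q-learning variant
Q = np.zeros((n_devices, n_slots))

for _ in range(episodes):
    greedy = Q.argmax(axis=1)
    explore = rng.random(n_devices) < eps
    choice = np.where(explore, rng.integers(0, n_slots, n_devices), greedy)
    occupancy = np.bincount(choice, minlength=n_slots)
    # reward +1 if the device's chosen slot is collision-free, -1 otherwise
    reward = np.where(occupancy[choice] == 1, 1.0, -1.0)
    idx = np.arange(n_devices)
    Q[idx, choice] += alpha * (reward - Q[idx, choice])

final = Q.argmax(axis=1)
ok = (np.bincount(final, minlength=n_slots)[final] == 1).sum()
print(f"{ok}/{n_devices} devices settled on collision-free slots")
```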

290 citations


Proceedings ArticleDOI
04 May 2020
TL;DR: Small cell base stations are introduced to orchestrate FEEL among MUs within their cells and to periodically exchange model updates with the MBS for global consensus; this hierarchical federated learning (HFL) scheme is shown to significantly reduce communication latency without sacrificing accuracy.
Abstract: We consider federated edge learning (FEEL), where mobile users (MUs) collaboratively learn a global model by sharing local updates on the model parameters rather than their datasets, with the help of a mobile base station (MBS). We optimize the resource allocation among MUs to reduce the communication latency in learning iterations. Observing that the performance in this centralized setting is limited by the distance of cell-edge users from the MBS, we introduce small cell base stations (SBSs) that orchestrate FEEL among the MUs within their cells and periodically exchange model updates with the MBS for global consensus. We show that this hierarchical federated learning (HFL) scheme significantly reduces the communication latency without sacrificing the accuracy.
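The hierarchy reduces to three nested loops: users run local SGD, each SBS averages its users for several rounds, and the MBS periodically averages the SBS models. Below is a toy NumPy sketch on a synthetic least-squares task; all sizes, the learning rate, and the data model are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
n_sbs, users_per_sbs, dim = 3, 4, 5     # small cells, users per cell, model size
local_steps, sbs_rounds = 5, 3          # SGD steps per round, SBS rounds per MBS round
w_true = rng.standard_normal(dim)       # hidden model the users jointly learn

def make_user_data():
    A = rng.standard_normal((20, dim))  # each user's private features
    return A, A @ w_true + 0.1 * rng.standard_normal(20)

data = [[make_user_data() for _ in range(users_per_sbs)] for _ in range(n_sbs)]

w_global = np.zeros(dim)
for _ in range(20):                          # global consensus rounds at the MBS
    cell_models = []
    for c in range(n_sbs):                   # each SBS orchestrates its own cell
        w_cell = w_global.copy()
        for _ in range(sbs_rounds):          # intra-cell FedAvg rounds
            updates = []
            for A, b in data[c]:
                w = w_cell.copy()
                for _ in range(local_steps): # local SGD on private data
                    w -= 0.1 * A.T @ (A @ w - b) / len(b)
                updates.append(w)
            w_cell = np.mean(updates, axis=0)   # SBS aggregates its users
        cell_models.append(w_cell)
    w_global = np.mean(cell_models, axis=0)     # MBS aggregates the SBS models

print("||w_global - w_true|| =", round(float(np.linalg.norm(w_global - w_true)), 4))
```

The latency saving comes from the inner loops: most aggregation traffic stays on the short MU-to-SBS links, and only one model per cell reaches the MBS per global round.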

271 citations


Journal ArticleDOI
TL;DR: This survey provides a comprehensive overview of several emerging technologies for 5G systems, such as massive multiple-input multiple-output (MIMO) technologies, multiple access technologies, hybrid analog-digital precoding and combining, non-orthogonal multiple access (NOMA), cell-free massive MIMO, and simultaneous wireless information and power transfer (SWIPT) technologies.
Abstract: Fifth-generation (5G) cellular networks will almost certainly operate in the high-bandwidth, underutilized millimeter-wave (mmWave) frequency spectrum, which offers the potential for high-capacity wireless transmission at multi-gigabit-per-second (Gbps) data rates. Despite the enormous available bandwidth, mmWave signal transmissions suffer from fundamental technical challenges such as severe path loss, sensitivity to blockage, directivity, and narrow beamwidth, due to their short wavelengths. To effectively support system design and deployment, accurate channel modeling comprising several 5G technologies and scenarios is essential. This survey provides a comprehensive overview of several emerging technologies for 5G systems, such as massive multiple-input multiple-output (MIMO) technologies, multiple access technologies, hybrid analog-digital precoding and combining, non-orthogonal multiple access (NOMA), cell-free massive MIMO, and simultaneous wireless information and power transfer (SWIPT) technologies. These technologies induce distinct propagation characteristics and establish specific requirements for 5G channel modeling. To tackle these challenges, we first provide a survey of existing solutions and standards and discuss the radio-frequency (RF) spectrum and regulatory issues for mmWave communications. Second, we compare mmWave communications with existing wireless techniques such as sub-6 GHz WiFi and 4G LTE, highlighting mmWave benefits including narrow beams, high signal quality, large-capacity data transmission, and strong detection potential. Third, we describe the fundamental propagation characteristics of the mmWave band and survey the existing channel models for mmWave communications. Fourth, we track the evolution and advancement of hybrid beamforming for massive MIMO systems in terms of system models of hybrid precoding architectures, hybrid analog and digital precoding/combining matrices, potential antenna configuration scenarios, and mmWave channel estimation (CE) techniques. Fifth, we extend the scope of the discussion to multiple access technologies for mmWave systems, such as NOMA and space-division multiple access (SDMA), with limited RF chains at the base station. Lastly, we explore the integration of SWIPT in mmWave massive MIMO systems with limited RF chains, to realize spectrally and energy-efficient communications.
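To make the hybrid analog-digital precoding idea concrete, here is a minimal sketch with a phase-only analog stage matched to the channel phases and a zero-forcing digital stage over the reduced effective channel. The i.i.d. channel, antenna counts, and normalization are illustrative assumptions, not a model from the survey:

```python
import numpy as np

rng = np.random.default_rng(4)
N, K = 64, 4                       # BS antennas; single-antenna users = RF chains
H = (rng.standard_normal((K, N)) + 1j * rng.standard_normal((K, N))) / np.sqrt(2)

# Analog stage: constant-modulus phase shifters steered at each user's channel phases
F_rf = np.exp(1j * np.angle(H.conj().T)) / np.sqrt(N)     # N x K, phase-only

# Digital stage: zero-forcing over the small K x K effective channel H @ F_rf
H_eff = H @ F_rf
F_bb = np.linalg.pinv(H_eff)
F_bb /= np.linalg.norm(F_rf @ F_bb)                       # total power normalization

print("inter-user leakage |H F_rf F_bb| (~scaled identity):")
print(np.round(np.abs(H @ F_rf @ F_bb), 3))
```

The point of the architecture is visible in the dimensions: only K RF chains drive N antennas, so the expensive digital processing acts on a K x K channel instead of a K x N one.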

234 citations


Journal ArticleDOI
TL;DR: An innovative UAV-enabled MEC system involving interactions among IoT devices, a UAV, and edge clouds (ECs) is proposed, together with an efficient algorithm based on successive convex approximation to obtain suboptimal solutions.
Abstract: Mobile edge computing (MEC) is an emerging technology to support resource-intensive yet delay-sensitive applications using small cloud-computing platforms deployed at the mobile network edges. However, existing MEC techniques are not applicable to situations where the number of mobile users increases explosively or the network facilities are sparsely distributed. In view of this insufficiency, unmanned aerial vehicles (UAVs) have been employed to improve the connectivity of ground Internet of Things (IoT) devices thanks to their high altitude. This article proposes an innovative UAV-enabled MEC system involving the interactions among IoT devices, a UAV, and edge clouds (ECs). The system deploys and operates a UAV to facilitate MEC service provisioning to a set of IoT devices in regions where the existing ECs are not accessible to IoT devices owing to terrestrial signal blockage or shadowing. The UAV and ECs in the system collaboratively provide MEC services to the IoT devices. For optimal service provisioning in this system, we formulate an optimization problem aiming at minimizing the weighted sum of the service delay of all IoT devices and UAV energy consumption by jointly optimizing the UAV position, communication and computing resource allocation, and task splitting decisions. However, the resulting optimization problem is highly nonconvex and thus difficult to solve optimally. To tackle this problem, we develop an efficient algorithm based on successive convex approximation to obtain suboptimal solutions. Numerical experiments demonstrate that our proposed collaborative UAV-EC offloading scheme largely outperforms baseline schemes that rely solely on UAVs or ECs for MEC in IoT.
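The paper's joint optimization is nonconvex and solved via successive convex approximation; as a much simpler stand-in, the sketch below brute-forces only the task-splitting dimension of a weighted delay-plus-energy cost, with the UAV position and resource allocation fixed. Every rate, CPU speed, and coefficient is an invented placeholder:

```python
import numpy as np

# Illustrative per-task parameters (assumptions, not values from the paper)
cycles, bits = 8e8, 2e6                 # CPU cycles and input size of one task
f_loc, f_uav, f_ec = 1e9, 3e9, 10e9     # CPU speeds: device, UAV, edge cloud (Hz)
r_uav, r_ec = 5e6, 2e6                  # uplink rates to UAV and (via UAV) to EC (bit/s)
kappa, p_tx, w = 1e-27, 0.5, 0.5        # energy coeff., TX power (W), delay/energy weight

best = None
for a in np.linspace(0, 1, 101):            # fraction processed locally
    for b in np.linspace(0, 1 - a, 101):    # fraction offloaded to the UAV; rest to EC
        c = 1 - a - b
        t_loc = a * cycles / f_loc
        t_uav = b * bits / r_uav + b * cycles / f_uav
        t_ec = c * bits / r_ec + c * cycles / f_ec
        delay = max(t_loc, t_uav, t_ec)     # the three parts execute in parallel
        energy = (kappa * a * cycles * f_loc ** 2
                  + p_tx * (b * bits / r_uav + c * bits / r_ec))
        cost = w * delay + (1 - w) * energy
        if best is None or cost < best[0]:
            best = (cost, a, b, c)

print("best split (local / UAV / EC): %.2f / %.2f / %.2f" % best[1:])
```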

232 citations


Posted Content
TL;DR: A broad picture of the motivation, methodologies, challenges, and research opportunities of realizing the perceptive mobile network is presented, through a comprehensive survey of systems and technologies developed mainly in the last ten years.
Abstract: The mobile network is evolving from a communication-only network into one with joint communication and radio/radar sensing (JCAS) capabilities, which we call the perceptive mobile network (PMN). Radio sensing here refers to information retrieval from received mobile signals for objects of interest in the environment surrounding the radio transceivers. In this paper, we provide a comprehensive survey of systems and technologies that enable JCAS in the PMN, with a focus on works in the last ten years. Starting with a review of work on coexisting communication and radar systems, we highlight their limitations in addressing the interference problem, and then introduce the JCAS technology. We then set up JCAS in the mobile network context and envisage its potential applications. We briefly review three types of JCAS systems, with particular attention to their differences in design philosophy. We then introduce a framework for the PMN, including the system platform and infrastructure, three types of sensing operations, and signals usable for sensing, and discuss the system modifications required to enable sensing on current communication-only infrastructure. Within the context of the PMN, we review stimulating research problems and potential solutions, organized under eight topics: mutual information, waveform optimization, antenna array design, clutter suppression, sensing parameter estimation, pattern analysis, networked sensing under cellular topology, and sensing-assisted secure communication. This paper provides a comprehensive picture of the motivation, methodology, challenges, and research opportunities of realizing the PMN. The PMN is expected to provide a ubiquitous radio sensing platform and enable a vast number of novel smart applications.

216 citations


Journal ArticleDOI
TL;DR: It is argued that deploying AI in fifth generation (5G) and beyond will require surmounting significant technical barriers in terms of robustness, performance, and complexity.
Abstract: Mobile network operators (MNOs) are in the process of overlaying their conventional macro cellular networks with shorter-range cells such as outdoor pico cells. The resultant increase in network complexity creates substantial overhead in terms of operating expenses, time, and labor for their planning and management. Artificial intelligence (AI) offers the potential for MNOs to operate their networks in a more organic and cost-efficient manner. We argue that deploying AI in fifth generation (5G) and beyond will require surmounting significant technical barriers in terms of robustness, performance, and complexity. We outline future research directions, identify the top five challenges, and present a possible roadmap to realize the vision of AI-enabled cellular networks for beyond-5G and sixth generation (6G) networks.

196 citations


Journal ArticleDOI
TL;DR: Results show that the UC approach, which requires smaller backhaul overhead and is more scalable than the CF deployment, also achieves generally better performance than the CF approach for the vast majority of the users, especially on the uplink.
Abstract: Recently, the so-called cell-free (CF) massive multiple-input multiple-output (MIMO) architecture has been introduced, wherein a very large number of distributed access points (APs) simultaneously and jointly serve a much smaller number of mobile stations (MSs). The paper extends the CF approach to the case in which both the APs and the MSs are equipped with multiple antennas, proposing a beamforming scheme that, relying on the zero-forcing strategy, does not require channel estimation at the MSs. We contrast the originally proposed formulation of CF massive MIMO with a user-centric (UC) approach wherein each MS is served only by a limited number of APs. Exploiting the framework of successive lower-bound maximization, the paper also proposes and analyzes power allocation strategies aimed at either sum-rate maximization or minimum-rate maximization, both for the uplink and downlink. Results show that the UC approach, which requires smaller backhaul overhead and is more scalable than the CF deployment, also achieves generally better performance than the CF approach for the vast majority of the users, especially on the uplink.
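The user-centric idea reduces to each MS being served by a small set of its strongest APs rather than by all of them. A toy sketch of that selection, and of how much channel gain the reduced clusters retain, using exponential stand-in gains and an assumed cluster size rather than the paper's model:

```python
import numpy as np

rng = np.random.default_rng(5)
M, K, n_serve = 50, 10, 5             # APs, mobile stations, APs serving each MS
beta = rng.exponential(1.0, (M, K))   # stand-in large-scale channel gains

# Cell-free: every AP serves every MS.  User-centric: only the n_serve
# strongest APs serve each MS, shrinking the per-MS backhaul load.
uc_sets = np.argsort(beta, axis=0)[-n_serve:, :]   # serving-AP indices per MS

# Fraction of the total channel gain the user-centric clusters retain
retained = beta[uc_sets, np.arange(K)].sum(axis=0) / beta.sum(axis=0)
print("per-MS gain retained by UC clusters:", np.round(retained, 2))
```

With exponential gains, a handful of APs typically captures most of each user's gain, which is the intuition behind the UC performance result.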

192 citations


Journal ArticleDOI
TL;DR: A reinforcement-learning-based state-action-reward-state-action (RL-SARSA) algorithm is proposed to resolve the resource management problem in the edge server and make the optimal offloading decision for minimizing system cost, including energy consumption and computing time delay.
Abstract: In recent years, computation offloading has become an effective way to overcome the constraints of mobile devices (MDs) by offloading delay-sensitive and computation-intensive mobile application tasks to remote cloud-based data centers. Smart cities can benefit from offloading to edge points in the framework of so-called cyber-physical-social systems (CPSS), as, for example, in traffic violation tracking cameras. We assume that there are mobile edge computing networks (MECNs) in more than one region, consisting of multiple access points, multiple edge servers, and $N$ MDs, where each MD has $M$ independent real-time massive tasks. The MDs can connect to a MECN through the access points or the mobile network. Each task can be processed locally by the MD itself or remotely. There are three offloading options: the nearest edge server, an adjacent edge server, and the remote cloud. We propose a reinforcement-learning-based state-action-reward-state-action (RL-SARSA) algorithm to resolve the resource management problem in the edge server and make the optimal offloading decision for minimizing system cost, including energy consumption and computing time delay. We call this method OD-SARSA (offloading-decision-based SARSA). We compared our proposed method with reinforcement-learning-based Q-learning (RL-QL), and conclude that the performance of the former is superior to that of the latter.
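A compact sketch of the on-policy SARSA update at the heart of such a scheme, with a made-up cost table and random load dynamics standing in for the real edge-server environment (none of this models the paper's actual state or cost definitions):

```python
import numpy as np

rng = np.random.default_rng(6)
n_states, n_actions = 4, 3          # edge-server load levels x {nearest, adjacent, cloud}
alpha, gamma, eps = 0.1, 0.9, 0.1
# Illustrative cost (energy + delay) per (load state, offloading action)
cost = rng.uniform(0.2, 1.0, (n_states, n_actions))

Q = np.zeros((n_states, n_actions))

def policy(s):
    """epsilon-greedy action selection."""
    return int(rng.integers(n_actions)) if rng.random() < eps else int(Q[s].argmax())

s, a = 0, policy(0)
for _ in range(5000):
    r = -cost[s, a]                          # reward = negative system cost
    s2 = int(rng.integers(n_states))         # stand-in load dynamics
    a2 = policy(s2)                          # SARSA is on-policy: it uses the *next* action
    Q[s, a] += alpha * (r + gamma * Q[s2, a2] - Q[s, a])
    s, a = s2, a2

print("learned offloading choice per load state:", Q.argmax(axis=1))
```

Replacing `Q[s2, a2]` with `Q[s2].max()` in the update turns this into the Q-learning baseline the paper compares against.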

Posted Content
TL;DR: This paper provides a detailed review of massive access from the perspectives of theory, protocols, techniques, coverage, energy, and security, and several future research directions and challenges are identified.
Abstract: Massive access, also known as massive connectivity or massive machine-type communication (mMTC), is one of the main use cases of the fifth-generation (5G) and beyond 5G (B5G) wireless networks. A typical application of massive access is the cellular Internet of Things (IoT). Different from conventional human-type communication, massive access aims at realizing efficient and reliable communications for a massive number of IoT devices. Hence, the main characteristics of massive access include low power, massive connectivity, and broad coverage, which require new concepts, theories, and paradigms for the design of next-generation cellular networks. This paper presents a comprehensive survey of aspects of massive access design for B5G wireless networks. Specifically, we provide a detailed review of massive access from the perspectives of theory, protocols, techniques, coverage, energy, and security. Furthermore, several future research directions and challenges are identified.

Journal ArticleDOI
TL;DR: This survey guides the reader through a comprehensive discussion of the main characteristics of SDN and NFV technologies, and provides a thorough analysis of the different classifications, use cases, and challenges related to UAV-assisted systems.
Abstract: Unmanned Aerial Vehicles (UAVs) have become increasingly important in assisting 5G and beyond 5G (B5G) mobile networks. Indeed, UAVs have the potential both to satisfy the ever-increasing mobile data demands of such networks and to provide ubiquitous connectivity to different kinds of wireless devices. However, the UAV assistance paradigm faces a set of crucial issues and challenges. For example, the network management of current UAV-assisted systems is time-consuming, complicated, and carried out manually, causing a multitude of interoperability issues. To efficiently address these issues, Software-Defined Networking (SDN) and Network Function Virtualization (NFV) are two promising technologies for managing and improving UAV assistance for the next generation of mobile networks. In the literature, no clear guidelines describe the different use cases of SDN and NFV in the context of UAV assistance to terrestrial networks, including mobile networks. Motivated by this fact, in this survey, we guide the reader through a comprehensive discussion of the main characteristics of SDN and NFV technologies. Moreover, we provide a thorough analysis of the different classifications, use cases, and challenges related to UAV-assisted systems. We then discuss SDN/NFV-enabled UAV-assisted systems, along with several case studies and issues, such as the involvement of UAVs in cellular communications, monitoring, and routing, to name a few. We furthermore present a set of open research challenges, high-level insights, and future research directions related to UAV-assisted systems.

Journal ArticleDOI
TL;DR: This paper identifies key enabling technologies for an intelligent and open 6G network, all featured with 3C convergence, including 3C-based spectrum management, radio channel construction, delay-aware transmission, wireless distributed computing, and network self-evolution.

Journal ArticleDOI
12 Jun 2020
TL;DR: Analysis of a cellular network deployment where UAV-to-UAV (U2U) transmit-receive pairs share spectrum with the uplink of cellular ground users (GUEs) concludes that adopting overlay spectrum sharing is the most suitable approach for maintaining a minimum guaranteed rate for UAVs and high GUE UL performance.
Abstract: We consider a cellular network deployment where UAV-to-UAV (U2U) transmit-receive pairs share the same spectrum with the uplink (UL) of cellular ground users (GUEs). For this setup, we focus on analyzing and comparing the performance of two spectrum sharing mechanisms: (i) underlay, where the same time-frequency resources may be accessed by both UAVs and GUEs, resulting in mutual interference, and (ii) overlay, where the available resources are divided into orthogonal portions for U2U and GUE communications. We evaluate the coverage probability and rate of both link types and their interplay to identify the best spectrum sharing strategy. We do so through an analytical framework that embraces realistic height-dependent channel models, antenna patterns, and practical power control mechanisms. For the underlay, we find that although the presence of U2U direct communications may worsen the uplink performance of GUEs, this effect is limited, as base stations receive the power-constrained UAV signals through their antenna sidelobes. In spite of this, our results lead us to conclude that in urban scenarios with a large number of UAV pairs, adopting overlay spectrum sharing seems the most suitable approach for maintaining a minimum guaranteed rate for UAVs and high GUE UL performance.
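The coverage-probability comparison can be mimicked with a few lines of Monte Carlo. The distances, powers, and path-loss exponent below are invented, and the sketch deliberately ignores the overlay's orthogonal-resource rate penalty, which the paper's full analysis does account for:

```python
import numpy as np

rng = np.random.default_rng(7)
n_trials, sinr_th = 100_000, 1.0        # trials and SINR threshold (illustrative)
p_u2u, p_gue, noise, alpha = 0.1, 0.2, 1e-3, 3.0

# Random link distances (arbitrary units, stand-ins for the paper's geometry)
d_sig = rng.uniform(0.05, 0.3, n_trials)    # U2U link
d_int = rng.uniform(0.1, 1.0, n_trials)     # interfering GUE -> UAV receiver
fade = rng.exponential(1.0, (2, n_trials))  # Rayleigh fading power

S = p_u2u * fade[0] * d_sig ** -alpha
I = p_gue * fade[1] * d_int ** -alpha

cov_underlay = np.mean(S / (I + noise) > sinr_th)   # shared resources, interference
cov_overlay = np.mean(S / noise > sinr_th)          # orthogonal resources, no interference
print(f"U2U coverage: underlay {cov_underlay:.2f}, overlay {cov_overlay:.2f}")
```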

Journal ArticleDOI
TL;DR: The vision of societal development towards 2030 and the new application scenarios for mobile communication are identified, and the key performance requirements are derived from the service and application perspective.
Abstract: With the 5th Generation (5G) mobile network being rolled out gradually in 2019, research on the next generation mobile network has started, targeted for 2030. To pave the way for the development of the 6th Generation (6G) mobile network, the vision and requirements should be identified first for potential key technology identification and comprehensive system design. This article first identifies the vision of societal development towards 2030 and the new application scenarios for mobile communication, and then derives the key performance requirements from the service and application perspective. Taking into account the convergence of information technology, communication technology, and big data technology, a logical mobile network architecture is proposed to address the lessons learned from 5G network design. To balance the cost, capability, and flexibility of the network, the features of the 6G mobile network are proposed based on the latest progress and applications of the relevant fields, namely, on-demand fulfillment, lite network, soft network, native AI, and native security. Ultimately, the intent of this article is to serve as a basis for stimulating more promising research on 6G.

Journal ArticleDOI
TL;DR: An IRS-aided cellular network coverage extension for aerial users is proposed, in which IRS placement is designed to maximize network performance in terms of the spatial signal-to-interference-plus-noise ratio (SINR); numerical results indicate that the proposed network outperforms a benchmark system without IRSs when the IRS installation positions are optimally determined.
Abstract: Intelligent reflecting surfaces (IRSs) have emerged as a key enabler for beyond fifth-generation (B5G) communication technology and for realizing sixth-generation (6G) cellular communication. In addition, B5G and 6G networks are expected to support aerial user communications, in line with the growing data transmission requirements of aerial users. However, there are challenges in providing wireless communication for aerial users owing to the different radio wave propagation properties of terrestrial and aerial areas. In this article, we propose an IRS-aided cellular network coverage extension for aerial users. In our proposed network, IRSs and base stations (BSs) cooperate to provide air-ground communication to aerial users (AUs), with the aim of preventing interference signals from spreading over a wide area. Furthermore, IRS placement is designed to maximize network performance in terms of the spatial signal-to-interference-plus-noise ratio (SINR) while mitigating inter-cell interference. Numerical analysis results indicate that the proposed IRS-aided network outperforms the benchmark system without IRSs when the IRS installation positions are optimally determined.
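The core IRS mechanism, each element rotating its cascaded channel phase so all reflections add coherently at the user, fits in a few lines. This single-user sketch with unit-variance channels and an illustrative noise level is a textbook construction, not the paper's multi-cell design:

```python
import numpy as np

rng = np.random.default_rng(8)
N = 128                                  # IRS elements (illustrative)
h = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)  # BS -> IRS
g = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)  # IRS -> aerial user
noise = 1e-2

# Optimal passive beamforming: each element cancels its cascaded channel phase
theta = -np.angle(h * g)
gain_aligned = np.abs(np.sum(h * g * np.exp(1j * theta))) ** 2
gain_random = np.abs(np.sum(h * g * np.exp(1j * rng.uniform(0, 2 * np.pi, N)))) ** 2

print(f"SNR with phase alignment: {10 * np.log10(gain_aligned / noise):.1f} dB, "
      f"random phases: {10 * np.log10(gain_random / noise):.1f} dB")
```

The aligned gain grows with N squared while the random-phase gain grows only with N, which is why IRS placement and phase design dominate the achievable SINR.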

Journal ArticleDOI
TL;DR: A UAV-enabled cellular network setup based on tethered UAVs (tUAVs) is proposed, which enables flights that can last for days; Monte Carlo simulations are provided to compare the performance of tUAVs and untethered UAVs (uUAVs) in terms of coverage probability.
Abstract: Airborne base stations (BSs) (carried by drones) have a great potential to enhance the coverage and capacity of 6G cellular networks. However, one of the main challenges facing the deployment of airborne BSs is the limited available energy at the drone, which curtails the flight time. In fact, most current unmanned aerial vehicles (UAVs) can only operate for a maximum of 1 h. The need to frequently visit the ground station (GS) to recharge limits the performance of the UAV-enabled cellular network and leaves the UAV's coverage area temporarily out of service. In this article, we propose a UAV-enabled cellular network setup based on tethered UAVs (tUAVs). In the proposed setup, the tUAV is connected to a GS through a tether, which provides the tUAV with both energy and data. This enables a flight that can last for days. We describe in detail the components of the proposed system. Furthermore, we list the main advantages of a tUAV-enabled cellular network compared to typical untethered UAVs (uUAVs). Next, we discuss the potential applications and use cases for tUAVs. We also provide Monte Carlo simulations to compare the performance of tUAVs and uUAVs in terms of coverage probability. For instance, for a uUAV that is available 70% of the time (while unavailable because of charging or changing the battery 30% of the time), the simulation results show that tUAVs with a 120-m tether length can provide up to a 30% increase in coverage probability compared to uUAVs. Finally, we discuss the challenges, design considerations, and future research directions to realize the proposed setup.

Journal ArticleDOI
TL;DR: A big-data-based analysis framework is proposed to analyze and extract network behaviors in cellular networks for Industry 4.0 applications, using Hadoop, Hive, HBase, and related tools.
Abstract: In this article, we propose a big-data-based analysis framework to analyze and extract network behaviors in cellular networks for Industry 4.0 applications, using Hadoop, Hive, HBase, and related tools. The data pre-handling and traffic flow extraction approaches are presented to construct effective traffic matrices. Accordingly, we can capture network behaviors in cellular networks from a network-wide perspective. Although there have been a number of prior studies on cellular network usage, to the best of our knowledge, this article is the first study that characterizes network behaviors using big data analytics on a network big data set of call detail records over a longer duration (five months), with more users (five million), more records (several hundred million lines), and nationwide coverage. The call pattern analysis and network behavior extraction approaches are designed to perform big data analysis and feature extraction. Then, the corresponding algorithms are proposed to characterize network behaviors, i.e., cellular call patterns and network resource usage. A detailed evaluation is provided to validate our method. For example, we find that some unpopular calls can last a longer time and thus consume more network resources.
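A toy pandas version of the pre-handling and traffic-matrix construction steps; the six fabricated CDR rows and column names below stand in for the hundreds of millions of records the paper processes on the Hadoop/Hive stack:

```python
import pandas as pd

# Fabricated CDR rows (column names are assumptions for illustration)
cdrs = pd.DataFrame({
    "src_cell": ["A", "A", "B", "C", "B", "A"],
    "dst_cell": ["B", "C", "A", "A", "C", "B"],
    "duration_s": [60, 300, 45, 1200, 30, 90],
})

# Pre-handling: drop malformed records (missing cells, non-positive durations)
cdrs = cdrs.dropna()
cdrs = cdrs[cdrs["duration_s"] > 0]

# Traffic matrix: total call seconds exchanged between every pair of cells
tm = cdrs.pivot_table(index="src_cell", columns="dst_cell",
                      values="duration_s", aggfunc="sum", fill_value=0)
print(tm)
```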

Journal ArticleDOI
TL;DR: These studies demonstrate that big data technologies can indeed be utilized to effectively capture network behaviors and predict network activities, helping to perform highly effective network management.
Abstract: This paper uses big data technologies to study base stations' behaviors and activities and their predictability in mobile cellular networks. With new technologies quickly appearing, current cellular networks have become larger, more heterogeneous, and more complex, posing greater challenges for network management and design. How to use network big data to capture cellular network behavior and activity patterns and perform accurate predictions has recently become one of the main problems. To this end, first, we exploit a big data platform and technologies to analyze cellular network big data, i.e., Call Detail Records (CDRs). Our CDR data set, which includes more than 1,000 cellular towers, more than a million lines of CDRs, and several million users, and spans more than 100 days, is collected from a national cellular network. Second, we propose our methodology to analyze these big data. A data pre-handling and cleaning approach is proposed to obtain valuable big data sets for our further studies. Feature extraction and call predictability methods are presented to capture base stations' behaviors and dissect their predictability. Third, based on our method, we perform a detailed activity pattern analysis, including call distributions, cross-correlation features, call behavior patterns, and daily activities. Detailed analysis approaches are also proposed to uncover base stations' activities, and a series of findings is observed in the analysis process. Finally, a case study is presented to validate the predictability of base stations' behaviors and activities. Our studies demonstrate that big data technologies can indeed be utilized to effectively capture network behaviors and predict network activities, and can thus help perform highly effective network management.

Journal ArticleDOI
TL;DR: The concept of massive access wireless communication and contemporary research on this important topic are presented; massive access exhibits a clean break with current networks by potentially supporting millions of devices in each cellular network.
Abstract: Multiple access technology has played an important role in wireless communication in the last decades: it increases the capacity of the channel and allows different users to access the system simultaneously. However, the conventional multiple access technology, as originally designed for current human-centric wireless networks, is not scalable for future machine-centric wireless networks. Massive access (studied in the literature under such names as "massive-device multiple access," "unsourced massive random access," "massive connectivity," "massive machine-type communication," and "many-access channels") exhibits a clean break with current networks by potentially supporting millions of devices in each cellular network. The tremendous growth in the number of connected devices requires a fundamental rethinking of the conventional multiple access technologies in favor of new schemes suited for massive random access. Among the many new challenges arising in this setting, the most relevant are: the fundamental limits of communication from a massive number of bursty devices transmitting simultaneously with short packets, the design of low complexity and energy-efficient massive access coding and communication schemes, efficient methods for the detection of a relatively small number of active users among a large number of potential user devices with sporadic transmission patterns, and the integration of massive access with massive MIMO and other important wireless communication technologies. This article presents an overview of the concept of massive access wireless communication and of the contemporary research on this important topic.

Journal ArticleDOI
TL;DR: A distributed $Q$-learning aided power allocation algorithm for two-layer heterogeneous IIoT networks is proposed, and the spirit of designing reward functions is discussed, followed by four delicately defined reward functions considering both the QoS of femtocell IoT user equipments and macrocell IoT user equipments and their fairness.
Abstract: To achieve the goal of "Industry 4.0," cellular networks with wide coverage have gradually become an important carrier for the industrial Internet of Things (IIoT). The fifth generation cellular network is expected to be a unifying network that may connect billions of IIoT devices to support advanced IIoT business. In order to realize wide and seamless information coverage, a heterogeneous network architecture is a beneficial approach, which can also improve the near-ceiling network capacity. In order to guarantee the quality of service (QoS) as well as the fairness of different IIoT devices with limited network resources, the network association in IIoT should be performed in a more intelligent manner. In this article, we propose a distributed $Q$-learning aided power allocation algorithm for two-layer heterogeneous IIoT networks. Moreover, we discuss the spirit of designing reward functions, followed by four delicately defined reward functions considering both the QoS of femtocell IoT user equipments and macrocell IoT user equipments and their fairness. Also, both fixed and dynamic learning rates and different kinds of multi-agent cooperation modes are investigated. Finally, simulation results show the effectiveness and superiority of our proposed $Q$-learning based power allocation algorithm.
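The "spirit of designing reward functions" can be illustrated with one hypothetical example that couples a hard QoS constraint with Jain's fairness index. This is an invented reward for illustration, not one of the paper's four:

```python
import numpy as np

def jain_fairness(rates):
    """Jain's index: 1 when all users get equal rates, -> 1/n when one dominates."""
    return rates.sum() ** 2 / (len(rates) * (rates ** 2).sum())

def reward(femto_rates, macro_rates, qos_min=1.0, penalty=-1.0):
    # QoS term: hard penalty if any femtocell or macrocell user falls below target
    if min(femto_rates.min(), macro_rates.min()) < qos_min:
        return penalty
    # Otherwise trade throughput off against fairness across both tiers
    all_rates = np.concatenate([femto_rates, macro_rates])
    return all_rates.sum() * jain_fairness(all_rates)

print(reward(np.array([2.0, 3.0]), np.array([1.5, 2.5])))   # feasible, fairly balanced
print(reward(np.array([0.4, 3.0]), np.array([1.5, 2.5])))   # QoS violated -> -1.0
```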

Journal ArticleDOI
TL;DR: Under the on-board energy constraints of UAVs and interference temperature constraints from the UAV swarm to satellite users, iterative multi-domain resource allocation algorithms are presented to improve network efficiency with guaranteed user fairness, and an adaptive cell-free coverage pattern is observed.
Abstract: In fifth generation (5G) and beyond Internet of Things (IoT), it becomes increasingly important to serve a massive number of IoT devices outside the coverage of terrestrial cellular networks. Due to their own limitations, unmanned aerial vehicles (UAVs) and satellites need to coordinate with each other in the coverage holes of 5G, leading to a cognitive satellite-UAV network (CSUN). In this paper, we investigate multi-domain resource allocation for CSUNs consisting of a satellite and a swarm of UAVs, so as to improve the efficiency of massive access in wide areas. Particularly, the cell-free on-demand coverage is established to overcome the cost-ineffectiveness of conventional cellular architecture. Opportunistic spectrum sharing is also implemented to cope with the spectrum scarcity problem. To this end, a process-oriented optimization framework is proposed for jointly allocating subchannels, transmit power and hovering times, which considers the whole flight process of UAVs and uses only the slowly-varying large-scale channel state information (CSI). Under the on-board energy constraints of UAVs and interference temperature constraints from UAV swarm to satellite users, we present iterative multi-domain resource allocation algorithms to improve network efficiency with guaranteed user fairness. Simulation results demonstrate the superiority of the proposed algorithms. Moreover, the adaptive cell-free coverage pattern is observed, which implies a promising way to efficiently serve wide-area IoT devices in the upcoming sixth generation (6G) era.

Posted Content
TL;DR: The disaggregated network architecture proposed by the O-RAN Alliance is explored as a key enabler of NextG networks, and the feasibility of radio access network (RAN) control through xApps running on the near-real-time RAN Intelligent Controller is demonstrated by optimizing the scheduling policies of coexisting network slices.
Abstract: Next Generation (NextG) cellular networks will be natively cloud-based and built upon programmable, virtualized, and disaggregated architectures. The separation of control functions from the hardware fabric and the introduction of standardized control interfaces will enable the definition of custom closed-control loops, which will ultimately enable embedded intelligence and real-time analytics, thus effectively realizing the vision of autonomous and self-optimizing networks. This article explores the disaggregated network architecture proposed by the O-RAN Alliance as a key enabler of NextG networks. Within this architectural context, we discuss the potential, the challenges, and the limitations of data-driven optimization approaches to network control over different timescales. We also present the first large-scale integration of O-RAN-compliant software components with an open-source full-stack softwarized cellular network. Experiments conducted on Colosseum, the world's largest wireless network emulator, demonstrate closed-loop integration of real-time analytics and control through deep reinforcement learning agents. We also show the feasibility of Radio Access Network (RAN) control through xApps running on the near real-time RAN Intelligent Controller, to optimize the scheduling policies of co-existing network slices, leveraging the O-RAN open interfaces to collect data at the edge of the network.

Proceedings ArticleDOI
27 Oct 2020
TL;DR: The effects of the COVID-19 emergency on a UK Mobile Network Operator (MNO) are investigated, with insights ranging from a decrease in data traffic volume and lower load on the radio network to a surge in conversational voice traffic volume.
Abstract: During early 2020, the SARS-CoV-2 virus rapidly spread worldwide, forcing many governments to impose strict lock-down measures to tackle the pandemic. This significantly changed people's mobility and habits, subsequently impacting how they use telecommunication networks. In this paper, we investigate the effects of the COVID-19 emergency on a UK Mobile Network Operator (MNO). We quantify the changes in users' mobility and investigate how this impacted cellular network usage and performance. Our analysis spans from the entire country to specific regions and geodemographic area clusters. We also provide a detailed analysis for London. Our findings provide insights at different geo-temporal granularities into the status of the cellular network, from a decrease in data traffic volume and a lower load on the radio network to a surge in conversational voice traffic volume.

Journal ArticleDOI
TL;DR: A UAV-to-Everything (U2X) networking framework is proposed, which enables UAVs to adjust their communication modes full-dimensionally according to the requirements of their sensing applications, together with a reinforcement learning-based mathematical framework for U2X communications.
Abstract: In this paper, we consider an Internet of unmanned aerial vehicles (UAVs) over cellular networks, where UAVs work as aerial users to collect various sensory data, and send the collected data to their transmission destinations over cellular links. Unlike the terrestrial users in the conventional cellular networks, different UAVs have various communication requirements due to their sensing applications, and a more flexible communication framework is in demand. To tackle this problem, we propose a UAV-to-Everything (U2X) networking, which enables the UAVs to adjust their communication modes full dimensionally according to the requirements of their sensing applications. In this article, we first introduce the concept of U2X communications, and elaborate on its three communication modes. Afterwards, we discuss the key techniques of the U2X communications, including joint sensing and transmission protocol, UAV trajectory design, and radio resource management. A reinforcement learning-based mathematical framework for U2X communications is then proposed. Finally, the extensions of the U2X communications are presented.

Journal ArticleDOI
TL;DR: A framework is developed for a novel perceptive mobile/cellular network that integrates a radar sensing function into the mobile communication network; a background subtraction method based on simple recursive computation is proposed, together with a closed-form expression for performance characterization.
Abstract: In this paper, we develop a framework for a novel perceptive mobile/cellular network that integrates a radar sensing function into the mobile communication network. We propose a unified system platform that enables downlink and uplink sensing, sharing the same transmitted signals with communications. We aim to tackle the fundamental sensing parameter estimation problem in perceptive mobile networks by addressing two key challenges associated with sophisticated mobile signals and rich multipath in mobile networks. To extract sensing parameters from orthogonal frequency division multiple access and spatial division multiple access communication signals, we propose two approaches that formulate it as problems solvable by compressive sensing techniques. Most sensing algorithms have limits on the number of multipath signals they can take as inputs. To reduce the number of multipath signals, as well as to remove unwanted clutter signals, we propose a background subtraction method based on simple recursive computation, and we provide a closed-form expression for performance characterization. The effectiveness of these methods is validated in simulations.
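The clutter-removal idea reduces to an exponentially weighted recursive mean subtracted from each new snapshot. A toy 1-D sketch with fabricated clutter, a synthetic moving target, and an assumed forgetting factor (not the paper's signal model):

```python
import numpy as np

rng = np.random.default_rng(9)
n_frames, n_bins, alpha = 200, 64, 0.9    # snapshots, range bins, forgetting factor

# Static clutter plus a weak target moving across range bins
clutter = 2.0 * rng.standard_normal(n_bins)
frames = clutter + 0.05 * rng.standard_normal((n_frames, n_bins))
for t in range(n_frames):
    frames[t, (3 * t) % n_bins] += 0.5    # target shifts by 3 bins per frame

# Recursive background estimate: b_t = alpha * b_{t-1} + (1 - alpha) * x_t
background = np.zeros(n_bins)
for t in range(n_frames):
    background = alpha * background + (1 - alpha) * frames[t]

residual = frames[-1] - background        # clutter suppressed, moving target survives
print("detected target bin:", int(np.abs(residual).argmax()),
      "| true bin:", (3 * (n_frames - 1)) % n_bins)
```

Because the target rarely revisits the same bin, it barely contributes to the recursive average, while the static clutter is cancelled almost exactly.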

Journal ArticleDOI
TL;DR: In this paper, an unmanned aerial vehicle (UAV) aided cellular framework against jamming is presented, in which a UAV uses reinforcement learning to choose the relay policy for a mobile user whose serving base station is attacked by a jammer.
Abstract: Cellular systems have to resist smart jammers that can optimize their selection of jamming channels and powers based on the estimated ongoing network states. In this article, we present an unmanned aerial vehicle (UAV) aided cellular framework against jamming, in which a UAV uses reinforcement learning to choose the relay policy for a mobile user whose serving base station is attacked by a jammer. More specifically, the UAV applies deep reinforcement learning and transfer learning to help cellular systems resist smart jamming without knowing the cellular topology, the message generation model, the server computation model, and the jamming model, based on previous anti-jamming relay experiences and the observed current communication status. The performance bound in terms of the bit error rate and the UAV energy consumption is derived from the Nash equilibrium of the studied dynamic relay game and verified via simulations. Simulation results show that this scheme can reduce the bit error rate and save UAV energy consumption in comparison with the benchmark.

Journal ArticleDOI
TL;DR: This paper proposes a flow-based policy framework on the basis of two-tier virtualization for vehicular networks using SDNs, and presents a proof of concept for leveraging machine-learning-enabled resource classification and management through experimental evaluation of a special-purpose testbed established in a custom Mininet setup.
Abstract: Current cellular technology and vehicular networks cannot keep pace with the rapidly growing demands of vehicular networks. Resource management has become a complex and challenging task in vehicular environments. The 5G cellular network promises to provide ultra-high-speed, reduced-delay, and reliable communications. New technologies such as network function virtualization (NFV) and software defined networking (SDN) are critical enablers of 5G. The SDN-based 5G network can provide an excellent platform for autonomous vehicles because SDN offers open programmability and flexibility for incorporating new services. The separation of control and data planes enables centralized and efficient management of resources in an optimized and secure manner by maintaining a global overview of the whole network. The SDN also provides flexibility in communication administration and resource management, which are of critical importance for safety, privacy, and security, given the ad-hoc nature of vehicular network infrastructures. In addition, it promises overall improved performance. In this paper, we propose a flow-based policy framework on the basis of two-tier virtualization for vehicular networks using SDNs. Vehicle-to-vehicle (V2V) communication is enabled through wireless virtualization, where different radio resources are allocated to V2V communications based on flow classification, i.e., safety-related or non-safety flows, and the controller is responsible for managing the overall vehicular environment and V2X communications. The motivation behind this study is to implement a machine-learning-enabled architecture to cater to the sophisticated demands of modern vehicular Internet infrastructures. The inclination towards robust communications in 5G-enabled networks has made it tricky to manage network slicing efficiently. This paper also presents a proof of concept for leveraging machine-learning-enabled resource classification and management through experimental evaluation of a special-purpose testbed established in a custom Mininet setup. Furthermore, the results have been evaluated using Long Short-Term Memory (LSTM), Convolutional Neural Network (CNN), and Deep Neural Network (DNN) models. In conclusion, it is shown that the LSTM outperformed the other classification techniques with promising results.
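A minimal Keras sketch of the kind of LSTM flow classifier evaluated in the paper; the flow features and labels below are random placeholders (so the model learns nothing meaningful here), and the per-interval feature choice is a guess at typical flow statistics rather than the paper's feature set:

```python
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(10)
n_flows, timesteps, features = 1000, 20, 4     # e.g. per-interval bytes, packets, IAT, size
X = rng.standard_normal((n_flows, timesteps, features)).astype("float32")
y = rng.integers(0, 2, n_flows)                # 1 = safety-related flow, 0 = non-safety

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(timesteps, features)),
    tf.keras.layers.LSTM(32),                  # summarizes the flow's time series
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=3, batch_size=32, verbose=0)

# A flow classified as safety-related would be mapped to a protected network slice
prob = model.predict(X[:1], verbose=0)[0, 0]
print(f"P(safety-related) = {prob:.2f}")
```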

Journal ArticleDOI
TL;DR: In this article, a distributed spectrum allocation framework based on multi-agent deep reinforcement learning, named multi-agent actor critic (MAAC), is proposed, which shares global historical states, actions, and policies during centralized training and utilizes cooperation among users to further optimize system performance.
Abstract: Device-to-device (D2D) communication underlaying cellular networks is a promising technique to improve spectrum efficiency. In this situation, D2D transmission may cause severe interference to both the cellular and other D2D links, which imposes a great technical challenge on spectrum allocation. Existing centralized schemes require global information, which causes large signaling overhead, while existing distributed schemes require frequent information exchange among D2D users and cannot achieve global optimization. In this paper, a distributed spectrum allocation framework based on multi-agent deep reinforcement learning is proposed, named multi-agent actor critic (MAAC). MAAC shares global historical states, actions, and policies during centralized training, requires no signaling interaction during execution, and utilizes cooperation among users to further optimize system performance. Moreover, in order to decrease the computational complexity of training, we further propose the neighbor-agent actor critic (NAAC), which uses only neighboring users' historical information for centralized training. The simulation results show that the proposed MAAC and NAAC can effectively reduce the outage probability of cellular links, greatly improve the sum rate of D2D links, and converge quickly.