
Showing papers in "IEEE Wireless Communications in 2020"


Journal ArticleDOI
TL;DR: An overview of HMIMOS communications, including the available hardware architectures for reconfiguring such surfaces, is provided, and the opportunities and key challenges in designing HMIMOS-enabled wireless communications are highlighted.
Abstract: Future wireless networks are expected to evolve toward an intelligent and software-reconfigurable paradigm enabling ubiquitous communications between humans and mobile devices. They will also be capable of sensing, controlling, and optimizing the wireless environment to fulfill the visions of low-power, high-throughput, massively connected, and low-latency communications. A key conceptual enabler that has recently been gaining popularity is the HMIMOS, which refers to a low-cost transformative wireless planar structure comprised of sub-wavelength metallic or dielectric scattering particles capable of shaping electromagnetic waves according to desired objectives. In this article, we provide an overview of HMIMOS communications, including the available hardware architectures for reconfiguring such surfaces, and highlight the opportunities and key challenges in designing HMIMOS-enabled wireless communications.
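The abstract describes HMIMOS as a surface of scattering particles whose responses can be reconfigured to shape electromagnetic waves. The article gives no algorithm, but the core phase-shaping idea can be sketched numerically: with synthetic (hypothetical) channels, choosing each element's phase shift so that every reflected path adds coherently with the direct link maximizes the composite channel magnitude.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64                                   # number of reconfigurable surface elements

# Hypothetical Rayleigh-fading channels: transmitter -> surface (h),
# surface -> user (g), and a weak direct transmitter -> user link (h_d).
h = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
g = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
h_d = 0.1 * (rng.standard_normal() + 1j * rng.standard_normal()) / np.sqrt(2)

# Each element applies a unit-modulus phase shift e^{j*theta_k}.  Setting
# theta_k so every cascaded path h_k * g_k aligns in phase with the direct
# link makes all terms add constructively.
theta = np.angle(h_d) - np.angle(h * g)
composite = h_d + np.sum(h * g * np.exp(1j * theta))

# Baseline: random (unoptimized) phase shifts.
random_theta = rng.uniform(0, 2 * np.pi, N)
composite_rand = h_d + np.sum(h * g * np.exp(1j * random_theta))

print(abs(composite), abs(composite_rand))  # aligned phases give the larger gain
```

With aligned phases the composite magnitude equals |h_d| plus the sum of all cascaded-path magnitudes, which is the triangle-inequality upper bound; random phases fall well short of it.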

925 citations


Journal ArticleDOI
TL;DR: The vision of 5G is extended to more ambitious scenarios in a more distant future and speculates on the visionary technologies that could provide the step changes needed for enabling 6G.
Abstract: While 5G is being tested worldwide and anticipated to be rolled out gradually in 2019, researchers around the world are beginning to turn their attention to what 6G might be in 10+ years' time, and there are already initiatives in various countries focusing on the research of possible 6G technologies. This article aims to extend the vision of 5G to more ambitious scenarios in a more distant future and speculates on the visionary technologies that could provide the step changes needed for enabling 6G.

539 citations


Journal ArticleDOI
TL;DR: Five 6G core services are identified and two centricities and eight key performance indices are detailed to describe these services, then enabling technologies to fulfill the KPIs are discussed and possible solutions are proposed.
Abstract: The innovations provided by sixth generation wireless communication (6G) as compared to fifth generation (5G) are considered in this article based on analysis of related works. With the aim of achieving diverse performance improvements for the various 6G requirements, five 6G core services are identified. Two centricities and eight key performance indices (KPIs) are detailed to describe these services, then enabling technologies to fulfill the KPIs are discussed. A 6G architecture is proposed as an integrated system of the enabling technologies and is then illustrated using four typical urban application scenarios. Potential challenges in the development of 6G technology are then discussed and possible solutions are proposed. Finally, opportunities for exploring 6G are analyzed in order to guide future research.

445 citations


Journal ArticleDOI
TL;DR: In this article, the concept of reputation is introduced as a metric, and a reliable worker selection scheme is proposed to improve the reliability of federated learning tasks in mobile networks.
Abstract: Federated learning, as a promising machine learning approach, has emerged to leverage distributed personalized datasets from a number of nodes, for example, mobile devices, to improve performance while simultaneously providing privacy preservation for mobile users. In federated learning, training data is widely distributed and maintained on the mobile devices acting as workers. A central aggregator updates a global model in each iteration by collecting local updates from mobile devices that use their local training data. However, unreliable data may be uploaded by the mobile devices (i.e., workers), leading to fraud in federated learning tasks. The workers may perform unreliable updates intentionally, for example, through a data poisoning attack, or unintentionally, for example, with low-quality data caused by energy constraints or high-speed mobility. Therefore, identifying trusted and reliable workers in federated learning tasks becomes critical. In this article, the concept of reputation is introduced as a metric. Based on this metric, a reliable worker selection scheme is proposed for federated learning tasks. Consortium blockchain is leveraged as a decentralized approach for achieving efficient reputation management of the workers without repudiation and tampering. Numerical analysis demonstrates that the proposed approach improves the reliability of federated learning tasks in mobile networks.
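The reputation-based selection described above can be illustrated with a toy sketch. This is not the paper's actual scheme and omits the consortium-blockchain layer entirely: workers below a (hypothetical) reputation threshold are simply excluded, and the surviving local models are averaged with reputation-proportional weights.

```python
import numpy as np

def reputation_weighted_fedavg(local_models, reputations, threshold=0.5):
    """Aggregate local model vectors, keeping only workers whose
    reputation clears the threshold and weighting by reputation."""
    selected = [(m, r) for m, r in zip(local_models, reputations) if r >= threshold]
    if not selected:
        raise ValueError("no reliable workers selected")
    weights = np.array([r for _, r in selected])
    weights = weights / weights.sum()          # normalize to a convex combination
    return sum(w * m for (m, _), w in zip(selected, weights))

# Three workers: two reliable, one poisoned worker with low reputation.
models = [np.array([1.0, 1.0]), np.array([1.2, 0.8]), np.array([9.0, -9.0])]
reps = [0.9, 0.8, 0.1]
global_model = reputation_weighted_fedavg(models, reps)
# The poisoned update is filtered out; the result stays near the honest models.
```

The threshold and the linear weighting are illustrative choices; the paper's scheme derives reputation from observed worker behavior and stores it on a blockchain for tamper resistance.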

346 citations


Journal ArticleDOI
TL;DR: In this paper, a comprehensive discussion of 6G is given based on a review of 5G developments, covering visions and requirements, technology trends, and challenges, aiming to tackle the challenges of coverage, capacity, user data rate, and movement speed in mobile communication systems.

Abstract: Since 5G new radio comes in non-standalone (NSA) and standalone (SA) versions in 3GPP, research on 6G has been on schedule in academia and industry. Though 6G is supposed to have much higher capabilities than 5G, there is as yet no clear description of what 6G is. In this article, a comprehensive discussion of 6G is given based on a review of 5G developments, covering visions and requirements, technology trends, and challenges, aiming to tackle the challenges of coverage, capacity, user data rate, and movement speed in mobile communication systems. The vision of 6G is to fully support the development of a Ubiquitous Intelligent Mobile Society with intelligent life and industries. Finally, a roadmap for the 6G standard is suggested.

292 citations


Journal ArticleDOI
TL;DR: Machine learning techniques are analyzed, and the most critical challenges in advancing an intelligent 6G system, which aims to address the exponentially increasing number of connected heterogeneous devices, are introduced.
Abstract: As the 5G standard is being completed, academia and industry have begun to consider a more advanced cellular communication technique, 6G, which is expected to achieve high data rates up to 1 Tb/s and broad frequency bands of 100 GHz to 3 THz. Besides the significant upgrade of the key communication metrics, Artificial Intelligence (AI) has been envisioned by many researchers as the most important feature of 6G, since state-of-the-art machine learning techniques have been adopted as the top solutions in many extremely complex scenarios. Network intelligentization will be the new trend to address the challenge of an exponentially increasing number of connected heterogeneous devices. However, compared with the application of machine learning in other fields, such as computer games, current research on intelligent networking still has a long way to go to realize automatically configured cellular communication systems. Various problems in terms of communication systems, machine learning architectures, and computation efficiency should be addressed for the full use of this technique in 6G. In this paper, we analyze machine learning techniques and introduce the 10 most critical challenges in advancing the intelligent 6G system.

264 citations


Journal ArticleDOI
TL;DR: Novel communication frameworks of NOMA, massive multiple-input multiple-output (MIMO), and millimeter wave (mmWave) are investigated, and their superior performances are demonstrated.
Abstract: The new demands for high-reliability and ultra-high-capacity wireless communication have led to extensive research into 5G communications. However, current communication systems, which were designed on the basis of conventional communication theories, significantly restrict further performance improvements and lead to severe limitations. Recently, emerging deep learning techniques have been recognized as a promising tool for handling complicated communication systems, and their potential for optimizing wireless communications has been demonstrated. In this article, we first review the development of deep learning solutions for 5G communication and then propose efficient schemes for deep learning-based 5G scenarios. Specifically, the key ideas for several important deep learning-based communication methods are presented along with the research opportunities and challenges. In particular, novel communication frameworks for NOMA, massive multiple-input multiple-output (MIMO), and millimeter wave (mmWave) are investigated, and their superior performance is demonstrated. We envision that these appealing deep learning-based wireless physical layer frameworks will bring a new direction to communication theories and that this work will move us forward along this road.

254 citations


Journal ArticleDOI
TL;DR: It is argued that deploying AI in fifth generation (5G) and beyond will require surmounting significant technical barriers in terms of robustness, performance, and complexity.
Abstract: Mobile network operators (MNOs) are in the process of overlaying their conventional macro cellular networks with shorter range cells such as outdoor pico cells. The resultant increase in network complexity creates substantial overhead in terms of operating expenses, time, and labor for their planning and management. Artificial intelligence (AI) offers the potential for MNOs to operate their networks in a more organic and cost-efficient manner. We argue that deploying AI in fifth generation (5G) and beyond will require surmounting significant technical barriers in terms of robustness, performance, and complexity. We outline future research directions, identify the top five challenges, and present a possible roadmap to realize the vision of AI-enabled cellular networks for Beyond-5G and sixth generation (6G) networks.

196 citations


Journal ArticleDOI
TL;DR: In this paper, the authors introduce the emerging paradigm of wireless communication enabled by programmable metasurfaces to address the above issues, exploiting the powerful capability of metasurfaces to manipulate electromagnetic waves.
Abstract: Various emerging technologies, such as UM-MIMO and THz communications, are under active discussion as promising candidates to support the extremely high access rates and superior network capacity of future 6G mobile communication systems. However, such technologies still face many challenges for practical implementation. In particular, UM-MIMO and THz communication require an extremely large number of RF chains and hence suffer from prohibitive hardware cost and complexity. In this article, we introduce the emerging paradigm of wireless communication enabled by programmable metasurfaces to address the above issues, exploiting the powerful capability of metasurfaces to manipulate electromagnetic waves. We first introduce the basic concept of programmable metasurfaces, followed by the promising paradigm shift they enable in future wireless communication systems. In particular, we give an overview of two prospective paradigms for applying programmable metasurfaces in wireless transceivers, namely the RF chain-free transmitter and the space-down-conversion receiver, both of which have great potential to simplify the architecture and reduce the hardware cost of future wireless transceivers. Furthermore, we present the design architectures, preliminary experimental results, and main advantages of these new paradigms and discuss their potential opportunities and challenges toward ultra-massive 6G communications with low hardware complexity, low cost, and high energy efficiency.

143 citations


Journal ArticleDOI
TL;DR: This paper sheds light on three UAV-enabled mobile edge computing (MEC) architectures, and presents a comprehensive survey for the state-of-the-art research in this domain.
Abstract: Unmanned aerial vehicle (UAV)-enabled communication networks are promising for fifth-generation and beyond wireless communication systems. In this paper, we shed light on three UAV-enabled mobile edge computing (MEC) architectures. These architectures have been receiving ever-increasing research attention for improving computation performance and decreasing execution latency by integrating UAVs into MEC networks. We present a comprehensive survey of the state-of-the-art research in this domain. Important implementation issues are clarified. Moreover, in order to provide enlightening guidance for future research directions, key challenges and open issues are discussed.

130 citations


Journal ArticleDOI
TL;DR: An IRS-aided cellular network coverage extension for aerial users is proposed, in which IRS placement is designed to maximize network performance in terms of the spatial signal-to-interference-plus-noise ratio (SINR); numerical analysis results indicate that the proposed IRS-aided network outperforms the benchmark system without IRSs when the IRS installation positions are optimally determined.
Abstract: Intelligent reflecting surfaces (IRSs) have emerged as a key enabler for beyond fifth-generation (B5G) communication technology and for realizing sixth-generation (6G) cellular communication. In addition, B5G and 6G networks are expected to support aerial user communications in accordance with the expanded requirements of data transmission for an aerial user. However, there are challenges in providing wireless communication for aerial users owing to the different radio wave propagation properties between terrestrial areas and aerial areas. In this article, we propose an IRS-aided cellular network coverage extension for aerial users. In our proposed network, IRS and base stations (BSs) cooperate with each other to provide air-ground communication to aerial users (AUs), the aim of which is to prevent interference signals from spreading to a wide area. Furthermore, IRS placement is designed to maximize the network performance in terms of the spatial signal-to-interference-plus-noise ratio (SINR) while mitigating inter-cell interference. Numerical analysis results indicate that the proposed IRS-aided network outperforms the benchmark system without IRSs when the IRS installation positions are optimally determined.

Journal ArticleDOI
TL;DR: In this article, the authors argue that massive MIMO systems behave differently in large-scale regimes due to spatial non-stationarity, where different regions of the array see different propagation paths.
Abstract: Massive MIMO, a key technology for increasing area spectral efficiency in cellular systems, was developed assuming moderately sized apertures. In this article, we argue that massive MIMO systems behave differently in large-scale regimes due to spatial non-stationarity. In the large-scale regime, with arrays of around 50 wavelengths, the terminals see the whole array, but non-stationarities occur because different regions of the array see different propagation paths. At even larger dimensions, which we call the extra-large-scale regime, terminals see only a portion of the array, and inside that portion the first type of non-stationarity might occur. We show that the non-stationarity properties of the massive MIMO channel change several important MIMO design aspects. In simulations, we demonstrate how non-stationarity is a curse when neglected but a blessing when embraced in terms of computational load and multi-user transceiver design.

Journal ArticleDOI
TL;DR: In this paper, the authors provide a comprehensive survey of recent advances and future challenges that result from bringing AI/ML technologies into B5G wireless networks, including channel measurements, modeling, and estimation, physical layer research, and network management and optimization.
Abstract: 5G wireless communication networks are currently being deployed, and B5G networks are expected to be developed over the next decade. AI technologies and, in particular, ML have the potential to efficiently solve the unstructured and seemingly intractable problems by involving large amounts of data that need to be dealt with in B5G. This article studies how AI and ML can be leveraged for the design and operation of B5G networks. We first provide a comprehensive survey of recent advances and future challenges that result from bringing AI/ML technologies into B5G wireless networks. Our survey touches on different aspects of wireless network design and optimization, including channel measurements, modeling, and estimation, physical layer research, and network management and optimization. Then ML algorithms and applications to B5G networks are reviewed, followed by an overview of standard developments of applying AI/ML algorithms to B5G networks. We conclude this study with future challenges on applying AI/ML to B5G networks.

Journal ArticleDOI
TL;DR: An overview of the concept of massive access wireless communication and of contemporary research on this important topic is presented; massive access exhibits a clean break with current networks by potentially supporting millions of devices in each cellular network.
Abstract: Multiple access technology has played an important role in wireless communication in recent decades: it increases the capacity of the channel and allows different users to access the system simultaneously. However, conventional multiple access technology, as originally designed for current human-centric wireless networks, is not scalable for future machine-centric wireless networks. Massive access (studied in the literature under such names as "massive-device multiple access," "unsourced massive random access," "massive connectivity," "massive machine-type communication," and "many-access channels") exhibits a clean break with current networks by potentially supporting millions of devices in each cellular network. The tremendous growth in the number of connected devices requires a fundamental rethinking of conventional multiple access technologies in favor of new schemes suited for massive random access. Among the many new challenges arising in this setting, the most relevant are: the fundamental limits of communication from a massive number of bursty devices transmitting simultaneously with short packets, the design of low-complexity and energy-efficient massive access coding and communication schemes, efficient methods for the detection of a relatively small number of active users among a large number of potential user devices with sporadic transmission patterns, and the integration of massive access with massive MIMO and other important wireless communication technologies. This article presents an overview of the concept of massive access wireless communication and of the contemporary research on this important topic.

Journal ArticleDOI
TL;DR: To address the difficulty of implementing network intelligence with existing centralized learning strategies, a paradigm of federated learning-enabled intelligent F-RANs is proposed, which can take full advantage of fog computing and AI.

Abstract: The rise of big data and AI boosts the development of future wireless networks. However, due to the high cost of data offloading and model training, it is challenging to implement network intelligence based on existing centralized learning strategies, especially at the edge of networks. To provide a feasible solution, a paradigm of federated learning-enabled intelligent F-RANs is proposed, which can take full advantage of fog computing and AI. The fundamental theory with respect to accuracy loss correction and model compression is studied, providing insights into the design of federated learning in F-RANs. To support the implementation of federated learning, key techniques are introduced to fully integrate the communication, computation, and storage capabilities of F-RANs. Moreover, future trends of federated learning-enabled intelligent F-RANs, such as potential applications and open issues, are discussed.

Journal ArticleDOI
TL;DR: A SAGIN simulation platform supporting various mobility traces and protocols of space, aerial, and terrestrial networks is presented, along with a case study in which highly mobile vehicular users dynamically choose different radio access networks according to their quality of service (QoS) requirements.

Abstract: Space-air-ground integrated network (SAGIN) is envisioned as a promising solution to provide cost-effective, large-scale, and flexible wireless coverage and communication services. Since real-world deployment and testing of SAGIN is difficult and prohibitive, an efficient SAGIN simulation platform is required. In this article, we present our SAGIN simulation platform, which supports various mobility traces and protocols of space, aerial, and terrestrial networks. Centralized and decentralized controllers are implemented to optimize network functions such as access control and resource orchestration. In addition, various interfaces extend the functionality of the platform to facilitate user-defined mobility traces and control algorithms. We also present a case study in which highly mobile vehicular users dynamically choose different radio access networks according to their quality of service (QoS) requirements.

Journal ArticleDOI
TL;DR: This work uses the additive secret sharing technique to encrypt raw data into two ciphertexts and construct two classes of secure functions, which are then used to implement a privacy-preserving convolutional neural network (P-CNN).
Abstract: Data sharing among connected and autonomous vehicles without any protection will cause private information leakage. Simply encrypting data introduces heavy overhead; most importantly, when encrypted data (ciphertext) is decrypted on a vehicle, the receiver becomes fully aware of the sender's data, implying potential data leakage. To tackle these issues, we propose an edge-assisted privacy-preserving raw data sharing framework. First, we leverage the additive secret sharing technique to encrypt raw data into two ciphertexts and construct two classes of secure functions. The functions are then used to implement a privacy-preserving convolutional neural network (P-CNN). Finally, two edge servers are deployed to cooperatively execute P-CNN to extract features from the two ciphertexts and obtain the same object detection results as the original CNN. We adopt the VGG16 model as a case study to illustrate how to construct P-CNN and employ the KITTI dataset to verify our solution. Experimental results demonstrate that P-CNN offers the same classification results as the VGG16 model with negligible error, and that the communication overhead and computational cost on the edge servers are lower than those of existing solutions, without leaking private information.
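The additive secret sharing step described above rests on a simple linearity property: a linear layer applied to each share separately recombines to the plaintext result. A minimal sketch with made-up data and weights (the paper's secure function classes for nonlinear layers are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(1)

def share(x):
    """Split x into two additive shares; each share alone reveals
    nothing about x, since the mask r is random."""
    r = rng.standard_normal(x.shape)
    return x - r, r

# A toy "raw data" vector and a linear layer weight matrix.
x = np.array([3.0, -1.0, 2.0])
W = rng.standard_normal((2, 3))

x1, x2 = share(x)
# Each edge server computes the linear layer on its own ciphertext share.
y1, y2 = W @ x1, W @ x2
# Recombining the partial results reproduces the plaintext computation:
# W @ (x1 + x2) = W @ x1 + W @ x2.
assert np.allclose(y1 + y2, W @ x)
```

Nonlinear layers such as ReLU and max-pooling do not split this way and require interactive protocols between the two edge servers, which is what the paper's two classes of secure functions address.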

Journal ArticleDOI
TL;DR: This work designs an intelligent task offloading scheme based on deep Q-learning to cope with such rapidly changing scenarios, where software-defined networking is introduced to achieve information collection and centralized management of the ICVs and the network.

Abstract: Recently, traditional transportation systems have been gradually evolving into ITS, inspired by both artificial intelligence and wireless communications technologies. Vehicles are getting smarter and connected, and a variety of intelligent applications have emerged. Meanwhile, the shortage of vehicles' computing capacity makes it insufficient to support a growing number of applications due to their compute-intensive nature. This contradiction restricts the development of ICVs and ITS. Against this background, vehicular edge computing networks (VECNs), which integrate MEC and vehicular networks, have been proposed as a promising network paradigm. By deploying MEC servers at the edge of the network, ICVs' computational burden can be greatly eased via MEC offloading. However, existing task offloading schemes insufficiently consider fast-moving ICVs and the frequent handovers caused by rapid changes in communication and computing resources. Toward this end, we design an intelligent task offloading scheme based on deep Q-learning to cope with such rapidly changing scenarios, where software-defined networking is introduced to achieve information collection and centralized management of the ICVs and the network. Extensive numerical results and analysis demonstrate that our scheme not only has good adaptability but also achieves high performance compared to traditional offloading schemes.
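The abstract does not specify the deep Q-learning scheme itself; as a rough illustration of the underlying idea, here is a tabular Q-learning toy with a hypothetical two-state channel model and made-up costs, which learns when offloading a task is worthwhile.

```python
import random

random.seed(0)
alpha, gamma, eps = 0.1, 0.9, 0.1
states, actions = [0, 1], [0, 1]           # state: channel bad/good; action: local/offload
Q = {(s, a): 0.0 for s in states for a in actions}

def cost(s, a):
    # Hypothetical costs: offloading (a=1) is cheap only on a good channel (s=1).
    return {(0, 0): 4.0, (0, 1): 9.0, (1, 0): 4.0, (1, 1): 1.0}[(s, a)]

s = random.choice(states)
for _ in range(5000):
    # Epsilon-greedy action selection over the current Q estimates.
    a = random.choice(actions) if random.random() < eps else max(actions, key=lambda b: Q[(s, b)])
    r = -cost(s, a)                        # reward = negative offloading cost
    s2 = random.choice(states)             # channel changes randomly between decisions
    # Standard Q-learning temporal-difference update.
    Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in actions) - Q[(s, a)])
    s = s2

policy = {s: max(actions, key=lambda b: Q[(s, b)]) for s in states}
print(policy)  # learned: execute locally on a bad channel, offload on a good one
```

The paper's scheme replaces this lookup table with a deep network over a much richer state (mobility, handover, resource availability); the update rule is the same in spirit.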

Journal ArticleDOI
TL;DR: This work introduces a new deep imitation learning (DIL)-driven edge-cloud computation offloading framework for MEC networks, and discusses the directions and advantages of applying deep learning methods to multiple MEC research areas, including edge data analytics, dynamic resource allocation, security, and privacy.
Abstract: In this work, we propose a new deep imitation learning (DIL)-driven edge-cloud computation offloading framework for MEC networks. A key objective for the framework is to minimize the offloading cost in time-varying network environments through optimal behavioral cloning. Specifically, we first introduce our computation offloading model for MEC in detail. Then we make fine-grained offloading decisions for a mobile device, and the problem is formulated as a multi-label classification problem, with local execution cost and remote network resource usage taken into consideration. To minimize the offloading cost, we train our decision-making engine by leveraging the deep imitation learning method, and further evaluate its performance through an extensive numerical study. Simulation results show that our proposal outperforms other benchmark policies in offloading accuracy and offloading cost reduction. Finally, we discuss the directions and advantages of applying deep learning methods to multiple MEC research areas, including edge data analytics, dynamic resource allocation, security, and privacy.

Journal ArticleDOI
TL;DR: In this paper, an unmanned aerial vehicle (UAV)-aided cellular framework against jamming is presented, in which a UAV uses reinforcement learning to choose the relay policy for a mobile user whose serving base station is attacked by a jammer.

Abstract: Cellular systems have to resist smart jammers that can optimize their selection of jamming channels and powers based on the estimated ongoing network states. In this article, we present an unmanned aerial vehicle (UAV)-aided cellular framework against jamming, in which a UAV uses reinforcement learning to choose the relay policy for a mobile user whose serving base station is attacked by a jammer. More specifically, the UAV applies deep reinforcement learning and transfer learning to help cellular systems resist smart jamming without knowing the cellular topology, the message generation model, the server computation model, or the jamming model, based on previous anti-jamming relay experiences and the observed current communication status. The performance bound in terms of the bit error rate and the UAV energy consumption is derived from the Nash equilibrium of the studied dynamic relay game and verified via simulations. Simulation results show that this scheme can reduce the bit error rate and save UAV energy consumption in comparison with the benchmark.

Journal ArticleDOI
TL;DR: In this paper, a distributed sense-and-send protocol is proposed to coordinate UAVs for sensing and transmission, and reinforcement learning is applied to solve key problems such as trajectory control and resource management.
Abstract: Unmanned aerial vehicles (UAVs) can be powerful Internet of Things components to execute sensing tasks over the next-generation cellular networks, which are generally referred to as the cellular Internet of UAVs. However, due to the high mobility of UAVs and shadowing in air-to-ground channels, UAVs operate in a dynamic and uncertain environment. Therefore, UAVs need to improve the quality of service of sensing and communication without complete information, which makes reinforcement learning suitable for use in the cellular Internet of UAVs. In this article, we propose a distributed sense-and-send protocol to coordinate UAVs for sensing and transmission. Then we apply reinforcement learning in the cellular Internet of UAVs to solve key problems such as trajectory control and resource management. Finally, we point out several potential future research directions.

Journal ArticleDOI
TL;DR: Simulations show that in both uplink and downlink, although secrecy performance deteriorates in short-packet communications, the performance gains of NOMA over traditional orthogonal multiple access are significant; the challenges and future trends in this emerging area are also analyzed.

Abstract: The Internet of Things (IoT) is expected to provide ubiquitous wireless machine-type communication devices and extensive information collection, resulting in an unprecedented amount of privacy and secrets exposed to the radio space. Security issues have become a major restriction on the further development of IoT. However, secure transmission in IoT is challenged by low-complexity limitations and massive connectivity demands, especially by the use of short packets, which are expected to satisfy the delay requirement in ultra-reliable low-latency communications. Physical layer security (PLS) can be employed without constraints on packet length or number of connections. Nevertheless, due to complexity limitations, not all existing PLS techniques can be adopted in IoT. Non-orthogonal multiple access (NOMA) is a promising technique for increasing connectivity and reducing delay. Assuming an eavesdropper (Eve) has the same detection capability as legitimate users, this article further exploits the inherent characteristics of NOMA to secure short-packet communications in IoT networks without introducing extra security mechanisms. Both downlink and uplink NOMA schemes are introduced to secure transmission by deliberately increasing the co-channel interference at Eve, which can be viewed as a special cooperative jamming strategy. Simulations show that in both uplink and downlink, although secrecy performance deteriorates in short-packet communications, the performance gains of NOMA over traditional orthogonal multiple access are significant. Finally, we analyze the challenges and future trends in this emerging area.
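The downlink NOMA scheme referenced above relies on power-domain superposition with successive interference cancellation (SIC): the far (weak-channel) user gets the larger power share and treats the near user's signal as interference, while the near user cancels the far user's signal before decoding its own. A minimal sketch with made-up channel gains and power split:

```python
import numpy as np

P, noise = 1.0, 0.01
g_near, g_far = 2.0, 0.2        # hypothetical channel power gains
a_far, a_near = 0.8, 0.2        # power split: more power to the weaker (far) user

# Far user decodes its own signal, treating the near user's as interference.
sinr_far = (a_far * P * g_far) / (a_near * P * g_far + noise)

# Near user first decodes and cancels the far user's signal (SIC),
# then decodes its own signal interference-free.
sinr_near = (a_near * P * g_near) / noise

rate_far = np.log2(1 + sinr_far)    # achievable rates in bit/s/Hz
rate_near = np.log2(1 + sinr_near)
print(rate_far, rate_near)
```

An eavesdropper with the same detector sees the superposed signal too; the co-channel interference that the legitimate near user can cancel via SIC degrades Eve's SINR, which is the "free" jamming effect the article exploits.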

Journal ArticleDOI
TL;DR: The SAG IoT network paradigm, including its composition and network architecture, is introduced, along with a cloud-based modular simulation system for 5G and future IoT characterized by high efficiency, flexible configuration, and high precision.

Abstract: The integration of multidimensional networks such as space, air, and ground is the future trend of the IoT. In this article, we introduce the SAG IoT network paradigm, including its composition and network architecture. Network slicing, the core 5G technology, will be applied to the SAG IoT network. The wide use of mmWave and UAVs will produce many new scenarios, and it is necessary to study the effects of UAVs on mmWave channels in these scenarios. In addition, mmWave imaging technology can provide a basis for this research. Machine learning is an important new technology that can be widely used in the SAG IoT. Simulation and measurement are both important means of evaluating the performance of communication systems. To cope with emerging new applications and technologies, a cloud-based modular simulation system is introduced for 5G and future IoT; this system is characterized by high efficiency, flexible configuration, and high precision.

Journal ArticleDOI
TL;DR: A pair of dominant methodologies of using DL for wireless communications are investigated, including DL-based architecture design, which breaks the classical model-based block design rule of wireless communications in the past decades.
Abstract: Wireless communications are envisioned to bring about dramatic changes in the future, with a variety of emerging applications, such as virtual reality, Internet of Things, and so on, becoming a reality. However, these compelling applications have imposed many new challenges, including unknown channel models, low-latency requirement in large-scale super-dense networks, and so on. The amazing success of deep learning in various fields, particularly in computer science, has recently stimulated increasing interest in applying it to address those challenges. Hence, in this review, a pair of dominant methodologies of using DL for wireless communications are investigated. The first one is DL-based architecture design, which breaks the classical model-based block design rule of wireless communications in the past decades. The second one is DL-based algorithm design, which will be illustrated by several examples in a series of typical techniques conceived for 5G and beyond. Their principles, key features, and performance gains will be discussed. Open problems and future research opportunities will also be pointed out, highlighting the interplay between DL and wireless communications. We expect that this review can stimulate more novel ideas and exciting contributions for intelligent wireless communications.

Journal ArticleDOI
TL;DR: A BCL framework for AI-enabled CAVs is presented, enabling them to train ML models locally and upload the models to a blockchain network, harnessing the "collective intelligence" of CAVs while avoiding large amounts of raw data transmission.
Abstract: The accuracy of the ML model is essential for the further development of AI-enabled CAVs. With the increasing complexity of on-board sensor systems, the large amount of raw data available for learning can, however, cause heavy communication burdens and data security issues. Alleviating the communication cost while improving the accuracy of machine learning with preserved data privacy is an important issue to address in CAVs. In this article, we survey the existing literature toward efficient and secured learning in a dynamic wireless environment. In particular, a BCL framework for AI-enabled CAVs is presented. The framework enables distributed CAVs to train ML models locally and upload them to a blockchain network, harnessing the "collective intelligence" of CAVs while avoiding large amounts of raw data transmission. Blockchain is then applied to protect the distributed learned models. We evaluate the performance of the presented framework by simulations and discuss a range of open research issues that need to be addressed in the future.
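A minimal sketch of the kind of blockchain-coordinated federated round the abstract describes, assuming simple weight-vector models, SHA-256 digests, and a toy hash-linked ledger. The class and function names (`ToyChain`, `federated_round`) and all values are hypothetical, not the paper's implementation.

```python
import hashlib
import json

def model_hash(weights):
    """Deterministic digest of a model's weights, suitable for on-chain storage."""
    return hashlib.sha256(json.dumps(weights, sort_keys=True).encode()).hexdigest()

class ToyChain:
    """A minimal hash-linked ledger standing in for the blockchain network."""
    def __init__(self):
        self.blocks = []

    def append(self, payload):
        prev = self.blocks[-1]["hash"] if self.blocks else "0" * 64
        self.blocks.append({"prev": prev, "payload": payload,
                            "hash": hashlib.sha256((prev + payload).encode()).hexdigest()})

def federated_round(local_models, chain):
    """One round of blockchain-coordinated learning: each CAV publishes the
    digest of its locally trained weights on-chain; the aggregator averages
    only models whose digests match, so tampered uploads are dropped."""
    for w in local_models:
        chain.append(model_hash(w))
    anchored = chain.blocks[-len(local_models):]
    verified = [w for w, blk in zip(local_models, anchored)
                if model_hash(w) == blk["payload"]]
    # element-wise average of the verified weight vectors
    return [sum(ws) / len(verified) for ws in zip(*verified)]

chain = ToyChain()
global_model = federated_round([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]], chain)
print(global_model)  # [3.0, 4.0]
```

Only model digests (a few dozen bytes each) travel to the ledger, which is the communication saving the abstract refers to: raw sensor data never leaves the vehicle.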

Journal ArticleDOI
TL;DR: An emergency communications framework of UAV-enabled SWIPT for IoT networks is established, where the disaster scenarios are classified into three cases, namely, dense areas, wide areas and emergency areas, and a dynamic path planning scheme is established to improve the energy efficiency of the system.
Abstract: Energy-limited devices and connectivity in complicated environments are two main challenges for Internet of Things (IoT)-enabled mobile networks, especially when IoT devices are distributed in a disaster area. Unmanned aerial vehicle (UAV)-enabled simultaneous wireless information and power transfer (SWIPT) is emerging as a promising technique to tackle the above problems. In this article, we establish an emergency communications framework of UAV-enabled SWIPT for IoT networks, where the disaster scenarios are classified into three cases, namely, dense areas, wide areas and emergency areas. First, to realize wireless power transfer for IoT devices in dense areas, a UAV-enabled wireless power transfer system is considered where a UAV acts as a wireless charger and delivers energy to a set of energy receivers. Then, a joint trajectory planning and resource scheduling scheme for a multi-UAVs system is discussed to provide wireless services for IoT devices in wide areas. Furthermore, an intelligent prediction mechanism is designed to predict service requirements (i.e., data transmission and battery charging) of the devices in emergency areas, and accordingly, a dynamic path planning scheme is established to improve the energy efficiency (EE) of the system. Simulation results demonstrate the effectiveness of the above schemes. Finally, potential research directions and challenges are also discussed.
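To make the wireless power transfer step concrete, here is a minimal sketch under a free-space (Friis) path-loss model and a linear energy-harvesting model. The transmit power, carrier frequency, antenna gain, and harvesting efficiency are illustrative assumptions, not the paper's settings.

```python
import math

def received_power_w(tx_power_w, distance_m, freq_hz=2.4e9, gain=1.0):
    """Friis free-space received power for a UAV-to-device power transfer link."""
    wavelength = 3e8 / freq_hz
    return tx_power_w * gain * (wavelength / (4 * math.pi * distance_m)) ** 2

def harvested_energy_j(tx_power_w, distance_m, hover_s, efficiency=0.5):
    """Linear energy-harvesting model: harvested = efficiency x received x time."""
    return efficiency * received_power_w(tx_power_w, distance_m) * hover_s

# Hovering 10 m above a device delivers 25x the energy of hovering at 50 m
# (inverse-square law), which is why trajectory planning matters so much:
close = harvested_energy_j(1.0, 10.0, 60.0)
far = harvested_energy_j(1.0, 50.0, 60.0)
print(close / far)  # 25.0
```

The quadratic distance penalty is the core trade-off behind the joint trajectory planning discussed above: flying closer to each device costs flight time and propulsion energy but sharply raises harvested energy.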

Journal ArticleDOI
TL;DR: A cooperative spectrum sensing network is designed and established to realize wide-area broadband spectrum sensing and obtain big spectrum data, and a novel dual-end machine learning model is proposed to improve the precision and real-time prediction of heterogeneous spectrum states.
Abstract: Although spectrum sensing is commonly used in modern wireless communications to determine spectrum resources, the rapid development of wireless communications has generated massive heterogeneous spectrum data, which has dramatically increased the complexity of spectrum sensing. Machine-learning-assisted spectrum sensing, as an emerging and promising technique, provides an effective way to find available spectrum resources through the analysis of big spectrum data. In this article, a big-data-based intelligent spectrum sensing method is proposed to improve heterogeneous spectrum sensing. Specifically, a cooperative spectrum sensing network is designed and established to realize wide-area broadband spectrum sensing and obtain big spectrum data. The effectiveness of such a network has been verified through detection probability simulation. To improve the reliability of spectrum sensing data, the correlations of the big spectrum data in time domain, frequency domain and space domain have been investigated, and the spectrum similarity has been obtained. Then a novel dual-end machine learning model is proposed to improve the precision and real-time prediction of heterogeneous spectrum states. Furthermore, a big spectrum data clustering mechanism is adopted to facilitate data matching and heterogeneous spectrum prediction. Finally, the comprehensive spectrum state is obtained through heterogeneous spectrum data fusion.
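Cooperative sensing with hard-decision fusion, the first building block described above, can be sketched as follows: energy detection at each node plus OR/AND/majority combining at the fusion center. The threshold and fusion rules here are textbook choices, not the paper's exact design.

```python
def local_decision(samples, threshold):
    """Energy detector at one sensing node: average received energy vs threshold.
    Returns 1 (channel occupied) or 0 (channel idle)."""
    energy = sum(s * s for s in samples) / len(samples)
    return 1 if energy > threshold else 0

def fuse(decisions, rule="majority"):
    """Fusion-center combining of local hard decisions in cooperative sensing."""
    if rule == "or":     # declare occupied if any node says so (protects the primary user)
        return int(any(decisions))
    if rule == "and":    # declare occupied only if all agree (maximizes reuse)
        return int(all(decisions))
    return int(sum(decisions) > len(decisions) / 2)  # majority vote

# Three nodes observe the same channel with different local conditions:
decisions = [local_decision([2.0, -1.5, 0.5], 1.0),   # strong signal  -> 1
             local_decision([0.1, -0.2, 0.1], 1.0),   # deep fade      -> 0
             local_decision([1.8, 1.2, -0.9], 1.0)]   # medium signal  -> 1
print(fuse(decisions))  # 1: majority vote overrules the faded node
```

Spatial diversity is the point: a node in a deep fade is outvoted by its neighbors, which is exactly what the wide-area cooperative network above exploits before the ML stage refines the prediction.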

Journal ArticleDOI
Shanzhi Chen, Shaohui Sun, Guixian Xu, Su Xin, Yuemin Cai 
TL;DR: The future trends of beam-space multiplexing in 6G and beyond are discussed, including massive beamforming for extremely large-scale MIMO (XL-MIMO), low earth orbit (LEO) satellite communication, data-driven intelligentmassive beamforming, and multi-target spatial signal processing.
Abstract: In this article, the new term, namely beam-space multiplexing, is proposed for the former multi-layer beamforming for 4G TD-LTE in 3GPP releases. We provide a systematic overview of beam-space multiplexing from engineering and theoretical perspectives. First, we clarify the fundamental theory of beam-space multiplexing. Specifically, we provide a comprehensive comparison with antenna-space multiplexing in terms of theoretical analysis, channel state information acquisition, and engineering implementation constraints. Then, we summarize the key technologies and 3GPP standardization of beam-space multiplexing in 4G TD-LTE and 5G new radio (NR) in terms of multi-layer beamforming and massive beamforming, respectively. We also provide system-level performance evaluation of beam-space multiplexing schemes, field results from current commercial TD-LTE networks, and field trials of 5G. The practical deployments of 4G TD-LTE and 5G cellular networks demonstrate the superiority of beam-space multiplexing within the limitations of implementation complexity and practical deployment scenarios. Finally, the future trends of beam-space multiplexing in 6G and beyond are discussed, including massive beamforming for extremely large-scale MIMO (XL-MIMO), low earth orbit (LEO) satellite communication, data-driven intelligent massive beamforming, and multi-target spatial signal processing, that is, joint communication and sensing, positioning, and so on.
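The contrast between antenna space and beam space can be illustrated with a uniform linear array (ULA) and a normalized DFT codebook: a channel consisting of a single plane wave concentrates almost all of its energy in one fixed beam, which is what makes per-beam processing attractive. The array size, element spacing, and arrival angle below are illustrative assumptions.

```python
import cmath
import math

def steering_vector(n_ant, angle_deg, spacing=0.5):
    """Antenna-space response of a half-wavelength ULA to a plane wave
    arriving from angle_deg (spacing in wavelengths)."""
    phase = 2 * math.pi * spacing * math.sin(math.radians(angle_deg))
    return [cmath.exp(1j * phase * k) for k in range(n_ant)]

def beamspace(h):
    """Magnitude of the beam-space channel: project the antenna-space vector h
    onto a normalized DFT codebook, one fixed beam per codebook column."""
    n = len(h)
    return [abs(sum(h[k] * cmath.exp(-2j * math.pi * k * b / n) for k in range(n))) / n
            for b in range(n)]

# A plane wave aligned with DFT beam 1 of an 8-antenna ULA (sin(theta) = 2/8):
angle = math.degrees(math.asin(0.25))
gains = beamspace(steering_vector(8, angle))
print(gains.index(max(gains)))  # 1: the energy lands in a single beam
```

In antenna space the same channel excites all eight elements with equal magnitude; in beam space it is (here, exactly) sparse, so channel state acquisition and multiplexing can work per dominant beam rather than per antenna.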

Journal ArticleDOI
TL;DR: The hot zone, safe zone, and UAV-based secondary authorization entity, among others, are introduced to increase the resilience and confidentiality of 5G radio access networks and services.
Abstract: Unmanned aerial vehicles (UAVs) are emerging as enablers for supporting many applications and services, such as precision agriculture, search and rescue, temporary network deployment or coverage extension, and security. UAVs are being considered for integration into emerging 5G networks as aerial users or network support nodes. We propose to leverage UAVs in 5G to assist in the prevention, detection, and recovery of attacks on 5G networks. Specifically, we consider jamming, spoofing, eavesdropping, and the corresponding mitigation mechanisms that are enabled by the versatility of UAVs. We introduce the hot zone, safe zone, and UAV-based secondary authorization entity, among others, to increase the resilience and confidentiality of 5G radio access networks and services. We present simulation results and discuss open issues and research directions, including the need for experimental evaluation and a research platform for prototyping and testing the proposed technologies.
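A hedged sketch of the "hot zone" idea: flag users whose SINR under jamming falls below a service threshold, using a simple distance-based path-loss model. The function name `in_hot_zone` and all powers, distances, and thresholds are hypothetical values for illustration only.

```python
import math

def rx_power(tx_w, dist_m, exponent=2.0):
    """Simple distance-based path-loss model (free space when exponent = 2)."""
    return tx_w / max(dist_m, 1.0) ** exponent

def sinr_db(sig_w, jam_w, noise_w=1e-12):
    """SINR with the jammer's received power treated as interference."""
    return 10 * math.log10(sig_w / (jam_w + noise_w))

def in_hot_zone(bs_dist, jam_dist, bs_tx=10.0, jam_tx=1.0, min_sinr_db=0.0):
    """Flag a user whose SINR under jamming drops below the service
    threshold, i.e., who falls inside the jammer's 'hot zone'."""
    return sinr_db(rx_power(bs_tx, bs_dist), rx_power(jam_tx, jam_dist)) < min_sinr_db

# A cell-edge user 500 m from the base station but 50 m from the jammer is cut off:
print(in_hot_zone(bs_dist=500.0, jam_dist=50.0))              # True
# A UAV relay hovering 40 m away, even at the jammer's power, restores the link:
print(in_hot_zone(bs_dist=40.0, jam_dist=50.0, bs_tx=1.0))    # False
```

Mapping such flags over the coverage area is one way to delineate the hot zone where recovery actions (UAV relaying, rerouting to the safe zone) should be triggered.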

Journal ArticleDOI
TL;DR: In this article, the integration of UAVs with existing MCNs is investigated, and the potential gains of hybrid satellite-UAV-terrestrial networks for maritime coverage are investigated.
Abstract: Current fifth generation (5G) cellular networks mainly focus on the terrestrial scenario. Due to the difficulty of deploying communications infrastructure on the ocean, the performance of existing maritime communication networks (MCNs) is far behind 5G. This problem can be solved by using unmanned aerial vehicles (UAVs) as agile aerial platforms to enable on-demand maritime coverage, as a supplement to marine satellites and shore-based terrestrial base stations (TBSs). In this article, we study the integration of UAVs with existing MCNs, and investigate the potential gains of hybrid satellite-UAV-terrestrial networks for maritime coverage. Unlike the terrestrial scenario, vessels on the ocean keep to sea lanes and are sparsely distributed. This provides new opportunities to ease the scheduling of UAVs. Also, new challenges arise due to the more complicated maritime propagation environment, as well as the mutual interference between UAVs and existing satellites/TBSs. We discuss these issues and show possible solutions considering practical constraints.