
Showing papers in "IEEE Journal on Selected Areas in Communications in 2021"


Journal ArticleDOI
TL;DR: In this article, a comprehensive survey of massive access design for B5G wireless networks is presented, from the perspectives of theory, protocols, techniques, coverage, energy, and security.
Abstract: Massive access, also known as massive connectivity or massive machine-type communication (mMTC), is one of the main use cases of the fifth-generation (5G) and beyond 5G (B5G) wireless networks. A typical application of massive access is the cellular Internet of Things (IoT). Different from conventional human-type communication, massive access aims at realizing efficient and reliable communications for a massive number of IoT devices. Hence, the main characteristics of massive access include low power, massive connectivity, and broad coverage, which require new concepts, theories, and paradigms for the design of next-generation cellular networks. This paper presents a comprehensive survey of massive access design for B5G wireless networks. Specifically, we provide a detailed review of massive access from the perspectives of theory, protocols, techniques, coverage, energy, and security. Furthermore, several future research directions and challenges are identified.

311 citations


Journal ArticleDOI
TL;DR: In this paper, the authors summarize recent contributions in the broad area of AoI and present general methods of AoI evaluation that are applicable to a wide variety of sources and systems, starting from elementary single-server queues and applying these methods to a range of increasingly complex systems, including energy harvesting sensors transmitting over noisy channels, parallel server systems, queueing networks, and various single-hop and multi-hop wireless networks.
Abstract: We summarize recent contributions in the broad area of age of information (AoI). In particular, we describe the current state of the art in the design and optimization of low-latency cyberphysical systems and applications in which sources send time-stamped status updates to interested recipients. These applications desire status updates at the recipients to be as timely as possible; however, this is typically constrained by limited system resources. We describe AoI timeliness metrics and present general methods of AoI evaluation analysis that are applicable to a wide variety of sources and systems. Starting from elementary single-server queues, we apply these AoI methods to a range of increasingly complex systems, including energy harvesting sensors transmitting over noisy channels, parallel server systems, queueing networks, and various single-hop and multi-hop wireless networks. We also explore how update age is related to MMSE methods of sampling, estimation and control of stochastic processes. The paper concludes with a review of efforts to employ age optimization in cyberphysical applications.

213 citations
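As a concrete illustration of the age-of-information metric surveyed above, the sketch below computes the time-average age seen by a monitor for a handful of status updates; the update timestamps and the discretized evaluation are illustrative assumptions, not material from the paper.

```python
import numpy as np

# Hypothetical status updates: (generation time, delivery time) pairs in seconds.
updates = [(0.0, 0.4), (1.0, 1.7), (2.5, 2.9), (4.0, 4.2)]

def time_average_aoi(updates, horizon, grid=10_000):
    """Time-average age of information at the monitor.

    The age grows linearly between deliveries and drops, at each delivery,
    to the elapsed time since the freshest delivered update was generated.
    """
    t = np.linspace(0.0, horizon, grid)
    age = np.empty_like(t)
    for i, ti in enumerate(t):
        delivered = [g for g, d in updates if d <= ti]
        age[i] = ti - max(delivered) if delivered else ti
    return np.trapz(age, t) / horizon

print(f"time-average AoI ≈ {time_average_aoi(updates, horizon=5.0):.3f} s")
```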


Journal ArticleDOI
TL;DR: In this paper, the authors proposed a hybrid beamforming scheme for the multi-hop RIS-assisted communication networks to improve the coverage range at the TeraHertz-band frequencies.
Abstract: Wireless communication in the TeraHertz band (0.1–10 THz) is envisioned as one of the key enabling technologies for the future sixth generation (6G) wireless communication systems scaled up beyond massive multiple input multiple output (Massive-MIMO) technology. However, very high propagation attenuation and molecular absorption at THz frequencies often limit the signal transmission distance and coverage range. Benefiting from the recent breakthrough on reconfigurable intelligent surfaces (RIS) for realizing smart radio propagation environments, we propose a novel hybrid beamforming scheme for multi-hop RIS-assisted communication networks to improve the coverage range at THz-band frequencies. Particularly, multiple passive and controllable RISs are deployed to assist the transmissions between the base station (BS) and multiple single-antenna users. We investigate the joint design of the digital beamforming matrix at the BS and the analog beamforming matrices at the RISs, by leveraging recent advances in deep reinforcement learning (DRL) to combat the propagation loss. To improve the convergence of the proposed DRL-based algorithm, two algorithms are then designed to initialize the digital beamforming and the analog beamforming matrices utilizing the alternating optimization technique. Simulation results show that our proposed scheme is able to improve the coverage range of THz communications by 50% compared with the benchmarks. Furthermore, it is also shown that our proposed DRL-based method is a state-of-the-art method to solve the NP-hard beamforming problem, especially when the signals at RIS-assisted THz communication networks experience multiple hops.

206 citations


Journal ArticleDOI
TL;DR: In this article, a cost-efficient in-home health monitoring system for IoMT is presented, constructed by dividing the network into two sub-networks, i.e., intra-WBANs and beyond-WBANs.
Abstract: The prompt evolution of the Internet of Medical Things (IoMT) promotes pervasive in-home health monitoring networks. However, excessive requirements of patients result in insufficient spectrum resources and communication overload. Mobile Edge Computing (MEC) enabled 5G health monitoring is conceived as a favorable paradigm to tackle such an obstacle. In this paper, we construct a cost-efficient in-home health monitoring system for IoMT by dividing it into two sub-networks, i.e., intra-Wireless Body Area Networks (WBANs) and beyond-WBANs. Highlighting the characteristics of IoMT, the cost of patients depends on medical criticality, Age of Information (AoI), and energy consumption. For intra-WBANs, a cooperative game is formulated to allocate the wireless channel resources. For beyond-WBANs, considering the individual rationality and potential selfishness of patients, a decentralized non-cooperative game is proposed to minimize the system-wide cost in IoMT. We prove that the proposed algorithm can reach a Nash equilibrium. In addition, the upper bound on the algorithm time complexity and the number of patients benefiting from MEC are theoretically derived. Performance evaluations demonstrate the effectiveness of our proposed algorithm with respect to the system-wide cost and the number of patients benefiting from MEC.

202 citations
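The beyond-WBAN stage is cast as a decentralized non-cooperative game that converges to a Nash equilibrium. The sketch below illustrates the general idea with best-response dynamics for a toy offloading game in which the MEC cost grows with congestion; the cost terms are placeholders and not the paper's actual formulation.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 8                                   # number of patients (illustrative)
local_cost = rng.uniform(2.0, 5.0, N)   # cost of processing locally
base_edge_cost = 1.0                    # offloading cost when the edge is idle
congestion = 0.6                        # extra cost per additional offloader

def edge_cost(num_offloaders):
    return base_edge_cost + congestion * num_offloaders

# Best-response dynamics: each patient switches to the cheaper option
# given everyone else's current choice, until no one wants to deviate.
choice = np.zeros(N, dtype=int)         # 0 = process locally, 1 = offload to MEC
for _ in range(100):
    changed = False
    for i in range(N):
        others = choice.sum() - choice[i]
        best = 1 if edge_cost(others + 1) < local_cost[i] else 0
        if best != choice[i]:
            choice[i], changed = best, True
    if not changed:                     # fixed point = Nash equilibrium
        break

print("equilibrium offloading decisions:", choice)
print("patients benefiting from MEC:", int(choice.sum()))
```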


Journal ArticleDOI
TL;DR: This paper designs a deep learning (DL)-enabled semantic communication system for speech signals, named DeepSC-S, developed based on an attention mechanism utilizing a squeeze-and-excitation (SE) network, which outperforms traditional communication systems in both cases in terms of speech signal metrics.
Abstract: Semantic communications could improve the transmission efficiency significantly by exploiting the semantic information. In this paper, we make an effort to recover the transmitted speech signals in semantic communication systems, which minimizes the error at the semantic level rather than the bit or symbol level. Particularly, we design a deep learning (DL)-enabled semantic communication system for speech signals, named DeepSC-S. In order to improve the recovery accuracy of speech signals, especially for the essential information, DeepSC-S is developed based on an attention mechanism utilizing a squeeze-and-excitation (SE) network. The motivation behind the attention mechanism is to identify the essential speech information and assign it higher weights when training the neural network. Moreover, in order to facilitate the proposed DeepSC-S for dynamic channel environments, we find a general model to cope with various channel conditions without retraining. Furthermore, we investigate DeepSC-S in telephone systems as well as multimedia transmission systems to verify the model adaptation in practice. The simulation results demonstrate that our proposed DeepSC-S outperforms traditional communication systems in both cases in terms of speech signal metrics, such as signal-to-distortion ratio and perceptual evaluation of speech quality. Besides, DeepSC-S is more robust to channel variations, especially in the low signal-to-noise ratio (SNR) regime.

195 citations
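DeepSC-S builds its attention mechanism on a squeeze-and-excitation (SE) network. Below is a generic SE block in PyTorch applied to 1-D speech feature maps; the channel count, reduction ratio, and tensor layout are illustrative assumptions rather than the paper's architecture.

```python
import torch
import torch.nn as nn

class SEBlock1d(nn.Module):
    """Squeeze-and-excitation over the channel dimension of (batch, channels, time)."""
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool1d(1)           # squeeze: global average over time
        self.fc = nn.Sequential(                      # excitation: channel-wise gating
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1)
        return x * w                                   # re-weight the "essential" channels

x = torch.randn(4, 64, 128)                            # (batch, channels, frames)
print(SEBlock1d(64)(x).shape)                          # torch.Size([4, 64, 128])
```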


Journal ArticleDOI
TL;DR: From the simulation results, the MADDPG-based method can converge within 200 training episodes, comparable to the single-agent DDPG (SADDPG)-based one, and can achieve higher delay/QoS satisfaction ratios than the SADDPG-based and random schemes.
Abstract: In this paper, we investigate multi-dimensional resource management for unmanned aerial vehicles (UAVs) assisted vehicular networks. To efficiently provide on-demand resource access, the macro eNodeB and UAV, both mounted with multi-access edge computing (MEC) servers, cooperatively make association decisions and allocate proper amounts of resources to vehicles. Since there is no central controller, we formulate the resource allocation at the MEC servers as a distributive optimization problem to maximize the number of offloaded tasks while satisfying their heterogeneous quality-of-service (QoS) requirements, and then solve it with a multi-agent deep deterministic policy gradient (MADDPG)-based method. Through centrally training the MADDPG model offline, the MEC servers, acting as learning agents, then can rapidly make vehicle association and resource allocation decisions during the online execution stage. From our simulation results, the MADDPG-based method can converge within 200 training episodes, comparable to the single-agent DDPG (SADDPG)-based one. Moreover, the proposed MADDPG-based resource management scheme can achieve higher delay/QoS satisfaction ratios than the SADDPG-based and random schemes.

184 citations
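The MADDPG approach above trains centrally and executes in a decentralized manner. The sketch below shows only that structural idea: one actor per MEC server acting on local observations, plus a centralized critic over the joint observation-action pair used during offline training. All network sizes and observation/action dimensions are made up for illustration.

```python
import torch
import torch.nn as nn

class Actor(nn.Module):
    """Per-agent policy: maps a local observation to a resource-allocation action."""
    def __init__(self, obs_dim, act_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(),
                                 nn.Linear(64, act_dim), nn.Sigmoid())  # fractions of resources
    def forward(self, obs):
        return self.net(obs)

class CentralCritic(nn.Module):
    """Centralized critic: scores the joint observation-action of all agents."""
    def __init__(self, n_agents, obs_dim, act_dim):
        super().__init__()
        joint = n_agents * (obs_dim + act_dim)
        self.net = nn.Sequential(nn.Linear(joint, 128), nn.ReLU(), nn.Linear(128, 1))
    def forward(self, all_obs, all_acts):
        return self.net(torch.cat([all_obs.flatten(1), all_acts.flatten(1)], dim=1))

n_agents, obs_dim, act_dim = 2, 10, 4      # e.g., macro eNodeB MEC server and UAV MEC server
actors = [Actor(obs_dim, act_dim) for _ in range(n_agents)]
critic = CentralCritic(n_agents, obs_dim, act_dim)

obs = torch.randn(32, n_agents, obs_dim)   # a batch of local observations
acts = torch.stack([actors[i](obs[:, i]) for i in range(n_agents)], dim=1)
q = critic(obs, acts)                      # used only during centralized offline training
print(acts.shape, q.shape)
```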


Journal ArticleDOI
TL;DR: This paper proposes a lite distributed semantic communication system based on DL, named L-DeepSC, for text transmission with low complexity, where the data transmission from the IoT devices to the cloud/edge works at the semantic level to improve transmission efficiency.
Abstract: The rapid development of deep learning (DL) and widespread applications of the Internet-of-Things (IoT) have made devices smarter than before and enabled them to perform more intelligent tasks. However, it is challenging for any IoT device to train and run DL models independently due to its limited computing capability. In this paper, we consider an IoT network where the cloud/edge platform performs the DL based semantic communication (DeepSC) model training and updating while IoT devices perform data collection and transmission based on the trained model. To make it affordable for IoT devices, we propose a lite distributed semantic communication system based on DL, named L-DeepSC, for text transmission with low complexity, where the data transmission from the IoT devices to the cloud/edge works at the semantic level to improve transmission efficiency. Particularly, by pruning the model redundancy and lowering the weight resolution, L-DeepSC becomes affordable for IoT devices and the bandwidth required for model weight transmission between IoT devices and the cloud/edge is reduced significantly. Through analyzing the effects of fading channels in forward-propagation and back-propagation during the training of L-DeepSC, we develop a channel state information (CSI) aided training process to decrease the effects of fading channels on transmission. Meanwhile, we tailor the semantic constellation to make it implementable on capacity-limited IoT devices. Simulations demonstrate that the proposed L-DeepSC achieves competitive performance compared with traditional methods, especially in the low signal-to-noise ratio (SNR) region. In particular, it can achieve a compression ratio as large as $40\times $ without performance degradation.

180 citations
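L-DeepSC reduces model cost by pruning redundant weights and lowering the weight resolution. A minimal sketch of these two operations on a single weight matrix follows; the sparsity level and bit-width are illustrative, not the paper's settings.

```python
import numpy as np

def prune_by_magnitude(w, sparsity=0.5):
    """Zero out the smallest-magnitude fraction of weights."""
    thresh = np.quantile(np.abs(w), sparsity)
    return np.where(np.abs(w) < thresh, 0.0, w)

def quantize_uniform(w, bits=8):
    """Uniform symmetric quantization of weights to the given bit-width."""
    scale = np.max(np.abs(w)) / (2 ** (bits - 1) - 1)
    return np.round(w / scale) * scale, scale

rng = np.random.default_rng(1)
w = rng.normal(size=(256, 256)).astype(np.float32)

w_pruned = prune_by_magnitude(w, sparsity=0.6)
w_quant, scale = quantize_uniform(w_pruned, bits=8)

kept = np.count_nonzero(w_pruned) / w.size
print(f"kept weights: {kept:.0%}, quantization step: {scale:.4f}")
```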


Journal ArticleDOI
TL;DR: In this paper, a federated edge learning framework is proposed to aggregate local learning updates at the network edge in lieu of users' raw data to accelerate the training process of deep neural networks.
Abstract: The training task in classical machine learning models, such as deep neural networks, is generally implemented at a remote cloud center for centralized learning, which is typically time-consuming and resource-hungry. It also incurs serious privacy issues and long communication latency since a large amount of data is transmitted to the centralized node. To overcome these shortcomings, we consider a newly-emerged framework, namely federated edge learning, to aggregate local learning updates at the network edge in lieu of users’ raw data. Aiming at accelerating the training process, we first define a novel performance evaluation criterion, called learning efficiency. We then formulate a training acceleration optimization problem in the CPU scenario, where each user device is equipped with a CPU. Closed-form expressions for joint batchsize selection and communication resource allocation are developed and some insightful results are highlighted. Further, we extend our learning framework to the GPU scenario. The optimal solution in this scenario is shown to have a similar structure to that of the CPU scenario, suggesting that our proposed algorithm is applicable in more general systems. Finally, extensive experiments validate the theoretical analysis and demonstrate that the proposed algorithm can reduce the training time and improve the learning accuracy simultaneously.

167 citations


Journal ArticleDOI
TL;DR: It is demonstrated that the proposed D3QN based algorithm outperforms the benchmarks, while the NOMA-enhanced RIS system is capable of achieving higher energy efficiency than the orthogonal multiple access (OMA) enabled RIS system.
Abstract: A novel framework is proposed for the deployment and passive beamforming design of a reconfigurable intelligent surface (RIS) with the aid of non-orthogonal multiple access (NOMA) technology. The problem of joint deployment, phase shift design, and power allocation in the multiple-input-single-output (MISO) NOMA network is formulated for maximizing the energy efficiency while considering users' particular data requirements. To tackle this pertinent problem, machine learning approaches are adopted in two steps. Firstly, a novel long short-term memory (LSTM) based echo state network (ESN) algorithm is proposed to predict users' tele-traffic demand by leveraging a real dataset. Secondly, a decaying double deep Q-network (D3QN) based position-acquisition and phase-control algorithm is proposed to solve the joint problem of deployment and design of the RIS. In the proposed algorithm, the base station, which controls the RIS by a controller, acts as an agent. The agent periodically observes the state of the RIS-enhanced system to attain the optimal deployment and design policies of the RIS by learning from its mistakes and the feedback of users. Additionally, it is proved that the proposed D3QN based deployment and design algorithm is capable of converging under mild conditions. Simulation results are provided to illustrate that the proposed LSTM-based ESN algorithm is capable of striking a tradeoff between the prediction accuracy and computational complexity. Finally, it is demonstrated that the proposed D3QN based algorithm outperforms the benchmarks, while the NOMA-enhanced RIS system is capable of achieving higher energy efficiency than the orthogonal multiple access (OMA) enabled RIS system.

157 citations
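The decision core of the framework above is a double deep Q-network. The sketch below shows the double-DQN target computation together with a decaying schedule (here applied to the exploration rate, as one possible reading of the "decaying" element); network sizes, state and action dimensions, and hyperparameters are illustrative.

```python
import torch
import torch.nn as nn

# Illustrative Q-networks; the state could encode channel and deployment information,
# the discrete actions candidate RIS positions and phase configurations.
q_net = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 8))
target_net = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 8))
target_net.load_state_dict(q_net.state_dict())

gamma = 0.95

def double_dqn_target(reward, next_state, done):
    """Double DQN: the online net selects the action, the target net evaluates it."""
    with torch.no_grad():
        a_star = q_net(next_state).argmax(dim=1, keepdim=True)
        q_next = target_net(next_state).gather(1, a_star).squeeze(1)
    return reward + gamma * (1.0 - done) * q_next

def decayed_epsilon(episode, eps0=1.0, eps_min=0.05, decay=0.995):
    """A decaying exploration schedule (illustrative choice of the decayed quantity)."""
    return max(eps_min, eps0 * decay ** episode)

batch = 32
reward, next_state, done = torch.rand(batch), torch.randn(batch, 16), torch.zeros(batch)
print(double_dqn_target(reward, next_state, done).shape, decayed_epsilon(100))
```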


Journal ArticleDOI
TL;DR: Numerical results demonstrate that the energy dissipation of the UAV can be significantly reduced by integrating RISs in UAV-enabled wireless networks and the proposed D-DQN based algorithm is capable of converging with minor constraints.
Abstract: A novel framework is proposed for integrating reconfigurable intelligent surfaces (RIS) in unmanned aerial vehicle (UAV) enabled wireless networks, where an RIS is deployed for enhancing the service quality of the UAV. The non-orthogonal multiple access (NOMA) technique is invoked to further improve the spectrum efficiency of the network, while mobile users (MUs) are considered to roam continuously. The energy consumption minimization problem is formulated by jointly designing the movement of the UAV, phase shifts of the RIS, power allocation policy from the UAV to the MUs, as well as determining the dynamic decoding order. A decaying deep Q-network (D-DQN) based algorithm is proposed for tackling this pertinent problem. In the proposed D-DQN based algorithm, the central controller is selected as an agent for periodically observing the state of the UAV-enabled wireless network and for carrying out actions to adapt to the dynamic environment. In contrast to the conventional DQN algorithm, the decaying learning rate is leveraged in the proposed D-DQN based algorithm for attaining a tradeoff between accelerating the training speed and converging to a local optimum. Numerical results demonstrate that: 1) in contrast to the conventional Q-learning algorithm, which cannot converge when adopted for solving the formulated problem, the proposed D-DQN based algorithm is capable of converging with minor constraints; 2) the energy dissipation of the UAV can be significantly reduced by integrating RISs in UAV-enabled wireless networks; 3) by designing the dynamic decoding order and power allocation policy, the RIS-NOMA case consumes 11.7% less energy than the RIS-OMA case.

147 citations


Journal ArticleDOI
TL;DR: This paper presents the first single-server Byzantine-resilient secure aggregation framework (BREA) for secure federated learning, based on an integrated stochastic quantization, verifiable outlier detection, and secure model aggregation approach to guarantee Byzantine-resilience, privacy, and convergence simultaneously.
Abstract: Secure federated learning is a privacy-preserving framework to improve machine learning models by training over large volumes of data collected by mobile users. This is achieved through an iterative process where, at each iteration, users update a global model using their local datasets. Each user then masks its local update via random keys, and the masked models are aggregated at a central server to compute the global model for the next iteration. As the local updates are protected by random masks, the server cannot observe their true values. This presents a major challenge for the resilience of the model against adversarial (Byzantine) users, who can manipulate the global model by modifying their local updates or datasets. Towards addressing this challenge, this paper presents the first single-server Byzantine-resilient secure aggregation framework (BREA) for secure federated learning. BREA is based on an integrated stochastic quantization, verifiable outlier detection, and secure model aggregation approach to guarantee Byzantine-resilience, privacy, and convergence simultaneously. We provide theoretical convergence and privacy guarantees and characterize the fundamental trade-offs in terms of the network size, user dropouts, and privacy protection. Our experiments demonstrate convergence in the presence of Byzantine users, and comparable accuracy to conventional federated learning benchmarks.
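BREA combines stochastic quantization, verifiable outlier detection, and secure aggregation. The snippet below sketches only the unbiased stochastic quantization of a local update into integers suitable for secret sharing; the clipping range and number of levels are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

def stochastic_quantize(update, levels=64, clip=1.0):
    """Unbiased stochastic quantization of a real-valued model update.

    Each coordinate is rounded up or down to the nearest grid point with a
    probability chosen so that the quantized value equals the input in expectation.
    """
    x = np.clip(update, -clip, clip)
    scaled = (x + clip) / (2 * clip) * (levels - 1)    # map to the grid [0, levels-1]
    low = np.floor(scaled)
    prob_up = scaled - low
    q = low + (np.random.random(x.shape) < prob_up)    # randomized rounding
    return q.astype(np.int64)                           # integers, ready for secret sharing

update = np.random.normal(scale=0.3, size=10)
print(stochastic_quantize(update))
```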

Journal ArticleDOI
TL;DR: In this paper, an indoor 3D spatial channel model for mmWave and sub-THz frequencies based on extensive radio propagation measurements at 28 and 140 GHz conducted in an indoor office environment from 2014 to 2020 is presented.
Abstract: Millimeter-wave (mmWave) and sub-Terahertz (THz) frequencies are expected to play a vital role in 6G wireless systems and beyond due to the vast available bandwidth of many tens of GHz. This paper presents an indoor 3-D spatial statistical channel model for mmWave and sub-THz frequencies based on extensive radio propagation measurements at 28 and 140 GHz conducted in an indoor office environment from 2014 to 2020. Omnidirectional and directional path loss models and channel statistics such as the number of time clusters, cluster delays, and cluster powers were derived from over 15,000 measured power delay profiles. The resulting channel statistics show that the number of time clusters follows a Poisson distribution and the number of subpaths within each cluster follows a composite exponential distribution for both LOS and NLOS environments at 28 and 140 GHz. This paper proposes a unified indoor statistical channel model for mmWave and sub-Terahertz frequencies following the mathematical framework of the previous outdoor NYUSIM channel models. A corresponding indoor channel simulator is developed, which can recreate 3-D omnidirectional, directional, and multiple input multiple output (MIMO) channels for arbitrary mmWave and sub-THz carrier frequency up to 150 GHz, signal bandwidth, and antenna beamwidth. The presented statistical channel model and simulator will guide future air-interface, beamforming, and transceiver designs for 6G and beyond.
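The reported statistics (a Poisson-distributed number of time clusters and a composite-exponential number of subpaths per cluster) lend themselves to a simple generative sketch. The parameter values below are placeholders, not the fitted NYUSIM values, and a discretized exponential stands in for the composite exponential distribution.

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_time_clusters(mean_clusters=3.0, mean_subpaths=5.0, mean_excess_delay_ns=20.0):
    """Draw a toy temporal cluster structure for one channel realization."""
    n_clusters = max(1, rng.poisson(mean_clusters))        # Poisson number of time clusters
    clusters = []
    for _ in range(n_clusters):
        n_sub = 1 + int(rng.exponential(mean_subpaths))    # stand-in for the composite exponential
        delay = rng.exponential(mean_excess_delay_ns)      # cluster excess delay in ns
        power = rng.exponential(1.0)                       # un-normalized cluster power
        clusters.append({"delay_ns": delay, "subpaths": n_sub, "power": power})
    total = sum(c["power"] for c in clusters)
    for c in clusters:
        c["power"] /= total                                # normalize cluster powers
    return clusters

for c in sample_time_clusters():
    print(c)
```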

Journal ArticleDOI
TL;DR: A comprehensive overview of the latest research efforts on integrating UAVs into cellular networks, with an emphasis on how to exploit advanced techniques to meet the diversified service requirements of next-generation wireless systems is provided.
Abstract: Due to the advancements in cellular technologies and the dense deployment of cellular infrastructure, integrating unmanned aerial vehicles (UAVs) into the fifth-generation (5G) and beyond cellular networks is a promising solution to achieve safe UAV operation as well as enabling diversified applications with mission-specific payload data delivery. In particular, 5G networks need to support three typical usage scenarios, namely, enhanced mobile broadband (eMBB), ultra-reliable low-latency communications (URLLC), and massive machine-type communications (mMTC). On the one hand, UAVs can be leveraged as cost-effective aerial platforms to provide ground users with enhanced communication services by exploiting their high cruising altitude and controllable maneuverability in three-dimensional (3D) space. On the other hand, providing such communication services simultaneously for both UAV and ground users poses new challenges due to the need for ubiquitous 3D signal coverage as well as the strong air-ground network interference. Besides the requirement of high-performance wireless communications, the ability to support effective and efficient sensing as well as network intelligence is also essential for 5G-and-beyond 3D heterogeneous wireless networks with coexisting aerial and ground users. In this paper, we provide a comprehensive overview of the latest research efforts on integrating UAVs into cellular networks, with an emphasis on how to exploit advanced techniques (e.g., intelligent reflecting surface, short packet transmission, energy harvesting, joint communication and radar sensing, and edge intelligence) to meet the diversified service requirements of next-generation wireless systems. Moreover, we highlight important directions for further investigation in future work.

Journal ArticleDOI
TL;DR: The proposed initial access algorithm and pilot assignment schemes outperform their corresponding benchmarks, P-LSFD achieves scalability with a negligible performance loss compared to the conventional optimal large-scale fading decoding, and scalable fractional power control provides a controllable trade-off between user fairness and the average SE.
Abstract: How can the demands for an increasing number of users, higher data rates, and stringent quality-of-service (QoS) be met in beyond fifth-generation (B5G) networks? Cell-free massive multiple-input multiple-output (MIMO) is considered a promising solution, in which many wireless access points cooperate to jointly serve the users by exploiting coherent signal processing. However, there are still many unsolved practical issues in cell-free massive MIMO systems, of which scalable massive access implementation is one of the most vital. In this paper, we propose a new framework for structured massive access in cell-free massive MIMO systems, which comprises one initial access algorithm, a partial large-scale fading decoding (P-LSFD) strategy, two pilot assignment schemes, and one fractional power control policy. New closed-form spectral efficiency (SE) expressions with maximum ratio (MR) combining are derived. The simulation results show that our proposed framework provides high SE when using local partial minimum mean-square error (LP-MMSE) and MR combining. Specifically, the proposed initial access algorithm and pilot assignment schemes outperform their corresponding benchmarks, P-LSFD achieves scalability with a negligible performance loss compared to the conventional optimal large-scale fading decoding (LSFD), and scalable fractional power control provides a controllable trade-off between user fairness and the average SE.
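The scalable fractional power control in this framework trades user fairness against average SE. The sketch below uses the generic fractional form p_k ∝ β_k^(−θ) based on large-scale fading coefficients; the exponent, normalization, and cap are illustrative and may differ from the paper's exact policy.

```python
import numpy as np

def fractional_power_control(beta, p_max=0.1, theta=0.5):
    """Uplink fractional power control: p_k ∝ beta_k^(-theta), capped at p_max.

    theta = 0 gives full power for everyone (favors average SE); theta = 1 fully
    inverts the large-scale fading (favors fairness); intermediate values trade off.
    """
    p = beta ** (-theta)
    return p_max * p / p.max()          # scale so the largest transmit power equals p_max

beta = 10 ** (np.array([-80.0, -95.0, -110.0]) / 10)   # large-scale fading (linear scale)
for theta in (0.0, 0.5, 1.0):
    print(theta, fractional_power_control(beta, theta=theta))
```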

Journal ArticleDOI
TL;DR: This work studies the image retrieval problem at the wireless edge, where an edge device captures an image, which is then used to retrieve similar images from an edge server, and proposes two alternative schemes based on digital and analog communications.
Abstract: We study the image retrieval problem at the wireless edge, where an edge device captures an image, which is then used to retrieve similar images from an edge server. These can be images of the same person or a vehicle taken from other cameras at different times and locations. Our goal is to maximize the accuracy of the retrieval task under power and bandwidth constraints over the wireless link. Due to the stringent delay constraint of the underlying application, sending the whole image at a sufficient quality is not possible. We propose two alternative schemes based on digital and analog communications, respectively. In the digital approach, we first propose a deep neural network (DNN) aided retrieval-oriented image compression scheme, whose output bit sequence is transmitted over the channel using conventional channel codes. In the analog joint source and channel coding (JSCC) approach, the feature vectors are directly mapped into channel symbols. We evaluate both schemes on image based re-identification (re-ID) tasks under different channel conditions, including both static and fading channels. We show that the JSCC scheme significantly increases the end-to-end accuracy, speeds up the encoding process, and provides graceful degradation with channel conditions. The proposed architecture is evaluated through extensive simulations on different datasets and channel conditions, as well as through ablation studies.
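In the analog JSCC branch, feature vectors are mapped directly to channel symbols under a power constraint. A minimal sketch of such a mapping over an AWGN link follows; the feature dimension, pairing into complex symbols, and SNR are illustrative choices, not the paper's design.

```python
import numpy as np

rng = np.random.default_rng(7)

def jscc_transmit(features, snr_db=10.0):
    """Map a real feature vector to power-normalized complex channel symbols,
    pass them through an AWGN channel, and return the noisy received symbols."""
    k = features.size // 2
    pairs = features[:2 * k].reshape(2, k)
    x = pairs[0] + 1j * pairs[1]                          # pair real features into complex symbols
    x *= np.sqrt(k / np.sum(np.abs(x) ** 2))              # enforce unit average power
    noise_var = 10 ** (-snr_db / 10)
    noise = np.sqrt(noise_var / 2) * (rng.normal(size=k) + 1j * rng.normal(size=k))
    return x + noise

features = rng.normal(size=128)                           # e.g., a re-ID feature vector
y = jscc_transmit(features, snr_db=5.0)
print(y.shape, np.mean(np.abs(y) ** 2))
```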

Journal ArticleDOI
TL;DR: Numerical results validate the analysis and show that the proposed scheme can significantly improve the energy efficiency of NOMA-enabled MEC in IoT networks compared to the existing baselines.
Abstract: Integrating mobile edge computing (MEC) into the Internet of Things (IoT) enables IoT devices with limited computation capabilities and energy to offload their computation-intensive and delay-sensitive tasks to the network edge, thereby providing high quality of service to the devices. In this article, we apply the non-orthogonal multiple access (NOMA) technique to enable massive connectivity and investigate how it can be exploited to achieve energy-efficient MEC in IoT networks. In order to maximize the energy efficiency for offloading, while simultaneously satisfying the maximum tolerable delay constraints of IoT devices, a joint radio and computation resource allocation problem is formulated, which takes both intra- and inter-cell interference into consideration. To tackle this intractable mixed integer non-convex problem, we first decouple it into separate radio and computation resource allocation problems. Then, the radio resource allocation problem is further decomposed into a subchannel allocation problem and a power allocation problem, which can be solved by matching and sequential convex programming algorithms, respectively. Based on the obtained radio resource allocation solution, the computation resource allocation problem can be solved by utilizing the Knapsack method. Numerical results validate our analysis and show that our proposed scheme can significantly improve the energy efficiency of NOMA-enabled MEC in IoT networks compared to the existing baselines.

Journal ArticleDOI
TL;DR: In this paper, a deep neural network (DNN) was used to optimize both the beamforming at the BS and the reflective coefficients at the RIS based on a system objective.
Abstract: Intelligent reflecting surface (IRS), which consists of a large number of tunable reflective elements, is capable of enhancing the wireless propagation environment in a cellular network by intelligently reflecting the electromagnetic waves from the base-station (BS) toward the users. The optimal tuning of the phase shifters at the IRS is, however, a challenging problem, because due to the passive nature of reflective elements, it is difficult to directly measure the channels between the IRS, the BS, and the users. Instead of following the traditional paradigm of first estimating the channels and then optimizing the system parameters, this paper advocates a machine learning approach capable of directly optimizing both the beamformers at the BS and the reflective coefficients at the IRS based on a system objective. This is achieved by using a deep neural network to parameterize the mapping from the received pilots (plus any additional information, such as the user locations) to an optimized system configuration, and by adopting a permutation invariant/equivariant graph neural network (GNN) architecture to capture the interactions among the different users in the cellular network. Simulation results show that the proposed implicit channel estimation based approach is generalizable, can be interpreted, and can efficiently learn to maximize a sum-rate or minimum-rate objective from far fewer pilots than the traditional explicit channel estimation based approaches.
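The GNN above is permutation equivariant across users: every user node is updated by the same function of its own embedding and an aggregate of the other users' embeddings. A minimal sketch of one such layer follows; the mean aggregator and dimensions are illustrative, not the paper's architecture.

```python
import torch
import torch.nn as nn

class UserEquivariantLayer(nn.Module):
    """One permutation-equivariant update over a set of user embeddings."""
    def __init__(self, dim):
        super().__init__()
        self.self_mlp = nn.Linear(dim, dim)
        self.agg_mlp = nn.Linear(dim, dim)

    def forward(self, h):                      # h: (batch, n_users, dim)
        n = h.shape[1]
        # mean over the *other* users; the same function is applied at every node
        others = (h.sum(dim=1, keepdim=True) - h) / (n - 1)
        return torch.relu(self.self_mlp(h) + self.agg_mlp(others))

h = torch.randn(8, 3, 32)                      # per-user embeddings of received pilots/locations
out = UserEquivariantLayer(32)(h)
print(out.shape)                               # permuting users permutes the outputs identically
```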

Journal ArticleDOI
TL;DR: In this article, the effect of large-scale deployment of RISs on the performance of cellular networks was studied using tools from stochastic geometry to derive the probability that a typical mobile user associates with a BS using an RIS.
Abstract: One of the promising technologies for the next generation of wireless networks is the reconfigurable intelligent surface (RIS). This technology provides planar surfaces with the capability to manipulate the reflected waves of impinging signals, which leads to a more controllable wireless environment. One potential use case of such technology is providing indirect line-of-sight (LoS) links between mobile users and base stations (BSs) which do not have direct LoS channels. Objects that act as blockages for the communication links, such as buildings or trees, can be equipped with RISs to enhance the coverage probability of the cellular network by providing extra indirect LoS links. In this article, we use tools from stochastic geometry to study the effect of large-scale deployment of RISs on the performance of cellular networks. In particular, we model the blockages using the line Boolean model. For this setup, we study how equipping a subset of the blockages with RISs enhances the performance of the cellular network. We first derive the ratio of the blind-spots to the total area. Next, we derive the probability that a typical mobile user associates with a BS using an RIS. Finally, we derive the probability distribution of the path-loss between the typical user and its associated BS. We draw multiple useful system-level insights from the proposed analysis. For instance, we show that the deployment of RISs highly improves the coverage regions of the BSs. Furthermore, we show that to ensure that the ratio of blind-spots to the total area is below $10^{-5}$ , the required density of RISs increases from just 6 RISs/km2 when the density of the blockages is 300 blockages/km2 to 490 RISs/km2 when the density of the blockages is 700 blockages/km2.
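The paper derives the blind-spot ratio in closed form with stochastic geometry. As a rough, heavily simplified Monte Carlo sanity check of the same quantity (random finite segments instead of the paper's line Boolean model, and placeholder densities, lengths, and trial counts), one could do the following:

```python
import numpy as np

rng = np.random.default_rng(3)

def segments_intersect(p1, p2, p3, p4):
    """True if 2-D segments p1-p2 and p3-p4 properly intersect (orientation test)."""
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    d1, d2 = cross(p3, p4, p1), cross(p3, p4, p2)
    d3, d4 = cross(p1, p2, p3), cross(p1, p2, p4)
    return d1 * d2 < 0 and d3 * d4 < 0

def blind_spot_probability(bs_per_km2, blk_per_km2, blk_len_km=0.05, side_km=1.0, trials=200):
    """Fraction of realizations in which a user at the origin has no LoS base station."""
    area = side_km ** 2
    user = np.zeros(2)
    blind = 0
    for _ in range(trials):
        bs = rng.uniform(-side_km / 2, side_km / 2, (rng.poisson(bs_per_km2 * area), 2))
        centers = rng.uniform(-side_km / 2, side_km / 2, (rng.poisson(blk_per_km2 * area), 2))
        angles = rng.uniform(0, np.pi, len(centers))
        half = 0.5 * blk_len_km * np.stack([np.cos(angles), np.sin(angles)], axis=1)
        walls = list(zip(centers - half, centers + half))
        has_los = any(
            not any(segments_intersect(user, b, w1, w2) for w1, w2 in walls)
            for b in bs
        )
        blind += not has_los
    return blind / trials

print("blind-spot probability ≈", blind_spot_probability(bs_per_km2=5, blk_per_km2=300))
```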

Journal ArticleDOI
TL;DR: An electroencephalogram (EEG)-based remote pathology detection system that uses a deep convolutional network consisting of 1D and 2D convolutions and a fusion network is proposed, and its performance is found to be comparable with the performance obtained using only a local server.
Abstract: An electroencephalogram (EEG)-based remote pathology detection system is proposed in this study. The system uses a deep convolutional network consisting of 1D and 2D convolutions. Features from different convolutional layers are fused using a fusion network. Various types of networks are investigated; the types include a multilayer perceptron (MLP) with a varying number of hidden layers, and an autoencoder. Experiments are done using a publicly available EEG signal database that contains two classes: normal and abnormal. The experimental results demonstrate that the proposed system achieves greater than 89% accuracy using the convolutional network followed by the MLP with two hidden layers. The proposed system is also evaluated in a cloud-based framework, and its performance is found to be comparable with the performance obtained using only a local server.

Journal ArticleDOI
TL;DR: In this paper, adaptive power allocation for distributed gradient descent in wireless FL with the aim of minimizing the learning optimality gap under privacy and power constraints is studied. And the importance of dynamic PA and the potential benefits of NOMA versus OMA are demonstrated through extensive simulations.
Abstract: Federated Learning (FL) refers to distributed protocols that avoid direct raw data exchange among the participating devices while training for a common learning task. This way, FL can potentially reduce the information on the local data sets that is leaked via communications. In order to provide formal privacy guarantees, however, it is generally necessary to put in place additional masking mechanisms. When FL is implemented in wireless systems via uncoded transmission, the channel noise can directly act as a privacy-inducing mechanism. This paper demonstrates that, as long as the privacy constraint level, measured via differential privacy (DP), is below a threshold that decreases with the signal-to-noise ratio (SNR), uncoded transmission achieves privacy “for free”, i.e., without affecting the learning performance. More generally, this work studies adaptive power allocation (PA) for distributed gradient descent in wireless FL with the aim of minimizing the learning optimality gap under privacy and power constraints. Both orthogonal multiple access (OMA) and non-orthogonal multiple access (NOMA) transmission with “over-the-air-computing” are studied, and solutions are obtained in closed form for an offline optimization setting. Furthermore, heuristic online methods are proposed that leverage iterative one-step-ahead optimization. The importance of dynamic PA and the potential benefits of NOMA versus OMA are demonstrated through extensive simulations.
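The "privacy for free" observation rests on the channel noise itself acting as a Gaussian mechanism. The back-of-the-envelope sketch below, which is not the paper's analysis, estimates the per-round privacy level obtained from channel noise alone as a function of SNR and checks whether extra masking would be needed for a hypothetical requirement.

```python
import numpy as np

def epsilon_from_channel(snr_db, delta=1e-5):
    """Rough per-round (epsilon, delta)-DP obtained "for free" from channel noise.

    Coarse model (an assumption, not the paper's derivation): a clipped update with
    L2 sensitivity Delta is sent uncoded, so the receiver effectively sees Gaussian
    noise with standard deviation sigma ≈ Delta / sqrt(SNR). The classical Gaussian
    mechanism then gives epsilon ≈ Delta * sqrt(2 ln(1.25/delta)) / sigma.
    """
    snr = 10 ** (snr_db / 10)
    return np.sqrt(2 * np.log(1.25 / delta) * snr)

eps_required = 10.0            # hypothetical per-round privacy requirement
for snr_db in (-10, 0, 10, 20):
    eps_free = epsilon_from_channel(snr_db)
    needs_masking = eps_free > eps_required   # channel noise alone is too weak
    print(f"SNR={snr_db:>3} dB: eps_free≈{eps_free:7.2f}, extra masking needed: {needs_masking}")
```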

Journal ArticleDOI
TL;DR: In this paper, the authors provide a comprehensive study of how distributed learning can be efficiently and effectively deployed over wireless edge networks, including federated learning, federated distillation, distributed inference, and multi-agent reinforcement learning.
Abstract: The next generation of wireless networks will enable many machine learning (ML) tools and applications to efficiently analyze various types of data collected by edge devices for inference, autonomy, and decision-making purposes. However, due to resource constraints, delay limitations, and privacy challenges, edge devices cannot offload their entire collected datasets to a cloud server for centrally training their ML models or inference purposes. To overcome these challenges, distributed learning and inference techniques have been proposed as a means to enable edge devices to collaboratively train ML models without raw data exchanges, thus reducing the communication overhead and latency as well as improving data privacy. However, deploying distributed learning over wireless networks faces several challenges including the uncertain wireless environment (e.g., dynamic channel and interference), limited wireless resources (e.g., transmit power and radio spectrum), and hardware resources (e.g., computational power). This paper provides a comprehensive study of how distributed learning can be efficiently and effectively deployed over wireless edge networks. We present a detailed overview of several emerging distributed learning paradigms, including federated learning, federated distillation, distributed inference, and multi-agent reinforcement learning. For each learning framework, we first introduce the motivation for deploying it over wireless networks. Then, we present a detailed literature review on the use of communication techniques for its efficient deployment. We then introduce an illustrative example to show how to optimize wireless networks to improve the framework's performance. Finally, we introduce future research opportunities. In a nutshell, this paper provides a holistic set of guidelines on how to deploy a broad range of distributed learning frameworks over real-world wireless communication networks.

Journal ArticleDOI
TL;DR: Simulation results demonstrated that the proposed CVNN-based SEI method is superior to the existing DL-based methods in both identification performance and convergence speed, and the identification accuracy of CVNN can reach up to nearly 100% at high signal-to-noise ratios (SNRs).
Abstract: Specific emitter identification (SEI) is a promising technology to discriminate between individual emitters and enhance the security of various wireless communication systems. SEI is generally based on radio frequency fingerprinting (RFF) originating from imperfections of the emitter's hardware, which are difficult to forge. SEI is generally modeled as a classification task, and deep learning (DL), which exhibits powerful classification capability, has been introduced into SEI for better identification performance. In recent years, a novel DL model, named the complex-valued neural network (CVNN), has been applied to SEI methods for directly processing the complex baseband signal and improving identification performance, but it also brings high model complexity and large model size, which is not conducive to the deployment of SEI, especially in Internet-of-Things (IoT) scenarios. Thus, we propose an efficient SEI method based on CVNN and network compression; the former is for performance improvement, while the latter is to reduce model complexity and size while ensuring satisfactory identification performance. Simulation results demonstrate that our proposed CVNN-based SEI method is superior to the existing DL-based methods in both identification performance and convergence speed, and the identification accuracy of CVNN can reach nearly 100% at high signal-to-noise ratios (SNRs). In addition, SlimCVNN has only 10%–30% of the model size of the basic CVNN, and its computational complexity declines to different degrees at different SNRs; there is almost no performance gap between SlimCVNN and CVNN. These results demonstrate the feasibility and potential of CVNN and model compression.

Journal ArticleDOI
TL;DR: To tackle the formulated mixed-integer non-convex optimization problem with coupled variables, a block coordinate descent (BCD)-based iterative algorithm is developed and is demonstrated to be able to obtain a stationary point of the original problem with polynomial time complexity.
Abstract: Intelligent reflecting surface (IRS) enhanced multi-unmanned aerial vehicle (UAV) non-orthogonal multiple access (NOMA) networks are investigated. A new transmission framework is proposed, where multiple UAV-mounted base stations employ NOMA to serve multiple groups of ground users with the aid of an IRS. The three-dimensional (3D) placement and transmit power of UAVs, the reflection matrix of the IRS, and the NOMA decoding orders among users are jointly optimized for maximization of the sum rate of considered networks. To tackle the formulated mixed-integer non-convex optimization problem with coupled variables, a block coordinate descent (BCD)-based iterative algorithm is developed. Specifically, the original problem is decomposed into three subproblems, which are alternately solved by exploiting the penalty-based method and the successive convex approximation technique. The proposed BCD-based algorithm is demonstrated to be able to obtain a stationary point of the original problem with polynomial time complexity. Numerical results show that: 1) the proposed NOMA-IRS scheme for multi-UAV networks achieves a higher sum rate compared to the benchmark schemes, i.e., orthogonal multiple access (OMA)-IRS and NOMA without IRS; 2) the use of IRS is capable of providing performance gain for multi-UAV networks by both enhancing channel qualities of UAVs to their served users and mitigating the inter-UAV interference; and 3) optimizing the UAV placement can make the sum rate gain brought by NOMA more distinct due to the flexible decoding order design.

Journal ArticleDOI
TL;DR: In this paper, the authors provide a vision for scalable and trustworthy edge AI systems with integrated design of wireless communication strategies and decentralized machine learning models, as well as a holistic end-to-end system architecture to support edge AI.
Abstract: The thriving of artificial intelligence (AI) applications is driving the further evolution of wireless networks. It has been envisioned that 6G will be transformative and will revolutionize the evolution of wireless from “connected things” to “connected intelligence”. However, state-of-the-art deep learning and big data analytics based AI systems require tremendous computation and communication resources, causing significant latency, energy consumption, network congestion, and privacy leakage in both the training and inference processes. By embedding model training and inference capabilities into the network edge, edge AI stands out as a disruptive technology for 6G to seamlessly integrate sensing, communication, computation, and intelligence, thereby improving the efficiency, effectiveness, privacy, and security of 6G networks. In this paper, we provide our vision for scalable and trustworthy edge AI systems with integrated design of wireless communication strategies and decentralized machine learning models. New design principles of wireless networks, service-driven resource allocation optimization methods, as well as a holistic end-to-end system architecture to support edge AI will be described. Standardization, software and hardware platforms, and application scenarios are also discussed to facilitate the industrialization and commercialization of edge AI systems.

Journal ArticleDOI
TL;DR: This approach leverages the nearby edge devices to create the decoupled blocks in blockchain so as to securely transmit the healthcare data from sensors to the edge nodes and transmit and store the data at the cloud using the incremental tensor-based scheme.
Abstract: The in-house health monitoring sensors form a large Internet of Things (IoT) network that continuously monitors and sends the data to the nearby devices or server. However, the connectivity of these IoT-based sensors with different entities leads to security loopholes wherein the adversary can exploit the vulnerabilities due to the openness of the data. This is a major concern, especially in the healthcare sector, where a change in data values from sensors can change the course of diagnosis, which can cause severe health issues. Therefore, in order to prevent data tampering and preserve the privacy of patients, we present a decoupled blockchain-based approach in the edge-envisioned ecosystem. This approach leverages the nearby edge devices to create the decoupled blocks in the blockchain so as to securely transmit the healthcare data from sensors to the edge nodes. The edge nodes then transmit and store the data at the cloud using the incremental tensor-based scheme. This helps to reduce the data duplication of the huge amount of data transmitted in the large IoT healthcare network. The results show the effectiveness of the proposed approach in terms of the block preparation time, header generation time, tensor reduction ratio, and approximation error.

Journal ArticleDOI
TL;DR: In this article, the key factors that drove the adoption and growth of IoT-based in-home remote monitoring are reviewed, the latest system architectures and key building blocks are presented, and a future outlook and recommendations for in-home remote monitoring applications going forward are discussed.
Abstract: The Internet of Things has been one of the catalysts in revolutionizing conventional healthcare services. With a growing population, traditional healthcare systems are reaching their capacity to provide sufficient and high-quality services. The world is facing an aging population and the inherent need for assisted-living environments for senior citizens. There is also a commitment by national healthcare organizations to increase support for personalized, integrated care to prevent and manage chronic conditions. Many applications related to in-home health monitoring have been introduced over the last few decades, thanks to advances in mobile and Internet of Things technologies and services. Such advances include improvements in optimized network architecture, indoor network coverage, increased device reliability and performance, ultra-low device cost, low device power consumption, and improved device and network security and privacy. Current studies of in-home health monitoring systems have reported many benefits, including improved safety, quality of life, and reductions in hospitalization and cost. However, many challenges of such a paradigm shift still exist and need to be addressed to support the scale-up and wide uptake of such systems, including technology acceptance and adoption by patients, healthcare providers, and policymakers. The aim of this paper is threefold: first, to review the key factors that drove the adoption and growth of IoT-based in-home remote monitoring; second, to present the latest advances in IoT-based in-home remote monitoring system architecture and key building blocks; and third, to discuss the future outlook and our recommendations for in-home remote monitoring applications going forward.

Journal ArticleDOI
TL;DR: This work introduces a low-complexity beam squint mitigation scheme based on true-time-delay and proposes a novel variant of the popular orthogonal matching pursuit (OMP) algorithm to accurately estimate the channel with low training overhead.
Abstract: Terahertz (THz) communication is widely considered as a key enabler for future 6G wireless systems. However, THz links are subject to high propagation losses and inter-symbol interference due to the frequency selectivity of the channel. Massive multiple-input multiple-output (MIMO) along with orthogonal frequency division multiplexing (OFDM) can be used to deal with these problems. Nevertheless, when the propagation delay across the base station (BS) antenna array exceeds the symbol period, the spatial response of the BS array varies over the OFDM subcarriers. This phenomenon, known as beam squint, renders narrowband combining approaches ineffective. Additionally, channel estimation becomes challenging in the absence of combining gain during the training stage. In this work, we address the channel estimation and hybrid combining problems in wideband THz massive MIMO with uniform planar arrays. Specifically, we first introduce a low-complexity beam squint mitigation scheme based on true-time-delay. Next, we propose a novel variant of the popular orthogonal matching pursuit (OMP) algorithm to accurately estimate the channel with low training overhead. Our channel estimation and hybrid combining schemes are analyzed both theoretically and numerically. Moreover, the proposed schemes are extended to the multi-antenna user case. Simulation results are provided showcasing the performance gains offered by our design compared to standard narrowband combining and OMP-based channel estimation.
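The proposed estimator is a variant of orthogonal matching pursuit. For reference, a plain OMP recovering a sparse angular-domain channel from compressive pilot measurements is sketched below; the paper's dictionary construction, true-time-delay handling, and beam-squint corrections are not reproduced, and the measurement matrix here is a random placeholder.

```python
import numpy as np

def omp(y, A, sparsity):
    """Plain orthogonal matching pursuit: solve y ≈ A x with x sparse."""
    residual, support = y.copy(), []
    for _ in range(sparsity):
        corr = np.abs(A.conj().T @ residual)
        corr[support] = 0                               # do not pick the same atom twice
        support.append(int(np.argmax(corr)))
        As = A[:, support]
        x_s, *_ = np.linalg.lstsq(As, y, rcond=None)    # least-squares re-estimate on the support
        residual = y - As @ x_s
    x = np.zeros(A.shape[1], dtype=complex)
    x[support] = x_s
    return x

rng = np.random.default_rng(0)
n_grid, n_meas, k = 128, 40, 3                          # angular grid, pilot measurements, sparsity
A = (rng.normal(size=(n_meas, n_grid)) + 1j * rng.normal(size=(n_meas, n_grid))) / np.sqrt(2 * n_meas)
x_true = np.zeros(n_grid, dtype=complex)
x_true[rng.choice(n_grid, k, replace=False)] = rng.normal(size=k) + 1j * rng.normal(size=k)
y = A @ x_true + 0.01 * (rng.normal(size=n_meas) + 1j * rng.normal(size=n_meas))
x_hat = omp(y, A, sparsity=k)
print("recovered support:", sorted(np.flatnonzero(np.abs(x_hat) > 0.1)))
print("true support:     ", sorted(np.flatnonzero(x_true)))
```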

Journal ArticleDOI
TL;DR: In this article, the authors proposed a reinforcement on federated learning (RoF) scheme, based on deep multi-agent reinforcement learning, to solve the problem of joint decision of device selection and computing and spectrum resource allocation in distributed industrial IoT networks.
Abstract: In this paper, we aim to make the best joint decision of device selection and computing and spectrum resource allocation for optimizing federated learning (FL) performance in distributed industrial Internet of Things (IIoT) networks. To implement efficient FL over geographically dispersed data, we introduce a three-layer collaborative FL architecture to support deep neural network (DNN) training. Specifically, using the data dispersed in IIoT devices, the industrial gateways locally train the DNN model, and the local models can be aggregated by their associated edge servers every FL epoch or by a cloud server every few FL epochs to obtain the global model. To optimally select participating devices and allocate computing and spectrum resources for training and transmitting the model parameters, we formulate a stochastic optimization problem with the objective of minimizing the FL evaluation loss while satisfying delay and long-term energy consumption requirements. Since the objective function of the FL evaluation loss is implicit and the energy consumption is temporally correlated, it is difficult to solve the problem via traditional optimization methods. Thus, we propose a “Reinforcement on Federated” (RoF) scheme, based on deep multi-agent reinforcement learning, to solve the problem. Specifically, the RoF scheme is executed in a decentralized manner at the edge servers, which can cooperatively make the optimal device selection and resource allocation decisions. Moreover, a device refinement subroutine is embedded into the RoF scheme to accelerate convergence while effectively saving on-device energy. Simulation results demonstrate that the RoF scheme can facilitate efficient FL and achieve better performance compared with state-of-the-art benchmarks.

Journal ArticleDOI
TL;DR: In this article, the authors proposed a fast-convergent federated learning algorithm, called ''mathsf {FOLB}$'', which performs intelligent sampling of devices in each round of model training to optimize the expected convergence speed.
Abstract: Federated learning has emerged recently as a promising solution for distributing machine learning tasks through modern networks of mobile devices. Recent studies have obtained lower bounds on the expected decrease in model loss that is achieved through each round of federated learning. However, convergence generally requires a large number of communication rounds, which induces delay in model training and is costly in terms of network resources. In this paper, we propose a fast-convergent federated learning algorithm, called $\mathsf {FOLB}$, which performs intelligent sampling of devices in each round of model training to optimize the expected convergence speed. We first theoretically characterize a lower bound on the improvement that can be obtained in each round if devices are selected according to the expected improvement their local models will provide to the current global model. Then, we show that $\mathsf {FOLB}$ obtains this bound through uniform sampling by weighting device updates according to their gradient information. $\mathsf {FOLB}$ is able to handle both communication and computation heterogeneity of devices by adapting the aggregations according to estimates of devices' capabilities of contributing to the updates. We evaluate $\mathsf {FOLB}$ in comparison with existing federated learning algorithms and experimentally show its improvement in trained model accuracy, convergence speed, and/or model stability across various machine learning tasks and datasets.
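$\mathsf {FOLB}$ weights device updates according to their gradient information when aggregating uniformly sampled devices. The toy sketch below uses an alignment-based weighting as a stand-in for the bound-derived weights in the paper; it only illustrates the aggregation pattern, not the actual algorithm.

```python
import numpy as np

rng = np.random.default_rng(5)

def weighted_aggregate(global_model, local_updates):
    """Aggregate sampled devices' updates with weights based on gradient information.

    Here each update's weight is its (clipped) alignment with the average update
    direction, an illustrative stand-in for FOLB's bound-derived weighting.
    """
    U = np.stack(local_updates)                     # (num_devices, dim)
    mean_dir = U.mean(axis=0)
    align = np.clip(U @ mean_dir, a_min=0.0, a_max=None)
    w = align / align.sum() if align.sum() > 0 else np.full(len(U), 1 / len(U))
    return global_model + (w[:, None] * U).sum(axis=0), w

dim, n_devices = 20, 8
global_model = np.zeros(dim)
updates = [rng.normal(scale=0.1, size=dim) for _ in range(n_devices)]
new_model, weights = weighted_aggregate(global_model, updates)
print(np.round(weights, 3))
```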

Journal ArticleDOI
Wen Wu, Nan Chen, Conghao Zhou, Mushu Li, Xuemin Shen, Weihua Zhuang, Xu Li
TL;DR: This paper proposes a two-layer constrained RL algorithm, named RAWS, which effectively reduces the system cost while satisfying QoS requirements with a high probability, as compared with benchmarks.
Abstract: In this paper, we investigate a radio access network (RAN) slicing problem for Internet of vehicles (IoV) services with different quality of service (QoS) requirements, in which multiple logically-isolated slices are constructed on a common roadside network infrastructure. A dynamic RAN slicing framework is presented to dynamically allocate radio spectrum and computing resource, and distribute computation workloads for the slices. To obtain an optimal RAN slicing policy for accommodating the spatial-temporal dynamics of vehicle traffic density, we first formulate a constrained RAN slicing problem with the objective to minimize long-term system cost. This problem cannot be directly solved by traditional reinforcement learning (RL) algorithms due to complicated coupled constraints among decisions. Therefore, we decouple the problem into a resource allocation subproblem and a workload distribution subproblem, and propose a two-layer constrained RL algorithm, named Resource Allocation and Workload diStribution (RAWS) to solve them. Specifically, an outer layer first makes the resource allocation decision via an RL algorithm, and then an inner layer makes the workload distribution decision via an optimization subroutine. Extensive trace-driven simulations show that the RAWS effectively reduces the system cost while satisfying QoS requirements with a high probability, as compared with benchmarks.