
Showing papers by Mehdi Bennis published in 2020


Journal ArticleDOI
TL;DR: This article identifies the primary drivers of 6G systems, in terms of applications and accompanying technological trends, and identifies the enabling technologies for the introduced 6G services and outlines a comprehensive research agenda that leverages those technologies.
Abstract: The ongoing deployment of 5G cellular systems is continuously exposing the inherent limitations of this system, compared to its original premise as an enabler for Internet of Everything applications. These 5G drawbacks are spurring worldwide activities focused on defining the next-generation 6G wireless system that can truly integrate far-reaching applications ranging from autonomous systems to extended reality. Despite recent 6G initiatives (one example is the 6Genesis project in Finland), the fundamental architectural and performance components of 6G remain largely undefined. In this article, we present a holistic, forward-looking vision that defines the tenets of a 6G system. We opine that 6G will not be a mere exploration of more spectrum at high-frequency bands, but it will rather be a convergence of upcoming technological trends driven by exciting, underlying services. In this regard, we first identify the primary drivers of 6G systems, in terms of applications and accompanying technological trends. Then, we propose a new set of service classes and expose their target 6G performance requirements. We then identify the enabling technologies for the introduced 6G services and outline a comprehensive research agenda that leverages those technologies. We conclude by providing concrete recommendations for the roadmap toward 6G. Ultimately, the intent of this article is to serve as a basis for stimulating more out-of-the-box research around 6G.

2,416 citations


Journal ArticleDOI
TL;DR: This article extends the vision of 5G to more ambitious scenarios in a more distant future and speculates on the visionary technologies that could provide the step changes needed for enabling 6G.
Abstract: While 5G is being tested worldwide and anticipated to be rolled out gradually in 2019, researchers around the world are beginning to turn their attention to what 6G might be in 10+ years' time, and there are already initiatives in various countries focusing on the research of possible 6G technologies. This article aims to extend the vision of 5G to more ambitious scenarios in a more distant future and speculates on the visionary technologies that could provide the step changes needed for enabling 6G.

539 citations


Journal ArticleDOI
TL;DR: In this article, a blockchained federated learning (BlockFL) architecture is proposed, where local learning model updates are exchanged and verified by utilizing a consensus mechanism in blockchain.
Abstract: By leveraging blockchain, this letter proposes a blockchained federated learning (BlockFL) architecture where local learning model updates are exchanged and verified. This enables on-device machine learning without any centralized training data or coordination by utilizing a consensus mechanism in blockchain. Moreover, we analyze an end-to-end latency model of BlockFL and characterize the optimal block generation rate by considering communication, computation, and consensus delays.
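A minimal sketch of the latency trade-off that BlockFL analyzes: the per-round delay combines communication, computation, and consensus terms, and the block generation rate balances block waiting time against forking. The functional form and constants below are illustrative assumptions, not the letter's exact derivation.

```python
# Toy end-to-end latency model for blockchained FL (BlockFL), illustrating how an
# optimal block generation rate balances block waiting time against forking.
# The functional form and constants are assumptions for illustration only.
import numpy as np

T_COMM, T_COMP = 0.5, 1.0   # per-round communication and local computation delays [s] (assumed)
TAU_PROP = 0.2              # block propagation delay [s] (assumed)
FORK_PENALTY = 3.0          # extra delay incurred when a fork forces re-mining [s] (assumed)

def e2e_latency(block_rate):
    """End-to-end per-round latency as a function of the block generation rate [blocks/s]."""
    waiting = 1.0 / block_rate                       # mean wait for the next block
    p_fork = 1.0 - np.exp(-block_rate * TAU_PROP)    # forking more likely at high rates
    return T_COMM + T_COMP + waiting + p_fork * FORK_PENALTY

rates = np.linspace(0.05, 5.0, 500)
latencies = e2e_latency(rates)
best = rates[np.argmin(latencies)]
print(f"illustrative optimal block generation rate: {best:.2f} blocks/s, "
      f"latency: {latencies.min():.2f} s")
```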

394 citations


Journal ArticleDOI
TL;DR: The problem of joint power and resource allocation (JPRA) for ultra-reliable low-latency communication (URLLC) in vehicular networks is studied and a novel distributed approach based on federated learning (FL) is proposed to estimate the tail distribution of the queues.
Abstract: In this paper, the problem of joint power and resource allocation (JPRA) for ultra-reliable low-latency communication (URLLC) in vehicular networks is studied. Therein, the network-wide power consumption of vehicular users (VUEs) is minimized subject to high reliability in terms of probabilistic queuing delays. Using extreme value theory (EVT), a new reliability measure is defined to characterize extreme events pertaining to vehicles' queue lengths exceeding a predefined threshold. To learn these extreme events, assuming they are independently and identically distributed over VUEs, a novel distributed approach based on federated learning (FL) is proposed to estimate the tail distribution of the queue lengths. Considering the communication delays incurred by FL over wireless links, Lyapunov optimization is used to derive the JPRA policies enabling URLLC for each VUE in a distributed manner. The proposed solution is then validated via extensive simulations using a Manhattan mobility model. Simulation results show that FL enables the proposed method to estimate the tail distribution of queues with an accuracy that is close to a centralized solution, with up to 79% reductions in the amount of exchanged data. Furthermore, the proposed method yields up to 60% reductions in the number of VUEs with large queue lengths, while halving the average power consumption compared to an average queue-based baseline.
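A sketch of the EVT ingredient described above: each VUE fits a generalized Pareto distribution (GPD) to its queue-length exceedances over a threshold, and a plain average of the locally fitted parameters stands in here for the paper's federated estimation. The threshold, the synthetic traces, and the naive aggregation are assumptions for illustration.

```python
# Sketch of the EVT step: each VUE fits a generalized Pareto distribution (GPD) to its
# queue-length exceedances over a threshold, and a simple parameter average stands in
# for the paper's federated (FL-based) estimation. Data and threshold are synthetic.
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(0)
THRESHOLD = 50.0                        # queue-length threshold defining "extreme" events (assumed)

def local_gpd_fit(queue_samples):
    """Fit a GPD (shape, scale) to the exceedances observed by one VUE."""
    exceedances = queue_samples[queue_samples > THRESHOLD] - THRESHOLD
    shape, _, scale = genpareto.fit(exceedances, floc=0.0)   # location fixed at 0
    return shape, scale

# Synthetic per-VUE queue-length traces (heavy-ish tails for illustration).
vue_queues = [rng.exponential(scale=20.0, size=5000) for _ in range(4)]
local_params = np.array([local_gpd_fit(q) for q in vue_queues])

# Naive "federated" aggregation: average the local GPD parameters.
shape_g, scale_g = local_params.mean(axis=0)
tail_prob = genpareto.sf(80.0 - THRESHOLD, shape_g, loc=0.0, scale=scale_g)
print(f"global GPD shape={shape_g:.3f}, scale={scale_g:.3f}, "
      f"P(queue > 80 | queue > {THRESHOLD:.0f}) ~ {tail_prob:.4f}")
```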

353 citations


Journal ArticleDOI
TL;DR: A proactive algorithm based on long short-term memory and deep reinforcement learning techniques is proposed to address the partial observability and the curse of high dimensionality in the local network state space faced by each VUE-pair.
Abstract: In this paper, we investigate the problem of age of information (AoI)-aware radio resource management for expected long-term performance optimization in a Manhattan grid vehicle-to-vehicle network. With the observation of global network state at each scheduling slot, the roadside unit (RSU) allocates the frequency bands and schedules packet transmissions for all vehicle user equipment-pairs (VUE-pairs). We model the stochastic decision-making procedure as a discrete-time single-agent Markov decision process (MDP). The technical challenges in solving the optimal control policy originate from high spatial mobility and temporally varying traffic information arrivals of the VUE-pairs. To make the problem solving tractable, we first decompose the original MDP into a series of per-VUE-pair MDPs. Then we propose a proactive algorithm based on long short-term memory and deep reinforcement learning techniques to address the partial observability and the curse of high dimensionality in local network state space faced by each VUE-pair. With the proposed algorithm, the RSU makes the optimal frequency band allocation and packet scheduling decision at each scheduling slot in a decentralized way in accordance with the partial observations of the global network state at the VUE-pairs. Numerical experiments validate the theoretical analysis and demonstrate the significant performance improvements from the proposed algorithm.

123 citations


Proceedings ArticleDOI
07 Jun 2020
TL;DR: A novel framework is proposed to implement distributed federated learning (FL) algorithms within a UAV swarm that consists of a leading UAV and several following UAVs and shows that the joint design strategy can reduce the number of communication rounds needed for convergence by as much as 35% compared with the baseline design.
Abstract: Unmanned aerial vehicle (UAV) swarms must exploit machine learning (ML) in order to execute various tasks ranging from coordinated trajectory planning to cooperative target recognition. However, due to the lack of continuous connections between the UAV swarm and ground base stations (BSs), using centralized ML will be challenging, particularly when dealing with a large volume of data. In this paper, a novel framework is proposed to implement distributed federated learning (FL) algorithms within a UAV swarm that consists of a leading UAV and several following UAVs. Each following UAV trains a local FL model based on its collected data and then sends this trained local model to the leading UAV who will aggregate the received models, generate a global FL model, and transmit it to followers over the intra-swarm network. To identify how wireless factors, like fading, transmission delay, and UAV antenna angle deviations resulting from wind and mechanical vibrations, impact the performance of FL, a rigorous convergence analysis for FL is performed. Then, a joint power allocation and scheduling design is proposed to optimize the convergence rate of FL while taking into account the energy consumption during convergence and the delay requirement imposed by the swarm's control system. Simulation results validate the effectiveness of the FL convergence analysis and show that the joint design strategy can reduce the number of communication rounds needed for convergence by as much as 35% compared with the baseline design.
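A minimal sketch of the aggregation step carried out at the leading UAV: weighted federated averaging of the followers' locally trained models. Model vectors and sample counts are synthetic, and the wireless effects (fading, delay, antenna angle deviations) analyzed in the paper are not modeled here.

```python
# Minimal sketch of the leading UAV's aggregation step: federated averaging of the
# followers' local models, weighted by their local data sizes. Wireless impairments
# (fading, delays, antenna angle deviations) studied in the paper are not modeled.
import numpy as np

def leader_aggregate(local_models, num_samples):
    """Return the global model as the sample-size-weighted average of follower models."""
    weights = np.asarray(num_samples, dtype=float)
    weights /= weights.sum()
    return np.average(np.stack(local_models), axis=0, weights=weights)

rng = np.random.default_rng(1)
follower_models = [rng.normal(size=10) for _ in range(5)]   # 5 following UAVs, 10 parameters each
samples_per_uav = [120, 80, 200, 150, 60]

global_model = leader_aggregate(follower_models, samples_per_uav)
print("global FL model broadcast back to the followers:", np.round(global_model, 3))
```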

107 citations


Journal ArticleDOI
TL;DR: In this article, the authors identify the primary drivers of VR systems in terms of applications and use cases and map the human perception requirements to corresponding QoS requirements for four phases of VR technology development.
Abstract: Cellular-connected wireless connectivity provides new opportunities for virtual reality (VR) to offer seamless user experience from anywhere at anytime. To realize this vision, the quality-of-service (QoS) for wireless VR needs to be carefully defined to reflect human perception requirements. In this article, we first identify the primary drivers of VR systems in terms of applications and use cases. We then map the human perception requirements to corresponding QoS requirements for four phases of VR technology development. To shed light on how to provide short/long-range mobility for VR services, we further list four main use cases for cellular-connected wireless VR and identify their unique research challenges along with their corresponding enabling technologies and solutions in 5G systems and beyond. Last but not least, we present a case study to demonstrate the effectiveness of our proposed solution and the unique QoS performance requirements of VR transmission compared to that of traditional video service in cellular networks.

99 citations


Journal ArticleDOI
TL;DR: A deep-learning aided scheme is proposed for maximizing the quality of delivered video chunks with low latency in multi-user VR wireless video streaming, exploiting the correlations in the predicted field of view (FoV) and locations of viewers watching 360° HD VR videos.
Abstract: Immersive virtual reality (VR) applications require ultra-high data rate and low latency for smooth operation. Hence in this paper, aiming to improve the VR experience in multi-user VR wireless video streaming, a deep-learning aided scheme for maximizing the quality of the delivered video chunks with low latency is proposed. Therein, the correlations in the predicted field of view (FoV) and locations of viewers watching 360° HD VR videos are capitalized on to realize a proactive FoV-centric millimeter wave (mmWave) physical-layer multicast transmission. The problem is cast as a frame quality maximization problem subject to tight latency constraints and network stability. The problem is then decoupled into HD frame request admission and scheduling subproblems, and a matching theory game is formulated to solve the scheduling subproblem by associating requests from clusters of users to mmWave small cell base stations (SBSs) for their unicast/multicast transmission. Furthermore, for realistic modeling and simulation purposes, a real VR head-tracking dataset and a deep recurrent neural network (DRNN) based on gated recurrent units (GRUs) are leveraged. Extensive simulation results show how the content reuse for clusters of users with highly overlapping FoVs brought in by multicasting reduces the VR frame delay by 12%. This reduction is further boosted by proactiveness, which cuts the average delays of both reactive unicast and multicast baselines by half while preserving HD delivery rates above 98%. Finally, enforcing tight latency bounds shortens the delay tail, as evinced by 13% lower delays in the 99th percentile.

86 citations


Journal ArticleDOI
TL;DR: This work introduces a new deep imitation learning (DIL)-driven edge-cloud computation offloading framework for MEC networks, and discusses the directions and advantages of applying deep learning methods to multiple MEC research areas, including edge data analytics, dynamic resource allocation, security, and privacy.
Abstract: In this work, we propose a new deep imitation learning (DIL)-driven edge-cloud computation offloading framework for MEC networks. A key objective for the framework is to minimize the offloading cost in time-varying network environments through optimal behavioral cloning. Specifically, we first introduce our computation offloading model for MEC in detail. Then we make fine-grained offloading decisions for a mobile device, and the problem is formulated as a multi-label classification problem, taking the local execution cost and remote network resource usage into consideration. To minimize the offloading cost, we train our decision making engine by leveraging the deep imitation learning method, and further evaluate its performance through an extensive numerical study. Simulation results show that our proposal outperforms other benchmark policies in offloading accuracy and offloading cost reduction. Finally, we discuss the directions and advantages of applying deep learning methods to multiple MEC research areas, including edge data analytics, dynamic resource allocation, security, and privacy.
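A sketch of the behavioral-cloning idea: a small neural network is trained as a multi-label classifier that maps an observed network/task state to per-task offloading decisions, imitating decisions produced by a synthetic "expert" rule. The features, labels, expert rule, and network size are illustrative assumptions, not the paper's model.

```python
# Behavioral-cloning sketch for offloading: a multi-label classifier maps a state vector
# (e.g., channel quality, task sizes, local CPU load) to per-task offload/local decisions,
# imitating a synthetic "expert" rule. Features, expert rule, and sizes are illustrative.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(2)
N_TASKS = 4

# State: [channel_gain, cpu_load, task_size_1..4]; expert offloads large tasks on good channels.
X = rng.uniform(0.0, 1.0, size=(2000, 2 + N_TASKS))
channel, cpu_load, task_sizes = X[:, 0:1], X[:, 1:2], X[:, 2:]
Y = ((task_sizes * channel > 0.25) | (cpu_load > 0.8)).astype(int)   # expert decisions (assumed)

clone = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=500, random_state=0)
clone.fit(X[:1500], Y[:1500])                       # imitation learning = supervised cloning
accuracy = (clone.predict(X[1500:]) == Y[1500:]).mean()
print(f"per-decision cloning accuracy on held-out states: {accuracy:.3f}")
```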

82 citations


Posted Content
TL;DR: The intent of this article is to spearhead beyond-5G/6G mission-critical applications by laying out a holistic vision of xURLLC, its research challenges and enabling technologies, while providing key insights grounded in selected use cases.
Abstract: Notwithstanding the significant traction gained by ultra-reliable and low-latency communication (URLLC) in both academia and 3GPP standardization, the fundamentals of URLLC remain elusive. Meanwhile, new immersive and high-stake control applications with much stricter reliability, latency and scalability requirements are posing unprecedented challenges in terms of system design and algorithmic solutions. This article aspires to provide a fresh and in-depth look into URLLC by first examining the limitations of 5G URLLC, and then putting forward key research directions for the next generation of URLLC, coined eXtreme ultra-reliable and low-latency communication (xURLLC). xURLLC is underpinned by three core concepts: (1) it leverages recent advances in machine learning (ML) for faster and more reliable data-driven predictions; (2) it fuses both radio frequency (RF) and non-RF modalities for modeling and combating rare events without sacrificing spectral efficiency; and (3) it underscores the much needed joint communication and control co-design, as opposed to the communication-centric 5G URLLC. The intent of this article is to spearhead beyond-5G/6G mission-critical applications by laying out a holistic vision of xURLLC, its research challenges and enabling technologies, while providing key insights grounded in selected use cases.

81 citations


Posted Content
TL;DR: This article aims to provide a holistic overview of relevant communication and ML principles and present communication-efficient and distributed learning frameworks with selected use cases.
Abstract: Machine learning (ML) is a promising enabler for the fifth generation (5G) communication systems and beyond. By imbuing intelligence into the network edge, edge nodes can proactively carry out decision-making, and thereby react to local environmental changes and disturbances while experiencing zero communication latency. To achieve this goal, it is essential to cater for high ML inference accuracy at scale under time-varying channel and network dynamics, by continuously exchanging fresh data and ML model updates in a distributed way. Taming this new kind of data traffic boils down to improving the communication efficiency of distributed learning by optimizing communication payload types, transmission techniques, and scheduling, as well as ML architectures, algorithms, and data processing methods. To this end, this article aims to provide a holistic overview of relevant communication and ML principles, and thereby present communication-efficient and distributed learning frameworks with selected use cases.

Journal ArticleDOI
TL;DR: The transmission power minimization problem is studied under stringent URLLC constraints in terms of probabilistic AoI for both deterministic and Markovian traffic arrivals, and an efficient novel mapping between AoI and queue-related distributions is proposed.
Abstract: While the notion of age of information (AoI) has recently been proposed for analyzing ultra-reliable low-latency communications (URLLC), most of the existing works have focused on the average AoI measure. Designing a wireless network based on average AoI will fail to characterize the performance of URLLC systems, as it cannot account for extreme AoI events, occurring with very low probabilities. In contrast, this paper goes beyond the average AoI to improve URLLC in a vehicular communication network by characterizing and controlling the AoI tail distribution. In particular, the transmission power minimization problem is studied under stringent URLLC constraints in terms of probabilistic AoI for both deterministic and Markovian traffic arrivals. Accordingly, an efficient novel mapping between AoI and queue-related distributions is proposed. Subsequently, extreme value theory (EVT) and Lyapunov optimization techniques are adopted to formulate and solve the problem considering both long and short packet transmissions. Simulation results show over a twofold improvement in shortening the AoI distribution tail versus a baseline that models the maximum queue length distribution, in addition to a tradeoff between arrival rate and AoI.

Posted Content
TL;DR: A quantification of the risk for an unreliable VR performance is conducted through a novel and rigorous characterization of the tail of the end-to-end (E2E) delay, and system reliability for scenarios with guaranteed line-of-sight (LoS) is derived as a function of THz network parameters after deriving a novel expression for the probability distribution function of the THz transmission delay.
Abstract: Wireless virtual reality (VR) imposes new visual and haptic requirements that are directly linked to the quality-of-experience (QoE) of VR users. These QoE requirements can only be met by wireless connectivity that offers high-rate and high-reliability low latency communications (HRLLC), unlike the low rates usually considered in vanilla ultra-reliable low latency communication scenarios. The high rates for VR over short distances can only be supported by an enormous bandwidth, which is available in terahertz (THz) wireless networks. Guaranteeing HRLLC requires dealing with the uncertainty that is specific to the THz channel. To explore the potential of THz for meeting HRLLC requirements, a quantification of the risk for an unreliable VR performance is conducted through a novel and rigorous characterization of the tail of the end-to-end (E2E) delay. Then, a thorough analysis of the tail-value-at-risk (TVaR) is performed to concretely characterize the behavior of extreme wireless events crucial to the real-time VR experience. System reliability for scenarios with guaranteed line-of-sight (LoS) is then derived as a function of THz network parameters after deriving a novel expression for the probability distribution function of the THz transmission delay. Numerical results show that abundant bandwidth and low molecular absorption are necessary to improve the reliability. However, their effect remains secondary compared to the availability of LoS, which significantly affects the THz HRLLC performance. In particular, for scenarios with guaranteed LoS, a reliability of 99.999% (with an E2E delay threshold of 20 ms) for a bandwidth of 15 GHz can be achieved by the THz network, compared to a reliability of 96% for twice the bandwidth, when blockages are considered.
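The tail-value-at-risk (TVaR) named in the abstract can be estimated directly from delay samples: the value-at-risk is the α-quantile of the end-to-end delay, and TVaR is the mean delay beyond that quantile. The delay distribution below is a synthetic stand-in for the paper's THz E2E delay model.

```python
# Sketch of the tail-value-at-risk (TVaR, a.k.a. CVaR) of the end-to-end VR delay:
# VaR is the alpha-quantile of the delay, TVaR is the mean delay beyond that quantile.
# The delay samples below are synthetic stand-ins for the paper's THz E2E delay model.
import numpy as np

def var_tvar(delays, alpha=0.999):
    """Return (VaR_alpha, TVaR_alpha) estimated from delay samples."""
    var = np.quantile(delays, alpha)
    tail = delays[delays >= var]
    return var, tail.mean()

rng = np.random.default_rng(3)
delays_ms = rng.lognormal(mean=1.0, sigma=0.6, size=100_000)   # synthetic E2E delays [ms]

var, tvar = var_tvar(delays_ms, alpha=0.999)
print(f"VaR_99.9% = {var:.2f} ms, TVaR_99.9% = {tvar:.2f} ms, "
      f"reliability @ 20 ms = {(delays_ms <= 20.0).mean():.5f}")
```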

Journal ArticleDOI
TL;DR: This work shows an incentive-based interaction between the crowdsourcing platform and the participating clients' independent strategies for training a global learning model, where each side maximizes its own benefit, and proposes a novel crowdsourcing framework to leverage FL that considers the communication efficiency during parameter exchange.
Abstract: Federated learning (FL) rests on the notion of training a global model in a decentralized manner. Under this setting, mobile devices perform computations on their local data before uploading the required updates to improve the global model. However, when the participating clients implement an uncoordinated computation strategy, the difficulty is to handle the communication efficiency (i.e., the number of communications per iteration) while exchanging the model parameters during aggregation. Therefore, a key challenge in FL is how users participate to build a high-quality global model with communication efficiency. We tackle this issue by formulating a utility maximization problem, and propose a novel crowdsourcing framework to leverage FL that considers the communication efficiency during parameter exchange. First, we show an incentive-based interaction between the crowdsourcing platform and the participating clients' independent strategies for training a global learning model, where each side maximizes its own benefit. We formulate a two-stage Stackelberg game to analyze such a scenario and find the game's equilibria. Second, we formalize an admission control scheme for participating clients to ensure a level of local accuracy. Simulation results demonstrate the efficacy of our proposed solution with up to 22% gain in the offered reward.

Proceedings ArticleDOI
07 Jun 2020
TL;DR: A novel approach based on deep reinforcement learning is proposed, in which the BS receives the state information, consisting of the users’ channel state information feedback and the available energy reported by the RIS, and optimizes its action composed of the BS transmit power allocation and RIS phase shift configuration using a neural network.
Abstract: When deployed as reflectors for existing wireless base stations (BSs), reconfigurable intelligent surfaces (RISs) can be a promising approach to achieve high spectrum and energy efficiency. However, due to the large number of RIS elements, the joint optimization of the BS and reflector RIS configuration is challenging. In essence, the BS transmit power and RIS's reflecting configuration must be optimized so as to improve users' data rates and reduce the BS power consumption. In this paper, the problem of energy efficiency optimization is studied in an RIS-assisted cellular network endowed with an RIS reflector powered via energy harvesting technologies. The goal of this proposed framework is to maximize the average energy efficiency by enabling a BS to determine the transmit power and RIS configuration, under uncertainty on the wireless channel and harvested energy of the RIS system. To solve this problem, a novel approach based on deep reinforcement learning is proposed, in which the BS receives the state information, consisting of the users' channel state information feedback and the available energy reported by the RIS. Then, the BS optimizes its action composed of the BS transmit power allocation and RIS phase shift configuration using a neural network. Due to the intractability of the formulated problem under uncertainty, a case study is conducted to analyze the performance of the studied RIS-assisted downlink system by asymptotically deriving the upper bound of the energy efficiency. Simulation results show that the proposed framework improves energy efficiency up to 77.3% when the number of RIS elements increases from 9 to 25.

MonographDOI
02 Apr 2020
TL;DR: A thorough treatment of UAV wireless communications and networking research challenges and opportunities is provided, featuring discussion of practical applications including drone delivery systems, public safety, IoT, virtual reality, and smart cities.
Abstract: A thorough treatment of UAV wireless communications and networking research challenges and opportunities. Detailed, step-by-step development of carefully selected research problems that pertain to UAV network performance analysis and optimization, physical layer design, trajectory path planning, resource management, multiple access, cooperative communications, standardization, control, and security is provided. Featuring discussion of practical applications including drone delivery systems, public safety, IoT, virtual reality, and smart cities, this is an essential tool for researchers, students, and engineers interested in broadening their knowledge of the deployment and operation of communication systems that integrate or rely on unmanned aerial vehicles.

Proceedings ArticleDOI
07 Jun 2020
TL;DR: In this paper, a risk-based framework based on the entropic value-at-risk is proposed for rate optimization and reliability performance for a wireless VR network, and a Lyapunov optimization technique is used to reformulate the problem as a linear weighted function, while ensuring that higher order statistics of the queue length are maintained under a threshold.
Abstract: In this paper, the problem of associating reconfigurable intelligent surfaces (RISs) to virtual reality (VR) users is studied for a wireless VR network. In particular, this problem is considered within a cellular network that employs terahertz (THz) operated RISs acting as base stations. To provide a seamless VR experience, high data rates and reliable low latency need to be continuously guaranteed. To address these challenges, a novel risk-based framework based on the entropic value-at-risk is proposed for rate optimization and reliability performance. Furthermore, a Lyapunov optimization technique is used to reformulate the problem as a linear weighted function, while ensuring that higher order statistics of the queue length are maintained under a threshold. To address this problem, given the stochastic nature of the channel, a policy-based reinforcement learning (RL) algorithm is proposed. Since the state space is extremely large, the policy is learned through a deep-RL algorithm. In particular, a recurrent neural network (RNN) RL framework is proposed to capture the dynamic channel behavior and improve the speed of conventional RL policy-search algorithms. Simulation results demonstrate that the maximal queue length resulting from the proposed approach is only within 1% of the optimal solution. The results show a high accuracy and fast convergence for the RNN with a validation accuracy of 91.92%.
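A sketch of the entropic value-at-risk (EVaR) used above as the risk measure: for a tail probability α, EVaR can be estimated from samples of the risk variable by minimizing z⁻¹ log(E[e^{zX}]/α) over z > 0. The samples and the choice of α below are synthetic assumptions, and logsumexp is used for numerical stability.

```python
# Sketch of the entropic value-at-risk (EVaR) at tail probability alpha, estimated from
# samples of a risk variable X (e.g., queue length or delay):
#   EVaR_alpha(X) = inf_{z>0} (1/z) * log( E[exp(z*X)] / alpha ).
# Samples and alpha below are synthetic/assumed; logsumexp keeps the estimate stable.
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.special import logsumexp

def evar(samples, alpha=0.01):
    x = np.asarray(samples, dtype=float)
    n = len(x)

    def objective(z):
        # (1/z) * [ log( mean(exp(z*x)) ) - log(alpha) ]
        return (logsumexp(z * x) - np.log(n) - np.log(alpha)) / z

    res = minimize_scalar(objective, bounds=(1e-6, 50.0), method="bounded")
    return res.fun

rng = np.random.default_rng(4)
queue_samples = rng.gamma(shape=2.0, scale=3.0, size=50_000)    # synthetic queue lengths

print(f"mean = {queue_samples.mean():.2f}, "
      f"99th percentile = {np.quantile(queue_samples, 0.99):.2f}, "
      f"EVaR_1% = {evar(queue_samples, alpha=0.01):.2f}")
```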

Posted Content
TL;DR: In this paper, a distributed federated learning (FL) algorithm for UAV swarms is proposed, where each following UAV trains a local FL model based on its collected data and then sends this trained local model to the leading UAV who will aggregate the received models, generate a global FL model, and transmit it to followers over the intra-swarm network.
Abstract: Unmanned aerial vehicle (UAV) swarms must exploit machine learning (ML) in order to execute various tasks ranging from coordinated trajectory planning to cooperative target recognition. However, due to the lack of continuous connections between the UAV swarm and ground base stations (BSs), using centralized ML will be challenging, particularly when dealing with a large volume of data. In this paper, a novel framework is proposed to implement distributed federated learning (FL) algorithms within a UAV swarm that consists of a leading UAV and several following UAVs. Each following UAV trains a local FL model based on its collected data and then sends this trained local model to the leading UAV who will aggregate the received models, generate a global FL model, and transmit it to followers over the intra-swarm network. To identify how wireless factors, like fading, transmission delay, and UAV antenna angle deviations resulting from wind and mechanical vibrations, impact the performance of FL, a rigorous convergence analysis for FL is performed. Then, a joint power allocation and scheduling design is proposed to optimize the convergence rate of FL while taking into account the energy consumption during convergence and the delay requirement imposed by the swarm's control system. Simulation results validate the effectiveness of the FL convergence analysis and show that the joint design strategy can reduce the number of communication rounds needed for convergence by as much as 35% compared with the baseline design.

Journal Article
TL;DR: It is proved that GADMM converges to the optimal solution for convex loss functions, and it is numerically shown that GADMM converges faster and is more communication-efficient than state-of-the-art communication-efficient algorithms such as the Lazily Aggregated Gradient (LAG) and dual averaging, in linear and logistic regression tasks on synthetic and real datasets.
Abstract: When the data is distributed across multiple servers, lowering the communication cost between the servers (or workers) while solving the distributed learning problem is an important problem and is the focus of this paper. In particular, we propose a fast and communication-efficient decentralized framework to solve the distributed machine learning (DML) problem. The proposed algorithm, Group Alternating Direction Method of Multipliers (GADMM), is based on the Alternating Direction Method of Multipliers (ADMM) framework. The key novelty in GADMM is that it solves the problem in a decentralized topology where at most half of the workers are competing for the limited communication resources at any given time. Moreover, each worker exchanges the locally trained model only with two neighboring workers, thereby training a global model with a lower amount of communication overhead in each exchange. We prove that GADMM converges to the optimal solution for convex loss functions, and numerically show that it converges faster and is more communication-efficient than state-of-the-art communication-efficient algorithms such as the Lazily Aggregated Gradient (LAG) and dual averaging, in linear and logistic regression tasks on synthetic and real datasets. Furthermore, we propose Dynamic GADMM (D-GADMM), a variant of GADMM, and prove its convergence under the time-varying network topology of the workers.
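A condensed sketch of GADMM for decentralized linear regression on a chain of workers: head (even-indexed) workers update first using their neighbors' latest models, tail (odd-indexed) workers update next, and only neighboring workers exchange models. The closed-form quadratic updates follow the standard augmented-Lagrangian form; the data, chain size, and penalty parameter are synthetic assumptions.

```python
# Condensed GADMM sketch: decentralized linear regression over a chain of workers.
# Head (even-indexed) workers update first, tail (odd-indexed) workers second, and each
# worker only exchanges its model with its chain neighbors. Data are synthetic.
import numpy as np

rng = np.random.default_rng(5)
N, D, RHO, ITERS = 6, 5, 1.0, 200
theta_true = rng.normal(size=D)

# Each worker holds a private (A_i, b_i) shard.
A = [rng.normal(size=(40, D)) for _ in range(N)]
b = [A_i @ theta_true + 0.05 * rng.normal(size=40) for A_i in A]

theta = [np.zeros(D) for _ in range(N)]
lam = [np.zeros(D) for _ in range(N - 1)]          # dual variable on edge (i, i+1)

def local_update(i):
    """Closed-form primal update of worker i given its neighbors' latest models."""
    rhs = A[i].T @ b[i]
    H = A[i].T @ A[i]
    deg = 0
    if i > 0:                                      # left neighbor exists
        rhs += lam[i - 1] + RHO * theta[i - 1]
        deg += 1
    if i < N - 1:                                  # right neighbor exists
        rhs += -lam[i] + RHO * theta[i + 1]
        deg += 1
    return np.linalg.solve(H + RHO * deg * np.eye(D), rhs)

for _ in range(ITERS):
    for i in range(0, N, 2):                       # head group updates in parallel
        theta[i] = local_update(i)
    for i in range(1, N, 2):                       # tail group updates next
        theta[i] = local_update(i)
    for i in range(N - 1):                         # dual updates on each edge
        lam[i] = lam[i] + RHO * (theta[i] - theta[i + 1])

err = max(np.linalg.norm(t - theta_true) for t in theta)
print(f"max worker deviation from the ground-truth model: {err:.4f}")
```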

Journal ArticleDOI
TL;DR: The federated learning (FL) approach, which can share the model parameters of NNs at drones, is proposed with NN-based MFG to satisfy the required conditions, and the stability analysis and performance of the proposed FL-MFG are presented.
Abstract: This paper investigates the control of a massive population of UAVs such as drones. The straightforward method of controlling the UAVs by considering the interactions among them to form a flock requires a huge amount of inter-UAV communication, which is impossible to implement in real-time applications. One method of control is to apply the mean field game (MFG) framework, which substantially reduces communications among the UAVs. However, to realize this framework, powerful processors are required to obtain the control laws at different UAVs. This requirement limits the usage of the MFG framework for real-time applications such as massive UAV control. Thus, a function approximator based on neural networks (NN) is utilized to approximate the solutions of the Hamilton-Jacobi-Bellman (HJB) and Fokker-Planck-Kolmogorov (FPK) equations. Nevertheless, using an approximate solution can violate the conditions for convergence of the MFG framework. Therefore, the federated learning (FL) approach, which can share the model parameters of NNs at drones, is proposed with NN-based MFG to satisfy the required conditions. The stability analysis of the NN-based MFG approach is presented, and the performance of the proposed FL-MFG is elaborated through simulations.

Posted Content
TL;DR: This work develops a privacy-preserving XOR based mixup data augmentation technique, coined XorMixup, and proposes a novel one-shot FL framework, termed XorMixFL, to collect other devices' encoded data samples that are decoded only using each device's own data samples.
Abstract: User-generated data distributions are often imbalanced across devices and labels, hampering the performance of federated learning (FL). To remedy this non-independent and identically distributed (non-IID) data problem, in this work we develop a privacy-preserving XOR based mixup data augmentation technique, coined XorMixup, and thereby propose a novel one-shot FL framework, termed XorMixFL. The core idea is to collect other devices' encoded data samples that are decoded only using each device's own data samples. The decoding provides synthetic-but-realistic samples that induce an IID dataset used for model training. Both encoding and decoding procedures follow bit-wise XOR operations that intentionally distort raw samples, thereby preserving data privacy. Simulation results corroborate that XorMixFL achieves up to 17.6% higher accuracy than Vanilla FL under a non-IID MNIST dataset.
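A toy illustration of the bit-wise XOR distortion at the heart of XorMixup: a raw sample XORed with another sample acting as a key looks like noise, and only a holder of the key sample can undo the distortion. This is a minimal sketch of the encode/decode primitive only, not the full XorMixFL protocol, which additionally mixes samples up before encoding.

```python
# Toy illustration of the bit-wise XOR encode/decode primitive behind XorMixup/XorMixFL:
# a raw sample XORed with a "key" sample looks like noise, and only a holder of the key
# sample can undo the distortion. This is the primitive only, not the full protocol,
# which additionally mixes samples up before encoding.
import numpy as np

rng = np.random.default_rng(6)
raw = rng.integers(0, 256, size=(28, 28), dtype=np.uint8)    # e.g., an MNIST-like image
key = rng.integers(0, 256, size=(28, 28), dtype=np.uint8)    # another sample used as the key

encoded = np.bitwise_xor(raw, key)          # what gets shared: distorted, privacy-preserving
decoded = np.bitwise_xor(encoded, key)      # only the key holder can invert the distortion

print("mean |encoded - raw| per pixel:", np.abs(encoded.astype(int) - raw.astype(int)).mean())
print("decoded equals raw:", np.array_equal(decoded, raw))
```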

Posted Content
TL;DR: In this paper, the authors provide a vision for 6G Edge Intelligence and present edge computing along with other 6G enablers as a key component to establish the future intelligent Internet technologies as shown in this series of 6G White Papers.
Abstract: In this white paper we provide a vision for 6G Edge Intelligence. Moving towards 5G and beyond to the future 6G networks, intelligent solutions utilizing data-driven machine learning and artificial intelligence become crucial for several real-world applications including, but not limited to, more efficient manufacturing, novel personal smart device environments and experiences, urban computing and autonomous traffic settings. We present edge computing along with other 6G enablers as a key component to establish the future 2030 intelligent Internet technologies as shown in this series of 6G White Papers. In this white paper, we focus on the domains of edge computing infrastructure and platforms, data and edge network management, software development for edge, and real-time and distributed training of ML/AI algorithms, along with security, privacy, pricing, and end-user aspects. We discuss the key enablers and challenges and identify the key research questions for the development of the Intelligent Edge services. As a main outcome of this white paper, we envision a transition from the Internet of Things to the Intelligent Internet of Intelligent Things and provide a roadmap for the development of the 6G Intelligent Edge.

Proceedings ArticleDOI
25 May 2020
TL;DR: In this article, a joint client scheduling and resource block allocation policy is proposed to minimize the loss of accuracy in federated learning over wireless compared to a centralized training-based solution, under imperfect channel state information (CSI).
Abstract: In this work, we propose a novel joint client scheduling and resource block (RB) allocation policy to minimize the loss of accuracy in federated learning (FL) over wireless compared to a centralized training-based solution, under imperfect channel state information (CSI). First, the problem is cast as a stochastic optimization problem over a predefined training duration and solved using the Lyapunov optimization framework. In order to learn and track the wireless channel, a Gaussian process regression (GPR)-based channel prediction method is leveraged and incorporated into the scheduling decision. The proposed scheduling policies are evaluated via numerical simulations, under both perfect and imperfect CSI. Results show that the proposed method reduces the loss of accuracy up to 25.8% compared to state-of-the-art client scheduling and RB allocation methods.
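A sketch of the GPR ingredient mentioned above: fit a Gaussian process to recent channel-gain observations over time slots and predict the next slot's gain together with an uncertainty estimate that a scheduling policy could use. The kernel choice and the synthetic channel trace are assumptions.

```python
# Sketch of Gaussian process regression (GPR) for channel prediction: fit a GP to recent
# channel-gain observations over time slots and predict the next slot's gain with an
# uncertainty estimate usable by the client-scheduling policy. Kernel and data are assumed.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(7)
t_obs = np.arange(0, 30, dtype=float).reshape(-1, 1)                    # past time slots
gain_obs = np.sin(0.3 * t_obs).ravel() + 0.1 * rng.normal(size=30)      # synthetic channel gains

kernel = 1.0 * RBF(length_scale=5.0) + WhiteKernel(noise_level=0.01)
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(t_obs, gain_obs)

t_next = np.array([[30.0]])                                             # next scheduling slot
mean, std = gpr.predict(t_next, return_std=True)
print(f"predicted gain at slot 30: {mean[0]:.3f} +/- {std[0]:.3f}")
```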

Proceedings ArticleDOI
01 Jan 2020
TL;DR: It is proved that Q-GADMM converges to the optimal solution for convex loss functions, and it is numerically shown that Q-GADMM yields 7x less communication cost while achieving almost the same accuracy and convergence speed compared to GADMM without quantization.
Abstract: In this article, we propose a communication-efficient decentralized machine learning (ML) algorithm, coined quantized group ADMM (Q-GADMM) . To reduce the number of communication links, every worker in Q-GADMM communicates only with two neighbors, while updating its model via the group alternating direction method of multipliers (GADMM). Moreover, each worker transmits the quantized difference between its current model and its previously quantized model, thereby decreasing the communication payload size. However, due to the lack of centralized entity in decentralized ML, the spatial sparsity and payload compression may incur error propagation, hindering model training convergence. To overcome this, we develop a novel stochastic quantization method to adaptively adjust model quantization levels and their probabilities, while proving the convergence of Q-GADMM for convex objective functions. Furthermore, to demonstrate the feasibility of Q-GADMM for non-convex and stochastic problems, we propose quantized stochastic GADMM (Q-SGADMM) that incorporates deep neural network architectures and stochastic sampling. Simulation results corroborate that Q-GADMM significantly outperforms GADMM in terms of communication efficiency while achieving the same accuracy and convergence speed for a linear regression task. Similarly, for an image classification task using DNN, Q-SGADMM achieves significantly less total communication cost with identical accuracy and convergence speed compared to its counterpart without quantization, i.e., stochastic GADMM (SGADMM).
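A sketch of the per-worker payload compression described above: the worker quantizes the difference between its current model and its previously transmitted (quantized) model on a uniform grid with stochastic rounding, sends only the small integer codes plus a scale, and both ends reconstruct the same quantized model. The bit-width and grid construction are illustrative choices, not the paper's exact scheme.

```python
# Sketch of Q-GADMM's payload compression: quantize the *difference* between the current
# model and the previously transmitted (quantized) model on a uniform grid with stochastic
# rounding, send only integer codes + a scale, and reconstruct identically at the receiver.
# Bit-width and grid construction are illustrative choices.
import numpy as np

rng = np.random.default_rng(8)
BITS = 4
LEVELS = 2 ** BITS

def stochastic_quantize(diff):
    """Return (codes, scale, offset) encoding `diff` with unbiased stochastic rounding."""
    offset, span = diff.min(), np.ptp(diff) + 1e-12
    scale = span / (LEVELS - 1)
    pos = (diff - offset) / scale                      # real-valued grid positions
    low = np.floor(pos)
    codes = (low + (rng.random(diff.shape) < (pos - low))).astype(np.int64)
    return codes, scale, offset

def dequantize(codes, scale, offset):
    return codes * scale + offset

prev_quantized_model = np.zeros(1000)
current_model = rng.normal(size=1000)

codes, scale, offset = stochastic_quantize(current_model - prev_quantized_model)
new_quantized_model = prev_quantized_model + dequantize(codes, scale, offset)

err = np.linalg.norm(new_quantized_model - current_model) / np.linalg.norm(current_model)
print(f"payload: {codes.size * BITS / 8:.0f} bytes of codes (vs {current_model.nbytes} raw), "
      f"relative reconstruction error: {err:.3f}")
```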

Journal ArticleDOI
TL;DR: A distributed edge caching scheme is proposed to jointly minimize the request service delay and fronthaul traffic load, and a greedy algorithm is developed that enables each F-AP to obtain the final caching policy subject to the caching capacity constraint.
Abstract: In this paper, the edge caching optimization problem in fog radio access networks (F-RANs) is investigated. Taking into account time-variant user requests and ultra-dense deployment of fog access points (F-APs), we propose a distributed edge caching scheme to jointly minimize the request service delay and fronthaul traffic load. Considering the interactive relationship among F-APs, we model the optimization problem as a stochastic differential game (SDG) which captures the dynamics of F-AP states. To address both the intractability problem of the SDG and the caching capacity constraint, we propose to solve the optimization problem in a distributive manner. Firstly, a mean field game (MFG) is converted from the original SDG by exploiting the ultra-dense property of F-RANs, and the states of all F-APs are characterized by a mean field distribution. Then, an iterative algorithm is developed that enables each F-AP to obtain the mean field equilibrium and caching control without extra information exchange with other F-APs. Secondly, a fractional knapsack problem is formulated based on the mean field equilibrium, and a greedy algorithm is developed that enables each F-AP to obtain the final caching policy subject to the caching capacity constraint. Simulation results show that the proposed scheme outperforms the baselines.
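A sketch of the second stage described above: given per-content caching utilities (which in the paper come from the mean-field equilibrium, synthetic here) and content sizes, each F-AP solves a fractional knapsack by greedily caching contents in decreasing order of utility per unit size until its capacity is exhausted.

```python
# Sketch of the per-F-AP greedy fractional-knapsack step: given a caching utility per
# content (obtained from the mean-field equilibrium in the paper, synthetic here) and
# content sizes, cache greedily by utility density until the capacity is filled, possibly
# caching the last content fractionally.
import numpy as np

rng = np.random.default_rng(9)
utilities = rng.uniform(1.0, 10.0, size=20)     # delay/fronthaul benefit of caching each content (assumed)
sizes = rng.uniform(0.5, 3.0, size=20)          # content sizes [GB] (assumed)
CAPACITY = 10.0                                 # F-AP cache capacity [GB] (assumed)

def greedy_fractional_knapsack(utilities, sizes, capacity):
    """Return the caching fraction (in [0, 1]) for each content."""
    fractions = np.zeros(len(sizes))
    order = np.argsort(-utilities / sizes)      # decreasing utility density
    remaining = capacity
    for idx in order:
        if remaining <= 0:
            break
        take = min(sizes[idx], remaining)
        fractions[idx] = take / sizes[idx]
        remaining -= take
    return fractions

policy = greedy_fractional_knapsack(utilities, sizes, CAPACITY)
print(f"cached volume: {np.dot(policy, sizes):.2f} GB, total utility: {np.dot(policy, utilities):.2f}")
```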

Journal ArticleDOI
TL;DR: A novel joint video quality selection and resource allocation technique is proposed for increasing the quality-of-experience (QoE) of vehicular devices, and results show that the proposed algorithm ensures a high video quality experience compared to the baseline.
Abstract: Vehicle-to-everything (V2X) communication is a key enabler that connects vehicles to neighboring vehicles, infrastructure and pedestrians. In the past few years, multimedia services have seen enormous growth, and this growth is expected to continue as more devices, i.e., vehicular devices, utilize infotainment services in the future. Therefore, it is important to focus on user-centric measures, i.e., quality-of-experience (QoE), such as video quality (resolution) and fluctuations therein. In this paper, a novel joint video quality selection and resource allocation technique is proposed for increasing the QoE of vehicular devices. The proposed approach exploits the queuing dynamics and channel states of vehicular devices to maximize the QoE while ensuring seamless video playback at the end users with high probability. The network-wide QoE maximization problem is decoupled into two subparts. First, a network slicing based clustering algorithm is applied to partition the vehicles into multiple logical networks. Secondly, vehicle scheduling and quality selection is formulated as a stochastic optimization problem, which is solved using the Lyapunov drift-plus-penalty method. Numerical results show that the proposed algorithm ensures a high video quality experience compared to the baseline. Simulation results also show that the proposed technique achieves low latency and high-reliability communication.
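A sketch of a drift-plus-penalty rule of the kind the quality-selection stage relies on: in each slot, the quality level is chosen to maximize V times the QoE reward minus the current queue backlog times the load the chosen quality would add. The reward/load models and the trade-off parameter V are illustrative assumptions, not the paper's exact formulation.

```python
# Sketch of a Lyapunov drift-plus-penalty quality selection rule: in each slot, pick the
# video quality level that maximizes V * QoE(q) minus the current queue backlog times the
# transmission load the quality would add. Reward/load models and V are illustrative only.
import numpy as np

QUALITY_BITS = np.array([1.0, 2.5, 5.0, 8.0])        # Mbits per chunk for each quality level (assumed)
QOE_REWARD = np.log1p(QUALITY_BITS)                  # concave QoE in quality (assumed)
V = 5.0                                              # Lyapunov trade-off parameter (assumed)

def select_quality(queue_backlog, channel_rate_mbps):
    """Drift-plus-penalty choice: argmax_q  V*QoE(q) - Q(t) * service_time(q)."""
    service_time = QUALITY_BITS / channel_rate_mbps  # seconds needed to deliver each quality
    score = V * QOE_REWARD - queue_backlog * service_time
    return int(np.argmax(score))

for backlog, rate in [(0.5, 20.0), (5.0, 20.0), (5.0, 4.0)]:
    q = select_quality(backlog, rate)
    print(f"backlog={backlog:>4}, rate={rate:>5} Mbps -> quality level {q} "
          f"({QUALITY_BITS[q]} Mbit chunks)")
```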

Journal ArticleDOI
TL;DR: In this article, the authors investigate the problem of asynchronous coded caching in fog radio access networks (F-RANs); to minimize the fronthaul load, the encoding set collapsing rule and encoding set partition method are proposed to establish the relationship between the coded-multicasting contents for asynchronous and synchronous coded caching.
Abstract: In this paper, we investigate the problem of asynchronous coded caching in fog radio access networks (F-RANs). To minimize the fronthaul load, the encoding set collapsing rule and encoding set partition method are proposed to establish the relationship between the coded-multicasting contents for asynchronous and synchronous coded caching. Furthermore, a decentralized asynchronous coded caching scheme is proposed, which provides asynchronous and synchronous transmission methods for different delay requirements. The closed-form expression of the fronthaul load is established for the special case where the number of requests during each time slot is fixed, and the upper and lower bounds of the fronthaul load are given for the general case where the number of requests during each time slot is random. The simulation results show that our proposed scheme can create considerable coded-multicasting opportunities in asynchronous request scenarios.

Journal ArticleDOI
TL;DR: This paper proposes a low complexity algorithm that approximates the solution of the proposed optimization problem of vehicle-cell association in millimeter wave (mmWave) communication networks and achieves up to 15% gains in terms of sum rate and 20% reduction in VUE outages.
Abstract: Vehicle-to-everything (V2X) communication is a growing area of communication with a variety of use cases. This paper investigates the problem of vehicle-cell association in millimeter wave (mmWave) communication networks. The aim is to maximize the time average rate per vehicular user (VUE) while ensuring a target minimum rate for all VUEs with low signaling overhead. We first formulate the user (vehicle) association problem as a discrete non-convex optimization problem. Then, by leveraging tools from machine learning, specifically distributed deep reinforcement learning (DDRL) and the asynchronous actor critic algorithm (A3C), we propose a low complexity algorithm that approximates the solution of the proposed optimization problem. The proposed DDRL-based algorithm endows every road side unit (RSU) with a local RL agent that selects a local action based on the observed input state. Actions of different RSUs are forwarded to a central entity, which computes a global reward that is then fed back to the RSUs. It is shown that each independently trained RL agent performs the vehicle-RSU association action with low control overhead and less computational complexity compared to running an online complex algorithm to solve the non-convex optimization problem. Finally, simulation results show that the proposed solution achieves up to 15% gains in terms of sum rate and 20% reduction in VUE outages compared to several baseline designs.

Journal ArticleDOI
TL;DR: In this article, a neural network aided remote unmanned aerial vehicle (UAV) online control algorithm, coined oHJB, is proposed by downloading a UAV state, a base station (BS) trains an HJB NN that solves the Hamilton-Jacobi-Bellman equation (HJB) in real time, yielding a sub-optimal control action.
Abstract: This letter proposes a neural network (NN) aided remote unmanned aerial vehicle (UAV) online control algorithm, coined oHJB. By downloading a UAV's state, a base station (BS) trains an HJB NN that solves the Hamilton-Jacobi-Bellman equation (HJB) in real time, yielding a sub-optimal control action. Initially, the BS uploads this control action to the UAV. If the HJB NN is sufficiently trained and the UAV is far away, the BS uploads the HJB NN model, enabling the UAV to locally carry out control decisions even when the connection is lost. Simulations corroborate the effectiveness of oHJB in reducing the UAV's travel time and energy by utilizing the trade-off between uploading delays and control robustness in poor channel conditions.

Journal ArticleDOI
TL;DR: Mix2FLD is a novel communication-efficient and privacy-preserving distributed machine learning framework that addresses uplink-downlink capacity asymmetry: local model outputs are uploaded to a server in the uplink as in federated distillation (FD), whereas global model parameters are downloaded in the downlink as in FL.
Abstract: This letter proposes a novel communication-efficient and privacy-preserving distributed machine learning framework, coined Mix2FLD. To address uplink-downlink capacity asymmetry, local model outputs are uploaded to a server in the uplink as in federated distillation (FD), whereas global model parameters are downloaded in the downlink as in federated learning (FL). This requires a model output-to-parameter conversion at the server, after collecting additional data samples from devices. To preserve privacy while not compromising accuracy, linearly mixed-up local samples are uploaded, and inversely mixed up across different devices at the server. Numerical evaluations show that Mix2FLD achieves up to 16.7% higher test accuracy while reducing convergence time by up to 18.8% under asymmetric uplink-downlink channels, compared to FL.