
Showing papers on "Server published in 2020"


Journal ArticleDOI
TL;DR: The concept of Federated Learning (FL), as surveyed in this paper, enables the collaborative training of an ML model and also enables DL for mobile edge network optimization in large-scale and complex mobile edge networks, where heterogeneous devices with varying constraints are involved.
Abstract: In recent years, mobile devices are equipped with increasingly advanced sensing and computing capabilities. Coupled with advancements in Deep Learning (DL), this opens up countless possibilities for meaningful applications, e.g., for medical purposes and in vehicular networks. Traditional cloud-based Machine Learning (ML) approaches require the data to be centralized in a cloud server or data center. However, this results in critical issues related to unacceptable latency and communication inefficiency. To this end, Mobile Edge Computing (MEC) has been proposed to bring intelligence closer to the edge, where data is produced. However, conventional enabling technologies for ML at mobile edge networks still require personal data to be shared with external parties, e.g., edge servers. Recently, in light of increasingly stringent data privacy legislations and growing privacy concerns, the concept of Federated Learning (FL) has been introduced. In FL, end devices use their local data to train an ML model required by the server. The end devices then send the model updates rather than raw data to the server for aggregation. FL can serve as an enabling technology in mobile edge networks since it enables the collaborative training of an ML model and also enables DL for mobile edge network optimization. However, in a large-scale and complex mobile edge network, heterogeneous devices with varying constraints are involved. This raises challenges of communication costs, resource allocation, and privacy and security in the implementation of FL at scale. In this survey, we begin with an introduction to the background and fundamentals of FL. Then, we highlight the aforementioned challenges of FL implementation and review existing solutions. Furthermore, we present the applications of FL for mobile edge network optimization. Finally, we discuss the important challenges and future research directions in FL.

895 citations
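To make the FL workflow the survey describes concrete (clients train locally, upload only model updates, server aggregates), here is a minimal sketch of one federated round. The least-squares local step and the size-weighted averaging are illustrative stand-ins, not the survey's prescribed algorithm:

```python
import numpy as np

def local_update(weights, data, lr=0.01):
    # Illustrative local step: one gradient step on a least-squares loss.
    X, y = data
    grad = X.T @ (X @ weights - y) / len(y)
    return weights - lr * grad

def federated_round(global_weights, client_datasets):
    # One FL round: clients train locally on private data, then the
    # server aggregates the returned models, weighted by dataset size.
    updates, sizes = [], []
    for data in client_datasets:
        updates.append(local_update(global_weights.copy(), data))
        sizes.append(len(data[1]))
    total = sum(sizes)
    return sum(w * (n / total) for w, n in zip(updates, sizes))

# Toy run: three clients share a 5-dimensional linear model.
rng = np.random.default_rng(0)
clients = [(rng.normal(size=(20, 5)), rng.normal(size=20)) for _ in range(3)]
w = np.zeros(5)
for _ in range(10):
    w = federated_round(w, clients)
```

Only `w`-sized updates cross the network; the raw `(X, y)` data never leaves a client, which is the privacy property the survey builds on.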


Journal ArticleDOI
TL;DR: In this paper, the authors propose sparse ternary compression (STC), a new compression framework that is specifically designed to meet the requirements of the federated learning environment. STC extends the existing compression technique of top-$k$ gradient sparsification with a novel mechanism to enable downstream compression as well as ternarization and optimal Golomb encoding of the weight updates.
Abstract: Federated learning allows multiple parties to jointly train a deep learning model on their combined data, without any of the participants having to reveal their local data to a centralized server. This form of privacy-preserving collaborative learning, however, comes at the cost of a significant communication overhead during training. To address this problem, several compression methods have been proposed in the distributed training literature that can reduce the amount of required communication by up to three orders of magnitude. These existing methods, however, are only of limited utility in the federated learning setting, as they either only compress the upstream communication from the clients to the server (leaving the downstream communication uncompressed) or only perform well under idealized conditions, such as i.i.d. distribution of the client data, which typically cannot be found in federated learning. In this article, we propose sparse ternary compression (STC), a new compression framework that is specifically designed to meet the requirements of the federated learning environment. STC extends the existing compression technique of top-$k$ gradient sparsification with a novel mechanism to enable downstream compression as well as ternarization and optimal Golomb encoding of the weight updates. Our experiments on four different learning tasks demonstrate that STC distinctively outperforms federated averaging in common federated learning scenarios. These results advocate for a paradigm shift in federated optimization toward high-frequency low-bitwidth communication, in particular in bandwidth-constrained learning environments.

618 citations
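The upstream half of STC's idea (top-$k$ sparsification followed by ternarization) can be sketched as below; the Golomb position encoding, downstream compression, and residual accumulation the paper also uses are omitted, and `sparsity` is an illustrative parameter:

```python
import numpy as np

def sparse_ternary_compress(update, sparsity=0.01):
    # Keep only the top-k entries of the update by magnitude, then
    # ternarize them to {-mu, 0, +mu}, where mu is the mean magnitude
    # of the kept entries.
    flat = update.ravel()
    k = max(1, int(sparsity * flat.size))
    idx = np.argpartition(np.abs(flat), -k)[-k:]  # positions of top-k magnitudes
    mu = np.abs(flat[idx]).mean()
    out = np.zeros_like(flat)
    out[idx] = np.sign(flat[idx]) * mu
    return out.reshape(update.shape)

g = np.random.default_rng(1).normal(size=(4, 8))
print(sparse_ternary_compress(g, sparsity=0.1))
```

After this step an update is fully described by one float (`mu`), the kept positions, and their signs, which is what makes the subsequent lossless position encoding so effective.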


Proceedings ArticleDOI
07 Jun 2020
TL;DR: In this paper, the authors proposed a client-edge-cloud hierarchical federated learning system, supported with a HierFAVG algorithm that allows multiple edge servers to perform partial model aggregation.
Abstract: Federated Learning is a collaborative machine learning framework to train a deep learning model without accessing clients' private data. Previous works assume one central parameter server either at the cloud or at the edge. The cloud server can access more data but with excessive communication overhead and long latency, while the edge server enjoys more efficient communications with the clients. To combine their advantages, we propose a client-edge-cloud hierarchical Federated Learning system, supported with a HierFAVG algorithm that allows multiple edge servers to perform partial model aggregation. In this way, the model can be trained faster and better communication-computation trade-offs can be achieved. Convergence analysis is provided for HierFAVG and the effects of key parameters are also investigated, which lead to qualitative design guidelines. Empirical experiments verify the analysis and demonstrate the benefits of this hierarchical architecture in different data distribution scenarios. Particularly, it is shown that by introducing the intermediate edge servers, the model training time and the energy consumption of the end devices can be simultaneously reduced compared to cloud-based Federated Learning.

433 citations
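A minimal sketch of the hierarchical aggregation pattern follows: each edge server averages its own clients' models, and the cloud then averages the edge models. HierFAVG proper aggregates at the edge every k1 local epochs and at the cloud every k2 edge aggregations; this sketch collapses that schedule into a single round, and `local_train` is an illustrative least-squares stand-in:

```python
import numpy as np

def local_train(model, client_data, steps=1, lr=0.01):
    # Illustrative local training: gradient steps on a least-squares loss.
    X, y = client_data
    for _ in range(steps):
        model -= lr * X.T @ (X @ model - y) / len(y)
    return model

def average(models):
    return sum(models) / len(models)

def hierarchical_round(cloud_model, edge_groups):
    # Client-edge-cloud round: partial aggregation at each edge server,
    # then global aggregation of the edge models at the cloud.
    edge_models = []
    for clients in edge_groups:  # one group of clients per edge server
        client_models = [local_train(cloud_model.copy(), c) for c in clients]
        edge_models.append(average(client_models))
    return average(edge_models)
```

The communication saving comes from the structure itself: clients talk only to their nearby edge server, and only the (few) edge models travel over the backhaul to the cloud.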


Journal ArticleDOI
TL;DR: VerifyNet is proposed as the first privacy-preserving and verifiable federated learning framework, under which it is impossible for an adversary to deceive users by forging Proof, unless it can solve the NP-hard problem adopted in the model.
Abstract: As an emerging training model with neural networks, federated learning has received widespread attention due to its ability to update parameters without collecting users' raw data. However, since adversaries can track and derive participants' privacy from the shared gradients, federated learning is still exposed to various security and privacy threats. In this paper, we consider two major issues in the training process over deep neural networks (DNNs): 1) how to protect users' privacy (i.e., local gradients) in the training process and 2) how to verify the integrity (or correctness) of the aggregated results returned from the server. To solve the above problems, several approaches focusing on secure or privacy-preserving federated learning have been proposed and applied in diverse scenarios. However, it remains an open problem to enable clients to verify whether the cloud server is operating correctly while guaranteeing users' privacy in the training process. In this paper, we propose VerifyNet, the first privacy-preserving and verifiable federated learning framework. Specifically, we first propose a double-masking protocol to guarantee the confidentiality of users' local gradients during federated learning. Then, the cloud server is required to provide a Proof of the correctness of its aggregated results to each user. We claim that it is impossible for an adversary to deceive users by forging the Proof, unless it can solve the NP-hard problem adopted in our model. In addition, VerifyNet also supports users dropping out during the training process. The extensive experiments conducted on real-world data also demonstrate the practical performance of our proposed scheme.

388 citations
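Double-masking protocols build on pairwise masks that cancel when the server sums all users' masked gradients. The sketch below shows only that cancellation step; the second (self) mask, the correctness Proof, and the dropout handling that VerifyNet adds are omitted, and `pair_seed` is a stand-in for a real pairwise key agreement:

```python
import numpy as np

def masked_update(update, uid, all_ids, pair_seed, dim):
    # Add one pairwise mask per peer; user u adds +mask(u, v) when
    # u < v and -mask(u, v) otherwise, so every mask cancels when the
    # server sums all users' masked updates.
    masked = update.copy()
    for pid in all_ids:
        if pid == uid:
            continue
        mask = np.random.default_rng(pair_seed(uid, pid)).normal(size=dim)
        masked += mask if uid < pid else -mask
    return masked

# Symmetric per-pair seed: hypothetical stand-in for key agreement.
pair_seed = lambda u, v: hash(frozenset((u, v))) % (2**32)
dim, users = 4, [0, 1, 2]
updates = [np.ones(dim) * (i + 1) for i in users]
total = sum(masked_update(updates[i], i, users, pair_seed, dim) for i in users)
print(total)  # [6. 6. 6. 6.]: masks cancel, only the aggregate survives
```

Each individual masked update looks random to the server, yet the aggregate is exact, which is the confidentiality property the paper's verification layer is then built on top of.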


Journal ArticleDOI
TL;DR: The results demonstrate that the proposed asynchronous federated deep learning outperforms the baseline algorithm both in terms of communication cost and model accuracy.
Abstract: Federated learning obtains a central model on the server by aggregating models trained locally on clients. As a result, federated learning does not require clients to upload their data to the server, thereby preserving the data privacy of the clients. One challenge in federated learning is to reduce the client–server communication, since the end devices typically have very limited communication bandwidth. This article presents an enhanced federated learning technique by proposing an asynchronous learning strategy on the clients and a temporally weighted aggregation of the local models on the server. In the asynchronous learning strategy, different layers of the deep neural networks (DNNs) are categorized into shallow and deep layers, and the parameters of the deep layers are updated less frequently than those of the shallow layers. Furthermore, a temporally weighted aggregation strategy is introduced on the server to make use of the previously trained local models, thereby enhancing the accuracy and convergence of the central model. The proposed algorithm is empirically evaluated on two data sets with different DNNs. Our results demonstrate that the proposed asynchronous federated deep learning outperforms the baseline algorithm both in terms of communication cost and model accuracy.

364 citations
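The temporally weighted aggregation can be sketched as follows: each cached local model is discounted by how many rounds old it is before averaging. The exponential decay base and the size weighting here are plausible illustrative choices; the paper's exact weighting schedule may differ:

```python
import numpy as np

def temporally_weighted_aggregate(local_models, timestamps, current_round, sizes):
    # Weight each cached local model by its dataset size and by an
    # exponential decay in its staleness (rounds since it was trained).
    # The decay base e/2 is one plausible choice, for illustration only.
    decay = np.e / 2
    w = np.array([n * decay ** -(current_round - t)
                  for n, t in zip(sizes, timestamps)], dtype=float)
    w /= w.sum()
    return sum(wi * m for wi, m in zip(w, local_models))

models = [np.ones(3), 2 * np.ones(3), 3 * np.ones(3)]
print(temporally_weighted_aggregate(models, timestamps=[9, 5, 1],
                                    current_round=10, sizes=[100, 100, 100]))
```

In the toy call, the model last updated at round 1 contributes almost nothing at round 10, so stale contributions from infrequently updated deep layers cannot drag the central model backward.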


Journal ArticleDOI
TL;DR: The incentive mechanism for federated learning, which motivates edge nodes to contribute to model training, is studied, and a deep reinforcement learning (DRL)-based incentive mechanism is designed to determine the optimal pricing strategy for the parameter server and the optimal training strategies for edge nodes.
Abstract: Internet of Things (IoT) generates large amounts of data at the network edge. Machine learning models are often built on these data to enable the detection, classification, and prediction of future events. Due to network bandwidth, storage, and especially privacy concerns, it is often impossible to send all the IoT data to the data center for centralized model training. To address these issues, federated learning has been proposed to let nodes use their local data to train models, which are then aggregated to synthesize a global model. Most of the existing work has focused on designing learning algorithms with provable convergence time, but other issues, such as incentive mechanisms, are unexplored. Although incentive mechanisms have been extensively studied in network and computation resource allocation, they cannot be applied to federated learning directly due to the unique challenges of unshared information and the difficulty of contribution evaluation. In this article, we study the incentive mechanism for federated learning to motivate edge nodes to contribute to model training. Specifically, a deep reinforcement learning (DRL)-based incentive mechanism is designed to determine the optimal pricing strategy for the parameter server and the optimal training strategies for edge nodes. Finally, numerical experiments are implemented to evaluate the efficiency of the proposed DRL-based incentive mechanism.

327 citations


Journal ArticleDOI
TL;DR: A more thorough summary of the most relevant protocols, platforms, and real-life use-cases of FL is provided to enable data scientists to build better privacy-preserving solutions for industries in critical need of FL.
Abstract: This paper provides a comprehensive study of Federated Learning (FL) with an emphasis on enabling software and hardware platforms, protocols, real-life applications and use-cases. FL can be applicable to multiple domains, but applying it to different industries has its own set of obstacles. FL is known as collaborative learning, where algorithm(s) get trained across multiple devices or servers with decentralized data samples without having to exchange the actual data. This approach is radically different from other more established techniques, such as getting the data samples uploaded to servers or having data in some form of distributed infrastructure. FL, on the other hand, generates more robust models without sharing data, leading to privacy-preserving solutions with higher security and access privileges to data. This paper starts by providing an overview of FL. Then, it gives an overview of technical details that pertain to FL enabling technologies, protocols, and applications. Compared to other survey papers in the field, our objective is to provide a more thorough summary of the most relevant protocols, platforms, and real-life use-cases of FL to enable data scientists to build better privacy-preserving solutions for industries in critical need of FL. We also provide an overview of key challenges presented in the recent literature and provide a summary of related research work. Moreover, we explore both the challenges and advantages of FL and present detailed service use-cases to illustrate how different architectures and protocols that use FL can fit together to deliver desired results.

312 citations


Journal ArticleDOI
TL;DR: Key advances in Galaxy's user interface include enhancements for analyzing large dataset collections as well as interactive tools for exploratory data analysis and support for federated identity and access management and increased ability to distribute analysis jobs to remote resources.
Abstract: Galaxy (https://galaxyproject.org) is a web-based computational workbench used by tens of thousands of scientists across the world to analyze large biomedical datasets. Since 2005, the Galaxy project has fostered a global community focused on achieving accessible, reproducible, and collaborative research. Together, this community develops the Galaxy software framework, integrates analysis tools and visualizations into the framework, runs public servers that make Galaxy available via a web browser, performs and publishes analyses using Galaxy, leads bioinformatics workshops that introduce and use Galaxy, and develops interactive training materials for Galaxy. Over the last two years, all aspects of the Galaxy project have grown: code contributions, tools integrated, users, and training materials. Key advances in Galaxy's user interface include enhancements for analyzing large dataset collections as well as interactive tools for exploratory data analysis. Extensions to Galaxy's framework include support for federated identity and access management and increased ability to distribute analysis jobs to remote resources. New community resources include large public servers in Europe and Australia, an increasing number of regional and local Galaxy communities, and substantial growth in the Galaxy Training Network.

276 citations


Journal ArticleDOI
TL;DR: This work proposes adapting FedAvg to use a distributed form of Adam optimization, greatly reducing the number of rounds to convergence, along with novel compression techniques, to produce communication-efficient FedAvg (CE-FedAvg), which can converge to a target accuracy in fewer rounds and is more robust to aggressive compression.
Abstract: The rapidly expanding number of Internet of Things (IoT) devices is generating huge quantities of data, but public concern over data privacy means users are apprehensive to send data to a central server for machine learning (ML) purposes. The easily changed behavior of edge infrastructure that software-defined networking (SDN) provides makes it possible to collate IoT data at edge servers and gateways, where federated learning (FL) can be performed: building a central model without uploading data to the server. FedAvg is an FL algorithm that has been the subject of much study; however, it suffers from a large number of rounds to convergence with non-independent identically distributed (non-IID) client data sets and high communication costs per round. We propose adapting FedAvg to use a distributed form of Adam optimization, greatly reducing the number of rounds to convergence, along with novel compression techniques, to produce communication-efficient FedAvg (CE-FedAvg). We perform extensive experiments with the MNIST/CIFAR-10 data sets, IID/non-IID client data, varying numbers of clients, client participation rates, and compression rates. These show that CE-FedAvg can converge to a target accuracy in up to $6\times$ fewer rounds than similarly compressed FedAvg, while uploading up to $3\times$ less data, and is more robust to aggressive compression. Experiments on an edge-computing-like testbed using Raspberry Pi clients also show that CE-FedAvg is able to reach a target accuracy in up to $1.7\times$ less real time than FedAvg.

271 citations
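One way to picture a "distributed form of Adam" at a high level is to keep Adam's moment estimates at the server and apply them to the averaged client update, as sketched below. CE-FedAvg's actual scheme also communicates and compresses Adam state, so treat this server-side variant purely as an illustration of adaptive aggregation:

```python
import numpy as np

class ServerAdam:
    # Adam applied at the server to the averaged client update --
    # an illustration of adaptive federated aggregation, not
    # CE-FedAvg's exact bookkeeping.
    def __init__(self, dim, lr=0.01, b1=0.9, b2=0.999, eps=1e-8):
        self.m, self.v, self.t = np.zeros(dim), np.zeros(dim), 0
        self.lr, self.b1, self.b2, self.eps = lr, b1, b2, eps

    def step(self, weights, avg_update):
        self.t += 1
        self.m = self.b1 * self.m + (1 - self.b1) * avg_update
        self.v = self.b2 * self.v + (1 - self.b2) * avg_update ** 2
        m_hat = self.m / (1 - self.b1 ** self.t)  # bias correction
        v_hat = self.v / (1 - self.b2 ** self.t)
        return weights - self.lr * m_hat / (np.sqrt(v_hat) + self.eps)

opt = ServerAdam(dim=5)
w = np.zeros(5)
avg_update = np.random.default_rng(2).normal(size=5)  # stand-in for mean client gradient
w = opt.step(w, avg_update)
```

The per-coordinate step sizes from `v_hat` are what cut the round count relative to plain averaging, especially under non-IID client data where raw update magnitudes vary widely.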


Journal ArticleDOI
TL;DR: This article incorporates local differential privacy into federated learning for protecting the privacy of updated local models and proposes a random distributed update scheme to remove the security threats introduced by a centralized curator.
Abstract: Driven by technologies such as mobile edge computing and 5G, recent years have witnessed the rapid development of urban informatics, where a large amount of data is generated. To cope with the growing data, artificial intelligence algorithms have been widely exploited. Federated learning is a promising paradigm for distributed edge computing, which enables edge nodes to train models locally without transmitting their data to a server. However, the security and privacy concerns of federated learning hinder its wide deployment in urban applications such as vehicular networks. In this article, we propose a differentially private asynchronous federated learning scheme for resource sharing in vehicular networks. To build a secure and robust federated learning scheme, we incorporate local differential privacy into federated learning to protect the privacy of updated local models. We further propose a random distributed update scheme to remove the security threats introduced by a centralized curator. Moreover, we boost convergence in our proposed scheme through update verification and weighted aggregation. We evaluate our scheme on three real-world datasets. Numerical results show the high accuracy and efficiency of our proposed scheme while preserving data privacy.

248 citations
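The local-perturbation step at the heart of such schemes can be sketched as below: each client clips its model update and adds noise before uploading, so no aggregator ever sees the exact gradients. The Laplace mechanism and its calibration here are illustrative; the paper's specific LDP mechanism and parameters may differ:

```python
import numpy as np

def ldp_perturb(update, clip=1.0, epsilon=1.0, rng=None):
    # Clip the update to a bounded L2 norm, then add Laplace noise so
    # the receiver never sees the exact local model. The noise scale
    # below is illustrative only; a real mechanism must be calibrated
    # to the actual sensitivity and the privacy budget epsilon.
    rng = rng or np.random.default_rng()
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip / (norm + 1e-12))
    return clipped + rng.laplace(scale=2 * clip / epsilon, size=update.shape)

noisy = ldp_perturb(np.array([0.8, -1.5, 0.3]), clip=1.0, epsilon=2.0)
```

Because the noise is added on the device, the guarantee is local: it holds even against the aggregator itself, which is what lets the paper drop the trusted centralized curator.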


Journal ArticleDOI
TL;DR: This work proposes EUAGame, a game-theoretic approach that formulates the EUA problem as a potential game and designs a novel decentralized algorithm for finding a Nash equilibrium in the game as a solution to theEUA problem.
Abstract: Edge Computing provides mobile and Internet-of-Things (IoT) app vendors with a new distributed computing paradigm which allows an app vendor to deploy its app at hired edge servers distributed near app users at the edge of the cloud. This way, app users can be allocated to hired edge servers nearby to minimize network latency and energy consumption. A cost-effective edge user allocation (EUA) requires that as many app users as possible be served at minimum overall system cost. Finding a centralized optimal solution to this EUA problem is NP-hard. Thus, we propose EUAGame, a game-theoretic approach that formulates the EUA problem as a potential game. We analyze the game and show that it admits a Nash equilibrium. Then, we design a novel decentralized algorithm for finding a Nash equilibrium in the game as a solution to the EUA problem. The performance of this algorithm is theoretically analyzed and experimentally evaluated. The results show that the EUA problem can be solved effectively and efficiently.
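The flavor of such a decentralized algorithm can be sketched with best-response dynamics on a congestion-style game, a standard example of a potential game in which these dynamics are guaranteed to terminate at a pure Nash equilibrium. The cost model below (`base_cost` plus a load penalty) is a hypothetical stand-in for the paper's latency/energy costs:

```python
import numpy as np

def best_response_allocation(base_cost, load_penalty=1.0, max_iters=100):
    # Each user repeatedly switches to the server minimizing its own
    # cost given everyone else's current choice; in a potential game
    # these dynamics terminate at a pure Nash equilibrium.
    n_users, n_servers = base_cost.shape
    choice = np.random.default_rng(0).integers(n_servers, size=n_users)
    for _ in range(max_iters):
        changed = False
        for u in range(n_users):
            load = np.bincount(np.delete(choice, u), minlength=n_servers)
            best = int(np.argmin(base_cost[u] + load_penalty * load))
            if best != choice[u]:
                choice[u], changed = best, True
        if not changed:
            break  # no user wants to deviate: a Nash equilibrium
    return choice

alloc = best_response_allocation(np.random.default_rng(3).random((6, 3)))
```

Each user needs only its own costs and the servers' current loads, which is what makes the iteration decentralized in spirit.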

Journal ArticleDOI
TL;DR: This article develops an asynchronous advantage actor–critic-based cooperative computation offloading and resource allocation algorithm to solve the MDP problem and designs a multiobjective function to maximize the computation rate of MEC systems and the transaction throughput of blockchain systems.
Abstract: Mobile-edge computing (MEC) is a promising paradigm to improve the quality of computation experience of mobile devices because it allows mobile devices to offload computing tasks to MEC servers, benefiting from the powerful computing resources of MEC servers. However, the existing computation-offloading works also have some open issues: 1) security and privacy issues; 2) cooperative computation offloading; and 3) dynamic optimization. To address the security and privacy issues, we employ the blockchain technology that ensures the reliability and irreversibility of data in MEC systems. Meanwhile, we jointly design and optimize the performance of blockchain and MEC. In this article, we develop a cooperative computation offloading and resource allocation framework for blockchain-enabled MEC systems. In the framework, we design a multiobjective function to maximize the computation rate of MEC systems and the transaction throughput of blockchain systems by jointly optimizing offloading decision, power allocation, block size, and block interval. Due to the dynamic characteristics of the wireless fading channel and the processing queues at MEC servers, the joint optimization is formulated as a Markov decision process (MDP). To tackle the dynamics and complexity of the blockchain-enabled MEC system, we develop an asynchronous advantage actor–critic-based cooperative computation offloading and resource allocation algorithm to solve the MDP problem. In the algorithm, deep neural networks are optimized by utilizing asynchronous gradient descent and eliminating the correlation of data. The simulation results show that the proposed algorithm converges fast and achieves significant performance improvements over existing schemes in terms of total reward.

Journal ArticleDOI
TL;DR: A software-defined networking (SDN) based load-balancing task offloading scheme in FiWi enhanced VECNs is proposed, where SDN is introduced to provide support for centralized network and vehicle information management.
Abstract: Recently, the rapid advance of vehicular networks has led to the emergence of diverse delay-sensitive vehicular applications such as automatic driving and auto navigation. Note that existing resource-constrained vehicles cannot adequately meet these demands for low/ultra-low latency. By offloading parts of the vehicles' compute-intensive tasks to the edge servers in proximity, mobile edge computing is envisioned as a promising paradigm, giving rise to vehicular edge computing networks (VECNs). However, most existing works on task offloading in VECNs did not take the load balancing of the computation resources at the edge servers into account. To address these issues, and given the high dynamics of vehicular networks, we introduce fiber-wireless (FiWi) technology to enhance VECNs, due to its advantages in centralized network management and its support for multiple communication techniques. Aiming to minimize the processing delay of the vehicles' computation tasks, we propose a software-defined networking (SDN) based load-balancing task offloading scheme in FiWi enhanced VECNs, where SDN is introduced to provide support for centralized network and vehicle information management. Extensive analysis and numerical results corroborate that our proposed load-balancing scheme can achieve superior performance on processing delay reduction by utilizing the edge servers' computation resources more efficiently.

Journal ArticleDOI
TL;DR: This article proposes a learning-based channel selection framework with service reliability awareness, energy awareness, backlog awareness, and conflict awareness, by leveraging the combined power of machine learning, Lyapunov optimization, and matching theory, and proves that the proposed framework can achieve guaranteed performance.
Abstract: Edge computing provides a promising paradigm to support the implementation of Industrial Internet of Things (IIoT) by offloading computational-intensive tasks from resource-limited machine-type devices (MTDs) to powerful edge servers. However, the performance gain of edge computing may be severely compromised due to limited spectrum resources, capacity-constrained batteries, and context unawareness. In this article, we consider the optimization of channel selection that is critical for efficient and reliable task delivery. We aim at maximizing the long-term throughput subject to long-term constraints of energy budget and service reliability. We propose a learning-based channel selection framework with service reliability awareness, energy awareness, backlog awareness, and conflict awareness, by leveraging the combined power of machine learning, Lyapunov optimization, and matching theory. We provide rigorous theoretical analysis, and prove that the proposed framework can achieve guaranteed performance with a bounded deviation from the optimal performance with global state information (GSI) based on only local and causal information. Finally, simulations are conducted under both single-MTD and multi-MTD scenarios to verify the effectiveness and reliability of the proposed framework.

Journal ArticleDOI
TL;DR: In this paper, the problem of joint computing, caching, communication, and control (4C) in big data MEC is formulated as an optimization problem whose goal is to jointly optimize a linear combination of the bandwidth consumption and network latency.
Abstract: The concept of Multi-access Edge Computing (MEC) has been recently introduced to supplement cloud computing by deploying MEC servers to the network edge so as to reduce the network delay and alleviate the load on cloud data centers. However, compared to the resourceful cloud, an MEC server has limited resources. When each MEC server operates independently, it cannot handle all computational and big data demands stemming from users' devices. Consequently, the MEC server cannot provide significant gains in overhead reduction of data exchange between users' devices and the remote cloud. Therefore, joint Computing, Caching, Communication, and Control (4C) at the edge with MEC server collaboration is needed. To address these challenges, in this paper, the problem of joint 4C in big data MEC is formulated as an optimization problem whose goal is to jointly optimize a linear combination of the bandwidth consumption and network latency. However, the formulated problem is shown to be non-convex. As a result, a proximal upper bound problem of the original formulated problem is proposed. To solve the proximal upper bound problem, the block successive upper bound minimization method is applied. Simulation results show that the proposed approach satisfies computation deadlines and minimizes bandwidth consumption and network latency.

Journal ArticleDOI
TL;DR: A reinforcement-learning-based state-action-reward-state-action (RL-SARSA) algorithm is proposed to resolve the resource management problem at the edge server and make optimal offloading decisions that minimize system cost, including energy consumption and computing time delay.
Abstract: In recent years, computation offloading has become an effective way to overcome the constraints of mobile devices (MDs) by offloading delay-sensitive and computation-intensive mobile application tasks to remote cloud-based data centers. Smart cities can benefit from offloading to edge points in the framework of the so-called cyber–physical–social systems (CPSS), as for example in traffic violation tracking cameras. We assume that there are mobile edge computing networks (MECNs) in more than one region, and they consist of multiple access points, multiple edge servers, and $N$ MDs, where each MD has $M$ independent real-time massive tasks. The MDs can connect to a MECN through the access points or the mobile network. Each task can be processed locally by the MD itself or remotely. There are three offloading options: nearest edge server, adjacent edge server, and remote cloud. We propose a reinforcement-learning-based state-action-reward-state-action (RL-SARSA) algorithm to resolve the resource management problem in the edge server, and to make the optimal offloading decision for minimizing system cost, including energy consumption and computing time delay. We call this method OD-SARSA (offloading decision-based SARSA). We compared our proposed method with reinforcement-learning-based Q-learning (RL-QL), and we conclude that the performance of the former is superior to that of the latter.
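At its core, the offloading decision is learned with the tabular SARSA update, sketched below; the state encoding, action set, and reward shaping are hypothetical simplifications of the paper's system model:

```python
import numpy as np

def sarsa_update(Q, s, a, r, s_next, a_next, alpha=0.1, gamma=0.9):
    # On-policy SARSA: Q(s,a) += alpha * [r + gamma * Q(s',a') - Q(s,a)].
    # Here r would be the negative system cost (energy plus delay).
    Q[s, a] += alpha * (r + gamma * Q[s_next, a_next] - Q[s, a])
    return Q

# Toy table: 4 states x 4 actions (local, nearest edge, adjacent edge, cloud).
Q = np.zeros((4, 4))
Q = sarsa_update(Q, s=0, a=1, r=-2.5, s_next=2, a_next=0)
```

Unlike Q-learning's off-policy max over next actions, SARSA bootstraps from the action the policy actually takes next (`a_next`), which is the distinction the paper's RL-QL comparison rests on.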

Journal ArticleDOI
TL;DR: A comprehensive survey on the use of ML in MEC systems is provided, offering insight into the current progress of this research area, and helpful guidance is supplied by pointing out which MEC challenges can be solved by ML solutions, what the current trending algorithms in frontier ML research are, and how they could be used in MEC.
Abstract: Mobile Edge Computing (MEC) is considered an essential future service for the implementation of 5G networks and the Internet of Things, as it is the best method of delivering computation and communication resources to mobile devices. It is based on the connection of the users to servers located on the edge of the network, which is especially relevant for real-time applications that demand minimal latency. In order to guarantee a resource-efficient MEC (which, for example, could mean improved Quality of Service for users or lower costs for service providers), it is important to consider certain aspects of the service model, such as where to offload the tasks generated by the devices, how many resources to allocate to each user (especially in the wired or wireless device-server communication) and how to handle inter-server communication. However, in MEC scenarios with many and varied users, servers and applications, these problems are characterized by parameters with exceedingly high levels of dimensionality, resulting in too much data to be processed and complicating the task of finding efficient configurations. This will be particularly troublesome when 5G networks and the Internet of Things roll out, with their massive amounts of devices. To address this concern, the best solution is to utilize Machine Learning (ML) algorithms, which enable the computer to draw conclusions and make predictions based on existing data without human supervision, leading to quick near-optimal solutions even in problems with high dimensionality. Indeed, in scenarios with too much data and too many parameters, ML algorithms are often the only feasible alternative. In this paper, a comprehensive survey on the use of ML in MEC systems is provided, offering insight into the current progress of this research area. Furthermore, helpful guidance is supplied by pointing out which MEC challenges can be solved by ML solutions, what the current trending algorithms in frontier ML research are, and how they could be used in MEC. These pieces of information should prove fundamental in encouraging future research that combines ML and MEC.

Journal ArticleDOI
TL;DR: A botnet detection system based on a two-level deep learning framework for semantically discriminating botnets and legitimate behaviors at the application layer of the domain name system (DNS) services is proposed.
Abstract: Internet of Things applications for smart cities have currently become a primary target for advanced persistent threats of botnets. This article proposes a botnet detection system based on a two-level deep learning framework for semantically discriminating botnets and legitimate behaviors at the application layer of the domain name system (DNS) services. In the first level of the framework, the similarity measures of DNS queries are estimated using siamese networks based on a predefined threshold for selecting the most frequent DNS information across Ethernet connections. In the second level of the framework, a domain generation algorithm based on deep learning architectures is suggested for categorizing normal and abnormal domain names. The framework is highly scalable on a commodity hardware server due to its potential design of analyzing DNS data. The proposed framework was evaluated using two datasets and was compared with recent deep learning models. Various visualization methods were also employed to understand the characteristics of the dataset and to visualize the embedding features. The experimental results revealed substantial improvements in terms of F1-score, speed of detection, and false alarm rate.

Journal ArticleDOI
TL;DR: A single edge server that assists a mobile user in executing a sequence of computation tasks is considered, and a mixed integer non-linear programming (MINLP) problem is formulated that jointly optimizes the service caching placement, computation offloading decisions, and system resource allocation.
Abstract: In mobile edge computing (MEC) systems, edge service caching refers to pre-storing the necessary programs for executing computation tasks at MEC servers. Service caching effectively reduces the real-time delay/bandwidth cost of acquiring and initializing service applications when computation tasks are offloaded to the MEC servers. The limited caching space at resource-constrained edge servers calls for careful design of caching placement to determine which programs to cache over time. This is in general a complicated problem that highly correlates to the computation offloading decisions of computation tasks, i.e., whether or not to offload a task for edge execution. In this paper, we consider a single edge server that assists a mobile user (MU) in executing a sequence of computation tasks. In particular, the MU can upload and run its customized programs at the edge server, while the server can selectively cache the previously generated programs for future reuse. To minimize the computation delay and energy consumption of the MU, we formulate a mixed integer non-linear programming (MINLP) problem that jointly optimizes the service caching placement, computation offloading decisions, and system resource allocation (e.g., CPU processing frequency and transmit power of the MU). To tackle the problem, we first derive the closed-form expressions of the optimal resource allocation solutions, and subsequently transform the MINLP into an equivalent pure 0-1 integer linear programming (ILP) problem that is much simpler to solve. To further reduce the complexity in solving the ILP, we exploit the underlying structures of the caching causality and task dependency models, and accordingly devise a reduced-complexity alternating minimization technique to update the caching placement and offloading decisions alternately. Extensive simulations show that the proposed joint optimization techniques achieve substantial resource savings for the MU compared to other representative benchmark methods considered.

Journal ArticleDOI
TL;DR: A review of ML-based computation offloading mechanisms in the MEC environment, organized as a classical taxonomy, that identifies the contemporary mechanisms on this crucial topic and offers open issues as well.

Journal ArticleDOI
TL;DR: Experimental results indicate that the proposed optimization method is able to find optimized neural network models that can not only significantly reduce communication costs but also improve the learning performance of federated learning compared with the standard fully connected neural networks.
Abstract: Federated learning is an emerging technique used to prevent the leakage of private information. Unlike centralized learning that needs to collect data from users and store them collectively on a cloud server, federated learning makes it possible to learn a global model while the data are distributed on the users’ devices. However, compared with the traditional centralized approach, the federated setting consumes considerable communication resources of the clients, which is indispensable for updating global models and prevents this technique from being widely used. In this paper, we aim to optimize the structure of the neural network models in federated learning using a multi-objective evolutionary algorithm to simultaneously minimize the communication costs and the global model test errors. A scalable method for encoding network connectivity is adapted to federated learning to enhance the efficiency in evolving deep neural networks. Experimental results on both multilayer perceptrons and convolutional neural networks indicate that the proposed optimization method is able to find optimized neural network models that can not only significantly reduce communication costs but also improve the learning performance of federated learning compared with the standard fully connected neural networks.
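The selection backbone of such a multi-objective search is Pareto dominance over the two objectives (communication cost, global test error). The helper below extracts the non-dominated candidates; the network encoding, variation operators, and full evolutionary loop from the paper are omitted:

```python
import numpy as np

def pareto_front(points):
    # Return the non-dominated candidates under two minimization
    # objectives: a point is dominated if another point is no worse in
    # both objectives and strictly better in at least one.
    pts = np.asarray(points, dtype=float)
    keep = [i for i, p in enumerate(pts)
            if not np.any(np.all(pts <= p, axis=1) & np.any(pts < p, axis=1))]
    return pts[keep]

# Toy candidates as (communication cost, global test error) pairs.
print(pareto_front([(5, 0.10), (3, 0.12), (4, 0.09), (6, 0.20)]))
```

Because the two objectives conflict, the algorithm returns a front of trade-off models rather than a single winner, leaving the cost/accuracy choice to the deployer.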

Journal ArticleDOI
TL;DR: A smart contract is used to notarize integrity metadata of outsourced data recognized by users and servers on the blockchain, the blockchain network is utilized as a self-recording channel for achieving non-repudiation verification interactions, and a fairly arbitrable data auditing protocol is proposed.
Abstract: The maturity of network storage technology drives users to outsource local data to remote servers. Since these servers are not reliable enough for keeping users’ data, remote data auditing mechanisms are studied for mitigating the threat to data integrity. However, many traditional schemes achieve verifiable data integrity for users only without resolutions to data possession disputes, while others depend on centralized third-party auditors (TPAs) for credible arbitrations. Recently, the emergence of blockchain technology promotes inspiring countermeasures. In this article, we propose a decentralized arbitrable remote data auditing scheme for network storage service based on blockchain techniques. We use a smart contract to notarize integrity metadata of outsourced data recognized by users and servers on the blockchain, and also utilize the blockchain network as the self-recording channel for achieving non-repudiation verification interactions. We also propose a fairly arbitrable data auditing protocol with the support of the commutative hash technique, defending against dishonest provers and verifiers. Additionally, a decentralized adjudication mechanism is implemented by using the smart contract technique for creditably resolving data possession disputes without TPAs. The theoretical analysis and experimental evaluation reveal its effectiveness in undisputable data auditing and the limited requirement of costs.

Journal ArticleDOI
TL;DR: This work presents some important edge computing architectures, classifies the previous works on computation offloading into different categories, and discusses some basic models, such as the channel model, computation and communication model, and energy harvesting model, that have been proposed in offloading modeling.

Journal ArticleDOI
TL;DR: A distributed deep learning-driven task offloading (DDTO) algorithm is proposed to generate near-optimal offloading decisions over the MDs, edge cloud server, and central cloud server; it achieves high performance and greatly reduces the computational complexity when compared with other offloading schemes that neglect the collaboration of heterogeneous clouds.
Abstract: City Internet-of-Things (IoT) applications are becoming increasingly complicated and thus require large amounts of computational resources and strict latency requirements. Mobile cloud computing (MCC) is an effective way to alleviate the limitation of computation capacity by offloading complex tasks from mobile devices (MDs) to central clouds. Besides, mobile-edge computing (MEC) is a promising technology to reduce latency during data transmission and save energy by providing services in a timely manner. However, it is still difficult to solve the task offloading challenges in heterogeneous cloud computing environments, where edge clouds and central clouds work collaboratively to satisfy the requirements of city IoT applications. In this article, we consider the heterogeneity of edge and central cloud servers in the offloading destination selection. To jointly optimize the system utility and the bandwidth allocation for each MD, we establish a hybrid offloading model, including the collaboration of MCC and MEC. A distributed deep learning-driven task offloading (DDTO) algorithm is proposed to generate near-optimal offloading decisions over the MDs, edge cloud server, and central cloud server. Experimental results demonstrate the accuracy of the DDTO algorithm, which can effectively and efficiently generate near-optimal offloading decisions in the edge and cloud computing environments. Furthermore, it achieves high performance and greatly reduces the computational complexity when compared with other offloading schemes that neglect the collaboration of heterogeneous clouds. More precisely, the DDTO scheme can improve computational performance by 63%, compared with the local-only scheme.

Journal ArticleDOI
TL;DR: A new vision of Digital Twin Edge Networks (DITEN) is presented, in which digital twins of edge servers estimate the edge servers' states and a DT of the entire MEC system provides training data for offloading decisions; the proposed scheme effectively diminishes the average offloading latency, offloading failure rate, and service migration rate while saving system cost with DT assistance.
Abstract: 6G is envisioned to empower wireless communication and computation through the digitalization and connectivity of everything, by establishing a digital representation of the real network environment. Mobile edge computing (MEC), as one of the key enabling factors, meets unprecedented challenges during mobile offloading due to the extremely complicated and unpredictable network environment in 6G. The existing works on offloading in MEC mainly ignore the effects of user mobility and the unpredictable MEC environment. In this paper, we present a new vision of Digital Twin Edge Networks (DITEN), where digital twins (DTs) of edge servers estimate edge servers' states and a DT of the entire MEC system provides training data for offloading decisions. A mobile offloading scheme is proposed in DITEN to minimize the offloading latency under the constraint of the accumulated service migration cost incurred during user mobility. The Lyapunov optimization method is leveraged to simplify the long-term migration cost constraint to a multi-objective dynamic optimization problem, which is then solved by Actor-Critic deep reinforcement learning. Simulation results show that our proposed scheme effectively diminishes the average offloading latency, the offloading failure rate, and the service migration rate, as compared with benchmark schemes, while saving the system cost with DT assistance.

Journal ArticleDOI
TL;DR: A decentralized Fair and Privacy-Preserving Deep Learning (FPPDL) framework is proposed to incorporate fairness into federated deep learning models, with a local credibility mutual evaluation mechanism to guarantee fairness and a three-layer onion-style encryption scheme to guarantee both accuracy and privacy.
Abstract: The current standalone deep learning framework tends to result in overfitting and low utility. This problem can be addressed by either a centralized framework that deploys a central server to train a global model on the joint data from all parties, or a distributed framework that leverages a parameter server to aggregate local model updates. Server-based solutions are prone to the problem of a single point of failure. In this respect, collaborative learning frameworks, such as federated learning (FL), are more robust. Existing federated learning frameworks overlook an important aspect of participation: fairness. All parties are given the same final model without regard to their contributions. To address these issues, we propose a decentralized Fair and Privacy-Preserving Deep Learning (FPPDL) framework to incorporate fairness into federated deep learning models. In particular, we design a local credibility mutual evaluation mechanism to guarantee fairness, and a three-layer onion-style encryption scheme to guarantee both accuracy and privacy. Different from the existing FL paradigm, under FPPDL, each participant receives a different version of the FL model with performance commensurate with his contributions. Experiments on benchmark datasets demonstrate that FPPDL balances fairness, privacy and accuracy. It enables federated learning ecosystems to detect and isolate low-contribution parties, thereby promoting responsible participation.

Journal ArticleDOI
Siqi Luo1, Xu Chen1, Qiong Wu1, Zhi Zhou1, Shuai Yu1 
TL;DR: A novel Hierarchical Federated Edge Learning (HFEL) framework is introduced in which model aggregation is partially migrated to edge servers from the cloud and achieves better training performance compared to conventional federated learning.
Abstract: Federated Learning (FL) has been proposed as an appealing approach to handle the data privacy issue of mobile devices, compared to conventional machine learning at the remote cloud with raw user data uploading. By leveraging edge servers as intermediaries to perform partial model aggregation in proximity and relieve core network transmission overhead, it enables great potential for low-latency and energy-efficient FL. Hence, we introduce a novel Hierarchical Federated Edge Learning (HFEL) framework in which model aggregation is partially migrated to edge servers from the cloud. We further formulate a joint computation and communication resource allocation and edge association problem for device users under the HFEL framework to achieve global cost minimization. To solve the problem, we propose an efficient resource scheduling algorithm in the HFEL framework. It can be decomposed into two subproblems: resource allocation given a scheduled set of devices for each edge server, and edge association of device users across all the edge servers. With the optimal policy of the convex resource allocation subproblem for a set of devices under a single edge server, an efficient edge association strategy can be achieved through an iterative global cost reduction adjustment process, which is shown to converge to a stable system point. Extensive performance evaluations demonstrate that our HFEL framework outperforms the proposed benchmarks in global cost saving and achieves better training performance compared to conventional federated learning.

Journal ArticleDOI
TL;DR: In this article, the problem of Private Information Retrieval (PIR) is studied in the presence of prior side information, which the user may have obtained opportunistically from other users or by having previously downloaded some messages using classical PIR schemes.
Abstract: We study the problem of Private Information Retrieval (PIR) in the presence of prior side information. The problem setup includes a database of $K$ independent messages possibly replicated on several servers, and a user that needs to retrieve one of these messages. In addition, the user has some prior side information in the form of a subset of $M$ messages, not containing the desired message and unknown to the servers. This problem is motivated by practical settings in which the user can obtain side information opportunistically from other users or has previously downloaded some messages using classical PIR schemes. The objective of the user is to retrieve the required message while downloading the minimum amount of data from the servers and achieving information-theoretic privacy in one of the following two scenarios: (i) the user wants to protect jointly the identities of the demand and the side information; (ii) the user wants to protect only the identity of the demand, but not necessarily the side information. To highlight the role of side information, we focus first on the case of a single server (single database). In the first scenario, we prove that the minimum download cost is $K-M$ messages, and in the second scenario it is $\lceil K/(M+1)\rceil$ messages, which should be compared to $K$ messages, the minimum download cost in the case of no side information. Then, we extend some of our results to the case of the database replicated on multiple servers. Our proof techniques relate PIR with side information to the index coding problem. We leverage this connection to prove converse results, as well as to design achievability schemes.
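As a worked instance of the single-server bounds stated in the abstract, take $K = 6$ messages and side information of size $M = 2$:

```latex
% Single-server download costs at K = 6, M = 2 (in messages):
\underbrace{K - M = 6 - 2 = 4}_{\text{(i) demand and side info protected}}
\qquad
\underbrace{\left\lceil \tfrac{K}{M+1} \right\rceil = \left\lceil \tfrac{6}{3} \right\rceil = 2}_{\text{(ii) only demand protected}}
\qquad
\underbrace{K = 6}_{\text{no side information}}
```

Two cached messages thus cut the download from 6 to 4 even under the stronger privacy requirement, and to 2 when only the demand must stay hidden.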

Journal ArticleDOI
TL;DR: A two-phase offloading optimization strategy is put forward for the joint optimization of offloading utility and privacy in EC-enabled IoT: a utility-aware task offloading method is devised first, with the goal of maximizing the resource utilization of ECUs and minimizing the implementation time cost, followed by a joint optimization of utility and privacy tradeoffs.
Abstract: Currently, edge computing (EC), emerging as a burgeoning paradigm, is powerful in handling real-time resource provision for Internet of Things (IoT) applications. However, due to the spatial distribution of geographically sparse IoT devices and the resource limitations of EC units (ECUs), the resource utilization of the corresponding edge servers is relatively insufficient and the execution performance is somewhat ineffective. Privacy leakage, including of personal information, location, media data, etc., during the transmission process from IoT devices to edge servers severely restricts the application of ECUs in IoT. To address these challenges, a two-phase offloading optimization strategy is put forward for the joint optimization of offloading utility and privacy in EC-enabled IoT. Technically, a utility-aware task offloading method, named UTO, is devised first, with the goal of maximizing the resource utilization of ECUs and minimizing the implementation time cost. Then a joint optimization method, named JOM, for utility and privacy tradeoffs is designed to balance privacy preservation and execution performance. Eventually, experimental evaluations are conducted to illustrate the efficiency and reliability of UTO and JOM.

Journal ArticleDOI
TL;DR: An efficient framework for mobile edge-cloud computing networks is proposed, which enables the edge and the cloud to share their computing resources in the form of wholesale and buyback, together with an optimal cloud computing resource management scheme that maximizes the social welfare.
Abstract: Both the edge and the cloud can provide computing services for mobile devices to enhance their performance. The edge can reduce the conveying delay by providing local computing services while the cloud can support enormous computing requirements. Their cooperation can improve the utilization of computing resources and ensure the QoS, and thus is critical to edge-cloud computing business models. This paper proposes an efficient framework for mobile edge-cloud computing networks, which enables the edge and the cloud to share their computing resources in the form of wholesale and buyback. To optimize the computing resource sharing process, we formulate the computing resource management problems for the edge servers to manage their wholesale and buyback scheme and the cloud to determine the wholesale price and its local computing resources. Then, we solve these problems from two perspectives: i) social welfare maximization and ii) profit maximization for the edge and the cloud. For i), we have proved the concavity of the social welfare and proposed an optimal cloud computing resource management to maximize the social welfare. For ii), since it is difficult to directly prove the convexity of the primal problem, we first proved the concavity of the wholesaled computing resources with respect to the wholesale price and designed an optimal pricing and cloud computing resource management to maximize their profits. Numerical evaluations show that the total profit can be maximized by social welfare maximization while the respective profits can be maximized by the optimal pricing and cloud computing resource management.