
Showing papers in "IEEE Transactions on Mobile Computing in 2023"


Journal ArticleDOI
TL;DR: In this article, a density-based network division algorithm is proposed to divide satellite-terrestrial networks into a series of blocks of different sizes to amortize data delivery costs.
Abstract: Satellite-terrestrial networks (STN) exploit the wide coverage and low transmission latency of Low Earth Orbit (LEO) constellations to deliver requested content to subscribers, especially in remote areas. With the growing storage and computing capacity of satellite onboard equipment, leveraging in-network caching in STN is considered a promising way to improve content distribution efficiency. However, traditional caching and distribution schemes are unsuitable for STN, given its dynamic satellite propagation links and time-varying topology. More specifically, the uneven distribution of users makes it difficult to guarantee user quality of experience. To address these problems, we first propose a density-based network division algorithm: the STN is divided into a series of blocks of different sizes to amortize data delivery costs. To deploy the caching satellites, we analyze link connectivity and propose an approximate minimum coverage vertex set algorithm. Then, a novel cache node selection algorithm is designed for optimal subscriber matching. On the basis of a time-varying network model, the STN cache content updating mechanism is derived to sustain a stable quality of user experience. The simulation results demonstrate that the proposed user-oriented STN content distribution scheme markedly reduces average propagation delay and network load under different network conditions and offers better stability and self-adaptability under continuous time variation.

43 citations
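The abstract does not spell out the "approximate minimum coverage vertex set" construction. As a rough stand-in, the classic greedy 2-approximation for minimum vertex cover builds such a set by repeatedly covering an uncovered link with both of its endpoint satellites (the toy link graph below is invented):

```python
def approx_vertex_cover(edges):
    """Classic 2-approximation: for each edge not yet covered,
    add both endpoints to the cover."""
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.add(u)
            cover.add(v)
    return cover

# Toy inter-satellite link graph: nodes are satellites, edges are links.
links = [(0, 1), (1, 2), (2, 3), (3, 0), (1, 3)]
cache_nodes = approx_vertex_cover(links)
```

Every link then has at least one endpoint in `cache_nodes`, so a cache placed on each selected satellite is one hop from any link; the result is guaranteed to be at most twice the optimal cover size.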


Journal ArticleDOI
TL;DR: In this article, a multi-agent imitation learning enabled UAV deployment approach is proposed to maximize both the profits of UAV owners and the utilities of on-ground users; a Markov game is formulated among UAV owners, and a Nash equilibrium is proven to exist given full knowledge of the system.
Abstract: Unmanned Aerial Vehicles (UAVs) have been utilized to serve on-ground users with various services, e.g., computing, communication and caching, due to their mobility and flexibility. The main focus of many recent studies on UAVs is to deploy a set of homogeneous UAVs with identical capabilities controlled by one UAV owner/company to provide services. However, little attention has been paid to the issue of how to enable different UAV owners to provide services with differentiated service capabilities in a shared area. To address this issue, we propose a multi-agent imitation learning enabled UAV deployment approach to maximize both profits of UAV owners and utilities of on-ground users. Specifically, a Markov game is formulated among UAV owners and we prove that a Nash equilibrium exists based on the full knowledge of the system. For online scheduling with incomplete information, we design agent policies by imitating the behaviors of corresponding experts. A novel neural network model, integrating convolutional neural networks, generative adversarial networks and a gradient-based policy, can be trained and executed in a fully decentralized manner with a guaranteed $\epsilon$-Nash equilibrium. Performance results show that our algorithm has significant superiority in terms of average profits, utilities and execution time compared with other representative algorithms.

36 citations


Journal ArticleDOI
TL;DR: In this paper, the authors investigate a UAV-enabled MEC network and consider both multi-user computation offloading and edge server deployment to minimize the system-wide computation cost in a dynamic environment where users generate tasks according to time-varying probabilities.
Abstract: Driven by the increasing demand for real-time mobile application processing, Multi-access Edge Computing (MEC) has been envisioned as a promising paradigm for pushing computational resources to network edges. In this paper, we investigate an MEC network enabled by Unmanned Aerial Vehicles (UAVs), and consider both multi-user computation offloading and edge server deployment to minimize the system-wide computation cost in a dynamic environment, where users generate tasks according to time-varying probabilities. We decompose the minimization problem by formulating two stochastic games for multi-user computation offloading and edge server deployment respectively, and prove that each formulated stochastic game has at least one Nash Equilibrium (NE). Two learning algorithms are proposed to reach the NEs with polynomial-time computational complexities. We further incorporate these two algorithms into a chess-like asynchronous updating algorithm to solve the system-wide computation cost minimization problem. Finally, performance evaluations based on real-world data are conducted and analyzed, corroborating that the proposed algorithms can achieve efficient computation offloading coupled with proper server deployment in a dynamic environment for multiple users and MEC servers.

33 citations
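The paper's learning algorithms and chess-like asynchronous updating are not specified in the abstract; the flavor of reaching a Nash equilibrium by asynchronous best responses can be sketched with a toy offloading congestion game (the cost model and numbers are assumptions):

```python
def best_response_dynamics(local_cost, edge_base, rounds=20):
    """Asynchronous best responses in a toy offloading congestion game:
    user i offloads iff the shared edge server (whose cost grows linearly
    with the number of offloaders) beats its local execution cost."""
    n = len(local_cost)
    offload = [False] * n
    for _ in range(rounds):
        changed = False
        for i in range(n):
            others = sum(offload) - offload[i]
            edge_cost = edge_base * (others + 1)      # congestion if i joins
            want = edge_cost < local_cost[i]
            if want != offload[i]:
                offload[i], changed = want, True
        if not changed:     # no one wants to deviate: a Nash equilibrium
            break
    return offload

eq = best_response_dynamics(local_cost=[10, 4, 9, 2], edge_base=3)
```

Because this is a congestion (potential) game, the one-at-a-time updates cannot cycle, which is the same structural reason such dynamics converge in finite time.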


Journal ArticleDOI
TL;DR: In this article, an online joint offloading and resource allocation (JORA) framework is proposed under a long-term MEC energy constraint, aiming to guarantee end-users' QoE.
Abstract: We consider the problem of task offloading and resource allocation in mobile edge computing (MEC). To maintain a satisfactory quality of experience (QoE) for end-users, mobile devices (MDs) may offload their tasks to edge servers based on the allocated computation (e.g., CPU/GPU cycles and storage) and wireless resources (e.g., bandwidth). However, these resources cannot be utilized effectively without a well-designed resource allocation scheme. Worse, task offloading incurs additional MEC energy consumption, which can violate the long-term MEC energy budget. Considering these two challenges, we propose an online joint offloading and resource allocation (JORA) framework under the long-term MEC energy constraint, aiming to guarantee end-users' QoE. To achieve this, we leverage Lyapunov optimization to exploit the optimality of the long-term QoE maximization problem. By constructing an energy deficit queue to guide energy consumption, the problem can be solved in real time. On this basis, we propose online JORA methods in both centralized and distributed manners. Furthermore, we prove that our proposed methods achieve close-to-optimal performance while satisfying the long-term MEC energy constraint. In addition, we conduct extensive simulations, and the results show superior performance over other methods.

30 citations
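The energy deficit queue is standard Lyapunov drift-plus-penalty machinery: maintain a virtual queue that grows whenever per-slot energy exceeds the long-term budget, and let each slot's decision trade QoE cost against queue-weighted energy. A minimal sketch, with the per-slot options and parameters invented rather than taken from the paper:

```python
def drift_plus_penalty_choice(actions, Q, V):
    """Pick the action minimizing V * cost + Q * energy, where Q is the
    energy deficit queue (a Lyapunov virtual queue)."""
    return min(actions, key=lambda a: V * a["cost"] + Q * a["energy"])

def run(slots, actions, energy_budget, V):
    Q, picks = 0.0, []
    for _ in range(slots):
        a = drift_plus_penalty_choice(actions, Q, V)
        picks.append(a["name"])
        Q = max(Q + a["energy"] - energy_budget, 0.0)   # deficit queue update
    return picks, Q

# Hypothetical per-slot options: offloading gives better QoE (lower cost)
# but burns more energy than local execution.
acts = [{"name": "offload", "cost": 1.0, "energy": 3.0},
        {"name": "local",   "cost": 4.0, "energy": 0.5}]
picks, Q = run(slots=6, actions=acts, energy_budget=2.0, V=2.0)
```

As the deficit queue builds up, the controller periodically falls back to the low-energy local option, which is exactly how the virtual queue steers average consumption toward the budget without per-slot hard constraints.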


Journal ArticleDOI
TL;DR: In this article, a novel data offloading decision-making framework is proposed in which users have the option to partially offload their data to a complex Multi-access Edge Computing (MEC) environment consisting of both ground and UAV-mounted MEC servers.
Abstract: In this paper, a novel data offloading decision-making framework is proposed, where users have the option to partially offload their data to a complex Multi-access Edge Computing (MEC) environment, consisting of both ground and UAV-mounted MEC servers. The problem is treated from the perspective of risk-aware user behavior, as captured via prospect-theoretic utility functions, while accounting for the inherent computing environment uncertainties. The UAV-mounted MEC servers act as a common pool of resources with potentially superior but uncertain payoff for the users, while the local computation and ground server alternatives constitute safe and guaranteed options, respectively. The optimal user task offloading to the available computing choices is formulated as a maximization problem of each user's satisfaction and tackled as a non-cooperative game. The existence and uniqueness of a Pure Nash Equilibrium (PNE) are proven, and convergence to the PNE is shown. Detailed numerical results highlight the convergence of the system to the PNE in only a few iterations, while the impact of user behavior heterogeneity is evaluated. The introduced framework's consideration of user risk-aware characteristics and computing uncertainties results in a sophisticated exploitation of the system resources, which in turn leads to superior user-perceived performance compared to alternative approaches.

30 citations
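The prospect-theoretic utilities referenced above typically build on the Kahneman-Tversky value function, concave over gains and steeper (loss-averse) over losses; the exact utility in the paper may differ. A sketch with the textbook parameter estimates:

```python
def prospect_value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Kahneman-Tversky value function: concave for gains, convex and
    steeper for losses (loss aversion lam > 1). Parameters are the
    standard empirical estimates, not the paper's."""
    return x ** alpha if x >= 0 else -lam * ((-x) ** beta)
```

A risk-aware offloader evaluating the uncertain UAV-server payoff through this function perceives a potential loss of 10 as worse than an equal gain of 10 is good, which biases it toward the safe local or ground-server options.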


Journal ArticleDOI
TL;DR: In this paper , an artificial neural network (ANN) based efficient and accurate solution is proposed to predict the signal strength from a drone based on several pertinent factors such as drone altitude, path loss, distance, transmitter height, receiver height, transmitted power, and signal frequency.
Abstract: The integration of the drone, Internet of Things (IoT), and Artificial Intelligence (AI) domains can produce exceptional solutions to today's complex problems in smart cities. A drone, which essentially is a data-gathering robot, can access geographical areas that are difficult, unsafe, or even impossible for humans to reach. Besides communicating amongst themselves, such drones need to be in constant contact with other ground-based agents such as IoT sensors, robots, and humans. In this paper, an intelligent technique is proposed to predict the signal strength from a drone to IoT devices in smart cities in order to maintain network connectivity, provide the desired quality of service (QoS), and identify the drone coverage area. An artificial neural network (ANN) based efficient and accurate solution is proposed to predict the signal strength from a drone based on several pertinent factors such as drone altitude, path loss, distance, transmitter height, receiver height, transmitted power, and signal frequency. Furthermore, the signal strength estimates are then used to predict the drone flying path. The findings show that the proposed ANN technique achieves good agreement with the validation data generated via simulations, yielding determination coefficients $R^2$ of 0.96 and 0.98 for variation in drone altitude and distance from the drone, respectively. Therefore, the proposed ANN technique is a reliable, useful, and fast way to estimate the signal strength, determine the optimal drone flying path, and predict the next location based on received signal strength.

27 citations
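The trained ANN itself is not reproducible from the abstract. A much simpler stand-in for signal-strength prediction is to fit the log-distance path-loss model by least squares; all data below are synthetic and the model parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Log-distance path-loss model: RSS(d) = P0 - 10 * n * log10(d) + noise
d = rng.uniform(10, 500, size=200)        # ground distance to drone, meters
P0, n_true = -40.0, 2.7                   # reference power (dBm), path-loss exponent
rss = P0 - 10 * n_true * np.log10(d) + rng.normal(0, 1.0, size=200)

# Least-squares fit of [P0, n] -- a linear stand-in for the paper's ANN.
X = np.column_stack([np.ones_like(d), -10 * np.log10(d)])
coef, *_ = np.linalg.lstsq(X, rss, rcond=None)
p0_hat, n_hat = coef
```

The ANN in the paper earns its keep by absorbing the additional factors (altitude, antenna heights, frequency) that this two-parameter model ignores.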






Journal ArticleDOI
TL;DR: Zhang et al. as discussed by the authors proposed a distillation-based semi-supervised FL (DS-FL) algorithm that exchanges the outputs of local models among mobile devices, instead of model parameter exchange employed by the typical frameworks.
Abstract: This study develops a federated learning (FL) framework that overcomes the steep growth of communication costs with model size in typical frameworks, without compromising model performance. To this end, based on the idea of leveraging an unlabeled open dataset, we propose a distillation-based semi-supervised FL (DS-FL) algorithm that exchanges the outputs of local models among mobile devices, instead of the model parameter exchange employed by typical frameworks. In DS-FL, the communication cost depends only on the output dimensions of the models and does not scale up with the model size. The exchanged model outputs are used to label each sample of the open dataset, which creates an additionally labeled dataset. Based on the new dataset, local models are further trained, and model performance is enhanced owing to the data augmentation effect. We further highlight that in DS-FL, the heterogeneity of the devices' datasets leads to ambiguous model outputs for each data sample and slows training convergence. To prevent this, we propose entropy reduction averaging, where the aggregated model outputs are intentionally sharpened. Moreover, extensive experiments show that DS-FL reduces communication costs by up to 99% relative to those of the FL benchmark while achieving similar or higher classification accuracy.

19 citations
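The entropy reduction averaging step can be sketched as follows: average the clients' soft labels for an open-dataset sample, then sharpen the average with a low-temperature softmax so the resulting label is less ambiguous. The temperature and probabilities below are illustrative, not the paper's settings:

```python
import numpy as np

def era_aggregate(client_probs, temperature=0.1):
    """Entropy Reduction Averaging (sketch): average clients' soft labels,
    then sharpen by a low-temperature softmax over the log-probabilities."""
    mean = np.mean(client_probs, axis=0)
    logits = np.log(mean + 1e-12) / temperature
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Two clients disagree mildly; ERA yields a sharper consensus label.
p = np.array([[0.6, 0.3, 0.1],
              [0.4, 0.5, 0.1]])
sharp = era_aggregate(p)
```

The plain average here is nearly uniform over the top two classes; sharpening restores a confident pseudo-label, which is the convergence benefit the paper attributes to ERA.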


Journal ArticleDOI
TL;DR: In this paper, the authors propose an in-network caching scheme to support various provisions of data sharing in ICVs by exploring the advantages of information-centric networks (ICN), where each on-board service is divided into several content units placed at the ICV and small cell base stations (SBSs) to reduce content retrieval delay.
Abstract: With the increasing on-board demand for intelligent connected vehicles (ICVs), the fifth-generation (5G) wireless systems are being massively utilized in vehicular networks. As an essential component, content retrieval in the ICV provides a basis for vehicle-to-vehicle or vehicle-to-infrastructure data interaction for many applications. However, content access is still subject to performance degradation due to congested communication channels, diverse request patterns, and intermittent network connectivity. To mitigate these issues, in-network caching in 5G-enabled ICV has been leveraged to benefit content access by allowing edge nodes to store content for data generators. In this paper, we propose an in-network caching scheme to support various provisions of data sharing in the ICVs by exploring the advantages of information-centric networks (ICN). We first divide each on-board service into several content units. Then, we place these units at the ICV and small cell base stations (SBSs) to reduce the content retrieval delay, further model the proposed system as an integer nonlinear program (INLP) and attain the optimal QoE (Quality of Experience) by placing content units at appropriate cache entities. Finally, we verify the effectiveness and correctness of our proposed model through extensive simulations.

17 citations
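The paper solves an integer nonlinear program for optimal placement; as a simple illustrative baseline, a greedy heuristic caches the most popular content units at an SBS until its capacity runs out. The sizes, popularities, and delays below are made up:

```python
def greedy_placement(units, capacity, delay_edge, delay_core):
    """Greedy sketch: cache the most-requested content units at the SBS
    until capacity is full; the rest are fetched from the core network."""
    ranked = sorted(units, key=lambda u: u["popularity"], reverse=True)
    cached, used = set(), 0
    for u in ranked:
        if used + u["size"] <= capacity:
            cached.add(u["id"])
            used += u["size"]
    total = sum(u["popularity"] * (delay_edge if u["id"] in cached else delay_core)
                for u in units)
    return cached, total

units = [{"id": "a", "size": 2, "popularity": 50},
         {"id": "b", "size": 1, "popularity": 30},
         {"id": "c", "size": 2, "popularity": 5}]
cached, total = greedy_placement(units, capacity=3, delay_edge=1, delay_core=10)
```

An INLP solver, as used in the paper, can beat this greedy rule when unit sizes and multi-tier (ICV plus SBS) placement interact, which is exactly the case the abstract describes.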


Journal ArticleDOI
TL;DR: This work presents a protocol and procedure to correctly compute the safe transition between different controlling algorithms, down to autonomous (or manual) driving when no communication is possible, and develops a new version of PLEXE, the only Open Source, free simulation tool that enables the study of such systems with a modular approach.
Abstract: Cooperative Driving requires ultra-reliable communications, and it is now clear that no single technology will ever be able to satisfy such stringent requirements, if only because active jamming can kill (almost) any wireless technology. Cooperative driving with multiple communication technologies which complement each other opens new spaces for research and development, but also poses several challenges. The work we present tackles the fallback and recovery mechanisms that the longitudinal controlling system of a platoon of vehicles can implement as a distributed system with multiple communication interfaces. We present a protocol and procedure to correctly compute the safe transition between different controlling algorithms, down to autonomous (or manual) driving when no communication is possible. To empower the study, we also develop a new version of Plexe, which is an integral part of this contribution as the only Open Source, free simulation tool that enables the study of such systems with a modular approach, and that we deem offers the community the possibility of boosting research in this field. The results we present demonstrate the feasibility of safe fallback, but also highlight that such complex systems require careful design choices, as naïve approaches can lead to instabilities or even collisions, and that such design can only be done with appropriate in-silico experiments.
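The abstract leaves the protocol details out; the general shape of a fallback ladder is a mode selector keyed on beacon freshness, degrading from cooperative control (CACC) to radar-only ACC to manual driving, with the inter-vehicle headway widened at each step. A hypothetical sketch, with timeouts and headways invented rather than taken from PLEXE:

```python
# Degradation ladder when platoon communication is lost (sketch):
# tighter cooperative control falls back to radar-only ACC, then manual.
HEADWAY = {"CACC": 0.5, "ACC": 1.2, "MANUAL": 2.0}   # seconds, illustrative

def select_mode(last_beacon_age, cacc_timeout=0.3, acc_timeout=2.0):
    """Choose the longitudinal controller based on the age of the newest
    beacon received on any of the available communication interfaces."""
    if last_beacon_age <= cacc_timeout:
        return "CACC"
    if last_beacon_age <= acc_timeout:
        return "ACC"
    return "MANUAL"
```

The paper's point is that the transition itself must be computed safely (headway must be widened before the less-informed controller takes over); a naive instantaneous switch is one of the designs it shows can cause instability or collisions.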


Journal ArticleDOI
TL;DR: In this article, a joint design of task partitioning and offloading is presented for a DNN-task enabled MEC network consisting of a single server and multiple mobile devices (MDs), where the server and each MD employ well-trained DNNs for task computation.
Abstract: Deep neural network (DNN)-task enabled mobile edge computing (MEC) is gaining ubiquity due to outstanding performance of artificial intelligence. By virtue of characteristics of DNN, this paper develops a joint design of task partitioning and offloading for a DNN-task enabled MEC network that consists of a single server and multiple mobile devices (MDs), where the server and each MD employ the well-trained DNNs for task computation. The main contributions of this paper are as follows: First, we propose a layer-level computation partitioning strategy for DNN to partition each MD's task into the subtasks that are either locally computed at the MD or offloaded to the server. Second, we develop a delay prediction model for DNN to characterize the computation delay of each subtask at the MD and the server. Third, we design a slot model and a dynamic pricing strategy for the server to efficiently schedule the offloaded subtasks. Fourth, we jointly optimize the design of task partitioning and offloading to minimize each MD's cost that includes the computation delay, the energy consumption, and the price paid to the server. In particular, we propose two distributed algorithms based on the aggregative game theory to solve the optimization problem. Finally, numerical results demonstrate that the proposed scheme is scalable to different types of DNNs and shows the superiority over the baseline schemes in terms of processing delay and energy consumption.
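Layer-level partitioning boils down to picking the split point that minimizes device compute plus activation upload plus server compute. A sketch with an invented four-layer profile (per-layer times, activation sizes, and uplink rate are all assumptions, and the paper's delay prediction model and pricing are omitted):

```python
def best_split(local_ms, remote_ms, act_kb, uplink_kbps):
    """Layer-level partitioning sketch: run the first k layers on the
    device, upload the intermediate activation, finish on the server.
    act_kb[k] is the activation size at split point k (act_kb[L] = 0:
    a fully-local run uploads nothing). Returns the latency-minimizing k."""
    L = len(local_ms)
    def delay(k):
        tx = act_kb[k] / uplink_kbps * 1000.0     # upload time in ms
        return sum(local_ms[:k]) + tx + sum(remote_ms[k:])
    return min(range(L + 1), key=delay)

# Hypothetical 4-layer DNN (times in ms, activation sizes in KB):
local  = [5.0, 8.0, 12.0, 3.0]        # per-layer device compute
remote = [1.0, 1.5, 2.0, 0.5]         # per-layer server compute
act_kb = [80.0, 40.0, 5.0, 2.0, 0.0]  # activation size at each split point
k = best_split(local, remote, act_kb, uplink_kbps=1000.0)
```

With these numbers the optimum is an interior split: early layers shrink the activation enough that uploading after them beats both fully-local and fully-remote execution.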

Journal ArticleDOI
TL;DR: Wang et al. as mentioned in this paper developed a deep reinforcement learning (DRL)-based computation offloading scheme for the smart contract of blockchain, where task vehicles can offload part of computation-intensive tasks to neighboring vehicles.
Abstract: Vehicular edge computing (VEC) is an effective method to increase the computing capability of vehicles, where vehicles share their idle computing resources with each other. However, due to the high mobility of vehicles, it is challenging to design an optimal task allocation policy that adapts to the dynamic vehicular environment. Further, vehicular computation offloading often occurs between unfamiliar vehicles, how to motivate vehicles to share their computing resources while guaranteeing the reliability of resource allocation in task offloading is one main challenge. In this paper, we propose a blockchain-enabled VEC framework to ensure the reliability and efficiency of vehicle-to-vehicle (V2V) task offloading. Specifically, we develop a deep reinforcement learning (DRL)-based computation offloading scheme for the smart contract of blockchain, where task vehicles can offload part of computation-intensive tasks to neighboring vehicles. To ensure the security and reliability in task offloading, we evaluate the reliability of vehicles in resource allocation by blockchain. Moreover, we propose an enhanced consensus algorithm based on practical Byzantine fault tolerance (PBFT), and design a consensus nodes selection algorithm to improve the efficiency of consensus and motivate base stations to improve reliability in task allocation. Simulation results validate the effectiveness of our proposed scheme for blockchain-enabled VEC.
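The enhanced consensus above builds on PBFT, whose standard bounds are worth recalling: with n replicas, at most f = floor((n - 1) / 3) Byzantine nodes are tolerated, and progress requires matching messages from a quorum of 2f + 1 replicas:

```python
def pbft_quorum(n):
    """Standard PBFT bounds: n replicas tolerate f = (n - 1) // 3
    Byzantine faults; a quorum of 2f + 1 matching messages is needed."""
    f = (n - 1) // 3
    return f, 2 * f + 1

f, q = pbft_quorum(7)   # 7 consensus nodes tolerate 2 faults, quorum of 5
```

The paper's consensus-node selection algorithm changes which base stations participate, not these arithmetic bounds.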

Journal ArticleDOI
TL;DR: Li et al. as mentioned in this paper proposed a personalized federated smart human activity recognition (HAR) framework, named FedHAR, which performs distributed learning, which allows training data to be kept local to protect users' privacy.
Abstract: The advancement of smartphone sensors and wearable devices has enabled a new paradigm for smart human activity recognition (HAR), which has a broad range of applications in healthcare and smart cities. However, four challenges, privacy preservation, label scarcity, real-time processing, and heterogeneous patterns, must be addressed before HAR can be more applicable in real-world scenarios. To this end, in this paper, we propose a personalized federated HAR framework, named FedHAR, to overcome all the above obstacles. Specifically, as a federated learning framework, FedHAR performs distributed learning, which allows training data to be kept local to protect users' privacy. Also, for each client without activity labels, we design an algorithm in FedHAR to compute unsupervised gradients under the consistency-training proposition, and an unsupervised gradient aggregation strategy is developed to overcome the concept drift and convergence instability issues in the online federated learning process. Finally, extensive experiments are conducted using two diverse real-world HAR datasets to show the advantages of FedHAR over state-of-the-art methods. In addition, when fine-tuning each unlabeled client, personalized FedHAR can achieve an additional 10% improvement across all metrics on average.
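The unsupervised gradient aggregation strategy is not detailed in the abstract; the basic shape of any such step is a weighted average of client gradients, with unsupervised gradients possibly down-weighted. The weighting rule below is a plain assumption for illustration, not FedHAR's:

```python
import numpy as np

def aggregate(grads, weights):
    """Weighted average of client gradients (FedAvg-style sketch).
    Down-weighting the unsupervised gradient is an assumed rule here,
    not the paper's actual aggregation strategy."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return sum(wi * g for wi, g in zip(w, grads))

g_labeled   = np.array([1.0, -2.0])   # gradient from a labeled client
g_unlabeled = np.array([0.5, -1.0])   # unsupervised (consistency) gradient
g = aggregate([g_labeled, g_unlabeled], weights=[1.0, 0.5])
```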

Journal ArticleDOI
TL;DR: Wang et al. as mentioned in this paper proposed a novel framework for range counting trading over IoT networks by jointly considering data utility, bandwidth consumption, and privacy preservation, which can provide more accurate and reliable statistical information, with reduced bandwidth consumption and strengthened privacy preservation.
Abstract: The data collected in Internet of Things (IoT) systems (IoT data) have stimulated a dramatic extension of the boundary of commercialized statistical data analysis, owing to the pervasive availability of low-cost wireless network access and off-the-shelf mobile devices. In such cases, many data consumers post their queries for urban statistical analysis in the system, such as the scale of traffic, and then data contributors in IoT networks upload their contents, which are evaluated by data brokers and returned to data consumers. However, huge volumes of devices bring large scales of data, constituting heavy burdens for data exchange. Even worse, contents in IoT systems are also sensitive as they are usually linked to the private physical status of data contributors. Previous studies of IoT data trading fail to provide comprehensive estimation and pricing in the face of these difficulties. Therefore, this paper proposes a novel framework for range counting trading over IoT networks by jointly considering data utility, bandwidth consumption, and privacy preservation. Range counting accumulates the number of data items falling in a concerned range of values, providing important information on the underlying data distribution. This paper first proposes a novel sampling-based method with histogram sketching for range counting estimation. The estimator is proved to be unbiased and achieves advanced performance on variance. Then the framework adopts a perturbation mechanism that can further preserve the results under differential privacy. The theoretical analysis shows that the mechanism can guarantee privacy preservation under a given sample size and accuracy requirement. Finally, two types of pricing strategies for range counting trading are introduced for different circumstances, providing holistic consideration of how the parameters given in the estimator should be used for data trading.
The framework is evaluated by estimating the air pollution levels and the traffic levels with different ranges on the 2014 CityPulse Smart City datasets. The evaluation results demonstrate that our framework can provide more accurate and reliable statistical information, with reduced bandwidth consumption and strengthened privacy preservation.
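The estimator-plus-perturbation pipeline can be sketched end to end: sample the data, count hits in the queried range, scale up by the sampling fraction, and add Laplace noise calibrated to the scaled count's sensitivity. All parameters and data below are synthetic, and the paper's actual histogram-sketching estimator is more elaborate:

```python
import numpy as np

rng = np.random.default_rng(42)

def dp_range_count(data, lo, hi, sample_frac=0.2, epsilon=1.0):
    """Sampling-based range count with differential privacy (sketch):
    count hits in a uniform sample, scale up by 1/sample_frac, then add
    Laplace noise. One record changes the scaled count by at most
    1/sample_frac, hence the noise scale (1/sample_frac)/epsilon."""
    sample = data[rng.random(len(data)) < sample_frac]
    est = np.count_nonzero((sample >= lo) & (sample < hi)) / sample_frac
    return est + rng.laplace(scale=(1.0 / sample_frac) / epsilon)

# Hypothetical sensor readings (e.g., pollution levels on a 0-100 scale):
readings = rng.uniform(0, 100, size=10_000)
true_count = np.count_nonzero((readings >= 20) & (readings < 40))
noisy = dp_range_count(readings, 20, 40)
```

Sampling cuts the bandwidth that contributors must spend, and the Laplace mechanism provides the epsilon-differential-privacy guarantee, mirroring the two trade-offs the framework prices.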


Journal ArticleDOI
TL;DR: In this article, an actor-critic-based distributed application placement technique is proposed, built on the IMPortance weighted Actor-Learner Architecture (IMPALA), which enables efficient distributed experience trajectory generation that significantly reduces the exploration costs of agents.
Abstract: Fog/Edge computing is a novel computing paradigm that supports resource-constrained Internet of Things (IoT) devices by placing their tasks on edge and/or cloud servers. Recently, several Deep Reinforcement Learning (DRL)-based placement techniques have been proposed in fog/edge computing environments, which are only suitable for centralized setups. Training well-performing DRL agents requires abundant training data, and obtaining such data is costly. Hence, these centralized DRL-based techniques lack generalizability and quick adaptability, thus failing to efficiently tackle application placement problems. Moreover, many IoT applications are modeled as Directed Acyclic Graphs (DAGs) with diverse topologies. Satisfying the dependencies of DAG-based IoT applications incurs additional constraints and increases the complexity of the placement problem. To overcome these challenges, we propose an actor-critic-based distributed application placement technique built on the IMPortance weighted Actor-Learner Architecture (IMPALA). IMPALA is known for efficient distributed experience trajectory generation that significantly reduces the exploration costs of agents. Besides, it uses an adaptive off-policy correction method for faster convergence to optimal solutions. Our technique uses recurrent layers to capture temporal behaviors of input data and a replay buffer to improve the sample efficiency. The performance results, obtained from simulation and testbed experiments, demonstrate that our technique significantly reduces the execution cost of IoT applications, by up to 30%, compared to its counterparts.
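IMPALA's off-policy correction is the V-trace estimator: value targets built from temporal differences weighted by clipped importance ratios rho = min(rho_bar, pi/mu). A one-trajectory sketch, with the rho values supplied directly rather than computed from actual policies:

```python
import numpy as np

def vtrace_targets(rewards, values, boot_value, rhos, gamma=0.99,
                   rho_bar=1.0, c_bar=1.0):
    """One-pass V-trace (sketch): off-policy corrected value targets
    v_s = V(x_s) + rho_s * delta_s + gamma * c_s * (v_{s+1} - V(x_{s+1})),
    computed backwards with clipped importance weights, per IMPALA."""
    T = len(rewards)
    vs = np.zeros(T)
    next_vs, next_v = boot_value, boot_value
    for t in reversed(range(T)):
        rho = min(rhos[t], rho_bar)
        c = min(rhos[t], c_bar)
        delta = rho * (rewards[t] + gamma * next_v - values[t])
        vs[t] = values[t] + delta + gamma * c * (next_vs - next_v)
        next_vs, next_v = vs[t], values[t]
    return vs

# On-policy weights (rho = 1) and gamma = 1 reduce V-trace to n-step returns.
vs = vtrace_targets([1.0, 1.0], [0.0, 0.0], boot_value=0.0,
                    rhos=[1.0, 1.0], gamma=1.0)
```

The clipping is what lets distributed actors generate trajectories under stale policies without destabilizing the central learner's value estimates.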

Journal ArticleDOI
TL;DR: In this article , a dependent task offloading framework for multiple mobile applications, named COFE, is proposed, where mobile devices can offload their compute-intensive tasks with dependent constraints to the MEC-Cloud system.
Abstract: With the proliferation of versatile mobile applications, offloading compute-intensive tasks to the MEC/Cloud has become an attractive technique, given the limited resources of mobile devices and high user-experience requirements. However, most existing works design their task offloading schemes without considering task dependencies or the orchestration of the MEC and Cloud, and thus may limit the system performance. In this paper, we propose a dependent task offloading framework for multiple mobile applications, named COFE, where mobile devices can offload their compute-intensive tasks with dependency constraints to the MEC-Cloud system. It can assign the offloaded tasks to the MEC and Cloud adaptively to improve the user experience. Based on COFE, we formulate the task offloading problem as an average makespan minimization problem, which is proved to be NP-hard. Then, we propose a heuristic ranking-based algorithm that assigns the offloaded tasks according to their bottom levels. Theoretical analysis proves the stability of the system under the proposed algorithm, and extensive simulations validate that the proposed algorithm can significantly reduce the average makespan and deadline violation probabilities of offloaded applications.
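The bottom-level ranking the abstract mentions can be made concrete: a task's bottom level is its own cost plus the cost of the most expensive path to an exit task, and tasks are scheduled in decreasing bottom-level order. A minimal sketch of this classic rule (our simplification, ignoring communication costs, not COFE's exact algorithm):

```python
def bottom_levels(costs, succ):
    """Bottom level of each task in a DAG: bl(v) = cost(v) + max bl(successor).

    costs: {task: computation cost}; succ: {task: list of successor tasks}.
    """
    memo = {}

    def bl(v):
        if v not in memo:
            children = succ.get(v, [])
            memo[v] = costs[v] + (max(bl(u) for u in children) if children else 0)
        return memo[v]

    for v in costs:
        bl(v)
    return memo

def rank_tasks(costs, succ):
    """Scheduling order: decreasing bottom level, ties broken by task name."""
    bls = bottom_levels(costs, succ)
    return sorted(costs, key=lambda v: (-bls[v], v))
```

Ranking by bottom level prioritizes tasks on the critical path, which is why it tends to shorten the makespan.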

Journal ArticleDOI
TL;DR: Wu et al. as mentioned in this paper proposed WiTraj, a device-free indoor motion tracking system using commodity WiFi devices, which leverages multiple receivers placed at different viewing angles to capture human walking and then intelligently combines the best views to achieve a robust trajectory reconstruction, and distinguishes walking from in-place activities, which are typically interleaved in daily life, so that non-walking activities do not cause tracking errors.
Abstract: WiFi-based device-free motion tracking systems track persons without requiring them to carry any device. Existing work has explored signal parameters such as time-of-flight (ToF), angle-of-arrival (AoA), and Doppler-frequency-shift (DFS) extracted from WiFi channel state information (CSI) to locate and track people in a room. However, they are not robust due to unreliable estimation of signal parameters. ToF and AoA estimations are not accurate for current standards-compliant WiFi devices that typically have only two antennas and limited channel bandwidth. On the other hand, DFS can be extracted relatively easily on current devices but is susceptible to the high noise level and random phase offset in CSI measurement, which results in a speed-sign-ambiguity problem and renders ambiguous walking speeds. This paper proposes WiTraj, a device-free indoor motion tracking system using commodity WiFi devices. WiTraj improves tracking robustness in three ways: 1) it significantly improves DFS estimation quality by using the ratio of the CSI from two antennas of each receiver; 2) to better track human walking, it leverages multiple receivers placed at different viewing angles to capture human walking and then intelligently combines the best views to achieve a robust trajectory reconstruction; and 3) it differentiates walking from in-place activities, which are typically interleaved in daily life, so that non-walking activities do not cause tracking errors. Experiments show that WiTraj can significantly improve tracking accuracy in typical environments compared to existing DFS-based systems. Evaluations across 9 participants and 3 different environments show that the median tracking error is below 2.5% for typical room-sized trajectories.
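The CSI-ratio idea in point 1) can be illustrated with a toy simulation: both antennas of one receiver share the same per-packet random phase offset, so dividing one antenna's CSI by the other's cancels it. A minimal sketch with synthetic channels (our own illustration, not real CSI processing):

```python
import cmath
import math
import random

def csi_ratio(csi_ant1, csi_ant2):
    """Per-packet ratio of CSI from two co-located antennas. The random
    phase offset is common to both antennas, so it cancels in the division."""
    return [a / b for a, b in zip(csi_ant1, csi_ant2)]

# Toy demo: static channels h1, h2, plus a random phase offset per packet.
random.seed(7)
h1, h2 = 1.0 + 0.5j, 0.8 - 0.3j
offsets = [random.uniform(0, 2 * math.pi) for _ in range(5)]
ant1 = [h1 * cmath.exp(1j * th) for th in offsets]  # measured CSI, antenna 1
ant2 = [h2 * cmath.exp(1j * th) for th in offsets]  # measured CSI, antenna 2
ratios = csi_ratio(ant1, ant2)
```

The raw per-antenna CSI jumps randomly from packet to packet, while the ratio stays constant at h1/h2, which is what makes downstream DFS estimation far cleaner.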

Journal ArticleDOI
TL;DR: Wang et al. as mentioned in this paper proposed a personalized federated smart human activity recognition (HAR) framework, named FedHAR, which allows training data to be kept local to protect users' privacy.
Abstract: The advancement of smartphone sensors and wearable devices has enabled a new paradigm for smart human activity recognition (HAR), which has a broad range of applications in healthcare and smart cities. However, four challenges, privacy preservation, label scarcity, real-time processing, and heterogeneity of patterns, must be addressed before HAR can be more applicable in real-world scenarios. To this end, in this paper, we propose a personalized federated HAR framework, named FedHAR, to overcome all the above obstacles. Specifically, as a federated learning framework, FedHAR performs distributed training, which allows training data to be kept local to protect users' privacy. Also, for each client without activity labels, we design an algorithm in FedHAR to compute unsupervised gradients under the consistency-training proposition, and we develop an unsupervised gradient aggregation strategy to overcome the concept drift and convergence instability issues in the online federated learning process. Finally, extensive experiments are conducted using two diverse real-world HAR datasets to show the advantages of FedHAR over state-of-the-art methods. In addition, when fine-tuning each unlabeled client, personalized FedHAR achieves an additional 10% improvement across all metrics on average.

Journal ArticleDOI
TL;DR: In this paper, the authors jointly optimize route selection, sensing time, and delivery weight allocation to maximize delivery and sensing utility under drones' energy constraints, which is formulated as a non-convex mixed-integer nonlinear programming problem.
Abstract: Thanks to their increasing number and massive coverage, delivery drones, equipped with various sensors, have demonstrated significant but unexplored potential for large-scale and low-cost urban sensing during package delivery. In this paper, we propose novel studies on the reutilization of such delivery drone resources to fill this void in urban crowdsensing. Accounting for the interdependency between flying/sensing and drone delivery weight, we jointly optimize route selection, sensing time, and delivery weight allocation to maximize delivery and sensing utility under drones' energy constraints. This problem is formulated as a non-convex mixed-integer nonlinear programming problem, which is proved to be NP-hard. To address this intricate problem, we propose near-optimal algorithms that leverage an equivalent objective function construction, a local search scheme, and an alternating iteration technique. Theoretical analysis indicates that our algorithms can achieve a $\frac{1}{4+\varepsilon }$ -approximation ratio (where $\varepsilon$ is an arbitrarily small positive parameter) and a convergence guarantee in polynomial time, for the scenarios of fixed and adjustable delivery weights, respectively. Extensive trace-based simulations, field experiments, and a real-world application demonstrate that our approach improves the delivery and sensing utility by $124.7\%$ and the energy utilization rate by $72.2\%$ on average, compared with drone delivery without sensing reuse.

Journal ArticleDOI
TL;DR: In this paper , the authors proposed a data collection framework that enables the UAV to collect sensory data from multiple IoT devices simultaneously if these IoT devices are within the coverage range of a UAV, through adopting the orthogonal frequency division multiple access (OFDMA) technique.
Abstract: In this paper, we study sensing data collection of IoT devices in a sparse IoT-sensor network, using an energy-constrained Unmanned Aerial Vehicle (UAV), where the sensory data is stored in IoT devices and the IoT devices may or may not be within the transmission range of each other. We formulate two novel data collection problems to fully or partially collect the data stored in IoT devices using the UAV, by finding a closed tour for the UAV that consists of hovering locations and the sojourn duration at each of the hovering locations, such that the accumulative volume of data collected within the tour is maximized, subject to the energy capacity of the UAV, which consumes energy both on hovering for data collection and on flying from one hovering location to another. To this end, we first propose a novel data collection framework that enables the UAV to collect sensory data from multiple IoT devices simultaneously if these IoT devices are within the coverage range of the UAV, by adopting the orthogonal frequency division multiple access (OFDMA) technique. We then formulate two data collection maximization problems to deal with full or partial data collection from IoT devices at each hovering location, and show that both problems are NP-hard. We instead devise approximation and heuristic algorithms for the problems. We finally evaluate the performance of the proposed algorithms through experimental simulations. Simulation results demonstrate that the proposed algorithms are promising.
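To give a flavor of the kind of heuristic involved (this is our own rough stand-in, not the paper's approximation algorithm), a greedy rule can select hovering locations by data collected per unit of energy until the UAV's energy budget is spent:

```python
def greedy_collect(locations, energy_budget):
    """Greedy sketch: consider hovering locations in decreasing order of
    data-per-unit-energy, skipping any that would exceed the budget.

    locations: list of (name, data_volume, energy_cost) tuples, where
    energy_cost lumps together hovering and approximate flying energy.
    """
    order = sorted(locations, key=lambda loc: loc[1] / loc[2], reverse=True)
    chosen, total_data, used = [], 0.0, 0.0
    for name, data, energy in order:
        if used + energy <= energy_budget:
            chosen.append(name)
            total_data += data
            used += energy
    return chosen, total_data
```

Like knapsack-style greedy rules generally, this gives no approximation guarantee by itself; the paper's algorithms additionally decide sojourn durations and the tour itself.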

Journal ArticleDOI
TL;DR: In this paper , a cooperative task offloading and block mining (TOBM) scheme for a blockchain-based MEC system was proposed, where each edge device not only handles data tasks but also deals with block mining for improving the system utility.
Abstract: The convergence of mobile edge computing (MEC) and blockchain is transforming the current computing services in mobile networks, by offering task offloading solutions with security enhancement empowered by blockchain mining. Nevertheless, these important enabling technologies have been studied separately in most existing works. This article proposes a novel cooperative task offloading and block mining (TOBM) scheme for a blockchain-based MEC system where each edge device not only handles data tasks but also deals with block mining for improving the system utility. To address the latency issues caused by the blockchain operation in MEC, we develop a new Proof-of-Reputation consensus mechanism based on a lightweight block verification strategy. A multi-objective function is then formulated to maximize the system utility of the blockchain-based MEC system, by jointly optimizing offloading decision, channel selection, transmit power allocation, and computational resource allocation. We propose a novel distributed deep reinforcement learning-based approach by using a multi-agent deep deterministic policy gradient algorithm. We then develop a game-theoretic solution to model the offloading and mining competition among edge devices as a potential game, and prove the existence of a pure Nash equilibrium. Simulation results demonstrate the significant system utility improvements of our proposed scheme over baseline approaches.
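The potential-game argument can be made concrete with a toy channel-selection game: each device's cost is the congestion on its chosen channel, and in a potential game best-response dynamics terminate at a pure Nash equilibrium. A minimal sketch (our own toy game, not the paper's utility model):

```python
def best_response_dynamics(n_players, n_channels, max_rounds=100):
    """Best-response dynamics in a simple channel-selection congestion game:
    each player's cost is the number of other players on its channel.
    Congestion games are potential games, so this converges to a pure
    Nash equilibrium."""
    choice = [0] * n_players          # everyone starts on channel 0
    for _ in range(max_rounds):
        changed = False
        for p in range(n_players):
            load = [sum(1 for q in range(n_players)
                        if q != p and choice[q] == c)
                    for c in range(n_channels)]
            best = min(range(n_channels), key=lambda c: load[c])
            if load[best] < load[choice[p]]:
                choice[p] = best      # strictly profitable deviation
                changed = True
        if not changed:
            return choice             # no player wants to deviate: pure NE
    return choice
```

Each strict improvement decreases the game's potential function, so the loop cannot cycle; at the fixed point, no device benefits from switching, which is exactly the equilibrium property the abstract invokes.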

Journal ArticleDOI
TL;DR: In this article , the authors proposed HAR-SAnet, a novel RF-based human activity recognition (HAR) framework, which adopts an original signal adapted convolutional neural network architecture.
Abstract: Human Activity Recognition (HAR) plays a critical role in a wide range of real-world applications, and it is traditionally achieved via wearable sensing. Recently, to avoid the burden and discomfort caused by wearable devices, device-free approaches exploiting RF signals have arisen as a promising alternative for HAR. Most of the latest device-free approaches require training a large deep neural network model in either the time or frequency domain, entailing extensive storage to contain the model and intensive computations to infer activities. Consequently, even with some major advances in device-free HAR, current device-free approaches are still far from practical in real-world scenarios where the computation and storage resources possessed by, for example, edge devices are limited. Therefore, we introduce HAR-SAnet, a novel RF-based HAR framework. It adopts an original signal-adapted convolutional neural network architecture: instead of feeding handcrafted features of RF signals into a classifier, HAR-SAnet fuses them adaptively from both the time and frequency domains to build an end-to-end neural network model. We apply pointwise grouped convolutions and depthwise separable convolutions to limit the model size and to speed up inference. The experimental results show that the recognition accuracy of HAR-SAnet outperforms state-of-the-art algorithms and systems.
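The savings from depthwise separable convolutions are easy to quantify: a standard k x k convolution needs c_in * c_out * k^2 weights, while the depthwise-plus-pointwise factorization needs only c_in * k^2 + c_in * c_out. A quick check with illustrative layer sizes (not HAR-SAnet's actual architecture):

```python
def standard_conv_params(c_in, c_out, k):
    """Weight count of a standard k x k convolution (biases omitted)."""
    return c_in * c_out * k * k

def depthwise_separable_params(c_in, c_out, k):
    """Depthwise k x k convolution (one spatial filter per input channel)
    followed by a 1 x 1 pointwise convolution that mixes channels."""
    return c_in * k * k + c_in * c_out
```

For a 3 x 3 layer mapping 64 channels to 128, the factorized form uses roughly 8x fewer weights, which is what makes such models viable on storage-limited edge devices.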

Journal ArticleDOI
TL;DR: In this article , a Deep Recurrent Graph Network (DRGN) is proposed to obtain extra spatial information through graph-convolution based inter-UAV communication, and utilize historical features with a recurrent unit.
Abstract: In this paper, we aim to design a deep reinforcement learning (DRL) based control solution to navigate a swarm of unmanned aerial vehicles (UAVs) flying around an unexplored target area under partial observation, serving as Mobile Base Stations (MBSs) that provide optimal communication coverage for ground mobile users. To handle the information loss caused by partial observability, we introduce a novel network architecture named Deep Recurrent Graph Network (DRGN), which obtains extra spatial information through graph-convolution based inter-UAV communication and utilizes historical features with a recurrent unit. Based on DRGN and maximum-entropy learning, we propose a stochastic DRL policy named Soft Deep Recurrent Graph Network (SDRGN). In SDRGN, we design a heuristic reward function based on the local information of each UAV instead of global information; thus, SDRGN reduces the training cost and enables distributed online learning. We conducted extensive experiments to design the structure of DRGN and examine the performance of SDRGN. The simulation results show that the proposed model outperforms four state-of-the-art DRL-based approaches and three heuristic baselines, and demonstrate the scalability, transferability, robustness, and interpretability of SDRGN.
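The graph-convolutional message passing DRGN builds on can be sketched as mean aggregation over each UAV's communication neighborhood (a toy, single unlearned layer; DRGN's actual layers have trainable weights):

```python
def graph_conv_layer(adj, feats):
    """One mean-aggregation message-passing step: every node replaces its
    feature vector with the average over itself and its neighbors.

    adj: n x n 0/1 adjacency matrix (which UAVs can communicate).
    feats: n x d list of per-node feature vectors.
    """
    n, d = len(feats), len(feats[0])
    out = []
    for i in range(n):
        neigh = [j for j in range(n) if adj[i][j]] + [i]   # self-loop included
        out.append([sum(feats[j][k] for j in neigh) / len(neigh)
                    for k in range(d)])
    return out
```

Stacking such layers lets each UAV's representation incorporate information from multi-hop neighbors, which is how the swarm compensates for each agent's partial observation.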

Journal ArticleDOI
TL;DR: In this article , a scaling rule is proposed to guide the setting of learning rate in terms of batch size to alleviate the negative impact of synchronization barrier through adaptive batch size during model training.
Abstract: The emerging Federated Learning (FL) paradigm enables IoT devices to collaboratively learn a shared model based on their local datasets. However, end-device heterogeneity magnifies the inherent synchronization-barrier issue of FL and results in non-negligible waiting time when local models are trained with identical batch sizes. Moreover, this idle waiting further strains devices' limited battery life. Herein, we aim to alleviate the negative impact of the synchronization barrier through adaptive batch sizes during model training. When using different batch sizes, the stability and convergence of the global model should be enforced by assigning appropriate learning rates to different devices. Therefore, we first study the relationship between batch size and learning rate, and formulate a scaling rule to guide the setting of the learning rate in terms of batch size. Then we theoretically analyze the convergence rate of the global model and obtain a convergence upper bound. On these bases, we propose an efficient algorithm that adaptively adjusts the batch size with a scaled learning rate for heterogeneous devices, to reduce the waiting time and save battery life. We conduct extensive simulations and testbed experiments, and the experimental results demonstrate the effectiveness of our method.
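The batch-size/learning-rate interplay follows the familiar linear scaling rule: a device training with a larger batch gets a proportionally larger learning rate so the expected update stays consistent. A minimal sketch of how batches and learning rates might be assigned to heterogeneous devices (our own simplification, not the paper's exact rule):

```python
def assign_batches_and_lrs(speeds, total_batch, base_lr):
    """Split a global batch across devices in proportion to compute speed so
    local training finishes at roughly the same time, then scale each
    device's learning rate linearly with its batch share.

    speeds: relative per-device throughput (samples per second).
    """
    total_speed = sum(speeds)
    batches = [round(total_batch * s / total_speed) for s in speeds]
    mean_batch = total_batch / len(speeds)
    lrs = [base_lr * b / mean_batch for b in batches]
    return batches, lrs
```

A device twice as fast as its peers receives twice the batch and twice the learning rate, so all devices hit the synchronization barrier at about the same time without biasing the global update.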

Journal ArticleDOI
TL;DR: In this paper , the authors proposed a joint resource optimization and hyper-learning rate control problem for federated learning at the multi-access edge computing server, where each learning service can independently manage the local resource and learning process without revealing the learning service information.
Abstract: Federated Learning is a new learning scheme for collaboratively training a shared prediction model while keeping data local on participating devices. In this paper, we study a new model of multiple federated learning services at a multi-access edge computing server. Accordingly, both the sharing of CPU resources among learning services at each mobile device for the local training process and the allocation of communication resources among mobile devices for exchanging learning information must be considered. Furthermore, the convergence performance of each learning service depends on the hyper-learning rate parameter, which needs to be precisely decided. Towards this end, we propose a joint resource optimization and hyper-learning rate control problem, namely ${{\sf MS-FEDL}}$ , regarding the energy consumption of mobile devices and overall learning time. We design a centralized algorithm based on the block coordinate descent method and a decentralized JP-miADMM algorithm for solving the ${{\sf MS-FEDL}}$ problem. Different from the centralized approach, the decentralized approach requires more iterations to obtain a solution, but it allows each learning service to independently manage its local resources and learning process without revealing the learning service information. Our simulation results demonstrate the convergence of our proposed algorithms and their superior performance compared to the heuristic strategy.
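Block coordinate descent, the backbone of the centralized algorithm, alternates exact minimization over one block of variables while the others are held fixed. A generic toy illustration on a two-variable convex quadratic (not the MS-FEDL objective):

```python
def block_coordinate_descent(iters=50):
    """Minimize f(x, y) = (x - 1)^2 + (y - 2)^2 + x * y by alternating
    exact minimization over x (with y fixed) and y (with x fixed)."""
    x, y = 0.0, 0.0
    for _ in range(iters):
        x = 1.0 - y / 2.0   # solves df/dx = 2(x - 1) + y = 0
        y = 2.0 - x / 2.0   # solves df/dy = 2(y - 2) + x = 0
    return x, y
```

For this strictly convex objective, the alternation contracts toward the unique minimizer (0, 2); in MS-FEDL the same pattern cycles over resource and hyper-learning rate blocks.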

Journal ArticleDOI
TL;DR: Wang et al. as mentioned in this paper developed a deep reinforcement learning (DRL)-based computation offloading scheme for the smart contract of blockchain, where task vehicles can offload part of computation-intensive tasks to neighboring vehicles.
Abstract: Vehicular edge computing (VEC) is an effective method to increase the computing capability of vehicles, where vehicles share their idle computing resources with each other. However, due to the high mobility of vehicles, it is challenging to design an optimal task allocation policy that adapts to the dynamic vehicular environment. Further, since vehicular computation offloading often occurs between unfamiliar vehicles, motivating vehicles to share their computing resources while guaranteeing reliable resource allocation in task offloading is a main challenge. In this paper, we propose a blockchain-enabled VEC framework to ensure the reliability and efficiency of vehicle-to-vehicle (V2V) task offloading. Specifically, we develop a deep reinforcement learning (DRL)-based computation offloading scheme for the smart contract of the blockchain, where task vehicles can offload part of their computation-intensive tasks to neighboring vehicles. To ensure security and reliability in task offloading, we evaluate the reliability of vehicles in resource allocation via the blockchain. Moreover, we propose an enhanced consensus algorithm based on practical Byzantine fault tolerance (PBFT), and design a consensus node selection algorithm to improve the efficiency of consensus and motivate base stations to improve reliability in task allocation. Simulation results validate the effectiveness of our proposed scheme for blockchain-enabled VEC.

Journal ArticleDOI
TL;DR: CrowdFL as mentioned in this paper is a privacy-preserving mobile crowdsensing system by seamlessly integrating federated learning (FL) into MCS, where participants in CrowdFL locally process sensing data via FL paradigm and only upload encrypted training models to the server.
Abstract: As an emerging sensing data collection paradigm, mobile crowdsensing (MCS) enjoys good scalability and low deployment cost but raises privacy concerns. In this paper, we propose a privacy-preserving MCS system called CrowdFL by seamlessly integrating federated learning (FL) into MCS. At a high level, in order to protect participants’ privacy and fully explore participants’ computing power, participants in CrowdFL locally process sensing data via FL paradigm and only upload encrypted training models to the server. To this end, we design a secure aggregation algorithm (SecAgg) through the threshold Paillier cryptosystem to aggregate training models in an encrypted form. Also, to stimulate participation, we present a hybrid incentive mechanism combining the reverse Vickrey auction and posted pricing mechanism, which is proved to be truthful and fail. Results of theoretical analysis and experimental evaluation on a practical MCS scenario (human activity recognition) show that CrowdFL is effective in protecting participants’ privacy and is efficient in operations. In contrast to existing solutions, CrowdFL is 3× faster in model decryption and improves an order of magnitude in model aggregation.