
Showing papers in "IEEE Transactions on Mobile Computing in 2020"


Journal ArticleDOI
TL;DR: In this article, a Deep Reinforcement Learning-based Online Offloading (DROO) framework is proposed to adapt task offloading decisions and wireless resource allocation to time-varying wireless channel conditions.
Abstract: Wireless powered mobile-edge computing (MEC) has recently emerged as a promising paradigm to enhance the data processing capability of low-power networks, such as wireless sensor networks and the Internet of Things (IoT). In this paper, we consider a wireless powered MEC network that adopts a binary offloading policy, so that each computation task of wireless devices (WDs) is either executed locally or fully offloaded to an MEC server. Our goal is to design an online algorithm that optimally adapts task offloading decisions and wireless resource allocations to the time-varying wireless channel conditions. This requires quickly solving hard combinatorial optimization problems within the channel coherence time, which is hardly achievable with conventional numerical optimization methods. To tackle this problem, we propose a Deep Reinforcement Learning-based Online Offloading (DROO) framework that implements a deep neural network as a scalable solution that learns binary offloading decisions from experience. It eliminates the need to solve combinatorial optimization problems, and thus greatly reduces the computational complexity, especially in large-size networks. To further reduce the complexity, we propose an adaptive procedure that automatically adjusts the parameters of the DROO algorithm on the fly. Numerical results show that the proposed algorithm can achieve near-optimal performance while decreasing the computation time by more than an order of magnitude compared with existing optimization methods. For example, the CPU execution latency of DROO is less than 0.1 second in a 30-user network, making real-time and optimal offloading truly viable even in a fast fading environment.
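
As a rough illustration of the DROO loop described above (all names, dimensions, and the toy objective below are assumptions for this sketch, not taken from the paper): a DNN maps the channel gains to a relaxed offloading decision, a handful of binary candidates are quantized from it, the best candidate under the resource-allocation objective is kept, and the pair is stored in a replay memory for retraining.

    import numpy as np

    rng = np.random.default_rng(0)
    N = 10                                   # number of wireless devices (assumed)
    W1 = rng.normal(size=(64, N)) * 0.1      # toy two-layer DNN weights
    W2 = rng.normal(size=(N, 64)) * 0.1

    def dnn_relaxed_decision(h):
        """Map channel gains h to a relaxed offloading decision in (0, 1)^N."""
        hidden = np.maximum(W1 @ h, 0.0)             # ReLU layer
        return 1.0 / (1.0 + np.exp(-(W2 @ hidden)))  # sigmoid output

    def quantize(x_relaxed, k):
        """Generate k binary candidates by flipping the most uncertain bits."""
        base = (x_relaxed > 0.5).astype(int)
        order = np.argsort(np.abs(x_relaxed - 0.5))  # most uncertain first
        candidates = [base]
        for i in order[:k - 1]:
            flipped = base.copy()
            flipped[i] ^= 1
            candidates.append(flipped)
        return candidates

    def objective(h, x):
        """Placeholder score; the paper evaluates each binary decision by
        solving the remaining convex resource-allocation problem."""
        return float(h @ x)

    memory = []                              # replay memory of (h, x*) pairs
    h = rng.exponential(size=N)              # one channel realization
    best = max(quantize(dnn_relaxed_decision(h), k=5),
               key=lambda x: objective(h, x))
    memory.append((h, best))                 # later used to retrain the DNN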

403 citations


Journal ArticleDOI
TL;DR: In this paper, the problem of joint computing, caching, communication, and control (4C) in big data MEC is formulated as an optimization problem whose goal is to jointly optimize a linear combination of the bandwidth consumption and network latency.
Abstract: The concept of Multi-access Edge Computing (MEC) has been recently introduced to supplement cloud computing by deploying MEC servers to the network edge so as to reduce the network delay and alleviate the load on cloud data centers. However, compared to the resourceful cloud, an MEC server has limited resources. When each MEC server operates independently, it cannot handle all computational and big data demands stemming from users' devices. Consequently, the MEC server cannot provide significant gains in overhead reduction of data exchange between users' devices and the remote cloud. Therefore, joint Computing, Caching, Communication, and Control (4C) at the edge with MEC server collaboration is needed. To address these challenges, in this paper, the problem of joint 4C in big data MEC is formulated as an optimization problem whose goal is to jointly optimize a linear combination of the bandwidth consumption and network latency. However, the formulated problem is shown to be non-convex. As a result, a proximal upper bound problem of the original formulated problem is proposed. To solve the proximal upper bound problem, the block successive upper bound minimization method is applied. Simulation results show that the proposed approach satisfies computation deadlines and minimizes bandwidth consumption and network latency.
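
A minimal generic sketch of block successive upper bound minimization (BSUM), not the paper's formulation: cycle over variable blocks and, for each block, minimize a proximal quadratic upper bound of the objective, which for this choice of bound reduces to a gradient step with step size 1/L_b. The toy objective and the Lipschitz-style constants are assumptions.

    import numpy as np

    def bsum(grad, x0, blocks, lipschitz, iters=200):
        """Cyclically update each block by minimizing the quadratic upper bound
        u(x_b) = f(x) + g_b.(x_b - x_b^k) + (L_b/2)|x_b - x_b^k|^2, whose
        minimizer is the gradient step x_b - g_b / L_b."""
        x = x0.astype(float).copy()
        for _ in range(iters):
            for b, L in zip(blocks, lipschitz):
                x[b] -= grad(x)[b] / L
        return x

    # Toy non-convex objective f(x, y) = x^2 y^2 - x - y, split into two blocks.
    grad = lambda z: np.array([2 * z[0] * z[1] ** 2 - 1,
                               2 * z[0] ** 2 * z[1] - 1])
    x = bsum(grad, np.array([1.0, 1.0]),
             blocks=[slice(0, 1), slice(1, 2)], lipschitz=[8.0, 8.0])
    print(x)   # approaches the stationary point x = y = (1/2)**(1/3)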

208 citations


Journal ArticleDOI
TL;DR: A decentralized deep reinforcement learning (DRL) based framework is proposed to control each UAV in a distributed manner, maximizing the temporal average coverage score achieved by all UAVs in a task, maximizing the geographical fairness of all considered points-of-interest (PoIs), and minimizing the total energy consumption.
Abstract: In this paper, we aim to design a fully-distributed control solution to navigate a group of unmanned aerial vehicles (UAVs), as the mobile Base Stations (BSs) to fly around a target area, to provide long-term communication coverage for the ground mobile users. Different from existing solutions that mainly solve the problem from optimization perspectives, we propose a decentralized deep reinforcement learning (DRL) based framework to control each UAV in a distributed manner. Our goal is to maximize the temporal average coverage score achieved by all UAVs in a task, maximize the geographical fairness of all considered points-of-interest (PoIs), and minimize the total energy consumption, while keeping the UAVs connected and not flying out of the area border. We design the state, observation, and action spaces and the reward in an explicit manner, and model each UAV with deep neural networks (DNNs). We conduct extensive simulations and find the appropriate set of hyperparameters, including the experience replay buffer size, the number of neural units in the two fully-connected hidden layers of the actor, critic, and their target networks, and the discount factor for future rewards. The simulation results justify the superiority of the proposed model over the state-of-the-art DRL-EC³ approach based on deep deterministic policy gradient (DDPG), and three other baselines.
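
The abstract combines coverage, fairness, and energy in a single reward signal. A minimal sketch of such a per-step reward is below; the use of Jain's index and all weights are illustrative assumptions, not the paper's exact formula.

    import numpy as np

    def jain_fairness(coverage_counts):
        """Jain's index in [1/n, 1]; 1 means all PoIs are covered equally."""
        c = np.asarray(coverage_counts, dtype=float)
        return c.sum() ** 2 / (len(c) * (c ** 2).sum() + 1e-9)

    def step_reward(coverage_score, poi_coverage_counts, energy_used,
                    out_of_border, disconnected, w_energy=0.1, penalty=1.0):
        """Illustrative reward: fairness-weighted coverage minus an energy
        term, with penalties for leaving the area or losing connectivity."""
        r = jain_fairness(poi_coverage_counts) * coverage_score
        r -= w_energy * energy_used
        if out_of_border or disconnected:
            r -= penalty
        return r

    print(step_reward(5.0, [3, 2, 4], energy_used=2.0,
                      out_of_border=False, disconnected=False))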

165 citations


Journal ArticleDOI
TL;DR: This work proposes a novel mechanism, which can maximize the network social welfare (i.e., the network-wide performance), while achieving a game equilibrium among strategic mobile users, and demonstrates its superiority over its counterparts.
Abstract: In this paper, a mobile edge computing framework with multi-user computation offloading and transmission scheduling for delay-sensitive applications is studied. In the considered model, computation tasks are generated randomly at mobile users over time. For each task, the mobile user can choose to either process it locally or offload it via the uplink transmission to the edge for cloud computing. To efficiently manage the system, the network regulator is required to employ a network-wide optimal scheme for computation offloading and transmission scheduling while guaranteeing that all mobile users are willing to follow it (as they may naturally behave strategically to benefit themselves). By considering tradeoffs between local and edge computing, wireless features, and noncooperative game interactions among mobile users, we formulate a mechanism design problem to jointly determine a computation offloading scheme, a transmission scheduling discipline, and a pricing rule. A queueing model is built to analytically describe the packet-level network dynamics. Based on this, we propose a novel mechanism, which can maximize the network social welfare (i.e., the network-wide performance), while achieving a game equilibrium among strategic mobile users. Theoretical and simulation results examine the performance of our proposed mechanism, and demonstrate its superiority over its counterparts.

138 citations


Journal ArticleDOI
TL;DR: This paper considers a wireless broadcast network where a base-station is updating many users on random information arrivals under a transmission capacity constraint, and develops a structural MDP scheduling algorithm and an index scheduling algorithm, leveraging Markov decision process (MDP) techniques and Whittle's methodology for restless bandits.
Abstract: Age of information is a new network performance metric that captures the freshness of information at end-users. This paper studies the age of information from a scheduling perspective. To that end, we consider a wireless broadcast network where a base-station (BS) is updating many users on random information arrivals under a transmission capacity constraint. For the offline case when the arrival statistics are known to the BS, we develop a structural MDP scheduling algorithm and an index scheduling algorithm, leveraging Markov decision process (MDP) techniques and Whittle's methodology for restless bandits. By exploring optimal structural results, we not only reduce the computational complexity of the MDP-based algorithm, but also simplify deriving a closed form of the Whittle index. Moreover, for the online case, we develop an MDP-based online scheduling algorithm and an index-based online scheduling algorithm. Both the structural MDP scheduling algorithm and the MDP-based online scheduling algorithm asymptotically minimize the average age, while the index scheduling algorithm minimizes the average age when the information arrival rates for all users are the same. Finally, the algorithms are validated via extensive numerical studies.
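
To make the index-policy idea concrete, here is a minimal sketch of an age-based index scheduler; the rate-weighted index used below is an illustrative stand-in, not the closed-form Whittle index derived in the paper.

    import numpy as np

    def index_schedule(ages, arrival_rates, has_new_packet):
        """Each slot, update the user whose (rate-weighted) age index is
        largest, among users that actually hold a fresh update to send."""
        idx = np.where(has_new_packet, arrival_rates * ages, -np.inf)
        return int(np.argmax(idx)) if np.any(has_new_packet) else None

    ages = np.array([5.0, 2.0, 9.0])          # current age of information per user
    rates = np.array([0.3, 0.6, 0.1])         # packet arrival probability per user
    fresh = np.array([True, False, True])     # users holding an undelivered update
    print(index_schedule(ages, rates, fresh)) # -> 0, since 0.3*5 > 0.1*9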

133 citations


Journal ArticleDOI
TL;DR: Simulation results demonstrate that the proposed JCC-UA algorithm can effectively reduce the latency of user content downloading and improve the hit rates of contents cached at the BSs as compared to several baseline schemes.
Abstract: Deploying small cell base stations (SBSs) under the coverage area of a macro base station (MBS) and caching popular contents at the SBSs in advance are effective means to provide high-speed and low-latency services in next generation mobile communication networks. In this paper, we investigate the problem of content caching (CC) and user association (UA) for edge computing. A joint CC and UA optimization problem is formulated to minimize the content download latency. We prove that the joint CC and UA optimization problem is NP-hard. Then, we propose a joint CC and UA algorithm (JCC-UA) to reduce the content download latency. JCC-UA includes a smart content caching policy (SCCP) and dynamic user association (DUA). SCCP utilizes the exponential smoothing method to predict content popularity and caches contents according to the prediction results. DUA includes a rapid association (RA) method and a delayed association (DA) method. Simulation results demonstrate that the proposed JCC-UA algorithm can effectively reduce the latency of user content downloading and improve the hit rates of contents cached at the BSs as compared to several baseline schemes.
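
The SCCP component boils down to exponential smoothing followed by a top-k cache refresh. A minimal sketch, with the smoothing factor and the toy request counts as assumptions:

    def update_popularity(prev_estimate, observed_requests, alpha=0.3):
        """Exponential smoothing: new estimate = alpha * latest observation
        + (1 - alpha) * previous estimate (alpha is an assumed value)."""
        return alpha * observed_requests + (1 - alpha) * prev_estimate

    def refresh_cache(popularity, capacity):
        """Cache the predicted most popular contents that fit in the SBS cache."""
        ranked = sorted(popularity, key=popularity.get, reverse=True)
        return set(ranked[:capacity])

    popularity = {"a": 10.0, "b": 4.0, "c": 1.0}   # previous estimates
    observed = {"a": 2, "b": 9, "c": 0}            # requests in the last period
    popularity = {k: update_popularity(popularity[k], observed[k])
                  for k in popularity}
    print(refresh_cache(popularity, capacity=2))   # -> {'a', 'b'}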

120 citations


Journal ArticleDOI
TL;DR: In this paper, a novel cloud-edge based federated learning framework for in-home health monitoring is proposed, which learns a shared global model in the cloud from multiple homes at the network edges and achieves data privacy protection by keeping user data locally.
Abstract: In-home health monitoring has attracted great attention for the ageing population worldwide. With the abundant user health data accessed by Internet of Things (IoT) devices and recent development in machine learning, smart healthcare has seen many successful stories. However, existing approaches for in-home health monitoring do not pay sufficient attention to user data privacy and thus are far from being ready for large-scale practical deployment. In this paper, we propose FedHome, a novel cloud-edge based federated learning framework for in-home health monitoring, which learns a shared global model in the cloud from multiple homes at the network edges and achieves data privacy protection by keeping user data locally. To cope with the imbalanced and non-IID distribution inherent in users' monitoring data, we design a generative convolutional autoencoder (GCAE), which aims to achieve accurate and personalized health monitoring by refining the model with a generated class-balanced dataset from the user's personal data. Moreover, GCAE is lightweight to transfer between the cloud and the edges, which helps reduce the communication cost of federated learning in FedHome. Extensive experiments based on realistic human activity recognition data traces corroborate that FedHome significantly outperforms existing widely-adopted methods.
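
The cloud-edge training loop follows the usual federated averaging pattern: each home trains locally and only model weights travel. A minimal sketch of one such round (the generic FedAvg step; FedHome's GCAE model and class-balanced data generation are omitted, and the toy local update is an assumption):

    import numpy as np

    def federated_round(global_weights, home_datasets, local_update):
        """One round of cloud-edge federated averaging. Raw data never leaves
        each home; only the locally updated weights are aggregated, weighted
        by local dataset size."""
        updates, sizes = [], []
        for data in home_datasets:
            w = local_update(global_weights.copy(), data)  # training stays local
            updates.append(w)
            sizes.append(len(data))
        total = float(sum(sizes))
        return sum(w * (n / total) for w, n in zip(updates, sizes))

    # Toy usage: "training" nudges the weights toward the local data mean.
    local = lambda w, d: w + 0.1 * (np.mean(d) - w)
    homes = [np.array([1.0, 2.0]), np.array([3.0])]
    print(federated_round(np.zeros(1), homes, local))   # -> [0.2]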

119 citations


Journal ArticleDOI
TL;DR: This paper presents POETS, an efficient partial computation offloading and adaptive task scheduling algorithm that maximizes the overall system-wide profit; a two-sided matching algorithm is first proposed to derive the optimal transmission scheduling discipline.
Abstract: A variety of novel mobile applications are developed to attract the interests of potential users in the emerging 5G-enabled vehicular networks. Although computation offloading and task scheduling have been widely investigated, it is rather challenging to decide the optimal offloading ratio and perform adaptive task scheduling in high-dynamic networks. Furthermore, the scheduling policy made by the network operator may be violated, since vehicular users are rational and selfish, seeking to maximize their own profits. By considering the incentive compatibility and individual rationality of vehicular users, we present POETS, an efficient partial computation offloading and adaptive task scheduling algorithm to maximize the overall system-wide profit. Specifically, a two-sided matching algorithm is first proposed to derive the optimal transmission scheduling discipline. After that, the offloading ratio of vehicular users can be obtained through convex optimization, without any information of other users. Furthermore, a non-cooperative game is constructed to derive the payoffs of vehicular users that reach an equilibrium between users and the network operator. Theoretical analyses and performance evaluations based on real-world traces of taxis demonstrate the effectiveness of our proposed solution.

115 citations


Journal ArticleDOI
TL;DR: This paper proposes an online Rewards-optimal Auction (RoA) to optimize the long-term sum-of-rewards for processing offloaded tasks, while adapting to the highly dynamic energy harvesting process and computation task arrivals.
Abstract: Utilizing the intelligence at the network edge, the edge computing paradigm emerges to provide time-sensitive computing services for the Internet of Things. In this paper, we investigate sustainable computation offloading in an edge-computing system that consists of energy harvesting-enabled mobile devices (MDs) and a dispatcher. The dispatcher collects computation tasks generated by IoT devices with limited computation power, and offloads them to resourceful MDs in exchange for rewards. We propose an online Rewards-optimal Auction (RoA) to optimize the long-term sum-of-rewards for processing offloaded tasks, while adapting to the highly dynamic energy harvesting (EH) process and computation task arrivals. RoA is designed based on Lyapunov optimization and the Vickrey-Clarke-Groves auction, and its operation does not require prior knowledge of the energy harvesting, task arrivals, or wireless channel statistics. Our analytical results confirm the optimality of task assignment. Furthermore, simulation results validate the analysis, and verify the efficacy of the proposed RoA.

109 citations


Journal ArticleDOI
TL;DR: This paper proposes an imitation learning enabled online task scheduling algorithm with near-optimal performance from the initial stage, training the agent policy by following an expert's demonstration with a theoretically acceptable performance gap.
Abstract: Vehicular Edge Computing (VEC) is a promising paradigm based on the Internet of Vehicles to provide computing resources for end users and relieve heavy traffic burden for cellular networks. In this paper, we consider a VEC network with dynamic topologies, unstable connections and unpredictable movements. Vehicles in the network can offload computation tasks to available neighboring VEC clusters formed by onboard resources, with the purpose of both minimizing system energy consumption and satisfying task latency constraints. For online task scheduling, existing studies either design heuristic algorithms or leverage machine learning, e.g., Deep Reinforcement Learning (DRL). However, these algorithms are not efficient enough because of their low search efficiency and slow convergence speeds for large-scale networks. Instead, we propose an imitation learning enabled online task scheduling algorithm with near-optimal performance from the initial stage of the system. Specifically, an expert can obtain the optimal scheduling policy by solving the formulated optimization problem with a few samples offline. For online learning, we train a learning policy by following the expert's demonstration with a theoretically acceptable performance gap. Performance results show that our solution has a significant advantage with more than 50% improvement compared with the benchmark.
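
The core of the approach is supervised imitation of expert decisions collected offline. Below is a minimal behavior-cloning sketch under assumptions: a softmax-regression learner (not the paper's model) fit to hypothetical (state, action) pairs produced by an expert scheduler.

    import numpy as np

    def behavior_cloning(expert_states, expert_actions, n_actions,
                         lr=0.1, epochs=200):
        """Fit a policy to the expert's (state, action) pairs obtained by
        solving the scheduling problem optimally offline."""
        X = np.asarray(expert_states, dtype=float)
        Y = np.eye(n_actions)[np.asarray(expert_actions)]     # one-hot actions
        W = np.zeros((X.shape[1], n_actions))
        for _ in range(epochs):
            logits = X @ W
            p = np.exp(logits - logits.max(axis=1, keepdims=True))
            p /= p.sum(axis=1, keepdims=True)
            W -= lr * X.T @ (p - Y) / len(X)                  # cross-entropy grad
        return lambda s: int(np.argmax(np.asarray(s) @ W))    # greedy policy

    # Hypothetical demonstrations: pick the VEC cluster with the larger feature.
    states = [[1.0, 0.0], [0.0, 1.0], [0.9, 0.2], [0.1, 0.8]]
    actions = [0, 1, 0, 1]
    policy = behavior_cloning(states, actions, n_actions=2)
    print(policy([0.7, 0.3]))   # -> 0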

104 citations


Journal ArticleDOI
TL;DR: This paper proposes to integrate the opportunistic and participatory modes in a two-phased hybrid framework called HyTasker, which jointly optimizes them under a total incentive budget constraint.
Abstract: Task allocation is a major challenge in Mobile Crowd Sensing (MCS). While previous task allocation approaches follow either the opportunistic or participatory mode, this paper proposes to integrate these two complementary modes in a two-phased hybrid framework called HyTasker. In the offline phase, a group of workers (called opportunistic workers ) are selected, and they complete MCS tasks during their daily routines (i.e., opportunistic mode). In the online phase, we assign another set of workers (called participatory workers ) and require them to move specifically to perform tasks that are not completed by the opportunistic workers (i.e., participatory mode). Instead of considering these two phases separately, HyTasker jointly optimizes them with a total incentive budget constraint. In particular, when selecting opportunistic workers in the offline phase of HyTasker, we propose a novel algorithm that simultaneously considers the predicted task assignment for the participatory workers, in which the density and mobility of participatory workers are taken into account. Experiments on two real-world mobility datasets demonstrate that HyTasker outperforms other methods with more completed tasks under the same budget constraint.

Journal ArticleDOI
TL;DR: In SPOON, the service provider can recruit mobile users based on their locations and select proper sensing reports according to their trust levels without invading user privacy; a privacy-preserving credit management mechanism is introduced to achieve decentralized trust management and secure credit proof for mobile users.
Abstract: Mobile crowdsensing engages a crowd of individuals to use their mobile devices to cooperatively collect data about social events and phenomena for customers with common interests. It can reduce the cost of sensor deployment and improve data quality with human intelligence. To enhance data trustworthiness, it is critical for the service provider to recruit mobile users based on their personal features, e.g., mobility pattern and reputation, but doing so leads to the privacy leakage of mobile users. Therefore, how to resolve the contradiction between user privacy and task allocation is challenging in mobile crowdsensing. In this paper, we propose SPOON, a strong privacy-preserving mobile crowdsensing scheme supporting accurate task allocation based on geographic information and credit points of mobile users. In SPOON, the service provider is able to recruit mobile users based on their locations and to select proper sensing reports according to their trust levels without invading user privacy. By utilizing proxy re-encryption and the BBS+ signature, sensing tasks are protected and reports are anonymized to prevent privacy leakage. In addition, a privacy-preserving credit management mechanism is introduced to achieve decentralized trust management and secure credit proof for mobile users. Finally, we show the security properties of SPOON and demonstrate its efficiency in terms of computation and communication.

Journal ArticleDOI
TL;DR: This paper shows that some apps indeed leak users’ personal information through analytics libraries even though their genuine purposes of using analytics services are legal, and develops an app named “ALManager” that leverages the Xposed framework to manage analytics libraries in other apps.
Abstract: While much effort has been made to detect and measure the privacy leakage caused by the advertising (ad) libraries integrated in mobile applications, analytics libraries, which are also widely used in mobile apps, have not been systematically studied for their privacy risks. Different from ad libraries, the main function of analytics libraries is to collect users’ in-app actions. Hence, by design analytics libraries are more likely to leak users’ private information. In this work, we study what information is collected by the analytics libraries integrated in popular Android apps. We design and implement a framework called “Alde”. Given an app, Alde employs both static analysis and dynamic analysis to detect the users’ in-app actions collected by analytics libraries. We also study what private information can be leaked by the apps that use the same analytics library. Moreover, we analyze apps’ privacy policies to see whether app developers have notified the users that their in-app action data is collected by analytics libraries. Finally, we select eight widely used analytics libraries to study and apply our method to 300 popular apps downloaded from both Chinese app markets and Google Play. Our experimental results show that some apps indeed leak users’ personal information through analytics libraries even though their genuine purposes of using analytics services are legal. To mitigate such threats, we have developed an app named “ALManager” that leverages the Xposed framework to manage analytics libraries in other apps.

Journal ArticleDOI
TL;DR: Simulation results show that the proposed model-free deep reinforcement learning-based distributed algorithm can better exploit the processing capacities of the edge nodes and significantly reduce the ratio of dropped tasks and average delay when compared with several existing algorithms.
Abstract: In mobile edge computing systems, an edge node may have a high load when a large number of mobile devices offload their tasks to it. Those offloaded tasks may experience large processing delay or even be dropped when their deadlines expire. Due to the uncertain load dynamics at the edge nodes, it is challenging for each device to determine its offloading decision (i.e., whether to offload or not, and which edge node it should offload its task to) in a decentralized manner. In this work, we consider non-divisible and delay-sensitive tasks as well as edge load dynamics, and formulate a task offloading problem to minimize the expected long-term cost. We propose a model-free deep reinforcement learning-based distributed algorithm, where each device can determine its offloading decision without knowing the task models and offloading decision of other devices. To improve the estimation of the long-term cost in the algorithm, we incorporate the long short-term memory (LSTM), dueling deep Q-network (DQN), and double-DQN techniques. Simulation results show that our proposed algorithm can better exploit the processing capacities of the edge nodes and significantly reduce the ratio of dropped tasks and average delay when compared with several existing algorithms.
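
The value-estimation network described above (LSTM plus dueling double-DQN) can be sketched compactly. Below is a minimal PyTorch sketch under assumptions: the layer sizes are invented, and the code shows only the network shape and the double-DQN target, not the full training loop.

    import torch
    import torch.nn as nn

    class DuelingRecurrentQNet(nn.Module):
        """An LSTM tracks the history of load observations, then dueling value
        and advantage streams form Q = V + A - mean(A)."""
        def __init__(self, obs_dim, n_actions, hidden=64):
            super().__init__()
            self.lstm = nn.LSTM(obs_dim, hidden, batch_first=True)
            self.value = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                                       nn.Linear(hidden, 1))
            self.adv = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                                     nn.Linear(hidden, n_actions))

        def forward(self, obs_seq):
            out, _ = self.lstm(obs_seq)          # (batch, time, hidden)
            h = out[:, -1]                       # last hidden state
            v, a = self.value(h), self.adv(h)
            return v + a - a.mean(dim=1, keepdim=True)

    def double_dqn_target(online, target, next_obs, reward, gamma=0.99):
        """Double DQN: the online net selects the next action, the target net
        evaluates it, reducing overestimation of Q-values."""
        with torch.no_grad():
            best = online(next_obs).argmax(dim=1, keepdim=True)
            return reward + gamma * target(next_obs).gather(1, best).squeeze(1)

    net = DuelingRecurrentQNet(obs_dim=8, n_actions=4)
    q = net(torch.randn(2, 10, 8))               # 2 sequences of 10 observations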

Journal ArticleDOI
TL;DR: In this article, the authors proposed an asynchronous advantage actor-critic (A3C) based real-time scheduler for stochastic edge-cloud environments allowing decentralized learning concurrently across multiple agents.
Abstract: The ubiquitous adoption of Internet-of-Things (IoT) based applications has resulted in the emergence of the Fog computing paradigm, which allows seamlessly harnessing both mobile-edge and cloud resources. Efficient scheduling of application tasks in such environments is challenging due to constrained resource capabilities, mobility factors in IoT, resource heterogeneity, network hierarchy, and stochastic behaviors. Existing heuristic-based and Reinforcement Learning approaches lack generalizability and quick adaptability, thus failing to tackle this problem optimally. They are also unable to utilize the temporal workload patterns and are suitable only for centralized setups. Thus, we propose an Asynchronous-Advantage-Actor-Critic (A3C) based real-time scheduler for stochastic Edge-Cloud environments allowing decentralized learning concurrently across multiple agents. We use the Residual Recurrent Neural Network (R2N2) architecture to capture a large number of host and task parameters together with temporal patterns to provide efficient scheduling decisions. The proposed model is adaptive and able to tune different hyper-parameters based on the application requirements. We explicate our choice of hyper-parameters through sensitivity analysis. The experiments conducted on a real-world dataset show a significant improvement in terms of energy consumption, response time, Service-Level-Agreement and running cost by 14.4%, 7.74%, 31.9%, and 4.64%, respectively, when compared to the state-of-the-art algorithms.

Journal ArticleDOI
TL;DR: This paper proposes OLCD, an Online Learning-aided Cooperative offloaDing mechanism for scenarios where computation offloading is organized based on accumulated social trust, and theoretically proves that OLCD guarantees close-to-optimal system performance even with inaccurate prediction, though its robustness is achieved at the expense of decreased stability.
Abstract: Cooperative offloading in mobile edge computing enables resource-constrained edge clouds to help each other with computation-intensive tasks. However, the power of such offloading could not be fully unleashed, unless trust risks in collaboration are properly managed. As tasks are outsourced and processed at the network edge, completion latency usually presents high variability that can harm the offered service levels. By jointly considering these two challenges, we propose OLCD, an Online Learning-aided Cooperative offloaDing mechanism under the scenario where computation offloading is organized based on accumulated social trust. Under co-provisioning of computation, transmission, and trust services, trust propagation is performed along the multi-hop offloading path such that tasks are allowed to be fulfilled by powerful edge clouds. We harness Lyapunov optimization to exploit the spatial-temporal optimality of the long-term system cost minimization problem. By gap-preserving transformation, we decouple the series of bidirectional offloading problems so that it suffices to solve a separate decision problem for each edge cloud. The optimal offloading control cannot materialize without complete latency knowledge. To adapt to latency variability, we resort to the delayed online learning technique to facilitate completion latency prediction under long-duration processing, which is fed as input to the queue-based offloading control policy. Such predictive control is specially designed to minimize the loss due to prediction errors over time. We theoretically prove that OLCD guarantees close-to-optimal system performance even with inaccurate prediction, but its robustness is achieved at the expense of decreased stability. Trace-driven simulations demonstrate the efficiency of OLCD as well as its superiority over prior related work.

Journal ArticleDOI
TL;DR: This work proposes a lightweight and real-time traffic light detector for the autonomous vehicle platform that consists of a heuristic candidate region selection module to identify all possible traffic lights, and a lightweight Convolutional Neural Network (CNN) classifier to classify the resulting candidates.
Abstract: Due to the unavailability of Vehicle-to-Infrastructure (V2I) communication in current transportation systems, Traffic Light Detection (TLD) is still considered an important module in autonomous vehicles and Driver Assistance Systems (DAS). To overcome the low flexibility and accuracy of vision-based heuristic algorithms and the high power consumption of deep learning-based methods, we propose a lightweight and real-time traffic light detector for the autonomous vehicle platform. Our model consists of a heuristic candidate region selection module to identify all possible traffic lights, and a lightweight Convolutional Neural Network (CNN) classifier to classify the candidates obtained. Offline simulations on the GPU server with the collected dataset and several public datasets show that our model achieves higher average accuracy with lower time consumption. By integrating our detector module on NVIDIA Jetson TX1/TX2, we conduct on-road tests on two full-scale self-driving vehicle platforms (a car and a bus) in normal traffic conditions. Our model can achieve an average detection accuracy of 99.3 percent (mRttld) and 99.7 percent (Rttld) at 10 Hz on TX1 and TX2, respectively. The on-road tests also show that our traffic light detection module can achieve ±1.5 m errors at stop lines when working with other self-driving modules.
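
The two-stage pipeline (heuristic proposals, then a small CNN) can be sketched with OpenCV as below. All thresholds, sizes, and the classifier interface are assumptions for illustration; the paper's actual heuristics and network differ.

    import cv2
    import numpy as np

    def candidate_regions(bgr_frame):
        """Heuristic candidate selection (illustrative thresholds): find bright
        red/green blobs in HSV space and return their bounding boxes as
        possible traffic lights."""
        hsv = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2HSV)
        red = cv2.inRange(hsv, (0, 120, 120), (10, 255, 255))
        green = cv2.inRange(hsv, (45, 120, 120), (90, 255, 255))
        mask = cv2.bitwise_or(red, green)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        return [cv2.boundingRect(c) for c in contours
                if 20 < cv2.contourArea(c) < 2000]    # size filter

    def classify(cnn, frame, boxes, size=32):
        """Second stage: crop each candidate and let a lightweight CNN decide
        its class ('cnn' is a stand-in for any classifier with .predict())."""
        crops = [cv2.resize(frame[y:y + h, x:x + w], (size, size))
                 for (x, y, w, h) in boxes]
        return cnn.predict(np.stack(crops)) if crops else []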

Journal ArticleDOI
TL;DR: DeepWear is a deep learning framework for wearable devices that improves performance and reduces the energy footprint by offloading DL tasks from a wearable device to its paired handheld device through local network connectivity such as Bluetooth.
Abstract: Due to their on-body and ubiquitous nature, wearables can generate a wide range of unique sensor data creating countless opportunities for deep learning tasks. We propose DeepWear, a deep learning (DL) framework for wearable devices to improve the performance and reduce the energy footprint. DeepWear strategically offloads DL tasks from a wearable device to its paired handheld device through local network connectivity such as Bluetooth. Compared to remote-cloud-based offloading, DeepWear requires no Internet connectivity, consumes less energy, and is robust to privacy breaches. DeepWear provides various novel techniques such as context-aware offloading, strategic model partition, and pipelining support to efficiently utilize the processing capacity from nearby paired handhelds. Deployed as a user-space library, DeepWear offers developer-friendly APIs that are as simple as those in traditional DL libraries such as TensorFlow. We have implemented DeepWear on the Android OS and evaluated it on COTS smartphones and smartwatches with real DL models. DeepWear brings up to 5.08X and 23.0X execution speedup, as well as 53.5 and 85.5 percent energy saving compared to wearable-only and handheld-only strategies, respectively.
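
The strategic model partition mentioned above amounts to choosing where to split a layered model between watch and phone. A minimal latency-only sketch under assumptions (the per-layer timings, tensor sizes, and Bluetooth rate below are invented, and DeepWear's actual decision logic also weighs energy and device context):

    def choose_partition(layer_local_ms, layer_handheld_ms, layer_out_kb,
                         input_kb, bt_kbps):
        """Run layers [0, k) on the wearable, ship the intermediate tensor over
        Bluetooth, finish on the handheld; pick the split point k with the
        lowest end-to-end latency."""
        n = len(layer_local_ms)
        sizes = [input_kb] + list(layer_out_kb)   # data crossing each split point
        costs = []
        for k in range(n + 1):                    # k = n: run fully on-watch
            local = sum(layer_local_ms[:k])
            remote = sum(layer_handheld_ms[k:])
            transfer = sizes[k] / bt_kbps * 1000 if k < n else 0.0
            costs.append(local + transfer + remote)
        return min(range(n + 1), key=costs.__getitem__)

    # Toy 3-layer model: the watch is ~5x slower; Bluetooth at 250 kB/s.
    print(choose_partition([50, 200, 300], [10, 40, 60], [300, 5, 1],
                           input_kb=600, bt_kbps=250))   # -> 2 (split late)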

Journal ArticleDOI
TL;DR: A high-availability data collection scheme based on multiple autonomous underwater vehicles (AUVs) (HAMA) is proposed to improve the performance of the sensor network and guarantee the high availability of the data collection service.
Abstract: In this paper, a high-availability data collection scheme based on multiple autonomous underwater vehicles (AUVs) (HAMA) is proposed to improve the performance of the sensor network and guarantee the high availability of the data collection service. Multiple AUVs move in the network along predefined trajectories. The nodes near the trajectory of an AUV directly send their data to the AUV while the others transmit data to nodes that are closer to the trajectory. Malfunction discovery and repair mechanisms are applied to ensure that the network operates appropriately when an AUV fails to communicate with the nodes while collecting data. Compared with existing methods, the proposed HAMA method increases the packet delivery ratio and the network lifetime.

Journal ArticleDOI
TL;DR: A network slice admission control algorithm is designed that ensures the service guarantees provided to tenants are always satisfied, together with a machine learning algorithm that can be deployed in practical settings and achieves close-to-optimal performance.
Abstract: It is now commonly agreed that future 5G Networks will build upon the network slicing concept. The ability to provide virtual, logically independent “slices” of the network will also have an impact on the models that will sustain the business ecosystem. Network slicing will open the door to new players: the infrastructure provider, which is the owner of the infrastructure, and the tenants, which may acquire a network slice from the infrastructure provider to deliver a specific service to their customers. In this new context, how to correctly handle resource allocation among tenants and how to maximize the monetization of the infrastructure become fundamental problems that need to be solved. In this paper, we address this issue by designing a network slice admission control algorithm that (i) autonomously learns the best acceptance policy while (ii) ensuring that the service guarantees provided to tenants are always satisfied. The contributions of this paper include: (i) an analytical model for the admissibility region of a network slicing-capable 5G Network, (ii) the analysis of the system (modeled as a Semi-Markov Decision Process) and the optimization of the infrastructure provider's revenue, and (iii) the design of a machine learning algorithm that can be deployed in practical settings and achieves close to optimal performance.
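
To give a feel for learning an admission policy, here is a deliberately tiny tabular Q-learning sketch. Everything here is an assumption for illustration (toy state space, rewards, and departure dynamics); the paper models the system as a Semi-Markov Decision Process with a more elaborate learner.

    import random

    def q_learning_admission(episodes, capacity, revenue, penalty,
                             alpha=0.1, gamma=0.95, eps=0.1):
        """State: number of slices in use; actions: 0 = reject, 1 = accept an
        arriving slice request. Accepting beyond capacity is penalized as a
        stand-in for violated service guarantees."""
        Q = {(s, a): 0.0 for s in range(capacity + 1) for a in (0, 1)}
        for _ in range(episodes):
            s = 0
            for _ in range(50):                       # steps per episode
                a = random.choice((0, 1)) if random.random() < eps else \
                    max((0, 1), key=lambda act: Q[(s, act)])
                if a == 1 and s < capacity:
                    r, s2 = revenue, s + 1            # admit: earn revenue
                elif a == 1:
                    r, s2 = -penalty, s               # admit when full: SLA hit
                else:
                    r, s2 = 0.0, s
                if random.random() < 0.3 and s2 > 0:  # a slice departs
                    s2 -= 1
                Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, 0)], Q[(s2, 1)])
                                      - Q[(s, a)])
                s = s2
        return Q

    Q = q_learning_admission(episodes=500, capacity=5, revenue=1.0, penalty=5.0)
    policy = {s: max((0, 1), key=lambda a: Q[(s, a)]) for s in range(6)}
    print(policy)   # typically accepts while capacity remains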

Journal ArticleDOI
TL;DR: This work presents a vehicular crowd sensing system to efficiently incentivize the vehicle agents to match the sensing distribution of the sampled data to the desired target distribution with a limited budget, and formulates the incentivizing problem as a new type of non-linear multiple-choice knapsack problem.
Abstract: Vehicular crowd sensing systems are designed to achieve large spatio-temporal sensing coverage with low cost in deployment and maintenance. For example, taxi platforms can be utilized for sensing city-wide air quality. However, the goals of vehicle agents are often inconsistent with the goal of the crowdsourcer. Vehicle agents like taxis prioritize searching for passenger ride requests (defined as task requests), which leads them to gather in busy regions. In contrast, sensing systems often need to sample data over the entire city with a desired distribution (e.g., Uniform distribution, Gaussian Mixture distribution, etc.) to ensure sufficient spatio-temporal information for further analysis. This inconsistency decreases the sensing coverage quality and thus impairs the quality of the collected information. A simple approach to reduce the inconsistency is to greedily incentivize the vehicle agents to different regions. However, incentivization brings challenges, including the heterogeneity of desired target distributions, the limited budget to incentivize more vehicle agents, and the high computational complexity of optimizing incentivizing strategies. To this end, we present a vehicular crowd sensing system to efficiently incentivize the vehicle agents to match the sensing distribution of the sampled data to the desired target distribution with a limited budget. To make the system flexible to various desired target distributions, we formulate the incentivizing problem as a new type of non-linear multiple-choice knapsack problem, with the dissimilarity between the collected data distribution and the desired distribution as the objective function. To utilize the budget efficiently, we design a customized incentive by combining monetary incentives and potential task (ride) requests at the destination. Meanwhile, an efficient optimization algorithm, iLOCuS, is presented to plan the incentivizing policy for vehicle agents by decomposing the sensing distribution into two distinct levels, a time-location level and a vehicle level, iteratively approximating the optimal solution and reducing the dissimilarity objective. Our experimental results based on real-world data show that our system can reduce up to 26.99 percent of the dissimilarity between the sensed and target distributions compared to benchmark methods.

Journal ArticleDOI
TL;DR: A game-theoretic incentive mechanism is designed to encourage the “best” neighboring mobile devices to share their resources for sensing, along with an auction-based task migration algorithm that guarantees the truthfulness of the auctioneer's announced price, individual rationality, profitability, and computational efficiency.
Abstract: With the exponentially increasing number of mobile devices, crowdsensing, which uses the available resources of neighboring mobile devices to perform sensing tasks cooperatively, has become a hot topic. However, there still remain three main obstacles to be solved in a practical system. First, since mobile devices are selfish and rational, it is natural that they provide cooperation for sensing only with a reasonable payment. Meanwhile, due to the arrival and departure of sensing tasks, resources should be allocated and released dynamically as sensing tasks come or leave. To this end, this paper designs a game-theoretic incentive mechanism to encourage the “best” neighboring mobile devices to share their own resources for sensing. Next, in order to adjust resources among mobile devices for better crowdsensing response, an auction-based task migration algorithm is proposed, which can guarantee the truthfulness of the auctioneer's announced price, individual rationality, profitability, and computational efficiency. Moreover, taking into account the random movement of mobile devices resulting in stochastic connections, we also use multi-stage stochastic decisions to perform posterior resource allocation that compensates for inaccurate predictions. The numerical results show the effectiveness and improvement of the proposed multi-stage stochastic programming based distributed game-theoretic methodology (SPG) for crowdsensing.

Journal ArticleDOI
TL;DR: Furion is presented, a VR framework that enables high-quality, immersive mobile VR on today's mobile devices and wireless networks by exploiting a key insight about the VR workload: foreground interactions and the background environment have contrasting predictability and rendering workloads.
Abstract: Despite the growing market penetration, today's high-end virtual reality (VR) systems remain tethered, which not only limits users’ VR experience but also creates a safety hazard. In this paper, we perform a systematic design study of the “elephant in the room” facing the VR industry – is it feasible to enable high-quality VR apps on untethered mobile devices such as smartphones? Our quantitative, performance-driven design study makes two contributions. First, we show that the QoE achievable for high-quality VR applications on today's mobile hardware and wireless networks via local rendering or offloading is about 10X away from the acceptable QoE, yet waiting for future mobile hardware or next-generation wireless networks (e.g., 5G) is unlikely to help, because of power limitations and the higher CPU utilization needed for processing packets under higher data rates. Second, we present Furion, a VR framework that enables high-quality, immersive mobile VR on today's mobile devices and wireless networks. Furion exploits a key insight about the VR workload that foreground interactions and background environment have contrasting predictability and rendering workload, and employs a split renderer architecture running on both the phone and the server. Supplemented with video compression, use of panoramic frames, parallel decoding on multiple cores on the phone, and view-based bitrate adaptation, we demonstrate Furion can support high-quality VR apps on today's smartphones over WiFi, with under 14 ms latency and 60 FPS (the phone display refresh rate).

Journal ArticleDOI
TL;DR: Comprehensive evaluations demonstrate that the Lagreedy algorithm is able to obtain the shortest delay with a high power consumption, while the branch-and-bound algorithm can achieve both shorter delay and lower power consumption with reliability guarantees.
Abstract: Computation offloading over fog computing has the potential to improve reliability and reduce latency in future networks. This paper considers a scenario where roadside units (RSUs) are installed for offloading tasks to the computation nodes including nearby fog nodes and a cloud center. To guarantee reliable communication, we formulate the first subproblem of power allocation, and leverage the conditional value-at-risk approach to analyze the successful transmission probability in the worst-case channel condition. To complete computation tasks with low latency, we formulate the second subproblem of task allocation into a multi-period generalized assignment problem (MPGAP), which aims at minimizing the total delay by offloading tasks to the ‘right’ fog nodes at the ‘right’ period. Then, we propose a modified branch-and-bound algorithm to derive the optimal solution and a heuristic greedy algorithm to obtain approximate performance. In addition, the master problem is formulated as a non-convex optimization problem that considers both the reliability-guaranteed and delay-sensitive requirements. We design the Lagreedy algorithm by combining the subgradient algorithm with the heuristic greedy algorithm. Comprehensive evaluations demonstrate that the Lagreedy algorithm is able to obtain the shortest delay with a high power consumption, while the branch-and-bound algorithm can achieve both shorter delay and lower power consumption with reliability guarantees.
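
In the spirit of the heuristic greedy stage described above, here is a minimal earliest-finish-time assignment sketch; the node rates, capacities, and task sizes are invented, and the paper's actual MPGAP heuristic differs in its details.

    def greedy_assign(tasks_cycles, node_rates, node_capacity):
        """Assign each task to the computation node that finishes it earliest,
        respecting a per-node task-count capacity; larger tasks go first."""
        load = [0.0] * len(node_rates)       # accumulated busy time per node
        count = [0] * len(node_rates)
        plan = []
        for cycles in sorted(tasks_cycles, reverse=True):
            feasible = [i for i in range(len(node_rates))
                        if count[i] < node_capacity[i]]
            best = min(feasible, key=lambda i: load[i] + cycles / node_rates[i])
            load[best] += cycles / node_rates[best]
            count[best] += 1
            plan.append((cycles, best))
        return plan, max(load)               # assignment and makespan-style delay

    # Two fog nodes and a faster cloud node (rates in megacycles/s, assumed).
    plan, delay = greedy_assign([400, 300, 250, 100],
                                node_rates=[100, 120, 500],
                                node_capacity=[2, 2, 1])
    print(plan, round(delay, 2))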

Journal ArticleDOI
TL;DR: Extensive experimental results show that ViFi outperforms virtual fingerprinting systems adopting simpler propagation models in terms of accuracy, and allows a seven-fold reduction in the number of measurements to be collected, while achieving the same accuracy as a traditional fingerprinting system deployed in the same environment.
Abstract: Widespread adoption of indoor positioning systems based on WiFi fingerprinting is at present hindered by the large efforts required for measurements collection during the offline phase. Two approaches were recently proposed to address such an issue: crowdsourcing and RSS radiomap prediction, based on either interpolation or propagation channel model fitting from a small set of measurements. RSS prediction promises better positioning accuracy when compared to crowdsourcing, but no systematic analysis of the impact of system parameters on positioning accuracy is available. This paper fills this gap by introducing ViFi, an indoor positioning system that relies on RSS prediction based on the Multi-Wall Multi-Floor (MWMF) propagation model to generate a discrete RSS radiomap (virtual fingerprints). Extensive experimental results, obtained in multiple independent testbeds, show that ViFi outperforms virtual fingerprinting systems adopting simpler propagation models in terms of accuracy, and allows a seven-fold reduction in the number of measurements to be collected, while achieving the same accuracy as a traditional fingerprinting system deployed in the same environment. Finally, a set of guidelines for the implementation of ViFi in a generic environment, which saves the effort of collecting additional measurements for system testing and fine tuning, is proposed.
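
The MWMF model underlying the virtual fingerprints is a standard log-distance path loss plus per-wall and per-floor attenuation terms. A minimal sketch (the numeric attenuations and the example geometry below are assumptions, not ViFi's fitted parameters):

    import math

    def mwmf_rss(p0_dbm, n, d_m, walls, floors, d0_m=1.0):
        """Multi-Wall Multi-Floor prediction:
        RSS = P0 - 10 n log10(d/d0) - sum(wall losses) - sum(floor losses)."""
        loss = 10 * n * math.log10(max(d_m, d0_m) / d0_m)
        loss += sum(walls.values())      # dB per traversed wall, by wall type
        loss += sum(floors.values())     # dB per traversed floor
        return p0_dbm - loss

    # Predict a virtual fingerprint 12 m away, through 2 brick walls, 1 floor.
    rss = mwmf_rss(p0_dbm=-40, n=2.5, d_m=12.0,
                   walls={"brick_1": 8.0, "brick_2": 8.0},
                   floors={"floor_1": 15.0})
    print(round(rss, 1))   # about -98 dBm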

Journal ArticleDOI
TL;DR: This paper designs joint charging and scheduling schemes to maximize the Quality of Monitoring (QoM) for stochastic events, which arrive and depart according to known probability distributions of time.
Abstract: In this paper, we consider the scenario in which a mobile charger (MC) periodically travels within a sensor network to recharge the sensors wirelessly. We design joint charging and scheduling schemes to maximize the Quality of Monitoring (QoM) for stochastic events, which arrive and depart according to known probability distributions of time. Information is considered captured if it is sensed by at least one sensor. We focus on two closely related research issues, i.e., how to choose the sensors for charging and decide the charging time for each of them, and how to schedule the sensors’ activation schedules according to their received energy. We formulate our problem as the maximum QoM CHArging and SchEduling problem (CHASE). We first ignore the MC's travel time and study the resulting relaxed version of the problem, which we call CHASE-R. We show that both CHASE and CHASE-R are NP-hard. For CHASE-R, we prove that it can be formulated as a submodular function maximization problem, which allows two algorithms to achieve 1/6- and 1/(4+ε)-approximation ratios. Then, for CHASE, we propose approximation algorithms to solve it by extending the CHASE-R results. We conduct simulations to validate our algorithm design.
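
Submodular maximization problems like CHASE-R are typically attacked greedily by marginal gain. The sketch below is a generic greedy allocation of charging time under assumptions (a toy concave QoM and invented budget/step values); it illustrates the technique rather than the paper's specific algorithms with their 1/6 and 1/(4+ε) guarantees.

    import math

    def greedy_charging(sensors, total_budget, step, qom):
        """Repeatedly give one more slice of the MC's charging time to the
        sensor whose marginal QoM gain per slice is largest."""
        alloc = {s: 0.0 for s in sensors}
        spent = 0.0
        while spent + step <= total_budget:
            def gain(s):
                trial = dict(alloc)
                trial[s] += step
                return qom(trial) - qom(alloc)
            best = max(sensors, key=gain)
            if gain(best) <= 0:
                break
            alloc[best] += step
            spent += step
        return alloc

    # Toy QoM: diminishing returns per sensor (concave in charging time).
    weights = {"s1": 3.0, "s2": 1.0}
    qom = lambda a: sum(w * math.log1p(a[s]) for s, w in weights.items())
    print(greedy_charging(["s1", "s2"], total_budget=4.0, step=1.0, qom=qom))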

Journal ArticleDOI
TL;DR: This paper proposes and evaluates a MILP optimization model to solve the complexities that arise from this new environment, and designs a greedy-based heuristic to investigate the possible trade-offs between execution runtime and network slice deployment.
Abstract: Network Slicing (NS) is a key enabler of the upcoming 5G and beyond system. Leveraging both Network Function Virtualization (NFV) and Software Defined Networking (SDN), NS will enable a flexible deployment of Network Functions (NFs) belonging to multiple Service Function Chains (SFCs) over various administrative and technological domains. Our novel architecture addresses the complexities and heterogeneities of verticals targeted by 5G systems, whereby each slice consists of a set of SFCs, and each SFC handles specific traffic within the slice. In this paper, we propose and evaluate a MILP optimization model to solve the complexities that arise from this new environment. Our proposed model enables a cost-optimal deployment of network slices, allowing a mobile network operator to efficiently allocate the underlying layer resources according to its users’ requirements. We also design a greedy-based heuristic to investigate the possible trade-offs between execution runtime and network slice deployment. For each network slice, the proposed solution guarantees the required delay and bandwidth, while efficiently handling the use of both the VNF nodes and the physical nodes, reducing the service provider's Operating Expenditure (OPEX).

Journal ArticleDOI
TL;DR: An analytical framework of reliability-oriented cooperative computation optimization is developed that considers the dynamics of vehicular communication and computation, with stochastic models of V2V and V2I communications incorporating the effects of vehicle mobility, channel contention, and fading to derive the probability of successful data transmission.
Abstract: The emergence of vehicular networking enables distributed cooperative computation among nearby vehicles and infrastructures to achieve various applications that may need to handle massive data within a short deadline. In this paper, we investigate the fundamental problems of a cooperative vehicle-infrastructure system (CVIS): how do vehicular communication and networking affect the benefit gained from cooperative computation in the CVIS, and what should a reliability-optimal cooperation be? We develop an analytical framework of reliability-oriented cooperative computation optimization, considering the dynamics of vehicular communication and computation. To be specific, we propose stochastic modeling of V2V and V2I communications, incorporating the effects of vehicle mobility, channel contention, and fading, and theoretically derive the probability of successful data transmission. We also formulate and solve an execution time minimization model to obtain the success probability of application completion with the constrained computation capacity and application requirements. By combining these models, we develop constrained optimizations to maximize the coupled reliability of communication and computation by optimizing the data partitions among different cooperators. Numerical results confirm that vehicular applications with a short deadline and a large processing data size can better benefit from cooperative computation than from non-cooperative solutions.

Journal ArticleDOI
TL;DR: A Deep Reinforcement-Learning (DRL) based SBS activation strategy that activates the optimal subset of SBSs to significantly lower the energy consumption without compromising the quality of service, and can scale to large systems with polynomial complexities in both storage and computation.
Abstract: Heterogeneous Network (HetNet), where Small cell Base Stations (SBSs) are densely deployed to offload traffic from macro Base Stations (BSs), is identified as a key solution to meet the unprecedented mobile traffic demand. The dense SBS deployment is designed for peak traffic hours and consumes an unnecessarily large amount of energy during off-peak times. In this paper, we propose a Deep Reinforcement-Learning (DRL) based SBS activation strategy that activates the optimal subset of SBSs to significantly lower the energy consumption without compromising the quality of service. In particular, we formulate the SBS on/off switching problem into a Markov Decision Process that can be solved by Actor Critic (AC) reinforcement learning methods. To avoid the prohibitively high computational and storage costs of conventional tabular-based approaches, we propose to use deep neural networks to approximate the policy and value functions in the AC approach. Moreover, to expedite the training process, we adopt a Deep Deterministic Policy Gradient (DDPG) approach together with a novel action refinement scheme. Through extensive numerical simulations, we show that the proposed scheme greatly outperforms the existing methods in terms of both energy efficiency and computational efficiency. We also show that the proposed scheme can scale to large systems with polynomial complexities in both storage and computation.

Journal ArticleDOI
TL;DR: This paper designs truthful incentive mechanisms to minimize the social cost such that each of the cooperative tasks can be completed by a group of compatible users, and presents a user grouping method based on a neural network model and a clustering algorithm.
Abstract: Mobile crowd sensing emerges as a new paradigm which takes advantage of pervasive sensor-embedded smartphones to collect data. Many incentive mechanisms for mobile crowd sensing have been proposed. However, none of them takes into consideration the cooperative compatibility of users for multiple cooperative tasks. In this paper, we design truthful incentive mechanisms to minimize the social cost such that each of the cooperative tasks can be completed by a group of compatible users. We study two bid models and formulate the Social Optimization Compatible User Selection (SOCUS) problem for each model. We also define three compatibility models and use real-life relationships from social networks to model the compatibility relationships. We design two incentive mechanisms, MCT-M and MCT-S, for the compatibility cases. Both MCT-M and MCT-S consist of two steps: compatible user grouping and reverse auction. We further present a user grouping method based on a neural network model and a clustering algorithm. Through both rigorous theoretical analysis and extensive simulations, we demonstrate that the proposed mechanisms achieve computational efficiency, individual rationality, and truthfulness. Moreover, MCT-M can output the optimal solution. By using the neural network and clustering algorithm for user grouping, the proposed incentive mechanisms can further reduce the social cost and overpayment ratio with less grouping time.