
Showing papers in "IEEE Transactions on Mobile Computing in 2018"


Journal ArticleDOI
TL;DR: This paper proposes delay-optimal cooperative edge caching for large-scale user-centric mobile networks, in which content placement and cluster size are optimized based on stochastic information about network topology, traffic distribution, channel quality, and file popularity.
Abstract: With files proactively stored at base stations (BSs), mobile edge caching enables direct content delivery without remote file fetching, which can reduce the end-to-end delay while relieving backhaul pressure. To effectively utilize the limited cache size in practice, cooperative caching can be leveraged to exploit caching diversity, by allowing users to be served by multiple base stations under the emerging user-centric network architecture. This paper explores delay-optimal cooperative edge caching in large-scale user-centric mobile networks, where the content placement and cluster size are optimized based on the stochastic information of network topology, traffic distribution, channel quality, and file popularity. Specifically, a greedy content placement algorithm is proposed based on the optimal bandwidth allocation, which can achieve $(1-1/e)$-optimality with linear computational complexity. In addition, the optimal user-centric cluster size is studied, and a condition constraining the maximal cluster size is presented in explicit form, which reflects the tradeoff between caching diversity and spectrum efficiency. Extensive simulations are conducted for analysis validation and performance evaluation. Numerical results demonstrate that the proposed greedy content placement algorithm can reduce the average file transmission delay by up to 45 percent compared with the non-cooperative and hit-ratio-maximal schemes. Furthermore, the optimal clustering is also discussed considering the influences of different system parameters.
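
The placement step lends itself to a compact illustration. Below is a minimal sketch of a greedy cache-placement loop, under assumptions of ours, not the paper's exact model: a coverage-style objective where a file cached at any BS of the cluster is served at a low local delay instead of over the backhaul. All names and delay values are illustrative; the paper's algorithm additionally couples placement with optimal bandwidth allocation.

```python
def delay_saving(placement, popularity, backhaul_delay=10.0, local_delay=1.0):
    """Expected delay saved: a file cached at any BS in the cluster is
    served locally instead of being fetched over the backhaul."""
    cached = set().union(*placement.values()) if placement else set()
    return sum(p * (backhaul_delay - local_delay)
               for f, p in enumerate(popularity) if f in cached)

def greedy_placement(n_bs, cache_size, popularity):
    placement = {b: set() for b in range(n_bs)}
    while True:
        base = delay_saving(placement, popularity)
        best_gain, best = 0.0, None
        for b in range(n_bs):
            if len(placement[b]) >= cache_size:
                continue
            for f in range(len(popularity)):
                if f in placement[b]:
                    continue
                placement[b].add(f)                     # tentatively cache f at b
                gain = delay_saving(placement, popularity) - base
                placement[b].discard(f)
                if gain > best_gain:
                    best_gain, best = gain, (b, f)
        if best is None:
            return placement        # no (BS, file) pair improves the objective
        placement[best[0]].add(best[1])

# Zipf-like popularity over 6 files, 2 BSs with 2 slots each: the greedy
# naturally caches distinct files, which is the caching-diversity effect.
pop = [1 / r for r in range(1, 7)]
print(greedy_placement(n_bs=2, cache_size=2, popularity=pop))
```

Because the toy objective is monotone submodular, this marginal-gain greedy is the standard route to the $(1-1/e)$-style guarantee the abstract cites (the exact bound depends on the constraint structure analyzed in the paper).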

248 citations


Journal ArticleDOI
TL;DR: EABS, an event-aware backpressure scheduling scheme for EIoT that combines the shortest path with the backpressure scheme during next-hop selection, can reduce the average end-to-end delay and increase the average forwarding percentage.
Abstract: The backpressure scheduling scheme has been applied in the Internet of Things, as it can control network congestion effectively and increase network throughput. However, in large-scale Emergency Internet of Things (EIoT), emergency packets may exist because of urgent events or situations. The traditional backpressure scheduling scheme will explore all possible routes between the source and destination nodes, which causes superfluously long paths for packets. Therefore, the end-to-end delay increases and the real-time performance of emergency packets cannot be guaranteed. To address this shortcoming, this paper proposes EABS, an event-aware backpressure scheduling scheme for EIoT. A backpressure queue model with emergency packets is first devised based on the analysis of the arrival process of different packets. Meanwhile, EABS combines the shortest path with the backpressure scheme during next-hop selection. Emergency packets are forwarded along the shortest path while avoiding network congestion according to the queue backlog difference. Extensive experimental results verify that EABS can reduce the average end-to-end delay and increase the average forwarding percentage. For emergency packets, real-time performance is guaranteed. Moreover, we compare EABS with two existing backpressure scheduling schemes, showing that EABS outperforms both of them.
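
A hedged sketch of what such a next-hop rule could look like: among neighbors with a positive backlog differential (the backpressure congestion guard), emergency packets follow the shortest path while ordinary packets keep the classic max-weight choice. The data structures and the exact rule are assumptions for illustration, not the paper's formulation.

```python
from dataclasses import dataclass

@dataclass
class Packet:
    emergency: bool

def next_hop(node, packet, backlog, hops_to_sink, neighbors):
    """backlog: queue length per node; hops_to_sink: shortest-path hop count."""
    # only neighbors with positive backlog differential avoid congestion
    usable = [n for n in neighbors[node] if backlog[node] - backlog[n] > 0]
    if not usable:
        return None                 # hold the packet, as plain backpressure does
    if packet.emergency:
        # emergency traffic: shortest path among non-congested neighbors
        return min(usable, key=lambda n: hops_to_sink[n])
    # regular traffic: classic max-weight backpressure choice
    return max(usable, key=lambda n: backlog[node] - backlog[n])

# toy topology: node 0 can reach the sink via 1 (1 hop) or 2 (2 hops)
neighbors = {0: [1, 2]}
backlog = {0: 5, 1: 4, 2: 1}
hops = {1: 1, 2: 2}
print(next_hop(0, Packet(emergency=True), backlog, hops, neighbors))   # -> 1
print(next_hop(0, Packet(emergency=False), backlog, hops, neighbors))  # -> 2
```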

200 citations


Journal ArticleDOI
TL;DR: A hybrid framework that combines the two technologies is proposed: cluster heads are equipped with solar panels to scavenge solar energy, while the rest of the nodes are powered by wireless charging. It can reduce battery depletion by 20 percent and save vehicles’ moving cost by 25 percent compared to previous works.
Abstract: The application of wireless charging technology in traditional battery-powered wireless sensor networks (WSNs) has grown rapidly in recent years. Although previous studies indicate that the technology can deliver energy reliably, it still faces regulatory mandates that make it hard to provide high power density without incurring health risks. In particular, in clustered WSNs there exists a mismatch between the high energy demands from cluster heads and the relatively low energy supplies from wireless chargers. Fortunately, solar energy harvesting can provide high power density without health risks. However, its reliability is subject to weather dynamics. In this paper, we propose a hybrid framework that combines the two technologies: cluster heads are equipped with solar panels to scavenge solar energy and the rest of the nodes are powered by wireless charging. We divide the network into three hierarchical levels. On the first level, we study a discrete placement problem of how to deploy solar-powered cluster heads that can minimize overall cost and propose a distributed $1.61(1+\epsilon)^2$-approximation algorithm for the placement. Then, we extend the discrete problem into continuous space and develop an iterative algorithm based on the Weiszfeld algorithm. On the second level, we establish an energy balance in the network and explore how to maintain such balance for wireless-powered nodes when sunlight is unavailable. We also propose a distributed cluster head re-selection algorithm. On the third level, we first consider the tour planning problem by combining wireless charging with mobile data gathering in a joint tour. We then propose a polynomial-time scheduling algorithm to find appropriate hitting points on sensors’ transmission boundaries for data gathering. For wireless charging, we give the mobile chargers more flexibility by allowing partial recharge when energy demands are high. The problem turns out to be a Linear Program. By exploiting its particular structure, we propose an efficient algorithm that can achieve near-optimal solutions. Our extensive simulation results demonstrate that the hybrid framework can reduce battery depletion by 20 percent and save vehicles’ moving cost by 25 percent compared to previous works. By allowing partial recharge, battery depletion can be further reduced at a slightly increased cost. The results also suggest that we can reduce the number of high-cost mobile chargers by deploying more low-cost solar-powered sensors.
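
The continuous-space step the abstract mentions builds on the classic Weiszfeld iteration for the geometric median, i.e., the location minimizing the sum of distances to demand points. The sketch below is a standard Weiszfeld implementation on toy data, not the paper's full placement algorithm.

```python
import numpy as np

def weiszfeld(points, tol=1e-9, max_iter=1000):
    """Geometric median of `points` (n x 2 array) via Weiszfeld iteration."""
    x = points.mean(axis=0)                         # start at the centroid
    for _ in range(max_iter):
        d = np.maximum(np.linalg.norm(points - x, axis=1), 1e-12)  # avoid /0
        w = 1.0 / d
        x_new = (points * w[:, None]).sum(axis=0) / w.sum()
        if np.linalg.norm(x_new - x) < tol:
            break
        x = x_new
    return x

# toy demand points: candidate cost-minimizing cluster-head location
demands = np.array([[0.0, 0.0], [4.0, 0.0], [2.0, 3.0]])
print(weiszfeld(demands))
```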

165 citations


Journal ArticleDOI
TL;DR: This paper develops a temporal-spatial charging scheduling algorithm, namely TSCA, for the on-demand charging architecture that can achieve promising performance in charging throughput, charging efficiency, and other performance metrics.
Abstract: The collaborative charging issue in Wireless Rechargeable Sensor Networks (WRSNs) is a popular research problem. With the help of wireless power transfer technology, electrical energy can be transferred from wireless charging vehicles (WCVs) to sensors, providing a new paradigm to prolong network lifetime. Existing techniques on collaborative charging usually take a periodical and deterministic approach, but neglect the influence of non-deterministic factors such as topological changes and node failures, making them unsuitable for large-scale WRSNs. In this paper, we develop a temporal-spatial charging scheduling algorithm, namely TSCA, for the on-demand charging architecture. We aim to minimize the number of dead nodes while maximizing energy efficiency to prolong network lifetime. First, after gathering charging requests, a WCV will compute a feasible movement solution. A basic path planning algorithm is then introduced to adjust the charging order for better efficiency. Furthermore, optimizations are made at the global level. Then, a node deletion algorithm is developed to remove low-efficiency charging nodes. Lastly, a node insertion algorithm is executed to avoid the death of abandoned nodes. Extensive simulations show that, compared with state-of-the-art charging scheduling algorithms, our scheme can achieve promising performance in charging throughput, charging efficiency, and other performance metrics.
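
To make the node-deletion idea concrete, here is a hedged toy version: repeatedly drop the tour stop whose detour cost per Joule delivered is worst. The geometry, the efficiency measure, and the threshold are illustrative assumptions; the paper's deletion/insertion algorithms are more involved.

```python
import math

def deletion_pass(tour, positions, demand_j, move_cost_per_m, min_eff):
    """Remove tour stops whose energy delivered per unit detour is too low."""
    kept, changed = list(tour), True
    while changed and len(kept) > 1:
        changed = False
        for i, nid in enumerate(kept):
            prev_p = positions[kept[i - 1]]
            next_p = positions[kept[(i + 1) % len(kept)]]
            detour = (math.dist(prev_p, positions[nid])
                      + math.dist(positions[nid], next_p)
                      - math.dist(prev_p, next_p))      # extra travel for nid
            eff = demand_j[nid] / (1e-9 + move_cost_per_m * detour)
            if eff < min_eff:
                kept.pop(i)                              # low-efficiency stop
                changed = True
                break
    return kept

pos = {1: (0, 0), 2: (5, 5), 3: (1, 0), 4: (0, 1)}
print(deletion_pass([1, 2, 3, 4], pos, {1: 50, 2: 5, 3: 60, 4: 55},
                    move_cost_per_m=1.0, min_eff=2.0))   # drops far node 2
```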

148 citations


Journal ArticleDOI
TL;DR: This article presents User-Level Online Offloading Framework (ULOOF), a lightweight and efficient framework for mobile computation offloading that can offload up to 73 percent of computations, and improve the execution time by 50 percent while at the same time significantly reducing the energy consumption of mobile devices.
Abstract: Mobile devices are equipped with limited processing power and battery charge. A mobile computation offloading framework is software that provides a better user experience in terms of computation time and energy consumption, also taking advantage of edge computing facilities. This article presents the User-Level Online Offloading Framework (ULOOF), a lightweight and efficient framework for mobile computation offloading. ULOOF is equipped with a decision engine that minimizes remote execution overhead, while not requiring any modification to the device’s operating system. By means of real experiments with Android systems and simulations using large-scale data from a major cellular network provider, we show that ULOOF can offload up to 73 percent of computations and improve the execution time by 50 percent, while at the same time significantly reducing the energy consumption of mobile devices.
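
The heart of any such decision engine is a cost comparison between local and remote execution. The sketch below is a toy rule under our own assumptions (a linear time/energy weighting and a simple transfer model); ULOOF's actual estimators are learned online per method.

```python
def should_offload(input_bytes, output_bytes, local_time_s, remote_time_s,
                   uplink_bps, downlink_bps, rtt_s,
                   local_energy_j, tx_power_w, w_time=0.5):
    """Return True when predicted remote cost beats local execution."""
    transfer_s = (input_bytes * 8 / uplink_bps
                  + output_bytes * 8 / downlink_bps + rtt_s)
    remote_total_s = remote_time_s + transfer_s
    remote_energy_j = tx_power_w * transfer_s   # device mostly idles otherwise
    # weighted cost: the engine trades off response time against energy
    local_cost = w_time * local_time_s + (1 - w_time) * local_energy_j
    remote_cost = w_time * remote_total_s + (1 - w_time) * remote_energy_j
    return remote_cost < local_cost

# a heavy method with small payloads is worth offloading on a decent link
print(should_offload(2e5, 1e4, local_time_s=4.0, remote_time_s=0.5,
                     uplink_bps=5e6, downlink_bps=20e6, rtt_s=0.05,
                     local_energy_j=6.0, tx_power_w=1.2))
```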

143 citations


Journal ArticleDOI
En Wang, Yongjian Yang, Jie Wu, Wenbin Liu, Xingbo Wang
TL;DR: An efficient prediction-based user-recruitment strategy for mobile crowdsensing is proposed; PURE achieves a lower recruitment payment than other strategies, and PURE-DF achieves the highest delivery efficiency.
Abstract: Mobile crowdsensing is a new paradigm in which a group of mobile users exploit their smart devices to cooperatively perform a large-scale sensing job. One of the users’ main concerns is the cost of data uploading, which affects their willingness to participate in a crowdsensing task. In this paper, we propose an efficient Prediction-based User Recruitment for mobile crowdsEnsing (PURE), which separates the users into two groups corresponding to different price plans: Pay as you go (PAYG) and Pay monthly (PAYM). By regarding the PAYM users as destinations, the cost-minimization problem reduces to recruiting the users that have the largest contact probability with a destination. We first propose a semi-Markov model to determine the probability distribution of user arrival time at points of interest (PoIs) and then derive the inter-user contact probability. Next, an efficient prediction-based user-recruitment strategy for mobile crowdsensing is proposed to minimize the data uploading cost. We then propose PURE-DF by extending PURE to a case in which we address the tradeoff between the delivery ratio of sensing data and the recruiter number according to Delegation Forwarding. We conduct extensive simulations based on three widely-used real-world traces: roma/taxi, epfl, and geolife. The results show that, compared with other recruitment strategies, PURE achieves a lower recruitment payment and PURE-DF achieves the highest delivery efficiency.
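
A small Monte Carlo sketch of the contact-probability idea: sample semi-Markov trajectories over PoIs (transition matrix plus random sojourn times) and estimate the chance two users co-locate within a horizon. Exponential sojourns and all parameters are assumptions for illustration; the paper derives the distribution analytically.

```python
import numpy as np

def simulate_poi(P, sojourn_mean, start, horizon, rng):
    """One semi-Markov trajectory: the PoI occupied in each time slot."""
    traj = np.empty(horizon, dtype=int)
    slot, state = 0, start
    while slot < horizon:
        stay = max(1, int(rng.exponential(sojourn_mean[state])))
        end = min(horizon, slot + stay)
        traj[slot:end] = state
        slot, state = end, rng.choice(len(P), p=P[state])
    return traj

def contact_probability(P, sojourn_mean, s1, s2, horizon=100, runs=2000, seed=0):
    """P(two users are at the same PoI in the same slot, within the horizon)."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(runs):
        a = simulate_poi(P, sojourn_mean, s1, horizon, rng)
        b = simulate_poi(P, sojourn_mean, s2, horizon, rng)
        hits += bool(np.any(a == b))
    return hits / runs

P = np.array([[0.1, 0.9], [0.8, 0.2]])   # 2 PoIs, assumed transition matrix
soj = np.array([5.0, 3.0])               # mean sojourn (slots) per PoI
print(contact_probability(P, soj, s1=0, s2=1))
```

Recruitment then amounts to picking the users whose estimated contact probability with a PAYM destination is largest.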

140 citations


Journal ArticleDOI
TL;DR: This paper proposes a novel multi-task allocation framework named MTasker, which adopts a descent greedy approach, where a quasi-optimal allocation plan is evolved by removing a set of task-worker pairs from the full set.
Abstract: Task allocation is a fundamental research issue in mobile crowd sensing. While earlier research focused mainly on single tasks, recent studies have started to investigate multi-task allocation, which considers the interdependency among multiple tasks. A common drawback shared by existing multi-task allocation approaches is that, although the overall utility of multiple tasks is optimized, the sensing quality of individual tasks may become poor as the number of tasks increases. To overcome this drawback, we re-define the multi-task allocation problem by introducing task-specific minimal sensing quality thresholds, with the objective of assigning an appropriate set of tasks to each worker such that the overall system utility is maximized. Our new problem also takes into account the maximum number of tasks allowed for each worker and the sensor availability of each mobile device. To solve this newly-defined problem, this paper proposes a novel multi-task allocation framework named MTasker. Different from previous approaches which start with an empty set and iteratively select task-worker pairs, MTasker adopts a descent greedy approach, where a quasi-optimal allocation plan is evolved by removing a set of task-worker pairs from the full set. Extensive evaluations based on real-world mobility traces show that MTasker outperforms the baseline methods under various settings, and our theoretical analysis proves that MTasker has a good approximation bound.
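
The descent-greedy direction is easy to sketch. Under assumptions of ours (an additive utility over task-worker pairs, and only a per-worker task cap as the constraint; the paper's sensing-quality thresholds and sensor-availability checks are omitted), start from the full pair set and repeatedly drop the pair whose removal loses the least utility:

```python
def descent_greedy(pairs, utility, max_tasks_per_worker):
    """Evolve a feasible plan by deleting task-worker pairs from the full set."""
    S = set(pairs)
    def overloaded_workers(S):
        load = {}
        for _, w in S:
            load[w] = load.get(w, 0) + 1
        return {w for w, c in load.items() if c > max_tasks_per_worker}
    while True:
        bad = overloaded_workers(S)
        if not bad:
            return S
        # drop the pair (of an overloaded worker) whose removal costs least
        victim = min((p for p in S if p[1] in bad),
                     key=lambda p: utility(S) - utility(S - {p}))
        S = S - {victim}

# toy instance: each pair carries an additive value
value = {("t1", "w1"): 3, ("t2", "w1"): 2, ("t1", "w2"): 1, ("t3", "w2"): 4}
print(descent_greedy(set(value), lambda S: sum(value[p] for p in S),
                     max_tasks_per_worker=1))   # {('t1','w1'), ('t3','w2')}
```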

139 citations


Journal ArticleDOI
TL;DR: This paper investigates NOMA downlink relay-transmission, formulates an optimal power allocation problem for the BS and relays to maximize the overall throughput delivered to the MU, and proposes a hybrid NOMA (HB-NOMA) relay that adaptively exploits the benefit of the NOMA relay and that of the interference-free TDMA relay.
Abstract: The emerging non-orthogonal multiple access (NOMA), which enables mobile users (MUs) to share the same frequency channel simultaneously, has been considered as a spectrum-efficient multiple access scheme to accommodate tremendous traffic growth in future cellular networks. In this paper, we investigate the NOMA downlink relay-transmission, in which the macro base station (BS) first uses NOMA to transmit to a group of relays, and all relays then use NOMA to transmit their respective received data to an MU. Specifically, we propose an optimal power allocation problem for the BS and relays to maximize the overall throughput delivered to the MU. Despite the non-convexity of the problem, we adopt the vertical decomposition and propose a layered algorithm to efficiently compute the optimal power allocation solution. Numerical results show that the proposed NOMA relay-transmission can increase the throughput up to 30 percent compared with the conventional time division multiple access (TDMA) scheme, and we find that increasing the relays’ power capacity can increase the throughput gain of the NOMA relay against the TDMA relay. Furthermore, to improve the throughput under weak channel power gains, we propose a hybrid NOMA (HB-NOMA) relay that adaptively exploits the benefit of the NOMA relay and that of the interference-free TDMA relay. By using the throughput provided by the HB-NOMA relay for each individual MU, we study the multi-MUs scenario and investigate the multi-MUs scheduling problem over a long-term period to maximize the overall utility of all MUs. Numerical results demonstrate the performance advantage of the proposed multi-MUs scheduling that adopts the HB-NOMA relay-transmission.
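
The throughput comparison rests on the standard two-user downlink NOMA rate expressions with successive interference cancellation (SIC). The worked sketch below uses toy channel gains of our choosing; it shows the mechanism, not the paper's optimized allocation.

```python
import numpy as np

def noma_rates(p_near, p_far, g_near, g_far, noise):
    """Two-user downlink NOMA with SIC at the near (strong) user: the far
    user treats the near user's signal as interference; the near user
    cancels the far user's signal before decoding its own."""
    r_far = np.log2(1 + p_far * g_far / (p_near * g_far + noise))
    r_near = np.log2(1 + p_near * g_near / noise)
    return r_near, r_far

def tdma_rates(p, g_near, g_far, noise, share=0.5):
    """Same total power, but users get orthogonal time shares."""
    return (share * np.log2(1 + p * g_near / noise),
            (1 - share) * np.log2(1 + p * g_far / noise))

# assumed gains: near user 10x stronger channel than the far user
rn, rf = noma_rates(p_near=0.2, p_far=0.8, g_near=10.0, g_far=1.0, noise=1.0)
tn, tf = tdma_rates(p=1.0, g_near=10.0, g_far=1.0, noise=1.0)
print(f"NOMA sum rate {rn + rf:.2f} vs TDMA {tn + tf:.2f} bits/s/Hz")
```

A hybrid scheme in the HB-NOMA spirit would simply evaluate both expressions for the current gains and pick whichever is larger.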

136 citations


Journal ArticleDOI
TL;DR: The aim is to quantify the potential of human activity recognition from kinetic energy harvesting (HARKE) and demonstrate that HARKE can save 79 percent of the overall system power consumption of conventional accelerometer-based HAR.
Abstract: Kinetic energy harvesting (KEH) may help combat battery issues in wearable devices. While the primary objective of KEH is to generate energy from human activities, the harvested energy itself contains information about human activities that most wearable devices try to detect using motion sensors. In principle, it is therefore possible to use KEH both as a power generator and a sensor for human activity recognition (HAR), saving sensor-related power consumption. Our aim is to quantify the potential of human activity recognition from kinetic energy harvesting (HARKE). We evaluate the performance of HARKE using two independent datasets: (i) a public accelerometer dataset converted into KEH data through theoretical modeling; and (ii) a real KEH dataset collected from volunteers performing activities of daily living while wearing a data-logger that we built from a piezoelectric energy harvester. Our results show that HARKE achieves an accuracy of 80 to 95 percent, depending on the dataset and the placement of the device on the human body. We conduct detailed power consumption measurements to understand and quantify the power saving opportunity of HARKE. The results demonstrate that HARKE can save 79 percent of the overall system power consumption of conventional accelerometer-based HAR.
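
A minimal recognition pipeline of the kind this abstract implies: window the harvester voltage, extract simple time-domain features, and train a classifier. The features, the synthetic stand-in data, and the choice of a random forest are all our assumptions; the paper's feature set and classifier may differ.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

def keh_features(v):
    """Simple time-domain features of one harvester-voltage window."""
    v = np.asarray(v, float)
    return [v.mean(), v.std(), v.max() - v.min(),
            np.abs(np.diff(v)).mean(),      # mean absolute delta
            np.mean(v ** 2)]                # proxy for harvested power

# synthetic stand-in: "walking" windows fluctuate more than "sitting" ones
windows = [rng.normal(0, 0.5 if i % 2 else 0.05, 128) for i in range(300)]
labels = ["walk" if i % 2 else "sit" for i in range(300)]

X = np.array([keh_features(w) for w in windows])
clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(X[:200], labels[:200])
print("held-out accuracy:", clf.score(X[200:], labels[200:]))
```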

134 citations


Journal ArticleDOI
TL;DR: AcMu, an automatic and continuous radio map self-updating service for wireless indoor localization that exploits the static behaviors of mobile devices, is proposed; it provides a 2$\times$ improvement in localization accuracy by maintaining an up-to-date radio map.
Abstract: The proliferation of mobile computing has prompted WiFi-based indoor localization to be one of the most attractive and promising techniques for ubiquitous applications. A primary concern for these technologies to be fully practical is to combat harsh indoor environmental dynamics, especially for long-term deployment. Despite numerous studies on WiFi fingerprint-based localization, the problem of radio map adaptation has not been sufficiently studied and remains open. In this work, we propose AcMu, an automatic and continuous radio map self-updating service for wireless indoor localization that exploits the static behaviors of mobile devices. By accurately pinpointing mobile devices with a novel trajectory matching algorithm, we employ them as mobile reference points to collect real-time RSS samples when they are static. With these fresh reference data, we adapt the complete radio map by learning an underlying relationship of RSS dependency between different locations, which is expected to be relatively constant over time. Extensive experiments for 20 days across six months demonstrate that AcMu effectively accommodates RSS variations over time and derives accurate prediction of fresh radio maps with average errors of less than 5 dB, outperforming existing approaches. Moreover, AcMu provides a 2$\times$ improvement in localization accuracy by maintaining an up-to-date radio map.
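
The "RSS dependency between locations" idea can be sketched with a per-location linear model: learn how each fingerprint location's RSS co-varies with a reference point from historical data, then refresh the whole map from one fresh reference reading. The linear form and all data here are illustrative assumptions.

```python
import numpy as np

def fit_rss_dependency(hist_ref, hist_loc):
    """Fit rss_loc ~ a * rss_ref + b from historical co-measurements."""
    a, b = np.polyfit(hist_ref, hist_loc, deg=1)
    return a, b

def update_radio_map(fresh_ref_rss, models):
    """Predict fresh RSS at every fingerprint location from one static
    device's real-time reading at the reference point."""
    return {loc: a * fresh_ref_rss + b for loc, (a, b) in models.items()}

# toy history: each location drifts together with the reference point
rng = np.random.default_rng(1)
ref_hist = rng.uniform(-70, -40, 50)
models = {loc: fit_rss_dependency(ref_hist,
                                  0.9 * ref_hist - 5 + rng.normal(0, 1, 50))
          for loc in ["A", "B", "C"]}
print(update_radio_map(-55.0, models))
```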

122 citations


Journal ArticleDOI
TL;DR: This paper introduces the three planes of SERvICE, a Software dEfined fRamework for Integrated spaCe-tErrestrial satellite Communication, based on Software Defined Network (SDN) and Network Function Virtualization (NFV), and proposes two heuristic algorithms, namely the QoS-oriented Satellite Routing (QSR) algorithm and the QoS-oriented Bandwidth Allocation (QBA) algorithm, to guarantee the QoS requirements of multiple users.
Abstract: The existing satellite communication systems suffer from traditional design, such as slow configuration, inflexible traffic engineering, and coarse-grained Quality of Service (QoS) guarantees. To address these issues, in this paper, we propose SERvICE, a Software dEfined fRamework for Integrated spaCe-tErrestrial satellite Communication, based on Software Defined Network (SDN) and Network Function Virtualization (NFV). We first introduce the three planes of SERvICE: the Management Plane, the Control Plane, and the Forwarding Plane. The framework is designed to achieve flexible satellite network traffic engineering and fine-grained QoS guarantees. We analyze the agility of the space component of SERvICE. Then, we give a description of the implementation of the prototype with the help of the Delay Tolerant Network (DTN) and OpenFlow. We conduct two experiments to validate the feasibility of SERvICE and the functionality of the prototype. In addition, we propose two heuristic algorithms, namely the QoS-oriented Satellite Routing (QSR) algorithm and the QoS-oriented Bandwidth Allocation (QBA) algorithm, to guarantee the QoS requirements of multiple users. The algorithms are also evaluated in the prototype. The experimental results show the efficiency of the proposed algorithms in terms of file transmission delay and transmission rate.
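
One plausible shape for a QSR-like routing step, sketched under our own assumptions: prune inter-satellite links that cannot meet the flow's bandwidth demand, then run shortest-delay Dijkstra over what remains. The graph layout and weights are hypothetical.

```python
import heapq

def qos_route(graph, src, dst, min_bw):
    """graph: {u: [(v, delay, bw), ...]}; returns a min-delay path or None."""
    dist, prev, pq = {src: 0.0}, {}, [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, delay, bw in graph.get(u, []):
            if bw < min_bw:
                continue                       # QoS pruning of thin links
            nd = d + delay
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    if dst not in dist:
        return None
    path = [dst]
    while path[-1] != src:
        path.append(prev[path[-1]])
    return path[::-1]

g = {"gs": [("sat1", 10.0, 50), ("sat2", 8.0, 5)],
     "sat1": [("dst", 12.0, 40)], "sat2": [("dst", 5.0, 100)]}
print(qos_route(g, "gs", "dst", min_bw=20))    # low-bandwidth sat2 link pruned
```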

Journal ArticleDOI
TL;DR: A Greedy and Adaptive AUV Path-finding (GAAP) heuristic that drives the AUV to collect data from nodes depending on the VoI of their data, which shows that GAAP always outperforms every other heuristic in terms of delivered VoI, also obtaining higher energy efficiency.
Abstract: We consider underwater multi-modal wireless sensor networks (UWSNs) suitable for applications on submarine surveillance and monitoring, where nodes offload data to a mobile autonomous underwater vehicle (AUV) via optical technology, and coordinate using acoustic communication. Sensed data are associated with a value, decaying in time. In this scenario, we address the problem of finding the path of the AUV so that the Value of Information (VoI) of the data delivered to a sink on the surface is maximized. We define a Greedy and Adaptive AUV Path-finding (GAAP) heuristic that drives the AUV to collect data from nodes depending on the VoI of their data. For benchmarking the performance of AUV path-finding heuristics, we define an integer linear programming (ILP) formulation that accurately models the considered scenario, deriving a path that drives the AUV to collect and deliver data with the maximum VoI. In our experiments GAAP consistently delivers more than 80 percent of the theoretical maximum VoI determined by the ILP model. We also compare the performance of GAAP with that of other strategies for driving the AUV among sensing nodes, namely, random paths, TSP-based paths and a “lawn mower”-like strategy. Our results show that GAAP always outperforms every other heuristic in terms of delivered VoI, also obtaining higher energy efficiency.
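
A toy version of a greedy, VoI-driven tour: always visit next the node whose data would retain the most value on arrival. Linear decay, unit speed, and the data layout are our illustrative assumptions, not GAAP's exact adaptivity rules.

```python
import math

def greedy_voi_tour(nodes, start, speed=1.0, decay=0.05):
    """nodes: {id: (x, y, voi_at_t0)}; returns visit order and VoI collected."""
    pos, t, remaining, order, total = start, 0.0, dict(nodes), [], 0.0
    while remaining:
        def voi_on_arrival(nid):
            x, y, v0 = remaining[nid]
            travel = math.dist(pos, (x, y)) / speed
            return max(0.0, v0 - decay * (t + travel)), travel
        nid = max(remaining, key=lambda n: voi_on_arrival(n)[0])
        gain, travel = voi_on_arrival(nid)
        if gain <= 0:
            break                       # nothing of value left to collect
        x, y, _ = remaining.pop(nid)
        pos, t = (x, y), t + travel
        order.append(nid)
        total += gain
    return order, total

nodes = {"a": (0, 1, 10.0), "b": (5, 5, 8.0), "c": (1, 0, 3.0)}
print(greedy_voi_tour(nodes, start=(0, 0)))
```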

Journal ArticleDOI
TL;DR: Effective heuristic methods, including multi-round linear weight optimization and enhanced multi-objective particle swarm optimization algorithms, are proposed to achieve adequate Pareto-optimal allocation in heterogeneous spatial crowdsourcing.
Abstract: With the rapid development of mobile networks and the proliferation of mobile devices, spatial crowdsourcing, which refers to recruiting mobile workers to perform location-based tasks, has gained emerging interest from both research communities and industries. In this paper, we consider a spatial crowdsourcing scenario: in addition to specific spatial constraints, each task has a valid duration, operation complexity, budget limitation, and the number of required workers. Each volunteer worker completes assigned tasks while conducting his/her routine tasks. The system has a desired task probability coverage and budget constraint. Under this scenario, we investigate an important problem, namely heterogeneous spatial crowdsourcing task allocation (HSC-TA), which strives to search a set of representative Pareto-optimal allocation solutions for the multi-objective optimization problem, such that the assigned task coverage is maximized and incentive cost is minimized simultaneously. To accommodate the multi-constraints in heterogeneous spatial crowdsourcing, we build a worker mobility behavior prediction model to align with allocation process. We prove that the HSC-TA problem is NP-hard. We propose effective heuristic methods, including multi-round linear weight optimization and enhanced multi-objective particle swarm optimization algorithms to achieve adequate Pareto-optimal allocation. Comprehensive experiments on both real-world and synthetic data sets clearly validate the effectiveness and efficiency of our proposed approaches.
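
Since the HSC-TA output is a set of representative Pareto-optimal plans, the underlying filter is worth spelling out. A minimal non-dominance check over hypothetical (coverage, cost) pairs, where coverage is maximized and cost minimized:

```python
def pareto_front(plans):
    """plans: list of (coverage, cost); keep only non-dominated plans."""
    def dominates(a, b):
        # a is at least as good in both objectives and differs from b
        return a[0] >= b[0] and a[1] <= b[1] and a != b
    return [p for p in plans if not any(dominates(q, p) for q in plans)]

plans = [(0.9, 70), (0.8, 40), (0.7, 45), (0.95, 90), (0.6, 20)]
print(pareto_front(plans))   # (0.7, 45) drops: dominated by (0.8, 40)
```

The paper's algorithms search this front rather than enumerate it, but any candidate set they produce is reduced by exactly this kind of dominance test.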

Journal ArticleDOI
TL;DR: This paper proposes paying participants according to how well they do, to motivate rational participants to perform crowdsensing tasks efficiently, via a mechanism that estimates the quality of sensing data and offers each participant a reward based on her effective contribution.
Abstract: In crowdsensing, appropriate rewards are always expected to compensate the participants for their consumption of physical resources and involvement of manual effort. While continuous low-quality sensing data could harm the availability and preciseness of crowdsensing-based services, few existing incentive mechanisms have addressed the issue of data quality. The design of a quality-based incentive mechanism is motivated by its potential to avoid inefficient sensing and unnecessary rewards. In this paper, we incorporate the consideration of data quality into the design of an incentive mechanism for crowdsensing, and propose to pay the participants according to how well they do, to motivate the rational participants to efficiently perform crowdsensing tasks. This mechanism estimates the quality of sensing data, and offers each participant a reward based on her effective contribution. We also implement the mechanism and evaluate its improvement in terms of quality of service and profit of the service provider. The evaluation results show that our mechanism achieves superior performance when compared to a general data collection model and a uniform pricing scheme.
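
A toy quality-aware payment rule of the kind the abstract describes: estimate ground truth robustly, score each participant by closeness to it, and split the budget in proportion to effective contribution. The median-based scoring is our stand-in, not the paper's estimator.

```python
import numpy as np

def quality_rewards(readings, budget):
    truth = np.median(readings)                        # robust aggregate
    quality = 1.0 / (1.0 + np.abs(readings - truth))   # closer => higher quality
    return budget * quality / quality.sum()            # proportional payment

# the outlier at 35.0 earns far less of the 10-unit budget
print(quality_rewards(np.array([20.1, 20.3, 19.9, 35.0]), budget=10.0))
```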

Journal ArticleDOI
TL;DR: This paper develops a machine learning based prediction framework, LinkForecast, that identifies the most important features and uses these features to predict link bandwidth in real time and investigates the prediction performance when using lower-layer features obtained through standard APIs provided by the operating system, instead of specialized tools.
Abstract: Accurate cellular link bandwidth prediction can benefit upper-layer protocols significantly. In this paper, we investigate how to predict cellular link bandwidth in LTE networks. We first conduct an extensive measurement study in two major commercial LTE networks in the US, and identify five types of lower-layer information that are correlated with cellular link bandwidth. We then develop a machine learning based prediction framework, LinkForecast , that identifies the most important features (from both upper and lower layers) and uses these features to predict link bandwidth in real time. Our evaluation shows that LinkForecast is lightweight and the prediction is highly accurate: At the time granularity of one second, the average prediction error is in the range of 3.9 to 17.0 percent for all the scenarios we explore. We further investigate the prediction performance when using lower-layer features obtained through standard APIs provided by the operating system, instead of specialized tools. Our results show that, while the features thus obtained have lower fidelity compared to those from specialized tools, they lead to similar prediction accuracy, indicating that our approach can be easily used over commercial off-the-shelf mobile devices.
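
A minimal stand-in for this kind of learned predictor: a random forest mapping lower-layer indicators to next-interval bandwidth. The feature list and the synthetic data below are placeholders; the paper identifies its own feature set from real LTE measurements.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 2000
X = np.column_stack([
    rng.uniform(-120, -70, n),   # RSRP (dBm)
    rng.uniform(-20, -3, n),     # RSRQ (dB)
    rng.uniform(0, 30, n),       # SNR (dB)
    rng.integers(0, 29, n),      # CQI-like index
])
# synthetic ground truth: bandwidth driven mostly by SNR and CQI
bw = 5.0 + 2.0 * X[:, 2] + 0.5 * X[:, 3] + rng.normal(0, 2, n)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X[:1500], bw[:1500])
err = np.abs(model.predict(X[1500:]) - bw[1500:]) / np.abs(bw[1500:])
print(f"mean relative error: {err.mean():.1%}")
```

Feature importances from such a model (`model.feature_importances_`) are one way to do the "most important features" selection the abstract mentions.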

Journal ArticleDOI
TL;DR: This work proposes a truthful, reverse-auction-based incentive mechanism that includes an approximation algorithm to select winning bids with a nearly minimum social cost and a payment algorithm to determine payments for all participants, and extends the problem to a more complex case in which the Quality of sensing Data of each vehicle is taken into consideration.
Abstract: In this paper, we focus on the incentive mechanism design for a vehicle-based, nondeterministic crowdsensing system. In this crowdsensing system, vehicles move along their trajectories and perform corresponding sensing tasks with different probabilities. Each task may be performed by multiple vehicles jointly so as to ensure a high probability of success. Designing an incentive mechanism for such a crowdsensing system is challenging since it contains a non-trivial set cover problem. To solve this problem, we propose a truthful, reverse-auction-based incentive mechanism that includes an approximation algorithm to select winning bids with a nearly minimum social cost and a payment algorithm to determine payments for all participants. Moreover, we extend the problem to a more complex case in which the Quality of sensing Data (QoD) of each vehicle is taken into consideration. For this problem, we propose a QoD-aware incentive mechanism, which consists of a QoD-aware winning-bid selection algorithm and a QoD-aware payment determination algorithm. We prove that the proposed incentive mechanisms have truthfulness, individual rationality, and computational efficiency. Moreover, we analyze the approximation ratios of the winning-bid selection algorithms. The simulations, based on a real vehicle trace, also demonstrate the significant performances of our incentive mechanisms.
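
The winning-bid selection sits on a set-cover core, so a generic cost-effectiveness greedy conveys the idea: repeatedly pick the bid with the lowest cost per newly covered task. This sketch simplifies away the probabilistic coverage and omits the truthful (critical-value style) payment rule, both of which are the paper's actual contributions.

```python
def select_winners(bids, tasks):
    """bids: {vehicle: (cost, covered_task_set)}; returns ordered winner list."""
    uncovered, winners = set(tasks), []
    while uncovered:
        v, (cost, covered) = min(
            ((v, b) for v, b in bids.items()
             if v not in winners and b[1] & uncovered),
            key=lambda item: item[1][0] / len(item[1][1] & uncovered),
            default=(None, (0.0, set())))
        if v is None:
            break                       # remaining tasks cannot be covered
        winners.append(v)
        uncovered -= covered
    return winners

bids = {"v1": (4.0, {"t1", "t2"}), "v2": (1.0, {"t2"}), "v3": (2.5, {"t1", "t3"})}
print(select_winners(bids, tasks={"t1", "t2", "t3"}))   # ['v2', 'v3']
```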

Journal ArticleDOI
TL;DR: Comparing the transceiver’s performance with independent results from simulations and experiments, the authors underline its potential as a tool for further studies of IEEE 802.11p networks, both in field operational tests and in simulation-based development of novel physical layer solutions.
Abstract: We present a complete simulation and experimentation framework for IEEE 802.11p. The core of the framework is a Software Defined Radio (SDR)-based Orthogonal Frequency Division Multiplexing (OFDM) transceiver that we validated extensively by means of simulations, interoperability tests, and, ultimately, by conducting a field test. Being SDR-based, the transceiver offers important benefits: It provides access to all data down to and including the physical layer, allowing for a better understanding of the system. Based on open and programmable hardware and software, the transceiver is completely transparent and all implementation details can be studied and, if needed, modified. Finally, it enables a seamless switch between simulations and experiments and, thus, helps to bridge the gap between theory and practice. Comparing the transceiver’s performance with independent results from simulations and experiments, we underline its potential to be used as a tool for further studies of IEEE 802.11p networks both in field operational tests as well as for simulation-based development of novel physical layer solutions. To make the framework accessible to fellow researchers and to allow reproduction of the results, we released it under an Open Source license.

Journal ArticleDOI
TL;DR: This work focuses on state-of-the-art, stateless geographic packet routing protocols conceived or adapted for three-dimensional network scenarios, evaluating them over a common scenario through a comprehensive comparative analysis.
Abstract: Scalable routing for wireless communication systems is a compelling and challenging task. To this aim, routing algorithms exploiting geographic information have been proposed. These algorithms refer to nodes by their location, rather than their address, and use those coordinates to route greedily towards a destination. With the advent of unmanned airborne vehicle (UAV) technology, a lot of research effort has been devoted to extend position-based packet routing proposals to three dimensional environments. In this context, Flying Ad-hoc Networks (FANETs), comprised of autonomous flying vehicles, pose several issues. This work focuses on the state-of-the-art, stateless geographic packet routing protocols conceived or adapted for three-dimensional network scenarios. Proposals are evaluated over a common scenario through a comprehensive comparative analysis.

Journal ArticleDOI
Huaming Wu
TL;DR: Two types of delayed offloading policies are investigated: the partial offloading model, where jobs can leave the slow phase of the offloading process and be executed locally on the mobile device, and the full offloading model, where jobs can abandon the WiFi Queue and be offloaded via the Cellular Queue; both minimize the Energy-Response time Weighted Product (ERWP) metric.
Abstract: Mobile cloud offloading that migrates heavy computation from mobile devices to powerful cloud servers through communication networks can alleviate the hardware limitations of mobile devices, thus providing higher performance and saving energy. Different applications usually give different relative importance to response time and energy consumption. If a delay-tolerant job is deferred up to a given deadline, or until a fast and energy-efficient network becomes available, the transmission time will be extended, which can save energy because a more energy-efficient communication channel and a less energy-restricted computation platform may become available. However, if the reduced service time fails to cover the extra waiting time, this policy may not be competitive. In this paper, we investigate two types of delayed offloading policies: the partial offloading model, where jobs can leave the slow phase of the offloading process and be executed locally on the mobile device, and the full offloading model, where jobs can abandon the WiFi Queue and be offloaded via the Cellular Queue. In both models, we minimize the Energy-Response time Weighted Product (ERWP) metric. Not surprisingly, we find that jobs abandon the queue often when the availability of the WiFi network is low. In general, for delay-sensitive applications the partial offloading model is preferred under a suitable reneging rate, while for delay-tolerant applications the full offloading model shows very good results and outperforms the other offloading model when selecting a large deadline. From the perspective of energy consumption, the full offloading model will always be best, even if the deadline must be extremely long. Only if job response time is of high importance can an optimal deadline be found for aborting offloading in the partial offloading model, or the WiFi transmission in the full offloading model. For reducing energy consumption, it is always better to wait longer rather than compute locally or use the cellular network.
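
One natural reading of the metric is a weighted product ERWP = E^w * T^(1-w), where w expresses how much the application cares about energy versus delay; the toy numbers below are assumptions, but they show how the preferred policy flips with w, as the abstract describes.

```python
def erwp(energy_j, response_s, w):
    """Energy-Response time Weighted Product (assumed product form)."""
    return energy_j ** w * response_s ** (1 - w)

# hypothetical (energy J, response s) per policy
policies = {"local": (8.0, 2.0), "wifi-deferred": (3.0, 5.0), "cellular": (6.0, 2.5)}
for w in (0.2, 0.5, 0.8):
    best = min(policies, key=lambda p: erwp(*policies[p], w))
    print(f"w={w}: best policy is {best}")
```

For small w (delay-sensitive) the fast local/cellular options win; as w grows (energy-sensitive), deferring to WiFi becomes preferable, matching the abstract's conclusion that waiting longer saves energy.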

Journal ArticleDOI
TL;DR: A reinforcement learning based handoff policy named SMART is proposed to reduce the number of handoffs while maintaining user Quality of Service (QoS) requirements in mmWave HetNets, together with reinforcement-learning based BS selection algorithms for different UE densities.
Abstract: The millimeter wave (mmWave) radio band is promising for the next-generation heterogeneous cellular networks (HetNets) due to its large bandwidth available for meeting the increasing demand of mobile traffic. However, the unique propagation characteristics at mmWave band cause huge redundant handoffs in mmWave HetNets that brings heavy signaling overhead, low energy efficiency and increased user equipment (UE) outage probability if conventional Reference Signal Received Power (RSRP) based handoff mechanism is used. In this paper, we propose a reinforcement learning based handoff policy named SMART to reduce the number of handoffs while maintaining user Quality of Service (QoS) requirements in mmWave HetNets. In SMART, we determine handoff trigger conditions by taking into account both mmWave channel characteristics and QoS requirements of UEs. Furthermore, we propose reinforcement-learning based BS selection algorithms for different UE densities. Numerical results show that in typical scenarios, SMART can significantly reduce the number of handoffs when compared with traditional handoff policies without learning.
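
A bare-bones Q-learning skeleton for BS selection in this spirit: states index a coarse channel condition, actions are candidate BSs, and the reward trades achieved rate against a handoff penalty. The state encoding, reward, and environment dynamics are all toy assumptions, not SMART's design.

```python
import numpy as np

rng = np.random.default_rng(0)
N_STATES, N_BS = 8, 4
Q = np.zeros((N_STATES, N_BS))
alpha, gamma, eps = 0.1, 0.9, 0.1       # learning rate, discount, exploration

def reward(bs, prev_bs, handoff_cost=2.0):
    rate = rng.normal(5 + bs, 1.0)      # stand-in for the achieved rate
    return rate - (handoff_cost if bs != prev_bs else 0.0)

prev_bs, s = 0, 0
for _ in range(5000):
    a = rng.integers(N_BS) if rng.random() < eps else int(Q[s].argmax())
    r = reward(a, prev_bs)
    s_next = int(rng.integers(N_STATES))        # toy environment transition
    # standard Q-learning update
    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
    prev_bs, s = a, s_next

print(Q.round(2))   # the learned policy avoids handoffs unless the rate gain pays
```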

Journal ArticleDOI
TL;DR: In this paper, an extension of RPL, called BRPL, is proposed, which allows users to smoothly combine any RPL Objective Function (OF) with backpressure routing.
Abstract: RPL, an IPv6 routing protocol for Low power Lossy Networks (LLNs), is considered to be the de facto routing standard for the Internet of Things (IoT). However, more and more experimental results demonstrate that RPL performs poorly when it comes to throughput and adaptability to network dynamics. This significantly limits the application of RPL in many practical IoT scenarios, such as an LLN with high-speed sensor data streams and mobile sensing devices. To address this issue, we develop BRPL, an extension of RPL, providing a practical approach that allows users to smoothly combine any RPL Objective Function (OF) with backpressure routing. BRPL uses two novel algorithms, QuickTheta and QuickBeta, to support time-varying data traffic loads and node mobility respectively. We implement BRPL on Contiki OS, an open-source operating system for the Internet of Things. We conduct an extensive evaluation using both real-world experiments based on the FIT IoT-LAB testbed and large-scale simulations using Cooja over 18 virtual servers on the Cloud. The evaluation results demonstrate that BRPL is not only fully backward compatible with RPL (i.e., devices running RPL and BRPL can work together seamlessly), but also significantly improves network throughput and adaptability to changes in network topologies and data traffic loads. The observed packet loss reduction in mobile networks is at least 60 percent and can reach 1,000 percent in extreme cases.
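
The blending idea can be sketched as a parent-selection weight that mixes the OF's rank progress with a backpressure term. In this sketch theta is a plain parameter; adapting it online is precisely what the paper's QuickTheta/QuickBeta algorithms do, and they are not reproduced here.

```python
def brpl_parent(node, neighbors, rank, backlog, theta):
    """theta in [0,1]: 1 -> pure RPL (OF rank), 0 -> pure backpressure."""
    def weight(n):
        progress = rank[node] - rank[n]          # OF improvement toward root
        pressure = backlog[node] - backlog[n]    # queue differential
        return theta * progress + (1 - theta) * pressure
    best = max(neighbors[node], key=weight)
    return best if weight(best) > 0 else None    # hold if no useful parent

neighbors = {"n": ["p1", "p2"]}
rank = {"n": 512, "p1": 256, "p2": 384}
backlog = {"n": 12, "p1": 10, "p2": 2}
print(brpl_parent("n", neighbors, rank, backlog, theta=0.9))    # rank-driven: p1
print(brpl_parent("n", neighbors, rank, backlog, theta=0.001))  # load-driven: p2
```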

Journal ArticleDOI
TL;DR: This paper applies the Simultaneous Wireless Information and Power Transfer technique to an MWSN where the energy harvested by relay nodes can compensate for their energy consumption on data forwarding, and designs a resource allocation (ResAll) algorithm that considers the different power splitting abilities of relays.
Abstract: In mobile wireless sensor networks (MWSNs), scavenging energy from ambient radio frequency (RF) signals is a promising solution to prolonging the lifetime of energy-constrained relay nodes. In this paper, we apply the Simultaneous Wireless Information and Power Transfer (SWIPT) technique to an MWSN where the energy harvested by relay nodes can compensate for their energy consumption on data forwarding. In such a network, how to maximize system energy efficiency (bits/Joule delivered to relays) by trading off energy harvesting and data forwarding is a critical issue. To this end, we design a resource allocation (ResAll) algorithm by considering different power splitting abilities of relays under two scenarios. In the first scenario, the power received by relays is split into a continuous set of power streams with arbitrary power splitting ratios. In the second scenario, the received power is only split into a discrete set of power streams with fixed power splitting ratios. For each scenario above, we formulate the ResAll problem in an MWSN with SWIPT as a non-convex energy efficiency maximization problem. By exploiting fractional programming and dual decomposition, we further propose a cross-layer ResAll algorithm consisting of subalgorithms for rate control, power allocation, and power splitting to solve the problem efficiently and optimally. Simulation results reveal that the proposed ResAll algorithm converges within a small number of iterations, and achieves optimal system energy efficiency by balancing energy efficiency, data rate, transmit power, and power splitting ratio.
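
The power-splitting trade-off at one relay is easy to visualize with a one-dimensional search: a fraction rho of the received power is harvested, the rest feeds the information receiver. This grid search over rho is a toy stand-in for the paper's fractional-programming solution; the conversion efficiency and circuit power are assumed numbers. The same helper also covers the discrete scenario by passing a small set of fixed ratios.

```python
import numpy as np

def energy_eff(rho, p_rx_w, bandwidth_hz, noise_w, circuit_w):
    """bits per Joule at one relay for splitting ratio rho."""
    rate = bandwidth_hz * np.log2(1 + (1 - rho) * p_rx_w / noise_w)
    harvested = 0.6 * rho * p_rx_w              # 0.6: assumed RF conversion eff.
    consumed = max(circuit_w - harvested, 1e-9) # net power drawn from battery
    return rate / consumed

def best_split(p_rx_w, bandwidth_hz, noise_w, circuit_w, ratios):
    return max(ratios, key=lambda rho: energy_eff(rho, p_rx_w, bandwidth_hz,
                                                  noise_w, circuit_w))

cont = best_split(1e-3, 1e6, 1e-9, 5e-4, np.linspace(0.01, 0.99, 99))
disc = best_split(1e-3, 1e6, 1e-9, 5e-4, [0.25, 0.5, 0.75])  # fixed ratios
print(f"continuous rho* = {cont:.2f}, discrete rho* = {disc}")
```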

Journal ArticleDOI
TL;DR: It is shown that, for the same cellular traffic level, as the number of sensor nodes in the network increases, the IoT network utilization increases, resulting in a multi-user gain thanks to the broadcast nature of the energy transfer.
Abstract: This paper proposes an energy and spectrum efficient IoT network for 5G systems where spectrum is shared with the cellular system for spectrum efficiency, and energy harvesting and energy transfer are utilized for energy efficiency. The IoT network, which consists of sensor nodes and a cluster head with a reliable energy source, reuses part of the cellular band whenever the cellular network does not utilize it. The cluster head performs spectrum sensing, random scheduling of the sensor nodes, and schedules some idle time for energy transfer. The sensor nodes harvest RF energy from the cellular traffic and the transferred energy from the cluster head. Provided the sensor nodes have sufficient energy, they transmit collected sensory data when scheduled. The interplay between the cellular and IoT network introduces trade-offs between spectrum availability, energy availability, information, and energy transfer. This paper shows that for the same cellular traffic level, as the number of sensor nodes in the network increases, the IoT network utilization increases, resulting in a multi-user gain thanks to the broadcast nature of the energy transfer. The results offer insights into different operational regimes and expose what types of IoT applications may be feasible with such networks.

Journal ArticleDOI
TL;DR: This paper designs two frameworks for privacy-preserving auction-based incentive mechanisms that also achieve approximate social cost minimization, and demonstrates that both frameworks achieve bid-privacy preservation while sacrificing some social cost.
Abstract: With the rapid growth of smartphones, mobile crowdsensing emerges as a new paradigm which takes advantage of the pervasive sensor-embedded smartphones to collect data efficiently. Many auction-based incentive mechanisms have been proposed to stimulate smartphone users to participate in the mobile crowdsensing applications and systems. However, none of them has taken into consideration both the bid privacy of smartphone users and the social cost. In this paper, we design two frameworks for privacy-preserving auction-based incentive mechanisms that also achieve approximate social cost minimization. In the former, each user submits a bid for a set of tasks it is willing to perform; in the latter, each user submits a bid for each task in its task set. Both frameworks select users based on platform-defined score functions. As examples, we propose two score functions, linear and log functions, to realize the two frameworks. We rigorously prove that both proposed frameworks achieve computational efficiency, individual rationality, truthfulness, differential privacy, and approximate social cost minimization. In addition, with log score function, the two frameworks are asymptotically optimal in terms of the social cost. Extensive simulations evaluate the performance of the two frameworks and demonstrate that our frameworks achieve bid-privacy preservation although sacrificing social cost.
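
Differentially private selection from score functions is commonly built on the exponential mechanism, so a minimal sketch of that tool conveys the flavor: a user is chosen with probability that rises with the platform-defined score of its bid, and no single bid change shifts the outcome distribution by much. The linear score and epsilon values are illustrative; the papers' frameworks add winner-set construction and payments on top.

```python
import numpy as np

def private_select(bids, eps, sensitivity=1.0, seed=0):
    """Exponential mechanism: P(i) proportional to exp(eps * score_i / (2*sens))."""
    rng = np.random.default_rng(seed)
    score = -np.asarray(bids, dtype=float)   # linear score: cheaper is better
    logits = eps * score / (2 * sensitivity)
    p = np.exp(logits - logits.max())        # subtract max for stability
    p /= p.sum()
    return int(rng.choice(len(bids), p=p))

# small eps: near-uniform (strong privacy); large eps: near-greedy
print([private_select([3.0, 1.0, 2.5], eps=e) for e in (0.1, 1.0, 10.0)])
```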

Journal ArticleDOI
TL;DR: This paper proposes efficient algorithms for the use of a mobile charger to wirelessly charge sensors in a rechargeable sensor network so that the sum of sensor lifetimes is maximized while the travel distance of the mobile charger is minimized.
Abstract: Wireless energy transfer technology based on magnetic resonant coupling has emerged as a promising technology for wireless sensor networks, by providing controllable yet continual energy to sensors. In this paper, we study the use of a mobile charger to wirelessly charge sensors in a rechargeable sensor network so that the sum of sensor lifetimes is maximized while the travel distance of the mobile charger is minimized. Unlike existing studies that assumed a mobile charger must charge a sensor to its full energy capacity before moving to charge the next sensor, we here assume that each sensor can be partially charged so that more sensors can be charged before their energy depletes. Under this new energy charging model, we first formulate two novel optimization problems of scheduling a mobile charger to charge a set of sensors, with the objectives to maximize the sum of sensor lifetimes and to minimize the travel distance of the mobile charger while achieving the maximum sum of sensor lifetimes, respectively. We then propose efficient algorithms for the problems. We finally evaluate the performance of the proposed algorithms through experimental simulations. Simulation results demonstrate that the proposed algorithms are very promising. In particular, the average energy expiration duration per sensor by the proposed algorithm for maximizing the sum of sensor lifetimes is only 9 percent of that by the state-of-the-art algorithm, while the travel distance of the mobile charger by the second proposed algorithm is only about 1 to 15 percent longer than that by the state-of-the-art benchmark.
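
The benefit of partial charging can be shown with a tiny heuristic, under assumptions of ours (a fixed energy budget and a target lifetime): top up the most urgent sensors only to the target, so the budget reaches more sensors before any depletes. The paper's algorithms optimize this jointly with travel distance.

```python
def partial_charging_plan(residual_s, target_s, charger_budget_j, joules_per_s):
    """residual_s: {sensor: seconds of lifetime left}; returns Joules per sensor."""
    plan, budget = {}, charger_budget_j
    for sid in sorted(residual_s, key=residual_s.get):   # most urgent first
        need_j = max(0.0, (target_s - residual_s[sid]) * joules_per_s)
        give = min(need_j, budget)
        if give > 0:
            plan[sid] = give                             # partial recharge only
            budget -= give
        if budget <= 0:
            break
    return plan

# budget runs out, but both urgent sensors receive something before depletion
print(partial_charging_plan({"a": 30, "b": 300, "c": 90}, target_s=200,
                            charger_budget_j=50.0, joules_per_s=0.2))
```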

Journal ArticleDOI
TL;DR: LiFS, a low human-effort, device-free localization system with fine-grained subcarrier information, can localize a target accurately without offline training, outperforming state-of-the-art systems.
Abstract: Device-free localization of objects not equipped with RF radios is playing a critical role in many applications. This paper presents LiFS, a low human-effort, device-free localization system with fine-grained subcarrier information, which can localize a target accurately without offline training. The basic idea is simple: channel state information (CSI) is sensitive to a target’s location and thus the target can be localized by modelling the CSI measurements of multiple wireless links. However, due to rich multipath indoors, CSI cannot be easily modelled. To deal with this challenge, our key observation is that even in a rich multipath environment, not all subcarriers are affected equally by multipath reflections. Our CSI pre-processing scheme tries to identify the subcarriers not affected by multipath. Thus, CSI on the “clean” subcarriers can still be utilized for accurate localization. Without needing to know the majority of transceivers’ locations, LiFS achieves a median accuracy of 0.5 m and 1.1 m in line-of-sight (LoS) and non-line-of-sight (NLoS) scenarios, respectively, outperforming the state-of-the-art systems.
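
To make the "clean subcarrier" notion tangible, here is one simple stability heuristic (explicitly not the paper's model-fitting test): keep subcarriers whose CSI amplitude stays stable across packets in a calibration window, on the grounds that heavy multipath tends to make amplitudes volatile.

```python
import numpy as np

def clean_subcarriers(csi, rel_std_max=0.05):
    """csi: (n_packets, n_subcarriers) complex matrix; returns kept indices."""
    amp = np.abs(csi)
    rel_std = amp.std(axis=0) / (amp.mean(axis=0) + 1e-12)
    return np.where(rel_std < rel_std_max)[0]

# synthetic CSI: the first 10 subcarriers are multipath-corrupted
rng = np.random.default_rng(0)
n_pkt, n_sc = 200, 30
csi = rng.normal(1.0, 0.01, (n_pkt, n_sc)) * np.exp(1j * rng.uniform(0, 2 * np.pi))
csi[:, :10] *= rng.normal(1.0, 0.2, (n_pkt, 10))
print(clean_subcarriers(csi))   # indices 10..29 survive the filter
```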

Journal ArticleDOI
TL;DR: Experimental results demonstrate that SEDC with DHRP is more effective than two well-known clustering protocols, HEED and M-LEACH, for prolonging the network lifetime and achieving energy conservation.
Abstract: Prolonging the network life cycle is an essential requirement for many types of Wireless Sensor Network (WSN) applications. Dynamic clustering of sensors into groups is a popular strategy to maximize the network lifetime and increase scalability. In this strategy, to achieve the sensor nodes’ load balancing, with the aim of prolonging lifetime, network operations are split into rounds, i.e., fixed time intervals. Clusters are configured for the current round and reconfigured for the next round so that the costly role of the cluster head is rotated among the network nodes, i.e., a Round-Based Policy (RBP). This load balancing approach potentially extends the network lifetime. However, the imposed overhead, due to the clustering in every round, wastes network energy resources. This paper proposes a distributed energy-efficient scheme to cluster a WSN, i.e., a Dynamic Hyper Round Policy (DHRP), which schedules the clustering task to extend the network lifetime and reduce energy consumption. Although DHRP is applicable to any data gathering protocol that values energy efficiency, a Simple Energy-efficient Data Collecting (SEDC) protocol is also presented to evaluate the usefulness of DHRP and calculate the end-to-end energy consumption. Experimental results demonstrate that SEDC with DHRP is more effective than two well-known clustering protocols, HEED and M-LEACH, for prolonging the network lifetime and achieving energy conservation.
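
A back-of-envelope computation shows why amortizing the clustering overhead over a hyper round of H ordinary rounds pays off; all numbers below are illustrative assumptions, not the paper's model.

```python
def lifetime_rounds(battery_j, per_round_j, overhead_j, H):
    """Rounds until the battery drains when re-clustering happens every H rounds."""
    avg_per_round = per_round_j + overhead_j / H   # overhead amortized over H
    return battery_j / avg_per_round

for H in (1, 5, 20):   # H=1 is the classic re-cluster-every-round policy
    print(f"H={H}: ~{lifetime_rounds(100.0, 1.0, 0.5, H):.1f} rounds")
```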

Journal ArticleDOI
TL;DR: Simulation results demonstrate that EDGR exhibits higher energy efficiency, and has moderate performance improvements on network lifetime, packet delivery ratio, and delivery delay, compared to other geographic routing protocols in WSNs over a variety of communication scenarios passing through routing holes.
Abstract: Geographic routing has been considered as an attractive approach for resource-constrained wireless sensor networks (WSNs) since it exploits local location information instead of global topology information to route data. However, this routing approach often suffers from the routing hole (i.e., an area free of nodes in the direction closer to the destination) in various environments such as buildings and obstacles during data delivery, resulting in route failure. Currently, existing geographic routing protocols tend to walk along only one side of the routing holes to recover the route, thus achieving suboptimal network performance such as longer delivery delay and lower delivery ratio. Furthermore, these protocols cannot guarantee that all packets are delivered in an energy-efficient manner once encountering routing holes. In this paper, we focus on addressing these issues and propose an energy-aware dual-path geographic routing (EDGR) protocol for better route recovery from routing holes. EDGR adaptively utilizes the location information, residual energy, and the characteristics of energy consumption to make routing decisions, and dynamically exploits two node-disjoint anchor lists, passing through two sides of the routing holes, to shift the routing path for load balance. Moreover, we extend EDGR into three-dimensional (3D) sensor networks to provide energy-aware routing for routing hole detour. Simulation results demonstrate that EDGR exhibits higher energy efficiency, and has moderate performance improvements on network lifetime, packet delivery ratio, and delivery delay, compared to other geographic routing protocols in WSNs over a variety of communication scenarios passing through routing holes. The proposed EDGR is well suited to resource-constrained WSNs with routing holes.
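
A hedged sketch of the energy-aware greedy step (the dual-path anchor-list recovery, which is the paper's core contribution, is not reproduced): score each neighbor by geographic progress toward the destination weighted against residual energy, and fall back to recovery mode when no neighbor makes progress.

```python
import math

def energy_aware_next_hop(node, dest, positions, energy, neighbors, w=0.5):
    """Returns the best neighbor, or None when a routing hole is hit."""
    d_self = math.dist(positions[node], positions[dest])
    best, best_score = None, 0.0
    for n in neighbors[node]:
        progress = d_self - math.dist(positions[n], positions[dest])
        if progress <= 0:
            continue                     # no progress: candidate routing hole
        score = w * progress + (1 - w) * energy[n]
        if score > best_score:
            best, best_score = n, score
    return best                          # None triggers the hole-detour phase

positions = {"s": (0, 0), "a": (1, 0.5), "b": (1, -0.5), "d": (3, 0)}
energy = {"a": 0.9, "b": 0.2}            # equal progress, so energy decides
print(energy_aware_next_hop("s", "d", positions, energy, {"s": ["a", "b"]}))
```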

Journal ArticleDOI
TL;DR: A low-cost, unobtrusive, and robust system that supports independent living of older people is presented; it interprets what a person is doing by deciphering signal fluctuations using radio-frequency identification technology and machine learning algorithms.
Abstract: Understanding and recognizing human activities is a fundamental research topic for a wide range of important applications such as fall detection and remote health monitoring and intervention. Despite active research in human activity recognition over the past years, existing approaches based on computer vision or wearable sensor technologies present several significant issues such as privacy (e.g., using video camera to monitor the elderly at home) and practicality (e.g., not possible for an older person with dementia to remember wearing devices). In this paper, we present a low-cost, unobtrusive, and robust system that supports independent living of older people. The system interprets what a person is doing by deciphering signal fluctuations using radio-frequency identification (RFID) technology and machine learning algorithms. To deal with noisy, streaming, and unstable RFID signals, we develop a compressive sensing, dictionary-based approach that can learn a set of compact and informative dictionaries of activities using an unsupervised subspace decomposition. In particular, we devise a number of approaches to explore the properties of sparse coefficients of the learned dictionaries for fully utilizing the embodied discriminative information on the activity recognition task. Our approach achieves efficient and robust activity recognition via a more compact and robust representation of activities. Extensive experiments conducted in a real-life residential environment demonstrate that our proposed system offers a good overall performance and shows the promising practical potential to underpin the applications for the independent living of the elderly.
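
The dictionary-learning pipeline the abstract describes maps naturally onto standard sparse-coding tooling. Below is a hedged sketch using scikit-learn's DictionaryLearning on synthetic RSSI-like windows; the real system learns activity-specific dictionaries via unsupervised subspace decomposition, which this toy does not reproduce.

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def window(activity):
    """Two fake activity signatures (smooth vs. square wave) plus noise."""
    t = np.linspace(0, 1, 64)
    base = np.sin(8 * np.pi * t) if activity else np.sign(np.sin(2 * np.pi * t))
    return base + rng.normal(0, 0.3, 64)

X = np.array([window(i % 2) for i in range(200)])
y = np.array([i % 2 for i in range(200)])

# learn a compact dictionary, then use sparse coefficients as features
dico = DictionaryLearning(n_components=12, alpha=0.5, random_state=0).fit(X[:150])
codes_tr, codes_te = dico.transform(X[:150]), dico.transform(X[150:])
clf = LogisticRegression(max_iter=1000).fit(codes_tr, y[:150])
print("held-out accuracy:", clf.score(codes_te, y[150:]))
```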

Journal ArticleDOI
TL;DR: These results show that the proposed solutions give nearly optimal performance under a wide range of parameter settings, and the addition of a CAP can significantly reduce the cost of multi-user task offloading compared with conventional mobile cloud computing where only the remote cloud server is available.
Abstract: We consider a mobile cloud computing system with multiple users, a remote cloud server, and a computing access point (CAP). The CAP serves both as the network access gateway and a computation service provider to the mobile users. It can either process the received tasks from mobile users or offload them to the cloud. We jointly optimize the offloading decisions of all users, together with the allocation of computation and communication resources, to minimize the overall cost of energy consumption, computation, and maximum delay among users. The joint optimization problem is formulated as a mixed-integer program. We show that the problem can be reformulated and transformed into a non-convex quadratically constrained quadratic program, which is NP-hard in general. We then propose an efficient solution to this problem by semidefinite relaxation and a novel randomization mapping method. Furthermore, when there is a strict delay constraint for processing each user's task, we further propose a three-step algorithm to guarantee the feasibility and local optimality of the obtained solution. Our numerical results show that the proposed solutions give nearly optimal performance under a wide range of parameter settings, and the addition of a CAP can significantly reduce the cost of multi-user task offloading compared with conventional mobile cloud computing where only the remote cloud server is available.
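
For intuition about the decision structure, here is a brute-force reference on a toy instance: each user executes locally, at the CAP, or in the cloud, and CAP overload is penalized to mimic shared-resource contention. All costs are assumed; the paper solves the same kind of joint decision at scale via semidefinite relaxation plus randomized mapping, which this enumeration does not attempt.

```python
import itertools
import numpy as np

def total_cost(decision, local, cap, cloud, cap_capacity):
    """decision[i] in {0: local, 1: CAP, 2: cloud}."""
    cap_load = sum(d == 1 for d in decision)
    penalty = 10.0 * max(0, cap_load - cap_capacity)   # contention at the CAP
    per_user = [(local, cap, cloud)[d][i] for i, d in enumerate(decision)]
    return sum(per_user) + penalty

def brute_force(local, cap, cloud, cap_capacity=2):
    n = len(local)
    return min(itertools.product(range(3), repeat=n),
               key=lambda d: total_cost(d, local, cap, cloud, cap_capacity))

rng = np.random.default_rng(0)
local, cap, cloud = rng.uniform(1, 5, (3, 5))   # 5 users, assumed unit costs
best = brute_force(list(local), list(cap), list(cloud))
print(best, total_cost(best, list(local), list(cap), list(cloud), 2))
```

Exhaustive search is 3^n and only viable for tiny n, which is exactly why the paper relaxes the mixed-integer program instead.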