
Showing papers in "IEEE Transactions on Mobile Computing in 2017"


Journal ArticleDOI
TL;DR: This work analyzes how human activities influence the wireless signal propagation model and proposes WiFall, a novel and truly unobtrusive fall detection method built on advanced wireless technologies that eliminates the need for hardware modification, environmental setup, and worn or carried devices.
Abstract: Injuries caused by falls have been regarded as one of the major health threats to independent living for the elderly. Conventional fall detection systems have various limitations. In this work, we first look for the correlations between different radio signal variations and activities by analyzing the radio propagation model. Based on our observation, we propose WiFall, a truly unobtrusive fall detection system. WiFall employs physical layer Channel State Information (CSI) as the indicator of activities. It can detect a fall without hardware modification, extra environmental setup, or any wearable device. We implement WiFall on desktops equipped with commodity 802.11n NICs, and evaluate the performance in three typical indoor scenarios with several layouts of transmitter-receiver (Tx-Rx) links. In our area of interest, WiFall can achieve fall detection for a single person with high accuracy. As demonstrated by the experimental results, WiFall yields 90 percent detection precision with a false alarm rate of 15 percent on average using a one-class SVM classifier in all testing scenarios. It can also achieve an average 94 percent fall detection precision with a 13 percent false alarm rate using the Random Forest algorithm.

686 citations
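As a rough illustration of WiFall's core signal cue, a sudden activity such as a fall produces large short-term variance in the CSI amplitude stream. The sketch below is a toy sliding-window variance detector; the window size, threshold, and sample traces are illustrative placeholders, not the paper's trained one-class SVM.

```python
def sliding_variance(signal, w):
    # variance of each length-w window of a CSI amplitude stream
    out = []
    for i in range(len(signal) - w + 1):
        win = signal[i:i + w]
        m = sum(win) / w
        out.append(sum((x - m) ** 2 for x in win) / w)
    return out

def detect_fall(signal, w=5, threshold=1.0):
    # flag windows whose amplitude variance exceeds a threshold
    # calibrated on activity-free (normal) data
    return [v > threshold for v in sliding_variance(signal, w)]

# calm amplitude trace, then the same trace ending in a sharp burst
calm = [1.0, 1.1, 0.9, 1.0, 1.05, 0.95, 1.0, 1.1]
burst = calm[:4] + [5.0, -3.0, 4.0, -2.0]
```

In the actual system a learned classifier (one-class SVM or Random Forest) replaces the fixed threshold.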


Journal ArticleDOI
Hao Wang, Daqing Zhang, Yasha Wang, Junyi Ma, Yuxiang Wang, Shengjie Li
TL;DR: RT-Fall exploits the phase and amplitude of the fine-grained Channel State Information accessible in commodity WiFi devices, and for the first time fulfills the goal of segmenting and detecting the falls automatically in real-time, which allows users to perform daily activities naturally and continuously without wearing any devices on the body.
Abstract: This paper presents the design and implementation of RT-Fall, a real-time, contactless, low-cost yet accurate indoor fall detection system using the commodity WiFi devices. RT-Fall exploits the phase and amplitude of the fine-grained Channel State Information (CSI) accessible in commodity WiFi devices, and for the first time fulfills the goal of segmenting and detecting the falls automatically in real-time, which allows users to perform daily activities naturally and continuously without wearing any devices on the body. This work makes two key technical contributions. First, we find that the CSI phase difference over two antennas is a more sensitive base signal than amplitude for activity recognition, which can enable very reliable segmentation of fall and fall-like activities. Second, we discover the sharp power profile decline pattern of the fall in the time-frequency domain and further exploit the insight for new feature extraction and accurate fall segmentation/detection. Experimental results in four indoor scenarios demonstrate that RT-Fall consistently outperforms the state-of-the-art approach WiFall with 14 percent higher sensitivity and 10 percent higher specificity on average.

464 citations
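The base signal RT-Fall builds on, the CSI phase difference between two receive antennas, is straightforward to compute from complex CSI samples. A minimal sketch (the antenna values below are made-up examples):

```python
import cmath

def phase_difference(csi_ant1, csi_ant2):
    # per-subcarrier phase difference between two RX antennas; RT-Fall
    # finds this more sensitive than amplitude for segmenting activities
    return [cmath.phase(a) - cmath.phase(b)
            for a, b in zip(csi_ant1, csi_ant2)]

# synthetic complex CSI samples for two antennas: rect(magnitude, phase)
ant1 = [cmath.rect(1.0, 0.5), cmath.rect(1.2, 1.0)]
ant2 = [cmath.rect(0.9, 0.2), cmath.rect(1.1, 0.4)]
diffs = phase_difference(ant1, ant2)  # ≈ [0.3, 0.6]
```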


Journal ArticleDOI
TL;DR: An online algorithm is proposed to learn the unknown dynamic environment while guaranteeing that the performance gap relative to the optimal strategy grows at most logarithmically with time.
Abstract: With mobile devices increasingly able to connect to cloud servers from anywhere, resource-constrained devices can potentially perform offloading of computational tasks to either save local resource usage or improve performance. It is of interest to find optimal assignments of tasks to local and remote devices that can take into account the application-specific profile, availability of computational resources, and link connectivity, and find a balance between energy consumption costs of mobile devices and latency for delay-sensitive applications. We formulate an NP-hard problem to minimize the application latency while meeting prescribed resource utilization constraints. Unlike most existing works that either rely on an integer programming solver or on heuristics that offer no theoretical performance guarantees, we propose Hermes, a novel fully polynomial time approximation scheme (FPTAS). We identify a subset of problem instances, in which the application task graphs can be described as serial trees, for which Hermes provides a solution with latency no more than (1 + ε) times the minimum while incurring complexity that is polynomial in the problem size and 1/ε. We further propose an online algorithm to learn the unknown dynamic environment and guarantee that the performance gap compared to the optimal strategy is bounded by a logarithmic function of time. Evaluation using a real data set collected from several benchmarks shows that Hermes improves the latency by 16 percent compared to a previously published heuristic while increasing CPU computing time by only 0.4 percent of the overall latency.

233 citations
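The local-vs-offload trade-off Hermes optimizes can be pictured with a toy dynamic program over a serial chain of tasks: each task either runs locally (costing device energy) or is offloaded (costing remote latency), subject to an energy budget. This is a brute-force sketch with invented task parameters that ignores radio energy; Hermes itself is an FPTAS that quantizes the objective to stay polynomial.

```python
def min_latency(tasks, energy_budget):
    # tasks: sequence of (local_time, local_energy, remote_time) tuples,
    # executed serially; choose local vs. remote per task to minimize
    # total latency without exceeding the device's energy budget
    INF = float("inf")
    best = {0: 0.0}  # energy spent so far -> minimum latency
    for lt, le, rt in tasks:
        nxt = {}
        for e, lat in best.items():
            if lat + rt < nxt.get(e, INF):          # offload this task
                nxt[e] = lat + rt
            if e + le <= energy_budget and lat + lt < nxt.get(e + le, INF):
                nxt[e + le] = lat + lt              # run it locally
        best = nxt
    return min(best.values())
```

For example, with tasks `[(1, 2, 3), (2, 1, 5)]` and budget 2, offloading the first task and running the second locally is optimal with latency 5.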


Journal ArticleDOI
TL;DR: Numerical results have shown that the proposed cooperative content caching and delivery policy can significantly improve content delivery performance in comparison with existing caching strategies.
Abstract: To address the explosively growing demand for mobile data services in the 5th generation (5G) mobile communication system, it is important to develop efficient content caching and distribution techniques, aiming at significantly reducing redundant data transmissions and improving content delivery efficiency. In the heterogeneous cellular network (HetNet), which has been deemed a promising architectural technique for 5G, caching some popular content items at femto base-stations (FBSs) and even at user equipment (UE) can be exploited to alleviate the burden of backhaul and to reduce the costly transmissions from the macro base-stations to UEs. In this paper, we develop the optimal cooperative content caching and delivery policy, in which FBSs and UEs are all engaged in local content caching. We formulate the cooperative content caching problem as an integer-linear programming problem, and use a hierarchical primal-dual decomposition method to decouple it into two levels of optimization problems, which are solved using the subgradient method. Furthermore, we design the optimal content delivery policy, which is formulated as an unbalanced assignment problem and solved using the Hungarian algorithm. Numerical results show that the proposed cooperative content caching and delivery policy can significantly improve content delivery performance in comparison with existing caching strategies.

206 citations
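The delivery step, an assignment of content requests to caching nodes, can be sketched as a minimum-cost assignment; the cost matrix below is invented. Brute force is used here for clarity, whereas the Hungarian algorithm the paper employs solves the same problem in O(n³):

```python
from itertools import permutations

def best_assignment(cost):
    # minimum-cost assignment of requests (rows) to delivery nodes
    # (columns); exhaustive search over all permutations for clarity
    n = len(cost)
    best_perm, best_cost = None, float("inf")
    for perm in permutations(range(len(cost[0])), n):
        c = sum(cost[i][perm[i]] for i in range(n))
        if c < best_cost:
            best_perm, best_cost = perm, c
    return best_perm, best_cost

cost = [[4, 1, 3],   # e.g., cost of serving request i from cache j
        [2, 0, 5],
        [3, 2, 2]]
perm, total = best_assignment(cost)
```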


Journal ArticleDOI
TL;DR: This article proposes a simple yet effective queue utilization based RPL (QU-RPL) that achieves load balancing and significantly improves the end-to-end packet delivery performance compared to the standard RPL.
Abstract: RPL is an IPv6 routing protocol for low-power and lossy networks (LLNs) designed to meet the requirements of a wide range of LLN applications including smart grid AMIs, industrial and environmental monitoring, and wireless sensor networks. RPL allows bi-directional end-to-end IPv6 communication on resource constrained LLN devices, leading to the concept of the Internet of Things (IoT) with thousands and millions of devices interconnected through multihop mesh networks. In this article, we investigate the load balancing and congestion problem of RPL. Specifically, we show that most of the packet losses under heavy traffic are due to congestion, and a serious load balancing problem appears in RPL in terms of routing parent selection. To overcome this problem, this article proposes a simple yet effective queue utilization based RPL (QU-RPL) that achieves load balancing and significantly improves the end-to-end packet delivery performance compared to the standard RPL. QU-RPL is designed for each node to select its parent node considering the queue utilization of its neighbor nodes as well as their hop distances to an LLN border router (LBR). Owing to its load balancing capability, QU-RPL is very effective in lowering queue losses and increasing the packet delivery ratio. We implement QU-RPL on a low-power embedded platform, and verify all of our findings through experimental measurements on a real testbed of a multihop LLN over IEEE 802.15.4. We present the impact of each design element of QU-RPL on performance in detail, and also show that QU-RPL reduces the queue loss by up to 84 percent and improves the packet delivery ratio by up to 147 percent compared to the standard RPL.

196 citations
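QU-RPL's parent selection weighs a neighbor's queue utilization together with its hop distance to the border router. The scoring function below is an illustrative combination with a made-up weighting, not the paper's exact metric:

```python
def select_parent(neighbors, alpha=0.5):
    # neighbors: (node_id, queue_utilization in [0, 1], hop_distance);
    # lower score wins, so a congested parent (high queue utilization)
    # is avoided even if it sits closer to the LLN border router
    def score(n):
        _, qu, hops = n
        return alpha * qu + (1 - alpha) * hops / 10.0
    return min(neighbors, key=score)[0]

neighbors = [("A", 0.9, 2), ("B", 0.2, 3), ("C", 0.5, 2)]
```

With the balanced weighting, the lightly loaded neighbor B wins despite being one hop farther; with `alpha=0` the choice degenerates to plain hop count, which is where standard RPL's congestion problem comes from.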


Journal ArticleDOI
TL;DR: This work proposes a novel approach that introduces moving object modeling and indexing techniques from the theory of large moving object databases into the design of VANET routing protocols and demonstrates the superiority of this approach compared with both clustering and non-clustering based routing protocols.
Abstract: Vehicular Ad-hoc Networks (VANETs) are an emerging technology whereby vehicle-to-vehicle communications can enable many new applications such as safety and entertainment services. Most VANET applications are enabled by different routing protocols. The design of such routing protocols, however, is quite challenging due to the dynamic nature of nodes (vehicles) in VANETs. To exploit the unique characteristics of VANET nodes, we design a moving-zone based architecture in which vehicles collaborate with one another to form dynamic moving zones so as to facilitate information dissemination. We propose a novel approach that introduces moving object modeling and indexing techniques from the theory of large moving object databases into the design of VANET routing protocols. The results of extensive simulation studies carried out on real road maps demonstrate the superiority of our approach compared with both clustering and non-clustering based routing protocols.

193 citations


Journal ArticleDOI
TL;DR: This paper designs a malware detection scheme with Q-learning for a mobile device to derive the optimal offloading rate without knowing the trace generation and the radio bandwidth model of other mobile devices, and designs a post-decision state learning-based scheme that utilizes the known radio channel model to accelerate the reinforcement learning process in the malware detection.
Abstract: As accurate malware detection on mobile devices requires fast processing of a large number of application traces, cloud-based malware detection can utilize the data sharing and powerful computational resources of security servers to improve the detection performance. In this paper, we investigate the cloud-based malware detection game, in which mobile devices offload their application traces to security servers via base stations or access points in dynamic networks. We derive the Nash equilibrium (NE) of the static malware detection game and present the existence condition of the NE, showing how mobile devices share their application traces at the security server to improve the detection accuracy, and compete for the limited radio bandwidth, the computational and communication resources of the server. We design a malware detection scheme with Q-learning for a mobile device to derive the optimal offloading rate without knowing the trace generation and the radio bandwidth model of other mobile devices. The detection performance is further improved with the Dyna architecture, in which a mobile device learns from the hypothetical experience to increase its convergence rate. We also design a post-decision state learning-based scheme that utilizes the known radio channel model to accelerate the reinforcement learning process in the malware detection. Simulation results show that the proposed schemes improve the detection accuracy, reduce the detection delay, and increase the utility of a mobile device in the dynamic malware detection game, compared with the benchmark strategy.

162 citations
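The Q-learning component reduces to the standard tabular update: in the paper, the state captures network dynamics (e.g., radio bandwidth), the action is the offloading rate, and the reward trades detection accuracy against delay. A generic sketch with made-up states and actions:

```python
def q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.9):
    # one Q-learning step: move Q[s][a] toward the observed reward plus
    # the discounted value of the best action in the next state
    best_next = max(Q[s_next].values())
    Q[s][a] += alpha * (r + gamma * best_next - Q[s][a])

# two states x two offloading actions, all values initially zero
Q = {0: {0: 0.0, 1: 0.0}, 1: {0: 0.0, 1: 0.0}}
q_update(Q, s=0, a=1, r=1.0, s_next=1)  # Q[0][1] becomes 0.1
```

The Dyna and post-decision-state variants in the paper accelerate convergence of exactly this kind of update by replaying hypothetical experience generated from a model of the channel.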


Journal ArticleDOI
TL;DR: This paper focuses on the makespan sensitive task assignment problems for the crowdsensing in mobile social networks, where the mobility model is predicable, and the time of sending tasks and recycling results is non-negligible.
Abstract: Mobile crowdsensing is a new paradigm in which a crowd of mobile users exploit their carried smart phones to conduct complex sensing tasks. In this paper, we focus on the makespan sensitive task assignment problems for the crowdsensing in mobile social networks, where the mobility model is predicable, and the time of sending tasks and recycling results is non-negligible. To solve the problems, we propose an Average makespan sensitive Online Task Assignment (AOTA) algorithm and a Largest makespan sensitive Online Task Assignment (LOTA) algorithm. In AOTA and LOTA, the online task assignments are viewed as multiple rounds of virtual offline task assignments. Moreover, a greedy strategy of small-task-first-assignment and earliest-idle-user-receive-task is adopted for each round of virtual offline task assignment in AOTA, while the greedy strategy of large-task-first-assignment and earliest-idle-user-receive-task is adopted for the virtual offline task assignments in LOTA. Based on the two greedy strategies, both AOTA and LOTA achieve nearly optimal online decision performance. We prove this and give the competitive ratios of the two algorithms. In addition, we demonstrate the strong performance of the two algorithms through extensive simulations, based on four real MSN traces and a synthetic MSN trace.

139 citations
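One round of the virtual offline assignment can be sketched with a heap of user idle times: AOTA hands the smallest task first to the earliest-idle user (sorting in reverse order instead gives LOTA's largest-first rule). Task lengths and user count below are invented:

```python
import heapq

def assign_round(task_lengths, n_users):
    # smallest-task-first to the earliest-idle user (AOTA's greedy rule)
    idle = [(0, u) for u in range(n_users)]  # (time user becomes idle, id)
    heapq.heapify(idle)
    schedule = {u: [] for u in range(n_users)}
    for t in sorted(task_lengths):           # reverse=True would be LOTA
        free_at, u = heapq.heappop(idle)
        schedule[u].append(t)
        heapq.heappush(idle, (free_at + t, u))
    makespan = max(ft for ft, _ in idle)
    return schedule, makespan

schedule, makespan = assign_round([4, 1, 3, 2], n_users=2)
```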


Journal ArticleDOI
TL;DR: An Android application that uses a smartphone-based accelerometer to capture gait data continuously in the background, but only when an individual walks is developed, which indicates that mimicry does not improve chances of attackers being accepted by the gait authentication system.
Abstract: This work evaluates the security strength of a smartphone-based gait recognition system against zero-effort and live minimal-effort impersonation attacks under realistic scenarios. For this purpose, we developed an Android application, which uses a smartphone-based accelerometer to capture gait data continuously in the background, but only when an individual walks. Later, it analyzes the recorded gait data and establishes the identity of an individual. At first, we tested the performance of this system against zero-effort attacks by using a dataset of 35 participants. Later, live impersonation attacks were performed by five professional actors who are specialized in mimicking body movements and body language. These attackers were paired with their physiologically close victims, and they were given live audio and visual feedback about their latest impersonation attempt during the whole experiment. The absence of false positives under impersonation attacks indicates that mimicry does not improve an attacker's chances of being accepted by our gait authentication system. In 29 percent of total impersonation attempts, when attackers walked like their chosen victim, they lost regularity between their steps, which makes impersonation even harder for attackers.

135 citations


Journal ArticleDOI
TL;DR: Numerical results are provided to validate the proposed algorithm (including its accuracy and computational efficiency) and demonstrate that the optimal MDs' cooperative offloading can significantly reduce the system cost compared to some heuristic schemes.
Abstract: In this paper, we investigate the cooperative traffic offloading among mobile devices (MDs) which are interested in receiving a common content from a cellular base station (BS). For offloading traffic, the BS first sends the content to some selected MDs which then broadcast the received data to the other MDs, such that each MD can receive the entire content simultaneously. Due to each MD's limited transmit-power and energy budget, the transmission rate of the content should be properly designed, since it strongly influences whether and how long each MD can perform relaying. Therefore, different from most existing MDs cooperative schemes, we focus on a novel joint optimization of the content transmission rate and each MD's relay-duration, with the objective of minimizing the system cost accounting for the energy consumption and the cellular-link usage. To tackle the technical challenge due to the coupling effect between the content transmission rate and each MD's relay-duration, we exploit the decomposable property of the joint optimization problem, based on which we characterize different possible cases for achieving the optimal solution. We then derive the optimal solution for each case analytically, and further propose an efficient algorithm for finding the globally optimal solution of the original joint optimization problem. Numerical results are provided to validate the proposed algorithm (including its accuracy and computational efficiency) and demonstrate that the optimal MDs’ cooperative offloading can significantly reduce the system cost compared to some heuristic schemes. Several interesting insights about the cooperative offloading are also obtained.

132 citations


Journal ArticleDOI
TL;DR: This paper proposes a mechanism based on differential privacy and geocasting that achieves effective SC services while offering privacy guarantees to workers, and addresses scenarios with both static and dynamic datasets of workers.
Abstract: Spatial Crowdsourcing (SC) is a transformative platform that engages individuals in collecting and analyzing environmental, social, and other spatio-temporal information. SC outsources spatio-temporal tasks to a set of workers , i.e., individuals with mobile devices that perform the tasks by physically traveling to specified locations. However, current solutions require the workers to disclose their locations to untrusted parties. In this paper, we introduce a framework for protecting location privacy of workers participating in SC tasks. We propose a mechanism based on differential privacy and geocasting that achieves effective SC services while offering privacy guarantees to workers. We address scenarios with both static and dynamic (i.e., moving) datasets of workers. Experimental results on real-world data show that the proposed technique protects location privacy without incurring significant performance overhead.
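A standard building block for such guarantees is releasing Laplace-perturbed worker counts per region, against which the server can then geocast tasks. The sketch below shows an ε-differentially-private release of per-cell counts; the cell counts are invented, and the paper's full construction uses a private spatial decomposition rather than this flat grid:

```python
import math, random

def laplace_noise(scale, rnd=random.random):
    # sample from Laplace(0, scale) via the inverse CDF
    u = rnd() - 0.5
    return -scale * math.copysign(math.log(1 - 2 * abs(u)), u)

def noisy_counts(cell_counts, epsilon):
    # epsilon-DP release of per-cell worker counts: adding or removing
    # one worker changes a count by 1, so the noise scale is 1/epsilon
    return [c + laplace_noise(1.0 / epsilon) for c in cell_counts]

random.seed(0)
released = noisy_counts([10, 3, 7], epsilon=1.0)
```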

Journal ArticleDOI
TL;DR: In GRfid, after data are collected by hardware, they are processed by a sequence of functional blocks, namely data preprocessing, gesture detection, profiles training, and gesture recognition, all of which are well-designed to achieve high performance in gesture recognition.
Abstract: Gesture recognition has emerged recently as a promising application in our daily lives. Owing to low cost, prevalent availability, and structural simplicity, RFID is becoming a popular technology for gesture recognition. However, the performance of existing RFID-based gesture recognition systems is constrained by unfavorable intrusiveness to users, requiring users to attach tags on their bodies. To overcome this, we propose GRfid, a novel device-free gesture recognition system based on phase information output by COTS RFID devices. Our work stems from the key insight that the RFID phase information is capable of capturing the spatial features of various gestures with low-cost commodity hardware. In GRfid, after data are collected by hardware, we process the data by a sequence of functional blocks, namely data preprocessing, gesture detection, profiles training, and gesture recognition, all of which are well-designed to achieve high performance in gesture recognition. We have implemented GRfid with a commercial RFID reader and multiple tags, and conducted extensive experiments in different scenarios to evaluate its performance. The results demonstrate that GRfid can achieve an average recognition accuracy of 96.5 and 92.8 percent in the identical-position and diverse-positions scenarios, respectively. Moreover, experiment results show that GRfid is robust against environmental interference and tag orientations.

Journal ArticleDOI
TL;DR: From results of extensive experiments with 20 volunteers driving for another four months in real driving environments, it is shown that a fine-grained abnormal driving behavior detection and identification model achieves an average total accuracy of 95.36 percent with the SVM classifier and 96.88 percent with the NN classifier.
Abstract: Real-time abnormal driving behaviors monitoring is a cornerstone to improving driving safety. Existing works on driving behaviors monitoring using smartphones only provide a coarse-grained result, i.e., distinguishing abnormal driving behaviors from normal ones. To improve drivers’ awareness of their driving habits so as to prevent potential car accidents, we need to consider a fine-grained monitoring approach, which not only detects abnormal driving behaviors but also identifies specific types of abnormal driving behaviors, i.e., Weaving, Swerving, Sideslipping, Fast U-turn, Turning with a wide radius, and Sudden braking. Through empirical studies of the 6-month driving traces collected from real driving environments, we find that all of the six types of driving behaviors have their unique patterns on acceleration and orientation. Recognizing this observation, we further propose a fine-grained abnormal Driving behavior Detection and iDentification system, D³, to perform real-time high-accuracy abnormal driving behaviors monitoring using smartphone sensors. We extract effective features to capture the patterns of abnormal driving behaviors. After that, two machine learning methods, Support Vector Machine (SVM) and Neural Networks (NN), are employed to train on the features and output a classifier model which conducts fine-grained abnormal driving behaviors detection and identification. From results of extensive experiments with 20 volunteers driving for another four months in real driving environments, we show that D³ achieves an average total accuracy of 95.36 percent with the SVM classifier model, and 96.88 percent with the NN classifier model.
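The feature-extraction stage that feeds the SVM/NN classifiers can be sketched as per-window statistics over one accelerometer axis; the feature set and sample trace here are illustrative, not the paper's exact ones:

```python
def window_features(acc, w):
    # non-overlapping windows over one accelerometer axis; each window
    # yields (mean, variance, range), simple time-domain features of the
    # kind used to separate weaving, swerving, sudden braking, etc.
    feats = []
    for i in range(0, len(acc) - w + 1, w):
        win = acc[i:i + w]
        mean = sum(win) / w
        var = sum((x - mean) ** 2 for x in win) / w
        feats.append((mean, var, max(win) - min(win)))
    return feats

feats = window_features([0.0, 1.0, 2.0, 3.0, 4.0, 5.0], w=3)
```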

Journal ArticleDOI
TL;DR: This paper adopts a TDMA-based protocol and dynamically adjust the transmission order and transmission duration of the nodes based on channel status and application context of WBAN, and designs a new synchronization scheme to reduce the synchronization overhead.
Abstract: With the promising applications in e-Health and entertainment services, wireless body area network (WBAN) has attracted significant interest. One critical challenge for WBAN is to track and maintain the quality of service (QoS), e.g., delivery probability and latency, under the dynamic environment dictated by human mobility. Another important issue is to ensure the energy efficiency within such a resource-constrained network. In this paper, a new medium access control (MAC) protocol is proposed to tackle these two important challenges. We adopt a TDMA-based protocol and dynamically adjust the transmission order and transmission duration of the nodes based on channel status and application context of WBAN. The slot allocation is optimized by minimizing energy consumption of the nodes, subject to the delivery probability and throughput constraints. Moreover, we design a new synchronization scheme to reduce the synchronization overhead. Through developing an analytical model, we analyze how the protocol can adapt to different latency requirements in the healthcare monitoring service. Simulation results show that the proposed protocol outperforms CA-MAC and IEEE 802.15.6 MAC in terms of QoS and energy efficiency under extensive conditions. It also demonstrates more effective performance in highly heterogeneous WBAN.

Journal ArticleDOI
TL;DR: DIVERT is a hybrid system because it still uses a server and Internet communication to determine an accurate global view of the traffic, and balances the user privacy with the re-routing effectiveness.
Abstract: Centralized solutions for vehicular traffic re-routing to alleviate congestion suffer from two intrinsic problems: scalability, as the central server has to perform intensive computation and communication with the vehicles in real-time; and privacy, as the drivers have to share their location as well as the origins and destinations of their trips with the server. This article proposes DIVERT, a distributed vehicular re-routing system for congestion avoidance. DIVERT offloads a large part of the re-routing computation to the vehicles, and thus, the re-routing process becomes practical in real-time. To take collaborative re-routing decisions, the vehicles exchange messages over vehicular ad hoc networks. DIVERT is a hybrid system because it still uses a server and Internet communication to determine an accurate global view of the traffic. In addition, DIVERT balances the user privacy with the re-routing effectiveness. The simulation results demonstrate that, compared with a centralized system, the proposed hybrid system increases the user privacy by 92 percent on average. In terms of average travel time, DIVERT's performance is slightly less than that of the centralized system, but it still achieves substantial gains compared to the no re-routing case. In addition, DIVERT reduces the CPU and network load on the server by 99.99 and 95 percent, respectively.

Journal ArticleDOI
TL;DR: In this paper, a learning framework based on a problem-specific Markov chain is proposed for underlay D2D network, where a two phase algorithm is developed to perform mode selection and resource allocation in the respective phases.
Abstract: Device to device (D2D) communication is considered as an effective technology for enhancing the spectral efficiency and network throughput of existing cellular networks. However, enabling it in an underlay fashion poses a significant challenge pertaining to interference management. In this paper, mode selection and resource allocation for an underlay D2D network is studied while simultaneously providing interference management. The problem is formulated as a combinatorial optimization problem whose objective is to maximize the utility of all D2D pairs. To solve this problem, a learning framework is proposed based on a problem-specific Markov chain. From the local balance equation of the designed Markov chain, the transition probabilities are derived for distributed implementation. Then, a novel two phase algorithm is developed to perform mode selection and resource allocation in the respective phases. This algorithm is then shown to converge to a near optimal solution. Moreover, to reduce the computation in the learning framework, two resource allocation algorithms based on matching theory are proposed to output a specific and deterministic solution. The first algorithm employs the one-to-one matching game approach whereas in the second algorithm, the one-to-many matching game with externalities and dynamic quota is employed. Simulation results show that the proposed framework converges to a near optimal solution under all scenarios with probability one. Moreover, our results show that the proposed matching game with externalities achieves a performance gain of up to 35 percent in terms of the average utility compared to a classical matching scheme with no externalities.
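The one-to-one matching phase follows the classical deferred-acceptance pattern: D2D pairs propose to channels in preference order and each channel keeps its best proposer so far. A minimal Gale-Shapley sketch with invented preference lists (the paper's one-to-many variant additionally handles externalities and dynamic quotas):

```python
def gale_shapley(d2d_prefs, channel_prefs):
    # one-to-one stable matching of D2D pairs to cellular channels
    # via deferred acceptance
    free = list(d2d_prefs)
    next_pick = {d: 0 for d in d2d_prefs}
    engaged = {}  # channel -> currently accepted D2D pair
    while free:
        d = free.pop()
        c = d2d_prefs[d][next_pick[d]]  # best channel not yet tried
        next_pick[d] += 1
        if c not in engaged:
            engaged[c] = d
        else:
            rival = engaged[c]
            rank = channel_prefs[c]
            if rank.index(d) < rank.index(rival):  # channel prefers d
                engaged[c] = d
                free.append(rival)
            else:
                free.append(d)
    return {d: c for c, d in engaged.items()}

match = gale_shapley(
    {"d1": ["c1", "c2"], "d2": ["c1", "c2"]},
    {"c1": ["d2", "d1"], "c2": ["d1", "d2"]})
```

Both pairs want c1, but c1 prefers d2, so d1 settles for c2 and the result is stable.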

Journal ArticleDOI
TL;DR: A two-phase MC-based data recovery scheme, named MC-Two-Phase, which applies the matrix completion technique to fully exploit the inherent features of environmental data to recover the data matrix due to either data missing or corruption is proposed.
Abstract: Affected by hardware and wireless conditions in WSNs, raw sensory data usually have notable data loss and corruption. Existing studies mainly consider the interpolation of random missing data in the absence of the data corruption. There is also no strategy to handle the successive missing data. To address these problems, this paper proposes a novel approach based on matrix completion (MC) to recover successively missing and corrupted data. By analyzing a large set of weather data collected from 196 sensors in Zhu Zhou, China, we verify that weather data have the features of low-rank, temporal stability, and spatial correlation. Moreover, from simulations on real weather data, we also discover that successive data corruption not only seriously affects the accuracy of missing and corrupted data recovery but even pollutes the normal data when applying the matrix completion in a traditional way. Motivated by these observations, we propose a novel Principal Component Analysis (PCA)-based scheme to efficiently identify the existence of data corruption. We further propose a two-phase MC-based data recovery scheme, named MC-Two-Phase, which applies the matrix completion technique to fully exploit the inherent features of environmental data to recover the data matrix due to either data missing or corruption. Finally, extensive simulations with real-world sensory data demonstrate that the proposed MC-Two-Phase approach can achieve very high recovery accuracy in the presence of successively missing and corrupted data.
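The matrix-completion step itself can be pictured with a rank-1 alternating-least-squares fit over only the observed entries. The 3x2 "sensor x time" matrix below is a made-up, exactly rank-1 example; the paper handles general low rank plus a PCA-based corruption check:

```python
def rank1_complete(M, mask, iters=100):
    # fit M ≈ outer(u, v) using only observed entries (mask[i][j] == 1),
    # alternating exact least-squares updates of u and v; missing
    # entries are then predicted as u[i] * v[j]
    m, n = len(M), len(M[0])
    u, v = [1.0] * m, [1.0] * n
    for _ in range(iters):
        for i in range(m):
            num = sum(mask[i][j] * M[i][j] * v[j] for j in range(n))
            den = sum(mask[i][j] * v[j] ** 2 for j in range(n))
            u[i] = num / den
        for j in range(n):
            num = sum(mask[i][j] * M[i][j] * u[i] for i in range(m))
            den = sum(mask[i][j] * u[i] ** 2 for i in range(m))
            v[j] = num / den
    return [[u[i] * v[j] for j in range(n)] for i in range(m)]

truth = [[1.0, 2.0], [2.0, 4.0], [3.0, 6.0]]   # exactly rank 1
mask = [[1, 1], [1, 0], [1, 1]]                 # entry (1, 1) is missing
rec = rank1_complete(truth, mask)
```

Because the observed entries are consistent with a rank-1 model, the missing entry is recovered as 4, exploiting exactly the low-rank structure the paper verifies in weather data.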

Journal ArticleDOI
TL;DR: This work proposes Localization with Altered APs and Fingerprint Updating (LAAFU) system, employing implicit crowdsourced signals for fingerprint update and survey reduction, and results show that LAAFU is robust against altered APs, achieving 20 percent localization error reduction with the fingerprints adaptive to environmental signal changes.
Abstract: Wi-Fi fingerprinting has been extensively studied for indoor localization due to its deployability under pervasive indoor WLAN. As the signals from access points (APs) may change due to, for example, AP movement or power adjustment, the traditional approach is to conduct site survey regularly in order to maintain localization accuracy, which is costly and time-consuming. Here, we study how to accurately locate a target and automatically update fingerprints in the presence of altered AP signals (or simply, “altered APs”). We propose Localization with Altered APs and Fingerprint Updating (LAAFU) system, employing implicit crowdsourced signals for fingerprint update and survey reduction. Using novel subset sampling, LAAFU identifies any altered APs and filters them out before a location decision is made, hence maintaining localization accuracy under altered AP signals. With client locations anywhere in the region, fingerprint signals can be adaptively and transparently updated using non-parametric Gaussian process regression. We have conducted extensive experiments in our campus hall, an international airport, and a premium shopping mall. Compared with traditional weighted nearest neighbors and probabilistic algorithms, results show that LAAFU is robust against altered APs, achieving 20 percent localization error reduction with the fingerprints adaptive to environmental signal changes.
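The weighted-nearest-neighbors baseline that LAAFU is compared against is easy to sketch: rank fingerprint reference points by RSSI distance and average their coordinates with inverse-distance weights. The database below is invented, and every AP is assumed visible at every point (no altered APs, which is exactly the failure mode LAAFU guards against):

```python
def wknn_locate(fingerprints, observed, k=2):
    # fingerprints: {(x, y): {ap_name: rssi}}; observed: {ap_name: rssi};
    # assumes all observed APs appear in every reference fingerprint
    def dist(fp):
        return (sum((fp[a] - observed[a]) ** 2 for a in observed)
                / len(observed)) ** 0.5
    ranked = sorted(fingerprints.items(), key=lambda kv: dist(kv[1]))[:k]
    weights = [1.0 / (dist(fp) + 1e-9) for _, fp in ranked]
    wx = sum(w * p[0] for w, (p, _) in zip(weights, ranked)) / sum(weights)
    wy = sum(w * p[1] for w, (p, _) in zip(weights, ranked)) / sum(weights)
    return wx, wy

db = {(0.0, 0.0): {"ap1": -40, "ap2": -70},
      (10.0, 0.0): {"ap1": -70, "ap2": -40},
      (0.0, 10.0): {"ap1": -55, "ap2": -55}}
loc = wknn_locate(db, {"ap1": -41, "ap2": -69})  # lands near (0, 0)
```

If an AP's transmit power changed after the survey, its RSSI term would skew every distance here, which is why LAAFU first filters altered APs via subset sampling.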

Journal ArticleDOI
TL;DR: This study shows that power usage contains valuable and sensitive user information, demonstrating a virtual occupancy sensing approach with minimal system calibration and setup that, motivated by difficulties in ground-truth collection, works even with limited or no training data.
Abstract: Occupancy detection for buildings is crucial to improving energy efficiency, user comfort, and space utility. However, existing methods require dedicated system setup, continuous calibration, and frequent maintenance. With the instrumentation of electricity meters in millions of homes and offices, power measurement presents a unique opportunity for a non-intrusive and cost-effective way to detect occupant presence. This study develops solutions to the problems when no data or limited data is available for training, as motivated by difficulties in ground truth collection. Experimental evaluations on data from both residential and commercial buildings indicate that the proposed methods for binary occupancy detection are nearly as accurate as models learned with sufficient data, with accuracies of approximately 78 to 93 percent for residences and 90 percent for offices. This study shows that power usage contains valuable and sensitive user information, demonstrating a virtual occupancy sensing approach with minimal system calibration and setup.

Journal ArticleDOI
TL;DR: This work revisits the caching problem in realistic environments where moving users intermittently connect to multiple SBSs encountered at different times and introduces an optimization framework that models user movements via random walks on a Markov chain aimed at minimizing the load of the macro-cell.
Abstract: Caching popular content files at small-cell base stations (SBSs) has emerged as a promising technique to meet the overwhelming growth in mobile data demand. Despite the plethora of work in this field, a specific aspect has been overlooked. It is assumed that all users remain stationary during data transfer and therefore a complete copy of the requested file can always be downloaded by the associated SBSs. In this work, we revisit the caching problem in realistic environments where moving users intermittently connect to multiple SBSs encountered at different times. Due to connection duration limits, users may download only parts of the requested files. Requests for files that failed to be delivered on time by the SBSs are redirected to the coexisting macro-cell. We introduce an optimization framework that models user movements via random walks on a Markov chain aimed at minimizing the load of the macro-cell. As the main contribution, we put forward a distributed caching paradigm that leverages user mobility predictions and innovative information-mixing methods based on the principle of network coding. Systematic experiments based on measured traces of human mobility patterns demonstrate that our approach can offload 65 percent more macro-cell traffic than existing caching schemes in realistic settings.
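The random-walk mobility model can be made concrete with a small sketch: the long-run fraction of time a user spends under each SBS is the stationary distribution of the chain's transition matrix, which is a natural weight for cache placement. The 3-cell chain below is a made-up example, not taken from the paper:

```python
import numpy as np

def stationary(P, iters=200):
    """Stationary distribution of a row-stochastic transition matrix P,
    computed by power iteration from the uniform distribution."""
    pi = np.full(P.shape[0], 1.0 / P.shape[0])
    for _ in range(iters):
        pi = pi @ P  # one step of the chain
    return pi

# Hypothetical 3-SBS mobility chain: rows = current cell, cols = next cell.
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.3, 0.3, 0.4]])
pi = stationary(P)
print(pi)  # long-run fraction of time spent under each SBS
```

An SBS visited often in the stationary regime is a better host for popular (or network-coded) file fragments, since more of the file can be delivered before the user moves on.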

Journal ArticleDOI
TL;DR: A new network model aiming at improving user experience by pushing the scheduling problem to the task layer is proposed and results indicate that the scheduling policy can significantly improve QoE.
Abstract: The past decade witnessed the dramatic evolution from Quality of Service (QoS) to Quality of Experience (QoE) in the design of wireless networks, especially on the aspect of link scheduling. In many applications, end users are concerned more about the transmission quality of an individual task rather than the quality of a link, where a task may refer to a piece of music, video, etc. and may include many packets. This paper proposes a new network model aiming at improving user experience by pushing the scheduling problem to the task layer. A novel QoE requirement, the ratio requirement, is designed to generalize the QoS requirements of a task. Following this design, a corresponding scheduling policy is proposed to satisfy it for each task and thus reach an application-aware transmission allocation. We theoretically analyze the performance of the scheduling policy, and discuss the design of an optimal solution and the impact of the QoE requirements. Finally, the simulation results indicate that our scheduling policy can significantly improve QoE.
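A deficit-counter scheduler is one simple way to enforce such per-task ratio requirements. This sketch, with made-up ratios and one served task per slot, is only an illustration of the idea, not the paper's actual policy:

```python
def largest_deficit_first(ratios, slots):
    """Serve one task per slot; task i asks to be served in at least
    ratios[i] fraction of slots. Each slot, every task accrues its ratio
    as deficit; the task with the largest deficit is served."""
    deficits = [0.0] * len(ratios)
    served = [0] * len(ratios)
    for _ in range(slots):
        deficits = [d + q for d, q in zip(deficits, ratios)]
        i = max(range(len(ratios)), key=lambda k: deficits[k])
        deficits[i] -= 1.0  # one unit of service granted
        served[i] += 1
    return served

# Three hypothetical tasks requiring 50%, 30%, and 20% of the slots.
print(largest_deficit_first([0.5, 0.3, 0.2], 100))
```

Because deficits grow exactly at the required rates, long-run service shares track the ratio requirements, which is the kind of task-level guarantee the abstract describes.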

Journal ArticleDOI
TL;DR: This paper proposes BEACON, which is a Budget fEAsible and strategy-proof incentive mechanism for weighted COverage maximizatioN in mobile crowdsensing, and employs a novel monotonic and computationally tractable approximation algorithm for sensing task allocation.
Abstract: Mobile crowdsensing is a novel paradigm to collect sensing data and extract useful information about regions of interest. It widely employs incentive mechanisms to recruit a number of mobile users to fulfill coverage requirements in the interested regions. In practice, sensing service providers face a pressing optimization problem: How to maximize the valuation of the covered interested regions under a limited budget? However, the relation between two important factors, i.e., Coverage Maximization and Budget Feasibility, has not been fully studied in existing incentive mechanisms for mobile crowdsensing. Furthermore, the existing approaches to coverage maximization in sensor networks can hardly work when mobile users are rational and selfish. In this paper, we present the first in-depth study on the coverage problem for incentive-compatible mobile crowdsensing, and propose BEACON, which is a Budget fEAsible and strategy-proof incentive mechanism for weighted COverage maximizatioN in mobile crowdsensing. BEACON employs a novel monotonic and computationally tractable approximation algorithm for sensing task allocation, and adopts a newly designed proportional share rule based compensation determination scheme to guarantee strategy-proofness and budget feasibility. Our theoretical analysis shows that BEACON can achieve strategy-proofness, budget feasibility, and a constant-factor approximation. We deploy a noise map crowdsensing system to capture the noise level in a selected campus, and evaluate the system performance of BEACON on the collected sensory data. Our evaluation results demonstrate the efficacy of BEACON.
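The proportional-share idea can be sketched in a few lines: greedily admit users in decreasing value-per-cost order while each bid stays within a proportional share of the budget. The bids and values below are hypothetical, and this is a simplified Singer-style sketch of budget-feasible selection, not BEACON itself:

```python
def proportional_share_winners(bids, values, budget):
    """Greedy winner selection for a budget-feasible mechanism: sort users
    by coverage value per unit cost; admit a user only while her bid does
    not exceed her proportional share of the budget."""
    order = sorted(range(len(bids)), key=lambda i: values[i] / bids[i], reverse=True)
    winners, total_value = [], 0.0
    for i in order:
        if bids[i] <= budget * values[i] / (total_value + values[i]):
            winners.append(i)
            total_value += values[i]
    return winners

# Three hypothetical users: (bid, coverage value) pairs under budget 8.
print(proportional_share_winners([2, 3, 10], [6, 5, 4], budget=8))
```

The proportional-share cap is what makes the payment rule budget feasible: no admitted user can extract more than her fractional contribution to the total covered value.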

Journal ArticleDOI
TL;DR: This paper formalizes the problem and derive an optimal inference algorithm that incorporates co-location information, yet at the cost of high complexity, and proposes some approximate inference algorithms, including a solution that relies on the belief propagation algorithm executed on a general Bayesian network model.
Abstract: Co-location information about users is increasingly available online. For instance, mobile users more and more frequently report their co-locations with other users in the messages and in the pictures they post on social networking websites by tagging the names of the friends they are with. The users’ IP addresses also constitute a source of co-location information. Combined with (possibly obfuscated) location information, such co-locations can be used to improve the inference of the users’ locations, thus further threatening their location privacy: As co-location information is taken into account, not only a user's reported locations and mobility patterns can be used to localize her, but also those of her friends (and the friends of their friends and so on). In this paper, we study this problem by quantifying the effect of co-location information on location privacy, considering an adversary such as a social network operator that has access to such information. We formalize the problem and derive an optimal inference algorithm that incorporates such co-location information, yet at the cost of high complexity. We propose some approximate inference algorithms, including a solution that relies on the belief propagation algorithm executed on a general Bayesian network model, and we extensively evaluate their performance. Our experimental results show that, even in the case where the adversary considers co-locations of the targeted user with a single friend, the median location privacy of the user is decreased by up to 62 percent in a typical setting. We also study the effect of the different parameters (e.g., the settings of the location-privacy protection mechanisms) in different scenarios.
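The core privacy leak can be shown with a toy Bayesian update over a discretized map: a reported co-location forces the target and the friend into the same cell, so their location beliefs multiply cell-wise. The 4-cell map and the belief vectors are invented for illustration and are far simpler than the paper's Bayesian-network model:

```python
def colocation_posterior(prior, friend_belief):
    """Adversary's update after a co-location report: the target must share
    a cell with the friend, so multiply beliefs cell-wise and renormalize."""
    joint = [p * q for p, q in zip(prior, friend_belief)]
    z = sum(joint)
    return [x / z for x in joint]

# Hypothetical 4-cell map: the target obfuscates (uniform prior), but the
# friend's location is known much more precisely.
prior = [0.25, 0.25, 0.25, 0.25]
friend = [0.70, 0.10, 0.10, 0.10]
print(colocation_posterior(prior, friend))
```

Even though the target reported nothing precise, her posterior collapses onto the friend's belief, which is exactly the privacy-loss effect the paper quantifies.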

Journal ArticleDOI
Arash Asadi, Vincenzo Mancuso1
TL;DR: Analytical models for the proposed channel-opportunistic architecture are provided and the impact of several payoff distribution methods commonly adopted in the literature on coalitional game theory are studied.
Abstract: We introduce a channel-opportunistic architecture that enhances the user experience in terms of throughput, fairness, and energy efficiency. Our proposed architecture leverages D2D communication and it is built on top of the forthcoming D2D features of 5G networks. In particular, we focus on outband D2D where cellular users are allowed to exploit both cellular (i.e., LTE-A) and WLAN (i.e., WiFi Direct) technologies to establish a D2D connection. In this architecture, cellular users form clusters, in which only the user with the best channel condition communicates with the base station on behalf of the entire cluster. Within the cluster, the unlicensed spectrum is utilized to relay traffic. In this article, we provide analytical models for the proposed system and study the impact of several payoff distribution methods commonly adopted in the literature on coalitional game theory. We then introduce an operator-controlled relay protocol based on the D2D features of LTE-A and WiFi Direct, and demonstrate the feasibility and the advantages of D2D-assisted cellular communication with our SDR prototype.

Journal ArticleDOI
TL;DR: The paper shows that for a wide range of parameters, a game where each user independently sets its offloading decisions always has a pure Nash equilibrium, and a Gauss-Seidel-like method for determining this equilibrium is introduced.
Abstract: This paper considers a set of mobile users that employ cloud-based computation offloading . In order to execute jobs in the cloud, the user uploads must occur over a base station channel that is shared by all of the uploading users. Since the job completion times are subject to hard deadline constraints, this restricts the feasible set of jobs that can be processed. The system is modelled as a competitive game in which each user is interested in minimizing its own energy consumption. The game is subject to the real-time constraints imposed by the job execution deadlines, user specific channel bit rates, and the competition over the shared communication channel. The paper shows that for a wide range of parameters, a game where each user independently sets its offloading decisions always has a pure Nash equilibrium, and a Gauss-Seidel-like method for determining this equilibrium is introduced. Results are presented that illustrate that the system always converges to a Nash equilibrium using the Gauss-Seidel method. Data is also presented that show the number of iterations required, and the quality of the solutions. We find that the solutions perform well compared to a lower bound on total energy performance.
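A Gauss-Seidel best-response iteration can be sketched on a toy offloading game in which the shared channel makes the offloading cost grow with the number of offloaders. The cost model here is a deliberate simplification of the paper's deadline-constrained energy game:

```python
def gauss_seidel_equilibrium(local_cost, base_cost, congestion, n_iters=50):
    """Best-response (Gauss-Seidel) iteration for a toy offloading game:
    offloading costs base_cost + congestion * (#offloaders); computing
    locally costs local_cost[i]. Users update one at a time until no one
    wants to deviate, i.e., a pure Nash equilibrium."""
    n = len(local_cost)
    offload = [False] * n
    for _ in range(n_iters):
        changed = False
        for i in range(n):
            others = sum(offload) - offload[i]
            best = base_cost + congestion * (others + 1) < local_cost[i]
            if best != offload[i]:
                offload[i] = best
                changed = True
        if not changed:
            return offload  # fixed point: pure Nash equilibrium
    return offload

# Four hypothetical users with different local-execution costs.
print(gauss_seidel_equilibrium([5.0, 4.0, 2.0, 10.0], base_cost=1.0, congestion=1.0))
```

Users with cheap local execution stay local, and the channel congestion level self-limits the offloading set, mirroring how the shared upload channel shapes the equilibrium in the paper's game.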

Journal ArticleDOI
TL;DR: This paper proposes a scheme, called DV-maxHop, which reaches comparable accuracy quickly by utilizing a simpler, practical, and proven variant of the DV-Hop algorithm, and introduces the formulation and simulation of multi-objective optimization to obtain the optimal solution.
Abstract: Location awareness is a fundamental requirement in many Internet of Things (IoT) and other wireless sensor applications. The information transmitted by an individual entity or node is of limited use without the knowledge of its location. Research in this area is mostly geared towards multi-hop range-free localization algorithms, as they utilize only connectivity (neighbor) information. This work focuses on anchor-based, range-free localization algorithms, particularly in anisotropic networks. We observe that the pioneering Distance Vector Hop (DV-Hop) algorithm, which provides accurate estimation in isotropic networks, can be enhanced to compute location estimates for anisotropic networks with similar or comparable accuracy. The recently proposed algorithms for anisotropic networks are complex, with communication and computational overheads. These algorithms may also be overkill for several location-dependent protocols and applications. This paper proposes a scheme, called DV-maxHop, which reaches comparable accuracy quickly by utilizing a simpler, practical, and proven variant of the DV-Hop algorithm. We evaluate the performance of our scheme using extensive simulation on several topologies under the effect of multiple anisotropic factors, such as the existence of obstacles, sparse and non-uniform sensor distribution, and irregular radio propagation patterns. Even for isotropic networks, our scheme outperformed recent algorithms with lower computational overheads as well as reduced energy and communication cost due to its faster convergence. We also introduce the formulation and simulation of multi-objective optimization to obtain the optimal solution.

Journal ArticleDOI
TL;DR: An energy-efficient framework for high-precision multi-target-adaptive device-free localization (E-HIPA), which demands fewer transceivers and applies compressive sensing (CS) theory to guarantee high localization accuracy with fewer RSS change measurements; the validity of the proposed CS-based framework problem formulation is theoretically proven.
Abstract: Device-free localization (DFL), which does not require any devices to be attached to target(s), has become an appealing technology for many applications, such as intrusion detection and elderly monitoring. To achieve high localization accuracy, most recent DFL methods rely on collecting a large number of received signal strength (RSS) changes distorted by target(s). Consequently, the incurred high energy consumption renders them infeasible for resource-constrained networks, such as wireless sensor networks. This paper introduces an energy-efficient framework for high-precision multi-target-adaptive device-free localization (E-HIPA). Compared with the existing methods, E-HIPA demands fewer transceivers and applies compressive sensing (CS) theory to guarantee high localization accuracy with fewer RSS change measurements. The motivation behind the proposed E-HIPA is the sparse nature of multi-target locations in the spatial domain. Before taking advantage of this intrinsic sparseness, we theoretically prove the validity of the proposed CS-based problem formulation. Based on the formulation, the proposed E-HIPA primarily includes an adaptive orthogonal matching pursuit (AOMP) algorithm, by which it is capable of recovering the precise location vector with high probability, even for the more practical scenario where the target number is unknown. Experimental results on a real testbed demonstrate that, compared with the previous state-of-the-art solutions, i.e., the RTI, SCPL, and RASS approaches, E-HIPA reduces energy consumption by up to 69 percent with meter-level localization accuracy.
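The orthogonal matching pursuit at the heart of AOMP can be sketched as follows, without the adaptive target-number estimation. The random dictionary here merely stands in for the real per-cell RSS-change signatures:

```python
import numpy as np

def omp(Phi, y, sparsity):
    """Orthogonal matching pursuit: greedily pick the dictionary column most
    correlated with the residual, then re-fit all picked columns by least
    squares before computing the next residual."""
    residual, support = y.astype(float), []
    x = np.zeros(Phi.shape[1])
    for _ in range(sparsity):
        support.append(int(np.argmax(np.abs(Phi.T @ residual))))
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x[support] = coef
    return x

# Hypothetical dictionary: columns are RSS-change signatures of grid cells;
# the measurement y is generated by two targets (a 2-sparse location vector).
rng = np.random.default_rng(0)
Phi = rng.standard_normal((20, 40))
Phi /= np.linalg.norm(Phi, axis=0)
truth = np.zeros(40)
truth[[3, 17]] = [1.5, -2.0]
y = Phi @ truth
print(np.nonzero(omp(Phi, y, 2))[0])  # recovered target cells
```

Only 20 measurements recover a 40-cell location vector here, which is the energy-saving leverage of the CS formulation: fewer RSS-change measurements than grid cells.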

Journal ArticleDOI
TL;DR: Performance evaluation by simulation shows that the proposed Minimum Delay Routing Algorithm can achieve a substantial reduction in delay compared with the geocast-routing approach, and its performance is close to the flooding-based Epidemic algorithm, while the solution maintains only a single copy of the message.
Abstract: For disconnected Vehicular Ad hoc NETworks (VANETs), the carry-and-forward mechanism is promising to ensure the delivery success ratio at the cost of a longer delay, as the vehicle travel speed is much lower than the wireless signal propagation speed. Estimating delay is critical to selecting paths with low delay, and is also challenging given the random topology, high mobility, and the difficulty of keeping the message propagating along the selected path. In this paper, we first propose a simple yet effective propagation strategy considering bidirectional vehicle traffic for two-dimensional VANETs, so the opposite-direction vehicles can be used to accelerate the message propagation and the message can largely follow the selected path. Focusing on the propagation delay, an analytical framework is developed to quantify the expected path delay. Using the analytical model, a source node can apply the shortest-path algorithm to select the path with the lowest expected delay. Performance evaluation by simulation shows that, when the vehicle density is uneven but known, the proposed Minimum Delay Routing Algorithm can achieve a substantial reduction in delay compared with the geocast-routing approach, and its performance is close to the flooding-based Epidemic algorithm, while our solution maintains only a single copy of the message.
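Once the analytical model supplies an expected delay for every road segment, path selection at the source reduces to a shortest-path computation. A Dijkstra sketch over a made-up intersection graph (the delays are invented, not from the paper's model):

```python
import heapq

def min_delay_path(graph, src, dst):
    """Dijkstra over a road graph whose edge weights are expected
    carry-and-forward delays (seconds) supplied by the analytical model."""
    dist = {src: 0.0}
    prev, pq = {}, [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in graph.get(u, []):
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                prev[v] = u
                heapq.heappush(pq, (d + w, v))
    path, node = [], dst
    while node != src:
        path.append(node)
        node = prev[node]
    return [src] + path[::-1], dist[dst]

# Hypothetical intersections A..D with expected per-segment delays.
roads = {"A": [("B", 30.0), ("C", 10.0)],
         "C": [("B", 15.0), ("D", 40.0)],
         "B": [("D", 20.0)]}
print(min_delay_path(roads, "A", "D"))
```

The interesting part of the paper is not this textbook search but the edge weights: the analytical model turns random vehicle arrivals on each bidirectional road segment into an expected delay that the search can consume.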

Journal ArticleDOI
TL;DR: A general probabilistic model is presented to shed light on a fundamental issue: how well RSS fingerprinting based indoor localization can perform, and the interaction among accuracy, reliability, and the number of measurements in the localization process.
Abstract: Indoor localization has been an active research field for decades, where received signal strength (RSS) fingerprinting based methodology is widely adopted and induces many important localization techniques, such as the recently proposed approach of building fingerprint databases with crowdsourcing. While efforts have been dedicated to improving the accuracy and efficiency of localization, the performance of the RSS fingerprinting based methodology itself is still unknown from a theoretical perspective. In this paper, we present a general probabilistic model to shed light on a fundamental issue: how well can RSS fingerprinting based indoor localization perform? Concretely, we present the probability that a user can be localized in a region of a certain size. We reveal the interaction among accuracy, reliability, and the number of measurements in the localization process. Moreover, we present the optimal fingerprints reporting strategy that can achieve the best localization accuracy for a given reliability and number of measurements, which provides a design guideline for RSS fingerprinting based indoor localization systems. Further, we analyze the influence of imperfect database information on the reliability of localization, and find that the impact of imperfect information remains under control with a reasonable number of samples when building the database.
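The accuracy/reliability/measurement-count interaction can be illustrated with a 1-D Monte-Carlo sketch. Gaussian measurement noise and the specific parameters are assumptions made for this illustration, not the paper's general model:

```python
import random
import statistics

def localization_reliability(n_measurements, accuracy, noise=2.0, trials=2000):
    """Estimate the probability (reliability) that averaging n noisy 1-D
    RSS-derived position readings lands within `accuracy` metres of the
    true position (taken as 0 here)."""
    random.seed(1)  # reproducible sketch
    hits = 0
    for _ in range(trials):
        est = statistics.fmean(random.gauss(0.0, noise)
                               for _ in range(n_measurements))
        hits += abs(est) <= accuracy
    return hits / trials

# Reliability at fixed accuracy grows with the number of measurements.
for n in (1, 4, 16):
    print(n, localization_reliability(n, accuracy=1.0))
```

Fixing any two of accuracy, reliability, and measurement count pins down the third, which is the trade-off surface the paper characterizes analytically rather than by simulation.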

Journal ArticleDOI
TL;DR: Pallas is a self-bootstrapping system for fine-grained passive indoor localization using non-intrusive WiFi monitors that uses off-the-shelf access point hardware to opportunistically capture WiFi packets to infer the location of smartphones in the indoor environment.
Abstract: Passive indoor localization for smartphones requires no explicit cooperation of the smartphone and enables a new spectrum of applications such as passive user tracking, mobility monitoring, social pattern analysis, etc. However, existing passive localization methods either achieve coarse-grained localization accuracy or require expensive infrastructure support. In this paper, we present Pallas, a self-bootstrapping system for fine-grained passive indoor localization using non-intrusive WiFi monitors. Pallas uses off-the-shelf access point hardware to opportunistically capture WiFi packets to infer the location of smartphones in the indoor environment. The key novelty of Pallas lies in that the passive fingerprint database for localization is automatically constructed and updated without any active participation of WiFi devices or manual calibration. To achieve this, Pallas first identifies passive landmarks that are present in WiFi RSS traces. Given the knowledge of the indoor floor plan and the locations of the WiFi monitors, Pallas statistically maps the collected RSS traces to specific indoor pathways. With sufficient mappings opportunistically detected, Pallas is able to bootstrap a fine-grained passive fingerprint database and build Gaussian processes for localization automatically, without requiring any additional calibration effort.