scispace - formally typeset

Showing papers by "Shaojie Tang published in 2014"


Proceedings ArticleDOI
08 Jul 2014
TL;DR: FCC, a device-Free Crowd Counting approach based on Channel State Information (CSI), is presented and a metric, the Percentage of nonzero Elements (PEM) in the dilated CSI Matrix is proposed, which can be explicitly formulated by the Grey Verhulst Model.
Abstract: Crowd counting, which counts or accurately estimates the number of human beings within a region, is critical in many applications, such as guided tours and crowd control. A crowd counting solution should be scalable and minimally intrusive (i.e., device-free) to users. Image-based solutions are device-free, but cannot work well in a dim or dark environment. Non-image-based solutions usually require every human being to carry a device, and are inaccurate and unreliable in practice. In this paper, we present FCC, a device-Free Crowd Counting approach based on Channel State Information (CSI). Our design is motivated by our observation that CSI is highly sensitive to environment variation, like a frog eye. We theoretically discuss the relationship between the number of moving people and the variation of wireless channel state. A major challenge in our design of FCC is to find a stable monotonic function to characterize the relationship between the crowd number and various features of CSI. To this end, we propose a metric, the Percentage of nonzero Elements (PEM), in the dilated CSI matrix. The monotonic relationship can be explicitly formulated by the Grey Verhulst Model, which is used for crowd counting without a labor-intensive site survey. We implement FCC using off-the-shelf IEEE 802.11n devices and evaluate its performance via extensive experiments in typical real-world scenarios. Our results demonstrate that FCC outperforms the state-of-the-art approaches with much better accuracy, scalability and reliability.
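The PEM computation described above can be sketched in a few lines; the noise threshold and the 3x3 dilation window below are illustrative assumptions, not the paper's actual parameters.

```python
def pem(csi_matrix, noise_threshold=0.1):
    """Percentage of nonzero Elements in a dilated binary CSI matrix.

    csi_matrix: 2-D list of CSI amplitude-variation values
    (rows = subcarriers, columns = time samples).
    """
    rows, cols = len(csi_matrix), len(csi_matrix[0])
    # 1. Binarize: mark cells whose variation exceeds the noise floor.
    binary = [[1 if abs(v) > noise_threshold else 0 for v in row]
              for row in csi_matrix]
    # 2. Dilate with a 3x3 structuring element to join nearby activity.
    dilated = [[0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            if any(binary[x][y]
                   for x in range(max(0, i - 1), min(rows, i + 2))
                   for y in range(max(0, j - 1), min(cols, j + 2))):
                dilated[i][j] = 1
    # 3. PEM = fraction of nonzero elements after dilation.
    return sum(map(sum, dilated)) / (rows * cols)
```

A larger crowd perturbs more subcarrier-time cells, so PEM grows with crowd size; the Grey Verhulst Model then fits that monotonic relationship.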

376 citations


Proceedings ArticleDOI
01 Dec 2014
TL;DR: It is shown that with off-the-shelf WiFi devices, fine-grained sleep information like a person's respiration, sleeping postures and rollovers can be successfully extracted.
Abstract: Is it possible to leverage WiFi signals collected in bedrooms to monitor a person's sleep? In this paper, we show that with off-the-shelf WiFi devices, fine-grained sleep information like a person's respiration, sleeping postures and rollovers can be successfully extracted. We do this by introducing Wi-Sleep, the first sleep monitoring system based on WiFi signals. Wi-Sleep adopts off-the-shelf WiFi devices to continuously collect the fine-grained wireless channel state information (CSI) around a person. From the CSI, Wi-Sleep extracts rhythmic patterns associated with respiration and abrupt changes due to body movement. Compared to existing sleep monitoring systems that usually require special devices attached to the human body (e.g., probes, head belts, and wrist bands), Wi-Sleep is completely contactless. In addition, different from many vision-based sleep monitoring systems, Wi-Sleep is robust to low-light environments and does not raise privacy concerns. Preliminary testing results show that Wi-Sleep can reliably track a person's respiration and sleeping postures in different conditions.

278 citations


Proceedings ArticleDOI
01 Jan 2014
TL;DR: In this paper, the authors focus on the optimized placement of VMs to minimize the cost, the combination of N-cost and PM-cost, and prove it to be NP-hard.
Abstract: As tenants take networked virtual machines (VMs) as their requirements, effective placement of VMs is needed to reduce the network cost in cloud data centers. Cost is one of the major concerns for cloud providers. In addition to the cost caused by network traffic (N-cost), the cost caused by the utilization of physical machines (PM-cost) is also non-negligible. In this paper, we focus on the optimized placement of VMs to minimize the total cost, the combination of N-cost and PM-cost. We define N-cost by various functions, according to different communication models. We formulate the placement problem and prove it to be NP-hard. We investigate the problem from two aspects. First, we put a special emphasis on minimizing the N-cost with fixed PM-cost. For the case where tenants request the same number of VMs, we present optimal algorithms under various definitions of N-cost. For the case where tenants require different numbers of VMs, we propose an approximation algorithm. Also, a greedy algorithm is implemented as the baseline to evaluate the performance. Second, we study the general case of the VM placement problem, in which both N-cost and PM-cost are taken into account. We present an effective binary-search-based algorithm to determine how many PMs should be used, which makes a tradeoff between PM-cost and N-cost. For all of the algorithms, we conduct theoretical analysis and extensive simulations to evaluate their performance and efficiency.
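The binary-search idea for choosing the PM count can be sketched as follows, under a simplifying assumption of mine (not necessarily the paper's): total cost, PM-cost plus N-cost, is unimodal in the number of PMs.

```python
def best_pm_count(total_cost, lo, hi):
    """Return the PM count in [lo, hi] minimizing a unimodal total_cost.

    total_cost(m) should combine PM-cost (increasing in m) and
    N-cost (decreasing in m), so the sum has a single minimum.
    """
    while lo < hi:
        mid = (lo + hi) // 2
        if total_cost(mid) <= total_cost(mid + 1):
            hi = mid          # minimum is at mid or to its left
        else:
            lo = mid + 1      # cost still decreasing: search to the right
    return lo
```

For example, with a toy cost 10*m + 100/m (linear PM-cost, inversely decreasing N-cost), the search settles on 3 PMs.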

112 citations


Proceedings ArticleDOI
26 May 2014
TL;DR: KEEP uses a validation-recombination mechanism to obtain consistent secret keys from CSI measurements of all subcarriers and achieves high security level of the keys and fast key-generation rate.
Abstract: Device-to-device (D2D) communication is expected to become a promising technology of the next-generation wireless communication systems. Security issues have become technical barriers of D2D communication due to its "open-air" nature and lack of centralized control. Generating symmetric keys individually on different communication parties without key exchange or distribution is desirable but challenging. Recent work has proposed to extract keys from the measurement of physical layer random variations of a wireless channel, e.g., the channel state information (CSI) from orthogonal frequency-division multiplexing (OFDM). Existing CSI-based key extraction methods usually use the measurement results of individual subcarriers. However, our real-world experiment results show that CSI measurements from nearby subcarriers have strong correlations, and a generated key may have a large proportion of repeated bit segments. Hence attackers may crack the key in a relatively short time, which reduces the security level of the generated keys. In this work, we propose a fast secret key extraction protocol, called KEEP. KEEP uses a validation-recombination mechanism to obtain consistent secret keys from CSI measurements of all subcarriers. It achieves a high security level of the keys and a fast key-generation rate. We implement KEEP using off-the-shelf 802.11n devices and evaluate its performance via extensive experiments. Both theoretical analysis and experimental results demonstrate that KEEP is safer and more effective than the state-of-the-art approaches.
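For context, a minimal per-subcarrier quantizer of the kind the paper critiques (not KEEP's validation-recombination mechanism; the guard-band rule below is a common baseline, with assumed parameters) might look like:

```python
import statistics

def quantize_csi(samples, alpha=0.5):
    """Map CSI amplitude samples to key bits with a guard band.

    Values above mean + alpha*stdev become 1, values below
    mean - alpha*stdev become 0, and guard-band samples are dropped,
    mimicking typical level-crossing extractors.
    """
    mu = statistics.mean(samples)
    sigma = statistics.pstdev(samples)
    upper, lower = mu + alpha * sigma, mu - alpha * sigma
    bits = []
    for v in samples:
        if v > upper:
            bits.append(1)
        elif v < lower:
            bits.append(0)
        # guard-band samples are discarded to improve bit agreement
    return bits
```

Because nearby subcarriers are strongly correlated, running this independently per subcarrier yields repeated bit segments across subcarriers, which is exactly the weakness KEEP addresses.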

89 citations


Journal ArticleDOI
TL;DR: A passive crowdsourcing channel state information (CSI) based indoor localization scheme, C2IL, built upon an innovative method to accurately estimate the moving speed solely based on 802.11n CSI and designed a trajectory clustering based localization algorithm to provide precise real-time indoor localization and tracking.
Abstract: Numerous indoor localization techniques have been proposed recently to meet the intensive demand for location-based services (LBS). Among them, the most popular solutions are the Wi-Fi fingerprint-based approaches. The core challenge is to lower the cost of the fingerprint site survey. One trend is to collect piecewise data from clients and establish the radio map in a crowdsourcing manner. However, low participation rates block practical use. In this work, we propose a passive crowdsourcing channel state information (CSI) based indoor localization scheme, C2IL. Although it is a crowdsourcing-based approach, our scheme is totally transparent to the client, and the only requirement is to connect to our 802.11n access points (APs). C2IL is built upon an innovative method to accurately estimate the moving speed solely based on 802.11n CSI. Knowing the walking speed of a client and its surrounding APs, a graph matching algorithm is employed to extract the received signal strength (RSS) fingerprints and establish the fingerprint map. For the localization phase, we design a trajectory clustering based localization algorithm to provide precise real-time indoor localization and tracking. We develop and deploy a practical working system of C2IL in a large office environment. Extensive evaluations indicate that the error of speed estimation is within 3%, and the localization error is within 2 m 80% of the time in a very complex indoor environment.

71 citations


Journal ArticleDOI
TL;DR: A set of bidding strategies under several service-level agreement (SLA) constraints is proposed to minimize the monetary cost and volatility of resource provisioning and is able to obtain an optimal randomized bidding strategy through linear programming.
Abstract: With the recent introduction of Spot Instances in the Amazon Elastic Compute Cloud (EC2), users can bid for resources and, thus, control the balance of reliability versus monetary costs. Mechanisms and tools that deal with the cost-reliability tradeoffs under this scheme are of great value for users seeking to reduce their costs while maintaining high reliability. In this paper, we propose a set of bidding strategies under several service-level agreement (SLA) constraints. In particular, we aim to minimize the monetary cost and volatility of resource provisioning. Essentially, to derive an optimal bidding strategy, we formulate this problem as a Constrained Markov Decision Process (CMDP). Based on this model, we are able to obtain an optimal randomized bidding strategy through linear programming. Using real Spot Instance price traces and workload models, we compare several adaptive checkpointing schemes in terms of monetary costs and job completion time. We evaluate our model and demonstrate how users should bid optimally on Spot Instances to reach different objectives with desired levels of confidence.
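The cost-reliability tradeoff that a bid price controls can be illustrated with a toy trace evaluator (a hypothetical helper, not the paper's CMDP solution): the instance runs and pays the spot price whenever the spot price is at or below the bid, and is revoked otherwise.

```python
def evaluate_bid(price_trace, bid):
    """Return (hours run, total cost, interruptions) for a fixed bid
    against an hourly spot price trace."""
    hours = cost = interruptions = 0
    running = False
    for spot in price_trace:
        if spot <= bid:            # instance runs; pay the spot price
            hours += 1
            cost += spot
            running = True
        else:                      # out-bid: instance is revoked
            if running:
                interruptions += 1
            running = False
    return hours, cost, interruptions
```

Sweeping the bid over a real trace exposes the frontier the paper optimizes over: higher bids buy more uninterrupted hours at higher cost.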

40 citations


Journal ArticleDOI
TL;DR: This work formulates the online sequential channel sensing and accessing problem as a sequencing multi-armed bandit problem, and proposes a novel policy whose regret grows logarithmically in time (which is order-optimal) and polynomially in the number of channels.
Abstract: For cognitive wireless networks, one challenge is that the status and statistics of the channels' availability are difficult to predict. Numerous learning-based online channel sensing and accessing strategies have been proposed to address this challenge. In this work, we propose a novel channel sensing and accessing strategy that carefully balances channel statistics exploration and multichannel diversity exploitation. Unlike traditional MAB-based approaches, in our scheme, a secondary cognitive radio user will sequentially sense the status of multiple channels in a carefully designed order. We formulate the online sequential channel sensing and accessing problem as a sequencing multi-armed bandit problem, and propose a novel policy whose regret grows logarithmically in time (which is order-optimal) and polynomially in the number of channels. We conduct extensive simulations to compare the performance of our method with traditional MAB-based approaches. Simulation results show that the proposed scheme improves the throughput by more than 30% and speeds up the learning process by more than 100%.
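In the spirit of the sequencing policy, the sketch below orders channels by a standard UCB1 index (an assumption of this sketch; the paper's actual index and analysis differ) and senses them sequentially in that order until an idle channel is found.

```python
import math, random

def sequential_sense(channel_idle_prob, slots, rng=random.Random(1)):
    """Simulate sequential sensing; returns the number of slots in
    which an idle channel was found and accessed."""
    n = len(channel_idle_prob)
    counts = [0] * n          # times channel i was sensed
    idle_hits = [0] * n       # times channel i was found idle
    successes = 0
    for t in range(1, slots + 1):
        # UCB index: empirical idle rate plus an exploration bonus.
        def ucb(i):
            if counts[i] == 0:
                return float('inf')
            return idle_hits[i] / counts[i] + math.sqrt(2 * math.log(t) / counts[i])
        order = sorted(range(n), key=ucb, reverse=True)
        for i in order:       # sense sequentially in the designed order
            counts[i] += 1
            idle = rng.random() < channel_idle_prob[i]
            if idle:
                idle_hits[i] += 1
                successes += 1
                break         # access the first idle channel found
    return successes
```

Sensing in index order, rather than committing to a single arm per slot, is what lets the policy exploit multichannel diversity while it learns.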

36 citations


Proceedings ArticleDOI
01 Jan 2014

34 citations


Journal ArticleDOI
TL;DR: It is proved that, for the type-sensitive perfectly compressible functions and type-threshold perfectly compressible functions, the aggregation capacities for random extended WSNs with n nodes are of order Θ((log n)^(−β/2−1)) and Θ((log n)^(−β/2)/(log log n)), respectively, where β > 2 denotes the power attenuation exponent in the generalized physical model.
Abstract: A critical function of wireless sensor networks (WSNs) is data gathering. One is often only interested in collecting a specific function of the sensor measurements at a sink node, rather than downloading all the raw data from all the sensors. In this paper, we study the capacity of computing and transporting the specific functions of sensor measurements to the sink node, called aggregation capacity, for WSNs. We focus on random WSNs that can be classified into two types: random extended WSNs and random dense WSNs. All existing results about aggregation capacity are studied for dense WSNs, including random cases and arbitrary cases, under the protocol model (ProM) or physical model (PhyM). In this paper, we propose the first aggregation capacity scaling laws for random extended WSNs. We point out that unlike random dense WSNs, for random extended WSNs, the assumption made in ProM and PhyM that each successful transmission can sustain a constant rate is over-optimistic and impractical due to transmit power limitation. We derive the first result on aggregation capacity for random extended WSNs under the generalized physical model. Particularly, we prove that, for the type-sensitive divisible perfectly compressible functions and type-threshold divisible perfectly compressible functions, the aggregation capacities for random extended WSNs with n nodes are of order Θ((log n)^(−α/2−1)) and Θ((log n)^(−α/2)/(log log n)), respectively, where α > 2 denotes the power attenuation exponent in the generalized physical model. Furthermore, we improve the aggregation throughput for general divisible perfectly compressible functions to Ω((log n)^(−α/2)) by choosing Θ(log n) sensors from a small region (relative to the whole region) as sink nodes.

32 citations


Journal ArticleDOI
TL;DR: This paper presents a class of localized scheduling algorithms with provable throughput guarantees subject to physical interference constraints, and claims that the algorithm in the oblivious power setting is the first localized algorithm that achieves at least a constant fraction of the optimal capacity region subject to physical interference constraints.
Abstract: We study throughput-optimum localized link scheduling in wireless networks. The majority of results on link scheduling assume binary interference models that simplify interference constraints in actual wireless communication. While the physical interference model reflects the physical reality more precisely, the problem becomes notoriously harder under the physical interference model. There have been just a few existing results on link scheduling under the physical interference model, and even fewer on more practical distributed or localized scheduling. In this paper, we tackle the challenges of localized link scheduling posed by the complex physical interference constraints. By integrating the partition and shifting strategies into the pick-and-compare scheme, we present a class of localized scheduling algorithms with provable throughput guarantee subject to physical interference constraints. The algorithm in the oblivious power setting is the first localized algorithm that achieves at least a constant fraction of the optimal capacity region subject to physical interference constraints. The algorithm in the uniform power setting is the first localized algorithm with a logarithmic approximation ratio to the optimal solution. Our extensive simulation results demonstrate performance efficiency of our algorithms.

29 citations


Journal ArticleDOI
30 Jun 2014-Sensors
TL;DR: The proposed iLoc is an infrastructure-free, in-vehicle, cooperative positioning system via smartphones that uses only embedded sensors in smartphones to determine the phones' seat-level locations in a car.
Abstract: Seat-level positioning of a smartphone in a vehicle can provide a fine-grained context for many interesting in-vehicle applications, including driver distraction prevention, driving behavior estimation, in-vehicle services customization, etc. However, most of the existing work on in-vehicle positioning relies on special infrastructure, such as the stereo, cigarette lighter adapter or OBD (on-board diagnostic) adapter. In this work, we propose iLoc, an infrastructure-free, in-vehicle, cooperative positioning system via smartphones. iLoc does not require any extra devices and uses only embedded sensors in smartphones to determine the phones' seat-level locations in a car. In iLoc, in-vehicle smartphones automatically collect data during certain kinds of events and cooperatively determine the relative left/right and front/back locations. In addition, iLoc is tolerant to noisy data and possible sensor errors. We evaluate the performance of iLoc using experiments conducted in real driving scenarios. Results show that the positioning accuracy can reach 90% in the majority of cases and around 70% even in the worst cases.

Proceedings ArticleDOI
11 Aug 2014
TL;DR: This work proposes a three-layered system model to formulate data dissemination sessions for social applications in OSNs and derives the traffic load of OSNs under a realistic assumption that every source sustains a data generating rate of constant order.
Abstract: In this paper, we model the data dissemination in online social networks (OSNs) and study the scaling laws of traffic load. We propose a three-layered system model to formulate data dissemination sessions for social applications in OSNs. The layered model consists of the physical network layer, social relationship layer, and application session layer. By analyzing mutual relevances among these three layers, we investigate the geographical distribution feature of dissemination sessions in OSNs. Based on this, we derive the traffic load of OSNs under a realistic assumption that every source sustains a data generating rate of constant order. To the best of our knowledge, this is the first work to address the issue of traffic load scaling for OSNs by modeling the social data dissemination from a layered perspective.

Proceedings ArticleDOI
08 Jul 2014
TL;DR: This paper re-defines 'coverage' and, based on the new coverage model, proposes two methods to partition the deployed sensor nodes into qualified cover sets such that the system lifetime can be maximized by letting these sets work in turns.
Abstract: Wireless sensor networks (WSNs) are generally used to monitor certain phenomena in an area, which can be events or targets that users are interested in. To extend the system lifetime, a widely used technique is 'Energy-Efficient Coverage-Preserving Scheduling (EECPS)', in which, at any time, only part of the nodes are activated to fulfill the function. Determining which nodes should be activated at a certain time is the key to EECPS, and this problem has been studied extensively. Existing solutions are based on the assumption that each node has a fixed coverage area, and once an event/target occurs in this area, it can be detected by this sensor. However, this coverage model is not always valid. In some applications such as structural health monitoring (SHM) and volcano monitoring, fulfilling a required function always requires low-level collaboration among multiple sensors. The coverage area of an individual sensor node therefore cannot be defined explicitly, since a single sensor is not able to fulfill the function alone, even if it is close to the event or target to be monitored. In this paper, using an example of SHM, we illustrate how to support EECPS in some special applications of WSNs. We re-define 'coverage' and, based on the new coverage model, propose two methods to partition the deployed sensor nodes into qualified cover sets such that the system lifetime can be maximized by letting these sets work in turns. The performance of the methods is demonstrated through extensive simulation and experiment.
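The set-level notion of coverage can be illustrated with a simple greedy partition (illustrative only, not one of the paper's two methods); the qualification test here, jointly covering all targets, is a stand-in for the SHM-specific criterion.

```python
def partition_cover_sets(node_coverage, targets):
    """Greedily partition nodes into disjoint qualified cover sets.

    node_coverage: dict node -> set of targets it helps cover.
    A set of nodes is 'qualified' collectively, not per-node.
    """
    remaining = set(node_coverage)
    cover_sets = []
    while remaining:
        chosen, covered = [], set()
        # Consider larger contributors first; break ties by name.
        for node in sorted(remaining,
                           key=lambda n: (-len(node_coverage[n]), n)):
            if covered >= targets:
                break
            if node_coverage[node] - covered:   # node adds new coverage
                chosen.append(node)
                covered |= node_coverage[node]
        if covered >= targets:                  # qualified set: commit it
            remaining -= set(chosen)
            cover_sets.append(chosen)
        else:
            break          # leftover nodes cannot form a qualified set
    return cover_sets
```

Each returned set can then be activated in turn, keeping the required function alive while the other sets sleep.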

Proceedings ArticleDOI
11 Aug 2014
TL;DR: AEGIS is proposed, which is the first framework of unknown combinatorial auction mechanisms for heterogeneous spectrum redistribution and achieves much better performance than the state-of-the-art mechanisms.
Abstract: With the growing deployment of wireless communication technologies, radio spectrum is becoming a scarce resource. Auctions are believed to be among the most effective tools to solve or relieve the problem of radio spectrum shortage. However, designing a practical spectrum auction mechanism has to consider five major challenges: strategic behaviors of unknown users, channel heterogeneity, preference diversity, channel spatial reusability, and social welfare maximization. Unfortunately, none of the existing work fully considers these five challenges. In this paper, we model the problem of heterogeneous spectrum allocation as a combinatorial auction, and propose AEGIS, which is the first framework of unknown combinatorial auction mechanisms for heterogeneous spectrum redistribution. AEGIS contains two mechanisms, namely AEGIS-SG and AEGIS-MP. AEGIS-SG is a direct revelation combinatorial spectrum auction mechanism for unknown single-minded users, achieving strategy-proofness and approximately efficient social welfare. We further design an iterative ascending combinatorial auction, namely AEGIS-MP, to adapt to the scenario with unknown multi-minded users. AEGIS-MP is implemented in a set of undominated strategies and has a good approximation ratio. We evaluate AEGIS on two practical datasets: Google Spectrum Database and GoogleWiFi. Evaluation results show that AEGIS achieves much better performance than the state-of-the-art mechanisms.

Proceedings ArticleDOI
15 Apr 2014
TL;DR: This paper proposes two mechanisms, Power based Counting (Poc) and Power based Identification (Poid), which achieve fast and accurate counting and identification by allowing neighbors to respond simultaneously to a poller.
Abstract: Counting and identifying neighboring active nodes are two fundamental operations in wireless sensor networks (WSNs). In this paper, we propose two mechanisms, Power based Counting (Poc) and Power based Identification (Poid), which achieve fast and accurate counting and identification by allowing neighbors to respond simultaneously to a poller. A key observation that motivates our design is that the power of a superposed signal increases with the number of component signals under the condition of constructive interference (CI). However, due to the phase offsets and various hardware limitations (e.g., ADC saturation), the increased superposed power exhibits dynamic and diminishing returns as the number of component signals increases. This uncertainty of phase offsets and the diminishing-returns property of the superposed power pose serious challenges to the design of both Poc and Poid. To overcome these challenges, we design delay compensation methods to reduce the phase offset of each component signal, and propose a novel probabilistic estimation technique in cooperation with CI. We implement Poc and Poid on a testbed of 1 USRP and 50 TelosB nodes. The experimental results show that the accuracy of Poc is above 97.9%, and the accuracy of Poid is above 96.5% for most cases. In addition to their high accuracy, our methods demonstrate significant advantages over the state-of-the-art solutions in terms of substantially lower energy consumption and estimation delay.
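A simplified model of the key observation (an assumption of this sketch, not Poc's estimator): under ideal constructive interference with equal-amplitude, perfectly aligned signals, amplitudes add coherently, so superposed power grows with the square of the responder count. Real phase offsets make the gain diminish, which is what Poc's probabilistic estimation compensates for.

```python
import math

def naive_count(superposed_power, single_power):
    """Estimate responder count assuming perfectly aligned signals.

    Amplitudes add coherently: P_total = (k * a)^2 = k^2 * P_single,
    so k can be read off the square root of the power ratio.
    """
    return round(math.sqrt(superposed_power / single_power))
```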

Proceedings ArticleDOI
09 Feb 2014
TL;DR: It is proved that the k-throwbox placement problem is NP-hard, and a set of greedy algorithms is proposed which can efficiently provide quality solutions for this key optimization problem in time-evolving throwbox-assisted Delay Tolerant Networks.
Abstract: Recent advances in Delay Tolerant Networks (DTNs) have overcome limitations in connectivity by relying on intermittent contacts between mobile nodes to deliver packets. However, the lack of rich contact opportunities still causes poor delivery ratios and long delays in DTN routing. One of the solutions to improve mobile DTN performance is to place additional stationary nodes, called throwboxes, to create a greater number of contact opportunities. In this paper, we study a key optimization problem in a time-evolving throwbox-assisted DTN, the k-throwbox placement problem, to answer "where should I put my k throwboxes to optimize the performance?". We model a time-evolving DTN as a weighted space-time graph which includes both spatial and temporal information. We prove that the k-throwbox placement problem is NP-hard and propose a set of greedy algorithms which can efficiently provide quality solutions. One of the proposed algorithms can guarantee a (1 − 1/e) approximation for the k-throwbox placement problem. Simulation results based on random time-evolving DTNs and real-life DTN traces demonstrate the efficiency of the proposed methods.
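The (1 − 1/e) guarantee comes from the classic greedy rule for maximizing a monotone submodular objective: k times, pick the candidate location with the largest marginal gain. The sketch below shows that skeleton with a toy contact-coverage objective standing in for the paper's space-time-graph objective.

```python
def greedy_placement(candidates, k, f):
    """Pick k locations greedily by marginal gain of objective f.

    For monotone submodular f, this achieves a (1 - 1/e)
    approximation of the optimal k-subset.
    """
    chosen = []
    for _ in range(k):
        best = max((c for c in candidates if c not in chosen),
                   key=lambda c: f(chosen + [c]) - f(chosen))
        chosen.append(best)
    return chosen

# Toy objective: number of distinct contacts covered (hypothetical data).
contacts = {'x': {1, 2}, 'y': {2, 3}, 'z': {4}}
def coverage(placed):
    return len(set().union(*(contacts[p] for p in placed))) if placed else 0
```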

Proceedings ArticleDOI
18 Dec 2014
TL;DR: It is found that systematic network coding (SNC) is more suitable for SUs' transmission than the general block-based network coding in the sense that it can reduce average per-packet delay without decreasing the throughput gain.
Abstract: In cognitive radio networks (CRNs), secondary users (SUs) may employ network coding to pursue higher throughput. However, because SUs should not interfere with high-priority primary users (PUs), the available transmission time of SUs is usually uncertain, i.e., SUs do not know how long the idle state can last. Meanwhile, existing network coding strategies generally adopt a block-based transmission scheme, implying that all packets in the same block can be decoded simultaneously only with enough coded packets collected. Therefore, the gain induced by network coding may be dramatically decreased once a block cannot be decoded due to the arrival of PUs. In this paper, for the first time, we develop an efficient network coding strategy for SUs while considering the uncertain idle durations in CRNs. To handle the uncertainty of SUs' available transmission time, we first consider how to estimate the length of the idle duration. For the case where the length of the idle duration is stochastic, we employ a confidence interval estimation (CIE) method to estimate the expected length of the idle duration. For the non-stochastic case, we utilize multi-armed bandits (MAB) to determine the idle durations sequentially. After obtaining the estimated length, we further adopt systematic network coding (SNC) in the data transmission of SUs. We find that SNC is more suitable for SUs' transmission than the general block-based network coding in the sense that it can reduce average per-packet delay without decreasing the throughput gain. However, the block size (also the proportion of uncoded packets to be sent) of SNC is hard to determine, due to the complicated correlation among the receptions at different receivers. To solve this problem, we propose an optimal block size selection algorithm for SNC (OSNC) to determine the transmission proportion of uncoded packets under a given idle duration length. Due to its low computational complexity, OSNC can be used to make an online decision on the optimal block size with small delay. Simulation results show that, compared to traditional block-based network coding and plain retransmission schemes, our proposed scheme achieves the highest performance for both stochastic and non-stochastic idle durations.

Proceedings ArticleDOI
19 Dec 2014
TL;DR: This paper designs and theoretically quantifies the influence of the partial centrality on the data forwarding performance using graph spectrum, and applies the scheme on three real opportunistic networking scenarios to show that the OFPC achieves significantly better mean delivery delay and cost compared to the state-of-the-art works.
Abstract: The social-based forwarding scheme has recently been shown to be an effective solution to improve the performance of opportunistic routing. Most of the current works focus on the globally defined node centrality, resulting in a bias towards the most popular nodes. However, these nodes may not be appropriate relay candidates for some target nodes, because they may have low importance relative to these subsets of target nodes. In this paper, to improve the opportunistic forwarding efficiency, we exploit the relative importance (called partial centrality) of a node with respect to a group of nodes. We design a new opportunistic forwarding scheme, opportunistic forwarding with partial centrality (OFPC), and theoretically quantify the influence of the partial centrality on the data forwarding performance using graph spectrum. By applying our scheme on three real opportunistic networking scenarios, our extensive evaluations show that the OFPC achieves significantly better mean delivery delay and cost compared to the state-of-the-art works, while achieving delivery ratios sufficiently close to those by Epidemic under different TTL requirements.

Journal ArticleDOI
TL;DR: This work investigates a fundamental scheduling problem of both theoretical and practical importance, called multi-task schedulability problem, to determine the maximum number of tasks that can be scheduled within their deadlines and work out such a schedule.
Abstract: In many sensor network applications, multiple data forwarding tasks usually exist with different source-destination node pairs. Due to limitations of the duty-cycling operation and interference, however, not all tasks can be guaranteed to be scheduled within their required delay constraints. We investigate a fundamental scheduling problem of both theoretical and practical importance, called the multi-task schedulability problem, i.e., given multiple data forwarding tasks, to determine the maximum number of tasks that can be scheduled within their deadlines and work out such a schedule. We formulate the multi-task schedulability problem, prove its NP-hardness, and propose an approximate algorithm with analysis of the performance bound and complexity. We further extend the proposed algorithm by explicitly altering duty cycles of certain sensor nodes so as to fully support applications with stringent delay requirements to accomplish all tasks. We then design a practical scheduling protocol based on the proposed algorithms. We conduct extensive trace-driven simulations to validate the effectiveness and efficiency of our approach with various settings.

Posted Content
TL;DR: The experiment results show that eCIS can effectively protect image privacy and meet the user's adaptive security demand, and that it reduces the system overheads by up to 4.1× to 6.8× compared with the existing CS-based image processing approach.
Abstract: Cloud-assisted image services are widely used for various applications. Due to the high computational complexity of existing image encryption technology, it is extremely challenging to provide privacy-preserving image services for resource-constrained smart devices. In this paper, we propose a novel encrypressive cloud-assisted image service scheme, called eCIS. The key idea of eCIS is to shift the high computational cost to the cloud, allowing a reduction in the complexity of the encoder and decoder on resource-constrained devices. This is done via compressive sensing (CS) techniques; compared with existing approaches, we are able to achieve privacy protection at no additional transmission cost. In particular, we design an encryption matrix that takes care of image compression and encryption simultaneously, so that the goal of our design is to minimize the mutual information of the original image and the encrypted image. In addition to the theoretical analysis that demonstrates the security properties and complexity of our system, we also conduct extensive experiments to evaluate its performance. The experiment results show that eCIS can effectively protect image privacy and meet the user's adaptive security demand. eCIS reduces the system overheads by up to 4.1× to 6.8× compared with the existing CS-based image processing approach.
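A toy illustration of the underlying idea (an assumption of this sketch, not eCIS's actual matrix design): a compressive-sensing measurement y = A x doubles as encryption when the Gaussian-like measurement matrix A is derived from a shared secret key, so compression and encryption cost a single matrix multiply on the device.

```python
import random

def cs_encrypt(signal, m, key):
    """Compress-and-encrypt a length-n signal into m < n measurements.

    The measurement matrix A is regenerated deterministically from the
    shared key, so no matrix needs to be transmitted.
    """
    rng = random.Random(key)              # key-seeded measurement matrix
    n = len(signal)
    A = [[rng.gauss(0, 1) for _ in range(n)] for _ in range(m)]
    y = [sum(a * x for a, x in zip(row, signal)) for row in A]
    return y

# The receiver rebuilds A from the same key and runs standard CS
# recovery (e.g., l1 minimization), which is omitted here.
```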

Proceedings ArticleDOI
01 Dec 2014
TL;DR: A scheme PPSA is proposed, which encrypts users' sensitive data to prevent privacy leakage from both analysts and the aggregation service provider, and fully supports selective aggregate functions for differentially private data analysis.
Abstract: Online user behavior analysis is becoming increasingly important, and offers valuable information to analysts for developing better e-commerce strategies. However, it also raises significant privacy concerns. Recently, growing efforts have been devoted to protecting the privacy of individuals while data aggregation is performed, which is a critical operation in behavior analysis. Unfortunately, existing methods allow very limited aggregation over user data, such as allowing only summation, which hardly satisfies the need of behavior analysis. In this paper, we propose a scheme PPSA, which encrypts users' sensitive data to prevent privacy leakage from both analysts and the aggregation service provider, and fully supports selective aggregate functions for differentially private data analysis. We have implemented our design and evaluated its performance using a trace-driven evaluation based on an online behavior dataset. Evaluation results show that our scheme effectively supports various selective aggregate queries with acceptable computation and communication overheads.

Proceedings ArticleDOI
08 Jul 2014
TL;DR: An elastic data routing strategy is proposed, aiming to achieve deduplication performance comparable to the state of the art while requiring much fewer computation resources.
Abstract: As a space-efficient approach to data archive and backup, data deduplication is becoming increasingly popular in storage systems. However, as data grows rapidly in data centers, a single storage node is no longer able to provide the expected throughput and capacity. Building deduplication clusters is considered a promising strategy to remove this bottleneck of single-node systems. However, deduplication relies on how much the system knows about previously stored data. A single-node system obviously has all such information and is able to detect duplicate data there; storage nodes in a cluster-based system, however, cannot know the information held by other nodes. It is nontrivial to route data intelligently enough so that the system can support deduplication performance comparable to that of a single-node system, while also at a trivial cost. In this paper, we propose an elastic data routing strategy, aiming to achieve deduplication performance comparable to the state of the art while requiring much fewer computation resources.
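One common routing idea in dedup clusters can be sketched as follows (an assumption for illustration; the paper does not specify its elastic strategy here): route each superchunk to the node that already holds the most of its chunk fingerprints, breaking ties toward the least-loaded node so new data spreads out.

```python
def route_superchunk(fingerprints, node_index, node_load):
    """Pick the target node for an incoming superchunk.

    fingerprints: set of chunk hashes in the superchunk.
    node_index:   dict node -> set of hashes it already stores.
    node_load:    dict node -> stored bytes (tie-breaker).
    """
    return max(node_index,
               key=lambda n: (len(fingerprints & node_index[n]),
                              -node_load[n]))
```

Routing by fingerprint overlap keeps duplicates on the same node, recovering most of a single node's dedup ratio without a global index.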


Proceedings ArticleDOI
07 Sep 2014
TL;DR: This work exploits the radiation pattern of an existing steerable directional panel antenna and derives angle-of-arrival (AoA) information from the energy reflected by the target tag while the antenna is rotating.
Abstract: Locating objects labeled with RFID tags is an important issue in many applications, such as warehouse management, goods management in supermarkets and finding lost objects. Some existing works use large numbers of reference tags, which require a lot of manpower to deploy. Others achieve high accuracy but rely on sophisticated equipment that is hardly available at large scale to the industry. This work exploits the radiation pattern of an existing steerable directional panel antenna and derives angle-of-arrival (AoA) information from the energy reflected by the target tag while the antenna is rotating. We use Commercial Off-The-Shelf (COTS) equipment and achieve a median position accuracy of 29 cm in our preliminary experiment.
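A minimal version of the idea (a simplification; the paper's estimator based on the full radiation pattern would be more refined): as the panel antenna rotates, the tag's backscattered power peaks when the boresight points at the tag, so a crude AoA estimate is the rotation angle of the strongest reading.

```python
def estimate_aoa(readings):
    """Estimate the angle of arrival from one rotation sweep.

    readings: list of (angle_degrees, rssi) pairs collected while
    the directional antenna rotates; returns the angle with the
    strongest reflected signal.
    """
    return max(readings, key=lambda r: r[1])[0]
```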