Showing papers presented at "Privacy Enhancing Technologies in 2011"


Book Chapter (DOI)
27 Jul 2011
TL;DR: Protocols that privately compute aggregate meter measurements over defined sets of meters are presented, allowing for fraud and leakage detection as well as network management and further statistical processing, without revealing any additional information about the individual meter readings.
Abstract: The widespread deployment of smart meters for the modernisation of the electricity distribution network, but also for gas and water consumption, has been associated with privacy concerns due to the potentially large number of measurements that reflect the consumers' behaviour. In this paper, we present protocols that can be used to privately compute aggregate meter measurements over defined sets of meters, allowing for fraud and leakage detection as well as network management and further statistical processing of meter measurements, without revealing any additional information about the individual meter readings. Thus, most of the benefits of the Smart Grid can be achieved without revealing individual data. The feasibility of the protocols has been demonstrated with an implementation on current smart meters.
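
The paper's protocols are not reproduced here, but the core trick behind many such aggregation schemes can be illustrated with pairwise cancelling masks: each pair of meters shares a random value that one adds and the other subtracts, so the aggregator learns only the sum. A minimal sketch (the dealer-style mask distribution is a simplification; deployed protocols typically derive pairwise masks from shared keys):

```python
import secrets

MOD = 2**32  # readings are aggregated modulo a fixed word size

def aggregate_with_masks(readings):
    """Mask each reading so that only the total survives summation."""
    n = len(readings)
    masked = list(readings)
    for i in range(n):
        for j in range(i + 1, n):
            r = secrets.randbelow(MOD)   # mask shared by meters i and j
            masked[i] = (masked[i] + r) % MOD
            masked[j] = (masked[j] - r) % MOD
    return masked

readings = [12, 7, 30, 5]
masked = aggregate_with_masks(readings)
assert sum(masked) % MOD == sum(readings) % MOD  # masks cancel in the sum
print(masked)  # individually, masked values look uniformly random
```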

416 citations


Book Chapter (DOI)
27 Jul 2011
TL;DR: In this paper, the authors investigate the feasibility of using usernames to trace or link multiple profiles, belonging to the same individual, across services, and show that a significant portion of users' profiles can be linked using their usernames.
Abstract: Usernames are ubiquitously used for identification and authentication purposes on web services and the Internet at large, ranging from the local-part of email addresses to identifiers in social networks. Usernames are generally alphanumerical strings chosen by the users and, by design, are unique within the scope of a single organization or web service. In this paper we investigate the feasibility of using usernames to trace or link multiple profiles across services that belong to the same individual. The intuition is that the probability that two usernames refer to the same physical person strongly depends on the "entropy" of the username string itself. Our experiments, based on usernames gathered from real web services, show that a significant portion of the users' profiles can be linked using their usernames. In collecting the data needed for our study, we also show that users tend to choose a small number of related usernames and use them across many services. To the best of our knowledge, this is the first time that usernames are considered as a source of information when profiling users on the Internet.
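
As a toy illustration of the "entropy" intuition (not the paper's estimator), one can score a username by its self-information under a character model fitted on observed usernames; rarer strings carry more identifying bits:

```python
import math
from collections import Counter

def username_information(username, corpus):
    """Bits of self-information under a smoothed character unigram model."""
    counts = Counter("".join(corpus))
    total = sum(counts.values())
    vocab = set(counts) | set(username)   # Laplace smoothing for unseen chars
    def p(c):
        return (counts[c] + 1) / (total + len(vocab))
    return -sum(math.log2(p(c)) for c in username)

corpus = ["alice", "bob99", "charlie", "dave_p", "eve2011"]  # invented sample
for u in ["bob", "xq7_zk42v"]:
    print(u, round(username_information(u, corpus), 1), "bits")
```

The higher the score, the less likely two occurrences of the username belong to different people.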

148 citations


Book Chapter (DOI)
27 Jul 2011
TL;DR: In this paper, a plug-in privacy component is put into the communication link between a smart meter and a supplier's back-end system to enable billing with time-of-use tariffs without disclosing the actual consumption profile to the supplier.
Abstract: Traditional electricity meters are replaced by Smart Meters in customers' households. Smart Meters collect fine-grained utility consumption profiles from customers, which in turn enables the introduction of dynamic, time-of-use tariffs. However, the fine-grained usage data that is compiled in this process also makes it possible to infer the inhabitants' personal schedules and habits. We propose a privacy-preserving protocol that enables billing with time-of-use tariffs without disclosing the actual consumption profile to the supplier. Our approach relies on a zero-knowledge proof based on Pedersen Commitments performed by a plug-in privacy component that is put into the communication link between the Smart Meter and the supplier's back-end system. We require no changes to the Smart Meter hardware and only small changes to the software of the Smart Meter and the back-end system. In this paper we describe the functional and privacy requirements, the specification and security proof of our solution, and give a performance evaluation of a prototype implementation.
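
The key property such protocols exploit is the additive homomorphism of Pedersen commitments: the product of commitments to individual readings is a commitment to their sum, so a supplier can verify a billed total without seeing the profile. A toy sketch with illustrative (insecure) group parameters:

```python
import secrets

# Toy group: p = 2q + 1, both prime; g and h are squares mod p, hence of
# order q. Real systems use cryptographic sizes and generators with no
# known discrete-log relation between them.
p, q = 2039, 1019
g, h = 4, 9

def commit(m, r=None):
    r = secrets.randbelow(q) if r is None else r
    return (pow(g, m % q, p) * pow(h, r, p)) % p, r

def verify(c, m, r):
    return c == (pow(g, m % q, p) * pow(h, r, p)) % p

readings = [3, 0, 7, 2]                      # per-interval consumption
commits, rands = zip(*(commit(m) for m in readings))
c_total = 1
for c in commits:
    c_total = (c_total * c) % p              # homomorphic aggregation
assert verify(c_total, sum(readings), sum(rands) % q)
```

The actual billing protocol additionally handles per-interval tariff prices and a zero-knowledge proof of correct computation, which this sketch omits.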

119 citations


Book Chapter (DOI)
27 Jul 2011
TL;DR: A novel attack based on constraint satisfaction is introduced to provide a rigorous analysis of BFE, along with guidelines for mitigating the risk of the attack; an empirical analysis with data derived from public voter records illustrates the feasibility of the attack.
Abstract: For over fifty years, "record linkage" procedures have been refined to integrate data in the face of typographical and semantic errors. These procedures are traditionally performed over personal identifiers (e.g., names), but in modern decentralized environments, privacy concerns have led to regulations that require the obfuscation of such attributes. Various techniques have been proposed to resolve the tension, including secure multi-party computation protocols; however, such protocols are computationally intensive and do not scale to real-world linkage scenarios. More recently, procedures based on Bloom filter encoding (BFE) have gained traction in various applications, such as healthcare, where they yield highly accurate record linkage results in a reasonable amount of time. Though promising, this emerging model has not been subjected to formal security analysis, which is of concern considering the sensitivity of the corresponding data. In this paper, we introduce a novel attack, based on constraint satisfaction, to provide a rigorous analysis of BFE and guidelines for mitigating the risk of the attack. In addition, we conduct an empirical analysis with data derived from public voter records to illustrate the feasibility of the attack. Our investigations show that the parameters of the BFE protocol can be configured to make it relatively resilient to the proposed attack without significant reduction in record linkage performance.
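
For readers unfamiliar with the encoding under attack, a common BFE construction maps each string's character bigrams into a Bloom filter and compares filters with the Dice coefficient, so similar names yield similar bit patterns. This is a sketch of that general scheme, not the paper's exact parameterisation:

```python
import hashlib

def bigrams(s):
    s = f"_{s.lower()}_"
    return {s[i:i + 2] for i in range(len(s) - 1)}

def bfe(s, m=100, k=4):
    """Encode a string's bigrams into an m-bit Bloom filter with k hashes."""
    bits = [0] * m
    for gram in bigrams(s):
        for i in range(k):
            d = hashlib.sha256(f"{i}:{gram}".encode()).digest()
            bits[int.from_bytes(d[:4], "big") % m] = 1
    return bits

def dice(a, b):
    inter = sum(x & y for x, y in zip(a, b))
    return 2 * inter / (sum(a) + sum(b))

print(dice(bfe("smith"), bfe("smyth")))  # high: likely a match despite a typo
print(dice(bfe("smith"), bfe("jones")))  # low
```

It is exactly this structure (bigram positions constrain the encoded string) that a constraint-satisfaction attack can exploit.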

111 citations


Book Chapter (DOI)
27 Jul 2011
TL;DR: In this article, the authors propose a systematic way to quantify users' location privacy by modeling both the location-based applications and the locationprivacy preserving mechanisms (LPPMs), and by considering a well-defined adversary model.
Abstract: Mobile users expose their location to potentially untrusted entities by using location-based services. Based on the frequency of location exposure in these applications, we divide them into two main types: Continuous and Sporadic. These two location exposure types lead to different threats. For example, in the continuous case, the adversary can track users over time and space, whereas in the sporadic case, his focus is more on localizing users at certain points in time. We propose a systematic way to quantify users' location privacy by modeling both the location-based applications and the location-privacy preserving mechanisms (LPPMs), and by considering a well-defined adversary model. This framework enables us to customize the LPPMs to the employed location-based application, in order to provide higher location privacy for the users. In this paper, we formalize localization attacks for the case of sporadic location exposure, using Bayesian inference for Hidden Markov Processes. We also quantify user location privacy with respect to the adversaries with two different forms of background knowledge: Those who only know the geographical distribution of users over the considered regions, and those who also know how users move between the regions (i.e., their mobility pattern). Using the Location-Privacy Meter tool, we examine the effectiveness of the following techniques in increasing the expected error of the adversary in the localization attack: Location obfuscation and fake location injection mechanisms for anonymous traces.
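
For the sporadic case against an adversary who knows only the geographic distribution of users, a single-exposure localization attack reduces to a Bayesian update. A minimal sketch with invented regions, prior, and LPPM obfuscation probabilities (the paper's full framework also models mobility with hidden Markov inference):

```python
regions = ["A", "B", "C"]
prior = {"A": 0.5, "B": 0.3, "C": 0.2}   # adversary's background knowledge
# lppm[true][reported]: probability the mechanism reports `reported`
lppm = {
    "A": {"A": 0.6, "B": 0.3, "C": 0.1},
    "B": {"A": 0.2, "B": 0.6, "C": 0.2},
    "C": {"A": 0.1, "B": 0.3, "C": 0.6},
}

def posterior(reported):
    """P(true region | reported region) by Bayes' rule."""
    joint = {r: prior[r] * lppm[r][reported] for r in regions}
    z = sum(joint.values())
    return {r: joint[r] / z for r in regions}

print(posterior("B"))  # adversary's belief after one sporadic exposure
```

The user's location privacy is then quantified as the adversary's expected error under this posterior.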

108 citations


Book Chapter (DOI)
27 Jul 2011
TL;DR: Scramble, as discussed by the authors, is a Firefox extension that lets users define access control lists (ACLs) of authorised users for each piece of data, based on their preferences, with ACL definition facilitated by dynamically defined contact groups.
Abstract: Social network sites (SNS) allow users to share information with friends, family, and other contacts. However, current SNSs such as Facebook or Twitter assume that users trust SNS providers with the access control of their data. In this paper we propose Scramble, an SNS-independent Firefox extension that allows users to enforce access control over their data. Scramble lets users define access control lists (ACLs) of authorised users for each piece of data, based on their preferences. The definition of ACLs is facilitated through the possibility of dynamically defining contact groups. In turn, the confidentiality and integrity of each data item are enforced using cryptographic techniques. When accessing an SNS that contains data encrypted using Scramble, the extension transparently decrypts the encrypted content and checks its integrity.
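
Scramble itself builds on OpenPGP, but the underlying pattern is standard hybrid encryption to an ACL: encrypt the content once under a fresh key, then wrap that key for every authorised reader. A simplified sketch using symmetric keys for both layers (requires the `cryptography` package; all names are illustrative):

```python
from cryptography.fernet import Fernet

# Invented keyring: one key per contact (Scramble uses OpenPGP public keys)
members = {name: Fernet.generate_key() for name in ["alice", "bob"]}

def encrypt_for_acl(plaintext, acl):
    content_key = Fernet.generate_key()
    body = Fernet(content_key).encrypt(plaintext)   # encrypt content once
    wrapped = {m: Fernet(members[m]).encrypt(content_key) for m in acl}
    return body, wrapped                            # one wrapped key per reader

def decrypt(body, wrapped, me):
    content_key = Fernet(members[me]).decrypt(wrapped[me])
    return Fernet(content_key).decrypt(body)

body, wrapped = encrypt_for_acl(b"my status update", ["alice", "bob"])
print(decrypt(body, wrapped, "alice"))
```

Fernet authenticates ciphertexts, which provides the integrity check mentioned above.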

99 citations


Book Chapter (DOI)
27 Jul 2011
TL;DR: This paper presents automatic text classification algorithms for classifying enterprise documents as either sensitive or non-sensitive, and introduces a novel training strategy, supplement and adjust, to create a classifier that has a low false discovery rate, even when presented with documents unrelated to the enterprise.
Abstract: Businesses, governments, and individuals leak confidential information, both accidentally and maliciously, at tremendous cost in money, privacy, national security, and reputation. Several security software vendors now offer "data loss prevention" (DLP) solutions that use simple algorithms, such as keyword lists and hashing, which are too coarse to capture what makes sensitive documents secret. In this paper, we present automatic text classification algorithms for classifying enterprise documents as either sensitive or non-sensitive. We also introduce a novel training strategy, supplement and adjust, to create a classifier that has a low false discovery rate, even when presented with documents unrelated to the enterprise. We evaluated our algorithm on several corpora that we assembled from confidential documents published on WikiLeaks and other archives. Our classifier had a false negative rate of less than 3.0% and a false discovery rate of less than 1.0% on all our tests (i.e., in a real deployment, the classifier can identify more than 97% of information leaks while raising at most one false alarm per 100 alerts).
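
The specific supplement-and-adjust strategy is not reproduced here, but the baseline pipeline is ordinary supervised text classification. A toy sketch with scikit-learn on invented documents (not the paper's WikiLeaks-derived corpora):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

docs = [
    "quarterly earnings draft do not distribute",
    "merger negotiation term sheet confidential",
    "company picnic is on friday bring snacks",
    "cafeteria menu update new vegetarian option",
]
labels = ["sensitive", "sensitive", "non-sensitive", "non-sensitive"]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(docs, labels)
print(clf.predict(["confidential draft of the merger terms"]))
```

Controlling the false discovery rate, as the paper emphasises, then comes down to how the training set is augmented with out-of-enterprise documents and how the decision threshold is adjusted.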

91 citations


Book Chapter (DOI)
27 Jul 2011
TL;DR: N23, an ATM-style per-link algorithm that allows Tor routers to explicitly cap their queue lengths and signal congestion via back-pressure, is implemented, resulting in improved web page response times and faster page loads compared to Tor's current design and other window-based approaches.
Abstract: Tor is one of the most widely used privacy enhancing technologies for achieving online anonymity and resisting censorship. While conventional wisdom dictates that the level of anonymity offered by Tor increases as its user base grows, the most significant obstacle to Tor adoption continues to be its slow performance. We seek to enhance Tor's performance by offering techniques to control congestion and improve flow control, thereby reducing unnecessary delays. To reduce congestion, we first evaluate small fixed-size circuit windows and a dynamic circuit window that adaptively re-sizes in response to perceived congestion. While these solutions improve web page response times and require modification only to exit routers, they generally offer poor flow control and slower downloads relative to Tor's current design. To improve flow control while reducing congestion, we implement N23, an ATM-style per-link algorithm that allows Tor routers to explicitly cap their queue lengths and signal congestion via back-pressure. Our results show that N23 offers better congestion and flow control, resulting in improved web page response times and faster page loads compared to Tor's current design and other window-based approaches. We also argue that our proposals do not enable any new attacks on Tor users' privacy.
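
The essence of an ATM-style credit scheme like N23 is that a node may only emit a cell when it holds credit, and credit is replenished from downstream as cells drain, so per-link queues are capped at N2 + N3. A single-link toy model (parameter values invented):

```python
from collections import deque

N2, N3 = 10, 20  # illustrative credit parameters

class Link:
    def __init__(self):
        self.credit = N2 + N3     # cells the upstream node may still emit
        self.queue = deque()
        self.forwarded = 0

    def send(self, cell):
        if self.credit == 0:
            return False          # back-pressure: sender must wait
        self.credit -= 1
        self.queue.append(cell)
        return True

    def forward_one(self):
        if self.queue:
            self.queue.popleft()
            self.forwarded += 1
            if self.forwarded % N2 == 0:
                self.credit += N2  # credit update flows back upstream

link = Link()
accepted = sum(link.send(i) for i in range(50))
print(accepted, len(link.queue))   # 30 accepted: queue never exceeds N2 + N3
for _ in range(10):
    link.forward_one()
print(link.credit)                 # credit returns once N2 cells drain
```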

80 citations


Book Chapter (DOI)
27 Jul 2011
TL;DR: This paper shows how to achieve constant complexity in a pairing-based anonymous credential system without relying on RSA, using zero-knowledge proofs of pairing-based certificates and accumulators to prove AND and OR relations with constant complexity in the number of finite-set attributes.
Abstract: An anonymous credential system allows the user to convince a verifier of the possession of a certificate issued by the issuing authority anonymously. One of the applications is the privacy-enhancing electronic ID (eID). A previously proposed anonymous credential system achieves constant complexity in the number of finite-set attributes of the user. However, the system is based on RSA. In this paper, we show how to achieve constant complexity in a pairing-based anonymous credential system without relying on RSA. The key idea of the construction is the use of a pairing-based accumulator. The accumulator outputs a constant-size value from a large set of input values. Using zero-knowledge proofs of pairing-based certificates and accumulators, we can prove AND and OR relations with constant complexity in the number of finite-set attributes.

52 citations


Book Chapter (DOI)
27 Jul 2011
TL;DR: This paper proposes two privacy-preserving algorithms for the FRVP problem, analytically evaluates their privacy in both passive and active adversarial scenarios, and studies the practical feasibility and performance of the proposed approaches by implementing them on Nokia mobile devices.
Abstract: Location-Sharing-Based Services (LSBS) complement Location-Based Services by using locations from a group of users, and not just individuals, to provide some contextualized service based on the locations in the group. However, there are growing concerns about the misuse of location data by third parties, which fuels the need for more privacy controls in such services. We address the relevant problem of privacy in LSBSs by providing practical and effective solutions to the privacy problem in one such service, namely the fair rendezvous point (FRVP) determination service. The privacy-preserving FRVP (PPFRVP) problem is general enough and nicely captures the computations and privacy requirements in LSBSs. In this paper, we propose two privacy-preserving algorithms for the FRVP problem and analytically evaluate their privacy in both passive and active adversarial scenarios. We study the practical feasibility and performance of the proposed approaches by implementing them on Nokia mobile devices. By means of a targeted user study, we attempt to gain further understanding of the popularity, privacy, and acceptance of the proposed solutions.
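
The function being protected is simple in the clear: pick the candidate venue minimising the maximum distance any participant must travel. The PPFRVP protocols compute this argmin over private inputs; a plaintext sketch of the objective (coordinates invented):

```python
import math

users = [(0, 0), (4, 0), (2, 3)]          # participants' private locations
candidates = [(1, 1), (2, 1), (3, 2)]     # possible rendezvous points

def fairness_cost(venue):
    """Worst-case travel distance if everyone meets at `venue`."""
    return max(math.dist(venue, u) for u in users)

rendezvous = min(candidates, key=fairness_cost)
print(rendezvous, round(fairness_cost(rendezvous), 2))
```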

52 citations


Book Chapter (DOI)
27 Jul 2011
TL;DR: In this article, the authors present an attack on the I2P Peer-to-Peer network, with the goal of determining the identity of peers that are anonymously hosting HTTP services (Eepsite) in the network.
Abstract: I2P is one of the most widely used anonymizing Peer-to-Peer networks on the Internet today. Like Tor, it uses onion routing to build tunnels between peers as the basis for providing anonymous communication channels. Unlike Tor, I2P integrates a range of anonymously hosted services directly with the platform. This paper presents a new attack on the I2P Peer-to-Peer network, with the goal of determining the identity of peers that are anonymously hosting HTTP services (Eepsites) in the network. Key design choices made by I2P developers, in particular performance-based peer selection, enable a sophisticated adversary with modest resources to break key security assumptions. Our attack first obtains an estimate of the victim's view of the network. Then, the adversary selectively targets a small number of peers used by the victim with a denial-of-service attack while giving the victim the opportunity to replace those peers with other peers that are controlled by the adversary. Finally, the adversary performs some simple measurements to determine the identity of the peer hosting the service. This paper provides the necessary background on I2P, gives details on the attack -- including experimental data from measurements against the actual I2P network -- and discusses possible solutions.

Book Chapter (DOI)
27 Jul 2011
TL;DR: A portable low-cost USRP-based RFID fingerprinter is built, and it is shown that this fingerprinter enables reliable identification of individual tags from varying distances and across different tag placements (wallet, shopping bag, etc.).
Abstract: In this work, we demonstrate the practicality of people tracking by means of physical-layer fingerprints of RFID tags that they carry. We build a portable low-cost USRP-based RFID fingerprinter and we show, over a set of 210 EPC C1G2 tags, that this fingerprinter enables reliable identification of individual tags from varying distances and across different tag placements (wallet, shopping bag, etc.). We further investigate the use of this setup for clandestine people tracking in an example Shopping Mall scenario and show that in this scenario the mobility traces of people can be reconstructed with a high accuracy.
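
Once per-tag physical-layer features have been extracted, the identification step is essentially nearest-neighbour matching in feature space. A hypothetical sketch (the feature vectors are invented placeholders, not the paper's signal features):

```python
import math

enrolled = {                       # invented per-tag fingerprint vectors
    "tag-01": [0.12, 3.4, 0.98],
    "tag-02": [0.31, 2.9, 1.10],
    "tag-03": [0.08, 3.7, 0.91],
}

def identify(sample):
    """Attribute a fresh measurement to the closest enrolled tag."""
    return min(enrolled, key=lambda t: math.dist(enrolled[t], sample))

print(identify([0.11, 3.5, 0.97]))  # -> tag-01
```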

Book Chapter (DOI)
27 Jul 2011
TL;DR: A critical analysis of the system-wide anonymity metric of Edman et al., which is based on the permanent of a doubly-stochastic matrix, showing that it is at best a rough indicator of anonymity and proposing a new metric that is both accurate and general.
Abstract: We give a critical analysis of the system-wide anonymity metric of Edman et al. [3], which is based on the permanent value of a doubly-stochastic matrix. By providing an intuitive understanding of the permanent of such a matrix, we show that a metric that looks no further than this composite value is at best a rough indicator of anonymity. We identify situations where its inaccuracy is acute, and reveal a better anonymity indicator. Also, by constructing an information-preserving embedding of a smaller class of attacks into the wider class for which this metric was proposed, we show that this metric fails to possess desirable generalization properties. Finally, we present a new anonymity metric that does not exhibit these shortcomings. Our new metric is accurate as well as general.
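
To make the object of study concrete: the metric evaluates the permanent of the doubly-stochastic matrix whose entry (i, j) is the probability that input message i corresponds to output message j. A naive computation for small systems (the permanent is #P-hard in general):

```python
from itertools import permutations

def permanent(M):
    """Sum over all permutations; fine for the small n used here."""
    n = len(M)
    total = 0.0
    for perm in permutations(range(n)):
        prod = 1.0
        for i, j in enumerate(perm):
            prod *= M[i][j]
        total += prod
    return total

uniform = [[1 / 3] * 3 for _ in range(3)]  # attacker learned nothing
identity = [[float(i == j) for j in range(3)] for i in range(3)]  # fully traced
print(permanent(uniform), permanent(identity))  # ~0.222 vs 1.0
```

A single scalar like this compresses the whole matrix, which is precisely why the authors show it can hide very different anonymity situations.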

Book Chapter (DOI)
27 Jul 2011
TL;DR: This work proposes an algorithm that allows ISPs to cooperatively detect anomalies without requiring them to reveal private traffic information, and concludes that privacy-preserving anomaly detection shows promise as a key element of a wider network anomaly detection framework.
Abstract: Detection of malicious traffic in the Internet would be much easier if ISP networks shared their traffic traces. Unfortunately, state-of-the-art anomaly detection algorithms require detailed traffic information which is considered extremely private by operators. To address this, we propose an algorithm that allows ISPs to cooperatively detect anomalies without requiring them to reveal private traffic information. We leverage secure multiparty computation to design a privacy-preserving variant of principal component analysis (PCA) that limits information propagation across domains. PCA is a well-proven technique for isolating anomalies in network traffic, and we target a design that retains its scalability and accuracy. To validate our approach, we evaluate an implementation of our design against traces from the Abilene Internet2 IP backbone network as well as synthetic traces, show that it performs efficiently enough to support an online anomaly detection system, and conclude that privacy-preserving anomaly detection shows promise as a key element of a wider network anomaly detection framework. In the presence of increasingly serious threats from modern networked malware, our work provides a first step towards enabling larger-scale cooperation across ISPs in the presence of privacy concerns.
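
For context, the plaintext subspace method that the private protocol emulates projects traffic vectors onto the residual subspace left by the top principal components and flags large residuals. A sketch with synthetic data (requires numpy; all values invented):

```python
import numpy as np

rng = np.random.default_rng(0)
normal = rng.normal(size=(200, 5)) @ rng.normal(size=(5, 20))  # low-rank traffic
X = np.vstack([normal, normal[-1] + 25 * rng.normal(size=20)]) # inject anomaly

Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
top = Vt[:5]                          # "normal" subspace: top 5 components
residual = Xc - (Xc @ top.T) @ top    # traffic the model cannot explain
spe = (residual ** 2).sum(axis=1)     # squared prediction error per sample
print(np.argmax(spe))                 # the injected row stands out
```

The contribution of the paper is computing this kind of decomposition across ISPs without any of them revealing its raw traffic data.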

Book Chapter (DOI)
27 Jul 2011
TL;DR: This work considers the problem of broker-based private matching where end-entities do not interact with each other but communicate through a third entity, namely the Broker, which only discovers the number of matching elements.
Abstract: Private matching solutions allow two parties to find common data elements over their own datasets without revealing any additional private information. We propose a new concept involving an intermediate entity in the private matching process: we consider the problem of broker-based private matching where end-entities do not interact with each other but communicate through a third entity, namely the Broker, which only discovers the number of matching elements. Although introducing this third entity enables a complete decoupling between end-entities (which may not even know each other), this advantage comes at the cost of higher exposure in terms of privacy and security. After defining the security requirements dedicated to this new concept, we propose a complete solution which combines searchable encryption techniques with counting Bloom filters to preserve the privacy of end-entities and to prove the correctness of the matching, respectively.
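
A counting Bloom filter is a Bloom filter whose bits are replaced by counters, which makes it possible to estimate how many elements match without learning which ones. A toy sketch of the counting structure alone (the searchable-encryption layer the paper combines it with is omitted):

```python
import hashlib

M, K = 64, 3  # illustrative filter size and hash count

def positions(item):
    return [int.from_bytes(hashlib.sha256(f"{i}:{item}".encode()).digest()[:4],
                           "big") % M for i in range(K)]

def build(items):
    cbf = [0] * M
    for it in items:
        for pos in positions(it):
            cbf[pos] += 1
    return cbf

def maybe_contains(cbf, item):
    return all(cbf[pos] > 0 for pos in positions(item))

alice = build(["x1", "x2", "x3"])
matches = sum(maybe_contains(alice, it) for it in ["x2", "x3", "y9"])
print(matches)  # 2, up to Bloom-filter false positives
```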

Proceedings Article
01 Jan 2011
TL;DR: A methodology for simulating Tor relay up/down behavior over time is described and some preliminary results are given.
Abstract: We describe a methodology for simulating Tor relay up/down behavior over time and give some preliminary results.
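
One simple way to realise such a simulation is to model each relay as a two-state (up/down) Markov chain with per-step transition probabilities estimated from consensus data. A minimal sketch with invented parameters, not the authors' fitted model:

```python
import random

P_DOWN, P_UP = 0.05, 0.30  # Pr(up -> down) and Pr(down -> up) per hour

def simulate_relay(hours, up=True):
    """Return an hourly up/down trace for one relay."""
    trace = []
    for _ in range(hours):
        r = random.random()
        up = (r >= P_DOWN) if up else (r < P_UP)
        trace.append(up)
    return trace

trace = simulate_relay(72)
print(f"uptime fraction: {sum(trace) / len(trace):.2f}")
```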