
Showing papers presented at "Privacy Enhancing Technologies in 2006"


Book ChapterDOI
28 Jun 2006
TL;DR: In this paper, the authors survey a representative sample of the members of the Facebook (a social network for colleges and high schools) at a US academic institution and compare the survey data to information retrieved from the network itself.
Abstract: Online social networks such as Friendster, MySpace, or the Facebook have experienced exponential growth in membership in recent years. These networks offer attractive means for interaction and communication, but also raise privacy and security concerns. In this study we survey a representative sample of the members of the Facebook (a social network for colleges and high schools) at a US academic institution, and compare the survey data to information retrieved from the network itself. We look for underlying demographic or behavioral differences between the communities of the network's members and non-members; we analyze the impact of privacy concerns on members' behavior; we compare members' stated attitudes with actual behavior; and we document the changes in behavior subsequent to privacy-related information exposure. We find that an individual's privacy concerns are only a weak predictor of his membership to the network. Also, privacy-concerned individuals join the network and reveal great amounts of personal information. Some manage their privacy concerns by trusting their ability to control the information they provide and the external access to it. However, we also find evidence of members' misconceptions about the online community's actual size and composition, and about the visibility of members' profiles.

1,888 citations


Book ChapterDOI
28 Jun 2006
TL;DR: A data model that augments location data with uncertainty is suggested, and imprecise queries are proposed that hide the location of the query issuer and yield probabilistic results; the evaluation and quality aspects of a range query are investigated.
Abstract: Location-based services, such as finding the nearest gas station, require users to supply their location information. However, a user's location can be tracked without her consent or knowledge. Lowering the spatial and temporal resolution of location data sent to the server has been proposed as a solution. Although this technique is effective in protecting privacy, it may be overkill and the quality of desired services can be severely affected. In this paper, we suggest a framework where uncertainty can be controlled to provide high quality and privacy-preserving services, and investigate how such a framework can be realized in the GPS and cellular network systems. Based on this framework, we suggest a data model to augment location data with uncertainty, and propose imprecise queries that hide the location of the query issuer and yield probabilistic results. We investigate the evaluation and quality aspects for a range query. We also provide novel methods to protect our solutions against trajectory-tracing. Experiments are conducted to examine the effectiveness of our approaches.

345 citations
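The imprecise-query idea can be illustrated with a toy sketch. Assuming the user's location is released only as an axis-aligned uncertainty rectangle with a uniform distribution (a simplification; the paper's actual data model and query evaluation are richer), a range query then returns a probability rather than a yes/no answer:

```python
def overlap_area(a, b):
    # a, b: axis-aligned rectangles (x1, y1, x2, y2) with x1 <= x2 and y1 <= y2
    w = min(a[2], b[2]) - max(a[0], b[0])
    h = min(a[3], b[3]) - max(a[1], b[1])
    return max(w, 0.0) * max(h, 0.0)

def range_query_probability(uncertainty_region, query_range):
    """Probability that a user whose true position is uniformly
    distributed over `uncertainty_region` lies inside `query_range`."""
    region_area = overlap_area(uncertainty_region, uncertainty_region)
    if region_area == 0.0:
        return 0.0
    return overlap_area(uncertainty_region, query_range) / region_area
```

With a coarser (larger) uncertainty region the answer becomes less certain, which is exactly the privacy/quality trade-off the abstract describes.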


Book ChapterDOI
28 Jun 2006
TL;DR: The so-called “Great Firewall of China” operates, in part, by inspecting TCP packets for keywords that are to be blocked; but because the original packets pass through the firewall unscathed, if the endpoints completely ignore the firewall's resets, the connection will proceed unhindered.
Abstract: The so-called “Great Firewall of China” operates, in part, by inspecting TCP packets for keywords that are to be blocked. If the keyword is present, TCP reset packets (viz: with the RST flag set) are sent to both endpoints of the connection, which then close. However, because the original packets are passed through the firewall unscathed, if the endpoints completely ignore the firewall's resets, then the connection will proceed unhindered. Once one connection has been blocked, the firewall makes further easy-to-evade attempts to block further connections from the same machine. This latter behaviour can be leveraged into a denial-of-service attack on third-party machines.

193 citations
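The evasion behaviour the abstract describes can be sketched as a toy simulation (a hypothetical one-word blocklist stands in for the firewall's real keyword set, and the real firewall inspects live TCP streams rather than strings):

```python
BLOCKED_KEYWORDS = {"falun"}  # hypothetical one-entry blocklist

def firewall_inspects(payload):
    """True if the firewall would inject RST packets for this payload."""
    return any(kw in payload.lower() for kw in BLOCKED_KEYWORDS)

def transfer(payload, ignore_resets=False):
    """Simulate one keyword-bearing segment crossing the firewall.
    The data segment itself always passes through unscathed; only the
    injected RSTs can tear the connection down, and only if the
    endpoints actually heed them."""
    if firewall_inspects(payload) and not ignore_resets:
        return "connection reset"
    return "delivered"
```

In practice "ignoring resets" means configuring both endpoints' stacks to drop inbound RST segments for the connection, which is what makes the countermeasure require cooperation from both sides.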


Book ChapterDOI
28 Jun 2006
TL;DR: This work proposes an application of recent advances in e-cash, anonymous credentials, and proxy re-encryption to the problem of privacy in public transit systems with electronic ticketing and demonstrates security and privacy features offered by the hybrid system that are unavailable in a homogeneous passive transponder architecture.
Abstract: We propose an application of recent advances in e-cash, anonymous credentials, and proxy re-encryption to the problem of privacy in public transit systems with electronic ticketing. We discuss some of the interesting features of transit ticketing as a problem domain, and provide an architecture sufficient for the needs of a typical metropolitan transit system. Our system maintains the security required by the transit authority and the user while significantly increasing passenger privacy. Our hybrid approach to ticketing allows use of passive RFID transponders as well as higher powered computing devices such as smartphones or PDAs. We demonstrate security and privacy features offered by our hybrid system that are unavailable in a homogeneous passive transponder architecture, and which are advantageous for users of passive as well as active devices.

101 citations


Book ChapterDOI
28 Jun 2006
TL;DR: In this article, the authors present an efficient construction of a private disjointness testing protocol that is secure against malicious provers and honest-but-curious (semi-honest) verifiers without the use of random oracles.
Abstract: We present an efficient construction of a private disjointness testing protocol that is secure against malicious provers and honest-but-curious (semi-honest) verifiers, without the use of random oracles. In a completely semi-honest setting, this construction implements a private intersection cardinality protocol. We formally define both private intersection cardinality and private disjointness testing protocols. We prove that our construction is secure under the subgroup decision and subgroup computation assumptions. A major advantage of our construction is that it does not require bilinear groups, random oracles, or non-interactive zero knowledge proofs. Applications of private intersection cardinality and disjointness testing protocols include privacy-preserving data mining and anonymous login systems.

71 citations
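For comparison, the folklore Diffie-Hellman-style private intersection cardinality protocol for the fully semi-honest setting can be sketched in a few lines. This is not the paper's subgroup-decision construction, and the toy 127-bit group below is far too small for real use:

```python
import hashlib
import random

P = (1 << 127) - 1  # a Mersenne prime; real deployments need >= 2048-bit groups

def h(element):
    """Hash an element into the multiplicative group mod P."""
    digest = hashlib.sha256(element.encode()).digest()
    return int.from_bytes(digest, "big") % P or 1

def blind(values, key):
    return [pow(v, key, P) for v in values]

def psi_cardinality(set_a, set_b, rng=random):
    """Semi-honest DH-based private intersection cardinality sketch.
    Exponentiation commutes, so H(x)^(ab) == H(x)^(ba); each side only
    ever sees the other's elements blinded under an unknown exponent,
    and (against a passive adversary) only |A ∩ B| is revealed."""
    a = rng.randrange(2, P - 1)   # Alice's secret exponent
    b = rng.randrange(2, P - 1)   # Bob's secret exponent
    # Alice -> Bob: shuffled {H(x)^a}
    msg1 = blind([h(x) for x in set_a], a)
    rng.shuffle(msg1)
    # Bob -> Alice: {H(x)^(ab)} plus shuffled {H(y)^b}
    msg2 = blind(msg1, b)
    msg3 = blind([h(y) for y in set_b], b)
    rng.shuffle(msg3)
    # Alice: count collisions between {H(x)^(ab)} and {H(y)^(ba)}
    double_blinded_b = set(blind(msg3, a))
    return sum(1 for v in msg2 if v in double_blinded_b)
```

The abstract's point is precisely that this kind of protocol breaks down against a malicious prover; their construction keeps security in that stronger model without random oracles or bilinear groups.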


Book ChapterDOI
28 Jun 2006
TL;DR: The proposed GCD framework lends itself to many practical instantiations and offers several novel and appealing features, such as self-distinction and strong anonymity with reusable credentials; the paper provides a thorough security analysis and illustrates two concrete framework instantiations.
Abstract: In a society increasingly concerned with the erosion of privacy, privacy-preserving techniques are becoming very important. This motivates research in cryptographic techniques offering built-in privacy. A secret handshake is a protocol whereby participants establish a secure, anonymous and unobservable communication channel only if they are members of the same group. This type of “private” authentication is a valuable tool in the arsenal of privacy-preserving cryptographic techniques. Prior research focused on 2-party secret handshakes with one-time credentials. This paper breaks new ground on two accounts: (1) it shows how to obtain secure and efficient secret handshakes with reusable credentials, and (2) it represents the first treatment of group (or multi-party) secret handshakes, thus providing a natural extension to the secret handshake technology. An interesting new issue encountered in multi-party secret handshakes is the need to ensure that all parties are indeed distinct. (This is a real challenge since the parties cannot expose their identities.) We tackle this and other challenging issues in constructing GCD – a flexible framework for secret handshakes. The proposed GCD framework lends itself to many practical instantiations and offers several novel and appealing features such as self-distinction and strong anonymity with reusable credentials. In addition to describing the motivation and step-by-step construction of the framework, this paper provides a thorough security analysis and illustrates two concrete framework instantiations.

56 citations


Book ChapterDOI
28 Jun 2006
TL;DR: The results suggest that mechanisms based solely on a node's local knowledge of the network are not sufficient to solve the difficult problem of detecting colluding adversarial behavior in a P2P system and that more sophisticated schemes may be needed.
Abstract: MorphMix is a peer-to-peer circuit-based mix network designed to provide low-latency anonymous communication. MorphMix nodes incrementally construct anonymous communication tunnels based on recommendations from other nodes in the system; this P2P approach allows it to scale to millions of users. However, by allowing unknown peers to aid in tunnel construction, MorphMix is vulnerable to colluding attackers that only offer other attacking nodes in their recommendations. To avoid building corrupt tunnels, MorphMix employs a collusion detection mechanism to identify this type of misbehavior. In this paper, we challenge the assumptions of the collusion detection mechanism and demonstrate that colluding adversaries can compromise a significant fraction of all anonymous tunnels, and in some cases, a majority of all tunnels built. Our results suggest that mechanisms based solely on a node's local knowledge of the network are not sufficient to solve the difficult problem of detecting colluding adversarial behavior in a P2P system and that more sophisticated schemes may be needed.

56 citations
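The qualitative effect of colluding recommendations can be reproduced with a toy simulation. This is not MorphMix's actual peer-selection mechanism; it only contrasts "attackers recommend only attackers" with independent random selection, where end-to-end compromise requires controlling both tunnel endpoints:

```python
import random

def build_tunnel(length, f, colluding, rng):
    """Return a list of booleans (True = malicious) for one tunnel.
    Honest nodes recommend a fresh random peer; colluding malicious
    nodes recommend only fellow attackers for every later hop."""
    hops = [rng.random() < f]          # the initiator picks the first hop itself
    for _ in range(length - 1):
        if colluding and hops[-1]:
            hops.append(True)          # an attacker recommends an attacker
        else:
            hops.append(rng.random() < f)
    return hops

def compromised_fraction(f, length=5, trials=20000, colluding=True, seed=1):
    """Estimate how often both tunnel endpoints are malicious, which
    suffices for end-to-end traffic correlation."""
    rng = random.Random(seed)
    hits = sum(
        1 for _ in range(trials)
        if (t := build_tunnel(length, f, colluding, rng))[0] and t[-1]
    )
    return hits / trials
```

With a fraction f of malicious nodes, the colluding strategy pushes the compromised fraction from roughly f² up to roughly f, which mirrors the paper's point that local collusion detection has a lot of ground to make up.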


Book ChapterDOI
28 Jun 2006
TL;DR: This paper introduces a benchmark metric for measuring the resistance of the system to a single compromised member, and shows how the parameters of the key-tree should be chosen in order to maximize the system's resistance to single member compromise under some constraints on the authentication delay.
Abstract: Key-tree based private authentication has been proposed by Molnar and Wagner as a neat way to efficiently solve the problem of privacy preserving authentication based on symmetric key cryptography. However, in the key-tree based approach, the level of privacy provided by the system to its members may decrease considerably if some members are compromised. In this paper, we analyze this problem, and show that careful design of the tree can help to minimize this loss of privacy. First, we introduce a benchmark metric for measuring the resistance of the system to a single compromised member. This metric is based on the well-known concept of anonymity sets. Then, we show how the parameters of the key-tree should be chosen in order to maximize the system's resistance to single member compromise under some constraints on the authentication delay. In the general case, when any member can be compromised, we give a lower bound on the level of privacy provided by the system. We also present some simulation results that show that this lower bound is quite sharp. The results of this paper can be directly used by system designers to construct optimal key-trees in practice; indeed, we consider this as the main contribution of our work.

55 citations
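The anonymity-set idea behind the benchmark metric can be sketched as follows. The partition below assumes the standard Molnar-Wagner key-tree layout (each member holds the keys on its root-to-leaf path); the paper's exact normalized metric and optimization under delay constraints may differ from this simple expected-set-size calculation:

```python
from math import prod

def anonymity_partition(branching):
    """Sizes of the anonymity sets into which the remaining members fall
    after a single member of a full key-tree is compromised.
    `branching` lists the branching factor at each level, root first."""
    n = prod(branching)
    sizes = []
    below = n
    for b in branching:
        below //= b                     # members per subtree at this level
        sizes.append((b - 1) * below)   # members diverging from the victim here
    return [s for s in sizes if s > 0]

def expected_anonymity_set(branching):
    """Anonymity-set size of a randomly chosen member after one
    compromise (the compromised member itself is a set of size 1)."""
    n = prod(branching)
    sizes = anonymity_partition(branching) + [1]
    return sum(s * s for s in sizes) / n
```

For 8 members, a flat tree `[8]` gives an expected set size of 6.25 versus 2.75 for a binary tree `[2, 2, 2]`: wide trees resist single-member compromise better, at the cost of authentication delay, which is exactly the trade-off the paper optimizes.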


Book ChapterDOI
28 Jun 2006
TL;DR: The big-file technique is found to be effective in overwriting file data on FAT32, NTFS, and HFS, but not on Ext2fs, Ext3fs, or Reiserfs.
Abstract: Many of today's privacy-preserving tools create a big file that fills up a hard drive or USB storage device in an effort to overwrite all of the “deleted files” that the media contain. But while this technique is widespread, it is largely unvalidated. We evaluate the effectiveness of the “big file technique” using sector-by-sector disk imaging on file systems running under Windows, Mac OS, Linux, and FreeBSD. We find the big file is effective in overwriting file data on FAT32, NTFS, and HFS, but not on Ext2fs, Ext3fs, or Reiserfs. In one case, a total of 248 individual files consisting of 1.75MB of disk space could be recovered in their entirety. Also, file metadata such as filenames are rarely overwritten. We present a theoretical analysis of the file sanitization problem and evaluate the effectiveness of a commercial implementation that implements an improved strategy.

55 citations


Book ChapterDOI
28 Jun 2006
TL;DR: This paper provides a formal proof of security of Tor's custom authentication protocol, in the random oracle model, under reasonable cryptographic assumptions.
Abstract: Tor is a popular anonymous Internet communication system, used by an estimated 250,000 users to anonymously exchange over five terabytes of data per day. The security of Tor depends on properly authenticating nodes to clients, but Tor uses a custom protocol, rather than an established one, to perform this authentication. In this paper, we provide a formal proof of security of this protocol, in the random oracle model, under reasonable cryptographic assumptions.

54 citations


Book ChapterDOI
28 Jun 2006
TL;DR: Possible privacy breaches within the Liberty Identity Federation Framework and the Liberty Identity Web Services Framework are identified, and proposals for improvement in both these frameworks are discussed.
Abstract: Internet usage has been growing significantly, and the issue of online privacy has become a correspondingly greater concern. Several recent surveys show that users' concern about the privacy of their personal information reduces their use of electronic businesses and Internet services; furthermore, many users choose to provide false data in order to protect their real identities. Identity federation aims to assemble an identity virtually from a user's personal information stored across several distinct identity management systems. Liberty Alliance is one of the most recognized projects in developing an open standard for federated network identity. While one of the key objectives of the Liberty Alliance is to enable consumers to protect the privacy and security of their network identity information, this paper identifies and analyzes possible privacy breaches within the Liberty Identity Federation Framework and the Liberty Identity Web Services Framework. Proposals for improvement in both these frameworks are discussed.

Book ChapterDOI
28 Jun 2006
TL;DR: This paper focuses on how to automate the enforcement of privacy within enterprises in a systemic way, in particular privacy-aware access to personal data and enforcement of privacy obligations: this area is still open to innovation.
Abstract: It is common practice for enterprises and other organisations to ask people to disclose their personal data in order to grant them access to services and engage in transactions. This practice is not going to disappear, at least in the foreseeable future. Most enterprises need personal information to run their businesses and provide the required services, many of whom have turned to identity management solutions to do this in an efficient and automated way. Privacy laws dictate how enterprises should handle personal data in a privacy compliant way: this requires dealing with privacy rights, permissions and obligations. It involves operational and compliance aspects. Currently much is done by means of manual processes, which make them difficult and expensive to comply with. A key requirement for enterprises is being able to leverage their investments in identity management solutions. This paper focuses on how to automate the enforcement of privacy within enterprises in a systemic way, in particular privacy-aware access to personal data and enforcement of privacy obligations: this is still open to innovation. We introduce our work in these areas: core concepts are described along with our policy enforcement models and related technologies. Two prototypes have been built as a proof of concept and integrated with state-of-the-art (commercial) identity management solutions to demonstrate the feasibility of our work. We provide technical details, discuss open issues and our next steps.


Book ChapterDOI
28 Jun 2006
TL;DR: In this paper, the authors suggest improvements that should be easy to adopt within the existing hidden service design, improvements that will both reduce vulnerability to DoS attacks and add QoS as a service option.
Abstract: Location hidden services have received increasing attention as a means to resist censorship and protect the identity of service operators. Research and vulnerability analysis to date has mainly focused on how to locate the hidden service. But while the hiding techniques have improved, almost no progress has been made in increasing the resistance against DoS attacks directly or indirectly on hidden services. In this paper we suggest improvements that should be easy to adopt within the existing hidden service design, improvements that will both reduce vulnerability to DoS attacks and add QoS as a service option. In addition we show how to hide not just the location but the existence of the hidden service from everyone but the users knowing its service address. Not even the public directory servers will know how a private hidden service can be contacted, or know it exists.

Book ChapterDOI
28 Jun 2006
TL;DR: In a structured overlay organized as a chordal ring, it is found that a technique originally developed for recipient anonymity also improves sender anonymity, based on the use of imprecise entries in the routing tables of each participating peer.
Abstract: In the framework of peer to peer distributed systems, the problem of anonymity in structured overlay networks remains a quite elusive one. It is especially unclear how to evaluate and improve sender anonymity, that is, untraceability of the peers who issue messages to other participants in the overlay. In a structured overlay organized as a chordal ring, we have found that a technique originally developed for recipient anonymity also improves sender anonymity. The technique is based on the use of imprecise entries in the routing tables of each participating peer. Simulations show that the sender anonymity, as measured in terms of average size of anonymity set, decreases slightly if the peers use imprecise routing; yet, the anonymity takes a better distribution, with good anonymity levels becoming more likely at the expenses of very high and very low levels. A better quality of anonymity service is thus provided to participants.

Book ChapterDOI
28 Jun 2006
TL;DR: In this article, the authors describe their experiences in implementing a privacy protection system based on the Intellectual Property Management and Protection (IPMP) components of the MPEG-21 Multimedia Framework, which allows individuals to express their privacy preferences in a way enabling automatic enforcement by data users' computers.
Abstract: A number of authors have observed a duality between privacy protection and copyright protection, and, in particular, observed how digital rights management technology may be used as the basis of a privacy protection system. In this paper, we describe our experiences in implementing a privacy protection system based on the Intellectual Property Management and Protection (“IPMP”) components of the MPEG-21 Multimedia Framework. Our approach allows individuals to express their privacy preferences in a way enabling automatic enforcement by data users' computers. This required the design of an extension to the MPEG Rights Expression Language to cater for privacy applications, and the development of software that allowed individuals' information and privacy preferences to be securely collected, stored and interpreted.

Book ChapterDOI
28 Jun 2006
TL;DR: It is shown that, in principle, almost any anonymity scheme can be made selectively traceable, and that a particular anonymity scheme can be modified to be noncoercible.
Abstract: Anonymous communication can, by its very nature, facilitate socially unacceptable behavior; such abuse of anonymity is a serious impediment to its widespread deployment. This paper studies two notions related to the prevention of abuse. The first is selective traceability, the property that a message's sender can be traced with the help of an explicitly stated set of parties. The second is noncoercibility, the property that no party can convince an adversary (using technical means) that he was not the sender of a message. We show that, in principle, almost any anonymity scheme can be made selectively traceable, and that a particular anonymity scheme can be modified to be noncoercible.

Book ChapterDOI
28 Jun 2006
TL;DR: Privacy Injector, as described in this paper, adds privacy enforcement to existing applications and protects the complete life cycle of personal data by providing a practical implementation of the “sticky policy paradigm.”
Abstract: Protection of personal data is essential for customer acceptance. Even though existing privacy policies can describe how data shall be handled, privacy enforcement remains a challenge. Especially for existing applications, it is unclear how one can effectively ensure correct data handling without completely redesigning the applications. In this paper we introduce Privacy Injector, which allows us to add privacy enforcement to existing applications. Conceptually Privacy Injector consists of two complementary parts, namely, a privacy metadata tracking and a privacy policy enforcement part. We show how Privacy Injector protects the complete life cycle of personal data by providing us with a practical implementation of the “sticky policy paradigm.” Throughout the collection, transformation, disclosure and deletion of personal data, Privacy Injector will automatically assign, preserve and update privacy metadata as well as enforce the privacy policy. As our approach is policy-agnostic, we can enforce any policy language that describes which actions may be performed on which data.

Book ChapterDOI
Andreas Pashalidis1, Bernd Meyer1
28 Jun 2006
TL;DR: This paper studies a particular attack that cooperating organisations may launch in order to link the transactions and the pseudonyms of the users of an anonymous credential system; the results of the analysis are both positive and negative.
Abstract: In this paper we study a particular attack that may be launched by cooperating organisations in order to link the transactions and the pseudonyms of the users of an anonymous credential system. The results of our analysis are both positive and negative. The good (resp. bad) news, from a privacy protection (resp. evidence gathering) viewpoint, is that the attack may be computationally intensive. In particular, it requires solving a problem that is polynomial time equivalent to ALLSAT. The bad (resp. good) news is that a typical instance of this problem may be efficiently solvable.

Book ChapterDOI
28 Jun 2006
TL;DR: In this article, the authors describe how to add alpha-mixing to various mix designs, and show that mix networks with this feature can provide increased anonymity for all senders in the network.
Abstract: Currently fielded anonymous communication systems either introduce too much delay and thus have few users and little security, or have many users but too little delay to provide protection against large attackers. By combining the user bases into the same network, and ensuring that all traffic is mixed together, we hope to lower delay and improve anonymity for both sets of users. Alpha-mixing is an approach that can be added to traditional batching strategies to let senders specify for each message whether they prefer security or speed. Here we describe how to add alpha-mixing to various mix designs, and show that mix networks with this feature can provide increased anonymity for all senders in the network. Along the way we encounter subtle issues to do with the attacker's knowledge of the security parameters of the users.
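A toy alpha-mix illustrating the batching idea (a simplified model in which alpha counts the number of firing rounds a message waits; the designs in the paper integrate alpha with threshold, timed, and pool batching strategies):

```python
import random

class AlphaMix:
    """Toy alpha-mix: each message arrives with its own alpha value;
    every time the mix fires, all queued alphas drop by one, and the
    messages that reach zero leave together in a random order.
    alpha=1 behaves like a plain relay hop (out at the next firing);
    larger alphas trade extra delay for a larger mixing set."""

    def __init__(self, seed=None):
        self.pool = []               # list of [rounds_remaining, message]
        self.rng = random.Random(seed)

    def submit(self, message, alpha):
        self.pool.append([alpha, message])

    def fire(self):
        """One mixing round; returns the batch of messages sent out."""
        for entry in self.pool:
            entry[0] -= 1
        batch = [m for a, m in self.pool if a <= 0]
        self.pool = [e for e in self.pool if e[0] > 0]
        self.rng.shuffle(batch)
        return batch
```

The per-message choice of alpha is the point: a latency-sensitive sender submits with alpha=1, a security-sensitive sender picks a large alpha, and both share one network.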

Book ChapterDOI
28 Jun 2006
TL;DR: The findings suggest that the skew of the distribution of people to places is one of the main factors that drives trail re-identification, and a generative model for location access patterns that simulates observed behavior is developed.
Abstract: In this paper, we investigate how location access patterns influence the re-identification of seemingly anonymous data. In the real world, individuals visit different locations that gather similar information. For instance, multiple hospitals collect health information on the same patient. To protect anonymity for research purposes, hospitals share sensitive data, such as DNA sequences, stripped of explicit identifiers. Separately, for administrative functions, identified data, stripped of DNA, is made available. On a hospital-by-hospital basis, each pair of DNA and identified databases appears unlinkable; however, links can be established when multiple locations' databases are studied. This problem, known as trail re-identification, is a generalized phenomenon and occurs because an individual's location access pattern can be matched across the shared databases. Data holders cannot exchange data to find and suppress trails that would be re-identified. Thus, it is important to assess the re-identification risk in a system in order to develop techniques to mitigate it. In this research, we evaluate several real world datasets and observe that trail re-identification is related to the distribution of people to places. To study this phenomenon in more detail, we develop a generative model for location access patterns that simulates observed behavior. We evaluate trail re-identification risk in a range of simulated patterns and our findings suggest that the skew of the distribution of people to places is one of the main factors that drives trail re-identification.
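The core matching step behind trail re-identification can be sketched directly: a person whose pattern of visited locations is unique across the released databases can be linked, because the same trail appears in both the de-identified (DNA) and identified releases:

```python
def trails(visits, locations):
    """Map each person to a binary trail over the given locations.
    `visits` maps person -> set of visited locations."""
    return {p: tuple(loc in vs for loc in locations) for p, vs in visits.items()}

def reidentified(visits, locations):
    """People whose location-access trail is unique.  Their anonymous
    records (e.g. DNA) can be linked back to their identified records
    by matching trails across the locations' released databases."""
    t = trails(visits, locations)
    counts = {}
    for trail in t.values():
        counts[trail] = counts.get(trail, 0) + 1
    return {p for p, trail in t.items() if counts[trail] == 1}
```

A skewed distribution of people to places produces many rare trails, hence many unique ones, which is the mechanism behind the paper's main finding.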

Book ChapterDOI
28 Jun 2006
TL;DR: This paper identifies and explores the loss of privacy inherent in current revocation checking, and constructs a simple, efficient and flexible privacy-preserving component for one well-known revocation method.
Abstract: Digital certificates signed by trusted certification authorities (CAs) are used for multiple purposes, most commonly for secure binding of public keys to names and other attributes of their owners. Although a certificate usually includes an expiration time, it is not uncommon that a certificate needs to be revoked prematurely. For this reason, whenever a client (user or program) needs to assert the validity of another party's certificate, it performs revocation checking. There are many revocation techniques varying in both the operational model and underlying data structures. One common feature is that a client typically contacts an on-line third party (trusted, untrusted or semi-trusted), identifies the certificate of interest and obtains some form of a proof of either revocation or validity (non-revocation) for the certificate in question. While useful, revocation checking can leak potentially sensitive information. In particular, third parties of dubious trustworthiness discover two things: (1) the identity of the party posing the query, as well as (2) the target of the query. The former can be easily remedied with techniques such as onion routing or anonymous web browsing. Whereas, hiding the target of the query is not as obvious. Arguably, a more important loss of privacy results from the third party's ability to tie the source of the revocation check with the query's target. (Since, most likely, the two are about to communicate.) This paper is concerned with the problem of privacy in revocation checking and its contribution is two-fold: it identifies and explores the loss of privacy inherent in current revocation checking, and, it constructs a simple, efficient and flexible privacy-preserving component for one well-known revocation method.
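One simple way to hide the target of a revocation query, shown here purely for illustration, is a hash-prefix bucket query: the client reveals only a short prefix of the target certificate's hash, so the responder learns a bucket of candidates rather than the exact certificate. This is a generic k-anonymity-style technique, not the privacy-preserving component constructed in the paper:

```python
import hashlib

def _h(serial):
    return hashlib.sha256(serial.encode()).hexdigest()

def responder_query(revoked_serials, prefix):
    """Untrusted responder: returns the hashes of every revoked
    certificate whose hash starts with the queried prefix.  It learns
    only that the client's target lies somewhere in this bucket."""
    return [h for h in map(_h, revoked_serials) if h.startswith(prefix)]

def is_revoked(serial, query, prefix_len=2):
    """Client side: query by a short hash prefix and finish the lookup
    locally, so the exact target never leaves the client."""
    h = _h(serial)
    return h in query(h[:prefix_len])
```

A longer prefix shrinks the responder's bandwidth at the cost of a smaller anonymity bucket; the paper's construction aims at the same privacy goal within an established revocation method.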

Book ChapterDOI
28 Jun 2006
TL;DR: An approach is developed to support privacy-controlled sharing of identity attributes and harmonization of privacy policies in federated environments, along with mechanisms for tracing the release of users' identity attributes within the federation.
Abstract: Digital identity is defined as the digital representation of the information known about a specific individual or organization. An emerging approach for protecting identities of individuals while at the same time enhancing user convenience is to focus on inter-organization management of identity information. This is referred to as federated identity management. In this paper we develop an approach to support privacy controlled sharing of identity attributes and harmonization of privacy policies in federated environments. Policy harmonization mechanisms make it possible to determine whether or not the transfer of identity attributes from one entity to another violates the privacy policies stated by the former. We also provide mechanisms for tracing the release of users' identity attributes within the federation. Such an approach entails a form of accountability, since an entity non-compliant with the user's original privacy preferences can be identified. Finally, a comprehensive security analysis detailing the security properties of the approach is also offered.

Book ChapterDOI
28 Jun 2006
TL;DR: This paper has proposed a model for privacy infrastructures aiming for the distribution channel such that as soon as the picture is publicly available, the exposed individual has a chance to find it and take proper action in the first place.
Abstract: With ubiquitous use of digital camera devices, especially in mobile phones, privacy is no longer threatened by governments and companies only. The new technology creates a new threat by ordinary people, who could take and distribute pictures of an individual with no risk and little cost in any situation in public or private spaces. Fast distribution via web based photo albums, online communities and web pages exposes an individual's private life to the public. Social and legal measures are increasingly taken to deal with this problem, but they are hard to enforce in practice. In this paper, we propose a model for privacy infrastructures aiming at the distribution channel, such that as soon as a picture is publicly available, the exposed individual has a chance to find it and take proper action in the first place. The implementation issues of the proposed protocol are discussed. Digital rights management techniques are applied in our proposed infrastructure, and data identification techniques such as digital watermarking and robust perceptual hashing are proposed to enhance distributed content identification.

Book ChapterDOI
28 Jun 2006
TL;DR: This work explores private resource-pairing solutions under two models of participant behavior: honest-but-curious behavior and potentially malicious behavior, and demonstrates significant performance benefits over a popular solution to the similar private matching problem.
Abstract: Protection of information confidentiality can result in obstruction of legitimate access to necessary resources. This paper explores the problem of pairing resource requestors and providers such that neither must sacrifice privacy. While solutions to similar problems exist, these solutions are inadequate or inefficient in the context of private resource pairing. This work explores private resource-pairing solutions under two models of participant behavior: honest-but-curious behavior and potentially malicious behavior. Without compromising security, the foundation of these solutions demonstrates significant performance benefits over a popular solution to the similar private matching problem.

Proceedings Article
01 Jan 2006