
Showing papers by "George Danezis" published in 2007


Proceedings ArticleDOI
28 Oct 2007
TL;DR: It is shown that denial of service (DoS) lowers anonymity as messages need to get retransmitted to be delivered, presenting more opportunities for attack.
Abstract: We consider the effect attackers who disrupt anonymous communications have on the security of traditional high- and low-latency anonymous communication systems, as well as on the Hydra-Onion and Cashmere systems that aim to offer reliable mixing, and Salsa, a peer-to-peer anonymous communication network. We show that denial of service (DoS) lowers anonymity as messages need to get retransmitted to be delivered, presenting more opportunities for attack. We uncover a fundamental limit on the security of mix networks, showing that they cannot tolerate a majority of nodes being malicious. Cashmere, Hydra-Onion, and Salsa security is also badly affected by DoS attackers. Our results are backed by probabilistic modeling and extensive simulations and are of direct applicability to deployed anonymity systems.
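
The qualitative claim is easy to see in a toy model (our illustration, hedged; the paper's own analysis rests on probabilistic modeling and simulation): an adversary owning a fraction f of the nodes drops every message it cannot fully trace, so a message is only delivered once it happens to traverse a path that is either all honest or all compromised.

```python
# Toy model (our assumption, not the paper's exact setup): paths of length L
# over a network where a fraction f of nodes is malicious; the adversary
# drops anything it cannot fully trace, forcing retransmission.

def p_compromised_no_dos(f: float, L: int) -> float:
    """Chance a single randomly chosen path is fully compromised."""
    return f ** L

def p_compromised_with_dos(f: float, L: int) -> float:
    """Chance of compromise conditioned on eventual delivery: only
    all-honest or all-compromised paths ever deliver."""
    all_bad = f ** L
    all_good = (1 - f) ** L
    return all_bad / (all_bad + all_good)

if __name__ == "__main__":
    for f in (0.1, 0.3, 0.5):
        print(f"f={f}: no DoS {p_compromised_no_dos(f, 3):.3f}, "
              f"with DoS {p_compromised_with_dos(f, 3):.3f}")
```

For f = 0.3 and L = 3 the compromise probability jumps from about 0.027 to about 0.073, and the gap widens as f grows, which is the sense in which retransmission presents more opportunities for attack.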

165 citations


Proceedings ArticleDOI
29 Oct 2007
TL;DR: In this paper, the authors present PriPAYD, where the premium calculations are performed locally in the vehicle and only aggregate data arrives at the insurance company, without leaking location information.
Abstract: Pay-As-You-Drive insurance systems are establishing themselves as the future of car insurance. However, their current implementations entail a serious privacy invasion. We present PriPAYD, where the premium calculations are performed locally in the vehicle and only aggregate data arrives at the insurance company, without leaking location information. Our system is built on top of well-understood security techniques that ensure its correct functioning. We discuss the viability of PriPAYD in terms of cost, security and ease of certification.
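
A minimal sketch of the division of labour the abstract describes, assuming a made-up tariff table, key handling, and reporting format (none of these are the paper's concrete design): the premium is computed inside the vehicle from location data that never leaves it, and only the aggregate plus an authenticator is sent to the insurer.

```python
import hashlib
import hmac

# Hypothetical per-km tariff by road/time category (illustrative values only).
TARIFF = {"urban": 0.05, "motorway": 0.02, "night": 0.08}

def premium_for_trip(segments):
    """segments: list of (category, km) pairs derived locally from GPS data."""
    return sum(TARIFF[cat] * km for cat, km in segments)

def monthly_report(trips, key: bytes):
    """Aggregate premium plus a MAC; raw location data stays in the vehicle."""
    total = round(sum(premium_for_trip(t) for t in trips), 2)
    tag = hmac.new(key, f"{total:.2f}".encode(), hashlib.sha256).hexdigest()
    return {"premium": total, "mac": tag}

trips = [[("urban", 12.0), ("night", 3.5)], [("motorway", 80.0)]]
print(monthly_report(trips, key=b"device-key-shared-with-insurer"))
```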

72 citations


Book ChapterDOI
20 Jun 2007
TL;DR: A new traffic analysis attack, the Two-sided Statistical Disclosure Attack, which tries to uncover the receivers of messages sent through an anonymizing network supporting anonymous replies, is introduced, and a linear approximation describing the likely receivers of sent messages is proposed.
Abstract: We introduce a new traffic analysis attack: the Two-sided Statistical Disclosure Attack, that tries to uncover the receivers of messages sent through an anonymizing network supporting anonymous replies. We provide an abstract model of an anonymity system with users that reply to messages. Based on this model, we propose a linear approximation describing the likely receivers of sent messages. Using simulations, we evaluate the new attack given different traffic characteristics and we show that it is superior to previous attacks when replies are routed in the system.
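
To give a flavour of such a linear approximation, the sketch below implements the one-sided core common to statistical disclosure attacks (the paper's two-sided variant additionally models reply traffic); the round model and names are our assumptions.

```python
import numpy as np

# In each round the target sends one of b messages, so the observed recipient
# histogram O mixes the target's sending profile v with the background u:
#     O ~ (1/b) * v + ((b - 1) / b) * u
# Averaging many such rounds and inverting the mixture estimates v.

def estimate_target_profile(observations: np.ndarray, u: np.ndarray, b: int):
    """observations: rounds x recipients matrix, each row the normalised
    recipient histogram of a round in which the target sent a message."""
    o_bar = observations.mean(axis=0)
    v_hat = np.clip(b * o_bar - (b - 1) * u, 0.0, None)
    return v_hat / max(v_hat.sum(), 1e-12)  # renormalise to a distribution

u = np.full(4, 0.25)  # uniform background over 4 recipients
obs = np.array([[0.5, 0.25, 0.25, 0.0],
                [0.5, 0.0, 0.25, 0.25]])
print(estimate_target_profile(obs, u, b=4))  # mass concentrates on recipient 0
```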

69 citations


Book ChapterDOI
01 Nov 2007
TL;DR: Civilian infrastructures, on which state and economic actors are increasingly reliant, are ever more vulnerable to traffic analysis: wireless and GSM telephony are replacing traditional systems, routing is transparent and protocols are overlaid over others – giving plenty of opportunity to observe and take advantage of the traffic data.
Abstract: In the Second World War, traffic analysis was used by the British at Bletchley Park to assess the size of Germany's air force, and Japanese traffic analysis countermeasures contributed to the surprise of their 1941 attack on Pearl Harbour. Nowadays, Google uses the incidence of links to assess the relative importance of web pages, credit card companies examine transactions to spot fraudulent patterns of spending, and amateur plane-spotters revealed the CIA's 'extraordinary rendition' programme. Diffie and Landau, in their book on wiretapping, went so far as to say that "traffic analysis, not cryptanalysis, is the backbone of communications intelligence" [1]. However, until recently the topic has been neglected by Computer Science academics. A rich literature discusses how to secure the confidentiality, integrity and availability of communication content, but very little work has considered the information leaked from communications 'traffic data' and how these compromises might be minimised. Traffic data records the time and duration of a communication, and traffic analysis examines this data to determine the detailed shape of the communication streams, the identities of the parties communicating, and what can be established about their locations. The data may even be sketchy or incomplete – simply knowing what 'typical' communication patterns look like can be used to infer information about a particular observed communication. Civilian infrastructures, on which state and economic actors are increasingly reliant, are ever more vulnerable to traffic analysis: wireless and GSM telephony are replacing traditional systems, routing is transparent and protocols are overlaid over others – giving plenty of opportunity to observe and take advantage of the traffic data. Concretely, an attacker can make use of this […]

42 citations


Proceedings ArticleDOI
29 Oct 2007
TL;DR: The relation of information-theoretic anonymity metrics, which use entropy over the distribution of all possible recipients to quantify anonymity, to the Shannon conditional entropy, which is an average over all possible observations, is shown.
Abstract: We discuss information-theoretic anonymity metrics that use entropy over the distribution of all possible recipients to quantify anonymity. We identify a common misconception: the entropy of the distribution describing the potential receivers does not always decrease given more information. We show the relation of these a-posteriori distributions with the Shannon conditional entropy, which is an average over all possible observations.
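
The misconception is easy to reproduce numerically. Below (numbers ours, chosen to be Bayes-consistent with the prior), one observation raises the a-posteriori entropy above the prior entropy, yet the conditional entropy, being the average over observations, stays below it.

```python
import math

def entropy(dist):
    return -sum(p * math.log2(p) for p in dist if p > 0)

prior = [0.7, 0.1, 0.1, 0.1]  # attacker's prior over four recipients
# observation -> (a-posteriori distribution, probability of that observation);
# note 0.4 * obs_A + 0.6 * obs_B recovers the prior, as Bayes requires.
posteriors = {
    "obs_A": ([0.25, 0.25, 0.25, 0.25], 0.4),
    "obs_B": ([1.0, 0.0, 0.0, 0.0], 0.6),
}

print("prior entropy:", entropy(prior))                 # ~1.357 bits
print("after obs_A:", entropy(posteriors["obs_A"][0]))  # 2.0 bits: it grew
cond = sum(p * entropy(d) for d, p in posteriors.values())
print("conditional entropy:", cond)                     # 0.8 bits <= prior
```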

28 citations


Journal ArticleDOI
TL;DR: In this paper, the authors analyze four schemes related to mix networks that make use of Universal Re-encryption and find serious weaknesses in all of them, including two mix schemes and the rWonGoo anonymous channel.
Abstract: Universal Re-encryption allows El-Gamal ciphertexts to be re-encrypted without knowledge of their corresponding public keys. This has made it an enticing building block for anonymous communications protocols. In this work we analyze four schemes related to mix networks that make use of Universal Re-encryption and find serious weaknesses in all of them. Universal Re-encryption of signatures is open to existential forgery; two mix schemes can be fully compromised by a passive adversary observing a single message close to the sender; the fourth scheme, the rWonGoo anonymous channel, turns out to be less secure than the original Crowds scheme, on which it is based. Our attacks make extensive use of unintended "services" provided by the network nodes acting as decryption and re-routing oracles. Finally, our attacks against rWonGoo demonstrate that anonymous channels are not automatically composable: using two of them in a careless manner makes the system more vulnerable to attack.
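
For orientation, here is a minimal sketch of the primitive itself as we understand Golle et al.'s construction, with toy, insecure parameters: a ciphertext pairs an ElGamal encryption of the message with an encryption of the identity element, so anyone can re-randomise it without knowing the public key; it is exactly this key-free malleability that the attacks above exploit.

```python
import secrets

# Toy Universal Re-encryption (insecure parameters, illustration only).
P = 2**127 - 1   # a Mersenne prime; real deployments use proper groups
G = 3

def keygen():
    x = secrets.randbelow(P - 2) + 1
    return x, pow(G, x, P)            # secret key, public key

def encrypt(m: int, y: int):
    k0 = secrets.randbelow(P - 2) + 1
    k1 = secrets.randbelow(P - 2) + 1
    # (ElGamal of m) plus (ElGamal of the identity element 1)
    return [(m * pow(y, k0, P) % P, pow(G, k0, P)),
            (pow(y, k1, P),          pow(G, k1, P))]

def reencrypt(ct):
    # Needs no public key: re-randomise using the encryption of 1.
    (a0, b0), (a1, b1) = ct
    r0 = secrets.randbelow(P - 2) + 1
    r1 = secrets.randbelow(P - 2) + 1
    return [(a0 * pow(a1, r0, P) % P, b0 * pow(b1, r0, P) % P),
            (pow(a1, r1, P),          pow(b1, r1, P))]

def decrypt(ct, x: int):
    (a0, b0), (a1, b1) = ct
    assert a1 == pow(b1, x, P)        # recognise the ciphertext as ours
    return a0 * pow(pow(b0, x, P), -1, P) % P

x, y = keygen()
assert decrypt(reencrypt(encrypt(42, y)), x) == 42
```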

26 citations


Book ChapterDOI
09 Oct 2007
TL;DR: This work improves significantly over previous work presented at ISC 2006 by Esponda et al., by showing constructions for negative databases reducible to the security of well-understood primitives, such as cryptographic hash functions or the hardness of the Discrete-Logarithm problem.
Abstract: A negative database is a privacy-preserving storage system that makes it possible to efficiently test whether an entry is present, but makes it hard to enumerate all encoded entries. We improve significantly over previous work presented at ISC 2006 by Esponda et al. [9], by showing constructions for negative databases reducible to the security of well-understood primitives, such as cryptographic hash functions or the hardness of the Discrete-Logarithm problem. Our constructions require only O(m) storage in the number m of entries in the database, and linear query time (compared to O(l · m) storage and O(l · m) query time, where l is a security parameter). Our claims are supported by both proofs of security and experimental performance measurements.
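
The functional property alone (membership is cheap to test, enumeration is hard) is already exhibited by a salted-hash store, which is the flavour of the hash-based construction the abstract hints at; the sketch below is our simplification, not the paper's proven construction.

```python
import hashlib
import os

class NegativeDB:
    """O(m) storage; one hash per membership test; enumerating the encoded
    entries requires inverting the hash (our simplified sketch)."""

    def __init__(self, entries):
        self.salt = os.urandom(16)
        self.digests = {self._h(e) for e in entries}

    def _h(self, entry: bytes) -> bytes:
        return hashlib.sha256(self.salt + entry).digest()

    def contains(self, entry: bytes) -> bool:
        return self._h(entry) in self.digests

db = NegativeDB([b"alice@example.com", b"bob@example.com"])
assert db.contains(b"alice@example.com")
assert not db.contains(b"carol@example.com")
```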

23 citations


Book ChapterDOI
12 Feb 2007
TL;DR: This work improves the space efficiency of the Ostrovsky et al. Private Search scheme by describing methods that require considerably shorter buffers for returning the results of the search.
Abstract: Private keyword search is a technique that allows for searching and retrieving documents matching certain keywords without revealing the search criteria. We improve the space efficiency of the Ostrovsky et al. Private Search [9] scheme by describing methods that require considerably shorter buffers for returning the results of the search. Our basic decoding scheme, recursive extraction, requires buffers of length less than twice the number of returned results and is still simple and highly efficient. Our extended decoding schemes rely on solving systems of simultaneous equations, and in special cases can uncover documents in buffers that are close to 95% full. Finally, we note the similarity between our decoding techniques and the ones used to decode rateless codes, and show how such codes can be extracted from encrypted documents.
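
A plaintext toy conveys the decoding idea (the real scheme operates on homomorphically encrypted buffers; the placement map and XOR bookkeeping here are our simplifications): buckets holding exactly one document are read off directly and then peeled out of every other bucket they touched, which can free further singletons; this is the same peeling used to decode rateless codes.

```python
# Each document is folded into a fixed set of buckets; a bucket stores
# [count, xor of doc ids, xor of payloads]. (Toy placement map, ours.)
POSITIONS = {1: {0, 1, 2}, 2: {2, 3, 4}, 3: {4, 5, 6}}

def encode(docs):                        # docs: {doc_id: payload int}
    buf = [[0, 0, 0] for _ in range(7)]
    for d, payload in docs.items():
        for p in POSITIONS[d]:
            buf[p][0] += 1
            buf[p][1] ^= d
            buf[p][2] ^= payload
    return buf

def recursive_extraction(buf):
    found, progress = {}, True
    while progress:
        progress = False
        for cell in buf:
            if cell[0] == 1:             # singleton bucket: read it off
                d, payload = cell[1], cell[2]
                for p in POSITIONS[d]:   # peel the document out everywhere
                    buf[p][0] -= 1
                    buf[p][1] ^= d
                    buf[p][2] ^= payload
                found[d] = payload
                progress = True
    return found

docs = {1: 111, 2: 222, 3: 333}
assert recursive_extraction(encode(docs)) == docs
```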

23 citations


01 Dec 2007
TL;DR: This work discusses membership testing problems and solutions, set in the context of security authentication protocols, and presents new building blocks which could be used to generate secret society protocols more robustly and generically, including the lie channel and the compulsory arbitrary decision model.
Abstract: We continue the popular theme of offline security by considering how computer security might be applied to the challenges presented in running a secret society. We discuss membership testing problems and solutions, set in the context of security authentication protocols, and present new building blocks which could be used to generate secret society protocols more robustly and generically, including the lie channel and the compulsory arbitrary decision model.

Book ChapterDOI
18 Apr 2007
TL;DR: This work proposes a scheme that extracts enough information to allow for filtering, based on users being embedded in a social network; it maintains the privacy of the poster and does not require full identification to work well.
Abstract: We present the problem of abusive, off-topic or repetitive postings on open publishing websites, and the difficulties associated with filtering them out. We propose a scheme that extracts enough information to allow for filtering, based on users being embedded in a social network. Our system maintains the privacy of the poster, and does not require full identification to work well. We present a concrete realization using constructions based on discrete logarithms, and a sketch of how our scheme could be implemented in a centralized fashion.
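
The paper's concrete discrete-logarithm realization is not reproduced here; as a loose illustration (entirely our construction) of how discrete logarithms can support filtering without identification, a per-context pseudonym lets moderators link repeat posts inside one thread while keeping threads mutually unlinkable without the user's secret.

```python
import hashlib

P = 2**127 - 1  # toy prime modulus; insecure, for illustration only

def thread_generator(thread_id: str) -> int:
    """Hash the thread id into a group element (hypothetical helper)."""
    return int.from_bytes(hashlib.sha256(thread_id.encode()).digest(), "big") % P

def pseudonym(secret_x: int, thread_id: str) -> int:
    """nym = H(thread)^x mod p: stable within a thread, unlinkable across
    threads without x (a discrete-log-style assumption)."""
    return pow(thread_generator(thread_id), secret_x, P)

x_alice = 0x5EC12E7  # the user's long-term secret exponent
assert pseudonym(x_alice, "thread-42") == pseudonym(x_alice, "thread-42")
assert pseudonym(x_alice, "thread-42") != pseudonym(x_alice, "thread-43")
```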