
Showing papers by "George Danezis published in 2017"


Posted Content
TL;DR: A systematic and comprehensive study of blockchain consensus protocols is conducted, developing a framework to evaluate their performance, security and design properties, and using it to systematize key themes in the protocol categories described above.
Abstract: The blockchain initially gained traction in 2008 as the technology underlying bitcoin, but now has been employed in a diverse range of applications and created a global market worth over $150B as of 2017. What distinguishes blockchains from traditional distributed databases is the ability to operate in a decentralized setting without relying on a trusted third party. As such their core technical component is consensus: how to reach agreement among a group of nodes. This has been extensively studied already in the distributed systems community for closed systems, but its application to open blockchains has revitalized the field and led to a plethora of new designs. The inherent complexity of consensus protocols and their rapid and dramatic evolution makes it hard to contextualize the design landscape. We address this challenge by conducting a systematic and comprehensive study of blockchain consensus protocols. After first discussing key themes in classical consensus protocols, we describe: first, protocols based on proof-of-work (PoW); second, proof-of-X (PoX) protocols that replace PoW with more energy-efficient alternatives; and third, hybrid protocols that are compositions or variations of classical consensus protocols. We develop a framework to evaluate their performance, security and design properties, and use it to systematize key themes in the protocol categories described above. This evaluation leads us to identify research gaps and challenges for the community to consider in future research endeavours.

228 citations
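The PoW family surveyed above rests on a simple primitive: miners search for a nonce that drives a block hash below a difficulty target, while verification takes a single hash. A minimal sketch (the `mine`/`verify` helpers are ours for illustration, not from the paper):

```python
import hashlib

def mine(block_data: bytes, difficulty_bits: int) -> int:
    """Search for a nonce so that SHA-256(block_data || nonce)
    falls below the difficulty target."""
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(block_data + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

def verify(block_data: bytes, nonce: int, difficulty_bits: int) -> bool:
    """Checking a solution costs one hash, regardless of mining cost."""
    digest = hashlib.sha256(block_data + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - difficulty_bits))
```

Expected work doubles with each extra difficulty bit, which is what makes the puzzle a proxy for expended resources.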


Posted Content
TL;DR: This paper defines a game between three parties, Alice, Bob and Eve, and shows that adversarial training can produce robust steganographic techniques: the unsupervised training scheme produces a steganographic algorithm that competes with state-of-the-art steganographic techniques.
Abstract: Adversarial training was recently shown to be competitive against supervised learning methods on computer vision tasks, however, studies have mainly been confined to generative tasks such as image synthesis. In this paper, we apply adversarial training techniques to the discriminative task of learning a steganographic algorithm. Steganography is a collection of techniques for concealing information by embedding it within a non-secret medium, such as cover texts or images. We show that adversarial training can produce robust steganographic techniques: our unsupervised training scheme produces a steganographic algorithm that competes with state-of-the-art steganographic techniques, and produces a robust steganalyzer, which performs the discriminative task of deciding if an image contains secret information. We define a game between three parties, Alice, Bob and Eve, in order to simultaneously train both a steganographic algorithm and a steganalyzer. Alice and Bob attempt to communicate a secret message contained within an image, while Eve eavesdrops on their conversation and attempts to determine if secret information is embedded within the image. We represent Alice, Bob and Eve by neural networks, and validate our scheme on two independent image datasets, showing our novel method of studying steganographic problems is surprisingly competitive against established steganographic techniques.

151 citations
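The paper's scheme is a learned neural embedding, but the classical baselines it competes with can be illustrated with least-significant-bit embedding; a toy sketch (flat pixel lists instead of images, helper names ours):

```python
def embed_lsb(cover, message_bits):
    """Hide each message bit in the least significant bit of a pixel."""
    assert len(message_bits) <= len(cover)
    stego = list(cover)
    for i, bit in enumerate(message_bits):
        stego[i] = (stego[i] & ~1) | bit  # clear LSB, then set it to the bit
    return stego

def extract_lsb(stego, n_bits):
    """Recover the hidden bits from the first n_bits pixels."""
    return [p & 1 for p in stego[:n_bits]]
```

A steganalyzer such as the paper's Eve plays the opposite role: distinguishing such stego images from clean covers.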


Posted Content
TL;DR: This paper presents the first membership inference attacks against generative models: given a data point, the adversary determines whether or not it was used to train the model, using Generative Adversarial Networks.
Abstract: Generative models estimate the underlying distribution of a dataset to generate realistic samples according to that distribution. In this paper, we present the first membership inference attacks against generative models: given a data point, the adversary determines whether or not it was used to train the model. Our attacks leverage Generative Adversarial Networks (GANs), which combine a discriminative and a generative model, to detect overfitting and recognize inputs that were part of training datasets, using the discriminator's capacity to learn statistical differences in distributions. We present attacks based on both white-box and black-box access to the target model, against several state-of-the-art generative models, over datasets of complex representations of faces (LFW), objects (CIFAR-10), and medical images (Diabetic Retinopathy). We also discuss the sensitivity of the attacks to different training parameters, and their robustness against mitigation strategies, finding that defenses are either ineffective or lead to significantly worse performances of the generative models in terms of training stability and/or sample quality.

141 citations
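The attack's core idea, using a discriminator's confidence to separate training members from non-members, can be sketched independently of any particular GAN (the `discriminator` below is a stand-in callable, not the paper's model):

```python
def membership_attack(discriminator, candidates, n_members):
    """Rank candidate points by discriminator score and predict the
    top-scoring ones as training-set members (white-box setting)."""
    ranked = sorted(candidates, key=discriminator, reverse=True)
    return ranked[:n_members]

# Toy stand-in: an overfit discriminator scores points close to the
# (hypothetical) training data higher than points far from it.
def toy_discriminator(x):
    return -abs(x - 0.5)
```

The stronger the overfitting, the larger the score gap between members and non-members, which is why the paper ties the attack to detecting overfitting.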


Proceedings Article
04 Dec 2017
TL;DR: In this paper, adversarial training is applied to the discriminative task of learning a steganographic algorithm, which is a collection of techniques for concealing the existence of information by embedding it within a non-secret medium, such as cover texts or images.
Abstract: Adversarial training has proved to be competitive against supervised learning methods on computer vision tasks. However, studies have mainly been confined to generative tasks such as image synthesis. In this paper, we apply adversarial training techniques to the discriminative task of learning a steganographic algorithm. Steganography is a collection of techniques for concealing the existence of information by embedding it within a non-secret medium, such as cover texts or images. We show that adversarial training can produce robust steganographic techniques: our unsupervised training scheme produces a steganographic algorithm that competes with state-of-the-art steganographic techniques. We also show that supervised training of our adversarial model produces a robust steganalyzer, which performs the discriminative task of deciding if an image contains secret information. We define a game between three parties, Alice, Bob and Eve, in order to simultaneously train both a steganographic algorithm and a steganalyzer. Alice and Bob attempt to communicate a secret message contained within an image, while Eve eavesdrops on their conversation and attempts to determine if secret information is embedded within the image. We represent Alice, Bob and Eve by neural networks, and validate our scheme on two independent image datasets, showing our novel method of studying steganographic problems is surprisingly competitive against established steganographic techniques.

126 citations


Posted Content
22 May 2017
TL;DR: This paper presents the first membership inference attack on generative models, training a Generative Adversarial Network, which combines a discriminative and a generative model, to detect overfitting and recognize inputs that are part of training datasets by relying on the discriminator's capacity to learn statistical differences in distributions.
Abstract: Generative models estimate the underlying distribution of a dataset to generate realistic samples according to that distribution. In this paper, we present the first membership inference attacks against generative models: given a data point, the adversary determines whether or not it was used to train the model. Our attacks leverage Generative Adversarial Networks (GANs), which combine a discriminative and a generative model, to detect overfitting and recognize inputs that were part of training datasets, using the discriminator's capacity to learn statistical differences in distributions. We present attacks based on both white-box and black-box access to the target model, against several state-of-the-art generative models, over datasets of complex representations of faces (LFW), objects (CIFAR-10), and medical images (Diabetic Retinopathy). We also discuss the sensitivity of the attacks to different training parameters, and their robustness against mitigation strategies, finding that defenses are either ineffective or lead to significantly worse performances of the generative models in terms of training stability and/or sample quality.

98 citations


Posted Content
TL;DR: A theoretical analysis of the Poisson mixing strategy as well as an empirical evaluation of the anonymity provided by the protocol and a functional implementation that is analyzed in terms of scalability by running it on AWS EC2 are provided.
Abstract: We present Loopix, a low-latency anonymous communication system that provides bi-directional 'third-party' sender and receiver anonymity and unobservability. Loopix leverages cover traffic and brief message delays to provide anonymity and achieve traffic analysis resistance, including against a global network adversary. Mixes and clients self-monitor the network via loops of traffic to provide protection against active attacks, and inject cover traffic to provide stronger anonymity and a measure of sender and receiver unobservability. Service providers mediate access in and out of a stratified network of Poisson mix nodes to facilitate accounting and off-line message reception, as well as to keep the number of links in the system low, and to concentrate cover traffic. We provide a theoretical analysis of the Poisson mixing strategy as well as an empirical evaluation of the anonymity provided by the protocol and a functional implementation that we analyze in terms of scalability by running it on AWS EC2. We show that a Loopix relay can handle upwards of 300 messages per second, at a small delay overhead of less than 1.5 ms on top of the delays introduced into messages to provide security. Overall message latency is on the order of seconds, which is low for a mix-system. Furthermore, many mix nodes can be securely added to a stratified topology to scale throughput without sacrificing anonymity.

75 citations


Proceedings Article
16 Aug 2017
TL;DR: Loopix as mentioned in this paper is a low-latency anonymous communication system that provides bi-directional 'third-party' sender and receiver anonymity and unobservability, which leverages cover traffic and Poisson mixing to provide anonymity and to achieve traffic analysis resistance against a global network adversary.
Abstract: We present Loopix, a low-latency anonymous communication system that provides bi-directional 'third-party' sender and receiver anonymity and unobservability. Loopix leverages cover traffic and Poisson mixing--brief independent message delays--to provide anonymity and to achieve traffic analysis resistance against adversaries, including a global network adversary. Mixes and clients self-monitor and protect against active attacks via self-injected loops of traffic. The traffic loops also serve as cover traffic to provide stronger anonymity and a measure of sender and receiver unobservability. Loopix is instantiated as a network of Poisson mix nodes in a stratified topology with a low number of links, which serve to further concentrate cover traffic. Service providers mediate access in and out of the network to facilitate accounting and off-line message reception. We provide a theoretical analysis of the Poisson mixing strategy as well as an empirical evaluation of the anonymity provided by the protocol and a functional implementation that we analyze in terms of scalability by running it on AWS EC2. We show that mix nodes in Loopix can handle upwards of 300 messages per second, at a small delay overhead of less than 1.5 ms on top of the delays introduced into messages to provide security. Overall message latency is on the order of seconds, which is relatively low for a mix-system. Furthermore, many mix nodes can be securely added to the stratified topology to scale throughput without sacrificing anonymity.

59 citations
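The Poisson mixing strategy can be sketched as follows: each packet independently draws an exponential delay, so departure order is decorrelated from arrival order (function and parameter names are ours, not Loopix's wire format):

```python
import heapq
import random

def poisson_mix(packets, rate, now=0.0, rng=None):
    """Assign each packet an independent Exp(rate) delay (mean 1/rate)
    and release packets in order of scheduled departure time."""
    rng = rng or random.Random(0)  # fixed seed only for reproducibility
    schedule = [(now + rng.expovariate(rate), p) for p in packets]
    heapq.heapify(schedule)
    return [heapq.heappop(schedule) for _ in range(len(schedule))]
```

The memoryless property of the exponential distribution is what makes a mix node's internal state uninformative to an observer timing its inputs and outputs.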


Posted Content
TL;DR: In this article, the authors introduce universal adversarial networks, a generative network that is capable of fooling a target classifier when its generated output is added to a clean sample from a dataset.
Abstract: Neural networks are known to be vulnerable to adversarial examples, inputs that have been intentionally perturbed to remain visually similar to the source input, but cause a misclassification. It was recently shown that given a dataset and classifier, there exist so-called universal adversarial perturbations, a single perturbation that causes a misclassification when applied to any input. In this work, we introduce universal adversarial networks, a generative network that is capable of fooling a target classifier when its generated output is added to a clean sample from a dataset. We show that this technique improves on known universal adversarial attacks.

57 citations


Journal ArticleDOI
01 Oct 2017
TL;DR: It is argued that a combination of insights from cryptography, distributed systems, and mechanism design, aligned with the development of adequate incentives, are necessary to build scalable and successful privacy-preserving decentralized systems.
Abstract: Decentralized systems are a subset of distributed systems where multiple authorities control different components and no authority is fully trusted by all. This implies that any component in a decentralized system is potentially adversarial. We revise fifteen years of research on decentralization and privacy, and provide an overview of key systems, as well as key insights for designers of future systems. We show that decentralized designs can enhance privacy, integrity, and availability but also require careful trade-offs in terms of system complexity, properties provided, and degree of decentralization. These trade-offs need to be understood and navigated by designers. We argue that a combination of insights from cryptography, distributed systems, and mechanism design, aligned with the development of adequate incentives, are necessary to build scalable and successful privacy-preserving decentralized systems.

57 citations



Posted Content
17 Aug 2017
TL;DR: A direct attack against black-box neural networks that uses another attacker neural network to learn to craft adversarial examples that can transfer to different machine learning models such as Random Forest, SVM, and K-Nearest Neighbor is introduced.
Abstract: Neural networks are known to be vulnerable to adversarial examples, inputs that have been intentionally perturbed to remain visually similar to the source input, but cause a misclassification. Until now, black-box attacks against neural networks have relied on transferability of adversarial examples. White-box attacks are used to generate adversarial examples on a substitute model and then transferred to the black-box target model. In this paper, we introduce a direct attack against black-box neural networks, which uses another attacker neural network to learn to craft adversarial examples. We show that our attack is capable of crafting adversarial examples that are indistinguishable from the source input and are misclassified with overwhelming probability, reducing the accuracy of the black-box neural network from 99.4% to 0.77% on the MNIST dataset, and from 91.4% to 6.8% on the CIFAR-10 dataset. Our attack can adapt and reduce the effectiveness of proposed defenses against adversarial examples, requires very little training data, and produces adversarial examples that can transfer to different machine learning models such as Random Forest, SVM, and K-Nearest Neighbor. To demonstrate the practicality of our attack, we launch a live attack against a target black-box model hosted online by Amazon: the crafted adversarial examples reduce its accuracy from 91.8% to 61.3%. Additionally, we show attacks proposed in the literature have unique, identifiable distributions. We use this information to train a classifier that is robust against such attacks.

Posted Content
TL;DR: Chainspace as mentioned in this paper is a distributed ledger that supports user-defined smart contracts and executes user-supplied transactions on their objects, and the correct execution of smart contract transactions is verifiable by all.
Abstract: Chainspace is a decentralized infrastructure, known as a distributed ledger, that supports user-defined smart contracts and executes user-supplied transactions on their objects. The correct execution of smart contract transactions is verifiable by all. The system is scalable, by sharding state and the execution of transactions, and by using S-BAC, a distributed commit protocol, to guarantee consistency. Chainspace is secure against subsets of nodes trying to compromise its integrity or availability properties through Byzantine Fault Tolerance (BFT), and offers extremely high auditability and non-repudiation through 'blockchain' techniques. Even when BFT fails, auditing mechanisms are in place to trace malicious participants. We present the design, rationale, and details of Chainspace; we evaluate an implementation of the system to argue for its scaling and other features; and we illustrate a number of privacy-friendly smart contracts for smart metering, polling and banking, and measure their performance.

Patent
02 May 2017
TL;DR: In this paper, a system for the identification and protection of sensitive data in multiple ways, which can be combined for different workflows, data situations or use cases, is discussed; it uses differentially private algorithms to reduce or prevent the risk of identification or disclosure of sensitive information.
Abstract: A system allows the identification and protection of sensitive data in multiple ways, which can be combined for different workflows, data situations or use cases. The system scans datasets to identify sensitive data or identifying datasets, and to enable the anonymisation of sensitive or identifying datasets by processing that data to produce a safe copy. Furthermore, the system prevents access to a raw dataset. The system enables privacy preserving aggregate queries and computations. The system uses differentially private algorithms to reduce or prevent the risk of identification or disclosure of sensitive information. The system scales to big data and is implemented in a way that supports parallel execution on a distributed compute cluster.
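Differentially private aggregate queries of the kind described above are typically realized with the Laplace mechanism; a minimal textbook sketch (the patented system's actual algorithms are not given here, so this is an illustration only):

```python
import math
import random

def laplace_mechanism(true_value, sensitivity, epsilon, rng=None):
    """Add Laplace(sensitivity/epsilon) noise to a query answer,
    giving epsilon-differential privacy for a query with the given
    L1 sensitivity (sampled via the inverse CDF)."""
    rng = rng or random.Random(0)  # fixed seed only for reproducibility
    scale = sensitivity / epsilon
    u = rng.random() - 0.5
    return true_value - scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
```

A smaller epsilon means a larger noise scale: stronger privacy at the cost of query accuracy.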


Posted Content
TL;DR: Myst is a practical high-assurance architecture that uses commercial off-the-shelf (COTS) hardware and provides strong security guarantees, even in the presence of multiple malicious or faulty components; experiments show an exponential increase in backdoor-tolerance as more ICs are added.
Abstract: The semiconductor industry is fully globalized and integrated circuits (ICs) are commonly defined, designed and fabricated in different premises across the world. This reduces production costs, but also exposes ICs to supply chain attacks, where insiders introduce malicious circuitry into the final products. Additionally, despite extensive post-fabrication testing, it is not uncommon for ICs with subtle fabrication errors to make it into production systems. While many systems may be able to tolerate a few byzantine components, this is not the case for cryptographic hardware, storing and computing on confidential data. For this reason, many error and backdoor detection techniques have been proposed over the years. So far all attempts have been either quickly circumvented, or come with unrealistically high manufacturing costs and complexity. This paper proposes Myst, a practical high-assurance architecture, that uses commercial off-the-shelf (COTS) hardware, and provides strong security guarantees, even in the presence of multiple malicious or faulty components. The key idea is to combine protective-redundancy with modern threshold cryptographic techniques to build a system tolerant to hardware trojans and errors. To evaluate our design, we build a Hardware Security Module that provides the highest level of assurance possible with COTS components. Specifically, we employ more than a hundred COTS secure crypto-coprocessors, verified to FIPS140-2 Level 4 tamper-resistance standards, and use them to realize high-confidentiality random number generation, key derivation, public key decryption and signing. Our experiments show a reasonable computational overhead (less than 1% for both Decryption and Signing) and an exponential increase in backdoor-tolerance as more ICs are added.

Proceedings ArticleDOI
30 Oct 2017
TL;DR: In this article, the authors proposed Myst, a high-assurance architecture that uses commercial off-the-shelf (COTS) hardware and provides strong security guarantees, even in the presence of multiple malicious or faulty components.
Abstract: The semiconductor industry is fully globalized and integrated circuits (ICs) are commonly defined, designed and fabricated in different premises across the world. This reduces production costs, but also exposes ICs to supply chain attacks, where insiders introduce malicious circuitry into the final products. Additionally, despite extensive post-fabrication testing, it is not uncommon for ICs with subtle fabrication errors to make it into production systems. While many systems may be able to tolerate a few byzantine components, this is not the case for cryptographic hardware, storing and computing on confidential data. For this reason, many error and backdoor detection techniques have been proposed over the years. So far all attempts have been either quickly circumvented, or come with unrealistically high manufacturing costs and complexity. This paper proposes Myst, a practical high-assurance architecture that uses commercial off-the-shelf (COTS) hardware, and provides strong security guarantees, even in the presence of multiple malicious or faulty components. The key idea is to combine protective-redundancy with modern threshold cryptographic techniques to build a system tolerant to hardware trojans and errors. To evaluate our design, we build a Hardware Security Module that provides the highest level of assurance possible with COTS components. Specifically, we employ more than a hundred COTS secure crypto-coprocessors, verified to FIPS140-2 Level 4 tamper-resistance standards, and use them to realize high-confidentiality random number generation, key derivation, public key decryption and signing. Our experiments show a reasonable computational overhead (less than 1% for both Decryption and Signing) and an exponential increase in backdoor-tolerance as more ICs are added.
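The protective-redundancy idea, combining outputs from many ICs so that one honest device suffices, can be illustrated for random number generation (`secrets.token_bytes` stands in for a coprocessor's RNG; the share-combining below is a sketch, not Myst's actual protocol):

```python
import functools
import secrets

def combined_random(n_ics, n_bytes=32, backdoored=frozenset()):
    """XOR randomness from n_ics devices; the result stays unpredictable
    as long as at least one contributing device is honest."""
    shares = [
        bytes(n_bytes) if i in backdoored else secrets.token_bytes(n_bytes)
        for i in range(n_ics)  # a backdoored IC outputs all zeros here
    ]
    return functools.reduce(
        lambda a, b: bytes(x ^ y for x, y in zip(a, b)), shares
    )
```

Even with all but one device compromised, the XOR of the shares inherits the honest device's entropy, which is the intuition behind the exponential growth in backdoor-tolerance reported above.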

Journal Article
TL;DR: Miranda as discussed by the authors uses both detection of corrupt mixes and detection of faults related to a pair of mixes, without identifying which of the two is faulty; each detected fault reduces connectivity for corrupt mixes and their ability to attack.
Abstract: Mix networks are a key technology to achieve network anonymity and private messaging, voting and database lookups. However, simple mix network designs are vulnerable to malicious mixes, which may drop or delay packets to facilitate traffic analysis attacks. Mix networks with provable robustness address this drawback through complex and expensive proofs of correct shuffling but come at a great cost and make limiting or unrealistic systems assumptions. We present Miranda, an efficient mix-net design, which mitigates active attacks by malicious mixes. Miranda uses both the detection of corrupt mixes, as well as detection of faults related to a pair of mixes, without detection of the faulty one among the two. Each active attack -- including dropping packets -- leads to reduced connectivity for corrupt mixes and reduces their ability to attack, and, eventually, to detection of corrupt mixes. We show, through experiments, the effectiveness of Miranda, by demonstrating how malicious mixes are detected and that attacks are neutralized early.

Posted Content
19 Jul 2017
TL;DR: Public key infrastructure (PKI) is a necessary component for the functioning of modern secure communications, and it introduces a privacy problem that does not exist in centralized PKI designs.
Abstract: Public key infrastructure (PKI) is a necessary component for the functioning of modern secure communications. It allows communicating parties to establish cryptographic keys for their correspondents by maintaining high-integrity bindings between users (names, addresses, or other identifiers) and the public keys used to encrypt and verify messages. Existing PKI systems provide different trade-offs between security properties. Most designs tend to be centralized, which eases the update of keys and provides good availability. A centralized design, however, requires users to trust the centralized source of authority to honestly provide correct public keys for requested identities, and puts such an authority in a privileged position to perform surveillance on users. On the other hand, current approaches that alleviate these issues are limited in terms of flexibility and capability to rapidly revoke and update keys. Thus, they are not suitable for modern applications that require frequent key changes for security reasons. We envision a decentralized PKI design, which we call ClaimChain, where each user or device maintains repositories of claims regarding their own key material, and their beliefs about the public keys and, generally, the state of other users of the system. High integrity of the repositories is maintained by storing claims on authenticated data structures, namely hash chains and Merkle trees, and their authenticity and non-repudiation by the use of digital signatures. We introduce the concept of cross-referencing of hash chains as a way of efficiently and verifiably vouching for the states of other users. ClaimChain allows the detection of chain compromises, manifested as forks of hash chains, and the implementation of various social policies for deriving decisions about the latest state of users in the system. Making claims about the keys of other people introduces a privacy problem that does not exist in centralized PKI designs: such information can reveal the social graph, and sometimes even communication patterns. To solve this, we use cryptographic verifiable random functions to derive private identifiers that are re-randomized on each chain update. This allows claims to be openly and verifiably published, yet read only by authorized users, ensuring privacy of the social graph. Moreover, the specific construction of Merkle trees in ClaimChain, along with the usage of verifiable random functions, ensures that users cannot equivocate about the state of other people. ClaimChain is flexible with respect to deployment options, supporting fully decentralized deployments, as well as centralized, federated, and hybrid modes of operation. We have evaluated ClaimChain's computation and memory requirements using a prototype implementation. We also simulated the flow of the system using the Enron dataset, comprising real-world email communication history within an organization, in order to evaluate the effectiveness of the propagation of key material in a fully decentralized setting.
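The fork-detection idea, where two distinct blocks claiming the same predecessor expose equivocation, can be sketched with plain hash chains (the block layout and helper names are ours for illustration, not ClaimChain's actual format):

```python
import hashlib
import json

def new_block(prev_hash, claims):
    """Append-only block: its hash commits to both the claims and the
    predecessor, forming a hash chain."""
    body = {"prev": prev_hash, "claims": claims}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return {**body, "hash": digest}

def detect_fork(chain_a, chain_b):
    """A fork is two different blocks extending the same predecessor."""
    seen = {b["prev"]: b["hash"] for b in chain_a}
    return any(
        b["prev"] in seen and seen[b["prev"]] != b["hash"] for b in chain_b
    )
```

Cross-referencing means other users record the block hashes they have seen, so an equivocating owner cannot later show either fork without being caught.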

Proceedings ArticleDOI
30 Oct 2017
TL;DR: AnNotify as discussed by the authors is a scalable service for private, timely and low-cost online notifications, based on anonymous communication, sharding, dummy queries, and Bloom filters, which can be used for notifications of incoming messages in anonymous communications and for updates to private cached web and Domain Name Service queries.
Abstract: AnNotify is a scalable service for private, timely and low-cost online notifications, based on anonymous communication, sharding, dummy queries, and Bloom filters. We present the design and analysis of AnNotify, as well as an evaluation of its costs. We outline the design of AnNotify and calculate the concrete advantage of an adversary observing multiple queries. We present a number of extensions, such as generic presence and broadcast notifications, and applications, including notifications for incoming messages in anonymous communications, updates to private cached web and Domain Name Service (DNS) queries.
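The Bloom-filter ingredient of the design can be illustrated with a generic sketch. The class below is a standard Bloom filter of our own, and the epoch-label convention is an illustrative assumption rather than AnNotify's actual protocol: a shard publishes a filter of pending notification identifiers, clients test membership locally, and the filter's false positives play a role similar to dummy queries in masking real activity.

```python
import hashlib

class BloomFilter:
    """Generic k-hash Bloom filter over an m-bit array."""

    def __init__(self, m=1024, k=4):
        self.m, self.k = m, k
        self.bits = bytearray(m)

    def _indexes(self, item: str):
        # Derive k indexes by hashing the item with a counter prefix.
        for i in range(self.k):
            d = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(d[:8], "big") % self.m

    def add(self, item: str):
        for i in self._indexes(item):
            self.bits[i] = 1

    def __contains__(self, item: str):
        return all(self.bits[i] for i in self._indexes(item))

# Each epoch, a shard publishes one filter of pending notification IDs;
# a client checks its own per-epoch identifier against it locally.
pending = BloomFilter()
pending.add("notif:alice:epoch42")
assert "notif:alice:epoch42" in pending
# Absent IDs are rejected except for the filter's tunable false-positive
# rate; those false positives, like dummy queries, hide real activity.
```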

Posted Content
01 Mar 2017
TL;DR: It is shown that adversarial training can produce robust steganographic techniques: the unsupervised training scheme produces a steganographic algorithm that competes with state-of-the-art steganographic techniques, and produces a robust steganalyzer, which performs the discriminative task of deciding if an image contains secret information.
Abstract: Adversarial training was recently shown to be competitive against supervised learning methods on computer vision tasks, however, studies have mainly been confined to generative tasks such as image synthesis. In this paper, we apply adversarial training techniques to the discriminative task of learning a steganographic algorithm. Steganography is a collection of techniques for concealing information by embedding it within a non-secret medium, such as cover texts or images. We show that adversarial training can produce robust steganographic techniques: our unsupervised training scheme produces a steganographic algorithm that competes with state-of-the-art steganographic techniques, and produces a robust steganalyzer, which performs the discriminative task of deciding if an image contains secret information. We define a game between three parties, Alice, Bob and Eve, in order to simultaneously train both a steganographic algorithm and a steganalyzer. Alice and Bob attempt to communicate a secret message contained within an image, while Eve eavesdrops on their conversation and attempts to determine if secret information is embedded within the image. We represent Alice, Bob and Eve by neural networks, and validate our scheme on two independent image datasets, showing our novel method of studying steganographic problems is surprisingly competitive against established steganographic techniques.
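For context on what the learned scheme competes with, the snippet below sketches the simplest classical embedding baseline, least-significant-bit (LSB) embedding. This is our own illustrative example, not the paper's neural construction; it shows the kind of structured, low-amplitude perturbation that a trained steganalyzer in Eve's role learns to detect.

```python
def embed_lsb(pixels, message_bits):
    """Hide one message bit in the least significant bit of each pixel."""
    assert len(message_bits) <= len(pixels)
    stego = list(pixels)
    for i, bit in enumerate(message_bits):
        stego[i] = (stego[i] & ~1) | bit   # clear the LSB, then set it
    return stego

def extract_lsb(pixels, n_bits):
    return [p & 1 for p in pixels[:n_bits]]

cover = [137, 18, 200, 76, 91, 34, 250, 3]   # a toy 8-pixel "image"
secret = [1, 0, 1, 1, 0, 0, 1, 0]
stego = embed_lsb(cover, secret)
assert extract_lsb(stego, len(secret)) == secret
# Each pixel changes by at most 1: visually negligible, but it leaves
# LSB statistics that a steganalyzer can learn to flag.
```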

Proceedings ArticleDOI
01 Aug 2017
TL;DR: The design and engineering of LiLAC, a Lightweight Low-latency Anonymous Chat service, is described; LiLAC offers both strong anonymity and a lightweight client-side presence, and leads to a key trade-off between the system's bandwidth overhead and end-to-end delay along the circuit.
Abstract: Low latency anonymity systems, like Tor and I2P, support private online communications, but offer limited protection against powerful adversaries with widespread eavesdropping capabilities. It is known that general-purpose communications, such as web and file transfer, are difficult to protect in that setting. However, online instant messaging only requires a low bandwidth and we show it to be amenable to strong anonymity protections. In this paper, we describe the design and engineering of LiLAC, a Lightweight Low-latency Anonymous Chat service, that offers both strong anonymity and a lightweight client-side presence. LiLAC implements a set of anonymizing relays, and offers stronger anonymity protections by applying dependent link padding on top of constant-rate traffic flows. This leads to a key trade-off between the system's bandwidth overhead and end-to-end delay along the circuit, which we study. Additionally, we examine the impact of allowing zero-installation overhead on the client side, by instead running LiLAC on web browsers. This introduces potential security risks, by relying on third-party software and requiring user awareness; yet it also reduces the footprint left on the client, enhancing deniability and countering forensics. Those design decisions and trade-offs make LiLAC an interesting case to study for privacy and security engineers.
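The bandwidth/delay trade-off described above can be made concrete with a toy simulation of constant-rate padded flows. The traffic model and parameters are our own assumptions, and the sketch deliberately ignores the dependent-link-padding aspect across relays; it only shows why a higher padding rate buys lower delay at the cost of more dummy traffic.

```python
import random
from collections import deque

def simulate_constant_rate(arrivals, rate):
    """Each tick, send exactly `rate` cells: real ones if queued, dummy
    padding otherwise. Returns (dummy/real bandwidth overhead, mean delay)."""
    queue, sent_real, sent_dummy, total_delay = deque(), 0, 0, 0
    for t, n in enumerate(arrivals):
        queue.extend([t] * n)          # record each message's arrival tick
        for _ in range(rate):
            if queue:
                total_delay += t - queue.popleft()
                sent_real += 1
            else:
                sent_dummy += 1
    return sent_dummy / max(sent_real, 1), total_delay / max(sent_real, 1)

random.seed(0)
traffic = [random.choice([0, 0, 0, 3]) for _ in range(1000)]  # bursty chat
low = simulate_constant_rate(traffic, rate=1)   # less padding, more delay
high = simulate_constant_rate(traffic, rate=4)  # more padding, less delay
assert high[0] > low[0] and high[1] <= low[1]
```

Raising the constant send rate drains bursts faster (lower queueing delay) but fills idle ticks with dummies (higher bandwidth overhead), which is exactly the trade-off the abstract studies.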

Proceedings ArticleDOI
30 Oct 2017
TL;DR: This work proposes to securely delegate the eviction to semi-trusted third parties, enabling any client to access ORAM technology, and presents four different designs inspired by mix-net technologies with reasonable periodic costs.
Abstract: Oblivious RAM (ORAM) is a key technology for providing private storage and querying on untrusted machines, but is commonly seen as impractical due to the high and recurring overhead of the re-randomization, called the eviction, that the client incurs. We propose in this work to securely delegate the eviction to semi-trusted third parties, enabling any client to access ORAM technology, and present four different designs inspired by mix-net technologies with reasonable periodic costs.
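The delegation idea rests on the evictor being able to shuffle and re-randomize encrypted blocks without the decryption key, in the style of a mix-net. Below is a toy sketch using textbook ElGamal with deliberately insecure parameters, for illustration only; the paper's four actual designs are not reproduced here.

```python
import random

# Textbook ElGamal over Z_p* (insecure toy parameters): ciphertexts can
# be re-randomized by anyone who knows only the public key.
p = 2**127 - 1   # a Mersenne prime
g = 3
random.seed(1)

def keygen():
    x = random.randrange(2, p - 1)          # secret key
    return x, pow(g, x, p)                  # (secret, public)

def encrypt(y, m):
    r = random.randrange(2, p - 1)
    return pow(g, r, p), (m * pow(y, r, p)) % p

def rerandomize(y, ct):
    """What the semi-trusted evictor does: fresh randomness, no secret key."""
    c1, c2 = ct
    s = random.randrange(2, p - 1)
    return (c1 * pow(g, s, p)) % p, (c2 * pow(y, s, p)) % p

def decrypt(x, ct):
    c1, c2 = ct
    return (c2 * pow(c1, p - 1 - x, p)) % p   # c2 / c1^x via Fermat

x, y = keygen()
blocks = [encrypt(y, m) for m in (11, 22, 33)]

# Delegated "eviction": shuffle and re-randomize the encrypted blocks.
evicted = [rerandomize(y, ct) for ct in blocks]
random.shuffle(evicted)

# Old and new ciphertexts are unlinkable, yet the client still decrypts.
assert sorted(decrypt(x, ct) for ct in evicted) == [11, 22, 33]
```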

Proceedings ArticleDOI
TL;DR: In this article, a cryptographic construction for privacy-preserving authentication of public keys is proposed, called ClaimChain, which allows users to store claims about their identities and keys, as well as their beliefs about others, in ClaimChains.
Abstract: The social demand for email end-to-end encryption is barely supported by mainstream service providers. Autocrypt is a new community-driven open specification for e-mail encryption that attempts to respond to this demand. In Autocrypt the encryption keys are attached directly to messages, and thus the encryption can be implemented by email clients without any collaboration of the providers. The decentralized nature of this in-band key distribution, however, makes it prone to man-in-the-middle attacks and can leak the social graph of users. To address this problem we introduce ClaimChain, a cryptographic construction for privacy-preserving authentication of public keys. Users store claims about their identities and keys, as well as their beliefs about others, in ClaimChains. These chains form authenticated decentralized repositories that enable users to prove the authenticity of both their keys and the keys of their contacts. ClaimChains are encrypted, and therefore protect the stored information, such as keys and contact identities, from prying eyes. At the same time, ClaimChain implements mechanisms to provide strong non-equivocation properties, discouraging malicious actors from distributing conflicting or inauthentic claims. We implemented ClaimChain and we show that it offers reasonable performance, low overhead, and authenticity guarantees.
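The privacy mechanism sketched in the companion abstract above (claims stored under unlinkable identifiers that change on every chain update) can be illustrated as follows. We use HMAC as a stand-in for a verifiable random function; a real VRF additionally yields a publicly checkable proof of correct evaluation, which HMAC lacks. The key, contact names, and epoch convention are illustrative assumptions.

```python
import hashlib
import hmac

def private_label(owner_vrf_key: bytes, contact: str, epoch: int) -> str:
    """Derive the label under which a claim about `contact` is stored.
    Unreadable without the key, and different in every epoch."""
    msg = f"{contact}|{epoch}".encode()
    return hmac.new(owner_vrf_key, msg, hashlib.sha256).hexdigest()

key = b"alice-vrf-secret"

# Alice's claim about Bob's key sits under a label that re-randomizes on
# each chain update (epoch), so observers cannot link entries across
# updates or enumerate her contacts, hiding the social graph.
label_e1 = private_label(key, "bob@example.com", 1)
label_e2 = private_label(key, "bob@example.com", 2)
assert label_e1 != label_e2

# Alice reveals a label (and, with a real VRF, its proof) only to readers
# authorized to look up her claim about that contact.
```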

Posted Content
TL;DR: In this article, the authors propose to securely delegate the eviction to semi-trusted third parties, enabling any client to access ORAM technology, and present four different designs inspired by mix-net technologies with reasonable periodic costs.
Abstract: Oblivious RAM (ORAM) is a key technology for providing private storage and querying on untrusted machines, but is commonly seen as impractical due to the high overhead of the re-randomization, called the eviction, that the client incurs. We propose in this work to securely delegate the eviction to semi-trusted third parties, enabling any client to access ORAM technology, and present four different designs inspired by mix-net technologies with reasonable periodic costs.