Proceedings ArticleDOI

Mass-surveillance without the State: Strongly Undetectable Algorithm-Substitution Attacks

12 Oct 2015, pp. 1431-1440
TL;DR: In this article, the authors present new algorithm substitution attacks (ASAs) on symmetric encryption that improve over prior ones in two ways: first, while prior attacks only break a sub-class of randomized schemes having a property called coin injectivity, their attacks break ALL randomized schemes.
Abstract: We present new algorithm-substitution attacks (ASAs) on symmetric encryption that improve over prior ones in two ways. First, while prior attacks only broke a sub-class of randomized schemes having a property called coin injectivity, our attacks break ALL randomized schemes. Second, while prior attacks are stateful, ours are stateless, achieving a notion of strong undetectability that we formalize. Together this shows that ASAs are an even more dangerous and powerful mass-surveillance method than previously thought. Our work serves to increase awareness about what is possible with ASAs and to spur the search for deterrents and counter-measures.
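The claim that such attacks break ALL randomized schemes rests on a simple rejection-sampling idea: the subverted encryption code resamples its randomness until a PRF of that randomness, keyed by the attacker, encodes a chosen bit of the user's key. A minimal Python sketch of this idea, with a toy stream cipher standing in for the real scheme (all names and parameters here are illustrative, not the paper's construction):

```python
import hashlib
import hmac
import os

def honest_encrypt(key: bytes, msg: bytes) -> bytes:
    """Toy randomized scheme: ciphertext = IV || (msg XOR stream(key, IV)).
    Illustrative only; supports messages up to 32 bytes."""
    iv = os.urandom(16)
    stream = hashlib.sha256(key + iv).digest()[:len(msg)]
    return iv + bytes(m ^ s for m, s in zip(msg, stream))

def subverted_encrypt(key: bytes, msg: bytes,
                      big_brother_key: bytes, bit_index: int) -> bytes:
    """Rejection-sample the IV until a keyed predicate of the IV equals the
    target key bit. Without big_brother_key the output distribution is
    identical to honest encryption, yet each ciphertext leaks one key bit."""
    target_bit = (key[bit_index // 8] >> (bit_index % 8)) & 1
    while True:
        iv = os.urandom(16)
        # A PRF of the public IV under the attacker's key decides the leaked bit
        leaked = hmac.new(big_brother_key, iv, hashlib.sha256).digest()[0] & 1
        if leaked == target_bit:
            stream = hashlib.sha256(key + iv).digest()[:len(msg)]
            return iv + bytes(m ^ s for m, s in zip(msg, stream))

def extract_bit(ct: bytes, big_brother_key: bytes) -> int:
    """Big Brother recovers the leaked bit from the public IV alone."""
    return hmac.new(big_brother_key, ct[:16], hashlib.sha256).digest()[0] & 1
```

Because the resampled IV is still uniformly distributed to anyone without big_brother_key, black-box detection fails; the paper's stateless attacks additionally derive which key bit to leak from the randomness itself rather than from an explicit counter like bit_index above.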
Citations
Proceedings ArticleDOI
09 Jul 2018
TL;DR: The effect that overfitting and influence have on the ability of an attacker to learn information about the training data from machine learning models, either through training set membership inference or attribute inference attacks is examined.
Abstract: Machine learning algorithms, when applied to sensitive data, pose a distinct threat to privacy. A growing body of prior work demonstrates that models produced by these algorithms may leak specific private information in the training data to an attacker, either through the models' structure or their observable behavior. However, the underlying cause of this privacy risk is not well understood beyond a handful of anecdotal accounts that suggest overfitting and influence might play a role. This paper examines the effect that overfitting and influence have on the ability of an attacker to learn information about the training data from machine learning models, either through training set membership inference or attribute inference attacks. Using both formal and empirical analyses, we illustrate a clear relationship between these factors and the privacy risk that arises in several popular machine learning algorithms. We find that overfitting is sufficient to allow an attacker to perform membership inference and, when the target attribute meets certain conditions about its influence, attribute inference attacks. Interestingly, our formal analysis also shows that overfitting is not necessary for these attacks and begins to shed light on what other factors may be in play. Finally, we explore the connection between membership inference and attribute inference, showing that there are deep connections between the two that lead to effective new attacks.
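One simple instantiation of the overfitting argument is a loss-threshold membership test: predict "training member" when an example's loss falls below the model's average training loss, since an overfit model fits its own training points noticeably better than fresh ones. A hedged sketch (the threshold and probabilities are illustrative, not the paper's exact attack):

```python
import math

def cross_entropy(p_true: float) -> float:
    """Loss on one example, given the model's probability for the true label."""
    return -math.log(max(p_true, 1e-12))

def infer_membership(p_true: float, avg_train_loss: float) -> bool:
    """Guess 'member' when the example's loss is below the average training
    loss. The more the model overfits, the larger the gap between member
    and non-member losses, and the better this test performs."""
    return cross_entropy(p_true) < avg_train_loss
```

For instance, against a model whose average training loss corresponds to confidence 0.9, a point the model predicts with confidence 0.99 is flagged as a member, while one predicted at 0.6 is not.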

469 citations

Book ChapterDOI
04 Dec 2016
TL;DR: The authors showcase a digital signature scheme that preserves existential unforgeability when all algorithms, including key generation (which was not previously considered to be under attack), are subject to kleptographic attacks.
Abstract: Kleptography, introduced 20 years ago by Young and Yung [Crypto '96], considers the insecurity of malicious implementations or instantiations of standard cryptographic primitives that may embed a "backdoor" into the system. Remarkably, crippling subliminal attacks are possible even if the subverted cryptosystem produces output indistinguishable from a truly secure "reference implementation." Bellare, Paterson, and Rogaway [Crypto '14] recently initiated a formal study of such attacks on symmetric key encryption algorithms, demonstrating that kleptographic attacks can be mounted in broad generality against randomized components of cryptographic systems. We enlarge the scope of current work on the problem by permitting adversarial subversion of randomized key generation; in particular, we initiate the study of cryptography in the complete subversion model, where all relevant cryptographic primitives are subject to kleptographic attacks. We construct secure one-way permutations and trapdoor one-way permutations in this "complete subversion" model, describing a general, rigorous immunization strategy to clip the power of kleptographic subversions. Our strategy can be viewed as a formal treatment of the folklore "nothing up my sleeve" wisdom in cryptographic practice. We also describe a related "split program" model that can directly inform practical deployment. We additionally apply our general immunization strategy to directly yield a backdoor-free PRG. This notably amplifies previous results of Dodis, Ganesh, Golovnev, Juels, and Ristenpart [Eurocrypt '15], which require an honestly generated random key. We then examine two standard applications of trapdoor one-way permutations in this complete subversion model and construct "higher level" primitives via black-box reductions.
We showcase a digital signature scheme that preserves existential unforgeability when all algorithms, including key generation (which was not previously considered to be under attack), are subject to kleptographic attacks. Additionally, we demonstrate that the classic Blum-Micali pseudorandom generator (PRG), using an "immunized" one-way permutation, yields a backdoor-free PRG. Alongside the development of these secure primitives, we set down a hierarchy of kleptographic attack models which we use to organize past results and our new contributions; this taxonomy may be valuable for future work.
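The Blum-Micali generator mentioned above iterates a one-way permutation and outputs one hard-core bit per step. A toy Python sketch using modular exponentiation (the parameters are illustrative and far too small for real security; the paper's point is that the permutation must first be "immunized" against subversion before being used this way):

```python
# Toy Blum-Micali PRG from modular exponentiation (illustrative parameters)
P = 2**61 - 1   # a Mersenne prime; real deployments need a proper DL group
G = 3

def blum_micali(seed: int, nbits: int) -> list[int]:
    """State evolves as x -> G^x mod P; each step outputs the hard-core
    predicate 'is the state in the upper half of the range'."""
    x, out = seed, []
    for _ in range(nbits):
        x = pow(G, x, P)
        out.append(1 if x > (P - 1) // 2 else 0)
    return out
```

The construction is deterministic given the seed, so a subverted (backdoored) permutation would let the subverter predict the entire output stream; hence the immunization step.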

78 citations

Book ChapterDOI
14 Aug 2016
TL;DR: Mironov and Stephens-Davidowitz introduced a framework for solving such problems: reverse firewalls, a third party that "sits between Alice and the outside world" and modifies her sent and received messages so that even if her machine has been corrupted, Alice's security is still guaranteed.
Abstract: Suppose Alice wishes to send a message to Bob privately over an untrusted channel. Cryptographers have developed a whole suite of tools to accomplish this task, with a wide variety of notions of security, setup assumptions, and running times. However, almost all prior work on this topic made a seemingly innocent assumption: that Alice has access to a trusted computer with a proper implementation of the protocol. The Snowden revelations show us that, in fact, powerful adversaries can and will corrupt users' machines in order to compromise their security. And presumably accidental vulnerabilities are regularly found in popular cryptographic software, showing that users cannot even trust implementations that were created honestly. This leads to the following seemingly absurd question: "Can Alice securely send a message to Bob even if she cannot trust her own computer?!" Bellare, Paterson, and Rogaway recently studied this question. They show a strong impossibility result that in particular rules out even semantically secure public-key encryption in their model. However, Mironov and Stephens-Davidowitz recently introduced a new framework for solving such problems: reverse firewalls. A secure reverse firewall is a third party that "sits between Alice and the outside world" and modifies her sent and received messages so that even if her machine has been corrupted, Alice's security is still guaranteed. We show how to use reverse firewalls to sidestep the impossibility result of Bellare et al., and we achieve strong security guarantees in this extreme setting. Indeed, we find a rich structure of solutions that vary in efficiency, security, and setup assumptions, in close analogy with message transmission in the classical setting. Our strongest and most important result shows a protocol that achieves interactive, concurrent CCA-secure message transmission with a reverse firewall--i.e., CCA-secure message transmission on a possibly compromised machine!
Surprisingly, this protocol is quite efficient and simple, requiring only four rounds and a small constant number of public-key operations for each party. It could easily be used in practice. Behind this result is a technical composition theorem that shows how key agreement with a sufficiently secure reverse firewall can be used to construct a message-transmission protocol with its own secure reverse firewall.
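The canonical building block behind reverse firewalls is ciphertext rerandomization: the firewall multiplies in a fresh encryption of 1, so any subliminal information a subverted machine hid in its randomness is destroyed while the plaintext is preserved. A toy ElGamal sketch (tiny illustrative group, not the paper's actual four-round protocol):

```python
import random

# Toy ElGamal over the order-11 subgroup of Z_23^* (illustrative only)
p, q, g = 23, 11, 4   # g = 2^2 generates the subgroup of order 11

def keygen():
    x = random.randrange(1, q)
    return x, pow(g, x, p)          # (secret key, public key)

def encrypt(pk, m):
    r = random.randrange(1, q)
    return pow(g, r, p), (m * pow(pk, r, p)) % p

def decrypt(sk, ct):
    c1, c2 = ct
    return (c2 * pow(c1, q - sk, p)) % p   # c2 / c1^sk, since c1 has order q

def firewall_rerandomize(pk, ct):
    """The reverse firewall multiplies in a fresh encryption of 1, replacing
    whatever (possibly subverted) randomness Alice's machine chose."""
    c1, c2 = ct
    s = random.randrange(1, q)
    return (c1 * pow(g, s, p)) % p, (c2 * pow(pk, s, p)) % p
```

After rerandomization the ciphertext decrypts to the same message, but its randomness is fresh and uniform, so a subverted encryptor cannot leak anything through it.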

66 citations

Proceedings ArticleDOI
30 Oct 2017
TL;DR: The authors construct IND-CPA (semantically) secure public-key encryption with minimal trust even when all relevant cryptographic algorithms are subject to adversarial (kleptographic) subversion.
Abstract: Notable recent security incidents have generated intense interest in adversaries which attempt to subvert---perhaps covertly---cryptographic algorithms. In this paper we develop (IND-CPA) semantically secure encryption in this challenging setting. This fundamental encryption primitive has been previously studied in the "kleptographic setting," though existing results must relax the model by introducing trusted components or otherwise constraining the subversion power of the adversary: designing a public-key system that is kleptographically semantically secure (with minimal trust) has remained elusive to date. In this work, we finally achieve such systems, even when all relevant cryptographic algorithms are subject to adversarial (kleptographic) subversion. To this end we exploit novel inter-component randomized cryptographic checking techniques (with an offline checking component), combined with common and simple software engineering modular programming techniques (applied to the system's black-box specification level). Moreover, our methodology yields a strong generic technique for the preservation of any semantically secure cryptosystem when incorporated into the strong kleptographic adversary setting.

53 citations

References
Journal ArticleDOI
TL;DR: In this paper, a constructive theory of randomness for functions, based on computational complexity, is developed, and a pseudorandom function generator is presented, which is a deterministic polynomial-time algorithm that transforms pairs (g, r), where g is any one-way function and r is a random k-bit string, to polynomial-time computable functions.
Abstract: A constructive theory of randomness for functions, based on computational complexity, is developed, and a pseudorandom function generator is presented. This generator is a deterministic polynomial-time algorithm that transforms pairs (g, r), where g is any one-way function and r is a random k-bit string, to polynomial-time computable functions f_r: {1, ..., 2^k} -> {1, ..., 2^k}. These f_r's cannot be distinguished from random functions by any probabilistic polynomial-time algorithm that asks and receives the value of a function at arguments of its choice. The result has applications in cryptography, random constructions, and complexity theory.
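The generator described here is the GGM construction: a length-doubling PRG applied along a binary tree, with the input's bits selecting the left or right half at each level. A sketch with SHA-256 standing in for the PRG (an assumption for illustration, not part of the original proof):

```python
import hashlib

def prg(seed: bytes) -> tuple[bytes, bytes]:
    """Length-doubling PRG stand-in: one 32-byte seed to two 32-byte halves.
    SHA-256 in 'counter mode' is an illustrative substitute, not a proof."""
    return (hashlib.sha256(seed + b'0').digest(),
            hashlib.sha256(seed + b'1').digest())

def ggm_prf(key: bytes, x: int, nbits: int = 32) -> bytes:
    """GGM construction: walk a depth-nbits binary tree from the root key,
    branching on each bit of the input x; the leaf is the PRF output."""
    node = key
    for i in reversed(range(nbits)):
        left, right = prg(node)
        node = right if (x >> i) & 1 else left
    return node
```

Each distinct input follows a distinct root-to-leaf path, and the PRG's security at every branch is what makes the leaves indistinguishable from random.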

2,043 citations

Book ChapterDOI
01 Jan 1984
TL;DR: Two accomplices in a crime have been arrested and are about to be locked in widely separated cells, and their only means of communication after they are locked up will be by way of messages conveyed for them by trustees -- who are known to be agents of the warden.
Abstract: Two accomplices in a crime have been arrested and are about to be locked in widely separated cells. Their only means of communication after they are locked up will be by way of messages conveyed for them by trustees -- who are known to be agents of the warden. The warden is willing to allow the prisoners to exchange messages in the hope that he can deceive at least one of them into accepting as a genuine communication from the other either a fraudulent message created by the warden himself or else a modification by him of a genuine message. However, since he has every reason to suspect that the prisoners want to coordinate an escape plan, the warden will only permit the exchanges to occur if the information contained in the messages is completely open to him -- and presumably innocuous. The prisoners, on the other hand, are willing to accept these conditions, i.e., to accept some risk of deception in order to be able to communicate at all, since they need to coordinate their plans. To do this they will have to deceive the warden by finding a way of communicating secretly in the exchanges, i.e., of establishing a "subliminal channel" between them in full view of the warden, even though the messages themselves contain no secret (to the warden) information. Since they anticipate that the warden will try to deceive them by introducing fraudulent messages, they will only exchange messages if they are permitted to authenticate them.

1,001 citations

Journal Article
TL;DR: It is demonstrated that for DES parameters (56-bit keys and 64-bit plaintexts) an adversary's maximal advantage against triple encryption is small until it asks about 2^78 queries.
Abstract: We show that, in the ideal-cipher model, triple encryption (the cascade of three independently-keyed blockciphers) is more secure than single or double encryption, thereby resolving a long-standing open problem. Our result demonstrates that for DES parameters (56-bit keys and 64-bit plaintexts) an adversary's maximal advantage against triple encryption is small until it asks about 2^78 queries. Our proof uses code-based game-playing in an integral way, and is facilitated by a framework for such proofs that we provide.

704 citations

Book ChapterDOI
21 Aug 1994
TL;DR: This work provides its first formal justification, showing the following general lemma: that cipher block chaining a pseudorandom function gives a pseudorandom function.
Abstract: The Cipher Block Chaining - Message Authentication Code (CBC MAC) specifies that a message x = x1 ... xm be authenticated among parties who share a secret key a by tagging x with a prefix of f_a^(m)(x) = f_a(f_a(... f_a(f_a(x1) XOR x2) XOR ... XOR x_{m-1}) XOR x_m), where f is some underlying block cipher (e.g. f = DES). This method is a pervasively used international and U.S. standard. We provide its first formal justification, showing the following general lemma: that cipher block chaining a pseudorandom function gives a pseudorandom function. Underlying our results is a technical lemma of independent interest, bounding the success probability of a computationally unbounded adversary in distinguishing between a random ml-bit to l-bit function and the CBC MAC of a random l-bit to l-bit function.
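The iteration in the formula above is plain CBC with a zero IV, keeping only the final block as the tag. A sketch with a hash-based toy PRF standing in for the block cipher f (illustrative only; a real deployment would use AES):

```python
import hashlib

BLOCK = 16  # block size in bytes

def toy_block_cipher(key: bytes, block: bytes) -> bytes:
    """Stand-in PRF f_a for a real block cipher (illustrative, not AES)."""
    return hashlib.sha256(key + block).digest()[:BLOCK]

def cbc_mac(key: bytes, msg: bytes) -> bytes:
    """Tag = f_a(f_a(... f_a(f_a(x1) XOR x2) ... XOR x_{m-1}) XOR x_m).
    Starting the chain from a zero state makes the first step f_a(x1)."""
    assert len(msg) % BLOCK == 0, "CBC MAC here assumes full blocks"
    state = bytes(BLOCK)
    for i in range(0, len(msg), BLOCK):
        block = msg[i:i + BLOCK]
        state = toy_block_cipher(key, bytes(a ^ b for a, b in zip(state, block)))
    return state
```

Note the paper's security result is for fixed-length messages; plain CBC MAC over variable-length messages admits well-known extension attacks.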

465 citations

Book ChapterDOI
11 May 1997
TL;DR: The SETUP mechanisms presented here, in contrast with previous ones, leak secret key information without using an explicit subliminal channel, which extends the study of stealing information securely and subliminally from black-box cryptosystems.
Abstract: The notion of a Secretly Embedded Trapdoor with Universal Protection (SETUP) has been recently introduced. In this paper we extend the study of stealing information securely and subliminally from black-box cryptosystems. The SETUP mechanisms presented here, in contrast with previous ones, leak secret key information without using an explicit subliminal channel. This extends this area of threats, which we call "kleptography". We introduce new definitions of SETUP attacks (strong, regular, and weak SETUPS) and the notion of m out of n leakage bandwidth. We show a strong attack which is based on the discrete logarithm problem. We then show how to use this setup to compromise the Diffie-Hellman key exchange protocol. We also strengthen the previous SETUP against RSA. The strong attacks employ the discrete logarithm as a one-way function (assuring what is called "forward secrecy"), public-key cryptography, and a technique which we call probabilistic bias removal.
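The discrete-log flavor of a SETUP can be illustrated in a few lines: the device's second "random" exponent is secretly a hash of a Diffie-Hellman value shared with the attacker, so the attacker recovers it from the public transcript alone. A toy sketch (parameters and the two-round structure are illustrative; the real attacks also handle bias removal):

```python
import hashlib
import random

p = 2**61 - 1   # toy prime; real attacks use a proper DH group
g = 3

def H(n: int) -> int:
    """Hash an integer into the exponent range (illustrative)."""
    return int.from_bytes(hashlib.sha256(str(n).encode()).digest(), 'big') % (p - 1)

# Attacker's key pair, baked into the black-box device at manufacture time
v = random.randrange(2, p - 1)
Y = pow(g, v, p)

# Subverted device: round 1 looks honest, but round 2's secret exponent is
# derived from round 1's secret and the attacker's embedded public key.
c1 = random.randrange(2, p - 1)
m1 = pow(g, c1, p)            # first public DH value (visible to everyone)
c2 = H(pow(Y, c1, p))         # round-2 "random" secret, predictable by attacker
m2 = pow(g, c2, p)            # second public DH value
```

The attacker, seeing only the public m1, computes pow(m1, v, p) = g^(c1*v) and hashes it to recover c2, compromising the second key exchange; honest observers see two uniform-looking public values.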

228 citations