Showing papers by "Moni Naor published in 2006"


Book ChapterDOI
28 May 2006
TL;DR: In this paper, distributed protocols for generating shares of random noise, secure against malicious participants, are proposed; the purpose of the noise generation is to create a distributed implementation of the privacy-preserving statistical databases described in recent papers.
Abstract: In this work we provide efficient distributed protocols for generating shares of random noise, secure against malicious participants. The purpose of the noise generation is to create a distributed implementation of the privacy-preserving statistical databases described in recent papers [14,4,13]. In these databases, privacy is obtained by perturbing the true answer to a database query by the addition of a small amount of Gaussian or exponentially distributed random noise. The computational power of even a simple form of these databases, when the query is just of the form $\sum_i f(d_i)$, that is, the sum over all rows i in the database of a function f applied to the data in row i, has been demonstrated in [4]. A distributed implementation eliminates the need for a trusted database administrator. The results for noise generation are of independent interest. The generation of Gaussian noise introduces a technique for distributing shares of many unbiased coins with fewer executions of verifiable secret sharing than would be needed using previous approaches (reduced by a factor of n). The generation of exponentially distributed noise uses two shallow circuits: one for generating many arbitrarily but identically biased coins at an amortized cost of two unbiased random bits apiece, independent of the bias, and the other to combine bits of appropriate biases to obtain an exponential distribution.
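The perturbation step these databases rely on is simple to state. Below is a minimal, centralized Python sketch of answering a sum query $\sum_i f(d_i)$ with exponentially distributed (Laplace) noise; the names and scale parameter are illustrative, and the paper's actual contribution is generating such noise distributedly, as shares among mutually distrusting parties.

```python
import random

def noisy_sum_query(rows, f, scale):
    """Answer the sum query sum_i f(d_i), perturbed with Laplace
    (two-sided exponential) noise of the given scale.

    Centralized sketch only: the paper generates shares of this
    noise distributedly, secure against malicious participants.
    """
    true_answer = sum(f(d) for d in rows)
    # A Laplace sample is the difference of two i.i.d. exponentials.
    noise = random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)
    return true_answer + noise

# Example: a counting query -- how many rows exceed a threshold?
database = [3, 7, 1, 9, 4]
print(noisy_sum_query(database, lambda d: int(d > 4), scale=1.0))
```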

1,567 citations


Journal Article
TL;DR: This work provides efficient distributed protocols for generating shares of random noise, secure against malicious participants, and introduces a technique for distributing shares of many unbiased coins with fewer executions of verifiable secret sharing than would be needed using previous approaches.
Abstract: In this work we provide efficient distributed protocols for generating shares of random noise, secure against malicious participants. The purpose of the noise generation is to create a distributed implementation of the privacy-preserving statistical databases described in recent papers [14,4,13]. In these databases, privacy is obtained by perturbing the true answer to a database query by the addition of a small amount of Gaussian or exponentially distributed random noise. The computational power of even a simple form of these databases, when the query is just of the form $\sum_i f(d_i)$, that is, the sum over all rows i in the database of a function f applied to the data in row i, has been demonstrated in [4]. A distributed implementation eliminates the need for a trusted database administrator. The results for noise generation are of independent interest. The generation of Gaussian noise introduces a technique for distributing shares of many unbiased coins with fewer executions of verifiable secret sharing than would be needed using previous approaches (reduced by a factor of n). The generation of exponentially distributed noise uses two shallow circuits: one for generating many arbitrarily but identically biased coins at an amortized cost of two unbiased random bits apiece, independent of the bias, and the other to combine bits of appropriate biases to obtain an exponential distribution.

391 citations


Book ChapterDOI
20 Aug 2006
TL;DR: The first universally verifiable voting scheme that can be based on a general assumption (existence of a non-interactive commitment scheme) is presented, and the first receipt-free scheme to give “everlasting privacy” for votes is presented.
Abstract: We present the first universally verifiable voting scheme that can be based on a general assumption (existence of a non-interactive commitment scheme). Our scheme is also the first receipt-free scheme to give “everlasting privacy” for votes: even a computationally unbounded party does not gain any information about individual votes (other than what can be inferred from the final tally). Our voting protocols are designed to be used in a “traditional” setting, in which voters cast their ballots in a private polling booth (which we model as an untappable channel between the voter and the tallying authority). Following in the footsteps of Chaum and Neff [7,16], our protocol ensures that the integrity of an election cannot be compromised even if the computers running it are all corrupt (although ballot secrecy may be violated in this case). We give a generic voting protocol which we prove to be secure in the Universal Composability model, given that the underlying commitment is universally composable. We also propose a concrete implementation, based on the hardness of discrete log, that is slightly more efficient (and can be used in practice).
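The "concrete implementation, based on the hardness of discrete log" suggests a Pedersen-style commitment, which is perfectly hiding and therefore compatible with everlasting privacy. The toy Python sketch below illustrates the idea only; the group parameters are placeholders, not the paper's actual scheme.

```python
import random

# Toy Pedersen-style commitment: perfectly hiding (hence suitable for
# "everlasting privacy"), binding under the discrete-log assumption.
# All parameters below are illustrative; real deployments need a
# vetted prime-order group and generators with unknown relative log.
p = 2**127 - 1   # a Mersenne prime, chosen here for convenience only
g, h = 3, 7      # assumption: no one knows log_g(h)

def commit(vote: int):
    r = random.randrange(p - 1)                  # fresh randomness
    c = (pow(g, vote, p) * pow(h, r, p)) % p     # C = g^v * h^r mod p
    return c, r

def verify(c: int, vote: int, r: int) -> bool:
    return c == (pow(g, vote, p) * pow(h, r, p)) % p

c, r = commit(1)        # commit to a vote for candidate 1
assert verify(c, 1, r)  # opening reveals the vote and the randomness
```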

201 citations


Journal ArticleDOI
TL;DR: Oblivious polynomial evaluation can be used as a primitive in many applications, including protocols for private comparison of data, for mutually authenticated key exchange based on (possibly weak) passwords, and for anonymous coupons.
Abstract: Oblivious polynomial evaluation is a protocol involving two parties, a sender whose input is a polynomial P, and a receiver whose input is a value $\alpha$. At the end of the protocol the receiver learns $P(\alpha)$ and the sender learns nothing. We describe efficient constructions for this protocol, which are based on new intractability assumptions that are closely related to noisy polynomial reconstruction. Oblivious polynomial evaluation can be used as a primitive in many applications. We describe several such applications, including protocols for private comparison of data, for mutually authenticated key exchange based on (possibly weak) passwords, and for anonymous coupons.
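As a reference point, the ideal functionality that oblivious polynomial evaluation realizes can be sketched in a few lines of Python: a trusted party returns P(α) to the receiver and nothing to the sender. The paper's protocols remove this trusted party under the stated intractability assumptions; the field modulus here is an illustrative choice.

```python
# Ideal-functionality view of oblivious polynomial evaluation: a
# trusted party takes the sender's polynomial P and the receiver's
# point alpha, outputs P(alpha) to the receiver and nothing to the
# sender. The paper's protocols realize this *without* the trusted
# party. The field below is an illustrative choice.

FIELD = 2**31 - 1  # a Mersenne prime

def eval_poly(coeffs, x, p=FIELD):
    """Horner evaluation of P(x) = sum_i coeffs[i] * x^i over GF(p)."""
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % p
    return acc

def ideal_ope(sender_coeffs, receiver_alpha):
    return eval_poly(sender_coeffs, receiver_alpha)  # receiver's output

print(ideal_ope([5, 0, 2], 3))  # P(x) = 5 + 2x^2, so P(3) = 23
```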

165 citations


Proceedings ArticleDOI
21 Oct 2006
TL;DR: The study of compression that preserves the solution to an instance of a problem rather than preserving the instance itself is initiated, and a new classification of NP is given with respect to compression, which forms a stratification of NP that is called the VC hierarchy.
Abstract: We initiate the study of compression that preserves the solution to an instance of a problem rather than preserving the instance itself. Our focus is on the compressibility of NP decision problems. We consider NP problems that have long instances but relatively short witnesses. The question is: can one efficiently compress an instance and store a shorter representation that maintains the information of whether the original input is in the language or not? We want the length of the compressed instance to be polynomial in the length of the witness rather than the length of the original input. Such compression makes it possible to store instances succinctly until a future setting allows solving them, either via a technological or algorithmic breakthrough or simply because enough time has elapsed. We give a new classification of NP with respect to compression. This classification forms a stratification of NP that we call the VC hierarchy. The hierarchy is based on a new type of reduction called W-reduction, and there are compression-complete problems for each class. Our motivation for studying this issue stems from the vast cryptographic implications compressibility has. For example, we say that SAT is compressible if there exists a polynomial p(·, ·) so that, given a formula consisting of m clauses over n variables, it is possible to come up with an equivalent (w.r.t. satisfiability) formula of size at most p(n, log m). Given a compression algorithm for SAT, we provide a construction of collision-resistant hash functions from any one-way function. This task was shown to be impossible via black-box reductions (D. Simon, 1998), and indeed the construction presented is inherently non-black-box. Another application of SAT compressibility is a cryptanalytic result concerning the limitation of everlasting security in the bounded storage model when mixed with (time-)complexity-based cryptography. In addition, we study an approach to constructing an oblivious transfer protocol from any one-way function. This approach is based on compression for SAT that also has a property we call witness retrievability. However, we manage to prove severe limitations on the ability to achieve witness-retrievable compression of SAT.
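The SAT-compressibility definition quoted above can be restated compactly in LaTeX; Z below denotes the assumed compression algorithm (a name introduced here for illustration):

```latex
% SAT compressibility (Z = assumed polynomial-time compression
% algorithm, a name introduced here for illustration):
\exists\, p(\cdot,\cdot)\ \text{polynomial}\;\;
\forall \varphi \text{ with } m \text{ clauses over } n \text{ variables}:
\quad |Z(\varphi)| \le p(n,\log m)
\;\;\text{and}\;\;
Z(\varphi)\in \mathrm{SAT} \iff \varphi\in \mathrm{SAT}.
```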

115 citations


Journal Article
TL;DR: In this paper, the authors studied the problem of sublinear authentication, where the user wants to encode and store a file in a way that allows him to verify that it has not been corrupted, but without reading the entire file.
Abstract: We consider the problem of storing a large file on a remote and unreliable server. To verify that the file has not been corrupted, a user could store a small private (randomized) “fingerprint” on his own computer. This is the setting for the well-studied authentication problem in cryptography, and the required fingerprint size is well understood. We study the problem of sublinear authentication: suppose the user would like to encode and store the file in a way that allows him to verify that it has not been corrupted, but without reading the entire file. If the user only wants to read q bits of the file, how large does the size s of the private fingerprint need to be? We define this problem formally, and show a tight lower bound on the relationship between s and q when the adversary is not computationally bounded, namely $s \cdot q = \Omega(n)$, where n is the file size. This is an easier case of the online memory checking problem, introduced by Blum et al. [1991], and hence the same (tight) lower bound applies also to that problem. It was previously shown that, when the adversary is computationally bounded, under the assumption that one-way functions exist, it is possible to construct much better online memory checkers. The same is also true for sublinear authentication schemes. We show that the existence of one-way functions is also a necessary condition: even slightly breaking the $s \cdot q = \Omega(n)$ lower bound in a computational setting implies the existence of one-way functions.
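For intuition on the computational upper bound mentioned above (beating the lower bound assuming one-way functions), here is a minimal Python sketch with illustrative names and parameters: the local secret s is just a short MAC key, and each spot-check of a static file reads one block and its tag, so both s and q stay far below n.

```python
import hmac, hashlib, os, random

BLOCK = 1024  # bytes per block (illustrative)

def encode(key: bytes, data: bytes):
    """Upload each block with a MAC tag binding its index and content."""
    blocks = [data[i:i + BLOCK] for i in range(0, len(data), BLOCK)]
    return [(i, b, hmac.new(key, i.to_bytes(8, "big") + b,
                            hashlib.sha256).digest())
            for i, b in enumerate(blocks)]

def check_random_block(key: bytes, stored) -> bool:
    """Read one block plus its tag (sublinear q) and verify it."""
    i, b, tag = random.choice(stored)
    expected = hmac.new(key, i.to_bytes(8, "big") + b,
                        hashlib.sha256).digest()
    return hmac.compare_digest(tag, expected)

key = os.urandom(32)                          # the whole local secret s
server = encode(key, os.urandom(10 * BLOCK))  # stored remotely
assert check_random_block(key, server)
```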

96 citations


Journal Article
TL;DR: In this article, the authors study compression that preserves the solution to an instance of a problem rather than the instance itself, and say that SAT is compressible if there exists a polynomial p(·, ·) such that any formula of m clauses over n variables can be replaced by an equivalent (w.r.t. satisfiability) formula of size at most p(n, log m).
Abstract: We initiate the study of compression that preserves the solution to an instance of a problem rather than preserving the instance itself. Our focus is on the compressibility of NP decision problems. We consider NP problems that have long instances but relatively short witnesses. The question is: can one efficiently compress an instance and store a shorter representation that maintains the information of whether the original input is in the language or not? We want the length of the compressed instance to be polynomial in the length of the witness rather than the length of the original input. Such compression makes it possible to store instances succinctly until a future setting allows solving them, either via a technological or algorithmic breakthrough or simply because enough time has elapsed. We give a new classification of NP with respect to compression. This classification forms a stratification of NP that we call the VC hierarchy. The hierarchy is based on a new type of reduction called W-reduction, and there are compression-complete problems for each class. Our motivation for studying this issue stems from the vast cryptographic implications compressibility has. For example, we say that SAT is compressible if there exists a polynomial p(·, ·) so that, given a formula consisting of m clauses over n variables, it is possible to come up with an equivalent (w.r.t. satisfiability) formula of size at most p(n, log m). Given a compression algorithm for SAT, we provide a construction of collision-resistant hash functions from any one-way function. This task was shown to be impossible via black-box reductions (D. Simon, 1998), and indeed the construction presented is inherently non-black-box. Another application of SAT compressibility is a cryptanalytic result concerning the limitation of everlasting security in the bounded storage model when mixed with (time-)complexity-based cryptography. In addition, we study an approach to constructing an oblivious transfer protocol from any one-way function. This approach is based on compression for SAT that also has a property we call witness retrievability. However, we manage to prove severe limitations on the ability to achieve witness-retrievable compression of SAT.

89 citations


Book ChapterDOI
28 May 2006
TL;DR: In this article, the authors propose simple, realistic protocols for polling that allow the responder to plausibly repudiate his response, while at the same time allowing accurate statistical analysis of poll results.
Abstract: We propose simple, realistic protocols for polling that allow the responder to plausibly repudiate his response, while at the same time allowing accurate statistical analysis of poll results. The protocols use simple physical objects (envelopes or scratch-off cards) and can be performed without the aid of computers. One of the main innovations of this work is the use of techniques from theoretical cryptography to rigorously prove the security of a realistic, physical protocol. We show that, given a few properties of physical envelopes, the protocols are unconditionally secure in the universal composability framework.
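The statistical idea underlying deniable polling is classic randomized response, which the physical protocols implement with envelopes or scratch-off cards; the Python sketch below shows only that idea, with an illustrative truth probability of 3/4 rather than any parameter from the paper.

```python
import random

# Randomized response: each responder answers truthfully with
# probability 3/4 and flips the answer otherwise, so any single
# "yes" is plausibly deniable, yet the pollster can unbias the
# aggregate estimate.

P_TRUTH = 0.75

def respond(true_answer: bool) -> bool:
    return true_answer if random.random() < P_TRUTH else not true_answer

def estimate(responses):
    """Invert the noise: with truth probability p,
    E[yes_rate] = p*t + (1-p)*(1-t), so t = (yes_rate - (1-p)) / (2p - 1)."""
    yes_rate = sum(responses) / len(responses)
    return (yes_rate - (1 - P_TRUTH)) / (2 * P_TRUTH - 1)

truth = [random.random() < 0.3 for _ in range(100_000)]  # 30% true "yes"
print(estimate([respond(t) for t in truth]))             # close to 0.30
```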

46 citations


Journal Article
TL;DR: This work proposes simple, realistic protocols for polling that allow the responder to plausibly repudiate his response, while at the same time allowing accurate statistical analysis of poll results.
Abstract: We propose simple, realistic protocols for polling that allow the responder to plausibly repudiate his response, while at the same time allowing accurate statistical analysis of poll results. The protocols use simple physical objects (envelopes or scratch-off cards) and can be performed without the aid of computers. One of the main innovations of this work is the use of techniques from theoretical cryptography to rigorously prove the security of a realistic, physical protocol. We show that, given a few properties of physical envelopes, the protocols are unconditionally secure in the universal composability framework.

43 citations


Book ChapterDOI
20 Aug 2006
TL;DR: It is proved that one-way functions are essential (and sufficient) for the existence of protocols breaking the above lower bounds in the computational setting.
Abstract: We address the message authentication problem in two seemingly different communication models. In the first model, the sender and receiver are connected by an insecure channel and by a low-bandwidth auxiliary channel that enables the sender to “manually” authenticate one short message to the receiver (for example, by typing a short string or comparing two short strings). We consider this model in a setting where no computational assumptions are made, and prove that for any $0 < \epsilon < 1$ there exists a $\log^* n$-round protocol for authenticating n-bit messages, in which only $2\log(1/\epsilon) + O(1)$ bits are manually authenticated, and any adversary (even computationally unbounded) has probability of at most $\epsilon$ to cheat the receiver into accepting a fraudulent message. Moreover, we develop a proof technique showing that our protocol is essentially optimal by providing a lower bound of $2\log(1/\epsilon) - 6$ on the required length of the manually authenticated string. The second model we consider is the traditional message authentication model. In this model the sender and the receiver share a short secret key; however, they are connected only by an insecure channel. Once again, we apply our proof technique, and prove a lower bound of $2\log(1/\epsilon) - 2$ on the required Shannon entropy of the shared key. This settles an open question posed by Gemmell and Naor (CRYPTO '93). Finally, we prove that one-way functions are essential (and sufficient) for the existence of protocols breaking the above lower bounds in the computational setting.
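For a sense of why the manually authenticated string is the scarce resource, consider the naive one-round approach: send the message over the insecure channel, then manually authenticate a short almost-universal hash of it under fresh randomness. In the Python sketch below (field and block size are illustrative), the randomness r must itself be manually authenticated, which is exactly the cost the paper's $\log^* n$-round protocol avoids.

```python
import random

P = 2**61 - 1  # Mersenne prime; hash collision prob <= (#blocks)/P

def au_hash(m: bytes, r: int) -> int:
    """Polynomial-evaluation (almost-universal) hash of m at point r."""
    acc = 0
    for i in range(0, len(m), 7):                 # 7-byte (< 61-bit) blocks
        acc = (acc * r + int.from_bytes(m[i:i + 7], "big")) % P
    return acc

message = b"wire 100 dollars to account 12345"   # sent on insecure channel
received = message                                # whatever was delivered
r = random.randrange(1, P)                        # sender's fresh randomness
manual_string = (r, au_hash(message, r))          # authenticated "by hand"
assert au_hash(received, r) == manual_string[1]   # receiver accepts
```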

30 citations


Book ChapterDOI
10 Jul 2006
TL;DR: In this article, the authors studied the possibility and impossibility of everlasting security in the hybrid bounded storage model with low memory requirements, and showed the equivalence of indistinguishability of encryptions and semantic security.
Abstract: The bounded storage model (BSM) bounds the storage space of an adversary rather than its running time. It utilizes the public transmission of a long random string ${\cal R}$ of length r, and relies on the assumption that an eavesdropper cannot possibly store all of this string. Encryption schemes in this model achieve the appealing property of everlasting security. In short, this means that an encrypted message remains secure even if the adversary eventually gains more storage or gains knowledge of (original) secret keys that may have been used. However, if the honest parties do not share any private information in advance, then achieving everlasting security requires high storage capacity from the honest parties (storage of $\Omega(\sqrt{r})$, as shown in [9]). We consider the idea of a hybrid bounded storage model where computational limitations on the eavesdropper are assumed up until the time that the transmission of ${\cal R}$ has ended. For example, can the honest parties run a computationally secure key agreement protocol in order to agree on a shared private key for the BSM, and thus achieve everlasting security with low memory requirements? We study the possibility and impossibility of everlasting security in the hybrid bounded storage model. We start by formally defining the model and everlasting security for this model. We show the equivalence of two flavors of definitions: indistinguishability of encryptions and semantic security. On the negative side, we show that everlasting security with low storage requirements cannot be achieved by black-box reductions in the hybrid BSM. This serves as a further indication of the hardness of achieving low-storage everlasting security, adding to previous results of this nature [9,15]. On the other hand, we show two augmentations of the model that allow for low-storage everlasting security. The first adds a random oracle to the model, while the second bounds the accessibility of the adversary to the broadcast string ${\cal R}$. Finally, we show that in these two modified models there also exist bounded storage oblivious transfer protocols with low storage requirements.
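For intuition, here is a toy Python sketch of basic (non-hybrid) BSM encryption, with illustrative parameters: the pre-shared key is a small set of positions into ${\cal R}$, and the one-time pad is ${\cal R}$ restricted to those positions; an eavesdropper who failed to store the relevant bits during the broadcast can never recover them, which is the "everlasting security" intuition.

```python
import random

# Toy bounded-storage-model encryption (pre-shared key, i.e. the
# non-hybrid setting). Parameters are illustrative only.

R_LEN, KEY_POSITIONS = 10**6, 128

R = [random.getrandbits(1) for _ in range(R_LEN)]   # public broadcast
key = random.sample(range(R_LEN), KEY_POSITIONS)    # shared in advance
pad = [R[i] for i in key]                           # derived one-time pad

def xor_bits(bits, pad):
    return [b ^ p for b, p in zip(bits, pad)]

msg = [random.getrandbits(1) for _ in range(KEY_POSITIONS)]
ciphertext = xor_bits(msg, pad)
assert xor_bits(ciphertext, pad) == msg             # decryption
```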

Journal Article
TL;DR: It is shown that everlasting security with low storage requirements cannot be achieved by black-box reductions in the hybrid BSM, and two augmentations of the model are shown that allow for low storage everlasting security.
Abstract: The bounded storage model (BSM) bounds the storage space of an adversary rather than its running time. It utilizes the public transmission of a long random string R of length r, and relies on the assumption that an eavesdropper cannot possibly store all of this string. Encryption schemes in this model achieve the appealing property of everlasting security. In short, this means that an encrypted message remains secure even if the adversary eventually gains more storage or gains knowledge of (original) secret keys that may have been used. However, if the honest parties do not share any private information in advance, then achieving everlasting security requires high storage capacity from the honest parties (storage of Ω(√r), as shown in [9]). We consider the idea of a hybrid bounded storage model where computational limitations on the eavesdropper are assumed up until the time that the transmission of R has ended. For example, can the honest parties run a computationally secure key agreement protocol in order to agree on a shared private key for the BSM, and thus achieve everlasting security with low memory requirements? We study the possibility and impossibility of everlasting security in the hybrid bounded storage model. We start by formally defining the model and everlasting security for this model. We show the equivalence of two flavors of definitions: indistinguishability of encryptions and semantic security. On the negative side, we show that everlasting security with low storage requirements cannot be achieved by black-box reductions in the hybrid BSM. This serves as a further indication of the hardness of achieving low-storage everlasting security, adding to previous results of this nature [9,15]. On the other hand, we show two augmentations of the model that allow for low-storage everlasting security. The first adds a random oracle to the model, while the second bounds the accessibility of the adversary to the broadcast string R. Finally, we show that in these two modified models there also exist bounded storage oblivious transfer protocols with low storage requirements.

Journal ArticleDOI
TL;DR: A computational criterion, called computational row non-transitivity, is presented for a function f to be complete for the asymmetric case, and a matching criterion called computational row transitivity is shown for f to have a simple SFE (based on no additional assumptions).
Abstract: A Secure Function Evaluation (SFE) of a two-variable function f(·,·) is a protocol that allows two parties with inputs x and y to evaluate f(x,y) in a manner where neither party learns "more than is necessary". A rich body of work deals with the study of completeness for secure two-party computation. A function f is complete for SFE if a protocol for securely evaluating f allows the secure evaluation of all (efficiently computable) functions. The questions investigated are which functions are complete for SFE, which functions have SFE protocols unconditionally and whether there are functions that are neither complete nor have efficient SFE protocols. The previous study of these questions was mainly conducted from an information theoretic point of view and provided strong answers in the form of combinatorial properties. However, we show that there are major differences between the information theoretic and computational settings. In particular, we show functions that are considered as having SFE unconditionally by the combinatorial criteria but are actually complete in the computational setting. We initiate the fully computational study of these fundamental questions. Somewhat surprisingly, we manage to provide an almost full characterization of the complete functions in this model as well. More precisely, we present a computational criterion (called computational row non-transitivity) for a function f to be complete for the asymmetric case. Furthermore, we show a matching criterion called computational row transitivity for f to have a simple SFE (based on no additional assumptions). This criterion is close to the negation of the computational row non-transitivity and thus we essentially characterize all "nice" functions as either complete or having SFE unconditionally.

Proceedings ArticleDOI
25 Jun 2006
TL;DR: If one-way functions do not exist, then an efficient Eve can learn to impersonate any efficient Bob nearly as well as an unbounded Eve, and tightly bound the number of observations Eve makes in terms of the secret's entropy.
Abstract: Consider Alice and Bob, who have some shared secret which helps Alice to identify Bob-impersonators, and Eve, who does not know their secret. Eve wants to impersonate Bob and "fool" Alice. If Eve is computationally unbounded, how long does she need to observe Bob before she can impersonate him? What is a good strategy for Eve? If (cryptographic) one-way functions exist, an efficient Eve cannot impersonate even very simple Bobs, but if they do not exist, can Eve learn to impersonate any efficient Bob? We formalize these questions in a new computational learning model, which we believe captures a wide variety of natural learning tasks, and tightly bound the number of observations Eve makes in terms of the secret's entropy. We then show that if one-way functions do not exist, then an efficient Eve can learn to impersonate any efficient Bob nearly as well as an unbounded Eve. For the full version of this work see Naor and Rothblum (2006).

Journal Article
TL;DR: A new method for reducing the size of families given by previous constructions of k-wise almost independent permutations, which relies on pseudorandom generators for space-bounded computations.
Abstract: Constructions of k-wise almost independent permutations have been receiving a growing amount of attention in recent years. However, unlike the case of k-wise independent functions, the size of previously constructed families of such permutations is far from optimal. This paper gives a new method for reducing the size of families given by previous constructions. Our method relies on pseudorandom generators for space-bounded computations. In fact, all we need is a generator that produces “pseudorandom walks” on undirected graphs with a consistent labelling. One such generator is implied by Reingold's log-space algorithm for undirected connectivity (Reingold, in Proc. of the 37th Annual Symposium on Theory of Computing, pp. 376–385, 2005; Reingold et al., in Proc. of the 38th Annual Symposium on Theory of Computing, pp. 457–466, 2006). We obtain families of k-wise almost independent permutations, with an optimal description length, up to a constant factor. More precisely, if the distance from uniform for any k tuple should be at most δ, then the size of the description of a permutation in the family is $O(kn+\log \frac{1}{\delta})$.

Posted Content
TL;DR: In this paper, the authors consider the problem of sublinear authentication, prove a tight lower bound when the adversary is not computationally bounded, and show that the existence of one-way functions is a necessary condition for beating that bound in the computational setting.
Abstract: We consider the problem of storing a large file on a remote and unreliable server. To verify that the file has not been corrupted, a user could store a small private (randomized) “fingerprint” on his own computer. This is the setting for the well-studied authentication problem in cryptography, and the required fingerprint size is well understood. We study the problem of sublinear authentication: suppose the user would like to encode and store the file in a way that allows him to verify that it has not been corrupted, but without reading the entire file. If the user only wants to read t bits of the file, how large does the size s of the private fingerprint need to be? We define this problem formally, and show a tight lower bound on the relationship between s and t when the adversary is not computationally bounded, namely $s \cdot t = \Omega(n)$, where n is the file size. This is an easier case of the online memory checking problem, introduced by Blum et al. in 1991, and hence the same (tight) lower bound applies also to that problem. It was previously shown that when the adversary is computationally bounded, under the assumption that one-way functions exist, it is possible to construct much better online memory checkers and sublinear authentication schemes. We show that the existence of one-way functions is also a necessary condition: even slightly breaking the $s \cdot t = \Omega(n)$ lower bound in a computational setting implies the existence of one-way functions.