
Showing papers by "Moni Naor" published in 2003


Journal ArticleDOI
TL;DR: An elegant and remarkably simple algorithm ("the threshold algorithm", or TA) is analyzed and shown to be optimal in a much stronger sense than FA: it is essentially optimal not just for some monotone aggregation functions but for all of them, and not just in a high-probability worst-case sense but over every database.
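A minimal Python sketch of the threshold algorithm may make the idea concrete; the list layout, the dict-based random access, and the assumption that every object appears in every list are illustrative choices of this sketch, not the paper's middleware interface.

```python
def threshold_algorithm(lists, agg, k):
    """Top-k objects under a monotone aggregation function agg.

    lists: sequence of [(object_id, score), ...], each sorted by descending
    score; dict lookups below stand in for random access, and every object
    is assumed to appear in every list.
    """
    random_access = [dict(lst) for lst in lists]      # id -> score, per list
    seen = {}                                         # id -> aggregated grade
    for depth in range(max(len(lst) for lst in lists)):
        last_scores = []
        for lst in lists:
            if depth >= len(lst):
                continue
            obj, score = lst[depth]                   # sorted access
            last_scores.append(score)
            if obj not in seen:
                # Random access to every list to compute the overall grade.
                seen[obj] = agg([ra[obj] for ra in random_access])
        threshold = agg(last_scores)                  # grade of the frontier
        best = sorted(seen.values(), reverse=True)[:k]
        # Halt once k objects already seen have grade >= the threshold.
        if len(best) == k and best[-1] >= threshold:
            break
    return sorted(seen.items(), key=lambda kv: -kv[1])[:k]

# Example: two rankings combined by sum; TA stops after one round of
# sorted access because "a" already meets the threshold.
# lists = [[("a", 0.9), ("b", 0.8), ("c", 0.1)],
#          [("a", 0.7), ("b", 0.6), ("c", 0.2)]]
# print(threshold_algorithm(lists, sum, 1))   # [("a", 1.6)] up to rounding
```

The stopping rule uses only the monotonicity of the aggregation function, so the same loop works with agg = sum, min, or max.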

1,315 citations


Book ChapterDOI
17 Aug 2003
TL;DR: This work suggests classifying computational assumptions based on the complexity of falsifying them by creating a challenge (competition) to their validity, and proposes several open problems regarding cryptographic tasks that currently do not have a good challenge of that sort.
Abstract: We deal with computational assumptions needed in order to design secure cryptographic schemes. We suggest a classification of such assumptions based on the complexity of falsifying them (in case they happen not to be true) by creating a challenge (competition) to their validity. As an outcome of this classification we propose several open problems regarding cryptographic tasks that currently do not have a good challenge of that sort. The most outstanding one is the design of an efficient block cipher.

343 citations


Book ChapterDOI
17 Aug 2003
TL;DR: In 1992, Dwork and Naor proposed that e-mail messages be accompanied by easy-to-check proofs of computational effort in order to discourage junk e-mail, now known as spam.
Abstract: In 1992, Dwork and Naor proposed that e-mail messages be accompanied by easy-to-check proofs of computational effort in order to discourage junk e-mail, now known as spam. They proposed specific CPU-bound functions for this purpose. Burrows suggested that, since memory access speeds vary across machines much less than do CPU speeds, memory-bound functions may behave more equitably than CPU-bound functions; this approach was first explored by Abadi, Burrows, Manasse, and Wobber [3].
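To make "easy-to-check proof of computational effort" concrete, here is a hashcash-style CPU-bound sketch in Python. It is not the specific pricing functions proposed by Dwork and Naor, nor a memory-bound function; it only illustrates the general pattern of a stamp that is moderately expensive to mint and cheap to verify (function names and the difficulty parameter are assumptions of this sketch).

```python
import hashlib
from itertools import count

def mint_stamp(header: str, difficulty_bits: int = 20) -> int:
    """Find a nonce so that SHA-256(header:nonce) has difficulty_bits
    leading zero bits -- moderately expensive for the sender."""
    target = 1 << (256 - difficulty_bits)
    for nonce in count():
        digest = hashlib.sha256(f"{header}:{nonce}".encode()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce

def check_stamp(header: str, nonce: int, difficulty_bits: int = 20) -> bool:
    """A single hash verifies the stamp -- easy to check, hard to mint."""
    digest = hashlib.sha256(f"{header}:{nonce}".encode()).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - difficulty_bits))

# Example: stamp an outgoing message header before sending it.
# nonce = mint_stamp("to:alice@example.com|2003-08-17|msg-id-123")
# assert check_stamp("to:alice@example.com|2003-08-17|msg-id-123", nonce)
```

Because hash throughput depends heavily on CPU speed, the cost of minting such a stamp varies widely across machines; that disparity is exactly what motivates the memory-bound alternative discussed in the paper.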

284 citations


Proceedings ArticleDOI
07 Jun 2003
TL;DR: It is shown that, using the proposed approach, it is possible to construct any family of constant degree graphs in a dynamic environment, though with worse parameters, and it is expected that more distributed data structures could be designed and implemented in a dynamic environment.
Abstract: We propose a new approach for constructing P2P networks based on a dynamic decomposition of a continuous space into cells corresponding to processors. We demonstrate the power of these design rules by suggesting two new architectures, one for DHT (Distributed Hash Table) and the other for dynamic expander networks. The DHT network, which we call Distance Halving, allows logarithmic routing and load while preserving constant degrees. It offers an optimal tradeoff between the degree and the dilation in the sense that degree d guarantees a dilation of O(log_d n). Another advantage over previous constructions is its relative simplicity. A major new contribution of this construction is a dynamic caching technique that maintains low load and storage even under the occurrence of hot spots. Our second construction builds a network that is guaranteed to be an expander. The resulting topologies are simple to maintain and implement. Their simplicity makes it easy to modify and add protocols. A small variation yields a DHT which is robust against random faults. Finally we show that, using our approach, it is possible to construct any family of constant degree graphs in a dynamic environment, though with worse parameters. Therefore we expect that more distributed data structures could be designed and implemented in a dynamic environment.
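A rough Python sketch of the continuous-discrete idea under simplifying assumptions: nodes sit at sorted positions in [0, 1) (the first pinned at 0.0), each owns the cell up to the next position, and only the "forward" edges induced by the Distance Halving maps l(y) = y/2 and r(y) = (y+1)/2 are shown. The O(n) neighbor scan is for clarity only and is not how a real implementation would locate overlapping cells.

```python
def cell(points, i):
    """Cell of [0, 1) owned by node i; points is sorted, starts at 0.0,
    and the last cell runs up to 1.0."""
    lo = points[i]
    hi = points[i + 1] if i + 1 < len(points) else 1.0
    return lo, hi

def neighbors(points, i):
    """Discretize the continuous graph: node i links to every node whose
    cell intersects the image of i's own cell under the edge maps
    l(y) = y/2 and r(y) = (y+1)/2.  Plain O(n) scan, for clarity only."""
    lo, hi = cell(points, i)
    nbrs = set()
    for f in (lambda y: y / 2, lambda y: (y + 1) / 2):
        a, b = f(lo), f(hi)
        for j in range(len(points)):
            jlo, jhi = cell(points, j)
            if jlo < b and a < jhi and j != i:        # the two cells overlap
                nbrs.add(j)
    return nbrs

# Example: eight nodes, one pinned at 0.0.
# import random
# points = sorted([0.0] + [random.random() for _ in range(7)])
# print({i: neighbors(points, i) for i in range(len(points))})
```

When the cells are roughly balanced, each cell's image overlaps only a constant number of other cells, which is the intuition behind the constant degree of the resulting overlay.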

254 citations


Journal Article
TL;DR: In this article, the authors deal with computational assumptions needed in order to design secure cryptographic schemes and suggest a classification of such assumptions based on the complexity of falsifying them by creating a challenge (competition) to their validity.
Abstract: We deal with computational assumptions needed in order to design secure cryptographic schemes. We suggest a classification of such assumptions based on the complexity of falsifying them (in case they happen not to be true) by creating a challenge (competition) to their validity. As an outcome of this classification we propose several open problems regarding cryptographic tasks that currently do not have a good challenge of that sort. The most outstanding one is the design of an efficient block cipher.

253 citations


Book ChapterDOI
21 Feb 2003
TL;DR: A construction is shown that is fault tolerant against random deletions, has an optimal degree-dilation tradeoff, and has improved parameters when compared to other DHTs.
Abstract: We introduce a distributed hash table (DHT) with logarithmic degree and logarithmic dilation. We show two lookup algorithms. The first has a message complexity of log n and is robust under random deletion of nodes. The second has parallel time of log n and message complexity of log² n. It is robust under spam induced by a random subset of the nodes. We then show a construction which is fault tolerant against random deletions and has an optimal degree-dilation tradeoff. The construction has improved parameters when compared to other DHTs. Its main merits are its simplicity, its flexibility and the fresh ideas introduced in its design. It is very easy to modify and to add more sophisticated protocols, such as dynamic caching and erasure-correcting codes.
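The logarithmic lookup cost can be pictured with plain hypercube bit-fixing routing, sketched below; this is only a caricature of the construction, and it captures neither the fault-tolerant lookup nor the spam-resistant parallel variant described in the abstract.

```python
def bit_fixing_path(src: int, dst: int, bits: int):
    """Greedy bit-fixing route on a bits-dimensional hypercube: correct one
    differing bit per hop, so the path has length at most bits = log2(n)."""
    path, cur = [src], src
    for b in range(bits):
        mask = 1 << b
        if (cur ^ dst) & mask:
            cur ^= mask
            path.append(cur)
    return path

# Example: a route between two ids in a 16-bit (n = 65536) identifier space.
# print(bit_fixing_path(0b1010101010101010, 0b0101010101010101, 16))
```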

149 citations


Journal ArticleDOI
TL;DR: It is proved that three apparently unrelated fundamental problems in distributed computing, cryptography, and complexity theory are essentially the same problem, and the definition of weak zero-knowledge is obtained by a sequence of weakenings of the standard definition, forming a hierarchy.
Abstract: We prove that three apparently unrelated fundamental problems in distributed computing, cryptography, and complexity theory, are essentially the same problem. These three problems and brief descriptions of them follow. (1) The selective decommitment problem. An adversary is given commitments to a collection of messages, and the adversary can ask for some subset of the commitments to be opened. The question is whether seeing the decommitments to these open plaintexts allows the adversary to learn something unexpected about the plaintexts that are unopened. (2) The power of 3-round weak zero-knowledge arguments. The question is what can be proved in (a possibly weakened form of) zero-knowledge in a 3-round argument. In particular, is there a language outside of BPP that has a 3-round public-coin weak zero-knowledge argument? (3) The Fiat-Shamir methodology. This is a method for converting a 3-round public-coin argument (viewed as an identification scheme) to a 1-round signature scheme. The method requires what we call a "magic function" that the signer applies to the first-round message of the argument to obtain a second-round message (queries from the verifier). An open question here is whether every 3-round public-coin argument for a language outside of BPP has a magic function.It follows easily from definitions that if a 3-round public-coin argument system is zero-knowledge in the standard (fairly strong) sense, then it has no magic function. We define a weakening of zero-knowledge such that zero-knowledge ⇒ no-magic-function still holds. For this weakened form of zero-knowledge, we give a partial converse: informally, if a 3-round public-coin argument system is not weakly zero-knowledge, then some form of magic is possible for this argument system. We obtain our definition of weak zero-knowledge by a sequence of weakenings of the standard definition, forming a hierarchy. Intermediate forms of zero-knowledge in this hierarchy are reasonable ones, and they may be useful in applications. Finally, we relate the selective decommitment problem to public-coin proof systems and arguments at an intermediate level of the hierarchy, and obtain several positive security results for selective decommitment.
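A toy Schnorr-style signature shows the syntax of the Fiat-Shamir methodology: the hash H plays the role of the "magic function", replacing the verifier's random challenge with a function of the first-round message. The group parameters below are tiny and insecure, and the paper's open question is precisely whether such a function can exist for arguments for languages outside BPP, so this is a sketch of the transform, not a claim about its soundness.

```python
import hashlib
import secrets

# Toy parameters: q divides p - 1 and g has order q modulo p.
# Far too small to be secure; they only show the mechanics.
p, q, g = 23, 11, 2

def H(*parts) -> int:
    """The "magic function": a public hash of the first-round message
    (and the message being signed), reduced into the challenge space."""
    data = b"|".join(str(x).encode() for x in parts)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def keygen():
    x = secrets.randbelow(q - 1) + 1          # secret key
    return x, pow(g, x, p)                    # (sk, pk)

def sign(x, message):
    r = secrets.randbelow(q - 1) + 1
    a = pow(g, r, p)                          # round 1: prover's commitment
    c = H(a, message)                         # round 2 replaced by H(a, m)
    z = (r + c * x) % q                       # round 3: prover's response
    return a, z

def verify(y, message, sig):
    a, z = sig
    c = H(a, message)
    return pow(g, z, p) == (a * pow(y, c, p)) % p

# sk, pk = keygen()
# assert verify(pk, "hello", sign(sk, "hello"))
```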

94 citations


Proceedings ArticleDOI
13 Jul 2003
TL;DR: The quorum system could be viewed as a dynamic adaptation of the Paths system, and therefore has low load, high availability, and good probe complexity; it scales gracefully as the number of elements grows.
Abstract: We investigate issues related to the probe complexity of quorum systems and their implementation in a dynamic environment. Our contribution is twofold. The first regards the algorithmic complexity of finding a quorum in case of random failures. We show a tradeoff between the load of a quorum system and its probe complexity for non-adaptive algorithms. We analyze the algorithmic probe complexity of the Paths quorum system suggested by Naor and Wool in [18], and present two optimal algorithms. The first is a non-adaptive algorithm that matches our lower bound. The second is an adaptive algorithm with a probe complexity that is linear in the minimum between the size of the smallest quorum set and the inverse of the load of the system. We supply a constant degree network in which these algorithms could be executed efficiently. Thus the Paths quorum system is shown to have a good balance between many measures of quality. Our second contribution is presenting Dynamic Paths: a suggestion for a dynamic and scalable quorum system, which can operate in an environment where elements join and leave the system. The quorum system could be viewed as a dynamic adaptation of the Paths system, and therefore has low load, high availability, and good probe complexity. We show that it scales gracefully as the number of elements grows.
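To make "probe complexity" concrete, here is a toy adaptive probing routine in Python over a small grid quorum system (a quorum is one full row plus one full column). This is not the Paths system of Naor and Wool; the only point of the sketch is counting how many elements must be probed before a live quorum is found.

```python
import random

def probe_for_live_quorum(quorums, alive):
    """Adaptive probing: scan quorum sets in order, probing each element at
    most once and reusing answers.  Returns (live quorum or None, #probes)."""
    status, probes = {}, 0
    for q in quorums:
        ok = True
        for e in q:
            if e not in status:
                status[e] = alive(e)          # one probe
                probes += 1
            if not status[e]:
                ok = False
                break
        if ok:
            return q, probes
    return None, probes

# Toy quorum system: 3x3 grid, a quorum is one full row plus one full column.
n = 3
elements = [(i, j) for i in range(n) for j in range(n)]
quorums = [set((r, j) for j in range(n)) | set((i, c) for i in range(n))
           for r in range(n) for c in range(n)]
failed = set(random.sample(elements, 2))      # two random failures
quorum, probes = probe_for_live_quorum(quorums, lambda e: e not in failed)
print(quorum, probes)
```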

56 citations


Journal Article
TL;DR: In this article, the authors study completeness for secure two-party computation in the computational setting and give a characterization of all functions that are complete for secure function evaluation in this setting.
Abstract: A Secure Function Evaluation (SFE) of a two-variable function f(·,·) is a protocol that allows two parties with inputs x and y to evaluate f(x,y) in a manner where neither party learns "more than is necessary". A rich body of work deals with the study of completeness for secure two-party computation. A function f is complete for SFE if a protocol for securely evaluating f allows the secure evaluation of all (efficiently computable) functions. The questions investigated are which functions are complete for SFE, which functions have SFE protocols unconditionally and whether there are functions that are neither complete nor have efficient SFE protocols. The previous study of these questions was mainly conducted from an information theoretic point of view and provided strong answers in the form of combinatorial properties. However, we show that there are major differences between the information theoretic and computational settings. In particular, we show functions that are considered as having SFE unconditionally by the combinatorial criteria but are actually complete in the computational setting. We initiate the fully computational study of these fundamental questions. Somewhat surprisingly, we manage to provide an almost full characterization of the complete functions in this model as well. More precisely, we present a computational criterion (called computational row non-transitivity) for a function f to be complete for the asymmetric case. Furthermore, we show a matching criterion called computational row transitivity for f to have a simple SFE (based on no additional assumptions). This criterion is close to the negation of the computational row non-transitivity and thus we essentially characterize all "nice" functions as either complete or having SFE unconditionally.

22 citations


Book ChapterDOI
15 Dec 2003
TL;DR: A key idea in cryptography is using hard functions in order to obtain secure schemes, and the community has developed a fairly strong understanding of what types of cryptographic primitives can be achieved under which assumption.
Abstract: A key idea in cryptography is using hard functions in order to obtain secure schemes. The theory of hard functions (e.g. one-way functions) has been a great success story, and the community has developed a fairly strong understanding of what types of cryptographic primitives can be achieved under which assumption.

12 citations