Author

Moni Naor

Other affiliations: IBM, Stanford University, University of California, Berkeley
Bio: Moni Naor is an academic researcher at the Weizmann Institute of Science, working primarily on encryption and cryptography. He has an h-index of 102 and has co-authored 338 publications receiving 47,090 citations. His previous affiliations include IBM and Stanford University.


Papers
Journal ArticleDOI
TL;DR: It is shown that, using this approach, it is possible to construct any family of constant-degree graphs in a dynamic environment, though with worse parameters; the authors therefore expect that more distributed data structures could be designed and implemented in a dynamic environment.
Abstract: We propose a new approach for constructing P2P networks based on a dynamic decomposition of a continuous space into cells corresponding to servers. We demonstrate the power of this approach by suggesting two new P2P architectures and various algorithms for them. The first serves as a DHT (distributed hash table) and the other is a dynamic expander network. The DHT network, which we call Distance Halving, allows logarithmic routing and load while preserving constant degrees. It offers an optimal tradeoff between degree and path length in the sense that degree d guarantees a path length of O(log_d n). Another advantage over previous constructions is its relative simplicity. A major new contribution of this construction is a dynamic caching technique that maintains low load and storage, even under the occurrence of hot spots. Our second construction builds a network that is guaranteed to be an expander. The resulting topologies are simple to maintain and implement. Their simplicity makes it easy to modify and add protocols. A small variation yields a DHT which is robust against random Byzantine faults. Finally we show that, using our approach, it is possible to construct any family of constant degree graphs in a dynamic environment, though with worse parameters. Therefore, we expect that more distributed data structures could be designed and implemented in a dynamic environment.
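The continuous-space view in the abstract can be sketched in a few lines of Python. This is a toy illustration only: the class and helper names are hypothetical, and the real construction includes join/leave and caching protocols not shown here. Each point x of [0, 1) has a "left" edge to x/2 and a "right" edge to (x + 1)/2, and servers own the segments between consecutive points.

```python
import hashlib
from bisect import bisect_right

def point(key: str) -> float:
    """Hash a key to a point in the continuous space [0, 1)."""
    return int(hashlib.sha256(key.encode()).hexdigest(), 16) / 2**256

class DistanceHalvingSketch:
    """Toy model of the Distance Halving graph on [0, 1): every point x
    has a 'left' edge to x/2 and a 'right' edge to (x + 1)/2, and each
    server owns the segment from its point to the next server's point."""

    def __init__(self, server_points):
        self.points = sorted(server_points)

    def owner(self, x: float) -> int:
        """Index of the server whose segment contains x."""
        return (bisect_right(self.points, x) - 1) % len(self.points)

    @staticmethod
    def left(x: float) -> float:
        return x / 2.0

    @staticmethod
    def right(x: float) -> float:
        return (x + 1.0) / 2.0

# Both edge maps halve the distance between any two points, which is
# the property behind the O(log n) routing bound.
x0, x1 = 0.25, 0.75
assert abs(DistanceHalvingSketch.left(x0) - DistanceHalvingSketch.left(x1)) == abs(x0 - x1) / 2

net = DistanceHalvingSketch([i / 8 for i in range(8)])  # 8 evenly spread servers
server = net.owner(point("some-file.txt"))              # server storing this key
```

The distance-halving property is what the name refers to: applying matching edges to two points halves their distance, so O(log n) steps suffice to bring any two points into the same cell.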

98 citations

Journal Article
TL;DR: In this paper, the authors studied the problem of sublinear authentication, where the user wants to encode and store the file in a way that allows him to verify that it has not been corrupted, but without reading the entire file.
Abstract: We consider the problem of storing a large file on a remote and unreliable server. To verify that the file has not been corrupted, a user could store a small private (randomized) “fingerprint” on his own computer. This is the setting for the well-studied authentication problem in cryptography, and the required fingerprint size is well understood. We study the problem of sublinear authentication: suppose the user would like to encode and store the file in a way that allows him to verify that it has not been corrupted, but without reading the entire file. If the user only wants to read q bits of the file, how large does the size s of the private fingerprint need to be? We define this problem formally, and show a tight lower bound on the relationship between s and q when the adversary is not computationally bounded, namely: s × q = Ω(n), where n is the file size. This is an easier case of the online memory checking problem, introduced by Blum et al. [1991], and hence the same (tight) lower bound applies also to that problem. It was previously shown that, when the adversary is computationally bounded, under the assumption that one-way functions exist, it is possible to construct much better online memory checkers. The same is also true for sublinear authentication schemes. We show that the existence of one-way functions is also a necessary condition: even slightly breaking the s × q = Ω(n) lower bound in a computational setting implies the existence of one-way functions.
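The computational escape route mentioned at the end of the abstract is classically achieved with a Merkle hash tree (this is the textbook construction, not the paper's own scheme): the private fingerprint is a single root hash, and verifying one block reads only O(log n) hashes, beating the information-theoretic s × q = Ω(n) bound under a collision-resistance assumption. A minimal sketch for a power-of-two number of blocks:

```python
import hashlib

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_tree(blocks):
    """Return the tree as a list of levels: level 0 holds the leaf
    hashes, the last level holds just the root."""
    level = [H(b) for b in blocks]
    levels = [level]
    while len(level) > 1:
        level = [H(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        levels.append(level)
    return levels

def prove(levels, i):
    """Authentication path for leaf i: the sibling hash at each level."""
    path = []
    for level in levels[:-1]:
        path.append(level[i ^ 1])
        i //= 2
    return path

def verify(root, block, i, path):
    """Recompute the root from one block and its path; O(log n) reads."""
    h = H(block)
    for sib in path:
        h = H(h + sib) if i % 2 == 0 else H(sib + h)
        i //= 2
    return h == root

blocks = [bytes([j]) * 32 for j in range(8)]    # n = 8 fixed-size blocks
levels = build_tree(blocks)
root = levels[-1][0]                            # the short private fingerprint
assert verify(root, blocks[5], 5, prove(levels, 5))       # reads 3 = log2(8) hashes
assert not verify(root, b"tampered", 5, prove(levels, 5)) # corruption is caught
```

Forging a proof here requires finding a SHA-256 collision, which is why one-way functions (and in fact collision resistance, in this simple variant) make the lower bound circumventable.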

96 citations

Proceedings ArticleDOI
06 Jul 2001
TL;DR: In this paper, the authors present history-independent dictionaries: a hash table based on open addressing with O(1) insertion and search, and a history-independent dynamic perfect hash table with expected amortized insertion and deletion time O(1), built on a general scheme for history-independent memory allocation that is quite efficient for fixed-size records.
Abstract: Many data structures give away much more information than they were intended to. Whenever privacy is important, we need to be concerned that it might be possible to infer information from the memory representation of a data structure that is not available through its “legitimate” interface. Word processors that quietly maintain old versions of a document are merely the most egregious example of a general problem. We deal with data structures whose current memory representation does not reveal their history. We focus on dictionaries, where this means revealing nothing about the order of insertions or deletions. Our first algorithm is a hash table based on open addressing, allowing O(1) insertion and search. We also present a history independent dynamic perfect hash table that uses space linear in the number of elements inserted and has expected amortized insertion and deletion time O(1). To solve the dynamic perfect hashing problem we devise a general scheme for history independent memory allocation. For fixed-size records this is quite efficient, with insertion and deletion both linear in the size of the record. Our variable-size record scheme is efficient enough for dynamic perfect hashing but not for general use. The main open problem we leave is whether it is possible to implement a variable-size record scheme with low overhead.
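A concrete way to see how open addressing can be made history independent is to fix a canonical layout via a priority rule, in the spirit of Amble and Knuth's ordered linear probing (a toy sketch, not the paper's exact algorithm): whichever order the keys arrive in, the final memory image is the same.

```python
class HistoryIndependentTable:
    """Toy 'ordered linear probing' dictionary: on a collision the
    smaller key keeps the slot and the larger key continues probing.
    This priority rule forces a canonical layout, so the memory
    representation depends only on the set of keys, not on the order
    in which they were inserted (no deletions in this sketch)."""

    def __init__(self, capacity=16):
        self.slots = [None] * capacity

    def _home(self, key):
        return hash(key) % len(self.slots)

    def insert(self, key):
        i = self._home(key)
        while True:
            cur = self.slots[i]
            if cur is None:
                self.slots[i] = key
                return
            if cur == key:
                return
            if key < cur:                        # higher priority wins the slot;
                self.slots[i], key = key, cur    # the displaced key probes on
            i = (i + 1) % len(self.slots)

    def lookup(self, key):
        i = self._home(key)
        for _ in range(len(self.slots)):
            if self.slots[i] == key:
                return True
            if self.slots[i] is None:
                return False
            i = (i + 1) % len(self.slots)
        return False

# Two different insertion orders produce bit-identical layouts.
a, b = HistoryIndependentTable(8), HistoryIndependentTable(8)
for k in [5, 18, 7, 33, 21]:
    a.insert(k)
for k in [21, 33, 7, 18, 5]:
    b.insert(k)
assert a.slots == b.slots
```

An observer who dumps the table's memory learns the set of keys but nothing about the insertion history, which is exactly the dictionary guarantee the abstract describes.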

95 citations

Journal ArticleDOI
TL;DR: It is proved that three apparently unrelated fundamental problems in distributed computing, cryptography, and complexity theory, are essentially the same problem, and the definition of weak zero-knowledge is obtained by a sequence of weakenings of the standard definition, forming a hierarchy.
Abstract: We prove that three apparently unrelated fundamental problems in distributed computing, cryptography, and complexity theory, are essentially the same problem. These three problems and brief descriptions of them follow. (1) The selective decommitment problem. An adversary is given commitments to a collection of messages, and the adversary can ask for some subset of the commitments to be opened. The question is whether seeing the decommitments to these open plaintexts allows the adversary to learn something unexpected about the plaintexts that are unopened. (2) The power of 3-round weak zero-knowledge arguments. The question is what can be proved in (a possibly weakened form of) zero-knowledge in a 3-round argument. In particular, is there a language outside of BPP that has a 3-round public-coin weak zero-knowledge argument? (3) The Fiat-Shamir methodology. This is a method for converting a 3-round public-coin argument (viewed as an identification scheme) to a 1-round signature scheme. The method requires what we call a "magic function" that the signer applies to the first-round message of the argument to obtain a second-round message (queries from the verifier). An open question here is whether every 3-round public-coin argument for a language outside of BPP has a magic function. It follows easily from definitions that if a 3-round public-coin argument system is zero-knowledge in the standard (fairly strong) sense, then it has no magic function. We define a weakening of zero-knowledge such that zero-knowledge ⇒ no-magic-function still holds. For this weakened form of zero-knowledge, we give a partial converse: informally, if a 3-round public-coin argument system is not weakly zero-knowledge, then some form of magic is possible for this argument system. We obtain our definition of weak zero-knowledge by a sequence of weakenings of the standard definition, forming a hierarchy.
Intermediate forms of zero-knowledge in this hierarchy are reasonable ones, and they may be useful in applications. Finally, we relate the selective decommitment problem to public-coin proof systems and arguments at an intermediate level of the hierarchy, and obtain several positive security results for selective decommitment.
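The Fiat-Shamir methodology in item (3) is easiest to see on a concrete 3-round public-coin protocol. Below is a toy Schnorr identification scheme turned into a signature scheme, with the "magic function" instantiated heuristically as a hash of the first-round message; the parameters are far too small for real security and are chosen only for illustration.

```python
import hashlib
import secrets

# Toy Schnorr parameters: p = 2q + 1 with p, q prime, and g generating
# the order-q subgroup of Z_p*. Far too small for any real security.
p, q, g = 2039, 1019, 4

def magic(r: int, msg: bytes) -> int:
    """The Fiat-Shamir 'magic function': the verifier's public-coin
    challenge is replaced by a hash of the first-round message."""
    digest = hashlib.sha256(str(r).encode() + msg).digest()
    return int.from_bytes(digest, "big") % q

def keygen():
    x = secrets.randbelow(q - 1) + 1      # secret key
    return x, pow(g, x, p)                # (x, public key y = g^x)

def sign(x: int, msg: bytes):
    k = secrets.randbelow(q - 1) + 1
    r = pow(g, k, p)                      # round 1: prover's commitment
    c = magic(r, msg)                     # round 2: challenge, now derived by hashing
    s = (k + c * x) % q                   # round 3: prover's response
    return c, s

def verify(y: int, msg: bytes, sig) -> bool:
    c, s = sig
    r = (pow(g, s, p) * pow(y, q - c, p)) % p   # recompute g^k as g^s * y^(-c)
    return c == magic(r, msg)

x, y = keygen()
assert verify(y, b"hello", sign(x, b"hello"))
```

The open question the abstract raises is whether such a magic function exists for every 3-round public-coin argument for a language outside BPP; hashing, as above, is a heuristic instantiation.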

94 citations

Proceedings ArticleDOI
23 Oct 2010
TL;DR: In this article, the authors present a dynamic dictionary that uses only (1 + o(1))·B bits, where B is the information-theoretic lower bound for representing a set of size n from a universe of size u, and guarantees constant-time operations in the worst case with high probability.
Abstract: The performance of a dynamic dictionary is measured mainly by its update time, lookup time, and space consumption. In terms of update time and lookup time there are known constructions that guarantee constant-time operations in the worst case with high probability, and in terms of space consumption there are known constructions that use essentially optimal space. However, although the first analysis of a dynamic dictionary dates back more than 45 years (to Knuth's analysis of linear probing in 1963), the trade-off between these aspects of performance is still not completely understood. In this paper we settle two fundamental open problems. First, we construct the first dynamic dictionary that enjoys the best of both worlds: it stores n elements using (1 + ε)n memory words, and guarantees constant-time operations in the worst case with high probability. Specifically, for any ε = Ω((log log n / log n)^(1/2)) and for any sequence of polynomially many operations, with high probability over the randomness of the initialization phase, all operations are performed in constant time which is independent of ε. The construction is a two-level variant of cuckoo hashing, augmented with a "backyard" that handles a large fraction of the elements, together with a de-amortized perfect hashing scheme for eliminating the dependency on ε. Second, we present a variant of the above construction that uses only (1 + o(1))·B bits, where B is the information-theoretic lower bound for representing a set of size n taken from a universe of size u, and guarantees constant-time operations in the worst case with high probability, as before. This problem was open even in the amortized setting.
One of the main ingredients of our construction is a permutation-based variant of cuckoo hashing, which significantly improves the space consumption of cuckoo hashing when dealing with a rather small universe.
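The paper's construction builds on cuckoo hashing, so a minimal sketch of the classic two-table scheme may help fix ideas. This shows only the basic ingredient, not the paper's backyard or de-amortization layers, and the eviction bound and rehash policy here are simplistic.

```python
import random

class CuckooHashTable:
    """Minimal two-table cuckoo hashing. A key lives in one of exactly
    two candidate slots (one per table), so a lookup probes at most
    twice; insertion evicts and relocates keys along a bounded chain."""

    def __init__(self, capacity=32):
        self.cap = capacity
        self.tables = [[None] * capacity, [None] * capacity]
        self.seeds = [random.randrange(2**30) for _ in range(2)]

    def _h(self, t, key):
        return hash((self.seeds[t], key)) % self.cap

    def lookup(self, key):
        # Worst-case constant time: exactly two probes.
        return any(self.tables[t][self._h(t, key)] == key for t in (0, 1))

    def insert(self, key):
        if self.lookup(key):
            return
        for _ in range(32):                  # bounded eviction chain
            for t in (0, 1):
                i = self._h(t, key)
                self.tables[t][i], key = key, self.tables[t][i]
                if key is None:              # landed in an empty slot
                    return
        self._rehash(key)                    # chain too long: new hash functions

    def _rehash(self, pending):
        keys = [k for tab in self.tables for k in tab if k is not None]
        self.seeds = [random.randrange(2**30) for _ in range(2)]
        self.tables = [[None] * self.cap, [None] * self.cap]
        for k in keys + [pending]:
            self.insert(k)

table = CuckooHashTable()
for k in range(12):
    table.insert(k)
assert all(table.lookup(k) for k in range(12))
```

The paper's contribution can be read against this baseline: plain cuckoo hashing needs roughly 2n slots across the two tables, while the backyard construction gets the space down to (1 + ε)n words, and the permutation-based variant further compresses toward the information-theoretic bound.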

92 citations


Cited by
Journal ArticleDOI


08 Dec 2001-BMJ
TL;DR: There is, I think, something ethereal about i, the square root of minus one: it seemed an odd beast, an intruder hovering on the edge of reality.
Abstract: There is, I think, something ethereal about i —the square root of minus one. I remember first hearing about it at school. It seemed an odd beast at that time—an intruder hovering on the edge of reality. Usually familiarity dulls this sense of the bizarre, but in the case of i it was the reverse: over the years the sense of its surreal nature intensified. It seemed that it was impossible to write mathematics that described the real world in …

33,785 citations

Book
01 Jan 1996
TL;DR: A valuable reference for the novice as well as for the expert who needs a wider scope of coverage within the area of cryptography, this book provides easy and rapid access to information and includes more than 200 algorithms and protocols.
Abstract: From the Publisher: A valuable reference for the novice as well as for the expert who needs a wider scope of coverage within the area of cryptography, this book provides easy and rapid access to information and includes more than 200 algorithms and protocols; more than 200 tables and figures; more than 1,000 numbered definitions, facts, examples, notes, and remarks; and over 1,250 significant references, including brief comments on each paper.

13,597 citations

Patent
30 Sep 2010
TL;DR: In this article, the authors propose a secure content distribution method for a configurable, general-purpose electronic commercial transaction/distribution control system. The method comprises encapsulating digital information in one or more digital containers; encrypting at least a portion of the digital information; associating at least partially secure control information for managing interactions with the encrypted digital information and/or digital containers; and delivering one or more digital containers to a digital information user.
Abstract: PROBLEM TO BE SOLVED: To provide a commercially secure and effective method of distributing electronic content within a configurable, general-purpose electronic commercial transaction/distribution control system. SOLUTION: In this system, which has at least one protected processing environment for safely controlling at least a portion of the decoding of digital information, the secure content distribution method comprises: encapsulating digital information in one or more digital containers; encrypting at least a portion of the digital information; associating at least partially secure control information for managing interactions with the encrypted digital information and/or digital containers; delivering one or more digital containers to a digital information user; and using a protected processing environment to safely control at least a portion of the decoding of the digital information. COPYRIGHT: (C)2006,JPO&NCIPI

7,643 citations

Proceedings ArticleDOI
22 Jan 2006
TL;DR: Some of the major results in random graphs and some of the more challenging open problems are reviewed, touching on newer models, including those related to the WWW.
Abstract: We will review some of the major results in random graphs and some of the more challenging open problems. We will cover algorithmic and structural questions. We will touch on newer models, including those related to the WWW.
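For readers new to the area, the basic object of such a survey is the Erdős-Rényi model G(n, p); a minimal sampler (illustrative only, with a fixed seed for reproducibility):

```python
import random

def gnp(n: int, p: float, seed: int = 0):
    """Sample an Erdos-Renyi random graph G(n, p): each of the
    n*(n-1)/2 possible edges is present independently with
    probability p. Returns the edge list."""
    rng = random.Random(seed)
    return [(i, j) for i in range(n) for j in range(i + 1, n)
            if rng.random() < p]

edges = gnp(100, 0.05)
# Expected edge count: p * n * (n - 1) / 2 = 0.05 * 4950 = 247.5
```

Newer models such as preferential attachment, which the survey also touches on, replace the independent-edge assumption with growth rules that better match networks like the WWW.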

7,116 citations

Book ChapterDOI
19 Aug 2001
TL;DR: This work proposes a fully functional identity-based encryption scheme (IBE) based on the Weil pairing that has chosen ciphertext security in the random oracle model assuming an elliptic curve variant of the computational Diffie-Hellman problem.
Abstract: We propose a fully functional identity-based encryption scheme (IBE). The scheme has chosen ciphertext security in the random oracle model assuming an elliptic curve variant of the computational Diffie-Hellman problem. Our system is based on the Weil pairing. We give precise definitions for secure identity based encryption schemes and give several applications for such systems.
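The mechanics of the scheme described above can be sketched with a toy "bilinear map" in which group elements are plain integers, so discrete logarithms are trivial and the sketch is completely insecure; it shows only how a pairing lets anyone encrypt to an identity string while only the key authority can derive the matching private key. All names and parameters here are illustrative.

```python
import hashlib
import secrets

# Toy stand-in for the pairing groups: elements of G are integers mod q
# and e(a, b) = gT^(a*b) in Z_pT*, with the generator P = 1. In the real
# scheme G is an elliptic-curve group and e is the Weil pairing.
q, pT, gT = 1019, 2039, 4

def e(a: int, b: int) -> int:
    """Toy bilinear map: e(x*a, b) == e(a, x*b) == e(a, b)^x."""
    return pow(gT, (a * b) % q, pT)

def H1(identity: str) -> int:
    """Hash an identity string onto a 'group element'."""
    return int.from_bytes(hashlib.sha256(identity.encode()).digest(), "big") % q

def H2(gid: int, n: int) -> bytes:
    """Derive an n-byte pad (n <= 32) from a target-group element."""
    return hashlib.sha256(str(gid).encode()).digest()[:n]

def setup():
    s = secrets.randbelow(q - 1) + 1      # master secret of the key authority
    return s, s % q                       # P_pub = s*P, with P = 1

def extract(s: int, identity: str) -> int:
    return (s * H1(identity)) % q         # private key d_ID = s * Q_ID

def encrypt(p_pub: int, identity: str, msg: bytes):
    r = secrets.randbelow(q - 1) + 1
    u = r % q                             # U = r*P
    gid = pow(e(H1(identity), p_pub), r, pT)
    return u, bytes(m ^ k for m, k in zip(msg, H2(gid, len(msg))))

def decrypt(d_id: int, ct):
    u, v = ct
    gid = e(d_id, u)                      # e(s*Q, r*P) = e(Q, s*P)^r
    return bytes(c ^ k for c, k in zip(v, H2(gid, len(v))))

s, p_pub = setup()
ct = encrypt(p_pub, "alice@example.com", b"hi")
assert decrypt(extract(s, "alice@example.com"), ct) == b"hi"
```

The key identity is bilinearity: the sender computes e(Q_ID, s·P)^r and the receiver computes e(s·Q_ID, r·P), and both equal e(Q_ID, P)^(s·r), so the identity string itself acts as the public key.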

7,083 citations