
Showing papers by Moni Naor published in 1991


Proceedings ArticleDOI
Danny Dolev1, Cynthia Dwork1, Moni Naor1
03 Jan 1991
TL;DR: Non-malleable schemes are presented for each of the contexts of string commitment and zero-knowledge proofs of possession of knowledge, where a user need not know anything about the number or identity of other system users.
Abstract: The notion of non-malleable cryptography, an extension of semantically secure cryptography, is defined. Informally, the additional requirement is that given the ciphertext it is impossible to generate a different ciphertext so that the respective plaintexts are related. The same concept makes sense in the contexts of string commitment and zero-knowledge proofs of possession of knowledge. Non-malleable schemes for each of these three problems are presented. The schemes do not assume a trusted center; a user need not know anything about the number or identity of other system users.
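To see what malleability means in the simplest case (an illustration of mine, not a construction from the paper): in any XOR-based scheme such as a one-time pad, flipping ciphertext bits flips the corresponding plaintext bits, so an adversary can turn a ciphertext of one message into a ciphertext of a related message without ever learning the key.

```python
import os

def otp_encrypt(key: bytes, msg: bytes) -> bytes:
    """One-time-pad encryption: ciphertext = key XOR message."""
    return bytes(k ^ m for k, m in zip(key, msg))

def otp_decrypt(key: bytes, ct: bytes) -> bytes:
    return otp_encrypt(key, ct)  # XOR is its own inverse

key = os.urandom(5)
ct = otp_encrypt(key, b"PAY 1")

# Without knowing the key, an adversary XORs the ciphertext with the
# difference of two plaintexts, turning "PAY 1" into "PAY 9":
delta = bytes(a ^ b for a, b in zip(b"PAY 1", b"PAY 9"))
mauled = bytes(c ^ d for c, d in zip(ct, delta))

assert otp_decrypt(key, mauled) == b"PAY 9"
```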

1,180 citations


Journal ArticleDOI
Moni Naor1
TL;DR: It is shown how a pseudorandom generator can provide a bit-commitment protocol; the number of bits communicated when parties commit to many bits simultaneously is also analyzed, and the existence of pseudorandom generators suffices to ensure amortized O(1) bits of communication per bit commitment.
Abstract: We show how a pseudorandom generator can provide a bit-commitment protocol. We also analyze the number of bits communicated when parties commit to many bits simultaneously, and show that the assumption of the existence of pseudorandom generators suffices to assure amortized O(1) bits of communication per bit commitment.
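A minimal sketch of the kind of commitment the abstract describes: the receiver sends a random string, and the sender masks a pseudorandom expansion of a secret seed with it (or not) depending on the committed bit. The hash-based expander below merely stands in for a real length-tripling pseudorandom generator, and all parameter choices are illustrative.

```python
import hashlib, os

N = 16  # seed length in bytes (toy security parameter)

def prg(seed: bytes, out_len: int = 3 * N) -> bytes:
    """Toy stand-in for a length-tripling PRG: SHA-256 in counter mode."""
    out, ctr = b"", 0
    while len(out) < out_len:
        out += hashlib.sha256(seed + ctr.to_bytes(4, "big")).digest()
        ctr += 1
    return out[:out_len]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# Commit phase: receiver sends a random string R; sender, holding bit b and
# a secret seed, sends G(seed) if b = 0 and G(seed) XOR R if b = 1.
R = os.urandom(3 * N)
seed, bit = os.urandom(N), 1
commitment = prg(seed) if bit == 0 else xor(prg(seed), R)

# Reveal phase: sender discloses (seed, bit); receiver recomputes and checks.
assert commitment == (prg(seed) if bit == 0 else xor(prg(seed), R))
```

Hiding rests on the pseudorandomness of the expansion; binding rests on the fact that a random R is unlikely to equal the XOR of two PRG outputs.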

870 citations


Journal ArticleDOI
24 Jun 1991
TL;DR: A novel technique, based on the pseudo-random properties of certain graphs known as expanders, is used to obtain simple explicit constructions of asymptotically good codes, superior to previously known explicit constructions in the zero-rate neighborhood.
Abstract: A novel technique, based on the pseudo-random properties of certain graphs known as expanders, is used to obtain novel simple explicit constructions of asymptotically good codes. In one of the constructions, the expanders are used to enhance Justesen codes by replicating, shuffling, and then regrouping the code coordinates. For any fixed (small) rate, and for a sufficiently large alphabet, the codes thus obtained lie above the Zyablov bound. Using these codes as outer codes in a concatenated scheme, a second asymptotically good construction is obtained which applies to small alphabets (say, GF(2)) as well. Although these concatenated codes lie below the Zyablov bound, they are still superior to previously known explicit constructions in the zero-rate neighborhood.
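For context, the Zyablov bound referred to here is the rate/distance tradeoff achievable by straightforward concatenation; in the binary case it is usually written (with $H$ the binary entropy function) as
$$R_Z(\delta) \;=\; \max_{\delta \le \mu \le 1/2} \bigl(1 - H(\mu)\bigr)\Bigl(1 - \frac{\delta}{\mu}\Bigr), \qquad 0 \le \delta < \tfrac{1}{2}.$$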

311 citations


Proceedings Article
01 Jan 1991

260 citations


Journal ArticleDOI
TL;DR: It is shown that any probabilistic algorithm for 3-coloring the ring must take at least $\frac{1}{2}\log^* n - 2$ rounds, otherwise the probability that all processors are colored legally is less than $\frac{1}{2}$.
Abstract: Suppose that n processors are arranged in a ring and can communicate only with their immediate neighbors. It is shown that any probabilistic algorithm for 3-coloring the ring must take at least $\frac{1}{2}\log^* n - 2$ rounds, otherwise the probability that all processors are colored legally is less than $\frac{1}{2}$. A similar time bound holds for selecting a maximal independent set. The bound is tight (up to a constant factor) in light of the deterministic algorithms of Cole and Vishkin [Inform. and Control, 70 (1986), pp. 32–53] and extends the lower bound for deterministic algorithms of Linial [Proc. 28th IEEE Foundations of Computer Science Symposium, 1987, pp. 331–335].
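For intuition on the matching deterministic upper bound, here is a sketch of mine (in Python) of one round of the Cole-Vishkin color-reduction step on an oriented ring, assuming each processor starts with a color distinct from its neighbors' (e.g., its identifier); iterating the round O(log* n) times shrinks the palette to a constant size.

```python
def cv_reduce_round(colors):
    """One Cole-Vishkin color-reduction round on an oriented ring.

    colors[i] is the color of processor i, whose predecessor is processor
    i-1 (mod n), and adjacent colors are assumed distinct.  Each processor
    replaces its color by 2*k + b, where k is the lowest bit position at
    which its color differs from its predecessor's and b is its own bit
    there.  Adjacent colors stay distinct, while color length drops from
    L bits to roughly log L + 1 bits.
    """
    n = len(colors)
    new = []
    for i in range(n):
        mine, pred = colors[i], colors[i - 1]
        diff = mine ^ pred
        k = (diff & -diff).bit_length() - 1   # lowest differing bit position
        b = (mine >> k) & 1
        new.append(2 * k + b)
    return new

# Example: start from distinct identifiers on a 16-node ring and iterate.
colors = list(range(16))
for _ in range(3):
    colors = cv_reduce_round(colors)
    assert all(colors[i] != colors[i - 1] for i in range(len(colors)))
```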

144 citations


Proceedings ArticleDOI
01 Sep 1991
TL;DR: The notion of program checking is extended to include programs that alter their environment, in particular programs that store and retrieve data from memory; checkers for various data structures are presented, together with lower bounds of log n on the reliable memory they need, where n is the size of the structure.
Abstract: The notion of program checking is extended to include programs that alter their environment, in particular, programs that store and retrieve data from memory. The model considered allows the checker a small amount of reliable memory. The checker is presented with a sequence of requests (online) to a data structure which must reside in a large but unreliable memory. The data structure is viewed as being controlled by an adversary. The checker is to perform each operation in the input sequence using its reliable memory and the unreliable data structure so that any error in the operation of the structure will be detected by the checker with high probability. Checkers for various data structures are presented. Lower bounds of log n on the amount of reliable memory needed by these checkers, where n is the size of the structure, are proved.
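For flavor, here is a toy sketch (mine) of one standard idea in this setting: an offline checker that stores a timestamp next to every value in the unreliable memory and keeps, in its reliable memory, only a clock and small fingerprints of the multisets of triples it has written and read; a final scan must make the two fingerprints agree unless the memory misbehaved. The XOR-of-hashes fingerprint below is only a placeholder for the small-space multiset hash a real checker would use.

```python
import hashlib

class OfflineRamChecker:
    """Toy offline checker for an unreliable word-addressable memory."""

    def __init__(self, size, unreliable_mem):
        self.mem = unreliable_mem   # untrusted: maps addr -> (value, timestamp)
        self.size = size
        self.time = 0               # reliable memory: clock ...
        self.w_fp = 0               # ... fingerprint of triples written ...
        self.r_fp = 0               # ... fingerprint of triples read
        for a in range(size):       # initialise every cell once
            self._write_raw(a, 0)

    def _h(self, addr, val, t):     # toy fingerprint of a single triple
        d = hashlib.sha256(f"{addr},{val},{t}".encode()).digest()
        return int.from_bytes(d[:8], "big")

    def _write_raw(self, addr, val):
        self.time += 1
        self.mem[addr] = (val, self.time)
        self.w_fp ^= self._h(addr, val, self.time)

    def _consume(self, addr):       # read a cell and charge it to r_fp
        val, t = self.mem[addr]     # untrusted response
        assert t <= self.time, "memory returned a timestamp from the future"
        self.r_fp ^= self._h(addr, val, t)
        return val

    def read(self, addr):
        val = self._consume(addr)
        self._write_raw(addr, val)  # write back with a fresh timestamp
        return val

    def write(self, addr, val):
        self._consume(addr)         # account for the triple being overwritten
        self._write_raw(addr, val)

    def audit(self):
        for a in range(self.size):  # final scan reads every cell exactly once
            self._consume(a)
        return self.w_fp == self.r_fp

mem = {}
checker = OfflineRamChecker(4, mem)
checker.write(2, 99)
assert checker.read(2) == 99
assert checker.audit()
```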

124 citations


Proceedings ArticleDOI
Amos Fiat1, Moni Naor2
03 Jan 1991
TL;DR: In this article, a time-space tradeoff of $TS^2 = N^3 q(f)$ is given, where $q(f)$ is the probability that two random elements are mapped to the same image under $f$.
Abstract: We provide rigorous time-space tradeoffs for inverting any function. Given a function $f$, we give a time-space tradeoff of $TS^2 = N^3 q(f)$, where $q(f)$ is the probability that two random elements are mapped to the same image under $f$. We also give a more general tradeoff, $TS^3 = N^3$, that can invert any function at any point.
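The flavor of such tradeoffs goes back to Hellman's classic time/space tradeoff for inverting a function, which this kind of rigorous analysis builds on. Below is a minimal single-table sketch of mine in Python; it is purely illustrative (practical constructions use many tables and rerandomized variants of f, and this toy version may fail to invert points not covered by its chains).

```python
import random

def build_table(f, domain_size, chains, chain_len, seed=0):
    """Precompute Hellman chains: walk f from random start points and store
    only endpoint -> start point.  Space is proportional to `chains`."""
    rng = random.Random(seed)
    table = {}
    for _ in range(chains):
        start = rng.randrange(domain_size)
        x = start
        for _ in range(chain_len):
            x = f(x)
        table[x] = start
    return table

def invert(f, table, chain_len, y):
    """Online phase: walk forward from y; on hitting a stored endpoint,
    rewalk that chain looking for a preimage of y.  Time is roughly
    chain_len per table (plus false alarms); may return None."""
    z = y
    for _ in range(chain_len):
        if z in table:
            x = table[z]
            for _ in range(chain_len):
                if f(x) == y:
                    return x
                x = f(x)
        z = f(z)
    return None

# Toy usage with a random function on a small domain.
N = 1 << 12
rng = random.Random(1)
image = [rng.randrange(N) for _ in range(N)]
f = lambda x: image[x]
table = build_table(f, N, chains=64, chain_len=64)
y = f(rng.randrange(N))
x = invert(f, table, chain_len=64, y=y)
assert x is None or f(x) == y
```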

59 citations


Proceedings ArticleDOI
01 Sep 1991
TL;DR: It is shown that the CNF search problem is complete for all the variants of decision trees and that the gaps between the nondeterministic, the randomized, and the deterministic complexities can be arbitrarily large for search problems.
Abstract: The relative power of determinism, randomness, and nondeterminism for search problems in the Boolean decision tree model is studied. It is shown that the CNF search problem is complete for all the variants of decision trees. It is then shown that the gaps between the nondeterministic, the randomized, and the deterministic complexities can be arbitrarily large for search problems. The special case of nondeterministic complexity is discussed.
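For concreteness, the CNF search problem in question can be phrased as follows: fix an unsatisfiable CNF formula; given query access to the bits of an assignment, output a clause that the assignment falsifies (one must exist). Below is a small Python illustration of mine of a naive deterministic decision tree for it, with a query-counting wrapper.

```python
def falsified_clause(clauses, assignment_oracle):
    """Find a clause falsified by an implicitly given assignment.

    clauses: list of clauses, each a list of non-zero ints; literal v means
    variable v is true, -v means variable v is false (DIMACS-style).
    assignment_oracle(v): truth value of variable v; each distinct variable
    looked up counts as one decision-tree query (values are cached).
    """
    cache = {}
    def query(v):
        if v not in cache:
            cache[v] = assignment_oracle(v)
        return cache[v]

    for clause in clauses:
        if not any(query(abs(lit)) == (lit > 0) for lit in clause):
            return clause, len(cache)   # falsified clause, queries used
    raise ValueError("the formula is satisfied by this assignment")

# The smallest unsatisfiable CNF: (x1) AND (NOT x1).
print(falsified_clause([[1], [-1]], {1: True}.__getitem__))  # ([-1], 1)
```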

45 citations


Proceedings ArticleDOI
01 Sep 1991
TL;DR: The authors study the direct sum problem with respect to communication complexity and give a general lower bound on the amortized communication complexity of any function f in terms of its communication complexity C(f).
Abstract: The authors study the direct sum problem with respect to communication complexity: consider a function $f: D \to \{0, 1\}$, where $D \subseteq \{0,1\}^n \times \{0,1\}^n$. The amortized communication complexity of $f$, i.e., the communication complexity of simultaneously computing $f$ on $l$ instances, divided by $l$, is studied. The authors present, both in the deterministic and the randomized model, functions with communication complexity $\Theta(\log n)$ and amortized communication complexity $O(1)$. They also give a general lower bound on the amortized communication complexity of any function $f$ in terms of its communication complexity $C(f)$.
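In symbols (notation mine): the quantity studied is $\bar{C}(f) = \lim_{l \to \infty} C(f^{(l)})/l$, where $f^{(l)}$ denotes computing $f$ simultaneously on $l$ independent instances; the limit exists because $C(f^{(l)})$ is subadditive in $l$.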

26 citations


Book ChapterDOI
Moni Naor1
08 Jul 1991
TL;DR: This work considers the well-known string matching problem, where a text and a pattern are given and the task is to determine whether the pattern appears as a substring of the text; preprocessing and on-line algorithms are provided such that the preprocessing runs in linear time with linear storage and the on-line complexity is logarithmic in the length of the text.
Abstract: We consider the well-known string matching problem, where a text and a pattern are given and the problem is to determine if the pattern appears as a substring in the text. The setting we investigate is one where the pattern and the text are preprocessed separately at two different sites. At some point in time the two sites wish to determine if the pattern appears in the text (this is the on-line stage). We provide preprocessing and on-line algorithms such that the preprocessing algorithm runs in linear time and requires linear storage, and the on-line complexity is logarithmic in the length of the text. We also give an application of the algorithm to parallel data compression, and show how to implement the Lempel-Ziv algorithm in logarithmic time with a linear number of processors.

21 citations


Proceedings ArticleDOI
Moni Naor1, Ron M. Roth1
01 Sep 1991
TL;DR: Given a distributed network of processors represented by an undirected graph G=(V, E) and a file size k, the problem of distributing an arbitrary file w of k bits among all nodes of the network G is considered.
Abstract: Given a distributed network of processors represented by an undirected graph G = (V, E) and a file size k, the problem of distributing an arbitrary file w of k bits among all nodes of the network G is considered. Memory devices are to be assigned to the nodes of G such that, by accessing its own memory and that of its adjacent nodes, each node can reconstruct the contents of w. The objective is to minimize the total size of memory in the network. A file distribution scheme that realizes this objective for $k \gg \log \Delta_G$, where $\Delta_G$ stands for the maximum degree in G, is presented. For this range of k, the total size of memory required by the suggested scheme approaches an integer programming lower bound on that size.
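The integer programming bound referred to is presumably the natural covering formulation of the problem: writing $m_v$ for the number of bits stored at node $v$ and $N[v]$ for the closed neighborhood of $v$, the bound is the optimum of
$$\min \sum_{v \in V} m_v \quad \text{subject to} \quad \sum_{u \in N[v]} m_u \ge k \ \ \text{for every } v \in V, \qquad m_v \ge 0,$$
since each node must be able to recover all $k$ bits of $w$ from the memory in its closed neighborhood.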

Journal ArticleDOI
TL;DR: An implicit data structure for storing n k-key records is described, which supports searching for a record, under any key, in the asymptotically optimal search time $O(\lg n)$, in sharp contrast to an $\Omega(n^{1 - 1/k})$ lower bound.