
Showing papers by "Moni Naor", published in 1993


Proceedings Article
22 Aug 1993
TL;DR: Several schemes are presented that allow a center to broadcast a secret to any subset of privileged users out of a universe of size n so that coalitions of k users not in the privileged set cannot learn the secret.
Abstract: We introduce new theoretical measures for the qualitative and quantitative assessment of encryption schemes designed for broadcast transmissions. The goal is to allow a central broadcast site to broadcast secure transmissions to an arbitrary set of recipients while minimizing key management related transmissions. We present several schemes that allow a center to broadcast a secret to any subset of privileged users out of a universe of size n so that coalitions of k users not in the privileged set cannot learn the secret. The most interesting scheme requires every user to store $O(k \log k \log n)$ keys and the center to broadcast $O(k^2 \log^2 k \log n)$ messages regardless of the size of the privileged set. This scheme is resilient to any coalition of k users. We also present a scheme that is resilient with probability p against a random subset of k users. This scheme requires every user to store $O(\log k \log(1/p))$ keys and the center to broadcast $O(k \log^2 k \log(1/p))$ messages.
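
As a point of reference for these bounds, the following is a minimal sketch of a simple 1-resilient baseline in this setting, not one of the paper's k-resilient constructions: every user receives the keys of all other users, and the secret for a privileged set is blinded by XORing the keys of the excluded users, so any single excluded user is missing exactly the one key that hides the secret. Function names, the 16-byte keys, and the assumption that the secret has the same length as a key are all illustrative.

    import secrets
    from functools import reduce

    KEY_LEN = 16  # bytes; the secret is assumed to be the same length

    def xor_bytes(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    def setup(n):
        # The center picks one key per user; user i receives every key
        # except k_i, i.e. O(n) keys per user (far weaker than the
        # O(k log k log n) storage of the schemes in the paper).
        keys = [secrets.token_bytes(KEY_LEN) for _ in range(n)]
        user_storage = {i: {j: keys[j] for j in range(n) if j != i}
                        for i in range(n)}
        return keys, user_storage

    def broadcast(keys, privileged, secret):
        # Blind the secret with the keys of all non-privileged users.
        pad = reduce(xor_bytes,
                     (keys[j] for j in range(len(keys)) if j not in privileged),
                     bytes(KEY_LEN))
        return xor_bytes(secret, pad)

    def receive(user_storage, i, n, privileged, ciphertext):
        # A privileged user i is never excluded, so it holds k_j for every
        # excluded user j and can strip the pad; a single excluded user lacks
        # its own key, but two excluded users together hold all keys, which
        # is why this baseline is only 1-resilient.
        pad = reduce(xor_bytes,
                     (user_storage[i][j] for j in range(n) if j not in privileged),
                     bytes(KEY_LEN))
        return xor_bytes(ciphertext, pad)

    # Usage: broadcast to users {0, 2, 3} out of a universe of 5.
    keys, store = setup(5)
    secret = secrets.token_bytes(KEY_LEN)
    ct = broadcast(keys, {0, 2, 3}, secret)
    assert receive(store, 2, 5, {0, 2, 3}, ct) == secret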

1,449 citations


Journal ArticleDOI
TL;DR: It is shown how to efficiently construct a small probability space on n binary random variables such that for every subset, its parity is either zero or one with “almost” equal probability.
Abstract: It is shown how to efficiently construct a small probability space on n binary random variables such that for every subset, its parity is either zero or one with “almost” equal probability. Such random variables are called $\epsilon$-biased. The number of random bits needed to generate the random variables is $O(\log n + \log \frac{1}{\epsilon})$. Thus, if $\epsilon$ is polynomially small, then the size of the sample space is also polynomial. Random variables that are $\epsilon$-biased can be used to construct “almost” k-wise independent random variables, where $\epsilon$ is a function of k. These probability spaces have various applications: 1. Derandomization of algorithms: many randomized algorithms that require only k-wise independence of their random bits (where k is bounded by $O(\log n)$) can be derandomized by using $\epsilon$-biased random variables. 2. Reducing the number of random bits required by certain randomized algorithms, e.g., verification of matrix multiplication. 3. Exhaustive testing...
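
For readers unfamiliar with the terminology, the standard definition underlying this abstract can be stated as follows (a restatement for clarity, not a quotation from the paper): a sample space of random variables $x_1,\dots,x_n \in \{0,1\}$ is $\epsilon$-biased if for every nonempty $S \subseteq \{1,\dots,n\}$,

$$\Bigl|\Pr\Bigl[\textstyle\bigoplus_{i \in S} x_i = 0\Bigr] - \Pr\Bigl[\textstyle\bigoplus_{i \in S} x_i = 1\Bigr]\Bigr| \le \epsilon .$$

Restricting attention to any k of the variables, a standard Fourier (XOR-lemma) argument shows their joint distribution is within statistical distance at most $2^{k/2}\epsilon$ of uniform, which is the sense in which $\epsilon$-biased variables yield “almost” k-wise independence.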

690 citations


Proceedings ArticleDOI
01 Jun 1993
TL;DR: A study of computation that can be done locally in a distributed network, where "locally" means within time (or distance) independent of the size of the network; the results concern locally checkable labeling (LCL) problems, where the legality of a labeling can be checked locally.
Abstract: The purpose of this paper is a study of computation that can be done locally in a distributed network, where "locally" means within time (or distance) independent of the size of the network. Locally checkable labeling (LCL) problems are considered, where the legality of a labeling can be checked locally (e.g., coloring). The results include the following: There are nontrivial LCL problems that have local algorithms. There is a variant of the dining philosophers problem that can be solved locally. Randomization cannot make an LCL problem local; i.e., if a problem has a local randomized algorithm then it has a local deterministic algorithm. It is undecidable, in general, whether a given LCL has a local algorithm. However, it is decidable whether a given LCL has an algorithm that operates in a given time $t$. Any LCL problem that has a local algorithm has one that is order-invariant (the algorithm depends only on the order of the processor IDs).
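
To make "locally checkable" concrete, here is a minimal illustrative sketch using proper vertex coloring, the example the abstract itself mentions; the dictionary-based graph representation and the radius-1 check are assumptions made for the sketch. A labeling is legal exactly when every node's purely local check, which consults only its own label and its neighbors' labels, succeeds.

    def local_check(node, labeling, adjacency):
        # A node's check for proper coloring: its label must differ from
        # every neighbor's label. Only radius-1 information is consulted.
        return all(labeling[node] != labeling[nbr] for nbr in adjacency[node])

    def labeling_is_legal(labeling, adjacency):
        # The labeling is legal iff every node's local check passes;
        # no node needs any information about the size of the network.
        return all(local_check(v, labeling, adjacency) for v in adjacency)

    # Example: a triangle, properly colored and then improperly colored.
    adjacency = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
    print(labeling_is_legal({0: "r", 1: "g", 2: "b"}, adjacency))  # True
    print(labeling_is_legal({0: "r", 1: "r", 2: "b"}, adjacency))  # False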

145 citations


Journal ArticleDOI
Noga Alon1, Moni Naor2
TL;DR: This paper answers a question of Ben-Or and Linial by proving that for every $c < 1$ there are perfect information coin-flipping and leader-election games on n players in which no coalition of $cn$ players can influence the outcome with probability greater than some universal constant times c.
Abstract: Perfect information coin-flipping and leader-election games arise naturally in the study of fault tolerant distributed computing and have been considered in many different scenarios. This paper answers a question of Ben-Or and Linial by proving that for every $c < 1$ there are such games on n players in which no coalition of $cn$ players can influence the outcome with probability greater than some universal constant times c. (Note that this paper actually proves this statement only for all $c < \frac{1}{3}$, but since its universal constant is bigger than 3 the above is trivial for $c \geqslant \frac{1}{3}$.) This paper shows that a random protocol of a certain length has this property and gives an explicit construction as well.

47 citations


Journal ArticleDOI
Amos Fiat1, Moni Naor2
TL;DR: A constructive solution is given when the domain size m is polynomial in n, the number of elements, together with a nonconstructive proof for m no larger than exponential in ${\operatorname{poly}}(n)$.
Abstract: Given a set of n elements from the domain $\{ {1, \cdots ,m} \}$, this paper investigates how to arrange them in a table of size n, so that searching for an element in the table can be done in constant time. Yao [J. Assoc. Comput. Mach., 28 (1981), pp. 615–628] has shown that this cannot be done when the domain is sufficiently large as a function of n. This paper gives a constructive solution when the domain m is polynomial in n, the number of elements, as well as a nonconstructive proof for m no larger than exponential in ${\operatorname{poly}}(n)$. The authors improve upon a result of Yao and give better bounds on the maximum m for which implicit $O(1)$ probe search can be done. The results are achieved by showing the tight relationship between hashing and certain encoding problems called rainbows.

34 citations


Journal ArticleDOI
TL;DR: It is shown that the four-message amortized complexity of all random pairs is exactly $\log \mu$.
Abstract: X and Y are random variables. Person $P_x$ knows X, Person $P_y$ knows Y, and both know the underlying probability distribution of the random pair (X, Y). Using a predetermined protocol, they exchange messages over a binary, error-free channel in order for $P_y$ to learn X. $P_x$ may or may not learn Y. $C_m$ is the number of information bits that must be transmitted (by both persons) in the worst case if only m messages are allowed; $C_\infty$ is the corresponding number of bits when there is no restriction on the number of messages exchanged. We consider three aspects of this problem. $C_4$: It is known that one-message communication may require exponentially more bits than the minimum possible: for some random pairs, $C_1 = 2^{C_\infty - 1}$. Yet just two messages suffice to reduce communication to almost the minimum for all random pairs, whereas for some pairs $C_4 \ge (2 - \epsilon)C_\infty \ge c$; asymptotically, this is the largest possible discrepancy. Amortized complexity: The amortized complexity of (X, Y) is the limit, as k grows, of the number of bits required in the worst case for k independent repetitions of (X, Y), normalized by k. We show that the four-message amortized complexity of all random pairs is exactly $\log \mu$. Hence, when a random pair is repeated many times, no bits can be saved if $P_x$ knows Y in advance.
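
To restate the quantity in the last result as a formula (the bar notation below is introduced here only for readability and is not necessarily the paper's): writing $(X^k, Y^k)$ for k independent repetitions of the pair, the m-message amortized complexity is

$$\bar{C}_m(X,Y) \;=\; \lim_{k\to\infty} \frac{C_m(X^k, Y^k)}{k},$$

where $C_m(X^k, Y^k)$ is the worst-case number of bits needed for $P_y$ to learn all k values of X using at most m messages. In this notation, the quoted result states that $\bar{C}_4(X,Y) = \log \mu$ for every random pair.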

27 citations


Journal ArticleDOI
TL;DR: To distinguish between random generation in bounded, as opposed to expected, polynomial time, a model of Probabilistic Turing Machine (PTM) with the ability to make random choices with any (small) rational bias is necessary.
Abstract: To distinguish between random generation in bounded, as opposed to expected, polynomial time, a model of Probabilistic Turing Machine (PTM) with the ability to make random choices with any (small) rational bias is necessary. This ability is equivalent to that of being able to simulate rolling any k-sided die (where |k| is polynomial in the length of the input). We would like to minimize the amount of hardware required for a machine with this capability. This leads to the problem of efficiently simulating a family of dice with as few different types of biased coins as possible.
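
As an illustration of the quantity being optimized, here is a naive baseline, not the paper's construction: a fair k-sided die can be simulated in bounded time by flipping coins with biases 1/k, 1/(k-1), ..., 1/2, fixing one face at each step. This uses k-1 different coin types per die; the paper asks how few distinct coin types suffice for a whole family of dice. The helper names are illustrative, and the biased coin is simulated with a PRNG purely for demonstration.

    import random

    def biased_coin(p):
        # A coin that comes up heads with bias p (PRNG stand-in for a
        # physical coin of rational bias).
        return random.random() < p

    def roll_die(k):
        # Naive simulation of a fair k-sided die with k-1 coin types:
        # face i is selected by a heads on a coin of bias 1/(k-i), so each
        # face is returned with probability exactly 1/k, in at most k-1 flips.
        for i in range(k - 1):
            if biased_coin(1.0 / (k - i)):
                return i
        return k - 1

    # Example: empirical distribution of a 6-sided die.
    counts = [0] * 6
    for _ in range(60000):
        counts[roll_die(6)] += 1
    print(counts)  # each entry should be close to 10000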

18 citations


Proceedings ArticleDOI
07 Jun 1993
TL;DR: The authors present an O(n log n) randomized algorithm and an O(n log n log log n) deterministic algorithm to find the minimum rate at which data must be reserved on a shared storage system in order to provide continuous buffered play-back of a variable-rate output schedule.
Abstract: The minimum reservation rate problem arises in distributed systems for handling digital audio and video data. The problem is to find the minimum rate at which data must be reserved on a shared storage system in order to provide continuous buffered play-back of a variable-rate output schedule. The problem is equivalent to the minimum output rate problem: given input rates during various time periods, find the minimum output rate under which the buffer never overflows. The authors present an O(n log n) randomized algorithm and an O(n log n log log n) deterministic one.
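
For concreteness, here is a naive quadratic-time reference computation for the equivalent formulation, under assumed conventions not spelled out in the abstract: a fixed buffer capacity, piecewise-constant input given as (duration, rate) periods, and a constant drain rate. It is only an illustrative brute force; the paper's algorithms are far more efficient.

    def min_output_rate(periods, capacity):
        # periods: list of (duration, input_rate). The buffer starts empty,
        # fills at the given input rates, and drains at a constant rate r
        # (never below empty). Its peak occupancy equals the maximum over
        # contiguous windows of (data arriving in the window - r * window
        # length), so the smallest feasible r is the maximum over windows of
        # (window input - capacity) / window length, and 0 if that is negative.
        n = len(periods)
        best = 0.0
        for i in range(n):
            data = 0.0
            length = 0.0
            for j in range(i, n):
                duration, rate = periods[j]
                data += duration * rate
                length += duration
                best = max(best, (data - capacity) / length)
        return best

    # Example: three one-second periods with input rates 5, 1, 9 and capacity 2.
    print(min_output_rate([(1, 5), (1, 1), (1, 9)], capacity=2))  # 7.0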

8 citations


01 Jan 1993
TL;DR: Several pricing functions are suggested, based on, respectively, extracting square roots modulo a prime, the Fiat-Shamir signature scheme, and the Ong-Schnorr-Shamir (cracked) signature scheme, for controlling access to a shared resource.
Abstract: We present a computational technique for combatting junk mail in particular and controlling access to a shared resource in general. The main idea is to require a user to compute a moderately hard, but not intractable, function in order to gain access to the resource, thus preventing frivolous use. To this end we suggest several pricing functions, based on, respectively, extracting square roots modulo a prime, the Fiat-Shamir signature scheme, and the Ong-Schnorr-Shamir (cracked) signature scheme.
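
To illustrate the flavor of a pricing function, here is a minimal sketch built around the first idea mentioned, extracting square roots modulo a prime: the sender must compute a modular square root of a message-derived challenge, which costs a modular exponentiation of roughly log p multiplications, while the recipient verifies with a hash and a single modular squaring. The gap is only a factor of about log p, which fits the "moderately hard, but not intractable" description. The prime choice, the hash-based challenge derivation, and the function names are illustrative assumptions, not the paper's concrete parameters.

    import hashlib

    # A prime p with p ≡ 3 (mod 4); 2**127 - 1 is a Mersenne prime of this form,
    # so y = x^((p+1)/4) mod p is a square root of x (or of -x) for any x != 0.
    P = 2**127 - 1

    def challenge(message):
        # Derive a challenge x in [1, P-1] from the message.
        h = hashlib.sha256(message.encode()).digest()
        return int.from_bytes(h, "big") % P or 1

    def pay(message):
        # Sender's work: one modular exponentiation (~log2 P multiplications).
        # Since P ≡ 3 (mod 4), the result squares to x if x is a quadratic
        # residue, and to -x otherwise.
        x = challenge(message)
        return pow(x, (P + 1) // 4, P)

    def verify(message, receipt):
        # Recipient's work: re-derive the challenge, then one modular squaring.
        x = challenge(message)
        return pow(receipt, 2, P) in (x, P - x)

    msg = "hello, is this inbox open?"
    print(verify(msg, pay(msg)))  # True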