Author

David Zuckerman

Bio: David Zuckerman is an academic researcher from the University of Texas at Austin. The author has contributed to research in the topics of randomness and randomized algorithms. The author has an h-index of 53 and has co-authored 141 publications receiving 8,843 citations. Previous affiliations of David Zuckerman include the Massachusetts Institute of Technology and Northeastern University.


Papers
Journal ArticleDOI
TL;DR: In this article, it was shown that any randomized algorithm that runs in space S and time T and uses poly(S) random bits can be simulated using only O(S) random bits in space S and time T + poly(S).

650 citations

Journal ArticleDOI
TL;DR: In this paper, new randomness extractors are constructed which require only log n + O(1) additional truly random bits for sources with constant entropy rate, along with dispersers which require an arbitrarily small constant times log n additional random bits; both output a 1-α fraction of the randomness, for any α > 0.
Abstract: A randomness extractor is an algorithm which extracts randomness from a low-quality random source, using some additional truly random bits. We construct new extractors which require only log n + O(1) additional random bits for sources with constant entropy rate. We further construct dispersers, which are similar to one-sided extractors, which use an arbitrarily small constant times log n additional random bits for sources with constant entropy rate. Our extractors and dispersers output a 1-α fraction of the randomness, for any α > 0. We use our dispersers to derandomize results of Håstad [23] and Feige-Kilian [19] and show that for all ε > 0, approximating MAX CLIQUE and CHROMATIC NUMBER to within n^{1-ε} are NP-hard. We also derandomize the results of Khot [29] and show that for some γ > 0, no quasi-polynomial time algorithm approximates MAX CLIQUE or CHROMATIC NUMBER to within n / 2^{(log n)^{1-γ}}, unless NP = P. Our constructions rely on recent results in additive number theory and extractors by Bourgain-Katz-Tao [11], Barak-Impagliazzo-Wigderson [5], Barak-Kindler-Shaltiel-Sudakov-Wigderson [6], and Raz [36]. We also simplify and slightly strengthen key theorems in the second and third of these papers, and strengthen a related theorem by Bourgain [10].
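To make the notion concrete, here is a minimal seeded-extractor sketch based on 2-universal (Toeplitz-style) hashing and the leftover hash lemma. It is not the construction of this paper, which relies on additive combinatorics and uses a seed of only log n + O(1) bits; the hashing seed below is much longer, and all names and parameter choices are illustrative.

```python
# Toy seeded extractor via 2-universal (Toeplitz-style) hashing -- an
# illustrative sketch only, NOT the construction from this paper.
import random

def toeplitz_extract(source_bits, seed_bits, m):
    """Extract m output bits from a weak n-bit source using a seed of
    n + m - 1 truly random bits. Row i of the hash matrix is the window
    seed_bits[i : i + n]; this constant-diagonal structure gives a
    2-universal family, so by the leftover hash lemma the output is close
    to uniform when the source has noticeably more than m bits of
    min-entropy."""
    n = len(source_bits)
    assert len(seed_bits) == n + m - 1
    out = []
    for i in range(m):
        row = seed_bits[i:i + n]
        out.append(sum(r & s for r, s in zip(row, source_bits)) % 2)
    return out

# Example: a "low-quality" source -- 256 bits, each heavily biased toward 0.
n, m = 256, 16
weak = [1 if random.random() < 0.1 else 0 for _ in range(n)]
seed = [random.randint(0, 1) for _ in range(n + m - 1)]
print(toeplitz_extract(weak, seed, m))
```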

597 citations

01 Jan 1995
TL;DR: An erasure-resilient coding scheme is described that is based on a version of Reed-Solomon codes and has the property that r = m; it is customized to give the first real-time implementations of Priority Encoding Transmission (PET) for medium-quality video transmission on Sun SPARCstation 20 workstations.
Abstract: An (m, n, b, r)-erasure-resilient coding scheme consists of an encoding algorithm and a decoding algorithm with the following properties. The encoding algorithm produces a set of n packets each containing b bits from a message of m packets containing b bits. The decoding algorithm is able to recover the message from any set of r packets. Erasure-resilient codes have been used to protect real-time traffic sent through packet-based networks against packet losses. In this paper we describe an erasure-resilient coding scheme that is based on a version of Reed-Solomon codes and which has the property that r = m. Both the encoding and decoding algorithms run in quadratic time and have been customized to give the first real-time implementations of Priority Encoding Transmission (PET) [2], [1] for medium-quality video transmission on Sun SPARCstation 20 workstations.
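A minimal sketch of the polynomial view of Reed-Solomon erasure coding that yields the r = m property: encode m message symbols as a polynomial of degree less than m and recover them from any m of the n evaluations by Lagrange interpolation. The field, parameters, and names are illustrative; the paper's packet packing and real-time PET customizations are not shown.

```python
# Toy Reed-Solomon erasure code over a prime field: any m of the n encoded
# symbols suffice to recover the m message symbols (the r = m property).
P = 2**31 - 1  # a Mersenne prime used as the field modulus

def encode(message, n):
    """Evaluate the degree-<m polynomial with coefficients `message`
    at the points x = 0, 1, ..., n-1."""
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(message)) % P)
            for x in range(n)]

def decode(received, m):
    """Recover the m coefficients from any m received (x, y) pairs
    via Lagrange interpolation."""
    pts = received[:m]
    coeffs = [0] * m
    for j, (xj, yj) in enumerate(pts):
        basis = [1]   # running product of (x - xk), lowest degree first
        denom = 1
        for k, (xk, _) in enumerate(pts):
            if k == j:
                continue
            denom = denom * (xj - xk) % P
            new = [0] * (len(basis) + 1)
            for d, c in enumerate(basis):
                new[d + 1] = (new[d + 1] + c) % P
                new[d] = (new[d] - c * xk) % P
            basis = new
        scale = yj * pow(denom, P - 2, P) % P  # modular inverse via Fermat
        for d, c in enumerate(basis):
            coeffs[d] = (coeffs[d] + c * scale) % P
    return coeffs

msg = [314, 159, 265, 358]                  # m = 4 message symbols
packets = encode(msg, 7)                    # n = 7 encoded packets
survivors = [packets[1], packets[3], packets[4], packets[6]]  # any 4 survive
assert decode(survivors, 4) == msg
```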

516 citations

Proceedings Article
01 Jan 1996
TL;DR: Of independent interest is the main technical tool: a procedure which extracts randomness from a defective random source using a small additional number of truly random bits.
Abstract: We show that any randomized algorithm that runs in space S and time T and uses poly(S) random bits can be simulated using only O(S) random bits in space S and time T + poly(S). A deterministic simulation in space S follows. Of independent interest is our main technical tool: a procedure which extracts randomness from a defective random source using a small additional number of truly random bits.
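As a much simpler illustration of extracting randomness from a defective source, here is the classical von Neumann trick for an independent coin flip with unknown bias. This is not the procedure of this paper (it handles only this restricted kind of defect and uses no auxiliary truly random bits); it is included only to make the notion concrete.

```python
# Classical von Neumann trick for a "defective" source: independent coin
# flips with a fixed but unknown bias. Shown purely as an illustration; the
# paper's extractor handles far more general sources.
import random

def von_neumann_extract(biased_bits):
    """Pair up the bits; emit 0 for the pair (0, 1), 1 for (1, 0), and skip
    (0, 0) and (1, 1). The outputs are unbiased and independent whenever the
    input flips are independent with a fixed bias."""
    out = []
    for b0, b1 in zip(biased_bits[::2], biased_bits[1::2]):
        if b0 != b1:
            out.append(b0)
    return out

defective = [1 if random.random() < 0.8 else 0 for _ in range(1000)]
uniform_bits = von_neumann_extract(defective)
print(len(uniform_bits), sum(uniform_bits) / max(1, len(uniform_bits)))
```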

513 citations

Proceedings ArticleDOI
07 Jun 1993
TL;DR: The authors describe a simple universal strategy S^univ with the property that, for any algorithm A, T(A, S^univ) = O(l_A log(l_A)), which is the best performance that can be achieved, up to a constant factor, by any universal strategy.
Abstract: Let A be a Las Vegas algorithm, i.e., A is a randomized algorithm that always produces the correct answer when it stops but whose running time is a random variable. The authors consider the problem of minimizing the expected time required to obtain an answer from A using strategies which simulate A as follows: run A for a fixed amount of time t_1, then run A independently for a fixed amount of time t_2, etc. The simulation stops if A completes its execution during any of the runs. Let S = (t_1, t_2, ...) be a strategy, and let l_A = inf_S T(A, S), where T(A, S) is the expected value of the running time of the simulation of A under strategy S. The authors describe a simple universal strategy S^univ, with the property that, for any algorithm A, T(A, S^univ) = O(l_A log(l_A)). Furthermore, they show that this is the best performance that can be achieved, up to a constant factor, by any universal strategy.
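A sketch of this kind of universal restart schedule, using the well-known 1, 1, 2, 1, 1, 2, 4, ... doubling sequence; the step-limited interface `las_vegas_step_limited` is a hypothetical stand-in for "run A with a time budget".

```python
# Sketch of a universal restart strategy: run the Las Vegas algorithm with
# budgets 1, 1, 2, 1, 1, 2, 4, 1, 1, 2, 1, 1, 2, 4, 8, ...
def universal_schedule():
    """Yield the budgets t_1, t_2, ... defined by
    t_i = 2^(k-1)             if i = 2^k - 1,
    t_i = t_{i - 2^(k-1) + 1} if 2^(k-1) <= i < 2^k - 1."""
    i, memo = 1, {}
    while True:
        k = i.bit_length()          # smallest k with i <= 2^k - 1
        if i == 2**k - 1:
            memo[i] = 2**(k - 1)
        else:
            memo[i] = memo[i - 2**(k - 1) + 1]
        yield memo[i]
        i += 1

def run_universal(las_vegas_step_limited):
    """las_vegas_step_limited(budget) is a hypothetical callback: it runs the
    algorithm for at most `budget` steps and returns the answer, or None if
    the budget ran out before the algorithm stopped on its own."""
    for budget in universal_schedule():
        answer = las_vegas_step_limited(budget)
        if answer is not None:
            return answer

gen = universal_schedule()
print([next(gen) for _ in range(15)])   # [1, 1, 2, 1, 1, 2, 4, 1, 1, 2, 1, 1, 2, 4, 8]
```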

460 citations


Cited by
Book ChapterDOI
Cynthia Dwork
10 Jul 2006
TL;DR: In this article, the authors give a general impossibility result showing that a formalization of Dalenius' goal along the lines of semantic security cannot be achieved, and suggest a new measure, differential privacy, which, intuitively, captures the increased risk to one's privacy incurred by participating in a database.
Abstract: In 1977 Dalenius articulated a desideratum for statistical databases: nothing about an individual should be learnable from the database that cannot be learned without access to the database. We give a general impossibility result showing that a formalization of Dalenius' goal along the lines of semantic security cannot be achieved. Contrary to intuition, a variant of the result threatens the privacy even of someone not in the database. This state of affairs suggests a new measure, differential privacy, which, intuitively, captures the increased risk to one's privacy incurred by participating in a database. The techniques developed in a sequence of papers [8, 13, 3], culminating in those described in [12], can achieve any desired level of privacy under this measure. In many cases, extremely accurate information about the database can be provided while simultaneously ensuring very high levels of privacy.
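Differential privacy, the measure proposed here, is commonly illustrated with the Laplace mechanism: add noise scaled to a query's sensitivity divided by ε. The sketch below is that standard textbook device, not a construction from this paper; the database and query are made up.

```python
# Standard Laplace-mechanism illustration of differential privacy (a textbook
# device, not a construction from this paper).
import math
import random

def laplace_noise(scale):
    """Sample Laplace(0, scale) by inverse-CDF sampling."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

def private_count(database, predicate, epsilon):
    """Answer 'how many rows satisfy predicate?' with epsilon-differential
    privacy: the count has sensitivity 1 (one person changes it by at most 1),
    so Laplace noise of scale 1/epsilon suffices."""
    true_count = sum(1 for row in database if predicate(row))
    return true_count + laplace_noise(1.0 / epsilon)

# Hypothetical example: ages of people in a small database.
ages = [34, 29, 41, 58, 23, 47, 61, 35]
print(private_count(ages, lambda a: a >= 40, epsilon=0.5))
```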

4,134 citations

Proceedings Article
16 Nov 2002
TL;DR: LT codes are introduced, the first rateless erasure codes that are very efficient as the data length grows.
Abstract: We introduce LT codes, the first rateless erasure codes that are very efficient as the data length grows.
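A minimal sketch of LT-style rateless encoding: each output symbol is the XOR of a randomly chosen set of input blocks, with the set size drawn from a soliton-style degree distribution. The ideal soliton used below is a simplification of the degree distributions analyzed in the paper, the peeling decoder is omitted, and the names are illustrative.

```python
# Minimal sketch of LT-style (rateless) encoding: each encoded symbol is the
# XOR of d randomly chosen input blocks, with d drawn from a soliton-style
# degree distribution. Decoding (belief-propagation "peeling") is omitted.
import random

def ideal_soliton(k):
    """p(1) = 1/k, p(d) = 1/(d(d-1)) for d = 2..k (index i = degree i+1)."""
    return [1.0 / k] + [1.0 / (d * (d - 1)) for d in range(2, k + 1)]

def lt_encode_symbol(blocks, rng=random):
    k = len(blocks)
    degree = rng.choices(range(1, k + 1), weights=ideal_soliton(k))[0]
    neighbors = rng.sample(range(k), degree)
    value = 0
    for i in neighbors:
        value ^= blocks[i]
    return neighbors, value   # the receiver needs the neighbor set (or its seed)

# Example: k = 8 input blocks; stream out as many encoded symbols as needed.
blocks = [random.getrandbits(32) for _ in range(8)]
stream = [lt_encode_symbol(blocks) for _ in range(12)]
print(stream[0])
```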

2,970 citations

MonographDOI
20 Apr 2009
TL;DR: This beginning graduate textbook describes both recent achievements and classical results of computational complexity theory and can be used as a reference for self-study for anyone interested in complexity.
Abstract: This beginning graduate textbook describes both recent achievements and classical results of computational complexity theory. Requiring essentially no background apart from mathematical maturity, the book can be used as a reference for self-study for anyone interested in complexity, including physicists, mathematicians, and other scientists, as well as a textbook for a variety of courses and seminars. More than 300 exercises are included with a selected hint set.

2,965 citations

Proceedings ArticleDOI
Oded Regev
22 May 2005
TL;DR: A public-key cryptosystem is constructed whose hardness is based on the worst-case quantum hardness of SVP and SIVP; an efficient solution to the underlying learning problem would imply a quantum algorithm for SVP and SIVP, and a main open question is whether the reduction can be made classical.
Abstract: Our main result is a reduction from worst-case lattice problems such as SVP and SIVP to a certain learning problem. This learning problem is a natural extension of the 'learning from parity with error' problem to higher moduli. It can also be viewed as the problem of decoding from a random linear code. This, we believe, gives a strong indication that these problems are hard. Our reduction, however, is quantum. Hence, an efficient solution to the learning problem implies a quantum algorithm for SVP and SIVP. A main open question is whether this reduction can be made classical. Using the main result, we obtain a public-key cryptosystem whose hardness is based on the worst-case quantum hardness of SVP and SIVP. Previous lattice-based public-key cryptosystems such as the one by Ajtai and Dwork were only based on unique-SVP, a special case of SVP. The new cryptosystem is much more efficient than previous cryptosystems: the public key is of size O(n^2) and encrypting a message increases its size by O(n) (in previous cryptosystems these values are O(n^4) and O(n^2), respectively). In fact, under the assumption that all parties share a random bit string of length O(n^2), the size of the public key can be reduced to O(n).
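A toy sketch of the LWE-style structure the abstract describes: the public key is a batch of noisy random linear equations in a secret vector, and a bit is encrypted by summing a random subset of those equations and adding ⌊q/2⌋. The parameters below are far too small to be secure and the noise distribution is a crude stand-in; this shows the shape of such a scheme, not the paper's exact construction.

```python
# Toy LWE-style public-key scheme (illustrative only; tiny, insecure parameters).
import random

n, m, q = 16, 200, 4093          # toy dimensions and a small prime modulus

def keygen():
    s = [random.randrange(q) for _ in range(n)]                   # secret
    A = [[random.randrange(q) for _ in range(n)] for _ in range(m)]
    e = [random.randint(-2, 2) for _ in range(m)]                 # small noise
    b = [(sum(A[i][j] * s[j] for j in range(n)) + e[i]) % q for i in range(m)]
    return s, (A, b)             # public key: noisy equations (A, b = As + e)

def encrypt(pk, bit):
    A, b = pk
    subset = [i for i in range(m) if random.random() < 0.5]
    a = [sum(A[i][j] for i in subset) % q for j in range(n)]
    c = (sum(b[i] for i in subset) + bit * (q // 2)) % q
    return a, c

def decrypt(s, ct):
    a, c = ct
    v = (c - sum(a[j] * s[j] for j in range(n))) % q
    # v is (small noise) for bit 0 and roughly q/2 plus noise for bit 1.
    return 0 if min(v, q - v) < q // 4 else 1

s, pk = keygen()
for bit in (0, 1, 1, 0):
    assert decrypt(s, encrypt(pk, bit)) == bit
```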

2,620 citations

Book
01 Dec 2008
TL;DR: Markov Chains and Mixing Times, as mentioned in this paper, is an introduction to the modern approach to the theory of Markov chains, whose main goal is to determine the rate of convergence of a Markov chain to the stationary distribution as a function of the size and geometry of the state space.
Abstract: This book is an introduction to the modern approach to the theory of Markov chains. The main goal of this approach is to determine the rate of convergence of a Markov chain to the stationary distribution as a function of the size and geometry of the state space. The authors develop the key tools for estimating convergence times, including coupling, strong stationary times, and spectral methods. Whenever possible, probabilistic methods are emphasized. The book includes many examples and provides brief introductions to some central models of statistical mechanics. Also provided are accounts of random walks on networks, including hitting and cover times, and analyses of several methods of shuffling cards. As a prerequisite, the authors assume a modest understanding of probability theory and linear algebra at an undergraduate level. ""Markov Chains and Mixing Times"" is meant to bring the excitement of this active area of research to a wide audience.
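A small numerical illustration of the book's central quantity, the worst-case total variation distance d(t) between the chain run for t steps and its stationary distribution. The lazy random walk on a cycle used here is a generic example, not one drawn from the book.

```python
# Total variation distance to stationarity, d(t) = max_x || P^t(x, .) - pi ||_TV,
# for a lazy random walk on a 16-cycle (stationary distribution: uniform).
import numpy as np

N = 16
P = np.zeros((N, N))
for i in range(N):
    P[i, i] = 0.5                       # laziness: stay put with probability 1/2
    P[i, (i + 1) % N] = 0.25            # step right
    P[i, (i - 1) % N] = 0.25            # step left

pi = np.full(N, 1.0 / N)                # stationary distribution

dist = np.eye(N)                        # row x = distribution after t steps from x
for t in range(1, 101):
    dist = dist @ P
    d_t = 0.5 * np.abs(dist - pi).sum(axis=1).max()   # worst-case TV distance
    if t in (1, 10, 25, 50, 100):
        print(f"t = {t:3d}   d(t) = {d_t:.4f}")
```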

2,573 citations