
Showing papers by "Moni Naor published in 2009"


Proceedings ArticleDOI
31 May 2009
TL;DR: Considers private data analysis in the setting in which a trusted and trustworthy curator releases to the public a "sanitization" of the data set that simultaneously protects the privacy of the individual contributors of data and offers utility to the data analyst.
Abstract: We consider private data analysis in the setting in which a trusted and trustworthy curator, having obtained a large data set containing private information, releases to the public a "sanitization" of the data set that simultaneously protects the privacy of the individual contributors of data and offers utility to the data analyst. The sanitization may be in the form of an arbitrary data structure, accompanied by a computational procedure for determining approximate answers to queries on the original data set, or it may be a "synthetic data set" consisting of data items drawn from the same universe as items in the original data set; queries are carried out as if the synthetic data set were the actual input. In either case the process is non-interactive; once the sanitization has been released the original data and the curator play no further role. For the task of sanitizing with a synthetic dataset output, we map the boundary between computational feasibility and infeasibility with respect to a variety of utility measures. For the (potentially easier) task of sanitizing with unrestricted output format, we show a tight qualitative and quantitative connection between hardness of sanitizing and the existence of traitor tracing schemes.
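
As background for the interface this abstract describes (the paper's results concern the hardness of such sanitizers, which no short example can capture), here is a minimal sketch of a non-interactive curator in Python: it releases one Laplace-noised count per query and then plays no further role. The query set, the privacy parameter epsilon, and the Laplace mechanism are standard differential-privacy ingredients assumed for illustration, not constructions from the paper.

```python
# Hedged sketch: standard Laplace-mechanism sanitization, not the paper's schemes.
import math
import random

def laplace(scale: float) -> float:
    """Sample Laplace noise via the inverse-CDF method."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def sanitize(data, predicates, epsilon):
    """Release one noisy count per predicate; afterwards the raw data is unused."""
    # Each individual changes each count by at most 1 (sensitivity 1 per query);
    # splitting the privacy budget over k queries gives noise scale k / epsilon.
    scale = len(predicates) / epsilon
    return [sum(p(x) for x in data) + laplace(scale) for p in predicates]

data = [random.randint(0, 99) for _ in range(1000)]    # ages of 1000 individuals
queries = [lambda x: x >= 65, lambda x: x < 18]        # two counting queries
released = sanitize(data, queries, epsilon=0.5)
print(released)   # the analyst works with these answers alone
```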

448 citations


Book ChapterDOI
19 Aug 2009
TL;DR: Presents a generic construction of a public-key encryption scheme that is resilient to key leakage from any universal hash proof system, and proves that variants of the Cramer-Shoup cryptosystem are CCA1-secure with any leakage of L/4 bits and CCA2-secure with any leakage of L/6 bits.
Abstract: Most of the work in the analysis of cryptographic schemes is concentrated in abstract adversarial models that do not capture side-channel attacks. Such attacks exploit various forms of unintended information leakage, which is inherent to almost all physical implementations. Inspired by recent side-channel attacks, especially the "cold boot attacks", Akavia, Goldwasser and Vaikuntanathan (TCC '09) formalized a realistic framework for modeling the security of encryption schemes against a wide class of side-channel attacks in which adversarially chosen functions of the secret key are leaked. In the setting of public-key encryption, Akavia et al. showed that Regev's lattice-based scheme (STOC '05) is resilient to any leakage of L / polylog(L) bits, where L is the length of the secret key. In this paper we revisit the above-mentioned framework and our main results are as follows: We present a generic construction of a public-key encryption scheme that is resilient to key leakage from any universal hash proof system. The construction does not rely on additional computational assumptions, and the resulting scheme is as efficient as the underlying proof system. Existing constructions of such proof systems imply that our construction can be based on a variety of number-theoretic assumptions, including the decisional Diffie-Hellman assumption (and its progressively weaker d-Linear variants), the quadratic residuosity assumption, and Paillier's composite residuosity assumption. We construct a new hash proof system based on the decisional Diffie-Hellman assumption (and its d-Linear variants), and show that the resulting scheme is resilient to any leakage of L(1 − o(1)) bits. In addition, we prove that the recent scheme of Boneh et al. (CRYPTO '08), constructed to be a "circular-secure" encryption scheme, is resilient to any leakage of L(1 − o(1)) bits. These two proposed schemes complement each other in terms of efficiency. We extend the framework of key leakage to the setting of chosen-ciphertext attacks. On the theoretical side, we prove that the Naor-Yung paradigm is applicable in this setting as well, and obtain as a corollary encryption schemes that are CCA2-secure with any leakage of L(1 − o(1)) bits. On the practical side, we prove that variants of the Cramer-Shoup cryptosystem (along the lines of our generic construction) are CCA1-secure with any leakage of L/4 bits, and CCA2-secure with any leakage of L/6 bits.
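
A toy of the scheme of Boneh et al. mentioned above (the "circular-secure" construction shown resilient to leakage of L(1 − o(1)) bits): the secret key is a long vector x, while the public key pins down only the single relation h = Π g_i^{x_i}, which is why many bits of x can leak before decryption capability is determined. The tiny group parameters below are placeholders for illustration only, not a secure instantiation.

```python
# Hedged sketch of a BHHO-style scheme; toy parameters, illustration only.
import random

P = 1019   # safe prime, P = 2*Q + 1
Q = 509    # order of the subgroup of squares mod P
G = 4      # generator of that subgroup

def keygen(n=8):
    x = [random.randrange(Q) for _ in range(n)]                # long secret vector
    gs = [pow(G, random.randrange(1, Q), P) for _ in range(n)]
    h = 1
    for gi, xi in zip(gs, x):
        h = h * pow(gi, xi, P) % P                             # h = prod(g_i^x_i)
    return (gs, h), x

def encrypt(pk, m):
    gs, h = pk
    r = random.randrange(1, Q)
    return [pow(gi, r, P) for gi in gs], m * pow(h, r, P) % P

def decrypt(sk, ct):
    cs, c = ct
    s = 1
    for ci, xi in zip(cs, sk):
        s = s * pow(ci, xi, P) % P                             # recomputes h^r
    return c * pow(s, P - 2, P) % P                            # divide via Fermat inverse

pk, sk = keygen()
m = pow(G, 123, P)            # messages live in the subgroup
assert decrypt(sk, encrypt(pk, m)) == m
```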

425 citations


Posted Content
TL;DR: Presents a generic construction of a public-key encryption scheme that is resilient to key leakage from any hash proof system, together with a new hash proof system based on the decisional Diffie-Hellman assumption and its d-Linear variants that yields resilience to any leakage of L(1 − o(1)) bits.
Abstract: Most of the work in the analysis of cryptographic schemes is concentrated in abstract adversarial models that do not capture side-channel attacks. Such attacks exploit various forms of unintended information leakage, which is inherent to almost all physical implementations. Inspired by recent side-channel attacks, especially the “cold boot attacks” of Halderman et al. (USENIX Security ’08), Akavia, Goldwasser and Vaikuntanathan (TCC ’09) formalized a realistic framework for modeling the security of encryption schemes against a wide class of side-channel attacks in which adversarially chosen functions of the secret key are leaked. In the setting of public-key encryption, Akavia et al. showed that Regev’s lattice-based scheme (STOC ’05) is resilient to any leakage of L/polylog(L) bits, where L is the length of the secret key. In this paper we revisit the above-mentioned framework and our main results are as follows: • We present a generic construction of a public-key encryption scheme that is resilient to key leakage from any hash proof system. The construction does not rely on additional computational assumptions, and the resulting scheme is as efficient as the underlying hash proof system. Existing constructions of hash proof systems imply that our construction can be based on a variety of number-theoretic assumptions, including the decisional Diffie-Hellman assumption (and its progressively weaker d-Linear variants), the quadratic residuosity assumption, and Paillier’s composite residuosity assumption. • We construct a new hash proof system based on the decisional Diffie-Hellman assumption (and its d-Linear variants), and show that the resulting scheme is resilient to any leakage of L(1 − o(1)) bits. In addition, we prove that the recent scheme of Boneh et al. (CRYPTO ’08), constructed to be a “circular-secure” encryption scheme, fits our generic approach and is also resilient to any leakage of L(1 − o(1)) bits. • We extend the framework of key leakage to the setting of chosen-ciphertext attacks. On the theoretical side, we prove that the Naor-Yung paradigm is applicable in this setting as well, and obtain as a corollary encryption schemes that are CCA2-secure with any leakage of L(1 − o(1)) bits. On the practical side, we prove that variants of the Cramer-Shoup cryptosystem (along the lines of our generic construction) are CCA1-secure with any leakage of L/4 bits, and CCA2-secure with any leakage of L/6 bits. A preliminary version of this work appeared in Advances in Cryptology – CRYPTO ’09, pages 18–35, 2009. This is the full version that will appear in SIAM Journal on Computing.

157 citations


Book ChapterDOI
02 Dec 2009
TL;DR: Proposes hedged public-key encryption: schemes that achieve IND-CPA security when the randomness used is of high quality but, when it is not, rather than breaking completely, achieve a weaker but still useful notion of security called IND-CDA.
Abstract: Public-key encryption schemes rely for their IND-CPA security on per-message fresh randomness. In practice, randomness may be of poor quality for a variety of reasons, leading to failure of the schemes. Expecting the systems to improve is unrealistic. What we show in this paper is that we can, instead, improve the cryptography to offset the possible lack of good randomness. We provide public-key encryption schemes that achieve IND-CPA security when the randomness they use is of high quality, but, when the latter is not the case, rather than breaking completely, they achieve a weaker but still useful notion of security that we call IND-CDA. This hedged public-key encryption provides the best possible security guarantees in the face of bad randomness. We provide simple RO-based ways to make in-practice IND-CPA schemes hedge secure with minimal software changes. We also provide non-RO model schemes relying on lossy trapdoor functions (LTDFs) and techniques from deterministic encryption. They achieve adaptive security by establishing and exploiting the anonymity of LTDFs, which we believe is of independent interest.
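
A minimal sketch of the RO-based hedging idea from the abstract: instead of feeding raw system randomness r into the scheme, derive the coins as a hash of (pk, m, r), so that with good r one gets ordinary randomized encryption and with bad r security degrades gracefully rather than collapsing. The toy ElGamal parameters are assumptions for illustration, not the paper's concrete schemes.

```python
# Hedged sketch: hash-the-randomness idea over toy ElGamal; not a real implementation.
import hashlib
import random

P, Q, G = 1019, 509, 4        # tiny toy subgroup, illustration only

def hedged_encrypt(pk, m, r_bytes):
    # Derive the ElGamal nonce from (pk, m, r) instead of using r directly.
    seed = hashlib.sha256(b"%d|%d|%s" % (pk, m, r_bytes)).digest()
    k = 1 + int.from_bytes(seed, "big") % (Q - 1)
    return pow(G, k, P), m * pow(pk, k, P) % P

def decrypt(x, ct):
    c1, c2 = ct
    return c2 * pow(pow(c1, x, P), P - 2, P) % P

x = random.randrange(1, Q)    # secret key
pk = pow(G, x, P)             # public key
m = pow(G, 77, P)
ct = hedged_encrypt(pk, m, b"possibly-low-entropy-randomness")
assert decrypt(x, ct) == m
```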

111 citations


Journal ArticleDOI
22 May 2009
TL;DR: Uses pseudorandom generators for space-bounded computation to construct families of k-wise almost independent permutations with description length optimal up to a constant factor: if the distance from uniform for any k-tuple is to be at most δ, then a permutation in the family is described by O(kn + log(1/δ)) bits.
Abstract: Constructions of k-wise almost independent permutations have been receiving a growing amount of attention in recent years. However, unlike the case of k-wise independent functions, the size of previously constructed families of such permutations is far from optimal. This paper gives a new method for reducing the size of families given by previous constructions. Our method relies on pseudorandom generators for space-bounded computations. In fact, all we need is a generator that produces “pseudorandom walks” on undirected graphs with a consistent labelling. One such generator is implied by Reingold’s log-space algorithm for undirected connectivity (Reingold, STOC 2005; Reingold et al., STOC 2006). We obtain families of k-wise almost independent permutations with an optimal description length, up to a constant factor. More precisely, if the distance from uniform for any k-tuple should be at most δ, then the size of the description of a permutation in the family is $O(kn+\log \frac{1}{\delta})$.
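
For orientation, classical background rather than the paper's construction: for k = 2, the affine family x ↦ ax + b over Z_p (a ≠ 0) is an exactly pairwise independent permutation family whose description length, about 2 log p = kn bits, matches the optimal form above with δ = 0. The check below verifies the exact pairwise independence by brute force.

```python
# Hedged sketch: the classical affine family for k = 2, not the paper's method.
from collections import Counter

p = 7
family = [(a, b) for a in range(1, p) for b in range(p)]   # all ~2*log(p)-bit descriptions

def perm(a, b, x):
    return (a * x + b) % p

# For the fixed distinct inputs (0, 1), the image pair should be uniform over
# all ordered pairs of distinct values (exact pairwise independence).
counts = Counter((perm(a, b, 0), perm(a, b, 1)) for a, b in family)
assert len(counts) == p * (p - 1)       # every distinct pair occurs...
assert set(counts.values()) == {1}      # ...equally often (here: exactly once)
```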

103 citations


Book ChapterDOI
20 Feb 2009
TL;DR: Establishes the optimal trade-off between the round complexity and the bias of two-party coin-flipping protocols: an r-round protocol with bias O(1/r), matching Cleve's lower bound.
Abstract: We address one of the foundational problems in cryptography: the bias of coin-flipping protocols. Coin-flipping protocols allow mutually distrustful parties to generate a common unbiased random bit, guaranteeing that even if one of the parties is malicious, it cannot significantly bias the output of the honest party. A classical result by Cleve [STOC '86] showed that for any two-party r-round coin-flipping protocol there exists an efficient adversary that can bias the output of the honest party by Ω(1/r). However, the best previously known protocol only guarantees O(1/√r) bias, and the question of whether Cleve's bound is tight has remained open for more than twenty years. In this paper we establish the optimal trade-off between the round complexity and the bias of two-party coin-flipping protocols. Under standard assumptions (the existence of oblivious transfer), we show that Cleve's lower bound is tight: we construct an r-round protocol with bias O(1/r).
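
To see why bias is the central difficulty, here is the classical one-round commit-and-reveal coin flip, not the paper's protocol: a malicious committer who sees b before deciding whether to open can abort whenever it dislikes the outcome, biasing the honest party's output by a constant. Driving the bias down to O(1/r) over r rounds is exactly what the paper achieves.

```python
# Hedged sketch: the classical Blum-style coin flip, shown only to illustrate bias.
import hashlib
import os

def commit(bit: int):
    nonce = os.urandom(16)
    return hashlib.sha256(bytes([bit]) + nonce).hexdigest(), nonce

def verify(com, bit, nonce):
    return com == hashlib.sha256(bytes([bit]) + nonce).hexdigest()

a = os.urandom(1)[0] & 1        # A's bit
com, nonce = commit(a)          # A -> B: commitment
b = os.urandom(1)[0] & 1        # B -> A: B's bit in the clear
assert verify(com, a, nonce)    # A opens; B checks the opening
print("coin =", a ^ b)
# A malicious A opens only when a ^ b suits it and aborts otherwise; B must fall
# back to some default, handing A a constant bias; hence Cleve's lower bound.
```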

81 citations


Journal ArticleDOI
TL;DR: In this paper, the authors studied the problem of sublinear authentication, where the user wants to encode and store the file in a way that allows him to verify that it has not been corrupted, but without reading the entire file.
Abstract: We consider the problem of storing a large file on a remote and unreliable server. To verify that the file has not been corrupted, a user could store a small private (randomized) “fingerprint” on his own computer. This is the setting for the well-studied authentication problem in cryptography, and the required fingerprint size is well understood. We study the problem of sublinear authentication: suppose the user would like to encode and store the file in a way that allows him to verify that it has not been corrupted, but without reading the entire file. If the user only wants to read q bits of the file, how large does the size s of the private fingerprint need to be? We define this problem formally, and show a tight lower bound on the relationship between s and q when the adversary is not computationally bounded, namely: s × q = Ω(n), where n is the file size. This is an easier case of the online memory checking problem, introduced by Blum et al. [1991], and hence the same (tight) lower bound applies also to that problem. It was previously shown that, when the adversary is computationally bounded, under the assumption that one-way functions exist, it is possible to construct much better online memory checkers. The same is also true for sublinear authentication schemes. We show that the existence of one-way functions is also a necessary condition: even slightly breaking the s × q = Ω(n) lower bound in a computational setting implies the existence of one-way functions.
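
A sketch of the computational escape hatch the abstract refers to: given a collision-resistant hash (whose existence implies one-way functions), a Merkle tree yields an online checker whose local state is a single hash and whose reads touch O(log n) remote words, circumventing the s × q = Ω(n) bound. Minimal, unoptimized, and assuming the number of blocks is a power of two.

```python
# Hedged sketch: textbook Merkle-tree verification, not the paper's constructions.
import hashlib

def H(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def build(blocks):                         # levels of the tree, leaves first
    level = [H(b) for b in blocks]
    levels = [level]
    while len(level) > 1:
        level = [H(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        levels.append(level)
    return levels

def prove(levels, i):                      # sibling hashes along the path to the root
    path = []
    for level in levels[:-1]:
        path.append(level[i ^ 1])
        i //= 2
    return path

def verify(root, block, i, path):
    h = H(block)
    for sib in path:
        h = H(h + sib) if i % 2 == 0 else H(sib + h)
        i //= 2
    return h == root

blocks = [b"block%d" % j for j in range(8)]
levels = build(blocks)
root = levels[-1][0]                       # the user's entire local fingerprint
assert verify(root, blocks[5], 5, prove(levels, 5))
assert not verify(root, b"tampered", 5, prove(levels, 5))
```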

81 citations


Book ChapterDOI
06 Jul 2009
TL;DR: The theoretical analysis and experimental results indicate that the scheme is highly efficient, and provides a practical alternative to the only other known approach for constructing dynamic dictionaries with such worst case guarantees, due to Dietzfelbinger and Meyer auf der Heide.
Abstract: Cuckoo hashing is a highly practical dynamic dictionary: it provides amortized constant insertion time, worst case constant deletion time and lookup time, and good memory utilization. However, with a noticeable probability during the insertion of n elements some insertion requires Ω(log n) time. Whereas such an amortized guarantee may be suitable for some applications, in other applications (such as high-performance routing) this is highly undesirable. Kirsch and Mitzenmacher (Allerton '07) proposed a de-amortization of cuckoo hashing using queueing techniques that preserve its attractive properties. They demonstrated a significant improvement to the worst case performance of cuckoo hashing via experimental results, but left open the problem of constructing a scheme with provable properties. In this work we present a de-amortization of cuckoo hashing that provably guarantees constant worst case operations. Specifically, for any sequence of polynomially many operations, with overwhelming probability over the randomness of the initialization phase, each operation is performed in constant time. In addition, we present a general approach for proving that the performance guarantees are preserved when using hash functions with limited independence instead of truly random hash functions. Our approach relies on a recent result of Braverman (CCC '09) showing that poly-logarithmic independence fools AC0 circuits, and may find additional applications in various similar settings. Our theoretical analysis and experimental results indicate that the scheme is highly efficient, and provides a practical alternative to the only other known approach for constructing dynamic dictionaries with such worst case guarantees, due to Dietzfelbinger and Meyer auf der Heide (ICALP '90).
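
For orientation, a sketch of the base scheme being de-amortized, plain cuckoo hashing with two tables; it is the eviction walk inside insert that can take Ω(log n) steps, which the paper's queue-based variant (not shown here) provably avoids.

```python
# Hedged sketch: standard cuckoo hashing, not the paper's de-amortized scheme.
import random

class Cuckoo:
    def __init__(self, size=64, max_kicks=32):
        self.t = [[None] * size, [None] * size]
        self.seeds = [random.random(), random.random()]
        self.max_kicks = max_kicks

    def _h(self, which, key):
        return hash((self.seeds[which], key)) % len(self.t[which])

    def lookup(self, key):                 # worst-case two probes
        return any(self.t[w][self._h(w, key)] == key for w in (0, 1))

    def insert(self, key):
        w = 0
        for _ in range(self.max_kicks):    # the (only amortized) eviction walk
            i = self._h(w, key)
            self.t[w][i], key = key, self.t[w][i]
            if key is None:
                return
            w ^= 1                         # the evicted key tries its other table
        raise RuntimeError("insertion failed: rehash needed")

d = Cuckoo()
for k in range(40):                        # load well below 1/2, so this succeeds w.h.p.
    d.insert(k)
assert all(d.lookup(k) for k in range(40))
```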

71 citations


Book ChapterDOI
20 Feb 2009
TL;DR: Shows that for checkers that access the remote storage in a deterministic and non-adaptive manner (as do all known memory checkers), the query complexity must be at least Ω(log n / log log n).
Abstract: We consider the problem of memory checking, where a user wants to maintain a large database on a remote server but has only limited local storage. The user wants to use the small (but trusted and secret) local storage to detect faults in the large (but public and untrusted) remote storage. A memory checker receives from the user store and retrieve operations to the large database. The checker makes its own requests to the (untrusted) remote storage and receives answers to these requests. It then uses these responses, together with its small private and reliable local memory, to ascertain that all requests were answered correctly, or to report faults in the remote storage (the public memory). A fruitful line of research investigates the complexity of memory checking in terms of the number of queries the checker issues per user request (query complexity) and the size of the reliable local memory (space complexity). Blum et al., who first formalized the question, distinguished between online checkers (that report faults as soon as they occur) and offline checkers (that report faults only at the end of a long sequence of operations). In this work we revisit the question of memory checking, asking: how efficient can memory checking be? For online checkers, Blum et al. provided a checker with logarithmic query complexity in n, the database size. Our main result is a lower bound: we show that for checkers that access the remote storage in a deterministic and non-adaptive manner (as do all known memory checkers), the query complexity must be at least Ω(log n / log log n). To cope with this negative result, we show how to trade off the read and write complexity of online memory checkers: for any desired logarithm base d, we construct an online checker where either reading or writing is inexpensive and has query complexity O(log_d n). The price for this is that the other operation (write or read, respectively) has query complexity O(d · log_d n). Finally, if even this performance is unacceptable, offline memory checking may be an inexpensive alternative. We provide a scheme with O(1) amortized query complexity, improving Blum et al.'s construction, which only had such performance for long sequences of at least n operations.
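
The read/write trade-off in concrete numbers, illustrative arithmetic only: with branching factor d, the cheap operation costs on the order of log_d n queries and the expensive one on the order of d · log_d n.

```python
# Hedged sketch: just evaluating the O(log_d n) vs O(d * log_d n) trade-off.
import math

n = 2 ** 20
for d in (2, 16, 256):
    depth = math.ceil(math.log(n, d))
    print(f"d={d:>3}: cheap op ~ {depth} queries, expensive op ~ {d * depth} queries")
```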

64 citations


Posted Content
TL;DR: The construction is a two-level variant of cuckoo hashing, augmented with a "backyard" that handles a large fraction of the elements, together with a de-amortized perfect hashing scheme for eliminating the dependency on ε; it stores n elements using (1+ε)n memory words and guarantees constant-time operations in the worst case with high probability.
Abstract: The performance of a dynamic dictionary is measured mainly by its update time, lookup time, and space consumption. In terms of update time and lookup time there are known constructions that guarantee constant-time operations in the worst case with high probability, and in terms of space consumption there are known constructions that use essentially optimal space. However, although the first analysis of a dynamic dictionary dates back more than 45 years (when Knuth analyzed linear probing in 1963), the trade-off between these aspects of performance is still not completely understood. In this paper we settle two fundamental open problems: - We construct the first dynamic dictionary that enjoys the best of both worlds: it stores n elements using (1+ε)n memory words, and guarantees constant-time operations in the worst case with high probability. Specifically, for any ε = Ω((log log n / log n)^{1/2}) and for any sequence of polynomially many operations, with high probability over the randomness of the initialization phase, all operations are performed in constant time which is independent of ε. The construction is a two-level variant of cuckoo hashing, augmented with a "backyard" that handles a large fraction of the elements, together with a de-amortized perfect hashing scheme for eliminating the dependency on ε. - We present a variant of the above construction that uses only (1+o(1))B bits, where B is the information-theoretic lower bound for representing a set of size n taken from a universe of size u, and guarantees constant-time operations in the worst case with high probability, as before. This problem was open even in the amortized setting. One of the main ingredients of our construction is a permutation-based variant of cuckoo hashing, which significantly improves the space consumption of cuckoo hashing when dealing with a rather small universe.
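
The benchmark B from the second result, computed concretely; the universe and set sizes below are arbitrary choices for illustration.

```python
# Hedged sketch: computing the information-theoretic bound B = log2(C(u, n)).
import math

u, n = 2 ** 32, 2 ** 20
logC = math.lgamma(u + 1) - math.lgamma(n + 1) - math.lgamma(u - n + 1)
B = logC / math.log(2)                 # natural log -> bits
naive = n * math.log2(u)               # storing n pointers of log2(u) bits each
print(f"B ~ {B / 8 / 2**20:.1f} MiB vs naive {naive / 8 / 2**20:.1f} MiB")
```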

56 citations


Posted Content
TL;DR: In this article, the authors proposed a de-amortization of cuckoo hashing that provably guarantees constant worst-case operations: for any sequence of polynomially many operations, with overwhelming probability over the randomness of the initialization phase, each operation is performed in constant time.
Abstract: Cuckoo hashing is a highly practical dynamic dictionary: it provides amortized constant insertion time, worst case constant deletion time and lookup time, and good memory utilization. However, with a noticeable probability during the insertion of n elements some insertion requires Ω(log n) time. Whereas such an amortized guarantee may be suitable for some applications, in other applications (such as high-performance routing) this is highly undesirable. Recently, Kirsch and Mitzenmacher (Allerton '07) proposed a de-amortization of cuckoo hashing using various queueing techniques that preserve its attractive properties. Kirsch and Mitzenmacher demonstrated a significant improvement to the worst case performance of cuckoo hashing via experimental results, but they left open the problem of constructing a scheme with provable properties. In this work we follow Kirsch and Mitzenmacher and present a de-amortization of cuckoo hashing that provably guarantees constant worst case operations. Specifically, for any sequence of polynomially many operations, with overwhelming probability over the randomness of the initialization phase, each operation is performed in constant time. Our theoretical analysis and experimental results indicate that the scheme is highly efficient, and provides a practical alternative to the only other known dynamic dictionary with such worst case guarantees, due to Dietzfelbinger and Meyer auf der Heide (ICALP '90).

Journal ArticleDOI
TL;DR: Considers cryptographic and physical zero-knowledge proof schemes for Sudoku, a popular combinatorial puzzle; the physical protocols are meant to be understood by “lay-people” and implementable without the use of computers.
Abstract: We consider cryptographic and physical zero-knowledge proof schemes for Sudoku, a popular combinatorial puzzle. We discuss methods that allow one party, the prover, to convince another party, the verifier, that the prover has solved a Sudoku puzzle, without revealing the solution to the verifier. The question of interest is how a prover can show: (i) that there is a solution to the given puzzle, and (ii) that he knows the solution, while not giving away any information about the solution to the verifier. In this paper we consider several protocols that achieve these goals. Broadly speaking, the protocols are either cryptographic or physical. By a cryptographic protocol we mean one in the usual model found in the foundations of cryptography literature. In this model, two machines exchange messages, and the security of the protocol relies on computational hardness. By a physical protocol we mean one that is implementable by humans using common objects, and preferably without the aid of computers. In particular, our physical protocols utilize items such as scratch-off cards, similar to those used in lotteries, or even just simple playing cards. The cryptographic protocols are direct and efficient, and do not involve a reduction to other problems. The physical protocols are meant to be understood by “lay-people” and implementable without the use of computers.
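
A sketch of one round of the cryptographic protocol idea described above: the prover commits to the solution under a fresh random relabeling of the digits, and the verifier asks to open a single constraint, learning only that those nine cells hold a permutation of 1..9. Opening the pre-filled cells for consistency with the puzzle, handling columns and boxes, and repeating rounds for soundness are omitted; the hash-based commitments are a standard stand-in, not the paper's exact instantiation.

```python
# Hedged sketch: one row-check round of a commitment-based ZK proof for Sudoku.
import hashlib
import os
import random

def commit(v: int):
    nonce = os.urandom(16)
    return hashlib.sha256(bytes([v]) + nonce).hexdigest(), nonce

def prover_round(solution):                   # solution: 9x9 grid of digits 1..9
    sigma = list(range(1, 10))
    random.shuffle(sigma)                     # fresh relabeling hides the digits
    relabeled = [[sigma[v - 1] for v in row] for row in solution]
    coms = [[commit(v) for v in row] for row in relabeled]
    board = [[c for c, _ in row] for row in coms]     # commitments sent to verifier
    return board, relabeled, coms

def open_row(relabeled, coms, r):             # prover opens row r on challenge
    return [(relabeled[r][c], coms[r][c][1]) for c in range(9)]

def verify_row(board, r, opening):
    ok = all(board[r][c] == hashlib.sha256(bytes([v]) + n).hexdigest()
             for c, (v, n) in enumerate(opening))
    return ok and sorted(v for v, _ in opening) == list(range(1, 10))

# A valid Sudoku grid via the classic shift pattern, used as the prover's witness.
solution = [[(3 * (r % 3) + r // 3 + c) % 9 + 1 for c in range(9)] for r in range(9)]
board, relabeled, coms = prover_round(solution)
challenge = random.randrange(9)               # verifier picks a row to open
assert verify_row(board, challenge, open_row(relabeled, coms, challenge))
```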

Proceedings ArticleDOI
15 Jul 2009
TL;DR: Suggests utilizing the behavior of humans while playing competitive games as an entropy source, in order to enhance the quality of the randomness in the system, and supplies complete working software for a robust PRG.
Abstract: Randomness is a necessary ingredient in various computational tasks and especially in Cryptography, yet many existing mechanisms for obtaining randomness suffer from numerous problems. We suggest utilizing the behavior of humans while playing competitive games as an entropy source, in order to enhance the quality of the randomness in the system. This idea has two motivations: (i) results in experimental psychology indicate that humans are able to behave quite randomly when engaged in competitive games in which a mixed strategy is optimal, and (ii) people have an affection for games, and this leads to longer play, yielding more entropy overall. While the resulting strings are not perfectly random, we show how to integrate such a game into a robust pseudo-random generator that enjoys backward and forward security. We construct a game suitable for randomness extraction, and test users' playing patterns. The results show that in less than two minutes a human can generate 128 bits that are 2^{-64}-close to random, even on a limited computer such as a PDA that might have no other entropy source. As proof of concept, we supply complete working software for a robust PRG. It generates random sequences based solely on human game play, and thus does not depend on the Operating System or any external factor.
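
A generic sketch of the robust-PRG shell around such an entropy source (the game itself, and the paper's exact construction, are not reproduced): game events are folded into a pool, and the state is ratcheted after every output so that compromising the current state reveals neither past outputs (forward security) nor, once fresh entropy arrives, future ones (backward security).

```python
# Hedged sketch: a generic entropy-pool PRG with state ratcheting; the event
# strings below are hypothetical examples of timed game moves.
import hashlib

class GamePRG:
    def __init__(self):
        self.state = b"\x00" * 32

    def feed(self, event: bytes):
        # Fold a timed game event (move, position, timing) into the pool.
        self.state = hashlib.sha256(b"feed" + self.state + event).digest()

    def next_bytes(self) -> bytes:
        out = hashlib.sha256(b"out" + self.state).digest()
        # Ratchet: overwrite the state so earlier states cannot be recovered.
        self.state = hashlib.sha256(b"ratchet" + self.state).digest()
        return out

prg = GamePRG()
for event in [b"move:17,3:t=102ms", b"move:4,9:t=88ms"]:
    prg.feed(event)
print(prg.next_bytes().hex())
```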

Posted Content
TL;DR: This paper establishes the optimal trade-off between the round complexity and the bias of two-party coin-flipping protocols and shows that Cleve’s lower bound is tight.
Abstract: We address one of the foundational problems in cryptography: the bias of coin-flipping protocols. Coin-flipping protocols allow mutually distrustful parties to generate a common unbiased random bit, guaranteeing that even if one of the parties is malicious, it cannot significantly bias the output of the honest party. A classical result by Cleve [STOC ’86] showed that for any two-party r-round coin-flipping protocol there exists an efficient adversary that can bias the output of the honest party by Ω(1/r). However, the best previously known protocol only guarantees O(1/√r) bias, and the question of whether Cleve’s bound is tight has remained open for more than twenty years. In this paper we establish the optimal trade-off between the round complexity and the bias of two-party coin-flipping protocols. Under standard assumptions (the existence of oblivious transfer), we show that Cleve’s lower bound is tight: we construct an r-round protocol with bias O(1/r). A preliminary version of this work appeared in Proceedings of the 6th Theory of Cryptography Conference (TCC), pages 1–18, 2009.

Posted Content
TL;DR: It is shown that an IB-HPS almost immediately yields an Identity-Based Encryption (IBE) scheme which is secure against (small) partial leakage of the target identity’s decryption key.
Abstract: We construct the first public-key encryption scheme in the Bounded-Retrieval Model (BRM), providing security against various forms of adversarial “key leakage” attacks. In this model, the adversary is allowed to learn arbitrary information about the decryption key, subject only to the constraint that the overall amount of “leakage” is bounded by at most ℓ bits. The goal of the BRM is to design cryptographic schemes that can flexibly tolerate arbitrary leakage bounds ℓ (a few bits or many gigabytes), by only increasing the size of the secret key proportionally, but keeping all the other parameters — including the size of the public key, ciphertext, encryption/decryption time, and the number of secret-key bits accessed during decryption — small and independent of ℓ. As our main technical tool, we introduce the concept of an Identity-Based Hash Proof System (IB-HPS), which generalizes the notion of hash proof systems of Cramer and Shoup [CS02] to the identity-based setting. We give three different constructions of this primitive based on: (1) bilinear groups, (2) lattices, and (3) quadratic residuosity. As a result of independent interest, we show that an IB-HPS almost immediately yields an Identity-Based Encryption (IBE) scheme which is secure against (small) partial leakage of the target identity’s decryption key. As our main result, we use IB-HPS to construct public-key encryption (and IBE) schemes in the Bounded-Retrieval Model.
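
The leakage model itself, written as executable pseudocode; the class is a hypothetical helper for exposition, not an API from the paper: the adversary may request arbitrary functions of the secret key, subject only to the total output staying within the ℓ-bit budget.

```python
# Hedged sketch: accounting for the l-bit leakage budget of the BRM; hypothetical helper.
class LeakageOracle:
    def __init__(self, secret_key: bytes, budget_bits: int):
        self.sk = secret_key
        self.remaining = budget_bits

    def leak(self, f, out_bits: int) -> int:
        # f: arbitrary adversarial function of the key; out_bits: its output length.
        if out_bits > self.remaining:
            raise PermissionError("leakage budget exhausted")
        self.remaining -= out_bits
        return f(self.sk) & ((1 << out_bits) - 1)

oracle = LeakageOracle(b"...secret key material...", budget_bits=128)
first_byte = oracle.leak(lambda sk: sk[0], out_bits=8)   # 8 bits spent, 120 remain
```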

Journal ArticleDOI
TL;DR: Motivated by secure vote storage in extremely hostile environments, proposes a mechanism for storing a set of elements on write-once memories, in which the memory is initialized to the all-0s state and the only operation allowed is flipping bits from 0 to 1, without revealing the insertion order of the elements.
Abstract: Motivated by the challenging task of designing “secure” vote storage mechanisms, we study information storage mechanisms that operate in extremely hostile environments. In such environments, the majority of existing techniques for information storage and for security are susceptible to powerful adversarial attacks. We propose a mechanism for storing a set of at most K elements from a large universe of size N on write-once memories in a manner that does not reveal the insertion order of the elements. We consider a standard model for write-once memories, in which the memory is initialized to the all-0s state, and the only operation allowed is flipping bits from 0 to 1. Whereas previously known constructions were either inefficient (required Θ(K) memory), randomized, or employed cryptographic techniques which are unlikely to be available in hostile environments, we eliminate each of these undesirable properties. The total amount of memory used by the mechanism is linear in the number of stored elements and poly-logarithmic in the size of the universe of elements. We also demonstrate a connection between secure vote storage mechanisms and one of the classical distributed computing problems: conflict resolution in multiple-access channels. By establishing a tight connection with the basic building block of our mechanism, we construct the first deterministic and non-adaptive conflict resolution algorithm whose running time is optimal up to poly-logarithmic factors. A preliminary version of this work appeared in Proceedings of the 34th International Colloquium on Automata, Languages and Programming (ICALP), pages 303–315, 2007.
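
The write-once storage model from the abstract, as executable pseudocode; the class only enforces the physical model, and the paper's actual insertion-order-hiding encoding is not shown: memory starts all-zero and the only permitted operation is flipping a bit from 0 to 1.

```python
# Hedged sketch: the write-once memory model; the paper's encoding is not included.
class WriteOnceMemory:
    def __init__(self, num_bits: int):
        self.bits = [0] * num_bits

    def set_bit(self, i: int):
        self.bits[i] = 1          # 0 -> 1 is the only allowed transition

    def write_word(self, offset: int, word: int, width: int):
        for j in range(width):
            if (word >> j) & 1:
                self.set_bit(offset + j)
            elif self.bits[offset + j]:
                raise ValueError("cannot clear a bit: memory is write-once")

mem = WriteOnceMemory(64)
mem.write_word(0, 0b1010, 4)      # fine: only sets bits
mem.write_word(0, 0b1110, 4)      # fine: supersets of set bits are allowed
# mem.write_word(0, 0b0001, 4)    # would raise: bit 1 is already set, and this clears it
```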