
Showing papers in "SIAM Journal on Computing in 2007"


Journal ArticleDOI
TL;DR: It is shown that finding small solutions to random modular linear equations is at least as hard as approximating several lattice problems in the worst case within a factor almost linear in the dimension of the lattice, and it is proved that the distribution obtained by adding Gaussian noise to a lattice behaves, when conditioned on the final value, much like the original Gaussian noise.
Abstract: We show that finding small solutions to random modular linear equations is at least as hard as approximating several lattice problems in the worst case within a factor almost linear in the dimension of the lattice. The lattice problems we consider are the shortest vector problem, the shortest independent vectors problem, the covering radius problem, and the guaranteed distance decoding problem (a variant of the well-known closest vector problem). The approximation factor we obtain is $n \log^{O(1)} n$ for all four problems. This greatly improves on all previous work on the subject starting from Ajtai’s seminal paper [Generating hard instances of lattice problems, in Complexity of Computations and Proofs, Quad. Mat. 13, Dept. Math., Seconda Univ. Napoli, Caserta, Italy, 2004, pp. 1-32] up to the strongest previously known results by Micciancio [SIAM J. Comput., 34 (2004), pp. 118-169]. Our results also bring us closer to the limit where the problems are no longer known to be in $\mathrm{NP} \cap \mathrm{coNP}$. Our main tools are Gaussian measures on lattices and the high-dimensional Fourier transform. We start by defining a new lattice parameter which determines the amount of Gaussian noise that one has to add to a lattice in order to get close to a uniform distribution. In addition to yielding quantitatively much stronger results, the use of this parameter allows us to simplify many of the complications in previous work. Our technical contributions are twofold. First, we show tight connections between this new parameter and existing lattice parameters. One such important connection is between this parameter and the length of the shortest set of linearly independent vectors. Second, we prove that the distribution that one obtains after adding Gaussian noise to the lattice has the following interesting property: the distribution of the noise vector when conditioning on the final value behaves in many respects like the original Gaussian noise vector. In particular, its moments remain essentially unchanged.
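To make the smoothing phenomenon concrete, here is a small numerical sketch (ours, not from the paper, and in dimension 1 only, assuming numpy): adding Gaussian noise of parameter $s$ to the lattice $\mathbb{Z}$ and reducing modulo 1 gives a distribution on $[0,1)$ whose distance from uniform drops rapidly once $s$ passes a threshold — the quantity the paper's new lattice parameter captures in high dimension.

```python
# Illustration (not from the paper): the "smoothing" phenomenon in dimension 1.
# Adding Gaussian noise of parameter s to the lattice Z and reducing mod 1
# yields a distribution on [0,1) that approaches uniform as s grows.
import numpy as np

rng = np.random.default_rng(0)

def mod1_distance_from_uniform(s, samples=200_000, bins=50):
    """Rough total-variation distance between (Gaussian noise mod 1) and uniform."""
    noise = rng.normal(0.0, s, samples) % 1.0          # Gaussian noise mod Z
    hist, _ = np.histogram(noise, bins=bins, range=(0.0, 1.0))
    empirical = hist / samples
    return 0.5 * np.abs(empirical - 1.0 / bins).sum()

for s in [0.1, 0.3, 0.5, 1.0]:
    print(f"s = {s:<4} distance from uniform ~ {mod1_distance_from_uniform(s):.4f}")
```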

793 citations


Journal ArticleDOI
TL;DR: This paper shows a reduction from the Unique Games problem to the problem of approximating MAX-CUT to within a factor of $\alpha_{\text{\tiny{GW}}} + \epsilon$ for all $\epsilon > 0$, and indicates that the geometric nature of the Goemans-Williamson algorithm might be intrinsic to the MAX-CUT problem.
Abstract: In this paper we show a reduction from the Unique Games problem to the problem of approximating MAX-CUT to within a factor of $\alpha_{\text{\tiny{GW}}} + \epsilon$ for all $\epsilon > 0$; here $\alpha_{\text{\tiny{GW}}} \approx .878567$ denotes the approximation ratio achieved by the algorithm of Goemans and Williamson in [J. Assoc. Comput. Mach., 42 (1995), pp. 1115-1145]. This implies that if the Unique Games Conjecture of Khot in [Proceedings of the 34th Annual ACM Symposium on Theory of Computing, 2002, pp. 767-775] holds, then the Goemans-Williamson approximation algorithm is optimal. Our result indicates that the geometric nature of the Goemans-Williamson algorithm might be intrinsic to the MAX-CUT problem. Our reduction relies on a theorem we call Majority Is Stablest. This was introduced as a conjecture in the original version of this paper, and was subsequently confirmed in [E. Mossel, R. O’Donnell, and K. Oleszkiewicz, Proceedings of the 46th Annual IEEE Symposium on Foundations of Computer Science, 2005, pp. 21-30]. A stronger version of this conjecture called Plurality Is Stablest is still open, although [E. Mossel, R. O’Donnell, and K. Oleszkiewicz, Proceedings of the 46th Annual IEEE Symposium on Foundations of Computer Science, 2005, pp. 21-30] contains a proof of an asymptotic version of it. Our techniques extend to several other two-variable constraint satisfaction problems. In particular, subject to the Unique Games Conjecture, we show tight or nearly tight hardness results for MAX-2SAT, MAX-$q$-CUT, and MAX-2LIN($q$). For MAX-2SAT we show approximation hardness up to a factor of roughly $.943$. This nearly matches the $.940$ approximation algorithm of Lewin, Livnat, and Zwick in [Proceedings of the 9th Annual Conference on Integer Programming and Combinatorial Optimization, Springer-Verlag, Berlin, 2002, pp. 67-82]. Furthermore, we show that our .943... factor is actually tight for a slightly restricted version of MAX-2SAT. For MAX-$q$-CUT we show a hardness factor which asymptotically (for large $q$) matches the approximation factor achieved by Frieze and Jerrum [Improved approximation algorithms for MAX k-CUT and MAX BISECTION, in Integer Programming and Combinatorial Optimization, Springer-Verlag, Berlin, pp. 1-13], namely $1 - 1/q + 2({\rm ln}\,q)/q^2$. For MAX-2LIN($q$) we show hardness of distinguishing between instances which are $(1-\epsilon)$-satisfiable and those which are not even, roughly, $(q^{-\epsilon/2})$-satisfiable. These parameters almost match those achieved by the recent algorithm of Charikar, Makarychev, and Makarychev [Proceedings of the 38th Annual ACM Symposium on Theory of Computing, 2006, pp. 205-214]. The hardness result holds even for instances in which all equations are of the form $x_i - x_j = c$. At a more qualitative level, this result also implies that $1-\epsilon$ vs. $\epsilon$ hardness for MAX-2LIN($q$) is equivalent to the Unique Games Conjecture.
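As a numerical aside (our illustration, not part of the paper), the constant $\alpha_{\text{\tiny{GW}}}$ can be recovered as the worst-case ratio, over angles $\theta$, of the random-hyperplane cut probability $\theta/\pi$ to the SDP edge contribution $(1-\cos\theta)/2$:

```python
# Recovering alpha_GW numerically: the minimum over theta in (0, pi] of
# (theta/pi) / ((1 - cos(theta))/2), i.e. the worst-case ratio between the
# probability that a random hyperplane cuts an edge and that edge's SDP value.
import numpy as np

theta = np.linspace(1e-6, np.pi, 1_000_001)
ratio = (theta / np.pi) / ((1.0 - np.cos(theta)) / 2.0)
i = ratio.argmin()
print(f"alpha_GW ~ {ratio[i]:.6f} at theta ~ {theta[i]:.6f}")  # ~0.878567 near theta ~ 2.3311
```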

629 citations


Journal ArticleDOI
TL;DR: An $O(N^{k/(k+1)})$ query quantum algorithm is given for the generalization of element distinctness in which one has to find $k$ equal items among $N$ items.
Abstract: We use quantum walks to construct a new quantum algorithm for element distinctness and its generalization. For element distinctness (the problem of finding two equal items among $N$ given items), we get an $O(N^{2/3})$ query quantum algorithm. This improves the previous $O(N^{3/4})$ quantum algorithm of Buhrman et al. [SIAM J. Comput., 34 (2005), pp. 1324-1330] and matches the lower bound of Aaronson and Shi [J. ACM, 51 (2004), pp. 595-605]. We also give an $O(N^{k/(k+1)})$ query quantum algorithm for the generalization of element distinctness in which we have to find $k$ equal items among $N$ items.

593 citations


Journal ArticleDOI
TL;DR: It is shown that the adiabatic computation model and the conventional quantum computation model are polynomially equivalent, and that this result can be extended to the physically realistic setting of particles arranged on a two-dimensional grid with nearest neighbor interactions.
Abstract: Adiabatic quantum computation has recently attracted attention in the physics and computer science communities, but its computational power was unknown. We describe an efficient adiabatic simulation of any given quantum algorithm, which implies that the adiabatic computation model and the conventional quantum computation model are polynomially equivalent. Our result can be extended to the physically realistic setting of particles arranged on a two-dimensional grid with nearest neighbor interactions. The equivalence between the models allows stating the main open problems in quantum computation using well-studied mathematical objects such as eigenvectors and spectral gaps of sparse matrices.
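A minimal numerical sketch of the adiabatic model on a single qubit (our toy demonstration, not the paper's simulation construction, assuming numpy): interpolate from $H_0 = -X$ to $H_1 = -Z$ and observe that slower evolution tracks the ground state more faithfully, with the required runtime governed by the minimal spectral gap the abstract alludes to.

```python
# Toy adiabatic evolution: start in the ground state of H0 = -X, interpolate
# H(s) = -((1-s)X + sZ), and measure the overlap with the ground state of
# H1 = -Z. Larger total time T gives higher fidelity (adiabatic theorem).
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def adiabatic_fidelity(T, steps=4000):
    psi = np.array([1, 1], dtype=complex) / np.sqrt(2)      # ground state of -X
    dt = T / steps
    for k in range(steps):
        s = (k + 0.5) / steps
        H = -((1 - s) * X + s * Z)
        w, v = np.linalg.eigh(H)
        U = v @ np.diag(np.exp(-1j * w * dt)) @ v.conj().T  # exact step propagator
        psi = U @ psi
    ground = np.array([1, 0], dtype=complex)                # ground state of -Z
    return abs(ground.conj() @ psi) ** 2

for T in [1, 5, 20, 80]:
    print(f"T = {T:>3}: overlap with final ground state = {adiabatic_fidelity(T):.4f}")
```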

431 citations


Journal ArticleDOI
TL;DR: This paper considers the problem of designing fast, approximate, combinatorial algorithms for multicommodity flows and other fractional packing problems and presents new, faster, and much simpler algorithms for these problems.
Abstract: This paper considers the problem of designing fast, approximate, combinatorial algorithms for multicommodity flows and other fractional packing problems. We present new, faster, and much simpler algorithms for these problems.

289 citations


Journal ArticleDOI
TL;DR: Two new quantum algorithms are presented that either find a triangle (a copy of $K_{3}$) in an undirected graph $G$, or reject if $G$ is triangle free.
Abstract: We present two new quantum algorithms that either find a triangle (a copy of $K_{3}$) in an undirected graph $G$ on $n$ nodes, or reject if $G$ is triangle free. The first algorithm uses combinatorial ideas with Grover Search and makes $\tilde{O}(n^{10/7})$ queries. The second algorithm uses $\tilde{O}(n^{13/10})$ queries and is based on a design concept of Ambainis [in Proceedings of the $45$th IEEE Symposium on Foundations of Computer Science, 2004, pp. 22-31] that incorporates the benefits of quantum walks into Grover Search [L. Grover, in Proceedings of the Twenty-Eighth ACM Symposium on Theory of Computing, 1996, pp. 212-219]. The first algorithm uses only $O(\log n)$ qubits in its quantum subroutines, whereas the second one uses $O(n)$ qubits. The Triangle Problem was first treated in [H. Buhrman et al., SIAM J. Comput., 34 (2005), pp. 1324-1330], where an algorithm with $O(n+\sqrt{nm})$ query complexity was presented, where $m$ is the number of edges of $G$.
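For readers unfamiliar with the Grover primitive both algorithms build on, here is a state-vector simulation (a generic sketch, not the triangle algorithm itself) showing that about $(\pi/4)\sqrt{N}$ iterations concentrate the amplitude on a marked item:

```python
# Grover search, simulated classically: oracle phase-flip on the marked item,
# then "inversion about the mean" (the diffusion operator). After roughly
# (pi/4)*sqrt(N) iterations the marked item dominates the distribution.
import numpy as np

def grover_success_probability(N, marked, iterations):
    psi = np.full(N, 1.0 / np.sqrt(N))   # uniform superposition over N items
    for _ in range(iterations):
        psi[marked] *= -1.0              # oracle: phase-flip the marked item
        psi = 2.0 * psi.mean() - psi     # diffusion: reflect about the mean
    return psi[marked] ** 2

N = 1024
k = int(round(np.pi / 4 * np.sqrt(N)))   # ~25 iterations for N = 1024
print(f"success probability after {k} iterations: "
      f"{grover_success_probability(N, 7, k):.4f}")
```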

274 citations


Journal ArticleDOI
TL;DR: In this paper, it was shown that the size of the smallest self-assembly program that builds a shape and the shape's descriptional (Kolmogorov) complexity should be related.
Abstract: The connection between self-assembly and computation suggests that a shape can be considered the output of a self-assembly “program,” a set of tiles that fit together to create a shape. It seems plausible that the size of the smallest self-assembly program that builds a shape and the shape’s descriptional (Kolmogorov) complexity should be related. We show that when using a notion of a shape that is independent of scale, this is indeed so: in the tile assembly model, the minimal number of distinct tile types necessary to self-assemble a shape, at some scale, can be bounded both above and below in terms of the shape’s Kolmogorov complexity. As part of the proof, we develop a universal constructor for this model of self-assembly that can execute an arbitrary Turing machine program specifying how to grow a shape. Our result implies, somewhat counterintuitively, that self-assembly of a scaled-up version of a shape often requires fewer tile types. Furthermore, the independence of scale in self-assembly theory appears to play the same crucial role as the independence of running time in the theory of computability. This leads to an elegant formulation of languages of shapes generated by self-assembly. Considering functions from bit strings to shapes, we show that the running-time complexity, with respect to Turing machines, is polynomially equivalent to the scale complexity of the same function implemented via self-assembly by a finite set of tile types. Our results also hold for shapes defined by Wang tiling—where there is no sense of a self-assembly process—except that here time complexity must be measured with respect to nondeterministic Turing machines.

267 citations


Journal ArticleDOI
TL;DR: This paper gives the first constant-factor approximation algorithm for the rooted Orienteering problem, as well as for a new problem, the Discounted-Reward traveling salesman problem (TSP), motivated by robot navigation.
Abstract: In this paper, we give the first constant-factor approximation algorithm for the rooted Orienteering problem, as well as a new problem that we call the Discounted-Reward traveling salesman problem (TSP), motivated by robot navigation. In both problems, we are given a graph with lengths on edges and rewards on nodes, and a start node $s$. In the Orienteering problem, the goal is to find a path starting at $s$ that maximizes the reward collected, subject to a hard limit on the total length of the path. In the Discounted-Reward TSP, instead of a length limit we are given a discount factor $\gamma$, and the goal is to maximize the total discounted reward collected, where the reward for a node reached at time $t$ is discounted by $\gamma^t$. This problem is motivated by an approximation to a planning problem in the Markov decision process (MDP) framework under the commonly employed infinite horizon discounted reward optimality criterion. The approximation arises from a need to deal with exponentially large state spaces that emerge when trying to model one-time events and nonrepeatable rewards (such as for package deliveries). We also consider tree and multiple-path variants of these problems and provide approximations for those as well. Although the unrooted Orienteering problem, where there is no fixed start node $s$, has been known to be approximable using algorithms for related problems such as $k$-TSP (in which the amount of reward to be collected is fixed and the total length is approximately minimized), ours is the first to approximate the rooted question, solving an open problem in [E. M. Arkin, J. S. B. Mitchell, and G. Narasimhan, Proceedings of the $14$th ACM Symposium on Computational Geometry, 1998, pp. 307-316] and [B. Awerbuch, Y. Azar, A. Blum, and S. Vempala, SIAM J. Comput., 28 (1998), pp. 254-262]. We complement our approximation result for Orienteering by showing that the problem is APX-hard.
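To pin down the two objectives, here is a brute-force reference implementation (ours, exponential-time, for tiny instances only — not the paper's constant-factor algorithms):

```python
# Brute force over simple paths from the start node s: rooted Orienteering
# maximizes collected reward subject to a hard length budget; Discounted-Reward
# TSP maximizes reward discounted by gamma^t for a node reached at time t.
def best_paths(adj, reward, s, budget, gamma):
    """adj: {u: [(v, length), ...]}; reward: {v: float}."""
    best_orient = [0.0]
    best_disc = [0.0]

    def dfs(u, visited, length, collected, disc_collected):
        best_orient[0] = max(best_orient[0], collected)
        best_disc[0] = max(best_disc[0], disc_collected)
        for v, w in adj[u]:
            if v not in visited and length + w <= budget:
                visited.add(v)
                dfs(v, visited, length + w,
                    collected + reward[v],
                    disc_collected + gamma ** (length + w) * reward[v])
                visited.remove(v)

    dfs(s, {s}, 0.0, reward[s], reward[s])
    return best_orient[0], best_disc[0]

adj = {0: [(1, 1), (2, 2)], 1: [(0, 1), (2, 1)], 2: [(0, 2), (1, 1)]}
reward = {0: 0.0, 1: 5.0, 2: 3.0}
print(best_paths(adj, reward, s=0, budget=2.0, gamma=0.5))  # (8.0, 3.25)
```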

184 citations


Journal ArticleDOI
TL;DR: The approach yields fully polynomial-time approximation schemes for the NP-hard quickest min-cost and multicommodity flow problems; for single commodity problems, storage of flow at intermediate nodes is shown to be unnecessary, and the approximation schemes do not use any.
Abstract: Flows over time (also called dynamic flows) generalize standard network flows by introducing an element of time. They naturally model problems where travel and transmission are not instantaneous. Traditionally, flows over time are solved in time-expanded networks that contain one copy of the original network for each discrete time step. While this method makes available the whole algorithmic toolbox developed for static flows, its main and often fatal drawback is the enormous size of the time-expanded network. We present several approaches for coping with this difficulty. First, inspired by the work of Ford and Fulkerson on maximal $s$-$t$-flows over time (or “maximal dynamic $s$-$t$-flows”), we show that static length-bounded flows lead to provably good multicommodity flows over time. Second, we investigate “condensed” time-expanded networks which rely on a rougher discretization of time. We prove that a solution of arbitrary precision can be computed in polynomial time through an appropriate discretization leading to a condensed time-expanded network of polynomial size. In particular, our approach yields fully polynomial-time approximation schemes for the NP-hard quickest min-cost and multicommodity flow problems. For single commodity problems, we show that storage of flow at intermediate nodes is unnecessary, and our approximation schemes do not use any.
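A minimal sketch of the classical time expansion the paper starts from (the standard construction, under our naming conventions): one copy $(v,t)$ of each node per discrete time step, an arc $(u,t) \to (v,t+\tau)$ per original arc with transit time $\tau$, plus holdover arcs modeling storage at nodes.

```python
# Build a time-expanded network with horizon T from arcs (u, v, tau, capacity).
# Its size is what makes the approach expensive: one node copy per time step,
# which is exactly the blow-up the paper's condensed networks avoid.
def time_expand(arcs, nodes, T, allow_storage=True):
    """Returns expanded arcs ((u, t), (v, t + tau), capacity)."""
    expanded = []
    for u, v, tau, cap in arcs:
        for t in range(T - tau + 1):                 # arrival t + tau must be <= T
            expanded.append(((u, t), (v, t + tau), cap))
    if allow_storage:                                # holdover: keep flow at a node
        for v in nodes:
            for t in range(T):
                expanded.append(((v, t), (v, t + 1), float("inf")))
    return expanded

arcs = [("s", "a", 1, 2), ("a", "t", 2, 1)]
for e in time_expand(arcs, ["s", "a", "t"], T=4):
    print(e)
```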

183 citations


Journal ArticleDOI
TL;DR: New techniques to derive upper and lower bounds on the kernel size for certain parameterized problems are developed, including a new set of reduction and coloring rules that allows the derivation of nice combinatorial properties in the kernelized graph leading to a tighter bound on the size of the kernel.
Abstract: Determining whether a parameterized problem is kernelizable and has a small kernel size has recently become one of the most interesting topics of research in the area of parameterized complexity and algorithms. Theoretically, it has been proved that a parameterized problem is kernelizable if and only if it is fixed-parameter tractable. Practically, applying a data reduction algorithm to reduce an instance of a parameterized problem to an equivalent smaller instance (i.e., a kernel) has led to very efficient algorithms and now goes hand-in-hand with the design of practical algorithms for solving $\mathcal{NP}$-hard problems. Well-known examples of such parameterized problems include the vertex cover problem, which is kernelizable to a kernel of size bounded by $2k$, and the planar dominating set problem, which is kernelizable to a kernel of size bounded by $335k$. In this paper we develop new techniques to derive upper and lower bounds on the kernel size for certain parameterized problems. In terms of our lower bound results, we show, for example, that unless $\mathcal{P} = \mathcal{NP}$, planar vertex cover does not have a problem kernel of size smaller than $4k/3$, and planar independent set and planar dominating set do not have kernels of size smaller than $2k$. In terms of our upper bound results, we further reduce the upper bound on the kernel size for the planar dominating set problem to $67 k$, improving significantly the $335 k$ previous upper bound given by Alber, Fellows, and Niedermeier [J. ACM, 51 (2004), pp. 363-384]. This latter result is obtained by introducing a new set of reduction and coloring rules, which allows the derivation of nice combinatorial properties in the kernelized graph leading to a tighter bound on the size of the kernel. The paper also shows how this improved upper bound yields a simple and competitive algorithm for the planar dominating set problem.
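To illustrate concretely what a kernelization does, here is a sketch of the classical Buss reduction rules for vertex cover (these give only an $O(k^2)$-size kernel, weaker than the LP-based $2k$ kernel cited above, but they show the reduce-to-an-equivalent-smaller-instance idea):

```python
# Buss-style kernelization for vertex cover: (1) a vertex of degree greater
# than the remaining budget must be in any small cover; (2) isolated vertices
# are irrelevant (implicit here, since only edges are tracked). If more than
# k'^2 edges survive with remaining budget k', the answer is "no".
def buss_kernel(edges, k):
    edges = set(map(frozenset, edges))
    forced = set()                               # vertices forced into the cover
    changed = True
    while changed:
        changed = False
        deg = {}
        for e in edges:
            for v in e:
                deg[v] = deg.get(v, 0) + 1
        for v, d in deg.items():
            if d > k - len(forced):              # rule 1: high degree forces v
                forced.add(v)
                edges = {e for e in edges if v not in e}
                changed = True
                break
    if len(forced) > k:
        return None                              # more forced vertices than budget
    if len(edges) > (k - len(forced)) ** 2:
        return None                              # too many edges: no size-k cover
    return forced, edges                         # equivalent smaller instance

print(buss_kernel([(1, 2), (1, 3), (1, 4), (2, 3)], k=2))
```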

155 citations


Journal ArticleDOI
TL;DR: This paper provides a randomized approximation scheme for the k-median problem when the input points lie in the d-dimensional Euclidean space and develops a structure theorem to describe hierarchical decomposition of solutions.
Abstract: This paper provides a randomized approximation scheme for the $k$-median problem when the input points lie in the $d$-dimensional Euclidean space. The worst-case running time is $O(2^{O((\log(1/\varepsilon)/\varepsilon)^{d-1})} n \log^{d+6} n)$, which is nearly linear for any fixed $\varepsilon$ and $d$. Moreover, our method provides the first polynomial-time approximation scheme for $k$-median and uncapacitated facility location instances in $d$-dimensional Euclidean space for any fixed $d > 2$. Our work extends techniques introduced originally by Arora for the Euclidean traveling salesman problem (TSP). To obtain the improvement we develop a structure theorem to describe hierarchical decomposition of solutions. The theorem is based on an adaptive decomposition scheme, which guesses at every level of the hierarchy the structure of the optimal solution and accordingly modifies the parameters of the decomposition. We believe that our methodology is of independent interest and may find applications to further geometric problems.
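For reference, the (discrete) Euclidean $k$-median objective being approximated, evaluated here by brute force (our illustration; exponential in $k$, so for tiny instances only):

```python
# The k-median objective: pick k of the input points as medians so that the
# sum of distances from every point to its nearest median is minimized.
from itertools import combinations
import numpy as np

def k_median_opt(points, k):
    pts = np.asarray(points, dtype=float)
    best = (np.inf, None)
    for medians in combinations(range(len(pts)), k):
        d = np.linalg.norm(pts[:, None, :] - pts[list(medians)][None, :, :], axis=2)
        cost = d.min(axis=1).sum()        # each point is served by its nearest median
        best = min(best, (cost, medians))
    return best

points = [(0, 0), (0, 1), (5, 5), (6, 5), (10, 0)]
print(k_median_opt(points, k=2))
```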

Journal ArticleDOI
TL;DR: The main idea in this generalization is to allow ellipsoids to contain not the whole convex region but only a part of it, which makes a powerful theorem in the area of geometric algorithms and combinatorial optimization even more powerful.
Abstract: We provide the first polynomial time exact algorithm for computing an Arrow-Debreu market equilibrium for the case of linear utilities. Our algorithm is based on solving a convex program using the ellipsoid algorithm and simultaneous diophantine approximation. As a side result, we prove that the set of assignments at equilibrium is convex and the equilibrium prices themselves are log-convex. Our convex program is explicit and intuitive, which allows maximizing a concave function over the set of equilibria. On the practical side, Ye developed an interior point algorithm [Lecture Notes in Comput. Sci. 3521, Springer, New York, 2005, pp. 3-5] to find an equilibrium based on our convex program. We also derive separate combinatorial characterizations of equilibrium for Arrow-Debreu and Fisher cases. Our convex program can be extended for many nonlinear utilities and production models. Our paper also makes a powerful theorem (Theorem 6.4.1 in [M. Grotschel, L. Lovasz, and A. Schrijver, Geometric Algorithms and Combinatorial Optimization, 2nd ed., Springer-Verlag, Berlin, Heidelberg, 1993]) even more powerful (in Theorems 12 and 13) in the area of geometric algorithms and combinatorial optimization. The main idea in this generalization is to allow ellipsoids to contain not the whole convex region but a part of it. This theorem is of independent interest.
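As a hedged illustration of the convex-programming flavor, here is the classical Eisenberg-Gale program for the *Fisher* special case mentioned above (this is not the paper's Arrow-Debreu program; it assumes the cvxpy package and a conic solver supporting the exponential cone):

```python
# Eisenberg-Gale convex program for a linear Fisher market: maximize the
# budget-weighted sum of log utilities subject to supplying one unit of each
# good. The duals of the supply constraints are the equilibrium prices.
import cvxpy as cp
import numpy as np

U = np.array([[10.0, 1.0], [1.0, 10.0]])    # utilities: 2 buyers x 2 goods
m = np.array([1.0, 1.0])                    # buyer budgets

x = cp.Variable(U.shape, nonneg=True)       # x[i, j]: amount of good j to buyer i
utilities = cp.sum(cp.multiply(U, x), axis=1)
supply = cp.sum(x, axis=0) <= 1             # one unit of each good
prob = cp.Problem(cp.Maximize(cp.sum(cp.multiply(m, cp.log(utilities)))), [supply])
prob.solve()

print("allocation:\n", np.round(x.value, 3))
print("equilibrium prices:", np.round(supply.dual_value, 3))
```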

Journal ArticleDOI
TL;DR: In this article, Lutz et al. showed that packing dimension can also be characterized in terms of gales, which are betting strategies that generalize martingales, and showed that the effective strong dimension of a set or sequence is at least as great as its effective dimension, with equality for sets or sequences that are sufficiently regular.
Abstract: The two most important notions of fractal dimension are Hausdorff dimension, developed by Hausdorff [Math. Ann., 79 (1919), pp. 157-179], and packing dimension, developed independently by Tricot [Math. Proc. Cambridge Philos. Soc., 91 (1982), pp. 57-74] and Sullivan [Acta Math., 153 (1984), pp. 259-277]. Both dimensions have the mathematical advantage of being defined from measures, and both have yielded extensive applications in fractal geometry and dynamical systems. Lutz [Proceedings of the 15th IEEE Conference on Computational Complexity, Florence, Italy, 2000, IEEE Computer Society Press, Piscataway, NJ, 2000, pp. 158-169] has recently proven a simple characterization of Hausdorff dimension in terms of gales, which are betting strategies that generalize martingales. Imposing various computability and complexity constraints on these gales produces a spectrum of effective versions of Hausdorff dimension, including constructive, computable, polynomial-space, polynomial-time, and finite-state dimensions. Work by several investigators has already used these effective dimensions to shed significant new light on a variety of topics in theoretical computer science. In this paper we show that packing dimension can also be characterized in terms of gales. Moreover, even though the usual definition of packing dimension is considerably more complex than that of Hausdorff dimension, our gale characterization of packing dimension is an exact dual of—and every bit as simple as—the gale characterization of Hausdorff dimension. Effectivizing our gale characterization of packing dimension produces a variety of effective strong dimensions, which are exact duals of the effective dimensions mentioned above. In general (and in analogy with the classical fractal dimensions), the effective strong dimension of a set or sequence is at least as great as its effective dimension, with equality for sets or sequences that are sufficiently regular. We develop the basic properties of effective strong dimensions and prove a number of results relating them to fundamental aspects of randomness, Kolmogorov complexity, prediction, Boolean circuit-size complexity, polynomial-time degrees, and data compression. Aside from the above characterization of packing dimension, our two main theorems are the following. 1. If $\vec{\beta} = (\beta_0,\beta_1,\ldots)$ is a computable sequence of biases that are bounded away from 0 and $R$ is random with respect to $\vec{\beta}$, then the dimension and strong dimension of $R$ are the lower and upper average entropies, respectively, of $\vec{\beta}$. 2. For each pair of $\Delta^0_2$-computable real numbers $0 < \alpha \le \beta \le 1$, there exists $A \in {\rm E}$ such that the polynomial-time many-one degree of $A$ has dimension $\alpha$ in E and strong dimension $\beta$ in E. Our proofs of these theorems use a new large deviation theorem for self-information with respect to a bias sequence $\vec{\beta}$ that need not be convergent.
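A worked numerical example for the first of the two main theorems above (the example is ours): for a bias sequence alternating between $0.1$ and $0.3$ over blocks of doubling length, the lower and upper average entropies — hence the dimension and strong dimension of a $\vec{\beta}$-random sequence — genuinely differ.

```python
# Lower/upper average entropy of a bias sequence beta: liminf/limsup of
# (1/n) * sum of H(beta_i). With block-wise alternating biases the running
# average oscillates, so the two limits (hence dim and Dim) are distinct.
import math

def H(p):  # binary Shannon entropy
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def average_entropies(bias, n):
    avgs, total = [], 0.0
    for i in range(n):
        total += H(bias(i))
        avgs.append(total / (i + 1))
    return min(avgs[n // 2:]), max(avgs[n // 2:])   # crude liminf/limsup proxies

def beta(i):  # blocks of doubling length: bias 0.1, then 0.3, then 0.1, ...
    return 0.1 if int(math.log2(i + 1)) % 2 == 0 else 0.3

lo, hi = average_entropies(beta, 2 ** 16)
print(f"lower average entropy ~ {lo:.3f}, upper ~ {hi:.3f}")
print(f"H(0.1) = {H(0.1):.3f}, H(0.3) = {H(0.3):.3f}")
```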

Journal ArticleDOI
TL;DR: It is shown that any $L_1$ embedding of the transportation cost (a.k.a. Earthmover) metric on probability measures supported on the grid incurs distortion $\Omega(\sqrt{\log n})$.
Abstract: We show that any $L_1$ embedding of the transportation cost (a.k.a. Earthmover) metric on probability measures supported on the grid $\{0,1,\ldots,n\}^2 \subseteq \mathbb{R}^2$ incurs distortion $\Omega \left(\sqrt{\log n}\right)$. We also use Fourier analytic techniques to construct a simple $L_1$ embedding of this space which has distortion $O(\log n)$.
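To make the metric concrete, here is the transportation cost between two small distributions on a grid, computed as a linear program with scipy (an illustration of the metric itself, not of the embedding results):

```python
# Earthmover (transportation cost) distance as a transportation LP:
# minimize sum of flow[i, j] * cost[i, j] subject to row sums = mu and
# column sums = nu, with the ell_1 ground metric on grid points.
import numpy as np
from scipy.optimize import linprog

def emd(mu, nu, points):
    k = len(points)
    cost = np.array([[np.linalg.norm(np.subtract(p, q), 1) for q in points]
                     for p in points]).ravel()
    A_eq, b_eq = [], []
    for i in range(k):                       # row marginals (mass leaving point i)
        row = np.zeros(k * k); row[i * k:(i + 1) * k] = 1
        A_eq.append(row); b_eq.append(mu[i])
    for j in range(k):                       # column marginals (mass arriving at j)
        col = np.zeros(k * k); col[j::k] = 1
        A_eq.append(col); b_eq.append(nu[j])
    res = linprog(cost, A_eq=np.array(A_eq), b_eq=np.array(b_eq))
    return res.fun

pts = [(x, y) for x in range(3) for y in range(3)]
mu = np.zeros(9); mu[0] = 1.0                # point mass at (0, 0)
nu = np.zeros(9); nu[8] = 1.0                # point mass at (2, 2)
print(emd(mu, nu, pts))                      # expect 4.0 in the ell_1 ground metric
```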

Journal ArticleDOI
TL;DR: The best currently known upper bounds for factoring integers deterministically and for computing the Cartier-Manin operator of hyperelliptic curves are improved.
Abstract: We study the complexity of computing one or several terms (not necessarily consecutive) in a recurrence with polynomial coefficients. As applications, we improve the best currently known upper bounds for factoring integers deterministically and for computing the Cartier-Manin operator of hyperelliptic curves.
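A naive reference computation (not the paper's fast algorithm) showing the objects in play: a recurrence with polynomial coefficients is driven by a matrix $M(n)$ with polynomial entries, and the $N$th term is read off the product $M(N-1)\cdots M(0)$ applied to the initial values. Here $u_{n+1} = (n+1)u_n$, so $u_N = N!$ modulo $p$ — the kind of product whose fast evaluation underlies the deterministic factoring application.

```python
# Naive O(N) evaluation of a P-recursive sequence via its companion matrices;
# the paper's methods evaluate such matrix products in roughly sqrt(N) steps.
def nth_term(M, u0, N, p):
    """M(n): recurrence matrix mod p; returns u_N = M(N-1) ... M(0) u0 mod p."""
    u = list(u0)
    for n in range(N):
        Mn = M(n)
        u = [sum(Mn[i][j] * u[j] for j in range(len(u))) % p
             for i in range(len(u))]
    return u

p = 10**9 + 7
factorial_matrix = lambda n: [[(n + 1) % p]]   # 1x1 case: u_{n+1} = (n+1) u_n
print(nth_term(factorial_matrix, [1], 20, p))  # [20! mod p]
```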

Journal ArticleDOI
TL;DR: This is the first algorithm that can learn arbitrary monotone Boolean functions to high accuracy, using random examples only, in time polynomial in a reasonable measure of the complexity of $f$, namely its decision tree size.
Abstract: We give an algorithm that learns any monotone Boolean function $f\colon \{0,1\}^n \rightarrow \{0,1\}$ to any constant accuracy, under the uniform distribution, in time polynomial in $n$ and in the decision tree size of $f$. This is the first algorithm that can learn arbitrary monotone Boolean functions to high accuracy, using random examples only, in time polynomial in a reasonable measure of the complexity of $f$. A key ingredient of the result is a new bound showing that the average sensitivity of any monotone function computed by a decision tree of size $s$ must be at most $\sqrt{\log s}$. This bound has proved to be of independent utility in the study of decision tree complexity [O. Schramm, R. O'Donnell, M. Saks, and R. Servedio, Every decision tree has an influential variable, in Proceedings of the 46th Annual IEEE Symposium on Foundations of Computer Science, IEEE Computer Society, Los Alamitos, CA, 2005, pp. 31-39]. We generalize the basic inequality and learning result described above in various ways—specifically, to partition size (a stronger complexity measure than decision tree size), $p$-biased measures over the Boolean cube (rather than just the uniform distribution), and real-valued (rather than just Boolean-valued) functions.
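A brute-force check of the quantities in the key inequality, on a small example of our own choosing: the average sensitivity of $\mathrm{MAJ}_5$ against $\sqrt{\log_2 s}$ using the trivial decision-tree-size bound $s = 2^n$.

```python
# Average sensitivity: the expected number of coordinates whose flip changes
# f(x), for x uniform over {0,1}^n. For MAJ_5 this is 5 * C(4,2)/16 = 1.875,
# comfortably below sqrt(log2(2^5)) = sqrt(5) ~ 2.236.
import itertools, math

def avg_sensitivity(f, n):
    total = 0
    for x in itertools.product([0, 1], repeat=n):
        fx = f(x)
        for i in range(n):
            y = list(x); y[i] ^= 1
            total += fx != f(tuple(y))
    return total / 2 ** n

n = 5
maj = lambda x: int(sum(x) > n // 2)
print(f"avg sensitivity of MAJ_5 = {avg_sensitivity(maj, n):.3f}")
print(f"sqrt(log2 of trivial tree size 2^{n}) = {math.sqrt(n):.3f}")
```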

Journal ArticleDOI
TL;DR: In this paper, the Fourier transform was used to prove lower bounds on the bounded error quantum communication complexity of functions, for which a polynomial quantum speedup is possible.
Abstract: We prove lower bounds on the bounded error quantum communication complexity. Our methods are based on the Fourier transform of the considered functions. First we generalize a method for proving classical communication complexity lower bounds developed by Raz [Comput. Complexity, 5 (1995), pp. 205-221] to the quantum case. Applying this method, we give an exponential separation between bounded error quantum communication complexity and nondeterministic quantum communication complexity. We develop several other lower bound methods based on the Fourier transform, notably showing that $\sqrt{\bar{s}(f)/\log n}$, for the average sensitivity $\bar{s}(f)$ of a function $f$, yields a lower bound on the bounded error quantum communication complexity of $f((x \wedge y)\oplus z)$, where $x$ is a Boolean word held by Alice and $y,z$ are Boolean words held by Bob. We then prove the first large lower bounds on the bounded error quantum communication complexity of functions, for which a polynomial quantum speedup is possible. For all the functions we investigate, the only previously applied general lower bound method based on discrepancy yields bounds that are $O(\log n)$.

Journal ArticleDOI
TL;DR: This work presents a zap for every language in NP, based on the existence of noninteractive zero-knowledge proofs in the shared random string model, and characterizes the existence of zaps in terms of a primitive called verifiable pseudorandom bit generators.
Abstract: A zap is a 2-round, public coin witness-indistinguishable protocol in which the first round, consisting of a message from the verifier to the prover, can be fixed “once and for all” and applied to any instance. We present a zap for every language in NP, based on the existence of noninteractive zero-knowledge proofs in the shared random string model. The zap is in the standard model and hence requires no common guaranteed random string. We present several applications for zaps, including 3-round concurrent zero-knowledge and 2-round concurrent deniable authentication, in the timing model of Dwork, Naor, and Sahai [J. ACM, 51 (2004), pp. 851-898], using moderately hard functions. We also characterize the existence of zaps in terms of a primitive called verifiable pseudorandom bit generators.

Journal ArticleDOI
TL;DR: In this article, a natural protocol for the agents is discussed that combines the following desirable features: it can be implemented in a strongly distributed setting, uses no central control, and has good convergence properties.
Abstract: Suppose that a set of $m$ tasks are to be shared as equally as possible among a set of $n$ resources. A game-theoretic mechanism to find a suitable allocation is to associate each task with a “selfish agent” and require each agent to select a resource, with the cost of a resource being the number of agents that select it. Agents would then be expected to migrate from overloaded to underloaded resources, until the allocation becomes balanced. Recent work has studied the question of how this can take place within a distributed setting in which agents migrate selfishly without any centralized control. In this paper we discuss a natural protocol for the agents which combines the following desirable features: It can be implemented in a strongly distributed setting, uses no central control, and has good convergence properties. For $m \gg n$, the system becomes approximately balanced (an $\epsilon$-Nash equilibrium) in expected time $O(\log \log m)$. We show using a martingale technique that the process converges to a perfectly balanced allocation in expected time $O(\log \log m + n^4)$. We also give a lower bound of $\Omega(\max\{\log \log m, n\})$ for the convergence time.
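A simulation in the spirit of the protocol (the migration rule below is a simplified variant of our own choosing, not necessarily the paper's exact rule): each round, every agent samples one resource uniformly at random and migrates there with probability $1 - \ell_{\mathrm{dst}}/\ell_{\mathrm{src}}$ when the destination is less loaded. The imbalance collapses within a few rounds, illustrating the doubly logarithmic convergence regime.

```python
# Selfish load balancing: m agents on n resources; cost of a resource is its
# load. Each round every agent samples a random resource and migrates with a
# probability proportional to the relative load gap.
import random

def simulate(m=100_000, n=50, rounds=12, rng=random.Random(1)):
    assign = [rng.randrange(n) for _ in range(m)]
    load = [0] * n
    for r in assign:
        load[r] += 1
    for t in range(rounds):
        for agent in range(m):
            src, dst = assign[agent], rng.randrange(n)
            if load[dst] < load[src] and rng.random() < 1 - load[dst] / load[src]:
                load[src] -= 1; load[dst] += 1; assign[agent] = dst
        print(f"round {t + 1}: max - min load = {max(load) - min(load)}")

simulate()
```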

Journal ArticleDOI
TL;DR: It is proved that an $\omega(\log^4 n)$ lower bound for the three-party number-on-the-forehead (NOF) communication complexity of the set-disjointness function implies an $n^{\omega(1)}$ size lower bound for treelike Lovasz-Schrijver systems that refute unsatisfiable formulas in conjunctive normal form (CNFs).
Abstract: We prove that an $\omega(\log^4 n)$ lower bound for the three-party number-on-the-forehead (NOF) communication complexity of the set-disjointness function implies an $n^{\omega(1)}$ size lower bound for treelike Lovasz-Schrijver systems that refute unsatisfiable formulas in conjunctive normal form (CNFs). More generally, we prove that an $n^{\Omega(1)}$ lower bound for the $(k+1)$-party NOF communication complexity of set disjointness implies a $2^{n^{\Omega(1)}}$ size lower bound for all treelike proof systems whose formulas are degree $k$ polynomial inequalities.

Journal ArticleDOI
TL;DR: An $O(\lg \lg n)$-competitive online binary search tree is presented, the first major progress on Sleator and Tarjan's dynamic optimality conjecture of 1985 that $O(1)$-competitive binary search trees exist.
Abstract: We present an $O(\lg \lg n)$-competitive online binary search tree, improving upon the best previous (trivial) competitive ratio of $O(\lg n)$. This is the first major progress on Sleator and Tarjan’s dynamic optimality conjecture of 1985 that $O(1)$-competitive binary search trees exist.

Journal ArticleDOI
TL;DR: This paper gives the first construction of an NP proof system achieving a secrecy property; the commitment scheme is obtained by derandomizing the interactive commitment scheme of Naor.
Abstract: We give two applications of Nisan-Wigderson-type (NW-type) (“noncryptographic”) pseudorandom generators in cryptography. Specifically, assuming the existence of an appropriate NW-type generator, we construct the following two protocols: (1) a one-message witness-indistinguishable proof system for every language in NP, based on any trapdoor permutation. This proof system does not assume a shared random string or any setup assumption, so it is actually an “NP proof system.” (2) a noninteractive bit-commitment scheme based on any one-way function. The specific NW-type generator we need is a hitting set generator fooling nondeterministic circuits. It is known how to construct such a generator if $E = DTIME(2^{O(n)})$ has a function of nondeterministic circuit complexity $2^{\Omega(n)}$. Our witness-indistinguishable proofs are obtained by using the NW-type generator to derandomize the ZAPs of Dwork and Naor [Proceedings of the 41st Annual ACM Symposium on Foundations of Computer Science, 2000, pp. 283-293]. To our knowledge, this is the first construction of an NP proof system achieving a secrecy property. Our commitment scheme is obtained by derandomizing the interactive commitment scheme of Naor [J. Cryptology, 4 (1991), pp. 151-158]. Previous constructions of noninteractive commitment schemes were known only under incomparable assumptions.

Journal ArticleDOI
TL;DR: A new approach for scheduling a set of independent malleable tasks is presented which leads to a worst case guarantee of $\frac{3}{2}+\varepsilon$ for the minimization of the parallel execution time for any fixed $\varepsilon > 0$.
Abstract: A malleable task is a computational unit that may be executed on any arbitrary number of processors, whose execution time depends on the amount of resources allotted to it. This paper presents a new approach for scheduling a set of independent malleable tasks which leads to a worst case guarantee of $\frac{3}{2}+\varepsilon$ for the minimization of the parallel execution time for any fixed $\varepsilon > 0$. The main idea of this approach is to focus on the determination of a good allotment and then to solve the resulting problem with a fixed number of processors by a simple scheduling algorithm. The first phase is based on a dual approximation technique where the allotment problem is expressed as a knapsack problem for partitioning the set of tasks into two shelves of respective heights $1$ and $\frac{1}{2}$.

Journal ArticleDOI
TL;DR: An algorithm is presented for sampling and triangulating a generic $C^2$-smooth surface $\Sigma\subset \mathbb{R}^3$ that is input with an implicit equation; the output triangulation is guaranteed to be homeomorphic to $\Sigma$.
Abstract: This paper presents an algorithm for sampling and triangulating a generic $C^2$-smooth surface $\Sigma\subset \mathbb{R}^3$ that is input with an implicit equation. The output triangulation is guaranteed to be homeomorphic to $\Sigma$. We also prove that the triangulation has well-shaped triangles, large dihedral angles, and a small size. The only assumption we make is that the input surface representation is amenable to certain types of computations, namely, computations of the intersection points of a line and $\Sigma$, computations of the critical points in a given direction, and computations of certain silhouette points.
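One of the primitive computations the algorithm assumes — intersecting a line with $\Sigma$ — can be sketched generically for an implicit surface $f = 0$ by sign-change sampling plus bisection (our illustration, here on the unit sphere):

```python
# Intersect the line p + t*d with the implicit surface f = 0: sample f along
# the line, detect sign changes, and refine each bracketed root by bisection.
import numpy as np

def line_surface_intersections(f, p, d, t_range=(-10, 10), samples=2000, iters=60):
    p, d = np.asarray(p, float), np.asarray(d, float)
    ts = np.linspace(t_range[0], t_range[1], samples)
    vals = np.array([f(p + t * d) for t in ts])
    roots = []
    for i in np.nonzero(np.sign(vals[:-1]) != np.sign(vals[1:]))[0]:
        lo, hi = ts[i], ts[i + 1]
        for _ in range(iters):                 # bisection refinement
            mid = 0.5 * (lo + hi)
            if np.sign(f(p + lo * d)) != np.sign(f(p + mid * d)):
                hi = mid
            else:
                lo = mid
        roots.append(0.5 * (lo + hi))
    return [p + t * d for t in roots]

sphere = lambda q: q @ q - 1.0                 # unit sphere x^2 + y^2 + z^2 = 1
for q in line_surface_intersections(sphere, [0, 0, 0], [1, 0, 0]):
    print(np.round(q, 6))                      # (-1, 0, 0) and (1, 0, 0)
```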

Journal ArticleDOI
TL;DR: A framework for extending Szemeredi's regularity lemma is developed as a prerequisite for formulating what kind of information about the input graph will provide us with the correct estimation, and as the means for efficiently gathering this information.
Abstract: Tolerant testing is an emerging topic in the field of property testing, which was defined in [M. Parnas, D. Ron, and R. Rubinfeld, J. Comput. System Sci., 72 (2006), pp. 1012-1042] and has recently become a very active topic of research. In the general setting, there exist properties that are testable but are not tolerantly testable [E. Fischer and L. Fortnow, Proceedings of the $20$th IEEE Conference on Computational Complexity, 2005, pp. 135-140]. On the other hand, we show here that in the setting of the dense graph model, all testable properties are not only tolerantly testable (which was already implicitly proved in [N. Alon, E. Fischer, M. Krivelevich, and M. Szegedy, Combinatorica, 20 (2000), pp. 451-476] and [O. Goldreich and L. Trevisan, Random Structures Algorithms, 23 (2003), pp. 23-57]), but also admit a constant query size algorithm that estimates the distance from the property up to any fixed additive constant. In the course of the proof we develop a framework for extending Szemeredi's regularity lemma, both as a prerequisite for formulating what kind of information about the input graph will provide us with the correct estimation, and as the means for efficiently gathering this information. In particular, we construct a probabilistic algorithm that finds the parameters of a regular partition of an input graph using a constant number of queries, and an algorithm to find a regular partition of a graph using a $\mathrm{TC}_0$ circuit. This, in some ways, strengthens the results of [N. Alon, R. A. Duke, H. Lefmann, V. Rodl, and R. Yuster, J. Algorithms, 16 (1994), pp. 80-109].

Journal ArticleDOI
TL;DR: This paper establishes that the simulation preorder is, in fact, the coarsest refinement of the trace distribution preorder that is compositional for PAs.
Abstract: Probabilistic automata (PAs) constitute a general framework for modeling and analyzing discrete event systems that exhibit both nondeterministic and probabilistic behavior, such as distributed algorithms and network protocols. The behavior of PAs is commonly defined using schedulers (also called adversaries or strategies), which resolve all nondeterministic choices based on past history. From the resulting purely probabilistic structures, trace distributions can be extracted, whose intent is to capture the observable behavior of a PA. However, when PAs are composed via an (asynchronous) parallel composition operator, a global scheduler may establish strong correlations between the behavior of system components and, for example, resolve nondeterministic choices in one PA based on the outcome of probabilistic choices in the other. It is well known that, as a result of this, the (linear-time) trace distribution precongruence is not compositional for PAs. In his 1995 Ph.D. thesis, Segala has shown that the (branching-time) probabilistic simulation preorder is compositional for PAs. In this paper, we establish that the simulation preorder is, in fact, the coarsest refinement of the trace distribution preorder that is compositional. We prove our characterization result by providing (1) a context of a given PA ${\cal A}$, called the tester, which may announce the state of ${\cal A}$ to the outside world, and (2) a specific global scheduler, called the observer, which ensures that the state information that is announced is actually correct. Now when another PA ${\cal B}$ is composed with the tester, it may generate the same external behavior as the observer only when it is able to simulate ${\cal A}$ in the sense that whenever ${\cal A}$ goes to some state $s$, ${\cal B}$ can go to a corresponding state $u$, from which it may generate the same external behavior. Our result shows that probabilistic contexts together with global schedulers are able to exhibit the branching structure of PAs.

Journal ArticleDOI
TL;DR: It is shown that any property of bipartite graphs that is characterized by a finite collection of forbidden induced subgraphs is $\epsilon$-testable, with a number of queries that is polynomial in $1/\epsilon$.
Abstract: Alon et al. [N. Alon, E. Fischer, M. Krivelevich, and M. Szegedy, Combinatorica, 20 (2000), pp. 451-476] showed that every property that is characterized by a finite collection of forbidden induced subgraphs is $\epsilon$-testable. However, the complexity of the test is double-tower with respect to $1/\epsilon$, as the only tool known to construct such tests uses a variant of Szemeredi's regularity lemma. Here we show that any property of bipartite graphs that is characterized by a finite collection of forbidden induced subgraphs is $\epsilon$-testable, with a number of queries that is polynomial in $1/\epsilon$. Our main tool is a new “conditional” version of the regularity lemma for binary matrices, which may be interesting on its own.

Journal ArticleDOI
TL;DR: The ASG paradigm is defined and demonstrated by using it to turn a variety of (classical) approximate counting algorithms into efficient quantum state generators of nontrivial quantum states, including, for example, the uniform superposition over all perfect matchings in a bipartite graph.
Abstract: The design of new quantum algorithms has proven to be an extremely difficult task. This paper considers a different approach to this task by studying the problem of quantum state generation. We motivate this problem by showing that the entire class of statistical zero knowledge, which contains natural candidates for efficient quantum algorithms such as graph isomorphism and lattice problems, can be reduced to the problem of quantum state generation. To study quantum state generation, we define a paradigm which we call adiabatic state generation (ASG) and which is based on adiabatic quantum computation. The ASG paradigm is not meant to replace the standard quantum circuit model or to improve on it in terms of computational complexity. Rather, our goal is to provide a natural theoretical framework, in which quantum state generation algorithms could be designed. The new paradigm seems interesting due to its intriguing links to a variety of different areas: the analysis of spectral gaps and ground-states of Hamiltonians in physics, rapidly mixing Markov chains, adiabatic computation, and approximate counting. To initiate the study of ASG, we prove several general lemmas that can serve as tools when using this paradigm. We demonstrate the application of the paradigm by using it to turn a variety of (classical) approximate counting algorithms into efficient quantum state generators of nontrivial quantum states, including, for example, the uniform superposition over all perfect matchings in a bipartite graph.

Journal ArticleDOI
TL;DR: This work considers a class of optimization problems where the input is an undirected graph with two weight functions defined for each node, namely the node's profit and its cost, and presents approximation algorithms for three natural optimization criteria that arise in this context.
Abstract: We consider a class of optimization problems where the input is an undirected graph with two weight functions defined for each node, namely the node's profit and its cost. The goal is to find a connected set of nodes of low cost and high profit. We present approximation algorithms for three natural optimization criteria that arise in this context, all of which are NP-hard. The budget problem asks for maximizing the profit of the set subject to a budget constraint on its cost. The quota problem requires minimizing the cost of the set subject to a quota constraint on its profit. Finally, the prize collecting problem calls for minimizing the cost of the set plus the profit (here interpreted as a penalty) of the complement set. For all three problems, our algorithms give an approximation guarantee of $O(\log n)$, where $n$ is the number of nodes. To the best of our knowledge, these are the first approximation results for the quota problem and for the prize collecting problem, both of which are at least as hard to approximate as set cover. For the budget problem, our results improve on a previous $O(\log^2 n)$ result of Guha et al. Our methods involve new theorems relating tree packings to (node) cut conditions. We also show similar theorems (with better bounds) using edge cut conditions. These imply bounds for the analogous budget and quota problems with edge costs which are comparable to known (constant factor) bounds.

Journal ArticleDOI
TL;DR: This work gives the first constant-factor approximation algorithm for a nontrivial instance of the optimal guarding (coverage) problem in polygons: an $O(1)$-approximation algorithm for placing the fewest point guards on a 1.5D terrain.
Abstract: We present the first constant-factor approximation algorithm for a nontrivial instance of the optimal guarding (coverage) problem in polygons. In particular, we give an $O(1)$-approximation algorithm for placing the fewest point guards on a 1.5D terrain, so that every point of the terrain is seen by at least one guard. While polylogarithmic-factor approximations follow from set cover results, our new results exploit the geometric structure of terrains to obtain a substantially improved approximation algorithm.
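The visibility predicate underlying 1.5D terrain guarding, made explicit (a straightforward sketch under the standard definition): a guard $g$ sees a point $p$ iff the segment $gp$ never passes strictly below the x-monotone terrain chain. Since both the terrain and the sight line are piecewise linear, it suffices to check the terrain vertices between the two endpoints.

```python
# 1.5D terrain visibility: the terrain is an x-monotone polygonal chain, and
# g sees p iff the straight segment between them stays on or above the chain.
def sees(terrain, g, p):
    """terrain: list of (x, y) with strictly increasing x; g, p on/above it."""
    (x1, y1), (x2, y2) = sorted([g, p])
    if x1 == x2:
        return True
    for ax, ay in terrain:
        if x1 <= ax <= x2:
            # height of the sight line at ax must not drop below the terrain
            y_line = y1 + (y2 - y1) * (ax - x1) / (x2 - x1)
            if y_line < ay - 1e-12:
                return False
    return True

terrain = [(0, 0), (1, 2), (2, 0), (3, 2), (4, 0)]
print(sees(terrain, (1, 2), (3, 2)))   # True: the peaks see each other over the valley
print(sees(terrain, (0, 0), (2, 0)))   # False: the peak at (1, 2) blocks the view
```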