
Showing papers in "SIAM Journal on Computing in 2010"


Journal ArticleDOI
TL;DR: It is shown that the (exact or approximate) computation of Nash equilibria for 3 or more players is complete for FIXP, a class capturing search problems that can be cast as fixed point computation problems for functions represented by algebraic circuits (straight-line programs) over the basis $\{+,*,-,/,\max,\min\}$ with rational constants.
Abstract: We reexamine what it means to compute Nash equilibria and, more generally, what it means to compute a fixed point of a given Brouwer function, and we investigate the complexity of the associated problems. Specifically, we study the complexity of the following problem: given a finite game, $\Gamma$, with 3 or more players, and given $\epsilon>0$, compute an approximation within $\epsilon$ of some (actual) Nash equilibrium. We show that approximation of an actual Nash equilibrium, even to within any nontrivial constant additive factor $\epsilon<1/2$ in just one desired coordinate, is at least as hard as the long-standing square-root sum problem, as well as a more general arithmetic circuit decision problem that characterizes P-time in a unit-cost model of computation with arbitrary precision rational arithmetic; thus, placing the approximation problem in P, or even NP, would resolve major open problems in the complexity of numerical computation. We show similar results for market equilibria: it is hard to estimate with any nontrivial accuracy the equilibrium prices in an exchange economy with a unique equilibrium, where the economy is given by explicit algebraic formulas for the excess demand functions. We define a class, FIXP, which captures search problems that can be cast as fixed point computation problems for functions represented by algebraic circuits (straight line programs) over basis $\{+,*,-,/,\max,\min\}$ with rational constants. We show that the (exact or approximate) computation of Nash equilibria for 3 or more players is complete for FIXP. The price equilibrium problem for exchange economies with algebraic demand functions is another FIXP-complete problem. We show that the piecewise linear fragment of FIXP equals PPAD. Many other problems in game theory, economics, and probability theory can be cast as fixed point problems for such algebraic functions. 
We discuss several important such problems: computing the value of Shapley's stochastic games and the simpler games of Condon, extinction probabilities of branching processes, probabilities of stochastic context-free grammars, and termination probabilities of recursive Markov chains. We show that for some of them, the approximation, or even exact computation, problem can be placed in PPAD, while for others, they are at least as hard as the square-root sum and arithmetic circuit decision problems.
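A minimal sketch of the kind of object FIXP is defined over: a function given as a straight-line program over the basis $\{+,*,-,/,\max,\min\}$, paired here with naive fixed-point iteration on a hypothetical one-dimensional contraction. The instruction encoding and the example function are illustrative inventions, not the paper's construction, and naive iteration does not converge for general Brouwer functions.

```python
# Illustrative sketch (not the paper's method): evaluate a straight-line
# program over the basis {+, *, -, /, max, min} and iterate it naively.
# Constants are modeled as extra inputs alongside the variable x.

def eval_slp(instructions, inputs):
    """Evaluate a straight-line program; each instruction is (op, i, j)."""
    vals = list(inputs)
    ops = {
        "+": lambda a, b: a + b,
        "*": lambda a, b: a * b,
        "-": lambda a, b: a - b,
        "/": lambda a, b: a / b,
        "max": max,
        "min": min,
    }
    for op, i, j in instructions:
        vals.append(ops[op](vals[i], vals[j]))
    return vals[-1]

def f(x):
    # f(x) = min(1, max(0, 0.5*x + 0.25)), a toy contraction on [0, 1]
    inputs = [x, 0.5, 0.25, 0.0, 1.0]        # vals[0..4]
    prog = [("*", 0, 1),                     # vals[5] = x * 0.5
            ("+", 5, 2),                     # vals[6] = vals[5] + 0.25
            ("max", 6, 3),                   # vals[7] = max(vals[6], 0)
            ("min", 7, 4)]                   # vals[8] = min(vals[7], 1)
    return eval_slp(prog, inputs)

x = 0.0
for _ in range(60):
    x = f(x)
# x is now within 1e-9 of the unique fixed point 0.5
```

The error halves at each step for this toy contraction; FIXP is about the complexity of such problems in general, where no comparably simple iteration suffices.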

250 citations


Journal ArticleDOI
TL;DR: It is proved that if $\Gamma$ is any constraint language which has a $k$-edge operation as a polymorphism, then the constraint satisfaction problem for $\langle\Gamma\rangle$ (the closure of $\Gamma$ under $\exists\wedge$-atomic expressibility) is globally tractable.
Abstract: A constraint language $\Gamma$ on a finite set $A$ has been called polynomially expressive if the number of $n$-ary relations expressible by $\exists\wedge$-atomic formulas over $\Gamma$ is bounded by $\exp(O(n^k))$ for some constant $k$. It has recently been discovered that this property is characterized by the existence of a $(k+1)$-ary polymorphism satisfying certain identities; such polymorphisms are called $k$-edge operations and include Mal'cev and near-unanimity operations as special cases. We prove that if $\Gamma$ is any constraint language which, for some $k>1$, has a $k$-edge operation as a polymorphism, then the constraint satisfaction problem for $\langle\Gamma\rangle$ (the closure of $\Gamma$ under $\exists\wedge$-atomic expressibility) is globally tractable. We also show that the set of relations definable over $\Gamma$ using quantified generalized formulas is polynomially exactly learnable using improper equivalence queries.

161 citations


Journal ArticleDOI
TL;DR: In this article, the authors obtain a 1.5-approximation algorithm for the metric uncapacitated facility location (UFL) problem, improving on the previously best known 1.52-approximation algorithm.
Abstract: We obtain a 1.5-approximation algorithm for the metric uncapacitated facility location (UFL) problem, which improves on the previously best known 1.52-approximation algorithm by Mahdian, Ye, and Zhang. Note that the approximability lower bound by Guha and Khuller is $1.463\dots$. An algorithm is a ($\lambda_f$,$\lambda_c$)-approximation algorithm if the solution it produces has total cost at most $\lambda_f\cdot F^*+\lambda_c\cdot C^*$, where $F^*$ and $C^*$ are the facility and the connection cost of an optimal solution. Our new algorithm, which is a modification of the $(1+2/e)$-approximation algorithm of Chudak and Shmoys, is a $(1.6774,1.3738)$-approximation algorithm for the UFL problem and is the first one that touches the approximability limit curve $(\gamma_f,1+2e^{-\gamma_f})$ established by Jain, Mahdian, and Saberi. As a consequence, we obtain the first optimal approximation algorithm for instances dominated by connection costs. When combined with a $(1.11,1.7764)$-approximation algorithm proposed by Jain et al., and later analyzed by Mahdian et al., we obtain the overall approximation guarantee of 1.5 for the metric UFL problem. We also describe how to use our algorithm to improve the approximation ratio for the 3-level version of UFL.
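The way the two bifactor guarantees combine into the overall 1.5 ratio can be checked with the standard randomized-combination argument: run one algorithm with probability $p$ and the other otherwise, picking $p$ so both expected coefficients meet. The arithmetic below is an illustrative back-of-the-envelope using the two factor pairs quoted in the abstract, not the paper's analysis.

```python
# Combine a (1.6774, 1.3738)-approximation with a (1.11, 1.7764)-approximation
# by running the first with probability p; the expected cost is then at most
# (p*lf1 + (1-p)*lf2) * F* + (p*lc1 + (1-p)*lc2) * C*.

lf1, lc1 = 1.6774, 1.3738   # facility / connection factors, first algorithm
lf2, lc2 = 1.11, 1.7764     # factors of the Jain et al. algorithm

# Choose p so both coefficients are equal:
#   p*lf1 + (1-p)*lf2 = p*lc1 + (1-p)*lc2
p = (lc2 - lf2) / ((lf1 - lf2) - (lc1 - lc2))

guarantee = p * lf1 + (1 - p) * lf2   # equals p * lc1 + (1 - p) * lc2
# guarantee ≈ 1.4998, i.e. the overall 1.5 ratio quoted above
```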

149 citations


Journal ArticleDOI
TL;DR: A bounded-error quantum algorithm is presented that solves the problem of evaluating an AND-OR formula on an $N$-bit black-box input in time $N^{1/2+o(1)}$; in particular, approximately balanced formulas can be evaluated in $O(\sqrt{N})$ queries, which is optimal.
Abstract: Consider the problem of evaluating an AND-OR formula on an $N$-bit black-box input. We present a bounded-error quantum algorithm that solves this problem in time $N^{1/2+o(1)}$. In particular, approximately balanced formulas can be evaluated in $O(\sqrt{N})$ queries, which is optimal. The idea of the algorithm is to apply phase estimation to a discrete-time quantum walk on a weighted tree whose spectrum encodes the value of the formula.
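For contrast with the quantum bound, the classical randomized baseline evaluates an AND-OR tree recursively, probing a random child first and short-circuiting once the value is determined. The sketch below is this classical baseline on a hypothetical toy tree, not the paper's quantum-walk algorithm.

```python
import random

# Classical randomized AND-OR evaluation with short-circuiting:
# an AND returns 0 as soon as any child is 0; an OR returns 1 as soon
# as any child is 1. Random child order is what gives the classical
# speedup on balanced trees.

def evaluate(node, probes):
    """node = ('AND'|'OR', [children]) or ('LEAF', bit); counts leaf probes."""
    kind, payload = node
    if kind == "LEAF":
        probes[0] += 1
        return payload
    short = 0 if kind == "AND" else 1
    children = payload[:]
    random.shuffle(children)            # randomize evaluation order
    for child in children:
        if evaluate(child, probes) == short:
            return short                # short-circuit
    return 1 - short

tree = ("AND", [("OR", [("LEAF", 0), ("LEAF", 1)]),
                ("OR", [("LEAF", 1), ("LEAF", 1)])])
probes = [0]
value = evaluate(tree, probes)   # value == 1; probes[0] is 2 or 3 here
```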

145 citations


Journal ArticleDOI
TL;DR: This framework greatly extends the previously considered case of small-integer-weighted graphs, and incidentally also yields the first truly subcubic result for APSP in real-vertex-weighted graphs.
Abstract: In the first part of the paper, we reexamine the all-pairs shortest path (APSP) problem and present a new algorithm with running time $O(n^3\log^3\log n/\log^2n)$, which improves all known algorithms for general real-weighted dense graphs. In the second part of the paper, we use fast matrix multiplication to obtain truly subcubic APSP algorithms for a large class of “geometrically weighted” graphs, where the weight of an edge is a function of the coordinates of its vertices. For example, for graphs embedded in Euclidean space of a constant dimension $d$, we obtain a time bound near $O(n^{3-(3-\omega)/(2d+4)})$, where $\omega<2.376$; in two dimensions, this is $O(n^{2.922})$. Our framework greatly extends the previously considered case of small-integer-weighted graphs, and incidentally also yields the first truly subcubic result (near $O(n^{3-(3-\omega)/4})=O(n^{2.844})$ time) for APSP in real-vertex-weighted graphs, as well as an improved result (near $O(n^{(3+\omega)/2})=O(n^{2.688})$ time) for the all-pairs lightest shortest path problem for small-integer-weighted graphs.
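The exponents quoted above follow directly from $\omega<2.376$; the snippet below is an illustrative back-of-the-envelope check of that arithmetic, not part of the paper's analysis.

```python
# Reproducing the stated exponents from omega < 2.376.
omega = 2.376

geo_2d = 3 - (3 - omega) / (2 * 2 + 4)   # geometric weights, dimension d = 2
vertex = 3 - (3 - omega) / 4             # real vertex weights
lightest = (3 + omega) / 2               # all-pairs lightest shortest paths

# geo_2d ≈ 2.922, vertex ≈ 2.844, lightest = 2.688, matching the bounds above
```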

138 citations


Journal ArticleDOI
TL;DR: Improved approximation factors for the hitting set or the set cover problems associated with the corresponding range spaces are obtained by plugging the bounds into the technique of Bronnimann and Goodrich or of Even, Rawitz, and Shahar.
Abstract: We show the existence of $\varepsilon$-nets of size $O\left(\frac{1}{\varepsilon}\log\log\frac{1}{\varepsilon}\right)$ for planar point sets and axis-parallel rectangular ranges. The same bound holds for points in the plane and “fat” triangular ranges and for point sets in $\boldsymbol{R}^3$ and axis-parallel boxes; these are the first known nontrivial bounds for these range spaces. Our technique also yields improved bounds on the size of $\varepsilon$-nets in the more general context considered by Clarkson and Varadarajan. For example, we show the existence of $\varepsilon$-nets of size $O\left(\frac{1}{\varepsilon}\log\log\log\frac{1}{\varepsilon}\right)$ for the dual range space of “fat” regions and planar point sets (where the regions are the ground objects and the ranges are subsets stabbed by points). Plugging our bounds into the technique of Bronnimann and Goodrich or of Even, Rawitz, and Shahar, we obtain improved approximation factors (computable in expected polynomial time by a randomized algorithm) for the hitting set or the set cover problems associated with the corresponding range spaces.

134 citations


Journal ArticleDOI
TL;DR: This work uses a precise characterization of the cost-sharing protocols that induce only network games with pure-strategy Nash equilibria to prove, among other results, that the Shapley protocol is optimal in directed graphs and that simple priority protocols are essentially optimal in undirected graphs.
Abstract: Designing and deploying a network protocol determines the rules by which end users interact with each other and with the network. We consider the problem of designing a protocol to optimize the equilibrium behavior of a network with selfish users. We consider network cost-sharing games, where the set of Nash equilibria depends fundamentally on the choice of an edge cost-sharing protocol. Previous research focused on the Shapley protocol, in which the cost of each edge is shared equally among its users. We systematically study the design of optimal cost-sharing protocols for undirected and directed graphs, single-sink and multicommodity networks, and different measures of the inefficiency of equilibria. Our primary technical tool is a precise characterization of the cost-sharing protocols that induce only network games with pure-strategy Nash equilibria. We use this characterization to prove, among other results, that the Shapley protocol is optimal in directed graphs and that simple priority protocols are essentially optimal in undirected graphs.

120 citations


Journal ArticleDOI
TL;DR: The results demonstrate that “local” submodularity is preserved “globally” under this diffusion process, which is of natural computational interest, as many optimization problems have good approximation algorithms for submodular functions.
Abstract: Social networks are often represented as directed graphs, where the nodes are individuals and the edges indicate a form of social relationship. A simple way to model the diffusion of ideas, innovative behavior, or “word-of-mouth” effects on such a graph is to consider an increasing process of “infected” (or active) nodes: each node becomes infected once an activation function of the set of its infected neighbors crosses a certain threshold value. Such a model was introduced by Kempe, Kleinberg, and Tardos (KKT) in [Maximizing the spread of influence through a social network, in Proceedings of the 9th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2003, pp. 137-146] and [Influential nodes in a diffusion model for social networks, in Proceedings of the 32nd International Colloquium on Automata, Languages and Programming (ICALP), 2005], where the authors also impose several natural assumptions: the threshold values are random and the activation functions are monotone and submodular. The monotonicity condition indicates that a node is more likely to become active if more of its neighbors are active, while the submodularity condition indicates that the marginal effect of each neighbor is decreasing when the set of active neighbors increases. For an initial set of active nodes $S$, let $\sigma(S)$ denote the expected number of active nodes at termination. Here, we prove a conjecture of KKT: we show that the function $\sigma(S)$ is submodular under the assumptions above. We prove the same result for the expected value of any monotone, submodular function of the set of active nodes at termination. Roughly, our results demonstrate that “local” submodularity is preserved “globally” under this diffusion process. This is of natural computational interest, as many optimization problems have good approximation algorithms for submodular functions.
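The diffusion process itself is easy to state in code. The sketch below simulates the threshold model described above on a hypothetical toy graph: each node $v$ activates once an activation function of its active in-neighbors crosses its threshold $\theta_v$. The graph, thresholds, and the coverage-style activation function $\min(1, 0.4\cdot|\text{active in-neighbors}|)$ are all toy choices; the paper's theorem is about the expectation $\sigma(S)$ over random thresholds, which this single run does not capture.

```python
# Minimal simulation of the general threshold diffusion model: repeatedly
# scan inactive nodes and activate any whose activation function has
# crossed its threshold, until a fixed point is reached.

def diffuse(graph, seeds, theta):
    """graph: node -> list of in-neighbors; returns the final active set."""
    active = set(seeds)
    changed = True
    while changed:
        changed = False
        for v in graph:
            if v in active:
                continue
            # submodular coverage-style activation function (toy choice)
            f_v = min(1.0, 0.4 * sum(u in active for u in graph[v]))
            if f_v >= theta[v]:
                active.add(v)
                changed = True
    return active

graph = {1: [], 2: [1], 3: [1, 2], 4: [2, 3]}
theta = {1: 0.0, 2: 0.3, 3: 0.9, 4: 0.5}
final = diffuse(graph, seeds={1}, theta=theta)
# node 2 activates (0.4 >= 0.3); node 3's high threshold blocks it,
# which in turn keeps node 4 inactive, so final == {1, 2}
```

Because the process is monotone, the fixed point reached does not depend on the scan order.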

120 citations


Journal ArticleDOI
TL;DR: The main result is a dichotomy theorem stating that every partition function is either computable in polynomial time or #P-complete, and it is decidable in polynomial time, in terms of the describing matrix, which of the two cases holds.
Abstract: Partition functions, also known as homomorphism functions, form a rich family of graph invariants that contain combinatorial invariants such as the number of $k$-colorings or the number of independent sets of a graph and also the partition functions of certain “spin glass” models of statistical physics such as the Ising model. Building on earlier work by Dyer and Greenhill [Random Structures Algorithms, 17 (2000), pp. 260-289] and Bulatov and Grohe [Theoret. Comput. Sci., 348 (2005), pp. 148-186], we completely classify the computational complexity of partition functions. Our main result is a dichotomy theorem stating that every partition function is either computable in polynomial time or #P-complete. Partition functions are described by symmetric matrices with real entries, and we prove that it is decidable in polynomial time in terms of the matrix whether a given partition function is in polynomial time or #P-complete. While in general it is very complicated to give an explicit algebraic or combinatorial description of the tractable cases, for partition functions described by Hadamard matrices (these turn out to be central in our proofs) we obtain a simple algebraic tractability criterion, which says that the tractable cases are those “representable” by a quadratic polynomial over the field $\mathbb{F}_2$.
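The partition function described above can be written out by brute force on a tiny graph: sum over all maps $\sigma$ from the vertices to the rows of $A$ of the product of $A[\sigma(u)][\sigma(v)]$ over the edges. With $A=J-I$, this counts proper $k$-colorings, one of the combinatorial invariants mentioned. The brute force is exponential, of course; the dichotomy theorem is about when $Z_A$ is computable in polynomial time.

```python
from itertools import product

# Z_A(G) = sum over sigma: V -> [k] of the product of A[sigma(u)][sigma(v)]
# over all edges (u, v), computed by exhaustive enumeration.

def partition_function(A, vertices, edges):
    k = len(A)
    total = 0
    for sigma in product(range(k), repeat=len(vertices)):
        assign = dict(zip(vertices, sigma))
        w = 1
        for u, v in edges:
            w *= A[assign[u]][assign[v]]
        total += w
    return total

k = 3
A = [[0 if i == j else 1 for j in range(k)] for i in range(k)]  # A = J - I
triangle = (["a", "b", "c"], [("a", "b"), ("b", "c"), ("a", "c")])
colorings = partition_function(A, *triangle)   # 3! = 6 proper 3-colorings
```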

112 citations


Journal ArticleDOI
TL;DR: It is shown that any distribution on $\{-1,+1\}^n$ that is $k$-wise independent fools any halfspace (a.k.a. threshold function), yielding the first explicit pseudorandom generators for halfspaces.
Abstract: We show that any distribution on $\{-1,+1\}^n$ that is $k$-wise independent fools any halfspace (or linear threshold function) $h:\{-1,+1\}^n\to\{-1,+1\}$, i.e., any function of the form $h(x)=\operatorname{sign}(\sum_{i=1}^{n}w_{i}x_{i}-\theta)$, where the $w_1,\dots,w_n$ and $\theta$ are arbitrary real numbers, with error $\epsilon$ for $k=O(\epsilon^{-2}\log^2(1/\epsilon))$. Our result is tight up to $\log(1/\epsilon)$ factors. Using standard constructions of $k$-wise independent distributions, we obtain the first explicit pseudorandom generators $G:\{-1,+1\}^s\to\{-1,+1\}^n$ that fool halfspaces. Specifically, we fool halfspaces with error $\epsilon$ and seed length $s=k\cdot\log n=O(\log n\cdot\epsilon^{-2}\log^2(1/\epsilon))$. Our approach combines classical tools from real approximation theory with structural results on halfspaces by Servedio [Comput. Complexity, 16 (2007), pp. 180-209].

108 citations


Journal ArticleDOI
TL;DR: Algorithms to count and enumerate representatives of the (right) ideal classes of an Eichler order in a quaternion algebra defined over a number field are provided.
Abstract: We provide algorithms to count and enumerate representatives of the (right) ideal classes of an Eichler order in a quaternion algebra defined over a number field. We analyze the run time of these algorithms and consider several related problems, including the computation of two-sided ideal classes, isomorphism classes of orders, connecting ideals for orders, and ideal principalization. We conclude by giving the complete list of definite Eichler orders with class number at most 2.

Journal ArticleDOI
TL;DR: The first exponential lower bound on the sign-rank of a function in $\mathsf{AC}^0$ is obtained and this result additionally implies a lower bound in learning theory.
Abstract: The sign-rank of a matrix $A=[A_{ij}]$ with $\pm1$ entries is the least rank of a real matrix $B=[B_{ij}]$ with $A_{ij}B_{ij}>0$ for all $i,j$. We obtain the first exponential lower bound on the sign-rank of a function in $\mathsf{AC}^0$. Namely, let $f(x,y)=\bigwedge_{i=1,\dots,m}\bigvee_{j=1,\dots,m^2}(x_{ij}\wedge y_{ij})$. We show that the matrix $[f(x,y)]_{x,y}$ has sign-rank $\exp(\Omega(m))$. This in particular implies that $\Sigma_2^{cc}\not\subseteq\mathsf{UPP}^{cc}$, which solves a longstanding open problem in communication complexity posed by Babai, Frankl, and Simon [Proceedings of the 27th Symposium on Foundations of Computer Science (FOCS), 1986, pp. 337-347]. Our result additionally implies a lower bound in learning theory. Specifically, let $\phi_1,\dots,\phi_r:\{0,1\}^n\to\mathbb{R}$ be functions such that every DNF formula $f:\{0,1\}^n\to\{-1,+1\}$ of polynomial size has the representation $f\equiv\mathrm{sgn}(a_1\phi_1+\dots+a_r\phi_r)$ for some reals $a_1,\dots,a_r$. We prove that then $r\geqslant\exp(\Omega(n^{1/3}))$, which essentially matches an upper bound of $\exp(\tilde{O}(n^{1/3}))$, due to Klivans and Servedio [J. Comput. System Sci., 68 (2004), pp. 303-318]. Finally, our work yields the first exponential lower bound on the size of threshold-of-majority circuits computing a function in $\mathsf{AC}^0$. This substantially generalizes and strengthens the results of Krause and Pudlak [Theoret. Comput. Sci., 174 (1997), pp. 137-156].

Journal ArticleDOI
TL;DR: A new approach to constructing pseudorandom generators that fool low-degree polynomials over finite fields, based on the Gowers norm, is presented; over small fields it constitutes the first progress on these problems since the long-standing generator of Luby, Velickovic, and Wigderson.
Abstract: We present a new approach to constructing pseudorandom generators that fool low-degree polynomials over finite fields, based on the Gowers norm. Using this approach, we obtain the following main constructions of explicitly computable generators $G:\mathbb{F}^s\to\mathbb{F}^n$ that fool polynomials over a finite field $\mathbb{F}$: We stress that the results in (1) and (2) are unconditional, i.e., do not rely on any unproven assumption. Moreover, the results in (3) rely on a special case of the conjecture which may be easier to prove. Our generator for degree-$d$ polynomials is the componentwise sum of $d$ generators for degree-1 polynomials (on independent seeds). Prior to our work, generators with logarithmic seed length were only known for degree-1 (i.e., linear) polynomials [J. Naor and M. Naor, SIAM J. Comput., 22 (1993), pp. 838-856]. In fact, over small fields such as $\mathbb{F}_2=\{0,1\}$, our results constitute the first progress on these problems since the long-standing generator by Luby, Velickovic, and Wigderson [Deterministic approximate counting of depth-2 circuits, in Proceedings of the 2nd Israeli Symposium on Theoretical Computer Science (ISTCS), 1993, pp. 18-24], whose seed length is much bigger: $s=\exp\left(\Omega\left(\sqrt{\log n}\right)\right)$, even for the case of degree-2 polynomials over $\mathbb{F}_2$.

Journal ArticleDOI
TL;DR: This paper gives the first approximation algorithm for the problem of max-min fair allocation of indivisible goods and design and analyze an iterative method for rounding a fractional matching on a tree which might be of independent interest.
Abstract: In this paper, we give the first approximation algorithm for the problem of max-min fair allocation of indivisible goods. An instance of this problem consists of a set of $k$ people and $m$ indivisible goods. Each person has a known linear utility function over the set of goods which might be different from the utility functions of other people. The goal is to distribute the goods among the people and maximize the minimum utility received by them. The approximation ratio of our algorithm is $\Omega(\frac{1}{\sqrt{k}\log^{3}k})$. As a crucial part of our algorithm, we design and analyze an iterative method for rounding a fractional matching on a tree which might be of independent interest. We also provide better bounds when we are allowed to exclude a small fraction of the people from the problem.

Journal ArticleDOI
TL;DR: The results imply that the running time of many clique-width-based algorithms is essentially the best the authors can hope for (up to a widely believed assumption from parameterized complexity, namely $FPT\neq W[1]$).
Abstract: We show that Edge Dominating Set, Hamiltonian Cycle, and Graph Coloring are $W[1]$-hard parameterized by clique-width. It was an open problem, explicitly mentioned in several papers, whether any of these problems is fixed parameter tractable when parameterized by the clique-width, that is, solvable in time $g(k)\cdot n^{O(1)}$ on $n$-vertex graphs of clique-width $k$, where $g$ is some function of $k$ only. Our results imply that the running time $O(n^{f(k)})$ of many clique-width-based algorithms is essentially the best we can hope for (up to a widely believed assumption from parameterized complexity, namely $FPT\neq W[1]$).

Journal ArticleDOI
TL;DR: It is shown that compressibility (say, of SAT) would have vast implications for cryptography, including constructions of one-way functions and collision resistant hash functions from any hard-on-average problem in $\mathcal{NP}$ and cryptanalysis of key agreement protocols in the “bounded storage model” when mixed with (time) complexity-based cryptography.
Abstract: We study compression that preserves the solution to an instance of a problem rather than preserving the instance itself. Our focus is on the compressibility of $\mathcal{NP}$ decision problems. We consider $\mathcal{NP}$ problems that have long instances but relatively short witnesses. The question is whether one can efficiently compress an instance and store a shorter representation that maintains the information of whether the original input is in the language or not. We want the length of the compressed instance to be polynomial in the length of the witness and polylog in the length of original input. Such compression enables succinctly storing instances until a future setting will allow solving them, either via a technological or algorithmic breakthrough or simply until enough time has elapsed. In this paper, we first develop the basic complexity theory of compression, including reducibility, completeness, and a stratification of $\mathcal{NP}$ with respect to compression. We then show that compressibility (say, of SAT) would have vast implications for cryptography, including constructions of one-way functions and collision resistant hash functions from any hard-on-average problem in $\mathcal{NP}$ and cryptanalysis of key agreement protocols in the “bounded storage model” when mixed with (time) complexity-based cryptography.

Journal ArticleDOI
TL;DR: This paper addresses the problem of testing whether a Boolean-valued function $f$ is a halfspace, i.e., a function of the form $f(x)=\mathrm{sgn}(w\cdot x-\theta)$, giving an algorithm that distinguishes halfspaces from functions that are $\epsilon$-far from any halfspace using only $\mathrm{poly}(1/\epsilon)$ queries, independent of the dimension $n$.
Abstract: This paper addresses the problem of testing whether a Boolean-valued function $f$ is a halfspace, i.e., a function of the form $f(x)=\mathrm{sgn}(w\cdot x-\theta)$. We consider halfspaces over the continuous domain $\mathbf{R}^n$ (endowed with the standard multivariate Gaussian distribution) as well as halfspaces over the Boolean cube $\{-1,1\}^n$ (endowed with the uniform distribution). In both cases we give an algorithm that distinguishes halfspaces from functions that are $\epsilon$-far from any halfspace using only $\mathrm{poly}(\frac{1}{\epsilon})$ queries, independent of the dimension $n$. Two simple structural results about halfspaces are at the heart of our approach for the Gaussian distribution: The first gives an exact relationship between the expected value of a halfspace $f$ and the sum of the squares of $f$'s degree-1 Hermite coefficients, and the second shows that any function that approximately satisfies this relationship is close to a halfspace. We prove analogous results for the Boolean cube $\{-1,1\}^n$ (with Fourier coefficients in place of Hermite coefficients) for balanced halfspaces in which all degree-1 Fourier coefficients are small. Dealing with general halfspaces over $\{-1,1\}^n$ poses significant additional complications and requires other ingredients. These include “cross-consistency” versions of the results mentioned above for pairs of halfspaces with the same weights but different thresholds; new structural results relating the largest degree-1 Fourier coefficient and the largest weight in unbalanced halfspaces; and algorithmic techniques from recent work on testing juntas [E. Fischer, G. Kindler, D. Ron, S. Safra, and A. Samorodnitsky, Proceedings of the 43rd IEEE Symposium on Foundations of Computer Science, 2002, pp. 103-112].
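The exact Gaussian relationship alluded to above can be checked numerically. For $f(x)=\mathrm{sgn}(w\cdot x-\theta)$ with $\|w\|=1$, one has $\mathbf{E}[f]=1-2\Phi(\theta)$ and the sum of squared degree-1 Hermite coefficients $\mathbf{E}[f(x)x_i]$ equals $4\varphi(\theta)^2$. The Monte Carlo check below uses toy weights and threshold and is not the paper's proof, just a sanity check of the stated relation.

```python
import math, random

# Monte Carlo check: estimate E[f] and the degree-1 Hermite coefficients
# E[f(x) * x_i] of the halfspace f(x) = sgn(w.x - theta) under the standard
# Gaussian, and compare with the closed forms 1 - 2*Phi(theta) and
# 4*phi(theta)^2. Weights and theta are toy choices with ||w|| = 1.

random.seed(0)
w, theta = (0.6, 0.8), 0.5
N = 200_000

mean_f, a = 0.0, [0.0, 0.0]
for _ in range(N):
    x = (random.gauss(0, 1), random.gauss(0, 1))
    f = 1.0 if w[0] * x[0] + w[1] * x[1] - theta > 0 else -1.0
    mean_f += f / N
    for i in range(2):
        a[i] += f * x[i] / N          # degree-1 Hermite coefficient estimate

phi = math.exp(-theta**2 / 2) / math.sqrt(2 * math.pi)       # Gaussian density
Phi = 0.5 * (1 + math.erf(theta / math.sqrt(2)))             # Gaussian CDF

# mean_f ≈ 1 - 2*Phi  and  a[0]**2 + a[1]**2 ≈ 4*phi**2
```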

Journal ArticleDOI
TL;DR: A $3/4$-approximation algorithm for MBA is given, and it is shown to be NP-hard to approximate maximum submodular welfare with demand oracles to a factor better than $15/16$, improving upon the best known hardness of $275/276$.
Abstract: In this paper we consider the following maximum budgeted allocation (MBA) problem: Given a set of $m$ indivisible items and $n$ agents, with each agent $i$ willing to pay $b_{ij}$ on item $j$ and with a maximum budget of $B_i$, the goal is to allocate items to agents to maximize revenue. The problem naturally arises as auctioneer revenue maximization in budget-constrained auctions and as the winner determination problem in combinatorial auctions when utilities of agents are budgeted-additive. Our main results are as follows: (i) We give a $3/4$-approximation algorithm for MBA improving upon the previous best of $\simeq0.632$ [N. Andelman and Y. Mansour, Proceedings of the 9th Scandinavian Workshop on Algorithm Theory (SWAT), 2004, pp. 26-38], [J. Vondrak, Proceedings of the 40th Annual ACM Symposium on the Theory of Computing (STOC), 2008, pp. 67-74] (also implied by the result of [U. Feige and J. Vondrak, Proceedings of the 47th IEEE Symposium on Foundations of Computer Science (FOCS), 2006, pp. 667-676]). Our techniques are based on a natural LP relaxation of MBA, and our factor is optimal in the sense that it matches the integrality gap of the LP. (ii) We prove it is NP-hard to approximate MBA to any factor better than $15/16$; previously only NP-hardness was known [T. Sandholm and S. Suri, Games Econom. Behav., 55 (2006), pp. 321-330], [B. Lehmann, D. Lehmann, and N. Nisan, Proceedings of the 3rd ACM Conference on Electronic Commerce (EC), 2001, pp. 18-28]. Our result also implies NP-hardness of approximating maximum submodular welfare with demand oracle to a factor better than $15/16$, improving upon the best known hardness of $275/276$ [U. Feige and J. Vondrak, Proceedings of the 47th IEEE Symposium on Foundations of Computer Science (FOCS), 2006, pp. 667-676]. (iii) Our hardness techniques can be modified to prove that it is NP-hard to approximate the generalized assignment problem (GAP) to any factor better than $10/11$. 
This improves upon the $422/423$ hardness of [C. Chekuri and S. Khanna, Proceedings of the 11th Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), 2000, pp. 213-222], [M. Chlebik and J. Chlebikova, Proceedings of the 8th Scandinavian Workshop on Algorithm Theory (SWAT), 2002, pp. 170-179]. We use iterative rounding on a natural LP relaxation of the MBA problem to obtain the $3/4$-approximation. We also give a $(3/4-\epsilon)$-factor algorithm based on the primal-dual schema which runs in $\tilde{O}(nm)$ time, for any constant $\epsilon>0$.
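The budgeted-additive objective defined above is simple enough to state as a brute-force reference on a tiny instance: the revenue of an allocation is the sum over agents of $\min(B_i, \sum_j b_{ij})$ over the items assigned to $i$. The instance below is hypothetical; the paper's $3/4$-approximation works via an LP relaxation, not enumeration.

```python
from itertools import product

# Exhaustive solver for a tiny MBA instance; each item goes to one agent
# (or to no one), and each agent's payment is capped at its budget.

def revenue(assignment, bids, budgets):
    """assignment[j] = agent receiving item j (or None)."""
    earned = [0.0] * len(budgets)
    for j, i in enumerate(assignment):
        if i is not None:
            earned[i] += bids[i][j]
    return sum(min(b, e) for b, e in zip(budgets, earned))

bids = [[3, 2, 2],    # agent 0's bids on items 0..2
        [2, 3, 2]]    # agent 1's bids on items 0..2
budgets = [4, 4]

best = max(revenue(a, bids, budgets)
           for a in product([0, 1, None], repeat=3))
# best == 7: the budget caps prevent reaching the bid total of 8
```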

Journal ArticleDOI
TL;DR: In this article, it was shown that the value of the game can be well approximated by a semidefinite program (SDP) when the constraints enforced by the verifier are unique constraints (i.e., permutations).
Abstract: We consider one-round games between a classical verifier and two provers who share entanglement. We show that when the constraints enforced by the verifier are “unique” constraints (i.e., permutations), the value of the game can be well approximated by a semidefinite program (SDP). Essentially the only algorithm known previously was for the special case of binary answers, as follows from the work of Tsirelson in 1980. Among other things, our result implies that the variant of the unique games conjecture where we allow the provers to share entanglement is false. Our proof is based on a novel “quantum rounding technique,” showing how to take a solution to an SDP and transform it into a strategy for entangled provers. Using our approximation by an SDP, we also show a parallel repetition theorem for unique entangled games.

Journal ArticleDOI
TL;DR: This work presents a general framework and algorithmic approach for incremental approximation algorithms for cardinality constrained minimization problems, and gives an improved algorithm for a hierarchical version of the k-median problem introduced by Plaxton [31], and shows that the framework applies to hierarchical clustering problems.
Abstract: We present a general framework and algorithmic approach for incremental approximation algorithms. The framework handles cardinality constrained minimization problems, such as the $k$-median and $k$-MST problems. Given some notion of ordering on solutions of different cardinalities $k$, we give solutions for all values of $k$ such that the solutions respect the ordering and such that for any $k$, our solution is close in value to the value of an optimal solution of cardinality $k$. For instance, for the $k$-median problem, the notion of ordering is set inclusion, and our incremental algorithm produces solutions such that the solution of cardinality $k$ is a subset of the solution of cardinality $k'$ whenever $k\leq k'$.

Journal ArticleDOI
TL;DR: The complexity of black-box proofs of hardness amplification is studied and the results explain why hardness amplification techniques have failed to transform known lower bounds against constant-depth circuit classes into strong average-case lower bounds.
Abstract: Hardness amplification is the fundamental task of converting a $\delta$-hard function $f:\{0,1\}^n\to\{0,1\}$ into a $(1/2-\epsilon)$-hard function $\mathit{Amp}(f)$, where $f$ is $\gamma$-hard if small circuits fail to compute $f$ on at least a $\gamma$ fraction of the inputs. In this paper we study the complexity of black-box proofs of hardness amplification. A class of circuits $\mathcal{D}$ proves a hardness amplification result if for any function $h$ that agrees with $\mathit{Amp}(f)$ on a $1/2+\epsilon$ fraction of the inputs there exists an oracle circuit $D\in\mathcal{D}$ such that $D^h$ agrees with $f$ on a $1-\delta$ fraction of the inputs. We focus on the case where every $D\in\mathcal{D}$ makes nonadaptive queries to $h$. This setting captures most hardness amplification techniques. We prove two main results: (1) The circuits in $\mathcal{D}$ “can be used” to compute the majority function on $1/\epsilon$ bits. In particular, when $\epsilon\leq1/\log^{\omega(1)}n$, $\mathcal{D}$ cannot consist of oracle circuits that have unbounded fan-in, size $\mathrm{poly}(n)$, and depth $O(1)$. (2) The circuits in $\mathcal{D}$ must make $\Omega\left(\log(1/\delta)/\epsilon^2\right)$ oracle queries. Both our bounds on the depth and on the number of queries are tight up to constant factors. Our results explain why hardness amplification techniques have failed to transform known lower bounds against constant-depth circuit classes into strong average-case lower bounds. Our results reveal a contrast between Yao's XOR lemma ($\mathit{Amp}(f):=f(x_1)\oplus\cdots\oplus f(x_t)\in\{0,1\}$) and the direct-product lemma ($\mathit{Amp}(f):=f(x_1)\circ\cdots\circ f(x_t)\in\{0,1\}^t$; here $\mathit{Amp}(f)$ is non-Boolean). Our results (1) and (2) apply to Yao's XOR lemma, whereas known proofs of the direct-product lemma violate both (1) and (2). 
One of our contributions is a new technique for handling “nonuniform” reductions, i.e., the case when $\mathcal{D}$ contains many circuits.
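The contrast drawn between Yao's XOR lemma and the direct-product lemma is concrete enough to spell out. A minimal Python sketch of the two amplifiers (a toy parity function stands in for the $\delta$-hard $f$; all names here are ours, purely illustrative):

```python
def xor_amplifier(f, inputs):
    """Yao's XOR lemma amplifier: Amp(f)(x_1,...,x_t) = f(x_1) XOR ... XOR f(x_t),
    a single Boolean output bit."""
    acc = 0
    for x in inputs:
        acc ^= f(x)
    return acc

def direct_product_amplifier(f, inputs):
    """Direct-product amplifier: Amp(f)(x_1,...,x_t) = f(x_1) o ... o f(x_t),
    a t-bit (non-Boolean) output."""
    return tuple(f(x) for x in inputs)

# Toy stand-in for a delta-hard Boolean function: parity of an integer's bits.
f = lambda x: bin(x).count("1") % 2

xs = [3, 5, 6]  # t = 3 sample inputs; each has two set bits, so f = 0 on all
print(xor_amplifier(f, xs))             # -> 0 (single bit)
print(direct_product_amplifier(f, xs))  # -> (0, 0, 0) (3-bit output)
```

The paper's point is that any nonadaptive black-box proof for the Boolean (XOR) form must pay in majority-computing power and query count, whereas proofs for the non-Boolean direct-product form evade both bounds.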

Journal ArticleDOI
TL;DR: A quantum algorithm is presented that additively approximates the value of a tensor network to a certain scale; combined with existing results, this yields a complete problem for quantum computation.
Abstract: We present a quantum algorithm that additively approximates the value of a tensor network to a certain scale. When combined with existing results, this provides a complete problem for quantum computation. The result is a simple new way of looking at quantum computation in which unitary gates are replaced by tensors and time is replaced by the order in which the tensor network is “swallowed.” We use this result to derive new quantum algorithms that approximate the partition function of a variety of classical statistical mechanical models, including the Potts model.
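The "value of a tensor network" being approximated can be stated exactly on tiny instances: sum, over all assignments to the shared indices, the product of the corresponding tensor entries. A brute-force Python sketch (illustrative only; the paper's contribution is a quantum additive approximation, not this exponential-time sum, and the function names are ours):

```python
from itertools import product

def tensor_network_value(tensors, legs, dims):
    """Value of a closed tensor network: sum over all assignments to the
    shared indices of the product of tensor entries. Each tensor is a dict
    keyed by index tuples; 'legs' lists each tensor's index names."""
    indices = sorted(dims)
    total = 0.0
    for assignment in product(*(range(dims[i]) for i in indices)):
        env = dict(zip(indices, assignment))
        term = 1.0
        for T, leg in zip(tensors, legs):
            term *= T[tuple(env[i] for i in leg)]
        total += term
    return total

# Two matrices A, B contracted on both indices give trace(A @ B).
A = {(i, j): [[1.0, 2.0], [3.0, 4.0]][i][j] for i in range(2) for j in range(2)}
B = {(i, j): [[0.0, 1.0], [1.0, 0.0]][i][j] for i in range(2) for j in range(2)}
val = tensor_network_value([A, B], [("a", "b"), ("b", "a")], {"a": 2, "b": 2})
print(val)  # trace(A @ B) = 5.0
```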

Journal ArticleDOI
TL;DR: In this article, the authors considered broadcasting in an unknown radio network, where every node knows only its own label, while it is unaware of any other parameter of the network, including its neighborhood and even any upper bound on the number of nodes.
Abstract: We consider the problem of broadcasting in an unknown radio network modeled as a directed graph $G=(V,E)$, where $|V|=n$. In unknown networks, every node knows only its own label, while it is unaware of any other parameter of the network, including its neighborhood and even any upper bound on the number of nodes. We show an $\mathcal{O}(n\log n\log\log n)$ upper bound on the time complexity of deterministic broadcasting. This improves the previously best upper bound of $\mathcal{O}(n\log^2n)$ for arbitrary networks, thus shrinking the gap above the lower bound $\Omega(n\log n)$ exponentially, from a multiplicative factor of $\mathcal{O}(\log n)$ to $\mathcal{O}(\log\log n)$.

Journal ArticleDOI
TL;DR: This work investigates the problem of monotonicity reconstruction, as defined by Ailon et al. (2004) in a localized setting, and constructs an implementation where the time and space per query is $(\log n)^{O(1)}$ and the size of the seed is polynomial in $\log n$ and $d$.
Abstract: We investigate the problem of monotonicity reconstruction, as defined by Ailon et al. (2004) in a localized setting. We have oracle access to a nonnegative real-valued function $f$ defined on the domain $[n]^d=\{1,\dots,n\}^d$ (where $d$ is viewed as a constant). We would like to closely approximate $f$ by a monotone function $g$. This should be done by a procedure (a filter) that given as input a point $x\in[n]^d$ outputs the value of $g(x)$, and runs in time that is polylogarithmic in $n$. The procedure can (indeed must) be randomized, but we require that all of the randomness be specified in advance by a single short random seed. We construct such an implementation where the time and space per query is $(\log n)^{O(1)}$ and the size of the seed is polynomial in $\log n$ and $d$. Furthermore, with high probability, the ratio of the (Hamming) distance between $g$ and $f$ to the minimum possible Hamming distance between a monotone function and $f$ is bounded above by a function of $d$ (independent of $n$). This allows for a local implementation: one can initialize many copies of the filter with the same short random seed, and they can autonomously handle queries, while producing outputs that are consistent with the same approximating function $g$.
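For intuition, here is a deliberately non-local toy version of such a filter for $d=1$: taking $g(x)=\max_{y\le x}f(y)$ yields a monotone $g$ that agrees with $f$ wherever $f$ is already consistent with its predecessors. The paper's achievement is answering such queries locally in polylogarithmic time from a shared short seed, which this sketch (names ours) does not attempt:

```python
def monotone_filter_1d(f, n):
    """Toy d = 1 reconstruction: g(x) = max over y <= x of f(y) is monotone
    and differs from f only where f dips below an earlier value. Unlike the
    paper's filter, this reads all of f rather than polylog(n) positions."""
    g, running = [], float("-inf")
    for x in range(n):
        running = max(running, f(x))
        g.append(running)
    return g

vals = [1, 3, 2, 5, 4, 6]
f = lambda x: vals[x]
print(monotone_filter_1d(f, len(vals)))  # [1, 3, 3, 5, 5, 6]
```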

Journal ArticleDOI
TL;DR: It is proved that the recent algorithms of Colin de Verdière and Lazarus for shortening embedded graphs and sets of cycles have running times polynomial in the complexity of the surface and the input curves, regardless of thesurface geometry.
Abstract: We describe algorithms to compute the shortest path homotopic to a given path, or the shortest cycle freely homotopic to a given cycle, on an orientable combinatorial surface. Unlike earlier results, our algorithms do not require the input path or cycle to be simple. Given a surface with complexity $n$, genus $g\geq2$, and no boundary, we construct in $O(gn\log n)$ time a tight octagonal decomposition of the surface—a set of simple cycles, each as short as possible in its free homotopy class, that decompose the surface into a complex of octagons meeting four at a vertex. After the surface is preprocessed, we can compute the shortest path homotopic to a given path of complexity $k$ in $O(gnk)$ time, or the shortest cycle homotopic to a given cycle of complexity $k$ in $O(gnk\log(nk))$ time. A similar algorithm computes shortest homotopic curves on surfaces with boundary or with genus 1. We also prove that the recent algorithms of Colin de Verdière and Lazarus for shortening embedded graphs and sets of cycles have running times polynomial in the complexity of the surface and the input curves, regardless of the surface geometry.

Journal ArticleDOI
TL;DR: A polynomial time algorithm is presented that, with high probability, finds a satisfying assignment of $\boldsymbol{\Phi}$ for constraint densities up to $m/n<(1-\varepsilon_k)2^k\ln(k)/k$, well beyond the density $m/n=1.817\cdot2^k/k$ reached by previously known efficient algorithms.
Abstract: Let $\boldsymbol{\Phi}$ be a uniformly distributed random $k$-SAT formula with $n$ variables and $m$ clauses. We present a polynomial time algorithm that finds a satisfying assignment of $\boldsymbol{\Phi}$ with high probability for constraint densities $m/n<(1-\varepsilon_k)2^k\ln(k)/k$, where $\varepsilon_k\rightarrow0$. Previously no efficient algorithm was known to find satisfying assignments with a nonvanishing probability beyond $m/n=1.817\cdot2^k/k$ [A. Frieze and S. Suen, J. Algorithms, 20 (1996), pp. 312-355].
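The two densities compared in the abstract are easy to evaluate numerically, and the uniform random formula model is easy to sample. A small Python sketch (helper names are ours; the vanishing correction $\varepsilon_k$ is omitted):

```python
import math
import random

def random_ksat(n, m, k, rng):
    """Uniformly random k-SAT: m clauses over variables 1..n, each clause
    on k distinct variables with independent uniform random signs."""
    formula = []
    for _ in range(m):
        vars_ = rng.sample(range(1, n + 1), k)
        formula.append([v if rng.random() < 0.5 else -v for v in vars_])
    return formula

def density_bounds(k):
    """The abstract's algorithmic density ~ 2^k ln(k)/k (ignoring eps_k)
    versus the previous best 1.817 * 2^k / k."""
    return (2 ** k) * math.log(k) / k, 1.817 * (2 ** k) / k

new, old = density_bounds(8)
print(new > old)  # ln(8) ~ 2.08 > 1.817, so the new bound is larger: True
formula = random_ksat(n=20, m=50, k=3, rng=random.Random(0))
print(len(formula))  # 50 clauses of 3 literals each
```

Note that for small $k$ (e.g., $k=3$, where $\ln 3 < 1.817$) the new bound is not yet an improvement; the gain kicks in as $k$ grows.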

Journal ArticleDOI
TL;DR: The current paper answers in the affirmative the open question of whether it is possible to obtain a fault tolerant spanner for an arbitrary undirected weighted graph, presenting an $f$-vertex fault tolerant $(2k-1)$-spanner of size $O(f^{2}k^{f+1}\cdot n^{1+1/k}\log^{1-1/k}n)$.
Abstract: This paper concerns graph spanners that are resistant to vertex or edge failures. In the failure-free setting, it is known how to efficiently construct a $(2k-1)$-spanner of size $O(n^{1+1/k})$, and this size-stretch trade-off is conjectured to be tight. The notion of fault tolerant spanners was introduced a decade ago in the geometric setting [C. Levcopoulos, G. Narasimhan, and M. Smid, in Proceedings of the 30th Annual ACM Symposium on Theory of Computing, 1998, pp. 186-195]. A subgraph $H$ is an $f$-vertex fault tolerant $k$-spanner of the graph $G$ if for any set $F\subseteq V$ of size at most $f$ and any pair of vertices $u,v\in V\setminus F$, the distances in $H$ satisfy $\delta_{H\setminus F}(u,v)\leq k\cdot\delta_{G\setminus F}(u,v)$. A fault tolerant geometric spanner with optimal maximum degree and total weight was presented in [A. Czumaj and H. Zhao, Discrete Comput. Geom., 32 (2004), pp. 207-230]. This paper also raised as an open problem the question of whether it is possible to obtain a fault tolerant spanner for an arbitrary undirected weighted graph. The current paper answers this question in the affirmative, presenting an $f$-vertex fault tolerant $(2k-1)$-spanner of size $O(f^{2}k^{f+1}\cdot n^{1+1/k}\log^{1-1/k}n)$. Interestingly, the stretch of the spanner remains unchanged, while the size of the spanner increases only by a factor that depends on the stretch $k$, on the number of potential faults $f$, and on logarithmic terms in $n$. In addition, we consider the simpler setting of $f$-edge fault tolerant spanners (defined analogously). We present an $f$-edge fault tolerant $(2k-1)$-spanner with edge set of size $O(f\cdot n^{1+1/k})$ (only $f$ times larger than standard spanners). For both edge and vertex faults, our results are shown to hold when the given graph $G$ is weighted.
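The fault tolerance condition $\delta_{H\setminus F}(u,v)\leq k\cdot\delta_{G\setminus F}(u,v)$ can be verified by brute force on small unweighted examples. A Python sketch (exponential in $f$, unit edge weights, names ours; purely to make the definition concrete, not the paper's construction):

```python
from itertools import combinations
from collections import deque

def bfs_dist(adj, banned, s, t):
    """Shortest-path length from s to t avoiding 'banned' vertices
    (unit edge weights); returns None if t is unreachable."""
    if s in banned or t in banned:
        return None
    dist = {s: 0}
    q = deque([s])
    while q:
        u = q.popleft()
        if u == t:
            return dist[u]
        for v in adj.get(u, ()):
            if v not in banned and v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return None

def is_f_fault_tolerant_spanner(G, H, V, f, k):
    """Check the definition from the abstract directly: for every fault set
    F of size <= f and every u, v outside F,
    dist_{H \ F}(u, v) <= k * dist_{G \ F}(u, v)."""
    for r in range(f + 1):
        for F in combinations(V, r):
            F = set(F)
            rest = [v for v in V if v not in F]
            for i, u in enumerate(rest):
                for v in rest[i + 1:]:
                    dg = bfs_dist(G, F, u, v)
                    if dg is None:
                        continue  # disconnected in G \ F: no requirement
                    dh = bfs_dist(H, F, u, v)
                    if dh is None or dh > k * dg:
                        return False
    return True
```

For instance, removing one edge of a 4-cycle leaves a subgraph that is a 3-spanner in the failure-free sense ($f=0$) but fails already for $f=1$, since a single vertex fault can disconnect it.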

Journal ArticleDOI
TL;DR: This paper shows how to compute $O(\sqrt{\log n})$-approximations to the Sparsest Cut and Balanced Separator problems in $\tilde{O}(n^2)$ time, thus improving upon the recent algorithm of Arora, Rao, and Vazirani.
Abstract: This paper shows how to compute $O(\sqrt{\log n})$-approximations to the Sparsest Cut and Balanced Separator problems in $\tilde{O}(n^2)$ time, thus improving upon the recent algorithm of Arora, Rao, and Vazirani [Proceedings of the 36th Annual ACM Symposium on Theory of Computing, 2004, pp. 222-231]. Their algorithm uses semidefinite programming and requires $\tilde{O}(n^{9.5})$ time. Our algorithm relies on efficiently finding expander flows in the graph and does not solve semidefinite programs. The existence of expander flows was also established by Arora, Rao, and Vazirani [Proceedings of the 36th Annual ACM Symposium on Theory of Computing, 2004, pp. 222-231].

Journal ArticleDOI
TL;DR: This work gives the first nontrivial approximation algorithm for NWSN with arbitrary requirements, and provides evidence that a polylogarithmic approximation ratio for NWSN with large $r_{\max}$ might not exist even for $|U|=2$ and unit weights.
Abstract: The (undirected) Steiner Network problem is as follows: given a graph $G=(V,E)$ with edge/node-weights and edge-connectivity requirements $\{r(u,v):u,v\in U\subseteq V\}$, find a minimum-weight subgraph $H$ of $G$ containing $U$ so that the $uv$-edge-connectivity in $H$ is at least $r(u,v)$ for all $u,v\in U$. The seminal paper of Jain [Combinatorica, 21 (2001), pp. 39-60], and numerous papers preceding it, considered the Edge-Weighted Steiner Network problem, with weights on the edges only, and developed novel tools for approximating minimum-weight edge-covers of several types of set functions and families. However, for the Node-Weighted Steiner Network (NWSN) problem, nontrivial approximation algorithms were known only for $0,1$ requirements. We make an attempt to change this situation by giving the first nontrivial approximation algorithm for NWSN with arbitrary requirements. Our approximation ratio for NWSN is $r_{\max}\cdot O(\ln|U|)$, where $r_{\max}=\max_{u,v\in U}r(u,v)$. This generalizes the result of Klein and Ravi [J. Algorithms, 19 (1995), pp. 104-115] for the case $r_{\max}=1$. We also give an $O(\ln|U|)$-approximation algorithm for the node-connectivity variant of NWSN (when the paths are required to be internally disjoint) for the case $r_{\max}=2$. Our results are based on a much more general approximation algorithm for the problem of finding a minimum node-weighted edge-cover of an uncrossable set-family. Finally, we give evidence that a polylogarithmic approximation ratio for NWSN with large $r_{\max}$ might not exist even for $|U|=2$ and unit weights.

Journal ArticleDOI
TL;DR: A concept of regularity that takes into account vertex weights is introduced, and it is shown that if $G=(V,E)$ satisfies a certain boundedness condition, then $G$ admits a regular partition; in addition, an algorithm is provided that computes a regular partition of a given (possibly sparse) graph $G$ in polynomial time.
Abstract: We deal with two intimately related subjects: quasi-randomness and regular partitions. The purpose of the concept of quasi-randomness is to express how much a given graph “resembles” a random one. Moreover, a regular partition approximates a given graph by a bounded number of quasi-random graphs. Regarding quasi-randomness, we present a new spectral characterization of low discrepancy, which extends to sparse graphs. Concerning regular partitions, we introduce a concept of regularity that takes into account vertex weights, and show that if $G=(V,E)$ satisfies a certain boundedness condition, then $G$ admits a regular partition. In addition, building on the work of Alon and Naor [Proceedings of the 36th ACM Symposium on Theory of Computing (STOC), Chicago, IL, ACM, New York, 2004, pp. 72-80], we provide an algorithm that computes a regular partition of a given (possibly sparse) graph $G$ in polynomial time. As an application, we present a polynomial time approximation scheme for MAX CUT on (sparse) graphs without “dense spots.”