
Showing papers in "Journal of the ACM in 2016"


Journal ArticleDOI
TL;DR: The theory introduces a new notion of types in which interactions involving multiple peers are directly abstracted as a global scenario, and the fundamental properties of the session type discipline, such as communication safety, progress, and session fidelity, are established.
Abstract: Communication is a central element in software development. As a potential typed foundation for structured communication-centered programming, session types have been studied over the past decade for a wide range of process calculi and programming languages, focusing on binary (two-party) sessions. This work extends the foregoing theories of binary session types to multiparty, asynchronous sessions, which often arise in practical communication-centered applications. Presented as a typed calculus for mobile processes, the theory introduces a new notion of types in which interactions involving multiple peers are directly abstracted as a global scenario. Global types retain the friendly type syntax of binary session types while specifying dependencies and capturing complex causal chains of multiparty asynchronous interactions. A global type plays the role of a shared agreement among communication peers and is used as a basis of efficient type-checking through its projection onto individual peers. The fundamental properties of the session type discipline, such as communication safety, progress, and session fidelity, are established for general n-party asynchronous interactions.

482 citations
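
As a toy illustration of the projection idea described above, the sketch below encodes a global type as a plain list of interactions and projects it onto each peer's local type. The representation and names are our own simplifications for exposition, not the article's formal calculus.

# A minimal sketch (illustrative assumptions, not the paper's typed calculus):
# a global type as a sequence of interactions, projected onto each peer.

def project(global_type, peer):
    """Project a global type (list of (sender, receiver, payload)) onto one peer."""
    local = []
    for sender, receiver, payload in global_type:
        if peer == sender:
            local.append(("send", receiver, payload))  # peer outputs to receiver
        elif peer == receiver:
            local.append(("recv", sender, payload))    # peer inputs from sender
        # interactions not involving this peer vanish under projection
    return local

# Global scenario: Buyer asks Seller for a quote; Seller replies; Buyer instructs Bank.
G = [("Buyer", "Seller", "Quote"), ("Seller", "Buyer", "Price"), ("Buyer", "Bank", "Pay")]
for p in ("Buyer", "Seller", "Bank"):
    print(p, project(G, p))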


Journal ArticleDOI
TL;DR: In this article, the authors studied the randomized complexity of four fundamental symmetry-breaking problems on graphs: computing maximal independent sets, maximal matchings, vertex colorings, and ruling sets.
Abstract: Symmetry-breaking problems are among the most well studied in the field of distributed computing and yet the most fundamental questions about their complexity remain open. In this article we work in the LOCAL model (where the input graph and underlying distributed network are identical) and study the randomized complexity of four fundamental symmetry-breaking problems on graphs: computing MISs (maximal independent sets), maximal matchings, vertex colorings, and ruling sets. A small sample of our results includes the following:
—An MIS algorithm running in O(log^2 Δ + 2^{O(√(log log n))}) time, where Δ is the maximum degree. This is the first MIS algorithm to improve on the 1986 algorithms of Luby and Alon, Babai, and Itai when log n ≪ Δ ≪ 2^{√(log n)}, and comes close to the Ω(log Δ / log log Δ) lower bound of Kuhn, Moscibroda, and Wattenhofer.
—A maximal matching algorithm running in O(log Δ + log^4 log n) time. This is the first significant improvement to the 1986 algorithm of Israeli and Itai. Moreover, its dependence on Δ is nearly optimal.
—A (Δ + 1)-coloring algorithm requiring O(log Δ + 2^{O(√(log log n))}) time, improving on an O(log Δ + √(log n))-time algorithm of Schneider and Wattenhofer.
—A method for reducing symmetry-breaking problems in low arboricity/degeneracy graphs to low-degree graphs. (Roughly speaking, the arboricity or degeneracy of a graph bounds the density of any subgraph.) Corollaries of this reduction include an O(√(log n))-time maximal matching algorithm for graphs with arboricity up to 2^{√(log n)} and an O(log^{2/3} n)-time MIS algorithm for graphs with arboricity up to 2^{(log n)^{1/3}}.
Each of our algorithms is based on a simple but powerful technique for reducing a randomized symmetry-breaking task to a corresponding deterministic one on a poly(log n)-size graph.

284 citations
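
For orientation, the 1986 randomized baseline that the MIS result above improves on fits in a few lines. This is a centralized simulation of Luby-style rounds under our own simplified tie-breaking (random reals, local minima join), not the article's algorithm.

import random

def luby_mis(adj, seed=0):
    """Simulate Luby-style rounds: each active node draws a random value, and
    local minima join the MIS; winners and their neighbours then drop out.
    adj: dict node -> set of neighbours. Returns a maximal independent set."""
    rng = random.Random(seed)
    mis, active = set(), set(adj)
    while active:
        r = {v: rng.random() for v in active}
        joined = {v for v in active
                  if all(r[v] < r[u] for u in adj[v] if u in active)}
        mis |= joined
        active -= joined | {u for v in joined for u in adj[v]}
    return mis

adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
print(luby_mis(adj))  # a maximal independent set, e.g. {0, 3}, {1, 3}, or {2}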


Journal ArticleDOI
TL;DR: The first polylogarithmic lower bound on such local computation for (optimization) problems including minimum vertex cover, minimum (connected) dominating set, maximum matching, maximal independent set, and maximal matching is given.
Abstract: The question of what can be computed, and how efficiently, is at the core of computer science. Not surprisingly, in distributed systems and networking research, an equally fundamental question is what can be computed in a distributed fashion. More precisely, if nodes of a network must base their decision on information in their local neighborhood only, how well can they compute or approximate a global (optimization) problem? In this paper we give the first polylogarithmic lower bound on such local computation for (optimization) problems including minimum vertex cover, minimum (connected) dominating set, maximum matching, maximal independent set, and maximal matching. In addition, we present a new distributed algorithm for solving general covering and packing linear programs. For some problems this algorithm is tight with the lower bounds, whereas for others it is a distributed approximation scheme. Together, our lower and upper bounds establish the local computability and approximability of a large class of problems, characterizing how much local information is required to solve these tasks.

177 citations


Journal ArticleDOI
TL;DR: It is shown that if bidders have submodular or, more generally, fractionally subadditive valuation functions, every Bayes-Nash equilibrium of the resulting game provides a 2-approximation to the optimal social welfare.
Abstract: We study the following simple Bayesian auction setting: m items are sold to n selfish bidders in m independent second-price auctions. Each bidder has a private valuation function that specifies his or her complex preferences over all subsets of items. Bidders only have beliefs about the valuation functions of the other bidders, in the form of probability distributions. The objective is to allocate the items to the bidders in a way that provides a good approximation to the optimal social welfare value. We show that if bidders have submodular or, more generally, fractionally subadditive (aka XOS) valuation functions, every Bayes-Nash equilibrium of the resulting game provides a 2-approximation to the optimal social welfare. Moreover, we show that in the full-information game, a pure Nash equilibrium always exists and can be found in time that is polynomial in both m and n.

137 citations
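
A minimal worked run of the mechanism under study, m independent second-price auctions held simultaneously. The bidders, items, and per-item bids below are illustrative assumptions, and the valuations are additive just to keep the welfare arithmetic visible; the theorem covers the much richer fractionally subadditive case.

bids = {                      # bidder -> bid submitted in each item's auction
    "alice": {"A": 5, "B": 1},
    "bob":   {"A": 3, "B": 4},
}
alloc, payments = {}, {}
for item in ("A", "B"):
    offers = sorted(((b[item], name) for name, b in bids.items()), reverse=True)
    (top, winner), (second, _) = offers[0], offers[1]
    alloc[item] = winner      # highest bid wins the item...
    payments[item] = second   # ...at the second-highest price
print(alloc)                  # {'A': 'alice', 'B': 'bob'}
print(payments)               # {'A': 3, 'B': 1}
print(sum(bids[w][i] for i, w in alloc.items()))  # welfare 9, optimal here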


Journal ArticleDOI
TL;DR: The method achieves, for the first time, exponential expansion combined with cryptographic security and noise tolerance, and has the following new features: cryptographic level of security, tolerating a constant level of imprecision in devices, and requiring only unit size quantum memory in devices.
Abstract: Randomness is a vital resource for modern-day information processing, especially for cryptography. A wide range of applications critically rely on abundant, high-quality random numbers generated securely. Here, we show how to expand a random seed at an exponential rate without trusting the underlying quantum devices. Our approach is secure against the most general adversaries, and has the following new features: cryptographic level of security, tolerating a constant level of imprecision in devices, requiring only unit size quantum memory (for each device component) in an honest implementation, and allowing a large natural class of constructions for the protocol. In conjunction with a recent work by Chung et al. [2014], it also leads to robust unbounded expansion using just 2 multipart devices. When adapted for distributing cryptographic keys, our method achieves, for the first time, exponential expansion combined with cryptographic security and noise tolerance. The proof proceeds by showing that the Rényi divergence of the outputs of the protocol (for a specific bounding operator) decreases linearly as the protocol iterates. At the heart of the proof are a new uncertainty principle on quantum measurements and a method for simulating trusted measurements with untrusted devices.

105 citations


Journal ArticleDOI
TL;DR: A family of languages that enable combined data and topology querying for graph databases is presented and shown to include efficient and highly expressive formalisms for querying both the structure of the data and the data itself.
Abstract: Graph databases have received much attention as of late due to numerous applications in which data is naturally viewed as a graph; these include social networks, RDF and the Semantic Web, biological databases, and many others. There are many proposals for query languages for graph databases that mainly fall into two categories. One views graphs as a particular kind of relational data and uses traditional relational mechanisms for querying. The other concentrates on querying the topology of the graph. These approaches, however, lack the ability to combine data and topology, which would allow queries asking how data changes along paths and patterns enveloping it. In this article, we present a comprehensive study of languages that enable such combination of data and topology querying. These languages come in two flavors. The first follows the standard approach of path queries, which specify how labels of edges change along a path, but now we extend them with ways of specifying how both labels and data change. From the complexity point of view, the right type of formalisms are subclasses of register automata. These, however, are not well suited for querying. To overcome this, we develop several types of extended regular expressions to specify paths with data and study their querying power and complexity. The second approach adopts the popular XML language XPath and extends it from XML documents to graphs. Depending on the exact set of allowed features, we have a family of languages, and our study shows that it includes efficient and highly expressive formalisms for querying both the structure of the data and the data itself.

101 citations
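
A concrete taste of what "combining data and topology" means: the query below asks for the nodes reachable along 'knows'-labeled edges while the data value stays equal to the start node's, the kind of equality test a register automaton performs. The property-graph encoding is our own toy representation, not a syntax from the article.

from collections import deque

nodes = {1: "alice", 2: "bob", 3: "alice", 4: "alice"}  # node -> data value
edges = [(1, "knows", 3), (3, "knows", 4), (1, "knows", 2), (2, "knows", 4)]

def same_data_reach(src):
    """Nodes reachable from src via 'knows' edges visiting only nodes whose
    data value equals that of src (a data-and-topology query)."""
    target, seen, queue = nodes[src], {src}, deque([src])
    while queue:
        v = queue.popleft()
        for a, lab, b in edges:
            if a == v and lab == "knows" and nodes[b] == target and b not in seen:
                seen.add(b)
                queue.append(b)
    return seen - {src}

print(same_data_reach(1))  # {3, 4}: node 2 ("bob") fails the data test,
                           # so 4 is reached only through 3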


Journal ArticleDOI
TL;DR: For the special case of uniform matroids on n elements, a 6.75^{k+o(k)} n^{O(1)}-time algorithm was given in this article.
Abstract: Let M = (E, I) be a matroid and let S = {S_1, …, S_t} be a family of subsets of E of size p. A subfamily Ŝ ⊆ S is q-representative for S if for every set Y ⊆ E of size at most q, if there is a set X ∈ S disjoint from Y with X ∪ Y ∈ I, then there is a set X̂ ∈ Ŝ disjoint from Y with X̂ ∪ Y ∈ I. By the classic result of Bollobás, in a uniform matroid, every family of sets of size p has a q-representative family with at most (p+q choose p) sets. In his famous “two families theorem” from 1977, Lovász proved that the same bound also holds for any matroid representable over a field F. We give an efficient construction of a q-representative family of size at most (p+q choose p) in time bounded by a polynomial in (p+q choose p), t, and the time required for field operations. We demonstrate how the efficient construction of representative families can be a powerful tool for designing single-exponential parameterized and exact exponential time algorithms. The applications of our approach include the following:
—In the Long Directed Cycle problem, the input is a directed n-vertex graph G and the positive integer k. The task is to find a directed cycle of length at least k in G, if such a cycle exists. As a consequence of our 6.75^{k+o(k)} n^{O(1)}-time algorithm, we have that a directed cycle of length at least log n, if such a cycle exists, can be found in polynomial time.
—In the Minimum Equivalent Graph (MEG) problem, we are seeking a spanning subdigraph D′ of a given n-vertex digraph D with as few arcs as possible in which the reachability relation is the same as in the original digraph D.
—We provide an alternative proof of the recent results for algorithms on graphs of bounded treewidth showing that many “connectivity” problems such as Hamiltonian Cycle or Steiner Tree can be solved in time 2^{O(t)} n on n-vertex graphs of treewidth at most t.
For the special case of uniform matroids on n elements, we give a faster algorithm to compute a representative family. We use this algorithm to provide the fastest known deterministic parameterized algorithms for k-Path, k-Tree, and, more generally, k-Subgraph Isomorphism, where the k-vertex pattern graph is of constant treewidth.

100 citations
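
The definition can be exercised brute-force on a tiny uniform matroid. The naive shrinking loop below only illustrates what a q-representative subfamily is; any minimal such subfamily has at most (p+q choose p) sets by the Bollobás-type bound, but this exponential check is emphatically not the paper's efficient construction.

from itertools import combinations

def is_q_representative(sub, fam, universe, q, rank):
    """Brute-force check of q-representativity in the uniform matroid of the
    given rank (a set is independent iff its size is at most rank)."""
    for y in combinations(universe, q):
        y = set(y)
        extends = lambda x: not (x & y) and len(x | y) <= rank
        if any(extends(x) for x in fam) and not any(extends(x) for x in sub):
            return False  # fam has an extension avoiding y, but sub does not
    return True

universe, p, q = range(6), 2, 2
fam = [set(c) for c in combinations(universe, p)]  # all 15 two-element sets
sub = list(fam)
for x in fam:                 # greedily drop sets while representativity survives
    trial = [s for s in sub if s != x]
    if is_q_representative(trial, fam, universe, q, rank=p + q):
        sub = trial
print(len(fam), "->", len(sub))  # ends with at most (p+q choose p) = 6 sets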


Journal ArticleDOI
TL;DR: This construction circumvents the n^{1/3} barrier of Razborov and Yekhanin [2007], which holds for the restricted model of bilinear group-based schemes (covering all previous 2-server schemes).
Abstract: A 2-server Private Information Retrieval (PIR) scheme allows a user to retrieve the ith bit of an n-bit database replicated among two noncommunicating servers, while not revealing any information about i to either server. In this work, we construct a 2-server PIR scheme with total communication cost n^{O(√(log log n / log n))}. This improves over current 2-server protocols, which all require Ω(n^{1/3}) communication. Our construction circumvents the n^{1/3} barrier of Razborov and Yekhanin [2007], which holds for the restricted model of bilinear group-based schemes (covering all previous 2-server schemes). The improvement comes from reducing the number of servers in existing protocols, based on Matching Vector Codes, from 3 or 4 servers to 2. This is achieved by viewing these protocols in an algebraic way (using polynomial interpolation) and extending them using partial derivatives.

99 citations
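
For orientation, the classical two-server XOR scheme below exhibits the privacy property in its simplest form at a cost of O(n) communication; the article's matching-vector construction is what brings this down to n^{O(√(log log n / log n))}. Everything here is the standard textbook toy, not the new scheme.

import secrets

n = 16
db = [secrets.randbelow(2) for _ in range(n)]  # n-bit database on both servers
i = 5                                          # index the user wants

S = {j for j in range(n) if secrets.randbelow(2)}  # uniformly random subset
T = S ^ {i}                                        # same subset with i toggled
answer1 = sum(db[j] for j in S) % 2                # server 1 sees only S
answer2 = sum(db[j] for j in T) % 2                # server 2 sees only T
# Each query alone is a uniformly random subset, so neither server learns i,
# yet the two parities differ exactly in the contribution of bit i.
assert (answer1 + answer2) % 2 == db[i]
print("recovered bit:", db[i])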


Journal ArticleDOI
TL;DR: This article concludes that new insights in using systems of polynomials allow us to significantly speed up the O(Δ)-coloring algorithms, and devise algorithms with the same running time also in the more complicated settings of dynamic and faulty networks.
Abstract: We study the distributed (Δ + 1)-vertex-coloring and (2Δ − 1)-edge-coloring problems. These problems are among the most important and intensively studied problems in distributed computing. Despite very intensive research in the last 30 years, no deterministic algorithms for these problems with sublinear (in Δ) time have been known so far. Moreover, for more restricted scenarios and some related problems there are lower bounds of Ω(Δ) [Goos et al. 2014; Hirvonen and Suomela 2012; Kuhn and Wattenhofer 2006; Szegedy and Vishwanathan 1993]. The question of the possibility to devise algorithms that overcome this challenging barrier is one of the most fundamental questions in distributed symmetry breaking [Barenboim and Elkin 2009, 2011; Goos et al. 2014; Hirvonen and Suomela 2012; Kuhn 2009; Panconesi and Rizzi 2001]. In this article, we settle this question for (Δ + 1)-vertex-coloring and (2Δ − 1)-edge-coloring by devising deterministic algorithms that require O(Δ^{3/4} log Δ + log* n) time in the static, dynamic, and faulty settings. (The term log* n is unavoidable in view of the lower bound of Linial [1987].) Moreover, for (1 + o(1))Δ-vertex-coloring and (2 + o(1))Δ-edge-coloring we devise algorithms with O(√Δ + log* n) deterministic time. This is roughly a quadratic improvement compared to the state-of-the-art that requires O(Δ + log* n) time [Barenboim and Elkin 2009; Kuhn 2009; Panconesi and Rizzi 2001]. Our results are actually more general than that since they apply also to a variant of the list-coloring problem that generalizes ordinary coloring. Our results are obtained using a novel technique for coloring partially colored graphs (also known as fixing). We partition the uncolored parts into a small number of subgraphs with certain helpful properties. Then we color these subgraphs gradually using a technique that employs constructions of polynomials in a novel way. Our construction is inspired by the algorithm of Linial [1987] for ordinary O(Δ^2)-coloring. However, it is a more sophisticated construction that differs from that of Linial [1987] in several important respects. These new insights in using systems of polynomials allow us to significantly speed up the O(Δ)-coloring algorithms. Moreover, they allow us to devise algorithms with the same running time also in the more complicated settings of dynamic and faulty networks.

86 citations
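
For background, the reason Δ + 1 colors always suffice, which is the palette size these distributed algorithms achieve fast, is the one-line greedy argument below. This is a sequential illustration only, not the article's polynomial-based distributed technique.

def greedy_coloring(adj):
    """Each vertex takes the smallest color unused by already-colored
    neighbours; a vertex of degree at most Delta sees at most Delta taken
    colors, so colors 0..Delta always suffice."""
    color = {}
    for v in adj:
        taken = {color[u] for u in adj[v] if u in color}
        c = 0
        while c in taken:
            c += 1
        color[v] = c
    return color

adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
print(greedy_coloring(adj))  # at most max degree + 1 = 4 colors used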


Journal ArticleDOI
TL;DR: This article presents an application of a simple technique of local recompression, previously developed by the author in the context of algorithms for compressed strings, to word equations, and gives new, independent, and self-contained proofs of many known results for word equations.
Abstract: In this article, we present an application of a simple technique of local recompression, previously developed by the author in the context of algorithms for compressed strings [Jez 2014a, 2015b, 2015a], to word equations. The technique is based on local modification of variables (replacing X by aX or Xa) and iterative replacement of pairs of letters occurring in the equation by a “fresh” letter, which can be seen as a bottom-up compression of the solution of the given word equation, or, to be more specific, building a Straight-Line Programme for the solution of the word equation. Using this technique, we give new, independent, and self-contained proofs of many known results for word equations. To be more specific, the presented (nondeterministic) algorithm runs in O(n log n) space and in time polynomial in n and log N, where n is the size of the input equation and N the size of the length-minimal solution of the word equation. Furthermore, for O(1) variables, the bound on the space consumption is in fact linear, that is, O(m), where m is the size of the space used by the input. This yields that for each k the set of satisfiable word equations with k variables is context sensitive. The presented algorithm can be easily generalised to a generator of all solutions of the given word equation (without increasing the space usage). Furthermore, a further analysis of the algorithm yields an independent proof of the doubly exponential upper bound on the size of the length-minimal solution. The presented algorithm does not use the exponential bound on the exponent of periodicity. Conversely, the analysis of the algorithm yields an independent proof of the exponential bound on the exponent of periodicity.

82 citations
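
The pair-compression step at the core of the technique can be sketched on a plain string: repeatedly replace a frequent adjacent pair with a fresh letter while recording grammar rules, which yields a straight-line program for the input. This toy (our simplification) works on a fixed string and ignores the equation variables and the aX/Xa local modifications that the article's algorithm handles.

def recompress(s):
    """Iteratively replace the most frequent adjacent pair by a fresh
    nonterminal, recording rules: a bottom-up, SLP-style compression."""
    rules, fresh, s = {}, 0, list(s)
    while len(s) > 1:
        pairs = {}
        for a, b in zip(s, s[1:]):
            pairs[(a, b)] = pairs.get((a, b), 0) + 1
        pair = max(pairs, key=pairs.get)     # most frequent adjacent pair
        name = f"N{fresh}"; fresh += 1
        rules[name] = pair
        out, i = [], 0
        while i < len(s):                    # left-to-right replacement pass
            if i + 1 < len(s) and (s[i], s[i + 1]) == pair:
                out.append(name); i += 2
            else:
                out.append(s[i]); i += 1
        s = out
    return s[0], rules

start, rules = recompress("abababab")
print(start, rules)  # N2 {'N0': ('a','b'), 'N1': ('N0','N0'), 'N2': ('N1','N1')}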


Journal ArticleDOI
TL;DR: It is shown that unique shortest paths induce set systems of low VC-dimension, which makes them combinatorially simple; this gives a unified explanation for the performance of several seemingly different approaches.
Abstract: Computing driving directions has motivated many shortest path algorithms based on preprocessing. Given a graph, the preprocessing stage computes a modest amount of auxiliary data, which is then used to speed up online queries. In practice, the best algorithms have storage overhead comparable to the graph size and answer queries very fast, while examining a small fraction of the graph. In this article, we complement the experimental evidence with the first rigorous proofs of efficiency for some of the speedup techniques developed over the past decade or variations thereof. We define highway dimension, which strengthens the notion of doubling dimension. Under the assumption that the highway dimension is low (at most polylogarithmic in the graph size), we show that, for some algorithms or their variants, preprocessing can be implemented in polynomial time, the resulting auxiliary data increases the storage requirements by a polylogarithmic factor, and queries run in polylogarithmic time. This gives a unified explanation for the performance of several seemingly different approaches. Our best bounds are based on a result that may be of independent interest: we show that unique shortest paths induce set systems of low VC-dimension, which makes them combinatorially simple.

Journal ArticleDOI
TL;DR: It is proved that any total boolean function of rank r can be computed by a deterministic communication protocol of complexity O(√r · log(r)) and any graph whose adjacency matrix has rank r has chromatic number at most 2^{O(√r · log(r))}.
Abstract: We prove that any total boolean function of rank r can be computed by a deterministic communication protocol of complexity O(√r · log(r)). Equivalently, any graph whose adjacency matrix has rank r has chromatic number at most 2^{O(√r · log(r))}. This gives a nearly quadratic improvement in the dependence on the rank over previous results.

Journal ArticleDOI
TL;DR: The theorems unify and extend all previously known kernelization results for planar graph problems and show that all problems expressible in Counting Monadic Second Order Logic and satisfying a coverability property admit a polynomial kernel on graphs of bounded genus.
Abstract: In a parameterized problem, every instance I comes with a positive integer k. The problem is said to admit a polynomial kernel if, in polynomial time, one can reduce the size of the instance I to a polynomial in k while preserving the answer. In this work, we give two meta-theorems on kernelization. The first theorem says that all problems expressible in counting monadic second-order logic and satisfying a coverability property admit a polynomial kernel on graphs of bounded genus. Our second result is that all problems that have finite integer index and satisfy a weaker coverability property admit a linear kernel on graphs of bounded genus. These theorems unify and extend all previously known kernelization results for planar graph problems.

Journal ArticleDOI
TL;DR: It is shown that every concept class C with VC dimension d has a sample compression scheme of size exponential in d, and an approximate minimax phenomenon for binary matrices of low VC dimension is used, which may be of interest in the context of game theory.
Abstract: Sample compression schemes were defined by Littlestone and Warmuth (1986) as an abstraction of the structure underlying many learning algorithms. Roughly speaking, a sample compression scheme of size k means that given an arbitrary list of labeled examples, one can retain only k of them in a way that allows us to recover the labels of all other examples in the list. They showed that compression implies probably approximately correct learnability for binary-labeled classes and asked whether the other direction holds. We answer their question and show that every concept class C with VC dimension d has a sample compression scheme of size exponential in d.
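
A concrete size-1 instance of the definition, for threshold functions on the line (VC dimension 1): compressing to the smallest positive example suffices to relabel the whole sample. The encoding is our own toy; the article's theorem provides schemes of size exponential in d for every VC-dimension-d class.

def compress(sample):
    """sample: list of (x, label) pairs realizable by some threshold
    h_t(x) = [x >= t]. Keep one example (or None if all labels are 0)."""
    pos = [x for x, y in sample if y == 1]
    return min(pos) if pos else None

def reconstruct(kept):
    if kept is None:
        return lambda x: 0              # the all-negative hypothesis
    return lambda x: int(x >= kept)     # threshold at the kept example

sample = [(0.3, 0), (1.2, 1), (0.9, 1), (0.1, 0)]
h = reconstruct(compress(sample))
assert all(h(x) == y for x, y in sample)  # every label in the list is recovered
print("kept example:", compress(sample))  # 0.9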

Journal ArticleDOI
TL;DR: The key point is that algorithmic progress is measured in terms of entropy rather than energy so that termination can be established even under the proliferation of states in which every step of the algorithm (random walk) increases the total number of violated constraints.
Abstract: We give an algorithmic local lemma by establishing a sufficient condition for the uniform random walk on a directed graph to reach a sink quickly. Our work is inspired by Moser’s entropic method proof of the Lovasz Local Lemma (LLL) for satisfiability and completely bypasses the Probabilistic Method formulation of the LLL. In particular, our method works when the underlying state space is entirely unstructured. Similarly to Moser’s argument, the key point is that the inevitability of reaching a sink is established by bounding the entropy of the walk as a function of time.
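
Moser's resampling walk, the inspiration acknowledged above, is short enough to state for SAT: while some clause is violated, resample its variables uniformly; a satisfying assignment is a sink of the walk, and bounding the walk's entropy is what certifies fast termination. The clauses below are illustrative.

import random

def moser_walk(clauses, n_vars, seed=0):
    """Resample a violated clause until none remains. Clauses are lists of
    literals (+v or -v). Returns the assignment and the number of resamplings."""
    rng = random.Random(seed)
    assign = [rng.random() < 0.5 for _ in range(n_vars + 1)]  # index 0 unused
    def violated(c):  # a clause is violated iff every literal is false
        return all(assign[abs(l)] if l < 0 else not assign[abs(l)] for l in c)
    steps = 0
    while True:
        bad = [c for c in clauses if violated(c)]
        if not bad:
            return assign, steps
        for l in rng.choice(bad):                 # resample one violated clause
            assign[abs(l)] = rng.random() < 0.5
        steps += 1

clauses = [[1, 2, 3], [-1, 2, -3], [1, -2, 3], [-1, -2, 3]]
_, steps = moser_walk(clauses, n_vars=3)
print("reached a sink after", steps, "resampling steps")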

Journal ArticleDOI
TL;DR: In this article, the authors obtained the first polynomial relationship between treewidth and grid minor size by showing that f(k) = Ω(kδ) for some fixed constant δ > 0, and describe a randomized algorithm, whose running time is polynomially in vV(G)v and k, that with high probability finds a model of such a grid minor in G.
Abstract: One of the key results in Robertson and Seymour’s seminal work on graph minors is the grid-minor theorem (also called the excluded grid theorem). The theorem states that for every grid H, every graph whose treewidth is large enough relative to |V(H)| contains H as a minor. This theorem has found many applications in graph theory and algorithms. Let f(k) denote the largest value such that every graph of treewidth k contains a grid minor of size (f(k) × f(k)). The best previous quantitative bound, due to recent work of Kawarabayashi and Kobayashi, and Leaf and Seymour, shows that f(k) = Ω(√(log k / log log k)). In contrast, the best known upper bound implies that f(k) = O(√(k / log k)). In this article, we obtain the first polynomial relationship between treewidth and grid minor size by showing that f(k) = Ω(k^δ) for some fixed constant δ > 0, and describe a randomized algorithm, whose running time is polynomial in |V(G)| and k, that with high probability finds a model of such a grid minor in G.

Journal ArticleDOI
TL;DR: A framework for approximating the metric TSP based on a novel use of matchings, which allows for generalizations in a natural way and leads to analogous results for the s, t-path traveling salesman problem on graphic metrics where the start and end vertices are prespecified.
Abstract: We present a framework for approximating the metric TSP based on a novel use of matchings. Traditionally, matchings have been used to add edges to make a given graph Eulerian, whereas our approach also allows for the removal of certain edges leading to a decreased cost. For the TSP on graphic metrics (graph-TSP), we show that the approach gives a 1.461-approximation algorithm with respect to the Held-Karp lower bound. For graph-TSP restricted either to half-integral solutions to the Held-Karp relaxation or to a class of graphs that contains subcubic and claw-free graphs, we show that the integrality gap of the Held-Karp relaxation matches the conjectured ratio 4/3. The framework also allows for generalizations in a natural way and leads to analogous results for the s, t-path traveling salesman problem on graphic metrics where the start and end vertices are prespecified.
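
The traditional matching use mentioned in the first sentence is Christofides' algorithm: add a minimum-weight perfect matching on the odd-degree vertices of a spanning tree to make it Eulerian, then shortcut the Euler tour. A sketch with networkx follows (the library and its min_weight_matching / eulerian_circuit helpers are assumed available; API names per recent networkx releases). The article's twist, matchings that may also remove edges, is not attempted here.

import itertools
import networkx as nx

def christofides_sketch(points):
    """MST + min-weight matching on odd-degree vertices + Euler shortcut,
    i.e., the classical 'add edges to make the graph Eulerian' use of
    matchings, on Euclidean points."""
    def d(p, q):
        return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5
    G = nx.Graph()
    for (i, p), (j, q) in itertools.combinations(enumerate(points), 2):
        G.add_edge(i, j, weight=d(p, q))
    T = nx.minimum_spanning_tree(G)
    odd = [v for v in T if T.degree(v) % 2 == 1]          # always evenly many
    M = nx.min_weight_matching(G.subgraph(odd), weight="weight")
    H = nx.MultiGraph(T)
    H.add_edges_from(M)                                    # now all degrees even
    tour, seen = [], set()
    for u, _ in nx.eulerian_circuit(H):                    # shortcut repeats
        if u not in seen:
            seen.add(u); tour.append(u)
    return tour

print(christofides_sketch([(0, 0), (0, 1), (2, 0), (2, 1), (1, 3)]))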

Journal ArticleDOI
TL;DR: The results show that a robust, decidable class can be obtained under the assumptions of anonymity and asynchrony, and the problem is undecidable for finite-state machines operating with synchronization primitives, and already for two communicating pushdown machines.
Abstract: We characterize the complexity of the safety verification problem for parameterized systems consisting of a leader process and arbitrarily many anonymous and identical contributors. Processes communicate through a shared, bounded-value register. While each operation on the register is atomic, there is no synchronization primitive to execute a sequence of operations atomically. We analyze the complexity of the safety verification problem when processes are modeled by finite-state machines, pushdown machines, and Turing machines. The problem is coNP-complete when all processes are finite-state machines, and is PSPACE-complete when they are pushdown machines. The complexity remains coNP-complete when each Turing machine is allowed boundedly many interactions with the register. Our proofs use combinatorial characterizations of computations in the model, and in the case of pushdown systems, some language-theoretic constructions of independent interest. Our results are surprising, because parameterized verification problems on slight variations of our model are known to be undecidable. For example, the problem is undecidable for finite-state machines operating with synchronization primitives, and already for two communicating pushdown machines. Thus, our results show that a robust, decidable class can be obtained under the assumptions of anonymity and asynchrony.

Journal ArticleDOI
TL;DR: Two quantitative extensions of Linear Temporal Logic are introduced, one by propositional quality operators and one by discounting operators, and the usefulness of both extensions is demonstrated and the decidability and complexity of the decision and search problems for them are studied.
Abstract: In recent years, there has been a growing need and interest in formally reasoning about the quality of software and hardware systems. As opposed to traditional verification, in which one considers the question of whether a system satisfies a given specification or not, reasoning about quality addresses the question of how well the system satisfies the specification. We distinguish between two approaches to specifying quality. The first, propositional quality, extends the specification formalism with propositional quality operators, which prioritize and weight different satisfaction possibilities. The second, temporal quality, refines the “eventually” operators of the specification formalism with discounting operators, whose semantics takes into account the delay incurred in their satisfaction. In this article, we introduce two quantitative extensions of Linear Temporal Logic (LTL), one by propositional quality operators and one by discounting operators. In both logics, the satisfaction value of a specification is a number in [0, 1], which describes the quality of the satisfaction. We demonstrate the usefulness of both extensions and study the decidability and complexity of the decision and search problems for them as well as for extensions of LTL that combine both types of operators.
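
To make the discounting idea concrete, here is a minimal finite-trace semantics of our own for one operator: the value of "eventually p" with discount factor d is the best d^i-discounted value of p over positions i, so satisfaction that arrives later is worth less.

def discounted_eventually(trace, prop, d):
    """trace: list of sets of atomic propositions; returns a value in [0, 1]."""
    return max((d ** i) * (1.0 if prop in step else 0.0)
               for i, step in enumerate(trace))

trace = [set(), set(), {"grant"}, set()]
print(discounted_eventually(trace, "grant", d=0.9))
# 0.81: the grant happens, but two steps late, so quality degrades to 0.9^2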

Journal ArticleDOI
TL;DR: In this paper, projection analysis is used to analyze the stopping time of RLNC gossip protocols in a general framework for network and communication models that encompasses and unifies the models used previously in this context.
Abstract: We introduce projection analysis—a new technique to analyze the stopping time of protocols that are based on random linear network coding (RLNC). Projection analysis drastically simplifies, extends, and strengthens previous results on RLNC gossip protocols. We analyze RLNC gossip in a general framework for network and communication models that encompasses and unifies the models used previously in this context. We show, in most settings for the first time, that the RLNC gossip converges with high probability in optimal time. Most stopping times are of the form O(k + T), where k is the number of messages to be distributed and T is the time it takes to disseminate one message. This means RLNC gossip achieves “perfect pipelining.” Our analysis directly extends to highly dynamic networks in which the topology can change completely at any time. This remains true, even if the network dynamics are controlled by a fully adaptive adversary that knows the complete network state. Virtually nothing besides simple O(kT) sequential flooding protocols was previously known for such a setting. While RLNC gossip works in this wide variety of networks our analysis remains the same and extremely simple. This contrasts with more complex proofs that were put forward to give less strong results for various special cases.
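
The quantity that projection analysis tracks can be simulated directly: every node repeatedly sends a random GF(2) combination of the coefficient vectors it holds, and a node can decode all k messages once its vectors reach rank k. The round structure, topology, and seeding below are illustrative assumptions, not the article's general model.

import random

def rank_gf2(vectors):
    """Rank over GF(2); coefficient vectors are integer bitmasks."""
    pivots, r = {}, 0
    for v in vectors:
        while v:
            msb = v.bit_length() - 1
            if msb in pivots:
                v ^= pivots[msb]        # reduce by the stored pivot
            else:
                pivots[msb] = v; r += 1; break
    return r

def rlnc_gossip(adj, sources, k, seed=0):
    """Each round, every node sends one random GF(2) combination of what it
    holds to a uniformly random neighbour; stop when all nodes have rank k."""
    rng = random.Random(seed)
    held = {v: list(sources.get(v, [])) for v in adj}
    rounds = 0
    while any(rank_gf2(held[v]) < k for v in adj):
        rounds += 1
        outgoing = []
        for v in adj:
            if held[v]:
                combo = 0
                for vec in held[v]:
                    if rng.random() < 0.5:
                        combo ^= vec
                outgoing.append((rng.choice(sorted(adj[v])), combo))
        for target, combo in outgoing:   # deliveries are synchronous
            held[target].append(combo)
    return rounds

adj = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}     # a path network
print(rlnc_gossip(adj, {0: [0b01, 0b10]}, k=2))  # rounds until all can decode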

Journal ArticleDOI
TL;DR: A general primal-dual approach to solving semidefinite programs, using a generalization of the well-known multiplicative weights update rule to symmetric matrices, yields combinatorial approximation algorithms that are significantly more efficient than interior point methods.
Abstract: Semidefinite programs (SDPs) have been used in many recent approximation algorithms. We develop a general primal-dual approach to solve SDPs using a generalization of the well-known multiplicative weights update rule to symmetric matrices. For a number of problems, such as Sparsest Cut and Balanced Separator in undirected and directed weighted graphs, Min UnCut and Min 2CNF Deletion, this yields combinatorial approximation algorithms that are significantly more efficient than interior point methods. The design of our primal-dual algorithms is guided by a robust analysis of rounding algorithms used to obtain integer solutions from fractional ones. Our ideas have proved useful in quantum computing, especially the recent result of Jain et al. [2011] that QIP = PSPACE.
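
The update at the heart of the method is compact in numpy: after observing symmetric loss matrices M_1, …, M_t, play the density matrix proportional to exp(−η Σ M_i), the matrix analogue of multiplicative weights. The step size and losses below are illustrative only; this shows the update rule, not the surrounding primal-dual SDP machinery.

import numpy as np

def matrix_mwu(loss_matrices, eta=0.5):
    """Yield the matrix-multiplicative-weights iterates
    W_t = exp(-eta * (M_1 + ... + M_t)) / trace(...)."""
    d = loss_matrices[0].shape[0]
    total = np.zeros((d, d))
    for M in loss_matrices:
        total += M
        w, U = np.linalg.eigh(-eta * total)  # symmetric matrix exponential
        W = (U * np.exp(w)) @ U.T
        yield W / np.trace(W)

losses = [np.diag([1.0, 0.0]), np.array([[0.0, 0.5], [0.5, 1.0]])]
for W in matrix_mwu(losses):
    print(np.round(W, 3))  # density matrices shifting weight off high-loss directions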

Journal ArticleDOI
TL;DR: An O(poly log k)-approximation algorithm is shown for EDPwC with congestion c = 2, by rounding the standard multicommodity flow relaxation of the problem, which gives the best possible congestion for a sub-polynomial approximation of EDPwC via this relaxation.
Abstract: In the Edge-Disjoint Paths with Congestion problem (EDPwC), we are given an undirected n-vertex graph G, a collection M = {(s_1, t_1), …, (s_k, t_k)} of pairs of vertices called demand pairs, and an integer c. The goal is to connect the maximum possible number of the demand pairs by paths, so that the maximum edge congestion - the number of paths sharing any edge - is bounded by c. When the maximum allowed congestion is c = 1, this is the classical Edge-Disjoint Paths problem (EDP). The best current approximation algorithm for EDP achieves an O(√n)-approximation by rounding the standard multi-commodity flow relaxation of the problem. This matches the Ω(√n) lower bound on the integrality gap of this relaxation. We show an O(poly log k)-approximation algorithm for EDPwC with congestion c = 2 by rounding the same multi-commodity flow relaxation. This gives the best possible congestion for a sub-polynomial approximation of EDPwC via this relaxation. Our results are also close to optimal in terms of the number of pairs routed, since EDPwC is known to be hard to approximate to within a factor of Ω̃((log n)^{1/(c+1)}) for any constant congestion c. Prior to our work, the best approximation factor for EDPwC with congestion 2 was O(n^{3/7}), and the best algorithm achieving a polylogarithmic approximation required congestion 14.

Journal ArticleDOI
TL;DR: In this article, the authors studied the computational complexity of exact minimization of rational-valued discrete functions and established a dichotomy theorem with respect to exact solvability for all finite-valued constraint languages defined on domains of arbitrary finite size.
Abstract: We study the computational complexity of exact minimization of rational-valued discrete functions. Let Γ be a set of rational-valued functions on a fixed finite domain; such a set is called a finite-valued constraint language. The valued constraint satisfaction problem, VCSP(Γ), is the problem of minimizing a function given as a sum of functions from Γ. We establish a dichotomy theorem with respect to exact solvability for all finite-valued constraint languages defined on domains of arbitrary finite size. We show that every constraint language Γ either admits a binary symmetric fractional polymorphism, in which case the basic linear programming relaxation solves any instance of VCSP(Γ) exactly, or Γ satisfies a simple hardness condition that allows for a polynomial-time reduction from Max-Cut to VCSP(Γ).

Journal ArticleDOI
TL;DR: The main result is analogous to Austrin and Mossel’s, bypassing their Unique-Games Conjecture assumption whenever the predicate is an abelian subgroup, and improves the NP-hardness of approximating Independent-Set on bounded-degree graphs, Almost-Coloring, Label-Cover, and various other problems.
Abstract: We show optimal (up to a constant factor) NP-hardness for a maximum constraint satisfaction problem with k variables per constraint (Max-kCSP) whenever k is larger than the domain size. This follows from our main result concerning CSPs given by a predicate: A CSP is approximation resistant if its predicate contains a subgroup that is balanced pairwise independent. Our main result is analogous to Austrin and Mossel’s, bypassing their Unique-Games Conjecture assumption whenever the predicate is an abelian subgroup. Our main ingredient is a new gap-amplification technique inspired by XOR lemmas. Using this technique, we also improve the NP-hardness of approximating Independent-Set on bounded-degree graphs, Almost-Coloring, Label-Cover, and various other problems.

Journal ArticleDOI
TL;DR: This article addresses the problem of Byzantine agreement, to bring processors to agreement on a bit in the presence of a strong adversary by introducing a method that uses spectral analysis to identify processors that have thwarted this goal by flipping biased coins.
Abstract: We address the problem of Byzantine agreement, to bring processors to agreement on a bit in the presence of a strong adversary. This adversary has full information of the state of all processors, the ability to control message scheduling in an asynchronous model, and the ability to control the behavior of a constant fraction of processors that it may choose to corrupt adaptively. In 1983, Ben-Or proposed an algorithm for solving this problem with expected exponential communication time. In this article, we improve that result to require expected polynomial communication time and computation time. Like Ben-Or’s algorithm, our algorithm uses coinflips from individual processors to repeatedly try to generate a fair global coin. We introduce a method that uses spectral analysis to identify processors that have thwarted this goal by flipping biased coins.

Journal ArticleDOI
TL;DR: In this paper, the authors consider higher-dimensional versions of Kannan and Lipton's Orbit Problem, and show that when ν has dimension one, this problem is solvable in polynomial time.
Abstract: We consider higher-dimensional versions of Kannan and Lipton’s Orbit Problem—determining whether a target vector space ν may be reached from a starting point x under repeated applications of a linear transformation A. Answering two questions posed by Kannan and Lipton in the 1980s, we show that when ν has dimension one, this problem is solvable in polynomial time, and when ν has dimension two or three, the problem is in NP^{RP}.
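
For contrast with the article's decision procedures, the naive approach just iterates the map: check whether A^t x lies in the target space for t up to a cutoff. The sketch below (with an illustrative rotation instance) can only ever answer "yes"; deciding "no" without an a-priori bound on t is precisely what makes the problem hard.

import numpy as np

def hits_subspace(A, x, V, max_steps=1000, tol=1e-9):
    """Return the first t <= max_steps with A^t x in span(V), else None."""
    Q, _ = np.linalg.qr(V)                # orthonormal basis of the target space
    v = x.astype(float)
    for t in range(max_steps + 1):
        residual = v - Q @ (Q.T @ v)      # component of v outside span(V)
        if np.linalg.norm(residual) <= tol * max(1.0, np.linalg.norm(v)):
            return t
        v = A @ v
    return None                           # inconclusive: no hit up to the cutoff

A = np.array([[0.0, -1.0], [1.0, 0.0]])   # rotation by 90 degrees
x = np.array([1.0, 0.0])
V = np.array([[0.0], [1.0]])              # target: the y-axis (dimension one)
print(hits_subspace(A, x, V))             # 1, since A x = (0, 1) lies on the y-axis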

Journal ArticleDOI
TL;DR: In this article, a polynomial-time encodable/decodable code for oblivious channels was proposed, which is the first one known for any channel model other than adversarial channels.
Abstract: We consider coding schemes for computationally bounded channels, which can introduce an arbitrary set of errors as long as (a) the fraction of errors is bounded with high probability by a parameter p and (b) the process that adds the errors can be described by a sufficiently “simple” circuit. Codes for such channel models are attractive since, like codes for standard adversarial errors, they can handle channels whose true behavior is unknown or varying over time. For two classes of channels, we provide explicit, efficiently encodable/decodable codes of optimal rate where only inefficiently decodable codes were previously known. In each case, we provide one encoder/decoder that works for every channel in the class. The encoders are randomized, and probabilities are taken over the (local, unknown to the decoder) coins of the encoder and those of the channel.
Unique decoding for additive errors: We give the first construction of a polynomial-time encodable/decodable code for additive (a.k.a. oblivious) channels that achieves the Shannon capacity 1 − H(p). These are channels that add an arbitrary error vector e ∈ {0, 1}^N of weight at most pN to the transmitted word; the vector e can depend on the code but not on the randomness of the encoder or the particular transmitted word. Such channels capture binary symmetric errors and burst errors as special cases.
List decoding for polynomial-time channels: For every constant c > 0, we construct codes with optimal rate (arbitrarily close to 1 − H(p)) that efficiently recover a short list containing the correct message with high probability for channels describable by circuits of size at most N^c. Our construction is not fully explicit but rather Monte Carlo (we give an algorithm that, with high probability, produces an encoder/decoder pair that works for all size-N^c channels). We are not aware of any channel models considered in the information theory literature, other than purely adversarial channels, that require more than linear-size circuits to implement. We justify the relaxation to list decoding with an impossibility result showing that, in a large range of parameters (p > 1/4), codes that are uniquely decodable for a modest class of channels (online, memoryless, nonuniform channels) cannot have positive rate.

Journal ArticleDOI
TL;DR: In this article, it was shown that for the case k = n, Codebreaker can find the secret code with O(n log log n) guesses, where n is the number of positions and colors.
Abstract: We analyze the general version of the classic guessing game Mastermind with n positions and k colors. Since the case k ≤ n^{1−ε}, ε > 0 a constant, is well understood, we concentrate on larger numbers of colors. For the most prominent case k = n, our results imply that Codebreaker can find the secret code with O(n log log n) guesses. This bound is valid also when only black answer pegs are used. It improves the O(n log n) bound first proven by Chvátal. We also show that if both black and white answer pegs are used, then the O(n log log n) bound holds for up to n^2 log log n colors. These bounds are almost tight, as the known lower bound of Ω(n) shows. Unlike for k ≤ n^{1−ε}, simply guessing at random until the secret code is determined is not sufficient. In fact, we show that an optimal nonadaptive strategy (deterministic or randomized) needs Θ(n log n) guesses.
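
The game and its black-peg feedback are easy to simulate. The baseline below guesses a random code still consistent with all feedback received so far; the article's O(n log log n) strategy for k = n is considerably more refined. Parameters are illustrative.

import itertools
import random

def black_pegs(secret, guess):
    """Black answer pegs: positions where guess and secret agree."""
    return sum(s == g for s, g in zip(secret, guess))

def play(n=4, seed=0):
    """Mastermind with n positions and n colors, black pegs only: guess a
    random code consistent with every answer so far, until the code is found."""
    rng = random.Random(seed)
    secret = tuple(rng.randrange(n) for _ in range(n))
    candidates = list(itertools.product(range(n), repeat=n))  # all n^n codes
    guesses = 0
    while True:
        guess = rng.choice(candidates)
        guesses += 1
        b = black_pegs(secret, guess)
        if b == n:
            return guesses
        candidates = [c for c in candidates if black_pegs(c, guess) == b]

print("guesses used:", play())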

Journal ArticleDOI
TL;DR: This work shows that for any parameter k = k(n), there are unsatisfiable k-CNFs that possess refutations of width O(k), but such that any tree-like refutation must necessarily have doubly exponential size exp(n^{Ω(k)}).
Abstract: We exhibit an unusually strong tradeoff in propositional proof complexity that significantly deviates from the established pattern of almost all results of this kind. Namely, restrictions on one resource (width, in our case) imply an increase in another resource (tree-like size) that is exponential not only with respect to the complexity of the original problem, but also to the whole class of all problems of the same bit size. More specifically, we show that for any parameter k = k(n), there are unsatisfiable k-CNFs that possess refutations of width O(k), but such that any tree-like refutation of width n^{1−ε}/k must necessarily have doubly exponential size exp(n^{Ω(k)}). This means that there exist contradictions that allow narrow refutations, but in order to keep the size of such a refutation even within a single exponent, it must necessarily use a high degree of parallelism. Our construction and proof methods combine, in a non-trivial way, two previously known techniques: the hardness escalation method based on substitution formulas, and expansion. This combination results in a hardness compression approach that strives to preserve the hardness of a contradiction while significantly decreasing the number of its variables.

Journal ArticleDOI
TL;DR: It is shown that for these problems, polynomial-sized linear programs are no more powerful than programs arising from a constant number of rounds of the Sherali-Adams hierarchy.
Abstract: We prove super-polynomial lower bounds on the size of linear programming relaxations for approximation versions of constraint satisfaction problems. We show that for these problems, polynomial-sized linear programs are no more powerful than programs arising from a constant number of rounds of the Sherali-Adams hierarchy. In particular, any polynomial-sized linear program for Max Cut has an integrality gap of 1/2 and any such linear program for Max 3-Sat has an integrality gap of 7/8.