
Showing papers in "SIAM Journal on Computing in 2015"


Journal ArticleDOI
TL;DR: A simple randomized linear time algorithm achieving a tight approximation guarantee of 1/2 is presented, thus matching the known hardness result of Feige, Mirrokni, and Vondrak.
Abstract: We consider the \sf Unconstrained Submodular Maximization problem in which we are given a nonnegative submodular function $f:2^{\mathcal{N}}\rightarrow \mathbb{R}^+$, and the objective is to find a subset $S\subseteq \mathcal{N}$ maximizing $f(S)$. This is one of the most basic submodular optimization problems, having a wide range of applications. Some well-known problems captured by \sf Unconstrained Submodular Maximization include \sf Max-Cut, \sf Max-DiCut, and variants of \sf Max-SAT and maximum facility location. We present a simple randomized linear time algorithm achieving a tight approximation guarantee of 1/2, thus matching the known hardness result of Feige, Mirrokni, and Vondrak [SIAM J. Comput., 40 (2011), pp. 1133--1153]. Our algorithm is based on an adaptation of the greedy approach which exploits certain symmetry properties of the problem.
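The algorithm described above, widely known as the randomized double-greedy algorithm, can be sketched as follows. This is an illustrative sketch, assuming $f$ is given as a value oracle; the helper names and the 4-vertex Max-Cut instance used as the submodular function are not from the paper.

```python
import random

def double_greedy(ground, f):
    """Randomized double greedy for unconstrained submodular maximization.
    X grows from the empty set, Y shrinks from the full ground set; each
    element is kept or discarded with probability proportional to its
    (nonnegative part of the) marginal gain on either side."""
    X, Y = set(), set(ground)
    for u in ground:
        a = f(X | {u}) - f(X)        # gain of adding u to X
        b = f(Y - {u}) - f(Y)        # gain of removing u from Y
        ap, bp = max(a, 0), max(b, 0)
        p = 1.0 if ap + bp == 0 else ap / (ap + bp)
        if random.random() < p:
            X.add(u)
        else:
            Y.discard(u)
    return X                          # X == Y when the loop ends

# toy instance: cut functions of graphs are submodular (Max-Cut)
edges = [(0, 1), (1, 2), (2, 3), (0, 3), (1, 3)]
cut = lambda S: sum(1 for u, v in edges if (u in S) != (v in S))
S = double_greedy(range(4), cut)
```

The guarantee $\mathbb{E}[f(S)] \ge \tfrac{1}{2}\,\mathrm{OPT}$ holds in expectation over the coin flips; a single run may return less.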

247 citations


Journal ArticleDOI
TL;DR: A novel grammar representation that allows efficient random access to any character or substring without decompressing the string is presented.
Abstract: Grammar-based compression, where one replaces a long string by a small context-free grammar that generates the string, is a simple and powerful paradigm that captures (sometimes with slight reduction in efficiency) many of the popular compression schemes, including the Lempel--Ziv family, run-length encoding, byte-pair encoding, Sequitur, and Re-Pair. In this paper, we present a novel grammar representation that allows efficient random access to any character or substring without decompressing the string. Let $S$ be a string of length $N$ compressed into a context-free grammar $\mathcal{S}$ of size $n$. We present two representations of $\mathcal{S}$ achieving $O(\log N)$ random access time, and either $O(n\cdot\alpha_k(n))$ construction time and space on the pointer machine model, or $O(n)$ construction time and space on the RAM. Here, $\alpha_k(n)$ is the inverse of the $k$th row of Ackermann's function. Our representations also efficiently support decompression of any substring in $S$: we can decompres...
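As a baseline for the random-access primitive: a context-free grammar in which every nonterminal is annotated with the length of its expansion supports access to the $i$-th character by descending the derivation tree, in time proportional to the grammar's depth. This is only an illustrative sketch (the paper's representations achieve $O(\log N)$ time independent of depth); the grammar and names below are made up.

```python
def access(grammar, lengths, symbol, i):
    """Return the i-th character of the expansion of `symbol`.
    grammar maps a nonterminal to a (left, right) pair or a terminal string;
    lengths maps each symbol to the length of its expansion."""
    while True:
        rhs = grammar[symbol]
        if isinstance(rhs, str):      # terminal: we have reached position i
            return rhs
        left, right = rhs
        if i < lengths[left]:         # position lies in the left subtree
            symbol = left
        else:                         # skip the left expansion entirely
            i -= lengths[left]
            symbol = right

# a tiny grammar generating "abab": S -> CC, C -> AB, A -> 'a', B -> 'b'
grammar = {'A': 'a', 'B': 'b', 'C': ('A', 'B'), 'S': ('C', 'C')}
lengths = {'A': 1, 'B': 1, 'C': 2, 'S': 4}
```

Note that the walk never expands the grammar, which is the point: access works directly on the compressed representation.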

114 citations


Journal ArticleDOI
TL;DR: This work investigates pattern formation by a swarm of mobile robots, which is closely related to the agreement problem in distributed computing, and characterizes the set of geometric patterns formable by oblivious robots.
Abstract: We investigate pattern formation, i.e., self-organization, by a swarm of mobile robots, which is closely related with the agreement problem in distributed computing. Consider a system of anonymous mobile robots in a 2-dimensional Euclidean space in which each robot repeatedly executes a “Look-Compute-Move” cycle, to observe the positions of all the robots, to compute a route to the next position using an algorithm, and then to trace the route, where the algorithm is common to all robots. The robots are said to be fully synchronous if their Look-Compute-Move cycles are completely synchronized, and the $i$th Look, Compute, and Move of all robots start and end simultaneously. They are said to be asynchronous if no assumptions are made on their synchrony. The robots are said to be oblivious if they have no memory to memorize the execution history and hence behave based only on the robots' positions observed during the immediately preceding Look. We show that the set of geometric patterns formable by oblivious...

104 citations


Journal ArticleDOI
TL;DR: Given an instance of a hard decision problem, a limited goal is to compress that instance into a smaller, equivalent instance of a second problem.
Abstract: Given an instance of a hard decision problem, a limited goal is to compress that instance into a smaller, equivalent instance of a second problem. As one example, consider the problem where, given ...

102 citations


Journal ArticleDOI
TL;DR: In this article, a new randomized algorithm is proposed to find a coloring as in Spencer's result, based on a restricted random walk called Edge-Walk, whose analysis does not appeal to existential arguments.
Abstract: Minimizing the discrepancy of a set system is a fundamental problem in combinatorics. One of the cornerstones in this area is the celebrated six standard deviations result of Spencer [Trans. Amer. Math. Soc., 289 (1985), pp. 679--706]: In any system of $n$ sets in a universe of size $n$, there always exists a coloring which achieves discrepancy $6\sqrt{n}$. The original proof of Spencer was existential in nature and did not give an efficient algorithm to find such a coloring. Recently, a breakthrough work of Bansal [Proceedings of FOCS, 2010, pp. 3--10] gave an efficient algorithm which finds such a coloring. His algorithm was based on an SDP relaxation of the discrepancy problem and a clever rounding procedure. In this work we give a new randomized algorithm to find a coloring as in Spencer's result based on a restricted random walk we call Edge-Walk. Our algorithm and its analysis use only basic linear algebra and is truly constructive in that it does not appeal to the existential arguments, giving a ne...
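For concreteness, the discrepancy of a coloring $\chi:[n]\to\{-1,+1\}$ of a set system is $\max_S |\sum_{i\in S}\chi(i)|$; a uniformly random coloring achieves $O(\sqrt{n\log n})$ with high probability, and Spencer's theorem improves this to $6\sqrt{n}$. A small sketch of the quantity being minimized (the random set system below is illustrative, not the paper's construction):

```python
import random

def discrepancy(sets, coloring):
    """disc(chi) = max over sets S of |sum_{i in S} chi(i)|."""
    return max(abs(sum(coloring[i] for i in S)) for S in sets)

random.seed(0)
n = 64
# a random system of n sets over a universe of size n
sets = [{i for i in range(n) if random.random() < 0.5} for _ in range(n)]
chi = [random.choice((-1, 1)) for _ in range(n)]   # uniformly random coloring
d = discrepancy(sets, chi)
# Chernoff bounds give d = O(sqrt(n log n)) with high probability;
# Spencer guarantees some coloring with discrepancy at most 6*sqrt(n).
```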

98 citations


Journal ArticleDOI
TL;DR: A near-linear-time randomized combinatorial construction that transforms any graph on $n$ vertices into an $O(n\log n)$-edge graph on the same vertices whose cuts have approximately the same value as the original graph's.
Abstract: We describe random sampling techniques for approximately solving problems that involve cuts and flows in graphs. We give a near-linear-time randomized combinatorial construction that transforms any graph on $n$ vertices into an $O(n\log n)$-edge graph on the same vertices whose cuts have approximately the same value as the original graph's. In this new graph, for example, we can run the $\tilde{O}(m^{3/2})$-time maximum flow algorithm of Goldberg and Rao to find an $s$-$t$ minimum cut in $\tilde{O}(n^{3/2})$ time. This corresponds to a $(1+\epsilon)$-times minimum $s$-$t$ cut in the original graph. A related approach leads to a randomized divide-and-conquer algorithm producing an approximately maximum flow in $\tilde{O}(m\sqrt{n})$ time. Our algorithm can also be used to improve the running time of sparsest cut approximation algorithms from $\tilde{O}(mn)$ to $\tilde{O}(n^2)$ and to accelerate several other recent cut and flow algorithms. Our algorithms are based on a general theorem analyzing the concent...
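The sampling idea can be illustrated with a uniform-sampling toy: keep each edge with probability $p$ and weight the survivors by $1/p$, so every cut's expected weight equals its original value. This is not the paper's construction (which samples edges at non-uniform rates to guarantee concentration for all cuts with only $O(n\log n)$ edges); the dense example below is illustrative.

```python
import random

def sparsify(edges, p):
    """Keep each edge independently with probability p, reweighted by 1/p,
    so that every cut's weight is preserved in expectation."""
    return [(u, v, 1.0 / p) for (u, v) in edges if random.random() < p]

def cut_weight(weighted_edges, S):
    return sum(w for u, v, w in weighted_edges if (u in S) != (v in S))

random.seed(1)
n = 40
K = [(u, v) for u in range(n) for v in range(u + 1, n)]   # complete graph
H = sparsify(K, 0.3)                                      # ~30% of the edges
S = set(range(n // 2))
exact = sum(1 for u, v in K if (u in S) != (v in S))      # 20 * 20 = 400
approx = cut_weight(H, S)                                 # concentrates near 400
```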

90 citations


Journal ArticleDOI
TL;DR: A relaxed version of the partition bound of Jain and Klauck is defined and it is proved that it lower bounds the information complexity of any function.
Abstract: We show that almost all known lower bound methods for communication complexity are also lower bounds for the information complexity. In particular, we define a relaxed version of the partition bound of Jain and Klauck [Proceedings of the 2010 IEEE 25th Annual Conference on Computational Complexity, 2010, pp. 247--258] and prove that it lower bounds the information complexity of any function. Our relaxed partition bound subsumes all norm-based methods (e.g., the $\gamma_2$ method) and rectangle-based methods (e.g., the rectangle/corruption bound, the smooth rectangle bound, and the discrepancy bound), except the partition bound. Our result uses a new connection between rectangles and zero-communication protocols, where the players can either output a value or abort. We prove, using a sampling protocol designed by Braverman and Weinstein [in Approximation, Randomization, and Combinatorial Optimization, Lecture Notes in Comput. Sci. 7408, Springer, Heidelberg, 2012, pp. 459--470], the following compression l...

82 citations


Journal ArticleDOI
TL;DR: It is shown that all potential maximal cliques of $G$ can be enumerated in time ${\cal O}(1.7347^n)$, which implies the existence of an exact exponential algorithm with running time ${\cal O}(1.7347^n)$ for many NP-hard problems related to finding maximum induced subgraphs with different properties.
Abstract: We obtain an algorithmic metatheorem for the following optimization problem. Let $\varphi$ be a counting monadic second order logic (CMSO) formula and $t\geq 0$ be an integer. For a given graph $G=(V,E)$, the task is to maximize $|X|$ subject to the following: there is a set $ F\subseteq V$ such that $X\subseteq F $, the subgraph $G[F]$ induced by $F$ is of treewidth at most $t$, and the structure $(G[F],X)$ models $\varphi$, i.e., $(G[F],X)\models\varphi$. We give an algorithm solving this optimization problem on any $n$-vertex graph $G$ in time ${\cal O}(|\Pi_G| \cdot n^{t+4}\cdot f(t,\varphi))$, where $\Pi_G$ is the set of all potential maximal cliques in $G$ and $f$ is a function of $t$ and $\varphi$ only. Pipelined with the known bounds on the number of potential maximal cliques in different graph classes, there are a plethora of algorithmic consequences extending and subsuming many known results on polynomial-time algorithms for graph classes. We also show that all potential maximal cliques of $G$ can be enumerated in time ${\cal O}(1.7347^n)$. This implies the existence of an exact exponential algorithm of running time ${\cal O}(1.7347^n)$ for many NP-hard problems related to finding maximum induced subgraphs with different properties.

82 citations


Journal ArticleDOI
TL;DR: It is proved that for a fixed H, every graph excluding H as a topological subgraph has a tree decomposition where each part is either “almost embeddable” to a fixed surface or has bounded degree with the exception of a bounded number of vertices.
Abstract: We generalize the structure theorem of Robertson and Seymour for graphs excluding a fixed graph $H$ as a minor to graphs excluding $H$ as a topological subgraph. We prove that for a fixed $H$, every graph excluding $H$ as a topological subgraph has a tree decomposition where each part is either “almost embeddable” to a fixed surface or has bounded degree with the exception of a bounded number of vertices. Furthermore, we prove that such a decomposition is computable by an algorithm that is fixed-parameter tractable with parameter $|H|$. We present two algorithmic applications of our structure theorem. To illustrate the mechanics of a “typical” application of the structure theorem, we show that on graphs excluding $H$ as a topological subgraph, Partial Dominating Set (find $k$ vertices whose closed neighborhood has maximum size) can be solved in time $f(H,k)\cdot n^{O(1)}$. More significantly, we show that on graphs excluding $H$ as a topological subgraph, Graph Isomorphism can be solved in time $n^{f(H)}$. This result unifies and generalizes two previously known important polynomial time solvable cases of Graph Isomorphism: bounded-degree graphs [E. M. Luks, J. Comput. System Sci., 25 (1982), pp. 42--65] and $H$-minor free graphs [I. N. Ponomarenko, Zap. Nauchn. Sem. Leningrad. Otdel. Mat. Inst. Steklov. (LOMI), 174 (1988), pp. 147--177, 182]. The proof of this result needs a generalization of our structure theorem to the context of invariant treelike decomposition.

81 citations


Journal ArticleDOI
TL;DR: In this paper, the authors studied the problem of minimizing the number of unsatisfied clauses in a 3-CNF instance, where the objective function is a sum of functions from a given constraint language with different arity.
Abstract: Let $D$, called the domain, be a fixed finite set and let $\Gamma$, called the valued constraint language, be a fixed set of functions of the form $f:D^m\to\mathbb{Q}\cup\{\infty\}$, where different functions might have different arity $m$. We study the valued constraint satisfaction problem parametrized by $\Gamma$, denoted by VCSP$(\Gamma)$. These are minimization problems given by $n$ variables and the objective function given by a sum of functions from $\Gamma$, each depending on a subset of the $n$ variables. For example, if $D=\{0,1\}$ and $\Gamma$ contains all ternary $\{0,\infty\}$-valued functions, VCSP($\Gamma$) corresponds to 3-SAT. More generally, if $\Gamma$ contains only $\{0,\infty\}$-valued functions, VCSP($\Gamma$) corresponds to CSP($\Gamma$). If $D=\{0,1\}$ and $\Gamma$ contains all ternary $\{0,1\}$-valued functions, VCSP($\Gamma$) corresponds to Min-3-SAT, in which the goal is to minimize the number of unsatisfied clauses in a 3-CNF instance. Finite-valued constraint languages contain functions that take on only rational values and not infinite values. Our main result is a precise algebraic characterization of valued constraint languages whose instances can be solved exactly by the basic linear programming relaxation (BLP). For a valued constraint language $\Gamma$, BLP is a decision procedure for $\Gamma$ if and only if $\Gamma$ admits a symmetric fractional polymorphism of every arity. For a finite-valued constraint language $\Gamma$, BLP is a decision procedure if and only if $\Gamma$ admits a symmetric fractional polymorphism of some arity, or equivalently, if $\Gamma$ admits a symmetric fractional polymorphism of arity 2. Using these results, we obtain tractability of several novel classes of problems, including problems over valued constraint languages that are (1) submodular on arbitrary lattices; (2) $k$-submodular on arbitrary finite domains; (3) weakly (and hence strongly) tree submodular on arbitrary trees.

65 citations


Journal ArticleDOI
TL;DR: An algorithm for maintaining a maximal matching in a graph under addition and deletion of edges is presented; as a direct corollary, a factor 2 approximate maximum matching can be maintained in expected amortized $O(\log n)$ time per update.
Abstract: We present an algorithm for maintaining maximal matching in a graph under addition and deletion of edges. Our algorithm is randomized and it takes expected amortized $O(\log n)$ time for each edge update, where $n$ is the number of vertices in the graph. While there exists a trivial $O(n)$ time algorithm for each edge update, the previous best known result for this problem is due to Ivković and Lloyd [Lecture Notes in Comput. Sci. 790, Springer-Verlag, London, 1994, pp. 99--111]. For a graph with $n$ vertices and $m$ edges, they gave an $O( {(n+ m)}^{0.7072})$ update time algorithm which is sublinear only for a sparse graph. For the related problem of maximum matching, Onak and Rubinfeld [Proceedings of STOC'10, Cambridge, MA, 2010, pp. 457--464] designed a randomized algorithm that achieves expected amortized $O(\log^2 n)$ time for each update for maintaining a $c$-approximate maximum matching for some unspecified large constant $c$. In contrast, we can maintain a factor 2 approximate maximum matching in expected amortized $O(\log n )$ time per update as a direct corollary of the maximal matching scheme. This in turn also implies a 2-approximate vertex cover maintenance scheme that takes expected amortized $O(\log n )$ time per update.
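The trivial $O(n)$-per-update baseline mentioned in the abstract can be sketched as follows, for contrast with the paper's $O(\log n)$ scheme. The class and method names are illustrative.

```python
class NaiveMaximalMatching:
    """Maximal matching under edge updates, the naive O(n)-per-update way:
    match an inserted edge if both endpoints are free; on deleting a matched
    edge, rescan each freed endpoint's neighborhood for a free partner."""

    def __init__(self):
        self.adj = {}      # vertex -> set of neighbors
        self.mate = {}     # mate[u] == v iff edge (u, v) is in the matching

    def _try_match(self, u):
        if u in self.mate:
            return
        for v in self.adj.get(u, ()):   # O(deg(u)) = O(n) scan
            if v not in self.mate:
                self.mate[u] = v
                self.mate[v] = u
                return

    def insert(self, u, v):
        self.adj.setdefault(u, set()).add(v)
        self.adj.setdefault(v, set()).add(u)
        if u not in self.mate and v not in self.mate:
            self.mate[u] = v
            self.mate[v] = u

    def delete(self, u, v):
        self.adj[u].discard(v)
        self.adj[v].discard(u)
        if self.mate.get(u) == v:       # a matched edge disappeared:
            del self.mate[u], self.mate[v]
            self._try_match(u)          # rescan both freed endpoints
            self._try_match(v)

m = NaiveMaximalMatching()
for u, v in [(1, 2), (2, 3), (3, 4)]:
    m.insert(u, v)
m.delete(1, 2)   # (3, 4) stays matched, and the matching remains maximal
```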

Journal ArticleDOI
TL;DR: A deterministic rendezvous algorithm with cost polynomial in the size of the graph and in the length of the smaller label is presented, decreasing the cost exponentially in the size of the graph and doubly exponentially in the labels of the agents.
Abstract: Two mobile agents starting at different nodes of an unknown network have to meet. This task is known in the literature as rendezvous. Each agent has a different label which is a positive integer known to it but unknown to the other agent. Agents move in an asynchronous way: the speed of agents may vary and is controlled by an adversary. The cost of a rendezvous algorithm is the total number of edge traversals by both agents until their meeting. The only previous deterministic algorithm solving this problem has cost exponential in the size of the graph and in the larger label. In this paper we present a deterministic rendezvous algorithm with cost polynomial in the size of the graph and in the length of the smaller label. Hence, we decrease the cost exponentially in the size of the graph and doubly exponentially in the labels of agents. As an application of our rendezvous algorithm we solve several fundamental problems involving teams of unknown size larger than 1 of labeled agents moving asynchronously in...

Journal ArticleDOI
TL;DR: This work studies a new framework for property testing of probability distributions, by considering distribution testing algorithms that have access to a conditional sampling oracle that takes as input a subset of the domain of the unknown probability distribution and returns a draw from the conditional probability distribution restricted to S.
Abstract: We study a new framework for property testing of probability distributions, by considering distribution testing algorithms that have access to a conditional sampling oracle. This is an oracle that takes as input a subset $S \subseteq [N]$ of the domain $[N]$ of the unknown probability distribution ${\cal D}$ and returns a draw from the conditional probability distribution ${\cal D}$ restricted to $S$. This new model allows considerable flexibility in the design of distribution testing algorithms; in particular, testing algorithms in this model can be adaptive. We study a wide range of natural distribution testing problems in this new framework and some of its variants, giving both upper and lower bounds on query complexity. These problems include testing whether ${\cal D}$ is the uniform distribution ${\cal U}$; testing whether ${\cal D} = {\cal D}^\ast$ for an explicitly provided ${\cal D}^\ast$; testing whether two unknown distributions ${\cal D}_1$ and ${\cal D}_2$ are equivalent; and estimating the va...
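A minimal model of the conditional sampling oracle can make the framework concrete. The code below is an illustrative model, not any particular tester from the paper; the pairwise comparison at the end is the basic primitive: conditioning on a two-element set $\{x, y\}$ lets one estimate $p(x)/(p(x)+p(y))$.

```python
import random

def cond_sample(dist, S):
    """COND oracle: draw from the distribution `dist` (a dict x -> prob)
    conditioned on the subset S of its domain."""
    support = [x for x in S if dist.get(x, 0) > 0]
    total = sum(dist[x] for x in support)
    r, acc = random.random() * total, 0.0
    for x in support:
        acc += dist[x]
        if r <= acc:
            return x
    return support[-1]        # guard against floating-point round-off

random.seed(2)
dist = {0: 0.7, 1: 0.1, 2: 0.1, 3: 0.1}
draws = [cond_sample(dist, {0, 1}) for _ in range(2000)]
frac0 = draws.count(0) / len(draws)   # concentrates near 0.7 / 0.8 = 0.875
```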

Journal ArticleDOI
TL;DR: The techniques extend the collision-finding oracle due to Simon and the reconstruction paradigm of Gennaro and Trevisan to the setting of interactive protocols, yielding tight lower bounds for several other cryptographic protocols, such as single-server private information retrieval, interactive hashing, and oblivious transfer that guarantees statistical security for one of the parties.
Abstract: We study the round and communication complexities of various cryptographic protocols. We give tight lower bounds on the round and communication complexities of any fully black-box reduction of a statistically hiding commitment scheme from one-way permutations and from trapdoor permutations. As a corollary, we derive similar tight lower bounds for several other cryptographic protocols, such as single-server private information retrieval, interactive hashing, and oblivious transfer that guarantees statistical security for one of the parties. Our techniques extend the collision-finding oracle due to Simon [Advances in Cryptology---EUROCRYPT'98, Lecture Notes in Comput. Sci. 1403, Springer, Berlin, 1998, pp. 334--345] to the setting of interactive protocols and the reconstruction paradigm of Gennaro and Trevisan [Proceedings of the 41st Annual Symposium on Foundations of Computer Science (FOCS), IEEE Press, Piscataway, NJ, 2000, pp. 305--313].

Journal ArticleDOI
TL;DR: It is shown that the interactive information complexity $\mathsf{IC}(f)$ of a function $f$ is equal to the amortized (randomized) communication complexity of $f$, and the first general connection between information complexity and (nonamortized) communication complexity is given.
Abstract: The primary goal of this paper is to define and study the interactive information complexity of functions. Let $f(x,y)$ be a function, and suppose Alice is given $x$ and Bob is given $y$. Informally, the interactive information complexity $\mathsf{IC}(f)$ of $f$ is the least amount of information Alice and Bob need to reveal to each other to compute $f$. Previously, information complexity has been defined with respect to a prior distribution on the input pairs $(x,y)$. Our first goal is to give a definition that is independent of the prior distribution. We show that several possible definitions are essentially equivalent. We establish some basic properties of the interactive information complexity $\mathsf{IC}(f)$. In particular, we show that $\mathsf{IC}(f)$ is equal to the amortized (randomized) communication complexity of $f$. We also show a direct sum theorem for $\mathsf{IC}(f)$ and give the first general connection between information complexity and (nonamortized) communication complexity. This conn...

Journal ArticleDOI
TL;DR: An $n^{O(\log n)}$-time blackbox polynomial identity testing algorithm for unknown-order read-once oblivious arithmetic branching programs (ROABPs) is given and the proof is simpler and involves a new technique called basis isolation.
Abstract: We give an $n^{O(\log n)}$-time ($n$ is the input size) blackbox polynomial identity testing algorithm for unknown-order read-once oblivious arithmetic branching programs (ROABPs). The best time complexity known for blackbox polynomial identity testing (PIT) for this class was $n^{O(\log^2 n)}$ due to Forbes, Saptharishi, and Shpilka [Proceedings of the 2014 ACM Symposium on Theory of Computing, 2014, pp. 867--875]. Moreover, their result holds only when the individual degree is small, while we do not need any such assumption. With this, we match the time complexity for the unknown-order ROABP with the known-order ROABP (due to Forbes and Shpilka [Proceedings of the 2013 IEEE 54th Annual Symposium on Foundations of Computer Science, 2013, pp. 243--252]) and also with the depth-3 set-multilinear circuits (due to Agrawal, Saha, and Saxena [Proceedings of the 2013 ACM Symposium on Theory of Computing, 2013, pp. 321--330]). Our proof is simpler and involves a new technique called basis isolation. The depth-3 ...

Journal ArticleDOI
TL;DR: This paper gives a complementary pivot algorithm for computing an equilibrium for Arrow--Debreu markets under separable, piecewise-linear concave (SPLC) utilities and gives a direct proof of membership of such markets in PPAD.
Abstract: Using Lemke's scheme, we give a complementary pivot algorithm for computing an equilibrium for Arrow--Debreu markets under separable, piecewise-linear concave (SPLC) utilities. Despite the polynomial parity argument on directed graphs (PPAD) completeness of this case, experiments indicate that our algorithm is practical---on randomly generated instances, the number of iterations it needs is linear in the total number of segments (i.e., pieces) in all the utility functions specified in the input. Our paper settles a number of open problems: (1) Eaves (1976) gave an LCP formulation and a Lemke-type algorithm for the linear Arrow--Debreu model. We generalize both to the SPLC case, hence settling the relevant part of his open problem. (2) Our path following algorithm for SPLC markets, together with a result of Todd (1976), gives a direct proof of membership of such markets in PPAD and settles a question of Vazirani and Yannakakis (2011). (3) We settle a question of Devanur and Kannan (2008) of obtaining a “sy...

Journal ArticleDOI
TL;DR: Even for the unweighted case, a polynomial time approximation scheme (PTAS) for a fundamental class of objects called pseudodisks (which includes halfspaces, disks, unit-height rectangles, translates of convex sets, etc.) is curren...
Abstract: Weighted geometric set-cover problems arise naturally in several geometric and nongeometric settings (e.g., the breakthrough of Bansal and Pruhs [Proceedings of FOCS, 2010, pp. 407--414] reduces a wide class of machine scheduling problems to weighted geometric set cover). More than two decades of research has succeeded in settling the $(1+\epsilon)$-approximability status for most geometric set-cover problems, except for some basic scenarios which are still lacking. One is that of weighted disks in the plane for which, after a series of papers, Varadarajan [Proceedings of STOC'10, 2010, pp. 641--648] presented a clever quasi-sampling technique, which together with improvements by Chan et al. [Proceedings of SODA, 2012, pp. 1576--1585], yielded an $O(1)$-approximation algorithm. Even for the unweighted case, a polynomial time approximation scheme (PTAS) for a fundamental class of objects called pseudodisks (which includes halfspaces, disks, unit-height rectangles, translates of convex sets, etc.) is curren...

Journal ArticleDOI
TL;DR: An analogous lower bound on the support size of $(1+\epsilon)$-approximate mixed strategies for random two-player zero-sum 0/1-matrix games is shown.
Abstract: We give a lower bound on the iteration complexity of a natural class of Lagrangian-relaxation algorithms for approximately solving packing/covering linear programs. We show that, given an input with $m$ random 0/1-constraints on $n$ variables, with high probability, any such algorithm requires $\Omega(\rho \log(m)/\epsilon^2)$ iterations to compute a $(1+\epsilon)$-approximate solution, where $\rho$ is the width of the input. The bound is tight for a range of the parameters $(m,n,\rho,\epsilon)$. The algorithms in the class include Dantzig--Wolfe decomposition, Benders' decomposition, Lagrangian relaxation as developed by Held and Karp for lower-bounding TSP, and many others (e.g., those by Plotkin, Shmoys, and Tardos and Grigoriadis and Khachiyan). To prove the bound, we use a discrepancy argument to show an analogous lower bound on the support size of $(1+\epsilon)$-approximate mixed strategies for random two-player zero-sum 0/1-matrix games.
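The small-support approximate mixed strategies that the discrepancy argument bounds can be produced by multiplicative-weights dynamics: the row player's average strategy over $T$ rounds of play against best responses is approximately minimax and has support at most $T$. The sketch below is illustrative (a generic MW loop, not the paper's construction).

```python
import math

def mw_equilibrium(A, T, eta):
    """Multiplicative weights for the row player of a matrix game A
    (row minimizes, column best-responds each round). Returns the average
    row strategy, whose support size is at most T."""
    m, n = len(A), len(A[0])
    w = [1.0] * m
    avg = [0.0] * m
    for _ in range(T):
        tot = sum(w)
        p = [wi / tot for wi in w]
        # column player's best response: column maximizing expected payoff
        j = max(range(n), key=lambda c: sum(p[i] * A[i][c] for i in range(m)))
        for i in range(m):
            w[i] *= math.exp(-eta * A[i][j])   # penalize rows hit by column j
            avg[i] += p[i] / T
    return avg

# matching-pennies-like 0/1 game with value 1/2
A = [[1, 0], [0, 1]]
p = mw_equilibrium(A, 400, 0.1)   # average strategy near (1/2, 1/2)
```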

Journal ArticleDOI
TL;DR: It is shown that solving a subproblem (\sf SingleMap), mapping a single workload to the physical graph, essentially suffices for solving the general resource allocation problem in cloud computing environments.
Abstract: We study a basic resource allocation problem that arises in cloud computing environments. The physical network of the cloud is represented as a graph with vertices representing servers and edges corresponding to communication links. A workload is a set of processes with processing requirements and mutual communication requirements. The workloads arrive and depart over time, and the resource allocator must map each workload upon arrival to the physical network. We consider the objective of minimizing the congestion. We show that solving a subproblem (\sf SingleMap) about mapping a single workload to the physical graph essentially suffices for solving the general problem. In particular, an $\alpha$-approximation algorithm for \sf SingleMap gives an $O(\alpha \log nD)$ competitive algorithm for the general problem, where $n$ is the number of nodes in the physical network and $D$ is the maximum to minimum workload duration ratio. We then consider the \sf SingleMap problem for two natural classes of workloads,...

Journal ArticleDOI
TL;DR: In this paper, it is shown that the parameters of a Gaussian mixture with a fixed number of components can be learned using a sample whose size is polynomial in the dimension and all other parameters.
Abstract: The question of polynomial learnability of probability distributions, particularly Gaussian mixture distributions, has recently received significant attention in theoretical computer science and machine learning. However, despite major progress, the general question of polynomial learnability of Gaussian mixture distributions still remained open. The current work resolves the question of polynomial learnability for Gaussian mixtures in high dimension with an arbitrary fixed number of components. Specifically, we show that parameters of a Gaussian distribution with a fixed number of components can be learned using a sample whose size is polynomial in dimension and all other parameters. The result on learning Gaussian mixtures relies on an analysis of distributions belonging to what we call polynomial families in low dimension. These families are characterized by their moments being polynomial in parameters and include almost all common probability distributions as well as their mixtures and products. Using...

Journal ArticleDOI
TL;DR: This work proposes a new approach to anisotropic mesh generation, relying on the notion of anisotropic Delaunay meshes: meshes in which the star of each vertex $v$ consists of simplices that are Delaunay for the metric associated with $v$.
Abstract: Anisotropic meshes are triangulations of a given domain in the plane or in higher dimensions, with elements elongated along prescribed directions. Anisotropic triangulations are known to be well suited for interpolation of functions or solving PDEs. Assuming that the anisotropic shape requirements for mesh elements are given through a metric field varying over the domain, we propose a new approach to anisotropic mesh generation, relying on the notion of anisotropic Delaunay meshes. An anisotropic Delaunay mesh is defined as a mesh in which the star of each vertex $v$ consists of simplices that are Delaunay for the metric associated to vertex $v$. This definition works in any dimension and allows us to define a simple refinement algorithm. The algorithm takes as input a domain and a metric field and provides, after completion, an anisotropic mesh whose elements are sized and shaped according to the metric field.

Journal ArticleDOI
TL;DR: This work presents a deterministic local routing algorithm that is guaranteed to find a path between any pair of vertices in a half-$\theta_6$-graph (the half-$\theta_6$-graph is equivalent to the Delaunay triangulation where the empty region is an equilateral triangle).
Abstract: We present a deterministic local routing algorithm that is guaranteed to find a path between any pair of vertices in a half-$\theta_6$-graph (the half-$\theta_6$-graph is equivalent to the Delaunay triangulation where the empty region is an equilateral triangle). The length of the path is at most $5/\sqrt{3} \approx 2.887$ times the Euclidean distance between the pair of vertices. Moreover, we show that no local routing algorithm can achieve a better routing ratio, thereby proving that our routing algorithm is optimal. This is somewhat surprising because the spanning ratio of the half-$\theta_6$-graph is 2, meaning that even though there always exists a path whose length is at most twice the Euclidean distance, we cannot always find such a path when routing locally. Since every triangulation can be embedded in the plane as a half-$\theta_6$-graph using $O(\log n)$ bits per vertex coordinate via Schnyder's embedding scheme [W. Schnyder, Embedding planar graphs on the grid, in Proceedings of the 1st Annual ...

Journal ArticleDOI
TL;DR: In a dynamic scenario, when stations can join the channel at arbitrary rounds, there is a nonadaptive deterministic algorithm guaranteeing a successful transmission for each station in only slightly more time: $O(k\log n\log\log n)$ in the worst case, which almost matches the $\Omega(k\log n/\log k)$ lower bound.
Abstract: A classical problem in addressing a decentralized multiple-access channel is resolving conflicts when a set of stations attempt to transmit at the same time on a shared communication channel. In a static scenario, i.e., when all stations are activated simultaneously, Komlos and Greenberg [IEEE Trans. Inform. Theory, 31 (1985), pp. 302--306] in their seminal work showed that it is possible to resolve the conflict among $k$ stations from an ensemble of $n$, with a nonadaptive deterministic algorithm in time $O(k + k \log(n/k))$ in the worst case. In this paper we show that in a dynamic scenario, when the stations can join the channel at arbitrary rounds, there is a nonadaptive deterministic algorithm guaranteeing a successful transmission for each station in only a slightly bigger time: $O(k\log n\log\log n)$ in the worst case. This almost matches the $\Omega(k\log n/\log k)$ lower bound by Greenberg and Winograd [J. ACM, 32 (1985), pp. 589--596] that holds even in much stronger settings: for adaptive algor...

Journal ArticleDOI
TL;DR: A randomized mechanism that in expectation selects an agent with at least half the maximum number of nominations is proposed, which is best possible subject to impartiality and resolves a conjecture of Alon et al.
Abstract: We study a fundamental problem in social choice theory, the selection of a member of a set of agents based on impartial nominations by agents from that set. Studied previously by Alon et al. [Proceedings of TARK, 2011, pp. 101--110] and by Holzman and Moulin [Econometrica, 81 (2013), pp. 173--196], this problem arises when representatives are selected from within a group or when publishing or funding decisions are made based on a process of peer review. Our main result concerns a randomized mechanism that in expectation selects an agent with at least half the maximum number of nominations. This is best possible subject to impartiality and resolves a conjecture of Alon et al. Further results are given for the case where some agent receives many nominations and the case where each agent casts at least one nomination.
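For intuition, impartiality alone is easy to achieve at some cost in approximation: the classical random-partition mechanism below counts only nominations cast across a random cut, so no agent's ballot can influence its own chance of winning. This simpler mechanism is illustrative only; it does not achieve the paper's optimal factor of 1/2.

```python
import random

def partition_select(nominations, rng=None):
    """Impartial selection via a random 2-partition: split the agents
    into sides A and B; only nominations cast from A to B count, and
    the winner is drawn from B. An agent's own nominations can never
    affect its own selection, which is the impartiality requirement.
    `nominations` maps each agent to the set of agents it nominates."""
    rng = rng or random.Random(0)  # seeded for reproducibility
    agents = list(nominations)
    side_a = {v for v in agents if rng.random() < 0.5}
    side_b = [v for v in agents if v not in side_a]
    if not side_b:
        return None
    score = {v: sum(1 for u in side_a if v in nominations[u]) for v in side_b}
    return max(side_b, key=lambda v: score[v])

noms = {0: {1}, 1: {2}, 2: {1}, 3: {1}}
print(partition_select(noms))  # a member of B with the most A-votes
```

In expectation each nomination survives the cut with probability 1/4, which is why this mechanism falls short of the factor 1/2 achieved in the paper via a more careful (permutation-based) construction.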

Journal ArticleDOI
TL;DR: In quantum complexity, quantum error correcting codes provide central examples in the study of the elusive behavior of multiparticle entanglement, and they have played a crucial role in many computational complexity results, as discussed by the authors.
Abstract: We initiate the study of quantum locally testable codes ($\text{qLTC}$s). Classical $\text{LTC}$s are very important in computational complexity. These codes are defined as the linear subspace satisfying a set of local constraints, with the additional requirement that their soundness, $R(\delta)$, which is the probability that a randomly chosen constraint is violated, is proportional to the proximity $\delta$, where $\delta n$ is the distance of a word from the code. Excellent $\text{LTC}$s exist in the classical world, and they are tightly related to the celebrated $\text{PCP}$ (probabilistically checkable proof) theorem. In quantum complexity, quantum error correcting codes provide central examples in the study of the elusive behavior of multiparticle entanglement, and they have played a crucial role in many computational complexity results. We provide a definition of the quantum analogue of $\text{LTC}$s and motivate it by connecting its central notions in the study of both entanglement and quantum Ha...
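For intuition, the classical soundness notion is easy to state in code: it is the fraction of local constraints a given word violates, and for an LTC this fraction must grow with the word's distance from the code. A toy sketch using the binary repetition code (an illustrative example, far from an "excellent" LTC):

```python
def violated_fraction(word, constraints):
    """Fraction of local parity constraints a binary word violates.
    Each constraint is a tuple of positions whose bits must sum to 0
    mod 2. For an LTC this fraction ('soundness') must scale with the
    word's distance from the code."""
    bad = sum(1 for c in constraints if sum(word[i] for i in c) % 2)
    return bad / len(constraints)

# toy LTC: the repetition code {00...0, 11...1} with the local
# 2-query constraints x_i + x_{i+1} = 0 (indices mod n)
n = 8
constraints = [(i, (i + 1) % n) for i in range(n)]
print(violated_fraction([0] * n, constraints))              # codeword -> 0.0
print(violated_fraction([0, 0, 0, 0, 1, 1, 1, 1], constraints))  # -> 0.25
```

The second word is at distance $n/2$ from the code yet violates only a quarter of the constraints; constructing codes where soundness is genuinely proportional to proximity, let alone quantum analogues, is the hard part.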

Journal ArticleDOI
TL;DR: This work considers the problem of finding a minimum edge cost subgraph of a graph satisfying both given node-connectivity requirements and degree upper bounds on nodes, and presents an iterative rounding algorithm of the biset linear programming relaxation for this problem.
Abstract: We consider the problem of finding a minimum edge cost subgraph of a graph satisfying both given node-connectivity requirements and degree upper bounds on nodes. We present an iterative rounding algorithm of the biset linear programming relaxation for this problem. For directed graphs and $k$-out-connectivity requirements from a root, our algorithm computes a solution that is a 2-approximation on the cost, and the degree of each node $v$ in the solution is at most $2b(v) + O(k)$, where $b(v)$ is the degree upper bound on $v$. For undirected graphs and element-connectivity requirements with maximum connectivity requirement $k$, our algorithm computes a solution that is a $4$-approximation on the cost, and the degree of each node $v$ in the solution is at most $4b(v)+O(k)$. These ratios improve the previous $O(\log k)$-approximation on the cost and $O(2^k b(v))$-approximation on the degrees. Our algorithms can be used to improve approximation ratios for other node-connectivity problems such as undirected $k...

Journal ArticleDOI
TL;DR: A quasi-polynomial simulation of arithmetic proof systems operating with arithmetic circuits and arithmetic formulas that prove polynomial identities over a field ${\mathbb F} is obtained.
Abstract: We study arithmetic proof systems ${\mathbb P}_c({\mathbb F})$ and $ {\mathbb P}_f({\mathbb F})$ operating with arithmetic circuits and arithmetic formulas, respectively, and that prove polynomial identities over a field ${\mathbb F}$. We establish a series of structural theorems about these proof systems, the main one stating that ${\mathbb P}_c({\mathbb F})$ proofs can be balanced: if a polynomial identity of syntactic degree $ d $ and depth $k$ has a ${\mathbb P}_c({\mathbb F})$ proof of size $s$, then it also has a ${\mathbb P}_c({\mathbb F})$ proof of size $ {\rm poly}(s,d) $ in which every circuit has depth $ O(k+\log^2 d + \log d\cdot \log s) $. As a corollary, we obtain a quasi-polynomial simulation of ${\mathbb P}_c({\mathbb F})$ by ${\mathbb P}_f({\mathbb F})$. Using these results we obtain the following: consider the identities $\det(XY) = \det(X)\cdot\det(Y) \mbox{ and } \det(Z)= z_{11}\cdots z_{nn},$ where $X,Y$, and $ Z$ are $n\times n$ square matrices and $Z$ is a triangular matrix with $z_...
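The determinant identity $\det(XY) = \det(X)\cdot\det(Y)$ whose proof complexity is studied here can at least be sanity-checked numerically; a sketch for $2\times 2$ integer matrices (the helper names are ours):

```python
import random

def det2(m):
    """Determinant of a 2x2 matrix given as nested lists."""
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def matmul2(a, b):
    """Product of two 2x2 matrices."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

rng = random.Random(1)
for _ in range(100):
    X = [[rng.randrange(-9, 10) for _ in range(2)] for _ in range(2)]
    Y = [[rng.randrange(-9, 10) for _ in range(2)] for _ in range(2)]
    # multiplicativity of the determinant on random integer samples
    assert det2(matmul2(X, Y)) == det2(X) * det2(Y)
print("det(XY) = det(X) * det(Y) held on all samples")
```

Checking the identity on samples is of course trivial; the paper's concern is the size and depth of *proofs* of such polynomial identities in the systems ${\mathbb P}_c({\mathbb F})$ and ${\mathbb P}_f({\mathbb F})$.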

Journal ArticleDOI
TL;DR: This work builds a general analytical framework and introduces a general renormalization method for the bifurcation analysis of multiagent systems to prove that, while Turing-complete, influence dynamics of the diffusive type is almost surely asymptotically periodic.
Abstract: Influence systems seek to model how influence, broadly defined, spreads across a dynamic network. We build a general analytical framework which we then use to prove that, while Turing-complete, influence dynamics of the diffusive type is almost surely asymptotically periodic. In addition to resolving the dynamics of a widely used family of multiagent systems, we introduce a general renormalization method for the bifurcation analysis of multiagent systems.
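A minimal instance of diffusive influence dynamics is synchronous neighborhood averaging (DeGroot-style); on a fixed path graph the orbit converges to a fixed point, consistent with the asymptotic periodicity the paper proves in far greater generality. The fixed communication graph and uniform weights below are simplifying assumptions; the paper's model lets the graph depend on the state.

```python
def diffusive_step(opinions, neighbors):
    """One synchronous step of diffusive influence dynamics: every
    agent replaces its opinion by the average over its (closed)
    neighborhood."""
    return [sum(opinions[j] for j in nbrs) / len(nbrs)
            for nbrs in neighbors]

# path graph on 3 agents; closed neighborhoods (agent includes itself)
neighbors = [[0, 1], [0, 1, 2], [1, 2]]
x = [0.0, 0.5, 1.0]
for _ in range(200):
    x = diffusive_step(x, neighbors)
# the orbit settles to a fixed point (period-1 behavior)
print([round(v, 3) for v in x])  # -> [0.5, 0.5, 0.5]
```

The deviations from consensus halve at every step here, so the limit is a fixed point; the paper's contribution is that arbitrary diffusive influence systems, despite being Turing-complete, are almost surely asymptotically periodic.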

Journal ArticleDOI
TL;DR: A fairly complete theory of operator precedence languages is provided, introducing a class of automata with the same recognizing power as the generative power of their grammars and a characterization of their sentences in terms of monadic second-order logic.
Abstract: Operator precedence languages were introduced half a century ago by Robert Floyd to support deterministic and efficient parsing of context-free languages. Recently, we renewed our interest in this class of languages thanks to a few distinguishing properties that make them attractive for exploiting various modern technologies. Precisely, their local parsability enables parallel and incremental parsing, whereas their closure properties make them amenable to automatic verification techniques, including model checking. In this paper we provide a fairly complete theory of this class of languages: we introduce a class of automata with the same recognizing power as the generative power of their grammars; we provide a characterization of their sentences in terms of monadic second-order logic as has been done in previous literature for more restricted language classes such as regular, parenthesis, and input-driven ones; we investigate preserved and lost properties when extending the language sentences from finite ...