# Showing papers in "Theory of Computing in 2015"

••

TL;DR: The problem of finding the most influential nodes in a social network, posed by Domingos and Richardson, is NP-hard; this paper provides the first provable approximation guarantees for efficient algorithms, using an analysis framework based on submodular functions.

Abstract: Models for the processes by which ideas and influence propagate through a social network have been studied in a number of domains, including the diffusion of medical and technological innovations, the sudden and widespread adoption of various strategies in game-theoretic settings, and the effects of "word of mouth" in the promotion of new products. Recently, motivated by the design of viral marketing strategies, Domingos and Richardson posed a fundamental algorithmic problem for such social network processes: if we can try to convince a subset of individuals to adopt a new product or innovation, and the goal is to trigger a large cascade of further adoptions, which set of individuals should we target? We consider this problem in several of the most widely studied models in social network analysis. The optimization problem of selecting the most influential nodes is NP-hard here, and we provide the first provable approximation guarantees for efficient algorithms. Using an analysis framework based on submodular functions, we show that a natural greedy strategy obtains a solution that is provably within 63% of optimal for several classes of models; our framework suggests a general approach for reasoning about the performance guarantees of algorithms for these types of influence problems in social networks. We also provide computational experiments on large collaboration networks, showing that in addition to their provable guarantees, our approximation algorithms significantly outperform node-selection heuristics based on the well-studied notions of degree centrality and distance centrality from the field of social networks.
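The greedy strategy the abstract credits with the 63% (i.e., 1 − 1/e) guarantee can be sketched on a toy submodular objective. Everything here (the graph, the coverage-style spread function) is an illustrative assumption, not the paper's cascade models:

```python
import itertools

# Toy submodular "influence" function: coverage of neighborhoods in a small
# graph. Coverage functions are monotone submodular, so the greedy rule
# below enjoys the same (1 - 1/e) ~ 63% guarantee the abstract describes.
neighbors = {
    0: {0, 1, 2},
    1: {1, 2, 3},
    2: {2, 4},
    3: {3, 4, 5},
    4: {4, 5},
    5: {5, 0},
}

def spread(seed_set):
    """Number of nodes reached by at least one seed (a coverage function)."""
    covered = set()
    for v in seed_set:
        covered |= neighbors[v]
    return len(covered)

def greedy(k):
    """Repeatedly add the node with the largest marginal gain."""
    chosen = set()
    for _ in range(k):
        best = max((v for v in neighbors if v not in chosen),
                   key=lambda v: spread(chosen | {v}))
        chosen.add(best)
    return chosen

k = 2
greedy_set = greedy(k)
opt = max(spread(set(s)) for s in itertools.combinations(neighbors, k))
print(greedy_set, spread(greedy_set), opt)
assert spread(greedy_set) >= (1 - 1 / 2.718281828) * opt
```

On this tiny instance the greedy pick happens to be optimal; the theorem's content is that it can never fall below the 1 − 1/e fraction for any monotone submodular spread function.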

3,729 citations

••

TL;DR: It is shown that univariate multiplicity codes of rate R over fields of prime order can be list-decoded from a (1 − R − ε) fraction of errors in polynomial time (for constant R, ε).

Abstract: We study the list-decodability of multiplicity codes. These codes, which are based on evaluations of high-degree polynomials and their derivatives, have rate approaching 1 while simultaneously allowing for sublinear-time error correction. In this paper, we show that multiplicity codes also admit powerful list-decoding and local list-decoding algorithms that work even in the presence of a large error fraction. In other words, we give algorithms for recovering a polynomial given several evaluations of it and its derivatives, where possibly many of the given evaluations are incorrect. Our first main result shows that univariate multiplicity codes over fields of prime order can be list-decoded up to the so-called "list-decoding capacity." Specifically, we show that univariate multiplicity codes of rate R over fields of prime order can be list-decoded from a (1 − R − ε) fraction of errors in polynomial time (for constant R, ε). This resembles the behavior of the "Folded Reed-Solomon Codes" of Guruswami and Rudra (Trans. Info. Theory 2008). The list-decoding algorithm is based on constructing a differential equation of which the desired codeword is a solution; this differential equation is then solved using a power-series approach (a variation of Hensel lifting) along with other algebraic ideas. Our second main result is a list-decoding algorithm for decoding multivariate multiplicity codes up to their Johnson radius. The key ingredient of this algorithm is the construction of a special family of "algebraically-repelling" curves passing through the points of F^m; no moderate-degree multivariate polynomial over F^m can simultaneously vanish on all these curves. A version of this paper was posted online as an Electronic Colloquium on Computational Complexity Technical Report (20). Supported in part by a Sloan Fellowship and NSF grant CCF-1253886.
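The encoding side of a multiplicity code (the paper's object of study; its contribution is the list decoding, which is not attempted here) can be sketched for multiplicity 2 over a small prime field. The field size and polynomial are illustrative assumptions:

```python
p = 13  # small prime field size (illustrative, not from the paper)

def poly_eval(coeffs, x):
    """Evaluate sum(c_i * x^i) mod p via Horner (coeffs low-degree first)."""
    r = 0
    for c in reversed(coeffs):
        r = (r * x + c) % p
    return r

def derivative(coeffs):
    """Formal derivative of the polynomial, coefficients mod p."""
    return [(i * c) % p for i, c in enumerate(coeffs)][1:]

def encode(coeffs):
    """Multiplicity-2 codeword: the pair (f(a), f'(a)) for every a in F_p."""
    d = derivative(coeffs)
    return [(poly_eval(coeffs, a), poly_eval(d, a)) for a in range(p)]

f = [3, 1, 4, 1]  # f(x) = 3 + x + 4x^2 + x^3
codeword = encode(f)
print(codeword[:3])  # [(3, 1), (9, 12), (3, 3)]
```

With s derivatives per point and degree < k, the rate is roughly k/(s·p), which is how these codes push rate toward 1 while retaining local structure.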

84 citations

••

TL;DR: This article showed that the border rank of the matrix multiplication operator for n × n matrices is at least 2n^2 − n, improving the best previously known lower bound for all n ≥ 3; the bound is obtained by finding new equations that bilinear maps of small border rank must satisfy.

Abstract: The border rank of the matrix multiplication operator for n × n matrices is a standard measure of its complexity. Using techniques from algebraic geometry and representation theory, we show the border rank is at least 2n^2 − n. Our bounds are better than the previous lower bound (due to Lickteig in 1985) of 3n^2/2 + n/2 − 1 for all n ≥ 3. The bounds are obtained by finding new equations that bilinear maps of small border rank must satisfy, i.e., new equations for secant varieties of triple Segre products, that matrix multiplication fails to satisfy.
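As a quick sanity check, the two bounds quoted in the abstract can be compared numerically for small n:

```python
from fractions import Fraction

# Compare the paper's lower bound 2n^2 - n with Lickteig's 3n^2/2 + n/2 - 1.
# Pure arithmetic; the two formulas are quoted from the abstract.
def new_bound(n):
    return 2 * n * n - n

def lickteig_bound(n):
    return Fraction(3 * n * n, 2) + Fraction(n, 2) - 1

for n in range(3, 8):
    assert new_bound(n) > lickteig_bound(n)  # strictly better for all n >= 3
    print(n, new_bound(n), lickteig_bound(n))
```

At n = 2 the two bounds coincide (both equal 6), which is why the abstract claims an improvement only for n ≥ 3.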

53 citations

••

TL;DR: Any value-distinguishing approximation of the Jones polynomial at these non-lattice roots of unity is #P-hard; this follows fairly directly from the universality result and Aaronson's theorem that PostBQP = PP.

Abstract: Freedman, Kitaev, and Wang (2002), and later Aharonov, Jones, and Landau (2009), established a quantum algorithm to "additively" approximate the Jones polynomial V(L;t) at any principal root of unity t. The strength of this additive approximation depends exponentially on the bridge number of the link presentation. Freedman, Larsen, and Wang (2002) established that the approximation is universal for quantum computation at a non-lattice, principal root of unity. We show that any value-distinguishing approximation of the Jones polynomial at these non-lattice roots of unity is #P-hard. Given the power to decide whether |V(L;t)| ≤ a or |V(L;t)| ≥ b for fixed constants 0 ≤ a < b, one can solve #P-hard problems; similarly, the Tutte polynomial T(G; x; y) is #P-hard to approximate within a factor of c even for planar graphs G. Along the way, we clarify and generalize both Aaronson's theorem and the Solovay-Kitaev theorem.

42 citations

••

TL;DR: A deterministic algorithm is presented which, given a graph G with n vertices and an integer 1 ≤ m ≤ n, computes a quantity that, for an absolute constant γ > 0, allows one to tell apart the graphs that do not have m-subsets of high density from the graphs that have sufficiently many m-subsets of high density.

Abstract: We present a deterministic algorithm which, given a graph G with n vertices and an integer 1 ≤ m ≤ n, computes [...] where γ > 0 is an absolute constant. This allows us to tell apart the graphs that do not have m-subsets of high density from the graphs that have sufficiently many m-subsets of high density, even when the probability to hit such a subset at random is exponentially small in m. ACM Classification: F.2.1, G.1.2, G.2.2, I.1.2 AMS Classification: 15A15, 68C25, 68W25, 60C05
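The distinguishing task in the abstract has an obvious exponential-time baseline, sketched below on a toy graph; the paper's point is a deterministic algorithm that avoids this enumeration. The graph and the value of m are illustrative assumptions:

```python
from itertools import combinations

# Brute-force baseline: among all m-subsets of vertices, find the one of
# highest edge density. This enumerates C(n, m) subsets, which is exactly
# the exponential cost the abstract's deterministic algorithm sidesteps.
edges = {(0, 1), (0, 2), (1, 2), (2, 3), (3, 4)}  # toy graph on 5 vertices

def density(S):
    """Fraction of the C(m,2) possible pairs inside S that are edges."""
    pairs = list(combinations(sorted(S), 2))
    present = sum(1 for e in pairs if e in edges)
    return present / len(pairs)

m = 3
best = max(combinations(range(5), m), key=density)
print(best, density(best))  # the triangle {0, 1, 2} has density 1.0
```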

36 citations

••

TL;DR: A simple and elementary proof is provided of Friedgut, Kalai, and Naor's result that if Var(|S|) is much smaller than Var(S), then the sum is largely determined by one of the summands.

Abstract: Let S = a_1 r_1 + a_2 r_2 + ... + a_n r_n be a weighted Rademacher sum. Friedgut, Kalai, and Naor have shown that if Var(|S|) is much smaller than Var(S), then the sum is largely determined by one of the summands. We provide a simple and elementary proof of this result, strengthen it, and extend it in various ways to a more general setting.
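The phenomenon in the theorem can be checked exactly on tiny examples by enumerating all sign vectors; the weight vectors below are illustrative assumptions:

```python
from itertools import product

# Exact check on a tiny example: S = a1*r1 + ... + an*rn with r_i = +-1.
# Enumerate all 2^n sign vectors to compute Var(S) and Var(|S|) exactly.
def variances(weights):
    n = len(weights)
    vals = [sum(w * r for w, r in zip(weights, signs))
            for signs in product((-1, 1), repeat=n)]
    def var(xs):
        mean = sum(xs) / len(xs)
        return sum((x - mean) ** 2 for x in xs) / len(xs)
    return var(vals), var([abs(v) for v in vals])

# Dominant summand: Var(|S|) is tiny compared with Var(S).
print(variances([10.0, 1.0, 1.0]))  # (102.0, 2.0)
# Flat weights: Var(|S|) stays a constant fraction of Var(S).
print(variances([1.0, 1.0, 1.0]))   # (3.0, 0.75)
```

The first case (one weight dominating) is exactly the conclusion the theorem forces whenever Var(|S|) << Var(S).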

20 citations

••

TL;DR: In this paper, the parity graph homomorphism problem is conjectured to exhibit a complexity dichotomy, with every such problem either polynomial-time solvable or ⊕P-complete, and a conjectured characterisation of the easy cases is provided.

Abstract: Given a graph G, we investigate the problem of determining the parity of the number of homomorphisms from G to some other fixed graph H. We conjecture that this problem exhibits a complexity dichotomy, such that all parity graph homomorphism problems are either polynomial-time solvable or ⊕P-complete, and provide a conjectured characterisation of the easy cases. We show that the conjecture is true for the restricted case in which the graph H is a tree, and provide some tools that may be useful in further investigation into the parity graph homomorphism problem, and the problem of counting homomorphisms for other moduli.
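The quantity whose parity is being decided can be computed by brute force on toy graphs; this is only the counting problem itself, not the conjectured dichotomy. The graphs below are illustrative assumptions:

```python
from itertools import product

# Brute-force parity of the number of homomorphisms from G to H.
# Graphs are given as vertex lists and directed edge sets (both
# orientations included for undirected graphs).
def hom_count_parity(G_vertices, G_edges, H_vertices, H_edges):
    count = 0
    for phi in product(H_vertices, repeat=len(G_vertices)):
        if all((phi[u], phi[v]) in H_edges for (u, v) in G_edges):
            count += 1
    return count % 2

# H: a triangle, with edges in both directions.
tri = {(a, b) for a in range(3) for b in range(3) if a != b}
# G: a single edge -> 3 * 2 = 6 homomorphisms, parity 0.
print(hom_count_parity([0, 1], [(0, 1)], range(3), tri))
# G: a single isolated vertex -> 3 homomorphisms, parity 1.
print(hom_count_parity([0], [], range(3), tri))
```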

17 citations

••

TL;DR: In this paper, a bounded-error quantum algorithm is given that makes Õ(n^{1/4} ε^{-1/2}) queries to a function f : {0,1}^n → {0,1}, accepts when f is monotone, and rejects when f is ε-far from being monotone.

Abstract: In this note, we develop a bounded-error quantum algorithm that makes Õ(n^{1/4} ε^{-1/2}) queries to a function f : {0,1}^n → {0,1}, accepts when f is monotone, and rejects when f is ε-far from being monotone. This result gives a super-quadratic improvement compared to the best known randomized algorithm for all ε = o(1). The improvement is cubic when ε = 1/√n.
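The property being tested, distance to monotonicity, can be computed exactly by brute force for tiny n; the quantum tester itself is not reproduced here, and the functions below are illustrative assumptions:

```python
from itertools import product

# Exact relative distance from f : {0,1}^n -> {0,1} to the nearest monotone
# function, by enumerating all 2^(2^n) candidate functions (tiny n only;
# the abstract's point is a sublinear-query tester for this property).
n = 3
points = list(product((0, 1), repeat=n))

def is_monotone(table):
    leq = lambda x, y: all(a <= b for a, b in zip(x, y))
    return all(table[x] <= table[y] for x in points for y in points if leq(x, y))

def dist_to_monotone(f):
    best = len(points)
    for bits in product((0, 1), repeat=len(points)):
        g = dict(zip(points, bits))
        if is_monotone(g):
            best = min(best, sum(f[x] != g[x] for x in points))
    return best / len(points)

anti = {x: 1 - max(x) for x in points}  # 1 only on the all-zeros input
print(dist_to_monotone(anti))  # 0.125: flip one point to reach constant 0
```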

14 citations

••

TL;DR: In this article, it was shown that the problem of determining whether an isometry can be made to produce a separable state is either QMA-complete or QMA(2)-complete, depending upon whether the distance between quantum states is measured by the one-way LOCC norm or the trace norm.

Abstract: We identify a formal connection between physical problems related to the detection of separable (unentangled) quantum states and complexity classes in theoretical computer science. In particular, we show that to nearly every quantum interactive proof complexity class (including BQP, QMA, QMA(2), and QSZK), there corresponds a natural separability testing problem that is complete for that class. Of particular interest is the fact that the problem of determining whether an isometry can be made to produce a separable state is either QMA-complete or QMA(2)-complete, depending upon whether the distance between quantum states is measured by the one-way LOCC norm or the trace norm. We obtain strong hardness results by employing prior work on entanglement purification protocols to prove that for each n-qubit maximally entangled state there exists a fixed one-way LOCC measurement that distinguishes it from any separable state with error probability that decays exponentially in n.

13 citations

••

TL;DR: In this paper, the authors study the role of non-adaptivity in maintaining dynamic data structures in the cell probe model, and show that one can obtain polynomial cell probe lower bounds for non-adaptive data structures.

Abstract: In this paper, we study the role non-adaptivity plays in maintaining dynamic data structures. Roughly speaking, a data structure is non-adaptive if the memory locations it reads and/or writes when processing a query or update depend only on the query or update and not on the contents of previously read cells. We study such non-adaptive data structures in the cell probe model. The cell probe model is one of the least restrictive lower bound models and in particular, cell probe lower bounds apply to data structures developed in the popular word-RAM model. Unfortunately, this generality comes at a high cost: the highest lower bound proved for any data structure problem is only polylogarithmic (when adaptivity is allowed). Our main result demonstrates that one can in fact obtain polynomial cell probe lower bounds for non-adaptive data structures. To shed more light on the seemingly inherent polylogarithmic lower bound barrier, we study several different notions of non-adaptivity and identify key properties that must be dealt with if we are to prove polynomial lower bounds without restrictions on the data structures. Finally, our results also unveil an interesting connection between data structures and depth-2 circuits. This allows us to translate conjectured hard data structure problems into good candidates for high circuit lower bounds; in particular, in the area of linear circuits for linear operators. Building on lower bound proofs for data structures in slightly more restrictive models, we also present a number of properties of linear operators which we believe are worth investigating in the realm of circuit lower bounds.

13 citations

••

TL;DR: In this article, the complexity of arithmetic circuits with division gates over non-commuting variables is studied: lower and upper bounds, completeness, and simulation results are established, and the non-commutative "rational function identity testing" problem is addressed.

Abstract: We initiate the study of the complexity of arithmetic circuits with division gates over non-commuting variables. Such circuits and formulas compute non-commutative rational functions, which, despite their name, can no longer be expressed as ratios of polynomials. We prove some lower and upper bounds, completeness and simulation results, as follows. If X is an n × n matrix consisting of n^2 distinct mutually non-commuting variables, we show that: (i) X^{-1} can be computed by a circuit of polynomial size; (ii) every formula computing some entry of X^{-1} must have size at least 2^{Ω(n)}. We also show that matrix inverse is complete in the following sense: (i) assume that a non-commutative rational function f can be computed by a formula of size s; then there exists an invertible 2s × 2s matrix A whose entries are variables or field elements such that f is an entry of A^{-1}; (ii) if f is a non-commutative polynomial computed by a formula without inverse gates, then A can be taken as an upper triangular matrix with field elements on the diagonal. We show how divisions can be eliminated from non-commutative circuits and formulas which compute polynomials, and we address the non-commutative version of the "rational function identity testing" problem. As it happens, the complexity of both of these procedures depends on a single open problem in invariant theory.
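Square matrices are a convenient model of mutually non-commuting entries, and the top-left block of a 2 × 2 block inverse already exhibits the nested inverses characteristic of non-commutative rational functions. The block sizes and random entries below are illustrative assumptions, not the paper's construction:

```python
import numpy as np

# Matrices do not commute, so they model non-commuting "variables".
# The 2x2 block inverse formula involves a nested inverse:
#   (X^{-1})_{1,1} = (a - b d^{-1} c)^{-1},
# an expression that cannot be flattened into a ratio of polynomials
# in a, b, c, d when the entries do not commute.
rng = np.random.default_rng(0)
k = 3
a, b, c, d = (rng.standard_normal((k, k)) for _ in range(4))
X = np.block([[a, b], [c, d]])  # 2k x 2k matrix of non-commuting blocks

top_left = np.linalg.inv(X)[:k, :k]
schur = np.linalg.inv(a - b @ np.linalg.inv(d) @ c)
print(np.allclose(top_left, schur))  # True
```

The formula assumes d and the Schur complement a − b d⁻¹ c are invertible, which holds almost surely for random Gaussian blocks.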

••

TL;DR: It is shown that for $1 \le k \le \sqrt{2\log_3 n}-(5/2)$, the multiset of isomorphism types of $k$-generated subgroups does not determine a group of order at most $n$.

Abstract: We show that for $1 \le k \le \sqrt{2\log_3 n}-(5/2)$, the multiset of isomorphism types of $k$-generated subgroups does not determine a group of order at most $n$. This answers a question raised by Tim Gowers in connection with the Group Isomorphism problem.
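For the k = 1 case, a classic (and much weaker) illustration of the phenomenon is the Heisenberg group over F_3 versus C3 × C3 × C3: both have order 27 and every non-identity element has order 3, so their multisets of cyclic (1-generated) subgroup types agree even though the groups are not isomorphic. This is a standard example, not the paper's construction:

```python
from itertools import product

# Heisenberg group over F_3: elements (x, y, z) representing the upper
# unitriangular matrix [[1, x, z], [0, 1, y], [0, 0, 1]] mod 3.
def heis_mul(g, h):
    x1, y1, z1 = g
    x2, y2, z2 = h
    return ((x1 + x2) % 3, (y1 + y2) % 3, (z1 + z2 + x1 * y2) % 3)

def order(g, mul, identity):
    n, p = 1, g
    while p != identity:
        p = mul(p, g)
        n += 1
    return n

e = (0, 0, 0)
elements = list(product(range(3), repeat=3))
heis_orders = sorted(order(g, heis_mul, e) for g in elements)

# C3 x C3 x C3: componentwise addition mod 3.
abelian_mul = lambda g, h: tuple((a + b) % 3 for a, b in zip(g, h))
ab_orders = sorted(order(g, abelian_mul, e) for g in elements)

print(heis_orders == ab_orders)  # True: identical element-order multisets
print(heis_mul((1, 0, 0), (0, 1, 0)) == heis_mul((0, 1, 0), (1, 0, 0)))  # False: nonabelian
```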

••

TL;DR: In this paper, it is shown that approximating the ground energy of the Bose-Hubbard model on a graph at fixed particle number is QMA-complete; the QMA-hardness proof encodes the history of an n-qubit computation in the subspace with at most one particle per site (i.e., hard-core bosons).

Abstract: The Bose-Hubbard model is a system of interacting bosons that live on the vertices of a graph. The particles can move between adjacent vertices and experience a repulsive on-site interaction. The Hamiltonian is determined by a choice of graph that specifies the geometry in which the particles move and interact. We prove that approximating the ground energy of the Bose-Hubbard model on a graph at fixed particle number is QMA-complete. In our QMA-hardness proof, we encode the history of an n-qubit computation in the subspace with at most one particle per site (i.e., hard-core bosons). This feature, along with the well-known mapping between hard-core bosons and spin systems, lets us prove a related result for a class of 2-local Hamiltonians defined by graphs that generalizes the XY model. By avoiding the use of perturbation theory in our analysis, we circumvent the need to multiply terms in the Hamiltonian by large coefficients.
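A minimal exact-diagonalization sketch of the quantity being approximated: hard-core bosons at fixed particle number hopping on a small graph. The graph, particle number, and the sign convention of the hopping term are illustrative assumptions:

```python
import numpy as np
from itertools import combinations

# Hard-core bosons on a graph at fixed particle number: basis states are
# k-subsets of vertices (at most one particle per site), and the hopping
# term connects states that differ by moving one particle along an edge.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]  # a 4-cycle
n_sites, n_particles = 4, 2

basis = list(combinations(range(n_sites), n_particles))
index = {s: i for i, s in enumerate(basis)}
H = np.zeros((len(basis), len(basis)))
for s in basis:
    for (u, v) in edges:
        for (src, dst) in ((u, v), (v, u)):
            if src in s and dst not in s:
                t = tuple(sorted(set(s) - {src} | {dst}))
                H[index[s], index[t]] += 1.0

ground_energy = np.linalg.eigvalsh(H)[0]  # smallest eigenvalue
print(len(basis), round(ground_energy, 6))
```

Exact diagonalization costs dimension C(n_sites, n_particles), which explodes quickly; the QMA-completeness result says that even approximating this ground energy is intractable in general.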

••

TL;DR: The aim of this note is to make Sanders's proof of a weaker, quasi-polynomial version of the polynomial Freiman-Ruzsa conjecture accessible to the theoretical computer science community, and in particular to readers who are less familiar with additive combinatorics.

Abstract: The polynomial Freiman-Ruzsa conjecture is one of the most important conjectures in additive combinatorics. It asserts that one can switch between combinatorial and algebraic notions of approximate subgroups with only a polynomial loss in the underlying parameters. This conjecture has also found several applications in theoretical computer science. Recently, Tom Sanders proved a weaker version of the conjecture, with a quasi-polynomial loss in parameters. The aim of this note is to make his proof accessible to the theoretical computer science community, and in particular to readers who are less familiar with additive combinatorics.
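The combinatorial notion of approximate subgroup in the conjecture is governed by the doubling constant |A + A|/|A|. A toy computation in F_2^n (the dimension, set sizes, and random seed are illustrative assumptions):

```python
import random

# Doubling constant |A + A| / |A| for A in F_2^n, with vectors encoded as
# n-bit integers and addition given by XOR. A subspace has doubling exactly
# 1 (the algebraic extreme); a random set of the same size does not.
def doubling(A):
    return len({x ^ y for x in A for y in A}) / len(A)

n = 10
subspace = list(range(16))  # span of e_1..e_4, i.e., the ints 0..15
random.seed(1)
rand_set = random.sample(range(2 ** n), 16)
print(doubling(subspace), doubling(rand_set))
```

The conjecture says, roughly, that any set with small doubling is polynomially close to such an algebraic structure; Sanders's result achieves this with quasi-polynomial loss.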

••

TL;DR: It is shown that for general circuits, certain approximation versions of the problems of deciding full independence and exchangeability are SZK-complete; a bounded-error version of C=P, called BC=P, is also introduced and its structural properties investigated.

Abstract: We consider the problems of deciding whether the joint distribution sampled by a given circuit has certain statistical properties such as being i.i.d., being exchangeable, being pairwise independent, having two coordinates with identical marginals, having two uncorrelated coordinates, and many other variants. We give a proof that simultaneously shows all these problems are C=P-complete, by showing that the following promise problem (which is a restriction of all the above problems) is C=P-complete: Given a circuit, distinguish the case where the output distribution is uniform and the case where every pair of coordinates is neither uncorrelated nor identically distributed. This completeness result holds even for samplers that are depth-3 circuits. We also consider circuits that are d-local, in the sense that each output bit depends on at most d input bits. We give linear-time algorithms for deciding whether a 2-local sampler's joint distribution is fully independent, and whether it is exchangeable. We also show that for general circuits, certain approximation versions of the problems of deciding full independence and exchangeability are SZK-complete. We also introduce a bounded-error version of C=P, which we call BC=P, and we investigate its structural properties. ACM Classification: F.1.3 AMS Classification: 68Q17, 68Q15
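A classic 2-local sampler whose output is pairwise independent but not fully independent illustrates the kind of distribution these decision problems concern; the sampler is an illustrative assumption, not from the paper:

```python
from itertools import product
from collections import Counter

# 2-local sampler: each output bit depends on at most 2 input bits.
# The output (r1, r2, r1 XOR r2) is pairwise independent but not fully
# independent; we enumerate its output distribution exactly.
def sampler(r1, r2):
    return (r1, r2, r1 ^ r2)

outputs = [sampler(r1, r2) for r1, r2 in product((0, 1), repeat=2)]

def marginal(coords):
    """Joint distribution (as counts) of the chosen output coordinates."""
    return Counter(tuple(o[i] for i in coords) for o in outputs)

# Every pair of coordinates is exactly uniform on {0,1}^2 ...
for pair in ((0, 1), (0, 2), (1, 2)):
    assert all(v == 1 for v in marginal(pair).values())
# ... but the triple is supported on only 4 of the 8 possible strings.
print(len(set(outputs)))  # 4, not 8
```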