
Showing papers in "Electronic Colloquium on Computational Complexity in 2012"


Journal Article
Neeraj Kayal
TL;DR: The main theorem is a lower bound on the number of summands s in any representation of the form (1) for an explicit polynomial f; a nearly matching upper bound is an easy corollary of Fischer [Fis94].
Abstract: In this work we consider representations of multivariate polynomials in $\mathbb{F}[x]$ of the form $f(x) = Q_1(x)^{e_1} + Q_2(x)^{e_2} + \cdots + Q_s(x)^{e_s}$, where the $e_i$'s are positive integers and the $Q_i$'s are arbitrary multivariate polynomials of bounded degree. We give an explicit $n$-variate polynomial $f$ of degree $n$ such that any representation of the above form for $f$ requires the number of summands $s$ to be $2^{\Omega(n/d)}$.

Motivation. Let $\mathbb{F}$ be a field, $\mathbb{F}[x]$ be the set of $n$-variate polynomials over $\mathbb{F}$ and $d \geq 1$ be an integer. For a polynomial $f(x) \in \mathbb{F}[x]$, we consider representations of the form

$$f(x) = Q_1(x)^{e_1} + Q_2(x)^{e_2} + \cdots + Q_s(x)^{e_s}, \qquad (1)$$

where the $Q_i(x)$'s are polynomials of degree at most $d$. We do this with an eye towards proving lower bounds for the number of summands $s$ required to write some explicit polynomial $f$ in the above form. Our motivation for this line of inquiry stems from some recent results and problems posed in the field of arithmetic complexity. Agrawal and Vinay [AV08] showed that proving exponential lower bounds for depth four arithmetic circuits implies exponential lower bounds for arbitrary depth arithmetic circuits. In our case, a representation of the form (1) above corresponds to computing $f$ via a depth four ΣΠΣΠ arithmetic circuit where the bottommost layer of multiplication gates has fanin bounded by $d$ and the second-last layer of multiplication gates actually consists of exponentiation gates of arbitrarily large degree (i.e. multiplication gates where all the incoming edges originate from a single node). Meanwhile Hrubes, Wigderson and Yehudayoff [HWY10] look at the situation where $d = e_1 = e_2 = \ldots = e_s = 2$ and ask for a superlinear lower bound on the number of summands $s$ for an explicit $n$-variate biquadratic polynomial $f$. They show that such a superlinear lower bound implies an exponential lower bound on the size of arithmetic circuits computing the noncommutative permanent. Finally Chen, Kayal and Wigderson [CKW11] pose the problem of proving lower bounds for bounded depth arithmetic circuits with addition and exponentiation gates. Our main theorem is a lower bound on the number of summands in any representation of the form (1) for an explicit polynomial.

Theorem 1 (Lower bound for sum of powers). Let $\mathbb{F}$ be any field and $\mathbb{F}[x]$ be the ring of polynomials over the set of indeterminates $x = (x_1, x_2, \ldots, x_n)$. Let $e_1, e_2, \ldots, e_s$ be positive integers and $Q_1, Q_2, \ldots, Q_s \in \mathbb{F}[x]$ be multivariate polynomials each of degree at most $d$. If $Q_1^{e_1} + Q_2^{e_2} + \cdots + Q_s^{e_s} = x_1 \cdot x_2 \cdots x_n$, then we must have $\log s = \Omega\!\left(\frac{n}{2d}\right)$. In particular, if $d$ is a constant then $s = 2^{\Omega(n)}$.

Remark 2. 1. The fact that the $f$ in the lower bound above consists of a single monomial indicates above all the severe limitation of representations of the form (1). 2. An upper bound of $2^{n/d}$ is an easy corollary of Fischer [Fis94]. Specifically, let $\mathbb{F}$ be an algebraically closed field with $\mathrm{char}(\mathbb{F}) > n$. Then for all integers $d \geq 1$ there exist polynomials $Q_1, Q_2, \ldots, Q_s$, each of degree $d$, such that $Q_1^{e_1} + Q_2^{e_2} + \cdots + Q_s^{e_s} = x_1 \cdot x_2 \cdots x_n$ and the number of summands $s$ is at most $2^{n/d}$. Fischer [Fis94] gives an explicit set of $2^{m-1}$ linear forms $\ell_1, \ell_2, \ldots, \ell_{2^{m-1}}$ such that $y_1 \cdot y_2 \cdots y_m = \sum_{i=1}^{2^{m-1}} \ell_i^m$.
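As a sanity check on Fischer's identity (the source of the upper bound in Remark 2), here is a minimal sympy verification of the standard form $y_1 \cdots y_m = \frac{1}{2^{m-1} m!} \sum_{\epsilon \in \{1,-1\}^{m-1}} \epsilon_1 \cdots \epsilon_{m-1} (y_1 + \epsilon_1 y_2 + \cdots + \epsilon_{m-1} y_m)^m$; the helper name fischer_sum is ours. Grouping the $n$ variables into $n/d$ blocks of $d$ variables and applying the identity to the $m = n/d$ block products (each block is a degree-$d$ polynomial) is what yields the $2^{n/d}$ upper bound.

from itertools import product
import math
import sympy as sp

def fischer_sum(ys):
    """Right-hand side of Fischer's identity: a scaled sum of 2^(m-1)
    m-th powers of linear forms in y1..ym."""
    m = len(ys)
    total = 0
    for signs in product([1, -1], repeat=m - 1):
        coeff = math.prod(signs)  # sign of this summand
        form = ys[0] + sum(s * y for s, y in zip(signs, ys[1:]))
        total += coeff * form**m
    return total / (2**(m - 1) * sp.factorial(m))

for m in range(2, 6):
    ys = sp.symbols(f"y1:{m + 1}")
    assert sp.expand(fischer_sum(ys) - sp.Mul(*ys)) == 0
print("Fischer's identity checked for m = 2..5")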

107 citations


Journal Article
TL;DR: The main ingredient is a new gap-amplification technique inspired by XOR-lemmas that improves the NP-hardness of approximating Independent-Set on bounded-degree graphs, Almost-Coloring, Two-Prover-One-Round-Game, and various other problems.
Abstract: We show optimal (up to a constant factor) NP-hardness for a maximum constraint satisfaction problem with k variables per constraint (Max-kCSP) whenever k is larger than the domain size. This follows from our main result concerning CSPs given by a predicate: A CSP is approximation resistant if its predicate contains a subgroup that is balanced pairwise independent. Our main result is analogous to Austrin and Mossel’s, bypassing their Unique-Games Conjecture assumption whenever the predicate is an abelian subgroup. Our main ingredient is a new gap-amplification technique inspired by XOR lemmas. Using this technique, we also improve the NP-hardness of approximating Independent-Set on bounded-degree graphs, Almost-Coloring, Label-Cover, and various other problems.

98 citations


Journal Article
TL;DR: In this article, the authors give a linear algebraic list decoding algorithm that can correct a fraction of errors approaching the code distance, while pinning down the candidate messages to a well-structured affine space of dimension a constant factor smaller than the code dimension.
Abstract: We consider Reed-Solomon (RS) codes whose evaluation points belong to a subfield, and give a linear-algebraic list decoding algorithm that can correct a fraction of errors approaching the code distance, while pinning down the candidate messages to a well-structured affine space of dimension a constant factor smaller than the code dimension. By pre-coding the message polynomials into a subspace-evasive set, we get a Monte Carlo construction of a subcode of Reed-Solomon codes that can be list decoded from a fraction $(1-R-\epsilon)$ of errors in polynomial time (for any fixed $\epsilon > 0$) with a list size of $O(1/\epsilon)$. Our methods extend to algebraic-geometric (AG) codes, leading to a similar claim over constant-sized alphabets. This matches the parameters of recent results based on folded variants of RS and AG codes, but our construction here gives subcodes of Reed-Solomon and AG codes themselves (albeit with restrictions on the evaluation points). Further, the underlying algebraic idea also extends nicely to Gabidulin's construction of rank-metric codes based on linearized polynomials. This gives the first construction of positive rate rank-metric codes list decodable beyond half the distance, and in fact gives codes of rate $R$ list decodable up to the optimal $(1-R-\epsilon)$ fraction of rank errors. A similar claim holds for the closely related subspace codes studied by Koetter and Kschischang. We introduce a new notion called subspace designs as another way to pre-code messages and prune the subspace of candidate solutions. Using these, we also get a deterministic construction of a polynomial time list decodable subcode of RS codes. By using a cascade of several subspace designs, we extend our approach to AG codes, which gives the first deterministic construction of an algebraic code family of rate $R$ with efficient list decoding from a $(1-R-\epsilon)$ fraction of errors over an alphabet of constant size (that depends only on $\epsilon$). The list size bound is almost a constant (governed by $\log^*(\text{block length})$), and the code can be constructed in quasi-polynomial time.

91 citations


Journal Article
TL;DR: It is shown that strong AND- or OR-compression for SAT would imply non-uniform, statistical zero-knowledge proofs for SAT, an even stronger and more unlikely consequence than NP ⊆ coNP/poly.
Abstract: Given an instance of a hard decision problem, a limited goal is to compress that instance into a smaller, equivalent instance of a second problem. As one example, consider the problem where, given ...

82 citations



Journal Article
TL;DR: This template is used to construct pseudorandom generators for combinatorial rectangles and read-once CNFs and a hitting set generator for width-3 branching programs, all of which achieve near-optimal seed-length even in the low-error regime.
Abstract: We present an iterative approach to constructing pseudorandom generators, based on the repeated application of mild pseudorandom restrictions. We use this template to construct pseudorandom generators for combinatorial rectangles and read-once CNFs and a hitting set generator for width-3 branching programs, all of which achieve near-optimal seed-length even in the low-error regime: We get seed-length $\tilde{O}(\log (n/\epsilon))$ for error $\epsilon$. Previously, only constructions with seed-length $O(\log^{3/2} n)$ or $O(\log^2 n)$ were known for these classes with error $\epsilon = 1/\mathrm{poly}(n)$. The (pseudo)random restrictions we use are milder than those typically used for proving circuit lower bounds in that we only set a constant fraction of the bits at a time. While such restrictions do not simplify the functions drastically, we show that they can be derandomized using small-bias spaces.
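For concreteness, one standard explicit small-bias space is the "powering" generator of Alon, Goldreich, Håstad and Peralta: a seed $(x, y) \in \mathbb{F}_{2^m}^2$ is mapped to the output bits $\langle x^i, y \rangle$ over GF(2) for $i = 0, \ldots, n-1$, which is $\epsilon$-biased with $\epsilon \leq (n-1)/2^m$. The sketch below illustrates that generic tool, not this paper's specific derandomization; the field size and the parity test are arbitrary choices of ours.

IRRED = 0b100101  # x^5 + x^2 + 1, irreducible over GF(2); field GF(2^5)
M = 5

def gf_mul(a, b):
    """Carry-less multiplication in GF(2^M), reduced by the irreducible polynomial."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if (a >> M) & 1:
            a ^= IRRED
    return r

def aghp_bits(x, y, n):
    """AGHP generator: output bit i is <x^i, y> over GF(2) (parity of x^i AND y)."""
    out, p = [], 1  # p runs through x^0, x^1, ...
    for _ in range(n):
        out.append(bin(p & y).count("1") & 1)
        p = gf_mul(p, x)
    return out

# Exhaustively check the bias of one parity test over all 2^(2M) = 1024 seeds.
n, S = 8, [0, 3]
count = sum(
    sum(aghp_bits(x, y, n)[i] for i in S) & 1
    for x in range(2**M) for y in range(2**M)
)
print("bias of test S:", abs(count / 4**M - 0.5))  # within the (n-1)/2^M bound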

70 citations


Journal Article
TL;DR: In this paper, the authors explore error-correcting codes derived from the lifting of affine-invariant codes, i.e., linear codes whose coordinates are a vector space over a field and which are invariant under affine transformations of the coordinate space.
Abstract: In this work we explore error-correcting codes derived from the "lifting" of "affine-invariant" codes. Affine-invariant codes are simply linear codes whose coordinates are a vector space over a field and which are invariant under affine transformations of the coordinate space. Lifting takes codes defined over a vector space of small dimension and lifts them to higher dimensions by requiring their restriction to every subspace of the original dimension to be a codeword of the code being lifted. While the operation is of interest on its own, this work focuses on new ranges of parameters that can be obtained by such codes, in the context of local correction and testing. In particular we present four interesting ranges of parameters that can be achieved by such lifts, all of which are new in the context of affine-invariance and some may be new even in general. The main highlight is a construction of high-rate codes with sublinear time decoding. The only prior construction of such codes is due to Kopparty, Saraf and Yekhanin [33]. All our codes are extremely simple, being just lifts of various parity check codes (codes with one symbol of redundancy), and in the final case, the lift of a Reed-Solomon code. We also present a simple connection between certain lifted codes and lower bounds on the size of "Nikodym sets". Roughly, a Nikodym set in $\mathbb{F}_q^m$ is a set $S$ with the property that every point has a line passing through it which is almost entirely contained in $S$. While previous lower bounds on Nikodym sets were roughly growing as $q^m/2^m$, we use our lifted codes to prove a lower bound of $(1 - o(1))q^m$ for fields of constant characteristic.

69 citations


Journal Article
TL;DR: In this article, the authors give exponentially small upper bounds on the success probability for computing the direct product of any function over any distribution using a communication protocol, where the inputs (x, y) are drawn from the distribution μ.
Abstract: We give exponentially small upper bounds on the success probability for computing the direct product of any function over any distribution using a communication protocol. Let $\mathrm{suc}(\mu, f, C)$ denote the maximum success probability of a 2-party communication protocol for computing the boolean function $f(x, y)$ with $C$ bits of communication, when the inputs $(x, y)$ are drawn from the distribution $\mu$. Let $\mu^n$ be the product distribution on $n$ inputs and $f^n$ denote the function that computes $n$ copies of $f$ on these inputs. We prove that if $T \log^{3/2} T \ll (C - 1)\sqrt{n}$ and $\mathrm{suc}(\mu, f, C) < 2/3$, then $\mathrm{suc}(\mu^n, f^n, T) \leq \exp(-\Omega(n))$. When $\mu$ is a product distribution, we prove a nearly optimal result: as long as $T \log^2 T \ll Cn$, we must have $\mathrm{suc}(\mu^n, f^n, T) \leq \exp(-\Omega(n))$.

66 citations


Journal Article
TL;DR: In this article, it was shown that the correlations produced by any entangled strategy which succeeds in the multilinearity test with high probability can always be closely approximated using shared randomness alone.
Abstract: We prove a strong limitation on the ability of entangled provers to collude in a multiplayer game. Our main result is the first nontrivial lower bound on the class MIP* of languages having multi-prover interactive proofs with entangled provers; namely, MIP* contains NEXP, the class of languages decidable in non-deterministic exponential time. While Babai, Fortnow, and Lund (Computational Complexity 1991) proved the celebrated equality MIP = NEXP in the absence of entanglement, ever since the introduction of the class MIP* it was open whether shared entanglement between the provers could weaken or strengthen the computational power of multi-prover interactive proofs. Our result shows that it does not weaken their computational power: MIP* contains MIP. At the heart of our result is a proof that Babai, Fortnow, and Lund's multilinearity test is sound even in the presence of entanglement between the provers, and our analysis of this test could be of independent interest. As a byproduct we show that the correlations produced by any entangled strategy which succeeds in the multilinearity test with high probability can always be closely approximated using shared randomness alone.

65 citations


Journal Article
TL;DR: It is proved that the correlation of a depth-$d$ unbounded fanin circuit of size $S$ with parity of $n$ variables is at most $2^{-\Omega(n/(\log S)^{d-1})}$.
Abstract: We prove that the correlation of a depth-$d$ unbounded fanin circuit of size $S$ with parity of $n$ variables is at most $2^{-\Omega(n/(\log S)^{d-1})}$.

60 citations


Journal Article
TL;DR: This work highlights that constructing an explicit subspace-evasive subset that has small intersection with low-dimensional subspaces (an interesting problem in pseudorandomness in its own right) could lead to explicit codes with better list-decoding guarantees.
Abstract: Folded Reed-Solomon (RS) codes are an explicit family of codes that achieve the optimal tradeoff between rate and list error-correction capability: specifically, for any $\epsilon > 0$, Guruswami and Rudra presented an $n^{O(1/\epsilon)}$ time algorithm to list decode appropriate folded RS codes of rate $R$ from a fraction $1-R-\epsilon$ of errors. The algorithm is based on multivariate polynomial interpolation and root-finding over extension fields. It was noted by Vadhan that interpolating a linear polynomial suffices for a statement of the above form. Here, we give a simple linear-algebra-based analysis of this variant that eliminates the need for the computationally expensive root-finding step over extension fields (and indeed any mention of extension fields). The entire list-decoding algorithm is linear-algebraic, solving one linear system for the interpolation step, and another linear system to find a small subspace of candidate solutions. Except for the step of pruning this subspace, the algorithm can be implemented to run in quadratic time. We also consider a closely related family of codes, called (order $m$) derivative codes and defined over fields of large characteristic, which consist of the evaluations of $f$ as well as its first $m-1$ formal derivatives at $N$ distinct field elements. We show how our linear-algebraic methods for folded RS codes can be used to show that derivative codes can also achieve the above optimal tradeoff. The theoretical drawback of our analysis for folded RS codes and derivative codes is that both the decoding complexity and the proven worst-case list-size bound are $n^{\Omega(1/\epsilon)}$. By combining the above idea with a pseudorandom subset of all polynomials as messages, we get a Monte Carlo construction achieving a list-size bound of $O(1/\epsilon^2)$, which is quite close to the existential $O(1/\epsilon)$ bound (however, the decoding complexity remains $n^{\Omega(1/\epsilon)}$). Our work highlights that constructing an explicit subspace-evasive subset that has small intersection with low-dimensional subspaces (an interesting problem in pseudorandomness in its own right) could lead to explicit codes with better list-decoding guarantees.
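To make the "linear-algebraic" flavor concrete: the simplest instance of this paradigm (one Y-variable, no folding, unique-decoding radius) is the classical Welch-Berlekamp decoder for plain Reed-Solomon codes, which interpolates Q(X, Y) = A(X) + B(X)·Y through all received points by solving one linear system and then reads the message off as -A/B. The toy implementation below, over a small prime field with parameters and helper names of our choosing, is only that special case, not the paper's folded-RS list decoder.

P = 97  # small prime; all arithmetic is over GF(P)

def poly_eval(c, x):
    """Evaluate a polynomial given by coefficient list c (lowest degree first)."""
    r = 0
    for a in reversed(c):
        r = (r * x + a) % P
    return r

def nullspace_vector(M):
    """Return one nonzero vector in the mod-P nullspace of M, via Gaussian elimination."""
    rows, cols = len(M), len(M[0])
    M = [row[:] for row in M]
    pivot_of = {}  # pivot column -> its row
    r = 0
    for c in range(cols):
        piv = next((i for i in range(r, rows) if M[i][c] % P), None)
        if piv is None:  # free column: set it to 1 and back-substitute the pivots
            v = [0] * cols
            v[c] = 1
            for cc, rr in pivot_of.items():
                v[cc] = -M[rr][c] * pow(M[rr][cc], P - 2, P) % P
            return v
        M[r], M[piv] = M[piv], M[r]
        for i in range(rows):
            if i != r and M[i][c] % P:
                f = M[i][c] * pow(M[r][c], P - 2, P) % P
                M[i] = [(a - f * b) % P for a, b in zip(M[i], M[r])]
        pivot_of[c] = r
        r += 1
    return None  # full column rank

def poly_divmod(num, den):
    """Quotient and remainder of polynomial division over GF(P) (lowest degree first)."""
    num = num[:]
    dd = max(i for i, a in enumerate(den) if a % P)  # degree of den
    inv = pow(den[dd], P - 2, P)
    q = [0] * max(len(num) - dd, 1)
    for i in range(len(num) - 1, dd - 1, -1):
        coef = num[i] * inv % P
        q[i - dd] = coef
        for j in range(dd + 1):
            num[i - dd + j] = (num[i - dd + j] - coef * den[j]) % P
    return q, num[:dd]

def wb_decode(points, k, e):
    """Welch-Berlekamp: interpolate A(X) + B(X)Y = 0 through all points
    (deg A < e + k, deg B <= e); the message is then -A/B."""
    system = [[pow(x, j, P) for j in range(e + k)] +
              [y * pow(x, j, P) % P for j in range(e + 1)]
              for x, y in points]
    v = nullspace_vector(system)
    A, B = v[:e + k], v[e + k:]
    q, rem = poly_divmod([-a % P for a in A], B)
    assert not any(rem), "more than e errors"
    return q[:k]

# Degree-4 message, 13 evaluation points, 2 of them corrupted (e <= (n-k)/2 = 4).
f = [3, 1, 4, 1, 5]
pts = [(x, poly_eval(f, x)) for x in range(13)]
pts[2] = (2, (pts[2][1] + 7) % P)
pts[9] = (9, (pts[9][1] + 1) % P)
assert wb_decode(pts, k=5, e=4) == f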

Journal Article
TL;DR: In this article, the authors present a compiler that takes as input an algorithm and a security parameter and produces a functionally equivalent algorithm with running time a factor of $\mathrm{poly}(\kappa)$ slower than the original algorithm.
Abstract: We address the following problem: how to execute any algorithm $P$, for an unbounded number of executions, in the presence of an adversary who observes partial information on the internal state of the computation during executions. The security guarantee is that the adversary learns nothing, beyond $P$'s input-output behavior. Our main result is a compiler, which takes as input an algorithm $P$ and a security parameter $\kappa$ and produces a functionally equivalent algorithm $P'$. The running time of $P'$ is a factor of ${\rm poly}(\kappa)$ slower than $P$. $P'$ will be composed of a series of calls to ${\rm poly}(\kappa)$-time computable subalgorithms. During the executions of $P'$, an adversary algorithm ${\cal A}$, which can choose the inputs of $P'$, can learn the results of adaptively chosen leakage functions---each of bounded output size $\tilde{\Theta}(\kappa)$---on the subalgorithms of $P'$ and the randomness they use. We prove that any computationally unbounded ${\cal A}$ observing the results o...

Journal Article
TL;DR: The typical way to derandomize a randomized computation is to replace the truly random bits used by the computation with suitable pseudorandom bits; the pseudorandom generator producing them must be efficiently computable, and its seed length short enough for all possible seeds to be enumerated in a deterministic simulation.
Abstract: The typical way we derandomize a randomized computation is to replace the truly random bits used by the computation with suitable pseudorandom bits. We require that using pseudorandom, rather than truly random, bits does not change the output of the computation with probability more than ε; we call this parameter the error. The algorithm that produces the pseudorandom bits is called a pseudorandom generator and the number of random bits needed by this algorithm to sample the distribution is called the seed length. The pseudorandom generator needs to be efficiently computable and the seed length needs to be short enough for all possible seeds to be enumerated in a deterministic simulation.
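In code, the deterministic simulation described here is nothing more than enumerating every seed and taking a majority vote, paying a factor of $2^{\text{seed length}}$ in running time. A schematic sketch; the generator and algorithm below are contrived placeholders of ours, not a real pseudorandom generator.

from itertools import product

def derandomize(alg, x, G, seed_len):
    """Run alg on G(seed) for every seed and return the majority answer.
    Correct whenever alg errs with probability < 1/2 under the bits G produces."""
    votes = sum(alg(x, G(seed)) for seed in product([0, 1], repeat=seed_len))
    return votes * 2 > 2**seed_len

# Placeholder generator: pad the seed with zeros (a real PRG must stretch and fool alg).
G = lambda seed: list(seed) + [0] * 8
# Placeholder algorithm: decides "x is even", but errs when its first two
# random bits are both 1 (error probability 1/4 under uniform bits).
alg = lambda x, r: (x % 2 == 0) != ((r[0] & r[1]) == 1)

assert derandomize(alg, x=4, G=G, seed_len=4) is True   # 4 is even
assert derandomize(alg, x=7, G=G, seed_len=4) is False  # 7 is odd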

Journal Article
TL;DR: In this article, it was shown that a random linear code over $\mathbb{F}_q$, with probability arbitrarily close to 1, is list decodable at radius $1 - 1/q - \epsilon$ with list size $L = O(1/\epsilon^2)$ and rate $R = \Omega_q(\epsilon^2/\log^3(1/\epsilon))$, up to the polylogarithmic factor in $1/\epsilon$ and constant factors depending on $q$, and that the desired average-distance guarantees hold for a code provided that a natural complex matrix encoding the codewords satisfies the Restricted Isometry Property with respect to the Euclidean norm.
Abstract: We prove that a random linear code over $\mathbb{F}_q$, with probability arbitrarily close to 1, is list decodable at radius $1 - 1/q - \epsilon$ with list size $L = O(1/\epsilon^2)$ and rate $R = \Omega_q(\epsilon^2/\log^3(1/\epsilon))$. Up to the polylogarithmic factor in $1/\epsilon$ and constant factors depending on $q$, this matches the lower bound $L = \Omega_q(1/\epsilon^2)$ for the list size and upper bound $R = O_q(\epsilon^2)$ for the rate. Previously only existence (and not abundance) of such codes was known for the special case $q = 2$ (Guruswami, Hastad, Sudan and Zuckerman, 2002). In order to obtain our result, we employ a relaxed version of the well known Johnson bound on list decoding that translates the average Hamming distance between codewords to list decoding guarantees. We furthermore prove that the desired average-distance guarantees hold for a code provided that a natural complex matrix encoding the codewords satisfies the Restricted Isometry Property with respect to the Euclidean norm (RIP-2). For the case of random binary linear codes, this matrix coincides with a random submatrix of the Hadamard-Walsh transform matrix that is well studied in the compressed sensing literature. Finally we improve the analysis of Rudelson and Vershynin (2008) on the number of random frequency samples required for exact reconstruction of $k$-sparse signals of length $N$. Specifically we improve the number of samples from $O(k \log N \cdot \log^2 k \cdot (\log k + \log\log N))$ to $O(k \log N \cdot \log^3 k)$. The proof involves bounding the expected supremum of a related Gaussian process by using an improved analysis of the metric defined by the process. This improvement is crucial for our application in list decoding.

Journal Article
TL;DR: In this paper, the authors consider the problem of annotating a data stream as it is read and show upper bounds that achieve a nontrivial tradeoff between the amount of annotation used and the space required to verify it.
Abstract: The central goal of data stream algorithms is to process massive streams of data using sublinear storage space. Motivated by work in the database community on outsourcing database and data stream processing, we ask whether the space usage of such algorithms can be further reduced by enlisting a more powerful “helper” that can annotate the stream as it is read. We do not wish to blindly trust the helper, so we require that the algorithm be convinced of having computed a correct answer. We show upper bounds that achieve a nontrivial tradeoff between the amount of annotation used and the space required to verify it. We also prove lower bounds on such tradeoffs, often nearly matching the upper bounds, via notions related to Merlin-Arthur communication complexity. Our results cover the classic data stream problems of selection, frequency moments, and fundamental graph problems such as triangle-freeness and connectivity. Our work is also part of a growing trend—including recent studies of multipass streaming, read/write streams, and randomly ordered streams—of asking more complexity-theoretic questions about data stream processing. It is a recognition that, in addition to practical relevance, the data stream model raises many interesting theoretical questions in its own right.
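One recurring ingredient in such annotation schemes is a multiset fingerprint: the verifier stores a single random evaluation of the characteristic polynomial $\prod_i (r - a_i)$ of the stream, which later lets it check, with error at most (stream length)/p, that the helper's re-ordered or annotated copy of the data is a permutation of what actually streamed by. A generic sketch of that primitive, not of any one protocol from the paper:

import random

PRIME = (1 << 61) - 1  # Mersenne prime modulus

class MultisetFingerprint:
    """Maintain prod_i (r - a_i) mod PRIME over the stream items a_i, in O(1) space."""
    def __init__(self, r=None):
        self.r = random.randrange(PRIME) if r is None else r
        self.value = 1

    def update(self, item):
        self.value = self.value * (self.r - item) % PRIME

stream = [random.randrange(1000) for _ in range(10000)]
verifier = MultisetFingerprint()
for a in stream:          # the verifier sees the stream once, keeping O(1) state
    verifier.update(a)

claimed = sorted(stream)  # e.g. the helper later annotates the data in sorted order
check = MultisetFingerprint(verifier.r)  # same random evaluation point
for a in claimed:
    check.update(a)
# Equal multisets always match; unequal ones collide w.p. <= len(stream)/PRIME.
assert check.value == verifier.value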

Journal Article
Mark Zhandry
TL;DR: In the presence of a quantum adversary, there are two possible definitions of security for a pseudorandom function, as discussed by the authors: standard-security, which allows the adversary to be quantum but requires queries to the function to be classical, and quantum-security, which allows the adversary to query the function on a quantum superposition of inputs.
Abstract: In the presence of a quantum adversary, there are two possible definitions of security for a pseudorandom function. The first, which we call standard-security, allows the adversary to be quantum, but requires queries to the function to be classical. The second, quantum-security, allows the adversary to query the function on a quantum superposition of inputs, thereby giving the adversary a superposition of the values of the function at many inputs at once. Existing techniques for proving the security of pseudorandom functions fail when the adversary can make quantum queries. We give the first quantum-security proofs for pseudorandom functions by showing that some classical constructions of pseudorandom functions are quantum-secure. Namely, we show that the standard constructions of pseudorandom functions from pseudorandom generators or pseudorandom synthesizers are secure, even when the adversary can make quantum queries. We also show that a direct construction from lattices is quantum-secure. To prove security, we develop new tools to prove the indistinguishability of distributions under quantum queries. In light of these positive results, one might hope that all standard-secure pseudorandom functions are quantum-secure. To the contrary, we show a separation: under the assumption that standard-secure pseudorandom functions exist, there are pseudorandom functions secure against quantum adversaries making classical queries, but insecure once the adversary can make quantum queries.
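The "standard construction of pseudorandom functions from pseudorandom generators" that the paper analyzes is the GGM tree: a length-doubling generator $G(k) = (G_0(k), G_1(k))$ is applied along the path spelled out by the input bits. A sketch in which SHA-256 stands in for the length-doubling PRG; that substitution is our illustrative assumption, not part of the construction or its security proof.

import hashlib

def prg(key: bytes) -> tuple:
    """Length-doubling generator: 16-byte key -> two 16-byte halves.
    (SHA-256 is a stand-in here; GGM assumes a secure PRG.)"""
    out = hashlib.sha256(key).digest()
    return out[:16], out[16:]

def ggm_prf(key: bytes, x: str) -> bytes:
    """GGM PRF: descend the binary tree, taking the left or right PRG half
    according to each input bit."""
    k = key
    for bit in x:
        k = prg(k)[bit == "1"]
    return k

key = bytes(16)
print(ggm_prf(key, "0110").hex())  # 16-byte PRF output on the 4-bit input "0110"

The paper's positive result is that this construction remains secure even when the adversary may query it on superpositions of inputs.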

Journal Article
TL;DR: This paper gives a new construction of algebraic codes which are efficiently list decodable from a fraction $1-R-\epsilon$ of adversarial errors, where $R$ is the rate of the code, for any desired positive constant $\epsilon$.
Abstract: We give a new construction of algebraic codes which are efficiently list decodable from a fraction $1-R-\epsilon$ of adversarial errors, where $R$ is the rate of the code, for any desired positive constant $\epsilon$. The worst-case list size output by the algorithm is $O(1/\epsilon)$, matching the existential bound for random codes up to constant factors. Further, the alphabet size of the codes is a constant depending only on $\epsilon$: it can be made $\exp(\tilde{O}(1/\epsilon^2))$, which is not much worse than the non-constructive $\exp(1/\epsilon)$ bound of random codes. The code construction is Monte Carlo and has the claimed list decoding property with high probability. Once the code is (efficiently) sampled, the encoding/decoding algorithms are deterministic with a running time $O_\epsilon(N^c)$ for an absolute constant $c$, where $N$ is the code's block length. Our construction is based on a careful combination of a linear-algebraic approach to list decoding folded codes from towers of function fields, with a special form of subspace-evasive sets. Instantiating this with the explicit "asymptotically good" Garcia-Stichtenoth (GS for short) tower of function fields yields the above parameters. To illustrate the method in a simpler setting, we also present a construction based on Hermitian function fields, which offers similar guarantees with a list size and alphabet size polylogarithmic in the block length $N$. In comparison, algebraic codes achieving the optimal trade-off between list decodability and rate based on folded Reed-Solomon codes have a decoding complexity of $N^{\Omega(1/\epsilon)}$, an alphabet size of $N^{\Omega(1/\epsilon^2)}$, and a list size of $O(1/\epsilon^2)$ (even after combination with subspace-evasive sets). Thus we get an improvement over the previous best bounds in all three aspects simultaneously, and are quite close to the existential random coding bounds. Along the way, we shed light on how to use automorphisms of certain function fields to enable list decoding of the folded version of the associated algebraic-geometric codes.

Journal Article
TL;DR: It is proved that the Shortest Vector Problem (SVP) on point lattices is NP-hard to approximate for any constant factor under polynomial-time randomized reductions with one-sided error that produce no false positives.
Abstract: We prove that the Shortest Vector Problem (SVP) on point lattices is NP-hard to approximate for any constant factor under polynomial-time randomized reductions with one-sided error that produce no false positives. We also prove inapproximability for quasi-polynomial factors under the same kind of reductions running in subexponential time. Previous hardness results for SVP either incurred two-sided error, or only proved hardness for small constant approximation factors. Close similarities between our reduction and recent results on the complexity of the analogous problem for linear codes make our new proof an attractive target for derandomization, paving the road to a possible NP-hardness proof for SVP under deterministic polynomial-time reductions.

Journal Article
TL;DR: Improved analysis of the rank of complex sparse matrices results in a new, linear algebraic, proof of Kelly’s theorem, which is the complex analog of the Sylvester–Gallai theorem.
Abstract: We study the rank of complex sparse matrices in which the supports of different columns have small intersections. The rank of these matrices, called design matrices, was the focus of a recent work by Barak et al. [Rank bounds for design matrices with applications to combinatorial geometry and locally correctable codes. Proceedings of the 43rd annual ACM Symposium on Theory of Computing, STOC '11 (ACM, NY, 2011), 519-528], in which they were used to answer questions regarding point configurations. In this work, we derive near-optimal rank bounds for these matrices and use them to obtain asymptotically tight bounds in many of the geometric applications. As a consequence of our improved analysis, we also obtain a new, linear algebraic, proof of Kelly's theorem, which is the complex analog of the Sylvester–Gallai theorem.

Journal Article
TL;DR: An explicit function $h$ is given such that any de Morgan formula of size $O(n^{2.5-o(1)})$ agrees with $h$ on at most a $1/2 + \epsilon$ fraction of the inputs, where $\epsilon$ is exponentially small.
Abstract: We give an explicit function $h : \{0,1\}^n \to \{0,1\}$ such that any deMorgan formula of size $O(n^{2.499})$ agrees with $h$ on at most a $1/2 + \epsilon$ fraction of the inputs, where $\epsilon$ is exponentially small (i.e. $\epsilon = 2^{-n^{\Omega(1)}}$). We also show, using the same technique, that any boolean formula of size $O(n^{1.999})$ over the complete basis agrees with $h$ on at most a $1/2 + \epsilon$ fraction of the inputs, where $\epsilon$ is exponentially small (i.e. $\epsilon = 2^{-n^{\Omega(1)}}$). Our construction is based on Andreev's $\Omega(n^{2.5-o(1)})$ formula size lower bound that was proved for the case of exact computation.

Journal Article
TL;DR: In this paper, a tradeoff between the running time and the mistake bound for learning length-$k$ decision lists over $n$ Boolean variables is studied, and a $2^{\Omega(\sqrt{k}/d)}$ lower bound is proved on the weight of any degree-$d$ polynomial threshold function computing the ODD-MAXBIT decision list, improving on Beigel's $2^{\Omega(k/d^2)}$ bound (which is essentially optimal for $d \leq k^{1/3}$) whenever $d > k^{1/3}$.
Abstract: We study the challenging problem of learning decision lists attribute-efficiently, giving both positive and negative results. Our main positive result is a new tradeoff between the running time and mistake bound for learning length-$k$ decision lists over $n$ Boolean variables. When the allowed running time is relatively high, our new mistake bound improves significantly on the mistake bound of the best previous algorithm of Klivans and Servedio (Klivans and Servedio, 2006). Our main negative result is a new lower bound on the weight of any degree-$d$ polynomial threshold function (PTF) that computes a particular decision list over $k$ variables (the "ODD-MAXBIT" function). The main result of Beigel (Beigel, 1994) is a weight lower bound of $2^{\Omega(k/d^2)}$, which was shown to be essentially optimal for $d \leq k^{1/3}$ by Klivans and Servedio. Here we prove a $2^{\Omega(\sqrt{k}/d)}$ lower bound, which improves on Beigel's lower bound for $d > k^{1/3}$. This lower bound establishes strong limitations on the effectiveness of the Klivans and Servedio approach and suggests that it may be difficult to improve on our positive result. The main tool used in our lower bound is a new variant of Markov's classical inequality which may be of independent interest; it provides a bound on the derivative of a univariate polynomial in terms of both its degree and the size of its coefficients.
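For reference, the ODD-MAXBIT function in the lower bound is itself a tiny decision list: its value is determined by the parity of the position of the first 1 in the input. A sketch with our own sign conventions (Beigel's exact convention may differ):

def odd_maxbit(x):
    """+1 if the first index i (1-based) with x_i = 1 is odd, else -1;
    -1 on the all-zeros input by convention."""
    for i, bit in enumerate(x, start=1):
        if bit:
            return 1 if i % 2 == 1 else -1
    return -1

def eval_decision_list(rules, default, x):
    """A decision list: the first rule whose condition fires decides the output."""
    for cond, out in rules:
        if cond(x):
            return out
    return default

# ODD-MAXBIT over k = 4 variables written as an explicit length-4 decision list:
rules = [(lambda x, i=i: x[i] == 1, 1 if (i + 1) % 2 == 1 else -1) for i in range(4)]
for x in [[0, 1, 0, 1], [0, 0, 1, 0], [0, 0, 0, 0]]:
    assert odd_maxbit(x) == eval_decision_list(rules, -1, x)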

Journal Article
TL;DR: A new technique is presented that allows proving the existence of an argument system that is both resettable zero knowledge and resettably sound under the sole assumption that CRHFs exist, and a novel way to use protocol lower bounds in constructive protocol design is shown.
Abstract: In FOCS 2001, Barak, Goldreich, Goldwasser and Lindell conjectured that the existence of ZAPs, introduced by Dwork and Naor in FOCS 2000, could lead to the design of a zero-knowledge proof system that is secure against both resetting provers and resetting verifiers. Their conjecture has been proven true by Deng, Goyal and Sahai in FOCS 2009, where both ZAPs and collision-resistant hash functions (CRHFs, for short) play a fundamental role. In this paper, we present a new technique that allows us to prove that simultaneously resettable zero knowledge can be achieved by relying on CRHFs only. Our construction therefore goes beyond the conjecture of Barak et al., bypassing the (demanding) use of ZAPs, which in turn require doubly enhanced trapdoor permutations (DTPs, for short). More specifically, we present the following results:
1. We construct the first resettably-sound resettable witness indistinguishable (rsrWI, for short) argument for NP based on CRHFs. Our construction exploits a new technique that we call "soundness upgrade". In order to upgrade stand-alone soundness to resettable soundness, we use the lower bound proved by Rosen in CRYPTO 2000 on the round complexity of black-box concurrent zero knowledge. Moreover our rsrWI argument is an argument of knowledge (AoK, for short).
2. As an application of the above result, we obtain the main theorem of this work: we prove (constructively) the existence of an argument system that is both resettable zero knowledge and resettably sound under the sole assumption that CRHFs exist.
Our results improve the state of the art and, perhaps even more importantly, provide a novel tool for the design of resettably-secure protocols. We also show a novel way to use protocol lower bounds in constructive protocol design.

Journal Article
TL;DR: A concrete-efficiency threshold is defined that indicates the smallest problem size beyond which the PCP becomes "useful", in the sense that using it actually pays off relative to naive verification by simply rerunning the computation; this definition takes into account both the prover's and the verifier's complexity.
Abstract: Probabilistically-Checkable Proofs (PCPs) form the algorithmic core that enables succinct verification of long proofs/computations in many cryptographic constructions, such as succinct arguments and proof-carrying data. Despite the wonderful asymptotic savings they bring, PCPs are also the infamous computational bottleneck preventing these cryptographic constructions from being used in practice. This reflects a lack of understanding of how to study and improve the concrete (as opposed to asymptotic) efficiency of PCPs. We propose a new efficiency measure for PCPs (and its major components: locally-testable codes and PCPs of proximity). We define a concrete-efficiency threshold that indicates the smallest problem size beyond which the PCP becomes "useful", in the sense that using it actually pays off relative to naive verification by simply rerunning the computation; our definition takes into account both the prover's and verifier's complexity. We then show that there exists a PCP with a finite concrete-efficiency threshold. This does not follow from existing works on PCPs with succinct verifiers. We provide a PCP construction where the prover and verifier are efficient enough to achieve a finite threshold, and further show that this PCP has the requisite properties for being used in the cryptographic applications mentioned above. Moreover, we extend and improve the Reed-Solomon PCPs of proximity over fields of characteristic 2, which constitute the main component in the quasilinear-size construction of (Ben-Sasson and Sudan, STOC '05) as well as our construction. Our improvements reduce the concrete-efficiency threshold for testing proximity to Reed-Solomon codes from $2^{572}$ in their work to $2^{35}$, which is tantalizingly close to practicality. We hope this will motivate the search for even better constructions, ultimately leading to practical PCPs for succinct verification of long computations.

Journal Article
TL;DR: It is shown that progress on a special case of Valiant's rigidity problem would imply that Inner Product is not computable by small AC0 circuits with one layer of parity gates close to the inputs, and it is proved that the sign of any −1/1 polynomial with at most $s$ monomials in $2n$ variables disagrees with Inner Product on at least an $\Omega(1/s)$ fraction of inputs, a type of result that seems unknown in the rigidity setting.
Abstract: We highlight the special case of Valiant's rigidity problem in which the low-rank matrices are truth-tables of sparse polynomials. We show that progress on this special case entails that Inner Product is not computable by small AC0 circuits with one layer of parity gates close to the inputs. We then prove that the sign of any −1/1 polynomial with at most $s$ monomials in $2n$ variables disagrees with Inner Product on at least an $\Omega(1/s)$ fraction of inputs, a type of result that seems unknown in the rigidity setting.

Valiant's rigidity problem [Val77] asks to build explicit matrices that are far in Hamming distance from low-rank matrices. Valiant proved that if an $N \times N$ matrix $M$ has Hamming distance $\geq N^{1+\Omega(1)}$ from any matrix of rank $R = (1-\Omega(1))N$, then the corresponding linear transformation $x \mapsto Mx$ requires circuits of superlogarithmic depth or superlinear size. Exhibiting an explicit such matrix remains a long-standing challenge. Despite significant efforts, the best lower bounds are of the form $(N^2/R)\lg(N/R)$ against matrices of rank $R$. The matrix corresponding to the inner product function IP has been conjectured to satisfy better bounds. We refer the reader to Lokam's survey [Lok09] for more on rigidity.

In this note we highlight a special case of the rigidity problem, and we suggest that attacks should be directed towards it. Recall that an $N \times N$ matrix has rank $R$ if and only if it is the sum of $R$ rank-1 matrices, i.e., matrices $u_i v_i^T$, where $u_i, v_i$ are $N$-entry column vectors. We consider the special case of this problem where the rank-1 matrices are the truth-tables of monomials over the variables $x_1, \ldots, x_n, y_1, \ldots, y_n$, where $N = 2^n$ and the variables range over $\{-1, 1\}$. For example, the truth-table of a monomial $c \prod_{i \in S} x_i \prod_{i \in T} y_i$, where $S, T \subseteq \{1, \ldots, n\}$, is the $N \times N$ matrix whose entry indexed by $(a, b) \in \{-1,1\}^n \times \{-1,1\}^n$ is $c \prod_{i \in S} a_i \prod_{i \in T} b_i$. This matrix can be written as $uv^T$ where the $a$-th entry of $u$ is $c \prod_{i \in S} a_i$ and the $b$-th entry of $v$ is $\prod_{i \in T} b_i$. This special case of the rigidity problem is stated without direct reference to rank as follows.

Challenge 0.1 (Sparsity). Exhibit an explicit function $f : \{-1,1\}^n \times \{-1,1\}^n \to \{-1,1\}$ such that for any real polynomial $p$ with $\leq R$ monomials we have $\Pr_{x,y \in \{-1,1\}^n}[f(x,y) \neq p(x,y)] \geq \epsilon$, for $\epsilon$ as large as possible. Again, $\epsilon = \Omega(\lg(2^n/R)/R)$ follows from the rigidity bounds. The concurrent work [RV12] raises a similar challenge for low-degree (as opposed to sparse) polynomials.

Motivation: AC0 with parity gates. Besides hopefully paving the way for the original rigidity question, a motivation for making progress on Challenge 0.1 is that stronger bounds would yield new circuit lower bounds. Let AC0-⊕ denote the class of AC0 circuits augmented with a bottom level (right before the input bits) of parity gates. To our knowledge, it is not known whether the Inner Product function IP is computable by poly-size AC0-⊕ circuits:

Challenge 0.2. Show that IP cannot be computed by poly-size AC0-⊕ circuits.

Challenge 0.2 seems open even for AC0-⊕ circuits of depth 4, but it is known to be true for AC0-⊕ circuits of depth 3, i.e. poly-size DNF-⊕ circuits. Indeed, it follows from Fact 8 in [Jac97] that any function computable by such circuits has 1/poly correlation with parity on some subset of the variables, but it is well-known that IP has exponentially small correlation with parity on any subset of the variables.

Solving Challenge 0.2 is a step towards a more thorough understanding of AC0 with parity gates. For example, no strong correlation bound is known for this class, see e.g. [SV10]. In fact, this is not even known for AC0-⊕, and IP is a natural candidate. Next we formally connect the two challenges.

Claim 0.3. Suppose that IP on $2n$ variables has AC0-⊕ circuits of polynomial size. Then for any $b$ there exists $c$ and a polynomial $p(x, y)$ with $\leq 2^{\lg^c n}$ monomials such that $\Pr_{x,y}[p(x,y) \neq \mathrm{IP}(x,y)] \leq 2^{-\lg^b n}$.

Proof. Let $C$ be a depth-$(d+1)$ AC0-⊕ circuit that computes IP over $2n$ input bits $x_1, \ldots, x_n, y_1, \ldots, y_n$. Let $N = \mathrm{poly}(n)$ denote the number of parity gates at the leaves. Let $C'$ be the depth-$d$ AC0 circuit obtained by replacing the $i$-th parity gate by a fresh input variable $z_i$ (so $C'$ is a circuit over $N$ input bits $z_1, \ldots, z_N$). Let $D$ be the distribution over $\{-1,1\}^N$ induced by drawing a uniform random input $x$ from $\{-1,1\}^{2n}$ and setting $z_i$ = the value of the $i$-th parity gate on $x$ (the draw from $D$ is the string $z$). Let $\epsilon := 2^{-\lg^c n}$. Lemma 5.1 and Corollary 5.2 of [ABFR94] tell us that there is a polynomial $p(z_1, \ldots, z_N)$ of degree $(O(\lg(n/\epsilon)))^{2d}$ that computes $C'(z)$ for a $(1-\epsilon)$ fraction of all inputs drawn from $D$. Since $p$ has that degree, it has at most $N^{(O(\lg(n/\epsilon)))^{2d}} \leq 2^{\lg^{c'} n}$ monomials for a suitable constant $c'$. Now let $q(x_1, \ldots, x_n, y_1, \ldots, y_n)$ be the polynomial obtained by substituting in the $i$-th parity (monomial) for $z_i$ in $p$. $q$ has no more monomials than $p$, and $q$ computes IP on a $(1-\epsilon)$ fraction of all inputs drawn from $\{-1,1\}^{2n}$.

We note that for Valiant's connection to lower bounds, we need rank $R = \Omega(N)$, whereas for sparsity much smaller rank $R = \mathrm{poly}\lg N$ suffices. In both cases we need to go beyond error $1/R$.
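At toy sizes the sparsity challenge can be checked exhaustively. For $n = 3$, the sign of any single monomial ($R = 1$) is a character, so its agreement with IP should be $1/2 + 2^{-n-1} = 0.5625$, which the brute force below confirms; the enumeration scheme is ours.

from itertools import product

n = 3
cube = list(product([-1, 1], repeat=n))

def ip(x, y):
    """Inner product mod 2 in +-1 notation: -1 iff oddly many i have x_i = y_i = -1."""
    return -1 if sum(1 for a, b in zip(x, y) if a == b == -1) % 2 else 1

def mono(x, y, S, T):
    """The monomial prod_{i in S} x_i * prod_{i in T} y_i (S, T given as bitmasks)."""
    v = 1
    for i in range(n):
        if S >> i & 1:
            v *= x[i]
        if T >> i & 1:
            v *= y[i]
    return v

best = max(
    sum(sign * mono(x, y, S, T) == ip(x, y) for x in cube for y in cube) / 4**n
    for S in range(2**n) for T in range(2**n) for sign in (1, -1)
)
print(f"best single-monomial agreement with IP at n={n}: {best}")  # 0.5625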

Journal Article
TL;DR: It is shown that for any Turing-recognizable language, there exists a constant-space weak-qAM system (the nonmembers do not need to be rejected with high probability), and, unlike the classical case, the protocol has perfect completeness (the members are accepted exactly).
Abstract: We introduce a new public quantum interactive proof system, namely qAM, by augmenting the verifier with a fixed-size quantum register in the Arthur-Merlin game. We focus on space-bounded verifiers, and compare our new public system with private-coin interactive proof (IP) systems in the same space bounds. We show that qAM systems not only can simulate all known space-bounded private-coin protocols, but also implement some protocols that are either not implementable by IP systems or currently not known to be simulated by IP systems. More specifically, we show that for any Turing-recognizable language, there exists a constant-space weak-qAM system (the nonmembers do not need to be rejected with high probability), and, unlike the classical case, our protocol has perfect completeness (the members are accepted exactly). For strong proof systems, where the nonmembers must be rejected with high probability, we show that the known space-bounded private-coin protocols can also be simulated by qAM systems with the same space bound. In the case of constant-space and log-space IP systems, the best known lower bounds are ASPACE(n) and EXP, respectively. We obtain better lower bounds for the corresponding qAM systems: each language in NP has a constant-space (exp-time) qAM system, there is an NEXP-complete language having a constant-space qAM system, and each language in NEXP has a log-space qAM system.

Journal Article
TL;DR: In this article, the authors prove a Chernoff-like large deviation bound on the sum of non-independent random variables that have the following dependence structure: the variables $Y_1, \ldots, Y_r$ are arbitrary $[0,1]$-valued functions of independent random variables $X_1, \ldots, X_m$, modulo a restriction that every $X_i$ influences at most $k$ of the variables $Y_1, \ldots, Y_r$.
Abstract: We prove a Chernoff-like large deviation bound on the sum of non-independent random variables that have the following dependence structure. The variables $Y_1, \ldots, Y_r$ are arbitrary $[0,1]$-valued functions of independent random variables $X_1, \ldots, X_m$, modulo a restriction that every $X_i$ influences at most $k$ of the variables $Y_1, \ldots, Y_r$.
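The bound in this line of work has the Hoeffding form with the exponent divided by $k$: $\Pr[\sum_i Y_i \geq (p+\epsilon)r] \leq e^{-2\epsilon^2 r/k}$, where $p$ is the average mean. A quick numerical illustration under choices that are entirely ours: sliding-window ANDs of $k$ consecutive independent bits form a read-$k$ family (each $X_i$ influences at most $k$ of the $Y_j$), and the empirical tail indeed sits below the bound.

import math
import random

def read_k_sum(m, k):
    """Y_j = AND of the window X_j..X_{j+k-1} over i.i.d. uniform bits X_i.
    Each X_i influences at most k of the Y_j, so {Y_j} is a read-k family."""
    X = [random.getrandbits(1) for _ in range(m)]
    return sum(min(X[j:j + k]) for j in range(m - k + 1))

m, k, trials = 500, 4, 2000
r = m - k + 1
p = 2.0**-k   # E[Y_j] for an AND of k uniform bits
eps = 0.05
tail = sum(read_k_sum(m, k) >= (p + eps) * r for _ in range(trials)) / trials
bound = math.exp(-2 * eps**2 * r / k)
print(f"empirical tail {tail:.3f} <= read-k Chernoff bound {bound:.3f}")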

Journal Article
TL;DR: In this article, Gurvits's randomized algorithm for approximating the permanents of bounded-norm complex matrices is generalized using an idea from quantum optics, and derandomized for the special case where the matrix A is nonnegative.
Abstract: Around 2002, Leonid Gurvits gave a striking randomized algorithm to approximate the permanent of an $n \times n$ matrix $A$. The algorithm runs in $O(n^2/\epsilon^2)$ time, and approximates $\mathrm{Per}(A)$ to within $\pm\epsilon\|A\|^n$ additive error. A major advantage of Gurvits's algorithm is that it works for arbitrary matrices, not just for nonnegative matrices. This makes it highly relevant to quantum optics, where the permanents of bounded-norm complex matrices play a central role. (In particular, $n \times n$ permanents arise as the transition amplitudes for $n$ identical photons.) Indeed, the existence of Gurvits's algorithm is why, in their recent work on the hardness of quantum optics, Aaronson and Arkhipov (AA) had to talk about sampling problems rather than ordinary decision problems. In this paper, we improve Gurvits's algorithm in two ways. First, using an idea from quantum optics, we generalize the algorithm so that it yields a better approximation when the matrix $A$ has either repeated rows or repeated columns. Translating back to quantum optics, this lets us classically estimate the probability of any outcome of an AA-type experiment (even an outcome involving multiple photons "bunched" in the same mode) at least as well as that probability can be estimated by the experiment itself. It also yields a general upper bound on the probabilities of "bunched" outcomes, which resolves a conjecture of Gurvits and might be of independent physical interest. Second, we use $\epsilon$-biased sets to derandomize Gurvits's algorithm, in the special case where the matrix $A$ is nonnegative. More interestingly, we generalize the notion of $\epsilon$-biased sets to the complex numbers, construct "complex $\epsilon$-biased sets," then use those sets to derandomize even our generalization of Gurvits's algorithm to the case (again for nonnegative $A$) where some rows or columns are identical. Whether Gurvits's algorithm can be derandomized for general $A$ remains an outstanding problem.
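Gurvits's algorithm is a plain Monte Carlo average of the Glynn estimator: draw $x \in \{-1,1\}^n$ uniformly and output $\left(\prod_i x_i\right) \prod_j (x^T A)_j$, which has expectation exactly $\mathrm{Per}(A)$ and magnitude at most $\|A\|^n$, so $O(n^2/\epsilon^2)$ samples give the stated additive guarantee. A numpy sketch with a brute-force cross-check; the sample count and test matrix are arbitrary choices of ours.

import numpy as np
from itertools import permutations

def permanent_bruteforce(A):
    n = A.shape[0]
    return sum(np.prod([A[i, s[i]] for i in range(n)]) for s in permutations(range(n)))

def gurvits_estimate(A, samples, rng):
    """Average of the Glynn estimator prod_i(x_i) * prod_j((x^T A)_j) over
    uniform sign vectors x; unbiased for Per(A), each term bounded by ||A||^n."""
    n = A.shape[0]
    X = rng.choice([-1.0, 1.0], size=(samples, n))
    return float(np.mean(np.prod(X, axis=1) * np.prod(X @ A, axis=1)))

rng = np.random.default_rng(0)
A = rng.random((5, 5))
print(f"estimate {gurvits_estimate(A, 200000, rng):.3f}"
      f" vs exact {permanent_bruteforce(A):.3f}")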

Journal Article
TL;DR: It is shown here that any homogeneous depth four arithmetic circuit with bounded bottom fanin computing the permanent (or the determinant) must be of exponential size.
Abstract: Agrawal and Vinay [AV08] have recently shown that an exponential lower bound for depth four homogeneous circuits with bottom layer of × gates having sublinear fanin translates to an exponential lower bound for a general arithmetic circuit computing the permanent. Motivated by this, we examine the complexity of computing the permanent and determinant via homogeneous depth four circuits with bounded bottom fanin. We show here that any homogeneous depth four arithmetic circuit with bounded bottom fanin computing the permanent (or the determinant) must be of exponential size.

Journal Article
TL;DR: The list-decodability of multiplicity codes is studied in this paper, where it is shown that univariate multiplicity codes of rate $R$ over fields of prime order can be list decoded from a $(1 - R - \epsilon)$ fraction of errors in polynomial time.
Abstract: We study the list-decodability of multiplicity codes. These codes, which are based on evaluations of high-degree polynomials and their derivatives, have rate approaching 1 while simultaneously allowing for sublinear-time error correction. In this paper, we show that multiplicity codes also admit powerful list-decoding and local list-decoding algorithms that work even in the presence of a large error fraction. In other words, we give algorithms for recovering a polynomial given several evaluations of it and its derivatives, where possibly many of the given evaluations are incorrect. Our first main result shows that univariate multiplicity codes over fields of prime order can be list-decoded up to the so-called "list-decoding capacity". Specifically, we show that univariate multiplicity codes of rate $R$ over fields of prime order can be list-decoded from a $(1 - R - \epsilon)$ fraction of errors in polynomial time (for constant $R, \epsilon$). This resembles the behavior of the "Folded Reed-Solomon Codes" of Guruswami and Rudra (Trans. Info. Theory 2008). The list-decoding algorithm is based on constructing a differential equation of which the desired codeword is a solution; this differential equation is then solved using a power-series approach (a variation of Hensel lifting) along with other algebraic ideas. Our second main result is a list-decoding algorithm for decoding multivariate multiplicity codes up to their Johnson radius. The key ingredient of this algorithm is the construction of a special family of "algebraically-repelling" curves passing through the points of $\mathbb{F}^m$; no moderate-degree multivariate polynomial over $\mathbb{F}^m$ can simultaneously vanish on all these curves.
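Concretely, a codeword of a univariate, multiplicity-$s$ code lists, at every point $a \in \mathbb{F}_p$, the vector $(f(a), f'(a), \ldots, f^{(s-1)}(a))$. A sympy sketch of just the encoder, with parameters of our choosing; for $s \geq p$ one would need Hasse derivatives, and the list-decoding algorithms themselves are far beyond a few lines.

import sympy as sp

p, s, k = 13, 2, 8  # field size, multiplicity, message length (degree < k)
X = sp.symbols("X")

def encode(coeffs):
    """Univariate multiplicity code: at each a in F_p record
    (f(a), f'(a), ..., f^(s-1)(a)). Rate is k/(s*p)."""
    f = sum(c * X**i for i, c in enumerate(coeffs))
    derivs = [sp.diff(f, X, j) for j in range(s)]  # formal derivatives (valid: s < p)
    return [[int(d.subs(X, a)) % p for d in derivs] for a in range(p)]

codeword = encode([3, 1, 4, 1, 5, 9, 2, 6])
print(codeword[:3])  # [[f(0), f'(0)], [f(1), f'(1)], [f(2), f'(2)]]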

Journal Article
TL;DR: In this article, the authors study the ε-rank of a real matrix A, defined for any ε > 0 as the minimum rank over matrices that approximate every entry of A to within an additive ε. This parameter is connected to other notions of approximate rank and is motivated by problems from various topics including communication complexity, combinatorial optimization, game theory, computational geometry and learning theory.
Abstract: We study the ε-rank of a real matrix A, defined for any ε > 0 as the minimum rank over matrices that approximate every entry of A to within an additive ε. This parameter is connected to other notions of approximate rank and is motivated by problems from various topics including communication complexity, combinatorial optimization, game theory, computational geometry and learning theory. Here we give bounds on the ε-rank and use them for algorithmic applications. Our main algorithmic results are (a) polynomial-time additive approximation schemes for Nash equilibria for 2-player games when the payoff matrices are positive semidefinite or have logarithmic rank and (b) an additive PTAS for the densest subgraph problem for similar classes of weighted graphs. We use combinatorial, geometric and spectral techniques; our main new tool is an algorithm for efficiently covering a convex body with translates of another convex body.