
Showing papers in "Electronic Colloquium on Computational Complexity in 2007"


Journal Article
TL;DR: In this article, the authors show how to construct a variety of "trapdoor" cryptographic tools assuming the worst-case hardness of standard lattice problems (such as approximating the length of the shortest nonzero vector to within certain polynomial factors).
Abstract: We show how to construct a variety of "trapdoor" cryptographic tools assuming the worst-case hardness of standard lattice problems (such as approximating the length of the shortest nonzero vector to within certain polynomial factors). Our contributions include a new notion of trapdoor function with preimage sampling, simple and efficient "hash-and-sign" digital signature schemes, and identity-based encryption. A core technical component of our constructions is an efficient algorithm that, given a basis of an arbitrary lattice, samples lattice points from a discrete Gaussian probability distribution whose standard deviation is essentially the length of the longest Gram-Schmidt vector of the basis. A crucial security property is that the output distribution of the algorithm is oblivious to the particular geometry of the given basis.

1,312 citations
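As a concrete illustration of the sampling step described in the abstract above, here is a minimal Python sketch of a Klein/GPV-style randomized nearest-plane sampler: it draws lattice points from a discrete Gaussian whose width is governed by the Gram-Schmidt norms of the given basis. The basis, the parameter s, and the tail cutoff are illustrative choices, not the paper's parameters, and the sketch makes no attempt at the exact statistical guarantees needed for the security proofs.

```python
import numpy as np

def gram_schmidt(B):
    """Gram-Schmidt orthogonalization (without normalization) of the rows of B."""
    B = np.asarray(B, dtype=float)
    Bstar = B.copy()
    for i in range(B.shape[0]):
        for j in range(i):
            Bstar[i] -= (np.dot(B[i], Bstar[j]) / np.dot(Bstar[j], Bstar[j])) * Bstar[j]
    return Bstar

def sample_z(center, s, tail=10):
    """Sample from the discrete Gaussian on Z centered at `center` with parameter s,
    by explicit normalization over a finite window (fine for a toy demo)."""
    lo, hi = int(np.floor(center - tail * s)), int(np.ceil(center + tail * s))
    zs = np.arange(lo, hi + 1)
    w = np.exp(-np.pi * (zs - center) ** 2 / s ** 2)
    w /= w.sum()
    return np.random.choice(zs, p=w)

def klein_sampler(B, s, target=None):
    """Randomized nearest-plane (Klein/GPV-style) sampling of a lattice point near
    `target`, using the rows of B as the basis; s should exceed the longest
    Gram-Schmidt norm (times some slack) for the output to be roughly basis-independent."""
    B = np.asarray(B, dtype=float)
    n = B.shape[0]
    t = np.zeros(n) if target is None else np.asarray(target, dtype=float)
    Bstar = gram_schmidt(B)
    v = np.zeros(n)
    for i in reversed(range(n)):
        c = np.dot(t, Bstar[i]) / np.dot(Bstar[i], Bstar[i])
        z = sample_z(c, s / np.linalg.norm(Bstar[i]))
        t = t - z * B[i]
        v = v + z * B[i]
    return v

if __name__ == "__main__":
    B = np.array([[7.0, 1.0], [2.0, 5.0]])   # toy 2-D basis
    print([tuple(klein_sampler(B, s=15.0)) for _ in range(5)])
    # prints lattice points (integer combinations of the basis rows)
```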


Journal Article
TL;DR: In this paper, a general cryptographic primitive called lossy trapdoor functions (lossy TDFs) is proposed, which can be used for constructing several important cryptographic tools, including (injective) trapdoor functions, collision-resistant hash functions, oblivious transfer, and chosen ciphertext-secure cryptosystems.
Abstract: We propose a general cryptographic primitive called lossy trapdoor functions (lossy TDFs), and we use it to develop new approaches for constructing several important cryptographic tools, including (injective) trapdoor functions, collision-resistant hash functions, oblivious transfer, and chosen ciphertext-secure cryptosystems (in the standard model). All of these constructions are simple, efficient, and black-box. We realize lossy TDFs based on a variety of cryptographic assumptions, including the hardness of the decisional Diffie-Hellman (DDH) problem and the hardness of the “learning with errors” problem (which is implied by the worst-case hardness of various lattice problems). Taken together, our results resolve some long-standing open problems in cryptography. They give the first injective TDFs based on problems not directly related to integer factorization and provide the first chosen ciphertext-secure cryptosystem based solely on worst-case complexity assumptions.

294 citations
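The following toy sketch is not the paper's DDH- or LWE-based construction (which hides the function description cryptographically); it only illustrates the information-theoretic heart of a lossy TDF: in injective mode the function is an invertible linear map whose inverse serves as the trapdoor, while in lossy mode it is a low-rank map whose image is too small to determine the input. The modulus, dimension, and use of sympy are assumptions made for the demo.

```python
import itertools
import random
from sympy import Matrix

p, n = 5, 3   # toy parameters: arithmetic mod a small prime, 3-dimensional inputs

def keygen(lossy=False):
    """Injective mode: a random invertible matrix mod p (its inverse is the trapdoor).
    Lossy mode: a random rank-<=1 matrix, so evaluation loses almost all information."""
    while True:
        if lossy:
            u = Matrix([random.randrange(p) for _ in range(n)])       # n x 1
            v = Matrix([[random.randrange(p) for _ in range(n)]])     # 1 x n
            return (u * v).applyfunc(lambda a: a % p)
        M = Matrix(n, n, lambda i, j: random.randrange(p))
        if M.det() % p != 0:
            return M

def evaluate(M, x):
    """Apply the linear map and reduce mod p."""
    return tuple(int(a) % p for a in M * Matrix(list(x)))

# Injective mode: the trapdoor (matrix inverse mod p) recovers x exactly.
M = keygen(lossy=False)
x = (1, 2, 3)
y = evaluate(M, x)
assert evaluate(M.inv_mod(p), y) == x

# Lossy mode: the image is tiny, so x is information-theoretically lost.
L = keygen(lossy=True)
image = {evaluate(L, inp) for inp in itertools.product(range(p), repeat=n)}
print(len(image), "distinct outputs over", p ** n, "inputs")   # at most p outputs
```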


Journal Article
TL;DR: A property tester is given that, given a graph with degree bound d, an expansion bound α, and a parameter ε > 0, accepts the graph with high probability if its expansion is more than α, and rejects it with high probability if it is ε-far from a graph with expansion α′ with degree bound d.
Abstract: We consider the problem of testing graph expansion (either vertex or edge) in the bounded degree model [2]. We give a property tester that, given a graph with degree bound d, an expansion bound α, and a parameter ε > 0, accepts the graph with high probability if its expansion is more than α, and rejects it with high probability if it is ε-far from a graph with expansion α′ with degree bound d, where α′ < α is a function of α. For edge expansion, we obtain α′ = Ω(α²/d), and for vertex expansion, we obtain α′ = Ω(α²/d²). In either case, the algorithm runs in time

172 citations
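For intuition, here is a hedged Python sketch of the random-walk/collision-counting approach on which such expansion testers are built. The walk length, number of walks, and rejection threshold below are placeholders, not the values dictated by the paper's analysis.

```python
import random
from collections import Counter

def lazy_walk(adj, d, start, length):
    """One lazy random walk in the bounded-degree model: at each step stay put with
    probability 1/2, otherwise pick one of d 'slots'; slots beyond the actual degree
    act as self-loops."""
    v = start
    for _ in range(length):
        if random.random() < 0.5:
            continue
        i = random.randrange(d)
        if i < len(adj[v]):
            v = adj[v][i]
    return v

def collision_count(endpoints):
    """Pairwise collisions among walk endpoints: a proxy for how far the endpoint
    distribution is from uniform (more collisions = worse mixing = worse expansion)."""
    c = Counter(endpoints)
    return sum(k * (k - 1) // 2 for k in c.values())

def expansion_test_sketch(adj, d, num_walks=200, walk_len=40, threshold=None):
    """Illustrative sketch of the Goldreich-Ron-style collision test; the real tester's
    walk length and threshold come from the paper's analysis, not from these defaults."""
    n = len(adj)
    start = random.randrange(n)
    ends = [lazy_walk(adj, d, start, walk_len) for _ in range(num_walks)]
    if threshold is None:
        # expected collisions for a uniform endpoint distribution, with some slack
        threshold = 1.5 * num_walks * (num_walks - 1) / (2 * n)
    return collision_count(ends) <= threshold   # True = "accept"
```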


Journal Article
TL;DR: In this paper, a general methodology for constructing very simple and efficient non-interactive zero-knowledge proofs and noninteractive witness-indistinguishable proofs that work directly for groups with a bilinear map, without needing a reduction to Circuit Satisfiability is presented.
Abstract: Non-interactive zero-knowledge proofs and non-interactive witness-indistinguishable proofs have played a significant role in the theory of cryptography. However, lack of efficiency has prevented them from being used in practice. One of the roots of this inefficiency is that non-interactive zero-knowledge proofs have been constructed for general NP-complete languages such as Circuit Satisfiability, causing an expensive blowup in the size of the statement when reducing it to a circuit. The contribution of this paper is a general methodology for constructing very simple and efficient non-interactive zero-knowledge proofs and non-interactive witness-indistinguishable proofs that work directly for groups with a bilinear map, without needing a reduction to Circuit Satisfiability. Groups with bilinear maps have enjoyed tremendous success in the field of cryptography in recent years and have been used to construct a plethora of protocols. This paper provides non-interactive witness-indistinguishable proofs and non-interactive zero-knowledge proofs that can be used in connection with these protocols. Our goal is to spread the use of non-interactive cryptographic proofs from mainly theoretical purposes to the large class of practical cryptographic protocols based on bilinear groups.

128 citations


Journal Article
TL;DR: For any odd integer q > 1, the lower bound for general q-query locally decodable codes C : {0, 1}^n → {0, 1}^m is improved.
Abstract: For any odd integer q > 1, we improve the lower bound for general q-query locally decodable codes C : {0, 1}^n → {0, 1}^m from m = Ω((n/log n)^{(q+1)/(q−1)}) to m = Ω(

124 citations


Journal Article
TL;DR: An algorithm for the tree update problem that is statically optimal for every sufficiently long contiguous subsequence of accesses is given, which combines techniques from data streaming algorithms, composition of learning algorithms, and a twist on the standard experts framework.
Abstract: We study the notion of learning in an oblivious changing environment. Existing online learning algorithms which minimize regret are shown to converge to the average of all locally optimal solutions. We propose a new performance metric, strengthening the standard metric of regret, to capture convergence to locally optimal solutions, and propose efficient algorithms which provably converge at the optimal rate. One application is the portfolio management problem, for which we show that all previous algorithms behave suboptimally under dynamic market conditions. Another application is online routing, for which our adaptive algorithm exploits local congestion patterns and runs in near-linear time. We also give an algorithm for the tree update problem that is statically optimal for every sufficiently long contiguous subsequence of accesses. Our algorithm combines techniques from data streaming algorithms, composition of learning algorithms, and a twist on the standard experts framework.

100 citations
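For context, the "standard experts framework" referred to in the abstract above is the multiplicative-weights (Hedge) algorithm; the sketch below shows this classical baseline, whose guarantee is only against the single best expert over the whole sequence, whereas the paper's adaptive algorithms must also compete on every sufficiently long contiguous subinterval. The learning rate and toy loss sequence are illustrative.

```python
import math

def hedge(losses, eta=0.5):
    """Standard multiplicative-weights ('experts') algorithm: maintain a weight per
    expert, play the weighted mixture, and exponentially down-weight experts that
    incur loss. Returns the algorithm's cumulative (expected) loss."""
    n = len(losses[0])
    w = [1.0] * n
    total_loss = 0.0
    for loss_t in losses:                      # loss_t[i] in [0, 1] for expert i
        W = sum(w)
        p = [wi / W for wi in w]               # current distribution over experts
        total_loss += sum(pi * li for pi, li in zip(p, loss_t))
        w = [wi * math.exp(-eta * li) for wi, li in zip(w, loss_t)]
    return total_loss

# Toy run: two experts, and the better one changes halfway through the sequence.
T = 200
losses = [[0.0, 1.0] if t < T // 2 else [1.0, 0.0] for t in range(T)]
print(hedge(losses))   # classical regret bounds only compare to the single best expert
```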


Journal Article
TL;DR: An exposition of Bourgain's result, giving a high level way to view his extractor construction, and a proof of a generalization of Vazirani’s XOR lemma that seems interesting in its own right.
Abstract: A construction of Bourgain [Bou05] gave the first 2-source extractor to break the min-entropy rate 1/2 barrier. In this note, we write an exposition of his result, giving a high level way to view his extractor construction. We also include a proof of a generalization of Vazirani’s XOR lemma that seems interesting in its own right, and an argument (due to Boaz Barak) that shows that any two source extractor with sufficiently small error must be strong.

99 citations


Journal Article
TL;DR: An introduction into the theory underlying algorithmic meta theorems and a survey of the most important results in this area are presented.
Abstract: Algorithmic meta theorems are algorithmic results that apply to whole families of combinatorial problems, instead of just specific problems. These families are usually defined in terms of logic and graph theory. An archetypal algorithmic meta theorem is Courcelle’s Theorem [9], which states that all graph properties definable in monadic second-order logic can be decided in linear time on graphs of bounded tree width. This article is an introduction into the theory underlying such meta theorems and a survey of the most important results in this area.

95 citations


Journal Article
TL;DR: This article characterizes relational structures H for which #CSP(H) can be solved in polynomial time and proves that for all other structures the problem is #P-complete.
Abstract: The Counting Constraint Satisfaction Problem (#CSP(H)) over a finite relational structure H can be expressed as follows: given a relational structure G over the same vocabulary, determine the number of homomorphisms from G to H. In this article we characterize relational structures H for which #CSP(H) can be solved in polynomial time and prove that for all other structures the problem is #P-complete.

93 citations
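To make the counting problem concrete, the brute-force sketch below counts homomorphisms from G to H for structures with a single binary relation (directed graphs). It is exponential in |G| and is only meant to pin down the quantity #CSP(H) that the dichotomy theorem classifies; #P-completeness means no efficient algorithm exists in general.

```python
from itertools import product

def count_homomorphisms(G_vertices, G_edges, H_vertices, H_edges):
    """Count maps f from V(G) to V(H) such that every related pair of G maps to a
    related pair of H; this is exactly the number #CSP(H) asks for."""
    H_edges = set(H_edges)
    count = 0
    for image in product(H_vertices, repeat=len(G_vertices)):
        f = dict(zip(G_vertices, image))
        if all((f[u], f[v]) in H_edges for (u, v) in G_edges):
            count += 1
    return count

# Example: homomorphisms from a directed 3-cycle into the complete digraph (with loops)
# on 2 vertices; every map is a homomorphism, so the answer is 2^3 = 8.
G_V, G_E = [0, 1, 2], [(0, 1), (1, 2), (2, 0)]
H_V, H_E = [0, 1], [(0, 0), (0, 1), (1, 0), (1, 1)]
print(count_homomorphisms(G_V, G_E, H_V, H_E))   # 8
```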


Journal Article
TL;DR: This work introduces a theory of parameterized approximability, which is intended to deal with the efficient approximation of small cost solutions for optimisation problems.
Abstract: Combining classical approximability questions with parameterized complexity, we introduce a theory of parameterized approximability. The main intention of this theory is to deal with the efficient approximation of small cost solutions for optimisation problems.

85 citations


Journal Article
TL;DR: In this article, the authors consider the approximation ability of randomized search heuristics for the class of covering problems and compare single-objective and multiobjective models for such problems.
Abstract: The main aim of randomized search heuristics is to produce good approximations of optimal solutions within a small amount of time. In contrast to numerous experimental results, there are only a few theoretical explorations on this subject. We consider the approximation ability of randomized search heuristics for the class of covering problems and compare single-objective and multi-objective models for such problems. For the VertexCover problem, we point out situations where the multi-objective model leads to a fast construction of optimal solutions while in the single-objective case, no good approximation can be achieved in expected polynomial time. Examining the more general SetCover problem, we show that optimal solutions can be approximated within a logarithmic factor of the size of the ground set, using the multi-objective approach, while the approximation quality obtainable by the single-objective approach in expected polynomial time may be arbitrarily bad.
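A hedged sketch of the kind of simple multi-objective heuristic (GSEMO-style) commonly analyzed in this line of work, instantiated for VertexCover, is shown below. The two objectives to be minimized are (number of uncovered edges, number of selected vertices); the iteration budget and mutation rate are illustrative, not the ones used in the paper's proofs.

```python
import random

def gsemo_vertex_cover(n, edges, iterations=20000):
    """GSEMO sketch: keep an archive of mutually non-dominated bit strings, mutate a
    random archive member by flipping each bit with probability 1/n, and insert the
    offspring if no archived solution dominates it."""
    def objectives(x):
        uncovered = sum(1 for (u, v) in edges if not (x[u] or x[v]))
        return (uncovered, sum(x))

    def dominates(a, b):  # a dominates b: no worse in both objectives, better in one
        return a[0] <= b[0] and a[1] <= b[1] and a != b

    archive = {}                           # objective vector -> solution
    x = tuple(0 for _ in range(n))
    archive[objectives(x)] = x
    for _ in range(iterations):
        parent = random.choice(list(archive.values()))
        child = tuple(b ^ (random.random() < 1.0 / n) for b in parent)
        fc = objectives(child)
        if any(dominates(fa, fc) for fa in archive):
            continue
        archive = {fa: xa for fa, xa in archive.items() if not dominates(fc, fa)}
        archive[fc] = child
    covers = [x for f, x in archive.items() if f[0] == 0]   # feasible vertex covers
    return min(covers, key=sum) if covers else None

edges = [(0, 1), (0, 2), (0, 3), (4, 5)]
print(gsemo_vertex_cover(6, edges))   # e.g. selects vertex 0 plus one of {4, 5}
```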

Journal Article
TL;DR: The approach combines ideas from the junta test of Fischer et al. [16] with ideas from learning theory, and yields property testers that make poly(s/ε) queries for Boolean function classes such as s-term DNF formulas and s-sparse polynomials over finite fields.
Abstract: We describe a general method for testing whether a function on n input variables has a concise representation. The approach combines ideas from the junta test of Fischer et al. [16] with ideas from learning theory, and yields property testers that make poly(s/ε) queries (independent of n) for Boolean function classes such as s-term DNF formulas (answering a question posed by Parnas et al. [12]), size-s decision trees, size-s Boolean formulas, and size-s Boolean circuits. The method can be applied to non-Boolean valued function classes as well. This is achieved via a generalization of the notion of variation from Fischer et al. to non-Boolean functions. Using this generalization we extend the original junta test of Fischer et al. to work for non-Boolean functions, and give poly(s/ε)-query testing algorithms for non-Boolean valued function classes such as size-s algebraic circuits and s-sparse polynomials over finite fields. We also prove an Ω(√s) query lower bound for nonadaptively testing s-sparse polynomials over finite fields of constant size. This shows that in some instances, our general method yields a property tester with query complexity that is optimal (for nonadaptive algorithms) up to a polynomial factor.

Journal Article
TL;DR: This work starts by considering a computational setting for the problem where the goal of one of the interacting players is to gain some computational wisdom from the other player, and shows that if the second player is "sufficiently" helpful and powerful, then the first player can gain significant computational power (deciding PSPACE complete languages).
Abstract: Is it possible for two intelligent beings to communicate meaningfully, without any common language or background? This question has interest on its own, but is especially relevant in the context of modern computational infrastructures where an increase in the diversity of computers is making the task of inter-computer interaction increasingly burdensome. Computers spend a substantial amount of time updating their software to increase their knowledge of other computing devices. In turn, for any pair of communicating devices, one has to design software that enables the two to talk to each other. Is it possible instead to let the two computing entities use their intelligence (universality as computers) to learn each others' behavior and attain a common understanding? What is 'common understanding?' We explore this question in this paper. To formalize this problem, we suggest that one should study the 'goal of communication:' why are the two entities interacting with each other, and what do they hope to gain by it? We propose that by considering this question explicitly, one can make progress on the question of universal communication. We start by considering a computational setting for the problem where the goal of one of the interacting players is to gain some computational wisdom from the other player. We show that if the second player is "sufficiently" helpful and powerful, then the first player can gain significant computational power (deciding PSPACE complete languages). Our work highlights some of the definitional issues underlying the task of formalizing universal communication, but also suggests some interesting phenomena and highlights potential tools that may be used for such communication.

Journal Article
TL;DR: It is proved that the sum of d small-bias generators L : F^s → F^n fools degree-d polynomials in n variables over a prime field F, for any fixed degree d and field F.
Abstract: We prove that the sum of d small-bias generators $L : {\mathbb{F}}^{s} \rightarrow {\mathbb{F}}^{n}$ fools degree-d polynomials in n variables over a field ${\mathbb{F}}$, for any fixed degree d and field ${\mathbb{F}}$, including ${\mathbb{F}} = {\mathbb{F}}_{2} = \{0, 1\}$. Our result builds on, simplifies, and improves on both the work by Bogdanov and Viola (FOCS ’07) and the follow-up by Lovett (STOC ’08). The first relies on a conjecture that turned out to be true only for some degrees and fields, while the latter considers the sum of 2d small-bias generators (as opposed to d in our result).

Journal Article
TL;DR: The LDCs presented also translate directly into three-server Private Information Retrieval protocols with small communication for a database of size n, starting with a Mersenne prime.
Abstract: Locally decodable codes (LDCs) support decoding of any particular symbol of the input message by reading a constant number of symbols of the codeword, even in the presence of a constant fraction of errors. In a recent breakthrough [10], Yekhanin constructed 3-query LDCs that hugely improve over earlier constructions. Specifically, for a Mersenne prime p = 2^t − 1, binary LDCs of length exp(n^{1/t}) were obtained for infinitely many n. Using the largest known Mersenne prime, this implies LDCs of length less than exp(n^{10^{-7}}). Assuming the infinitude of Mersenne primes, the construction yields LDCs of length exp(n^{O(1/log log n)}) for infinitely many n. Inspired by [10], we construct 3-query binary LDCs with the same parameters from Mersenne primes. While all the main technical tools are borrowed from [10], we give a self-contained simple construction of LDCs. Our bounds do not improve over [10], and have worse soundness of the decoder. However, the LDCs are simpler and generalize naturally to prime fields other than F_2. The LDCs presented also translate directly into three-server Private Information Retrieval (PIR) protocols with small communication for a database of size n, starting with a Mersenne prime.

Journal Article
TL;DR: In this paper, it was shown that random sparse binary linear codes are locally testable and locally decodable (under any linear encoding) with constant queries (with probability tending to one) under the assumption that the code should have only polynomially many codewords.
Abstract: We show that random sparse binary linear codes are locally testable and locally decodable (under any linear encoding) with constant queries (with probability tending to one). By sparse, we mean that the code should have only polynomially many codewords. Our results are the first to show that local decodability and testability can be found in random, unstructured, codes. Previously known locally decodable or testable codes were either classical algebraic codes, or new ones constructed very carefully. We obtain our results by extending the techniques of Kaufman and Litsyn [11] who used the MacWilliams Identities to show that "almost-orthogonal" binary codes are locally testable. Their definition of almost orthogonality expected codewords to disagree in n/2 ± O(√n) coordinates in codes of block length n. The only families of codes known to have this property were the dual-BCH codes. We extend their techniques, and simplify them in the process, to include codes of distance at least n/2 − O(n^{1−γ}) for any γ > 0, provided the number of codewords is O(n^t) for some constant t. Thus our results derive the local testability of linear codes from the classical coding theory parameters, namely the rate and the distance of the codes. More significantly, we show that this technique can also be used to prove the "self-correctability" of sparse codes of sufficiently large distance. This allows us to show that random linear codes under linear encoding functions are locally decodable. This ought to be surprising in that the definition of a code doesn't specify the encoding function used! Our results effectively say that any linear function of the bits of the codeword can be locally decoded in this case.

Journal Article
TL;DR: In this article, the problem of testing the expansion of graphs with bounded degree d in sublinear time was studied, and it was shown that the algorithm proposed by Goldreich and Ron [9] (ECCC-2000) can distinguish with high probability between α-expanders of degree bound d and graphs which are ε-far from having expansion at least Ω(α²).
Abstract: We study the problem of testing the expansion of graphs with bounded degree d in sublinear time. A graph is said to be an α-expander if every vertex set U ⊆ V of size at most |V|/2 has a neighborhood of size at least α|U|. We show that the algorithm proposed by Goldreich and Ron [9] (ECCC-2000) for testing the expansion of a graph distinguishes with high probability between α-expanders of degree bound d and graphs which are ε-far from having expansion at least Ω(α²). This improves a recent result of Czumaj and Sohler [3] (FOCS-07) who showed that this algorithm can distinguish between α-expanders of degree bound d and graphs which are ε-far from having expansion at least Ω(α²/log n). It also improves a recent result of Kale and Seshadhri [12] (ECCC-2007) who showed that this algorithm can distinguish between α-expanders and graphs which are ε-far from having expansion at least Ω(α²) with twice the maximum degree. Our methods combine the techniques of [3], [9] and [12].

Journal Article
TL;DR: This paper constructs explicit deterministic extractors from polynomial sources, which are distributions sampled by low degree multivariate polynomials over finite fields, and uses a theorem of Bombieri to turn the condensers into extractors, which allows extracting all the randomness from polynomial sources over exponentially large prime fields.
Abstract: In this paper we construct explicit deterministic extractors from polynomial sources, which are distributions sampled by low degree multivariate polynomials over finite fields. This naturally generalizes previous work on extraction from affine sources (which are degree 1 polynomials). A direct consequence is a deterministic extractor for distributions sampled by polynomial size arithmetic circuits over exponentially large fields. The steps in our extractor construction, and the tools (mainly from algebraic geometry) that we use for them, are of independent interest: The first step is a construction of rank extractors, which are polynomial mappings which ‘extract’ the algebraic rank from any system of low degree polynomials. More precisely, for any n polynomials, k of which are algebraically independent, a rank extractor outputs k algebraically independent polynomials of slightly higher degree. The rank extractors we construct are applicable not only over finite fields but also over fields of characteristic zero. The next step is relating algebraic independence to min-entropy. We use a theorem of Wooley to show that these parameters are tightly connected. This allows replacing the algebraic assumption on the source (above) by the natural information theoretic one. It also shows that a rank extractor is already a high quality condenser for polynomial sources over polynomially large fields. Finally, to turn the condensers into extractors, we employ a theorem of Bombieri, giving a character sum estimate for polynomials defined over curves. It allows extracting all the randomness (up to a multiplicative constant) from polynomial sources over exponentially large prime fields.
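The notion of "algebraic rank" above can be made concrete via the Jacobian criterion, which is valid in characteristic zero: polynomials are algebraically independent exactly when their Jacobian matrix has full rank. The sympy sketch below computes this rank for a toy system; it illustrates only the quantity a rank extractor is designed to preserve, not the extractor construction itself.

```python
from sympy import symbols, Matrix

# Jacobian criterion (characteristic zero): f_1, ..., f_k are algebraically
# independent exactly when their Jacobian matrix has rank k. The rank of the
# Jacobian of the whole system is its algebraic rank.
x, y, z = symbols('x y z')
f1 = x + y
f2 = (x + y) ** 2     # equals f1**2, so it adds no new algebraic rank
f3 = z

J = Matrix([f1, f2, f3]).jacobian([x, y, z])
print(J.rank())       # 2: the system {f1, f2, f3} has algebraic rank 2
```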

Journal Article
TL;DR: The methods of Kabanets and Impagliazzo imply that if one can derandomize polynomial identity testing for bounded depth circuits, then NEXP does not have bounded depth arithmetic circuits; that is, either NEXP ⊄ P/poly or the Permanent is not computable by polynomial size bounded depth arithmetic circuits.
Abstract: In this paper we show that lower bounds for bounded depth arithmetic circuits imply derandomization of polynomial identity testing for bounded depth arithmetic circuits. More formally, if there exists an explicit polynomial $f$ that cannot be computed by a depth $d$ arithmetic circuit of small size, then there exists an efficient deterministic black-box algorithm to test whether a given depth $d-5$ circuit that computes a polynomial of relatively small individual degrees is identically zero or not. In particular, if we are guaranteed that the tested circuit computes a multilinear polynomial, then we can perform the identity test efficiently. To the best of our knowledge this is the first hardness-randomness tradeoff for bounded depth arithmetic circuits. The above results are obtained using the arithmetic Nisan-Wigderson generator of Kabanets and Impagliazzo together with a new theorem on bounded depth circuits, which is the main technical contribution of our work. This theorem deals with polynomial equations of the form $P(x_1,\dots,x_n,y)\equiv0$ and shows that if $P$ has a circuit of depth $d$ and size $s$ and if the polynomial $f(x_1,\dots,x_n)$ satisfies $P(x_1,\dots,x_n,f)\equiv0$, then $f$ has a circuit of depth $d+3$ and size $\mathrm{poly}(s,m^r)$, where $m$ is the total degree of $f$ and $r$ is the degree of $y$ in $P$. This circuit for $f$ can be found probabilistically in time $\mathrm{poly}(s,m^r)$. In the other direction we observe that the methods of Kabanets and Impagliazzo can be used to show that derandomizing identity testing for bounded depth circuits implies lower bounds for the same class of circuits. More formally, if we can derandomize polynomial identity testing for bounded depth circuits, then NEXP does not have bounded depth arithmetic circuits. That is, either $\mathrm{NEXP} \not\subseteq \mathrm{P}/\mathrm{poly}$ or the Permanent is not computable by polynomial size bounded depth arithmetic circuits.
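For reference, the randomized black-box identity test that such results aim to derandomize is the classical Schwartz-Zippel test, sketched below; the evaluation-set size, trial count, and example black box are illustrative choices.

```python
import random

def polynomial_identity_test(blackbox, num_vars, degree_bound, set_size=None, trials=20):
    """Classical randomized black-box PIT via the Schwartz-Zippel lemma: a nonzero
    polynomial of total degree d vanishes at a uniformly random point of S^n with
    probability at most d/|S|. Derandomization replaces the random points by an
    explicit hitting set; this sketch is only the randomized baseline."""
    S = set_size or 100 * degree_bound        # evaluation set much larger than the degree
    for _ in range(trials):
        point = [random.randrange(S) for _ in range(num_vars)]
        if blackbox(point) != 0:
            return False                       # witnessed: not identically zero
    return True                                # (probably) identically zero

# Example black box: (x0 + x1)^2 - x0^2 - 2*x0*x1 - x1^2, which is identically zero.
f = lambda p: (p[0] + p[1]) ** 2 - p[0] ** 2 - 2 * p[0] * p[1] - p[1] ** 2
print(polynomial_identity_test(f, num_vars=2, degree_bound=2))   # True
```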

Journal Article
TL;DR: It is shown that a given predicate D gives rise to a rapidly mixing random walk on Z_2^n, which allows the problem to be reduced to communication lower bounds for typical predicates, and Paturi's approximation lower bounds are used to prove that a typical predicate behaves analogously to PARITY with respect to a smooth distribution on the inputs.

Journal Article
TL;DR: The complexity of propositional proof systems of varying strength extending resolution by allowing it to operate with disjunctions of linear equations instead of clauses is studied and an exponential-size lower bound on refutations in a certain, considerably strong, fragment of resolution over linear equations is established.
Abstract: We develop and study the complexity of propositional proof systems of varying strength extending resolution by allowing it to operate with disjunctions of linear equations instead of clauses. We demonstrate polynomial-size refutations for hard tautologies like the pigeonhole principle, Tseitin graph tautologies and the clique-coloring tautologies in these proof systems. Using (monotone) interpolation we establish an exponential-size lower bound on refutations in a certain, considerably strong, fragment of resolution over linear equations, as well as a general polynomial upper bound on (non-monotone) interpolants in this fragment. We then apply these results to extend and improve previous results on multilinear proofs (over fields of characteristic 0), as studied in [Ran Raz, Iddo Tzameret, The strength of multilinear proofs. Comput. Complexity (in press)]. Specifically, we show the following: • Proofs operating with depth-3 multilinear formulas polynomially simulate a certain, considerably strong, fragment of resolution over linear equations. • Proofs operating with depth-3 multilinear formulas admit polynomial-size refutations of the pigeonhole principle and Tseitin graph tautologies. The former improve over a previous result that established small multilinear proofs only for the functional pigeonhole principle. The latter are different from previous proofs, and apply to multilinear proofs of Tseitin mod p graph tautologies over any field of characteristic 0. We conclude by connecting resolution over linear equations with extensions of the cutting planes proof system.

Journal Article
TL;DR: It is proved that depth-three circuits consisting of a MAJORITY gate at the output, gates computing arbitrary symmetric functions at the second layer, and arbitrary gates of bounded fan-in at the base layer cannot simulate the circuit class AC0 in sub-exponential size.
Abstract: We develop a new technique for proving lower bounds on the randomized communication complexity of boolean functions in the multiparty 'number on the forehead' model. Our method is based on the notion of voting polynomial degree of functions and extends the degree-discrepancy lemma in the recent work of Sherstov (2007). Using this, we prove that depth-three circuits consisting of a MAJORITY gate at the output, gates computing arbitrary symmetric functions at the second layer, and arbitrary gates of bounded fan-in at the base layer, i.e. circuits of type MAJ ∘ SYMM ∘ ANY_{O(1)}, cannot simulate the circuit class AC0 in sub-exponential size. Further, even if the fan-in of the bottom ANY gates is increased to o(log log n), such circuits cannot simulate AC0 in quasi-polynomial size. This is in contrast to the classical result of Yao and Beigel-Tarui that shows that such circuits, having only MAJORITY gates, can simulate the class ACC0 in quasi-polynomial size when the bottom fan-in is increased to poly-logarithmic size. In the second part, we simplify the arguments in the breakthrough work of Bourgain (2005) for obtaining exponentially small upper bounds on the correlation between the boolean function MOD_q and functions represented by polynomials of small degree over Z_m, when m, q ≥ 2 are co-prime integers. Our calculation also shows similarity with techniques used to estimate the discrepancy of functions in the multiparty communication setting. This results in a slight improvement of the estimates of Bourgain et al. (2005). It is known that such estimates imply that circuits of type MAJ ∘ MOD_m ∘ AND_{ε log n} cannot compute the MOD_q function in sub-exponential size. It remains a major open question to determine if such circuits can simulate ACC0 in polynomial size when the bottom fan-in is increased to poly-logarithmic size.

Journal Article
TL;DR: It is proved that for all primes p except for possibly one of them, and for all c < 2 cos(π/7) ≈ 1.801, there is a d > 0 such that MOD_p-Sat is not solvable in n^c time and n^d space by general algorithms; these are the first time-space tradeoffs for counting the number of solutions to an NP problem modulo small integers.
Abstract: We prove the first time-space tradeoffs for counting the number of solutions to an NP problem modulo small integers, and also improve upon known time-space tradeoffs for Sat. Let m > 0 be an integer, and define MOD_m-Sat to be the problem of determining if a given Boolean formula has exactly km satisfying assignments, for some integer k. We show that for all primes p except for possibly one of them, and for all c < 2 cos(π/7) ≈ 1.801, there is a d > 0 such that MOD_p-Sat is not solvable in n^c time and n^d space by general algorithms. That is, there is at most one prime p that does not satisfy the tradeoff. We prove that the same limitation holds for Sat and MOD_6-Sat, as well as MOD_m-Sat for any composite m that is not a prime power. Our main tool is a general method for rapidly simulating deterministic computations with restricted space, by counting the number of solutions to NP predicates modulo integers. The simulation converts an ordinary algorithm into a "canonical" one that consumes roughly the same amount of time and space, yet canonical algorithms have nice properties suitable for counting.

Journal Article
TL;DR: This work presents a new and arguably simpler construction of LTCs that is purely combinatorial, does not rely on PCP machinery, and matches the parameters of the best known construction.
Abstract: An error correcting code is said to be locally testable if there is a test that checks whether a given string is a codeword, or rather far from the code, by reading only a constant number of symbols of the string. While the best known construction of locally testable codes (LTCs) by Ben-Sasson and Sudan [SIAM J. Comput., 38 (2008), pp. 551-607] and Dinur [J. ACM, 54 (2007), article 12] achieves very efficient parameters, it relies heavily on algebraic tools and on probabilistically checkable proof (PCP) machinery. In this work we present a new and arguably simpler construction of LTCs that is purely combinatorial, does not rely on PCP machinery, and matches the parameters of the best known construction. However, unlike the latter construction, our construction is not entirely explicit.

Journal Article
TL;DR: In this paper, it was shown that the symmetries of a property being tested play a central role in property testing, and that an O(1)-local "characterization" is a necessary and sufficient condition for O(1)-local testability.
Abstract: We argue that the symmetries of a property being tested play a central role in property testing. We support this assertion in the context of algebraic functions, by examining properties of functions mapping a vector space Kn over a field K to a subfield F. We consider (F-)linear properties that are invariant under linear transformations of the domain and prove that an O(1)-local "characterization" is a necessary and sufficient condition for O(1)-local testability, when |K| = O(1). (A local characterization of a property is a definition of a property in terms of local constraints satisfied by functions exhibiting a property.) For the subclass of properties that are invariant under affine transformations of the domain, we prove that the existence of a single O(1)-local constraint implies O(1)-local testability. These results generalize and extend the class of algebraic properties, most notably linearity and low-degree-ness, that were previously known to be testable. In particular, the extensions include properties satisfied by functions of degree linear in n that turn out to be O(1)-locally testable. Our results are proved by introducing a new notion that we term "formal characterizations". Roughly, this corresponds to characterizations that are given by a single local constraint and its permutations under linear transformations of the domain. Our main testing result shows that local formal characterizations essentially imply local testability. We then investigate properties that are linear-invariant and attempt to understand their local formal characterizability. Our results here give coarse upper and lower bounds on the locality of constraints and characterizations for linear-invariant properties in terms of some structural parameters of the property we introduce. The lower bounds rule out any characterization, while the upper bounds give formal characterizations. Combining the two gives a test for all linear-invariant properties with local characterizations. We believe that invariance of properties is a very interesting notion to study in the context of property testing in general and merits a systematic study. In particular, the class of linear-invariant and affine-invariant properties exhibits a rich variety among algebraic properties and offers better intuition about algebraic properties than the more limited class of low-degree functions.
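The archetypal example of an O(1)-locally testable, linear-invariant (indeed affine-invariant) property mentioned above is linearity, tested by the classical BLR test. The sketch below is that standard test, not a construction from this paper; the trial count is illustrative, and the rejection of the non-linear example holds with overwhelming probability rather than with certainty.

```python
import random

def blr_linearity_test(f, n, trials=100):
    """BLR linearity test: pick random x, y in F_2^n and check f(x) + f(y) = f(x + y).
    Each check reads only three values of f, so the test is 3-local."""
    for _ in range(trials):
        x = [random.randint(0, 1) for _ in range(n)]
        y = [random.randint(0, 1) for _ in range(n)]
        xy = [a ^ b for a, b in zip(x, y)]
        if f(x) ^ f(y) != f(xy):
            return False          # reject: a violated local constraint was found
    return True                   # accept: consistent with being (close to) linear

linear = lambda v: v[0] ^ v[2]    # a genuinely linear function on F_2^5
not_linear = lambda v: v[0] & v[1]   # AND is 1/4-far from every linear function
print(blr_linearity_test(linear, 5), blr_linearity_test(not_linear, 5))  # expected: True False
```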

Journal Article
TL;DR: A greatly simplified proof is presented of the length-space trade-off result for resolution in [P. Hertel, T. Pitassi], along with several other theorems in the same vein.
Abstract: We present a greatly simplified proof of the length-space trade-off result for resolution in [P. Hertel, T. Pitassi, Exponential time/space speedups for resolution and the PSPACE-completeness of black-white pebbling, in: Proceedings of the 48th Annual IEEE Symposium on Foundations of Computer Science (FOCS '07), Oct. 2007, pp. 137-149], and also prove a couple of other theorems in the same vein. We point out two important ingredients needed for our proofs to work, and discuss some possible conclusions. Our key trick is to look at formulas of the type G ∧ H, where G and H are over disjoint sets of variables and have very different length-space properties with respect to resolution.

Journal Article
TL;DR: This work demonstrates lower bounds for algebraically generating generalized characteristic vectors over certain algebraic structures, and shows how to directly apply these abstract algebraic results to put lower bounds on algebraic constructions of a number of cryptographic protocols, including PIR-writing and private keyword search protocols.
Abstract: In cryptography, there has been tremendous success in building primitives out of homomorphic semantically-secure encryption schemes, using homomorphic properties in a black-box way. A few notable examples of such primitives include items like private information retrieval schemes and collision-resistant hash functions (e.g. [14, 6, 13]). In this paper, we illustrate a general methodology for determining what types of protocols can be implemented in this way and which cannot. This is accomplished by analyzing the computational power of various algebraic structures which are preserved by existing cryptosystems. More precisely, we demonstrate lower bounds for algebraically generating generalized characteristic vectors over certain algebraic structures, and subsequently we show how to directly apply these abstract algebraic results to put lower bounds on algebraic constructions of a number of cryptographic protocols, including PIR-writing and private keyword search protocols. We hope that this work will provide a simple “litmus test” of feasibility for use by other cryptographic researchers attempting to develop new protocols that require computation on encrypted data. Additionally, a precise mathematical language for reasoning about such problems is developed in this work, which may be of independent interest.
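To illustrate the "black-box use of homomorphic properties" that such lower bounds address, here is a toy additively homomorphic scheme (lifted-ElGamal style, with deliberately tiny, insecure parameters) used for a hypothetical PIR-flavoured selection; none of this is a construction from the paper, and the parameter choices are purely for the demo.

```python
import random

p, g = 2 ** 13 - 1, 3                      # small prime modulus and base element (toy only)
sk = random.randrange(2, p - 1)
pk = pow(g, sk, p)

def enc(m):
    """Encrypt a small non-negative integer m 'in the exponent'."""
    r = random.randrange(2, p - 1)
    return (pow(g, r, p), pow(pk, r, p) * pow(g, m, p) % p)

def dec(c, max_m=1000):
    """Recover g^m with the secret key, then brute-force the small discrete log."""
    a, b = c
    gm = b * pow(a, p - 1 - sk, p) % p      # b / a^sk = g^m
    for m in range(max_m + 1):
        if pow(g, m, p) == gm:
            return m
    raise ValueError("message out of range")

def c_add(c1, c2):                           # Enc(m1) * Enc(m2) = Enc(m1 + m2)
    return (c1[0] * c2[0] % p, c1[1] * c2[1] % p)

def c_scale(c, k):                           # Enc(m)^k = Enc(k * m)
    return (pow(c[0], k, p), pow(c[1], k, p))

# Hypothetical selection: the client encrypts an indicator vector; the server uses only
# the homomorphic operations to compute Enc(<indicator, database>) = Enc(database[i]).
database = [7, 42, 13, 99]
i = 2
cts = [enc(1 if j == i else 0) for j in range(len(database))]
acc = enc(0)
for ct, d in zip(cts, database):
    acc = c_add(acc, c_scale(ct, d))
print(dec(acc))   # 13
```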

Journal Article
TL;DR: This paper considers the problem of determining whether an unknown arithmetic circuit, for which the authors have oracle access, computes the identically zero polynomial, and obtains a quasi-polynomial time deterministic black-box identity testing algorithm for ΣΠΣ(k) circuits (depth-3 circuits with top fan-in equal to k).
Abstract: In this paper we consider the problem of determining whether an unknown arithmetic circuit, for which we have oracle access, computes the identically zero polynomial. Our focus is on depth-3 circuits with a bounded top fan-in. We obtain the following results. 1. A quasi-polynomial time deterministic black-box identity testing algorithm for ΣΠΣ(k) circuits (depth-3 circuits with top fan-in equal to k). 2. A randomized black-box algorithm for identity testing of ΣΠΣ(k) circuits that uses a polylogarithmic number of random bits and makes a single query to the black-box.

Journal Article
TL;DR: A public-key cryptosystem with worst-case/average-case equivalence, which generalizes a conceptually simple modification of the “Ajtai-Dwork” cryptosystem and provides a unified treatment of the two cryptosystems.
Abstract: We describe a public-key cryptosystem with worst-case/average-case equivalence. The cryptosystem has an amortized plaintext to ciphertext expansion of O(n), relies on the hardness of the Õ(n²)-unique shortest vector problem for lattices, and requires a public key of size at most O(n⁴) bits. The new cryptosystem generalizes a conceptually simple modification of the “Ajtai-Dwork” cryptosystem. We provide a unified treatment of the two cryptosystems.

Journal Article
TL;DR: In this article, a new approximation algorithm for the maximum acyclic subgraph problem was presented, which achieves a 1/2 + Omega(delta/ log n) fraction of all edges.
Abstract: In this paper we present a new approximation algorithm for the Max Acyclic Subgraph problem. Given an instance where the maximum acyclic subgraph contains 1/2 + delta fraction of all edges, our algorithm finds an acyclic subgraph with 1/2 + Omega(delta/ log n) fraction of all edges.
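For context, the classical baseline that such results improve on is the trivial 1/2-approximation via a random vertex ordering, sketched below; achieving 1/2 + Ω(δ/log n) requires the paper's different, more involved algorithm.

```python
import random

def random_ordering_half(edges, n):
    """Classical 1/2-approximation for Max Acyclic Subgraph: order the vertices by a
    random permutation; the forward edges and the backward edges are each acyclic,
    and the larger of the two sides contains at least half of all edges."""
    pos = {v: i for i, v in enumerate(random.sample(range(n), n))}
    forward = [(u, v) for (u, v) in edges if pos[u] < pos[v]]
    backward = [(u, v) for (u, v) in edges if pos[u] > pos[v]]
    return forward if len(forward) >= len(backward) else backward

edges = [(0, 1), (1, 2), (2, 0), (0, 3), (3, 1)]
print(random_ordering_half(edges, 4))   # an acyclic subgraph with >= 3 of the 5 edges
```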