
Showing papers in "Electronic Colloquium on Computational Complexity in 1994"


Journal Article
TL;DR: This paper proves lower bounds of the form exp(n^{ε_d}), ε_d > 0, on the length of proofs of an explicit sequence of tautologies based on the Pigeonhole Principle, in proof systems using formulas of depth d, for any constant d; this is the largest lower bound for the strongest proof system for which superpolynomial lower bounds are known.
Abstract: We prove lower bounds of the form exp(n^{ε_d}), ε_d > 0, on the length of proofs of an explicit sequence of tautologies, based on the Pigeonhole Principle, in proof systems using formulas of depth d, for any constant d. This is the largest lower bound for the strongest proof system for which any superpolynomial lower bounds are known.

143 citations


Journal Article
TL;DR: Three explicit constructions of hash functions are presented, which exhibit a trade-off between the size of the family (and hence the number of random bits needed to generate a member of thefamily), and the quality (or error parameter) of the pseudo-random property it achieves.
Abstract: We present three explicit constructions of hash functions, which exhibit a trade-off between the size of the family (and hence the number of random bits needed to generate a member of the family), and the quality (or error parameter) of the pseudo-random property it achieves. Unlike previous constructions, most notably universal hashing, the size of our families is essentially independent of the size of the domain on which the functions operate. The first construction is for the mixing property-mapping a proportional part of any subset of the domain to any other subset. The other two are for the extraction property-mapping any subset of the domain almost uniformly into a range smaller than it. The second and third constructions handle (respectively) the extreme situations when the range is very large or very small. We provide lower bounds showing our constructions are nearly optimal, and mention some applications of the new constructions.
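The abstract contrasts these families with universal hashing, whose seed length grows with the domain. For concreteness, here is a minimal sketch of the classical Carter–Wegman universal family h_{a,b}(x) = ((a·x + b) mod p) mod m — not the paper's constructions; function names and parameters are illustrative only:

```python
import random

def make_hash(a, b, p, m):
    """One member h_{a,b}(x) = ((a*x + b) mod p) mod m of the family."""
    return lambda x: ((a * x + b) % p) % m

def random_member(p, m):
    """Draw a random member; this needs about 2*log2(p) random bits, which
    grows with the domain size p -- the dependence the paper's families avoid."""
    return make_hash(random.randrange(1, p), random.randrange(p), p, m)

# Pairwise independence (m = p case): over all (a, b) with a != 0, the pair
# (h(x1), h(x2)) for distinct inputs x1, x2 hits every pair (y1, y2) with
# y1 != y2 exactly once.
p = 5
pairs = {(make_hash(a, b, p, p)(1), make_hash(a, b, p, p)(3))
         for a in range(1, p) for b in range(p)}
```

For p = 5 the set `pairs` contains all 20 ordered pairs of distinct residues, witnessing pairwise independence of the family.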

95 citations



Journal Article
TL;DR: In this paper, the authors present a Logspace, many-one reduction from the undirected s-t connectivity problem to its complement, and show that SL=coSL.
Abstract: We present a Logspace, many-one reduction from the undirected s-t connectivity problem to its complement. This shows that SL=coSL.
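For context, undirected s-t connectivity (USTCON) is the canonical SL-complete problem. A minimal sketch below decides it with breadth-first search; note that BFS uses linear space, so this only illustrates the problem itself, not the logspace reduction that is the paper's contribution. All names are illustrative.

```python
from collections import deque

def st_connected(n, edges, s, t):
    """Decide undirected s-t connectivity (USTCON) by breadth-first search."""
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    seen = [False] * n
    seen[s] = True
    queue = deque([s])
    while queue:
        u = queue.popleft()
        if u == t:
            return True
        for v in adj[u]:
            if not seen[v]:
                seen[v] = True
                queue.append(v)
    return False
```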

66 citations


Journal Article
TL;DR: This paper exhibits, for any depth d ≥ 3, a large class of feedforward neural nets of depth d with w weights that have VC-dimension Ω(w · log w), and shows that this lower bound holds even if the inputs are restricted to Boolean values.
Abstract: It has been known for quite a while that the Vapnik-Chervonenkis dimension (VC-dimension) of a feedforward neural net with linear threshold gates is at most O(w · log w), where w is the total number of weights in the neural net. We show in this paper that this bound is in fact asymptotically optimal. More precisely, we exhibit for any depth d ≥ 3 a large class of feedforward neural nets of depth d with w weights that have VC-dimension Ω(w · log w). This lower bound holds even if the inputs are restricted to Boolean values. The proof of this result relies on a new method that allows us to encode more program-bits in the weights of a neural net than previously thought possible.

54 citations


Journal Article
TL;DR: In this article, an explicit Boolean function f is exhibited that cannot be computed by a nondeterministic syntactic read-k-times branching program of size smaller than exp(Ω(√n/(k·2^k))).
Abstract: A syntactic read-k-times branching program is defined by the restriction that no variable appears more than k times along any path (consistent or not). We exhibit an explicit Boolean function f that cannot be computed by a nondeterministic syntactic read-k-times branching program of size smaller than exp(Ω(√n/(k·2^k))), although its complement ¬f can be computed by a polynomial-size nondeterministic syntactic read-once branching program. This means in particular that the nonuniform analogue of NLOGSPACE = co-NLOGSPACE no longer holds for syntactic read-k-times networks with k = o(log n). We also show that (even for k = 1) the syntactic model is exponentially weaker than the more realistic "nonsyntactic" model.

48 citations


Journal Article
TL;DR: In this paper, it is shown that if p and q are distinct primes then there is no constant-depth, polynomial (in n) size Frege proof of CP_{p,n}, even if CP_{q,n} may be used as an axiom schema.
Abstract: If p is a prime and n is a positive integer, then the mod p counting principle for n (CP_{p,n}) is the following statement: there are no two equivalence relations Θ and Ψ on a set A of size n with the following properties: (a) each class of Θ contains exactly p elements, and (b) each class of Ψ with one exception contains exactly p elements; the exceptional class contains 1 element. We will always assume that p is constant and n is sufficiently large. If we associate Boolean variables x_{a,b}, y_{a,b} with all pairs formed from the elements of A, then CP_{p,n} can be expressed as a Boolean formula of constant depth and polynomial size. (We may think that aΘb iff x_{a,b} = 1, and aΨb iff y_{a,b} = 1.) This formula is a tautology. We show that if p, q are distinct primes then there is no constant-depth, polynomial (in n) size Frege proof of CP_{p,n}, even if we are allowed to use CP_{q,n} as an axiom schema. (We get the axiom schema from CP_{q,n} by replacing in every possible way the variables in CP_{q,n} by constant-depth, polynomial-size formulae formed from the variables used in the Frege proof. This schema says that if we define the partitions Θ, Ψ in an arbitrary but constant-depth, poly-size way, they cannot contradict CP_{q,n}.) The new part of the proof (compared to earlier results about the Pigeonhole and Parity Principles) is the reduction of the problem to a theorem about the representations of the symmetric group over the field with p elements (or, as formulated below, a theorem about symmetric systems of linear equations), and the proof of this theorem.
A Frege proof system is a way of proving that a propositional formula is a tautology. First we give a finite set of axioms, e.g. φ ∧ ψ → ψ ∧ φ is an axiom; more precisely, we accept that if we replace φ …
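To make the p = 2 case of the counting principle concrete: CP_{2,n} rests on the combinatorial fact that a set of odd size cannot be partitioned into classes of size 2. A small brute-force check of that fact (not, of course, of the proof-complexity result) might look like this; the function name is illustrative:

```python
def has_pair_partition(elems):
    """Backtracking search for a partition of elems (assumed distinct)
    into blocks of size 2. Returns True iff one exists."""
    if not elems:
        return True
    first = elems[0]
    for other in elems[1:]:
        # Pair `first` with `other` and recurse on the remainder.
        rest = [e for e in elems if e not in (first, other)]
        if has_pair_partition(rest):
            return True
    return False
```

An even-size set admits such a partition; any odd-size set does not, which is exactly why CP_{2,n} is a tautology for odd n.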

40 citations


Journal Article
TL;DR: This paper studies the pairs (U, V) of disjoint NP-sets representable in a theory T of Bounded Arithmetic, in the sense that T proves U ∩ V = ∅, which clarifies the approach to showing independence of central open questions in Boolean complexity from theories of Bounded Arithmetic.
Abstract: In this paper we study the pairs (U, V) of disjoint NP-sets representable in a theory T of Bounded Arithmetic in the sense that T proves U ∩ V = ∅. For a large variety of theories T we exhibit a natural disjoint NP-pair which is complete for the class of disjoint NP-pairs representable in T. This allows us to clarify the approach to showing independence of central open questions in Boolean complexity from theories of Bounded Arithmetic initiated in [1]. Namely, in order to prove the independence result from a theory T, it is sufficient to separate the corresponding complete NP-pair by a (quasi)polynomial-time computable set. We remark that such a separation is obvious for the theory S(S_2) + SΣ^b_2-PIND considered in [1], and this gives an alternative proof of the main result from that paper. [1] A. Razborov. Unprovability of lower bounds on circuit size in certain fragments of Bounded Arithmetic. To appear in Izvestiya of the RAN, 1994.

37 citations


Journal Article
TL;DR: The computational complexity of languages with interactive proofs of logarithmic knowledge complexity was studied in this article, where it was shown that all such languages can be recognized in BPP^NP.
Abstract: We study the computational complexity of languages which have interactive proofs of logarithmic knowledge complexity. We show that all such languages can be recognized in ${\cal BPP}^{\cal NP}$. Prior to this work, for languages with greater-than-zero knowledge complexity only trivial computational complexity bounds were known. In the course of our proof, we relate statistical knowledge complexity to perfect knowledge complexity; specifically, we show that, for the honest verifier, these hierarchies coincide up to a logarithmic additive term.

28 citations



Journal Article
TL;DR: In this paper, it is shown that simple operations on phase differences between spike-trains provide a very powerful computational tool that can in principle be used to carry out highly complex computations on a small network of spiking neurons.
Abstract: We investigate the computational power of a formal model for networks of spiking neurons. It is shown that simple operations on phase differences between spike-trains provide a very powerful computational tool that can in principle be used to carry out highly complex computations on a small network of spiking neurons. We construct networks of spiking neurons that simulate arbitrary threshold circuits, Turing machines, and a certain type of random access machines with real valued inputs. We also show that relatively weak basic assumptions about the response and threshold functions of the spiking neurons are sufficient to employ them for such computations.

Journal Article
TL;DR: A general technique is developed that supplies fully polynomial randomised approximation schemes for approximating the value of T(G; x, y) for any dense graph G, that is, any graph on n vertices whose minimum degree is Ω(n).
Abstract: The Tutte–Grothendieck polynomial T(G; x, y) of a graph G encodes numerous interesting combinatorial quantities associated with the graph. Its evaluation at various points in the (x, y) plane gives the number of spanning forests of the graph, the number of its strongly connected orientations, the number of its proper k-colorings, the (all-terminal) reliability probability of the graph, and various other invariants, the exact computation of each of which is well known to be #P-hard. Here we develop a general technique that supplies fully polynomial randomised approximation schemes for approximating the value of T(G; x, y) for any dense graph G, that is, any graph on n vertices whose minimum degree is Ω(n), whenever x ≥ 1 and y > 1, and at various additional points. Annan [2] has dealt with the case y = 1, x ≥ 1. This region includes evaluations of reliability and partition functions of the ferromagnetic Q-state Potts model. Extensions to linear matroids, where T specialises to the weight enumerator of linear codes, are considered as well.
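For small graphs, T(G; x, y) can be evaluated directly by the standard deletion–contraction recurrence (in exponential time, consistent with the #P-hardness the abstract mentions). The sketch below is purely illustrative and is unrelated to the paper's approximation scheme; all names are our own.

```python
def tutte(edges, x, y):
    """Evaluate the Tutte polynomial T(G; x, y) of the multigraph given as
    an edge list, via deletion-contraction (exponential time)."""
    if not edges:
        return 1
    (u, v), rest = edges[0], edges[1:]
    if u == v:                         # loop: contributes a factor y
        return y * tutte(rest, x, y)
    if not connected(rest, u, v):      # bridge: factor x, then contract it
        return x * tutte(contract(rest, u, v), x, y)
    # ordinary edge: delete it, plus contract it
    return tutte(rest, x, y) + tutte(contract(rest, u, v), x, y)

def contract(edges, u, v):
    """Identify vertex v with u in the edge list."""
    return [(u if a == v else a, u if b == v else b) for a, b in edges]

def connected(edges, u, v):
    """Is v reachable from u using the given edges?"""
    reach, frontier = {u}, [u]
    while frontier:
        a = frontier.pop()
        for p, q in edges:
            for s, t in ((p, q), (q, p)):
                if s == a and t not in reach:
                    reach.add(t)
                    frontier.append(t)
    return v in reach
```

For the triangle K_3 this recurrence yields x² + x + y, so T(K_3; 1, 1) = 3 counts its spanning trees and T(K_3; 2, 1) = 7 its spanning forests.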

Journal Article
TL;DR: The complexity of the satisfiability problem for these restricted branching programs is investigated and tight hierarchy results are proved for the classes of functions representable by k layers of ordered or indexed BDDs of polynomial size.
Abstract: Almost the same types of restricted branching programs (or binary decision diagrams, BDDs) are considered in complexity theory and in applications like hardware verification. These models are read-once branching programs (free BDDs) and certain types of oblivious branching programs (ordered and indexed BDDs with k layers). The complexity of the satisfiability problem for these restricted branching programs is investigated and tight hierarchy results are proved for the classes of functions representable by k layers of ordered or indexed BDDs of polynomial size.

Journal Article
TL;DR: This exposition focuses on three such proof systems — interactive proofs, zero-knowledge proofs, and probabilistic checkable proofs — stressing the essential role of randomness in each of them.
Abstract: Various types of probabilistic proof systems have played a central role in the development of computer science in the last decade. In this exposition, we concentrate on three such proof systems — interactive proofs, zero-knowledge proofs, and probabilistic checkable proofs — stressing the essential role of randomness in each of them.

Journal Article
TL;DR: In this article, it was shown that for all prime numbers p and integers q, r, it holds that if p divides r but not q then all threshold-MOD q circuits for MOD r have exponentially many nodes.
Abstract: We investigate the computational power of depth-2 circuits consisting of MOD_r gates at the bottom and a threshold gate with arbitrary weights at the top (for short, threshold-MOD_r circuits) and circuits with two levels of MOD gates (MOD_p-MOD_q circuits). In particular, we will show the following results: (i) For all prime numbers p and integers q, r, it holds that if p divides r but not q, then all threshold-MOD_q circuits for MOD_r have exponentially many nodes. (ii) For all integers r, all problems computable by depth-2 AND, OR, NOT circuits of polynomial size have threshold-MOD_r circuits with polynomially many edges. (iii) There is a problem computable by depth-3 AND, OR, NOT circuits of linear size and constant bottom fan-in which for all r needs threshold-MOD_r circuits with exponentially many nodes. (iv) For p, r different primes and q ≥ 2, k positive integers, where r does not divide q, every MOD_{p^k}-MOD_q circuit for MOD_r has exponentially many nodes. Results (i) and (iii) imply the first known exponential lower bounds on the number of nodes of threshold-MOD_r circuits, r ≠ 2. They are based on a new method for estimating the minimum length of threshold realizations over predefined function bases, which, in contrast to previous related techniques (Goldmann et al., 1992; Bruck and Smolensky, 1990; Kailath et al., 1991; Goldmann, 1993; Grolmusz, 1993), works even if the weight of the realization is allowed to be unbounded, and if the bases are allowed to be nonorthogonal. The special importance of result (iii) consists in the fact that the known spectral-theoretically based lower bound methods for threshold-XOR circuits (Bruck and Smolensky, 1990; Kailath et al., 1991) provably cannot be applied to AC^0 functions. Thus, by (ii), result (iii) is sharp.
It gives a partial negative answer to the open question of whether there exist simulations of AC^0 circuits by small-depth threshold circuits which are more efficient than that given by Yao's important result that ACC functions have depth-3 threshold circuits of quasipolynomial weight (Yao, 1990). Finally, we observe that our method also works for MOD_p-MOD_q circuits, if p is a power of a prime ((iv) above); see (Barrington et al., 1990; Krause and Waack, 1991; Yan and Parberry, 1994) for related results. A preliminary version of this paper appeared in (Krause and Pudlak, 1993).


Journal Article
TL;DR: A better and faster heuristic is suggested for the Steiner problem in graphs and in the rectilinear plane, which asks for a shortest tree connecting a given set of terminal points in a metric space.
Abstract: The Steiner tree problem requires finding a shortest tree connecting a given set of terminal points in a metric space. We suggest a better and faster heuristic for the Steiner problem in graphs and in the rectilinear plane. This heuristic finds a Steiner tree at most 1.757 and 1.267 times longer than the optimal solution in graphs and in the rectilinear plane, respectively.
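To fix ideas, here is the classical distance-network heuristic for the graph Steiner problem (Kou–Markowsky–Berman style, approximation ratio 2) — not the paper's 1.757 heuristic, which is more involved. Function names and the Floyd–Warshall/Prim choices are illustrative.

```python
def steiner_upper_bound(n, weighted_edges, terminals):
    """Distance-network heuristic: all-pairs shortest paths (Floyd-Warshall),
    then a minimum spanning tree (Prim) on the metric closure of the terminal
    set. Returns an upper bound on the optimal Steiner tree length, within a
    factor 2 of optimal."""
    INF = float("inf")
    dist = [[INF] * n for _ in range(n)]
    for i in range(n):
        dist[i][i] = 0
    for u, v, w in weighted_edges:
        dist[u][v] = min(dist[u][v], w)
        dist[v][u] = min(dist[v][u], w)
    for k in range(n):                       # Floyd-Warshall relaxation
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    ts = list(terminals)
    in_tree, total = {ts[0]}, 0
    while len(in_tree) < len(ts):            # Prim's MST on the terminals
        w, v = min((dist[a], b)[0][b], b) if False else min(
            (dist[a][b], b) for a in in_tree for b in ts if b not in in_tree)
        total += w
        in_tree.add(v)
    return total
```

On a 4-cycle with unit weights and terminals {0, 1, 2} this returns 2, which happens to be optimal here.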

Journal Article
TL;DR: A positive result about learning on multi-layer neural networks is given in Haussler's refinement of the PAC-learning model, in which no a-priori assumptions about the learning target are required, noise is permitted in the training data, and inputs and outputs are not restricted to Boolean values.
Abstract: There exist a number of negative results ([J], [BR], [KV]) about learning on neural nets in Valiant's model [V] for probably approximately correct learning ("PAC-learning"). These negative results are based on an asymptotic analysis where one lets the number of nodes in the neural net go to infinity. Hence this analysis is less adequate for the investigation of learning on a small fixed neural net with relatively few analog inputs (e.g. the principal components of some sensory data). The latter type of learning problem gives rise to a different kind of asymptotic question: can the true error of the neural net be brought arbitrarily close to that of a neural net with "optimal" weights through sufficiently long training? In this paper we employ some new arguments in order to give a positive answer to this question in Haussler's rather realistic refinement of Valiant's model for PAC-learning ([H], [KSS]). In this more realistic model no a-priori assumptions are required about the "learning target", noise is permitted in the training data, and the inputs and outputs are not restricted to Boolean values. As a special case our result implies one of the first positive results about learning on multi-layer neural nets in Valiant's original PAC-learning model. At the end of this paper we describe an efficient parallel implementation of this new learning algorithm.


Journal Article
TL;DR: It is shown that high order feedforward neural nets of constant depth with piecewise polynomial activation functions and arbitrary real weights can be simulated for boolean inputs and outputs by neuralnets of a somewhat larger size and depth with linear threshold gates and weights, providing the first known upper bound for the computational power and VC-dimension of the former type of neural nets.
Abstract: It is shown that high-order feedforward neural nets of constant depth with piecewise-polynomial activation functions and arbitrary real weights can be simulated for Boolean inputs and outputs by neural nets of a somewhat larger size and depth with Heaviside gates and weights from {-1, 0, 1}. This provides the first known upper bound for the computational power of the former type of neural nets. It is also shown that in the case of first-order nets with piecewise-linear activation functions one can replace arbitrary real weights by rational numbers with polynomially many bits without changing the Boolean function that is computed by the neural net. In order to prove these results, we introduce two new methods for reducing nonlinear problems about weights in multilayer neural nets to linear problems for a transformed set of parameters. These transformed parameters can be interpreted as weights in a somewhat larger neural net. As another application of our new proof technique we show that neural nets with piecewise-polynomial activation functions and a constant number of analog inputs are probably approximately correct (PAC) learnable (in Valiant's model for PAC learning [Comm. Assoc. Comput. Mach., 27 (1984), pp. 1134--1142]).

Journal Article
TL;DR: It is proved that 4 rounds are necessary and sufficient when 2√(2n) ≤ t ≤ 0.03n (for n sufficiently large), and at least 5 rounds are required when t ≥ 0.49n (for n sufficiently large).
Abstract: Consider a set of n processors that can communicate with each other. Assume that each processor can be either "good" or "faulty". Also assume that the processors can test each other. We consider how to use parallel testing rounds to identify the faulty processors, given an upper bound t on their number. We prove that 4 rounds are necessary and sufficient when 2√(2n) ≤ t ≤ 0.03n (for n sufficiently large). Furthermore, at least 5 rounds are necessary when t ≥ 0.49n (for n sufficiently large), and 10 rounds are sufficient when t < 0.5n (for all n). (It is well known that no general solution is possible when t ≥ 0.5n.)

Journal Article
TL;DR: In this article, the maximum bichromatic discrepancy for simple geometric ranges, including rectangles and halfspaces, has been studied and an algorithm to compute it is given.
Abstract: Computing the maximum bichromatic discrepancy is an interesting theoretical problem with important applications in computational learning theory, computational geometry and computer graphics. In this paper we give algorithms to compute the maximum bichromatic discrepancy for simple geometric ranges, including rectangles and halfspaces. In addition, we give extensions to other discrepancy problems.
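In one dimension with interval ranges, the quantity in question can be computed by brute force: sort the points and take the maximum absolute signed count over all contiguous runs. The sketch below is only to illustrate the definition — it is an O(n²) toy, not the paper's algorithms for rectangles and halfspaces, and all names are our own.

```python
def max_interval_discrepancy(points):
    """points: list of (coordinate, color) pairs, color 'red' or 'blue'.
    Returns the maximum over intervals I of |#red in I - #blue in I|."""
    pts = sorted(points)
    best = 0
    for i in range(len(pts)):
        signed = 0
        for j in range(i, len(pts)):          # interval covering pts[i..j]
            signed += 1 if pts[j][1] == 'red' else -1
            best = max(best, abs(signed))
    return best
```

A Kadane-style scan brings this to O(n log n) after sorting; the interesting algorithmic content of the paper lies in the higher-dimensional ranges.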

Journal Article
TL;DR: A review of the past ten years in computational complexity theory is given in this paper by focusing on ten theorems that the author enjoyed the most, using each theorem as a springboard to discuss work done in various areas of complexity theory.
Abstract: We review the past ten years in computational complexity theory by focusing on ten theorems that the author enjoyed the most. We use each of the theorems as a springboard to discuss work done in various areas of complexity theory.