
Showing papers in "Electronic Colloquium on Computational Complexity in 1999"


Journal Article
TL;DR: Near-optimal constructions of such "weak designs" are given, achieving much better parameters than are possible with the notion of designs used by Nisan-Wigderson and Trevisan.
Abstract: We give explicit constructions of extractors which work for a source of any min-entropy on strings of length n. These extractors can extract any constant fraction of the min-entropy using O(log^2 n) additional random bits, and can extract all the min-entropy using O(log^3 n) additional random bits. Both of these constructions use fewer truly random bits than any previous construction which works for all min-entropies and extracts a constant fraction of the min-entropy. We then improve our second construction and show that we can reduce the entropy loss to 2 log(1/ε) + O(1) bits, while still using O(log^3 n) truly random bits (where entropy loss is defined as [(source min-entropy) + (# truly random bits used) - (# output bits)], and ε is the statistical difference from uniform achieved). This entropy loss is optimal up to a constant additive term. Our extractors are obtained by observing that a weaker notion of "combinatorial design" suffices for the Nisan-Wigderson pseudorandom generator, which underlies the recent extractor of Trevisan. We give near-optimal constructions of such "weak designs" which achieve much better parameters than possible with the notion of designs used by Nisan-Wigderson and Trevisan. We also show how to improve our constructions (and Trevisan's construction) when the required statistical difference ε from the uniform distribution is relatively small. This improvement is obtained by using multilinear error-correcting codes over finite fields, rather than the arbitrary error-correcting codes used by Trevisan.
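Written out, the entropy-loss bookkeeping in the abstract is the following (with k the source min-entropy, d the number of truly random seed bits, m the number of output bits, and ε the statistical distance from uniform); the "optimal up to a constant additive term" statement refers to a matching general lower bound on the loss of any extractor:

```latex
% Entropy-loss accounting (a restatement of the abstract's definition and claim).
\text{entropy loss} \;=\; k + d - m, \qquad
\text{this construction achieves } k + d - m \;\le\; 2\log_2(1/\varepsilon) + O(1),
```

while every extractor must incur loss at least 2 log_2(1/ε) − O(1), so no more than an additive constant can be saved.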

191 citations


Journal Article
TL;DR: Resettable zero-knowledge (rZK) as discussed by the authors is a security measure for cryptographic protocols which strengthens the classical notion of zero knowledge, and it has great relevance to applications.
Abstract: We introduce the notion of Resettable Zero-Knowledge (rZK), a new security measure for cryptographic protocols which strengthens the classical notion of zero-knowledge. In essence, an rZK protocol is one that remains zero-knowledge even if an adversary can interact with the prover many times, each time resetting the prover to its initial state and forcing it to use the same random tape. Under general complexity assumptions, which hold for example if the Discrete Logarithm Problem is hard, we construct: (non-constant-round) resettable zero-knowledge proof systems for NP; constant-round resettable witness-indistinguishable proof systems for NP; and constant-round resettable zero-knowledge arguments for NP in the public-key model, where verifiers have fixed, public keys associated with them. In addition to shedding new light on what makes zero knowledge possible (by constructing ZK protocols that use randomness in a dramatically weaker way than before), rZK has great relevance to applications. Firstly, we show that rZK protocols are closed under parallel and concurrent execution and thus are guaranteed to be secure when implemented in fully asynchronous networks, even if an adversary schedules the arrival of every message sent. Secondly, rZK protocols enlarge the range of physical ways in which provers of ZK protocols can be securely implemented, including devices which cannot reliably toss coins on-line, nor keep state between invocations. (For instance, because ordinary smart cards with secure hardware are resettable, they could not be used to implement securely the provers of classical ZK protocols, but can now be used to implement securely the provers of rZK protocols.)

161 citations


Journal Article
TL;DR: Improved algorithms for testing monotonicity of functions are presented: given the ability to query an unknown function f: Σ^n → Ξ, the test always accepts a monotone f, and rejects f with high probability if it is ε-far from being monotone.
Abstract: We present improved algorithms for testing monotonicity of functions. Namely, given the ability to query an unknown function f: Σ^n → Ξ, where Σ and Ξ are finite ordered sets, the test always accepts a monotone f, and rejects f with high probability if it is ε-far from being monotone (i.e., every monotone function differs from f on more than an ε fraction of the domain). For any ε > 0, the query complexity of the test is O((n/ε) · log|Σ| · log|Ξ|). The previous best known bound was Õ((n^2/ε) · |Σ|^2 · |Ξ|).
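For intuition, here is a minimal sketch of the standard "edge tester" for the special Boolean case Σ = Ξ = {0,1}; the tests in the paper are more refined and handle general ordered ranges, and the function name and query budget below are illustrative.

```python
import random

def monotonicity_edge_test(f, n, queries=1000):
    """Accept if f: {0,1}^n -> {0,1} looks monotone; reject if a violated
    hypercube edge is found.  f takes a tuple of n bits."""
    for _ in range(queries):
        x = [random.randint(0, 1) for _ in range(n)]
        i = random.randrange(n)
        lo, hi = list(x), list(x)
        lo[i], hi[i] = 0, 1              # the two endpoints of a random edge
        if f(tuple(lo)) > f(tuple(hi)):  # a monotone f must have f(lo) <= f(hi)
            return False                 # violation found: reject
    return True                          # no violation seen: accept

# Example: the (monotone) majority function on 5 bits is always accepted.
print(monotonicity_edge_test(lambda x: int(sum(x) >= 3), 5))
```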

152 citations



Journal Article
TL;DR: In this article, the complexity of the problem of deciding whether a Boolean function f can be realized by a Boolean circuit of size at most s is studied, and it is shown that proving this problem to be NP-complete (if it is indeed true) would imply proving strong circuit lower bounds for the class DTIME(2^O(n)), which appears beyond the currently known techniques.
Abstract: We study the complexity of the following circuit minimization problem: given the truth table of a Boolean function f and a parameter s, decide whether f can be realized by a Boolean circuit of size at most s. We argue why this problem is unlikely to be in P (or even in P/poly) by giving a number of surprising consequences of such an assumption. We also argue that proving this problem to be NP-complete (if it is indeed true) would imply proving strong circuit lower bounds for the class DTIME(2^O(n)), which appears beyond the currently known techniques.

130 citations


Journal Article
TL;DR: The results for the minimum distance problem strengthen (though using stronger assumptions) a previous result of Vardy (1997), who showed that the minimum distance cannot be computed exactly in deterministic polynomial time (P), unless P = NP.
Abstract: We show that the minimum distance of a linear code (or equivalently, the weight of the lightest codeword) is not approximable to within any constant factor in random polynomial time (RP), unless NP equals RP. Under the stronger assumption that NP is not contained in RQP (random quasi-polynomial time), we show that the minimum distance is not approximable to within the factor 2^(log^(1-ε) n), for any ε > 0, where n denotes the block length of the code. Our results hold for codes over every finite field, including the special case of binary codes. In the process we show that the nearest codeword problem is hard to solve even under the promise that the number of errors is (a constant factor) smaller than the distance of the code. This is a particularly meaningful version of the nearest codeword problem. Our results strengthen (though using stronger assumptions) a previous result of A. Vardy (1997) who showed that the minimum distance is NP-hard to compute exactly. Our results are obtained by adapting proofs of analogous results for integer lattices due to M. Ajtai (1998) and D. Micciancio (1998). A critical component in the adaptation is our use of linear codes that perform better than random (linear) codes.
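To make the quantity concrete, here is a minimal brute-force sketch computing the minimum distance of a binary linear code from a generator matrix; it is exponential in the code dimension, and the hardness result above says that (unless NP = RP) no polynomial-time algorithm can even approximate the value to within a constant factor.

```python
from itertools import product

def min_distance(G):
    """Exhaustive minimum distance (= minimum nonzero codeword weight) of the
    binary linear code generated by the rows of G.  Exponential in the code
    dimension k; shown only for illustration."""
    k, n = len(G), len(G[0])
    best = n
    for msg in product((0, 1), repeat=k):
        if not any(msg):
            continue                      # skip the zero codeword
        word = [sum(m * g for m, g in zip(msg, col)) % 2 for col in zip(*G)]
        best = min(best, sum(word))
    return best

# [7,4] Hamming code: minimum distance 3.
G = [[1,0,0,0,0,1,1],
     [0,1,0,0,1,0,1],
     [0,0,1,0,1,1,0],
     [0,0,0,1,1,1,1]]
print(min_distance(G))   # -> 3
```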

117 citations



Journal Article
TL;DR: In this article, the authors consider a model analogous to Turing machines with a read-only input tape and propose two different space measures, corresponding to the maximal number of bits and of clauses/monomials, respectively, that need to be kept in memory simultaneously.
Abstract: We study space complexity in the framework of propositional proofs. We consider a natural model analogous to Turing machines with a read-only input tape and such popular propositional proof systems as resolution, polynomial calculus, and Frege systems. We propose two different space measures, corresponding to the maximal number of bits and of clauses/monomials, respectively, that need to be kept in memory simultaneously. We prove a number of lower and upper bounds in these models, as well as some structural results concerning the clause space for resolution and Frege systems.

75 citations


Journal Article
TL;DR: This paper proves near quadratic lower bounds for depth-3 arithmetic formulae over fields of characteristic zero for the elementary symmetric functions, the (trace of) iterated matrix multiplication, and the determinant, and gets the first non-trivial lower bound for computing polynomials of constant degree.
Abstract: In this paper we prove near-quadratic lower bounds for depth-3 arithmetic formulae over fields of characteristic zero. Such bounds are obtained for the elementary symmetric functions, the (trace of) iterated matrix multiplication, and the determinant. As corollaries we get the first non-trivial lower bounds for computing polynomials of constant degree, and a gap between the power of depth-3 and depth-4 arithmetic formulas. The main technical contribution relates the complexity of computing a polynomial in this model to the wealth of partial derivatives it has on every affine subspace of small co-dimension. Lower bounds for related models utilize an algebraic analog of Nechiporuk's lower bound on Boolean formulae.

56 citations


Journal Article
TL;DR: A hierarchy G_k(U,S) of classes of conjunctive normal forms, recognizable and SAT-decidable in polynomial time, is investigated, with special emphasis on the corresponding hardness parameter h_{U,S}(F) for clause-sets F (the first level of inclusion).
Abstract: We investigate a hierarchy G_k(U,S) of classes of conjunctive normal forms, recognizable and SAT-decidable in polynomial time, with special emphasis on the corresponding hardness parameter h_{U,S}(F) for clause-sets F (the first level of inclusion). At level 0 an (incomplete, poly-time) oracle U for unsatisfiability detection and an oracle S for satisfiability detection are used. The hierarchy from [Pretolani 96] is improved in this way with respect to strengthened satisfiability handling, simplified recognition and consistent relativization. Also a hierarchy of canonical poly-time reductions with unit-clause propagation at the first level is obtained. General methods for upper and lower bounds on h_{U,S}(F) are developed and applied to a number of well-known examples. h_{U,S}(F) admits several different characterizations, including the space complexity of tree-like resolution and the use of pebble games as in [Esteban, Torán 99]. Using for S the class of linearly satisfiable clause-sets (based on linear programming), q-Horn clause-sets [Boros, Crama, Hammer 90] are contained at level 2, and for k ≥ 1 the "k-times nested Horn clause-sets" from [Gallo, Scutellà 88] are contained at level k. The unsatisfiable clause-sets in G_k(U,S) are exactly those refutable by relativized k-times nested input resolution, and the SAT decision algorithm searching through the levels from below quasi-automatizes relativized tree-like resolution (using oracle U), while by means of h_U(F) nearly precise general bounds on the (relativized) complexity of tree-like resolution (with oracle U) are obtained. In order to cope also with full resolution, a (more comprehensive) hierarchy W_k(U) of unsatisfiable clause-sets is introduced, based on a new form of width-restricted resolution, and relativized general upper and lower bounds for full resolution are derived, generalizing [Ben-Sasson, Wigderson 99] and also releasing the lower bound from its dependence on the maximal input clause length. Motivated by [Bonet, Galesi 99] we give a simplified example where the lower bound is tight.

51 citations


Journal Article
TL;DR: In this article, it was shown that SVP∞ and CVP∞ are NP-hard to approximate to within n^(c/log log n) for some constant c > 0, via a direct reduction from SAT to these problems that does not rely on the PCP characterization of NP.
Abstract: We show SVP∞ and CVP∞ to be NP-hard to approximate to within n^(c/log log n) for some constant c > 0. We show a direct reduction from SAT to these problems, that combines ideas from [ABSS93] and from [DKRS99], along with some modifications. Our result is obtained without relying on the PCP characterization of NP, although some of our techniques are derived from the proof of the PCP characterization itself [DFK+99].

Journal Article
TL;DR: This paper characterizes #AC0 in terms of counting paths in a family of bounded-width graphs and resolves several questions regarding the closure properties of #AC0 and GapAC0.
Abstract: Constant-depth arithmetic circuits have been defined and studied in [AAD97,ABL98]; these circuits yield the function classes #AC0 and GapAC0. These function classes in turn provide new characterizations of the computational power of threshold circuits, and provide a link between the circuit classes AC0 (where many lower bounds are known) and TC0 (where essentially no lower bounds are known). In this paper, we resolve several questions regarding the closure properties of #AC0 and GapAC0, and characterize #AC0 in terms of counting paths in a family of bounded-width graphs.
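To illustrate the flavor of the characterization (counting source-to-sink paths in bounded-width graphs; the exact graph family used in the paper is more specific), here is a minimal dynamic-programming sketch for counting paths in a layered graph of bounded width:

```python
def count_paths(layers, source=0):
    """Count paths from `source` (a node of layer 0) in a layered graph.
    `layers[t]` is a 0/1 adjacency matrix (list of rows) from the nodes of
    layer t to the nodes of layer t+1; all matrices have width at most w."""
    counts = [0] * len(layers[0])
    counts[source] = 1
    for M in layers:
        width_next = len(M[0])
        counts = [sum(counts[i] * M[i][j] for i in range(len(M)))
                  for j in range(width_next)]
    return counts   # counts[j] = number of paths from `source` to node j of the last layer

# Width-2 example with three layers of edges.
layers = [[[1, 1], [0, 1]],
          [[1, 0], [1, 1]],
          [[1, 1], [1, 0]]]
print(count_paths(layers))   # -> [3, 2]
```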

Journal Article
TL;DR: The main focus is the recent progress on complexity results of intractability, which includes Ajtai's worst-case/average-case connections, NP-hardness and non-NP-hardness, transference theorems between primal and dual lattices, and the Ajtai-Dwork cryptosystem.
Abstract: We survey some recent developments in the study of the complexity of lattice problems. After a discussion of some problems on lattices which can be algorithmically solved efficiently, our main focus is the recent progress on complexity results of intractability. We discuss Ajtai's worst-case/average-case connections, NP-hardness and non-NP-hardness, transference theorems between primal and dual lattices, and the Ajtai-Dwork cryptosystem.

Journal Article
TL;DR: In this article, the authors introduce the notion of stability of approximation algorithms and apply their concept to the study of the traveling salesman problem (TSP), showing how to modify the Christofides algorithm for Δ-TSP to obtain efficient approximation algorithms with constant approximation ratio for every instance of TSP that violates the triangle inequality by a multiplicative constant factor.
Abstract: The investigation of the possibility to efficiently compute approximations of hard optimization problems is one of the central and most fruitful areas of current algorithm and complexity theory. The aim of this paper is twofold. First, we introduce the notion of stability of approximation algorithms. This notion is shown to be of practical as well as of theoretical importance, especially for the real understanding of the applicability of approximation algorithms and for the determination of the border between easy instances and hard instances of optimization problems that do not admit polynomial-time approximation. Secondly, we apply our concept to the study of the traveling salesman problem (TSP). We show how to modify the Christofides algorithm for Δ-TSP to obtain efficient approximation algorithms with constant approximation ratio for every instance of TSP that violates the triangle inequality by a multiplicative constant factor. This improves the result of Andreae and Bandelt (SIAM J. Discrete Math. 8 (1995) 1).
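The key instance parameter in the second part is how far an input is from being metric. Below is a minimal sketch (names are illustrative) of computing the smallest factor β by which a symmetric distance matrix violates the triangle inequality; the stability results yield Christofides-style algorithms whose approximation ratio depends only on such a constant β:

```python
def triangle_violation_factor(d):
    """Smallest beta with d[u][w] <= beta * (d[u][v] + d[v][w]) for all u, v, w.
    beta == 1 means the triangle inequality holds (Delta-TSP); the stability
    results concern instances where beta is a constant larger than 1."""
    n = len(d)
    beta = 1.0
    for u in range(n):
        for v in range(n):
            for w in range(n):
                if u != v and v != w and u != w and d[u][v] + d[v][w] > 0:
                    beta = max(beta, d[u][w] / (d[u][v] + d[v][w]))
    return beta

# A 4-city instance where one distance overshoots the triangle inequality by 25%.
d = [[0, 2, 2, 5],
     [2, 0, 2, 2],
     [2, 2, 0, 2],
     [5, 2, 2, 0]]
print(triangle_violation_factor(d))   # -> 1.25  (since 5 > 2 + 2)
```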

Journal Article
TL;DR: The main results are that there is a quadratic gap between nondeterminism and Las Vegas for two-way finite automata; previously, there was no nontrivial result relating the power of determinism, Las Vegas, and nondeterminism for two-way finite automata.
Abstract: The investigation of the computational power of randomized computations is one of the central tasks of complexity and algorithm theory. While for one-way finite automata the power of different computational modes was successfully determined, one does not have any nontrivial result relating the power of determinism, Las Vegas and nondeterminism for two-way finite automata. The main results of this paper are as follows. (i) If, for a regular language L, there exist small two-way nondeterministic finite automata for both L and its complement L^c, then there exists a small two-way Las Vegas finite automaton for L. (ii) There is a quadratic gap between nondeterminism and Las Vegas for two-way finite automata. (iii) For every k ∈ N, there is a regular language S_k such that S_k can be accepted by a two-way Las Vegas finite automaton with O(k) states, but every two-way deterministic finite automaton recognizing S_k has at least Ω(k^2 / log^2 k) states.

Journal Article
TL;DR: It is shown, via a direct combinatorial reduction from low error-probability PCP, that LABEL-COVER is NP-hard to approximate to within 2^((log n)^(1-o(1))), improving upon the best previously known hardness of approximation for this problem.
Abstract: The LABEL-COVER problem, defined by S. Arora, L. Babai, J. Stern, Z. Sweedyk [Proceedings of 34th IEEE Symposium on Foundations of Computer Science, 1993, pp. 724-733], serves as a starting point for numerous hardness of approximation reductions. It is one of six 'canonical' approximation problems in the survey of Arora and Lund [Hardness of Approximations, in: Approximation Algorithms for NP-Hard Problems, PWS Publishing Company, 1996, Chapter 10]. In this paper we present a direct combinatorial reduction from low error-probability PCP [Proceedings of 31st ACM Symposium on Theory of Computing, 1999, pp. 29-40] to LABEL-COVER showing it NP-hard to approximate to within 2^((log n)^(1-o(1))). This improves upon the best previous hardness of approximation results known for this problem. We also consider the MINIMUM-MONOTONE-SATISFYING-ASSIGNMENT (MMSA) problem of finding a satisfying assignment to a monotone formula with the least number of 1's, introduced by M. Alekhnovich, S. Buss, S. Moran, T. Pitassi [Minimum propositional proof length is NP-hard to linearly approximate, 1998]. We define a hierarchy of approximation problems obtained by restricting the number of alternations of the monotone formula. This hierarchy turns out to be equivalent to an AND/OR scheduling hierarchy suggested by M.H. Goldwasser, R. Motwani [Lecture Notes in Comput. Sci., Vol. 1272, Springer-Verlag, 1997, pp. 307-320]. We show some hardness results for certain levels in this hierarchy, and place LABEL-COVER between levels 3 and 4. This partially answers an open problem from M.H. Goldwasser, R. Motwani regarding the precise complexity of each level in the hierarchy, and the place of LABEL-COVER in it.

Journal Article
TL;DR: It is shown that deciding square-freeness of a sparse univariate polynomial over Z and over the algebraic closure of a finite field Fp of p elements is NP-hard.
Abstract: We show that deciding square-freeness of a sparse univariate polynomial over Z and over the algebraic closure of a finite field Fp of p elements is NP-hard. We also discuss some related open problems about sparse polynomials.
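For contrast with the hardness result, the dense-representation version of the problem is classical: a univariate polynomial over Z is square-free iff gcd(f, f') is constant. Below is a minimal sketch (using sympy; the example polynomials are illustrative) that runs in time polynomial in the degree, whereas in the sparse encoding considered in the paper the degree can be exponential in the input size:

```python
from sympy import symbols, gcd, diff, Poly

x = symbols('x')

def is_square_free(f):
    """Square-free test over Z via gcd(f, f'): polynomial time in the DEGREE,
    i.e. in the dense encoding; for the sparse (lacunary) encoding used in the
    paper the degree may be exponentially large in the input size."""
    return Poly(gcd(f, diff(f, x)), x).degree() == 0

print(is_square_free(x**5 + x + 1))          # True: no repeated factor
print(is_square_free((x - 1)**2 * (x + 2)))  # False: (x - 1) appears squared
```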

Journal Article
TL;DR: In this article, lowness results for the probabilistic class ZPPNP are shown, and the lowness properties of the nonuniform function classes NPMV/poly, NPSV/poly, NPMVt/poly, and NPSVt/poly are investigated.
Abstract: We show the following new lowness results for the probabilistic class ZPPNP. - The class AM ∩ coAM is low for ZPPNP. As a consequence it follows that Graph Isomorphism and several group-theoretic problems known to be in AM ∩ coAM are low for ZPPNP. - The class IP[P/poly], consisting of sets that have interactive proof systems with honest provers in P/poly, is also low for ZPPNP. We consider lowness properties of nonuniform function classes, namely, NPMV/poly, NPSV/poly, NPMVt/poly, and NPSVt/poly. Specifically, we show that - Sets whose characteristic functions are in NPSV/poly and that have program checkers (in the sense of Blum and Kannan [8]) are low for AM and ZPPNP. - Sets whose characteristic functions are in NPMVt/poly are low for Σ2p.

Journal Article
TL;DR: In this paper, the largest size increase by a synthesis step of π-OBDDs followed by an optimal reordering is determined, as well as the largest ratio of the size of deterministic finite automata and quasi-reduced OBDDs compared to the size of OBDDs.
Abstract: Ordered binary decision diagrams (OBDDs) are nowadays the most common dynamic data structure or representation type for Boolean functions. Among the many areas of application are verification, model checking, and computer aided design. For many functions it is easy to estimate the OBDD size but asymptotically optimal bounds are only known in simple situations. In this paper, methods for proving asymptotically optimal bounds are presented and applied to the solution of some basic problems concerning OBDDs. The largest size increase by a synthesis step of π-OBDDs followed by an optimal reordering is determined as well as the largest ratio of the size of deterministic finite automata and quasi-reduced OBDDs compared to the size of OBDDs. Moreover, the worst case OBDD size of functions with a given number of 1-inputs is investigated.

Journal Article
TL;DR: In this paper, the first completely combinatorial algorithm for computing the Pfaffian in polynomial time is presented; in fact, it is shown that the Pfaffian can be computed in the complexity class GapL, which was not known before.
Abstract: The Pfaffian of a graph is closely linked to Perfect Matching. It is also naturally related to the determinant of an appropriately defined matrix. This relation between Pfaffian and determinant is usually exploited to give a fast algorithm for computing Pfaffians. We present the first completely combinatorial algorithm for computing the Pfaffian in polynomial time. In fact, we show that it can be computed in the complexity class GapL; this result was not known before. Our proof techniques generalize the recent combinatorial characterization of determinant [MV97] in novel ways. As a corollary, we show that under reasonable encodings of a planar graph, Kasteleyn's algorithm for counting the number of perfect matchings in a planar graph is also in GapL. The combinatorial characterization of Pfaffian also makes it possible to directly establish several algorithmic and complexity theoretic results on Perfect Matching which otherwise use determinants in a roundabout way.
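For readers unfamiliar with the quantity, here is a minimal sketch of the textbook Pfaffian expansion along the first row. It is exponential time and is not the paper's combinatorial GapL algorithm; it only pins down what is being computed.

```python
def pfaffian(A):
    """Pfaffian of a skew-symmetric matrix A (list of lists) of even order,
    via the row expansion Pf(A) = sum_j +-A[0][j] * Pf(A with rows/cols 0, j removed).
    Exponential time; for illustration only."""
    n = len(A)
    if n == 0:
        return 1
    if n % 2 == 1:
        return 0
    total = 0
    for j in range(1, n):
        keep = [k for k in range(n) if k not in (0, j)]
        minor = [[A[r][c] for c in keep] for r in keep]
        total += (-1) ** (j - 1) * A[0][j] * pfaffian(minor)
    return total

# For [[0,a,b,c],[-a,0,d,e],[-b,-d,0,f],[-c,-e,-f,0]], Pf = a*f - b*e + c*d,
# and Pf(A)^2 = det(A).
A = [[0, 1, 2, 3], [-1, 0, 4, 5], [-2, -4, 0, 6], [-3, -5, -6, 0]]
print(pfaffian(A))   # -> 8
```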

Journal Article
TL;DR: It is shown that the set of prime numbers is not contained in AC0[p] for any prime p, and similar lower bounds are presented for the set of square-free numbers and for the problem of computing the greatest common divisor of two numbers.
Abstract: Recent work by Bernasconi, Damm and Shparlinski proved lower bounds on the circuit complexity of the square-free numbers, and raised as an open question if similar (or stronger) lower bounds could be proved for the set of prime numbers. In this short note, we answer this question affirmatively, by showing that the set of prime numbers (represented in the usual binary notation) is not contained in AC0[p] for any prime p. Similar lower bounds are presented for the set of square-free numbers, and for the problem of computing the greatest common divisor of two numbers.

Journal Article
TL;DR: In this paper, the authors combine communication complexity and information theory to prove that the direct storage access function and the inner product function have the following properties: they have linear π-OBDD size for some variable ordering π and, for most variable orderings π′, all functions which approximate them on considerably more than half of the inputs need exponential π′-OBDD size.
Abstract: Ordered binary decision diagrams (OBDDs) and their variants are motivated by the need to represent Boolean functions in applications. Research concerning these applications leads also to problems and results interesting from a theoretical point of view. In this paper, methods from communication complexity and information theory are combined to prove that the direct storage access function and the inner product function have the following property. They have linear π-OBDD size for some variable ordering π and, for most variable orderings π′, all functions which approximate them on considerably more than half of the inputs need exponential π′-OBDD size. These results have implications for the use of OBDDs in experiments with genetic programming.

Journal Article
TL;DR: In this paper, the authors considered combinatorial avoidance and achievement games based on graph Ramsey theory, where the players take turns in coloring still uncolored edges of a graph G, each player being assigned a distinct color, choosing one edge per move.
Abstract: We consider combinatorial avoidance and achievement games based on graph Ramsey theory: The players take turns in coloring still uncolored edges of a graph G, each player being assigned a distinct color, choosing one edge per move. In avoidance games, completing a monochromatic subgraph isomorphic to another graph A leads to immediate defeat or is forbidden and the first player that cannot move loses. In the avoidance+ variants, both players are free to choose more than one edge per move. In achievement games, the first player that completes a monochromatic subgraph isomorphic to A wins. Erdos & Selfridge (1973) were the first to identify some tractable subcases of these games, followed by a large number of further studies. We complete these investigations by settling the complexity of all unrestricted cases: We prove that general graph Ramsey avoidance, avoidance+, and achievement games and several variants thereof are PSPACE-complete. We ultra-strongly solve some nontrivial instances of graph Ramsey avoidance games that are based on symmetric binary Ramsey numbers and provide strong evidence that all other cases based on symmetric binary Ramsey numbers are effectively intractable. Keywords: combinatorial games, graph Ramsey theory, Ramsey game, PSPACE-completeness, complexity, edge coloring, winning strategy, achievement game, avoidance game, the game of Sim, Polya's enumeration formula, probabilistic counting, machine learning, heuristics, Java applet

Journal Article
TL;DR: It is proved that Kleene closure, inversion, and root extraction are all hard operations in the following sense: there is a language in AC0 for which inversion and root extraction are GapL-complete and Kleene closure is NLOG-complete, and there is a finite set for which inversion and root extraction are GapNC1-complete and Kleene closure is NC1-complete, with respect to appropriate reducibilities.
Abstract: The aim of this paper is to use formal power series techniques to study the structure of small arithmetic complexity classes such as GapNC1 and GapL. More precisely, we apply the formal power series operations of inversion and root extraction to these complexity classes. We define a counting version of Kleene closure and show that it is intimately related to inversion and root extraction within GapNC1 and GapL. We prove that Kleene closure, inversion, and root extraction are all hard operations in the following sense: there is a language in AC0 for which inversion and root extraction are GapL-complete and Kleene closure is NLOG-complete, and there is a finite set for which inversion and root extraction are GapNC1-complete and Kleene closure is NC1-complete, with respect to appropriate reducibilities. The latter result raises the question of classifying finite languages so that their inverses fall within interesting subclasses of GapNC1, such as GapAC0. We initiate work in this direction by classifying the complexity of the Kleene closure of finite languages. We formulate the problem in terms of finite monoids and relate its complexity to the internal structure of the monoid. Some results in this paper show properties of complexity classes that are interesting independent of formal power series considerations, including some useful closure properties and complete problems for GapL.
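As a concrete reminder of what the inversion operation does on coefficient sequences (this is just the classical recurrence, not the GapNC1/GapL machinery of the paper; names are illustrative):

```python
def invert_series(a, terms):
    """Coefficients b[0..terms-1] of 1/f for f = sum a[n] x^n with a[0] = 1,
    using the recurrence b[0] = 1, b[n] = -sum_{k=1..n} a[k] * b[n-k]
    (coefficients of a beyond the given list are taken to be 0)."""
    b = [1]
    for n in range(1, terms):
        s = sum((a[k] if k < len(a) else 0) * b[n - k] for k in range(1, n + 1))
        b.append(-s)
    return b

# 1/(1 - x) = 1 + x + x^2 + ...   and   1/(1 - x - x^2) generates Fibonacci numbers.
print(invert_series([1, -1], 6))       # -> [1, 1, 1, 1, 1, 1]
print(invert_series([1, -1, -1], 6))   # -> [1, 1, 2, 3, 5, 8]
```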

Journal Article
TL;DR: In this paper, the security of individual bits in an RSA-encrypted message E_N(x) is studied, and it is shown that, given E_N(x), predicting any single bit of x with only a nonnegligible advantage over the trivial guessing strategy is (through a polynomial-time reduction) as hard as breaking RSA.
Abstract: We study the security of individual bits in an RSA-encrypted message E_N(x). We show that given E_N(x), predicting any single bit in x with only a nonnegligible advantage over the trivial guessing strategy is (through a polynomial-time reduction) as hard as breaking RSA. Moreover, we prove that blocks of O(log log N) bits of x are computationally indistinguishable from random bits. The results carry over to the Rabin encryption scheme. Considering the discrete exponentiation function g^x modulo p, with probability 1 − o(1) over random choices of the prime p, the analogous results are demonstrated. The results do not rely on group representation, and therefore apply to general cyclic groups as well. Finally, we prove that the bits of ax + b modulo p give hard-core predicates for any one-way function f. All our results follow from a general result on the chosen multiplier hidden number problem: given an integer N, and access to an algorithm P_x that, on input a random a ∈ Z_N, returns a guess of the ith bit of ax mod N, recover x. We show that for any i, if P_x has at least a nonnegligible advantage in predicting the ith bit, we either recover x, or obtain a nontrivial factor of N in polynomial time. The result also extends to prove the results about simultaneous security of blocks of O(log log N) bits.
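The flavor of such bit-security reductions can be seen in the classical special case of a perfect least-significant-bit oracle, where a binary search decrypts any ciphertext; the results above are far stronger (any bit position, and oracles with only a non-negligible advantage). A toy sketch with illustrative, insecurely small parameters:

```python
from fractions import Fraction
from math import ceil

# Toy RSA parameters (illustrative only -- far too small to be secure).
p, q, e = 1009, 1013, 17
N = p * q
d = pow(e, -1, (p - 1) * (q - 1))   # modular inverse (Python 3.8+)

def lsb_oracle(ct):
    """Perfect least-significant-bit oracle for decryptions; the paper only
    needs a predictor that beats random guessing by a non-negligible margin."""
    return pow(ct, d, N) & 1

def recover(ct):
    """Classic binary-search decryption from the LSB oracle: doubling the
    plaintext mod the odd modulus N flips the parity exactly when it wraps."""
    lo, hi = Fraction(0), Fraction(N)
    c = ct
    for _ in range(N.bit_length()):
        c = (c * pow(2, e, N)) % N      # ciphertext of (2 * plaintext) mod N
        mid = (lo + hi) / 2
        if lsb_oracle(c):               # odd => the doubling wrapped past N
            lo = mid
        else:
            hi = mid
    return ceil(lo)

m = 123456
print(recover(pow(m, e, N)) == m)       # -> True
```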

Journal Article
TL;DR: The notion of linearity testing is extended to the problem of checking linear consistency of multiple functions: informally, functions are linear if their graphs form straight lines on the plane, and two such functions are consistent if the lines have the same slope.
Abstract: We extend the notion of linearity testing to the task of checking linear consistency of multiple functions. Informally, functions are "linear" if their graphs form straight lines on the plane. Two such functions are "consistent" if the lines have the same slope. We propose a variant of a test of M. Blum et al. (J. Comput. System Sci. 47 (1993), 549-595) to check the linear consistency of three functions f1, f2, f3 mapping a finite Abelian group G to an Abelian group H: Pick x, y ∈ G uniformly and independently at random and check whether f1(x) + f2(y) = f3(x + y). We analyze this test for two cases: (1) G and H are arbitrary Abelian groups and (2) G = F_2^n and H = F_2. Questions bearing close relationship to linear-consistency testing seem to have been implicitly considered in recent work on the construction of PCPs, and in particular in the work of J. Håstad (in "Proceedings of the Twenty-Ninth Annual ACM Symposium on Theory of Computing, El Paso, Texas, 4-6 May 1997," pp. 1-10). It is abstracted explicitly for the first time here. As an application of our results we give yet another new and tight characterization of NP, namely: for every ε > 0, NP = MIP_{1-ε, 1/2}[O(log n), 3, 1]. That is, every language in NP has 3-prover 1-round proof systems in which the verifier tosses O(log n) coins and asks each of the three provers one question each. The provers respond with one bit each, such that the verifier accepts instances of the language with probability at least 1 - ε and rejects noninstances with probability at least 1/2. Such a result is of some interest in the study of probabilistically checkable proofs.
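A minimal simulation of the three-function test over the cyclic group Z_n (the functions and parameters below are illustrative; the paper analyzes the test for arbitrary Abelian groups and for F_2^n):

```python
import random

def linear_consistency_test(f1, f2, f3, n, trials=200):
    """The three-function test from the abstract over G = H = Z_n:
    pick x, y uniformly and check f1(x) + f2(y) == f3(x + y) in Z_n.
    Returns the fraction of trials that pass."""
    passed = 0
    for _ in range(trials):
        x, y = random.randrange(n), random.randrange(n)
        if (f1(x) + f2(y)) % n == f3((x + y) % n):
            passed += 1
    return passed / trials

n = 101
# Three homomorphisms of Z_101 with the same "slope" 7: always consistent.
print(linear_consistency_test(lambda x: 7*x % n, lambda x: 7*x % n, lambda x: 7*x % n, n))
# Mismatched slopes (7, 7 vs 3): the check passes only rarely.
print(linear_consistency_test(lambda x: 7*x % n, lambda x: 7*x % n, lambda x: 3*x % n, n))
```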




Journal Article
TL;DR: It is argued that the Vapnik-Chervonenkis dimension of nonoverlapping threshold or sigmoidal networks cannot become larger by allowing the nodes to compute linear functions.
Abstract: A neural network is said to be nonoverlapping if there is at most one edge outgoing from each node. We investigate the number of examples that a learning algorithm needs when using nonoverlapping neural networks as hypotheses. We derive bounds for this sample complexity in terms of the Vapnik-Chervonenkis dimension. In particular, we consider networks consisting of threshold, sigmoidal and linear gates. We show that the class of nonoverlapping threshold networks and the class of nonoverlapping sigmoidal networks on n inputs both have Vapnik-Chervonenkis dimension Ω(n log n). This bound is asymptotically tight for the class of nonoverlapping threshold networks. We also present an upper bound for this class where the constants involved are considerably smaller than in a previous calculation. Finally, we argue that the Vapnik-Chervonenkis dimension of nonoverlapping threshold or sigmoidal networks cannot become larger by allowing the nodes to compute linear functions. This sheds some light on a recent result that exhibited neural networks with quadratic Vapnik-Chervonenkis dimension.