Journal ArticleDOI

Learnability and the Vapnik-Chervonenkis dimension

TL;DR: This paper shows that the essential condition for distribution-free learnability is finiteness of the Vapnik-Chervonenkis dimension, a simple combinatorial parameter of the class of concepts to be learned.
Abstract: Valiant's learnability model is extended to learning classes of concepts defined by regions in Euclidean space E^n. The methods in this paper lead to a unified treatment of some of Valiant's results, along with previous results on distribution-free convergence of certain pattern recognition algorithms. It is shown that the essential condition for distribution-free learnability is finiteness of the Vapnik-Chervonenkis dimension, a simple combinatorial parameter of the class of concepts to be learned. Using this parameter, the complexity and closure properties of learnable classes are analyzed, and necessary and sufficient conditions are provided for feasible learnability.
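
To make the combinatorial parameter concrete, here is a minimal Python sketch (ours, not from the paper; the interval class and all names are illustrative) that brute-forces shattering for one-dimensional closed intervals [a, b] and confirms their VC dimension is 2: every pair of points can be labeled in all four ways, but no triple admits the +/-/+ labeling.

    from itertools import combinations, product

    def interval_labelings(points):
        """All labelings of `points` realizable by closed intervals [a, b]."""
        cuts = sorted(points)
        candidates = [cuts[0] - 1] + cuts + [cuts[-1] + 1]
        realized = set()
        for a, b in product(candidates, repeat=2):
            realized.add(tuple(a <= x <= b for x in points))
        return realized

    def is_shattered(points):
        """True if intervals realize all 2^|points| labelings of `points`."""
        return len(interval_labelings(points)) == 2 ** len(points)

    def vc_dimension_on(domain, max_d=4):
        """Largest d such that some d-point subset of `domain` is shattered."""
        best = 0
        for d in range(1, max_d + 1):
            if any(is_shattered(s) for s in combinations(domain, d)):
                best = d
        return best

    print(vc_dimension_on([0.0, 1.0, 2.0, 3.0, 4.0]))  # 2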


Citations
Journal ArticleDOI
TL;DR: It is demonstrated that, contrary to a number of claims, PDP models are nativist in a robust sense, and it is suggested that there is an alternative form of neural network learning that demonstrates the plausibility of constructivism.

92 citations

Proceedings ArticleDOI
21 May 2002
TL;DR: A tight characterization of multi-party one-way communication complexity for product distributions in terms of VC-dimension and shatter coefficients, and a suite of lower bounds for specific functions in the simultaneous communication model.
Abstract: We use tools and techniques from information theory to study communication complexity problems in the one-way and simultaneous communication models. Our results include: (1) a tight characterization of multi-party one-way communication complexity for product distributions in terms of VC-dimension and shatter coefficients; (2) an equivalence of multi-party one-way and simultaneous communication models for product distributions; (3) a suite of lower bounds for specific functions in the simultaneous communication model, most notably an optimal lower bound for the multi-party set disjointness problem of Alon et al. (1999) and for the generalized addressing function problem of Babai et al. (1996) for arbitrary groups. Methodologically, our main contribution is rendering communication complexity problems in the framework of information theory. This allows us access to the powerful calculus of information theory and the use of fundamental principles such as Fano's inequality and the maximum likelihood estimate principle.
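
As a rough companion to result (1), the following Python sketch (ours; the matrix and names are illustrative) computes the VC dimension of a communication matrix by treating each of Alice's rows f(x, ·) as a concept over Bob's inputs. This is the combinatorial quantity the product-distribution characterization is stated in terms of.

    from itertools import combinations

    def shatters(rows, cols):
        """Do the 0/1 rows realize every labeling of the column subset `cols`?"""
        patterns = {tuple(row[c] for c in cols) for row in rows}
        return len(patterns) == 2 ** len(cols)

    def matrix_vc_dimension(rows):
        """VC dimension of {f(x, .) : x a row}, viewed as subsets of column indices."""
        n_cols = len(rows[0])
        best = 0
        for d in range(1, n_cols + 1):
            if any(shatters(rows, cols) for cols in combinations(range(n_cols), d)):
                best = d
            else:
                break  # shattering is monotone: no larger subset can be shattered
        return best

    # The 2-bit index function f(x, y) = x[y]: every labeling of Bob's two
    # positions appears among Alice's rows, so the VC dimension is 2.
    print(matrix_vc_dimension([(0, 0), (0, 1), (1, 0), (1, 1)]))  # 2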

91 citations


Cites background or methods from "Learnability and the Vapnik-Chervonenkis dimension"

  • ...Our characterization is in terms of shatter coefficients—a notion that generalizes VC-dimension; we show that this characterization sometimes yields tighter lower bounds....

  • ...I_{2δ}(f)) in terms of the VC-dimension of the function matrix....

  • ...Using the famous result of Blumer et al. [6] connecting PAC learning and VC-dimension, Kremer et al. also proved an upper bound on the one-way rectangular communication complexity in terms of VC-dimension....

  • ...(1) A tight characterization of multi-party one-way communication complexity for product distributions in terms of VC-dimension and shatter coefficients; (2) An equivalence of multi-party one-way and simultaneous communication models for product distributions; (3) A suite of lower bounds for specific functions in the simultaneous communication model, most notably an optimal lower bound for the multi-party set disjointness problem of Alon et al. [2] and for the generalized addressing function problem of Babai et al. [3] for arbitrary groups....

  • ...Using the connection between PAC learning and VC-dimension [6], Kremer et al....

Journal ArticleDOI
TL;DR: A theoretical framework for formal inductive synthesis, including oracle-guided inductive synthesis (OGIS), a framework that captures a family of synthesizers that operate by iteratively querying an oracle, and a theoretical characterization of CEGIS for learning any program that computes a recursive language.
Abstract: Formal synthesis is the process of generating a program satisfying a high-level formal specification. In recent times, effective formal synthesis methods have been proposed based on the use of inductive learning. We refer to this class of methods that learn programs from examples as formal inductive synthesis. In this paper, we present a theoretical framework for formal inductive synthesis. We discuss how formal inductive synthesis differs from traditional machine learning. We then describe oracle-guided inductive synthesis (OGIS), a framework that captures a family of synthesizers that operate by iteratively querying an oracle. An instance of OGIS that has had much practical impact is counterexample-guided inductive synthesis (CEGIS). We present a theoretical characterization of CEGIS for learning any program that computes a recursive language. In particular, we analyze the relative power of CEGIS variants where the types of counterexamples generated by the oracle varies. We also consider the impact of bounded versus unbounded memory available to the learning algorithm. In the special case where the universe of candidate programs is finite, we relate the speed of convergence to the notion of teaching dimension studied in machine learning theory. Altogether, the results of the paper take a first step towards a theoretical foundation for the emerging field of formal inductive synthesis.
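
To show the shape of the oracle-guided loop described above, here is a schematic CEGIS sketch in Python (ours, not the paper's formal OGIS definition; the spec and oracle names are illustrative): the learner proposes any candidate consistent with the counterexamples seen so far, and a verification oracle either accepts it or returns a new counterexample.

    def cegis(candidates, spec_holds, counterexample_oracle, max_iters=100):
        """Schematic counterexample-guided inductive synthesis loop."""
        examples = []
        for _ in range(max_iters):
            # Learner: any candidate consistent with all counterexamples so far.
            candidate = next(
                (p for p in candidates if all(spec_holds(p, e) for e in examples)),
                None,
            )
            if candidate is None:
                return None          # no candidate satisfies the accumulated examples
            # Oracle: verify the candidate; a counterexample refines the next round.
            cex = counterexample_oracle(candidate)
            if cex is None:
                return candidate     # verified against the full specification
            examples.append(cex)
        return None

    # Toy usage: learn the coefficient a with a*x == 3*x on the inputs 0..5.
    spec = lambda a, x: a * x == 3 * x
    oracle = lambda a: next((x for x in range(6) if a * x != 3 * x), None)
    print(cegis(range(10), spec, oracle))  # 3

The paper's analysis concerns which kinds of counterexamples the oracle may return and how much memory the learner keeps; this sketch corresponds to the unbounded-memory case, since all counterexamples are retained.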

91 citations


Cites background from "Learnability and the Vapnik-Chervonenkis dimension"

  • ...We show that the complexity of these techniques is related to well-studied notions in learning theory such as the Vapnik–Chervonenkis dimension [12] and the teaching dimension [20]....

  • ...[12] have shown that the VC dimension of a concept class characterizes the number of examples required for learning any concept in the class under the distribution-free or probably approximately correct (PAC) model of Valiant [64]....

Journal ArticleDOI
TL;DR: The main result shows that the training problem for 2-cascade neural nets is NP-complete, which implies that finding an optimal net (in terms of the number of non-input units) that is consistent with a set of examples is also NP-complete.
Abstract: We consider the computational complexity of learning by neural nets. We are interested in how hard it is to design appropriate neural net architectures and to train neural nets for general and specialized learning tasks. Our main result shows that the training problem for 2-cascade neural nets (which have only two non-input nodes, one of which is hidden) is NP-complete, which implies that finding an optimal net (in terms of the number of non-input units) that is consistent with a set of examples is also NP-complete. This result also demonstrates a surprising gap between the computational complexities of one-node (perceptron) and two-node neural net training problems, since the perceptron training problem can be solved in polynomial time by linear programming techniques. We conjecture that training a k-cascade neural net, which is a classical threshold network training problem, is also NP-complete, for each fixed k ≥ 2. We also show that the problem of finding an optimal perceptron (in terms of the number of non-zero weights) consistent with a set of training examples is NP-hard. Our neural net learning model encapsulates the idea of modular neural nets, which is a popular approach to overcoming the scaling problem in training neural nets. We investigate how much easier the training problem becomes if the class of concepts to be learned is known a priori and the net architecture is allowed to be sufficiently non-optimal. Finally, we classify several neural net optimization problems within the polynomial-time hierarchy.
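
For contrast with the NP-complete multi-node cases, here is a minimal sketch (ours, assuming NumPy and SciPy) of the polynomial-time single-perceptron consistency check the abstract mentions, posed as a linear-programming feasibility problem with the margin normalized to 1.

    import numpy as np
    from scipy.optimize import linprog

    def perceptron_consistent(X, y):
        """Return (w, b) with y_i * (w.x_i + b) >= 1 for all examples, or None.
        Variables are [w_1..w_d, b]; constraints: -y_i * (w.x_i + b) <= -1."""
        X, y = np.asarray(X, float), np.asarray(y, float)   # labels in {+1, -1}
        n, d = X.shape
        A_ub = -(y[:, None] * np.hstack([X, np.ones((n, 1))]))
        res = linprog(c=np.zeros(d + 1), A_ub=A_ub, b_ub=-np.ones(n),
                      bounds=[(None, None)] * (d + 1))
        return (res.x[:d], res.x[d]) if res.success else None

    # Toy usage: AND on {0,1}^2 is linearly separable; XOR would return None.
    X = [[0, 0], [0, 1], [1, 0], [1, 1]]
    print(perceptron_consistent(X, [-1, -1, -1, +1]))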

91 citations


Cites background or methods from "Learnability and the Vapnik-Chervonenkis dimension"

  • ...From Lemma 3, we have |F_opt| = Ω(√(s / log s)). Thus, from Lemma 4 there is an ( , j, k)-Occam net finder for R_s. By Theorem 3.2.4 in [Blumer et al 1989], we may generalize Theorem 7 and prove the following: Theorem 8. Let C be a concept class with finite VC dimension d, let C_s = { ∪_{i=1}^{s} c_i : c_i ∈ C, … }...

  • ...The rest of the proof follows immediately from Theorem 3.2.4 in [Blumer et al 1989]....

  • ...Proof: The proof is a simple application of Theorem 3.2.1 in [Blumer et al 89]....

  • ...The next problem deals with determining if a neural net is optimal....

  • ...Proof: There is a well-known simple greedy algorithm for R_s, which is optimal within a relative factor of ln |S| + 1 (see, for example, [Blumer et al 89])....

01 Jan 2007
TL;DR: Dissertation on the inductive learning of phonotactic patterns.
Abstract: Abstract of the dissertation "Inductive Learning of Phonotactic Patterns".

90 citations

References
Book
01 Jan 1979
TL;DR: The second edition of a quarterly column providing a continuing update to the list of problems (NP-complete and harder) presented by M. R. Garey and D. S. Johnson in their book "Computers and Intractability: A Guide to the Theory of NP-Completeness," W. H. Freeman & Co., San Francisco, 1979.
Abstract: This is the second edition of a quarterly column the purpose of which is to provide a continuing update to the list of problems (NP-complete and harder) presented by M. R. Garey and myself in our book ‘‘Computers and Intractability: A Guide to the Theory of NP-Completeness,’’ W. H. Freeman & Co., San Francisco, 1979 (hereinafter referred to as ‘‘[G&J]’’; previous columns will be referred to by their dates). A background equivalent to that provided by [G&J] is assumed. Readers having results they would like mentioned (NP-hardness, PSPACE-hardness, polynomial-time-solvability, etc.), or open problems they would like publicized, should send them to David S. Johnson, Room 2C355, Bell Laboratories, Murray Hill, NJ 07974, including details, or at least sketches, of any new proofs (full papers are preferred). In the case of unpublished results, please state explicitly that you would like the results mentioned in the column. Comments and corrections are also welcome. For more details on the nature of the column and the form of desired submissions, see the December 1981 issue of this journal.

40,020 citations


Book
01 Jan 1973
TL;DR: In this article, a unified, comprehensive and up-to-date treatment of both statistical and descriptive methods for pattern recognition is provided, including Bayesian decision theory, supervised and unsupervised learning, nonparametric techniques, discriminant analysis, clustering, preprocessing of pictorial data, spatial filtering, shape description techniques, perspective transformations, projective invariants, linguistic procedures, and artificial intelligence techniques for scene analysis.
Abstract: Provides a unified, comprehensive and up-to-date treatment of both statistical and descriptive methods for pattern recognition. The topics treated include Bayesian decision theory, supervised and unsupervised learning, nonparametric techniques, discriminant analysis, clustering, preprocessing of pictorial data, spatial filtering, shape description techniques, perspective transformations, projective invariants, linguistic procedures, and artificial intelligence techniques for scene analysis.
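
As a small illustration of the first topic listed, Bayesian decision theory, here is a sketch (ours; the Gaussian class-conditionals are an assumed toy model) of the minimum-error rule: pick the class maximizing prior times likelihood at the observed x.

    import math

    def bayes_decide(x, priors, likelihoods):
        """Minimum-error Bayes rule: argmax over classes of prior * p(x | class)."""
        return max(priors, key=lambda c: priors[c] * likelihoods[c](x))

    # Toy usage: two equally likely 1-D Gaussian classes with unit variance.
    gauss = lambda mu: (lambda x: math.exp(-0.5 * (x - mu) ** 2) / math.sqrt(2 * math.pi))
    priors = {"A": 0.5, "B": 0.5}
    likelihoods = {"A": gauss(0.0), "B": gauss(2.0)}
    print(bayes_decide(0.4, priors, likelihoods))  # "A": x = 0.4 is closer to class A's mean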

13,647 citations