Journal ArticleDOI

Learnability and the Vapnik-Chervonenkis dimension

TL;DR: This paper shows that the essential condition for distribution-free learnability is finiteness of the Vapnik-Chervonenkis dimension, a simple combinatorial parameter of the class of concepts to be learned.
Abstract: Valiant's learnability model is extended to learning classes of concepts defined by regions in Euclidean space En. The methods in this paper lead to a unified treatment of some of Valiant's results, along with previous results on distribution-free convergence of certain pattern recognition algorithms. It is shown that the essential condition for distribution-free learnability is finiteness of the Vapnik-Chervonenkis dimension, a simple combinatorial parameter of the class of concepts to be learned. Using this parameter, the complexity and closure properties of learnable classes are analyzed, and the necessary and sufficient conditions are provided for feasible learnability.
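The "simple combinatorial parameter" in the abstract, the VC dimension, is the size of the largest point set that the concept class can label in every possible way ("shatter"). A minimal brute-force sketch over a finite domain (the threshold class and grid domain here are illustrative assumptions, not from the paper):

```python
from itertools import combinations

def shatters(concepts, points):
    """True if the concept class realizes all 2^|points| labelings of `points`."""
    labelings = {tuple(p in c for p in points) for c in concepts}
    return len(labelings) == 2 ** len(points)

def vc_dimension(concepts, domain):
    """Largest d such that some d-point subset of `domain` is shattered."""
    d = 0
    for k in range(1, len(domain) + 1):
        if any(shatters(concepts, s) for s in combinations(domain, k)):
            d = k
        else:
            break
    return d

# Hypothetical concept class: one-sided thresholds {x : x <= t} on a small grid.
domain = [1, 2, 3, 4, 5]
thresholds = [set(x for x in domain if x <= t) for t in range(0, 6)]
print(vc_dimension(thresholds, domain))  # thresholds shatter 1 point, never 2
```

Any single point can be labeled both ways by moving the threshold, but no pair can receive the labeling (out, in), so the VC dimension is 1; the paper's result then implies this class is learnable from finitely many examples under any distribution.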


Citations
Book ChapterDOI
13 Mar 1995
TL;DR: This paper requires that, for most training samples, with high probability the absolute difference between the values of the learner's hypothesis and the target function on a random point is small.
Abstract: In the problem of learning a real-valued function from examples, in an extension of the ‘PAC’ model, a learner sees a sequence of values of an unknown function at a number of randomly chosen points. On the basis of these examples, the learner chooses a function—called a hypothesis—from some class H of hypotheses, with the aim that the learner's hypothesis is close to the target function on future random examples. In this paper we require that, for most training samples, with high probability the absolute difference between the values of the learner's hypothesis and the target function on a random point is small. A natural learning algorithm to consider is one that chooses a function in H that is close to the target function on the training examples. This, together with the success criterion described above, leads to the definition of a statistical property which we would wish a class of functions to possess.
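The "natural learning algorithm" described in this abstract is empirical risk minimization: pick the hypothesis in H closest to the target on the training sample. A minimal sketch under assumed names (the function `erm`, the scaling class H, and the target are all hypothetical illustrations):

```python
import random

def erm(hypotheses, sample):
    """Pick the hypothesis with smallest mean absolute error on the sample."""
    return min(hypotheses,
               key=lambda h: sum(abs(h(x) - y) for x, y in sample) / len(sample))

# Hypothetical setup: H = scalings x -> a*x; the target happens to lie in H.
random.seed(0)
target = lambda x: 1.5 * x
H = [lambda x, a=a: a * x for a in [0.0, 0.5, 1.0, 1.5, 2.0]]
train = [(x, target(x)) for x in (random.random() for _ in range(50))]
h = erm(H, train)

# Success criterion from the abstract: on a fresh random point, the absolute
# difference between hypothesis and target should be small with high probability.
x = random.random()
print(abs(h(x) - target(x)))
```

The statistical property the paper asks of H is exactly that small error on the sample transfers, with high probability, to small error on such fresh points.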

34 citations


Cites background or methods from "Learnability and the Vapnik-Chervon..."

  • ...The following result follows easily from Theorem 1 in [9], which gives a lower bound on the number of examples necessary for learning {0, 1}^d in the probably approximately correct model (see also [7])....


  • ...The proof is based on a technique analogous to that used in [16, 7, 11], where we 'symmetrize' and then 'combinatorially bound'....


  • ...It is fairly easy to show [7] that P^{2m}(R) ≤ 2^{-m} max_{z ∈ X^{2m}} Π_R(z)....


Proceedings ArticleDOI
01 Aug 1993
TL;DR: In this article, it was shown that d-dimensional rectangles are learnable if and only if the fraction of noisy examples is less than 1/(d + 1), where learnable means that the learner can learn the target from a finite number of examples.
Abstract: We investigate the implications of noise in the equivalence query model. Besides some results for general target and hypothesis classes, we prove bounds on the learning complexity of d-dimensional rectangles (of size at most n^d) in the case where only rectangles are allowed as hypotheses. Our noise model assumes that a certain fraction of the examples is noisy. We show that d-dimensional rectangles are learnable if and only if the fraction of noisy examples is less than 1/(d + 1), where learnable means that the learner can learn the target from a finite number of examples. Besides this structural result, we present an algorithm which learns rectangles in polynomial time using a bounded number of examples if the fraction of noise r is below a smaller threshold. As a related result we prove for the noise-free case that the number of examples necessary to learn is at least Ω(d log n), while the best known upper bound on the learning complexity is O(d² log n).
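In the noise-free case, the standard learner for axis-aligned rectangles is the tightest fit around the positive examples. This sketch shows that textbook algorithm, not the paper's noise-tolerant equivalence-query algorithm:

```python
def tightest_rectangle(positives):
    """Smallest axis-aligned rectangle containing all positive examples.

    positives: non-empty list of d-dimensional points (tuples).
    Returns (lows, highs), the per-coordinate bounds.
    """
    dims = range(len(positives[0]))
    lows = tuple(min(p[i] for p in positives) for i in dims)
    highs = tuple(max(p[i] for p in positives) for i in dims)
    return lows, highs

def predict(rect, point):
    """Classify a point as inside (True) or outside (False) the rectangle."""
    lows, highs = rect
    return all(lo <= x <= hi for lo, x, hi in zip(lows, point, highs))

pos = [(2, 3), (4, 1), (3, 5)]
rect = tightest_rectangle(pos)
print(rect)                    # ((2, 1), (4, 5))
print(predict(rect, (3, 2)))   # True
print(predict(rect, (5, 2)))   # False
```

Because the tightest fit is always contained in the target rectangle, it never errs on negatives; a single adversarially mislabeled positive can stretch it arbitrarily, which is why noise tolerance requires the more careful analysis in the abstract.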

34 citations

Book ChapterDOI
16 Dec 2011
TL;DR: A method is developed in this research to evaluate the generalization capability, and the rate of convergence towards generalization, of a Multi-Layer Perceptron neural network with pruning, applied to handwritten numeral recognition.
Abstract: In any real world application, the performance of Artificial Neural Networks (ANN) depends mostly upon their generalization capability. Generalization of an ANN is its ability to handle unseen data. The generalization capability of a network is largely determined by system complexity and by the training of the network. Poor generalization is observed when the network is over-trained or the system complexity (or degrees of freedom) is large relative to the training data. A smaller network which can fit the data will have good generalization ability. Network parameter pruning is one of the promising methods to reduce the degrees of freedom of a network and hence improve its generalization. In recent years various pruning methods have been developed and found effective in real world applications. It is then important to estimate the improvement in generalization, and the rate of improvement, as pruning is incorporated into the network. A method is developed in this research to evaluate generalization capability and the rate of convergence towards generalization. Using the proposed method, experiments have been conducted to evaluate a Multi-Layer Perceptron neural network with pruning for handwritten numeral recognition.
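Magnitude-based weight pruning is one common way to cut a network's degrees of freedom as described above. A minimal sketch of that generic technique (not the specific pruning method used in this paper):

```python
def prune_by_magnitude(weights, fraction):
    """Zero out the given fraction of weights with smallest absolute value.

    weights: a layer's weight matrix as a list of rows.
    Returns a new matrix; the original is left unchanged.
    """
    flat = sorted(abs(w) for row in weights for w in row)
    k = int(fraction * len(flat))
    # Threshold at the k-th smallest magnitude; ties at the threshold are pruned too.
    threshold = flat[k - 1] if k > 0 else float("-inf")
    return [[0.0 if abs(w) <= threshold else w for w in row] for row in weights]

W = [[0.01, -0.9], [0.5, -0.02]]
print(prune_by_magnitude(W, 0.5))  # [[0.0, -0.9], [0.5, 0.0]]
```

Pruning the near-zero weights removes parameters that contribute little to the fit, shrinking the effective network in the spirit of the abstract's "smaller network which can fit the data".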

34 citations

Proceedings ArticleDOI
15 Aug 1991
TL;DR: A single (unknown) probability distribution over the domain is used to generate random examples for the learning algorithm, measure the speed at which the target changes, and measure the error of the algorithm's hypothesis.
Abstract: In this paper we consider the problem of tracking a subset of the domain (called the target) which changes gradually over time. A single (unknown) probability distribution over the domain is used to generate random examples for the learning algorithm, measure the speed at which the target changes, and measure the error of the algorithm's hypothesis.

34 citations

01 Jan 2005
TL;DR: Abstract of the dissertation Learning Features and Segments from Waveforms: A Statistical Model of Early Phonological Acquisition and its Applications.
Abstract: Abstract of the dissertation Learning Features and Segments from Waveforms: A Statistical Model of Early Phonological Acquisition.

34 citations


Cites background from "Learnability and the Vapnik-Chervon..."

  • ...Results in statistical learning theory show that no machine can learn an arbitrary set of functions (Vapnik and Chervonenkis, 1971; Blumer et al., 1989)....


References
Book
01 Jan 1979
TL;DR: The second edition of a quarterly column providing a continuing update to the list of problems (NP-complete and harder) presented by M. R. Garey and D. S. Johnson in their book "Computers and Intractability: A Guide to the Theory of NP-Completeness," W. H. Freeman & Co., San Francisco, 1979.
Abstract: This is the second edition of a quarterly column the purpose of which is to provide a continuing update to the list of problems (NP-complete and harder) presented by M. R. Garey and myself in our book ‘‘Computers and Intractability: A Guide to the Theory of NP-Completeness,’’ W. H. Freeman & Co., San Francisco, 1979 (hereinafter referred to as ‘‘[G&J]’’; previous columns will be referred to by their dates). A background equivalent to that provided by [G&J] is assumed. Readers having results they would like mentioned (NP-hardness, PSPACE-hardness, polynomial-time-solvability, etc.), or open problems they would like publicized, should send them to David S. Johnson, Room 2C355, Bell Laboratories, Murray Hill, NJ 07974, including details, or at least sketches, of any new proofs (full papers are preferred). In the case of unpublished results, please state explicitly that you would like the results mentioned in the column. Comments and corrections are also welcome. For more details on the nature of the column and the form of desired submissions, see the December 1981 issue of this journal.

40,020 citations

Book
01 Jan 1968

17,939 citations

Book
01 Jan 1973
TL;DR: In this article, a unified, comprehensive and up-to-date treatment of both statistical and descriptive methods for pattern recognition is provided, including Bayesian decision theory, supervised and unsupervised learning, nonparametric techniques, discriminant analysis, clustering, preprocessing of pictorial data, spatial filtering, shape description techniques, perspective transformations, projective invariants, linguistic procedures, and artificial intelligence techniques for scene analysis.
Abstract: Provides a unified, comprehensive and up-to-date treatment of both statistical and descriptive methods for pattern recognition. The topics treated include Bayesian decision theory, supervised and unsupervised learning, nonparametric techniques, discriminant analysis, clustering, preprocessing of pictorial data, spatial filtering, shape description techniques, perspective transformations, projective invariants, linguistic procedures, and artificial intelligence techniques for scene analysis.

13,647 citations