Journal ArticleDOI

Learnability and the Vapnik-Chervonenkis dimension

TL;DR: This paper shows that the essential condition for distribution-free learnability is finiteness of the Vapnik-Chervonenkis dimension, a simple combinatorial parameter of the class of concepts to be learned.
Abstract: Valiant's learnability model is extended to learning classes of concepts defined by regions in Euclidean space E^n. The methods in this paper lead to a unified treatment of some of Valiant's results, along with previous results on distribution-free convergence of certain pattern recognition algorithms. It is shown that the essential condition for distribution-free learnability is finiteness of the Vapnik-Chervonenkis dimension, a simple combinatorial parameter of the class of concepts to be learned. Using this parameter, the complexity and closure properties of learnable classes are analyzed, and the necessary and sufficient conditions are provided for feasible learnability.
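The combinatorial parameter named in the abstract can be checked by brute force on small finite cases. The sketch below is illustrative and not from the paper: it tests shattering for the class of closed intervals on a small discrete domain, a class whose VC dimension is 2 (two points can always be shattered, but no interval can pick out the outer two of three points while excluding the middle one).

```python
from itertools import combinations

def shatters(points, concept_class):
    """True if every subset of `points` is realized as points ∩ c
    for some concept c in the class."""
    realized = {frozenset(p for p in points if c(p)) for c in concept_class}
    return len(realized) == 2 ** len(points)

def vc_dimension(domain, concept_class, max_d=5):
    """Largest d (up to max_d) such that some d-point subset of the
    domain is shattered by the concept class."""
    d = 0
    for k in range(1, max_d + 1):
        if any(shatters(s, concept_class) for s in combinations(domain, k)):
            d = k
    return d

# Concepts: closed intervals [a, b] over a small discrete domain.
domain = range(6)
intervals = [lambda x, a=a, b=b: a <= x <= b
             for a in domain for b in domain if a <= b]

print(vc_dimension(domain, intervals))  # intervals shatter 2 points, never 3
```

By the paper's main result, finiteness of this parameter is exactly what makes a class distribution-free learnable, so a check like this (on classes where it is tractable) directly addresses learnability.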


Citations
Journal ArticleDOI
Naoki Abe1
TL;DR: The learnability of the class of letter-counts of regular languages (semilinear sets) and other related classes of subsets of N^d or Z^d with respect to the distribution-free learning model of Valiant (PAC learning model) is characterized using the notion of reducibility among learning problems due to Pitt and Warmuth.
Abstract: The learnability of the class of letter-counts of regular languages (semilinear sets) and other related classes of subsets of N^d or Z^d with respect to the distribution-free learning model of Valiant (PAC learning model) is characterized. Using the notion of reducibility among learning problems due to Pitt and Warmuth, called "prediction-preserving reducibility," and a special case thereof, a number of positive and partially negative results are obtained. On the positive side, the class of semilinear sets of dimension 1 or 2 is shown to be learnable when the integers are encoded in unary. On the neutral to negative side, it is shown that when the integers are encoded in binary, the learning problem for semilinear sets, as well as for a class of subsets of Z^d much simpler than semilinear sets, is as hard as learning DNF, a central open problem in the field. A number of hardness results for related learning problems are also given.

12 citations

Proceedings Article
02 Dec 1991
TL;DR: It is shown that the sample size for reliable learning can be bounded above by a formula similar to that required for single output networks with no equivalences.
Abstract: This paper applies the theory of Probably Approximately Correct (PAC) learning to multiple output feedforward threshold networks in which the weights conform to certain equivalences. It is shown that the sample size for reliable learning can be bounded above by a formula similar to that required for single output networks with no equivalences. The best previously obtained bounds are improved for all cases.
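The kind of sample-size formula referred to above can be sketched numerically. The function below evaluates the distribution-free sufficient sample size m(ε, δ) ≥ max((4/ε) log₂(2/δ), (8d/ε) log₂(13/ε)) in terms of the VC dimension d, in the form stated by Blumer et al.; it illustrates how such bounds scale with the dimension, and is not the network-specific bound derived in this paper.

```python
from math import log, ceil

def pac_sample_bound(vc_dim, epsilon, delta):
    """Sufficient sample size for distribution-free PAC learning a class
    of VC dimension vc_dim to accuracy epsilon with confidence 1 - delta,
    in the form given by Blumer et al. (logs are base 2)."""
    return ceil(max(4 / epsilon * log(2 / delta, 2),
                    8 * vc_dim / epsilon * log(13 / epsilon, 2)))

# More constrained classes (smaller VC dimension) need fewer samples.
print(pac_sample_bound(2, 0.1, 0.05))
print(pac_sample_bound(10, 0.1, 0.05))
```

The bound grows only linearly in the VC dimension, which is why weight equivalences that shrink the effective dimension of a network translate directly into smaller sample-size requirements.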

12 citations

Journal ArticleDOI
TL;DR: In this article, it was shown that characterizations of the Glivenko-Cantelli property established for independent random variables fail to hold for stationary ergodic processes.

12 citations


Cites background from "Learnability and the Vapnik-Chervon..."

  • ...…the consistency of simple inductive procedures for a variety of statistical problems, including pattern recognition (Vapnik (1982), Devroye (1988)), the training of neural networks (Baum and Haussler (1989), Haussler (1992), Faragó and Lugosi (1992)), and machine learning (Blumer et al. (1989))....


Proceedings ArticleDOI
16 Jul 2019
TL;DR: A method to combine different approaches is proposed in order to achieve the best performance on categorical accuracy, with special attention paid to underrepresented categories.
Abstract: In order to ease visual inspections of exterior aircraft fuselage, new technical approaches have been recently deployed. Automated UAVs now acquire high-quality images of the aircraft in order to perform offline analysis. At first, some acquisitions are annotated by human operators in order to provide the large dataset required to train machine learning methods, especially for critical defect detection. An intrinsic problem of this dataset is its extreme imbalance (i.e., there is an unequal distribution between classes): the rarest and most valuable samples represent few elements among thousands of annotated objects. Deep-learning-only approaches have proven to be very effective when a sufficient amount of data is available for each desired class, whereas less complex systems such as Support Vector Machines theoretically need less data, and dedicated few-shot learning methods (Matching Network, Prototypical Network, etc.) can learn from only a few examples. These approaches are compared on our application case. Preliminary results show the existence of empirical frontiers in terms of training dataset volume that indicate which approach might be favored. Based on those results, we propose a method to combine different approaches in order to achieve the best performance on categorical accuracy, with special attention paid to underrepresented categories.
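The proposed combination can be pictured as a simple per-class router that picks a learning approach from the number of annotated examples available. The thresholds below are hypothetical placeholders for illustration only, not the empirical frontiers measured by the authors.

```python
def pick_method(n_examples, few_shot_max=20, svm_max=500):
    """Route a defect class to a learning approach based on how many
    annotated examples it has. Thresholds are illustrative placeholders,
    not values reported in the paper."""
    if n_examples <= few_shot_max:
        return "few-shot (e.g. Prototypical Network)"
    if n_examples <= svm_max:
        return "SVM"
    return "deep learning"

# Rare, valuable defects fall to the few-shot branch; common classes
# with thousands of annotations go to the deep-learning branch.
print(pick_method(5))
print(pick_method(100))
print(pick_method(10000))
```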

12 citations

Journal ArticleDOI
TL;DR: It is argued that kernel-like neural computation is particularly suited to serving such learning and decision making needs, while simultaneously satisfying four fundamental constraints that apply to any cognitive system that is charged with learning from the statistics of its world.

12 citations

References
Book
01 Jan 1979
TL;DR: The second edition of a quarterly column as discussed by the authors provides a continuing update to the list of problems (NP-complete and harder) presented by M. R. Garey and myself in our book "Computers and Intractability: A Guide to the Theory of NP-Completeness,” W. H. Freeman & Co., San Francisco, 1979.
Abstract: This is the second edition of a quarterly column the purpose of which is to provide a continuing update to the list of problems (NP-complete and harder) presented by M. R. Garey and myself in our book ‘‘Computers and Intractability: A Guide to the Theory of NP-Completeness,’’ W. H. Freeman & Co., San Francisco, 1979 (hereinafter referred to as ‘‘[G&J]’’; previous columns will be referred to by their dates). A background equivalent to that provided by [G&J] is assumed. Readers having results they would like mentioned (NP-hardness, PSPACE-hardness, polynomial-time-solvability, etc.), or open problems they would like publicized, should send them to David S. Johnson, Room 2C355, Bell Laboratories, Murray Hill, NJ 07974, including details, or at least sketches, of any new proofs (full papers are preferred). In the case of unpublished results, please state explicitly that you would like the results mentioned in the column. Comments and corrections are also welcome. For more details on the nature of the column and the form of desired submissions, see the December 1981 issue of this journal.

40,020 citations

Book
01 Jan 1968
TL;DR: The arrangement of this invention provides a strong vibration free hold-down mechanism while avoiding a large pressure drop to the flow of coolant fluid.
Abstract: A fuel pin hold-down and spacing apparatus for use in nuclear reactors is disclosed. Fuel pins forming a hexagonal array are spaced apart from each other and held-down at their lower end, securely attached at two places along their length to one of a plurality of vertically disposed parallel plates arranged in horizontally spaced rows. These plates are in turn spaced apart from each other and held together by a combination of spacing and fastening means. The arrangement of this invention provides a strong vibration free hold-down mechanism while avoiding a large pressure drop to the flow of coolant fluid. This apparatus is particularly useful in connection with liquid cooled reactors such as liquid metal cooled fast breeder reactors.

17,939 citations

Book
01 Jan 1973
TL;DR: In this article, a unified, comprehensive and up-to-date treatment of both statistical and descriptive methods for pattern recognition is provided, including Bayesian decision theory, supervised and unsupervised learning, nonparametric techniques, discriminant analysis, clustering, preprocessing of pictorial data, spatial filtering, shape description techniques, perspective transformations, projective invariants, linguistic procedures, and artificial intelligence techniques for scene analysis.
Abstract: Provides a unified, comprehensive and up-to-date treatment of both statistical and descriptive methods for pattern recognition. The topics treated include Bayesian decision theory, supervised and unsupervised learning, nonparametric techniques, discriminant analysis, clustering, preprocessing of pictorial data, spatial filtering, shape description techniques, perspective transformations, projective invariants, linguistic procedures, and artificial intelligence techniques for scene analysis.

13,647 citations