Journal ArticleDOI

Learnability and the Vapnik-Chervonenkis dimension

TL;DR: This paper shows that the essential condition for distribution-free learnability is finiteness of the Vapnik-Chervonenkis dimension, a simple combinatorial parameter of the class of concepts to be learned.
Abstract: Valiant's learnability model is extended to learning classes of concepts defined by regions in Euclidean space E^n. The methods in this paper lead to a unified treatment of some of Valiant's results, along with previous results on distribution-free convergence of certain pattern recognition algorithms. It is shown that the essential condition for distribution-free learnability is finiteness of the Vapnik-Chervonenkis dimension, a simple combinatorial parameter of the class of concepts to be learned. Using this parameter, the complexity and closure properties of learnable classes are analyzed, and necessary and sufficient conditions are provided for feasible learnability.
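To make the combinatorial parameter concrete, here is a minimal brute-force sketch of the shattering definition behind the VC dimension. The concept class (discrete intervals over a small domain) and all names are illustrative, not from the paper:

```python
from itertools import combinations

def shatters(points, concepts):
    """True if every subset of `points` equals concept ∩ points
    for some concept in `concepts` (each concept is a set)."""
    pts = set(points)
    realized = {frozenset(c & pts) for c in concepts}
    return len(realized) == 2 ** len(pts)

def vc_dimension_lower_bound(domain, concepts, max_d):
    """Largest d <= max_d such that some d-subset of `domain` is shattered."""
    best = 0
    for d in range(1, max_d + 1):
        if any(shatters(s, concepts) for s in combinations(domain, d)):
            best = d
    return best

# Closed intervals [a, b] over a small discrete domain.
domain = range(6)
intervals = [set(range(a, b + 1)) for a in domain for b in domain if a <= b]
print(vc_dimension_lower_bound(list(domain), intervals, 3))  # prints 2
```

Intervals on a line shatter every two-point set but no three-point set (the subset containing the two outer points but not the middle one is never realized), so the class has VC dimension 2 and is, by the paper's main result, distribution-free learnable.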


Citations
Journal ArticleDOI
TL;DR: A model for approximate testing of concepts is developed, which relates to the PAC (probably approximately correct) model of learning as well as other learning models; a new measure similar to the Vapnik-Chervonenkis dimension of a concept class is defined, and it is shown how it yields untestability results for certain concept classes.

17 citations

Journal ArticleDOI
TL;DR: This work presents a formal model for this class of problems, introduces an algorithm for constructing optimal monitoring schedules, proves its correctness, and provides a theoretical analysis of the class of optimal schedules.

17 citations


Cites background from "Learnability and the Vapnik-Chervon..."

  • ...The computational learning literature gives us an upper limit on the number of examples required for PAC-learning [26, 1]; this upper limit is based on , and the VC dimension of the concept class....

  • ...The monitored process is a learning-by-examples PAC-learning algorithm....

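The bullet points above refer to sample-size upper limits driven by the accuracy and confidence parameters together with the VC dimension. A quick sketch of one commonly quoted form of that bound (the m(ε, δ) expression attributed to Blumer et al.; natural logarithms and the exact constants here are a presentation choice, not a quote from the paper):

```python
import math

def pac_sample_bound(epsilon, delta, vc_dim):
    """One standard distribution-free sample-size upper bound in terms of
    epsilon (error), delta (failure probability) and the VC dimension.
    Constants follow the commonly cited Blumer et al. form; treat them
    as illustrative."""
    a = (4 / epsilon) * math.log(2 / delta)
    b = (8 * vc_dim / epsilon) * math.log(13 / epsilon)
    return math.ceil(max(a, b))

print(pac_sample_bound(0.1, 0.05, vc_dim=3))  # -> 1169
```

Note how the bound grows linearly in the VC dimension and in 1/ε, which is why finiteness of the VC dimension is the essential condition for learnability with a finite sample.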

Book ChapterDOI
22 Jun 2006
TL;DR: It is proved that using a penalty of smaller order, or equal to zero, is preferable both in theory and in practice; the introduction of a small-order penalty stabilizes the selection process while preserving rather good performance.
Abstract: Let (X, Y) be an $\mathcal{X} \times \{0,1\}$-valued random pair and consider a sample (X_1, Y_1), ..., (X_n, Y_n) drawn from the distribution of (X, Y). We aim to construct from this sample a classifier, that is, a function that predicts the value of Y from the observation of X. The special case where $\mathcal{X}$ is a functional space is of particular interest due to the so-called curse of dimensionality. In a recent paper, Biau et al. [1] propose to filter the X_i's in the Fourier basis and to apply the classical k-Nearest Neighbor rule to the first d coefficients of the expansion. The selection of both k and d is made automatically via a penalized criterion. We extend this study and note that the penalty used by Biau et al. is too heavy from the minimax point of view under some margin-type assumptions. We prove that using a penalty of smaller order, or equal to zero, is preferable both in theory and in practice. Our experimental study furthermore shows that the introduction of a small-order penalty stabilizes the selection process while preserving rather good performance.
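A minimal sketch of the filtering idea the abstract describes: project each curve onto its first d Fourier coefficients, then classify with plain k-NN. The toy data, and the use of a simple hold-out error in place of Biau et al.'s penalized criterion, are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy functional data: curves on a grid whose label drives a low-frequency component.
n, grid = 200, 64
t = np.linspace(0, 1, grid)
y = rng.integers(0, 2, n)
X = np.sin(2 * np.pi * t) * y[:, None] + 0.3 * rng.standard_normal((n, grid))

def fourier_features(X, d):
    """Magnitudes of the first d Fourier coefficients of each curve."""
    return np.abs(np.fft.rfft(X, axis=1))[:, :d]

def knn_predict(Xtr, ytr, Xte, k):
    """Plain k-nearest-neighbour majority vote (Euclidean distance)."""
    dists = np.linalg.norm(Xte[:, None, :] - Xtr[None, :, :], axis=2)
    idx = np.argsort(dists, axis=1)[:, :k]
    return (ytr[idx].mean(axis=1) > 0.5).astype(int)

# Select (k, d) by empirical error on a hold-out split; the penalized
# criterion discussed in the paper would add a (possibly zero or
# small-order) penalty term to this error before taking the minimum.
tr, te = slice(0, 150), slice(150, 200)
best = min(((np.mean(knn_predict(fourier_features(X[tr], d), y[tr],
                                 fourier_features(X[te], d), k) != y[te]), k, d)
            for k in (1, 3, 5) for d in (2, 4, 8)))
print(best)
```

Because the label signal lives in the lowest frequencies, a small d already separates the classes, which is the point of filtering before applying the nearest-neighbor rule.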

16 citations


Cites background from "Learnability and the Vapnik-Chervon..."

  • ...[24] obtained various forms of the following result: if L(φ*) = 0, then the ERM φ̂_n has a mean classification error not larger than κ_4 V(C) log n / n....


Book ChapterDOI
07 Nov 2011
TL;DR: This paper shows how statistical techniques can be used to reason in a statistically justified manner about the number of tests required to fully exercise a system without a specification, and how to provide a valid adequacy measure for black-box test sets in an applied context.
Abstract: Testing a black-box system without recourse to a specification is difficult, because there is no basis for estimating how many tests will be required or for assessing how complete a given test set is. Several researchers have noted that there is a duality between these testing problems and the problem of inductive inference (learning a model of a hidden system from a given set of examples): it is impossible to tell how many examples will be required to infer an accurate model, and there is no basis for telling how complete a given set of examples is. These issues have been addressed in the domain of inductive inference by developing statistical techniques, where the accuracy of an inferred model is subject to a tolerable degree of error. This paper explores the application of these techniques to assess test sets of black-box systems. It shows how they can be used to reason in a statistically justified manner about the number of tests required to fully exercise a system without a specification, and how to provide a valid adequacy measure for black-box test sets in an applied context.

16 citations


Cites background or methods from "Learnability and the Vapnik-Chervon..."

  • ...The application of PAC-based probabilistic techniques [9, 4] to estimate lower bounds on the number of tests required for a test set of a black-box SUT to be adequate....


  • ...Problems that are analogous to these have been the subject of much research in the context of inductive inference [4, 9, 24]....


  • ...[4], it can be rearranged to yield the lower bound on m:...


Journal ArticleDOI
LiMin Fu1
TL;DR: The expert neural network system the author has developed for use in DNA sequence analysis combines a traditional symbolic expert system with an artificial neural network that can accurately model underlying domain knowledge to improve accuracy, generalization performance, and information-processing speed.
Abstract: The expert neural network system the author has developed for use in DNA sequence analysis combines a traditional symbolic expert system with an artificial neural network. The resulting hybrid system can accurately model underlying domain knowledge to improve accuracy, generalization performance, and information-processing speed.

16 citations

References
Book
01 Jan 1979
TL;DR: This is the second edition of a quarterly column that provides a continuing update to the list of problems (NP-complete and harder) presented by M. R. Garey and D. S. Johnson in their book "Computers and Intractability: A Guide to the Theory of NP-Completeness," W. H. Freeman & Co., San Francisco, 1979.
Abstract: This is the second edition of a quarterly column the purpose of which is to provide a continuing update to the list of problems (NP-complete and harder) presented by M. R. Garey and myself in our book ‘‘Computers and Intractability: A Guide to the Theory of NP-Completeness,’’ W. H. Freeman & Co., San Francisco, 1979 (hereinafter referred to as ‘‘[G&J]’’; previous columns will be referred to by their dates). A background equivalent to that provided by [G&J] is assumed. Readers having results they would like mentioned (NP-hardness, PSPACE-hardness, polynomial-time-solvability, etc.), or open problems they would like publicized, should send them to David S. Johnson, Room 2C355, Bell Laboratories, Murray Hill, NJ 07974, including details, or at least sketches, of any new proofs (full papers are preferred). In the case of unpublished results, please state explicitly that you would like the results mentioned in the column. Comments and corrections are also welcome. For more details on the nature of the column and the form of desired submissions, see the December 1981 issue of this journal.

40,020 citations

Book
01 Jan 1968
TL;DR: The arrangement of this invention provides a strong vibration free hold-down mechanism while avoiding a large pressure drop to the flow of coolant fluid.
Abstract: A fuel pin hold-down and spacing apparatus for use in nuclear reactors is disclosed. Fuel pins forming a hexagonal array are spaced apart from each other and held-down at their lower end, securely attached at two places along their length to one of a plurality of vertically disposed parallel plates arranged in horizontally spaced rows. These plates are in turn spaced apart from each other and held together by a combination of spacing and fastening means. The arrangement of this invention provides a strong vibration free hold-down mechanism while avoiding a large pressure drop to the flow of coolant fluid. This apparatus is particularly useful in connection with liquid cooled reactors such as liquid metal cooled fast breeder reactors.

17,939 citations

Book
01 Jan 1973
TL;DR: In this article, a unified, comprehensive and up-to-date treatment of both statistical and descriptive methods for pattern recognition is provided, including Bayesian decision theory, supervised and unsupervised learning, nonparametric techniques, discriminant analysis, clustering, preprocessing of pictorial data, spatial filtering, shape description techniques, perspective transformations, projective invariants, linguistic procedures, and artificial intelligence techniques for scene analysis.
Abstract: Provides a unified, comprehensive and up-to-date treatment of both statistical and descriptive methods for pattern recognition. The topics treated include Bayesian decision theory, supervised and unsupervised learning, nonparametric techniques, discriminant analysis, clustering, preprocessing of pictorial data, spatial filtering, shape description techniques, perspective transformations, projective invariants, linguistic procedures, and artificial intelligence techniques for scene analysis.

13,647 citations