Journal ArticleDOI

Learnability and the Vapnik-Chervonenkis dimension

TL;DR: This paper shows that the essential condition for distribution-free learnability is finiteness of the Vapnik-Chervonenkis dimension, a simple combinatorial parameter of the class of concepts to be learned.
Abstract: Valiant's learnability model is extended to learning classes of concepts defined by regions in Euclidean space E^n. The methods in this paper lead to a unified treatment of some of Valiant's results, along with previous results on distribution-free convergence of certain pattern recognition algorithms. It is shown that the essential condition for distribution-free learnability is finiteness of the Vapnik-Chervonenkis dimension, a simple combinatorial parameter of the class of concepts to be learned. Using this parameter, the complexity and closure properties of learnable classes are analyzed, and necessary and sufficient conditions are provided for feasible learnability.
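The combinatorial parameter at the heart of the abstract can be illustrated concretely. The sketch below (an illustrative example, not from the paper) checks shattering for the class of closed intervals on the real line by brute force: a point set is shattered when intervals realize all 2^n labelings, and the largest shattered set size is the VC dimension, here 2.

```python
from itertools import combinations_with_replacement

def interval_labelings(points):
    """All labelings of `points` realizable by a closed interval [a, b]."""
    labelings = {tuple(False for _ in points)}   # the empty concept
    # It suffices to try intervals whose endpoints are data points.
    for a, b in combinations_with_replacement(sorted(points), 2):
        labelings.add(tuple(a <= p <= b for p in points))
    return labelings

def is_shattered(points):
    """True if intervals realize all 2^n labelings of `points`."""
    return len(interval_labelings(points)) == 2 ** len(points)

print(is_shattered([1.0, 2.0]))       # True: VC dimension >= 2
print(is_shattered([1.0, 2.0, 3.0]))  # False: (+, -, +) is unrealizable
```

No three points can be shattered, since an interval containing the outer two must contain the middle one, so the VC dimension of intervals is exactly 2, and by the paper's main result the class is distribution-free learnable.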


Citations
Proceedings ArticleDOI
24 Jul 1998
TL;DR: This paper presents a general information-theoretic approach for obtaining lower bounds on the number of examples needed to PAC learn in the presence of noise, and the technique is applied to several different models, illustrating its generality and power.
Abstract: This paper presents a general information-theoretic approach for obtaining lower bounds on the number of examples needed to PAC learn in the presence of noise. This approach deals directly with the fundamental information quantities, avoiding a Bayesian analysis. The technique is applied to several different models, illustrating its generality and power. The resulting bounds add logarithmic factors to (or improve the constants in) previously known lower bounds.
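For scale, the classical noise-free PAC lower bound is Omega((d + ln(1/delta)) / epsilon) examples; this paper's contribution is tightening such bounds under noise. The sketch below evaluates the commonly cited order-of-magnitude forms (constants omitted; this is not the paper's exact bound, and `noisy_penalty` is the standard 1/(1-2*eta)^2 blow-up factor, named here for illustration).

```python
from math import log

def pac_lower_bound(d, eps, delta):
    """Classical order-of-magnitude PAC sample lower bound,
    Omega((d + ln(1/delta)) / eps); constants omitted."""
    return (d + log(1 / delta)) / eps

def noisy_penalty(eta):
    """Commonly cited blow-up factor under classification noise
    rate eta < 1/2: sample size scales like 1 / (1 - 2*eta)**2."""
    return 1.0 / (1.0 - 2.0 * eta) ** 2

m = pac_lower_bound(d=10, eps=0.05, delta=0.01)
print(round(m))             # ~292 examples, up to constants
print(noisy_penalty(0.25))  # noise rate 1/4 quadruples the requirement
```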

20 citations

Book ChapterDOI
01 Jan 1998
TL;DR: This work considers the set system on X whose sets are all intersections of X with a halfplane, and focuses on applications in discrepancy theory, in combinatorial geometry, in derandomization of geometric algorithms, and in geometric range searching.
Abstract: Let X be a finite point set in the plane. We consider the set system on X whose sets are all intersections of X with a halfplane. Similarly one can investigate set systems defined on point sets in higher-dimensional spaces by other classes of simple geometric figures (simplices, balls, ellipsoids, etc.). It turns out that simple combinatorial properties of such set systems (most notably the Vapnik-Chervonenkis dimension and related concepts of shatter functions) play an important role in several areas of mathematics and theoretical computer science. Here we concentrate on applications in discrepancy theory, in combinatorial geometry, in derandomization of geometric algorithms, and in geometric range searching. We believe that the tools described might be useful in other areas of mathematics too.
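The set system described above can be enumerated directly for small point sets. The sketch below (an illustrative example, not from the chapter) counts the distinct subsets of a planar point set cut off by closed halfplanes, using the fact that every halfplane subset is a prefix of the points sorted along the halfplane's normal direction; it suffices to test perturbed normals of lines through each pair of points.

```python
from itertools import combinations
import math

def halfplane_subsets(points):
    """All subsets of `points` (tuples (x, y)) realizable as the
    intersection of `points` with a closed halfplane."""
    directions = [(1.0, 0.123)]  # one generic direction
    for (x1, y1), (x2, y2) in combinations(points, 2):
        nx, ny = y2 - y1, x1 - x2          # normal to the segment
        for eps in (-1e-9, 1e-9):          # tiny rotations break ties
            c, s = math.cos(eps), math.sin(eps)
            d = (nx * c - ny * s, nx * s + ny * c)
            directions.append(d)
            directions.append((-d[0], -d[1]))
    subsets = {frozenset()}
    for dx, dy in directions:
        order = sorted(points, key=lambda p: dx * p[0] + dy * p[1])
        for k in range(1, len(points) + 1):
            subsets.add(frozenset(order[:k]))  # prefix = halfplane cut
    return subsets

square = [(0, 0), (1, 0), (0, 1), (1, 1)]
print(len(halfplane_subsets(square)))  # 14 of the 16 possible subsets
```

The two diagonal pairs of the square are unrealizable (no halfplane contains opposite corners without a point between them), which is one way to see that halfplanes cannot shatter four points; their VC dimension is 3, and the shatter function grows polynomially as the Sauer-Shelah bound predicts.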

20 citations

Posted Content
TL;DR: Given a computable probability measure P, there is no computable, randomized method that can produce an arbitrarily large sample such that none of its members are outliers of P.
Abstract: Given a computable probability measure P over natural numbers or infinite binary sequences, there is no computable, randomized method that can produce an arbitrarily large sample such that none of its members are outliers of P.

20 citations


  • Cites background from "Learnability and the Vapnik-Chervonenkis dimension"

  • ...With certain probabilistic assumptions, learning algorithms that produce hypotheses h ⊇ γ of low Kolmogorov complexity are likely to approximate Υ well [BEHW89]....


Proceedings Article
24 Aug 1991
TL;DR: It is shown that Horn clause theories, k-term DNF and general DNF Boolean functions are polynomially learnable from generative representative presentations.
Abstract: We study what kind of data may ease the computational complexity of learning of Horn clause theories (in Gold's paradigm) and Boolean functions (in PAC-learning paradigm). We give several definitions of good data (basic and generative representative sets), and develop data-driven algorithms that learn faster from good examples, and degenerate to learn in the limit from the "worst" possible examples. We show that Horn clause theories, k-term DNF and general DNF Boolean functions are polynomially learnable from generative representative presentations.

20 citations

Book ChapterDOI
19 Oct 2016
TL;DR: In this article, a sample compression scheme for extremal classes of size equal to their Vapnik-Chervonenkis (VC) dimension is presented, which is based on a powerful generalization of the Sauer-Shelah bound.
Abstract: It is a long-standing open problem whether there exists a compression scheme whose size is of the order of the Vapnik-Chervonenkis (VC) dimension d. Recently compression schemes of size exponential in d have been found for any concept class of VC dimension d. Previously, compression schemes of size d have been given for maximum classes, which are special concept classes whose size equals an upper bound due to Sauer-Shelah. We consider a generalization of maximum classes called extremal classes. Their definition is based on a powerful generalization of the Sauer-Shelah bound called the Sandwich Theorem, which has been studied in several areas of combinatorics and computer science. The key result of the paper is a construction of a sample compression scheme for extremal classes of size equal to their VC dimension. We also give a number of open problems concerning the combinatorial structure of extremal classes and the existence of unlabeled compression schemes for them.
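The Sauer-Shelah bound mentioned above is simple to state and compute: a class of VC dimension d induces at most sum_{i=0}^{d} C(n, i) distinct labelings on n points, and maximum classes are exactly those meeting it with equality. A minimal sketch (illustrative, not from the chapter):

```python
from math import comb

def sauer_shelah(n, d):
    """Sauer-Shelah upper bound on the number of distinct labelings
    a class of VC dimension d can induce on n points:
    sum_{i=0}^{d} C(n, i), which is O(n^d) for fixed d."""
    return sum(comb(n, i) for i in range(d + 1))

print(sauer_shelah(10, 2))  # 56: polynomial growth, far below...
print(2 ** 10)              # ...the 1024 labelings of an unrestricted class
```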

19 citations

References
Book
01 Jan 1979
TL;DR: This is the second edition of a quarterly column that provides a continuing update to the list of problems (NP-complete and harder) presented by M. R. Garey and D. S. Johnson in their book "Computers and Intractability: A Guide to the Theory of NP-Completeness," W. H. Freeman & Co., San Francisco, 1979.
Abstract: This is the second edition of a quarterly column the purpose of which is to provide a continuing update to the list of problems (NP-complete and harder) presented by M. R. Garey and myself in our book ‘‘Computers and Intractability: A Guide to the Theory of NP-Completeness,’’ W. H. Freeman & Co., San Francisco, 1979 (hereinafter referred to as ‘‘[G&J]’’; previous columns will be referred to by their dates). A background equivalent to that provided by [G&J] is assumed. Readers having results they would like mentioned (NP-hardness, PSPACE-hardness, polynomial-time-solvability, etc.), or open problems they would like publicized, should send them to David S. Johnson, Room 2C355, Bell Laboratories, Murray Hill, NJ 07974, including details, or at least sketches, of any new proofs (full papers are preferred). In the case of unpublished results, please state explicitly that you would like the results mentioned in the column. Comments and corrections are also welcome. For more details on the nature of the column and the form of desired submissions, see the December 1981 issue of this journal.

40,020 citations

Book
01 Jan 1968
TL;DR: The arrangement of this invention provides a strong vibration free hold-down mechanism while avoiding a large pressure drop to the flow of coolant fluid.
Abstract: A fuel pin hold-down and spacing apparatus for use in nuclear reactors is disclosed. Fuel pins forming a hexagonal array are spaced apart from each other and held-down at their lower end, securely attached at two places along their length to one of a plurality of vertically disposed parallel plates arranged in horizontally spaced rows. These plates are in turn spaced apart from each other and held together by a combination of spacing and fastening means. The arrangement of this invention provides a strong vibration free hold-down mechanism while avoiding a large pressure drop to the flow of coolant fluid. This apparatus is particularly useful in connection with liquid cooled reactors such as liquid metal cooled fast breeder reactors.

17,939 citations

Book
01 Jan 1973
TL;DR: In this article, a unified, comprehensive and up-to-date treatment of both statistical and descriptive methods for pattern recognition is provided, including Bayesian decision theory, supervised and unsupervised learning, nonparametric techniques, discriminant analysis, clustering, preprocessing of pictorial data, spatial filtering, shape description techniques, perspective transformations, projective invariants, linguistic procedures, and artificial intelligence techniques for scene analysis.
Abstract: Provides a unified, comprehensive and up-to-date treatment of both statistical and descriptive methods for pattern recognition. The topics treated include Bayesian decision theory, supervised and unsupervised learning, nonparametric techniques, discriminant analysis, clustering, preprocessing of pictorial data, spatial filtering, shape description techniques, perspective transformations, projective invariants, linguistic procedures, and artificial intelligence techniques for scene analysis.

13,647 citations