Journal Article

Learnability and the Vapnik-Chervonenkis dimension

TL;DR: This paper shows that the essential condition for distribution-free learnability is finiteness of the Vapnik-Chervonenkis dimension, a simple combinatorial parameter of the class of concepts to be learned.
Abstract
Valiant's learnability model is extended to learning classes of concepts defined by regions in Euclidean space $E^n$. The methods in this paper lead to a unified treatment of some of Valiant's results, along with previous results on the distribution-free convergence of certain pattern recognition algorithms. It is shown that the essential condition for distribution-free learnability is finiteness of the Vapnik-Chervonenkis dimension, a simple combinatorial parameter of the class of concepts to be learned. Using this parameter, the complexity and closure properties of learnable classes are analyzed, and necessary and sufficient conditions are provided for feasible learnability.
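
For context, here is the standard definition of the combinatorial parameter referred to above (a recap of the usual formulation, not quoted from this page): a concept class $C \subseteq 2^X$ shatters a finite set $S \subseteq X$ if every subset of $S$ can be picked out by some concept, i.e. $\{c \cap S : c \in C\} = 2^S$. The Vapnik-Chervonenkis dimension of $C$ is then

$$\mathrm{VCdim}(C) = \sup\{\, |S| : S \subseteq X \text{ is finite and shattered by } C \,\},$$

taken to be infinite if arbitrarily large finite sets are shattered. For example, the class of axis-parallel rectangles in the plane shatters some four-point sets but no five-point set, so its VC dimension is 4; by the characterization in the abstract, such a class is distribution-free (PAC) learnable precisely because this dimension is finite.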


Citations
Proceedings Article

Predictive PAC Learnability: A Paradigm for Learning from Exchangeable Input Data

TL;DR: Using de Finetti's theorem, it is shown that if a universally separable function class $\mathscr F$ is distribution-free PAC learnable under i.i.d. inputs, then it is distribution-free predictive PAC learnable under exchangeable inputs, with a slightly worse sample complexity.
Journal Article

The degree of approximation of sets in Euclidean space using sets with bounded Vapnik-Chervonenkis dimension

TL;DR: A new quantity $\rho_n(F, L_q)$ is introduced, which measures the degree of approximation of a function class F by the best manifold $H_n$ of pseudo-dimension at most n in the $L_q$ metric.
Proceedings Article

Active Learning with a Drifting Distribution

TL;DR: This work proves upper bounds on the number of prediction mistakes and the number of label requests for established disagreement-based active learning algorithms, both in the realizable case and under Tsybakov noise.
Journal Article

Learning from Examples and Membership Queries with Structured Determinations

TL;DR: Empirical results are presented showing that, when a tree-structured bias is available, the method significantly improves upon knowledge-free induction; it is also shown that there are hard cryptographic limitations to generalizing these positive results to structured determinations in the form of a directed acyclic graph.
Proceedings Article

Improved lower bounds for learning from noisy examples: an information-theoretic approach

TL;DR: This paper presents a general information-theoretic approach for obtaining lower bounds on the number of examples needed to PAC learn in the presence of noise, and the technique is applied to several different models, illustrating its generality and power.