Journal Article

Learnability and the Vapnik-Chervonenkis dimension

TLDR
This paper shows that the essential condition for distribution-free learnability is finiteness of the Vapnik-Chervonenkis dimension, a simple combinatorial parameter of the class of concepts to be learned.
Abstract
Valiant's learnability model is extended to learning classes of concepts defined by regions in Euclidean space E^n. The methods in this paper lead to a unified treatment of some of Valiant's results, along with previous results on distribution-free convergence of certain pattern recognition algorithms. It is shown that the essential condition for distribution-free learnability is finiteness of the Vapnik-Chervonenkis dimension, a simple combinatorial parameter of the class of concepts to be learned. Using this parameter, the complexity and closure properties of learnable classes are analyzed, and necessary and sufficient conditions are provided for feasible learnability.
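To make the central definition concrete (an illustration added here, not taken from the paper): a concept class shatters a set of points if every possible 0/1 labeling of the points is realized by some concept in the class, and the Vapnik-Chervonenkis dimension is the size of the largest shatterable set. Below is a minimal Python sketch that checks shattering by brute force for threshold concepts on the real line; the function name `shatters` and the finite grid of thresholds are illustrative choices, not anything from the paper.

```python
def shatters(concepts, points):
    """True iff `concepts` realize every possible 0/1 labeling of `points`."""
    realized = {tuple(c(x) for x in points) for c in concepts}
    return len(realized) == 2 ** len(points)

# Threshold concepts on the real line: c_t(x) = 1 iff x >= t.
# A finite grid of thresholds suffices to check the finite point sets below.
concepts = [lambda x, t=t / 10: int(x >= t) for t in range(-50, 51)]

print(shatters(concepts, [0.0]))       # True: any single point is shattered
print(shatters(concepts, [0.0, 1.0]))  # False: the labeling (1, 0) is unrealizable
# So threshold concepts have VC dimension 1; by the paper's main result,
# finiteness of this dimension is what makes a class distribution-free learnable.
```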


Citations
Journal Article

Bloom Filters in Adversarial Environments

TL;DR: This work considers the “Bloom filter” data structure in an adversarial environment, proves a tight connection between Bloom filters in this model and cryptography, and shows that non-trivial (memory-wise) Bloom filters exist if and only if one-way functions exist.
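For readers unfamiliar with the data structure named above, here is a bare-bones Bloom filter sketch (a standard textbook construction, not code from the cited paper; the salted use of Python's built-in `hash` stands in for k independent hash functions):

```python
class BloomFilter:
    """Approximate set membership: no false negatives, occasional false positives."""

    def __init__(self, num_bits, num_hashes):
        self.num_bits = num_bits
        self.num_hashes = num_hashes
        self.bits = [False] * num_bits

    def _indexes(self, item):
        # Salted built-in hash as a stand-in for k independent hash functions.
        return [hash((salt, item)) % self.num_bits for salt in range(self.num_hashes)]

    def add(self, item):
        for i in self._indexes(item):
            self.bits[i] = True

    def __contains__(self, item):
        return all(self.bits[i] for i in self._indexes(item))

bf = BloomFilter(num_bits=1024, num_hashes=4)
bf.add("alice")
print("alice" in bf)  # True: added items are always found
print("bob" in bf)    # Usually False, but false positives are possible
```

As the summary above indicates, the cited result concerns structures like this under adversarial queries: with predictable hash functions, an adversary who can probe the filter can hunt for false positives, which is why robust Bloom filters end up tied to the existence of one-way functions.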
Proceedings Article

From noise-free to noise-tolerant and from on-line to batch learning

TL;DR: A simple method is presented which virtually removes noise or misfit from data, and thereby converts a “noise-free” algorithm A, which learns linear functions on-line from data without noise or misfit, into a “noise-tolerant” algorithm A^nt which learns linear functions from data containing noise or misfit.
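As background on the on-line setting referred to above, here is a generic Widrow-Hoff (least-mean-squares) learner, shown only to illustrate on-line learning of linear functions; it is not the cited paper's noise-removal conversion:

```python
def lms_online(stream, dim, learning_rate=0.1):
    """On-line learning of a linear function via the Widrow-Hoff (LMS) rule.

    `stream` yields (x, y) pairs with x a length-`dim` feature vector and
    y the (possibly noisy) target value.
    """
    w = [0.0] * dim
    for x, y in stream:
        prediction = sum(wi * xi for wi, xi in zip(w, x))
        error = prediction - y
        # Move the weights against the prediction error on this example.
        w = [wi - learning_rate * error * xi for wi, xi in zip(w, x)]
    return w

# Noise-free example: target function f(x) = 2*x1 - 3*x2.
data = [((1.0, 0.0), 2.0), ((0.0, 1.0), -3.0), ((1.0, 1.0), -1.0)] * 200
print(lms_online(iter(data), dim=2))  # approaches [2.0, -3.0]
```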
Journal Article

The value of agreement, a new boosting algorithm

TL;DR: A new generalization bound is presented in which the use of unlabeled examples yields a better ratio between training-set size and the resulting classifier's quality, thus reducing the number of labeled examples needed to achieve a given quality.
Journal Article

Probably approximate learning of sets and functions

TL;DR: The scope of the learning model is widened to include the inference of functions, and the Vapnik–Chervonenkis dimension is extended to obtain a measure called the “generalized dimension” of a class of functions.
Proceedings Article

ε-Entropy and the Complexity of Feedforward Neural Networks

TL;DR: A new feedforward neural network representation of Lipschitz functions from [0, ρ]^n into [0, 1], based on the level sets of the function, is developed, and an upper bound is given on the number of nodes needed to represent f to within uniform error ε.
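A toy rendering of the level-set idea in that summary (my own illustration, not the paper's network construction): a function into [0, 1] can be approximated uniformly by averaging the indicators of its superlevel sets at equally spaced thresholds, and bounds on representing those indicator sets are the kind of ingredient such node-count bounds build on.

```python
def level_set_approx(f, x, num_levels):
    """Approximate f(x) in [0, 1] by averaging the indicators of the
    superlevel sets {x : f(x) >= k / num_levels}, k = 1..num_levels."""
    hits = sum(1 for k in range(1, num_levels + 1) if f(x) >= k / num_levels)
    return hits / num_levels

f = lambda x: x * x  # a Lipschitz function from [0, 1] into [0, 1]
for x in (0.0, 0.3, 0.7, 1.0):
    print(x, level_set_approx(f, x, num_levels=20))
# The uniform error of this approximation is at most 1/num_levels.
```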
References
Book

Computers and Intractability: A Guide to the Theory of NP-Completeness

TL;DR: An ongoing quarterly column providing continuing updates to the list of problems (NP-complete and harder) presented by M. R. Garey and D. S. Johnson in their book “Computers and Intractability: A Guide to the Theory of NP-Completeness,” W. H. Freeman & Co., San Francisco, 1979.
Book

The Art of Computer Programming

TL;DR: Donald Knuth's multi-volume treatise on computer algorithms and their analysis, spanning fundamental algorithms, seminumerical algorithms, and sorting and searching.
Book

Pattern classification and scene analysis

TL;DR: This book provides a unified, comprehensive, and up-to-date treatment of both statistical and descriptive methods for pattern recognition, including Bayesian decision theory, supervised and unsupervised learning, nonparametric techniques, discriminant analysis, clustering, preprocessing of pictorial data, spatial filtering, shape description techniques, perspective transformations, projective invariants, linguistic procedures, and artificial intelligence techniques for scene analysis.