Journal ArticleDOI

Learnability and the Vapnik-Chervonenkis dimension

TLDR
This paper shows that the essential condition for distribution-free learnability is finiteness of the Vapnik-Chervonenkis dimension, a simple combinatorial parameter of the class of concepts to be learned.
Abstract
Valiant's learnability model is extended to learning classes of concepts defined by regions in Euclidean space E^n. The methods in this paper lead to a unified treatment of some of Valiant's results, along with previous results on distribution-free convergence of certain pattern recognition algorithms. It is shown that the essential condition for distribution-free learnability is finiteness of the Vapnik-Chervonenkis dimension, a simple combinatorial parameter of the class of concepts to be learned. Using this parameter, the complexity and closure properties of learnable classes are analyzed, and the necessary and sufficient conditions are provided for feasible learnability.
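As a concrete illustration of the combinatorial parameter involved (not from the paper itself): the Vapnik-Chervonenkis dimension of a concept class is the size of the largest point set the class can "shatter", i.e. label in every possible way. The sketch below brute-forces this for the simple class of closed intervals on the real line, whose VC dimension is 2; the function names and the choice of concept class are illustrative assumptions.

```python
# Minimal sketch (not from the paper): brute-force the VC dimension of the
# concept class of closed intervals [a, b] on the real line.
from itertools import combinations, product

def interval_labels(points, a, b):
    """Label each point 1 if it lies in [a, b], else 0."""
    return tuple(1 if a <= x <= b else 0 for x in points)

def is_shattered(points):
    """Check whether intervals realize every 0/1 labeling of the points."""
    pts = sorted(points)
    endpoints = [pts[0] - 1] + pts + [pts[-1] + 1]
    realizable = {interval_labels(points, a, b)
                  for a, b in product(endpoints, repeat=2) if a <= b}
    return len(realizable) == 2 ** len(points)

def vc_dimension(sample, max_d=4):
    """Largest d such that some d-point subset of `sample` is shattered."""
    best = 0
    for d in range(1, max_d + 1):
        if any(is_shattered(subset) for subset in combinations(sample, d)):
            best = d
    return best

print(vc_dimension([0.0, 1.0, 2.0, 3.0]))  # intervals shatter 2 points, not 3
```

Running this prints 2: intervals can realize every labeling of two points, but no interval can contain two points while excluding one that lies between them.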


Citations
Journal ArticleDOI

Bayesian applications of belief networks and multilayer perceptrons for ovarian tumor classification with rejection

TL;DR: A hybrid Bayesian methodology is proposed that encodes prior knowledge in the form of a (Bayesian) belief network and then uses this knowledge to estimate an informative prior for a black-box model (e.g. a multilayer perceptron).
Proceedings ArticleDOI

Learning with a slowly changing distribution

TL;DR: An upper bound on the rate at which the distribution may change is given that ensures learning is possible from a finite number of examples, in the sense that some learning algorithm can produce an acceptably small probability of misclassification.
Journal ArticleDOI

Can PAC learning algorithms tolerate random attribute noise?

TL;DR: It is shown that product random attribute noise, where each attribute i is flipped randomly and independently with its own probability p_i, is nearly as harmful as malicious noise: no algorithm can tolerate more than a very small amount of such noise.
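As a small, hypothetical illustration (not from the paper) of the product random attribute noise model just described, the sketch below flips each Boolean attribute of an example independently with its own probability p_i and leaves the label untouched; the function name and the probabilities are made up for the example.

```python
import random

def apply_product_attribute_noise(attributes, flip_probs, rng=random):
    """Flip each 0/1 attribute independently with its own probability p_i.

    `attributes` is a sequence of 0/1 values; `flip_probs[i]` is the
    probability of flipping attribute i. The class label is not touched.
    """
    return tuple(1 - a if rng.random() < p else a
                 for a, p in zip(attributes, flip_probs))

random.seed(0)
clean_example = (1, 0, 1, 1)
noisy_example = apply_product_attribute_noise(
    clean_example, flip_probs=(0.05, 0.10, 0.01, 0.20))
print(noisy_example)
```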
Journal ArticleDOI

Lower bounds in pattern recognition and learning

TL;DR: Lower bounds are derived for the performance of any pattern recognition algorithm, which, using training data, selects a discrimination rule from a certain class of rules, and the bounds involve the Vapnik-Chervonenkis dimension of the class and the minimal error probability within the class.
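Schematically, lower bounds of this type say that for any rule selected from a class C of VC dimension V_C on the basis of n training examples, some distribution forces an expected error exceeding the best error achievable within C by an amount on the order of sqrt(V_C/n). The display below is only a hedged sketch of that generic shape, with an unspecified constant c; it is not the paper's exact statement.

```latex
% Generic shape of a VC-type minimax lower bound (constant c and exact
% conditions unspecified; a sketch, not the paper's precise statement).
\[
  \sup_{\text{distributions}}
  \left( \mathbb{E}\,L(\hat{g}_n) - \inf_{g \in \mathcal{C}} L(g) \right)
  \;\ge\; c\,\sqrt{\frac{V_{\mathcal{C}}}{n}},
\]
where $\hat{g}_n$ is the rule chosen from $\mathcal{C}$ using $n$ examples,
$L(\cdot)$ denotes the probability of misclassification, and
$V_{\mathcal{C}}$ is the Vapnik--Chervonenkis dimension of $\mathcal{C}$.
```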
Proceedings ArticleDOI

General bounds on the number of examples needed for learning probabilistic concepts

TL;DR: A new method for designing learning algorithms is introduced: dynamic partitioning of the domain by means of splitting trees. The resulting lower bounds for learning the class ND are shown to be tight to within a logarithmic factor.
References
Book

Computers and Intractability: A Guide to the Theory of NP-Completeness

TL;DR: A quarterly column provides a continuing update to the list of problems (NP-complete and harder) presented by M. R. Garey and D. S. Johnson in their book "Computers and Intractability: A Guide to the Theory of NP-Completeness," W. H. Freeman & Co., San Francisco, 1979.
Book

The Art of Computer Programming

Book

Pattern classification and scene analysis

TL;DR: In this book, a unified, comprehensive and up-to-date treatment of both statistical and descriptive methods for pattern recognition is provided, including Bayesian decision theory, supervised and unsupervised learning, nonparametric techniques, discriminant analysis, clustering, preprocessing of pictorial data, spatial filtering, shape description techniques, perspective transformations, projective invariants, linguistic procedures, and artificial intelligence techniques for scene analysis.