Journal ArticleDOI

Learnability and the Vapnik-Chervonenkis dimension

TLDR
This paper shows that the essential condition for distribution-free learnability is finiteness of the Vapnik-Chervonenkis dimension, a simple combinatorial parameter of the class of concepts to be learned.
Abstract
Valiant's learnability model is extended to learning classes of concepts defined by regions in Euclidean space E^n. The methods in this paper lead to a unified treatment of some of Valiant's results, along with previous results on distribution-free convergence of certain pattern recognition algorithms. It is shown that the essential condition for distribution-free learnability is finiteness of the Vapnik-Chervonenkis dimension, a simple combinatorial parameter of the class of concepts to be learned. Using this parameter, the complexity and closure properties of learnable classes are analyzed, and necessary and sufficient conditions for feasible learnability are provided.
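The combinatorial parameter at the heart of the abstract can be made concrete with a brute-force check: a concept class shatters a finite set of points if it realizes every possible binary labeling of that set, and its VC dimension is the size of the largest shattered set. The sketch below (an illustration, not the paper's construction; the `thresholds` class and `max_d` cap are assumptions for the example) computes this for threshold concepts on the line:

```python
from itertools import combinations

def shatters(hypotheses, points):
    """True iff the class realizes all 2^|points| labelings of `points`."""
    labelings = {tuple(h(x) for x in points) for h in hypotheses}
    return len(labelings) == 2 ** len(points)

def vc_dimension(hypotheses, domain, max_d=5):
    """Largest d <= max_d such that some d-subset of `domain` is shattered."""
    d = 0
    for k in range(1, max_d + 1):
        if any(shatters(hypotheses, s) for s in combinations(domain, k)):
            d = k  # shattering is monotone: a shattered set shatters its subsets
    return d

# Threshold concepts on the line: c_t(x) = 1 iff x >= t.
thresholds = [lambda x, t=t: x >= t for t in range(-1, 6)]
print(vc_dimension(thresholds, domain=range(5)))  # prints 1
```

Thresholds shatter any single point but no pair: for a < b, the labeling (1, 0) is unrealizable, so the VC dimension is 1 and the class is distribution-free learnable by the paper's criterion.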



Citations
Journal ArticleDOI

On the complexity of neural network classifiers: a comparison between shallow and deep architectures.

TL;DR: A new measure based on topological concepts is introduced for evaluating the complexity of the function implemented by a neural network used for classification. The results support the idea that deep networks implement functions of higher complexity than shallow ones, so that with the same number of resources they can address more difficult problems.
Proceedings ArticleDOI

A general lower bound on the number of examples needed for learning

TL;DR: This paper proves a lower bound on the number of random examples required for distribution-free learning of a concept class C, and shows that for many interesting concept classes, including kCNF and kDNF, the bound is tight to within a constant factor.
Journal Article

Learnability, Stability and Uniform Convergence

TL;DR: This paper considers the General Learning Setting (introduced by Vapnik), which includes most statistical learning problems as special cases, and identifies stability as the key necessary and sufficient condition for learnability.
Journal ArticleDOI

Rademacher penalties and structural risk minimization

TL;DR: This work suggests a penalty function for problems of structural risk minimization, based on the sup-norm of the so-called Rademacher process indexed by the underlying class of functions (sets), and obtains oracle inequalities for the theoretical risk of estimators obtained by structural minimization of the empirical risk with Rademacher penalties.
Journal ArticleDOI

Efficient distribution-free learning of probabilistic concepts

TL;DR: A model of machine learning in which the concept to be learned may exhibit uncertain or probabilistic behavior is investigated, and an underlying theory of learning p-concepts is developed in detail.
References
Book

Computers and Intractability: A Guide to the Theory of NP-Completeness

TL;DR: The standard reference on the theory of NP-completeness by M. R. Garey and D. S. Johnson (W. H. Freeman, San Francisco, 1979), presenting the theory together with a catalogue of NP-complete and harder problems, a list that Johnson's quarterly column has continued to update.
Book

The Art of Computer Programming

TL;DR: Knuth's multi-volume treatise on computer algorithms and their mathematical analysis, covering fundamental algorithms, seminumerical algorithms, and sorting and searching. (The originally scraped summary belonged to an unrelated document.)
Book

Pattern classification and scene analysis

TL;DR: This book provides a unified, comprehensive, and up-to-date treatment of both statistical and descriptive methods for pattern recognition, including Bayesian decision theory, supervised and unsupervised learning, nonparametric techniques, discriminant analysis, clustering, preprocessing of pictorial data, spatial filtering, shape description techniques, perspective transformations, projective invariants, linguistic procedures, and artificial intelligence techniques for scene analysis.