Journal ArticleDOI

Learnability and the Vapnik-Chervonenkis dimension

TLDR
This paper shows that the essential condition for distribution-free learnability is finiteness of the Vapnik-Chervonenkis dimension, a simple combinatorial parameter of the class of concepts to be learned.
Abstract
Valiant's learnability model is extended to learning classes of concepts defined by regions in Euclidean space E^n. The methods in this paper lead to a unified treatment of some of Valiant's results, along with previous results on the distribution-free convergence of certain pattern recognition algorithms. It is shown that the essential condition for distribution-free learnability is finiteness of the Vapnik-Chervonenkis dimension, a simple combinatorial parameter of the class of concepts to be learned. Using this parameter, the complexity and closure properties of learnable classes are analyzed, and necessary and sufficient conditions are provided for feasible learnability.
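For intuition, here is a minimal, hypothetical sketch (not from the paper) of what the Vapnik-Chervonenkis dimension measures: the size of the largest point set that the concept class can label in every possible way (shatter). The finite threshold and interval classes below are illustrative stand-ins for the geometric concept classes the paper studies.

```python
from itertools import product

def shatters(points, concepts):
    """True if every +/- labeling of `points` is realized by some concept."""
    labelings = {tuple(c(x) for x in points) for c in concepts}
    return len(labelings) == 2 ** len(points)

# Tiny finite stand-ins for concept classes over the real line.
thresholds = [lambda x, t=t: x >= t for t in (0.5, 1.5, 2.5, 3.5)]
intervals = [lambda x, a=a, b=b: a <= x <= b
             for a, b in product((0.5, 1.5, 2.5, 3.5), repeat=2) if a <= b]

print(shatters([1.0], thresholds))           # True
print(shatters([1.0, 2.0], thresholds))      # False: labeling (+, -) is impossible
print(shatters([1.0, 2.0], intervals))       # True
print(shatters([1.0, 2.0, 3.0], intervals))  # False: labeling (+, -, +) is impossible
```

Thresholds shatter single points but no pair, so their VC dimension is 1; intervals shatter pairs but no triple, so theirs is 2.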


Citations
Book

Deep Learning

TL;DR: Introduces deep learning, a form of machine learning that enables computers to learn from experience and to understand the world in terms of a hierarchy of concepts, with applications in natural language processing, speech recognition, computer vision, online recommendation systems, bioinformatics, and video games.
Book

Neural networks for pattern recognition

TL;DR: This is the first comprehensive treatment of feed-forward neural networks from the perspective of statistical pattern recognition, and is designed as a text, with over 100 exercises, to benefit anyone involved in the fields of neural computation and pattern recognition.
Proceedings ArticleDOI

Advances in kernel methods: support vector learning

TL;DR: A collection of chapters on support vector learning, including support vector machines for dynamic reconstruction of a chaotic system (Klaus-Robert Müller et al.) and pairwise classification with support vector machines (Ulrich Kressel).
Journal ArticleDOI

An overview of statistical learning theory

TL;DR: Demonstrates how abstract learning theory established conditions for generalization that are more general than those discussed in classical statistical paradigms, and how an understanding of these conditions inspired new algorithmic approaches to function estimation problems.
Book

Understanding Machine Learning: From Theory To Algorithms

TL;DR: The aim of this textbook is to introduce machine learning, and the algorithmic paradigms it offers, in a principled way, suitable for an advanced undergraduate or beginning graduate course.
References
Proceedings ArticleDOI

A general lower bound on the number of examples needed for learning

TL;DR: This paper proves a lower bound on the number of random examples required for distribution-free learning of a concept class C and shows that for many interesting concept classes, including kCNF and kDNF, the bound is actually tight to within a constant factor.
Journal ArticleDOI

A general lower bound on the number of examples needed for learning

TL;DR: In this paper, a lower bound of Ω((1/ε)·ln(1/δ) + VCdim(C)/ε) was shown on the number of random examples required for distribution-free learning of a concept class C, where VCdim(C) is the Vapnik-Chervonenkis dimension and ε and δ are the accuracy and confidence parameters.
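Reading the bound numerically, a sketch with the constants hidden by the Ω set to 1, so this shows scaling only, not an exact sample count:

```python
import math

def lower_bound(eps, delta, vc_dim):
    """Omega((1/eps) * ln(1/delta) + vc_dim / eps) with hidden
    constants set to 1 -- a scaling illustration, not an exact count."""
    return (1.0 / eps) * math.log(1.0 / delta) + vc_dim / eps

for d in (1, 10, 100):
    print(d, round(lower_bound(eps=0.1, delta=0.05, vc_dim=d)))
# 1 40
# 10 130
# 100 1030  -- the VC dimension enters linearly, 1/eps multiplicatively
```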
Book ChapterDOI

A course on empirical processes

Proceedings ArticleDOI

Epsilon-nets and simplex range queries

TL;DR: Presents a new technique for half-space and simplex range queries that uses random sampling to build a partition-tree structure, and introduces the concept of an ε-net for an abstract set of ranges to describe the desired result of this random sampling.
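To make the ε-net notion concrete, here is a small hypothetical check (all names are illustrative, not from the paper) for intervals on [0, 1] under the uniform distribution: a sample is an ε-net if every range of probability mass at least ε contains a sample point.

```python
import random

def is_eps_net(sample, ranges, mass, eps):
    """Epsilon-net check: every range with probability mass >= eps
    must contain at least one sample point."""
    return all(any(lo <= x <= hi for x in sample)
               for lo, hi in ranges if mass(lo, hi) >= eps)

random.seed(0)
eps = 0.1
# Ranges: a grid of subintervals of [0, 1]; under the uniform
# distribution the mass of an interval is its length.
ranges = [(i / 20, j / 20) for i in range(21) for j in range(i, 21)]

def mass(lo, hi):
    return hi - lo

# Intervals have VC dimension 2, so a random sample of size
# O((1/eps) * log(1/eps)) is an eps-net with high probability.
sample = [random.random() for _ in range(100)]
print(is_eps_net(sample, ranges, mass, eps))  # True unless the sample is unlucky
```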
Proceedings ArticleDOI

Learning in the presence of malicious errors

TL;DR: Studies a practical extension of the Valiant model of machine learning from examples in which the sample data may contain errors, possibly maliciously generated by an adversary, instead of assuming an error-free oracle for examples of the function being learned.