Journal Article
Learnability and the Vapnik-Chervonenkis dimension
TLDR
This paper shows that the essential condition for distribution-free learnability is finiteness of the Vapnik-Chervonenkis dimension, a simple combinatorial parameter of the class of concepts to be learned.
Abstract
Valiant's learnability model is extended to learning classes of concepts defined by regions in Euclidean space E^n. The methods in this paper lead to a unified treatment of some of Valiant's results, along with previous results on the distribution-free convergence of certain pattern recognition algorithms. It is shown that the essential condition for distribution-free learnability is finiteness of the Vapnik-Chervonenkis dimension, a simple combinatorial parameter of the class of concepts to be learned. Using this parameter, the complexity and closure properties of learnable classes are analyzed, and necessary and sufficient conditions are provided for feasible learnability.
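To make the key parameter concrete: a concept class shatters a point set if it realizes every possible labeling of that set, and the Vapnik-Chervonenkis dimension is the size of the largest shattered set. Below is a minimal brute-force sketch of this definition; the grid, the interval class, and all function names are illustrative choices, not constructs from the paper.

```python
from itertools import combinations

def shatters(concepts, points):
    """True if `concepts` realizes all 2^|points| labelings of `points`."""
    labelings = {tuple(c(x) for x in points) for c in concepts}
    return len(labelings) == 2 ** len(points)

def some_subset_shattered(concepts, domain, d):
    """True if some d-point subset of `domain` is shattered, i.e. the VC dimension is >= d."""
    return any(shatters(concepts, s) for s in combinations(domain, d))

# Closed intervals [a, b] on a small grid: their VC dimension is exactly 2.
grid = range(10)
intervals = [lambda x, a=a, b=b: a <= x <= b for a in grid for b in grid]
print(some_subset_shattered(intervals, grid, 2))  # True: some pair of points is shattered
print(some_subset_shattered(intervals, grid, 3))  # False: no interval hits x1 and x3 but misses x2
```

By the paper's main result, a class like these intervals (finite VC dimension) is distribution-free learnable, while a class of unbounded VC dimension is not.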
Citations
Book
Deep Learning
TL;DR: Deep learning, as presented in this book, is a form of machine learning that enables computers to learn from experience and to understand the world in terms of a hierarchy of concepts; it is used in many applications, such as natural language processing, speech recognition, computer vision, online recommendation systems, bioinformatics, and video games.
Book
Neural networks for pattern recognition
TL;DR: This is the first comprehensive treatment of feed-forward neural networks from the perspective of statistical pattern recognition, and is designed as a text, with over 100 exercises, to benefit anyone involved in the fields of neural computation and pattern recognition.
Proceedings Article
Advances in kernel methods: support vector learning
TL;DR: Contributed chapters include "Support vector machines for dynamic reconstruction of a chaotic system" by Klaus-Robert Müller et al. and "Pairwise classification and support vector machines" by Ulrich Kressel.
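As a minimal illustration of the pairwise (one-vs-one) strategy named in Kressel's chapter title, scikit-learn's SVC trains one binary SVM per pair of classes and combines them by voting; the dataset and parameters here are arbitrary choices for demonstration, not taken from the book.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# decision_function_shape="ovo" exposes the raw pairwise votes:
# one column per pair of classes, so 3 columns for the 3 iris classes.
clf = SVC(kernel="rbf", decision_function_shape="ovo").fit(X_tr, y_tr)
print(clf.decision_function(X_te).shape)  # (n_test, 3)
print(clf.score(X_te, y_te))
```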
Journal Article
An overview of statistical learning theory
TL;DR: It is demonstrated how statistical learning theory establishes conditions for generalization that are more general than those discussed in classical statistical paradigms, and how an understanding of these conditions inspired new algorithmic approaches to function estimation problems.
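For orientation, one classical statement of such a condition, a standard form of Vapnik's VC generalization bound (constants vary across sources), ties the true risk $R(f)$ to the empirical risk $\hat{R}_n(f)$ through the VC dimension $h$: with probability at least $1 - \delta$, uniformly over all $f$ in the class,

$$R(f) \le \hat{R}_n(f) + \sqrt{\frac{h\left(\ln(2n/h) + 1\right) + \ln(4/\delta)}{n}},$$

so a finite VC dimension forces the gap between training error and true error to vanish as the sample size $n$ grows, simultaneously for every distribution.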
Book
Understanding Machine Learning: From Theory to Algorithms
TL;DR: The aim of this textbook is to introduce machine learning, and the algorithmic paradigms it offers, in a principled way in an advanced undergraduate or beginning graduate course.
References
Journal Article
Inferring decision trees using the minimum description length principle
J. R. Quinlan, Ronald L. Rivest
TL;DR: The use of Rissanen's minimum description length principle for the construction of decision trees is explored, and empirical results comparing this approach to other methods are given.
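A sketch of the two-part-code idea behind MDL tree selection: a hypothesis is charged for describing itself plus describing its exceptions, and the smallest total wins. The coding scheme below is a common textbook choice, not necessarily the exact encoding used by Quinlan and Rivest.

```python
import math

def exception_bits(errors, total):
    """Bits to state how many exceptions there are, then which ones:
    log2(total + 1) + log2(C(total, errors))."""
    return math.log2(total + 1) + math.log2(math.comb(total, errors))

def mdl_cost(tree_bits, errors, total):
    """Total description length = bits to encode the tree + bits to encode its exceptions."""
    return tree_bits + exception_bits(errors, total)

# Toy comparison on 100 examples: a single majority-class leaf vs. a one-split stump.
print(mdl_cost(tree_bits=1, errors=30, total=100))   # ~92 bits: trivial tree, many exceptions
print(mdl_cost(tree_bits=12, errors=5, total=100))   # ~45 bits: the stump pays for itself
```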
Journal Article
Fast probabilistic algorithms for Hamiltonian circuits and matchings
Dana Angluin, Leslie G. Valiant
TL;DR: Three simple, efficient algorithms with good probabilistic behaviour are described, and an algorithm with a run time of O(n log n) that almost certainly finds a perfect matching in a random graph with at least cn log n edges is analyzed.
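For flavor, here is a randomized greedy matching heuristic run on a random graph with roughly cn log n edges. This is not the O(n log n) algorithm analyzed by Angluin and Valiant, which is more sophisticated and almost certainly produces a perfect matching; plain greedy only tends to match most vertices.

```python
import math
import random

def random_greedy_matching(edges):
    """Scan the edges in random order, keeping each edge whose endpoints are still free."""
    random.shuffle(edges)
    matched, matching = set(), []
    for u, v in edges:
        if u not in matched and v not in matched:
            matching.append((u, v))
            matched.update((u, v))
    return matching

n, c = 1000, 3
trials = int(c * n * math.log(n))  # aim for ~cn log n edges (duplicates removed below)
edges = list({tuple(sorted(random.sample(range(n), 2))) for _ in range(trials)})
M = random_greedy_matching(edges)
print(f"greedy matched {2 * len(M)} of {n} vertices")
```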
Journal Article
Linear Programming in Linear Time When the Dimension Is Fixed
TL;DR: It is shown that the linear programming problem in d variables and n constraints can be solved in O(n) time when d is fixed, or more generally when d is bounded by a slowly growing function of n.
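The flavor of the fixed-dimension result is easiest to see at d = 1, where each constraint restricts x to a half-line and a single O(n) pass suffices. This toy solver is purely illustrative; Megiddo's actual technique (prune-and-search) is what achieves O(n) for every fixed d.

```python
def lp_min_1d(c, constraints):
    """Minimize c*x subject to a*x <= b for each pair (a, b), in one O(n) pass."""
    lo, hi = float("-inf"), float("inf")
    for a, b in constraints:
        if a > 0:
            hi = min(hi, b / a)        # x <= b/a
        elif a < 0:
            lo = max(lo, b / a)        # x >= b/a
        elif b < 0:
            return None                # constraint 0*x <= b with b < 0: infeasible
    if lo > hi:
        return None                    # feasible interval is empty
    if c > 0:
        return lo                      # may be -inf, i.e. unbounded below
    if c < 0:
        return hi                      # may be +inf, i.e. unbounded below
    return max(lo, min(hi, 0.0))       # c == 0: any feasible point will do

# Minimize x subject to x >= 2 and x <= 10.
print(lp_min_1d(1.0, [(-1.0, -2.0), (1.0, 10.0)]))  # 2.0
```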
Journal Article
Cryptographic limitations on learning Boolean formulae and finite automata
Michael Kearns, Leslie G. Valiant
TL;DR: It is proved that a polynomial-time learning algorithm for Boolean formulae, deterministic finite automata, or constant-depth threshold circuits would have dramatic consequences for cryptography and number theory; the techniques are also applied to obtain strong intractability results for approximating a generalization of graph coloring.
Journal Article
Computational Complexity of Probabilistic Turing Machines
TL;DR: It is shown that every nondeterministic machine can be simulated in the same space by a probabilistic machine with small error probability.
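A simpler related fact that is easy to see in code: the error of a probabilistic decider can be driven down by majority voting over independent runs. The paper's simulation needs more than this to stay within the same space bound; everything named below is an illustrative stand-in.

```python
import random

def noisy_is_even(x, error=0.3):
    """A stand-in probabilistic machine: decides whether x is even, but errs with probability 0.3."""
    truth = (x % 2 == 0)
    return truth if random.random() > error else not truth

def amplified_is_even(x, trials=101):
    """Majority vote over independent runs; the error probability decays exponentially in `trials`."""
    votes = sum(noisy_is_even(x) for _ in range(trials))
    return 2 * votes > trials

print(amplified_is_even(4), amplified_is_even(7))  # almost certainly: True False
```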