Open Access Book
An Introduction to Computational Learning Theory
Michael Kearns, Umesh Vazirani, et al.
Abstract
The probably approximately correct (PAC) learning model; Occam's razor; the Vapnik-Chervonenkis dimension; weak and strong learning; learning in the presence of noise; inherent unpredictability; reducibility in PAC learning; learning finite automata by experimentation; appendix: some tools for probabilistic analysis.
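The PAC model listed above comes with a standard sample-complexity guarantee for finite hypothesis classes: m ≥ (1/ε)(ln|H| + ln(1/δ)) examples suffice for a consistent learner to be probably approximately correct. A minimal sketch (function name illustrative):

```python
import math

def pac_sample_bound(hypothesis_count, epsilon, delta):
    """Sample size sufficient for PAC learning a finite hypothesis class:
    with m >= (1/epsilon) * (ln|H| + ln(1/delta)) examples, any hypothesis
    consistent with the sample has true error at most epsilon, with
    probability at least 1 - delta."""
    return math.ceil((math.log(hypothesis_count) + math.log(1 / delta)) / epsilon)

# e.g. |H| = 2**20 Boolean hypotheses, epsilon = 0.1, delta = 0.05
print(pac_sample_bound(2**20, 0.1, 0.05))  # -> 169
```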
Citations
Journal Article
A formal framework for positive and negative detection schemes
TL;DR: A new match rule, called r-chunks, is introduced, and the generalizations induced by different partial-matching rules are characterized in terms of the crossover closure, which affects the tradeoff between positive and negative detection.
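The r-chunks rule named in this summary can be sketched directly. A hedged, minimal reading (a detector is a start position plus a length-r window; negative detection flags a sample if any detector matches), with illustrative data:

```python
def rchunk_matches(detector, sample):
    """An r-chunk detector is a (position, window) pair; it matches a
    sample string iff the sample's substring of the window's length,
    starting at that position, equals the window."""
    pos, window = detector
    return sample[pos:pos + len(window)] == window

def negative_detect(detectors, sample):
    """Negative detection: flag a sample as anomalous (non-self)
    if any detector matches it."""
    return any(rchunk_matches(d, sample) for d in detectors)

# Illustrative detectors chosen to match nothing in self = {"0011", "0111"}:
detectors = [(0, "10"), (2, "00")]
print(negative_detect(detectors, "1011"))  # matches (0, "10") -> True
print(negative_detect(detectors, "0011"))  # self string -> False
```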
Proceedings Article
Learning to resolve natural language ambiguities: a unified approach
TL;DR: A sparse network of linear separators, based on the Winnow learning algorithm, is proposed for natural language disambiguation and is shown to perform well on a variety of ambiguity-resolution problems.
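The Winnow algorithm underlying this approach uses multiplicative weight updates, which gives mistake bounds that scale only logarithmically with the number of features. A minimal sketch of the classic update (data and parameter choices are illustrative, not from the paper):

```python
def winnow_train(examples, n, alpha=2.0):
    """Winnow for Boolean features x in {0,1}^n and labels y in {0,1}.
    Predict 1 iff w . x >= n; on a mistake, multiply (promote) or divide
    (demote) the weights of the active features by alpha. The mistake
    bound is O(k log n) for k-literal monotone disjunctions."""
    w = [1.0] * n
    theta = float(n)
    for x, y in examples:
        yhat = 1 if sum(wi * xi for wi, xi in zip(w, x)) >= theta else 0
        if yhat != y:
            factor = alpha if y == 1 else 1.0 / alpha
            w = [wi * factor if xi else wi for wi, xi in zip(w, x)]
    return w

def winnow_predict(w, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) >= len(w) else 0

# Target concept: x1 OR x3 over n = 4 features (illustrative data).
data = [((1, 0, 0, 0), 1), ((0, 1, 0, 0), 0), ((0, 0, 1, 0), 1),
        ((0, 0, 0, 1), 0), ((1, 1, 0, 0), 1), ((0, 1, 0, 1), 0)] * 5
w = winnow_train(data, 4)
print(winnow_predict(w, (1, 0, 0, 0)), winnow_predict(w, (0, 1, 0, 1)))  # -> 1 0
```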
Book Chapter
Statistical Learning Theory: Models, Concepts, and Results
U. von Luxburg, Bernhard Schölkopf, et al.
TL;DR: Statistical learning theory, as discussed by the authors, is regarded as one of the most beautifully developed branches of artificial intelligence, and it provides the theoretical basis for many of today's machine learning algorithms for tasks such as classification.
Proceedings Article
New Results for Learning Noisy Parities and Halfspaces
TL;DR: The first nontrivial algorithm for learning parities with adversarial noise is given; it shows that learning DNF expressions reduces to learning noisy parities on just a logarithmic number of variables, and that majorities of halfspaces are hard to PAC-learn using any representation.
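For context, the hardness addressed in this paper comes entirely from the noise: without noise, a parity (an XOR of an unknown subset of variables) is recoverable by Gaussian elimination over GF(2). A minimal sketch of that clean-case baseline (names illustrative):

```python
def learn_parity(examples, n):
    """Noise-free baseline: recover the coefficient vector c of a parity
    c . x (mod 2) from labeled examples by Gaussian elimination over GF(2).
    With adversarial noise this problem becomes much harder; this sketch
    handles only the clean case (free variables are set to 0)."""
    pivots = {}  # pivot column -> row in echelon form (augmented with label)
    for x, y in examples:
        row = list(x) + [y]
        for col in range(n):
            if row[col]:
                if col in pivots:
                    row = [a ^ b for a, b in zip(row, pivots[col])]
                else:
                    pivots[col] = row
                    break
    c = [0] * n
    for col, row in sorted(pivots.items(), reverse=True):
        # Back-substitute: c[col] = rhs XOR (known coefficients to the right).
        s = row[n]
        for j in range(col + 1, n):
            s ^= row[j] & c[j]
        c[col] = s
    return c

# Target parity: x0 XOR x2 over n = 3 variables (illustrative data).
data = [((1, 0, 0), 1), ((0, 1, 0), 0), ((0, 0, 1), 1), ((1, 1, 1), 0)]
print(learn_parity(data, 3))  # -> [1, 0, 1]
```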
Dissertation
Bayesian Gaussian Process Models: PAC-Bayesian Generalisation Error Bounds and Sparse Approximations
TL;DR: The tractability and usefulness of simple greedy forward selection with information-theoretic criteria, previously used in active learning, are demonstrated, and generic schemes for automatic model selection with many (hyper)parameters are developed.
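The greedy forward selection mentioned in this summary can be illustrated generically: starting from an empty active set, repeatedly add the candidate whose inclusion maximizes a scoring criterion. The dissertation's information-theoretic criteria for sparse GPs are more involved; this sketch uses a simple placeholder score (set coverage) purely for illustration.

```python
def greedy_forward_select(candidates, score, k):
    """Greedy forward selection: repeatedly add the candidate whose
    inclusion maximizes the score of the enlarged active set,
    until k elements have been chosen."""
    active = []
    remaining = list(candidates)
    for _ in range(k):
        best = max(remaining, key=lambda c: score(active + [c]))
        active.append(best)
        remaining.remove(best)
    return active

# Placeholder score: coverage of a target set. Coverage is submodular,
# so greedy selection achieves a (1 - 1/e) approximation to the optimum.
target = {1, 2, 3, 4, 5, 6}
sets = [{1, 2, 3}, {3, 4}, {5, 6}, {1, 6}]
chosen = greedy_forward_select(sets, lambda A: len(target & set().union(*A)), 2)
print(chosen)  # -> [{1, 2, 3}, {5, 6}]
```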