Open Access Book

An Introduction to Computational Learning Theory

TLDR
The probably approximately correct (PAC) learning model, Occam's razor, the Vapnik-Chervonenkis dimension, weak and strong learning, learning in the presence of noise, inherent unpredictability, reducibility in PAC learning, and learning finite automata are described.
Abstract
The book covers the probably approximately correct (PAC) learning model, Occam's razor, the Vapnik-Chervonenkis dimension, weak and strong learning, learning in the presence of noise, inherent unpredictability, reducibility in PAC learning, and learning finite automata by experimentation, with an appendix presenting some tools for probabilistic analysis.
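
As a quick reference for the book's central definition, stated here in standard notation (the symbols below are conventional and not taken from this page): a concept class $\mathcal{C}$ is PAC learnable if there is an algorithm that, for every target concept $c \in \mathcal{C}$, every distribution $D$ over the domain, and all $\varepsilon, \delta \in (0,1)$, draws $\mathrm{poly}(1/\varepsilon, 1/\delta)$ labeled examples from $D$ and outputs a hypothesis $h$ satisfying

$$\Pr\Bigl[\;\Pr_{x \sim D}\bigl[h(x) \neq c(x)\bigr] \le \varepsilon\;\Bigr] \;\ge\; 1 - \delta.$$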


Citations
Book Chapter

Active Automata Learning in Practice

TL;DR: The progress that has been made over the past five years is reviewed, the status of active automata learning techniques with respect to applications in the field of software engineering is assessed, and an updated agenda for future research is presented.
Journal Article

Algorithmic luckiness

TL;DR: This paper studies learning algorithms more directly, in a way that exploits the serendipity of the training sample, and applies this framework to the maximum margin algorithm for linear classifiers, yielding a bound that exploits the margin.
Journal Article

Greedy algorithms for classification—consistency, convergence rates, and adaptivity

TL;DR: Focusing on specific classes of problems, this work provides conditions under which the greedy procedure achieves the (nearly) minimax rate of convergence, implying that the procedure cannot be improved in a worst-case setting.
Journal Article

On the Quantum versus Classical Learnability of Discrete Distributions

TL;DR: This paper studies the comparative power of classical and quantum learners for generative modelling within the Probably Approximately Correct (PAC) framework and shows that quantum learners can exhibit a provable advantage over classical learning algorithms.
Journal Article

Characterizing schema mappings via data examples

TL;DR: A foundation for the systematic investigation of data examples is developed, and a tight connection with homomorphism dualities is established for determining whether a GAV schema mapping is uniquely characterizable by a finite set of universal examples with respect to the class of GAV s-t tgds.