Open Access Book

An Introduction to Computational Learning Theory

Abstract
The probably approximately correct learning model; Occam's razor; the Vapnik-Chervonenkis dimension; weak and strong learning; learning in the presence of noise; inherent unpredictability; reducibility in PAC learning; learning finite automata by experimentation; appendix: some tools for probabilistic analysis.
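As a worked illustration of the first topics listed above (standard results from PAC learning theory, not text from this page): a learner that outputs a hypothesis from a finite class H consistent with m i.i.d. labelled examples is probably approximately correct, i.e. has error at most ε with probability at least 1 − δ, whenever

m \;\ge\; \frac{1}{\varepsilon}\left(\ln|H| + \ln\frac{1}{\delta}\right),

and for infinite classes the Vapnik-Chervonenkis dimension d plays the role of ln|H|, giving sample complexity

m \;=\; O\!\left(\frac{1}{\varepsilon}\left(d\,\ln\frac{1}{\varepsilon} + \ln\frac{1}{\delta}\right)\right).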


Citations
Journal Article

The synthesis of language learners

TL;DR: The proofs of some of the positive results yield, as pleasant corollaries, subset-principle or tell-tale style characterizations of the learnability of the corresponding indexed classes or families.
Proceedings Article

Optimal Learning via the Fourier Transform for Sums of Independent Integer Random Variables

TL;DR: In this article, the authors studied the structure and learnability of sums of independent integer random variables (SIIRVs) of order n ∈ ℤ+, in which each of the n independent summands is supported on {0, 1, …, k − 1}, and showed that the optimal sample complexity of this learning problem is Θ((k/ε²)·√(log(1/ε))).
Journal Article

Extension of the PAC framework to finite and countable Markov chains

TL;DR: For a Markov chain with finitely many states, it is shown that, if the target set belongs to a family of sets with a finite Vapnik-Chervonenkis (1995) dimension, then probably approximately correct (PAC) learning of this set is possible with polynomially large samples.
Journal Article

Learning experiments with genetic optimization of a generalized regression neural network

TL;DR: Experiments compare hill-climbing optimization with a genetic algorithm, both used in conjunction with a generalized regression neural network (GRNN); the results consistently favor the GRNN combined with the genetic algorithm.
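A minimal sketch of the combination described above, assuming a one-dimensional GRNN (Gaussian-kernel regression) whose single smoothing parameter sigma is tuned by a toy genetic algorithm; the fitness function, population settings, and synthetic data are illustrative assumptions, not details from the paper:

import numpy as np

rng = np.random.default_rng(0)

def grnn_predict(x_train, y_train, x_query, sigma):
    """GRNN output: Gaussian-kernel weighted average of training targets."""
    d2 = (x_query[:, None] - x_train[None, :]) ** 2
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    return (w @ y_train) / np.clip(w.sum(axis=1), 1e-12, None)

def fitness(sigma, x_tr, y_tr, x_val, y_val):
    """Negative validation MSE, so that larger is better."""
    pred = grnn_predict(x_tr, y_tr, x_val, sigma)
    return -np.mean((pred - y_val) ** 2)

def genetic_optimize(x_tr, y_tr, x_val, y_val, pop=20, gens=30):
    """Toy GA over sigma: tournament selection, blend crossover, Gaussian mutation."""
    population = rng.uniform(0.01, 2.0, size=pop)
    for _ in range(gens):
        scores = np.array([fitness(s, x_tr, y_tr, x_val, y_val) for s in population])
        # tournament selection: pick the better of two random individuals
        parents = np.array([population[max(rng.integers(0, pop, 2), key=lambda i: scores[i])]
                            for _ in range(pop)])
        # blend crossover with a shuffled copy of the parents, then mutate
        mates = rng.permutation(parents)
        alpha = rng.uniform(size=pop)
        children = alpha * parents + (1 - alpha) * mates
        children += rng.normal(0, 0.05, size=pop)
        population = np.clip(children, 1e-3, 5.0)
    scores = np.array([fitness(s, x_tr, y_tr, x_val, y_val) for s in population])
    return population[np.argmax(scores)]

# synthetic regression problem, split into training and validation halves
x = np.sort(rng.uniform(0, 2 * np.pi, 200))
y = np.sin(x) + rng.normal(0, 0.1, size=x.size)
x_tr, y_tr, x_val, y_val = x[::2], y[::2], x[1::2], y[1::2]
print("best sigma:", genetic_optimize(x_tr, y_tr, x_val, y_val))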
Journal Article

Self-Improving Algorithms

TL;DR: This work investigates ways in which an algorithm can improve its expected performance by fine-tuning itself automatically with respect to an arbitrary, unknown input distribution, and gives self-improving algorithms for sorting and clustering.
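A rough sketch of the self-improving idea for sorting, under the simplifying assumption that inputs are i.i.d. draws from one fixed, unknown distribution; the bucketing scheme below illustrates the general idea (learn from sample inputs, then exploit what was learned on later inputs) and is not the authors' construction:

import bisect
import random

def train_buckets(sample_inputs, n_buckets):
    """Training phase: learn bucket boundaries from inputs drawn from the distribution."""
    pooled = sorted(v for inp in sample_inputs for v in inp)
    step = max(1, len(pooled) // n_buckets)
    return pooled[step::step]  # sorted boundary values

def self_improving_sort(x, boundaries):
    """Operation phase: route each element to its learned bucket, sort buckets, concatenate."""
    buckets = [[] for _ in range(len(boundaries) + 1)]
    for v in x:
        buckets[bisect.bisect_left(boundaries, v)].append(v)
    out = []
    for b in buckets:
        b.sort()
        out.extend(b)
    return out

random.seed(0)

def dist(n):
    # stand-in for the unknown input distribution
    return [random.gauss(0, 1) for _ in range(n)]

boundaries = train_buckets([dist(256) for _ in range(50)], n_buckets=64)
x = dist(256)
assert self_improving_sort(x, boundaries) == sorted(x)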