Open Access Book

An Introduction to Computational Learning Theory

TLDR
The probably approximately correct (PAC) learning model, Occam's razor, the Vapnik-Chervonenkis dimension, weak and strong learning, learning in the presence of noise, inherent unpredictability, reducibility in PAC learning, and learning finite automata are described.
Abstract
The probably approximately correct learning model; Occam's razor; the Vapnik-Chervonenkis dimension; weak and strong learning; learning in the presence of noise; inherent unpredictability; reducibility in PAC learning; learning finite automata by experimentation; appendix: some tools for probabilistic analysis.
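For orientation, the chapters listed above build toward the standard PAC criterion and its sample-complexity bound in terms of the Vapnik-Chervonenkis dimension. The following is a textbook-standard statement under the usual assumptions (target concept c in a class C of VC dimension d, examples drawn i.i.d. from a distribution D), not a quotation from the book:

\[
\Pr_{S \sim D^m}\big[\, \mathrm{err}_D(h_S) \le \epsilon \,\big] \ge 1 - \delta,
\qquad
\mathrm{err}_D(h) = \Pr_{x \sim D}\big[ h(x) \ne c(x) \big],
\]
\[
m = O\!\left(\frac{1}{\epsilon}\left(d \log\frac{1}{\epsilon} + \log\frac{1}{\delta}\right)\right)
\quad \text{examples suffice for any consistent learner, where } d = \mathrm{VCdim}(C).
\]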


Citations

On the learnability of monotone functions

TL;DR: This thesis shows that Boolean functions computed by polynomial-size monotone circuits are hard to learn assuming the existence of one-way functions, and that non-monotone DNF formulas, juntas, and sparse GF(2) formulas are teachable in the average case.

Uniform Glivenko-Cantelli Theorems and Concentration of Measure in the Mathematical Modelling of Learning

TL;DR: This paper surveys developments in the use of probabilistic techniques for modelling generalization in machine learning, particularly the use (and derivation) of uniform Glivenko-Cantelli theorems and the use of concentration-of-measure results.
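As context for this entry, a uniform Glivenko-Cantelli property for a class F of [0,1]-valued functions asks that empirical averages converge to expectations uniformly over the class, while concentration of measure controls the deviation of a single empirical average. The statements below are the standard formulations, not taken from the cited paper:

\[
\sup_{f \in F}\left| \frac{1}{m}\sum_{i=1}^{m} f(x_i) - \mathbb{E}_{x \sim D}[f(x)] \right| \;\longrightarrow\; 0
\quad \text{in probability as } m \to \infty, \text{ uniformly over } D,
\]
and, for a single fixed f, Hoeffding's inequality gives
\[
\Pr\!\left[\, \left|\frac{1}{m}\sum_{i=1}^{m} f(x_i) - \mathbb{E}[f]\right| > \epsilon \,\right] \le 2 e^{-2m\epsilon^{2}}.
\]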
Journal Article

On the connection between the phase transition of the covering test and the learning success rate in ILP

TL;DR: It is shown that a top-down, data-driven strategy can cross any plateau if near-misses are supplied in the training set, whereas near-misses do not change the plateau profile and do not guide a generate-and-test strategy.