Open Access Book

An Introduction to Computational Learning Theory

Abstract
The book covers the probably approximately correct (PAC) learning model; Occam's razor; the Vapnik-Chervonenkis dimension; weak and strong learning; learning in the presence of noise; inherent unpredictability; reducibility in PAC learning; learning finite automata by experimentation; and an appendix with some tools for probabilistic analysis.
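As background for the chapter list above, the PAC criterion at the core of the book can be stated in one display (a standard formulation, not quoted from the text). An algorithm A PAC-learns a concept class C if, for every target c in C, every distribution D, and every epsilon, delta in (0, 1), after drawing m = poly(1/epsilon, 1/delta, n, size(c)) examples it outputs a hypothesis h satisfying

\Pr_{S \sim D^m}\!\left[\, \mathrm{err}_D(h) \le \varepsilon \,\right] \ge 1 - \delta,
\qquad \text{where } \mathrm{err}_D(h) = \Pr_{x \sim D}\left[\, h(x) \ne c(x) \,\right].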


Citations
Proceedings Article

The set covering machine with data-dependent half-spaces

TL;DR: Compared to the support vector machine, the set covering machine with data-dependent half-spaces produces substantially sparser classifiers with comparable (and sometimes better) generalization.
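For context, the set covering machine of Marchand and Shawe-Taylor greedily builds a conjunction of data-dependent features. A minimal sketch of that greedy covering loop, assuming a caller-supplied list of boolean features as toy stand-ins for the paper's half-space construction, might look like:

# Minimal sketch of a set covering machine's greedy step (conjunction case).
# Assumption: `features` is a list of boolean classifiers h(x) built from the
# training data; this illustrates the covering idea, not the paper's exact method.
def greedy_scm(features, X_pos, X_neg, penalty=1.0, max_features=5):
    """A conjunction must output 1 on positives, so each chosen feature
    should reject (cover) negatives while erring on few positives."""
    chosen = []
    uncovered = list(X_neg)  # negatives not yet rejected by any chosen feature
    while uncovered and len(chosen) < max_features:
        def utility(h):
            covered = sum(1 for x in uncovered if h(x) == 0)  # negatives rejected
            errors = sum(1 for x in X_pos if h(x) == 0)       # positives rejected
            return covered - penalty * errors
        best = max(features, key=utility)
        if utility(best) <= 0:
            break  # no feature improves the cover; stopping here yields sparsity
        chosen.append(best)
        uncovered = [x for x in uncovered if best(x) == 1]
    # final classifier: the conjunction of the chosen features
    return lambda x: all(h(x) for h in chosen)

The sparsity mentioned in the TL;DR falls out of this loop stopping as soon as no remaining feature has positive utility.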
Dissertation

Adaptive processing of structural data: from sequences to trees and beyond

TL;DR: In this thesis, a tree-recursive dynamical system (TRDS) is proposed: a class of deterministic state machines that operate in a continuous state space and enable the representation and inductive inference of structure mappings.
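As an illustration of the bottom-up computation a tree-recursive dynamical system performs, here is a generic sketch in which a node's state is a deterministic function of its label and its children's states (all names and the toy transition are illustrative, not taken from the thesis):

# Generic bottom-up state computation over a tree: the state of a node is a
# deterministic function of its label and its children's states.
from dataclasses import dataclass, field

@dataclass
class Node:
    label: float
    children: list = field(default_factory=list)

def fold(node, transition, leaf_state=(0.0, 0.0)):
    """Recursively map a tree to a continuous state vector."""
    child_states = [fold(c, transition, leaf_state) for c in node.children] or [leaf_state]
    return transition(node.label, child_states)

def transition(label, child_states):
    # toy transition: average the children's states and mix in the label
    n = len(child_states)
    avg = tuple(sum(s[i] for s in child_states) / n for i in range(2))
    return (0.5 * avg[0] + label, 0.5 * avg[1] - label)

tree = Node(1.0, [Node(2.0), Node(3.0, [Node(4.0)])])
print(fold(tree, transition))  # a state summarizing the whole tree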
Journal Article

Boosting Classifiers Built from Different Subsets of Features

TL;DR: This work proposes decomposing the learning task into several dependent boosting sub-problems, each handled by a different weak learner; the learners collaborate during the weight-update stage to solve the overall task.
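For context, the weight-update stage referred to here is the one from standard AdaBoost, sketched below in minimal form; the paper's decomposition across feature subsets is not reproduced:

import math

# Standard AdaBoost (for context only). `weak_learner(X, y, w)` is assumed to
# return a classifier h(x) trained on the weighted sample; y takes values in {-1, +1}.
def adaboost(X, y, weak_learner, rounds=10):
    n = len(X)
    w = [1.0 / n] * n              # example weights
    ensemble = []                  # (alpha, h) pairs
    for _ in range(rounds):
        h = weak_learner(X, y, w)
        eps = sum(wi for wi, xi, yi in zip(w, X, y) if h(xi) != yi)
        if eps == 0.0 or eps >= 0.5:
            break                  # perfect or worse-than-random weak hypothesis
        alpha = 0.5 * math.log((1 - eps) / eps)
        ensemble.append((alpha, h))
        # reweight: misclassified examples gain weight, correct ones lose it
        w = [wi * math.exp(-alpha * yi * h(xi)) for wi, xi, yi in zip(w, X, y)]
        z = sum(w)
        w = [wi / z for wi in w]
    return lambda x: 1 if sum(a * h(x) for a, h in ensemble) >= 0 else -1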

Learning and inference in phrase recognition: a filtering-ranking architecture using perceptron

TL;DR: In this article, the problem of recognizing structures of segments in a sentence is studied, in which the goal is to find the most suitable structure given a set of decisions.
Proceedings Article

Learning from partial observations

TL;DR: A masking process model is proposed to capture the stochastic nature of information loss and it is shown that the concept classes of parities and monotone term 1-decision lists are not properly consistently learnable from partial observations, if RP ≠ NP.
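As a toy illustration of a masking process, in the generic sense of stochastic attribute loss and not necessarily the paper's exact definition, each attribute of an example can be hidden independently with some probability p:

import random

# Illustrative masking process: each attribute is replaced by None
# (i.e., hidden from the learner) independently with probability p.
def mask(example, p=0.3, rng=random):
    return [None if rng.random() < p else v for v in example]

print(mask([1, 0, 1, 1, 0]))  # e.g. [1, None, 1, None, 0]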