Open Access Book

An Introduction to Computational Learning Theory

Abstract
The book covers the probably approximately correct (PAC) learning model, Occam's razor, the Vapnik-Chervonenkis dimension, weak and strong learning, learning in the presence of noise, inherent unpredictability, reducibility in PAC learning, and learning finite automata by experimentation, with an appendix on some tools for probabilistic analysis.
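The PAC model's central quantitative idea can be illustrated with the standard sample-complexity bound for a consistent learner over a finite hypothesis class: m ≥ (1/ε)(ln|H| + ln(1/δ)) examples suffice for error at most ε with probability at least 1 − δ. The sketch below applies it to a toy class of thresholds on [0, 1]; the concept class, parameter values, and variable names are illustrative choices, not taken from the book.

```python
import math
import random

def pac_sample_size(hyp_count, eps, delta):
    # Standard PAC bound for consistent learners over a finite class H:
    # m >= (1/eps) * (ln|H| + ln(1/delta))
    return math.ceil((math.log(hyp_count) + math.log(1 / delta)) / eps)

# Toy concept class: thresholds t in {0.00, 0.01, ..., 1.00} on [0, 1];
# the label of x is 1 iff x >= t.  The true threshold is hidden.
random.seed(0)
thresholds = [i / 100 for i in range(101)]
true_t = 0.37
eps, delta = 0.05, 0.05
m = pac_sample_size(len(thresholds), eps, delta)

sample = [(x, int(x >= true_t)) for x in (random.random() for _ in range(m))]

# Consistent learner: output any threshold that agrees with every example.
consistent = [t for t in thresholds
              if all((x >= t) == bool(y) for x, y in sample)]
learned_t = consistent[0]

# Empirically estimate the generalization error of the learned hypothesis.
test = [random.random() for _ in range(100_000)]
err = sum((x >= learned_t) != (x >= true_t) for x in test) / len(test)
print(m, round(err, 4))
```

With |H| = 101 and ε = δ = 0.05 the bound asks for 153 examples, and the measured error of the consistent hypothesis indeed falls below ε, matching the PAC guarantee.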


Citations
Journal Article

A tensor approach to learning mixed membership community models

TL;DR: In this article, a tensor spectral decomposition method is proposed to detect communities in the mixed membership Dirichlet model, which allows for nodes to have fractional memberships in multiple communities.
Proceedings Article

Consensus group stable feature selection

TL;DR: Shows that the stability of feature selection depends strongly on sample size, and proposes a novel framework for stable feature selection that first identifies consensus feature groups from subsamples of the training data, then performs feature selection by treating each consensus feature group as a single entity.
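The subsampling-plus-consensus idea behind that framework can be sketched in a few lines: run a base selector on repeated subsamples and keep only features chosen in most rounds. This is a minimal illustration of the stability mechanism, not the paper's exact group-identification algorithm; the base selector (correlation ranking), the voting threshold `tau`, and all names are assumptions made for the sketch.

```python
import random
from collections import Counter

def base_select(X, y, k):
    # Toy base selector: rank features by absolute correlation with the label.
    n, d = len(X), len(X[0])
    def corr(j):
        xs = [row[j] for row in X]
        mx, my = sum(xs) / n, sum(y) / n
        cov = sum((a - mx) * (b - my) for a, b in zip(xs, y))
        sx = sum((a - mx) ** 2 for a in xs) ** 0.5
        sy = sum((b - my) ** 2 for b in y) ** 0.5
        return abs(cov / (sx * sy + 1e-12))
    return set(sorted(range(d), key=corr, reverse=True)[:k])

def consensus_select(X, y, k, rounds=20, frac=0.8, tau=0.6):
    # Repeatedly subsample the training set, run the base selector on each
    # subsample, and keep features selected in at least a tau fraction of
    # rounds -- features that survive are stable under resampling.
    n = len(X)
    votes = Counter()
    for _ in range(rounds):
        idx = random.sample(range(n), int(frac * n))
        votes.update(base_select([X[i] for i in idx], [y[i] for i in idx], k))
    return {j for j, v in votes.items() if v / rounds >= tau}

# Synthetic data: features 0 and 1 determine the label, the rest are noise.
random.seed(1)
n, d = 200, 10
X = [[random.gauss(0, 1) for _ in range(d)] for _ in range(n)]
y = [1 if row[0] + row[1] > 0 else 0 for row in X]
stable = consensus_select(X, y, k=2)
print(sorted(stable))
```

On this data the two informative features are selected in essentially every round, so they alone pass the consensus threshold, while unstable noise features are filtered out.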
Proceedings Article

A Tensor Spectral Approach to Learning Mixed Membership Community Models

TL;DR: In this paper, a tensor spectral decomposition approach is proposed to learn communities in a family of probabilistic network models with overlapping communities, termed as the mixed membership Dirichlet model, first introduced in Airoldi et al. (2008).
Journal Article

A rigorous and robust quantum speed-up in supervised machine learning

TL;DR: In this paper, the authors construct a classifier for quantum machine learning and show that no classical learner can classify the data inverse-polynomially better than random guessing, assuming the widely believed hardness of the discrete logarithm problem.
Journal Article

The Power of Localization for Efficiently Learning Linear Separators with Noise

TL;DR: This work provides the first polynomial-time active learning algorithm for learning linear separators in the presence of malicious noise or adversarial label noise, and achieves a label complexity whose dependence on the error parameter ϵ is polylogarithmic (and thus exponentially better than that of any passive algorithm).
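The exponential gap between active and passive label complexity is easiest to see in the simplest special case: learning a threshold on [0, 1] without noise, where binary search over a sorted unlabeled pool uses O(log(1/ε)) label queries versus Ω(1/ε) for any passive learner. The sketch below shows only that core localization idea; it is not the paper's algorithm, which handles d-dimensional linear separators under malicious and adversarial label noise.

```python
import random

def active_threshold(pool, oracle):
    # Binary search for the smallest point the oracle labels positive.
    # Each query halves the interval where the true threshold can lie,
    # so the number of label queries is logarithmic in the pool size.
    pts = sorted(pool)
    queries = 0
    left, right = 0, len(pts)
    while left < right:
        mid = (left + right) // 2
        queries += 1
        if oracle(pts[mid]):   # positive label: threshold is at or below pts[mid]
            right = mid
        else:                  # negative label: threshold is above pts[mid]
            left = mid + 1
    learned = pts[left] if left < len(pts) else 1.0
    return learned, queries

# Noiseless oracle for a hidden threshold at 0.42 (illustrative value).
random.seed(2)
pool = [random.random() for _ in range(4096)]
learned, q = active_threshold(pool, lambda x: x >= 0.42)
print(q, round(learned, 3))
```

A passive learner would need on the order of 1/ε labeled examples to pin the threshold down to accuracy ε, while here roughly log2(4096) ≈ 12 queries locate it to within the pool's resolution.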