Open Access Book

An Introduction to Computational Learning Theory

TL;DR
The book covers the probably approximately correct (PAC) learning model, Occam's razor, the Vapnik-Chervonenkis dimension, weak and strong learning, learning in the presence of noise, inherent unpredictability, reducibility in PAC learning, and learning finite automata.
Abstract
The probably approximately correct learning model; Occam's razor; the Vapnik-Chervonenkis dimension; weak and strong learning; learning in the presence of noise; inherent unpredictability; reducibility in PAC learning; learning finite automata by experimentation; appendix: some tools for probabilistic analysis.
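
For context, the central definition the book develops is PAC learnability; a standard statement in the usual notation (a paraphrase of the common textbook definition, not a quotation from the book) is: a concept class C over a domain X is PAC-learnable if there is an algorithm A and a polynomial p such that for every target concept c in C, every distribution D over X, and every epsilon, delta in (0, 1), the algorithm, given m >= p(1/epsilon, 1/delta, size(c)) labeled examples (x, c(x)) with x drawn i.i.d. from D, outputs a hypothesis h satisfying

  \Pr\left[\ \Pr_{x \sim D}\left[\, h(x) \neq c(x) \,\right] \le \epsilon\ \right] \ge 1 - \delta,

where the outer probability is taken over the random sample (and any internal randomness of A).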


Citations
Book Chapter DOI

Selective sampling methods in one-class classification problems

TL;DR: The goal of this paper is to show why the best or most often used selective sampling methods for two- or multi-class problems are not necessarily the best ones for the one-class classification problem.
Proceedings Article DOI

Learning to impersonate

TL;DR: If one-way functions do not exist, then an efficient Eve can learn to impersonate any efficient Bob nearly as well as an unbounded Eve, and the number of observations Eve must make is tightly bounded in terms of the secret's entropy.
Posted Content

On the Robustness of Information-Theoretic Privacy Measures and Mechanisms

TL;DR: It is proved that the optimal privacy mechanisms for the empirical distribution approach the corresponding mechanisms for the true distribution as the sample size increases, thereby establishing the statistical consistency of the optimal privacy mechanisms.
Proceedings Article

PAC Confidence Sets for Deep Neural Networks via Calibrated Prediction

TL;DR: This work proposes an algorithm combining calibrated prediction and generalization bounds from learning theory to construct confidence sets for deep neural networks with PAC guarantees, i.e., the confidence set for a given input contains the true label with high probability.
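
For illustration, the general recipe behind such calibrated confidence sets can be sketched as below; this is a generic split-calibration sketch that assumes calibrated class probabilities are already available, not the paper's exact algorithm, and the array names and the epsilon parameter are hypothetical.

# Hedged sketch: threshold calibrated softmax scores so that, on a held-out
# calibration split, the set {labels with score >= tau} contains the true label
# for roughly a (1 - epsilon) fraction of examples. Generic split-calibration
# idea only; not the cited paper's exact procedure.
import numpy as np

def choose_threshold(cal_probs, cal_labels, epsilon=0.05):
    """Largest tau such that the true label's score clears tau on ~(1 - epsilon)
    of the calibration examples (tau = epsilon-quantile of true-label scores)."""
    true_scores = cal_probs[np.arange(len(cal_labels)), cal_labels]
    return np.quantile(true_scores, epsilon)

def confidence_set(probs_row, tau):
    """All labels whose (assumed calibrated) probability is at least tau."""
    return np.flatnonzero(probs_row >= tau)

# Synthetic stand-in for calibrated model outputs: 500 examples, 10 classes.
rng = np.random.default_rng(0)
cal_probs = rng.dirichlet(alpha=np.ones(10), size=500)
cal_labels = np.array([rng.choice(10, p=p) for p in cal_probs])
tau = choose_threshold(cal_probs, cal_labels, epsilon=0.05)
print("threshold:", tau, "example confidence set:", confidence_set(cal_probs[0], tau))

The threshold is chosen on a split that is disjoint from training data, which is what lets the coverage statement hold with high probability over the calibration sample rather than merely on average.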
Book Chapter DOI

Consistent Identification in the Limit of Rigid Grammars from Strings Is NP-hard

TL;DR: It is shown that computing the learning functions for these learnable classes of rigid grammars is NP-hard.