Open Access Book

An Introduction to Computational Learning Theory

TLDR
The book introduces the probably approximately correct (PAC) learning model and covers Occam's razor, the Vapnik-Chervonenkis dimension, weak and strong learning, learning in the presence of noise, inherent unpredictability, reducibility in PAC learning, and learning finite automata.
Abstract
The probably approximately correct learning model; Occam's razor; the Vapnik-Chervonenkis dimension; weak and strong learning; learning in the presence of noise; inherent unpredictability; reducibility in PAC learning; learning finite automata by experimentation; appendix: some tools for probabilistic analysis.
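
As a pointer to the flavor of result the book develops: for a finite hypothesis class $H$, the standard PAC bound (a textbook result of the framework, not a quotation from this book) says that

$$ m \;\ge\; \frac{1}{\varepsilon}\left(\ln|H| + \ln\frac{1}{\delta}\right) $$

examples suffice for any consistent learner to output, with probability at least $1-\delta$, a hypothesis with error at most $\varepsilon$.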


Citations
Proceedings Article (DOI)

Text Categorization Based on Boosting Association Rules

TL;DR: This work proposes an approach that generates a large number of association rules and filters them with a new method equivalent to a deterministic boosting algorithm, which adapts effectively to large-scale classification tasks such as text categorization.
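
As a rough illustration of the boosting-style rule weighting this describes, here is a generic AdaBoost-style sketch in which pre-mined association rules act as weak hypotheses; the `rules` interface and all names here are hypothetical, and this is not the paper's deterministic algorithm:

```python
import numpy as np

def boost_rules(rules, X, y, rounds=10):
    """AdaBoost-style weighting of pre-mined rules (hypothetical interface).

    rules : callables mapping an example to a +1/-1 vote
    X, y  : examples and their +1/-1 labels
    """
    y = np.asarray(y)
    w = np.full(len(y), 1.0 / len(y))                     # example weights
    preds = np.array([[r(x) for x in X] for r in rules])  # rules x examples
    chosen, alphas = [], []
    for _ in range(rounds):
        errs = np.array([(w * (p != y)).sum() for p in preds])
        best = int(errs.argmin())
        err = max(errs[best], 1e-10)
        if err >= 0.5:                 # no remaining rule beats chance
            break
        alpha = 0.5 * np.log((1 - err) / err)
        chosen.append(best)
        alphas.append(alpha)
        w *= np.exp(-alpha * y * preds[best])   # upweight mistaken examples
        w /= w.sum()
    return chosen, alphas
```

The fused classifier is then the sign of the weighted rule vote, `sign(sum(a * rules[c](x) for a, c in zip(alphas, chosen)))`.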
Posted Content

Efficient Robust Proper Learning of Log-concave Distributions

TL;DR: This work gives the first computationally efficient algorithm for robust proper learning of univariate log-concave distributions; the algorithm achieves the information-theoretically optimal sample size, runs in polynomial time, and is robust to model misspecification with nearly optimal error guarantees.
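
For reference, the standard definition behind the paper's setting (not specific to this work): a density $f$ on $\mathbb{R}$ is log-concave when

$$ f(x) = e^{-\varphi(x)} \quad \text{with } \varphi \text{ convex}, $$

which covers, for example, Gaussian, exponential, and uniform densities.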
Posted Content

Experimental learning of quantum states

TL;DR: In this paper, it was shown that quantum states can be approximately learned, in a probabilistic setting, from a number of measurements that grows only linearly with the number of qubits.
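
The contrast that makes this notable, stated loosely with the accuracy and confidence parameters omitted (the exact bounds are in the paper): a general $n$-qubit state is specified by exponentially many amplitudes, yet

$$ \#\text{parameters} \sim 2^{n}, \qquad \#\text{measurements} = O(n). $$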
Proceedings Article

Searching For Hidden Messages: Automatic Detection of Steganography

TL;DR: This work uses machine-learning algorithms to distinguish clean files from stego-bearing files, and shows that they work in both content-based and compression-based image formats, outperforming at least one current hand-crafted steganalysis technique in the latter.
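
A minimal sketch of the clean-vs-stego classification setup this describes, assuming scikit-learn and placeholder feature vectors (a real system would extract image statistics such as wavelet or DCT features; none of this is the authors' pipeline):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Placeholder data: one row of image statistics per file,
# label 1 = stego-bearing, 0 = clean.
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 20))
y = rng.integers(0, 2, size=400)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```

With real features, the held-out accuracy is the figure of merit against hand-crafted steganalysis baselines; the random placeholders here will score near chance.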
Proceedings Article (DOI)

A comparative study on model selection and multiple model fusion

TL;DR: It is argued that strong consistency holds only in the large-sample regime, that soft model selection can still beat choosing a single model when the sample size is small, and that the conditional model estimator (CME) performs best at selecting the correct model order and at fusing multiple models for prediction and interpolation.
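
One common way to make model selection "soft" is to fuse candidate models with information-criterion weights instead of committing to a single winner; the sketch below uses BIC weights as a generic stand-in (it illustrates the idea only and is not the paper's conditional model estimator):

```python
import numpy as np

def bic_weights(log_liks, n_params, n_samples):
    """Convert per-model fits into normalized fusion weights via BIC."""
    bic = -2 * np.asarray(log_liks, float) + np.asarray(n_params) * np.log(n_samples)
    rel = np.exp(-0.5 * (bic - bic.min()))   # shift for numerical stability
    return rel / rel.sum()

def fuse(predictions, weights):
    """Weighted average of per-model predictions (models x points)."""
    return np.tensordot(weights, predictions, axes=1)

# Example: three fitted models on n = 50 samples.
w = bic_weights(log_liks=[-120.0, -118.5, -119.0], n_params=[2, 4, 6], n_samples=50)
preds = np.array([[1.0, 2.0], [1.2, 1.9], [0.8, 2.2]])
print(w, fuse(preds, w))
```

With a large sample the weights concentrate on one model (hard selection); with a small sample they stay spread out, which is the regime where the abstract argues fusion beats picking a single model.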