Open Access Book
An Introduction to Computational Learning Theory
Michael Kearns, Umesh Vazirani +1 more
Abstract:
The probably approximately correct (PAC) learning model; Occam's razor; the Vapnik-Chervonenkis dimension; weak and strong learning; learning in the presence of noise; inherent unpredictability; reducibility in PAC learning; learning finite automata by experimentation; appendix: some tools for probabilistic analysis.
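The PAC model listed above comes with a classic sample-complexity bound: for a finite hypothesis class H, any consistent learner needs roughly (1/ε)(ln|H| + ln(1/δ)) examples to be probably approximately correct. A minimal sketch of that bound (the function name and the numbers in the example are illustrative, not from the book):

```python
import math

def pac_sample_bound(h_size, epsilon, delta):
    """Number of examples sufficient for a consistent learner over a
    finite hypothesis class of size h_size to output, with probability
    at least 1 - delta, a hypothesis with error at most epsilon:
        m >= (1/epsilon) * (ln|H| + ln(1/delta))
    """
    return math.ceil((math.log(h_size) + math.log(1.0 / delta)) / epsilon)

# e.g. |H| = 2**10 hypotheses, target error 0.1, confidence 95%:
m = pac_sample_bound(2**10, 0.1, 0.05)  # 100 examples suffice
```

Note how the bound grows only logarithmically in |H| and 1/δ, but linearly in 1/ε.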
Citations
Computational models of language universals: Expressiveness, learnability and consequences
S. Edelman, Edward P. Stabler +1 more
TL;DR: This paper reviews some fundamental results in this line of inquiry, from universals formulated in terms of the expressive power of grammars, to results on learnable subsets of the languages defined by those grammars, leading finally to recent views on semantically characterized grammatical universals.
Proceedings Article
Boosting grammatical inference with confidence oracles
TL;DR: This paper aims to improve the performance of state-merging algorithms in the presence of noisy data by constructing a new weighting scheme for the update rule that incorporates additional information provided by an oracle, taking into account the confidence in the labels of the examples.
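The paper's weighting scheme builds on boosting, which the book treats under weak and strong learning. As background only, here is a minimal sketch of one round of the standard AdaBoost reweighting (not the paper's confidence-oracle variant; all names are illustrative):

```python
import math

def adaboost_update(weights, predictions, labels):
    """One AdaBoost round: misclassified examples gain weight,
    correctly classified ones lose it, so the next weak learner
    focuses on the hard cases. Assumes 0 < weighted error < 1/2."""
    err = sum(w for w, p, y in zip(weights, predictions, labels) if p != y)
    alpha = 0.5 * math.log((1.0 - err) / err)  # weak learner's vote weight
    new = [w * math.exp(alpha if p != y else -alpha)
           for w, p, y in zip(weights, predictions, labels)]
    z = sum(new)  # normalizer so the weights stay a distribution
    return [w / z for w in new], alpha

# One misclassified example (index 2) out of four equally weighted ones:
new_w, alpha = adaboost_update([0.25] * 4, [1, 1, -1, -1], [1, 1, 1, -1])
# the misclassified example's weight rises to 0.5
```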
Book Chapter
Invariant Synthesis for Incomplete Verification Engines
TL;DR: This work proposes a framework for synthesizing inductive invariants for incomplete verification engines, which soundly reduce logical problems in undecidable theories to decidable theories, and which allows verification engines to communicate non-provability information to guide invariant synthesis.
Kernel methods and their application to structured data
Roni Khardon, Gabriel Wachman +1 more
TL;DR: The thesis investigates three aspects of machine learning algorithms that use linear classification functions and work implicitly in feature spaces via similarity functions known as kernels; in particular, it investigates kernels for time series from astronomy and for molecules from biochemistry, where the data are not initially expressed in Euclidean space.
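The phrase "work implicitly in feature spaces" refers to the kernel trick: a kernel evaluates an inner product in a high-dimensional feature space without ever constructing that space. A minimal sketch using the polynomial kernel (the function name and inputs are illustrative, not from the thesis):

```python
def poly_kernel(x, y, degree=2, c=1.0):
    """Polynomial kernel (x . y + c)**degree: equals an inner product
    in an implicit higher-dimensional feature space, so a linear
    classifier using it is nonlinear in the original inputs."""
    dot = sum(a * b for a, b in zip(x, y))
    return (dot + c) ** degree

# x . y = 1*3 + 2*0.5 = 4, so the kernel value is (4 + 1)**2 = 25.0
k = poly_kernel([1.0, 2.0], [3.0, 0.5], degree=2, c=1.0)
```

For non-Euclidean data such as time series or molecular graphs, the same idea applies with kernels defined directly on those structures.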