Open Access Book
An Introduction to Computational Learning Theory
Michael Kearns, Umesh Vazirani
Abstract:
The probably approximately correct (PAC) learning model; Occam's razor; the Vapnik-Chervonenkis dimension; weak and strong learning; learning in the presence of noise; inherent unpredictability; reducibility in PAC learning; learning finite automata by experimentation; appendix: some tools for probabilistic analysis.
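The abstract's central topic, the PAC model, comes with a classic sample-complexity bound for finite hypothesis classes: a consistent learner needs m ≥ (1/ε)(ln|H| + ln(1/δ)) examples. A minimal sketch of that bound (function name and example values are ours, not from the book):

```python
import math

def pac_sample_bound(hypothesis_count: int, epsilon: float, delta: float) -> int:
    """Number of examples sufficient for a consistent learner over a
    finite hypothesis class H to be probably approximately correct:
    m >= (1/epsilon) * (ln|H| + ln(1/delta))."""
    return math.ceil((math.log(hypothesis_count) + math.log(1.0 / delta)) / epsilon)

# e.g. |H| = 2**20 hypotheses, 5% error tolerance, 95% confidence
m = pac_sample_bound(2**20, epsilon=0.05, delta=0.05)
```

Note the bound grows only logarithmically in |H| and 1/δ, but linearly in 1/ε.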
Citations
Proceedings Article
An Ensemble Learning Approach for Extracting Concept Prerequisite Relations from Wikipedia
Yang Zhou, Kui Xiao, Yan Zhang
TL;DR: This paper proposes an ensemble learning approach for extracting concept prerequisite relations from Wikipedia, which achieves better performance than baseline methods on two existing datasets, CMU and AL-CPL.
Proceedings Article
On application of data mining in functional debug
TL;DR: This paper investigates how data mining can be applied in functional debug, which is formulated as the problem of explaining a functional simulation error based on human-understandable machine states, and presents a rule discovery methodology comprising two steps.
Book Chapter
Learning finite state machines
TL;DR: The terms grammatical inference and grammar induction might suggest that techniques for building grammatical formalisms from information about a language are not concerned with automata or other finite state machines, but this is far from true: many of the more important results in grammatical inference rely heavily on automata formalisms, and particularly on their specific use of determinism.
Journal Article
On the learnability of shuffle ideals
TL;DR: In particular, this article showed that the class of shuffle ideals is not properly PAC learnable in polynomial time unless RP = NP, and that learning the class improperly would imply a solution to certain fundamental problems in cryptography.
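For context, the shuffle ideal generated by a string u is the set of all strings containing u as a (not necessarily contiguous) subsequence. A minimal membership check, with a function name of our choosing:

```python
def in_shuffle_ideal(u: str, x: str) -> bool:
    """Return True iff x contains u as a scattered subsequence,
    i.e. x belongs to the shuffle ideal generated by u."""
    it = iter(x)
    # membership tests against the iterator consume it, so each
    # character of u must be found after the previous match
    return all(c in it for c in u)

# "abcde" contains "ace" as a subsequence, but not "aec"
```

Membership is easy to decide; the hardness results above concern learning the generator u from labeled examples.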
Posted Content
Statistical Learning of Arbitrary Computable Classifiers
TL;DR: This work shows that learning over the set of all computable labeling functions is indeed possible, develops a learning algorithm, and shows that bounding sample complexity independently of the distribution is impossible.