Open Access Book
An Introduction to Computational Learning Theory
Michael Kearns and Umesh Vazirani
TLDR
The book covers the probably approximately correct (PAC) learning model, Occam's razor, the Vapnik-Chervonenkis dimension, weak and strong learning, learning in the presence of noise, inherent unpredictability, reducibility in PAC learning, and learning finite automata.
Abstract:
The probably approximately correct learning model; Occam's razor; the Vapnik-Chervonenkis dimension; weak and strong learning; learning in the presence of noise; inherent unpredictability; reducibility in PAC learning; learning finite automata by experimentation; appendix: some tools for probabilistic analysis.
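As a brief illustration of the kind of result the book treats (a standard PAC sample-complexity bound for a finite hypothesis class, stated here from general knowledge rather than quoted from the book): in the realizable setting, any hypothesis from a finite class H that is consistent with

    m \ge \frac{1}{\epsilon}\left(\ln|H| + \ln\frac{1}{\delta}\right)

independent random examples has true error at most \epsilon with probability at least 1 - \delta.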
Citations
Journal Article
Learning Qualitative Models of Dynamic Systems
David T. Hau and Enrico Coiera
TL;DR: A method is presented that learns qualitative models from time-varying physiological signals; it is shown that QSIM models are efficiently PAC learnable from positive examples only, and that GENMODEL is an ILP algorithm for efficiently constructing a QSIM model.
Proceedings Article
FiG: Automatic Fingerprint Generation
TL;DR: Results show that such an automatic process can generate accurate fingerprints that classify each piece of software into its proper class, and that the search space for query exploration remains largely unexplored, with many new such queries awaiting discovery.
Posted Content
Pattern Discovery in Time Series, Part I: Theory, Algorithm, Analysis, and Convergence
TL;DR: A new algorithm is presented for discovering patterns in time series and other sequential data; it makes no assumptions about the process's causal architecture but infers it from the data, and the inferred states have important predictive optimality properties that conventional HMM states lack.
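For orientation, a hedged sketch of the idea behind the predictive-optimality claim (notation assumed here, not quoted from the paper): the algorithm groups past trajectories into states, where two histories \overleftarrow{x} and \overleftarrow{x}' belong to the same state exactly when they induce the same distribution over futures,

    P\big(\overrightarrow{X} \mid \overleftarrow{X}=\overleftarrow{x}\big) = P\big(\overrightarrow{X} \mid \overleftarrow{X}=\overleftarrow{x}'\big).

States defined this way are sufficient statistics for prediction, which is the optimality property that generic HMM states need not have.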
Domain adaptation of natural language processing systems
Fernando Pereira and John Blitzer
TL;DR: A measure of divergence is described, the HΔH-divergence, that depends on the hypothesis class H from which the model h is estimated; it is used to state an upper bound on the true target error of a model trained to minimize a convex combination of empirical source and target errors.
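For context, the commonly cited form of such a bound (a sketch from the domain-adaptation literature; the exact statement and constants in the cited work may differ) is, for every h in the hypothesis class H,

    \epsilon_T(h) \le \epsilon_S(h) + \tfrac{1}{2}\, d_{H\Delta H}(\mathcal{D}_S, \mathcal{D}_T) + \lambda,

where \epsilon_S and \epsilon_T are the source and target errors, d_{H\Delta H} is the HΔH-divergence between the source and target distributions, and \lambda = \min_{h' \in H} [\epsilon_S(h') + \epsilon_T(h')] is the combined error of the best single hypothesis.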
Journal Article
A complexity gap for tree resolution
TL;DR: It is shown that any sequence of tautologies which expresses the validity of a fixed combinatorial principle is either "easy", i.e. has polynomial-size tree-resolution proofs, or "difficult", i.e. requires exponential-size tree-resolution proofs.
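Written as a formula (a restatement of the TL;DR with n taken as the size parameter of the tautology family; the exact exponent in the paper may differ), the gap says the minimal tree-resolution proof size S(n) satisfies either

    S(n) = n^{O(1)} \quad \text{or} \quad S(n) = 2^{\Omega(n)},

with no intermediate growth rates.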