Open Access Book

An Introduction to Computational Learning Theory

Abstract
The book covers the probably approximately correct (PAC) learning model; Occam's razor; the Vapnik-Chervonenkis dimension; weak and strong learning; learning in the presence of noise; inherent unpredictability; reducibility in PAC learning; learning finite automata by experimentation; and an appendix with some tools for probabilistic analysis.
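A standard result for the PAC model covered in the book: for a finite hypothesis class H, a learner that outputs any hypothesis consistent with its sample is probably (with probability at least 1 - δ) approximately (error at most ε) correct once it sees m ≥ (1/ε)(ln|H| + ln(1/δ)) examples. A minimal sketch of this bound (the function name is illustrative, not from the book):

```python
import math

def pac_sample_size(hypothesis_count: int, epsilon: float, delta: float) -> int:
    """Number of examples sufficient for a consistent learner over a finite
    hypothesis class to have error <= epsilon with probability >= 1 - delta."""
    return math.ceil((math.log(hypothesis_count) + math.log(1.0 / delta)) / epsilon)

# Example: 2^10 hypotheses, 5% error tolerance, 1% failure probability
print(pac_sample_size(2**10, 0.05, 0.01))  # → 231
```

Note how the bound grows only logarithmically in the size of the class and in 1/δ, but linearly in 1/ε; for infinite classes the ln|H| term is replaced by a bound in terms of the VC dimension.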


Citations
Journal Article

Integrating memetic search into the BioHEL evolutionary learning system for large-scale datasets

TL;DR: This paper adapts memetic operators for discrete representations, which use information from the supervised learning process to heuristically edit classification rules and rule sets, to BioHEL, an evolutionary learning system that applies the iterative learning approach, and proposes versions of these operators designed for continuous attributes and for dealing with noise.
Journal Article

The complexity of exact learning of acyclic conditional preference networks from swap examples

TL;DR: This article focuses on the frequently studied case of exact learning from so-called swap examples, which express preferences among objects that differ in only one attribute, and presents bounds on, or exact values of, several well-studied information-complexity parameters, namely the VC dimension, the teaching dimension, and the recursive teaching dimension, for classes of acyclic CP-nets.
Posted Content

Introduction to Online Convex Optimization

TL;DR: The authors argue that in many practical applications the environment is so complex that laying out a comprehensive theoretical model and applying classical algorithmic theory and mathematical optimization is infeasible; it is then necessary, and beneficial, to take a robust approach by applying an optimization method that learns as it goes, drawing on experience as more aspects of the problem are observed.
Proceedings Article

Symbolic assume-guarantee reasoning through BDD learning

TL;DR: This paper presents a progressive witness analysis algorithm for automated assume-guarantee reasoning that exploits the multitude of traces produced by BDD-based symbolic model checkers by directly inferring BDDs as implicit assumptions; in experiments it outperforms monolithic symbolic model checking on four benchmark problems and an industrial case study.
Posted Content

LoopInvGen: A Loop Invariant Generator based on Precondition Inference

TL;DR: This paper describes LoopInvGen, a tool for generating loop invariants that provably guarantee correctness of a program with respect to a given specification; it is significantly faster than existing tools on the SyGuS-COMP 2018 benchmarks from the INV track.