Open Access Journal Article

PAC-learnability of Probabilistic Deterministic Finite State Automata

Alexander Clark, +1 more
- 01 Dec 2004
- Vol. 5, pp. 473–497
TLDR
It is demonstrated that the class of PDFAs is PAC-learnable using a variant of a standard state-merging algorithm and the Kullback-Leibler divergence as error function.
Abstract
We study the learnability of Probabilistic Deterministic Finite State Automata under a modified PAC-learning criterion. We argue that it is necessary to add additional parameters to the sample complexity polynomial, namely a bound on the expected length of strings generated from any state, and a bound on the distinguishability between states. With this, we demonstrate that the class of PDFAs is PAC-learnable using a variant of a standard state-merging algorithm and the Kullback-Leibler divergence as error function.
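
The two quantities named in the abstract can be sketched in code. The snippet below is a minimal illustration, not the paper's actual algorithm: `l_inf_distance` implements the kind of empirical distinguishability test a state-merging learner applies before merging two candidate states, and `kl_divergence` is the error function the abstract names. The function names and the `mu / 2` merging threshold are illustrative assumptions.

```python
import math
from collections import Counter

def l_inf_distance(sample_a, sample_b):
    """Empirical L-infinity distance between the string distributions
    observed at two candidate states (the 'distinguishability' of the
    abstract, estimated from samples)."""
    freq_a, freq_b = Counter(sample_a), Counter(sample_b)
    support = set(freq_a) | set(freq_b)
    return max(abs(freq_a[s] / len(sample_a) - freq_b[s] / len(sample_b))
               for s in support)

def should_merge(sample_a, sample_b, mu):
    """Merge two candidate states only if no string witnesses a gap of
    mu/2 or more between their empirical distributions (the threshold
    is a hypothetical choice for illustration)."""
    return l_inf_distance(sample_a, sample_b) < mu / 2

def kl_divergence(p, q):
    """Kullback-Leibler divergence D(p || q) between two distributions
    over strings, the error function used in the PAC criterion."""
    return sum(p[s] * math.log(p[s] / q[s]) for s in p if p[s] > 0)
```

For example, two states whose observed samples are `["ab", "ab", "b", "b"]` and `["ab", "ab", "ab", "b"]` have empirical L-infinity distance 0.25, so they are kept apart when `mu = 0.4` but merged when `mu = 0.6`.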


Citations
Book

Grammatical Inference: Learning Automata and Grammars

TL;DR: The author describes a number of techniques and algorithms for learning from text, from an informant, or through interaction with the environment; the objects learned are automata, grammars, rewriting systems, pattern languages, or transducers.
Journal Article

Probabilistic finite-state machines - part II

TL;DR: The relation of probabilistic finite-state automata to other well-known string-generating devices, such as hidden Markov models and n-grams, is studied, and theorems, algorithms, and properties representing the current state of the art for these objects are provided.
Journal Article

State splitting and merging in probabilistic finite state automata for signal representation and analysis

TL;DR: This paper focuses on a special class of PFSA called D-Markov machines, which capture a finite history of the symbol string, have a simple algebraic structure, and are computationally efficient to construct and implement.
Journal Article

Spectral learning of weighted automata

TL;DR: A derivation of the spectral method for learning WFAs is presented, with emphasis on intuitions about the inner workings of the method; no strong background in formal algebraic methods is assumed.

Dissertation

Inductive learning of phonotactic patterns

Jeffrey Heinz
References
Book

Elements of information theory

TL;DR: The author examines the role of entropy, inequalities, and randomness in the design and construction of codes.
Book Chapter

Probability Inequalities for Sums of Bounded Random Variables

TL;DR: In this article, upper bounds are derived for the probability that the sum S of n independent random variables exceeds its mean ES by a positive number nt; the bounds extend to certain sums of dependent random variables such as U-statistics.
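
As a concrete instance of the bound summarized above: for independent random variables each bounded in [0, 1], Hoeffding's inequality gives P(S − ES ≥ nt) ≤ exp(−2nt²). A minimal sketch (the function name is an assumption made here for illustration):

```python
import math

def hoeffding_bound(n, t):
    """Hoeffding's upper bound on P(S - E[S] >= n*t) for the sum S of
    n independent random variables, each taking values in [0, 1]."""
    return math.exp(-2 * n * t * t)
```

For example, with n = 100 samples and deviation t = 0.1, the bound is exp(−2) ≈ 0.135, which is how sample-complexity polynomials in PAC-style analyses are typically derived.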
Proceedings Article

A theory of the learnable

TL;DR: This paper regards learning as the phenomenon of knowledge acquisition in the absence of explicit programming, and gives a precise methodology for studying this phenomenon from a computational viewpoint.
Book

Biological Sequence Analysis: Probabilistic Models of Proteins and Nucleic Acids

TL;DR: This book gives a unified, up-to-date, and self-contained account, with a Bayesian slant, of probabilistic methods of sequence analysis.