
ω-automaton

About: ω-automaton is a research topic. Over its lifetime, 2,299 publications have been published on this topic, receiving 68,468 citations. The topic is also known as: stream automaton and ω-automata.


Papers
Journal ArticleDOI
TL;DR: The correspondence between first-order recurrent neural networks and deterministic finite state automata is examined in detail, revealing two major stages in the learning process and a clustering-based measure that correlates with the stability of the networks.
Abstract: We examine the correspondence between first-order recurrent neural networks and deterministic finite state automata. We begin with the problem of inducing deterministic finite state automata from finite training sets, that include both positive and negative examples, an NP-hard problem (Angluin and Smith 1983). We use a neural network architecture with two recurrent layers, which we argue can approximate any discrete-time, time-invariant dynamic system, with computation of the full gradient during learning. The networks are trained to classify strings as belonging or not belonging to the grammar. The training sets used contain only short strings, and the sets are constructed in a way that does not require a priori knowledge of the grammar. After training, the networks are tested using various test sets with strings of length up to 1000, and are often able to correctly classify all the test strings. These results are comparable to those obtained with second-order networks (Giles et al. 1992; Watrous and Kuhn 1992a; Zeng et al. 1993). We observe that the networks emulate finite state automata, confirming the results of other authors, and we use a vector quantization algorithm to extract deterministic finite state automata after training and during testing of the networks, obtaining a table listing the start state, accept states, reject states, all transitions from the states, as well as some useful statistics. We examine the correspondence between finite state automata and neural networks in detail, showing two major stages in the learning process. To this end, we use a graphics module, which graphically depicts the states of the network during the learning and testing phases. We examine the networks' performance when tested on strings much longer than those in the training set, noting a measure based on clustering that is correlated to the stability of the networks. Finally, we observe that with sufficiently long training times, neural networks can become true finite state automata, due to the attractor structure of their dynamics.

72 citations
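The abstract above describes extracting a deterministic finite automaton from a trained network by vector quantization of its hidden states. The sketch below is a minimal, hypothetical rendering of that idea: the RNN interface (initial_state, step, is_accepting) is an assumption, and scikit-learn's k-means stands in for the paper's vector quantization algorithm.

```python
import numpy as np
from sklearn.cluster import KMeans

def extract_dfa(rnn, alphabet, sample_strings, n_clusters):
    # Collect every hidden state visited while processing the sample strings.
    hidden = []
    for s in sample_strings:
        h = rnn.initial_state()
        hidden.append(h)
        for sym in s:
            h = rnn.step(h, sym)
            hidden.append(h)
    km = KMeans(n_clusters=n_clusters).fit(np.array(hidden))

    # Each cluster becomes one DFA state; transitions are read off by
    # stepping the network from each cluster centre on every symbol.
    start = int(km.predict(np.array([rnn.initial_state()]))[0])
    transitions, accepting = {}, set()
    for q, centre in enumerate(km.cluster_centers_):
        if rnn.is_accepting(centre):
            accepting.add(q)
        for sym in alphabet:
            nxt = rnn.step(centre, sym)
            transitions[(q, sym)] = int(km.predict(np.array([nxt]))[0])
    return start, transitions, accepting
```

The returned triple corresponds to the table the paper reports: start state, full transition function, and accept states (all other states reject).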

Journal ArticleDOI
TL;DR: A pumping lemma in L-valued automata theory is established, and it is shown that if the related L-valued predicates are defined using the connective ∧ instead of &, then the pumping lemma may no longer hold.

72 citations
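For reference, the classical pumping lemma that this work generalizes; in the L-valued (lattice-valued) setting the Boolean connectives below become lattice operations, which is exactly where the choice between ∧ and & matters.

```latex
% Classical pumping lemma for a regular language L (Boolean setting).
\exists p \ge 1 \;\; \forall w \in L,\; |w| \ge p \;\Longrightarrow\;
\exists x, y, z \;\bigl( w = xyz \;\wedge\; |xy| \le p \;\wedge\; |y| \ge 1
\;\wedge\; \forall i \ge 0,\; x y^{i} z \in L \bigr)
```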

Journal ArticleDOI
TL;DR: It is shown that there exists a family of nondeterministic finite automata {A_n} over a two-letter alphabet such that, for any positive integer n, A_n is exponentially ambiguous and has n states, whereas the smallest equivalent deterministic finite automaton has 2^n states, and any smallest equivalent polynomially ambiguous finite automaton has 2^n - 1 states.
Abstract: We resolve an open problem raised by Ravikumar and Ibarra [SIAM J. Comput., 18 (1989), pp. 1263--1282] on the succinctness of representations relating to the types of ambiguity of finite automata. We show that there exists a family of nondeterministic finite automata {A_n} over a two-letter alphabet such that, for any positive integer n, A_n is exponentially ambiguous and has n states, whereas the smallest equivalent deterministic finite automaton has 2^n states, and any smallest equivalent polynomially ambiguous finite automaton has 2^n - 1 states.

71 citations
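The exponential gap quantified above comes from determinization. The sketch below is a standard illustration (not the paper's specific family A_n): the textbook subset construction applied to the (n+1)-state NFA for "the n-th symbol from the end is a", which yields 2^n reachable deterministic states.

```python
def subset_construction(alphabet, delta, start, finals):
    # delta maps (state, symbol) -> set of successor NFA states.
    start_set = frozenset([start])
    dfa_states, worklist, dfa_delta = {start_set}, [start_set], {}
    while worklist:
        S = worklist.pop()
        for sym in alphabet:
            T = frozenset(r for q in S for r in delta.get((q, sym), ()))
            dfa_delta[(S, sym)] = T
            if T not in dfa_states:
                dfa_states.add(T)
                worklist.append(T)
    return dfa_states, dfa_delta, start_set, {S for S in dfa_states if S & finals}

# NFA with states 0..n that guesses the position n symbols from the end:
n = 10
delta = {(0, 'a'): {0, 1}, (0, 'b'): {0}}
for i in range(1, n):
    delta[(i, 'a')] = {i + 1}
    delta[(i, 'b')] = {i + 1}
states, _, _, _ = subset_construction('ab', delta, 0, {n})
print(len(states))  # 2**n = 1024 reachable subsets
```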

Proceedings ArticleDOI
01 Feb 1989
TL;DR: The finding is that Büchi, Streett, and EL automata span a spectrum of succinctness; it is shown that the decision problem for ETL_EL, where temporal connectives are represented by EL automata, is EXPSPACE-complete, and the decision problem for ETL_S, where temporal connectives are represented by Streett automata, is PSPACE-complete.
Abstract: We study here the use of different representations for infinitary regular languages in extended temporal logic. We focus on three different kinds of acceptance conditions for finite automata on infinite words, due to Büchi, Streett, and Emerson and Lei (EL), and we study their computational properties. Our finding is that Büchi, Streett, and EL automata span a spectrum of succinctness. EL automata are exponentially more succinct than Büchi automata, and complementation of EL automata is doubly exponential. Streett automata are of intermediate complexity: the translation from Streett automata to Büchi automata involves an exponential blow-up, and so does the translation from EL automata to Streett automata. Furthermore, even though Streett automata are exponentially more succinct than Büchi automata, complementation of Streett automata is only exponential. As a result, we show that the decision problem for ETL_EL, where temporal connectives are represented by EL automata, is EXPSPACE-complete, and the decision problem for ETL_S, where temporal connectives are represented by Streett automata, is PSPACE-complete.

71 citations
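For concreteness, the three acceptance conditions being compared differ only in how they judge the set of states a run visits infinitely often. The sketch below, under assumed names and a deterministic transition function, computes that set for an ultimately periodic word u·v^ω and applies the Büchi and Streett conditions; an Emerson-Lei condition would instead evaluate an arbitrary Boolean formula over the same set.

```python
def infinity_set(delta, start, u, v):
    # Run u, then pump v (assumed nonempty) until the state at a v-boundary
    # repeats; the states recorded since its first occurrence are exactly
    # those visited infinitely often on u followed by v forever.
    q = start
    for sym in u:
        q = delta[(q, sym)]
    first_seen, trace = {}, []
    while q not in first_seen:
        first_seen[q] = len(trace)
        for sym in v:
            q = delta[(q, sym)]
            trace.append(q)
    return set(trace[first_seen[q]:])

def buchi_accepts(inf_states, accepting):
    # Büchi: some accepting state recurs forever.
    return bool(inf_states & accepting)

def streett_accepts(inf_states, pairs):
    # Streett: for every pair (E, F), infinitely many visits to E
    # must be answered by infinitely many visits to F.
    return all(not (inf_states & E) or bool(inf_states & F) for E, F in pairs)
```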


Network Information
Related Topics (5)

Topic                     Papers    Citations    Related
Time complexity           36K       879.5K       88%
Data structure            28.1K     608.6K       83%
Model checking            16.9K     451.6K       83%
Approximation algorithm   23.9K     654.3K       82%
Petri net                 25K       406.9K       82%
Performance Metrics
No. of papers in the topic in previous years
Year    Papers
2023    8
2022    19
2020    1
2019    1
2018    5
2017    48