
Showing papers on "Pushdown automaton published in 1993"


Journal ArticleDOI
TL;DR: These algorithms are the first that are provably effective for these problems in the absence of a "reset"; the paper also presents probabilistic algorithms for permutation automata that do not require a teacher to supply counterexamples.
Abstract: We present new algorithms for inferring an unknown finite-state automaton from its input/output behavior, even in the absence of a means of resetting the machine to a start state. A key technique used is inference of a homing sequence for the unknown automaton. Our inference procedures experiment with the unknown machine, and from time to time require a teacher to supply counterexamples to incorrect conjectures about the structure of the unknown automaton. In this setting, we describe a learning algorithm that, with probability 1 − δ, outputs a correct description of the unknown machine in time polynomial in the automaton's size, the length of the longest counterexample, and log(1/δ). We present an analogous algorithm that makes use of a diversity-based representation of the finite-state system. Our algorithms are the first that are provably effective for these problems in the absence of a "reset." We also present probabilistic algorithms for permutation automata which do not require a teacher to supply counterexamples. For inferring a permutation automaton of diversity D, we improve the best previous time bound by roughly a factor of D³/log D.
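The homing-sequence technique central to this abstract can be illustrated with a small sketch (the toy Mealy machine and all names below are my own, not from the paper): a homing sequence is an input word whose observed output sequence uniquely determines the state the machine ends in, which lets a learner re-orient itself without a reset.

```python
def run(delta, out, state, word):
    """Run a Mealy machine from `state` on `word`; return (outputs, end state)."""
    outputs = []
    for sym in word:
        outputs.append(out[(state, sym)])
        state = delta[(state, sym)]
    return tuple(outputs), state

def is_homing_sequence(delta, out, states, word):
    """`word` is homing iff equal output sequences imply equal final states."""
    end_by_output = {}
    for s in states:
        outputs, end = run(delta, out, s, word)
        if end_by_output.setdefault(outputs, end) != end:
            return False
    return True

# Tiny 2-state machine: 'a' toggles the state and outputs which state it left.
states = {0, 1}
delta = {(0, 'a'): 1, (1, 'a'): 0}
out = {(0, 'a'): 'x', (1, 'a'): 'y'}
print(is_homing_sequence(delta, out, states, "a"))  # True: output reveals state
```

If both states emitted the same output on 'a', the single-letter word would no longer be homing; the paper's contribution is inferring such sequences for an unknown machine while learning it.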

395 citations


Posted Content
TL;DR: This paper discusses in detail the neural network pushdown automaton (NNPDA): its construction, how it can be trained, and how useful symbolic information can be extracted from the trained network.
Abstract: In order for neural networks to learn complex languages or grammars, they must have sufficient computational power or resources to recognize or generate such languages. Though many approaches have been discussed, one obvious approach to enhancing the processing power of a recurrent neural network is to couple it with an external stack memory, in effect creating a neural network pushdown automaton (NNPDA). This paper discusses in detail this NNPDA: its construction, how it can be trained, and how useful symbolic information can be extracted from the trained network. In order to couple the external stack to the neural network, an optimization method is developed which uses an error function that connects the learning of the state automaton of the neural network to the learning of the operation of the external stack. To minimize the error function using gradient descent learning, an analog stack is designed such that the action and storage of information in the stack are continuous. One interpretation of a continuous stack is the probabilistic storage of and action on data. After training on sample strings of an unknown source grammar, a quantization procedure extracts from the analog stack and neural network a discrete pushdown automaton (PDA). Simulations show that in learning deterministic context-free grammars (the balanced parenthesis language, 1^n0^n, and the deterministic palindrome) the extracted PDA is correct in the sense that it can correctly recognize unseen strings of arbitrary length. In addition, the extracted PDAs can be shown to be identical or equivalent to the PDAs of the source grammars which were used to generate the training strings.
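The continuous stack described in the abstract can be sketched as follows (a minimal illustration of the idea, not the paper's implementation; all names are mine): each push deposits a symbol with a real-valued thickness, and a pop removes a real amount of thickness from the top, possibly eating through several fractional entries.

```python
class ContinuousStack:
    def __init__(self):
        self.items = []  # list of [symbol, thickness] pairs, top at the end

    def push(self, symbol, strength):
        if strength > 0:
            self.items.append([symbol, strength])

    def pop(self, strength):
        """Remove `strength` units of thickness from the top of the stack."""
        while strength > 0 and self.items:
            sym, thick = self.items[-1]
            if thick > strength:
                self.items[-1][1] = thick - strength
                strength = 0
            else:
                self.items.pop()
                strength -= thick

    def top(self, depth=1.0):
        """Return the blend of symbols in the top `depth` units of thickness."""
        blend, remaining = {}, depth
        for sym, thick in reversed(self.items):
            take = min(thick, remaining)
            blend[sym] = blend.get(sym, 0.0) + take
            remaining -= take
            if remaining <= 0:
                break
        return blend

s = ContinuousStack()
s.push('A', 0.5)
s.push('B', 0.75)
s.pop(1.0)            # removes all of B (0.75) and 0.25 of A
print(s.top())        # {'A': 0.25}
```

Because push and pop strengths vary continuously, the stack's contents are differentiable with respect to the controller's outputs, which is what makes gradient descent training of the coupled system possible.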

48 citations


Book ChapterDOI
23 Aug 1993
TL;DR: This work exhibits close relationships between simple linear languages and the deterministic linear languages both according to Nasu and Honda and to Ibarra, Jiang, and Ravikumar.
Abstract: Several notions of deterministic linear languages are considered and compared with respect to their complexities and to the families of formal languages they generate. We exhibit close relationships between simple linear languages and the deterministic linear languages both according to Nasu and Honda and to Ibarra, Jiang, and Ravikumar. Deterministic linear languages turn out to be special cases of languages generated by linear grammars restricted to LL(1) conditions, which have a membership problem solvable in NC¹. In contrast to that, deterministic linear languages defined via automata models turn out to have a DSPACE(log n)-complete membership problem. Moreover, they coincide with languages generated by linear grammars subject to LR(1) conditions.
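A standard automata-model view of deterministic linear languages is the one-turn deterministic pushdown automaton, which pushes for a while and then only pops. A classic example (my own illustration, not from the paper) is the marked palindrome language { w c reverse(w) : w in {a,b}* }:

```python
def accepts_marked_palindrome(word):
    """One-turn DPDA sketch: push until the center marker 'c', then pop and match."""
    stack, phase = [], "push"
    for ch in word:
        if phase == "push":
            if ch in "ab":
                stack.append(ch)
            elif ch == "c":
                phase = "pop"          # the single turn of the stack
            else:
                return False
        else:  # phase == "pop": match input against popped symbols
            if not stack or stack.pop() != ch:
                return False
    return phase == "pop" and not stack

print(accepts_marked_palindrome("abcba"))   # True
print(accepts_marked_palindrome("abcab"))   # False
```

The stack height increases monotonically, turns exactly once at the marker, and then decreases, which is the shape of computation the deterministic linear language families above restrict in various ways.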

34 citations


Journal ArticleDOI
TL;DR: Three classes of nonregular languages are defined, and for each of them it is shown that for any language L in the class, PDL, with L added to the set of regular programs as a new program, is decidable.
Abstract: Extensions of propositional dynamic logic (PDL) with nonregular programs are considered. Three classes of nonregular languages are defined, and for each of them it is shown that for any language L in the class, PDL, with L added to the set of regular programs as a new program, is decidable. The first class consists of the languages accepted by pushdown automata that act only on the basis of their input symbol, except when determining whether they reject or continue. The second class (which contains even noncontext-free languages) consists of the languages accepted by deterministic stack machines, but which have a unique new symbol prefixing each word. The third class represents a certain delicate combination of these, and, in particular, it serves to prove the 1983 conjecture that PDL with the addition of the language {a^i b^i c^i | i ≥ 0} is decidable.

21 citations


Journal ArticleDOI
TL;DR: This paper constructs, in O(n log n) time, for a given ground term equation system E and given ground trees p1, …, pk, a deterministic tree automaton recognizing the congruential tree language [p1]↔*E ∪ … ∪ [pk]↔*E.

18 citations



Book ChapterDOI
05 Jul 1993
TL;DR: It is shown that nondeterministic two-way reversal-bounded multicounter machines are effectively equivalent to finite automata on unary languages, and hence their emptiness, containment, and equivalence problems are decidable also.
Abstract: We look at some decision questions concerning two-way counter machines and obtain the strongest decidable results to date concerning these machines. In particular, we show that the emptiness, containment, and equivalence problems are decidable for two-way counter machines whose counter is reversal-bounded (i.e., the counter alternates between increasing and decreasing modes at most a fixed number of times). We use this result to give a simpler proof of a recent result that the emptiness, containment, and equivalence problems for two-way reversal-bounded pushdown automata accepting bounded languages (i.e., subsets of w1*...wk* for some nonnull words w1,...,wk) are decidable. Other applications concern decision questions about simple programs. Finally, we show that nondeterministic two-way reversal-bounded multicounter machines are effectively equivalent to finite automata on unary languages, and hence their emptiness, containment, and equivalence problems are decidable also.
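The reversal-bounded restriction in the abstract can be illustrated with a hedged sketch (the code and names are mine, not the paper's): a counter is 1-reversal-bounded if it increases for a while and then only decreases, and such a counter already suffices to recognize { a^n b^n : n ≥ 0 }.

```python
def accepts_anbn(word):
    """One-reversal counter machine sketch for { a^n b^n : n >= 0 }."""
    counter, reversals, mode = 0, 0, "inc"
    for ch in word:
        if ch == "a":
            if mode == "dec":
                return False        # accepting would need a second reversal
            counter += 1
        elif ch == "b":
            if mode == "inc":
                mode, reversals = "dec", reversals + 1
            if counter == 0:
                return False
            counter -= 1
        else:
            return False
    return counter == 0 and reversals <= 1

print(accepts_anbn("aaabbb"))  # True
print(accepts_anbn("aabbb"))   # False
```

Bounding the number of such increase/decrease alternations is what makes the emptiness, containment, and equivalence questions tractable in the paper's setting, in contrast to unrestricted two-way counter machines.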

17 citations


Journal ArticleDOI
TL;DR: It is shown that two-way reversal-bounded pushdown automata over bounded languages (i.e., subsets of a1*…ak* for some distinct symbols a1,…,ak) are equivalent to two-way reversal-bounded counter machines.
Abstract: It is known that two-way pushdown automata are more powerful than two-way counter machines. The result is also true for the case when the pushdown store and counter are reversal-bounded. In contrast, we show that two-way reversal-bounded pushdown automata over bounded languages (i.e., subsets of a1*…ak* for some distinct symbols a1,…,ak) are equivalent to two-way reversal-bounded counter machines. We also show that, unlike the unbounded input case, two-way reversal-bounded pushdown automata over bounded languages have decidable emptiness, equivalence and containment problems.

14 citations


Journal ArticleDOI
TL;DR: Decidability of the equivalence problem is proved for deterministic pushdown automata, and a comparison algorithm for two automata is described; the main theorem leads to the solution of a number of open problems in the theory of program schemes and in formal language theory.
Abstract: Decidability of the equivalence problem is proved for deterministic pushdown automata. A comparison algorithm for two automata is described. The main theorem leads to the solution of a number of open problems in the theory of program schemes and in formal language theory.

11 citations


Journal ArticleDOI
TL;DR: Two transformations are presented which, for any pushdown automaton (PDA) M with n states and p stack symbols, reduce the number of stack symbols to any desired number p′ greater than one.
Abstract: Two transformations are presented which, for any pushdown automaton (PDA) M with n states and p stack symbols, reduce the number of stack symbols to any desired number p′ greater than one. The first transformation preserves deterministic behavior and produces an equivalent PDA with O(np/p′) states. The second construction, using a technique which introduces nondeterminism, produces an equivalent PDA with O(n√(p/p′)) states. Both transformations are essentially optimal, the former among determinism-preserving transformations, the latter among all transformations.
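The classical idea behind such stack-alphabet reductions can be sketched as follows (this is the textbook binary-encoding trick, not the paper's more refined constructions, and all names are illustrative): encode each of the p stack symbols as a fixed-length word over a 2-symbol stack alphabet, so one original push or pop becomes ceil(log2 p) pushes or pops tracked by the finite-state control.

```python
from math import ceil, log2

def make_codec(stack_symbols):
    """Fixed-length binary codes for a stack alphabet."""
    width = max(1, ceil(log2(len(stack_symbols))))
    encode = {s: format(i, f"0{width}b") for i, s in enumerate(stack_symbols)}
    decode = {code: s for s, code in encode.items()}
    return encode, decode, width

encode, decode, width = make_codec(["Z0", "X", "Y"])
stack = []                        # simulated 2-symbol stack ('0'/'1' entries)
stack.extend(encode["X"])         # one original push = `width` binary pushes
popped = "".join(stack[-width:])  # one original pop = `width` binary pops
del stack[-width:]
print(decode[popped])             # X
```

The cost of the encoding shows up as extra states remembering partially read codes; the paper's contribution is pinning down exactly how small that state blowup can be made, with and without introducing nondeterminism.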

10 citations


Journal ArticleDOI
TL;DR: It is shown that synchronization dramatically enhances the power of pushdown automata: even under the severe restriction of the pushdown store to a counter making only one reversal, synchronized pushdown automata still recognize all recursively enumerable languages.

22 Apr 1993
TL;DR: The author analyzes the difficulties of the original model and makes several modifications that enhance the learning power of the NNPDA, including the introduction of a linear 'full order' recurrent neural network as the stack controller.
Abstract: Previously (C. Sreerupa Das et al., 1992, 1993; C.L. Giles et al., 1990; G.Z. Sun et al., 1990, 1991), a model of neural network pushdown automata (NNPDA) was developed. The NNPDA is a hybrid system which couples a neural network finite-state controller with an external continuous stack memory. It learns context-free grammars from examples by minimizing a properly defined objective function. In the original version of the NNPDA, the neural network controller is built with high-order connected recurrent neural nets. However, due to the complexity of the learning surface, this model could not learn several non-trivial grammars, among which the palindrome grammar is considered a very difficult one. The author analyzes the difficulties of the original model and makes several modifications that enhance the learning power of the NNPDA. The major enhancement is the introduction of a linear 'full order' recurrent neural network as the stack controller.

C.L. Giles
22 Apr 1993
TL;DR: The author presents a method for networks with second-order weights where inserting prior knowledge into a network becomes a straight-forward mapping (or programming) of grammatical rules into weights.
Abstract: Recurrent neural networks can be trained from string examples to behave like deterministic finite-state automata (DFA's) and pushdown automata (PDA's), i.e., they recognize deterministic regular and context-free grammars (DCFG's), respectively. The author discusses some of the successes and failures of this type of 'recurrent neural network' grammatical inference engine, as well as some of the issues of effectively using a priori symbolic knowledge in training dynamic networks. The author presents a method for networks with second-order weights where inserting prior knowledge into a network becomes a straightforward mapping (or programming) of grammatical rules into weights. A more sophisticated hybrid machine was also developed, denoted a neural network pushdown automaton (NNPDA): a recurrent net connected to a stack memory. This NNPDA learns to operate an external stack and recognize simple DCFG's from string examples. When hints about the grammars are given during training, the NNPDA is capable of learning more sophisticated DCFG's.

Book ChapterDOI
25 Feb 1993
TL;DR: It is shown that two-way reversal-bounded pushdown automata over bounded languages (i.e., subsets of w1*...wk* for some nonnull words w1,...,wk) are equivalent to two-way reversal-bounded counter machines.
Abstract: It is known that two-way pushdown automata are more powerful than two-way counter machines. The result is also true for the case when the pushdown store and counter are reversal-bounded. In contrast, we show that two-way reversal-bounded pushdown automata over bounded languages (i.e., subsets of w1*...wk* for some nonnull words w1,...,wk) are equivalent to two-way reversal-bounded counter machines. We also show that, unlike the unbounded input case, two-way reversal-bounded pushdown automata over bounded languages have decidable emptiness, equivalence and containment problems.

Book ChapterDOI
22 Feb 1993
TL;DR: It is shown that the minimum identification problem is polynomially transformable into a problem of determining a simplest congruence of the so-called basic hypothesis, and it is proved that the method produces a weakly exclusive set of simplest hypotheses.
Abstract: The paper proposes a problem transformation method for solving the minimum automaton identification problem. An algebraic characterization of a set of all simplest hypotheses explaining a given set of input-experiments is performed. It is shown that the minimum identification problem is polynomially transformable into a problem of determining a simplest congruence of the so-called basic hypothesis. It is proved that the method produces a weakly exclusive set of simplest hypotheses.

Journal ArticleDOI
TL;DR: It is shown that each 2-bounded language recognized by a nonsensing multihead one-way deterministic pushdown automaton (1-DPDA) can be recognized by a sensing 3-head one-way deterministic finite automaton.

05 Dec 1993
TL;DR: In this paper, a new discrete recurrent model with discrete external stacks for learning context-free grammars (or pushdown automata) is described.
Abstract: In this paper, we describe a new discrete recurrent model with discrete external stacks for learning context-free grammars (or pushdown automata).