Topic
Pushdown automaton
About: Pushdown automaton is a research topic. Over its lifetime, 1868 publications have been published within this topic, receiving 35399 citations.
Papers published on a yearly basis
Papers
TL;DR: It is proved that if a fuzzy deep pushdown automaton Mfd is constructed from a fuzzy state grammar Gfs, then L(Mfd) = L(Gfs); that is, for any string α ∈ Σ*, μ(α; α ∈ L(Gfs)) = μ(α; α ∈ L(Mfd)), where μ denotes the membership degree of a string.
Abstract: Motivated by the concepts of fuzzy finite automata and fuzzy pushdown automata, we investigate the novel concepts of fuzzy state grammars and fuzzy deep pushdown automata. These concepts represent a natural extension of the contemporary state grammar and deep pushdown automaton, making them more robust in terms of imprecision, errors, and uncertainty. It has been proved that we can construct fuzzy deep pushdown automata from fuzzy state grammars and vice versa. Furthermore, it has been proved that if a fuzzy deep pushdown automaton Mfd is constructed from a fuzzy state grammar Gfs, then L(Mfd) = L(Gfs). In other words, for any string α ∈ Σ*, μ(α; α ∈ L(Gfs)) = μ(α; α ∈ L(Mfd)), where μ denotes the membership of a string.
13 citations
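The membership semantics described in the abstract, where μ assigns each string a degree in [0, 1], can be sketched with a toy min-based fuzzy pushdown recognizer. This is an illustrative construction for the language a^n b^n, not the paper's Mfd; the transition weights and function name are assumptions.

```python
# A minimal sketch (not the paper's construction): a fuzzy pushdown
# recognizer for a^n b^n, where each transition carries a weight in
# [0, 1]. The membership degree mu(w) of a string is the minimum
# transition weight along its (unique) accepting run, or 0.0 if no
# run accepts. All weights here are illustrative.

def fuzzy_membership(w, push_weight=0.9, pop_weight=0.8):
    """Return mu(w) for w over {a, b} under the a^n b^n fuzzy PDA."""
    stack = []
    mu = 1.0                     # min-conjunction starts at 1
    seen_b = False
    for ch in w:
        if ch == "a" and not seen_b:
            stack.append("A")    # push with weight push_weight
            mu = min(mu, push_weight)
        elif ch == "b" and stack:
            seen_b = True
            stack.pop()          # pop with weight pop_weight
            mu = min(mu, pop_weight)
        else:
            return 0.0           # no applicable transition: reject
    return mu if not stack and w else 0.0

print(fuzzy_membership("aabb"))  # 0.8
print(fuzzy_membership("aab"))   # 0.0
```

With min as the conjunction, the degree of a string is bounded by the weakest rule its run uses, mirroring how fuzzy grammars typically weight derivations.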
06 Jul 2010
TL;DR: This work provides a formalisation of the theory of pushdown automata (PDAs) using the HOL4 theorem prover, illustrating how provers such as HOL can be used for mechanising complicated proofs, but also how intensive such a process can turn out to be.
Abstract: We provide a formalisation of the theory of pushdown automata (PDAs) using the HOL4 theorem prover. It illustrates how provers such as HOL can be used for mechanising complicated proofs, but also how intensive such a process can turn out to be. The proofs blow up in size in a way that is difficult to predict from examining the original textbook presentations. Even a meticulous text proof has "intuitive" leaps that need to be identified and formalised.
13 citations
01 Jan 1988
TL;DR: A new computational paradigm for evaluating recursive Datalog queries, based on a pushdown automaton (PDA) model, yields a general and simple technique for constructing efficient polynomial query evaluators.
Abstract: We propose a new computational paradigm for the evaluation of recursive Datalog queries, which is based on a pushdown automaton (PDA) model. By extending to these automata a dynamic programming technique developed for PDAs in context-free parsing, we obtain a general and simple technique for constructing efficient polynomial query evaluators. Keywords: Datalog, Recursive Queries, Complete Strategies, Dynamic Programming, Polynomial Complexity.
13 citations
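The flavor of polynomial, bottom-up fixpoint evaluation the abstract alludes to can be sketched with a semi-naive evaluation of the classic transitive-closure Datalog program. This is a standard textbook technique, not the paper's PDA-based dynamic-programming construction; the function name is illustrative.

```python
# A minimal sketch of polynomial Datalog evaluation by fixpoint
# iteration (semi-naive style), not the paper's PDA construction.
# Query: path(X, Y) :- edge(X, Y).
#        path(X, Y) :- edge(X, Z), path(Z, Y).

def eval_path(edges):
    """Compute the transitive closure of the edge relation."""
    path = set(edges)
    delta = set(edges)            # facts derived in the last round
    while delta:
        new = {(x, y)
               for (x, z) in edges
               for (z2, y) in delta
               if z == z2} - path
        path |= new
        delta = new               # only new facts drive the next round
    return path

edges = {(1, 2), (2, 3), (3, 4)}
print(sorted(eval_path(edges)))
# [(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4)]
```

Because each round joins the rules only against facts derived in the previous round, every derivable fact is produced once, which is what keeps the evaluation polynomial in the size of the database.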
08 Jun 2006
TL;DR: The nonforgetting restarting automaton is a restarting automaton that is not forced to reset its internal state to the initial state when executing a restart operation.
Abstract: The nonforgetting restarting automaton is a restarting automaton that is not forced to reset its internal state to the initial state when executing a restart operation. We analyse the expressive power of the various deterministic and/or monotone variants of this model.
13 citations
TL;DR: A "neural state" pushdown automaton (NSPDA) is introduced, which consists of a discrete stack, rather than a continuous one, coupled to a neural network state machine; its effectiveness in recognizing various context-free grammars (CFGs) is shown empirically.
Abstract: In order to learn complex grammars, recurrent neural networks (RNNs) require sufficient computational resources to ensure correct grammar recognition. A widely-used approach to expand model capacity would be to couple an RNN to an external memory stack. Here, we introduce a "neural state" pushdown automaton (NSPDA), which consists of a digital stack, instead of an analog one, that is coupled to a neural network state machine. We empirically show its effectiveness in recognizing various context-free grammars (CFGs). First, we develop the underlying mechanics of the proposed higher order recurrent network and its manipulation of a stack as well as how to stably program its underlying pushdown automaton (PDA) to achieve desired finite-state network dynamics. Next, we introduce a noise regularization scheme for higher-order (tensor) networks, to our knowledge the first of its kind, and design an algorithm for improved incremental learning. Finally, we design a method for inserting grammar rules into a NSPDA and empirically show that this prior knowledge improves its training convergence time by an order of magnitude and, in some cases, leads to better generalization. The NSPDA is also compared to a classical analog stack neural network pushdown automaton (NNPDA) as well as a wide array of first and second-order RNNs with and without external memory, trained using different learning algorithms. Our results show that, for Dyck(2) languages, prior rule-based knowledge is critical for optimization convergence and for ensuring generalization to longer sequences at test time. We observe that many RNNs with and without memory, but no prior knowledge, fail to converge and generalize poorly on CFGs.
13 citations
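For context on the benchmark, Dyck(2), the language of balanced strings over two bracket pairs, is recognized by a classical deterministic pushdown automaton with a single stack. The sketch below shows that baseline construction, not the neural NSPDA itself.

```python
# Classical deterministic PDA for Dyck(2): push opening brackets,
# pop on closing brackets, and accept iff the stack ends empty and
# every pop matched the bracket type on top of the stack.

def is_dyck2(w):
    pairs = {")": "(", "]": "["}
    stack = []
    for ch in w:
        if ch in "([":
            stack.append(ch)
        elif ch in pairs:
            if not stack or stack.pop() != pairs[ch]:
                return False     # mismatched or unmatched close
        else:
            return False         # symbol outside the alphabet
    return not stack             # accept iff stack is empty

print(is_dyck2("([])()"))  # True
print(is_dyck2("([)]"))    # False
```

Because the stack depth is unbounded in the input length, Dyck(2) is a natural stress test of whether an RNN has genuinely learned stack-like dynamics rather than memorized bounded nesting.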