
Showing papers on "Context-sensitive grammar published in 2023"

Posted Content · DOI
27 Apr 2023
TL;DR: In this paper, deciding whether an algebraic power series is identically zero is shown to lie in the counting hierarchy, by reduction to the problem of computing the degree of a polynomial given by an arithmetic circuit; this improves the previously known PSPACE bound, which was obtained via the existential fragment of the theory of real closed fields.
Abstract: In this paper we obtain complexity bounds for computational problems on algebraic power series over several commuting variables. The power series are specified by systems of polynomial equations: a formalism closely related to weighted context-free grammars. We focus on three problems -- decide whether a given algebraic series is identically zero, determine whether all but finitely many coefficients are zero, and compute the coefficient of a specific monomial. We relate these questions to well-known computational problems on arithmetic circuits and thereby show that all three problems lie in the counting hierarchy. Our main result improves the best known complexity bound on deciding zeroness of an algebraic series. This problem is known to lie in PSPACE by reduction to the decision problem for the existential fragment of the theory of real closed fields. Here we show that the problem lies in the counting hierarchy by reduction to the problem of computing the degree of a polynomial given by an arithmetic circuit. As a corollary we obtain new complexity bounds on multiplicity equivalence of context-free grammars restricted to a bounded language, language inclusion of a nondeterministic finite automaton in an unambiguous context-free grammar, and language inclusion of a non-deterministic context-free grammar in an unambiguous finite automaton.
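
The coefficient-computation problem discussed in the abstract can be made concrete by a minimal sketch (not the paper's counting-hierarchy algorithm): the coefficients of an algebraic series given by a polynomial system can be obtained by truncated fixed-point (Kleene) iteration. The system y = 1 + x*y^2 below, corresponding to a weighted context-free grammar counting binary trees, is an illustrative example chosen here, not one taken from the paper.

```python
def poly_mul(a, b, d):
    """Product of two truncated power series (coefficient lists), degree <= d."""
    res = [0] * (d + 1)
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                if i + j > d:
                    break
                res[i + j] += ai * bj
    return res

def catalan_series(d):
    """Kleene iteration for the system y = 1 + x*y^2; after d + 1 rounds
    the coefficients up to degree d are exact (they are the Catalan numbers)."""
    y = [0] * (d + 1)
    for _ in range(d + 1):
        y = [1] + poly_mul(y, y, d)[:d]   # 1 + x*y^2, truncated to degree d
    return y
```

Checking that such a coefficient prefix is all zero is the brute-force counterpart of the zeroness problem whose complexity the paper bounds.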

Journal Article · DOI
TL;DR: In this article, the power of contextual grammars with selection languages from subfamilies of the family of regular languages is studied. Two independent hierarchies have previously been obtained for external and internal contextual grammars: one based on structural properties (finite, monoidal, nilpotent, combinational, definite, ordered, non-counting, power-separating, suffix-closed, commutative, circular, or union-free languages), the other based on resources (the number of nonterminal symbols, production rules, or states needed for generating or accepting them).
Abstract: In this paper, we continue the research on the power of contextual grammars with selection languages from subfamilies of the family of regular languages. In the past, two independent hierarchies have been obtained for external and internal contextual grammars, one based on selection languages defined by structural properties (finite, monoidal, nilpotent, combinational, definite, ordered, non-counting, power-separating, suffix-closed, commutative, circular, or union-free languages), the other one based on selection languages defined by resources (number of non-terminal symbols, production rules, or states needed for generating or accepting them). In the present paper, we compare the language families of these hierarchies for external contextual grammars and merge the hierarchies.
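
To make the notion of an external contextual grammar with regular selection concrete, here is a toy sketch: a context (u, v) wraps the whole current word whenever that word belongs to the context's selection language (given here as a regular expression). The particular axiom, context, and selector are illustrative assumptions, not taken from the paper.

```python
import re

def external_contextual(axioms, rules, max_len):
    """Generate all words of length <= max_len of an external contextual
    grammar. rules: list of (selector_regex, left_context, right_context)."""
    lang = set(axioms)
    frontier = set(axioms)
    while frontier:
        new = set()
        for word in frontier:
            for selector, u, v in rules:
                if re.fullmatch(selector, word):   # selection-language test
                    grown = u + word + v           # wrap the whole word
                    if len(grown) <= max_len and grown not in lang:
                        new.add(grown)
        lang |= new
        frontier = new
    return lang
```

With axiom "a" and the single context ("a*b*", "a", "b"), the grammar derives a, aab, aaabb, ... (the words a^(n+1) b^n).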

Journal Article · DOI
TL;DR: In this paper, derivation trees of plus weighted context-free grammars are constructed, and a bonding between plus weighted context-free grammars and plus weighted context-free dendrosystems is established.
Abstract: The core of this paper is to construct derivation trees of plus weighted context-free grammars, and the bonding between plus weighted context-free grammars and plus weighted context-free dendrosystems is established.

Journal Article · DOI
TL;DR: In this paper, the Reactive Turing Machine, which adds labelled transitions for interaction to the classical Turing machine (a model of an old-fashioned computer that does no interaction, only batch processing), is used to revisit the correspondence between pushdown automata and context-free grammars up to bisimilarity.
Abstract: The Turing machine models an old-fashioned computer, that does not interact with the user or with other computers, and only does batch processing. Therefore, we came up with a Reactive Turing Machine that does not have these shortcomings. In the Reactive Turing Machine, transitions have labels to give a notion of interactivity. In the resulting process graph, we use bisimilarity instead of language equivalence. Subsequently, we considered other classical theorems and notions from automata theory and formal languages theory. In this paper, we consider the classical theorem of the correspondence between pushdown automata and context-free grammars. By changing the process operator of sequential composition to a sequencing operator with intermediate acceptance, we get a better correspondence in our setting. We find that the missing ingredient to recover the full correspondence is the addition of a notion of state awareness.
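
The classical correspondence mentioned above can be illustrated by the standard construction of a pushdown automaton simulating leftmost derivations of a context-free grammar: a nonterminal on top of the stack is replaced by a rule's right-hand side, and a terminal on top is matched against the input and popped. The sketch below explores configurations breadth-first with a crude length-based pruning adequate for the toy grammar used; it is not the paper's process-theoretic construction.

```python
from collections import deque

def cfg_pda_accepts(grammar, start, w, fuel=100000):
    """Simulate the standard CFG-to-PDA construction by empty stack.
    grammar: dict nonterminal -> list of RHS strings; lowercase = terminal."""
    seen = set()
    queue = deque([(0, start)])           # (input position, stack; top at index 0)
    while queue and fuel:
        fuel -= 1
        pos, stack = queue.popleft()
        if (pos, stack) in seen:
            continue
        seen.add((pos, stack))
        if not stack:
            if pos == len(w):
                return True               # input consumed, stack empty: accept
            continue
        top, rest = stack[0], stack[1:]
        if top.islower():                 # terminal: match next input symbol, pop
            if pos < len(w) and w[pos] == top:
                queue.append((pos + 1, rest))
        else:                             # nonterminal: expand by each rule
            for rhs in grammar[top]:
                # crude pruning, adequate for this toy grammar: the stack
                # cannot usefully grow far beyond the remaining input
                if len(rhs) + len(rest) <= len(w) - pos + 2:
                    queue.append((pos, rhs + rest))
    return False
```

For the grammar S -> aSb | ε this accepts exactly the words a^n b^n.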

Journal Article · DOI
TL;DR: In this paper, the authors introduce weighted multiple context-free grammar (WMCFG) as a quantitative extension of MCFG and investigate properties of WMCFG such as the polynomial-time computability of basic problems, its closure properties, and its expressive power.
Abstract: Multiple context-free grammar (MCFG) is an extension of context-free grammar (CFG), which generates tuples of words. The expressive power of MCFG is between CFG and context-sensitive grammar while MCFG inherits good properties of CFG. In this paper, we introduce weighted multiple context-free grammar (WMCFG) as a quantitative extension of MCFG. Then we investigate properties of WMCFG such as polynomial-time computability of basic problems, its closure property and expressive power.
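
A small illustration of the tuple-generating power of MCFG (an illustrative example, not from the paper): a 2-MCFG with a nonterminal N deriving the pairs (a^n b^n, c^n d^n) and a start rule S(xy) <- N(x, y) yields the language {a^n b^n c^n d^n}, which is not context-free.

```python
def mcfg_language(max_n):
    """Words of the 2-MCFG: N(eps, eps); N(a x b, c y d) <- N(x, y);
    S(x y) <- N(x, y). N derives the pairs (a^n b^n, c^n d^n)."""
    pairs = [("", "")]
    for _ in range(max_n):
        x, y = pairs[-1]
        pairs.append(("a" + x + "b", "c" + y + "d"))
    return {x + y for x, y in pairs}   # the start rule concatenates both components
```

The key point is that N's two components grow in lockstep, a synchronization that a single context-free nonterminal cannot express.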

Posted Content · DOI
19 Jan 2023
TL;DR: In this article, the authors explore the influence of the formal differences between two Lindenmayer grammars (the Fibonacci grammar and the Skip grammar) on the extraction of hierarchical structure by the participants.
Abstract: In this paper, we explore the extraction of recursive nested structure in the processing of self-similar binary sequences generated by two Lindenmayer grammars: the Fibonacci grammar and the Skip grammar. In each of these grammars only sequential order information marks the hierarchical structure. Although closely related, these grammars differ from a formal point of view: the Fibonacci grammar is perfectly scale-free and presents an isomorphism between its surface and derivational properties while the Skip grammar, although also self-similar, does not present this isomorphism. Our goal was to explore the influence of these formal differences on the extraction of hierarchical structure by the participants. To this end, we implemented these grammars in a serial reaction time task. The results show that in both the Fibonacci grammar and the Skip grammar, participants elaborated a hierarchical structure from the signal. This suggests the involvement of at least partially similar mechanisms during processing. However, some processing differences remained that cannot be explained by the hypotheses proposed so far regarding the processing of strings generated by L-systems. We hypothesize that these effects would be due to the self-similarity of the signal which would act as a reinforcement of the structure elaborated by the participants.
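
A common presentation of the Fibonacci grammar is the D0L-system with rules 0 -> 01 and 1 -> 0, whose iterates are prefixes of the Fibonacci word and whose lengths follow the Fibonacci numbers. The exact encoding used in the study is an assumption here; the sketch below just shows parallel rewriting.

```python
def lsystem(axiom, rules, steps):
    """Iterate a D0L-system: rewrite every symbol in parallel at each step."""
    word = axiom
    for _ in range(steps):
        word = "".join(rules.get(symbol, symbol) for symbol in word)
    return word
```

For example, five steps from axiom "0" with {"0": "01", "1": "0"} give "0100101001001", of length 13 (a Fibonacci number).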

Journal Article · DOI
TL;DR: For grammars with context operators, an even-odd normal form is defined in this paper, in which every nonterminal symbol defines only strings of odd length in left contexts of even length.
Abstract: In 1973, Greibach (“The hardest context-free language”, SIAM J. Comp., 1973) constructed a context-free language L0 with the property that every context-free language can be reduced to L0 by a homomorphism, thus representing it as an inverse homomorphic image h−1(L0). In this paper, a similar characterization is established for a family of grammars equipped with operators for referring to the left context of any substring, recently defined by Barash and Okhotin (“An extension of context-free grammars with one-sided context specifications”, Inform. Comput., 2014). An essential step of the argument is a new normal form for grammars with context operators, in which every nonterminal symbol defines only strings of odd length in left contexts of even length: the even-odd normal form. The characterization is completed by showing that the language family defined by grammars with context operators is closed under inverse homomorphisms; actually, it is closed under injective nondeterministic finite transductions.
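
The inverse-homomorphism characterization rests on a simple fact: a word w belongs to h^-1(L0) exactly when its image h(w) belongs to L0, so membership in the inverse image reduces to a single membership test in L0. A minimal sketch, with an illustrative homomorphism and a stand-in decision procedure for L0 (neither taken from the paper):

```python
def h_image(word, h):
    """Apply a homomorphism h: Sigma -> Delta*, symbol by symbol."""
    return "".join(h[symbol] for symbol in word)

def in_inverse_image(word, h, in_L0):
    """w is in h^-1(L0) iff h(w) is in L0."""
    return in_L0(h_image(word, h))
```

For example, with h = {"p": "ab", "q": "a", "r": "b"} and L0 the words with equally many a's and b's, the word "pqr" maps to "abab" and is accepted.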

Journal Article · DOI
TL;DR: The authors showed that it is decidable whether the language generated by a given context-free grammar over a tree alphabet is a tree language, and if the answer to this question is "yes" then they can construct a regular tree grammar which generates that tree language.
Abstract: We show that it is decidable whether the language generated by a given context-free grammar over a tree alphabet is a tree language. Furthermore, if the answer to this question is “yes”, then we can even effectively construct a regular tree grammar which generates that tree language.
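
One standard way to make "the word encodes a tree" concrete is to encode trees over a ranked alphabet as terms in Polish (prefix) notation; a word is then a well-formed term iff a simple rank-counting scan succeeds. This encoding is an assumption for illustration and not necessarily the one used in the paper.

```python
def is_term(word, rank):
    """Check whether word is a well-formed term in Polish (prefix) notation
    over a ranked alphabet, tracking the number of open argument slots."""
    need = 1                        # slots still to fill; start with the root
    for symbol in word:
        if need == 0:
            return False            # symbols left over after the term closed
        need += rank[symbol] - 1    # fill one slot, open rank(symbol) new ones
    return need == 0
```

With rank = {"f": 2, "a": 0}, the word "faa" encodes f(a, a), while "f" and "faaa" are not terms.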

Journal Article · DOI
TL;DR: In this article, the problem of identifying a probabilistic context-free grammar is considered; it has two aspects: the first is determining the grammar's topology (the rules of the grammar) and the second is estimating probabilistic weights for each rule.
Abstract: The problem of identifying a probabilistic context-free grammar has two aspects: the first is determining the grammar's topology (the rules of the grammar) and the second is estimating probabilistic weights for each rule. Given the hardness results for learning context-free grammars in general, and probabilistic grammars in particular, most of the literature has concentrated on the second problem. In this work we address the first problem. We restrict attention to structurally unambiguous weighted context-free grammars (SUWCFG) and provide a query learning algorithm for structurally unambiguous probabilistic context-free grammars (SUPCFG). We show that SUWCFG can be represented using co-linear multiplicity tree automata (CMTA), and provide a polynomial learning algorithm that learns CMTAs. We show that the learned CMTA can be converted into a probabilistic grammar, thus providing a complete algorithm for learning a structurally unambiguous probabilistic context-free grammar (both the grammar topology and the probabilistic weights) using structured membership queries and structured equivalence queries. A summarized version of this work was published at AAAI 21.
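
The second aspect mentioned above, estimating weights once the topology is fixed, builds on computing string probabilities under a PCFG. For a grammar in Chomsky normal form this is the classical inside algorithm; below is a minimal sketch with an illustrative one-nonterminal grammar (not a grammar from the paper).

```python
from collections import defaultdict

def inside_prob(w, lex, binary, start="S"):
    """Inside algorithm for a PCFG in Chomsky normal form.
    lex: {(A, terminal): prob};  binary: {(A, B, C): prob} for rules A -> B C."""
    n = len(w)
    P = defaultdict(float)          # P[i, j, A] = prob that A derives w[i:j]
    for i, c in enumerate(w):       # base case: lexical rules over single symbols
        for (A, t), p in lex.items():
            if t == c:
                P[i, i + 1, A] += p
    for span in range(2, n + 1):    # combine adjacent spans bottom-up
        for i in range(n - span + 1):
            j = i + span
            for (A, B, C), p in binary.items():
                for k in range(i + 1, j):
                    P[i, j, A] += p * P[i, k, B] * P[k, j, C]
    return P[0, n, start]
```

With lex = {("S", "a"): 0.4} and binary = {("S", "S", "S"): 0.6}, the string "a" has probability 0.4 and "aa" has probability 0.6 * 0.4 * 0.4 = 0.096.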