scispace - formally typeset

Context-sensitive grammar

About: Context-sensitive grammar is a research topic. Over the lifetime, 1,938 publications have been published within this topic, receiving 45,911 citations. The topic is also known as: CSG.
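The textbook illustration of a context-sensitive (monotonic) grammar is one for {aⁿbⁿcⁿ : n ≥ 1}, a language no context-free grammar can generate. A minimal sketch of our own, enumerating the language by brute-force string rewriting up to a length bound (uppercase symbols are nonterminals, lowercase are terminals):

```python
from collections import deque

# Monotonic (context-sensitive style) grammar for { a^n b^n c^n : n >= 1 }.
RULES = [("S", "aSBC"), ("S", "aBC"), ("CB", "BC"),
         ("aB", "ab"), ("bB", "bb"), ("bC", "bc"), ("cC", "cc")]

def language(limit=9):
    """All terminal strings of length <= limit derivable from S."""
    seen, sentences, queue = {"S"}, set(), deque(["S"])
    while queue:
        w = queue.popleft()
        if w.islower():                     # no nonterminals left: a sentence
            sentences.add(w)
            continue
        for lhs, rhs in RULES:              # apply every rule at every position
            i = w.find(lhs)
            while i != -1:
                nxt = w[:i] + rhs + w[i + len(lhs):]
                if len(nxt) <= limit and nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
                i = w.find(lhs, i + 1)
    return sentences

# language() -> {"abc", "aabbcc", "aaabbbccc"}
```

Because the rules never shrink the sentential form, pruning at the length bound is safe and the search terminates.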


Papers
17 Aug 2007
TL;DR: A new hybrid ambiguity detection method (ADM) is developed that uses Schmitz' method to filter out the parts of a grammar that are guaranteed to be unambiguous; on the tested grammars it was the most practically usable ADM.
Abstract: The Meta-Environment enables the creation of grammars using the SDF formalism. From these grammars an SGLR parser can be generated. One of the advantages of these parsers is that they can handle the entire class of context-free grammars (CFGs). The grammar developer does not have to squeeze his grammar into a specific subclass of CFGs that is deterministically parsable. Instead, he can design his grammar to best describe the structure of his language. The downside of allowing the entire class of CFGs is the danger of ambiguities. Depending on the semantics of the language, an ambiguous grammar prevents some sentences from having a unique meaning. It is best to remove all ambiguities from a grammar before it is used. Unfortunately, the detection of ambiguities in a grammar is an undecidable problem: for a recursive grammar the number of possibilities that have to be checked might be infinite. Various ambiguity detection methods (ADMs) exist, but none can always correctly identify the (un)ambiguity of a grammar. They all attack the problem from different angles, which results in different characteristics like termination, accuracy and performance. The goal of this project was to find out which method has the best practical usability. In particular, we investigated their usability in common use cases of the Meta-Environment, which we represent with a collection of about 120 grammars with differing numbers of ambiguities. We distinguish three categories: small (fewer than 17 production rules), medium (below 200 production rules) and large (between 200 and 500 production rules). On these grammars we have benchmarked three implementations of ADMs: AMBER (a derivation generator), MSTA (a parse table generator used as the LR(k) test) and a modified Bison tool which implements the ADM of Schmitz. We have measured their accuracy, performance and termination on the grammar collections.
From the results we analyzed their scalability (the rate at which accuracy can be traded for performance) and their practical usability. The conclusion of this project is that AMBER was the most practically usable on our grammars. If it terminates, which it did on most of our grammars, then all its other characteristics are very helpful. The LR(1) precision of Schmitz was also quite usable on the medium grammars, but needed too much memory on the large ones. Its downside is that its reports are hard to comprehend and verify. The insights gained during this project have led to the development of a new hybrid ADM. It uses Schmitz' method to filter out the parts of a grammar that are guaranteed to be unambiguous. The remainder of the grammar is then tested with a derivation generator, which might find ambiguities in less time. We have built a small prototype which was indeed faster than AMBER on the tested grammars, making it the most usable ADM of all.
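The derivation-generator idea behind AMBER can be sketched in a few lines (a toy reimplementation of the principle, not the actual tool): enumerate all derivation trees of a CFG up to a depth bound and flag every sentence that is the yield of two structurally different trees.

```python
from itertools import product

# Toy derivation-generator ambiguity check (in the spirit of AMBER).
GRAMMAR = {"E": [["E", "+", "E"], ["a"]]}   # E -> E + E | a  (ambiguous)

def trees(symbol, depth):
    """Yield (tree, sentence) pairs for derivations of bounded depth."""
    if symbol not in GRAMMAR:               # terminal symbol
        yield symbol, symbol
        return
    if depth == 0:
        return
    for rhs in GRAMMAR[symbol]:
        for kids in product(*(list(trees(s, depth - 1)) for s in rhs)):
            tree = (symbol,) + tuple(t for t, _ in kids)
            yield tree, "".join(w for _, w in kids)

def find_ambiguities(start="E", depth=3):
    """Map each ambiguous sentence to its set of distinct derivation trees."""
    yields = {}
    for tree, sentence in trees(start, depth):
        yields.setdefault(sentence, set()).add(tree)
    return {s: ts for s, ts in yields.items() if len(ts) > 1}

# find_ambiguities() reports "a+a+a", which parses as both (a+a)+a and a+(a+a)
```

Like AMBER, such a search can only ever confirm ambiguity within the bound; it can never prove a recursive grammar unambiguous, which is exactly the undecidability discussed above.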

16 citations

01 Dec 1993
TL;DR: Grammars reflect the structure of normal inhabitants in such a way that, when non-terminals are ignored, a derivation tree of the grammars yielding a λ-term M can be identified with the Böhm tree of M.
Abstract: We present grammatical (or equational) descriptions of the set of normal inhabitants {M | Γ ⊢ M : A, M in β-normal form} of a given type A under a given basis Γ, both for the standard simple type system (in the partial discharge convention) and for the system in the total discharge convention (or the Prawitz-style natural deduction system). It is shown that in the latter system we can describe the set by a (finite) context-free grammar, but for the standard system this is not necessarily the case because we may need an infinite supply of fresh (bound) variables to describe the set. In both cases, however, our grammars reflect the structure of normal inhabitants in such a way that, when non-terminals are ignored, a derivation tree of the grammars yielding a λ-term M can be identified with the Böhm tree of M. We give some applications of the grammatical descriptions. Among others, we give simple algorithms for the emptiness/finiteness problem of the set of normal inhabitants of a given type (both for the standard and nonstandard systems).
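The emptiness problem mentioned at the end (does a type have any normal inhabitant at all?) can be sketched as a loop-checked proof search in the style of the standard Ben-Yelles procedure; the representation and names below are our own, not the paper's:

```python
# Emptiness check for normal inhabitants of a simple type: decide whether
# any term in beta-normal form inhabits the type under a basis.
# Atoms are strings; arrow types are tuples ("->", left, right).

def inhabited(ctx, ty, pending=frozenset()):
    """Is type `ty` inhabited under basis `ctx` (a frozenset of types)?"""
    if (ctx, ty) in pending:        # repeated (basis, goal) state: a shorter
        return False                # inhabitant would exist, so prune
    pending = pending | {(ctx, ty)}
    if isinstance(ty, tuple):       # goal sigma -> rho: build a lambda
        return inhabited(ctx | {ty[1]}, ty[2], pending)
    for hyp in ctx:                 # atomic goal: apply some hypothesis
        args, target = [], hyp      # peel hyp = B1 -> ... -> Bn -> target
        while isinstance(target, tuple):
            args.append(target[1])
            target = target[2]
        if target == ty and all(inhabited(ctx, a, pending) for a in args):
            return True
    return False

arrow = lambda a, b: ("->", a, b)
# arrow("a", "a") is inhabited (by the identity), while the Peirce type
# ((a -> b) -> a) -> a is not: minimal logic proves no classical axioms.
```

Loop checking is sound here because a minimal normal inhabitant never repeats a (basis, goal) state along a branch, and the state space over subformulas is finite, so the search terminates.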

16 citations

Journal Article
TL;DR: The theoretical basis for a concept of ‘computation-friendly’ shape grammars is explored, through a formal examination of tractability of the grammar formalism, and parametric subshape recognition is shown to be NP-hard.
Abstract: In this paper we explore the theoretical basis for a concept of ‘computation-friendly’ shape grammars, through a formal examination of tractability of the grammar formalism. Although a variety of shape grammar definitions have evolved over time, it is possible to unify these to be backwards compatible. Under this unified definition, a shape grammar can be constructed to simulate any Turing machine, from which it follows that a shape grammar may not halt, its language space can be exponentially large, and, in general, its membership problem is unsolvable. Moreover, parametric subshape recognition is shown to be NP-hard. This implies that it is unlikely, in general, to find a polynomial-time algorithm to interpret parametric shape grammars, and that more pragmatic approaches need to be sought. Factors that influence the tractability of shape grammars are identified and discussed.
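For contrast with the hard parametric case, non-parametric subshape recognition under translation alone admits a straightforward polynomial-time check. A toy sketch of our own over point sets (a simplification: real shape grammars operate on maximal line segments, not points):

```python
# Non-parametric subshape recognition under translation, reduced to point
# sets: try every placement of a fixed anchor point of A inside B.
def is_subshape(a, b):
    """Is point set `a`, translated by some vector, a subset of `b`?"""
    a, b = set(a), set(b)
    if not a:
        return True                     # the empty shape embeds anywhere
    ax, ay = min(a)                     # fix one point of A as the anchor
    for bx, by in b:                    # candidate image of the anchor in B
        dx, dy = bx - ax, by - ay
        if all((x + dx, y + dy) in b for x, y in a):
            return True
    return False

# O(|a| * |b|) membership tests -- polynomial, unlike the parametric case,
# where each rule application must also solve for the shape's parameters.
```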

16 citations

Journal Article
TL;DR: A syntactic model, called image grammar, is defined for generating sets of images, where an image is viewed as an array over a finite alphabet; the model can be considered a generalization of classical Chomsky grammars.
Abstract: We define a syntactic model for generating sets of images, where an image can be viewed as an array over a finite alphabet. This model is called image grammar. Image grammars can be considered a generalization of classical Chomsky grammars. We then study some combinatorial and language-theoretic properties, such as reduction, pumping lemmas, and complexity measures, and establish a strict infinite hierarchy. We also characterize these families in terms of deterministic substitutions and Chomsky languages.
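The deterministic substitutions used in the characterization can be illustrated directly: every symbol of an array expands in parallel to a fixed k-by-k block, so n iterations from a single cell yield a kⁿ-by-kⁿ image. A toy example with rules of our own (not from the paper) drawing a Sierpinski-carpet-like pattern:

```python
# Deterministic 2D substitution over the alphabet {"a", "."}: each symbol
# is replaced by a 3x3 block, all cells rewritten in parallel.
RULES = {
    "a": [["a", "a", "a"],
          ["a", ".", "a"],
          ["a", "a", "a"]],
    ".": [[".", ".", "."],
          [".", ".", "."],
          [".", ".", "."]],
}

def substitute(image):
    """Replace every cell by its 3x3 block, stitching rows back together."""
    out = []
    for row in image:
        blocks = [RULES[sym] for sym in row]
        for i in range(3):
            out.append([cell for b in blocks for cell in b[i]])
    return out

def generate(n, start="a"):
    image = [[start]]
    for _ in range(n):
        image = substitute(image)
    return image

img = generate(2)   # a 9x9 array; the "." holes form the carpet pattern
```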

16 citations

Journal Article
01 Apr 2003 - Grammars
TL;DR: The aim of this paper is to give prospective PhD students in the area hints on where to start promising research, and to supplement earlier reference lists on parallel grammars, trying to cover recent papers as well as "older" papers that were somehow neglected in other reviews.
Abstract: The aim of this paper is at least twofold: to give prospective PhD students in the area hints on where to start promising research; and to supplement earlier reference lists on parallel grammars, trying to cover recent papers as well as "older" papers which were somehow neglected in other reviews. Together with the nowadays classical book on L systems by G. Rozenberg and A. Salomaa, and with the articles on L systems in the Handbook of Formal Languages, researchers will be equipped with a hopefully comprehensive list of references and ideas around parallel grammars.
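The parallel rewriting that distinguishes L systems from sequential Chomsky grammars is easy to demonstrate with Lindenmayer's classic D0L "algae" system, in which every symbol of the word is rewritten simultaneously at each step:

```python
# D0L system: deterministic, context-free, fully parallel rewriting.
# Lindenmayer's "algae" example: A -> AB, B -> A.
RULES = {"A": "AB", "B": "A"}

def step(word):
    """Rewrite every symbol of `word` in parallel."""
    return "".join(RULES[c] for c in word)

word, lengths = "A", []
for _ in range(7):
    lengths.append(len(word))
    word = step(word)
# lengths == [1, 2, 3, 5, 8, 13, 21] -- the Fibonacci numbers
```

The Fibonacci growth of the word lengths is a small example of the combinatorial behavior that makes parallel grammars a research area of their own.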

16 citations


Network Information
Related Topics (5)
Graph (abstract data type)
69.9K papers, 1.2M citations
80% related
Time complexity
36K papers, 879.5K citations
79% related
Concurrency
13K papers, 347.1K citations
78% related
Model checking
16.9K papers, 451.6K citations
77% related
Directed graph
12.2K papers, 302.4K citations
77% related
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2023    11
2022    12
2021    1
2020    4
2019    1
2018    1