Topic

Context-sensitive grammar

About: Context-sensitive grammar is a research topic. Over its lifetime, 1,938 publications have been published within this topic, receiving 45,911 citations. The topic is also known as: CSG.


Papers
Proceedings ArticleDOI
Martin Kay
23 Aug 1967
TL;DR: The study discusses the notation used to write rules and the extent to which these rules can be made to state the same linguistic facts as a transformational grammar.
Abstract: A description is given of a sophisticated computer program for the syntactic analysis of natural languages. The study discusses the notation used to write rules and the extent to which these rules can be made to state the same linguistic facts as a transformational grammar. Whereas most existing programs apply context-free phrase-structure grammars, this new program can analyze sentences with context-sensitive grammars and with grammars of a class very similar to transformational grammars. The program, which is written for the IBM 7040/44 computer, is nondeterministic: the various interpretations of an ambiguous sentence are all worked on simultaneously; at no stage does the program develop one interpretation rather than another. If two interpretations differ only in some small part of a partial syntactic structure, then only one complete structure is stored with two versions of the ambiguous part. The unambiguous portion is worked on only once for both interpretations. Although the current version of the program is written in ALGOL, with very little regard for efficiency, the basic algorithm is inherently much more efficient than any of its competitors.
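The idea of working on all interpretations of an ambiguous sentence simultaneously, computing each shared span only once, is the core of chart parsing. A minimal sketch of that idea (this is an illustration of the shared-chart principle, not Kay's actual IBM 7040/44 program) is a CKY-style recognizer, where every nonterminal analysis of a span is stored once in a chart and reused by all larger analyses that contain it:

```python
# Minimal CKY-style chart recognizer: every analysis of a span (i, j) is
# computed once and shared by all interpretations, rather than re-derived
# per parse.  The toy grammar below is my own, not from the paper.
from collections import defaultdict

def cky_recognize(words, grammar, start="S"):
    """grammar: dict mapping RHS tuples (in CNF) to sets of LHS symbols."""
    n = len(words)
    chart = defaultdict(set)  # (i, j) -> nonterminals spanning words[i:j]
    for i, w in enumerate(words):
        chart[(i, i + 1)] |= grammar.get((w,), set())
    for span in range(2, n + 1):          # build larger spans from smaller ones
        for i in range(n - span + 1):
            j = i + span
            for k in range(i + 1, j):     # each split point of the span
                for b in chart[(i, k)]:
                    for c in chart[(k, j)]:
                        chart[(i, j)] |= grammar.get((b, c), set())
    return start in chart[(0, n)]

grammar = {
    ("the",): {"Det"}, ("dog",): {"N"}, ("barks",): {"VP"},
    ("Det", "N"): {"NP"}, ("NP", "VP"): {"S"},
}
print(cky_recognize("the dog barks".split(), grammar))  # True
```

Because ambiguous subspans are entered in the chart once, an exponential number of parses can share a polynomial-size table, which is the efficiency the abstract alludes to.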

55 citations

01 Jan 1999
TL;DR: The recognizers presented in this dissertation will recognize languages generated by Minimalist Grammars as defined in [Sta97] and will be used to rigorously explore the computational consequences of psycholinguistic theories of human sentence processing.
Abstract: In this paper I will present a formal specification of a recognizer for languages generated by Minimalist Grammars (Stabler, 1997). Minimalist Grammars are simple, formal grammars modeling some important aspects of the kind of grammars developed in the framework of Chomsky's Minimalist Program (Chomsky, 1995). A Minimalist Grammar is defined by a set of lexical items, which varies from language to language, and two universal structure building functions, which are defined on trees or configurations: merge and move. Michaelis (1998) has shown that the set of languages generated by Minimalist Grammars is mildly context-sensitive: it falls properly between the set of context-sensitive languages and the set of context-free languages. Mildly context-sensitive languages are assumed to be appropriately powerful for the description of natural languages (Joshi, 1985). Minimalist Grammars can move material from positions arbitrarily deep inside a sentence. This property contributes to the non-context-free nature of these grammars. Michaelis' equivalence result (Michaelis, 1998) permits a representation of Minimalist Grammars in which the operations of the grammar are characterized in such a way that they are strictly local. In this representation, configurations are reduced to those properties that determine their behavior with regard to the structure building functions merge and move. This representation will be used in this paper to derive a top-down recognizer for languages generated by Minimalist Grammars. The recognizer starts from the assumption that the sentence to be parsed is actually a grammatical sentence, and then tries to disassemble it into lexical items by repeatedly applying the structure building functions merge and move in reverse. 
The recognizer presented in this paper has the correct-prefix property: it goes through the input sentence from left to right and, in case of an ungrammatical sentence, it will halt at the first word that does not fit into a grammatical structure, i.e., the recognizer will not go beyond a prefix that cannot be extended to a grammatical sentence in the language. Similarly, the parser of the human sentence processor detects an ungrammaticality at the first word that makes a sentence ungrammatical, and for garden path sentences it will generally hesitate at the first word that does not fit into the structure that has been hypothesized for the sentence. This is a computationally advantageous property for a recognizer to have, because it prevents the recognizer from spending any effort on a sentence once it is known that it is ungrammatical. Besides contributing to a deeper understanding of Minimalist Grammars and theories that can be formalized in a similar fashion, e.g. Strict Asymmetry Grammars (e.g. Di Sciullo, 1999), the work reported in this paper may also be relevant for psycholinguistic inquiries. In most psycholinguistic proposals, the operations of the human parser are informally sketched in the context of a small set of example sentences, leaving open the question whether a parser with the desired properties covering the entire language actually exists. Another drawback of some psycholinguistic work is that it is based on simplistic and outdated conceptions of syntactic structure. Having a formal, sound and complete parsing model for Minimalist Grammars may help to remedy these problems.
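The correct-prefix property can be illustrated on a tiny scale with the standard non-context-free language a^n b^n c^n (the kind of crossing dependency that motivates mildly context-sensitive formalisms). The sketch below is my own toy recognizer, not Harkema's Minimalist Grammar recognizer; it scans left to right and reports the index of the first symbol after which no grammatical continuation exists:

```python
# Toy left-to-right recognizer for a^n b^n c^n with the correct-prefix
# property: it halts at the first symbol that cannot be extended to a
# grammatical sentence.  Illustration only, not the MG recognizer.
def recognize_anbncn(s):
    counts = {"a": 0, "b": 0, "c": 0}
    phase, order = "a", "abc"
    for i, ch in enumerate(s):
        if ch not in order or order.index(ch) < order.index(phase):
            return (False, i)   # e.g. an 'a' after we started reading 'b's
        phase = ch
        counts[ch] += 1
        if counts["b"] > counts["a"] or counts["c"] > counts["b"]:
            return (False, i)   # prefix can no longer extend to a^n b^n c^n
    ok = counts["a"] == counts["b"] == counts["c"] and counts["a"] > 0
    return (ok, len(s))

print(recognize_anbncn("aabbcc"))   # (True, 6)
print(recognize_anbncn("aabbbcc"))  # (False, 4): the third 'b' is the culprit
```

As the abstract notes, the computational payoff is that no effort is spent past the point where ungrammaticality is already certain.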

55 citations

Book ChapterDOI
11 Feb 1988
TL;DR: An abstract notion of context-free grammar is introduced that deals with abstract objects that can be words, trees, graphs or other combinatorial objects and is applied to NLC graph grammars introduced by Rozenberg and Janssens.
Abstract: An abstract notion of context-free grammar is introduced. It deals with abstract objects that can be words, trees, graphs or other combinatorial objects. It is applied to NLC graph grammars introduced by Rozenberg and Janssens. The monadic second-order theory of a context-free NLC set of graphs is decidable.

55 citations

Proceedings ArticleDOI
01 May 1998
TL;DR: This work shows that the queries expressible by RAGs are precisely those definable by first-order inductions of linear depth, or, equivalently, those computable in linear time on a parallel machine with polynomially many processors, and that RAGs are more expressive than monadic second-order logic for queries of any arity.
Abstract: Structured document databases can be naturally viewed as derivation trees of a context-free grammar. Under this view, the classical formalism of attribute grammars becomes a formalism for structured document query languages. From this perspective, we study the expressive power of BAGs: Boolean-valued attribute grammars with propositional logic formulas as semantic rules, and RAGs: relation-valued attribute grammars with first-order logic formulas as semantic rules. BAGs can express only unary queries; RAGs can express queries of any arity. We first show that the (unary) queries expressible by BAGs are precisely those definable in monadic second-order logic. We then show that the queries expressible by RAGs are precisely those definable by first-order inductions of linear depth, or, equivalently, those computable in linear time on a parallel machine with polynomially many processors. Further, we show that RAGs that only use synthesized attributes are strictly weaker than RAGs that use both synthesized and inherited attributes. We show that RAGs are more expressive than monadic second-order logic for queries of any arity. Finally, we discuss relational attribute grammars in the context of BAGs and RAGs. We show that in the case of BAGs this does not increase the expressive power, while different semantics for relational RAGs capture the complexity classes NP, coNP and UP ∩ coUP.
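The BAG mechanism can be sketched concretely: boolean attributes are attached to derivation-tree nodes and computed by propositional rules over neighboring attributes, and the unary query selects the nodes whose attribute is true. The example below is my own minimal illustration (a single synthesized attribute), not the paper's formalism in full:

```python
# Sketch of a Boolean-valued attribute grammar (BAG): a synthesized boolean
# attribute "dominates_b" computed bottom-up by a propositional rule.  The
# unary query "nodes whose subtree yields a 'b'" selects nodes where the
# attribute is true.  Toy example, not the paper's full formalism.
class Node:
    def __init__(self, label, children=()):
        self.label = label
        self.children = list(children)

def dominates_b(node):
    # propositional semantic rule: attr(n) = (label(n) = 'b') OR-ed over children
    child_vals = [dominates_b(c) for c in node.children]  # synthesize bottom-up
    node.dominates_b = (node.label == "b") or any(child_vals)
    return node.dominates_b

# Derivation tree for S -> a S, S -> b:
tree = Node("S", [Node("a"), Node("S", [Node("b")])])
print(dominates_b(tree))  # True: the root dominates a 'b'
```

Relation-valued attributes (RAGs) generalize this by letting each attribute hold a relation over tree nodes, which is what lifts the formalism from unary queries to queries of any arity.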

54 citations

Journal ArticleDOI
TL;DR: A criterion to measure derivational complexity of formal grammars and languages is proposed and discussed, and it is shown that for each nonnegative integer k, there exists a context-free language whose rank is k.
Abstract: A criterion to measure derivational complexity of formal grammars and languages is proposed and discussed. That is, the associate language and the L-associate language are defined for a grammar such that the former represents all the valid derivations and the latter represents all the valid leftmost derivations. It is shown that for any phrase-structure grammar, the associate language is a context-sensitive language and the L-associate language is a context-free language. Necessary and sufficient conditions for an associate language to be a regular set and to be a context-free language are found. The idea in the above necessary and sufficient conditions is extended to the notion of "rank" for a measure of derivational complexity of context-free grammars and languages. It is shown that for each nonnegative integer k, there exists a context-free language whose rank is k. The paper also includes a few solvable decision problems concerning derivational complexity of grammars.

53 citations


Network Information
Related Topics (5)
Graph (abstract data type): 69.9K papers, 1.2M citations, 80% related
Time complexity: 36K papers, 879.5K citations, 79% related
Concurrency: 13K papers, 347.1K citations, 78% related
Model checking: 16.9K papers, 451.6K citations, 77% related
Directed graph: 12.2K papers, 302.4K citations, 77% related
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2023    11
2022    12
2021    1
2020    4
2019    1
2018    1