
Showing papers on "Context-sensitive grammar published in 1975"


Journal ArticleDOI
TL;DR: It is shown that any deterministic algorithm which solves the circularity problem for a grammar must for infinitely many cases use an exponential amount of time.
Abstract: Attribute grammars are an extension of context-free grammars devised by Knuth as a mechanism for including the semantics of a context-free language with the syntax of the language. The circularity problem for a grammar is to determine whether the semantics for all possible sentences (programs) in fact will be well defined. It is proved that this problem is, in general, computationally intractable. Specifically, it is shown that any deterministic algorithm which solves the problem must for infinitely many cases use an exponential amount of time. An improved version of Knuth's circularity testing algorithm is also given, which actually solves the problem within exponential time.

108 citations
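The circularity question above comes down to whether attribute dependencies can form a cycle. As a much-simplified illustration (Knuth's actual test composes per-production dependency graphs over all derivations; this sketch only detects a cycle in one fixed dependency graph, with hypothetical attribute names):

```python
# Toy illustration of circularity in attribute grammars: the semantics
# is well defined only if the attribute dependency graph is acyclic.
def has_cycle(deps):
    """deps: dict mapping an attribute to the attributes it reads."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {a: WHITE for a in deps}

    def visit(a):
        color[a] = GRAY
        for b in deps.get(a, []):
            if color.get(b, WHITE) == GRAY:
                return True          # back edge: circular definition
            if color.get(b, WHITE) == WHITE and b in deps and visit(b):
                return True
        color[a] = BLACK
        return False

    return any(color[a] == WHITE and visit(a) for a in deps)

# Hypothetical attributes: E.value reads T.value and vice versa -> circular
circular = {"E.value": ["T.value"], "T.value": ["E.value"]}
acyclic  = {"E.value": ["T.value"], "T.value": []}
```

The exponential lower bound in the paper applies to the full problem over all productions, not to this single-graph check, which is linear time.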


Journal ArticleDOI
TL;DR: In an attempt to provide a unified theory of grammars, a model is introduced which has two components: a "grammar form," which provides the general structure of the productions, and an "interpretation," which yields a specific grammar.

91 citations


Proceedings ArticleDOI
01 Jan 1975
TL;DR: In this article, a membership test is given which determines whether a given attribute grammar satisfies the required restrictions, and the membership test can be embedded in a compiler writing system which accepts an attribute grammar as input and outputs a compiler for the associated language provided the grammar meets the restrictions.
Abstract: In order to make the use of attribute grammars practical in (automatic) compiler generation, restricted attribute grammars are introduced. A membership test is given which determines whether a given attribute grammar satisfies the required restrictions. The major advantage of the restricted attribute grammars is that they are non-circular. The given membership test can be embedded in a compiler writing system which accepts an attribute grammar as input and outputs a compiler for the associated language provided the grammar meets the restrictions. The technique is also applicable to translation grammars of [15]. It is assumed that the reader is familiar with context free grammars but not necessarily with attribute grammars.

70 citations


Journal ArticleDOI
TL;DR: It is shown that the problem of determining whether an arbitrary context-free grammar is a member of some easily parsed subclass of grammars, such as the LR(k) grammars, is NP-complete when k is expressed in unary.
Abstract: The problem of determining whether an arbitrary context-free grammar is a member of some easily parsed subclass of grammars such as the LR(k) grammars is considered. The time complexity of this problem is analyzed both when k is considered to be a fixed integer and when k is considered to be a parameter of the test. In the first case, it is shown that for every k there exists an O(n^(k+2)) algorithm for testing the LR(k) property, where n is the size of the grammar in question. On the other hand, if both k and the subject grammar are problem parameters, then the complexity of the problem depends very strongly on the representation chosen for k. More specifically, it is shown that this problem is NP-complete when k is expressed in unary. When k is expressed in binary the problem is complete for nondeterministic exponential time. These results carry over to many other parameterized classes of grammars, such as the LL(k), strong LL(k), SLR(k), LC(k), and strong LC(k) grammars.

31 citations


01 Jan 1975
TL;DR: Shape grammars as mentioned in this paper provide a means for the recursive specification of shapes, where a phrase structure grammar is defined over an alphabet of symbols and generates a language of sequences of symbols.

Abstract: Shape grammars provide a means for the recursive specification of shapes. The formalism for shape grammars is designed to be easily usable and understandable by people and at the same time to be adaptable for use in computer programs. Shape grammars are similar to phrase structure grammars, which were developed by Chomsky [1956, 1957]. Where a phrase structure grammar is defined over an alphabet of symbols and generates a language of sequences of symbols, a shape grammar is defined over an alphabet of shapes and generates a language of shapes. This dissertation explores the uses of shape grammars. The dissertation is divided into three sections and an appendix. In the first section: Shape grammars are defined. Some simple examples are given for instructive purposes. Shape grammars are used to generate a new class of reversible figures. Shape grammars are given for some well-known mathematical curves (the Snowflake curve, a variation of Peano's curve, and Hilbert's curve). To show the general computational power of shape grammars, a procedure that, given any Turing machine, constructs a shape grammar that simulates the operation of that Turing machine is presented. Related work on various formalisms for picture grammars is described. A symbolic characterization of shape grammars is given that is useful for implementing shape grammars in computer programs.

26 citations
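The dissertation gives shape grammars for curves such as Hilbert's curve. As a hedged, symbol-based analogue (not Stiny's shape-grammar formalism itself), the same curve can be generated by string rewriting; the rules below are the standard L-system rules for the Hilbert curve, used here only to illustrate recursive specification by rewriting:

```python
# Classic L-system rewriting rules for the Hilbert curve.
# F = draw forward, + / - = turn left / right, A and B are nonterminals.
RULES = {"A": "+BF-AFA-FB+", "B": "-AF+BFB+FA-"}

def rewrite(axiom, steps):
    """Apply the rewriting rules to every symbol, `steps` times in parallel."""
    s = axiom
    for _ in range(steps):
        s = "".join(RULES.get(c, c) for c in s)
    return s

path = rewrite("A", 2)  # turtle-interpretable drawing instructions
```

The shape-grammar version rewrites shapes directly rather than symbols; this sketch shows only the shared recursive structure.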





Book ChapterDOI
01 Jan 1975
TL;DR: It is found that context-sensitive fractionally fuzzy grammars are recursive and can be parsed by most methods used for ordinary context-free grammars.
Abstract: A new type of fuzzy grammar, called the fractionally fuzzy grammar , is introduced. These grammars are especially suitable for pattern recognition because they are powerful and can easily be parsed. It is shown that the languages produced by the class of type i (Chomsky) fractionally fuzzy grammars properly includes the set of languages generated by type i fuzzy grammars. It is also shown that the set of languages generated by all type 3 (regular) fractionally fuzzy grammars is not a subset of the set of languages produced by all unrestricted (type 0) fuzzy grammars. It is found that context-sensitive fractionally fuzzy grammars are recursive and can be parsed by most methods used for ordinary context-free grammars. Finally, a pattern recognition experiment which uses fractionally fuzzy grammars to recognize the script letters i, e, t and l without the help of the dot on the i or the crossing of the t is given. The construction of a fractionally fuzzy grammar based on a training set and the experimental results are discussed.

21 citations
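The abstract does not spell out how a fractional membership grade is computed. As a hedged sketch of one natural reading of "fractionally" fuzzy (the weight functions and the ratio below are my assumption, not the paper's stated definition): each production carries two weights, and a derivation's grade is the ratio of their sums.

```python
# Hypothetical grade computation: each production label r has weights
# f[r] and g[r]; a derivation using r1..rn gets grade sum f / sum g.
def grade(derivation, f, g):
    num = sum(f[r] for r in derivation)
    den = sum(g[r] for r in derivation)
    return num / den if den else 0.0

# Toy example with made-up production labels and weights.
f = {"r1": 1, "r2": 0}
g = {"r1": 1, "r2": 1}
```

Because the grade is a ratio accumulated along the derivation, parsing can proceed exactly as for the underlying non-fuzzy grammar, with the grade computed on the side, which matches the paper's claim that these grammars are easy to parse.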


Journal ArticleDOI
TL;DR: It is shown that some context-sensitive languages can be generated by type 3 ∗-L-fuzzy grammars with cut points, and that for type 2 L-fuzzy grammars, Chomsky and Greibach normal forms can be constructed as extensions of the corresponding notions in the theory of formal grammars.

19 citations


Journal ArticleDOI
TL;DR: The Bounded Context Parsable Grammars are defined, a class of recursive subsets of context free grammars for which the authors can construct linear time parsers, and it is shown that the set of languages of the grammars thus defined properly contains the set of deterministic languages without the empty sentence.
Abstract: In this paper we extend Floyd's notion of parsing by bounded context to define the Bounded Context Parsable Grammars, a class of recursive subsets of context free grammars for which we can construct linear time parsers. It is shown that the set of languages of the grammars thus defined properly contains the set of deterministic languages without the empty sentence.

18 citations


Journal ArticleDOI
TL;DR: Two equivalent formulations of max-product grammars are presented and it is shown that the family of max-product languages contains the family of regular languages as a proper subfamily and that there are context-free and stochastic languages which are not max-product languages.

Journal ArticleDOI
TL;DR: The notion of structural equivalence for derivations is formalized, extended to unrestricted grammars, and it is proved that two derivations are structurally equivalent if and only if they have the same syntactic structure.
Abstract: Formal definitions for the syntactic structures of unrestricted grammars are given. The traditional forms for grammar productions give rise to “generative grammars” with “derivation structures” (where productions have the form α → β), and “phrase structure grammars” with “phrase structures” (where productions have the form A → B/μ-ν), two distinct notions of grammar and syntactic structure which become indistinguishable in the context free case, where the structures are trees. Parallel theories are developed for both kinds of grammar and structure. We formalize the notion of structural equivalence for derivations, extended to unrestricted grammars, and we prove that two derivations are structurally equivalent if and only if they have the same syntactic structure. Structural equivalence is an equivalence relation over the derivations of a grammar, and we give a simpler proof of a theorem by Griffiths that each equivalence class contains a rightmost derivation. We also give a proof for the uniqueness of the rightmost derivation, following a study of some of the properties of syntactic structures. Next, we investigate the relationship between derivation structures and phrase structures and show that the two concepts are nonisomorphic. There is a natural correspondence between generative productions and phrase structure productions, and, by extension, between the two kinds of grammars and between their derivations. But we show that the correspondence does not necessarily preserve structural equivalence, in either direction. However, if the correspondence from the productions of a phrase structure grammar to the productions of a generative grammar is a bijection, then structural equivalence on the generative derivations refines the image under the correspondence of structural equivalence on the phrase structure derivations.

Journal ArticleDOI
TL;DR: Probabilistic grammars acting as information sources are considered and concepts from information theory defined by other authors are partly redefined and a specific probability assignment for maximizing the rate of a language source is found.
Abstract: Probabilistic grammars acting as information sources are considered and concepts from information theory defined by other authors are partly redefined. A specific probability assignment for maximizing the rate of a language source is found. Further, the problem of coding a language source is treated.
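A grammar acting as an information source emits symbols with probabilities attached to its production choices. As a hedged sketch of only the one-step quantity (the paper's rate is a per-symbol limit over the whole source, which this does not compute): the entropy of a single production choice, which a uniform probability assignment maximizes.

```python
import math

# Entropy in bits of one production choice with the given probabilities.
def choice_entropy(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

# For a nonterminal with three alternatives, the uniform assignment
# maximizes the one-step entropy (log2 3, about 1.585 bits).
h_uniform = choice_entropy([1/3, 1/3, 1/3])
h_skewed  = choice_entropy([0.8, 0.1, 0.1])
```

Maximizing the rate of the full language source, as in the paper, additionally has to weight each nonterminal by how often derivations visit it, so the maximizing assignment is generally not uniform per rule.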

Journal ArticleDOI
TL;DR: Grammars whose languages consist of cycles (“necklaces”) rather than strings are considered, and automata on cyclic tapes are also discussed.
Abstract: Grammars whose languages consist of cycles (“necklaces”) rather than strings are considered. If G is context free, and we regard G as generating cycles instead of strings, the resulting language is just what we would get if we “bent” the strings of L ( G ) into cycles. This is no longer true if G is context sensitive. However, in this case too, the context-sensitive cycle languages are just the “bendings” of the context-sensitive string languages. Automata on cyclic tapes are also discussed.
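"Bending" a string into a cycle means two strings denote the same necklace exactly when one is a rotation of the other. A minimal sketch of that equivalence, comparing canonical (lexicographically least) rotations:

```python
# Canonical form of a necklace: the lexicographically least rotation.
def necklace(s):
    return min(s[i:] + s[:i] for i in range(len(s))) if s else s

# "abc", "bca" and "cab" all bend into the same cycle.
same = necklace("bca") == necklace("abc") == necklace("cab")
```

A cycle language in the paper's sense is then the set of such equivalence classes obtained from the strings of L(G).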

Journal ArticleDOI
TL;DR: A generalization of the finite state acceptors for derivation structures and for phrase structures is defined and it is proved that the set of syntactic structures of a recursively enumerable language is recursive.
Abstract: We define a generalization of the finite state acceptors for derivation structures and for phrase structures. Corresponding to the Chomsky hierarchy of grammars, there is a hierarchy of acceptors, and for both kinds of structures, the type 2 acceptors are tree automata. For i = 0, 1, 2, 3, the sets of structures recognized by the type i acceptors are just the sets of projections of the structures of the type i grammars, and the languages of the type i acceptors are just the type i languages. Finally, we prove that the set of syntactic structures of a recursively enumerable language is recursive.

Proceedings ArticleDOI
13 Oct 1975
TL;DR: It is shown that the languages generated by a constrained form of Chomsky's transformational grammars characterize the languages recognized by Turing machines in deterministic exponential (2^cn) time.

Abstract: We show that the languages generated by a constrained form of Chomsky's transformational grammars characterize the languages recognized by Turing machines in deterministic exponential (2^cn) time. The constraints on the transformational grammars are satisfied by many, though not all, known grammars in linguistic practice. We also give a simple algebraic characterization of the same class of languages and use it for the linguistic characterization.




Proceedings ArticleDOI
05 May 1975
TL;DR: The general question considered in this paper is: which grammar forms are more efficient than other grammar forms for the expression of classes of languages, and how much gain in efficiency is possible?
Abstract: The definition of "grammar form" introduced in [CG] makes it possible to state and prove results about various types of grammars in a uniform way. Among questions naturally formalizable in this framework are many about the complexity or efficiency of grammars of different kinds. Grammar forms provide a reasonable way of considering the totality of other forms we might use, and so of answering the question with both upper and lower bound results. The general question considered in this paper is the following: which grammar forms are more efficient than other grammar forms for the expression of classes of languages, and how much gain in efficiency is possible? Our results deal solely with context-free grammars, and use both derivation complexity and size of grammars as complexity measures.

Journal ArticleDOI
01 Jan 1975
TL;DR: It is shown that every context-sensitive language can be generated by a context-free grammar with graph control over sets of productions, corresponding to unconditional transfer programmed grammars and programmed grammars with empty failure fields.
Abstract: It is shown that every context-sensitive language can be generated by a context-free grammar with graph control over sets of productions. This can be done in two different ways, corresponding to unconditional transfer programmed grammars and programmed grammars with empty failure fields. Also some results concerning ordinary programmed grammars are established.
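The idea of control over context-free productions can be made concrete with a toy programmed grammar: each rule carries a "success field" naming which rules may be applied next. The rules and labels below are my own illustrative example, not the paper's construction; the control discipline forces the context-sensitive language a^n b^n c^n out of purely context-free core rules.

```python
# Programmed grammar sketch: label -> (lhs, rhs, success field).
# The success field restricts which rule labels may follow.
RULES = {
    0: ("S", "ABC", {1, 4}),
    1: ("A", "aA",  {2}),
    2: ("B", "bB",  {3}),
    3: ("C", "cC",  {1, 4}),
    4: ("A", "a",   {5}),
    5: ("B", "b",   {6}),
    6: ("C", "c",   set()),
}

def derive(n):
    """Derive a^n b^n c^n (n >= 1) by taking the 1-2-3 loop n-1 times."""
    word = "S"
    plan = [0] + [1, 2, 3] * (n - 1) + [4, 5, 6]  # respects success fields
    for label in plan:
        lhs, rhs, _succ = RULES[label]
        word = word.replace(lhs, rhs, 1)  # apply the context-free core rule once
    return word
```

Every individual rule here is context free; only the control graph synchronizes the three counts, which is the mechanism the paper shows suffices for all context-sensitive languages.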

Journal ArticleDOI
TL;DR: The main focus of the paper is on deciding whether a given functor between two grammars is surjective, and an additional theorem gives the means for deciding a certain type of structural similarity which is defined by the existence of such a functor.
Abstract: We present a procedure for deciding a sufficient condition for equivalence of context-free grammars. The main focus of the paper is on deciding whether a given functor between two grammars is surjective. An additional theorem gives us the means for deciding a certain type of structural similarity which is defined by the existence of such a functor.


15 May 1975
TL;DR: The relation between basic transition networks and context-free grammars is demonstrated and error-correcting parsing algorithms are proposed from the viewpoint of syntactic pattern recognition.
Abstract: The application of transition network grammars to syntactic pattern recognition is studied in this paper. The relation between basic transition networks and context-free grammars is demonstrated. Augmented transition networks can be used to represent context-sensitive, or even type 0, languages. Stochastic transition networks are defined, and the parsing of languages represented by transition networks and stochastic transition networks is investigated. Error-correcting parsing algorithms are proposed from the viewpoint of syntactic pattern recognition. The voice-chess grammar is used in an experiment to illustrate various parsing results.
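A basic transition network is just states connected by symbol-labeled arcs, traversed while consuming input. A minimal sketch (the network below is my own toy example accepting the regular language ab*c; the augmented networks discussed in the paper add registers and tests on the arcs to reach context-sensitive and type 0 power):

```python
# Toy basic transition network: (state, symbol) -> next state.
ARCS = {
    ("q0", "a"): "q1",
    ("q1", "b"): "q1",  # self-loop consumes any number of b's
    ("q1", "c"): "q2",
}
FINAL = {"q2"}

def accepts(s):
    state = "q0"
    for ch in s:
        state = ARCS.get((state, ch))
        if state is None:
            return False  # no arc for this symbol: reject
    return state in FINAL
```

Allowing arcs to call named subnetworks recursively gives the context-free power the paper relates to basic transition networks.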

01 Sep 1975
TL;DR: The analogous extension of the LL(k) grammars is considered, called the LL-regular grammars, which can be parsed with a very simple two-scan parsing algorithm, and some proofs of correctness are given.

Abstract: Culik and Cohen introduced the class of LR-regular grammars, an extension of the LR(k) grammars. In this report we consider the analogous extension of the LL(k) grammars, called the LL-regular grammars. The relations of this class of grammars to other classes of grammars are shown. Every LL-regular grammar can be transformed to an equivalent LL-regular grammar in Greibach Normal Form. LL-regular grammars can be parsed with a very simple two-scan parsing algorithm. Algorithms and some proofs of correctness are given. In this report extensions of some other classes of grammars are also considered.


Journal ArticleDOI
TL;DR: It is shown that the local adjunct languages are actually closely related to the regular and context-free languages, despite the entirely different form of definition.
Abstract: The local adjunct grammars and languages have been introduced by Joshi, Kosaraju, and Yamada in response to linguistic considerations. These grammars differ fundamentally from the Chomsky phrase-structure grammars, and they generate a distinct class of languages.