
Showing papers on "Tree-adjoining grammar" published in 1987



Book ChapterDOI
01 Oct 1987
TL;DR: This paper shows how attributes in an attribute grammar can be simply and efficiently evaluated using a lazy functional language.
Abstract: The purpose of this paper is twofold. Firstly we show how attributes in an attribute grammar can be simply and efficiently evaluated using a lazy functional language. The class of attribute grammars we can deal with is the most general one possible: attributes may depend on each other in an arbitrary way, as long as there are no truly circular data dependencies.

151 citations
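
The evaluation scheme described above relies on lazy evaluation to let attributes refer to one another without an explicit scheduling pass. A minimal sketch of the idea in Haskell, using the classic "repmin" circular program as a stand-in (a standard illustration of the technique, not code from the paper): a synthesized attribute (the tree minimum) and an inherited attribute (the value planted at each leaf) are computed in what is textually a single traversal.

data Tree = Leaf Int | Fork Tree Tree
  deriving Show

-- Replace every leaf by the global minimum of the tree. The rebuilt
-- tree t' refers to the value m produced by the same traversal;
-- laziness resolves the apparent cycle because no attribute truly
-- depends on itself.
repMin :: Tree -> Tree
repMin t = t'
  where
    (m, t') = go t
    -- go yields the synthesized minimum of its subtree together with
    -- the rebuilt subtree, whose leaves carry the inherited value m.
    go (Leaf n)   = (n, Leaf m)
    go (Fork l r) = (min ml mr, Fork l' r')
      where
        (ml, l') = go l
        (mr, r') = go r

For example, repMin (Fork (Leaf 3) (Fork (Leaf 1) (Leaf 7))) evaluates to Fork (Leaf 1) (Fork (Leaf 1) (Leaf 1)); the rebuilt leaves depend on a not-yet-known minimum, exactly the kind of non-circular attribute dependency the paper's evaluator handles.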


Book ChapterDOI
01 Jan 1987
TL;DR: Tree Adjoining Grammars (TAG) is a formalism that factors recursion and dependencies in a special way, leading to a kind of locality and the possibility of incremental generation.
Abstract: Grammatical formalisms can be viewed as neutral with respect to comprehension or generation, or they can be investigated from the point of view of their suitability for comprehension or generation. Tree Adjoining Grammars (TAG) is a formalism that factors recursion and dependencies in a special way, leading to a kind of locality and the possibility of incremental generation. We will examine the relevance of these properties from the point of view of sentence generation.

68 citations
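
The "special way" TAG factors recursion is the adjoining operation: an auxiliary tree, whose frontier contains a foot node matching its root label, is spliced into an interior node of another tree. A hedged Haskell sketch with a toy encoding of our own (foot nodes marked by a trailing '*'; not the authors' notation):

data Tree = Node String [Tree]
  deriving Show

-- A foot node carries its label with a trailing '*'.
isFoot :: String -> Bool
isFoot l = not (null l) && last l == '*'

-- Plant the excised subtree at the foot node of the auxiliary tree.
plugFoot :: Tree -> Tree -> Tree
plugFoot sub (Node l cs)
  | isFoot l  = sub
  | otherwise = Node l (map (plugFoot sub) cs)

-- Adjoin the auxiliary tree aux at the leftmost, outermost node of t
-- labelled lbl: that node's subtree moves to the foot of aux, and aux
-- takes the node's place. Nothing if no such node exists.
adjoin :: String -> Tree -> Tree -> Maybe Tree
adjoin lbl aux t@(Node l cs)
  | l == lbl  = Just (plugFoot t aux)
  | otherwise = Node l <$> go cs
  where
    go []       = Nothing
    go (c:rest) = case adjoin lbl aux c of
                    Just c' -> Just (c' : rest)
                    Nothing -> (c :) <$> go rest

Because each auxiliary tree keeps its root, its foot, and any dependent material between them inside one elementary tree, recursion can be added by adjoining without disturbing those local dependencies, which is the locality the abstract refers to.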



Journal ArticleDOI
TL;DR: Kleene's theorem is established in the context of formal tree power series, using regular tree grammars weighted over a semiring.

63 citations
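
For orientation, the shape of the result in LaTeX (our paraphrase; the precise rational operations on tree series are defined in the paper, not here): for tree series over a ranked alphabet $\Sigma$ with weights in a semiring $A$,

\mathrm{Rec}\langle\!\langle T_\Sigma, A\rangle\!\rangle \;=\; \mathrm{Rat}\langle\!\langle T_\Sigma, A\rangle\!\rangle ,

where the left-hand side is the class of series computed by weighted regular tree grammars and the right-hand side is the closure of the polynomial series under the rational operations (sum, a form of concatenation at designated symbols, and the corresponding iteration).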




Journal ArticleDOI
TL;DR: It is shown that so-called context-free string grammars with disconnecting that have the finite Church-Rosser property can generate NP-complete languages.

31 citations



Proceedings Article
13 Jul 1987
TL;DR: The tree adjoining grammar (TAG) formalism has been investigated from the point of view of its ability to handle word-order variation in the context of generation.
Abstract: In natural language generation the grammatical component has to be systematically interfaced to the other components of the system, for example, the planning component. Grammatical formalisms can be studied with respect to their suitability for generation. The tree adjoining grammar (TAG) formalism has previously been studied in terms of incremental generation. In this paper, the TAG formalism is investigated from the point of view of its ability to handle word-order variation in the context of generation. Word order cannot be treated as a last-minute adjustment of a structure; this position is not satisfactory cognitively or computationally. The grammatical framework has to be able to deal with word-order phenomena in a way that lets it be systematically interfaced to the other components of the generation system.

26 citations


Proceedings ArticleDOI
06 Jul 1987
TL;DR: It is shown that one benefit of FUG, the ability to state global constraints on choice separately from syntactic rules, is difficult to achieve in generation systems based on augmented context-free grammars (e.g., Definite Clause Grammars).
Abstract: In this paper, we show that one benefit of FUG, the ability to state global constraints on choice separately from syntactic rules, is difficult to achieve in generation systems based on augmented context-free grammars (e.g., Definite Clause Grammars). These systems require that such constraints be expressed locally, as part of the syntactic rules, and therefore duplicated throughout the grammar. Finally, we discuss a reimplementation of FUG that achieves levels of efficiency similar to Rubinoff's adaptation of MUMBLE, a deterministic language generator.

25 citations
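
As a toy illustration of the duplication problem (a hypothetical example of ours, not from the paper): suppose one global choice, formal versus informal register, constrains several lexical decisions. In a generator based on an augmented CFG the feature must be threaded into, and re-tested by, every rule it affects, whereas in FUG the same constraint could be stated once, as a separate functional description unified against the whole sentence structure. A Haskell sketch of the CFG-style side:

data Register = Formal | Informal

-- Each rule repeats the register test locally:
negatedAux :: Register -> String
negatedAux Formal   = "cannot"        -- constraint restated here ...
negatedAux Informal = "can't"

greeting :: Register -> String
greeting Formal   = "Good morning"    -- ... and here, and in every
greeting Informal = "hi"              -- other rule that makes a choice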



Proceedings Article
23 Aug 1987
TL;DR: A class of unification grammars for which the parsing problem is solvable is presented, together with a parallel parsing algorithm for this class of grammars.
Abstract: The parsing problem for arbitrary unification grammars is unsolvable. We present a class of unification grammars for which the parsing problem is solvable, and a parallel parsing algorithm for this class of grammars.
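
Parsing with a unification grammar ultimately reduces to repeatedly unifying feature structures. A minimal sketch of that core operation in Haskell (plain attribute-value maps; real unification grammars also need re-entrancy, i.e. structure sharing, which this toy omits):

import qualified Data.Map as M

-- A feature structure is an atomic value or an attribute-value map.
data FS = Atom String | FMap (M.Map String FS)
  deriving (Eq, Show)

-- Unification fails on clashing atoms and otherwise merges the two
-- maps, recursively unifying the values of any shared attribute.
unify :: FS -> FS -> Maybe FS
unify (Atom a) (Atom b)
  | a == b    = Just (Atom a)
  | otherwise = Nothing
unify (FMap m1) (FMap m2) =
    FMap <$> M.foldrWithKey step (Just m1) m2
  where
    step k v acc = do
      m <- acc
      case M.lookup k m of
        Nothing -> Just (M.insert k v m)
        Just v' -> do u <- unify v v'
                      Just (M.insert k u m)
unify _ _ = Nothing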

Journal ArticleDOI
TL;DR: Four factors are shown to enter into a typology of grammars: the mode of use, the language of the user, the level of the user, and the aims of use.
Abstract: Four types of grammar are distinguished according to mode of use: reference grammars, pedagogical grammars, theoretical grammars, and teach-yourself grammars. Four factors are shown to enter into a typology of grammars: the mode of use, the language of the user, the level of the user, and the aims of use. Reference grammars and pedagogical grammars are characterized, the former with particular respect to A Comprehensive Grammar of the English Language. In general it is not desirable to attempt to combine in one book the functions of these two types of grammars.

Journal ArticleDOI
TL;DR: Generalized Context-Free (Regular) Kolam Array Grammars [GCF(R)KAG] are introduced as models for generation of rectangular arrays and are found to be richer in generative capacity than Context-Free (Regular) Kolam Array Grammars.
Abstract: Generalized Context-Free (Regular) Kolam Array Grammars [GCF(R)KAG] are introduced as models for generation of rectangular arrays. These grammars are found to be richer in generative capacity than Context-Free (Regular) Kolam Array Grammars. Two subclasses of these grammars are also considered. Comparisons are made. Hierarchies and closure properties are examined. The effects of control devices on GCF(R)KAGs are discussed.


Book ChapterDOI
01 Jan 1987
TL;DR: A phrase-structure grammar has been written which generates exactly the set of sentences generated by a fairly large transformational grammar written by Noam Chomsky.
Abstract: A phrase-structure grammar has been written which generates exactly the set of sentences generated by a fairly large transformational grammar written by Noam Chomsky. The phrase-structure version of Chomsky's grammar is included in the appendix. It is written in an abbreviated notation which is explained below.

Proceedings ArticleDOI
06 Jul 1987
TL;DR: It is proved that unordered, or ID/LP, grammars are exponentially more succinct than context-free grammars, by exhibiting a sequence of finite languages Ln such that the size of any CFG for Ln must grow exponentially in n, although each Ln can be described by a polynomial-size ID/LP grammar.
Abstract: We prove in this paper that unordered, or ID/LP, grammars are exponentially more succinct than context-free grammars, by exhibiting a sequence (Ln) of finite languages such that the size of any CFG for Ln must grow exponentially in n, but which can be described by polynomial-size ID/LP grammars. The results have implications for the description of free word order languages.
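
A standard witness family for this kind of separation (stated here as background; it is not necessarily the paper's exact construction) is the set of all orderings of n distinct symbols:

L_n = \{\, w \mid w \text{ is a permutation of } a_1 a_2 \cdots a_n \,\}

An ID/LP grammar of size O(n) generates L_n with the single unordered ID rule S \rightarrow a_1, a_2, \ldots, a_n and an empty LP component, whereas a CFG must in effect track which subset of \{a_1, \ldots, a_n\} remains to be generated, which forces exponentially many nonterminals or rules.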

Journal ArticleDOI
TL;DR: The characterization of random-context array grammars and random-context structure grammars by two-dimensional random-context array automata and three-dimensional random-context structure automata, respectively, is investigated.



Journal ArticleDOI
TL;DR: The context-free-like structure of the grammar is used as a tool to investigate normal-form transformations, Dyck languages, and homomorphic characterizations; the two derivation modes yield the classes of indexed and type-0 languages, respectively.




Book
01 Nov 1987
TL;DR: An overview of results on the complexity of the membership problem for families of languages generated by several types of generalized grammars based on context-independent rewriting and on iterated context-dependent rewriting is presented.
Abstract: We present an overview of results on the complexity of the membership problem for families of languages generated by several types of generalized grammars. In particular, we consider generalized grammars based on context-independent rewriting, i.e., grammars consisting of a finite number of (non)deterministic substitutions, and on iterated context-dependent rewriting, i.e., grammars composed of a finite number of transductions. We give some conditions on the classes of these substitutions and transductions that guarantee the solvability of this membership problem within certain time and space bounds. As consequences we obtain additional closure properties of some time- and space-bounded complexity classes.

Proceedings ArticleDOI
01 Feb 1987
TL;DR: This abstract focuses on the use of semantic information and grammatical inference in syntactic pattern recognition; grammatical inference enables a system to learn most of the information in an input pattern and to apply the obtained knowledge to future recognition processes.
Abstract: During the past several years, the syntactic approach [1,2] has attracted growing attention as a promising avenue in image analysis. The object of image analysis is to extract as much information as possible from a given image or set of images. In this abstract, we focus our attention on the use of semantic information and grammatical inference.

In an attributed grammar, there are still a set of nonterminals, a set of terminals, and a start symbol, just as in conventional grammars. The productions are different: each production is associated with a semantic rule. Two kinds of attributes appear in the semantic rules: inherited attributes and synthesized attributes. One example of an attribute is the length of a specific line segment used as a primitive. All the attributes identified for a pattern are collected in a "total attribute vector".

Instead of using attributes, stochastic grammars associate a probability with each production. That is, a sub-pattern may generate one sub-pattern with some probability, and another with a different probability. A string may have two or more possible parses. In such cases of ambiguity, the probabilities associated with the possible productions are compared to determine the best-fitting one. Probabilities are multiplied across the steps of a stochastic derivation.

Besides these, fuzzy languages [3-6] have also been introduced into pattern recognition. By using similarity measures as membership functions, this approach describes patterns in a more understandable way than stochastic grammars. Moreover, fuzzy languages make use of individual characteristics of a class of patterns rather than collective characteristics as in stochastic languages, so it is probably easier to develop fuzzy grammars than stochastic ones. Yet much work still needs to be done to develop sufficient theory in this field for practical use.

An appropriate grammar is the core of any syntactic pattern recognition process. Grammars may be established by inference from a priori knowledge about the objects or scenes to be recognized. Another way to establish a pattern grammar is by direct inference from sample input patterns.

Once a grammar is derived from some sample input patterns, other patterns similar to them, or belonging to the same class, can be parsed according to that grammar. Grammatical inference therefore enables a system to learn most of the information in an input pattern and to apply the obtained knowledge to future recognition processes. This can be seen as the ultimate aim of image analysis.

Inference can be supervised or unsupervised. In supervised inference, a "teacher" who can discriminate valid from invalid strings helps by reducing the length of sentences or inserting substrings until some iterative regularity is detected. In unsupervised inference, no prior knowledge about the grammar is assumed.

The difficulty of inference grows with the complexity of the grammar, and the inference problem does not have a unique solution unless additional constraints are placed on the grammars. Some theoretical algorithms have been developed for inferring regular (finite-state) grammars, but they still have severe limitations in practice because of the large amount of computation due to combinatorial effects. Context-free grammars are even harder to deal with, since many properties that are decidable for regular grammars are undecidable for context-free grammars, such as the equivalence of two context-free grammars. Inference algorithms have therefore been developed only for some specific types of context-free grammars, and most of them rely on heuristic methods.

The syntactic approach to image analysis may be applied to many areas, including space object surveillance and identification [7].
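
A small Haskell sketch of the disambiguation rule described above (a toy of ours, not from the abstract): a derivation's probability is the product of the probabilities of the productions it used, and among the parses of an ambiguous string the most probable one is kept.

import Data.List (maximumBy)
import Data.Ord (comparing)

-- Represent a candidate derivation by the probabilities of the
-- productions it used; multiply them, as in stochastic derivations.
derivationProb :: [Double] -> Double
derivationProb = product

-- Among the candidate derivations of an ambiguous string (assumed
-- non-empty), keep the best-fitting, i.e. most probable, one.
bestDerivation :: [[Double]] -> [Double]
bestDerivation = maximumBy (comparing derivationProb)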

Journal ArticleDOI
TL;DR: A technique for constructing the parse table is given which, in the lookahead case, involves elimination of inverses in a grammar for lookahead strings for LR(0) items and computation of first sets for strings of symbols in the given grammar.
Abstract: Simple LR(1) and lookahead LR(1) phrase structure grammars are defined, and corresponding deterministic two-pushdown automata which parse all sentences are given. These grammars include a wide variety of grammars for non-context-free languages. A given phrase structure grammar is of one of these types if the parse table for the associated automaton has no multiple entries. A technique for constructing this parse table is given which, in the lookahead case, involves elimination of inverses in a grammar for lookahead strings for LR(0) items and computation of first sets for strings of symbols in the given grammar.
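
The table construction leans on ordinary FIRST-set computation. A minimal Haskell sketch of the usual fixpoint calculation (epsilon productions are ignored for brevity, so the FIRST set of a string of symbols is just the FIRST set of its first symbol; the paper's lookahead-grammar machinery is more involved):

import qualified Data.Map as M
import qualified Data.Set as S

type Sym = String
data Production = Prod Sym [Sym]

-- FIRST(A) collects, for every production A -> X ..., the current
-- FIRST set of X; terminals seed their own singletons. Iterate the
-- step until nothing changes.
firstSets :: [Sym] -> [Production] -> M.Map Sym (S.Set Sym)
firstSets terminals prods = fix seed
  where
    seed = M.fromList [(t, S.singleton t) | t <- terminals]
    step m = foldr add m prods
    add (Prod a (x:_)) m =
      M.insertWith S.union a (M.findWithDefault S.empty x m) m
    add _ m = m                     -- empty right-hand sides skipped
    fix m = let m' = step m in if m' == m then m else fix m'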

Journal ArticleDOI
01 Mar 1987
TL;DR: The theory of graphic grammars is presented, various programming implementations are discussed, and many motivating examples are given, including the development of biological organisms and the "semantic net" representation of expert-system knowledge.
Abstract: Graphics are graphs with attributes at their vertices. Graphic grammars are natural extensions of graph and attribute grammars, with rules that are attributed extensions of the "pushout" productions of graph grammars. The theory of graphic grammars is presented and various programming implementations are discussed. Many motivating examples are given, including the development of biological organisms and the "semantic net" representation of expert-system knowledge.

01 Oct 1987
TL;DR: This expository memorandum sets out the links between the two areas, via stochastic grammars, and points to stochastic context-free grammars as an interesting area for practical application.
Abstract: The theory of formal grammars is widely used in computer science and linguistics. Hidden Markov models are well established in automatic speech recognition. This expository memorandum sets out the links between the two areas, via stochastic grammars, and points to stochastic context-free grammars as an interesting area for practical application.
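
The link can be made concrete with a toy encoding of ours (not the memorandum's notation): an HMM read as a stochastic regular grammar has step rules A -> b B, each rewriting a state to a terminal and a successor state with some probability, plus stop probabilities; the probability of a string is the sum, over all derivations, of the products of the rule probabilities, which is what the HMM forward algorithm computes. A naive Haskell summation, kept deliberately close to the grammar view rather than the dynamic-programming pass:

-- A step rule (lhs, term, next, p) rewrites nonterminal lhs to the
-- terminal term followed by nonterminal next, with probability p.
type Step = (String, Char, String, Double)

-- A stop rule (lhs, q) ends the derivation at lhs with probability q.
type Stop = (String, Double)

-- Total probability of deriving a string from a given nonterminal:
-- the sum over all derivations of the product of rule probabilities.
stringProb :: [Step] -> [Stop] -> String -> String -> Double
stringProb _ stops a [] =
  sum [ q | (l, q) <- stops, l == a ]
stringProb steps stops a (c:cs) =
  sum [ p * stringProb steps stops next cs
      | (l, t, next, p) <- steps, l == a, t == c ]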