
Showing papers on "Tree-adjoining grammar published in 1988"


Book
01 Jan 1988
TL;DR: A tutorial introduction to the algebraic approach to graph grammars, together with introductions to hyperedge replacement and to parallel map generating systems.
Abstract: Tutorial introduction to the algebraic approach of graph grammars- May we introduce to you: Hyperedge replacement- An introduction to parallel map generating systems- Set theoretic approaches to graph grammars- An introduction to the NLC way of rewriting graphs- Array grammars- Graph grammar based specification of interconnection structures for massively parallel computation- Towards distributed graph grammars- On partially ordered graph grammars- A representation of graphs by algebraic expressions and its use for graph rewriting systems- On context-free sets of graphs and their monadic second-order theory- Restricting the complexity of regular DNLC languages- Apex graph grammars- Graph grammar engineering: A software specification method- A linguistic formalism for engineering solid modeling- Graph grammars and diagram editing- Graphics and their grammars- On network algebras and recursive equations- Ada-concurrency specified by graph grammars- Basic notions of actor grammars- Embedding rule independent theory of graph grammars- Supporting the software development process with attributed NLC graph grammars- Practical applications of precedence graph grammars- Is parallelism already concurrency? Part 1: Derivations in graph grammars- Is parallelism already concurrency? 
Part 2: Non-sequential processes in graph grammars- Map OL-systems with edge label control: Comparison of marker and cyclic systems- From 0L and 1L map systems to indeterminate and determinate growth in plant morphogenesis- Fundamentals of edge-label controlled graph grammars- Parallelism analysis in rule-based systems using graph grammars- An efficient algorithm for the solution of hierarchical networks of constraints- A software development environment based on graph technology- Map 0L systems with markers- Graph rewriting with unification and composition- Complexity of pattern generation via planar parallel binary fission/fusion grammars- Applications of L-systems to computer imagery- Advances in array languages- Rosenfeld's cycle grammars and kolam- Application of graph grammars in music composing systems- Boundary NLC and partition controlled graph grammars

401 citations


Proceedings ArticleDOI
22 Aug 1988
TL;DR: A novel general parsing strategy for 'lexicalized' grammars is discussed and it is argued that even if one extends the domain of locality of CFGs to trees, using only substitution does not give the freedom to choose the head of each structure.
Abstract: In this paper we present a general parsing strategy that arose from the development of an Earley-type parsing algorithm for TAGs (Schabes and Joshi 1988) and from recent linguistic work in TAGs (Abeille 1988). In our approach elementary structures are associated with their lexical heads. These structures specify extended domains of locality (as compared to a context-free grammar) over which constraints can be stated. These constraints either hold within the elementary structure itself or specify what other structures can be composed with a given elementary structure. We state the conditions under which context-free based grammars can be 'lexicalized' without changing the linguistic structures originally produced. We argue that even if one extends the domain of locality of CFGs to trees, using only substitution does not give the freedom to choose the head of each structure. We show how adjunction allows us to 'lexicalize' a CFG freely. We then show how a 'lexicalized' grammar naturally follows from the extended domain of locality of TAGs and present some of the linguistic advantages of our approach. A novel general parsing strategy for 'lexicalized' grammars is discussed. In a first stage, the parser builds a set of structures corresponding to the input sentence and in a second stage, the sentence is parsed with respect to this set. The strategy is independent of the linguistic theory adopted and of the underlying grammar formalism. However, we focus our attention on TAGs. Since the set of trees needed to parse an input sentence is supposed to be finite, the parser can use in principle any search strategy. Thus, in particular, a top-down strategy can be used since problems due to recursive structures are eliminated. The parser is also able to use non-local information to guide the search. We then explain how the Earley-type parser for TAGs can be modified to take advantage of this approach.
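The first stage of the two-stage strategy can be sketched in a few lines of Python. The toy lexicon and the string notation for elementary trees below are illustrative assumptions, not the grammar of the paper:

```python
# Stage 1 of the two-stage strategy: select the elementary structures
# anchored by the words of the input sentence. The lexicon is a
# hypothetical toy example; tree shapes are written as plain strings.
TOY_LEXICON = {
    "John":  ["NP(John)"],
    "eats":  ["S(NP, VP(eats, NP))", "S(NP, VP(eats))"],
    "pizza": ["NP(pizza)"],
}

def select_structures(sentence, lexicon):
    """Return the finite set of elementary structures relevant to the input.

    Stage 2 (not shown) would parse the sentence using only this set, which
    is why any search strategy -- including top-down -- becomes usable.
    """
    selected = []
    for word in sentence.split():
        selected.extend(lexicon.get(word, []))
    return selected

structures = select_structures("John eats pizza", TOY_LEXICON)
```

Because the selected set is finite and tied to the actual words of the input, recursion in the grammar no longer threatens termination of a top-down search.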

264 citations


BookDOI
TL;DR: Categorial Grammars as Theories of Language, The Lambek Calculus, Generative Power of Categorial Grammars, Semantic Categories and the Development of Categorial Grammars, and Categorial Grammar and Phrase Structure Grammar: An Excursion on the Syntax-Semantics Frontier, as discussed by the authors.
Abstract: Categorial Grammars as Theories of Language- The Lambek Calculus- Generative Power of Categorial Grammars- Semantic Categories and the Development of Categorial Grammars- Aspects of a Categorial Theory of Binding- Type Raising, Functional Composition, and Non-Constituent Conjunction- Implications of Process-Morphology for Categorial Grammar- Phrasal Verbs and the Categories of Postponement- Natural Language Motivations for Extending Categorial Grammar- Categorial and Categorical Grammars- Mixed Composition and Discontinuous Dependencies- Multi-Dimensional Compositional Functions as a Basis for Grammatical Analysis- Categorial Grammar and Phrase Structure Grammar: An Excursion on the Syntax-Semantics Frontier- Combinators and Grammars- A Typology of Functors and Categories- Consequences of Some Categorially-Motivated Phonological Assumptions- Index of Names- Index of Subjects- Index of Categories and Functors

264 citations


Book ChapterDOI
01 Jan 1988
TL;DR: This chapter shows that extending context-free phrase structure grammars with a single stack-valued feature yields the indexed grammars, a class that has been alluded to a number of times in the recent linguistic literature: by Klein (1981) in connection with nested comparative constructions, by Dahl (1982) in connection with topicalised pronouns, and by Engdahl (1982) and Gazdar (1982) in connection with Scandinavian unbounded dependencies.
Abstract: If we take the class of context-free phrase structure grammars (CF-PSGs) and modify it so that (i) grammars are allowed to make use of finite feature systems and (ii) rules are permitted to manipulate the features in arbitrary ways, then what we end up with is equivalent to what we started out with. Suppose, however, that we take the class of context-free phrase structure grammars and modify it so that (i) grammars are allowed to employ a single designated feature that takes stacks of items drawn from some finite set as its values, and (ii) rules are permitted to push items onto, pop items from, and copy the stack. What we end up with now is no longer equivalent to the CF-PSGs but is significantly more powerful, namely the indexed grammars (Aho, 1968). This class of grammars has been alluded to a number of times in the recent linguistic literature: by Klein (1981) in connection with nested comparative constructions, by Dahl (1982) in connection with topicalised pronouns, by Engdahl (1982) and Gazdar (1982) in connection with Scandinavian unbounded dependencies, by Huybregts (1984) and Pulman and Ritchie (1984) in connection with Dutch, by Marsh and Partee (1984) in connection with variable binding, and doubtless elsewhere as well.
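The stack-valued feature can be made concrete with a small simulation. The indexed grammar for { a^n b^n c^n } below is a standard textbook-style illustration of push, pop, and copy, not a grammar taken from the chapter:

```python
# A hypothetical indexed grammar for { a^n b^n c^n }, simulated directly.
# The single designated stack feature is a Python list; the rules push,
# pop, and copy it -- exactly the three operations permitted:
#   S[s]  -> a S[f s]     (push f)
#   S[s]  -> T[s]         (copy the stack to T)
#   T[fs] -> b T[s] c     (pop f)
#   T[]   -> empty string

def derive(n):
    """Derive a^n b^n c^n by following the rules above."""
    stack = []
    prefix = []
    for _ in range(n):          # apply S[s] -> a S[f s]  n times
        prefix.append("a")
        stack.append("f")
    middle, suffix = [], []     # S[s] -> T[s]: the stack is copied over
    while stack:                # apply T[fs] -> b T[s] c until empty
        stack.pop()
        middle.append("b")
        suffix.append("c")
    return "".join(prefix + middle + suffix)
```

The stack length records how many a's were produced, which is what lets the b's and c's match it -- the extra power over CF-PSGs that the chapter describes.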

185 citations


Book
01 Jan 1988
Abstract: Attribute grammars: definitions, systems and bibliography.

181 citations


Proceedings ArticleDOI
22 Aug 1988
TL;DR: Two algorithms which construct two different types of generators for lexical functional grammars (LFGs) are described, which generate sentences from functional structures and the second from semantic structures.
Abstract: This paper describes two algorithms which construct two different types of generators for lexical functional grammars (LFGs). The first type generates sentences from functional structures and the second from semantic structures. The latter works on the basis of extended LFGs, which contain a mapping from f-structures into semantic structures. Both algorithms can be used on all grammars within the respective class of LFG-grammars. Thus sentences can be generated from input structures by means of LFG-grammars and the same grammar formalism, although not necessarily the same grammar, can be used for both analysis and synthesis.

57 citations


Book ChapterDOI
01 Jan 1988
TL;DR: This chapter surveys the author’s results on the strong and weak generative capacity of various kinds of categorial grammars.
Abstract: This paper surveys the author’s results in strong and weak generative capacity of various kinds of categorial grammars. The growing interest of linguists, computer scientists, and linguistically-minded logicians in the domain of categorial grammar calls for a new, more advanced and profound elaboration of its internal mathematics, in which the problems to be discussed here have traditionally been recognized to play a quite fundamental role.

56 citations


Book ChapterDOI
11 Feb 1988
TL;DR: An abstract notion of context-free grammar is introduced that deals with abstract objects that can be words, trees, graphs or other combinatorial objects and is applied to NLC graph grammars introduced by Rozenberg and Janssens.
Abstract: An abstract notion of context-free grammar is introduced. It deals with abstract objects that can be words, trees, graphs or other combinatorial objects. It is applied to NLC graph grammars introduced by Rozenberg and Janssens. The monadic second-order theory of a context-free NLC set of graphs is decidable.

55 citations


Proceedings ArticleDOI
07 Jun 1988
TL;DR: The structural descriptions produced by Combinatory Categorial Grammars are discussed and compared to those of grammar formalisms in the class of Linear Context-Free Rewriting Systems.
Abstract: Recent results have established that there is a family of languages that is exactly the class of languages generated by three independently developed grammar formalisms: Tree Adjoining Grammars, Head Grammars, and Linear Indexed Grammars. In this paper we show that Combinatory Categorial Grammars also generate the same class of languages. We discuss the structural descriptions produced by Combinatory Categorial Grammars and compare them to those of grammar formalisms in the class of Linear Context-Free Rewriting Systems. We also discuss certain extensions of Combinatory Categorial Grammars and their effect on the weak generative capacity.

53 citations



Proceedings ArticleDOI
22 Aug 1988
TL;DR: It is shown that the TAG formalism provides sufficient constraints for handling most of the linguistic phenomena with minimal linguistic stipulations; this is the first time support verb constructions are handled in a parser.
Abstract: We present the first sizable grammar written for TAG. We present the linguistic coverage of our grammar, and explain the linguistic reasons which led us to choose the particular representations. We show that the TAG formalism provides sufficient constraints for handling most of the linguistic phenomena, with minimal linguistic stipulations. We first state the basic structures needed for parsing French, with a particular emphasis on TAG's extended domain of locality that enables us to state complex sub-categorization phenomena in a natural way. We then give a detailed analysis of sentential complements, because it has led us to introduce substitution in the formalism, and because TAG makes interesting predictions. We discuss the different linguistic phenomena corresponding to adjunction and to substitution respectively. We then move on to support verb constructions, which are represented in a TAG in a simpler way than the usual double analysis. It is the first time support verb constructions are handled in a parser. We lastly give an overview of the treatment of adverbs, and suggest a treatment of idioms which makes them fall into the same representations as 'free' structures.

Book ChapterDOI
29 Aug 1988
TL;DR: Each phrase-structure grammar can be replaced by an equivalent grammar with all of the rules context-free, of the form S→v, where S is the initial symbol.
Abstract: Some new normal forms for the phrase-structure grammars are presented. Each phrase-structure grammar can be replaced by an equivalent grammar with all of the rules context-free, of the form S→v, where S is the initial symbol, and either two extra rules AB→ ɛ, CD→ ɛ, or two extra rules AB→ ɛ, CC→ ɛ, or two extra rules AA→ ɛ, BBB→ ɛ, or even a single extra rule ABBBA→ ɛ, or a single extra rule ABC→ ɛ.
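The non-context-free power of these normal forms lives entirely in the few extra erasing rules. A minimal sketch of applying one of them, AB→ɛ, exhaustively to a sentential form (the input strings are made-up illustrations):

```python
# In these normal forms every ordinary rule is context-free; the extra
# rules only erase fixed nonterminal pairs. This toy step applies one
# such rule, AB -> empty string, exhaustively to a sentential form.
def erase_AB(sentential_form):
    """Delete adjacent 'AB' pairs until none remain (rule AB -> epsilon)."""
    s = sentential_form
    while "AB" in s:
        s = s.replace("AB", "", 1)   # one rewriting step at a time
    return s
```

Deleting a pair can bring a new A and B next to each other, so the loop keeps rewriting until the rule no longer applies, mirroring repeated application of the erasing rule in a derivation.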

Book ChapterDOI
01 Jan 1988
TL;DR: This paper will take as a general framework the program and set of assumptions that have been called ‘extended Montague grammar’ and in particular a slightly modified version of Montague’s ‘Universal Grammar’ (UG: Paper 7 in Montague, 1974).
Abstract: In recent years, there has been a growing interest in categorial grammar as a framework for formulating empirical theories about natural language. This conference bears witness to that revival of interest. How well does this framework fare when used in this way? And how well do particular theories in what we might call the family of categorial theories fare when they are put up against the test of natural language description and explanation? I say ‘family’ of theories, for there have been a number of different developments, all of which take off from the fundamental idea of a categorial grammar as it was first introduced by Ajdukiewicz and later modified and studied by Bar-Hillel, Curry, and Lambek. In this paper I would like to discuss these questions, considering a number of different hypotheses that have been put forward within the broad framework that we may call ‘extended categorial grammar’ and making a few comparisons with other theories. In my remarks, I will take as a general framework the program and set of assumptions that have been called ‘extended Montague grammar’ and in particular a slightly modified version of Montague’s ‘Universal Grammar’ (UG: Paper 7 in Montague, 1974). From this point of view, the syntax of a language is looked at as a kind of algebra. Then, the empirical problem of categorial grammar can be seen as part of a general program that tries to answer these questions: (A) What is the set of primitive and derived categories that we need to describe and explain natural languages in their syntax and semantics (and phonology, etc.)? (B) What are the operations that we need to describe and explain natural languages (in the syntax, semantics, phonology, morphology, etc.)? (C) What are the relations that we need in order to hook up with each other the various categories and operations mentioned or alluded to in (A) and (B)?

Journal ArticleDOI
TL;DR: The notions introduced in the paper are useful for research on less restricted edNLC graph grammars, for example grammars analogous to context-free string grammars.

Journal ArticleDOI
TL;DR: A constructive method for the inference of even linear grammars from positive samples is proposed, and it is shown that the method can be used in a hierarchical manner to infer grammars for more complex pictures.

Proceedings ArticleDOI
22 Aug 1988
TL;DR: This paper constructs a parser by compiling systemic grammars into the notation of Functional Unification Grammar, and testing is the basis for some observations about the bidirectional use of a grammar.
Abstract: We describe a general parsing method for systemic grammars. Systemic grammars contain a paradigmatic analysis of language in addition to structural information, so a parser must assign a set of grammatical features and functions to each constituent in addition to producing a constituent structure. Our method constructs a parser by compiling systemic grammars into the notation of Functional Unification Grammar. The existing methods for parsing with unification grammars have been extended to handle a fuller range of paradigmatic descriptions. In particular, the PATR-II system has been extended by using disjunctive and conditional information in functional descriptions that are attached to phrase structure rules. The method has been tested with a large grammar of English which was originally developed for text generation. This testing is the basis for some observations about the bidirectional use of a grammar.

Book ChapterDOI
01 Jan 1988
TL;DR: A main claim about Categorial Grammars is that they involve semantic categories rather than the standard syntactic categories employed in linguistic description, but, what kind of entities are semantic categories?
Abstract: A main claim about Categorial Grammars is that they involve semantic categories rather than the standard syntactic categories employed in linguistic description. But, what kind of entities are semantic categories? What relation do they impose between syntactic structure and semantic representation?

Proceedings ArticleDOI
01 Dec 1988
TL;DR: An efficient algorithm for learning context-free grammars using two types of queries, structural equivalence queries and structural membership queries, is presented, and it is shown that a grammar learned by the algorithm is not only correct but also structurally equivalent to the unknown grammar.
Abstract: We consider the problem of learning a context-free grammar from its structural descriptions. Structural descriptions of a context-free grammar are unlabelled derivation trees of the grammar. We present an efficient algorithm for learning context-free grammars using two types of queries: structural equivalence queries and structural membership queries. The learning protocol is based on what is called “minimally adequate teacher”, and it is shown that a grammar learned by the algorithm is not only a correct grammar, i.e. equivalent to the unknown grammar but also structurally equivalent to it. Furthermore, the algorithm runs in time polynomial in the number of states of the minimum frontier-to-root tree automaton for the set of structural descriptions of the unknown grammar and the maximum size of any counter-example returned by a structural equivalence query.
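A structural membership query can be sketched directly: given an unlabelled derivation tree (internal nodes unlabelled, leaves carrying terminals), compute which nonterminals could label the root. The grammar encoding and the example grammar below are assumptions for illustration, not the paper's algorithm:

```python
def root_labels(tree, grammar):
    """Set of nonterminals that can label the root of an unlabelled
    derivation tree. A structural membership query for a grammar with
    start symbol S succeeds iff S is in this set.

    tree:    a terminal string (leaf) or a tuple of subtrees
    grammar: dict mapping each nonterminal to a list of right-hand sides,
             each right-hand side a tuple of symbols
    """
    if isinstance(tree, str):
        return {tree}                     # a leaf is its terminal symbol
    child_sets = [root_labels(child, grammar) for child in tree]
    labels = set()
    for nt, alternatives in grammar.items():
        for rhs in alternatives:
            if len(rhs) == len(tree) and all(
                sym in options for sym, options in zip(rhs, child_sets)
            ):
                labels.add(nt)
    return labels

# Hypothetical grammar: S -> a S b | a b, generating { a^n b^n }
G = {"S": [("a", "S", "b"), ("a", "b")]}
```

This bottom-up labelling is essentially what a frontier-to-root tree automaton for the set of structural descriptions computes, which is why the algorithm's running time is stated in terms of that automaton's size.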

Book
Daniel M. Yellin1
15 Apr 1988
TL;DR: Grammar-based translation methodologies built on RIFs: the inversion of RIF grammars, generalizing RIFs, the INVERT system, and translating between programming languages.
Abstract: Grammar based translation methodologies and RIFs- The inversion of RIF grammars- Generalizing RIFs- The INVERT system- Translating between programming languages- Conclusions and future directions

Journal ArticleDOI
01 Nov 1988
TL;DR: A grammar that formally constructs the types of diagrams used in system dynamics is fully described and a method to achieve a formal program for interactive modeling upon any diagram, starting from the formal specification of the diagram, is presented.
Abstract: A grammar that formally constructs the types of diagrams used in system dynamics is fully described. It belongs to the special kinds of grammars (attributed programmed graph grammars) that are applied to the construction of graphs and geometric figures. The flow diagrams used in system dynamics have been defined by 'attributed graphs' so that the approach could be applied. The grammar manipulates the graphs according to the requirements of the methodology. Basically, a method to achieve a formal program for interactive modeling upon any diagram, starting from the formal specification of the diagram, is presented.


15 Sep 1988
TL;DR: P-PATR as mentioned in this paper is a compiler for unification-based grammars that is written in Quintus Prolog running on a Sun 2 workstation and is based on the PATR-II formalism developed at SRI International.
Abstract: P-PATR is a compiler for unification-based grammars that is written in Quintus Prolog running on a Sun 2 workstation. P-PATR is based on the PATR-II formalism [14] developed at SRI International. PATR is a simple, unification-based formalism capable of encoding a wide variety of grammars. As a result of this versatility, several parsing systems and development environments based on this formalism have been implemented [18,5]. P-PATR is one such system, designed in response to the slow parse times of most of the other PATR implementations. Most of the currently running PATR systems operate by interpreting a PATR grammar. P-PATR differs from these systems by compiling the grammar into a Prolog definite clause grammar (DCG) [8]. The compilation is done only once for a given grammar; the resulting DCG contains all the information in the original PATR grammar in a form readily conducive to parsing. The advantage of compilation is that less work needs to be done during parsing, as some of the necessary computations have already been performed in the compilation phase.
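The unification that a PATR-style system performs over feature structures can be sketched with nested dicts. This is an illustrative toy, not P-PATR's actual representation; compilation to a DCG moves part of this work from parse time to grammar-compilation time:

```python
def unify(f, g):
    """Unify two feature structures represented as nested dicts.

    Returns the unified structure, or None on a feature clash. A toy
    version of the operation a unification-based parser performs when
    combining constituents.
    """
    if f == g:
        return f                       # identical values unify trivially
    if isinstance(f, dict) and isinstance(g, dict):
        out = dict(f)
        for key, val in g.items():
            if key in out:
                sub = unify(out[key], val)
                if sub is None:
                    return None        # clash somewhere below this feature
                out[key] = sub
            else:
                out[key] = val         # feature present in only one side
        return out
    return None                        # distinct atomic values do not unify
```

For example, unifying an agreement structure that fixes number with one that fixes person merges the two, while conflicting atomic values (singular vs. plural) fail.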

Book ChapterDOI
16 May 1988
TL;DR: It was possible to use AG techniques to replace, during the execution of TYPOL specifications, run-time unification by semantic attribute evaluation, using a general construction that builds an Attribute Grammar from a TYPOL program.
Abstract: We have shown in a previous paper that TYPOL specifications and Attribute Grammars are strongly related: we presented a general construction to build an Attribute Grammar from a TYPOL program. Thus, it was possible to use AG techniques to replace, during the execution of TYPOL specifications, run-time unification by semantic attribute evaluation.

01 Jan 1988
TL;DR: This work presents the first sizable grammar written in the Tree Adjoining Grammar formalism (TAG), gives an overview of the treatment of adjuncts, and suggests a treatment of idioms which makes them fall into the same representations as 'free' structures.
Abstract: We present the first sizable grammar written in the Tree Adjoining Grammar formalism (TAG). In particular we have used 'lexicalized' TAGs as described in [Schabes, Abeille and Joshi 1988]. We present the linguistic coverage of our grammar, and explain the linguistic reasons which led us to choose the particular representations. We have shown that a wide range of linguistic phenomena can be handled within the TAG formalism with lexically specified structures only. We first state the basic structures needed for French, with a particular emphasis on TAG's extended domain of locality that enables us to state complex subcategorization phenomena in a natural way. We motivate the choice of the head for the different structures and we contrast the treatment of nominal arguments with that of sentential ones, which is particular to the TAG framework. We also give a detailed analysis of sentential complements, because it has led us to introduce substitution into the formalism, and because TAG makes interesting predictions in these cases. We discuss the different linguistic phenomena corresponding to adjunction and to substitution respectively. We then move on to 'light verb' constructions, in which extraction freely occurs out of the predicative NP. They are handled in a TAG straightforwardly as opposed to the usual double analysis. We lastly give an overview of the treatment of adjuncts, and suggest a treatment of idioms which makes them fall into the same representations as 'free' structures. (A Lexicalized Tree Adjoining Grammar for French: The General Framework, Anne Abeillé, University of Pennsylvania Department of Computer and Information Science Technical Report No. MS-CIS-88-64, LINC LAB 125; available at ScholarlyCommons: http://repository.upenn.edu/cis_reports/683)


Journal ArticleDOI
TL;DR: A new proof of this theorem is given which relies on the algebra of phrase structures and exhibits a possibility to justify the key construction used in Gaifman's proof by means of the Lambek calculus of syntactic types.
Abstract: The equivalence of (classical) categorial grammars and context-free grammars, proved by Gaifman [4], is a very basic result of the theory of formal grammars (an essentially equivalent result is known as the Greibach normal form theorem [1], [14]). We analyse the contents of Gaifman's theorem within the framework of structure and type transformations. We give a new proof of this theorem which relies on the algebra of phrase structures and exhibit a possibility to justify the key construction used in Gaifman's proof by means of the Lambek calculus of syntactic types [15].



Book ChapterDOI
29 Aug 1988
TL;DR: A theory of breadth-first (BF) phrase-structure grammars is defined and developed; BCF languages are recognized by a queue automaton with essentially a single state, and display an interesting "pumping lemma" which allows some language family comparisons.
Abstract: We define and develop a theory of breadth-first (BF) phrase-structure grammars. Their novelty comes from a different application of rewriting rules to derivations: the least recently produced nonterminal symbol must be rewritten first. In other words, nonterminals in a sentential form are inserted and rewritten by a FIFO discipline. The naturally corresponding recognizer is then a queue automaton, a class of devices equipped with a FIFO memory tape, investigated by Brandenburg [2],[3], Vauquelin and Franchi Zannettacci [6]. However the idea of BF grammars is original, and corresponds to automata making a restricted use of states (for type 2 grammars). Ayers's [1] automata can operate on two ends of the tape: the grammatical characterization is in terms of ordered type 1 grammars. BF grammars of type 0 and 1 have the same generative capacity as their classical counterparts, and correspond to unrestricted queue automata and to linearly bounded queue automata respectively. Type 2 BF grammars (or Breadth-first Context-Free) are essentially different from context-free grammars (e.g. they generate the anagrams on three letters but not the palindromes). BCF languages are recognized by a queue automaton with essentially a single state, and display an interesting "pumping lemma" which allows us to obtain some language family comparisons.
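The FIFO rewriting discipline is easy to simulate: nonterminals enter a queue as they are produced, and the oldest one is always rewritten first. The grammar and choice sequence below are made-up illustrations, not examples from the paper:

```python
from collections import deque
import itertools

def bf_derive(rules, start, choices):
    """Breadth-first derivation: rewrite the least recently produced
    nonterminal first (FIFO discipline).

    rules:   dict nonterminal -> list of alternative right-hand sides
             (tuples of symbols; a symbol is nonterminal iff it keys `rules`)
    choices: sequence of indices selecting an alternative at each step
    Returns the derived terminal string.
    """
    ids = itertools.count()
    form = [(start, next(ids))]               # sentential form, symbols tagged
    queue = deque(uid for sym, uid in form if sym in rules)
    picks = iter(choices)
    while queue:
        target = queue.popleft()              # FIFO: oldest nonterminal first
        pos = next(i for i, (_, uid) in enumerate(form) if uid == target)
        nt = form[pos][0]
        rhs = rules[nt][next(picks)]
        fresh = [(sym, next(ids)) for sym in rhs]
        queue.extend(uid for sym, uid in fresh if sym in rules)
        form[pos:pos + 1] = fresh             # splice the rhs into the form
    return "".join(sym for sym, _ in form)

# S -> aSb | epsilon, driven breadth-first: with one nonterminal the FIFO
# order coincides with the usual leftmost derivation, giving a^n b^n.
word = bf_derive({"S": [("a", "S", "b"), ()]}, "S", [0, 0, 1])
```

With several nonterminals alive at once the FIFO order diverges from leftmost rewriting, which is where BF type 2 grammars part company with context-free ones.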

Proceedings ArticleDOI
T. Rus1, J.P. Le Peau1
01 Jan 1988
TL;DR: Two classes of algorithms for language parsing based on multi-axiom grammars are developed: an algorithm obtained by generalizing context-free LR-parsers to multi-axiom grammars, and a pattern-matching algorithm that results from the ability to layer a multi-axiom language into levels such that each sublanguage is independent of the language that contains it.
Abstract: Multi-axiom grammars and languages, presented as generalizations of context-free grammars and languages, are defined and used as a mechanism for programming language specification and implementation. It is shown how to divide such a grammar into a sequence of subgrammars that generate inductively the language specified by the original grammar. Furthermore, it is shown how to use this sequence of subgrammars for inductive language recognition by a process of tokenizing. Two classes of algorithms for language parsing based on multi-axiom grammars are developed: an algorithm obtained by generalizing context-free LR-parsers to multi-axiom grammars, and a pattern-matching algorithm that results from the ability to layer a multi-axiom language into levels such that each sublanguage is independent of the language that contains it. The implications of multi-axiom grammars for compiler code generation are briefly discussed.