
Showing papers on "Tree-adjoining grammar published in 1990"


01 Jan 1990
TL;DR: Presents a representation of prepositional complements based on extended elementary trees, and shows how to deal with semantic non-compositionality in verb-particle combinations, light verb constructions and idioms without losing the internal syntactic composition of these structures.
Abstract: This paper presents a sizable grammar for English written in the Tree Adjoining Grammar (TAG) formalism. The grammar uses a TAG that is both lexicalized (Schabes, Abeille, Joshi 1988) and feature-based (Vijay-Shanker, Joshi 1988). In this paper, we describe a wide range of phenomena that it covers. A Lexicalized TAG (LTAG) is organized around a lexicon, which associates sets of elementary trees (instead of just simple categories) with the lexical items. A Lexicalized TAG consists of a finite set of trees associated with lexical items, and operations (adjunction and substitution) for composing the trees. A lexical item is called the anchor of its corresponding tree and directly determines both the tree's structure and its syntactic features. In particular, the trees define the domain of locality over which constraints are specified, and these constraints are local with respect to their anchor. In this paper, the basic tree structures of the English LTAG are described, along with some relevant features. The interaction between the morphological and the syntactic components of the lexicon is also explained. Next, the properties of the different tree structures are discussed. The use of S complements exclusively allows us to take full advantage of the treatment of unbounded dependencies originally presented in Joshi (1985) and Kroch and Joshi (1985). Structures for auxiliaries and raising verbs which use adjunction trees are also discussed. We present a representation of prepositional complements that is based on extended elementary trees. This representation avoids the need for preposition incorporation in order to account for double wh-questions (preposition stranding and pied-piping) and the pseudo-passive. A treatment of light verb constructions is also given, similar to what Abeille (1988c) has presented. Again, neither noun nor adjective incorporation is needed to handle double passives and to account for CNPC violations in these constructions. TAG's extended domain of locality allows us to handle, within a single level of syntactic description, phenomena that in other frameworks require either dual analyses or reanalysis. In addition, following Abeille and Schabes (1989), we describe how to deal with semantic non-compositionality in verb-particle combinations, light verb constructions and idioms, without losing the internal syntactic composition of these structures. The last sections discuss current work on PRO, case, anaphora and negation, and outline future work on copula constructions and small clauses, optional arguments, adverb movement and the nature of syntactic rules in a lexicalized framework. (University of Pennsylvania Department of Computer and Information Science Technical Report No. MS-CIS-90-24, LINC LAB 170, by Anne Abeille, Kathleen Bishop, Sharon Cote and Yves Schabes. Available at ScholarlyCommons: http://repository.upenn.edu/cis_reports/527)
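The two composition operations named above, substitution and adjunction, can be made concrete with a small sketch. The following Python is a hypothetical toy encoding (the Node class, the helper names, and the example trees are illustrative assumptions, not the paper's English grammar): an initial tree anchored by "likes" takes its NP arguments by substitution, and an auxiliary tree anchored by "really" is spliced in by adjunction.

# Minimal sketch of lexicalized TAG composition (illustrative only).
class Node:
    def __init__(self, label, children=None, subst=False, foot=False):
        self.label = label              # syntactic category or word
        self.children = children or []  # empty list = leaf
        self.subst = subst              # marked as a substitution site
        self.foot = foot                # foot node of an auxiliary tree

def substitute(tree, category, arg):
    """Fill the first substitution site of the given category with arg."""
    for i, child in enumerate(tree.children):
        if child.subst and child.label == category:
            tree.children[i] = arg
            return True
        if substitute(child, category, arg):
            return True
    return False

def find_foot(tree):
    if tree.foot:
        return tree
    for c in tree.children:
        f = find_foot(c)
        if f:
            return f
    return None

def adjoin(tree, category, aux):
    """Splice an auxiliary tree in at the first internal node of `category`;
    the excised subtree re-attaches under the auxiliary tree's foot."""
    for i, child in enumerate(tree.children):
        if child.label == category and child.children and not child.subst:
            find_foot(aux).children = child.children
            tree.children[i] = aux
            return True
        if adjoin(child, category, aux):
            return True
    return False

def yield_of(t):
    return " ".join(yield_of(c) for c in t.children) if t.children else t.label

# Elementary tree anchored by "likes": S(NP! VP(V(likes) NP!))
likes = Node("S", [Node("NP", subst=True),
                   Node("VP", [Node("V", [Node("likes")]),
                               Node("NP", subst=True)])])
# Auxiliary tree anchored by "really": VP(ADV(really) VP*)
really = Node("VP", [Node("ADV", [Node("really")]), Node("VP", foot=True)])

substitute(likes, "NP", Node("NP", [Node("John")]))
substitute(likes, "NP", Node("NP", [Node("Mary")]))
adjoin(likes, "VP", really)
print(yield_of(likes))  # -> John really likes Mary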

326 citations


Journal ArticleDOI
TL;DR: In this paper, an efficient algorithm for learning context-free grammars using two types of queries, structural equivalence queries and structural membership queries, is presented. It is shown that a grammar learned by the algorithm is not only correct but also structurally equivalent to the unknown grammar.
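As a rough illustration of the setting, the learner interacts with a teacher through exactly these two query types, asked about "skeletons" (unlabelled derivation trees) of the unknown grammar. The sketch below is a hypothetical Python interface; the names and signatures are assumptions, and the paper's algorithm itself is considerably more involved.

from typing import Callable, Optional

class StructuralOracle:
    """Hypothetical teacher interface for an unknown context-free grammar,
    answering queries in terms of the skeletons the grammar admits."""

    def __init__(self,
                 admits: Callable[[object], bool],
                 counterexample: Callable[[object], Optional[object]]):
        self._admits = admits
        self._counterexample = counterexample

    def structural_membership(self, skeleton) -> bool:
        # Is `skeleton` the shape of some derivation tree of the target?
        return self._admits(skeleton)

    def structural_equivalence(self, hypothesis) -> Optional[object]:
        # None signals success; otherwise a distinguishing skeleton is
        # returned for the learner to process as a counterexample.
        return self._counterexample(hypothesis)

# Toy instantiation: the target admits exactly one skeleton.
oracle = StructuralOracle(lambda sk: sk == ("S", ("a",)), lambda g: None)
print(oracle.structural_membership(("S", ("a",))))  # -> True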

176 citations


Book ChapterDOI
19 Sep 1990
TL;DR: I have always been fascinated by the origin of ideas, so I am pleased that the organizers of this conference have asked me to open the proceedings by taking a look back at how the subject began; in other words, I'm glad that my job is to give this conference a "history attribute."
Abstract: I have always been fascinated by the origin of ideas, so I am pleased that the organizers of this conference have asked me to open the proceedings by taking a look back at how the subject began. In other words, I'm glad that my job is to give this conference a "history attribute." Attribute grammars were born in the exhilarating days of the mid-60s when a great many fundamental principles of computer science were beginning to be understood. An enormous number of ideas were floating in the air, waiting to be captured in just the right combinations that would prove to be powerful sources of future energy. The human mind is notoriously bad at reconstructing past experiences, so I have tried to make my story as accurate as possible by digging up as many scraps of authentic information from the early days as I could find. People say that nostalgia isn't what it used to be, but I hope that the participants of this conference will be able to share some of the pleasure I have experienced while preparing this retrospective study. Much of my story takes place in 1967, by which time a great many sophisticated computer programs had been written all over the world. Computing machines had come a long way since their invention in the 30s and 40s; for example, the recently announced Burroughs B6500 was aptly called a "third generation" system [35], and J. P. Eckert was noting a trend toward parallelism in new computer designs [37]. Yet many problems without easy solutions made people well aware that the vast potential of computers was only beginning to be tapped and that a great many important issues were not at all well understood. One of the puzzling questions under extensive investigation at the time was the problem of programming language semantics: How should we define the meaning of statements in algorithmic languages? Dozens of experts had gathered in Vienna in 1964 for a conference on formal language description, and the prevailing mood at that conference is well summarized by the remarks of T. B. Steel, Jr.: "Improvements in programming language description methods are imperative... I don't fully know myself how to describe the semantics of a language. I daresay nobody does or we wouldn't be here" [31]. Steel edited the proceedings of that meeting, which include lively transcriptions of discussions …

93 citations


Proceedings ArticleDOI
20 Aug 1990
TL;DR: This paper defines lexical transfer rules that avoid the defects of a mere word-to-word approach but still benefit from the simplicity and elegance of a lexical approach.
Abstract: Lexicalized Tree Adjoining Grammar (LTAG) is an attractive formalism for linguistic description mainly because of its extended domain of locality and its factoring of recursion out from the domain of local dependencies (Joshi, 1985, Kroch and Joshi, 1985, Abeille, 1988). LTAG's extended domain of locality enables one to localize syntactic dependencies (such as filler-gap), as well as semantic dependencies (such as predicate-arguments). The aim of this paper is to show that these properties, combined with the lexicalized property of LTAG, are especially attractive for machine translation. The transfer between two languages, such as French and English, can be done by putting large elementary units directly into correspondence, without going through some interlingual representation and without major changes to the source and target grammars. The underlying formalism for the transfer is "synchronous Tree Adjoining Grammars" (Shieber and Schabes [1990]). Transfer rules are stated as correspondences between nodes of trees with a large domain of locality which are associated with words. We can thus define lexical transfer rules that avoid the defects of a mere word-to-word approach but still benefit from the simplicity and elegance of a lexical approach. We rely on the French and English LTAG grammars (Abeille [1988], Abeille [1990 (b)], Abeille et al. [1990], Abeille and Schabes [1989, 1990]) that have been designed over the past two years jointly at the University of Pennsylvania and the University of Paris 7-Jussieu.
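The flavor of such lexical transfer rules can be suggested with a toy. A classic pair is French "manquer a" and English "miss", whose argument links are crossed (Marie manque a Jean <-> John misses Mary). The Python below is a hypothetical simplification that keeps only the anchor pairing and the argument links, not the actual elementary-tree pairs of synchronous TAG; all names and data are illustrative assumptions.

# Sketch of a lexical transfer rule: paired anchors plus crossed
# argument links (illustrative encoding, not the paper's lexicon).
RULES = {
    "manquer": {
        "target_anchor": "miss",
        # source argument position -> target argument position
        "links": {"subject": "object", "pp_object": "subject"},
    },
}

LEXICAL = {"Marie": "Mary", "Jean": "John"}

def transfer(anchor, args):
    """Map a source predicate and its arguments through a transfer rule."""
    rule = RULES[anchor]
    out = {rule["links"][pos]: LEXICAL[word] for pos, word in args.items()}
    return rule["target_anchor"], out

print(transfer("manquer", {"subject": "Marie", "pp_object": "Jean"}))
# -> ('miss', {'object': 'Mary', 'subject': 'John'})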

91 citations


Journal ArticleDOI
TL;DR: It is shown that the class of boundary graph languages is closed under the operation of edge contraction, where the label of the edge indicates whether or not the edge should be contracted.
Abstract: Context-free hypergraph grammars and boundary graph grammars of bounded nonterminal degree have the same power, both for generating sets of graphs and for generating sets of hypergraphs. Arbitrary boundary graph grammars have more graph generating power than context-free hypergraph grammars, but they have the same hypergraph generating power. To obtain these results, several normal forms for boundary graph grammars are given. It is also shown that the class of boundary graph languages is closed under the operation of edge contraction, where the label of the edge indicates whether or not the edge should be contracted.
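For intuition, edge contraction merges the endpoints of every edge whose label says "contract" and keeps the rest of the graph. A minimal Python sketch follows; the graph encoding and the contract-flag convention are illustrative assumptions, not the paper's definitions.

# Contract all flagged edges of a labelled graph (illustrative only).
def contract(nodes, edges):
    """nodes: set of ids; edges: list of (u, v, label, contract_flag)."""
    parent = {n: n for n in nodes}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for u, v, _label, flag in edges:
        if flag:
            parent[find(u)] = find(v)   # merge endpoints of contracted edges

    new_nodes = {find(n) for n in nodes}
    # keep unflagged edges; drop any that became loops after merging
    new_edges = [(find(u), find(v), lab)
                 for u, v, lab, flag in edges
                 if not flag and find(u) != find(v)]
    return new_nodes, new_edges

print(contract({1, 2, 3}, [(1, 2, "a", True), (2, 3, "b", False)]))
# -> ({2, 3}, [(2, 3, 'b')])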

72 citations


Book ChapterDOI
05 Mar 1990
TL;DR: An elementary introduction to the notion of an NLC graph grammar is given, and several of its extensions and variations are discussed in a systematic way.
Abstract: An elementary introduction to the notion of an NLC graph grammar is given, and several of its extensions and variations are discussed in a systematic way. Simple concepts are considered rather than technical details.
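In the same elementary spirit, one NLC rewriting step replaces a node by a daughter graph and reconnects the daughter's nodes to the former neighbours via a connection relation of label pairs. The Python sketch below is a hypothetical encoding for illustration, not material from the chapter.

# One node-label-controlled (NLC) rewriting step (illustrative only).
def nlc_step(graph, target, daughter, connection):
    """graph/daughter: (labels: dict, edges: set of frozensets);
    connection: set of (neighbour_label, daughter_label) pairs."""
    labels, edges = dict(graph[0]), set(graph[1])
    neighbours = {v for e in edges if target in e for v in e} - {target}
    # remove the rewritten node and all its incident edges
    del labels[target]
    edges = {e for e in edges if target not in e}
    # embed the daughter graph (assumes its node ids are fresh)
    labels.update(daughter[0])
    edges |= set(daughter[1])
    # reconnect according to the connection relation
    for n in neighbours:
        for d, dlab in daughter[0].items():
            if (labels[n], dlab) in connection:
                edges.add(frozenset((n, d)))
    return labels, edges

g = ({"x": "X", "n1": "a"}, {frozenset(("x", "n1"))})
d = ({"d1": "a", "d2": "b"}, {frozenset(("d1", "d2"))})
print(nlc_step(g, "x", d, {("a", "b")}))  # d2 reconnects to n1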

58 citations


Journal ArticleDOI
TL;DR: This work investigates this type of graph grammar and shows that the use of edge labels (together with the NCE feature) is responsible for some new properties, and proves that the class of (boundary) eNCE languages properly contains the closure of theclass of ( boundary) NLC languages under node relabelings.

49 citations


BookDOI
01 Jan 1990
TL;DR: The goal of this paper is to emphasize attribute grammars' role as a tool for design, formal specification and implementation of practical systems, so the presentation is example-rich.
Abstract: Attribute grammars are a framework for defining the semantics of programming languages in a syntax-directed fashion. In this paper, we define attribute grammars, and then illustrate their use for language definition, compiler generation, definite clause grammars, design and specification of algorithms, etc. Our goal is to emphasize their role as a tool for design, formal specification and implementation of practical systems, so our presentation is example-rich.
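In that example-rich spirit, the standard introductory attribute grammar computes the value of a binary numeral via attributes on its parse tree. The Python below is a minimal sketch of a synthesized-only variant, assumed here for illustration rather than taken from the paper.

# Grammar: N -> B N | B ;  B -> 0 | 1
# Synthesized attributes per subtree: val (value) and len (number of bits).
def value(bits):
    """Evaluate attributes bottom-up over the right-recursive parse tree."""
    def attrs(i):
        # subtree for bits[i:]; returns (val, len) of that subtree
        if i == len(bits) - 1:
            return int(bits[i]), 1
        v, l = attrs(i + 1)
        return int(bits[i]) * 2 ** l + v, l + 1
    return attrs(0)[0]

print(value("1101"))  # -> 13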

45 citations


Journal ArticleDOI
TL;DR: Attribute grammars provide a formal declarative notation for describing the semantics and translation of programming languages, but current notations have two flaws: tedious repetition of essentially the same attribute computations, and the inability to decompose the components of a description into modules.
Abstract: Attribute grammars provide a formal declarative notation for describing the semantics and translation of programming languages. Describing any real programming language is a significant software engineering challenge. From a software engineering viewpoint, current notations for attribute grammars have two flaws: tedious repetition of essentially the same attribute computations is inevitable, and the various components of the description cannot be decomposed into modules.… — From the Authors' Abstract

45 citations


Book
01 Jan 1990
TL;DR: This book discusses the Computational Implementation of Principle-Based Parsers, a Uniform Formal Framework for Parsing, and the semantic impact of topic-focus articulation.
Abstract: Contents:
1. Why Parsing Technologies? (1.1 The gap between theory and application; 1.2 About this book)
2. The Computational Implementation of Principle-Based Parsers (2.1 Introduction; 2.2 The principle ordering problem; 2.3 Examples of parsing using the Po-Parser; 2.4 Concluding remarks)
3. Parsing with Lexicalized Tree Adjoining Grammar (3.1 Introduction; 3.2 Lexicalization of CFGs; 3.3 Lexicalized TAGs; 3.4 Parsing lexicalized TAGs; 3.5 Concluding remarks)
4. Parsing with Discontinuous Phrase Structure Grammar (4.1 Introduction; 4.2 Trees with discontinuities; 4.3 Disco-Trees in grammar rules; 4.4 Implementing DPSG: an enhanced chart parser; 4.5 Concluding remarks)
5. Parsing with Categorial Grammar in Predictive Normal Form (5.1 Introduction; 5.2 Overview of predictive normal form; 5.3 Source grammar (G); 5.4 Predictive normal form (G); 5.5 Ambiguity in G; 5.6 Equivalence of G and G; 5.7 Concluding remarks)
6. PREMO: Parsing by conspicuous lexical consumption (6.1 Introduction; 6.2 The preference machine; 6.3 Global data; 6.4 Preference semantics; 6.5 PREMO example; 6.6 Comparison to other work; 6.7 Concluding remarks)
7. Parsing, Word Associations, and Typical Predicate-Argument Relations (7.1 Mutual information; 7.2 Phrasal verbs; 7.3 Preprocessing the corpus with a part-of-speech tagger; 7.4 Preprocessing with a syntactic parser; 7.5 Significance levels; 7.6 Just a powerful tool; 7.7 Practical applications; 7.8 Alternatives to collocation for recognition applications; 7.9 Concluding remarks)
8. Parsing Spoken Language Using Combinatory Grammars (8.1 Introduction; 8.2 Structure and intonation; 8.3 Combinatory grammars; 8.4 Parsing with CCG; 8.5 Intonational structure; 8.6 A hypothesis; 8.7 Conclusion)
9. A Dependency-Based Parser for Topic and Focus (9.1 Introduction; 9.2 Dependency-based output structures; 9.3 The semantic impact of topic-focus articulation; 9.4 Parsing procedure for topic and focus; 9.5 Parsing sentences in a text; 9.6 Concluding remarks)
10. A Probabilistic Parsing Method for Sentence Disambiguation (10.1 Introduction; 10.2 Probabilistic context-free grammar; 10.3 Experiments; 10.4 Concluding remarks)
11. Towards a Uniform Formal Framework for Parsing (11.1 Introduction; 11.2 Context-free parsing; 11.3 Horn clauses; 11.4 Other linguistic formalisms; 11.5 Concluding remarks)
12. A Method for Disjunctive Constraint Satisfaction (12.1 Introduction; 12.2 Turning disjunctions into contexted constraints; 12.3 Normalizing the contexted constraints; 12.4 Extracting the disjunctive residue; 12.5 Producing the models; 12.6 Comparison with other techniques; 12.7 Concluding remarks)
13. Polynomial Parsing of Extensions of Context-Free Grammars (13.1 Introduction; 13.2 Linear indexed grammars; 13.3 Combinatory categorial grammars; 13.4 Tree Adjoining Grammars; 13.5 Importance of linearity; 13.6 Concluding remarks)
14. Overview of Parallel Parsing Strategies (14.1 Introduction; 14.2 From one to many traditional serial parsers; 14.3 Translating grammar rules into process configurations; 14.4 From sentence words to processes; 14.5 Connectionist parsing algorithms; 14.6 Concluding remarks)
15. Chart Parsing for Loosely Coupled Parallel Systems (15.1 Introduction; 15.2 Parsing for loosely coupled systems; 15.3 Parallelism and the chart; 15.4 Distributing the chart; 15.5 Communication vs. computation: results for the Hypercube(TM); 15.6 Towards wider comparability: the abstract parallel agenda; 15.7 Termination and synchronization; 15.8 Testing the portable system: results of network experiment; 15.9 Alternative patterns of edge distribution; 15.10 Concluding remarks)
16. Parsing with Connectionist Networks (16.1 Introduction; 16.2 Incremental parsing; 16.3 Connectionist network formalism; 16.4 Parsing network architecture; 16.5 Parsing network performance; 16.6 Extensions; 16.7 Concluding remarks)
17. A Broad-Coverage Natural Language Analysis System (17.1 Introduction; 17.2 A syntactic sketch: PEG; 17.3 Semantic readjustment; 17.4 The paragraph as a discourse unit; 17.5 Concluding remarks)
18. Parsing 2-Dimensional Language (18.1 Introduction; 18.2 The 2D-Earley parsing algorithm; 18.3 The 2D-LR parsing algorithm; 18.4 More interesting 2D grammars; 18.5 Formal property of 2D-CFG; 18.6 Concluding remarks)

44 citations


Book ChapterDOI
05 Mar 1990
TL;DR: S-HH grammars have the same graph generating power as the vertex rewriting context-free NCE graph grammars, and as recursive systems of equations with four types of simple operations on graphs.
Abstract: Separated handle-rewriting hypergraph grammars (S-HH grammars) are introduced, where separated means that the nonterminal handles are disjoint. S-HH grammars have the same graph generating power as the vertex rewriting context-free NCE graph grammars, and as recursive systems of equations with four types of simple operations on graphs.

Proceedings ArticleDOI
01 Jun 1990
TL;DR: Attribute grammars serve as the underlying formal basis for a number of language-based environments and environment generators and can provide immediate feedback that guides further user interaction, as in the case of error attributes indicating violations of context-sensitive conditions.
Abstract: Attribute grammars [Knu68] serve as the underlying formal basis for a number of language-based environments and environment generators [DRT81] [RT84] [SDB84] [JF85] [BC85] [Pfr86] [LMOW88] [FZ89] [RT89] [BFHP89] [Jou89]. In such environments, attributes decorate an abstract-syntax-tree representation of an object being edited and are kept up to date as the underlying abstract-syntax tree is modified, either by direct user manipulation or by indirect transformation actions. The collection of attributes constitutes a derived database of facts about the object. Attributes can provide immediate feedback that guides further user interaction, as in the case of error attributes indicating violations of context-sensitive conditions, and can also provide incremental translation, as in the case of object-code attributes. A weakness of this first-order attribute-grammar editing model is its strict separation of syntactic and semantic levels, with priority given to syntax. The attributes are completely constrained by their defining equations, whereas the abstract-syntax tree is unconstrained, except by the local restrictions of the underlying context-free grammar. The attributes, which are relied on to communicate context-sensitive information throughout the syntax tree, have no way of generating derivation trees. They can be used to diagnose or reject incorrect syntax a posteriori but cannot be used to guide the syntax a priori. A few examples illustrate the desirability of permit- …
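The editing model described at the start of the abstract can be miniaturized as follows. The Python below is a hypothetical toy (all names are assumptions): an "environment" attribute is derived from the tree, and an error attribute reports context-sensitive violations, recomputed after each edit.

# Toy attribute-decorated syntax tree for an editor (illustrative only).
class Let:
    """let <bindings> in <body that uses names>"""
    def __init__(self, bindings, uses):
        self.bindings = bindings  # dict: name -> literal value
        self.uses = uses          # names referenced in the body

    def attributes(self):
        env = set(self.bindings)                 # derived environment attribute
        errors = [f"undeclared: {u}" for u in self.uses if u not in env]
        return {"env": env, "errors": errors}    # error attribute drives feedback

tree = Let({"x": 1}, ["x", "y"])
print(tree.attributes()["errors"])   # -> ['undeclared: y']
tree.bindings["y"] = 2               # user edit to the syntax tree
print(tree.attributes()["errors"])   # -> [] (attributes brought up to date)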

Book ChapterDOI
20 Jun 1990
TL;DR: It is shown that every pushdown automaton can be transformed into a graph grammar generating its transition graph.
Abstract: We saw (in section 1) that we can transform every pushdown automaton into a graph grammar generating its transition graph.

Book ChapterDOI
05 Mar 1990
TL;DR: Graph grammars have been successfully used for many years as an operational specification method for formally specifying the structure and the operations of the internal data structures built up by tools in modelling environments.
Abstract: Modelling environments (e.g. software development environments) offer tools which build up and maintain complex internal data structures. Therefore, before implementing such tools, it is advisable for the tool developer to formally specify the structure and the operations of these internal data structures. Graph Grammars as an operational specification method have been successfully used for this purpose for many years.


Journal ArticleDOI
TL;DR: The approach makes it possible to reduce the problem of inferring the structure of a context-free grammar to the normal grammatical inference problem.

Proceedings ArticleDOI
20 Aug 1990
TL;DR: It is shown how unification grammars can be used to build a reversible machine translation system, and how to obtain a completely reversible MT system using a series of (bidirectional) unification grammars.
Abstract: In this paper it will be shown how unification grammars can be used to build a reversible machine translation system. Unification grammars are often used to define the relation between strings and meaning representations in a declarative way. Such grammars are sometimes used in a bidirectional way, thus the same grammar is used for both parsing and generation. In this paper I will show how to use bidirectional unification grammars to define reversible relations between language-dependent meaning representations. Furthermore it is shown how to obtain a completely reversible MT system using a series of (bidirectional) unification grammars.
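The notion of reversibility at stake, one declarative relation run in both directions, can be suggested with a toy. The Python below is a hypothetical sketch that elides unification entirely and merely shows a single rule set serving both parsing and generation.

# One declarative relation, two directions (illustrative only).
RULES = [
    # (string, meaning) pairs declared once, used for both directions
    ("john sleeps", ("sleep", "john")),
    ("mary sleeps", ("sleep", "mary")),
]

def parse(string):
    return [m for s, m in RULES if s == string]

def generate(meaning):
    return [s for s, m in RULES if m == meaning]

print(parse("john sleeps"))          # -> [('sleep', 'john')]
print(generate(("sleep", "mary")))   # -> ['mary sleeps']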

Journal ArticleDOI
TL;DR: It is shown that these grammars with productions having associated only words consisting of one or two symbols characterize type 0 languages.
Abstract: Each production of a generalized forbidding grammar has an associated finite set of words. Such a production can be applied only if none of its associated words is a substring of the sentential form being rewritten. It is shown that these grammars, even with productions whose associated words consist of only one or two symbols, characterize the type 0 languages.
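The derivation-step condition is easy to state concretely. In the hypothetical Python sketch below (all names and the string encoding are assumptions), a production carries its finite forbidden set, and applicability checks that no forbidden word occurs in the current sentential form.

# Applicability test for a generalized forbidding grammar (illustrative).
def applicable(production, form):
    lhs, _rhs, forbidden = production
    return lhs in form and not any(w in form for w in forbidden)

def apply_rule(production, form):
    lhs, rhs, _forbidden = production
    return form.replace(lhs, rhs, 1)   # rewrite the leftmost occurrence

p_a = ("S", "aS", {"b"})   # the 'a'-rule is blocked once a 'b' occurs
p_b = ("S", "bS", set())

print(applicable(p_a, "S"))     # True: no forbidden word present
form = apply_rule(p_b, "S")     # -> 'bS'
print(applicable(p_a, form))    # False: forbidden word 'b' is a substring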

Journal ArticleDOI
TL;DR: In this article the formal definitions of modifiable grammars are presented, the equivalence between classes of modifiable grammars and Turing machines is proved, and some criteria for reducing modifiable grammars to context-free grammars are provided.
Abstract: In this article the formal definitions of modifiable grammars are presented, and the equivalence between classes of modifiable grammars and Turing machines is proved. Some criteria for reducing modifiable grammars to context-free grammars are provided. A lazy LR(1) algorithm for context-free grammars and an algorithm for constructing an LR(1) parser for modifiable grammars are discussed.

Book ChapterDOI
05 Mar 1990
TL;DR: In this paper, the authors investigate the relationship between the algebraic definition of graph grammars and logic programming, and show that the operational semantics of any logic program can be faithfully simulated by a particular context-free hypergraph grammar.
Abstract: In this paper we investigate the relationship between the algebraic definition of graph grammars and logic programming. In particular, we show that the operational semantics of any logic program can be faithfully simulated by a particular context-free hypergraph grammar. In the process of doing that, we consider the issue of representing terms, formulas, and clauses as particular graphs or graph productions, by first evaluating the approaches already proposed for Term Rewriting Systems (TRS), and then by giving an original extension of those approaches, to be able to deal with the unique features of logic programming. Actually, not only does our representation of definite clauses by graph productions allow us to deal correctly with logical unification, but it also overcomes some of the problems encountered by other approaches for representing TRS's as graph grammars. The main result of the paper states the soundness and completeness of the representation of clauses by productions, and this correspondence is extended to entire computations, showing how a context-free grammar (over a suitable category of graphs) can be associated with a logic program. The converse holds as well, i.e. given any context-free graph grammar (over that category), a logic program can be extracted from it.

Proceedings ArticleDOI
04 Oct 1990
TL;DR: Relation grammars (RGs) are introduced as a possible general framework for specifying the syntax of visual languages and, more generally, of multi-dimensional languages.
Abstract: Relation grammars (RGs) are introduced as a possible general framework for specifying the syntax of visual languages and, more generally, of multi-dimensional languages. A formal definition of relation grammars is given. Two examples of applications on graphs are shown. The RG formalism is compared to conventional context-free grammars. RGs are used to describe the syntax of horizontal lines and statechart graphs using picture processing grammars and picture layout grammars, respectively.

Journal ArticleDOI
TL;DR: A general pattern-recognition procedure for application to unconstrained alphanumeric characters is presented, and preliminary experimental results indicate recognition rates comparable to the state of the art, but with a considerable reduction in computing time.
Abstract: A general pattern-recognition procedure for application to unconstrained alphanumeric characters is presented. The procedure is designed to allow hierarchical redescription of the input images in terms of significant elements only, and formal developments are given within the framework of elementary phrase-structure grammars. The extraction of the primitives associated with the terminal vocabularies of the grammars is always deterministic, and the productions of the parsers are characterized by a significant degree of topological fidelity. Preliminary experimental results indicate recognition rates comparable to the state of the art, but with a considerable reduction in computing time.

Journal ArticleDOI
TL;DR: The investigated topics are: closure properties, the efficiency of generating a (linear) language by such a system compared with usual grammars, hierarchies, and so on.
Abstract: We continue the study of parallel communicating grammar systems introduced in Păun and Sântean [7] as a grammatical model of parallel computing. The investigated topics are: closure properties, the efficiency of generating a (linear) language by such a system compared with usual grammars, hierarchies.

Journal ArticleDOI
TL;DR: An optimal parallel recognition/parsing algorithm is presented for languages generated by tree adjoining grammars (TAGs), a grammatical system for natural language.
Abstract: An optimal parallel recognition/parsing algorithm is presented for languages generated by tree adjoining grammars (TAGs), a grammatical system for natural language. TAGs are strictly more powerful than context-free grammars (CFGs), e.g., they can generate $\{a^n b^n c^n \mid n \geq 0\}$, which is not context-free. However, serial parsing of TAGs is also slower, having time complexity $O(n^{6})$ for inputs of length n (as opposed to $O(n^{3})$ for CFGs). The parallel algorithm achieves optimal speedup: it runs in linear time on a five-dimensional array of $n^5$ processors. Moreover, the processors are finite-state; i.e., their function and size depend only on the underlying grammar and not on the length of the input.
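The cited language can be derived by repeated adjunction of a single auxiliary tree of the shape S(a, S(b, S*, c)), with adjunction confined to the middle node. The Python below is a hypothetical string-level simulation of that derivation, not the parsing algorithm itself: each adjunction contributes one leading "a" and wraps the excised middle subtree with "b" ... "c".

# Yield of the TAG derivation for {a^n b^n c^n} after n adjunctions
# (string-level simulation, illustrative only).
def derive(n):
    middle = ""                       # yield below the adjoinable middle node
    outer = ""                        # material contributed above it
    for _ in range(n):
        middle = "b" + middle + "c"   # new b..c wraps the excised subtree
        outer += "a"                  # each auxiliary tree adds one leading a
    return outer + middle

for n in range(4):
    print(repr(derive(n)))            # '', 'abc', 'aabbcc', 'aaabbbccc'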

Proceedings ArticleDOI
24 Jun 1990
TL;DR: Augmented phrase structure grammar (APSG) formalisms can express many of the relevant syntactic and semantic regularities of spoken language systems, but they are computationally less suitable for language modeling, because of the inherent cost of computing state transitions in APSG parsers.
Abstract: Grammars for spoken language systems are subject to the conflicting requirements of language modeling for recognition and of language analysis for sentence interpretation. Current recognition algorithms can most directly use finite-state acceptor (FSA) language models. However, these models are inadequate for language interpretation, since they cannot express the relevant syntactic and semantic regularities. Augmented phrase structure grammar (APSG) formalisms, such as unification grammars, can express many of those regularities, but they are computationally less suitable for language modeling, because of the inherent cost of computing state transitions in APSG parsers.
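For contrast with APSGs, the FSA language models that recognition algorithms can use directly are just states plus word-labelled transitions. A minimal Python sketch follows; the toy network and names are assumptions for illustration, not the paper's models.

# A finite-state acceptor as a language model (illustrative only).
TRANSITIONS = {
    ("q0", "show"): "q1",
    ("q1", "flights"): "q2",
    ("q2", "to"): "q3",
    ("q3", "boston"): "qf",
}
FINAL = {"qf"}

def accepts(words, state="q0"):
    for w in words:
        state = TRANSITIONS.get((state, w))
        if state is None:
            return False          # no transition: word sequence rejected
    return state in FINAL

print(accepts("show flights to boston".split()))  # -> True
print(accepts("show flights".split()))            # -> False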

01 Jan 1990
TL;DR: In this paper, the problem of attribute evaluation during LR parsing is considered and several definitions of LR-attributed grammars are presented, and relations of corresponding attribute grammar classes are analyzed.
Abstract: The problem of attribute evaluation during LR parsing is considered. Several definitions of LR-attributed grammars are presented. Relations of corresponding attribute grammar classes are analysed. Also the relations between LR-attributed grammars and LL-attributed grammars and between LR-attributed grammars and a class of one-pass attributed grammars based on left-corner grammars are considered.
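The core idea of attribute evaluation during LR parsing, computing attribute values as the parser reduces, can be suggested in miniature. The Python below is a hypothetical sketch (a toy grammar of left-associative sums, with the synthesized value carried on the parse stack), not one of the paper's definitions.

# Grammar: E -> E + n | n, with attribute E.val computed on each reduce.
def parse_sum(tokens):
    stack = []                      # holds (symbol, attribute value) pairs
    for t in tokens:
        stack.append(("n", int(t)) if t.isdigit() else ("+", None))  # shift
        if len(stack) >= 3 and stack[-2][0] == "+":   # reduce E -> E + n
            (_, left), _, (_, right) = stack[-3:]
            del stack[-3:]
            stack.append(("E", left + right))         # synthesized E.val
    return stack[0][1]

print(parse_sum("1 + 2 + 3".split()))  # -> 6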


Book ChapterDOI
05 Mar 1990
TL;DR: An overview how notions in the theory of grammars and that of module specifications correspond to each other and discuss how both theories can benefit from each other is given.
Abstract: Algebraic specification grammars have recently been introduced implicitly by the second author as a new kind of graph grammar, in order to generate algebraic specifications using productions and derivations. In fact, in the well-known algebraic approach to graph grammars, also known as the "Berlin approach", we mainly have to replace the category of graphs by the category of algebraic specifications to obtain the basic definitions, constructions and results for this new kind of grammar. Since a production in an algebraic specification grammar corresponds exactly to an interface of an algebraic module specification for modular software systems, this new kind of grammar can be used for modular system design. For this purpose we give an overview of how notions in the theory of grammars and that of module specifications correspond to each other, and discuss how both theories can benefit from each other. Concerning full technical detail and proofs we refer to other published or forthcoming papers.

Proceedings ArticleDOI
06 Jun 1990
TL;DR: This paper shows that not only constituent structures rules but also most syntactic rules are subject to lexical constraints (on top of syntactic, and possibly semantic, ones) and that such puzzling phenomena are naturally handled in a 'lexicalized' formalism such as Tree Adjoining Grammar.
Abstract: Taking examples from English and French idioms, this paper shows that not only constituent structures rules but also most syntactic rules (such as topicalization, wh-question, pronominalization ...) are subject to lexical constraints (on top of syntactic, and possibly semantic, ones). We show that such puzzling phenomena are naturally handled in a 'lexicalized' formalism such as Tree Adjoining Grammar. The extended domain of locality of TAGs also allows one to 'Jexicalize' syntactic rules while defining them at the level of constituent structures.

Proceedings ArticleDOI
06 Jun 1990
TL;DR: A number of proposals for generation are considered, outlining their consequences for the form of grammars, and experience arising from the addition of a generator to an existing unification environment is reported on.
Abstract: Recent developments in generation algorithms have enabled work in unification-based computational linguistics to approach more closely the ideal of grammars as declarative statements of linguistic facts, neutral between analysis and synthesis. From this perspective, however, the situation is still far from perfect; all known methods of generation impose constraints on the grammars they assume. We briefly consider a number of proposals for generation, outlining their consequences for the form of grammars, and then report on experience arising from the addition of a generator to an existing unification environment. The algorithm in question (based on that of Shieber et al. (1989)), though among the most permissive currently available, excludes certain classes of parsable analyses.