
Showing papers on "Natural language published in 1987"


01 Jun 1987
TL;DR: This paper establishes a new definitional foundation for RST: the definitions are made more systematic and explicit, introduce a new functional element, and incidentally reflect more experience in text analysis.
Abstract: Rhetorical Structure Theory is a descriptive theory of a major aspect of the organization of natural text. It is a linguistically useful method for describing natural texts, characterizing their structure primarily in terms of relations that hold between parts of the text. This paper establishes a new definitional foundation for RST. The definitions are made more systematic and explicit, introduce a new functional element, and incidentally reflect more experience in text analysis. Along with the definitions, the paper examines three claims and findings of RST: the predominance of nucleus/satellite structural patterns, the functional basis of hierarchy, and the communicative role of text structure. Keywords: Artificial intelligence; Coherence; Computational linguistics; Discourse; Grammar; Knowledge delivery; Natural language processing; Pragmatics.
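As a rough illustration (not the paper's own formalism), an RST analysis can be encoded as a tree of nucleus/satellite relations over text spans; the class names below are assumptions for exposition, though MOTIVATION is a canonical RST relation:

```python
from dataclasses import dataclass
from typing import Union

@dataclass
class Span:
    """A contiguous run of minimal text units (typically clauses)."""
    start: int
    end: int

@dataclass
class Relation:
    """A rhetorical relation between a nucleus (the more central span)
    and a satellite (the supporting span); either side may itself be
    a relation, yielding a hierarchical text structure."""
    name: str                           # e.g. EVIDENCE, ELABORATION
    nucleus: Union[Span, "Relation"]
    satellite: Union[Span, "Relation"]

# Toy analysis of: [1] "Call before noon." [2] "The office closes early."
# Unit 2 motivates the request in unit 1, so unit 1 is the nucleus.
analysis = Relation("MOTIVATION", nucleus=Span(1, 1), satellite=Span(2, 2))
```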

1,022 citations


Journal ArticleDOI
TL;DR: Several general problems of natural-language processing that were faced in constructing the TEAM system are discussed, including quantifier scoping, various pragmatic issues, and verb acquisition.

446 citations


Journal ArticleDOI
TL;DR: Treatment and generalization data demonstrated that manipulation of parameters of natural language interactions and motivational techniques resulted in broadly generalized treatment gains.
Abstract: The purpose of this study was to attempt to improve verbal language acquisition for nonverbal autistic children by manipulating traditional teaching techniques so they incorporated parameters of natural language interactions and motivational techniques. Within a multiple baseline design, treatment was conducted in a baseline condition with trials presented serially in a traditional analogue clinical format where the therapist presented instructions, prompts, and reinforcers for correct responses. Then, these variables were manipulated in the natural language teaching condition such that (a) stimulus items were functional and varied, (b) natural reinforcers were employed, (c) communicative attempts were also reinforced, and (d) trials were conducted within a natural interchange. Treatment and generalization data demonstrated that manipulation of these variables resulted in broadly generalized treatment gains. Implications for language intervention are discussed.

445 citations


Book
01 Jan 1987
TL;DR: This 1000 plus page reference work would certainly be a useful and impressive acquisition to any linguist's bookshelf and is a veritable mine of knowledge concerning language knowledge.
Abstract:
* Covers historical development, grammar, sound system, writing system and sociological factors of the world's major languages and language families
* Comprehensive index of languages for ease of access
* Written by internationally recognized specialists in each language
* Includes an introductory survey

'This 1000 plus page reference work would certainly be a useful and impressive acquisition to any linguist's bookshelf ... It is a veritable mine of knowledge concerning language knowledge, and anyone with an interest in this field is bound to find the book a fascinating source of information.' - Language Monthly

'This work has a rare combination of virtues ... it can be recommended as a useful work of reference to which contributions have been made by a large team of scholars.' - Journal of Linguistics

428 citations


Book ChapterDOI
01 Jan 1987
TL;DR: This paper attempts a modular approach to language acquisition theory, on which some aspects of language and its acquisition are better stated not within linguistic theory but outside it, in, say, a learning module.
Abstract: Modern theory has provided evidence that universal grammar contains principles of a general, but specifically linguistic, form that apply in all natural languages. A goal of this paper is to extend the notion of principle theory to language acquisition. In such a theory each choice that the child makes in his or her growing language is determined by a principle of language or by a principle of learning or by the interaction of these two kinds of principles. The language principles and the learning principles are obviously related (they interact). However, it seems to be a promising approach to see if the two kinds of principles can be separated to some degree. That is, we attempt a modular approach to language acquisition theory. Some aspects of language and its acquisition seem better stated not in linguistic theory, but outside it, in, say, a learning module.

374 citations


Book
01 Jan 1987

369 citations


Book
01 Jan 1987

293 citations



Journal ArticleDOI
TL;DR: It is suggested that the finding that phrase structure cues are a necessary aspect of language input reflects the limited capacities of human language learners; languages may incorporate structural cues in part to circumvent such limitations and ensure successful acquisition.

269 citations


Book
01 Jan 1987
TL;DR: This is an introduction to the concepts and techniques of diachronic linguistics, the study of language change over time, which covers all major areas of historical linguistics, presenting concepts in a concise manner.
Abstract: All languages change, just as other aspects of human society are constantly changing. This is an introduction to the concepts and techniques of diachronic linguistics, the study of language change over time. It covers all major areas of historical linguistics, presenting concepts in a concise manner. While examples are given from a wide range of languages, most major concepts and techniques are illustrated by material drawn from the languages of Australia and the Pacific. This edition has been substantially revised and rewritten. Further exercises have been added for student use, and there are new sections on language planning and language contact.

239 citations


Book
01 Jan 1987
TL;DR: A unified and coherent account emerges of how complexity theory can probe the information-processing structure of grammars, discovering why a grammar is easy or difficult to process and suggesting where to look for additional grammatical constraints.
Abstract: From the Publisher: Computational Complexity and Natural Language heralds an entirely new way of looking at grammatical systems. It applies the recently developed computer science tool of complexity theory to the study of natural language. A unified and coherent account emerges of how complexity theory can probe the information-processing structure of grammars, discovering why a grammar is easy or difficult to process and suggesting where to look for additional grammatical constraints. For the linguist or cognitive scientist, the book presents a nontechnical introduction to complexity theory and discusses its strengths, its weaknesses, and how it can be used to study grammars. For the computer scientist, it offers a more sophisticated and efficient computational analysis of linguistic theories. Given the variety of new techniques rising from complexity theory, the authors foresee a developing cooperation among linguists, cognitive scientists, and computer scientists toward understanding the nature of human language. The book also describes a set of case studies that use complexity theory to analyze grammatical problems. And it examines several grammatical systems currently of interest to computational linguists - including spelling-change/dictionary lookup and morphological analysis, agreement processes in natural language, and lexical-functional grammar - demonstrating how complexity analysis can illuminate and improve each one. All of the authors are at the MIT Artificial Intelligence Laboratory. Robert C. Berwick is an Associate Professor in the Department of Electrical Engineering and Computer Science. A Bradford Book.



01 Jan 1987
TL;DR: This book collects much of the best research currently available on the problem of lexical ambiguity resolution in the processing of human language, forming a valuable source book for cognitive scientists in AI, psycholinguistics, neuropsychology, or theoretical linguistics.
Abstract: This book collects much of the best research currently available on the problem of lexical ambiguity resolution in the processing of human language. When taken out of context, sentences are usually ambiguous. When actually uttered in a dialogue or written in text, these same sentences often have unique interpretations. The inherent ambiguity of isolated sentences becomes obvious in the attempt to write a computer program to understand them. Different views have emerged on the nature of context and the mechanisms by which it directs unambiguous understanding of words and sentences. These perspectives are represented and discussed. The eighteen original papers form a valuable source book for cognitive scientists in AI, psycholinguistics, neuropsychology, or theoretical linguistics.

Journal ArticleDOI
TL;DR: This paper reanalyses Clahsen and Muysken's data in terms of three parameters of Germanic word order and shows that the stages that adult learners go through, the errors that they make and the rules that they adopt are perfectly consistent with a UG incorporating such parameters.
Abstract: In a recent paper, Clahsen and Muysken (1986) argue that adult second language (L2) learners no longer have access to Universal Grammar (UG) and acquire the L2 by means of learning strategies and ad hoc rules. They use evidence from adult L2 acquisition of German word order to argue that the rules that adults use are not natural language rules. In this paper, we argue that this is not the case. We explain properties of Germanic word order in terms of three parameters (to do with head position, proper government and adjunction). We reanalyse Clahsen and Muysken's data in terms of these parameters and show that the stages that adult learners go through, the errors that they make and the rules that they adopt are perfectly consistent with a UG incorporating such parameters. We suggest that errors are the result of some of the parameters being set inappropriately for German. The settings chosen are nevertheless those of existing natural languages. We also discuss additional data, from our own research on t...

Journal Article
TL;DR: This paper presents a particular type of lexicon, elaborated within a formal theory of natural language called Meaning-Text Theory (MTT), embodying a vast amount of linguistic information, which can be used in different computational applications.
Abstract: The goal of this paper is to present a particular type of lexicon, elaborated within a formal theory of natural language called Meaning-Text Theory (MTT). This theory puts strong emphasis on the development of highly structured lexica. Computational linguistics does of course recognize the importance of the lexicon in language processing. However, MTT probably goes further in this direction than various well-known approaches within computational linguistics; it assigns to the lexicon a central place, so that the rest of linguistic description is supposed to pivot around the lexicon. It is in this spirit that MTT views the model of natural language: the Meaning-Text Model, or MTM. It is believed that a very rich lexicon presenting individual information about lexemes in a consistent and detailed way facilitates the general task of computational linguistics by dividing it into two more or less autonomous subtasks: a linguistic and a computational one. The MTM lexicon, embodying a vast amount of linguistic information, can be used in different computational applications. We will present here a short outline of the lexicon in question as well as of its interaction with other components of the MTM, with special attention to computational implications of the Meaning-Text Theory.
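To give a loose sense of what "highly structured" means here, a single entry might bundle semantic, syntactic, and collocational information. The field names below are assumptions for illustration, not MTT's actual formalism, though Magn (intensifier) and S1 (typical agent noun) are standard MTT lexical functions:

```python
# A toy lexical entry in the spirit of an MTM lexicon (illustrative only).
entry = {
    "lexeme": "TEACH",
    "definition": "X causes Y to know Z",   # semantic decomposition
    "government": {                          # syntactic frame
        "X": "subject NP",
        "Y": "object NP",
        "Z": "object NP or clause",
    },
    "lexical_functions": {
        "S1": "teacher",        # agent noun derived from the lexeme
        "Magn": "intensively",  # typical intensifying collocate
    },
}
```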

Book
28 Dec 1987
TL;DR: This book explores an approach to text understanding, in particular anaphora resolution, that tries to limit the use of detailed domain knowledge and commonsense inference by exploiting general linguistic knowledge as much as possible.
Abstract: This book explores an approach to text understanding, in particular anaphora resolution, that tries to limit the use of detailed domain knowledge and commonsense inference by exploiting general linguistic knowledge as much as possible. Carter proposes that, because natural language texts are relatively redundant and are constructed considerately, it should be possible in many cases to recover the interpretation of the text by using either linguistic or nonlinguistic techniques. Linguistic techniques are preferable because they are more general and less open-ended. Carter's approach is called "shallow processing." Since domain knowledge and reasoning tend to be expensive to implement, maintain, and use during processing, the shallow processing approach should be much more efficient and portable than an approach based heavily on domain-specific knowledge. If it can at the same time provide reasonable accuracy, then it should be very useful. This seems to be a very sensible approach in principle, and Carter demonstrates that it is quite effective in practice.

The shallow processing hypothesis is tested in a program called SPAR (Shallow Processing Anaphor Resolver), which was implemented as part of the author's University of Cambridge thesis. Shallow processing is presented as an engineering solution to the problem of dealing with the use of domain knowledge in text understanding, for certain applications, not as a psychological hypothesis. In the SPAR architecture, general linguistic techniques, such as focusing, are used first. Domain reasoning is only used if more than one candidate referent remains after the application of linguistic knowledge (a control strategy sketched below). Although this approach is tested specifically only for reference resolution, obviously it could be extended to other areas of natural language processing in which both linguistic and domain knowledge could be used, such as reasoning.

The book presents an excellent and very clear review of both current and older approaches to anaphora resolution, as well as a clear description of the SPAR system. For this reason, it would serve as a very good text for a seminar on reference resolution as well as an extra reading for a class on knowledge representation. One attractive aspect of the SPAR system is that it builds on previous work where appropriate, and extends it where required. In particular, it integrates the work of Boguraev (1979) in parsing, the work of Sidner (1979) in focusing, and the work of Wilks (1975) in preference semantics. Where Sidner's work, for example, is incomplete, as in the treatment of intrasentential anaphora, Carter presents a reasonable extension to handle the additional phenomena. One minor oversight in this work is that, in the treatment of one-anaphora, Carter fails to explore recent pragmatically oriented approaches, such as those discussed by Webber (1983) and Dahl (1984), who propose unified treatments of definite pronouns and one-anaphora. Instead, SPAR uses the older, and probably less effective, syntactic approach suggested by Webber (1978) and Halliday and Hasan (1976).

Another commendable aspect of this work is that Carter presents specific statistics on the accuracy of his system: 93% of pronominal anaphors (out of 242) and 82% of nonpronominal anaphors (out of 80) are resolved correctly. Although these statistics go beyond what is usually reported, it would have been even more interesting to see a detailed breakdown of anaphor types and accuracy. It would also have been interesting to see statistics on the efficiency of the system, since the overall algorithm is quite complex. This high level of accuracy provides evidence that shallow processing is a promising approach. However, these statistics do raise the issue that we don't really know what level of accuracy in anaphora resolution is "good enough." In fact, the "good enough" level of accuracy may vary by application. Ninety-three percent may be accurate enough for some applications, such as machine translation or message routing, but not for others, such as database update. Perhaps the relatively inexpensive shallow processing approach will turn out to be the method of choice for applications with lower accuracy requirements.

A related issue that this work raises is how accurate we can expect to get, that is, how good is human performance on anaphora resolution, and how close is Carter's system to that level? These are questions that we simply don't know the answers to, and that await future research. Carter has done a valuable service in providing his statistics, but it is difficult to interpret them without having these bases of comparison and without having comparable statistics from other approaches. In addition to supporting the shallow processing hypothesis, this work also supports the usefulness (at least from an
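The two-stage control strategy described in the review can be put in a short sketch. Everything here (function names, the shape of a filter) is an assumption made for exposition, not SPAR's actual code:

```python
def resolve_anaphor(anaphor, candidates, linguistic_filters, domain_reasoner):
    """Shallow-processing control loop in the spirit of SPAR (illustrative).

    Cheap, general linguistic knowledge (agreement checks, focusing,
    syntactic constraints) is applied first; costly domain reasoning
    runs only if more than one candidate referent survives.
    """
    remaining = list(candidates)
    for passes in linguistic_filters:
        narrowed = [c for c in remaining if passes(anaphor, c)]
        if narrowed:                 # never filter down to nothing
            remaining = narrowed
        if len(remaining) == 1:
            return remaining[0]      # linguistic knowledge sufficed
    # Residual ambiguity: fall back on domain knowledge and inference.
    return domain_reasoner(anaphor, remaining)
```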

Journal ArticleDOI
01 May 1987
TL;DR: MDSL is a new data manipulation language providing functions for manipulating data spread across separate autonomous databases; most of these functions are not available in other languages.
Abstract: With the increase in availability of databases, data needed by a user are frequently in separate autonomous databases. The logical properties of such data differ from the classical ones within a single database. In particular, they call for new functions for data manipulation. MDSL is a new data manipulation language providing such functions. Most of the MDSL functions are not available in other languages.

Proceedings ArticleDOI
06 Jul 1987
TL;DR: It is claimed that any manageable logic or other formal system for natural language temporal descriptions will have to embody such an ontology, as will any usable temporal database for knowledge about events which is to be interrogated using natural language.
Abstract: A semantics of linguistic categories like tense, aspect, and certain temporal adverbials, and a theory of their use in defining the temporal relations of events, both require a more complex structure on the domain underlying the meaning representations than is commonly assumed. The paper proposes an ontology based on such notions as causation and consequence, rather than on purely temporal primitives. We claim that any manageable logic or other formal system for natural language temporal descriptions will have to embody such an ontology, as will any usable temporal database for knowledge about events which is to be interrogated using natural language.

Journal ArticleDOI
TL;DR: In this paper, a continuum of cues that occasion language responses is recommended to resolve definitional ambiguities; the same continuum can be invoked for training spontaneous language use.
Abstract: A rationale for the importance of analyzing spontaneous language use by persons with severe disabilities is offered. Definition represents the first barrier. A continuum of cues that occasion language responses is recommended to resolve definitional ambiguities. The same continuum can be invoked for training purposes. Three recent studies representing state-of-the-art procedures for teaching spontaneous language use are reviewed. Finally, future directions for conceptualizing, analyzing, and teaching spontaneous language use are discussed.

Journal ArticleDOI
TL;DR: The paper examines the theoretical model developed by John Anderson as it applies to memory representation, learning, and language skill acquisition and suggests that the theory is useful both in explaining second language acquisition processes and in identifying areas in which research is needed.
Abstract: This paper describes recent theoretical developments in cognitive psychology that can be applied to second language acquisition and uses the theory to analyze phenomena discussed regularly in the second language literature. Some limitations of linguistic theories in addressing the role of mental processes in second language acquisition are identified, and current cognitive learning theory in general is outlined. The paper then examines the theoretical model developed by John Anderson (1983, 1985) as it applies to memory representation, learning, and language skill acquisition. The remainder of the paper describes possible applications of this model to issues in second language acquisition and suggests that the theory is useful both in explaining second language acquisition processes and in identifying areas in which research is needed.

Book
01 Jan 1987
TL;DR: The authors introduce the general paradigm of knowledge-based MT, survey major recent developments, compare it with other approaches and present a paradigmatic view of its component processes.
Abstract: This is the first book devoted exclusively to knowledge-based machine translation. While most approaches to machine translation for natural languages seek ways to translate source language texts into target language texts without full understanding of the text, knowledge-based machine translation is based on extracting and representing the meaning of the source text. It is scientifically the most challenging approach to the task of machine translation, and significant progress has been achieved within it in recent years. The authors introduce the general paradigm of knowledge-based MT, survey major recent developments, compare it with other approaches and present a paradigmatic view of its component processes.

Journal Article
Karen Jensen, Jean-Louis Binot
TL;DR: A set of computational tools and techniques used to disambiguate prepositional phrase attachments in English sentences, by accessing on-line dictionary definitions, offers hope for eliminating the time-consuming hand coding of semantic information that has been conventional in natural language understanding systems.
Abstract: Standard on-line dictionaries offer a wealth of knowledge expressed in natural language form. We claim that such knowledge can and should be accessed by natural language processing systems to solve difficult ambiguity problems. This paper sustains that claim by describing a set of computational tools and techniques used to disambiguate prepositional phrase attachments in English sentences, by accessing on-line dictionary definitions. Such techniques offer hope for eliminating the time-consuming, and often incomplete, hand coding of semantic information that has been conventional in natural language understanding systems.
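To give the flavour of the technique (a hedged sketch, not the authors' implementation): for "I ate the fish with a fork", a dictionary definition describing a fork as an instrument supports attaching the with-phrase to the verb rather than to the noun. The toy definitions and cue words below are invented for illustration:

```python
# Toy stand-in for an on-line dictionary (the paper accessed full
# machine-readable dictionary definitions).
DEFINITIONS = {
    "fork": "an instrument for lifting food to the mouth",
    "sauce": "a liquid served with food to add flavour",
}

INSTRUMENT_CUES = ("instrument", "tool", "implement")

def favors_verb_attachment(pp_noun: str) -> bool:
    """Crude heuristic: a 'with'-PP whose noun is defined as an instrument
    is more plausibly attached to the verb than to the preceding noun."""
    definition = DEFINITIONS.get(pp_noun, "")
    return any(cue in definition for cue in INSTRUMENT_CUES)

print(favors_verb_attachment("fork"))   # True  -> attach to 'ate'
print(favors_verb_attachment("sauce"))  # False -> attach to 'fish'
```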


01 Jun 1987
TL;DR: This paper describes theoretical developments in the cognitive psychology of second language acquisition and concludes that such theories have not been sufficiently developed to permit a descriptive analysis of the role learning strategies play in acquiring language skills.
Abstract: This paper describes theoretical developments in the cognitive psychology of second language acquisition. One conclusion reached is that such theories have not been sufficiently developed to permit a descriptive analysis of the role learning strategies play in acquiring language skills. A second conclusion is that language skills have characteristics in common with other complex cognitive skills that can be described within the cognitive theory of John Anderson. Anderson's theory is seen as having promise for serving as the foundation for a research model on the role of learning strategies in second language acquisition. Keywords: English as a second language.

Journal ArticleDOI
Lisa F. Rau
TL;DR: The SCISOR system is described, which illustrates the potential for increased recall and precision of stored information through the understanding in context of articles in its domain of corporate takeovers.
Abstract: Traditional approaches to information retrieval, based on automatic or manually constructed keywords, are inappropriate for certain desirable tasks in an intelligent information system. Obtaining simple answers to direct questions, a summary of an event sequence that could span multiple documents, and an update of recent developments in an ongoing event sequence are three examples of such tasks. In this paper, the SCISOR system is described. SCISOR illustrates the potential for increased recall and precision of stored information through the understanding in context of articles in its domain of corporate takeovers. A constrained form of marker passing is used to answer queries of the knowledge base posed in natural language. Among other desirable characteristics, this method of retrieval focuses search on likely candidates, and tolerates incomplete or incorrect input indices very well.
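A minimal sketch of constrained marker passing over a concept graph, under assumed data shapes (a dict from node to (link-type, neighbour) pairs); this is an illustration of the general technique, not SCISOR's actual interface:

```python
from collections import deque

def constrained_marker_pass(graph, sources, max_depth=3, allowed_links=None):
    """Spread markers from each query concept through permitted link types
    only, up to a depth bound; nodes reached from every source are the
    candidate answers. Parameter names are assumptions for exposition."""
    reached = {}                                  # node -> set of sources
    for src in sources:
        frontier = deque([(src, 0)])
        seen = {src}
        while frontier:
            node, depth = frontier.popleft()
            reached.setdefault(node, set()).add(src)
            if depth == max_depth:
                continue
            for link, nbr in graph.get(node, []):
                if allowed_links and link not in allowed_links:
                    continue                      # constraint: prune links
                if nbr not in seen:
                    seen.add(nbr)
                    frontier.append((nbr, depth + 1))
    return [n for n, marks in reached.items() if len(marks) == len(sources)]

# Toy usage: which concept connects a bidder and a target company?
graph = {
    "GE": [("agent-of", "takeover-1")],
    "RCA": [("object-of", "takeover-1")],
}
print(constrained_marker_pass(graph, ["GE", "RCA"]))  # ['takeover-1']
```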

Book
01 Jan 1987
TL;DR: The model emphasizes the semantic, syntactic and lexical constraints that must be dealt with when establishing a relationship between meaning and form, and it is consideration of such linguistic constraints that determines Danlos' generation algorithm.
Abstract: This study presents an original and penetrating analysis of the complex problems surrounding the automatic generation of natural language text. Laurence Danlos provides a valuable critical review of current research in this important and increasingly active field, and goes on to describe a new theoretical model that is thoroughly grounded in linguistic principles. The model emphasizes the semantic, syntactic and lexical constraints that must be dealt with when establishing a relationship between meaning and form, and it is consideration of such linguistic constraints that determines Danlos' generation algorithm. The book concludes with a description of a generation system based on this algorithm which produces texts in several domains and also a system for the synthesis of spoken messages from semantic representations. The book is a significant addition to the literature on text generation, and will be of particular interest to all computational linguists and AI researchers who have wrestled with the problem of vocabulary selection.


Journal Article
TL;DR: The development of a dictionary support environment linking a restructured version of the Longman Dictionary of Contemporary English to natural language processing systems is described and an evaluation of the utility of the grammar coding system for use by automatic natural language parsing systems is offered.
Abstract: This article focusses on the derivation of large lexicons for natural language processing. We describe the development of a dictionary support environment linking a restructured version of the Longman Dictionary of Contemporary English to natural language processing systems. The process of restructuring the information in the machine readable version of the dictionary is discussed. The Longman grammar code system is used to construct 'theory neutral' lexical entries. We demonstrate how such lexical entries can be put to practical use by linking up the system described here with the experimental PATR-II grammar development environment. Finally, we offer an evaluation of the utility of the grammar coding system for use by automatic natural language parsing systems.
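As a hedged sketch of the kind of mapping involved: Longman grammar codes such as T1 (transitive with NP object) or I (intransitive) can be translated into 'theory neutral' feature structures that a PATR-II-style environment can consume. The feature names below are assumptions for illustration, not the article's actual scheme:

```python
# Illustrative translation from LDOCE grammar codes to feature structures.
CODE_TO_FEATURES = {
    "I":  {"cat": "V", "subcat": []},          # intransitive
    "T1": {"cat": "V", "subcat": ["NP"]},      # transitive, NP object
    "T5": {"cat": "V", "subcat": ["S_that"]},  # takes a that-clause
}

def lexical_entry(word: str, code: str) -> dict:
    """Build a minimal lexical entry from a headword and its grammar code."""
    features = CODE_TO_FEATURES.get(code, {"cat": "V", "subcat": None})
    return {"word": word, **features}

print(lexical_entry("devour", "T1"))
# {'word': 'devour', 'cat': 'V', 'subcat': ['NP']}
```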

Book
01 May 1987
TL;DR: A new theory of knowledge representation is proposed, called Cognitive Representation Theory (CRT), which eliminates the frame/slot distinction found in frame-based languages and incorporates as representational entities notions reminiscent of natural language metaphoric and metonymic relationships.
Abstract: A new theory of knowledge representation is proposed, called Cognitive Representation Theory (CRT). CRT encompasses representational ideas that have emerged from work in semantic networks, frames, frame semantics, and Conceptual Dependency. The theory attempts to meet certain desiderata for a meaning representation, namely, the principles of adequacy, interpretability, uniformity, economy, and, in particular, cognitive correspondence. Motivated by these principles, the theory eliminates the frame/slot distinction found in frame-based languages (alternatively, the node/link distinction). In addition, the theory incorporates as representational entities notions reminiscent of natural language metaphoric and metonymic relationships. This is done through a mechanism called views. The theory allows for the representation of some ideas that in the past have only been represented procedurally, informally, or not at all. An implementation of much of CRT, called KODIAK, has been created and used in a number of experiments.