
Showing papers on "Natural language" published in 1983


Book
01 Jan 1983

732 citations




Patent
28 Jan 1983
TL;DR: In this article, a system is presented for interactively generating a natural-language input interface without any computer-skill programming work being required; the generated interface allows a totally unskilled computer user, who need not even be able to type, to access a relational or hierarchical database without any possibility of error.
Abstract: A system for interactively generating a natural-language input interface, without any computer-skill programming work being required. The natural-language menu interface thus generated provides a menu-selection technique whereby a totally unskilled computer user, who need not even be able to type, can access a relational or hierarchical database, without any possibility of error. That is, the user addresses commands to the database system simply by selecting words from an appropriate menu of words which could legally follow in the command, so that the user inputs commands which are phrased entirely in English, and these commands cannot be misunderstood by the database system. The present invention provides an automatic interactive system whereby such an interface is constructed. The database itself is loaded in, and the interactive interface-construction system then addresses a series of queries to the user's technical expert, in response to which the expert must specify which tables in the database are to be used, which attributes of particular tables in the database are key attributes, and, in particular, what the various connections between tables in the database are and what natural-language connecting phrases will describe those relations.
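
To make the menu-selection idea concrete, here is a minimal sketch (not the patent's actual implementation) in which the interface offers only the words that may legally continue the command, so ill-formed input can never reach the database. The tiny grammar, table names, and token placeholders are invented for illustration.

```python
# Hypothetical sketch of the menu-selection idea: the user can only pick words
# that may legally continue the command, so ill-formed input never reaches the
# database. The grammar and vocabulary below are invented for illustration.

GRAMMAR = {
    "START":       ["find"],
    "find":        ["employees", "departments"],
    "employees":   ["whose"],
    "departments": ["whose"],
    "whose":       ["salary", "name"],
    "salary":      ["exceeds"],
    "name":        ["is"],
    "exceeds":     ["<number>"],
    "is":          ["<string>"],
    "<number>":    ["<end>"],
    "<string>":    ["<end>"],
}

def build_command():
    """Interactively build a command by offering only the legal next words."""
    command, current = [], "START"
    while True:
        options = GRAMMAR.get(current, [])
        if not options:
            break
        print("Choose next word:", ", ".join(options))
        choice = input("> ").strip()
        if choice not in options:
            print("Not on the menu; try again.")   # errors never reach the database
            continue
        if choice == "<end>":
            break
        command.append(choice)
        current = choice
    return " ".join(command)

if __name__ == "__main__":
    print("Command:", build_command())
```

Because every choice is drawn from the table of legal continuations, the assembled command is well formed by construction, which is the property the patent claims for its menu technique.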

365 citations


Book
01 Dec 1983
TL;DR: A new approach to analyzing the risks a computer system may be subject to, a non-numeric method that allows natural-language expression, is presented.
Abstract: A new approach to analyzing the risks a computer system may be subject to is presented: a non-numeric method that allows natural language expression. A tutorial for the implementation of the ideas of fuzzy set theory in general, and of the linguistic approach to risk analysis in particular, is also discussed.
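
As a rough illustration of the linguistic approach, the sketch below maps invented natural-language likelihood terms onto fuzzy membership functions and picks the best-fitting term for an observed value; the terms and functions are assumptions for illustration, not the paper's actual method.

```python
# Illustration only: invented linguistic likelihood terms with triangular
# membership functions; not the paper's actual terms or method.

def triangular(a, b, c):
    """Return a triangular membership function rising from a, peaking at b, falling to c."""
    def mu(x):
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)
    return mu

LIKELIHOOD = {
    "rare":     triangular(0.0, 0.0, 0.4),
    "possible": triangular(0.2, 0.5, 0.8),
    "frequent": triangular(0.6, 1.0, 1.0),
}

def describe(frequency):
    """Map an observed incident frequency in [0, 1] to the best-fitting word."""
    return max(LIKELIHOOD, key=lambda term: LIKELIHOOD[term](frequency))

print(describe(0.05))   # -> 'rare'
print(describe(0.55))   # -> 'possible'
```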

362 citations



Proceedings ArticleDOI
J. F. Kelley1
12 Dec 1983
TL;DR: This research demonstrates that the methodological tools of the engineering psychologist can help build user-friendly software that accommodates the unruly language of computer-naive, first-time users by eliciting the cooperation of such users as partners in an iterative, empirical development process.
Abstract: A six-step, iterative, empirical, human factors design methodology was used to develop CAL, a natural language computer application to help computer-naive business professionals manage their personal calendars. Language is processed by a simple, non-parsing algorithm having limited storage requirements and a quick response time. CAL allows unconstrained English inputs from users with no training (except for a 5 minute introduction to the keyboard and display) and no manual (except for a two-page overview of the system). In a controlled test of performance, CAL correctly responded to between 86% and 97% of the inputs it received, according to various criteria. This research demonstrates that the methodological tools of the engineering psychologist can help build user-friendly software that accommodates the unruly language of computer-naive, first-time users by eliciting the cooperation of such users as partners in an iterative, empirical development process. The principal purpose of the research reported here was to design and test a systematic, empirical methodology for developing natural language computer applications. This paper describes that methodology and its successful use in the development of a natural language computer application: CAL, Calendar Access Language. The limited context or domain in which the application operates is the management of a personal calendar, or appointment book, data base by computer-naive business professionals.
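
The abstract only says that CAL uses a simple, non-parsing algorithm; the sketch below shows one plausible keyword-spotting reading of that idea. The keyword lists, actions, and time pattern are hypothetical and are not Kelley's actual algorithm.

```python
# A minimal keyword-spotting sketch in the spirit of a "non-parsing" approach
# to calendar commands. The keyword lists and actions are invented for
# illustration; they do not reproduce CAL.
import re

ACTIONS = {
    "add":    {"schedule", "add", "book", "set up", "meet"},
    "cancel": {"cancel", "delete", "remove", "scratch"},
    "show":   {"show", "what", "list", "display"},
}

def interpret(utterance):
    """Pick an action by keyword spotting; pull out a time if one is mentioned."""
    text = utterance.lower()
    action = next((a for a, words in ACTIONS.items()
                   if any(w in text for w in words)), "show")
    time = re.search(r"\b(\d{1,2}(:\d{2})?\s*(am|pm)?)\b", text)
    return {"action": action, "time": time.group(1) if time else None}

print(interpret("Please schedule lunch with Pat at 12:30 pm"))
# {'action': 'add', 'time': '12:30 pm'}
```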

246 citations


Proceedings ArticleDOI
Karen Kukich1
15 Jun 1983
TL;DR: Three fundamental principles of the technique are its use of domain-specific semantic and linguistic knowledge, its use of macro-level semantic and linguistic constructs, and its production system approach to knowledge representation.
Abstract: Knowledge-Based Report Generation is a technique for automatically generating natural language reports from computer databases. It is so named because it applies knowledge-based expert systems software to the problem of text generation. The first application of the technique, a system for generating natural language stock reports from a daily stock quotes database, is partially implemented. Three fundamental principles of the technique are its use of domain-specific semantic and linguistic knowledge, its use of macro-level semantic and linguistic constructs (such as whole messages, a phrasal lexicon, and a sentence-combining grammar), and its production system approach to knowledge representation.
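
A toy rendering of the production-system idea: condition/action rules map daily stock data to whole phrases (macro-level constructs), which are then spliced into a sentence. The rules, thresholds, and wording are invented for illustration and do not reproduce Kukich's system.

```python
# A toy production-system sketch: condition/action rules map stock data to
# whole phrases, which are then combined into a sentence. All rules and
# thresholds are invented for illustration.

DAY = {"close": 101.3, "change": +2.4, "volume": 41_000_000, "avg_volume": 28_000_000}

RULES = [
    (lambda d: d["change"] > 2.0,                   "posted a strong gain of {change:.1f} points"),
    (lambda d: 0 < d["change"] <= 2.0,              "edged up {change:.1f} points"),
    (lambda d: d["change"] < 0,                     "slipped {change:.1f} points"),
    (lambda d: d["volume"] > 1.3 * d["avg_volume"], "on unusually heavy trading"),
]

def generate_report(data, subject="The market"):
    """Fire every rule whose condition holds and splice the phrases together."""
    phrases = [template.format(**data) for cond, template in RULES if cond(data)]
    return f"{subject} {' '.join(phrases)}, closing at {data['close']:.2f}."

print(generate_report(DAY))
# The market posted a strong gain of 2.4 points on unusually heavy trading, closing at 101.30.
```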

245 citations




Journal ArticleDOI
TL;DR: In this paper, the modal concepts of intensional and extensional data constraints and queries are introduced and contrasted, and the potential application of these ideas to the problem of natural language database querying is discussed.
Abstract: The concept of a historical database is introduced as a tool for modeling the dynamic nature of some part of the real world. Just as first-order logic has been shown to be a useful formalism for expressing and understanding the underlying semantics of the relational database model, intensional logic is presented as an analogous formalism for expressing and understanding the temporal semantics involved in a historical database. The various components of the relational model, as extended to include historical relations, are discussed in terms of the model theory for the logic ILs, a variation of the logic IL formulated by Richard Montague. The modal concepts of intensional and extensional data constraints and queries are introduced and contrasted. Finally, the potential application of these ideas to the problem of natural language database querying is discussed.
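
The flavor of a historical relation can be shown with valid-time intervals on tuples and a query for the relation's extension at a moment; this sketch only hints at the idea and does not implement the paper's intensional-logic (ILs) semantics. The example relation is invented.

```python
# A small illustration of a historical relation: each tuple carries a valid-time
# interval, and a query asks for the snapshot (extension) that held at a moment.
from datetime import date

# (employee, department, valid_from, valid_to); None means still current
WORKS_IN = [
    ("ada",   "research", date(1982, 1, 1),  date(1982, 12, 31)),
    ("ada",   "sales",    date(1983, 1, 1),  None),
    ("grace", "research", date(1981, 6, 15), None),
]

def extension_at(relation, moment):
    """Return the ordinary (snapshot) relation that held at `moment`."""
    return [(emp, dept) for emp, dept, start, end in relation
            if start <= moment and (end is None or moment <= end)]

print(extension_at(WORKS_IN, date(1983, 3, 1)))
# [('ada', 'sales'), ('grace', 'research')]
```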

Proceedings Article
08 Aug 1983
TL;DR: The discussion of TEAM shows how domain-independent and domain-dependent information can be separated in the different components of an NL interface system, and presents one method of obtaining domain-specific information from a domain expert.
Abstract: This paper describes the design of a transportable natural language (NL) interface to databases and the constraints that transportability places on each component of such a system. By a transportable NL system, we mean an NL processing system that is constructed so that a domain expert (rather than an AI or linguistics expert) can move the system to a new application domain. After discussing the general problems presented by transportability, this paper describes TEAM (an acronym for Transportable English database Access Medium), a demonstrable prototype of such a system. The discussion of TEAM shows how domain-independent and domain-dependent information can be separated in the different components of an NL interface system, and presents one method of obtaining domain-specific information from a domain expert.
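
The separation the abstract describes can be pictured as a declarative domain description, supplied by the domain expert, that the domain-independent interface code merely reads; the field names and example tables below are invented and are not TEAM's actual acquisition format.

```python
# Sketch of domain-dependent vs. domain-independent separation. The domain
# description format below is hypothetical, not TEAM's.

DOMAIN = {
    "tables": {
        "ships": {"key": "ship_id", "attributes": ["name", "tonnage", "home_port"]},
        "ports": {"key": "port_id", "attributes": ["name", "country"]},
    },
    # connecting phrase the expert supplies for the ships -> ports join
    "joins": [
        {"from": "ships.home_port", "to": "ports.port_id", "phrase": "is based at"},
    ],
}

def describe_domain(domain):
    """Domain-independent code: verbalize whatever domain description it is given."""
    for join in domain["joins"]:
        left = join["from"].split(".")[0]
        right = join["to"].split(".")[0]
        print(f"A row of '{left}' {join['phrase']} a row of '{right}'.")

describe_domain(DOMAIN)
# A row of 'ships' is based at a row of 'ports'.
```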

Book ChapterDOI
01 Jan 1983
TL;DR: The assumption that the acquisition of a second or third or fourth language is, in the normal case, by no means as inevitable as the acquisition of a first language is discussed by the authors.
Abstract: Traditionally, the central problem in the study of second language acquisition and use has been the determination of those factors that differentiate cases in which a relatively high degree of proficiency in a second language is attained from those cases in which it is not. Implicit in the pursuit of solutions to this problem is the assumption - fully justified on the basis of systematic as well as anecdotal observation - that the acquisition of a second or third or fourth language is, in the normal case, by no means inevitable as is the acquisition of a first language. The non-inevitability of second language acquisition gives rise to a variety of questions in the study of second language phenomena that do not ordinarily arise in the study of first; questions concerning the personality, motivations, general cognitive style, and (most importantly for the present discussion) the age of the learner, as well as features of the environmental conditions under which acquisition occurs.

Book
01 Mar 1983
TL;DR: A fundamental change is taking place in the study of computational linguistics analogous to that which has taken place in the study of computer vision over the past few years and indicative of trends that are likely to affect future work in artificial intelligence generally.
Abstract: From the Publisher: As the contributions to this book make clear, a fundamental change is taking place in the study of computational linguistics analogous to that which has taken place in the study of computer vision over the past few years and indicative of trends that are likely to affect future work in artificial intelligence generally. The first wave of efforts on machine translation and the formal mathematical study of parsing yielded little real insight into how natural language could be understood by computers or how computers could lead to an understanding of natural language. The current wave of research seeks both to include a wider and more realistic range of features found in human languages and to limit the dimensions of program goals. Some of the new programs embody for the first time constraints on human parsing which Chomsky has uncovered, for example. The isolation of constraints and the representations for their expression, rather than the design of mechanisms and ideas about process organization, is central to the work reported in this volume. And if present goals are somewhat less ambitious, they are also more realistic and more realizable. Contents: Computational Aspects of Discourse, Robert Berwick; Recognizing Intentions from Natural Language Utterances, James Allen; Cooperative Responses from a Portable Natural Language Data Base Query System, Jerrold Kaplan; Natural Language Generation as a Computational Problem: An Introduction, David McDonald; Focusing in the Comprehension of Definite Anaphor, Candace Sidner; So What Can We Talk About Now? Bonnie Webber. A Preface by David Israel relates these chapters to the general considerations of philosophers and psycholinguists. Michael Brady is Senior Research Scientist at the MIT Artificial Intelligence Laboratory. The book is included in the MIT Press Artificial Intelligence Series.

Journal Article

Journal ArticleDOI
TL;DR: A model of generalization that is part of a system for language understanding, the Integrated Partial Parser (IPP), includes the retrieval of relevant examples from long-term memory so that the concepts to be created can be determined when new stories are read.

Proceedings ArticleDOI
Donald Hindle1
15 Jun 1983
TL;DR: It is a mystery that people have little difficulty understanding the non-fluent speech that is the essential medium of everyday life and that children can succeed in acquiring the grammar of a language on the basis of evidence provided by a mixed set of apparently grammatical and ungrammatical strings.
Abstract: It is often remarked that natural language, used naturally, is unnaturally ungrammatical. Spontaneous speech contains all manner of false starts, hesitations, and self-corrections that disrupt the well-formedness of strings. It is a mystery, then, that despite this apparent wide deviation from grammatical norms, people have little difficulty understanding the non-fluent speech that is the essential medium of everyday life. And it is a still greater mystery that children can succeed in acquiring the grammar of a language on the basis of evidence provided by a mixed set of apparently grammatical and ungrammatical strings.

Journal Article
TL;DR: This paper classifies different types of grammatical deviations and related phenomena at the lexical, sentential and dialogue levels and presents recovery strategies tailored to specific phenomena in the classification.
Abstract: Practical natural language interfaces must exhibit robust behaviour in the presence of extragrammatical user input. This paper classifies different types of grammatical deviations and related phenomena at the lexical, sentential and dialogue levels and presents recovery strategies tailored to specific phenomena in the classification. Such strategies constitute a tool chest of computationally tractable methods for coping with extragrammaticality in restricted domain natural language. Some of the strategies have been tested and proven viable in existing parsers.
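
One way to picture such a tool chest is a recovery cascade that tries a lexical repair (spelling correction against the known vocabulary) before falling back to a clarification dialogue; the vocabulary, stand-in parser, and strategies below are invented examples, not the paper's classification.

```python
# A toy recovery cascade: attempt lexical-level repair of unknown words before
# giving up and asking for clarification. Vocabulary and "parser" are placeholders.
import difflib

VOCABULARY = {"show", "me", "all", "flights", "from", "boston", "to", "denver"}

def lexical_repair(tokens):
    """Replace unknown words with the closest known word, if one is close enough."""
    repaired = []
    for tok in tokens:
        if tok in VOCABULARY:
            repaired.append(tok)
        else:
            close = difflib.get_close_matches(tok, VOCABULARY, n=1, cutoff=0.75)
            repaired.append(close[0] if close else tok)
    return repaired

def parse(tokens):
    """Stand-in 'parser': succeeds only if every token is in the vocabulary."""
    return all(t in VOCABULARY for t in tokens)

def robust_parse(utterance):
    tokens = utterance.lower().split()
    if parse(tokens):
        return "parsed", tokens
    tokens = lexical_repair(tokens)                   # lexical-level recovery
    if parse(tokens):
        return "parsed after lexical repair", tokens
    return "clarification dialogue needed", tokens    # dialogue-level fallback

print(robust_parse("show me all flihgts from bostn to denver"))
```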

Journal ArticleDOI

Journal ArticleDOI
TL;DR: The design and implementation of a paraphrase component for a natural language question-answering system (CO-OP) is presented, and a major point made is the role of given and new information in formulating a paraphrase that differs in a meaningful way from the user's question.
Abstract: The design and implementation of a paraphrase component for a natural language question-answering system (CO-OP) is presented. The component is used to produce a paraphrase of a user's question to the system, which is presented to the user before the question is evaluated and answered. A major point made is the role of given and new information in formulating a paraphrase that differs in a meaningful way from the user's question. A description is also given of the transformational grammar that is used by the paraphraser.
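
The interaction pattern described, showing the user a paraphrase of the question before the query is evaluated, can be sketched as below; the paraphrase function is a trivial placeholder and does not implement CO-OP's given/new, transformational-grammar-based paraphraser.

```python
# Minimal sketch of the paraphrase-then-confirm interaction; the paraphraser
# here is a placeholder, not CO-OP's actual component.

def paraphrase(question):
    # Placeholder: a real component would restructure given vs. new information.
    return f"You are asking for: {question.rstrip('?').lower()}."

def answer(question):
    return "(query result would appear here)"

def ask(question, confirm=lambda prompt: input(prompt).strip().lower() == "y"):
    print(paraphrase(question))
    if confirm("Is that what you meant? [y/n] "):
        return answer(question)
    return "Please rephrase your question."

if __name__ == "__main__":
    print(ask("Which students got a grade of F in CIS 500?"))
```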

01 Jan 1983
TL;DR: Penman as discussed by the authors is a text generation system based on a large systemic grammar of English for multiparagraph text generation, which includes a knowledge acquisition module, a text planning module, and an evaluation and plan-perturbation module.
Abstract: The problem of programming computers to produce natural language explanations and other texts on demand is an active research area in artificial intelligence. In the past, research systems designed for this purpose have been limited by the weakness of their linguistic bases, especially their grammars, and their techniques often cannot be transferred to new knowledge domains. A new text generation system, Penman, is designed to overcome these problems and produce fluent multiparagraph text in English in response to a goal presented to the system. Penman consists of four major modules: a knowledge acquisition module which can perform domain-specific searches for knowledge relevant to a given communication goal; a text planning module which can organize the relevant information, decide what portion to present, and decide how to lead the reader's attention and knowledge through the content; a sentence generation module based on a large systemic grammar of English; and an evaluation and plan-perturbation module which revises text plans based on evaluation of text produced. Development of Penman has included implementation of the largest systemic grammar of English in a single notation. A new semantic notation has been added to the systemic framework, and the semantics of nearly the entire grammar has been defined. The semantics is designed to be independent of the system's knowledge notation, so that it is usable with widely differing knowledge representations, including both frame-based and predicate-calculus-based approaches.

Book ChapterDOI
01 Jan 1983
TL;DR: The ways in which the learning of mathematics interacts with problems of language are beginning to receive more attention than has formerly been the case as mentioned in this paper, and it is good that at this Congress considerable time should be devoted to discussing language problems.
Abstract: The ways in which the learning of mathematics interacts with problems of language are beginning to receive more attention than has formerly been the case. The enormous increase in the variety of children who are now introduced to mathematics has alerted us to aspects previously ignored. It is good, then, that at this Congress considerable time should be devoted to discussing language problems. We have already heard a plenary lecture by Hermina Sinclair on the role of language in cognitive development, and there has been an opportunity for members to respond to this. Further sessions are to be devoted to detailed consideration of the problems of teaching mathematics in a second language. It was my intention to attempt in this paper to survey aspects of the mathematics education/language interface which would not be touched upon in those sessions. Eventually, I rejected that idea and decided to speak on one particular aspect - - that of symbolism. Those in search of a survey will find one such in Austin and Howson (1979). Yet, even by restricting myself to the one area of symbolism, I find that all too frequently I can make only brief allusions to problems. My purpose, then, is to attempt to identify particular areas of possible research and, where possible, to mention existing work.

Journal ArticleDOI
TL;DR: This report focuses on how the BORIS program handles a complex story involving a divorce.

Proceedings Article
22 Aug 1983
TL;DR: Development of Penman has included implementation of the largest systemic grammar of English in a single notation, and the semantics of nearly the entire grammar has been defined.
Abstract: The problem of programming computers to produce natural language explanations and other texts on demand is an active research area in artificial intelligence. In the past, research systems designed for this purpose have been limited by the weakness of their linguistic bases, especially their grammars, and their techniques often cannot be transferred to new knowledge domains. A new text generation system, Penman, is designed to overcome these problems and produce fluent multiparagraph text in English in response to a goal presented to the system. Penman consists of four major modules: a knowledge acquisition module which can perform domain-specific searches for knowledge relevant to a given communication goal; a text planning module which can organize the relevant information, decide what portion to present, and decide how to lead the reader's attention and knowledge through the content; a sentence generation module based on a large systemic grammar of English; and an evaluation and plan-perturbation module which revises text plans based on evaluation of text produced. Development of Penman has included implementation of the largest systemic grammar of English in a single notation. A new semantic notation has been added to the systemic framework, and the semantics of nearly the entire grammar has been defined. The semantics is designed to be independent of the system's knowledge notation, so that it is usable with widely differing knowledge representations, including both frame-based and predicate-calculus-based approaches.
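
The four-module flow the abstract describes (acquire knowledge, plan the text, generate sentences, evaluate and revise) can be schematized as a loop; the placeholder functions below are illustrative only and do not reproduce Penman's systemic grammar or its semantics.

```python
# Schematic sketch of a four-module generate-and-revise loop; every function is
# a stand-in invented for illustration, not Penman's actual modules.

def acquire_knowledge(goal, knowledge_base):
    return [fact for fact in knowledge_base if goal in fact]

def plan_text(facts):
    return sorted(facts)                       # stand-in for real content ordering

def generate(plan):
    return " ".join(f"{s}." for s in plan)     # stand-in for the systemic grammar

def evaluate(text):
    return len(text) > 0                       # stand-in for plan evaluation

def penman_like(goal, knowledge_base, max_revisions=3):
    facts = acquire_knowledge(goal, knowledge_base)
    text = ""
    for _ in range(max_revisions):
        plan = plan_text(facts)
        text = generate(plan)
        if evaluate(text):
            return text
        facts = facts[:-1]                     # crude 'plan perturbation'
    return text

KB = ["the pump moves coolant", "the pump is driven by the belt", "the fan cools the radiator"]
print(penman_like("pump", KB))
```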

Posted Content
TL;DR: The design and results of a field evaluation of a natural language system - NLS - used for data retrieval and its practical usefulness are presented.
Abstract: Although a large number of natural language database interfaces have been developed, there have been few empirical studies of their practical usefulness. This paper presents the design and results of a field evaluation of a natural language system - NLS - used for data retrieval. A balanced, multifactorial design comparing NLS with a reference retrieval language, SQL, is described. The data are analyzed on two levels: work task (n=87) and query (n=1081). SQL performed better than NLS on a variety of measures, but NLS required less effort to use. Subjects performed much poorer than expected based on the results of laboratory studies. This finding is attributed to the complexity of the field setting and to optimism in grading laboratory experiments. The methodology developed for studying computer languages in real work settings was successful in consistently measuring differences in treatments over a variety of conditions.

Journal ArticleDOI
TL;DR: The second language classroom has long been a center of research interest as mentioned in this paper, which is based on the priority of direct observation of second-language classroom activity and is directed primarily at identifying the numerous factors which shape the second language instructional experience.
Abstract: The second language classroom has long been a center of research interest. In the last several years, attempts to examine the second language classroom-to clarify how the language classroom experience differs from what is available outside the classroom and how language classrooms differ among themselves-have been increasingly guided by a shared set of goals and premises. Classroom process research is based on the priority of direct observation of second language classroom activity and is directed primarily at identifying the numerous factors which shape the second language instructional experience. The result has been a marked departure from earlier research on the nature and effects of classroom instruction in a second language. Selected studies in three areas are reviewed: the linguistic environment of second language instruction, patterns of participation in the language classroom, and error treatment. Also reviewed are recent applications of introspective (mentalistic) research to the problem of describing the second language classroom experience.

Journal ArticleDOI
TL;DR: This article investigated metaphoric understanding and its relationship to a cognitive task of combinatorial reasoning in preadolescent children (x age = 10:7) who were diagnosed as lan...
Abstract: This study was designed to investigate metaphoric understanding and its relationship to a cognitive task of combinatorial reasoning in preadolescent children (x age =10:7) who were diagnosed as lan...

Proceedings Article
31 Oct 1983
TL;DR: A functional overview of a new kind of natural language interface that goes far in overcoming both the "ease-of-use" and the "costly" problems of building and maintaining natural language interfaces to databases.
Abstract: Natural language interfaces to databases are not in common use today for two main reasons: they are difficult to use and they are expensive to build and maintain. This paper presents a functional overview of a new kind of natural language interface that goes far in overcoming both of these problems. The “ease-of-use” problem is solved by wedding a menu-based interaction technique to a traditional semantic grammar-driven natural language system. Using this approach, all user queries are “understood” by the system. The “creation and maintenance problem” is solved by designing a core grammar with parameters supplied by the data dictionary and then automatically generating semantic grammars covering some selected subpart of the user’s data. Automatically generated natural language interfaces offer the user an attractive way to group semantically related tables together, to model a user’s access rights, and to model a user's view of supported join paths in a database.
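
A toy version of the generation step: a core grammar is instantiated with table and attribute names read from a data dictionary, yielding a small semantic grammar for menu-based querying; the dictionary format and grammar shape are invented for illustration, not the system's actual design.

```python
# Toy sketch: fill a core grammar's slots from a (hypothetical) data dictionary
# to obtain a small semantic grammar for menu-based querying.

DATA_DICTIONARY = {
    "parts":     {"attributes": ["part_name", "weight", "color"]},
    "suppliers": {"attributes": ["supplier_name", "city"]},
}

CORE_RULES = {
    "<query>": ["find <table> whose <attribute> is <value>"],
    "<value>": ["<typed-in value>"],
}

def generate_grammar(dictionary):
    """Fill the core grammar's <table> and <attribute> slots from the dictionary."""
    grammar = dict(CORE_RULES)
    grammar["<table>"] = sorted(dictionary)
    grammar["<attribute>"] = sorted({attr for t in dictionary.values()
                                     for attr in t["attributes"]})
    return grammar

for lhs, rhs in generate_grammar(DATA_DICTIONARY).items():
    print(lhs, "->", " | ".join(rhs))
```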

Journal ArticleDOI
TL;DR: Current second language acquisition theory as well as case history reports of the “din” are consistent with the hypothesis that the din in the head is a result of stimulation of the language acquisition device, and is “set off” when the acquirer receives significant amounts of comprehensible input.
Abstract: This paper discusses a phenomenon familiar to many language acquirers, an involuntary rehearsal of second language words, sounds, and phrases. Current second language acquisition theory as well as case history reports of the “din” are consistent with the hypothesis that the din in the head is a result of stimulation of the language acquisition device, and is “set off” when the acquirer receives significant amounts of comprehensible input. The din may have practical value; it may tell us when we are providing input for real language acquisition in our classes.