
Showing papers on "Natural language understanding" published in 1989


Proceedings ArticleDOI
22 Nov 1989
TL;DR: In the light of this experiment, certain issues encountered in natural language understanding are discussed; the NLQ presented cannot detect semantic contradictions, as it does not use any data restriction/dependency information to detect semantic errors in conversation.
Abstract: A description is given of the implementation of a natural language query (NLQ) processor based on the pattern matching paradigm (PMP) in standard Prolog on an IBM-PC/XT. In the light of this experiment, certain issues encountered in natural language understanding are discussed. PMP is a simplistic syntactic approach to natural language understanding and works well in limited applications. However, the NLQ presented cannot detect semantic contradictions, as it does not use any data restriction/dependency information to detect semantic errors in conversation.
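
As a rough illustration of how a PMP-style query processor works, here is a minimal sketch in Python rather than the paper's Prolog; the patterns and the fact base are invented for illustration.

    import re

    # Toy fact base; keys are (attribute, entity) pairs. Entirely invented.
    FACTS = {
        ("salary", "smith"): 42000,
        ("dept", "smith"): "sales",
    }

    # Each surface pattern maps straight to a lookup; nothing checks that
    # the extracted words make semantic sense.
    PATTERNS = [
        (re.compile(r"what is the (\w+) of (\w+)", re.IGNORECASE),
         lambda m: (m.group(1).lower(), m.group(2).lower())),
    ]

    def answer(query):
        for pattern, extract in PATTERNS:
            m = pattern.match(query.strip())
            if m:
                return FACTS.get(extract(m), "unknown")
        return "cannot parse"

    print(answer("What is the salary of Smith?"))    # -> 42000
    print(answer("What is the salary of Tuesday?"))  # -> unknown

The second query shows the limitation the paper notes: a purely syntactic match happily looks up a semantically anomalous entity.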

112 citations


Journal ArticleDOI
TL;DR: Observation of convergence in human-computer dialogue suggests that the technique can be incorporated in user interfaces to improve communication; other applications of the technique in HCI are outlined.

31 citations


Journal ArticleDOI
01 May 1989-Infor
TL;DR: This article argues that the concept of natural language understanding systems should be extended to include non-verbal types of dialogue, and makes the claim that such interfaces are, in many ways, more natural than verbal interfaces.
Abstract: The argument is made that the concept of “natural language understanding systems” should be extended to include non-verbal types of dialogue. The claim is made that such interfaces are, in many way...

30 citations


Proceedings ArticleDOI
26 Jun 1989
TL;DR: In BBN's natural language understanding and generation system (Janus), a hybrid approach to representation is used, employing an intensional logic for the representation of the semantics of utterances and a taxonomic language with formal semantics for specification of descriptive constants and axioms relating them.
Abstract: In BBN's natural language understanding and generation system (Janus), we have used a hybrid approach to representation, employing an intensional logic for the representation of the semantics of utterances and a taxonomic language with formal semantics for specification of descriptive constants and axioms relating them. Remarkably, 99.9% of 7,000 vocabulary items in our natural language applications could be adequately axiomatized in the taxonomic language.

26 citations


Proceedings ArticleDOI
03 Jan 1989
TL;DR: The authors compare and contrast three approaches to delivering systems with semantic access, present a theoretical basis for an FLBC (speech act theory), and describe a prototype implementation.
Abstract: Electronic messaging in an office environment is normally carried out in natural language. For a variety of reasons it would be useful if electronic messaging systems could have semantic access to (i.e. access to the meanings and contents of) the messages they process. Given that natural language understanding is not a practicable alternative, there remain three approaches to delivering systems with semantic access: electronic data interchange, tagged messages, and the development of a formal language for business communication (FLBC). The authors compare and contrast these three approaches, present a theoretical basis for an FLBC (speech act theory), and describe a prototype implementation.
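
A minimal sketch of the tagged-message idea, assuming nothing about the authors' actual FLBC syntax; the Message fields and act names here are hypothetical.

    from dataclasses import dataclass

    @dataclass
    class Message:
        act: str        # illocutionary force: "request", "promise", "assert", ...
        sender: str
        receiver: str
        content: dict   # propositional content in machine-readable form

    order = Message(act="request", sender="purchasing@acme",
                    receiver="sales@widgetco",
                    content={"action": "ship", "item": "widget-7", "qty": 100})

    # Because force and content are explicit, the receiving system can act
    # on the message without any natural language understanding.
    if order.act == "request" and order.content["action"] == "ship":
        print("queue shipment:", order.content["qty"], order.content["item"])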

18 citations


Proceedings ArticleDOI
21 Feb 1989
TL;DR: A plan is presented for evaluating natural language processing systems that focus on text understanding as exemplified in short texts from military messages; the plan includes the definition of a simulated database update task that requires NLP systems to fill a template with information found in the texts.
Abstract: A plan is presented for evaluating natural language processing (NLP) systems that have focused on the issues of text understanding as exemplified in short texts from military messages. The plan includes definition of bodies of text to use as development and test data, namely the narrative lines from one type of naval message, and definition of a simulated database update task that requires NLP systems to fill a template with information found in the texts. Documentation related to the naval messages and examples of filled templates have been prepared to assist NLP system developers. It is anticipated that developers of a number of different NLP systems will participate in the evaluation and will meet afterwards to present and interpret the results and to critique the test design.
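
A toy version of such a template-fill task; the slot inventory, message text, and extraction rule are all invented for illustration.

    import re

    SLOTS = ("event", "agent", "location", "time")  # hypothetical slot inventory

    def fill_template(text):
        template = dict.fromkeys(SLOTS)
        m = re.search(r"(\w+) attacked .* near (\w+) at (\d{4})Z", text)
        if m:
            template.update(event="attack", agent=m.group(1),
                            location=m.group(2), time=m.group(3) + "Z")
        return template

    narrative = "KASHIN attacked the convoy near MINORCA at 1430Z."
    print(fill_template(narrative))
    # {'event': 'attack', 'agent': 'KASHIN', 'location': 'MINORCA', 'time': '1430Z'}

Scoring then compares each filled slot against an answer key, which is what makes the task an objective basis for comparing systems.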

17 citations


Journal Article
TL;DR: A gastroenterological expert system application is briefly demonstrated; similar expert systems can be useful in research on gastrointestinal cytoprotection, including the design of compounds with cytoprotective effect and experimental and clinical medical research.
Abstract: The emergence of artificial intelligence (AI) in computer technology and its application in the medical field enable researchers to carry out such intelligent activities as image processing, medical reasoning, clinical decision support and natural language understanding. A gastroenterological expert system application is briefly demonstrated in this paper. Similar expert systems can be useful in research on gastrointestinal cytoprotection, including the design of compounds with cytoprotective effect and experimental and clinical medical research.

16 citations



Proceedings ArticleDOI
01 Mar 1989
TL;DR: One need not create a natural language understanding system in order to create a hypertext data base that can be traversed with unconstrained natural language.
Abstract: One need not create a natural language understanding system in order to create a hypertext data base that can be traversed with unconstrained natural language. The task is simplified because the computer creates a constrained context, imposes a non-negotiable topic, and elicits simple questions. Two small hypertext data bases describing the authors' organization and the terms and rules of baseball were implemented on an IBM PC. When ten untrained people were allowed to search through these data bases, 59 per cent of their queries were answered correctly by the first data base and 64 per cent by the second.
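
A sketch of the kind of retrieval that can work here without full NLU: score each node by keyword overlap with the query and return the best one. The node texts are invented; the system's actual matching method may well differ.

    # Hypothetical mini data base in the spirit of the baseball example.
    NODES = {
        "strike": "a strike is a pitch that the batter swings at and misses",
        "out":    "a batter is out after three strikes",
        "inning": "an inning gives each team one turn at bat",
    }

    STOP = {"what", "is", "a", "an", "the", "of", "how", "does"}

    def best_node(query):
        words = {w.strip("?") for w in query.lower().split()} - STOP
        score = lambda item: len(words & set(item[1].split())) + (item[0] in words)
        return max(NODES.items(), key=score)[0]

    print(best_node("What is a strike?"))  # -> strike

The constrained context does the heavy lifting: with a fixed topic and simple elicited questions, shallow matching answers a usefully large share of queries.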

15 citations


01 Jan 1989
TL;DR: This dissertation describes a new approach to lexical ambiguity resolution during sentence understanding which is implemented in a program called ATLAST, and provides a solution to the problem of error recovery which is compatible with current psycholinguistic theories of lexical disambiguation.
Kurt Paul Eiselt
Abstract: Solving the mysteries of human language understanding inevitably requires an answer to the question of how the language understander resolves ambiguity, for human language is certainly ambiguous. But ambiguity leads to choices between possible explanations, and choice opens the door for mistakes. Unless we are willing to believe that the human language understander always makes the correct choice, any explanation of ambiguity resolution must be considered incomplete if it does not also account for recovery from an incorrect decision. This dissertation describes a new approach to lexical ambiguity resolution during sentence understanding which is implemented in a program called ATLAST. Many computational models of natural language understanding have dealt with lexical ambiguity resolution, but ATLAST is one of the few models to address the associated problem of error recovery. ATLAST's ability to recover from an incorrect lexical inference decision stems from its ability to retain unchosen word meanings for a period of time after it selects the apparently context-appropriate meaning of an ambiguous word. The short-term retention of possible lexical inferences permits ATLAST to recover from incorrect decisions without backtracking and reprocessing text, and without keeping a record of possible choices indefinitely. The principle of retention provides a solution to the problem of error recovery which is compatible with current psycholinguistic theories of lexical disambiguation. Furthermore, the existence of some form of retention in lexical disambiguation is supported by the results of experiments with human subjects. This dissertation includes a discussion of these results and speculation on how the principle of retention might be extended to account for recovery from erroneous higher-level inference decisions.
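
A minimal sketch of the retention principle with toy senses and a crude repair cue; none of this is ATLAST's actual machinery.

    RETENTION = 3   # how many words unchosen senses stay recoverable

    SENSES = {"pitcher": [("ballplayer", 0.6), ("container", 0.4)]}
    # hypothetical cue: a word that contradicts a chosen sense names the repair
    REPAIR_CUES = {"milk": ("pitcher", "container")}

    def disambiguate(words):
        chosen, retained = {}, []   # retained: (word, position, alternatives)
        for i, w in enumerate(words):
            retained = [r for r in retained if i - r[1] <= RETENTION]  # age out
            if w in SENSES:
                (best, _), *rest = SENSES[w]
                chosen[w] = best                                # commit, but...
                retained.append((w, i, [s for s, _ in rest]))   # ...retain the rest
            if w in REPAIR_CUES:
                target, sense = REPAIR_CUES[w]
                for word, _, alts in retained:
                    if word == target and sense in alts:
                        chosen[target] = sense   # repair without reparsing
        return chosen

    print(disambiguate("the pitcher of milk spilled".split()))
    # {'pitcher': 'container'}

The window captures the dissertation's point: no backtracking and no indefinite record of alternatives, yet recent wrong choices remain repairable.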

13 citations


Journal ArticleDOI
TL;DR: It is argued that one need not create a natural language understanding system in order to create a hypertext data base that can be traversed with unconstrained natural language; the task is simplified because the computer creates a constrained context.
Abstract: One need not create a natural language understanding system in order to create a hypertext data base that can be traversed with unconstrained natural language. The task is simplified because the co...

Journal ArticleDOI
Ali Farghaly
TL;DR: The view is that Computer Assisted Language Instruction software should be developed as a natural language processing system that offers an interactive environment for language learners and a model for intelligent CALI software (MICALI) is proposed.
Abstract: This paper presents the view that Computer Assisted Language Instruction (CALI) software should be developed as a natural language processing system that offers an interactive environment for language learners. A description of Artificial Intelligence tools and techniques, such as parsing, knowledge representation and expert systems, is presented. Their capabilities and limitations are discussed and a model for intelligent CALI software (MICALI) is proposed. MICALI is highly interactive and communicative and can initiate conversation with a student or respond to questions on a previously defined domain of knowledge. In the present state of the art, MICALI can operate only with limited parsing and domain-specific knowledge representation.

Journal ArticleDOI
TL;DR: This paper describes in detail a connectionist disambiguation system and discusses proposed connectionist approaches towards parsing and case role assignment, and suggests some directions for future research.
Abstract: We will discuss various connectionist schemes for natural language understanding (NLU). In principle, massively parallel processing schemes, such as connectionist networks, are well-suited for modelling highly integrated forms of processing. The connectionist approach towards natural language processing is motivated by the belief that a NLU system should process knowledge from many different sources, e.g. semantic, syntactic, and pragmatic, in just this sort of integrated manner. The successful use of spreading activation for various disambiguation tasks in natural language processing models led to the first connectionist NLU systems. In addition to describing in detail a connectionist disambiguation system, we will also discuss proposed connectionist approaches towards parsing and case role assignment. This paper is intended to introduce the reader to some of the basic ideas behind the connectionist approach to NLU. We will also suggest some directions for future research.
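
A minimal spreading-activation sketch for sense selection. The network is invented, and real connectionist models iterate activation over a much larger network with decay and inhibition; this shows only a single step.

    # Sense nodes linked to the context words that prime them.
    LINKS = {
        "bank/river": {"water", "shore", "fish"},
        "bank/money": {"deposit", "loan", "teller"},
    }

    def disambiguate(context):
        # one step of activation flow from context words into sense nodes
        activation = {sense: len(nbrs & context) for sense, nbrs in LINKS.items()}
        return max(activation, key=activation.get)

    print(disambiguate({"she", "made", "a", "deposit", "at", "the", "bank"}))
    # -> bank/money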

Book
01 Apr 1989
TL;DR: This is an excerpt from the Handbook of Artificial Intelligence, a compendium of hundreds of articles about AI ideas, techniques, and programs being prepared at Stanford University by AI researchers and students from across the country.
Abstract: This is an excerpt from the Handbook of Artificial Intelligence, a compendium of hundreds of articles about AI ideas, techniques, and programs being prepared at Stanford University by AI researchers and students from across the country. In addition to articles describing the specifics of various AI programming methods, the Handbook contains dozens of overview articles like this one, which attempt to give historical and scientific perspective to work in the different areas of AI research. This article is from the Handbook chapter on natural language understanding. Cross-references to other articles in the Handbook have been removed; terms discussed in more detail elsewhere are italicized. Many people have contributed to this chapter, including especially Anne Gardner, James Davidson, and Terry Winograd. Avron Barr and Edward A. Feigenbaum are the Handbook's general editors.

Journal ArticleDOI
TL;DR: One such approach is described in this paper, for the communication of the content and structure of natural language sentences; it has been implemented in Common LISP on a VAX workstation.
Abstract: Natural language communication interfaces have usually employed linear strings of words for man-machine communication. A lot of ‘intelligence’—in the form of semantic, syntactic and other information—is used to analyse these strings, and to puzzle out their structures. However, use of linear strings of words, while appropriate for communication between humans, seems inappropriate for communication with a machine using video displays, keyboards and a mouse. One need not demand too much out of machines in this area of analysis of natural language input; one could bypass these problems by using alternative approaches to man-machine communication. One such approach is described in this paper, for the communication of the content and structure of natural language sentences. The basic idea is that the human user of the interface should use the two dimensional screen, mouse and keyboard to create structures for input, guided by appropriate software. Another key idea is the use of a high degree of interaction to avoid some problems usually encountered in natural language understanding. Based on this approach, a system called ScreenTalk has been implemented in Common LISP on a VAX workstation. The man-machine interface is used to interactively input both the content and the structure of sentences. Users may then ask questions, which are answered using stored information. ScreenTalk now operates on a database of brief news items. It has been designed to be fairly domain independent, and is expected to be used soon in other applications. The conceptual framework for such an approach, the design of the experimental interface used to test this framework and the authors' experience with this interface are presented.

ReportDOI
01 Sep 1989
TL;DR: IRUS-II is the understanding subsystem of the Janus natural language interface and contains domain-independent algorithms, a large grammar of English, domain- independent semantic interpretation rules, and a domain- Independent discourse component.
Abstract: IRUS-II is the understanding subsystem of the Janus natural language interface. IRUS-II is a natural language understanding (NLU) shell. That is, it contains domain-independent algorithms, a large grammar of English, domain-independent semantic interpretation rules, and a domain-independent discourse component. In addition, several software aids are provided to customize the system to particular application domains. These software aids output the four knowledge bases necessary for IRUS-II to correctly interpret English utterances and generate appropriate code for simultaneous access to multiple application systems.
Keywords: natural language interfaces, user interfaces, knowledge bases.

Proceedings ArticleDOI
21 Feb 1989
TL;DR: CRL's contribution to DARPA's program is to bring to bear on natural language understanding two closely-related belief and context mechanisms: dynamic generation of nested belief structures (ViewGen) and hypotheses for reasoning and problem-solving (MGR).
Abstract: CRL's contribution to DARPA's program is to bring to bear on natural language understanding two closely-related belief and context mechanisms: dynamic generation of nested belief structures (ViewGen) and hypotheses for reasoning and problem-solving (MGR).
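
A sketch of dynamically generated nested belief spaces in the spirit of ViewGen; the representation is ours, and ViewGen's default ascription of beliefs between spaces is not modelled.

    def view(space, *agents):
        """Descend through nested belief spaces, generating them on demand."""
        for agent in agents:
            space = space.setdefault(agent, {"facts": set()})
        return space

    system = {"facts": {"the earth is round"}}

    # the system's view of John's view of Mary's beliefs, created only when needed
    view(system, "john", "mary")["facts"].add("the meeting is at 3")
    print(view(system, "john", "mary")["facts"])  # {'the meeting is at 3'}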

Proceedings Article
01 Aug 1989
TL;DR: The Parallel Expert Parser is a natural language analysis model belonging to the interactive model paradigm that stresses the parallel interaction of relatively small distributed knowledge components to arrive at the meaning of a fragment of text.
Abstract: The Parallel Expert Parser (PEP) is a natural language analysis model belonging to the interactive model paradigm that stresses the parallel interaction of relatively small distributed knowledge components to arrive at the meaning of a fragment of text. It borrows the idea of words as basic dynamic entities triggering a set of interactive processes from the Word Expert Parser (Small 1980), but tries to improve on the clarity of interactive processes and on the organization of lexically-distributed knowledge. As of now, it is especially the procedural aspects that have received attention: instead of having wild-running uncontrollable interactions, PEP restricts the interactions to explicit communications on a structured blackboard; the communication protocols are a compromise between maximum parallelism and controllability. At the same time, it is no longer just words that trigger processes; words create larger units (constituents) that are in turn interacting entities on a higher level. Lexical experts contribute their associated knowledge, create higher-level experts, and die away. The linguists define the levels to be considered, and write expert processes in a language that tries to hide the procedural aspects of the parallel-interactive model from them. Problems include the possibility of deadlock situations when processes wait infinitely for each other, the way to efficiently pursue different alternatives (as of now, the system just uses don’t-care nondeterminism), and testing whether the protocols allow linguists to fully express their needs. PEP has been implemented in Flat Concurrent Prolog, using the Logix programming environment. Current research is oriented more towards the problem of distributed knowledge representation. Abstractions and generalizations across lexical experts could be made using principles from object-oriented programming (introducing generic, prototypical experts; cp. Hahn 1987). Thoughts also go in the direction of an integration of the coarse-grained parallelism with knowledge representation in a fine-grained parallel (connectionist) way.
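
A sketch of the restricted-communication idea: experts post to a structured blackboard instead of interacting freely, and postings build higher-level constituents. This is a sequential Python stand-in for what PEP runs concurrently in Flat Concurrent Prolog, with invented vocabularies.

    blackboard = []  # the structured, shared communication medium

    def det_expert(word):
        if word in {"the", "a"}:
            blackboard.append(("DET", word))

    def noun_expert(word):
        if word in {"dog", "parser"}:
            blackboard.append(("N", word))

    def np_expert():
        # a higher-level expert built from lower-level postings
        if [c for c, _ in blackboard[-2:]] == ["DET", "N"]:
            blackboard.append(("NP", blackboard[-2][1] + " " + blackboard[-1][1]))

    for w in "the dog".split():
        det_expert(w)
        noun_expert(w)
    np_expert()
    print(blackboard)  # [('DET', 'the'), ('N', 'dog'), ('NP', 'the dog')]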

Book ChapterDOI
01 Jan 1989
TL;DR: The Basic Research Action DYANA (“Dynamic Interpretation of Natural Language”) is concerned with foundational research towards the development of an integrated computational model of language interpretation, covering the spectrum from speech to reasoning.
Abstract: The Basic Research Action DYANA (“Dynamic Interpretation of Natural Language”) is concerned with foundational research towards the development of an integrated computational model of language interpretation, covering the spectrum from speech to reasoning. The programme of work focuses on the following themes in natural language understanding:

Journal ArticleDOI
TL;DR: The jacket notes to this book by James Allen say it is the most comprehensive, in-depth book to date covering all major aspects of natural language processing.
Abstract: The jacket notes to this book by James Allen say it is the most comprehensive, in-depth book to date covering all major aspects of natural language processing. This claim is probably realistic.


01 Oct 1989
TL;DR: A parallel machine, consisting of a network of transputers with content addressable memory (CAM), a rapid global communications network and runtime support for Lisp and Prolog is presented, and a natural language understanding system based on the blackboard model is described, and shown to exhibit parallelism at a number of levels.
Abstract: A parallel machine, consisting of a network of transputers with content addressable memory (CAM), a rapid global communications network and runtime support for Lisp and Prolog is presented. A natural language understanding system based on the blackboard model is described, and shown to exhibit parallelism at a number of levels. A suitable topology for the system is presented, and it is shown how this is mapped onto the transputer network. The use of the CAM to accelerate various natural language processing algorithms is discussed. The GEC Hirst Research Centre is developing a speech understanding system as an example of an application for a parallel architecture. The first section of the paper describes the hardware architecture, the system software and the supported programming languages. The second section describes a sequential implementation of the speech understanding software. The final section describes how the application software is adapted to the parallel architecture.

01 Jan 1989
TL;DR: In this article, the authors argue in favour of the treatment of eventualities as individuals, which are structured along different lines, and investigate these possibilities in parallel with objects, and obtain a rather symmetric structuring of the domain of individuals.
Abstract: This paper focuses on the discussion of suitable representations of eventualities in a formal language and of the possibilities to draw inferences from representations. We argue in favour of the treatment of eventualities as individuals, which are structured along different lines. When reifying eventualities, there are different possibilities of individualization. This is similarly true for the domain of objects. Thus we investigate these possibilities in parallel with objects, and obtain a rather symmetric structuring of the domain of individuals, i.e. a sort hierarchy which is sensible for different kinds of eventualities and objects respectively.
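
The reification argued for here is in the Davidsonian tradition; a schematic example of the representation and the inferences it buys (the sentence and predicate names are ours, not the paper's):

    \exists e\,[\mathit{stab}(e) \land \mathit{agent}(e,\mathit{brutus}) \land
                \mathit{patient}(e,\mathit{caesar}) \land \mathit{in}(e,\mathit{forum})]

From this, "Brutus stabbed Caesar" follows by dropping the conjunct \mathit{in}(e,\mathit{forum}); treating the eventuality e as an individual turns such modifier-dropping inferences into ordinary conjunction elimination, and a sort hierarchy over eventualities can then be built in parallel with the one over objects.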

BookDOI
01 Apr 1989
TL;DR: The integration of representation and generalization in the domain of NLP is the subject of this article, which focuses on conceptual representation of objects based on the semantic interpretation of natural language input.
Abstract: This article surveys a portion of the field of natural language processing. The main areas considered are those dealing with representation schemes, particularly work on physical object representation, and generalization processes driven by natural language understanding. The emphasis of this article is on conceptual representation of objects based on the semantic interpretation of natural language input. Six programs serve as case studies for guiding the course of the article. Within the framework of describing each of these programs, several other programs, ideas, and theories that are relevant to the program in focus are presented. Recent advances in natural language processing [NLP] have generated considerable interest within the Artificial Intelligence [AI] and Cognitive Science communities. Within NLP, researchers are trying to produce intelligent computer systems that can read, understand, and respond to various human-oriented texts. Terrorism stories, airline flight schedules, and how to fill ice cube trays are all domains that have been used for NLP programs. In order to understand these texts and others, some way of representing information is needed. A complete understanding of human-oriented prose requires the ability to combine the meanings of many readings in an intelligent manner. Learning through the process of generalization is one such mechanism. The integration of representation and generalization in the domain of NLP is the subject of this article. Physical object understanding is an area in which a variety of representation schemes and generalization methods have been used. In past years, researchers have devised various representation systems for objects that range from very simple

Proceedings ArticleDOI
Patti Price
15 Oct 1989
TL;DR: A multi-modal interface to an air travel database that will permit cooperative planning via interactive human-machine problem solving and speaker-independent understanding of spontaneously spoken natural language in a restricted domain is developed.
Abstract: SRI is developing a multi-modal interface to an air travel database that will permit cooperative planning via interactive human-machine problem solving. The project goals are real-time, large vocabulary (3000 words), high semantic accuracy (90%), speaker-independent understanding of spontaneously spoken natural language in a restricted domain. The grammar should be shown to be habitable and the system should be robust to individual differences in dialect and speaking style. The design should be easily portable to a variety of applications.

Journal ArticleDOI
TL;DR: It is shown that machine learning is of use in constructing this modifiable memory and systems applying these methodologies to natural language understanding are outlined.
Abstract: The problem of natural language understanding is discussed and the need for a modifiable memory is established. It is shown that machine learning is of use in constructing this modifiable memory. Machine learning methodologies are evaluated and systems applying these methodologies to natural language understanding are outlined. Suggestions for further research are presented.

01 Aug 1989
TL;DR: QATT, a natural language interface developed for the Qualitative Process Engine (QPE) system, is presented; it is shown that the use of the preexisting system made possible the development of a reasonably useful interface in a few months.
Abstract: QATT, a natural language interface developed for the Qualitative Process Engine (QPE) system, is presented. The major goal was to evaluate the use of a preexisting natural language understanding system designed to be tailored for query processing in multiple domains of application. The other goal of QATT is to provide a comfortable environment in which to query envisionments in order to gain insight into the qualitative behavior of physical systems. It is shown that the use of the preexisting system made possible the development of a reasonably useful interface in a few months.


Book ChapterDOI
23 Oct 1989
TL;DR: The paper describes artificial intelligence as it is practised at the Department of Communication at Aalborg University, Denmark.
Abstract: In this paper we describe artificial intelligence as it is practised at the Department of Communication at Aalborg University, Denmark. We are working with Natural Language Understanding (NLU) and Knowledge Based Systems (KBS), and these topics play a major role within a study program in Humanistic Informatics. The concept of Humanistic Informatics has grown out of a dialog between a humanistic science and a natural science tradition. It is thus conceived as an interdisciplinary academic discipline, partly rooted in a humanistic tradition with connections to logic, linguistics and philosophy, and partly rooted in a conventional computer science tradition. This implies that our research practice is based on an attempt to challenge established paradigms when they appear to be inappropriate with respect to explaining and overcoming theoretical and practical deficiencies. In the following we will present our NLU and KBS research activities separately, and finally we will show how they are adapted to a study program in Humanistic Informatics.