
Showing papers on "Natural language understanding published in 1991"


Book ChapterDOI
01 Jan 1991
TL;DR: This work proposes to formalize interface metaphors by algebraic specifications, which provides a comprehensive formalization for the essential aspects of metaphorical user interfaces.
Abstract: Sound engineering approaches to user interface design require the formalization of key interaction concepts, one of them being metaphor. Work on interface metaphors has, however, been largely non-formal so far. The few existing formal theories of metaphor have been developed in the context of natural language understanding, learning, or reasoning. We propose to formalize interface metaphors by algebraic specifications. This approach provides a comprehensive formalization for the essential aspects of metaphorical user interfaces. Specifically, metaphor domains are being formalized by algebras, metaphorical mappings by morphisms, and image-schemas by categories. The paper explains these concepts and the approach, using examples of spatial and spatializing metaphors.

102 citations


ReportDOI
01 May 1991
TL;DR: The TRAINS project serves as an umbrella for research that involves pushing the state of the art in real-time planning, planning in uncertain worlds, plan monitoring and execution, natural language understanding techniques applicable to spoken language, and natural language dialog and discourse modelling.
Abstract: The TRAINS project is a long-term research effort on building an intelligent planning assistant that is conversationally proficient in natural language. The TRAINS project serves as an umbrella for research that involves pushing the state of the art in real-time planning, planning in uncertain worlds, plan monitoring and execution, natural language understanding techniques applicable to spoken language, and natural language dialog and discourse modelling. Significant emphasis is being put on the knowledge representation issues that arise in supporting the tasks in the domain. This report describes the general goals of the TRAINS project and the particular research directions that we are pursuing. Keywords: Planning, natural language understanding, dialog systems.

77 citations


Proceedings ArticleDOI
19 Feb 1991
TL;DR: The most useful forms of pre-processing for text interpretation use fairly superficial analysis that complements the style of ordinary parsing but uses much of the same knowledge base, and Lexico-semantic pattern matching is a good method for this form of analysis.
Abstract: Ordinarily, one thinks of the problem of natural language understanding as one of making a single, left-to-right pass through an input, producing a progressively refined and detailed interpretation. In text interpretation, however, the constraints of strict left-to-right processing are an encumbrance. Multi-pass methods, especially by interpreting words using corpus data and associating units of text with possible interpretations, can be more accurate and faster than single-pass methods of data extraction. Quality improves because corpus-based data and global context help to control false interpretations; speed improves because processing focuses on relevant sections. The most useful forms of pre-processing for text interpretation use fairly superficial analysis that complements the style of ordinary parsing but uses much of the same knowledge base. Lexico-semantic pattern matching, with rules that combine lexical analysis with ordering and semantic categories, is a good method for this form of analysis. This type of pre-processing is efficient, takes advantage of corpus data, prevents many garden paths and fruitless parses, and helps the parser cope with the complexity and flexibility of real text.
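The abstract's core idea can be sketched in a few lines: rules mix literal words with semantic categories, and matching is a shallow scan over tokens rather than a full parse. The lexicon, categories, and rules below are invented for illustration; the paper's actual rule formalism is richer.

```python
# Toy lexico-semantic pattern matching: words map to semantic categories,
# and patterns are category sequences matched by a superficial scan.
LEXICON = {
    "acme": "COMPANY", "globex": "COMPANY",
    "acquired": "ACQUIRE", "bought": "ACQUIRE",
}

# A pattern is a sequence of semantic categories; a match tags a text span.
PATTERNS = {
    ("COMPANY", "ACQUIRE", "COMPANY"): "acquisition-event",
}

def categorize(token):
    return LEXICON.get(token.lower())

def match_patterns(tokens):
    """Return (start, label) pairs for every pattern found in the token list."""
    cats = [categorize(t) for t in tokens]
    hits = []
    for start in range(len(cats)):
        for pattern, label in PATTERNS.items():
            if tuple(cats[start:start + len(pattern)]) == pattern:
                hits.append((start, label))
    return hits

print(match_patterns("Acme bought Globex".split()))  # [(0, 'acquisition-event')]
```

Such a matcher never builds a parse tree, which is what makes it cheap enough to run as a pre-processing pass before ordinary parsing.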

46 citations


Journal ArticleDOI
01 Jun 1991-Language
TL;DR: In this paper, the authors treat non-truth-conditional meaning as the packaging of propositional content, and present anaphora as a contextually identifiable indeterminacy of reference.
Abstract: Contents: What Is Pragmatics, and Why Do I Need to Know, Anyway? Indexicals and Anaphora: Contextually Identifiable Indeterminacies of Reference. Reference and Indeterminacy of Sense. Non-Truth-Conditional Meaning: Interpreting the Packaging of Propositional Content. Implicature. Pragmatics and Syntax. Conversational Interaction. Perspective.

45 citations


01 Jan 1991
TL;DR: The Bermuda Triangle: Natural language semantics between linguistics, knowledge representation, and knowledge processing.
Abstract: Introducing LILOG.- Text understanding - The challenges to come.- A formalism for natural language - STUF.- The language of STUF.- Chart-parsing of STUF grammars.- The STUF workbench.- Unification-ID/LP grammars: Formalization and parsing.- A flexible parser for a Linguistic Development Environment.- Gap-Handling mechanisms in categorial grammars.- Outlines of the LEU/2 lexicology.- Morphological processing in the two-level paradigm.- Representing word meanings.- Sortal information in lexical concepts.- Incremental vocabulary extensions in text understanding systems.- Managing lexical knowledge in LEU/2.- The grammars of LILOG.- An alternative phrase structure account of symmetric coordination.- Verb order and head movement.- The Bermuda Triangle: Natural language semantics between linguistics, knowledge representation, and knowledge processing.- Presupposition, anaphora, and lexical content.- Anaphora and domain restriction.- On representing the temporal structure of texts.- The treatment of plurality in LLILOG.- The knowledge representation language LLILOG.- Knowledge packets and knowledge packet structures.- Deductive aspects of three-valued logic.- The LILOG inference engine.- Knowledge based control of the LILOG inference engine: Kinds of metaknowledge.- Attributive description formalisms ... 
and the rest of the world.- The background knowledge of the LILOG system.- The LILOG ontology from a linguistic point of view.- A knowledge engineering environment for LILOG.- Knowledge engineering in the context of related fields of research.- LILOG-DB: Database support for knowledge based systems.- Processing of spatial expressions in LILOG.- Phenomena of localization.- Verbs of motion and position: On the optionality of the local argument.- Why a hill can't be a valley: Representing gestalt and position properties of objects with object schemata.- Object-oriented representation of depictions on the basis of cell matrices.- Integrating a generation component into a natural language understanding system.- From knowledge structures to text structures.- The formulator.- Constructing a context for LEU/2.- The text understanding system LEU/2.- The trace of building a large AI system.

42 citations


Book ChapterDOI
01 Jan 1991
TL;DR: Mark Steedman (1989) has suggested that each of the following assumptions about human language understanding is appealing, but that they lead to a paradox when taken together.
Abstract: Mark Steedman (1989) has suggested that each of the following assumptions about human language understanding is appealing, but that they lead to a paradox when taken together: 1. Incremental comprehension: Human natural language understanding is serial and incremental: the words of a sentence are interpreted rapidly as they are heard or read. 2. Right-branching syntactic structures: English, like other SVO and SOV languages, has predominantly right-branching syntactic structures. 3. Strong competence hypothesis: The principles of the competence grammar are directly used by the human language processor in constructing a syntactic structure and interpreting it.

36 citations


Book
26 Feb 1991
TL;DR: This book offers a two-level approach to semantic interpretation and proves that it works by means of a precise computer implementation, which in turn is applied to support a task-independent knowledge representation system.
Abstract: On the basis of a semantic analysis of dimension terms, this book develops a theory about knowledge of spatial objects, which is significant for cognitive linguistics and artificial intelligence. This new approach to knowledge structure evolves in a three-step process: - adoption of the linguistic theory with its elements, principles and representational levels, - implementation of the latter in a Prolog prototype, and - integration of the prototype into a large natural language understanding system. The study documents interdisciplinary research at work: the model of spatial knowledge is the fruit of the cooperative efforts of linguists, computational linguists, and knowledge engineers, undertaken in that logical and chronological order. The book offers a two-level approach to semantic interpretation and proves that it works by means of a precise computer implementation, which in turn is applied to support a task-independent knowledge representation system. Each of these stages is described in detail, and the links are made explicit, thus retracing the evolution from theory to practice.

32 citations


Journal ArticleDOI
TL;DR: A language-independent morphological component for the recognition and generation of word forms is presented that allows for a natural description of some nonconcatena-tive morphological phenomena as well as morphonological phenomena that are restricted to certain word classes in their applicability.
Abstract: A language-independent morphological component for the recognition and generation of word forms is presented. Based on a lexicon of morphs, the approach combines two-level morphology and a feature-based unification grammar describing word formation. To overcome the heavy use of diacritics, feature structures are associated with the two-level rules. These feature structures function as filters for the application of the rules. That way information contained in the lexicon and the morphological grammar can guide the application of the two-level rules. Moreover, information can be transmitted from the two-level part to the grammar part. This approach allows for a natural description of some nonconcatenative morphological phenomena as well as morphonological phenomena that are restricted to certain word classes in their applicability. The approach is applied to German inflectional and derivational morphology. The component may easily be incorporated into natural language understanding systems and can be espe...
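The key idea of the abstract, attaching feature filters to spelling rules so lexical information gates their application, can be sketched as follows. The rules, feature names, and word classes are invented for the sketch; real two-level morphology operates on aligned lexical/surface character pairs rather than simple substitutions.

```python
def unifies(features, filt):
    """A filter is satisfied if every filter pair occurs in the morph's features."""
    return all(features.get(k) == v for k, v in filt.items())

# Each rule: (lexical substring, surface substring, feature filter).
# The filter plays the role of the feature structures the paper associates
# with two-level rules: the rule fires only for morphs whose features match.
RULES = [
    ("a", "ä", {"class": "umlauting", "num": "pl"}),  # umlaut in plurals of one class
]

def realize(stem, suffix, features):
    surface = stem
    for lex, srf, filt in RULES:
        if unifies(features, filt):
            surface = surface.replace(lex, srf)
    return surface + suffix

# German-style example: Mann -> Männer (umlauting class), Tag -> Tage (not).
print(realize("mann", "er", {"class": "umlauting", "num": "pl"}))  # männer
print(realize("tag", "e", {"class": "plain", "num": "pl"}))        # tage
```

The point of the filter is exactly what the abstract claims: without it, a diacritic in the lexical string would be needed to keep the umlaut rule from applying to every stem.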

16 citations


Journal ArticleDOI
TL;DR: The knowledge representation language LLILOG is sketched, an overview of the internal architecture of the LILOG inference engine is given, and how the inference Engine is embedded into the natural language understanding system LEU/2 is shown.
Abstract: The LILOG knowledge representation system is part of LEU/2 - the LILOG Experimentier-Umgebung - a natural language understanding system for German. The knowledge representation system comprises a sophisticated knowledge representation language based on order-sorted predicate logic, enriched by a type system of KL-ONE-like languages, default reasoning, and the capability to delegate inferences to external deductive components. The inference engine processing LLILOG can be considered an experimental theorem-proving shell since, for example, we are able to exchange inference calculi and search strategies very easily. We sketch the knowledge representation language LLILOG, give an overview of the internal architecture of the LILOG inference engine, and show how the inference engine is embedded into the natural language understanding system LEU/2.

13 citations


Proceedings ArticleDOI
01 Jul 1991
TL;DR: Analysis of the performance of a trained network suggests that low level, natural language understanding is one form of text processing which promises to become an important application area for neural model-based computing.
Abstract: A simulated neural network was developed with APL on an 80386 microcomputer. The network was configured to associate task descriptions with 10 categories of military occupational specialties. The number of processing elements in the problem was varied. Increasing the number of processors increased the speed of learning in the simulation. Generalization was not significantly different across various numbers of processing elements, except for one intermediate number at which generalization was about 15 percent higher. Analysis of the performance of a trained network suggests that low-level natural language understanding is one form of text processing which promises to become an important application area for neural model-based computing.

9 citations


Book ChapterDOI
TL;DR: A method used by two recent machine learning programs for control of inference that is relevant to the design of IIR systems is presented, and several suggestions are made on how this machine learning framework can be integrated with existing information retrieval methods.
Abstract: Intelligent information retrieval (IIR) requires inference. The number of inferences that can be drawn by even a simple reasoner is very large, and the inferential resources available to any practical computer system are limited. This problem is one long faced by AI researchers. In this paper, we present a method used by two recent machine learning programs for control of inference that is relevant to the design of IIR systems. The key feature of the approach is the use of explicit representations of desired knowledge, which we call knowledge goals. Our theory addresses the representation of knowledge goals, methods for generating and transforming these goals, and heuristics for selecting among potential inferences in order to feasibly satisfy such goals. In this view, IIR becomes a kind of planning: decisions about what to infer, how to infer and when to infer are based on representations of desired knowledge, as well as internal representations of the system's inferential abilities and current state. The theory is illustrated using two case studies, a natural language understanding program that learns by reading novel newspaper stories, and a differential diagnosis program that improves its accuracy with experience. We conclude by making several suggestions on how this machine learning framework can be integrated with existing information retrieval methods.
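The abstract's central move, ranking candidate inferences against explicit representations of desired knowledge rather than drawing all inferences, can be sketched roughly as follows. The goal/inference encoding and the greedy budgeted selection are invented for illustration; the paper's actual theory of knowledge goals is considerably richer.

```python
from dataclasses import dataclass

@dataclass
class Inference:
    name: str
    yields: set   # topics this inference would add knowledge about
    cost: int     # stand-in for the inferential resources it consumes

def select_inferences(goals, candidates, budget):
    """Greedily pick inferences that serve some knowledge goal, within a budget."""
    relevant = [c for c in candidates if c.yields & goals]
    # Prefer inferences serving more goals; break ties by lower cost.
    relevant.sort(key=lambda c: (-len(c.yields & goals), c.cost))
    chosen, spent = [], 0
    for c in relevant:
        if spent + c.cost <= budget:
            chosen.append(c.name)
            spent += c.cost
    return chosen

goals = {"motive", "outcome"}
candidates = [
    Inference("infer-motive", {"motive"}, 2),
    Inference("infer-weather", {"weather"}, 1),  # irrelevant to current goals
    Inference("infer-outcome", {"outcome"}, 3),
]
print(select_inferences(goals, candidates, budget=5))  # ['infer-motive', 'infer-outcome']
```

The irrelevant candidate is never considered, which is the sense in which explicit knowledge goals turn retrieval and inference into a kind of planning.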

Journal ArticleDOI
TL;DR: The Basic Agent is an attempt to create a conscious, mindlike AI artifact embedded in time and functioning in a simulated dynamic environment that integrates temporal planning, temporal reasoning, reactive replanning, action execution, limited natural language understanding and generation, symbolic perception, episodic memory and reflection.
Abstract: The Basic Agent is an attempt to create a conscious, mindlike AI artifact embedded in time and functioning in a simulated dynamic environment. The agent integrates temporal planning, temporal reasoning, reactive replanning, action execution, limited natural language understanding and generation, symbolic perception, episodic memory and reflection, and some general knowledge associated with the vocabulary items. A state-change style semantics for action verbs, based on a set of primitive relations, allows convenient integration of the natural language and planning components. The architectural elements of the system and their interaction are described.


Proceedings ArticleDOI
19 Feb 1991
TL;DR: The Penn Treebank is a database of written and transcribed spoken American English annotated with detailed grammatical structure, intended as training material for a wide variety of approaches to automatic language acquisition.
Abstract: The objective is to construct a database (the "Penn Treebank") of written and transcribed spoken American English annotated with detailed grammatical structure. This database will serve as a national resource, providing training material for a wide variety of approaches to automatic language acquisition, a reference standard for the rigorous evaluation of some components of natural language understanding systems, and a research tool for the investigation of the grammar of naturally spoken English.

Book ChapterDOI
01 Mar 1991
TL;DR: The Multimedia Database Management System (MDBMS) described in this paper incorporates these capabilities, proposing an intelligent approach to approximate match that integrates object-oriented and natural language understanding techniques.
Abstract: Manipulation of multimedia data is not as straightforward as in conventional databases. One main problem is the retrieval of multimedia data from the database, with the need to match the contents of multimedia data to a user query. In order to achieve content-based retrieval, in our approach we use natural language captions which allow the user to describe the contents of multimedia data. In a similar manner, users specify their queries on multimedia data contents in natural language form. A problem is that different users, or even the same user at different times, describe the same thing differently, so the descriptions of the contents of multimedia data rarely match the descriptions of the user queries exactly. Hence, partial or approximate match between descriptions of multimedia data and user queries is generally required during multimedia data retrieval. We propose an intelligent approach to approximate match by integrating both object-oriented and natural language understanding techniques. In order to make the query specification process easier, we also develop a graphical user interface supporting incremental query specification and a natural way of expressing joins. The Multimedia Database Management System (MDBMS) described in this paper incorporates the capabilities mentioned above.
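The need for approximate match between captions and queries can be illustrated with a deliberately crude sketch: words are normalized through a small synonym table, then captions are ranked by overlap with the query. The synonym table and threshold are invented; the paper's approach uses full natural language understanding rather than bag-of-words overlap.

```python
# Hypothetical synonym table; a real system would derive this from NLU.
SYNONYMS = {"picture": "image", "photo": "image", "auto": "car", "automobile": "car"}

def normalize(text):
    return {SYNONYMS.get(w, w) for w in text.lower().split()}

def similarity(caption, query):
    a, b = normalize(caption), normalize(query)
    return len(a & b) / len(a | b)  # Jaccard overlap on normalized words

def retrieve(captions, query, threshold=0.2):
    """Rank captions by similarity and keep those above the threshold."""
    scored = [(similarity(c, query), c) for c in captions]
    return [c for s, c in sorted(scored, reverse=True) if s >= threshold]

captions = ["photo of a red automobile", "sound of rain"]
print(retrieve(captions, "picture of a car"))  # ['photo of a red automobile']
```

Even this sketch shows why exact match fails: "photo of a red automobile" and "picture of a car" share no surface word, yet describe the same content.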

Book ChapterDOI
01 Jan 1991
TL;DR: A simple algorithm for thinning down an existing set of distributed concept representations which form the lexicon in a prototype story paraphrase system which exploits both conventional and connectionist approaches to Artificial Intelligence (AI).
Abstract: In a Natural Language Understanding system, be it connectionist or otherwise, it is often desirable for representations to be as compact as possible. In this paper we present a simple algorithm for thinning down an existing set of distributed concept representations which form the lexicon in a prototype story paraphrase system which exploits both conventional and connectionist approaches to Artificial Intelligence (AI). We also present some performance measures for evaluating a lexicon’s performance. The main result is that the algorithm appears to work well — we can use it to balance the level of detail in a lexicon against the amount of space it requires. There are also interesting ramifications concerning meaning in natural language.
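One plausible reading of "thinning down" a lexicon of distributed representations can be sketched as pruning the dimensions that vary least across the lexicon, since those carry the least distinguishing detail. The vectors are invented, and this is not necessarily the paper's algorithm, only an illustration of trading detail against space.

```python
def variance(values):
    mean = sum(values) / len(values)
    return sum((v - mean) ** 2 for v in values) / len(values)

def thin_lexicon(lexicon, keep):
    """Keep only the `keep` highest-variance dimensions of every vector."""
    dims = len(next(iter(lexicon.values())))
    scores = [variance([vec[d] for vec in lexicon.values()]) for d in range(dims)]
    best = sorted(range(dims), key=lambda d: scores[d], reverse=True)[:keep]
    best.sort()  # preserve the original dimension order
    return {word: [vec[d] for d in best] for word, vec in lexicon.items()}

lexicon = {
    "cat": [1.0, 0.5, 0.9],
    "dog": [1.0, 0.4, 0.1],
    "car": [1.0, 0.9, 0.2],
}
# Dimension 0 is constant across the lexicon, so it is the first to go.
print(thin_lexicon(lexicon, keep=2))
```

Varying `keep` is exactly the balance the abstract mentions: fewer dimensions save space but blur distinctions between concepts.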

Journal ArticleDOI
TL;DR: A sound engineering approach to user interface design requires the formalization of key interaction concepts, one of them being metaphor, which has been largely non-formal so far.
Abstract: A sound engineering approach to user interface design requires the formalization of key interaction concepts, one of them being metaphor. Few formal theories of metaphor exist and most of them have been developed in the context of natural language understanding, learning, or reasoning. Work on metaphors in human-computer interaction has been largely non-formal so far.

Book ChapterDOI
17 Jun 1991
TL;DR: The range of phenomena the model is intended to describe and give an outline of the way in which the interpretation process may determine the referential potential of words by the integration and evaluation of a variety of factors are described.
Abstract: The lexicon of a natural language understanding system that is not restricted to one single application but should be adaptable to a whole range of different tasks has to provide a flexible mechanism for the determination of word meaning. The reason for such a mechanism is the semantic variability of words, i.e. their potential to denote different things in different contexts. The goal of our project is a model that makes these phenomena explicit. We approach this goal by defining word meaning as a complex function resulting from the interaction of processes operating on knowledge elements. In the following we characterize the range of phenomena our model is intended to describe and give an outline of the way in which the interpretation process may determine the referential potential of words by the integration and evaluation of a variety of factors.



Book ChapterDOI
01 Jan 1991
TL;DR: Four strategies appropriate for goal-directed vocabulary extensions — ‘contextual’ specification, use of morphological rules, access to external lexica and interactive user input — are outlined, directing the principal attention to the exploitation of machine-readable versions of conventional dictionaries.
Abstract: Natural language understanding systems which have to prove good in interesting applications cannot manage without some mechanism for dealing with lexical gaps. In this paper four strategies appropriate for goal-directed vocabulary extensions — ‘contextual’ specification, use of morphological rules, access to external lexica and interactive user input — are outlined, directing the principal attention to the exploitation of machine-readable versions of conventional dictionaries (MRDs). These mechanisms complement each other as they are geared to different types of unknown words and may be used recursively. A temporary lexicon is introduced to cope with the uncertainty and partiality of the so-gained lexical information. The treatment of unknown words presented in this paper contributes to overcoming the bottleneck concerning lexical and, to some degree, conceptual knowledge.

Book ChapterDOI
01 Jan 1991
TL;DR: The difficulties that were faced when planning, implementing, and finishing this project are described and the problems expected are compared to the problems experienced.
Abstract: This is a report about the successful development of a complex natural language understanding system. It describes the difficulties that were faced when planning, implementing, and finishing this project. After outlining the planned course of the project, the report describes what really happened during the realization; in other words, the problems expected are compared to the problems experienced. Finally, those topics and details are discussed which were helpful and thus responsible for the success of the project, and experiences transferable to other projects are emphasized.

ReportDOI
01 Feb 1991
TL;DR: This paper proposes an intelligent approach to approximate match by integrating both object-oriented and natural language understanding techniques in the retrieval of multimedia data from the database.
Abstract: Manipulation of multimedia data in multimedia databases is not as straightforward as in conventional databases because of the complex structure of multimedia data such as images or sound. The issue in the retrieval of multimedia data from the database is the matching of the contents of multimedia data to a user query. The common solution is to use either keywords or natural language descriptions to describe both the contents of multimedia data and user queries. The major problem is that different users, or the same user at different times, describe the same thing differently, so the descriptions of the contents of multimedia data rarely match the descriptions of the user queries exactly. Hence, partial or approximate match between descriptions of multimedia data and user queries is often required during multimedia data retrieval. In this paper, we propose an intelligent approach to approximate match by integrating both object-oriented and natural language understanding techniques.

Journal ArticleDOI
TL;DR: It is believed that the artificial separation between these cognitive components that pervades AI research has led to the development of underconstrained theories of intelligence.
Abstract: That which is commonly known as "human intelligence" is surely the result of the interaction of a great number of cognitive abilities such as motor control, vision, learning, problem solving, and language use, just to name a few. Yet most AI researchers study these component skills in isolation, without regard to how the individual components may use the knowledge or processes commonly associated with other different components. For example, much of the AI research on problem solving has focused on knowledge-based methods for solving complex problems without regard to the perceptual or linguistic processes involved. Similarly, work on natural language understanding typically views the language understander as an entity unto itself and ignores how language is used. We believe that the artificial separation between these cognitive components that pervades AI research has led to the development of underconstrained theories of intelligence.


01 Mar 1991
TL;DR: It is sketched how Montague semantics and discourse representation theory can be simulated in a logic-based natural language understanding system using a meta-programming technique.
Abstract: We introduce a semantic interface into logic grammars to give a natural link to the semantic component of natural language understanding systems. The semantic interface contains five parameters, i.e. input, semantic information, new devices, generator, and output. These parameters specify the procedure of compiling a desired semantic theory into its corresponding logic programs. The semantic interface can thus achieve a metalevel representation of required semantic theories using a meta-programming technique. We sketch how Montague semantics and discourse representation theory can be simulated in a logic-based natural language understanding system.
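The Montague-style composition the abstract alludes to can be illustrated in miniature: word meanings are functions, and sentence meaning is built by function application. The grammar and lexicon below are invented; a logic-grammar system of the kind described would express the same idea as logic programs produced by the semantic interface.

```python
# Toy compositional semantics: each word denotes a function, and the
# sentence meaning is assembled by applying them in the right order.
lexicon = {
    "john":  lambda: "john",
    "mary":  lambda: "mary",
    "loves": lambda obj: lambda subj: f"loves({subj},{obj})",
}

def interpret(subject, verb, obj):
    """Apply the verb meaning to the object, then the subject (SVO order)."""
    vp = lexicon[verb](lexicon[obj]())   # VP = V(O)
    return vp(lexicon[subject]())        # S  = VP(Subj)

print(interpret("john", "loves", "mary"))  # loves(john,mary)
```

Swapping out how these denotations are built and combined, while keeping the compositional machinery, is roughly what the paper's parameterized semantic interface makes explicit.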

Proceedings ArticleDOI
30 Apr 1991
TL;DR: The authors have designed a parallel architecture called the semantic network array processor (SNAP) for natural language understanding (NLU) and other artificial intelligence applications capable of performing NLU operations within subsecond response time.
Abstract: The authors have designed a parallel architecture called the semantic network array processor (SNAP) for natural language understanding (NLU) and other artificial intelligence applications. It is capable of executing large marker-passing programs and generating results in real-time. The design features 32 processing clusters with four to five functionally dedicated digital signal processors in each cluster. Processors within clusters share a marker-processing memory while communication between clusters is implemented by a buffered message-passing scheme. Throughout the machine, overlapping groups of multiport memories provide a direct yet visible interconnection network. The result is a low cost, flexible, and observable parallel processor capable of performing NLU operations within subsecond response time.



Journal ArticleDOI
TL;DR: This knowledge engineer module enhances the capability and improves the performance of the expert system DOMES by providing an effective means for knowledge acquisition based on natural language understanding.
Abstract: In order to enhance the knowledge acquisition capability of the expert system DOMES (Design Of Mechanisms by an Expert System), which is developed at the University of Rhode Island for creative type synthesis of mechanisms, methodologies have been developed to build a knowledge engineer module based on natural language understanding. Specifically, artificial intelligence concepts and Lisp programming techniques have been incorporated in this module to implement the following: (1) analysing and understanding design criteria supplied by human designers in the form of English sentences; and (2) transforming these design criteria into Lisp code and storing them in the knowledge base of DOMES. This knowledge engineer module enhances the capability and improves the performance of the expert system DOMES by providing an effective means for knowledge acquisition based on natural language understanding. The concepts and implementation techniques in developing this module are general and can also be utilized for other knowledge-based systems.