
Showing papers on "Natural language understanding published in 1984"


Patent
27 Jan 1984
TL;DR: In this paper, a natural language interface for a computer system restricts inputs from a user to those which generate valid input sentences, and the user selects a next input word from a menu showing which words are valid.
Abstract: A natural language interface for a computer system restricts inputs from a user to those which generate valid input sentences. The user selects a next input word from a menu showing which words are valid. When an input is selected, a parser uses a predefined grammar to determine which words are valid continuations after the selected word, and whether a complete sentence has been entered. Valid next words are presented to the user in menu form so that a next selection can be made. If an ambiguous complete sentence is entered, the interface allows the user to select the desired meaning. The grammar can be a context-free grammar and can use an associated lexicon to define its elements.
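
The parsing loop the patent describes can be pictured roughly as follows: after each menu selection, the parser consults the grammar to compute which words may legally come next and whether the input so far is already a complete sentence. The sketch below is a minimal illustration of that idea; the toy grammar, query vocabulary, and brute-force enumeration of sentences are assumptions for the example, not the patented implementation.

    # Sketch of menu-guided input: after each selection, report which words may
    # legally follow and whether the input so far is already a complete sentence.
    # The grammar and vocabulary here are toy assumptions.
    from itertools import product

    GRAMMAR = {                      # nonterminal -> alternative right-hand sides
        "S":  [["show", "NP"], ["count", "NP"]],
        "NP": [["parts"], ["suppliers"], ["parts", "PP"]],
        "PP": [["from", "city"]],
    }

    def expand(symbol, depth=6):
        """Yield every terminal string derivable from `symbol` (bounded depth)."""
        if symbol not in GRAMMAR:    # terminal word
            yield (symbol,)
            return
        if depth == 0:
            return
        for rhs in GRAMMAR[symbol]:
            for parts in product(*(expand(s, depth - 1) for s in rhs)):
                yield tuple(word for part in parts for word in part)

    SENTENCES = set(expand("S"))

    def menu_state(selected):
        """Return (valid next words, whether the selection is already a complete sentence)."""
        prefix = tuple(selected)
        nxt = {s[len(prefix)] for s in SENTENCES
               if len(s) > len(prefix) and s[:len(prefix)] == prefix}
        return sorted(nxt), prefix in SENTENCES

    print(menu_state(["show", "parts"]))   # (['from'], True)

A real interface would use an incremental parser rather than enumerating sentences, but the input-output behavior, a menu of valid continuations plus a completeness flag, is the same.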

205 citations


Book
01 Jan 1984
TL;DR: In this paper, a Gentzen-type formalization of the deductive model of belief is presented, and soundness and completeness theorems for a deductive belief logic are proven.
Abstract: Reasoning about the knowledge and beliefs of computer and human agents is assuming increasing importance in Artificial Intelligence systems for natural language understanding, planning, and knowledge representation. A natural model of belief for robot agents is the deduction model: an agent is represented as having an initial set of beliefs about the world in some internal language and a deduction process for deriving some (but not necessarily all) logical consequences of these beliefs. Because the deduction model is an explicitly computational model, it is possible to take into account limitations of an agent's resources when reasoning. This thesis is an investigation of a Gentzen-type formalization of the deductive model of belief. Several original results are proven. Among these are soundness and completeness theorems for a deductive belief logic; a correspondence result that relates our deduction model to competing possible-worlds models; and a modal analog to Herbrand's Theorem for the belief logic. Specialized techniques for automatic deduction based on resolution are developed using this theorem. Several other topics of knowledge and belief are explored in the thesis from the viewpoint of the deduction model, including a theory of introspection about self-beliefs, and a theory of circumscriptive ignorance, in which facts an agent doesn't know are formalized by limiting or circumscribing the information available to him.
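
As a schematic rendering of the deduction model (a paraphrase for illustration, not the thesis's own notation), an agent i with base beliefs B_i and a possibly incomplete rule set R_i believes exactly the sentences those rules can derive:

    % Schematic only, not the thesis's axiomatization: agent i believes phi
    % exactly when phi is derivable from its base beliefs B_i using its
    % (possibly incomplete) deduction rules R_i.
    \[
      \mathrm{BEL}_i(\varphi) \;\iff\; B_i \vdash_{R_i} \varphi
    \]
    % Because R_i need not be logically complete, the belief set
    % \{\varphi : B_i \vdash_{R_i} \varphi\} can fall short of full logical
    % closure, which is how the model accounts for resource-limited agents.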

72 citations


01 Jan 1984
TL;DR: The author argues that natural language understanding should be integrated, in the sense that syntactic and semantic processing should take place at the same time; however, instead of mixing syntactic and semantic knowledge together in the knowledge base of a parser, power can be gained by organizing syntax and semantics as two largely separate bodies of knowledge, combined only at the time of processing.
Abstract: A controversy has existed over the interaction of syntax and semantics in natural language understanding systems. On the one hand, theories of integrated parsing have argued that syntactic and semantic processing must take place at the same time. In addition, these theories have also argued that syntactic and semantic knowledge should be mixed together, and that the role of syntax should be completely subservient to semantic processing. On the other hand, opponents of this theory argue that parsing should be more modular, with syntactic and semantic processing taking place separately. Along with this processing modularity, these opponents also argue that syntactic and semantic knowledge should be more modular, and that syntax, since it is largely autonomous from semantics, plays a more important role in natural language understanding. This thesis presents a theory of natural language understanding which is a compromise between these two views. I argue that natural language understanding should be integrated, in the sense that syntactic and semantic processing should take place at the same time. However, instead of mixing syntactic and semantic knowledge together in the knowledge base of a parser, I argue that power can be gained by organizing syntax and semantics as two largely separate bodies of knowledge, which are combined only at the time of processing. The result is a parser which retains the predictive power which is gained by using semantic information during syntactic processing, but which is more robust in parsing complex syntactic constructions, and which is more amenable to the organization of knowledge about more than one language.
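
One way to picture that organization (a hypothetical sketch, not the thesis's parser) is a single processing loop that consults two separately maintained knowledge sources, a table of syntactic attachment rules and a set of semantic selectional constraints, and admits an analysis only when both license it:

    # Hypothetical sketch of "integrated but separately organized" parsing:
    # syntax and semantics live in different knowledge bases, yet every
    # candidate attachment is checked against both during a single pass.

    SYNTAX = {                     # category -> categories it may attach
        "Verb": {"NounPhrase"},    # a verb may take a noun-phrase object
    }

    SEMANTICS = {                  # verb -> selectional restriction on its object
        "eat": "edible",
        "read": "readable",
    }

    LEXICON = {                    # word -> (category, semantic feature)
        "eat": ("Verb", None), "read": ("Verb", None),
        "apple": ("NounPhrase", "edible"), "book": ("NounPhrase", "readable"),
    }

    def parse(words):
        """Attach adjacent words only when BOTH knowledge sources allow it."""
        analysis = []
        for word in words:
            category, feature = LEXICON[word]
            if analysis:
                head, head_category = analysis[-1]
                syntax_ok = category in SYNTAX.get(head_category, set())
                semantics_ok = SEMANTICS.get(head) in (None, feature)
                if syntax_ok and semantics_ok:
                    analysis[-1] = (f"{head}({word})", head_category)
                    continue
            analysis.append((word, category))
        return analysis

    print(parse(["eat", "apple"]))   # [('eat(apple)', 'Verb')]: both sources agree
    print(parse(["read", "apple"]))  # syntax allows the attachment, semantics blocks it

The point of the sketch is only the division of labor: the two dictionaries could be developed and swapped independently, yet both are consulted on every attachment decision.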

18 citations


Proceedings ArticleDOI
09 Jul 1984
TL;DR: Menu-Based Natural Language Understanding is a new approach to building natural language interfaces that retains the main goals of natural language systems: flexibility, expressive power, learnability and mnemonicity, but solves most of the problems inherent to conventionalnatural language systems.
Abstract: Menu-Based Natural Language Understanding is a new approach to building natural language interfaces. It retains the main goals of natural language systems: flexibility, expressive power, learnability and mnemonicity. However, it solves most of the problems inherent to conventional natural language systems. All queries are understood by the system, interface generation is much simpler, and less computing power is required. Many interfaces have been built using the menu-based natural language technology.

11 citations


Journal Article
TL;DR: A model for experimental performance evaluation is presented, and a measure of performance that allows the basic input-output characteristics of a system to be evaluated is introduced first at an abstract level and then at a concrete level.
Abstract: The task of evaluating the performance of a natural language understanding system, despite its largely recognized relevance, is still poorly defined. It mostly relies on intuitive reasoning and lacks a sound theoretical foundation. This paper sets out a formal and quantitative proposal for this task. In particular, a measure of performance that allows the basic input-output characteristics of a system to be evaluated is introduced first at an abstract level. The definition of concrete measures is then obtained by assigning actual values to the functional parameters of the abstract definition; some particular cases are shown and discussed in detail. Finally, the task of measuring performance in practice is considered, and a model for experimental performance evaluation is presented. Comparison with related work is also briefly discussed; open problems and promising directions for future research are outlined. A limited case-study experiment with the proposed model is presented in the appendix.
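
Purely as an illustration of the kind of input-output measure the abstract alludes to (this is not the paper's definition), performance over a test set T of inputs could be an averaged agreement score between the system's interpretation and a reference interpretation:

    % Illustrative instantiation only, not the paper's actual measure.
    % T is a test set of inputs x, r(x) a reference interpretation,
    % s(x) the system's output, and a(.,.) in [0,1] an agreement function.
    \[
      P \;=\; \frac{1}{|T|} \sum_{x \in T} a\bigl(s(x),\, r(x)\bigr)
    \]
    % Different choices of a, for example exact match or partial credit,
    % play the role of the "functional parameters" that turn an abstract
    % definition like this into a concrete measure.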

10 citations


Journal ArticleDOI
TL;DR: The emphasis of this article is on conceptual representation of objects based on the semantic interpretation of natural language input, particularly work on physical object representation and generalization processes driven by natural language understanding.
Abstract: This article surveys a portion of the field of natural language processing. The main areas considered are those dealing with representation schemes, particularly work on physical object representation, and generalization processes driven by natural language understanding. The emphasis of this article is on conceptual representation of objects based on the semantic interpretation of natural language input. Six programs serve as case studies for guiding the course of the article. Within the framework of describing each of these programs, several other programs, ideas, and theories that are relevant to the program in focus are presented.

9 citations



01 Jan 1984
TL;DR: The central thesis of this dissertation is that a guided approach to using limited natural language allows several problems associated with conventional natural language interfaces to databases to be avoided.
Abstract: Recently, a new paradigm for natural language interfaces has been developed called "menu-based natural language understanding". An implemented system called the NLMENU system has been designed to explore this approach. The approach has the advantage of guiding the user to use a limited subset of natural language in specifying commands and queries so that all inputs to the NLMENU system are understood. The central thesis of this dissertation is that a guided approach to using limited natural language allows several problems associated with conventional natural language interfaces to databases to be avoided. Three important problems are considered. The first problem involves "value recognition". NLMENU "interaction experts" provide a simple, inexpensive way to recognize, disambiguate and validate database values that occur in user queries or commands and support the user in specifying values in queries at query composition time in a way that conventional natural language systems cannot. The second problem involves allowing end users to automatically generate usable natural language interfaces to database applications. In contrast to conventional natural language interface generators, automatically generated NLMENU interfaces are immediately usable without a long empirical lexicon acquisition phase. The third problem involves showing that the tight control possible in the NLMENU approach can allow expressive database updates while disallowing semantically anomalous ones.
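
The "interaction expert" idea can be sketched roughly as follows (a hypothetical illustration; the table, column, and in-memory database are invented for the example): when the user reaches a slot in the menu grammar that expects a database value, the interface pulls the legal values straight from the database, so the selected value is recognized and valid by construction.

    # Hypothetical sketch of a value "interaction expert": rather than parsing a
    # free-form value, offer the values actually stored in the relevant column,
    # so anything the user selects is already validated against the database.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE suppliers (name TEXT, city TEXT)")
    conn.executemany("INSERT INTO suppliers VALUES (?, ?)",
                     [("Acme", "Austin"), ("Globex", "Dallas"), ("Initech", "Austin")])

    def value_expert(table, column):
        """Return the menu of legal values for a <table.column> slot."""
        rows = conn.execute(f"SELECT DISTINCT {column} FROM {table} ORDER BY {column}")
        return [value for (value,) in rows]

    # The user has composed "find suppliers located in <city>"; the expert
    # offers only cities that actually occur in the database.
    print(value_expert("suppliers", "city"))   # ['Austin', 'Dallas']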

4 citations