
Showing papers on "Knowledge representation and reasoning" published in 1984


Proceedings Article
06 Aug 1984
TL;DR: This paper points out deficiencies in current semantic treatments of knowledge and belief and suggests a new analysis in the form of a logic that avoids these shortcomings and is also more viable computationally.
Abstract: As part of an on-going project to understand the foundations of Knowledge Representation, we are attempting to characterize a kind of belief that forms a more appropriate basis for Knowledge Representation systems than that captured by the usual possible-world formalizations begun by Hintikka. In this paper, we point out deficiencies in current semantic treatments of knowledge and belief (including recent syntactic approaches) and suggest a new analysis in the form of a logic that avoids these shortcomings and is also more viable computationally.

637 citations


BookDOI
01 Jan 1984
TL;DR: This paper presents a meta-modelling framework for the conceptual modelling of knowledge representation and semantic data models, together with examples of models developed in this framework.
Abstract: Keywords: knowledge representation; semantic data models; conceptual modelling.

437 citations


Proceedings Article
06 Aug 1984
TL;DR: Evidence is presented as to how the cost of computing one kind of inference is directly related to the expressiveness of the representation language.
Abstract: A knowledge representation system provides an important service to the rest of a knowledge-based system: it computes automatically a set of inferences over the beliefs encoded within it. Given that the knowledge-based system relies on these inferences in the midst of its operation (i.e., its diagnosis, planning, or whatever), their computational tractability is an important concern. Here we present evidence as to how the cost of computing one kind of inference is directly related to the expressiveness of the representation language. As it turns out, this cost is perilously sensitive to small changes in the representation language. Even a seemingly simple frame-based description language can pose intractable computational obstacles.

415 citations
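
To make the expressiveness/tractability trade-off above concrete, here is a minimal sketch assuming a hypothetical conjunctive frame language (not the description language analyzed in the paper): a concept is a set of slot restrictions, and subsumption for this impoverished fragment reduces to a cheap containment test; it is the richer constructors that drive the cost of inference up.

```python
# Minimal sketch, assuming a hypothetical conjunctive frame language
# (not the paper's description language): a concept is a set of slot
# restrictions, and subsumption is a simple containment test.

def subsumes(general: dict, specific: dict) -> bool:
    """True if every restriction imposed by `general` is also imposed,
    identically, by the more specific concept."""
    return all(specific.get(slot) == value for slot, value in general.items())

# Hypothetical concepts: any Person vs. a Person whose child is a Doctor.
person             = {"is_a": "Person"}
person_with_doctor = {"is_a": "Person", "child": "Doctor"}

assert subsumes(person, person_with_doctor)      # the general concept subsumes
assert not subsumes(person_with_doctor, person)  # but not the other way around
```

For this fragment the check is linear in the size of the concepts; the paper's point is that seemingly small additions to such a language can make the corresponding computation intractable.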


Journal ArticleDOI
TL;DR: A domain-independent planning program that supports both automatic and interactive generation of hierarchical, partially ordered plans is described, and an improved formalism makes extensive use of constraints and resources to represent domains and actions more powerfully.

411 citations


Journal ArticleDOI
TL;DR: A new approach to knowledge representation in which knowledge bases are characterized not in terms of the structures they use to represent knowledge but functionally, in terms of what they can be asked or told about some domain, thereby cleanly separating functionality from implementation structure.

385 citations


Book
01 Jan 1984
TL;DR: This book explains what automated reasoning is and what it can do and then demonstrates how to use it to solve complex problems with applications in logic circuit design, circuit validation, real-time system design and expert systems.
Abstract: This book explains what automated reasoning is and what it can do, and then demonstrates how to use it to solve complex problems, with applications in logic circuit design, circuit validation, real-time system design, and expert systems. A diskette containing the automated reasoning program "Otter" is included, making the program available for the first time on PCs and workstations. The book provides input files, commentary, examples in "Otter" notation, and a user's manual that enables readers to experiment with the material presented as well as with ideas of their own. Other features include: various challenge problems that allow readers to test, compare, and evaluate new ideas and techniques; techniques for answering open questions and methods for finding shorter proofs; examples and puzzles to aid in understanding parallel versions of an automated reasoning program; and additional problems.

363 citations


Journal ArticleDOI
TL;DR: A knowledge representation model of prototype theory is outlined, based on work in schema theory and AI knowledge representation, and it is argued that if concepts are modelled as knowledge representations of a certain kind, it is possible not only to answer prototype theory's critics but also to address more fundamental issues in the theory of concepts.

249 citations


Journal ArticleDOI
TL;DR: A language providing a means for defining nondeterministic information is defined, and deduction methods for the language are developed.

238 citations


Journal ArticleDOI
Daniel G. Bobrow1
TL;DR: This volume brings together current work on qualitative reasoning, and presents knowledge bases for a number of very different domains, from heat flow, to transistors, to digital computation.

218 citations


Journal ArticleDOI
TL;DR: The model abstraction structure is introduced as a vehicle for model representation which supports both heuristic and deterministic inferencing as well as the conceptual/external schema notion familiar to database management.
Abstract: This paper examines the concept of a model management system, what its functions are, and how they are to be achieved in a decision support context. The central issue is model representation, which involves knowledge representation and knowledge management within a database environment. The model abstraction structure is introduced as a vehicle for model representation which supports both heuristic and deterministic inferencing as well as the conceptual/external schema notion familiar to database management. The model abstraction is seen as a special instance of the frame construct in artificial intelligence. Model management systems are characterized as frame-systems, and a database implementation of this approach is described.

206 citations
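
As an illustration of the frame view of models described above, the following is a hypothetical sketch (the class name, slots, and example model are invented, not the paper's model abstraction structure): a model is a frame whose slots record what it consumes, what it produces, and how it is solved, so that a model management layer can treat models as inspectable data.

```python
# Hypothetical frame-style model abstraction (invented slots, not the
# paper's exact structure): a model is a frame whose slots describe its
# inputs, outputs, and solution procedure.

from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class ModelFrame:
    name: str
    inputs: List[str]                 # data the model requires
    outputs: List[str]                # results it produces
    solve: Callable[[Dict], Dict]     # deterministic solution procedure

eoq = ModelFrame(
    name="EconomicOrderQuantity",
    inputs=["demand", "order_cost", "holding_cost"],
    outputs=["order_quantity"],
    solve=lambda d: {"order_quantity":
                     (2 * d["demand"] * d["order_cost"] / d["holding_cost"]) ** 0.5},
)

print(eoq.solve({"demand": 1000, "order_cost": 50, "holding_cost": 2}))
# {'order_quantity': 223.60679774997897}
```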


Book
01 Jan 1984
TL;DR: The expert systems phenomenon, expert systems and the knowledge revolution, and how to build an inferencing engine: inside the inference engine and beyond.
Abstract: Contents: The expert systems phenomenon: An introduction to expert systems; Expert systems: where are we and where are we going? Reasoning: Inside the inference engine; How to build an inferencing engine; Uncertainty management in expert systems. Representation: From data to knowledge; Knowledge representation in man and machine. Issues and applications: Building an expert system; Debugging knowledge bases; Inductive learning for expert systems; Expert systems and the knowledge revolution.

03 Dec 1984
TL;DR: The benefits of limiting knowledge representation systems in these ways will be discussed in the context of a frame-based knowledge-representation system, called KANDOR, that has been developed at FLAIR, and its use as the knowledge representation component of ARGON, an interactive information retrieval system.
Abstract: Almost all knowledge representation systems subscribe to the thesis that big is beautiful. There are, however, some important advantages to limiting knowledge representation systems in a number of ways. For example, a limited and well-defined interface can prevent a knowledge representation system from being just a low-level utility for manipulating data structures. Instead, such a system can only be used in restricted ways, and so can be given a semantics independent of its implementation. Further, limiting the expressive power of a knowledge representation system can guarantee that all its operations terminate in reasonable time; this makes the system usable as part of larger systems that are constrained in time. The benefits of limiting knowledge representation systems in these ways will be discussed in the context of a frame-based knowledge-representation system, called KANDOR, that has been developed at FLAIR, and its use as the knowledge representation component of ARGON, an interactive information retrieval system.
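
A rough sketch of the "limited interface" argument, with hypothetical class and method names (not KANDOR's actual operations): because the knowledge base accepts only frame/slot assertions and answers only slot lookups along a finite is-a chain, every tell and ask terminates quickly.

```python
# Hypothetical limited tell/ask interface (not KANDOR's actual API).
# Because the KB only stores frame/slot assertions and only answers
# slot lookups via a finite is-a chain, every operation terminates.

class LimitedKB:
    def __init__(self):
        self.slots = {}     # frame -> {slot: value}
        self.parent = {}    # frame -> its (single) more general frame

    def tell(self, frame, slot, value):
        if slot == "is_a":
            self.parent[frame] = value
        else:
            self.slots.setdefault(frame, {})[slot] = value

    def ask(self, frame, slot):
        seen = set()
        while frame is not None and frame not in seen:  # bounded walk up the chain
            seen.add(frame)
            if slot in self.slots.get(frame, {}):
                return self.slots[frame][slot]
            frame = self.parent.get(frame)
        return None                                     # unknown, rather than undecidable

kb = LimitedKB()
kb.tell("Report", "medium", "paper")
kb.tell("TechReport", "is_a", "Report")
print(kb.ask("TechReport", "medium"))   # -> 'paper', inherited from Report
```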

Journal ArticleDOI
TL;DR: This article shows that information about objects provided by a system is given up to an indiscernibility relation determined by the system and hence it is incomplete in a sense, and develops a logic in which properties of knowledge representation systems related to definability can be expressed and proved.
Abstract: In this article we attempt to clarify some aspects of expressive power of knowledge representation systems. We show that information about objects provided by a system is given up to an indiscernibility relation determined by the system and hence it is incomplete in a sense. We discuss the influence of this kind of incompleteness on definability of concepts in terms of knowledge given by a system. We consider indiscernibility relations as a tool for representing expressive power of systems, and develop a logic in which properties of knowledge representation systems related to definability can be expressed and proved. We present a complete set of axioms and inference rules for the logic.
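
The indiscernibility idea can be shown with a small sketch in the spirit of the article (the objects and attributes are invented): objects that the system describes identically fall into one indiscernibility class, and a concept is definable exactly when it is a union of whole classes; otherwise only its lower and upper approximations can be expressed.

```python
# Sketch of indiscernibility classes and definability (invented toy data).
# Objects with identical attribute descriptions are indiscernible; a concept
# is definable iff it is a union of whole indiscernibility classes.

from collections import defaultdict

objects = {            # object -> attribute description visible to the system
    "o1": ("red", "round"),
    "o2": ("red", "round"),   # indiscernible from o1
    "o3": ("blue", "round"),
    "o4": ("blue", "square"),
}

groups = defaultdict(set)
for obj, description in objects.items():
    groups[description].add(obj)
classes = list(groups.values())           # the indiscernibility classes

concept = {"o1", "o3"}                    # the set of objects we want to define

lower = {o for c in classes if c <= concept for o in c}   # certainly inside
upper = {o for c in classes if c & concept for o in c}    # possibly inside

print(lower)                       # {'o3'}
print(upper)                       # o1, o2, o3 (order may vary)
print(lower == upper == concept)   # False: this concept is not definable
```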

Book
01 Jan 1984
TL;DR: In this paper, a Gentzen-type formalization of the deductive model of belief is presented, and soundness and completeness theorems for a deductive belief logic are proven.
Abstract: Reasoning about the knowledge and beliefs of computer and human agents is assuming increasing importance in Artificial Intelligence systems for natural language understanding, planning, and knowledge representation. A natural model of belief for robot agents is the deduction model: an agent is represented as having an initial set of beliefs about the world in some internal language and a deduction process for deriving some (but not necessarily all) logical consequences of these beliefs. Because the deduction model is an explicitly computational model, it is possible to take into account limitations of an agent's resources when reasoning. This thesis is an investigation of a Gentzen-type formalization of the deductive model of belief. Several original results are proven. Among these are soundness and completeness theorems for a deductive belief logic; a correspondence result that relates our deduction model to competing possible-worlds models; and a modal analog to Herbrand's Theorem for the belief logic. Specialized techniques for automatic deduction based on resolution are developed using this theorem. Several other topics of knowledge and belief are explored in the thesis from the viewpoint of the deduction model, including a theory of introspection about self-beliefs, and a theory of circumscriptive ignorance, in which facts an agent doesn't know are formalized by limiting or circumscribing the information available to him.
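
A toy rendering of the deduction model, assuming a simple rule format and step bound of my own (not the thesis's Gentzen-style system): the agent believes its base facts plus whatever a bounded number of deduction steps yields, rather than all logical consequences.

```python
# Illustrative deduction model of belief (not the thesis's Gentzen system):
# an agent believes its base facts plus whatever a *bounded* number of
# modus-ponens steps can derive -- not all logical consequences.

def believes(base_facts, rules, query, max_steps=2):
    """rules: list of (premises, conclusion); derivation depth is limited."""
    beliefs = set(base_facts)
    for _ in range(max_steps):                     # resource-bounded deduction
        new = {concl for prems, concl in rules
               if set(prems) <= beliefs and concl not in beliefs}
        if not new:
            break
        beliefs |= new
    return query in beliefs

rules = [(("a",), "b"), (("b",), "c"), (("c",), "d")]
print(believes({"a"}, rules, "c", max_steps=2))   # True: reachable within the bound
print(believes({"a"}, rules, "d", max_steps=2))   # False: beyond the agent's resources
```

Limiting max_steps is a crude analogue of the resource limitations the deduction model is intended to capture.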

Journal ArticleDOI
TL;DR: This paper discusses the formal connection between possibility distributions (Zadeh [21]) and the theory of random sets via Choquet's theorem, and suggests that plausible inference and the modeling of common sense can be derived from the statistics of random sets.
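
A numeric sketch of the standard link the TL;DR alludes to (a textbook-style consonant example, not the paper's Choquet-theorem construction): for a random set with nested focal elements, the one-point coverage function behaves as a possibility distribution, and the plausibility of a query set equals the maximum possibility over its elements.

```python
# Numeric sketch (illustrative, not the paper's derivation): a consonant
# random set, i.e. nested focal sets with probability masses. Its one-point
# coverage function acts as a possibility distribution, and plausibility of
# a query set equals the maximum possibility over that set's elements.

focal = [({"2"}, 0.5), ({"1", "2", "3"}, 0.3), ({"1", "2", "3", "4"}, 0.2)]

def possibility(x):                   # one-point coverage: P(x is in the random set)
    return sum(m for s, m in focal if x in s)

def plausibility(a):                  # total mass of focal sets hitting the query set
    return sum(m for s, m in focal if s & a)

query = {"3", "4"}
print(round(plausibility(query), 10))                   # 0.5
print(round(max(possibility(x) for x in query), 10))    # 0.5 -- the same value
```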

Journal ArticleDOI
TL;DR: Despite INTERNIST-1’s apparent success in dealing with complex cases involving multiple diagnoses in the same patient, many shortcomings in both its knowledge representation schemes and its diagnostic algorithms still remain.
Abstract: INTERNIST-1 is an experimental computer program for consultation in general internal medicine. On a series of test cases, its performance has been shown to be similar to that of staff physicians at a university hospital. Despite INTERNIST-1’s apparent success in dealing with complex cases involving multiple diagnoses in the same patient, many shortcomings in both its knowledge representation schemes and its diagnostic algorithms still remain. Among the known problems are lack of anatomical and temporal reasoning, inadequate representation of degrees of severity of findings and illnesses, and failure to reason properly about causality. These drawbacks must be corrected before INTERNIST-1’s successor program, CADUCEUS, can be used. It is estimated that CADUCEUS will not be ready for release to the general medical community for five to ten years. Broader problems faced by all medical diagnostic consultant systems are: design of an efficient human interface; development and completion of medical knowledge bases; expansion of diagnostic algorithms from simple heuristic rules to include a range of complex reasoning strategies, and development of a method for validating computer programs for clinical use.

Journal Article
TL;DR: This study represents an exploration of the phenomenon of non-literal language ("metaphors") and an approach that lends itself to computational modeling based on Ortony's theories of the way in which salience and asymmetry function in human metaphor processing.
Abstract: This study represents an exploration of the phenomenon of non-literal language ("metaphors") and an approach that lends itself to computational modeling. Ortony's theories of the way in which salience and asymmetry function in human metaphor processing are explored and expanded on the basis of numerous examples. A number of factors appear to be interacting in the metaphor comprehension process. In addition to salience and asymmetry, of major importance are incongruity, hyperbolicity, inexpressibility, prototypicality, and probable value range. Central to the model is a knowledge representation system incorporating these factors and allowing for the manner in which they interact. A version of KL-ONE (with small revisions) is used for this purpose.

Journal ArticleDOI
TL;DR: This article describes the initial experience with building applications programs in a hybrid AI tool environment based on five major AI methodologies: frame-based knowledge representation with inheritance, rule-based reasoning, LISP, interactive graphics, and active values.
Abstract: This article describes our initial experience with building applications programs in a hybrid AI tool environment. Traditional AI systems developments have emphasized a single methodology, such as frames, rules or logic programming, as a methodology that is natural, efficient, and uniform. The applications we have developed suggest that naturalness, efficiency and flexibility are all increased by trading uniformity for the power that is provided by a small set of appropriate programming and representation tools. The tools we use are based on five major AI methodologies: frame-based knowledge representation with inheritance, rule-based reasoning, LISP, interactive graphics, and active values. Object-oriented computing provides a principle for unifying these different methodologies within a single system.
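
Of the five methodologies listed, "active values" may be the least familiar today; below is an illustrative Python analogue (my own sketch, not the tool environment described in the article): a slot whose writes trigger an attached procedure, for example to refresh a display whenever the underlying value changes.

```python
# Illustrative analogue of "active values" (a sketch, not the article's
# system): a slot whose writes trigger an attached procedure, e.g. to
# update a gauge or a graphic whenever the underlying value changes.

class ActiveValue:
    def __init__(self, name, on_write):
        self.name, self.on_write = name, on_write

    def __set_name__(self, owner, attr):
        self.attr = "_" + attr

    def __get__(self, obj, objtype=None):
        return getattr(obj, self.attr, None)

    def __set__(self, obj, value):
        setattr(obj, self.attr, value)
        self.on_write(self.name, value)          # side effect on every write

class Boiler:
    temperature = ActiveValue("temperature",
                              lambda slot, v: print(f"[gauge] {slot} -> {v}"))

b = Boiler()
b.temperature = 95     # prints: [gauge] temperature -> 95
print(b.temperature)   # 95
```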

Journal ArticleDOI
TL;DR: In this article, the authors present an experimental comparison of four knowledge representation schemes: a simple production system, a structured production system, a frame system, and a logic system, and observe how the structure of the domain knowledge affects the implementation of expert systems and their run-time efficiency.
Abstract: Many techniques for representing knowledge have been proposed, but there have been few reports that compare their application. This article presents an experimental comparison of four knowledge representation schemes: a simple production system, a structured production system, a frame system, and a logic system. We built four pilot expert systems to solve the same problem: risk management of a large construction project. Observations are made about how the structure of the domain knowledge affects the implementation of expert systems and their run-time efficiency.
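
To make the comparison concrete, here is a hypothetical fragment of construction-risk knowledge encoded two ways, loosely as a production rule and as a frame; the attributes are invented, and the paper's four pilot systems are not reproduced.

```python
# Hypothetical illustration of two of the compared schemes (invented content):
# the same piece of knowledge as a production rule and as a frame.

# 1. Production-rule style: condition -> action over working memory.
def rule_high_weather_risk(wm):
    if wm.get("season") == "monsoon" and wm.get("site") == "coastal":
        wm["weather_risk"] = "high"

working_memory = {"season": "monsoon", "site": "coastal"}
rule_high_weather_risk(working_memory)
print(working_memory["weather_risk"])      # high

# 2. Frame style: a stereotyped situation with slots and filled defaults.
coastal_monsoon_project = {
    "is_a": "ConstructionProject",
    "season": "monsoon",
    "site": "coastal",
    "weather_risk": "high",                # filled slot rather than a fired rule
}
print(coastal_monsoon_project["weather_risk"])
```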

Proceedings Article
01 Jan 1984
TL;DR: Dynamic metasystems for information systems specification, analysis, and design are discussed, and their implementation in the Plexsys system, which provides generalized integrity analysis across specification levels, is overviewed.
Abstract: Dynamics in the use of metasystems in the development of information systems is discussed. An axiomatic level of specification is used to allow dynamic specification of "median" level metasystems which are, in turn, used in information systems specification, analysis and design. Existing metasystems are reviewed and principles for metasystem evaluation are considered. The implementation and use of dynamic metasystems in the Plexsys system is overviewed. The Plexsys system implements generalized integrity analysis at all levels of logic, and mechanisms to insure the mutual integrity of these levels over time.


01 Jan 1984
TL;DR: The data model suggested below is a step towards bridging the gap between database theory and AI databases.
Abstract: Data models used in database management have not been built with AI applications in mind. The entities and their relationships in an AI environment transcend in complexity the data semantics of most other databases, so that the expressive power of the "usual" data models becomes insufficient. In the AI community, databases are viewed as a possible application area ("database front ends"), but in AI research itself the databases used tend to be ad hoc and are not specified in terms of data models and DBMSs based on such. The data model suggested below is a step towards bridging the gap between database theory and AI databases.

Journal ArticleDOI
TL;DR: The RESEDA project is concerned with the construction of Artificial Intelligence management systems working on factual databases consisting of biographical data, and this data is described using a particular Knowledge Representation language based on the Artificial Intelligence understanding of a “Case Grammar” approach.
Abstract: The RESEDA project is concerned with the construction of Artificial Intelligence (AI) management systems working on factual databases consisting of biographical data; this data is described using a particular Knowledge Representation language ("meta-language") based on the Artificial Intelligence understanding of a "Case Grammar" approach. The "computing kernel" of the system consists of an inference interpreter. Where it is not possible to find a direct response to the (formal) question posed, RESEDA tries to answer indirectly by using a first stage of inference procedures ("transformations"). Moreover, the system is able to establish automatically new causal links between the statements represented in the base, on the basis of "hypotheses", of a somewhat general nature, about the class of possible relationships. In this case, the result of the inference operations can thus modify, at least in principle, the original content of the database.
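
The first stage of indirect answering ("transformations") can be sketched abstractly; the predicates and the single rewrite rule below are invented placeholders, not RESEDA's case-grammar metalanguage: when a query has no direct match in the base, the interpreter rewrites it with a transformation and retries.

```python
# Abstract sketch of first-stage inference by query transformation
# (invented predicates and rule, not RESEDA's metalanguage): if a query
# has no direct match, rewrite its predicate and retry.

facts = {("be-member-of", "Giovanni", "papal_court")}

# Invented transformation: a question about presence can be answered
# indirectly from a statement about membership.
transformations = {"be-present-at": "be-member-of"}

def answer(predicate, subj, obj):
    if (predicate, subj, obj) in facts:                    # direct response
        return True
    alt = transformations.get(predicate)                   # first-stage inference:
    return alt is not None and (alt, subj, obj) in facts   # rewrite and retry

print(answer("be-member-of", "Giovanni", "papal_court"))   # True, direct
print(answer("be-present-at", "Giovanni", "papal_court"))  # True, via transformation
```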




Journal ArticleDOI
TL;DR: The purpose of this correspondence is to show the design considerations in the choice of mechanisms when a flexible query system for visual scenes is being constructed.
Abstract: The purpose of this correspondence is to show the design considerations in the choice of mechanisms when a flexible query system for visual scenes is being constructed. More concretely, the issues are: flexibility in adding new information to the knowledge base; power of inferencing; avoiding unnecessary generation of hypotheses in cases where a great deal of image processing would have to be performed in order to test them; and having the power of automatic generation of recognition strategies.


01 Jan 1984
TL;DR: This paper examines a single representation language, SRL, and its applications in order to determine the utility of its ideas; what distinguishes SRL is its evolution from a research engine to a "production level" language.