
Showing papers on "Knowledge representation and reasoning" published in 1988


Journal ArticleDOI
Ralph Linsker1
TL;DR: It is shown that even a single developing cell of a layered network exhibits a remarkable set of optimization properties that are closely related to issues in statistics, theoretical physics, adaptive signal processing, the formation of knowledge representation in artificial intelligence, and information theory.
Abstract: The emergence of a feature-analyzing function from the development rules of simple, multilayered networks is explored. It is shown that even a single developing cell of a layered network exhibits a remarkable set of optimization properties that are closely related to issues in statistics, theoretical physics, adaptive signal processing, the formation of knowledge representation in artificial intelligence, and information theory. The network studied is based on the visual system. These results are used to infer an information-theoretic principle that can be applied to the network as a whole, rather than a single cell. The organizing principle proposed is that the network connections develop in such a way as to maximize the amount of information that is preserved when signals are transformed at each processing stage, subject to certain constraints. The operation of this principle is illustrated for some simple cases.
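A simplified reading of the single-cell result can be sketched in a few lines: for a linear unit with Gaussian input and a fixed weight norm, the information-maximizing weight vector is the one that maximizes output variance, i.e. the top principal component of the input covariance. The setup and variable names below are illustrative, not from the paper.

```python
import numpy as np

# Sketch: for a linear unit y = w.x with Gaussian input and fixed |w|,
# output information grows with output variance, so the optimal weight
# vector is the top eigenvector of the input covariance matrix.
rng = np.random.default_rng(0)
A = rng.normal(size=(5, 5))
X = rng.normal(size=(10000, 5)) @ A.T        # correlated Gaussian inputs
C = np.cov(X, rowvar=False)                  # input covariance

eigvals, eigvecs = np.linalg.eigh(C)         # eigenvalues in ascending order
w_opt = eigvecs[:, -1]                       # top principal direction

def output_variance(w):
    w = w / np.linalg.norm(w)
    return float(w @ C @ w)

w_rand = rng.normal(size=5)
print(output_variance(w_opt) >= output_variance(w_rand))  # True
```

Since max of w.C.w over unit vectors is attained at the top eigenvector, no direction can beat `w_opt`.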

1,469 citations


Book
01 Jan 1988
TL;DR: Qualitative Probabilistic Reasoning and Cognitive Models, Dempster-Shafer Theory in Knowledge Representation, and Possibility Theory: Semantics and Applications.
Abstract: Qualitative Probabilistic Reasoning and Cognitive Models. Exploiting Functional Dependencies in Qualitative Probabilistic Reasoning (M.P. Wellman). Qualitative Propagation and Scenario-Based Scheme for Explaining Probabilistic Reasoning (M. Henrion, M.J. Druzdel). Propagating Uncertainty in Rule Based Cognitive Modeling (T.R. Shultz). Context-Dependent Similarity (Y. Cheng). Abductive Probabilistic Reasoning and KB Development. Similarity Networks for the Construction of Multiple-Faults Belief Networks (D. Heckerman). Separable and Transitive Graphoids (D. Geiger, D. Heckerman). Integrating Probabilistic, Taxonomic and Causal Knowledge in Abductive Diagnosis (D. Lin, R. Goebel). What is the Most Likely Diagnosis (D. Poole, G.M. Provan). Probabilistic Evaluation of Candidate Sets for Multidisorder Diagnosis (T.D. Wu). Kutato: An Entropy-Driven System for Construction of Probabilistic Expert Systems from Databases (E. Herskovits, G. Cooper). Problem Formulation and Control of Reasoning. Ideal Reformulation of Belief Networks (J.S. Breese, E.J. Horvitz). Computationally-Optimal Real-Resource Strategies for Independent, Uninterruptible Methods (D. Einav, M.R. Fehling). Problem Formulation as the Reduction of a Decision Model (D.E. Heckerman, E.J. Horvitz). Dynamic Construction of Belief Networks (R.P. Goldman, E. Charniak). A New Algorithm for Finding MAP Assignments to Belief Networks (S.E. Shimony, E. Charniak). Belief Network Decomposition. Directed Reduction Algorithms and Decomposable Graphs (R.D. Shachter, S.K. Andersen, K.L. Poh). Optimal Decomposition of Belief Networks (W.X. Wen). Pruning Bayesian Networks for Efficient Computation (M. Baker, T.E. Boult). On Heuristics for Finding Loop Cutsets in Multiply-Connected Belief Networks (J. Stillman). A Combination of Cutset Conditioning with Clique-Tree Propagation in the Pathfinder System (H.J. Suermondt, G.F. Cooper, D.E. Heckerman). Equivalence and Synthesis of Causal Models (T.S. Verma, J. Pearl). 
Possibility Theory: Semantics and Applications. Possibility as Similarity: The Semantics of Fuzzy Logic (E. Ruspini). Integrating Case-Based and Rule-Based Reasoning: the Possibilistic Connection (S. Dutta, P.P. Bonissone). Credibility Discounting in the Theory of Approximate Reasoning (R.R. Yager). Updating with Belief Functions, Ordinal Conditional Functions and Possibility Measures (D. Dubois, H. Prade). A Hierarchical Approach to Designing Approximate Reasoning-Based Controllers for Dynamic Physical Systems (H.R. Berenji, et al.). Dempster-Shafer: Graph Decomposition, FMT, and Interpretations. A New Approach to Updating Beliefs (R. Fagin, J.Y. Halpern). The Transferable Belief Model and Other Interpretations of Dempster-Shafer's Model (P. Smets). Valuation-Based Systems for Discrete Optimization (P.P. Shenoy). Computational Aspects of the Mobius Transformation (R. Kennes, P. Smets). Using Dempster-Shafer Theory in Knowledge Representation (A. Saffiotti).
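One technique from the Dempster-Shafer section is compact enough to sketch: Dempster's rule of combination, which merges two bodies of evidence over a common frame of discernment. The diagnosis names below are invented for illustration.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule of combination for mass functions over frozensets:
    intersect focal elements, accumulate products, renormalize by 1 - conflict."""
    combined, conflict = {}, 0.0
    for (B, b), (C, c) in product(m1.items(), m2.items()):
        A = B & C
        if A:
            combined[A] = combined.get(A, 0.0) + b * c
        else:
            conflict += b * c                 # mass assigned to the empty set
    if conflict >= 1.0:
        raise ValueError("totally conflicting evidence")
    return {A: v / (1.0 - conflict) for A, v in combined.items()}

theta = frozenset({"flu", "cold", "allergy"})          # frame of discernment
m1 = {frozenset({"flu"}): 0.6, theta: 0.4}             # first piece of evidence
m2 = {frozenset({"flu", "cold"}): 0.7, theta: 0.3}     # second piece of evidence
m = dempster_combine(m1, m2)
```

Here the combined mass concentrates on "flu" (0.6), with the remainder on the coarser sets, and the result still sums to one.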

1,407 citations


Journal ArticleDOI
TL;DR: The higraph, a general kind of diagramming object, forms a visual formalism of topological nature that is suited for a wide array of applications to databases, knowledge representation, and the behavioral specification of complex concurrent systems using the higraph-based language of statecharts.
Abstract: The higraph, a general kind of diagramming object, forms a visual formalism of topological nature. Higraphs are suited for a wide array of applications to databases, knowledge representation, and, most notably, the behavioral specification of complex concurrent systems using the higraph-based language of statecharts.
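A higraph's key departure from ordinary graphs is that nodes ("blobs") may contain other blobs, and edges may attach at any level of nesting, as in statechart superstates. The class and method names below are an invented minimal sketch, not a formal higraph definition.

```python
class Blob:
    """A higraph node: blobs may contain other blobs (depth/containment)."""
    def __init__(self, name):
        self.name, self.children, self.parent = name, [], None

    def add(self, child):
        child.parent = self
        self.children.append(child)
        return child

class Higraph:
    """Edges may connect blobs at any nesting level."""
    def __init__(self):
        self.edges = []

    def connect(self, src, dst, label=""):
        self.edges.append((src, dst, label))

def ancestors(blob):
    """Enclosing blobs, innermost first."""
    out = []
    while blob.parent:
        blob = blob.parent
        out.append(blob.name)
    return out

# A statechart-like fragment: an 'on' superstate containing two substates.
hg = Higraph()
on = Blob("on")
idle, busy = on.add(Blob("idle")), on.add(Blob("busy"))
off = Blob("off")
hg.connect(idle, busy, "job")
hg.connect(on, off, "power")   # edge from the superstate: leaves any substate
```

The "power" edge from the superstate is the statechart economy the abstract alludes to: one edge stands for a transition out of every contained substate.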

1,332 citations


Journal Article
TL;DR: In this article, an ontology based on such notions as causation and consequence is proposed, rather than on purely temporal primitives, and a central notion in the ontology is that of an elementary event-complex called a "nucleus."
Abstract: A semantics of temporal categories in language and a theory of their use in defining the temporal relations between events both require a more complex structure on the domain underlying the meaning representations than is commonly assumed. This paper proposes an ontology based on such notions as causation and consequence, rather than on purely temporal primitives. A central notion in the ontology is that of an elementary event-complex called a "nucleus." A nucleus can be thought of as an association of a goal event, or "culmination," with a "preparatory process" by which it is accomplished, and a "consequent state," which ensues. Natural-language categories like aspects, futurates, adverbials, and when-clauses are argued to change the temporal/aspectual category of propositions under the control of such a nucleic knowledge representation structure. The same concept of a nucleus plays a central role in a theory of temporal reference, and of the semantics of tense, which we follow McCawley, Partee, and Isard in regarding as an anaphoric category. We claim that any manageable formalism for natural-language temporal descriptions will have to embody such an ontology, as will any usable temporal database for knowledge about events which is to be interrogated using natural language.
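The nucleus can be rendered as a small data structure; the field names and the "climb Everest" example below are our illustration of the idea, not the paper's notation.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Nucleus:
    """Sketch of the paper's event nucleus: a preparatory process
    leading to a culmination, which gives rise to a consequent state."""
    preparation: Optional[str]
    culmination: Optional[str]
    consequent: Optional[str]

# "climb Everest": the progressive ("was climbing") picks out the
# preparatory process; the perfect ("has climbed") asserts the consequent state.
climb = Nucleus("climbing Everest", "reach the summit", "be at the summit")

def progressive(n):
    return n.preparation      # aspectual coercion: strips the culmination

def perfect(n):
    return n.consequent       # asserts the result state holds
```

This mirrors the abstract's claim that aspects and tenses shift which part of the nucleic structure a proposition denotes.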

809 citations


Journal ArticleDOI
TL;DR: Connectionist networks can be used as expert system knowledge bases and can be constructed from training examples by machine learning techniques, giving a way to automate the generation of expert systems for classification problems.
Abstract: Connectionist networks can be used as expert system knowledge bases. Furthermore, such networks can be constructed from training examples by machine learning techniques. This gives a way to automate the generation of expert systems for classification problems.
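The core claim, that a classification knowledge base can be learned as network weights from training examples, can be illustrated with the simplest possible case, a single perceptron trained on a toy rule; the paper's networks are more general than this sketch.

```python
import numpy as np

def train_perceptron(X, y, epochs=50, lr=0.1):
    """Learn a linear classification 'knowledge base' as weights from examples."""
    w = np.zeros(X.shape[1] + 1)                  # weights + bias
    Xb = np.hstack([X, np.ones((len(X), 1))])     # append bias input
    for _ in range(epochs):
        for xi, yi in zip(Xb, y):
            pred = 1 if xi @ w > 0 else 0
            w += lr * (yi - pred) * xi            # perceptron update rule
    return w

def classify(w, x):
    return 1 if np.append(x, 1.0) @ w > 0 else 0

# Toy "rule" induced from examples: fever AND rash -> disease.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])
w = train_perceptron(X, y)
print([classify(w, x) for x in X])   # [0, 0, 0, 1]
```

The learned weights play the role of the expert system's knowledge base: no rules were hand-coded, only examples supplied.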

676 citations


Journal ArticleDOI
TL;DR: The generic properties of semantic data models are described and a representative selection of models that have been proposed since the mid-1970s are presented.
Abstract: Semantic data models have emerged from a requirement for more expressive conceptual data models. Current generation data models lack direct support for relationships, data abstraction, inheritance, constraints, unstructured objects, and the dynamic properties of an application. Although the need for data models with richer semantics is widely recognized, no single approach has won general acceptance. This paper describes the generic properties of semantic data models and presents a representative selection of models that have been proposed since the mid-1970s. In addition to explaining the features of the individual models, guidelines are offered for the comparison of models. The paper concludes with a discussion of future directions in the area of conceptual data modeling.

567 citations


Proceedings Article
Mukesh Dalal1
21 Aug 1988
TL;DR: This paper formulates some desirable principles of knowledge revision and investigates a new theory of revision that realizes them, illustrating its application through examples and comparing it with several other approaches.
Abstract: A fundamental problem in knowledge representation is how to revise knowledge when new, contradictory information is obtained. This paper formulates some desirable principles of knowledge revision, and investigates a new theory of knowledge revision that realizes these principles. This theory of revision can be explained at the knowledge level, in purely model-theoretic terms. A syntactic characterization of the proposed approach is also presented. We illustrate its application through examples and compare it with several other approaches.
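A model-theoretic revision operator in this spirit can be sketched directly: keep the models of the new sentence that are closest, counted by differing atoms, to some model of the old knowledge base. The atom names below are illustrative.

```python
from itertools import product

ATOMS = ["p", "q", "r"]

def models(pred):
    """All valuations (as bool tuples in ATOMS order) satisfying pred."""
    return [m for m in product([False, True], repeat=len(ATOMS))
            if pred(dict(zip(ATOMS, m)))]

def hamming(m1, m2):
    return sum(a != b for a, b in zip(m1, m2))

def revise(kb_models, new_models):
    """Keep models of the new sentence at minimal distance from the old KB."""
    if not kb_models:
        return new_models
    dist = {m: min(hamming(m, k) for k in kb_models) for m in new_models}
    best = min(dist.values())
    return [m for m in new_models if dist[m] == best]

kb = models(lambda v: v["p"] and v["q"])     # old KB: p AND q
new = models(lambda v: not v["p"])           # contradictory news: NOT p
revised = revise(kb, new)
```

The revised models all make p false but retain q: the revision accepts the new information while giving up as little of the old knowledge as possible.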

558 citations


Book ChapterDOI
12 Jun 1988
TL;DR: A mechanism for automatically inventing and generalising first-order Horn clause predicates is presented and implemented in a system called CIGOL, which uses incremental induction to augment incomplete clausal theories.
Abstract: It has often been noted that the performance of existing learning systems is strongly biased by the vocabulary provided in the problem description language. An ideal system should be capable of overcoming this restriction by defining its own vocabulary. Such a system would be less reliant on the teacher's ingenuity in supplying an appropriate problem representation. For this purpose we present a mechanism for automatically inventing and generalising first-order Horn clause predicates. The method is based on inverting the mechanism of resolution. The approach has its roots in the Duce system for induction of propositional Horn clauses. We have implemented the new mechanism in a system called CIGOL. CIGOL uses incremental induction to augment incomplete clausal theories. A single, uniform knowledge representation allows existing clauses to be used as background knowledge in the construction of new predicates. Given examples of a high-level predicate CIGOL generates related sub-concepts which it then asks its human teacher to name. Generalisations of predicates are tested by asking questions of the human teacher. CIGOL generates new concepts and generalisations with a preference for simplicity. We illustrate the operation of CIGOL by way of various sessions in which auxiliary predicates are automatically introduced and generalised.
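One ingredient of such generalisation, computing a least general generalisation of two atoms, is easy to sketch. This is Plotkin-style anti-unification, only one piece of what CIGOL does (CIGOL inverts resolution itself), with an invented tuple encoding of terms.

```python
def lgg(t1, t2, subst=None):
    """Least general generalisation of two terms.
    Terms are nested tuples (functor first); variables are strings '?Xn'.
    The same mismatched pair is always generalised to the same variable."""
    subst = {} if subst is None else subst
    if t1 == t2:
        return t1
    if (isinstance(t1, tuple) and isinstance(t2, tuple)
            and t1[0] == t2[0] and len(t1) == len(t2)):
        return (t1[0],) + tuple(lgg(a, b, subst) for a, b in zip(t1[1:], t2[1:]))
    if (t1, t2) not in subst:
        subst[(t1, t2)] = "?X%d" % len(subst)
    return subst[(t1, t2)]

a1 = ("parent", "ann", ("childof", "ann"))
a2 = ("parent", "bob", ("childof", "bob"))
print(lgg(a1, a2))   # ('parent', '?X0', ('childof', '?X0'))
```

Note that both occurrences of the mismatch generalise to the same variable, preserving the co-reference between the two argument positions.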

511 citations


Journal ArticleDOI
TL;DR: In: B. Gaines and J. Boose, editors, Machine Learning and Uncertain Reasoning 3, pages 227-242, 1990; see also: International Journal of Man-Machine Studies 29 (1988) 81-85.
Abstract: In: B. Gaines and J. Boose, editors, Machine Learning and Uncertain Reasoning 3, pages 227-242. Academic Press, New York, NY, 1990. See also: International Journal of Man-Machine Studies 29 (1988) 81-85.

431 citations


Proceedings Article
07 Mar 1988
TL;DR: A representation theorem is proved which says that a revision method for a knowledge system satisfies the set of rationality postulates if and only if there exists an ordering of epistemic entrenchment, satisfying the appropriate constraints, such that this ordering determines the retraction priority of the facts of the knowledge system.
Abstract: A major problem for knowledge representation is how to revise a knowledge system in the light of new information that is inconsistent with what is already in the system. Another related problem is that of contractions, where some of the information in the knowledge system is taken away. Here, the problems of modelling revisions and contractions are attacked in two ways. First, two sets of rationality postulates or integrity constraints are presented, one for revisions and one for contractions. On the basis of these postulates it is shown that there is a natural correspondence between revisions and contractions. Second, a more constructive approach is adopted based on the "epistemic entrenchment" of the facts in a knowledge system, which determines their priority in revisions and contractions. We introduce a set of computationally tractable constraints for an ordering of epistemic entrenchments. The key result is a representation theorem which says that a revision method for a knowledge system satisfies the set of rationality postulates if and only if there exists an ordering of epistemic entrenchment satisfying the appropriate constraints such that this ordering determines the retraction priority of the facts of the knowledge system. We also prove that the amount of information needed to uniquely determine the required ordering is linear in the number of atomic facts of the knowledge system.
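A simplified reading of entrenchment-driven contraction can be sketched as follows: retract the least entrenched beliefs first, until the target sentence is no longer entailed. This illustrates retraction priority only; it is not the paper's actual construction.

```python
from itertools import product

ATOMS = ["p", "q"]

def entails(beliefs, goal):
    """Propositional entailment by truth tables; beliefs/goal are predicates
    over valuations given as dicts atom -> bool."""
    valuations = (dict(zip(ATOMS, bits))
                  for bits in product([False, True], repeat=len(ATOMS)))
    return all(goal(v) for v in valuations if all(b(v) for b in beliefs))

def contract(ranked_beliefs, goal):
    """ranked_beliefs: list of (entrenchment, belief); higher entrenchment
    means harder to give up. Drop least entrenched first until goal is lost."""
    kept = sorted(ranked_beliefs, key=lambda rb: -rb[0])
    while kept and entails([b for _, b in kept], goal):
        kept.pop()                       # retract the least entrenched belief
    return kept

beliefs = [(2, lambda v: v["q"]),                   # q (well entrenched)
           (1, lambda v: (not v["q"]) or v["p"])]   # q -> p (less entrenched)
after = contract(beliefs, lambda v: v["p"])         # contract by p
```

Contracting by p gives up the less entrenched conditional and keeps q, exactly the behavior the entrenchment ordering is meant to dictate.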

426 citations


Journal ArticleDOI
TL;DR: Larry Wos poses 33 basic research problems in automated reasoning, including the problem of strategy, hyperresolution, and obstacles to the automation of reasoning.
Abstract: Presents 33 basic research problems in automated reasoning, covering topics such as the problem of strategy, hyperresolution, definition expansion and contraction, the choice between logic programming and automated theorem proving, and obstacles to the automation of reasoning.

Book
01 Mar 1988
TL;DR: A First Glance at Expert Systems, Logic, Knowledge, Knowledge Representation with Frames, Uncertainty, the Inference Process, Building Expert Systems, and Knowledge Acquisition and Validation.
Abstract: Background on Expert Systems. A First Glance at Expert Systems. Logic. Knowledge. Knowledge Representation with Frames. Uncertainty. The Inference Process. Building Expert Systems. Knowledge Acquisition and Validation. Summary. Appendix A: Search. Appendix B: General-Purpose Reasoning. Appendix C: Propositional Logic. Appendix D: Relational Databases.

Book
01 Feb 1988
TL;DR: This book discusses Meta-Level Extensions of Logic and Machine Learning, a Meta-Level Architecture for Expert Systems, and Applications of Metaknowledge in AI Systems.
Abstract: Checking Proofs in the Metamathematics of First Order Logic (M. Aiello, R. Weyhrauch). Foundations. Issues in Computational Reflection (P. Maes). Meta in Logic (D. Perlis). Meaning in Knowledge Representation (L. Steels). Reasoning by Introspection (K. Konolige). Introspective Fidelity (M. Genesereth). Commonsense Set Theory (D. Perlis). Implementations. Control-Related Meta-Level Facilities in LISP (J. des Rivieres). The Mystery of the Tower Revealed: A Non-Reflective Description of the Reflective Tower (M. Wand, D. Friedman). Communication between LISP and Horn Clauses by Mutual Reflection (R. Ghislanzoni, L. Spampinato, G. Tornielli). The ObjVlisp Kernel: A Reflective Lisp Architecture to Define a Uniform Object-Oriented System (P. Cointe). Conceptual Reflection and Actor Languages (J. Ferber). Evaluation and Reflection in FOL (D. Nardi). OMEGA: An Integrated Reflective Framework (M. Simi, E. Motta). Meta-Levels in SOAR (P. Rosenbloom, J. Laird, A. Newell). Applications. The Uses of Metaknowledge in AI Systems (L. Aiello, G. Levi). Reasoning about Self-Control (J. Batali). A Multi-Context Monotonic Axiomatization of Inessential Non-Monotonicity (F. Giunchiglia, R. Weyhrauch). Declaratively Programmable Interpreters and Meta-Level Inference (B. Welham). A Meta-Level Architecture for Expert Systems (L. Sterling). Object Level Reflection of Inference Rules by Partial Evaluation (P. Coscia et al.). Functional Meta-Level for Logic Programming (P. Mancarella, D. Pedreschi, F. Turini). Meta-Level Extensions of Logic and Machine Learning (P. Brazdil).

Journal ArticleDOI
TL;DR: This paper presents an approach to default reasoning based on an extension to classical first-order logic augmented with a "variable conditional" operator for representing default statements, and argues for its advantages over earlier approaches.

Journal ArticleDOI
TL;DR: It is proven that a complete inference algorithm for the BACK system would be computationally intractable, and it is shown that terminological reasoning is intractable for any system using a nontrivial description language.

Journal ArticleDOI
TL;DR: The VT architecture provides the basis for a knowledge representation used by SALT, an automated knowledge-acquisition tool; SALT was used to build VT and analyzes VT's knowledge base to assess its potential for convergence on a solution.
Abstract: VT (vertical transportation) is an expert system for handling the design of elevator systems that is currently in use at Westinghouse Elevator Company. Although VT tries to postpone each decision in creating a design until all information that constrains the decision is known, for many decisions this postponement is not possible. In these cases, VT uses the strategy of constructing a plausible approximation and successively refining it. VT uses domain-specific knowledge to guide its backtracking search for successful refinements. The VT architecture provides the basis for a knowledge representation that is used by SALT, an automated knowledge-acquisition tool. SALT was used to build VT and provides an analysis of VT's knowledge base to assess its potential for convergence on a solution.

Journal ArticleDOI
01 May 1988
TL;DR: This paper shows that the difficulties McDermott described are a result of insisting on using logic as the language of commonsense reasoning, and that if (Bayesian) probability is used instead, none of the technical difficulties found in using logic arise.
Abstract: The paper examines issues connected with the choice of the best method for representing and reasoning about common sense. McDermott (1978) has shown that a direct translation of common sense reasoning into logical form leads to insurmountable difficulties. It is shown, in the present work, that if Bayesian probability is used instead of logic as the language of such reasoning, none of the technical difficulties found in using logic arise. Bayesian inference is applied to a simple example of linguistic information to illustrate the potential of this type of inference for artificial intelligence.
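The flavor of the argument can be shown with a classic default, "birds typically fly": where a brittle logical rule breaks on exceptions, a probabilistic encoding simply updates on evidence. The numbers below are illustrative, not from the paper.

```python
def posterior(prior, likelihood, likelihood_alt):
    """P(H | e) by Bayes' rule for a binary hypothesis H:
    likelihood = P(e | H), likelihood_alt = P(e | not H)."""
    num = likelihood * prior
    return num / (num + likelihood_alt * (1.0 - prior))

p_flies = 0.9   # default "birds typically fly" as P(flies | bird)

# Evidence: "it is a penguin". Assume P(penguin | flies) = 0.001
# and P(penguin | not flies) = 0.3 (illustrative numbers).
p_flies_given_penguin = posterior(p_flies, 0.001, 0.3)
print(round(p_flies_given_penguin, 3))   # 0.029
```

The default is overturned by the evidence without any contradiction arising, which is the paper's point of contrast with a logical encoding of the same rule.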

Book ChapterDOI
TL;DR: UC (UNIX Consultant) is an intelligent, natural language interface that allows naive users to learn about the UNIX operating system and makes use of knowledge represented in a knowledge representation system called KODIAK.
Abstract: UC (UNIX Consultant) is an intelligent, natural language interface that allows naive users to learn about the UNIX operating system. UC was undertaken because the task was thought to be both a fertile domain for artificial intelligence (AI) research and a useful application of AI work in planning, reasoning, natural language processing, and knowledge representation. The current implementation of UC comprises the following components: a language analyzer, called ALANA, produces a representation of the content contained in an utterance; an inference component, called a concretion mechanism, that further refines this content; a goal analyzer, PAGAN, that hypothesizes the plans and goals under which the user is operating; an agent, called UCEgo, that decides on UC's goals and proposes plans for them; a domain planner, called KIP, that computes a plan to address the user's request; an expression mechanism, UCExpress, that determines the content to be communicated to the user; and a language production mechanism, UCGen, that expresses UC's response in English. UC also contains a component, called KNOME, that builds a model of the user's knowledge state with respect to UNIX. Another mechanism, UCTeacher, allows a user to add knowledge of both English vocabulary and facts about UNIX to UC's knowledge base. This is done by interacting with the user in natural language. All these aspects of UC make use of knowledge represented in a knowledge representation system called KODIAK. KODIAK is a relation-oriented system that is intended to have wide representational range and a clear semantics, while maintaining a cognitive appeal. All of UC's knowledge, ranging from its most general concepts to the content of a particular utterance, is represented in KODIAK.

Journal ArticleDOI
TL;DR: The paper presents a connectionist realization of semantic networks, that is, it describes how knowledge about concepts, their properties, and the hierarchical relationship between them may be encoded as an interpreter-free massively parallel network of simple processing elements that can solve an interesting class of inheritance and recognition problems extremely fast—in time proportional to the depth of the conceptual hierarchy.

Journal ArticleDOI
TL;DR: In this paper, the authors draw attention to certain aspects of causal reasoning which are pervasive in ordinary discourse yet, based on the author's scan of the literature, have not received due treatment by logical formalisms of common-sense reasoning.

Book ChapterDOI
01 Jan 1988
TL;DR: SALT is a knowledge-acquisition tool for generating expert systems that use a propose-and-revise problem-solving strategy; this strategy provides the basis for SALT's knowledge representation.
Abstract: SALT is a knowledge-acquisition tool for generating expert systems that use a propose-and-revise problem-solving strategy. The SALT-assumed method constructs a design incrementally by proposing values for design parameters, identifying constraints on design parameters as the design develops, and revising decisions in response to constraint violations in the proposal. This problem-solving strategy provides the basis for SALT's knowledge representation. SALT uses its knowledge of the intended problem-solving strategy in identifying relevant domain knowledge, in detecting weaknesses in the knowledge base in order to guide its interrogation of the domain expert, in generating an expert system that performs the task and explains its line of reasoning, and in analyzing test case coverage. The strong commitment to problem-solving strategy that gives SALT its power also defines its scope.
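The propose-and-revise strategy SALT assumes can be sketched as a generic loop: propose values for design parameters, check constraints, apply a fix for a violation, and repeat. The function names and the toy load/motor example below are invented, not SALT's actual representation.

```python
def propose_and_revise(params, proposers, constraints, fixes, max_iters=20):
    """Generic propose-and-revise: extend the design, then repair violations."""
    design = {}
    for _ in range(max_iters):
        for p in params:                          # propose any missing values
            if p not in design:
                design[p] = proposers[p](design)
        violated = [c for c in constraints if not c(design)]
        if not violated:
            return design                         # consistent design found
        for p, new_value in fixes[violated[0]](design):   # revise decisions
            design[p] = new_value
    raise RuntimeError("no consistent design found")

# Toy elevator-flavored example: motor size is proposed from load,
# and a constraint caps the motor size.
params = ["load", "motor"]
proposers = {"load": lambda d: 2000,
             "motor": lambda d: d["load"] * 0.01}
too_big = lambda d: d["motor"] <= 15
fixes = {too_big: lambda d: [("load", d["load"] - 500),
                             ("motor", (d["load"] - 500) * 0.01)]}
design = propose_and_revise(params, proposers, [too_big], fixes)
```

The first proposal (load 2000, motor 20) violates the cap; one revision step yields a consistent design (load 1500, motor 15), mirroring VT's approximate-then-refine behavior.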

Journal ArticleDOI
Wai K. Yeap1
TL;DR: A computational theory of cognitive maps is developed which can explain some of the current findings about cognitive maps in the psychological literature and which provides a coherent framework for future development.

Journal ArticleDOI
TL;DR: Various approaches to inductive reasoning such as probability kinematics based on information measures, and the combination of uncertain or default information as studied in the field of Artificial Intelligence are discussed.

Proceedings Article
07 Mar 1988
TL;DR: This work argues, contrary to the prevailing view, that integrity constraints are epistemic in nature; it formalizes this notion in Levesque's language KFOPCE and defines the concept of a knowledge base satisfying its integrity constraints.
Abstract: We address the concept of a static integrity constraint as it arises in databases and Artificial Intelligence knowledge representation languages. Such constraints are meant to characterize the acceptable states of a knowledge base, and are used to enforce these legal states. We adopt the perspective that a knowledge base is a set of first order sentences, but argue, contrary to the prevailing view, that integrity constraints are epistemic in nature. Rather than being statements about the world, constraints are statements about what the knowledge base can be said to know. We formalize this notion in the language KFOPCE due to Levesque and define the concept of a knowledge base satisfying its integrity constraints. We investigate constraint satisfaction for closed world knowledge bases. We also show that Levesque's axiomatization of KFOPCE provides the correct logic for reasoning about integrity constraints. Finally, we show how to determine whether a knowledge base satisfies its constraints for a restricted, but important class of knowledge bases and constraints.
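The epistemic reading of a constraint can be illustrated concretely: the constraint is checked against what the knowledge base explicitly knows (here, under a closed-world reading), not against all worlds consistent with it. The facts and predicate names below are invented.

```python
# Epistemic constraint: "every known teacher has a known subject".
# The KB is a set of ground facts; 'bob' is known to be a teacher,
# but no subject for him is known.
kb = {("teacher", "ann"), ("teacher", "bob"), ("teaches", "ann", "logic")}

def known_teachers(kb):
    return {f[1] for f in kb if f[0] == "teacher"}

def satisfies_constraint(kb):
    """Check the constraint against what the KB knows, closed-world style."""
    return all(any(f[0] == "teaches" and f[1] == t for f in kb)
               for t in known_teachers(kb))

print(satisfies_constraint(kb))   # False: bob is known, his subject is not
```

Read as an ordinary first-order sentence, the constraint would merely be unverified (some world could assign bob a subject); read epistemically, the KB itself is in violation until a subject for bob is recorded.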

Journal ArticleDOI
B. Korel1
TL;DR: A prototype of the error localization assistant system which guides a programmer during debugging of Pascal programs is described, which makes use of the knowledge of program structure, which is derived automatically.
Abstract: Error localization in program debugging is the process of identifying program statements which cause incorrect behavior. A prototype of the error localization assistant system which guides a programmer during debugging of Pascal programs is described. The system is interactive: it queries the programmer for the correctness of the program behavior and uses answers to focus the programmer's attention on an erroneous part of the program (in particular, it can localize a faulty statement). The system differs from previous approaches in that it makes use of the knowledge of program structure, which is derived automatically. The knowledge of program structure is represented by the dependence network which is used by the error-locating reasoning mechanism to guide the construction, evaluation, and modification of hypothesis of possible causes of the error. Backtracking reasoning has been implemented in the reasoning mechanism.
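The backward walk over a dependence network that narrows the suspect set can be sketched in a few lines. This is a static, simplified version; the paper's system is interactive and considerably richer.

```python
def suspects(depends_on, wrong_var):
    """All variables that could influence wrong_var: the backward closure
    of the dependence network starting from the variable observed wrong."""
    seen, stack = set(), [wrong_var]
    while stack:
        v = stack.pop()
        for d in depends_on.get(v, []):
            if d not in seen:
                seen.add(d)
                stack.append(d)
    return seen

# Toy program:  a = read();  b = read();  c = a * 2;  d = b + 1
depends_on = {"d": ["b"], "c": ["a"], "a": [], "b": []}
print(suspects(depends_on, "d"))   # {'b'} -- the a/c computation is exonerated
```

Only statements in the backward closure can have caused the observed error, so everything outside it is ruled out before the programmer is asked a single question.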

Journal ArticleDOI
TL;DR: This paper argues that pictorial versus descriptional, and analog versus digital distinctions are not central in understanding the role of imagery in cognition, and moreover do not correctly capture the difference between visual perception and language.

Journal ArticleDOI
TL;DR: A generic model of animal-habitat interaction and a specific model of interactions between moose (Alces alces L.) and forests in Finland are described that are event-driven and behavior-based.

Book ChapterDOI
01 Jan 1988
TL;DR: Artificial Intelligence (AI), as the term is used here, is part of Cognitive Science; although there is no consensus on how knowledge should be represented, many approaches seem well suited to the study of emotion.
Abstract: Artificial Intelligence (AI), as we use the term here, is part of Cognitive Science. Cognitive Science is, by definition, the interdisciplinary study of cognition. Participating disciplines are AI, psychology (in particular, cognitive psychology), linguistics, philosophy, and neurobiology. The study of emotion not only includes cognitive aspects but also physiological and expressive ones, as well as subjective experience. Thus emotion is a field for cognitive science research par excellence. If several disciplines have to work together there must be a common language. This language is provided by AI. One of the major and most basic research strands in AI for the last 30 years has been the problem of knowledge representation. Although there is no consensus on how knowledge should be represented, there are many approaches which seem well-suited for our purposes. The concepts of goals, plans, or complex knowledge structures serve as useful metaphors for understanding emotion.


Book
01 Aug 1988
TL;DR: This perspective presents cognitive science as a synthesis of psychology's empirical analysis of human cognitive processing and the computational modeling of those processes, with the formalisms for knowledge representation and process mechanism provided by AI.
Abstract: From the Publisher: This readings volume collects the best papers in cognitive science from a unique perspective: the interface between artificial intelligence and cognitive psychology. This perspective presents cognitive science as a synthesis of psychology's empirical analysis of human cognitive processing and the computational modeling of those processes, with the formalisms for knowledge representation and process mechanism provided by AI. The editors have selected papers that are frequently referenced and representative of major research, relating theoretical ideas to empirical results. An introductory section collects classic papers laying out the foundational issues in cognitive science and the general assumptions of computational modeling and cognitive architecture.Major sections, introduced by the editors, address representation, categorization, learning, thinking, and perception. Accessible to a broad audience of cognitive scientists, researchers and students in AI, psychology, cognitive science, and related areas. An excellent principal or supplemental text, Readings in Cognitive Science offers a challenging perspective on one of the richest interdisciplinary collaborations in research today.