
Showing papers on "Abductive reasoning published in 1989"


Journal ArticleDOI
01 May 1989
TL;DR: This paper elaborates on the idea that reasoning should be viewed as theory formation, where logic tells us the consequences of our assumptions, and proposes an architecture that combines explanation and prediction into one coherent framework.
Abstract: Although there are many arguments that logic is an appropriate tool for artificial intelligence, there has been a perceived problem with the monotonicity of classical logic. This paper elaborates on the idea that reasoning should be viewed as theory formation where logic tells us the consequences of our assumptions. The two activities of predicting what is expected to be true and explaining observations are considered in a simple theory formation framework. Properties of each activity are discussed, along with a number of proposals as to what should be predicted or accepted as reasonable explanations. An architecture is proposed to combine explanation and prediction into one coherent framework. Algorithms used to implement the system as well as examples from a running implementation are given.

230 citations
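
As a rough illustration of the kind of framework this abstract describes, where an explanation is a set of assumable hypotheses that is consistent with the facts and entails the observation, here is a minimal propositional sketch in Python. The bird example, the atom names, and the helper functions are invented for this note and are not taken from the paper.

from itertools import combinations, product

# Illustrative toy theory (not from the paper).  A model is a dict assigning
# True/False to each atom; formulas are Python predicates over a model.
atoms = ["bird", "flies", "broken_wing", "birds_fly"]

facts = [
    lambda m: m["bird"],                                         # bird is observed
    lambda m: not (m["birds_fly"] and m["bird"]) or m["flies"],  # birds_fly & bird -> flies
    lambda m: not m["broken_wing"] or not m["flies"],            # broken_wing -> not flies
]

# Hypotheses the reasoner is allowed to assume.
assumables = ["birds_fly", "broken_wing"]

def models(constraints):
    for values in product([False, True], repeat=len(atoms)):
        m = dict(zip(atoms, values))
        if all(c(m) for c in constraints):
            yield m

def consistent(constraints):
    return next(models(constraints), None) is not None

def entails(constraints, goal):
    return all(goal(m) for m in models(constraints))

def explanations(observation):
    """Subset-minimal sets of assumables consistent with the facts that entail the observation."""
    found = []
    for k in range(len(assumables) + 1):
        for combo in combinations(assumables, k):
            theory = facts + [lambda m, h=h: m[h] for h in combo]
            if consistent(theory) and entails(theory, observation):
                if not any(set(prev) <= set(combo) for prev in found):
                    found.append(combo)
    return found

print(explanations(lambda m: m["flies"]))      # [('birds_fly',)]
print(explanations(lambda m: not m["flies"]))  # [('broken_wing',)]

Prediction, in this toy reading, amounts to asking what follows from the facts plus an acceptable set of assumptions; explanation asks which assumptions would account for an observation.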


Proceedings Article
20 Aug 1989
TL;DR: A new definition of abduction is considered that makes it depend on an underlying formal model of belief, and it is proved that something is believed in the implicit sense iff repeatedly applying a limited abduction operator eventually yields something that is believed in the explicit sense.
Abstract: In this paper, we consider a new definition of abduction that makes it depend on an underlying formal model of belief. In particular, different models of belief will give rise to different forms of abductive reasoning. Based on this definition, we then prove three main theorems: first, that when belief is closed under logical implication, the corresponding form of abduction is precisely what is performed by the ATMS as characterized by Reiter and de Kleer; second, that with the more limited "explicit" belief defined by Levesque, the required abduction is computationally tractable in certain cases where the ATMS is not; and finally, that something is believed in the implicit sense iff repeatedly applying a limited abduction operator eventually yields something that is believed in the explicit sense. This last result relates deduction and abduction as well as limited and unlimited reasoning all within the context of a logic of belief.

186 citations
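
Purely as a toy rendering of the ATMS-style characterization mentioned in the abstract (minimal consistent sets of assumptions that, together with a Horn background theory, yield the query), here is a short Python sketch. The clause base, the nogood set, and all names are invented for the example, and nothing here models Levesque's notion of explicit belief.

from itertools import combinations

# Invented Horn background theory: (head, body) means body1 & ... & bodyn -> head.
clauses = [
    ("wet_grass", ["rain"]),
    ("wet_grass", ["sprinkler"]),
    ("slippery",  ["wet_grass"]),
]
assumptions = ["rain", "sprinkler"]
nogoods = [{"rain", "sprinkler"}]   # assumption sets taken to be inconsistent, for illustration

def closure(atoms):
    """Forward-chain the Horn clauses from a set of atoms."""
    derived = set(atoms)
    changed = True
    while changed:
        changed = False
        for head, body in clauses:
            if head not in derived and all(b in derived for b in body):
                derived.add(head)
                changed = True
    return derived

def labels(query):
    """Minimal consistent assumption environments that derive the query."""
    envs = []
    for k in range(len(assumptions) + 1):
        for env in map(set, combinations(assumptions, k)):
            if any(bad <= env for bad in nogoods):
                continue
            if query in closure(env) and not any(e <= env for e in envs):
                envs.append(env)
    return envs

print(labels("slippery"))   # [{'rain'}, {'sprinkler'}]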


Proceedings Article
20 Aug 1989
TL;DR: A formal theory of causal diagnostic reasoning is proposed, dealing with different forms of incompleteness both in the general causal knowledge (missing or abstracted knowledge) and in the data describing a specific case under examination.
Abstract: One of the problems of the recent approaches to problem solving based on deep knowledge is the lack of a formal treatment of incomplete knowledge. However, dealing with incomplete models is fundamental to many real-world domains. In this paper we propose a formal theory of causal diagnostic reasoning, dealing with different forms of incompleteness both in the general causal knowledge (missing or abstracted knowledge) and in the data describing a specific case under examination. Different forms of nonmonotonic reasoning (hypothetical and circumscriptive reasoning) are used in order to draw and confirm conclusions from incomplete knowledge. Multiple fault solutions are treated in a natural way and parsimony criteria are used to rank alternative solutions.

140 citations
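
The following Python fragment is only a loose illustration of one aspect discussed in this abstract, namely diagnosing with incomplete case data: findings may be present, absent, or simply unreported, and multiple-fault solutions are ranked by parsimony. The disorders, findings, and the admissibility test are invented here and do not reproduce the paper's formal theory.

from itertools import combinations

# Invented causal associations: the findings each disorder may produce.
effects = {
    "flu":        {"fever", "cough"},
    "allergy":    {"cough", "rash"},
    "meningitis": {"fever", "stiff_neck"},
}

# Incomplete case data: findings are present, absent, or simply not reported.
present = {"fever", "cough"}
absent  = {"stiff_neck"}        # everything else is unknown

def admissible(hypothesis):
    """Cover every present finding and predict no absent one; unreported findings constrain nothing."""
    predicted = set().union(*(effects[d] for d in hypothesis))
    return present <= predicted and not (predicted & absent)

def ranked_solutions():
    """All admissible disorder sets, most parsimonious (smallest) first."""
    solutions = [set(c) for k in range(1, len(effects) + 1)
                 for c in combinations(effects, k) if admissible(c)]
    return sorted(solutions, key=len)

print(ranked_solutions())   # e.g. [{'flu'}, {'flu', 'allergy'}] -- 'meningitis' is excluded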


Journal ArticleDOI
TL;DR: This article shows how a hypothetical reasoning system, built on the logic-based architecture for both prediction and explanation proposed and implemented in a companion paper, can be used to solve recognition, diagnostic, and prediction problems.
Abstract: This paper investigates two different activities that involve making assumptions: predicting what one expects to be true and explaining observations. In a companion paper, a logic-based architecture for both prediction and explanation is proposed and an implementation is outlined. In this paper, we show how such a hypothetical reasoning system can be used to solve recognition, diagnostic, and prediction problems. Part of this is the assumption that the default reasoner must be "programmed" to get the right answer; it is not just a matter of "stating what is true" and hoping the system will magically find the right answer. A number of distinctions have been found in practice to be important: between predicting whether something is expected to be true versus explaining why it is true; and between conventional defaults (assumptions as a communication convention), normality defaults (assumed for expediency), and conjectures (assumed only if there is evidence). The effects of these distinctions on recognition and prediction problems are presented. Examples from a running system are given.

112 citations



Book ChapterDOI
13 Oct 1989
TL;DR: An approach is presented that provides a common solution to problems recently addressed in two different research areas, nonmonotonic reasoning and theory revision, by defining a framework for default reasoning based on the notion of preferred maximal consistent subsets of the premises.
Abstract: We present an approach which provides a common solution to problems recently addressed in two different research areas: nonmonotonic reasoning and theory revision. We define a framework for default reasoning based on the notion of preferred maximal consistent subsets of the premises. Contrary to other formalizations of default reasoning, this framework is also able to handle unanticipated inconsistencies. This makes it possible to handle revisions of default theories by simply adding the new information. Contractions require the introduction of constraints, i.e. formulas used to determine the preferred maximal consistent subsets but not used to determine the derivable formulas. Both revisions and contractions are fully incremental, i.e. old information is never forgotten and may be recovered after additional changes. Moreover, the order of changes is, in a certain sense, unimportant.

62 citations
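
To make the notion of a preferred maximal consistent subset concrete, here is a small Python sketch under strong simplifying assumptions: the premises are totally ordered by preference and a single preferred subset is built greedily. The penguin example and all names are illustrative; the paper's framework (with constraints and the handling of revisions and contractions) is richer than this.

from itertools import product

# Invented premises, most preferred first; each is a predicate over a truth assignment.
atoms = ["penguin", "bird", "flies"]
premises = [
    ("penguin_observed",   lambda m: m["penguin"]),
    ("penguins_dont_fly",  lambda m: not m["penguin"] or not m["flies"]),
    ("penguins_are_birds", lambda m: not m["penguin"] or m["bird"]),
    ("birds_fly",          lambda m: not m["bird"] or m["flies"]),   # least preferred default
]

def consistent(formulas):
    return any(all(f(m) for _, f in formulas)
               for m in (dict(zip(atoms, values))
                         for values in product([False, True], repeat=len(atoms))))

def preferred_subset(ordered_premises):
    """Keep each premise, in preference order, if it stays consistent with what was kept so far."""
    kept = []
    for name, formula in ordered_premises:
        if consistent(kept + [(name, formula)]):
            kept.append((name, formula))
    return [name for name, _ in kept]

print(preferred_subset(premises))
# ['penguin_observed', 'penguins_dont_fly', 'penguins_are_birds'] -- 'birds_fly' is dropped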


Book ChapterDOI
09 May 1989
TL;DR: A Prolog-like inference system is described that computes minimum-cost explanations for the abductive reasoning methods introduced here, with assumption costs for backward-chained literals determined by the costs of literals in the logical form and by functions attached to the antecedents of the implications.
Abstract: By determining those added assumptions sufficient to make the logical form of a natural-language sentence provable, abductive inference can be used in the interpretation of sentences to determine the information to be added to the listener's knowledge, i.e., what the listener should learn from the sentence. Some new forms of abduction are more appropriate to the task of interpreting natural language than those used in the traditional diagnostic and design synthesis applications of abduction. In one new form, least specific abduction, only literals in the logical form of the sentence can be assumed. The assignment of numeric costs to axioms and assumable literals permits specification of preferences on different abductive explanations. Least specific abduction is sometimes too restrictive. Better explanations can sometimes be found if literals obtained by backward chaining can also be assumed. Assumption costs for such literals are determined by the assumption costs of literals in the logical form and functions attached to the antecedents of the implications. There is a new Prolog-like inference system that computes minimum-cost explanations for these abductive reasoning methods.

38 citations
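
As an informal sketch of the costing idea only (not the chapter's Prolog-like inference system), the Python below assigns each goal the cheapest of three options: assume it outright, use a known fact, or backward-chain through a rule whose antecedents may themselves be assumed at weighted cost. The rules, weights, and function names are invented for the example.

# Invented Horn knowledge base.  A rule (head, [(body_atom, weight), ...]) reads:
# to explain head priced at cost c, each body atom may instead be assumed at weight * c.
rules = [
    ("wet_grass", [("rained", 0.6)]),
    ("wet_grass", [("sprinkler_on", 0.9)]),
    ("car_wont_start", [("battery_dead", 0.7)]),
]
facts = {"sprinkler_on"}

def min_cost(goal, assume_cost, depth=0):
    """Minimum cost of explaining goal, where assuming goal outright costs assume_cost."""
    options = [assume_cost]                  # least specific abduction stops at this option
    if goal in facts:
        options.append(0.0)                  # already known: nothing to assume
    for head, body in rules:                 # or backward-chain and assume deeper literals
        if head == goal and depth < 10:
            options.append(sum(min_cost(b, w * assume_cost, depth + 1) for b, w in body))
    return min(options)

print(min_cost("wet_grass", 10.0))        # 0.0 -- explained via the known fact sprinkler_on
print(min_cost("car_wont_start", 10.0))   # 7.0 -- cheapest to assume battery_dead at 0.7 * 10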



Journal ArticleDOI
01 Apr 1989-Synthese
TL;DR: The author construes reasoning sociologically, as a process of linguistic interaction, and shows how both reasoning in the psychologistic sense (a procedure for revising one's beliefs) and logic are related to that process.
Abstract: Gilbert Harman, in ‘Logic and Reasoning’ (Synthese 60 (1984), 107–127) describes an “unsuccessful attempt ... to develop a theory which would give logic a special role in reasoning”. Here reasoning is psychological, “a procedure for revising one's beliefs”. In the present paper, I construe reasoning sociologically, as a process of linguistic interaction; and show how both reasoning in the psychologistic sense and logic are related to that process.

33 citations


Book ChapterDOI
01 Oct 1989
TL;DR: This work specifies a form of analogical reasoning in terms of a system of hypothetical reasoning based on mathematical logic, with the primary motivation of better understanding analogical reasoning and its relationship to existing logical models of nonmonotonic reasoning.
Abstract: We specify a form of analogical reasoning in terms of a system of hypothetical reasoning based on mathematical logic. Our primary motivation is a deeper understanding of analogical reasoning as well as its relationship to existing logical models of nonmonotonic reasoning.

29 citations


Journal ArticleDOI
TL;DR: This paper focuses on the use of abductive reasoning in diagnostic systems in which there may be more than one underlying cause for the observed symptoms, and reviews and compares several different approaches, including Binary Choice Bayesian, Sequential Bayesian, Causal Model Based Abduction, Parsimonious Set Covering, and First Order Logic.
Abstract: Abductive reasoning involves generating an explanation for a given set of observations about the world. Abduction provides a good reasoning framework for many AI problems, including diagnosis, plan recognition and learning. This paper focuses on the use of abductive reasoning in diagnostic systems in which there may be more than one underlying cause for the observed symptoms. In exploring this topic, we will review and compare several different approaches, including Binary Choice Bayesian, Sequential Bayesian, Causal Model Based Abduction, Parsimonious Set Covering, and the use of First Order Logic. Throughout the paper we will use as an example a simple diagnostic problem involving automotive troubleshooting.
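
Since the abstract mentions parsimonious set covering and an automotive troubleshooting example, here is a minimal Python sketch of minimum-cardinality covers. The fault/symptom table is invented, and the sketch ignores the Bayesian and logic-based approaches the paper also reviews.

from itertools import combinations

# Invented causal associations: each candidate fault covers the symptoms it can cause.
causes = {
    "dead_battery":   {"no_crank", "dim_lights"},
    "bad_starter":    {"no_crank"},
    "empty_tank":     {"cranks_no_start"},
    "clogged_filter": {"cranks_no_start", "stalling"},
}

def minimum_covers(observed):
    """All minimum-cardinality sets of faults whose effects cover the observed symptoms."""
    best = []
    for k in range(1, len(causes) + 1):
        for combo in combinations(causes, k):
            if observed <= set().union(*(causes[c] for c in combo)):
                best.append(set(combo))
        if best:                  # stop at the smallest cardinality that works
            return best
    return best

print(minimum_covers({"no_crank", "dim_lights"}))   # [{'dead_battery'}]
print(minimum_covers({"no_crank", "stalling"}))     # e.g. [{'dead_battery', 'clogged_filter'}, ...]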

Journal ArticleDOI
TL;DR: In this article, the connection between self-deception and practical reasoning is explored, examining both how self-deception influences practical reasoning and how this influence affects the rationality of the resulting actions.
Abstract: Self-deception is commonly viewed as a condition that bespeaks irrationality. This paper challenges that view. I focus specifically on the connection between self-deception and practical reasoning, an area which, despite its importance for understanding self-deception, has not been systematically explored. I examine both how self-deception influences practical reasoning and how this influence affects the rationality of actions produced by practical reasoning. But what is self-deception? There are many accounts, yet there is probably none sufficiently well known and compelling to serve as an adequate background given my purposes. Hence, I shall briefly present my own account of self-deception and, on that basis, explore its connections with practical reasoning and rational action.

Book ChapterDOI
01 May 1989
TL;DR: A method of nonmonotonic reasoning is proposed in which the notion of inference from specific bodies of evidence plays a fundamental role; the formalization is based on autoepistemic logic but introduces additional structure, a hierarchy of evidential spaces.
Abstract: Nonmonotonic logics are meant to be a formalization of nonmonotonic reasoning. However, for the most part they fail to embody two of the most important aspects of such reasoning: the explicit computational nature of nonmonotonic inference, and the assignment of preferences among competing inferences. We propose a method of nonmonotonic reasoning in which the notion of inference from specific bodies of evidence plays a fundamental role. The formalization is based on autoepistemic logic, but introduces additional structure, a hierarchy of evidential spaces. The method offers a natural formalization of many different applications of nonmonotonic reasoning, including reasoning about action, speech acts, belief revision, and various situations involving competing defaults.
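
The fragment below is only a toy illustration of preferring the inference licensed by the most specific body of evidence; it does not attempt to model the paper's autoepistemic formalization or its hierarchy of evidential spaces, and the bird/penguin defaults are invented.

# Invented defaults attached to bodies of evidence; more specific evidence overrides less specific.
defaults = [
    ({"bird"},            "flies"),
    ({"bird", "penguin"}, "does_not_fly"),
]

def conclude(evidence):
    """Apply the default whose required evidence is the most specific one satisfied."""
    applicable = [(required, conclusion) for required, conclusion in defaults if required <= evidence]
    if not applicable:
        return None
    return max(applicable, key=lambda rc: len(rc[0]))[1]

print(conclude({"bird"}))             # 'flies'
print(conclude({"bird", "penguin"}))  # 'does_not_fly' -- the more specific evidence wins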

Book ChapterDOI
01 Dec 1989
TL;DR: This chapter explores the incomplete-theory problem, in which a learning system has an explicit domain theory that cannot generate an explanation for every example, and presents the implementation of a prototype system that extends its domain theory in this way, by extracting rules from plausible explanations of examples.
Abstract: This chapter explores the incomplete-theory problem in which a learning system has an explicit domain theory that cannot generate an explanation for every example. The general method is to use the existing domain theory to generate a plausible explanation of the example and to extract from it one or more rules that may then be added to the domain theory. This method is an application of abductive reasoning in that it is attempting to account for a known conclusion (the goal concept) by proposing various hypotheses, which, together with the existing domain theory, may account for it. If a complete explanation can be created for an example, the domain theory is adequate and need not be extended. If the example cannot be completely explained, there are usually many partial explanations that can be generated to explain it. The chapter presents the implementation of a prototype system that is able to extend its domain theory this way. The goal of this method is to increase the explanatory power of the domain theory rather than to acquire a specific way of recognizing instances of the goal concept.
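
A minimal Python sketch of the general method described above: backward-chain with the existing domain theory, treat unprovable leaves as abductive hypotheses, and conjecture new rules that connect them to observed features. The cup domain, the feature chosen as a new antecedent, and the absence of backtracking are all simplifications invented for this note, not the chapter's prototype.

# Invented domain theory as Horn rules; the example supplies observed features.
rules = [
    ("cup", ["liftable", "holds_liquid"]),
    ("liftable", ["light", "has_handle"]),
]
example = {"light", "has_handle", "upward_concave"}    # known features of the training object

def prove(goal, facts, missing):
    """Backward-chain (first matching rule only, no backtracking);
    collect unprovable leaves in 'missing' instead of failing."""
    if goal in facts:
        return True
    for head, body in rules:
        if head == goal:
            return all(prove(b, facts, missing) for b in body)
    missing.add(goal)       # abductive step: hypothesize the leaf
    return True

missing = set()
prove("cup", example, missing)
print(missing)              # {'holds_liquid'}

# Extend the theory: conjecture a rule linking an observed feature to each hypothesized leaf.
# Choosing which feature to blame is the hard part; here it is simply hard-coded.
for leaf in missing:
    rules.append((leaf, ["upward_concave"]))
print(rules[-1])            # ('holds_liquid', ['upward_concave'])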

Proceedings ArticleDOI
14 Nov 1989
TL;DR: Results of efforts to extract knowledge from an expert whose job is to detect the errors made by practicing technologists are examined, and these errors are discussed in terms of possible cognitive biases.
Abstract: Results of efforts to extract knowledge from an expert whose job is to detect the errors made by practicing technologists are examined. These errors are discussed in terms of possible cognitive biases. The example examined involves students, technologists, and experts performing antibody identification tasks in order to construct a critiquing and intelligent tutoring system.

Journal ArticleDOI
TL;DR: This paper analyzes the traditional concepts of logic and reasoning from the perspective of radical behaviorism and in the terms of Skinner’s treatment of verbal behavior.
Abstract: This paper analyzes the traditional concepts of logic and reasoning from the perspective of radical behaviorism and in the terms of Skinner’s treatment of verbal behavior. The topics covered in this analysis include the proposition, premises and conclusions, logicality and rules, and deductive and inductive reasoning.


Book ChapterDOI
Paul O'Rorke, Steven Morris, David Schulenburg
01 Dec 1989
TL;DR: This chapter describes a general approach to automating theory revision based upon computational methods for theory formation by abduction, resting on the idea that, when an anomaly is encountered, the best course is often to suppress the parts of the original theory thrown into question by the contradiction and to derive an explanation of the anomalous observation from relatively solid, basic principles.
Abstract: Abduction is the process of constructing explanations. This chapter suggests that automated abduction is a key to advancing beyond the "routine theory revision" methods developed in early AI research towards automated reasoning systems capable of "world model revision": dramatic changes in systems of beliefs such as occur in children's cognitive development and in scientific revolutions. The chapter describes a general approach to automating theory revision based upon computational methods for theory formation by abduction. The approach is based on the idea that, when an anomaly is encountered, the best course is often simply to suppress parts of the original theory thrown into question by the contradiction and to derive an explanation of the anomalous observation based on relatively solid, basic principles. This process of looking for explanations of unexpected new phenomena can lead by abductive inference to new hypotheses that can form crucial parts of a revised theory. As an illustration, the chapter shows how some of Lavoisier's key insights during the Chemical Revolution can be viewed as examples of theory formation by abduction.
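
The chapter's combustion illustration suggests the crude Python caricature below: detect a prediction that conflicts with an observation, suppress the law responsible, and abductively adopt a candidate law that does explain the observation. The two "laws", the conflict test, and the selection step are drastic simplifications invented here, not the chapter's actual mechanism.

# Two invented "laws"; predictions are read off the law(s) currently accepted.
laws = {
    "phlogiston_release": ("burns", "loses_weight"),
    "oxygen_combination": ("burns", "gains_weight"),
}
active = {"phlogiston_release"}             # the current theory

def predict(fact):
    return {laws[name][1] for name in active if laws[name][0] == fact}

observation = {"burns", "gains_weight"}     # the anomalous experiment

# 1. Detect the anomaly: a prediction that is not borne out by the observation.
anomalous = {p for p in predict("burns") if p not in observation}
print(anomalous)                            # {'loses_weight'}

# 2. Suppress the laws responsible for the contradicted prediction.
active -= {name for name in active if laws[name][1] in anomalous}

# 3. Abductively adopt a candidate law that does explain the observation.
for name, (cause, effect) in laws.items():
    if cause in observation and effect in observation:
        active.add(name)
print(active)                               # {'oxygen_combination'}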

Book ChapterDOI
Gerhard Jäger1
TL;DR: This chapter discusses the forms of non-monotonic reasoning that are induced by default operators; the closed-world assumption (CWA) is also discussed.
Abstract: Non-monotonic reasoning is the modern name for a variety of scientific activities that are characterized by the idea that the traditional deductive approach to inference systems is too narrow for many applications and that new formalisms are required that make arrangements for default reasoning, common sense reasoning, and autoepistemic reasoning. The recent interest in this field has been driven by questions in artificial intelligence (AI) and computer science, which have led to a series of ad hoc methods and isolated case studies. This chapter reviews connections between sets of formulae and individual formulae. The chapter discusses the forms of non-monotonic reasoning that are induced by default operators. In addition, the closed-world assumption (CWA) is discussed. Advantages of the CWA are its clear methodological conception and its efficiency for elementary databases. Disadvantages are its restricted range of applicability and its complicated proof procedure.
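
A minimal Python illustration of the closed-world assumption mentioned at the end of the summary: an atom that cannot be derived from the facts is taken to be false, and adding a fact later retracts the negative conclusion, which is exactly the non-monotonic behaviour at issue. The flight relation is an invented stand-in for an elementary database.

# Invented elementary "database" of facts.
flights = {("vienna", "zurich"), ("zurich", "bern")}

def holds(frm, to):
    return (frm, to) in flights

def cwa_negation(frm, to):
    """Under the closed-world assumption, failure to derive the atom licenses its negation."""
    return not holds(frm, to)

print(cwa_negation("vienna", "bern"))   # True: taken to be false, not merely unknown

# Non-monotonicity: adding a fact retracts the earlier negative conclusion.
flights.add(("vienna", "bern"))
print(cwa_negation("vienna", "bern"))   # False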

Proceedings ArticleDOI
V. Lifschitz1
05 Jun 1989
TL;DR: The author defines nonmonotonic consequence relations and discusses their importance.
Abstract: Summary form only given. Research on applications of logic to artificial intelligence has led to the invention of a few useful consequence relations that are not monotonic. They are needed for default reasoning formalization, reasoning about action, introspective reasoning, and negation by failure. The author defines nonmonotonic consequence relations and discusses their importance.

Journal ArticleDOI
TL;DR: Parsimonious covering theory, first formulated to model the abductive inference underlying medical diagnostic problem solving, is examined here as a method for automating natural language processing for medical expert system interfaces.

31 Jan 1989
TL;DR: A layered-abduction model of perception is presented which unifies bottom-up and top-down processing in a single logical and information-processing framework, treating perception as a kind of compiled cognition.
Abstract: A layered-abduction model of perception is presented which unifies bottom-up and top-down processing in a single logical and information-processing framework. The process of interpreting the input from each sense is broken down into discrete layers of interpretation, where at each layer a best explanation hypothesis is formed of the data presented by the layer or layers below, with the help of information available laterally and from above. The formation of this hypothesis is treated as a problem of abductive inference, similar to diagnosis and theory formation. Thus this model brings a knowledge-based problem-solving approach to the analysis of perception, treating perception as a kind of compiled cognition. The bottom-up passing of information from layer to layer defines channels of information flow, which separate and converge in a specific way for any specific sense modality. Multi-modal perception occurs where channels converge from more than one sense. This model has not yet been implemented, though it is based on systems which have been successful in medical and mechanical diagnosis and medical test interpretation.
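
Here is a deliberately skeletal Python sketch of the bottom-up part of a layered-abduction pipeline: each layer scores candidate hypotheses that would explain the output of the layer below and passes the best one upward. The layers, scores, and speech-flavoured labels are invented, and the top-down and lateral information flow described in the abstract is omitted.

# Invented two-layer pipeline with made-up scores; each layer abduces a best
# explanation of the data handed up from below.
def best(candidates):
    return max(candidates, key=candidates.get)

def feature_layer(signal):
    """Score candidate low-level features that would explain the raw signal."""
    return {"voiced_stop": 0.7, "fricative": 0.2} if signal == "burst" else {"vowel": 0.9}

def word_layer(feature):
    """Score candidate words that would explain the feature hypothesis below."""
    return {"bat": 0.6, "pat": 0.3} if feature == "voiced_stop" else {"at": 0.5}

def interpret(signal):
    feature = best(feature_layer(signal))   # best explanation at the lower layer
    word = best(word_layer(feature))        # best explanation at the layer above
    return feature, word

print(interpret("burst"))   # ('voiced_stop', 'bat')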

Proceedings Article
20 Aug 1989
TL;DR: This paper considers a class of associative inference to which marker passing is often applied, variously called abductive inference, schema selection, or pattern completion, and, through an analysis of marker semantics, arrives at a proposal for more strictly regulated marker propagation.
Abstract: Potentially, the advantages of marker-passing over local connectionist techniques for associative inference are (1) the ability to differentiate variable bindings, and (2) reduction in the search space and/or number of processing elements. However, the latter advantage has mostly been realized at the expense of accuracy and predictability. In this paper we consider a class of associative inference to which marker passing is often applied, variously called abductive inference, schema selection, or pattern completion. Analysis of marker semantics in a standard semantic net representation leads to a proposal for more strictly regulated marker propagation. An implementation strategy employing an augmented relaxation network is outlined.
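
To show the kind of associative inference the paper analyses, here is a naive Python sketch of marker passing for schema selection: distinct markers spread from each observed concept, and the node where the most markers converge is taken as the explaining schema. The network is invented, and this is the loosely regulated style of propagation the paper argues should be more strictly controlled.

from collections import defaultdict, deque

# Invented semantic network: edges link observed concepts to the schemas they evoke.
edges = {
    "menu":       ["restaurant"],
    "waiter":     ["restaurant"],
    "ticket":     ["flight", "concert"],
    "boarding":   ["flight"],
    "restaurant": [],
    "flight":     [],
    "concert":    [],
}

def pass_markers(sources, max_hops=2):
    """Propagate a distinct marker from each source node; record which markers reach each node."""
    reached = defaultdict(set)
    for source in sources:
        frontier = deque([(source, 0)])
        seen = set()
        while frontier:
            node, hops = frontier.popleft()
            if node in seen or hops > max_hops:
                continue
            seen.add(node)
            reached[node].add(source)
            frontier.extend((nbr, hops + 1) for nbr in edges.get(node, []))
    return reached

def select_schema(observations):
    """Abductive schema selection: pick the non-observed node where the most markers converge."""
    reached = pass_markers(observations)
    candidates = {n: srcs for n, srcs in reached.items() if n not in observations}
    return max(candidates, key=lambda n: len(candidates[n]))

print(select_schema({"menu", "waiter"}))       # 'restaurant'
print(select_schema({"ticket", "boarding"}))   # 'flight'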



Book ChapterDOI
01 Jan 1989
TL;DR: The manager who makes decisions can often form new ideas through the observation of meetings where the views of the participants collide.
Abstract: We usually collect opinions from several people in order to make decisions. The manager who makes decisions can often form new ideas through the observation of meetings where the views of the participants collide.