
Showing papers on "Abductive reasoning published in 1992"




Journal ArticleDOI
TL;DR: It is shown that the consistency-based approach can emulate abductive reasoning by adding closure axioms to a causal theory, and that abductive techniques can be used in place of the consistency-based method in the domain of logic-based diagnosis.

128 citations


Journal ArticleDOI
TL;DR: In this article, the full structure of abductive argumentation, as viewed by the mature Peirce, is clarified, and every inferential step in the process can be seen to dissolve into familiar forms of deductive and inductive reasoning.
Abstract: Essential to Peirce's distinction among three kinds of reasoning, deduction, induction and abduction, is the claim that each is correlated to a unique species of validity irreducible to that of the others. In particular, abductive validity cannot be analyzed in either deductive or inductive terms, a consequence of considerable importance for the logical and epistemological scrutiny of scientific methods. But when the full structure of abductive argumentation — as viewed by the mature Peirce — is clarified, every inferential step in the process can be seen to dissolve into familiar forms of deductive and inductive reasoning. Specifically, the final stage is a special type of practical inference which, if correct, is deductively valid, while the creative phase, surprisingly, is not inferential at all. In neither is abduction a type of inference to the best explanation. The result is a major reassessment of the relevance of Peirce's views to contemporary methodological studies.

92 citations


Journal ArticleDOI
TL;DR: This work shows how the model of plan ascription developed by Konolige and Pollack can be recast in the framework of weighted abduction, and discusses the potential advantages and disadvantages of this encoding.
Abstract: We describe an approach to abductive reasoning called weighted abduction, which uses inference weights to compare competing explanations for observed behavior. We present an algorithm for computing a weighted-abductive explanation, and sketch a model-theoretic semantics for weighted abduction. We argue that this approach is well suited to problems of reasoning about mental state. In particular, we show how the model of plan ascription developed by Konolige and Pollack can be recast in the framework of weighted abduction, and we discuss the potential advantages and disadvantages of this encoding.
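The cost-comparison idea behind weighted abduction can be sketched in a few lines. This is a hedged illustration only: the rule encoding, weights, and costs below are invented for the example, not taken from the paper's actual system.

```python
# Rules: observable <- list of alternative antecedent lists; each antecedent
# carries a weight that scales the cost of assuming it (invented numbers).
RULES = {
    "wet_grass": [[("rained", 0.6)], [("sprinkler_on", 0.9)]],
}

def explain(goal, cost=10.0, rules=RULES):
    """Cheapest account of `goal`: either assume it outright at `cost`, or
    backward-chain through a rule, abducing its weighted antecedents."""
    best = ({goal}, cost)                      # option 1: just assume the goal
    for antecedents in rules.get(goal, []):
        assumed, total = set(), 0.0
        for lit, w in antecedents:             # option 2: backward-chain
            sub, sub_cost = explain(lit, cost * w, rules)
            assumed |= sub
            total += sub_cost
        if total < best[1]:
            best = (assumed, total)
    return best

print(explain("wet_grass"))   # the cheaper competing explanation wins
```

The point of the sketch is the comparison mechanism the abstract describes: competing explanations are ranked by accumulated weighted cost, and the cheapest one is preferred.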

83 citations


Proceedings ArticleDOI
31 Mar 1992
TL;DR: This work describes how the method of abductive inference is inherently robust, in that an interpretation is always possible, so that in the absence of the required world knowledge, performance degrades gracefully.
Abstract: It is often assumed that when natural language processing meets the real world, the ideal of aiming for complete and correct interpretations has to be abandoned. However, our experience with TACITUS, especially in the MUC-3 evaluation, has shown that principled techniques for syntactic and pragmatic analysis can be bolstered with methods for achieving robustness. We describe three techniques for making syntactic analysis more robust: an agenda-based scheduling parser, a recovery technique for failed parses, and a new technique called terminal substring parsing. For pragmatics processing, we describe how the method of abductive inference is inherently robust, in that an interpretation is always possible, so that in the absence of the required world knowledge, performance degrades gracefully. Each of these techniques has been evaluated, and the results of the evaluations are presented.

44 citations


Book ChapterDOI
01 Jan 1992
TL;DR: In this paper, a unified epistemological model of medical reasoning is proposed, which can be described in terms of abduction (selective), deduction, and induction (Sect. 2).
Abstract: The aim of this paper is to emphasize the significance of abduction in order to illustrate the problem solving process and to propose a unified epistemological model of medical reasoning. The paper introduces an epistemological model (Select and Test Model) of medical reasoning (diagnosis, therapy, monitoring) which can be described in terms of abduction (selective), deduction, and induction (Sect. 2). This model first describes the different roles played by these basic inference types in developing the various kinds of medical reasoning (Sect. 3), then is connected with cognitive models of medical reasoning (Sect. 4), and finally provides an abstract representation—an epistemological architecture (STMODEL)—of the control knowledge embedded in a medical Knowledge-Based System (KBS) (Sects. 3 and 5). Moreover, four meanings of the word abduction (creative, selective, automatic, to the best explanation) are discussed in order to clarify their significance in epistemology, psychological experimental research, and AI. In my opinion the controversial status of abduction is related to a confusion between the epistemological and cognitive levels, and to a lack of explanation as to why people sometimes deviate from normative epistemological principles.

42 citations


01 Jan 1992
TL;DR: It is shown how plausible default assumptions can be used to obtain more complete information (by filling in details), thereby providing a computational advantage in commonsense reasoning systems; this is the first rigorous demonstration of that claim.
Abstract: Commonsense knowledge is often incomplete, and reasoning with incomplete information is generally computationally intractable. For example, the statement "John may come by train or by plane" can force a system to reason by cases. We show how plausible default assumptions can be used to obtain more complete information (by filling in details), thereby providing a computational advantage in commonsense reasoning systems. Although it has been previously hypothesized that default reasoning might provide such an advantage, this is the first rigorous demonstration of the claim. A central aspect of this work is the analysis of the computational complexity of default reasoning. We characterize the various sources of intractability, and show how tractable forms of default reasoning can be obtained. Model-preference default theories are defined to provide a general default reasoning mechanism. We give a detailed analysis of the tradeoff between expressiveness and tractability as we place various syntactic restrictions on such theories. A similar analysis is given for Reiter's default logic, with an emphasis on efficiently computable default theories that include strict Horn theories. Our results indicate that the various tractable default theories are closely related. Our analysis of default logic also reveals the inherent intractability of abductive reasoning. Next, we consider the complexity of defeasible inheritance--a more specialized default reasoning formalism used in the representation of hierarchically organized information in semantic networks. Our central result is that inheritance reasoning based on Touretzky's inferential distance is NP-hard. Again, we identify the source of the intractability and delineate tractable subsystems. We conclude with a discussion of some of the computational limitations of the use of defaults in commonsense reasoning.
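The computational advantage claimed above can be illustrated with a toy version of the abstract's own train-or-plane example. The encoding below is invented for illustration; the thesis's model-preference default theories are far more general.

```python
# Disjunctive knowledge: John comes by train OR by plane (two cases).
CASES = [{"train": True, "plane": False}, {"train": False, "plane": True}]

def entails_by_cases(query):
    """Classical cautious entailment: the query must hold in every case.
    With n independent disjunctions this means checking 2**n cases."""
    return all(case.get(query, False) for case in CASES)

def entails_with_default(query, default_case=CASES[0]):
    """Default reasoning: assume the 'normal' case (say, train) and check
    a single completed model -- the computational shortcut being claimed."""
    return default_case.get(query, False)

print(entails_by_cases("train"), entails_with_default("train"))
```

The default fills in the missing detail, replacing exponential case analysis with a lookup in one completed model, at the price of conclusions that are merely plausible rather than guaranteed.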

38 citations


01 Jan 1992
TL;DR: This dissertation studies the computational complexity of answering queries in logical databases containing indefinite information arising from two sources: facts stated in terms of defined relations, and incomplete information about linearly ordered domains.
Abstract: This dissertation studies the computational complexity of answering queries in logical databases containing indefinite information arising from two sources: facts stated in terms of defined relations, and incomplete information about linearly ordered domains. First, we consider databases consisting of (1) a DATALOG program and (2) a description of the world in terms of the predicates defined by the program as well as the basic predicates. The query processing problem in such databases is related to issues in database theory, including view updates and DATALOG optimization, and also to the Artificial Intelligence problems of reasoning in circumscribed theories and skeptical abductive reasoning. If the program is non-recursive, the meaning of the database can be represented by Clark's Predicate Completion, and standard first order theorem proving technology may be used to evaluate queries. However, with recursive definitions such databases are intrinsically second order, and query processing is not even semi-decidable. Nevertheless, the basic queries, which do not contain defined predicates, are decidable. We show that under certain conditions querying this second order form of indefinite information is no more complex than querying indefinite information expressible in first order logic. We also consider the influence of negation and inequality on complexity. Next, we study databases containing basic atomic facts and facts asserting order relations between points in a linearly ordered domain. Incomplete information about a linearly ordered domain means that the data provide only a partial order, and query answering requires reasoning about all the compatible linear orders. We show that the complexity of this inference problem is in general intractable, but identify a variety of natural conditions under which queries may be answered in polynomial time.
Finally, we consider the effect of combining the two sorts of indefinite information: we study databases containing facts defined using recursive rules with linear order constraints. Applications of such rules include reasoning about concurrent, repetitive actions. In general, even the basic queries are undecidable in this context, but by restricting definitions to a reasonable class, we are able to recapture decidability. Further, under a constraint of "bounded concurrency," query processing is in polynomial time.

36 citations


Book ChapterDOI
07 Jul 1992
TL;DR: This chapter describes spatial analogy and subsumption; several spatio-analogical inference tasks can be viewed as forms of abductive inference, relying on a good ability to perform analogical classification.
Abstract: This chapter describes spatial analogy and subsumption. A concept image may only have other concepts as parts. An unclassified or instance image can only have other instances as parts. A classification of an image is the inference of a link to a concept. It has been suggested that the notion of classification can be extended to images. There are a number of spatio-analogical inferences that can be made about a target image. The image can be classified, the location of parts can be predicted, unknown parts can be identified, or a pattern can be recognized in an image. The latter three tasks can be viewed as a form of abductive inference, relying on a good ability to perform analogical classification. The computation of similarity is a central process of analogical inference. The similarity of two images can be measured in terms of the transformations needed to bring them into equivalence. Many types of transformations are possible such as replacing, deleting, or moving a part, or moving all parts simultaneously.
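The transformation-based similarity measure mentioned at the end can be caricatured as follows (parts as a flat set, with only add/delete transformations; the chapter's image representation is much richer):

```python
def similarity(parts_a, parts_b):
    """Transformation distance between two part sets: each part present in
    one image but not the other needs one add/delete transformation."""
    return len(set(parts_a) ^ set(parts_b))

print(similarity({"roof", "door", "window"}, {"roof", "door", "wheel"}))  # 2
```

Fewer required transformations means greater similarity, which is the basis the chapter gives for analogical classification.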

29 citations


Proceedings Article
12 Jul 1992
TL;DR: It is argued that a basic tenet of qualitative reasoning practice--the separation of modeling and simulation--obviates many of the difficulties faced by previous attempts to formalize reasoning about change.
Abstract: The development of a formal logic for reasoning about change has proven to be surprisingly difficult. Furthermore, the logics that have been developed have found surprisingly little application in those fields, such as Qualitative Reasoning, that are concerned with building programs that emulate human common-sense reasoning about change. In this paper, we argue that a basic tenet of qualitative reasoning practice--the separation of modeling and simulation--obviates many of the difficulties faced by previous attempts to formalize reasoning about change. Our analysis helps explain why the QR community has been nonplussed by some of the problems studied in the nonmonotonic reasoning community. Further, the formalism we present provides both the beginnings of a formal foundation for qualitative reasoning, and a framework in which to study a number of open problems in qualitative reasoning.

Journal ArticleDOI
TL;DR: This work presents an alternative computation method that employs backward chaining in a kind of abductive reasoning, thus requiring much less computation and memory than COVADIS, and is suitable for handling the more powerful knowledge base form represented by Horn clause bases.
Abstract: We present a new computational approach to the problem of detecting potential inconsistencies in knowledge bases. For such inconsistencies, we characterize the sets of possible input facts that will allow the knowledge-based system to derive the contradiction. The state-of-the-art approach to this problem is represented by the COVADIS system, which checks simple rule bases. The COVADIS approach relies on forward chaining and is strongly related to the way an ATMS computes labels for deducible facts. Here, we present an alternative computation method that employs backward chaining in a kind of abductive reasoning. This approach gives a more focused reasoning, thus requiring much less computation and memory than COVADIS. Further, since our method is very similar to SLD-resolution, it is suitable for handling the more powerful knowledge base form represented by Horn clause bases. Finally, our method is easily extended to uncertain knowledge bases, assuming that the uncertainty calculus is modeled by possibilistic logic. This extension allows us to model the effect of user-defined belief thresholds for inference chains.
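The backward-chaining computation contrasted here with COVADIS's forward chaining can be sketched roughly like this. The rule base, the integrity constraint, and the encoding are all invented for illustration.

```python
# Horn rules, head -> list of alternative bodies; the integrity constraint
# is itself a rule whose head is the contradiction.
RULES = {
    "fever": [["infection"]],
    "hypothermia": [["exposure"]],
    "contradiction": [["fever", "hypothermia"]],
}

def abduce(goal, rules=RULES):
    """SLD-style backward chaining from `goal` down to input facts,
    returning the sets of inputs from which `goal` is derivable."""
    if goal not in rules:                 # an input fact: assume it
        return [{goal}]
    results = []
    for body in rules[goal]:
        partials = [set()]
        for atom in body:                 # conjoin explanations of the body
            partials = [p | e for p in partials for e in abduce(atom, rules)]
        results.extend(partials)
    return results

# The input-fact sets that would let the rule base derive the contradiction:
print(abduce("contradiction"))
```

Running from the contradiction downward characterizes exactly the dangerous input combinations, without enumerating everything the rule base can derive forward.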

Journal ArticleDOI
TL;DR: A probabilistic model of text understanding is developed, using probability theory to handle the uncertainty which arises in this abductive inference process; all aspects of natural language processing are treated in the same framework, allowing the integration of syntactic, semantic and pragmatic constraints.
Abstract: We discuss a new framework for text understanding. Three major design decisions characterize this approach. First, we take the problem of text understanding to be a particular case of the general problem of abductive inference. Second, we use probability theory to handle the uncertainty which arises in this abductive inference process. Finally, all aspects of natural language processing are treated in the same framework, allowing us to integrate syntactic, semantic and pragmatic constraints. In order to apply probability theory to this problem, we have developed a probabilistic model of text understanding. To make it practical to use this model, we have devised a way of incrementally constructing and evaluating belief networks. We have written a program, wimp3, to experiment with this framework. To evaluate this program, we have developed a simple ‘single-blind’ testing method.

Journal ArticleDOI
TL;DR: A connectionist mechanism for an inference problem alternative to the usual chaining method is described and a modified relaxation method is proposed to improve the computational inefficiencies associated with the optimization process.
Abstract: A connectionist mechanism for an inference problem alternative to the usual chaining method is described. The inference problem is within the scope of propositional logic that contains no variables and with some enhanced knowledge representation facilities. The method is an application of mathematical programming where knowledge and data are transformed into constraint equations. In the network, the nodes represent propositions and constraint equations, and the violation of constraints is formulated as an energy function. The inference is realized as a minimization process of the energy function using the relaxation method to search for a truth value distribution that achieves the optimum consistency with the given knowledge and data. A modified relaxation method is proposed to improve the computational inefficiencies associated with the optimization process. The behavior of the method is analyzed through examples of deductive and abductive inference and of inference with unorganized knowledge.
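The energy-function formulation can be miniaturized as follows: two propositions, with exhaustive search standing in for the paper's relaxation method. Everything in the snippet is invented for illustration.

```python
from itertools import product

# Knowledge: a -> b (violated when a=1, b=0). Data: a is observed true
# (violated when a=0). Violations are summed into an energy function.
def energy(a, b):
    return (1 if a == 1 and b == 0 else 0) + (1 if a == 0 else 0)

# Exhaustive search stands in for relaxation: find the truth-value
# assignment with optimum consistency (minimum energy).
best = min(product([0, 1], repeat=2), key=lambda ab: energy(*ab))
print(best)   # (1, 1): with a clamped true, minimizing energy derives b
```

Deduction falls out as energy minimization here; abduction works the same way with the observation clamped on the consequent side instead.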

Book ChapterDOI
01 Sep 1992


Journal ArticleDOI
TL;DR: A new approach to learning diagnostic knowledge is presented, based on the use of a causal model of the domain and abductive reasoning, which serves both to focus the search during one-step learning and to localize failures and propose changes during knowledge refinement.
Abstract: This paper presents a new approach to learning diagnostic knowledge, based on the use of a causal model of the domain and abductive reasoning. Justifications supplied by the causal model serve both to focus the search during one-step learning and to localize failures and propose changes during knowledge refinement.

Proceedings Article
01 Jan 1992
TL;DR: This work gives formal semantics to a notion of approximate observations, and defines two types of entailment for a knowledge base with imprecise information: a cautious notion which allows only completely justified conclusions, and a bold one, which allows jumping to conclusions.
Abstract: We investigate the problem of reasoning with imprecise quantitative information. We give formal semantics to a notion of approximate observations, and define two types of entailment for a knowledge base with imprecise information: a cautious notion, which allows only completely justified conclusions, and a bold one, which allows jumping to conclusions. Both versions of the entailment relation are shown to be decidable. We investigate the behavior of the two alternatives on various examples, and show that the answers obtained are intuitively desirable. The behavior of these two entailment relations is completely characterized for a certain sublanguage, in terms of the logic of true equality. We demonstrate various properties of the full logic, and show how it applies to many situations of interest.
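The cautious/bold distinction can be caricatured with intervals. The encoding below is invented here; the paper's semantics of approximate observations is more subtle.

```python
# "x is about 5" as the interval (4.5, 5.5); the predicates used below are
# monotone, so checking the endpoints suffices.
def cautious(interval, pred):
    """Entailed only if true across the whole observation interval."""
    lo, hi = interval
    return all(pred(v) for v in (lo, hi))

def bold(interval, pred):
    """Entailed if true somewhere in the interval: jumping to conclusions."""
    lo, hi = interval
    return any(pred(v) for v in (lo, hi))

about_5 = (4.5, 5.5)
print(cautious(about_5, lambda v: v > 4))   # True: completely justified
print(bold(about_5, lambda v: v > 5))       # True, but only boldly
```

Cautious entailment never outruns the imprecision of the data; bold entailment trades that guarantee for more conclusions.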

DOI
01 Jan 1992
TL;DR: The specification of a domain independent propositional abductive reasoning system is the main achievement of this thesis, which can free designers from repeatedly building specialized abductive inference engines, and instead allow them to concentrate their effort on knowledge engineering and problem solving.
Abstract: Abduction is a logical inference technique used in explanation finding and a variety of consequence finding. One application domain that stands out in utilizing abduction is automated diagnostic reasoning. This thesis provides a formal specification and methods of computation for a domain independent propositional abductive reasoning system. On the competence level, specifications are defined for domain independent abductive reasoning in terms of finding assumption-based explanations, direct consequences, extensions and a protocol for revising assumptions. On the performance level, computational strategies for performing abduction according to the defined specifications are studied. The computational framework for a propositional abductive inference engine, the Clause Management System (CMS), is presented. The computational framework of the CMS uses the notion of prime implicates to represent its knowledge base. As a result, the algorithm to update the CMS knowledge base is an incremental algorithm for generating prime implicates--the first reported. Coupled with the notion of reasoning with assumptions, the abduction framework is extended to include inquiry about defeasible assumptions. The notion of assumption-based reasoning presented includes finding assumption-based explanations, direct consequences and extensions. Extending the computational framework of the CMS, an Assumption-based Clause Management System (ACMS) that computes the above functions is presented. A simple protocol for use by domain specific applications interacting with the ACMS is proposed. Included in the protocol is a method to perform revision of assumptions. The first algorithm to perform incremental deletion of prime implicates is also presented. Additionally, a new notion of approximated abduction together with a set of approximation strategies, namely knowledge-guided and resource-bounded approximation, is proposed.
The goal of these studies is to propose a framework for incorporating knowledge-guided and resource-bounded approximation into computational abduction. The potential benefit might be the discovery of a useful and tractable approximation strategy. The specification of a domain independent propositional abductive reasoning system is the main achievement of this thesis. The resulting abductive reasoning system, the ACMS, is adaptable to a wide spectrum of domain specific applications. The ACMS can free designers from repeatedly building specialized abductive inference engines, and instead allow them to concentrate their effort on knowledge engineering and problem solving.
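The prime-implicate representation at the heart of the CMS can be sketched with a naive resolution-and-subsumption loop (the thesis's incremental algorithms are much more refined than this brute-force closure):

```python
from itertools import combinations

def resolve(c1, c2):
    """All resolvents of two clauses (frozensets of signed integers)."""
    out = []
    for lit in c1:
        if -lit in c2:
            out.append((c1 - {lit}) | (c2 - {-lit}))
    return out

def prime_implicates(clauses):
    """Close the clause set under resolution, discarding tautologies and
    subsumed clauses; what survives are the prime implicates."""
    clauses = {frozenset(c) for c in clauses}
    changed = True
    while changed:
        changed = False
        for c1, c2 in combinations(list(clauses), 2):
            for r in map(frozenset, resolve(c1, c2)):
                if any(-l in r for l in r):          # tautology: skip
                    continue
                if any(c <= r for c in clauses):     # subsumed: skip
                    continue
                clauses = {c for c in clauses if not r <= c} | {r}
                changed = True
    return clauses

# Literals as signed ints (1 = p, -1 = not-p, 2 = q):
print(prime_implicates([{1, 2}, {-1, 2}]))   # {frozenset({2})}: q alone
```

Keeping the knowledge base as its prime implicates is what lets a CMS answer explanation queries by simple clause lookups instead of fresh theorem proving.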



Book ChapterDOI
07 Oct 1992
TL;DR: A new proposal for the computation of hypotheses explaining the presence of illegality by exploiting abductive reasoning is presented, based on a suitable manipulation of minimal three-valued models of the logic program.
Abstract: Some steps of the design of a data dictionary with the use of a particular methodology are represented by means of logic rules augmented with integrity constraints defining illegal data design. The presence of concepts incompatible with one another is easily revealed by asking for the satisfiability of the integrity constraints. Furthermore, it is possible to obtain the hypotheses explaining the presence of illegality by exploiting abductive reasoning. To this end, a new proposal for the computation of such hypotheses, based on a suitable manipulation of minimal three-valued models of the logic program, is presented.

Book ChapterDOI
05 Oct 1992
TL;DR: It is claimed that inductive reasoning can be characterised in a way similar to plausible reasoning, and there are strong relations between belief revision and explanatory reasoning.
Abstract: In this paper, we discuss and relate characterisations of different forms of ‘jumping to conclusions’: Kraus, Lehmann & Magidor's analysis of plausible reasoning, the present author's characterisation of inductive reasoning, Zadrozny's account of abductive reasoning, and Gardenfors' theory of belief revision. Our main claims are that (i) inductive reasoning can be characterised in a way similar to plausible reasoning; (ii) inductive and abductive reasoning are special cases of explanatory reasoning; and (iii) there are strong relations between belief revision and explanatory reasoning. The ultimate goal of this research is a general account of jumping to conclusions.

Book ChapterDOI
01 Jan 1992
TL;DR: This paper describes how the system uses abductive reasoning to guide its search for artifacts that satisfy some of the requirements and how it can assemble these artifacts in a new configuration and how this description of freshly assembled artifacts generated by the system can be used by a routine design problem solver for the detailed design of the individual sub-parts.
Abstract: In this paper we describe a system being developed by us to solve the problem of configurational design, a kind of design that generates descriptions of artifacts containing a number of sub-parts. It maps the functional requirements to a structural description to be used in the detailed routine design of the artifact. The approach used by us relies upon the functional description of the requirements and on our part representation technique. Our representation strengthens the hierarchical representation used in routine design systems by introducing explicit information on connectivity between various parts. This paper describes how the system uses abductive reasoning to guide its search for artifacts that satisfy some of the requirements and how it can assemble these artifacts in a new configuration. It also describes how this description of freshly assembled artifacts generated by the system can be used by a routine design problem solver for the detailed design of the individual sub-parts. The abductive reasoning used by us differs from abductive hypothesis assembly since the causalities and assembly techniques for hypothesis and those for parts differ. The work reported in this paper forms a part of an attempt to tackle large real life design problems in an intelligent way. We have described how this work forms the basis of relating multiple levels of deep and shallow knowledge and generating novel failure recovery mechanisms. We further envisage that this work builds a foundation for using structural and functional analogy to solve the problem of configurational design.

Journal ArticleDOI
TL;DR: This work identifies and studies 2 types of computational heuristics used in order to tackle abductive complexity, and analyzes the role a particular type of knowledge called Essentials can have in reaching an acceptable abductive explanation.
Abstract: Abduction is a ubiquitous reasoning task in human problem solving. Its ubiquity is somewhat in contradiction with its computational complexity, which has been shown to be in general NP-hard. The fact that humans perform abductive reasoning tasks in a "reasonable" time and within the limits of their cognitive architecture led us to hypothesize the existence of computational heuristics used in order to tackle abductive complexity. In this work we identify and study 2 types of such heuristics. First, working from protocols gathered from expert blood bankers performing the abductive task of alloantibody identification, we compare the conclusion of the protocol analysis to the results of the formal computational complexity analysis of abduction. This comparison yields interesting explanations of cognitive phenomena based on a computational theory argument. We then analyze the role a particular type of knowledge called Essentials can have in reaching an acceptable abductive explanation. For each of these heuristics we implement a computational model and discuss its theoretical and cognitive aspects. We also discuss the relevancy of using computational complexity arguments as a tool to explain human behavior and to generate hypotheses in cognitive science research. Finally the present work is related to previous work and future extensions are proposed.

01 Jan 1992
TL;DR: A new approach to modeling abductive reasoning is presented that admits an extremely efficient implementation and is able to handle difficult problems such as alternative explanations, continuous random variables, consistency, partial covering and cyclicity, which are commonly encountered in abductive domains.
Abstract: Abductive explanation has been formalized in AI as the process of searching for a set of assumptions that can prove a given observation. A basic problem which naturally arises is that there may be many different possible sets available. Thus, some preferential ordering on the explanations is necessary to precisely determine which one is best. Unfortunately, any model with sufficient representational power is in general NP-hard. Causal trees and AND/OR graphs are among the most commonly used for representing causal knowledge. Consequently, finding a best explanation has been treated as some heuristic search through the graph. However, this approach exhibits an expected exponential run-time growth rate. In this thesis, we present a new approach to modeling abductive reasoning which admits an extremely efficient implementation. We treat the problem in terms of constrained optimization instead of graph traversal. Our approach models knowledge using linear constraints and finds a best explanation by optimizing some measure within these constraints. Although finding the best explanation remains NP-hard, our approach allows us to utilize the highly efficient tools developed in operations research. Such tools as the Simplex method and Karmarkar's projective scaling algorithm form the foundations for the practical realization of our approach. Experimental results strongly indicate that our linear constraint satisfaction approach is quite promising. Studies comparing our approach against heuristic search techniques have shown our approach to be superior in both time and space, actually exhibiting an expected polynomial run-time growth rate. Our goal is to show that our framework is both flexible and representationally powerful. We can model both cost-based abduction and Bayesian networks.
Furthermore, it is possible for us to handle difficult problems such as alternative explanations, continuous random variables, consistency, partial covering and cyclicity which are commonly encountered in abductive (diagnostic) domains.
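The constrained-optimization reformulation can be shown on a two-hypothesis toy problem. Plain enumeration stands in for the Simplex/Karmarkar solvers the thesis relies on, and the hypothesis names and costs are invented.

```python
from itertools import product

# Assumable hypotheses with assumption costs (invented numbers).
COST = {"flu": 4, "cold": 1}

def feasible(flu, cold, cough=1):
    """Causal rule as a linear inequality: the observed cough must have at
    least one of its possible causes switched on (cough <= flu + cold)."""
    return cough <= flu + cold

# A real system would hand "minimize 4*flu + 1*cold subject to the
# constraints" to an LP/ILP solver; this toy is small enough to enumerate.
best = min(
    (a for a in product([0, 1], repeat=2) if feasible(*a)),
    key=lambda a: a[0] * COST["flu"] + a[1] * COST["cold"],
)
print(best)   # (0, 1): the cough is explained by the cheaper cause, cold
```

Once causal knowledge is linear constraints and preference is an objective function, the whole operations-research toolbox applies, which is the leverage the thesis claims.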

01 Jan 1992
TL;DR: An algorithm that combines the paradigms of belief revision and abduction is presented, along with a system called Brace that is a preliminary implementation of this algorithm; the applicability of the Brace approach to a wide range of domains is shown.
Abstract: This proposal presents an approach to explanation that incorporates the paradigms of belief revision and abduction. We present an algorithm that combines these techniques and a system called Brace that is a preliminary implementation of this algorithm. We show the applicability of the Brace approach to a wide range of domains including scientific discovery, device diagnosis and plan recognition. Finally, we describe our proposals for a new implementation, new application domains for our system and extensions to this approach.

Book ChapterDOI
01 Jan 1992
TL;DR: It is hoped to develop a formal methodology for integrating case-based databases with rule-based expert systems in the legal domain through the integration of database and expert system technologies.
Abstract: We are currently developing concepts and software tools designed to aid legal practitioners in the process of statutory interpretation: the process of determining the meaning of a statute or regulation and applying it to a particular set of facts. This is being attempted through the integration of database and expert system technologies. Case-Based Reasoning (CBR) is being used to model legal precedents while Rule-Based Reasoning (RBR) modules are being used to model the legislation and other types of causal knowledge. It is hoped to generalise these findings and to develop a formal methodology for integrating case-based databases with rule-based expert systems in the legal domain.
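One way to picture the CBR/RBR integration is a precedent-first decision procedure with the statute as fallback. This is purely illustrative: the chapter proposes a methodology, not this code, and every name and threshold below is invented.

```python
# One legal precedent: facts of the prior case and its outcome (invented).
CASES = [({"speed": 80, "zone": 60}, "fine")]

def rbr(facts):
    """Rule-based module: the statute encoded as an explicit rule."""
    return "fine" if facts["speed"] > facts["zone"] else "no_offence"

def cbr(facts, cases=CASES, max_diff=15):
    """Case-based module: reuse the outcome of a sufficiently similar case."""
    for prior, outcome in cases:
        if abs(prior["speed"] - facts["speed"]) <= max_diff:
            return outcome
    return None

def decide(facts):
    return cbr(facts) or rbr(facts)   # precedent first, statute as fallback

print(decide({"speed": 75, "zone": 60}))   # fine, via the similar precedent
```

The design choice mirrors statutory interpretation: precedents settle the cases they resemble, and the legislative rules decide the rest.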

01 Jan 1992
TL;DR: A new model for abductive reasoning called generalized cost-based abduction for general causal knowledge bases is presented, and an approach for solving this model by using linear constraint satisfaction is provided.
Abstract: Abductive reasoning (or explanation) is basically a backward-chaining process on a collection of causal rules. Given an observed event, we attempt to determine the set of causes that brought about this event. Cost-based abduction is a model for abductive reasoning which provides a concrete formulation of the explanation process. However, it restricts itself to acyclic causal knowledge bases. The existence of cyclicity results in anomalous behavior by the model. For example, assume our knowledge base states that A can cause B and B can cause A. Faced with having to explain the occurrence of A, we could postulate B. Now, since A already exists, we can use it to explain B. Our backward explanation can certainly fall into this trap. In this paper, we present a new model called generalized cost-based abduction for general causal knowledge bases. Furthermore, we provide an approach for solving this model by using linear constraint satisfaction.
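The cyclicity trap in the abstract's A/B example can be reproduced, and blocked, in a few lines. This is illustrative only: generalized cost-based abduction handles cycles through its constraint formulation, not through the goal-stack check sketched here.

```python
# Effect -> possible causes: B can cause A, and A can cause B (a cycle).
RULES = {"A": ["B"], "B": ["A"]}

def explain(goal, rules=RULES, pending=()):
    """Backward explanation that refuses circular support: a goal may not
    be 'explained' by something that is itself waiting to be explained."""
    if goal in pending:                  # the A <- B <- A trap
        return None
    causes = rules.get(goal, [])
    if not causes:                       # no known causes: assumable fact
        return {goal}
    for cause in causes:
        sub = explain(cause, rules, pending + (goal,))
        if sub is not None:
            return sub
    return None                          # only circular explanations exist

print(explain("A"))   # None: the cycle leaves A with no real explanation
```

Without the pending-goal check, the naive backward chainer would happily "explain" A by B and B by the very A it set out to explain, which is exactly the anomaly the paper's generalized model is built to rule out.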