
Abductive reasoning

About: Abductive reasoning is a research topic. Over the lifetime of the topic, 1,917 publications have been published, receiving 44,645 citations. The topic is also known as: abduction & abductive inference.
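As a rough illustration of what abductive inference does (this sketch is not taken from any of the papers below), the Python snippet searches for the smallest set of candidate hypotheses that jointly explains a set of observations; all hypothesis and observation names are invented for the example.

```python
from itertools import combinations

# Candidate hypotheses and the observations each would explain.
# All names are invented for illustration.
EXPLAINS = {
    "rain":      {"wet_grass", "cloudy_sky"},
    "sprinkler": {"wet_grass"},
}

def best_explanations(observations, explains=EXPLAINS):
    """Return the smallest hypothesis sets that jointly cover all observations."""
    hypotheses = list(explains)
    for size in range(1, len(hypotheses) + 1):
        found = [
            set(combo)
            for combo in combinations(hypotheses, size)
            if set(observations) <= set().union(*(explains[h] for h in combo))
        ]
        if found:      # stop at the smallest size that works
            return found
    return []          # nothing explains the observations

print(best_explanations({"wet_grass", "cloudy_sky"}))  # [{'rain'}]
```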


Papers
Book
05 Feb 2004
TL;DR: Practical reasoning, which leads to (or modifies) intentions, plans, and decisions, is illustrated by Albert deciding which route to take to Boston; theoretical reasoning, which leads to (or modifies) beliefs and expectations, is illustrated by Betty predicting which route Albert will take; a single instance of reasoning may combine both.
Abstract: Albert thinks about what route to take to get to Boston. He thinks that, while the direct western route is faster, the scenic eastern route is longer but more enjoyable with less traffic. He is in a bit of a hurry but could probably arrive on time going either way. He eventually reaches a decision. The reasoning Albert goes through in settling on what route to take is practical. He is deciding what to do. At about the same time, Albert's friend Betty tries to decide what route Albert will take. She thinks about what Albert has done before, what Albert likes in a route, and how much of a hurry Albert is in. Betty's reasoning is theoretical. She is trying to arrive at a belief about what Albert will do. Practical reasoning in this more or less technical sense leads to (or modifies) intentions, plans, and decisions. Theoretical reasoning in the corresponding technical sense leads to (or modifies) beliefs and expectations. There is also the possibility that reasoning of either sort leaves things unchanged. Any given instance of reasoning may combine both theoretical and practical reasoning. In deciding which route to take, Albert may have to reach theoretical conclusions about how long it will take to go by the eastern route. In thinking about

38 citations

01 Jan 1992
TL;DR: It is shown how plausible default assumptions can be used to obtain more complete information (by filling in details), thereby providing a computational advantage in commonsense reasoning systems; this is the first rigorous demonstration of that claim.
Abstract: Commonsense knowledge is often incomplete, and reasoning with incomplete information is generally computationally intractable. For example, the statement "John may come by train or by plane" can force a system to reason by cases. We show how plausible default assumptions can be used to obtain more complete information (by filling in details), thereby providing a computational advantage in commonsense reasoning systems. Although it has been previously hypothesized that default reasoning might provide such an advantage, this is the first rigorous demonstration of the claim. A central aspect of this work is the analysis of the computational complexity of default reasoning. We characterize the various sources of intractability, and show how tractable forms of default reasoning can be obtained. Model-preference default theories are defined to provide a general default reasoning mechanism. We give a detailed analysis of the tradeoff between expressiveness and tractability as we place various syntactic restrictions on such theories. A similar analysis is given for Reiter's default logic, with an emphasis on efficiently computable default theories that include strict Horn theories. Our results indicate that the various tractable default theories are closely related. Our analysis of default logic also reveals the inherent intractability of abductive reasoning. Next, we consider the complexity of defeasible inheritance--a more specialized default reasoning formalism used in the representation of hierarchically organized information in semantic networks. Our central result is that inheritance reasoning based on Touretzky's inferential distance is NP-hard. Again, we identify the source of the intractability and delineate tractable subsystems. We conclude with a discussion of some of the computational limitations of the use of defaults in commonsense reasoning.
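To make the "filling in details" idea concrete, here is a minimal Python sketch (not the model-preference formalism of the paper) in which a default rule supplies the missing travel mode so the reasoner can avoid splitting into train/plane cases; the predicate names and the blocking condition are invented.

```python
# Minimal sketch of default reasoning as "filling in details".
# A strict fact wins; otherwise a default supplies the missing detail
# unless something in the knowledge base blocks it. Names are invented.

def travel_mode(knowledge: dict) -> str:
    if "mode" in knowledge:                       # strict information overrides defaults
        return knowledge["mode"]
    if not knowledge.get("train_strike", False):  # default is blocked only by this fact
        return "train"                            # defeasible conclusion filled in
    return "unknown"                              # fall back to reasoning by cases

print(travel_mode({}))                       # 'train'   (default applied)
print(travel_mode({"mode": "plane"}))        # 'plane'   (explicit fact wins)
print(travel_mode({"train_strike": True}))   # 'unknown' (default blocked)
```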

38 citations

Book
01 Aug 1995
TL;DR: The book points out that customary expositions of common-sense reasoning are based on a flawed non-monotonic reasoning paradigm and that the resulting solutions proposed for major problems, such as the frame problem, are either ad hoc or inadequate.
Abstract: This book presents the foundations of reasoning with partial information and a theory of common sense reasoning based on monotonic logic and partial structures. This theory was designed specifically for the needs of practicing computer scientists and provides easily implementable algorithms. Starting from first principles, following the logic of discovery of Karl Popper and Imre Lakatos, and the semantics of programming languages, the book develops a system of reasoning with partial information, and applies it to a comprehensive study of the problem examples from the literature of common sense reasoning. Proof-theoretic and model-theoretic views are considered in the applications, as well as logical problems of theoretical physics, such as issues related to Heisenberg's uncertainty principle. The book points out that customary expositions of common-sense reasoning are based on a flawed non-monotonic reasoning paradigm and that the resulting solutions proposed for major problems, such as the frame problem, are either ad hoc or inadequate. It is shown that non-monotonicity results from hiding information that should not be hidden. The essential research in common-sense reasoning has been developed in isolation from the disciplines of theoretical computer science and classical logic. This work breaks the isolation and establishes deep links. The book will be of interest to computer scientists, mathematicians, logicians, and philosophers interested in the foundations and applications of reasoning with partial information.
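As a loose illustration only (this is not the book's formalism of partial structures), the sketch below evaluates queries against partial information with a third "unknown" outcome; adding facts can only add conclusions, never retract them, so the behavior stays monotonic. All predicate names are invented.

```python
from typing import Optional

def holds(fact: str, known: dict) -> Optional[bool]:
    """True/False if the partial information decides `fact`, None (unknown) otherwise."""
    return known.get(fact)

partial = {"bird(tweety)": True}           # nothing is said about flying
print(holds("flies(tweety)", partial))     # None: the query is left open, not defaulted
partial["flies(tweety)"] = True            # extend the partial information
print(holds("flies(tweety)", partial))     # True: earlier answers are never retracted
```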

38 citations

Book Chapter
09 May 1989
TL;DR: A Prolog-like inference system is presented that computes minimum-cost explanations for abductive interpretation of natural-language sentences, with assumption costs determined by the costs of literals in the sentence's logical form and by functions attached to the antecedents of the implications.
Abstract: By determining those added assumptions sufficient to make the logical form of a natural-language sentence provable, abductive inference can be used in the interpretation of sentences to determine the information to be added to the listener's knowledge, i.e., what the listener should learn from the sentence. Some new forms of abduction are more appropriate to the task of interpreting natural language than those used in the traditional diagnostic and design synthesis applications of abduction. In one new form, least specific abduction, only literals in the logical form of the sentence can be assumed. The assignment of numeric costs to axioms and assumable literals permits specification of preferences on different abductive explanations. Least specific abduction is sometimes too restrictive. Better explanations can sometimes be found if literals obtained by backward chaining can also be assumed. Assumption costs for such literals are determined by the assumption costs of literals in the logical form and functions attached to the antecedents of the implications. There is a new Prolog-like inference system that computes minimum-cost explanations for these abductive reasoning methods.
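The following Python sketch imitates the flavor of cost-based abduction described above (it is not the paper's inference system): a goal literal can either be assumed at a stated cost or explained via a Horn-style rule whose antecedents inherit a weighted share of the goal's cost, and the cheapest option wins. Rules, weights, and costs are invented for the example.

```python
# Toy cost-based abduction: explain a goal either by assuming it at its
# assumption cost or by backward chaining through a rule whose antecedents
# inherit a weighted share of the goal's cost. All numbers are invented.

RULES = {
    # consequent: [(antecedents, weight applied to the consequent's cost), ...]
    "wet_grass": [(("rain",), 0.6), (("sprinkler_on",), 0.9)],
}
ASSUMPTION_COST = {"rain": 10.0, "sprinkler_on": 10.0, "wet_grass": 20.0}

def min_cost(goal: str, cost_if_assumed: float) -> float:
    """Minimum cost of explaining `goal`, given what assuming it outright would cost."""
    best = min(cost_if_assumed, ASSUMPTION_COST.get(goal, cost_if_assumed))
    for antecedents, weight in RULES.get(goal, []):
        share = cost_if_assumed * weight / len(antecedents)
        best = min(best, sum(min_cost(a, share) for a in antecedents))
    return best

print(min_cost("wet_grass", 20.0))  # 10.0: assuming 'rain' beats assuming the goal itself
```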

38 citations

17 Jun 2004
TL;DR: Knowledge representation is presented as the core of a radical idea for understanding intelligence: rather than building brains from the bottom up, the aim is to understand and build intelligent behavior from the top down, focusing on what an agent needs to know in order to behave intelligently, how this knowledge can be represented symbolically, and how automated reasoning procedures can make this knowledge available as needed.
Abstract: Knowledge representation is at the very core of a radical idea for understanding intelligence. Instead of trying to understand or build brains from the bottom up, its goal is to understand and build intelligent behavior from the top down, putting the focus on what an agent needs to know in order to behave intelligently, how this knowledge can be represented symbolically, and how automated reasoning procedures can make this knowledge available as needed. This landmark text takes the central concepts of knowledge representation developed over the last 50 years and illustrates them in a lucid and compelling way. Each of the various styles of representation is presented in a simple and intuitive form, and the basics of reasoning with that representation are explained in detail. This approach gives readers a solid foundation for understanding the more advanced work found in the research literature. The presentation is clear enough to be accessible to a broad audience, including researchers and practitioners in database management, information retrieval, and object-oriented systems as well as artificial intelligence. This book provides the foundation in knowledge representation and reasoning that every AI practitioner needs.
* Authors are well-recognized experts in the field who have applied the techniques to real-world problems
* Presents the core ideas of KR&R in a simple, straightforward approach, independent of the quirks of research systems
* Offers the first true synthesis of the field in over a decade
Table of Contents: 1 Introduction * 2 The Language of First-Order Logic * 3 Expressing Knowledge * 4 Resolution * 5 Horn Logic * 6 Procedural Control of Reasoning * 7 Rules in Production Systems * 8 Object-Oriented Representation * 9 Structured Descriptions * 10 Inheritance * 11 Numerical Uncertainty * 12 Defaults * 13 Abductive Reasoning * 14 Actions * 15 Planning * 16 A Knowledge Representation Tradeoff * Bibliography * Index

38 citations


Network Information

Related Topics (5)
* Natural language: 31.1K papers, 806.8K citations, 82% related
* Ontology (information science): 57K papers, 869.1K citations, 79% related
* Inference: 36.8K papers, 1.3M citations, 76% related
* Heuristics: 32.1K papers, 956.5K citations, 76% related
* Social network: 42.9K papers, 1.5M citations, 75% related
Performance Metrics
No. of papers in the topic in previous years:

Year    Papers
2023    56
2022    103
2021    56
2020    59
2019    56
2018    67