scispace - formally typeset
Topic

Abductive reasoning

About: Abductive reasoning is a research topic. Over the lifetime, 1917 publications have been published within this topic receiving 44645 citations. The topic is also known as: abduction & abductive inference.


Papers
Book Chapter
01 Jan 2016
TL;DR: It is argued that in real scientific practice the distinction between a properly abductive phase and an inductive phase of hypothesis testing and evaluation is blurred, which reinforces Magnani’s (2014) view on abduction and its function in scientific practice.
Abstract: Hintikka (1997, 1998) argues that abduction is ignorance-preserving in the sense that the hypothesis that abduction delivers, and which attempts to explain a set of phenomena, is not, epistemologically speaking, on firmer ground than the phenomena it purports to explain; knowledge is not enhanced until the hypothesis undergoes a further inductive process that tests it against empirical evidence. Hintikka therefore introduces a wedge between the abductive process properly speaking and the inductive process of hypothesis testing. Similarly, Minnameier (2004) argues that abduction differs from inference to the best explanation (IBE), since the former describes the process of generating theories while the latter describes the inductive process of their evaluation. Like Hintikka, Minnameier traces this view back to Peirce’s work on abduction. Recent work on abduction (Gabbay and Woods 2005) goes as far as to draw a distinction between abducing a hypothesis that is considered worth conjecturing and the decision either to put this hypothesis to further inferential work in the given domain of enquiry or to test it experimentally. The latter step, when it takes place, is an inductive mode of inference that should be distinguished from the abductive inference that led to the hypothesis. In this paper, I argue that in real scientific practice both distinctions are blurred: the distinction between a properly abductive phase and an inductive phase of hypothesis testing and evaluation, and the distinction between testing a hypothesis discovered in a preceding abduction and releasing or activating that hypothesis for further inferential work in the domain of enquiry in which the ignorance problem arose in the first place. All these processes form an inextricable whole of theory development and elaboration, and this defies any attempt to analyze the process into discrete, well-defined steps.
Thus, my arguments reinforce Magnani’s (2014) view on abduction and its function in scientific practice.

4 citations

Journal Article
TL;DR: This work identifies and studies two types of computational heuristics used to tackle abductive complexity, and analyzes the role a particular type of knowledge, called Essentials, can have in reaching an acceptable abductive explanation.
Abstract: Abduction is a ubiquitous reasoning task in human problem solving. Its ubiquity is somewhat in contradiction with its computational complexity, which has been shown to be NP-hard in general. The fact that humans perform abductive reasoning tasks in a "reasonable" time and within the limits of their cognitive architecture led us to hypothesize the existence of computational heuristics used to tackle abductive complexity. In this work we identify and study two types of such heuristics. First, working from protocols gathered from expert blood bankers performing the abductive task of alloantibody identification, we compare the conclusions of the protocol analysis to the results of the formal computational complexity analysis of abduction. This comparison yields interesting explanations of cognitive phenomena based on a computational-theory argument. We then analyze the role a particular type of knowledge, called Essentials, can have in reaching an acceptable abductive explanation. For each of these heuristics we implement a computational model and discuss its theoretical and cognitive aspects. We also discuss the relevance of using computational complexity arguments as a tool to explain human behavior and to generate hypotheses in cognitive science research. Finally, the present work is related to previous work and future extensions are proposed.
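The paper's heuristic framing can be illustrated with a minimal sketch: treat abduction as covering a set of observations with hypotheses, and use a greedy choice to sidestep the NP-hard search for a minimum explanation. The hypothesis and observation names below are illustrative placeholders, not taken from the study's alloantibody data.

```python
# Minimal sketch: abduction as covering observations with hypotheses,
# using a greedy heuristic instead of exhaustive search.

def greedy_abduce(observations, causes):
    """Greedily pick hypotheses until every observation is explained.

    causes: dict mapping each hypothesis to the set of observations it can
    explain. The greedy choice (cover the most remaining observations) is a
    classic heuristic for approximating a minimum-cardinality explanation.
    """
    uncovered = set(observations)
    explanation = []
    while uncovered:
        best = max(causes, key=lambda h: len(causes[h] & uncovered))
        if not causes[best] & uncovered:
            raise ValueError(f"no hypothesis explains: {uncovered}")
        explanation.append(best)
        uncovered -= causes[best]
    return explanation

# Illustrative reaction-panel data (invented for this sketch).
causes = {
    "anti-K": {"panel1", "panel3"},
    "anti-D": {"panel2"},
    "anti-E": {"panel1", "panel2", "panel3"},
}
print(greedy_abduce({"panel1", "panel2", "panel3"}, causes))  # ['anti-E']
```

The greedy pass runs in polynomial time, which mirrors the paper's point: human experts trade guaranteed minimality for tractable strategies.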

4 citations

Posted Content
TL;DR: In this article, a polysyllogistic approach is used to describe the underlying structure of legal reasoning, revealing that the base minor premises of legal arguments consist of items of evidence of what the law is, while the base major premises are the categories of legal argument that may be legitimately made.
Abstract: At one time law was considered to be a science; this belief was associated with the concept of “natural law.” And just as law was considered a science, legal reasoning was considered to be a species of deductive logic. However, it is now recognized that the purpose of legal reasoning is not to prove to others the truth of a statement of fact, but rather to persuade others about how the law ought to be interpreted and applied. Although legal reasoning is logical in form, in substance it is evaluative. There are two types of hard cases: cases where a rule of law is ambiguous, and cases where the validity of a rule is in question. Questions of ambiguity arise when the minor premise of a proposition of law is challenged, while questions of validity arise when the major premise of a proposition of law is challenged. Hard cases are cases where two or more valid legal arguments lead to contradictory conclusions. The brief of a case is not a single syllogism of deductive logic; rather, it consists of strands or chains of syllogisms: “polysyllogisms.” The polysyllogistic approach is a useful means for describing the underlying structure of legal reasoning. This approach reveals that the base minor premises of legal arguments consist of items of evidence of what the law is, while the base major premises are the categories of legal arguments that may be legitimately made. Deductive logic plays a central role in legal reasoning, but logic alone cannot solve hard cases. When we attempt to reduce the decision of a case to an argument of deductive logic, the aspects of legal reasoning that are not deductive are exposed. A system of pure logic works only in easy cases, i.e., cases where the validity of the rule of law is unchallenged and the terms of the rule are unambiguous. Hard cases are resolved by a complex balancing of intramodal and intermodal arguments, in which the court evaluates not only the strength of individual arguments, but also the relative weight of the values that support our legal system, as implicated in the particular case.
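The polysyllogistic structure described above can be sketched as a chain in which each syllogism's conclusion serves as the minor premise of the next, starting from a base minor premise (an item of evidence). The premises below are invented placeholders, not drawn from the paper.

```python
# Illustrative sketch of a polysyllogism: a chain of syllogisms where each
# conclusion becomes the minor premise of the next. Content is invented.

def run_chain(base_minor, chain):
    """Walk the chain, checking each minor premise is established first."""
    premise = base_minor
    for minor, conclusion in chain:
        if premise != minor:
            raise ValueError(f"chain broken at premise: {minor}")
        premise = conclusion
    return premise

chain = [
    ("the statutory text says X", "the statute's plain meaning is X"),
    ("the statute's plain meaning is X", "the governing rule is X"),
    ("the governing rule is X", "the defendant is liable"),
]
print(run_chain("the statutory text says X", chain))  # the defendant is liable
```

The deductive steps are mechanical; what the sketch cannot capture, per the abstract, is the evaluative weighing that resolves hard cases when two such chains reach contradictory conclusions.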

4 citations

Journal Article
TL;DR: This paper compares publicly available general purpose tools, established Horn reasoning engines, as well as new variations of known methods as a means for abduction, and focuses on Horn representations, which provide a suitable language to describe most diagnostic scenarios.
Abstract: Abductive inference derives explanations for encountered anomalies and thus embodies a natural approach for diagnostic reasoning. Yet its computational complexity, which is inherent to the expressiveness of the underlying theory, remains a disadvantage. Even when restricting the representation to Horn formulae the problem is NP-complete. Hence, finding procedures that can efficiently solve abductive diagnosis problems is of particular interest from a research as well as practical point of view. In this paper, we aim at providing guidance on choosing an algorithm or tool when confronted with the issue of computing explanations in propositional logic-based abduction. Our focus lies on Horn representations, which provide a suitable language to describe most diagnostic scenarios. We illustrate abduction via two contrasting problem formulations: direct proof methods and conflict-driven techniques. While the former is based on determining logical consequences, the latter searches for suitable refutations involving possible causes. To reveal runtime performance trends we conducted a case study, in which we compared publicly available general purpose tools, established Horn reasoning engines, as well as new variations of known methods as a means for abduction.
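The direct-proof formulation mentioned above can be sketched in a few lines: forward-chain over Horn clauses to compute logical consequences, then search for a smallest hypothesis subset that entails the observation. The diagnostic example (car symptoms) is invented for illustration and is not from the paper.

```python
# Sketch of direct-proof Horn abduction: find a smallest set of assumable
# hypotheses whose Horn-clause consequences include the observation.
from itertools import combinations

def horn_closure(facts, rules):
    """Forward chaining; rules are (body_set, head) Horn clauses."""
    closure, changed = set(facts), True
    while changed:
        changed = False
        for body, head in rules:
            if body <= closure and head not in closure:
                closure.add(head)
                changed = True
    return closure

def direct_abduce(observation, hypotheses, rules):
    """Smallest hypothesis subset entailing the observation, by size-ordered
    enumeration (exponential worst case, matching NP-completeness)."""
    for k in range(1, len(hypotheses) + 1):
        for cand in combinations(sorted(hypotheses), k):
            if observation in horn_closure(cand, rules):
                return set(cand)
    return None

rules = [({"flat_battery"}, "no_lights"),
         ({"flat_battery"}, "engine_silent"),
         ({"no_fuel"}, "engine_silent")]
print(direct_abduce("engine_silent", {"flat_battery", "no_fuel"}, rules))
```

Each entailment check is polynomial for Horn formulae; the exponential enumeration over candidate sets is where the dedicated engines and conflict-driven techniques compared in the paper earn their keep.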

4 citations

01 Jan 1992
TL;DR: A new approach to modeling abductive reasoning which admits an extremely efficient implementation and is able to handle difficult problems such as alternative explanations, continuous random variables, consistency, partial covering and cyclicity which are commonly encountered in abductive domains.
Abstract: Abductive explanation has been formalized in AI as the process of searching for a set of assumptions that can prove a given observation. A basic problem which naturally arises is that there may be many different possible sets available. Thus, some preferential ordering on the explanations is necessary to precisely determine which one is best. Unfortunately, any model with sufficient representational power is in general NP-hard. Causal trees and AND/OR graphs are among the structures most commonly used for representing causal knowledge. Consequently, finding a best explanation has been treated as some heuristic search through the graph. However, this approach exhibits an expected exponential run-time growth rate. In this thesis, we present a new approach to modeling abductive reasoning which admits an extremely efficient implementation. We treat the problem in terms of constrained optimization instead of graph traversal. Our approach models knowledge using linear constraints and finds a best explanation by optimizing some measure within these constraints. Although finding the best explanation remains NP-hard, our approach allows us to utilize the highly efficient tools developed in operations research. Such tools as the Simplex method and Karmarkar's projective scaling algorithm form the foundations for the practical realization of our approach. Experimental results strongly indicate that our linear constraint satisfaction approach is quite promising. Studies comparing our approach against heuristic search techniques have shown our approach to be superior in both time and space, and actually exhibiting an expected polynomial run-time growth rate. Our goal is to show that our framework is both flexible and representationally powerful. We can model both cost-based abduction and Bayesian networks.
Furthermore, it is possible for us to handle difficult problems such as alternative explanations, continuous random variables, consistency, partial covering and cyclicity, which are commonly encountered in abductive (diagnostic) domains.
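The cost-based abduction the thesis targets can be stated as a small sketch: minimize the total cost of assumed hypotheses subject to the constraint that they entail the observation. The brute-force 0/1 enumeration below stands in for the thesis's linear-constraint relaxation (which hands the same program to Simplex-style solvers); the cost values and rules are invented for illustration.

```python
# Sketch of cost-based abduction as constrained optimization: minimize total
# hypothesis cost subject to the chosen hypotheses entailing the observation.
# Brute force over 0/1 assignments; the thesis instead encodes the constraints
# linearly and applies LP machinery (Simplex, Karmarkar's algorithm).
from itertools import product

def entails(assumed, rules, goal):
    """Forward chaining over (body_set, head) rules."""
    closure, changed = set(assumed), True
    while changed:
        changed = False
        for body, head in rules:
            if body <= closure and head not in closure:
                closure.add(head)
                changed = True
    return goal in closure

def cheapest_explanation(hyp_costs, rules, observation):
    hyps = sorted(hyp_costs)
    best, best_cost = None, float("inf")
    for bits in product([0, 1], repeat=len(hyps)):
        chosen = {h for h, b in zip(hyps, bits) if b}
        cost = sum(hyp_costs[h] for h in chosen)
        if cost < best_cost and entails(chosen, rules, observation):
            best, best_cost = chosen, cost
    return best, best_cost

costs = {"worn_brakes": 3.0, "icy_road": 1.0, "speeding": 2.0}
rules = [({"worn_brakes"}, "long_stop"),
         ({"icy_road", "speeding"}, "long_stop")]
print(cheapest_explanation(costs, rules, "long_stop"))  # ({'worn_brakes'}, 3.0)
```

Replacing the enumeration with a linear relaxation keeps the objective and constraints unchanged while opening the door to the operations-research tooling the abstract credits for the observed polynomial expected run time.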

4 citations


Network Information
Related Topics (5)
Natural language
31.1K papers, 806.8K citations
82% related
Ontology (information science)
57K papers, 869.1K citations
79% related
Inference
36.8K papers, 1.3M citations
76% related
Heuristics
32.1K papers, 956.5K citations
76% related
Social network
42.9K papers, 1.5M citations
75% related
Performance
Metrics
No. of papers in the topic in previous years
Year  Papers
2023  56
2022  103
2021  56
2020  59
2019  56
2018  67