Topic

Abductive reasoning

About: Abductive reasoning is a research topic. Over its lifetime, 1,917 publications on this topic have been published, receiving 44,645 citations. The topic is also known as: abduction & abductive inference.


Papers
Book Chapter
01 Jan 2002
TL;DR: The experimental results reveal that the problem of partial abductive inference is difficult to solve by exact computation.
Abstract: Partial abductive inference in Bayesian belief networks has usually been expressed as an extension of total abductive inference (abduction over all the variables in the network). In this paper we study the transformation of the partial problem into a total one, analyzing and trying to improve the method that previously appeared in the literature. We also outline an alternative approach and compare both methods by means of experimentation. The experimental results reveal that the problem of partial abductive inference is difficult to solve by exact computation.

16 citations
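
The distinction between total and partial abduction that this paper builds on can be made concrete with a small example. The following is a minimal sketch with hypothetical variable names and contrived probabilities, not the paper's algorithm or experiments: in a toy chain A -> B -> E with evidence E = true, total abduction finds the most probable joint assignment to A and B, while partial abduction over the explanation set {A} sums B out first, and the two answers can disagree.

    from itertools import product

    # Toy chain A -> B -> E over Booleans; the CPT numbers are contrived so
    # that total and partial abduction disagree. All values are illustrative.
    pA = {True: 0.5, False: 0.5}
    pB = {(True, True): 0.5,   (False, True): 0.5,     # P(B=b | A=True),  key (b, a)
          (True, False): 0.05, (False, False): 0.95}   # P(B=b | A=False)
    pE = {(True, True): 0.6, (False, True): 0.4,       # P(E=e | B=True),  key (e, b)
          (True, False): 0.5, (False, False): 0.5}     # P(E=e | B=False)

    def joint(a, b, e):
        return pA[a] * pB[(b, a)] * pE[(e, b)]

    e_obs = True  # observed evidence: E = true

    # Total abduction (MPE): best joint assignment to all unobserved variables.
    mpe = max(product([True, False], repeat=2),
              key=lambda ab: joint(ab[0], ab[1], e_obs))

    # Partial abduction (MAP over {A}): sum B out, then maximize over A only.
    map_a = max([True, False],
                key=lambda a: sum(joint(a, b, e_obs) for b in (True, False)))

    print("MPE over (A, B):", mpe)      # (False, False): joint 0.2375
    print("MAP over A alone:", map_a)   # True: marginal 0.275 beats 0.2525

Projecting the MPE onto A would answer False, while the partial problem's answer is True, which is why transforming the partial problem into a total one, as studied in the paper, is nontrivial.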

Book Chapter
01 Feb 2015
TL;DR: This paper surveys semantically-guided and model-based methods for reasoning in first-order logic, covering resolution-based, tableaux-based, and DPLL-inspired methods, hierarchical and locality-based methods for first-order theories, and a new goal-sensitive method called SGGS.
Abstract: Reasoning semantically in first-order logic is notoriously challenging. This paper surveys a selection of semantically-guided or model-based methods that aim at meeting aspects of this challenge. For first-order logic we touch upon resolution-based methods, tableaux-based methods, and DPLL-inspired methods, and we give a preview of a new method called SGGS, for Semantically-Guided Goal-Sensitive reasoning. For first-order theories we highlight hierarchical and locality-based methods, concluding with the recent Model-Constructing satisfiability calculus.

16 citations
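
As a point of reference for the "DPLL-inspired" methods the survey covers, here is a minimal propositional DPLL sketch. This is an illustrative assumption, not code from the paper; first-order methods such as SGGS lift this style of model-guided search to first-order clauses.

    # Minimal propositional DPLL sketch. Clauses are lists of DIMACS-style
    # integer literals; returns a satisfying assignment dict or None.
    def dpll(clauses, assignment=None):
        if assignment is None:
            assignment = {}
        # Simplify: drop satisfied clauses, remove falsified literals.
        simplified = []
        for clause in clauses:
            if any(assignment.get(abs(l)) == (l > 0) for l in clause):
                continue  # clause already satisfied
            rest = [l for l in clause if abs(l) not in assignment]
            if not rest:
                return None  # empty clause: conflict under this assignment
            simplified.append(rest)
        if not simplified:
            return assignment  # all clauses satisfied: model found
        # Unit propagation: a one-literal clause forces its variable's value.
        for clause in simplified:
            if len(clause) == 1:
                l = clause[0]
                return dpll(simplified, {**assignment, abs(l): l > 0})
        # Branch on the first unassigned literal; a semantically-guided
        # variant would instead pick the literal suggested by a candidate model.
        l = simplified[0][0]
        return (dpll(simplified, {**assignment, abs(l): l > 0})
                or dpll(simplified, {**assignment, abs(l): l < 0}))

    # Example: (x1 v x2) & (-x1 v x3) & (-x2 v -x3)
    print(dpll([[1, 2], [-1, 3], [-2, -3]]))  # {1: True, 3: True, 2: False}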

Posted Content
TL;DR: Using relatively straightforward image classification examples and a variety of initial configurations for a deep model-building scenario, the framework exposes the classification outcomes of deep models through visualization and shows initial results for one potential application of interpretability.
Abstract: The practical impact of deep learning on complex supervised learning problems has been significant, so much so that almost every Artificial Intelligence problem, or at least a portion thereof, has been somehow recast as a deep learning problem. The appeal of these applications is significant, but it is increasingly challenged by what some call the challenge of explainability, or more generally the more traditional challenge of debuggability: if the outcomes of a deep learning process produce unexpected results (e.g., less than expected performance of a classifier), then there is little available in the way of theories or tools to help investigate the potential causes of such unexpected behavior, especially when this behavior could impact people's lives. We describe a preliminary framework to help address this issue, which we call "deep visual explanation" (DVE). "Deep," because it is the development and performance of deep neural network models that we want to understand. "Visual," because we believe that the most rapid insight into a complex multi-dimensional model is provided by appropriate visualization techniques, and "Explanation," because in the spectrum from instrumentation by inserting print statements to the abductive inference of explanatory hypotheses, we believe that the key to understanding deep learning relies on the identification and exposure of hypotheses about the performance behavior of a learned deep model. In the exposition of our preliminary framework, we use relatively straightforward image classification examples and a variety of choices for the initial configuration of a deep model-building scenario. By careful but not complicated instrumentation, we expose classification outcomes of deep models using visualization, and also show initial results for one potential application of interpretability.

16 citations
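
To make the "Visual" part concrete, here is a generic gradient-saliency sketch in PyTorch. This is an assumption for illustration, not the paper's DVE technique: the random tensor stands in for a real image, and untrained weights keep the sketch runnable offline.

    import torch
    import torchvision.models as models

    # Untrained weights for a self-contained sketch; in practice one would
    # load pretrained weights (e.g. ResNet18_Weights.DEFAULT) and a real image.
    model = models.resnet18(weights=None).eval()

    image = torch.rand(1, 3, 224, 224, requires_grad=True)  # stand-in input
    logits = model(image)
    predicted = logits.argmax(dim=1).item()

    # Backpropagate the predicted class score to the input pixels.
    logits[0, predicted].backward()
    saliency = image.grad.abs().max(dim=1).values.squeeze()  # (224, 224) map

    # `saliency` can now be rendered as a heatmap (e.g. with matplotlib) and
    # overlaid on the image to expose which regions drove the classification.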

Proceedings Article
11 Aug 1986
TL;DR: It is suggested that deliberation, or "practical reasoning," is a form of normative reasoning and that the understanding and construction of reasoning systems that can deliberate and act intentionally presupposes a theory of normative reasoning.
Abstract: Deliberation typically involves the formation of a plan or intention from a set of values and beliefs. I suggest that deliberation, or "practical reasoning," is a form of normative reasoning and that the understanding and construction of reasoning systems that can deliberate and act intentionally presupposes a theory of normative reasoning. The language and semantics of a deontic logic is used to develop a theory of defeasible reasoning in normative systems and belief systems. This theory may be applied in action theory and to artificial intelligence by identifying expressions of values, beliefs, and intentions with various types of modal sentences from the language.

16 citations
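
The defeasibility at the heart of this theory can be illustrated with a toy rule system. This is a hypothetical sketch in code, not the paper's deontic logic: a more specific norm defeats a more general one, as when the obligation to keep a promise is overridden once keeping it would cause harm.

    from dataclasses import dataclass

    @dataclass
    class Rule:
        conditions: frozenset   # facts that trigger the rule
        obligation: str         # what the rule says ought to be done
        specificity: int        # higher = more specific, defeats lower

    rules = [
        Rule(frozenset({"made_promise"}), "keep_promise", specificity=1),
        Rule(frozenset({"made_promise", "keeping_causes_harm"}),
             "break_promise", specificity=2),
    ]

    def obligations(facts):
        """Return obligations from the highest-specificity applicable rules."""
        applicable = [r for r in rules if r.conditions <= facts]
        if not applicable:
            return set()
        top = max(r.specificity for r in applicable)
        return {r.obligation for r in applicable if r.specificity == top}

    print(obligations({"made_promise"}))                         # {'keep_promise'}
    print(obligations({"made_promise", "keeping_causes_harm"}))  # {'break_promise'}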


Network Information
Related Topics (5)
Natural language: 31.1K papers, 806.8K citations (82% related)
Ontology (information science): 57K papers, 869.1K citations (79% related)
Inference: 36.8K papers, 1.3M citations (76% related)
Heuristics: 32.1K papers, 956.5K citations (76% related)
Social network: 42.9K papers, 1.5M citations (75% related)
Performance Metrics
No. of papers in the topic in previous years:

Year   Papers
2023   56
2022   103
2021   56
2020   59
2019   56
2018   67