Topic

Abductive reasoning

About: Abductive reasoning is a research topic. Over its lifetime, 1,917 publications have been published within this topic, receiving 44,645 citations. The topic is also known as: abduction and abductive inference.


Papers
Book ChapterDOI
01 Jan 2020
TL;DR: The authors discuss machine learning models based on abductive learning techniques and their implications for artificial reasoning, including the applicability of abductive reasoning to artificial intelligence and machine learning.
Abstract: There has been much research in recent years into the applicability of abductive reasoning to artificial intelligence and machine learning. Abductive learning involves finding the best explanation for a set of observations, based on creating a set of possible explanatory hypotheses. Formal models have been created (Abe, Proceedings of the IJCAI97 Workshop on Induction, 1997), which are used to analyze the properties and computational efficiency of applying abductive reasoning to various artificial intelligence applications. Here we discuss machine learning models based on abductive learning techniques and their implications for artificial reasoning.

4 citations
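The abstract above frames abduction as picking the best explanation for a set of observations from a set of candidate hypotheses. As a minimal illustration of that idea (not the formal models the abstract cites), the Python sketch below scores each candidate by how many observations it covers under a toy rule base, breaking ties in favor of smaller hypotheses; the rule base, candidates, and scoring are all invented for the example.

```python
# Minimal sketch of abduction as inference to the best explanation.
# The rule base, candidates, and scoring below are illustrative assumptions.

RULES = [  # (effect, set of causes that jointly produce it)
    ("wet_grass", {"rain"}),
    ("wet_grass", {"sprinkler"}),
    ("cloudy_sky", {"rain"}),
]

def covers(hypothesis: set, observation: str) -> bool:
    """True if some rule derives the observation from the hypothesis."""
    return any(effect == observation and causes <= hypothesis
               for effect, causes in RULES)

def best_explanation(observations: set, candidates: list) -> set:
    """Pick the candidate covering the most observations; break ties by parsimony."""
    return max(candidates,
               key=lambda h: (sum(covers(h, o) for o in observations), -len(h)))

obs = {"wet_grass", "cloudy_sky"}
print(best_explanation(obs, [{"rain"}, {"sprinkler"}, {"rain", "sprinkler"}]))
# -> {'rain'}: it covers both observations and is the smallest such hypothesis
```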

Proceedings ArticleDOI
01 Jan 2011
TL;DR: The authors present a general-purpose distributed abductive logic programming system for multi-agent hypothetical reasoning with confidentiality, which computes consistent conditional answers for a query over a set of distributed normal logic programs with possibly unbound domains and arithmetic constraints while preserving the private information within the logic programs.
Abstract: In the context of multi-agent hypothetical reasoning, agents typically have partial knowledge about their environments, and even the union of such knowledge is incomplete as a representation of the whole world. Thus, given a global query, they collaborate with each other to make correct inferences and hypotheses whilst maintaining global constraints. Most collaborative reasoning systems operate on the assumption that agents can share or communicate any information they have. However, in application domains like multi-agent systems for healthcare or distributed software agents for security policies in coalition networks, confidentiality of knowledge is an additional primary concern: agents are required to collaboratively compute consistent answers for a query whilst preserving their own private information. This paper addresses this issue by showing how the dichotomy between "open communication" in collaborative reasoning and protection of confidentiality can be accommodated. We present a general-purpose distributed abductive logic programming system for multi-agent hypothetical reasoning with confidentiality. Specifically, the system computes consistent conditional answers for a query over a set of distributed normal logic programs with possibly unbound domains and arithmetic constraints, preserving the private information within the logic programs. A case study on security policy analysis in distributed coalition networks is described as one example of the many applications of this system.

4 citations
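The system described above computes answers over distributed logic programs while keeping each program private. The sketch below illustrates just that confidentiality idea under strong simplifying assumptions: agents hold private propositional Horn clauses, expose only an `ask` interface, and return abduced atoms rather than their rules. It omits negation, arithmetic constraints, unbound domains, and loop checking, all of which the real system handles; every name here is hypothetical.

```python
# Sketch of confidentiality-preserving collaborative abduction. All names are
# hypothetical; the real system also handles negation, constraints, and global
# consistency. There is no loop checking, so clause bases must be acyclic.

class Agent:
    def __init__(self, name, clauses, abducibles):
        self.name = name
        self._clauses = clauses        # private: list of (head, [body atoms])
        self._abducibles = abducibles  # atoms this agent is allowed to assume

    def ask(self, goal, agents, assumed):
        """Try to support `goal`; return a set of abduced atoms, or None.
        Only abduced atoms cross this interface -- the clauses stay private."""
        if goal in assumed:
            return set()
        for head, body in self._clauses:
            if head != goal:
                continue
            abduced, ok = set(), True
            for subgoal in body:
                sub = solve(subgoal, agents, assumed | abduced)
                if sub is None:
                    ok = False
                    break
                abduced |= sub
            if ok:
                return abduced
        return {goal} if goal in self._abducibles else None

def solve(goal, agents, assumed=frozenset()):
    """Coordinator: poll the agents until one of them can support the goal."""
    for agent in agents:
        result = agent.ask(goal, agents, set(assumed))
        if result is not None:
            return result
    return None

hospital = Agent("hospital", [("diagnosis", ["symptom", "risk_factor"])], set())
lab = Agent("lab", [("symptom", [])], {"risk_factor"})
print(solve("diagnosis", [hospital, lab]))
# -> {'risk_factor'}: abduced by the lab; neither agent revealed its clauses
```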

Book ChapterDOI
01 Jan 2014
TL;DR: The authors examine analogical reasoning in clinical practice, in comparison and combination with other methods, mainly inductive and causal reasoning, and explore its connections with causal models and with induction.
Abstract: Clinical reasoning combines a wide range of strategies: deduction, induction and, mainly, different kinds of abductive reasoning (analogy, case studies, causal reasoning, etc.). This paper will focus on analogical reasoning, although it will be examined in comparison and combination with other methods, mainly inductive and causal reasoning. Analogy, induction and other kinds of reasoning play different roles in each phase of clinical reasoning. Analogy is one of the most powerful tools in everyday clinical practice, since physicians must compare their patients' symptoms with their past professional experience and their theoretical knowledge of medicine, and analogical reasoning plays a central role in this task. Young doctors and experienced physicians tend to reason differently, so this point will be explored. However, analogy is not the only way of reaching a diagnosis and confirming/rejecting hypotheses. Since physicians want to know the possible cause of their patients' symptoms, analogy combines well with causal models, and therefore the relation between these two methods will be examined. Induction and its connections with analogy will be considered too, since induction plays a decisive role both in the generation and the confirmation of hypotheses.

4 citations
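Since the abstract centers on physicians comparing a patient's symptoms with past cases, here is a toy sketch of that analogy step as similarity-based case retrieval, using Jaccard overlap between symptom sets. The case base and similarity measure are illustrative assumptions, not anything proposed in the paper.

```python
# Toy sketch of the analogy step in diagnosis: retrieve the past case most
# similar to the current patient's symptoms. The case base and the Jaccard
# measure are illustrative assumptions, not a clinical tool.

CASE_BASE = [
    ({"fever", "cough", "fatigue"}, "influenza"),
    ({"fever", "rash"}, "measles"),
    ({"cough", "shortness_of_breath"}, "asthma"),
]

def jaccard(a: set, b: set) -> float:
    """Overlap between two symptom sets, from 0 (disjoint) to 1 (identical)."""
    return len(a & b) / len(a | b) if a | b else 0.0

def most_analogous(symptoms: set):
    """Return (past case, diagnosis, similarity) for the closest match."""
    return max(((case, dx, jaccard(symptoms, case)) for case, dx in CASE_BASE),
               key=lambda t: t[2])

case, diagnosis, sim = most_analogous({"fever", "cough"})
print(diagnosis, round(sim, 2))   # influenza 0.67 -- the closest past case
```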

Proceedings ArticleDOI
16 Nov 2018
TL;DR: Focusing on logic models formulated as propositional Horn clauses, the authors provide examples that show the attractiveness of a spectrum-based fault-localization concept, drawing on its flexibility and ease of use.
Abstract: When obtaining a full-fledged model for diagnostic and debugging purposes is out of reach, abstract logic models might allow us to fall back on abductive reasoning for isolating faults. Such models often only aggregate knowledge about which inputs and faults would have this or that effect on the system. As in property-based system design or formal verification, the quality of the resulting reasoning process depends heavily on this logic model. Since logic descriptions are not entirely intuitive to formulate, and automated processes for deriving them are prone to be incomplete, we are certainly interested in assessing a model's quality and isolating issues. In this paper, we propose to use test cases and spectrum-based fault localization for this task, drawing on the flexibility and ease of use of such a spectrum-based concept. Focusing on logic models formulated as propositional Horn clauses, we provide examples that show the attractiveness of our concept.

4 citations
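Spectrum-based fault localization, which the paper applies to logic models, ranks components by how strongly their involvement correlates with failing tests. The sketch below uses the standard Ochiai suspiciousness formula over made-up spectra in which the "components" stand in for model clauses; the clause names and test data are invented.

```python
# Minimal spectrum-based fault localization over a logic model's clauses.
# The spectra and clause names are invented; Ochiai is a standard SBFL formula.
import math

# For each test case: (set of model clauses exercised, did the test pass?)
SPECTRA = [
    ({"c1", "c2"}, True),
    ({"c1", "c3"}, False),
    ({"c2", "c3"}, False),
    ({"c1", "c2"}, True),
]

def ochiai(clause: str) -> float:
    """Suspiciousness of a clause from its execution pattern across tests."""
    ef = sum(1 for cs, ok in SPECTRA if clause in cs and not ok)      # executed, failed
    ep = sum(1 for cs, ok in SPECTRA if clause in cs and ok)          # executed, passed
    nf = sum(1 for cs, ok in SPECTRA if clause not in cs and not ok)  # skipped, failed
    denom = math.sqrt((ef + nf) * (ef + ep))
    return ef / denom if denom else 0.0

for clause in sorted({"c1", "c2", "c3"}, key=ochiai, reverse=True):
    print(clause, round(ochiai(clause), 3))
# c3 ranks highest (1.0): it occurs in every failing run and no passing one
```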

Posted Content
TL;DR: In this article, the authors extend the abduction task to utilize partially specified examples, along with declarative background knowledge about the missing data, and show that when a small explanation exists, it is possible to obtain a much-improved guarantee in the challenging exception-tolerant setting.
Abstract: Juba recently proposed a formulation of learning abductive reasoning from examples, in which both the relative plausibility of various explanations, as well as which explanations are valid, are learned directly from data. The main shortcoming of this formulation of the task is that it assumes access to full-information (i.e., fully specified) examples; relatedly, it offers no role for declarative background knowledge, as such knowledge is rendered redundant in the abduction task by complete information. In this work, we extend the formulation to utilize such partially specified examples, along with declarative background knowledge about the missing data. We show that it is possible to use implicitly learned rules together with the explicitly given declarative knowledge to support hypotheses in the course of abduction. We observe that when a small explanation exists, it is possible to obtain a much-improved guarantee in the challenging exception-tolerant setting. Such small, human-understandable explanations are of particular interest for potential applications of the task.

4 citations
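The abstract's key observation is that small explanations yield better guarantees even when examples are only partially specified. As a loose illustration (not Juba's actual algorithm), the sketch below searches candidate explanations in order of size and returns the smallest conjunction that no partially specified example refutes, treating unobserved attributes (None) as unable to falsify.

```python
# Illustration of preferring small explanations consistent with partially
# specified examples (None marks an unobserved attribute). This sketches the
# "small explanation" idea from the abstract, not Juba's actual algorithm.
from itertools import combinations

ATTRIBUTES = ["fever", "rash", "travel"]

# Partially specified positive examples: some attribute values are missing.
EXAMPLES = [
    {"fever": True, "rash": False, "travel": True},
    {"fever": True, "rash": True, "travel": None},
    {"fever": None, "rash": True, "travel": True},
]

def refuted(explanation, example) -> bool:
    """A conjunction is refuted only if an attribute is observed to be False."""
    return any(example[a] is False for a in explanation)

def smallest_explanation(examples):
    """Search conjunctions by increasing size; return the first unrefuted one."""
    for size in range(1, len(ATTRIBUTES) + 1):
        for expl in combinations(ATTRIBUTES, size):
            if not any(refuted(expl, ex) for ex in examples):
                return expl
    return None

print(smallest_explanation(EXAMPLES))
# -> ('fever',): never observed false; 'rash' is refuted by the first example
```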


Network Information
Related Topics (5)
Natural language: 31.1K papers, 806.8K citations, 82% related
Ontology (information science): 57K papers, 869.1K citations, 79% related
Inference: 36.8K papers, 1.3M citations, 76% related
Heuristics: 32.1K papers, 956.5K citations, 76% related
Social network: 42.9K papers, 1.5M citations, 75% related
Performance Metrics
No. of papers in the topic in previous years:

Year    Papers
2023    56
2022    103
2021    56
2020    59
2019    56
2018    67