scispace - formally typeset
Topic

Abductive reasoning

About: Abductive reasoning is a research topic. Over its lifetime, 1,917 publications have been published within this topic, receiving 44,645 citations. The topic is also known as: abduction & abductive inference.


Papers
Posted Content
TL;DR: A novel approach for answering and explaining multiple-choice science questions by reasoning over grounding and abstract inference chains; explanations are elicited by constructing a weighted graph of relevant facts for each candidate answer and extracting the facts that satisfy certain structural and semantic constraints.
Abstract: We propose a novel approach for answering and explaining multiple-choice science questions by reasoning over grounding and abstract inference chains. This paper frames question answering as an abductive reasoning problem: we construct plausible explanations for each choice and then select the candidate with the best explanation as the final answer. Our system, ExplanationLP, elicits explanations by constructing a weighted graph of relevant facts for each candidate answer and extracting the facts that satisfy certain structural and semantic constraints. To extract the explanations, we employ a linear programming formalism designed to select the optimal subgraph. The graph's weighting function is composed of a set of parameters, which we fine-tune to optimise answer-selection performance. We carry out experiments on the WorldTree and ARC-Challenge corpora to empirically demonstrate the following conclusions: (1) grounding-abstract inference chains provide the semantic control to perform explainable abductive reasoning; (2) the approach learns efficiently and robustly with fewer parameters, outperforming contemporary explainable and transformer-based approaches in a similar setting; and (3) it generalises, outperforming SOTA explainable approaches on general science question sets.
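The core idea of selecting the highest-weighted explanation subgraph can be illustrated with a minimal sketch. This is not the paper's method: ExplanationLP solves a linear program over structural and semantic constraints, whereas the toy below simply enumerates small fact subsets by brute force; the facts, weights, and size cap are hypothetical.

```python
from itertools import combinations

def best_explanation(facts, weights, max_facts=2):
    """Pick the subset of facts with the highest total weight.

    A toy stand-in for ExplanationLP's subgraph-selection step:
    exhaustive search over small subsets instead of solving an LP.
    """
    best_subset, best_score = (), 0.0
    for k in range(1, max_facts + 1):
        for subset in combinations(facts, k):
            score = sum(weights[f] for f in subset)
            if score > best_score:
                best_subset, best_score = subset, score
    return best_subset, best_score

# Hypothetical relevance weights for facts supporting one candidate answer.
weights = {
    "metals conduct electricity": 0.9,
    "copper is a metal": 0.8,
    "wood is an insulator": 0.2,
}
explanation, score = best_explanation(list(weights), weights, max_facts=2)
```

In the full system the weights themselves are learned parameters, tuned so that the best-scoring explanation also selects the correct answer.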

7 citations

Journal ArticleDOI
TL;DR: Differences in the use of the so-called 'logical' elements of language, such as quantifiers and conditionals, are explored and used to explain differences in performance on reasoning tasks across subject groups with different educational backgrounds.

Abstract: In this paper we explore differences in the use of the so-called 'logical' elements of language, such as quantifiers and conditionals, and use this to explain differences in performance on reasoning tasks across subject groups with different educational backgrounds. It is argued that quantified sentences are difficult natural bases for reasoning, and hence more prone to elicit variation in reasoning behaviour, because they are chiefly used with a pre-determined domain in everyday speech. By contrast, it is argued that conditional sentences form natural premises because of the function they serve in everyday speech. Implications of this for the role of logic in modelling human reasoning behaviour are briefly considered.

7 citations

Book ChapterDOI
01 Dec 2000
TL;DR: This chapter provides a brief introduction to the field of Logic-Based Artificial Intelligence (LBAI), discusses the contributions to LBAI contained in the chapters, and recounts some of the highlights of the workshop on LBAI from which the papers are drawn.
Abstract: In this chapter I provide a brief introduction to the field of Logic-Based Artificial Intelligence (LBAI). I then discuss contributions to LBAI contained in the chapters and some of the highlights that took place at the workshop on LBAI from which the papers are drawn. The areas of LBAI represented in the book are: commonsense reasoning; knowledge representation; nonmonotonic reasoning; abductive and inductive reasoning; logic, probability and decision making; logic for causation and actions; planning and problem solving; logic, planning and high-level robotics; logic for agents and actions; theory of beliefs; logic and language; computational logic; system implementations; and logic applications to mechanical checking and data integration.

7 citations

Proceedings ArticleDOI
08 Nov 1999
TL;DR: The proposed inference generates missing hypotheses that lie in the middle of the inference path by combining abductive and deductive inference with analogical mapping.
Abstract: In general, if a knowledge base lacks the necessary knowledge, abductive reasoning cannot explain an observation, so missing hypotheses must be generated. CMS can generate missing hypotheses, but only short-cut hypotheses or hypotheses that do not lie on real leaves; the inference path is therefore incomplete (truncated), and abduction is not complete. The proposed inference generates missing hypotheses that lie in the middle of the inference path by combining abductive and deductive inference with analogical mapping. As a result, it can generate missing hypotheses even in the middle of the inference path.
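The basic problem the paper addresses can be sketched in a few lines: given rules and known facts, abduction proposes the assumptions needed to explain an observation. This is a deliberately simplified illustration, not the paper's CMS-based method; the rule format and the example knowledge base are hypothetical.

```python
def abduce(rules, facts, observation):
    """Return the hypotheses needed to explain `observation`.

    rules: dict mapping a conclusion to the set of premises implying it.
    A premise already in `facts` needs no assumption; a premise with no
    rule and no fact is returned as a missing hypothesis to be assumed.
    """
    if observation in facts:
        return set()          # already explained by the knowledge base
    premises = rules.get(observation)
    if premises is None:
        return {observation}  # no rule applies: hypothesise it directly
    missing = set()
    for p in premises:
        missing |= abduce(rules, facts, p)
    return missing

# Hypothetical knowledge base: clouds imply rain, rain implies wet grass.
rules = {"wet_grass": {"rain"}, "rain": {"clouds"}}
hyps = abduce(rules, set(), "wet_grass")  # no facts are known
```

Here the explanation bottoms out at a leaf hypothesis; the paper's contribution is generating hypotheses that are missing from the *middle* of such chains, which this sketch does not attempt.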

7 citations


Network Information

Related Topics (5)
- Natural language: 31.1K papers, 806.8K citations, 82% related
- Ontology (information science): 57K papers, 869.1K citations, 79% related
- Inference: 36.8K papers, 1.3M citations, 76% related
- Heuristics: 32.1K papers, 956.5K citations, 76% related
- Social network: 42.9K papers, 1.5M citations, 75% related
Performance Metrics

No. of papers in the topic in previous years:

Year    Papers
2023    56
2022    103
2021    56
2020    59
2019    56
2018    67