Open Access · Posted Content
ExplanationLP: Abductive Reasoning for Explainable Science Question Answering
TLDR
A novel approach for answering and explaining multiple-choice science questions by reasoning over grounding and abstract inference chains, which elicits explanations by constructing a weighted graph of relevant facts for each candidate answer and extracting the facts that satisfy certain structural and semantic constraints.
Abstract
We propose a novel approach for answering and explaining multiple-choice science questions by reasoning over grounding and abstract inference chains. This paper frames question answering as an abductive reasoning problem, constructing plausible explanations for each choice and then selecting the candidate with the best explanation as the final answer. Our system, ExplanationLP, elicits explanations by constructing a weighted graph of relevant facts for each candidate answer and extracting the facts that satisfy certain structural and semantic constraints. To extract the explanations, we employ a linear programming formalism designed to select the optimal subgraph. The graph's weighting function is composed of a set of parameters, which we fine-tune to optimize answer selection performance. We carry out our experiments on the WorldTree and ARC-Challenge corpora to empirically demonstrate the following conclusions: (1) grounding-abstract inference chains provide the semantic control to perform explainable abductive reasoning; (2) the approach learns efficiently and robustly with fewer parameters, outperforming contemporary explainable and transformer-based approaches in a similar setting; and (3) it generalises, outperforming SOTA explainable approaches on general science question sets.
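The answer-selection scheme the abstract describes can be sketched in a few lines: for each candidate answer, retrieve weighted relevant facts, find the fact subset (subgraph) that maximises the explanation score, and return the candidate whose best explanation scores highest. This is a minimal illustrative sketch, not the paper's implementation: the fact weights are invented toy values, the structural constraint is reduced to a simple fact-count limit, and an exhaustive search stands in for the paper's linear-programming solver.

```python
# Hedged sketch of abductive answer selection in the style of ExplanationLP.
# Assumptions (not from the paper): toy relevance weights, a cardinality
# constraint in place of the full structural/semantic constraints, and
# brute-force search instead of an LP solver.
from itertools import combinations

def best_explanation_score(fact_weights, max_facts=2):
    """Select the subset of facts (up to max_facts) with the highest total
    relevance weight -- a stand-in for the LP optimal-subgraph step."""
    best_score, best_subset = 0.0, ()
    for k in range(1, max_facts + 1):
        for subset in combinations(range(len(fact_weights)), k):
            score = sum(fact_weights[i] for i in subset)
            if score > best_score:
                best_score, best_subset = score, subset
    return best_score, best_subset

def answer_by_abduction(candidates, max_facts=2):
    """Score each candidate answer by its best explanation; return the
    candidate with the highest-scoring explanation."""
    scored = {a: best_explanation_score(w, max_facts)[0]
              for a, w in candidates.items()}
    return max(scored, key=scored.get)

# Toy example: relevance weights of retrieved facts per candidate answer.
candidates = {
    "conduction": [0.9, 0.7, 0.2],
    "radiation":  [0.4, 0.3, 0.1],
}
print(answer_by_abduction(candidates))  # -> conduction
```

In the actual system, the selection is an integer linear program whose weighting function is fine-tuned on answer-selection performance; the brute-force search above merely makes the abductive "best explanation wins" loop concrete.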
Citations
Inference To The Best Explanation
Proceedings Article (DOI)
Grow-and-Clip: Informative-yet-Concise Evidence Distillation for Answer Explanation
Yuyan Chen, Yang Xiao, Bang Liu, et al.
TL;DR: Experimental results show that the automatically distilled evidence has human-like informativeness, conciseness and readability, which can enhance the interpretability of answers to questions.
Journal Article (DOI)
Quasi-symbolic explanatory NLI via disentanglement: A geometrical examination
TL;DR: Empirical results indicate that the role-contents of explanations, such as ARG0-animal, are disentangled in the latent space, which provides a means of controlling explanation generation by manipulating the traversal of vectors over the latent space.
Posted Content
∂-Explainer: Abductive Natural Language Inference via Differentiable Convex Optimization.
TL;DR: In this article, the authors combine the best of both worlds by casting the constrained optimization as part of a deep neural network via differentiable convex optimization and fine-tuning pre-trained transformers for downstream explainable NLP tasks.
Proceedings Article (DOI)
STAR: Cross-modal [STA]tement [R]epresentation for selecting relevant mathematical premises
Deborah Ferreira, André Freitas, et al.
TL;DR: This paper proposes a cross-modal attention model that learns representations of mathematical text for the task of natural language premise selection, using conjectures written in both natural and mathematical language to recommend the premises most likely to be relevant for proving a particular statement.
References
Posted Content
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
TL;DR: A new language representation model, BERT, designed to pre-train deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers, which can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks.
Book
Collected Papers of Charles Sanders Peirce
Charles Hartshorne, Paul Weiss, et al.
TL;DR: The authors published an eight-volume collection of Peirce's writings on general philosophy, logic, pragmatism, metaphysics, experimental science, scientific method and philosophy of mind, as well as reviews and correspondence.
Book
Inference to the best explanation
TL;DR: Lipton argues that an illuminating version of "Inference to the Best Explanation" must rely on the latter notion, and provides a new account of what makes one explanation lovelier than another.
Proceedings Article
ConceptNet 5.5: An Open Multilingual Graph of General Knowledge
TL;DR: ConceptNet is a knowledge graph that connects words and phrases of natural language with labeled edges to represent the general knowledge involved in understanding language, improving natural language applications by allowing them to better understand the meanings behind the words people use.