Open Access Proceedings Article

UCL Machine Reading Group: Four Factor Framework For Fact Finding (HexaF)

TL;DR: The system is a four-stage model consisting of document retrieval, sentence retrieval, natural language inference, and aggregation; it achieved a FEVER score of 62.52% on the provisional test set (without additional human evaluation) and 65.41% on the development set.
Abstract
In this paper we describe our 2nd place FEVER shared-task system, which achieved a FEVER score of 62.52% on the provisional test set (without additional human evaluation) and 65.41% on the development set. Our system is a four-stage model consisting of document retrieval, sentence retrieval, natural language inference, and aggregation. Retrieval is performed leveraging task-specific features, and then a natural language inference model takes each of the retrieved sentences paired with the claimed fact. The resulting predictions are aggregated across retrieved sentences with a Multi-Layer Perceptron and re-ranked corresponding to the final prediction.
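The four stages described in the abstract can be sketched as a simple pipeline. This is an illustrative sketch only: the function names and the toy stage implementations below are made up for exposition and are not the authors' actual code (the real stages involve task-specific retrieval features, a trained NLI model, and an MLP aggregator).

```python
# Illustrative sketch of the four-stage pipeline from the abstract:
# document retrieval -> sentence retrieval -> per-sentence NLI -> aggregation.
# All stage implementations below are toy placeholders, not the authors' code.

def verify_claim(claim, retrieve_documents, retrieve_sentences, nli, aggregate):
    docs = retrieve_documents(claim)              # stage 1: document retrieval
    sents = retrieve_sentences(claim, docs)       # stage 2: sentence retrieval
    preds = [nli(claim, s) for s in sents]        # stage 3: NLI per (claim, sentence) pair
    return aggregate(preds)                       # stage 4: aggregate across sentences

# Toy stand-ins so the sketch runs end to end.
corpus = {"doc1": ["Paris is the capital of France.", "It is in Europe."]}

def retrieve_documents(claim):
    return ["doc1"]

def retrieve_sentences(claim, docs):
    return [s for d in docs for s in corpus[d]]

def nli(claim, sentence):
    # A real model scores SUPPORTS / REFUTES / NOT ENOUGH INFO.
    return "SUPPORTS" if "capital" in sentence else "NOT ENOUGH INFO"

def aggregate(preds):
    # The paper aggregates with a Multi-Layer Perceptron; here we just
    # favour a definite label if any sentence produced one.
    return "SUPPORTS" if "SUPPORTS" in preds else preds[0]

label = verify_claim("Paris is the capital of France.",
                     retrieve_documents, retrieve_sentences, nli, aggregate)
```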



Citations
Journal Article

Survey of Hallucination in Natural Language Generation

TL;DR: This survey serves to facilitate collaborative efforts among researchers in tackling the challenge of hallucinated text in NLG by providing a broad overview of the research progress and challenges of the hallucination problem in NLG.
Journal Article

Combining Fact Extraction and Verification with Neural Semantic Matching Networks

TL;DR: Presents a connected system consisting of three homogeneous neural semantic matching models that conduct document retrieval, sentence selection, and claim verification jointly for fact extraction and verification.
Proceedings Article

Revealing the Importance of Semantic Retrieval for Machine Reading at Scale

TL;DR: This work proposes a simple yet effective pipeline system with special consideration on hierarchical semantic retrieval at both paragraph and sentence level, and their potential effects on the downstream task, and illustrates that intermediate semantic retrieval modules are vital for shaping upstream data distribution and providing better data for downstream modeling.
Proceedings Article

GEAR: Graph-based Evidence Aggregating and Reasoning for Fact Verification

TL;DR: A graph-based evidence aggregating and reasoning (GEAR) framework which enables information to transfer on a fully-connected evidence graph and then utilizes different aggregators to collect multi-evidence information is proposed.
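As a toy illustration of the evidence-aggregation idea in the GEAR summary above, the sketch below pools per-evidence representation vectors with a simple mean aggregator. The vectors are made-up data, and GEAR itself propagates information over a fully-connected evidence graph and supports several aggregators (attention, max, mean); only the mean case is shown here.

```python
# Hedged sketch of one aggregator over per-evidence representations:
# mean-pool a list of equal-length vectors into a single vector.
# Input vectors are invented for illustration.
def mean_aggregate(evidence_vectors):
    n = len(evidence_vectors)
    dim = len(evidence_vectors[0])
    return [sum(v[i] for v in evidence_vectors) / n for i in range(dim)]

pooled = mean_aggregate([[1.0, 2.0], [3.0, 4.0]])
```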
Proceedings Article

Fine-grained Fact Verification with Kernel Graph Attention Network

TL;DR: Kernel Graph Attention Network (KGAT) introduces node kernels, which better measure the importance of the evidence nodes, and edge kernels, which conduct fine-grained evidence propagation in the graph, for more accurate fact verification.
References
Proceedings Article

Glove: Global Vectors for Word Representation

TL;DR: A new global log-bilinear regression model that combines the advantages of the two major model families in the literature, global matrix factorization and local context window methods, and produces a vector space with meaningful substructure.
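Pretrained GloVe vectors are distributed as plain text, one token per line followed by its vector components. The sketch below parses that format and compares words by cosine similarity; the three sample vectors are invented for illustration, not real GloVe values.

```python
import io
import math

# Minimal parser for the plain-text GloVe vector format:
# each line is "<token> <v1> <v2> ... <vd>".
def load_glove(fh):
    vectors = {}
    for line in fh:
        parts = line.rstrip().split(" ")
        vectors[parts[0]] = [float(x) for x in parts[1:]]
    return vectors

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# Made-up toy vectors standing in for a real GloVe file.
sample = io.StringIO("king 0.5 0.7\nqueen 0.5 0.6\ncar -0.9 0.1\n")
vecs = load_glove(sample)
```

With real GloVe files the parsing is identical; only the vocabulary and dimensionality grow.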
Proceedings Article

Rectified Linear Units Improve Restricted Boltzmann Machines

TL;DR: Replacing the binary stochastic hidden units of restricted Boltzmann machines with rectified linear units learns features that are better for object recognition on the NORB dataset and face verification on the Labeled Faces in the Wild dataset.
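The rectified linear unit at the heart of the paper above is a one-line function, relu(x) = max(0, x), shown here as a minimal sketch:

```python
# The rectified linear unit: zero for negative inputs, identity otherwise.
def relu(x):
    return x if x > 0 else 0.0

activations = [relu(x) for x in [-2.0, -0.5, 0.0, 1.5]]
```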
Proceedings Article

Deep contextualized word representations

TL;DR: This paper introduces a new type of deep contextualized word representation that models both complex characteristics of word use (e.g., syntax and semantics) and how these uses vary across linguistic contexts (i.e., to model polysemy).
Posted Content

A large annotated corpus for learning natural language inference

TL;DR: Introduces the Stanford Natural Language Inference corpus, a new, freely available collection of labeled sentence pairs written by humans doing a novel grounded task based on image captioning, which allows a neural network-based model to perform competitively on natural language inference benchmarks for the first time.
Proceedings Article

A Decomposable Attention Model for Natural Language Inference

TL;DR: The authors use attention to decompose the problem into subproblems that can be solved separately, thus making it trivially parallelizable and achieving state-of-the-art results on the Stanford Natural Language Inference (SNLI) dataset.
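The core "attend" subproblem from the summary above can be illustrated in a few lines: each token of one sentence is softly aligned to the tokens of the other via dot-product scores and a softmax. The token vectors below are made up, and the paper first transforms tokens with a learned feed-forward network F, which is omitted here for brevity.

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of scores.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attend(a, b):
    # For each token vector in a, compute dot-product scores against every
    # token vector in b, then return the softmax-weighted mixture of b.
    aligned = []
    for ai in a:
        scores = [sum(x * y for x, y in zip(ai, bj)) for bj in b]
        weights = softmax(scores)
        aligned.append([sum(w * bj[k] for w, bj in zip(weights, b))
                        for k in range(len(b[0]))])
    return aligned

# Toy 2-d token vectors standing in for transformed embeddings.
a = [[1.0, 0.0]]
b = [[1.0, 0.0], [0.0, 1.0]]
beta = attend(a, b)
```

Because each token's alignment depends only on pairwise scores, every row can be computed independently, which is what makes the decomposition trivially parallelizable.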