
Showing papers by "Enrique Alfonseca published in 2022"


Proceedings ArticleDOI
17 May 2022
TL;DR: This work proposes a method to infuse structured knowledge into large language models by directly training T5 models on factual triples of knowledge graphs (KGs), and shows that models pre-trained on the Wikidata KG with this method outperform the T5 baselines on FreebaseQA and WikiHop, as well as on the Wikidata-answerable subsets of TriviaQA and NaturalQuestions.
Abstract: Large language models (LLMs) have demonstrated human-level performance on a vast spectrum of natural language tasks. However, it is largely unexplored whether they can better internalize knowledge from structured data, such as a knowledge graph, or from text. In this work, we propose a method to infuse structured knowledge into LLMs by directly training T5 models on factual triples of knowledge graphs (KGs). We show that models pre-trained on the Wikidata KG with our method outperform the T5 baselines on FreebaseQA and WikiHop, as well as on the Wikidata-answerable subsets of TriviaQA and NaturalQuestions. The models pre-trained on factual triples compare competitively with those pre-trained on natural language sentences that contain the same knowledge. Trained on a smaller KG, WikiMovies, we saw a 3x improvement in exact-match score on the MetaQA task. An advantage of the proposed method is that no alignment between the knowledge graph and a text corpus is required when curating training data. This makes our method particularly useful when working with industry-scale knowledge graphs.

18 citations
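The core idea of the first paper — pre-training a text-to-text model directly on KG triples — can be sketched as a serialization step that turns each (subject, relation, object) triple into an input/target pair for a T5-style model. The prompt format below is an illustrative assumption, not the paper's exact serialization:

```python
# Hedged sketch: turning KG triples into text-to-text pretraining
# examples. The "predict object:" prompt format is an assumption made
# for illustration; the paper's actual serialization may differ.

def triple_to_example(subject, relation, obj):
    """Serialize one (subject, relation, object) triple into a
    (source, target) pair: the model learns to predict the object."""
    source = f"predict object: {subject} {relation}"
    target = obj
    return source, target

# Toy Wikidata-style triples standing in for an industry-scale KG.
triples = [
    ("Barack Obama", "place of birth", "Honolulu"),
    ("The Matrix", "director", "the Wachowskis"),
]

examples = [triple_to_example(*t) for t in triples]
```

Because the examples come straight from the graph, no alignment between KG facts and a text corpus is needed — which is the curation advantage the abstract highlights.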


Proceedings ArticleDOI
14 Apr 2022
TL;DR: It is demonstrated that sharing parameters in the projection layers enables ADEs to perform competitively with SDEs, and three improved versions of ADEs are proposed.
Abstract: Dual encoders have been used for question-answering (QA) and information retrieval (IR) tasks with good results. There are two major types of dual encoders: Siamese Dual Encoders (SDE), with parameters shared across the two encoders, and Asymmetric Dual Encoders (ADE), with two distinctly parameterized encoders. In this work, we explore dual encoder architectures for QA retrieval tasks. By evaluating on MS MARCO, open-domain NQ, and the MultiReQA benchmarks, we show that SDE performs significantly better than ADE. We further propose three improved versions of ADEs. Based on the evaluation of QA retrieval tasks and direct analysis of the embeddings, we demonstrate that sharing parameters in the projection layers enables ADEs to perform competitively with SDEs.

5 citations
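The architectural distinction in the second paper — tower-specific encoders feeding a shared projection layer — can be sketched with toy linear "towers". The specific layer shapes and the tanh nonlinearity are assumptions for illustration, not the paper's actual encoder:

```python
# Hedged sketch of an ADE variant with a shared projection layer.
# The tiny linear "towers" here stand in for full transformer encoders;
# dimensions and nonlinearity are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
d_model, d_proj = 8, 4

# Asymmetric part: question and answer towers have distinct parameters.
W_q = rng.normal(size=(d_model, d_model))
W_a = rng.normal(size=(d_model, d_model))

# Shared part: one projection layer maps both towers into a common
# retrieval space (the modification the paper finds closes the gap
# between ADEs and SDEs).
W_proj = rng.normal(size=(d_model, d_proj))

def encode(x, W_tower):
    h = np.tanh(x @ W_tower)      # tower-specific encoding
    z = h @ W_proj                # shared projection
    return z / np.linalg.norm(z)  # unit-normalize for dot-product scoring

q = rng.normal(size=d_model)  # toy question representation
a = rng.normal(size=d_model)  # toy answer representation
score = float(encode(q, W_q) @ encode(a, W_a))  # retrieval similarity
```

With unit-normalized outputs, the dot product is a cosine similarity in [-1, 1]; an SDE would simply set `W_q` and `W_a` to the same matrix.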