
Yi Lu

Publications - 6
Citations - 162

Yi Lu is an academic researcher. The author has contributed to research in topics: Question answering & Context (language use). The author has an h-index of 3 and has co-authored 5 publications receiving 64 citations.

Papers
Posted Content

MKQA: A Linguistically Diverse Benchmark for Multilingual Open Domain Question Answering

TL;DR: Multilingual Knowledge Questions & Answers (MKQA) is introduced, an open-domain question answering evaluation set comprising 10k question-answer pairs aligned across 26 typologically diverse languages, making results comparable across languages and independent of language-specific passages.
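
Because the answers are aligned by question across all 26 languages, per-language accuracy is computed over an identical question set and can be compared directly. A minimal sketch of that evaluation idea follows; the data layout, field names, and exact-match normalization are illustrative assumptions, not the actual MKQA release format or official metric.

```python
# Sketch: comparable per-language exact-match scoring over aligned QA pairs.
# The nested-dict layout and field names are illustrative assumptions,
# not the actual MKQA file format.

def exact_match(prediction: str, gold_answers: list[str]) -> bool:
    """Case-insensitive exact match against any accepted gold answer."""
    normalized = prediction.strip().lower()
    return any(normalized == g.strip().lower() for g in gold_answers)

def per_language_scores(examples, predictions):
    """examples: {question_id: {lang: {"question": str, "answers": [str]}}}
    predictions: {question_id: {lang: str}}.
    Every language shares the same question ids, so the resulting scores
    cover an identical question set and are directly comparable."""
    totals, hits = {}, {}
    for qid, by_lang in examples.items():
        for lang, ex in by_lang.items():
            totals[lang] = totals.get(lang, 0) + 1
            if exact_match(predictions[qid][lang], ex["answers"]):
                hits[lang] = hits.get(lang, 0) + 1
    return {lang: hits.get(lang, 0) / totals[lang] for lang in totals}
```
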
Proceedings Article

An Exploration of Data Augmentation and Sampling Techniques for Domain-Agnostic Question Answering

TL;DR: This work investigates the relative benefits of large pre-trained language models, various data sampling strategies, and query and context paraphrases generated by back-translation, and finds a simple negative sampling technique to be particularly effective.
Posted Content

An Exploration of Data Augmentation and Sampling Techniques for Domain-Agnostic Question Answering

TL;DR: The authors investigate the relative benefits of large pre-trained language models, various data sampling strategies, and query and context paraphrases generated by back-translation for the Machine Reading for Question Answering (MRQA) 2019 Shared Task.
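
Both entries describe the same study, so one sketch suffices: back-translation paraphrases a question (or context) by round-tripping it through a pivot language, and negative sampling pairs questions with contexts that do not contain the answer. The MarianMT checkpoints and the English-German pivot below are illustrative choices, not necessarily the papers' exact configuration.

```python
# Sketch: back-translation paraphrases and simple negative sampling for QA
# data augmentation. Model names and the English<->German pivot are
# illustrative choices, not necessarily the papers' configuration.
import random

from transformers import MarianMTModel, MarianTokenizer

def load_mt(model_name: str):
    return (MarianTokenizer.from_pretrained(model_name),
            MarianMTModel.from_pretrained(model_name))

en_de_tok, en_de = load_mt("Helsinki-NLP/opus-mt-en-de")
de_en_tok, de_en = load_mt("Helsinki-NLP/opus-mt-de-en")

def translate(texts, tokenizer, model):
    batch = tokenizer(texts, return_tensors="pt", padding=True, truncation=True)
    generated = model.generate(**batch, max_new_tokens=128)
    return tokenizer.batch_decode(generated, skip_special_tokens=True)

def back_translate(texts):
    """Paraphrase by round-tripping English -> German -> English."""
    return translate(translate(texts, en_de_tok, en_de), de_en_tok, de_en)

def negative_samples(questions, contexts, k=1):
    """Pair each question with k contexts drawn from other examples,
    treated as unanswerable (no-answer) training instances."""
    negatives = []
    for i, q in enumerate(questions):
        others = contexts[:i] + contexts[i + 1:]
        for ctx in random.sample(others, k):
            negatives.append({"question": q, "context": ctx, "answer": None})
    return negatives
```
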
Journal Article

Active Learning Over Multiple Domains in Natural Language Tasks

TL;DR: This work presents the first comprehensive analysis of both existing and novel methods for practitioners faced with multi-domain active learning in natural language tasks; among 18 acquisition functions from 4 families of methods, H-Divergence methods, and particularly the proposed variant DAL-E, yield effective results.
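
An acquisition function scores unlabeled examples so the most informative ones are sent for annotation. The pool-based loop below sketches where such a function plugs in; the entropy acquisition shown is a generic uncertainty baseline (not the paper's DAL-E variant), and the `model` and `oracle` interfaces are assumptions for illustration.

```python
# Sketch: a generic pool-based active learning loop. The entropy acquisition
# is a standard uncertainty baseline, not the DAL-E variant proposed in the
# paper; `model` is assumed to expose fit/predict_proba, and `oracle` returns
# a labeled example for a given unlabeled one.
import numpy as np

def entropy_acquisition(probs: np.ndarray) -> np.ndarray:
    """Score each unlabeled example by predictive entropy (higher = pick first)."""
    return -np.sum(probs * np.log(probs + 1e-12), axis=1)

def active_learning_loop(model, labeled, unlabeled, oracle, rounds=5, batch_size=32):
    for _ in range(rounds):
        model.fit(labeled)                        # retrain on current labels
        probs = model.predict_proba(unlabeled)    # (num_unlabeled, num_classes)
        scores = entropy_acquisition(probs)
        picked = set(np.argsort(-scores)[:batch_size])  # most uncertain examples
        labeled += [oracle(unlabeled[i]) for i in picked]
        unlabeled = [x for i, x in enumerate(unlabeled) if i not in picked]
    return model
```
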
Proceedings Article

On the Transferability of Minimal Prediction Preserving Inputs in Question Answering

TL;DR: This work suggests that the interpretability of MPPIs is insufficient to characterize the generalization capacity of neural models, and encourages more systematic analysis of model behavior outside the human-interpretable distribution of examples.
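
A minimal prediction preserving input is obtained by deleting input tokens while the model's prediction stays fixed, leaving a minimal (often uninterpretable) core. The greedy reduction below sketches the idea; the `predict` callable and the left-to-right deletion order are assumptions for illustration, not the paper's exact procedure.

```python
# Sketch: greedy search for a minimal prediction preserving input (MPPI).
# `predict` is an assumed callable mapping a token list to a predicted label;
# the greedy left-to-right deletion order is an illustrative simplification.
from typing import Callable, List

def minimal_preserving_input(tokens: List[str],
                             predict: Callable[[List[str]], str]) -> List[str]:
    original = predict(tokens)
    reduced = list(tokens)
    changed = True
    while changed:
        changed = False
        for i in range(len(reduced)):
            candidate = reduced[:i] + reduced[i + 1:]
            if candidate and predict(candidate) == original:
                reduced = candidate  # token was unnecessary; drop it
                changed = True
                break
    return reduced
```
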