Sewon Min

Researcher at University of Washington

Publications - 50
Citations - 4564

Sewon Min is an academic researcher from the University of Washington. The author has contributed to research in topics: Question answering & Computer science. The author has an h-index of 22 and has co-authored 36 publications receiving 1992 citations. Previous affiliations of Sewon Min include Facebook.

Papers
Posted Content

Dense Passage Retrieval for Open-Domain Question Answering

TL;DR: This work shows that retrieval can be practically implemented using dense representations alone, where embeddings are learned from a small number of questions and passages by a simple dual-encoder framework.
Proceedings ArticleDOI

Dense Passage Retrieval for Open-Domain Question Answering

TL;DR: In this paper, dense passage representations are learned from a small number of questions and passages by a simple dual-encoder framework, and the resulting retriever greatly outperforms a strong Lucene-BM25 system.
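
The dual-encoder objective summarized in the two entries above can be illustrated with a short training-step sketch. The sketch below assumes dot-product similarity and in-batch negatives (passages paired with the other questions in a batch serve as negatives); the function name and tensor shapes are illustrative, not the paper's exact implementation.

    import torch
    import torch.nn.functional as F

    def in_batch_dpr_loss(q_emb, p_emb):
        # q_emb: (B, d) question embeddings from the question encoder
        # p_emb: (B, d) embeddings of each question's gold passage
        # Dot-product similarity between every question and every passage in the batch.
        scores = q_emb @ p_emb.T               # (B, B)
        targets = torch.arange(q_emb.size(0))  # the gold passage sits on the diagonal
        return F.cross_entropy(scores, targets)

At retrieval time, passage embeddings are precomputed once, and each question embedding is matched against them with maximum inner-product search.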
Proceedings ArticleDOI

UNIFIEDQA: Crossing Format Boundaries with a Single QA System

TL;DR: This work uses the latest advances in language modeling to build a single pre-trained QA model, UNIFIEDQA, that performs well across 19 QA datasets spanning 4 diverse formats, and results in a new state of the art on 10 factoid and commonsense question answering datasets.
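
The unifying move here is to serialize every QA format (extractive, abstractive, multiple-choice, yes/no) into one text-to-text input for a single model. The sketch below is a rough illustration; the exact separators and field order are assumptions, not the paper's precise encoding.

    def to_unified_text(question, context=None, choices=None):
        # Render any QA instance as one input string for a single seq2seq QA model.
        parts = [question]
        if choices:
            parts.append(" ".join(f"({chr(65 + i)}) {c}" for i, c in enumerate(choices)))
        if context:
            parts.append(context)
        return " \\n ".join(parts)  # illustrative separator, not necessarily the paper's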
Proceedings Article

Rethinking the Role of Demonstrations: What Makes In-Context Learning Work?

TL;DR: This paper shows that ground-truth labels in the demonstrations are in fact not required, and that other aspects of the demonstrations are the key drivers of end-task performance: they expose the model to the label space, the distribution of the input text, and the overall format of the sequence.
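
The finding can be probed with a simple ablation: build the in-context demonstrations once with gold labels and once with labels drawn uniformly at random from the label space, then compare downstream accuracy. The prompt template and label set below are hypothetical, chosen only for illustration.

    import random

    LABELS = ["positive", "negative"]  # hypothetical label space

    def build_prompt(demos, test_input, random_labels=False):
        # demos: list of (input_text, gold_label) pairs used as in-context examples
        blocks = []
        for text, gold in demos:
            label = random.choice(LABELS) if random_labels else gold
            blocks.append(f"Review: {text}\nSentiment: {label}")
        blocks.append(f"Review: {test_input}\nSentiment:")
        return "\n\n".join(blocks)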
Proceedings ArticleDOI

A Discrete Hard EM Approach for Weakly Supervised Question Answering

TL;DR: This paper develops a hard EM learning scheme that computes gradients relative to the most likely solution at each update; it significantly outperforms previous methods on six QA tasks, with absolute gains of 2–10%, and achieves state-of-the-art results on five of them.
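
The hard EM scheme can be sketched as follows: among all candidate solutions consistent with the weak supervision (for example, every span that matches the answer string), take the one the current model scores highest and backpropagate only through that choice. The tensor shapes and use of cross-entropy below are assumptions for illustration.

    import torch
    import torch.nn.functional as F

    def hard_em_loss(logits, candidate_sets):
        # logits: (B, V) model scores over V possible solutions per example
        # candidate_sets: per example, indices of solutions consistent with the weak label
        losses = []
        for i, candidates in enumerate(candidate_sets):
            cand = torch.tensor(candidates)
            # hard E-step: pick the most likely candidate under the current model
            best = cand[logits[i, cand].argmax()]
            # M-step: the gradient is taken w.r.t. that single candidate only
            losses.append(F.cross_entropy(logits[i].unsqueeze(0), best.unsqueeze(0)))
        return torch.stack(losses).mean()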