Doug Downey

Researcher at Allen Institute for Artificial Intelligence

Publications - 40
Citations - 2602

Doug Downey is an academic researcher at the Allen Institute for Artificial Intelligence. He has contributed to research on topics including language models and commonsense reasoning, has an h-index of 10, and has co-authored 40 publications receiving 1270 citations. His previous affiliations include Northwestern University.

Papers
Proceedings ArticleDOI

Don't Stop Pretraining: Adapt Language Models to Domains and Tasks

TL;DR: The authors consistently find that multi-phase adaptive pretraining (first on domain text, then on task text) offers large gains in task performance, and show that adapting to a task corpus augmented using simple data selection strategies is an effective alternative, especially when resources for domain-adaptive pretraining are unavailable.
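
A rough sketch of the task-adaptive pretraining (TAPT) phase described above, assuming the Hugging Face Transformers and Datasets libraries: masked-LM pretraining of RoBERTa is continued on unlabeled task text before fine-tuning. The corpus file task_corpus.txt and all hyperparameters are placeholders, not the paper's settings.

```python
# Sketch: continue masked-LM pretraining of RoBERTa on unlabeled task text
# (task-adaptive pretraining), then fine-tune the saved checkpoint as usual.
from transformers import (AutoModelForMaskedLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)
from datasets import load_dataset

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForMaskedLM.from_pretrained("roberta-base")

# Unlabeled task text, one document per line (hypothetical file name).
corpus = load_dataset("text", data_files={"train": "task_corpus.txt"})["train"]
corpus = corpus.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True, remove_columns=["text"])

# Standard 15% token masking, as in RoBERTa's pretraining objective.
collator = DataCollatorForLanguageModeling(tokenizer, mlm_probability=0.15)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="tapt-roberta", num_train_epochs=1),
    train_dataset=corpus,
    data_collator=collator)
trainer.train()
model.save_pretrained("tapt-roberta")  # fine-tune this checkpoint on the end task
```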
Posted Content

Abductive Commonsense Reasoning

TL;DR: This study introduces a challenge dataset, ART, consisting of over 20k commonsense narrative contexts and 200k explanations, and conceptualizes two new tasks -- Abductive NLI: a multiple-choice question answering task for choosing the more likely explanation, and Abductive NLG: a conditional generation task for explaining given observations in natural language.
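
As a hedged illustration of the Abductive NLI setup, the sketch below scores two candidate explanations for a pair of observations with a multiple-choice head, assuming the Hugging Face Transformers library. The example sentences are invented, and the roberta-large checkpoint is untuned here; in practice the model would first be fine-tuned on ART.

```python
# Sketch: given observations o1 and o2, score each candidate explanation H
# by encoding (o1 + H, o2) as one choice of a multiple-choice problem.
import torch
from transformers import AutoTokenizer, AutoModelForMultipleChoice

tokenizer = AutoTokenizer.from_pretrained("roberta-large")
model = AutoModelForMultipleChoice.from_pretrained("roberta-large")

o1 = "Dotty was being very grumpy."            # illustrative observation 1
o2 = "She felt much better afterwards."        # illustrative observation 2
hypotheses = ["Dotty ate something that upset her stomach.",
              "Dotty called a close friend to chat."]

# One encoded sequence pair per hypothesis; shape (num_choices, seq_len).
first = [f"{o1} {h}" for h in hypotheses]
second = [o2] * len(hypotheses)
enc = tokenizer(first, second, padding=True, return_tensors="pt")
enc = {k: v.unsqueeze(0) for k, v in enc.items()}  # batch of 1, two choices

with torch.no_grad():
    logits = model(**enc).logits  # shape (1, num_choices)
print("More plausible hypothesis:", hypotheses[logits.argmax(-1).item()])
```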
Posted Content

Don't Stop Pretraining: Adapt Language Models to Domains and Tasks

TL;DR: The authors show that adapting to a task corpus augmented using simple data selection strategies is an effective alternative, especially when resources for domain-adaptive pretraining might be unavailable, and consistently find that multi-phase adaptive pretraining offers large gains in task performance.
Proceedings Article

Abductive Commonsense Reasoning

TL;DR: The authors investigate the feasibility of abductive reasoning in natural language inference and generation, and show that the best model achieves 68.9% accuracy, well below human performance of 91.4%.
Proceedings ArticleDOI

SPECTER: Document-level Representation Learning using Citation-informed Transformers

TL;DR: SPECTER is a new method for generating document-level embeddings of scientific papers, based on pretraining a Transformer language model on a powerful signal of document-level relatedness: the citation graph.
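
A minimal sketch of the training signal this describes, under stated assumptions: a Transformer encoder (SciBERT here, which SPECTER initializes from) embeds title plus abstract via the [CLS] token, and a triplet margin loss pulls a paper's embedding toward a paper it cites and away from one it does not. Triplet sampling and batching are simplified, and the paper texts are placeholders.

```python
# Sketch: citation-informed triplet loss over [CLS] embeddings of papers.
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("allenai/scibert_scivocab_uncased")
encoder = AutoModel.from_pretrained("allenai/scibert_scivocab_uncased")

def embed(title, abstract):
    # A paper's text is its title + abstract; its embedding is the [CLS] state.
    enc = tokenizer(title, abstract, truncation=True, return_tensors="pt")
    return encoder(**enc).last_hidden_state[:, 0]

# One illustrative (query, cited positive, un-cited negative) triplet.
q = embed("Query paper title", "Query paper abstract ...")
p = embed("Cited paper title", "Cited paper abstract ...")
n = embed("Unrelated paper title", "Unrelated paper abstract ...")

# Triplet margin loss with L2 distance: push the cited paper closer to the
# query than the un-cited one by at least the margin.
margin = 1.0
loss = F.relu(torch.dist(q, p) - torch.dist(q, n) + margin)
loss.backward()  # gradients flow back into the encoder
```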