scispace - formally typeset

Luke Zettlemoyer

Researcher at Facebook

Publications -  344
Citations -  65369

Luke Zettlemoyer is an academic researcher at Facebook. He has contributed to research topics including computer science and parsing. He has an h-index of 82, having co-authored 278 publications that have received 40,896 citations. His previous affiliations include Princeton University and the Massachusetts Institute of Technology.

Papers
Journal ArticleDOI

Few-shot Mining of Naturally Occurring Inputs and Outputs

TL;DR: This method mines naturally occurring, high-quality input-output pairs that mimic the style of a seed set across multiple tasks, and achieves an improvement of 1.46 ROUGE-L on XSum abstractive summarization.
Journal ArticleDOI

Revisiting Machine Translation for Cross-lingual Classification

TL;DR: This paper showed that by using a stronger MT system and mitigating the mismatch between training on original text and running inference on machine-translated text, translate-test can perform substantially better than previously assumed.
Posted Content

Prompting Contrastive Explanations for Commonsense Reasoning Tasks.

TL;DR: The authors used pre-trained language models to generate contrastive explanations for commonsense reasoning tasks, which human judges found more relevant for solving the task and which enable a novel method to evaluate explanation faithfulness.
Posted Content

Evaluating Gender Bias in Machine Translation

TL;DR: The authors presented the first challenge set and evaluation protocol for the analysis of gender bias in machine translation (MT) using two coreference resolution datasets composed of English sentences which cast participants into non-stereotypical gender roles (e.g., "The doctor asked the nurse to help her in the operation").
Posted Content

Language Grounding with 3D Objects

TL;DR: In this article, the authors introduce a new reasoning task that targets both visual and non-visual language about 3D objects in the world, and find that adding view estimation to language grounding models improves accuracy both on SNARE and when identifying objects referred to in language on a robot platform.