Luke Zettlemoyer
Researcher at Facebook
Publications - 344
Citations - 65369
Luke Zettlemoyer is an academic researcher at Facebook. He has contributed to research in topics including computer science and parsing. He has an h-index of 82 and has co-authored 278 publications receiving 40,896 citations. Previous affiliations of Luke Zettlemoyer include Princeton University and the Massachusetts Institute of Technology.
Papers
Posted Content
Pre-training via Paraphrasing
Michael Lewis, Marjan Ghazvininejad, Gargi Ghosh, Armen Aghajanyan, Sida I. Wang, Luke Zettlemoyer +5 more
TL;DR: It is shown that fine-tuning MARGE gives strong performance on a range of discriminative and generative tasks in many languages, making it the most generally applicable pre-training method to date.
Posted Content
End-to-end Neural Coreference Resolution
TL;DR: This paper proposes an end-to-end coreference resolution model that directly considers all spans in a document as potential mentions and learns distributions over possible antecedents for each. The model is trained to maximize the marginal likelihood of gold antecedent spans from coreference clusters and is factored to enable aggressive pruning of potential mentions.
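The span-enumeration-and-pruning idea in this TL;DR can be illustrated with a minimal sketch. This is not the paper's model: the learned neural mention and antecedent scorers are replaced with a toy capitalization heuristic, and `enumerate_spans`, `mention_score`, and `prune` are hypothetical helper names chosen for illustration.

```python
# Hypothetical sketch of span-based coreference: enumerate all candidate
# spans up to a maximum width, score each as a mention, and keep only the
# top-scoring spans (aggressive pruning) before pairing each surviving
# span with its possible antecedents. Scores are toy stand-ins, not the
# paper's learned functions.
from itertools import combinations

def enumerate_spans(tokens, max_width=4):
    """All (start, end) spans up to max_width tokens (end exclusive)."""
    spans = []
    for start in range(len(tokens)):
        for end in range(start + 1, min(start + max_width, len(tokens)) + 1):
            spans.append((start, end))
    return spans

def mention_score(tokens, span):
    """Toy stand-in for a learned mention score: prefer capitalized spans."""
    words = tokens[span[0]:span[1]]
    return sum(w[0].isupper() for w in words) / len(words)

def prune(tokens, spans, keep_ratio=0.4):
    """Keep only the top fraction of spans by mention score."""
    ranked = sorted(spans, key=lambda s: mention_score(tokens, s), reverse=True)
    return ranked[: max(1, int(len(spans) * keep_ratio))]

tokens = "Barack Obama said he would visit Chicago".split()
spans = enumerate_spans(tokens)
kept = prune(tokens, spans)
# Each kept span would then be scored against every earlier kept span
# (its candidate antecedents) plus a dummy "no antecedent" option.
antecedent_pairs = [(a, s) for a, s in combinations(kept, 2)]
```

Pruning matters because the number of candidate spans grows with document length times maximum span width, so scoring all span pairs without it would be quadratic in that large set.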
Proceedings ArticleDOI
Mapping Language to Code in Programmatic Context
TL;DR: In this article, the authors introduce the task of generating class member functions given English documentation and the programmatic context provided by the rest of the class. The task is challenging because the desired code can vary greatly depending on the functionality the class provides (e.g., a sort function may or may not be available when we are asked to return the smallest element of a particular member-variable list).
Proceedings Article
Joint Coreference Resolution and Named-Entity Linking with Multi-Pass Sieves
TL;DR: NECO is introduced, a new model for named entity linking and coreference resolution, which solves both problems jointly, reducing the errors made on each.
Posted Content
Detecting Hallucinated Content in Conditional Neural Sequence Generation
Chunting Zhou, Graham Neubig, Jiatao Gu, Mona Diab, Paco Guzman, Luke Zettlemoyer, Marjan Ghazvininejad +6 more
TL;DR: A new task is proposed: predicting whether each token in the output sequence is hallucinated, conditioned on the source input. The paper also presents a novel method for learning hallucination detection, based on pretrained language models fine-tuned on synthetic data that includes automatically inserted hallucinations.
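The synthetic-data idea in this TL;DR can be sketched in a few lines. This is a simplified stand-in, not the paper's procedure: insertions here are supplied deterministically rather than sampled from a language model, and `make_synthetic_example` is a hypothetical helper name.

```python
# Hypothetical sketch of building token-level hallucination training data:
# take a clean target sentence, splice in extra tokens that have no
# support in the source, and label every output token 1 (hallucinated)
# or 0 (faithful). A detector would then be trained on these
# (token, label) pairs.
def make_synthetic_example(target_tokens, insertions):
    """insertions: list of (position, token) pairs to splice in."""
    out, labels = list(target_tokens), [0] * len(target_tokens)
    # Insert from the rightmost position first so earlier indices stay valid.
    for pos, tok in sorted(insertions, reverse=True):
        out.insert(pos, tok)
        labels.insert(pos, 1)
    return out, labels

tokens = "the cat sat on the mat".split()
noisy, labels = make_synthetic_example(tokens, [(2, "quickly"), (5, "red")])
```

The key property is that the labels come for free: because the corruptions are inserted programmatically, every token's gold hallucination label is known without any human annotation.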