Luke Zettlemoyer
Researcher at Facebook
Publications - 344
Citations - 65,369
Luke Zettlemoyer is an academic researcher at Facebook whose work centers on computer science and parsing. He has an h-index of 82 and has co-authored 278 publications receiving 40,896 citations. His previous affiliations include Princeton University and the Massachusetts Institute of Technology.
Papers
Posted Content
Cloze-driven Pretraining of Self-attention Networks
TL;DR: A new approach for pretraining a bi-directional transformer model via a cloze-style word reconstruction task, which provides significant performance gains across a variety of language understanding problems, together with a detailed analysis of the factors that contribute to effective pretraining.
Proceedings Article
UnifiedSKG: Unifying and Multi-Tasking Structured Knowledge Grounding with Text-to-Text Language Models
Tianbao Xie, Chen Henry Wu, Peng Shi, Ruiqi Zhong, Torsten Scholak, Michihiro Yasunaga, Chien-Sheng Wu, Ming Zhong, Peng Yin, Sida Wang, Victor Zhong, Bailin Wang, Chengzu Li, Connor Boyle, Ansong Ni, Zhen Yao, Dragomir R. Radev, Caiming Xiong, Lingpeng Kong, Rui Zhang, Noah A. Smith, Luke Zettlemoyer, Tao Yu, and 22 others
TL;DR: The UnifiedSKG framework is proposed, which unifies 21 SKG tasks into a text-to-text format, aiming to promote systematic SKG research instead of being exclusive to a single task, domain, or dataset.
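The core idea here is that heterogeneous structured inputs (tables, databases, knowledge graphs) can be flattened into plain text so a single text-to-text model handles them all. Below is a minimal illustrative sketch of that serialization step for a table-QA task; the delimiters and field names are assumptions for illustration, not the paper's actual format.

```python
# Illustrative sketch: flatten a question plus a table into one input string
# for a text-to-text model (e.g. T5). Delimiters here are assumptions, not
# the UnifiedSKG format.
def serialize_skg_input(question, header, rows):
    """Serialize structured knowledge and a request into a single string."""
    header_text = " , ".join(header)
    rows_text = " | ".join(" , ".join(str(cell) for cell in row) for row in rows)
    return f"question: {question} ; header: {header_text} ; rows: {rows_text}"

print(serialize_skg_input(
    "Which country hosted the 2016 Olympics?",
    ["year", "host"],
    [[2012, "United Kingdom"], [2016, "Brazil"]],
))
```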
Proceedings ArticleDOI
Compositional Questions Do Not Necessitate Multi-hop Reasoning.
TL;DR: This work introduces a single-hop BERT-based RC model that achieves 67 F1, comparable to state-of-the-art multi-hop models, and designs an evaluation setting where humans are not shown all of the necessary paragraphs for the intended multi-hop reasoning but can still answer over 80% of questions.
Posted Content
Nearest Neighbor Machine Translation
TL;DR: This work introduces k-nearest-neighbor machine translation (kNN-MT), which predicts tokens with a nearest neighbor classifier over a large datastore of cached examples, using representations from a neural translation model for similarity search.
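The mechanism summarized above is simple enough to sketch: cache (decoder representation, next token) pairs, then at decoding time retrieve the k nearest cached states and interpolate the induced token distribution with the base model's. A minimal NumPy sketch follows; the function name, hyperparameters, and use of squared L2 distance are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def knn_mt_step(hidden, model_probs, keys, values, vocab_size,
                k=8, temperature=10.0, lam=0.5):
    """One decoding step of a kNN-MT-style sketch (names are hypothetical).

    hidden      : query representation at the current step, shape (d,)
    model_probs : base model's next-token distribution, shape (vocab_size,)
    keys        : cached decoder representations, shape (n, d)
    values      : next token observed at each cached state, shape (n,)
    """
    # Squared L2 distance from the query to every cached key.
    dists = np.sum((keys - hidden) ** 2, axis=1)
    nn = np.argsort(dists)[:k]  # indices of the k nearest neighbors

    # Softmax over negative distances gives weights for the neighbors.
    logits = -dists[nn] / temperature
    weights = np.exp(logits - logits.max())
    weights /= weights.sum()

    # Scatter neighbor weights onto their target tokens.
    knn_probs = np.zeros(vocab_size)
    np.add.at(knn_probs, values[nn], weights)

    # Interpolate the retrieval distribution with the base model's.
    return lam * knn_probs + (1 - lam) * model_probs

# Toy usage with random data:
rng = np.random.default_rng(0)
V, d, n = 100, 16, 1000
probs = knn_mt_step(rng.normal(size=d), np.full(V, 1.0 / V),
                    rng.normal(size=(n, d)), rng.integers(0, V, size=n), V)
assert abs(probs.sum() - 1.0) < 1e-6
```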
Proceedings ArticleDOI
Cloze-driven Pretraining of Self-attention Networks.
TL;DR: This paper proposes a cloze-style word reconstruction task, where each word is ablated and must be predicted given the rest of the text, and demonstrates large performance gains on GLUE and new state-of-the-art results on NER as well as constituency parsing benchmarks, consistent with BERT.
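To make the objective concrete, here is a minimal sketch of how the cloze-style reconstruction task described above can be posed as training examples: each token is ablated in turn and becomes a prediction target. The mask id and example format are assumptions for illustration only.

```python
MASK_ID = 0  # hypothetical mask-token id, not taken from the paper

def cloze_examples(token_ids):
    """Yield (masked_input, position, target_id) triples in which one token
    at a time is replaced by the mask and must be predicted from the rest."""
    for i, target in enumerate(token_ids):
        masked = list(token_ids)
        masked[i] = MASK_ID
        yield masked, i, target

# Every position of a 4-token sentence becomes one training example.
for masked, pos, target in cloze_examples([17, 42, 7, 99]):
    print(masked, pos, target)
```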