
Luke Zettlemoyer

Researcher at Facebook

Publications -  344
Citations -  65369

Luke Zettlemoyer is an academic researcher from Facebook. The author has contributed to research in topics: Computer science & Parsing. The author has an h-index of 82 and has co-authored 278 publications receiving 40896 citations. Previous affiliations of Luke Zettlemoyer include Princeton University & Massachusetts Institute of Technology.

Papers
Proceedings ArticleDOI

Emerging Cross-lingual Structure in Pretrained Language Models

TL;DR: It is shown that transfer is possible even when there is no shared vocabulary across the monolingual corpora, and even when the text comes from very different domains. The results strongly suggest that, much like for non-contextual word embeddings, there are universal latent symmetries in the learned embedding spaces.
Proceedings ArticleDOI

Jointly Predicting Predicates and Arguments in Neural Semantic Role Labeling

TL;DR: This work proposes an end-to-end approach for jointly predicting all predicates, argument spans, and the relations between them, making independent decisions about what relationship, if any, holds between every possible word-span pair.
Proceedings ArticleDOI

Ultra-Fine Entity Typing

TL;DR: A model that can predict ultra-fine types is presented; it is trained using a multitask objective that pools the authors' new head-word supervision with prior supervision from entity linking. The model achieves state-of-the-art performance on an existing fine-grained entity typing benchmark and sets baselines for newly introduced datasets.
Proceedings ArticleDOI

Broad-coverage CCG Semantic Parsing with AMR

TL;DR: A new model is presented that combines CCG parsing, to recover compositional aspects of meaning, with a factor graph to model non-compositional phenomena such as anaphoric dependencies; it significantly outperforms the previous state of the art.
Proceedings ArticleDOI

ALFRED: A Benchmark for Interpreting Grounded Instructions for Everyday Tasks

TL;DR: ALFRED (Action Learning From Realistic Environments and Directives) is a benchmark for learning a mapping from natural language instructions and egocentric vision to sequences of actions for household tasks.