
Luke Zettlemoyer

Researcher at Facebook

Publications - 344
Citations - 65,369

Luke Zettlemoyer is an academic researcher at Facebook. He has contributed to research in topics including computer science and parsing, has an h-index of 82, and has co-authored 278 publications receiving 40,896 citations. His previous affiliations include Princeton University and the Massachusetts Institute of Technology.

Papers

Logical Particle Filtering.

TL;DR: In this article, a compact representation for relational hidden Markov models and an associated logical particle filtering algorithm are presented. Each particle contains a logical formula describing a set of states; the algorithm updates these formulae as new observations are received, so a single particle can track many states and the filter can be more accurate than a traditional particle filter in high-dimensional state spaces.
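
To make the idea concrete, here is a rough, hedged Python sketch of one filtering step in this style, approximating the paper's logical formulae with explicit state sets (the function names, the averaging-based reweighting, and the toy usage below are illustrative assumptions, not the paper's algorithm):

def logical_particle_filter(particles, transition, likelihood, observation):
    """One filtering step over set-valued particles.

    particles:   list of (weight, state_set) pairs, where each state set
                 stands in for a logical formula covering many states
    transition:  maps a state to its set of possible successor states
    likelihood:  P(observation | state), used to reweight particles
    """
    updated = []
    for weight, state_set in particles:
        # Propagate every state covered by the particle's "formula".
        successors = set()
        for s in state_set:
            successors |= set(transition(s))
        # Condition on the observation: keep consistent states, reweight.
        consistent = {s for s in successors if likelihood(observation, s) > 0}
        if consistent:
            avg_lik = sum(likelihood(observation, s) for s in consistent) / len(consistent)
            updated.append((weight * avg_lik, consistent))
    # Normalize the particle weights.
    total = sum(w for w, _ in updated) or 1.0
    return [(w / total, s) for w, s in updated]

# Toy usage: integer states, each state may stay put or advance by one,
# and the observation is the parity of the true state.
particles = [(1.0, {0, 2})]
step = logical_particle_filter(
    particles,
    transition=lambda s: {s, s + 1},
    likelihood=lambda obs, s: float(s % 2 == obs),
    observation=1,
)
print(step)  # one particle whose set kept only the odd successor states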
Proceedings ArticleDOI

Prompting Language Models for Linguistic Structure

TL;DR: The authors presented a structured prompting approach for linguistic structured prediction tasks, enabling zero- and few-shot sequence tagging with autoregressive language models, and evaluated it on part-of-speech tagging, named entity recognition, and sentence chunking.
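
As a rough illustration of what such a prompt might look like, here is a hedged Python sketch that builds a few-shot tagging prompt (the demonstration format, tag set, and function names are illustrative assumptions; the paper's actual prompt templates differ):

# Each demonstration pairs a sentence with its tag sequence.
demonstrations = [
    ("The dog barks", "DET NOUN VERB"),
    ("She reads books", "PRON VERB NOUN"),
]

def build_prompt(demos, sentence):
    """Concatenate tagged demonstrations, then ask the model to tag the input."""
    parts = []
    for text, tags in demos:
        parts.append(f"Sentence: {text}\nTags: {tags}")
    parts.append(f"Sentence: {sentence}\nTags:")
    return "\n\n".join(parts)

prompt = build_prompt(demonstrations, "A cat sleeps")
print(prompt)
# The prompt is then fed to an autoregressive LM, whose completion is split
# on whitespace and aligned with the input tokens to recover predicted tags.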
Proceedings ArticleDOI

Combining world and interaction models for human-robot collaborations

TL;DR: This paper studies a scenario in which a visually impaired person and a robotic “guide” collaborate in an unfamiliar environment, and analyzes how the scenario can be realized through language- and gesture-based human-robot interaction combined with semantic spatial understanding and reasoning.
Journal ArticleDOI

Stable and low-precision training for large-scale vision-language models

TL;DR: The authors introduced SwitchBack, a linear layer for int8 quantized training that provides a 13-25% speedup while matching the performance of bfloat16 training to within 0.1 percentage points for the 1B-parameter CLIP ViT-Huge.
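
To illustrate the kind of arithmetic an int8 linear layer relies on, here is a hedged NumPy sketch of a symmetrically quantized int8 matmul with int32 accumulation (illustrative only; SwitchBack's actual contribution concerns which forward- and backward-pass matmuls run in int8 versus 16-bit precision, which this sketch does not implement):

import numpy as np

def quantize_int8(x):
    """Symmetric per-tensor quantization to int8: x ~= scale * q."""
    scale = max(np.abs(x).max(), 1e-8) / 127.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def int8_linear(x, w):
    """Linear-layer forward pass with int8 inputs/weights, int32 accumulation."""
    qx, sx = quantize_int8(x)
    qw, sw = quantize_int8(w)
    acc = qx.astype(np.int32) @ qw.astype(np.int32).T  # exact int32 matmul
    return acc.astype(np.float32) * (sx * sw)          # dequantize the output

x = np.random.randn(4, 16).astype(np.float32)
w = np.random.randn(8, 16).astype(np.float32)
print(np.abs(int8_linear(x, w) - x @ w.T).max())  # small quantization error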
Posted Content

MetaICL: Learning to Learn In Context

TL;DR: Meta-training for In-Context Learning (MetaICL), as discussed by the authors, is a meta-training framework for few-shot learning in which a pretrained language model is tuned to do in-context learning on a large set of training tasks.
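
Here is a hedged Python sketch of how one meta-training instance might be constructed under such a framework (all names and the plain-text concatenation format are assumptions for illustration, not the paper's actual pipeline):

import random

def build_metaicl_example(tasks, k=4):
    """Sample one task, then k demonstrations plus one held-out query from it."""
    task = random.choice(tasks)                 # a task = list of (input, output)
    examples = random.sample(task, k + 1)
    demos, (query_in, query_out) = examples[:k], examples[-1]
    # Concatenate the demonstrations with the query input as in-context input.
    context = " ".join(f"{x} {y}" for x, y in demos) + f" {query_in}"
    return context, query_out                   # the LM is trained to emit query_out

# Toy usage: two "tasks", each a list of (input, output) pairs.
tasks = [
    [("2+2=", "4"), ("3+1=", "4"), ("5+2=", "7"), ("1+1=", "2"), ("4+3=", "7")],
    [("cat ->", "animal"), ("rose ->", "plant"), ("dog ->", "animal"),
     ("oak ->", "plant"), ("tuna ->", "animal")],
]
context, target = build_metaicl_example(tasks)
print(context, "=>", target)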