
Luke Zettlemoyer

Researcher at Facebook

Publications: 344
Citations: 65,369

Luke Zettlemoyer is an academic researcher from Facebook. The author has contributed to research in topics including computer science and parsing. The author has an h-index of 82 and has co-authored 278 publications receiving 40,896 citations. Previous affiliations of Luke Zettlemoyer include Princeton University and the Massachusetts Institute of Technology.

Papers
Proceedings Article

An Imprecise Mouse Gesture for the Fast Activation of Controls.

TL;DR: Describes the flick gesture, designed for the fast activation of controls, and presents experimental results demonstrating that it performs well in speed, accuracy, and variability compared to conventional gestures in a laboratory setting.
Posted Content

Vision-and-Dialog Navigation

TL;DR: This work introduces Cooperative Vision-and-Dialog Navigation, a dataset of over 2k embodied, human-human dialogs situated in simulated, photorealistic home environments, and establishes an initial multi-modal sequence-to-sequence model for the task.
Patent

Methods, systems, and computer program products for providing automated customer service via an intelligent virtual agent that is trained using customer-agent conversations

TL;DR: In this paper, a customer communication is handled by an agent executing on a data processing system: the agent receives an utterance from the customer and uses a knowledge base, containing information extracted from one or more exemplary conversations, to generate a response to the received utterance.
Posted Content

Learning STRIPS Operators from Noisy and Incomplete Observations

TL;DR: This work proposes a method that learns STRIPS action models in such domains by decomposing the problem into first learning a transition function between states in the form of a set of classifiers, and then deriving explicit STRIPS rules from the classifiers' parameters.
Proceedings Article

Aligned Cross Entropy for Non-Autoregressive Machine Translation

TL;DR: Proposes aligned cross entropy (AXE) as an alternative loss function for training non-autoregressive models; AXE-based training of conditional masked language models (CMLMs) substantially improves performance on major WMT benchmarks, setting a new state of the art for non-autoregressive models.
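The core idea behind an alignment-based loss like AXE — scoring predictions under the best monotonic alignment to the target rather than strictly position by position — can be sketched with a small dynamic program. This is an illustrative simplification, not the paper's exact formulation: the function name, the dictionary-based interface, and the fixed skip penalty (the paper instead uses a special blank token's probability) are all assumptions made for the sketch.

```python
import math

def aligned_xent(log_probs, target, skip_penalty=2.0):
    """Minimal sketch of a monotonic-alignment cross-entropy loss.

    log_probs: one dict per prediction position, mapping token -> log prob.
    target:    list of gold tokens.

    A dynamic program finds the monotonic alignment of target tokens to
    prediction positions minimizing total negative log likelihood, where
    an unaligned prediction position pays a fixed skip_penalty.
    (Target-side skips are omitted here for brevity.)
    """
    n, m = len(log_probs), len(target)
    INF = float("inf")
    # dp[i][j]: min cost of aligning the first j target tokens
    #           to the first i prediction positions.
    dp = [[INF] * (m + 1) for _ in range(n + 1)]
    dp[0][0] = 0.0
    for i in range(n + 1):
        for j in range(m + 1):
            cur = dp[i][j]
            if cur == INF:
                continue
            if i < n and j < m:
                # Align prediction position i with target token j;
                # tokens the model assigns no mass get a huge cost.
                cost = -log_probs[i].get(target[j], -1e9)
                dp[i + 1][j + 1] = min(dp[i + 1][j + 1], cur + cost)
            if i < n:
                # Skip prediction position i (leave it unaligned).
                dp[i + 1][j] = min(dp[i + 1][j], cur + skip_penalty)
    return dp[n][m]
```

With predictions that already match the target in order, the loss reduces to the ordinary summed negative log likelihood; if a correct prediction is shifted by one position, the alignment absorbs the shift at the cost of one skip penalty instead of a full cross-entropy miss at every position.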