
Luke Zettlemoyer

Researcher at Facebook

Publications: 344
Citations: 65,369

Luke Zettlemoyer is an academic researcher at Facebook. He has contributed to research in topics including computer science and parsing, has an h-index of 82, and has co-authored 278 publications receiving 40,896 citations. His previous affiliations include Princeton University and the Massachusetts Institute of Technology.

Papers
Proceedings Article

Learning to Relate Literal and Sentimental Descriptions of Visual Properties

TL;DR: This paper presents a new dataset, collected to describe Xbox avatars, as well as models for learning the relationships between these avatars and their literal and sentimental descriptions, and demonstrates that sentimental language provides a concise (though noisy) means of specifying low-level visual properties.
Posted Content

FaVIQ: FAct Verification from Information-seeking Questions.

TL;DR: The FaVIQ dataset, as discussed by the authors, uses information-seeking questions posed by real users who do not know the answer, which enables automatically constructing true and false claims that reflect confusions arising among users (e.g., the year a movie was filmed vs. the year it was released).
Posted Content

FEWS: Large-Scale, Low-Shot Word Sense Disambiguation with the Dictionary

TL;DR: This paper proposes FEWS (Few-shot Examples of Word Senses), a new low-shot WSD dataset automatically extracted from example sentences in Wiktionary. FEWS offers high sense coverage across natural language domains, a training set covering many more senses than previous datasets, and a comprehensive evaluation set containing few- and zero-shot examples of a wide variety of senses.
Posted Content

DESCGEN: A Distantly Supervised Dataset for Generating Abstractive Entity Descriptions.

TL;DR: The DESCGEN dataset as discussed by the authors consists of 37k entity descriptions from Wikipedia and Fandom, each paired with nine evidence documents on average, and the documents were collected using a combination of entity linking and hyperlinks to the Wikipedia entity pages, which together provided high-quality distant supervision.
Posted Content

Bilingual Lexicon Induction via Unsupervised Bitext Construction and Word Alignment.

TL;DR: The authors combine unsupervised bitext mining and word alignment to improve the quality of bilingual lexicons, achieving state-of-the-art performance on the BUCC 2020 shared task.