Jacob Eisenstein

Researcher at Google

Publications - 201
Citations - 11502

Jacob Eisenstein is an academic researcher at Google. He has contributed to research in topics including Gesture and Language model. He has an h-index of 50 and has co-authored 196 publications receiving 9772 citations. His previous affiliations include the Georgia Institute of Technology and the University of Illinois at Urbana–Champaign.

Papers
Proceedings Article

Measuring and Modeling Language Change

TL;DR: This tutorial provides an overview of techniques and datasets from the quantitative social sciences and the digital humanities that are not well known in the computational linguistics community, including vector autoregressive models, multiple-comparisons corrections for hypothesis testing, and causal inference.
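As an illustration of one of the techniques the tutorial names, the sketch below applies a Bonferroni multiple-comparisons correction to a set of word-change hypothesis tests using statsmodels; the p-values and word IDs are invented for the example.

```python
# Toy sketch: multiple-comparisons correction for language-change
# hypothesis tests, one of the techniques named in the tutorial.
# The p-values below are invented for illustration.
from statsmodels.stats.multitest import multipletests

# Hypothetical p-values from testing whether each of five words
# changed in frequency between two time periods.
p_values = [0.001, 0.04, 0.03, 0.20, 0.008]

reject, p_corrected, _, _ = multipletests(p_values, alpha=0.05,
                                          method="bonferroni")
for word_id, (r, p) in enumerate(zip(reject, p_corrected)):
    print(f"word {word_id}: corrected p = {p:.3f}, significant = {r}")
```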
Posted Content

Toward Socially-Infused Information Extraction: Embedding Authors, Mentions, and Entities

TL;DR: Authors, mentions, and entities are encoded in a continuous vector space, constructed so that socially connected authors have similar vector representations; these embeddings are incorporated into a neural structured prediction model that captures structural constraints inherent in the entity linking task.
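A toy numpy sketch of the idea in this summary: authors, mentions, and entities share one vector space, and a linking score can combine textual and social similarity. All embeddings, entity names, and weights below are invented stand-ins, not the paper's actual model.

```python
# Toy sketch of socially-infused entity linking: authors, mentions,
# and entities live in one vector space, and the linking score for a
# (mention, entity) pair is boosted when the mention's author is
# close to that entity in the shared space.
# All embeddings here are random stand-ins for learned ones.
import numpy as np

rng = np.random.default_rng(0)
dim = 8
author_vec = rng.normal(size=dim)        # embedding of the post's author
mention_vec = rng.normal(size=dim)       # embedding of the mention text
entity_vecs = {"Atlanta_city": rng.normal(size=dim),   # hypothetical entities
               "Atlanta_band": rng.normal(size=dim)}

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Score = textual similarity + social similarity (illustrative weights).
for name, e in entity_vecs.items():
    score = cosine(mention_vec, e) + 0.5 * cosine(author_vec, e)
    print(f"{name}: {score:.3f}")
```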
Journal Article

Identifying visual attributes for object recognition from text and taxonomy

TL;DR: Two novel techniques for nominating attributes, together with a method for assessing the suitability of candidate attributes for object recognition, are introduced; both taxonomy and distributional similarity serve as useful sources of information for attribute nomination, and the proposed methods can effectively exploit them.
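A minimal sketch of the taxonomy side of attribute nomination, under the assumption that an object inherits candidate attributes from its ancestors in a taxonomy; the taxonomy and attribute lists here are invented for illustration.

```python
# Toy sketch of taxonomy-based attribute nomination: an object
# inherits candidate attributes from its ancestors in a hand-made
# taxonomy. The taxonomy and attribute lists are invented.
taxonomy = {"zebra": "equine", "equine": "mammal", "mammal": None}
attributes = {"mammal": ["furry"], "equine": ["four-legged"],
              "zebra": ["striped"]}

def nominate(obj):
    """Collect attributes of obj and all of its taxonomy ancestors."""
    out = []
    while obj is not None:
        out.extend(attributes.get(obj, []))
        obj = taxonomy.get(obj)
    return out

print(nominate("zebra"))  # ['striped', 'four-legged', 'furry']
```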
Posted Content

Part-of-Speech Tagging for Historical English

TL;DR: This paper assesses the capability of domain adaptation techniques to cope with historical texts, focusing on the classic benchmark task of part-of-speech tagging, and demonstrates that a feature embedding method for unsupervised domain adaptation outperforms word embeddings and Brown clusters, showing the importance of embedding the entire feature space rather than just individual words.
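A toy sketch of the "embed the entire feature space" idea: every feature template, not just the surface word, gets an embedding, and a token is represented by concatenating them. The embedding table below is random and the feature templates are simplified; the actual method learns these embeddings from unlabeled text in both domains.

```python
# Toy sketch: each feature template (current word, suffix, previous
# word) gets its own embedding, and a token is represented by
# concatenating the embeddings of its active features. Random
# embeddings stand in for learned ones.
import numpy as np

rng = np.random.default_rng(2)
dim = 4
emb = {}  # lazily assigned random embedding per feature

def feat_vec(feature):
    if feature not in emb:
        emb[feature] = rng.normal(size=dim)
    return emb[feature]

def token_rep(prev_word, word):
    # Simplified feature templates for a POS tagger.
    feats = [f"w={word}", f"suf3={word[-3:]}", f"prev={prev_word}"]
    return np.concatenate([feat_vec(f) for f in feats])

print(token_rep("thou", "knowest").shape)  # (12,) = 3 templates x dim 4
```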
Posted Content

Revisiting the Primacy of English in Zero-shot Cross-lingual Transfer

TL;DR: The authors compare English with other transfer languages for fine-tuning, using two pre-trained multilingual models (mBERT and mT5) across multiple classification and question answering tasks.
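A minimal sketch of the zero-shot cross-lingual transfer setup with the Hugging Face transformers library, assuming mBERT as the encoder. No actual fine-tuning happens here, so the classification head is randomly initialized and the prediction is only a placeholder for the pipeline shape; in the real setup the model would first be fine-tuned on labeled data in the transfer language.

```python
# Minimal sketch of zero-shot cross-lingual transfer: a multilingual
# encoder fine-tuned on one language's task data is applied directly
# to another language with no further training. The head here is
# untrained, so the output is illustrative only.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "bert-base-multilingual-cased"  # mBERT
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(
    model_name, num_labels=2)

# Evaluate on a non-English input after (hypothetical) fine-tuning
# on the transfer language only.
batch = tokenizer(["Das Essen war ausgezeichnet."],  # German input
                  return_tensors="pt")
with torch.no_grad():
    logits = model(**batch).logits
print(logits.argmax(dim=-1))
```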