Chris Alberti
Researcher at Google
Publications - 46
Citations - 5035
Chris Alberti is an academic researcher at Google. His work focuses on question answering and related areas of computer science. He has an h-index of 21 and has co-authored 40 publications receiving 3,053 citations. His previous affiliations include the Georgia Institute of Technology.
Papers
Journal ArticleDOI
Natural Questions: A Benchmark for Question Answering Research
Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur P. Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Jacob Devlin, Kenton Lee, Kristina Toutanova, Llion Jones, Matthew Kelcey, Ming-Wei Chang, Andrew M. Dai, Jakob Uszkoreit, Quoc V. Le, Slav Petrov +17 more
TL;DR: The Natural Questions corpus, a question answering dataset, is presented; robust metrics for evaluating question answering systems are introduced; high human upper bounds on these metrics are demonstrated; and baseline results are established using competitive methods drawn from the related literature.
Posted Content
Big Bird: Transformers for Longer Sequences
Manzil Zaheer, Guru Guruganesh, Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontañón, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, Amr Ahmed +10 more
TL;DR: It is shown that BigBird is a universal approximator of sequence functions and is Turing complete, thereby preserving these properties of the quadratic, full attention model.
Proceedings ArticleDOI
Globally Normalized Transition-Based Neural Networks
Daniel Andor, Chris Alberti, David J. Weiss, Aliaksei Severyn, Alessandro Presta, Kuzman Ganchev, Slav Petrov, Michael Collins +7 more
TL;DR: A globally normalized transition-based neural network model is introduced that achieves state-of-the-art results in part-of-speech tagging, dependency parsing, and sentence compression.
Proceedings ArticleDOI
ETC: Encoding Long and Structured Inputs in Transformers
Joshua Ainslie, Santiago Ontañón, Chris Alberti, Vaclav Cvicek, Zachary Fisher, Philip Pham, Anirudh Ravula, Sumit Sanghai, Qifan Wang, Li Yang +9 more
TL;DR: Extended Transformer Construction (ETC) introduces a novel global-local attention mechanism between global tokens and regular input tokens, scaling attention to longer inputs and achieving state-of-the-art results on four natural language datasets.
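The global-local attention pattern described in the TL;DR above can be illustrated with a small sketch. This is a hypothetical, simplified mask builder (the function name, parameters, and window shape are assumptions for illustration, not the paper's implementation): global tokens attend to every position, while each long-input token attends to all global tokens plus a sliding local window.

```python
import numpy as np

def etc_attention_mask(n_global, n_long, radius=2):
    """Simplified sketch of an ETC-style global-local attention mask.

    Returns a boolean (n, n) matrix where mask[i, j] = True means
    token i may attend to token j. Global tokens occupy the first
    n_global positions; long-input tokens occupy the rest.
    """
    n = n_global + n_long
    mask = np.zeros((n, n), dtype=bool)
    # Global tokens attend to every position (global-to-global and global-to-long).
    mask[:n_global, :] = True
    # Every long-input token attends to all global tokens (long-to-global).
    mask[n_global:, :n_global] = True
    # Long-to-long attention is restricted to a local window of width 2*radius + 1.
    for i in range(n_long):
        lo = max(0, i - radius)
        hi = min(n_long, i + radius + 1)
        mask[n_global + i, n_global + lo:n_global + hi] = True
    return mask
```

Because long-to-long attention is banded rather than dense, the cost of that component grows linearly in the input length (for fixed radius) instead of quadratically, which is the point of the global-local factorization.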
Posted Content
Structured Training for Neural Network Transition-Based Parsing
TL;DR: This work presents structured perceptron training for neural network transition-based dependency parsing, and provides in-depth ablative analysis to determine which aspects of the model provide the largest gains in accuracy.