Open Access · Journal Article
Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu
TL;DR: This article introduced a unified framework that converts all text-based language problems into a text-to-text format and compared pre-training objectives, architectures, unlabeled data sets, transfer approaches, and other factors on dozens of language understanding tasks.

Abstract:
Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts all text-based language problems into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled data sets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new "Colossal Clean Crawled Corpus", we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our data set, pre-trained models, and code.
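The text-to-text framing means every task, from translation to classification, uses the same model, loss, and decoding procedure: the task is selected purely by a text prefix, and the answer is always generated as text. As a minimal sketch of the idea, here is how the released checkpoints can be driven through the Hugging Face transformers library (the library choice and the t5-small checkpoint name are assumptions of this sketch, not part of the paper itself):

# Minimal sketch: one model, many tasks, all framed as text-to-text.
# Assumes: pip install transformers sentencepiece torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

# Each task is selected purely by a text prefix; outputs are also text.
inputs = [
    "translate English to German: The house is wonderful.",
    "summarize: Transfer learning, where a model is first pre-trained "
    "on a data-rich task before being fine-tuned on a downstream task, "
    "has emerged as a powerful technique in NLP.",
    "cola sentence: The course is jumping well.",  # grammaticality -> "acceptable"/"not acceptable"
]
for text in inputs:
    ids = tokenizer(text, return_tensors="pt").input_ids
    out = model.generate(ids, max_new_tokens=50)
    print(tokenizer.decode(out[0], skip_special_tokens=True))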
Citations
Posted Content
BERT-QE: Contextualized Query Expansion for Document Re-ranking
TL;DR: Proposes a novel query expansion model that leverages the strength of BERT to select relevant document chunks for expansion, significantly outperforming BERT-Large models.
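As a rough sketch of the chunk-selection idea (the function names and three-phase flow here are illustrative, with score(query, text) standing in for a BERT cross-encoder relevance model; this is not the authors' released code):

# Schematic sketch of BERT-based query expansion for re-ranking.
# `score(query, text)` stands in for a BERT cross-encoder relevance model;
# all names here are illustrative, not the authors' released code.

def expand_and_rerank(query, docs, score, chunk_size=100, top_chunks=10, alpha=0.5):
    # Phase 1: initial ranking of documents by BERT relevance to the query.
    ranked = sorted(docs, key=lambda d: score(query, d), reverse=True)

    # Phase 2: split top feedback documents into chunks and keep the
    # chunks BERT judges most relevant to the query.
    chunks = []
    for doc in ranked[:10]:
        words = doc.split()
        chunks += [" ".join(words[i:i + chunk_size])
                   for i in range(0, len(words), chunk_size)]
    best = sorted(chunks, key=lambda c: score(query, c), reverse=True)[:top_chunks]

    # Phase 3: re-rank by interpolating the original query score with the
    # aggregated chunk (expansion) evidence; a plain average is a
    # simplification of the paper's weighted aggregation.
    def final(d):
        expansion = sum(score(c, d) for c in best) / len(best)
        return alpha * score(query, d) + (1 - alpha) * expansion

    return sorted(ranked, key=final, reverse=True)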
Posted Content
A Mathematical Exploration of Why Language Models Help Solve Downstream Tasks
TL;DR: This paper hypothesizes, and verifies empirically, that classification tasks of interest can be reformulated as next-word prediction tasks, making language modeling a meaningful pre-training task; an analysis of the cross-entropy objective shows that $\epsilon$-optimal language models in cross-entropy (log-perplexity) learn features that are $\mathcal{O}(\sqrt{\epsilon})$-good on natural linear classification tasks.
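The reformulation can be made concrete: append a prompt so that the class label becomes the next word, then compare the language model's probabilities for the candidate label words. A minimal sketch with GPT-2 via Hugging Face transformers (the model, prompt, and label words are this sketch's choices, not the paper's experimental setup):

# Minimal sketch: sentiment classification recast as next-word prediction.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

def classify(review):
    # The prompt turns the label into the most natural next word.
    prompt = review + " Overall, the movie was"
    ids = tokenizer(prompt, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits[0, -1]       # next-token distribution
    probs = logits.softmax(-1)
    pos = probs[tokenizer.encode(" great")[0]]   # label words as next tokens
    neg = probs[tokenizer.encode(" terrible")[0]]
    return "positive" if pos > neg else "negative"

print(classify("I could not stop smiling the whole time."))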
Posted Content
Video Swin Transformer
TL;DR: In this article, the authors advocate an inductive bias of locality in video Transformers, which leads to a better speed-accuracy trade-off than previous approaches that compute self-attention globally, even with spatiotemporal factorization.
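Concretely, the locality bias amounts to computing self-attention only among tokens inside small 3D (time × height × width) windows rather than across the whole video. A shape-level sketch of the window partition is below; the learned Q/K/V projections, shifted windows, and relative position biases of the actual model are omitted, so treat this as an illustration under those assumptions rather than the paper's implementation:

# Rough sketch of local-window attention for video: attention is computed
# only within non-overlapping 3D windows, not globally over all tokens.
import torch
import torch.nn.functional as F

def window_attention(x, window=(2, 7, 7)):
    # x: (batch, T, H, W, C) video tokens; T, H, W divisible by the window.
    B, T, H, W, C = x.shape
    wt, wh, ww = window
    # Partition into 3D windows -> (num_windows * B, tokens_per_window, C).
    x = x.view(B, T // wt, wt, H // wh, wh, W // ww, ww, C)
    x = x.permute(0, 1, 3, 5, 2, 4, 6, 7).reshape(-1, wt * wh * ww, C)
    # Self-attention among tokens inside each window (no learned
    # Q/K/V projections in this simplified sketch).
    attn = F.softmax(x @ x.transpose(1, 2) / C ** 0.5, dim=-1)
    out = attn @ x
    # Reverse the partition back to (B, T, H, W, C).
    out = out.view(B, T // wt, H // wh, W // ww, wt, wh, ww, C)
    return out.permute(0, 1, 4, 2, 5, 3, 6, 7).reshape(B, T, H, W, C)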
Posted Content
NeurIPS 2020 EfficientQA Competition: Systems, Analyses and Lessons Learned
Sewon Min, Jordan Boyd-Graber, Chris Alberti, Danqi Chen, Eunsol Choi, Michael Collins, Kelvin Guu, Hannaneh Hajishirzi, Kenton Lee, Jennimaria Palomaki, Colin Raffel, Adam Roberts, Tom Kwiatkowski, Patrick S. H. Lewis, Yuxiang Wu, Heinrich Küttler, Linqing Liu, Pasquale Minervini, Pontus Stenetorp, Sebastian Riedel, Sohee Yang, Minjoon Seo, Gautier Izacard, Fabio Petroni, Lucas Hosseini, Nicola De Cao, Edouard Grave, Ikuya Yamada, Sonse Shimaoka, Masatoshi Suzuki, Shumpei Miyawaki, Shun Sato, Ryo Takahashi, Jun Suzuki, Martin Fajcik, Martin Docekal, Karel Ondrej, Pavel Smrz, Hao Cheng, Yelong Shen, Xiaodong Liu, Pengcheng He, Weizhu Chen, Jianfeng Gao, Barlas Oguz, Xilun Chen, Vladimir Karpukhin, Stan Peshterliev, Dmytro Okhonko, Michael Sejr Schlichtkrull, Sonal Gupta, Yashar Mehdad, Wen-tau Yih
TL;DR: The NeurIPS 2020 EfficientQA competition focused on open-domain question answering (QA), where systems take natural language questions as input and return natural language answers; its aim was to build systems that can predict correct answers while also satisfying strict on-disk memory budgets.
Proceedings Article
XtremeDistil: Multi-stage Distillation for Massive Multilingual Models
TL;DR: This paper proposes a stage-wise optimization scheme that leverages teacher internal representations and is agnostic to the teacher's architecture; it outperforms strategies employed in prior work, and the study also investigates the role of several factors, such as the amount of unlabeled data, annotation resources, model architecture, and inference latency.
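As a schematic of the stage-wise idea (the teacher/student interfaces, losses, and two-stage schedule below are assumptions for illustration, not the paper's exact recipe):

# Schematic of stage-wise distillation. `teacher` and `student` are assumed
# to return (logits, hidden_state); losses and schedule are illustrative.
import torch
import torch.nn.functional as F

def distill_stage(teacher, student, proj, unlabeled_loader, opt, stage):
    teacher.eval()
    for batch in unlabeled_loader:
        with torch.no_grad():
            t_logits, t_hidden = teacher(batch)
        s_logits, s_hidden = student(batch)
        if stage == 1:
            # Stage 1: match an internal teacher representation; `proj`
            # maps the student's (smaller) hidden size to the teacher's,
            # keeping the scheme agnostic to the teacher's architecture.
            loss = F.mse_loss(proj(s_hidden), t_hidden)
        else:
            # Stage 2: match the teacher's output distribution (soft labels).
            loss = F.kl_div(F.log_softmax(s_logits, -1),
                            F.softmax(t_logits, -1), reduction="batchmean")
        opt.zero_grad()
        loss.backward()
        opt.step()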