Open Access Journal Article
Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu
TL;DR: This article introduced a unified framework that converts all text-based language problems into a text-to-text format and compared pre-training objectives, architectures, unlabeled data sets, transfer approaches, and other factors on dozens of language understanding tasks.
Abstract: Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts all text-based language problems into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled data sets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new "Colossal Clean Crawled Corpus", we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our data set, pre-trained models, and code.
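As a concrete illustration of the text-to-text format described above, here is a minimal sketch that loads a released T5 checkpoint through the Hugging Face transformers library; the t5-small variant, the prompts, and the generation settings are illustrative choices, not the paper's experimental setup.

    from transformers import T5ForConditionalGeneration, T5Tokenizer

    # Load a released T5 checkpoint (smallest variant, for illustration).
    tokenizer = T5Tokenizer.from_pretrained("t5-small")
    model = T5ForConditionalGeneration.from_pretrained("t5-small")

    # Every task is plain text in, plain text out; the task is signaled
    # by a natural-language prefix on the input string.
    prompts = [
        "translate English to German: The house is wonderful.",
        "summarize: Transfer learning, where a model is first pre-trained "
        "on a data-rich task before being fine-tuned on a downstream task, "
        "has emerged as a powerful technique in natural language processing.",
    ]
    for prompt in prompts:
        inputs = tokenizer(prompt, return_tensors="pt")
        outputs = model.generate(**inputs, max_new_tokens=40)
        print(tokenizer.decode(outputs[0], skip_special_tokens=True))

Because both tasks share one input/output interface, the same model, loss function, and decoding procedure serve translation, summarization, classification, and question answering alike.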
Citations
Proceedings Article
Vokenization: Improving Language Understanding with Contextualized, Visual-Grounded Supervision
Hao Tan, Mohit Bansal
TL;DR: The authors propose a technique named vokenization that extrapolates multimodal alignments to language-only data by contextually mapping language tokens to their related images, which is trained on relatively small image captioning datasets and applied to generate vokens for large language corpora.
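A minimal sketch of the token-to-image mapping this TL;DR describes, assuming token and image embeddings that already live in a shared space (the encoders and image set here are hypothetical stand-ins, not the authors' released components):

    import torch

    def vokenize(token_embeddings: torch.Tensor,
                 image_embeddings: torch.Tensor) -> torch.Tensor:
        # token_embeddings: (seq_len, dim) contextualized states, e.g. from BERT.
        # image_embeddings: (num_images, dim) from an image encoder trained into
        # the same space on a small image-captioning dataset.
        tok = torch.nn.functional.normalize(token_embeddings, dim=-1)
        img = torch.nn.functional.normalize(image_embeddings, dim=-1)
        scores = tok @ img.T          # cosine similarity, (seq_len, num_images)
        return scores.argmax(dim=-1)  # one "voken" id per token

    # Toy usage with random embeddings standing in for real encoders.
    vokens = vokenize(torch.randn(12, 256), torch.randn(1000, 256))
    print(vokens.shape)  # torch.Size([12])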
Proceedings Article
Incorporating commonsense knowledge graph in pretrained models for social commonsense tasks
TL;DR: This work proposes two approaches to implicitly and explicitly infuse external commonsense knowledge graphs (KGs) into pretrained language models, and demonstrates that these methods perform well on SocialIQA, a social commonsense reasoning task, in both limited and full training data regimes.
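A rough sketch of the explicit-infusion route mentioned here: knowledge-graph triples are verbalized into plain text and concatenated to the task input before it reaches the pretrained model (the triple format and the retrieved facts are hypothetical stand-ins; the paper's actual architectures may differ):

    from typing import List, Tuple

    Triple = Tuple[str, str, str]  # (head, relation, tail), e.g. from ConceptNet

    def verbalize(triples: List[Triple]) -> str:
        # Render KG triples as text so a pretrained LM can consume them.
        return " ".join(f"{h} {r.replace('_', ' ')} {t}." for h, r, t in triples)

    def build_input(question: str, triples: List[Triple]) -> str:
        # Explicit infusion: prepend verbalized commonsense facts to the input.
        return f"{verbalize(triples)} [SEP] {question}"

    facts = [("party", "used_for", "celebrating"),
             ("celebrating", "causes", "happiness")]
    print(build_input("Why did Alex go to the party?", facts))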
Posted Content
Datasets: A Community Library for Natural Language Processing
Quentin Lhoest, Albert Villanova del Moral, Yacine Jernite, Abhishek Thakur, Patrick von Platen, Suraj Patil, Julien Chaumond, Mariama Drame, Julien Plu, Lewis Tunstall, Joe Davison, Mario Sasko, Gunjan Chhablani, Bhavitvya Malik, Simon Brandeis, Teven Le Scao, Victor Sanh, Canwen Xu, Nicolas Patry, Angelina McMillan-Major, Philipp Schmid, Sylvain Gugger, Clement Delangue, Théo Matussière, Lysandre Debut, Stas Bekman, Pierric Cistac, Thibault Goehringer, Victor Mustar, François Lagunas, Alexander M. Rush, Thomas Wolf
TL;DR: Datasets is a community library for contemporary NLP, designed to support the scale, variety, and quantity of publicly-available NLP datasets as well as new tasks, larger models, and novel benchmarks.
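The library's core interface is a single loading call over a shared hub of datasets; a brief usage sketch (the dataset choice is arbitrary):

    from datasets import load_dataset

    # Download, cache, and memory-map a public dataset in one call.
    squad = load_dataset("squad", split="train")
    print(squad)                 # features and number of rows
    print(squad[0]["question"])  # columns are accessed by name

    # Processing is expressed as maps over the Arrow-backed table.
    with_lengths = squad.map(lambda ex: {"q_len": len(ex["question"])})
    print(with_lengths[0]["q_len"])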
Posted Content
Assessing Phrasal Representation and Composition in Transformers
Lang Yu, Allyson Ettinger
TL;DR: This work finds that phrase representations in state-of-the-art pre-trained transformers rely heavily on word content, with little evidence of nuanced composition.
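A minimal sketch of the kind of probe this TL;DR refers to: derive a phrase representation from a pretrained transformer and check whether similarity tracks shared words rather than composed meaning (the model choice and mean-pooling are illustrative assumptions, not the paper's exact protocol):

    import torch
    from transformers import AutoModel, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    model = AutoModel.from_pretrained("bert-base-uncased")

    def phrase_embedding(phrase: str) -> torch.Tensor:
        # Mean-pool the final-layer token states as a phrase representation.
        inputs = tokenizer(phrase, return_tensors="pt")
        with torch.no_grad():
            hidden = model(**inputs).last_hidden_state  # (1, seq_len, dim)
        return hidden.mean(dim=1).squeeze(0)

    cos = torch.nn.CosineSimilarity(dim=0)
    # Same words, different meaning: a representation driven by word content
    # alone scores this pair as highly similar.
    print(cos(phrase_embedding("dog bites man"), phrase_embedding("man bites dog")))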
Posted Content
Scaling Laws for Transfer
TL;DR: This paper studies empirical scaling laws for transfer learning between distributions in an unsupervised, fine-tuning setting, measuring the effective data transferred from pre-training by determining how much data a transformer of the same size would have required to achieve the same loss when training from scratch.
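The quantity being measured can be sketched directly from that definition: the effective data transferred D_T is the extra data a same-size from-scratch model would need to match the fine-tuned loss. Below is a toy calculation under an assumed power-law from-scratch loss curve; the constants are made-up illustrative values, not fits from the paper.

    def scratch_loss(dataset_size: float, k: float = 10.0, alpha: float = 0.3) -> float:
        # Assumed power-law loss for training from scratch: L(D) = k * D**(-alpha).
        return k * dataset_size ** (-alpha)

    def effective_data_transferred(finetune_size: float, finetuned_loss: float,
                                   k: float = 10.0, alpha: float = 0.3) -> float:
        # Solve L(D_F + D_T) = finetuned_loss for D_T: the extra data a
        # same-size from-scratch model needs to match the fine-tuned loss.
        total = (k / finetuned_loss) ** (1.0 / alpha)  # invert the power law
        return total - finetune_size

    # Example: fine-tuning on 1e6 tokens reaches loss 0.12, which from scratch
    # would have required roughly 2.5e6 tokens, so D_T is about 1.5e6.
    print(effective_data_transferred(finetune_size=1e6, finetuned_loss=0.12))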