Open Access Journal Article
Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu
TL;DR: This article introduces a unified framework that converts all text-based language problems into a text-to-text format and compares pre-training objectives, architectures, unlabeled data sets, transfer approaches, and other factors on dozens of language understanding tasks.

Abstract:
Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts all text-based language problems into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled data sets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new "Colossal Clean Crawled Corpus", we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our data set, pre-trained models, and code.
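The text-to-text format described in the abstract can be illustrated with a short sketch. The example below uses the Hugging Face transformers library and the released t5-small checkpoint; the library choice and checkpoint name are assumptions for illustration, not details given in the abstract. It shows how translation, summarization, and classification are all cast as prefixed input strings and decoded back into output strings.

# Minimal sketch of the text-to-text framing (assumes the Hugging Face
# `transformers` library and the public "t5-small" checkpoint).
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

# Each task is expressed as plain text with a task prefix; the target is
# also plain text, so one model, loss, and decoding procedure cover all tasks.
examples = [
    "translate English to German: The house is wonderful.",
    "summarize: Transfer learning, where a model is first pre-trained on a "
    "data-rich task before being fine-tuned on a downstream task, has emerged "
    "as a powerful technique in natural language processing.",
    "cola sentence: The book was wrote by the author.",  # grammaticality -> "acceptable"/"unacceptable"
]

for text in examples:
    inputs = tokenizer(text, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=40)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))

Because inputs and targets are both strings, switching tasks amounts to changing the prefix; nothing about the architecture or training objective needs to change.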
Citations
Posted Content
Data Movement Is All You Need: A Case Study on Optimizing Transformers
TL;DR: This work finds that data movement is the key bottleneck when training transformers and presents a recipe for globally optimizing data movement, achieving a 1.30x performance improvement over state-of-the-art frameworks when training BERT.
Proceedings Article
AdaptSum: Towards Low-Resource Domain Adaptation for Abstractive Summarization
TL;DR: This work studies domain adaptation for abstractive summarization across six diverse target domains in a low-resource setting and finds that continued pre-training can cause catastrophic forgetting in the pre-trained model, an issue that a learning method with less forgetting can alleviate.
Posted Content
Learning from others' mistakes: Avoiding dataset biases without modeling them
TL;DR: This work considers cases where bias issues may not be explicitly identified and presents a method for training models that learn to ignore these problematic correlations, based on the observation that models with limited capacity primarily learn to exploit biases in the dataset.
Proceedings Article
Covidex: Neural Ranking Models and Keyword Search Infrastructure for the COVID-19 Open Research Dataset
Edwin Zhang, Nikhil Gupta, Raphael Tang, Xiao Han, Ronak Pradeep, Kuang Lu, Yue Zhang, Rodrigo Nogueira, Kyunghyun Cho, Hui Fang, Jimmy Lin
TL;DR: Covidex, a search engine that exploits the latest neural ranking models to provide information access to the COVID-19 Open Research Dataset curated by the Allen Institute for AI, is presented.
Proceedings Article
TLDR: Extreme Summarization of Scientific Documents
TL;DR: The authors introduce SCITLDR, a new multi-target dataset of 5.4k TLDRs over 3.2k papers, collected using a novel annotation protocol that produces high-quality summaries while minimizing annotation burden.