Open Access Journal Article
Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu
TLDR
This article introduced a unified framework that converts all text-based language problems into a text-to-text format and compared pre-training objectives, architectures, unlabeled data sets, transfer approaches, and other factors on dozens of language understanding tasks.
Abstract
Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts all text-based language problems into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled data sets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new "Colossal Clean Crawled Corpus", we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our data set, pre-trained models, and code.
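The released checkpoints can be exercised through the Hugging Face transformers library; the sketch below is a minimal illustration of the text-to-text format (assuming the public t5-small checkpoint and the task prefixes described in the paper), not the authors' original training code, which is released at github.com/google-research/text-to-text-transfer-transformer.

```python
# Minimal sketch of the text-to-text format: every task is expressed as
# "prefix + input text" -> "output text", so one model and loss covers
# translation, summarization, and classification alike.
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

examples = [
    "translate English to German: The house is wonderful.",
    "summarize: state authorities dispatched emergency crews on tuesday to survey the damage after heavy rain.",
    "cola sentence: The course is jumping well.",  # acceptability -> "acceptable" / "unacceptable"
]

for text in examples:
    input_ids = tokenizer(text, return_tensors="pt").input_ids
    output_ids = model.generate(input_ids, max_new_tokens=50)
    print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

Because targets are literal strings, even classification labels are simply generated as text, which is what makes the single unified framework possible.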
Citations
Posted Content
Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism
Mohammad Shoeybi, Md. Mostofa Ali Patwary, Raul Puri, Patrick LeGresley, Jared Casper, Bryan Catanzaro
TL;DR: A simple, efficient intra-layer model-parallel approach is presented that enables training transformer models with billions of parameters, and careful placement of layer normalization in BERT-like models is shown to be critical for improved performance as model size grows.
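As a rough illustration of intra-layer (tensor) model parallelism, the single-process NumPy sketch below splits a Transformer MLP block's first weight matrix by columns and its second by rows, so the per-worker partial outputs need only one reducing sum (the role an all-reduce plays across GPUs). It is an assumption-laden toy, not the Megatron-LM implementation, which shards real layers in PyTorch with NCCL collectives.

```python
# Toy sketch of Megatron-style intra-layer parallelism for the MLP block
# Y = GeLU(X @ A) @ B: shard A by columns and B by rows, compute partial
# outputs independently, then combine with a single sum (the "all-reduce").
import numpy as np

def gelu(x):
    return 0.5 * x * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (x + 0.044715 * x**3)))

rng = np.random.default_rng(0)
batch, d_model, d_ff, n_workers = 4, 8, 32, 2

X = rng.normal(size=(batch, d_model))
A = rng.normal(size=(d_model, d_ff))   # first MLP weight
B = rng.normal(size=(d_ff, d_model))   # second MLP weight

A_shards = np.split(A, n_workers, axis=1)  # column-parallel
B_shards = np.split(B, n_workers, axis=0)  # row-parallel

# Each "worker" computes its partial result with no communication inside
# the block, because GeLU is applied element-wise per shard.
partials = [gelu(X @ A_i) @ B_i for A_i, B_i in zip(A_shards, B_shards)]

Y_parallel = sum(partials)            # one all-reduce per MLP block
Y_serial = gelu(X @ A) @ B
assert np.allclose(Y_parallel, Y_serial)
```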
Posted Content
PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization
TL;DR: This work proposes pre-training large Transformer-based encoder-decoder models on massive text corpora with a new self-supervised objective, PEGASUS, and demonstrates that the approach achieves state-of-the-art performance, as measured by ROUGE, on all 12 downstream summarization datasets.
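A simplified sketch of the gap-sentence generation (GSG) idea follows: whole "principal" sentences are removed from the input and concatenated into the target the model must generate. The sentence-selection heuristic here is a crude unigram-overlap stand-in, not the ROUGE1-F1 scoring used by PEGASUS, and the mask token name is illustrative only.

```python
# Simplified illustration of PEGASUS-style gap-sentence generation (GSG).
import re
from collections import Counter

MASK = "<mask_1>"  # sentinel replacing each selected sentence (illustrative name)

def split_sentences(text):
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text.strip()) if s.strip()]

def overlap_score(sentence, others):
    """Cheap unigram-overlap proxy for how well a sentence summarizes the rest."""
    s = Counter(sentence.lower().split())
    o = Counter(" ".join(others).lower().split())
    return sum((s & o).values()) / max(1, sum(s.values()))

def make_gsg_example(document, gap_ratio=0.3):
    sents = split_sentences(document)
    n_gaps = max(1, int(round(gap_ratio * len(sents))))
    ranked = sorted(
        range(len(sents)),
        key=lambda i: overlap_score(sents[i], sents[:i] + sents[i + 1:]),
        reverse=True,
    )
    selected = set(ranked[:n_gaps])
    source = " ".join(MASK if i in selected else s for i, s in enumerate(sents))
    target = " ".join(sents[i] for i in sorted(selected))
    return source, target  # the model is trained to generate `target` from `source`

doc = ("Heavy rain flooded the city center on Tuesday. "
       "Officials closed several roads. "
       "The storm caused flooding and road closures across the region.")
src, tgt = make_gsg_example(doc)
print("SOURCE:", src)
print("TARGET:", tgt)
```

Because the target is itself sentence-level text from the document, the pre-training task closely resembles abstractive summarization, which is why it transfers so well to the downstream datasets.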
Posted Content
Biological Structure and Function Emerge from Scaling Unsupervised Learning to 250 Million Protein Sequences
Alexander Rives, Siddharth Goyal, Joshua Meier, Demi Guo, Myle Ott, C. Lawrence Zitnick, Jerry Ma, Rob Fergus
TL;DR: This work uses unsupervised learning to train a deep contextual language model on 86 billion amino acids across 250 million protein sequences spanning evolutionary diversity, enabling state-of-the-art supervised prediction of mutational effect and secondary structure, and improving state-of-the-art features for long-range contact prediction.
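For intuition, the sketch below shows the kind of masked-token setup such unsupervised protein language models are trained with: amino acids act as tokens and a fraction is hidden for the model to reconstruct from context. This is a toy illustration of the objective only, not the paper's model or training code.

```python
# Toy masked-token example over a protein sequence; the 20 standard amino
# acids (ACDEFGHIKLMNPQRSTVWY) form the token vocabulary in this setting.
import random

MASK = "<mask>"

def mask_sequence(seq, mask_prob=0.15, seed=0):
    rng = random.Random(seed)
    tokens, targets = [], []
    for aa in seq:
        if rng.random() < mask_prob:
            tokens.append(MASK)
            targets.append(aa)      # residues the model must predict
        else:
            tokens.append(aa)
            targets.append(None)    # positions excluded from the loss
    return tokens, targets

seq = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"
tokens, targets = mask_sequence(seq)
print(" ".join(tokens))
```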
Proceedings Article
Dense Passage Retrieval for Open-Domain Question Answering
Vladimir Karpukhin, Barlas Oguz, Sewon Min, Patrick S. H. Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, Wen-tau Yih
TL;DR: In this paper, dense passage representations are learned from a small number of questions and passages with a simple dual-encoder framework, which greatly outperforms a strong Lucene-BM25 retrieval system.
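The dual-encoder setup can be tried directly with the publicly released DPR checkpoints in the Hugging Face transformers library; the sketch below (assuming the single-nq-base question and context encoders) embeds a question and a few passages and ranks the passages by inner product, a stand-in for the FAISS index used over millions of passages at scale.

```python
# Minimal dense-retrieval sketch using the released DPR encoders.
import torch
from transformers import (
    DPRContextEncoder, DPRContextEncoderTokenizer,
    DPRQuestionEncoder, DPRQuestionEncoderTokenizer,
)

q_tok = DPRQuestionEncoderTokenizer.from_pretrained("facebook/dpr-question_encoder-single-nq-base")
q_enc = DPRQuestionEncoder.from_pretrained("facebook/dpr-question_encoder-single-nq-base")
c_tok = DPRContextEncoderTokenizer.from_pretrained("facebook/dpr-ctx_encoder-single-nq-base")
c_enc = DPRContextEncoder.from_pretrained("facebook/dpr-ctx_encoder-single-nq-base")

passages = [
    "Paris is the capital and most populous city of France.",
    "The Amazon rainforest covers much of the Amazon basin of South America.",
    "Mount Everest is Earth's highest mountain above sea level.",
]
question = "What is the capital of France?"

with torch.no_grad():
    q_emb = q_enc(**q_tok(question, return_tensors="pt")).pooler_output                 # (1, 768)
    p_emb = c_enc(**c_tok(passages, return_tensors="pt", padding=True)).pooler_output   # (3, 768)

# Rank passages by inner-product similarity with the question embedding.
scores = (q_emb @ p_emb.T).squeeze(0)
best = int(scores.argmax())
print(scores.tolist())
print("Top passage:", passages[best])
```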
Journal Article
Domain-Specific Language Model Pretraining for Biomedical Natural Language Processing
Yu Gu, Robert Tinn, Hao Cheng, Michael Lucas, Naoto Usuyama, Xiaodong Liu, Tristan Naumann, Jianfeng Gao, Hoifung Poon
TL;DR: It is shown that for domains with abundant unlabeled text, such as biomedicine, pretraining language models from scratch results in substantial gains over continual pretraining of general-domain language models.