Open Access Journal Article
Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu
TL;DR: This article introduced a unified framework that converts all text-based language problems into a text-to-text format and compared pre-training objectives, architectures, unlabeled data sets, transfer approaches, and other factors on dozens of language understanding tasks.
Abstract:
Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts all text-based language problems into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled data sets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new "Colossal Clean Crawled Corpus", we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our data set, pre-trained models, and code.
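The text-to-text recipe is straightforward to sketch in code. Below is a minimal example using the Hugging Face transformers port of the released checkpoints (a convenience assumption; the original release used TensorFlow). The task prefixes shown are the ones the paper itself uses for translation, summarization, and CoLA acceptability.

```python
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

# Every task is cast as text in, text out: a task prefix tells the
# model which problem it is solving.
examples = [
    "translate English to German: The house is wonderful.",
    "summarize: state authorities dispatched emergency crews on Tuesday ...",
    "cola sentence: The course is jumping well.",  # outputs "acceptable" / "not_acceptable"
]

for text in examples:
    inputs = tokenizer(text, return_tensors="pt")
    output_ids = model.generate(**inputs, max_new_tokens=40)
    print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

Because every task shares one input and output format, the same model, loss, and decoding procedure serve classification, translation, and summarization alike.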
Citations
Posted Content
ARMAN: Pre-training with Semantically Selecting and Reordering of Sentences for Persian Abstractive Summarization
TL;DR: The authors proposed a Transformer-based encoder-decoder model pre-trained with three novel objectives, including a modified sentence-reordering objective, so that it summarizes more accurately and more like human writing patterns; a human evaluation shows that using the semantic score significantly improves summarization results.
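As a rough illustration of this family of objectives, the sketch below scores each sentence semantically against the rest of the document, masks the most salient ones in the source, and asks the decoder to produce them in shuffled order. The scoring function, mask token, and hyperparameters are illustrative stand-ins, not ARMAN's exact formulation.

```python
import random
import re

import numpy as np

MASK = "<mask>"  # stand-in sentinel token; ARMAN's actual token may differ

def semantic_score(sentence, others, embed):
    # Toy salience: cosine similarity between a sentence embedding and
    # the mean embedding of the rest of the document. `embed` is any
    # caller-supplied sentence-embedding function (an assumption here).
    if not others:
        return 0.0
    s = embed(sentence)
    rest = np.mean([embed(o) for o in others], axis=0)
    return float(s @ rest / (np.linalg.norm(s) * np.linalg.norm(rest) + 1e-9))

def make_pretraining_example(document, embed, k=2, seed=0):
    sents = re.split(r"(?<=[.!?])\s+", document.strip())
    scores = [semantic_score(s, sents[:i] + sents[i + 1:], embed)
              for i, s in enumerate(sents)]
    salient = set(sorted(range(len(sents)), key=lambda i: -scores[i])[:k])
    # Mask the semantically selected sentences in the source...
    source = " ".join(MASK if i in salient else s for i, s in enumerate(sents))
    # ...and present them to the decoder in shuffled order, so the model
    # must also learn to restore a human-like sentence order.
    targets = [sents[i] for i in sorted(salient)]
    random.Random(seed).shuffle(targets)
    return source, " ".join(targets)
```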
Posted Content
Learning to Follow Language Instructions with Compositional Policies
Vanya Cohen, Geraud Nangue Tasse, Nakul Gopalan, Steven James, Matthew C. Gombolay, Benjamin Rosman +5 more
TL;DR: This paper proposes a framework that learns to execute natural language instructions in an environment consisting of goal-reaching tasks that share components of their task descriptions, with the aim of reducing the sample complexity of learning novel tasks.
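One way to make "compositional policies" concrete is the Boolean task algebra this line of work builds on: value functions learned for primitive goal-reaching tasks compose by elementwise min (conjunction) and max (disjunction). The sketch below is illustrative only; the Q-tables are random placeholders, not the paper's learned policies.

```python
import numpy as np

def q_and(*qs):
    # Conjunction of goal-reaching tasks: elementwise min over Q-tables.
    return np.minimum.reduce([np.asarray(q) for q in qs])

def q_or(*qs):
    # Disjunction of goal-reaching tasks: elementwise max over Q-tables.
    return np.maximum.reduce([np.asarray(q) for q in qs])

def greedy_policy(q):
    # Act greedily on a composed Q-table of shape (num_states, num_actions).
    return np.argmax(q, axis=-1)

# Hypothetical example: Q-tables for "reach a blue object" and "reach a
# square object"; their min yields a policy for "reach the blue square"
# without learning that composite task from scratch.
q_blue = np.random.rand(5, 3)    # placeholder for learned values
q_square = np.random.rand(5, 3)  # placeholder for learned values
policy = greedy_policy(q_and(q_blue, q_square))
```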
Posted Content
KPDrop: An Approach to Improving Absent Keyphrase Generation.
TL;DR: In this article, a keyphrase dropout (or KPDrop) method was proposed to improve the performance of absent keyphrase generation by randomly dropping present keyphrases from the document during training.
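The mechanism is simple enough to sketch: present keyphrases are randomly dropped from the source during training, turning them into artificial absent keyphrases that the model must generate from the remaining context. The mask token and drop rate below are illustrative choices, not necessarily the paper's settings.

```python
import random

def kpdrop(document, present_keyphrases, drop_rate=0.5, mask="<mask>", rng=None):
    # Randomly mask out present keyphrases so they become "absent" and
    # must be generated from the surrounding context during training.
    rng = rng or random.Random()
    text, kept, dropped = document, [], []
    for kp in present_keyphrases:
        if kp in text and rng.random() < drop_rate:
            text = text.replace(kp, mask)
            dropped.append(kp)  # now an absent keyphrase target
        else:
            kept.append(kp)
    return text, kept, dropped

# Usage: the (masked document, kept + dropped keyphrases) pair replaces
# the original training example with some probability.
src, present, absent = kpdrop(
    "graph neural networks for node classification",
    ["graph neural networks", "node classification"],
)
```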
Proceedings Article
TGEA: An Error-Annotated Dataset and Benchmark Tasks for Text Generation from Pretrained Language Models
TL;DR: The authors proposed TGEA, an error-annotated dataset with multiple benchmark tasks for text generation from pre-trained language models (PLMs), designed to probe the text-generation capability of PLMs and to support diagnostic evaluation.
Posted Content
Teach Me What to Say and I Will Learn What to Pick: Unsupervised Knowledge Selection Through Response Generation with Pretrained Generative Models
TL;DR: In this paper, a score-and-aggregate module is added between the encoder and decoder of pre-trained generative models, which learns to pick the proper knowledge by minimising the language modelling loss.
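A minimal PyTorch sketch of such a module, under the assumption that the dialogue context and each knowledge candidate have already been pooled into fixed-size vectors: candidates are scored against the context and softly mixed, so gradients from the response-generation (language modelling) loss can train selection without explicit knowledge labels. The layer shapes and scorer are illustrative, not the paper's exact architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ScoreAndAggregate(nn.Module):
    """Score each knowledge candidate against the dialogue context and
    feed the decoder a softly weighted mixture of the candidates."""

    def __init__(self, hidden_size):
        super().__init__()
        self.scorer = nn.Linear(2 * hidden_size, 1)

    def forward(self, context, knowledge):
        # context:   (batch, hidden)            pooled dialogue encoding
        # knowledge: (batch, num_cands, hidden) pooled candidate encodings
        ctx = context.unsqueeze(1).expand_as(knowledge)
        scores = self.scorer(torch.cat([ctx, knowledge], dim=-1)).squeeze(-1)
        weights = F.softmax(scores, dim=-1)    # soft, differentiable selection
        mixed = torch.einsum("bk,bkh->bh", weights, knowledge)
        return mixed, weights                  # mixture is passed to the decoder
```

The soft mixture is what lets the language modelling loss alone supervise selection: candidates that help the decoder predict the response receive higher weights through backpropagation.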