Open Access Journal Article

Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer

TLDR
This article introduced a unified framework that converts all text-based language problems into a text-to-text format and compared pre-training objectives, architectures, unlabeled data sets, transfer approaches, and other factors on dozens of language understanding tasks.
Abstract
Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts all text-based language problems into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled data sets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new "Colossal Clean Crawled Corpus", we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our data set, pre-trained models, and code.
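To make the text-to-text framing concrete, the sketch below casts a few tasks as plain "text in, text out" calls. This is a minimal sketch, assuming the third-party Hugging Face Transformers port of the released checkpoints and the t5-small model name; the task prefixes follow the convention described in the paper, but the library, checkpoint, and generation settings are illustrative choices, not the authors' original codebase.

```python
# Minimal sketch of the text-to-text framing: every task is posed as
# feeding the model a text string and asking it to produce a text string.
# Assumes the third-party Hugging Face Transformers port of the released
# T5 checkpoints (not the paper's own codebase).
from transformers import AutoTokenizer, T5ForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

prompts = [
    "translate English to German: The house is wonderful.",      # translation
    "summarize: Transfer learning, where a model is first "
    "pre-trained on a data-rich task, has emerged as a "
    "powerful technique in NLP.",                                 # summarization
    "cola sentence: The course is jumping well.",                 # acceptability classification
]

for prompt in prompts:
    inputs = tokenizer(prompt, return_tensors="pt")
    output_ids = model.generate(**inputs, max_new_tokens=40)
    # The answer to every task is itself just a piece of text.
    print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```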



Citations
Posted Content

Focusing on Possible Named Entities in Active Named Entity Label Acquisition

TL;DR: The authors proposed an improved, data-driven normalization approach that penalizes sentences that are too long or too short, and evaluated the proposed functions with both sentence-based and token-based cost evaluation strategies.
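The blurb does not specify the normalization function, so the snippet below is only a hypothetical illustration of how one might damp the score of sentences whose length is atypical for the unlabeled pool; the Gaussian length penalty and all names here are my assumptions, not the paper's method.

```python
import numpy as np

def length_normalized_score(token_scores, lengths_in_pool):
    """Hypothetical sketch: sum per-token informativeness, then damp the
    sentence-level score by how far its length sits from the (data-driven)
    typical length of the pool. Not the paper's actual function."""
    mean_len = np.mean(lengths_in_pool)
    std_len = np.std(lengths_in_pool) + 1e-8
    raw = float(np.sum(token_scores))
    z = (len(token_scores) - mean_len) / std_len
    return raw * np.exp(-0.5 * z ** 2)   # penalize unusually long or short sentences

# Toy usage: three candidate sentences with uniform per-token uncertainty.
pool_lengths = [12, 15, 9, 20, 14, 11]
for tokens in ([0.2] * 3, [0.2] * 14, [0.2] * 40):
    print(length_normalized_score(np.array(tokens), pool_lengths))
```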
Proceedings Article

Good-Enough Example Extrapolation.

TL;DR: The authors proposed a simple data augmentation protocol called "good-enough example extrapolation" (GE3), which extrapolates the hidden space distribution of text examples from one class onto another.
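Reading only the blurb above, the core idea can be sketched as a centroid-offset shift in hidden space; the NumPy snippet below is a rough illustration under that assumption, with a hypothetical function name, and is not the paper's actual GE3 procedure.

```python
import numpy as np

def extrapolate_examples(embeddings, labels, src_class, tgt_class):
    """Shift hidden representations of src_class examples by the offset
    between class centroids, yielding synthetic vectors treated as new
    tgt_class examples (illustrative sketch only)."""
    src = embeddings[labels == src_class]
    offset = embeddings[labels == tgt_class].mean(axis=0) - src.mean(axis=0)
    return src + offset

# Toy usage with random stand-ins for sentence embeddings.
rng = np.random.default_rng(0)
emb = rng.normal(size=(100, 16))
lab = rng.integers(0, 2, size=100)
synthetic_class1 = extrapolate_examples(emb, lab, src_class=0, tgt_class=1)
```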
Proceedings Article

Sparsity and Sentence Structure in Encoder-Decoder Attention of Summarization Systems

TL;DR: This paper proposes a modified Transformer architecture that selects a subset of source sentences to constrain the encoder-decoder attention mechanism, and demonstrates empirically that document summarization exhibits a sparse sentence structure that can be exploited by constraining attention.
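As a rough illustration of constraining encoder-decoder (cross) attention to a chosen subset of source sentences, the PyTorch sketch below masks attention logits for tokens outside the selected sentences; the sentence-selection step itself is omitted, and this masking scheme is an assumption, not the paper's architecture.

```python
import torch

def sentence_subset_mask(sentence_ids, selected_sentences):
    """Cross-attention mask: True where an encoder token belongs to one of
    the selected source sentences (illustrative sketch, not the paper's code)."""
    return torch.isin(sentence_ids, selected_sentences)

# Toy example: 10 encoder tokens drawn from 3 sentences; keep sentences 0 and 2.
sentence_ids = torch.tensor([0, 0, 0, 1, 1, 1, 1, 2, 2, 2])
mask = sentence_subset_mask(sentence_ids, torch.tensor([0, 2]))

scores = torch.randn(4, 10)                         # decoder x encoder attention logits
scores = scores.masked_fill(~mask, float("-inf"))   # block non-selected sentences
attn = scores.softmax(dim=-1)                       # attention restricted to the subset
```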
Proceedings Article

AStitchInLanguageModels: Dataset and Methods for the Exploration of Idiomaticity in Pre-Trained Language Models

TL;DR: This paper presented a dataset of naturally occurring English and Portuguese sentences containing multiword expressions (MWEs), manually classified into a fine-grained set of meanings, and used this dataset in two tasks designed to test i) a language model's ability to detect idiom usage, and ii) the effectiveness of generating representations of sentences containing idioms.
Posted Content

Positioning yourself in the maze of Neural Text Generation: A Task-Agnostic Survey

TL;DR: This paper surveys the fundamental components of neural text generation approaches whose impact is task-agnostic across generation tasks such as storytelling, summarization, and translation, and presents an abstraction of the key techniques with respect to learning paradigms, pretraining, modeling approaches, and decoding, along with the outstanding challenges in each.
Trending Questions (1)
What are the limitations of transfer learning with a unified text-to-text transformer?

The paper does not mention the limitations of transfer learning with a unified text-to-text transformer.