Open Access Journal Article

Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer

TL;DR
This article introduced a unified framework that converts all text-based language problems into a text-to-text format and compared pre-training objectives, architectures, unlabeled data sets, transfer approaches, and other factors on dozens of language understanding tasks.
Abstract
Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts all text-based language problems into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled data sets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new "Colossal Clean Crawled Corpus", we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our data set, pre-trained models, and code.
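
The text-to-text framing described in the abstract can be made concrete with a short sketch. The example below is illustrative only: it assumes the Hugging Face transformers library and the public "t5-small" checkpoint (the abstract only states that the data set, pre-trained models, and code are released, without naming a specific API). Each task is cast as an input string carrying a task prefix, and the answer comes back as generated text.

    # Minimal sketch of the text-to-text framing: every task is text in,
    # text out. Assumes the Hugging Face `transformers` package and the
    # public "t5-small" checkpoint (illustrative, not the authors' exact
    # pipeline).
    from transformers import T5ForConditionalGeneration, T5Tokenizer

    tokenizer = T5Tokenizer.from_pretrained("t5-small")
    model = T5ForConditionalGeneration.from_pretrained("t5-small")

    # A task prefix tells the model what to do; the answer is decoded
    # as a plain string, whatever the underlying task.
    examples = [
        "translate English to German: The house is wonderful.",
        "summarize: Transfer learning, where a model is first pre-trained "
        "on a data-rich task before being fine-tuned on a downstream task, "
        "has emerged as a powerful technique in natural language processing.",
        "cola sentence: The course is jumping well.",  # grammatical acceptability
    ]

    for text in examples:
        inputs = tokenizer(text, return_tensors="pt")
        output_ids = model.generate(**inputs, max_new_tokens=40)
        print(tokenizer.decode(output_ids[0], skip_special_tokens=True))

Because inputs and outputs are both plain text, the same model, training objective, and decoding procedure can serve translation, summarization, and classification without task-specific output heads.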


Citations
Posted Content

Ethical-Advice Taker: Do Language Models Understand Natural Language Interventions?

TL;DR: This paper proposed a new language understanding task, Linguistic Ethical Interventions (LEI), where the goal is to amend a question-answering model's unethical behavior by communicating context-specific principles of ethics and equity to it.
Posted Content

Learning to Sample Replacements for ELECTRA Pre-Training

TL;DR: In this article, a hardness prediction mechanism was proposed to improve replacement sampling for ELECTRA pre-training, where the generator can encourage the discriminator to learn what it has not acquired.
Proceedings Article

Improving Lexically Constrained Neural Machine Translation with Source-Conditioned Masked Span Prediction

TL;DR: This article proposed a masked span prediction model for domain-specific NMT, which achieved consistent improvements on both terminology and sentence-level translation for three domain-specific corpora in two language pairs.
Posted Content

On the Interplay Between Fine-tuning and Composition in Transformers

TL;DR: The authors investigated the impact of fine-tuning on the capacity of contextualized embeddings to capture phrase meaning information beyond lexical content, and found that fine-tuning largely fails to benefit compositionality in these representations, though training on sentiment yields a small, localized benefit for certain models.
Trending Questions (1)
What are the limitations of transfer learning with a unified text-to-text transformer?

The paper does not mention the limitations of transfer learning with a unified text-to-text transformer.