Open Access Journal Article

Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer

TL;DR
This article introduces a unified framework that converts all text-based language problems into a text-to-text format and compares pre-training objectives, architectures, unlabeled data sets, transfer approaches, and other factors on dozens of language understanding tasks.
Abstract
Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts all text-based language problems into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled data sets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new "Colossal Clean Crawled Corpus", we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our data set, pre-trained models, and code.
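As a concrete illustration of the text-to-text framing, the sketch below feeds several tasks to a released T5 checkpoint by prepending the task prefixes used in the paper (e.g. "translate English to German:", "summarize:", "cola sentence:") and decoding a text output. The use of the Hugging Face transformers library and the "t5-small" checkpoint name are assumptions of this sketch, not prescribed by the article.

```python
# Minimal sketch (assumption: Hugging Face `transformers` + the released
# "t5-small" checkpoint): every task becomes "prefix + input text" -> text.
# Install with: pip install transformers sentencepiece torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

examples = {
    "translation": "translate English to German: The house is wonderful.",
    "summarization": "summarize: Transfer learning, where a model is first "
                     "pre-trained on a data-rich task, has emerged as a "
                     "powerful technique in natural language processing.",
    "acceptability (CoLA)": "cola sentence: The books was on the table.",
}

for task, text in examples.items():
    # Encode the prefixed input and generate the textual answer.
    input_ids = tokenizer(text, return_tensors="pt").input_ids
    output_ids = model.generate(input_ids, max_length=50)
    print(f"{task}: {tokenizer.decode(output_ids[0], skip_special_tokens=True)}")
```

Because every task shares this single input/output interface, the same model, loss, and decoding procedure can be reused across translation, summarization, and classification without task-specific heads.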



Citations
Posted Content

Leveraging Lead Bias for Zero-shot Abstractive News Summarization

TL;DR: In this paper, the authors propose to pre-train abstractive news summarization models on large-scale unlabeled news corpora by predicting the leading sentences using the rest of an article.
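The lead-bias pre-training signal described above amounts to a simple data-construction step: treat an article's leading sentences as the target "summary" and the rest of the article as the source. The sketch below illustrates that split; the choice of three lead sentences and the helper name are illustrative assumptions, not the authors' exact configuration.

```python
# Hedged sketch of lead-bias pair construction for pre-training an abstractive
# summarizer on unlabeled news: the model learns to generate the lead sentences
# from the remainder of the article. Split point (3 sentences) is an assumption.
from typing import List, Tuple

def lead_bias_pair(sentences: List[str], num_lead: int = 3) -> Tuple[str, str]:
    """Return (source, target): article body as source, lead sentences as target."""
    target = " ".join(sentences[:num_lead])   # pseudo-summary = leading sentences
    source = " ".join(sentences[num_lead:])   # model must reconstruct the lead
    return source, target

article = [
    "A major storm hit the coast on Tuesday.",
    "Thousands of residents were evacuated overnight.",
    "Officials said power may be out for several days.",
    "The storm formed late last week over warm waters.",
    "Forecasters had warned of rapid intensification.",
]
src, tgt = lead_bias_pair(article)
print("SOURCE:", src)
print("TARGET:", tgt)
```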
Proceedings ArticleDOI

Generating SOAP Notes from Doctor-Patient Conversations Using Modular Summarization Techniques

TL;DR: Cluster2Sent extracts important utterances relevant to each summary section, clusters related utterances together, and then generates one summary sentence per cluster; it outperforms its purely abstractive counterpart by 8 ROUGE-1 points. A sketch of this pipeline follows below.
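The extract-cluster-generate pipeline named in the TL;DR can be outlined as three composed stages. The extractor, clusterer, and single-sentence generator below are toy stand-ins assumed for illustration, not the authors' trained components.

```python
# Hedged sketch of a Cluster2Sent-style pipeline: select section-relevant
# utterances, group related ones, then generate one summary sentence per group.
from typing import Callable, List

def cluster2sent(
    utterances: List[str],
    extract: Callable[[List[str]], List[str]],        # pick relevant utterances
    cluster: Callable[[List[str]], List[List[str]]],  # group related utterances
    generate: Callable[[List[str]], str],             # one sentence per cluster
) -> List[str]:
    relevant = extract(utterances)
    groups = cluster(relevant)
    return [generate(group) for group in groups]

# Toy stand-ins so the sketch runs end to end.
summary = cluster2sent(
    ["Patient reports headache.", "Headache started two days ago.", "No fever noted."],
    extract=lambda utts: utts,
    cluster=lambda utts: [utts[:2], utts[2:]],
    generate=lambda group: "Summary: " + " ".join(group),
)
print(summary)
```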
Posted Content

Learning Dense Representations of Phrases at Scale

TL;DR: This article proposes to learn dense phrase representations from the supervision of reading comprehension tasks, coupled with novel negative sampling methods, achieving state-of-the-art performance in open-domain question answering.
Proceedings ArticleDOI

Disfl-QA: A Benchmark Dataset for Understanding Disfluencies in Question Answering

TL;DR: Disfl-QA is a dataset in which humans introduce contextual disfluencies into previously fluent questions; answering these questions requires a more comprehensive understanding of the text than was necessary in prior datasets.
Proceedings ArticleDOI

SemFace: Pre-training Encoder and Decoder with a Semantic Interface for Neural Machine Translation

TL;DR: This article proposes SemFace, a semantic interface between the pre-trained encoder and decoder that constrains encoder outputs and decoder inputs to the same language-independent space, achieving significant improvements over previous pre-training-based NMT models.
Trending Questions (1)
What are the limitations of transfer learning with a unified text-to-text transformer?

The paper does not mention the limitations of transfer learning with a unified text-to-text transformer.