Open Access Journal Article

Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer

TL;DR
This article introduced a unified framework that converts all text-based language problems into a text-to-text format and compared pre-training objectives, architectures, unlabeled data sets, transfer approaches, and other factors on dozens of language understanding tasks.
Abstract
Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts all text-based language problems into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled data sets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new "Colossal Clean Crawled Corpus", we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our data set, pre-trained models, and code.
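The central idea of the text-to-text framework is that every task, whether translation, summarization, or classification, is cast as mapping an input string (with a task prefix) to an output string. The snippet below is a minimal sketch of that format using the released T5 checkpoints as exposed through the Hugging Face transformers library; the checkpoint name, library, and specific prefixes shown are illustrative assumptions rather than the paper's own code (the authors release TensorFlow-based code and checkpoints).

```python
# Minimal sketch of the text-to-text format, assuming the Hugging Face
# `transformers` library and the publicly released "t5-small" checkpoint.
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

# Every task is expressed as "task prefix: input text" -> "output text".
examples = [
    "translate English to German: The house is wonderful.",  # translation
    "summarize: state authorities dispatched emergency crews to survey "
    "the damage after severe weather struck the region.",     # summarization
    "cola sentence: The course is jumping well.",              # acceptability classification
]

for text in examples:
    input_ids = tokenizer(text, return_tensors="pt").input_ids
    output_ids = model.generate(input_ids, max_length=64)
    print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

Because all tasks share the same string-in, string-out interface, the same model, loss, and decoding procedure can be reused across them; only the task prefix changes.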



Citations
Proceedings Article

Pre-training for Abstractive Document Summarization by Reinstating Source Text

TL;DR: Three pre-training objectives are presented that allow a Seq2Seq-based abstractive summarization model to be pre-trained on unlabeled text; experiments show that all three objectives improve performance over baselines.
Posted Content

CCNet: Extracting High Quality Monolingual Datasets from Web Crawl Data.

TL;DR: The authors propose an automatic pipeline to extract massive high-quality monolingual datasets from Common Crawl for a variety of languages, augmented with a filtering step that selects documents close to high-quality corpora such as Wikipedia.
Posted Content

Improving and Simplifying Pattern Exploiting Training

TL;DR: ADAPET modifies PET's objective to provide denser supervision during fine-tuning and outperforms PET on SuperGLUE without any task-specific unlabeled data.
Proceedings Article

OCNLI: Original Chinese Natural Language Inference

TL;DR: The Original Chinese Natural Language Inference dataset (OCNLI) is the first large-scale natural language inference dataset for Chinese, consisting of 56,000 annotated sentence pairs.
Proceedings Article

Multi-Fact Correction in Abstractive Text Summarization

TL;DR: Span-Fact is a suite of two factual correction models that leverage knowledge learned from question-answering models to correct system-generated summaries via span selection. It significantly boosts the factual consistency of summaries without sacrificing summary quality under both automatic metrics and human evaluation.
Trending Questions (1)
What are the limitations of transfer learning with a unified text-to-text transformer?

The paper does not mention the limitations of transfer learning with a unified text-to-text transformer.