
Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer

TLDR
This article introduced a unified framework that converts all text-based language problems into a text-to-text format and compared pre-training objectives, architectures, unlabeled data sets, transfer approaches, and other factors on dozens of language understanding tasks.
Abstract
Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts all text-based language problems into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled data sets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new "Colossal Clean Crawled Corpus", we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our data set, pre-trained models, and code.
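
To make the text-to-text framing concrete, here is a minimal sketch, assuming the Hugging Face Transformers library and the publicly released t5-small checkpoint: translation, summarization, and classification are all posed as string-in, string-out problems distinguished only by a task prefix.

```python
# Minimal sketch of the text-to-text framing described in the abstract,
# assuming the Hugging Face Transformers library and the public "t5-small"
# checkpoint. Every task is expressed as plain text: the task is named in a
# prefix, and the answer is generated as text.
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

examples = [
    # Translation: input and output are both text.
    "translate English to German: The house is wonderful.",
    # Summarization: same interface, different prefix.
    "summarize: Transfer learning, where a model is first pre-trained on a "
    "data-rich task before being fine-tuned on a downstream task, has emerged "
    "as a powerful technique in natural language processing.",
    # Classification (CoLA acceptability): the label is also produced as text.
    "cola sentence: The course is jumping well.",
]

for text in examples:
    inputs = tokenizer(text, return_tensors="pt")
    output_ids = model.generate(**inputs, max_new_tokens=64)
    print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

Because every task shares this single string-to-string interface, one model, loss, and decoding procedure serves all of them, which is what enables the systematic comparison of objectives, architectures, and corpora described in the abstract.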



Citations

Unsupervised Paraphrasing with Pretrained Language Models

TL;DR: The authors adopt a transfer learning approach and propose a training pipeline that enables pre-trained language models to generate high-quality paraphrases in an unsupervised setting. The approach achieves state-of-the-art performance on both the Quora Question Pairs (QQP) and ParaNMT datasets and is robust to the domain shift between these two datasets, which have distinct distributions.

BANG: Bridging Autoregressive and Non-autoregressive Generation with Large Scale Pretraining

TL;DR: This article proposes BANG, a new pretraining model that bridges the gap between autoregressive (AR) and non-autoregressive (NAR) generation through a novel model structure for large-scale pretraining; BANG can simultaneously support AR, NAR, and semi-NAR generation to meet different requirements.

Toward Stance-based Personas for Opinionated Dialogues

TL;DR: This work introduces a novel dataset that allows exploring different stance-based persona representations and their impact on claim generation, showing that these representations are able to grasp abstract and profound aspects of the author's persona.

Teach Me What to Say and I Will Learn What to Pick: Unsupervised Knowledge Selection Through Response Generation with Pretrained Generative Models

TL;DR: In this article, a score-and-aggregate module is added between the encoder and decoder so the model learns to pick the proper knowledge by minimising the language modelling loss (i.e., without having access to knowledge labels).
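
The score-and-aggregate idea can be illustrated with a small PyTorch sketch; the module name, shapes, pooling, and wiring below are illustrative assumptions rather than the paper's implementation.

```python
# Illustrative sketch (not the paper's code) of a score-and-aggregate module:
# each knowledge candidate is scored against the dialogue context, the scores
# are softmax-normalised, and the candidates are mixed into a single
# representation that the decoder can condition on. All names and shapes are
# assumptions for illustration.
import torch
import torch.nn as nn

class ScoreAndAggregate(nn.Module):
    def __init__(self, hidden_size: int):
        super().__init__()
        self.scorer = nn.Linear(2 * hidden_size, 1)

    def forward(self, context: torch.Tensor, knowledge: torch.Tensor):
        # context:   (batch, hidden)          pooled dialogue encoding
        # knowledge: (batch, n_cand, hidden)  pooled candidate encodings
        n_cand = knowledge.size(1)
        expanded = context.unsqueeze(1).expand(-1, n_cand, -1)
        scores = self.scorer(torch.cat([expanded, knowledge], dim=-1)).squeeze(-1)
        weights = torch.softmax(scores, dim=-1)                 # (batch, n_cand)
        aggregated = torch.bmm(weights.unsqueeze(1), knowledge).squeeze(1)
        return aggregated, weights

# The aggregated vector is passed to the decoder, and the whole model is
# trained only with the response language-modelling loss, so the selection
# weights are learned without knowledge labels.
module = ScoreAndAggregate(hidden_size=768)
ctx = torch.randn(2, 768)
cands = torch.randn(2, 5, 768)
agg, w = module(ctx, cands)
```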

DEEPAGÉ: Answering Questions in Portuguese about the Brazilian Environment

TL;DR: In this article, the authors introduce multiple QA systems that combine the BM25 algorithm, a sparse retrieval technique, with PTT5, a pre-trained state-of-the-art language model.
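
The retrieve-then-read pipeline described above can be sketched as follows; the toy corpus, the question, the prompt format, and the unicamp-dl/ptt5-base-portuguese-vocab checkpoint id are assumptions for illustration (the rank_bm25 package stands in for the sparse retriever, and the reader would need QA fine-tuning before producing useful answers).

```python
# Sketch of a BM25 + seq2seq reader pipeline in the spirit of the TL;DR above.
# The toy corpus, question, prompt format, and PTT5 checkpoint id are
# illustrative assumptions; rank_bm25 provides the sparse retrieval step.
from rank_bm25 import BM25Okapi
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

corpus = [
    "A Floresta Amazônica é a maior floresta tropical do mundo.",
    "O Cerrado é a savana com maior biodiversidade do planeta.",
    "O Pantanal é a maior planície alagável do mundo.",
]
question = "Qual é a maior floresta tropical do mundo?"

# 1) Sparse retrieval: rank passages with BM25 over whitespace tokens.
bm25 = BM25Okapi([doc.lower().split() for doc in corpus])
top_passages = bm25.get_top_n(question.lower().split(), corpus, n=2)

# 2) Reader: feed question + retrieved passages to a Portuguese T5 model.
#    The base checkpoint would need fine-tuning on QA data to answer well.
model_name = "unicamp-dl/ptt5-base-portuguese-vocab"  # assumed checkpoint id
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

prompt = f"question: {question} context: {' '.join(top_passages)}"
inputs = tokenizer(prompt, return_tensors="pt", truncation=True)
output_ids = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```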
Trending Questions (1)
What are the limitations of transfer learning with a unified text-to-text transformer?

The paper does not mention the limitations of transfer learning with a unified text-to-text transformer.