Open Access Journal Article

Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer

TL;DR
This article introduced a unified framework that converts all text-based language problems into a text-to-text format and compared pre-training objectives, architectures, unlabeled data sets, transfer approaches, and other factors on dozens of language understanding tasks.
Abstract
Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts all text-based language problems into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled data sets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new ``Colossal Clean Crawled Corpus'', we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our data set, pre-trained models, and code.
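To make the text-to-text format concrete, the sketch below runs one of the released checkpoints on a single example via the Hugging Face Transformers port of T5 (an assumed convenience; the authors' official release is TensorFlow-based). Every task is cast as text in, text out: a short task prefix tells the model what to do, and the answer is read off as generated text.

# Minimal sketch of the text-to-text format, assuming the Hugging Face
# Transformers port of the released T5 checkpoints is available.
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

# The task prefix selects the behaviour; the same model handles
# translation, summarization, classification, etc. in this way.
inputs = tokenizer(
    "translate English to German: The house is wonderful.",
    return_tensors="pt",
)
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

Classification fits the same interface: the input carries a prefix such as "cola sentence: ..." and the target is the label rendered as a string, which is what lets a single model and training objective cover all of the benchmarks in the study.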



Citations
Proceedings Article

nmT5 - Is parallel data still relevant for pre-training massively multilingual language models?

TL;DR: This paper investigated the impact of incorporating parallel data into mT5 pre-training and found that the benefits of using parallel data for cross-lingual NLP tasks can be seen in the limited labelled data regime.
Proceedings Article

Automated Paraphrase Generation with Over-Generation and Pruning Services

TL;DR: This paper proposed an approach inspired by services integration that generates English paraphrases which are semantically relevant and diverse, offering a promising, cost-effective, and scalable way to produce training samples.
Proceedings Article

DESCGEN: A Distantly Supervised Dataset for Generating Entity Descriptions

TL;DR: In this paper, the goal is to generate a summary description of an entity from mentions spread over multiple documents; the documents were collected using a combination of entity linking and hyperlinks into the entity pages, which together provide high-quality distant supervision.
Proceedings Article

Worst of Both Worlds: Biases Compound in Pre-trained Vision-and-Language Models

TL;DR: This paper showed that VL-BERT exhibits gender biases, often preferring to reinforce a stereotype over faithfully describing the visual scene, and extended these findings through a controlled case study and to a larger set of stereotypically gendered entities.
Trending Questions (1)
What are the limitations of transfer learning with a unified text-to-text transformer?

The paper does not mention the limitations of transfer learning with a unified text-to-text transformer.