Open Access Journal Article

Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer

TL;DR
This article introduced a unified framework that converts all text-based language problems into a text-to-text format and compared pre-training objectives, architectures, unlabeled data sets, transfer approaches, and other factors on dozens of language understanding tasks.
Abstract
Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts all text-based language problems into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled data sets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new "Colossal Clean Crawled Corpus", we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our data set, pre-trained models, and code.
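To make the text-to-text framing concrete, here is a minimal sketch of querying one of the released T5 checkpoints; it assumes the Hugging Face transformers library and the public t5-small model (neither is specified on this page), with task prefixes following the convention described in the paper:

```python
# Minimal sketch (assumption: Hugging Face `transformers` is installed and the
# public `t5-small` checkpoint is used; this page does not prescribe either).
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

# Every task is cast as text in, text out: a task prefix tells the model what
# to do, so translation, summarization, and classification share one interface.
examples = [
    "translate English to German: The house is wonderful.",
    "summarize: Transfer learning, where a model is first pre-trained on a "
    "data-rich task before being fine-tuned on a downstream task, has emerged "
    "as a powerful technique in natural language processing.",
    "cola sentence: The course is jumping well.",  # acceptability as a text label
]

for text in examples:
    input_ids = tokenizer(text, return_tensors="pt").input_ids
    output_ids = model.generate(input_ids, max_length=64)
    print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

Because the target is always a string, the same model, training objective, and decoding procedure cover every task, which is what enables the unified comparison described in the abstract.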



Citations
Posted Content

Space-Time Crop & Attend: Improving Cross-modal Video Representation Learning

TL;DR: In this paper, feature crops are introduced to simulate data augmentation in feature space, and transformer-based attention for processing these feature crops is shown to improve the performance of video representation learning.
Posted Content

Comparison of Czech Transformers on Text Classification Tasks

TL;DR: In this article, the authors present their progress in pre-training monolingual Transformers for Czech, contribute to the research community by releasing their models to the public, and compare them with relevant public models trained (at least partially) for Czech.
Posted Content

Semantic Categorization of Social Knowledge for Commonsense Question Answering

TL;DR: This article proposes categorizing the semantics needed for commonsense question answering tasks, using SocialIQA as an example, and further training neural QA models to incorporate such social knowledge categories and relation information from a knowledge base.
Posted Content

Open Relation Modeling: Learning to Define Relations between Entities

TL;DR: In this paper, the authors introduce the Open Relation Modeling (ORM) task, in which, given two entities, a model must generate a coherent sentence describing the relation between them, and approach the task using pre-trained language models.
Trending Questions (1)
What are the limitations of transfer learning with a unified text-to-text transformer?

The paper does not mention the limitations of transfer learning with a unified text-to-text transformer.