Open Access Journal Article

Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer

TL;DR
This article introduces a unified framework that converts all text-based language problems into a text-to-text format and systematically compares pre-training objectives, architectures, unlabeled data sets, transfer approaches, and other factors on dozens of language understanding tasks.
Abstract
Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts all text-based language problems into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled data sets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new ``Colossal Clean Crawled Corpus'', we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our data set, pre-trained models, and code.
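The unifying idea is that every task, whether translation, summarization, or classification, is given to the model as a plain-text input with a short task prefix, and the answer is produced as plain text. The sketch below illustrates this text-to-text usage with publicly available T5 checkpoints; it assumes the Hugging Face transformers library and the t5-small model name, which are not part of the paper itself.

# Minimal sketch of the text-to-text format, assuming the Hugging Face
# `transformers` library and the public `t5-small` checkpoint.
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

# Each task is expressed as text with a task prefix; the model always
# answers with text, whether the task is translation, summarization,
# or classification (e.g., CoLA acceptability).
prompts = [
    "translate English to German: The house is wonderful.",
    "summarize: Transfer learning, where a model is first pre-trained on a "
    "data-rich task before being fine-tuned on a downstream task, has emerged "
    "as a powerful technique in natural language processing.",
    "cola sentence: The course is jumping well.",
]

for prompt in prompts:
    inputs = tokenizer(prompt, return_tensors="pt")
    output_ids = model.generate(**inputs, max_new_tokens=40)
    print(tokenizer.decode(output_ids[0], skip_special_tokens=True))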



Citations
Posted Content

Towards Accurate and Reliable Energy Measurement of NLP Models

TL;DR: The authors show that existing software-based energy measurements are inaccurate because they ignore hardware differences and how resource utilization affects energy consumption, and they propose a more accurate energy estimation model that accounts for hardware variability and the non-linear relationship between resource utilization and energy consumption.
Posted Content

Learning Compact Metrics for MT

TL;DR: The authors investigated the tradeoff between multilinguality and model capacity with RemBERT, a state-of-the-art multilingual language model, using data from the WMT Metrics Shared Task.
Posted Content

Tiered Reasoning for Intuitive Physics: Toward Verifiable Commonsense Language Understanding.

TL;DR: The authors introduce the TRIP dataset, a commonsense reasoning dataset with dense annotations that enable multi-tiered evaluation of machines' reasoning process, showing that while large LMs can achieve high end-task performance, they struggle to support their predictions with valid supporting evidence.
Posted Content

An Empirical Study of Training End-to-End Vision-and-Language Transformers.

TL;DR: In this paper, a fully transformer-based vision-and-language pre-training model is proposed that improves performance on downstream VL tasks, achieving an accuracy of 77.64% on the VQAv2 test-std set using only 4M images.
Proceedings Article

Do Transformers Dream of Inference, or Can Pretrained Generative Models Learn Implicit Inferential Rules?

TL;DR: This work investigates the capability of a state-of-the-art transformer LM to generate explicit inference hops, i.e., to infer a new statement necessary to answer a question given some premise input statements.
Trending Questions (1)
What are the limitations of transfer learning with a unified text-to-text transformer?

The paper does not mention the limitations of transfer learning with a unified text-to-text transformer.