Open Access · Journal Article

Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer

TLDR
This article introduces a unified framework that converts all text-based language problems into a text-to-text format and compares pre-training objectives, architectures, unlabeled data sets, transfer approaches, and other factors on dozens of language understanding tasks.
Abstract
Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts all text-based language problems into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled data sets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new "Colossal Clean Crawled Corpus", we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our data set, pre-trained models, and code.
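The unified format described in the abstract can be illustrated with a short sketch. The snippet below is a minimal illustration, not the authors' released code: the task prefixes and label strings are written in the style used by the paper but are only assumptions here. It shows how examples from translation, summarization, and acceptability classification all become plain (input text, target text) pairs.

    # Minimal sketch of the text-to-text format: every task becomes a pair of strings.
    # The prefixes and label words below are illustrative, not the exact ones from the paper.
    def to_text_to_text(task, example):
        """Convert a task-specific example into an (input_text, target_text) pair."""
        if task == "translation_en_de":
            return "translate English to German: " + example["en"], example["de"]
        if task == "summarization":
            return "summarize: " + example["document"], example["summary"]
        if task == "cola":  # linguistic acceptability: the class label is emitted as a word
            return ("cola sentence: " + example["sentence"],
                    "acceptable" if example["label"] == 1 else "unacceptable")
        raise ValueError(f"unknown task: {task}")

    # Every task now shares one interface: the encoder reads input_text and the
    # decoder is trained with maximum likelihood to emit target_text.
    inp, tgt = to_text_to_text("cola", {"sentence": "The book was read by me.", "label": 1})
    print(inp, "->", tgt)

Because every task shares this string-in, string-out interface, the same model, loss, and decoding procedure can be reused across all of them.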

Citations
Posted Content

Text-to-SQL in the Wild: A Naturally-Occurring Dataset Based on Stack Exchange Data.

TL;DR: SEDE, as discussed by the authors, is a dataset of 12,023 pairs of utterances and SQL queries collected from real usage on the Stack Exchange website; it contains a variety of real-world challenges that have rarely been reflected in other semantic parsing datasets.
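For intuition, a pair in such a dataset might look like the following example, which is invented here for illustration (it is not taken from SEDE) and framed in the same text-to-text style as the main paper; the "translate to SQL:" prefix is likewise an assumption.

    # Hypothetical utterance/SQL pair in the spirit of Stack Exchange Data Explorer
    # queries; the content and the task prefix are illustrative only.
    example = {
        "utterance": "Top 10 users by total answer score in the last year",
        "sql": ("SELECT TOP 10 u.DisplayName, SUM(p.Score) AS TotalScore "
                "FROM Posts p JOIN Users u ON p.OwnerUserId = u.Id "
                "WHERE p.PostTypeId = 2 "
                "AND p.CreationDate > DATEADD(year, -1, GETDATE()) "
                "GROUP BY u.DisplayName ORDER BY TotalScore DESC"),
    }
    model_input = "translate to SQL: " + example["utterance"]
    model_target = example["sql"]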
Posted Content

On Learning the Transformer Kernel

TL;DR: Kernelized Transformers, as discussed by the authors, approximate the Transformer kernel as a dot product between spectral feature maps and learn the kernel by learning the spectral distribution, which not only allows a generic kernel to be learned end-to-end but also reduces the time and space complexity of Transformers from quadratic to linear.
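The complexity reduction comes from the standard kernel trick for attention: if softmax(QK^T)V is replaced by phi(Q)(phi(K)^T V) for some feature map phi, the n-by-n attention matrix never has to be materialised. The sketch below uses a simple fixed positive feature map as a stand-in (the cited paper instead learns the spectral distribution that defines the feature map), so it illustrates the general idea rather than that paper's method.

    import numpy as np

    def feature_map(x):
        # Simple positive feature map (elu(x) + 1). The cited paper learns a spectral
        # feature map instead; this fixed choice is an assumption for illustration.
        return np.where(x > 0, x + 1.0, np.exp(x))

    def linear_attention(Q, K, V):
        """Kernelized attention: softmax(Q K^T) V is approximated by
        phi(Q) (phi(K)^T V), costing O(n * d^2) instead of O(n^2 * d)."""
        Qf, Kf = feature_map(Q), feature_map(K)            # (n, d)
        kv = Kf.T @ V                                      # (d, d_v): keys/values summarised once
        normaliser = Qf @ Kf.sum(axis=0, keepdims=True).T  # (n, 1)
        return (Qf @ kv) / (normaliser + 1e-6)

    n, d = 1024, 64
    Q, K, V = (np.random.randn(n, d) for _ in range(3))
    out = linear_attention(Q, K, V)  # shape (n, d), computed without an n-by-n matrix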
Proceedings ArticleDOI

Improving Robotic Grasping on Monocular Images Via Multi-Task Learning and Positional Loss

TL;DR: In this paper, an end-to-end multi-task CNN is proposed to improve real-time object grasping from monocular color images; adding a supplementary depth reconstruction task raises the average accuracy on the large Jacquard grasping dataset from a 72.04% baseline to 78.14%.
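The gain comes from multi-task learning: a shared encoder feeds both a grasp-prediction head and an auxiliary depth-reconstruction head, and their losses are combined. The sketch below is a minimal PyTorch illustration with assumed names, loss choices, and weighting, not the paper's actual architecture.

    import torch.nn.functional as F

    # Minimal sketch of a multi-task objective: a primary grasp loss plus an
    # auxiliary depth-reconstruction loss on outputs from a shared encoder.
    # The use of MSE for both terms and the 0.5 weight are assumptions.
    def multitask_loss(grasp_pred, grasp_target, depth_pred, depth_target, depth_weight=0.5):
        grasp_loss = F.mse_loss(grasp_pred, grasp_target)   # primary grasping task
        depth_loss = F.mse_loss(depth_pred, depth_target)   # auxiliary geometry task
        return grasp_loss + depth_weight * depth_loss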
Journal ArticleDOI

NASca and NASes: Two Monolingual Pre-Trained Models for Abstractive Summarization in Catalan and Spanish

TL;DR: In this paper, two monolingual Transformer encoder-decoder models for abstractive summarization are presented, one pre-trained for Catalan (NASca) and one for Spanish (NASes), each fine-tuned for summarization using corpora of newspaper articles in its language.
Trending Questions (1)
What are the limitations of transfer learning with a unified text-to-text transformer?

The paper does not mention the limitations of transfer learning with a unified text-to-text transformer.