Open Access Journal Article

Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer

TL;DR
This article introduced a unified framework that converts all text-based language problems into a text-to-text format and compared pre-training objectives, architectures, unlabeled data sets, transfer approaches, and other factors on dozens of language understanding tasks.
Abstract
Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts all text-based language problems into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled data sets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new "Colossal Clean Crawled Corpus", we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our data set, pre-trained models, and code.
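The text-to-text framing in the abstract is easy to make concrete. Below is a minimal sketch using the Hugging Face `transformers` port of the released T5 checkpoints; the "t5-small" checkpoint name and the library calls belong to that third-party port rather than the authors' original TensorFlow release, while the task prefixes shown are the ones the paper describes.

```python
# Minimal sketch of the text-to-text interface, via the Hugging Face
# `transformers` port of the released T5 checkpoints. The "t5-small"
# checkpoint name and the library itself are a third-party port, not
# the authors' original TensorFlow release.
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

# Every task is expressed as plain text with a task prefix, and every
# answer is decoded as plain text, so one model, loss, and decoding
# procedure covers translation, summarization, classification, etc.
examples = [
    "translate English to German: The house is wonderful.",
    "summarize: Transfer learning, where a model is first pre-trained on a "
    "data-rich task before being fine-tuned on a downstream task, has "
    "emerged as a powerful technique in natural language processing.",
    "cola sentence: The course is jumping well.",  # linguistic acceptability
]

for text in examples:
    input_ids = tokenizer(text, return_tensors="pt").input_ids
    output_ids = model.generate(input_ids, max_new_tokens=50)
    print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```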


Citations
Posted Content

PolyViT: Co-training Vision Transformers on Images, Videos and Audio

TL;DR: PolyViT uses co-training on multiple modalities and tasks to improve the accuracy of each individual task, achieving state-of-the-art results on five standard video- and audio-classification datasets.
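As a rough illustration of what co-training across tasks looks like, here is a generic PyTorch sketch: a shared encoder with one head per task, where each step samples one task and updates on one of its batches. All names are hypothetical; this is not PolyViT's actual implementation.

```python
import random
import torch
from torch import nn

# Generic sketch of co-training: one shared encoder, one lightweight head
# per task, and each optimization step samples a single task's batch.
# All names here are hypothetical; this is not PolyViT's actual code.
class CoTrainedModel(nn.Module):
    def __init__(self, encoder: nn.Module, dim: int, num_classes: dict):
        super().__init__()
        self.encoder = encoder  # shared transformer trunk
        self.heads = nn.ModuleDict(
            {task: nn.Linear(dim, n) for task, n in num_classes.items()}
        )

    def forward(self, x: torch.Tensor, task: str) -> torch.Tensor:
        return self.heads[task](self.encoder(x))

def co_training_step(model, loaders, optimizer):
    # `loaders` maps each task name to an endless iterator of (x, y) batches.
    task = random.choice(list(loaders))
    x, y = next(loaders[task])
    loss = nn.functional.cross_entropy(model(x, task), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return task, loss.item()
```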
Posted Content

How Much Coffee Was Consumed During EMNLP 2019? Fermi Problems: A New Reasoning Challenge for AI

TL;DR: This paper proposed a new reasoning challenge, Fermi Problems (FPs): questions whose answers can only be approximately estimated because their precise computation is either impractical or impossible.
Posted Content

PiSLTRc: Position-informed Sign Language Transformer with Content-aware Convolution

TL;DR: Zhang et al. proposed a new model architecture, PiSLTRc, whose distinctive characteristics include content-aware and position-aware convolution layers.
Posted Content

UCD-CS at W-NUT 2020 Shared Task-3: A Text to Text Approach for COVID-19 Event Extraction on Social Media

TL;DR: This paper proposes a transformer-based T5 text-to-text model that extracts, from COVID-19-related tweets, the answers to a set of predefined slot-filling questions.
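A hypothetical sketch of how such slot filling can be cast as text-to-text follows; the prompt wording is made up for this illustration, not the authors' exact format.

```python
# Hypothetical illustration of casting slot filling as text-to-text;
# the prompt wording is invented for this sketch, not the authors' format.
def build_t5_input(tweet: str, slot_question: str) -> str:
    # Pair one predefined slot question with the tweet; the fine-tuned
    # model then emits the answer span (or a "not specified" marker) as text.
    return f"question: {slot_question} context: {tweet}"

print(build_t5_input(
    "Just tested positive for COVID-19 after my trip last week.",
    "Who tested positive?",
))
```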
Proceedings Article

Revisiting Self-training for Few-shot Learning of Language Model

TL;DR: This paper presented SFLM, a state-of-the-art prompt-based few-shot learner that generates a pseudo label from a weakly augmented version of each input sentence and is then fine-tuned to predict the same pseudo label for the strongly augmented version.
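The weak/strong pseudo-labeling objective described above can be sketched as a consistency loss in the spirit of FixMatch-style self-training; the function names and confidence threshold below are hypothetical, not the authors' released SFLM code.

```python
import torch
import torch.nn.functional as F

# Generic sketch of the weak/strong pseudo-labeling objective: predict
# on the weakly augmented text to get a pseudo label, keep it only when
# the model is confident, then train the strongly augmented view to
# reproduce that label. Hypothetical names; not the authors' code.
def self_training_loss(model, weak_batch, strong_batch, threshold=0.9):
    with torch.no_grad():
        probs = F.softmax(model(weak_batch), dim=-1)
        confidence, pseudo_labels = probs.max(dim=-1)
        mask = (confidence >= threshold).float()  # keep confident examples only
    logits = model(strong_batch)  # strongly augmented view of the same inputs
    per_example = F.cross_entropy(logits, pseudo_labels, reduction="none")
    return (per_example * mask).mean()
```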
Trending Questions (1)
What are the limitations of transfer learning with a unified text-to-text transformer?

The paper does not mention the limitations of transfer learning with a unified text-to-text transformer.