Open Access Journal Article

Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer

TL;DR
This article introduced a unified framework that converts all text-based language problems into a text-to-text format and compared pre-training objectives, architectures, unlabeled data sets, transfer approaches, and other factors on dozens of language understanding tasks.
Abstract
Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts all text-based language problems into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled data sets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new "Colossal Clean Crawled Corpus", we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our data set, pre-trained models, and code.
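As a concrete illustration of the text-to-text format described in the abstract, the sketch below runs two tasks through one of the released checkpoints via the HuggingFace transformers library. The library, the t5-small checkpoint name, and the example prompts are assumptions of this sketch, not part of the paper's abstract.

```python
# Minimal sketch of the text-to-text format using a released T5 checkpoint
# via HuggingFace transformers (assumes: pip install transformers torch sentencepiece).
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

# Every task is expressed as plain text with a task prefix, and the model
# produces its answer as plain text as well.
prompts = [
    "translate English to German: The house is wonderful.",
    "summarize: Transfer learning, where a model is first pre-trained on a "
    "data-rich task before being fine-tuned on a downstream task, has emerged "
    "as a powerful technique in natural language processing.",
]

for prompt in prompts:
    input_ids = tokenizer(prompt, return_tensors="pt").input_ids
    output_ids = model.generate(input_ids, max_new_tokens=40)
    print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

Because both tasks share one input/output interface, the same model, loss, and decoding procedure serve every task; only the text prefix changes.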



Citations
Proceedings ArticleDOI

Toward Stance-based Personas for Opinionated Dialogues.

TL;DR: The authors investigate stance-based persona representations and their impact on claim generation, showing that such representations can capture abstract aspects of the author persona, such as opinions, values, and beliefs, to drive language generation.
Proceedings ArticleDOI

Increasing Learning Efficiency of Self-Attention Networks through Direct Position Interactions, Learnable Temperature, and Convoluted Attention

TL;DR: This work investigates three modifications to self-attention networks (SANs): direct position interactions, learnable temperature, and convoluted attention, which enable faster learning, i.e., higher accuracy after fewer update steps.
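As an illustrative sketch of the second of these modifications, the snippet below replaces the fixed 1/sqrt(d_k) scale of dot-product attention with a trained parameter. This is a hypothetical PyTorch rendering of the idea, not the authors' released code; the class and parameter names are invented for this example.

```python
# Illustrative sketch (not the authors' code): dot-product attention whose
# softmax temperature is a learnable parameter instead of a fixed sqrt(d_k).
import math
import torch
import torch.nn as nn

class LearnableTemperatureAttention(nn.Module):
    def __init__(self, d_k: int):
        super().__init__()
        # Initialize at the standard value sqrt(d_k); training may adjust it.
        self.temperature = nn.Parameter(torch.tensor(math.sqrt(d_k)))

    def forward(self, q, k, v):
        # q, k, v: (batch, heads, seq_len, d_k)
        scores = q @ k.transpose(-2, -1) / self.temperature
        weights = scores.softmax(dim=-1)
        return weights @ v
```

Letting the model tune the temperature changes how sharply attention concentrates on a few positions, which is one plausible route to the faster convergence the paper reports.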
Posted Content

Universal Natural Language Processing with Limited Annotations: Try Few-shot Textual Entailment as a Start

TL;DR: This article proposes Universal Few-shot Textual Entailment (UFO-Entail), which recasts NLP problems that have only few-shot annotations as textual entailment, so that tasks such as question answering and coreference resolution can be handled by a single entailment model.
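The recasting step behind this approach can be sketched as follows: a task instance is mapped to a premise/hypothesis pair, which an entailment model then scores. The function name and hypothesis template below are hypothetical illustrations, not taken from the paper's released code.

```python
# Hypothetical sketch of recasting a QA example as textual entailment.
def qa_to_entailment(context: str, question: str, candidate_answer: str):
    premise = context
    # The hypothesis embeds the candidate answer into the question.
    hypothesis = f"The answer to the question '{question}' is '{candidate_answer}'."
    return premise, hypothesis

premise, hypothesis = qa_to_entailment(
    context="T5 was released by Google in 2019.",
    question="Who released T5?",
    candidate_answer="Google",
)
# An entailment model then scores whether the premise entails the hypothesis;
# the highest-scoring candidate answer is returned as the QA prediction.
```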
Proceedings ArticleDOI

A first look: Towards explainable textVQA models via visual and textual explanations

TL;DR: MTXNet is proposed: an end-to-end trainable multimodal architecture that generates multi-reference textual explanations which are consistent with human interpretations, help justify the model's decisions, and provide useful insights for diagnosing incorrect predictions.
Journal Article

Optimizing Transformers with Approximate Computing for Faster, Smaller and more Accurate NLP Models

TL;DR: This work applies approximate computing to Transformers in NLP tasks, proposing a framework that creates models that are faster, smaller, and in some cases more accurate, depending on the user's constraints.
Trending Questions (1)
What are the limitations of transfer learning with a unified text-to-text transformer?

The paper does not mention the limitations of transfer learning with a unified text-to-text transformer.