Open Access Journal Article

Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer

TL;DR
This article introduces a unified framework that converts all text-based language problems into a text-to-text format and compares pre-training objectives, architectures, unlabeled data sets, transfer approaches, and other factors on dozens of language understanding tasks.
Abstract
Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts all text-based language problems into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled data sets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new "Colossal Clean Crawled Corpus", we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our data set, pre-trained models, and code.
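
Below is a minimal sketch of what this text-to-text framing looks like in practice. It assumes the Hugging Face transformers library and the publicly released t5-small checkpoint, neither of which is specified on this page; the task prefixes follow the convention described in the paper.

```python
# A minimal sketch of the text-to-text framing described in the abstract.
# Assumptions (not prescribed by this page): the Hugging Face `transformers`
# library and the publicly released `t5-small` checkpoint; the task prefixes
# ("translate English to German:", "summarize:", "cola sentence:") follow the
# convention used in the paper.
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

# Every task (translation, summarization, classification) is cast as
# plain text in, plain text out.
examples = [
    "translate English to German: The house is wonderful.",
    "summarize: Transfer learning, where a model is first pre-trained on a "
    "data-rich task before being fine-tuned on a downstream task, has emerged "
    "as a powerful technique in natural language processing.",
    "cola sentence: The course is jumping well.",  # linguistic acceptability
]

for text in examples:
    inputs = tokenizer(text, return_tensors="pt")
    output_ids = model.generate(**inputs, max_new_tokens=40)
    print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

Because every task shares the same input and output interface, the same model, loss, and decoding procedure can be reused across tasks without task-specific heads.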


Citations
Proceedings Article

Autoregressive Knowledge Distillation through Imitation Learning

TL;DR: This paper develops a compression technique for autoregressive models driven by an imitation-learning perspective on knowledge distillation; the approach is designed to address the exposure-bias problem and consistently outperforms other distillation algorithms.
Proceedings Article

Style is NOT a single variable: Case Studies for Cross-Stylistic Language Understanding

TL;DR: The authors propose a cross-style classifier trained on multiple styles jointly, which improves overall classification performance over individually trained style classifiers, and find that some styles are highly dependent on one another in human-written text.
Posted Content

FastMoE: A Fast Mixture-of-Expert Training System

TL;DR: FastMoE is a distributed Mixture-of-Experts (MoE) training system built on PyTorch for common accelerators; it provides a hierarchical interface for both flexible model design and easy adaptation to different applications, such as Transformer-XL and Megatron-LM.
Proceedings Article

Vision Guided Generative Pre-trained Language Models for Multimodal Abstractive Summarization

TL;DR: The authors present a simple yet effective method for constructing vision-guided generative pre-trained language models (GPLMs) for the multimodal abstractive summarization (MAS) task, using attention-based add-on layers to incorporate visual information while preserving the models' original text-generation ability.
Proceedings Article

Dynamic Semantic Graph Construction and Reasoning for Explainable Multi-hop Science Question Answering

TL;DR: The authors employ Abstract Meaning Representation (AMR) as the semantic graph representation and propose a new framework that dynamically constructs a semantic graph to exploit more valid facts while providing explainability for multi-hop science question answering.
Trending Questions (1)
What are the limitations of transfer learning with a unified text-to-text transformer?

The paper does not mention the limitations of transfer learning with a unified text-to-text transformer.