Open Access · Journal Article

Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer

TLDR
This article introduced a unified framework that converts all text-based language problems into a text-to-text format and compared pre-training objectives, architectures, unlabeled data sets, transfer approaches, and other factors on dozens of language understanding tasks.
Abstract
Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts all text-based language problems into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled data sets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new "Colossal Clean Crawled Corpus", we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our data set, pre-trained models, and code.
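A minimal sketch of the text-to-text framing described in the abstract, assuming the Hugging Face transformers library and the publicly released t5-small checkpoint (neither is prescribed by this page; the paper only states that models and code are released). The task prefixes follow the conventions used by the released T5 checkpoints.

```python
# Sketch: every task is cast as "input text -> output text" via a task prefix.
# Assumes the `transformers` library and the released t5-small checkpoint.
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

examples = [
    "translate English to German: That is good.",   # machine translation
    "summarize: state authorities dispatched emergency crews to survey the damage.",  # summarization
    "cola sentence: The course is jumping well.",    # acceptability classification
]

for text in examples:
    input_ids = tokenizer(text, return_tensors="pt").input_ids
    output_ids = model.generate(input_ids, max_new_tokens=32)
    print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

Because classification, translation, and summarization all share this one interface, the same model, loss, and decoding procedure can be reused across tasks, which is what makes the systematic comparison in the paper possible.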



Citations
Posted Content

Want To Reduce Labeling Cost? GPT-3 Can Help

TL;DR: This paper explored ways to leverage GPT-3 as a low-cost data labeler for training other models, finding that reaching the same downstream performance on a variety of NLU and NLG tasks costs 50% to 96% less with GPT-3-generated labels than with human labels.
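A rough sketch of the labeling pattern summarized above; the helper query_gpt3_for_label is a hypothetical placeholder standing in for a GPT-3 call, not code from the cited paper.

```python
# Hedged sketch: a large model supplies labels for unlabeled text, and the
# resulting pairs are used to train a smaller downstream model.
# `query_gpt3_for_label` is a hypothetical placeholder, not a real API call.
from typing import Callable, List, Tuple

def build_pseudo_labeled_set(
    unlabeled_texts: List[str],
    query_gpt3_for_label: Callable[[str], str],
) -> List[Tuple[str, str]]:
    """Pair each text with a model-generated label instead of a human one."""
    return [(text, query_gpt3_for_label(text)) for text in unlabeled_texts]

# The (text, label) pairs would then feed ordinary fine-tuning code for the
# downstream NLU/NLG model, replacing some or all human-annotated data.
```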
Posted Content

Easy and Efficient Transformer: Scalable Inference Solution for Large NLP Model

TL;DR: In this article, a series of ultra-large-scale pre-trained model optimization methods is proposed that combines algorithmic characteristics with GPU hardware characteristics, yielding a significant performance improvement over existing schemes.
Proceedings Article (DOI)

Structure-to-Text Generation with Self-Training, Acceptability Classifiers and Context-Conditioning for the GEM Shared Task

TL;DR: This paper explored the use of self-training and acceptability classifiers with pre-trained models for natural language generation in structure-to-text settings using three GEM datasets (E2E, WebNLG-en, Schema-Guided Dialog).
Book Chapter

CUSTOM: Aspect-Oriented Product Summarization for E-Commerce

TL;DR: Wang et al. proposed CUSTOM, an aspect-oriented product summarization method for e-commerce that generates diverse and controllable summaries towards different product aspects.
Proceedings Article

Exploiting Reasoning Chains for Multi-hop Science Question Answering

TL;DR: In this article, a Chain Guided Retriever-reader (CGR) framework is proposed to model the reasoning chain for multi-hop science question answering, which is capable of performing explainable reasoning without the need for any corpus-specific annotations, such as ground-truth reasoning chains or human-annotated entity mentions.
Trending Questions (1)
What are the limitations of transfer learning with a unified text-to-text transformer?

The paper does not mention the limitations of transfer learning with a unified text-to-text transformer.