Open Access Journal Article

Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer

TL;DR
This article introduces a unified framework that converts all text-based language problems into a text-to-text format, and compares pre-training objectives, architectures, unlabeled data sets, transfer approaches, and other factors on dozens of language understanding tasks.
Abstract
Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts all text-based language problems into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled data sets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new "Colossal Clean Crawled Corpus", we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our data set, pre-trained models, and code.
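To make the text-to-text format concrete, here is a minimal sketch, assuming the Hugging Face Transformers and SentencePiece packages and the publicly released t5-small checkpoint. The task prefixes (translation, CoLA acceptability, summarization) follow the conventions described in the paper; the input sentences are illustrative placeholders.

```python
# Minimal sketch of T5's text-to-text interface (illustrative inputs, not the paper's data).
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

# Every task is expressed as plain text with a task prefix, and the model
# always emits text, whether the task is translation, classification,
# or summarization.
inputs = [
    "translate English to German: The house is wonderful.",
    "cola sentence: The course is jumping well.",  # acceptability -> "acceptable" / "not acceptable"
    "summarize: state authorities dispatched emergency crews tuesday to survey the damage ...",
]

for text in inputs:
    ids = tokenizer(text, return_tensors="pt").input_ids
    out = model.generate(ids, max_new_tokens=32)
    print(tokenizer.decode(out[0], skip_special_tokens=True))
```

Because every task shares the same text-in, text-out interface, the same model, loss, and decoding procedure can serve translation, classification, and summarization alike.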

Citations
Posted Content

FewCLUE: A Chinese Few-shot Learning Evaluation Benchmark

TL;DR: The Chinese Few-shot Learning Evaluation Benchmark (FewCLUE) is the first comprehensive few-shot evaluation benchmark in Chinese; it includes nine tasks, ranging from single-sentence and sentence-pair classification to machine reading comprehension.
Posted Content

Question-aware Transformer Models for Consumer Health Question Summarization

TL;DR: In this article, the authors propose an abstractive question summarization model that leverages the semantic interpretation of a question via recognition of medical entities to generate informative summaries for real-world consumer health questions.
Posted Content

MATINF: A Jointly Labeled Large-Scale Dataset for Classification, Question Answering and Summarization.

TL;DR: This work proposes MATINF, the first jointly labeled large-scale dataset for classification, question answering and summarization, and benchmarks existing methods and a novel multi-task baseline over MATINF to inspire further research.
Posted Content

With Little Power Comes Great Responsibility

TL;DR: This paper found that typical test sets of 2000 sentences have approximately 75% power to detect differences of 1 BLEU point, and that typical experimental designs for human rating studies are underpowered to detect the small model differences that are frequently studied.
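As a rough illustration of such a power calculation, here is a toy simulation; it is not that paper's methodology, and the per-sentence spread is a placeholder chosen so that a true 1-point difference on a 2000-sentence test set is detected roughly 75% of the time.

```python
# Toy power simulation (illustrative only; not the cited paper's method or data).
# Assumes per-sentence score differences between two systems are i.i.d. normal;
# `sentence_sd` is a placeholder picked so that power lands near the ~75% figure.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def estimated_power(n_sentences=2000, true_delta=1.0, sentence_sd=17.0,
                    n_simulations=2000, alpha=0.05):
    """Fraction of simulated test sets where a one-sample t-test on the
    per-sentence differences detects a true mean difference of `true_delta`."""
    detected = 0
    for _ in range(n_simulations):
        diffs = rng.normal(true_delta, sentence_sd, size=n_sentences)
        _, p_value = stats.ttest_1samp(diffs, 0.0)
        detected += p_value < alpha
    return detected / n_simulations

print(f"power ~ {estimated_power():.2f}")  # about 0.75 with these placeholder values
```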
Proceedings Article

ForecastQA: A Question Answering Challenge for Event Forecasting with Temporal Text Data

TL;DR: In this paper, the forecasting scenario is simulated as a restricted-domain, multiple-choice question-answering (QA) task over temporal news documents, in which a model has to make a forecasting judgement.
Trending Questions
What are the limitations of transfer learning with a unified text-to-text transformer?

The paper does not explicitly discuss the limitations of transfer learning with a unified text-to-text transformer.