Open Access Journal Article

Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer

TLDR
This article introduced a unified framework that converts all text-based language problems into a text-to-text format and compared pre-training objectives, architectures, unlabeled data sets, transfer approaches, and other factors on dozens of language understanding tasks.
Abstract
Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts all text-based language problems into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled data sets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new "Colossal Clean Crawled Corpus", we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our data set, pre-trained models, and code.
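The text-to-text framing means every task, from translation to classification, is handled by feeding the model an input string and training it to generate a target string, with a short task prefix selecting the task. Below is a minimal sketch of that interface; it assumes the publicly released t5-small checkpoint and the Hugging Face transformers library, neither of which is described on this page.

# Minimal sketch of the text-to-text interface: every task is "text in, text out",
# selected by a task prefix. Assumes the released t5-small checkpoint and the
# Hugging Face transformers library (assumptions beyond this page).
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

prompts = [
    "translate English to German: The house is wonderful.",        # translation
    "summarize: Transfer learning has emerged as a powerful ...",  # summarization
    "cola sentence: The books is on the table.",                   # acceptability classification
]

for prompt in prompts:
    input_ids = tokenizer(prompt, return_tensors="pt").input_ids
    output_ids = model.generate(input_ids, max_new_tokens=40)
    print(tokenizer.decode(output_ids[0], skip_special_tokens=True))

Because the output is always text, one model, loss function, and decoding procedure serve every task; for a classification benchmark such as CoLA the model simply generates the label as a string ("acceptable" or "not acceptable").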

Citations
Proceedings Article

Compositional Generalization for Neural Semantic Parsing via Span-level Supervised Attention

TL;DR: This work proposes a span-level supervised attention loss that improves compositional generalization in semantic parsers, building on existing losses that encourage attention maps in neural sequence-to-sequence models to imitate the output of classical word alignment algorithms.
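To make the underlying idea concrete, the word-level supervision that such a loss builds on can be written as a cross-entropy between the decoder's attention map and alignments produced by an external word aligner. The sketch below is illustrative only; it assumes PyTorch and is not the paper's span-level formulation.

import torch

def supervised_attention_loss(attn, alignment, eps=1e-8):
    # attn:      (tgt_len, src_len) attention weights from the decoder, rows sum to 1.
    # alignment: (tgt_len, src_len) 0/1 matrix from a classical word aligner.
    # Normalize each alignment row into a target distribution, then penalize the
    # attention map's divergence from it (rows with no alignment contribute 0).
    target = alignment / (alignment.sum(dim=-1, keepdim=True) + eps)
    return -(target * torch.log(attn + eps)).sum(dim=-1).mean()

Such an auxiliary loss would typically be added, with some weight, to the usual sequence-to-sequence cross-entropy during training.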
Posted Content

Template Guided Text Generation for Task-Oriented Dialogue.

TL;DR: This work investigates two methods for natural language generation across a large number of APIs using a single domain-independent model: a schema-guided approach and an approach that uses a small number of templates, growing linearly in the number of slots, to convey the semantics of the API.
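As a toy illustration of the template idea, the number of templates can grow linearly with the number of slots because each slot gets its own short utterance fragment; the slot names and templates here are hypothetical and not taken from the paper.

# Hypothetical per-slot templates: one fragment per (act, slot) pair, so the
# template count grows linearly in the number of slots rather than
# combinatorially in their combinations.
TEMPLATES = {
    ("inform", "restaurant_name"): "{value} is a restaurant",
    ("inform", "price_range"):     "in the {value} price range",
    ("inform", "area"):            "located in the {value}",
}

def realize(dialogue_acts):
    # Turn a list of (act, slot, value) triples into a simple templated utterance.
    parts = [TEMPLATES[(act, slot)].format(value=value)
             for act, slot, value in dialogue_acts]
    return ", ".join(parts) + "."

print(realize([
    ("inform", "restaurant_name", "Sala Thong"),
    ("inform", "price_range", "moderate"),
    ("inform", "area", "city centre"),
]))
# -> Sala Thong is a restaurant, in the moderate price range, located in the city centre.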
Proceedings Article

Unsupervised Multi-hop Question Answering by Question Generation.

TL;DR: This work proposes MQA-QG, an unsupervised framework that can generate human-like multi-hop training data from both homogeneous and heterogeneous data sources and shows that pretraining the QA system with the generated data would greatly reduce the demand for human-annotated training data.
Journal Article

Analysis and Evaluation of Language Models for Word Sense Disambiguation

TL;DR: An in-depth quantitative and qualitative analysis of the celebrated BERT model with respect to lexical ambiguity reveals that BERT can accurately capture high-level sense distinctions, even when a limited number of examples is available for each word sense.
Proceedings Article

Language ID in the Wild: Unexpected Challenges on the Path to a Thousand-Language Web Text Corpus

TL;DR: Two classes of techniques are proposed: wordlist-based tunable-precision filters and transformer-based semi-supervised LangID models, which increase median dataset precision from 5.5% to 71.2% and enable an initial data set covering 100K or more relatively clean sentences in each of 500+ languages, paving the way towards a 1,000-language web text corpus.
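As a rough sketch of the wordlist-based filtering idea, a sentence can be kept only if the fraction of its tokens found in a trusted wordlist for the predicted language clears a threshold, and that threshold is the knob that trades recall for precision. The wordlist and the 0.8 threshold below are hypothetical, chosen only for illustration.

# Illustrative wordlist-based filter: keep a sentence only if enough of its
# tokens appear in a trusted wordlist for the language predicted by LangID.
# The tiny wordlist and the 0.8 threshold are hypothetical.
def wordlist_filter(sentence, wordlist, threshold=0.8):
    tokens = sentence.lower().split()
    if not tokens:
        return False
    hits = sum(token in wordlist for token in tokens)
    return hits / len(tokens) >= threshold  # raise the threshold for higher precision

english_words = {"the", "cat", "sat", "on", "mat"}
print(wordlist_filter("The cat sat on the mat", english_words))        # True
print(wordlist_filter("Der Hund liegt auf der Matte", english_words))  # False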
Trending Questions
What are the limitations of transfer learning with a unified text-to-text transformer?

The paper does not mention the limitations of transfer learning with a unified text-to-text transformer.