Open Access Journal Article

Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer

TLDR
This article introduced a unified framework that converts all text-based language problems into a text-to-text format and compared pre-training objectives, architectures, unlabeled data sets, transfer approaches, and other factors on dozens of language understanding tasks.
Abstract
Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts all text-based language problems into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled data sets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new ``Colossal Clean Crawled Corpus'', we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our data set, pre-trained models, and code.
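The text-to-text framing described in the abstract can be illustrated with the publicly released checkpoints. Below is a minimal sketch, assuming the Hugging Face transformers library and its t5-small checkpoint (not the paper's original codebase); the task prefixes follow the paper's convention of prepending a short task description to the input string.

```python
# Minimal sketch: every task is cast as text in, text out.
# Assumes the Hugging Face `transformers` library and the t5-small checkpoint.
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

# A short task prefix selects the task; the model, loss, and decoding
# procedure are otherwise identical across tasks.
examples = [
    "translate English to German: The house is wonderful.",   # translation
    "cola sentence: The course is jumping well.",              # acceptability
    "summarize: state authorities dispatched emergency crews "
    "tuesday to survey the damage after an onslaught of "
    "severe weather in mississippi.",                          # summarization
]

for text in examples:
    inputs = tokenizer(text, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=32)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Because every task shares the same string-in, string-out signature, the same pre-trained model can be fine-tuned on translation, classification, and summarization without task-specific architectures.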


Citations
Posted Content

BiToD: A Bilingual Multi-Domain Dataset For Task-Oriented Dialogue Modeling

TL;DR: The BiToD task-oriented dialogue dataset, as discussed by the authors, contains 7k multi-domain dialogues (144k utterances) with a large and realistic bilingual knowledge base, which serves as an effective benchmark for evaluating bilingual ToD systems and cross-lingual transfer learning.
Proceedings Article

DeLighT: Deep and Light-weight Transformer

TL;DR: DeLighT, as mentioned in this paper, is a deep and light-weight transformer-based model for machine translation and language modeling tasks that uses, on average, 2.5 to 4 times fewer parameters.
Proceedings Article

What does BERT know about books, movies and music? Probing BERT for Conversational Recommendation

TL;DR: Overall, the experiments show that: (i) BERT has knowledge stored in its parameters about the content of books, movies, and music; (ii) it has more content-based knowledge than collaborative-based knowledge; and (iii) it fails on conversational recommendation when faced with adversarial data.
Posted Content

Post-hoc Interpretability for Neural NLP: A Survey.

TL;DR: This article surveys post-hoc interpretability methods for NLP models, providing a categorization of how the methods communicate explanations and discussing them in depth.
Proceedings Article

Plan-then-Generate: Controlled Data-to-Text Generation via Planning

TL;DR: In this article, the authors propose a novel Plan-then-Generate (PlanGen) framework to improve the controllability of neural data-to-text models, which is able to control both the intra-sentence and inter-sentence structure of the generated output.
Trending Questions (1)
What are the limitations of transfer learning with a unified text-to-text transformer?

The paper does not mention the limitations of transfer learning with a unified text-to-text transformer.