Open Access · Journal Article

Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer

TLDR
This article introduces a unified framework that converts all text-based language problems into a text-to-text format and compares pre-training objectives, architectures, unlabeled data sets, transfer approaches, and other factors across dozens of language understanding tasks.
Abstract
Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts all text-based language problems into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled data sets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new ``Colossal Clean Crawled Corpus'', we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our data set, pre-trained models, and code.
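As a concrete illustration of the text-to-text framing described in the abstract, the sketch below (an illustrative example assuming the Hugging Face transformers library and the publicly released t5-small checkpoint, not code from the paper) casts two different tasks as string-to-string problems by prepending task prefixes:

```python
# Sketch of the text-to-text framing: every task is "input string -> output string",
# distinguished only by a task prefix. Assumes the Hugging Face transformers
# library and the released t5-small checkpoint.
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

examples = [
    # translation, cast as text-to-text via a task prefix
    "translate English to German: The house is wonderful.",
    # summarization, same interface, different prefix
    "summarize: Transfer learning, where a model is first pre-trained on a "
    "data-rich task before being fine-tuned on a downstream task, has emerged "
    "as a powerful technique in NLP.",
]

for text in examples:
    inputs = tokenizer(text, return_tensors="pt")
    output_ids = model.generate(**inputs, max_length=40)
    print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

Because every task shares the same string-in, string-out interface, the same model, loss, and decoding procedure can be reused across tasks.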



Citations
Posted Content

Towards generating citation sentences for multiple references with intent control.

TL;DR: The authors use the Fusion-in-Decoder approach to handle multiple long inputs and incorporate predicted citation intents into training, enabling intent-controlled generation of citation sentences that cover multiple references, with the aim of supporting literature reviews and article writing.
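The following minimal sketch illustrates the Fusion-in-Decoder idea of coping with multiple inputs (it assumes the Hugging Face transformers library and a generic T5 checkpoint; the prefixes and variable names are illustrative, not the authors' implementation): each reference is encoded together with the query independently, the encoder outputs are concatenated, and a single decoder attends over the fused sequence.

```python
# Fusion-in-Decoder sketch: encode each (query, reference) pair separately,
# concatenate the encoder outputs, and let one decoder cross-attend over all
# of them. Illustrative only; real FiD implementations do the reshaping
# inside the model for efficiency.
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer
from transformers.modeling_outputs import BaseModelOutput

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

query = "generate a citation sentence for the references below"  # hypothetical prompt
references = ["abstract of reference 1 ...", "abstract of reference 2 ..."]

# Encode the query paired with each reference independently.
encoded = [tokenizer(query + " context: " + ref, return_tensors="pt") for ref in references]
hidden = [model.encoder(**enc).last_hidden_state for enc in encoded]
mask = torch.cat([enc["attention_mask"] for enc in encoded], dim=1)

# Fuse by concatenating along the sequence axis; the decoder cross-attends
# over all references at once while generating.
fused = BaseModelOutput(last_hidden_state=torch.cat(hidden, dim=1))
output_ids = model.generate(encoder_outputs=fused, attention_mask=mask, max_length=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```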
Proceedings ArticleDOI

AnswerSumm: A Manually-Curated Dataset and Pipeline for Answer Summarization

TL;DR: Fabbri et al. introduce a manually-curated dataset and an annotation pipeline for answer summarization, presented at the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT 2022).
Posted Content

Pseudo Relevance Feedback with Deep Language Models and Dense Retrievers: Successes and Pitfalls

TL;DR: This paper studies text-based and vector-based pseudo relevance feedback (PRF) approaches for deep language model re-rankers and dense retrievers, analysing when the feedback improves effectiveness and when it fails.
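For the vector-based variant, a Rocchio-style sketch conveys the general mechanism (an illustration under simplifying assumptions, not the paper's exact formulation; the function name and the alpha mixing weight are hypothetical):

```python
import numpy as np

def vector_prf_scores(query_vec, doc_vecs, feedback_k=3, alpha=0.5):
    """Rocchio-style vector PRF sketch: treat the top-k retrieved document
    embeddings as pseudo-relevant, mix their centroid into the query
    embedding, and re-score the collection with the refined query."""
    first_pass = doc_vecs @ query_vec                        # initial dense retrieval scores
    feedback = doc_vecs[np.argsort(-first_pass)[:feedback_k]]
    new_query = alpha * query_vec + (1.0 - alpha) * feedback.mean(axis=0)
    return doc_vecs @ new_query                              # refined scores for re-ranking

# Toy usage with random embeddings standing in for a dense index.
rng = np.random.default_rng(0)
doc_embeddings = rng.normal(size=(100, 768))
query_embedding = rng.normal(size=768)
top_docs = np.argsort(-vector_prf_scores(query_embedding, doc_embeddings))[:10]
```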
Posted Content

Neural language modeling of free word order argument structure

TL;DR: The authors focus on verb argument structure in German, which has the interesting property that verb arguments may appear in a relatively free order in subordinate clauses, and show that both Transformer and LSTM language models score substantially better than chance on this test.
Proceedings Article

TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning

TL;DR: The authors present a new unsupervised method based on pre-trained Transformers and a sequential denoising auto-encoder (TSDAE), which outperforms previous approaches by up to 6.4 points.
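The core of TSDAE is a denoising objective: the input sentence is corrupted (by token deletion in the default setting) and an encoder-decoder must reconstruct the original, so the pooled encoder output has to capture the sentence's meaning. A minimal sketch of the corruption step, assuming whitespace tokenization purely for illustration:

```python
import random

def tsdae_delete_noise(tokens, del_ratio=0.6):
    """Sketch of TSDAE-style input corruption via token deletion (the 0.6 ratio
    follows the commonly reported default). The model is trained to reconstruct
    the original sentence from this corrupted input."""
    kept = [t for t in tokens if random.random() > del_ratio]
    return kept if kept else [random.choice(tokens)]  # never return an empty input

# Toy usage: corrupt a sentence before feeding it to the denoising auto-encoder.
sentence = "unsupervised sentence embeddings can be learned by denoising".split()
print(" ".join(tsdae_delete_noise(sentence)))
```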
Trending Questions (1)
What are the limitations of transfer learning with a unified text-to-text transformer?

The paper does not mention the limitations of transfer learning with a unified text-to-text transformer.