Open Access Journal Article

Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer

TLDR
This article introduced a unified framework that converts all text-based language problems into a text-to-text format and compared pre-training objectives, architectures, unlabeled data sets, transfer approaches, and other factors on dozens of language understanding tasks.
Abstract
Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts all text-based language problems into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled data sets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new "Colossal Clean Crawled Corpus", we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our data set, pre-trained models, and code.
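The text-to-text framing described in the abstract means every task, from translation to classification, is expressed as feeding the model an input string and training it to produce an output string. A minimal sketch in Python, assuming the Hugging Face transformers library and the publicly released t5-small checkpoint; the task prefixes follow the paper's convention:

from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

# Every task is text in, text out; the prefix tells the model which task to run.
examples = [
    "translate English to German: The house is wonderful.",
    "summarize: Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing.",
    "cola sentence: The course is jumping well.",
]

for text in examples:
    inputs = tokenizer(text, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=40)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))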



Citations
Posted Content

DocFormer: End-to-End Transformer for Document Understanding

TL;DR: DocFormer uses text, vision, and spatial features and combines them with a novel multi-modal self-attention layer, making it easy for the model to correlate text with visual tokens and vice versa.
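A schematic sketch in Python (PyTorch) of the kind of multi-modal fusion described in the TL;DR; the dimensions, the simple additive fusion, and the encoder configuration are illustrative assumptions rather than DocFormer's actual architecture:

import torch
import torch.nn as nn

seq_len, dim = 128, 256
text_emb = torch.randn(1, seq_len, dim)     # token embeddings from OCR'd text
visual_emb = torch.randn(1, seq_len, dim)   # per-token visual features (e.g. from a CNN)
spatial_emb = torch.randn(1, seq_len, dim)  # embeddings of bounding-box coordinates

# Fuse the modalities so shared self-attention can correlate text and visual tokens.
fused = text_emb + visual_emb + spatial_emb
layer = nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True)
encoder = nn.TransformerEncoder(layer, num_layers=2)
out = encoder(fused)
print(out.shape)  # torch.Size([1, 128, 256])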
Proceedings Article

BERT-based Dense Retrievers Require Interpolation with BM25 for Effective Passage Retrieval

TL;DR: In this paper, the authors further investigate the topic of interpolating BM25 and BERT-based rankers and find that interpolation with BM25 is necessary for BERT-based dense retrievers to perform effectively.
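A minimal sketch of the score interpolation the paper investigates: a document's final score is a weighted sum of its BM25 score and its dense-retriever score. The document ids, score values, and weight alpha below are illustrative, and in practice the two score distributions are usually normalized before mixing:

def interpolate_scores(bm25_scores, dense_scores, alpha=0.5):
    """Combine per-document scores: alpha * BM25 + (1 - alpha) * dense."""
    docs = set(bm25_scores) | set(dense_scores)
    return {
        doc: alpha * bm25_scores.get(doc, 0.0) + (1 - alpha) * dense_scores.get(doc, 0.0)
        for doc in docs
    }

bm25_scores = {"d1": 0.84, "d2": 0.61, "d3": 0.32}   # lexical scores (normalized)
dense_scores = {"d1": 0.62, "d2": 0.71, "d4": 0.55}  # BERT-based similarity scores

fused = interpolate_scores(bm25_scores, dense_scores, alpha=0.3)
ranking = sorted(fused, key=fused.get, reverse=True)
print(ranking)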
Posted Content

RPT: Relational Pre-trained Transformer Is Almost All You Need towards Democratizing Data Preparation

TL;DR: RPT, a denoising autoencoder for tuple-to-X models, is presented: a Transformer-based neural translation architecture consisting of a bidirectional encoder and a left-to-right autoregressive decoder, generalizing both BERT and GPT.
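A minimal sketch of the tuple-denoising idea described above: serialize a database tuple as text, mask some attribute values, and train an encoder-decoder model to reconstruct the clean tuple. The serialization format and mask token are illustrative assumptions, not RPT's exact scheme:

import random

def serialize(row):
    return " ; ".join(f"{col} : {val}" for col, val in row.items())

def corrupt(row, mask_token="[MASK]", p=0.3):
    noisy = {col: (mask_token if random.random() < p else val) for col, val in row.items()}
    return serialize(noisy)

row = {"name": "Ada Lovelace", "city": "London", "year": "1815"}
source = corrupt(row)    # encoder input with masked attribute values
target = serialize(row)  # decoder target: the clean tuple
print(source)
print(target)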
Proceedings Article

Support-set bottlenecks for video-text representation learning

TL;DR: In this article, a generative objective is proposed that pushes visually similar video-text pairs together by requiring each sample's caption to be reconstructed as a weighted combination of a support set of visual representations.
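A schematic sketch of the support-set idea in the TL;DR: each caption representation is reconstructed as an attention-weighted combination of a support set of visual representations, so visually similar video-text pairs end up sharing support vectors. The tensor shapes and the simple embedding-space reconstruction loss below are illustrative simplifications, not the paper's generative text decoder:

import torch
import torch.nn.functional as F

batch, support, dim = 4, 16, 256
caption_emb = torch.randn(batch, dim)        # text-encoder outputs for each caption
support_visual = torch.randn(support, dim)   # support set of video representations

# Attention weights of every caption over the support set.
weights = F.softmax(caption_emb @ support_visual.T / dim ** 0.5, dim=-1)

# Reconstruct each caption from the support set and penalize the discrepancy.
reconstruction = weights @ support_visual
loss = F.mse_loss(reconstruction, caption_emb)
print(loss.item())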
Proceedings Article

A Unified Generative Framework for Various NER Subtasks

TL;DR: The authors propose to formulate the NER subtasks as an entity span sequence generation task, which can be solved by a unified sequence-to-sequence (Seq2Seq) framework.
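A minimal sketch of casting NER as generation: the source is the raw sentence and the target linearizes the entity spans with their types, so a standard Seq2Seq model can be trained on the pairs. The linearization format below is an illustrative assumption, not the paper's pointer-based scheme:

def linearize_entities(sentence, entities):
    """entities: list of (start, end, type) character spans."""
    parts = [f"{sentence[start:end]} is {etype}" for start, end, etype in entities]
    return "; ".join(parts) if parts else "no entities"

sentence = "Barack Obama was born in Hawaii."
entities = [(0, 12, "person"), (25, 31, "location")]

source = sentence                                # fed to the encoder
target = linearize_entities(sentence, entities)  # decoder target
print(target)  # Barack Obama is person; Hawaii is location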
Trending Questions (1)
What are the limitations of transfer learning with a unified text-to-text transformer?

The paper does not mention the limitations of transfer learning with a unified text-to-text transformer.