Open Access Journal Article

Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer

TL;DR
This article introduced a unified framework that converts all text-based language problems into a text-to-text format and compared pre-training objectives, architectures, unlabeled data sets, transfer approaches, and other factors on dozens of language understanding tasks.
Abstract
Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts all text-based language problems into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled data sets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new ``Colossal Clean Crawled Corpus'', we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our data set, pre-trained models, and code.
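To make the text-to-text idea concrete, here is a minimal sketch of how every task reduces to feeding the model one input string and generating one output string. It is an illustration only: it assumes the public T5 checkpoints as ported to the Hugging Face transformers library (the paper's own released code is a separate codebase), and it uses the task prefixes "translate English to German:" and "summarize:" that those checkpoints were trained with; the example inputs are made up.

from transformers import T5ForConditionalGeneration, T5Tokenizer

# Load a small public T5 checkpoint (assumed available via Hugging Face).
tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

def text_to_text(prompt: str, max_new_tokens: int = 40) -> str:
    # One text-in, text-out step: the same interface serves translation,
    # summarization, classification, and QA; only the prompt prefix changes.
    inputs = tokenizer(prompt, return_tensors="pt")
    output_ids = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)

print(text_to_text("translate English to German: The house is wonderful."))
print(text_to_text("summarize: Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing."))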



Citations
Posted Content

Towards mental time travel: a hierarchical memory for reinforcement learning agents

TL;DR: The Hierarchical Transformer Memory (HTM), as discussed by the authors, lets reinforcement learning agents recall the past in detail by dividing it into chunks, performing high-level attention over coarse summaries of those chunks, and then performing detailed attention within only the most relevant ones.
Posted Content

Sentence-Permuted Paragraph Generation

TL;DR: The authors proposed a hierarchical positional embedding to improve the content diversity of multi-sentence paragraph generation models, permuting sentence orders so as to maximize the expected log-likelihood of the output paragraph distribution with respect to all possible sentence orders.
Proceedings Article

Ultra-High Dimensional Sparse Representations with Binarization for Efficient Text Retrieval

TL;DR: In this paper, the authors proposed an ultra-high dimensional (UHD) representation scheme with directly controllable sparsity, which allows for binarized representations that are highly efficient to store and search.
Proceedings Article

Benchmarking a transformer-FREE model for ad-hoc retrieval

TL;DR: In this paper, the authors empirically assess the feasibility of applying transformer-based models in real-world ad-hoc retrieval applications by comparing them to a more sustainable alternative comprising only 620 trainable parameters.
Proceedings Article

ForumSum: A Multi-Speaker Conversation Summarization Dataset.

TL;DR: The authors collected ForumSum, a diverse, high-quality conversation summarization dataset with human-written summaries, and used a conversational corpus for pre-training to improve the quality of their chat summarization model.
Trending Questions (1)
What are the limitations of transfer learning with a unified text-to-text transformer?

The paper does not mention the limitations of transfer learning with a unified text-to-text transformer.