Open Access Journal Article

Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer

TLDR
This article introduced a unified framework that converts all text-based language problems into a text-to-text format and compared pre-training objectives, architectures, unlabeled data sets, transfer approaches, and other factors on dozens of language understanding tasks.
Abstract
Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts all text-based language problems into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled data sets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new "Colossal Clean Crawled Corpus", we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our data set, pre-trained models, and code.
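
To make the text-to-text framing in the abstract concrete, the following is a minimal sketch that casts two different tasks as plain-text input and output using the publicly released T5 checkpoints through the Hugging Face transformers library; the call pattern, the t5-small checkpoint name, and the example prompts are assumptions of this sketch rather than anything stated in the abstract.

```python
# Minimal sketch: two different tasks expressed in the same text-to-text
# format with a released T5 checkpoint via Hugging Face transformers
# (assumes `pip install transformers sentencepiece torch`).
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

# Every task is plain text in, plain text out: only the task prefix changes.
prompts = [
    "translate English to German: The house is wonderful.",
    "summarize: Transfer learning, where a model is first pre-trained on a "
    "data-rich task before being fine-tuned on a downstream task, has emerged "
    "as a powerful technique in natural language processing.",
]

for prompt in prompts:
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=40)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Only the task prefix changes between tasks; the model, training objective, and decoding procedure stay the same, which is the point of the unified framework.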



Citations
Proceedings Article

How Much Knowledge Can You Pack Into the Parameters of a Language Model?

TL;DR: The authors fine-tune pre-trained models to answer questions without access to any external context or knowledge; performance scales with model size and is competitive with open-domain systems that explicitly retrieve answers from an external knowledge source.
Proceedings Article

QA-GNN: Reasoning with Language Models and Knowledge Graphs for Question Answering

TL;DR: This work proposes a new model, QA-GNN, which addresses the problem of answering questions using knowledge from pre-trained language models (LMs) and knowledge graphs (KGs) through two key innovations: relevance scoring and joint reasoning.
Posted Content

CERT: Contrastive Self-supervised Learning for Language Understanding

TL;DR: This work proposes CERT (Contrastive self-supervised Encoder Representations from Transformers), which pretrains language representation models using contrastive self-supervised learning at the sentence level.
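
As a rough illustration of sentence-level contrastive self-supervised learning of the kind this TL;DR describes, here is a hedged PyTorch sketch of an NT-Xent-style loss over two augmented views of the same sentences; the function name, batch shapes, and temperature are illustrative assumptions, not CERT's actual implementation.

```python
# Hedged sketch (not the CERT code): a simplified NT-Xent style contrastive
# loss over sentence embeddings of two augmented "views" of the same batch.
import torch
import torch.nn.functional as F

def nt_xent_loss(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.1):
    """z1, z2: (batch, dim) embeddings of two views of the same sentences."""
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    logits = z1 @ z2.t() / temperature      # cosine similarities between views
    targets = torch.arange(z1.size(0))      # matching pairs lie on the diagonal
    return F.cross_entropy(logits, targets)

loss = nt_xent_loss(torch.randn(8, 128), torch.randn(8, 128))
print(loss.item())
```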
Posted Content

Synthesizer: Rethinking Self-Attention in Transformer Models

TL;DR: This work investigates the true importance and contribution of the dot-product self-attention mechanism to the performance of Transformer models and proposes Synthesizer, a model that learns synthetic attention weights without token-token interactions.
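
To make "synthetic attention weights without token-token interactions" concrete, here is a hedged PyTorch sketch of a dense-Synthesizer-style layer in which each token predicts its own attention logits directly; the class name, dimensions, and MLP are illustrative assumptions, not the authors' code.

```python
# Hedged sketch (not the Synthesizer code): attention weights are predicted
# from each token alone, so no query-key dot products are computed.
import torch
import torch.nn as nn

class DenseSynthesizerAttention(nn.Module):
    def __init__(self, d_model: int, max_len: int):
        super().__init__()
        # Each token predicts a row of attention logits, one per position.
        self.score_mlp = nn.Sequential(
            nn.Linear(d_model, d_model),
            nn.ReLU(),
            nn.Linear(d_model, max_len),
        )
        self.value = nn.Linear(d_model, d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model); requires seq_len <= max_len
        seq_len = x.size(1)
        logits = self.score_mlp(x)[:, :, :seq_len]
        weights = torch.softmax(logits, dim=-1)   # (batch, seq_len, seq_len)
        return weights @ self.value(x)            # mix values with synthetic weights

x = torch.randn(2, 16, 64)
print(DenseSynthesizerAttention(d_model=64, max_len=32)(x).shape)  # (2, 16, 64)
```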
Posted Content

A Transformer-based Framework for Multivariate Time Series Representation Learning

TL;DR: A novel framework for multivariate time series representation learning based on the transformer encoder architecture, which can offer substantial performance benefits over fully supervised learning on downstream tasks, both with and even without leveraging additional unlabeled data, i.e., by reusing the existing data samples.
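
As a rough sketch of the kind of transformer encoder over multivariate time series this TL;DR refers to, the following PyTorch snippet maps a (batch, time, variables) series to per-timestep representations; the class name and all sizes are illustrative assumptions, not the authors' framework, and positional encodings are omitted for brevity.

```python
# Hedged sketch: a transformer encoder producing per-timestep representations
# of a multivariate time series (positional encodings omitted for brevity).
import torch
import torch.nn as nn

class TimeSeriesEncoder(nn.Module):
    def __init__(self, n_vars: int, d_model: int = 64, n_layers: int = 2):
        super().__init__()
        self.project = nn.Linear(n_vars, d_model)  # per-timestep input projection
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, n_vars) -> (batch, time, d_model)
        return self.encoder(self.project(x))

reps = TimeSeriesEncoder(n_vars=7)(torch.randn(4, 96, 7))
print(reps.shape)  # torch.Size([4, 96, 64])
```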
Trending Questions (1)
What are the limitations of transfer learning with a unified text-to-text transformer?

The paper does not explicitly enumerate the limitations of transfer learning with a unified text-to-text transformer.