Open Access Journal Article

Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer

TLDR
This article introduced a unified framework that converts all text-based language problems into a text-to-text format and compared pre-training objectives, architectures, unlabeled data sets, transfer approaches, and other factors on dozens of language understanding tasks.
Abstract
Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts all text-based language problems into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled data sets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new "Colossal Clean Crawled Corpus", we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our data set, pre-trained models, and code.
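The abstract's central idea, casting every task as text in and text out so that one model, one objective, and one decoding procedure cover summarization, translation, and classification, can be sketched with the released checkpoints. The snippet below is a minimal illustration, assuming the Hugging Face transformers library and the public t5-small checkpoint (neither is named on this page); the task prefixes follow the convention described in the paper.

```python
# Minimal sketch of the text-to-text framing, assuming the Hugging Face
# `transformers` library and the public "t5-small" checkpoint; both are
# illustrative choices, not part of this page.
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

# Every task is text in, text out: a task prefix tells the single model
# which problem to solve, so different tasks share one architecture.
prompts = [
    "summarize: Transfer learning, where a model is first pre-trained on a "
    "data-rich task before being fine-tuned on a downstream task, has "
    "emerged as a powerful technique in natural language processing.",
    "translate English to German: The house is wonderful.",
    "cola sentence: The course is jumping well.",  # linguistic acceptability
]

for prompt in prompts:
    input_ids = tokenizer(prompt, return_tensors="pt").input_ids
    output_ids = model.generate(input_ids, max_new_tokens=40)
    print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

Because the output is always a string, even classification labels such as "acceptable" or "unacceptable" are generated as literal text rather than read from a task-specific classification head.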



Citations
Posted Content

KFCNet: Knowledge Filtering and Contrastive Learning Network for Generative Commonsense Reasoning

TL;DR: The authors proposed a knowledge filtering and contrastive learning network (KFCNet) for natural language generation tasks that require high-quality output, such as commonsense generation and ad keyword generation.
Posted Content

Morph Call: Probing Morphosyntactic Content of Multilingual Transformers

TL;DR: This paper used a combination of neuron-, layer- and representation-level introspection techniques to analyze the morphosyntactic content of four multilingual transformers, including their less explored distilled versions.
Posted Content

HAConvGNN: Hierarchical Attention Based Convolutional Graph Neural Network for Code Documentation Generation in Jupyter Notebooks

TL;DR: In this article, a hierarchical attention-based ConvGNN component is proposed to augment the Seq2Seq network for the code documentation generation (CDG) task in Kaggle notebooks.
Posted Content

GNN is a Counter? Revisiting GNN for Question Answering.

TL;DR: In this article, a simple graph neural counter can outperform all the existing GNN modules on CommonsenseQA and OpenBookQA, two popular QA benchmark datasets which heavily rely on knowledge-aware reasoning.
Posted Content

Protein embeddings and deep learning predict binding residues for various ligand classes

TL;DR: In this paper, the authors proposed an Artificial Intelligence (AI)-based method that takes embeddings from the Transformer-based protein Language Model (pLM) ProtT5 as input.
Trending Questions (1)
What are the limitations of transfer learning with a unified text-to-text transformer?

The paper does not mention the limitations of transfer learning with a unified text-to-text transformer.