Open Access Journal Article

Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer

TL;DR
This article introduced a unified framework that converts all text-based language problems into a text-to-text format and compared pre-training objectives, architectures, unlabeled data sets, transfer approaches, and other factors on dozens of language understanding tasks.
Abstract
Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts all text-based language problems into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled data sets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new "Colossal Clean Crawled Corpus", we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our data set, pre-trained models, and code.
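
The text-to-text formulation described in the abstract can be illustrated with a short sketch. The example below uses the Hugging Face Transformers port of the released T5 checkpoints (an assumption for illustration; the authors' own codebase is released separately) to show how translation, summarization, and classification are all expressed as plain input text with a task prefix and answered with plain output text.

```python
# Minimal sketch of the text-to-text format, assuming the Hugging Face
# Transformers port of the T5 checkpoints (not the authors' original code).
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

# Every task is cast as "text in, text out" by prepending a task prefix.
examples = [
    "translate English to German: The house is wonderful.",    # translation
    "summarize: Transfer learning, where a model is first pre-trained "
    "on a data-rich task before being fine-tuned on a downstream task, "
    "has emerged as a powerful technique in NLP.",              # summarization
    "cola sentence: The course is jumping well.",               # acceptability
]

for text in examples:
    inputs = tokenizer(text, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=32)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Because every task shares the same input/output interface, the same model, loss, and decoding procedure can be reused across translation, summarization, and classification without task-specific heads.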



Citations
Proceedings Article

Pretrain-KGE: Learning Knowledge Representation from Pretrained Language Models

TL;DR: This work presents a universal training framework named Pretrain-KGE consisting of three phases: a semantic-based fine-tuning phase, a knowledge-extracting phase, and a KGE training phase, which improves results over KGE models, especially on the low-resource problem.
Posted Content

Dynamic Contextualized Word Embeddings

TL;DR: Based on a pretrained language model (PLM), dynamic contextualized word embeddings model time and social space jointly, which makes them attractive for a range of NLP tasks involving semantic variability.
Posted Content

English Intermediate-Task Training Improves Zero-Shot Cross-Lingual Transfer Too

TL;DR: This article investigated whether English intermediate-task training is still helpful on non-English target tasks, using nine intermediate language-understanding tasks, and evaluated intermediate-task transfer in a zero-shot cross-lingual setting on the XTREME benchmark.
Posted Content

QA-GNN: Reasoning with Language Models and Knowledge Graphs for Question Answering

TL;DR: QA-GNN, as discussed by the authors, is a new model that uses pre-trained language models (LMs) and knowledge graphs (KGs) to estimate the importance of KG nodes relative to the given QA context.
Proceedings Article

Exploring Text-to-Text Transformers for English to Hinglish Machine Translation with Synthetic Code-Mixing

TL;DR: A dependency-free method for generating code-mixed texts from bilingual distributed representations that is competitive with (and in some cases even superior to) several standard methods under a diverse set of conditions.
Trending Questions (1)
What are the limitations of transfer learning with a unified text-to-text transformer?

The paper does not mention the limitations of transfer learning with a unified text-to-text transformer.