Open Access Journal Article

Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer

TLDR
This article introduced a unified framework that converts all text-based language problems into a text-to-text format and compared pre-training objectives, architectures, unlabeled data sets, transfer approaches, and other factors on dozens of language understanding tasks.
Abstract
Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts all text-based language problems into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled data sets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new "Colossal Clean Crawled Corpus", we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our data set, pre-trained models, and code.
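
The text-to-text framing described in the abstract can be illustrated with the publicly released T5 checkpoints. The sketch below is an illustration only, not the authors' original released code: it assumes the Hugging Face transformers port of the t5-small checkpoint and uses task prefixes of the kind described in the paper ("translate English to German:", "summarize:", "cola sentence:") to show that translation, summarization, and classification all reduce to generating output text from input text.

```python
# Minimal sketch, assuming the Hugging Face `transformers` port of the
# released `t5-small` checkpoint (not the authors' original TensorFlow code).
# Every task is cast as "input text -> output text"; only the task prefix changes.
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

examples = [
    # Translation
    "translate English to German: The house is wonderful.",
    # Summarization
    "summarize: Transfer learning, where a model is first pre-trained on a "
    "data-rich task before being fine-tuned on a downstream task, has emerged "
    "as a powerful technique in natural language processing.",
    # Acceptability classification (CoLA), answered as text ("acceptable"/"unacceptable")
    "cola sentence: The course is jumping well.",
]

for text in examples:
    input_ids = tokenizer(text, return_tensors="pt").input_ids
    output_ids = model.generate(input_ids, max_new_tokens=40)
    print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

Because every task shares this interface, the same model, loss, and decoding procedure can be reused across all of them; only the textual prefix and target differ.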



Citations
Proceedings Article

Claim Matching Beyond English to Scale Global Fact-Checking

TL;DR: In this article, a claim matching task is defined to identify pairs of textual messages containing claims that can be served with one fact-check, and a high-quality teacher model is used to address the imbalance in embedding quality between low- and high-resource languages in the dataset.
Proceedings Article

A Simple Recipe for Multilingual Grammatical Error Correction

TL;DR: This paper proposed a language-agnostic method to generate a large number of synthetic examples and used large-scale multilingual language models to train state-of-the-art multilingual Grammatical Error Correction models.
Proceedings Article

Leveraging ParsBERT and Pretrained mT5 for Persian Abstractive Text Summarization

TL;DR: The authors proposed two methods for this task, mT5 and an encoder-decoder version of the ParsBERT model, and introduced a novel dataset named pn-summary for Persian abstractive text summarization.
Posted Content

Hierarchical Learning for Generation with Long Source Sequences.

TL;DR: The authors proposed a hierarchical attention transformer-based architecture for sequence-to-sequence (seq2seq) models, achieving state-of-the-art results on summarization and document-level machine translation tasks.
Posted Content

Going Full-TILT Boogie on Document Understanding with Text-Image-Layout Transformer

TL;DR: TILT uses a pretrained encoder-decoder Transformer to learn layout information, visual features, and textual semantics simultaneously, achieving state-of-the-art results in extracting information from documents and answering questions that demand layout understanding.
Trending Questions (1)
What are the limitations of transfer learning with a unified text-to-text transformer?

The paper does not mention the limitations of transfer learning with a unified text-to-text transformer.