Open Access Journal Article

Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer

TL;DR
This article introduced a unified framework that converts all text-based language problems into a text-to-text format and compared pre-training objectives, architectures, unlabeled data sets, transfer approaches, and other factors on dozens of language understanding tasks.
Abstract
Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts all text-based language problems into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled data sets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new "Colossal Clean Crawled Corpus", we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our data set, pre-trained models, and code.
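The text-to-text framing described in the abstract casts every task, translation, summarization, classification, and so on, as mapping an input string to an output string, with a short task prefix telling the model what to do. As a rough sketch only, assuming the publicly released t5-small checkpoint and the Hugging Face transformers library (neither is referenced on this page), the interface looks roughly like:

# Sketch of the text-to-text interface; the checkpoint name and library
# are assumptions for illustration, not taken from this page.
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

# Each task is plain text in, plain text out; a prefix names the task.
examples = [
    "translate English to German: The house is wonderful.",
    "summarize: Transfer learning, where a model is first pre-trained on a "
    "data-rich task before being fine-tuned on a downstream task, has emerged "
    "as a powerful technique in natural language processing.",
    "cola sentence: The course is jumping well.",  # outputs "acceptable" or "unacceptable"
]

for text in examples:
    input_ids = tokenizer(text, return_tensors="pt").input_ids
    output_ids = model.generate(input_ids, max_length=60)
    print(tokenizer.decode(output_ids[0], skip_special_tokens=True))

Because every task shares this single string-to-string interface, the same model, loss, and decoding procedure can be reused across the benchmarks mentioned in the abstract.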



Citations
Posted Content

XLM-E: Cross-lingual Language Model Pre-training via ELECTRA

TL;DR: This paper introduced ELECTRA-style tasks into cross-lingual language model pre-training, namely multilingual replaced token detection and translation replaced token detection, and showed that XLM-E outperforms baseline models on various cross-lingual understanding tasks at much lower computation cost.
Proceedings Article

Text Editing by Command

TL;DR: This work proposes a novel text editing task and introduces WikiDocEdits, a dataset of single-sentence edits crawled from Wikipedia. The Interactive Editor, a transformer-based model trained on this dataset, outperforms baselines and obtains positive results in both automatic and human evaluations.
Journal Article

Aggregating Customer Review Attributes for Online Reputation Generation

TL;DR: Experimental results on several real-world data sets from miscellaneous domains, collected from the IMDb, TripAdvisor, and Amazon websites, show the effectiveness of the proposed method in generating and visualizing reputation compared to three state-of-the-art reputation systems.
Proceedings Article

Towards A Friendly Online Community: An Unsupervised Style Transfer Framework for Profanity Redaction

TL;DR: This work designs a Retrieve, Generate and Edit unsupervised style transfer pipeline that redacts offensive comments in a word-restricted manner while maintaining a high level of fluency and preserving the content of the original text.
Proceedings Article

Leakage-Adjusted Simulatability: Can Models Generate Non-Trivial Explanations of Their Behavior in Natural Language?

TL;DR: This article proposed a leakage-adjusted simulatability metric for evaluating natural language explanations, which measures how well explanations help an observer predict a model's output, while controlling for how explanations can directly leak the output.
Trending Questions (1)
What are the limitations of transfer learning with a unified text-to-text transformer?

The paper does not explicitly discuss the limitations of transfer learning with a unified text-to-text transformer.