Open Access Journal Article

Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer

TLDR
This article introduced a unified framework that converts all text-based language problems into a text-to-text format and compared pre-training objectives, architectures, unlabeled data sets, transfer approaches, and other factors on dozens of language understanding tasks.
Abstract
Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts all text-based language problems into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled data sets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new "Colossal Clean Crawled Corpus", we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our data set, pre-trained models, and code.
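
As an illustration of the text-to-text framing described in the abstract, the sketch below runs a few tasks through one of the released T5 checkpoints via the Hugging Face transformers library. Using transformers rather than the authors' own training codebase is an assumption of this example; the task prefixes follow the convention used by the public T5 checkpoints.

    # A minimal sketch of the text-to-text interface, using a released T5
    # checkpoint through the Hugging Face `transformers` library (an assumption
    # of this example, not the authors' own training codebase).
    from transformers import AutoTokenizer, T5ForConditionalGeneration

    tokenizer = AutoTokenizer.from_pretrained("t5-small")
    model = T5ForConditionalGeneration.from_pretrained("t5-small")

    # Every task is plain text in, plain text out; the prefix names the task.
    examples = [
        "translate English to German: The house is wonderful.",
        "summarize: Transfer learning, where a model is first pre-trained on a "
        "data-rich task before being fine-tuned on a downstream task, has emerged "
        "as a powerful technique in natural language processing.",
        "cola sentence: The book was written by John.",  # linguistic acceptability
    ]

    for text in examples:
        inputs = tokenizer(text, return_tensors="pt")
        output_ids = model.generate(**inputs, max_new_tokens=40)
        print(tokenizer.decode(output_ids[0], skip_special_tokens=True))

The point of the framing is that translation, summarization, and classification all go through the same string-in, string-out interface, which is what lets a single model, objective, and decoding procedure cover the whole benchmark suite.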


Citations
Proceedings ArticleDOI

StructFormer: Joint Unsupervised Induction of Dependency and Constituency Structure from Masked Language Modeling

TL;DR: This article proposes StructFormer, which jointly induces dependency and constituency structure from masked language modeling by means of a dependency-constrained self-attention mechanism.
Proceedings ArticleDOI

Domain-matched Pre-training Tasks for Dense Retrieval

TL;DR: Oguz et al. pre-train dense retrieval models on large-scale tasks matched to the target domain (for example, synthetic question-answer pairs for passage retrieval and conversational data for dialogue retrieval), reporting consistent gains over generically pre-trained retrievers.
Posted Content

WhiteningBERT: An Easy Unsupervised Sentence Embedding Approach

TL;DR: The authors conduct a thorough examination of unsupervised sentence embeddings derived from pre-trained models and find that averaging all token representations outperforms using only the [CLS] vector, and that combining both top and bottom layers outperforms using the top layers alone.
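
For concreteness, the sketch below builds a sentence embedding by combining a bottom layer with the top layer of a BERT model and mean-pooling over all tokens. The checkpoint, layer indices, and pooling details are assumptions of this example, not necessarily the paper's exact configuration.

    # A minimal sketch of layer-combined mean pooling, assuming a Hugging Face
    # `bert-base-uncased` checkpoint; the specific layers combined here are
    # illustrative rather than the paper's exact choice.
    import torch
    from transformers import AutoModel, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    model = AutoModel.from_pretrained("bert-base-uncased", output_hidden_states=True)
    model.eval()

    def sentence_embedding(sentence: str) -> torch.Tensor:
        inputs = tokenizer(sentence, return_tensors="pt")
        with torch.no_grad():
            hidden_states = model(**inputs).hidden_states  # embeddings + 12 layers
        # Combine a bottom layer with the top layer, then average over all
        # tokens instead of taking only the [CLS] vector.
        combined = (hidden_states[1] + hidden_states[12]) / 2  # (1, seq_len, hidden)
        return combined.mean(dim=1).squeeze(0)

    a = sentence_embedding("A man is playing a guitar.")
    b = sentence_embedding("Someone is playing an instrument.")
    print(torch.cosine_similarity(a, b, dim=0).item())

Mean pooling over a single unpadded sentence needs no attention-mask handling; batched inputs would require masking padding tokens before averaging.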
Proceedings Article

Back to Square One: Artifact Detection, Training and Commonsense Disentanglement in the Winograd Schema

TL;DR: This article shows that apparent progress on the Winograd Schema (WS) may not reflect genuine progress in commonsense reasoning. It proposes a method for evaluating WS-like sentences in a zero-shot setting, to account for the commonsense reasoning abilities acquired during pre-training, and observes that popular language models perform at random under this stricter evaluation.
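
The sketch below shows one generic zero-shot recipe for Winograd-style sentences: substitute each candidate referent for the pronoun and keep the completion the language model scores as more likely. It illustrates the general setting only; the model choice and scoring rule are assumptions of this example, not the paper's evaluation protocol.

    # A generic zero-shot scorer for Winograd-style sentences: substitute each
    # candidate referent and pick the completion the language model finds more
    # likely. Model choice (GPT-2) and scoring rule are assumptions of this
    # example, not necessarily the paper's protocol.
    import torch
    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()

    def log_likelihood(text: str) -> float:
        ids = tokenizer(text, return_tensors="pt").input_ids
        with torch.no_grad():
            # With labels=ids the model returns the mean cross-entropy over the
            # predicted tokens; negate and rescale to a total log-likelihood.
            loss = model(ids, labels=ids).loss
        return -loss.item() * (ids.size(1) - 1)

    sentence = "The trophy didn't fit in the suitcase because {} was too big."
    candidates = ["the trophy", "the suitcase"]
    scores = {c: log_likelihood(sentence.format(c)) for c in candidates}
    print(max(scores, key=scores.get))  # expected: "the trophy"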
Posted Content

Beyond Preserved Accuracy: Evaluating Loyalty and Robustness of BERT Compression

TL;DR: This article proposes two new metrics, label loyalty and probability loyalty, to measure how closely a compressed model (the student) mimics the original model (the teacher), and explores the effect of compression on robustness under adversarial attacks.
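
The sketch below shows one plausible way to compute such loyalty metrics for a teacher/student pair: label loyalty is taken here as prediction agreement and probability loyalty as a Jensen-Shannon-based similarity, which may differ in detail from the paper's definitions.

    # A sketch of loyalty-style metrics for a compressed (student) model versus
    # the original (teacher) model. These formulations are stand-ins and may
    # differ in detail from the paper's definitions.
    import numpy as np
    from scipy.spatial.distance import jensenshannon

    def label_loyalty(teacher_probs: np.ndarray, student_probs: np.ndarray) -> float:
        """Fraction of examples where the student predicts the same label as the teacher."""
        return float(np.mean(teacher_probs.argmax(axis=1) == student_probs.argmax(axis=1)))

    def probability_loyalty(teacher_probs: np.ndarray, student_probs: np.ndarray) -> float:
        """One minus the mean Jensen-Shannon distance between output distributions."""
        distances = [jensenshannon(t, s) for t, s in zip(teacher_probs, student_probs)]
        return float(1.0 - np.mean(distances))

    # Toy example: three examples, two classes.
    teacher = np.array([[0.9, 0.1], [0.2, 0.8], [0.6, 0.4]])
    student = np.array([[0.8, 0.2], [0.3, 0.7], [0.45, 0.55]])
    print(label_loyalty(teacher, student), probability_loyalty(teacher, student))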
Trending Questions (1)
What are the limitations of transfer learning with a unified text-to-text transformer?

The paper does not mention the limitations of transfer learning with a unified text-to-text transformer.