Open Access Journal Article

Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer

TLDR
This article introduced a unified framework that converts all text-based language problems into a text-to-text format and compared pre-training objectives, architectures, unlabeled data sets, transfer approaches, and other factors on dozens of language understanding tasks.
Abstract
Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts all text-based language problems into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled data sets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new "Colossal Clean Crawled Corpus", we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our data set, pre-trained models, and code.
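As a concrete illustration of the text-to-text framing described in the abstract, here is a minimal sketch assuming the publicly released T5 checkpoints and the Hugging Face transformers library (the checkpoint name "t5-small" and the library are assumptions of this sketch, not something this page specifies): every task is expressed as a plain-text input with a task prefix and a plain-text output.

```python
# Minimal sketch of the text-to-text framing using a released T5 checkpoint
# via the Hugging Face `transformers` library (assumed to be installed).
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

# Every task -- translation, summarization, classification -- is phrased as
# "text in, text out"; only the task prefix in the input string changes.
inputs = tokenizer(
    "translate English to German: The house is wonderful.",
    return_tensors="pt",
)
output_ids = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

Summarization or classification would use the same call with a different prefix (for example "summarize: ..."), which is what makes the framework unified across tasks.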



Citations
Posted Content

The Case for Translation-Invariant Self-Attention in Transformer-Based Language Models

TL;DR: This paper proposed translation-invariant self-attention (TISA), which accounts for the relative positions between tokens in an interpretable fashion without requiring conventional position embeddings and has several theoretical advantages over existing position-representation approaches.
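For readers unfamiliar with the idea, the sketch below is a generic illustration of translation-invariant (relative) position handling, not the TISA formulation from the cited paper: a learned bias that depends only on the token offset i - j is added to the attention scores, so no absolute position embeddings are needed. The function and variable names are hypothetical.

```python
# Generic relative-position-bias attention sketch (not the cited paper's code).
import torch
import torch.nn.functional as F

def attention_with_relative_bias(q, k, v, rel_bias):
    """q, k, v: (batch, seq, dim); rel_bias: (seq, seq) learned bias that
    depends only on the offset i - j, making it translation-invariant."""
    scores = q @ k.transpose(-2, -1) / q.shape[-1] ** 0.5
    scores = scores + rel_bias  # position signal enters only through offsets
    return F.softmax(scores, dim=-1) @ v

seq, dim = 6, 8
# One learnable bias per offset in [-(seq-1), seq-1], gathered into a matrix.
offset_table = torch.randn(2 * seq - 1)
idx = torch.arange(seq)[:, None] - torch.arange(seq)[None, :] + seq - 1
rel_bias = offset_table[idx]

q = k = v = torch.randn(1, seq, dim)
out = attention_with_relative_bias(q, k, v, rel_bias)
print(out.shape)  # torch.Size([1, 6, 8])
```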
Posted Content

Multilingual Translation via Grafting Pre-trained Language Models

TL;DR: The authors propose Graformer, which grafts separately pre-trained (masked) language models for machine translation, using monolingual data for pre-training and parallel data for grafting training, thereby making full use of both types of data.

News Aggregation with Diverse Viewpoint Identification Using Neural Embeddings and Semantic Understanding Models

TL;DR: This article proposed a transformer-based news aggregation system composed of topic modeling, semantic clustering, claim extraction, and textual entailment, which identifies the viewpoints presented in articles within a semantic cluster and classifies them as positive, neutral, or negative entailments.
Proceedings Article

Parameter-efficient Multi-task Fine-tuning for Transformers via Shared Hypernetworks

TL;DR: This paper proposed Hyperformer, which uses shared hypernetworks to generate task-specific adapters, enabling the model to adapt to each individual task and improving performance in few-shot domain generalization.
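The following is a minimal sketch of the shared-hypernetwork idea under stated assumptions, not the authors' released Hyperformer code: a single hypernetwork, conditioned on a learned task embedding, generates the weights of a bottleneck adapter for each task, so all tasks share parameters through the generator. All class names and dimensions here are hypothetical.

```python
# Sketch: one shared hypernetwork emits task-specific adapter weights.
import torch
import torch.nn as nn

class SharedHyperAdapter(nn.Module):
    def __init__(self, num_tasks, d_model=64, d_task=16, d_bottleneck=8):
        super().__init__()
        self.task_emb = nn.Embedding(num_tasks, d_task)
        # Shared generators produce down- and up-projection weights per task.
        self.gen_down = nn.Linear(d_task, d_model * d_bottleneck)
        self.gen_up = nn.Linear(d_task, d_bottleneck * d_model)
        self.d_model, self.d_bottleneck = d_model, d_bottleneck

    def forward(self, hidden, task_id):
        z = self.task_emb(task_id)                                     # (d_task,)
        w_down = self.gen_down(z).view(self.d_model, self.d_bottleneck)
        w_up = self.gen_up(z).view(self.d_bottleneck, self.d_model)
        # Residual bottleneck adapter with task-conditioned weights.
        return hidden + torch.relu(hidden @ w_down) @ w_up

adapter = SharedHyperAdapter(num_tasks=4)
h = torch.randn(2, 10, 64)                  # (batch, seq, d_model)
out = adapter(h, torch.tensor(1))           # adapt hidden states for task 1
print(out.shape)  # torch.Size([2, 10, 64])
```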
Posted Content

Differentiable Open-Ended Commonsense Reasoning

TL;DR: The authors proposed DrFact, an efficient differentiable model for multi-hop reasoning over knowledge facts, which answers commonsense questions without any pre-defined answer choices, using only a corpus of commonsense facts written in natural language as its resource.
Trending Questions (1)
What are the limitations of transfer learning with a unified text-to-text transformer?

The paper does not mention the limitations of transfer learning with a unified text-to-text transformer.