Open Access Journal Article

Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer

TL;DR
This article introduced a unified framework that converts all text-based language problems into a text-to-text format and compared pre-training objectives, architectures, unlabeled data sets, transfer approaches, and other factors on dozens of language understanding tasks.
Abstract
Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts all text-based language problems into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled data sets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new "Colossal Clean Crawled Corpus", we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our data set, pre-trained models, and code.
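As an illustration of the text-to-text format described in the abstract, the following minimal sketch runs a few tasks through a released T5 checkpoint. It uses the Hugging Face `transformers` library and the `t5-small` checkpoint, which are assumptions for this example rather than part of the paper itself; the task prefixes follow T5's convention.

```python
# Minimal sketch of the text-to-text format using a released T5 checkpoint.
# Assumes the Hugging Face `transformers` library (pip install transformers sentencepiece);
# this is an illustration, not the authors' original training/evaluation code.
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

# Every task is cast as text in, text out; a short prefix names the task.
examples = [
    "translate English to German: The house is wonderful.",
    "cola sentence: The books is on the table.",  # acceptability -> "acceptable" / "unacceptable"
    "summarize: Transfer learning, where a model is first pre-trained on a "
    "data-rich task before being fine-tuned on a downstream task, has emerged "
    "as a powerful technique in natural language processing.",
]
for text in examples:
    input_ids = tokenizer(text, return_tensors="pt").input_ids
    output_ids = model.generate(input_ids, max_new_tokens=40)
    print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```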



Citations
Proceedings Article

Enhancing Transformers with Gradient Boosted Decision Trees for NLI Fine-Tuning

TL;DR: The authors proposed Gradient Boosted Decision Trees (GBDTs) as an alternative to the commonly used Multi-Layer Perceptron (MLP) classification head for small Natural Language Inference (NLI) datasets.
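For context, here is a minimal sketch of the general idea, not the cited authors' exact pipeline: pooled transformer embeddings for premise-hypothesis pairs are fed to a gradient-boosted tree classifier in place of an MLP head. The random features and the scikit-learn estimator below are stand-ins.

```python
# Hypothetical sketch: GBDT classification head over pooled transformer embeddings.
# Random features stand in for real premise-hypothesis encodings.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
train_X = rng.normal(size=(200, 768))    # pooled embeddings, e.g. [CLS] or mean pooling
train_y = rng.integers(0, 3, size=200)   # 0/1/2 = entailment / neutral / contradiction
test_X = rng.normal(size=(50, 768))
test_y = rng.integers(0, 3, size=50)

gbdt_head = GradientBoostingClassifier(n_estimators=100, max_depth=3)
gbdt_head.fit(train_X, train_y)
print("accuracy:", gbdt_head.score(test_X, test_y))
```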

Does External Knowledge Help Explainable Natural Language Inference? Automatic Evaluation vs. Human Ratings.

TL;DR: This article investigated different sources of external knowledge and evaluated the performance of their models on in-domain data as well as on special transfer datasets that are designed to assess fine-grained reasoning capabilities.
Posted Content

Amortized Prompt: Lightweight Fine-Tuning for CLIP in Domain Generalization

TL;DR: Amortized Prompt (AP) is proposed as a novel approach to domain inference in the form of prompt generation for CLIP, which is robust to many distribution shifts and should therefore lead to substantial improvements in domain generalization (DG).
Posted Content

Vision Guided Generative Pre-trained Language Models for Multimodal Abstractive Summarization

TL;DR: Zhang et al. present a simple yet effective method to construct vision-guided generative pre-trained language models (GPLMs) for multimodal abstractive summarization (MAS), using attention-based add-on layers to incorporate visual information while maintaining the models' original text generation ability.
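A hypothetical sketch of what such an attention-based add-on layer could look like in PyTorch; the module name, dimensions, and zero-initialized gate below are assumptions, not the cited paper's implementation. Text hidden states attend over projected visual features, and the gated residual leaves the pretrained text pathway unchanged at initialization.

```python
# Hypothetical add-on layer: text states attend over visual features (PyTorch).
import torch
import torch.nn as nn

class VisionGuidedAttention(nn.Module):
    def __init__(self, d_text: int, d_vision: int, n_heads: int = 8):
        super().__init__()
        self.proj = nn.Linear(d_vision, d_text)                  # map visual features into text space
        self.attn = nn.MultiheadAttention(d_text, n_heads, batch_first=True)
        self.gate = nn.Parameter(torch.zeros(1))                 # closed at init, so the LM is unchanged

    def forward(self, text_states: torch.Tensor, vision_feats: torch.Tensor) -> torch.Tensor:
        v = self.proj(vision_feats)                              # (batch, n_patches, d_text)
        attended, _ = self.attn(text_states, v, v)               # text queries, visual keys/values
        return text_states + torch.tanh(self.gate) * attended

# Usage sketch with dummy tensors:
layer = VisionGuidedAttention(d_text=512, d_vision=768)
out = layer(torch.randn(2, 20, 512), torch.randn(2, 49, 768))
print(out.shape)  # torch.Size([2, 20, 512])
```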
Proceedings Article

NUIG-DSI’s submission to the GEM Benchmark 2021

TL;DR: In this paper, the authors describe NUIG-DSI's submission to the GEM Benchmark 2021; the study was conducted with the financial support of the Science Foundation Ireland Centre for Research Training in Artificial Intelligence under Grant No. 18/CRT/6223, and was co-supported by Science Foundation Ireland under Grant No. SFI/12/RC/2289_P2 (Insight), co-funded by the European Regional Development Fund.
Trending Questions (1)
What are the limitations of transfer learning with a unified text-to-text transformer?

The paper does not mention the limitations of transfer learning with a unified text-to-text transformer.