Open Access Journal Article

Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer

TLDR
This article introduced a unified framework that converts all text-based language problems into a text-to-text format and compared pre-training objectives, architectures, unlabeled data sets, transfer approaches, and other factors on dozens of language understanding tasks.
Abstract
Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts all text-based language problems into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled data sets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new "Colossal Clean Crawled Corpus", we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our data set, pre-trained models, and code.
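The text-to-text framing is easiest to see in code. Below is a minimal sketch using the released T5 checkpoints through the Hugging Face transformers library; the library choice is an assumption, and the authors' original code is at https://github.com/google-research/text-to-text-transfer-transformer.

# A minimal sketch of the text-to-text framing with a released T5 checkpoint.
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

# Every task is cast as text in, text out; a short prefix tells the
# model which task to perform.
prompts = [
    "translate English to German: The house is wonderful.",
    "summarize: Transfer learning, where a model is first pre-trained on a"
    " data-rich task before being fine-tuned on a downstream task, has"
    " emerged as a powerful technique in natural language processing.",
    "cola sentence: The course is jumping well.",  # grammatical acceptability
]
for prompt in prompts:
    input_ids = tokenizer(prompt, return_tensors="pt").input_ids
    output_ids = model.generate(input_ids, max_new_tokens=40)
    print(tokenizer.decode(output_ids[0], skip_special_tokens=True))

The same model handles translation, summarization, and classification purely by changing the task prefix on the input string.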



Citations
Posted Content

aschern at SemEval-2020 Task 11: It Takes Three to Tango: RoBERTa, CRF, and Transfer Learning

TL;DR: This paper used RoBERTa-based neural architectures with additional CRF layers, transfer learning between the two subtasks, and advanced post-processing to handle the multi-label nature of the task, consistency between nested spans, repetitions, and labels from similar spans seen in training. A minimal, hypothetical sketch of the RoBERTa-plus-CRF tagging pattern follows below.
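The sketch assumes the pytorch-crf package; the class name and hyperparameters are illustrative, not the authors' code.

import torch.nn as nn
from transformers import AutoModel
from torchcrf import CRF  # from the pytorch-crf package

class RobertaCrfTagger(nn.Module):
    """Hypothetical RoBERTa encoder + linear emission layer + CRF decoder."""
    def __init__(self, num_labels, model_name="roberta-base"):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)
        self.emissions = nn.Linear(self.encoder.config.hidden_size, num_labels)
        self.crf = CRF(num_labels, batch_first=True)

    def forward(self, input_ids, attention_mask, labels=None):
        hidden = self.encoder(input_ids, attention_mask=attention_mask).last_hidden_state
        scores = self.emissions(hidden)
        mask = attention_mask.bool()
        if labels is not None:
            # CRF returns a log-likelihood; negate it for a training loss.
            return -self.crf(scores, labels, mask=mask)
        # At inference time, Viterbi-decode the best label sequence.
        return self.crf.decode(scores, mask=mask)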
Posted Content

Open4Business(O4B): An Open Access Dataset for Summarizing Business Documents.

TL;DR: This work introduces Open4Business (O4B), a dataset of 17,458 open-access business articles and their reference summaries, evaluates existing models on it, and shows that models trained on O4B achieve summarization performance comparable to models trained on a 7x larger non-open-access dataset.
Posted Content

GitTables: A Large-Scale Corpus of Relational Tables

TL;DR: GitTables is a corpus of 1.7M relational tables extracted from GitHub and annotated with more than 2K different semantic types from Schema.org and DBpedia.
Proceedings Article

Modulating Language Models with Emotions

TL;DR: In this paper, a modulated layer normalization (MLN) approach is proposed to generate context-aware language that embodies diverse emotions for the MojiTalk task; a sketch of one way such modulation can work follows below.
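The summary does not spell out how the modulation works. One common way to realize conditional layer normalization is to predict the scale and shift from a condition vector (here, an emotion embedding); the sketch below is an illustrative assumption, not the paper's exact formulation.

import torch.nn as nn

class ModulatedLayerNorm(nn.Module):
    """Layer norm whose scale and shift come from a condition vector,
    e.g. an emotion embedding (illustrative, not the paper's exact form)."""
    def __init__(self, hidden_size, cond_size):
        super().__init__()
        # Plain normalization; the affine part is predicted from the condition.
        self.norm = nn.LayerNorm(hidden_size, elementwise_affine=False)
        self.to_gamma = nn.Linear(cond_size, hidden_size)
        self.to_beta = nn.Linear(cond_size, hidden_size)

    def forward(self, x, cond):
        # x: (batch, seq, hidden); cond: (batch, cond_size)
        gamma = self.to_gamma(cond).unsqueeze(1)
        beta = self.to_beta(cond).unsqueeze(1)
        return (1 + gamma) * self.norm(x) + beta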
Proceedings Article

QACE: Asking Questions to Evaluate an Image Caption

TL;DR: This paper proposed Question Answering for Caption Evaluation (QACE), a new metric that evaluates image captions using question generation and question answering systems; a rough sketch of the answer-comparison step follows below.
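As a rough, hypothetical illustration of the question-answering half of such a metric, the sketch below scores a candidate caption by answer agreement against a reference caption. The QA model is a stock checkpoint, the question-generation step is assumed to exist, and exact-match scoring is a crude stand-in for the paper's actual comparison.

from transformers import pipeline

# Stock extractive-QA checkpoint; question generation is not shown.
qa = pipeline("question-answering", model="distilbert-base-cased-distilled-squad")

def answer_agreement(questions, candidate_caption, reference_caption):
    """Fraction of questions whose answers match between the two captions."""
    if not questions:
        return 0.0
    matches = 0
    for question in questions:
        a_cand = qa(question=question, context=candidate_caption)["answer"]
        a_ref = qa(question=question, context=reference_caption)["answer"]
        # Exact string match is a crude proxy for answer agreement.
        matches += int(a_cand.strip().lower() == a_ref.strip().lower())
    return matches / len(questions)

print(answer_agreement(
    ["What is the dog doing?"],
    "A dog is running on the beach.",
    "A brown dog runs along the sand.",
))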
Trending Questions (1)
What are the limitations of transfer learning with a unified text-to-text transformer?

The paper does not explicitly discuss the limitations of transfer learning with a unified text-to-text transformer.