Open Access Journal Article

Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer

TLDR
This article introduces a unified framework that converts all text-based language problems into a text-to-text format and compares pre-training objectives, architectures, unlabeled data sets, transfer approaches, and other factors across dozens of language understanding tasks.
Abstract
Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts all text-based language problems into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled data sets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new "Colossal Clean Crawled Corpus", we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our data set, pre-trained models, and code.
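
To make the text-to-text framing concrete, the sketch below shows how a single model handles an arbitrary task by prefixing the input with a short task description and decoding the answer as plain text. This is a minimal illustration assuming the Hugging Face `transformers` library and the released `t5-small` checkpoint, not code from the paper itself (the authors' original implementation is in TensorFlow).

```python
# Minimal sketch of the text-to-text interface, assuming the Hugging Face
# `transformers` library; `t5-small` is one of the released checkpoints, and
# any T5 variant works the same way.
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

# Every task (translation, summarization, classification, ...) is expressed
# as plain input text with a task prefix, and the answer is decoded as plain
# text, so one architecture, loss, and decoding procedure cover all of them.
inputs = tokenizer(
    "translate English to German: The house is wonderful.",
    return_tensors="pt",
)
output_ids = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

Swapping the prefix, e.g. "summarize: ..." or "cola sentence: ...", switches the task without changing the model, loss, or decoding procedure; these prefixes match the ones used in the paper.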



Citations
Posted Content

Dialogue Inspectional Summarization with Factual Inconsistency Awareness.

TL;DR: This article proposed an end-to-end dialogue summary generation framework with two auxiliary tasks, Expectant Factual Aspect Regularization (EFAR) and Missing Factual Entity Discrimination (MFED), which generates a more readable summary with accurate coverage of factual aspects and informs the user of potential missing facts detected in the input dialogue for further human intervention.
Posted Content

Controlling Conditional Language Models with Distributional Policy Gradients

TL;DR: The authors proposed a conditional variant of distributional policy gradients (CDPG) to adapt pre-trained generative models to a new task without destroying their capabilities, an approach applicable to a wide range of tasks.
Posted Content

VIOLET : End-to-End Video-Language Transformers with Masked Visual-token Modeling

TL;DR: VIOLET adopts a video transformer to explicitly model the temporal dynamics of video inputs, achieving state-of-the-art performance on video question answering and text-to-video retrieval tasks.
Posted Content

Knowledge Graph Based Synthetic Corpus Generation for Knowledge-Enhanced Language Model Pre-training

TL;DR: The authors propose verbalizing the entire English Wikidata knowledge graph (KG), discuss the unique challenges of broad, open-domain, large-scale verbalization, and show that a verbalized, comprehensive, encyclopedic KG like Wikidata can be used to integrate structured KGs with natural language corpora.
Trending Questions (1)
What are the limitations of transfer learning with a unified text-to-text transformer?

The paper does not mention the limitations of transfer learning with a unified text-to-text transformer.