Open Access · Journal Article

Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer

TLDR
This article introduced a unified framework that converts all text-based language problems into a text-to-text format and compared pre-training objectives, architectures, unlabeled data sets, transfer approaches, and other factors on dozens of language understanding tasks.
Abstract
Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts all text-based language problems into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled data sets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new ``Colossal Clean Crawled Corpus'', we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our data set, pre-trained models, and code.
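To make the text-to-text framing concrete, here is a minimal sketch using the Hugging Face `transformers` library and the public `t5-small` checkpoint (an assumption: this is not the paper's own released codebase, and the checkpoint name is used only for illustration). Every task, whether translation, summarization, or classification, is expressed as mapping an input string to an output string, with the task selected purely by a textual prefix on the input.

```python
# Minimal sketch of T5's text-to-text framing (assumes the Hugging Face
# `transformers` and `sentencepiece` packages are installed; the paper's
# released code uses its own training framework).
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

# Each task is selected purely by a textual prefix on the input string.
examples = [
    # Translation
    "translate English to German: The house is wonderful.",
    # Summarization
    "summarize: Transfer learning, where a model is first pre-trained on a "
    "data-rich task before being fine-tuned on a downstream task, has "
    "emerged as a powerful technique in natural language processing.",
    # Acceptability classification (CoLA); the model answers in text,
    # e.g. "acceptable" or "not acceptable".
    "cola sentence: The book did not sold well.",
]

for text in examples:
    input_ids = tokenizer(text, return_tensors="pt").input_ids
    output_ids = model.generate(input_ids, max_length=40)
    print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

Because both inputs and outputs are plain text, the same model, loss, and decoding procedure can be reused across all of the tasks studied in the paper; only the input prefix and training data change.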



Citations
Posted Content

Language Models As or For Knowledge Bases

TL;DR: The authors examined the complementary nature of pre-trained language models (LMs) and explicit knowledge bases (KBs) and argued that latent LMs are not suitable as a substitute for explicit KBs, but could play a major role in augmenting and curating KBs.
Posted Content

EmpBot: A T5-based Empathetic Chatbot focusing on Sentiments.

TL;DR: In this article, an end-to-end empathetic chatbot, EmpBot, is proposed, which uses a pretrained Transformer language model (T5) with three objectives: response language modeling, sentiment understanding, and empathy forcing.
Posted Content

Crisis Domain Adaptation Using Sequence-to-sequence Transformers.

TL;DR: The authors proposed CAST, an approach that leverages sequence-to-sequence Transformers (S2T) for cross-domain adaptation in crisis-related message classification.
Posted Content

Coreference Augmentation for Multi-Domain Task-Oriented Dialogue State Tracking

TL;DR: In this paper, the authors propose a coreference dialogue state tracker (CDST) that explicitly models the coreference feature; at each turn, the proposed model jointly predicts the coreferred domain-slot pair and extracts the coreference values from the dialogue context.
Posted Content

Estimating Redundancy in Clinical Text

TL;DR: In this article, an information-theoretic approach and a lexicosyntactic and semantic model were used to measure redundancy in electronic health record (EHR) notes; the information-theoretic measure correlated highly with the lexicosyntactic and semantic redundancy measures.
Trending Questions (1)
What are the limitations of transfer learning with a unified text-to-text transformer?

The paper does not explicitly discuss the limitations of transfer learning with a unified text-to-text Transformer.