Open Access Journal Article

Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer

TL;DR
This article introduces a unified framework that converts all text-based language problems into a text-to-text format and compares pre-training objectives, architectures, unlabeled data sets, transfer approaches, and other factors on dozens of language understanding tasks.
Abstract
Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts all text-based language problems into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled data sets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new "Colossal Clean Crawled Corpus", we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our data set, pre-trained models, and code.
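As an illustration of the text-to-text format described above, the sketch below runs a released T5 checkpoint through the Hugging Face transformers library (an assumption: the paper's own code release is a separate codebase, but its checkpoints are mirrored there under names such as "t5-small"). Each task is selected by a plain-text prefix, and both input and output are strings.

# Minimal sketch of the text-to-text format using a pre-trained T5
# checkpoint as exposed by the Hugging Face transformers library.
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

# Every task is a string-to-string mapping selected by a task prefix.
inputs = [
    "translate English to German: The house is wonderful.",
    "summarize: Transfer learning, where a model is first pre-trained on a "
    "data-rich task before being fine-tuned on a downstream task, has "
    "emerged as a powerful technique in natural language processing.",
]

for text in inputs:
    ids = tokenizer(text, return_tensors="pt").input_ids
    out = model.generate(ids, max_new_tokens=40)
    print(tokenizer.decode(out[0], skip_special_tokens=True))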



Citations
Proceedings Article

Improve Query Focused Abstractive Summarization by Incorporating Answer Relevance

TL;DR: This paper proposes QFS-BART, a model that incorporates the explicit answer relevance of the source documents given the query, computed with a question answering model, to generate coherent and answer-related summaries, and that can take advantage of large pre-trained models to significantly improve summarization performance.
Proceedings Article

NCUEE-NLP at MEDIQA 2021: Health Question Summarization Using PEGASUS Transformers

TL;DR: This paper describes the NCUEE-NLP system for health question summarization in the MEDIQA challenge at the BioNLP workshop, which uses PEGASUS transformers and achieved a ROUGE-2 F1 score of 0.1597.
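For reference, the ROUGE-2 F1 metric reported above can be computed with, for example, Google's rouge-score package; the sketch below uses made-up placeholder strings, not MEDIQA 2021 data.

# Minimal sketch of computing ROUGE-2 F1 with Google's rouge-score package
# (pip install rouge-score). The strings are illustrative placeholders,
# not actual MEDIQA 2021 data.
from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(["rouge2"], use_stemmer=True)
reference = "what are the side effects of ibuprofen?"
prediction = "what side effects does ibuprofen have?"

scores = scorer.score(reference, prediction)
print(f"ROUGE-2 F1: {scores['rouge2'].fmeasure:.4f}")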
Proceedings Article

HiTab: A Hierarchical Table Dataset for Question Answering and Natural Language Generation

TL;DR: Cheng et al. present HiTab, a hierarchical table dataset for question answering and natural language generation, at the 60th Annual Meeting of the Association for Computational Linguistics (ACL).
Proceedings Article

Enhancing Document Ranking with Task-adaptive Training and Segmented Token Recovery Mechanism

TL;DR: Li et al. propose a new ranking model, DR-BERT, which improves document ranking through a task-adaptive training process and a Segmented Token Recovery Mechanism (STRM).
Posted Content

DeepOS: pan-cancer prognosis estimation from RNA-sequencing data

TL;DR: DeepOS is a deep learning model that predicts overall survival from pan-cancer RNA-seq data, achieving a concordance index of 0.715 and a survival AUC of 1.752 across 33 TCGA tumor types when evaluated on an unseen test cohort.
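For reference, the concordance index (c-index) used to evaluate survival models like DeepOS can be computed with, for example, the lifelines library; the sketch below uses toy values, not results from the paper.

# Minimal sketch of the concordance index (c-index) for survival models,
# via the lifelines library (pip install lifelines). All numbers are toy
# values, not paper results.
from lifelines.utils import concordance_index

# Observed survival times, model predictions, and event indicators
# (1 = death observed, 0 = censored) for five hypothetical patients.
event_times = [5.0, 12.0, 7.0, 30.0, 22.0]
predicted_scores = [6.0, 10.0, 8.0, 28.0, 20.0]  # higher = longer predicted survival
event_observed = [1, 1, 0, 1, 1]

print(f"c-index: {concordance_index(event_times, predicted_scores, event_observed):.3f}")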
Trending Questions
What are the limitations of transfer learning with a unified text-to-text transformer?

The paper does not mention the limitations of transfer learning with a unified text-to-text transformer.