Open Access Journal Article

Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer

TL;DR
This article introduced a unified framework that converts all text-based language problems into a text-to-text format and compared pre-training objectives, architectures, unlabeled data sets, transfer approaches, and other factors on dozens of language understanding tasks.
Abstract
Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts all text-based language problems into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled data sets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new ``Colossal Clean Crawled Corpus'', we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our data set, pre-trained models, and code.
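The text-to-text framing described in the abstract can be sketched in a few lines: every task instance is serialized as a task prefix plus the input text, and the model's target is always a text string. The prefixes below follow the convention reported for T5, but the helper function and exact strings here are illustrative, not the paper's code.

```python
def to_text_to_text(task: str, **fields) -> str:
    """Serialize a task instance into a single input string (illustrative sketch)."""
    if task == "translation":
        return f"translate English to German: {fields['text']}"
    if task == "summarization":
        return f"summarize: {fields['text']}"
    if task == "classification":
        # Even classification targets are emitted as text (e.g. "positive"),
        # so one sequence-to-sequence model and loss covers every task.
        return f"sst2 sentence: {fields['text']}"
    raise ValueError(f"unknown task: {task}")

print(to_text_to_text("summarization", text="Transfer learning is powerful."))
# summarize: Transfer learning is powerful.
```

Because inputs and outputs are plain text for every task, the same model, objective, and decoding procedure can be compared across the dozens of tasks the study covers.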



Citations
Proceedings Article

Dual Reader-Parser on Hybrid Textual and Tabular Evidence for Open Domain Question Answering

TL;DR: This article proposed a hybrid framework that takes both textual and tabular evidence as input and generates either a direct answer or a SQL query, depending on which form better answers the question; generated queries are then executed on the associated databases to obtain the final answers.
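A minimal sketch of the answer-or-SQL routing step described above, assuming a SQLite backend and a simple SELECT-prefix heuristic for detecting query outputs; both are illustrative stand-ins, not the paper's actual method.

```python
import sqlite3

def resolve_answer(generated: str, conn: sqlite3.Connection) -> str:
    """Return generated text directly, or execute it against the DB if it looks like SQL."""
    if generated.lstrip().upper().startswith("SELECT"):
        row = conn.execute(generated).fetchone()
        return str(row[0]) if row else ""
    return generated

# Toy database standing in for the tabular evidence.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE population (city TEXT, count INTEGER)")
conn.execute("INSERT INTO population VALUES ('Berlin', 3645000)")

print(resolve_answer("SELECT count FROM population WHERE city = 'Berlin'", conn))
# 3645000
print(resolve_answer("Berlin", conn))  # a direct textual answer passes through
```

The design point is that a single generator can serve both evidence types: free-form answers cover textual evidence, while executable queries defer exact aggregation and lookup to the database.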

Scientific Claim Verification with VerT5erini

TL;DR: This paper proposed a system called VerT5erini that exploits T5 for abstract retrieval, sentence selection, and label prediction, which are three critical sub-tasks of claim verification.
Posted Content

Anticipative Video Transformer

TL;DR: This paper proposed an end-to-end attention-based video modeling architecture that attends to previously observed video in order to anticipate future actions, maintaining the sequential progression of observed actions while still capturing long-range dependencies.
Proceedings Article

Towards Table-to-Text Generation with Numerical Reasoning

TL;DR: This paper proposed a model combining a pre-trained language model with a copy mechanism to generate fluent text enriched with numerical reasoning. Because generated text can lack fidelity to the table contents, the copy mechanism is incorporated in the fine-tuning step using general placeholders, avoiding hallucinated phrases that are not supported by the table.
Journal Article

What's new? Summarizing Contributions in Scientific Literature

TL;DR: A new task of disentangled paper summarization is introduced, which seeks to generate separate summaries for the paper contributions and the context of the work, making it easier to identify the key findings shared in articles.
Trending Questions (1)
What are the limitations of transfer learning with a unified text-to-text transformer?

The paper does not mention the limitations of transfer learning with a unified text-to-text transformer.