Open Access Journal Article

Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer

TLDR
This article introduced a unified framework that converts all text-based language problems into a text-to-text format and compared pre-training objectives, architectures, unlabeled data sets, transfer approaches, and other factors on dozens of language understanding tasks.
Abstract
Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts all text-based language problems into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled data sets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new ``Colossal Clean Crawled Corpus'', we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our data set, pre-trained models, and code.
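To make the text-to-text framing described above concrete, the following is a minimal sketch of how the released T5 checkpoints are commonly used, assuming the Hugging Face transformers library is installed; the library, the "t5-small" model name, and the generation settings are illustrative assumptions rather than details from this page, while the task prefixes follow the paper's convention of prepending a short task description to the input text.

# Minimal sketch: every task is plain text in, plain text out.
# Assumes the Hugging Face `transformers` library and the public "t5-small" checkpoint.
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

# A task prefix tells the single model which problem to solve.
examples = [
    "translate English to German: The house is wonderful.",
    "summarize: Transfer learning, where a model is first pre-trained on a "
    "data-rich task before being fine-tuned on a downstream task, has emerged "
    "as a powerful technique in natural language processing.",
    "cola sentence: The course is jumping well.",  # linguistic acceptability cast as text
]

for text in examples:
    input_ids = tokenizer(text, return_tensors="pt").input_ids
    output_ids = model.generate(input_ids, max_length=64)
    print(tokenizer.decode(output_ids[0], skip_special_tokens=True))

Because both inputs and targets are strings, a single model, loss, and decoding procedure cover translation, summarization, and classification alike, which is what enables the systematic comparisons described in the abstract.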



Citations
Proceedings Article

A Text Mining Approach to Discovering COVID-19 Relevant Factors

TL;DR: In this article, a text mining approach is presented that uses the PyLucene search engine and the GrapeNLP grammar engine to extract links between temperature, humidity, and the spread of COVID-19 from a vast collection of scientific publications.
Posted Content

AMMU : A Survey of Transformer-based Biomedical Pretrained Language Models

TL;DR: Transformer-based pretrained language models (PLMs), which combine the power of transformers, transfer learning, and self-supervised learning (SSL), have started a new era in modern natural language processing (NLP), as discussed by the authors.
Proceedings Article

IR like a SIR: Sense-enhanced Information Retrieval for Multiple Languages

TL;DR: SIR, as presented in this paper, proposes a multilingual query expansion mechanism based on word sense disambiguation that provides sense definitions as additional semantic information for the query, allowing the model to perform considerably better than its supervised and unsupervised alternatives on several CLEF benchmarks in French, German, Italian, and Spanish.
Posted Content

Table Caption Generation in Scholarly Documents Leveraging Pre-trained Language Models.

TL;DR: This paper proposes a method that retrieves relevant sentences from the paper body and feeds the table content, together with the retrieved sentences, into pre-trained language models (e.g. T5 and GPT-2) to generate table captions.
Posted Content

ReadOnce Transformers: Reusable Representations of Text for Transformers

TL;DR: The ReadOnce Transformers approach converts a transformer-based model into one that builds an information-capturing, task-independent, and compressed representation of text, resulting in a 2x-5x speedup compared to standard text-to-text models and allowing existing language models to handle longer documents without the need to design new pre-trained models.
Trending Questions (1)
What are the limitations of transfer learning with a unified text-to-text transformer?

The paper does not mention the limitations of transfer learning with a unified text-to-text transformer.