Open Access Journal Article

Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer

TL;DR
This article introduced a unified framework that converts all text-based language problems into a text-to-text format and compared pre-training objectives, architectures, unlabeled data sets, transfer approaches, and other factors on dozens of language understanding tasks.
Abstract
Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts all text-based language problems into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled data sets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new "Colossal Clean Crawled Corpus", we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our data set, pre-trained models, and code.
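In this format, translation, classification, and summarization all share one interface: a task-naming prefix is prepended to the input text, and the answer is read off as generated text. The sketch below assumes the Hugging Face Transformers library and its hosted t5-small checkpoint, which are external to the paper's own code release; the task prefixes follow the conventions described in the paper.

```python
# A minimal sketch of the text-to-text format, assuming the Hugging Face
# Transformers library and its hosted "t5-small" checkpoint (external to
# the paper's own code release).
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

def t5_generate(prompt: str) -> str:
    """One text-to-text inference: encode the prompt, generate, decode."""
    inputs = tokenizer(prompt, return_tensors="pt")
    output_ids = model.generate(**inputs, max_new_tokens=40)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)

# Different tasks share one interface; only the plain-text prefix changes.
print(t5_generate("translate English to German: The house is wonderful."))
print(t5_generate("sst2 sentence: A thoroughly enjoyable film."))
```

Fine-tuning on a new task follows the same recipe: cast the task's inputs and labels as strings and continue training with the standard maximum-likelihood objective.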



Citations
Posted Content

Corrected CBOW Performs as well as Skip-gram.

TL;DR: The authors showed that after correcting a bug in the CBOW gradient update, one can learn CBOW word embeddings that are fully competitive with skip-gram on various intrinsic and extrinsic tasks, while being many times faster to train.
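The correction turns on how the gradient flows through CBOW's mean-pooled hidden state. Below is a minimal NumPy sketch of one update step with negative sampling; it is an illustration of the corrected scaling, assuming mean pooling and distinct context indices, not the paper's actual implementation.

```python
# A minimal NumPy sketch of one corrected CBOW training step with negative
# sampling. The point illustrated: when the hidden state is the *mean* of
# the context vectors, the chain rule puts a 1/|context| factor on each
# context vector's gradient.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cbow_step(V_in, V_out, context_ids, target_id, negative_ids, lr=0.025):
    """One SGD step on (context -> target); assumes distinct context ids."""
    k = len(context_ids)
    h = V_in[context_ids].mean(axis=0)   # hidden state: mean of context vectors

    grad_h = np.zeros_like(h)            # accumulated gradient w.r.t. h
    for word_id, label in [(target_id, 1.0)] + [(n, 0.0) for n in negative_ids]:
        err = sigmoid(V_out[word_id] @ h) - label   # d loss / d score
        grad_h += err * V_out[word_id]
        V_out[word_id] -= lr * err * h   # update the output vector

    # Corrected input update: each context vector receives grad_h / k,
    # because d h / d v_c = 1/k under mean pooling.
    V_in[context_ids] -= lr * grad_h / k
```

Repeated over (context, target) pairs drawn from a corpus, this step amounts to standard CBOW training with the 1/|context| factor applied consistently.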
Journal Article

FFCI: A Framework for Interpretable Automatic Evaluation of Summarization

TL;DR: This article proposes a framework for fine-grained summarization evaluation that comprises four elements: faithfulness (degree of factual consistency with the source), focus (precision of summary content relative to the reference), coverage (recall of reference content in the summary), and inter-sentential coherence (fluency between adjacent summary sentences).
Posted Content

LU-BZU at SemEval-2021 Task 2: Word2Vec and Lemma2Vec performance in Arabic Word-in-Context disambiguation

TL;DR: The authors presented a set of experiments to evaluate and compare CBOW Word2Vec and Lemma2Vec models for Arabic word-in-context (WiC) disambiguation, without using sense inventories or sense embeddings.
Posted Content

Automated Fact-Checking: A Survey

TL;DR: The authors reviewed relevant research on automated fact-checking, covering both the claim detection and claim validation components, and discussed NLP methods that can further the development of each component.
Posted Content

Focus Attention: Promoting Faithfulness and Diversity in Summarization

TL;DR: This article proposed a focus attention mechanism that encourages decoders to proactively generate tokens that are similar or topical to the input document, along with a Focus Sampling method that enables generation of diverse summaries.
Trending Questions (1)
What are the limitations of transfer learning with a unified text-to-text transformer?

The paper does not explicitly discuss the limitations of transfer learning with a unified text-to-text transformer.