Open Access Journal Article

Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer

TLDR
This article introduced a unified framework that converts all text-based language problems into a text-to-text format and compared pre-training objectives, architectures, unlabeled data sets, transfer approaches, and other factors on dozens of language understanding tasks.
Abstract
Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts all text-based language problems into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled data sets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new "Colossal Clean Crawled Corpus", we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our data set, pre-trained models, and code.
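The core of the unified framework is that every task, whether translation, summarization, or classification, is cast as feeding the model input text and training it to generate target text, with a short task prefix identifying the task. As an illustration only, the minimal sketch below uses the Hugging Face Transformers port of the released T5 checkpoints (not the authors' original TensorFlow codebase) to run three differently prefixed tasks through the same text-in, text-out interface; the prefixes follow the conventions described in the paper.

```python
# Minimal sketch of the text-to-text framing, assuming the Hugging Face
# Transformers port of the released "t5-small" checkpoint is available.
from transformers import T5TokenizerFast, T5ForConditionalGeneration

tokenizer = T5TokenizerFast.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

# Different tasks, same interface: only the text prefix changes.
examples = [
    "translate English to German: The house is wonderful.",           # translation
    "summarize: studies have shown that owning a dog is good for "
    "your health because it encourages regular walks.",               # summarization
    "cola sentence: The course is jumping well.",                      # acceptability classification
]

for text in examples:
    input_ids = tokenizer(text, return_tensors="pt").input_ids
    output_ids = model.generate(input_ids, max_new_tokens=40)
    print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

Because even classification labels are emitted as text (e.g. "acceptable" or "not acceptable"), the same architecture, loss, and decoding procedure can be reused across all tasks in the study.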



Citations
Proceedings Article

Training Adaptive Computation for Open-Domain Question Answering with Computational Constraints

TL;DR: This paper proposes APE, an adaptive passage encoder that can be applied to an existing Open-Domain Question Answering (ODQA) model and can be trained efficiently on a single GPU.
Posted Content

IntenT5: Search Result Diversification using Causal Language Models

TL;DR: In this article, the authors explore the capacity of causal language models to generate potential query intents, and find that diversity in the generated queries is encouraged by adapting the model with a new Distributional Causal Language Modeling (DCLM) objective during fine-tuning and a representation replacement during inference.
Posted Content

JASS: Japanese-specific Sequence to Sequence Pre-training for Neural Machine Translation

TL;DR: This article proposes JASS, a Japanese-specific sequence-to-sequence pre-training approach for NMT involving Japanese as the source or target language, which focuses on Japanese linguistic units called bunsetsus, and shows that JASS gives results that are competitive with, if not better than, those given by MASS.
Posted Content

Dynamic Semantic Graph Construction and Reasoning for Explainable Multi-hop Science Question Answering

TL;DR: Zhang et al. employ Abstract Meaning Representation (AMR) as the semantic graph representation and propose a new framework that exploits more valid facts while providing explainability for multi-hop science QA by dynamically constructing a semantic graph.
Trending Questions (1)
What are the limitations of transfer learning with a unified text-to-text transformer?

The paper does not mention the limitations of transfer learning with a unified text-to-text transformer.