Open Access Journal Article

Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer

TL;DR
This article introduced a unified framework that converts all text-based language problems into a text-to-text format and compared pre-training objectives, architectures, unlabeled data sets, transfer approaches, and other factors on dozens of language understanding tasks.
Abstract
Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts all text-based language problems into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled data sets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new "Colossal Clean Crawled Corpus", we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our data set, pre-trained models, and code.
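
The text-to-text format described in the abstract means that translation, summarization, classification, and question answering all share one interface: feed the model a task-prefixed input string and read back an output string. The snippet below is a minimal sketch of that idea using the publicly released T5 checkpoints through the Hugging Face transformers library; the task prefixes ("translate English to German:", "summarize:", "cola sentence:") follow the paper's convention, while the checkpoint name, example sentences, and helper function are illustrative choices, not part of the paper.

```python
# Minimal sketch of the text-to-text interface: every task is text in, text out.
# Assumes the Hugging Face transformers library and the released "t5-small" checkpoint.
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

def text_to_text(prompt: str) -> str:
    """Run one text-in, text-out query against the model."""
    input_ids = tokenizer(prompt, return_tensors="pt").input_ids
    output_ids = model.generate(input_ids, max_new_tokens=64)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)

# Different tasks differ only in the prefix prepended to the input text.
print(text_to_text("translate English to German: The house is wonderful."))
print(text_to_text("summarize: Transfer learning, where a model is first pre-trained on a "
                   "data-rich task before being fine-tuned on a downstream task, has emerged "
                   "as a powerful technique in natural language processing."))
print(text_to_text("cola sentence: The course is jumping well."))
```

Because the outputs are plain text, even classification targets (e.g. "acceptable" / "not acceptable" for CoLA) are produced by generation rather than by a task-specific output head.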



Citations
Posted Content

Extract, Denoise and Enforce: Evaluating and Improving Concept Preservation for Text-to-Text Generation

TL;DR: The authors proposed a framework that automatically extracts, denoises, and enforces important input concepts as lexical constraints; the constrained model performs comparably to or better than its unconstrained counterpart on automatic metrics, demonstrates higher coverage for concept preservation, and receives better ratings in human evaluation.
Posted Content

Noisy Text Data: Achilles' Heel of popular transformer based NLP models

TL;DR: In this paper, the authors explore the sensitivity of popular transformer-based NLP models to noise in text data and show that these models perform poorly on noisy inputs across common NLP tasks such as text classification, textual similarity, NER, question answering, and text summarization.
Posted Content

Doc2Dict: Information Extraction as Text Generation

TL;DR: Doc2Dict uses a transformer language model trained on existing database records to directly generate structured JSON, removing the workload associated with producing token-level annotations and taking advantage of a data source that is generally quite plentiful (e.g., database records).
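
For context on how such generation-based extraction can look in practice, here is a toy sketch (not Doc2Dict's released code): a seq2seq model is prompted with the raw document text and asked to emit a JSON record, which is then parsed downstream. The checkpoint name, task prefix, and example document are assumptions for illustration; an off-the-shelf checkpoint would need fine-tuning on database records before it reliably emits valid JSON.

```python
# Toy sketch of information extraction as text generation (not the Doc2Dict code).
# A seq2seq model reads the document and generates a JSON string, which we then parse.
import json
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("t5-small")  # stand-in for a model fine-tuned on database records
model = T5ForConditionalGeneration.from_pretrained("t5-small")

document = "Invoice 1042 was issued to Acme Corp on 2021-03-05 for a total of $1,200."
inputs = tokenizer("extract record: " + document, return_tensors="pt")  # "extract record:" is an assumed prefix
output_ids = model.generate(**inputs, max_new_tokens=64)
raw = tokenizer.decode(output_ids[0], skip_special_tokens=True)

try:
    record = json.loads(raw)  # e.g. {"invoice_id": "1042", "customer": "Acme Corp", ...}
except json.JSONDecodeError:
    record = None  # without fine-tuning, the generated text will usually not be valid JSON
print(record)
```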
Proceedings Article

Explore Better Relative Position Embeddings from Encoding Perspective for Transformer Models

TL;DR: In this article, the authors investigate the potential problems in Shaw-RPE and XL-Relative Position Embedding (RPE) and propose two novel RPEs called Low-level Fine-grained High-level Coarse-graining (LFHC) RPE and Gaussian Cumulative Distribution Function (GCDF) RPE.
Proceedings Article

Aspect Sentiment Quad Prediction as Paraphrase Generation

TL;DR: The authors introduced the Aspect Sentiment Quad Prediction (ASQP) task, which aims to jointly detect all sentiment elements in quads for a given opinionated sentence, revealing a more complete aspect-level sentiment structure.
Trending Questions (1)
What are the limitations of transfer learning with a unified text-to-text transformer?

The paper does not mention the limitations of transfer learning with a unified text-to-text transformer.