Open Access Journal Article

Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer

TLDR
This article introduces a unified framework that converts all text-based language problems into a text-to-text format and systematically compares pre-training objectives, architectures, unlabeled data sets, transfer approaches, and other factors across dozens of language understanding tasks.
Abstract
Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts all text-based language problems into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled data sets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new "Colossal Clean Crawled Corpus", we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our data set, pre-trained models, and code.
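To make the text-to-text framing from the abstract concrete, the minimal sketch below runs a few tasks through a released T5 checkpoint. It assumes the Hugging Face transformers library and the t5-small checkpoint (not part of this page); the task prefixes follow the convention described in the paper, where a short prefix tells the model which problem to solve and every input and output is plain text.

```python
# Minimal sketch (assumes the Hugging Face `transformers` library and the
# publicly released `t5-small` checkpoint): every task below is expressed as
# "input text -> output text", selected by a short task prefix.
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

examples = [
    # Translation
    "translate English to German: The house is wonderful.",
    # Summarization
    "summarize: Transfer learning, where a model is first pre-trained on a "
    "data-rich task before being fine-tuned on a downstream task, has emerged "
    "as a powerful technique in natural language processing.",
    # Classification (CoLA acceptability); the model answers with a label word.
    "cola sentence: The course is jumping well.",
]

for text in examples:
    inputs = tokenizer(text, return_tensors="pt")
    output_ids = model.generate(**inputs, max_new_tokens=40)
    print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

Because every task shares this interface, the same model, loss, and decoding procedure can be reused across translation, summarization, question answering, and classification.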


Citations
Posted Content

Fine-tuning wav2vec2 for speaker recognition.

TL;DR: The authors applied the wav2vec2 framework to speaker recognition instead of speech recognition, achieving a 1.88% EER on the extended VoxCeleb1 test set, compared to 1.69% with an ECAPA-TDNN baseline.
Proceedings Article

Question Answering Over Temporal Knowledge Graphs

TL;DR: This article proposes CronKGQA, a Transformer-based solution that exploits recent advances in temporal KG embeddings and achieves performance superior to all baselines, with an increase of 120% in accuracy over the next best performing method.
Proceedings Article

Dynamic Contextualized Word Embeddings

TL;DR: This paper proposes dynamic contextualized word embeddings that represent words as a function of both linguistic and extralinguistic context. Built on a pretrained language model (PLM), they are attractive for a range of NLP tasks involving semantic variability.
Posted Content

Exploring Transfer Learning For End-to-End Spoken Language Understanding

TL;DR: This work proposes an E2E system designed to jointly train on multiple speech-to-text tasks, such as ASR (speech-to-transcription) and SLU (speech-to-hypothesis), and text-to-text tasks, such as NLU (text-to-hypothesis). The resulting Audio-Text All-Task (AT-AT) model beats the performance of E2E models trained on the individual tasks.
Posted Content

Case-based Reasoning for Natural Language Queries over Knowledge Bases

TL;DR: This paper proposes CBR-KBQA, a neuro-symbolic case-based reasoning approach for question answering over large knowledge bases. It consists of a non-parametric memory that stores cases (questions and logical forms) and a parametric model that generates logical forms by retrieving relevant cases from memory.
Trending Questions (1)
What are the limitations of transfer learning with a unified text-to-text transformer?

The paper does not mention the limitations of transfer learning with a unified text-to-text transformer.