Open Access Journal Article
Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu
TL;DR
This article introduced a unified framework that converts all text-based language problems into a text-to-text format and compared pre-training objectives, architectures, unlabeled data sets, transfer approaches, and other factors on dozens of language understanding tasks.
Abstract:
Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts all text-based language problems into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled data sets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new "Colossal Clean Crawled Corpus", we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our data set, pre-trained models, and code.
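The text-to-text framing described in the abstract is easy to exercise with the released checkpoints. The snippet below is a minimal sketch, assuming the Hugging Face transformers library and its hosted "t5-small" checkpoint (neither is prescribed by the abstract itself): every task is posed as input text with a short task prefix, and the answer is read back as generated text.

```python
# Minimal sketch of the text-to-text framing, assuming the Hugging Face
# transformers library and its hosted "t5-small" checkpoint (an assumption,
# not something prescribed by the abstract itself).
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

# Every task is plain text in, plain text out: a short prefix identifies the
# task, and the answer is whatever the decoder generates.
examples = [
    "translate English to German: The house is wonderful.",
    "summarize: Transfer learning, where a model is first pre-trained on a "
    "data-rich task before being fine-tuned on a downstream task, has emerged "
    "as a powerful technique in natural language processing.",
    "cola sentence: The books was on the table.",  # acceptability judged as text
]

for text in examples:
    input_ids = tokenizer(text, return_tensors="pt").input_ids
    output_ids = model.generate(input_ids, max_new_tokens=40)
    print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```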
Citations
Proceedings Article
Knowledge Graph Based Synthetic Corpus Generation for Knowledge-Enhanced Language Model Pre-training
TL;DR: The authors propose to verbalize the entire English Wikidata KG, discuss the unique challenges of a broad, open-domain, large-scale verbalization, and further show that verbalizing a comprehensive, encyclopedic KG like Wikidata can be used to integrate structured KGs with natural language corpora.
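As a toy illustration of what KG verbalization means as an input/output task (this is not the cited paper's approach, which trains a seq2seq verbalizer on aligned Wikidata-text data), the sketch below turns (subject, relation, object) triples into sentences with hand-written templates; the relation names and templates are invented for the example.

```python
# Toy, template-based verbalizer for knowledge-graph triples. Illustrative only:
# the cited paper trains a seq2seq model to verbalize Wikidata, and the relation
# names and templates here are invented for the example.
TEMPLATES = {
    "occupation": "{s} works as a {o}",
    "award_received": "{s} received the {o}",
}

def verbalize(triples):
    """Turn (subject, relation, object) triples into a short text passage."""
    clauses = [
        TEMPLATES.get(rel, "{s} {r} {o}").format(s=subj, r=rel.replace("_", " "), o=obj)
        for subj, rel, obj in triples
    ]
    return ". ".join(clauses) + "."

print(verbalize([
    ("Marie Curie", "occupation", "physicist"),
    ("Marie Curie", "award_received", "Nobel Prize in Physics"),
]))
# -> Marie Curie works as a physicist. Marie Curie received the Nobel Prize in Physics.
```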
Proceedings Article
PICARD: Parsing Incrementally for Constrained Auto-Regressive Decoding from Language Models
TL;DR: PICARD constrains auto-regressive decoding of language models through incremental parsing, rejecting inadmissible tokens at each decoding step so that only valid output sequences are produced.
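The per-step token-rejection idea can be sketched with the prefix_allowed_tokens_fn hook of Hugging Face transformers' generate(). This is not PICARD's own implementation (PICARD plugs an incremental SQL parser into a similar filtering point); the toy constraint below merely whitelists tokens at each step to show where a parser-based validity check would sit.

```python
# Sketch of per-step token filtering in the spirit of PICARD, using the
# prefix_allowed_tokens_fn hook of Hugging Face transformers' generate().
# Not PICARD's code: PICARD applies an incremental SQL parser, while the
# toy constraint below only whitelists lowercase "plain text" tokens.
import string
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

allowed_chars = set(string.ascii_lowercase + " ")
allowed_ids = [
    tok_id for tok_id in range(len(tokenizer))
    if set(tokenizer.decode([tok_id])) <= allowed_chars
] + [tokenizer.eos_token_id]

def allow_tokens(batch_id, prefix_ids):
    # A real incremental parser would inspect prefix_ids (the partial output)
    # and admit only tokens that keep it parseable; this toy check ignores it.
    return allowed_ids

input_ids = tokenizer("summarize: the cat sat on the mat", return_tensors="pt").input_ids
output_ids = model.generate(
    input_ids, max_new_tokens=20, prefix_allowed_tokens_fn=allow_tokens
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```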
Posted Content
Task-specific Objectives of Pre-trained Language Models for Dialogue Adaptation
TL;DR: The authors design a Dialogue-Adaptive Pre-training Objective (DAPO) based on qualities important for assessing dialogues that are usually ignored by general LM pre-training objectives. Experimental results show that models trained with DAPO surpass those with general LM pre-training objectives and other strong baselines on downstream DrNLP tasks.
Posted Content
MT-Clinical BERT: Scaling Clinical Information Extraction with Multitask Learning
Andriy Mulyar, Bridget T. McInnes, et al.
TL;DR: Multitask-Clinical BERT is developed: a single deep learning model that simultaneously performs eight clinical tasks spanning entity extraction, personal health information identification, language entailment, and similarity by sharing representations among tasks, while performing competitively with state-of-the-art task-specific systems.
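The shared-representation setup can be sketched as one encoder feeding several task heads. The snippet below is an illustrative assumption rather than the authors' code: the encoder name, task names, and label counts are placeholders, and a clinical-domain checkpoint would normally replace bert-base-uncased.

```python
# Illustrative multitask setup in the spirit of MT-Clinical BERT (not the
# authors' code): one shared encoder feeds lightweight task-specific heads,
# so all tasks train against the same representations. Encoder name, task
# names, and label counts are placeholder assumptions.
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class MultitaskModel(nn.Module):
    def __init__(self, encoder_name="bert-base-uncased"):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(encoder_name)   # shared across tasks
        hidden = self.encoder.config.hidden_size
        self.heads = nn.ModuleDict({
            "entity_extraction": nn.Linear(hidden, 9),    # per-token tag logits
            "phi_identification": nn.Linear(hidden, 5),   # per-token PHI tag logits
            "entailment": nn.Linear(hidden, 3),           # sequence-pair classification
            "similarity": nn.Linear(hidden, 1),           # sequence-pair regression
        })

    def forward(self, task, **encoder_inputs):
        hidden_states = self.encoder(**encoder_inputs).last_hidden_state
        if task in ("entity_extraction", "phi_identification"):
            features = hidden_states           # token-level tasks
        else:
            features = hidden_states[:, 0]     # [CLS] vector for sequence-level tasks
        return self.heads[task](features)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = MultitaskModel()
batch = tokenizer("Patient denies chest pain.", return_tensors="pt")
print(model("entity_extraction", **batch).shape)  # (1, sequence_length, 9)
```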
Posted Content
Unifying Vision-and-Language Tasks via Text Generation
TL;DR: The authors propose a unified framework that learns different tasks in a single architecture with the same language modeling objective, i.e., multimodal conditional text generation, where the model learns to generate labels as text conditioned on the visual and textual inputs.