Open Access Journal Article
Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu
TL;DR:
This article introduced a unified framework that converts all text-based language problems into a text-to-text format and compared pre-training objectives, architectures, unlabeled data sets, transfer approaches, and other factors on dozens of language understanding tasks.
Abstract:
Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts all text-based language problems into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled data sets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new "Colossal Clean Crawled Corpus", we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our data set, pre-trained models, and code.
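To make the text-to-text format concrete, here is a minimal sketch using the released checkpoints through the Hugging Face transformers library (an assumption of this sketch, not part of the paper itself); the task prefixes follow the paper's conventions, while the exact prompts and generation settings are illustrative.

```python
# A minimal sketch of the text-to-text framing, assuming the Hugging Face
# `transformers` library and the publicly released `t5-small` checkpoint.
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

# Every task is "text in, text out"; a short prefix tells the model
# which problem to solve.
prompts = [
    "translate English to German: The house is wonderful.",
    "cola sentence: The course is jumping well.",  # grammatical acceptability
    "summarize: state authorities dispatched emergency crews tuesday to "
    "survey the damage after an onslaught of severe weather in mississippi.",
]

for prompt in prompts:
    input_ids = tokenizer(prompt, return_tensors="pt").input_ids
    output_ids = model.generate(input_ids, max_new_tokens=40)
    print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```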
Citations
Posted Content
Robust Transfer Learning with Pretrained Language Models through Adapters
Wenjuan Han, Bo Pang, Ying Nian Wu
TL;DR: This paper proposes a simple yet effective adapter-based approach to mitigating adversarial attacks: small bottleneck layers (adapters) are inserted within each layer of a pretrained model, the pretrained layers are frozen, and only the adapter layers are trained on the downstream task data.
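A minimal PyTorch sketch of the bottleneck-adapter pattern this TL;DR describes; the hidden and bottleneck sizes, the stand-in frozen layer, and the adapter placement are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Bottleneck adapter: down-project, nonlinearity, up-project, residual."""
    def __init__(self, hidden_size, bottleneck_size=64):
        super().__init__()
        self.down = nn.Linear(hidden_size, bottleneck_size)
        self.up = nn.Linear(bottleneck_size, hidden_size)
        self.act = nn.ReLU()

    def forward(self, x):
        # The residual connection keeps the pretrained representation intact.
        return x + self.up(self.act(self.down(x)))

# Fine-tuning pattern: freeze every pretrained weight, train only the adapter.
# The encoder layer below is a stand-in for a real pretrained sub-layer.
pretrained_layer = nn.TransformerEncoderLayer(d_model=768, nhead=12)
adapter = Adapter(hidden_size=768)
for p in pretrained_layer.parameters():
    p.requires_grad = False
optimizer = torch.optim.Adam(adapter.parameters(), lr=1e-4)

x = torch.randn(8, 16, 768)               # (seq, batch, hidden)
out = adapter(pretrained_layer(x))        # frozen layer, trainable adapter
```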
Posted Content
Empirical Evaluation of Pre-trained Transformers for Human-Level NLP: The Role of Sample Size and Dimensionality
TL;DR: In this article, the authors provide a systematic study of dimension reduction methods (principal components analysis, factorization techniques, and multi-layer auto-encoders), examining how the dimensionality of the embedding vectors and the sample size affect predictive performance.
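A hedged sketch of this setup, assuming scikit-learn and synthetic stand-ins for the contextual embeddings: reduce the embedding dimensionality with PCA and measure how a small predictor's performance varies with the number of retained components.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 768))   # stand-in for 768-d transformer embeddings
y = X[:, :5].sum(axis=1) + rng.normal(scale=0.1, size=500)  # synthetic target

# Sweep the number of retained components and track predictive performance.
for n_components in (16, 64, 256):
    Z = PCA(n_components=n_components).fit_transform(X)
    score = cross_val_score(Ridge(), Z, y, cv=5).mean()
    print(f"{n_components:>3} components: mean R^2 = {score:.3f}")
```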
Posted Content
Challenges in Detoxifying Language Models
Johannes Welbl, Amelia Glaese, Jonathan Uesato, Sumanth Dathathri, John Mellor, Lisa Anne Hendricks, Kirsty Anderson, Pushmeet Kohli, Ben Coppin, Po-Sen Huang
TL;DR: The authors evaluate several toxicity mitigation strategies with respect to both automatic and human evaluation, and analyze consequences of toxicity mitigation in terms of model bias and LM quality, showing that while basic intervention strategies can effectively optimize previously established automatic metrics on the RealToxicityPrompts dataset, this comes at the cost of reduced LM coverage for both texts about, and dialects of, marginalized groups.
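As a hedged illustration only: one of the simplest decoding-time interventions in this space is rejection sampling against a toxicity classifier. The sketch below uses a toy, hypothetical toxicity_score stand-in (not a classifier or method from the paper) to show the shape of such an intervention.

```python
# A toy sketch of a decoding-time intervention: sample several continuations
# and keep the least toxic one. `toxicity_score` is a hypothetical stand-in
# for a real trained classifier; the word lexicon is a toy placeholder.
def toxicity_score(text):
    """Hypothetical toxicity scorer; returns a value in [0, 1]."""
    toy_lexicon = {"awful", "hate"}
    words = text.lower().split()
    return sum(w in toy_lexicon for w in words) / max(len(words), 1)

def detoxified_generate(generate_fn, prompt, n_samples=8, threshold=0.1):
    """Rejection sampling: return the least-toxic sample, or None if all fail."""
    candidates = [generate_fn(prompt) for _ in range(n_samples)]
    best = min(candidates, key=toxicity_score)
    return best if toxicity_score(best) <= threshold else None
```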
Posted Content
SCARF: Self-Supervised Contrastive Learning using Random Feature Corruption
TL;DR: SCARF corrupts a random subset of each example's features to create views for self-supervised contrastive representation learning on real-world tabular datasets, achieving state-of-the-art performance on the OpenML-CC18 benchmark.
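The corruption step itself is easy to sketch. Assuming a tabular dataset as a NumPy array, each row has a random subset of its features replaced by draws from those features' empirical marginals (equivalently, the same column of a randomly chosen other row); the original and corrupted rows then form a positive pair for the contrastive loss. The corruption rate and shapes below are illustrative.

```python
import numpy as np

def scarf_corrupt(X, corruption_rate=0.6, rng=None):
    """Replace a random subset of features in each row with values drawn
    from the corresponding feature's empirical marginal distribution."""
    rng = rng or np.random.default_rng()
    n, d = X.shape
    mask = rng.random((n, d)) < corruption_rate   # which cells to corrupt
    donors = rng.integers(0, n, size=(n, d))      # random donor row per cell
    # X[donors[i, j], j] is a draw from column j's empirical marginal.
    return np.where(mask, X[donors, np.arange(d)], X)

X = np.random.default_rng(0).normal(size=(128, 10))
view = scarf_corrupt(X)   # (X[i], view[i]) form a positive pair
```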
Posted Content
Comparative analysis of word embeddings in assessing semantic similarity of complex sentences
Dhivya Chandrasekaran, Vijay Mago
TL;DR: The authors analyzed the sensitivity of various word embeddings to sentence complexity and found that increasing complexity significantly degrades the embedding models' performance, producing a 10-20% decrease in Pearson's and Spearman's correlation.
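A sketch of the underlying evaluation protocol, under the assumption of a generic sentence-embedding model: score each sentence pair by the cosine similarity of its embeddings, then correlate the scores with human similarity ratings. The embeddings and ratings below are synthetic stand-ins.

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
emb_a = rng.normal(size=(50, 300))   # stand-in embeddings, sentence 1 of pair
emb_b = rng.normal(size=(50, 300))   # stand-in embeddings, sentence 2 of pair
human = rng.uniform(0, 5, size=50)   # stand-in gold similarity ratings

predicted = [cosine(a, b) for a, b in zip(emb_a, emb_b)]
print("Pearson's r   :", pearsonr(predicted, human)[0])
print("Spearman's rho:", spearmanr(predicted, human)[0])
```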