Open Access Journal Article
Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu
TL;DR
This article introduced a unified framework that converts all text-based language problems into a text-to-text format and compared pre-training objectives, architectures, unlabeled data sets, transfer approaches, and other factors on dozens of language understanding tasks.
Abstract
Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts all text-based language problems into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled data sets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new "Colossal Clean Crawled Corpus", we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our data set, pre-trained models, and code.
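To make the text-to-text idea concrete, here is a minimal usage sketch: every task is phrased as an input string (with a task prefix) and answered as an output string. Loading the released checkpoints through the Hugging Face transformers API is an assumption of convenience; the paper's own code release is TensorFlow-based, but the t5-small checkpoint and the task prefixes shown are the publicly documented ones.

```python
# Minimal sketch of the text-to-text format, assuming the Hugging Face
# `transformers` API and the publicly hosted "t5-small" checkpoint.
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

# Every task is cast as text in, text out: the prefix tells the model
# which problem to solve (translation, summarization, classification, ...).
for prompt in [
    "translate English to German: The house is wonderful.",
    "summarize: Transfer learning, where a model is first pre-trained on a "
    "data-rich task before being fine-tuned on a downstream task, has "
    "emerged as a powerful technique in natural language processing.",
    "cola sentence: The course is jumping well.",  # grammaticality, as text
]:
    ids = tokenizer(prompt, return_tensors="pt").input_ids
    out = model.generate(ids, max_new_tokens=40)
    print(tokenizer.decode(out[0], skip_special_tokens=True))
```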
Citations
Proceedings Article
On the Challenges of Evaluating Compositional Explanations in Multi-Hop Inference: Relevance, Completeness, and Expert Ratings
TL;DR: The authors compiled a large corpus of 126k domain-expert relevance ratings that augments a corpus of explanations for standardized science exam questions, uncovering 80k additional relevant facts not rated as gold.
Proceedings Article
Discrete and Soft Prompting for Multilingual Models
Mengjie Zhao, Hinrich Schütze
TL;DR: The authors showed that discrete and soft prompting outperform finetuning in multilingual settings, both for crosslingual transfer and for in-language training on multilingual natural language inference, and that prompting also performs well with training data in multiple languages other than English (see the sketch below).
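For readers unfamiliar with the distinction this summary draws, the sketch below contrasts the two prompting styles. It is a hypothetical illustration, not the authors' code: the SoftPrompt class, the prompt length, and the initialization scale are invented; the key point is that a discrete prompt is literal text, while a soft prompt is a small trainable embedding matrix prepended to a frozen model's input embeddings.

```python
# Hypothetical sketch contrasting discrete and soft prompting.
import torch
import torch.nn as nn

class SoftPrompt(nn.Module):
    def __init__(self, prompt_length: int, embed_dim: int):
        super().__init__()
        # The only trainable parameters: prompt_length "virtual token" embeddings.
        self.prompt = nn.Parameter(torch.randn(prompt_length, embed_dim) * 0.02)

    def forward(self, input_embeds: torch.Tensor) -> torch.Tensor:
        # Prepend the soft prompt to every sequence in the batch;
        # the pretrained model behind it stays frozen.
        batch = input_embeds.size(0)
        expanded = self.prompt.unsqueeze(0).expand(batch, -1, -1)
        return torch.cat([expanded, input_embeds], dim=1)

# A discrete prompt, by contrast, is just string manipulation:
discrete_input = "Premise: ... Hypothesis: ... Does the premise entail the hypothesis?"

soft = SoftPrompt(prompt_length=20, embed_dim=768)
fake_embeds = torch.randn(2, 16, 768)   # (batch, seq_len, embed_dim)
print(soft(fake_embeds).shape)          # torch.Size([2, 36, 768])
```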
Proceedings Article
WhyAct: Identifying Action Reasons in Lifestyle Vlogs
TL;DR: In this article, the authors proposed a multimodal model that automatically infers the reasons behind human actions in online videos, focusing on the widespread genre of lifestyle vlogs, in which people perform actions while verbally describing them.
Posted Content
Evaluation of contextual embeddings on less-resourced languages
Matej Ulčar, Aleš Žagar, Carlos Santos Armendariz, Andraž Repar, Senja Pollak, Matthew Purver, Marko Robnik-Šikonja
TL;DR: This paper presented the first multilingual empirical comparison of two ELMo models and several monolingual and multilingual BERT models, using 14 tasks in nine languages, and found that BERT models trained on only a few languages mostly do best.
Book Chapter
Acquiring Input Features from Stock Market Summaries: A NLG Perspective
TL;DR: In this article, the authors focus on acquiring input features that can be aligned with stock market summaries; they introduce a new corpus for the task and define a rule-based approach that automatically identifies salient market features from market prices, as sketched below.
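As a rough illustration of what such a rule-based feature extractor might look like, the snippet below derives a few salient features from a series of closing prices. The function name, thresholds, and feature labels are all invented for illustration; the paper's actual rules and feature inventory may differ.

```python
# Hypothetical sketch of rule-based salient-feature extraction from prices.
# Thresholds and labels are invented; they are not taken from the paper.
from typing import Dict, List

def extract_market_features(closes: List[float]) -> Dict[str, object]:
    """Derive simple salient features from daily closing prices."""
    change = (closes[-1] - closes[0]) / closes[0] * 100  # percent move
    return {
        "pct_change": round(change, 2),
        "direction": "gain" if change > 0 else "loss" if change < 0 else "flat",
        "volatile": max(closes) / min(closes) > 1.05,  # >5% swing in the period
        "new_high": closes[-1] == max(closes),
    }

print(extract_market_features([101.2, 96.8, 103.5, 104.1]))
# {'pct_change': 2.87, 'direction': 'gain', 'volatile': True, 'new_high': True}
```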