Open Access · Posted Content

What to Pre-Train on? Efficient Intermediate Task Selection

TLDR
This article showed that efficient embedding-based methods that rely solely on the respective datasets outperform computationally expensive few-shot fine-tuning approaches, demonstrating that they are able to efficiently identify the best datasets for intermediate training.
Abstract
Intermediate task fine-tuning has been shown to culminate in large transfer gains across many NLP tasks. With an abundance of candidate datasets as well as pre-trained language models, it has become infeasible to run the cross-product of all combinations to find the best transfer setting. In this work we first establish that similar sequential fine-tuning gains can be achieved in adapter settings, and subsequently consolidate previously proposed methods that efficiently identify beneficial tasks for intermediate transfer learning. We experiment with a diverse set of 42 intermediate and 11 target English classification, multiple choice, question answering, and sequence tagging tasks. Our results show that efficient embedding-based methods that rely solely on the respective datasets outperform computationally expensive few-shot fine-tuning approaches. Our best methods achieve an average Regret@3 of less than 1% across all target tasks, demonstrating that we are able to efficiently identify the best datasets for intermediate training.
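The abstract's two technical pieces, embedding-based ranking of candidate intermediate tasks and the Regret@k metric, can be illustrated with a small sketch. This is a minimal illustration rather than the authors' code: it assumes per-example embeddings have already been produced by some sentence encoder, and the function names and the `transfer_gain` dictionary (the downstream score observed after intermediate training on each candidate) are hypothetical.

```python
# Hedged sketch: rank intermediate tasks by dataset-embedding similarity,
# then score the ranking with Regret@k. All names and data are illustrative.
import numpy as np

def dataset_embedding(example_embeddings: np.ndarray) -> np.ndarray:
    """Represent a dataset by the mean of its per-example embeddings."""
    return example_embeddings.mean(axis=0)

def rank_intermediate_tasks(target_emb: np.ndarray, candidate_embs: dict) -> list:
    """Rank candidate tasks by cosine similarity to the target dataset."""
    sims = {
        name: float(np.dot(target_emb, emb)
                    / (np.linalg.norm(target_emb) * np.linalg.norm(emb)))
        for name, emb in candidate_embs.items()
    }
    return sorted(sims, key=sims.get, reverse=True)

def regret_at_k(ranking: list, transfer_gain: dict, k: int = 3) -> float:
    """Relative gap between the best achievable transfer result and the best
    result among the top-k ranked candidates (lower is better)."""
    best = max(transfer_gain.values())
    best_in_top_k = max(transfer_gain[task] for task in ranking[:k])
    return (best - best_in_top_k) / best
```

Under this reading, an average Regret@3 below 1% means that, averaged over target tasks, taking the best of the top three ranked candidates recovers almost all of the performance of the single best intermediate task.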


Citations
Posted Content

SPoT: Better Frozen Model Adaptation through Soft Prompt Transfer.

TL;DR: This paper proposed a soft prompt transfer approach that learns task-specific soft prompts to condition a frozen language model to perform downstream tasks, which significantly boosts the performance of Prompt Tuning across many tasks.
Proceedings Article (DOI)

SPoT: Better Frozen Model Adaptation through Soft Prompt Transfer

TL;DR: This article proposed a soft prompt transfer approach that first learns a prompt on one or more source tasks and then uses it to initialize the prompt for a target task; this transfer significantly boosts the performance of Prompt Tuning across many tasks.
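A minimal sketch of the transfer recipe this TL;DR describes, not the SPoT authors' implementation: a soft prompt is tuned on a source task while the language model stays frozen, and its parameters are then copied to initialize the target task's prompt. The prompt length and hidden size below are illustrative.

```python
# Hedged sketch of soft prompt transfer; the backbone LM stays frozen throughout.
import torch
import torch.nn as nn

prompt_length, hidden_size = 20, 768  # illustrative values

# 1) Learn a soft prompt on the source task(s); the training loop is omitted.
source_prompt = nn.Parameter(torch.randn(prompt_length, hidden_size) * 0.02)
# ... optimize source_prompt on the source task with the LM parameters frozen ...

# 2) Transfer: initialize the target task's prompt from the learned source
#    prompt instead of from random values, then continue prompt tuning.
target_prompt = nn.Parameter(source_prompt.detach().clone())
```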
Posted Content

AdapterHub Playground: Simple and Flexible Few-Shot Learning with Adapters

TL;DR: The AdapterHub Playground, as presented in this paper, provides an intuitive interface that allows adapters to be used for prediction, training, and analysis of textual data across a variety of NLP tasks, and demonstrates that predictive performance can easily be increased in a few-shot learning scenario.
Posted Content

AMMUS: A Survey of Transformer-based Pretrained Models in Natural Language Processing

TL;DR: Transformer-based pretrained language models (T-PTLMs), as discussed by the authors, have achieved great success in almost every NLP task and are built on top of transformers, self-supervised learning, and transfer learning.
Proceedings Article (DOI)

AdapterHub Playground: Simple and Flexible Few-Shot Learning with Adapters

TL;DR: This proceedings version of the AdapterHub Playground demonstration, published at the 60th Annual Meeting of the Association for Computational Linguistics: System Demonstrations, presents the same intuitive adapter-based interface for prediction, training, and analysis of textual data described above.
References
Proceedings Article (DOI)

BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding

TL;DR: BERT as mentioned in this paper pre-trains deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers, which can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks.
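The "one additional output layer" setup mentioned in this TL;DR can be sketched as a linear classifier over BERT's [CLS] representation. The sketch uses the Hugging Face Transformers Auto classes; the checkpoint name and label count are placeholders.

```python
# Hedged sketch of BERT fine-tuning with a single added output layer.
import torch.nn as nn
from transformers import AutoModel

class BertClassifier(nn.Module):
    def __init__(self, model_name: str = "bert-base-uncased", num_labels: int = 2):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)
        self.classifier = nn.Linear(self.encoder.config.hidden_size, num_labels)

    def forward(self, input_ids, attention_mask=None):
        outputs = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        cls_repr = outputs.last_hidden_state[:, 0]  # [CLS] token representation
        return self.classifier(cls_repr)            # task logits
```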
Posted Content

RoBERTa: A Robustly Optimized BERT Pretraining Approach

TL;DR: It is found that BERT was significantly undertrained and can match or exceed the performance of every model published after it; the best model achieves state-of-the-art results on GLUE, RACE, and SQuAD.
Proceedings Article

Recursive Deep Models for Semantic Compositionality Over a Sentiment Treebank

TL;DR: This paper introduces a Sentiment Treebank that includes fine-grained sentiment labels for 215,154 phrases in the parse trees of 11,855 sentences and presents new challenges for sentiment compositionality, and proposes the Recursive Neural Tensor Network to address them.
Proceedings Article (DOI)

Transformers: State-of-the-Art Natural Language Processing

TL;DR: Transformers is an open-source library that consists of carefully engineered state-of-the-art Transformer architectures under a unified API and a curated collection of pretrained models made by and available for the community.
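As a small illustration of the unified API and curated pretrained models described above (the pipeline below downloads a default sentiment model, chosen here only as an example):

```python
# Hedged usage sketch of the Transformers pipeline API.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")  # pulls a curated pretrained model
print(classifier("Intermediate task selection is efficient."))
```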
Journal Article (DOI)

Cumulated gain-based evaluation of IR techniques

TL;DR: This article proposes several novel measures that compute the cumulative gain the user obtains by examining the retrieval result up to a given ranked position, and test results indicate that the proposed measures credit IR methods for their ability to retrieve highly relevant documents and allow testing of statistical significance of effectiveness differences.
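A compact sketch of the cumulated gain measures this reference introduces, written in a common modern formulation (graded relevance scores discounted by the log of the rank, then normalized by the ideal ordering):

```python
# Hedged sketch of DCG and nDCG; `relevances` holds graded relevance scores
# in the order the retrieval system ranked the documents.
import math

def dcg(relevances, k=None):
    """Discounted cumulative gain: each gain is discounted by log2(rank + 1)."""
    rels = relevances[:k] if k is not None else relevances
    return sum(rel / math.log2(rank + 1) for rank, rel in enumerate(rels, start=1))

def ndcg(relevances, k=None):
    """DCG normalized by the DCG of the ideal (descending) ordering."""
    ideal = dcg(sorted(relevances, reverse=True), k)
    return dcg(relevances, k) / ideal if ideal > 0 else 0.0

# Example: a ranking that places the most relevant document (grade 3) second.
print(ndcg([1, 3, 2, 0], k=3))
```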