Open Access · Journal Article · DOI

AutoML for Multi-Label Classification: Overview and Empirical Evaluation

TL;DR: In this paper, the authors survey existing approaches to AutoML for multi-label classification (MLC) and propose a benchmarking framework that supports a fair and systematic comparison of these approaches.
Abstract
Automated machine learning (AutoML) supports the algorithmic construction and data-specific customization of machine learning pipelines, including the selection, combination, and parametrization of machine learning algorithms as main constituents. Generally speaking, AutoML approaches comprise two major components: a search space model and an optimizer for traversing the space. Recent approaches have shown impressive results in the realm of supervised learning, most notably (single-label) classification (SLC). Moreover, first attempts at extending these approaches towards multi-label classification (MLC) have been made. While the space of candidate pipelines is already huge in SLC, the complexity of the search space is raised to an even higher power in MLC. One may wonder, therefore, whether and to what extent optimizers established for SLC can scale to this increased complexity, and how they compare to each other. This paper makes the following contributions: First, we survey existing approaches to AutoML for MLC. Second, we augment these approaches with optimizers not previously tried for MLC. Third, we propose a benchmarking framework that supports a fair and systematic comparison. Fourth, we conduct an extensive experimental study, evaluating the methods on a suite of MLC problems. We find a grammar-based best-first search to compare favorably to other optimizers.
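
To make the abstract's decomposition concrete, the sketch below illustrates the two components in miniature: a hand-rolled search space over MLC pipelines (a multi-label meta-strategy wrapped around a base learner) and plain random search as the optimizer. This is an illustrative toy built on scikit-learn, not the benchmarking framework or the grammar-based best-first search evaluated in the paper; the component choices, hyperparameter values, and evaluation budget are all assumptions.

```python
# Minimal sketch (not the paper's framework) of the two AutoML components:
# a search space over MLC pipelines and an optimizer (plain random search).
import random

from sklearn.datasets import make_multilabel_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import hamming_loss
from sklearn.model_selection import train_test_split
from sklearn.multiclass import OneVsRestClassifier  # binary relevance
from sklearn.multioutput import ClassifierChain

# Search space: an MLC meta-strategy wrapped around a base learner with one
# hyperparameter each; real AutoML spaces are far larger and hierarchical.
SEARCH_SPACE = {
    "meta": [OneVsRestClassifier, ClassifierChain],
    "base": [
        lambda: LogisticRegression(C=random.choice([0.1, 1.0, 10.0]), max_iter=500),
        lambda: RandomForestClassifier(n_estimators=random.choice([50, 100, 200])),
    ],
}

def sample_pipeline():
    """Optimizer step: draw one candidate pipeline from the search space."""
    meta = random.choice(SEARCH_SPACE["meta"])
    base = random.choice(SEARCH_SPACE["base"])()
    return meta(base)

X, Y = make_multilabel_classification(n_samples=300, n_labels=3, random_state=0)
X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, random_state=0)

best_loss, best_pipe = float("inf"), None
for _ in range(10):  # evaluation budget of 10 candidates
    pipe = sample_pipeline()
    pipe.fit(X_tr, Y_tr)
    loss = hamming_loss(Y_te, pipe.predict(X_te))
    if loss < best_loss:
        best_loss, best_pipe = loss, pipe

print(f"best Hamming loss: {best_loss:.3f} with {best_pipe.__class__.__name__}")
```

Real AutoML systems replace the flat random sampler with model-based or grammar-based optimizers and define the search space hierarchically, with conditional hyperparameters per component.
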


Citations
Proceedings Article · DOI

Coevolution of remaining useful lifetime estimation pipelines for automated predictive maintenance

TL;DR: In this paper, the search space of a single pipeline is partitioned into two sub-spaces, one for feature-extraction methods and one for regression methods, and cooperative coevolution is employed to search for a good combination.
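
As a rough illustration of the idea summarized above (and not the cited paper's implementation), the sketch below coevolves two sub-populations, one of feature-extraction candidates and one of regressors, ranking each against the current best partner from the other sub-space. The components, dataset, and single coevolutionary round are assumptions for demonstration only.

```python
# Illustrative sketch of cooperative coevolution over two pipeline sub-spaces,
# evaluated jointly on a synthetic regression task.
import random

from sklearn.datasets import make_regression
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestRegressor
from sklearn.feature_selection import SelectKBest, f_regression
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

X, y = make_regression(n_samples=200, n_features=20, noise=0.1, random_state=0)

# Two sub-populations: feature-extraction candidates and regressor candidates.
extractors = [PCA(n_components=k) for k in (5, 10, 15)] + [SelectKBest(f_regression, k=10)]
regressors = [Ridge(alpha=a) for a in (0.1, 1.0, 10.0)] + [RandomForestRegressor(n_estimators=50)]

def fitness(extractor, regressor):
    """Joint evaluation: a combination is scored as a full pipeline."""
    pipe = make_pipeline(extractor, regressor)
    return cross_val_score(pipe, X, y, cv=3, scoring="r2").mean()

# One coevolutionary round: each sub-population is ranked against the
# current best partner from the other sub-population.
best_reg = random.choice(regressors)
best_ext = max(extractors, key=lambda e: fitness(e, best_reg))
best_reg = max(regressors, key=lambda r: fitness(best_ext, r))
print("best combination:", best_ext, best_reg)
```
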
Journal Article · DOI

Guest Editorial: Automated Machine Learning

TL;DR: This guest editorial presents a snapshot of cutting-edge automated machine learning (AutoML) research, comprising 15 articles of outstanding quality that together reflect the current state of the art.
Journal Article · DOI

Integrating Multi-Label Contrastive Learning With Dual Adversarial Graph Neural Networks for Cross-Modal Retrieval

TL;DR: Two models are proposed to learn discriminative and modality-invariant representations for cross-modal retrieval, along with a novel soft multi-label contrastive loss whose soft positive sampling probability aligns representation similarity with label similarity.
Proceedings Article · DOI

Feature Pyramid Vision Transformer for MedMNIST Classification Decathlon

TL;DR: This paper proposes the Feature Pyramid Vision Transformer (FPViT), a strong alternative for the MedMNIST Classification Decathlon that exhibits enhanced feature learning and modeling capabilities by combining the merits of both residual networks (ResNet) and Vision Transformers (ViT).
References
Journal Article · DOI

Random Forests

TL;DR: Internal estimates monitor error, strength, and correlation; these are used to show the response to increasing the number of features used in the forest, and the ideas are also applicable to regression.
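
A minimal illustration of the internal (out-of-bag) error estimate mentioned in this summary, using scikit-learn's RandomForestClassifier; the synthetic dataset and parameter values are arbitrary assumptions, not taken from the paper.

```python
# Out-of-bag (OOB) error as an internal estimate, shown for a few settings
# of the number of features considered per split.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=500, n_features=20, random_state=0)

for max_features in (2, 5, 10):
    rf = RandomForestClassifier(
        n_estimators=200,
        max_features=max_features,
        oob_score=True,   # enables the internal OOB estimate
        random_state=0,
    ).fit(X, y)
    print(f"max_features={max_features}: OOB error = {1 - rf.oob_score_:.3f}")
```
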
Book

An Introduction to Genetic Algorithms

TL;DR: An Introduction to Genetic Algorithms focuses in depth on a small set of important and interesting topics -- particularly in machine learning, scientific modeling, and artificial life -- and reviews a broad span of research, including the work of Mitchell and her colleagues.
Journal Article · DOI

Efficient Global Optimization of Expensive Black-Box Functions

TL;DR: This paper introduces the reader to a response surface methodology that is especially good at modeling the nonlinear, multimodal functions that often occur in engineering and shows how these approximating functions can be used to construct an efficient global optimization algorithm with a credible stopping rule.
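
The sketch below gives a minimal rendition of the response-surface idea behind this paper (the EGO approach): fit a Gaussian process surrogate to a handful of evaluations and pick the next point by expected improvement. It uses scikit-learn and SciPy as stand-ins; the toy objective, kernel choice, and candidate grid are illustrative assumptions.

```python
# Surrogate-based global optimization in miniature: GP fit + expected improvement.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def f(x):  # stand-in for an expensive black-box function (to be minimized)
    return np.sin(3 * x) + 0.5 * x

X = np.array([[0.2], [1.0], [2.5]])  # points evaluated so far
y = f(X).ravel()

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True).fit(X, y)

grid = np.linspace(0, 3, 200).reshape(-1, 1)
mu, sigma = gp.predict(grid, return_std=True)
best = y.min()
imp = best - mu
z = imp / np.maximum(sigma, 1e-9)
ei = imp * norm.cdf(z) + sigma * norm.pdf(z)  # expected improvement

x_next = grid[np.argmax(ei)]
print("next point to evaluate:", x_next)
```
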
Proceedings Article

Practical Bayesian Optimization of Machine Learning Algorithms

TL;DR: This work describes new algorithms that take into account the variable cost of learning-algorithm experiments and that can leverage multiple cores for parallel experimentation; the proposed algorithms improve on previous automatic procedures and can reach or surpass human expert-level optimization for many algorithms.
Proceedings Article

Algorithms for Hyper-Parameter Optimization

TL;DR: This work contributes novel techniques for making response surface models P(y|x) in which many elements of hyper-parameter assignment (x) are known to be irrelevant given particular values of other elements.
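
For a sense of how such response-surface models are used in practice, the snippet below shows a small run of tree-structured Parzen estimator (TPE) search via the hyperopt library, which provides a TPE implementation; the search space and toy objective are assumptions, not taken from the paper.

```python
# Small hyperopt run with TPE over a two-dimensional hyperparameter space.
from hyperopt import fmin, hp, tpe

space = {
    "C": hp.loguniform("C", -5, 5),
    "kernel": hp.choice("kernel", ["linear", "rbf"]),
}

def objective(params):
    # Stand-in for a real cross-validated loss of, e.g., an SVM.
    return (params["C"] - 1.0) ** 2 + (0.1 if params["kernel"] == "rbf" else 0.2)

best = fmin(fn=objective, space=space, algo=tpe.suggest, max_evals=50)
print(best)
```
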