Topic

Classifier chains

About: Classifier chains is a research topic. Over the lifetime, 170 publications have been published within this topic receiving 20989 citations.


Papers
Proceedings ArticleDOI
TL;DR: A hierarchical framework, Local Classifier Chains based Convolutional Neural Networks (LCC-CNN), is proposed that builds chains of local binary neural networks after one global neural network over all the class labels.
Abstract: This paper focuses on improving the performance of current convolutional neural networks in face recognition without changing the network architecture. We propose a hierarchical framework, Local Classifier Chains based Convolutional Neural Networks (LCC-CNN), that builds chains of local binary neural networks after one global neural network over all the class labels. Two criteria, based on a similarity matrix and a confusion matrix, are introduced to select binary label pairs for creating local deep networks. To avoid error propagation, each testing sample travels through one global model and a local classifier chain to obtain its final prediction. The proposed framework has been evaluated on the UHDB31 and CASIA-WebFace datasets. The experimental results indicate that our framework achieves better performance than using only the baseline methods as the global deep network, improving accuracy by 2.7% and 0.7% on the two datasets, respectively.
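
The global-then-local routing idea is easy to prototype. Below is a minimal sketch of the confusion-matrix pair-selection criterion and the prediction path, with logistic regressions on scikit-learn's digits data standing in for the paper's CNNs; the number of selected pairs and all helper names are illustrative assumptions, not LCC-CNN's actual configuration.

```python
# Sketch of confusion-matrix pair selection and global-then-local routing.
# Logistic regressions stand in for the paper's CNNs (an assumption).
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import cross_val_predict, train_test_split

X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Global model over all class labels.
global_clf = LogisticRegression(max_iter=2000).fit(X_tr, y_tr)

# Confusion-matrix criterion: find the most-confused label pairs
# using out-of-fold predictions on the training data.
oof = cross_val_predict(LogisticRegression(max_iter=2000), X_tr, y_tr, cv=3)
cm = confusion_matrix(y_tr, oof).astype(float)
np.fill_diagonal(cm, 0)
scores = cm + cm.T
pairs = []
for _ in range(5):                      # keep the 5 most-confused pairs
    a, b = np.unravel_index(np.argmax(scores), scores.shape)
    if scores[a, b] == 0:               # no more confused pairs
        break
    pairs.append((a, b))
    scores[a, b] = scores[b, a] = 0

# One local binary model per selected pair.
local = {frozenset(p): LogisticRegression(max_iter=2000)
         .fit(X_tr[np.isin(y_tr, p)], y_tr[np.isin(y_tr, p)]) for p in pairs}

def predict(x):
    """Global prediction, refined by the chain of matching local models."""
    pred = global_clf.predict(x.reshape(1, -1))[0]
    for pair, clf in local.items():
        if pred in pair:                # local model re-decides within the pair
            pred = clf.predict(x.reshape(1, -1))[0]
    return pred

acc = np.mean([predict(x) == t for x, t in zip(X_te, y_te)])
print(f"global+local accuracy: {acc:.3f}")
```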

9 citations

Book ChapterDOI
21 Jun 2017
TL;DR: This short note introduces multi-objective optimisation for feature subset selection in multi-label classification, using label powerset, binary relevance, classifier chains, and calibrated label ranking as the multi-label learning methods, and decision trees and SVMs as base learners.
Abstract: In this short note we introduce multi-objective optimisation (MOO) for feature subset selection in multi-label classification. We aim to optimise multiple multi-label loss functions simultaneously, using label powerset, binary relevance, classifier chains, and calibrated label ranking as the multi-label learning methods, and decision trees and SVMs as base learners. Experiments on multi-label benchmark datasets show that the feature subsets obtained through MOO perform reasonably better than systems that use exhaustive feature sets.
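
As a rough illustration of the setup, the sketch below scores random feature subsets under two multi-label losses with scikit-learn's ClassifierChain and keeps the Pareto front. Random search stands in for the paper's MOO algorithm, and the synthetic dataset, subset-sampling rate, and choice of losses are all assumptions.

```python
# Toy multi-objective feature-subset selection: random subsets evaluated
# on two multi-label losses, then Pareto-filtered. Not the paper's MOO
# algorithm; random search is a stand-in.
import numpy as np
from sklearn.datasets import make_multilabel_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import hamming_loss, zero_one_loss
from sklearn.model_selection import train_test_split
from sklearn.multioutput import ClassifierChain

X, Y = make_multilabel_classification(
    n_samples=400, n_features=30, n_classes=5, random_state=0)
X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, random_state=0)

rng = np.random.default_rng(0)
candidates = []
for _ in range(40):
    mask = rng.random(X.shape[1]) < 0.5          # random feature subset
    if not mask.any():
        continue
    chain = ClassifierChain(LogisticRegression(max_iter=1000), random_state=0)
    pred = chain.fit(X_tr[:, mask], Y_tr).predict(X_te[:, mask])
    # two multi-label losses, both to be minimised simultaneously
    candidates.append((hamming_loss(Y_te, pred), zero_one_loss(Y_te, pred), mask))

# Pareto front: keep subsets no other subset dominates on both losses.
front = [c for c in candidates
         if not any(o[0] <= c[0] and o[1] <= c[1] and (o[0] < c[0] or o[1] < c[1])
                    for o in candidates)]
print(len(front), "Pareto-optimal subsets out of", len(candidates))
```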

9 citations

Proceedings ArticleDOI
01 Dec 2013
TL;DR: These experiments demonstrate that, even though designed to work online, EDDO delivers estimators of competitive accuracy compared to batch Bayesian structure learners and batch variants of EDDO.
Abstract: We address the problem of estimating a discrete joint density online, that is, the algorithm is only provided the current example and its current estimate. The proposed online estimator of discrete densities, EDDO (Estimation of Discrete Densities Online), uses classifier chains to model dependencies among features. Each classifier in the chain estimates the probability of one particular feature. Because a single chain may not provide a reliable estimate, we also consider ensembles of classifier chains and ensembles of weighted classifier chains. For all density estimators, we provide consistency proofs and propose algorithms to perform certain inference tasks. The empirical evaluation of the estimators is conducted in several experiments and on data sets of up to several million instances: We compare them to density estimates computed from Bayesian structure learners, evaluate them under the influence of noise, measure their ability to deal with concept drift, and measure the run-time performance. Our experiments demonstrate that, even though designed to work online, EDDO delivers estimators of competitive accuracy compared to batch Bayesian structure learners and batch variants of EDDO.
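
The chain-rule decomposition behind this approach is straightforward to sketch: the joint density factors as P(x_1, ..., x_d) = prod_i P(x_i | x_1, ..., x_{i-1}), and each conditional is modelled by an online probabilistic classifier. The toy implementation below uses scikit-learn's SGDClassifier with partial_fit; the class name and every detail beyond the chain-rule idea are assumptions, not EDDO itself (which additionally uses ensembles and weighted chains).

```python
# Toy chain-rule density estimator: one online classifier per feature,
# each predicting P(x_i | x_{<i}). Illustrative only; not EDDO's code.
import numpy as np
from sklearn.linear_model import SGDClassifier

class ChainDensityEstimator:
    def __init__(self, n_features, classes_per_feature):
        self.models = [SGDClassifier(loss="log_loss") for _ in range(n_features)]
        self.classes = classes_per_feature

    def _context(self, x, i):
        # features 0..i-1 condition feature i; the first link of the
        # chain gets a dummy constant feature
        return x[:i].reshape(1, -1) if i else np.zeros((1, 1))

    def update(self, x):
        # online: each classifier sees only the current example
        for i, model in enumerate(self.models):
            model.partial_fit(self._context(x, i), [x[i]], classes=self.classes[i])

    def density(self, x):
        # chain rule: P(x) = prod_i P(x_i | x_{<i})
        p = 1.0
        for i, model in enumerate(self.models):
            proba = model.predict_proba(self._context(x, i))[0]
            p *= proba[list(model.classes_).index(x[i])]
        return p

# Usage on a stream of binary feature vectors:
est = ChainDensityEstimator(3, classes_per_feature=[[0, 1]] * 3)
rng = np.random.default_rng(0)
for _ in range(500):
    est.update(rng.integers(0, 2, size=3))
print(est.density(np.array([1, 0, 1])))
```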

9 citations

Proceedings Article
04 Nov 2018
TL;DR: In this paper, the authors propose two extensions of the ensemble of classifier chains (ECC) that improve the exploitation of majority examples within approximately the same computational budget, in order to make ECC resilient to class imbalance.
Abstract: Class imbalance is an intrinsic characteristic of multi-label data. Most of the labels in multi-label data sets are associated with a small number of training examples, much smaller compared to the size of the data set. Class imbalance poses a key challenge that plagues most multi-label learning methods. Ensemble of Classifier Chains (ECC), one of the most prominent multi-label learning methods, is no exception to this rule, as each of the binary models it builds is trained from all positive and negative examples of a label. To make ECC resilient to class imbalance, we first couple it with random undersampling. We then present two extensions of this basic approach, where we build a varying number of binary models per label and construct chains of different sizes, in order to improve the exploitation of majority examples with approximately the same computational budget. Experimental results on 16 multi-label datasets demonstrate the effectiveness of the proposed approaches in a variety of evaluation metrics.
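
The basic approach (one chain plus random undersampling) can be sketched compactly. Below, each binary model in the chain is trained after undersampling its negative examples down to the positive-class size; the 1:1 ratio, the class name, and other details are assumptions rather than the paper's exact method, though feeding true labels down the chain during training follows standard classifier-chain practice.

```python
# Sketch of one classifier chain with per-label random undersampling.
# Assumes every label has at least one positive and one negative example.
import numpy as np
from sklearn.base import clone
from sklearn.linear_model import LogisticRegression

class UndersampledChain:
    def __init__(self, base=LogisticRegression(max_iter=1000), seed=0):
        self.base = base
        self.rng = np.random.default_rng(seed)

    def fit(self, X, Y):
        self.order = self.rng.permutation(Y.shape[1])   # random chain order
        self.models = []
        X_aug = X
        for j in self.order:
            y = Y[:, j]
            pos = np.flatnonzero(y == 1)
            neg = np.flatnonzero(y == 0)
            # undersample the majority (usually negative) side to 1:1
            keep = np.concatenate(
                [pos, self.rng.choice(neg, size=min(len(pos), len(neg)), replace=False)])
            self.models.append(clone(self.base).fit(X_aug[keep], y[keep]))
            # chain: append the true label as a feature for later links
            X_aug = np.hstack([X_aug, y.reshape(-1, 1)])
        return self

    def predict(self, X):
        preds = np.zeros((X.shape[0], len(self.order)), dtype=int)
        X_aug = X
        for j, m in zip(self.order, self.models):
            p = m.predict(X_aug)
            preds[:, j] = p
            # at test time, feed predicted labels down the chain
            X_aug = np.hstack([X_aug, p.reshape(-1, 1)])
        return preds

# An ECC-style ensemble would train several such chains with different
# seeds (hence different orders and samples) and vote per label.
```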

9 citations

Proceedings ArticleDOI
TL;DR: This paper proposes a randomized distributed algorithm for configuring classifier chains in real-time multimedia stream mining systems that guarantees almost sure convergence to the optimal configuration, and provides results on speech data showing that the algorithm performs well in highly dynamic environments.
Abstract: We consider the problem of optimally configuring classifier chains for real-time multimedia stream mining systems. Jointly maximizing the performance over several classifiers under minimal end-to-end processing delay is a difficult task due to the distributed nature of the analytics (e.g., the utilized models or stored data sets): changing the filtering process at a single classifier can have an unpredictable effect both on the feature values of data arriving at classifiers further downstream and on the end-to-end processing delay. Since the utility function cannot be accurately modeled, we propose a randomized distributed algorithm that guarantees almost sure convergence to the optimal solution. We also provide results using speech data showing that the algorithm performs well under highly dynamic environments.
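
To make the flavor of such randomized configuration search concrete, here is a hedged toy sketch: each classifier in the chain has an operating threshold, a placeholder utility trades per-stage quality against end-to-end delay, and a randomized local search with a decaying exploration probability hunts for a good configuration. The utility model, schedule, and all constants are invented for illustration and are not the paper's algorithm.

```python
# Toy randomized search over classifier-chain operating thresholds.
# The utility function is a made-up placeholder, not the paper's model.
import random

def utility(thresholds):
    forwarded, acc, delay = 1.0, 1.0, 0.0
    for t in thresholds:
        acc *= 0.6 + 0.4 * (4 * t * (1 - t))   # per-stage quality peaks mid-range
        delay += forwarded * (1.0 - t)          # lenient stages forward more data
        forwarded *= 1.0 - t
    return acc - 0.5 * delay

def randomized_search(n_stages=3, steps=2000, explore=1.0, cooling=0.999):
    cfg = [random.random() for _ in range(n_stages)]
    best = cfg[:]
    for _ in range(steps):
        cand = cfg[:]
        i = random.randrange(n_stages)          # perturb one stage's threshold
        cand[i] = min(1.0, max(0.0, cand[i] + random.gauss(0, 0.1)))
        # always accept improvements; accept worse moves with decaying prob.
        if utility(cand) > utility(cfg) or random.random() < explore:
            cfg = cand
        explore *= cooling
        if utility(cfg) > utility(best):
            best = cfg[:]
    return best

print(randomized_search())
```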

8 citations


Network Information
Related Topics (5)
Deep learning: 79.8K papers, 2.1M citations (77% related)
Support vector machine: 73.6K papers, 1.7M citations (77% related)
Feature extraction: 111.8K papers, 2.1M citations (76% related)
Convolutional neural network: 74.7K papers, 2M citations (76% related)
Artificial neural network: 207K papers, 4.5M citations (75% related)
Performance
Metrics
No. of papers in the topic in previous years
Year  Papers
2021  12
2020  18
2019  27
2018  12
2017  17
2016  6