SciSpace
Topic

Classifier chains

About: Classifier chains is a research topic. Over the lifetime, 170 publications have been published within this topic receiving 20989 citations.


Papers
01 Jan 2011
TL;DR: The Ensemble of Classifier Chains (ECC) algorithm is modified to improve per-concept (per-label) performance and its results on the Mean Average Precision (MAP) metric.
Abstract: This thesis has two main purposes: improving multi-label classification techniques, and applying these techniques to automated image annotation. On the machine learning side, we examine the Ensemble of Classifier Chains (ECC) algorithm and modify it to improve per-concept (per-label) performance and its results on the Mean Average Precision (MAP) metric. We also suggest techniques for handling label constraints in a data set: we introduce a post-processing step and propose two different techniques for enforcing the constraints present in the data. The second part focuses on the data set examined in this work, taken from the photo annotation task of the ImageCLEF 2010 contest, of which we give a short description. We then build models based on the two kinds of information available for every image in the data set: visual information and textual information. A further contribution of this work is an ensemble model that depends only on different kinds of textual information. Interest in automated image annotation is growing: many contests focus on this field, and several online image annotation applications already exist. It is therefore worthwhile to study and simulate multi-label algorithms in the image annotation field to see how they perform compared to other machine learning algorithms.
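The ECC setup the thesis builds on can be sketched with scikit-learn's ClassifierChain (an assumed stand-in; the thesis works with its own modified implementation). Each chain in the ensemble uses a different random label order, and the per-label probabilities are averaged across chains:

```python
import numpy as np
from sklearn.datasets import make_multilabel_classification
from sklearn.linear_model import LogisticRegression
from sklearn.multioutput import ClassifierChain

# Synthetic multi-label data standing in for the annotation data set.
X, Y = make_multilabel_classification(n_samples=200, n_features=20,
                                      n_classes=5, random_state=0)

# Ten chains, each with a different random label ordering.
chains = [ClassifierChain(LogisticRegression(max_iter=1000),
                          order="random", random_state=i)
          for i in range(10)]
for chain in chains:
    chain.fit(X, Y)

# Ensemble prediction: average the per-chain probability estimates,
# then threshold at 0.5 to obtain the final label set.
proba = np.mean([chain.predict_proba(X) for chain in chains], axis=0)
Y_pred = (proba >= 0.5).astype(int)
```

Averaging probabilities (rather than majority-voting hard predictions) is what makes per-label thresholds and ranking-based metrics such as MAP straightforward to compute from the ensemble output.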
Book Chapter
20 Sep 2018
TL;DR: A graph-based semi-clustering algorithm is considered, in which documents are represented by vertices with edge weights calculated from the similarity of the associated texts; the resulting semi-clusters enable reducing label dimensionality.
Abstract: An increasing number of large online text repositories require effective techniques of document classification. In many cases, more than one class label should be assigned to documents. When the number of labels is big, it is difficult to obtain the required multi-label classification accuracy. Efficient label space dimension reduction may significantly improve classification performance. In the paper, we consider applying a graph-based semi-clustering algorithm, where documents are represented by vertices with edge weights calculated according to the similarity of associated texts. Semi-clusters are used for finding patterns of labels that occur together. Such an approach enables reducing label dimensionality. The performance of the method is examined by experiments conducted on real medical documents. The assessment of classification results, in terms of Classification Accuracy, F-Measure and Hamming Loss, obtained for the most popular multi-label classifiers (Binary Relevance, Classifier Chains and Label Powerset), showed good potential of the proposed methodology.
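The comparison the paper runs (Binary Relevance versus Classifier Chains, scored by Hamming Loss and F-Measure) can be reproduced in miniature with scikit-learn on synthetic data; the medical-document corpus and the Label Powerset model are omitted in this sketch:

```python
from sklearn.datasets import make_multilabel_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score, hamming_loss
from sklearn.model_selection import train_test_split
from sklearn.multioutput import ClassifierChain, MultiOutputClassifier

X, Y = make_multilabel_classification(n_samples=300, n_classes=6, random_state=1)
X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, random_state=1)

# Binary Relevance trains one independent model per label;
# Classifier Chains additionally feeds earlier predictions forward.
models = {
    "Binary Relevance": MultiOutputClassifier(LogisticRegression(max_iter=1000)),
    "Classifier Chains": ClassifierChain(LogisticRegression(max_iter=1000),
                                         random_state=1),
}
for name, model in models.items():
    Y_hat = model.fit(X_tr, Y_tr).predict(X_te)
    print(f"{name}: Hamming loss={hamming_loss(Y_te, Y_hat):.3f}, "
          f"micro-F1={f1_score(Y_te, Y_hat, average='micro'):.3f}")
```

Hamming Loss counts label-wise mistakes (lower is better), while micro-averaged F-Measure aggregates precision and recall over all label decisions, so the two measures can disagree on which classifier is preferable.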
Posted Content
Simon Bohlender, Eneldo Loza Mencia, Moritz Kulessa
TL;DR: In this paper, the authors combine the concept of dynamic classifier chains (DCC) with the boosting of extreme gradient boosted trees (XGBoost), an effective and scalable state-of-the-art technique.
Abstract: Classifier chains is a key technique in multi-label classification, since it allows label dependencies to be considered effectively. However, the classifiers are aligned according to a static order of the labels. In the concept of dynamic classifier chains (DCC), the label ordering is chosen for each prediction dynamically, depending on the respective instance at hand. We combine this concept with extreme gradient boosted trees (XGBoost), an effective and scalable state-of-the-art technique, and incorporate DCC in a fast multi-label extension of XGBoost which we make publicly available. As only positive labels have to be predicted, and these are usually few, the training costs can be further substantially reduced. Moreover, as experiments on eleven datasets show, the length of the chain allows for more control over the use of previous predictions and hence over the measure one wants to optimize.
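The dynamic-ordering idea can be illustrated with a deliberately small toy (plain logistic regressions rather than the paper's XGBoost extension; all names here are made up for the sketch): each label's classifier sees the features plus the other labels, and at prediction time the most confident remaining label is resolved first.

```python
import numpy as np
from sklearn.datasets import make_multilabel_classification
from sklearn.linear_model import LogisticRegression

X, Y = make_multilabel_classification(n_samples=300, n_features=20,
                                      n_classes=4, random_state=0)
n_labels = Y.shape[1]

# One binary classifier per label; each is trained on the features plus the
# true values of all *other* labels (chain-style augmented inputs).
models = [
    LogisticRegression(max_iter=1000).fit(
        np.hstack([X, np.delete(Y, j, axis=1)]), Y[:, j])
    for j in range(n_labels)
]

def prob_positive(model, z):
    # Probability of the positive class; guards a degenerate single-class fit.
    proba = model.predict_proba(z.reshape(1, -1))[0]
    classes = list(model.classes_)
    return proba[classes.index(1)] if 1 in classes else 0.0

def predict_dynamic(x):
    """Resolve labels one at a time, always picking the label whose
    classifier is currently most confident (a toy stand-in for DCC)."""
    filled = np.zeros(n_labels)          # working label vector, 0 = unknown
    remaining = set(range(n_labels))
    while remaining:
        best = max(remaining, key=lambda j: abs(
            prob_positive(models[j], np.hstack([x, np.delete(filled, j)])) - 0.5))
        p = prob_positive(models[best], np.hstack([x, np.delete(filled, best)]))
        filled[best] = int(p >= 0.5)
        remaining.discard(best)
    return filled.astype(int)

Y_pred = np.array([predict_dynamic(x) for x in X[:5]])
```

In a static chain the resolution order is fixed once for all instances; here the order can differ per instance, which is the property the paper exploits inside XGBoost.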
28 Jan 2014
TL;DR: Experimental results highlight significant differences for 3 selected evaluation measures: Log-Loss, Ranking-Loss and Learning/Prediction time; the best results are obtained with Multi-label k Nearest Neighbors (ML-kNN).
Abstract: The objective of this paper is to evaluate the ability of 12 multi-label classification algorithms at learning, in a short time, with few training examples. Experimental results highlight significant differences for 3 selected evaluation measures: Log-Loss, Ranking-Loss and Learning/Prediction time, and the best results are obtained with Multi-label k Nearest Neighbors (ML-kNN), followed by Ensemble of Classifier Chains (ECC) and Ensemble of Binary Relevance (EBR).
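The two error measures named above can be computed with scikit-learn from per-label probability scores; this sketch scores a single ClassifierChain on synthetic data (timing is omitted):

```python
import numpy as np
from sklearn.datasets import make_multilabel_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import label_ranking_loss, log_loss
from sklearn.multioutput import ClassifierChain

X, Y = make_multilabel_classification(n_samples=200, n_classes=5, random_state=2)
chain = ClassifierChain(LogisticRegression(max_iter=1000), random_state=2).fit(X, Y)
P = chain.predict_proba(X)   # per-label probability scores, shape (n, 5)

# Ranking-Loss: fraction of label pairs where an irrelevant label is
# scored above a relevant one.
ranking = label_ranking_loss(Y, P)

# Multi-label Log-Loss: average the binary log-loss over the label columns.
logloss = np.mean([log_loss(Y[:, j], P[:, j], labels=[0, 1])
                   for j in range(Y.shape[1])])
print(f"Ranking-Loss={ranking:.3f}  Log-Loss={logloss:.3f}")
```

Both measures are computed from scores rather than hard 0/1 predictions, which is why they can separate algorithms that look identical under thresholded metrics.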
Book Chapter
11 Apr 2000
TL;DR: Several modifications to the credit assignment mechanism of Goldberg's Simple Classifier System are described, along with the results obtained when solving a problem that requires the formation of classifier chains.
Abstract: This article describes several modifications performed to the credit assignment mechanism of Goldberg’s Simple Classifier System [4] and the results obtained when solving a problem that requires the formation of classifier chains. The first set of these modifications included changes to the formula used to compute the effective bid of a classifier by taking into consideration the reputation of the classifier and the maximum bid of the previous auction in which a classifier was active. Noise was made proportional to the strength of the classifier and specificity was incorporated as an additional term in the formula that is independent from the bid coefficient. A second set of changes was related to the manner in which classifiers belonging to a chain may receive a payoff or a penalty from the environment in addition to the payments obtained from succeeding classifiers. We also tested the effect that bridge classifiers [13] have in the solution of the example problem by allowing the creation of shorter chains. Some experiments in which classifiers were better informed gave better results than those in which only the noise and the specificity were included in the computation of the effective bid.
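As background for the terms above (the abstract does not give the exact modified formulas, so the coefficients and names below are illustrative only), the effective bid in a Goldberg-style classifier system is a deterministic bid plus noise; the paper's first change makes that noise proportional to the classifier's strength:

```python
import random

def effective_bid(strength, specificity, bid_coeff=0.1, noise_scale=0.075):
    """Toy effective-bid computation for a classifier competing in an auction:
    a deterministic bid scaled by specificity and strength, plus Gaussian
    noise made proportional to strength (the change described in the paper)."""
    bid = bid_coeff * specificity * strength
    noise = random.gauss(0.0, noise_scale * strength)
    return bid + noise

# The auction winner is simply the classifier with the highest effective bid;
# the winner later pays its bid to the classifiers that set up its conditions,
# which is how payoff flows backwards along a classifier chain.
classifiers = [{"strength": 120.0, "specificity": 0.75},
               {"strength": 90.0, "specificity": 1.0}]
winner = max(classifiers,
             key=lambda c: effective_bid(c["strength"], c["specificity"]))
```

The reputation term and the previous-auction maximum bid that the paper adds to this formula are not reproduced here, since their exact form is not given in the abstract.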

Network Information
Related Topics (5)
- Deep learning: 79.8K papers, 2.1M citations (77% related)
- Support vector machine: 73.6K papers, 1.7M citations (77% related)
- Feature extraction: 111.8K papers, 2.1M citations (76% related)
- Convolutional neural network: 74.7K papers, 2M citations (76% related)
- Artificial neural network: 207K papers, 4.5M citations (75% related)
Performance Metrics
No. of papers in the topic in previous years:
Year  Papers
2021  12
2020  18
2019  27
2018  12
2017  17
2016  6