
Classifier chains

About: Classifier chains is a research topic. Over its lifetime, 170 publications have been published within this topic, receiving 20,989 citations.


Papers
Journal ArticleDOI
TL;DR: A multi-label classification framework, based on the Classifier Chains approach, to cater for multiple residents in a smart home environment; the recognized patterns are based on Activities of Daily Living (ADL).
Abstract: Rapid developments in smart home environments, driven by advances in computing and sensing technology, have been changing the landscape of home residents' daily lives. Among other areas, activity recognition has become an interesting field of exploration in the smart home domain. Activity recognition describes the paradigm of taking raw data from environmental sensors embedded in the home as input and predicting a resident's activity accordingly. The recognized patterns are based on Activities of Daily Living (ADL). In this paper, we design a multi-label classification framework using the Classifier Chains approach to cater for multiple residents in a smart home environment. Everyday human activities are gradually becoming more complex, especially where multiple residents are involved, which complicates the inference of activities in a smart home. Hence, this paper highlights the sensing technology involved as well as important research work in the activity recognition area, specifically on recognizing complex multi-resident activities involving interactions among residents within the same environment. Furthermore, this paper also discusses potential directions for future research in activity recognition.
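The core idea of the Classifier Chains approach used here can be sketched in a few lines: one binary classifier per label, where each classifier in the chain sees the original sensor features plus the predictions for all earlier labels. The 1-nearest-neighbour base learner and the toy data are illustrative assumptions, not details from the paper.

```python
# Minimal sketch of Classifier Chains for multi-label prediction:
# classifier j is trained on (features + labels 0..j-1) and its output
# is appended to the feature vector fed to classifier j+1.

def nn_predict(train_X, train_y, x):
    """1-NN on Hamming distance - a stand-in for any binary base classifier."""
    best = min(train_X, key=lambda t: sum(a != b for a, b in zip(t, x)))
    return train_y[train_X.index(best)]

def chain_fit_predict(X, Y, x_new):
    """Predict all labels for x_new, feeding each prediction forward."""
    preds = []
    for j in range(len(Y[0])):                       # one classifier per label
        Xj = [list(xi) + [yi[k] for k in range(j)]   # features + earlier labels
              for xi, yi in zip(X, Y)]
        yj = [yi[j] for yi in Y]
        preds.append(nn_predict(Xj, yj, list(x_new) + preds))
    return preds
```

Because earlier predictions are fed forward, the chain can exploit label correlations (e.g. between two residents' simultaneous activities) that independent per-label classifiers ignore.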

13 citations

Book ChapterDOI
19 Sep 2016
TL;DR: This work solves the multi-label classification problem using a widely known technique, Classifier Chains (CC), and extends a typical metalearning approach by combining metafeatures characterizing the interdependencies between the classifiers with the base-level features.
Abstract: Dynamic selection or combination (DSC) methods allow one or more classifiers to be selected from an ensemble according to the characteristics of a given test instance x. Most methods proposed for this purpose are based on the nearest-neighbours algorithm: it is assumed that if a classifier performed well on a set of instances similar to x, it will also perform well on x. We address the problem of dynamically combining a pool of classifiers by combining two approaches: metalearning and multi-label classification. Taking into account that diversity is a fundamental concept in ensemble learning and that the interdependencies between the classifiers cannot be ignored, we solve the multi-label classification problem using a widely known technique: Classifier Chains (CC). Additionally, we extend a typical metalearning approach by combining metafeatures characterizing the interdependencies between the classifiers with the base-level features. We ran experiments on 42 classification datasets and compared our method with several state-of-the-art DSC techniques, including another metalearning approach. Results show that our method improves over the other metalearning approach and is very competitive with the other four DSC methods.
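The multi-label framing of dynamic selection can be made concrete: the "labels" for a test instance are per-classifier competence bits estimated on its nearest neighbours, and only the members judged competent vote. This is a loose sketch of the general setup; the paper's actual method predicts these bits with Classifier Chains and metafeatures, while the data below is illustrative.

```python
# Sketch of dynamic classifier selection as a multi-label problem:
# each ensemble member gets one competence bit per test instance.

def competence_labels(neigh_preds, neigh_truth):
    """One bit per member: did it classify every neighbour correctly?
    neigh_preds: rows = neighbours, columns = ensemble members."""
    return [all(p == t for p, t in zip(col, neigh_truth))
            for col in zip(*neigh_preds)]

def dynamic_combine(neigh_preds, neigh_truth, test_preds):
    """Majority vote over the members judged competent near x;
    fall back to the full ensemble if none qualifies."""
    mask = competence_labels(neigh_preds, neigh_truth)
    votes = [p for p, ok in zip(test_preds, mask) if ok] or test_preds
    return max(set(votes), key=votes.count)
```

Treating the competence bits jointly (as CC does) rather than independently is what lets the method model interdependencies between classifiers.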

13 citations

Book ChapterDOI
11 Apr 1994
TL;DR: A modified version of the traditional classifier system, called the delayed-action classifier system (DACS), devised specifically for learning anticipatory or predictive behaviour, is proposed.
Abstract: To manifest anticipatory behaviour that goes beyond simple stimulus-response, classifier systems must evolve internal reasoning processes based on couplings via internal messages. A major challenge in engendering internal reasoning processes in classifier systems has been the discovery and maintenance of long classifier chains. This paper proposes a modified version of the traditional classifier system, called the delayed-action classifier system (DACS), devised specifically for learning anticipatory or predictive behaviour. DACS operates by delaying the action (i.e. the posting of messages) of appropriately tagged, matched classifiers by a number of execution cycles encoded on the classifier. Since classifier delays are encoded on the classifier genome, a GA is able to explore the spaces of actions and delays simultaneously. Results of experiments comparing DACS to a traditional classifier system, in terms of the dynamics of classifier reinforcement and of system performance using the bucket brigade, are presented and examined. Experiments comparing DACS with a traditional classifier system on a simple prediction problem, whose results appear encouraging, are also described. Areas for further work using the delayed-action classifier notion are suggested and briefly discussed.
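The delayed-action mechanism itself is simple to illustrate: a matched classifier's message is not posted immediately but scheduled a fixed number of cycles ahead, the delay being part of the rule. The rule format below is a hypothetical simplification of DACS, not the paper's encoding.

```python
# Toy sketch of the delayed-action idea in DACS: each rule carries a
# delay, so a matched rule posts its message only after that many
# execution cycles. A GA could then search actions and delays jointly.

def run_dacs(rules, stimuli):
    """rules: list of (condition, message, delay) triples.
    Returns all messages in the order they are posted."""
    pending, posted = [], []
    for t, s in enumerate(stimuli):
        for cond, msg, delay in rules:
            if cond == s:
                pending.append((t + delay, msg))    # schedule the action
        for due, msg in sorted(pending):
            if due == t:
                posted.append(msg)                  # action fires this cycle
        pending = [p for p in pending if p[0] > t]  # keep future actions
    return posted
```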

13 citations

Proceedings ArticleDOI
01 Jan 2015
TL;DR: A novel probabilistic ensemble framework for multi-label classification, based on the mixtures-of-experts architecture, is developed; it can recover a rich set of dependency relations among inputs and outputs that a single multi-label classification model cannot capture due to its modeling simplifications.
Abstract: We develop a novel probabilistic ensemble framework for multi-label classification that is based on the mixtures-of-experts architecture. In this framework, we combine multi-label classification models in the classifier chains family that decompose the class posterior distribution P(Y1, …, Yd |X) using a product of posterior distributions over components of the output space. Our approach captures different input-output and output-output relations that tend to change across data. As a result, we can recover a rich set of dependency relations among inputs and outputs that a single multi-label classification model cannot capture due to its modeling simplifications. We develop and present algorithms for learning the mixtures-of-experts models from data and for performing multi-label predictions on unseen data instances. Experiments on multiple benchmark datasets demonstrate that our approach achieves highly competitive results and outperforms the existing state-of-the-art multi-label classification methods.
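The combination step can be sketched as mixing the joint label posteriors of several chain models. The paper's mixtures-of-experts architecture uses an input-dependent gate; this simpler sketch assumes fixed mixture weights, and the per-chain posteriors below are illustrative numbers, not results from the paper.

```python
# Fixed-weight mixture of classifier-chain posteriors:
# P(y | x) = sum_k w_k * P_k(y | x), each P_k coming from a chain
# trained with a different label order.

def mixture_posterior(chain_posteriors, weights):
    """Weighted mixture of the chains' joint label posteriors.
    chain_posteriors: one dict {label_combo: prob} per chain."""
    assert abs(sum(weights) - 1.0) < 1e-9
    combos = chain_posteriors[0].keys()
    return {y: sum(w * p[y] for w, p in zip(weights, chain_posteriors))
            for y in combos}
```

Because each chain factorizes the posterior in a different label order, the mixture can represent output-output dependencies that any single factorization misses.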

12 citations

Proceedings ArticleDOI
01 Jan 2013
TL;DR: This paper proposes online distributed algorithms which can learn how to construct the optimal classifier chain in order to maximize the stream mining performance (i.e. mining accuracy minus cost) based on the dynamically-changing data characteristics.
Abstract: A plethora of emerging Big Data applications require processing and analyzing streams of data to extract valuable information in real time. For this, chains of classifiers which can detect various concepts need to be constructed in real time. In this paper, we propose online distributed algorithms which can learn how to construct the optimal classifier chain in order to maximize the stream mining performance (i.e. mining accuracy minus cost) based on the dynamically changing data characteristics. The proposed solution does not require the distributed local classifiers to exchange any information when learning at runtime. Moreover, our algorithm requires only limited feedback of the mining performance to enable the learning of the optimal classifier chain. We model the problem of learning the optimal classifier chain at runtime as a multi-player multi-armed bandit problem with limited feedback. To the best of our knowledge, this paper is the first to apply bandit techniques to stream mining problems. However, existing bandit algorithms are inefficient in the considered scenario because each component classifier learns its optimal classification functions using only the aggregate overall reward, without knowing its own individual reward and without exchanging information with other classifiers. We prove that the proposed algorithms achieve logarithmic learning regret uniformly over time and hence are order optimal. Therefore, the long-term time-average performance loss tends to zero. We also design learning algorithms whose regret is linear in the number of classification functions. This is much smaller than the regret obtainable with existing bandit algorithms, which scale linearly in the number of classifier chains and exponentially in the number of classification functions.
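The bandit framing can be illustrated with the basic single-player case: candidate classifier-chain configurations are the arms, and the only feedback per round is one scalar reward (accuracy minus cost). This is a sketch of standard UCB1, not the paper's distributed multi-player algorithm, and the reward functions are simulated stand-ins.

```python
import math

# UCB1 over candidate classifier chains: pull the chain with the best
# optimistic estimate (empirical mean + exploration bonus), observing
# only the aggregate reward of the chosen chain each round.

def ucb1(reward_fns, rounds):
    """reward_fns: one callable per candidate chain, returning a reward
    in [0, 1]. Returns the index of the empirically best chain."""
    k = len(reward_fns)
    n = [0] * k            # pulls per chain
    s = [0.0] * k          # summed reward per chain
    for t in range(1, rounds + 1):
        if t <= k:
            a = t - 1      # try every chain once first
        else:
            a = max(range(k), key=lambda i:
                    s[i] / n[i] + math.sqrt(2 * math.log(t) / n[i]))
        n[a] += 1
        s[a] += reward_fns[a]()   # limited feedback: one scalar per round
    return max(range(k), key=lambda i: s[i] / n[i])
```

The logarithmic-regret guarantee of UCB-style rules is what makes the long-term time-average performance loss vanish, as the abstract states for the proposed (distributed) variant.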

12 citations


Network Information
Related Topics (5)
Deep learning: 79.8K papers, 2.1M citations, 77% related
Support vector machine: 73.6K papers, 1.7M citations, 77% related
Feature extraction: 111.8K papers, 2.1M citations, 76% related
Convolutional neural network: 74.7K papers, 2M citations, 76% related
Artificial neural network: 207K papers, 4.5M citations, 75% related
Performance Metrics
No. of papers in the topic in previous years:
Year  Papers
2021  12
2020  18
2019  27
2018  12
2017  17
2016  6