Topic

Multiple kernel learning

About: Multiple kernel learning (MKL) is a research topic concerned with learning an optimal combination of base kernels, typically within kernel machines such as support vector machines, instead of hand-picking a single kernel. Over its lifetime, 1,630 publications have been published within this topic, receiving 56,082 citations.
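
As a minimal illustration of the mechanics (the toy data and hand-fixed weights are assumptions for this sketch; an MKL method would learn the weights), a weighted sum of base kernels can be fed to a standard SVM as a precomputed kernel:

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVC

# Toy two-class data (illustrative only).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(3, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

# Two RBF base kernels at different bandwidths. MKL would learn the
# weights d; here they are fixed by hand to show the combination step.
d = [0.7, 0.3]
K = d[0] * rbf_kernel(X, gamma=0.1) + d[1] * rbf_kernel(X, gamma=10.0)

clf = SVC(kernel="precomputed").fit(K, y)
print(clf.score(K, y))  # training accuracy on the combined kernel
```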


Papers
Proceedings ArticleDOI
24 Mar 2014
TL;DR: A novel method, lp-norm multi-task multiple kernel learning (MTMKL), jointly learns the classifiers for detecting the absence and presence of multiple AUs and outperforms the state-of-the-art methods for AU detection.
Abstract: Facial action unit (AU) detection is a challenging topic in computer vision and pattern recognition. Most existing approaches design classifiers to detect AUs individually or in combinations without considering the intrinsic relations among AUs. This paper presents a novel method, lp-norm multi-task multiple kernel learning (MTMKL), that jointly learns the classifiers for detecting the absence and presence of multiple AUs. lp-norm MTMKL is an extension of regularized multi-task learning, which learns shared kernels from a given set of base kernels across all the tasks within Support Vector Machines (SVMs). Our approach has several advantages over existing methods: (1) AU detection is transformed into an MTL problem, where, given a specific frame, multiple AUs are detected simultaneously by exploiting their inter-relations; (2) lp-norm multiple kernel learning is applied to increase the discriminative power of the classifiers. Our experimental results on the CK+ and DISFA databases show that the proposed method outperforms the state-of-the-art methods for AU detection.
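
Setting the multi-task machinery aside, the lp-norm MKL building block admits a compact alternating scheme. Below is a minimal sketch of single-task lp-norm MKL (following the closed-form weight update of Kloft et al., not the paper's MTMKL extension; `kernels` is an assumed list of precomputed base kernel matrices and binary labels are assumed):

```python
import numpy as np
from sklearn.svm import SVC

def lp_mkl_fit(kernels, y, p=2.0, C=1.0, n_iter=20):
    """Alternating optimization for single-task lp-norm MKL (sketch).

    kernels: list of (n, n) precomputed base kernel matrices; y: binary labels.
    Alternates between fitting an SVM on the weighted kernel sum and
    the closed-form kernel-weight update of lp-norm MKL.
    """
    M = len(kernels)
    d = np.full(M, M ** (-1.0 / p))            # uniform start, ||d||_p = 1
    svm = SVC(kernel="precomputed", C=C)
    for _ in range(n_iter):
        K = sum(dm * Km for dm, Km in zip(d, kernels))
        svm.fit(K, y)
        a = svm.dual_coef_.ravel()             # signed dual coefficients
        sv = svm.support_                      # support vector indices
        # ||w_m||^2 = d_m^2 * a^T K_m a, restricted to support vectors
        norms = np.array([dm ** 2 * a @ Km[np.ix_(sv, sv)] @ a
                          for dm, Km in zip(d, kernels)])
        d = norms ** (1.0 / (p + 1))           # closed-form update ...
        d /= np.linalg.norm(d, ord=p)          # ... projected to ||d||_p = 1
    return svm, d
```

Smaller p pushes the learnt weights toward sparsity (few kernels survive), while larger p keeps the combination dense; the paper tunes this trade-off for AU detection.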

39 citations

Proceedings ArticleDOI
01 Sep 2017
TL;DR: A novel Ensemble Multiple Kernel Correlation Alignment (EMKCA) based approach to HDP that accounts for two key characteristics of defect prediction data and uses a kernel correlation alignment method to make the data distributions of the source and target projects similar in the kernel space.
Abstract: Heterogeneous defect prediction (HDP) aims to predict defect-prone software modules in one project using heterogeneous data collected from other projects. Recently, several HDP methods have been proposed. However, these methods do not sufficiently incorporate the two characteristics of the defect prediction data: (1) data could be linearly inseparable, and (2) data could be highly imbalanced. These two data characteristics make it challenging to build an effective HDP model. In this paper, we propose a novel Ensemble Multiple Kernel Correlation Alignment (EMKCA) based approach to HDP, which takes into consideration the two characteristics of the defect prediction data. Specifically, we first map the source and target project data into high dimensional kernel space through multiple kernel learning, where the defective and non-defective modules can be better separated. Then, we design a kernel correlation alignment method to make the data distribution of the source and target projects similar in the kernel space. Finally, we integrate multiple kernel classifiers with ensemble learning to mitigate the influence of the class imbalance problem, which can improve the accuracy of the defect prediction model. EMKCA thus combines the advantages of both multiple kernel learning and ensemble learning. Extensive experiments on 30 public projects show that EMKCA outperforms the related competing methods.
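
The paper performs the alignment in the kernel-induced space; as a rough feature-space analogue (an illustration of the underlying idea, not the authors' EMKCA code), a CORAL-style correlation alignment whitens the source statistics and re-colors them with the target's:

```python
import numpy as np

def coral_align(source, target, eps=1e-6):
    """CORAL-style correlation alignment (sketch, feature-space version).

    source, target: (n_s, d) and (n_t, d) feature matrices.
    Whitens the source features with Cs^{-1/2} and re-colors them with
    Ct^{1/2}, so second-order statistics match across projects.
    """
    cs = np.cov(source, rowvar=False) + eps * np.eye(source.shape[1])
    ct = np.cov(target, rowvar=False) + eps * np.eye(target.shape[1])

    def mat_pow(m, power):
        # Symmetric matrix power via eigendecomposition.
        vals, vecs = np.linalg.eigh(m)
        vals = np.clip(vals, eps, None)
        return vecs @ np.diag(vals ** power) @ vecs.T

    return source @ mat_pow(cs, -0.5) @ mat_pow(ct, 0.5)
```

Matching second-order statistics this way is the same intuition EMKCA applies after the multiple-kernel mapping, before handing the aligned data to the ensemble of kernel classifiers.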

38 citations

Proceedings ArticleDOI
01 Dec 2009
TL;DR: This work proposes an alternate method in which training instances are selected via AdaBoost for each base kernel, and a composite decision function is learnt that can be evaluated by computing kernel similarities with respect to only these chosen instances.
Abstract: We investigate the problem of combining multiple feature channels for the purpose of efficient image classification. Discriminative kernel based methods, such as SVMs, have been shown to be quite effective for image classification. To use these methods with several feature channels, one needs to combine base kernels computed from them. Multiple kernel learning is an effective method for combining the base kernels. However, the cost of computing the kernel similarities of a test image with each of the support vectors for all feature channels is extremely high. We propose an alternate method, where training data instances are selected, using AdaBoost, for each of the base kernels. A composite decision function, which can be evaluated by computing kernel similarities with respect to only these chosen instances, is learnt. This method significantly reduces the number of kernel computations required during testing. Experimental results on the benchmark UCI datasets, as well as on a challenging painting dataset, are included to demonstrate the effectiveness of our method.
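
A minimal sketch of the idea, assuming a single precomputed base kernel and labels in {-1, +1}: each weak learner thresholds the kernel similarity to one training instance, so the final classifier only ever needs kernel values against the instances AdaBoost picked (the median threshold and other details here are simplifications, not the paper's exact procedure):

```python
import numpy as np

def boost_select(K, y, n_rounds=25):
    """AdaBoost over single-instance "kernel stumps" (sketch).

    K: (n, n) precomputed base kernel; y: labels in {-1, +1}.
    Each weak learner is sign(s * (k(x, x_i) - theta)) for one training
    instance i, so the learnt classifier needs kernel evaluations
    against only the selected instances at test time.
    """
    n = len(y)
    w = np.full(n, 1.0 / n)                    # instance weights
    chosen = []                                # (i, theta, s, alpha) per round
    for _ in range(n_rounds):
        best = None
        for i in range(n):
            theta = np.median(K[:, i])         # crude fixed threshold
            for s in (1, -1):
                pred = np.where(s * (K[:, i] - theta) >= 0, 1, -1)
                err = w[pred != y].sum()
                if best is None or err < best[0]:
                    best = (err, i, theta, s, pred)
        err, i, theta, s, pred = best
        err = np.clip(err, 1e-10, 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)  # weak learner weight
        chosen.append((i, theta, s, alpha))
        w *= np.exp(-alpha * y * pred)         # re-weight mistakes upward
        w /= w.sum()
    return chosen

def predict(k_to_chosen, chosen):
    """k_to_chosen[i]: kernel value k(x_test, x_i) for each chosen index i."""
    score = sum(a * (1 if s * (k_to_chosen[i] - th) >= 0 else -1)
                for i, th, s, a in chosen)
    return 1 if score >= 0 else -1
```

The saving at test time comes from `predict` touching only the chosen instances, rather than every support vector of every feature channel as in standard MKL-SVM decision functions.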

38 citations

Journal ArticleDOI
TL;DR: A new weighted average combination method, built on the k-nearest neighbors (kNN) framework, is presented and shown to outperform MKL in both accuracy and efficiency in experiments.
Abstract: In object classification, feature combination can usually be used to combine the strength of multiple complementary features and produce better classification results than any single one. While multiple kernel learning (MKL) is a popular approach to feature combination in object classification, it does not always perform well in practical applications. On one hand, the optimization process in MKL usually involves a huge consumption of computation and memory space. On the other hand, in some cases, MKL is found to perform no better than the baseline combination methods. This observation motivates us to investigate the underlying mechanism of feature combination with average combination and weighted average combination. As a result, we empirically find that in average combination, it is better to use a sample of the most powerful features instead of all, whereas in one type of weighted average combination, the best classification accuracy comes from a nearly sparse combination. We integrate these observations into the k-nearest neighbors (kNN) framework, based on which we further discuss some issues related to sparse solution and MKL. Finally, by making use of the kNN framework, we present a new weighted average combination method, which is shown to perform better than MKL in both accuracy and efficiency in experiments. We believe that the work in this paper is helpful in exploring the mechanism underlying feature combination.
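
The combination schemes the abstract compares can be sketched in a few lines (the per-channel distance rescaling and the majority vote are assumptions of this sketch, not the paper's exact protocol); a nearly sparse `weights` vector corresponds to the regime the paper reports as most accurate, while uniform weights over the strongest channels give plain average combination:

```python
import numpy as np
from scipy.spatial.distance import cdist

def knn_weighted_combo(train_feats, test_feats, y_train, weights, k=5):
    """kNN on a weighted average of per-channel distance matrices (sketch).

    train_feats / test_feats: one feature array per channel;
    weights: non-negative per-channel weights.
    """
    D = None
    for w, tr, te in zip(weights, train_feats, test_feats):
        d = cdist(te, tr)                      # per-channel distances
        d = w * d / d.mean()                   # rescale so channels are comparable
        D = d if D is None else D + d
    idx = np.argsort(D, axis=1)[:, :k]         # k nearest training points per query
    preds = []
    for row in y_train[idx]:                   # majority vote among neighbors
        vals, counts = np.unique(row, return_counts=True)
        preds.append(vals[np.argmax(counts)])
    return np.array(preds)
```

Since no kernel-weight optimization is run, the cost is one distance matrix per channel plus a sort, which is the efficiency edge over MKL the abstract refers to.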

38 citations

Journal ArticleDOI
Xia Wu, Qing Li, Lele Xu, Kewei Chen, Li Yao
TL;DR: Experimental results on three large, well-known face databases suggested that combining multiple features in MKSCDDL improved the recognition rate compared with SCDDL, indicating the effectiveness of the multiple kernel learning technique in combining multiple features for classification.

38 citations


Network Information
Related Topics (5)
Convolutional neural network: 74.7K papers, 2M citations, 89% related
Deep learning: 79.8K papers, 2.1M citations, 89% related
Feature extraction: 111.8K papers, 2.1M citations, 87% related
Feature (computer vision): 128.2K papers, 1.7M citations, 87% related
Image segmentation: 79.6K papers, 1.8M citations, 86% related
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2023    21
2022    44
2021    72
2020    101
2019    113
2018    114