
Multiple kernel learning

About: Multiple kernel learning is a research topic. Over its lifetime, 1,630 publications have appeared within this topic, receiving 56,082 citations.


Papers
Proceedings ArticleDOI
23 Jun 2013
TL;DR: This paper proposes two criteria for jointly learning the kernel and the classifier in a single optimization problem; for the SVM classifier, it formulates learning a good kernel-classifier combination as a convex optimization problem and solves it efficiently following the multiple kernel learning approach.
Abstract: In computer vision applications, features often lie on Riemannian manifolds with known geometry. Popular learning algorithms such as discriminant analysis, partial least squares, support vector machines, etc., are not directly applicable to such features due to the non-Euclidean nature of the underlying spaces. Hence, classification is often performed in an extrinsic manner by mapping the manifolds to Euclidean spaces using kernels. However, for kernel based approaches, poor choice of kernel often results in reduced performance. In this paper, we address the issue of kernel selection for the classification of features that lie on Riemannian manifolds using the kernel learning approach. We propose two criteria for jointly learning the kernel and the classifier using a single optimization problem. Specifically, for the SVM classifier, we formulate the problem of learning a good kernel-classifier combination as a convex optimization problem and solve it efficiently following the multiple kernel learning approach. Experimental results on image set-based classification and activity recognition clearly demonstrate the superiority of the proposed approach over existing methods for classification of manifold features.

113 citations
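The abstract above describes the general multiple kernel learning recipe: learn a convex combination of base kernels jointly with the SVM. The following sketch illustrates that recipe only, not the paper's Riemannian-manifold formulation; the data, bandwidths, and the alternating re-weighting heuristic are all illustrative assumptions rather than the authors' algorithm.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(80, 5))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

def rbf(X, gamma):
    # Pairwise squared distances -> Gaussian (RBF) kernel matrix.
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

# Base kernels at several bandwidths; MKL learns how to mix them.
kernels = [rbf(X, g) for g in (0.1, 1.0, 10.0)]
beta = np.full(len(kernels), 1.0 / len(kernels))  # weights on the simplex

for _ in range(10):
    K = sum(b * Km for b, Km in zip(beta, kernels))
    clf = SVC(kernel="precomputed", C=1.0).fit(K, y)
    d = clf.dual_coef_.ravel()        # signed alphas of the support vectors
    sv = clf.support_
    # Heuristic re-weighting: kernels with larger alpha' K_m alpha
    # contribute more to the margin and receive larger weight.
    score = np.array([d @ Km[np.ix_(sv, sv)] @ d for Km in kernels])
    beta *= np.sqrt(np.maximum(score, 1e-12))
    beta /= beta.sum()
```

Alternating between fitting the SVM and re-normalizing the simplex weights is the pattern shared by wrapper-style MKL solvers; convex formulations such as the one in the paper solve both at once.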

Journal ArticleDOI
TL;DR: A novel rule extraction approach using the information provided by the separating hyperplane and support vectors is proposed to improve the generalization capacity and comprehensibility of rules and reduce the computational complexity of SVM.

113 citations

Journal ArticleDOI
TL;DR: This work proposes Composite Kernel Learning to address the situation where distinct components give rise to a group structure among kernels, and describes the convexity of the learning problem, and provides a general wrapper algorithm for computing solutions.
Abstract: The Support Vector Machine is an acknowledged powerful tool for building classifiers, but it lacks flexibility, in the sense that the kernel is chosen prior to learning. Multiple Kernel Learning makes it possible to learn the kernel from an ensemble of basis kernels, whose combination is optimized during learning. Here, we propose Composite Kernel Learning to address the situation where distinct components give rise to a group structure among kernels. Our formulation of the learning problem encompasses several setups, putting more or less emphasis on the group structure. We characterize the convexity of the learning problem, and provide a general wrapper algorithm for computing solutions. Finally, we illustrate the behavior of our method on multi-channel data where groups correspond to channels.

112 citations
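To make the group structure concrete: kernels derived from the same channel form a group, and weights are assigned at the group level rather than per kernel. The sketch below only illustrates that grouping pattern with fixed, hand-picked weights; Composite Kernel Learning actually learns the weights via mixed-norm penalties, which is not reproduced here, and the channel split is a made-up example.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(40, 6))

def rbf(X, gamma):
    # Gaussian kernel on the given feature block.
    sq = ((X[:, None] - X[None, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

# Hypothetical grouping: two "channels", several bandwidths per channel.
groups = {
    "channel_a": [rbf(X[:, :3], g) for g in (0.5, 2.0)],
    "channel_b": [rbf(X[:, 3:], g) for g in (0.5, 2.0)],
}

group_w = {"channel_a": 0.7, "channel_b": 0.3}  # group-level weights, sum to 1

# Kernels within a group share the group's weight equally, mirroring the
# group structure that composite kernel learning emphasizes.
K = sum(group_w[g] * np.mean(Ks, axis=0) for g, Ks in groups.items())
```

Setting a group's weight to zero discards an entire channel at once, which is the kind of structured sparsity the group formulation is designed to express.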

Journal ArticleDOI
TL;DR: The proposed discriminative multiple kernel learning method for spectral image classification can achieve a substantial improvement in classification performance without strict limitations on the selection of basic kernels, and reduces the computational burden by requiring fewer support vectors.
Abstract: In this paper, we propose a discriminative multiple kernel learning (DMKL) method for spectral image classification. The core idea of the proposed method is to learn an optimal combined kernel from predefined basic kernels by maximizing separability in reproducing kernel Hilbert space. DMKL achieves the maximum separability via finding an optimal projective direction according to statistical significance, which leads to the minimum within-class scatter and maximum between-class scatter instead of a time-consuming search for the optimal kernel combination. Fisher criterion (FC) and maximum margin criterion (MMC) are used to find the optimal projective direction, thus leading to two variants of the proposed method, DMKL-FC and DMKL-MMC, respectively. After learning the projective direction, all basic kernels are projected to generate a discriminative combined kernel. Three merits are realized by DMKL. First, DMKL can achieve a substantial improvement in classification performance without strict limitations on the selection of basic kernels. Second, the discriminating scales of a Gaussian kernel, the useful bands for classification, and the competitive sizes of spatial filters can be selected by ranking the corresponding weights, where larger weights indicate greater relevance. Third, DMKL reduces the computational burden by requiring fewer support vectors. Experiments are conducted on two hyperspectral data sets and one multispectral data set. The corresponding experimental results demonstrate that the proposed algorithms can achieve the best performance with satisfactory computational efficiency for spectral image classification, compared with several state-of-the-art algorithms.

111 citations
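The separability idea behind DMKL can be illustrated with a much cruder proxy: score each basic kernel by how much more similar same-class pairs are than different-class pairs, then weight kernels by that score. This is a hedged stand-in for the Fisher-criterion machinery, not the paper's projective-direction optimization; the `fisher_score` function and the toy kernels below are hypothetical.

```python
import numpy as np

def fisher_score(K, y):
    # Crude separability proxy: average within-class similarity minus
    # average between-class similarity (diagonal included).
    same = y[:, None] == y[None, :]
    return K[same].mean() - K[~same].mean()

y = np.array([0, 0, 1, 1])
# A block-structured kernel that respects the classes...
K_good = np.array([[1.0, 0.9, 0.1, 0.2],
                   [0.9, 1.0, 0.2, 0.1],
                   [0.1, 0.2, 1.0, 0.8],
                   [0.2, 0.1, 0.8, 1.0]])
# ...versus a kernel that is blind to them.
K_flat = np.full((4, 4), 0.5) + 0.5 * np.eye(4)

scores = np.array([fisher_score(K_good, y), fisher_score(K_flat, y)])
weights = scores / scores.sum()  # larger weight for the more separable kernel
```

Ranking kernels by such weights is also how the abstract's second merit works in spirit: kernels (bands, scales, filter sizes) with larger weights are the ones carrying class-discriminative structure.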

Proceedings ArticleDOI
09 Dec 2013
TL;DR: This work proposes a method to automatically detect emotions in unconstrained settings as part of the 2013 Emotion Recognition in the Wild Challenge and achieves competitive results, with an accuracy gain of approximately 10% above the challenge baseline.
Abstract: We propose a method to automatically detect emotions in unconstrained settings as part of the 2013 Emotion Recognition in the Wild Challenge [16], organized in conjunction with the ACM International Conference on Multimodal Interaction (ICMI 2013). Our method combines multiple visual descriptors with paralinguistic audio features for multimodal classification of video clips. Extracted features are combined using Multiple Kernel Learning and the clips are classified using an SVM into one of the seven emotion categories: Anger, Disgust, Fear, Happiness, Neutral, Sadness and Surprise. The proposed method achieves competitive results, with an accuracy gain of approximately 10% above the challenge baseline.

110 citations
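The fusion pattern described in this abstract (per-modality kernels combined by MKL, then a precomputed-kernel SVM) can be sketched as follows. The weights here are fixed rather than learned, the data is synthetic, and three toy classes stand in for the seven emotion categories; none of this reproduces the challenge pipeline.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(2)
n = 60
y = rng.integers(0, 3, size=n)                    # 3 toy classes (paper: 7 emotions)
vis = rng.normal(size=(n, 10)) + y[:, None]       # synthetic visual descriptors
aud = rng.normal(size=(n, 4)) + 0.5 * y[:, None]  # synthetic audio features

def linear_kernel(F):
    return F @ F.T

# A fixed convex combination of per-modality kernels stands in for the
# learned MKL weights; an SVM on the fused kernel does the classification.
K = 0.6 * linear_kernel(vis) + 0.4 * linear_kernel(aud)
clf = SVC(kernel="precomputed").fit(K, y)
```

Because each modality contributes its own Gram matrix, modalities with different dimensionalities and scales fuse cleanly at the kernel level rather than by feature concatenation.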


Network Information

Related Topics (5)
- Convolutional neural network: 74.7K papers, 2M citations, 89% related
- Deep learning: 79.8K papers, 2.1M citations, 89% related
- Feature extraction: 111.8K papers, 2.1M citations, 87% related
- Feature (computer vision): 128.2K papers, 1.7M citations, 87% related
- Image segmentation: 79.6K papers, 1.8M citations, 86% related
Performance Metrics

No. of papers in the topic in previous years:

Year    Papers
2023    21
2022    44
2021    72
2020    101
2019    113
2018    114