scispace - formally typeset
Topic

Multiple kernel learning

About: Multiple kernel learning is a research topic. Over its lifetime, 1,630 publications have been published within this topic, receiving 56,082 citations.


Papers
Journal ArticleDOI
TL;DR: Experimental results show that the model outperforms current state-of-the-art contextual frameworks and reveals individual contributions for each contextual interaction level as well as appearance features, indicating their relative importance for object localization.
Abstract: Recently, many object localization models have shown that incorporating contextual cues can greatly improve accuracy over using appearance features alone. Many of these models have therefore explored different types of contextual sources, but only considered one level of contextual interaction at a time. Thus, what context could truly contribute to object localization through integrating cues from all levels simultaneously remains an open question. Moreover, the relative importance of the different contextual levels and appearance features across different object classes remains to be explored. Here we introduce a novel framework for multiple-class object localization that incorporates different levels of contextual interactions. We study contextual interactions at the pixel, region, and object level based upon three different sources of context: semantic, boundary support, and contextual neighborhoods. Our framework learns a single similarity metric from multiple kernels, combining pixel and region interactions with appearance features, and then applies a conditional random field to incorporate object-level interactions. To effectively integrate different types of feature descriptions, we extend the large margin nearest neighbor approach to a novel algorithm that supports multiple kernels. We perform experiments on three challenging image databases: Graz-02, MSRC, and PASCAL VOC 2007. Experimental results show that our model outperforms current state-of-the-art contextual frameworks and reveals the individual contributions of each contextual interaction level as well as appearance features, indicating their relative importance for object localization.

33 citations
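A minimal numpy sketch of the combined-kernel similarity idea described above: one kernel per contextual level, merged by a convex combination into a single Gram matrix. The feature arrays, kernel choice, and weights are all illustrative assumptions; the paper itself learns the weights through its multiple-kernel extension of large margin nearest neighbor.

```python
import numpy as np

def rbf_gram(X, gamma):
    # Gram matrix of the RBF kernel k(x, x') = exp(-gamma * ||x - x'||^2)
    sq = np.sum(X ** 2, axis=1)
    d2 = np.maximum(sq[:, None] + sq[None, :] - 2.0 * X @ X.T, 0.0)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(0)
X_pixel = rng.normal(size=(6, 4))    # hypothetical pixel-level context features
X_region = rng.normal(size=(6, 3))   # hypothetical region-level context features

# Convex combination of the per-level kernels yields one similarity metric;
# the weights are fixed here for illustration, whereas the paper learns them.
w = np.array([0.6, 0.4])
K_combined = w[0] * rbf_gram(X_pixel, 0.5) + w[1] * rbf_gram(X_region, 0.5)
```

Because each base kernel is positive semidefinite and the weights are nonnegative, the combination remains a valid kernel.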

Proceedings ArticleDOI
20 Mar 2016
TL;DR: An algorithm is proposed that designs kernels as a part of Laplacian SVM learning which correspond to deep multi-layered combinations of elementary kernels which capture simple - linear - as well as intricate - nonlinear - relationships between data.
Abstract: Semi-supervised learning seeks to build accurate classification machines by taking advantage of both labeled and unlabeled data. This learning scheme is useful especially when labeled data are scarce while unlabeled ones are abundant. Among the existing semi-supervised learning algorithms, Laplacian support vector machines (SVMs) are known to be particularly powerful, but their success is highly dependent on the choice of kernels. In this paper, we propose an algorithm that designs kernels as a part of Laplacian SVM learning. The proposed kernels correspond to deep multi-layered combinations of elementary kernels, which capture simple (linear) as well as intricate (nonlinear) relationships between data. Our optimization process finds both the parameters of the deep kernels and the Laplacian SVMs in a unified framework, resulting in highly discriminative and accurate classifiers. When applied to the challenging ImageCLEF2013 Photo Annotation benchmark, the proposed deep kernels show significant and consistent gains compared to existing elementary kernels as well as standard multiple kernels.

33 citations
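A hedged sketch of a two-layer ("deep") kernel combination in the spirit of the abstract above: layer one is a weighted sum of elementary kernels, and a deeper layer applies an element-wise nonlinearity. The elementary kernels, weights, and the element-wise exponential are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

def rbf_gram(X, gamma):
    sq = np.sum(X ** 2, axis=1)
    d2 = np.maximum(sq[:, None] + sq[None, :] - 2.0 * X @ X.T, 0.0)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(1)
X = rng.normal(size=(5, 3))

# Layer 1: weighted combination of elementary kernels (linear and RBF).
K1 = 0.5 * (X @ X.T) + 0.5 * rbf_gram(X, 1.0)

# Layer 2: element-wise exponential of the previous layer. With a nonnegative
# scale this expands into a power series of Hadamard powers of K1, each of
# which is positive semidefinite, so the result stays a valid kernel.
K_deep = np.exp(0.1 * K1)
```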

Proceedings ArticleDOI
04 May 2014
TL;DR: It is proved that MFCCs play a crucial role in speech emotion recognition and multiple kernel learning is presented, which outperforms state-of-the-art results and shows the effectiveness of the method.
Abstract: To enhance the recognition rate of speaker-independent speech emotion recognition, a combined feature selection and feature fusion method based on multiple kernel learning is presented. Firstly, multiple kernel learning is used to obtain sparse feature subsets. The features selected at least n times are recombined into another subset, named the n-subset; the optimal n is determined by 10 cross-validation experiments. Secondly, feature fusion is performed at the kernel level: not only is each kind of feature associated with a kernel, but the full feature set is also associated with a kernel, which was not considered in previous studies. All of the kernels are added together to obtain a combination kernel. The final recognition rate for seven emotions on the Berlin Database is 83.10%, which outperforms state-of-the-art results and shows the effectiveness of our method. It is also shown that MFCCs play a crucial role in speech emotion recognition.

33 citations
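The kernel-level fusion described above, one kernel per feature kind plus one extra kernel on the full feature set, can be sketched as follows. The feature blocks, their dimensions, and the kernel parameters are hypothetical.

```python
import numpy as np

def rbf_gram(X, gamma=0.5):
    sq = np.sum(X ** 2, axis=1)
    d2 = np.maximum(sq[:, None] + sq[None, :] - 2.0 * X @ X.T, 0.0)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(2)
n = 8
mfcc = rng.normal(size=(n, 13))      # hypothetical MFCC features
prosody = rng.normal(size=(n, 4))    # hypothetical prosodic features
full = np.hstack([mfcc, prosody])    # the full feature set also gets a kernel

# One kernel per feature kind plus one for the full set, added together.
K_fused = rbf_gram(mfcc) + rbf_gram(prosody) + rbf_gram(full)
```

A sum of valid kernels is again a valid kernel, so `K_fused` can be fed directly to a kernel classifier as a precomputed Gram matrix.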

Journal ArticleDOI
TL;DR: This paper proposes a simple but effective multiclass MKL method by a two-stage strategy, in which the first stage finds the kernel weights to combine the kernels, and the second stage trains a standard multiclass support vector machine (SVM).
Abstract: The success of kernel methods is very much dependent on the choice of kernels. Multiple kernel learning (MKL) aims at learning a combination of different kernels in order to better match the underlying problem, instead of using a single fixed kernel. In this paper, we propose a simple but effective multiclass MKL method with a two-stage strategy, in which the first stage finds the kernel weights to combine the kernels, and the second stage trains a standard multiclass support vector machine (SVM). Specifically, we first present an evaluation criterion named multiclass kernel polarization (MKP) to assess the quality of a kernel in the multiclass classification scenario, and then develop a heuristic rule to directly assign a weight to each kernel based on the quality of the individual kernel. MKP is a multiclass extension of kernel polarization, a universal kernel evaluation criterion for kernel design and learning. Comprehensive experiments are conducted on several UCI benchmark datasets, and the results demonstrate the effectiveness and efficiency of our approach.

33 citations
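A hedged sketch of the two-stage strategy for the binary case: stage one scores each base kernel by its kernel polarization and normalizes the scores into weights; stage two would then train a standard SVM on the combined precomputed kernel. The data and base kernels are illustrative, and the paper's MKP criterion is the multiclass extension of the binary polarization used here.

```python
import numpy as np

def polarization(K, y):
    # Kernel polarization for binary +/-1 labels: sum_ij y_i * y_j * K_ij.
    # For a positive semidefinite K this is y^T K y >= 0.
    return float(y @ K @ y)

rng = np.random.default_rng(3)
X = rng.normal(size=(10, 5))
y = np.where(rng.random(10) < 0.5, -1.0, 1.0)

sq = np.sum(X ** 2, axis=1)
d2 = np.maximum(sq[:, None] + sq[None, :] - 2.0 * X @ X.T, 0.0)
kernels = [X @ X.T, np.exp(-0.5 * d2)]   # linear and RBF base kernels

# Stage 1: heuristic weights from (clipped, normalized) polarization scores.
scores = np.array([max(polarization(K, y), 0.0) for K in kernels])
weights = scores / scores.sum()

# Stage 2 would train a standard SVM on this combined precomputed kernel.
K_combined = sum(w * K for w, K in zip(weights, kernels))
```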

Journal ArticleDOI
TL;DR: A Localized Multiple Kernel learning approach for Anomaly Detection (LMKAD) using OCC, where the weight for each kernel is assigned locally and the parameters of the gating function and one-class classifier are optimized simultaneously through a two-step optimization process.
Abstract: Multiple kernel learning has been well explored in the recent past and has exhibited promising outcomes for multiclass classification and regression tasks. In this paper, we present a multiple kernel learning approach for the one-class classification (OCC) task and employ it for anomaly detection. Recently, a basic multi-kernel approach has been proposed to solve the OCC problem, which is simply a convex combination of different kernels with equal weights. This paper proposes a Localized Multiple Kernel learning approach for Anomaly Detection (LMKAD) using OCC, where the weight for each kernel is assigned locally. The proposed LMKAD approach adapts the weight for each kernel using a gating function. The parameters of the gating function and the one-class classifier are optimized simultaneously through a two-step optimization process. We present empirical results on the performance of LMKAD on 25 benchmark datasets from various disciplines. This performance is evaluated against the existing Multiple Kernel Anomaly Detection (MKAD) algorithm and four other existing kernel-based one-class classifiers to showcase the credibility of our approach. LMKAD achieves significantly better Gmean scores while using fewer support vectors than MKAD. A Friedman test is also performed to verify the statistical significance of the results claimed in this paper.

32 citations
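A minimal sketch of the locally weighted kernel combination described above: a gating function produces per-sample kernel weights, and each base kernel entry is scaled by the gate values of both samples. The gating parameters are random here for illustration, whereas LMKAD optimizes them jointly with the one-class classifier.

```python
import numpy as np

def softmax_rows(Z):
    # Row-wise softmax: each row becomes a set of nonnegative weights summing to 1.
    E = np.exp(Z - Z.max(axis=1, keepdims=True))
    return E / E.sum(axis=1, keepdims=True)

rng = np.random.default_rng(4)
n, d = 7, 3
X = rng.normal(size=(n, d))

sq = np.sum(X ** 2, axis=1)
d2 = np.maximum(sq[:, None] + sq[None, :] - 2.0 * X @ X.T, 0.0)
K_lin, K_rbf = X @ X.T, np.exp(-0.5 * d2)   # two base kernels

# Gating function: per-sample kernel weights. V and b are random here;
# LMKAD learns them jointly with the one-class classifier.
V, b = rng.normal(size=(d, 2)), rng.normal(size=2)
G = softmax_rows(X @ V + b)                 # shape (n, 2); rows sum to 1

# Locally weighted combination: K_ij = sum_m G[i, m] * K_m[i, j] * G[j, m].
K_local = (G[:, [0]] * K_lin * G[:, [0]].T
           + G[:, [1]] * K_rbf * G[:, [1]].T)
```

Each term is the Hadamard product of a base kernel with a rank-one outer product of gate values, so the locally combined matrix is still positive semidefinite.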


Network Information
Related Topics (5)
- Convolutional neural network: 74.7K papers, 2M citations (89% related)
- Deep learning: 79.8K papers, 2.1M citations (89% related)
- Feature extraction: 111.8K papers, 2.1M citations (87% related)
- Feature (computer vision): 128.2K papers, 1.7M citations (87% related)
- Image segmentation: 79.6K papers, 1.8M citations (86% related)
Performance Metrics
No. of papers in the topic in previous years:

Year   Papers
2023   21
2022   44
2021   72
2020   101
2019   113
2018   114