Topic

Multiple kernel learning

About: Multiple kernel learning is a research topic. Over its lifetime, 1,630 publications have been published within this topic, receiving 56,082 citations.


Papers
Posted Content
TL;DR: In this article, the authors derive an upper bound on the local Rademacher complexity of $\ell_p$-norm multiple kernel learning, which yields a tighter excess risk bound than global approaches, and derive consequences regarding the excess loss, namely fast convergence rates of the order $O(n^{-\frac{\alpha}{1+\alpha}})$, where $\alpha$ is the minimum eigenvalue decay rate of the individual kernels.
Abstract: We derive an upper bound on the local Rademacher complexity of $\ell_p$-norm multiple kernel learning, which yields a tighter excess risk bound than global approaches. Previous local approaches analyzed the case $p=1$ only, while our analysis covers all cases $1\leq p\leq\infty$, assuming the different feature mappings corresponding to the different kernels to be uncorrelated. We also show a lower bound demonstrating that the bound is tight, and derive consequences regarding the excess loss, namely fast convergence rates of the order $O(n^{-\frac{\alpha}{1+\alpha}})$, where $\alpha$ is the minimum eigenvalue decay rate of the individual kernels.

50 citations
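A rough reading of the stated rate, with the polynomial eigenvalue-decay notation below assumed here rather than quoted from the paper:

    \lambda_j^{(m)} \;\lesssim\; j^{-\alpha} \ (\alpha \ge 1,\; m = 1,\dots,M)
    \quad\Longrightarrow\quad
    \text{excess risk} \;=\; O\!\bigl(n^{-\frac{\alpha}{1+\alpha}}\bigr),

which approaches the fast rate $n^{-1}$ as $\alpha \to \infty$ and matches the global rate $n^{-1/2}$ at $\alpha = 1$.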

Journal ArticleDOI
TL;DR: In recent years, several methods have been proposed to combine multiple kernels instead of using a single one, but these different kernels may correspond to using different notions of similarity or may...
Abstract: In recent years, several methods have been proposed to combine multiple kernels instead of using a single one. These different kernels may correspond to using different notions of similarity or may...

49 citations
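As background for how such kernel combinations are typically used (a minimal sketch, not taken from this paper; the data, kernel choices, and weights below are assumptions): two base kernels encoding different notions of similarity are fused by a fixed convex combination and passed to an SVM as a precomputed Gram matrix. MKL methods learn the weights eta instead of fixing them by hand.

    # Minimal MKL-style sketch: a hand-fixed convex combination of two base kernels.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.metrics.pairwise import linear_kernel, rbf_kernel
    from sklearn.svm import SVC

    X, y = make_classification(n_samples=200, n_features=20, random_state=0)

    # Two base kernels encoding different notions of similarity.
    kernels = [linear_kernel(X, X), rbf_kernel(X, X, gamma=0.1)]
    eta = np.array([0.5, 0.5])                      # combination weights (sum to 1)
    K = sum(w * Km for w, Km in zip(eta, kernels))  # fused Gram matrix

    clf = SVC(kernel="precomputed").fit(K, y)
    print("training accuracy:", clf.score(K, y))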

Proceedings ArticleDOI
20 Mar 2016
TL;DR: A kernel-based nonlinear connectivity model is proposed, from which topology-revealing PCs are obtained, and a data-driven approach is advocated to learn the combination of multiple kernel functions that optimizes the data fit.
Abstract: Partial correlations (PCs) of functional magnetic resonance imaging (fMRI) time series play a principal role in revealing connectivity of brain networks. To explore nonlinear behavior of the blood-oxygen-level dependent signal, the present work postulates a kernel-based nonlinear connectivity model based on which it obtains topology revealing PCs. Instead of relying on a single predefined kernel, a data-driven approach is advocated to learn the combination of multiple kernel functions that optimizes the data fit. Synthetically generated data based on both a dynamic causal and a linear model are used to validate the proposed approach in resting-state fMRI scenarios, highlighting the gains in edge detection performance when compared with the popular linear PC method. Tests on real fMRI data demonstrate that connectivity patterns revealed by linear and nonlinear models are different.

49 citations
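For orientation, a minimal sketch of the linear baseline only, on synthetic data (the paper's kernel-based nonlinear model and learned kernel combination are not reproduced here): partial correlations are read off the precision matrix of the time series, and network edges are proposed by thresholding their magnitude.

    # Linear partial correlations from simulated time series (stand-in for fMRI).
    import numpy as np

    rng = np.random.default_rng(0)
    T, N = 300, 10                      # time points, brain regions
    X = rng.standard_normal((T, N))     # synthetic stand-in for fMRI time series

    Theta = np.linalg.inv(np.cov(X, rowvar=False))   # precision matrix
    d = np.sqrt(np.diag(Theta))
    partial_corr = -Theta / np.outer(d, d)           # PC_ij = -Theta_ij / sqrt(Theta_ii * Theta_jj)
    np.fill_diagonal(partial_corr, 1.0)

    # Propose edges where the partial correlation magnitude exceeds a threshold.
    edges = np.argwhere(np.triu(np.abs(partial_corr) > 0.15, k=1))
    print(edges)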

Journal ArticleDOI
TL;DR: A new regression method for continuous estimation of the intensity of facial behavior interpretation, called Doubly Sparse Relevance Vector Machine (DSRVM), which enforces double sparsity by jointly selecting the most relevant training examples and the most important kernels relevant for interpretation of observed facial expressions.
Abstract: Certain inner feelings and physiological states like pain are subjective states that cannot be directly measured, but can be estimated from spontaneous facial expressions. Since they are typically characterized by subtle movements of facial parts, analysis of the facial details is required. To this end, we formulate a new regression method for continuous estimation of the intensity of facial behavior interpretation, called Doubly Sparse Relevance Vector Machine (DSRVM). DSRVM enforces double sparsity by jointly selecting the most relevant training examples (a.k.a. relevance vectors) and the most important kernels associated with facial parts relevant for interpretation of observed facial expressions. This advances prior work on multi-kernel learning, where sparsity of relevant kernels is typically ignored. Empirical evaluation on the challenging Shoulder Pain videos and the benchmark DISFA and SEMAINE datasets demonstrates that DSRVM outperforms competing approaches with a multi-fold reduction of running times in training and testing.

49 citations
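An illustrative stand-in for the double-sparsity idea, not the paper's Bayesian DSRVM: alternating L1-penalized regressions that keep only a few training examples and a few per-part kernels in a multi-kernel expansion. The data, kernels, and regularization strengths below are all assumptions made for the sketch.

    # Double-sparsity sketch: select examples (w) and kernels (beta) by alternating Lasso fits
    # for a model f(x) = sum_m beta_m * sum_i w_i * k_m(x_i, x).
    import numpy as np
    from sklearn.linear_model import Lasso
    from sklearn.metrics.pairwise import rbf_kernel

    rng = np.random.default_rng(0)
    X = rng.standard_normal((150, 5))                        # stand-in for per-frame facial features
    y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(150)     # stand-in intensity labels

    # One kernel per "facial part" (here simply one RBF kernel per feature column).
    Ks = [rbf_kernel(X[:, [j]], X[:, [j]]) for j in range(X.shape[1])]

    beta = np.ones(len(Ks)) / len(Ks)   # kernel weights
    w = np.zeros(len(X))                # example weights
    for _ in range(10):
        Kb = sum(b * K for b, K in zip(beta, Ks))
        w = Lasso(alpha=0.01, max_iter=5000).fit(Kb, y).coef_                    # sparse over examples
        Z = np.column_stack([K @ w for K in Ks])
        beta = Lasso(alpha=0.01, positive=True, max_iter=5000).fit(Z, y).coef_   # sparse over kernels

    print("examples kept:", np.count_nonzero(w), "| kernels kept:", np.count_nonzero(beta))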

Journal ArticleDOI
TL;DR: To fuse various NGFs with different physical properties and discriminability, multiple kernel learning (MKL) is utilized to learn the combination weights, rather than assigning the same weight to all features as is done by traditional support vector machines (SVMs).
Abstract: Compared with high-resolution synthetic aperture radar (SAR) images, a moderate-resolution SAR image offers a wider swath, which is more suitable for maritime ship surveillance. Taking into account the amount of information in a moderate-resolution SAR image and the stability of feature extraction, we propose naive geometric features (NGFs) for ship classification. In contrast to strictly defined geometric features (SGFs), the extraction of NGFs is much simpler and more efficient. More importantly, NGFs are sufficient to reveal the essential differences between different types of ships for classification. To fuse various NGFs with different physical properties and discriminability, multiple kernel learning (MKL) is utilized to learn the combination weights, rather than assigning the same weight to all features as is done by traditional support vector machines (SVMs). Comprehensive experiments validate that: 1) the proposed NGF-combined MKL outperforms NGF-combined SVM by 3.4% and comes very close to SGF-combined MKL; and 2) for classifying ships in moderate-resolution SAR images, NGFs are more feasible than scattering features.

49 citations
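A minimal sketch of the pipeline described above; the feature definitions, toy masks, and fixed fusion weights are assumptions rather than the paper's NGFs or its MKL solver: crude geometric descriptors are extracted from a binary ship mask, one RBF kernel is built per feature, and the kernels are fused with uniform weights for a precomputed-kernel SVM (a real MKL solver would learn those weights).

    # Naive geometric features from binary masks + fixed-weight kernel fusion for an SVM.
    import numpy as np
    from sklearn.metrics.pairwise import rbf_kernel
    from sklearn.svm import SVC

    def naive_geometric_features(mask):
        """Bounding-box length, width, aspect ratio, and pixel area of a binary mask."""
        rows, cols = np.nonzero(mask)
        length = rows.max() - rows.min() + 1
        width = cols.max() - cols.min() + 1
        return np.array([length, width, length / width, mask.sum()], dtype=float)

    # Toy dataset: rectangles of two size classes standing in for ship chips.
    rng = np.random.default_rng(0)
    masks, labels = [], []
    for label, (lo, hi) in enumerate([(5, 10), (15, 25)]):
        for _ in range(40):
            m = np.zeros((40, 40), dtype=int)
            L, W = rng.integers(lo, hi), rng.integers(3, 6)
            m[5:5 + L, 5:5 + W] = 1
            masks.append(m)
            labels.append(label)
    F = np.vstack([naive_geometric_features(m) for m in masks])
    y = np.array(labels)

    # One RBF kernel per feature, fused with uniform weights (MKL would learn these).
    Ks = [rbf_kernel(F[:, [j]], F[:, [j]], gamma=0.05) for j in range(F.shape[1])]
    K = sum(Ks) / len(Ks)
    print("training accuracy:", SVC(kernel="precomputed").fit(K, y).score(K, y))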


Network Information
Related Topics (5)
Convolutional neural network: 74.7K papers, 2M citations, 89% related
Deep learning: 79.8K papers, 2.1M citations, 89% related
Feature extraction: 111.8K papers, 2.1M citations, 87% related
Feature (computer vision): 128.2K papers, 1.7M citations, 87% related
Image segmentation: 79.6K papers, 1.8M citations, 86% related
Performance Metrics
No. of papers in the topic in previous years:

Year    Papers
2023    21
2022    44
2021    72
2020    101
2019    113
2018    114