
Multiple kernel learning

About: Multiple kernel learning (MKL) is a research topic concerned with learning a weighted combination of several base kernels, rather than committing to a single fixed kernel, within kernel-based learners such as support vector machines. Over its lifetime, 1,630 publications have appeared within this topic, receiving 56,082 citations.


Papers
Journal ArticleDOI
TL;DR: Presents a classification algorithm that minimizes brain-signal variation across different subjects and sessions, using linear programming support vector machines (LP-SVM) and their extension to multiple kernel learning, and relates these methods to variation in brain-computer interfaces (BCI).

23 citations
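
For readers unfamiliar with the LP-SVM family this paper builds on, here is a minimal sketch of a linear programming support vector machine: an ℓ1-regularized kernel expansion trained by a linear program. This is a generic illustration under a simplified formulation, not the authors' BCI-specific algorithm; the function names and the use of scipy's linprog are our assumptions.

```python
# A minimal LP-SVM sketch (generic illustration, not the paper's algorithm).
# Assumes binary labels y in {-1, +1} and a precomputed kernel matrix K.
import numpy as np
from scipy.optimize import linprog

def lp_svm_fit(K, y, C=1.0):
    """LP-SVM: min ||a||_1 + C*sum(xi)
       s.t. y_i * (sum_j a_j y_j K[i, j] + b) >= 1 - xi_i,  a, xi >= 0."""
    n = len(y)
    # Decision variables, stacked: a (n, >= 0), b (free), xi (n, >= 0).
    c = np.concatenate([np.ones(n), [0.0], C * np.ones(n)])
    G = -(y[:, None] * y[None, :]) * K              # rows: -y_i * y_j * K[i, j]
    A_ub = np.hstack([G, -y[:, None].astype(float), -np.eye(n)])
    b_ub = -np.ones(n)
    bounds = [(0, None)] * n + [(None, None)] + [(0, None)] * n
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[:n], res.x[n]                      # expansion weights a, bias b

def lp_svm_predict(K_test, y_train, a, b):
    """K_test[i, j] = kernel(test point i, training point j)."""
    return np.sign(K_test @ (a * y_train) + b)
```

The ℓ1 penalty on the expansion weights is what makes the problem a linear program and drives many weights to exactly zero, which is also the property that makes LP-SVMs a natural stepping stone to sparse multiple kernel learning.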

Proceedings ArticleDOI
03 Dec 2010
TL;DR: This paper proposes new local color descriptors that unify edge and color feature extraction within the “Bag of Words” model, together with a combination strategy based on ℓ1 Multiple Kernel Learning (MKL) that simultaneously learns individual kernel parameters and the kernel combination.
Abstract: Recently, increasing interest has been devoted to improving image categorization performance by combining multiple descriptors. However, very few approaches combine features that capture complementary aspects, or evaluate the performance gain on realistic databases. In this paper, we tackle the problem of combining different feature types (edge and color) and evaluate the performance gain on the very challenging VOC 2009 benchmark. Our contribution is three-fold. First, we propose new local color descriptors, unifying edge and color feature extraction within the “Bag of Words” model. Second, we improve the Spatial Pyramid Matching (SPM) scheme to better incorporate spatial information into the similarity measurement. Last but not least, we propose a new combination strategy based on ℓ1 Multiple Kernel Learning (MKL) that simultaneously learns individual kernel parameters and the kernel combination. Experiments confirm the relevance of the proposed approach, which outperforms baseline combination methods while remaining computationally efficient.

23 citations
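
To make the ℓ1-MKL idea concrete, below is a small sketch in the spirit of SimpleMKL-style wrapper algorithms: kernel weights are kept on the probability simplex (an ℓ1 constraint that induces sparsity) and updated by projected gradient steps around a standard SVM solver. This is a generic illustration under our assumptions, not the combination strategy of the paper above; project_simplex and mkl_fit are hypothetical helper names.

```python
# A sketch of l1-constrained MKL in the spirit of SimpleMKL-style wrappers,
# NOT the exact algorithm of the paper above. Assumes binary y in {-1, +1}
# and a list of precomputed base Gram matrices.
import numpy as np
from sklearn.svm import SVC

def project_simplex(v):
    """Euclidean projection onto the probability simplex (l1 = 1, v >= 0)."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > css - 1)[0][-1]
    return np.maximum(v - (css[rho] - 1) / (rho + 1.0), 0.0)

def mkl_fit(kernels, y, C=1.0, steps=50, lr=0.1):
    """Alternate SVM training with projected-gradient updates of the weights."""
    d = np.full(len(kernels), 1.0 / len(kernels))   # start from uniform weights
    for _ in range(steps):
        K = sum(w * Km for w, Km in zip(d, kernels))
        svm = SVC(C=C, kernel="precomputed").fit(K, y)
        sv = svm.support_
        beta = svm.dual_coef_.ravel()               # beta_i = alpha_i * y_i
        # Gradient of the SVM dual objective w.r.t. each kernel weight:
        # dJ/dd_m = -0.5 * beta^T K_m[sv, sv] beta
        grad = np.array([-0.5 * beta @ Km[np.ix_(sv, sv)] @ beta
                         for Km in kernels])
        d = project_simplex(d - lr * grad)          # l1 simplex keeps d sparse
    return d, svm
```

In a descriptor-combination setting like the paper's, the base kernels would be Gram matrices computed from the different feature types (e.g., edge and color descriptors at several bandwidths), and the learned sparse weights d indicate which descriptors actually contribute.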

Journal ArticleDOI
01 Jul 2018
TL;DR: This work extends the state‐of‐the‐art kernel learning method by developing kernels for peak interactions to combine with kernels for peaks through multiple kernel learning (MKL), and formulates a sparse interaction model for metabolite peaks, which is computationally light and interpretable for fingerprint prediction.
Abstract: Motivation: Recent success in metabolite identification from tandem mass spectra has been led by machine learning, which proceeds in two stages: mapping mass spectra to molecular fingerprint vectors, and then retrieving candidate molecules from the database. In the first stage, i.e. fingerprint prediction, spectrum peaks are the features, and considering their interactions would be reasonable for more accurate identification of unknown metabolites. Existing approaches to fingerprint prediction are based only on individual peaks in the spectra, without explicitly considering peak interactions. Also, the current cutting-edge method is based on kernels, which are computationally heavy and difficult to interpret. Results: We propose two learning models that incorporate peak interactions for fingerprint prediction. First, we extend the state-of-the-art kernel learning method by developing kernels for peak interactions, combined with kernels for peaks through multiple kernel learning (MKL). Second, we formulate a sparse interaction model for metabolite peaks, which we call SIMPLE, that is computationally light and interpretable for fingerprint prediction. The formulation of SIMPLE is convex and guarantees a globally optimal solution, for which we develop an alternating direction method of multipliers (ADMM) algorithm. Experiments using the MassBank dataset show that both models achieve prediction accuracy comparable to the current top-performing kernel method. Furthermore, SIMPLE clearly reveals individual peaks and peak interactions that contribute to enhancing fingerprint prediction performance. Availability and implementation: The code is available at http://mamitsukalab.org/tools/SIMPLE/.

23 citations
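
Since SIMPLE's convexity is exploited through ADMM, the following generic ADMM solver for an ℓ1-regularized least-squares (lasso) problem illustrates the alternating-direction machinery involved. It follows the standard scaled-form updates of Boyd et al. (2011) and is not the authors' formulation for peak interactions.

```python
# A generic ADMM solver for the lasso, min_x 0.5*||Ax - b||^2 + lam*||x||_1,
# following the scaled-form updates of Boyd et al. (2011). This illustrates
# the ADMM machinery only; it is not the SIMPLE model itself.
import numpy as np

def admm_lasso(A, b, lam, rho=1.0, iters=200):
    n = A.shape[1]
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
    # The x-update solves the same ridge system every iteration, so cache
    # a Cholesky factorization of (A^T A + rho * I).
    L = np.linalg.cholesky(A.T @ A + rho * np.eye(n))
    Atb = A.T @ b
    for _ in range(iters):
        # x-update: ridge-regularized least squares.
        x = np.linalg.solve(L.T, np.linalg.solve(L, Atb + rho * (z - u)))
        # z-update: soft-thresholding, the proximal operator of the l1 norm.
        v = x + u
        z = np.sign(v) * np.maximum(np.abs(v) - lam / rho, 0.0)
        # Scaled dual update.
        u += x - z
    return z    # z is the sparse iterate
```

The soft-thresholding step is what produces exact zeros, which is the mechanism behind the interpretability the abstract emphasizes: only the peaks and interactions with nonzero coefficients survive.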

Proceedings Article
26 Jun 2012
TL;DR: Casts multiple kernel learning from noisy labels as a stochastic programming problem, presents a minimax formulation, and develops an efficient algorithm for solving the related convex-concave optimization problem with a fast convergence rate of O(1/T), where T is the number of iterations.
Abstract: We study the problem of multiple kernel learning from noisy labels. This is in contrast to most of the previous studies on multiple kernel learning that mainly focus on developing efficient algorithms and assume perfectly labeled training examples. Directly applying the existing multiple kernel learning algorithms to noisily labeled examples often leads to suboptimal performance due to the incorrect class assignments. We address this challenge by casting multiple kernel learning from noisy labels into a stochastic programming problem, and presenting a minimax formulation. We develop an efficient algorithm for solving the related convex-concave optimization problem with a fast convergence rate of O(1/T) where T is the number of iterations. Empirical studies on UCI data sets verify both the effectiveness and the efficiency of the proposed algorithm.

23 citations
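
As a concrete picture of the convex-concave optimization this paper addresses, the sketch below runs the extragradient method, a classic scheme that attains the O(1/T) rate on smooth saddle-point problems, on a toy bilinear minimax problem over two probability simplexes. It illustrates only the optimization template; the paper's actual formulation, which couples kernel weights with noisy-label variables, is more involved.

```python
# A toy convex-concave saddle-point problem, min_x max_y x^T A y over two
# probability simplexes, solved with the extragradient method. Illustrative
# only; not the paper's noisy-label MKL formulation.
import numpy as np

def project_simplex(v):
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > css - 1)[0][-1]
    return np.maximum(v - (css[rho] - 1) / (rho + 1.0), 0.0)

def extragradient(A, steps=500, lr=0.1):
    m, n = A.shape
    x, y = np.full(m, 1.0 / m), np.full(n, 1.0 / n)
    x_sum, y_sum = np.zeros(m), np.zeros(n)
    for _ in range(steps):
        # Extrapolation half-step using the current gradients.
        xh = project_simplex(x - lr * (A @ y))     # descent for the min player
        yh = project_simplex(y + lr * (A.T @ x))   # ascent for the max player
        # Full step using gradients evaluated at the half-step point.
        x = project_simplex(x - lr * (A @ yh))
        y = project_simplex(y + lr * (A.T @ xh))
        x_sum += x
        y_sum += y
    return x_sum / steps, y_sum / steps            # averaged iterates converge
```

The averaged iterates, rather than the last ones, are what carry the O(1/T) guarantee for convex-concave problems, which mirrors the kind of rate analysis the abstract cites.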

Proceedings Article
14 Jun 2011
TL;DR: In this paper, two generalization error bounds for multiple kernel learning (MKL) are proposed, one of which is a Rademacher complexity bound that is additive in the (logarithmic) kernel complexity and the margin term.
Abstract: We propose two new generalization error bounds for multiple kernel learning (MKL). First, using the bound of Srebro and Ben-David (2006) as a starting point, we derive a new version which uses a simple counting argument for the choice of kernels in order to generate a tighter bound when 1-norm regularization (sparsity) is imposed in the kernel learning problem. The second bound is a Rademacher complexity bound which is additive in the (logarithmic) kernel complexity and the margin term. This dependence is superior to all previously published Rademacher bounds for learning a convex combination of kernels, including the recent bound of Cortes et al. (2010), which exhibits a multiplicative interaction. We illustrate the tightness of our bounds with simulations.

23 citations
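
For orientation, the standard margin-based Rademacher template that MKL bounds of this kind refine looks as follows. This is a schematic, not the paper's exact statement; the paper's contribution is that the (logarithmic) kernel-complexity term enters additively with the margin term rather than multiplying 1/γ.

```latex
% Schematic margin-based Rademacher generalization bound (standard template,
% not this paper's exact statement). With probability at least 1 - \delta
% over an i.i.d. sample of size n, every f in the class H of convex
% combinations of M base kernels satisfies
\[
  \Pr\bigl[\, y f(x) \le 0 \,\bigr]
  \;\le\;
  \widehat{\operatorname{err}}_\gamma(f)
  \;+\; \frac{2}{\gamma}\,\widehat{\mathfrak{R}}_n(H)
  \;+\; 3\sqrt{\frac{\ln(2/\delta)}{2n}} .
\]
% For l1-regularized MKL the kernel-complexity contribution to the Rademacher
% term typically grows only like \sqrt{\log M}; the paper's second bound makes
% this log M term and the margin term interact additively rather than
% multiplicatively.
```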


Network Information
Related Topics (5)

Convolutional neural network: 74.7K papers, 2M citations, 89% related
Deep learning: 79.8K papers, 2.1M citations, 89% related
Feature extraction: 111.8K papers, 2.1M citations, 87% related
Feature (computer vision): 128.2K papers, 1.7M citations, 87% related
Image segmentation: 79.6K papers, 1.8M citations, 86% related
Performance Metrics
No. of papers in the topic in previous years:

Year    Papers
2023    21
2022    44
2021    72
2020    101
2019    113
2018    114