Topic

Multiple kernel learning

About: Multiple kernel learning is a research topic concerned with learning an optimal combination of several base kernels rather than committing to a single fixed kernel. Over the lifetime, 1630 publications have been published within this topic, receiving 56082 citations.


Papers
Book Chapter
20 Sep 2011
TL;DR: A concept detection system that combines feature representations for different spatial extents using multiple kernel learning is proposed and it is shown experimentally on a large set of 101 concepts and on the PASCAL Visual Object Classes Challenge that these feature representations are complementary.
Abstract: State-of-the-art systems for visual concept detection typically rely on the Bag-of-Visual-Words representation. While several aspects of this representation have been investigated, such as keypoint sampling strategy, vocabulary size, projection method, weighting scheme or the integration of color, the impact of the spatial extents of local SIFT descriptors has not been studied in previous work. In this paper, the effect of different spatial extents in a state-of-the-art system for visual concept detection is investigated. Based on the observation that SIFT descriptors with different spatial extents yield large performance differences, we propose a concept detection system that combines feature representations for different spatial extents using multiple kernel learning. It is shown experimentally on a large set of 101 concepts from the Mediamill Challenge and on the PASCAL Visual Object Classes Challenge that these feature representations are complementary: Superior performance can be achieved on both test sets using the proposed system.
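The combination step described above, in its standard linear-MKL form, can be sketched in a few lines of NumPy. The kernel weights are fixed here for illustration, whereas the paper learns them via multiple kernel learning, and the two feature channels are hypothetical stand-ins for SIFT representations with different spatial extents:

```python
import numpy as np

def combine_kernels(kernels, weights):
    """Linearly combine base kernel matrices: K = sum_m beta_m * K_m.

    This is the standard linear MKL combination; the paper learns the
    weights jointly with the classifier, which is not shown here.
    """
    weights = np.asarray(weights, dtype=float)
    assert len(kernels) == len(weights)
    return sum(w * Km for w, Km in zip(weights, kernels))

# Toy example: two feature channels over 3 samples (hypothetical data).
rng = np.random.default_rng(0)
X1 = rng.normal(size=(3, 4))   # e.g. SIFT with one spatial extent
X2 = rng.normal(size=(3, 6))   # e.g. SIFT with another spatial extent
K1, K2 = X1 @ X1.T, X2 @ X2.T  # one linear base kernel per channel
K = combine_kernels([K1, K2], [0.7, 0.3])
```

An SVM trained on the combined matrix K through a precomputed-kernel interface would then act as the concept detector.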

4 citations

Journal Article
TL;DR: This paper presents a simple multiple kernel learning framework for complicated data modeling, in which randomized multi-scale Gaussian kernels are employed as base kernels and an l1-norm regularizer is employed.
Abstract: This paper presents a simple multiple kernel learning framework for complicated data modeling, where randomized multi-scale Gaussian kernels are employed as base kernels and a l1-norm regularizer i...
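Although the abstract is truncated here, the construction of randomized multi-scale Gaussian base kernels it describes can be sketched as follows; this is pure NumPy on hypothetical data, and the bandwidth-sampling scheme is an assumption, not the paper's exact recipe:

```python
import numpy as np

def rbf_kernel(X, gamma):
    # Gaussian (RBF) kernel: K_ij = exp(-gamma * ||x_i - x_j||^2)
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-gamma * np.maximum(d2, 0.0))

# Randomized multi-scale base kernels: draw bandwidths across several
# orders of magnitude, one base kernel per sampled scale.
rng = np.random.default_rng(42)
X = rng.normal(size=(5, 3))
gammas = 10.0 ** rng.uniform(-2, 2, size=4)
base_kernels = [rbf_kernel(X, g) for g in gammas]
```

An l1-norm regularizer on the combination weights would then select a sparse subset of these scales.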

4 citations

Journal Article
TL;DR: web-rMKL is a web server that provides integrative dimensionality reduction with subsequent clustering of samples based on data from multiple inputs, such as genetic or histone modification data.
Abstract: More and more affordable high-throughput techniques for measuring molecular features of biomedical samples have led to a huge increase in availability and size of different types of multi-omic datasets, containing, for example, genetic or histone modification data. Due to the multi-view characteristic of the data, established approaches for exploratory analysis are not directly applicable. Here we present web-rMKL, a web server that provides an integrative dimensionality reduction with subsequent clustering of samples based on data from multiple inputs. The underlying machine learning method rMKL-LPP performed best for clinical enrichment in a recent benchmark of state-of-the-art multi-view clustering algorithms. The method was introduced for a multi-omic cancer subtype discovery setting, however, it is not limited to this application scenario as exemplified by a presented use case for stem cell differentiation. web-rMKL offers an intuitive interface for uploading data and setting the parameters. rMKL-LPP runs on the back end and the user may receive notifications once the results are available. We also introduce a preprocessing tool for generating kernel matrices from tables containing numerical feature values. This program can be used to generate admissible input if no precomputed kernel matrices are available. The web server is freely available at web-rMKL.org.
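The preprocessing step mentioned at the end, turning a numeric feature table into an admissible kernel matrix, can be illustrated with a minimal sketch. The actual web-rMKL tool's kernel choices are not specified above, so a cosine-normalized linear kernel is assumed here:

```python
import numpy as np

def table_to_kernel(X):
    """Turn a numeric feature table (samples x features) into a
    normalized linear kernel matrix; a minimal stand-in for a kernel
    preprocessing tool, not web-rMKL's actual implementation.
    """
    K = X @ X.T                     # linear kernel on the rows
    d = np.sqrt(np.diag(K))
    d[d == 0] = 1.0                 # guard against zero-norm samples
    return K / np.outer(d, d)       # cosine-normalize: diagonal becomes 1

rng = np.random.default_rng(1)
table = rng.normal(size=(6, 10))    # hypothetical per-sample feature table
K = table_to_kernel(table)
```

One such matrix per data type (one per omic layer, say) would form the multi-view input to the downstream MKL clustering.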

4 citations

Patent
26 Mar 2014
TL;DR: In this paper, a self-adaptive parameter multiple kernel learning classification method based on large-scale data is proposed. The multiple kernel learning problem is converted into a positive semidefinite linear programming problem, so the method scales to large datasets, learns the C parameter adaptively, improves solving efficiency, and avoids the complicated cross-validation process.
Abstract: The invention discloses a self-adaptive parameter multiple kernel learning classification method based on large-scale data. The method includes the following steps: a multiple kernel learning kernel function is selected; a dataset is loaded and randomly divided into a training dataset and a testing dataset; a kernel matrix is computed for each one-dimensional feature of the training dataset, yielding a kernel matrix set; the identity matrix is prepended as the first item of the kernel matrix set to form a new kernel matrix set, and a weight parameter set for the new kernel matrix set is solved, in which the first item is the reciprocal of the regularized penalty factor parameter C and the remaining items are the weight parameters of the individual base kernels; a classification model is obtained by solving a semi-infinite linear programming problem; and the classification result is obtained by applying the classification model to the testing dataset. With this method, the multiple kernel learning problem is converted into a positive semidefinite linear programming optimization problem, so large-scale data can be handled; the method adaptively learns the C parameter, improves solving efficiency, and avoids the complicated cross-validation process.
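The kernel-set construction in the steps above can be sketched as follows. This covers only the per-dimension base kernels and the prepended identity matrix; the semi-infinite linear programming step that solves for the weights (whose first entry equals 1/C) is not reproduced, and the use of linear kernels per dimension is an assumption:

```python
import numpy as np

def build_kernel_set(X):
    """Build one linear base kernel per feature dimension and prepend
    the identity matrix, following the construction described in the
    patent; solving for the weight parameter set is not shown.
    """
    n, d = X.shape
    kernels = [np.eye(n)]            # first item: identity matrix
    for j in range(d):
        col = X[:, j:j + 1]
        kernels.append(col @ col.T)  # rank-1 kernel from one dimension
    return kernels

rng = np.random.default_rng(7)
X = rng.normal(size=(4, 3))          # hypothetical training data
kernel_set = build_kernel_set(X)
# With learned weights beta, beta[0] would equal 1/C per the patent.
```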

4 citations

Proceedings Article
02 May 2016
TL;DR: This paper proposes a new non-linear MKL method that utilizes nuclear norm regularization and leads to a convex optimization problem; the method equals or outperforms state-of-the-art MKL methods on all evaluated data sets.
Abstract: Multiple Kernel Learning (MKL) methods are known for their effectiveness in solving classification and regression problems involving multimodal data. Many MKL approaches use a linear combination of base kernels, resulting in somewhat limited feature representations. Several non-linear MKL formulations were proposed recently. They provide much higher-dimensional feature spaces and, therefore, richer representations. However, these methods often lead to non-convex optimization and to an intractable number of optimization parameters. In this paper, we propose a new non-linear MKL method that utilizes nuclear norm regularization and leads to a convex optimization problem. The proposed Nuclear-norm-Constrained MKL (NuC-MKL) algorithm converges faster and requires a smaller number of calls to an SVM solver, as compared to other competing methods. Moreover, the number of model support vectors in our approach is usually much smaller than in other methods. This suggests that our algorithm is more resilient to overfitting. We test our algorithm on several known benchmarks and show that it equals or outperforms the state-of-the-art MKL methods on all these data sets.
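The nuclear norm at the heart of NuC-MKL is simply the sum of a matrix's singular values; a minimal sketch of that quantity (the constrained MKL optimization itself is beyond this snippet):

```python
import numpy as np

def nuclear_norm(M):
    # Nuclear norm = sum of singular values. NuC-MKL constrains this
    # quantity in its formulation to obtain a convex problem; only the
    # norm itself is computed here.
    return np.linalg.svd(M, compute_uv=False).sum()

M = np.diag([3.0, 1.0, 0.5])
print(nuclear_norm(M))   # → 4.5
```

For a diagonal matrix with non-negative entries the singular values are just those entries, which makes the example easy to check by hand.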

4 citations


Network Information
Related Topics (5)

Convolutional neural network: 74.7K papers, 2M citations, 89% related
Deep learning: 79.8K papers, 2.1M citations, 89% related
Feature extraction: 111.8K papers, 2.1M citations, 87% related
Feature (computer vision): 128.2K papers, 1.7M citations, 87% related
Image segmentation: 79.6K papers, 1.8M citations, 86% related
Performance Metrics
No. of papers in the topic in previous years

Year    Papers
2023    21
2022    44
2021    72
2020    101
2019    113
2018    114