
Multiple kernel learning

About: Multiple kernel learning is a research topic. Over its lifetime, 1,630 publications have been published within this topic, receiving 56,082 citations.


Papers
Proceedings ArticleDOI
01 Sep 2013
TL;DR: In the proposed approach, pairwise similarity constraints are used to adjust the weights of the combined kernels and simultaneously learn an appropriate distance metric; the method is shown to outperform some recently introduced semi-supervised metric learning approaches.
Abstract: Distance metrics are widely used in various machine learning and pattern recognition algorithms. A main issue in these algorithms is choosing a proper distance metric. In recent years, learning an appropriate distance metric has become a very active research field. In the kernelised version of distance metric learning algorithms, the data points are implicitly mapped into a higher-dimensional feature space and the learning process is performed in the resulting feature space. The performance of kernel-based methods heavily depends on the chosen kernel function, so selecting an appropriate kernel function and/or tuning its parameter(s) poses a significant challenge in such methods. Multiple kernel learning (MKL) addresses this problem by learning a linear combination of a number of predefined kernels. In this paper, we formulate the MKL problem in a semi-supervised metric learning framework. In the proposed approach, pairwise similarity constraints are used to adjust the weights of the combined kernels and simultaneously learn the appropriate distance metric. Using both synthetic and real-world datasets, we show that the proposed method outperforms some recently introduced semi-supervised metric learning approaches.
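
As a rough illustration of the kernel combination at the heart of this formulation, the sketch below builds a convex combination of predefined base kernels and derives the induced kernelised distance. The base kernels, weights, and data are illustrative placeholders, not taken from the paper.

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel, polynomial_kernel, linear_kernel

def combined_kernel(X, Y, weights):
    # Convex combination of predefined base kernels, as in standard MKL
    # formulations; the particular base kernels here are arbitrary choices.
    base_kernels = [
        linear_kernel(X, Y),
        polynomial_kernel(X, Y, degree=2),
        rbf_kernel(X, Y, gamma=0.5),
    ]
    assert len(weights) == len(base_kernels)
    return sum(w * K for w, K in zip(weights, base_kernels))

def kernel_sq_distance(K, i, j):
    # Squared distance in the implicit feature space:
    # d^2(x_i, x_j) = k(x_i, x_i) + k(x_j, x_j) - 2 k(x_i, x_j)
    return K[i, i] + K[j, j] - 2.0 * K[i, j]

X = np.random.default_rng(0).normal(size=(10, 4))
K = combined_kernel(X, X, weights=[0.2, 0.3, 0.5])
print(kernel_sq_distance(K, 0, 1))
```

In the paper's setting, the weights would be adjusted from pairwise similarity constraints rather than fixed by hand as above.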

3 citations

Posted Content
TL;DR: An effective strategy is proposed to select a diverse subset of the pre-specified kernels as representative kernels, and an efficient optimization method is developed to reduce the time complexity of optimizing the kernel combination weights.
Abstract: To cluster data that are not linearly separable in the original feature space, $k$-means clustering was extended to a kernel version. However, the performance of kernel $k$-means clustering largely depends on the choice of kernel function. To mitigate this problem, multiple kernel learning has been introduced into $k$-means clustering to obtain an optimal kernel combination for clustering. Despite the success of multiple kernel $k$-means clustering in various scenarios, few of the existing works update the combination coefficients based on the diversity of the kernels; as a result, the selected kernels can be highly redundant, which degrades clustering performance and efficiency. In this paper, we propose a simple but efficient strategy that selects a diverse subset of the pre-specified kernels as the representative kernels, and then incorporates this subset selection process into the framework of multiple kernel $k$-means clustering. The representative kernels are indicated by significant combination weights. Due to the non-convexity of the resulting objective function, we develop an alternating minimization method that alternately optimizes the combination coefficients of the selected kernels and the cluster memberships. We evaluate the proposed approach on several benchmark and real-world datasets, and the experimental results demonstrate its competitiveness in comparison with state-of-the-art methods.
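
A minimal sketch of the alternating scheme the abstract describes follows, under simplifying assumptions: base kernels are supplied as precomputed Gram matrices, and the weight update is a heuristic inverse-scatter rule standing in for the paper's diversity-regularised objective.

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel

def kernel_kmeans_assign(K, labels, n_clusters):
    # Squared RKHS distance from each point to each cluster mean:
    # K_ii - (2/|C|) sum_{j in C} K_ij + (1/|C|^2) sum_{j,l in C} K_jl
    n = K.shape[0]
    dist = np.full((n, n_clusters), np.inf)
    for c in range(n_clusters):
        idx = np.where(labels == c)[0]
        if len(idx) == 0:
            continue
        dist[:, c] = (np.diag(K)
                      - 2.0 * K[:, idx].mean(axis=1)
                      + K[np.ix_(idx, idx)].mean())
    return dist.argmin(axis=1)

def multiple_kernel_kmeans(kernels, n_clusters, n_iter=10, seed=0):
    rng = np.random.default_rng(seed)
    n = kernels[0].shape[0]
    w = np.full(len(kernels), 1.0 / len(kernels))   # uniform initial weights
    labels = rng.integers(n_clusters, size=n)       # random initial clusters
    for _ in range(n_iter):
        # Step 1: cluster with the current combined kernel.
        K = sum(wi * Ki for wi, Ki in zip(w, kernels))
        labels = kernel_kmeans_assign(K, labels, n_clusters)
        # Step 2 (heuristic stand-in): downweight kernels whose
        # within-cluster scatter is large under the current clustering.
        scatter = np.empty(len(kernels))
        for m, Ki in enumerate(kernels):
            s = 0.0
            for c in range(n_clusters):
                idx = np.where(labels == c)[0]
                if len(idx) == 0:
                    continue
                Kc = Ki[np.ix_(idx, idx)]
                s += np.trace(Kc) - Kc.sum() / len(idx)
            scatter[m] = s
        w = 1.0 / (scatter + 1e-12)
        w /= w.sum()
    return labels, w

X = np.random.default_rng(1).normal(size=(60, 5))
kernels = [rbf_kernel(X, gamma=g) for g in (0.1, 1.0, 10.0)]
labels, w = multiple_kernel_kmeans(kernels, n_clusters=3)
print(labels[:10], w)
```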

3 citations

Journal Article
TL;DR: A hybrid financial distress prediction model that integrates multiple kernel learning with manifold learning is proposed; it can be applied to prediction problems involving a large number of indicators.

3 citations

Book ChapterDOI
01 Jan 2015
TL;DR: This chapter is dedicated to nonparametric modeling of nonlinear functions in reproducing kernel Hilbert spaces (RKHS), covering positive definite kernels, reproducing kernels, kernel matrices, and the kernel trick.
Abstract: This chapter is dedicated to nonparametric modeling of nonlinear functions in reproducing kernel Hilbert spaces (RKHS). The basic definitions and concepts behind RKH spaces are presented, including positive definite kernels, reproducing kernels, kernel matrices, and the kernel trick. Cover’s theorem and the representer theorem are introduced. Then, kernel ridge regression, support vector regression, and support vector machines are studied. Online algorithms for learning in RKH spaces, such as the kernel LMS, NORMA, and the kernel APSM are discussed. The notion of multiple kernel learning is presented and a discussion on sparse modeling for nonparametric models in the context of additive models is provided. The chapter closes with a case study for text authorship identification via the use of string kernels.
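
To make one of the chapter's topics concrete, here is a minimal kernel ridge regression sketch using the standard closed-form solution alpha = (K + lambda*I)^{-1} y; the RBF kernel and parameter values are illustrative choices, not tied to the chapter.

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel

def krr_fit(X, y, lam=1e-2, gamma=1.0):
    # Solve (K + lam*I) alpha = y for the dual coefficients.
    K = rbf_kernel(X, X, gamma=gamma)
    return np.linalg.solve(K + lam * np.eye(len(X)), y)

def krr_predict(X_train, alpha, X_test, gamma=1.0):
    # Representer theorem: f(x) = sum_i alpha_i k(x_i, x)
    return rbf_kernel(X_test, X_train, gamma=gamma) @ alpha

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(50, 1))
y = np.sin(X).ravel() + 0.1 * rng.normal(size=50)
alpha = krr_fit(X, y)
print(krr_predict(X, alpha, np.array([[0.5]])))
```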

3 citations

Book ChapterDOI
03 Nov 2013
TL;DR: Experimental results on several benchmark classification data sets show that the proposed MKL method, based on the minimal-redundancy-maximal-relevance criterion and kernel alignment, can enhance the performance of MKL.
Abstract: Multiple kernel learning (MKL) is a widely used kernel learning method, but kernel selection lacks theoretical guidance. The performance of MKL depends on the user's experience, and it is difficult to choose proper kernels in practical applications. In this paper, we propose an MKL method based on the minimal-redundancy-maximal-relevance criterion and kernel alignment. The main feature of this method compared to others in the literature is that kernel selection is treated as a feature selection problem in the Hilbert space, which yields a set of base kernels with the highest relevance to the target task and minimal redundancy among themselves. Experimental results on several benchmark classification data sets show that our proposed method can enhance the performance of MKL.
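
The sketch below illustrates centred kernel-target alignment together with a simplified greedy relevance-minus-redundancy selection loop. It is an assumption-laden stand-in for the authors' actual algorithm: the scoring rule, kernels, and data are chosen purely for illustration.

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel

def center(K):
    # Centre a Gram matrix: HKH with H = I - (1/n) 11^T
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    return H @ K @ H

def alignment(K1, K2):
    # Centred kernel alignment: <K1c, K2c>_F / (||K1c||_F ||K2c||_F)
    K1c, K2c = center(K1), center(K2)
    return float(np.sum(K1c * K2c)
                 / (np.linalg.norm(K1c) * np.linalg.norm(K2c)))

def greedy_mrmr_select(kernels, y, m):
    Ky = np.outer(y, y)                      # ideal target kernel y y^T
    selected, remaining = [], list(range(len(kernels)))
    for _ in range(m):
        def score(i):
            rel = alignment(kernels[i], Ky)  # relevance to the labels
            red = (np.mean([alignment(kernels[i], kernels[j])
                            for j in selected]) if selected else 0.0)
            return rel - red                 # max relevance, min redundancy
        best = max(remaining, key=score)
        selected.append(best)
        remaining.remove(best)
    return selected

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 3))
y = np.sign(X[:, 0])                         # toy binary labels in {-1, +1}
kernels = [rbf_kernel(X, gamma=g) for g in (0.01, 0.1, 1.0, 10.0)]
print(greedy_mrmr_select(kernels, y, m=2))
```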

3 citations


Network Information
Related Topics (5)

Topic | Papers | Citations | Relatedness
Convolutional neural network | 74.7K | 2M | 89%
Deep learning | 79.8K | 2.1M | 89%
Feature extraction | 111.8K | 2.1M | 87%
Feature (computer vision) | 128.2K | 1.7M | 87%
Image segmentation | 79.6K | 1.8M | 86%
Performance
Metrics
No. of papers in the topic in previous years
Year | Papers
2023 | 21
2022 | 44
2021 | 72
2020 | 101
2019 | 113
2018 | 114