
Multiple kernel learning

About: Multiple kernel learning is a research topic. Over the lifetime, 1630 publications have been published within this topic receiving 56082 citations.


Papers
Journal ArticleDOI
TL;DR: A novel multiple kernel learning model based on multiplicative perturbation of the data in a new feature space, within the framework of uncertain convex programs (UCPs), that achieves performance competitive with state-of-the-art MKL algorithms.
Abstract: In this paper we develop a novel multiple kernel learning (MKL) model based on the idea of multiplicative perturbation of the data in a new feature space, within the framework of uncertain convex programs (UCPs). In the proposed model, we use the Kullback–Leibler divergence to measure the difference between the estimated kernel weights and the ideal kernel weights. Instead of handling the proposed model directly in the primal, we obtain the optimistic counterpart of its Lagrange dual via the theory of UCPs and solve it by alternating optimization. As a parameter varies, the proposed model traces a solution path from a robust combined kernel to the combined kernel corresponding to the initial ideal kernel weights. In addition, we give a simple strategy for selecting the initial kernel weights as the ideal kernel weights when no prior knowledge of the kernel weights is available. Experimental results on several data sets show that the proposed model achieves performance competitive with state-of-the-art MKL algorithms.
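As a minimal illustration of the two ingredients named above, the following sketch (hypothetical Python, not the paper's implementation) forms a combined kernel as a weighted sum of base Gram matrices and computes the Kullback–Leibler divergence between estimated and ideal kernel weights:

```python
import numpy as np

def combined_kernel(X, weights, kernel_fns):
    """Combined Gram matrix K = sum_m w_m * K_m over base kernels."""
    K = np.zeros((len(X), len(X)))
    for w, k in zip(weights, kernel_fns):
        K += w * k(X)
    return K

def kl_to_ideal(weights, ideal):
    """KL divergence between estimated and ideal kernel weight vectors."""
    w, p = np.asarray(weights, float), np.asarray(ideal, float)
    return float(np.sum(w * np.log(w / p)))

# Two toy base kernels (hypothetical choices, not from the paper).
def linear_kernel(X):
    return X @ X.T

def rbf_kernel(X, gamma=0.5):
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * X @ X.T
    return np.exp(-gamma * d2)

X = np.array([[0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
weights = [0.7, 0.3]                      # estimated kernel weights
K = combined_kernel(X, weights, [linear_kernel, rbf_kernel])
div = kl_to_ideal(weights, [0.5, 0.5])    # divergence from uniform ideal weights
```

A full MKL solver would go further, alternating between fitting the predictor for fixed weights and updating the weights under the KL term, as the abstract's alternating optimization suggests.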

4 citations

Proceedings ArticleDOI
01 Oct 2015
TL;DR: This paper presents an action recognition method based on the hypothesis that action classification can be boosted by motion information from optical flow, and uses the Multiple Kernel Learning (MKL) technique at the kernel level to classify actions from pooled RGB and depth features.
Abstract: Recognizing human action is valuable for many real-world applications such as video surveillance, human-computer interaction, smart homes, and gaming. In this paper, we present an action recognition method based on the hypothesis that action classification can be boosted by motion information from optical flow. With the emergence of automatic RGB-D video analysis, we propose fusing optical flow extracted from both the RGB and depth channels for action representation. First, we extract optical flow from the RGB and depth data. Second, a motion descriptor with a spatial pyramid is computed from histograms of optical flow over RGB and depth. A feature pooling technique then accumulates the RGB and depth features into a set of feature vectors for each action. Finally, we apply the Multiple Kernel Learning (MKL) technique at the kernel level to classify actions from the pooled RGB and depth features. To demonstrate generalizability, our method has been systematically evaluated on two benchmark datasets and shown to be more effective and accurate than previous work. We obtain overall accuracies of 97.5% and 92.8% on the 3D Action Pairs and MSR Daily Activity 3D datasets, respectively.

4 citations

Proceedings ArticleDOI
06 Mar 2010
TL;DR: A novel regression technique, called Genetic Multiple Kernel Relevance Vector Regression (GMK RVR), which combines genetic programming and relevance vector regression to evolve a multiple kernel function is proposed.
Abstract: The relevance vector machine (RVM) is a state-of-the-art technique for regression and classification, developed as a sparse Bayesian extension of the support vector machine. Kernel function and parameter selection is a key problem in RVM research, and real-world applications and recent research have emphasized the need for multiple kernel learning. This paper proposes a novel regression technique, called Genetic Multiple Kernel Relevance Vector Regression (GMK RVR), which combines genetic programming and relevance vector regression to evolve a multiple kernel function. The proposed technique is compared with a standard RVR using Polynomial, Gaussian RBF, and Sigmoid kernels under various parameter settings on several benchmark problems. Numerical experiments show that the GMK performs better than these widely used kernels and validate the GMK approach.

4 citations

Journal ArticleDOI
TL;DR: The proposed SSMEKL improves the classification performance of MEKL by exploiting a small number of labeled samples together with numerous unlabeled samples.
Abstract: Multiple empirical kernel learning (MEKL) is a scalable and efficient supervised algorithm that relies on labeled samples. In real-world applications, however, there is still a huge amount of unlabeled samples, to which supervised algorithms do not apply. To fully utilize the spatial distribution information of the unlabeled samples, this paper proposes a novel semi-supervised multiple empirical kernel learning (SSMEKL). SSMEKL enables multiple empirical kernel learning to achieve better classification performance with a small number of labeled samples and a large number of unlabeled samples. First, SSMEKL uses the collaborative information of multiple kernels to provide pseudo-labels for some unlabeled samples during model optimization, and designs a pseudo-empirical loss that transforms the learning process on those unlabeled samples into supervised learning. Second, SSMEKL designs a similarity regularization for unlabeled samples to make full use of their spatial information, requiring the output of an unlabeled sample to be similar to that of its neighboring labeled samples so as to improve the classification performance of the model. In experiments, results on four real-world data sets and two multiview data sets validate the effectiveness and superiority of the proposed SSMEKL.
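The pseudo-labeling idea can be caricatured in a few lines. The toy sketch below (an illustrative assumption, not the paper's actual rule) assigns a pseudo-label to an unlabeled sample only when all per-kernel classifiers agree on it, leaving disagreements unlabeled:

```python
import numpy as np

def agreement_pseudo_labels(per_kernel_preds, unlabeled_mark=-1):
    """Pseudo-label a sample only where every per-kernel classifier agrees;
    mark the rest as still unlabeled."""
    preds = np.asarray(per_kernel_preds)       # shape: (n_kernels, n_samples)
    agree = np.all(preds == preds[0], axis=0)  # unanimous columns
    return np.where(agree, preds[0], unlabeled_mark)

# Hypothetical predictions from three empirical-kernel classifiers.
preds = [[1, 0, 1, 0],
         [1, 1, 1, 0],
         [1, 0, 1, 0]]
pseudo = agreement_pseudo_labels(preds)  # → [1, -1, 1, 0]
```

The actual SSMEKL objective couples this collaborative labeling with a pseudo-empirical loss and a similarity regularizer, which the sketch does not attempt to model.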

4 citations

Proceedings Article
01 Jan 2010
TL;DR: An efficient algorithm wrapping a Support Vector Regression model for optimizing the MKL weights, named SimpleMKL, is used for the analysis and shows the relevance of multiple kernel learning for the automatic selection of time series inputs.
Abstract: In this paper we study the relevance of multiple kernel learning (MKL) for the automatic selection of time series inputs. Recently, MKL has gained great attention in the machine learning community due to its flexibility in modelling complex patterns and performing feature selection. In general, MKL constructs the kernel as a weighted linear combination of basis kernels, exploiting different sources of information. An efficient algorithm wrapping a Support Vector Regression model for optimizing the MKL weights, named SimpleMKL, is used for the analysis. In this sense, MKL performs feature selection by discarding inputs/kernels with low or null weights. The proposed approach is tested with simulated linear and nonlinear time series (AutoRegressive, Henon and Lorenz series).
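The feature-selection effect described here is easy to sketch: given the kernel weights learned by an MKL solver such as SimpleMKL (the weight values and lag names below are hypothetical), inputs whose weight is at or near zero are simply discarded.

```python
import numpy as np

def select_inputs(kernel_weights, input_names, tol=1e-3):
    """Keep only inputs whose learned MKL weight exceeds tol."""
    return [n for w, n in zip(kernel_weights, input_names) if w > tol]

# Hypothetical learned weights for lagged inputs of a time series.
weights = np.array([0.62, 0.0, 0.35, 0.001, 0.029])
lags = ["y(t-1)", "y(t-2)", "y(t-3)", "y(t-4)", "y(t-5)"]
selected = select_inputs(weights, lags)  # → ['y(t-1)', 'y(t-3)', 'y(t-5)']
```

Because the MKL objective drives uninformative kernels' weights to zero, this thresholding step is where the automatic input selection the abstract describes actually happens.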

4 citations


Network Information
Related Topics (5)
Convolutional neural network: 74.7K papers, 2M citations, 89% related
Deep learning: 79.8K papers, 2.1M citations, 89% related
Feature extraction: 111.8K papers, 2.1M citations, 87% related
Feature (computer vision): 128.2K papers, 1.7M citations, 87% related
Image segmentation: 79.6K papers, 1.8M citations, 86% related
Performance
Metrics
No. of papers in the topic in previous years
Year  Papers
2023  21
2022  44
2021  72
2020  101
2019  113
2018  114