Topic

Multiple kernel learning

About: Multiple kernel learning is a research topic. Over the lifetime, 1,630 publications have been published within this topic, receiving 56,082 citations.


Papers
Journal ArticleDOI
01 Aug 2022
TL;DR: The Multiple Kernel Transfer Clustering (MKTC) method presented in this paper uses a weakly supervised multi-instance subset of the dataset, in which sets of data instances are jointly given labels.
Abstract: Multiple kernel clustering methods have been quite successful recently, especially concerning the multi-view clustering of complex datasets. These methods simultaneously learn a multiple kernel metric while clustering in an unsupervised setting. With the motivation that some minimal supervision can potentially increase their effectiveness, we propose a Multiple Kernel Transfer Clustering (MKTC) method that can be described in terms of two tasks: a source task, where the multiple kernel metric is learned, and a target task, where the multiple kernel metric is transferred to partition a dataset. In the source task, we create a weakly supervised multi-instance subset of the dataset, where a set of data instances are together provided some labels. We put forth a Multiple Kernel Multi-Instance $k$-Means (MKMIKM) method to simultaneously cluster the multi-instance subset while also learning a multiple kernel metric under weak supervision. In the target task, MKTC transfers the multiple kernel metric learned by MKMIKM to perform unsupervised single-instance clustering of the entire dataset in a single step. The advantage of using a multi-instance setup for the source task is that it requires reduced labeling effort to guide the learning of the multiple kernel metric. Our formulations lead to a significantly lower computational cost in comparison to the state-of-the-art multiple kernel clustering algorithms, making them more applicable to larger datasets. Experiments over benchmark computer vision datasets suggest that MKTC can achieve significant improvements in clustering performance in comparison to the state-of-the-art unsupervised multiple kernel clustering methods and other transfer clustering methods.
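To make the idea of simultaneously clustering and learning a multiple kernel metric concrete, below is a minimal Python sketch of plain multiple kernel k-means, not the authors' MKTC/MKMIKM method: it alternates kernel k-means on a weighted sum of precomputed base kernels with a heuristic reweighting that favors kernels under which the clusters are compact. The function names, the reweighting rule, and the iteration counts are illustrative assumptions.

import numpy as np

def kernel_kmeans(K, n_clusters, n_iter=50, seed=0):
    """Kernel k-means on a precomputed n x n kernel matrix K."""
    rng = np.random.default_rng(seed)
    n = K.shape[0]
    labels = rng.integers(n_clusters, size=n)
    for _ in range(n_iter):
        dist = np.full((n, n_clusters), np.inf)
        for c in range(n_clusters):
            mask = labels == c
            if not mask.any():
                continue
            nc = mask.sum()
            # ||phi(x_i) - mu_c||^2 = K_ii - (2/nc) sum_j K_ij + (1/nc^2) sum_jk K_jk
            dist[:, c] = (np.diag(K)
                          - 2.0 * K[:, mask].sum(axis=1) / nc
                          + K[np.ix_(mask, mask)].sum() / nc**2)
        new_labels = dist.argmin(axis=1)
        if (new_labels == labels).all():
            break
        labels = new_labels
    return labels

def within_cluster_scatter(K, labels, n_clusters):
    """Total squared distance of points to their cluster centroid in feature space."""
    total = 0.0
    for c in range(n_clusters):
        mask = labels == c
        nc = mask.sum()
        if nc == 0:
            continue
        Kc = K[np.ix_(mask, mask)]
        total += np.trace(Kc) - Kc.sum() / nc
    return total + 1e-12  # guard against division by zero below

def multiple_kernel_kmeans(kernels, n_clusters, n_outer=5):
    """Alternate clustering on a weighted kernel sum with a heuristic
    reweighting of the base kernels (illustrative, not MKTC itself)."""
    w = np.full(len(kernels), 1.0 / len(kernels))  # start from uniform weights
    for _ in range(n_outer):
        K = sum(wi * Ki for wi, Ki in zip(w, kernels))
        labels = kernel_kmeans(K, n_clusters)
        # Favor kernels under which the current clusters are compact.
        scatter = np.array([within_cluster_scatter(Ki, labels, n_clusters)
                            for Ki in kernels])
        w = (1.0 / scatter) / (1.0 / scatter).sum()
    return labels, w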
Patent
21 Aug 2018
TL;DR: A metaheuristic optimization algorithm is fused into a traditional multiple kernel learning (MKL) framework to combine kernel functions; without modifying the objective function or requiring gradient estimation, the parameters of the base kernels and the weights of the composite kernel are jointly optimized.
Abstract: The invention discloses a near-infrared face detection method based on an improved multiple kernel learning framework, and relates to the fields of intelligent monitoring and computer vision. In the method, a metaheuristic optimization algorithm is fused into a traditional multiple kernel learning (MKL) framework to combine kernel functions; without modifying the objective function or requiring gradient estimation, the parameters of the base kernels and the weights of the composite kernel are jointly optimized. Experimental results show that the method is highly effective for near-infrared face detection, provides guidance for determining kernel functions and their corresponding weight coefficients in current MKL research, and complements existing face detection methods.
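As a rough illustration of the derivative-free idea described above, the sketch below jointly samples base-kernel parameters and combination weights and scores each candidate by cross-validated SVM accuracy on the precomputed composite kernel. Plain random search stands in for the patent's unspecified metaheuristic, and the kernel families, parameter ranges, and function names are assumptions.

import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from sklearn.metrics.pairwise import rbf_kernel, polynomial_kernel

def composite_kernel(X, gammas, degree, weights):
    """Weighted sum of two RBF kernels and one polynomial kernel."""
    Ks = [rbf_kernel(X, gamma=g) for g in gammas]
    Ks.append(polynomial_kernel(X, degree=degree))
    return sum(w * K for w, K in zip(weights, Ks))

def random_search_mkl(X, y, n_trials=200, seed=0):
    """Jointly sample base-kernel parameters and combination weights;
    score each candidate by cross-validated SVM accuracy on the
    precomputed composite kernel (no gradients needed)."""
    rng = np.random.default_rng(seed)
    best_score, best_config = -np.inf, None
    for _ in range(n_trials):
        gammas = 10.0 ** rng.uniform(-3, 1, size=2)   # RBF widths
        degree = int(rng.integers(2, 5))              # polynomial degree
        weights = rng.dirichlet(np.ones(3))           # convex combination
        K = composite_kernel(X, gammas, degree, weights)
        score = cross_val_score(SVC(kernel="precomputed"), K, y, cv=3).mean()
        if score > best_score:
            best_score, best_config = score, (gammas, degree, weights)
    return best_score, best_config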
Journal ArticleDOI
TL;DR: An improved version of LMKL, named ILMKL, is proposed; it explicitly takes into consideration both the margin and the radius and thus achieves better performance than its counterpart.
Abstract: Localized multiple kernel learning (LMKL) is an effective method of multiple kernel learning (MKL). It learns the optimal kernel from a set of predefined basic kernels by directly using the maximum margin principle embodied in the support vector machine (SVM). However, LMKL does not consider the radius of the minimum enclosing ball (MEB), which impacts the error bound of the SVM as well as the separating margin. In this paper, we propose an improved version of LMKL, named ILMKL. The proposed method explicitly takes into consideration both the margin and the radius and so achieves better performance than its counterpart. Moreover, the proposed method can automatically tune the regularization parameter when learning the optimal kernel, avoiding the time-consuming cross-validation process for choosing that parameter. Comprehensive experiments demonstrate the effectiveness and efficiency of the proposed method.
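The margin/radius interplay can be made concrete with a small sketch, assuming the classical radius-margin quantity R^2 * ||w||^2 as the kernel-selection criterion; this is not the ILMKL algorithm itself, and the centroid-centered ball is used as a cheap surrogate for the exact minimum enclosing ball, which would require solving a quadratic program. The helper names and the binary-classification restriction are assumptions.

import numpy as np
from sklearn.svm import SVC

def ball_radius_sq(K):
    """Squared radius of the ball centered at the feature-space centroid:
    max_i ||phi(x_i) - mu||^2, computed from the kernel matrix alone;
    a cheap surrogate for the exact minimum enclosing ball."""
    return float(np.max(np.diag(K) - 2.0 * K.mean(axis=1) + K.mean()))

def radius_margin_score(K, y, C=1.0):
    """R^2 * ||w||^2 for a binary SVM trained on precomputed kernel K;
    smaller is better under the classical radius-margin error bound."""
    svm = SVC(kernel="precomputed", C=C).fit(K, y)
    sv = svm.support_
    alpha_y = svm.dual_coef_[0]          # alpha_i * y_i for the support vectors
    w_norm_sq = alpha_y @ K[np.ix_(sv, sv)] @ alpha_y
    return ball_radius_sq(K) * w_norm_sq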
Proceedings ArticleDOI
26 Feb 2018
TL;DR: A quantitative analysis is given for five popular feature encoding methods (histogram encoding, locality-constrained linear encoding, Fisher vector encoding, vector of locally aggregated descriptors, and kernel codebook encoding) that have achieved strong performance in object recognition tasks.
Abstract: It is well known that the object recognition pipeline is composed of three stages: local feature extraction, feature encoding, and classification, all of which play important roles in the final classification performance. In the feature encoding phase, feature vectors are assigned to the visual words of an existing codebook in some fashion. In this paper, we focus on the feature encoding phase and give a quantitative analysis of five popular feature encoding methods (histogram encoding, locality-constrained linear encoding, Fisher vector encoding, vector of locally aggregated descriptors, and kernel codebook encoding) that have achieved strong performance in object recognition tasks. By treating the results of the different encoding methods as different channels of multiple kernel learning, weights for the encoding methods were obtained in experiments. The results show that each encoding method has its advantages for different categories, e.g., VLAD and FV receive higher weights in building-related categories. This may offer new insight into choosing an appropriate encoding method for better classification performance.
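A minimal sketch of this channel-weighting idea, under stated assumptions: each feature encoding produces its own base kernel, and per-channel weights are derived via centered kernel-target alignment, a standard MKL heuristic that stands in for whatever solver the paper actually used. The helper names, the use of linear kernels over the encoded features, and the +1/-1 label convention are assumptions.

import numpy as np
from sklearn.metrics.pairwise import linear_kernel

def centered(K):
    """Center a kernel matrix in feature space."""
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    return H @ K @ H

def alignment_weights(feature_channels, y):
    """One linear kernel per encoding channel (histogram, VLAD, FV, ...);
    per-channel weights from centered kernel-target alignment with the
    ideal kernel y y^T, normalized to sum to one. Labels y are +1/-1."""
    Y = np.outer(y, y).astype(float)
    scores = []
    for F in feature_channels:           # F: (n_samples, dim) for one encoding
        K = centered(linear_kernel(F))
        scores.append((K * Y).sum() / (np.linalg.norm(K) * np.linalg.norm(Y)))
    w = np.maximum(scores, 0.0)
    return w / (w.sum() + 1e-12)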
01 Jan 2014
TL;DR: A novel self-learning approach with multiple kernel learning for adaptive kernel selection in super-resolution (SR); the framework learns and selects the optimal kernel ridge regression model when producing an SR image, resulting in the minimum SR reconstruction error.
Abstract: Learning-based approaches for image super-resolution (SR) have attracted the attention of researchers in the past few years. We present a novel self-learning approach with multiple kernel learning for adaptive kernel selection in SR. Multiple kernel learning is theoretically and technically attractive because it learns the kernel weights and the classifier simultaneously based on the margin criterion. With theoretical support, a kernel matching search method and a gradient-based optimization approach are proposed; the SR framework learns and selects the optimal kernel ridge regression model when producing an SR image, which results in the minimum SR reconstruction error. We evaluate this method on a variety of images and obtain very promising SR results. In most cases, the method quantitatively and qualitatively outperforms bicubic interpolation and state-of-the-art learning-based SR approaches.
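A sketch of the selection step described in the abstract, under assumed data and candidate kernels: fit kernel ridge regression with several kernels and keep the model with the lowest held-out reconstruction error. This simplifies the paper's approach (no kernel matching search or gradient-based optimization) to plain validation-based selection; the parameter grid and function name are assumptions.

import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

def select_kernel_ridge(X, y, seed=0):
    """Fit kernel ridge regression under several candidate kernels and
    keep the model with the lowest held-out reconstruction error."""
    X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.25,
                                              random_state=seed)
    candidates = [("rbf", {"gamma": g}) for g in (0.01, 0.1, 1.0)]
    candidates += [("polynomial", {"degree": d}) for d in (2, 3)]
    best_err, best_model = np.inf, None
    for name, params in candidates:
        model = KernelRidge(kernel=name, alpha=1e-3, **params).fit(X_tr, y_tr)
        err = mean_squared_error(y_va, model.predict(X_va))
        if err < best_err:
            best_err, best_model = err, model
    return best_model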

Network Information
Related Topics (5)

Topic                          Papers    Citations    Related
Convolutional neural network   74.7K     2M           89%
Deep learning                  79.8K     2.1M         89%
Feature extraction             111.8K    2.1M         87%
Feature (computer vision)      128.2K    1.7M         87%
Image segmentation             79.6K     1.8M         86%
Performance Metrics
No. of papers in the topic in previous years:

Year    Papers
2023    21
2022    44
2021    72
2020    101
2019    113
2018    114