scispace - formally typeset
Topic

Multiple kernel learning

About: Multiple kernel learning is a research topic. Over the lifetime, 1630 publications have been published within this topic receiving 56082 citations.


Papers
Proceedings ArticleDOI
12 Jul 2008
TL;DR: This study reformulates the SDP problem to reduce its time and space requirements, and introduces strategies for reducing the search space when solving the SDP problem.
Abstract: Support vector machines (SVMs) have been successfully applied to classification problems. Practical issues involve how to determine the right type and suitable hyperparameters of kernel functions. Recently, multiple-kernel learning (MKL) algorithms have been developed to handle these issues by combining different kernels. The weight of each kernel in the combination is obtained through learning. One of the most popular methods is to learn the weights with semidefinite programming (SDP). However, the amount of time and space required by this method is demanding. In this study, we reformulate the SDP problem to reduce the time and space requirements. Strategies for reducing the search space in solving the SDP problem are introduced. Experimental results obtained from running on synthetic datasets and benchmark datasets of UCI and Statlog show that the proposed approach improves the efficiency of the SDP method without degrading the performance.
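The kernel-combination step that MKL builds on can be illustrated with a minimal numpy sketch (this is only the generic convex combination of Gram matrices, not the paper's SDP formulation; the RBF bandwidths and equal starting weights are arbitrary choices for the example):

```python
import numpy as np

def rbf_kernel(X, gamma):
    # Gram matrix of the RBF kernel k(x, y) = exp(-gamma * ||x - y||^2)
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * X @ X.T
    return np.exp(-gamma * d2)

def combine_kernels(kernels, weights):
    # Convex combination K = sum_m w_m K_m with w_m >= 0 and sum_m w_m = 1;
    # MKL methods learn these weights, here they are simply supplied.
    w = np.clip(np.asarray(weights, dtype=float), 0.0, None)
    w /= w.sum()
    return sum(wi * Ki for wi, Ki in zip(w, kernels))

# Combine three RBF kernels of different bandwidths with equal weights.
X = np.random.default_rng(0).normal(size=(10, 3))
Ks = [rbf_kernel(X, g) for g in (0.1, 1.0, 10.0)]
K = combine_kernels(Ks, [1.0, 1.0, 1.0])
```

Because each summand is positive semidefinite and the weights are nonnegative, the combined matrix remains a valid kernel and can be passed to any standard SVM solver.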

3 citations

Posted Content
TL;DR: This work proposes to automatically learn similarity information from data while enforcing the constraint that the similarity matrix has exactly c connected components when there are c clusters, and extends the model to incorporate multiple kernel learning ability.
Abstract: Spectral clustering has found extensive use in many areas. Most traditional spectral clustering algorithms work in three separate steps: similarity graph construction; continuous labels learning; discretizing the learned labels by k-means clustering. Such common practice has two potential flaws, which may lead to severe information loss and performance degradation. First, the predefined similarity graph might not be optimal for subsequent clustering. It is well accepted that the similarity graph highly affects the clustering results. To this end, we propose to automatically learn similarity information from data and simultaneously enforce the constraint that the similarity matrix has exactly c connected components if there are c clusters. Second, the discrete solution may deviate from the spectral solution, since the k-means method is well known to be sensitive to the initialization of cluster centers. In this work, we transform the candidate solution into a new one that better approximates the discrete one. Finally, those three subtasks are integrated into a unified framework, with each subtask iteratively boosted by using the results of the others towards an overall optimal solution. It is known that the performance of a kernel method is largely determined by the choice of kernels. To tackle this practical problem of how to select the most suitable kernel for a particular data set, we further extend our model to incorporate multiple kernel learning ability. Extensive experiments demonstrate the superiority of our proposed method as compared to existing clustering approaches.
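The three separate steps that this abstract critiques can be sketched in numpy; this is the traditional pipeline (Gaussian affinity graph, Laplacian eigenvectors, k-means discretization), not the unified framework the paper proposes, and the bandwidth sigma and the deterministic farthest-first seeding are choices made for the example:

```python
import numpy as np

def _farthest_first(F, c):
    # Deterministic k-means seeding: greedily pick mutually distant rows.
    idx = [0]
    for _ in range(c - 1):
        d2 = np.min(((F[:, None] - F[idx]) ** 2).sum(-1), axis=1)
        idx.append(int(np.argmax(d2)))
    return F[idx].copy()

def spectral_clustering(X, c, sigma=1.0, iters=50):
    # Step 1: predefined similarity graph (Gaussian affinity).
    d2 = np.sum((X[:, None] - X[None, :]) ** 2, axis=-1)
    W = np.exp(-d2 / (2 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    # Step 2: continuous labels = c smallest eigenvectors of the graph Laplacian.
    L = np.diag(W.sum(axis=1)) - W
    _, vecs = np.linalg.eigh(L)
    F = vecs[:, :c]
    # Step 3: discretize the spectral embedding by k-means.
    centers = _farthest_first(F, c)
    for _ in range(iters):
        labels = np.argmin(((F[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for k in range(c):
            if np.any(labels == k):
                centers[k] = F[labels == k].mean(axis=0)
    return labels
```

The abstract's two criticisms map directly onto this sketch: the affinity matrix W in step 1 is fixed up front rather than learned, and step 3's k-means result depends on how its centers are initialized.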

3 citations

Book
01 Jan 2020
TL;DR: This book reviews the latest advances in machine learning and deep learning for hyperspectral image analysis, including chapters on addressing challenges arising in practical imaging scenarios, multiple instance learning, and multiple kernel learning for classification.
Abstract: 1. Introduction.- 2. Machine Learning Methods for Spatial and Temporal Parameter Estimation.- 3. Deep Learning for Hyperspectral Image Analysis, Part I: Theory and Algorithms.- 4. Deep Learning for Hyperspectral Image Analysis, Part II: Applications to Remote Sensing and Biomedicine.- 5. Advances in Deep Learning for Hyperspectral Image Analysis - Addressing Challenges Arising in Practical Imaging Scenarios.- 6. Addressing the Inevitable Imprecision: Multiple Instance Learning for Hyperspectral Image Analysis.- 7. Supervised, Semi-Supervised and Unsupervised Learning for Hyperspectral Regression.- 8. Sparsity-based Methods for Classification.- 9. Multiple Kernel Learning for Hyperspectral Image Classification.- 10. Low Dimensional Manifold Model in Hyperspectral Image Reconstruction.- 11. Deep Sparse Band Selection for Hyperspectral Face Recognition.- 12. Detection of Large-Scale and Anomalous Changes.- 13. Recent Advances in Hyperspectral Unmixing Using Sparse Techniques and Deep Learning.- 14. Hyperspectral-Multispectral Image Fusion Enhancement Based on Deep Learning.- 15. Automatic Target Detection for Sparse Hyperspectral Images.

3 citations

Proceedings ArticleDOI
03 Nov 2019
TL;DR: This paper proposes interpretable multiple-kernel prototype learning (IMKPL) to construct highly interpretable prototypes in the feature space that are also efficient for the discriminative representation of the data.
Abstract: Prototype-based methods are of particular interest for domain specialists and practitioners, as they summarize a dataset by a small set of representatives. Therefore, in a classification setting, interpretability of the prototypes is as significant as the prediction accuracy of the algorithm. Nevertheless, the state-of-the-art methods make inefficient trade-offs between these concerns by sacrificing one in favor of the other, especially if the given data has a kernel-based (or multiple-kernel) representation. In this paper, we propose a novel interpretable multiple-kernel prototype learning (IMKPL) method to construct highly interpretable prototypes in the feature space, which are also efficient for the discriminative representation of the data. Our method focuses on the local discrimination of the classes in the feature space and shapes the prototypes based on condensed class-homogeneous neighborhoods of data. In addition, IMKPL learns a combined embedding in the feature space in which the above objectives are better fulfilled. When the base kernels coincide with the data dimensions, this embedding results in discriminative feature selection. We evaluate IMKPL on several benchmarks from different domains, which demonstrates its superiority to the related state-of-the-art methods regarding both interpretability and discriminative representation.
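The basic idea of summarizing a dataset by a small set of representatives can be shown with the simplest possible prototype classifier (one class-mean prototype per class and nearest-prototype prediction); this is only a baseline sketch of the prototype-based setting, not the IMKPL method:

```python
import numpy as np

def fit_prototypes(X, y):
    # One prototype per class: the class mean, the simplest summary
    # of a class by a single representative point.
    classes = np.unique(y)
    protos = np.stack([X[y == c].mean(axis=0) for c in classes])
    return classes, protos

def predict(X, classes, protos):
    # Assign each point the label of its nearest prototype.
    d2 = ((X[:, None] - protos[None]) ** 2).sum(-1)
    return classes[np.argmin(d2, axis=1)]
```

The interpretability appeal is visible even here: each decision can be explained by pointing at one representative, which is what makes shaping good prototypes (as IMKPL does in a learned multiple-kernel feature space) worthwhile.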

3 citations

Proceedings ArticleDOI
25 Mar 2012
TL;DR: A new weighting method using a multiple kernel learning (MKL) algorithm is proposed to localize the brain area contributing to accurate vowel discrimination, showing large-weight MEG sensors mainly in a language area of the brain and high classification accuracy in the 100-200 ms latency range.
Abstract: This paper shows that pattern classification based on machine learning is a powerful tool to analyze human brain activity data obtained by magnetoencephalography (MEG). We propose a new weighting method using a multiple kernel learning (MKL) algorithm to localize the brain area contributing to accurate vowel discrimination. Our MKL simultaneously estimates both the classification boundary and the weight of each MEG sensor; the MEG amplitude obtained from each pair of sensors is an element of the feature vector. The estimated weight indicates how useful the corresponding sensor is for classifying the MEG response patterns. Our results show that the large-weight MEG sensors lie mainly in a language area of the brain and that high classification accuracy (73.0%) is achieved in the 100-200 ms latency range.
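The idea that a per-sensor kernel weight can indicate that sensor's usefulness can be illustrated with a stand-in weighting rule: scoring each sensor's kernel by its (uncentered) kernel-target alignment with the labels and normalizing. The paper instead estimates the weights jointly with the classification boundary via MKL; this alignment-based sketch only conveys the intuition:

```python
import numpy as np

def kernel_alignment(K, y):
    # Uncentered kernel-target alignment: cosine similarity between a
    # sensor's Gram matrix K and the ideal label kernel y y^T. Values
    # near 1 mean the kernel's similarity structure matches the labels.
    Y = np.outer(y, y)
    return (K * Y).sum() / (np.linalg.norm(K) * np.linalg.norm(Y))

def sensor_weights(kernels, y):
    # Clip negative alignments to zero and normalize, so a large weight
    # flags a sensor whose kernel is informative about the classes.
    a = np.array([max(kernel_alignment(K, y), 0.0) for K in kernels])
    return a / a.sum()
```

A sensor whose signal carries the class information receives nearly all of the weight, while a pure-noise sensor's weight shrinks toward zero, which is the localization behavior the paper exploits over MEG sensors.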

3 citations


Network Information
Related Topics (5)
Convolutional neural network: 74.7K papers, 2M citations, 89% related
Deep learning: 79.8K papers, 2.1M citations, 89% related
Feature extraction: 111.8K papers, 2.1M citations, 87% related
Feature (computer vision): 128.2K papers, 1.7M citations, 87% related
Image segmentation: 79.6K papers, 1.8M citations, 86% related
Performance Metrics
No. of papers in the topic in previous years:

Year    Papers
2023    21
2022    44
2021    72
2020    101
2019    113
2018    114