Topic

Multiple kernel learning

About: Multiple kernel learning is a research topic. Over the lifetime, 1630 publications have been published within this topic receiving 56082 citations.


Papers
Journal ArticleDOI
TL;DR: This paper investigates the use of MKL as a tool that allows us to avoid using ad-hoc topographic indices as covariables in statistical models in complex terrains and examines the stability of the MKL algorithm with respect to the number of training data samples and to the presence of noise.
Abstract: This paper presents multiple kernel learning (MKL) regression as an exploratory spatial data analysis and modelling tool. The MKL approach is introduced as an extension of support vector regression, where MKL uses dedicated kernels to divide a given task into sub-problems and to treat them separately in an effective way. It provides better interpretability to non-linear robust kernel regression at the cost of a more complex numerical optimization. In particular, we investigate the use of MKL as a tool that allows us to avoid using ad-hoc topographic indices as covariables in statistical models in complex terrains. Instead, MKL learns these relationships from the data in a non-parametric fashion. A study on data simulated from real terrain features confirms the ability of MKL to enhance the interpretability of data-driven models and to aid feature selection without degrading predictive performances. Here we examine the stability of the MKL algorithm with respect to the number of training data samples and to the presence of noise. The results of a real case study are also presented, where MKL is able to exploit a large set of terrain features computed at multiple spatial scales, when predicting mean wind speed in an Alpine region.
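The core idea above — dividing a task across dedicated kernels at different spatial scales and combining them — can be illustrated with a minimal numpy sketch. This is not the paper's method: it uses a fixed convex combination of RBF kernels (real MKL would learn the weights) and closed-form kernel ridge regression in place of support vector regression; all data and parameter values are made up for illustration.

```python
import numpy as np

def rbf_kernel(X1, X2, gamma):
    """RBF kernel matrix between rows of X1 and X2."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def combined_kernel(X1, X2, gammas, weights):
    """Convex combination of base kernels, one per length scale."""
    return sum(w * rbf_kernel(X1, X2, g) for g, w in zip(gammas, weights))

# Toy 1-D terrain-like signal: a smooth trend plus fine-scale variation.
rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(60, 1))
y = np.sin(X[:, 0]) + 0.3 * np.sin(5 * X[:, 0])

gammas = [0.1, 1.0, 10.0]            # one base kernel per spatial scale
weights = np.array([0.5, 0.3, 0.2])  # assumed fixed; MKL would learn these

# Kernel ridge regression with the combined kernel (closed form).
lam = 1e-2
K = combined_kernel(X, X, gammas, weights)
alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)

X_test = np.linspace(0, 10, 5).reshape(-1, 1)
y_pred = combined_kernel(X_test, X, gammas, weights) @ alpha
```

The interpretability claim in the abstract corresponds to inspecting the learned `weights`: a kernel whose weight goes to zero marks a scale (or feature group) the model found irrelevant.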

43 citations

Journal ArticleDOI
TL;DR: This paper considers the internal correlation between the feature space and the label space while fusing kernelized information from the respective spaces, and constructs a robust multi-label kernelized fuzzy rough set model, called RMFRS.

43 citations

Proceedings Article
06 Dec 2010
TL;DR: This paper uses the ratio between the margin and the radius of the minimum enclosing ball to measure the goodness of a kernel, and presents a new minimization formulation for kernel learning that is invariant to scalings of learned kernels and to the types of norm constraints on combination coefficients.
Abstract: In this paper, we point out that there exist scaling and initialization problems in most existing multiple kernel learning (MKL) approaches, which employ the large margin principle to jointly learn both a kernel and an SVM classifier. The reason is that the margin itself cannot describe how good a kernel is, because it neglects scaling. We use the ratio between the margin and the radius of the minimum enclosing ball to measure the goodness of a kernel, and present a new minimization formulation for kernel learning. This formulation is invariant to scalings of learned kernels, and when learning a linear combination of basis kernels it is also invariant to scalings of the basis kernels and to the types (e.g., L1 or L2) of norm constraints on the combination coefficients. We establish the differentiability of our formulation, and propose a gradient projection algorithm for kernel learning. Experiments show that our method significantly outperforms both SVM with the uniform combination of basis kernels and other state-of-the-art MKL approaches.
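The scale-invariance argument can be checked numerically. The sketch below approximates the minimum-enclosing-ball radius by the distance from the farthest point to the feature-space centroid (the true MEB requires solving a small QP, so this is a simplification, not the paper's procedure); it shows that scaling the kernel by a constant scales the squared radius by the same constant, which is exactly why a margin²/R² objective cancels the scale while the margin alone does not.

```python
import numpy as np

def radius_sq_centroid(K):
    """Squared distance from the farthest point to the feature-space
    centroid -- a simple stand-in for the squared minimum-enclosing-ball
    radius.  ||phi(x_i) - c||^2 = K_ii - (2/n) sum_j K_ij + (1/n^2) sum_jk K_jk."""
    k_mean = K.mean(axis=1)          # (1/n) sum_j K_ij
    k_all = K.mean()                 # (1/n^2) sum_jk K_jk
    return np.max(np.diag(K) - 2 * k_mean + k_all)

rng = np.random.default_rng(1)
X = rng.normal(size=(30, 4))
K = X @ X.T                          # linear kernel on toy data

r2 = radius_sq_centroid(K)
r2_scaled = radius_sq_centroid(10.0 * K)
ratio = r2_scaled / r2               # ~10.0: R^2 scales linearly with K
```

Since the squared margin of a hard-margin SVM also scales linearly with the kernel, the ratio margin²/R² is unchanged when the kernel is multiplied by a constant.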

43 citations

Journal ArticleDOI
TL;DR: This paper proposes an efficient multiple-feature learning model with adaptive weights for effectively classifying complex hyperspectral images with limited training samples, together with a novel decision fusion strategy that combines linear and multiple-kernel features to balance the classification results of different classifiers.
Abstract: Linearly derived features have been widely used in hyperspectral image classification in recent years to find linear separability of certain classes, while nonlinearly transformed features are more effective for class discrimination in real analysis scenarios. However, few efforts have attempted to combine linear and nonlinear features in the same framework, even though they exhibit complementary properties. Moreover, conventional multiple-feature learning approaches treat different features equally, which is not reasonable. This paper proposes an efficient multiple-feature learning model with adaptive weights for effectively classifying complex hyperspectral images with limited training samples. A new diversity kernel function is first proposed to mimic the perception and analysis procedure of human vision; it simultaneously evaluates the contrast differences of global features and spatial coherence. Since existing multiple-kernel feature models are often time-consuming, we then design a new adaptive weighted multiple kernel learning method. It employs kernel projection, which reduces dimensionality and also learns kernel weights to further discriminate the classification boundaries. To combine linear and nonlinear features, this paper also proposes a novel decision fusion strategy that balances the classification results of the different classifiers. The proposed scheme is tested on several hyperspectral data sets and extended to a multisource feature classification environment. The experimental results show that the proposed method outperforms most existing ones and significantly reduces the computational complexity.
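Decision fusion of the kind the abstract describes — balancing the outputs of a linear-feature classifier and a multiple-kernel classifier — often reduces to a weighted average of per-classifier class probabilities followed by an argmax. The sketch below shows that generic pattern only; the classifier outputs and fusion weights are invented for illustration and are not taken from the paper.

```python
import numpy as np

def fuse_decisions(prob_list, weights):
    """Weighted average of per-classifier class-probability matrices
    (rows = samples, columns = classes), then an argmax over classes."""
    weights = np.asarray(weights, float)
    weights = weights / weights.sum()      # make the fusion weights convex
    fused = sum(w * p for w, p in zip(weights, prob_list))
    return fused.argmax(axis=1)

# Toy outputs for 4 pixels and 3 classes from two hypothetical
# classifiers: one on linear features, one on multiple-kernel features.
p_linear = np.array([[0.6, 0.3, 0.1],
                     [0.2, 0.5, 0.3],
                     [0.4, 0.4, 0.2],
                     [0.1, 0.2, 0.7]])
p_kernel = np.array([[0.5, 0.4, 0.1],
                     [0.1, 0.8, 0.1],
                     [0.2, 0.3, 0.5],
                     [0.2, 0.1, 0.7]])

labels = fuse_decisions([p_linear, p_kernel], weights=[0.4, 0.6])
print(labels)   # -> [0 1 2 2]: fused class index per pixel
```

In an adaptive-weight scheme the fusion weights would themselves be learned (e.g., from validation accuracy) rather than fixed as here.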

43 citations

Journal Article
TL;DR: A non-sparse version of MK-FDA is proposed, which imposes a general lp norm regularisation on the kernel weights, and it is demonstrated that lp MK-FDA improves upon sparse MK-FDA in many practical situations and tends to outperform its SVM counterpart.
Abstract: Sparsity-inducing multiple kernel Fisher discriminant analysis (MK-FDA) has been studied in the literature. Building on recent advances in non-sparse multiple kernel learning (MKL), we propose a non-sparse version of MK-FDA, which imposes a general lp norm regularisation on the kernel weights. We formulate the associated optimisation problem as a semi-infinite program (SIP), and adapt an iterative wrapper algorithm to solve it. We then discuss, in light of the latest advances in MKL optimisation techniques, several reformulations and optimisation strategies that can potentially lead to significant improvements in the efficiency and scalability of MK-FDA. We carry out extensive experiments on six datasets from various application areas, and closely compare the performance of lp MK-FDA, fixed norm MK-FDA, and several variants of SVM-based MKL (MK-SVM). Our results demonstrate that lp MK-FDA improves upon sparse MK-FDA in many practical situations. The results also show that on image categorisation problems, lp MK-FDA tends to outperform its SVM counterpart. Finally, we also discuss the connection between (MK-)FDA and (MK-)SVM, under the unified framework of regularised kernel machines.
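The effect of the lp norm constraint on the kernel weights can be seen in the closed-form weight update that appears in the non-sparse MKL literature: w_m ∝ ||f_m||^(2/(p+1)), rescaled so that ||w||_p = 1. The sketch below is a hedged illustration of that single update step, not the paper's full SIP wrapper algorithm; the per-kernel norms are invented numbers standing in for the quantities the inner SVM/FDA solver would produce.

```python
import numpy as np

def lp_weight_update(norms, p):
    """One closed-form kernel-weight update in lp-norm MKL:
    w_m proportional to ||f_m||^(2/(p+1)), rescaled to the unit lp sphere."""
    norms = np.asarray(norms, float)
    w = norms ** (2.0 / (p + 1.0))
    return w / np.linalg.norm(w, ord=p)

# Hypothetical per-kernel function norms from the inner solver.
s = np.array([3.0, 1.0, 0.2])

w_p1 = lp_weight_update(s, p=1.0)   # p -> 1: weights concentrate (sparser)
w_p2 = lp_weight_update(s, p=2.0)   # larger p: weights spread out more
```

As p grows, the exponent 2/(p+1) shrinks, flattening the weight profile — which is exactly the sparse-versus-non-sparse trade-off the abstract studies.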

42 citations


Network Information
Related Topics (5)
- Convolutional neural network: 74.7K papers, 2M citations (89% related)
- Deep learning: 79.8K papers, 2.1M citations (89% related)
- Feature extraction: 111.8K papers, 2.1M citations (87% related)
- Feature (computer vision): 128.2K papers, 1.7M citations (87% related)
- Image segmentation: 79.6K papers, 1.8M citations (86% related)
Performance
Metrics: No. of papers in the topic in previous years

Year  Papers
2023  21
2022  44
2021  72
2020  101
2019  113
2018  114