scispace - formally typeset
Topic

Multiple kernel learning

About: Multiple kernel learning is a research topic. Over the lifetime, 1630 publications have been published within this topic receiving 56082 citations.
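The core operation in multiple kernel learning, combining several base Gram matrices with learned non-negative weights, can be sketched in a few lines. This is a minimal NumPy illustration; the kernel choices and the fixed weights are illustrative assumptions, not taken from any particular paper (in practice the weights are learned jointly with the classifier):

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Gram matrix of the Gaussian (RBF) kernel
    sq = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def linear_kernel(X, Y):
    return X @ Y.T

def combine_kernels(kernels, weights):
    # Convex combination sum_m w_m * K_m with w_m >= 0, sum_m w_m = 1;
    # any such combination of valid kernels is again a valid kernel.
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return sum(wi * K for wi, K in zip(w, kernels))

X = np.random.RandomState(0).randn(5, 3)
K1 = rbf_kernel(X, X)
K2 = linear_kernel(X, X)
K = combine_kernels([K1, K2], [0.7, 0.3])
```

Because each base kernel is positive semidefinite and the weights are non-negative, the combined matrix remains a valid kernel that can be handed directly to any kernel machine.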


Papers
Proceedings ArticleDOI
01 Jan 2011
TL;DR: This work has developed a very simple histogram matching based technique to achieve relative kernel distribution invariance (RKDI) by transforming the kernels to a canonical distribution.
Abstract: Kernel-based feature combination techniques such as Multiple Kernel Learning linearly combine different kernels using arithmetic operations. We have observed that the kernel distributions of different features are usually very different. We argue that the similarity distributions amongst the data points for a given dataset should not change with their representation features and propose the concept of relative kernel distribution invariance (RKDI). We have developed a very simple histogram matching based technique to achieve RKDI by transforming the kernels to a canonical distribution. We have performed extensive experiments on various computer vision and machine learning datasets and show that calibrating the kernels to an empirically chosen canonical space before they are combined can always achieve a performance gain over state-of-the-art methods. As histogram matching is a remarkably simple and robust technique, the new method is universally applicable to kernel-based feature combination.

7 citations
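The calibration idea in the paper above, transforming kernel values so their empirical distribution matches a canonical one, can be illustrated with rank-based histogram matching. This is a generic sketch of the technique, not the authors' exact procedure; the canonical kernel here is an arbitrary placeholder:

```python
import numpy as np

def histogram_match(K, K_canonical):
    # Map the entries of K so their empirical distribution matches
    # that of K_canonical, while preserving the rank order of the
    # original kernel values (quantile matching).
    flat = K.ravel()
    order = np.argsort(flat)
    canon_sorted = np.sort(K_canonical.ravel())
    # Interpolate quantiles in case the two kernels differ in size
    positions = np.linspace(0, 1, flat.size)
    canon_positions = np.linspace(0, 1, canon_sorted.size)
    target = np.interp(positions, canon_positions, canon_sorted)
    out = np.empty_like(flat)
    out[order] = target
    return out.reshape(K.shape)

rng = np.random.RandomState(0)
K_feat = rng.rand(4, 4)    # kernel of one feature (placeholder values)
K_canon = rng.rand(4, 4)   # empirically chosen canonical kernel
K_cal = histogram_match(K_feat, K_canon)
```

After matching, each calibrated kernel has the same value distribution as the canonical one, so kernels from very different features can be combined on a common scale.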

Posted Content
TL;DR: This article introduces a new research problem, named stream-based active MKL (AMKL), in which a learner is allowed to label some selected data from an oracle according to a selection criterion, and presents AMKL with an adaptive kernel selection in which irrelevant kernels can be excluded from a kernel dictionary "on the fly."
Abstract: Online multiple kernel learning (OMKL) has provided attractive performance in nonlinear function learning tasks. Leveraging a random feature approximation, the major drawback of OMKL, known as the curse of dimensionality, has been recently alleviated. In this paper, we introduce a new research problem, termed (stream-based) active multiple kernel learning (AMKL), in which a learner is allowed to label selected data from an oracle according to a selection criterion. This is necessary in many real-world applications as acquiring true labels is costly or time-consuming. We prove that AMKL achieves an optimal sublinear regret, implying that the proposed selection criterion indeed avoids unnecessary label requests. Furthermore, we propose AMKL with an adaptive kernel selection (AMKL-AKS) in which irrelevant kernels can be excluded from a kernel dictionary 'on the fly'. This approach can improve the efficiency of active learning as well as the accuracy of a function approximation. Via numerical tests with various real datasets, it is demonstrated that AMKL-AKS yields similar or better performance than the best-known OMKL, with a smaller number of labeled data.

7 citations
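Online MKL methods of this kind typically maintain a weight per kernel and update the weights multiplicatively from each kernel's instantaneous loss; kernels whose weight collapses can then be dropped from the dictionary. The following is a generic multiplicative-weights sketch of that idea, not the authors' AMKL-AKS algorithm; the learning rate and pruning threshold are illustrative assumptions:

```python
import numpy as np

def hedge_update(weights, losses, eta=0.5):
    # Multiplicative-weights step: kernels with larger instantaneous
    # loss are exponentially down-weighted, then renormalised.
    w = np.asarray(weights) * np.exp(-eta * np.asarray(losses))
    return w / w.sum()

def prune_dictionary(weights, names, threshold=0.05):
    # Drop kernels whose weight has fallen below a threshold,
    # mimicking "on the fly" exclusion of irrelevant kernels.
    keep = weights >= threshold
    w = weights[keep]
    return w / w.sum(), [n for n, k in zip(names, keep) if k]

w = np.array([0.25, 0.25, 0.25, 0.25])
w = hedge_update(w, losses=[0.1, 0.1, 0.1, 5.0])
w, names = prune_dictionary(w, ["rbf", "linear", "poly", "bad"])
```

Shrinking the dictionary this way reduces per-round cost, since only the surviving kernels' predictions need to be evaluated on each new sample.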

Book ChapterDOI
13 Nov 2011
TL;DR: This work studies multitask learning (MTL) problems and formulates a novel MTL algorithm that trains coupled but nonidentical MKL models across the tasks and demonstrates the generalization performance and the efficiency of this approach.
Abstract: Empirical success of kernel-based learning algorithms is very much dependent on the kernel function used. Instead of using a single fixed kernel function, multiple kernel learning (MKL) algorithms learn a combination of different kernel functions in order to obtain a similarity measure that better matches the underlying problem. We study multitask learning (MTL) problems and formulate a novel MTL algorithm that trains coupled but nonidentical MKL models across the tasks. The proposed algorithm is especially useful for tasks that have different input and/or output space characteristics and is computationally very efficient. Empirical results on three data sets validate the generalization performance and the efficiency of our approach.

7 citations

Book ChapterDOI
09 Jan 2008
TL;DR: A novel hierarchical boosting algorithm is proposed by incorporating concept ontology and multi-task learning to achieve hierarchical image classifier training to enable automatic multi-level image annotation.
Abstract: In this paper, we have proposed a novel algorithm to achieve automatic multi-level image annotation by incorporating concept ontology and multitask learning for hierarchical image classifier training. To achieve more reliable image classifier training in high-dimensional heterogeneous feature space, a new algorithm is proposed by incorporating multiple kernels for diverse image similarity characterization, and a multiple kernel learning algorithm is developed to train the SVM classifiers for the atomic image concepts at the first level of the concept ontology. To enable automatic multi-level image annotation, a novel hierarchical boosting algorithm is proposed by incorporating concept ontology and multi-task learning to achieve hierarchical image classifier training.

7 citations

Book ChapterDOI
02 Oct 2009
TL;DR: The paper presents the Multiple Kernel Learning approach as a modelling and data exploratory tool and applies it to the problem of wind speed mapping and confirms the usefulness of MKL as a tool that enhances the interpretability of data-driven models.
Abstract: The paper presents the Multiple Kernel Learning (MKL) approach as a modelling and data exploratory tool and applies it to the problem of wind speed mapping. Support Vector Regression (SVR) is used to predict spatial variations of the mean wind speed from terrain features (slopes, terrain curvature, directional derivatives) generated at different spatial scales. Multiple Kernel Learning is applied to learn kernels for individual features and thematic feature subsets, both in the context of feature selection and optimal parameters determination. An empirical study on real-life data confirms the usefulness of MKL as a tool that enhances the interpretability of data-driven models.

7 citations
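The setup in the wind-speed paper, one kernel per thematic feature subset, with the combination weights indicating which terrain features matter, can be sketched with a precomputed combined kernel. The paper uses Support Vector Regression; the sketch below substitutes closed-form kernel ridge regression as a simpler stand-in, and the "slope" and "curvature" feature groups, weights, and data are all hypothetical:

```python
import numpy as np

def rbf(X, Y, gamma):
    sq = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def fit_kernel_ridge(K, y, lam=1e-2):
    # Closed-form dual solution: alpha = (K + lam * I)^{-1} y
    return np.linalg.solve(K + lam * np.eye(K.shape[0]), y)

rng = np.random.RandomState(0)
X_slope = rng.randn(30, 2)   # terrain-slope features (placeholder)
X_curv = rng.randn(30, 3)    # terrain-curvature features (placeholder)
y = X_slope[:, 0] + 0.1 * rng.randn(30)   # target driven by slope

# One kernel per feature group; the weights (fixed here, learned by
# MKL in the paper) indicate each group's relevance to the target.
w = [0.8, 0.2]
K = w[0] * rbf(X_slope, X_slope, 1.0) + w[1] * rbf(X_curv, X_curv, 1.0)
alpha = fit_kernel_ridge(K, y)
y_hat = K @ alpha
```

Reading the learned weights as feature-group relevance is what gives MKL its interpretability in this kind of spatial-mapping application: a near-zero weight marks a terrain feature subset that the model effectively ignores.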


Network Information
Related Topics (5)
- Convolutional neural network: 74.7K papers, 2M citations (89% related)
- Deep learning: 79.8K papers, 2.1M citations (89% related)
- Feature extraction: 111.8K papers, 2.1M citations (87% related)
- Feature (computer vision): 128.2K papers, 1.7M citations (87% related)
- Image segmentation: 79.6K papers, 1.8M citations (86% related)
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2023    21
2022    44
2021    72
2020    101
2019    113
2018    114