Topic

Multiple kernel learning

About: Multiple kernel learning is a machine learning research topic concerned with learning an optimal combination of several base kernels for kernel methods such as support vector machines. Over the lifetime, 1630 publications have been published within this topic, receiving 56082 citations.
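
In its most common form, MKL learns a non-negative weighting of several base kernels jointly with the classifier; the display below is a generic summary for orientation rather than a formula taken from any particular paper listed here:

k(x, x') = \sum_{m=1}^{M} \beta_m \, k_m(x, x'), \qquad \beta_m \ge 0,

where the weights \beta are typically constrained by a norm such as \|\beta\|_1 = 1 or \|\beta\|_p = 1 and learned jointly with the kernel machine.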


Papers
Proceedings ArticleDOI
07 Nov 2009
TL;DR: This paper proposes an automatic food image recognition system for recording people's eating habits and uses the Multiple Kernel Learning (MKL) method to integrate several kinds of image features such as color, texture and SIFT adaptively.
Abstract: Since dietary health has been attracting growing attention recently, a system that can easily record everyday meals is awaited. In this paper, we propose an automatic food image recognition system for recording people's eating habits. In the proposed system, we use the Multiple Kernel Learning (MKL) method to integrate several kinds of image features such as color, texture and SIFT adaptively. MKL makes it possible to estimate the optimal weights for combining image features for each category. In addition, we implemented a prototype system to recognize food images taken by cellular-phone cameras. In the experiment, we achieved a 61.34% classification rate for 50 kinds of foods. To the best of our knowledge, this is the first report of a food image classification system which can be applied for practical use.
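
As a rough illustration of the feature integration described above, the sketch below combines one RBF kernel per feature type with fixed weights and trains a precomputed-kernel SVM; the feature blocks, weights and category count are invented for the example, and actual MKL would learn the weights rather than fix them.

# Minimal sketch (not the authors' system): weighted combination of colour, texture
# and SIFT-style feature kernels fed to an SVM with a precomputed kernel.
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics.pairwise import rbf_kernel

def combined_kernel(train_blocks, weights, test_blocks=None):
    """Weighted sum of one RBF kernel per feature type."""
    test_blocks = train_blocks if test_blocks is None else test_blocks
    return sum(w * rbf_kernel(Xte, Xtr)
               for w, Xtr, Xte in zip(weights, train_blocks, test_blocks))

rng = np.random.default_rng(0)
X_color, X_texture, X_sift = (rng.normal(size=(200, d)) for d in (64, 32, 128))
y = rng.integers(0, 5, size=200)           # 5 hypothetical food categories

weights = [0.5, 0.2, 0.3]                  # fixed here; MKL would learn these per category
K_train = combined_kernel([X_color, X_texture, X_sift], weights)
clf = SVC(kernel="precomputed").fit(K_train, y)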

195 citations

Proceedings Article
06 Dec 2010
TL;DR: It is demonstrated that linear MKL regularised with the p-norm squared, or with certain Bregman divergences, can indeed be trained using SMO, and the resulting algorithm retains both simplicity and efficiency and is significantly faster than state-of-the-art specialised p-norm MKL solvers.
Abstract: Our objective is to train p-norm Multiple Kernel Learning (MKL) and, more generally, linear MKL regularised by the Bregman divergence, using the Sequential Minimal Optimization (SMO) algorithm. The SMO algorithm is simple, easy to implement and adapt, and efficiently scales to large problems. As a result, it has gained widespread acceptance, and SVMs are routinely trained using SMO in diverse real-world applications. Training using SMO has been a long-standing goal in MKL for the very same reasons. Unfortunately, the standard MKL dual is not differentiable and therefore cannot be optimised using SMO-style co-ordinate ascent. In this paper, we demonstrate that linear MKL regularised with the p-norm squared, or with certain Bregman divergences, can indeed be trained using SMO. The resulting algorithm retains both simplicity and efficiency and is significantly faster than state-of-the-art specialised p-norm MKL solvers. We show that we can train on a hundred thousand kernels in approximately seven minutes and on fifty thousand points in less than half an hour on a single core.
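
The sketch below is not the SMO-based solver this paper proposes; it is a much simpler alternating scheme for the same l_p-norm MKL objective, assuming the closed-form weight update of Kloft et al. and binary labels, included only to make the objective concrete.

# Simple alternating optimisation for l_p-norm MKL (illustrative only, binary labels).
import numpy as np
from sklearn.svm import SVC

def lp_mkl(K_list, y, p=2.0, C=1.0, n_iter=20):
    M = len(K_list)
    beta = np.full(M, M ** (-1.0 / p))                 # feasible start: ||beta||_p = 1
    for _ in range(n_iter):
        K = sum(b * Km for b, Km in zip(beta, K_list)) # combined kernel
        svm = SVC(kernel="precomputed", C=C).fit(K, y)
        ay = np.zeros(len(y))
        ay[svm.support_] = svm.dual_coef_.ravel()      # alpha_i * y_i for support vectors
        # ||w_m||^2 = beta_m^2 * (alpha*y)^T K_m (alpha*y)
        norms = np.array([b ** 2 * ay @ Km @ ay for b, Km in zip(beta, K_list)])
        beta = norms ** (1.0 / (p + 1.0))              # closed-form l_p weight update ...
        beta /= np.linalg.norm(beta, ord=p)            # ... projected back onto ||beta||_p = 1
    return beta, svm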

190 citations

Proceedings ArticleDOI
25 Jul 2010
TL;DR: This paper discusses a novel approach based on the theory of multiple kernel learning to detect potential safety anomalies in very large databases of discrete and continuous data from world-wide operations of commercial fleets.
Abstract: The world-wide aviation system is one of the most complex dynamical systems ever developed and is generating data at an extremely rapid rate. Most modern commercial aircraft record several hundred flight parameters, including information from the guidance, navigation, and control systems, the avionics and propulsion systems, and the pilot inputs into the aircraft. These parameters may be continuous measurements or binary or categorical measurements recorded at one-second intervals for the duration of the flight. Currently, most approaches to aviation safety are reactive, meaning that they are designed to react to an aviation safety incident or accident. In this paper, we discuss a novel approach based on the theory of multiple kernel learning to detect potential safety anomalies in very large databases of discrete and continuous data from world-wide operations of commercial fleets. We pose a general anomaly detection problem which includes both discrete and continuous data streams, where we assume that the discrete streams have a causal influence on the continuous streams. We also assume that atypical sequences of events in the discrete streams can lead to off-nominal system performance. We discuss the application domain, novel algorithms, and results on real-world data sets. Our algorithm uncovers operationally significant events in high-dimensional data streams in the aviation industry which are not detectable using state-of-the-art methods.
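
As a generic illustration (not the paper's algorithm), the sketch below combines a toy kernel over discrete event indicators with an RBF kernel over continuous measurements and flags anomalies with a one-class SVM on the combined kernel; the data, kernel choices and equal weights are invented for the example.

# Generic heterogeneous-kernel anomaly detection sketch (not the paper's method).
import numpy as np
from sklearn.svm import OneClassSVM
from sklearn.metrics.pairwise import rbf_kernel

def discrete_kernel(S):
    """Toy cosine-style similarity between binary event-indicator rows."""
    overlap = S @ S.T
    counts = S.sum(axis=1, keepdims=True)
    return overlap / np.maximum(np.sqrt(counts @ counts.T), 1.0)

rng = np.random.default_rng(1)
S = (rng.random((300, 40)) < 0.1).astype(float)     # discrete switch/event indicators per flight
X = rng.normal(size=(300, 20))                      # continuous flight parameters

K = 0.5 * discrete_kernel(S) + 0.5 * rbf_kernel(X)  # equal weights, purely for illustration
scores = OneClassSVM(kernel="precomputed", nu=0.05).fit(K).decision_function(K)
anomalies = np.argsort(scores)[:10]                 # lowest scores = most anomalous flights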

190 citations

Proceedings Article
12 Dec 2011
TL;DR: A variational Bayesian inference algorithm that can be widely applied to sparse linear models is introduced; it is based on the spike-and-slab prior, which from a Bayesian perspective is the gold standard for sparse inference.
Abstract: We introduce a variational Bayesian inference algorithm which can be widely applied to sparse linear models. The algorithm is based on the spike-and-slab prior which, from a Bayesian perspective, is the gold standard for sparse inference. We apply the method to a general multi-task and multiple kernel learning model in which a common set of Gaussian process functions is linearly combined with task-specific sparse weights, thus inducing relations between the tasks. This model unifies several sparse linear models, such as generalized linear models, sparse factor analysis and matrix factorization with missing values, so that the variational algorithm can be applied to all these cases. We demonstrate our approach in multi-output Gaussian process regression, multi-class classification, image processing applications and collaborative filtering.
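
For reference, the spike-and-slab prior referred to above places each weight either exactly at zero (the spike) or under a Gaussian slab; in generic notation (not the paper's exact parameterisation):

p(w_j \mid s_j) = s_j \, \mathcal{N}(w_j \mid 0, \sigma_w^2) + (1 - s_j) \, \delta_0(w_j), \qquad s_j \sim \mathrm{Bernoulli}(\pi),

and the variational algorithm approximates the joint posterior over the binary indicators s_j and the weights w_j.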

189 citations

Journal ArticleDOI
TL;DR: This paper addresses MKL for classification in hyperspectral images by extracting the most variation from the space spanned by multiple kernels and proposes a representative MKL (RMKL) algorithm that greatly reduces the computational load of searching for the optimal combination of basis kernels.
Abstract: Recently, multiple kernel learning (MKL) methods have been developed to improve the flexibility of kernel-based learning machines. MKL methods generally focus on determining the key kernels to be preserved and their significance in the optimal kernel combination. Unfortunately, the computational demand of finding the optimal combination becomes prohibitive when the numbers of training samples and kernels grow large, particularly for hyperspectral remote sensing data. In this paper, we address MKL for classification in hyperspectral images by extracting the most variation from the space spanned by multiple kernels and propose a representative MKL (RMKL) algorithm. The core idea embedded in the algorithm is to determine the kernels to be preserved and their weights according to statistical significance instead of a time-consuming search for the optimal kernel combination. The notable merits of RMKL are that it greatly reduces the computational load of searching for the optimal combination of basis kernels and is not limited by the strict selection of basis kernels that most MKL algorithms require; meanwhile, RMKL keeps the excellent properties of MKL in terms of both classification accuracy and interpretability. Experiments are conducted on several real hyperspectral data sets, and the results show that the RMKL algorithm provides the best performance to date among several state-of-the-art algorithms while demonstrating satisfactory computational efficiency.
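
The sketch below is not the RMKL procedure itself, only a generic illustration of the underlying idea of weighting base kernels by how much variation they carry before classifying with a precomputed-kernel SVM; the data shapes, kernel parameters and weighting rule are assumptions made for the example.

# Generic variance-weighted kernel combination sketch (not the RMKL algorithm).
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics.pairwise import rbf_kernel

def variation_weights(K_list):
    """Weight each base kernel in proportion to the variance of its centred entries."""
    n = K_list[0].shape[0]
    H = np.eye(n) - np.full((n, n), 1.0 / n)          # centring matrix
    var = np.array([(H @ K @ H).var() for K in K_list])
    return var / var.sum()

rng = np.random.default_rng(2)
X = rng.normal(size=(150, 200))                       # e.g. 150 pixels, 200 spectral bands
y = rng.integers(0, 3, size=150)                      # 3 toy land-cover classes

K_list = [rbf_kernel(X, gamma=g) for g in (1e-3, 1e-2, 1e-1)]   # base kernels
w = variation_weights(K_list)
K = sum(wi * Ki for wi, Ki in zip(w, K_list))
clf = SVC(kernel="precomputed").fit(K, y)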

186 citations


Network Information
Related Topics (5)

Topic                           Papers    Citations   Relatedness
Convolutional neural network    74.7K     2M          89%
Deep learning                   79.8K     2.1M        89%
Feature extraction              111.8K    2.1M        87%
Feature (computer vision)       128.2K    1.7M        87%
Image segmentation              79.6K     1.8M        86%
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2023    21
2022    44
2021    72
2020    101
2019    113
2018    114