Topic

Multiple kernel learning

About: Multiple kernel learning is a research topic. Over its lifetime, 1,630 publications have been published within this topic, receiving 56,082 citations.


Papers
Posted Content
TL;DR: This work uses the natural hierarchical structure of the problem to extend the multiple kernel learning framework to kernels that can be embedded in a directed acyclic graph, and shows that it is then possible to perform kernel selection through a graph-adapted sparsity-inducing norm, in polynomial time in the number of selected kernels.
Abstract: We consider the problem of high-dimensional non-linear variable selection for supervised learning. Our approach is based on performing linear selection among exponentially many appropriately defined positive definite kernels that characterize non-linear interactions between the original variables. To select efficiently from these many kernels, we use the natural hierarchical structure of the problem to extend the multiple kernel learning framework to kernels that can be embedded in a directed acyclic graph; we show that it is then possible to perform kernel selection through a graph-adapted sparsity-inducing norm, in polynomial time in the number of selected kernels. Moreover, we study the consistency of variable selection in high-dimensional settings, showing that under certain assumptions, our regularization framework allows a number of irrelevant variables which is exponential in the number of observations. Our simulations on synthetic datasets and datasets from the UCI repository show state-of-the-art predictive performance for non-linear regression problems.

80 citations
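
As a rough illustration of the kernel hierarchy described above (not the paper's actual algorithm), the sketch below builds the first two levels of such a hierarchy: one RBF kernel per input variable, plus Hadamard products over variable pairs. Products of positive definite kernels are positive definite, so every candidate is a valid kernel; the full method organizes all variable subsets in a DAG and selects among them with a graph-adapted sparsity-inducing norm.

```python
import numpy as np
from itertools import combinations
from sklearn.metrics.pairwise import rbf_kernel

def hierarchical_kernels(X, gamma=1.0):
    # Depth-1 kernels: one RBF kernel per input variable.
    d = X.shape[1]
    kernels = {(j,): rbf_kernel(X[:, [j]], gamma=gamma) for j in range(d)}
    # Depth-2 kernels: elementwise (Hadamard) products over variable pairs,
    # a truncation of the paper's DAG over all variable subsets.
    for i, j in combinations(range(d), 2):
        kernels[(i, j)] = kernels[(i,)] * kernels[(j,)]
    return kernels

X = np.random.rand(50, 4)
Ks = hierarchical_kernels(X)   # 4 depth-1 + 6 depth-2 candidate kernels
```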

Journal ArticleDOI
TL;DR: This paper proposes to replace the radius of the minimum enclosing ball (MEB) in MKL with the trace of the total data scattering matrix, which makes the method more robust in the presence of outliers or noisy training samples, more computationally efficient by avoiding the quadratic optimization for computing the radius at each iteration, and readily solvable by existing off-the-shelf MKL packages.
Abstract: Recent work on multiple kernel learning (MKL) has demonstrated that integrating radius information is a promising way to improve kernel learning performance. Directly integrating the radius of the minimum enclosing ball (MEB) into MKL, however, not only incurs significant computational overhead but may also adversely affect kernel learning performance due to the notorious sensitivity of this radius to outliers. Inspired by the relationship between the radius of the MEB and the trace of the total data scattering matrix, this paper proposes to incorporate the latter into MKL to improve the situation. In particular, to justify the incorporation of radius information, we strictly comply with the radius-margin bound of support vector machines (SVMs) and thus focus on the l2-norm soft-margin SVM classifier. Detailed theoretical analysis shows how the proposed approach effectively preserves the merits of incorporating the radius of the MEB and how the resulting optimization is efficiently solved. Moreover, the proposed approach achieves the following advantages over its counterparts: 1) it is more robust in the presence of outliers or noisy training samples; 2) it is more computationally efficient, because it avoids the quadratic optimization for computing the radius at each iteration; and 3) it is readily solvable by existing off-the-shelf MKL packages. Comprehensive experiments on the University of California, Irvine (UCI), protein subcellular localization, and Caltech-101 data sets demonstrate the effectiveness and efficiency of our approach.

79 citations
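
The key quantity in the abstract above, the trace of the total data scattering matrix, can be computed from the Gram matrix alone: tr(S_T) = tr(K) - (1/n) Σ_ij K_ij, the trace of the centered Gram matrix. The sketch below computes this surrogate and uses it to rescale base kernels before handing them to an off-the-shelf MKL solver; the rescaling step is our assumption about one simple way to fold in the radius information, not the paper's exact formulation.

```python
import numpy as np

def scatter_trace(K):
    # Trace of the total scattering matrix in feature space, computed from
    # the Gram matrix alone: tr(S_T) = tr(K) - (1/n) * sum_ij K_ij,
    # i.e., the trace of the centered Gram matrix.
    n = K.shape[0]
    return np.trace(K) - K.sum() / n

def rescale_by_scatter(kernels):
    # Assumed simplification: rescale every base kernel so its scatter
    # trace equals 1, then pass the result to any off-the-shelf MKL solver.
    return [K / scatter_trace(K) for K in kernels]
```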

Proceedings Article
09 Jul 2012
TL;DR: A classification scheme that combines HOG and Haar descriptors via Generalized Multiple Kernel Learning (GMKL), which learns the trade-off between the two descriptors by constructing an optimal kernel from many base kernels; experimental results show that the HOG+Haar fusion with GMKL outperforms the other three classification schemes.
Abstract: Vehicle detection in wide area motion imagery (WAMI) is an important problem in computer science which, if solved, supports urban traffic management, emergency responder routing, and accident discovery. Due to the large amount of camera motion, the small number of pixels on target objects, and the low frame rate of the WAMI data, vehicle detection is much more challenging than the corresponding task in traditional video imagery. Since an object in wide area imagery covers only a few pixels, the shape, texture, and appearance information available for vehicle detection and classification is limited. Histogram of Gradients (HOG) and Haar descriptors have been used successfully in human and face detection, using only the intensity of an image, and the two descriptors have different advantages. In this paper, we propose a classification scheme that combines HOG and Haar descriptors by using Generalized Multiple Kernel Learning (GMKL), which can learn the trade-off between HOG and Haar descriptors by constructing an optimal kernel from many base kernels. Due to the large number of Haar features, we first use a cascade of boosting classifiers, a variant of Gentle AdaBoost with the ability to perform feature selection, to select a small number of features from a huge feature set. Then, we combine the HOG descriptors and the selected Haar features and use GMKL to train the final classifier. In our experiments, we evaluate the performance of HOG+Haar with GMKL, HOG with GMKL, Haar with GMKL, and the cascaded boosting classifier on the Columbus Large Image Format (CLIF) dataset. Experimental results show that the fusion of HOG+Haar with GMKL outperforms the other three classification schemes.

79 citations
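
A minimal stand-in for the descriptor fusion described above: instead of GMKL's gradient-based weight learning, the sketch below picks the convex combination of a HOG kernel and a Haar kernel that maximizes cross-validated SVM accuracy. The names X_hog and X_haar (precomputed descriptor matrices) are assumptions for illustration.

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def fuse_descriptors(X_hog, X_haar, y, betas=np.linspace(0.0, 1.0, 11)):
    # Build one RBF kernel per descriptor and search over convex
    # combinations beta * K_hog + (1 - beta) * K_haar, scoring each by
    # 3-fold cross-validated SVM accuracy on the precomputed kernel.
    K_hog, K_haar = rbf_kernel(X_hog), rbf_kernel(X_haar)
    def score(b):
        K = b * K_hog + (1.0 - b) * K_haar
        return cross_val_score(SVC(kernel="precomputed"), K, y, cv=3).mean()
    best = max(betas, key=score)
    return best, best * K_hog + (1.0 - best) * K_haar
```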

Journal ArticleDOI
TL;DR: A simple yet effective neighbor-kernel-based MKC algorithm is proposed; as a by-product, careful theoretical analysis reveals an interesting insight into the exact-rank constraint in ridge regression: it back-projects the solution of the unconstrained counterpart onto its principal components.
Abstract: Multiple kernel clustering (MKC) has been intensively studied during the last few decades. Even though they demonstrate promising clustering performance in various applications, existing MKC algorithms do not sufficiently consider the intrinsic neighborhood structure among base kernels, which could adversely affect the clustering performance. In this paper, we propose a simple yet effective neighbor-kernel-based MKC algorithm to address this issue. Specifically, we first define a neighbor kernel, which can be utilized to preserve the block diagonal structure and strengthen the robustness against noise and outliers among base kernels. After that, we linearly combine these base neighbor kernels to extract a consensus affinity matrix through an exact-rank-constrained subspace segmentation. The naturally possessed block diagonal structure of neighbor kernels better serves the subsequent subspace segmentation, and in turn, the extracted shared structure is further refined through subspace segmentation based on the combined neighbor kernels. In this manner, the above two learning processes can be seamlessly coupled and negotiate with each other to achieve better clustering. Furthermore, we carefully design an efficient iterative optimization algorithm with proven convergence to address the resultant optimization problem. As a by-product, we reveal an interesting insight into the exact-rank constraint in ridge regression by careful theoretical analysis: it back-projects the solution of the unconstrained counterpart to its principal components. Comprehensive experiments have been conducted on several benchmark data sets, and the results demonstrate the effectiveness of the proposed algorithm.

79 citations
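
To make the neighbor-kernel idea above concrete, the sketch below sparsifies each base kernel by keeping only the k largest similarities per row and symmetrizing, then uniformly averages the neighbor kernels and clusters the resulting affinity. This is a simplified reading: the paper learns the combination weights jointly with an exact-rank-constrained subspace segmentation rather than averaging uniformly.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

def neighbor_kernel(K, k=10):
    # Keep only the k largest similarities in each row (the "neighbors"),
    # then symmetrize; this preserves the local block-diagonal structure
    # while suppressing noisy long-range similarities.
    n = K.shape[0]
    NK = np.zeros_like(K)
    cols = np.argsort(K, axis=1)[:, -k:]      # k nearest neighbors per row
    rows = np.repeat(np.arange(n), k)
    NK[rows, cols.ravel()] = K[rows, cols.ravel()]
    return 0.5 * (NK + NK.T)

def consensus_clustering(kernels, n_clusters, k=10):
    # Uniform combination of the neighbor kernels; the paper instead learns
    # the weights jointly with the subspace segmentation.
    A = sum(neighbor_kernel(K, k) for K in kernels) / len(kernels)
    return SpectralClustering(n_clusters=n_clusters,
                              affinity="precomputed").fit_predict(A)
```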

Journal ArticleDOI
TL;DR: This paper proposes a novel MKC method that departs from the popular existing approaches, and an efficient two-step iterative algorithm is developed to solve the formulated optimization problem.

79 citations


Network Information
Related Topics (5)

Topic                         | Papers  | Citations | Related
Convolutional neural network  | 74.7K   | 2M        | 89%
Deep learning                 | 79.8K   | 2.1M      | 89%
Feature extraction            | 111.8K  | 2.1M      | 87%
Feature (computer vision)     | 128.2K  | 1.7M      | 87%
Image segmentation            | 79.6K   | 1.8M      | 86%
Performance
Metrics
No. of papers in the topic in previous years

Year | Papers
2023 | 21
2022 | 44
2021 | 72
2020 | 101
2019 | 113
2018 | 114