Topic

Multiple kernel learning

About: Multiple kernel learning is a research topic. Over the lifetime, 1630 publications have been published within this topic receiving 56082 citations.


Papers
Proceedings ArticleDOI
02 Feb 2016
TL;DR: An auto-context modeling approach is developed under the RKHS (Reproducing Kernel Hilbert Space) setting, wherein a series of supervised learners approximate the context model; fusing this context model with image appearance features via multiple kernel learning leads to improved recognition performance compared with using image features alone.
Abstract: In complex visual recognition systems, feature fusion has become crucial to discriminate between a large number of classes. In particular, fusing high-level context information with image appearance models can be effective in object/scene recognition. To this end, we develop an auto-context modeling approach under the RKHS (Reproducing Kernel Hilbert Space) setting, wherein a series of supervised learners are used to approximate the context model. By posing the problem of fusing the context and appearance models using multiple kernel learning, we develop a computationally tractable solution to this challenging problem. Furthermore, we propose to use the marginal probabilities from a kernel SVM classifier to construct the auto-context kernel. In addition to providing better regularization to the learning problem, our approach leads to improved recognition performance in comparison to using only the image features.
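The fusion the abstract describes can be sketched in a few lines of NumPy. This is an illustration with simulated data, not the paper's implementation: the probability vectors stand in for the marginal probabilities of a kernel SVM, and the context kernel is taken to be a simple linear kernel on those probabilities.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated class-probability vectors for 6 samples over 3 classes,
# standing in for the marginal probabilities of a kernel SVM
# (hypothetical data; the paper derives these from a trained classifier).
scores = rng.normal(size=(6, 3))
probs = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)

# Appearance kernel: RBF on raw image features (also simulated).
feats = rng.normal(size=(6, 5))
sq = ((feats[:, None, :] - feats[None, :, :]) ** 2).sum(-1)
K_app = np.exp(-sq / 2.0)

# Auto-context kernel: a linear kernel on the probability vectors.
K_ctx = probs @ probs.T

# MKL-style fusion: a convex combination of the two kernels
# (a learned weight in practice; fixed here for illustration).
w = 0.5
K = w * K_app + (1 - w) * K_ctx
```

Because both component kernels are positive semidefinite, any convex combination is a valid kernel that can be handed to a standard kernel SVM.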

4 citations

Dissertation
14 Jul 2016
TL;DR: The objective of this thesis is to build novel mathematical models for finding critical components and connectivity patterns in complex networks that may reveal hidden, yet insightful, information for the investigation of underlying dynamics of the networks.
Abstract: Networks are all around us; they may be connections of tangible objects in Euclidean space, such as electric power grids, the Internet, and highway systems. Among the wide range of areas in network analysis, finding critical components in large-scale complex networks is one of the most challenging yet fascinating problems. Analytical approaches to finding critical components have been widely studied and extensively used to investigate and characterize the intrinsic dynamics and properties of complex structures in networked systems. The objective of this thesis is to build novel mathematical models for finding critical components and connectivity patterns in complex networks that may reveal hidden, yet insightful, information about the underlying dynamics of those networks. In particular: - I propose mixed integer programming (MIP) models for the k-Cardinality Tree (KCT) problem, which addresses the task of finding critical components. I propose seven variations of MIP models based on connected-component constraints and subtour-elimination constraints. Through investigation of their polyhedral structures and test results, the best-performing model is chosen and compared with a state-of-the-art algorithm from the literature. - I expand the scope to finding critical components in labeled networks. I design two mathematical programming models to determine a k-sized critical component containing the most informative edges for classifying the networks. As a first step, I develop a mixed integer programming (MIP) model for finding critical components in networked-data classification. Because this model is computationally intractable on large-scale data, I build a branch-and-cut algorithm based on Benders decomposition. - I also build a mixed integer nonlinear programming (MINLP) model based on the support vector machine (SVM) formulation.
Rather than solving this MINLP directly, an efficient iterative algorithm combined with multiple kernel learning is proposed. To demonstrate the utility of the proposed models and solution approaches, synthetic networks and brain functional-connectivity networks are used as case studies in this thesis. Through extensive experiments on both data sets, the proposed approaches achieve impressive scalability and comparable or even better performance than state-of-the-art methods. On human brain networks, the approaches detect informative regions of interest (ROIs) and connectivity patterns that may be useful for identifying people who are at risk of developing neurological diseases.

4 citations

Proceedings ArticleDOI
04 Mar 2016
TL;DR: Localized Kernel Learning (LKL) as discussed by the authors extends multiple kernel learning (MKL), which aims to learn not only a classifier/regressor but also the best kernel for the training task, usually as a combination of existing kernel functions; LKL additionally lets the kernel weighting vary across examples.
Abstract: Multiple Kernel Learning, or MKL, extends (kernelized) SVM by attempting to learn not only a classifier/regressor but also the best kernel for the training task, usually from a combination of existing kernel functions. Most MKL methods seek the combined kernel that performs best over every training example, sacrificing performance in some areas to seek a global optimum. Localized kernel learning (LKL) overcomes this limitation by allowing the training algorithm to match a component kernel to the examples that can exploit it best. Several approaches to the localized kernel learning problem have been explored in the last several years. We unify many of these approaches under one simple system and design a new algorithm with improved performance. We also develop enhanced versions of existing algorithms, with an eye on scalability and performance.
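The localization idea above can be sketched with a per-example gating function: each example gets its own softmax weights over the base kernels, so the combined kernel entry K[i, j] mixes the kernels according to both endpoints. This is a minimal illustration with random gate parameters, not any of the paper's specific algorithms.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(8, 4))

def rbf(X, gamma):
    # Gram matrix of the Gaussian RBF kernel at bandwidth gamma.
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

# Two base kernels at different bandwidths.
kernels = [rbf(X, 0.1), rbf(X, 2.0)]

# Gating function: per-example softmax weights over the kernels
# (a linear gate with hypothetical parameters; LKL learns these).
V = rng.normal(size=(4, 2))
logits = X @ V
eta = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)

# Localized combination: K[i, j] = sum_m eta[i, m] * K_m[i, j] * eta[j, m]
K = sum(np.outer(eta[:, m], eta[:, m]) * Km
        for m, Km in enumerate(kernels))
```

Writing the combination as sum_m D_m K_m D_m with D_m = diag(eta[:, m]) shows it stays positive semidefinite, so it remains a valid kernel.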

4 citations

Proceedings ArticleDOI
01 Sep 2015
TL;DR: Experimental results show MKCR converges within reasonable iterations and achieves state-of-the-art performance.
Abstract: We consider the image classification problem via multiple kernel collaborative representation (MKCR). We generalize the kernel collaborative representation based classification to a multi-kernel framework where multiple kernels are jointly learned with the representation coefficients. The intrinsic idea of multiple kernel learning is adopted in our MKCR model. Experimental results show MKCR converges within reasonable iterations and achieves state-of-the-art performance.
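A kernel collaborative representation with a fixed multi-kernel combination can be sketched as a ridge-type solve in the combined RKHS. This is an illustration with simulated data and fixed kernel weights; MKCR learns the weights jointly with the representation coefficients.

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(10, 3))      # training samples
y_query = rng.normal(size=(3,))   # query sample

def rbf(A, B, gamma):
    # Cross-Gram matrix of the RBF kernel between sample sets A and B.
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

# Fixed kernel weights for illustration; MKCR optimizes these.
w = np.array([0.7, 0.3])
gammas = [0.5, 2.0]

K = sum(wm * rbf(X, X, g) for wm, g in zip(w, gammas))
k_y = sum(wm * rbf(X, y_query[None, :], g).ravel() for wm, g in zip(w, gammas))

# Collaborative representation coefficients (regularized solution of
# min_alpha ||phi(y) - Phi alpha||^2 + lam ||alpha||^2 in kernel form).
lam = 0.1
alpha = np.linalg.solve(K + lam * np.eye(len(X)), k_y)
```

Classification then typically proceeds by computing class-wise reconstruction residuals from alpha and assigning the query to the class with the smallest residual.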

4 citations

Proceedings ArticleDOI
07 Mar 2014
TL;DR: This paper investigates the use of the Multiple Kernel Learning (MKL) algorithm to adaptively search for the best linear combination of the considered features, and demonstrates the validity of the approach with a descriptor composed of multiple features aligned with dense trajectories.
Abstract: Automatic action recognition in videos is a challenging computer vision task that has become an active research area in recent years. Existing strategies usually use kernel-based learning algorithms that consider a simple combination of different features, completely disregarding how such features should be integrated to fit the given problem. Since a given feature is most suitable for describing a particular image/video property, adaptively weighting the features can improve the performance of the learning algorithm. In this paper, we investigate the use of the Multiple Kernel Learning (MKL) algorithm to adaptively search for the best linear combination of the considered features. MKL extends support vector machines (SVMs) to work with a weighted linear combination of several single kernels, making it possible to estimate the kernel weights and the underlying SVM parameters simultaneously. To demonstrate the validity of the MKL approach, we consider a descriptor composed of multiple features aligned with dense trajectories. We evaluated our approach on a database containing 36 cooking actions. Results confirm that the use of MKL improves classification performance.
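The weighted linear combination of per-feature kernels can be sketched as follows. The weights here come from a simple kernel-target alignment heuristic rather than the joint SVM optimization the abstract describes, and the data and bandwidths are simulated, so this is only a stand-in for the real MKL solver.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 20
X = rng.normal(size=(n, 6))
y = np.sign(X[:, 0] + 0.1 * rng.normal(size=n))  # toy binary labels

def rbf(X, gamma):
    # Gram matrix of the RBF kernel at bandwidth gamma.
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

# One kernel per "feature channel" (different bandwidths as a proxy).
base = [rbf(X, g) for g in (0.05, 0.5, 5.0)]

# Heuristic weighting by kernel-target alignment, projected onto the
# simplex -- a simple stand-in for the joint MKL/SVM optimization.
Y = np.outer(y, y)
align = np.array([(Km * Y).sum() / (np.linalg.norm(Km) * np.linalg.norm(Y))
                  for Km in base])
w = np.clip(align, 0, None)
w = w / w.sum()

# Combined kernel, ready to pass to an SVM with a precomputed kernel.
K = sum(wm * Km for wm, Km in zip(w, base))
```

In a full MKL system the weights w and the SVM dual variables are optimized together, typically by alternating between the two.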

4 citations


Network Information
Related Topics (5)
Convolutional neural network
74.7K papers, 2M citations
89% related
Deep learning
79.8K papers, 2.1M citations
89% related
Feature extraction
111.8K papers, 2.1M citations
87% related
Feature (computer vision)
128.2K papers, 1.7M citations
87% related
Image segmentation
79.6K papers, 1.8M citations
86% related
Performance Metrics
No. of papers in the topic in previous years

Year	Papers
2023	21
2022	44
2021	72
2020	101
2019	113
2018	114