Topic

Multiple kernel learning

About: Multiple kernel learning is a research topic. Over its lifetime, 1,630 publications have been published on this topic, receiving 56,082 citations.


Papers
Proceedings ArticleDOI
08 Jul 2009
TL;DR: A two-step approach is proposed: at first, the kernel weights are determined by optimizing the kernel-target alignment score and then the combined kernel is used by the standard SVM with a single kernel.
Abstract: In order to achieve good performance in object classification problems, it is necessary to combine information from various image features. Because large margin classifiers are constructed from similarity measures between samples called kernels, finding appropriate feature combinations boils down to designing good kernels among a set of candidates, for example, positive mixtures of predetermined base kernels. There are several ways to determine the mixing weights of multiple kernels: (a) uniform weights, (b) a brute-force search over a validation set, and (c) multiple kernel learning (MKL). MKL is theoretically and technically very attractive, because it learns the kernel weights and the classifier simultaneously based on the margin criterion. However, we often observe that the support vector machine (SVM) with the average kernel works at least as well as MKL. In this paper, we propose, as an alternative, a two-step approach: first, the kernel weights are determined by optimizing the kernel-target alignment score, and then the combined kernel is used by a standard SVM with a single kernel. The experimental results with the VOC 2008 data set [8] show that our simple procedure outperforms the average kernel and MKL.

14 citations
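As an illustration of the two-step recipe described above, here is a minimal Python sketch (not the authors' code): kernel-target alignment scores pick the mixing weights, and a standard precomputed-kernel SVM is then trained on the fixed combination. The toy data, the RBF base kernels, and the helper names are all illustrative assumptions.

```python
import numpy as np
from sklearn.svm import SVC

def alignment(K, y):
    """Kernel-target alignment <K, yy^T>_F / (||K||_F * ||yy^T||_F)."""
    Y = np.outer(y, y)                      # target kernel yy^T
    return (K * Y).sum() / (np.linalg.norm(K) * np.linalg.norm(Y))

def combine_by_alignment(kernels, y):
    """Step 1: weight each base kernel by its (non-negative) alignment score."""
    w = np.array([max(alignment(K, y), 0.0) for K in kernels])
    w = w / w.sum()
    return sum(wi * K for wi, K in zip(w, kernels)), w

def rbf(X, gamma):
    """RBF base kernel with bandwidth parameter gamma."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

# toy data; two RBF kernels with different bandwidths serve as base kernels
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = np.sign(X[:, 0] + 0.1 * rng.normal(size=100))

K_comb, w = combine_by_alignment([rbf(X, 0.1), rbf(X, 1.0)], y)

# Step 2: train a standard SVM on the fixed combined kernel
clf = SVC(kernel="precomputed").fit(K_comb, y)
print("weights:", w, "train acc:", clf.score(K_comb, y))
```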

Book ChapterDOI
01 Jan 2014
TL;DR: This chapter focuses on two core topics within machine learning, supervised and unsupervised learning, and illustrates their application to interpreting these datasets.
Abstract: Machine learning plays a central role in the interpretation of many datasets generated within the biomedical sciences. In this chapter we focus on two core topics within machine learning, supervised and unsupervised learning, and illustrate their application to interpreting these datasets. For supervised learning, we focus on support vector machines (SVMs), a subtopic of kernel-based learning. Kernels can be used to encode many different types of data, from continuous and discrete data through to graph and sequence data. Given the different types of data encountered within bioinformatics, kernel methods are therefore a method of choice in this context. With unsupervised learning we are interested in the discovery of structure within data. We start by considering hierarchical cluster analysis (HCA), given its common usage in this context. We then point out the advantages of Bayesian approaches to unsupervised learning, such as a principled approach to model selection (how many clusters are present in the data) and confidence measures for the assignment of datapoints to clusters. We outline five case studies illustrating these methods. For supervised learning we consider prediction of disease progression in cancer and protein fold prediction. For unsupervised learning we apply HCA to a small colon cancer dataset and then illustrate the use of Bayesian unsupervised learning applied to breast and lung cancer datasets. Finally we consider network inference, which can be approached as an unsupervised or supervised learning task depending on the data available.

14 citations
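As a small illustration of the hierarchical cluster analysis (HCA) step the chapter describes, the following sketch clusters a toy stand-in for an expression matrix with SciPy; the data, the average-linkage method, and the correlation metric are illustrative choices, not the chapter's.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# toy stand-in for an expression matrix: samples x genes, two latent groups
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, size=(10, 50)),
               rng.normal(3, 1, size=(10, 50))])

# average-linkage hierarchical clustering on correlation distance,
# a common pairing for expression data
Z = linkage(X, method="average", metric="correlation")
labels = fcluster(Z, t=2, criterion="maxclust")  # cut the tree into 2 clusters
print(labels)
```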

Book ChapterDOI
03 Nov 2013
TL;DR: The experimental results on two abstract image datasets demonstrate the advantage of the multiple kernel learning framework for image affect detection in terms of feature selection, classification performance, and interpretation.
Abstract: Emotional semantic image retrieval systems aim to incorporate the user's affective states so as to respond adequately to the user's interests. One challenge is to select features specific to image affect detection. Another challenge is to build effective learning models or classifiers to bridge the so-called "affective gap". In this work, we study the affective classification and retrieval of abstract images by applying the multiple kernel learning framework. An image can be represented in different feature spaces, and multiple kernel learning can utilize all of these feature representations simultaneously (i.e., multiview learning), jointly learning the feature representation weights and the corresponding classifier. Our experimental results on two abstract image datasets demonstrate the advantage of the multiple kernel learning framework for image affect detection in terms of feature selection, classification performance, and interpretation.

14 citations
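True MKL solvers learn the kernel weights and the classifier jointly; as a lightweight stand-in for the multiview idea above, this sketch mixes kernels built from two hypothetical feature "views" (colour and texture) and picks the mixing weight on a validation split. All names and data here are illustrative assumptions, not the paper's setup.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.default_rng(2)
n = 120
view_color = rng.normal(size=(n, 8))     # hypothetical colour features
view_texture = rng.normal(size=(n, 16))  # hypothetical texture features
y = np.sign(view_color[:, 0] + view_texture[:, 0])

tr, va = np.arange(0, 80), np.arange(80, n)  # train / validation split
K1, K2 = rbf_kernel(view_color), rbf_kernel(view_texture)

best = (-1.0, 0.0)
for w in np.linspace(0, 1, 11):          # search the mixing weight on a simplex
    K = w * K1 + (1 - w) * K2
    clf = SVC(kernel="precomputed").fit(K[np.ix_(tr, tr)], y[tr])
    acc = clf.score(K[np.ix_(va, tr)], y[va])
    best = max(best, (acc, w))
print("best validation acc %.2f at colour weight %.1f" % best)
```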

Journal ArticleDOI
TL;DR: This paper proposes an efficient multi-kernel classification machine with reduced complexity, named Nyström approximation matrix with Multiple KMHKSs (NMKMHKS), which has a tighter generalization risk bound in terms of Rademacher complexity analysis.
Abstract: Multiple Kernel Learning (MKL) has been demonstrated to improve classification performance effectively, but it incurs a large complexity in some large-scale cases. In this paper, we aim to reduce both the time and space complexities of MKL, and thus propose an efficient multi-kernel classification machine based on the Nyström approximation. First, we generate different kernel matrices K_p for the given data. Second, we apply the Nyström approximation technique to each K_p so as to obtain its corresponding approximation matrix K̃_p. Third, we fuse the multiple generated K̃_p into a final ensemble matrix G̃ with a certain heuristic rule. Finally, we select the Kernelized Modification of Ho–Kashyap algorithm with Squared approximation of the misclassification errors (KMHKS) as the incorporated paradigm and apply G̃ within KMHKS. In doing so, we obtain a multi-kernel classification machine with reduced complexity, named Nyström approximation matrix with Multiple KMHKSs (NMKMHKS). The experimental results validate both the effectiveness and efficiency of the proposed NMKMHKS. The contributions of NMKMHKS are: (1) compared with existing MKL, NMKMHKS reduces the computational complexity of finding the solution from O(Mn³) to O(Mnm²), where M is the number of kernels, n is the number of training samples, and m is the number of columns selected from each K_p; meanwhile, NMKMHKS reduces the space complexity of storing the kernel matrices from O(Mn²) to O(n²); (2) compared with the original KMHKS, NMKMHKS improves the classification performance while keeping a comparable space complexity; (3) NMKMHKS achieves better recognition when the multiple K_p used are strongly correlated; and (4) NMKMHKS has a tighter generalization risk bound in terms of Rademacher complexity analysis.

14 citations
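The core trick above is the Nyström approximation K ≈ C W⁺ Cᵀ built from m sampled columns of the kernel matrix. Below is a minimal sketch of that step alone; the paper's fusion rule and KMHKS solver are not reproduced, and the RBF kernel and uniform landmark sampling are assumptions.

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel

def nystrom(X, m, gamma=1.0, rng=None):
    """Nystrom approximation K ~ C W^+ C^T from m sampled columns."""
    rng = rng or np.random.default_rng(0)
    idx = rng.choice(len(X), size=m, replace=False)  # landmark points
    C = rbf_kernel(X, X[idx], gamma=gamma)           # n x m column block
    W = C[idx]                                       # m x m landmark block
    return C @ np.linalg.pinv(W) @ C.T

X = np.random.default_rng(3).normal(size=(500, 10))
K = rbf_kernel(X, gamma=1.0)   # exact kernel: O(n^2) storage
K_tilde = nystrom(X, m=50)     # approximation built from only n*m kernel entries
print("relative error:", np.linalg.norm(K - K_tilde) / np.linalg.norm(K))
```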

Journal ArticleDOI
TL;DR: A supervised dictionary learning algorithm for action recognition in still images is proposed, followed by a discriminative weighting model based on Local Fisher Discrimination, which takes into account the local manifold structure and discrimination information of local descriptors.

14 citations
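The paper's supervised, Local-Fisher-weighted variant is not detailed here; as a generic sketch of the underlying dictionary-learning step only, the following uses scikit-learn's unsupervised MiniBatchDictionaryLearning on stand-in local descriptors. The data and all parameter choices are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

# toy stand-in for local image descriptors (e.g., dense SIFT), n x d
rng = np.random.default_rng(4)
descriptors = rng.normal(size=(1000, 64))

# learn an overcomplete dictionary and sparse-code each descriptor
dico = MiniBatchDictionaryLearning(n_components=128, alpha=1.0,
                                   batch_size=64, random_state=0)
codes = dico.fit_transform(descriptors)   # sparse codes, n x 128
print(codes.shape, "mean nonzeros per code:", (codes != 0).sum(1).mean())
```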


Network Information
Related Topics (5)

Topic                           Papers    Citations   Relatedness
Convolutional neural network    74.7K     2M          89%
Deep learning                   79.8K     2.1M        89%
Feature extraction              111.8K    2.1M        87%
Feature (computer vision)       128.2K    1.7M        87%
Image segmentation              79.6K     1.8M        86%
Performance
Metrics
No. of papers in the topic in previous years

Year    Papers
2023    21
2022    44
2021    72
2020    101
2019    113
2018    114