Topic

Multiple kernel learning

About: Multiple kernel learning is a research topic. Over the lifetime, 1630 publications have been published within this topic receiving 56082 citations.


Papers
Proceedings ArticleDOI
14 Nov 2015
TL;DR: This paper introduces MKL to OED-based active learning, proposing globalised and localized multiple kernel active learning methods, and demonstrates that the proposed methods outperform existing OED-based active learning methods.
Abstract: In classification tasks, labeled data is a necessity but is sometimes difficult or expensive to obtain; unlabeled data, by contrast, is usually abundant. Recently, various active learning algorithms have been proposed to alleviate this issue by selecting the most informative data points to label. One family of active learning methods comes from Optimum Experimental Design (OED) in statistics. Instead of selecting data points one by one iteratively, OED-based approaches select data in a one-shot manner: a fixed-size subset is selected from the unlabeled dataset for manual labeling. These methods usually use kernels to represent pairwise similarities between data points. It is well known that choosing optimal kernel types (e.g. Gaussian kernel) and kernel parameters (e.g. kernel width) is tricky, and a common way to resolve this is Multiple Kernel Learning (MKL), i.e., constructing several candidate kernels and merging them to form a consensus kernel. There are different ways to combine multiple kernels; one, called the globalised approach, assigns a weight to each candidate kernel. In practice, different data points in the same candidate kernel may not contribute equally to the consensus kernel, which requires assigning different weights to different data points within a candidate kernel, leading to the localized approach. In this paper, we introduce MKL to OED-based active learning; specifically, we propose globalised and localized multiple kernel active learning methods. Our experiments on six benchmark datasets demonstrate that the proposed methods outperform existing OED-based active learning methods.
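The two combination schemes described in the abstract can be sketched as follows. This is a minimal illustration with hypothetical function names: the globalised form K = Σ_m μ_m K_m follows directly from the abstract, while the localized form K_ij = Σ_m β_m(i) β_m(j) K_m[i,j] is one standard per-data-point parameterisation assumed here, not necessarily the exact one used in the paper.

```python
import numpy as np

def globalised_consensus(kernels, mu):
    # One weight per candidate kernel: K = sum_m mu[m] * K_m
    return sum(w * K for w, K in zip(mu, kernels))

def localized_consensus(kernels, beta):
    # One weight per data point per candidate kernel:
    # K[i, j] = sum_m beta[m, i] * beta[m, j] * K_m[i, j]
    # The elementwise (Schur) product of PSD matrices is PSD,
    # so the consensus kernel remains a valid kernel.
    return sum(np.outer(b, b) * K for b, K in zip(beta, kernels))
```

Note that the localized weights multiply in pairs (β_m(i)β_m(j)) precisely so that positive semidefiniteness of each candidate kernel is preserved in the combination.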

7 citations

Journal ArticleDOI
Mehmet Gönen1
TL;DR: A Bayesian binary classification framework to integrate gene set analysis and nonlinear predictive modeling is proposed and is able to obtain comparable or even better predictive performance than a baseline Bayesian nonlinear algorithm and to identify sparse sets of relevant genes and gene sets on all datasets.
Abstract: Identifying molecular signatures of disease phenotypes is studied using two mainstream approaches: (i) Predictive modeling methods such as linear classification and regression algorithms are used to find signatures predictive of phenotypes from genomic data, which may not be robust due to limited sample size or the highly correlated nature of genomic data. (ii) Gene set analysis methods are used to find gene sets on which phenotypes are linearly dependent by bringing prior biological knowledge into the analysis, which may not capture more complex nonlinear dependencies. Thus, formulating an integrated model of gene set analysis and nonlinear predictive modeling is of great practical importance. In this study, we propose a Bayesian binary classification framework to integrate gene set analysis and nonlinear predictive modeling. We then generalize this formulation to a multitask learning setting to model multiple related datasets conjointly. Our main novelty is the probabilistic nonlinear formulation that enables us to robustly capture nonlinear dependencies between genomic data and phenotype even with small sample sizes. We demonstrate the performance of our algorithms using repeated random subsampling validation experiments on two cancer and two tuberculosis datasets by predicting important disease phenotypes from genome-wide gene expression data. We are able to obtain comparable or even better predictive performance than a baseline Bayesian nonlinear algorithm and to identify sparse sets of relevant genes and gene sets on all datasets. We also show that our multitask learning formulation enables us to further improve the generalization performance and to better understand biological processes behind disease phenotypes.
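One plausible way to couple gene set analysis with kernel-based modeling, as the abstract describes, is to build one kernel per gene set over the expression matrix and then learn a (sparse) weighted combination; selecting which kernels receive nonzero weight identifies the relevant gene sets. The sketch below only sets up the per-gene-set kernels; the function names, the choice of a Gaussian kernel, and the bandwidth are illustrative assumptions, not the paper's exact Bayesian formulation.

```python
import numpy as np

def gaussian_kernel(X, gamma):
    # K[i, j] = exp(-gamma * ||x_i - x_j||^2)
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-gamma * d2)

def gene_set_kernels(X, gene_sets, gamma=1.0):
    # X: samples x genes expression matrix.
    # gene_sets: list of index arrays, one per prior-knowledge gene set.
    # Returns one kernel per gene set, computed only on that set's genes,
    # so each kernel encodes sample similarity restricted to one pathway.
    return [gaussian_kernel(X[:, idx], gamma) for idx in gene_sets]
```

A sparsity-inducing prior or penalty on the kernel weights would then drive most gene sets' contributions to zero, yielding the sparse gene set selection the abstract reports.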

7 citations

Journal ArticleDOI
TL;DR: A novel method for solving facial expression recognition (FER) tasks which uses a self-adaptive weighted synthesised local directional pattern (SW-SLDP) descriptor integrating sparse autoencoder (SA) features based on improved multiple kernel learning (IMKL) strategy is presented.
Abstract: This study presents a novel method for solving facial expression recognition (FER) tasks which uses a self-adaptive weighted synthesised local directional pattern (SW-SLDP) descriptor integrating sparse autoencoder (SA) features based on an improved multiple kernel learning (IMKL) strategy. The authors' work includes three parts. Firstly, the authors propose a novel SW-SLDP feature descriptor which divides facial images into patches and extracts sub-block features synthetically according to both distribution information and directional intensity contrast. Self-adaptive weights are then assigned to each sub-block feature according to the projection error between the expressional image and the neutral image of each patch, which highlights areas containing more expressional texture information. Secondly, to extract a discriminative high-level feature, they introduce SA for feature representation, whose hidden-layer representation captures more comprehensive information. Finally, to combine the above two kinds of features, an IMKL strategy is developed by effectively integrating both soft margin learning and intrinsic local constraints, which is robust to noisy conditions and thus improves classification performance. Extensive experimental results indicate their model achieves performance competitive with or better than existing representative FER methods.

7 citations

Proceedings ArticleDOI
15 Dec 2011
TL;DR: A novel dance posture based annotation model by combining features using Multiple Kernel Learning (MKL) and a novel feature representation which represents the local texture properties of the image is proposed.
Abstract: We present a novel dance posture based annotation model that combines features using Multiple Kernel Learning (MKL). We propose a novel feature representation that captures the local texture properties of the image. The annotation model is defined on a directed acyclic graph structure using the binary MKL algorithm. The bag-of-words model is applied for image representation. The experiments have been performed on an image collection belonging to two Indian classical dances (Bharatnatyam and Odissi). The annotation model has been tested using SIFT and the proposed feature individually, and by optimally combining both features. The experiments show promising results.
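The directed-acyclic-graph structure over binary classifiers mentioned in the abstract is commonly realised as a decision DAG: pairwise classifiers eliminate one candidate class per node until a single class remains. The sketch below shows that elimination scheme; the `classifiers` mapping and its +1/-1 interface are hypothetical stand-ins for trained binary MKL classifiers.

```python
def ddag_predict(classifiers, classes, x):
    # classifiers[(a, b)](x) returns +1 to vote for class a, -1 for class b.
    # Each round tests the first remaining class against the last and
    # eliminates the loser, so k classes need only k - 1 evaluations.
    remaining = list(classes)
    while len(remaining) > 1:
        a, b = remaining[0], remaining[-1]
        if classifiers[(a, b)](x) > 0:
            remaining.pop()      # eliminate b
        else:
            remaining.pop(0)     # eliminate a
    return remaining[0]
```

The appeal of this layout for annotation is that each internal node can use its own optimally combined kernel, while prediction cost stays linear in the number of classes.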

7 citations

Book ChapterDOI
01 Jan 2013
TL;DR: In this article, the authors discuss a general class of sparsity regularization methods, in which the regularizer can be expressed as the composition of a convex function ω with a linear function.
Abstract: During the past few years there has been an explosion of interest in learning methods based on sparsity regularization. In this chapter, we discuss a general class of such methods, in which the regularizer can be expressed as the composition of a convex function ω with a linear function. This setting includes several methods such as the Group Lasso, the Fused Lasso, multi-task learning and many more. We present a general approach for solving regularization problems of this kind, under the assumption that the proximity operator of the function ω is available. Furthermore, we comment on the application of this approach to support vector machines, a technique pioneered by the groundbreaking work of Vladimir Vapnik.
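As an illustration of the proximity-operator assumption in the abstract, the Group Lasso regularizer ω(v) = Σ_g ||v_g||₂ has a well-known closed-form prox: blockwise soft-thresholding, which shrinks each group toward zero and zeroes out groups whose norm falls below the threshold. Function and variable names below are illustrative.

```python
import numpy as np

def prox_group_lasso(v, groups, lam):
    # Proximity operator of lam * sum_g ||v_g||_2 at point v:
    # each group is scaled by max(0, 1 - lam / ||v_g||), so small
    # groups are set exactly to zero (the source of group sparsity).
    out = np.zeros_like(v)
    for g in groups:
        norm = np.linalg.norm(v[g])
        if norm > lam:
            out[g] = (1.0 - lam / norm) * v[g]
    return out
```

With this operator available, a proximal-gradient scheme alternates a gradient step on the smooth loss with a call to the prox, which is exactly the structure the general approach in the chapter relies on.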

7 citations


Network Information
Related Topics (5)
Convolutional neural network: 74.7K papers, 2M citations, 89% related
Deep learning: 79.8K papers, 2.1M citations, 89% related
Feature extraction: 111.8K papers, 2.1M citations, 87% related
Feature (computer vision): 128.2K papers, 1.7M citations, 87% related
Image segmentation: 79.6K papers, 1.8M citations, 86% related
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2023    21
2022    44
2021    72
2020    101
2019    113
2018    114