Topic

Multiple kernel learning

About: Multiple kernel learning is a research topic. Over the lifetime, 1630 publications have been published within this topic receiving 56082 citations.


Papers
Proceedings ArticleDOI
01 Nov 2018
TL;DR: This work focuses on regression problems in a black-box learning scenario and studies a family of rather general transfer covariance functions, T_*, that can model the similarity heterogeneity of domains through multiple kernel learning.
Abstract: Transfer covariance functions, which can model domain similarities and adaptively control the knowledge transfer across domains, are widely used in Gaussian process (GP) based transfer learning. We focus on regression problems in a black-box learning scenario, and study a family of rather general transfer covariance functions, T_*, that can model the similarity heterogeneity of domains through multiple kernel learning. A necessary and sufficient condition that (i) validates GPs using T_* for any data and (ii) provides semantic interpretations is given. Moreover, building on this condition, we propose a computationally inexpensive model learning rule that can explicitly capture different sub-similarities of domains. Extensive experiments on one synthetic dataset and four real-world datasets demonstrate the effectiveness of the learned GP on the sub-similarity capture and the transfer performance.
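For a rough sense of the construction described above, here is a minimal Python/NumPy sketch of a multi-kernel transfer covariance: several RBF base kernels are combined with nonnegative weights, and cross-domain entries of each are scaled by a per-kernel coefficient in [-1, 1]. The damping form, weights, and lengthscales are illustrative assumptions, not the paper's exact T_* family.

```python
import numpy as np

def rbf(X, Z, lengthscale=1.0):
    """Squared-exponential base kernel."""
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale ** 2)

def transfer_mkl_kernel(X, Z, dom_X, dom_Z, lengthscales, weights, lambdas):
    """Weighted sum of RBF base kernels; entries whose two inputs come from
    different domains are additionally scaled by a per-kernel coefficient
    lambda_q in [-1, 1] that controls how much is transferred."""
    cross = dom_X[:, None] != dom_Z[None, :]      # True where a pair spans domains
    K = np.zeros((X.shape[0], Z.shape[0]))
    for ls, w, lam in zip(lengthscales, weights, lambdas):
        Kq = rbf(X, Z, ls)
        K += w * np.where(cross, lam * Kq, Kq)
    return K

# toy usage: 5 source and 5 target points in 2-D
rng = np.random.default_rng(0)
X = rng.normal(size=(10, 2))
dom = np.array([0] * 5 + [1] * 5)                  # domain labels
K = transfer_mkl_kernel(X, X, dom, dom,
                        lengthscales=[0.5, 1.0, 2.0],
                        weights=[0.2, 0.5, 0.3],   # nonnegative kernel weights
                        lambdas=[0.9, 0.1, -0.2])  # per-kernel transfer strength
print(K.shape)                                      # (10, 10)
```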

9 citations

Book ChapterDOI
07 Apr 2010
TL;DR: The framework of subdifferential calculus is harnessed to computationally solve the constrained nondifferentiable convex optimization problem that arises in the SKM training criterion, making it applicable to arbitrary kernel-based modalities of object representation.
Abstract: The Support Kernel Machine (SKM) and the Relevance Kernel Machine (RKM) are two principles for selectively combining object-representation modalities of different kinds by means of incorporating supervised selectivity into the classical kernel-based SVM. The former principle consists in rigidly selecting a subset of presumably informative support kernels and excluding the others, whereas the latter assigns positive weights to all of them. The RKM algorithm was fully elaborated in previous publications; however, the previously published algorithm implementing the SKM principle of selectivity supervision is applicable only to real-valued features. The present paper fills this gap by harnessing the framework of subdifferential calculus to computationally solve the problem of constrained nondifferentiable convex optimization that arises in the SKM training criterion applicable to arbitrary kernel-based modalities of object representation.
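The abstract does not spell out the solver, so the sketch below only illustrates the generic projected-subgradient pattern that subdifferential-calculus approaches rely on for constrained nondifferentiable convex problems; the toy max-of-absolute-values objective and the simplex constraint are placeholders, not the SKM training criterion.

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection onto the probability simplex."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > (css - 1))[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

def projected_subgradient(subgrad, x0, steps=200, lr=0.1):
    """Minimize a convex, possibly nondifferentiable function over the
    simplex: follow any subgradient, then project back onto the constraint."""
    x = project_simplex(np.asarray(x0, dtype=float))
    for t in range(1, steps + 1):
        x = project_simplex(x - (lr / np.sqrt(t)) * subgrad(x))
    return x

# toy nondifferentiable objective: f(w) = max_i |w_i - c_i|, minimized at w = c
c = np.array([0.7, 0.2, 0.1])
def subgrad(w):
    i = np.argmax(np.abs(w - c))
    g = np.zeros_like(w)
    g[i] = np.sign(w[i] - c[i])
    return g

print(projected_subgradient(subgrad, np.ones(3) / 3))   # approaches [0.7, 0.2, 0.1]
```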

9 citations

Proceedings ArticleDOI
Xiaobin Zhu, Jing Liu, Jinqiao Wang, Yikai Fang, Hanqing Lu
01 Sep 2012
TL;DR: A novel high-frequency feature based on optical flow (HFOF) is introduced and multiple kernel learning (MKL) is adopted to train a classifier for anomaly detection.
Abstract: In this paper, we propose a novel solution for anomaly detection in crowd scenes by jointly modeling the appearance and dynamics of motion. First, a novel high-frequency feature based on optical flow (HFOF) is introduced; it captures the dynamic information of optical flow well. In addition, we adopt two other types of features, namely the multi-scale histogram of optical flow (MHOF) and dynamic textures (DT). MHOF preserves motion direction information, while DT captures appearance variation. The three types of features complement each other in modeling crowd motions. Finally, multiple kernel learning (MKL) is adopted to train a classifier for anomaly detection. Experiments are conducted on a publicly available dataset of escape scenarios from the University of Minnesota and a challenging dataset collected from the Internet. The results of comparative experiments show promising performance against other related work.
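A minimal Python sketch of the final step, assuming precomputed per-feature-type kernels: random arrays stand in for the HFOF, MHOF, and DT descriptors, and fixed weights replace the MKL solver that the paper uses to learn the combination jointly with the classifier.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics.pairwise import rbf_kernel

# placeholder feature blocks standing in for the HFOF, MHOF and DT descriptors
rng = np.random.default_rng(0)
n = 200
feats = {"hfof": rng.normal(size=(n, 32)),
         "mhof": rng.normal(size=(n, 16)),
         "dt":   rng.normal(size=(n, 64))}
y = rng.integers(0, 2, size=n)            # toy labels: 1 = anomalous, 0 = normal

# fixed nonnegative kernel weights; an MKL solver would learn these jointly
weights = {"hfof": 0.4, "mhof": 0.3, "dt": 0.3}
K = sum(w * rbf_kernel(feats[name]) for name, w in weights.items())

# SVM trained on the precomputed combined kernel
clf = SVC(kernel="precomputed").fit(K, y)
print(clf.score(K, y))
```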

9 citations

Posted Content
TL;DR: This study shows that adversarial features added to a view can make existing approaches with the min-max formulation in multiple kernel clustering yield unfavorable clusters, and proposes a multiple kernel clustering method with the min-max framework that aims to be robust to such adversarial perturbation.
Abstract: Multiple kernel learning is a type of multiview learning that combines different data modalities by capturing view-specific patterns using kernels. Although supervised multiple kernel learning has been extensively studied, until recently only a few unsupervised approaches had been proposed. Meanwhile, adversarial learning has received much attention, and many works have been proposed to defend against adversarial examples. However, little is known about the effect of adversarial perturbation in the context of multiview learning, and even less in the unsupervised case. In this study, we show that adversarial features added to a view can make the existing approaches with the min-max formulation in multiple kernel clustering yield unfavorable clusters. To address this problem, and inspired by recent works in adversarial learning, we propose a multiple kernel clustering method with the min-max framework that aims to be robust to such adversarial perturbation. We evaluate the robustness of our method on simulated data under different types of adversarial perturbations and show that it outperforms several existing methods. In a real-data analysis, we demonstrate the utility of our method on a real-world problem.
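The robust min-max formulation itself is not given in the abstract; the sketch below shows only the plain multiple-kernel k-means alternation (squared-weight kernel combination, spectral relaxation of the indicator, closed-form weight update) that such clustering methods typically build on, with toy data and kernels chosen purely for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics.pairwise import rbf_kernel

def multiple_kernel_kmeans(kernels, k, iters=10):
    """Plain MKKM-style alternation: combine kernels with squared weights,
    take a spectral relaxation of the cluster indicator, then update the
    weights in closed form from the per-kernel distortion."""
    mu = np.full(len(kernels), 1.0 / len(kernels))
    for _ in range(iters):
        K = sum(w ** 2 * Kp for w, Kp in zip(mu, kernels))
        _, vecs = np.linalg.eigh(K)
        H = vecs[:, -k:]                      # relaxed indicator: top-k eigenvectors
        loss = np.array([np.trace(Kp) - np.trace(H.T @ Kp @ H) for Kp in kernels])
        mu = 1.0 / np.maximum(loss, 1e-12)    # closed-form weights, mu_p ∝ 1/loss_p
        mu /= mu.sum()
    labels = KMeans(n_clusters=k, n_init=10).fit_predict(H)
    return labels, mu

# toy two-cluster data viewed through two RBF kernels
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 5)), rng.normal(3, 1, (50, 5))])
kernels = [rbf_kernel(X, gamma=g) for g in (0.1, 1.0)]
labels, mu = multiple_kernel_kmeans(kernels, k=2)
print(mu, np.bincount(labels))
```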

9 citations

Journal ArticleDOI
TL;DR: Numerical studies demonstrate that the proposed MKL-based prediction methods work well in finite samples and can potentially outperform models constructed assuming linear effects or ignoring the group knowledge.
Abstract: Attempts to predict prognosis in cancer patients using high-dimensional genomic data such as gene expression in tumor tissue can be made difficult by the large number of features and the potential complexity of the relationship between features and the outcome. Integrating prior biological knowledge into risk prediction with such data by grouping genomic features into pathways and networks reduces the dimensionality of the problem and could improve prediction accuracy. Additionally, such knowledge-based models may be more biologically grounded and interpretable. Prediction could potentially be further improved by allowing for complex nonlinear pathway effects. The kernel machine framework has been proposed as an effective approach for modeling the nonlinear and interactive effects of genes in pathways for both censored and noncensored outcomes. When multiple pathways are under consideration, one may efficiently select informative pathways and aggregate their signals via multiple kernel learning (MKL), which has been proposed for prediction of noncensored outcomes. In this paper, we propose MKL methods for censored survival outcomes. We derive our approach for a general survival modeling framework with a convex objective function and illustrate its application under the Cox proportional hazards and semiparametric accelerated failure time models. Numerical studies demonstrate that the proposed MKL-based prediction methods work well in finite samples and can potentially outperform models constructed assuming linear effects or ignoring the group knowledge. The methods are illustrated with an application to two cancer data sets.
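As a hedged sketch of the pathway-kernel aggregation described above: one kernel per hypothetical pathway block is combined with fixed nonnegative weights, and an ordinary Cox model is then fit on a few kernel-PCA components of the combined kernel as a stand-in for the paper's kernel-machine survival formulation. The pathway slices, weights, simulated survival data, and the scikit-learn/lifelines dependencies are all assumptions.

```python
import numpy as np
import pandas as pd
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.decomposition import KernelPCA
from lifelines import CoxPHFitter

# toy expression matrix: 120 samples x 300 genes split into 3 hypothetical pathways
rng = np.random.default_rng(1)
X = rng.normal(size=(120, 300))
pathways = {"p1": slice(0, 100), "p2": slice(100, 200), "p3": slice(200, 300)}

# one kernel per pathway, aggregated with fixed nonnegative weights
# (an MKL procedure would instead learn these from the survival objective)
weights = {"p1": 0.5, "p2": 0.3, "p3": 0.2}
K = sum(w * rbf_kernel(X[:, pathways[p]]) for p, w in weights.items())

# surrogate downstream fit: an ordinary Cox model on a few kernel-PCA
# components of the combined kernel, standing in for a kernel Cox model
Z = KernelPCA(n_components=5, kernel="precomputed").fit_transform(K)
df = pd.DataFrame(Z, columns=[f"z{i}" for i in range(5)])
df["time"] = rng.exponential(scale=10.0, size=120)    # simulated survival times
df["event"] = rng.integers(0, 2, size=120)            # simulated event indicator
CoxPHFitter().fit(df, duration_col="time", event_col="event").print_summary()
```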

9 citations


Network Information
Related Topics (5)
Convolutional neural network: 74.7K papers, 2M citations, 89% related
Deep learning: 79.8K papers, 2.1M citations, 89% related
Feature extraction: 111.8K papers, 2.1M citations, 87% related
Feature (computer vision): 128.2K papers, 1.7M citations, 87% related
Image segmentation: 79.6K papers, 1.8M citations, 86% related
Performance Metrics
No. of papers in the topic in previous years:

Year    Papers
2023    21
2022    44
2021    72
2020    101
2019    113
2018    114