scispace - formally typeset
Topic

Multiple kernel learning

About: Multiple kernel learning is a research topic. Over its lifetime, 1,630 publications on this topic have received 56,082 citations.
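The core idea of multiple kernel learning — combining several base kernels into one learned kernel — rests on the fact that a convex combination of positive semi-definite kernels is itself a valid kernel. A minimal numpy sketch of this (the weights here are fixed and illustrative; actual MKL methods learn them from data):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 4))

def rbf_kernel(A, B, gamma):
    # Gaussian (RBF) kernel matrix between the rows of A and B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def linear_kernel(A, B):
    return A @ B.T

# A convex combination of valid kernels is again a valid (PSD) kernel,
# which is what lets MKL search over weighted kernel mixtures.
weights = np.array([0.3, 0.7])  # hypothetical fixed weights
K = weights[0] * rbf_kernel(X, X, 0.5) + weights[1] * linear_kernel(X, X)

eigvals = np.linalg.eigvalsh(K)
print(eigvals.min() >= -1e-8)  # combined kernel stays PSD
```

The combined Gram matrix `K` can then be handed to any kernel method (e.g. an SVM with a precomputed kernel).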


Papers
Proceedings ArticleDOI
01 Nov 2020
TL;DR: This work presents SupMMD, a novel technique for generic and update summarization based on the maximum mean discrepancy from kernel two-sample testing that combines both supervised learning for salience and unsupervised learning for coverage and diversity.
Abstract: Most work on multi-document summarization has focused on generic summarization of information present in each individual document set. However, the under-explored setting of update summarization, where the goal is to identify the new information present in each set, is of equal practical interest (e.g., presenting readers with updates on an evolving news topic). In this work, we present SupMMD, a novel technique for generic and update summarization based on the maximum mean discrepancy from kernel two-sample testing. SupMMD combines both supervised learning for salience and unsupervised learning for coverage and diversity. Further, we adapt multiple kernel learning to make use of similarity across multiple information sources (e.g., text features and knowledge based concepts). We show the efficacy of SupMMD in both generic and update summarization tasks by meeting or exceeding the current state-of-the-art on the DUC-2004 and TAC-2009 datasets.
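The maximum mean discrepancy that SupMMD builds on can be estimated directly from kernel evaluations. A small sketch of the (biased) two-sample MMD estimator with an RBF kernel, on synthetic data rather than the paper's summarization setup:

```python
import numpy as np

def mmd2(X, Y, gamma=1.0):
    # Biased squared MMD between samples X and Y under an RBF kernel:
    # mean k(X,X) + mean k(Y,Y) - 2 * mean k(X,Y).
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)
    return k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean()

rng = np.random.default_rng(1)
same = mmd2(rng.normal(size=(100, 2)), rng.normal(size=(100, 2)))
diff = mmd2(rng.normal(size=(100, 2)), rng.normal(2.0, 1.0, size=(100, 2)))
print(same < diff)  # a shifted distribution yields a larger discrepancy
```

In a summarization setting, the two "samples" would be kernel representations of a candidate summary and the document set, with MMD driving coverage.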

1 citation

Proceedings ArticleDOI
11 Jul 2009
TL;DR: A multiple kernel least squares support vector machine (LSSVM) is developed to realize multiple kernel classification in empirical kernel mapping space and is shown to be feasible and effective.
Abstract: Multiple kernel methods are superior to single kernel methods at handling multiple, heterogeneous data sources. Unlike existing multiple kernel methods, which mainly work in an implicit kernel space, we propose a novel multiple kernel method in empirical kernel mapping space, where the combination of kernels can be treated as the weighted fusion of empirical kernel mapping samples. Based on this fact, we develop a multiple kernel least squares support vector machine (LSSVM) to realize multiple kernel classification in empirical kernel mapping space. The experiments illustrate that the proposed multiple kernel LSSVM method is feasible and effective.
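For reference, a plain LSSVM with a precomputed combined kernel reduces training to a single linear system. The sketch below uses a fixed kernel mixture in the implicit kernel space, so it is a baseline rather than the paper's empirical-kernel-mapping method; the data and mixture weights are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(40, 2))
y = np.where(X[:, 0] + X[:, 1] > 0, 1.0, -1.0)

def rbf(A, B, g=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-g * d2)

# Hypothetical fixed kernel weights (a learned combination in real MKL).
K = 0.6 * rbf(X, X) + 0.4 * (X @ X.T)

# LSSVM dual: equality constraints turn the SVM QP into one linear
# system  [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y].
gamma = 10.0
n = len(y)
A = np.zeros((n + 1, n + 1))
A[0, 1:] = 1.0
A[1:, 0] = 1.0
A[1:, 1:] = K + np.eye(n) / gamma
sol = np.linalg.solve(A, np.concatenate([[0.0], y]))
b, alpha = sol[0], sol[1:]

pred = np.sign(K @ alpha + b)
print((pred == y).mean())  # training accuracy
```

The appeal of LSSVM here is exactly what the abstract notes: with any fused kernel matrix, fitting costs one linear solve instead of a quadratic program.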

1 citation

Posted Content
TL;DR: In this article, a data-driven method is proposed to learn a mixture of multiple kernels with random features that is certifiably robust against adversarial inputs; however, the method requires a large number of training samples.
Abstract: We propose a novel data-driven method to learn a mixture of multiple kernels with random features that is certifiably robust against adversarial inputs. Specifically, we consider a distributionally robust optimization of the kernel-target alignment with respect to the distribution of training samples over a distributional ball defined by the Kullback-Leibler (KL) divergence. The distributionally robust optimization problem can be recast as a min-max optimization whose objective function includes a log-sum term. We develop a mini-batch biased stochastic primal-dual proximal method to solve the min-max optimization. To debias the mini-batch algorithm, we use the Gumbel perturbation technique to estimate the log-sum term. We establish theoretical guarantees for the performance of the proposed multiple kernel learning method. In particular, we prove the consistency, asymptotic normality, stochastic equicontinuity, and the minimax rate of the empirical estimators. In addition, based on the notion of Rademacher and Gaussian complexities, we establish distributionally robust generalization bounds that are tighter than previously known bounds, leveraging matrix concentration inequalities. We validate our kernel learning approach for classification with kernel SVMs on synthetic datasets generated by sampling multivariate Gaussian distributions with different variance structures. We also apply our kernel learning approach to the MNIST dataset and evaluate its robustness to perturbation of input images under different adversarial models. More specifically, we examine the robustness of the proposed kernel model selection technique against FGSM, PGM, C&W, and DDN adversarial perturbations, and compare its performance with alternative state-of-the-art multiple kernel learning paradigms.
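Kernel-target alignment, the objective being robustified here, measures how well a candidate kernel matches the ideal kernel yyᵀ. A non-robust sketch that simply grid-searches mixture weights (the paper instead solves a distributionally robust min-max problem with stochastic primal-dual updates; all data below is synthetic):

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(50, 3))
y = np.where(X[:, 0] > 0, 1.0, -1.0)

def rbf(A, g):
    # RBF Gram matrix of the rows of A.
    d2 = ((A[:, None, :] - A[None, :, :]) ** 2).sum(-1)
    return np.exp(-g * d2)

def alignment(K, y):
    # Kernel-target alignment: cosine similarity between K and yy^T.
    Y = np.outer(y, y)
    return (K * Y).sum() / (np.linalg.norm(K) * np.linalg.norm(Y))

K_wide, K_narrow = rbf(X, 0.1), rbf(X, 2.0)

# Grid search over two-kernel mixture weights; higher alignment
# indicates a kernel better matched to the labels.
best = max(
    ((w, alignment(w * K_wide + (1 - w) * K_narrow, y))
     for w in np.linspace(0.0, 1.0, 11)),
    key=lambda t: t[1],
)
print(best)
```

The robust formulation replaces the empirical mean implicit in `alignment` with a worst-case expectation over a KL ball around the sample distribution.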

1 citation

Book ChapterDOI
08 Jun 2018
TL;DR: A multi-kernel multiset integrated canonical correlation analysis (MK-MICCA) framework for subspace learning that maps data into multiple feature spaces via different kernels, enabling it to uncover a variety of geometrical structures in the original data and to outperform single-kernel-based MICCA.
Abstract: Multiset integrated canonical correlation analysis (MICCA) can distinctly express the integral correlation among multiple groups of features, which makes it very powerful for multiple feature extraction. However, it is difficult to capture nonlinear relationships with a linear mapping. To overcome this problem, we propose a multi-kernel multiset integrated canonical correlation analysis (MK-MICCA) framework for subspace learning. In the MK-MICCA framework, the input data of each feature set are mapped into multiple higher-dimensional feature spaces by implicit nonlinear mappings determined by different kernels. This enables MK-MICCA to uncover a variety of different geometrical structures of the original data in the feature spaces. Extensive experimental results on the multiple feature database and the ORL database show that MK-MICCA is very effective and clearly outperforms single-kernel-based MICCA.
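As a drastic simplification of the multiset setting, two-view regularized kernel CCA already illustrates how kernel mappings expose nonlinear cross-view correlation. A sketch on synthetic two-view data sharing one latent signal (the kernel width, regularization constant, and data are all illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 60
z = rng.normal(size=(n, 1))                    # shared latent signal
X1 = np.hstack([z, rng.normal(size=(n, 2))])   # view 1
X2 = np.hstack([-z, rng.normal(size=(n, 2))])  # view 2

def rbf(A, g=0.5):
    d2 = ((A[:, None, :] - A[None, :, :]) ** 2).sum(-1)
    return np.exp(-g * d2)

def center(K):
    # Double-center the Gram matrix (zero-mean features in the RKHS).
    H = np.eye(len(K)) - 1.0 / len(K)
    return H @ K @ H

K1, K2 = center(rbf(X1)), center(rbf(X2))

# Regularized two-view kernel CCA: the leading eigenvalue of
# (K1 + rI)^-1 K2 (K2 + rI)^-1 K1 is the squared canonical correlation.
r = 0.1 * n
I = np.eye(n)
M = np.linalg.solve(K1 + r * I, K2) @ np.linalg.solve(K2 + r * I, K1)
rho2 = float(np.linalg.eigvals(M).real.max())
print(round(rho2, 3))
```

MK-MICCA generalizes this in two directions at once: more than two feature sets, and a weighted combination of several kernels per set.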

1 citation

Proceedings ArticleDOI
26 Nov 2021
TL;DR: In this article, a new method, MKL-LP, which utilizes multiple kernel learning (MKL) and label propagation (LP), is presented on the basis of known microbe-disease associations and multiple microbe/diseases similarities.
Abstract: A growing body of clinical evidence has shown that there are considerable associations between microbes and diseases. Developing computational models to uncover unknown microbe-disease associations, rather than relying on traditional experiments that are usually expensive and time-consuming, is currently a hot research trend. In this paper, a new method, MKL-LP, which utilizes Multiple Kernel Learning (MKL) and Label Propagation (LP), is presented on the basis of known microbe-disease associations and multiple microbe/disease similarities. First, multiple microbe/disease similarities are calculated. Second, to obtain more comprehensive input information, the multiple microbe/disease similarity kernels are fused by MKL into a single microbe/disease kernel. Then, considering that many unverified pairs may in fact be associated, a pre-processing step estimates the association probability of unknown cases in the association matrix using the microbe/disease similarity information, and LP is applied to predict novel microbe-disease associations. Five-fold cross-validation is used to assess the predictive performance of our method in comparison with four other prediction methods. In addition, in a case study of Chronic Obstructive Pulmonary Disease (COPD), 10 of the top 15 candidate microbes for the disease have literature support. These results suggest that MKL-LP can play a significant role in discovering novel microbe-disease associations, providing insights into complicated disease mechanisms and facilitating new approaches to the diagnosis and treatment of disease.
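The label-propagation step can be sketched independently of the kernel fusion: given a fused similarity matrix, association scores are diffused from known microbe-disease pairs to unknown ones. A toy numpy sketch with an invented 4×3 association matrix and a random symmetric similarity (all values illustrative):

```python
import numpy as np

# Toy association matrix (1 = known association);
# rows: microbes, columns: diseases.
A = np.array([[1, 0, 0],
              [1, 1, 0],
              [0, 0, 1],
              [0, 0, 0]], dtype=float)

rng = np.random.default_rng(5)
S = rng.random((4, 4))
S = (S + S.T) / 2            # symmetric microbe similarity "kernel"
np.fill_diagonal(S, 1.0)

# Row-normalize into a transition matrix and propagate labels:
# F <- alpha * W @ F + (1 - alpha) * A, iterated to a fixed point.
W = S / S.sum(axis=1, keepdims=True)
alpha, F = 0.8, A.copy()
for _ in range(200):
    F = alpha * W @ F + (1 - alpha) * A

# Microbe 3 had no known associations; propagation assigns it scores
# borrowed from similar microbes.
print(F[3])
```

MKL-LP additionally propagates on the disease side and seeds the unknown entries with estimated probabilities instead of zeros, per the pre-processing step above.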

1 citation


Network Information
Related Topics (5)
- Convolutional neural network: 74.7K papers, 2M citations, 89% related
- Deep learning: 79.8K papers, 2.1M citations, 89% related
- Feature extraction: 111.8K papers, 2.1M citations, 87% related
- Feature (computer vision): 128.2K papers, 1.7M citations, 87% related
- Image segmentation: 79.6K papers, 1.8M citations, 86% related
Performance
Metrics
No. of papers in the topic in previous years
Year  Papers
2023  21
2022  44
2021  72
2020  101
2019  113
2018  114