Topic

Multiple kernel learning

About: Multiple kernel learning is a research topic. Over the lifetime, 1630 publications have been published within this topic receiving 56082 citations.


Papers
Journal ArticleDOI
Zengmao Wang, Bo Du, Weiping Tu, Lefei Zhang, Dacheng Tao
TL;DR: A multiple kernel active learning framework is proposed that incorporates a group regularizer of distribution information into the estimation of uncertainty and takes advantage of multiple kernel learning to learn a kernel space in which complex structures are well captured by the kernel weights.
Abstract: Due to the lack of labeled data and the complex structures of many data sets, it is very hard to estimate uncertainty and representativeness accurately in active learning. In this paper, we propose a multiple kernel active learning framework that incorporates a group regularizer of distribution information into the estimation of uncertainty. The proposed method takes advantage of multiple kernel learning to learn a kernel space in which complex structures can be well captured by the kernel weights. We also develop an efficient optimization algorithm to solve the resulting problem. Experimental results on twelve UCI benchmark data sets and eight subsets of ImageNet show that the proposed method outperforms several state-of-the-art active learning methods. Moreover, we apply the proposed method to a multiple-feature scenario on Caltech101, where it also obtains promising results compared with the single-feature scenario.
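
As a rough illustration of the multiple kernel learning building block this abstract relies on, the sketch below parameterizes a convex combination of base kernels and refits the weights alternately with an SVM. It is not the authors' full active-learning objective (there is no group regularizer or uncertainty term here); the base kernels, binary-label assumption, and heuristic weight update are illustrative assumptions.

```python
# Minimal MKL sketch: K(beta) = sum_m beta_m * K_m, weights refit alternately
# with an SVM on the combined Gram matrix. Binary labels y in {-1, +1} assumed.
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics.pairwise import rbf_kernel, polynomial_kernel, linear_kernel

def base_kernels(X, Z=None):
    # Three illustrative base kernels; the paper does not specify these.
    Z = X if Z is None else Z
    return [rbf_kernel(X, Z, gamma=0.5),
            polynomial_kernel(X, Z, degree=2),
            linear_kernel(X, Z)]

def fit_mkl(X, y, n_iter=10):
    Ks = base_kernels(X)
    beta = np.full(len(Ks), 1.0 / len(Ks))            # uniform initial weights
    for _ in range(n_iter):
        K = sum(b * Km for b, Km in zip(beta, Ks))    # combined Gram matrix
        svm = SVC(kernel="precomputed", C=1.0).fit(K, y)
        sv = svm.support_
        alpha = np.abs(svm.dual_coef_).ravel()
        # Heuristic update: weight each kernel by the margin "mass" it
        # contributes on the support vectors, then renormalize to the simplex.
        contrib = np.array([alpha @ Km[np.ix_(sv, sv)] @ alpha for Km in Ks])
        beta = contrib / contrib.sum()
    return beta, svm
```

The learned weights beta play the role the abstract assigns to kernel weights: they indicate which feature space captures the data structure best.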

38 citations

Journal ArticleDOI
TL;DR: A new algorithm termed multiple kernel local Fisher discriminant analysis (MKLFDA) is proposed, which produces nonlinear discriminant features via kernel theory and handles multiple image features through multiple base kernels.
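
For orientation, the two ingredients the TL;DR names can be written in generic notation (assumed here for illustration, not taken from the paper): a weighted combination of M base kernels feeding a Fisher-type discriminant criterion in the induced feature space.

```latex
% Generic MKL-plus-discriminant-analysis notation, assumed for illustration.
% S_b and S_w denote the (local) between- and within-class scatter matrices
% built in the feature space induced by K(beta).
\[
  K(\boldsymbol{\beta}) = \sum_{m=1}^{M} \beta_m K_m ,
  \qquad \beta_m \ge 0,\ \ \sum_{m=1}^{M} \beta_m = 1,
\]
\[
  \max_{\boldsymbol{\alpha}}\;
  \frac{\boldsymbol{\alpha}^{\top} S_b\,\boldsymbol{\alpha}}
       {\boldsymbol{\alpha}^{\top} S_w\,\boldsymbol{\alpha}} .
\]
```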

38 citations

Journal Article
TL;DR: An upper bound on the local Rademacher complexity of lp-norm multiple kernel learning is derived, yielding a tighter excess risk bound than global approaches, together with consequences for the excess loss.
Abstract: We derive an upper bound on the local Rademacher complexity of lp-norm multiple kernel learning, which yields a tighter excess risk bound than global approaches. Previous local analyses covered only the case p = 1, while our analysis covers all cases 1 ≤ p ≤ ∞, assuming the feature mappings corresponding to the different kernels to be uncorrelated. We also give a lower bound showing that the upper bound is tight, and derive consequences for the excess loss, namely fast convergence rates of order O(n^(-α/(1+α))), where α is the minimum eigenvalue decay rate of the individual kernels.
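
Written out explicitly, the quoted rate reads as below; the spectral-decay condition is a paraphrase of "α is the minimum eigenvalue decay rate of the individual kernels", not the paper's exact hypothesis.

```latex
% Fast rate from the abstract in standard notation; the decay condition
% paraphrases the abstract's assumption.
\[
  \text{excess risk} \;=\; O\!\left(n^{-\frac{\alpha}{1+\alpha}}\right),
  \qquad\text{assuming}\quad
  \lambda_j^{(m)} \lesssim j^{-\alpha}\ \text{for every base kernel } m .
\]
```

Larger α (faster spectral decay) pushes the exponent α/(1+α) toward 1, i.e. toward the fast O(1/n) regime, while α near 1 gives a rate close to O(n^(-1/2)).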

38 citations

Journal ArticleDOI
TL;DR: An efficient DDoS attack detection technique based on multilevel auto-encoder feature learning is proposed and shown to outperform the compared methods in terms of prediction accuracy.
Abstract: The bidirectional communication infrastructure of smart systems, such as smart grids, is vulnerable to network attacks like distributed denial of service (DDoS), which can be a major concern in the present competitive market. In a DDoS attack, multiple compromised nodes in a communication network flood connection requests, bogus data packets, or incoming messages to targets like database servers, resulting in denial of service for legitimate users. Recently, machine learning based techniques have been explored by researchers to secure the network from DDoS attacks. Under different attack scenarios, measurements can be observed either in an online manner or in batch mode and can be used to build predictive learning systems. In this work, we propose an efficient DDoS attack detection technique based on multilevel auto-encoder feature learning. We train multiple shallow and deep auto-encoders in an unsupervised manner, which are then used to encode the training and test data for feature generation. A final unified detection model is then learned by combining the multilevel features with an efficient multiple kernel learning (MKL) algorithm. We perform experiments on two benchmark DDoS attack databases and their subsets and compare the results with six recent methods. The results show that the proposed method outperforms the compared methods in terms of prediction accuracy.
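
A minimal sketch of the pipeline this abstract describes is given below: auto-encoders of several depths are trained to reconstruct the unlabeled traffic features, their bottleneck codes become feature views, and the views are fused with a simple kernel-weighting step before a final SVM. The architecture sizes, the uniform kernel weights (a stand-in for the learned MKL weights), and the use of scikit-learn's MLPRegressor as an auto-encoder are illustrative assumptions, not the paper's exact models.

```python
# Multilevel auto-encoder features + kernel fusion, sketched with scikit-learn.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVC
from sklearn.metrics.pairwise import rbf_kernel

def train_autoencoder(X, hidden_layers):
    # Reconstruct the input; the layers up to the bottleneck act as the encoder.
    ae = MLPRegressor(hidden_layer_sizes=hidden_layers, activation="relu",
                      max_iter=500, random_state=0)
    ae.fit(X, X)
    return ae

def encode(ae, X, n_encoder_layers):
    # Forward pass through the encoder half only (ReLU hidden units).
    H = X
    for W, b in zip(ae.coefs_[:n_encoder_layers], ae.intercepts_[:n_encoder_layers]):
        H = np.maximum(0.0, H @ W + b)
    return H

def multilevel_mkl_detector(X_train, y_train, X_test):
    # Shallow and deeper auto-encoders give multiple feature "levels";
    # the second value in each pair is the depth of the encoder half.
    specs = [((16,), 1), ((64, 16, 64), 2), ((64, 32, 16, 32, 64), 3)]
    views_tr, views_te = [], []
    for layers, depth in specs:
        ae = train_autoencoder(X_train, layers)
        views_tr.append(encode(ae, X_train, depth))
        views_te.append(encode(ae, X_test, depth))
    # One RBF kernel per view, fused with uniform weights.
    w = np.full(len(specs), 1.0 / len(specs))
    K_tr = sum(wi * rbf_kernel(V, V) for wi, V in zip(w, views_tr))
    K_te = sum(wi * rbf_kernel(Vte, Vtr)
               for wi, Vtr, Vte in zip(w, views_tr, views_te))
    clf = SVC(kernel="precomputed").fit(K_tr, y_train)
    return clf.predict(K_te)
```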

38 citations

Journal ArticleDOI
TL;DR: Empirical validation of a wide range of methods is presented on a protein fold recognition data set, where different biological feature types are available, and on two object recognition data sets, Caltech101 and Caltech256, where multiple feature spaces arise from different image feature extraction methods.
Abstract: This review examines kernel methods for online learning, in particular multiclass classification. We examine margin-based approaches, stemming from Rosenblatt's original perceptron algorithm, as well as nonparametric probabilistic approaches based on the popular Gaussian process framework. We also examine approaches to online learning that use combinations of kernels, that is, online multiple kernel learning. We present empirical validation of a wide range of methods on a protein fold recognition data set, where different biological feature types are available, and on two object recognition data sets, Caltech101 and Caltech256, where multiple feature spaces are available through different image feature extraction methods.
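
To make the "combinations of kernels" idea concrete, the sketch below follows one common deterministic recipe in the spirit the review describes: one kernel perceptron per base kernel, fused by Hedge-style multiplicative weights that penalize kernels whose perceptrons err. The kernel choices and the beta parameter are illustrative assumptions, not the review's exact setup.

```python
# Online multiple kernel learning sketch: per-kernel perceptrons + Hedge weights.
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel, linear_kernel

class OnlineMKL:
    """Kernel perceptrons (one per base kernel) fused by Hedge weights."""

    def __init__(self, kernels, beta=0.8):
        self.kernels = kernels                    # callables k(A, B) -> Gram matrix
        self.w = np.ones(len(kernels))            # Hedge weights
        self.beta = beta                          # multiplicative penalty in (0, 1)
        self.sv = []                              # shared support vectors
        self.alphas = [[] for _ in kernels]       # per-kernel coefficients

    def _scores(self, x):
        if not self.sv:
            return np.zeros(len(self.kernels))
        Z = np.vstack(self.sv)
        return np.array([np.dot(a, k(x[None, :], Z).ravel())
                         for a, k in zip(self.alphas, self.kernels)])

    def predict(self, x):
        p = self.w / self.w.sum()
        return 1 if p @ self._scores(x) >= 0 else -1

    def update(self, x, y):
        # Perceptron update for every kernel that got x wrong, plus a Hedge
        # penalty on that kernel's weight; y must be -1 or +1.
        mistakes = y * self._scores(x) <= 0
        if mistakes.any():
            self.sv.append(np.asarray(x, dtype=float))
            for m, miss in enumerate(mistakes):
                self.alphas[m].append(float(y) if miss else 0.0)
                if miss:
                    self.w[m] *= self.beta

# Example usage (illustrative kernels), streaming (x, y) pairs with y in {-1, +1}:
# mkl = OnlineMKL([lambda A, B: rbf_kernel(A, B, gamma=0.5), linear_kernel])
# for x, y in stream:
#     y_hat = mkl.predict(x)
#     mkl.update(x, y)
```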

38 citations


Network Information
Related Topics (5)
Convolutional neural network: 74.7K papers, 2M citations, 89% related
Deep learning: 79.8K papers, 2.1M citations, 89% related
Feature extraction: 111.8K papers, 2.1M citations, 87% related
Feature (computer vision): 128.2K papers, 1.7M citations, 87% related
Image segmentation: 79.6K papers, 1.8M citations, 86% related
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2023    21
2022    44
2021    72
2020    101
2019    113
2018    114