Topic

Multiple kernel learning

About: Multiple kernel learning is a research topic. Over its lifetime, 1,630 publications have been published within this topic, receiving 56,082 citations.


Papers
Posted Content
TL;DR: Results show that AdaBoost with SVM weak learners outperforms the other methods on the object categorization dataset.
Abstract: Object recognition in images involves identifying objects under partial occlusions, viewpoint changes, varying illumination, and cluttered backgrounds. Recent work in object recognition uses machine learning techniques such as SVM-KNN, Local Ensemble Kernel Learning, and Multiple Kernel Learning. In this paper, we use SVMs as weak learners in AdaBoost. Experiments are conducted with classifiers such as nearest neighbor, k-nearest neighbor, support vector machines, local learning (SVM-KNN), and AdaBoost. Models use Scale-Invariant descriptors and Pyramid Histogram of Gradient descriptors. AdaBoost is trained with a set of weak classifiers, each an SVM with a kernel distance function on a different descriptor. Results show that AdaBoost with SVM outperforms the other methods on the object categorization dataset.
01 Jan 2010
TL;DR: It is shown that the combined kernel does not perform better than the individual kernels and that MKL does not select the best model for this problem.
Abstract: TYPE-1 DIABETES RISK PREDICTION USING MULTIPLE KERNEL LEARNING by Paras Garg. This thesis presents an analysis of multiple kernel learning (MKL) for type-1 diabetes risk prediction. MKL combines different models and representations of data to find a linear combination of these representations. MKL has been successfully applied to image detection, splice site detection, ribosomal and membrane protein prediction, etc. In this thesis, the method was applied to a genome-wide association study (GWAS) for classifying cases and controls. The thesis shows that the combined kernel does not perform better than the individual kernels and that MKL does not select the best model for this problem. The effect of normalization on MKL and on risk prediction is also analyzed.
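The core MKL construction this thesis evaluates, a linear combination of base kernels, can be sketched in a few lines of NumPy. The toy data and fixed weights below are illustrative assumptions; actual MKL learns the weights jointly with the classifier.

```python
# Hedged sketch of the MKL combined kernel: K = sum_m beta_m * K_m with
# beta_m >= 0. Data and weights are illustrative, not from the thesis.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 5))

def linear_kernel(X):
    return X @ X.T

def rbf_kernel(X, gamma=0.5):
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * X @ X.T
    return np.exp(-gamma * np.clip(d2, 0.0, None))

betas = np.array([0.3, 0.7])  # fixed weights for illustration only
K = betas[0] * linear_kernel(X) + betas[1] * rbf_kernel(X)

# A non-negative combination of PSD kernels is itself PSD: all eigenvalues
# of the combined Gram matrix are (numerically) non-negative.
eigvals = np.linalg.eigvalsh(K)
print(eigvals.min() >= -1e-8)
```

Because the combination is convex in the weights, the combined kernel remains a valid kernel, which is what lets MKL plug directly into a standard SVM solver.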
Proceedings ArticleDOI
29 May 2012
TL;DR: Proposes fuzzy rough based kernel weight initialization in place of random initialization in GMKL, which makes GMKL converge faster; results show faster and more stable convergence of FR-GMKL compared to GMKL.
Abstract: Recent advances in kernel methods have positioned them as an attractive tool for many research areas. To reveal precise data similarity, learning a good kernel representation is essential. The GMKL formulation, based on gradient descent optimization with various regularizations, is well established in the literature. GMKL learns linear, product, and exponential combinations of given base kernels, which makes it more robust and efficient than traditional Multiple Kernel Learning (MKL). GMKL has also proven to be a good tool for feature selection. The time taken for convergence of MKL depends on the initialization of the kernel weights. Several optimizations initialize kernel weights randomly, which produces variability in convergence time. To tackle this issue, we propose fuzzy rough based kernel weight initialization in place of random initialization in GMKL, which makes GMKL converge faster. The proposed fuzzy rough GMKL (FR-GMKL) is tested on benchmark UCI and microarray databases. Our results show faster and more stable convergence of FR-GMKL compared to GMKL.
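The non-linear combinations the abstract attributes to GMKL, products of base kernels in addition to sums, can be sketched as follows. The data, weights, and base kernels are illustrative assumptions; GMKL would learn the weights by gradient descent rather than fix them.

```python
# Hedged sketch of kernel combinations beyond the linear sum, as GMKL
# allows: an element-wise (Schur) product of base kernels. Weights and
# data here are illustrative assumptions, not learned values.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(15, 4))

def rbf(X, gamma):
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * X @ X.T
    return np.exp(-gamma * np.clip(d2, 0.0, None))

K1, K2 = rbf(X, 0.1), rbf(X, 1.0)
d = np.array([0.4, 0.6])      # kernel weights; GMKL learns these

K_sum = d[0] * K1 + d[1] * K2  # linear combination (classic MKL)
K_prod = K1 * K2               # element-wise product of kernels

# By the Schur product theorem, a product of PSD kernels is again PSD.
print(np.linalg.eigvalsh(K_prod).min() >= -1e-8)
```

Product combinations act like an AND over base similarities (both kernels must agree for the combined similarity to be high), which is one reason GMKL can be more expressive than a weighted sum.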
Posted Content
TL;DR: This paper formally defines the learning problem of D-SVM, gives two interpretations of it from the probabilistic and kernel perspectives, and shows that the learning formulation is actually a MAP estimation over all optimization variables.
Abstract: In this paper we propose a multi-task linear classifier learning problem called D-SVM (Dictionary SVM). D-SVM uses a dictionary of parameter covariances shared by all tasks to transfer knowledge among different tasks. We formally define the learning problem of D-SVM and show two interpretations of this problem, from both the probabilistic and kernel perspectives. From the probabilistic perspective, we show that our learning formulation is actually a MAP estimation over all optimization variables. We also show its equivalence to a multiple kernel learning problem in which one tries to find a re-weighting kernel for features from a dictionary of bases (despite the fact that only linear classifiers are learned). Finally, we describe an alternative optimization scheme to minimize the objective function and present empirical studies to validate our algorithm.
Journal ArticleDOI
TL;DR: The method is improved in this manuscript by a new construction of the tensorial kernel, wherein a third-order tensor is adopted to preserve the adjacency relation; calculation of the huge matrix is thereby avoided, and the computational complexity is significantly reduced.

Network Information
Related Topics (5)
- Convolutional neural network: 74.7K papers, 2M citations (89% related)
- Deep learning: 79.8K papers, 2.1M citations (89% related)
- Feature extraction: 111.8K papers, 2.1M citations (87% related)
- Feature (computer vision): 128.2K papers, 1.7M citations (87% related)
- Image segmentation: 79.6K papers, 1.8M citations (86% related)
Performance
Metrics
No. of papers in the topic in previous years
Year: Papers
2023: 21
2022: 44
2021: 72
2020: 101
2019: 113
2018: 114