Topic

Multiple kernel learning

About: Multiple kernel learning is a research topic. Over its lifetime, 1,630 publications have been published within this topic, receiving 56,082 citations.


Papers
Book Chapter (DOI) · 01 Jan 2023
TL;DR: Wang et al. propose a multiple scale multiple layer multiple kernel learning (MS-DKL) method that fuses deep and shallow representations of mineral image features for mineral recognition.
Abstract: Identifying sandstone images and judging the types of minerals play an important role in oil and gas reservoir exploration and evaluation. The multiple kernel learning (MKL) method has shown high performance in some practical applications, but it has a shallow structure and cannot handle relatively complex problems well. With the development of deep learning in recent years, many researchers have proposed deep multiple layer multiple kernel learning (DMLMKL) methods based on deep structures. However, existing DMLMKL methods consider only the deep representation of the data and ignore the shallow representation. Therefore, this paper proposes a multiple scale multiple layer multiple kernel learning (MS-DKL) method that produces "richer" feature data by fusing deep and shallow representations of mineral image features. Mineral recognition results show that the MS-DKL algorithm achieves higher accuracy than the MKL and DMLMKL methods.
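The fusion idea above, combining a shallow kernel with a composed "deep" kernel, can be sketched as follows. This is a minimal illustration of the general multi-layer kernel composition concept, not the paper's MS-DKL algorithm; the kernel choices, the use of the training set as first-layer anchors, and the fusion weight `beta` are all illustrative assumptions.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    """Gram matrix of the Gaussian (RBF) kernel."""
    sq = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def two_layer_kernel(X, Y, gamma1=1.0, gamma2=0.5):
    """A 'deep' kernel: an RBF kernel applied on top of first-layer kernel features."""
    # First layer: represent each point by its kernel similarities to a set of
    # anchor points (here, X itself plays the anchor role -- an assumption).
    F_X = rbf_kernel(X, X, gamma1)
    F_Y = rbf_kernel(Y, X, gamma1)
    # Second layer: a kernel on the first-layer representations.
    return rbf_kernel(F_X, F_Y, gamma2)

def fused_kernel(X, Y, beta=0.5):
    """Fuse a shallow (single RBF) and a deep (composed) kernel with a
    convex weight, in the spirit of fusing shallow and deep representations."""
    return beta * rbf_kernel(X, Y) + (1 - beta) * two_layer_kernel(X, Y)
```

A convex combination of valid kernels is itself a valid kernel, so the fused Gram matrix can be plugged into any kernel classifier.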
Posted Content (DOI) · 13 Apr 2023
TL;DR: This paper provides a comprehensive overview of the application of kernel learning algorithms in survival analysis, suggesting that using multiple kernels instead of a single kernel can make decision functions more interpretable and can improve performance.
Abstract: Background: The time until an event happens is the outcome variable of interest in the statistical data analysis method known as survival analysis. Some researchers have created kernel statistics for various types of data, and kernels that allow the association of a set of markers with survival data. Multiple kernel learning (MKL) is often formulated as a linear or convex combination of multiple kernels. This paper aims to provide a comprehensive overview of the application of kernel learning algorithms in survival analysis. Methods: We conducted a systematic review involving an extensive search of the biomedical literature. After keyword searching, 435 articles were identified through title and abstract screening. Results: Of a total of 56 selected articles, only the 20 that used MKL for high-dimensional data were included. In most of these articles, the MKL method has been extended and introduced as a novel method. In these studies, the extended MKL models, depending on whether the task is classification or regression, have been compared with SVM, Cox proportional hazards (Cox), Extreme Learning (ELM), MKCox, Gradient Boosting (GBCox), Parametric Censored Regression Models (PCRM), Elastic-net Cox (EN-Cox), LASSO-Cox, Random Survival Forests (RSF), and Boosting Concordance Index (BoostCI). In most of these articles, the optimal model parameters are estimated by 10-fold cross-validation, and the concordance index (C-index) and the area under the ROC curve (AUC) are calculated to quantitatively measure the performance of all methods. Predictive accuracy is improved by using kernels. Conclusion: Our findings suggest that using multiple kernels instead of a single kernel can make decision functions more interpretable and can improve performance.
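The review's definition of MKL as a linear or convex combination of kernels can be sketched concretely. The sketch below combines three base Gram matrices with convex weights chosen by a common kernel-target alignment heuristic; the specific base kernels and the alignment weighting are illustrative assumptions, not a method from the reviewed papers.

```python
import numpy as np

def linear_kernel(X, Y):
    return X @ Y.T

def poly_kernel(X, Y, d=2):
    return (X @ Y.T + 1.0) ** d

def rbf_kernel(X, Y, gamma=0.5):
    sq = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def combine_kernels(kernels, X, y):
    """Convex combination of base kernels, with weights proportional to each
    kernel's alignment with the label matrix y y^T (one common heuristic)."""
    yy = np.outer(y, y)
    grams = [K(X, X) for K in kernels]
    align = np.array([np.sum(G * yy) / (np.linalg.norm(G) * np.linalg.norm(yy))
                      for G in grams])
    w = np.clip(align, 0.0, None)            # keep weights non-negative
    if w.sum() == 0:                         # fall back to uniform weights
        w = np.ones_like(w)
    w = w / w.sum()                          # convex: non-negative, sum to 1
    return sum(wi * G for wi, G in zip(w, grams)), w
```

The combined Gram matrix can then be handed to any kernel method (SVM, kernel Cox variants, etc.) in place of a single pre-selected kernel.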
Book Chapter (DOI) · 07 Feb 2009
TL;DR: Experiments show that the proposed SKM-based active learning method responds quickly enough for interaction with human experts and can find an appropriate kernel among linear combinations of the given multiple kernels.
Abstract: Since SVMs have met with significant success in numerous real-world learning tasks, SVM-based active learning has been proposed in the active learning context and successfully applied in domains such as document classification, in which SVMs using a linear kernel are known to be effective. However, it is difficult to apply SVM-based active learning to general domains, because the kernel used in the SVM must be selected properly before the active learning process, and good kernels for the target task are usually unknown. If the pre-selected kernel is inadequate for the target data, both the active learning process and the learned SVM perform poorly. Therefore, new active learning methods are required that effectively find an adequate kernel for the target data, as well as the labels of unknown samples, during the active learning process. In this paper, we propose a two-phased SKM-based active learning method for this purpose. Experiments show that the proposed SKM-based active learning method responds quickly, suiting interaction with human experts, and can find an appropriate kernel among linear combinations of the given multiple kernels.
Posted Content
TL;DR: This paper regularizes the posterior of an efficient multi-view latent variable model by explicitly mapping the latent representations extracted from multiple data views to a random Fourier feature space where max-margin classification constraints are imposed.
Abstract: Existing multi-view learning methods based on kernel functions either require the user to select and tune a single predefined kernel, or must compute and store many Gram matrices to perform multiple kernel learning. Besides consuming substantial manual effort, computation, and memory, most of these models seek point estimates of their parameters and are prone to overfitting on small training sets. This paper presents an adaptive-kernel, nonlinear, max-margin multi-view learning model under the Bayesian framework. Specifically, we regularize the posterior of an efficient multi-view latent variable model by explicitly mapping the latent representations extracted from multiple data views to a random Fourier feature space where max-margin classification constraints are imposed. Assuming these random features are drawn from Dirichlet process Gaussian mixtures, we can adaptively learn shift-invariant kernels from data according to Bochner's theorem. For inference, we employ the data augmentation idea for the hinge loss and design an efficient gradient-based MCMC sampler in the augmented space. Having no need to compute the Gram matrix, our algorithm scales linearly with the size of the training set. Extensive experiments on real-world datasets demonstrate that our method has superior performance.
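The random Fourier feature map that the abstract builds on can be sketched in a few lines. This is the standard Rahimi-Recht construction for approximating a Gaussian kernel via Bochner's theorem, not the paper's full Bayesian model with Dirichlet process mixtures; the dimension `D` and bandwidth `gamma` are illustrative.

```python
import numpy as np

def random_fourier_features(X, D=500, gamma=0.5, seed=None):
    """Map X to D random Fourier features approximating the RBF kernel
    k(x, x') = exp(-gamma * ||x - x'||^2). By Bochner's theorem, this kernel's
    spectral measure is Gaussian, so we sample frequencies from N(0, 2*gamma*I)."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(d, D))  # spectral samples
    b = rng.uniform(0.0, 2.0 * np.pi, size=D)                # random phases
    return np.sqrt(2.0 / D) * np.cos(X @ W + b)
```

Because inner products of the features approximate kernel values (`Z @ Z.T` ≈ the Gram matrix), a linear method on `Z` stands in for the kernel method, which is what lets the algorithm avoid the Gram matrix and scale linearly in the number of training points.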
Posted Content
TL;DR: This work proposes an efficient strategy to adaptively combine and select these kernels during the training phase, showing that it can outperform classical approaches in both batch and online settings.
Abstract: In kernel methods, temporal information in the data is commonly included by using time-delayed embeddings as inputs. Recently, an alternative formulation was proposed by defining a gamma-filter explicitly in a reproducing kernel Hilbert space, giving rise to a complex model where multiple kernels operate on different temporal combinations of the input signal. In the original formulation, the kernels are then simply combined into a single kernel matrix (for instance by averaging), which provides computational benefits but discards important information on the temporal structure of the signal. Inspired by work on multiple kernel learning, we overcome this drawback by considering the different kernels separately. We propose an efficient strategy to adaptively combine and select these kernels during the training phase. The resulting batch and online algorithms automatically learn to process highly nonlinear temporal information extracted from the input signal, which is implicitly encoded in the kernel values. We evaluate our proposal on several artificial and real tasks, showing that it can outperform classical approaches in both batch and online settings.
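The contrast the abstract draws, fixed averaging versus adaptively weighting kernels during training, can be illustrated with a generic online multiplicative-weights update over per-kernel losses. This is a sketch in the spirit of online multiple kernel learning, not the paper's exact update rule; the learning rate `eta` and the loss matrix shape are assumptions.

```python
import numpy as np

def hedge_combine(losses_per_kernel, eta=0.5):
    """Online multiplicative-weights update over kernels.

    losses_per_kernel: array of shape (T, M) -- the loss each of the M kernels
    incurs at each of T rounds. Kernels with lower cumulative loss receive
    exponentially more weight, instead of the fixed 1/M of plain averaging."""
    w = np.ones(losses_per_kernel.shape[1])
    for losses_t in losses_per_kernel:       # one update per time step
        w *= np.exp(-eta * losses_t)
        w /= w.sum()                         # keep weights on the simplex
    return w
```

With uniform weights the combination reduces to the averaged-kernel baseline; the adaptive weights let kernels tuned to the signal's temporal structure dominate as evidence accumulates.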

Network Information
Related Topics (5)
- Convolutional neural network: 74.7K papers, 2M citations (89% related)
- Deep learning: 79.8K papers, 2.1M citations (89% related)
- Feature extraction: 111.8K papers, 2.1M citations (87% related)
- Feature (computer vision): 128.2K papers, 1.7M citations (87% related)
- Image segmentation: 79.6K papers, 1.8M citations (86% related)
Performance Metrics
No. of papers in the topic in previous years:

Year    Papers
2023    21
2022    44
2021    72
2020    101
2019    113
2018    114