Topic

Multiple kernel learning

About: Multiple kernel learning is a research topic. Over its lifetime, 1,630 publications have been published within this topic, receiving 56,082 citations.


Papers
Journal ArticleDOI
TL;DR: A style regularized least squares support vector machine based on multikernel learning is proposed and applied to the recognition of epilepsy abnormal signals.
Abstract: In the field of brain-computer interfaces, it is very common to use EEG signals for disease diagnosis. In this study, a style regularized least squares support vector machine based on multikernel learning is proposed and applied to the recognition of abnormal epilepsy signals. The algorithm uses a style conversion matrix to represent the style information contained in the samples, regularizes it in the objective function, optimizes the objective function with the commonly used alternating optimization method, and simultaneously updates the style conversion matrix and the classifier parameters during the iterations. To use the learned style information in the prediction process, two new rules are added to the traditional prediction method, and the style conversion matrix is used to standardize the sample style before classification.
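Below is a minimal, illustrative sketch of the multikernel ingredient of such an approach: a least squares SVM solved on a fixed convex combination of RBF kernels. The style conversion matrix, its regularization, and the alternating updates described in the abstract are not reproduced here; the kernel weights, bandwidths, and the lssvm_fit helper are assumptions for illustration only.

```python
import numpy as np

def rbf_kernel(X, Z, gamma):
    # Squared Euclidean distances between rows of X and Z
    d2 = np.sum(X**2, 1)[:, None] + np.sum(Z**2, 1)[None, :] - 2 * X @ Z.T
    return np.exp(-gamma * d2)

def lssvm_fit(K, y, reg=1.0):
    # Solve the standard LS-SVM linear system:
    # [0, 1^T; 1, K + I/reg] [b; alpha] = [0; y]
    n = len(y)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / reg
    rhs = np.concatenate(([0.0], y))
    sol = np.linalg.solve(A, rhs)
    return sol[0], sol[1:]          # bias b, dual coefficients alpha

# Toy data and two base kernels combined with fixed weights (assumed, not learned here)
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 4))
y = np.sign(X[:, 0] + 0.3 * rng.normal(size=40))
weights = [0.7, 0.3]
K = weights[0] * rbf_kernel(X, X, 0.5) + weights[1] * rbf_kernel(X, X, 5.0)
b, alpha = lssvm_fit(K, y)
pred = np.sign(K @ alpha + b)       # f(x) = sum_i alpha_i k(x, x_i) + b
print("training accuracy:", np.mean(pred == y))
```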
Posted Content
TL;DR: Experimental results show that the non-linear kernel generated using genetic programming gives good accuracy compared to a linear combination of kernels.
Abstract: In computer vision, the problem of identifying or classifying the objects present in an image is called object categorization. It is a challenging problem, especially when the images have cluttered backgrounds, occlusions, or different lighting conditions. Many vision features have been proposed that aid object categorization even in such adverse conditions. Past research has shown that employing multiple features rather than any single feature leads to better recognition. The Multiple Kernel Learning (MKL) framework has been developed for learning an optimal combination of features for object categorization. Existing MKL methods use a linear combination of base kernels, which may not be optimal for object categorization. Real-world object categorization may require complex (non-linear) combinations of kernels rather than only linear combinations. This report proposes evolving non-linear functions of base kernels using genetic programming. Experimental results show that the non-linear kernel generated using genetic programming gives good accuracy compared to a linear combination of kernels.
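As an illustration of the distinction the abstract draws, the sketch below contrasts a classical linear combination of base kernels with one hand-written non-linear combination (products and elementwise powers of PSD kernels remain PSD). The genetic programming search itself is not shown, and the toy features, kernel choices, and weights are assumptions.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics.pairwise import rbf_kernel, polynomial_kernel

# Toy data standing in for image features (assumed; the paper uses vision features)
rng = np.random.default_rng(1)
X = rng.normal(size=(60, 8))
y = (X[:, 0] * X[:, 1] > 0).astype(int)

K1 = rbf_kernel(X, gamma=0.2)           # base kernel 1
K2 = polynomial_kernel(X, degree=2)     # base kernel 2

# Linear combination (classical MKL form): K = sum_m beta_m K_m, beta_m >= 0
beta = [0.6, 0.4]
K_linear = beta[0] * K1 + beta[1] * K2

# One example of a non-linear combination; the paper evolves such
# expressions with genetic programming, which is not reproduced here.
K_nonlinear = K1 * K2 + 0.5 * K1**2

for name, K in [("linear", K_linear), ("non-linear", K_nonlinear)]:
    clf = SVC(kernel="precomputed").fit(K, y)
    print(name, "training accuracy:", clf.score(K, y))
```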
Proceedings ArticleDOI
25 Jun 2010
TL;DR: This work proposes a compositional method for multiple kernels that avoids learning any weights; the importance of the kernel functions is derived directly in the process of learning kernel machines.
Abstract: While classical kernel-based learning algorithms are based on a single kernel, in practice it is often desirable to use multiple kernels. Most multiple kernel methods try to average out the kernel matrices in one way or another. There is a risk, however, of losing information in the original kernel matrices. We propose here a compositional method for multiple kernels. The new composed kernel matrix is an extension and union of the original kernel matrices. Generally, multiple kernel approaches rely heavily on the training data and have to learn weights to indicate the importance of each kernel. Our compositional method avoids learning any weights, and the importance of the kernel functions is derived directly in the process of learning kernel machines. The performance of the proposed compositional kernel method is illustrated by experiments in comparison with a single kernel.
Proceedings ArticleDOI
14 Jun 2010
TL;DR: This work studies the problem of learning sparse, nonparametric models from observations drawn from an arbitrary, unknown distribution and proposes an algorithm extending techniques for Multiple Kernel Learning, functional ANOVA models and the Component Selection and Smoothing Operator.
Abstract: This contribution studies the problem of learning sparse, nonparametric models from observations drawn from an arbitrary, unknown distribution. This specific problem leads us to an algorithm extending techniques for Multiple Kernel Learning (MKL), functional ANOVA models and the Component Selection and Smoothing Operator (COSSO). The key element is to use a data-dependent regularization scheme adapting to the specific distribution underlying the data. We then present empirical evidence supporting the proposed learning algorithm.
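The following sketch is loosely in the spirit of the described combination of MKL, functional ANOVA models, and COSSO: one kernel per input coordinate gives an additive model, and component weights are selected by alternating a ridge-type solve with a nonnegative-lasso update. It is not the paper's algorithm; the penalties, the coordinate-descent update, and the helper names are illustrative assumptions.

```python
import numpy as np

def feature_kernels(X, gamma=1.0):
    # One RBF kernel per input coordinate -> additive (functional ANOVA main-effect) model
    kernels = []
    for j in range(X.shape[1]):
        d2 = (X[:, j][:, None] - X[:, j][None, :]) ** 2
        kernels.append(np.exp(-gamma * d2))
    return kernels

def sparse_additive_fit(kernels, y, lam=1e-2, mu=0.5, n_iter=20):
    # Alternate between (1) a kernel ridge solve with the weighted kernel and
    # (2) a nonnegative-lasso update of the component weights (COSSO-like in spirit only).
    n, M = len(y), len(kernels)
    w = np.ones(M)
    for _ in range(n_iter):
        K = sum(w_j * K_j for w_j, K_j in zip(w, kernels))
        alpha = np.linalg.solve(K + lam * np.eye(n), y)          # ridge-type solve
        G = np.column_stack([K_j @ alpha for K_j in kernels])    # per-component fits
        # Coordinate descent for min_{w >= 0} ||y - G w||^2 + mu * sum(w)
        for j in range(M):
            r = y - G @ w + G[:, j] * w[j]
            w[j] = max(0.0, (G[:, j] @ r - mu / 2) / (G[:, j] @ G[:, j] + 1e-12))
    return w, alpha

rng = np.random.default_rng(2)
X = rng.uniform(-1, 1, size=(80, 6))
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2 + 0.05 * rng.normal(size=80)  # only features 0 and 1 matter
w, alpha = sparse_additive_fit(feature_kernels(X), y)
print("learned component weights:", np.round(w, 3))   # weights of irrelevant components shrink toward 0
```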
Proceedings ArticleDOI
15 Oct 2015
TL;DR: A wavelet kernel method, combining a multiple kernel learning algorithm with wavelet kernels, is proposed for hyperspectral image classification; it performs well and would be an appropriate tool for hyperspectral image classification.
Abstract: Hyperspectral images capture the earth's surface in several hundred spectral bands. Such abundant spectral data should improve the ability to classify land use/cover types. However, due to the high dimensionality of hyperspectral data, traditional classification methods are not suitable for hyperspectral data classification. A common way to address this problem is dimensionality reduction through feature extraction before classification. Kernel methods such as the support vector machine (SVM) and multiple kernel learning (MKL) have been successfully applied to hyperspectral image classification. In kernel method applications, the selection of the kernel function plays an important role. The wavelet kernel, built from multidimensional wavelet functions, can find an optimal approximation of the data in feature space for classification. The SVM with wavelet kernels (called WSVM) has also been applied to hyperspectral data and improves classification accuracy. In this study, a wavelet kernel method combining a multiple kernel learning algorithm with wavelet kernels is proposed for hyperspectral image classification. After an appropriate selection of a linear combination of kernel functions, the hyperspectral data are transformed to the wavelet feature space, which should have an optimal data distribution for kernel learning and classification. Finally, the proposed methods were compared with existing methods on a real hyperspectral data set. According to the results, the proposed wavelet kernel methods perform well and would be an appropriate tool for hyperspectral image classification.
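A minimal sketch of the ingredients described above: a translation-invariant wavelet kernel of the form used in wavelet SVMs, combined with an RBF kernel in a fixed convex combination and fed to an SVM with a precomputed Gram matrix. In the paper the combination weights would be chosen by multiple kernel learning, and real hyperspectral band vectors would replace the toy data; the bandwidths and weights here are assumptions.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics.pairwise import rbf_kernel

def wavelet_kernel(X, Z, a=1.0):
    # Translation-invariant wavelet kernel built from a Morlet-type mother wavelet:
    # K(x, z) = prod_i cos(1.75 * (x_i - z_i) / a) * exp(-((x_i - z_i) / a)**2 / 2)
    diff = (X[:, None, :] - Z[None, :, :]) / a
    return np.prod(np.cos(1.75 * diff) * np.exp(-diff ** 2 / 2), axis=2)

# Toy "spectra" standing in for hyperspectral pixels (assumed; real data would be band vectors)
rng = np.random.default_rng(3)
X = rng.normal(size=(100, 20))
y = (X[:, :5].sum(axis=1) > 0).astype(int)

# Fixed convex combination of a wavelet kernel and an RBF kernel;
# in the paper the combination weights are chosen by multiple kernel learning.
beta = 0.5
K = beta * wavelet_kernel(X, X, a=2.0) + (1 - beta) * rbf_kernel(X, gamma=0.05)

clf = SVC(kernel="precomputed").fit(K, y)
print("training accuracy:", clf.score(K, y))
```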

Network Information
Related Topics (5)
Convolutional neural network: 74.7K papers, 2M citations, 89% related
Deep learning: 79.8K papers, 2.1M citations, 89% related
Feature extraction: 111.8K papers, 2.1M citations, 87% related
Feature (computer vision): 128.2K papers, 1.7M citations, 87% related
Image segmentation: 79.6K papers, 1.8M citations, 86% related
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2023    21
2022    44
2021    72
2020    101
2019    113
2018    114