Topic

Multiple kernel learning

About: Multiple kernel learning is a research topic. Over its lifetime, 1,630 publications have been published within this topic, receiving 56,082 citations.


Papers
Journal ArticleDOI
TL;DR: In this paper, a support vector machine (SVM) classifier based on the MKL algorithm EasyMKL was proposed to investigate the feasibility of MKL algorithms in EEG-based emotion recognition problems.
Abstract: Emotion recognition based on electroencephalography (EEG) has a wide range of applications and great potential value, so it has received increasing attention from academia and industry in recent years. Meanwhile, multiple kernel learning (MKL) has been favored by researchers for its data-driven convenience and high accuracy. However, there is little research on MKL in EEG-based emotion recognition. This paper is therefore dedicated to exploring and promoting the application of MKL methods in EEG-based emotion recognition. To that end, we proposed a support vector machine (SVM) classifier based on the MKL algorithm EasyMKL to investigate the feasibility of MKL algorithms for EEG-based emotion recognition. We designed two data partition methods: random division, to verify the validity of the MKL method, and sequential division, to simulate practical applications. Then, tri-categorization experiments were performed for neutral, negative and positive emotions on a commonly used dataset, the Shanghai Jiao Tong University emotional EEG dataset (SEED). The average classification accuracies for random division and sequential division were 92.25% and 74.37%, respectively, which is better than the classification performance of a traditional single-kernel SVM. The results show that the MKL method is clearly effective and that its application to EEG emotion recognition is worthy of further study. Through analysis of the experimental results, we found that simple mathematical operations on the features of symmetrical electrodes could not effectively integrate the spatial information of the EEG signals to obtain better performance. It was also confirmed that higher-frequency-band information is more correlated with emotional state and contributes more to emotion recognition. In summary, this paper explores MKL methods in the field of EEG emotion recognition and provides a new way of thinking about EEG-based emotion recognition research.
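To make the kernel-combination idea concrete, the sketch below builds one RBF base kernel per EEG frequency band and feeds their weighted sum to a precomputed-kernel SVM. It is a minimal illustration, not the paper's EasyMKL implementation: the band-wise features are random placeholders, scikit-learn is assumed, and the uniform weights stand in for the coefficients EasyMKL would learn from the training data.

```python
# Sketch: multi-kernel SVM for EEG emotion recognition (illustrative only).
# One RBF base kernel per frequency band; the uniform weights below stand in
# for the weights that EasyMKL would learn from the training data.
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVC

def combined_kernel(feats_a, feats_b, gammas, weights):
    """Weighted sum of RBF kernels, one per feature view (e.g., per EEG band)."""
    K = np.zeros((feats_a[0].shape[0], feats_b[0].shape[0]))
    for Xa, Xb, g, w in zip(feats_a, feats_b, gammas, weights):
        K += w * rbf_kernel(Xa, Xb, gamma=g)
    return K

# Hypothetical band-wise features: one matrix per band (delta .. gamma).
rng = np.random.default_rng(0)
bands_train = [rng.normal(size=(300, 62)) for _ in range(5)]
bands_test  = [rng.normal(size=(100, 62)) for _ in range(5)]
y_train = rng.integers(0, 3, size=300)        # neutral / negative / positive

gammas  = [1.0 / 62] * 5
weights = [0.2] * 5                           # placeholder for learned weights

K_train = combined_kernel(bands_train, bands_train, gammas, weights)
K_test  = combined_kernel(bands_test,  bands_train, gammas, weights)

clf = SVC(kernel="precomputed").fit(K_train, y_train)
y_pred = clf.predict(K_test)
```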

5 citations

Journal ArticleDOI
TL;DR: The results indicate that the proposed MKL-SOM scheme outperforms state-of-the-art algorithms, particularly when applied to large HSIs, and that its ability to fuse multiscale features is useful for various analysis tasks.
Abstract: Hyperspectral image (HSI) analysis is a growing area in the community of remote sensing, particularly with images exhibiting high spatial and spectral resolutions. Multiple kernel learning (MKL) has been proposed and found to classify HSIs efficiently owing to its capability for handling diverse feature fusion. However, constructing base kernels, selecting key kernels, and adjusting their contributions to the final kernel remain major challenges for MKL. We propose a scheme to generate effective base kernels and optimize their weights, which represent their contribution to the final kernel. In addition, both spatial and spectral information are utilized to improve the classification accuracy. In the proposed scheme, the spatial features of HSIs are introduced through multiscale feature representations that preserve the relationship between the classification process and the pixel context. MKL and self-organizing maps (SOMs) are integrated and used for the unsupervised classification of HSIs. The weights of both the base kernels and neural networks are simultaneously optimized in an unsupervised manner. The results indicate that the proposed MKL-SOM scheme outperforms state-of-the-art algorithms, particularly when applied to large HSIs. Moreover, its ability to fuse multiscale features, especially in large HSIs, is useful for various analysis tasks.
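As a rough illustration of the multiscale base-kernel construction described above (the MKL-SOM joint optimization itself is not reproduced), the sketch below average-pools each pixel's spectral neighborhood at several window sizes and combines one RBF kernel per scale with fixed placeholder weights. The toy data cube, window sizes, and gamma values are assumptions.

```python
# Sketch: multiscale base kernels for hyperspectral pixels (illustrative).
# Spatial context is injected by average-pooling each pixel's neighborhood at
# several window sizes; one RBF base kernel per scale is then combined with
# weights that the MKL-SOM scheme would optimize (fixed here for brevity).
import numpy as np
from scipy.ndimage import uniform_filter
from sklearn.metrics.pairwise import rbf_kernel

def multiscale_features(cube, scales=(1, 3, 5)):
    """cube: (H, W, B) hyperspectral image -> one (H*W, B) matrix per scale."""
    H, W, B = cube.shape
    feats = []
    for s in scales:
        smoothed = uniform_filter(cube, size=(s, s, 1))   # spatial smoothing only
        feats.append(smoothed.reshape(H * W, B))
    return feats

rng = np.random.default_rng(0)
cube = rng.random((20, 20, 30))                 # toy HSI: 20x20 pixels, 30 bands

views = multiscale_features(cube)
weights = np.ones(len(views)) / len(views)      # placeholder for learned weights

K_final = sum(w * rbf_kernel(X, X, gamma=1.0 / X.shape[1])
              for w, X in zip(weights, views))  # (400, 400) combined kernel
```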

5 citations

Book ChapterDOI
31 Oct 2012
TL;DR: An image classification scheme based on image sparse representation and multiple kernel learning (MKL) is proposed to achieve better classification performance, leading to state-of-the-art performance on several benchmarks.
Abstract: In recent research, image classification of objects and scenes has attracted much attention, but the accuracy of some schemes may drop when dealing with complicated datasets. In this paper, we propose an image classification scheme based on image sparse representation and multiple kernel learning (MKL) to achieve better classification performance. As the fundamental part of our scheme, a sparse coding method is adopted to generate precise representations of images. In addition, feature fusion is employed, and a new MKL method is proposed to fit the multi-feature case. Experiments demonstrate that our scheme remarkably improves classification accuracy, leading to state-of-the-art performance on several benchmarks, including some rather complicated datasets such as Caltech-101 and Caltech-256.
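The sketch below illustrates the general pipeline the abstract describes, sparse coding of local descriptors followed by kernel fusion for an SVM, using scikit-learn components. It is not the authors' MKL formulation; the descriptor dimensions, dictionary size, second feature view, and fixed fusion weights are illustrative assumptions.

```python
# Sketch: sparse-coded image representation fed to a multi-kernel SVM
# (illustrative; the paper's specific MKL formulation is not reproduced).
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning, SparseCoder
from sklearn.metrics.pairwise import rbf_kernel, chi2_kernel
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Hypothetical local descriptors (e.g., SIFT-like, 128-D) from training images.
descriptors = rng.random((2000, 128))
dico = MiniBatchDictionaryLearning(n_components=64, alpha=1.0, random_state=0)
D = dico.fit(descriptors).components_

coder = SparseCoder(dictionary=D, transform_algorithm="lasso_lars",
                    transform_alpha=0.5)

def encode_image(local_desc):
    """Sparse-code an image's descriptors and max-pool them into one vector."""
    codes = coder.transform(local_desc)
    return np.abs(codes).max(axis=0)

# Two feature "views" per image: sparse codes and a second descriptor type.
n_img = 60
sparse_feats = np.vstack([encode_image(rng.random((50, 128))) for _ in range(n_img)])
color_feats  = rng.random((n_img, 32))        # stand-in for a colour histogram
y = rng.integers(0, 5, size=n_img)

# Fixed-weight kernel fusion as a stand-in for the learned MKL combination.
K = 0.5 * rbf_kernel(sparse_feats, sparse_feats) \
  + 0.5 * chi2_kernel(color_feats, color_feats)
clf = SVC(kernel="precomputed").fit(K, y)
```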

5 citations

Journal ArticleDOI
TL;DR: Comprehensive experiments in three real-world applications verify that the proposed novel metric learning algorithms for domain adaptation in an information-theoretic setting outperform state-of-the-art metric learning and domain adaptation methods.
Abstract: Learning an appropriate distance metric plays a substantial role in the success of many learning machines. Conventional metric learning algorithms have limited utility when the training and test samples are drawn from related but different domains (i.e., source domain and target domain). In this letter, we propose two novel metric learning algorithms for domain adaptation in an information-theoretic setting, allowing for discriminating power transfer and standard learning machine propagation across two domains. In the first one, a cross-domain Mahalanobis distance is learned by combining three goals: reducing the distribution difference between different domains, preserving the geometry of target domain data, and aligning the geometry of source domain data with label information. Furthermore, we devote our efforts to solving complex domain adaptation problems and go beyond linear cross-domain metric learning by extending the first method to a multiple kernel learning framework. A convex combination of multiple kernels and a linear transformation are adaptively learned in a single optimization, which greatly benefits the exploration of prior knowledge and the description of data characteristics. Comprehensive experiments in three real-world applications (face recognition, text classification, and object categorization) verify that the proposed methods outperform state-of-the-art metric learning and domain adaptation methods.
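To ground the terminology, the sketch below shows two ingredients the abstract refers to: a Mahalanobis distance parameterized by a PSD matrix M = L^T L, and a simplified, mean-only stand-in for the distribution-difference term between source and target domains under the same transform. The optimization that actually learns L (and its multiple kernel learning extension) is not reproduced; all data and values are synthetic.

```python
# Sketch: ingredients of the cross-domain metric described above (illustrative).
# A Mahalanobis distance parameterized by a PSD matrix M = L^T L, plus a
# mean-discrepancy term penalizing the source/target distribution gap under
# the same transformation L. The actual optimization is not reproduced.
import numpy as np

rng = np.random.default_rng(0)
d = 10
X_src = rng.normal(loc=0.0, size=(200, d))        # source-domain samples
X_tgt = rng.normal(loc=0.5, size=(150, d))        # shifted target-domain samples

L = rng.normal(size=(d, d)) * 0.1 + np.eye(d)     # stand-in for a learned transform
M = L.T @ L                                       # PSD metric matrix

def mahalanobis(x, y, M):
    """Cross-domain Mahalanobis distance d_M(x, y) = sqrt((x - y)^T M (x - y))."""
    diff = x - y
    return float(np.sqrt(diff @ M @ diff))

# Distribution-difference term: distance between domain means under L
# (a simplified, mean-only stand-in for an MMD-style criterion).
gap = np.linalg.norm(L @ (X_src.mean(axis=0) - X_tgt.mean(axis=0)))

print(mahalanobis(X_src[0], X_tgt[0], M), gap)
```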

5 citations


Network Information
Related Topics (5)
Convolutional neural network: 74.7K papers, 2M citations, 89% related
Deep learning: 79.8K papers, 2.1M citations, 89% related
Feature extraction: 111.8K papers, 2.1M citations, 87% related
Feature (computer vision): 128.2K papers, 1.7M citations, 87% related
Image segmentation: 79.6K papers, 1.8M citations, 86% related
Performance
Metrics
No. of papers in the topic in previous years
Year: Papers
2023: 21
2022: 44
2021: 72
2020: 101
2019: 113
2018: 114