Topic

Multiple kernel learning

About: Multiple kernel learning is a research topic. Over its lifetime, 1,630 publications have been published on this topic, receiving 56,082 citations.


Papers
Journal ArticleDOI
TL;DR: A multiple kernel representation-based extreme learning machine (ELM), named integrated multiple kernel ELM (IMK-ELM), is proposed to strengthen localization performance in cluttered indoor environments using spatiotemporal information.
Abstract: Ubiquitous WiFi signals not only provide fundamental communications for a large number of Internet of Things devices, but also make it possible to estimate a target's location in a contactless manner. However, most existing device-free localization (DFL) methods use only the time dynamics of the received WiFi signals, which leads to inaccurate DFL in cluttered indoor environments, because different environment layouts and WiFi device deployments produce different mathematical distributions in the collected data. In this article, a multiple kernel representation-based extreme learning machine (ELM), named integrated multiple kernel ELM (IMK-ELM), is proposed to strengthen localization performance in cluttered indoor environments by exploiting spatiotemporal information. In the proposed IMK-ELM-based DFL, the whole data set is first divided into several subsets according to their mathematical distributions through the K-means clustering algorithm, and a corresponding number of local DFL models are then built for the subsets to capture both the time dynamics and the spatial properties of the data. Finally, a global DFL model is obtained by seamlessly integrating all the local DFL models through a consistency mechanism. In addition, Fresnel zone sensing theory is used to help understand and explain the essence of indoor DFL. Comprehensive experiments indicate that the proposed IMK-ELM-based DFL outperforms state-of-the-art methods in cluttered indoor environments.

18 citations
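
The divide-and-combine idea behind IMK-ELM (cluster the data by distribution, fit one local kernel model per cluster, route new samples to the matching local model) can be sketched as follows. This is a simplified, hypothetical illustration using scikit-learn: the paper's kernel ELM formulation and its consistency-based integration are not reproduced, and an RBF-kernel SVM stands in for each local model.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def fit_local_models(X, y, n_subsets=4, gamma=0.5):
    # divide the data set into subsets by distribution, as in the paper's first step
    km = KMeans(n_clusters=n_subsets, n_init=10, random_state=0).fit(X)
    models = []
    for k in range(n_subsets):
        mask = km.labels_ == k
        # assumption: every subset contains samples from more than one location class
        models.append(SVC(kernel="rbf", gamma=gamma).fit(X[mask], y[mask]))
    return km, models

def predict(km, models, X_new):
    # route each test sample to the local model of its nearest cluster centroid
    subsets = km.predict(X_new)
    return np.array([models[s].predict(x.reshape(1, -1))[0]
                     for s, x in zip(subsets, X_new)])
```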

Journal ArticleDOI
TL;DR: This study proposes using a probit classifier with a proper prior structure and multiple kernel learning with a proper kernel construction procedure to perform group-wise feature selection (i.e., eliminating a group of features together if they are not helpful), and shows the validity and effectiveness of the proposed binary classification algorithm variants.
Abstract: Many financial organizations such as banks and retailers rely heavily on computational credit risk analysis (CRA) tools due to recent financial crises and stricter regulations. This strategy enables them to manage their financial and operational risks within the pool of financial institutions. Machine learning algorithms, especially binary classifiers, are very popular for this purpose. In real-life applications such as CRA, feature selection algorithms are used to decrease data acquisition cost and to increase the interpretability of the decision process. Using feature selection methods directly on CRA data sets may not help because of categorical variables such as marital status. Such features are usually converted into binary features using 1-of-k encoding, and eliminating only a subset of the features within a group helps neither data collection cost nor interpretability. In this study, we propose to use the probit classifier with a proper prior structure and multiple kernel learning with a proper kernel construction procedure to perform group-wise feature selection (i.e., eliminating a group of features together if they are not helpful). Experiments on two standard CRA data sets show the validity and effectiveness of the proposed binary classification algorithm variants.

18 citations
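
A minimal sketch of the group-wise selection idea described above, under assumptions that differ from the paper: one base kernel is built per feature group (e.g. the 1-of-k columns of a single categorical variable), each group gets a nonnegative weight, and a whole group is dropped when its weight is negligible. The paper learns such weights through a probit classifier with a sparsity-inducing prior; here a simple kernel-target alignment heuristic stands in for that learning step.

```python
import numpy as np

def linear_kernel(X):
    return X @ X.T

def kernel_target_alignment(K, y):
    # similarity between a base kernel and the label kernel yy^T, with y in {-1, +1}
    Ky = np.outer(y, y)
    return np.sum(K * Ky) / (np.linalg.norm(K) * np.linalg.norm(Ky))

def groupwise_selection(X, y, groups, threshold=0.05):
    # groups: list of column-index arrays, one per feature group
    # (e.g. all binary columns produced by 1-of-k encoding of one categorical variable)
    weights = np.array([kernel_target_alignment(linear_kernel(X[:, g]), y)
                        for g in groups])
    weights = np.clip(weights, 0.0, None)
    weights /= weights.sum()
    selected = [i for i, w in enumerate(weights) if w >= threshold]
    return weights, selected  # groups outside `selected` are eliminated together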

Journal ArticleDOI
TL;DR: This framework optimizes a data-dependent kernel evaluation measure based on the similarity between the composite kernel and an ideal kernel, and it outperforms other state-of-the-art MKL algorithms in terms of both classification accuracy and computational time.
Abstract: Multiple Kernel Learning (MKL) algorithms have recently demonstrated their effectiveness for classifying data with numerous features. These algorithms aim to learn an optimal composite kernel by combining basis kernels constructed from different features. Despite their satisfactory results, MKL algorithms assume that the basis kernels are computed a priori. Moreover, they adopt complex optimization methods to train the combination of the basis kernels, which are usually hard to solve and can only handle binary classification problems. In this paper, a novel MKL framework is introduced to address these issues. The framework optimizes a data-dependent kernel evaluation measure in order to learn both the basis kernels and their combination. The kernel evaluation measure should be able to estimate the goodness of the composite kernel for a multiclass classification problem. We define such a measure based on the similarity between the composite kernel and an ideal kernel, and present three kernel-based similarity measures for this purpose: kernel alignment (KA), centered kernel alignment (CKA), and the Hilbert-Schmidt independence criterion (HSIC). To solve the optimization problem of the proposed MKL framework, we use metaheuristic optimization algorithms, which are accurate and easy to implement. The performance of the proposed framework is evaluated by classifying features extracted from two multispectral and hyperspectral datasets. The results show that the framework outperforms other state-of-the-art MKL algorithms in terms of both classification accuracy and computational time.

18 citations
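
The kernel evaluation measure at the heart of this framework can be written down compactly. The sketch below is a hypothetical simplification: it computes centered kernel alignment (CKA) between a weighted composite kernel and an ideal kernel built from the class labels (1 where two samples share a class, 0 otherwise). This is the score a metaheuristic would maximize over base-kernel parameters and combination weights; that search is not shown.

```python
import numpy as np

def center(K):
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    return H @ K @ H

def ideal_kernel(y):
    y = np.asarray(y)
    return (y[:, None] == y[None, :]).astype(float)  # multiclass target kernel

def cka(K, y):
    # centered kernel alignment between a kernel K and the ideal kernel of labels y
    Kc, Yc = center(K), center(ideal_kernel(y))
    return np.sum(Kc * Yc) / (np.linalg.norm(Kc) * np.linalg.norm(Yc))

def composite_objective(weights, base_kernels, y):
    # goodness of the composite kernel K = sum_m w_m K_m for one candidate solution
    K = sum(w * Km for w, Km in zip(weights, base_kernels))
    return cka(K, y)
```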

Journal ArticleDOI
TL;DR: A superpixel-based tensor model for RSI classification is proposed, in which a multiattribute superpixel tensor (MAST) model is constructed on top of multiattribute superpixel maps based on the concept of extended morphological profiles (EMAPs).
Abstract: Many spatial–spectral classification methods have been developed and have achieved good results for high-resolution remotely sensed images (RSIs), especially superpixel-based methods. However, these methods generally treat a superpixel as a group of pixels rather than one entity, ignoring the spectral–spatial entirety of the third-order RSI data cube. To fully exploit this third-order spectral–spatial information, in this paper we propose a superpixel-based tensor model for RSI classification, in which a multiattribute superpixel tensor (MAST) model is constructed on top of multiattribute superpixel maps based on the concept of extended morphological profiles (EMAPs). To handle the adaptive spatial extent of superpixels, we develop an augmentation strategy that fills each superpixel's envelope rectangle in one of three ways: with zero vectors, with the mean vector of all pixels within the superpixel, or with the original pixels. We then use CANDECOMP/PARAFAC (CP) decomposition to obtain features of a unified dimension from MASTs of various sizes. In particular, CP decomposition can handle missing data, which yields a fourth way of constructing the MAST. Finally, base kernels computed from the original spectral features, the EMAP features, and the MAST features are combined by multiple kernel learning, and the resulting optimal kernel is fed to a support vector machine to complete the classification task. Experiments conducted on four real RSIs, with comparisons against several well-known methods, demonstrate the effectiveness of the proposed model.

18 citations
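
A hypothetical sketch of the tensor step described above: each superpixel is expanded to its envelope rectangle (filled with zeros, the superpixel mean, or the original pixels), and a rank-R CP decomposition yields a fixed-length feature from the band-mode factor. It assumes the tensorly library is available; the EMAP construction and the MKL/SVM stage are omitted.

```python
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac

def superpixel_tensor(image, mask, fill="zero"):
    # image: H x W x B data cube; mask: boolean map of one superpixel
    rows, cols = np.where(mask)
    r0, r1 = rows.min(), rows.max() + 1
    c0, c1 = cols.min(), cols.max() + 1
    patch = image[r0:r1, c0:c1, :].astype(float).copy()
    outside = ~mask[r0:r1, c0:c1]
    if fill == "zero":
        patch[outside] = 0.0                        # 0-vector filling
    elif fill == "mean":
        patch[outside] = image[mask].mean(axis=0)   # mean of the superpixel's pixels
    # fill == "original" keeps the envelope rectangle's pixels unchanged
    return patch

def cp_band_features(patch, rank=3):
    # CP decomposition; the band-mode factor has shape (B, rank), so its size is
    # the same for every superpixel regardless of the envelope rectangle's size
    weights, factors = parafac(tl.tensor(patch), rank=rank)
    return np.concatenate([np.asarray(weights), np.asarray(factors[2]).ravel()])
```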

Proceedings ArticleDOI
01 Sep 2017
TL;DR: The goal of this paper is to demonstrate that a shallow and simple approach based on string kernels can pass the test of time and reach state-of-the-art performance in the 2017 NLI shared task, despite recent advances in natural language processing.
Abstract: We describe a machine learning approach for the 2017 shared task on Native Language Identification (NLI). The proposed approach combines several kernels using multiple kernel learning. While most of our kernels are based on character p-grams (also known as n-grams) extracted from essays or speech transcripts, we also use a kernel based on i-vectors, a low-dimensional representation of audio recordings, provided by the shared task organizers. For the learning stage, we choose Kernel Discriminant Analysis (KDA) over Kernel Ridge Regression (KRR), because the former obtains better results than the latter on the development set. In previous work, we used a similar machine learning approach to achieve state-of-the-art NLI results. The goal of this paper is to demonstrate that our shallow and simple approach based on string kernels (with minor improvements) can pass the test of time and reach state-of-the-art performance in the 2017 NLI shared task, despite the recent advances in natural language processing. We participated in all three tracks, in which competitors were allowed to use only the essays (essay track), only the speech transcripts (speech track), or both (fusion track). Using only the data provided by the organizers for training our models, we reached a macro F1 score of 86.95% in the closed essay track, 87.55% in the closed speech track, and 93.19% in the closed fusion track. With these scores, our team (UnibucKernel) ranked in the first group of teams in all three tracks, attaining the best scores in the speech and fusion tracks.

18 citations
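
A hypothetical sketch of the string-kernel side of this pipeline: character n-gram count vectors are turned into cosine-normalized (spectrum-like) kernels for several values of n, the kernels are summed, and the resulting Gram matrix is fed to a kernel classifier. Kernel Discriminant Analysis is not available in scikit-learn, so an SVM with a precomputed kernel stands in for the learning stage; the i-vector kernel and the paper's exact kernels are not reproduced.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import SVC

def char_ngram_kernel(texts, n):
    # spectrum-style kernel over character n-grams, cosine-normalized
    counts = CountVectorizer(analyzer="char", ngram_range=(n, n)).fit_transform(texts)
    K = (counts @ counts.T).toarray().astype(float)
    d = np.sqrt(np.diag(K))
    return K / np.outer(d, d)

def combined_kernel(texts, ns=(3, 4, 5)):
    # simple multiple kernel combination: unweighted sum of the normalized kernels
    return sum(char_ngram_kernel(texts, n) for n in ns)

# illustration (training only; `essays` and `labels` are assumed to be given):
# K_train = combined_kernel(essays)
# clf = SVC(kernel="precomputed").fit(K_train, labels)
```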


Network Information
Related Topics (5)
Convolutional neural network: 74.7K papers, 2M citations, 89% related
Deep learning: 79.8K papers, 2.1M citations, 89% related
Feature extraction: 111.8K papers, 2.1M citations, 87% related
Feature (computer vision): 128.2K papers, 1.7M citations, 87% related
Image segmentation: 79.6K papers, 1.8M citations, 86% related
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2023    21
2022    44
2021    72
2020    101
2019    113
2018    114