Topic

Multiple kernel learning

About: Multiple kernel learning is a research topic. Over its lifetime, 1,630 publications have been published within this topic, receiving 56,082 citations.


Papers
Journal ArticleDOI
TL;DR: New similarity-based MKL algorithms for classifying remote-sensing images are presented; they outperform other algorithms, especially when their optimization problems are solved with convex optimization methods or when few training samples are available.
Abstract: Multiple kernel learning (MKL) algorithms are proposed to address the kernel-selection problem of kernel-based classification algorithms. By using a group of kernels rather than a single kernel, MKL algorithms aim to provide better classification efficiency. This paper presents new similarity-based MKL algorithms for classifying remote-sensing images. These algorithms find the optimal combination of kernels by maximizing the similarity between the combined kernel and an ideal kernel. In this framework, we first introduce three similarity measures: kernel alignment, the norm of the kernel difference, and the Hilbert–Schmidt independence criterion. We then solve the optimization problem associated with each similarity measure using heuristic and convex optimization methods. The proposed algorithms are compared with a single-kernel support vector machine and with other MKL algorithms for classifying features extracted from high-resolution and hyperspectral images. The results demonstrate that the similarity-based MKL algorithms perform better than the other algorithms, especially when their optimization problems are solved using convex optimization methods or when few training samples are available. Moreover, when the optimization problems are solved using heuristic optimization methods, the algorithms still yield acceptable performance and are faster than other MKL algorithms.
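To illustrate the similarity-based idea, the following is a minimal sketch, not the paper's implementation: RBF base kernels are weighted by their kernel alignment with an ideal label kernel and combined convexly. The helper names (rbf_kernel, alignment, combine_kernels) and the bandwidths are illustrative assumptions.

```python
import numpy as np

def rbf_kernel(X, gamma):
    # Gaussian (RBF) kernel from pairwise squared Euclidean distances.
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-gamma * d2)

def alignment(K1, K2):
    # Kernel alignment: <K1, K2>_F / (||K1||_F * ||K2||_F).
    return np.sum(K1 * K2) / (np.linalg.norm(K1) * np.linalg.norm(K2) + 1e-12)

def combine_kernels(X, y, gammas):
    # Ideal (target) kernel built from labels y in {-1, +1}.
    K_ideal = np.outer(y, y).astype(float)
    kernels = [rbf_kernel(X, g) for g in gammas]
    # Heuristic weights: alignment of each base kernel with the ideal kernel.
    w = np.array([alignment(K, K_ideal) for K in kernels])
    w = np.clip(w, 0.0, None)
    w /= w.sum()
    return sum(wi * Ki for wi, Ki in zip(w, kernels)), w

# Toy usage with random data in place of remote-sensing features.
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 5))
y = np.sign(rng.normal(size=40))
K_combined, weights = combine_kernels(X, y, gammas=[0.01, 0.1, 1.0])
print(weights)
```

The paper solves the weight optimization with heuristic or convex methods; the simple normalized-alignment weighting above only conveys the flavor of the similarity criterion.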

13 citations

Journal ArticleDOI
TL;DR: To predict potential associations between drugs and side effects, a novel Triple Matrix Factorization (TMF) based model is proposed; it is built from the biprojection matrices and latent features of kernels and is based on Low Rank Approximation (LRA).
Abstract: Drugs usually have side effects, which endanger the health of patients. To identify potential side effects of drugs, biological and pharmacological experiments are conducted, but they are expensive and time-consuming. Computation-based methods have therefore been developed to predict side effects accurately and quickly. To predict potential associations between drugs and side effects, we propose a novel method called the Triple Matrix Factorization (TMF) based model. TMF is built from the biprojection matrices and latent features of kernels and is based on Low Rank Approximation (LRA). LRA constructs a lower-rank matrix that approximates the original matrix, retaining its characteristics while reducing the storage space and computational complexity of the data. To fuse multivariate information, multiple kernel matrices are constructed and integrated via Kernel Target Alignment-based Multiple Kernel Learning (KTA-MKL) in the drug and side-effect spaces, respectively. Compared with other methods, our model achieves better performance on three benchmark datasets, with Area Under the Precision-Recall curve (AUPR) values of 0.677, 0.685, and 0.680, respectively.
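The two building blocks mentioned here, low-rank approximation and alignment-based kernel fusion, can be sketched in a few lines. This is only a minimal sketch under assumed shapes and synthetic data, not the TMF model itself; low_rank_approx and kta_weights are illustrative names.

```python
import numpy as np

def low_rank_approx(M, rank):
    # Truncated SVD: keep the top-`rank` singular triplets of M.
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U[:, :rank] @ np.diag(s[:rank]) @ Vt[:rank, :]

def kta_weights(kernels, K_target):
    # Weight each kernel by its Frobenius alignment with a target kernel.
    a = np.array([np.sum(K * K_target) /
                  (np.linalg.norm(K) * np.linalg.norm(K_target) + 1e-12)
                  for K in kernels])
    a = np.clip(a, 0.0, None)
    return a / a.sum()

# Toy drug-by-side-effect association matrix and three drug-space kernels.
rng = np.random.default_rng(1)
A = (rng.random((30, 50)) < 0.1).astype(float)
A_low = low_rank_approx(A, rank=5)       # lower-rank surrogate of A
K_target = A @ A.T                       # target kernel in drug space
base = [rng.random((30, 30)) for _ in range(3)]
kernels = [B @ B.T for B in base]        # symmetric PSD base kernels
w = kta_weights(kernels, K_target)
K_drug = sum(wi * Ki for wi, Ki in zip(w, kernels))
print(w, K_drug.shape)
```

The same fusion step would be repeated in the side-effect space; how TMF then couples the two fused kernels is not reproduced here.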

13 citations

Proceedings ArticleDOI
25 Oct 2010
TL;DR: This work boosts the visual similarity measure associated with image relevance, and proposes an enhancement algorithm, called Boosting-MKL, which not only incrementally learns the feature fusion but also generally preserves the initial local ranking.
Abstract: Re-ranking the images returned for a query relies on two important steps to be effective: estimating image relevance and enhancing the similarity function. However, attaining an effective visual similarity measure and an accurate re-ranking is quite challenging. We address these issues by first estimating each image's relevance to the query according to visual features and the co-occurrence of local image patches. We then boost the visual similarity measure using the image relevance and propose an enhancement algorithm, called Boosting-MKL, which not only incrementally learns the feature fusion but also generally preserves the initial local ranking. Specifically, we perform a random walk over a similarity graph for re-ranking. The experimental results demonstrate that our proposed approach significantly improves the effectiveness of the visual similarity measure and the performance of image re-ranking.
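The final re-ranking step, a random walk over a similarity graph, can be sketched as a personalized-PageRank-style power iteration. This is a minimal sketch with synthetic similarities and relevance priors, not the Boosting-MKL algorithm; random_walk_rerank and its parameters are illustrative.

```python
import numpy as np

def random_walk_rerank(S, prior, alpha=0.85, iters=100):
    # Row-normalize the similarity matrix into a transition matrix P.
    P = S / (S.sum(axis=1, keepdims=True) + 1e-12)
    r = prior / prior.sum()               # restart distribution from relevance
    scores = r.copy()
    for _ in range(iters):
        # Walk along similarity edges, restarting at the relevance prior.
        scores = alpha * P.T @ scores + (1 - alpha) * r
    return scores

rng = np.random.default_rng(2)
S = rng.random((20, 20))
S = (S + S.T) / 2.0                       # symmetric pairwise image similarities
prior = rng.random(20)                    # initial relevance estimates per image
order = np.argsort(-random_walk_rerank(S, prior))
print(order[:5])                          # top-5 images after re-ranking
```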

13 citations

Journal ArticleDOI
TL;DR: This work introduces the locality preserving constraint into the learning framework to propose a novel Multiple Empirical Kernel Learning with Locality Preserving Constraint (MEKL-LPC), which has a lower generalization error bound, in terms of Rademacher complexity, than both the Modification of Ho–Kashyap algorithm with Squared approximation of the misclassification error (MHKS) and Multi-Kernel MHKS (MultiK-MHKS).
Abstract: Multiple Kernel Learning (MKL) is flexible in dealing with problems involving multiple and heterogeneous data sources. However, its reliance on an inner-product form restricts its application, since kernelizing algorithms that do not satisfy the inner-product formulation is difficult. To overcome this problem, Multiple Empirical Kernel Learning (MEKL) explicitly maps input samples into feature spaces in which the mapped feature vectors are given explicitly. Most existing MEKL methods optimize the learning framework by minimizing the empirical risk, the regularization risk, and the loss term over multiple feature spaces. Because little attention is paid to preserving the local structure among training samples, the learned classifier may lack the locality-preserving property, which can result in unfavorable performance. Inspired by Locality Preserving Projection (LPP), which seeks the optimal projection by preserving the local structure of the input samples, we introduce the locality preserving constraint into the learning framework and propose a novel Multiple Empirical Kernel Learning with Locality Preserving Constraint (MEKL-LPC). MEKL-LPC has a lower generalization error bound, in terms of Rademacher complexity, than both the Modification of Ho–Kashyap algorithm with Squared approximation of the misclassification error (MHKS) and Multi-Kernel MHKS (MultiK-MHKS). Experiments on several real-world datasets demonstrate that MEKL-LPC outperforms the compared algorithms. The contributions of this work are: (i) integrating the locality preserving constraint into MEKL for the first time, and (ii) proposing an algorithm with a lower generalization error bound, namely MEKL-LPC.
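The LPP-style locality constraint is usually expressed as a graph-Laplacian penalty that is small when neighboring samples remain close after projection. The following is a minimal sketch of evaluating such a penalty, assuming a k-NN heat-kernel affinity graph; locality_penalty and its parameters are illustrative, and the sketch does not implement the MEKL-LPC optimization.

```python
import numpy as np

def locality_penalty(X, Z, k=5, t=1.0):
    # X: original samples (n x d); Z: their projections (n x m).
    n = X.shape[0]
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T   # pairwise squared distances
    W = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(d2[i])[1:k + 1]            # k nearest neighbors of sample i
        W[i, nbrs] = np.exp(-d2[i, nbrs] / t)        # heat-kernel affinities
    W = np.maximum(W, W.T)                           # symmetrize the affinity graph
    L = np.diag(W.sum(axis=1)) - W                   # graph Laplacian L = D - W
    # Equals 0.5 * sum_ij W_ij * ||z_i - z_j||^2: small when neighbors stay close.
    return np.trace(Z.T @ L @ Z)

rng = np.random.default_rng(3)
X = rng.normal(size=(50, 10))
Z = X @ rng.normal(size=(10, 3))   # stand-in for samples projected by a learned model
print(locality_penalty(X, Z))
```

In MEKL-LPC this kind of term is added to the empirical risk and regularization terms of the MEKL objective; the sketch only shows the penalty itself.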

13 citations

Journal ArticleDOI
05 Nov 2018-Energies
TL;DR: This study addresses oil price forecasting by combining deep learning and multiple kernel machines using information from the oil, gold, and currency markets; the resulting system robustly outperforms traditional models and significantly reduces forecasting errors.
Abstract: Oil is an important energy commodity. The difficulty of forecasting oil prices stems from the nonlinearity and non-stationarity of their dynamics. However, oil prices are closely correlated with global financial markets and economic conditions, which provides sufficient information to predict them. Traditional models are linear and parametric and are not very effective in predicting oil prices. To address these problems, this study developed a new strategy: deep (or hierarchical) multiple kernel learning (DMKL) was used to predict the oil price time series. Traditional methods from statistics and machine learning usually involve shallow models; however, they are unable to fully represent complex, compositional, and hierarchical data features, which explains why they fail to track oil price dynamics. This study aimed to solve this problem by combining deep learning and multiple kernel machines using information from the oil, gold, and currency markets. DMKL is good at exploiting multiple information sources; it can effectively identify the relevant information and simultaneously select an appropriate data representation. The kernels of DMKL are embedded in a directed acyclic graph (DAG), a deep model that is efficient at representing complex and compositional data features. This provides a solid foundation for extracting the key features of oil price dynamics. In empirical tests on real data, the new system robustly outperformed traditional models and significantly reduced forecasting errors.
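For context, a flat (non-deep) multiple-kernel regressor on lagged series values can be sketched as below. This is only a minimal sketch under assumed lags, kernel widths, and fixed weights, not the paper's DAG-structured DMKL; the series is synthetic and all names (rbf, mk_ridge_fit, mk_ridge_predict) are illustrative.

```python
import numpy as np

def rbf(Xa, Xb, gamma):
    # Cross-RBF kernel between two sample sets.
    d2 = (np.sum(Xa**2, 1)[:, None] + np.sum(Xb**2, 1)[None, :]
          - 2.0 * Xa @ Xb.T)
    return np.exp(-gamma * d2)

def mk_ridge_fit(X, y, gammas, weights, lam=1e-2):
    # Kernel ridge regression on a fixed convex combination of kernels.
    K = sum(w * rbf(X, X, g) for w, g in zip(weights, gammas))
    return np.linalg.solve(K + lam * np.eye(len(y)), y)

def mk_ridge_predict(X_train, X_test, alpha, gammas, weights):
    K_test = sum(w * rbf(X_test, X_train, g) for w, g in zip(weights, gammas))
    return K_test @ alpha

# Toy setup: predict the next value from the previous 3 lags of a series.
rng = np.random.default_rng(4)
series = np.cumsum(rng.normal(size=200))          # stand-in for an oil price series
X = np.column_stack([series[i:i - 3] for i in range(3)])
y = series[3:]
gammas, weights = [0.1, 1.0], [0.5, 0.5]          # fixed kernel weights for brevity
alpha = mk_ridge_fit(X[:150], y[:150], gammas, weights)
pred = mk_ridge_predict(X[:150], X[150:], alpha, gammas, weights)
print(np.mean((pred - y[150:])**2))               # test mean squared error
```

DMKL would additionally learn the kernel weights and compose kernels along a DAG, and would use features from the gold and currency markets alongside the oil lags.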

13 citations


Network Information
Related Topics (5)
Convolutional neural network: 74.7K papers, 2M citations, 89% related
Deep learning: 79.8K papers, 2.1M citations, 89% related
Feature extraction: 111.8K papers, 2.1M citations, 87% related
Feature (computer vision): 128.2K papers, 1.7M citations, 87% related
Image segmentation: 79.6K papers, 1.8M citations, 86% related
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2023    21
2022    44
2021    72
2020    101
2019    113
2018    114