Topic

Multiple kernel learning

About: Multiple kernel learning is a research topic. Over its lifetime, 1,630 publications have been published within this topic, receiving 56,082 citations.


Papers
Journal ArticleDOI
TL;DR: A generic regularizer enables the proposed formulation of Hierarchical Kernel Learning to be employed in Rule Ensemble Learning (REL), where the goal is to construct an ensemble of conjunctive propositional rules.
Abstract: This paper generalizes the framework of Hierarchical Kernel Learning (HKL) and illustrates its utility in the domain of rule learning. HKL involves Multiple Kernel Learning over a set of given base kernels assumed to be embedded on a directed acyclic graph. This paper proposes a two-fold generalization of HKL: the first is employing a generic l1/lρ block-norm regularizer (ρ ∈ (1, 2]) that alleviates a key limitation of the HKL formulation; the second is a generalization to the case of multi-class, multi-label and, more generally, multi-task applications. The main technical contribution of this work is the derivation of a highly specialized partial dual of the proposed generalized HKL formulation and an efficient mirror-descent-based active set algorithm for solving it. Importantly, the generic regularizer enables the proposed formulation to be employed in Rule Ensemble Learning (REL), where the goal is to construct an ensemble of conjunctive propositional rules. Experiments on benchmark REL data sets illustrate the efficacy of the proposed generalizations.
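A minimal sketch of the l1/lρ block-norm regularizer described above, using NumPy. The base kernels, groups, and weights are illustrative assumptions only; the paper's DAG-structured kernel hierarchy, partial dual, and mirror-descent active-set solver are not reproduced here.

```python
import numpy as np

def block_norm(eta, groups, rho=1.5):
    """l1/l_rho block norm of nonnegative kernel weights: the sum over
    groups of the l_rho norm of the weights inside each group, rho in (1, 2]."""
    return sum(np.linalg.norm(eta[list(g)], ord=rho) for g in groups)

def combined_kernel(base_kernels, eta):
    """Conic combination of base kernel matrices weighted by eta."""
    return sum(w * K for w, K in zip(eta, base_kernels))

# Toy example: four homogeneous polynomial kernels on three points,
# split into two hypothetical groups standing in for DAG substructures.
rng = np.random.default_rng(0)
X = rng.normal(size=(3, 5))
base_kernels = [(X @ X.T) ** d for d in range(1, 5)]
eta = np.array([0.4, 0.3, 0.2, 0.1])
groups = [(0, 1), (2, 3)]

print(block_norm(eta, groups, rho=1.5))     # value of the regularizer
print(combined_kernel(base_kernels, eta))   # effective kernel matrix
```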

24 citations

Journal ArticleDOI
TL;DR: Comprehensive experimental results on 13 UCI data sets and two additional real-world data sets show that, via the joint learning process, the approach not only adaptively identifies the pre-specified kernel but also achieves superior classification performance to the original ONKL and the related MKL algorithms.
Abstract: Learning an optimal kernel plays a pivotal role in kernel-based methods. Recently, an approach called optimal neighborhood kernel learning (ONKL) has been proposed, showing promising classification performance. It assumes that the optimal kernel resides in the neighborhood of a "pre-specified" kernel. Nevertheless, how to specify such a kernel in a principled way remains unclear. To solve this issue, this paper treats the pre-specified kernel as an extra variable and jointly learns it with the optimal neighborhood kernel and the structure parameters of support vector machines. To avoid trivial solutions, we constrain the pre-specified kernel with a parameterized model. We first discuss the characteristics of our approach and in particular highlight its adaptivity. After that, two instantiations are demonstrated by modeling the pre-specified kernel as a common Gaussian radial basis function kernel and as a linear combination of a set of base kernels in the manner of multiple kernel learning (MKL), respectively. We show that the optimization in our approach is a min-max problem and can be efficiently solved by employing the extended level method and Nesterov's method. We also give a probabilistic interpretation for our approach and apply it to explain existing kernel learning methods, providing another perspective on their commonalities and differences. Comprehensive experimental results on 13 UCI data sets and two additional real-world data sets show that, via the joint learning process, our approach not only adaptively identifies the pre-specified kernel but also achieves superior classification performance to the original ONKL and the related MKL algorithms.
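A minimal sketch of the MKL instantiation described above, assuming Gaussian RBF base kernels with hand-picked bandwidths and a Frobenius-norm neighborhood; the joint min-max optimization solved with the extended level method and Nesterov's method is not reproduced here.

```python
import numpy as np

def prespecified_kernel(base_kernels, mu):
    """Pre-specified kernel modeled as a convex combination of base kernels
    (the MKL instantiation); mu is normalized onto the simplex."""
    mu = np.asarray(mu, dtype=float)
    mu = mu / mu.sum()
    return sum(w * K for w, K in zip(mu, base_kernels))

def in_neighborhood(K_opt, K_pre, radius):
    """Neighborhood constraint: ||K_opt - K_pre||_F <= radius."""
    return np.linalg.norm(K_opt - K_pre, ord="fro") <= radius

# Toy data and assumed RBF bandwidths for the base kernels.
rng = np.random.default_rng(1)
X = rng.normal(size=(6, 4))
sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
base_kernels = [np.exp(-sq_dists / (2.0 * s ** 2)) for s in (0.5, 1.0, 2.0)]

K_pre = prespecified_kernel(base_kernels, mu=[1.0, 1.0, 1.0])
K_opt = K_pre + 0.01 * np.eye(len(X))   # stand-in for the learned optimal kernel
print(in_neighborhood(K_opt, K_pre, radius=0.1))
```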

24 citations

Journal ArticleDOI
TL;DR: In this study, eight crop types were identified using gamma naught values and polarimetric parameters calculated from TerraSAR-X (or TanDEM-X) dual-polarimetric (HH/VV) data and the classification accuracy of four widely used machine-learning algorithms was evaluated.
Abstract: Cropland maps are useful for the management of agricultural fields and the estimation of harvest yield. Some local governments have documented field properties, including crop type and location, based on site investigations. This process, which is generally done manually, is labor-intensive, and remote-sensing techniques can be used as alternatives. In this study, eight crop types (beans, beetroot, grass, maize, potatoes, squash, winter wheat, and yams) were identified using gamma naught values and polarimetric parameters calculated from TerraSAR-X (or TanDEM-X) dual-polarimetric (HH/VV) data. Three indices (difference (D-type), simple ratio (SR), and normalized difference (ND)) were calculated using gamma naught values and m-chi decomposition parameters and were evaluated in terms of crop classification. We also evaluated the classification accuracy of four widely used machine-learning algorithms (kernel-based extreme learning machine, support vector machine, multilayer feedforward neural network (FNN), and random forest) and two multiple-kernel methods (multiple kernel extreme learning machine (MKELM) and multiple kernel learning (MKL)). MKL performed best, achieving an overall accuracy of 92.1%, and proved useful for the identification of crops with small sample sizes. The difference (raw or normalized) between double-bounce scattering and odd-bounce scattering helped to improve the identification of squash and yam fields.
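The three indices above follow standard difference, ratio, and normalized-difference forms; a small sketch, assuming they are computed pixel-wise from two bands (e.g. HH and VV gamma naught, or two m-chi decomposition parameters such as double-bounce and odd-bounce scattering):

```python
import numpy as np

def dual_pol_indices(band_a, band_b, eps=1e-12):
    """Difference (D-type), simple ratio (SR), and normalized difference (ND)
    indices computed pixel-wise from two bands."""
    d = band_a - band_b
    sr = band_a / (band_b + eps)
    nd = (band_a - band_b) / (band_a + band_b + eps)
    return d, sr, nd

# Toy gamma-naught values (linear scale) for a few pixels.
g_hh = np.array([0.12, 0.30, 0.08])
g_vv = np.array([0.10, 0.15, 0.09])
print(dual_pol_indices(g_hh, g_vv))
```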

24 citations

Proceedings ArticleDOI
10 Dec 2012
TL;DR: This work proposes a multi-view learning approach called co-labeling which is applicable for several machine learning problems where the labels of training samples are uncertain, including semi-supervised learning (SSL), multi-instance learning (MIL) and max-margin clustering (MMC).
Abstract: We propose a multi-view learning approach called co-labeling which is applicable to several machine learning problems where the labels of training samples are uncertain, including semi-supervised learning (SSL), multi-instance learning (MIL) and max-margin clustering (MMC). In particular, we first unify those problems into a general ambiguous problem in which we simultaneously learn a robust classifier and find the optimal training labels from a finite label candidate set. To effectively utilize multiple views of data, we then develop our co-labeling approach for the general multi-view ambiguous problem. In our work, classifiers trained on different views can teach each other by iteratively passing the predictions of training samples from one classifier to the others. The predictions from one classifier are considered as label candidates for the other classifiers. To train a classifier with a label candidate set for each view, we adopt the Multiple Kernel Learning (MKL) technique by constructing each base kernel through associating the input kernel calculated from input features with one label candidate. Compared with the traditional co-training method, which was specifically designed for SSL, the advantages of our co-labeling are twofold: 1) it can be applied to other ambiguous problems such as MIL and MMC; and 2) it is more robust because the MKL method integrates multiple labeling candidates obtained from different iterations and biases. Promising results on several real-world multi-view data sets clearly demonstrate the effectiveness of our proposed co-labeling for both MIL and SSL.
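A small sketch of how label-candidate base kernels for MKL can be formed, assuming the common construction that modulates the input-feature kernel by the outer product of a candidate label vector; the paper's exact construction and the iterative co-labeling loop between views are not reproduced here.

```python
import numpy as np

def candidate_base_kernels(K, label_candidates):
    """One base kernel per candidate labeling: the input-feature kernel K
    multiplied element-wise by the outer product of the candidate labels
    (a common label-kernel construction, assumed here for illustration)."""
    return [K * np.outer(y, y) for y in label_candidates]

def mkl_combination(base_kernels, d):
    """Convex combination of the candidate base kernels with MKL weights d."""
    d = np.asarray(d, dtype=float)
    d = d / d.sum()
    return sum(w * Km for w, Km in zip(d, base_kernels))

# Toy example: four samples, a linear input kernel, two candidate labelings.
rng = np.random.default_rng(2)
X = rng.normal(size=(4, 3))
K = X @ X.T
candidates = [np.array([1, 1, -1, -1]), np.array([1, -1, 1, -1])]

base = candidate_base_kernels(K, candidates)
print(mkl_combination(base, d=[0.7, 0.3]))
```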

24 citations

Journal ArticleDOI
TL;DR: A deep learning framework is provided that combines Linear Discriminant Analysis (LDA) preprocessing with an Auto Encoder (AE) neural network to classify different features within gene-expression profiles.
Abstract: In the recent past, classifiers have been based on genetic signatures, in which many microarray studies are analyzed to predict medical outcomes for cancer patients. However, signatures from different studies have benefited from only a low intensity ratio when classifying individual datasets, which remains a significant point of research. Hence, to overcome this issue, this paper provides a deep learning framework that combines the necessary preprocessing of Linear Discriminant Analysis (LDA) with an Auto Encoder (AE) neural network to classify different features within the gene-expression profile. On this basis, an advanced ensemble classifier based on the Deep Learning (DL) algorithm has been developed to assess the clinical outcome of breast cancer. Furthermore, numerous independent breast cancer datasets and signature-gene representations, including the primary method, have been evaluated for the optimization of parameters. Finally, the experimental results show that the suggested deep learning framework achieves 98.27% accuracy, outperforming techniques such as genomic data and pathological images with multiple kernel learning (GPMKL), Multi-Layer Perceptron (MLP), Deep Learning Diagnosis (DLD), and Spatiotemporal Wavelet Kinetics (SWK).
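A minimal sketch of an LDA-plus-autoencoder feature pipeline in the spirit of the framework above, using scikit-learn. The data, layer sizes, and final classifier are illustrative assumptions; the paper's ensemble stage and exact architecture are not reproduced.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPRegressor

# Hypothetical gene-expression matrix (samples x genes) and binary outcome.
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 300))
y = rng.integers(0, 2, size=120)

# Step 1: LDA projects each expression profile onto discriminant directions
# (at most n_classes - 1 components, so one component for a binary outcome).
lda = LinearDiscriminantAnalysis(n_components=1)
X_lda = lda.fit_transform(X, y)

# Step 2: a shallow autoencoder-style compression of the raw profile.
# An MLPRegressor trained to reconstruct its own input acts as the AE;
# its hidden-layer activations serve as the learned representation.
ae = MLPRegressor(hidden_layer_sizes=(32,), activation="relu",
                  max_iter=500, random_state=0)
ae.fit(X, X)
hidden = np.maximum(0.0, X @ ae.coefs_[0] + ae.intercepts_[0])  # ReLU codes

# Step 3: classify on the concatenated LDA + AE features
# (logistic regression stands in for the paper's ensemble classifier).
features = np.hstack([X_lda, hidden])
clf = LogisticRegression(max_iter=1000).fit(features, y)
print(clf.score(features, y))
```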

24 citations


Network Information
Related Topics (5)
Convolutional neural network: 74.7K papers, 2M citations (89% related)
Deep learning: 79.8K papers, 2.1M citations (89% related)
Feature extraction: 111.8K papers, 2.1M citations (87% related)
Feature (computer vision): 128.2K papers, 1.7M citations (87% related)
Image segmentation: 79.6K papers, 1.8M citations (86% related)
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2023    21
2022    44
2021    72
2020    101
2019    113
2018    114