Topic

Multiple kernel learning

About: Multiple kernel learning is a machine learning research topic concerned with learning an optimal combination of predefined base kernels, rather than relying on a single fixed kernel. Over the lifetime, 1630 publications have been published within this topic, receiving 56082 citations.
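
To make the topic concrete, here is a minimal sketch of the core idea, assuming scikit-learn and a synthetic dataset: instead of committing to a single kernel, a classifier is trained on a weighted combination of base kernels. The kernel choices and the fixed weights below are illustrative; an MKL algorithm would learn such weights from data.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.metrics.pairwise import rbf_kernel, linear_kernel, polynomial_kernel
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, n_features=20, random_state=0)

def combined_kernel(A, B, weights=(0.5, 0.3, 0.2)):
    # Weighted sum of base kernels; an MKL method would learn these weights.
    bases = (rbf_kernel(A, B), linear_kernel(A, B), polynomial_kernel(A, B, degree=2))
    return sum(w * K for w, K in zip(weights, bases))

K_train = combined_kernel(X, X)
clf = SVC(kernel="precomputed").fit(K_train, y)
print("train accuracy:", clf.score(K_train, y))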


Papers
Journal ArticleDOI
TL;DR: The suggested algorithm, AFO combined with MKSVM (AFO-MKSVM), scales well to high-dimensional datasets and outperforms the existing "Linear Discriminant Analysis-Support Vector Machine (LDA-SVM)" approach.
Abstract: Data dimensionality has risen sharply over the last several decades. The "Dimensionality Curse" (DC) is a problem for conventional learning techniques when dealing with high-dimensional "Big Data (BD)". A learning model's performance degrades when a very large number of features is present. "Dimensionality Reduction (DR)" approaches are used to solve the DC issue, and this is a significant area of "Machine Learning (ML)" research. "Feature Selection (FS)" is a prominent procedure for reducing dimensions: by selecting an optimal subset of the original features based on relevant assessment criteria, it typically yields greater classification precision, cheaper processing costs, and improved model comprehensibility. An "Adaptive Firefly Optimization (AFO)" technique based on the "MapReduce (MR)" platform is developed in this research. During the initial phase (mapping stage), the whole large "DataSet (DS)" is first subdivided into blocks. The AFO technique is then used to select features from this large DS. In the final phase (reduction stage), the partial results are merged into a single feature vector. The "Multi Kernel Support Vector Machine (MKSVM)" classifier is then used to classify the data into the appropriate class based on the optimal features obtained from AFO. We found that the suggested algorithm, AFO combined with MKSVM (AFO-MKSVM), scales well to high-dimensional DSs and outperforms the existing "Linear Discriminant Analysis-Support Vector Machine (LDA-SVM)" approach. Evaluation metrics such as Information Ratio for Dimension Reduction, Accuracy, and Recall indicate that the AFO-MKSVM method achieves better outcomes than the LDA-SVM method.

4 citations
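
The map/reduce feature-selection pipeline outlined in the abstract above can be sketched briefly. Since the paper's AFO update rules are not reproduced here, the firefly step is replaced by a simple variance-based stand-in (an assumption for illustration); the block split, merge, and multi-kernel SVM stages follow the abstract's outline.

import numpy as np
from sklearn.metrics.pairwise import rbf_kernel, linear_kernel
from sklearn.svm import SVC

def map_phase(X, n_blocks=4):
    # Mapping stage: subdivide the dataset's feature indices into blocks.
    return np.array_split(np.arange(X.shape[1]), n_blocks)

def select_in_block(X, block, keep=5):
    # Stand-in for AFO scoring: keep the highest-variance features per block.
    scores = X[:, block].var(axis=0)
    return block[np.argsort(scores)[-keep:]]

def reduce_phase(selected_blocks):
    # Reduction stage: merge partial selections into a single feature vector.
    return np.sort(np.concatenate(selected_blocks))

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 40))
X[:, :2] *= 3.0  # make the two label-relevant features higher-variance
y = (X[:, 0] + X[:, 1] > 0).astype(int)

features = reduce_phase([select_in_block(X, b) for b in map_phase(X)])
Xr = X[:, features]
K = 0.5 * rbf_kernel(Xr, Xr) + 0.5 * linear_kernel(Xr, Xr)  # multi-kernel stage
print("train accuracy:", SVC(kernel="precomputed").fit(K, y).score(K, y))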

Book ChapterDOI
07 Sep 2015
TL;DR: This work proposes a novel Multi-Task Multiple Kernel Learning framework based on Support Vector Machines for binary classification; it offers a high degree of flexibility in determining how similar feature spaces should be, as well as which pairs of tasks should share a common feature space in order to benefit overall performance.
Abstract: When faced with learning a set of inter-related tasks from a limited amount of usable data, learning each task independently may lead to poor generalization performance. Multi-Task Learning (MTL) exploits the latent relations between tasks and overcomes data scarcity limitations by co-learning all these tasks simultaneously, offering improved performance. We propose a novel Multi-Task Multiple Kernel Learning framework based on Support Vector Machines for binary classification tasks. By considering pair-wise task affinity in terms of similarity between a pair's respective feature spaces, the new framework, compared to other similar MTL approaches, offers a high degree of flexibility in determining how similar feature spaces should be, as well as which pairs of tasks should share a common feature space in order to benefit overall performance. The associated optimization problem is solved via block coordinate descent, which employs a consensus-form Alternating Direction Method of Multipliers (ADMM) algorithm to optimize the Multiple Kernel Learning weights and, hence, determine task affinities. Empirical evaluation on seven data sets exhibits a statistically significant improvement of our framework's results over those of several other Clustered Multi-Task Learning methods.

4 citations
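
The alternation described above, between per-task SVM solves and an update of the shared Multiple Kernel Learning weights, can be illustrated compactly. The sketch below replaces the paper's consensus-form ADMM weight step with a simple normalized utility update (a simplification assumed here for brevity), so it shows only the block coordinate descent structure, not the actual method.

import numpy as np
from sklearn.metrics.pairwise import rbf_kernel, linear_kernel
from sklearn.svm import SVC

rng = np.random.default_rng(1)
# Three synthetic binary tasks standing in for a set of inter-related tasks.
tasks = [(rng.normal(size=(80, 10)), rng.integers(0, 2, 80)) for _ in range(3)]

def base_kernels(X):
    return [rbf_kernel(X, X), linear_kernel(X, X)]

beta = np.ones(2) / 2  # kernel weights on the simplex, shared across tasks
for _ in range(5):     # block coordinate descent over (SVMs, weights)
    utility = np.zeros(2)
    for X, y in tasks:
        Ks = base_kernels(X)
        K = sum(b * Km for b, Km in zip(beta, Ks))
        svc = SVC(kernel="precomputed").fit(K, y)  # per-task SVM block
        alpha = np.abs(svc.dual_coef_).ravel()
        sv = svc.support_
        for m, Km in enumerate(Ks):  # accumulate each kernel's utility
            utility[m] += alpha @ Km[np.ix_(sv, sv)] @ alpha
    beta = beta * np.sqrt(utility)  # simplified weight block (not ADMM)
    beta /= beta.sum()              # project back onto the simplex
print("learned kernel weights:", beta)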

Proceedings ArticleDOI
01 Nov 2015
TL;DR: A hybrid model is presented that integrates two methods: Support Vector Machine (SVM) and Multiple Classifier (MC). It adopts the MC approach to train multiple SVMs based on multiple kernels in a multi-layer structure, in order to avoid solving complicated optimization tasks.
Abstract: Kernel methods have been successfully applied to different tasks and a variety of data sample sizes. Multiple Kernel Learning (MKL) and Multilayer Multiple Kernel Learning (MLMKL), as new families of kernel methods, consist of learning the optimal kernel from a set of predefined kernels by using an optimization algorithm. However, learning this optimal combination is considered an arduous task. Furthermore, existing algorithms often do not converge to the optimal solution (i.e., weight distribution): for some real-world applications they achieve worse results than the simplest method, the average combination of base kernels. In this paper, we present a hybrid model that integrates two methods: Support Vector Machine (SVM) and Multiple Classifier (MC) methods. More precisely, we propose a multiple classifier framework of deep SVMs for classification tasks. We adopt the MC approach to train multiple SVMs based on multiple kernels in a multi-layer structure, in order to avoid solving complicated optimization tasks. Since the average combination of kernels gives high performance, we train multiple models, each with a predefined combination of kernels and a specific distribution of weights. To evaluate the performance of the proposed method, we conducted an extensive set of classification experiments on a number of benchmark data sets. Experimental results show the effectiveness and efficiency of the proposed method compared to various state-of-the-art MKL and MLMKL algorithms.

4 citations
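
The hybrid approach above avoids optimizing kernel weights directly: several SVMs are trained, each on a fixed predefined combination of base kernels, and their decisions are aggregated. The weight sets and the averaging rule in this sketch are illustrative assumptions, not the paper's exact configuration.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.metrics.pairwise import rbf_kernel, linear_kernel
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, n_features=15, random_state=0)
Ks = [rbf_kernel(X, X), linear_kernel(X, X)]

# Each ensemble member uses its own fixed weight distribution over the kernels.
weight_sets = [(0.5, 0.5), (0.8, 0.2), (0.2, 0.8)]
members = []
for w in weight_sets:
    K = sum(wi * Ki for wi, Ki in zip(w, Ks))
    members.append((w, SVC(kernel="precomputed").fit(K, y)))

# Aggregate by averaging the members' decision values.
scores = np.mean([clf.decision_function(sum(wi * Ki for wi, Ki in zip(w, Ks)))
                  for w, clf in members], axis=0)
pred = (scores > 0).astype(int)
print("ensemble train accuracy:", (pred == y).mean())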

Book ChapterDOI
01 Jan 2014
TL;DR: This work introduces the data-driven and algorithmic challenges inherent in recognizing a large set of generic visual concepts in images, from the perspective of statistical data analysis and machine learning, and discusses approaches relying on kernel-based similarities and discriminative methods that are capable of processing large-scale datasets.
Abstract: Recognition of a large set of generic visual concepts in images, and ranking of images based on visual semantics, is one of the unsolved tasks for future multimedia and scientific applications based on image collections. From that perspective, improvements in the quality of semantic annotations for image data are well matched to the goals of the THESEUS research program with respect to multimedia and scientific services. We will introduce the data-driven and algorithmic challenges inherent in such tasks from the perspective of statistical data analysis and machine learning, and discuss approaches relying on kernel-based similarities and discriminative methods that are capable of processing large-scale datasets.

4 citations

Book ChapterDOI
08 Nov 2010
TL;DR: This work proposes a Multiple Scale Learning (MSL) framework to learn the best weights for each scale in the pyramid, producing class-specific spatial pyramid image representations and thus improved recognition performance.
Abstract: Spatial pyramid matching has recently become a promising technique for image classification. Despite its success and popularity, no prior work has tackled the problem of learning the optimal spatial pyramid representation for the given image data and the associated object category. We propose a Multiple Scale Learning (MSL) framework to learn the best weights for each scale in the pyramid. Our MSL algorithm produces class-specific spatial pyramid image representations and thus provides improved recognition performance. We approach the MSL problem as solving a multiple kernel learning (MKL) task, which defines the optimal combination of base kernels constructed at different pyramid levels. A wide range of experiments is conducted on the Oxford Flower and Caltech-101 datasets, including the use of state-of-the-art feature encoding and pooling strategies. Finally, excellent empirical results on both datasets validate the feasibility of our proposed method.

4 citations
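
Cast as an MKL problem, spatial pyramid matching contributes one base kernel per pyramid level, and MSL learns a per-level weight. The sketch below uses synthetic per-level histograms and fixed weights as placeholders; the histogram intersection kernel is a common choice for pyramid features, but the actual MSL weight learning is not shown.

import numpy as np
from sklearn.svm import SVC

def intersection_kernel(A, B):
    # Histogram intersection kernel, a common choice for pyramid histograms.
    return np.minimum(A[:, None, :], B[None, :, :]).sum(axis=2)

rng = np.random.default_rng(2)
n = 120
# Synthetic histograms standing in for pyramid levels 0..2 (4**l cells, 8 bins).
levels = [rng.random((n, 4 ** l * 8)) for l in range(3)]
y = rng.integers(0, 2, n)

beta = np.array([0.2, 0.3, 0.5])  # per-level weights (MSL would learn these)
K = sum(b * intersection_kernel(H, H) for b, H in zip(beta, levels))
print("train accuracy:", SVC(kernel="precomputed").fit(K, y).score(K, y))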


Network Information
Related Topics (5)
Convolutional neural network: 74.7K papers, 2M citations, 89% related
Deep learning: 79.8K papers, 2.1M citations, 89% related
Feature extraction: 111.8K papers, 2.1M citations, 87% related
Feature (computer vision): 128.2K papers, 1.7M citations, 87% related
Image segmentation: 79.6K papers, 1.8M citations, 86% related
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2023    21
2022    44
2021    72
2020    101
2019    113
2018    114