Topic

Multiple kernel learning

About: Multiple kernel learning is a research topic. Over the lifetime, 1630 publications have been published within this topic receiving 56082 citations.


Papers
Book Chapter DOI
25 Nov 2015
TL;DR: This paper investigates the classification performance of different combinations of bag-of-visual-words models to find a generalized set of visual words for different types of fine-grained classification, and combines the feature sets using a multi-class multiple kernel learning algorithm.
Abstract: In this paper, we deal with the classification problem of visually similar objects, also known as fine-grained recognition. We consider both rigid and non-rigid types of objects. We investigate the classification performance of different combinations of bag-of-visual-words models to find a generalized set of visual words for different types of fine-grained classification. We combine the feature sets using a multi-class multiple kernel learning algorithm. We evaluate the models on two datasets: in the non-rigid, deformable object category, the Oxford 102-class flower dataset is chosen, and a 17-class make-and-model car recognition dataset is selected in the rigid category. Results show that our combination of vocabulary sets provides reasonable accuracies of 81.05 % and 96.76 % on the flower and car datasets, respectively.
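The bag-of-visual-words representation this paper builds on quantizes local image descriptors against a visual vocabulary and histograms the assignments. A minimal NumPy sketch of that step; the toy vocabulary and random descriptors below are illustrative assumptions, not the paper's actual local features:

```python
import numpy as np

def bovw_histogram(descriptors, vocabulary):
    """Quantize local descriptors against a visual vocabulary and
    return an L1-normalized bag-of-visual-words histogram."""
    # Distance from every descriptor to every visual word
    dists = np.linalg.norm(descriptors[:, None, :] - vocabulary[None, :, :], axis=2)
    words = dists.argmin(axis=1)  # nearest visual word per descriptor
    hist = np.bincount(words, minlength=len(vocabulary)).astype(float)
    return hist / hist.sum()

# Toy example: 6 two-dimensional descriptors, vocabulary of 3 words
rng = np.random.default_rng(0)
vocab = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]])
desc = rng.normal(loc=0.0, scale=0.1, size=(6, 2))  # all near word 0
h = bovw_histogram(desc, vocab)
print(h)  # most mass on the first visual word
```

In the paper's setting, one such histogram per vocabulary yields the multiple feature sets that the multi-class MKL algorithm then combines.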
Proceedings Article
01 Jan 2010
TL;DR: This paper explains why MKL can be used in parameter selection and similarity measure combination, and argues that IK theory is required in order to use MKL within this context, and proposes a configuration that makes use of both concepts.
Abstract: This paper proposes to apply Multiple Kernel Learning and Indefinite Kernels (IK) to combine and tune Similarity Measures within the context of Ontology Instance Matching. We explain why MKL can be used in parameter selection and similarity measure combination; argue that IK theory is required in order to use MKL within this context; propose a configuration that makes use of both concepts; and present, using the IIMB benchmark, results of a prototype to show the feasibility of this idea in comparison with other matching tools.
Proceedings Article DOI
01 Jul 2018
TL;DR: A deep-learning-based multiple-layer multiple-kernel learning algorithm that utilizes the learning ability of a deep model to find the best combination of a base set of structured kernel functions, achieving superior results in comparison with current state-of-the-art methods.
Abstract: Deep learning methods achieve significant success in many machine learning problems. By increasing the model depth, a deep model can learn very complex functions from a large-scale dataset. Kernel methods induce a mapping from the input space to a high-dimensional feature space via a kernel function, and can achieve good generalization performance even with small training datasets. A proper kernel function dramatically affects model performance. To learn an effective kernel function, we propose a deep-learning-based multiple-layer multiple-kernel learning algorithm that utilizes the learning ability of a deep model to find the best combination of a base set of structured kernel functions. The proposed method updates the weights of a kernel network by optimizing a metric based on the performance of an SVM classifier with the currently learned kernels, similar to the training procedure of a deep neural network. The proposed method is applied to the problem of medical image annotation and achieves superior results in comparison with current state-of-the-art methods.
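The core idea of combining a base set of kernels can be sketched in plain NumPy. This is an illustrative assumption-laden stand-in, not the paper's method: the kernel weights here are fixed by hand rather than learned by a deep network, and a kernel ridge predictor substitutes for the SVM classifier:

```python
import numpy as np

def rbf(X, Y, gamma):
    """Gaussian RBF kernel matrix between rows of X and Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def combined_kernel(X, Y, weights, gammas=(0.5, 2.0)):
    """Convex combination of base kernels: one linear + RBFs of several widths."""
    Ks = [X @ Y.T] + [rbf(X, Y, g) for g in gammas]
    return sum(w * K for w, K in zip(weights, Ks))

# Toy two-class data: two well-separated Gaussian clusters
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1, 0.3, (20, 2)), rng.normal(1, 0.3, (20, 2))])
y = np.array([-1.0] * 20 + [1.0] * 20)

w = np.array([0.2, 0.5, 0.3])  # hand-set weights; the paper learns these
K = combined_kernel(X, X, w)
# Kernel ridge fit: alpha = (K + lam*I)^-1 y, predict sign(K @ alpha)
alpha = np.linalg.solve(K + 1e-3 * np.eye(len(X)), y)
pred = np.sign(combined_kernel(X, X, w) @ alpha)
print((pred == y).mean())  # training accuracy
```

A convex combination of positive semi-definite base kernels is itself a valid kernel, which is what makes this weighted-sum construction legitimate.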
Posted Content DOI
11 Jul 2018 - bioRxiv
TL;DR: An innovative method that learns parameters specific to these latent states of FC dynamics using a graph-theoretic model (temporal Multiple Kernel Learning, tMKL) and finally predicts the grand average functional connectivity of the unseen subjects by leveraging a state transition Markov model is proposed.
Abstract: Over the last decade there has been growing interest in understanding brain activity in the absence of any task or stimulus, as captured by resting-state functional magnetic resonance imaging (rs-fMRI). These resting-state patterns are not static, but exhibit complex spatio-temporal dynamics. In recent years substantial effort has been put into characterizing different FC configurations as the brain makes transitions between states over time. The dynamics governing these transitions and their relationship with stationary functional connectivity remain elusive. A multitude of methods has been proposed to discover and characterize FC dynamics, of which one of the most widely accepted is the sliding-window approach. Moreover, as these FC configurations are observed to repeat cyclically in time, there is further motivation to use a generic clustering scheme to identify latent states of the dynamics. We discover the underlying lower-dimensional manifold of the temporal structure, which is further parameterized as a set of local density distributions, or latent transient states. We propose an innovative method that learns parameters specific to these latent states using a graph-theoretic model (temporal Multiple Kernel Learning, tMKL) and finally predicts the grand average functional connectivity (FC) of unseen subjects by leveraging a state-transition Markov model. tMKL thus learns a mapping between the underlying anatomical network and the temporal structure. Training and testing were done using the rs-fMRI data of 46 healthy participants, and the results establish the viability of the proposed solution. Parameters of the model are learned via state-specific optimization formulations, and yet the model performs on par with or better than state-of-the-art models for predicting the grand average FC. Moreover, the model shows sensitivity towards subject-specific anatomy.
The proposed model performs significantly better than established models for predicting resting-state functional connectivity based on the whole-brain dynamic mean-field model, the single diffusion kernel model, and another version of the multiple kernel learning model. In summary, we provide a novel solution that does not make strong assumptions about the underlying data and is generally applicable to resting or task data, learning subject-specific state transitions and successfully characterizing the SC-dFC-FC relationship through a unifying framework.
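The sliding-window approach to FC dynamics mentioned above computes a correlation matrix over each temporal window of the regional time series. A minimal sketch; the window length, step, and random time series are illustrative assumptions (a real rs-fMRI pipeline would use preprocessed regional BOLD signals):

```python
import numpy as np

def sliding_window_fc(ts, win, step):
    """Correlation-matrix functional connectivity per sliding window.
    ts: (T, N) array of N regional time series of length T."""
    T, N = ts.shape
    mats = []
    for start in range(0, T - win + 1, step):
        # corrcoef expects variables in rows, so transpose the window
        mats.append(np.corrcoef(ts[start:start + win].T))
    return np.stack(mats)  # shape (n_windows, N, N)

rng = np.random.default_rng(0)
ts = rng.normal(size=(200, 5))       # 200 time points, 5 regions
fcs = sliding_window_fc(ts, win=50, step=25)
print(fcs.shape)  # (7, 5, 5)
```

Clustering the resulting sequence of FC matrices is what yields the latent states whose parameters tMKL then learns.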
Proceedings Article DOI
27 Jan 2023
TL;DR: LAKE, as discussed by the authors, explores the potential of ML to improve decision-making in OS kernels, targeting the tradeoff spaces of subsystems such as memory management and process and I/O scheduling that currently rely on hand-tuned heuristics.
Abstract: The complexity of modern operating systems (OSes), the rapid diversification of hardware, and the steady evolution of machine learning (ML) motivate us to explore the potential of ML to improve decision-making in OS kernels. We conjecture that ML can better manage tradeoff spaces for subsystems such as memory management and process and I/O scheduling that currently rely on hand-tuned heuristics to provide reasonable average-case performance. We explore the replacement of heuristics with ML-driven decision-making in five kernel subsystems and consider the implications for kernel design, shared OS-level components, and access to hardware acceleration. We identify obstacles, address challenges, and characterize the tradeoffs for the benefits ML can provide that arise in kernel space. We find that the use of specialized hardware such as GPUs is critical to absorbing the additional computational load required by ML decision-making, but that poor accessibility of accelerators in kernel space is a barrier to adoption. We also find that the benefits of ML and acceleration for OSes are subsystem-, workload-, and hardware-dependent, suggesting that using ML in kernels will require frameworks to help kernel developers navigate new tradeoff spaces. We address these challenges by building a system called LAKE for supporting ML and exposing accelerators in kernel space. LAKE includes APIs for feature collection and management across abstraction layers and module boundaries. LAKE provides mechanisms for managing the variable profitability of acceleration, and interfaces for mitigating contention for resources between user and kernel space. We show that an ML-backed I/O latency predictor can have its inference time reduced by up to 96% with acceleration.

Network Information
Related Topics (5)
Convolutional neural network
74.7K papers, 2M citations
89% related
Deep learning
79.8K papers, 2.1M citations
89% related
Feature extraction
111.8K papers, 2.1M citations
87% related
Feature (computer vision)
128.2K papers, 1.7M citations
87% related
Image segmentation
79.6K papers, 1.8M citations
86% related
Performance
Metrics
No. of papers in the topic in previous years
Year	Papers
2023	21
2022	44
2021	72
2020	101
2019	113
2018	114