Topic

Multiple kernel learning

About: Multiple kernel learning is a research topic. Over its lifetime, 1,630 publications have been published within this topic, receiving 56,082 citations.


Papers
Journal ArticleDOI
TL;DR: In this article, a distributed and quantized online multiple kernel learning algorithm is proposed to infer the intended nonlinear function on the fly from data samples that are collected in distributed locations.
Abstract: Kernel-based learning has well-documented merits in various machine learning tasks. Most kernel-based learning approaches rely on a pre-selected kernel, the choice of which presumes task-specific prior information. In addition, most existing frameworks assume that data are collected centrally and in batch. Such a setting may not be feasible, especially for large-scale data sets that are collected sequentially over a network. To cope with these challenges, the present work develops an online multi-kernel learning scheme to infer the intended nonlinear function ‘on the fly’ from data samples that are collected in distributed locations. To address communication efficiency among distributed nodes, we study the effects of quantization and develop a distributed and quantized online multiple kernel learning algorithm. We provide a regret analysis indicating that our algorithm achieves sublinear regret. Numerical tests on real datasets show the effectiveness of our algorithm.

2 citations
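
The scheme summarized above combines several candidate kernels on streaming data. As a rough, single-node illustration of the online multi-kernel idea only (not the distributed, quantized algorithm of the paper), the sketch below runs one naive kernel online regressor per candidate bandwidth and mixes their predictions with Hedge-style multiplicative weights; the class names, bandwidths, and step sizes are illustrative assumptions.

```python
import numpy as np

# Hypothetical single-node sketch: one naive kernel online regressor per
# candidate bandwidth, combined by Hedge-style multiplicative weights.
# Not the distributed/quantized algorithm from the paper.

def gaussian_kernel(bandwidth):
    return lambda X, x: np.exp(-np.sum((X - x) ** 2, axis=1) / (2 * bandwidth ** 2))

class OnlineKernelRegressor:
    """Functional online gradient descent with squared loss (unbounded memory)."""
    def __init__(self, kernel, lr=0.5):
        self.kernel, self.lr = kernel, lr
        self.centers, self.alphas = [], []

    def predict(self, x):
        if not self.centers:
            return 0.0
        return float(self.kernel(np.array(self.centers), x) @ np.array(self.alphas))

    def update(self, x, y):
        # the functional gradient of 0.5*(f(x)-y)^2 adds x as a new weighted center
        err = self.predict(x) - y
        self.centers.append(x)
        self.alphas.append(-self.lr * err)

class OnlineMultiKernel:
    def __init__(self, bandwidths, eta=1.0):
        self.learners = [OnlineKernelRegressor(gaussian_kernel(b)) for b in bandwidths]
        self.weights = np.ones(len(bandwidths)) / len(bandwidths)
        self.eta = eta  # Hedge learning rate

    def predict(self, x):
        preds = np.array([l.predict(x) for l in self.learners])
        return float(self.weights @ preds), preds

    def update(self, x, y):
        _, preds = self.predict(x)
        losses = 0.5 * (preds - y) ** 2
        self.weights *= np.exp(-self.eta * losses)   # multiplicative-weights step
        self.weights /= self.weights.sum()
        for learner in self.learners:
            learner.update(x, y)

# toy usage: learn y = sin(x) on the fly from a stream of samples
rng = np.random.default_rng(0)
omkl = OnlineMultiKernel(bandwidths=[0.1, 0.5, 2.0])
for _ in range(200):
    x = rng.uniform(-3, 3, size=1)
    omkl.update(x, np.sin(x[0]))
print("kernel weights:", omkl.weights)
```

Keeping every observed sample as a kernel center is only workable for short toy streams; practical online schemes bound memory, for example with random features or a fixed budget of centers.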

Journal ArticleDOI
TL;DR: In this article, instead of directly calculating label correlations by cosine distance and the like, the authors introduce a kernel function and a manifold regularization to learn them through iterative updating.
Abstract: It is important to fully utilize label correlations in multi-label learning. If there is a strong positive correlation between label i and label j, an instance associated with label i is also likely to have label j. Label correlations can therefore provide auxiliary information when predicting unseen instances. Some existing multi-label algorithms use label correlations to constrain model parameters in the training stage, while label correlations are ignored in the prediction stage. Moreover, it is difficult to obtain relatively accurate label correlations by directly observing the data when some labels have few positive instances in the training set. In this paper, instead of directly calculating label correlations by cosine distance and the like, we introduce a kernel function and a manifold regularization to learn them through iterative updating. Meanwhile, we use the learned correlations and local label information to aid label prediction. Ultimately, unseen instances are predicted by combining the auxiliary label predictions and the model outputs. We compare the proposed algorithm with related algorithms on 10 data sets, and the experimental results validate its effectiveness.
• Label correlations are used to train a model and predict labels simultaneously.
• Label correlations and local label information are used to aid label prediction.
• A kernel function and a constraint are introduced to learn label correlations.
• Extensive experiments verify the effectiveness of the proposed method.

2 citations
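
To make concrete how kernel-derived label correlations can help in the prediction stage, here is a hypothetical one-shot simplification: an RBF kernel over label columns (in place of cosine distance) gives a correlation matrix that is then used to smooth a model's raw per-label scores. It omits the paper's iterative updates and manifold regularization; the function names and the mix parameter are assumptions made for illustration.

```python
import numpy as np

# Hypothetical illustration only: derive label correlations with a kernel on
# label columns (rather than cosine distance) and use them to refine raw
# per-label scores at prediction time. One-shot simplification, not the
# paper's iterative update with manifold regularization.

def rbf_label_correlations(Y, gamma=0.5):
    """Y: (n_instances, n_labels) binary label matrix from the training set."""
    cols = Y.T.astype(float)                         # one row per label
    sq = np.sum(cols ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * cols @ cols.T
    C = np.exp(-gamma * d2)                          # kernel-based correlation
    return C / C.sum(axis=1, keepdims=True)          # row-normalise

def refine_scores(raw_scores, C, mix=0.3):
    """Blend a model's raw per-label scores with correlation-propagated scores."""
    return (1 - mix) * raw_scores + mix * raw_scores @ C.T

# toy usage
Y_train = np.array([[1, 1, 0], [1, 1, 0], [0, 0, 1], [1, 0, 0]])
C = rbf_label_correlations(Y_train)
raw = np.array([[0.9, 0.2, 0.1]])   # model output for one unseen instance
print(refine_scores(raw, C))        # label 1's score rises via co-occurrence with label 0
```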

Journal ArticleDOI
TL;DR: The experiments show that the multiple kernel learning method can automatically choose relatively appropriate kernel combinations in dimensionality reduction for ship-radiated noise using auditory model features.
Abstract: The analysis of underwater acoustic signals, especially ship-radiated noise received by passive sonar, is of great importance in the fields of defense, the military, and scientific research. In this paper, we investigate multiple kernel learning graph embedding using auditory model features in the application of ship-radiated noise feature extraction. We use an auditory model to obtain auditory model features for each signal sample. In order to obtain more effective features, iterative multiple kernel learning methods are adopted to conduct dimensionality reduction. Validated by experiments, the proposed method outperforms ordinary kernel-based graph embedding methods. The experiments show that the multiple kernel learning method can automatically choose relatively appropriate kernel combinations in dimensionality reduction for ship-radiated noise using auditory model features. In addition, some worthwhile conclusions can be drawn from our experiments and analysis.

2 citations
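
A simplified, hypothetical sketch of the pipeline shape described above: several base RBF kernels are combined with fixed weights and the samples are embedded with kernel PCA, a special case of kernel graph embedding. The paper learns the kernel weights iteratively and works on auditory model features; here the weights are simply given and random vectors stand in for the features.

```python
import numpy as np

# Simplified, hypothetical sketch: combine several base kernels with fixed
# weights and embed the data with kernel PCA (a special case of kernel graph
# embedding). The paper learns the kernel weights iteratively; here they are
# given, and the "auditory model features" are random stand-ins.

def rbf_kernel(X, gamma):
    sq = np.sum(X ** 2, axis=1)
    return np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))

def combined_kernel(X, gammas, weights):
    K = sum(w * rbf_kernel(X, g) for g, w in zip(gammas, weights))
    return K / np.sum(weights)

def kernel_pca(K, n_components=2):
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n          # centring matrix
    Kc = H @ K @ H
    vals, vecs = np.linalg.eigh(Kc)              # ascending eigenvalues
    idx = np.argsort(vals)[::-1][:n_components]
    return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0))

# toy usage with random stand-ins for auditory model features
X = np.random.default_rng(0).normal(size=(50, 16))
K = combined_kernel(X, gammas=[0.01, 0.1, 1.0], weights=[0.5, 0.3, 0.2])
Z = kernel_pca(K, n_components=2)                # low-dimensional embedding
```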

Posted Content
TL;DR: The linear Fourier approximation methodology for both single and multiple gradient-based kernel learning is developed and it is shown that it produces fast and accurate predictors on a complex dataset such as the Visual Object Challenge 2011 (VOC2011).
Abstract: Approximations based on random Fourier features have recently emerged as an efficient and formally consistent methodology to design large-scale kernel machines. By expressing the kernel as a Fourier expansion, features are generated based on a finite set of random basis projections, sampled from the Fourier transform of the kernel, with inner products that are Monte Carlo approximations of the original kernel. Based on the observation that different kernel-induced Fourier sampling distributions correspond to different kernel parameters, we show that an optimization process in the Fourier domain can be used to identify the different frequency bands that are useful for prediction on training data. Moreover, the application of group Lasso to random feature vectors corresponding to a linear combination of multiple kernels, leads to efficient and scalable reformulations of the standard multiple kernel learning model \cite{Varma09}. In this paper we develop the linear Fourier approximation methodology for both single and multiple gradient-based kernel learning and show that it produces fast and accurate predictors on a complex dataset such as the Visual Object Challenge 2011 (VOC2011).

2 citations
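
As a sketch of the random-Fourier-feature view of multiple kernel learning described in the abstract, the code below builds one block of random Fourier features per candidate Gaussian bandwidth, stacks the blocks, and fits them with a group-Lasso penalty (one group per kernel) by proximal gradient descent, so that uninformative bandwidths are driven toward zero. The bandwidths, step size, and regularization strength are illustrative assumptions, and this is not the gradient-based Fourier-domain optimization developed in the paper.

```python
import numpy as np

# Hypothetical sketch: random Fourier feature blocks for several Gaussian
# bandwidths, stacked and fitted with a group-Lasso penalty (one group per
# kernel) via proximal gradient descent. Settings are illustrative.

def rff_block(X, bandwidth, n_features, rng):
    """Random Fourier features approximating a Gaussian kernel."""
    d = X.shape[1]
    W = rng.normal(scale=1.0 / bandwidth, size=(d, n_features))
    b = rng.uniform(0, 2 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

def fit_group_lasso(blocks, y, lam=0.1, lr=0.01, n_iter=500):
    """Squared loss + sum of group L2 norms; proximal gradient updates."""
    Z = np.hstack(blocks)
    sizes = [B.shape[1] for B in blocks]
    w = np.zeros(Z.shape[1])
    for _ in range(n_iter):
        grad = Z.T @ (Z @ w - y) / len(y)
        w -= lr * grad
        start = 0
        for s in sizes:                      # block soft-thresholding per kernel
            g = w[start:start + s]
            norm = np.linalg.norm(g)
            w[start:start + s] = max(0.0, 1 - lr * lam / (norm + 1e-12)) * g
            start += s
    return w, sizes

# toy usage: the surviving groups indicate which bandwidths matter
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)
bandwidths = (0.3, 1.0, 3.0)
blocks = [rff_block(X, bw, 100, rng) for bw in bandwidths]
w, sizes = fit_group_lasso(blocks, y)
start = 0
for bw, s in zip(bandwidths, sizes):
    print(f"bandwidth {bw}: group norm {np.linalg.norm(w[start:start + s]):.3f}")
    start += s
```

The group norms printed at the end indicate which candidate kernels survive, which is the kernel-selection effect the group Lasso is meant to produce.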

Book ChapterDOI
14 Jun 2017
TL;DR: A forecasting procedure based on multivariate dynamic kernels is introduced to re-examine, under a nonlinear framework, the experimental tests reported by Welch and Goyal, which showed that several variables proposed in the academic literature are of no use for predicting the equity premium under linear regressions.
Abstract: This paper introduces a forecasting procedure based on multivariate dynamic kernels to re-examine, under a nonlinear framework, the experimental tests reported by Welch and Goyal showing that several variables proposed in the academic literature are of no use for predicting the equity premium under linear regressions. In this approach, kernel functions for time series are used with multiple kernel learning in order to represent the relative importance of each of these variables.

2 citations
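
A rough sketch, under simplifying assumptions, of the forecasting setup described above: each predictor series gets its own RBF kernel over sliding windows of its recent history (a crude stand-in for the paper's multivariate dynamic kernels), the kernels are weighted by centred kernel-target alignment as a cheap proxy for learned MKL weights, and the forecast model is kernel ridge regression on the combined kernel. The window length, bandwidth, and synthetic data are illustrative only.

```python
import numpy as np

# Rough sketch under simplifying assumptions: per-variable RBF kernels on
# sliding windows stand in for the paper's multivariate dynamic kernels,
# centred kernel-target alignment stands in for learned MKL weights, and a
# kernel ridge fit on the combined kernel produces the forecasts.

def window_matrix(series, window):
    """Stack sliding windows so row t holds the last `window` values up to t."""
    return np.array([series[t - window:t] for t in range(window, len(series))])

def rbf(A, B, gamma):
    d2 = np.sum(A ** 2, 1)[:, None] + np.sum(B ** 2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-gamma * d2)

def alignment_weight(K, y):
    """Centred kernel-target alignment, clipped at zero."""
    H = np.eye(len(y)) - 1.0 / len(y)
    Kc, yy = H @ K @ H, np.outer(y - y.mean(), y - y.mean())
    a = np.sum(Kc * yy) / (np.linalg.norm(Kc) * np.linalg.norm(yy) + 1e-12)
    return max(a, 0.0)

# toy data: two predictor series; the target depends only on x1's recent history
rng = np.random.default_rng(0)
T, window = 300, 12
x1, x2 = rng.normal(size=T), rng.normal(size=T)
W1, W2 = window_matrix(x1, window), window_matrix(x2, window)
y = W1.mean(axis=1) + 0.05 * rng.normal(size=len(W1))

kernels = [rbf(W, W, gamma=1.0 / (2 * window)) for W in (W1, W2)]
weights = np.array([alignment_weight(K, y) for K in kernels])
weights /= weights.sum() + 1e-12
K = sum(w * Kk for w, Kk in zip(weights, kernels))

alpha = np.linalg.solve(K + 1e-2 * np.eye(len(y)), y)   # kernel ridge fit; K @ alpha gives in-sample forecasts
print("per-variable weights:", weights)                  # x1's kernel should dominate
```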


Network Information
Related Topics (5)
Convolutional neural network: 74.7K papers, 2M citations, 89% related
Deep learning: 79.8K papers, 2.1M citations, 89% related
Feature extraction: 111.8K papers, 2.1M citations, 87% related
Feature (computer vision): 128.2K papers, 1.7M citations, 87% related
Image segmentation: 79.6K papers, 1.8M citations, 86% related
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2023    21
2022    44
2021    72
2020    101
2019    113
2018    114