Open Access Proceedings Article

Improved Loss Bounds For Multiple Kernel Learning

TLDR
This paper proposes two generalization error bounds for multiple kernel learning (MKL), one of which is a Rademacher complexity bound that is additive in the (logarithmic) kernel complexity and margin term.
Abstract
We propose two new generalization error bounds for multiple kernel learning (MKL). First, using the bound of Srebro and Ben-David (2006) as a starting point, we derive a new version which uses a simple counting argument for the choice of kernels in order to generate a tighter bound when 1-norm regularization (sparsity) is imposed in the kernel learning problem. The second bound is a Rademacher complexity bound which is additive in the (logarithmic) kernel complexity and margin term. This dependence is superior to all previously published Rademacher bounds for learning a convex combination of kernels, including the recent bound of Cortes et al. (2010), which exhibits a multiplicative interaction. We illustrate the tightness of our bounds with simulations.
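To make the additive-versus-multiplicative distinction concrete, the two bound shapes can be sketched schematically in LaTeX. This is an illustration of the general form only, not the paper's exact statement: constants, norm assumptions, and confidence terms are omitted. Here M is the number of base kernels, ρ the margin, R a radius/trace bound on the kernels, and n the sample size.

```latex
% Schematic bound shapes only: constants, norm assumptions, and
% confidence terms are omitted.

% Multiplicative interaction (the Cortes et al. (2010) style):
% the log-kernel-complexity factor multiplies the margin term.
\mathrm{err}(f) \;\lesssim\; \widehat{\mathrm{err}}_{\rho}(f)
  \;+\; O\!\left( \sqrt{ \frac{R^{2}}{\rho^{2}} \cdot \frac{\log M}{n} } \right)

% Additive interaction (the shape claimed in this paper):
% kernel complexity and margin term enter as separate summands.
\mathrm{err}(f) \;\lesssim\; \widehat{\mathrm{err}}_{\rho}(f)
  \;+\; O\!\left( \sqrt{ \frac{R^{2}}{\rho^{2}\, n} } \right)
  \;+\; O\!\left( \sqrt{ \frac{\log M}{n} } \right)
```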


Citations
Journal Article (DOI)

Large-Margin Multi-View Information Bottleneck

TL;DR: This paper formulates multi-view learning as the problem of encoding a communication system with multiple senders, each of which represents one view of the data; it derives robustness and generalization error bounds for the proposed algorithm and reveals specific properties of multi-view learning.
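For context, a minimal statement of the classical single-view information bottleneck objective that such multi-view variants build on (this is the standard formulation, not the paper's large-margin multi-view objective): compress the input X into a representation T while preserving information about the label Y, with β trading off the two terms.

```latex
% Classical information bottleneck Lagrangian (single view, for context):
% compress X into T while preserving information about Y;
% beta trades off compression against prediction.
\min_{p(t \mid x)} \; I(X;T) \;-\; \beta\, I(T;Y)
```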
Journal Article (DOI)

EasyMKL: a scalable multiple kernel learning algorithm

TL;DR: The proposed method is compared with other baselines and three state-of-the-art MKL methods, showing that the approach is often superior, and it is shown empirically that its advantage is even clearer when noise features are added.
Proceedings Article

Generalization Bounds for Domain Adaptation

TL;DR: This paper provides a new framework for studying generalization bounds of the learning process for domain adaptation; it uses the integral probability metric to measure the difference between two domains and develops Hoeffding-type deviation inequalities and symmetrization inequalities for each kind of domain adaptation.
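The integral probability metric (IPM) invoked here has a standard definition, which makes precise how the "difference between two domains" is measured: for a fixed function class F and source/target distributions P and Q,

```latex
% Integral probability metric between distributions P and Q,
% indexed by a function class F (standard definition).
D_{\mathcal{F}}(P, Q) \;=\; \sup_{f \in \mathcal{F}}
  \bigl| \mathbb{E}_{x \sim P}[f(x)] - \mathbb{E}_{x \sim Q}[f(x)] \bigr|
```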
Proceedings Article (DOI)

Online Joint Multi-Metric Adaptation From Frequent Sharing-Subset Mining for Person Re-Identification

TL;DR: The model simultaneously takes the sample-specific discriminant and the set-based visual similarity among test samples into consideration, so that the adapted multiple metrics can jointly refine the discriminant of all the given test samples via a multi-kernel late-fusion framework.
Journal Article (DOI)

Bridging deep and multiple kernel learning: A review

TL;DR: This article presents a comprehensive overview of state-of-the-art approaches that bridge MKL and deep learning techniques, systematically reviewing typical hybrid models, training techniques, and their theoretical and practical benefits.
References
Book

Kernel Methods for Pattern Analysis

TL;DR: This book provides an easy introduction for students and researchers to the growing field of kernel-based pattern analysis, demonstrating with examples how to handcraft an algorithm or a kernel for a new specific application, and covering all the necessary conceptual and mathematical tools to do so.
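As a minimal illustration of the "handcraft a kernel" theme, the Python sketch below (an illustrative example, not taken from the book) builds a Gram matrix from a custom RBF kernel and checks the symmetry and positive semidefiniteness that any valid kernel must exhibit on a finite sample.

```python
import numpy as np

def rbf_kernel(x, z, gamma=0.5):
    """A handcrafted Gaussian RBF kernel: k(x, z) = exp(-gamma * ||x - z||^2)."""
    return np.exp(-gamma * np.sum((x - z) ** 2))

def gram_matrix(X, kernel):
    """Evaluate the kernel on all pairs of rows of X to form the Gram matrix."""
    n = len(X)
    return np.array([[kernel(X[i], X[j]) for j in range(n)] for i in range(n)])

# A valid kernel must yield a symmetric PSD Gram matrix on any finite sample.
X = np.random.default_rng(0).normal(size=(20, 3))
K = gram_matrix(X, rbf_kernel)
assert np.allclose(K, K.T)                        # symmetry
assert np.linalg.eigvalsh(K).min() > -1e-10       # PSD up to numerical tolerance
print("Gram matrix is symmetric PSD on this sample.")
```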
Proceedings Article

Learning with Kernels

Book Chapter (DOI)

Rademacher and Gaussian Complexities: Risk Bounds and Structural Results

TL;DR: In this paper, the authors investigate the use of data-dependent estimates of the complexity of a function class, called Rademacher and Gaussian complexities, in a decision theoretic setting and prove general risk bounds in terms of these complexities.
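For reference, one common convention for the empirical Rademacher complexity of a function class F on a fixed sample x_1, ..., x_n is the following (Bartlett and Mendelson's own definition differs by a factor of 2 and an absolute value):

```latex
% Empirical Rademacher complexity of a class F on a fixed sample,
% with sigma_1, ..., sigma_n i.i.d. uniform on {-1, +1}.
\widehat{\mathfrak{R}}_{n}(\mathcal{F}) \;=\;
  \mathbb{E}_{\sigma}\!\left[ \sup_{f \in \mathcal{F}}
  \frac{1}{n} \sum_{i=1}^{n} \sigma_{i}\, f(x_{i}) \right]
```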
Journal Article (DOI)

Learning the Kernel Matrix with Semidefinite Programming

TL;DR: This paper shows how the kernel matrix can be learned from data via semidefinite programming (SDP) techniques and leads directly to a convex method for learning the 2-norm soft margin parameter in support vector machines, solving an important open problem.
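A minimal sketch of this idea, assuming Python with cvxpy and an SDP-capable solver such as SCS: it learns nonnegative weights for a convex combination of base kernels via the hard-margin semidefinite program in Schur-complement form. The function name and variables are illustrative; the paper's soft-margin and transductive variants add further terms.

```python
import numpy as np
import cvxpy as cp

def learn_kernel_weights(kernels, y, c=1.0):
    """Hard-margin multiple kernel learning as an SDP (Schur-complement form).

    kernels: list of m PSD Gram matrices, each n x n.
    y: labels in {-1, +1}, shape (n,). Assumes the data are separable
       under some convex combination of the base kernels.
    Returns the learned nonnegative kernel weights mu.
    """
    n, m = len(y), len(kernels)
    Y = np.diag(y.astype(float))

    mu = cp.Variable(m, nonneg=True)   # kernel combination weights
    t = cp.Variable()                  # epigraph variable for the objective
    lam = cp.Variable()                # multiplier for the constraint y^T alpha = 0
    nu = cp.Variable(n, nonneg=True)   # multipliers for alpha >= 0

    # G(mu) = diag(y) (sum_i mu_i K_i) diag(y)
    G = sum(mu[i] * (Y @ kernels[i] @ Y) for i in range(m))
    v = cp.reshape(np.ones(n) + nu + lam * y, (n, 1))

    # Schur complement: t >= v^T G(mu)^{-1} v  <=>  [[G, v], [v^T, t]] >> 0
    M = cp.bmat([[G, v], [v.T, cp.reshape(t, (1, 1))]])
    traces = np.array([np.trace(K) for K in kernels])

    prob = cp.Problem(
        cp.Minimize(t),
        [0.5 * (M + M.T) >> 0,   # symmetrize explicitly for the PSD constraint
         traces @ mu == c],      # fix the trace of the combined kernel
    )
    prob.solve(solver=cp.SCS)
    return mu.value
```

Restricting to nonnegative weights keeps the combined kernel positive semidefinite automatically, which is what makes the convex-combination case tractable as an SDP.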