Open Access · Posted Content

Sparsity-accuracy trade-off in MKL

TLDR
This work empirically investigates the best trade-off between sparse and uniformly-weighted multiple kernel learning (MKL) using elastic-net regularization on real and simulated datasets, and finds that the best trade-off parameter depends not only on the sparsity of the true kernel-weight spectrum but also on the linear dependence among kernels and the number of samples.
Abstract
We empirically investigate the best trade-off between sparse and uniformly-weighted multiple kernel learning (MKL) using elastic-net regularization on real and simulated datasets. We find that the best trade-off parameter depends not only on the sparsity of the true kernel-weight spectrum but also on the linear dependence among kernels and the number of samples.
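To make the setting concrete, here is a minimal sketch of learning elastic-net-regularized kernel weights. This is not the paper's algorithm: the kernel-ridge objective, the candidate RBF kernels, and the Nelder-Mead optimizer are all assumptions chosen for illustration.

```python
# A minimal sketch, NOT the paper's algorithm: objective, kernels, and
# optimizer are illustrative assumptions. It learns kernel weights d under
# an elastic-net penalty mixing l1 (sparse) and squared l2 (uniform-leaning).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=100)

def rbf(X, gamma):
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

Ks = [rbf(X, g) for g in (0.1, 1.0, 10.0)]  # candidate kernels

def objective(d, lam=0.1, alpha=0.5, ridge=1.0):
    d = np.abs(d)  # keep kernel weights non-negative
    K = sum(di * Ki for di, Ki in zip(d, Ks))
    a = np.linalg.solve(K + ridge * np.eye(len(y)), y)  # kernel ridge fit
    fit = ((K @ a - y) ** 2).mean()
    # elastic-net penalty on the weights: alpha=1 -> sparse (l1 only),
    # alpha=0 -> uniform-leaning (squared l2 only)
    return fit + lam * (alpha * d.sum() + (1 - alpha) * (d ** 2).sum())

res = minimize(objective, x0=np.ones(len(Ks)) / len(Ks), method="Nelder-Mead")
print("learned kernel weights:", np.abs(res.x))
```

Sweeping alpha from 0 to 1 traces exactly the sparse-versus-uniform trade-off the abstract refers to.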


Citations
Journal ArticleDOI

Multiple Kernel Learning for Visual Object Recognition: A Review

TL;DR: It is argued that, given a sufficient number of training examples and feature/kernel types, MKL is more effective for object recognition than simple kernel combination, and that among the various approaches proposed for MKL, those based on sequential minimal optimization, semi-infinite programming, and the level method are the most computationally efficient.
Journal ArticleDOI

Multiple Graph Label Propagation by Sparse Integration

TL;DR: This paper addresses the problem of obtaining the optimal linear combination of multiple different graphs under the label propagation setting, and proposes a new formulation with a sparsity property (in the coefficients of the graph combination) that existing methods cannot readily achieve.
Journal ArticleDOI

Efficient Sparse Generalized Multiple Kernel Learning

TL;DR: This paper proposes a generalized MKL model with a constraint on a linear combination of the ℓ1-norm and the squared ℓ2-norm of the kernel weights to seek the optimal kernel combination weights, which enjoys a favorable sparsity property in the solution and also facilitates a grouping effect.
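Read with the extraction-dropped subscripts restored, the constraint plausibly takes the following form, where θ is the trade-off parameter and d the vector of kernel weights. This is an interpretation of the TL;DR, not an excerpt from the paper.

```latex
% Plausible form of the kernel-weight constraint (an interpretation,
% not quoted from the paper): theta trades sparsity against grouping.
\min_{f,\; d \ge 0} \ \mathcal{L}(f; d)
\quad \text{s.t.} \quad \theta \|d\|_1 + (1 - \theta) \|d\|_2^2 \le 1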
Proceedings Article

Ultra-Fast Optimization Algorithm for Sparse Multi Kernel Learning

TL;DR: This paper introduces a novel MKL formulation, which mixes elements of ℓp-norm and elastic-net regularization, and proposes a fast stochastic gradient descent method that solves the novel MKL formulation.
Journal ArticleDOI

SpicyMKL: a fast algorithm for Multiple Kernel Learning with thousands of kernels

TL;DR: A new optimization algorithm for Multiple Kernel Learning called SpicyMKL is proposed, which is applicable to general convex loss functions and general types of regularization, and gives a general block-norm formulation of MKL that includes non-sparse regularizations, such as elastic-net and ℓp-norm regularizations.
References
Journal ArticleDOI

Regression Shrinkage and Selection via the Lasso

TL;DR: A new method for estimation in linear models called the lasso is proposed, which minimizes the residual sum of squares subject to the sum of the absolute values of the coefficients being less than a constant.
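For reference, the lasso estimate in its original constrained form:

```latex
% Lasso (Tibshirani, 1996): l1-constrained least squares.
\hat{\beta} = \arg\min_{\beta} \ \|y - X\beta\|_2^2
\quad \text{subject to} \quad \|\beta\|_1 \le t
```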
Journal ArticleDOI

Regularization and variable selection via the elastic net

TL;DR: It is shown that the elastic net often outperforms the lasso while enjoying a similar sparsity of representation, and an algorithm called LARS-EN is proposed for computing elastic-net regularization paths efficiently, much like the LARS algorithm does for the lasso.
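The naive elastic-net criterion combines the ridge and lasso penalties:

```latex
% Naive elastic net (Zou & Hastie, 2005): ridge + lasso penalties.
\hat{\beta} = \arg\min_{\beta} \ \|y - X\beta\|_2^2
+ \lambda_2 \|\beta\|_2^2 + \lambda_1 \|\beta\|_1
```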
Proceedings ArticleDOI

Beyond Bags of Features: Spatial Pyramid Matching for Recognizing Natural Scene Categories

TL;DR: This paper presents a method for recognizing scene categories based on approximate global geometric correspondence that exceeds the state of the art on the Caltech-101 database and achieves high accuracy on a large database of fifteen natural scene categories.
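As a concrete illustration, here is a minimal sketch of the spatial-pyramid histogram-intersection kernel for two images; the input format (per-level lists of flattened per-cell histograms) is an assumption made for this sketch.

```python
# A minimal sketch of the spatial-pyramid histogram-intersection kernel
# (a reading of Lazebnik et al., 2006; the input format is assumed).
import numpy as np

def spm_kernel(hists_a, hists_b):
    """hists_a, hists_b: lists indexed by pyramid level l = 0..L of
    flattened per-cell visual-word histograms for two images."""
    L = len(hists_a) - 1
    k = 0.0
    for l, (ha, hb) in enumerate(zip(hists_a, hists_b)):
        inter = np.minimum(ha, hb).sum()  # histogram intersection
        # coarser levels get geometrically smaller weights
        weight = 1 / 2 ** L if l == 0 else 1 / 2 ** (L - l + 1)
        k += weight * inter
    return k
```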
BookDOI

Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond

TL;DR: Learning with Kernels provides an introduction to SVMs and related kernel methods, giving a reader equipped with some basic mathematical knowledge all of the concepts necessary to enter the world of machine learning via theoretically well-founded yet easy-to-use kernel algorithms.
Journal ArticleDOI

Model selection and estimation in regression with grouped variables

TL;DR: In this paper, instead of selecting factors by stepwise backward elimination, the authors focus on the accuracy of estimation and consider extensions of the lasso, the LARS algorithm and the non-negative garrotte for factor selection.
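In its commonly cited form, the group-lasso criterion penalizes the ℓ2 norms of predefined groups of coefficients (here with the √p_g weights suggested for groups of size p_g):

```latex
% Group lasso (Yuan & Lin, 2006): groupwise l2 penalty yields
% factor-level (group-level) selection.
\hat{\beta} = \arg\min_{\beta} \ \|y - X\beta\|_2^2
+ \lambda \sum_{g=1}^{G} \sqrt{p_g}\, \|\beta_g\|_2
```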