Open Access · Proceedings Article

Spike and Slab Variational Inference for Multi-Task and Multiple Kernel Learning

TLDR
A variational Bayesian inference algorithm is introduced that can be widely applied to sparse linear models; it is based on the spike and slab prior, which from a Bayesian perspective is the gold standard for sparse inference.
Abstract
We introduce a variational Bayesian inference algorithm which can be widely applied to sparse linear models. The algorithm is based on the spike and slab prior which, from a Bayesian perspective, is the gold standard for sparse inference. We apply the method to a general multi-task and multiple kernel learning model in which a common set of Gaussian process functions is linearly combined with task-specific sparse weights, thus inducing relation between tasks. This model unifies several sparse linear models, such as generalized linear models, sparse factor analysis and matrix factorization with missing values, so that the variational algorithm can be applied to all these cases. We demonstrate our approach in multi-output Gaussian process regression, multi-class classification, image processing applications and collaborative filtering.
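To make the model structure concrete, the sketch below generates data from a spike-and-slab multi-task construction of the kind the abstract describes: a common set of Gaussian process functions is linearly combined with task-specific sparse weights. All variable names, kernel choices, and hyperparameter values are illustrative assumptions for this sketch, not the authors' implementation or their inference algorithm.

```python
# Minimal generative sketch (assumed notation, not the paper's code):
# shared GP functions F, task-specific spike-and-slab weights W, outputs Y.
import numpy as np

rng = np.random.default_rng(0)
n_points, n_latent, n_tasks = 100, 5, 3
X = np.linspace(0.0, 1.0, n_points)[:, None]

def rbf_kernel(X, lengthscale=0.1, variance=1.0):
    # Squared-exponential kernel for the shared latent GP functions.
    d = X - X.T
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

K = rbf_kernel(X) + 1e-6 * np.eye(n_points)
# Common GP functions shared across all tasks (one column per function).
F = rng.multivariate_normal(np.zeros(n_points), K, size=n_latent).T  # (n_points, n_latent)

# Spike-and-slab prior on the task-specific mixing weights:
# each weight is exactly zero with probability 1 - pi (the spike),
# otherwise drawn from a Gaussian slab.
pi, slab_var = 0.3, 1.0
S = rng.binomial(1, pi, size=(n_latent, n_tasks))             # inclusion indicators
W = S * rng.normal(0.0, np.sqrt(slab_var), size=(n_latent, n_tasks))

# Each task output is a sparse linear combination of the shared GP functions.
noise_std = 0.1
Y = F @ W + noise_std * rng.standard_normal((n_points, n_tasks))
print("Observed multi-task outputs:", Y.shape)
```

The paper's variational algorithm infers the posterior over the sparse weights and latent functions from observations such as Y; the sketch only illustrates the generative structure through which tasks share information.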



Citations
Journal Article

Representation Learning: A Review and New Perspectives

TL;DR: Recent work in the area of unsupervised feature learning and deep learning is reviewed, covering advances in probabilistic models, autoencoders, manifold learning, and deep networks.
Posted Content

A Survey on Multi-Task Learning

TL;DR: Multi-task learning (MTL) is a learning paradigm in machine learning whose aim is to leverage useful information contained in multiple related tasks to help improve the generalization performance of all the tasks.
Posted Content

Unsupervised Feature Learning and Deep Learning: A Review and New Perspectives

TL;DR: Recent work in the area of unsupervised feature learning and deep learning is reviewed, covering advances in probabilistic models, manifold learning, and deep learning.
Posted Content

Representation Learning: A Review and New Perspectives

TL;DR: The success of machine learning algorithms generally depends on data representation, because different representations can entangle and hide, to varying degrees, the different explanatory factors of variation behind the data.
Journal Article

MOFA+: a statistical framework for comprehensive integration of multi-modal single-cell data

TL;DR: This work presents Multi-Omics Factor Analysis v2 (MOFA+), a statistical framework for the comprehensive and scalable integration of single-cell multi-modal data that reconstructs a low-dimensional representation of the data using computationally efficient variational inference and supports flexible sparsity constraints.
References
Journal Article

Regression Shrinkage and Selection via the Lasso

TL;DR: A new method for estimation in linear models, called the lasso, is proposed; it minimizes the residual sum of squares subject to the sum of the absolute values of the coefficients being less than a constant.
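For reference, the constrained formulation summarized in this TL;DR is conventionally written as follows (standard lasso notation; the symbols are generic, not copied from the cited paper):

```latex
\hat{\beta}^{\text{lasso}} = \arg\min_{\beta}
  \sum_{i=1}^{n} \Big( y_i - \beta_0 - \sum_{j=1}^{p} x_{ij}\beta_j \Big)^{2}
  \quad \text{subject to} \quad \sum_{j=1}^{p} |\beta_j| \le t .
```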
Journal Article

Sparse Bayesian Learning and the Relevance Vector Machine

TL;DR: It is demonstrated that by exploiting a probabilistic Bayesian learning framework, the 'relevance vector machine' (RVM) can derive accurate prediction models which typically utilise dramatically fewer basis functions than a comparable SVM while offering a number of additional advantages.
Journal Article

Variable selection via Gibbs sampling

TL;DR: In this paper, the Gibbs sampler is used to indirectly sample from the multinomial posterior distribution on the set of possible subset choices to identify the promising subsets by their more frequent appearance in the Gibbs sample.
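As a rough illustration of the "more frequent appearance" idea, the sketch below tallies how often each candidate variable subset appears across saved Gibbs iterations. The indicator draws are synthetic placeholders; this is a generic sketch of the post-processing step, not the authors' sampler.

```python
# Count how often each subset of variables is visited across Gibbs iterations.
# In practice `samples` would hold the binary inclusion indicators saved from
# a spike-and-slab Gibbs run; here they are synthetic placeholders.
from collections import Counter
import numpy as np

rng = np.random.default_rng(1)
n_iters, n_vars = 2000, 6
samples = rng.binomial(1, 0.4, size=(n_iters, n_vars))  # placeholder draws

subset_counts = Counter(tuple(row) for row in samples)
for subset, count in subset_counts.most_common(3):
    included = [j for j, z in enumerate(subset) if z == 1]
    print(f"variables {included}: visited in {count / n_iters:.1%} of iterations")
```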
Proceedings Article

Automated Flower Classification over a Large Number of Classes

TL;DR: Results show that learning the optimum kernel combination of multiple features vastly improves the performance, from 55.1% for the best single feature to 72.8% for the combination of all features.
Journal Article

Sparse Representation for Color Image Restoration

TL;DR: This work puts forward ways of handling nonhomogeneous noise and missing information, paving the way to state-of-the-art results in applications such as color image denoising, demosaicing, and inpainting.