
Shakir Mohamed

Researcher at Google

Publications - 85
Citations - 21,093

Shakir Mohamed is an academic researcher at Google. The author has contributed to research in the topics of inference and computer science, has an h-index of 40, and has co-authored 79 publications receiving 16,245 citations. Previous affiliations of Shakir Mohamed include the University of British Columbia and the University of Cambridge.

Papers
Proceedings Article

beta-VAE: Learning Basic Visual Concepts with a Constrained Variational Framework

TL;DR: In this article, a modification of the variational autoencoder (VAE) framework is proposed to learn interpretable factorised latent representations from raw image data in a completely unsupervised manner.
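The modification the summary refers to is a beta-weighted ELBO. A minimal NumPy sketch of that objective for a diagonal-Gaussian posterior (the function name and the Gaussian assumption are illustrative, not taken from the paper listing):

```python
import numpy as np

def beta_vae_loss(recon_error, mu, log_var, beta=4.0):
    """beta-VAE objective: reconstruction error plus a beta-weighted
    KL divergence between the Gaussian posterior q(z|x) = N(mu, sigma^2)
    and the standard-normal prior. Setting beta > 1 pressures the latent
    code toward factorised, interpretable representations."""
    # KL(N(mu, sigma^2) || N(0, I)), summed over latent dimensions
    kl = 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)
    return recon_error + beta * kl
```

With `beta = 1` this reduces to the standard VAE evidence lower bound; the paper's contribution is the effect of larger `beta` on disentanglement.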
Posted Content

Stochastic Backpropagation and Approximate Inference in Deep Generative Models

TL;DR: In this article, a recognition model is proposed that represents approximate posterior distributions and acts as a stochastic encoder of the data, allowing joint optimisation of the parameters of both the generative model and the recognition model.
Posted Content

Semi-Supervised Learning with Deep Generative Models

TL;DR: It is shown that deep generative models and approximate Bayesian inference exploiting recent advances in variational methods can be used to provide significant improvements, making generative approaches highly competitive for semi-supervised learning.
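The semi-supervised approach treats the missing label of an unlabelled example as a latent variable. A hedged NumPy sketch of the unlabelled-data bound in that spirit (the argument names and array shapes are my own illustration, up to sign conventions):

```python
import numpy as np

def unlabelled_bound(log_probs_y, neg_elbo_per_class):
    """For an unlabelled example, weight the per-class (negative) ELBO
    by the classifier posterior q(y|x) and add its entropy, so the
    classifier is trained jointly with the generative model.
    log_probs_y[k] = log q(y=k|x); neg_elbo_per_class[k] = -L(x, y=k)."""
    q = np.exp(log_probs_y)
    entropy = -np.sum(q * log_probs_y)
    return np.sum(q * neg_elbo_per_class) + entropy
```

The entropy term keeps the classifier from collapsing to overconfident labels early in training.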
Proceedings Article

Stochastic Backpropagation and Approximate Inference in Deep Generative Models

TL;DR: This work marries ideas from deep neural networks and approximate Bayesian inference to derive a generalised class of deep, directed generative models, endowed with a new algorithm for scalable inference and learning that introduces a recognition model to represent approximate posterior distributions and that acts as a stochastic encoder of the data.
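The scalable inference algorithm mentioned above rests on reparameterised sampling: the latent variable is written as a deterministic function of the recognition model's outputs and an independent noise source. A minimal NumPy sketch of that sampling step (the function name is mine):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_latent(mu, log_var):
    """Reparameterised sample z = mu + sigma * eps, with eps ~ N(0, I).
    Because the randomness lives entirely in eps, z is a differentiable
    function of (mu, log_var), so gradients of a Monte Carlo estimate of
    the ELBO can be backpropagated through the recognition model."""
    eps = rng.standard_normal(np.shape(mu))
    return mu + np.exp(0.5 * log_var) * eps
```

This is the step that lets ordinary stochastic gradient methods train both the generative and recognition networks end to end.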
Proceedings Article

Variational Inference with Normalizing Flows

TL;DR: It is demonstrated that the theoretical advantages of having posteriors that better match the true posterior, combined with the scalability of amortized variational approaches, provide a clear improvement in the performance and applicability of variational inference.
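A normalizing flow builds those richer posteriors by passing a simple Gaussian sample through a chain of invertible maps while tracking the change of density. A sketch of one planar-flow step in NumPy (variable names are illustrative; no constraint on `u` enforcing invertibility is included here):

```python
import numpy as np

def planar_flow(z, w, u, b):
    """One planar flow step f(z) = z + u * tanh(w.z + b), returning the
    transformed samples and log |det df/dz|. Stacking such steps turns a
    simple base posterior into a more flexible one; the log-det terms
    correct the density under the change of variables."""
    a = np.tanh(z @ w + b)                 # scalar activation per sample
    f = z + np.outer(a, u)                 # transformed samples, shape (n, d)
    psi = np.outer(1.0 - a**2, w)          # h'(w.z + b) * w
    log_det = np.log(np.abs(1.0 + psi @ u))
    return f, log_det
```

The log-determinant is cheap to evaluate here because the Jacobian of a planar map is a rank-one update of the identity, which is what makes the approach scalable.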