scispace - formally typeset

Zoubin Ghahramani

Researcher at University of Cambridge

Publications -  470
Citations -  71371

Zoubin Ghahramani is an academic researcher at the University of Cambridge. His research focuses on topics including Bayesian inference and inference more broadly. He has an h-index of 111 and has co-authored 466 publications receiving 62,945 citations. His previous affiliations include Uber and Carnegie Mellon University.

Papers
Journal Article (DOI)

An introduction to variational methods for graphical models

TL;DR: This paper presents a tutorial introduction to the use of variational methods for inference and learning in graphical models (Bayesian networks and Markov random fields), and describes a general framework for generating variational transformations based on convex duality.
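The core idea of variational inference is to pick, from a tractable family, the distribution closest in KL divergence to the true posterior. A minimal sketch, with an illustrative 1-D Gaussian target and a fixed-variance Gaussian variational family (these choices are mine, not from the paper):

```python
import numpy as np

def kl_gauss(mu_q, s_q, mu_p, s_p):
    # Closed-form KL(q || p) between two 1-D Gaussians
    return np.log(s_p / s_q) + (s_q**2 + (mu_q - mu_p)**2) / (2 * s_p**2) - 0.5

# Target "posterior" p = N(2.0, 1.5); variational family q = N(mu, 1.0)
mus = np.linspace(-5, 5, 1001)
kls = [kl_gauss(m, 1.0, 2.0, 1.5) for m in mus]
best_mu = mus[int(np.argmin(kls))]
print(best_mu)  # the KL-optimal variational mean recovers the target mean, 2.0
```

Real variational methods optimize the evidence lower bound (ELBO) rather than the KL directly, since the true posterior is unavailable, but minimizing this KL is the equivalent objective.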
Proceedings Article

Semi-supervised learning using Gaussian fields and harmonic functions

TL;DR: An approach to semi-supervised learning is proposed that is based on a Gaussian random field model, and methods to incorporate class priors and the predictions of classifiers obtained by supervised learning are discussed.
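The harmonic-function solution in this paper has a compact linear-algebra form: unlabeled nodes take the value f_u = L_uu^{-1} W_ul f_l, where L is the graph Laplacian. A sketch on a toy five-node chain graph (the graph and unit edge weights are illustrative choices, not from the paper):

```python
import numpy as np

# Chain graph 0-1-2-3-4 with unit edge weights; nodes 0 and 4 are labeled.
W = np.zeros((5, 5))
for i in range(4):
    W[i, i + 1] = W[i + 1, i] = 1.0
D = np.diag(W.sum(axis=1))
L = D - W  # combinatorial graph Laplacian

labeled, unlabeled = [0, 4], [1, 2, 3]
f_l = np.array([0.0, 1.0])  # known labels at the chain's endpoints

# Harmonic solution: each unlabeled node equals the average of its neighbors
L_uu = L[np.ix_(unlabeled, unlabeled)]
W_ul = W[np.ix_(unlabeled, labeled)]
f_u = np.linalg.solve(L_uu, W_ul @ f_l)
print(f_u)  # linear interpolation along the chain: [0.25 0.5 0.75]
```

On this chain the harmonic property forces linear interpolation between the two labels; on a general similarity graph the same solve propagates labels along dense regions of the data.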
Proceedings Article

Dropout as a Bayesian approximation: representing model uncertainty in deep learning

TL;DR: A new theoretical framework is developed casting dropout training in deep neural networks (NNs) as approximate Bayesian inference in deep Gaussian processes, which mitigates the problem of representing uncertainty in deep learning without sacrificing either computational complexity or test accuracy.
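In practice this result licenses "MC dropout": keep dropout active at test time, run several stochastic forward passes, and read predictive uncertainty off the spread of the outputs. A minimal numpy sketch with an untrained toy network (the architecture, weights, and dropout rate are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy one-hidden-layer network with fixed random weights (illustration only)
W1 = rng.normal(size=(1, 50)); b1 = np.zeros(50)
W2 = rng.normal(size=(50, 1)); b2 = np.zeros(1)

def forward(x, p_drop=0.5):
    h = np.maximum(x @ W1 + b1, 0.0)     # ReLU hidden layer
    mask = rng.random(h.shape) > p_drop  # dropout stays ON at test time
    h = h * mask / (1.0 - p_drop)        # inverted-dropout scaling
    return h @ W2 + b2

x = np.array([[0.3]])
samples = np.stack([forward(x) for _ in range(200)])
mean, std = samples.mean(), samples.std()
print(f"prediction {mean:.2f} +/- {std:.2f}")  # std is the MC-dropout uncertainty
```

The mean over passes approximates the predictive mean of the corresponding deep Gaussian process, and the sample variance approximates the model's uncertainty, at no extra training cost.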
Journal Article (DOI)

An Internal Model for Sensorimotor Integration

TL;DR: A sensorimotor integration task was investigated in which participants estimated the location of one of their hands at the end of movements made in the dark and under externally imposed forces, providing direct support for the existence of an internal model.
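The kind of internal forward model argued for here can be caricatured as a recursive estimator: predict the next hand position from the outgoing motor command, and blend in sensory feedback with a Kalman-style gain when it is available. A minimal 1-D sketch; the dynamics, noise values, and gains are my illustrative assumptions, not the paper's fitted model:

```python
def estimate(commands, feedback=None, q=0.01, r=0.04):
    """Track hand position from motor commands, optionally fusing feedback."""
    x_hat, p = 0.0, 0.0  # estimated position and its variance
    for t, u in enumerate(commands):
        x_hat += u  # forward-model prediction from the motor command
        p += q      # uncertainty grows with every step
        if feedback is not None:
            k = p / (p + r)  # Kalman gain: trust feedback more as p grows
            x_hat += k * (feedback[t] - x_hat)
            p *= (1.0 - k)
    return x_hat, p

cmds = [0.1] * 10  # ten small forward movement commands
x_dark, p_dark = estimate(cmds)  # movement in the dark: no feedback branch
print(x_dark, p_dark)  # position integrates the commands; variance accumulates
```

With no feedback the estimate is pure dead reckoning and its variance grows linearly with movement duration, which mirrors the experiment's finding that localization errors accumulate for movements made in the dark.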