
Jascha Sohl-Dickstein

Researcher at Google

Publications - 165
Citations - 18,485

Jascha Sohl-Dickstein is an academic researcher from Google. The author has contributed to research in topics including artificial neural networks and deep learning, has an h-index of 53, and has co-authored 154 publications receiving 12,779 citations. Previous affiliations of Jascha Sohl-Dickstein include Washington University in St. Louis and the University of California, Berkeley.

Papers
Posted Content

Deep Unsupervised Learning using Nonequilibrium Thermodynamics

TL;DR: This work develops an approach to systematically and slowly destroy structure in a data distribution through an iterative forward diffusion process, then learns a reverse diffusion process that restores structure in data, yielding a highly flexible and tractable generative model of the data.
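A minimal sketch of the forward (structure-destroying) half of that process is shown below. It is an illustration rather than the paper's implementation: it assumes a Gaussian diffusion with a hand-picked linear variance schedule and toy data, and omits the learned reverse process.

```python
import numpy as np

# Sketch of the forward (noising) diffusion process described above.
# Assumptions: Gaussian diffusion, a hand-picked linear variance schedule,
# toy 2-D data; the learned reverse (denoising) process is not shown.
rng = np.random.default_rng(0)
T = 1000
betas = np.linspace(1e-4, 0.02, T)       # per-step noise variances
alphas_bar = np.cumprod(1.0 - betas)     # cumulative signal retention

def q_sample(x0, t):
    """Sample x_t ~ q(x_t | x_0) in closed form for a Gaussian diffusion."""
    noise = rng.standard_normal(x0.shape)
    return np.sqrt(alphas_bar[t]) * x0 + np.sqrt(1.0 - alphas_bar[t]) * noise

x0 = rng.standard_normal((8, 2))         # toy "data"
x_mid = q_sample(x0, T // 2)             # structure partially destroyed
x_end = q_sample(x0, T - 1)              # nearly pure Gaussian noise
```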
Proceedings Article

Density estimation using Real NVP

TL;DR: The authors extend the space of probabilistic models using real-valued non-volume preserving transformations, a set of powerful invertible and learnable transformations, resulting in an unsupervised learning algorithm with exact log-likelihood computation, exact sampling, exact inference of latent variables, and an interpretable latent space.
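As a rough illustration of why the log-likelihood is exact, the sketch below implements a single affine coupling layer with toy stand-ins for the learned scale and translation networks; because the Jacobian is triangular, the log-determinant reduces to a simple sum.

```python
import numpy as np

# Sketch of one affine coupling layer, the building block behind real NVP's
# exact log-likelihood. tanh(x1) and x1**2 are toy stand-ins for the learned
# scale and translation networks used in the paper.
def coupling_forward(x, split=1):
    x1, x2 = x[:, :split], x[:, split:]
    s = np.tanh(x1)                 # stand-in for the scale network
    t = x1 ** 2                     # stand-in for the translation network
    y2 = x2 * np.exp(s) + t         # only x2 is transformed, conditioned on x1
    log_det = s.sum(axis=1)         # triangular Jacobian: log|det| = sum(s)
    return np.concatenate([x1, y2], axis=1), log_det

def coupling_inverse(y, split=1):
    y1, y2 = y[:, :split], y[:, split:]
    s, t = np.tanh(y1), y1 ** 2
    x2 = (y2 - t) * np.exp(-s)      # exact inverse, no iterative solve needed
    return np.concatenate([y1, x2], axis=1)

x = np.random.default_rng(0).standard_normal((4, 2))
y, log_det = coupling_forward(x)
assert np.allclose(coupling_inverse(y), x)
```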
Posted Content

Score-Based Generative Modeling through Stochastic Differential Equations

TL;DR: This work presents a stochastic differential equation (SDE) that smoothly transforms a complex data distribution to a known prior distribution by slowly injecting noise, and a corresponding reverse-time SDE that transforms the prior distribution back into the data distribution by slowly removing the noise.
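Written out in the standard score-SDE notation (a reference sketch of the general form, not a quote from the paper), the two processes are:

```latex
% Forward (noising) SDE and its reverse-time counterpart;
% \nabla_x \log p_t(x) is the score the model learns to approximate.
\begin{align}
  \mathrm{d}x &= f(x, t)\,\mathrm{d}t + g(t)\,\mathrm{d}w
    && \text{(data $\to$ prior)} \\
  \mathrm{d}x &= \bigl[f(x, t) - g(t)^2\,\nabla_x \log p_t(x)\bigr]\,\mathrm{d}t
    + g(t)\,\mathrm{d}\bar{w}
    && \text{(prior $\to$ data)}
\end{align}
```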
Proceedings Article

Deep Neural Networks as Gaussian Processes

TL;DR: The exact equivalence between infinitely wide deep networks and GPs is derived, and it is found that test performance increases as finite-width trained networks are made wider and more similar to a GP, so that GP predictions typically outperform those of finite-width networks.
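A minimal sketch of the corresponding kernel recursion for a fully connected ReLU network is given below; the depth and weight/bias variances are illustrative choices, not values from the paper.

```python
import numpy as np

# Sketch of the NNGP kernel recursion for a fully connected ReLU network,
# following the infinite-width network <-> GP correspondence described above.
# sigma_w, sigma_b and depth are illustrative hyperparameters.
def relu_expectation(kxx, kyy, kxy):
    """E[relu(u) relu(v)] for (u, v) ~ N(0, [[kxx, kxy], [kxy, kyy]])."""
    cos_t = np.clip(kxy / np.sqrt(kxx * kyy), -1.0, 1.0)
    theta = np.arccos(cos_t)
    return np.sqrt(kxx * kyy) / (2 * np.pi) * (np.sin(theta) + (np.pi - theta) * cos_t)

def nngp_kernel(x, y, depth=3, sigma_w=1.6, sigma_b=0.1):
    """Covariance between the GP outputs for inputs x and y."""
    d = x.size
    kxx = sigma_b**2 + sigma_w**2 * (x @ x) / d
    kyy = sigma_b**2 + sigma_w**2 * (y @ y) / d
    kxy = sigma_b**2 + sigma_w**2 * (x @ y) / d
    for _ in range(depth):
        exy = relu_expectation(kxx, kyy, kxy)
        exx = relu_expectation(kxx, kxx, kxx)   # equals kxx / 2 for ReLU
        eyy = relu_expectation(kyy, kyy, kyy)
        kxx, kyy, kxy = (sigma_b**2 + sigma_w**2 * e for e in (exx, eyy, exy))
    return kxy

x, y = np.ones(4), np.array([1.0, -1.0, 1.0, -1.0])
print(nngp_kernel(x, y))
```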