
Stefano Ermon

Researcher at Stanford University

Publications - 408
Citations - 22,015

Stefano Ermon is an academic researcher from Stanford University. The author has contributed to research in topics including computer science and inference. The author has an h-index of 54 and has co-authored 346 publications receiving 11,846 citations. Previous affiliations of Stefano Ermon include Cornell University and Google.

Papers
Posted Content

Score-Based Generative Modeling through Stochastic Differential Equations

TL;DR: This work presents a stochastic differential equation (SDE) that smoothly transforms a complex data distribution into a known prior distribution by slowly injecting noise, and a corresponding reverse-time SDE that transforms the prior distribution back into the data distribution by slowly removing the noise.
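
As a brief illustration (a sketch of the standard formulation, not notation taken verbatim from the paper), the forward noising process and its reverse can be written as Itô SDEs, where f is the drift, g the diffusion coefficient, w and w̄ forward- and reverse-time Wiener processes, and the score ∇ₓ log pₜ(x) is approximated by a learned network:

```latex
% Forward (noising) SDE: data -> prior
\mathrm{d}\mathbf{x} = f(\mathbf{x}, t)\,\mathrm{d}t + g(t)\,\mathrm{d}\mathbf{w}

% Reverse-time (denoising) SDE: prior -> data, driven by the score
\mathrm{d}\mathbf{x} = \big[\, f(\mathbf{x}, t) - g(t)^{2}\,\nabla_{\mathbf{x}} \log p_{t}(\mathbf{x}) \,\big]\,\mathrm{d}t + g(t)\,\mathrm{d}\bar{\mathbf{w}}
```
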
Journal Article

Combining satellite imagery and machine learning to predict poverty

TL;DR: This work shows how a convolutional neural network can be trained to identify image features that explain up to 75% of the variation in local-level economic outcomes, an approach that could transform efforts to track and target poverty in developing countries.
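
A minimal sketch of the regression step this TL;DR implies, assuming CNN image features have already been extracted per survey cluster; the array names and synthetic data below are placeholders for illustration, not the paper's code or data:

```python
# Hypothetical sketch: regress survey-measured economic outcomes on CNN features
# extracted from satellite imagery; out-of-sample R^2 is the "variation explained".
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
cnn_features = rng.normal(size=(500, 4096))   # placeholder: one feature vector per cluster
survey_outcomes = rng.normal(size=500)        # placeholder: e.g. log consumption expenditure

model = RidgeCV(alphas=np.logspace(-3, 3, 13))  # ridge regression, CV over regularization
r2_scores = cross_val_score(model, cnn_features, survey_outcomes, cv=5, scoring="r2")
print("mean out-of-sample R^2:", r2_scores.mean())
```
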
Posted Content

Denoising Diffusion Implicit Models

TL;DR: Denoising diffusion implicit models (DDIMs) are presented: a more efficient class of iterative implicit probabilistic models with the same training procedure as DDPMs that can produce high-quality samples faster and perform semantically meaningful image interpolation directly in the latent space.
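
A minimal sketch of one deterministic DDIM update (the η = 0 case), assuming a trained noise-prediction network eps_model(x_t, t) and a precomputed tensor alpha_bar of cumulative noise-schedule products; the function and variable names are illustrative, not the authors' code:

```python
import torch

@torch.no_grad()
def ddim_step(x_t, t, t_prev, eps_model, alpha_bar):
    """One deterministic DDIM update from timestep t to an earlier timestep t_prev."""
    a_t, a_prev = alpha_bar[t], alpha_bar[t_prev]
    eps = eps_model(x_t, t)                                  # predicted noise
    x0_pred = (x_t - (1 - a_t).sqrt() * eps) / a_t.sqrt()    # implied clean sample
    # Deterministic (non-Markovian) update: reuse the predicted noise at the
    # earlier noise level, which allows skipping most intermediate steps.
    return a_prev.sqrt() * x0_pred + (1 - a_prev).sqrt() * eps
```
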
Posted Content

Generative Modeling by Estimating Gradients of the Data Distribution

TL;DR: A new generative model where samples are produced via Langevin dynamics using gradients of the data distribution estimated with score matching, which allows flexible model architectures, requires no sampling during training or the use of adversarial methods, and provides a learning objective that can be used for principled model comparisons.
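
A minimal sketch of annealed Langevin dynamics sampling, assuming a score network score_model(x, sigma) trained with denoising score matching over a decreasing sequence of noise levels sigmas; the names and step-size rule are illustrative assumptions, not the authors' code:

```python
import torch

@torch.no_grad()
def annealed_langevin_sample(score_model, x, sigmas, n_steps=100, eps=2e-5):
    """Sample by running Langevin dynamics at each noise level, from coarse to fine."""
    for sigma in sigmas:                            # sigmas sorted from large to small
        step = eps * (sigma / sigmas[-1]) ** 2      # smaller steps at smaller noise
        for _ in range(n_steps):
            noise = torch.randn_like(x)
            x = x + 0.5 * step * score_model(x, sigma) + (step ** 0.5) * noise
    return x
```
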
Posted Content

Generative Adversarial Imitation Learning

TL;DR: A new general framework is proposed for directly extracting a policy from data, as if it were obtained by reinforcement learning following inverse reinforcement learning; a particular instantiation of this framework draws an analogy between imitation learning and generative adversarial networks.
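
A minimal sketch of the adversarial piece of such a framework: a discriminator is trained to separate expert state-action pairs from the policy's, and its output is turned into a surrogate reward for an on-policy RL update (TRPO in the paper). The network sizes, labeling convention, and reward form below are assumptions for illustration, not the authors' implementation:

```python
import torch
import torch.nn as nn

class Discriminator(nn.Module):
    """Scores state-action pairs; positive logits lean 'expert', negative lean 'policy'."""
    def __init__(self, obs_dim, act_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + act_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, 1),
        )

    def forward(self, obs, act):
        return self.net(torch.cat([obs, act], dim=-1))

def discriminator_loss(disc, expert_obs, expert_act, policy_obs, policy_act):
    # Binary cross-entropy: expert pairs labeled 1, policy pairs labeled 0.
    bce = nn.BCEWithLogitsLoss()
    e_logits = disc(expert_obs, expert_act)
    p_logits = disc(policy_obs, policy_act)
    return bce(e_logits, torch.ones_like(e_logits)) + bce(p_logits, torch.zeros_like(p_logits))

def surrogate_reward(disc, obs, act):
    # Reward the policy for producing pairs the discriminator judges expert-like;
    # the policy itself is then improved with an on-policy RL step.
    return torch.log(torch.sigmoid(disc(obs, act)) + 1e-8)
```
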