Open Access Posted Content

Score Matching Model for Unbounded Data Score

TLDR
The Unbounded Noise Conditional Score Network (UNCSN), as discussed by the authors, improves score-based models by analyzing their behavior at zero perturbation noise; the associated loss function mitigates the loss-imbalance issue within a mini-batch, and a theoretical analysis of the proposed loss uncovers the underlying mechanism of data distribution modeling by score-based models.
Abstract
Recent advances in score-based models incorporate the stochastic differential equation (SDE), which brings state-of-the-art performance on image generation tasks. This paper improves such score-based models by analyzing the model at zero perturbation noise. In real datasets, the score function diverges as the perturbation noise ($\sigma$) decreases to zero, and this observation leads to the argument that score estimation fails at $\sigma=0$ for any neural network structure. Subsequently, we introduce the Unbounded Noise Conditional Score Network (UNCSN), which resolves the score-divergence problem with an easily applicable modification to any noise-conditional score-based model. Additionally, we introduce a new type of SDE from which the exact log-likelihood can be calculated. On top of that, the associated loss function mitigates the loss-imbalance issue in a mini-batch, and we present a theoretical analysis of the proposed loss to uncover the underlying mechanism of data distribution modeling by score-based models.
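The score-divergence observation can be illustrated numerically. For a Gaussian perturbation kernel, the denoising score-matching regression target is $\nabla_{\tilde{x}} \log p_\sigma(\tilde{x} \mid x) = (x - \tilde{x})/\sigma^2$, whose expected norm grows like $1/\sigma$, so the target a noise-conditional network must fit blows up as $\sigma \to 0$. A minimal sketch (NumPy, synthetic stand-in data; not the paper's code):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(1000, 2))  # hypothetical "data" samples

def dsm_target_norm(x, sigma, rng):
    # Perturb data with Gaussian noise of scale sigma and compute the
    # denoising score-matching target (x - x_tilde) / sigma**2.
    noise = rng.normal(size=x.shape) * sigma
    x_tilde = x + noise
    target = (x - x_tilde) / sigma**2  # equals -noise / sigma**2
    return np.mean(np.linalg.norm(target, axis=1))

# Average target magnitude at decreasing noise levels: grows roughly as 1/sigma.
norms = [dsm_target_norm(x, s, rng) for s in (1.0, 0.1, 0.01)]
```

This is what motivates modifying the model rather than hoping a network can regress an unbounded target at small $\sigma$.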


Citations
Posted Content

Score-based diffusion models for accelerated MRI

Abstract: Score-based diffusion models provide a powerful way to model images using the gradient of the data distribution. Leveraging the learned score function as a prior, here we introduce a way to sample data from a conditional distribution given the measurements, such that the model can be readily used for solving inverse problems in imaging, especially accelerated MRI. In short, we train a continuous time-dependent score function with denoising score matching. Then, at the inference stage, we iterate between a numerical SDE solver and a data-consistency projection step to achieve reconstruction. Our model requires only magnitude images for training, yet is able to reconstruct complex-valued data and even extends to parallel imaging. The proposed method is agnostic to sub-sampling patterns and can be used with any sampling scheme. Also, due to its generative nature, our approach can quantify uncertainty, which is not possible with standard regression settings. On top of all these advantages, our method also has very strong performance, even beating models trained with full supervision. With extensive experiments, we verify the superiority of our method in terms of quality and practicality.
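The inference loop described above (alternate a reverse-SDE solver step with a data-consistency projection) can be sketched as follows. This is a simplified illustration, not the paper's code: an image-domain mask stands in for the subsampled k-space operator, and a toy Gaussian-prior score replaces the trained network.

```python
import numpy as np

rng = np.random.default_rng(1)

def toy_score(x, sigma):
    # Stand-in for the trained time-dependent score network:
    # the score of a standard Gaussian smoothed at noise level sigma.
    return -x / (1.0 + sigma**2)

def reconstruct(y, mask, n_steps=50, sigma_max=1.0, sigma_min=0.01):
    # Alternate an Euler-Maruyama-style reverse step with a
    # data-consistency projection onto the measured entries.
    sigmas = np.geomspace(sigma_max, sigma_min, n_steps)
    x = np.where(mask, y, 0.0)  # "zero-filled" initialization
    for sigma in sigmas:
        step = sigma**2
        x = x + step * toy_score(x, sigma) \
              + np.sqrt(step) * rng.normal(size=x.shape)
        x = np.where(mask, y, x)  # enforce consistency with measurements
    return x
```

Because the last operation is the projection, the output agrees exactly with the measurements on sampled entries; the score model only fills in the unmeasured ones.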
Posted Content

Score-based Generative Modeling in Latent Space

TL;DR: The Latent Score-based Generative Model (LSGM) as mentioned in this paper proposes to train SGMs in a latent space, relying on the variational autoencoder framework.
Posted Content

Densely connected normalizing flows

TL;DR: DenseFlow as discussed by the authors extends normalizing flows by incrementally padding intermediate representations with noise in accordance with previous invertible units, which is referred to as cross-unit coupling.
Posted Content

Hierarchical Transformers Are More Efficient Language Models

TL;DR: Hourglass, as discussed by the authors, is a hierarchical Transformer language model that uses the best-performing upsampling and downsampling layers and achieves state-of-the-art results on the ImageNet32 generation task.
References
Journal ArticleDOI

Image quality assessment: from error visibility to structural similarity

TL;DR: In this article, a structural similarity (SSIM) index is proposed for image quality assessment based on the degradation of structural information, and it is validated against subjective ratings and objective methods on a database of images compressed with JPEG and JPEG2000.
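The SSIM index combines luminance, contrast, and structure comparisons between two images. A minimal single-window sketch (the full method averages SSIM over a sliding Gaussian window), assuming an 8-bit dynamic range and the standard constants $c_1=(0.01 \cdot 255)^2$, $c_2=(0.03 \cdot 255)^2$:

```python
import numpy as np

def ssim_global(x, y, data_range=255.0):
    # Single-window SSIM: statistics computed over the whole image.
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx**2 + my**2 + c1) * (vx + vy + c2))
```

An image compared with itself scores exactly 1, and any distortion (here a uniform brightness shift) drives the score below 1.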
Book

Partial Differential Equations

TL;DR: In this book, the author presents the theory of linear PDEs (Sobolev spaces, second-order elliptic equations, and linear evolution equations) as well as nonlinear PDEs, including Hamilton-Jacobi equations and systems of conservation laws.
Dissertation

Learning Multiple Layers of Features from Tiny Images

TL;DR: In this dissertation, the author describes how to train a multi-layer generative model of natural images using a dataset of millions of tiny colour images.
Proceedings ArticleDOI

A Style-Based Generator Architecture for Generative Adversarial Networks

TL;DR: This paper proposes an alternative generator architecture for GANs, borrowing from the style transfer literature, which leads to an automatically learned, unsupervised separation of high-level attributes (e.g., pose and identity when trained on human faces) and stochastic variation in the generated images.
Proceedings ArticleDOI

Deep Learning Face Attributes in the Wild

TL;DR: A novel deep learning framework for attribute prediction in the wild is proposed, which cascades two CNNs, LNet and ANet, fine-tuned jointly with attribute tags but pre-trained differently.