Open Access · Journal Article

PatchNR: learning from very few images by patch normalizing flow regularization

TL;DR
In this article, a patch normalizing flow regularizer (patchNR) is proposed for the variational modeling of inverse problems in imaging. Its training is independent of the considered inverse problem, so the same regularizer can be applied to different forward operators acting on the same class of images.
Abstract
Learning neural networks from only a small amount of available information is an important ongoing research topic with tremendous potential for applications. In this paper, we introduce a powerful regularizer for the variational modeling of inverse problems in imaging. Our regularizer, called the patch normalizing flow regularizer (patchNR), involves a normalizing flow learned on small patches of very few images. In particular, the training is independent of the considered inverse problem, so the same regularizer can be applied to different forward operators acting on the same class of images. By investigating the distribution of patches versus that of the whole image class, we prove that our model is indeed a maximum a posteriori approach. Numerical examples for low-dose and limited-angle computed tomography (CT) as well as superresolution of material images demonstrate that our method provides very high-quality results. The training set consists of just six images for CT and one image for superresolution. Finally, we combine our patchNR with ideas from internal learning to perform superresolution of natural images directly from the low-resolution observation, without knowledge of any high-resolution image.
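To make the variational formulation concrete, the following is a minimal PyTorch sketch of the patchNR objective described above: a data-fidelity term plus the average negative log-likelihood of random patches under the learned flow. The names forward_op and flow (a normalizing flow exposing log_prob) are illustrative assumptions, not the authors' published code.

    import torch

    def patchnr_objective(x, y, forward_op, flow, lam, patch_size=6, n_patches=1000):
        # Data fidelity between the forward-projected reconstruction and the observation y.
        fidelity = 0.5 * torch.sum((forward_op(x) - y) ** 2)
        # Draw random patches from the current reconstruction x of shape (1, 1, H, W).
        _, _, h, w = x.shape
        rows = torch.randint(0, h - patch_size + 1, (n_patches,)).tolist()
        cols = torch.randint(0, w - patch_size + 1, (n_patches,)).tolist()
        patches = torch.stack([x[0, 0, r:r + patch_size, c:c + patch_size].reshape(-1)
                               for r, c in zip(rows, cols)])
        # patchNR: average negative log-likelihood of the patches under the learned flow.
        regularizer = -flow.log_prob(patches).mean()
        return fidelity + lam * regularizer

The reconstruction is then obtained by minimizing this objective over x with a gradient-based optimizer while the flow stays fixed, which is what makes the regularizer reusable across forward operators.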


Citations
Journal Article

Multilevel Diffusion: Infinite Dimensional Score-Based Diffusion Models for Image Generation

TL;DR: In this paper, a score-based diffusion model (SBDM) is proposed for image generation in the infinite-dimensional setting, that is, with training data modeled as functions supported on a rectangular domain.
Journal Article

Conditional Generative Models are Provably Robust: Pointwise Guarantees for Bayesian Inverse Problems

TL;DR: In this article, the robustness of conditional generative models with respect to perturbations of the observations is investigated, and it is shown that appropriately learned conditional generative models provide robust results for single observations.
Journal Article

Generative Adversarial Learning of Sinkhorn Algorithm Initializations

Jonathan Geuter, +1 more · 30 Nov 2022
TL;DR: In this article, the authors train a neural network that predicts a potential of the dual optimal transport problem and thereby computes initializations for the Sinkhorn algorithm which significantly outperform standard initializations; training is conducted in an adversarial fashion using a second generating network.
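As background for the entry above, here is a minimal NumPy sketch of the entropic Sinkhorn iterations with an optional warm start; the predictor standing in for the paper's generating network is hypothetical, and the adversarial training itself is not reproduced.

    import numpy as np

    def sinkhorn(C, a, b, eps=0.05, n_iters=200, u_init=None):
        # Entropic optimal transport between histograms a (n,) and b (m,)
        # with cost matrix C (n, m); u_init allows warm-starting the
        # iterations from a predicted dual potential.
        K = np.exp(-C / eps)                   # Gibbs kernel
        u = np.ones_like(a) if u_init is None else u_init
        for _ in range(n_iters):               # alternating marginal scalings
            v = b / (K.T @ u)
            u = a / (K @ v)
        return u[:, None] * K * v[None, :]     # transport plan

    # Hypothetical warm start from a predicted dual potential g: u_init = np.exp(g / eps)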
Journal Article

NF-ULA: Langevin Monte Carlo with Normalizing Flow Prior for Imaging Inverse Problems

TL;DR: NF-ULA (Unadjusted Langevin Algorithm by Normalizing Flows), as discussed by the authors, is a Langevin-based sampling algorithm for imaging inverse problems that learns a normalizing flow as the prior.
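A minimal sketch of one unadjusted Langevin step with a normalizing-flow prior, in the spirit of the entry above; forward_op and a flow exposing log_prob are assumed placeholders, not the authors' implementation.

    import torch

    def nf_ula_step(x, y, forward_op, flow, lam, step):
        # One unadjusted Langevin step targeting the posterior with density
        # proportional to exp(-0.5 * ||forward_op(x) - y||^2 + lam * log p_flow(x)).
        x = x.detach().requires_grad_(True)
        potential = 0.5 * torch.sum((forward_op(x) - y) ** 2) - lam * flow.log_prob(x).sum()
        grad, = torch.autograd.grad(potential, x)
        noise = torch.randn_like(x)
        return (x - step * grad + (2 * step) ** 0.5 * noise).detach()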
References
Proceedings Article

Adam: A Method for Stochastic Optimization

TL;DR: This work introduces Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments, and provides a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework.
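For reference, the Adam update itself is compact enough to state as a NumPy sketch: exponential moving averages of the first two gradient moments, bias correction, and a per-coordinate scaled step.

    import numpy as np

    def adam_step(x, g, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
        # Exponential moving averages of the gradient and its elementwise square.
        m = b1 * m + (1 - b1) * g
        v = b2 * v + (1 - b2) * g ** 2
        # Bias correction compensates for the zero initialization (t starts at 1).
        m_hat = m / (1 - b1 ** t)
        v_hat = v / (1 - b2 ** t)
        # Step scaled per coordinate by the estimated second moment.
        x = x - lr * m_hat / (np.sqrt(v_hat) + eps)
        return x, m, v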
Book Chapter

U-Net: Convolutional Networks for Biomedical Image Segmentation

TL;DR: Ronneberger et al. propose a network and training strategy that relies on strong data augmentation to use the available annotated samples more efficiently; the network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopy stacks.
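A two-scale PyTorch encoder-decoder with a single skip connection illustrates the U-Net idea (contracting path, expanding path, concatenated features); it is a deliberately reduced sketch, not the architecture from the paper.

    import torch
    import torch.nn as nn

    class TinyUNet(nn.Module):
        # One downsampling level and one skip connection; inputs need even H and W.
        def __init__(self, ch=32):
            super().__init__()
            self.enc = nn.Sequential(nn.Conv2d(1, ch, 3, padding=1), nn.ReLU(),
                                     nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU())
            self.down = nn.MaxPool2d(2)
            self.mid = nn.Sequential(nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU())
            self.up = nn.ConvTranspose2d(ch, ch, 2, stride=2)
            self.dec = nn.Sequential(nn.Conv2d(2 * ch, ch, 3, padding=1), nn.ReLU(),
                                     nn.Conv2d(ch, 1, 1))  # per-pixel logits

        def forward(self, x):
            e = self.enc(x)                            # contracting path features
            u = self.up(self.mid(self.down(e)))        # downsample, process, upsample
            return self.dec(torch.cat([u, e], dim=1))  # skip connection by concatenation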
Journal Article

Generative Adversarial Nets

TL;DR: A new framework for estimating generative models via an adversarial process is introduced, in which two models are simultaneously trained: a generative model G that captures the data distribution and a discriminative model D that estimates the probability that a sample came from the training data rather than from G.
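The adversarial game can be sketched as two loss functions; the sketch below uses the non-saturating generator loss common in practice rather than the original minimax form, and G and D are assumed torch modules producing samples and logits, respectively.

    import torch
    import torch.nn.functional as F

    def gan_losses(G, D, real, z):
        fake = G(z)
        # D is trained to assign label 1 to training data and 0 to generated samples.
        logits_real, logits_fake = D(real), D(fake.detach())
        d_loss = (F.binary_cross_entropy_with_logits(logits_real, torch.ones_like(logits_real))
                  + F.binary_cross_entropy_with_logits(logits_fake, torch.zeros_like(logits_fake)))
        # G is trained to make D assign label 1 to its samples.
        logits_gen = D(fake)
        g_loss = F.binary_cross_entropy_with_logits(logits_gen, torch.ones_like(logits_gen))
        return d_loss, g_loss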
Proceedings Article

Auto-Encoding Variational Bayes

TL;DR: A stochastic variational inference and learning algorithm is introduced that scales to large datasets and, under some mild differentiability conditions, even works in the intractable case.
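The key step of the paper, the reparameterization trick, fits in a short sketch of the negative ELBO for a Gaussian encoder and a Bernoulli decoder; encoder and decoder are assumed modules, with the decoder outputting per-pixel means in [0, 1].

    import torch
    import torch.nn.functional as F

    def neg_elbo(encoder, decoder, x):
        mu, log_var = encoder(x)  # parameters of q(z|x) = N(mu, diag(exp(log_var)))
        # Reparameterization trick: sample z differentiably through mu and log_var.
        z = mu + torch.exp(0.5 * log_var) * torch.randn_like(mu)
        recon = F.binary_cross_entropy(decoder(z), x, reduction='sum')
        # Closed-form KL divergence between q(z|x) and the standard normal prior.
        kl = -0.5 * torch.sum(1 + log_var - mu ** 2 - log_var.exp())
        return recon + kl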
Journal Article

Nonlinear total variation based noise removal algorithms

TL;DR: In this article, a constrained-optimization numerical algorithm for removing noise from images is presented, in which the total variation of the image is minimized subject to constraints involving the statistics of the noise.
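For illustration, the unconstrained (Lagrangian) form of the ROF model can be minimized by plain gradient descent on a smoothed total variation; this is a sketch of the model, not the constrained time-marching scheme of the original paper.

    import numpy as np

    def tv_denoise(f, lam=0.1, step=0.1, n_iters=200, eps=1e-6):
        # Gradient descent on lam * TV_eps(u) + 0.5 * ||u - f||^2 for a float image f.
        u = f.astype(float).copy()
        for _ in range(n_iters):
            ux = np.roll(u, -1, axis=1) - u             # forward differences (periodic)
            uy = np.roll(u, -1, axis=0) - u
            norm = np.sqrt(ux ** 2 + uy ** 2 + eps)     # smoothed gradient magnitude
            px, py = ux / norm, uy / norm
            div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
            u -= step * (-lam * div + (u - f))          # -div(...) is the TV gradient
        return u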