Open Access Journal Article (DOI)

Solving ill-posed inverse problems using iterative deep neural networks

TL;DR
The method builds on ideas from classical regularization theory and recent advances in deep learning, performing learning while exploiting prior information about the inverse problem encoded in the forward operator, the noise model, and a regularizing functional; this results in a gradient-like iterative scheme.
Abstract
We propose a partially learned approach for the solution of ill-posed inverse problems with not necessarily linear forward operators. The method builds on ideas from classical regularization theory and recent advances in deep learning to perform learning while making use of prior information about the inverse problem encoded in the forward operator, the noise model, and a regularizing functional. The method results in a gradient-like iterative scheme, where the "gradient" component is learned using a convolutional network that takes the gradients of the data discrepancy and of the regularizer as input in each iteration. We present results of such a partially learned gradient scheme on a non-linear tomographic inversion problem with simulated data from both the Shepp-Logan phantom and a head CT scan. The outcome is compared against filtered back-projection (FBP) and total variation (TV) reconstruction; the proposed method provides a 5.4 dB PSNR improvement over the TV reconstruction while being significantly faster, reconstructing 512 × 512 volumes in about 0.4 seconds on a single GPU.
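The gradient-like iterative scheme described in the abstract can be sketched in a few lines. This is not the authors' implementation: it assumes a linear forward operator `A` (the paper treats non-linear tomography), a simple Tikhonov regularizer, and replaces the learned CNN update Λ_θ with a plain gradient step so the sketch runs without any training.

```python
import numpy as np

def grad_data_discrepancy(A, x, y):
    # Gradient of the data discrepancy 0.5 * ||A x - y||^2.
    return A.T @ (A @ x - y)

def grad_regularizer(x, lam=0.1):
    # Gradient of a Tikhonov regularizer 0.5 * lam * ||x||^2.
    return lam * x

def learned_update(x, g_data, g_reg, step):
    # Stand-in for the learned component Lambda_theta: in the paper
    # this mapping is a trained convolutional network taking both
    # gradients as input; here it is a fixed gradient step.
    return x - step * (g_data + g_reg)

def partially_learned_gradient_scheme(A, y, lam=0.1, n_iter=500):
    # Step size chosen from the Lipschitz constant of the total
    # gradient, so the plain-gradient stand-in converges.
    step = 1.0 / (np.linalg.norm(A, 2) ** 2 + lam)
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = learned_update(x, grad_data_discrepancy(A, x, y),
                           grad_regularizer(x, lam), step)
    return x

# Toy linear problem: recover x from noisy measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))
x_true = rng.standard_normal(20)
y = A @ x_true + 0.01 * rng.standard_normal(50)
x_rec = partially_learned_gradient_scheme(A, y)
print(np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true))
```

In the paper the update itself is learned from data at each iteration; the point of the sketch is only the structure of the iteration, which feeds both the data-discrepancy gradient and the regularizer gradient into the update.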


Citations

Regularization Of Inverse Problems

Lea Fleischer
Journal Article (DOI)

CNN-Based Projected Gradient Descent for Consistent Image Reconstruction

TL;DR: In this article, a convolutional neural network (CNN) is combined with projected gradient descent to enforce consistency between the reconstruction and the measurements, which is crucial for inverse problems in biomedical imaging, where reconstructions are used for diagnosis.
Posted Content

LS-Net: Learning to Solve Nonlinear Least Squares for Monocular Stereo.

TL;DR: LS-Net is proposed, a neural nonlinear least squares optimization algorithm which learns to effectively optimize these cost functions even in the presence of adversities and requires no hand-crafted regularizers or priors as these are implicitly learned from the data.
Journal Article

Machine learning inverse problem for topological photonics.

TL;DR: In this paper, a neural network is trained with the Aubry-Andre-Harper band structure model and then adopted for solving the inverse problem to identify the parameters of a complex topological insulator in order to obtain protected edge states at target frequencies.
Posted Content

Learning to solve inverse problems using Wasserstein loss.

TL;DR: It is proved that training with the Wasserstein loss gives a reconstruction operator that correctly compensates for misalignments in certain cases, whereas training with the mean squared error gives a smeared reconstruction.
References
Journal Article (DOI)

Deep learning

TL;DR: Deep learning is making major advances in solving problems that have resisted the best attempts of the artificial intelligence community for many years, and will have many more successes in the near future because it requires very little engineering by hand and can easily take advantage of increases in the amount of available computation and data.
Journal Article (DOI)

On the limited memory BFGS method for large scale optimization

TL;DR: Numerical tests indicate that the L-BFGS method is faster than the method of Buckley and LeNir and makes better use of additional storage to accelerate convergence; its convergence properties are studied, proving global convergence on uniformly convex problems.
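As a minimal usage sketch (not tied to this paper's experiments), limited-memory BFGS is available through SciPy's `scipy.optimize.minimize` with `method="L-BFGS-B"`; the `maxcor` option sets how many correction pairs are stored, which is the "limited memory". The Rosenbrock test problem below is a standard choice, not one from the paper.

```python
import numpy as np
from scipy.optimize import minimize

def rosenbrock(x):
    # Classic non-convex test function; its minimizer is the
    # all-ones vector.
    return np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2
                  + (1.0 - x[:-1]) ** 2)

def rosenbrock_grad(x):
    # Analytic gradient: each interior coordinate appears in two
    # consecutive terms of the sum.
    g = np.zeros_like(x)
    g[:-1] = (-400.0 * x[:-1] * (x[1:] - x[:-1] ** 2)
              - 2.0 * (1.0 - x[:-1]))
    g[1:] += 200.0 * (x[1:] - x[:-1] ** 2)
    return g

x0 = np.zeros(10)
# maxcor = number of stored correction pairs (the limited memory).
res = minimize(rosenbrock, x0, jac=rosenbrock_grad,
               method="L-BFGS-B", options={"maxcor": 10})
print(res.x)
```

Storing only a handful of correction pairs, rather than a dense Hessian approximation, is what makes the method practical at large scale.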
Journal Article (DOI)

Deep Convolutional Neural Network for Inverse Problems in Imaging

TL;DR: In this paper, the authors proposed a deep convolutional neural network (CNN)-based algorithm for solving ill-posed inverse problems, which combines multiresolution decomposition and residual learning in order to learn to remove these artifacts while preserving image structure.
Journal Article (DOI)

Inverse problems: A Bayesian perspective

TL;DR: The Bayesian approach to regularization is reviewed, developing a function space viewpoint on the subject, which allows for a full characterization of all possible solutions, and their relative probabilities, whilst simultaneously forcing significant modelling issues to be addressed in a clear and precise fashion.
Proceedings Article

Learning to learn by gradient descent by gradient descent

TL;DR: This paper shows how the design of an optimization algorithm can be cast as a learning problem, allowing the algorithm to learn to exploit structure in the problems of interest in an automatic way.
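The idea of casting optimizer design as a learning problem can be illustrated in miniature. The paper trains an LSTM to produce the update; this hedged toy instead learns only a scalar step size, by gradient descent on a meta-loss over a small family of quadratic problems.

```python
import numpy as np

rng = np.random.default_rng(0)
# A small family of training problems: diagonal quadratics
# f(x) = 0.5 * x^T diag(d) x with random curvatures d.
problems = [rng.uniform(0.5, 2.0, size=5) for _ in range(20)]

def final_loss(eta, d, k=10):
    # Loss after k plain gradient-descent steps with step size eta,
    # starting from the all-ones vector.
    x = np.ones_like(d)
    for _ in range(k):
        x = x - eta * d * x
    return 0.5 * np.sum(d * x ** 2)

def meta_loss(eta):
    # Average final loss over the problem family: the objective of
    # the "outer" learning problem.
    return np.mean([final_loss(eta, d) for d in problems])

# "Learning to learn" in miniature: the optimizer itself (here just
# its step size) is tuned by gradient descent on the meta-loss,
# with the meta-gradient estimated by a central finite difference.
eta, meta_lr, eps = 0.01, 0.01, 1e-4
for _ in range(200):
    meta_grad = (meta_loss(eta + eps) - meta_loss(eta - eps)) / (2 * eps)
    eta -= meta_lr * meta_grad
print(eta, meta_loss(eta))
```

The learned step size ends up substantially better than the initial guess on this problem family, which is the paper's point in its simplest form: structure shared across problem instances can be exploited automatically instead of hand-tuned.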