Author

Riccardo Barbano

Bio: Riccardo Barbano is an academic researcher from University College London. The author has contributed to research in the topics of Computer science and Iterative reconstruction. The author has an h-index of 2 and has co-authored 10 publications receiving 16 citations.

Papers
Proceedings ArticleDOI
17 Jun 2022
TL;DR: The assumptions behind the linearised Laplace method for estimating model uncertainty are examined, showing that these interact poorly with some now-standard tools of deep learning—stochastic approximation methods and normalisation layers—and make recommendations for how to better adapt this classic method to the modern setting.
Abstract: The linearised Laplace method for estimating model uncertainty has received renewed attention in the Bayesian deep learning community. The method provides reliable error bars and admits a closed form expression for the model evidence, allowing for scalable selection of model hyperparameters. In this work, we examine the assumptions behind this method, particularly in conjunction with model selection. We show that these interact poorly with some now-standard tools of deep learning—stochastic approximation methods and normalisation layers—and make recommendations for how to better adapt this classic method to the modern setting. We provide theoretical support for our recommendations and validate them empirically on MLPs, classic CNNs, residual networks with and without normalisation layers, generative autoencoders and transformers.
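As a rough illustration of the linearised Laplace construction the abstract refers to, the sketch below computes error bars for a toy regression MLP by linearising the network around its trained (MAP) parameters and forming a generalised Gauss-Newton posterior. The toy data, architecture, prior and noise precisions, and all variable names are assumptions made for illustration, not the paper's code.

```python
# A rough, illustrative sketch of linearised-Laplace error bars for a toy
# regression MLP; the setup below is assumed for illustration only.
import torch

torch.manual_seed(0)

# Toy inputs and a small network standing in for the trained (MAP) model;
# the training loop that would produce the MAP estimate is omitted here.
X = torch.linspace(-2, 2, 50).unsqueeze(1)
net = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1))

prior_precision = 1.0    # weight-prior precision (a model hyperparameter)
noise_precision = 100.0  # 1 / observation-noise variance

# Jacobian of the network outputs w.r.t. all parameters, one row per data point.
preds = net(X).squeeze(-1)

def flat_grad(scalar):
    grads = torch.autograd.grad(scalar, list(net.parameters()), retain_graph=True)
    return torch.cat([g.reshape(-1) for g in grads])

J = torch.stack([flat_grad(preds[i]) for i in range(len(X))])  # shape (N, P)

# Laplace posterior precision: generalised Gauss-Newton term plus isotropic prior.
H = noise_precision * J.T @ J + prior_precision * torch.eye(J.shape[1])
Sigma = torch.linalg.inv(H)

# Predictive variance of the linearised model (the "error bars") at the inputs.
pred_var = torch.einsum('np,pq,nq->n', J, Sigma, J) + 1.0 / noise_precision
print(pred_var[:5])
```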

16 citations

Posted Content
TL;DR: A scalable, data-driven, knowledge-aided computational framework to quantify the model uncertainty via Bayesian neural networks, and builds on, and extends deep gradient descent, a recently developed greedy iterative training scheme, and recasts it within a probabilistic framework.
Abstract: Recent advances in reconstruction methods for inverse problems leverage powerful data-driven models, e.g., deep neural networks. These techniques have demonstrated state-of-the-art performances for several imaging tasks, but they often do not provide uncertainty on the obtained reconstruction. In this work, we develop a scalable, data-driven, knowledge-aided computational framework to quantify the model uncertainty via Bayesian neural networks. The approach builds on, and extends deep gradient descent, a recently developed greedy iterative training scheme, and recasts it within a probabilistic framework. Scalability is achieved by being hybrid in the architecture: only the last layer of each block is Bayesian, while the others remain deterministic, and by being greedy in training. The framework is showcased on one representative medical imaging modality, viz. computed tomography with either sparse view or limited view data, and exhibits competitive performance with respect to state-of-the-art benchmarks, e.g., total variation, deep gradient descent and learned primal-dual.
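A minimal sketch of the "hybrid" idea described above, in which only the last layer of a block is Bayesian and the preceding layers remain deterministic. The class names, layer sizes and the mean-field Gaussian variational family shown here are assumptions made for illustration, not the paper's implementation.

```python
# Illustrative sketch of a hybrid block: deterministic feature layers followed by a
# Bayesian (mean-field Gaussian, reparameterised) last layer. Names and the
# variational family are assumptions, not the paper's code.
import math
import torch
import torch.nn as nn

class BayesianLinear(nn.Module):
    def __init__(self, d_in, d_out, prior_std=1.0):
        super().__init__()
        self.w_mu = nn.Parameter(torch.zeros(d_out, d_in))
        self.w_logvar = nn.Parameter(torch.full((d_out, d_in), -6.0))
        self.prior_std = prior_std

    def forward(self, x):
        # Reparameterisation trick: sample a weight matrix, then apply it.
        w = self.w_mu + torch.exp(0.5 * self.w_logvar) * torch.randn_like(self.w_mu)
        return x @ w.T

    def kl(self):
        # KL( q(w) || N(0, prior_std^2 I) ) for a fully factorised Gaussian posterior.
        var = torch.exp(self.w_logvar)
        return 0.5 * torch.sum(
            (var + self.w_mu ** 2) / self.prior_std ** 2
            - 1.0 - self.w_logvar + 2.0 * math.log(self.prior_std)
        )

class HybridBlock(nn.Module):
    """Deterministic feature layers; only the last layer carries a weight posterior."""
    def __init__(self, d):
        super().__init__()
        self.features = nn.Sequential(nn.Linear(d, d), nn.ReLU())
        self.bayes_out = BayesianLinear(d, d)

    def forward(self, x):
        return self.bayes_out(self.features(x))
```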

11 citations

DOI
08 Nov 2021
TL;DR: This work develops a two-stage learning paradigm to address the computational challenge: (i) a supervised pretraining of the network on a synthetic dataset; (ii) a fine-tuning of the network's parameters to adapt to the target reconstruction.
Abstract: Deep image prior [55] was recently introduced as an effective prior for image reconstruction. It represents the image to be recovered as the output of a deep convolutional neural network, and learns the network’s parameters such that the output fits the corrupted observation. Despite its impressive reconstructive properties, the approach is slow when compared to learned or traditional reconstruction techniques. Our work develops a two-stage learning paradigm to address the computational challenge: (i) we perform a supervised pretraining of the network on a synthetic dataset; (ii) we fine-tune the network’s parameters to adapt to the target reconstruction. We showcase that pretraining considerably speeds up the subsequent reconstruction from real-measured micro computed tomography data of biological specimens. The code and additional experimental materials are available at educateddip.github.io/docs.educated_deep_image_prior/.
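The second stage, fine-tuning to the target reconstruction, amounts to the usual deep-image-prior objective of fitting the forward-projected network output to the measured data. A minimal sketch is given below; `net` (the pretrained network), `A` (the forward operator as a callable), `y_obs` and `z` (the fixed network input) are placeholders assumed for illustration and are not taken from the code released at educateddip.github.io.

```python
# Minimal sketch of stage (ii): DIP-style fine-tuning of a pretrained network
# to a target measurement. All names are illustrative placeholders.
import torch

def fine_tune(net, A, y_obs, z, n_iters=2000, lr=1e-4):
    """Adapt the pretrained parameters so that A(net(z)) fits the observation."""
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    for _ in range(n_iters):
        opt.zero_grad()
        recon = net(z)                                # current image estimate
        loss = torch.mean((A(recon) - y_obs) ** 2)    # data-fidelity term only (DIP objective)
        loss.backward()
        opt.step()
    return net(z).detach()
```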

11 citations

Journal ArticleDOI
TL;DR: This work proposes a novel approach using the linearised deep image prior that allows incorporating information from the pilot measurements into the angle selection criteria, while maintaining the tractability of a conjugate Gaussian-linear model.
Abstract: We investigate adaptive design based on a single sparse pilot scan for generating effective scanning strategies for computed tomography reconstruction. We propose a novel approach using the linearised deep image prior. It allows incorporating information from the pilot measurements into the angle selection criteria, while maintaining the tractability of a conjugate Gaussian-linear model. On a synthetically generated dataset with preferential directions, linearised DIP design allows reducing the number of scans by up to 30% relative to an equidistant angle baseline.
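To make the angle-selection idea concrete, the sketch below shows a generic greedy criterion for a conjugate Gaussian-linear model: each candidate angle is scored by how much its projection rows would shrink the current posterior covariance (a standard log-determinant, expected-information-gain score). The criterion and all names are assumptions made for illustration; the paper's actual selection criteria may differ.

```python
# Generic sketch of greedy angle selection in a conjugate Gaussian-linear model.
# The log-determinant score is a standard choice assumed here for illustration.
import numpy as np

def select_next_angle(Sigma, candidate_ops, noise_var=1.0):
    """Pick the candidate angle whose projection rows most reduce posterior uncertainty.

    Sigma         : (P, P) current posterior covariance over image pixels
    candidate_ops : dict mapping angle -> (M, P) projection matrix for that angle
    """
    best_angle, best_gain = None, -np.inf
    for angle, A in candidate_ops.items():
        S = A @ Sigma @ A.T / noise_var + np.eye(A.shape[0])
        gain = 0.5 * np.linalg.slogdet(S)[1]  # information gained by acquiring this angle
        if gain > best_gain:
            best_angle, best_gain = angle, gain
    return best_angle
```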

6 citations

Journal ArticleDOI
TL;DR: A Bayesian prior is constructed for tomographic reconstruction, which combines the classical total variation (TV) regulariser with the modern deep image prior (DIP), and an approach based on the linearised Laplace method is developed, which is scalable to high-dimensional settings.
Abstract: Existing deep-learning based tomographic image reconstruction methods do not provide accurate estimates of reconstruction uncertainty, hindering their real-world deployment. To address this limitation, we construct a Bayesian prior for tomographic reconstruction, which combines the classical total variation (TV) regulariser with the modern deep image prior (DIP). Specifically, we use a change of variables to connect our prior beliefs on the image TV semi-norm with the hyperparameters of the DIP network. For the inference, we develop an approach based on the linearised Laplace method, which is scalable to high-dimensional settings. The resulting framework provides pixel-wise uncertainty estimates and a marginal likelihood objective for hyperparameter optimisation. We demonstrate the method on synthetic and real-measured high-resolution μCT data, and show that it provides superior calibration of uncertainty estimates relative to previous probabilistic formulations of the DIP.
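For reference, the TV semi-norm that the prior is built around can be written as the sum of absolute finite differences of the image. The sketch below shows this, with a scalar weight `lam` standing in, purely for illustration, for the DIP-linked hyperparameters obtained through the paper's change of variables.

```python
# The anisotropic TV semi-norm as a sum of absolute finite differences.
# `lam` is an illustrative stand-in for the DIP-linked hyperparameters.
import torch

def tv_seminorm(img: torch.Tensor) -> torch.Tensor:
    """Sum of absolute horizontal and vertical finite differences of a 2-D image (H, W)."""
    dh = torch.abs(img[1:, :] - img[:-1, :]).sum()
    dw = torch.abs(img[:, 1:] - img[:, :-1]).sum()
    return dh + dw

def tv_log_prior(img, lam=0.1):
    # Gibbs-type prior on the image: log p(x) = -lam * TV(x) + const.
    return -lam * tv_seminorm(img)
```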

6 citations


Cited by
01 Jan 2016
Regularization of Inverse Problems

1,097 citations

Journal ArticleDOI
TL;DR: This study reviews recent advances in UQ methods used in deep learning, investigates the application of these methods in reinforcement learning (RL), and outlines a few important applications of UQ methods.
Abstract: Uncertainty quantification (UQ) plays a pivotal role in the reduction of uncertainties during both optimization and decision making processes. It can be applied to solve a variety of real-world applications in science and engineering. Bayesian approximation and ensemble learning techniques are the two most widely used UQ methods in the literature. In this regard, researchers have proposed different UQ methods and examined their performance in a variety of applications such as computer vision (e.g., self-driving cars and object detection), image processing (e.g., image restoration), medical image analysis (e.g., medical image classification and segmentation), natural language processing (e.g., text classification, social media texts and recidivism risk-scoring), bioinformatics, etc. This study reviews recent advances in UQ methods used in deep learning. Moreover, we also investigate the application of these methods in reinforcement learning (RL). Then, we outline a few important applications of UQ methods. Finally, we briefly highlight the fundamental research challenges faced by UQ methods and discuss the future research directions in this field.

809 citations

Journal ArticleDOI
TL;DR: Uncertainty quantification (UQ) methods play a pivotal role in reducing the impact of uncertainties during both optimization and decision making processes, as mentioned in this paper, and have been applied to solve a variety of real-world problems in science and engineering. Bayesian approximation and ensemble learning techniques are two widely used types of uncertainty quantification methods.

77 citations

Proceedings Article
23 Feb 2022
TL;DR: It is shown how marginal likelihood can be negatively correlated with generalization, with implications for neural architecture search, and can lead to both underfitting and overfitting in hyperparameter learning.
Abstract: How do we compare between hypotheses that are entirely consistent with observations? The marginal likelihood (aka Bayesian evidence), which represents the probability of generating our observations from a prior, provides a distinctive approach to this foundational question, automatically encoding Occam's razor. Although it has been observed that the marginal likelihood can overfit and is sensitive to prior assumptions, its limitations for hyperparameter learning and discrete model comparison have not been thoroughly investigated. We first revisit the appealing properties of the marginal likelihood for learning constraints and hypothesis testing. We then highlight the conceptual and practical issues in using the marginal likelihood as a proxy for generalization. Namely, we show how marginal likelihood can be negatively correlated with generalization, with implications for neural architecture search, and can lead to both underfitting and overfitting in hyperparameter learning. We also re-examine the connection between the marginal likelihood and PAC-Bayes bounds and use this connection to further elucidate the shortcomings of the marginal likelihood for model selection. We provide a partial remedy through a conditional marginal likelihood, which we show is more aligned with generalization, and practically valuable for large-scale hyperparameter learning, such as in deep kernel learning.
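To illustrate the distinction the abstract draws, the toy example below computes the log marginal likelihood of conjugate Bayesian linear regression and a conditional marginal likelihood that conditions on an initial chunk of the data and scores only the remainder. The data, split point and prior/noise variances are arbitrary assumptions made only for illustration.

```python
# Toy contrast between the marginal likelihood and a *conditional* marginal likelihood
# for conjugate Bayesian linear regression; all quantities here are illustrative.
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=40)

def log_marginal(X, y, prior_var=1.0, noise_var=0.01):
    # log p(y | X) = log N(y; 0, prior_var * X X^T + noise_var * I)
    cov = prior_var * X @ X.T + noise_var * np.eye(len(y))
    return multivariate_normal(mean=np.zeros(len(y)), cov=cov).logpdf(y)

m = 20  # condition on the first m observations, score the remaining ones
log_ml = log_marginal(X, y)
log_cml = log_ml - log_marginal(X[:m], y[:m])  # log p(y_{m+1:n} | y_{1:m}, X)
print(log_ml, log_cml)
```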

22 citations