
Showing papers by "Anat Levin published in 2011"


Proceedings ArticleDOI
20 Jun 2011
TL;DR: This paper derives a simple approximated MAP_k algorithm that requires only a modest modification of common MAP_{x,k} algorithms, and shows that MAP_k can, in fact, be optimized easily, with no additional computational complexity.
Abstract: In blind deconvolution one aims to estimate from an input blurred image y a sharp image x and an unknown blur kernel k. Recent research shows that a key to success is to consider the overall shape of the posterior distribution p(x, k | y) and not only its mode. This leads to a distinction between MAP_{x,k} strategies, which estimate the mode pair (x, k) and often lead to undesired results, and MAP_k strategies, which select the best k while marginalizing over all possible x images. The MAP_k principle is significantly more robust than the MAP_{x,k} one, yet it involves a challenging marginalization over latent images. As a result, MAP_k techniques are considered complicated and have not been widely exploited. This paper derives a simple approximated MAP_k algorithm which involves only a modest modification of common MAP_{x,k} algorithms. We show that MAP_k can, in fact, be optimized easily, with no additional computational complexity.

623 citations
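
For intuition, the sketch below shows one way such a marginalization over the latent image can be approximated in 1-D: an EM-style loop that computes the Gaussian posterior over the latent signal (mean and covariance) under a simple gradient-smoothness prior, then refits the kernel from the expected statistics. The prior, the parameter values, and the 1-D setting are illustrative assumptions, not the authors' implementation.

import numpy as np

def conv_matrix(k, n):
    # n x n matrix implementing 'same' 1-D convolution with kernel k
    m, c = len(k), len(k) // 2
    A = np.zeros((n, n))
    for i in range(m):
        A += k[i] * np.eye(n, k=c - i)
    return A

def approximate_map_k(y, m=5, sigma=0.01, gamma=50.0, iters=30):
    # EM-style approximate MAP_k: marginalize over the latent signal x
    # under a Gaussian prior on its gradients, then refit the kernel k.
    n = len(y)
    G = np.eye(n - 1, n, 1) - np.eye(n - 1, n)   # finite-difference operator
    k, c = np.ones(m) / m, m // 2                # start from a flat kernel
    for _ in range(iters):
        # E-step: Gaussian posterior p(x | y, k) -> mean mu, covariance C
        A = conv_matrix(k, n)
        C = np.linalg.inv(A.T @ A / sigma**2 + gamma * (G.T @ G))
        mu = C @ A.T @ y / sigma**2
        # M-step: minimize E_x ||y - x*k||^2 over k; the expectation adds a
        # covariance correction to the usual mean-only (MAP_{x,k}) system.
        idx = np.arange(n)
        ExTy, ExTx = np.zeros(m), np.zeros((m, m))
        for i in range(m):
            a = idx - i + c
            va = (a >= 0) & (a < n)
            ExTy[i] = mu[a[va]] @ y[va]
            for j in range(m):
                b = idx - j + c
                v = va & (b >= 0) & (b < n)
                ExTx[i, j] = mu[a[v]] @ mu[b[v]] + C[a[v], b[v]].sum()
        k = np.linalg.solve(ExTx + 1e-8 * np.eye(m), ExTy)
        k = np.clip(k, 0, None)
        k /= k.sum()                             # keep k a valid blur kernel
    return k, mu

Dropping the covariance correction in the M-step reduces the loop to an ordinary MAP_{x,k}-style alternation, which illustrates the sense in which marginalizing over x can be only a modest modification of existing algorithms.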


Journal ArticleDOI
TL;DR: The previously reported failure of the naive MAP approach is explained by demonstrating that it mostly favors no-blur explanations, and it is shown that, using reasonable image priors, a naive simultaneous MAP estimation of both the latent image and the blur kernel is guaranteed to fail even with infinitely large images sampled from the prior.
Abstract: Blind deconvolution is the recovery of a sharp version of a blurred image when the blur kernel is unknown. Recent algorithms have afforded dramatic progress, yet many aspects of the problem remain challenging and hard to understand. The goal of this paper is to analyze and evaluate recent blind deconvolution algorithms both theoretically and experimentally. We explain the previously reported failure of the naive MAP approach by demonstrating that it mostly favors no-blur explanations. We show that, using reasonable image priors, a naive simultaneous MAP estimation of both the latent image and the blur kernel is guaranteed to fail even with infinitely large images sampled from the prior. On the other hand, we show that since the kernel size is often smaller than the image size, a MAP estimation of the kernel alone is well constrained and is guaranteed to recover the true blur. The plethora of recent deconvolution techniques makes an experimental evaluation on ground-truth data important. As a first step toward this evaluation, we have collected blur data with ground truth and compared recent algorithms under equal settings. Additionally, our data demonstrate that the shift-invariant blur assumption made by most algorithms is often violated.

416 citations
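
The no-blur argument can be reproduced in a few lines: under a sparse gradient prior, cost(x) = sum |grad x|^alpha with alpha < 1, a blurred texture-like signal typically has a lower prior cost than the sharp one, so the trivial pair (x = y, k = delta), whose data term is zero, beats the true (x, k) in a joint MAP. The Laplacian gradient model, alpha = 0.5, and the box blur below are illustrative choices, not the paper's setup.

import numpy as np

def prior_cost(x, alpha=0.5):
    # sparse gradient prior penalty: sum |grad x|^alpha, with alpha < 1
    return np.sum(np.abs(np.diff(x)) ** alpha)

rng = np.random.default_rng(0)
grad = rng.laplace(scale=1.0, size=100_000)                   # texture-like sharp gradients
x_sharp = np.cumsum(grad)
x_blur = np.convolve(x_sharp, np.ones(5) / 5, mode="valid")   # box-blurred copy

print("sharp prior cost:  ", prior_cost(x_sharp))
print("blurred prior cost:", prior_cost(x_blur))   # lower -> no-blur explanation wins

An isolated step edge actually favors the sharp explanation; it is the abundance of smaller texture-like gradients in real images that tips the joint MAP toward the no-blur solution.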


Proceedings ArticleDOI
20 Jun 2011
TL;DR: This paper takes a nonparametric approach, representing the distribution of natural images using a huge set of 10^10 patches, and derives a simple statistical measure which provides a lower bound on the optimal Bayesian minimum mean square error (MMSE).
Abstract: The goal of natural image denoising is to estimate a clean version of a given noisy image, utilizing prior knowledge on the statistics of natural images. The problem has been studied intensively with considerable progress made in recent years. However, it seems that image denoising algorithms are starting to converge, and recent algorithms improve over previous ones by only fractional dB values. It is thus important to understand how much more we can still improve natural image denoising algorithms and what inherent limits are imposed by the actual statistics of the data. The challenge in evaluating such limits is that constructing proper models of natural image statistics is a long-standing and yet unsolved problem. To overcome the absence of accurate image priors, this paper takes a nonparametric approach and represents the distribution of natural images using a huge set of 10^10 patches. We then derive a simple statistical measure which provides a lower bound on the optimal Bayesian minimum mean square error (MMSE). This imposes a limit on the best possible results of denoising algorithms which utilize a fixed support around a denoised pixel and a generic natural image prior. Our findings suggest that for small windows, state-of-the-art denoising algorithms are approaching optimality and cannot be further improved beyond ∼0.1 dB.

256 citations
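
The estimator the bound is built around can be sketched directly: with the natural-image prior represented by a set of clean patches, the Bayesian-optimal (MMSE) denoiser is the posterior mean, i.e., a Gaussian-likelihood-weighted average over the patch set. The patch count, patch size, and noise level below are toy stand-ins, not the paper's 10^10-patch setup.

import numpy as np

def mmse_denoise(noisy_patch, clean_patches, sigma):
    # Posterior-mean (MMSE) estimate under y = x + N(0, sigma^2 I), with the
    # prior represented nonparametrically by the rows of clean_patches.
    d2 = np.sum((clean_patches - noisy_patch) ** 2, axis=1)   # ||y - x_i||^2
    logw = -d2 / (2 * sigma**2)
    w = np.exp(logw - logw.max())                             # numerically stable weights
    w /= w.sum()
    return w @ clean_patches                                  # E[x | y]

# Toy usage: averaging ||x - x_hat||^2 over many clean/noisy pairs gives a
# Monte-Carlo estimate of the achievable MMSE for this patch support.
rng = np.random.default_rng(1)
clean_patches = rng.normal(size=(50_000, 8 * 8))   # stand-in for natural-image patches
sigma = 0.1
x = clean_patches[0]
y = x + rng.normal(scale=sigma, size=x.size)
x_hat = mmse_denoise(y, clean_patches, sigma)
print("per-pixel RMSE:", np.sqrt(np.mean((x - x_hat) ** 2)))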


Proceedings ArticleDOI
06 Nov 2011
TL;DR: This paper discusses how occlusion geometry can help invert diffuse reflectance to recover lighting or surface albedo, and proposes a novel reconstruction method based on high-resolution photography that takes advantage of visibility changes near occlusion boundaries.
Abstract: Diffuse objects generally tell us little about the surrounding lighting, since the radiance they reflect blurs together incident lighting from many directions. In this paper we discuss how occlusion geometry can help invert diffuse reflectance to recover lighting or surface albedo. Self-occlusion in the scene can be regarded as a form of coding, creating high frequencies that improve the conditioning of diffuse light transport. Our analysis builds on a basic observation that diffuse reflectors with sufficiently detailed geometry can fully resolve the incident lighting. Using a Bayesian framework, we propose a novel reconstruction method based on high-resolution photography, taking advantage of visibility changes near occlusion boundaries. We also explore the limits of single-pixel observations as the diffuse reflector (and potentially the lighting) varies over time. Diffuse reflectance imaging is particularly relevant for astronomy applications, where diffuse reflectors arise naturally but the incident lighting and camera position cannot be controlled. To test our approaches, we first study the feasibility of using the moon as a diffuse reflector to observe the earth as seen from space. Next, we present a reconstruction of Mars using historical photometry measurements not previously used for this purpose. As our results suggest, diffuse reflectance imaging expands our notion of what can qualify as a camera.

14 citations
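
A toy "flatland" example of the conditioning argument: model the photographed diffuse surface as y = T l + noise, where T combines the diffuse cosine falloff with a per-point visibility mask, and recover the lighting l with a generic Gaussian MAP (ridge-style) inverse. The geometry, the random occlusion mask, and the prior scales below are assumptions for illustration only, not the paper's reconstruction method.

import numpy as np

rng = np.random.default_rng(2)
n_pix, n_dir = 200, 60
theta = np.linspace(0.0, np.pi, n_dir)                            # incoming light directions
normals = np.linspace(np.pi / 2 - 0.3, np.pi / 2 + 0.3, n_pix)    # gently curved surface
cosine = np.clip(np.cos(theta[None, :] - normals[:, None]), 0, None)

visibility = (rng.random((n_pix, n_dir)) > 0.5).astype(float)     # stand-in occlusion mask
T_smooth = cosine              # unoccluded diffuse transport (low-pass, ill-conditioned)
T_occl = cosine * visibility   # occlusion "codes" the lighting with high frequencies

for name, T in [("smooth", T_smooth), ("occluded", T_occl)]:
    s = np.linalg.svd(T, compute_uv=False)
    print(name, "condition number:", s[0] / s[-1])

# Generic Gaussian MAP (ridge/Wiener-style) recovery of the lighting from the occluded view
l_true = np.clip(rng.normal(size=n_dir), 0, None)
sigma, tau = 1e-3, 1.0                                            # assumed noise / prior scales
y = T_occl @ l_true + rng.normal(scale=sigma, size=n_pix)
A = T_occl.T @ T_occl / sigma**2 + np.eye(n_dir) / tau**2
l_hat = np.linalg.solve(A, T_occl.T @ y / sigma**2)
print("relative lighting error:", np.linalg.norm(l_hat - l_true) / np.linalg.norm(l_true))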