Author

I. Mor

Bio: I. Mor is an academic researcher from Ben-Gurion University of the Negev. The author has contributed to research in the topics of Image restoration and Image processing, and has an h-index of 2, having co-authored 2 publications receiving 176 citations.

Papers
Journal ArticleDOI
TL;DR: This work proposes a straightforward method to restore motion-blurred images given only the blurred image itself: it first identifies the point-spread function (PSF) of the blur and then uses the PSF to restore the blurred image.
Abstract: We deal with the problem of restoration of images blurred by relative motion between the camera and the object of interest. This problem is common when the imaging system is in moving vehicles or held by human hands, and in robot vision. For correct restoration of the degraded image, it is useful to know the point-spread function (PSF) of the blurring system. We propose a straightforward method to restore motion-blurred images given only the blurred image itself. The method first identifies the PSF of the blur and then uses it to restore the blurred image. The blur identification here is based on the concept that image characteristics along the direction of motion are affected mostly by the blur and are different from the characteristics in other directions. By filtering the blurred image, we emphasize the PSF correlation properties at the expense of those of the original image. Experimental results for image restoration are presented for both synthetic and real motion blur.
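The restoration step the abstract describes can be sketched in a few lines. The following toy example is illustrative only (not the authors' code): it assumes the PSF has already been identified and applies a standard Wiener filter in the Fourier domain to undo a synthetic horizontal motion blur.

```python
# Toy sketch: restore a synthetic motion blur with a Wiener filter,
# assuming the PSF is already known (the identification step is the
# paper's contribution and is not reproduced here).
import numpy as np

def motion_psf(length, size):
    """Horizontal box PSF of `length` pixels (circular convolution, no centering)."""
    psf = np.zeros((size, size))
    psf[0, :length] = 1.0 / length
    return psf

def wiener_deconvolve(blurred, psf, k=1e-3):
    """Frequency-domain Wiener filter; k approximates the noise-to-signal ratio."""
    H = np.fft.fft2(psf, s=blurred.shape)
    W = np.conj(H) / (np.abs(H) ** 2 + k)
    return np.real(np.fft.ifft2(np.fft.fft2(blurred) * W))

rng = np.random.default_rng(0)
sharp = rng.random((64, 64))                       # stand-in "scene"
psf = motion_psf(9, 64)                            # 9-pixel horizontal motion
blurred = np.real(np.fft.ifft2(np.fft.fft2(sharp) * np.fft.fft2(psf)))
restored = wiener_deconvolve(blurred, psf)
# Restoration error should be well below the blur error.
print(np.abs(restored - sharp).mean() < np.abs(blurred - sharp).mean())
```

The regularization constant `k` is a free parameter here; in practice it would be tuned to the noise level of the imaging system.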

175 citations

Proceedings ArticleDOI
22 Sep 1997
TL;DR: In this paper, the authors proposed a method to identify important parameters with which to characterize the point spread function (PSF) of the blur, given only the blurred image itself, based on the concept that image characteristics along the direction of motion are different from the characteristics in other directions.
Abstract: This paper deals with the problem of restoration of images blurred by relative motion between the camera and the object of interest. This problem is common when the imaging system is in moving vehicles or held by human hands, and in robot vision. For correct restoration of the degraded image, we need to know the point spread function (PSF) of the blurring system. In this paper we propose a method to identify important parameters with which to characterize the PSF of the blur, given only the blurred image itself. The identification method here is based on the concept that image characteristics along the direction of motion are different from the characteristics in other directions. Depending on the PSF shape, the homogeneity and the smoothness of the blurred image in the motion direction are higher than in other directions. By filtering the blurred image, we emphasize the PSF characteristics at the expense of the image characteristics. The method proposed here identifies the direction and the extent of the PSF of the blur and finally identifies the modulation transfer function (MTF) of the blurring system.
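The core observation, that the image is smoother along the motion direction than across it, is easy to verify numerically. The sketch below is illustrative (not the paper's algorithm): it applies a horizontal box blur to a random image and compares mean absolute derivatives along and across the motion direction.

```python
# Sketch of the underlying idea: after horizontal motion blur,
# derivatives along the motion direction are weaker than derivatives
# across it, which is what reveals the blur direction.
import numpy as np

rng = np.random.default_rng(1)
img = rng.random((128, 128))

# 8-pixel horizontal box blur via shifted copies (circular boundary)
blurred = sum(np.roll(img, s, axis=1) for s in range(8)) / 8

d_along = np.abs(np.diff(blurred, axis=1)).mean()   # along motion (rows)
d_across = np.abs(np.diff(blurred, axis=0)).mean()  # across motion (columns)
print(d_along < d_across)  # the image is smoother along the motion direction
```

Searching over candidate directions for the one minimizing derivative energy would identify the motion direction; the paper additionally recovers the blur extent and the MTF.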

2 citations


Cited by
Journal ArticleDOI
01 Dec 2009
TL;DR: A fast deblurring method that produces a deblurring result from a single image of moderate size in a few seconds, by introducing a novel prediction step and working with image derivatives rather than pixel values, which gives faster convergence.
Abstract: This paper presents a fast deblurring method that produces a deblurring result from a single image of moderate size in a few seconds. We accelerate both latent image estimation and kernel estimation in an iterative deblurring process by introducing a novel prediction step and working with image derivatives rather than pixel values. In the prediction step, we use simple image processing techniques to predict strong edges from an estimated latent image, which will be solely used for kernel estimation. With this approach, a computationally efficient Gaussian prior becomes sufficient for deconvolution to estimate the latent image, as small deconvolution artifacts can be suppressed in the prediction. For kernel estimation, we formulate the optimization function using image derivatives, and accelerate the numerical process by reducing the number of Fourier transforms needed for a conjugate gradient method. We also show that the formulation results in a smaller condition number of the numerical system than the use of pixel values, which gives faster convergence. Experimental results demonstrate that our method runs an order of magnitude faster than previous work, while the deblurring quality is comparable. GPU implementation facilitates further speed-up, making our method fast enough for practical use.
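With a Gaussian prior on image derivatives, latent-image estimation has a one-shot closed form in the Fourier domain, which is what makes it computationally efficient. The sketch below shows that standard closed form under those assumptions; it is a generic illustration, not the paper's implementation.

```python
# Minimal sketch of Gaussian-prior (Tikhonov-on-derivatives) deconvolution:
# minimizing |K*X - B|^2 + lam*|grad X|^2 has a closed-form solution
# in the Fourier domain, so no iterative solver is needed for this step.
import numpy as np

def deconv_gaussian_prior(blurred, kernel, lam=0.01):
    shape = blurred.shape
    K = np.fft.fft2(kernel, s=shape)
    Dx = np.fft.fft2(np.array([[1.0, -1.0]]), s=shape)   # horizontal derivative
    Dy = np.fft.fft2(np.array([[1.0], [-1.0]]), s=shape) # vertical derivative
    num = np.conj(K) * np.fft.fft2(blurred)
    den = np.abs(K) ** 2 + lam * (np.abs(Dx) ** 2 + np.abs(Dy) ** 2)
    return np.real(np.fft.ifft2(num / den))

rng = np.random.default_rng(5)
sharp = rng.random((64, 64))
kernel = np.ones((5, 5)) / 25.0                          # synthetic 5x5 box blur
blurred = np.real(np.fft.ifft2(np.fft.fft2(sharp) * np.fft.fft2(kernel, s=sharp.shape)))
restored = deconv_gaussian_prior(blurred, kernel)
print(np.abs(restored - sharp).mean() < np.abs(blurred - sharp).mean())
```

The derivative-space regularizer suppresses noise amplification at frequencies where the kernel's transfer function is small, which is why a simple Gaussian prior suffices once strong edges are handled in the prediction step.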

1,062 citations

Proceedings ArticleDOI
29 Jul 2007
TL;DR: This paper shows how to produce a high quality image that cannot be obtained by simply denoising the noisy image or deblurring the blurred image alone, by combining information extracted from both blurred and noisy images.
Abstract: Taking satisfactory photos under dim lighting conditions using a hand-held camera is challenging. If the camera is set to a long exposure time, the image is blurred due to camera shake. On the other hand, the image is dark and noisy if it is taken with a short exposure time but with a high camera gain. By combining information extracted from both blurred and noisy images, however, we show in this paper how to produce a high quality image that cannot be obtained by simply denoising the noisy image, or deblurring the blurred image alone. Our approach is image deblurring with the help of the noisy image. First, both images are used to estimate an accurate blur kernel, which otherwise is difficult to obtain from a single blurred image. Second, and again using both images, a residual deconvolution is proposed to significantly reduce ringing artifacts inherent to image deconvolution. Third, the remaining ringing artifacts in smooth image regions are further suppressed by a gain-controlled deconvolution process. We demonstrate the effectiveness of our approach using a number of indoor and outdoor images taken by off-the-shelf hand-held cameras in poor lighting environments.
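The first step, estimating the blur kernel from the image pair, is much better posed than single-image kernel estimation because the noisy frame provides an approximation of the sharp scene. The sketch below is a simplified spectral estimate under that assumption, not the paper's actual kernel-estimation or residual-deconvolution procedure.

```python
# Illustrative sketch: with a blurred/noisy pair of the same scene, the
# blur kernel can be estimated by regularized spectral division, which is
# ill-posed from the blurred image alone.
import numpy as np

rng = np.random.default_rng(4)
sharp = rng.random((64, 64))
kernel = np.zeros((64, 64))
kernel[0, :7] = 1.0 / 7.0                                # true 7-pixel blur
blurred = np.real(np.fft.ifft2(np.fft.fft2(sharp) * np.fft.fft2(kernel)))

noisy = sharp + 0.01 * rng.standard_normal((64, 64))     # short-exposure frame
N = np.fft.fft2(noisy)
eps = 1e-2                                               # regularization constant
K_est = np.fft.fft2(blurred) * np.conj(N) / (np.abs(N) ** 2 + eps)
k_est = np.real(np.fft.ifft2(K_est))
print(np.abs(k_est - kernel).max() < 0.05)               # kernel recovered closely
```

In the paper, both images also feed later stages (residual and gain-controlled deconvolution) that this toy example does not attempt to reproduce.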

929 citations

Journal ArticleDOI
01 Jul 2006
TL;DR: It is demonstrated that manually-specified point spread functions are sufficient for several challenging cases of motion-blur removal including extremely large motions, textured backgrounds and partial occluders.
Abstract: In a conventional single-exposure photograph, moving objects or moving cameras cause motion blur. The exposure time defines a temporal box filter that smears the moving object across the image by convolution. This box filter destroys important high-frequency spatial details so that deblurring via deconvolution becomes an ill-posed problem. Rather than leaving the shutter open for the entire exposure duration, we "flutter" the camera's shutter open and closed during the chosen exposure time with a binary pseudo-random sequence. The flutter changes the box filter to a broad-band filter that preserves high-frequency spatial details in the blurred image and the corresponding deconvolution becomes a well-posed problem. We demonstrate that manually-specified point spread functions are sufficient for several challenging cases of motion-blur removal including extremely large motions, textured backgrounds and partial occluders.
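Why fluttering helps can be seen directly in the frequency domain: a box exposure has deep nulls in its transfer function, while a pseudo-random on/off code with the same total light keeps every frequency away from zero. The sketch below is illustrative, with an arbitrary random code rather than the carefully chosen sequence from the paper.

```python
# Compare the MTF of a continuous (box) exposure against a pseudo-random
# on/off code of equal total open time: the box has near-zero nulls, the
# code stays broad-band, which makes deconvolution well-posed.
import numpy as np

n = 64                                       # exposure divided into 64 time slots
box = np.zeros(n)
box[:26] = 1.0                               # shutter open for the first 26 slots

rng = np.random.default_rng(2)
code = np.zeros(n)
code[rng.choice(n, 26, replace=False)] = 1.0 # arbitrary random code, same light

mtf_box = np.abs(np.fft.rfft(box))
mtf_code = np.abs(np.fft.rfft(code))
print(mtf_box.min(), mtf_code.min())         # the box MTF has an (essentially) exact null
```

Frequencies where the MTF vanishes are unrecoverable by deconvolution; the broad-band code avoids such nulls, at the cost of capturing less light than a fully open shutter.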

592 citations

Journal ArticleDOI
TL;DR: The fundamental trade off between spatial resolution and temporal resolution is exploited to construct a hybrid camera that can measure its own motion during image integration and show that, with minimal resources, hybrid imaging outperforms previous approaches to the motion blur problem.
Abstract: Motion blur due to camera motion can significantly degrade the quality of an image. Since the path of the camera motion can be arbitrary, deblurring of motion blurred images is a hard problem. Previous methods to deal with this problem have included blind restoration of motion blurred images, optical correction using stabilized lenses, and special CMOS sensors that limit the exposure time in the presence of motion. In this paper, we exploit the fundamental trade off between spatial resolution and temporal resolution to construct a hybrid camera that can measure its own motion during image integration. The acquired motion information is used to compute a point spread function (PSF) that represents the path of the camera during integration. This PSF is then used to deblur the image. To verify the feasibility of hybrid imaging for motion deblurring, we have implemented a prototype hybrid camera. This prototype system was evaluated in different indoor and outdoor scenes using long exposures and complex camera motion paths. The results show that, with minimal resources, hybrid imaging outperforms previous approaches to the motion blur problem. We conclude with a brief discussion on how our ideas can be extended beyond the case of global camera motion to the case where individual objects in the scene move with different velocities.
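Turning the measured camera path into a PSF amounts to accumulating the sampled positions into a normalized kernel. The sketch below shows that accumulation for a hypothetical diagonal drift; the sample path and grid size are invented for illustration, and the real system derives the path from the camera's low-resolution, high-frame-rate detector.

```python
# Illustrative sketch: rasterize motion samples measured during the
# exposure into a PSF whose energy sums to one, as a path-derived
# blur kernel must.
import numpy as np

def psf_from_path(path_xy, size):
    """Accumulate (x, y) position samples into a normalized size x size PSF."""
    psf = np.zeros((size, size))
    for x, y in path_xy:
        psf[int(round(y)), int(round(x))] += 1.0
    return psf / psf.sum()

path = [(t * 0.5, t * 0.3) for t in range(10)]   # hypothetical sampled camera drift
psf = psf_from_path(path, 8)
print(abs(psf.sum() - 1.0) < 1e-12)              # valid (energy-preserving) PSF
```

Once the PSF is known, the high-resolution frame can be deblurred with any non-blind deconvolution method, which is what makes the hybrid design effective with minimal extra resources.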

511 citations

Proceedings Article
04 Dec 2006
TL;DR: This work addresses the problem of blind motion deblurring from a single image, caused by a few moving objects, and relies on the observation that the statistics of derivative filters in images are significantly changed by blur.
Abstract: We address the problem of blind motion deblurring from a single image, caused by a few moving objects. In such situations only part of the image may be blurred, and the scene consists of layers blurred in different degrees. Most existing blind deconvolution research concentrates on recovering a single blurring kernel for the entire image. However, in the case of different motions, the blur cannot be modeled with a single kernel, and trying to deconvolve the entire image with the same kernel will cause serious artifacts. Thus, the task of deblurring needs to involve segmentation of the image into regions with different blurs. Our approach relies on the observation that the statistics of derivative filters in images are significantly changed by blur. Assuming the blur results from a constant velocity motion, we can limit the search to one dimensional box filter blurs. This enables us to model the expected derivative distributions as a function of the width of the blur kernel. Those distributions are surprisingly powerful in discriminating regions with different blurs. The approach produces convincing deconvolution results on real world images with rich texture.
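The key observation, that derivative statistics vary systematically with blur width, can be demonstrated in a few lines. The sketch below is illustrative only (not the paper's segmentation algorithm): it shows that the mean absolute horizontal derivative shrinks monotonically as the width of a 1-D box blur grows, which is what lets derivative statistics separate regions blurred by different amounts.

```python
# Sketch of the key observation: the mean absolute horizontal derivative
# decreases as the width of a 1-D box blur increases, so derivative
# statistics discriminate between regions with different blur widths.
import numpy as np

rng = np.random.default_rng(3)
img = rng.random((128, 128))

def mean_abs_dx(image, width):
    """Apply a horizontal box blur of `width` pixels, return mean |d/dx|."""
    out = sum(np.roll(image, s, axis=1) for s in range(width)) / width
    return np.abs(np.diff(out, axis=1)).mean()

stats = [mean_abs_dx(img, w) for w in (1, 4, 8, 16)]
print(all(a > b for a, b in zip(stats, stats[1:])))   # strictly decreasing
```

In the paper this idea is applied per region with full derivative distributions rather than a single mean, but the monotone dependence on blur width is the property being exploited.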

459 citations