Book ChapterDOI

HDR imaging under non-uniform blurring

TL;DR: A technique to obtain the high dynamic range (HDR) irradiance of a scene from a set of differently exposed images captured using a hand-held camera, together with a transformation spread function (TSF) that represents space-variant blurring as a weighted average of differently transformed versions of the latent image.
Abstract: Knowledge of scene irradiance is necessary in many computer vision algorithms. In this paper, we develop a technique to obtain the high dynamic range (HDR) irradiance of a scene from a set of differently exposed images captured using a hand-held camera. Any incidental motion induced by camera-shake can result in non-uniform motion blur. This is particularly true for frames captured with high exposure durations. We model the motion blur using a transformation spread function (TSF) that represents space-variant blurring as a weighted average of differently transformed versions of the latent image. We initially estimate the TSF of the blurred frames and then estimate the latent irradiance of the scene.
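
A minimal sketch of the TSF forward model the abstract describes, assuming the camera path is discretized into a small set of candidate homographies; the names below are illustrative, not from the paper:

    import numpy as np
    import cv2

    def tsf_blur(latent, homographies, weights):
        """Synthesize space-variant blur as a TSF-weighted average of
        homography-warped copies of the (grayscale) latent image."""
        latent = latent.astype(np.float32)
        h, w = latent.shape
        weights = np.asarray(weights, dtype=np.float64)
        weights = weights / weights.sum()        # the TSF integrates to 1
        blurred = np.zeros((h, w), dtype=np.float64)
        for H, wt in zip(homographies, weights):
            warped = cv2.warpPerspective(latent, np.asarray(H, float), (w, h))
            blurred += wt * warped               # time spent at this pose
        return blurred

Because each pixel averages differently warped copies of the scene, the effective point spread function varies across the image; for example, blur induced by in-plane rotation grows with distance from the rotation center.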
Citations
Journal ArticleDOI
TL;DR: This work proposes a passive method to automatically detect image splicing using blur as a cue; it can expose the presence of splicing by evaluating inconsistencies in motion blur even under space-variant blurring situations.
Abstract: The extensive availability of sophisticated image editing tools has rendered it relatively easy to produce fake images. Image splicing is a form of tampering in which an original image is altered by copying a portion from a different source. Because the phenomenon of motion blur is a common occurrence in hand-held cameras, we propose a passive method to automatically detect image splicing using blur as a cue. Specifically, we address the scenario of a static scene in which the cause of blur is due to hand shake. Existing methods for dealing with this problem work only in the presence of uniform space-invariant blur. In contrast, our method can expose the presence of splicing by evaluating inconsistencies in motion blur even under space-variant blurring situations. We validate our method on several examples for different scene situations and camera motions of interest.
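
The abstract does not spell out the detection algorithm; the toy heuristic below only illustrates the idea of checking motion-blur consistency across an image, under the assumption that blur attenuates gradients along the motion direction. It is a hypothetical sketch, not the authors' method:

    import numpy as np

    def block_blur_orientation(block):
        # Structure tensor of the block; blur smears detail along the
        # motion direction, so the eigenvector with the smaller gradient
        # energy approximates that direction.
        gy, gx = np.gradient(block.astype(np.float64))
        J = np.array([[np.sum(gx * gx), np.sum(gx * gy)],
                      [np.sum(gx * gy), np.sum(gy * gy)]])
        evals, evecs = np.linalg.eigh(J)
        v = evecs[:, 0]                     # smallest-eigenvalue direction
        return np.arctan2(v[1], v[0]) % np.pi

    def inconsistency_map(img, bsize=64):
        h, w = img.shape
        angles = np.array([[block_blur_orientation(img[r:r + bsize, c:c + bsize])
                            for c in range(0, w - bsize + 1, bsize)]
                           for r in range(0, h - bsize + 1, bsize)])
        d = np.abs(angles - np.median(angles))
        return np.minimum(d, np.pi - d)     # circular distance, period pi

Blocks whose blur orientation deviates sharply from the rest of the image are splicing candidates; under space-variant blur the reference would be a smoothly varying field rather than a single median, as in the paper's TSF setting.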

36 citations


Additional excerpts

  • ...We estimate the weights of these homographies also referred to as Transformation Spread Function (TSF) [22], [32] from a single blurred image....


Journal ArticleDOI
TL;DR: A method is developed that takes as input non-uniformly blurred and differently exposed images to extract the deblurred, latent irradiance image; it estimates the TSFs of the blurred images from locally derived point spread functions by exploiting their linear relationship.
Abstract: Hand-held cameras inevitably result in blurred images caused by camera-shake, and even more so in high dynamic range imaging applications where multiple images are captured over a wide range of exposure settings. The degree of blurring depends on many factors such as exposure time, stability of the platform, and user experience. Camera shake involves not only translations but also rotations resulting in nonuniform blurring. In this paper, we develop a method that takes input non-uniformly blurred and differently exposed images to extract the deblurred, latent irradiance image. We use transformation spread function (TSF) to effectively model the blur caused by camera motion. We first estimate the TSFs of the blurred images from locally derived point spread functions by exploiting their linear relationship. The scene irradiance is then estimated by minimizing a suitably derived cost functional. Two important cases are investigated wherein 1) only the higher exposures are blurred and 2) all the captured frames are blurred.
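
A sketch of the linear relationship the abstract exploits: each local PSF is the TSF-weighted sum of the impulse responses of the candidate transformations at that location, so the TSF weights can be recovered by a stacked nonnegative least-squares fit. The homography set, patch size, and border handling are simplifying assumptions:

    import numpy as np
    import cv2
    from scipy.optimize import nnls

    def impulse_response(H, center, img_shape, psize):
        # Response at `center` to one candidate homography: warp a delta
        # image and crop a patch (centers assumed away from the borders).
        delta = np.zeros(img_shape, dtype=np.float32)
        delta[center] = 1.0
        warped = cv2.warpPerspective(delta, np.asarray(H, float),
                                     (img_shape[1], img_shape[0]))
        r, c = center
        s = psize // 2
        return warped[r - s:r + s + 1, c - s:c + s + 1].ravel()

    def estimate_tsf(local_psfs, centers, homographies, img_shape):
        # Stack one linear system per local PSF; the weight vector w is
        # shared by all of them because the camera motion is global.
        psize = local_psfs[0].shape[0]
        A = np.vstack([np.stack([impulse_response(H, ctr, img_shape, psize)
                                 for H in homographies], axis=1)
                       for ctr in centers])
        b = np.concatenate([k.ravel() for k in local_psfs])
        w, _ = nnls(A, b)           # TSF weights are nonnegative
        return w / w.sum()          # normalize to unit mass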

19 citations


Cites background from "HDR imaging under non-uniform blurring"

  • ...In a preliminary version of this work [32], we had considered the input data to consist of both non-blurred and blurred observations, assuming that the effect of camera-shake is significant only beyond a certain exposure duration....


Proceedings ArticleDOI
11 Jul 2016
TL;DR: This work presents a robust HDR imaging system which can deal with blurry LDR images, overcoming the limitations of most existing HDR methods.
Abstract: High dynamic range (HDR) images can show more detail and luminance information on a typical display device than low dynamic range (LDR) images. We present a robust HDR imaging system which can deal with blurry LDR images, overcoming the limitations of most existing HDR methods. Experiments on real images show the effectiveness and competitiveness of the proposed method.

1 citation


Cites background from "HDR imaging under non-uniform blurring"

  • ...[6] discussed a complex approach to obtain the HDR image from a set of frames where images captured with high exposures were non-uniformly blurred due to camera shake....


Journal ArticleDOI
TL;DR: This paper proposes a method for image deblurring and reconstruction of HDR images using transformation spread functions (TSFs), which are estimated directly from locally derived point spread functions (PSFs) by exploiting their relationship.
Abstract: Image blur is difficult to avoid in many situations and can often ruin a photograph, so image deblurring and restoration are necessary in digital image processing. Image deblurring is a process that makes pictures sharp and useful by means of a mathematical model. It has wide applications, from consumer photography (e.g., removing motion blur due to camera shake) to radar imaging and tomography (e.g., removing the effect of the imaging system response). In this paper we propose a method for image deblurring and reconstruction of HDR images using transformation spread functions (TSFs), which are estimated directly from locally derived point spread functions (PSFs) by exploiting their relationship. We also compute quality measurement parameters for the images.
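
The abstract does not name the quality measurement parameters; PSNR over mean squared error, sketched below, is one common choice and is assumed here purely for illustration:

    import numpy as np

    def psnr(reference, restored, peak=255.0):
        # Peak signal-to-noise ratio in dB between a reference image
        # and its restored version.
        mse = np.mean((reference.astype(np.float64)
                       - restored.astype(np.float64)) ** 2)
        return float('inf') if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)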

1 citation

Journal ArticleDOI
TL;DR: This work focuses on tone-mapping techniques with a scheme to dynamically determine suitable exposure parameters for LDR images according to the properties of each scene to be captured, which provides approximately a 10% improvement in UIQI in comparison to the method of C. S. Vijay.
Abstract: This is the age of fast, high-quality digital images, yet images are subject to blurring due to many hardware limitations, such as atmospheric disturbance, equipment noise, and poor focus quality. High-quality image restoration can be achieved by constructing a high dynamic range (HDR) image. HDR image generation has been studied for years; owing to the expense and scarcity of HDR cameras, many works try to generate HDR images from several low dynamic range (LDR) images with different exposure settings. To ensure high-quality HDR image generation, the details of the scene should be retained across the different LDR images, and the exposure parameters of the LDR images should be chosen carefully. In this paper our proposed work focuses on tone-mapping techniques, together with a scheme to dynamically determine suitable exposure parameters for LDR images according to the properties of each scene to be captured. The proposed method provides approximately a 10% improvement in UIQI in comparison to the method of C. S. Vijay. Simulation results reveal that better HDR images are consistently generated from LDR images captured with the determined exposures than from those produced by the method with fixed exposures.
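
For reference, UIQI is the Universal Image Quality Index of Wang and Bovik, computed per window as Q = 4 * cov(a, b) * mean(a) * mean(b) / ((var(a) + var(b)) * (mean(a)^2 + mean(b)^2)) and averaged over the image; a block-wise sketch with an assumed 8x8 window:

    import numpy as np

    def uiqi(x, y, win=8):
        x = x.astype(np.float64)
        y = y.astype(np.float64)
        scores = []
        for r in range(0, x.shape[0] - win + 1, win):
            for c in range(0, x.shape[1] - win + 1, win):
                a = x[r:r + win, c:c + win].ravel()
                b = y[r:r + win, c:c + win].ravel()
                ma, mb = a.mean(), b.mean()
                cov = np.mean((a - ma) * (b - mb))
                denom = (a.var() + b.var()) * (ma ** 2 + mb ** 2)
                if denom > 0:                     # skip flat windows
                    scores.append(4 * cov * ma * mb / denom)
        return float(np.mean(scores))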

1 citation

References
Proceedings ArticleDOI
20 Jun 2009
TL;DR: The previously reported failure of the naive MAP approach is explained by demonstrating that it mostly favors no-blur explanations; it is also shown that, since the kernel size is often smaller than the image size, a MAP estimation of the kernel alone can be well constrained and can accurately recover the true blur.
Abstract: Blind deconvolution is the recovery of a sharp version of a blurred image when the blur kernel is unknown. Recent algorithms have afforded dramatic progress, yet many aspects of the problem remain challenging and hard to understand. The goal of this paper is to analyze and evaluate recent blind deconvolution algorithms both theoretically and experimentally. We explain the previously reported failure of the naive MAP approach by demonstrating that it mostly favors no-blur explanations. On the other hand we show that since the kernel size is often smaller than the image size a MAP estimation of the kernel alone can be well constrained and accurately recover the true blur. The plethora of recent deconvolution techniques makes an experimental evaluation on ground-truth data important. We have collected blur data with ground truth and compared recent algorithms under equal settings. Additionally, our data demonstrates that the shift-invariant blur assumption made by most algorithms is often violated.
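
Restated informally in MAP notation (common usage, not necessarily the paper's exact symbols), the distinction the abstract draws is between joint estimation and estimating the kernel alone with the latent image marginalized out:

    \hat{x}, \hat{k} = \arg\max_{x,k}\; p(x, k \mid y)
        \qquad \text{(MAP}_{x,k}\text{: degenerate, favors the no-blur solution } k = \delta\text{)}

    \hat{k} = \arg\max_{k}\; p(k \mid y) = \arg\max_{k} \int p(x, k \mid y)\, dx
        \qquad \text{(MAP}_{k}\text{: well constrained when } \dim(k) \ll \dim(y)\text{)}

Marginalizing out the latent image leaves far fewer unknowns than observations, which is why estimating the kernel alone is well constrained.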

1,219 citations

Book ChapterDOI
05 Sep 2010
TL;DR: A novel method for unsupervised class segmentation on a set of images that alternates between segmenting object instances and learning a class model based on a segmentation energy defined over all images at the same time, which can be optimized efficiently by techniques used before in interactive segmentation.
Abstract: We propose a novel method for unsupervised class segmentation on a set of images. It alternates between segmenting object instances and learning a class model. The method is based on a segmentation energy defined over all images at the same time, which can be optimized efficiently by techniques used before in interactive segmentation. Over iterations, our method progressively learns a class model by integrating observations over all images. In addition to appearance, this model captures the location and shape of the class with respect to an automatically determined coordinate frame common across images. This frame allows us to build stronger shape and location models, similar to those used in object class detection. Our method is inspired by interactive segmentation methods [1], but it is fully automatic and learns models characteristic for the object class rather than specific to one particular object/image. We experimentally demonstrate on the Caltech4, Caltech101, and Weizmann horses datasets that our method (a) transfers class knowledge across images and this improves results compared to segmenting every image independently; (b) outperforms Grabcut [1] for the task of unsupervised segmentation; (c) offers competitive performance compared to the state-of-the-art in unsupervised segmentation and in particular it outperforms the topic model [2].
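
A high-level sketch of the alternation the abstract describes; the threshold-based segmentation step and mean-intensity class model below are deliberately crude stand-ins for the paper's graph-cut energy minimization and its richer appearance/shape/location model:

    import numpy as np

    def segment_image(img, model):
        # Placeholder: label pixels by similarity to the current
        # foreground/background appearance (grayscale images assumed).
        if model is None:
            return img > img.mean()
        fg, bg = model
        return np.abs(img - fg) < np.abs(img - bg)

    def fit_class_model(images, masks):
        # Placeholder class model: mean foreground and background
        # intensity pooled over all images.
        fg = np.mean([im[m].mean() for im, m in zip(images, masks) if m.any()])
        bg = np.mean([im[~m].mean() for im, m in zip(images, masks) if (~m).any()])
        return fg, bg

    def unsupervised_class_segmentation(images, n_iters=10):
        model, masks = None, None
        for _ in range(n_iters):
            masks = [segment_image(im, model) for im in images]
            model = fit_class_model(images, masks)
        return masks, model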

1,028 citations

Proceedings ArticleDOI
29 Jul 2007
TL;DR: This paper shows how to produce a high quality image that cannot be obtained by simply denoising the noisy image or deblurring the blurred image alone, by combining information extracted from both blurred and noisy images.
Abstract: Taking satisfactory photos under dim lighting conditions using a hand-held camera is challenging. If the camera is set to a long exposure time, the image is blurred due to camera shake. On the other hand, the image is dark and noisy if it is taken with a short exposure time but with a high camera gain. By combining information extracted from both blurred and noisy images, however, we show in this paper how to produce a high quality image that cannot be obtained by simply denoising the noisy image, or deblurring the blurred image alone. Our approach is image deblurring with the help of the noisy image. First, both images are used to estimate an accurate blur kernel, which otherwise is difficult to obtain from a single blurred image. Second, and again using both images, a residual deconvolution is proposed to significantly reduce ringing artifacts inherent to image deconvolution. Third, the remaining ringing artifacts in smooth image regions are further suppressed by a gain-controlled deconvolution process. We demonstrate the effectiveness of our approach using a number of indoor and outdoor images taken by off-the-shelf hand-held cameras in poor lighting environments.
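
A sketch of the residual deconvolution idea: rather than deconvolving the blurred image directly, deconvolve only the residual between it and the re-blurred denoised image, then add the result back, so ringing is confined to the small residual signal. Wiener deconvolution is used here as an assumed stand-in for the paper's deconvolution step:

    import numpy as np
    from scipy.signal import fftconvolve

    def wiener_deconv(img, psf, nsr=1e-2):
        # Frequency-domain Wiener deconvolution with the PSF zero-padded
        # and rolled so that its center sits at the origin.
        pad = np.zeros(img.shape, dtype=np.float64)
        pad[:psf.shape[0], :psf.shape[1]] = psf
        pad = np.roll(pad, (-(psf.shape[0] // 2), -(psf.shape[1] // 2)),
                      axis=(0, 1))
        H = np.fft.rfft2(pad)
        X = np.conj(H) * np.fft.rfft2(img) / (np.abs(H) ** 2 + nsr)
        return np.fft.irfft2(X, s=img.shape)

    def residual_deconvolution(blurred, denoised, psf):
        reblurred = fftconvolve(denoised, psf, mode='same')
        residual = blurred - reblurred       # small, mostly lost detail
        return denoised + wiener_deconv(residual, psf)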

929 citations

Proceedings ArticleDOI
23 Jun 1999
TL;DR: A simple algorithm is described that computes the radiometric response function of an imaging system, from images of an arbitrary scene taken using different exposures, to fuse the multiple images into a single high dynamic range radiance image.
Abstract: A simple algorithm is described that computes the radiometric response function of an imaging system, from images of an arbitrary scene taken using different exposures. The exposure is varied by changing either the aperture setting or the shutter speed. The algorithm does not require precise estimates of the exposures used. Rough estimates of the ratios of the exposures (e.g. F-number settings on an inexpensive lens) are sufficient for accurate recovery of the response function as well as the actual exposure ratios. The computed response function is used to fuse the multiple images into a single high dynamic range radiance image. Robustness is tested using a variety of scenes and cameras as well as noisy synthetic images generated using 100 randomly selected response curves. Automatic rejection of image areas that have large vignetting effects or temporal scene variations make the algorithm applicable to not just photographic but also video cameras.
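
Once the inverse response function has been recovered, fusing the exposures into a radiance map reduces to a weighted average of linearized, exposure-normalized pixels. The gamma curve and hat-shaped weighting below are assumed placeholders, not the response recovered by the algorithm:

    import numpy as np

    def fuse_hdr(images, exposure_times, inv_response=lambda z: z ** 2.2):
        """images: list of float arrays scaled to [0, 1]; times in seconds."""
        num = np.zeros_like(images[0], dtype=np.float64)
        den = np.zeros_like(images[0], dtype=np.float64)
        for z, t in zip(images, exposure_times):
            w = 1.0 - np.abs(2.0 * z - 1.0)   # favor mid-tones; downweight
            num += w * inv_response(z) / t    # clipped shadows/highlights
            den += w
        return num / np.maximum(den, 1e-8)    # radiance up to global scale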

837 citations