Journal ArticleDOI

Defocus map estimation from a single image

01 Sep 2011-Pattern Recognition (Pergamon)-Vol. 44, Iss: 9, pp 1852-1858
TL;DR: This paper presents a simple yet effective approach to estimate the amount of spatially varying defocus blur at edge locations, and demonstrates the effectiveness of this method in providing a reliable estimation of the defocus map.
About: This article was published in Pattern Recognition on 2011-09-01 and has received 370 citations to date. It focuses on the topics: Image processing & Real image.
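The edge-based estimation summarized in the TL;DR can be illustrated with a small sketch: re-blur the image with a known Gaussian of scale σ0 and compare gradient magnitudes at an edge. For a Gaussian-blurred step edge, the gradient-magnitude ratio R between the original and re-blurred signals gives the unknown blur as σ = σ0 / sqrt(R² − 1). The 1-D demo below is an illustrative simplification of this idea, not the paper's full pipeline:

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def estimate_edge_blur(signal, sigma0=1.0):
    """Estimate the defocus blur sigma at the strongest edge of a 1-D
    signal from the gradient-magnitude ratio between the signal and a
    re-blurred copy (sigma0 is the known re-blur scale)."""
    reblurred = gaussian_filter1d(signal, sigma0)
    g1 = np.abs(np.gradient(signal))
    g2 = np.abs(np.gradient(reblurred))
    i = np.argmax(g1)                 # strongest edge location
    R = g1[i] / g2[i]                 # ratio > 1 since re-blurring weakens edges
    return sigma0 / np.sqrt(R**2 - 1.0)

# Synthetic step edge blurred by a known sigma of 3.0
x = np.zeros(200)
x[100:] = 1.0
blurred = gaussian_filter1d(x, 3.0)
print(f"estimated sigma = {estimate_edge_blur(blurred):.2f}")  # close to 3.0
```

Discrete sampling introduces a small bias, so the estimate is close to, but not exactly, the true blur scale.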
Citations
Proceedings ArticleDOI
01 Dec 2013
TL;DR: A novel salient region detection algorithm by integrating three important visual cues, namely uniqueness, focusness and objectness (UFO), which shows that, even with a simple pixel-level combination of the three components, the proposed approach yields significant improvement compared with previously reported methods.
Abstract: The goal of saliency detection is to locate important pixels or regions in an image which attract humans' visual attention the most. This is a fundamental task whose output may serve as the basis for further computer vision tasks like segmentation, resizing, tracking and so forth. In this paper we propose a novel salient region detection algorithm by integrating three important visual cues, namely uniqueness, focusness and objectness (UFO). In particular, uniqueness captures the appearance-derived visual contrast, focusness reflects the fact that salient regions are often photographed in focus, and objectness helps keep completeness of detected salient regions. While uniqueness has long been used for saliency detection, it is new to integrate focusness and objectness for this purpose. In fact, focusness and objectness both provide important saliency information complementary to uniqueness. In our experiments using public benchmark datasets, we show that, even with a simple pixel-level combination of the three components, the proposed approach yields significant improvement compared with previously reported methods.
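The "simple pixel-level combination" mentioned in the abstract can be sketched as follows. The multiplicative rule below is an assumption for illustration (the paper does not fix the rule in the abstract); it makes a pixel salient only when all three cues agree:

```python
import numpy as np

def combine_cues(uniqueness, focusness, objectness):
    """Toy pixel-level combination of three saliency cue maps:
    normalize each map to [0, 1], then multiply them, so that a
    pixel scores high only when all cues agree. (The multiplicative
    rule is an assumption for illustration.)"""
    def norm(m):
        m = m.astype(float)
        return (m - m.min()) / (m.max() - m.min() + 1e-12)
    return norm(uniqueness) * norm(focusness) * norm(objectness)

rng = np.random.default_rng(0)
u, f, o = rng.random((4, 4)), rng.random((4, 4)), rng.random((4, 4))
s = combine_cues(u, f, o)
print(s.shape, bool(s.min() >= 0.0 and s.max() <= 1.0))
```

Other simple combinations (e.g. a weighted sum of the normalized maps) follow the same pattern.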

336 citations


Cites background or methods from "Defocus map estimation from a single image"

  • ...A thin lens model for image blur (revised from [41])....


  • ...Focusness or blurriness has been used for many purposes such as depth recovery [41] and defocus magnification [31]....


  • ...[41] use an image matting method to compute the blurriness of non-edge pixels....


  • ...Compared with the previous propagation methods [31, 41], ours is simple, stable and able to process regions with non-smooth interiors....


Proceedings ArticleDOI
07 Jun 2015
TL;DR: This work proposes a simple yet effective blur feature via sparse representation and image decomposition that directly establishes correspondence between sparse edge representation and blur strength estimation.
Abstract: We tackle a fundamental problem: detecting and estimating just noticeable blur (JNB) caused by defocus that spans a small number of pixels in images. This type of blur is common during photo taking. Although it is not strong, the slight edge blurriness contains informative clues related to depth. We found that existing blur descriptors based on local information cannot reliably distinguish this type of small blur from unblurred structures. We propose a simple yet effective blur feature via sparse representation and image decomposition. It directly establishes a correspondence between sparse edge representation and blur strength estimation. Extensive experiments demonstrate the generality and robustness of this feature.

196 citations


Cites background or methods from "Defocus map estimation from a single image"

  • ...We show a quantitative comparison on our data via precision-recall (PR) in Figure 7: (a) Input, (b) Ground truth, (c) Chakrabarti et al. [4], (d) Bae and Durand [2], (e) Zhuo and Sim [31], (f) Zhu et al. [30], (g) Liu et al. [19], (h) Shi et al. [22], (i) Su et al. [23], (j) Our raw feature, (k) Our final blur map, (l) Our binary map....


  • ...We compare our sparsity based method with other blur estimation approaches including [2, 19, 4, 31, 30, 23, 22] in (c)-(i)....



  • ...Note that local gradient distribution features were used in [8, 19, 14, 31], and local frequency-based metrics include the slope of the average power spectrum [19, 22], wavelet response [29], and Gabor filters....


  • ...The methods of (d) and (e) estimate blur at strong edge regions, and then propagate them to get final results [2, 31]....


Journal ArticleDOI
TL;DR: This paper proposes a novel boundary-finding-based multi-focus image fusion algorithm, in which the task of detecting the focused regions is treated as finding the boundaries between the focused and defocused regions in the source images.

196 citations

Journal ArticleDOI
TL;DR: A deep learning-based approach was proposed to mitigate the quantum noise in low-dose computed tomography, using an adversarially trained network and a sharpness detection network to guide the training process.
Abstract: Low-dose computed tomography (LDCT) has offered tremendous benefits in radiation-restricted applications, but the quantum noise resulting from the insufficient number of photons could potentially harm the diagnostic performance. Current image-based denoising methods tend to produce a blur effect on the final reconstructed results, especially at high noise levels. In this paper, a deep learning-based approach was proposed to mitigate this problem. An adversarially trained network and a sharpness detection network were trained to guide the training process. Experiments on both simulated and real datasets show that the results of the proposed method have very small resolution loss and achieve better performance relative to state-of-the-art methods both quantitatively and visually.

169 citations


Cites background from "Defocus map estimation from a single image"

  • ...There are other works that can produce a sharpness map, such as depth map estimation [77] or blur segmentation [60], but the depth map does not necessarily correspond to the amount of sharpness, and these methods tend to highlight blurred edges or be insensitive to small amounts of blur....


Journal ArticleDOI
TL;DR: A sharpness metric based on local binary patterns and a robust segmentation algorithm to separate in- and out-of-focus image regions are proposed, yielding high-quality sharpness maps.
Abstract: Defocus blur is extremely common in images captured using optical imaging systems. It may be undesirable, but may also be an intentional artistic effect, thus it can either enhance or inhibit our visual perception of the image scene. For tasks such as image restoration and object recognition, one might want to segment a partially blurred image into blurred and non-blurred regions. In this paper, we propose a sharpness metric based on local binary patterns and a robust segmentation algorithm to separate in- and out-of-focus image regions. The proposed sharpness metric exploits the observation that most local image patches in blurry regions have significantly fewer of certain local binary patterns compared with those in sharp regions. Using this metric together with image matting and multi-scale inference, we obtain high-quality sharpness maps. Tests on hundreds of partially blurred images were used to evaluate our blur segmentation algorithm and six comparator methods. The results show that our algorithm achieves segmentation results comparable with the state of the art and has a large speed advantage over the other methods.
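The observation the metric exploits — blurry patches exhibit fewer of certain local binary patterns — can be sketched with a toy LBP score. The thresholding rule and the score (fraction of non-zero codes) below are simplifications for illustration, not the paper's exact metric:

```python
import numpy as np

def lbp_sharpness(patch, t=0.02):
    """Toy sharpness score from local binary patterns (LBP): for each
    interior pixel, build an 8-bit code marking which neighbours exceed
    the centre by threshold t, then return the fraction of pixels with a
    non-zero code. Blurry (low-contrast) patches yield mostly zero codes.
    (Simplified for illustration; not the paper's exact metric.)"""
    p = patch.astype(float)
    c = p[1:-1, 1:-1]
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
              (0, 1), (1, -1), (1, 0), (1, 1)]
    code = np.zeros(c.shape, dtype=int)
    for bit, (dy, dx) in enumerate(shifts):
        n = p[1 + dy:p.shape[0] - 1 + dy, 1 + dx:p.shape[1] - 1 + dx]
        code |= (n - c > t).astype(int) << bit
    return float(np.mean(code != 0))

sharp = np.tile([0.0, 1.0], (8, 4))   # high-contrast stripes
blurry = np.full((8, 8), 0.5)         # flat, low-contrast region
print(lbp_sharpness(sharp) > lbp_sharpness(blurry))  # → True
```

A segmentation scheme would evaluate such a score over sliding windows and feed it to matting/inference stages, as the abstract describes.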

123 citations

References
Journal ArticleDOI
TL;DR: There is a natural uncertainty principle between detection and localization performance, which are the two main goals, and with this principle a single operator shape is derived which is optimal at any scale.
Abstract: This paper describes a computational approach to edge detection. The success of the approach depends on the definition of a comprehensive set of goals for the computation of edge points. These goals must be precise enough to delimit the desired behavior of the detector while making minimal assumptions about the form of the solution. We define detection and localization criteria for a class of edges, and present mathematical forms for these criteria as functionals on the operator impulse response. A third criterion is then added to ensure that the detector has only one response to a single edge. We use the criteria in numerical optimization to derive detectors for several common image features, including step edges. On specializing the analysis to step edges, we find that there is a natural uncertainty principle between detection and localization performance, which are the two main goals. With this principle we derive a single operator shape which is optimal at any scale. The optimal detector has a simple approximate implementation in which edges are marked at maxima in gradient magnitude of a Gaussian-smoothed image. We extend this simple detector using operators of several widths to cope with different signal-to-noise ratios in the image. We present a general method, called feature synthesis, for the fine-to-coarse integration of information from operators at different scales. Finally we show that step edge detector performance improves considerably as the operator point spread function is extended along the edge.
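The optimal detector's "simple approximate implementation" described above — mark edges at maxima in gradient magnitude of a Gaussian-smoothed image — can be sketched directly. This is a minimal sketch of that core step only; the thresholds are arbitrary, and hysteresis and feature synthesis are omitted:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def canny_core(img, sigma=1.0, thresh=0.05):
    """Sketch of the Canny core: Gaussian smoothing, gradient magnitude,
    and non-maximum suppression along the quantized gradient direction.
    Hysteresis thresholding is deliberately omitted."""
    s = gaussian_filter(img.astype(float), sigma)
    gy, gx = np.gradient(s)
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)       # direction in [0, pi)
    # quantize direction to 0, 45, 90, 135 degrees and pick neighbour offsets
    offs = [(0, 1), (1, 1), (1, 0), (1, -1)]
    q = ((ang + np.pi / 8) // (np.pi / 4)).astype(int) % 4
    edges = np.zeros_like(mag, dtype=bool)
    for i in range(1, mag.shape[0] - 1):
        for j in range(1, mag.shape[1] - 1):
            dy, dx = offs[q[i, j]]
            if (mag[i, j] >= thresh
                    and mag[i, j] >= mag[i + dy, j + dx]
                    and mag[i, j] >= mag[i - dy, j - dx]):
                edges[i, j] = True
    return edges

img = np.zeros((32, 32))
img[:, 16:] = 1.0                     # vertical step edge
e = canny_core(img)
print(e[:, 14:18].any(), e[:, :10].any())   # edge found only near the step
```

A production implementation would vectorize the suppression loop and add the two-threshold hysteresis stage.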

28,073 citations


"Defocus map estimation from a singl..." refers methods in this paper

  • ...In our implementation, we set the re-blurring σ0 = 1 and use the Canny edge detector [14] to perform the edge detection....


Journal ArticleDOI
TL;DR: A simple but effective image prior, the dark channel prior, is proposed to remove haze from a single input image, based on a key observation: most local patches in haze-free outdoor images contain some pixels which have very low intensities in at least one color channel.
Abstract: In this paper, we propose a simple but effective image prior, the dark channel prior, to remove haze from a single input image. The dark channel prior is a statistic of outdoor haze-free images. It is based on a key observation: most local patches in outdoor haze-free images contain some pixels whose intensity is very low in at least one color channel. Using this prior with the haze imaging model, we can directly estimate the thickness of the haze and recover a high-quality haze-free image. Results on a variety of hazy images demonstrate the power of the proposed prior. Moreover, a high-quality depth map can also be obtained as a byproduct of haze removal.
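The dark channel itself follows directly from the observation in the abstract: for each pixel, take the minimum intensity over the three color channels within a local patch. A short sketch (the patch size is a typical choice, not prescribed by the abstract):

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(img, patch=15):
    """Dark channel of an RGB image in [0, 1]: per-pixel minimum over
    the three colour channels, then a local minimum filter over a
    patch. Haze-free regions yield values near zero; haze lifts them."""
    per_pixel_min = img.min(axis=2)              # min over R, G, B
    return minimum_filter(per_pixel_min, size=patch)

rng = np.random.default_rng(0)
hazefree = rng.random((40, 40, 3)) * rng.random((40, 40, 3))  # has dark pixels
hazy = 0.5 * hazefree + 0.5                     # toy additive airlight
print(dark_channel(hazefree).mean() < dark_channel(hazy).mean())  # → True
```

In the full method, the lifted dark-channel values are what allow the haze thickness (transmission) to be estimated per pixel.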

3,668 citations

Journal ArticleDOI
TL;DR: A closed-form solution to natural image matting that allows us to find the globally optimal alpha matte by solving a sparse linear system of equations and predicts the properties of the solution by analyzing the eigenvectors of a sparse matrix, closely related to matrices used in spectral image segmentation algorithms.
Abstract: Interactive digital matting, the process of extracting a foreground object from an image based on limited user input, is an important task in image and video editing. From a computer vision perspective, this task is extremely challenging because it is massively ill-posed - at each pixel we must estimate the foreground and the background colors, as well as the foreground opacity ("alpha matte") from a single color measurement. Current approaches either restrict the estimation to a small part of the image, estimating foreground and background colors based on nearby pixels where they are known, or perform iterative nonlinear estimation by alternating foreground and background color estimation with alpha estimation. In this paper, we present a closed-form solution to natural image matting. We derive a cost function from local smoothness assumptions on foreground and background colors and show that in the resulting expression, it is possible to analytically eliminate the foreground and background colors to obtain a quadratic cost function in alpha. This allows us to find the globally optimal alpha matte by solving a sparse linear system of equations. Furthermore, the closed-form formula allows us to predict the properties of the solution by analyzing the eigenvectors of a sparse matrix, closely related to matrices used in spectral image segmentation algorithms. We show that high-quality mattes for natural images may be obtained from a small amount of user input.

1,851 citations


"Defocus map estimation from a single image" refers to methods in this paper

  • ...Here, we apply the matting Laplacian [18] to perform the defocus map interpolation....


  • ...L is the matting Laplacian matrix and D is a diagonal matrix whose element Dii is 1 if pixel i is at the edge location, and 0 otherwise....

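The two snippets above describe interpolating sparse edge estimates over the whole image using the matting Laplacian L and a diagonal indicator D of edge pixels. A common way to pose this (the exact energy is an assumption here) is to minimize dᵀLd + λ(d − d̂)ᵀD(d − d̂), which leads to the sparse linear system (L + λD)d = λD d̂. The sketch below uses a 1-D chain Laplacian as a stand-in for the matting Laplacian:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import spsolve

# Propagate sparse estimates d_hat (known only at "edge" pixels, marked by D)
# to all pixels by solving (L + lam*D) d = lam * D * d_hat. Here L is a
# simple 1-D chain Laplacian standing in for the matting Laplacian, and the
# energy being minimized is an illustrative assumption.
n, lam = 10, 100.0
main = 2.0 * np.ones(n)
main[0] = main[-1] = 1.0
L = diags([main, -np.ones(n - 1), -np.ones(n - 1)], [0, -1, 1], format="csc")
known = np.zeros(n)
known[[0, n - 1]] = 1.0                 # pixels with a blur estimate
D = diags(known, format="csc")
d_hat = np.zeros(n)
d_hat[0], d_hat[-1] = 1.0, 3.0          # sparse blur estimates at the "edges"
d = spsolve(L + lam * D, lam * (D @ d_hat))
print(np.round(d, 1))                   # interpolates smoothly from 1.0 to 3.0
```

With a true matting Laplacian, the same solve fills in non-edge pixels while respecting image structure rather than a fixed grid.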

Journal ArticleDOI
01 Aug 2004
TL;DR: This paper presents a simple colorization method that requires neither precise image segmentation, nor accurate region tracking, and demonstrates that high quality colorizations of stills and movie clips may be obtained from a relatively modest amount of user input.
Abstract: Colorization is a computer-assisted process of adding color to a monochrome image or movie. The process typically involves segmenting images into regions and tracking these regions across image sequences. Neither of these tasks can be performed reliably in practice; consequently, colorization requires considerable user intervention and remains a tedious, time-consuming, and expensive task. In this paper we present a simple colorization method that requires neither precise image segmentation, nor accurate region tracking. Our method is based on a simple premise: neighboring pixels in space-time that have similar intensities should have similar colors. We formalize this premise using a quadratic cost function and obtain an optimization problem that can be solved efficiently using standard techniques. In our approach an artist only needs to annotate the image with a few color scribbles, and the indicated colors are automatically propagated in both space and time to produce a fully colorized image or sequence. We demonstrate that high quality colorizations of stills and movie clips may be obtained from a relatively modest amount of user input.
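The premise above (similar intensities should get similar colors) is typically encoded as affinity weights inside the quadratic cost. One common Gaussian weighting is sketched below; the specific form and the value of σ are illustrative assumptions:

```python
import numpy as np

def scribble_weights(Y, r, neighbors, sigma=0.1):
    """Affinity weights for colour propagation: neighbours whose intensity
    Y[s] is close to Y[r] get large weight, w_rs ∝ exp(-(Y[r]-Y[s])^2 /
    (2*sigma^2)), normalized to sum to 1. (One common weighting; the form
    and sigma are illustrative.)"""
    w = np.exp(-(Y[r] - Y[neighbors]) ** 2 / (2 * sigma ** 2))
    return w / w.sum()

Y = np.array([0.20, 0.21, 0.80])   # pixel 0's intensity resembles pixel 1's
w = scribble_weights(Y, r=0, neighbors=np.array([1, 2]))
print(w[0] > w[1])                 # similar-intensity neighbour dominates → True
```

In the full method, these weights define the quadratic cost whose minimizer spreads the scribbled colors through space and time.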

1,505 citations

Proceedings ArticleDOI
29 Jul 2007
TL;DR: A simple modification to a conventional camera is proposed to insert a patterned occluder within the aperture of the camera lens, creating a coded aperture, and introduces a criterion for depth discriminability which is used to design the preferred aperture pattern.
Abstract: A conventional camera captures blurred versions of scene information away from the plane of focus. Camera systems have been proposed that allow for recording all-focus images, or for extracting depth, but to record both simultaneously has required more extensive hardware and reduced spatial resolution. We propose a simple modification to a conventional camera that allows for the simultaneous recovery of both (a) high resolution image information and (b) depth information adequate for semi-automatic extraction of a layered depth representation of the image. Our modification is to insert a patterned occluder within the aperture of the camera lens, creating a coded aperture. We introduce a criterion for depth discriminability which we use to design the preferred aperture pattern. Using a statistical model of images, we can recover both depth information and an all-focus image from single photographs taken with the modified camera. A layered depth map is then extracted, requiring user-drawn strokes to clarify layer assignments in some cases. The resulting sharp image and layered depth map can be combined for various photographic applications, including automatic scene segmentation, post-exposure refocusing, or re-rendering of the scene from an alternate viewpoint.

1,489 citations


"Defocus map estimation from a single image" refers to methods in this paper

  • ...The coded aperture method [7] changes the shape of the camera aperture to make defocus deblurring more reliable....
