Proceedings ArticleDOI

Resolving Focal Plane Ambiguity using Chromatic Aberration and Color Uniformity Principle

TL;DR: A novel region-based method uses the Color Uniformity Principle (CUP) to detect the ordering of defocus blurs in the R, G and B color planes and thereby resolve the FPA.
Abstract: Focal Plane Ambiguity (FPA) is a fundamental limitation of the Depth from Defocus (DFD) technique and refers to the ambiguity between two possible distances corresponding to a single defocus blur value. Since mixed-sided scenes occur frequently in images and image sequences, the assumption of a one-sided focused scene often does not hold, which leads to errors in the estimated defocus map. However, the inherent ordering of defocus blurs at edges in the R, G and B color planes, caused by chromatic aberration, can be used to resolve this ambiguity. In highly defocused regions, though, this ordering becomes unreliable because edge detection becomes erroneous. In this paper, we propose a novel region-based method using the Color Uniformity Principle (CUP) to detect the ordering of defocus blurs in the R, G and B color planes and resolve the FPA.
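
The two candidate distances and their resolution from the per-channel blur ordering can be illustrated with the thin-lens model. The sketch below is not the paper's implementation: the function names, the numeric values, and the assumed chromatic ordering (red focusing farther than blue, so a sharper red channel indicates the far candidate) are illustrative assumptions.

```python
# Minimal sketch (not the paper's method): the two object distances consistent
# with one blur value under a thin-lens model, and a choice between them based
# on the ordering of per-channel blurs. All names and numbers are illustrative.

def candidate_distances(blur_diam, f, aperture_d, sensor_dist):
    """Thin lens: blur_diam = aperture_d * sensor_dist * |1/f - 1/u - 1/sensor_dist|,
    so 1/u = (1/f - 1/sensor_dist) +/- blur_diam / (aperture_d * sensor_dist)."""
    inv_u_focus = 1.0 / f - 1.0 / sensor_dist             # 1/u of the in-focus plane
    delta = blur_diam / (aperture_d * sensor_dist)
    near = 1.0 / (inv_u_focus + delta)                    # candidate in front of focus
    inv_far = inv_u_focus - delta
    far = 1.0 / inv_far if inv_far > 0 else float("inf")  # beyond hyperfocal distance
    return near, far

def resolve_fpa(blur_diam, sigma_r, sigma_b, f, aperture_d, sensor_dist):
    """Pick a candidate using the R/B blur ordering. Assumption: with axial
    chromatic aberration red focuses farther than blue, so sigma_r < sigma_b
    suggests the object lies behind the focal plane (flip for other lenses)."""
    near, far = candidate_distances(blur_diam, f, aperture_d, sensor_dist)
    return far if sigma_r < sigma_b else near

if __name__ == "__main__":
    f, d = 0.050, 0.025                                   # hypothetical 50 mm f/2 lens
    sensor = 1.0 / (1.0 / f - 1.0 / 1.0)                  # sensor placed to focus at 1 m
    print(resolve_fpa(1e-4, sigma_r=1.2, sigma_b=1.8,
                      f=f, aperture_d=d, sensor_dist=sensor))
```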
References
Journal ArticleDOI
TL;DR: A closed-form solution to natural image matting that allows us to find the globally optimal alpha matte by solving a sparse linear system of equations and predicts the properties of the solution by analyzing the eigenvectors of a sparse matrix, closely related to matrices used in spectral image segmentation algorithms.
Abstract: Interactive digital matting, the process of extracting a foreground object from an image based on limited user input, is an important task in image and video editing. From a computer vision perspective, this task is extremely challenging because it is massively ill-posed - at each pixel we must estimate the foreground and the background colors, as well as the foreground opacity ("alpha matte") from a single color measurement. Current approaches either restrict the estimation to a small part of the image, estimating foreground and background colors based on nearby pixels where they are known, or perform iterative nonlinear estimation by alternating foreground and background color estimation with alpha estimation. In this paper, we present a closed-form solution to natural image matting. We derive a cost function from local smoothness assumptions on foreground and background colors and show that in the resulting expression, it is possible to analytically eliminate the foreground and background colors to obtain a quadratic cost function in alpha. This allows us to find the globally optimal alpha matte by solving a sparse linear system of equations. Furthermore, the closed-form formula allows us to predict the properties of the solution by analyzing the eigenvectors of a sparse matrix, closely related to matrices used in spectral image segmentation algorithms. We show that high-quality mattes for natural images may be obtained from a small amount of user input.
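
The final step described above is a sparse linear solve for alpha. Below is a minimal sketch of that step, assuming the matting Laplacian L has already been assembled from the paper's local-window cost (its construction is omitted here); the function name and the constraint weight are illustrative.

```python
# Sketch of the closed-form matting solve: (L + lam*D) alpha = lam*D*b, where D
# is a diagonal indicator of user-constrained pixels and b holds their values.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def solve_alpha(L, scribble_mask, scribble_values, lam=100.0):
    """Globally optimal alpha given a precomputed matting Laplacian L."""
    D = sp.diags(scribble_mask.astype(float))      # 1 on constrained pixels
    b = lam * scribble_mask * scribble_values      # right-hand side
    alpha = spla.spsolve((L + lam * D).tocsc(), b)
    return np.clip(alpha, 0.0, 1.0)
```

The citing paper uses this formulation to propagate its aligned sparse defocus map into a full depth map, with the sparse defocus values playing the role of the user scribbles.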

1,851 citations

Proceedings ArticleDOI
29 Jul 2007
TL;DR: A simple modification to a conventional camera is proposed to insert a patterned occluder within the aperture of the camera lens, creating a coded aperture, and introduces a criterion for depth discriminability which is used to design the preferred aperture pattern.
Abstract: A conventional camera captures blurred versions of scene information away from the plane of focus. Camera systems have been proposed that allow for recording all-focus images, or for extracting depth, but to record both simultaneously has required more extensive hardware and reduced spatial resolution. We propose a simple modification to a conventional camera that allows for the simultaneous recovery of both (a) high resolution image information and (b) depth information adequate for semi-automatic extraction of a layered depth representation of the image. Our modification is to insert a patterned occluder within the aperture of the camera lens, creating a coded aperture. We introduce a criterion for depth discriminability which we use to design the preferred aperture pattern. Using a statistical model of images, we can recover both depth information and an all-focus image from single photographs taken with the modified camera. A layered depth map is then extracted, requiring user-drawn strokes to clarify layer assignments in some cases. The resulting sharp image and layered depth map can be combined for various photographic applications, including automatic scene segmentation, post-exposure refocusing, or re-rendering of the scene from an alternate viewpoint.
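
A minimal sketch of the depth-selection idea, not the authors' algorithm: each hypothesized depth corresponds to a scaled copy of the coded-aperture kernel, and the scale whose deconvolution looks most like a natural image is selected. A heavy-tailed gradient penalty stands in for the paper's statistical image model, and the Wiener regularization and all names are assumptions.

```python
# Sketch: score each depth hypothesis by deconvolving with its kernel and
# penalizing heavy gradients (wrong kernel scales produce ringing).
import numpy as np
from numpy.fft import fft2, ifft2

def wiener_deconv(img, kernel, snr=0.01):
    """FFT-based Wiener deconvolution with a flat SNR prior (illustrative only)."""
    K = fft2(kernel, s=img.shape)
    return np.real(ifft2(fft2(img) * np.conj(K) / (np.abs(K) ** 2 + snr)))

def best_depth_index(img, kernels_by_depth, snr=0.01):
    """Return the index of the kernel scale whose deconvolution is most
    natural-image-like under a sparse (heavy-tailed) gradient score."""
    scores = []
    for k in kernels_by_depth:
        latent = wiener_deconv(img, k, snr)
        gx, gy = np.diff(latent, axis=1), np.diff(latent, axis=0)
        scores.append(np.sum(np.abs(gx) ** 0.8) + np.sum(np.abs(gy) ** 0.8))
    return int(np.argmin(scores))
```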

1,489 citations

Proceedings ArticleDOI
17 Jun 2006
TL;DR: A closed-form solution to natural image matting that allows us to find the globally optimal alpha matte by solving a sparse linear system of equations and predicts the properties of the solution by analyzing the eigenvectors of a sparse matrix, closely related to matrices used in spectral image segmentation algorithms.
Abstract: Interactive digital matting, the process of extracting a foreground object from an image based on limited user input, is an important task in image and video editing. From a computer vision perspective, this task is extremely challenging because it is massively ill-posed - at each pixel we must estimate the foreground and the background colors, as well as the foreground opacity ("alpha matte") from a single color measurement. Current approaches either restrict the estimation to a small part of the image, estimating foreground and background colors based on nearby pixels where they are known, or perform iterative nonlinear estimation by alternating foreground and background color estimation with alpha estimation. In this paper we present a closed form solution to natural image matting. We derive a cost function from local smoothness assumptions on foreground and background colors, and show that in the resulting expression it is possible to analytically eliminate the foreground and background colors to obtain a quadratic cost function in alpha. This allows us to find the globally optimal alpha matte by solving a sparse linear system of equations. Furthermore, the closed form formula allows us to predict the properties of the solution by analyzing the eigenvectors of a sparse matrix, closely related to matrices used in spectral image segmentation algorithms. We show that high quality mattes can be obtained on natural images from a surprisingly small amount of user input.

876 citations


"Resolving Focal Plane Ambiguity usi..." refers methods in this paper

  • ...Finally, we generate the full depth map from the aligned sparse defocus map using Levin’s closed-form matting formulation [12]....


Journal ArticleDOI
TL;DR: This paper presents a simple yet effective approach to estimate the amount of spatially varying defocus blur at edge locations, and demonstrates the effectiveness of this method in providing a reliable estimation of the defocus map.
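
A minimal sketch of the edge-based estimation idea, assuming the common re-blur gradient-ratio formulation: re-blurring with a known Gaussian sigma0 and taking the ratio of gradient magnitudes at an edge yields the local defocus blur. Function and parameter names are illustrative.

```python
# Sketch: for a step edge blurred by sigma, the ratio of gradient magnitudes
# before and after re-blurring with sigma0 is sqrt(1 + sigma0^2 / sigma^2),
# which can be inverted for sigma at detected edge locations.
import numpy as np
from scipy import ndimage

def defocus_at_edges(gray, edge_mask, sigma0=1.0, eps=1e-6):
    """Estimate the defocus blur sigma at the pixels marked by edge_mask."""
    reblurred = ndimage.gaussian_filter(gray, sigma0)
    g1 = np.hypot(*np.gradient(gray))        # gradient magnitude of the input
    g2 = np.hypot(*np.gradient(reblurred))   # gradient magnitude after re-blur
    ratio = g1 / (g2 + eps)
    sigma = sigma0 / np.sqrt(np.maximum(ratio ** 2 - 1.0, eps))
    return np.where(edge_mask, sigma, 0.0)
```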

370 citations


"Resolving Focal Plane Ambiguity usi..." refers background or methods in this paper

  • ...To highlight the effectiveness of the proposed method, we compare the generated full depth maps with depth maps generated using the state-of-the-art methods of Zhuo [2] and Kumar [10]....


  • ...We observe from the table that the proposed method results in corrected defocus maps with higher accuracy compared to the state-of-the-art methods [2] and [10]....


  • ...We estimate the defocus blur amount σ using Zhuo’s method [2]....


  • ...Table excerpt: per-image accuracy comparison of Zhuo [2], Kumar [10], and the proposed method (values truncated in the excerpt)....


  • ...Zhuo [2] has explicitly mentioned the focal plane ambiguity problem and resolved it by assuming the scene to be one-sided, i.e., the scene contains only regions farther than the focused point....


Proceedings ArticleDOI
05 Dec 1988
TL;DR: In this paper, a new method is described for recovering the distance of objects in a scene from images formed by lenses, based on measuring the change in the scene's image due to a known change in three intrinsic camera parameters: (i) distance between the lens and the image detector, (ii) focal length, and (iii) diameter of the lens aperture.
Abstract: A new method is described for recovering the distance of objects in a scene from images formed by lenses. The recovery is based on measuring the change in the scene’s image due to a known change in the three intrinsic camera parameters: (i) distance between the lens and the image detector, (ii) focal length of the lens, and (iii) diameter of the lens aperture. The method is parallel, involving simple local computations. In comparison with stereo vision and structure-from-motion methods, the correspondence problem does not arise. This method for depth-map recovery may also be used for (i) obtaining focused images (i.e. images having a large depth of field) from two images having finite depth of field, and (ii) rapid autofocusing of computer-controlled video cameras.
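
A minimal sketch of the recovery idea under the thin-lens model: two blur measurements taken with a known change of sensor distance jointly determine the object distance, which a single measurement cannot. This illustrative version searches candidate distances instead of reproducing the paper's parallel, closed-form computation; all names and numbers are hypothetical.

```python
# Sketch: pick the object distance whose predicted blur pair, under two known
# camera settings, best matches the two measured blur values.
import numpy as np

def predicted_blur(u, f, aperture_d, sensor_dist):
    """Blur-circle diameter for an object at distance u (thin-lens model)."""
    return aperture_d * sensor_dist * abs(1.0 / f - 1.0 / u - 1.0 / sensor_dist)

def depth_from_two_blurs(b1, b2, f, aperture_d, s1, s2,
                         candidates=np.linspace(0.2, 10.0, 5000)):
    """Two sensor settings (s1, s2) remove the near/far ambiguity of one blur."""
    err = [(predicted_blur(u, f, aperture_d, s1) - b1) ** 2 +
           (predicted_blur(u, f, aperture_d, s2) - b2) ** 2
           for u in candidates]
    return float(candidates[int(np.argmin(err))])
```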

231 citations