
Showing papers by "Anat Levin" published in 2010


Proceedings ArticleDOI
13 Jun 2010
TL;DR: This paper argues that the fundamental difference between different acquisition and rendering techniques is a difference between prior assumptions on the light field, and proposes a new light field prior which is a Gaussian assigning a non-zero variance mostly to a 3D subset of entries.
Abstract: Acquiring and representing the 4D space of rays in the world (the light field) is important for many computer vision and graphics applications. Yet light field acquisition is costly due to its high dimensionality. Existing approaches either capture the 4D space explicitly or involve an error-sensitive depth estimation process. This paper argues that the fundamental difference between different acquisition and rendering techniques is a difference in their prior assumptions on the light field. We use the previously reported dimensionality gap in the 4D light field spectrum to propose a new light field prior. The new prior is a Gaussian assigning non-zero variance mostly to a 3D subset of entries. Since only a low-dimensional subset of entries has non-zero variance, we can reduce the complexity of the acquisition process and render the 4D light field from 3D measurement sets. Moreover, the Gaussian nature of the prior leads to linear and depth-invariant reconstruction algorithms. We use the new prior to render the 4D light field from a 3D focal stack sequence and to interpolate sparse directional samples and aliased spatial measurements. In all cases the algorithm reduces to a simple, spatially invariant deconvolution that does not involve depth estimation.

149 citations
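
The reconstruction described in the abstract above is linear because the prior is Gaussian: the MAP estimate is a closed-form filter, which for a shift-invariant forward model becomes a Wiener-style deconvolution. The sketch below illustrates only that idea under simplifying assumptions (a 2D image, a flat prior variance, a toy box blur); the paper's prior instead concentrates variance on a 3D subset of the 4D light field spectrum, and all function and variable names here are illustrative, not taken from the authors' code.

# Minimal sketch, not the authors' implementation: with a zero-mean Gaussian
# prior and Gaussian noise, MAP reconstruction is a linear filter; for a
# shift-invariant forward model it reduces to Wiener-style deconvolution.
# The toy blur kernel, the flat prior variance, and all names are assumptions.
import numpy as np

def gaussian_map_deconvolve(measurement, kernel, prior_var, noise_var):
    """Linear MAP estimate: conj(K) * S / (|K|^2 * S + sigma_n^2) per frequency."""
    K = np.fft.fft2(kernel, s=measurement.shape)
    Y = np.fft.fft2(measurement)
    H = np.conj(K) * prior_var / (np.abs(K) ** 2 * prior_var + noise_var)
    return np.real(np.fft.ifft2(H * Y))

# Toy usage: deblur a box-blurred image. In the paper the same machinery
# renders the 4D light field from a 3D focal stack, with the prior variance
# supported mostly on a 3D subset of the 4D spectrum.
rng = np.random.default_rng(0)
sharp = rng.random((64, 64))
psf = np.ones((5, 5)) / 25.0
blurred = np.real(np.fft.ifft2(np.fft.fft2(sharp) * np.fft.fft2(psf, s=sharp.shape)))
restored = gaussian_map_deconvolve(blurred, psf, prior_var=1.0, noise_var=1e-3)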


Proceedings ArticleDOI
29 Mar 2010
TL;DR: In this paper, the problem of removing blur from objects moving at constant velocities in arbitrary 2D directions is addressed by capturing two images of the scene with parabolic camera motion along two orthogonal directions.
Abstract: Object movement during exposure generates blur. Removing blur is challenging because one has to estimate the motion blur, which can vary spatially over the image. Even if the motion is successfully identified, blur removal can be unstable because the blur kernel attenuates high-frequency image content. We address the problem of removing blur from objects moving at constant velocities in arbitrary 2D directions. Our solution captures two images of the scene, each with parabolic camera motion along one of two orthogonal directions. We show that our strategy near-optimally preserves image content and allows for stable blur inversion. Taking two images of a scene also helps us estimate spatially varying object motions. We present a prototype camera and demonstrate successful motion deblurring on real motions.

62 citations
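
Once the per-object blur kernels of the two orthogonal parabolic exposures are known, the two observations can be fused by a joint least-squares deconvolution, since the pair of kernels jointly preserves image frequencies that either exposure alone would attenuate. The sketch below shows only that fusion step under simplifying assumptions (known, shift-invariant kernels and a scalar regularizer); kernel estimation and the spatially varying handling described in the abstract are omitted, and all names are illustrative rather than taken from the authors' code.

# Minimal sketch, assuming the two blur kernels are already known and
# shift-invariant; this is a standard two-observation Wiener-style fusion,
# not the authors' full pipeline.
import numpy as np

def joint_deconvolve(y1, k1, y2, k2, reg=1e-3):
    """Fuse two blurred observations of the same scene with known kernels."""
    K1 = np.fft.fft2(k1, s=y1.shape)
    K2 = np.fft.fft2(k2, s=y2.shape)
    Y1, Y2 = np.fft.fft2(y1), np.fft.fft2(y2)
    num = np.conj(K1) * Y1 + np.conj(K2) * Y2
    den = np.abs(K1) ** 2 + np.abs(K2) ** 2 + reg  # reg guards near-zero frequencies
    return np.real(np.fft.ifft2(num / den))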


Book ChapterDOI
05 Sep 2010
TL;DR: This paper addresses the problem of depth discrimination from J images captured using J arbitrary codes placed within one fixed lens aperture, analyzes the desired properties of discriminative codes under a geometric optics model, and proposes an upper bound on the best possible discrimination.
Abstract: Computational depth estimation is a central task in computer vision and graphics. A large variety of strategies have been introduced in the past, relying on viewpoint variations, defocus changes and general aperture codes. However, the tradeoffs between such designs are not well understood. Depth estimation from computational camera measurements is a highly non-linear process, and therefore most attempts to evaluate depth estimation strategies rely on numerical simulations. Previous attempts to design computational cameras with good depth discrimination optimized highly non-linear and non-convex scores, and hence it is not clear whether the constructed designs are optimal. In this paper we address the problem of depth discrimination from J images captured using J arbitrary codes placed within one fixed lens aperture. We analyze the desired properties of discriminative codes under a geometric optics model and propose an upper bound on the best possible discrimination. We show that under a multiplicative noise model, the half-ring codes discovered by Zhou et al. [1] are near-optimal. When a large number of images is allowed, a multiaperture camera [2] dividing the aperture into multiple annular rings provides near-optimal discrimination. In contrast, the plenoptic camera of [5], which divides the aperture into compact support circles, can achieve at most 50% of the optimal discrimination bound.

39 citations
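
The analysis summarized above compares how distinguishable the defocused images produced by an aperture code are across depth hypotheses. The toy sketch below probes that idea numerically under strong simplifying assumptions: defocus is modeled by integer scaling of the aperture code (a crude geometric-optics stand-in), and discrimination is scored by a plain L2 distance between OTF magnitudes rather than the paper's bound. The codes and every name here are hypothetical examples, not the paper's designs.

# Toy numerical sketch, not the paper's analysis: model defocus at a depth
# hypothesis by scaling the aperture code, then score how distinguishable two
# depth hypotheses are by a simple spectral distance. The scale factors, the
# square "ring" code, and the L2 score are all illustrative assumptions.
import numpy as np

def defocus_psf(code, scale):
    """Crude geometric-optics model: the defocus PSF is a scaled aperture code."""
    psf = np.kron(code.astype(float), np.ones((scale, scale)))
    return psf / psf.sum()

def depth_discrimination_proxy(code, scale1, scale2, n=64):
    """L2 distance between OTF magnitudes of two depth (blur-scale) hypotheses."""
    K1 = np.abs(np.fft.fft2(defocus_psf(code, scale1), s=(n, n)))
    K2 = np.abs(np.fft.fft2(defocus_psf(code, scale2), s=(n, n)))
    return float(np.sum((K1 - K2) ** 2))

# Compare an open aperture with a square-annulus stand-in for a ring code.
open_aperture = np.ones((8, 8))
ring_code = np.ones((8, 8)); ring_code[2:6, 2:6] = 0
print(depth_discrimination_proxy(open_aperture, 2, 3),
      depth_discrimination_proxy(ring_code, 2, 3))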


Journal ArticleDOI
01 Jan 2010, Synlett
TL;DR: In this article, the stereodivergent carbometalation of substituted ynol ethers is reported; both isomers could be obtained at will depending on the nature of the OR group.
Abstract: The stereodivergent carbometalation of substituted ynol ethers is reported. Both isomers could be obtained at will, depending on the nature of the OR group.

7 citations