Proceedings ArticleDOI

Divide and Conquer for Full-Resolution Light Field Deblurring

01 Jun 2018, pp. 6421-6429
TL;DR: A new blind motion deblurring strategy for LFs that significantly alleviates the limitations of prior methods, is computationally efficient on CPUs, and can effectively deblur full-resolution LFs.
Abstract: The increasing popularity of computational light field (LF) cameras has made it necessary to tackle motion blur, a ubiquitous phenomenon in hand-held photography. The state-of-the-art method for blind deblurring of LFs of general 3D scenes is limited to handling only downsampled LFs, in both spatial and angular resolution, due to the computational overhead of processing the data-hungry full-resolution 4D LF all at once. Moreover, the method requires high-end GPUs for optimization and is ineffective for wide-angle settings and irregular camera motion. In this paper, we introduce a new blind motion deblurring strategy for LFs that alleviates these limitations significantly. Our model achieves this by isolating the 4D LF motion blur across the 2D subaperture images, thus paving the way for independent deblurring of these images. Furthermore, our model accommodates a common camera-motion parameterization across the subaperture images. Consequently, blind deblurring of any single subaperture image enables cost-effective non-blind deblurring of the others. Our approach is computationally efficient on CPUs and can effectively deblur full-resolution LFs.
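The divide-and-conquer strategy described above — blindly estimating the camera motion from one subaperture image, then reusing it for cheap non-blind deblurring of the remaining views — can be sketched schematically. This is an illustration only, not the paper's method: it assumes a single shared 2D blur kernel (ignoring depth dependence and the MDF parameterization), and `estimate_kernel` stands in for any blind kernel estimator.

```python
import numpy as np

def wiener_deblur(blurred, psf, k=1e-3):
    """Non-blind Wiener deconvolution, assuming circular blur.

    `k` regularizes frequencies where the blur kernel has low energy.
    """
    H = np.fft.fft2(psf, s=blurred.shape)
    G = np.fft.fft2(blurred)
    F = np.conj(H) / (np.abs(H) ** 2 + k) * G
    return np.real(np.fft.ifft2(F))

def deblur_light_field(views, estimate_kernel, k=1e-3):
    """Blindly estimate the blur once (from the central view), then
    reuse it for cheap non-blind deblurring of every subaperture view."""
    psf = estimate_kernel(views[len(views) // 2])  # expensive blind step, one view
    return [wiener_deblur(v, psf, k) for v in views]  # cheap non-blind step
```

Because the blind step runs on a single 2D image rather than the full 4D LF, the per-view cost of the remaining work is just a few FFTs, which is what makes the approach CPU-friendly.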


Citations
Journal ArticleDOI
TL;DR: This work generates a complex blurry light field dataset and proposes a learning-based deblurring approach that is about 16K times faster than Srinivasan et al. [22].
Abstract: Restoring a sharp light field image from its blurry input has become essential due to the increasing popularity of parallax-based image processing. State-of-the-art blind light field deblurring methods suffer from several issues such as slow processing, reduced spatial size, and a limited motion blur model. In this work, we address these challenging problems by generating a complex blurry light field dataset and proposing a learning-based deblurring approach. In particular, we model the full 6-degree-of-freedom (6-DOF) light field camera motion, which is used to create the blurry dataset using a combination of real light fields captured with a Lytro Illum camera and synthetic light field renderings of 3D scenes. Furthermore, we propose a light field deblurring network built with the capability of large receptive fields. We also introduce a simple angular sampling strategy to train effectively on the large-scale blurry light field. We evaluate our method through both quantitative and qualitative measurements and demonstrate superior performance compared to the state-of-the-art method with a massive speedup in execution time. Our method is about 16K times faster than Srinivasan et al. [22] and can deblur a full-resolution light field in less than 2 seconds.

12 citations


Cites background or methods from "Divide and Conquer for Full-Resolution Light Field Deblurring"

  • ...These problems were solved partially by recent LF deblurring works [14, 16] but are still inapplicable on any LF camera as the post-capture processing, due to their slow execution time (∼30 minutes)....


  • ...Although their model was able to produce a better result than the state-of-the-art [22], the MDF model did not include out-of-plane translation (z-axis translation)....


  • ...This problem is solved by Mahesh Mohan and Rajagopalan [16] who implemented 2-DOF in-plane translation and 1-DOF z-axis rotation model (3-DOF) following the model of motion density function (MDF) [6]....


  • ...Our method is designed to address the limitation of previous works that assume 3-DOF translational [22] and 3-DOF motion density function (MDF) [16]....


  • ...The blur model is designed within 6-DOF motion as opposed to the 3-DOF model from previous approaches [16, 22]....


Proceedings ArticleDOI
01 Oct 2019
TL;DR: A generalized blur model is proposed that elegantly explains the intrinsically coupled image formation model for dual-lens set-ups, which are by far the most predominant in smartphones, and reveals an intriguing challenge stemming from an inherent ambiguity unique to this problem that naturally disrupts scene-depth coherence.
Abstract: Recently, there has been a renewed interest in leveraging multiple cameras, but under unconstrained settings. They have been quite successfully deployed in smartphones, which have become the de facto choice for many photographic applications. However, akin to normal cameras, the functionality of multi-camera systems can be marred by motion blur, a ubiquitous phenomenon in hand-held cameras. Despite the far-reaching potential of unconstrained camera arrays, there is not a single deblurring method for such systems. In this paper, we propose a generalized blur model that elegantly explains the intrinsically coupled image formation model for dual-lens set-ups, which are by far the most predominant in smartphones. While image aesthetics is the main objective in normal camera deblurring, any method conceived for our problem is additionally tasked with ascertaining consistent scene-depth in the deblurred images. We reveal an intriguing challenge that stems from an inherent ambiguity unique to this problem which naturally disrupts this coherence. We address this issue by devising a judicious prior, and based on our model and prior we propose a practical blind deblurring method for dual-lens cameras that achieves state-of-the-art performance.

8 citations


Cites background or methods from "Divide and Conquer for Full-Resolution Light Field Deblurring"

  • ...Second, any method for DL-BMD must ensure scene-consistent disparities in the deblurred imagepair (akin to angular coherence in light fields [23, 40]), which also incidentally opens up many potential applications [14, 29, 37, 24]....


  • ...Following [23, 26, 42, 52, 48], we consider a blurred image as the integration of rotation-induced projections of world over the exposure time, the rotations being caused by camera shake, but do not constrain the COR to be only at the optical center....


  • ...For the case of light field cameras, existing methods constrain all multi-view images to share identical camera settings and ego-motions [18, 5, 23, 40]....


  • ...Also, the imaging principle of light field is quite different due to the lens effect [5, 23]....


  • ...For computational cameras, we considered state-of-the-art stereo BMD [51] and light field BMD [23]....


Journal ArticleDOI
TL;DR: Experimental results show that the proposed blur model can maintain the parallax information (depth-dependent blur) in a light field image, and a synthetic blurry light field dataset is produced based on the 6-DOF model.
Abstract: Motion deblurring is essential for reconstructing sharp images from a blurry input caused by camera motion. The complexity of this problem increases in a light field due to its depth-dependent blur constraint. A method of generating synthetic 3-degree-of-freedom (3-DOF) translation blur on a light field image without camera rotation has been introduced. In this study, we generate a camera translation and rotation (6-DOF) motion blur model that preserves the consistency of the light field image. Our experimental results show that the proposed blur model can maintain the parallax information (depth-dependent blur) in a light field image. Furthermore, we produce a synthetic blurry light field dataset based on the 6-DOF model. Finally, to validate the usability of the synthetic dataset, we conduct extensive benchmarking using state-of-the-art motion deblurring algorithms.
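The core idea of synthesizing motion blur — integrating a sharp image over a discretized camera trajectory — can be illustrated with a minimal 2D, translation-only sketch. This is a simplification of the abstract's model, which additionally handles rotation and keeps the full 4D light field consistent across views:

```python
import numpy as np

def synth_motion_blur(img, trajectory):
    """Average copies of `img` shifted along a discretized camera
    trajectory -- a crude stand-in for integrating over exposure time."""
    acc = np.zeros_like(img, dtype=float)
    for dx, dy in trajectory:
        # np.roll wraps at the border; a real renderer would instead
        # warp each subaperture view consistently with scene depth
        acc += np.roll(np.roll(img, dy, axis=0), dx, axis=1)
    return acc / len(trajectory)
```

A depth-aware version would scale each (dx, dy) by inverse depth per pixel, which is exactly what makes light field blur depth-dependent.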

3 citations


Additional excerpts

  • ...To the best of our knowledge, only few previous studies that work on light field deblurring [14, 18, 22] and surprisingly, no previous studies that work on generating blurry light field dataset....


Journal ArticleDOI
TL;DR: Compared with existing state-of-the-art single and LF image SR methods, the proposed LF-DAnet method achieves superior SR performance under a wide range of degradations and generalizes better to real LF images.
Abstract: Recent years have witnessed the great advances of deep neural networks (DNNs) in light field (LF) image super-resolution (SR). However, existing DNN-based LF image SR methods are developed on a single fixed degradation (e.g., bicubic downsampling), and thus cannot be applied to super-resolve real LF images with diverse degradations. In this paper, we propose the first method to handle LF image SR with multiple degradations. In our method, a practical LF degradation model that considers blur and noise is developed to approximate the degradation process of real LF images. Then, a degradation-adaptive network (LF-DAnet) is designed to incorporate the degradation prior into the SR process. By training on LF images with multiple synthetic degradations, our method can learn to adapt to different degradations while incorporating the spatial and angular information. Extensive experiments on both synthetically degraded and real-world LFs demonstrate the effectiveness of our method. Compared with existing state-of-the-art single and LF image SR methods, our method achieves superior SR performance under a wide range of degradations, and generalizes better to real LF images. Codes and models are available at https://github.com/YingqianWang/LF-DAnet.
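The blur-plus-noise degradation model mentioned in the abstract follows the common super-resolution formulation y = (x * k)↓s + n. A minimal per-view sketch of that assumed form (not the LF-DAnet code itself):

```python
import numpy as np
from scipy.signal import fftconvolve

def degrade(img, kernel, scale=2, noise_sigma=0.01, rng=None):
    """Blur, downsample, and add Gaussian noise: y = (x * k) downsampled + n."""
    rng = np.random.default_rng() if rng is None else rng
    blurred = fftconvolve(img, kernel, mode="same")  # x * k
    low_res = blurred[::scale, ::scale]              # simple point downsampling
    return low_res + rng.normal(0.0, noise_sigma, low_res.shape)  # + n
```

Training on many (kernel, noise_sigma) pairs drawn at random is what lets a degradation-adaptive network handle diverse real-world degradations rather than one fixed one.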

2 citations

References
Proceedings ArticleDOI
01 Jul 2017
TL;DR: Hardware experimental results demonstrate that, for the first time to the authors' knowledge, a 5D hyperspectral light field containing 9x9 angular views and 27 spectral bands can be acquired in a single shot.
Abstract: This paper presents the first snapshot hyperspectral light field imager in practice. Specifically, we design a novel hybrid camera system to obtain two complementary measurements that sample the angular and spectral dimensions respectively. To recover the full 5D hyperspectral light field from the severely undersampled measurements, we then propose an efficient computational reconstruction algorithm by exploiting the large correlations across the angular and spectral dimensions through self-learned dictionaries. Simulation on an elaborate hyperspectral light field dataset validates the effectiveness of the proposed approach. Hardware experimental results demonstrate that, for the first time to our knowledge, a 5D hyperspectral light field containing 9x9 angular views and 27 spectral bands can be acquired in a single shot.

32 citations

Proceedings ArticleDOI
18 Apr 2017
TL;DR: In this paper, the authors studied deblurring light fields of general 3D scenes captured under 3D camera motion, developed intuition into the effects of camera motion on the light field, and showed the advantages of capturing a 4D light field instead of a conventional 2D image for motion deblurring.
Abstract: We study the problem of deblurring light fields of general 3D scenes captured under 3D camera motion and present both theoretical and practical contributions. By analyzing the motion-blurred light field in the primal and Fourier domains, we develop intuition into the effects of camera motion on the light field, show the advantages of capturing a 4D light field instead of a conventional 2D image for motion deblurring, and derive simple analytical methods of motion deblurring in certain cases. We then present an algorithm to blindly deblur light fields of general scenes without any estimation of scene geometry, and demonstrate that we can recover both the sharp light field and the 3D camera motion path of real and synthetically-blurred light fields.

30 citations

Proceedings ArticleDOI
01 Jul 2017
TL;DR: This work presents a variational energy minimization framework for robust recovery of shape in multiview stereo with complex, unknown BRDFs and albedos that consistently achieves errors lower than Lambertian baselines and is more robust than prior BRDF-invariant reconstruction methods.
Abstract: Highly effective optimization frameworks have been developed for traditional multiview stereo relying on Lambertian photoconsistency. However, they do not account for complex material properties. On the other hand, recent works have explored PDE invariants for shape recovery with complex BRDFs, but they have not been incorporated into robust numerical optimization frameworks. We present a variational energy minimization framework for robust recovery of shape in multiview stereo with complex, unknown BRDFs. While our formulation is general, we demonstrate its efficacy on shape recovery using a single light field image, where the microlens array may be considered as a realization of a purely translational multiview stereo setup. Our formulation automatically balances contributions from texture gradients, traditional Lambertian photoconsistency, an appropriate BRDF-invariant PDE and a smoothness prior. Unlike prior works, our energy function inherently handles spatially-varying BRDFs and albedos. Extensive experiments with synthetic and real data show that our optimization framework consistently achieves errors lower than Lambertian baselines and further, is more robust than prior BRDF-invariant reconstruction methods.

13 citations

Proceedings ArticleDOI
01 Oct 2017
TL;DR: This work proposes a model for RS blind motion deblurring that mitigates many constraints including heavy computational cost, need for precise sensor information, and inability to deal with wide-angle systems and irregular camera trajectory.
Abstract: Most present-day imaging devices are equipped with CMOS sensors. Motion blur is a common artifact in handheld cameras. Because CMOS sensors mostly employ a rolling shutter (RS), the motion deblurring problem takes on a new dimension. Although few works have recently addressed this problem, they suffer from many constraints including heavy computational cost, need for precise sensor information, and inability to deal with wide-angle systems (which most cell-phone and drone cameras are) and irregular camera trajectory. In this work, we propose a model for RS blind motion deblurring that mitigates these issues significantly. Comprehensive comparisons with state-of-the-art methods reveal that our approach not only exhibits significant computational gains and unconstrained functionality but also leads to improved deblurring performance.

13 citations


"Divide and Conquer for Full-Resolution Light Field Deblurring" refers background in this paper

  • ...In-plane rotation, common to both the approximations, is necessary to capture wide-angle settings [15, 21]....


  • ...7(d-e)), mainly due to shift-ambiguity of latent image-MDF pair [15], and relative estimation-error of different MDFs....


Posted Content
TL;DR: In this article, the authors generalize Richardson-Lucy deblurring to 4D light fields by replacing the convolution steps with light field rendering of motion blur, which deals correctly with blur caused by 6-degree-of-freedom camera motion in complex 3D scenes.
Abstract: We generalize Richardson-Lucy (RL) deblurring to 4-D light fields by replacing the convolution steps with light field rendering of motion blur. The method deals correctly with blur caused by 6-degree-of-freedom camera motion in complex 3-D scenes, without performing depth estimation. We introduce a novel regularization term that maintains parallax information in the light field while reducing noise and ringing. We demonstrate the method operating effectively on rendered scenes and scenes captured using an off-the-shelf light field camera. An industrial robot arm provides repeatable and known trajectories, allowing us to establish quantitative performance in complex 3-D scenes. Qualitative and quantitative results confirm the effectiveness of the method, including commonly occurring cases for which previously published methods fail. We include mathematical proof that the algorithm converges to the maximum-likelihood estimate of the unblurred scene under Poisson noise. We expect extension to blind methods to be possible following the generalization of 2-D Richardson-Lucy to blind deconvolution.
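The 2-D Richardson-Lucy iteration that this work generalizes multiplies the current estimate by the back-projected ratio of observed to predicted blur. A standard 2-D sketch is shown below; the paper's contribution is to replace the two convolutions with 4-D light field rendering of motion blur, which this sketch does not attempt:

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(observed, psf, n_iter=30, eps=1e-12):
    """Classic 2-D Richardson-Lucy deconvolution, which converges to the
    maximum-likelihood estimate under Poisson noise."""
    estimate = np.full_like(observed, observed.mean(), dtype=float)
    psf_flip = psf[::-1, ::-1]  # adjoint (flipped) kernel for back-projection
    for _ in range(n_iter):
        predicted = fftconvolve(estimate, psf, mode="same")
        ratio = observed / (predicted + eps)  # eps guards against divide-by-zero
        estimate *= fftconvolve(ratio, psf_flip, mode="same")
    return estimate
```

The multiplicative update keeps the estimate nonnegative, which is what makes RL well suited to photon-limited (Poisson) imaging.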

10 citations