Proceedings ArticleDOI

Divide and Conquer for Full-Resolution Light Field Deblurring

01 Jun 2018 - pp. 6421-6429
TL;DR: A new blind motion deblurring strategy for LFs that significantly alleviates the limitations of prior work, is computationally efficient on a CPU, and can effectively deblur full-resolution LFs.


Abstract: The increasing popularity of computational light field (LF) cameras has made it necessary to tackle motion blur, a ubiquitous phenomenon in hand-held photography. The state-of-the-art method for blind deblurring of LFs of general 3D scenes is limited to handling only downsampled LFs, in both spatial and angular resolution, owing to the computational overhead of processing the data-hungry full-resolution 4D LF all at once. Moreover, the method warrants high-end GPUs for optimization and is ineffective for wide-angle settings and irregular camera motion. In this paper, we introduce a new blind motion deblurring strategy for LFs which alleviates these limitations significantly. Our model achieves this by isolating the 4D LF motion blur across the 2D subaperture images, thus paving the way for independent deblurring of these subaperture images. Furthermore, our model accommodates a common camera motion parameterization across the subaperture images. Consequently, blind deblurring of any single subaperture image elegantly paves the way for cost-effective non-blind deblurring of the other subaperture images. Our approach is computationally efficient on a CPU and can effectively deblur full-resolution LFs.
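The strategy described above reduces to a two-stage pipeline: blind-deblur a single subaperture image to recover the shared camera-motion description, then reuse that motion for cheap non-blind deblurring of every other view. A minimal sketch of that idea, assuming hypothetical blind_deblur and non_blind_deblur callables standing in for the paper's actual optimizers:

```python
import numpy as np

def deblur_light_field(lf, blind_deblur, non_blind_deblur):
    """Divide-and-conquer sketch for a light field `lf` of shape (U, V, H, W).

    blind_deblur(view)             -> (sharp_view, motion)   # hypothetical
    non_blind_deblur(view, motion) -> sharp_view             # hypothetical
    """
    U, V, _, _ = lf.shape
    u0, v0 = U // 2, V // 2                       # central subaperture view

    # Stage 1: blind deblurring of one view recovers the common camera motion.
    sharp_center, motion = blind_deblur(lf[u0, v0])

    # Stage 2: every other view is deblurred non-blindly with that motion,
    # so the views can be processed independently (and in parallel on a CPU).
    out = np.empty_like(lf)
    out[u0, v0] = sharp_center
    for u in range(U):
        for v in range(V):
            if (u, v) != (u0, v0):
                out[u, v] = non_blind_deblur(lf[u, v], motion)
    return out
```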


Topics: Deblurring (60%), Motion blur (56%)
Citations

Journal ArticleDOI
Sidong Wu, Gexiang Zhang, Ferrante Neri, Ming Zhu, et al.

10 citations




Journal ArticleDOI
TL;DR: This work generates a complex blurry light field dataset and proposes a learning-based deblurring approach that is about 16K times faster than Srinivasan et al.


Abstract: Restoring a sharp light field image from its blurry input has become essential due to the increasing popularity of parallax-based image processing. State-of-the-art blind light field deblurring methods suffer from several issues such as slow processing, reduced spatial size, and a limited motion blur model. In this work, we address these challenging problems by generating a complex blurry light field dataset and proposing a learning-based deblurring approach. In particular, we model the full 6-degree-of-freedom (6-DOF) light field camera motion, which is used to create the blurry dataset using a combination of real light fields captured with a Lytro Illum camera and synthetic light field renderings of 3D scenes. Furthermore, we propose a light field deblurring network that is built with the capability of large receptive fields. We also introduce a simple strategy of angular sampling to train effectively on the large-scale blurry light field. We evaluate our method through both quantitative and qualitative measurements and demonstrate superior performance compared to the state-of-the-art method with a massive speedup in execution time. Our method is about 16K times faster than Srinivasan et al. [22] and can deblur a full-resolution light field in less than 2 seconds.
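The "angular sampling" mentioned above can be pictured as drawing a random subset of subaperture views per training step so that a large 4D light field fits in memory; the sketch below only illustrates that assumption, not the paper's exact scheme:

```python
import numpy as np

def sample_angular_views(lf, num_views, rng=None):
    """Randomly pick `num_views` subaperture views from a 4D light field.

    lf: array of shape (U, V, H, W) or (U, V, H, W, C).
    Returns the stacked views and their (u, v) angular coordinates.
    """
    rng = np.random.default_rng() if rng is None else rng
    U, V = lf.shape[:2]
    flat_idx = rng.choice(U * V, size=num_views, replace=False)
    coords = [(int(i) // V, int(i) % V) for i in flat_idx]
    views = np.stack([lf[u, v] for u, v in coords])
    return views, coords
```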


6 citations


Cites background or methods from "Divide and Conquer for Full-Resolution Light Field Deblurring"

  • ...These problems were solved partially by recent LF deblurring works [14, 16] but are still inapplicable on any LF camera as the post-capture processing, due to their slow execution time (∼30 minutes)....

  • ...Although their model was able to produce a better result than the state-of-the-art [22], the MDF model did not include out-of-plane translation (z-axis translation)....

  • ...This problem is solved by Mahesh Mohan and Rajagopalan [16] who implemented 2-DOF in-plane translation and 1-DOF z-axis rotation model (3-DOF) following the model of motion density function (MDF) [6]....

  • ...Our method is designed to address the limitation of previous works that assume 3-DOF translational [22] and 3-DOF motion density function (MDF) [16]....

  • ...The blur model is designed within 6-DOF motion as opposed to the 3-DOF model from previous approaches [16, 22]....


Proceedings ArticleDOI
01 Oct 2019
TL;DR: A generalized blur model is proposed that elegantly explains the intrinsically coupled image formation model for dual-lens set-ups, which are by far the most predominant in smartphones; the work also reveals an intriguing challenge, stemming from an inherent ambiguity unique to this problem, that naturally disrupts scene-depth consistency.


Abstract: Recently, there has been a renewed interest in leveraging multiple cameras, but under unconstrained settings. They have been quite successfully deployed in smartphones, which have become the de facto choice for many photographic applications. However, akin to normal cameras, the functionality of multi-camera systems can be marred by motion blur, which is a ubiquitous phenomenon in hand-held cameras. Despite the far-reaching potential of unconstrained camera arrays, there is not a single deblurring method for such systems. In this paper, we propose a generalized blur model that elegantly explains the intrinsically coupled image formation model for dual-lens set-ups, which are by far the most predominant in smartphones. While image aesthetics is the main objective in normal camera deblurring, any method conceived for our problem is additionally tasked with ascertaining consistent scene-depth in the deblurred images. We reveal an intriguing challenge that stems from an inherent ambiguity unique to this problem which naturally disrupts this coherence. We address this issue by devising a judicious prior, and based on our model and prior we propose a practical blind deblurring method for dual-lens cameras that achieves state-of-the-art performance.


1 citation


Cites background or methods from "Divide and Conquer for Full-Resolution Light Field Deblurring"

  • ...Second, any method for DL-BMD must ensure scene-consistent disparities in the deblurred image pair (akin to angular coherence in light fields [23, 40]), which also incidentally opens up many potential applications [14, 29, 37, 24]....

  • ...Following [23, 26, 42, 52, 48], we consider a blurred image as the integration of rotation-induced projections of world over the exposure time, the rotations being caused by camera shake, but do not constrain the COR to be only at the optical center....

  • ...For the case of light field cameras, existing methods constrain all multi-view images to share identical camera settings and ego-motions [18, 5, 23, 40]....

  • ...Also, the imaging principle of light field is quite different due to the lens effect [5, 23]....

  • ...For computational cameras, we considered state-of-the-art stereo BMD [51] and light field BMD [23]....


Journal ArticleDOI
TL;DR: The experimental results show that the proposed blur model can maintain the parallax information (depth-dependent blur) in a light field image; a synthetic blurry light field dataset is also produced based on the 6-DOF model.


Abstract: Motion deblurring is essential for reconstructing sharp images from a blurry input caused by camera motion. The complexity of this problem increases in a light field due to its depth-dependent blur constraint. A method of generating synthetic 3-degree-of-freedom (3-DOF) translation blur on a light field image without camera rotation has been introduced previously. In this study, we develop a camera translation and rotation (6-DOF) motion blur model that preserves the consistency of the light field image. Our experimental results show that the proposed blur model can maintain the parallax information (depth-dependent blur) in a light field image. Furthermore, we produce a synthetic blurry light field dataset based on the 6-DOF model. Finally, to validate the usability of the synthetic dataset, we conduct extensive benchmarking using state-of-the-art motion deblurring algorithms.
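One way to picture blur synthesis under 6-DOF motion is to average warped copies of a sharp view along a sampled camera trajectory; for a fronto-parallel plane at a known depth, each pose induces a plane homography. The sketch below covers only that simplified single-plane case (the paper's model additionally preserves depth-dependent blur across all subaperture views); K, poses, and depth are assumed inputs:

```python
import numpy as np
import cv2

def synthesize_planar_motion_blur(sharp, K, poses, depth):
    """Average homography-warped copies of `sharp` along a camera path.

    sharp: HxW(x3) image of a fronto-parallel plane at distance `depth`,
    K: 3x3 intrinsics, poses: list of (R, t) samples over the exposure.
    """
    h, w = sharp.shape[:2]
    src = sharp.astype(np.float32)
    n = np.array([0.0, 0.0, 1.0])                  # plane normal facing the camera
    K_inv = np.linalg.inv(K)
    acc = np.zeros_like(src, dtype=np.float64)
    for R, t in poses:
        # Plane-induced homography H = K (R - t n^T / d) K^{-1}
        # (the sign of t depends on the pose convention used).
        H = K @ (R - np.outer(t, n) / depth) @ K_inv
        acc += cv2.warpPerspective(src, H, (w, h))
    return acc / len(poses)
```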


Additional excerpts

  • ...To the best of our knowledge, only few previous studies that work on light field deblurring [14, 18, 22] and surprisingly, no previous studies that work on generating blurry light field dataset....



References

Book
Richard Hartley, Andrew Zisserman
01 Jan 2000
Abstract: From the Publisher: A basic problem in computer vision is to understand the structure of a real world scene given several images of it. Recent major developments in the theory and practice of scene reconstruction are described in detail in a unified framework. The book covers the geometric principles and how to represent objects algebraically so they can be computed and applied. The authors provide comprehensive background material and explain how to apply the methods and implement the algorithms directly.


15,158 citations


01 Jan 2001
Multiple View Geometry in Computer Vision

13,284 citations


"Divide and Conquer for Full-Resolut..." refers background in this paper

  • ...Rotation-only approximation: To reduce the number of unknowns in MDF, CC-BMD methods typically approximate full 6D motion to 3D [17, 23]....

  • ...To demonstrate the ineffectiveness of CC methods on LFs, we also use state-of-the-art CC-BMD methods [12] and [17] to perform independent deblurring on individual subaperture images....

  • ...This approximation is widely used in many practical applications (including CC-BMD) [6, 17, 26]....

  • ...Our work bridges the gap between the well-studied CC-BMD and emerging LFC-BMD, and facilitates mapping of analogous techniques (such as MDF formulation, efficient filter flow framework, and scale-space strategy) developed for the former to the latter....

  • ...State-of-the-art CC-BMD methods [17, 26, 21] are based on the motion density function (MDF) [5] which allows both narrow- and wide-angle systems as well as nonparametric camera motion, have a homography-based filter flow framework for computational efficiency [8], and employ a scale-space approach to accommodate large blurs....


Journal ArticleDOI
William H. Richardson
TL;DR: An iterative method of restoring degraded images was developed by treating images, point spread functions, and degraded images as probability-frequency functions and by applying Bayes’s theorem.


Abstract: An iterative method of restoring degraded images was developed by treating images, point spread functions, and degraded images as probability-frequency functions and by applying Bayes’s theorem. The method functions effectively in the presence of noise and is adaptable to computer operation.
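The iterative restoration described here is the classical Richardson-Lucy update; a minimal NumPy/SciPy sketch, assuming a known, shift-invariant point spread function:

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(blurred, psf, num_iters=30, eps=1e-12):
    """Richardson-Lucy deconvolution via the standard multiplicative update."""
    blurred = blurred.astype(np.float64)
    psf = psf / psf.sum()                     # normalize the point spread function
    psf_mirror = psf[::-1, ::-1]
    estimate = np.full(blurred.shape, blurred.mean())
    for _ in range(num_iters):
        reblurred = fftconvolve(estimate, psf, mode="same")
        ratio = blurred / (reblurred + eps)   # Bayes-style correction factor
        estimate *= fftconvolve(ratio, psf_mirror, mode="same")
    return estimate
```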


3,495 citations


"Divide and Conquer for Full-Resolut..." refers background or methods in this paper

  • ...8) [6], and (f) Richardson-Lucy deconvolution [8]....

  • ...8) which is solved using iterative reweighted least squares process [6], and (d) RL deconvolution with smoothness prior which is solved using iterative process [8]....

  • ...In terms of visual quality, we empirically found out that RL [8] is the best, and the direct method comes second but with ringing artifacts (e....

  • ...(19)), and perform deconvolution using [8]....

  • ...8 norm on gradient) (f) RL deconvolution [8]...


Journal ArticleDOI
Hamid R. Sheikh, Alan C. Bovik
TL;DR: An image information measure is proposed that quantifies the information present in the reference image and how much of this reference information can be extracted from the distorted image; combining these two quantities yields a visual information fidelity measure for image QA.


Abstract: Measurement of visual quality is of fundamental importance to numerous image and video processing applications. The goal of quality assessment (QA) research is to design algorithms that can automatically assess the quality of images or videos in a perceptually consistent manner. Image QA algorithms generally interpret image quality as fidelity or similarity with a "reference" or "perfect" image in some perceptual space. Such "full-reference" QA methods attempt to achieve consistency in quality prediction by modeling salient physiological and psychovisual features of the human visual system (HVS), or by signal fidelity measures. In this paper, we approach the image QA problem as an information fidelity problem. Specifically, we propose to quantify the loss of image information to the distortion process and explore the relationship between image information and visual quality. QA systems are invariably involved with judging the visual quality of "natural" images and videos that are meant for "human consumption." Researchers have developed sophisticated models to capture the statistics of such natural signals. Using these models, we previously presented an information fidelity criterion for image QA that related image quality with the amount of information shared between a reference and a distorted image. In this paper, we propose an image information measure that quantifies the information that is present in the reference image and how much of this reference information can be extracted from the distorted image. Combining these two quantities, we propose a visual information fidelity measure for image QA. We validate the performance of our algorithm with an extensive subjective study involving 779 images and show that our method outperforms recent state-of-the-art image QA algorithms by a sizeable margin in our simulations. The code and the data from the subjective study are available at the LIVE website.
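A commonly used simplification of this measure is a single-scale, pixel-domain variant in which the channel gain and residual distortion are estimated from local statistics. The sketch below follows that simplification (with an assumed Gaussian window and noise variance), not the authors' wavelet-domain implementation:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def vif_pixel(ref, dist, sigma_n_sq=2.0, win_sigma=1.5):
    """Simplified single-scale, pixel-domain visual information fidelity."""
    ref = ref.astype(np.float64)
    dist = dist.astype(np.float64)

    mu_r = gaussian_filter(ref, win_sigma)
    mu_d = gaussian_filter(dist, win_sigma)
    var_r = np.maximum(gaussian_filter(ref * ref, win_sigma) - mu_r ** 2, 0)
    var_d = np.maximum(gaussian_filter(dist * dist, win_sigma) - mu_d ** 2, 0)
    cov = gaussian_filter(ref * dist, win_sigma) - mu_r * mu_d

    g = cov / (var_r + 1e-10)               # local gain of the distortion channel
    sv_sq = np.maximum(var_d - g * cov, 0)  # residual (additive) distortion

    # Ratio of information extracted from the distorted image to the
    # information content of the reference image.
    num = np.sum(np.log2(1.0 + g ** 2 * var_r / (sv_sq + sigma_n_sq)))
    den = np.sum(np.log2(1.0 + var_r / sigma_n_sq))
    return num / den
```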


2,743 citations


"Divide and Conquer for Full-Resolut..." refers background or methods in this paper

  • ...Using IFC/VIF, Figs....

  • ...Quantitative Evaluation: We introduce an LF-version of information fidelity criterion (IFC) [19] and visual information fidelity (VIF) [18], which are shown to be the best metrics for BMD evaluation in [13], by averaging these metrics over subaperture images....

  • ...(a) LF-version of IFC [19] (b) LF-version of VIF [18]...


Ren Ng, Marc Levoy, Mathieu Brédif, Gene Duval, et al.
01 Jan 2005
Abstract: This paper presents a camera that samples the 4D light field on its sensor in a single photographic exposure. This is achieved by inserting a microlens array between the sensor and main lens, creating a plenoptic camera. Each microlens measures not just the total amount of light deposited at that location, but how much light arrives along each ray. By re-sorting the measured rays of light to where they would have terminated in slightly different, synthetic cameras, we can compute sharp photographs focused at different depths. We show that a linear increase in the resolution of images under each microlens results in a linear increase in the sharpness of the refocused photographs. This property allows us to extend the depth of field of the camera without reducing the aperture, enabling shorter exposures and lower image noise. Especially in the macrophotography regime, we demonstrate that we can also compute synthetic photographs from a range of different viewpoints. These capabilities argue for a different strategy in designing photographic imaging systems. To the photographer, the plenoptic camera operates exactly like an ordinary hand-held camera. We have used our prototype to take hundreds of light field photographs, and we present examples of portraits, high-speed action and macro close-ups.
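The post-capture refocusing described here amounts to shift-and-add rendering: each subaperture view is shifted in proportion to its angular offset from the center and the views are averaged. A minimal sketch for a light field stored as a (U, V, H, W) array (the slope parameter alpha selects the refocus plane):

```python
import numpy as np
from scipy.ndimage import shift

def refocus(lf, alpha):
    """Synthetic refocusing of a 4D light field by shift-and-add."""
    U, V, H, W = lf.shape
    u0, v0 = (U - 1) / 2.0, (V - 1) / 2.0
    acc = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            # Shift each view in proportion to its angular offset; alpha = 0
            # reproduces the captured focal plane.
            acc += shift(lf[u, v].astype(float),
                         (alpha * (u - u0), alpha * (v - v0)), order=1)
    return acc / (U * V)
```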


2,119 citations


"Divide and Conquer for Full-Resolut..." refers background in this paper

  • ...Refocusing LF translates to a skew in EPI, and the features of EPIs for an image point will be vertical (or horizontal depending on projection) when it is at focus [16]....

  • ...LFCs achieve this by capturing multiple (subaperture) images instead of a single CC image by segregating the light reaching the CC-sensor into multiple angular components; and synthesize these images post-capture to form an image of desired CC setting [16, 1]....

  • ...Also, such a high-dimensional optimization can distort the interrelations among subaperture images due to convergence issues, which is an important factor for consistent post-capture rendering of LFs [16]....

  • ...This adversely affects the refocusing and f-stopping functionality of LFs [16]....

  • ...The increase in popularity of LFCs can be attributed to their attractive features over conventional cameras (CCs), including post-capture refocusing, f-stopping, depth sensing [22, 1, 16], etc....


Performance Metrics
No. of citations received by the paper in previous years:

Year    Citations
2019    4