scispace - formally typeset
Author

Channarayapatna Shivaram Vijay

Bio: Channarayapatna Shivaram Vijay is an academic researcher from the Indian Institutes of Technology. The author has contributed to research in the topics of high-dynamic-range imaging and image restoration. The author has an h-index of 1, having co-authored 2 publications receiving 14 citations.

Papers
Journal ArticleDOI
TL;DR: A method is developed that takes non-uniformly blurred, differently exposed input images and extracts the deblurred latent irradiance image; the transformation spread functions (TSFs) of the blurred images are estimated from locally derived point spread functions by exploiting their linear relationship.
Abstract: Hand-held cameras inevitably result in blurred images caused by camera-shake, and even more so in high dynamic range imaging applications where multiple images are captured over a wide range of exposure settings. The degree of blurring depends on many factors such as exposure time, stability of the platform, and user experience. Camera shake involves not only translations but also rotations resulting in nonuniform blurring. In this paper, we develop a method that takes input non-uniformly blurred and differently exposed images to extract the deblurred, latent irradiance image. We use transformation spread function (TSF) to effectively model the blur caused by camera motion. We first estimate the TSFs of the blurred images from locally derived point spread functions by exploiting their linear relationship. The scene irradiance is then estimated by minimizing a suitably derived cost functional. Two important cases are investigated wherein 1) only the higher exposures are blurred and 2) all the captured frames are blurred.
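The TSF described above models a blurred image as a weighted average of the latent image under different camera poses. A minimal sketch of that forward model, restricted to integer translations for illustration (the paper's TSF spans a pose space that also includes rotations, which `np.roll` cannot express):

```python
import numpy as np

def apply_tsf(latent, tsf):
    """Forward blur model: a weighted sum of transformed copies of the
    latent image. `tsf` maps an integer translation (dy, dx) to its
    weight; weights should sum to 1 so image energy is preserved.
    Translations only -- a simplification of the paper's pose space."""
    blurred = np.zeros_like(latent, dtype=float)
    for (dy, dx), weight in tsf.items():
        blurred += weight * np.roll(latent, shift=(dy, dx), axis=(0, 1))
    return blurred

# Toy example: a single bright pixel smeared over three camera poses.
latent = np.zeros((8, 8))
latent[4, 4] = 1.0
tsf = {(0, 0): 0.5, (0, 1): 0.3, (1, 0): 0.2}
# Blur mass lands at (4,4), (4,5), (5,4) with weights 0.5, 0.3, 0.2.
blurred = apply_tsf(latent, tsf)
```

Because the blurred image is linear in the TSF weights, locally measured point spread functions constrain the TSF through a linear system, which is the relationship the paper exploits for estimation.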

19 citations

Book ChapterDOI
01 May 2014
TL;DR: This chapter introduces high dynamic range imaging: the total variation in the magnitude of irradiance incident at a camera is called the dynamic range (DR), defined as DR = (maximum signal value)/(minimum signal value), and scenes whose DR exceeds the sensor's limits motivate multi-exposure capture.
Abstract: Introduction Digital cameras convert incident light energy into electrical signals and present them as an image after altering the signals through different processes which include sensor correction, noise reduction, scaling, gamma correction, image enhancement, color space conversion, frame-rate change, compression, and storage/transmission (Nakamura 2005). Although today's camera sensors have high quantum efficiency and high signal-to-noise ratios, they inherently have an upper limit (full well capacity) for accumulation of light energy. Also, the sensor's least acquisition capacity depends on its pre-set sensitivity. The total variation in the magnitude of irradiance incident at a camera is called the dynamic range (DR) and is defined as DR = (maximum signal value)/(minimum signal value). Most digital cameras available in the market today are unable to account for the entire DR due to hardware limitations. Scenes with high dynamic range (HDR) either appear dark or become saturated. The solution for overcoming this limitation and estimating the original data is referred to as high dynamic range imaging (HDRI) (Debevec & Malik 1997, Mertens, Kautz & Van Reeth 2007, Nayar & Mitsunaga 2000). Over the years, several algorithmic approaches have been investigated for estimation of scene irradiance (see, for example, Debevec & Malik (1997), Mann & Picard (1995), Mitsunaga & Nayar (1999)). The basic idea in these approaches is to capture multiple images of a scene with different exposure settings and algorithmically extract HDR information from these observations. By varying the exposure settings, one can control the amount of energy received by the sensors to overcome sensor bounds/limits.
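The multi-exposure idea in the abstract can be sketched in a few lines. This hedged example assumes a linear camera response (real pipelines must first invert the response curve): each pixel's irradiance estimates (value / exposure time) are averaged with a triangle ("hat") weight that trusts mid-range pixels and down-weights near-saturated or near-dark ones, in the spirit of Debevec & Malik (1997):

```python
import numpy as np

def merge_exposures(images, exposure_times):
    """Estimate scene irradiance from differently exposed frames,
    assuming a linear response and pixel values normalized to [0, 1].
    Each frame votes value/t for the irradiance, weighted by a hat
    function that peaks at mid-gray and vanishes at 0 and 1."""
    num = np.zeros_like(images[0], dtype=float)
    den = np.zeros_like(images[0], dtype=float)
    for img, t in zip(images, exposure_times):
        w = 1.0 - np.abs(2.0 * img - 1.0)  # hat weight on [0, 1]
        num += w * img / t
        den += w
    return num / np.maximum(den, 1e-8)

# Two noise-free frames of a scene with true irradiance 2.0:
# exposures 0.1 s and 0.3 s record 0.2 and 0.6 respectively.
estimate = merge_exposures(
    [np.full((2, 2), 0.2), np.full((2, 2), 0.6)], [0.1, 0.3]
)
```

Varying the exposure time shifts which irradiance levels land in the well-measured mid-range, which is exactly how multiple captures together cover a dynamic range no single exposure can.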

Cited by
Journal Article
TL;DR: This paper comprehensively reviews the recent development of image deblurring, including nonblind/blind and spatially invariant/variant deblurring techniques, and provides a holistic understanding of and deep insight into image deblurring.
Abstract: This paper comprehensively reviews the recent development of image deblurring, including nonblind/blind, spatially invariant/variant deblurring techniques. Indeed, these techniques share the same objective of inferring a latent sharp image from one or several corresponding blurry images, while the blind deblurring techniques are also required to derive an accurate blur kernel. Considering the critical role of image restoration in modern imaging systems to provide high-quality images under complex environments such as motion, undesirable lighting conditions, and imperfect system components, image deblurring has attracted growing attention in recent years. From the viewpoint of how to handle the ill-posedness which is a crucial issue in deblurring tasks, existing methods can be grouped into five categories: Bayesian inference framework, variational methods, sparse representation-based methods, homography-based modeling, and region-based methods. In spite of achieving a certain level of development, image deblurring, especially the blind case, is limited in its success by complex application conditions which make the blur kernel hard to obtain and spatially variant. We provide a holistic understanding and deep insight into image deblurring in this review. An analysis of the empirical evidence for representative methods, practical issues, as well as a discussion of promising future directions are also presented.

92 citations

Journal ArticleDOI
TL;DR: This paper provides a review of the recent literature on Bayesian Blind Image Deconvolution methods and focuses on VB inference and the use of SG and SMG models with coverage of recent advances in sampling methods.

61 citations

Proceedings ArticleDOI
05 Oct 2018
TL;DR: This paper proposes a unified approach to perform high dynamic range super-resolution (HDR-SR) imaging from a sequence of low dynamic range and low-resolution motion-blurred images, designed to handle blurring effects caused by the camera motion.
Abstract: Images captured from consumer cameras are often prone to camera shake resulting in motion blur. Effect of motion blur is more common in high dynamic range imaging applications where multiple images are captured over a wide range of exposure settings. In this paper, we propose a unified approach to perform high dynamic range super-resolution (HDR-SR) imaging from a sequence of low dynamic range and low-resolution motion-blurred images. While existing works on HDR-SR assume the availability of blur-free input images, we propose an approach which is designed to handle blurring effects caused by the camera motion. Our approach attempts to harness the complementarity present in terms of the sensor exposure and blur to yield a high-quality image which has both higher spatial resolution as well as dynamic range. Experiments on synthetic and real examples demonstrate that the proposed method delivers state-of-the-art results.

8 citations

Journal ArticleDOI
TL;DR: The authors propose a novel tone mapping algorithm based on fast image decomposition and multi-layer fusion that could bring better brightness, contrast, and visibility with less halo effect than other state-of-the-art methods both qualitatively and quantitatively in most cases.
Abstract: To solve the problem of low efficiency and colour distortion in several typical tone mapping operators for high dynamic range (HDR) images, the authors propose a novel tone mapping algorithm based on fast image decomposition and multi-layer fusion. An input HDR image is first decomposed into a base layer and a detail layer by an improved guided filtering method. The base layer's dynamic range is compressed by a simulated camera response function, while the detail layer is enhanced by applying the guided image filter to bring out fine structures and reduce halo artifacts. A colour balance correction method is adopted to suppress colour distortion. Experiments on HDR images demonstrate that the proposed technique delivers better brightness, contrast, and visibility with fewer halo artifacts than other state-of-the-art methods, both qualitatively and quantitatively, in most cases.
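The base/detail strategy in this abstract can be illustrated with a toy sketch. Here a 3x3 box blur stands in for the paper's improved guided filter (an assumption, chosen only to keep the example self-contained): the log-luminance is split into a smoothed base layer, which is compressed, and a residual detail layer, which is kept intact before recombining:

```python
import numpy as np

def box3(x):
    """3x3 box blur -- a crude, edge-padded stand-in for the guided
    filter used by the paper to separate base and detail layers."""
    p = np.pad(x, 1, mode='edge')
    h, w = x.shape
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0

def tone_map(hdr, compression=0.5):
    """Decomposition-based tone mapping sketch: compress only the base
    layer of the log-luminance (compression < 1 shrinks dynamic range)
    while leaving the detail layer untouched."""
    log_l = np.log1p(hdr)
    base = box3(log_l)
    detail = log_l - base
    return np.expm1(compression * base + detail)

# A flat patch has zero detail, so only base compression applies:
mapped = tone_map(np.full((4, 4), 7.0), compression=0.5)
```

Compressing only the base layer is what preserves local contrast: large-scale brightness differences are squeezed while fine structure, carried by the detail layer, passes through unchanged.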

8 citations

Journal ArticleDOI
TL;DR: The algorithm not only effectively removes both space-varying uneven illumination and motion blur in aerial images, but also recovers abundant details of aerial scenes with top-level objective and subjective quality, outperforming other state-of-the-art restoration methods.
Abstract: Aerial images are often degraded by space-varying motion blur and simultaneous uneven illumination. To recover a high-quality aerial image from its non-uniform version, we propose a novel patch-wise restoration approach based on a key observation that the degree of blurring is inevitably affected by the illumination conditions. A non-local Retinex model is developed to accurately estimate the reflectance component from the degraded aerial image, after which the uneven illumination is corrected. The non-uniform coupled blurring in the enhanced reflectance image is then alleviated and transformed towards a uniform distribution, which facilitates the subsequent deblurring. For the multi-scale sparsified regularizer, the discrete shearlet transform is improved to better represent anisotropic image features in terms of directional sensitivity and selectivity. In addition, a new adaptive variant of total generalized variation is proposed as a structure-preserving regularizer. These complementary regularizers are integrated into a single objective function, and the final deblurred image with uniform illumination is extracted by applying a fast alternating direction scheme to solve it. The experimental results demonstrate that our algorithm not only removes both the space-varying illumination and motion blur in aerial images effectively, but also recovers abundant details of aerial scenes with top-level objective and subjective quality, outperforming other state-of-the-art restoration methods.
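The Retinex step that opens this pipeline rests on a simple decomposition: an observed image is the product of illumination and reflectance. A hedged single-scale sketch (a stand-in for the paper's non-local Retinex model) approximates illumination by heavy smoothing and recovers reflectance as the pixel-wise ratio:

```python
import numpy as np

def retinex_reflectance(img, eps=1e-6):
    """Single-scale Retinex sketch: approximate the (slowly varying)
    illumination with a 5x5 box blur, then divide it out to leave the
    reflectance. The paper's non-local model is far more sophisticated;
    this only illustrates the decomposition img = illumination * reflectance."""
    p = np.pad(img, 2, mode='edge')
    h, w = img.shape
    illum = sum(p[i:i + h, j:j + w] for i in range(5) for j in range(5)) / 25.0
    return img / (illum + eps)

# A uniformly lit patch should yield reflectance ~= 1 everywhere.
reflectance = retinex_reflectance(np.full((6, 6), 0.4))
```

Correcting illumination first matters for the rest of the pipeline: once shading is divided out, the residual blur varies less across the frame, which is what lets the method push the coupled blurring toward a uniform distribution before deblurring.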

6 citations