Author

Rajagopalan Ambasamudram

Bio: Rajagopalan Ambasamudram is an academic researcher from the Indian Institute of Technology Madras. The author has contributed to research on topics including deblurring and motion blur. The author has an h-index of 2 and has co-authored 4 publications receiving 22 citations.

Papers
Proceedings ArticleDOI
01 Oct 2019
TL;DR: The proposed network is composed of an efficient densely connected encoder-decoder backbone structure with a pyramid pooling module that leverages the task-specific efficacy of joint intensity estimation and dynamic filter synthesis for the spatially-aware blurring process.
Abstract: Bokeh effect refers to the soft defocus blur of the background, which can be achieved with different aperture and shutter settings in a camera. In this work, we present a learning-based method for rendering such synthetic depth-of-field effect on input bokeh-free images acquired using ordinary monocular cameras. The proposed network is composed of an efficient densely connected encoder-decoder backbone structure with a pyramid pooling module. Our network leverages the task-specific efficacy of joint intensity estimation and dynamic filter synthesis for the spatially-aware blurring process. Since the rendering task requires distinguishing between large foreground and background regions and their relative depth, our network is further guided by pre-trained salient-region segmentation and depth-estimation modules. Experiments on diverse scenes show that our model elegantly introduces the desired effects in the input images, enhancing their aesthetic quality while maintaining a natural appearance. Along with extensive ablation analysis and visualizations to validate its components, the effectiveness of the proposed network is also demonstrated by achieving the second-highest score in the AIM 2019 Bokeh Effect challenge: fidelity track.
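The abstract's key mechanism, dynamic filter synthesis for spatially-aware blurring, amounts to predicting a different blur kernel at every pixel and applying it locally. The sketch below illustrates only that application step; the function name, the kernel-field shape, and the idea that a network supplies `kernels` are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def apply_dynamic_filters(image: np.ndarray, kernels: np.ndarray) -> np.ndarray:
    """Blur `image` (H, W) with a per-pixel kernel field (H, W, k, k).

    Illustrative only: in a setup like the one described, a network
    would predict `kernels` so that background pixels get wide,
    bokeh-like kernels and in-focus pixels get near-identity kernels.
    """
    h, w = image.shape
    k = kernels.shape[-1]
    pad = k // 2
    padded = np.pad(image, pad, mode="reflect")
    out = np.empty_like(image, dtype=np.float64)
    for y in range(h):
        for x in range(w):
            # Patch centered on (y, x) in the original image coordinates.
            patch = padded[y:y + k, x:x + k]
            out[y, x] = np.sum(patch * kernels[y, x])
    return out
```

Each kernel is assumed to be non-negative and normalized to sum to 1, so in-focus regions pass through unchanged when their kernel is a centered delta.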

24 citations

Proceedings ArticleDOI
01 Oct 2019
TL;DR: A generalized blur model is proposed that elegantly explains the intrinsically coupled image formation in dual-lens set-ups, which are by far the most predominant in smartphones, and the paper reveals an intriguing challenge stemming from an inherent ambiguity unique to this problem that naturally disrupts scene-depth consistency.
Abstract: Recently, there has been a renewed interest in leveraging multiple cameras, but under unconstrained settings. They have been quite successfully deployed in smartphones, which have become the de facto choice for many photographic applications. However, akin to normal cameras, the functionality of multi-camera systems can be marred by motion blur, which is a ubiquitous phenomenon in hand-held cameras. Despite the far-reaching potential of unconstrained camera arrays, there is not a single deblurring method for such systems. In this paper, we propose a generalized blur model that elegantly explains the intrinsically coupled image formation in dual-lens set-ups, which are by far the most predominant in smartphones. While image aesthetics is the main objective in normal camera deblurring, any method conceived for our problem is additionally tasked with ascertaining consistent scene depth in the deblurred images. We reveal an intriguing challenge that stems from an inherent ambiguity unique to this problem and naturally disrupts this coherence. We address the issue by devising a judicious prior, and based on our model and prior we propose a practical blind deblurring method for dual-lens cameras that achieves state-of-the-art performance.
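The abstract does not spell out the model's form. One plausible reading, consistent with the transformation-spread-function formulation in the HDR paper below, is that both views share a single camera-motion weight distribution while each view is warped by its own depth-dependent transformations; the notation here is an assumed sketch, not the paper's equation.

```latex
% Hedged sketch of a coupled dual-lens blur model (assumed form).
% w(p): fraction of exposure spent at camera pose p, shared by both lenses.
% T_p^l, T_p^r: depth-dependent warps induced by pose p in each view.
\begin{aligned}
B_l(\mathbf{x}) &= \sum_{p} w(p)\, L_l\!\bigl(T_p^{l}(\mathbf{x})\bigr), &
B_r(\mathbf{x}) &= \sum_{p} w(p)\, L_r\!\bigl(T_p^{r}(\mathbf{x})\bigr).
\end{aligned}
```

Under this reading, the shared weights w(p) are what couple the two blurred observations, and misattributing blur between camera pose and scene depth is one way the ambiguity mentioned in the abstract could arise.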

8 citations

Book ChapterDOI
07 Oct 2012
TL;DR: A technique to obtain the high dynamic range (HDR) irradiance of a scene from a set of differently exposed images captured with a hand-held camera, using a transformation spread function (TSF) that represents space-variant blurring as a weighted average of differently transformed versions of the latent image.
Abstract: Knowledge of scene irradiance is necessary in many computer vision algorithms. In this paper, we develop a technique to obtain the high dynamic range (HDR) irradiance of a scene from a set of differently exposed images captured using a hand-held camera. Any incidental motion induced by camera-shake can result in non-uniform motion blur. This is particularly true for frames captured with high exposure durations. We model the motion blur using a transformation spread function (TSF) that represents space-variant blurring as a weighted average of differently transformed versions of the latent image. We initially estimate the TSF of the blurred frames and then estimate the latent irradiance of the scene.
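The TSF idea is compact enough to sketch in code: the blurred frame is a weighted average of transformed copies of the latent image, with each weight giving the fraction of exposure the camera spent at that transformation. The affine-only pose set below is a simplification for illustration; the paper's transformations and weights come from the estimated camera motion.

```python
import numpy as np
from scipy.ndimage import affine_transform

def tsf_blur(latent: np.ndarray, poses: list, weights: np.ndarray) -> np.ndarray:
    """Space-variant blur as a TSF: a weighted average of differently
    transformed versions of the latent image (grayscale, shape (H, W)).

    `poses` is a list of 2x3 affine matrices, a toy stand-in for the
    camera-pose transformations in the paper; `weights` should sum to 1.
    """
    blurred = np.zeros_like(latent, dtype=np.float64)
    for T, w in zip(poses, weights):
        # affine_transform applies the output->input mapping, which is
        # sufficient for this illustrative weighted average.
        warped = affine_transform(latent, T[:, :2], offset=T[:, 2])
        blurred += w * warped
    return blurred
```

For pure in-plane camera shake, `poses` could be a handful of small translations with weights proportional to dwell time; estimating the TSF inverts exactly this forward model.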

6 citations

Proceedings ArticleDOI
06 Oct 2018
TL;DR: A semi-supervised training scheme that utilizes the strengths of both supervised and unsupervised learning to solve for the camera motion undergone by a space-variant blurred image, and shows the effectiveness of such a motion estimation network with applications in space-variant deblurring and change detection.
Abstract: We address the problem of camera motion estimation from a single blurred image with the aid of deep convolutional neural networks. Unlike learning-based prior works that estimate a space-invariant blur kernel, we solve for the global camera motion, which in turn represents the space-variant blur at each pixel. Leveraging the camera motion as well as the clean reference image during training, we resort to a semi-supervised training scheme that utilizes the strengths of both supervised and unsupervised learning to solve for the camera motion undergone by a space-variant blurred image. Finally, we show the effectiveness of such a motion estimation network with applications in space-variant deblurring and change detection.
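A common way to combine the two training signals the abstract names (ground-truth camera motion and the clean reference image) is a supervised motion loss plus a self-supervised reblurring-consistency loss. The sketch below is a generic pattern under that assumption; the weights and the `reblur` operator are placeholders, not the paper's exact objective.

```python
import numpy as np

def semi_supervised_loss(motion_pred, motion_gt, blurred, sharp, reblur,
                         w_sup=1.0, w_unsup=0.1):
    """Generic semi-supervised objective for motion-from-blur (a sketch,
    not the paper's formulation).

    reblur(sharp, motion) should synthesize the space-variant blur that
    `motion` induces on the clean reference image.
    """
    loss = 0.0
    if motion_gt is not None:
        # Supervised branch: regress the camera motion directly.
        loss += w_sup * np.mean((motion_pred - motion_gt) ** 2)
    # Unsupervised branch: the predicted motion, applied to the sharp
    # image, should reproduce the observed blurred image.
    loss += w_unsup * np.mean((reblur(sharp, motion_pred) - blurred) ** 2)
    return loss
```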

2 citations

Proceedings ArticleDOI
08 Dec 2022
TL;DR: Real-time detection and tracking of multiple point targets in diverse background conditions is addressed by applying a top-hat operator as the first stage of the detection algorithm, followed by a systematic thresholding scheme to yield a good balance between missed detections (MD) and false alarms (FA).
Abstract: Infrared (IR) point target detection and tracking is challenging due to lack of texture and detailed information of small dim targets. The problem becomes even more challenging when there is a real-time requirement too. A key goal in robust detection is to reduce missed detections (MD) and false alarms (FA). Complex traditional state-of-the-art point target detection algorithms may give accurate results but typically incur larger execution times which renders them unsuitable for real-time applications. On the other hand, deep learning methods for point target detection do not generalize satisfactorily with domain shifts. In this work, we address the problem of real-time detection and tracking of multiple point targets in diverse background conditions. We apply a 'top-hat' operator as the first stage of our detection algorithm and this is followed by a systematic thresholding scheme to yield a good balance between MD and FA. This is followed by a methodology to find the exact target position from the detections, and a track association scheme in conjunction with Kalman filter for state estimation. Based on the proposed approach for target detection and track association, we perform 2D tracking of image coordinates as well as 3D tracking of azimuth and elevation angles. We verify the effectiveness of our method on 12 different IR image sequences over existing state-of-the-art methods in terms of accuracy as well as speed.
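The first two stages described above (background suppression with a top-hat operator, then thresholding to balance MD against FA) are simple to illustrate. The structuring-element size and the k-sigma threshold rule below are illustrative stand-ins for the paper's tuned parameters and "systematic thresholding scheme".

```python
import numpy as np
from scipy.ndimage import white_tophat

def detect_point_targets(frame: np.ndarray, se_size: int = 5, k_sigma: float = 4.0):
    """Top-hat background suppression followed by a global statistical
    threshold (a sketch of the first two detector stages, with
    illustrative parameters).

    A white top-hat removes structures larger than the structuring
    element, so slowly varying clutter is suppressed while small dim
    targets survive in the residual.
    """
    residual = white_tophat(frame.astype(np.float64), size=se_size)
    # Raising k_sigma trades more missed detections for fewer false alarms.
    thresh = residual.mean() + k_sigma * residual.std()
    detections = np.argwhere(residual > thresh)  # (row, col) candidates
    return detections, residual
```

In a full pipeline, the candidate pixels would next be grouped and localized, then passed to track association and a Kalman filter, as the abstract outlines.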

Cited by
Proceedings ArticleDOI
14 Jun 2020
TL;DR: This paper presents a large-scale bokeh dataset consisting of 5K shallow/wide depth-of-field image pairs captured using the Canon 7D DSLR with 50mm f/1.8 lenses, and proposes to learn a realistic shallow focus technique directly from the photos produced by DSLR cameras.
Abstract: Bokeh is an important artistic effect used to highlight the main object of interest on the photo by blurring all out-of-focus areas. While DSLR and system camera lenses can render this effect naturally, mobile cameras are unable to produce shallow depth-of-field photos due to a very small aperture diameter of their optics. Unlike the current solutions simulating bokeh by applying Gaussian blur to image background, in this paper we propose to learn a realistic shallow focus technique directly from the photos produced by DSLR cameras. For this, we present a large-scale bokeh dataset consisting of 5K shallow/wide depth-of-field image pairs captured using the Canon 7D DSLR with 50mm f/1.8 lenses. We use these images to train a deep learning model to reproduce a natural bokeh effect based on a single narrow-aperture image. The experimental results show that the proposed approach is able to render a plausible non-uniform bokeh even in case of complex input data with multiple objects. The dataset, pre-trained models and codes used in this paper are available on the project website: https://people.ee.ethz.ch/~ihnatova/pynet-bokeh.html.
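The baseline the abstract argues against, a uniform Gaussian blur applied behind a foreground mask, is worth seeing because it makes clear what the learned model must improve on. A minimal sketch, assuming a foreground mask from some segmentation model; the function name and blur strength are placeholders.

```python
import cv2
import numpy as np

def naive_bokeh(image: np.ndarray, fg_mask: np.ndarray, sigma: float = 15.0) -> np.ndarray:
    """Simulate bokeh by Gaussian-blurring the background: the simple
    baseline the paper contrasts with its learned approach.

    `fg_mask` is a float mask in [0, 1] marking in-focus pixels;
    `image` is an (H, W, 3) array.
    """
    # ksize=(0, 0) lets OpenCV derive the kernel size from sigma.
    blurred = cv2.GaussianBlur(image, (0, 0), sigmaX=sigma)
    mask = fg_mask[..., None]
    return (mask * image + (1.0 - mask) * blurred).astype(image.dtype)
```

The shortcomings of this baseline (uniform blur regardless of depth, hard foreground/background transition) are exactly what a depth-aware learned renderer addresses.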

51 citations

Proceedings ArticleDOI
01 Oct 2019
TL;DR: This paper reviews the first AIM challenge on bokeh effect synthesis with a focus on the proposed solutions and results, defining the state-of-the-art for practical bokeh effect simulation.
Abstract: This paper reviews the first AIM challenge on bokeh effect synthesis with the focus on proposed solutions and results. The participating teams were solving a real-world image-to-image mapping problem, where the goal was to map standard narrow-aperture photos to the same photos captured with a shallow depth-of-field by the Canon 70D DSLR camera. In this task, the participants had to restore bokeh effect based on only one single frame without any additional data from other cameras or sensors. The target metric used in this challenge combined fidelity scores (PSNR and SSIM) with solutions' perceptual results measured in a user study. The proposed solutions significantly improved baseline results, defining the state-of-the-art for practical bokeh effect simulation.
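The fidelity half of the challenge metric relies on two standard full-reference measures. A minimal scoring sketch with scikit-image; how the challenge actually weighted PSNR, SSIM, and the user-study results is not reproduced here.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def fidelity_scores(result: np.ndarray, target: np.ndarray):
    """PSNR and SSIM against the DSLR ground truth: the two fidelity
    terms named in the challenge metric. Inputs are uint8 RGB arrays
    of identical shape.
    """
    psnr = peak_signal_noise_ratio(target, result, data_range=255)
    ssim = structural_similarity(target, result, channel_axis=-1, data_range=255)
    return psnr, ssim
```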

40 citations

Journal ArticleDOI
TL;DR: This work proposes a passive method to automatically detect image splicing using blur as a cue; it can expose the presence of splicing by evaluating inconsistencies in motion blur even under space-variant blurring situations.
Abstract: The extensive availability of sophisticated image editing tools has rendered it relatively easy to produce fake images. Image splicing is a form of tampering in which an original image is altered by copying a portion from a different source. Because the phenomenon of motion blur is a common occurrence in hand-held cameras, we propose a passive method to automatically detect image splicing using blur as a cue. Specifically, we address the scenario of a static scene in which the cause of blur is due to hand shake. Existing methods for dealing with this problem work only in the presence of uniform space-invariant blur. In contrast, our method can expose the presence of splicing by evaluating inconsistencies in motion blur even under space-variant blurring situations. We validate our method on several examples for different scene situations and camera motions of interest.
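The core intuition, that a spliced region's blur disagrees with the rest of the image, can be conveyed with a toy patch-wise outlier test. The variance-of-Laplacian statistic below is a crude stand-in for the paper's motion-blur estimation; it only illustrates the inconsistency-hunting idea, not the actual method.

```python
import cv2
import numpy as np

def blur_inconsistency_patches(gray: np.ndarray, patch: int = 64, z_thresh: float = 2.5):
    """Toy splicing cue: score each patch with a crude sharpness
    statistic and flag statistical outliers. The paper evaluates
    inconsistencies in estimated motion blur instead; this stand-in
    only conveys the patch-wise idea.
    """
    h, w = gray.shape
    scores, cells = [], []
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            block = gray[y:y + patch, x:x + patch]
            # Variance of the Laplacian: higher means sharper.
            scores.append(cv2.Laplacian(block, cv2.CV_64F).var())
            cells.append((y, x))
    scores = np.asarray(scores)
    z = (scores - scores.mean()) / (scores.std() + 1e-8)
    return [cells[i] for i in np.where(np.abs(z) > z_thresh)[0]]
```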

36 citations

Proceedings ArticleDOI
01 May 2021
TL;DR: The NTIRE 2021 depth guided image relighting challenge focused on one-to-one relighting, where the goal is to transform the illumination setup of an input image (color temperature and light source position) to a target illumination setup.
Abstract: Image relighting is attracting increasing interest due to its various applications. From a research perspective, image relighting can be exploited both for image normalization for domain adaptation and for data augmentation. It also has multiple direct uses for photo montage and aesthetic enhancement. In this paper, we review the NTIRE 2021 depth guided image relighting challenge. We rely on the VIDIT dataset for each of our two challenge tracks, including depth information. The first track is on one-to-one relighting, where the goal is to transform the illumination setup of an input image (color temperature and light source position) to the target illumination setup. In the second track, the any-to-any relighting challenge, the objective is to transform the illumination settings of the input image to match those of another guide image, similar to style transfer. In both tracks, participants were given depth information about the captured scenes. We had nearly 250 registered participants, leading to 18 confirmed team submissions in the final competition stage. The competitions, methods, and final results are presented in this paper.

33 citations

Book ChapterDOI
23 Aug 2020
TL;DR: The second AIM realistic bokeh effect rendering challenge tasked participants with learning a realistic shallow focus technique using the large-scale EBB! dataset, consisting of 5K shallow/wide depth-of-field image pairs captured using the Canon 7D DSLR camera.
Abstract: This paper reviews the second AIM realistic bokeh effect rendering challenge and provides a description of the proposed solutions and results. The participating teams were solving a real-world bokeh simulation problem, where the goal was to learn a realistic shallow focus technique using the large-scale EBB! bokeh dataset consisting of 5K shallow/wide depth-of-field image pairs captured using the Canon 7D DSLR camera. The participants had to render the bokeh effect based on only one single frame without any additional data from other cameras or sensors. The target metric used in this challenge combined the runtime and the perceptual quality of the solutions measured in a user study. To ensure the efficiency of the submitted models, we measured their runtime on standard desktop CPUs and also ran the models on smartphone GPUs. The proposed solutions significantly improved the baseline results, defining the state-of-the-art for the practical bokeh effect rendering problem.

33 citations