Author

Vivek Srikakulapu

Bio: Vivek Srikakulapu is an academic researcher. The author has an h-index of 1, has co-authored 1 publication, and has received 12 citations.

Papers
Proceedings ArticleDOI
01 Dec 2015
TL;DR: A model that combines two monocular depth cues, namely texture and defocus, is presented; it focuses on correcting erroneous regions in the defocus map using the texture energy present in those regions.
Abstract: As imaging is a 2D projection of a 3D scene, depth information is lost when an image is captured with a conventional camera. This depth information can be inferred back from visual cues present in the image. In this work, we present a model that combines two monocular depth cues, namely texture and defocus. Depth is related to the spatial extent of the defocus blur under the assumption that the more an object is blurred, the farther it is from the camera. At first, we estimate the amount of defocus blur present at the edge pixels of an image; this is referred to as the sparse defocus map. Using the sparse defocus map, we generate the full defocus map. However, such defocus maps always contain hole regions and depth ambiguity. To handle this problem, an additional depth cue, in our case texture, is integrated to generate a better defocus map. The integration mainly focuses on correcting the erroneous regions in the defocus map using the texture energy present in those regions. The sparse defocus map is corrected using texture-based rules. Hole regions, where there are no significant edges or texture, are detected and corrected in the sparse defocus map. We use region-wise propagation, which increases the accuracy of the full defocus map.
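The edge-based blur estimation described above is commonly implemented with a re-blur gradient-ratio scheme: an edge is blurred a second time with a known Gaussian, and the ratio of gradient magnitudes before and after re-blurring reveals the original blur. The sketch below is an illustrative 1D version of that idea, not the authors' exact implementation; the function names and the fixed re-blur sigma are assumptions.

```python
import numpy as np

def gaussian_blur_1d(signal, sigma):
    # Build a normalized Gaussian kernel and convolve with the signal.
    radius = int(4 * sigma) + 1
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-x**2 / (2.0 * sigma**2))
    kernel /= kernel.sum()
    return np.convolve(signal, kernel, mode="same")

def estimate_edge_blur(signal, edge_idx, sigma0=1.0):
    """Estimate the defocus blur sigma at an edge pixel from the ratio
    of gradient magnitudes before and after re-blurring with sigma0.

    For a step edge blurred with sigma, the ratio equals
    sqrt((sigma^2 + sigma0^2) / sigma^2), so sigma can be recovered.
    """
    reblurred = gaussian_blur_1d(signal, sigma0)
    g1 = np.abs(np.gradient(signal))[edge_idx]
    g2 = np.abs(np.gradient(reblurred))[edge_idx]
    ratio = g1 / g2  # > 1 for any finite blur
    return sigma0 / np.sqrt(ratio**2 - 1.0)

# Synthetic step edge blurred with a known sigma.
true_sigma = 2.0
step = np.zeros(201)
step[100:] = 1.0
blurred = gaussian_blur_1d(step, true_sigma)
est = estimate_edge_blur(blurred, edge_idx=100)
```

Running this on the synthetic edge recovers the blur sigma up to discretization error; repeating the estimate at every detected edge pixel yields the sparse defocus map described in the abstract.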

15 citations


Cited by
Journal ArticleDOI
TL;DR: This paper presents a novel framework that generates a more accurate depth map for video using defocus and motion cues, and corrects errors in the depth map caused by inaccurate estimation of defocus blur and motion.
Abstract: Significant recent developments in 3D display technology have focused on techniques for converting 2D media into 3D. The depth map is an integral part of 2D-to-3D conversion. Combining multiple depth cues yields a more accurate depth map, as the errors caused by one cue, or its absence, are compensated by the other cues. In this paper, we present a novel framework to generate a more accurate depth map for video using defocus and motion cues. Moving objects in the scene are a source of error in both defocus- and motion-based depth map estimation. The proposed method rectifies these errors in the depth map by integrating defocus blur and motion cues. In addition, it also corrects errors in other parts of the depth map caused by inaccurate estimation of defocus blur and motion. Since the proposed integration approach relies on the characteristics of the point spread functions of defocus and motion blur, along with their relation to camera parameters, it is more accurate and reliable.
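One common way to realize this kind of cue integration is a per-pixel confidence-weighted fusion of the two depth estimates. The snippet below is a minimal sketch of that general idea only; the function name and confidence maps are assumptions, and the paper's actual PSF-based integration is considerably more involved.

```python
import numpy as np

def fuse_depth_maps(d_defocus, d_motion, c_defocus, c_motion, eps=1e-6):
    """Per-pixel confidence-weighted average of two depth estimates.

    c_defocus / c_motion are confidence maps in [0, 1]; where one cue
    is unreliable (confidence near zero) the other cue dominates.
    """
    weight_sum = c_defocus + c_motion + eps
    return (c_defocus * d_defocus + c_motion * d_motion) / weight_sum

# Toy example: defocus is trusted on the left half, motion on the right.
d1 = np.full((4, 4), 2.0)                 # defocus-based depth
d2 = np.full((4, 4), 4.0)                 # motion-based depth
c1 = np.zeros((4, 4)); c1[:, :2] = 1.0    # defocus confidence
c2 = np.zeros((4, 4)); c2[:, 2:] = 1.0    # motion confidence
fused = fuse_depth_maps(d1, d2, c1, c2)
```

In the fused map, each pixel takes the depth of whichever cue is confident there, which mirrors the abstract's point that one cue compensates for the errors or absence of the other.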

14 citations

Journal ArticleDOI
TL;DR: A novel method is proposed to estimate concurrent defocus and motion blurs in a single image; it works well for real images as well as for compressed images.
Abstract: The occurrence of motion blur along with defocus blur is a common phenomenon in natural images. Usually, these blurs are spatially varying for any general image, and the estimation of one type of blur is affected by the presence of the other. In this paper, we propose a novel method to estimate concurrent defocus and motion blurs in a single image. Unlike recent methods, which perform well only under simulated conditions or in the presence of a single type of blur, the proposed method works well for real as well as compressed images. In this paper, we consider only commonly associated motion and defocus blurs for analysis. Decoupling motion and defocus blur provides a fundamental tool that can be used for various analyses and applications.

11 citations

Journal ArticleDOI
TL;DR: MonoDEVSNet compensates for the limitations of SfM self-supervision by leveraging virtual-world images with accurate semantic and depth supervision and by addressing the virtual-to-real domain gap.
Abstract: Depth information is essential for on-board perception in autonomous driving and driver assistance. Monocular depth estimation (MDE) is very appealing since it allows appearance and depth to be in direct pixelwise correspondence without further calibration. The best MDE models are based on Convolutional Neural Networks (CNNs) trained in a supervised manner, i.e., assuming pixelwise ground truth (GT). Usually, this GT is acquired at training time through a calibrated multi-modal suite of sensors. However, using only a monocular system at training time is cheaper and more scalable. This is possible by relying on structure-from-motion (SfM) principles to generate self-supervision. Nevertheless, problems of camouflaged objects, visibility changes, static-camera intervals, textureless areas, and scale ambiguity diminish the usefulness of such self-supervision. In this paper, we perform monocular depth estimation by virtual-world supervision (MonoDEVS) and real-world SfM self-supervision. We compensate for the SfM self-supervision limitations by leveraging virtual-world images with accurate semantic and depth supervision and by addressing the virtual-to-real domain gap. Our MonoDEVSNet outperforms previous MDE CNNs trained on monocular and even stereo sequences.

8 citations

Proceedings ArticleDOI
01 Aug 2017
TL;DR: The proposed method uses the color uniformity principle to detect hole regions present in the depth map and provides a framework to identify falsely detected holes, increasing the effectiveness of the method.
Abstract: Depth map estimation forms an integral part of many applications such as 2D-to-3D conversion. Various methods exist in the literature for depth map estimation using different cues and structures. Usually, depth information is decoded from these cues at the edges, and matting is applied to spread it over neighboring regions. Defocus is one such cue; owing to its natural existence, it does not require any precondition compared to other cues. However, there can be regions in images with no edges. These regions are referred to as hole regions and are the main source of error in the estimated depth map. In this paper, we propose a method to correct some of these errors to obtain an accurate depth map. The proposed method uses the color uniformity principle to detect hole regions present in the depth map. We also provide a framework to identify falsely detected holes in order to increase the effectiveness of our method.
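The color-uniformity idea can be sketched as a block-wise test: a region that contains no depth samples and has nearly uniform color offers no edges from which defocus can be estimated, so it is flagged as a hole. The block size, variance threshold, and function name below are illustrative assumptions, not the paper's actual parameters or segmentation.

```python
import numpy as np

def detect_holes(sparse_depth, image, block=8, var_thresh=10.0):
    """Flag block regions that contain no depth samples and have nearly
    uniform color: with no edges there, defocus yields no depth."""
    h, w = sparse_depth.shape
    holes = np.zeros((h, w), dtype=bool)
    for y in range(0, h, block):
        for x in range(0, w, block):
            d = sparse_depth[y:y + block, x:x + block]
            c = image[y:y + block, x:x + block]
            if not np.any(d > 0) and c.var() < var_thresh:
                holes[y:y + block, x:x + block] = True
    return holes

# Toy example: uniform (hole) left half, textured right half with one
# sparse edge-based depth sample.
img = np.zeros((16, 16))
img[:, 8:] = np.tile([0.0, 50.0], (16, 4))  # high-variance texture
depth = np.zeros((16, 16))
depth[4, 12] = 1.0                           # one edge-based estimate
holes = detect_holes(depth, img)
```

Textured blocks and blocks that already have depth samples are rejected, which corresponds to the abstract's notion of filtering out falsely detected holes before correction.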

7 citations

Journal ArticleDOI
01 Sep 2022
TL;DR: In this article, a novel method based on edge defocus tracking is proposed to estimate the depth of burden surface images with different morphological characteristics; the depth is propagated from the edges to the entire image using an edge line tracking method.
Abstract: Continuous and accurate depth information on the blast furnace burden surface is important for optimizing charging operations, thereby reducing energy consumption and CO2 emissions. However, depth estimation from a single image is challenging, especially for burden surface images captured in the harsh internal environment of the blast furnace. In this paper, a novel method based on edge defocus tracking is proposed to estimate the depth of burden surface images with different morphological characteristics. First, an endoscopic video acquisition system is designed, key frames of the burden surface video in a stable state are extracted based on a feature-point optical flow method, and the sparse depth is estimated using a defocus-based method. Next, the burden surface image is divided into four subregions according to the distribution characteristics of the burden surface, and edge line trajectories and an eight-direction depth gradient template are designed to develop depth propagation rules. Finally, the depth is propagated from the edges to the entire image based on the edge line tracking method. Experimental results show that the proposed method can accurately and efficiently estimate the depth of the burden surface and provide key data support for optimizing blast furnace operation.
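The final propagation step spreads sparse edge depths into the interior of the image. The snippet below is a heavily simplified stand-in that uses plain 4-neighbour averaging with fixed boundary values instead of the paper's eight-direction, edge-line-guided propagation rules; all names and the iteration count are assumptions.

```python
import numpy as np

def propagate_depth(sparse, known, iters=300):
    """Iteratively average the 4-neighbourhood into unknown pixels while
    keeping known (edge) depths fixed; converges to a smooth
    interpolation of the edge depths across the image."""
    depth = sparse.astype(float).copy()
    for _ in range(iters):
        p = np.pad(depth, 1, mode="edge")
        avg = (p[:-2, 1:-1] + p[2:, 1:-1] +
               p[1:-1, :-2] + p[1:-1, 2:]) / 4.0
        depth = np.where(known, sparse, avg)
    return depth

# Edge depths fixed on the left (near) and right (far) borders.
sparse = np.zeros((5, 5))
sparse[:, -1] = 1.0
known = np.zeros((5, 5), dtype=bool)
known[:, 0] = known[:, -1] = True
depth = propagate_depth(sparse, known)
```

Between the two fixed borders the result approaches a linear depth ramp; the paper's eight-direction gradient template would instead steer this propagation along the detected edge lines of each subregion.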

5 citations