scispace - formally typeset
Author

Shakti Kamal

Bio: Shakti Kamal is an academic researcher from the Indian Institute of Technology Delhi. The author has contributed to research in the topics of view synthesis and image-based modeling and rendering. The author has an h-index of 1 and has co-authored 1 publication, which has received 3 citations.

Papers
Journal ArticleDOI
TL;DR: A method for synthesizing views corresponding to translational motion of the camera, which handles occlusions and changes in visibility in the synthesized views, and characterises the set of viewpoints for which views can be synthesized.

3 citations


Cited by
Proceedings Article
01 Jan 2004
TL;DR: A scheme for view synthesis of scenes containing man-made objects from images taken by arbitrary, uncalibrated cameras; it computes z-buffer values for handling occlusions in the synthesized view, but requires computation of the infinite homography.
Abstract: We propose a scheme for view synthesis of scenes containing man-made objects from images taken by arbitrary, uncalibrated cameras. Under the assumption of availability of the correspondence of three vanishing points, in general position, our scheme computes z-buffer values that can be used for handling occlusions in the synthesized view. This requires the computation of the infinite homography. We also present an alternate formulation of the technique which works with the same assumptions but does not require infinite homography computation. We present experimental results to establish the validity of both formulations.
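The infinite homography referenced in the abstract transfers points at infinity (and hence vanishing points) between two views. As an illustrative sketch only, not the paper's uncalibrated estimation procedure: with assumed intrinsics K and relative rotation R, the infinite homography has the closed form H_inf = K2 R K1^(-1), and camera translation has no effect on the points it transfers. A minimal numpy example, with all numeric values assumed for illustration:

```python
import numpy as np

def infinite_homography(K1, K2, R):
    """H_inf = K2 @ R @ inv(K1): transfers points at infinity
    (e.g. vanishing points) from view 1 to view 2; camera
    translation has no effect on such points."""
    return K2 @ R @ np.linalg.inv(K1)

# Assumed intrinsics and a small rotation about the vertical axis.
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
th = np.deg2rad(5.0)
R = np.array([[ np.cos(th), 0.0, np.sin(th)],
              [        0.0, 1.0,        0.0],
              [-np.sin(th), 0.0, np.cos(th)]])

H_inf = infinite_homography(K, K, R)

# Transfer a vanishing point (homogeneous pixel coordinates).
vp1 = np.array([400.0, 240.0, 1.0])
vp2 = H_inf @ vp1
vp2 /= vp2[2]
print(vp2)
```

In the paper's setting the cameras are uncalibrated, so H_inf cannot be formed from K and R directly; the abstract's point is precisely that it must be estimated from the three vanishing-point correspondences instead.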

1 citation

Journal ArticleDOI
01 Feb 2007
TL;DR: Two techniques are proposed for novel view synthesis of scenes containing man-made objects from images taken by arbitrary, uncalibrated cameras under the assumption of availability of the correspondence of three vanishing points.
Abstract: We have attempted the problem of novel view synthesis of scenes containing man-made objects from images taken by arbitrary, uncalibrated cameras. Under the assumption of availability of the correspondence of three vanishing points, in general position, we propose two techniques. The first is a transfer-based scheme which synthesizes new views with only a translation of the virtual camera and computes z-buffer values for handling occlusions in synthesized views. The second is a reconstruction-based scheme which synthesizes arbitrary new views in which the camera can undergo rotation as well as translation. We present experimental results to establish the validity of both formulations.

1 citation

Journal ArticleDOI
TL;DR: Experimental comparisons with images synthesized using the actual three-dimensional scene structure and camera poses show that the proposed method effectively describes scene changes caused by viewpoint movement without estimation of 3-D structure and camera information.

Abstract: This paper presents an uncalibrated view synthesis method using piecewise planar regions that are extracted from a given set of image pairs through planar segmentation. Our work concentrates on a view synthesis method that does not need estimation of camera parameters and scene structure. For our goal, we simply assume that images of the real world are composed of piecewise planar regions. Then, we perform view synthesis simply with planar regions and the homographies between them. Here, for accurate extraction of planar homographies and piecewise planar regions in images, the proposed method employs iterative homography estimation and color-segmentation-based planar region extraction. The proposed method synthesizes the virtual-view image using a set of planar regions as well as a set of corresponding homographies. Experimental comparisons with images synthesized using the actual three-dimensional (3-D) scene structure and camera poses show that the proposed method effectively describes scene changes caused by viewpoint movement without estimation of 3-D structure and camera information.
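Synthesizing a view from planar regions and their homographies, as the abstract describes, reduces to transferring each region's pixels through its 3x3 homography. A minimal point-transfer sketch; the homography H below is an assumed example for illustration, not one produced by the paper's iterative estimation:

```python
import numpy as np

def warp_points(H, pts):
    """Transfer 2-D points through a planar homography H (3x3).
    pts: N x 2 array; returns the N x 2 transferred points."""
    ph = np.hstack([pts, np.ones((len(pts), 1))])  # to homogeneous coords
    q = (H @ ph.T).T
    return q[:, :2] / q[:, 2:3]                    # perspective divide

# Assumed homography: shifts the region by (10, 5) in the virtual view.
H = np.array([[1.0, 0.0, 10.0],
              [0.0, 1.0,  5.0],
              [0.0, 0.0,  1.0]])
corners = np.array([[  0.0,  0.0], [100.0,  0.0],
                    [100.0, 50.0], [  0.0, 50.0]])
print(warp_points(H, corners))
```

In practice each planar region gets its own homography, and an image warp (inverse mapping of destination pixels through H^(-1)) fills the virtual view region by region.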