scispace - formally typeset
Author

Geetika Sharma

Bio: Geetika Sharma is an academic researcher from Indian Institute of Technology Delhi. The author has contributed to research in topics: View synthesis & Vanishing point. The author has an h-index of 1, co-authored 4 publications receiving 5 citations.

Papers
Journal ArticleDOI
TL;DR: A method for synthesis of views corresponding to translational motion of the camera, which can handle occlusions and changes in visibility in the synthesized views, and gives a characterisation of the viewpoints corresponding to which views can be synthesized.
Abstract: This paper addresses the problem of synthesizing novel views of a scene using images taken by an uncalibrated translating camera. We propose a method for synthesis of views corresponding to translational motion of the camera. Our scheme can handle occlusions and changes in visibility in the synthesized views. We give a characterisation of the viewpoints corresponding to which views can be synthesized. Experimental results have established the validity and effectiveness of the method. Our synthesis scheme can also be used to detect translational pan motion of the camera in a given video sequence. We have also presented experimental results to illustrate this feature of our scheme.
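The transfer idea behind this paper can be illustrated with a minimal sketch (not the authors' exact formulation): for a purely translating camera, image-plane displacement of a point is proportional to the camera displacement, so an in-between view can be approximated by linearly interpolating matched point positions, and visibility conflicts at a target pixel can be resolved z-buffer style by keeping the point with the larger disparity (i.e., the nearer point). The function names and the dictionary layout below are illustrative assumptions.

```python
import numpy as np

def interpolate_view(pts1, pts2, alpha):
    """Linearly interpolate corresponding pixel positions between two
    views of a purely translating camera. alpha=0 reproduces view 1,
    alpha=1 reproduces view 2; intermediate alphas give in-between
    views. Valid for translational motion, where displacement is
    proportional to the camera baseline."""
    pts1 = np.asarray(pts1, dtype=float)
    pts2 = np.asarray(pts2, dtype=float)
    return (1.0 - alpha) * pts1 + alpha * pts2

def resolve_occlusion(candidates):
    """Z-buffer-style visibility: among source pixels mapping to the
    same target pixel, keep the one with the largest disparity, which
    corresponds to the nearest scene point for a translating camera."""
    return max(candidates, key=lambda c: c["disparity"])
```

For example, `interpolate_view(pts1, pts2, 0.5)` gives point positions for a virtual camera halfway along the baseline between the two input views.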

3 citations

Proceedings Article
01 Jan 2004
TL;DR: This work proposes a scheme for view synthesis of scenes containing man-made objects from images taken by arbitrary, uncalibrated cameras; the scheme computes z-buffer values that can be used for handling occlusions in the synthesized view, but requires computation of the infinite homography.
Abstract: We propose a scheme for view synthesis of scenes containing man-made objects from images taken by arbitrary, uncalibrated cameras. Under the assumption of availability of the correspondence of three vanishing points, in general position, our scheme computes z-buffer values that can be used for handling occlusions in the synthesized view. This requires the computation of the infinite homography. We also present an alternate formulation of the technique which works with the same assumptions but does not require infinite homography computation. We present experimental results to establish the validity of both formulations.
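The infinite homography mentioned above is the plane-at-infinity homography between the two views. The paper's construction from three vanishing-point correspondences is not reproduced here, but any homography fitted to point correspondences can be estimated with the standard Direct Linear Transform (DLT); the generic sketch below assumes at least four correspondences in general position (the paper instead uses three vanishing points plus additional constraints).

```python
import numpy as np

def dlt_homography(src, dst):
    """Estimate a 3x3 homography H with dst ~ H @ src (in homogeneous
    coordinates) from >= 4 point correspondences via the Direct Linear
    Transform: stack two linear equations per correspondence and take
    the right singular vector of the smallest singular value."""
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]  # fix scale (and sign) by normalizing H[2,2] to 1
```

In practice the input points would be normalized before solving for numerical stability; that step is omitted here for brevity.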

1 citation

Journal ArticleDOI
01 Feb 2007
TL;DR: Two techniques are proposed for novel view synthesis of scenes containing man-made objects from images taken by arbitrary, uncalibrated cameras under the assumption of availability of the correspondence of three vanishing points.
Abstract: We have attempted the problem of novel view synthesis of scenes containing man-made objects from images taken by arbitrary, uncalibrated cameras. Under the assumption of availability of the correspondence of three vanishing points, in general position, we propose two techniques. The first is a transfer-based scheme which synthesizes new views with only a translation of the virtual camera and computes z-buffer values for handling occlusions in synthesized views. The second is a reconstruction-based scheme which synthesizes arbitrary new views in which the camera can undergo rotation as well as translation. We present experimental results to establish the validity of both formulations.

1 citation

Book ChapterDOI
13 Jan 2006
TL;DR: A voxel-based volumetric scene reconstruction scheme is used to obtain a scene model and synthesize views of the entire scene using an affine coordinate system and experimental results are presented to validate the technique.
Abstract: We propose a technique for view synthesis of scenes with static objects as well as objects that translate independent of the camera motion. Assuming the availability of three vanishing points in general position in the given views, we set up an affine coordinate system in which the static and moving points are reconstructed and the translations of the dynamic objects are recovered. We then describe how to synthesize new views corresponding to a completely new camera specified in the affine space with new translations for the dynamic objects. As the extent of the synthesized scene is restricted by the availability of corresponding points, we use a voxel-based volumetric scene reconstruction scheme to obtain a scene model and synthesize views of the entire scene. We present experimental results to validate our technique.
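The affine-frame bookkeeping in the scheme above can be sketched as follows (a hypothetical illustration, not the paper's actual construction of the frame from vanishing points): given an affine frame defined by an origin and three basis vectors, a point's affine coordinates are obtained by solving a 3x3 linear system, and the translation of an independently moving object is recovered as the difference of its affine coordinates across time. The function names are assumptions for illustration.

```python
import numpy as np

def affine_coords(p, origin, basis):
    """Express point p in the affine frame defined by `origin` and
    three basis vectors (the columns of `basis`), i.e. solve
    p = origin + basis @ a for the coordinate vector a."""
    return np.linalg.solve(np.asarray(basis, dtype=float),
                           np.asarray(p, dtype=float) - np.asarray(origin, dtype=float))

def object_translation(p_before, p_after, origin, basis):
    """Recover an independently translating object's motion in the
    affine frame as the difference of its affine coordinates."""
    return affine_coords(p_after, origin, basis) - affine_coords(p_before, origin, basis)
```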

Cited by
Proceedings ArticleDOI
24 Aug 2014
TL;DR: This paper presents a novel low-cost hybrid Kinect-variety based content generation scheme for 3DTV displays and demonstrates that the proposed robust integration provides guarantees on the completeness and consistency of the algorithm.
Abstract: This paper presents a novel low-cost hybrid Kinect-variety based content generation scheme for 3DTV displays. The integrated framework constructs an efficient, consistent image-space parameterization of the 3D scene structure using only sparse depth information from a few reference scene points. Under a full-perspective camera model, the enforced Euclidean constraints simplify the synthesis of high-quality novel multiview content for distinct camera motions. The algorithm does not rely on complete, precise scene geometry and is unaffected by complex geometric properties of the scene, unconstrained environmental variations and illumination conditions. It therefore performs well under a wider set of operating conditions where 3D range sensors fail or the reliability of depth-based algorithms is suspect. The robust integration of the vision algorithm and the visual sensing scheme allows each to compensate for the other's shortcomings, opening new opportunities for vision-sensing applications in uncontrolled environments. We demonstrate that the proposed integration provides guarantees on the completeness and consistency of the algorithm, leading to improved reliability across an extensive set of experimental results.

7 citations

Proceedings Article
01 Jan 2004
TL;DR: This work proposes a scheme for view synthesis of scenes containing man-made objects from images taken by arbitrary, uncalibrated cameras; the scheme computes z-buffer values that can be used for handling occlusions in the synthesized view, but requires the computation of the infinite homography.
Abstract: We propose a scheme for view synthesis of scenes containing man-made objects from images taken by arbitrary, uncalibrated cameras. Under the assumption of availability of the correspondence of three vanishing points, in general position, our scheme computes z-buffer values that can be used for handling occlusions in the synthesized view. This requires the computation of the infinite homography. We also present an alternate formulation of the technique which works with the same assumptions but does not require infinite homography computation. We present experimental results to establish the validity of both formulations.

1 citation

Journal ArticleDOI
01 Feb 2007
TL;DR: Two techniques are proposed for novel view synthesis of scenes containing man-made objects from images taken by arbitrary, uncalibrated cameras under the assumption of availability of the correspondence of three vanishing points.
Abstract: We have attempted the problem of novel view synthesis of scenes containing man-made objects from images taken by arbitrary, uncalibrated cameras. Under the assumption of availability of the correspondence of three vanishing points, in general position, we propose two techniques. The first is a transfer-based scheme which synthesizes new views with only a translation of the virtual camera and computes z-buffer values for handling occlusions in synthesized views. The second is a reconstruction-based scheme which synthesizes arbitrary new views in which the camera can undergo rotation as well as translation. We present experimental results to establish the validity of both formulations.

1 citation

Journal ArticleDOI
TL;DR: Experimental comparisons with the images synthesized using the actual three-dimensional scene structure and camera poses show that the proposed method effectively describes scene changes by viewpoint movements without estimation of 3-D scene and camera information.
Abstract: This paper presents an uncalibrated view synthesis method using piecewise planar regions that are extracted from a given set of image pairs through planar segmentation. Our work concentrates on a view synthesis method that does not need estimation of camera parameters and scene structure. For our goal, we simply assume that images of the real world are composed of piecewise planar regions. Then, we perform view synthesis simply with planar regions and the homographies between them. Here, for accurate extraction of planar homographies and piecewise planar regions in images, the proposed method employs iterative homography estimation and color segmentation-based planar region extraction. The proposed method synthesizes the virtual view image using a set of planar regions as well as a set of corresponding homographies. Experimental comparisons with the images synthesized using the actual three-dimensional (3-D) scene structure and camera poses show that the proposed method effectively describes scene changes by viewpoint movements without estimation of 3-D scene and camera information.
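The core warping step of such a piecewise planar approach can be sketched as follows (a minimal illustration under the paper's assumptions; the region/homography data layout is hypothetical): each segmented planar region carries its own homography to the virtual view, applied to its points in homogeneous coordinates, and the warped regions are then composited into the synthesized image.

```python
import numpy as np

def warp_points(H, pts):
    """Map 2-D points through a 3x3 homography H by lifting them to
    homogeneous coordinates and dividing out the last component."""
    pts = np.asarray(pts, dtype=float)
    ph = np.hstack([pts, np.ones((len(pts), 1))])
    out = ph @ H.T
    return out[:, :2] / out[:, 2:3]

def synthesize_virtual_view(regions):
    """Piecewise-planar synthesis: warp each planar region's points
    through that region's own homography into the virtual view.
    Each region is a dict with keys "H" (homography) and "pts"."""
    return [warp_points(region["H"], region["pts"]) for region in regions]
```

A full implementation would also rasterize and blend the warped regions; the sketch covers only the per-region geometric mapping.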