Proceedings Article

View Synthesis of Scenes with Man-Made Objects Using Uncalibrated Cameras.

TL;DR: This work proposes a scheme for view synthesis of scenes containing man-made objects from images taken by arbitrary, uncalibrated cameras. The scheme computes z-buffer values that can be used for handling occlusions in the synthesized view, but requires computation of the infinite homography.
Abstract: We propose a scheme for view synthesis of scenes containing man-made objects from images taken by arbitrary, uncalibrated cameras. Under the assumption of availability of the correspondence of three vanishing points, in general position, our scheme computes z-buffer values that can be used for handling occlusions in the synthesized view. This requires the computation of the infinite homography. We also present an alternate formulation of the technique which works with the same assumptions but does not require infinite homography computation. We present experimental results to establish the validity of both formulations.
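The vanishing points the scheme assumes can be recovered as intersections of the images of parallel scene lines. A minimal sketch in homogeneous coordinates (function names and the example coordinates are illustrative, not taken from the paper):

```python
# Sketch: estimating a vanishing point as the intersection of the images of
# two parallel scene lines, using homogeneous coordinates.

def cross(a, b):
    """Cross product of two 3-vectors (homogeneous points or lines)."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def line_through(p, q):
    """Homogeneous line (a, b, c) through two image points (x, y)."""
    return cross((p[0], p[1], 1.0), (q[0], q[1], 1.0))

def vanishing_point(seg1, seg2):
    """Intersection of two segments' supporting lines, as (x, y).

    seg = ((x1, y1), (x2, y2)); assumes the two image lines are not parallel.
    """
    v = cross(line_through(*seg1), line_through(*seg2))
    return (v[0] / v[2], v[1] / v[2])

# Images of two parallel scene lines converge at the vanishing point (16, 4):
vp = vanishing_point(((0, 0), (4, 1)), ((0, 2), (4, 2.5)))
```

Three such vanishing points, one per orthogonal scene direction, give the "three vanishing points in general position" the abstract assumes.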
Citations
Journal ArticleDOI
01 Feb 2007
TL;DR: Two techniques are proposed for novel view synthesis of scenes containing man-made objects from images taken by arbitrary, uncalibrated cameras under the assumption of availability of the correspondence of three vanishing points.
Abstract: We address the problem of novel view synthesis of scenes containing man-made objects from images taken by arbitrary, uncalibrated cameras. Under the assumption of availability of the correspondence of three vanishing points, in general position, we propose two techniques. The first is a transfer-based scheme which synthesizes new views with only a translation of the virtual camera and computes z-buffer values for handling occlusions in synthesized views. The second is a reconstruction-based scheme which synthesizes arbitrary new views in which the camera can undergo rotation as well as translation. We present experimental results to establish the validity of both formulations.
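The z-buffer step both papers rely on reduces to a nearest-depth test when source pixels are forward-mapped into the synthesized view. A minimal sketch (names and data are illustrative, not from either paper):

```python
# Sketch: z-buffer occlusion handling when forward-mapping source pixels
# into a synthesized view. When two source pixels land on the same target
# pixel, the one with the smaller depth (nearer the camera) wins.

def synthesize(mapped_pixels):
    """mapped_pixels: iterable of ((u, v), z, color) in the target view.

    Returns per-pixel dicts of the visible color and its depth.
    """
    zbuf, image = {}, {}
    for (u, v), z, color in mapped_pixels:
        if (u, v) not in zbuf or z < zbuf[(u, v)]:
            zbuf[(u, v)] = z
            image[(u, v)] = color
    return image, zbuf

# Two pixels project to (5, 5); the nearer one (z = 1.0) occludes the other.
image, zbuf = synthesize([((5, 5), 2.0, "far"),
                          ((5, 5), 1.0, "near"),
                          ((6, 5), 3.0, "bg")])
```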

1 citation


Cites background from "View Synthesis of Scenes with Man-M..."

  • ...Preliminary results for both schemes with only a translation of the virtual camera have appeared in (Sharma et al., 2004)....


References
Journal ArticleDOI
TL;DR: An iterative multiframe algorithm is presented for dense depth estimation that both handles low-contrast regions and produces piecewise smooth depth maps.
Abstract: A fundamental problem in computer vision and graphics is that of arbitrary view synthesis for static 3-D scenes, whereby a user-specified viewpoint of the given scene may be created directly from a representation. We propose a novel compact representation for this purpose called the multivalued representation (MVR). Starting with an image sequence captured by a moving camera undergoing either unknown planar translation or orbital motion, a MVR is derived for each preselected reference frame, and may then be used to synthesize arbitrary views of the scene. The representation itself is comprised of multiple depth and intensity levels in which the k-th level consists of points occluded by exactly k surfaces. To build a MVR with respect to a particular reference frame, dense depth maps are first computed for all the neighboring frames of the reference frame. The depth maps are then combined together into a single map, where points are organized by occlusions rather than by coherent affine motions. This grouping facilitates an automatic process to determine the number of levels and helps to reduce the artifacts caused by occlusions in the scene. An iterative multiframe algorithm is presented for dense depth estimation that both handles low-contrast regions and produces piecewise smooth depth maps. Reconstructed views as well as arbitrary flyarounds of real scenes are presented to demonstrate the effectiveness of the approach.
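The core of the MVR's level structure, where level k holds the points occluded by exactly k surfaces, amounts to ordering the fused depth samples at each pixel by depth. A toy sketch of that grouping at a single pixel (illustrative, not the paper's full algorithm):

```python
# Sketch: organizing fused depth samples at one pixel into MVR-style levels.
# Level k holds the point occluded by exactly k nearer surfaces, so sorting
# by depth yields the levels: index 0 is the visible surface.

def mvr_levels(samples):
    """samples: list of (depth, intensity) pairs at one pixel, in any order.

    Returns levels[k] = (depth, intensity) of the point behind k surfaces.
    """
    return sorted(samples, key=lambda s: s[0])

levels = mvr_levels([(3.0, 80), (1.0, 200), (2.0, 120)])
# levels[0] is the visible surface; levels[1] is occluded by one surface, etc.
```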

20 citations


"View Synthesis of Scenes with Man-M..." refers background in this paper

  • ...A novel representation consisting of multiple depth and intensity levels for new view synthesis is proposed in [2]; however, it requires calibrated cameras....


Journal ArticleDOI
TL;DR: This paper addresses the problem of characterizing the set of all images of a rigid set of m points and n lines observed by a weak perspective or paraperspective camera, showing that the corresponding image space can be represented by a six-dimensional variety embedded in R^{2(m+n)} and parameterized by the image positions of three reference points.
Abstract: This paper addresses the problem of characterizing the set of all images of a rigid set of m points and n lines observed by a weak perspective or paraperspective camera. By explicitly taking into account the Euclidean constraints associated with calibrated cameras, we show that the corresponding image space can be represented by a six-dimensional variety embedded in R^{2(m+n)} and parameterized by the image positions of three reference points. The coefficients defining this parameterized image variety (or PIV for short) can be estimated from a sample of images of a scene via linear and non-linear least squares. The PIV provides an integrated framework for using both point and line features to synthesize new images from a set of pre-recorded pictures (image-based rendering). The proposed technique does not perform any explicit three-dimensional scene reconstruction, but it supports hidden-surface elimination, texture mapping and interactive image synthesis at frame rate on ordinary PCs. It has been implemented and extensively tested on real data sets.
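The coefficient-estimation step the abstract mentions is ordinary linear least squares. A minimal sketch of that step for a 1-D model y = a*x + b via the normal equations (the PIV itself fits a higher-dimensional variety; this example is purely illustrative):

```python
# Sketch: linear least-squares estimation of model coefficients from
# measurements, shown for the toy model y = a*x + b.

def fit_line(xs, ys):
    """Least-squares fit of y = a*x + b via the 2x2 normal equations."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    det = n * sxx - sx * sx          # assumes at least two distinct xs
    a = (n * sxy - sx * sy) / det
    b = (sxx * sy - sx * sxy) / det
    return a, b

a, b = fit_line([0, 1, 2, 3], [1, 3, 5, 7])  # exact fit: y = 2x + 1
```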

13 citations


"View Synthesis of Scenes with Man-M..." refers methods in this paper

  • ...In [4], the constraints imposed by weak-perspective and paraperspective cameras are used for view synthesis while we work with perspective cameras....


Proceedings Article
01 Jan 2002

7 citations


"View Synthesis of Scenes with Man-M..." refers methods in this paper

  • ...In [6], [9] and [12], a set of views of a scene taken by uncalibrated cameras is used to reconstruct the scene in Projective Grid Space defined by the epipolar geometry between two chosen basis cameras....


Journal ArticleDOI
TL;DR: A method is presented for synthesis of views corresponding to translational motion of the camera; it can handle occlusions and changes in visibility in the synthesized views, and gives a characterisation of the viewpoints for which views can be synthesized.

3 citations


"View Synthesis of Scenes with Man-M..." refers methods in this paper

  • ...We have proposed a technique for view synthesis under the simpler assumption of translating cameras in [11]....
