Journal ArticleDOI

Uncalibrated view synthesis using planar segmentation of images

TL;DR: Experimental comparisons with the images synthesized using the actual three-dimensional scene structure and camera poses show that the proposed method effectively describes scene changes by viewpoint movements without estimation of 3-D and camera information.
Abstract: This paper presents an uncalibrated view synthesis method using piecewise planar regions that are extracted from a given set of image pairs through planar segmentation. Our work concentrates on a view synthesis method that does not need estimation of camera parameters and scene structure. For our goal, we simply assume that images of the real world are composed of piecewise planar regions. Then, we perform view synthesis simply with planar regions and homographies between them. Here, for accurate extraction of planar homographies and piecewise planar regions in images, the proposed method employs iterative homography estimation and color segmentation-based planar region extraction. The proposed method synthesizes the virtual view image using a set of planar regions as well as a set of corresponding homographies. Experimental comparisons with the images synthesized using the actual three-dimensional (3-D) scene structure and camera poses show that the proposed method effectively describes scene changes by viewpoint movements without estimation of 3-D and camera information.
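
The pipeline the abstract outlines (segment the images into planar regions, estimate a homography per region, then warp each region into the virtual view and composite) can be sketched in a few lines. The Python/OpenCV snippet below is a minimal sketch under stated assumptions, not the authors' implementation: the region masks, the matched points per region, and the virtual-view homography for each region (which the paper obtains through its own iterative estimation) are taken as given.

```python
# Hedged sketch: per-region homography estimation and planar warping.
# Region masks and virtual-view homographies are assumed inputs.
import cv2
import numpy as np

def estimate_region_homography(pts_src, pts_dst):
    """Robustly estimate the homography of one planar region from
    matched feature points (both Nx2 float arrays)."""
    H, inliers = cv2.findHomography(pts_src, pts_dst, cv2.RANSAC, 3.0)
    return H, inliers

def synthesize_view(src_img, regions):
    """Composite a virtual view from (mask, H) pairs, where `mask`
    (uint8, single channel) selects one planar region of `src_img` and
    `H` maps that region into the virtual view. How H is steered toward
    an arbitrary viewpoint is paper-specific and not reproduced here."""
    h, w = src_img.shape[:2]
    out = np.zeros_like(src_img)
    for mask, H in regions:
        region = cv2.bitwise_and(src_img, src_img, mask=mask)
        warped = cv2.warpPerspective(region, H, (w, h))
        warped_mask = cv2.warpPerspective(mask, H, (w, h))
        out[warped_mask > 0] = warped[warped_mask > 0]
    return out
```

A typical call would estimate one homography per segmented region from its matched points and then pass the resulting (mask, H) pairs to synthesize_view.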
References
Proceedings ArticleDOI
20 Jun 1995
TL;DR: An efficient recursive algorithm is described that uses a unified framework for point and line features and is immune to outliers and feature drift, two weaknesses of existing structure from motion techniques.
Abstract: A technique is presented for computing 3D scene structure from point and line features in monocular image sequences. Unlike previous methods, the technique guarantees the completeness of the recovered scene, ensuring that every scene feature that is detected in each image is reconstructed. The approach relies on the presence of four or more reference features whose correspondences are known in all the images. Under an orthographic or affine camera model, the parallax of the reference features provides constraints that simplify the recovery of the rest of the visible scene. An efficient recursive algorithm is described that uses a unified framework for point and line features. The algorithm integrates the tasks of feature correspondence and structure recovery, ensuring that all reconstructible features are tracked. In addition, the algorithm is immune to outliers and feature drift, two weaknesses of existing structure from motion techniques. Experimental results are presented for real images.
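
As a hedged illustration of the reference-feature constraint described above (not the paper's recursive point-and-line algorithm): under an affine camera, an affine map fitted to the four or more reference correspondences registers two frames, and the residual parallax of every other feature is what constrains its structure. The function names below are illustrative.

```python
# Sketch of the reference-feature parallax cue, assuming reference
# correspondences ref_a -> ref_b are known in both frames.
import numpy as np

def fit_affine(ref_a, ref_b):
    """Least-squares 2D affine map sending reference points ref_a to
    ref_b (both Nx2, N >= 4 here)."""
    A = np.hstack([ref_a, np.ones((len(ref_a), 1))])   # N x 3
    M, *_ = np.linalg.lstsq(A, ref_b, rcond=None)      # 3 x 2 affine map
    return M

def residual_parallax(M, pts_a, pts_b):
    """Parallax left over after applying the reference-frame affine map;
    this residual is the cue used to recover the remaining structure."""
    pred_b = np.hstack([pts_a, np.ones((len(pts_a), 1))]) @ M
    return pts_b - pred_b
```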

57 citations


"Uncalibrated view synthesis using p..." refers methods in this paper

  • ...Besides camera calibration, depth estimation [11–14] is needed to synthesize a virtual view regardless of calibrated/uncalibrated view synthesis methods....


Journal ArticleDOI
TL;DR: A novel automatic method for view synthesis from a triplet of uncalibrated images based on trinocular edge matching followed by transfer by interpolation, occlusion detection and correction and finally rendering is presented.

38 citations


"Uncalibrated view synthesis using p..." refers background or methods in this paper

  • ...Although 3-D reconstruction with multiple images [1–8] could automatically synthesize images, essentially it requires to perform additional stereo matching or camera calibration....


  • ...Generally, view synthesis methods are divided into two categories depending on whether or not camera calibration is used: calibrated [2–4] and uncalibrated [5–8]....


  • ...self-calibration method or by utilizing the relation matrix that specifies the relationship of image correspondences from an image pair, such as a homography or fundamental matrix [7,16]....


  • ...The goal of view synthesis is generating a virtual image at an arbitrary viewpoint using multiple images taken from a camera [1–8]....


  • ...Recently, researches on uncalibrated view synthesis have been done to improve the performance [5–8]....


Proceedings ArticleDOI
17 Sep 2003
TL;DR: An automatic method for specifying the virtual viewpoint based on the replication of the epipolar geometry linking two reference views is introduced and a method for generating synthetic views of a soccer ground starting from a single uncalibrated image is presented.
Abstract: This work deals with the view synthesis problem, i.e., how to generate snapshots of a scene taken from a "virtual" viewpoint different from all the viewpoints of the real views. Starting from uncalibrated reference images, the geometry of the scene is recovered by means of the relative affine structure. This information is used to extrapolate novel views using planar warping plus parallax correction. The contributions of this paper are twofold. First we introduce an automatic method for specifying the virtual viewpoint based on the replication of the epipolar geometry linking two reference views. Second, we present a method for generating synthetic views of a soccer ground starting from a single uncalibrated image. Experimental results using real images are shown.
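
The "planar warping plus parallax correction" step can be sketched as the standard plane-plus-parallax transfer. In the snippet below the relative affine structure value gamma and the epipole of the virtual view are assumed to be given; the paper's procedure for specifying the virtual viewpoint by replicating the epipolar geometry is not reproduced.

```python
# Hedged sketch of plane-plus-parallax point transfer.
import numpy as np

def transfer_point(p, H_plane, e_virtual, gamma):
    """Transfer image point p = (x, y) into the virtual view:
    warp by the reference-plane homography H_plane (3x3), then add the
    parallax term gamma * e_virtual (homogeneous epipole, 3-vector)."""
    p_h = np.array([p[0], p[1], 1.0])
    q = H_plane @ p_h + gamma * e_virtual
    return q[:2] / q[2]   # back to inhomogeneous image coordinates
```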

23 citations


"Uncalibrated view synthesis using p..." refers background or methods in this paper

  • ...With an unrectified input image pair, various methods, which employ additional procedures such as self-calibration [16], image rectification [24], or motion parallax [5,8], have been developed for view synthesis....


  • ...Unlike the previous uncalibrated view synthesis methods [5–8], the proposed method does not require any 3-D scene structure information such as disparity or motion parallax....


  • ...The last approach to virtual view synthesis is an uncalibrated view synthesis technique with motion parallax [5,8]....


  • ...Although 3-D reconstruction with multiple images [1–8] could automatically synthesize images, essentially it requires to perform additional stereo matching or camera calibration....


  • ...Generally, view synthesis methods are divided into two categories depending on whether or not camera calibration is used: calibrated [2–4] and uncalibrated [5–8]....


Journal ArticleDOI
TL;DR: This paper presents a reconstruction method for the scene, then it discusses how the framework of projective geometry allows symbolic or numerical information about positions to be derived, and how knowledge about the scene can be used for computing symbolic and numerical relationships.

20 citations


"Uncalibrated view synthesis using p..." refers methods in this paper

  • ...Besides camera calibration, depth estimation [11–14] is needed to synthesize a virtual view regardless of calibrated/uncalibrated view synthesis methods....


Proceedings ArticleDOI
01 Jan 2006
TL;DR: The method uses the camera models and point cloud typically generated by a structure-and-motion process as a starting point for developing a higher level model of the scene to improve the accuracy of fit.
Abstract: This paper describes a method for generating a model-based reconstruction of a scene from image data. The method uses the camera models and point cloud typically generated by a structure-and-motion process as a starting point for developing a higher level model of the scene. The method relies on the user to provide a minimal amount of structural seeding information from which more complex geometry is extrapolated. The regularity typically present in man-made environments is used to minimise the interaction required, but also to improve the accuracy of fit. We demonstrate model based reconstructions obtained using this method.
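
As a minimal, hedged stand-in for the "higher level model" fitting this abstract describes (the user-provided structural seeding and regularity constraints are not reproduced), the sketch below fits a single dominant plane to a structure-and-motion point cloud with RANSAC; the threshold and iteration count are illustrative.

```python
# Hedged sketch: RANSAC plane fit to an SfM point cloud.
import numpy as np

def ransac_plane(points, n_iter=500, thresh=0.01, rng=None):
    """points: Nx3 array. Returns (normal, d) of the plane
    normal . x + d = 0 supported by the most inliers."""
    if rng is None:
        rng = np.random.default_rng(0)
    best_n, best_d, best_count = None, None, -1
    for _ in range(n_iter):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        if np.linalg.norm(n) < 1e-9:
            continue                       # degenerate (collinear) sample
        n = n / np.linalg.norm(n)
        d = -n @ p0
        count = np.sum(np.abs(points @ n + d) < thresh)
        if count > best_count:
            best_n, best_d, best_count = n, d, count
    return best_n, best_d
```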

17 citations


"Uncalibrated view synthesis using p..." refers methods in this paper

  • ...For our goal, the proposed method employs a planar region based processing [18–20]....
