Journal ArticleDOI

Novel view synthesis using a translating camera

01 Mar 2005-Pattern Recognition Letters (Elsevier Science Inc.)-Vol. 26, Iss: 4, pp 483-492
TL;DR: A method for synthesis of views corresponding to translational motion of the camera, which can handle occlusions and changes in visibility in the synthesized views, and a characterisation of the viewpoints for which views can be synthesized.
About: This article is published in Pattern Recognition Letters. The article was published on 2005-03-01 and has received 3 citations to date. The article focuses on the topics: Camera auto-calibration & View synthesis.
Citations
Proceedings Article
01 Jan 2004
TL;DR: This work proposes a scheme for view synthesis of scenes containing man-made objects from images taken by arbitrary, uncalibrated cameras; the scheme computes z-buffer values that can be used for handling occlusions in the synthesized view, but requires computation of the infinite homography.
Abstract: We propose a scheme for view synthesis of scenes containing man-made objects from images taken by arbitrary, uncalibrated cameras. Under the assumption of availability of the correspondence of three vanishing points, in general position, our scheme computes z-buffer values that can be used for handling occlusions in the synthesized view. This requires the computation of the infinite homography. We also present an alternate formulation of the technique which works with the same assumptions but does not require infinite homography computation. We present experimental results to establish the validity of both formulations.
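
The scheme above hinges on computing the infinite homography between the input views. As a rough, hedged illustration only (not the authors' formulation, which exploits the three vanishing-point correspondences), the sketch below shows generic direct linear transform (DLT) estimation of a homography from four or more point correspondences; the sample point arrays are hypothetical.

```python
# Illustrative sketch only: generic DLT estimation of a 3x3 homography from
# >= 4 point correspondences. The cited scheme additionally constrains this
# estimate using the three vanishing-point correspondences (not shown here).
import numpy as np

def homography_dlt(src, dst):
    """src, dst: (N, 2) arrays of corresponding image points, N >= 4."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]

# Hypothetical correspondences (e.g. matched vanishing points plus extra matches)
src = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
dst = np.array([[0.10, 0.00], [1.05, 0.10], [1.00, 1.10], [0.00, 0.95]], dtype=float)
H = homography_dlt(src, dst)
```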

1 citation


Cites methods from "Novel view synthesis using a transl..."

  • ...We have proposed a technique for view synthesis under the simpler assumption of translating cameras in [11]....


Journal ArticleDOI
01 Feb 2007
TL;DR: Two techniques are proposed for novel view synthesis of scenes containing man-made objects from images taken by arbitrary, uncalibrated cameras under the assumption of availability of the correspondence of three vanishing points.
Abstract: We have attempted the problem of novel view synthesis of scenes containing man-made objects from images taken by arbitrary, uncalibrated cameras. Under the assumption of availability of the correspondence of three vanishing points, in general position, we propose two techniques. The first is a transfer-based scheme which synthesizes new views with only a translation of the virtual camera and computes z-buffer values for handling occlusions in synthesized views. The second is a reconstruction-based scheme which synthesizes arbitrary new views in which the camera can undergo rotation as well as translation. We present experimental results to establish the validity of both formulations.
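
To make the occlusion handling concrete, here is a minimal, hedged sketch (not the authors' implementation) of the standard z-buffer test used when forward-warping samples into a synthesized view: each sample carries a depth in the virtual camera, and the nearest sample wins at every output pixel. The inputs are assumed to have already been produced by the transfer step.

```python
# Minimal sketch: z-buffer compositing of forward-warped samples.
import numpy as np

def render_with_zbuffer(colors, targets, depths, out_shape):
    """colors: (N, 3) sample colours; targets: (N, 2) integer (x, y) pixel
    coordinates in the new view; depths: (N,) depths in the virtual camera
    (smaller = nearer)."""
    h, w = out_shape
    out = np.zeros((h, w, 3), dtype=colors.dtype)
    zbuf = np.full((h, w), np.inf)
    for (x, y), c, z in zip(targets, colors, depths):
        if 0 <= x < w and 0 <= y < h and z < zbuf[y, x]:
            zbuf[y, x] = z   # nearer sample wins; farther samples are occluded
            out[y, x] = c
    return out
```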

1 citation


Cites background or methods from "Novel view synthesis using a transl..."

  • ...We have proposed a technique for view synthesis under the simpler assumption of translating cameras in (Sharma et al., 2005)....


Journal ArticleDOI
TL;DR: Experimental comparisons with the images synthesized using the actual three-dimensional scene structure and camera poses show that the proposed method effectively describes scene changes by viewpoint movements without estimation of 3-D and camera information.
Abstract: This paper presents an uncalibrated view synthesis method using piecewise planar regions that are extracted from a given set of image pairs through planar segmentation. Our work concentrates on a view synthesis method that does not need estimation of camera parameters and scene structure. For our goal, we simply assume that images of the real world are composed of piecewise planar regions. Then, we perform view synthesis simply with planar regions and homographies between them. Here, for accurate extraction of planar homographies and piecewise planar regions in images, the proposed method employs iterative homography estimation and color segmentation-based planar region extraction. The proposed method synthesizes the virtual view image using a set of planar regions as well as a set of corresponding homographies. Experimental comparisons with the images synthesized using the actual three-dimensional (3-D) scene structure and camera poses show that the proposed method effectively describes scene changes by viewpoint movements without estimation of 3-D and camera information.
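
The core rendering step described above amounts to warping each planar region with its own homography and compositing the results. A minimal sketch follows, assuming the planar masks and homographies are already given (the paper extracts them automatically via colour segmentation and iterative homography estimation); function and variable names are illustrative.

```python
import cv2
import numpy as np

def synthesize_from_planes(image, masks, homographies, out_size):
    """image: (H, W, 3) source view; masks: list of uint8 binary masks, one per
    planar region; homographies: list of 3x3 arrays mapping the source image
    to the virtual view, one per region."""
    w, h = out_size
    out = np.zeros((h, w, 3), dtype=image.dtype)
    for mask, H in zip(masks, homographies):
        region = cv2.bitwise_and(image, image, mask=mask)   # keep only this plane
        warped = cv2.warpPerspective(region, H, (w, h))
        warped_mask = cv2.warpPerspective(mask, H, (w, h))
        out[warped_mask > 0] = warped[warped_mask > 0]      # composite the region
    return out
```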

Cites background or methods from "Novel view synthesis using a transl..."

  • ...Although 3-D reconstruction with multiple images [1–8] could automatically synthesize images, essentially it requires performing additional stereo matching or camera calibration....


  • ...Generally, view synthesis methods are divided into two categories depending on whether or not camera calibration is used: calibrated [2–4] and uncalibrated [5–8]....


  • ...The goal of view synthesis is generating a virtual image at an arbitrary viewpoint using multiple images taken from a camera [1–8]....


  • ...Recently, research on uncalibrated view synthesis has been done to improve performance [5–8]....


  • ...Unlike the previous uncalibrated view synthesis methods [5–8], the proposed method does not require any 3-D scene structure information such as disparity or motion parallax....


References
Book
01 Jan 2000
TL;DR: In this book, the authors provide comprehensive background material and explain how to apply the methods and implement the algorithms directly in a unified framework, including the geometric principles and how to represent objects algebraically so they can be computed and applied.
Abstract: From the Publisher: A basic problem in computer vision is to understand the structure of a real world scene given several images of it. Recent major developments in the theory and practice of scene reconstruction are described in detail in a unified framework. The book covers the geometric principles and how to represent objects algebraically so they can be computed and applied. The authors provide comprehensive background material and explain how to apply the methods and implement the algorithms directly.

15,558 citations

Proceedings ArticleDOI
01 Sep 1993
TL;DR: A view interpolation approach to portraying 3D scenes is proposed, in which morphing, which combines interpolation of texture maps and their shape, is applied to compute arbitrary intermediate frames from an array of prestored images.
Abstract: Image-space simplifications have been used to accelerate the calculation of computer graphic images since the dawn of visual simulation. Texture mapping has been used to provide a means by which images may themselves be used as display primitives. The work reported by this paper endeavors to carry this concept to its logical extreme by using interpolated images to portray three-dimensional scenes. The special-effects technique of morphing, which combines interpolation of texture maps and their shape, is applied to computing arbitrary intermediate frames from an array of prestored images. If the images are a structured set of views of a 3D object or scene, intermediate frames derived by morphing can be used to approximate intermediate 3D transformations of the object or scene. Using the view interpolation approach to synthesize 3D scenes has two main advantages. First, the 3D representation of the scene may be replaced with images. Second, the image synthesis time is independent of the scene complexity. The correspondence between images, required for the morphing method, can be predetermined automatically using the range data associated with the images. The method is further accelerated by a quadtree decomposition and a view-independent visible priority. Our experiments have shown that the morphing can be performed at interactive rates on today’s high-end personal computers. Potential applications of the method include virtual holograms, a walkthrough in a virtual environment, image-based primitives and incremental rendering. The method also can be used to greatly accelerate the computation of motion blur and soft shadows cast by area light sources. CR Categories and Subject Descriptors: I.3.3 [Computer Graphics]: Picture/Image Generation; I.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism. Additional Keywords: image morphing, interpolation, virtual reality, motion blur, shadow, incremental rendering, real-time display, virtual holography, motion compensation.
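
As a rough, hedged sketch of the interpolation idea (not the paper's quadtree-accelerated implementation), the code below assumes a dense correspondence field between the two source images is already available, e.g. derived from the range data mentioned above, and produces an in-between frame by moving each pixel a fraction t along its correspondence vector while cross-dissolving the colours.

```python
import numpy as np

def interpolate_view(img0, img1, flow, t):
    """img0, img1: (H, W, 3) float images; flow: (H, W, 2) field such that
    img0[y, x] corresponds to img1[y + flow[y, x, 1], x + flow[y, x, 0]];
    t in [0, 1] selects the intermediate viewpoint."""
    h, w = img0.shape[:2]
    out = np.zeros_like(img0, dtype=float)
    hits = np.full((h, w, 1), 1e-8)
    ys, xs = np.mgrid[0:h, 0:w]
    # colour of the corresponding pixel in img1
    x1 = np.clip(np.round(xs + flow[..., 0]).astype(int), 0, w - 1)
    y1 = np.clip(np.round(ys + flow[..., 1]).astype(int), 0, h - 1)
    colour = (1.0 - t) * img0 + t * img1[y1, x1]
    # intermediate position: a fraction t along the correspondence vector
    xt = np.clip(np.round(xs + t * flow[..., 0]).astype(int), 0, w - 1)
    yt = np.clip(np.round(ys + t * flow[..., 1]).astype(int), 0, h - 1)
    # forward splat; overlapping samples are averaged, holes stay black
    np.add.at(out, (yt, xt), colour)
    np.add.at(hits, (yt, xt), 1.0)
    return out / hits
```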

1,340 citations

Proceedings ArticleDOI
01 Jul 1992
TL;DR: Metamorphosis between two or more images over time is a useful visual technique, often used for educational or entertainment purposes, and this paper presents a contemporary solution to the visual transformation problem.
Abstract: Metamorphosis between two or more images over time is a useful visual technique, often used for educational or entertainment purposes. Traditional filmmaking techniques for this effect include clever cuts (such as a character exhibiting changes while running through a forest and passing behind several trees) and optical cross-dissolve, in which one image is faded out while another is simultaneously faded in (with makeup changes, appliances, or object substitution). Several classic horror films illustrate the process: who could forget the hair-raising transformation of the Wolfman, or the dramatic metamorphosis from Dr. Jekyll to Mr. Hyde? This paper presents a contemporary solution to the visual transformation problem.

1,130 citations

Proceedings ArticleDOI
01 Aug 1996
TL;DR: This paper introduces a simple extension to image morphing that correctly handles 3D projective camera and scene transformations and works by prewarping two images prior to computing a morph and then postwarping the interpolated images.
Abstract: Image morphing techniques can generate compelling 2D transitions between images. However, differences in object pose or viewpoint often cause unnatural distortions in image morphs that are difficult to correct manually. Using basic principles of projective geometry, this paper introduces a simple extension to image morphing that correctly handles 3D projective camera and scene transformations. The technique, called view morphing, works by prewarping two images prior to computing a morph and then postwarping the interpolated images. Because no knowledge of 3D shape is required, the technique may be applied to photographs and drawings, as well as rendered scenes. The ability to synthesize changes both in viewpoint and image structure affords a wide variety of interesting 3D effects via simple image transformations. CR
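
A minimal sketch of the three-step pipeline described above (prewarp, morph, postwarp), assuming the rectifying homographies H0 and H1 and the postwarp Hs are already known; estimating them (e.g. from the fundamental matrix and user-specified control points) is not shown, and the in-between step here is reduced to a plain cross-dissolve of the prewarped, parallel-plane images rather than a full feature-based morph.

```python
import cv2

def view_morph(img0, img1, H0, H1, Hs, s, size):
    w, h = size
    # 1. prewarp: bring the two image planes into a common, parallel orientation
    pre0 = cv2.warpPerspective(img0, H0, (w, h))
    pre1 = cv2.warpPerspective(img1, H1, (w, h))
    # 2. morph: blend the prewarped images at parameter s (a full morph would
    #    also interpolate corresponding feature positions)
    mid = cv2.addWeighted(pre0, 1.0 - s, pre1, s, 0.0)
    # 3. postwarp: map the interpolated image into the desired output view
    return cv2.warpPerspective(mid, Hs, (w, h))
```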

872 citations