
Showing papers on "View synthesis published in 2002"


Proceedings ArticleDOI
09 Oct 2002
TL;DR: The heart of the method is using programmable pixel shader technology to square intensity differences between reference image pixels, and then to choose final colors that correspond to the minimum difference, i.e. the most consistent color.
Abstract: We present a novel use of commodity graphics hardware that effectively combines a plane-sweeping algorithm with view synthesis for real-time, on-line 3D scene acquisition and view synthesis. Using real-time imagery from a few calibrated cameras, our method can generate new images from nearby viewpoints, estimate a dense depth map from the current viewpoint, or create a textured triangular mesh. We can do this without prior geometric information or requiring any user interaction, in real time and on line. The heart of our method is using programmable pixel shader technology to square intensity differences between reference image pixels, and then to choose final colors (or depths) that correspond to the minimum difference, i.e. the most consistent color. In this paper we describe the method, place it in the context of related work in computer graphics and computer vision, and present results.
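As a point of reference, the per-pixel decision the abstract describes (score each candidate depth plane by squared intensity difference, keep the most consistent color) can be sketched on the CPU as follows. This is a minimal illustration, not the paper's GPU implementation; the warp_to_plane helper and all names are assumptions standing in for the per-camera homography warp.

import numpy as np

def plane_sweep(ref_images, warp_to_plane, depth_planes):
    # ref_images: list of HxWx3 float arrays from calibrated cameras.
    # warp_to_plane(img, d): hypothetical helper that warps one reference
    # image onto depth plane d as seen from the target viewpoint.
    best_cost, best_color, best_depth = None, None, None
    for d in depth_planes:
        warped = [warp_to_plane(img, d) for img in ref_images]
        mean = np.mean(warped, axis=0)
        # squared intensity difference from the mean color, summed over cameras
        cost = sum(((w - mean) ** 2).sum(axis=2) for w in warped)
        if best_cost is None:
            best_cost = cost
            best_color = mean
            best_depth = np.full(cost.shape, d, dtype=float)
        else:
            better = cost < best_cost
            best_cost = np.where(better, cost, best_cost)
            best_color = np.where(better[..., None], mean, best_color)
            best_depth = np.where(better, d, best_depth)
    return best_color, best_depth  # synthesized view and per-pixel depth map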

185 citations


Journal ArticleDOI
TL;DR: A new “range-space” approach is described for synergistic resolution of both stereovision and reflectance (visual) modeling problems simultaneously, which can be applied to arbitrary camera arrangements with different intrinsic and extrinsic parameters, image types, image resolutions, and image number.
Abstract: A new “range-space” approach is described for synergistic resolution of both stereovision and reflectance (visual) modeling problems simultaneously. This synergistic approach can be applied to arbitrary camera arrangements with different intrinsic and extrinsic parameters, image types, image resolutions, and image number. These images are analyzed in a step-wise manner to extract 3-D range measurements and also to render a customized perspective view. The entire process is fully automatic. An extensive and detailed experimental validation phase supports the basic feasibility and generality of the Range-Space Approach.

24 citations


Proceedings ArticleDOI
01 Jan 2002
TL;DR: A new method for directly specifying the novel camera motion for epipolar transfer is presented and a backward mapping scheme for trifocal transfer is developed which overcomes the problems associated with standard forward mapping methods.
Abstract: Given a set of real images, Novel View Synthesis (NVS) aims to produce views of a scene that would correspond to that of a virtual camera. There exist many approaches to solving this problem. We consider physically valid NVS methods, in particular those based on epipolar and trifocal transfer. We review a number of methods and place them into a common framework of view specification and mapping method. We present a new method for directly specifying the novel camera motion for epipolar transfer. We also develop a backward mapping scheme for trifocal transfer which overcomes the problems associated with standard forward mapping methods.
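For orientation, epipolar transfer in its standard form places the novel-view point at the intersection of two epipolar lines. With fundamental matrices $F_{31}$ and $F_{32}$ relating the novel view to the two reference views, a correspondence $x \leftrightarrow x'$ transfers as

$$ x'' \simeq (F_{31}\,x) \times (F_{32}\,x') $$

The construction degenerates when the two epipolar lines become nearly parallel, which is one standard motivation for preferring trifocal transfer; the notation here is the textbook one, not necessarily the paper's.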

22 citations


Dissertation
01 Jan 2002
TL;DR: This thesis describes methods that generate a digital three-dimensional model of a visual scene's surfaces, using a set of calibrated photographs taken of the scene, and describes post-processing methods that refine surface reconstructions to improve model fidelity.
Abstract: This thesis describes methods that generate a digital three-dimensional (3D) model of a visual scene's surfaces, using a set of calibrated photographs taken of the scene. The 3D model is then rendered to produce views of the scene from new viewpoints. In the literature, this is known as 3D scene reconstruction for new view synthesis. This thesis introduces novel approaches that improve upon the quality, efficiency, and applicability of existing methods. To achieve a high quality reconstruction, it is essential to know which cameras have visibility of local areas on the surface. Accordingly, we present a related pair of techniques for computing visibility during a volumetric reconstruction. We then describe post-processing methods that refine surface reconstructions to improve model fidelity. We explore different representations for modeling the 3D surface during reconstruction. We introduce a method of warping the 3D space to represent large-scale scenes. We also investigate a level set approach, which embeds the 3D surface as the zero level set of a volumetrically sampled function. Finally, we present a view-dependent representation that can be computed at interactive rates.
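For context, the level set representation mentioned here encodes the surface implicitly as the zero set $S = \{\, \mathbf{x} \in \mathbb{R}^3 : \phi(\mathbf{x}) = 0 \,\}$ of a function $\phi$ sampled on a volume grid; refining the reconstruction then amounts to evolving $\phi$, typically by a flow of the form

$$ \frac{\partial \phi}{\partial t} = -F\,\lvert \nabla \phi \rvert $$

where the speed function $F$ drives the zero set toward photo-consistent geometry. This is the generic level set formulation; the thesis's particular speed term is not stated in the abstract.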

13 citations


Proceedings ArticleDOI
21 Jul 2002
TL;DR: This work presents a novel use of commodity graphics hardware that effectively combines a plane-sweeping algorithm and view synthesis in a single step for real-time, on-line 3D view synthesis, and focuses on using image-based metrics to directly estimate images.
Abstract: We present a novel use of commodity graphics hardware that effectively combines a plane-sweeping algorithm [Collins 1996] and view synthesis in a single step for real-time, on-line 3D view synthesis. Unlike typical stereo algorithms that use image-based metrics to estimate depths, we focus on using image-based metrics to directly estimate images. Using real-time imagery from a few calibrated cameras, our method can generate new images from nearby viewpoints, without any prior geometric information or requiring any user interaction, in real time and on line.

11 citations


Proceedings ArticleDOI
12 May 2002
TL;DR: A scan-line-based intermediate view synthesis algorithm is presented, in which image rectification is applied in the pre-warping stage and the post-warping matrix is specified through the movement of the epipoles.
Abstract: One method to synthesize arbitrary views of a given environment in image-based rendering (IBR) is the technique of view synthesis. A scan-line-based intermediate view synthesis algorithm is presented, in which image rectification is applied in the pre-warping stage and the post-warping matrix is specified through the movement of the epipoles. Image warping, or morphing, is a technique used in computer vision. Simulation results are provided on the synthesis of intermediate views using this method.
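As a rough sketch of the scan-line idea: after rectification, corresponding points share a scan line, so an intermediate view at blend parameter alpha can be forward-mapped by shifting each pixel a fraction of its disparity. The sketch below assumes a precomputed disparity map, ignores holes and occlusions (which a practical method, including the one above, must handle), and uses our own variable names.

import numpy as np

def interpolate_view(left, right, disparity, alpha):
    # left, right: HxW float grayscale images, rectified so pixel (y, x)
    # in the left view matches (y, x - disparity[y, x]) in the right view.
    # alpha in [0, 1]: 0 reproduces the left view, 1 the right view.
    h, w = left.shape
    out = np.zeros_like(left)
    for y in range(h):                       # scan lines are independent
        for x in range(w):
            d = disparity[y, x]
            xr = int(round(x - d))           # matching column in the right view
            xi = int(round(x - alpha * d))   # column in the intermediate view
            if 0 <= xr < w and 0 <= xi < w:
                out[y, xi] = (1 - alpha) * left[y, x] + alpha * right[y, xr]
    return out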

10 citations


Proceedings ArticleDOI
01 Jan 2002
TL;DR: It is shown how dense correspondence can be found between needle-maps generated using shape-from-shading, which in turn can be used to generate new needle- maps and produce novel intermediate views.
Abstract: Interest in view interpolation and novel view synthesis is growing. In this paper we show how dense correspondence can be found between needle-maps generated using shape-from-shading, which in turn can be used to generate new needle-maps. From these we can produce novel intermediate views, and also estimates of how each intermediate view would look under different lighting conditions. The approach offers the prospect of creating large sets of realistic views of a scene under different viewing and lighting conditions from a small number of original images.
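A needle map assigns a unit surface normal $\mathbf{n}(x,y)$ to each pixel, so once an intermediate needle map has been generated, predicting its appearance under a new light direction $\mathbf{s}$ reduces, under the Lambertian model usual in shape-from-shading, to

$$ I(x,y) = \rho \, \max\bigl(0,\; \mathbf{n}(x,y) \cdot \mathbf{s}\bigr) $$

with albedo $\rho$. This is the generic shading equation rather than a detail taken from the paper itself.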

6 citations


Book ChapterDOI
28 May 2002
TL;DR: A novel approach to view synthesis which hinges on the observation that human viewers tend to be quite sensitive to the motion of features in the image corresponding to intensity discontinuities or edges is presented.
Abstract: The goal of most image-based rendering systems can be stated as follows: given a set of pictures taken from various vantage points, synthesize the image that would be obtained from a novel viewpoint. In this paper we present a novel approach to view synthesis which hinges on the observation that human viewers tend to be quite sensitive to the motion of features in the image corresponding to intensity discontinuities or edges. Our system focuses its efforts on recovering the 3D position of these features so that their motions can be synthesized correctly. In the current implementation these feature points are recovered from image sequences by employing the epipolar plane image (EPI) analysis techniques proposed by Bolles, Baker, and Marimont. The output of this procedure resembles the output of an edge extraction system where the edgels are augmented with accurate depth information. This method has the advantage of producing accurate depth estimates for most of the salient features in the scene including those corresponding to occluding contours. We will demonstrate that it is possible to produce compelling novel views based on this information. The paper will also describe a principled approach to reasoning about the 3D structure of the scene based on the quasi-sparse features returned by the EPI analysis. This analysis allows us to correctly reproduce occlusion and disocclusion effects in the synthetic views without requiring dense correspondences. Importantly, the technique could also be used to analyze and refine the 3D results returned by range finders, stereo systems or structure from motion algorithms. Results obtained by applying the proposed techniques to actual image data sets are presented.
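The geometric fact underlying EPI analysis, summarized here for context: for a camera translating laterally at speed $v$ with focal length $f$, a scene point at depth $Z$ traces a straight line in the epipolar plane image,

$$ u(t) = u_0 + \frac{f\,v}{Z}\, t $$

so the slope of each trace encodes the feature's depth directly, and occlusions appear as traces that terminate where steeper (nearer) traces cross them. This is the standard formulation due to Bolles, Baker, and Marimont.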

5 citations


Dissertation
01 Jan 2002
TL;DR: A novel system for the automatic generation of images from new viewpoints using the information in a number of given images is described, and an automatic system for visualisation of fridge contents is proposed.
Abstract: In this thesis a number of computer vision problems are discussed; in particular we study the problem of view synthesis. The focus is on using complex features to determine the geometry of the scene and the cameras, in contrast to traditional point-based methods. The thesis is divided into two parts. In the first part, geometric relations between different features corresponding across several images are investigated. Among others, geometric relations between images of planes are presented. It is shown how these relations may be used to generate images from novel viewpoints and to create 3D models from images. The feature quiver, defined by a point and a number of directions from this point, is introduced. Three minimal cases of estimating structure and motion using correspondences of quivers in three images are solved. In the second part of the thesis a number of implemented computer vision systems are presented. First, a novel system for the automatic generation of images from new viewpoints using the information in a number of given images is described. Secondly, we propose an automatic system for visualisation of fridge contents: 3D models of the objects in the fridge are created when they are inserted. Thirdly, a system for detecting windows in a city scene based on support vector machines is presented, and finally we describe a system for estimating position and orientation from an image when a model of the surroundings is available.

3 citations


Proceedings ArticleDOI
07 Nov 2002
TL;DR: It is argued that this view synthesis approach can be integrated seamlessly with other parts of the teleconference system, speeding up the virtual view reconstruction even more.
Abstract: In this paper we propose a real-time view synthesis implementation with valid geometry for a teleconference application with viewpoint adaptation capability, from which a wide range of realistic views can be reconstructed. The whole process is decomposed into two backward mapping steps and two 1D processing steps for enabling parallelism. We show that the proposed algorithm can provide high quality virtual views that are comparable with the real perceived view. Theoretical motivation and implementation issues of the view synthesis algorithm are discussed. Experiments show that, due to the decomposition, a real-time implementation is feasible. It is further argued that this view synthesis approach can be integrated seamlessly with other parts of the teleconference system, speeding up the virtual view reconstruction even more.

2 citations


Proceedings ArticleDOI
07 Nov 2002
TL;DR: A technique for robot motion simulation with image-based view synthesis, using an eigenspace method to acquire appearance representations of images that can capture all possible variations in object shape, surface reflection, illumination, etc.
Abstract: Proposes a technique for robot motion simulation with image-based view synthesis. An eigenspace method is used to acquire appearance representations of images, which can capture all possible variations in object shape, surface reflection, illumination, etc. According to the given robot joint positions, the trajectory in joint space is first planned to generate a joint sequence, and the image sequence of the robot motion is synthesized directly from reference images, without requiring a CAD model or any calibration a priori. Virtual motion results for the UP6 robot are demonstrated.
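As a minimal sketch of an eigenspace (PCA) appearance representation: reference images are projected onto a low-dimensional basis, and new appearances are reconstructed from points in that space. The interpolation step at the end is our illustrative stand-in for the paper's joint-space trajectory mapping, and all names are ours.

import numpy as np

def build_eigenspace(images, k):
    # images: list of N reference images; each is flattened into a vector.
    X = np.stack([im.ravel().astype(float) for im in images])
    mean = X.mean(axis=0)
    # principal components of the reference set via SVD
    U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
    basis = Vt[:k]                   # top-k eigenimages, shape (k, D)
    coeffs = (X - mean) @ basis.T    # each reference view as a k-vector
    return mean, basis, coeffs

def synthesize(mean, basis, c):
    # Reconstruct an image vector from a point c in the eigenspace.
    return mean + c @ basis

# A view "between" reference views i and j could then be approximated by
# interpolating in coefficient space:
#   c = 0.5 * (coeffs[i] + coeffs[j]); img = synthesize(mean, basis, c)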

Proceedings Article
01 Jan 2002
TL;DR: An image-based system for novel view synthesis from multiple model views that works by segmenting images of a static scene into background and foreground based on motion parallax, and synthesizes novel views with an original method based on step-wise replication of the epipolar geometry acquired from a few model or “seed” views.
Abstract: In this paper we present an image-based system for novel view synthesis from multiple model views. Our method works by segmenting images of a static scene into background and foreground, based on motion parallax. From this segmentation we are able to recover the relative affine structure. Finally, we synthesize novel views with an original method based on step-wise replication of the epipolar geometry acquired from a few model or “seed” views. The method is uncalibrated, for it does not need the rigid displacements in the Euclidean frame (which is unknown), and it is automatic, for it does not require the user to manually specify viewing parameters.

Proceedings ArticleDOI
06 Oct 2002
TL;DR: This paper investigates how errors in the intrinsic parameters affect the synthesized views, with view synthesis achieved through the trilinear tensor, epipolar geometry, and 3D reconstruction respectively.
Abstract: The problem of synthesizing novel views consists of generating new views of a scene, at arbitrary camera positions and orientations, using at least two reference views. This paper presents and compares three methods for novel view synthesis. For a given virtual location and orientation of the camera, the corresponding virtual image is synthesized using two reference images. View synthesis was achieved through the trilinear tensor, epipolar geometry, and 3D reconstruction respectively. We have considered the case where the intrinsic parameters are not precisely known. In particular, we have investigated how errors in the intrinsic parameters affect the synthesized views. Keywords: novel view synthesis, 3D reconstruction, trifocal tensor, epipolar geometry, three-view geometry.
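Of the three methods compared, the trilinear-tensor route transfers points without explicit reconstruction: given a point $x$ in the first reference view and any line $l'$ through its correspondence in the second, the point in the virtual view is, in the usual tensor notation,

$$ x''^{\,k} \simeq x^{i}\, l'_{j}\, \mathcal{T}_i^{\,jk} $$

where $\mathcal{T}$ is the trifocal tensor built from the two reference cameras and the specified virtual camera. This is the standard transfer equation, quoted for context rather than from the paper.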

Proceedings Article
01 Jan 2002
TL;DR: This paper presents a method for generating synthetic views of a soccer ground starting from a single uncalibrated image using the “plane+parallax” representation to reproject points.
Abstract: This paper presents a method for generating synthetic views of a soccer ground starting from a single uncalibrated image. The relative affine structure of the players is computed by exploiting the knowledge of the soccer ground geometry and the fact that the players are in vertical positions. Then, novel views are generated using the “plane+parallax” representation to reproject points.
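In the plane+parallax representation the known ground plane induces a homography $H_{\pi}$ between views, and a point $x$ reprojects into the new view as

$$ x' \simeq H_{\pi}\, x + \gamma\, e' $$

where $e'$ is the epipole and $\gamma$ is the point's relative affine structure, i.e. its parallax relative to the ground plane (zero for points on the pitch, growing with a player's height above it). This is the standard form of the representation the abstract names.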

01 Jan 2002
TL;DR: This paper proposes a hybrid method, which uses simple shapes such as planes to model the city and applies image-based techniques to add realism; it can be performed automatically through a simple image-capturing process, with a single scan along a street using a vehicle-mounted omni-directional camera.
Abstract: In this paper, we present an efficient method to synthesize real-world scenes, such as broad city landscapes. To date, model-based approaches have mainly been adopted for this purpose, and some fairly convincing polygon cities have been successfully generated. However, the shapes of real-world objects are usually very complicated and it is infeasible to model an entire city realistically. On the other hand, image-based approaches have been attempted only recently. Image-based methods are effective for realistic rendering, but their huge data sets and restrictions on interactivity pose serious problems for an actual application. Thus, we propose a hybrid method, which uses simple shapes such as planes to model the city, and applies image-based techniques to add realism. It can be performed automatically through a simple image-capturing process, with a single scan along a street using a vehicle-mounted omni-directional camera. Further, we analyze the relationship between error and the number of images needed, in order to reduce the data size.

Journal ArticleDOI
TL;DR: An object-based encoding method for multi-view sequences of a 3D object and a view synthesis algorithm for predicting new views using an image-based rendering approach are proposed.
Abstract: Image-based rendering is a powerful and promising approach for 3D object representation. This approach considers a 3D object or a scene as a collection of images, called key frames, taken from the reference viewpoints, and generates arbitrary views of the object using these key frames. In this paper, we propose an object-based encoding method for multi-view sequences of a 3D object and a view synthesis algorithm for predicting new views using an image-based rendering approach.

Book
01 Jan 2002
TL;DR: The proposed method also applies to the construction of virtual frontal-view face images, and the results are encouraging.
Abstract: This paper addresses the issue of generating a 2D view of a 3D object from its other 2D views. The linear combination method is the typical approach to this problem. However, a 2D view cannot be represented by a linear combination of other 2D views under perspective projection. Instead, we present a solution under perspective projection. The proposed method also applies to the construction of virtual frontal-view face images, and the results are encouraging.
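The linear combination result alluded to is Ullman and Basri's: under affine (e.g. orthographic) projection, each image coordinate of a point in a novel view is a fixed linear combination of its coordinates in two reference views,

$$ x'' = a_1 x + a_2 y + a_3 x' + a_4 y' + a_5 $$

with an analogous relation for $y''$. Under perspective projection the unknown projective depths enter nonlinearly, which is why a perspective-correct solution such as the one proposed here cannot be a pure linear combination. The equation is the standard affine-case statement, included for context.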