Topic
View synthesis
About: View synthesis is a research topic. Over its lifetime, 1701 publications have been published within this topic, receiving 42333 citations.
Papers published on a yearly basis
Papers
23 Jun 2000
TL;DR: An approach to image-based view synthesis based upon the implicit construction of a 3D approximation of the scene, composed of planar triangular patches, which minimizes distortions in the rendering of the new view.
Abstract: The teleoperation of equipment under impoverished sensing and communication delays cannot be handled efficiently by conventional remote-control techniques. Our approach to this problem is based on an augmented reality control mode in which a graphical model of the equipment is overlaid upon real views from the work-site. A basic capability required to produce such an augmented reality mode is the ability to synthesize visual information from new viewpoints based upon existing ones, so as to compensate for the sparsity of real data. Our approach to the problem of image-based view synthesis is based upon the implicit construction of a 3D approximation of the scene, composed of planar triangular patches. New views are then generated by texture-mapping the available real image data onto the reprojected triangles. In order to generate a physically valid joint triangulation that minimizes distortions in the rendering of the new view, an iterative approach is utilized. This approach begins with an initial triangulation and refines it iteratively through node-linking alterations and a split-and-merge process, based upon correlation values between corresponding triangular patches. The paper presents results for both synthetic and real scenes.
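The core geometric step the abstract describes, reprojecting triangle vertices into the new viewpoint before texture-mapping, can be sketched as follows. This is a minimal illustration, not the paper's implementation; the triangle coordinates, intrinsics `K`, and translation are hypothetical values chosen for the example.

```python
import numpy as np

def project_points(P, X):
    """Project 3D points X (N,3) into an image via a 3x4 camera matrix P."""
    Xh = np.hstack([X, np.ones((X.shape[0], 1))])   # homogeneous coordinates
    x = (P @ Xh.T).T                                # (N,3) projective pixels
    return x[:, :2] / x[:, 2:3]                     # perspective division

# Hypothetical planar triangular patch, 5 m in front of the camera.
triangle = np.array([[0.0, 0.0, 5.0],
                     [1.0, 0.0, 5.0],
                     [0.0, 1.0, 5.0]])
# Assumed intrinsics: focal length 500 px, principal point (320, 240).
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])
# New viewpoint: identity rotation, small lateral translation.
P_new = K @ np.hstack([np.eye(3), np.array([[-0.2], [0.0], [0.0]])])

uv = project_points(P_new, triangle)  # 2D vertices for texture mapping
```

The real image data would then be texture-mapped onto the 2D triangle `uv`; the paper's contribution is refining *which* triangulation to use, not the projection itself.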
12 citations
01 Jul 2013
TL;DR: Experimental results demonstrate that the proposed view synthesis method can effectively produce smooth textures and reasonable structure propagation, and is well suited to multi-view video and other higher-dimensional view synthesis settings.
Abstract: Depth-based view synthesis can produce novel, realistic images of a scene by view warping and image inpainting. This paper presents a depth-based view synthesis approach performing pixel-level image inpainting. The proposed approach provides great flexibility in pixel manipulation and prevents random effects in texture propagation. By analyzing the process that generates image holes in view warping, we first classify such areas into simple holes and disocclusion areas. Based on depth-information constraints and different strategies for random propagation, a pixel-level inpainting based on approximate nearest-neighbor matching is introduced to complete holes from the two classes. Experimental results demonstrate that the proposed view synthesis method can effectively produce smooth textures and reasonable structure propagation. The proposed depth-based pixel-level inpainting is well suited to multi-view video and other higher-dimensional view synthesis settings.
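The warping-then-classification step can be illustrated on a single scanline: forward-warp pixels by disparity, then label each run of unfilled pixels as a simple hole (a narrow crack) or a disocclusion by its width. This is a toy sketch under assumed integer disparities and a hypothetical width threshold, not the paper's actual classifier.

```python
import numpy as np

def warp_row(colors, disparity, width):
    """Forward-warp one scanline by integer disparity; -1 marks holes."""
    out = np.full(width, -1, dtype=int)
    for x in range(len(colors)):
        xs = x + disparity[x]          # shifted target column
        if 0 <= xs < width:
            out[xs] = colors[x]
    return out

def classify_holes(row, crack_max=1):
    """Label each hole run as a simple crack or a disocclusion by width.
    crack_max is an assumed threshold, not taken from the paper."""
    labels, x = [], 0
    while x < len(row):
        if row[x] == -1:
            start = x
            while x < len(row) and row[x] == -1:
                x += 1
            kind = "crack" if x - start <= crack_max else "disocclusion"
            labels.append((kind, start, x))
        else:
            x += 1
    return labels

colors = np.arange(8)                      # toy scanline of pixel values
disp = np.array([0, 0, 0, 2, 2, 2, 2, 2])  # a depth step in the scene
warped = warp_row(colors, disp, 10)        # the step opens a 2-pixel hole
holes = classify_holes(warped)             # -> one disocclusion run
```

The two classes would then be completed with different inpainting strategies, as the abstract describes.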
12 citations
14 Nov 2005
TL;DR: A new multiple-image view synthesis algorithm that requires only camera parameters and disparity maps, and is scalable: virtual views can be created from 1 to N of the available video inputs, providing a means to gracefully handle scenarios where camera inputs decrease or increase over time.
Abstract: Interactive audio-visual (AV) applications such as free viewpoint video (FVV) aim to enable unrestricted spatio-temporal navigation within multiple-camera environments. Current virtual-viewpoint view synthesis solutions for FVV are either purely image-based, implying large information redundancy, or involve reconstructing complex 3D models of the scene. In this paper we present a new multiple-image view synthesis algorithm that requires only camera parameters and disparity maps. The multi-view synthesis (MVS) approach can be used in any multi-camera environment and is scalable, as virtual views can be created given 1 to N of the available video inputs, providing a means to gracefully handle scenarios where camera inputs decrease or increase over time. The algorithm identifies and selects only the best-quality surface areas from the available reference images, thereby reducing perceptual errors in virtual view reconstruction. Experimental results are presented and verified using both objective (PSNR) and subjective comparisons.
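The "select only the best-quality surface areas" idea can be sketched as a per-pixel selection over N warped reference images: each output pixel is taken from whichever reference scores highest at that location. The quality maps here are hypothetical placeholders; the paper's actual quality criterion (e.g., sampling density or viewing angle) is not reproduced.

```python
import numpy as np

def fuse_views(warped, quality):
    """Per-pixel selection across N warped references.
    warped:  (N, H, W) pixel values after warping to the virtual view.
    quality: (N, H, W) per-pixel quality scores (assumed given).
    Returns the (H, W) virtual view taking each pixel from the best reference."""
    best = np.argmax(quality, axis=0)          # (H, W) index of winning view
    h, w = best.shape
    rows = np.arange(h)[:, None]
    cols = np.arange(w)[None, :]
    return warped[best, rows, cols]            # advanced indexing gather

# Hypothetical: two 2x2 warped references with complementary quality maps.
warped = np.array([[[10, 20], [30, 40]],
                   [[11, 21], [31, 41]]])
quality = np.array([[[1.0, 0.0], [0.0, 1.0]],
                    [[0.0, 1.0], [1.0, 0.0]]])
virtual = fuse_views(warped, quality)
```

Because the selection is per pixel, the scheme degrades gracefully when references drop out: removing a view only shrinks the candidate set at each pixel.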
12 citations
TL;DR: It is demonstrated that, for each group of three arrays in a 15-view system, a synthetic image may be employed to make the middle array redundant; this potential reduction in hardware offers important advantages for the development of a practical multiple-view X-ray scanner.
12 citations
28 May 2008
TL;DR: The framework implements intermediate view synthesis as a chain of consecutive processing modules, as an extension to the Middlebury open software structure, allowing the quality and execution time of individual modules to be benchmarked for end-to-end system performance optimization.
Abstract: This paper presents a system-level overview of a real-time image-based rendering framework performing multiple intermediate view synthesis entirely on the Graphics Processing Unit (GPU). The software design achieves high performance, yet maintains flexibility and ease of development through a hierarchical layered architecture. The framework implements intermediate view synthesis as a chain of consecutive processing modules, as an extension to the Middlebury open software structure, allowing the quality and execution time of individual modules to be benchmarked for end-to-end system performance optimization. The modules can be flexibly coordinated, enabling the multiple view synthesis to scale to real-time operation on both high-end and low-end GPUs.
12 citations