Topic

View synthesis

About: View synthesis is a research topic. Over the lifetime, 1701 publications have been published within this topic receiving 42333 citations.


Papers
Proceedings ArticleDOI
TL;DR: A fully automatic 2D-to-3D conversion algorithm is proposed that assigns relative depth values to the objects in a given 2D image/scene and generates two views (a stereo pair) using a Depth Image Based Rendering (DIBR) algorithm for 3D displays.
Abstract: With the recent advent of 3D display technology, there is an increasing need to convert existing 2D content into rendered 3D views. We propose a fully automatic 2D-to-3D conversion algorithm that assigns relative depth values to the various objects in a given 2D image/scene and generates two different views (a stereo pair) using a Depth Image Based Rendering (DIBR) algorithm for 3D displays. The algorithm described in this paper creates a scene model for each image based on low-level features such as texture, gradient, and pixel location, and estimates a pseudo depth map. Since the capture environment is unknown, using low-level features alone creates inaccuracies in the depth map. Using such a flawed depth map for 3D rendering results in various artifacts, causing an unpleasant viewing experience. The proposed algorithm also uses certain high-level image features to overcome these imperfections and generates an enhanced depth map for an improved viewing experience. Finally, we show several 3D results generated with our algorithm in the results section.
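As a rough sketch of the DIBR step outlined above, the snippet below forward-warps a single image into two views using a normalized pseudo depth map. The disparity scaling, shift directions, and hole handling are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def dibr_stereo_pair(image, depth, max_disparity=16):
    """Forward-warp one image into a stereo pair using a pseudo depth map.

    image: H x W x 3 uint8 array; depth: H x W float array in [0, 1],
    with 1 meaning "near".  Shift amounts and hole handling are
    illustrative, not the paper's exact formulation.
    """
    h, w = depth.shape
    left = np.zeros_like(image)
    right = np.zeros_like(image)
    disparity = (depth * max_disparity).astype(int)
    for y in range(h):
        for x in range(w):
            d = disparity[y, x]
            if 0 <= x + d < w:
                left[y, x + d] = image[y, x]    # near pixels shift one way in the left view
            if 0 <= x - d < w:
                right[y, x - d] = image[y, x]   # and the other way in the right view
    return left, right  # disoccluded holes remain zero and still need filling
```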

6 citations

Proceedings ArticleDOI
19 Jul 2010
TL;DR: This paper proposes a new structure-from-motion technique, called locally temporal bundle adjustment (LTBA), to handle dynamic scenes and static camera motion, which violate the conventional structure-from-motion assumption.
Abstract: The video-plus-depth format has been widely used for representing 3D scenes due to its main advantage of compatibility with the image format. In practice, depth inconsistency may lead to unsatisfactory view synthesis results. In this paper, we propose a new structure-from-motion (SfM) technique, called locally temporal bundle adjustment (LTBA), to handle dynamic scenes and static camera motion, which violate the conventional structure-from-motion assumption. By temporally integrating the camera information, depth map, and video, we develop a geometric quadrilateral filter to reduce noise in the depth map and enhance spatio-temporal consistency, improving the quality of the depth maps. We demonstrate the improved quality of dynamic depth maps produced by the proposed algorithm through experiments on real video-plus-depth sequences.
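The abstract does not specify the geometric quadrilateral filter in detail, so the sketch below only illustrates the general idea of temporally blending a sliding window of depth maps while down-weighting frames that disagree with the current one; the weighting scheme is an assumption for illustration.

```python
import numpy as np

def temporal_depth_filter(depth_window, sigma=0.05):
    """Temporally blend a sliding window of depth maps for consistency.

    depth_window: list of H x W float arrays from consecutive frames.
    A plain exponentially weighted blend is used as a stand-in for the
    paper's geometric quadrilateral filter, which the abstract does not
    specify in detail.
    """
    stack = np.stack(depth_window, axis=0)          # T x H x W
    center = stack[stack.shape[0] // 2]             # depth of the current frame
    # Down-weight frames whose depth disagrees with the current frame.
    weights = np.exp(-((stack - center) ** 2) / (2.0 * sigma ** 2))
    return (weights * stack).sum(axis=0) / weights.sum(axis=0)
```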

6 citations

Proceedings ArticleDOI
Guangzhong Liu, Jiajie Wang, Chi Zhang, Song Liao, Yuehu Liu
01 Oct 2017
TL;DR: This paper utilizes state-of-the-art generative adversarial networks (GANs) to synthesize novel views of a structured scene and proposes a simple but effective constraint in the generator network to preserve the geometric properties of the input in the generated image.
Abstract: Generating a realistic image from a novel viewpoint has always been a key problem in image-based rendering and related domains. In this paper we utilize state-of-the-art generative adversarial networks (GANs) to synthesize novel views of a structured scene. Based on our proposed representations for traffic scenes, a realistic image at a given viewpoint can be generated via conditional GANs, given the geometric layout of the corresponding position and pose. In order to preserve the geometric properties of the input in the generated image, we propose a simple but effective constraint in the generator network. Qualitative and comparative results validate our method and demonstrate its effectiveness and efficiency.
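As a hedged illustration of how a geometric constraint might be attached to a conditional GAN generator objective, the sketch below adds a gradient-alignment term between the generated image and the input layout. The discriminator interface, the loss weight `lam`, and the gradient-based term itself are assumptions; the paper's actual constraint is not reproduced here.

```python
import torch
import torch.nn.functional as F

def generator_loss(discriminator, generated, layout, lam=10.0):
    """Conditional GAN generator loss with an added geometric term.

    generated: fake image synthesized from the layout map `layout`
    (both B x C x H x W tensors).  The geometric term below simply
    encourages image gradients to align with the layout's gradients;
    it is an illustrative stand-in, not the paper's constraint.
    """
    # Standard adversarial term, conditioned on the layout.
    pred_fake = discriminator(torch.cat([generated, layout], dim=1))
    adv = F.binary_cross_entropy_with_logits(
        pred_fake, torch.ones_like(pred_fake))

    # Illustrative geometric term: match horizontal/vertical gradients.
    def grads(x):
        return x[..., :, 1:] - x[..., :, :-1], x[..., 1:, :] - x[..., :-1, :]

    gx_f, gy_f = grads(generated.mean(dim=1, keepdim=True))
    gx_l, gy_l = grads(layout.mean(dim=1, keepdim=True))
    geo = F.l1_loss(gx_f, gx_l) + F.l1_loss(gy_f, gy_l)

    return adv + lam * geo
```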

6 citations

Book ChapterDOI
01 Jan 2013
TL;DR: With texture and depth data, virtual views are synthesized to produce a disparity-adjustable stereo pair for stereoscopic displays, or to generate multiple views required by autostereoscopic displays.
Abstract: With texture and depth data, virtual views are synthesized to produce a disparity-adjustable stereo pair for stereoscopic displays, or to generate the multiple views required by autostereoscopic displays. View synthesis typically consists of three steps: 3D warping, view merging, and hole filling. However, simple synthesis algorithms may yield visual artifacts, e.g., texture flickering, boundary artifacts, and smearing effects, and many efforts have been made to suppress these synthesis artifacts. Some employ spatial/temporal filters to smooth depth maps, which mitigates depth errors and enhances temporal consistency; some use a cross-check technique to detect and prevent possible synthesis distortions; some focus on removing boundary artifacts; and others attempt to create natural texture patches for the disoccluded regions. In addition to rendering quality, real-time implementation is necessary for view synthesis. So far, the basic three-step rendering process has been realized in real time through GPU programming and a design
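To make the three-step structure concrete, here is a minimal sketch of the merging and hole-filling stages, assuming a prior warping step has produced two warped views with validity masks; the naive left-neighbor fill stands in for the hole-filling and inpainting methods surveyed in the chapter.

```python
import numpy as np

def merge_and_fill(warped_left, mask_left, warped_right, mask_right):
    """View merging and naive hole filling (steps 2 and 3 above).

    warped_left / warped_right: H x W x 3 arrays produced by 3D warping;
    mask_left / mask_right: H x W boolean arrays marking valid pixels.
    The left-neighbor fill is only a placeholder for a real hole-filling
    or inpainting method.
    """
    # Step 2: prefer the left warp where it is valid, else the right warp.
    merged = np.where(mask_left[..., None], warped_left, warped_right)
    holes = ~(mask_left | mask_right)
    # Step 3: naive hole filling by propagating the nearest left neighbor.
    h, w = holes.shape
    for y in range(h):
        for x in range(1, w):
            if holes[y, x]:
                merged[y, x] = merged[y, x - 1]
    return merged
```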

6 citations

Journal ArticleDOI
TL;DR: A priority patch inpainting algorithm is proposed for hole filling in DIBR-based generation of multiple virtual views, combining a texture-based interpolation method for crack filling with a prioritized method for selecting the critical patch to reduce computation time.
Abstract: Hole and crack filling is the most important issue in depth-image-based rendering (DIBR) algorithms for generating virtual view images when only one view image and one depth map are available. This paper proposes a priority patch inpainting algorithm for hole filling in DIBR algorithms by generating multiple virtual views. A texture-based interpolation method is applied for crack filling. Then, an inpainting-based algorithm is applied patch by patch for hole filling. A prioritized method for selecting the critical patch is also proposed to reduce computation time. Finally, the proposed method is realized on the compute unified device architecture parallel computing platform which runs on a graphics processing unit. Simulation results show that the proposed algorithm is 51-fold faster for virtual view synthesis and achieves better virtual view quality compared to the traditional DIBR algorithm which contains depth preprocessing, warping, and hole filling.
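As a loose illustration of prioritized patch selection, the sketch below scans the hole boundary and picks the patch whose known pixels have the highest average confidence, in the spirit of priority-driven inpainting; the paper's actual priority term and its CUDA parallelization are not reproduced here.

```python
import numpy as np

def highest_priority_patch(confidence, hole_mask, patch=9):
    """Pick the next patch to inpaint along the hole boundary.

    confidence: H x W float array (e.g. 1 for known pixels, 0 for holes);
    hole_mask: H x W boolean array, True inside holes.  Priority here is
    just the mean confidence of the known pixels in the patch, a
    simplified stand-in for the paper's prioritized selection.
    """
    h, w = hole_mask.shape
    half = patch // 2
    best, best_priority = None, -1.0
    for y in range(half, h - half):
        for x in range(half, w - half):
            if not hole_mask[y, x]:
                continue  # only consider hole pixels
            window = hole_mask[y - half:y + half + 1, x - half:x + half + 1]
            if window.all():
                continue  # not on the boundary: no known pixels nearby
            known_conf = confidence[y - half:y + half + 1,
                                    x - half:x + half + 1][~window]
            priority = known_conf.mean()
            if priority > best_priority:
                best, best_priority = (y, x), priority
    return best  # center of the patch to fill next, or None if no holes
```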

6 citations


Network Information
Related Topics (5)
Image segmentation: 79.6K papers, 1.8M citations, 86% related
Feature (computer vision): 128.2K papers, 1.7M citations, 86% related
Object detection: 46.1K papers, 1.3M citations, 85% related
Convolutional neural network: 74.7K papers, 2M citations, 85% related
Feature extraction: 111.8K papers, 2.1M citations, 84% related
Performance Metrics
Number of papers in the topic in previous years:
2023: 54
2022: 117
2021: 189
2020: 158
2019: 114
2018: 102