scispace - formally typeset
Topic

View synthesis

About: View synthesis is a research topic. Over the lifetime, 1701 publications have been published within this topic receiving 42333 citations.


Papers
Proceedings ArticleDOI
01 Sep 2012
TL;DR: This work proposes a solution that additionally transmits auxiliary information in order to help the construction of synthesized views, especially in the occluded areas, and shows that decoding quality and consistency among frames are improved with only a small share of additional information.
Abstract: An important question in the design of interactive multiview systems consists in determining the information needed by the decoder for high quality navigation between the views. Most of the existing techniques focus on the captured sequences and only consider their transmission, which does not guarantee consistency among receiver-generated frames of chosen virtual views. In this work, we propose a solution that additionally transmits auxiliary information in order to help the construction of synthesized views, especially in the occluded areas. Comparative results with existing approaches validate this novel representation of multiview data for interactive navigation. We show that decoding quality and consistency among frames are improved with only a small share of additional information.

15 citations

Proceedings ArticleDOI
Lu Wang, Ju Liu, Jiande Sun, Yannan Ren, Wei Liu, Yuling Gao
16 May 2011
TL;DR: Experimental results show that the proposed method without preprocessing the depth image can obtain better performance in both subjective quality and objective evaluation.
Abstract: Virtual view synthesis has been considered as a crucial technique in three-dimensional television (3DTV) display, where depth-image-based rendering (DIBR) is a key technology. In order to improve the virtual image quality, a method without preprocessing the depth image is proposed. During the synthesis, the hole-flag map is fully utilized. A Horizontal, Vertical and Diagonal Extrapolation (HVDE) using depth information algorithm is also proposed for filling the tiny cracks. After blending, main virtual view image is obtained. Then the image generated by filtering the depth image is regarded as assistant image to fill small holes in the main image. Experimental results show that the proposed method can obtain better performance in both subjective quality and objective evaluation.
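The crack-filling step described above can be illustrated with a minimal sketch. The paper's HVDE algorithm additionally weights candidates by depth; the version below is a simplified stand-in that fills flagged crack pixels by averaging their valid horizontal, vertical, and diagonal neighbours (all names here are illustrative, not from the paper's code):

```python
import numpy as np

def fill_cracks(image, hole_mask):
    """Fill small cracks in a warped view by averaging valid
    horizontal, vertical and diagonal neighbours (simplified
    sketch of the HVDE idea; the paper also uses depth cues)."""
    out = image.astype(float).copy()
    h, w = hole_mask.shape
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            if hole_mask[y, x]:
                neigh = [(y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1),
                         (y - 1, x - 1), (y - 1, x + 1),
                         (y + 1, x - 1), (y + 1, x + 1)]
                vals = [out[ny, nx] for ny, nx in neigh
                        if not hole_mask[ny, nx]]
                if vals:  # only fill when at least one neighbour is valid
                    out[y, x] = np.mean(vals, axis=0)
    return out
```

Larger disocclusion holes, which single-pixel extrapolation cannot fill, are handled in the paper by blending with an assistant image rendered from the filtered depth map.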

15 citations

Posted Content
TL;DR: In this paper, a multilayer perceptron and a ray transformer are used to estimate radiance and volume density at continuous 5D locations (3D spatial locations and 2D viewing directions), drawing appearance information from multiple source views.
Abstract: We present a method that synthesizes novel views of complex scenes by interpolating a sparse set of nearby views. The core of our method is a network architecture that includes a multilayer perceptron and a ray transformer that estimates radiance and volume density at continuous 5D locations (3D spatial locations and 2D viewing directions), drawing appearance information on the fly from multiple source views. By drawing on source views at render time, our method hearkens back to classic work on image-based rendering (IBR), and allows us to render high-resolution imagery. Unlike neural scene representation work that optimizes per-scene functions for rendering, we learn a generic view interpolation function that generalizes to novel scenes. We render images using classic volume rendering, which is fully differentiable and allows us to train using only multi-view posed images as supervision. Experiments show that our method outperforms recent novel view synthesis methods that also seek to generalize to novel scenes. Further, if fine-tuned on each scene, our method is competitive with state-of-the-art single-scene neural rendering methods. Project page: this https URL
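The "classic volume rendering" the abstract refers to composites per-sample radiance and density along each camera ray; because every operation is differentiable, gradients flow back to the network from a photometric loss. A minimal NumPy sketch of that compositing step (sample spacing, densities, and colors are assumed inputs, not the paper's actual pipeline):

```python
import numpy as np

def composite_ray(densities, colors, deltas):
    """Alpha-composite N samples along one ray.

    densities: (N,) volume density sigma per sample
    colors:    (N, 3) radiance per sample
    deltas:    (N,) distance between consecutive samples
    Returns the rendered RGB value and the per-sample weights.
    """
    alphas = 1.0 - np.exp(-densities * deltas)          # opacity per sample
    # transmittance: probability the ray reaches each sample unoccluded
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    weights = alphas * trans
    rgb = (weights[:, None] * colors).sum(axis=0)
    return rgb, weights
```

A fully opaque sample absorbs all transmittance, so samples behind it contribute nothing, which is how occlusion emerges from the compositing itself.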

15 citations

Book ChapterDOI
28 Nov 2017
TL;DR: This chapter presents an overview of the acquisition, processing, and rendering pipeline in a multiview video system, targeting the unique feature of rendering virtual views, not captured by the input camera feeds, called Virtual View Synthesis.
Abstract: This chapter presents an overview of the acquisition, processing, and rendering pipeline in a multiview video system, targeting the unique feature of rendering virtual views, not captured by the input camera feeds. This is called Virtual View Synthesis and supports Free Navigation similar to The Matrix bullet time effect that was popularized in the late 1990s. A substantial part of the chapter is devoted to explaining how to estimate the respective camera parameters and their relative positions in the system, as well as how to estimate/measure depth, which is essential information in order to obtain smooth virtual view transitions with Depth Image-Based Rendering.
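The core geometric step of Depth Image-Based Rendering that this chapter builds toward can be sketched in a few lines: back-project a pixel using its depth and the source intrinsics, transform it by the relative pose, and re-project with the virtual camera's intrinsics. A hedged sketch using the standard pinhole model (function and variable names are illustrative):

```python
import numpy as np

def warp_pixel(u, v, depth, K_src, K_dst, R, t):
    """Reproject one pixel from the source camera into a virtual view.

    (u, v): pixel coordinates in the source image
    depth:  metric depth of that pixel
    K_src, K_dst: 3x3 intrinsic matrices of source / virtual cameras
    R, t:   rotation and translation from source to virtual camera frame
    """
    p = np.array([u, v, 1.0])
    X = depth * (np.linalg.inv(K_src) @ p)   # back-project to 3D point
    Xd = R @ X + t                           # move into virtual camera frame
    q = K_dst @ Xd                           # project with virtual intrinsics
    return q[:2] / q[2]                      # perspective divide
```

Warping every pixel this way produces the virtual view plus disocclusion holes, which the hole-filling and blending stages discussed elsewhere in this listing then address.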

15 citations

Journal ArticleDOI
TL;DR: To enhance compression performance, the synthesized view distortion, which is evaluated by emulating the interpolation and the virtual view synthesis process, is used in the optimization objective function for coding mode selection in the video encoder.
Abstract: In this paper, we propose a depth map down-sampling and coding scheme that minimizes the view synthesis distortion. Moreover, a solution for the optimal depth map down-sampling problem that minimizes the depth-caused distortion in the virtual view by exploiting the depth map and the associated texture information along with the up-sampling method to be used in the decoder side is derived. Furthermore, to enhance compression performance, the synthesized view distortion, which is evaluated by emulating the interpolation and the virtual view synthesis process, is used in the optimization objective function for coding mode selection in the video encoder. Experimental results show that both the proposed depth map down-sampling and encoding methods lead to good performance, and the average bit rate reduction is 2.62% compared with 3D-AVC.
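The quantity being minimized here is the depth distortion introduced by a down-sample/up-sample round trip, since that distortion propagates into the synthesized view. A naive baseline of that round trip, useful for seeing what the optimized scheme improves on, can be sketched as follows (this is decimation plus nearest-neighbour up-sampling, not the paper's optimized method):

```python
import numpy as np

def roundtrip_depth_mse(depth, factor):
    """MSE of a naive depth down/up-sampling round trip.

    Decimate the depth map by `factor`, up-sample back with
    nearest-neighbour replication, and measure the squared error
    against the original. The paper instead chooses the retained
    samples jointly with the decoder-side up-sampler to minimize
    the distortion induced in the synthesized view.
    """
    small = depth[::factor, ::factor]                    # decimation
    up = np.repeat(np.repeat(small, factor, axis=0),     # NN up-sampling
                   factor, axis=1)
    up = up[:depth.shape[0], :depth.shape[1]]            # crop to original size
    return float(np.mean((depth - up) ** 2))
```

On smooth depth regions this round trip is nearly lossless, while errors concentrate at depth discontinuities, which is exactly where view-synthesis artifacts appear and why the choice of retained samples matters.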

15 citations


Network Information
Related Topics (5)
Image segmentation — 79.6K papers, 1.8M citations — 86% related
Feature (computer vision) — 128.2K papers, 1.7M citations — 86% related
Object detection — 46.1K papers, 1.3M citations — 85% related
Convolutional neural network — 74.7K papers, 2M citations — 85% related
Feature extraction — 111.8K papers, 2.1M citations — 84% related
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2023    54
2022    117
2021    189
2020    158
2019    114
2018    102