scispace - formally typeset
Topic

View synthesis

About: View synthesis is a research topic. Over the lifetime, 1701 publications have been published within this topic receiving 42333 citations.


Papers
Proceedings ArticleDOI
01 Sep 2016
TL;DR: Experimental results show the high effectiveness of the presented depth map refinement method, especially at object edges; the method is intended for low-resolution depth maps estimated from multiview video.
Abstract: In free-viewpoint television, high-quality depth maps are essential for virtual view synthesis for free navigation. In this paper, we propose a new method for improving the quality and increasing the resolution of depth maps. Our method is intended for low-resolution depth maps estimated from multiview video. The proposed quality improvement is based on segmentation of an acquired high-resolution image: in the estimated depth map, outliers are searched for within neighborhoods built from similar segments of the acquired view. Experimental results show the high effectiveness of the presented depth map refinement method, especially at object edges.
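A segment-guided outlier replacement of the kind described above can be sketched as follows; the median/MAD outlier test and the precomputed segmentation label map are illustrative assumptions, not the paper's exact procedure:

```python
import numpy as np

def refine_depth(depth, segments):
    """Replace depth outliers using statistics of the image segment
    they belong to (segments would come from segmenting the acquired
    high-resolution view). The 3x-MAD outlier test is an illustrative
    choice, not the paper's criterion."""
    out = depth.astype(float).copy()
    for seg_id in np.unique(segments):
        mask = segments == seg_id
        vals = out[mask]
        med = np.median(vals)
        mad = np.median(np.abs(vals - med)) + 1e-9  # robust spread
        outlier = np.abs(out - med) > 3 * mad
        out[mask & outlier] = med  # snap outliers to the segment depth
    return out
```

Grouping pixels by segment means an edge pixel is compared only against depths from its own object, which is why this style of refinement helps most at object boundaries.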

4 citations

Proceedings ArticleDOI
Shiping Zhu, Yang Yu
24 Sep 2012
TL;DR: A method for precise per-pixel matching based on block matching over the whole image is proposed for disparity estimation, and image interpolation is used to synthesize the virtual view.
Abstract: Virtual view synthesis is a key technology of virtual reality and multi-view video systems. We therefore propose a new algorithm based on disparity estimation and interpolation that synthesizes an intermediate view at any position between the original images. For disparity estimation, a method of precise per-pixel matching based on block matching over the whole image is proposed, and prediction of the search starting point is used to reduce the computational cost. Image interpolation is used to synthesize the virtual view, with reverse mapping as an optimization to fill the holes formed during interpolation. Experimental results show satisfying visual quality when the scene contains few sharp edges.
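The interpolation and hole-filling steps can be sketched roughly as below; the simple forward mapping and the fallback fill from the second view are simplified assumptions, not the paper's exact reverse-mapping procedure:

```python
import numpy as np

def interpolate_view(left, right, disparity, alpha=0.5):
    """Synthesize a virtual view at fractional baseline position
    `alpha` (0 = left view, 1 = right view) by shifting each left
    pixel along its disparity, then filling the remaining holes
    from the right view."""
    h, w = left.shape
    virtual = np.zeros_like(left)
    filled = np.zeros((h, w), dtype=bool)
    for y in range(h):
        for x in range(w):
            # Forward-map: a left pixel at x lands at x - alpha * d.
            xv = x - int(round(alpha * disparity[y, x]))
            if 0 <= xv < w:
                virtual[y, xv] = left[y, x]
                filled[y, xv] = True
    virtual[~filled] = right[~filled]  # crude hole-filling fallback
    return virtual
```

Holes appear exactly where the forward mapping leaves target pixels unwritten (disocclusions), which is why a second, reverse pass is needed to fill them.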

4 citations

Proceedings ArticleDOI
03 Nov 2020
TL;DR: This paper investigates several variations of skip connections in two widely used novel view synthesis modules, pixel generation and flow prediction, and finds that combining skip connections with flow-based hard attention is helpful.
Abstract: Novel view synthesis is the task of synthesizing an image of an object at an arbitrary viewpoint given one or a few views of the object [1]. The output image of novel view synthesis exhibits a significant structural change from the input. Because of the large change, the skip connections or U-Net architecture, which can sustain the multi-level characteristics of the input images, cannot be directly utilized for the novel view synthesis [2]. In this paper, we investigate several variations of skip connection on two widely used novel view synthesis modules, pixel generation [1] and flow prediction [3]. For pixel generation, we find that the combination of the skip connections with flow-based hard attention is helpful. On the other hand, flow prediction enjoys marginal benefit from skip connections in deeper layers. Our pipeline suggests how to make use of skip connections on tasks that involve large geometric changes.
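Flow-based hard attention on a skip connection amounts to warping encoder features with a predicted flow field before they are passed to the decoder, so the skipped features are spatially aligned with the output view despite the large geometric change. A toy nearest-neighbor version (shapes and names are illustrative assumptions):

```python
import numpy as np

def hard_attention_skip(enc_feat, flow):
    """Gather encoder features at flow-displaced positions before the
    skip connection. enc_feat: (H, W, C); flow: (H, W, 2) holding
    (dx, dy) pixel offsets. 'Hard' = one nearest-neighbor source pixel
    per target location, rather than a soft weighted sum."""
    h, w, _ = enc_feat.shape
    ys, xs = np.mgrid[0:h, 0:w]
    src_y = np.clip(np.round(ys + flow[..., 1]).astype(int), 0, h - 1)
    src_x = np.clip(np.round(xs + flow[..., 0]).astype(int), 0, w - 1)
    return enc_feat[src_y, src_x]  # (H, W, C) warped features
```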

4 citations

Journal ArticleDOI
TL;DR: Li et al. propose a Self-Consistent Generative Network (SCGN) that synthesizes novel views from the given input views without explicitly exploiting geometric information.
Abstract: View synthesis aims to produce unseen views from a set of views captured by two or more cameras at different positions. This task is non-trivial since it is hard to conduct pixel-level matching among different views. To address this issue, most existing methods exploit geometric information to match pixels. However, when the cameras have a large baseline (i.e., are far away from each other), severe geometric distortion occurs and the geometric information may fail to provide useful guidance, resulting in very blurry synthesized images. To address these issues, in this paper we propose a novel deep generative model, called the Self-Consistent Generative Network (SCGN), which synthesizes novel views from the given input views without explicitly exploiting geometric information. The proposed SCGN model consists of two main components, a View Synthesis Network (VSN) and a View Decomposition Network (VDN), both employing an encoder-decoder structure. The VDN seeks to reconstruct the input views from the synthesized novel view, preserving the consistency of view synthesis. Thanks to the VDN, the SCGN is able to synthesize novel views without any geometric rectification before encoding, making both training and application easier. Finally, an adversarial loss is introduced to improve the photo-realism of the novel views. Both qualitative and quantitative comparisons against several state-of-the-art methods on two benchmark tasks demonstrate the superiority of our approach.
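The self-consistency cycle (synthesize a novel view with the VSN, decompose it back into the inputs with the VDN, penalize the gap) can be sketched with placeholder callables; the L1 form of the loss and the function names are assumptions, not the paper's exact formulation:

```python
import numpy as np

def self_consistency_loss(vsn, vdn, input_views):
    """SCGN-style cycle: `vsn` maps the list of input views to a novel
    view; `vdn` maps that novel view back to reconstructions of the
    inputs. The mean absolute gap between reconstructions and inputs
    is the consistency penalty (illustrative L1 choice)."""
    novel = vsn(input_views)
    recon = vdn(novel)
    return float(np.mean([np.abs(r - v).mean()
                          for r, v in zip(recon, input_views)]))
```

In training this term would be combined with the adversarial loss mentioned in the abstract; the cycle is what lets the model stay consistent without geometric rectification.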

4 citations


Network Information
Related Topics (5)

Image segmentation: 79.6K papers, 1.8M citations (86% related)
Feature (computer vision): 128.2K papers, 1.7M citations (86% related)
Object detection: 46.1K papers, 1.3M citations (85% related)
Convolutional neural network: 74.7K papers, 2M citations (85% related)
Feature extraction: 111.8K papers, 2.1M citations (84% related)
Performance
Metrics
No. of papers in the topic in previous years:

Year    Papers
2023    54
2022    117
2021    189
2020    158
2019    114
2018    102