Topic

View synthesis

About: View synthesis is a research topic. Over its lifetime, 1,701 publications have been published within this topic, receiving 42,333 citations.


Papers
Proceedings ArticleDOI
07 Nov 2009
TL;DR: The method for reliability-based view synthesis from compressed multi-view + depth data (MVD) is investigated, and corresponding results are shown.
Abstract: For advanced 3D Video (3DV) applications, efficient data representations are investigated, which only transmit a subset of the views that are required for 3D visualization. From this subset, all intermediate views are synthesized from sample-dense color and depth data. In this paper, the method for reliability-based view synthesis from compressed multi-view + depth data (MVD) is investigated and corresponding results are shown. The initial problem in such 3DV systems is the interdependency between view capturing, coding, and view synthesis. For evaluating each component separately, we first generate results from the coding stage only, where color and depth coding are carried out separately. In the next step, we add the reliability-based view synthesis stage and show how the separate coding results influence view synthesis quality and what types of artifacts are produced. Efficient bit rate distribution between color and depth is investigated by objective as well as subjective evaluations. Furthermore, quality characteristics across the viewing range for different bit rate distributions are analyzed. Finally, the robustness of the reliability-based view synthesis to coding artifacts is presented.

44 citations
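
The rendering step underlying such MVD pipelines is depth-image-based rendering (DIBR): pixels of a decoded color view are shifted by a disparity derived from their depth. The sketch below is a minimal, illustrative 1-D forward warp for rectified cameras; the paper's reliability weighting, hole filling, and blending with a second view are not reproduced, and `focal` and `baseline` are assumed inputs.

```python
import numpy as np

def synthesize_intermediate_view(color, depth, alpha, focal, baseline):
    """Forward-warp a rectified color view to a virtual camera placed at
    fraction `alpha` of the stereo baseline, using per-pixel depth.
    Minimal DIBR sketch (illustrative assumptions throughout)."""
    h, w = depth.shape
    # Disparity of each pixel in the virtual view (rectified-camera model).
    disparity = alpha * focal * baseline / np.maximum(depth, 1e-6)

    out = np.zeros_like(color)
    z_buf = np.full((h, w), np.inf)
    ys, xs = np.mgrid[0:h, 0:w]
    xt = np.round(xs - disparity).astype(int)     # target column per pixel
    valid = (xt >= 0) & (xt < w)
    # Z-buffer resolves collisions: the nearest surface wins.
    for y, x_src, x_dst in zip(ys[valid], xs[valid], xt[valid]):
        if depth[y, x_src] < z_buf[y, x_dst]:
            z_buf[y, x_dst] = depth[y, x_src]
            out[y, x_dst] = color[y, x_src]
    return out  # disoccluded pixels stay zero and appear as holes
```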

Journal ArticleDOI
TL;DR: An algorithm for synthesizing intermediate views from a single stereo pair is presented, incorporating scene assumptions and a disparity estimation confidence measure that lead to the accurate synthesis of occluded and ambiguously referenced regions.
Abstract: In this paper, we present an algorithm for synthesizing intermediate views from a single stereo-pair. The key contributions of this algorithm are the incorporation of scene assumptions and a disparity estimation confidence measure that lead to the accurate synthesis of occluded and ambiguously referenced regions. The synthesized views have been displayed on a multi-view binocular imaging system, with subjectively effective motion parallax and diminished eye strain.

44 citations
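
A confidence-driven synthesis of this kind can be pictured as blending two warped predictions of the intermediate view, falling back to a scene prior wherever the disparity estimate is unreliable. The sketch below assumes precomputed warps and confidence maps; the actual scene assumptions and confidence measure of the paper are not modeled.

```python
import numpy as np

def blend_with_confidence(from_left, from_right, conf_left, conf_right, background):
    """Blend two warped predictions of the intermediate view using
    disparity-confidence weights (H x W each); pixels where both
    confidences vanish fall back to `background`, a stand-in for the
    paper's scene assumptions (illustrative only)."""
    wl = conf_left[..., None]          # broadcast weights over color channels
    wr = conf_right[..., None]
    total = wl + wr
    weighted = (wl * from_left + wr * from_right) / np.maximum(total, 1e-3)
    return np.where(total > 1e-3, weighted, background)
```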

Journal ArticleDOI
TL;DR: A view synthesis method based on image-domain warping (IDW) is presented that synthesizes new views directly from S3D video and functions completely automatically.
Abstract: Today, stereoscopic 3D (S3D) cinema is already mainstream, and almost all new display devices for the home support S3D content. S3D distribution infrastructure to the home is already established partly in the form of 3D Blu-ray discs, video on demand services, or television channels. The necessity to wear glasses is, however, often considered an obstacle that hinders broader acceptance of this technology in the home. Multiview autostereoscopic displays enable a glasses-free perception of S3D content for several observers simultaneously, and support head motion parallax in a limited range. To support multiview autostereoscopic displays in an already established S3D distribution infrastructure, a synthesis of new views from S3D video is needed. In this paper, a view synthesis method based on image-domain warping (IDW) is presented that synthesizes new views directly from S3D video and functions completely automatically. IDW relies on an automatic and robust estimation of sparse disparities and image saliency information, and enforces target disparities in synthesized images using an image warping framework. Two configurations of the view synthesizer in the scope of a transmission and view synthesis framework are analyzed and evaluated. A transmission and view synthesis system that uses IDW was recently submitted to MPEG's call for proposals on 3D video technology, where it was ranked among the four best performing proposals.

44 citations
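
The essence of image-domain warping is to deform the image so that a sparse set of feature points attains prescribed target disparities, without any dense depth map. A crude stand-in is sketched below: the paper solves a saliency-weighted warping energy over a mesh, whereas here the sparse horizontal shifts are simply interpolated into a dense displacement field and applied with `cv2.remap`.

```python
import numpy as np
import cv2
from scipy.interpolate import griddata

def idw_style_warp(image, pts_xy, target_disp):
    """Warp `image` so that sparse points `pts_xy` (N x 2, in x/y order)
    are shifted horizontally by `target_disp` (N,). Illustrative
    interpolation stand-in, not the paper's energy minimization."""
    h, w = image.shape[:2]
    grid_y, grid_x = np.mgrid[0:h, 0:w]
    # Dense horizontal displacement from the sparse correspondences.
    dx = griddata(pts_xy, target_disp, (grid_x, grid_y),
                  method='linear', fill_value=0.0).astype(np.float32)
    map_x = (grid_x - dx).astype(np.float32)   # inverse map for cv2.remap
    map_y = grid_y.astype(np.float32)
    return cv2.remap(image, map_x, map_y, interpolation=cv2.INTER_LINEAR)
```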

Proceedings ArticleDOI
14 Jun 2020
TL;DR: This work proposes a neural network with a geo-transformation layer that transforms predicted ground-height values from the satellite view into a ground view while retaining the physical satellite-to-ground relation.
Abstract: We present a novel method for generating panoramic street-view images which are geometrically consistent with a given satellite image. Different from existing approaches that completely rely on a deep learning architecture to generalize cross-view image distributions, our approach explicitly loops in the geometric configuration of the ground objects based on the satellite views, such that the produced ground view synthesis preserves the geometric shape and the semantics of the scene. In particular, we propose a neural network with a geo-transformation layer that turns predicted ground-height values from the satellite view to a ground view while retaining the physical satellite-to-ground relation. Our results show that the synthesized image retains well-articulated and authentic geometric shapes, as well as texture richness of the street-view in various scenarios. Both qualitative and quantitative results demonstrate that our method compares favorably to other state-of-the-art approaches that lack geometric consistency.

44 citations
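
The geometric idea of such a geo-transformation layer can be sketched outside any network: given per-pixel height predictions on the satellite grid, each ground point maps to an azimuth/elevation pair of an equirectangular street-view panorama. The code below is an illustrative projection only; the camera position at the image centre, `cam_height`, and the panorama resolution are assumed values, not the paper's configuration.

```python
import numpy as np

def satellite_to_panorama(height, meters_per_px, cam_height=1.6,
                          pano_w=512, pano_h=256):
    """Map per-pixel height predictions on a satellite grid to integer
    (row, col) coordinates of an equirectangular ground panorama,
    assuming the camera stands at the image centre (illustrative)."""
    h, w = height.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Ground-plane offsets (metres) relative to the camera position.
    east = (xs - w / 2) * meters_per_px
    north = (h / 2 - ys) * meters_per_px
    dist = np.hypot(east, north) + 1e-6
    azimuth = np.arctan2(east, north)                  # [-pi, pi]
    elevation = np.arctan2(height - cam_height, dist)  # up is positive
    col = ((azimuth + np.pi) / (2 * np.pi) * pano_w).astype(int) % pano_w
    row = ((np.pi / 2 - elevation) / np.pi * pano_h).astype(int)
    return np.clip(row, 0, pano_h - 1), col
```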

Proceedings ArticleDOI
01 Jan 2018
TL;DR: In this paper, a region-aware geometric transform network is proposed that predicts a set of homographies and their corresponding region masks to transform the input image into a novel view, generating high-quality synthetic views that respect the scene geometry.
Abstract: This paper tackles the problem of novel view synthesis from a single image. In particular, we target real-world scenes with rich geometric structure, a challenging task due to the large appearance variations of such scenes and the lack of simple 3D models to represent them. Modern, learning-based approaches mostly focus on appearance to synthesize novel views and thus tend to generate predictions that are inconsistent with the underlying scene structure. By contrast, in this paper, we propose to exploit the 3D geometry of the scene to synthesize a novel view. Specifically, we approximate a real-world scene by a fixed number of planes, and learn to predict a set of homographies and their corresponding region masks to transform the input image into a novel view. To this end, we develop a new region-aware geometric transform network that performs these multiple tasks in a common framework. Our results on the outdoor KITTI and the indoor ScanNet datasets demonstrate the effectiveness of our network in generating high-quality synthetic views that respect the scene geometry, thus outperforming the state-of-the-art methods.

43 citations
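
Once homographies and region masks are predicted, the novel view is essentially a mask-weighted composition of per-plane warps. The sketch below shows that composition step with OpenCV, taking the homographies and masks as given inputs; it illustrates the plane-decomposition idea rather than the paper's network.

```python
import numpy as np
import cv2

def compose_novel_view(image, homographies, masks):
    """Compose a novel view from per-plane 3x3 homographies and soft
    H x W region masks: each scene plane is warped by its own homography
    and the results are mask-blended (illustrative sketch)."""
    h, w = image.shape[:2]
    out = np.zeros((h, w, 3), np.float32)
    weight = np.zeros((h, w, 1), np.float32)
    for H, m in zip(homographies, masks):      # one (H, mask) pair per plane
        warped = cv2.warpPerspective(image.astype(np.float32), H, (w, h))
        warped_m = cv2.warpPerspective(m.astype(np.float32), H, (w, h))[..., None]
        out += warped * warped_m
        weight += warped_m
    return out / np.maximum(weight, 1e-6)      # normalize overlapping planes
```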


Network Information
Related Topics (5)
Image segmentation: 79.6K papers, 1.8M citations, 86% related
Feature (computer vision): 128.2K papers, 1.7M citations, 86% related
Object detection: 46.1K papers, 1.3M citations, 85% related
Convolutional neural network: 74.7K papers, 2M citations, 85% related
Feature extraction: 111.8K papers, 2.1M citations, 84% related
Performance Metrics
No. of papers in the topic in previous years:

Year    Papers
2023    54
2022    117
2021    189
2020    158
2019    114
2018    102