Topic: View synthesis

About: View synthesis is a research topic. Over its lifetime, 1,701 publications have been published on this topic, receiving 42,333 citations.


Papers
Journal ArticleDOI
TL;DR: In this article, a depth map refinement method was proposed to enhance the inter-view consistency of depth maps (estimated or acquired by any method), crucial for achieving the required fidelity of the virtual view synthesis process.
Abstract: In this article, we propose a depth map refinement method that increases the quality of immersive video. The proposal substantially enhances the inter-view consistency of depth maps (estimated or acquired by any method), which is crucial for achieving the required fidelity of the virtual view synthesis process. In the described method, only information from depth maps is used, as the use of texture can introduce errors into the refinement, mostly due to inter-view color inconsistencies and noise. In order to evaluate the performance of the proposal and compare it with the state of the art, three experiments were conducted. To test the influence of the refinement on the encoding of immersive video, four sets of depth maps (original, refined with the synthesis-based refinement, with a bilateral filter, and with the proposal) were encoded with the MPEG Immersive Video (MIV) encoder. In the second experiment, in order to provide a direct evaluation of the accuracy of depth maps, a comparison on the Middlebury database was performed. In the third experiment, the temporal consistency of depth maps was assessed by measuring the efficiency of encoding of the virtual views. The experiments showed both a substantial increase in virtual view synthesis quality in immersive video applications and higher similarity to ground truth after the refinement of estimated depth maps. The usefulness of the proposal was confirmed by the experts of the ISO/IEC MPEG group for immersive video, and the method became the MPEG Reference Software for depth refinement. The implementation of the method is publicly available for other researchers.

14 citations
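The inter-view consistency emphasised in the abstract above can be illustrated with a simple check: a depth pixel reprojected into a neighbouring rectified view should land on a pixel with a similar depth value. The sketch below assumes rectified views, metric depth, and a hypothetical `interview_consistency_mask` helper with illustrative parameters; it is not the paper's refinement algorithm.

```python
import numpy as np

def interview_consistency_mask(depth_l, depth_r, f, baseline, tol=0.05):
    """Flag pixels of the left depth map whose reprojection into the
    right view agrees with the right depth map (hypothetical check,
    assuming rectified views and metric depth)."""
    h, w = depth_l.shape
    mask = np.zeros((h, w), dtype=bool)
    for y in range(h):
        for x in range(w):
            z = depth_l[y, x]
            if z <= 0:
                continue
            d = f * baseline / z          # disparity in pixels
            xr = int(round(x - d))        # corresponding column in right view
            if 0 <= xr < w:
                mask[y, x] = abs(z - depth_r[y, xr]) <= tol * z
    return mask
```

Pixels failing such a check could then be corrected from the consistent views; the paper's actual procedure is more involved and uses only depth information across all views.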

Proceedings ArticleDOI
29 Oct 2010
TL;DR: This work proposes to add a third phase in which 2D or 3D artifacts are detected and removed in each stereoscopic image pair, while keeping the perceived quality of the stereoscopic movie close to the original.
Abstract: Novel view synthesis methods use several images or video sequences of the same scene to create new images of that scene, as if they were taken by a camera placed at a different viewpoint. They can be used in stereoscopic cinema to change the camera parameters (baseline, vergence, focal length...) a posteriori, or to adapt a stereoscopic broadcast that was shot for given viewing conditions (such as a movie theater) to a different screen size and distance (such as a 3DTV in a living room). View synthesis from stereoscopic movies usually proceeds in two phases: first, disparity maps and other viewpoint-independent data (such as scene layers and matting information) are extracted from the original sequences; second, these data and the original images are used to synthesize the new sequence, given geometric information about the synthesized viewpoints. Unfortunately, since no known stereo method gives perfect results in all situations, the results of the first phase will most probably contain errors, which will result in 2D or 3D artifacts in the synthesized stereoscopic movie. We propose to add a third phase in which these artifacts are detected and removed in each stereoscopic image pair, while keeping the perceived quality of the stereoscopic movie close to the original.

14 citations
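A standard way to detect the kind of disparity errors this abstract mentions is the left-right consistency check: a pixel's left-view disparity should agree with the disparity of its match in the right view. The sketch below is that generic check, not the paper's detection-and-removal phase; the function name and threshold are illustrative.

```python
import numpy as np

def lr_check(disp_l, disp_r, thresh=1.0):
    """Left-right disparity consistency check: mark a pixel as reliable
    only if its match in the right view reports a similar disparity.
    Assumes rectified views and disparities in pixels."""
    h, w = disp_l.shape
    ok = np.zeros((h, w), dtype=bool)
    for y in range(h):
        for x in range(w):
            xr = int(round(x - disp_l[y, x]))  # matched column in right view
            if 0 <= xr < w:
                ok[y, x] = abs(disp_l[y, x] - disp_r[y, xr]) <= thresh
    return ok
```

Pixels that fail the check are candidate sources of 2D/3D artifacts and can be flagged for correction before re-synthesizing the pair.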

Proceedings ArticleDOI
01 Dec 2010
TL;DR: This work investigates the cause of boundary noises, and proposes a novel solution to remove such boundary noises by applying restrictions during forward warping on the pixels within the texture-depth misalignment regions.
Abstract: During view synthesis based on depth maps, also known as Depth-Image-Based Rendering (DIBR), annoying artifacts are often generated around foreground objects, yielding the visual effect that slim silhouettes of foreground objects appear scattered into the background. These artifacts are referred to as boundary noises. We investigate the cause of boundary noises and find that they result from the misalignment between texture and depth information along object boundaries. Accordingly, we propose a novel solution to remove such boundary noises by applying restrictions during forward warping to the pixels within the texture-depth misalignment regions. Experiments show that this algorithm can effectively eliminate most boundary noises, and it is also robust for view synthesis with compressed depth and texture information.

13 citations
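Forward warping, the step this abstract constrains, can be sketched as a per-pixel horizontal shift derived from depth, with a z-buffer so that nearer pixels win collisions. This is a minimal 1-D DIBR sketch with a hypothetical disparity parameterisation (`shift_scale`), not the paper's restricted warping.

```python
import numpy as np

def forward_warp(texture, depth, shift_scale):
    """Minimal 1-D forward warp (DIBR sketch): each pixel moves
    horizontally by a disparity proportional to its inverse depth;
    a z-buffer keeps the nearest pixel when two pixels collide."""
    h, w = depth.shape
    out = np.zeros_like(texture)
    zbuf = np.full((h, w), np.inf)
    for y in range(h):
        for x in range(w):
            d = shift_scale / depth[y, x]      # disparity from depth
            xt = int(round(x + d))             # target column in virtual view
            if 0 <= xt < w and depth[y, x] < zbuf[y, xt]:
                zbuf[y, xt] = depth[y, x]      # nearer pixel wins
                out[y, xt] = texture[y, x]
    return out
```

The paper's contribution sits inside this loop: pixels lying in texture-depth misalignment regions around object boundaries are restricted from being warped, so they do not scatter foreground colors into the background.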

Proceedings ArticleDOI
10 Dec 2010
TL;DR: A new filtering technique addresses the disocclusion problem arising from the depth image based rendering (DIBR) technique within the 3DTV framework, by pre-processing the depth video and/or post-processing the warped image through hole-filling techniques.
Abstract: In this paper, a new filtering technique addresses the disocclusion problem arising from the depth image based rendering (DIBR) technique within the 3DTV framework. An inherent problem with DIBR is filling in the newly exposed areas (holes) caused by the image warping process. In contrast to multiview video (MVV) systems, such as free viewpoint television (FTV), where multiple reference views are used for recovering the disocclusions, we consider in this paper a 3DTV system based on a video-plus-depth sequence, which provides only one reference view of the scene. To overcome this issue, disocclusion removal can be achieved by pre-processing the depth video and/or post-processing the warped image through hole-filling techniques. Specifically, we propose a pre-processing of the depth video based on bilateral filtering according to the strength of the depth discontinuity. Experimental results illustrate the efficiency of the proposed method compared to traditional methods.

13 citations
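The depth pre-processing described above builds on the bilateral filter, which smooths a depth map while preserving strong discontinuities (the range kernel down-weights neighbours at a very different depth). Below is a plain bilateral filter sketch over a depth map; the paper's version additionally adapts the filtering to discontinuity strength, which is not reproduced here, and the parameter values are illustrative.

```python
import numpy as np

def bilateral_depth(depth, radius=2, sigma_s=1.5, sigma_r=0.1):
    """Edge-preserving smoothing of a depth map: spatial kernel weights
    by distance, range kernel weights by depth difference, so strong
    depth discontinuities stay sharp while noise is flattened."""
    h, w = depth.shape
    out = np.empty_like(depth, dtype=float)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(ys**2 + xs**2) / (2 * sigma_s**2))
    pad = np.pad(depth, radius, mode='edge')
    for y in range(h):
        for x in range(w):
            win = pad[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            rng = np.exp(-(win - depth[y, x])**2 / (2 * sigma_r**2))
            wgt = spatial * rng
            out[y, x] = (wgt * win).sum() / wgt.sum()
    return out
```

Smoothing the depth video this way before warping reduces the size of the disoccluded holes the hole-filling stage must later repair.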

Journal ArticleDOI
TL;DR: A novel view-spatial-temporal post-refinement method for view synthesis, in which new hole-filling and boundary artifact removal techniques are proposed, and an optimal reference frame selection algorithm is proposed for a better trade-off between the computational complexity and rendered image quality.
Abstract: Depth image based rendering is one of the key techniques to realize view synthesis for three-dimensional television and free-viewpoint television, which provide high-quality and immersive experiences to end viewers. However, artifacts in rendered images, including holes caused by occlusion/disocclusion and boundary artifacts, may degrade the subjective and objective image quality. To handle these problems and improve the quality of rendered images, we present a novel view-spatial-temporal post-refinement method for view synthesis, in which new hole-filling and boundary artifact removal techniques are proposed. In addition, we propose an optimal reference frame selection algorithm for a better trade-off between computational complexity and rendered image quality. Experimental results show that the proposed method can achieve a peak signal-to-noise ratio gain of 0.94 dB on average for multiview video test sequences when compared with the benchmark view synthesis reference software. In addition, the subjective quality of the rendered image is also improved.

13 citations
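The 0.94 dB figure quoted above is a PSNR gain. For reference, PSNR between a rendered view and its ground-truth reference can be computed as follows; this is the generic metric definition, not tied to the paper's software.

```python
import numpy as np

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB between a reference image and a
    rendered one; higher is better, infinite for identical images."""
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    return float('inf') if mse == 0 else 10.0 * np.log10(peak**2 / mse)
```

A gain of 0.94 dB means the post-refined synthesized views have, on average, that much higher PSNR against ground truth than views from the reference synthesis software.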


Network Information
Related Topics (5)
- Image segmentation: 79.6K papers, 1.8M citations, 86% related
- Feature (computer vision): 128.2K papers, 1.7M citations, 86% related
- Object detection: 46.1K papers, 1.3M citations, 85% related
- Convolutional neural network: 74.7K papers, 2M citations, 85% related
- Feature extraction: 111.8K papers, 2.1M citations, 84% related
Performance
Metrics
No. of papers in the topic in previous years
Year  Papers
2023  54
2022  117
2021  189
2020  158
2019  114
2018  102