
View synthesis

About: View synthesis is a research topic. Over its lifetime, 1,701 publications have been published within this topic, receiving 42,333 citations.


Papers
Journal ArticleDOI
TL;DR: Gaussian mixture modeling (GMM) with a number of component models is used to separate background and foreground pixels in the proposed view synthesis technique, and the proposed approach is confirmed to improve PSNR by 3.15–5.13 dB compared with the conventional three reference frames.
Abstract: High-quality virtual views need to be synthesized from adjacent available views for free viewpoint video and multiview video coding (MVC) to provide users with a more realistic 3D viewing experience of a scene. View synthesis techniques suffer from poor rendering quality due to holes created by occlusion and integer rounding errors introduced by warping. To remove the holes in the virtual view, existing techniques use spatial and temporal correlation in intra/inter-view images and depth maps. However, they still suffer quality degradation in the boundary region between foreground and background areas due to the low spatial correlation in texture images and the low correspondence in inter-view depth maps. To overcome the above-mentioned limitations, our proposed technique uses a number of models in Gaussian mixture modeling (GMM) to separate background and foreground pixels. Here, the missing pixels introduced by the warping process are recovered by an adaptive weighted average of the pixel intensities from the corresponding GMM model(s) and the warped image. The weights vary with time to accommodate changes due to a dynamic background and the motions of moving objects for view synthesis. We also introduce an adaptive strategy to reset the GMM modeling if the contributions of the pixel intensities drop significantly. Our experimental results indicate that the proposed approach provides a 5.40–6.60 dB PSNR improvement compared with the relevant methods. To verify the effectiveness of the proposed view synthesis technique, we use it as an extra reference frame in the motion estimation for MVC. The experimental results confirm that the proposed view synthesis is able to improve PSNR by 3.15–5.13 dB compared with the conventional three reference frames.

38 citations
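The hole-filling step described above lends itself to a short illustration. The sketch below is not the authors' algorithm: it substitutes OpenCV's stock MOG2 mixture-of-Gaussians background subtractor for the paper's adaptive per-pixel GMM and uses a fixed blending weight where the paper adapts the weights over time; the function name fill_holes_with_gmm and the constant alpha are illustrative assumptions.

```python
# Minimal sketch: fill warping holes in a synthesized view from a
# mixture-of-Gaussians background model. Not the paper's exact method;
# MOG2 stands in for the adaptive per-pixel GMM described above.
import cv2
import numpy as np

def fill_holes_with_gmm(past_frames, warped, hole_mask, alpha=0.6):
    """past_frames: list of earlier views (uint8 BGR); warped: current
    warped view; hole_mask: (H, W) bool, True where warping left a hole."""
    mog = cv2.createBackgroundSubtractorMOG2(history=len(past_frames),
                                             detectShadows=False)
    for f in past_frames:
        mog.apply(f)                       # update the per-pixel mixture
    background = mog.getBackgroundImage()  # most probable background colour

    # Weighted average of the GMM background estimate and the warped image;
    # the paper adapts this weight over time, here it is a fixed constant.
    blend = (alpha * background.astype(np.float32)
             + (1.0 - alpha) * warped.astype(np.float32))
    out = warped.copy()
    out[hole_mask] = blend[hole_mask].astype(np.uint8)
    return out
```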

Journal ArticleDOI
TL;DR: This work presents a new method for using commodity graphics hardware to achieve real-time, on-line, 2D view synthesis or 3D depth estimation from two or more calibrated cameras; it combines a 3D plane-sweeping approach with 2D multi-resolution color consistency tests.
Abstract: We present a new method for using commodity graphics hardware to achieve real-time, on-line, 2D view synthesis or 3D depth estimation from two or more calibrated cameras. Our method combines a 3D plane-sweeping approach with 2D multi-resolution color consistency tests. We project camera imagery onto each plane, compute measures of color consistency throughout the plane at multiple resolutions, and then choose the color or depth (corresponding plane) that is most consistent. The key to achieving real-time performance is our use of the advanced features included with recent commodity computer graphics hardware to implement the computations simultaneously (in parallel) across all reference image pixels on a plane. Our method is relatively simple to implement, and flexible in terms of the number and placement of cameras. With two cameras and …

38 citations
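The core of the plane-sweep approach can be sketched on the CPU. The toy version below warps the source image onto each candidate fronto-parallel plane via the plane-induced homography and keeps, per pixel, the depth with the lowest color difference; the paper's multi-resolution consistency tests and GPU parallelism are omitted, and the pose convention, cost function, and function name are assumptions made for illustration.

```python
# Single-scale CPU sketch of plane-sweep stereo (the paper evaluates all
# reference pixels in parallel on graphics hardware, with multi-resolution
# colour-consistency tests; here a plain squared difference is used).
import cv2
import numpy as np

def plane_sweep_depth(ref, src, K, R, t, depths):
    """ref, src: float32 grayscale images; K: shared 3x3 intrinsics;
    (R, t): pose mapping reference-camera points X to source-camera
    points R @ X + t; depths: candidate fronto-parallel plane depths."""
    h, w = ref.shape
    best_cost = np.full((h, w), np.inf, dtype=np.float32)
    best_depth = np.zeros((h, w), dtype=np.float32)
    n = np.array([[0.0, 0.0, 1.0]])   # fronto-parallel plane normal
    for d in depths:
        # Homography induced by the plane z = d, mapping reference pixels
        # to source pixels: H = K (R + t n^T / d) K^-1.
        H = K @ (R + (t.reshape(3, 1) @ n) / d) @ np.linalg.inv(K)
        # H maps ref -> src, so warp with WARP_INVERSE_MAP to resample
        # the source image into the reference view.
        warped = cv2.warpPerspective(
            src, H, (w, h), flags=cv2.INTER_LINEAR | cv2.WARP_INVERSE_MAP)
        cost = (ref - warped) ** 2     # photo-consistency measure
        better = cost < best_cost
        best_cost[better] = cost[better]
        best_depth[better] = d         # keep the most consistent plane
    return best_depth
```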

Journal ArticleDOI
TL;DR: A novel automatic method is presented for view synthesis from a triplet of uncalibrated images, based on trinocular edge matching followed by transfer by interpolation, occlusion detection and correction, and finally rendering.

38 citations
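Of the pipeline stages named in the TL;DR, "transfer by interpolation" is the easiest to illustrate. The toy function below is a simplifying assumption rather than the paper's trinocular formulation: it linearly interpolates matched point positions between two views to place them in a virtual in-between view.

```python
# Toy "transfer by interpolation": matched points are moved linearly
# between two views to position them in a virtual intermediate view.
# The paper transfers matched edges across an uncalibrated triplet.
import numpy as np

def transfer_points(pts_a, pts_b, s):
    """pts_a, pts_b: (N, 2) matched pixel positions in two views;
    s in [0, 1] selects the virtual viewpoint (0 = view A, 1 = view B)."""
    return (1.0 - s) * pts_a + s * pts_b
```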

Book ChapterDOI
23 Aug 2020
TL;DR: In this article, the authors introduce a method to convert stereo 360° images into a layered, multi-sphere image representation for 6DoF rendering with motion parallax and correct-in-all-directions disparity cues.
Abstract: We introduce a method to convert stereo 360° (omnidirectional stereo) imagery into a layered, multi-sphere image representation for six degree-of-freedom (6DoF) rendering. Stereo 360° imagery can be captured from multi-camera systems for virtual reality (VR), but lacks motion parallax and correct-in-all-directions disparity cues. Together, these can quickly lead to VR sickness when viewing content. One solution is to try and generate a format suitable for 6DoF rendering, such as by estimating depth. However, this raises questions as to how to handle disoccluded regions in dynamic scenes. Our approach is to simultaneously learn depth and disocclusions via a multi-sphere image representation, which can be rendered with correct 6DoF disparity and motion parallax in VR. This significantly improves comfort for the viewer, and can be inferred and rendered in real time on modern GPU hardware. Together, these move towards making VR video a more comfortable immersive medium.

38 citations
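The rendering side of the multi-sphere representation reduces to alpha compositing the concentric RGBA layers along each view ray, much like a multi-plane image. The sketch below shows only that compositing step, assuming the ray-sphere sampling has already produced per-layer colors and opacities; names and array shapes are illustrative.

```python
# Hedged sketch of multi-sphere image (MSI) compositing: RGBA sphere
# layers sampled along each view ray are alpha-composited back to front,
# which is what produces correct 6DoF parallax as the viewpoint moves.
import numpy as np

def composite_msi(layer_rgb, layer_alpha):
    """layer_rgb: (L, H, W, 3) float colours ordered far -> near;
    layer_alpha: (L, H, W) float opacities in [0, 1]."""
    out = np.zeros(layer_rgb.shape[1:], dtype=np.float32)
    for rgb, a in zip(layer_rgb, layer_alpha):
        a = a[..., None]
        out = rgb * a + out * (1.0 - a)   # standard "over" operator
    return out
```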

Proceedings ArticleDOI
28 May 2008
TL;DR: A simple objective measure of accuracy is presented in terms of structural registration error in view synthesis; it is applied to a data-set with known geometric accuracy, and a comparison is also demonstrated between two free-viewpoint video techniques across two prototype production studios.
Abstract: This paper addresses the problem of objectively measuring quality in free-viewpoint video production. The accuracy of scene reconstruction is typically limited and an evaluation of free-viewpoint video should explicitly consider the quality of image production. A simple objective measure of accuracy is presented in terms of structural registration error in view synthesis. This technique can be applied as a full-reference metric to measure the fidelity of view synthesis to a ground truth image or as a no-reference metric to measure the error in registering scene appearance in image-based rendering. The metric is applied to a data-set with known geometric accuracy and a comparison is also demonstrated between two free-viewpoint video techniques across two prototype production studios.

37 citations
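A full-reference variant of such a structural registration error can be approximated with dense displacement estimation. The sketch below is one plausible reading, not the paper's metric: it uses Farneback optical flow between the synthesized view and the ground-truth image and reports the mean displacement magnitude in pixels.

```python
# Hedged sketch of a full-reference registration-error measure: estimate
# a dense displacement field between synthesized and ground-truth views
# and report its mean magnitude (Farneback flow is a stand-in here).
import cv2
import numpy as np

def registration_error(synth_gray, truth_gray):
    """Both inputs: uint8 grayscale images of identical size."""
    flow = cv2.calcOpticalFlowFarneback(truth_gray, synth_gray, None,
                                        pyr_scale=0.5, levels=3, winsize=15,
                                        iterations=3, poly_n=5,
                                        poly_sigma=1.2, flags=0)
    mag = np.linalg.norm(flow, axis=2)  # per-pixel displacement (pixels)
    return float(mag.mean())            # lower = better structural fit
```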


Network Information
Related Topics (5)
Image segmentation: 79.6K papers, 1.8M citations (86% related)
Feature (computer vision): 128.2K papers, 1.7M citations (86% related)
Object detection: 46.1K papers, 1.3M citations (85% related)
Convolutional neural network: 74.7K papers, 2M citations (85% related)
Feature extraction: 111.8K papers, 2.1M citations (84% related)
Performance Metrics
No. of papers in the topic in previous years:

Year    Papers
2023    54
2022    117
2021    189
2020    158
2019    114
2018    102