Topic

View synthesis

About: View synthesis is a research topic. Over its lifetime, 1,701 publications have been published within this topic, receiving 42,333 citations.


Papers
Proceedings Article
01 Sep 1997
TL;DR: It is shown that two closely spaced example images alone are sufficient in practice to synthesize a significant viewing cone, demonstrating that an object can be represented by a relatively small number of model images for the purpose of cheap and fast viewers that run on standard hardware.
Abstract: We present a new method for rendering novel images of flexible 3D objects from a small number of example images in correspondence. The strength of the method is the ability to synthesize images whose viewing position lies significantly outside the viewing cone of the example images (“view extrapolation”), yet without ever modeling the 3D structure of the scene. The method relies on synthesizing a chain of “trilinear tensors” that governs the warping function from the example images to the novel image, together with a multi-dimensional interpolation function that synthesizes the non-rigid motions of the viewed object from the virtual camera position. We show that two closely spaced example images alone are sufficient in practice to synthesize a significant viewing cone, demonstrating that an object can be represented by a relatively small number of model images for the purpose of cheap and fast viewers that run on standard hardware.
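The primitive behind this kind of tensor-based rendering is point transfer: given a pair of corresponding pixels in the two example images and a 3x3x3 trilinear (trifocal) tensor relating them to the virtual camera, the pixel's position in the novel view follows without any 3D reconstruction. The sketch below illustrates only that single transfer step, not the paper's chained-tensor and interpolation pipeline; the placeholder tensor, the choice of line through the second-view point, and the function name are assumptions for illustration.

```python
import numpy as np

def transfer_point(T, p1, p2):
    """Transfer a correspondence (p1 in view 1, p2 in view 2) into a novel
    view using a 3x3x3 trilinear (trifocal) tensor T (sketch only).

    Classical point transfer: pick any line l2 through p2 in the second view,
    then the novel-view point is
        p3[k] = sum_{i,j} p1[i] * l2[j] * T[i, j, k]   (up to scale).
    p1 and p2 are homogeneous 3-vectors.
    """
    x, _, w = p2
    l2 = np.array([w, 0.0, -x])          # a line through p2 (assumed non-degenerate)
    p3 = np.einsum('i,j,ijk->k', p1, l2, T)
    return p3 / p3[2]                    # normalize homogeneous coordinates

# Hypothetical usage with a placeholder tensor, just to show the call shape.
T = np.random.rand(3, 3, 3)              # would be derived from the example images
p1 = np.array([120.0, 80.0, 1.0])        # pixel in example image 1
p2 = np.array([118.5, 80.2, 1.0])        # corresponding pixel in example image 2
print(transfer_point(T, p1, p2))         # transferred pixel in the novel view
```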

26 citations

Journal Article
TL;DR: The experimental results show that the proposed scheme not only achieves high view synthesis performance, but also reduces the computational complexity of encoding.
Abstract: In 3-D video, view synthesis with depth-image-based rendering is employed to generate any virtual view between available camera views. Distortions in the depth map induce geometry changes in the virtual views and thus degrade the performance of view synthesis. This paper proposes a depth map coding method to improve the performance of view synthesis based on distortion analyses. The major technical innovation of this paper is to formulate the maximum tolerable depth distortion (MTDD) and the depth disocclusion mask (DDM), since such depth sensitivity for view synthesis and inter-view redundancy can be well utilized in coding. To be more specific, we define two different encoders (a base encoder and a side encoder) for the depth maps in the left and right views, respectively. For base encoding, different types of coding units are extracted based on the distribution of MTDD and assigned different quantization parameters for coding. For side encoding, a warped-SKIP mode is designed to remove inter-view redundancy based on the distribution of DDM. The experimental results show that the proposed scheme not only achieves high view synthesis performance, but also reduces the computational complexity of encoding.
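The paper derives MTDD from its own distortion analysis; the sketch below only illustrates the underlying geometric idea under the common 8-bit depth-to-disparity model: a depth-level error shifts the warped pixel linearly, so the tolerable error at a pixel is whatever keeps the rounded warp position unchanged. The function name and all parameter values are assumptions for illustration, not the paper's formulation.

```python
import numpy as np

def mtdd_map(depth_levels, f, baseline, z_near, z_far):
    """Per-pixel maximum tolerable depth distortion, in 8-bit depth levels
    (illustrative simplification).

    With the common model  d(v) = f*B*(v/255*(1/z_near - 1/z_far) + 1/z_far),
    a depth-level error dv shifts the warped pixel by k*dv, where
    k = f*B*(1/z_near - 1/z_far)/255.  The MTDD of a pixel is taken here as
    the largest dv that keeps the rounded warped position unchanged.
    """
    h, w = depth_levels.shape
    x_coords = np.tile(np.arange(w, dtype=np.float64), (h, 1))
    k = f * baseline * (1.0 / z_near - 1.0 / z_far) / 255.0
    disparity = k * depth_levels + f * baseline / z_far
    warped = x_coords - disparity            # sub-pixel warped position
    nearest = np.round(warped)               # integer pixel it maps to
    slack = 0.5 - np.abs(warped - nearest)   # room before the target pixel changes
    return slack / k                         # tolerable depth-level error per pixel

# Hypothetical usage on a random depth map (camera parameters are placeholders).
depth = np.random.randint(0, 256, size=(4, 6)).astype(np.float64)
print(mtdd_map(depth, f=1000.0, baseline=0.05, z_near=1.0, z_far=10.0))
```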

26 citations

Proceedings Article
29 Oct 2013
TL;DR: A new Joint Texture-Depth Inpainting (JTDI) algorithm is proposed that simultaneously fills in missing texture and depth pixels; experiments show that JTDI outperforms two previous inpainting schemes that either do not use available depth information during inpainting or depend on the availability of a good depth map at the virtual view for good inpainting performance.
Abstract: Transmitting texture and depth maps from one or more reference views enables a user to freely choose virtual viewpoints from which to synthesize images for observation via depth-image-based rendering (DIBR). In each DIBR-synthesized image, however, there remain disocclusion holes with missing pixels corresponding to spatial regions occluded from view in the reference images. To complete these holes, unlike previous schemes that rely heavily (and unrealistically) on the availability of a high-quality depth map in the virtual view for inpainting of the corresponding texture map, in this paper a new Joint Texture-Depth Inpainting (JTDI) algorithm is proposed that simultaneously fills in missing texture and depth pixels. Specifically, we first use available partial depth information to compute priority terms to identify the next target pixel patch in a disocclusion hole for inpainting. Then, after identifying the best-matched texture patch in the known pixel region via template matching for texture inpainting, the variance of the corresponding depth patch is copied to the target depth patch for depth inpainting. Experimental results show that JTDI outperforms two previous inpainting schemes that either do not use available depth information during inpainting or depend on the availability of a good depth map at the virtual view for good inpainting performance.
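A heavily simplified sketch of one iteration of the kind of joint texture-depth inpainting described above: available depth ranks hole-boundary patches (here simply preferring patches whose known depth is larger, i.e. farther background), the best source patch is found by SSD template matching on the known texture pixels, and both texture and the matched depth are copied in. The priority rule and the direct depth copy are assumptions standing in for the paper's exact priority term and depth step.

```python
import numpy as np

PATCH = 9                # assumed patch size, must be odd
H = PATCH // 2

def one_inpainting_step(texture, depth, hole):
    """One illustrative JTDI-style iteration (sketch, not the paper's algorithm).

    texture: HxWx3 float array, depth: HxW float array,
    hole: HxW bool array (True = missing pixel). Returns False when done.
    """
    h, w = depth.shape
    # 1. Pick the target patch: hole-boundary pixel whose known depth is largest.
    best, target = -np.inf, None
    for y, x in zip(*np.where(hole)):
        if H <= y < h - H and H <= x < w - H:
            known = ~hole[y-H:y+H+1, x-H:x+H+1]
            if known.any() and not known.all():
                prio = depth[y-H:y+H+1, x-H:x+H+1][known].mean()  # assumed priority
                if prio > best:
                    best, target = prio, (y, x)
    if target is None:
        return False
    ty, tx = target
    t_tex = texture[ty-H:ty+H+1, tx-H:tx+H+1]
    t_known = ~hole[ty-H:ty+H+1, tx-H:tx+H+1]
    # 2. Template matching: best fully-known source patch, SSD over known pixels.
    best_ssd, src = np.inf, None
    for y in range(H, h - H):
        for x in range(H, w - H):
            if hole[y-H:y+H+1, x-H:x+H+1].any():
                continue
            ssd = ((texture[y-H:y+H+1, x-H:x+H+1] - t_tex)[t_known] ** 2).sum()
            if ssd < best_ssd:
                best_ssd, src = ssd, (y, x)
    if src is None:
        return False
    sy, sx = src
    # 3. Copy texture and the corresponding depth into the missing pixels.
    miss = ~t_known
    texture[ty-H:ty+H+1, tx-H:tx+H+1][miss] = texture[sy-H:sy+H+1, sx-H:sx+H+1][miss]
    depth[ty-H:ty+H+1, tx-H:tx+H+1][miss] = depth[sy-H:sy+H+1, sx-H:sx+H+1][miss]
    hole[ty-H:ty+H+1, tx-H:tx+H+1] = False
    return True
```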

26 citations

Proceedings Article
TL;DR: This paper examines how to generate new views so that the perceived depth is similar to the original scene depth, and proposes a method to detect and reduce artifacts in the third and last step, where the artifacts are created by errors in the disparity computed in the first step.
Abstract: The 3D shape perceived from viewing a stereoscopic movie depends on the viewing conditions, most notably on the screen size and distance, and depth and size distortions appear because of the differences between the shooting and viewing geometries. When the shooting geometry is constrained, or when the same stereoscopic movie must be displayed with different viewing geometries (e.g. in a movie theater and on a 3DTV), these depth distortions may be reduced by novel view synthesis techniques. They usually involve three steps: computing the stereo disparity, computing a disparity-dependent 2D mapping from the original stereo pair to the synthesized views, and finally composing the synthesized views. In this paper, we focus on the second and third steps: we examine how to generate new views so that the perceived depth is similar to the original scene depth, and we propose a method to detect and reduce artifacts in the third and last step, where the artifacts are created by errors in the disparity computed in the first step.
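The mapping in the second step rests on the standard stereoscopic viewing-geometry relation between on-screen parallax and perceived depth. The sketch below shows that relation and its inverse, which is what lets one choose the disparity that places a point at a desired perceived depth; the parameter values are placeholders and this is not necessarily the paper's exact mapping.

```python
def perceived_depth(parallax_mm, eye_sep_mm=65.0, view_dist_mm=3000.0):
    """Depth (mm from the viewer) at which a point with the given on-screen
    parallax is perceived: Z = e * D / (e - p).  Positive (uncrossed)
    parallax pushes the point behind the screen.  Defaults are placeholders."""
    return eye_sep_mm * view_dist_mm / (eye_sep_mm - parallax_mm)

def parallax_for_depth(target_depth_mm, eye_sep_mm=65.0, view_dist_mm=3000.0):
    """Inverse relation: on-screen parallax that makes a point appear at
    target_depth_mm, i.e. p = e * (1 - D / Z)."""
    return eye_sep_mm * (1.0 - view_dist_mm / target_depth_mm)

# Illustrative check: a point meant to appear 1 m behind a screen viewed from
# 3 m needs about 16 mm of positive parallax.
p = parallax_for_depth(4000.0)
print(p, perceived_depth(p))   # ~16.25 mm, 4000.0 mm
```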

25 citations

Proceedings Article
29 Dec 2011
TL;DR: This paper proposes a novel object-based LDI representation that improves the quality of synthesized virtual views in a rate-constrained context; pixels from each LDI layer are reorganised to enhance depth continuity.
Abstract: Layered Depth Image (LDI) representations are attractive compact representations for multi-view videos. Any virtual viewpoint can be rendered from an LDI by using view synthesis techniques. However, rendering from a classical LDI leads to annoying visual artifacts, such as cracks and disocclusions. Visual quality gets even worse after a DCT-based compression of the LDI, because of blurring effects on depth discontinuities. In this paper, we propose a novel object-based LDI representation that improves the quality of synthesized virtual views in a rate-constrained context. Pixels from each LDI layer are reorganised to enhance depth continuity.
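For reference, a minimal sketch of the baseline LDI representation this work builds on: each pixel stores a list of (depth, color) samples along its viewing ray, and a virtual view is rendered by forward-warping every sample and z-buffering. The disparity model, class design, and parameters are assumptions for illustration; the paper's object-based reorganisation of layer pixels is not reproduced here.

```python
import numpy as np

class LayeredDepthImage:
    """Minimal Layered Depth Image: each pixel holds a list of (depth, color)
    samples seen along its viewing ray, nearest first."""

    def __init__(self, height, width):
        self.height, self.width = height, width
        self.samples = [[[] for _ in range(width)] for _ in range(height)]

    def add_sample(self, y, x, depth, color):
        self.samples[y][x].append((depth, color))
        self.samples[y][x].sort(key=lambda s: s[0])   # keep nearest-first order

    def render(self, baseline, focal):
        """Forward-warp every sample to a virtual camera shifted horizontally
        by `baseline`, using disparity = focal * baseline / depth (assumed
        model), keeping the nearest sample per target pixel (z-buffer)."""
        out = np.zeros((self.height, self.width, 3))
        zbuf = np.full((self.height, self.width), np.inf)
        for y in range(self.height):
            for x in range(self.width):
                for depth, color in self.samples[y][x]:
                    xt = int(round(x - focal * baseline / depth))
                    if 0 <= xt < self.width and depth < zbuf[y, xt]:
                        zbuf[y, xt] = depth
                        out[y, xt] = color
        return out

# Hypothetical usage: two samples on one ray (foreground plus occluded background).
ldi = LayeredDepthImage(1, 8)
ldi.add_sample(0, 4, depth=2.0, color=(1, 0, 0))   # near, red
ldi.add_sample(0, 4, depth=8.0, color=(0, 0, 1))   # far, blue (fills the disocclusion)
print(ldi.render(baseline=0.2, focal=20.0)[0])
```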

25 citations


Network Information
Related Topics (5)
Image segmentation: 79.6K papers, 1.8M citations, 86% related
Feature (computer vision): 128.2K papers, 1.7M citations, 86% related
Object detection: 46.1K papers, 1.3M citations, 85% related
Convolutional neural network: 74.7K papers, 2M citations, 85% related
Feature extraction: 111.8K papers, 2.1M citations, 84% related
Performance Metrics
No. of papers in the topic in previous years:
Year    Papers
2023    54
2022    117
2021    189
2020    158
2019    114
2018    102