Topic

View synthesis

About: View synthesis is a research topic. Over its lifetime, 1,701 publications have been published within this topic, receiving 42,333 citations.


Papers
Journal ArticleDOI
01 Jul 2014
TL;DR: The proposed method provides better perceptual 3D video quality with natural depth perception; a comparison with VSRS, a well-known multi-view video synthesis algorithm that requires six views to complete synthesis, is also presented.
Abstract: With the recent progress of multi-view devices and the corresponding signal processing techniques, the stereoscopic viewing experience has been introduced to the public with growing interest. To create depth perception in human vision, viewers require two different video sequences, one for each eye. These videos can either be captured by 3D-enabled cameras or synthesized as needed. The primary contribution of this paper is to establish two transformation models, for stationary scenes and non-stationary objects in a given view, respectively. The models can be used to produce the corresponding stereoscopic videos as a viewer would have seen them at the original scene. The transformation model that estimates depth information for stationary scenes is based on the vanishing point and vanishing lines of the given video. The transformation model for non-stationary regions combines motion analysis of those regions with the stationary-scene model to estimate depth. The performance of the models is evaluated using subjective 3D video quality evaluation and objective quality evaluation on the synthesized views. A performance comparison with the ground truth and with VSRS, a well-known multi-view video synthesis algorithm that requires six views to complete synthesis, is also presented. It is shown that the proposed method can provide better perceptual 3D video quality with natural depth perception.

3 citations
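
The stationary-scene depth model above is built on vanishing-point and vanishing-line geometry. A minimal sketch of that geometric idea follows (Python with NumPy; it is not the authors' implementation, and the flat-ground assumption, function name, and parameters are all illustrative): for a level pinhole camera viewing a ground plane, depth is inversely proportional to the image-row distance below the horizon, so a relative depth value can be assigned per row.

import numpy as np

def relative_depth_from_horizon(height, width, horizon_row):
    """Per-pixel relative depth in [0, 1] for a flat ground plane.

    Rows near the bottom of the frame (close to the camera) get values
    near 0; rows approaching the horizon row (the vanishing line of the
    ground plane) approach 1, and rows at or above it are clamped to 1.
    """
    rows = np.arange(height, dtype=np.float64).reshape(-1, 1)
    # Row distance below the horizon, clamped to one pixel so that rows
    # at or above the horizon take the maximum depth.
    below = np.maximum(rows - horizon_row, 1.0)
    # Under a pinhole model with a level camera, ground-plane depth is
    # inversely proportional to this distance.
    depth = 1.0 / below
    depth = (depth - depth.min()) / (depth.max() - depth.min())
    return np.repeat(depth, width, axis=1)

if __name__ == "__main__":
    d = relative_depth_from_horizon(height=480, width=640, horizon_row=200)
    print(d.shape, float(d.min()), float(d.max()))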

Journal ArticleDOI
TL;DR: The proposed view synthesis method first represents each reference view as a mesh and then finds the best disparity for each mesh element using stereo matching between reference views; experiments show that it synthesizes high-quality images and is suitable for 3-D video systems.
Abstract: A multiview video-based three-dimensional (3-D) video system offers a realistic impression and free view navigation to the user. Efficient compression and intermediate view synthesis are key technologies, since 3-D video systems deal with multiple views. We propose an intermediate view synthesis method using a rectangular multiview camera system that is well suited to realizing 3-D video systems. The rectangular multiview camera system not only offers free view navigation both horizontally and vertically but also can employ three reference views, namely left, right, and bottom, for intermediate view synthesis. The proposed view synthesis method first represents each reference view as a mesh and then finds the best disparity for each mesh element using stereo matching between reference views. Before stereo matching, we separate the virtual image to be synthesized into several regions to enhance the accuracy of the disparities. The mesh elements are classified into foreground and background groups by disparity value and then affine transformed. Experiments confirm that the proposed method synthesizes high-quality images and is suitable for 3-D video systems.

3 citations
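
The mesh-plus-stereo-matching step described above can be illustrated with a block-based disparity search. The sketch below (Python/NumPy; purely illustrative, with a made-up block size and search range, and a regular grid standing in for the paper's mesh) estimates one horizontal disparity per block by minimizing the sum of absolute differences (SAD) between two rectified reference views.

import numpy as np

def block_disparity(left, right, block=16, max_disp=32):
    """Estimate one disparity per block of `left` by SAD matching in `right`.

    left, right: 2-D grayscale arrays of equal shape.
    Returns an array of shape (H // block, W // block) of integer disparities.
    """
    left = left.astype(np.float64)
    right = right.astype(np.float64)
    h, w = left.shape
    rows, cols = h // block, w // block
    disp = np.zeros((rows, cols), dtype=np.int32)
    for by in range(rows):
        for bx in range(cols):
            y0, x0 = by * block, bx * block
            patch = left[y0:y0 + block, x0:x0 + block]
            best_d, best_cost = 0, np.inf
            # Search horizontal shifts that stay inside the right image.
            for d in range(min(max_disp, x0) + 1):
                cand = right[y0:y0 + block, x0 - d:x0 - d + block]
                cost = np.abs(patch - cand).sum()
                if cost < best_cost:
                    best_cost, best_d = cost, d
            disp[by, bx] = best_d
    return disp

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    base = rng.random((64, 96))
    shifted = np.roll(base, 4, axis=1)  # synthetic 4-pixel disparity
    print(block_disparity(shifted, base, block=16, max_disp=8))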

Proceedings Article
04 Oct 2009
TL;DR: The proposed compensation process is applied to a geometrically compensated picture to minimize the errors introduced by warping-based view synthesis, and it reduces the bitrate by up to 7% relative to view synthesis prediction based on a general adaptive filtering method.
Abstract: View synthesis prediction has been studied to achieve efficient inter-view prediction. Existing view synthesis prediction methods generate predicted pictures using pictures decoded at the other views together with geometric information of the scene. However, it is difficult to obtain such geometric information correctly. In addition, these conventional methods cannot compensate for the inter-view differences in image signals caused by individual camera characteristics and the non-Lambertian reflection of objects. The method proposed herein compensates for both the inter-view signal mismatch and incorrect depth information by using an asymmetrical adaptive filter and a weighted average of a Wiener filter and a median filter. The proposed compensation process is applied to a geometrically compensated picture to minimize the errors introduced by warping-based view synthesis. Experiments show that the proposed method reduces the bitrate by up to 7% relative to view synthesis prediction based on the general adaptive filtering method.

3 citations
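
The compensation idea above, blending differently filtered versions of the warped reference picture, can be sketched as follows (Python with SciPy; the fixed weight and filter sizes are illustrative, and SciPy's generic Wiener and median filters stand in for the adaptive filters derived in the paper).

import numpy as np
from scipy.signal import wiener
from scipy.ndimage import median_filter

def compensate_synthesized_picture(warped, weight=0.5, wiener_size=5, median_size=3):
    """Return a weighted average of Wiener- and median-filtered copies.

    warped: 2-D float array holding the geometrically compensated
            (warped) reference picture.
    weight: contribution of the Wiener-filtered image; (1 - weight)
            goes to the median-filtered image.
    """
    warped = warped.astype(np.float64)
    smoothed = wiener(warped, mysize=wiener_size)      # suppresses signal mismatch / noise
    robust = median_filter(warped, size=median_size)   # suppresses warping outliers
    return weight * smoothed + (1.0 - weight) * robust

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    pic = rng.random((64, 64))
    out = compensate_synthesized_picture(pic, weight=0.6)
    print(out.shape, float(out.mean()))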

Proceedings ArticleDOI
Paul Bao, D. Xu
19 Jul 2000
TL;DR: A new view synthesis technique uses 2D discrete wavelet-based view morphing to generate new views by linear interpolation; quantization can be embedded to compress the coefficients and reduce the morphing complexity.
Abstract: This paper presents a new view synthesis technique based on 2D discrete wavelet view morphing. View morphing relies entirely on pairwise images, without camera calibration or depth information. First, the fundamental matrix relating a pair of images is estimated. Then, using the fundamental matrix, the pair of image planes is rectified to be parallel and to have corresponding points lie on the same scanline, which makes it possible to generate new views by linear interpolation. The pre-warped images are then decomposed into a hierarchical structure with the wavelet transform. Corresponding coefficients of the two decomposed images are linearly interpolated to form the multiresolution representation of an intermediate view. Quantization techniques can be embedded at this stage to compress the coefficients and reduce the morphing complexity. Finally, for display, the compressed images are decoded and inverse wavelet transformed. A post-warping procedure transforms the interpolated views to their desired positions.

3 citations
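
The coefficient-domain interpolation step described above can be sketched with PyWavelets (illustrative only; a single-level Haar transform replaces the paper's hierarchical decomposition, and pre-warping, quantization, and post-warping are assumed to happen elsewhere): both pre-warped views are decomposed with a 2D DWT, corresponding coefficients are blended linearly, and the inverse transform yields the intermediate view.

import numpy as np
import pywt

def morph_intermediate_view(img_a, img_b, alpha=0.5, wavelet="haar"):
    """Blend two rectified, same-size grayscale views in the wavelet domain.

    alpha = 0 returns view A, alpha = 1 returns view B.
    """
    ca, (ha, va, da) = pywt.dwt2(img_a.astype(np.float64), wavelet)
    cb, (hb, vb, db) = pywt.dwt2(img_b.astype(np.float64), wavelet)

    def blend(x, y):
        # Linear interpolation of corresponding wavelet coefficients.
        return (1.0 - alpha) * x + alpha * y

    coeffs = (blend(ca, cb), (blend(ha, hb), blend(va, vb), blend(da, db)))
    return pywt.idwt2(coeffs, wavelet)

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    a = rng.random((128, 128))
    b = rng.random((128, 128))
    mid = morph_intermediate_view(a, b, alpha=0.5)
    print(mid.shape)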


Network Information
Related Topics (5)
Image segmentation: 79.6K papers, 1.8M citations (86% related)
Feature (computer vision): 128.2K papers, 1.7M citations (86% related)
Object detection: 46.1K papers, 1.3M citations (85% related)
Convolutional neural network: 74.7K papers, 2M citations (85% related)
Feature extraction: 111.8K papers, 2.1M citations (84% related)
Performance Metrics

No. of papers in the topic in previous years:

Year    Papers
2023    54
2022    117
2021    189
2020    158
2019    114
2018    102