
View synthesis

About: View synthesis is a research topic. Over the lifetime, 1701 publications have been published within this topic receiving 42333 citations.


Papers
Journal ArticleDOI
TL;DR: A new objective quality assessment method for retargeted stereopairs, combining image quality and depth perception measures, is presented; experiments on the NBU SIRQA and SIRD databases demonstrate its superiority.
Abstract: Stereoscopic Image Retargeting (SIR) aims to adapt stereoscopic images and videos to 3D display devices with various aspect ratios by emphasizing the important content while retaining the surrounding context with minimal visual distortion. To address the issue of SIR evaluation, this paper presents a new objective quality assessment method for retargeted stereopairs that combines image quality and depth perception measures. Specifically, the image quality measure is computed between the source views and the retargeted intermediate views generated by a view synthesis method, characterizing the geometric distortion and content loss of the retargeted stereopair, while several depth-aware features are extracted to measure the visual comfort/discomfort and depth sensation experienced when a human views a 3D scene. The extracted features are then fused into an overall perceptual quality prediction. Experimental results on the NBU SIRQA and SIRD databases verify the superiority of our method.
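The fusion step described above (combining image-quality and depth-aware features into one perceptual score) can be sketched as follows. This is only an illustration: the feature values are randomly generated stand-ins, and ordinary least squares is used as a placeholder fusion model, since the paper's actual regressor is not specified here.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical data: 50 retargeted stereopairs, each described by one
# image-quality score and three depth-aware features (all invented here).
X = rng.random((50, 4))
# Synthetic subjective scores: a noisy linear mix of those features.
mos = X @ np.array([0.5, 0.2, 0.2, 0.1]) + 0.05 * rng.standard_normal(50)

# Fuse the features into one predicted quality score with ordinary
# least squares (a stand-in for the paper's fusion model).
A = np.hstack([X, np.ones((50, 1))])          # add an intercept column
coef, *_ = np.linalg.lstsq(A, mos, rcond=None)

def predict(feats):
    """Overall perceptual quality prediction for one stereopair."""
    return float(np.append(feats, 1.0) @ coef)
```

With features that actually correlate with subjective scores, the learned weights make `predict` an overall quality estimate for an unseen retargeted stereopair.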

5 citations

Proceedings ArticleDOI
13 Oct 2012
TL;DR: This paper presents a novel approach to uncalibrated view synthesis that overcomes the sensitivity to the epipole of existing methods by following an interpolate-then-derectify scheme, as opposed to the previous derectify-then-interpolate strategy.
Abstract: This paper presents a novel approach to uncalibrated view synthesis that overcomes the sensitivity to the epipole of existing methods. The approach follows an interpolate-then-derectify scheme, as opposed to the previous derectify-then-interpolate strategy. Both approaches generate a trajectory in an uncalibrated framework that is related to a specific Euclidean counterpart, but our method yields a warping map that is more resilient to errors in the estimate of the epipole, as confirmed by synthetic experiments.

5 citations

Proceedings ArticleDOI
02 Jul 2007
TL;DR: Experimental results show that the newly developed algorithm can improve image quality of synthesized virtual views with a PSNR gain of up to 0.65 dB.
Abstract: A framework for virtual view synthesis based on multiple images is presented in this paper. Compared to conventional view synthesis based on stereoscopic image pairs, a postprocessing algorithm for disparity refinement is added to exploit information contained in multiple images captured with a multi-view camera configuration. The principle for disparity refinement is examined, leading to the development of a novel algorithm. Experimental results show that the newly developed algorithm can improve image quality of synthesized virtual views with a PSNR gain of up to 0.65 dB.
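The PSNR gain quoted above is measured with the standard peak signal-to-noise ratio between a reference view and the synthesized virtual view. As a minimal sketch (the function name and array inputs are illustrative, not from the paper):

```python
import numpy as np

def psnr(reference, synthesized, peak=255.0):
    """Peak signal-to-noise ratio in dB between a reference view and a
    synthesized virtual view: 10 * log10(peak^2 / MSE)."""
    diff = reference.astype(np.float64) - synthesized.astype(np.float64)
    mse = np.mean(diff ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)
```

A "gain of up to 0.65 dB" then means the PSNR of the view synthesized with refined disparities exceeds that of the unrefined result by up to 0.65 when both are compared against the same reference view.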

5 citations

Journal ArticleDOI
TL;DR: A new local filter (named SynBF) is developed and applied to the synthesis view after it has been generated; it has an expression similar to that of the BF but not the exact same weight terms, inspired by the finding that not all pixels in the synthesis view are significantly affected by noise.
Abstract: In 3-D video systems, noise in the texture and depth videos of reference views may not be removed (Scenario 1) or not fully removed (Scenario 2) by prefiltering methods before the view synthesis procedure. In these scenarios, the noise is transferred to the generated synthesis view. After investigating the noise model of the synthesis view, we conclude that the noise in the synthesis view not only causes fluctuation in the photometric values of pixels in the range domain but also shifts the positions of neighboring pixels in the spatial domain compared with natural images. It consequently damages the textural content near edges in the synthesis view, for which the popular local filters of natural images, that is, the bilateral filter (BF) and the guided filter, do not work well. In this paper, we develop a new local filter (named SynBF) that is applied to the synthesis view after it has been generated; it has an expression similar to that of the BF but not the exact same weight terms. On one hand, the spatial term of the classical BF is reused directly due to its robustness to noise, giving high weights to pixels spatially close to the pixel being filtered. On the other hand, a reliability term is designed that gives high weights to pixels unlikely to be affected by noise, inspired by the finding that not all pixels in the synthesis view are significantly affected by noise. In this way, true edge profiles are protected during filtering. Experiments are conducted on a set of synthesis views for both scenarios and compared with the two local filters above, verifying the effectiveness of SynBF in removing noise and protecting edge profiles. The proposed method can be considered a supplement to prefiltering methods for texture/depth videos in 3-D video systems.
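The weighting scheme described above (a bilateral-style filter whose range term is replaced by a reliability term) can be sketched as follows. This is not the paper's SynBF: the function, the reliability map, and all parameter values are hypothetical stand-ins that only illustrate the structure of combining a Gaussian spatial weight with a per-pixel reliability weight.

```python
import numpy as np

def synbf_like_filter(image, reliability, radius=2, sigma_s=1.5):
    """BF-style local filter: keeps the classical Gaussian spatial term
    and replaces the range term with a caller-supplied reliability map
    in [0, 1] (high where a pixel is unlikely to be noise-affected)."""
    h, w = image.shape
    img = np.pad(image.astype(np.float64), radius, mode="edge")
    rel = np.pad(reliability.astype(np.float64), radius, mode="edge")
    # Precompute the Gaussian spatial kernel once.
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs ** 2 + ys ** 2) / (2.0 * sigma_s ** 2))
    out = np.empty((h, w), dtype=np.float64)
    for y in range(h):
        for x in range(w):
            patch = img[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            w_rel = rel[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            weights = spatial * w_rel
            # Unreliable (likely noisy) neighbors contribute little, so
            # true edge profiles are less smeared than with a plain BF.
            out[y, x] = np.sum(weights * patch) / (np.sum(weights) + 1e-12)
    return out
```

In this sketch the reliability map plays the role the range term plays in a standard bilateral filter, which matches the design intent stated in the abstract: spatial closeness and reliability jointly decide each neighbor's weight.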

5 citations

Patent
02 Jul 2014
TL;DR: In this paper, a method for decoding a video including a plurality of views comprises configuring a base merge motion candidate list using motion information of neighboring blocks and a time correspondence block of a current block, configuring an extended merge motion information list, and determining whether neighboring-block motion information contained in the base merge motion candidate list is derived through view synthesis prediction.
Abstract: A method for decoding a video including a plurality of views, according to one embodiment of the present invention, comprises the steps of: configuring a base merge motion candidate list by using motion information of neighboring blocks and a time correspondence block of a current block; configuring an extended merge motion information list by using motion information of a depth information map and a video view different from the current block; and determining whether neighboring block motion information contained in the base merge motion candidate list is derived through view synthesis prediction.

4 citations


Network Information
Related Topics (5)
Image segmentation: 79.6K papers, 1.8M citations (86% related)
Feature (computer vision): 128.2K papers, 1.7M citations (86% related)
Object detection: 46.1K papers, 1.3M citations (85% related)
Convolutional neural network: 74.7K papers, 2M citations (85% related)
Feature extraction: 111.8K papers, 2.1M citations (84% related)
Performance
Metrics
No. of papers in the topic in previous years:

Year | Papers
2023 | 54
2022 | 117
2021 | 189
2020 | 158
2019 | 114
2018 | 102