Can 3D synthesized views be reliably assessed through usual subjective and objective evaluation protocols?
References
Depth-image-based rendering (DIBR), compression, and transmission for a new approach on 3D-TV
A new standardized method for objectively measuring video quality
An Image Inpainting Technique Based on the Fast Marching Method
Multi-View Video Plus Depth Representation and Coding
Frequently Asked Questions (13)
Q2. What are the future works mentioned in the paper "Can 3D synthesized views be reliably assessed through usual subjective and objective evaluation protocols?" ?
A registration process relative to the original view, coupled with weighted critical areas, could be investigated in future work to build a new metric. In addition, paired-comparison experiments should be held on still images and video sequences to refine the presented results.
Q3. What is the purpose of the experiment?
Absolute Category Rating (ACR) [16] was used to collect perceived quality scores: stimuli are presented in a random order and evaluated on a coarse-resolution rating scale.
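From ACR ratings, the Mean Opinion Score (MOS) for each stimulus is simply the mean of the observers' scores. A minimal sketch of that aggregation, with a normal-approximation 95% confidence interval; the ratings below are hypothetical and not taken from the paper:

```python
import numpy as np

def mos_with_ci(ratings):
    """Mean Opinion Score and 95% confidence interval from ACR ratings.

    `ratings` is a 1-D array of per-observer scores on the 5-point
    ACR scale (1 = bad ... 5 = excellent).
    """
    r = np.asarray(ratings, dtype=float)
    mos = r.mean()
    # Normal-approximation 95% CI half-width (1.96 * standard error).
    ci = 1.96 * r.std(ddof=1) / np.sqrt(len(r))
    return mos, ci

scores = [4, 5, 3, 4, 4, 5, 3, 4]  # hypothetical ratings for one stimulus
mos, ci = mos_with_ci(scores)
print(round(mos, 2), round(ci, 2))  # prints 4.0 0.52
```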
Q4. What is the definition of a critical problem in DIBR?
A critical problem in DIBR is that regions occluded in the original view may become visible in the “virtual” view, an event also referred to as disocclusion.
Q5. What is the underlying principle of 3D video?
The appreciation of 3D content relies on the stereopsis phenomenon: an observer must be presented with a pair of stereoscopic images exhibiting a strong binocular disparity.
Q6. What are some of the distortions that are mentioned in the article?
Among them are the keystone effect, which makes the image look like a trapezoid; the ghosting effect, a shadow-like artifact; and the cardboard effect, in which depth is perceived as unnatural, discrete, incoherent planes.
Q7. Why should depth be taken into account in a metric?
Depth should be taken into account in such a metric, as recently proposed in [8], because view synthesis produces geometric distortions.
Q8. What are the main objectives of the experiments?
The experiments have two main objectives: first, to determine the tested algorithms' performances, and second, to assess the reliability of objective metrics for 3D images.
Q9. What are the main findings of the paper?
The assessment of the seven test algorithms by objective measurements and subjective ratings shows that, among all tested objective metrics, pixel-based PSNR, WSNR, and NQM correlate best with the perceptual evaluation provided by MOS scores.
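As an illustration of the pixel-based PSNR mentioned above, here is a minimal sketch; the reference image and the noise distortion are synthetic assumptions standing in for a real reference view and a synthesized view:

```python
import numpy as np

def psnr(reference, synthesized, peak=255.0):
    """Pixel-based PSNR (dB) between a reference view and a synthesized view."""
    ref = np.asarray(reference, dtype=float)
    syn = np.asarray(synthesized, dtype=float)
    mse = np.mean((ref - syn) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(peak ** 2 / mse)

# Hypothetical data: a random 1024x768 "reference" and a mildly distorted copy.
rng = np.random.default_rng(0)
ref = rng.integers(0, 256, size=(768, 1024)).astype(float)
syn = ref + rng.normal(0.0, 2.0, size=ref.shape)
print(psnr(ref, syn) > 35.0)  # prints True
```

In practice the metric would be computed between the synthesized view and the original camera view at the same position.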
Q10. What should be done in future work to build a new metric?
In addition, paired-comparison experiments should be held on still images and video sequences in the future to refine the presented results.
Q11. What are the three test sequences used to generate the viewpoints?
Three test sequences were used to generate four different viewpoints each, i.e. twelve synthesized sequences per test algorithm (84 synthesized sequences in total): Book Arrival (1024×768, 16 cameras with 6.5 cm spacing), Lovebird1 (1024×768, 12 cameras with 3.5 cm spacing), and Newspaper (1024×768, 9 cameras with 5 cm spacing).
Q12. What is the way to solve the problem of disocclusion in 3D video?
In the absence of original image data, two extrapolation paradigms address this inherent problem: 1) preprocess the depth information so that no disocclusion occurs in the "virtual" view, or 2) replace the missing image areas (holes) with suitable known image information.
Q13. What is the significance of the Student's t-test?
In order to determine whether classes of algorithms could emerge, a Student's t-test was performed over the MOS scores for each test algorithm: in Table 1, statistically dependent pairs can be distinguished.