HDR image compression: A new challenge for objective quality metrics
References
Image quality assessment: from error visibility to structural similarity
A universal image quality index
A law of comparative judgment
FSIM: A Feature Similarity Index for Image Quality Assessment
Image information and visual quality
Frequently Asked Questions (15)
Q2. What are the properties of the IQR estimation of MOS?
To be compliant with the standard procedure for evaluating the performance of objective metrics [18], the following properties of the IQR estimation of MOS should be considered: accuracy, monotonicity, and consistency.
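In standard practice, accuracy is typically quantified with the Pearson linear correlation coefficient, monotonicity with the Spearman rank-order correlation, and consistency with an error measure such as RMSE between metric predictions and subjective scores. A minimal stdlib-only sketch of these three statistics (the tie-free ranking is a simplification; the exact procedure prescribed by [18] may differ):

```python
from statistics import mean, stdev

def pearson(x, y):
    # Pearson linear correlation coefficient (accuracy)
    mx, my = mean(x), mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return num / ((len(x) - 1) * stdev(x) * stdev(y))

def ranks(v):
    # simple ranking without tie handling, for illustration only
    order = sorted(range(len(v)), key=lambda i: v[i])
    r = [0] * len(v)
    for rank, i in enumerate(order):
        r[i] = rank + 1
    return r

def spearman(x, y):
    # Spearman rank-order correlation (monotonicity)
    return pearson(ranks(x), ranks(y))

def rmse(x, y):
    # root-mean-square error between predictions and scores (consistency)
    return (sum((a - b) ** 2 for a, b in zip(x, y)) / len(x)) ** 0.5
```

In benchmarking, `x` would hold the objective metric's outputs and `y` the subjective scores, usually after a monotonic regression step that this sketch omits.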
Q3. What was the monitor for the test?
To display the test stimuli, a full HD (1920×1080) 42" Dolby Research HDR RGB backlight dual modulation display (aka Pulsar) was used.
Q4. What can be used to evaluate the subjective metrics?
The results of the subjective tests can be used as ground truth to evaluate how well the objective metrics estimate perceived quality.
Q5. What was the color temperature of the background walls and curtains?
The test room is equipped with a controlled lighting system with a 6500 K color temperature, and all the background walls and curtains in the test area were mid-grey.
Q6. What was the metric used to compute the CIEDE2000 color difference?
To compute the CIEDE2000 color difference, all HDR images were converted to the CIELAB color space using Banterle's HDR toolbox for MATLAB.
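Once images are in CIELAB, a color difference is a per-pixel distance between the two Lab triples. The full CIEDE2000 formula adds lightness, chroma, and hue weighting terms; as a much simpler stand-in, the older Delta-E 1976 measure is just the Euclidean distance in Lab space:

```python
def delta_e_76(lab1, lab2):
    # Euclidean distance between two CIELAB triples (L*, a*, b*).
    # This is the Delta-E 1976 formula, shown as a simplified stand-in;
    # CIEDE2000 (used in the study) adds perceptual weighting on top.
    return sum((a - b) ** 2 for a, b in zip(lab1, lab2)) ** 0.5
```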
Q7. How was the image prepared for subjective experiments?
To prepare images for subjective experiments, both HDR and LDR versions were first downscaled by a factor of two with bicubic interpolation. (Banterle's HDR Toolbox: http://www.github.com/banterle/HDR_Toolbox)
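Downscaling by a factor of two maps each 2×2 block of source pixels to one output pixel. A minimal sketch using a 2×2 box filter (a simpler stand-in for the bicubic interpolation the study actually used, which weights a 4×4 neighborhood with a cubic kernel):

```python
def downscale_2x(img):
    # Halve width and height by averaging each 2x2 block of pixels.
    # `img` is a 2D list of scalar pixel values; a box filter is used
    # here as a simple stand-in for bicubic interpolation.
    h, w = len(img), len(img[0])
    return [[(img[2 * y][2 * x] + img[2 * y][2 * x + 1]
              + img[2 * y + 1][2 * x] + img[2 * y + 1][2 * x + 1]) / 4.0
             for x in range(w // 2)]
            for y in range(h // 2)]
```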
Q8. What is the metric for LDR?
In many benchmarks performed on LDR content, VIFp is often among the best metrics and shows lower content dependency than other metrics [19].
Q9. What was the purpose of the experiment?
Before the experiment, a consent form was handed to subjects for signature and oral instructions were provided to explain their tasks.
Q10. Why was the paired comparison evaluation methodology selected?
The paired comparison evaluation methodology was selected for its high accuracy and reliability in constructing a scale of perceptual preferences.
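Paired-comparison votes are typically converted to a perceptual scale with Thurstone's law of comparative judgment (cited in the references above). Under the common Case V assumptions, each stimulus's scale value is the average z-score of the proportions of times it was preferred over every other stimulus. A stdlib-only sketch (the 0.01/0.99 clipping of unanimous proportions is an assumption to keep the inverse CDF finite):

```python
from statistics import NormalDist

def thurstone_case_v(wins):
    # wins[i][j]: number of times stimulus i was preferred over j.
    # Returns one scale value per stimulus (Thurstone Case V).
    n = len(wins)
    inv = NormalDist().inv_cdf
    scores = []
    for i in range(n):
        zs = []
        for j in range(n):
            if i == j:
                continue
            total = wins[i][j] + wins[j][i]
            # clip proportions away from 0 and 1 so inv_cdf stays finite
            p = min(max(wins[i][j] / total, 0.01), 0.99)
            zs.append(inv(p))
        scores.append(sum(zs) / len(zs))
    return scores
```

The resulting values form an interval scale of perceptual preference; only differences between them are meaningful.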
Q11. What are the common metrics used to predict perceived quality of HDR content?
Results show that HDR images are challenging for objective metrics and that the most commonly used metrics, e.g., PSNR, SSIM, and MS-SSIM, predict perceived quality of HDR content unreliably.
Q12. How many paired comparisons were made for each of the 5 contents?
For each of the 5 contents, all the possible combinations of the 4 bit rates were considered, i.e., 6 pairs for each content, leading to a total of 5 × 6 = 30 paired comparisons for all contents.
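The pair count follows directly from the number of 2-combinations of the 4 bit rates, C(4, 2) = 6, repeated for each of the 5 contents. A quick check (bit-rate labels are illustrative):

```python
from itertools import combinations

# Four bit rates per content (labels are illustrative)
bitrates = ["R1", "R2", "R3", "R4"]
pairs_per_content = list(combinations(bitrates, 2))  # C(4, 2) = 6 pairs
total_comparisons = 5 * len(pairs_per_content)       # 5 contents -> 30
```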
Q13. What is the metric used to assess HDR image quality?
In this study, the performance of a set of 13 full-reference objective metrics in predicting HDR image quality was assessed, including MSE (Mean Squared Error) and PSNR (Peak Signal-to-Noise Ratio), among others.
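The two simplest metrics in the set, MSE and PSNR, can be sketched in a few lines (the `peak` value of 255 assumes 8-bit LDR data; for HDR, a perceptually encoded range is typically substituted for raw luminance):

```python
import math

def mse(ref, dist):
    # Mean Squared Error between a reference and a distorted signal
    return sum((r - d) ** 2 for r, d in zip(ref, dist)) / len(ref)

def psnr(ref, dist, peak=255.0):
    # Peak Signal-to-Noise Ratio in dB; `peak` is the maximum signal
    # value (255 for 8-bit content -- an assumption, not taken from
    # the study, which targets HDR data)
    e = mse(ref, dist)
    return float("inf") if e == 0 else 10.0 * math.log10(peak ** 2 / e)
```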
Q14. How many HDR images were used in the experiments?
Five HDR images of different dynamic ranges (computed using Banterle's HDR toolbox for MATLAB), representing different typical scenes, were used in the experiments (see Figure 1 and Table 1 for details).
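Dynamic range is commonly reported in orders of magnitude, i.e., the base-10 logarithm of the ratio between the brightest and darkest luminance values in the image. A minimal sketch of that statistic (the epsilon clamp for zero-luminance pixels is an assumption; the toolbox's exact definition may differ):

```python
import math

def dynamic_range(luminance, eps=1e-6):
    # Dynamic range in orders of magnitude: log10 of the ratio between
    # the brightest and darkest luminance values, with near-zero values
    # clamped to `eps` to keep the ratio finite.
    lo = max(min(luminance), eps)
    hi = max(luminance)
    return math.log10(hi / lo)
```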
Q15. What was the objective metric used to determine the quality of the HDR images?
In this study, all HDR images were converted to the Y′CbCr color space [17], and the metrics were applied to the Y′, Cb, and Cr components separately.
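A conversion of that kind maps gamma-encoded R′G′B′ to a luma component Y′ and two chroma differences Cb and Cr. A sketch using the ITU-R BT.709 luma coefficients (an assumption for illustration; the exact transfer function and matrix prescribed by [17] may differ):

```python
def rgb_to_ycbcr(r, g, b):
    # Gamma-encoded R'G'B' in [0, 1] -> Y'CbCr with BT.709 luma
    # coefficients (an assumption; reference [17] may specify a
    # different matrix). Cb and Cr are centered on 0 here.
    y = 0.2126 * r + 0.7152 * g + 0.0722 * b
    cb = (b - y) / 1.8556
    cr = (r - y) / 1.5748
    return y, cb, cr
```

Applying a metric per component then just means running it once on each of the three planes and reporting the scores separately.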