Subjective assessment of H.264/AVC video sequences transmitted over a noisy channel
Citations
Video Quality Assessment on Mobile Devices: Subjective, Behavioral and Objective Studies
A Completely Blind Video Integrity Oracle
Best Practices for QoE Crowdtesting: QoE Assessment With Crowdsourcing
Analysis of Public Image and Video Databases for Quality Assessment
Spatiotemporal Statistics for Video Quality Assessment
References
Capacity of a burst-noise channel
Analysis of video transmission over lossy channels
Video CODEC Design
The effects of jitter on the perceptual quality of video
Frequently Asked Questions (16)
Q2. What future work have the authors mentioned in the paper "Subjective assessment of H.264/AVC video sequences transmitted over a noisy channel"?
Future work will include extending the study to 4CIF and HD resolution data, as well as increasing the number of subjects. Finally, other test methodologies, such as continuous quality evaluation, will be taken into account.
Q3. What is the importance of a good description of the test environment?
Accurate control and description of the test environment is necessary to assure the reproducibility of the test activity and to compare results across different laboratories and test sessions.
Q4. What is the reason why the channel might drop packets?
In fact, the channel might drop packets, thus introducing errors that propagate along the decoded video content because of the predictive nature of conventional video coding schemes [1, 2, 3], or it might cause jitter delay, due to decoder buffer underflows determined by network latencies.
Q5. How many realizations are selected for each PLR?
The authors selected two channel realizations for each PLR, for a total of 12 realizations per video content, in order to uniformly span a wide range of distortions, i.e., perceived video quality, while keeping the dataset at a reasonable size.
Q6. How many corrupted bitstreams were generated for each of the six original H.264?
Starting from each of the six original H.264 bitstreams corresponding to the test sequences, the authors generated a number of corrupted bitstreams by dropping packets according to a given error pattern.
Q7. How did the authors tune the QP for each sequence?
The authors tuned the QP for each sequence so as not to exceed a bitrate of 600 kbps, which can be considered an upper bound for the transmission of CIF video contents over IP networks.
Q8. What is the way to simulate burst errors?
To simulate burst errors, the patterns have been generated at six different PLRs (0.1%, 0.4%, 1%, 3%, 5%, 10%) with a two-state Gilbert model [11].
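The two-state Gilbert model can be sketched as follows: a "Good" state in which packets are delivered and a "Bad" state in which they are dropped, with memory introduced by the state-transition probabilities. The function and parameter names below (`gilbert_loss_pattern`, `p_gb`, `p_bg`) are illustrative, not taken from the paper, which only states that such a model was used at six PLRs.

```python
import random

def gilbert_loss_pattern(n_packets, p_gb, p_bg, seed=0):
    """Generate a burst-loss pattern with a two-state Gilbert model.

    p_gb: probability of moving Good -> Bad at each step.
    p_bg: probability of moving Bad -> Good at each step.
    Returns a list of 0/1 flags, where 1 marks a dropped packet.
    (Illustrative sketch; parameter names are not from the paper.)
    """
    rng = random.Random(seed)
    state_bad = False
    pattern = []
    for _ in range(n_packets):
        # Transition, then record the loss flag for the current packet.
        if state_bad:
            if rng.random() < p_bg:
                state_bad = False
        else:
            if rng.random() < p_gb:
                state_bad = True
        pattern.append(1 if state_bad else 0)
    return pattern

# The long-run loss rate approaches p_gb / (p_gb + p_bg);
# here roughly 1% PLR with mean burst length 1 / p_bg = 10 packets.
pattern = gilbert_loss_pattern(100000, p_gb=0.001, p_bg=0.1)
plr = sum(pattern) / len(pattern)
```

Tuning `p_gb` and `p_bg` jointly sets both the target PLR and the burstiness (mean burst length), which is what distinguishes this model from independent random losses.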
Q9. How many subjects participated in the tests at PoliMi and EPFL?
Test phase (approx. 20 min): assessment of 5 dummy sequences, followed by assessment of 78 sequences. Twenty-three subjects and seventeen subjects participated in the tests at PoliMi and EPFL, respectively.
Q10. What other sequences have been used for training the subjects?
Additionally, two other sequences, namely Coastguard and Container, have been used for training the subjects, as detailed in subsection 2.3.
Q11. What is the significance of the MOS plots?
It is assumed that the overlap of 95% confidence intervals indicates the absence of statistically significant differences between MOS values.
Q12. At what viewing distance are the subjects seated?
Subjects are seated directly in line with the center of the video display at a specified viewing distance, which is equal to 6-8H for CIF resolution sequences, where H is the height of the video window.
Q13. What is the presentation order for each subject?
The presentation order for each subject is randomized according to a random number generator, discarding those permutations where stimuli related to the same original content are consecutive.
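The randomization procedure described above can be sketched as rejection sampling: shuffle the stimulus list and retry whenever two stimuli from the same original content land next to each other. The function name and stimulus representation below are illustrative assumptions, not taken from the paper.

```python
import random

def randomized_order(stimuli, seed=None):
    """Shuffle stimuli, rejecting permutations in which two stimuli
    derived from the same original content are adjacent.

    stimuli: list of (content_name, condition) pairs; only the
    content name matters for the adjacency check. Illustrative
    rejection-sampling sketch of the procedure described above.
    """
    rng = random.Random(seed)
    while True:
        order = stimuli[:]
        rng.shuffle(order)
        # Accept only if no two consecutive items share a content name.
        if all(a[0] != b[0] for a, b in zip(order, order[1:])):
            return order

# Example: three hypothetical contents, each at three loss rates.
stimuli = [(c, plr)
           for c in ["Foreman", "Hall", "Mobile"]
           for plr in [0.1, 1.0, 10.0]]
order = randomized_order(stimuli, seed=42)
```

Rejection sampling keeps every valid permutation equally likely, which a greedy reordering would not; it terminates quickly as long as no single content dominates the stimulus list.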
Q14. What is the purpose of the test sequences?
Each tested sequence has been visually inspected in order to see whether the chosen QPs minimized the blocking artifacts induced by lossy coding.
Q15. What is the purpose of the paper?
With this contribution, the authors aim at providing a publicly available database containing Mean Opinion Scores (MOSs) collected during subjective tests carried out at the premises of two academic institutions: Politecnico di Milano (Italy) and Ecole Polytechnique Fédérale de Lausanne (Switzerland).
Q16. What is the source of the data?
The test material (including the original uncompressed test and training material and the H.264 coded streams before and after the simulation of packet losses), the error-prone network simulator and the H.264 decoder used in their study, the raw subjective data, the files used to process them, and the final MOS data are available at http://mmspl.epfl.ch/vqa.