Proceedings ArticleDOI

On improving the pooling in HDR-VDP-2 towards better HDR perceptual quality assessment

25 Feb 2014 - Electronic Imaging (International Society for Optics and Photonics), Vol. 9014, pp. 143-151
TL;DR: The HDR Visual Difference Predictor (HDR-VDP-2) is primarily a visibility-prediction metric, i.e., it predicts whether a signal distortion is visible to the eye and to what extent; it also employs a pooling function to compute an overall quality score.
Abstract: High Dynamic Range (HDR) signals capture much higher contrast than traditional 8-bit low dynamic range (LDR) signals. This is achieved by representing the visual signal via values that are related to real-world luminance, instead of the gamma-encoded pixel values used in LDR. HDR signals therefore cover a larger luminance range and tend to have more visual appeal. However, due to the higher luminance conditions, existing methods cannot be directly employed for objective quality assessment of HDR signals. For that reason, the HDR Visual Difference Predictor (HDR-VDP-2) has been proposed. HDR-VDP-2 is primarily a visibility-prediction metric, i.e., it predicts whether a signal distortion is visible to the eye and to what extent. Nevertheless, it also employs a pooling function to compute an overall quality score. This paper focuses on the pooling aspect of HDR-VDP-2 and employs a comprehensive database of HDR images (with their corresponding subjective ratings) to improve the prediction accuracy of HDR-VDP-2. We also discuss and evaluate the existing objective methods and provide a perspective towards better HDR quality assessment.
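The pooling step the abstract refers to can be illustrated with a generic Minkowski summation, a common way to collapse a per-pixel distortion map into a single quality-related score. This is a minimal sketch of the idea only; the exponent and the example maps below are illustrative assumptions, not the pooling function actually fitted in HDR-VDP-2.

```python
import numpy as np

def minkowski_pool(distortion_map, p=4.0):
    """Collapse a per-pixel distortion map into one scalar score.

    Larger p emphasises the worst-case distortions; p = 1 reduces
    to a plain average. The exponent here is illustrative, not the
    value fitted in HDR-VDP-2.
    """
    d = np.abs(np.asarray(distortion_map, dtype=np.float64))
    return np.mean(d ** p) ** (1.0 / p)

# A localised artefact barely moves the mean but increasingly
# dominates the pooled score as p grows:
uniform = np.full((8, 8), 0.1)   # mild distortion everywhere
spiky = np.full((8, 8), 0.1)
spiky[0, 0] = 2.0                # one strong local artefact
```

Tuning such an exponent against a database of subjective ratings is one simple instance of the kind of pooling optimization the paper investigates.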
Citations
Dissertation
01 Jan 2015
TL;DR: The main goal of this research was the development of a new image-difference metric called improved Color-Image-Difference (iCID), which normalizes images to standard viewing conditions and extracts chromatic features; it outperforms almost all state-of-the-art metrics.
Abstract: In digital imaging, evaluating the visual quality of images is a crucial requirement for most image-processing systems. For such image quality assessment, mainly objective assessments are employed, which automatically predict image quality by a computer algorithm. The vast majority of objective assessments are so-called image-difference metrics, which predict the perceived difference between a distorted image and a reference. Due to the limited understanding of the human visual system, image quality assessment is not straightforward and is still an open research field. The majority of image-difference metrics disregard color information, which allows for faster computation. Even though their performance is sufficient for many applications, they are not able to correctly predict the quality for a variety of color distortions. Furthermore, many image-difference metrics do not account for viewing conditions, which may have a large impact on the perceived image quality (e.g., a large display in an office compared with a small mobile device in bright sunlight). The main goal of my research was the development of a new image-difference metric called improved Color-Image-Difference (iCID), which normalizes images to standard viewing conditions and extracts chromatic features. The new metric was then used as an objective function to improve gamut mapping as well as tone mapping. Both methods represent essential transformations for the reproduction of color images. The performance of the proposed metric was verified by visual experiments as well as by comparisons with human judgments. The visual experiments reveal significant improvements over state-of-the-art gamut-mapping and tone-mapping transformations. For gamut-mapping distortions, iCID exhibits the highest correlation with human judgments by a significant margin, and for conventional distortions (e.g., noise, blur, and compression artifacts), iCID outperforms almost all state-of-the-art metrics.
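The separation of chromatic from achromatic differences that is central to iCID can be sketched with a toy opponent-channel decomposition. The transform coefficients and the difference measure below are illustrative assumptions, not the actual iCID color space or feature set.

```python
import numpy as np

def opponent_channels(rgb):
    # Simple RGB -> opponent transform; the coefficients are
    # illustrative, not the colour space used by iCID.
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    lum = (r + g + b) / 3.0        # achromatic channel
    rg = r - g                     # red-green chroma
    yb = (r + g) / 2.0 - b         # yellow-blue chroma
    return lum, rg, yb

def chromatic_difference(ref, dist):
    """Mean absolute difference on the two chroma channels only,
    so pure luminance changes score zero."""
    _, rg1, yb1 = opponent_channels(np.asarray(ref, dtype=float))
    _, rg2, yb2 = opponent_channels(np.asarray(dist, dtype=float))
    return 0.5 * (np.mean(np.abs(rg1 - rg2)) +
                  np.mean(np.abs(yb1 - yb2)))
```

A uniform brightness shift leaves this measure at zero, while a hue shift does not, which is the kind of behaviour a color-aware metric needs and a luminance-only metric cannot provide.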

3 citations


Cites background from "On improving the pooling in HDR-VDP..."

  • ...Furthermore, an HDR extension of iCID shall be investigated because there is a need for reliable HDR-image quality assessment, particularly regarding color [94]....


Journal ArticleDOI
TL;DR: In this paper, the authors discuss emerging trends in VQA algorithm design and general perspectives on the evolution of Video Quality Assessment in the foreseeable future, as well as the development of subjective video-quality databases containing videos and human-annotated quality scores.
Abstract: Perceptual Video Quality Assessment (VQA) is one of the most fundamental and challenging problems in the field of Video Engineering. Along with video compression, it has become one of the two dominant theoretical and algorithmic technologies in television streaming and social media. Over the last two decades, the volume of video traffic over the internet has grown exponentially, powered by rapid advancements in cloud services, faster video compression technologies, and increased access to high-speed, low-latency wireless internet connectivity. This has given rise to issues related to delivering extraordinary volumes of picture and video data to an increasingly sophisticated and demanding global audience. Consequently, developing algorithms to measure the quality of pictures and videos as perceived by humans has become increasingly critical, since these algorithms can be used to perceptually optimize trade-offs between quality and bandwidth consumption. VQA models have evolved from algorithms developed for generic 2D videos to specialized algorithms explicitly designed for on-demand video streaming, user-generated content (UGC), virtual and augmented reality (VR and AR), cloud gaming, high dynamic range (HDR), and high frame rate (HFR) scenarios. Along the way, we also describe the advancement in algorithm design, beginning with traditional hand-crafted feature-based methods and finishing with current deep-learning models powering accurate VQA algorithms. We also discuss the evolution of Subjective Video Quality databases containing videos and human-annotated quality scores, which are the necessary tools to create, test, compare, and benchmark VQA algorithms. To finish, we discuss emerging trends in VQA algorithm design and general perspectives on the evolution of Video Quality Assessment in the foreseeable future.
References
Journal ArticleDOI
TL;DR: In this article, a structural similarity index is proposed for image quality assessment based on the degradation of structural information; it is validated against subjective ratings and state-of-the-art objective methods on a database of images compressed with JPEG and JPEG2000.
Abstract: Objective methods for assessing perceptual image quality traditionally attempted to quantify the visibility of errors (differences) between a distorted image and a reference image using a variety of known properties of the human visual system. Under the assumption that human visual perception is highly adapted for extracting structural information from a scene, we introduce an alternative complementary framework for quality assessment based on the degradation of structural information. As a specific example of this concept, we develop a structural similarity index and demonstrate its promise through a set of intuitive examples, as well as comparison to both subjective ratings and state-of-the-art objective methods on a database of images compressed with JPEG and JPEG2000. A MATLAB implementation of the proposed algorithm is available online at http://www.cns.nyu.edu/~lcv/ssim/.
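The structural similarity idea can be sketched as a single computation over whole images; the published SSIM averages the same statistic over local sliding windows, so this global version is only a rough approximation of the full metric, shown here to make the luminance/contrast/structure terms concrete.

```python
import numpy as np

def ssim_global(x, y, L=255.0, k1=0.01, k2=0.03):
    """SSIM statistic computed once over whole images; the published
    metric averages it over local sliding windows instead."""
    x = np.asarray(x, dtype=np.float64)
    y = np.asarray(y, dtype=np.float64)
    c1, c2 = (k1 * L) ** 2, (k2 * L) ** 2   # stabilising constants
    mx, my = x.mean(), y.mean()             # luminance terms
    vx, vy = x.var(), y.var()               # contrast terms
    cov = ((x - mx) * (y - my)).mean()      # structure term
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

Identical images score exactly 1.0, and any distortion that decorrelates the two images pulls the score below 1.0.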

40,609 citations

Journal ArticleDOI
TL;DR: This paper presents convergence properties of the Nelder--Mead algorithm applied to strictly convex functions in dimensions 1 and 2, proving convergence to a minimizer for dimension 1 and various limited convergence results for dimension 2.
Abstract: The Nelder--Mead simplex algorithm, first published in 1965, is an enormously popular direct search method for multidimensional unconstrained minimization. Despite its widespread use, essentially no theoretical results have been proved explicitly for the Nelder--Mead algorithm. This paper presents convergence properties of the Nelder--Mead algorithm applied to strictly convex functions in dimensions 1 and 2. We prove convergence to a minimizer for dimension 1, and various limited convergence results for dimension 2. A counterexample of McKinnon gives a family of strictly convex functions in two dimensions and a set of initial conditions for which the Nelder--Mead algorithm converges to a nonminimizer. It is not yet known whether the Nelder--Mead method can be proved to converge to a minimizer for a more specialized class of convex functions in two dimensions.
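As a usage sketch, SciPy's implementation of the Nelder--Mead simplex method can minimize the classic two-dimensional Rosenbrock test function. This assumes SciPy is available; the starting point and tolerances are illustrative choices, not values from the paper.

```python
import numpy as np
from scipy.optimize import minimize

def rosenbrock(p):
    # Classic 2-D test problem with a curved valley; minimum at (1, 1).
    x, y = p
    return (1 - x) ** 2 + 100 * (y - x ** 2) ** 2

# Direct search: no gradients are evaluated, only function values.
res = minimize(rosenbrock, x0=[-1.2, 1.0], method="Nelder-Mead",
               options={"xatol": 1e-8, "fatol": 1e-8, "maxiter": 2000})
```

Because the method uses only function values, it is a common choice for fitting the parameters of a quality metric to subjective scores, where gradients are unavailable or unreliable.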

7,141 citations

Journal ArticleDOI
TL;DR: The test is based on the maximum difference between an empirical and a hypothetical cumulative distribution; percentage points are tabled, a lower bound to the power function is charted, confidence limits for a cumulative distribution are described, and indications that the test is superior to the chi-square test are cited.
Abstract: The test is based on the maximum difference between an empirical and a hypothetical cumulative distribution. Percentage points are tabled, and a lower bound to the power function is charted. Confidence limits for a cumulative distribution are described. Examples are given. Indications that the test is superior to the chi-square test are cited.
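The statistic described above, the largest gap between an empirical and a hypothesised cumulative distribution, can be computed directly. A minimal one-sample sketch (the evenly spaced test sample below is an illustrative assumption):

```python
import numpy as np

def ks_statistic(sample, cdf):
    """One-sample Kolmogorov-Smirnov statistic: the largest vertical
    gap between the empirical CDF and a hypothesised CDF."""
    x = np.sort(np.asarray(sample, dtype=float))
    n = x.size
    f = cdf(x)
    # The ECDF jumps from (i-1)/n to i/n at each ordered sample point,
    # so the supremum is attained at one of these jump points.
    d_plus = np.max(np.arange(1, n + 1) / n - f)
    d_minus = np.max(f - np.arange(0, n) / n)
    return max(d_plus, d_minus)
```

For a sample of n points placed at (i - 0.5)/n against the uniform CDF on [0, 1], the statistic is exactly 0.5/n, which is the smallest value any n-point sample can achieve.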

5,143 citations

Book
20 Mar 1996
TL;DR: Montgomery and Runger's bestselling engineering statistics text provides a practical approach oriented to engineering as well as the chemical and physical sciences; unique problem sets that reflect realistic situations show students how the material will be relevant in their careers.
Abstract: Montgomery and Runger's bestselling engineering statistics text provides a practical approach oriented to engineering as well as chemical and physical sciences. By providing unique problem sets that reflect realistic situations, students learn how the material will be relevant in their careers. With a focus on how statistical tools are integrated into the engineering problem-solving process, all major aspects of engineering statistics are covered. Developed with sponsorship from the National Science Foundation, this text incorporates many insights from the authors' teaching experience along with feedback from numerous adopters of previous editions.

3,915 citations

Journal ArticleDOI
Abstract: Applied Statistics and Probability for Engineers. Technometrics, Vol. 46, No. 1 (2004), pp. 112-113.

2,475 citations