Journal ArticleDOI

Objective Quality Assessment of Tone-Mapped Images

01 Feb 2013-IEEE Transactions on Image Processing (IEEE)-Vol. 22, Iss: 2, pp 657-667
TL;DR: An objective quality assessment algorithm for tone-mapped images is proposed by combining: 1) a multiscale signal fidelity measure on the basis of a modified structural similarity index and 2) a naturalness measure on the basis of intensity statistics of natural images.
Abstract: Tone-mapping operators (TMOs) that convert high dynamic range (HDR) to low dynamic range (LDR) images provide practically useful tools for the visualization of HDR images on standard LDR displays. Different TMOs create different tone-mapped images, and a natural question is which one has the best quality. Without an appropriate quality measure, different TMOs cannot be compared, and further improvement is directionless. Subjective rating may be a reliable evaluation method, but it is expensive and time consuming, and more importantly, is difficult to be embedded into optimization frameworks. Here we propose an objective quality assessment algorithm for tone-mapped images by combining: 1) a multiscale signal fidelity measure on the basis of a modified structural similarity index and 2) a naturalness measure on the basis of intensity statistics of natural images. Validations using independent subject-rated image databases show good correlations between subjective ranking score and the proposed tone-mapped image quality index (TMQI). Furthermore, we demonstrate the extended applications of TMQI using two examples - parameter tuning for TMOs and adaptive fusion of multiple tone-mapped images.
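The two-component combination described in the abstract can be sketched as follows. The component measures and the constants `a`, `alpha`, and `beta` below are illustrative placeholders (the paper calibrates its parameters against subjective data), so treat this as a sketch of the pooling form rather than the published TMQI.

```python
# Sketch of a TMQI-style score: a weighted mix of a structural fidelity
# term S and a statistical naturalness term N, both assumed normalized
# to [0, 1]. The weight and exponents are illustrative placeholders,
# not the calibrated values from the paper.

def combine_quality(S, N, a=0.8, alpha=0.3, beta=0.7):
    """Pool structural fidelity S and naturalness N into one score in [0, 1]."""
    if not (0.0 <= S <= 1.0 and 0.0 <= N <= 1.0):
        raise ValueError("S and N must be normalized to [0, 1]")
    return a * S**alpha + (1.0 - a) * N**beta

# A perfectly faithful and perfectly natural tone-mapped image scores 1.0:
print(combine_quality(1.0, 1.0))
```

Raising both terms to exponents below one makes the pooled score drop quickly once either component falls short of perfect, which is the usual motivation for this kind of power-law pooling.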


Citations
Journal ArticleDOI
TL;DR: This paper proposes a novel objective image quality assessment (IQA) algorithm for MEF images based on the principle of the structural similarity approach and a novel measure of patch structural consistency, and shows that the proposed model correlates well with subjective judgments and significantly outperforms the existing IQA models for general image fusion.
Abstract: Multi-exposure image fusion (MEF) is considered an effective quality enhancement technique widely adopted in consumer electronics, but little work has been dedicated to the perceptual quality assessment of multi-exposure fused images. In this paper, we first build an MEF database and carry out a subjective user study to evaluate the quality of images generated by different MEF algorithms. There are several useful findings. First, considerable agreement has been observed among human subjects on the quality of MEF images. Second, no single state-of-the-art MEF algorithm produces the best quality for all test images. Third, the existing objective quality models for general image fusion are very limited in predicting perceived quality of MEF images. Motivated by the lack of appropriate objective models, we propose a novel objective image quality assessment (IQA) algorithm for MEF images based on the principle of the structural similarity approach and a novel measure of patch structural consistency. Our experimental results on the subjective database show that the proposed model correlates well with subjective judgments and significantly outperforms the existing IQA models for general image fusion. Finally, we demonstrate the potential application of the proposed model by automatically tuning the parameters of MEF algorithms. The subjective database and the MATLAB code of the proposed model will be made available online. Preliminary results of Section III were presented at the 6th International Workshop on Quality of Multimedia Experience, Singapore, 2014.

530 citations


Cites background from "Objective Quality Assessment of Tone-Mapped Images"

  • ...These include information theoretic and natural scene statistical approaches [38]–[40], the adaptive linear system decomposition framework [41]–[43], the feature similarity method [44], and visual attention and saliency-based approaches [39], [45]–[47]....


Journal ArticleDOI
TL;DR: This survey provides a general overview of classical algorithms and recent progresses in the field of perceptual image quality assessment and describes the performances of the state-of-the-art quality measures for visual signals.
Abstract: Perceptual quality assessment plays a vital role in visual communication systems owing to the existence of quality degradations introduced in various stages of visual signal acquisition, compression, transmission and display. Quality assessment for visual signals can be performed subjectively and objectively, and objective quality assessment is usually preferred owing to its high efficiency and easy deployment. A large number of subjective and objective visual quality assessment studies have been conducted during recent years. In this survey, we give an up-to-date and comprehensive review of these studies. Specifically, the frequently used subjective image quality assessment databases are first reviewed, as they serve as the validation set for the objective measures. Second, the objective image quality assessment measures are classified and reviewed according to the applications and the methodologies utilized in the quality measures. Third, the performances of the state-of-the-art quality measures for visual signals are compared with an introduction of the evaluation protocols. This survey provides a general overview of classical algorithms and recent progresses in the field of perceptual image quality assessment.

281 citations

Proceedings Article
01 Jan 2018
TL;DR: The proposed multi-branch low-light enhancement network (MBLLEN) is found to outperform the state-of-the-art techniques by a large margin and can be directly extended to handle low-light videos.

Abstract: We present a deep learning based method for low-light image enhancement. This problem is challenging due to the difficulty in handling various factors simultaneously including brightness, contrast, artifacts and noise. To address this task, we propose the multi-branch low-light enhancement network (MBLLEN). The key idea is to extract rich features up to different levels, so that we can apply enhancement via multiple subnets and finally produce the output image via multi-branch fusion. In this manner, image quality is improved from different aspects. Through extensive experiments, our proposed MBLLEN is found to outperform the state-of-the-art techniques by a large margin. We additionally show that our method can be directly extended to handle low-light videos.

277 citations


Cites methods from "Objective Quality Assessment of Tone-Mapped Images"

  • Table 3: Comparison on low-light (with additional Poisson noise) images enhancement (values reported as pairs in the source):
    Method      PSNR         SSIM [39]  VIF [32]   LOE            TMQI [40]  AB [6]
    WAHE [2]    17.91/17.37  0.62/0.59  0.40/0.40  771.34/771.33  0.83/0.82  -26.41/-30.04
    MF [11]     19.37/19.66  0.67/0.67  0.39/0.38  896.67/896.46  0.84/0.84  -13.77/-16.88
    DHECI [29]  18.03/18.71  0.67/0.68  0.36/0.36  687.60/687.61  0.86/0.86    3.75/  0.16
    Ying [43]   18.61/19.69  0.70/0.71  0.40/0.39  928.13/927.83  0.86/0.86   10.99/  6.99
    BIMEF [41]  20.27/17.56  0.73/0.70  0.41/0.39  725.72/725.61  0.85/0.83  -11.58/-31.13
    Ours        25.97/26.39  0.87/0.87  0.49/0.49  573.14/573.14  0.90/0.89    1.45/ -1.57


  • Table 2: Comparison of different methods after brightness scale according to ground truth:
    Method      PSNR   SSIM [39]  VIF [32]  LOE     TMQI [40]  AB [6]
    WAHE [2]    17.43  0.65       0.48      648.30  0.84       -30.70
    MF [11]     18.64  0.67       0.45      882.23  0.84       -22.74
    DHECI [29]  18.34  0.68       0.43      607.01  0.87        -2.09
    Ying [43]   19.93  0.73       0.47      892.53  0.86         2.22
    BIMEF [41]  16.75  0.71       0.46      674.53  0.83       -34.44
    Ours        26.65  0.89       0.55      477.95  0.90        -1.30


  • ...PSNR SSIM [39] VIF [32] LOE TMQI [40] AB [6] Input 12....


  • ...PSNR SSIM [39] VIF [32] LOE TMQI [40] AB [6] WAHE [2] 17....


  • ...To evaluate the performance of different methods from different aspects and in a fairer way, we use a variety of metrics, including PSNR, SSIM [39], Average Brightness (AB) [6], Visual Information Fidelity (VIF) [32], Lightness Order Error (LOE) as suggested in [41], and TMQI [40]....

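Of the metrics listed in these excerpts, PSNR is the simplest to state. Here is a minimal sketch for 8-bit images (an illustration, not the exact implementation used in the citing paper):

```python
import numpy as np

def psnr(reference, test, peak=255.0):
    """Peak signal-to-noise ratio in dB; higher means closer to the reference."""
    mse = np.mean((np.asarray(reference, dtype=np.float64)
                   - np.asarray(test, dtype=np.float64)) ** 2)
    if mse == 0.0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak**2 / mse)

ref = np.zeros((8, 8))
noisy = ref + 16.0                 # uniform error of 16 gray levels
print(round(psnr(ref, noisy), 2))  # -> 24.05
```

Unlike SSIM or TMQI, PSNR depends only on pixel-wise squared error, which is why the citing paper reports several complementary metrics side by side.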

Journal ArticleDOI
TL;DR: A new perceptual image quality assessment (IQA) metric based on the human visual system (HVS) is proposed that performs efficiently with multiscale convolution operations, gradient magnitude and color information similarity, and perceptual-based pooling.
Abstract: A fast reliable computational quality predictor is eagerly desired in practical image/video applications, such as serving for the quality monitoring of real-time coding and transcoding. In this paper, we propose a new perceptual image quality assessment (IQA) metric based on the human visual system (HVS). The proposed IQA model performs efficiently with convolution operations at multiscales, gradient magnitude, and color information similarity, and a perceptual-based pooling. Extensive experiments are conducted using four popular large-size image databases and two multiply distorted image databases, and results validate the superiority of our approach over modern IQA measures in efficiency and efficacy. Our metric is built on the theoretical support of the HVS with lately designed IQA methods as special cases.

218 citations


Cites methods from "Objective Quality Assessment of Tone-Mapped Images"

  • ...Having initialized the used parameters, in the second phase, we follow the method applied in [50] to fine-tune the parameters used in our PSIM metric....


Journal ArticleDOI
TL;DR: A novel blind/no-reference (NR) model for assessing the perceptual quality of screen content pictures with big data learning that delivers computational efficiency and promising performance.
Abstract: Recent years have witnessed a growing number of image- and video-centric applications on mobile, vehicular, and cloud platforms, involving a wide variety of digital screen content images. Unlike natural scene images captured with modern high fidelity cameras, screen content images are typically composed of fewer colors, simpler shapes, and a larger frequency of thin lines. In this paper, we develop a novel blind/no-reference (NR) model for assessing the perceptual quality of screen content pictures with big data learning. The new model extracts four types of features descriptive of the picture complexity, of screen content statistics, of global brightness quality, and of the sharpness of details. Comparative experiments verify the efficacy of the new model as compared with existing relevant blind picture quality assessment algorithms applied on screen content image databases. A regression module is trained on a considerable number of training samples labeled with objective visual quality predictions delivered by a high-performance full-reference method designed for screen content image quality assessment (IQA). This results in an opinion-unaware NR blind screen content IQA algorithm. Our proposed model delivers computational efficiency and promising performance. The source code of the new model will be available at: https://sites.google.com/site/guke198701/publications

213 citations


Cites methods from "Objective Quality Assessment of Tone-Mapped Images"

  • ...This framework could be used to transform any blind IQA model into one that does not require human ratings, such as recent blind IQA models designed to handle multiple distortions [27], [28], infrared images [29], authentic distortions [30], contrast distortions [31], tone-mapped images [32], [33], dehazed images [34], etc....


References
Journal ArticleDOI
TL;DR: In this article, a structural similarity index is proposed for image quality assessment based on the degradation of structural information, and its performance is demonstrated by comparison with both subjective ratings and state-of-the-art objective methods on a database of images compressed with JPEG and JPEG2000.
Abstract: Objective methods for assessing perceptual image quality traditionally attempted to quantify the visibility of errors (differences) between a distorted image and a reference image using a variety of known properties of the human visual system. Under the assumption that human visual perception is highly adapted for extracting structural information from a scene, we introduce an alternative complementary framework for quality assessment based on the degradation of structural information. As a specific example of this concept, we develop a structural similarity index and demonstrate its promise through a set of intuitive examples, as well as comparison to both subjective ratings and state-of-the-art objective methods on a database of images compressed with JPEG and JPEG2000. A MATLAB implementation of the proposed algorithm is available online at http://www.cns.nyu.edu/~lcv/ssim/.

40,609 citations
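As a concrete illustration of the index described above, here is a single-window form of SSIM computed over whole grayscale images. This is a simplification: the published index averages the same statistic over local sliding windows.

```python
import numpy as np

def ssim_global(x, y, data_range=255.0, k1=0.01, k2=0.03):
    """Single-window SSIM between two grayscale images of equal shape.

    A simplification of the published index, which averages the same
    statistic over local sliding windows.
    """
    x = np.asarray(x, dtype=np.float64)
    y = np.asarray(y, dtype=np.float64)
    c1 = (k1 * data_range) ** 2      # stabilizers for near-zero denominators
    c2 = (k2 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx**2 + my**2 + c1) * (vx + vy + c2))

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64))
print(round(ssim_global(img, img), 6))  # an image compared with itself -> 1.0
```

Note that a contrast-inverted image keeps the same mean and variance but flips the sign of the covariance, so SSIM penalizes it heavily even though its MSE-style statistics look similar.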

Journal ArticleDOI
TL;DR: A technique for image encoding in which local operators of many scales but identical shape serve as the basis functions, which tends to enhance salient image features and is well suited for many image analysis tasks as well as for image compression.
Abstract: We describe a technique for image encoding in which local operators of many scales but identical shape serve as the basis functions. The representation differs from established techniques in that the code elements are localized in spatial frequency as well as in space. Pixel-to-pixel correlations are first removed by subtracting a lowpass filtered copy of the image from the image itself. The result is a net data compression since the difference, or error, image has low variance and entropy, and the low-pass filtered image may be represented at reduced sample density. Further data compression is achieved by quantizing the difference image. These steps are then repeated to compress the low-pass image. Iteration of the process at appropriately expanded scales generates a pyramid data structure. The encoding process is equivalent to sampling the image with Laplacian operators of many scales. Thus, the code tends to enhance salient image features. A further advantage of the present code is that it is well suited for many image analysis tasks as well as for image compression. Fast algorithms are described for coding and decoding.

6,975 citations
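A minimal sketch of the encode/decode loop described above, using a 2x2 box filter and nearest-neighbor upsampling in place of the paper's equivalent-weighting kernels. By construction the decomposition is exactly invertible, since each difference image is stored relative to the expanded low-pass copy.

```python
import numpy as np

def box_blur(img):
    """Cheap 2x2 box low-pass filter (stand-in for the paper's kernel)."""
    p = np.pad(img, ((0, 1), (0, 1)), mode="edge")
    return (p[:-1, :-1] + p[1:, :-1] + p[:-1, 1:] + p[1:, 1:]) / 4.0

def reduce_(img):
    """Low-pass filter, then drop every other row and column."""
    return box_blur(img)[::2, ::2]

def expand_(small, shape):
    """Nearest-neighbor upsample back to a given shape."""
    up = np.repeat(np.repeat(small, 2, axis=0), 2, axis=1)
    return up[:shape[0], :shape[1]]

def laplacian_pyramid(img, levels=3):
    """Band-pass 'error' images plus a final low-pass residual."""
    bands, cur = [], np.asarray(img, dtype=np.float64)
    for _ in range(levels):
        small = reduce_(cur)
        bands.append(cur - expand_(small, cur.shape))  # difference image
        cur = small
    bands.append(cur)                                  # coarsest low-pass copy
    return bands

def reconstruct(bands):
    """Invert the pyramid: expand and add, coarsest level first."""
    cur = bands[-1]
    for band in reversed(bands[:-1]):
        cur = band + expand_(cur, band.shape)
    return cur

img = np.random.default_rng(1).random((17, 13))
print(np.allclose(reconstruct(laplacian_pyramid(img)), img))  # -> True
```

The difference images carry the compression gain the paper describes: they have low variance and entropy, so they quantize well, while only the tiny residual retains the full dynamic range.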


"Objective Quality Assessment of Tone-Mapped Images" refers methods in this paper

  • ...Following the idea used in multi-scale [16] and information-weighted SSIM [17], we adopt a multi-scale approach, where the images are iteratively low-pass filtered and downsampled to create an image pyramid structure [27], as illustrated in Fig....


Proceedings ArticleDOI
09 Nov 2003
TL;DR: This paper proposes a multiscale structural similarity method, which supplies more flexibility than previous single-scale methods in incorporating the variations of viewing conditions, and develops an image synthesis method to calibrate the parameters that define the relative importance of different scales.
Abstract: The structural similarity image quality paradigm is based on the assumption that the human visual system is highly adapted for extracting structural information from the scene, and therefore a measure of structural similarity can provide a good approximation to perceived image quality. This paper proposes a multiscale structural similarity method, which supplies more flexibility than previous single-scale methods in incorporating the variations of viewing conditions. We develop an image synthesis method to calibrate the parameters that define the relative importance of different scales. Experimental comparisons demonstrate the effectiveness of the proposed method.

4,333 citations
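The cross-scale calibration the abstract mentions ends up as a set of per-scale exponents, and the pooling itself is a weighted geometric mean. The weights below are the values commonly reported for five-scale MS-SSIM; the per-scale scores passed in are made-up numbers for illustration.

```python
import numpy as np

# Per-scale exponents commonly reported for five-scale MS-SSIM
# (obtained by the calibration experiment described in the paper).
WEIGHTS = (0.0448, 0.2856, 0.3001, 0.2363, 0.1333)

def msssim_pool(scale_scores, weights=WEIGHTS):
    """Pool per-scale similarity scores s_j into prod_j s_j ** w_j."""
    if len(scale_scores) != len(weights):
        raise ValueError("need one score per scale")
    return float(np.prod([s ** w for s, w in zip(scale_scores, weights)]))

# Perfect similarity at every scale pools to 1.0; a deficit at the
# mid scales (largest weights) hurts the most:
print(msssim_pool([1.0, 1.0, 1.0, 1.0, 1.0]))  # -> 1.0
```

Because the middle scales carry the largest exponents, the calibrated index weights mid-frequency structure more than the finest or coarsest scale, matching typical viewing conditions.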

Journal ArticleDOI
TL;DR: This article has reviewed the reasons why people want to love or leave the venerable (but perhaps hoary) MSE and reviewed emerging alternative signal fidelity measures and discussed their potential application to a wide variety of problems.
Abstract: In this article, we have reviewed the reasons why we (collectively) want to love or leave the venerable (but perhaps hoary) MSE. We have also reviewed emerging alternative signal fidelity measures and discussed their potential application to a wide variety of problems. The message we are trying to send here is not that one should abandon use of the MSE nor to blindly switch to any other particular signal fidelity measure. Rather, we hope to make the point that there are powerful, easy-to-use, and easy-to-understand alternatives that might be deployed depending on the application environment and needs. While we expect (and indeed, hope) that the MSE will continue to be widely used as a signal fidelity measure, it is our greater desire to see more advanced signal fidelity measures being used, especially in applications where perceptual criteria might be relevant. Ideally, the performance of a new signal processing algorithm might be compared to other algorithms using several fidelity criteria. Lastly, we hope that we have given further motivation to the community to consider recent advanced signal fidelity measures as design criteria for optimizing signal processing algorithms and systems. It is in this direction that we believe that the greatest benefit eventually lies.

2,601 citations


"Objective Quality Assessment of Tone-Mapped Images" refers methods in this paper

  • ...The SSIM approach provides a useful design philosophy as well as a practical method for measuring structural fidelities between images [20]....


Journal ArticleDOI
TL;DR: It has long been assumed that sensory neurons are adapted to the statistical properties of the signals to which they are exposed, but recent developments in statistical modeling have enabled researchers to study more sophisticated statistical models for visual images, to validate these models empirically against large sets of data, and to begin experimentally testing the efficient coding hypothesis.
Abstract: It has long been assumed that sensory neurons are adapted, through both evolutionary and developmental processes, to the statistical properties of the signals to which they are exposed. Attneave (1954) and Barlow (1961) proposed that information theory could provide a link between environmental statistics and neural responses through the concept of coding efficiency. Recent developments in statistical modeling, along with powerful computational tools, have enabled researchers to study more sophisticated statistical models for visual images, to validate these models empirically against large sets of data, and to begin experimentally testing the efficient coding hypothesis for both individual neurons and populations of neurons.

2,280 citations


"Objective Quality Assessment of Tone-Mapped Images" refers background in this paper

  • ...There is a rich literature on natural image statistics [28], and advanced statistical models (which reflect the structural regularities in space, scale and orientation in natural images) may be included to improve the statistical naturalness measure....


  • ...applications and the understanding of biological vision [28]....
