Journal ArticleDOI

Scope of validity of PSNR in image/video quality assessment

19 Jun 2008-Electronics Letters (IET)-Vol. 44, Iss: 13, pp 800-801
TL;DR: Experimental data are presented that clearly demonstrate the scope of application of peak signal-to-noise ratio (PSNR) as a video quality metric. It is shown that as long as the video content and the codec type are not changed, PSNR is a valid quality measure.
Abstract: Experimental data are presented that clearly demonstrate the scope of application of peak signal-to-noise ratio (PSNR) as a video quality metric. It is shown that as long as the video content and the codec type are not changed, PSNR is a valid quality measure. However, when the content is changed, correlation between subjective quality and PSNR is highly reduced. Hence PSNR cannot be a reliable method for assessing the video quality across different video contents.
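For concreteness, below is a minimal Python/NumPy sketch (not taken from the letter) of the two quantities the experiment relates: PSNR between a reference and a distorted image, and its correlation with subjective scores across test conditions. The 8-bit peak value of 255, the variable names, and the use of a plain Pearson correlation are assumptions for illustration.

```python
# Minimal sketch, assuming 8-bit frames (peak = 255) and NumPy arrays.
import numpy as np

def psnr(reference: np.ndarray, distorted: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB between a reference and a distorted image."""
    mse = np.mean((reference.astype(np.float64) - distorted.astype(np.float64)) ** 2)
    return float("inf") if mse == 0.0 else 10.0 * np.log10(peak ** 2 / mse)

# The kind of analysis described above: correlate PSNR with subjective ratings
# over a set of test sequences (sequence_pairs and mos_scores are hypothetical inputs).
# psnr_values = [psnr(ref, deg) for ref, deg in sequence_pairs]
# r = np.corrcoef(psnr_values, mos_scores)[0, 1]
```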
Citations
Book ChapterDOI
08 Oct 2016
TL;DR: In this paper, the authors combine the benefits of both approaches, and propose the use of perceptual loss functions for training feed-forward networks for image style transfer, where a feed-forward network is trained to solve the optimization problem proposed by Gatys et al. in real-time.
Abstract: We consider image transformation problems, where an input image is transformed into an output image. Recent methods for such problems typically train feed-forward convolutional neural networks using a per-pixel loss between the output and ground-truth images. Parallel work has shown that high-quality images can be generated by defining and optimizing perceptual loss functions based on high-level features extracted from pretrained networks. We combine the benefits of both approaches, and propose the use of perceptual loss functions for training feed-forward networks for image transformation tasks. We show results on image style transfer, where a feed-forward network is trained to solve the optimization problem proposed by Gatys et al. in real-time. Compared to the optimization-based method, our network gives similar qualitative results but is three orders of magnitude faster. We also experiment with single-image super-resolution, where replacing a per-pixel loss with a perceptual loss gives visually pleasing results.
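As a rough illustration of the perceptual loss idea described above, such a loss can be computed as a mean-squared error between feature maps of a frozen pretrained network. The sketch below is an assumption-laden example, not the cited work's exact setup: the choice of VGG-16 layer relu2_2, the single-layer loss, and the PyTorch/torchvision API are all illustrative.

```python
# Sketch of a perceptual (feature reconstruction) loss; layer choice and
# weighting are illustrative, not the exact configuration of the cited work.
import torch
import torch.nn as nn
from torchvision import models

class PerceptualLoss(nn.Module):
    def __init__(self, layer_index: int = 8):  # index 8 = relu2_2 in VGG-16 features
        super().__init__()
        vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).features
        self.features = nn.Sequential(*list(vgg[: layer_index + 1])).eval()
        for p in self.features.parameters():  # freeze the pretrained feature extractor
            p.requires_grad = False
        self.mse = nn.MSELoss()

    def forward(self, output: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
        # Penalize differences in high-level features rather than raw pixels.
        return self.mse(self.features(output), self.features(target))

# Usage: loss = PerceptualLoss()(generated_batch, ground_truth_batch)
```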

6,639 citations

Posted Content
TL;DR: This work considers image transformation problems and proposes the use of perceptual loss functions for training feed-forward networks for image transformation tasks, showing results on image style transfer, where a feed-forward network is trained to solve the optimization problem proposed by Gatys et al. in real-time.
Abstract: We consider image transformation problems, where an input image is transformed into an output image. Recent methods for such problems typically train feed-forward convolutional neural networks using a per-pixel loss between the output and ground-truth images. Parallel work has shown that high-quality images can be generated by defining and optimizing perceptual loss functions based on high-level features extracted from pretrained networks. We combine the benefits of both approaches, and propose the use of perceptual loss functions for training feed-forward networks for image transformation tasks. We show results on image style transfer, where a feed-forward network is trained to solve the optimization problem proposed by Gatys et al. in real-time. Compared to the optimization-based method, our network gives similar qualitative results but is three orders of magnitude faster. We also experiment with single-image super-resolution, where replacing a per-pixel loss with a perceptual loss gives visually pleasing results.

5,668 citations


Cites methods from "Scope of validity of PSNR in image/..."

  • ...The traditional metrics used to evaluate super-resolution are PSNR and SSIM [54], both of which have been found to correlate poorly with human assessment of visual quality [55,56,57,58,59]....


Proceedings ArticleDOI
01 Jul 2017
TL;DR: A recurrent rain detection and removal network that removes rain streaks and clears up rain accumulation iteratively and progressively is proposed, and a new contextualized dilated network is developed to exploit regional contextual information and produce better representations for rain detection.
Abstract: In this paper, we address a rain removal problem from a single image, even in the presence of heavy rain and rain streak accumulation. Our core ideas lie in our new rain image model and new deep learning architecture. We add a binary map that provides rain streak locations to an existing model, which comprises a rain streak layer and a background layer. We create a model consisting of a component representing rain streak accumulation (where individual streaks cannot be seen, and thus visually similar to mist or fog), and another component representing various shapes and directions of overlapping rain streaks, which usually happen in heavy rain. Based on the model, we develop a multi-task deep learning architecture that learns the binary rain streak map, the appearance of rain streaks, and the clean background, which is our ultimate output. The additional binary map is critically beneficial, since its loss function can provide additional strong information to the network. To handle rain streak accumulation (again, a phenomenon visually similar to mist or fog) and various shapes and directions of overlapping rain streaks, we propose a recurrent rain detection and removal network that removes rain streaks and clears up the rain accumulation iteratively and progressively. In each recurrence of our method, a new contextualized dilated network is developed to exploit regional contextual information and to produce better representations for rain detection. The evaluation on real images, particularly on heavy rain, shows the effectiveness of our models and architecture.
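A simplified sketch of the additive rain image model the abstract describes, where a binary location map restricts where the streak layer contributes. This is an illustrative reading of the text, not the authors' full formulation (which also models rain accumulation and multiple overlapping streak layers); the function and variable names are assumptions.

```python
# Illustrative composition O = B + S * R: background B, streak layer S,
# binary rain-location map R. Names and value ranges are assumptions.
import numpy as np

def compose_rainy_image(background: np.ndarray,
                        streaks: np.ndarray,
                        rain_mask: np.ndarray) -> np.ndarray:
    """background, streaks: floats in [0, 1]; rain_mask: binary {0, 1} map."""
    observed = background + streaks * rain_mask
    return np.clip(observed, 0.0, 1.0)
```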

640 citations


Cites methods from "Scope of validity of PSNR in image/..."

  • ...As observed, our method considerably outperforms other methods in terms of both PSNR and SSIM....


  • ...For the experiments on synthesized data, two metrics Peak Signal-to-Noise Ratio (PSNR) [19] and Structure Similarity Index (SSIM) [33] are used as comparison criteria....



  • ...JORDER-R gains more than 1 dB in PSNR over JORDER....


Book ChapterDOI
Xia Li1, Jianlong Wu1, Zhouchen Lin1, Hong Liu1, Hongbin Zha1 
08 Sep 2018
TL;DR: A novel deep network architecture for single-image deraining, based on deep convolutional and recurrent neural networks and exploiting contextual information, is proposed; it outperforms the state-of-the-art approaches under all evaluation metrics.
Abstract: Rain streaks can severely degrade the visibility, which causes many current computer vision algorithms to fail. So it is necessary to remove the rain from images. We propose a novel deep network architecture based on deep convolutional and recurrent neural networks for single image deraining. As contextual information is very important for rain removal, we first adopt the dilated convolutional neural network to acquire a large receptive field. To better fit the rain removal task, we also modify the network. In heavy rain, rain streaks have various directions and shapes, which can be regarded as the accumulation of multiple rain streak layers. We assign different alpha-values to various rain streak layers according to the intensity and transparency by incorporating the squeeze-and-excitation block. Since rain streak layers overlap with each other, it is not easy to remove the rain in one stage. So we further decompose the rain removal into multiple stages. A recurrent neural network is incorporated to preserve the useful information in previous stages and benefit the rain removal in later stages. We conduct extensive experiments on both synthetic and real-world datasets. Our proposed method outperforms the state-of-the-art approaches under all evaluation metrics. Codes and supplementary material are available at our project webpage: https://xialipku.github.io/RESCAN.
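Since the abstract mentions incorporating a squeeze-and-excitation block to weight rain streak layers, the following is a minimal sketch of a standard SE block in PyTorch. The reduction ratio and where the block sits inside RESCAN are not specified by the abstract; everything shown is the generic formulation, not the paper's exact configuration.

```python
# Generic squeeze-and-excitation block (channel reweighting); a sketch of the
# standard formulation, not RESCAN's exact configuration.
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)  # squeeze: global average over H x W
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),                    # excitation: per-channel weights in (0, 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        weights = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * weights                   # reweight channels (e.g. streak layers)
```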

539 citations


Cites methods from "Scope of validity of PSNR in image/..."

  • ...We can see that our RESCAN considerably outperforms other methods in terms of both PSNR and SSIM on these two datasets....


  • ...Quality Measures: To evaluate the performance on synthetic image pairs, we adopt two commonly used metrics, including peak signal to noise ratio (PSNR) [36] and structure similarity index (SSIM) [37]....


Journal ArticleDOI
TL;DR: A novel software-based fake detection method that can be used in multiple biometric systems to detect different types of fraudulent access attempts is presented; experimental results show that the proposed method is highly competitive with other state-of-the-art approaches.
Abstract: To ensure the actual presence of a real legitimate trait in contrast to a fake self-manufactured synthetic or reconstructed sample is a significant problem in biometric authentication, which requires the development of new and efficient protection measures. In this paper, we present a novel software-based fake detection method that can be used in multiple biometric systems to detect different types of fraudulent access attempts. The objective of the proposed system is to enhance the security of biometric recognition frameworks, by adding liveness assessment in a fast, user-friendly, and non-intrusive manner, through the use of image quality assessment. The proposed approach presents a very low degree of complexity, which makes it suitable for real-time applications, using 25 general image quality features extracted from one image (i.e., the same acquired for authentication purposes) to distinguish between legitimate and impostor samples. The experimental results, obtained on publicly available data sets of fingerprint, iris, and 2D face, show that the proposed method is highly competitive compared with other state-of-the-art approaches and that the analysis of the general image quality of real biometric samples reveals highly valuable information that may be very efficiently used to discriminate them from fake traits.
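To make the feature-based idea concrete, here is a hedged sketch of a few of the full-reference quality measures named in the excerpts below (MSE, PSNR, SNR), gathered into a feature vector for a liveness classifier. The paper's full 25-feature set and its choice of reference image are not reproduced; the function and variable names are assumptions.

```python
# Sketch: a handful of full-reference image quality measures as liveness features.
import numpy as np

def iq_features(img: np.ndarray, ref: np.ndarray, peak: float = 255.0) -> np.ndarray:
    img = img.astype(np.float64)
    ref = ref.astype(np.float64)
    err = img - ref
    mse = np.mean(err ** 2)                                              # Mean Squared Error
    psnr = np.inf if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)      # Peak SNR (dB)
    snr = 10.0 * np.log10(np.sum(ref ** 2) / (np.sum(err ** 2) + 1e-12)) # SNR (dB)
    return np.array([mse, psnr, snr])

# These features would then be fed to an ordinary classifier to separate
# real from fake samples (the classifier choice is not specified here).
```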

444 citations


Cites background from "Scope of validity of PSNR in image/..."

  • ...2 FR PSNR Peak Signal to Noise Ratio [30]: PSNR(I, Î) = 10 · log10( max(I)² / MSE(I, Î) )...


  • ...Here we include: Mean Squared Error (MSE), Peak Signal to Noise Ratio (PSNR), Signal to Noise Ratio (SNR), Structural Content (SC), Maximum Difference (MD), Average Difference (AD), Normalized Absolute Error (NAE), R-Averaged Maximum Difference (RAMD) and Laplacian Mean Squared Error (LMSE)....


References
Journal ArticleDOI
TL;DR: A unified approach to the coder control of video coding standards such as MPEG-2, H.263, MPEG-4, and the draft video coding standard H.264/AVC (advanced video coding) is presented.
Abstract: A unified approach to the coder control of video coding standards such as MPEG-2, H.263, MPEG-4, and the draft video coding standard H.264/AVC (advanced video coding) is presented. The performance of the various standards is compared by means of PSNR and subjective testing results. The results indicate that H.264/AVC compliant encoders typically achieve essentially the same reproduction quality as encoders that are compliant with the previous standards while typically requiring 60% or less of the bit rate.

3,312 citations
