Journal ArticleDOI

Perceptual Blur and Ringing Metrics: Application to JPEG2000

TL;DR: A full- and no-reference blur metric as well as a full-reference ringing metric are presented; they are based on an analysis of the edges and adjacent regions in an image and have very low computational complexity.
Abstract: We present a full- and no-reference blur metric as well as a full-reference ringing metric. These metrics are based on an analysis of the edges and adjacent regions in an image and have very low computational complexity. As blur and ringing are typical artifacts of wavelet compression, the metrics are then applied to JPEG2000 coded images. Their perceptual significance is corroborated through a number of subjective experiments. The results show that the proposed metrics perform well over a wide range of image content and distortion levels. Potential applications include source coding optimization and network resource management.
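
The edge-based approach the abstract describes lends itself to a compact illustration. Below is a minimal Python sketch of a no-reference, edge-width blur estimate in the spirit of the metric; it is not the authors' exact algorithm, and the Sobel operator, the `edge_thresh` fraction, and the function name are all assumptions made for illustration.

```python
import numpy as np
from scipy import ndimage

def edge_width_blur(gray, edge_thresh=0.1):
    """Hypothetical no-reference blur estimate: mean width of vertical edges.

    Sketch in the spirit of edge-width blur metrics: locate strong
    vertical edges, then take each edge's width as the distance between
    the local luminance extrema bounding the transition.
    """
    g = gray.astype(np.float64)
    grad = ndimage.sobel(g, axis=1)   # horizontal gradient responds to vertical edges
    strong = np.abs(grad) > edge_thresh * np.abs(grad).max()

    widths = []
    for y, x in zip(*np.nonzero(strong)):
        row, sign = g[y], 1.0 if grad[y, x] > 0 else -1.0
        l = x
        while l > 0 and sign * (row[l] - row[l - 1]) > 0:   # walk to the local extremum on the left
            l -= 1
        r = x
        while r < len(row) - 1 and sign * (row[r + 1] - row[r]) > 0:  # and on the right
            r += 1
        widths.append(r - l)
    return float(np.mean(widths)) if widths else 0.0
```

A larger mean width indicates a blurrier image. The paper's full-reference variant and the ringing metric additionally examine the regions adjacent to each edge, which is where wavelet-compression oscillations show up.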


Citations
Journal ArticleDOI
TL;DR: This article reviews the reasons why people want to love or leave the venerable (but perhaps hoary) MSE, surveys emerging alternative signal fidelity measures, and discusses their potential application to a wide variety of problems.
Abstract: In this article, we have reviewed the reasons why we (collectively) want to love or leave the venerable (but perhaps hoary) MSE. We have also reviewed emerging alternative signal fidelity measures and discussed their potential application to a wide variety of problems. The message we are trying to send here is not that one should abandon use of the MSE nor to blindly switch to any other particular signal fidelity measure. Rather, we hope to make the point that there are powerful, easy-to-use, and easy-to-understand alternatives that might be deployed depending on the application environment and needs. While we expect (and indeed, hope) that the MSE will continue to be widely used as a signal fidelity measure, it is our greater desire to see more advanced signal fidelity measures being used, especially in applications where perceptual criteria might be relevant. Ideally, the performance of a new signal processing algorithm might be compared to other algorithms using several fidelity criteria. Lastly, we hope that we have given further motivation to the community to consider recent advanced signal fidelity measures as design criteria for optimizing signal processing algorithms and systems. It is in this direction that we believe that the greatest benefit eventually lies.
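
To make the article's contrast concrete, here is a small numpy sketch computing MSE and PSNR alongside a simplified, single-window SSIM (the published index averages this statistic over local windows). The constants follow the commonly used K1 = 0.01 and K2 = 0.03; the function names are illustrative.

```python
import numpy as np

def mse(x, y):
    return np.mean((x.astype(np.float64) - y.astype(np.float64)) ** 2)

def psnr(x, y, peak=255.0):
    m = mse(x, y)
    return np.inf if m == 0 else 10.0 * np.log10(peak ** 2 / m)

def global_ssim(x, y, peak=255.0, k1=0.01, k2=0.03):
    """Single-window SSIM over the whole image; the full index
    averages this quantity over local windows."""
    x = x.astype(np.float64)
    y = y.astype(np.float64)
    c1, c2 = (k1 * peak) ** 2, (k2 * peak) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cxy = ((x - mx) * (y - my)).mean()   # cross-covariance
    return ((2 * mx * my + c1) * (2 * cxy + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

Unlike MSE, the structural term rewards preserved local structure rather than small pointwise differences, which is the article's central argument for perceptually motivated fidelity measures.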

2,601 citations

Journal ArticleDOI
TL;DR: DIIVINE is capable of assessing the quality of a distorted image across multiple distortion categories, unlike most NR IQA algorithms, which are distortion-specific in nature; it is statistically superior to the often-used measure of peak signal-to-noise ratio (PSNR) and statistically equivalent to the popular structural similarity index (SSIM).
Abstract: Our approach to blind image quality assessment (IQA) is based on the hypothesis that natural scenes possess certain statistical properties which are altered in the presence of distortion, rendering them unnatural; and that by characterizing this unnaturalness using scene statistics, one can identify the distortion afflicting the image and perform no-reference (NR) IQA. Based on this theory, we propose an (NR)/blind algorithm-the Distortion Identification-based Image Verity and INtegrity Evaluation (DIIVINE) index-that assesses the quality of a distorted image without the need for a reference image. DIIVINE is based on a 2-stage framework involving distortion identification followed by distortion-specific quality assessment. DIIVINE is capable of assessing the quality of a distorted image across multiple distortion categories, unlike most NR IQA algorithms, which are distortion-specific in nature. DIIVINE is based on natural scene statistics which govern the behavior of natural images. In this paper, we detail the principles underlying DIIVINE, the statistical features extracted and their relevance to perception, and thoroughly evaluate the algorithm on the popular LIVE IQA database. Further, we compare the performance of DIIVINE against leading full-reference (FR) IQA algorithms and demonstrate that DIIVINE is statistically superior to the often-used measure of peak signal-to-noise ratio (PSNR) and statistically equivalent to the popular structural similarity index (SSIM). A software release of DIIVINE has been made available online: http://live.ece.utexas.edu/research/quality/DIIVINE_release.zip for public use and evaluation.
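
A hedged sketch of the kind of natural-scene-statistic feature such methods build on: fitting a generalized Gaussian shape parameter to wavelet detail subbands. DIIVINE itself uses a steerable pyramid and a much richer feature set; the separable wavelet (via the PyWavelets package), the moment-matching estimator, and all names here are simplifications assumed for illustration.

```python
import numpy as np
import pywt
from scipy.special import gamma

def ggd_shape(coeffs):
    """Moment-matching estimate of the generalized Gaussian shape
    parameter for zero-mean subband coefficients: match the ratio
    (E|x|)^2 / E[x^2] = Gamma(2/g)^2 / (Gamma(1/g) * Gamma(3/g))."""
    c = coeffs.ravel().astype(np.float64)
    rho = np.mean(np.abs(c)) ** 2 / np.mean(c ** 2)
    grid = np.linspace(0.2, 4.0, 3801)
    r = gamma(2.0 / grid) ** 2 / (gamma(1.0 / grid) * gamma(3.0 / grid))
    return float(grid[np.argmin(np.abs(r - rho))])

def nss_features(gray, wavelet="db2", levels=3):
    """One shape parameter per detail subband; distortion tends to
    shift these away from the values typical of natural images."""
    _, *details = pywt.wavedec2(gray.astype(np.float64), wavelet, level=levels)
    feats = []
    for (ch, cv, cd) in details:
        feats += [ggd_shape(ch), ggd_shape(cv), ggd_shape(cd)]
    return np.array(feats)
```

Feature vectors like this one feed the classifier and regressors of the two-stage framework described in the abstract.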

1,501 citations

Journal ArticleDOI
TL;DR: A new two-step framework for no-reference image quality assessment based on natural scene statistics (NSS) is proposed; once trained, it requires no knowledge of the distorting process and is modular, in that it can be extended to any number of distortions.
Abstract: Present day no-reference/blind image quality assessment (NR IQA) algorithms usually assume that the distortion affecting the image is known. This is a limiting assumption for practical applications, since in a majority of cases the distortions in the image are unknown. We propose a new two-step framework for no-reference image quality assessment based on natural scene statistics (NSS). Once trained, the framework does not require any knowledge of the distorting process, and it is modular in that it can be extended to any number of distortions. We describe the framework for blind image quality assessment, and a version of this framework-the blind image quality index (BIQI)-is evaluated on the LIVE image quality assessment database. A software release of BIQI has been made available online: http://live.ece.utexas.edu/research/quality/BIQI_release.zip.
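
The two-step combination BIQI describes can be written in a few lines. This is a schematic sketch, not the released implementation: `classifier` and `regressors` stand in for pre-trained, sklearn-style models, and the names and shapes are assumptions.

```python
import numpy as np

def two_step_score(features, classifier, regressors):
    """Two-step NR IQA in the BIQI mold: a classifier estimates the
    probability of each distortion class, distortion-specific models
    score the image, and the final score is the probability-weighted
    sum of the per-distortion scores."""
    x = features.reshape(1, -1)
    p = classifier.predict_proba(x)[0]                     # (n_distortions,)
    q = np.array([r.predict(x)[0] for r in regressors])    # one score per distortion
    return float(np.dot(p, q))
```

The modularity claimed in the abstract falls out of this structure: supporting a new distortion means adding one class to the classifier and one regressor, without retraining the others.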

1,085 citations

Book
01 Jan 2006
TL;DR: This book is about objective image quality assessment: it aims to provide computational models that can automatically predict perceptual image quality, and to provide new directions for future research by introducing recent models and paradigms that significantly differ from those used in the past.
Abstract: This book is about objective image quality assessment, where the aim is to provide computational models that can automatically predict perceptual image quality. The early years of the 21st century have witnessed a tremendous growth in the use of digital images as a means for representing and communicating information. A considerable percentage of this literature is devoted to methods for improving the appearance of images, or for maintaining the appearance of images that are processed. Nevertheless, the quality of digital images, processed or otherwise, is rarely perfect. Images are subject to distortions during acquisition, compression, transmission, processing, and reproduction. To maintain, control, and enhance the quality of images, it is important for image acquisition, management, communication, and processing systems to be able to identify and quantify image quality degradations. The goals of this book are as follows: a) to introduce the fundamentals of image quality assessment, and to explain the relevant engineering problems, b) to give a broad treatment of the current state-of-the-art in image quality assessment, by describing leading algorithms that address these engineering problems, and c) to provide new directions for future research, by introducing recent models and paradigms that significantly differ from those used in the past. The book is written to be accessible to university students curious about the state-of-the-art of image quality assessment, expert industrial R&D engineers seeking to implement image/video quality assessment systems for specific applications, and academic theorists interested in developing new algorithms for image quality assessment or using existing algorithms to design or optimize other image processing applications.

1,041 citations

Journal ArticleDOI
TL;DR: This paper provides a systematic, comprehensive, and up-to-date review of perceptual visual quality metrics (PVQMs), which predict picture quality according to human perception.

895 citations

References
Book
30 Nov 2001
TL;DR: This work is specifically relevant to those involved in the development of software and hardware solutions for multimedia, Internet, and medical imaging applications.
Abstract: This is nothing less than a totally essential reference for engineers and researchers in any field of work that involves the use of compressed imagery. Beginning with a thorough and up-to-date overview of the fundamentals of image compression, the authors move on to provide a complete description of the JPEG2000 standard. They then devote space to the implementation and exploitation of that standard. The final section describes other key image compression systems. This work has specific applications for those involved in the development of software and hardware solutions for multimedia, internet, and medical imaging applications.

3,115 citations

Book
01 Jan 2000
TL;DR: The Handbook of Image and Video Processing contains a comprehensive and highly accessible presentation of all essential mathematics, techniques, and algorithms for every type of image and video processing used by scientists and engineers.
Abstract: Table of contents:
1.0 INTRODUCTION: 1.1 Introduction to Image and Video Processing (Bovik)
2.0 BASIC IMAGE PROCESSING TECHNIQUES: 2.1 Basic Gray-Level Image Processing (Bovik); 2.2 Basic Binary Image Processing (Desai/Bovik); 2.3 Basic Image Fourier Analysis and Convolution (Bovik)
3.0 IMAGE AND VIDEO PROCESSING: Image and Video Enhancement and Restoration: 3.1 Basic Linear Filtering for Image Enhancement (Acton/Bovik); 3.2 Nonlinear Filtering for Image Enhancement (Arce); 3.3 Morphological Filtering for Image Enhancement and Detection (Maragos); 3.4 Wavelet Denoising for Image Enhancement (Wei); 3.5 Basic Methods for Image Restoration and Identification (Biemond); 3.6 Regularization for Image Restoration and Reconstruction (Karl); 3.7 Multi-Channel Image Recovery (Galatsanos); 3.8 Multi-Frame Image Restoration (Schulz); 3.9 Iterative Image Restoration (Katsaggelos); 3.10 Motion Detection and Estimation (Konrad); 3.11 Video Enhancement and Restoration (Lagendijk). Reconstruction from Multiple Images: 3.12 3-D Shape Reconstruction from Multiple Views (Aggarwal); 3.13 Image Stabilization and Mosaicking (Chellappa)
4.0 IMAGE AND VIDEO ANALYSIS: Image Representations and Image Models: 4.1 Computational Models of Early Human Vision (Cormack); 4.2 Multiscale Image Decomposition and Wavelets (Moulin); 4.3 Random Field Models (Zhang); 4.4 Modulation Models (Havlicek); 4.5 Image Noise Models (Boncelet); 4.6 Color and Multispectral Representations (Trussell). Image and Video Classification and Segmentation: 4.7 Statistical Methods (Lakshmanan); 4.8 Multi-Band Techniques for Texture Classification and Segmentation (Manjunath); 4.9 Video Segmentation (Tekalp); 4.10 Adaptive and Neural Methods for Image Segmentation (Ghosh). Edge and Boundary Detection in Images: 4.11 Gradient and Laplacian-Type Edge Detectors (Rodriguez); 4.12 Diffusion-Based Edge Detectors (Acton). Algorithms for Image Processing: 4.13 Software for Image and Video Processing (Evans)
5.0 IMAGE COMPRESSION: 5.1 Lossless Coding (Karam); 5.2 Block Truncation Coding (Delp); 5.3 Vector Quantization (Smith); 5.4 Wavelet Image Compression (Ramchandran); 5.5 The JPEG Lossy Standard (Ansari); 5.6 The JPEG Lossless Standard (Memon); 5.7 Multispectral Image Coding (Bouman)
6.0 VIDEO COMPRESSION: 6.1 Basic Concepts and Techniques of Video Coding (Barnett/Bovik); 6.2 Spatiotemporal Subband/Wavelet Video Compression (Woods); 6.3 Object-Based Video Coding (Kunt); 6.4 MPEG-I and MPEG-II Video Standards (Ming-Ting Sun); 6.5 Emerging MPEG Standards: MPEG-IV and MPEG-VII (Kossentini)
7.0 IMAGE AND VIDEO ACQUISITION: 7.1 Image Scanning, Sampling, and Interpolation (Allebach); 7.2 Video Sampling and Interpolation (Dubois)
8.0 IMAGE AND VIDEO RENDERING AND ASSESSMENT: 8.1 Image Quantization, Halftoning, and Printing (Wong); 8.2 Perceptual Criteria for Image Quality Evaluation (Pappas)
9.0 IMAGE AND VIDEO STORAGE, RETRIEVAL AND COMMUNICATION: 9.1 Image and Video Indexing and Retrieval (Tsuhan Chen); 9.2 A Unified Framework for Video Browsing and Retrieval (Huang); 9.3 Image and Video Communication Networks (Schonfeld); 9.4 Image Watermarking (Pitas)
10.0 APPLICATIONS OF IMAGE PROCESSING: 10.1 Synthetic Aperture Radar Imaging (Goodman/Carrera); 10.2 Computed Tomography (Leahy); 10.3 Cardiac Imaging (Higgins); 10.4 Computer-Aided Detection for Screening Mammography (Bowyer); 10.5 Fingerprint Classification and Matching (Jain); 10.6 Probabilistic Models for Face Recognition (Pentland/Moghaddam); 10.7 Confocal Microscopy (Merchant/Bartels); 10.8 Automatic Target Recognition (Miller)
Index

1,678 citations

Journal ArticleDOI
TL;DR: The problem of blind deconvolution for images is introduced, the basic principles and methodologies behind the existing algorithms are provided, and the current trends and the potential of this difficult signal processing problem are examined.
Abstract: The goal of image restoration is to reconstruct the original scene from a degraded observation. This recovery process is critical to many image processing applications. Although classical linear image restoration has been thoroughly studied, the more difficult problem of blind image restoration has numerous research possibilities. We introduce the problem of blind deconvolution for images, provide an overview of the basic principles and methodologies behind the existing algorithms, and examine the current trends and the potential of this difficult signal processing problem. A broad review of blind deconvolution methods for images is given to portray the experience of the authors and of the many other researchers in this area. We first introduce the blind deconvolution problem for general signal processing applications. The specific challenges encountered in image related restoration applications are explained. Analytic descriptions of the structure of the major blind deconvolution approaches for images then follows. The application areas, convergence properties, complexity, and other implementation issues are addressed for each approach. We then discuss the strengths and limitations of various approaches based on theoretical expectations and computer simulations.
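
As background to the review, a minimal frequency-domain Wiener deconvolution (the classical non-blind case, where the point spread function is known) shows what blind methods must achieve without knowing `psf`. The parameter `nsr` and the function name are illustrative assumptions.

```python
import numpy as np

def wiener_deconvolve(blurred, psf, nsr=1e-2):
    """Classical non-blind Wiener deconvolution with a known PSF.

    Blind deconvolution must estimate `psf` as well; this baseline is
    the easy half of the problem. `nsr` is an assumed noise-to-signal
    power ratio acting as a regularizer. The PSF is taken as centered
    at the array origin; otherwise the output is circularly shifted.
    """
    H = np.fft.fft2(psf, s=blurred.shape)     # zero-padded PSF spectrum
    G = np.fft.fft2(blurred)
    W = np.conj(H) / (np.abs(H) ** 2 + nsr)   # Wiener filter
    return np.real(np.fft.ifft2(W * G))
```

Note the deliberate ill-posedness: where |H| is small, the `nsr` term keeps the division stable, which is exactly the trade-off (regularization versus ringing) that the surveyed blind methods negotiate while simultaneously estimating the blur.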

1,332 citations


"Perceptual Blur and Ringing Metrics..." refers background in this paper

  • ...While measuring the perceptual blur in an image or a video sequence has not yet been investigated, related research topics include blur identification [6], blur estimation [4,2], image deblurring [1] and blind deconvolution [5]....


01 Jan 2002
TL;DR: The JPEG2000 standard (ISO/IEC 15444 | ITU-T Recommendation T.800) is being issued in six parts; Part 1, in the same vein as the JPEG baseline system, is aimed at minimal complexity and maximal interchange and was issued as an International Standard at the end of 2000.
Abstract: In 1996, the JPEG committee began to investigate possibilities for a new still image compression standard to serve current and future applications. This initiative, which was named JPEG2000, has resulted in a comprehensive standard (ISO/IEC 15444 | ITU-T Recommendation T.800) that is being issued in six parts. Part 1, in the same vein as the JPEG baseline system, is aimed at minimal complexity and maximal interchange and was issued as an International Standard at the end of 2000. Parts 2–6 define extensions to both the compression technology and the file format and are currently in various stages of development. In this paper, a technical description of Part 1 of the JPEG2000 standard is provided, and the rationale behind the selected technologies is explained. Although the JPEG2000 standard only specifies the decoder and the codestream syntax, the discussion will span both encoder and decoder issues to provide a better understanding of the standard in various applications.
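
Since the main paper applies its metrics to JPEG2000-coded images, a toy wavelet round trip makes the artifact mechanism visible: coarse quantization of subband coefficients produces exactly the blur and ringing the metrics measure. This sketch uses the PyWavelets package with the "bior4.4" filter (close kin to the CDF 9/7 filter of lossy JPEG2000); it omits code-block coding, EBCOT, and rate control, and all names and the `step` value are assumptions.

```python
import numpy as np
import pywt

def toy_wavelet_codec(gray, wavelet="bior4.4", levels=3, step=40.0):
    """Toy JPEG2000-like round trip: 2-D DWT, uniform scalar
    quantization of every subband, inverse DWT. Large `step` values
    visibly blur edges and add ringing around them. This is not the
    actual standard: no code-blocks, EBCOT, or rate control.
    """
    coeffs = pywt.wavedec2(gray.astype(np.float64), wavelet, level=levels)
    quant = [np.round(coeffs[0] / step) * step]        # approximation band
    for detail in coeffs[1:]:                          # (cH, cV, cD) per level
        quant.append(tuple(np.round(c / step) * step for c in detail))
    return pywt.waverec2(quant, wavelet)
```

Running an image through this round trip at increasing `step` values and scoring the outputs is a quick way to sanity-check any blur or ringing metric against a monotonically worsening distortion.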

664 citations