Topic

Quantization (image processing)

About: Quantization (image processing) is a research topic. Over the lifetime, 7977 publications have been published within this topic receiving 126632 citations.


Papers
Proceedings ArticleDOI
01 Mar 1984
TL;DR: Adaptive vector quantization of color pictures is shown to produce better mean-square-error results than block cosine transform coding at the same data rate.
Abstract: Vector quantization techniques are currently favored in speech compression. Recently, a nonadaptive form of vector quantization was proposed for monochrome image compression. In this paper, color picture compression by adaptive vector quantization is presented. Both spatial and spectral redundancy are exploited. Adaptive vector quantization of color pictures is shown to produce better mean-square-error results than block cosine transform coding at the same data rate. No statistical image model is assumed. Decoding is a simple table look-up, permitting video-rate decoding; only a small refresh memory containing the codewords is necessary.

27 citations
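As a rough illustration of the table-look-up decoding described in the paper above, the following Python sketch builds a fixed (non-adaptive) codebook with plain k-means and reconstructs the image by indexing into it. The block size, codebook size, and k-means settings are illustrative choices; the paper's adaptive, spectrally-aware color scheme is not reproduced here.

```python
# Minimal sketch of (non-adaptive) vector quantization of a grayscale image:
# train a codebook with k-means, encode blocks as codeword indices, and
# decode by simple table look-up.  Parameters are illustrative only.
import numpy as np
from scipy.cluster.vq import kmeans2, vq

def encode_vq(img, block=4, codebook_size=64):
    h, w = img.shape
    # Slice the image into non-overlapping block x block vectors.
    vecs = (img[:h - h % block, :w - w % block]
            .reshape(h // block, block, w // block, block)
            .swapaxes(1, 2)
            .reshape(-1, block * block)
            .astype(np.float64))
    codebook, _ = kmeans2(vecs, codebook_size, minit='++', seed=0)
    indices, _ = vq(vecs, codebook)          # nearest-codeword indices
    return codebook, indices

def decode_vq(codebook, indices, h, w, block=4):
    # Decoding is just a table look-up into the codebook.
    vecs = codebook[indices]
    return (vecs.reshape(h // block, w // block, block, block)
                .swapaxes(1, 2)
                .reshape(h, w))

if __name__ == "__main__":
    img = np.random.rand(64, 64) * 255       # stand-in for a real image
    cb, idx = encode_vq(img)
    rec = decode_vq(cb, idx, 64, 64)
    print("MSE:", round(float(np.mean((img - rec) ** 2)), 2))
```

On a real image, a larger codebook trades bits for lower mean-square error; the decoder never looks at pixel statistics, it only indexes the codeword table, which is what makes video-rate decoding cheap.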

Book ChapterDOI
07 Sep 2010
TL;DR: Experimental results demonstrate that combining the two transforms improves the performance of the steganography technique in terms of PSNR and outperforms a scheme that uses the DWT alone.
Abstract: In this paper, a copyright protection scheme that combines the Discrete Wavelet Transform (DWT) and the Discrete Cosine Transform (DCT) is proposed. The scheme first extracts the DCT coefficients of the secret image by applying the DCT. Image features are then extracted from the cover image and from the DCT coefficients by applying the DWT to each separately. The extracted features of the DCT coefficients are hidden in the features of the cover image using two different secret keys. Experiments have been carried out with eight different attacks. Experimental results demonstrate that combining the two transforms improves the performance of the steganography technique in terms of PSNR and outperforms a scheme that uses the DWT alone. The extracted image also has good visual quality.

27 citations
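A minimal sketch of the general DWT-plus-DCT idea from the paper above, assuming a simple additive embedding into one detail subband. The paper's feature extraction, dual secret keys, and attack experiments are not reproduced; the embedding strength `alpha`, the wavelet choice, and the non-blind extraction (which needs the original cover image) are illustrative assumptions.

```python
# Hypothetical DWT+DCT hiding sketch: DCT of the secret image is added,
# scaled by alpha, to the diagonal detail subband of the cover's DWT.
import numpy as np
import pywt
from scipy.fft import dctn, idctn

def embed(cover, secret, alpha=0.05, wavelet='haar'):
    secret_dct = dctn(secret, norm='ortho')           # energy-compacted secret
    cA, (cH, cV, cD) = pywt.dwt2(cover, wavelet)      # one-level DWT of cover
    s = secret_dct[:cD.shape[0], :cD.shape[1]]        # fit into the subband
    cD_marked = cD + alpha * s
    return pywt.idwt2((cA, (cH, cV, cD_marked)), wavelet)

def extract(stego, cover, secret_shape, alpha=0.05, wavelet='haar'):
    # Non-blind extraction: subtract the original cover's subband.
    _, (_, _, cD_marked) = pywt.dwt2(stego, wavelet)
    _, (_, _, cD) = pywt.dwt2(cover, wavelet)
    secret_dct = np.zeros(secret_shape)
    secret_dct[:cD.shape[0], :cD.shape[1]] = (cD_marked - cD) / alpha
    return idctn(secret_dct, norm='ortho')

if __name__ == "__main__":
    cover = np.random.rand(128, 128) * 255
    secret = np.random.rand(64, 64) * 255
    stego = embed(cover, secret)
    recovered = extract(stego, cover, secret.shape)
    print("max reconstruction error:", float(np.abs(recovered - secret).max()))
```

With no attack applied, the secret is recovered almost exactly; robustness against the eight attacks reported in the paper would depend on the feature extraction and key scheme omitted here.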

Journal ArticleDOI
TL;DR: The results show that the PQR₁₋₅ of diagnostically acceptable lossy image reconstructions agrees better with cardiologists' responses than objective error measurements such as peak signal-to-noise ratio.
Abstract: This paper describes a multistage perceptual quality assessment (MPQA) model for compressed images. The motivation for developing a perceptual quality assessment is to measure (in)visible differences between original and processed images. The MPQA produces visible distortion maps and quantitative error measures informed by considerations of the human visual system (HVS). Original and decompressed images are decomposed into different spatial frequency bands and orientations, modeling the human cortex. Contrast errors are calculated for each frequency and orientation and masked as a function of contrast sensitivity and background uncertainty. The spatially masked contrast error measurements are then combined across frequency bands and orientations to produce a single perceptual distortion visibility map (PDVM). A perceptual quality rating (PQR) is calculated from the PDVM and transformed onto a one-to-five scale, PQR₁₋₅, for direct comparison with the mean opinion score generally used in subjective ratings. The proposed MPQA model builds on existing perceptual quality assessment models but is differentiated by the inclusion of contrast masking as a function of background uncertainty. A pilot clinical study on wavelet-compressed digital angiograms has been performed on a sample set of images to identify diagnostically acceptable reconstructions. The results show that the PQR₁₋₅ of diagnostically acceptable lossy image reconstructions agrees better with cardiologists' responses than objective error measurements such as peak signal-to-noise ratio. A Perceptual thresholding and CSF-based Uniform quantization (PCU) method is also proposed using the vision models presented in this paper. The vision models are implemented in the thresholding and quantization stages of a compression algorithm and shown to produce improved compression-ratio performance with less visible distortion than the embedded zerotree wavelet (EZW) coder.

27 citations
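For contrast, here is a small sketch of plain PSNR next to a toy band-wise RMS error computed over wavelet subbands. It is not the MPQA/PQR model above (no cortical decomposition, contrast sensitivity function, or masking); it only illustrates the idea of scoring distortion per spatial-frequency band rather than with a single global number. The wavelet, level count, and noise level are arbitrary choices.

```python
# Toy comparison: global PSNR versus per-subband RMS error.
import numpy as np
import pywt

def psnr(ref, test, peak=255.0):
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    return float('inf') if mse == 0 else 10 * np.log10(peak ** 2 / mse)

def bandwise_error(ref, test, wavelet='haar', levels=3):
    # RMS error in each wavelet subband, i.e. per spatial-frequency band.
    ref_c = pywt.wavedec2(ref.astype(np.float64), wavelet, level=levels)
    tst_c = pywt.wavedec2(test.astype(np.float64), wavelet, level=levels)
    errors = {'approx': float(np.sqrt(np.mean((ref_c[0] - tst_c[0]) ** 2)))}
    for lvl, (r_bands, t_bands) in enumerate(zip(ref_c[1:], tst_c[1:]), 1):
        for name, r, t in zip(('H', 'V', 'D'), r_bands, t_bands):
            errors[f'level{lvl}_{name}'] = float(np.sqrt(np.mean((r - t) ** 2)))
    return errors

if __name__ == "__main__":
    ref = np.random.rand(256, 256) * 255
    test = ref + np.random.randn(256, 256) * 5   # mildly distorted copy
    print("PSNR:", round(psnr(ref, test), 2), "dB")
    print(bandwise_error(ref, test))
```

A perceptual model such as MPQA would weight and mask these band errors before pooling them; the sketch stops at the raw per-band measurements.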

Proceedings ArticleDOI
04 Oct 1998
TL;DR: This work assesses whether the added orthogonality of the new balanced multiwavelets yields a performance gain compared to traditional biorthogonal transforms, and re-establishes the rule of thumb that strict orthogonality is not a key factor in image transform coding.
Abstract: Biorthogonal wavelets have been used with great success in most recent transform image coders. Using the new balanced multiwavelets, one can now easily design fully orthogonal linear-phase FIR transform schemes. The aim of this work is to assess whether the added orthogonality yields a performance gain compared to traditional biorthogonal transforms. As a comparison platform we use the well-known SPIHT codec, which is based on the significance tree quantization (STQ) principle. Without any particular fine-tuning, the multiwavelet codec performs within 0.5 dB of SPIHT. A closer inspection shows, however, that it is hard to improve on this, re-establishing the rule of thumb that strict orthogonality is not a key factor in image transform coding. More details are available at http://lcavwww.epfl.ch/~weidmann/mwcoder.

27 citations
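The experiment below is only loosely in the spirit of the comparison above: it measures reconstruction quality after keeping the largest wavelet coefficients for an orthogonal wavelet ('db4') versus a biorthogonal one ('bior4.4'), using PyWavelets. It does not implement SPIHT, significance-tree quantization, or balanced multiwavelets, and the random test image and `keep` fraction are placeholders; a natural test image would make the comparison more meaningful.

```python
# Crude orthogonal-vs-biorthogonal comparison: keep the largest `keep`
# fraction of wavelet coefficients, reconstruct, and report PSNR.
import numpy as np
import pywt

def compress_psnr(img, wavelet, keep=0.05, levels=4):
    coeffs = pywt.wavedec2(img, wavelet, level=levels)
    arr, slices = pywt.coeffs_to_array(coeffs)
    # Crude "quantization": zero out all but the largest `keep` fraction.
    thresh = np.quantile(np.abs(arr), 1 - keep)
    arr[np.abs(arr) < thresh] = 0
    rec = pywt.waverec2(
        pywt.array_to_coeffs(arr, slices, output_format='wavedec2'),
        wavelet)[:img.shape[0], :img.shape[1]]
    mse = np.mean((img - rec) ** 2)
    return 10 * np.log10(255.0 ** 2 / mse)

if __name__ == "__main__":
    img = np.random.rand(256, 256) * 255      # stand-in for a test image
    for w in ('db4', 'bior4.4'):
        print(w, round(compress_psnr(img, w), 2), "dB")
```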

Patent
14 Aug 2003
TL;DR: In this article, the authors propose a method for selective filtering of discrete cosine transform (DCT) coefficients in the frequency domain, rather than the processing-intensive pixel domain (time domain), to reduce the number of bits used to encode a picture.
Abstract: The invention relates to methods and apparatus that provide selective filtering of discrete cosine transform (DCT) coefficients. Advantageously, the filtering of the DCT coefficients is performed efficiently in the frequency domain rather than the processing-intensive pixel domain (time domain). The DCT filtering is performed “in-loop” with the DCT encoding, not as a preprocessing step that is independent of the encoding loop. The DCT filtering advantageously reduces the number of bits used to encode a picture, which can preserve compliance with buffer-model occupancy levels while improving picture quality over conventional bit-saving techniques, such as increasing the value of the quantization parameter QP.

27 citations
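An illustrative sketch, not the patented method: it drops high-frequency DCT coefficients of an 8x8 block directly in the frequency domain before a uniform quantizer whose step stands in for a QP-like parameter, so that fewer nonzero coefficients survive and fewer bits would be needed. The names `cutoff` and `qp` and their values are hypothetical.

```python
# Frequency-domain filtering of an 8x8 DCT block followed by uniform
# quantization.  Dropping high-frequency coefficients before quantization
# leaves fewer nonzero values to encode.
import numpy as np
from scipy.fft import dctn, idctn

def filter_and_quantize(block, qp=8, cutoff=6):
    coeffs = dctn(block.astype(np.float64), norm='ortho')
    # Zero coefficients whose row + column index exceeds `cutoff`
    # (a rough stand-in for discarding the high-frequency corner).
    rows, cols = np.indices(coeffs.shape)
    coeffs[rows + cols > cutoff] = 0
    # Uniform quantization with a step controlled by the QP-like parameter.
    return np.round(coeffs / qp).astype(np.int32)

def dequantize(quantized, qp=8):
    return idctn(quantized.astype(np.float64) * qp, norm='ortho')

if __name__ == "__main__":
    block = (np.random.rand(8, 8) * 255).round()
    q = filter_and_quantize(block)
    print("nonzero coefficients:", int(np.count_nonzero(q)))
    print("reconstruction MSE:", round(float(np.mean((block - dequantize(q)) ** 2)), 2))
```

Lowering `cutoff` removes more high-frequency detail but saves bits without raising `qp`, which is the trade-off the patent's in-loop filtering is aimed at.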


Network Information
Related Topics (5)
Feature extraction
111.8K papers, 2.1M citations
84% related
Image segmentation
79.6K papers, 1.8M citations
84% related
Feature (computer vision)
128.2K papers, 1.7M citations
84% related
Image processing
229.9K papers, 3.5M citations
83% related
Robustness (computer science)
94.7K papers, 1.6M citations
81% related
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2022    8
2021    354
2020    283
2019    294
2018    259
2017    295