Topic
Quantization (image processing)
About: Quantization (image processing) is a research topic. Over the lifetime, 7977 publications have been published within this topic receiving 126632 citations.
Papers
TL;DR: A novel algorithm for speeding up codebook design in image vector quantization: it exploits the correlation among the pixels in an image block to reduce the computational complexity of the squared Euclidean distortion calculations, and uses the similarity between codevectors in consecutive codebooks to cut the number of codevectors that must be checked per codebook search.
Abstract: This paper presents a novel algorithm for speeding up codebook design in image vector quantization. The algorithm exploits the correlation among the pixels in an image block to reduce the computational complexity of calculating the squared Euclidean distortion measures, and uses the similarity between the codevectors in consecutive codebooks during the iterative clustering process to reduce the number of codevectors that need to be checked in each codebook search. Test results show that the proposed algorithm reduces execution time by almost 98% compared to the conventional Linde-Buzo-Gray (LBG) algorithm.
49 citations
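The LBG baseline this paper accelerates can be sketched in a few lines. Below is a minimal generalized-Lloyd iteration under the squared Euclidean distortion; the paper's specific speedups (pixel-correlation bounds on the distortion computation and cross-iteration codevector similarity) are not reproduced here, and all names are illustrative.

```python
import numpy as np

def lbg_codebook(vectors, codebook_size, n_iter=20, seed=0):
    """Minimal LBG (generalized Lloyd) codebook design.

    `vectors`: (N, d) array of flattened image-block vectors.
    Returns a (codebook_size, d) codebook.
    """
    rng = np.random.default_rng(seed)
    # Initialize with randomly chosen training vectors.
    idx = rng.choice(len(vectors), codebook_size, replace=False)
    codebook = vectors[idx].astype(float)
    for _ in range(n_iter):
        # Nearest-codevector assignment under squared Euclidean distortion.
        d2 = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
        labels = d2.argmin(axis=1)
        # Centroid update; an empty cell keeps its old codevector.
        for k in range(codebook_size):
            members = vectors[labels == k]
            if len(members):
                codebook[k] = members.mean(axis=0)
    return codebook

# Toy training set: 4x4 image blocks flattened to 16-dim vectors.
rng = np.random.default_rng(1)
train = rng.integers(0, 256, size=(512, 16)).astype(float)
cb = lbg_codebook(train, codebook_size=8)
print(cb.shape)
```

The full distance matrix `d2` is exactly the cost the paper attacks: each iteration computes N x K squared Euclidean distances, which is why pruning candidate codevectors yields such large speedups.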
28 Sep 1998
TL;DR: Multi-threshold wavelet coding (MTWC) as discussed by the authors utilizes a separate initial quantization threshold for each subband generated by the wavelet transform, which substantially reduces the number of insignificant bits generated during the initial quantization steps.
Abstract: A system and method for performing image compression using multi-threshold wavelet coding (MTWC) which utilizes a separate initial quantization threshold for each subband generated by the wavelet transform, which substantially reduces the number of insignificant bits generated during the initial quantization steps. Further, the MTWC system chooses the order in which the subbands are encoded according to a preferred rate-distortion tradeoff in order to enhance the image fidelity. Moreover, the MTWC system utilizes a novel quantization sequence order in order to optimize the amount of error energy reduction in significant and refinement maps generated during the quantization step. Thus, the MTWC system provides a better bit rate-distortion tradeoff and performs faster than existing state-of-the-art wavelet coders.
49 citations
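The per-subband threshold idea can be sketched as follows. This is an illustrative toy, not the MTWC coder: it assumes a one-level Haar decomposition and a common bit-plane convention where each band's initial threshold is the largest power of two not exceeding that band's peak coefficient magnitude.

```python
import numpy as np

def haar2d_level(img):
    """One level of a 2-D Haar transform; returns (LL, LH, HL, HH)."""
    a = img.astype(float)
    # Transform along rows.
    lo = (a[:, 0::2] + a[:, 1::2]) / 2
    hi = (a[:, 0::2] - a[:, 1::2]) / 2
    # Then along columns.
    LL = (lo[0::2] + lo[1::2]) / 2
    LH = (lo[0::2] - lo[1::2]) / 2
    HL = (hi[0::2] + hi[1::2]) / 2
    HH = (hi[0::2] - hi[1::2]) / 2
    return LL, LH, HL, HH

def initial_thresholds(subbands):
    """Separate initial quantization threshold per subband: the largest
    power of two not exceeding the band's maximum magnitude."""
    return [2.0 ** np.floor(np.log2(np.abs(b).max() + 1e-12))
            for b in subbands]

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(8, 8))
bands = haar2d_level(img)
print(initial_thresholds(bands))
```

Because the LL band's peak magnitude is far larger than the detail bands', a single shared threshold would force many early bit-planes in which the detail bands contribute only insignificant bits; per-band thresholds skip those planes.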
TL;DR: In order to reduce the blocking artifact in theJPEG-compressed images, a new noniterative postprocessing algorithm is proposed that provides comparable or better results with less computational complexity.
Abstract: In order to reduce the blocking artifact in the Joint Photographic Experts Group (JPEG)-compressed images, a new noniterative postprocessing algorithm is proposed. The algorithm consists of a two-step operation: low-pass filtering and then predicting. Predicting the original image from the low-pass filtered image is performed by using predictors constructed from a broken-line regression model. The constructed predictor is a generalized version of the projector onto the quantization constraint set or the narrow quantization constraint set. We employed different predictors depending on the frequency components in the discrete cosine transform (DCT) domain, since each component has different statistical properties. Further, by using a simple classifier, we adaptively applied the predictors depending on the local variance of the DCT block. This adaptation enables an appropriate degree of blurring for smooth and detail regions, and shows improved performance in terms of average distortion and perceptual quality. For major-edge DCT blocks, which usually suffer from the ringing artifact, the quality of fit to the regression model is usually poor; by modifying the regression model for such blocks, we can also obtain a good perceptual result. The proposed algorithm does not employ any sophisticated edge-oriented classifiers or nonlinear filters. Compared to previously proposed algorithms, it provides comparable or better results with less computational complexity.
49 citations
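The two-step structure (low-pass filtering, then variance-adaptive prediction of the original) can be illustrated with a deliberately simplified sketch. The linear blend below merely stands in for the paper's broken-line regression predictors, which are not reproduced; every function name and threshold here is an assumption.

```python
import numpy as np

def lowpass3x3(img):
    """3x3 mean filter with edge replication (step 1: low-pass)."""
    p = np.pad(img.astype(float), 1, mode="edge")
    h, w = img.shape
    out = np.zeros((h, w))
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out += p[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
    return out / 9.0

def deblock(img, detail_keep=0.8, var_thresh=25.0):
    """Step 2 (sketch): predict the original from the filtered image,
    adapting per 8x8 block on local variance -- smooth blocks keep the
    blurred estimate, detail blocks blend back toward the decoded pixels."""
    lp = lowpass3x3(img)
    out = lp.copy()
    h, w = img.shape
    for y in range(0, h, 8):
        for x in range(0, w, 8):
            blk = img[y:y + 8, x:x + 8].astype(float)
            if blk.var() > var_thresh:  # detail block: preserve texture
                out[y:y + 8, x:x + 8] = (detail_keep * blk
                                         + (1 - detail_keep) * lp[y:y + 8, x:x + 8])
    return out

rng = np.random.default_rng(0)
decoded = rng.integers(0, 256, size=(16, 16)).astype(float)
cleaned = deblock(decoded)
print(cleaned.shape)
```

The adaptation mirrors the paper's point: blurring everywhere removes blocking but destroys detail, so the amount of smoothing must depend on the local variance of each block.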
TL;DR: The problem of energy conservation in wireless image sensor networks is addressed, and a fast zonal discrete cosine transform (DCT) design which aims to decrease the complexity of JPEG baseline compression is presented.
Abstract: The problem of energy conservation in wireless image sensor networks is addressed, and a fast zonal discrete cosine transform (DCT) design which aims to decrease the complexity of JPEG baseline compression is presented. Energy consumption measurements have been made on a real wireless camera node in order to evaluate the amount of energy that can be saved during the image compression process without loss of visual quality. Although the gain depends on the compression rate, such a DCT design is a simple and effective way to prolong the lifetime of the camera nodes, and thereby the network lifetime.
49 citations
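The zonal idea — keep only a low-frequency zone of each 8x8 DCT block — can be sketched as follows. For clarity this computes the full transform and masks it, whereas a real energy-saving design would skip the arithmetic for the discarded coefficients; the zone shape and names are assumptions.

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix."""
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0] *= 1 / np.sqrt(2)
    return C * np.sqrt(2.0 / n)

def zonal_dct8(block, zone=4):
    """8x8 DCT keeping only coefficients in the triangular zone u + v < zone.

    Note: a real zonal implementation saves energy by never computing
    the masked coefficients; here they are computed and then zeroed
    purely for illustration.
    """
    C = dct_matrix(8)
    coef = C @ block.astype(float) @ C.T
    u, v = np.meshgrid(np.arange(8), np.arange(8), indexing="ij")
    coef[u + v >= zone] = 0.0
    return coef

flat = np.full((8, 8), 10.0)
coef = zonal_dct8(flat, zone=4)
```

For a flat block, all energy sits in the DC coefficient, which the zone always retains — the visual-quality argument is that natural image blocks concentrate most energy in exactly this low-frequency corner.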
TL;DR: A three-dimensional holographic image is degraded by quantization of the phase in the hologram; the degradation appears as a superposition of false images that fall at depth positions other than the plane of the image to which they correspond.
Abstract: A three-dimensional holographic image is deteriorated due to quantization of the phase in the hologram. As in two-dimensional Fourier holograms, the deterioration is exhibited as a superposition of false images. However, in the three-dimensional case, the false images fall at depth positions other than the plane of the image to which they correspond. If far out of focus, these false images are harmless.
49 citations
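A small numerical check of the phase-quantization effect: with L uniform phase levels, the field amplitude retained in the intended (first-order) image is sinc(1/L), and the remaining energy goes into the false higher-order images described above. This is the generic diffraction-efficiency fact for uniform phase quantization, not the paper's three-dimensional analysis; the sample size is arbitrary.

```python
import numpy as np

def quantize_phase(phi, levels):
    """Quantize a phase array to `levels` uniform steps over [0, 2*pi)."""
    step = 2 * np.pi / levels
    return np.round(phi / step) * step

# Average of exp(i * quantization error) over a uniform random phase
# field estimates the retained first-order amplitude, which should
# approach sinc(1/L) = sin(pi/L)/(pi/L) as the sample grows.
rng = np.random.default_rng(0)
phi = rng.uniform(0, 2 * np.pi, size=4096)
results = {L: abs(np.mean(np.exp(1j * (quantize_phase(phi, L) - phi))))
           for L in (2, 4, 8, 16)}
for L, r in results.items():
    print(L, round(r, 3), round(np.sinc(1.0 / L), 3))
```

Even 16 phase levels retain over 99% of the first-order amplitude, which is consistent with the abstract's observation that sufficiently defocused false images become harmless.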