
Quantization (image processing)

About: Quantization (image processing) is a research topic. Over its lifetime, 7,977 publications have been published on this topic, receiving 126,632 citations.


Papers
Proceedings ArticleDOI
01 Nov 2012
TL;DR: An effective algorithm to compress and reconstruct digital imaging and communications in medicine (DICOM) images is presented, and compression methods such as JPEG and JPEG 2000 with SPIHT encoding are compared on the basis of compression ratio and compression quality.
Abstract: Image compression has become one of the most important disciplines in digital electronics because of the ever-growing popularity of the internet and multimedia systems, combined with the high demands on bandwidth and storage space. The increasing volume of data generated by some medical imaging modalities justifies the use of compression techniques to reduce storage space and to transfer images more efficiently over the network for access to electronic patient records. This paper addresses data compression as it applies to image processing. We present an effective algorithm to compress and reconstruct digital imaging and communications in medicine (DICOM) images. Various image compression algorithms exist in today's commercial market; this paper compares compression methods such as JPEG and JPEG 2000 with SPIHT encoding on the basis of compression ratio and compression quality. The comparison is organized according to different medical image types, such as MRI and CT. For JPEG-based compression, RLE and Huffman encoding are used while varying the bits per pixel; for JPEG 2000-based compression, the SPIHT encoding method is used. The DCT and DWT methods are compared by varying bits per pixel and measuring MSE, PSNR, and compression ratio. For JPEG 2000, different wavelets such as Haar, CDF 9/7, and CDF 5/3 are compared on compression ratio and compression quality, and the decomposition levels of the wavelet transform are also varied across different images.
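
The abstract evaluates codecs by MSE, PSNR, and compression ratio; a minimal sketch of these three metrics, assuming 8-bit images held as NumPy arrays (function names are illustrative):

import numpy as np

def mse(original: np.ndarray, reconstructed: np.ndarray) -> float:
    """Mean squared error between two same-shaped images."""
    diff = original.astype(np.float64) - reconstructed.astype(np.float64)
    return float(np.mean(diff ** 2))

def psnr(original: np.ndarray, reconstructed: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB; higher means better fidelity."""
    m = mse(original, reconstructed)
    return float('inf') if m == 0 else 10.0 * np.log10(peak ** 2 / m)

def compression_ratio(raw_bytes: int, compressed_bytes: int) -> float:
    """Uncompressed size over compressed size, e.g. 10.0 means 10:1."""
    return raw_bytes / compressed_bytes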

34 citations

Proceedings ArticleDOI
29 Nov 2011
TL;DR: Experimental results show that the proposed shift-recompression-based detection method is very promising for detecting misaligned cropping and recompression with the same quantization matrix, and it greatly improves on existing methods.
Abstract: Image tampering, widely facilitated by today's digital techniques, is increasingly causing problems concerning the authenticity of digital images. As one of the most popular compressed media, a JPEG image can easily be tampered with without leaving any visible clues. JPEG-based forensics, including the detection of double compression, interpolation, rotation, etc., has been actively studied. However, the detection of misaligned cropping and recompression with the same quantization matrix that was originally used to encode the JPEG image has not been effectively addressed and has to some extent been ignored. Aiming to detect such manipulations for forensic purposes, in this paper we propose an approach based on the block artifacts caused by manipulation followed by JPEG compression. Specifically, we propose a shift-recompression-based detection method to identify inconsistencies in the block artifacts of doctored JPEG images. Learning classifiers are applied for classification. Experimental results show that our approach is very promising for detecting misaligned cropping and recompression with the same quantization matrix and greatly improves on existing methods. Our detection method is also very effective at detecting relevant copy-paste and composite forgery in JPEG images.
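
A simplified sketch of the shift-recompression idea (not the paper's exact feature set): recompress the image on a grid shifted by (dx, dy) using the same quality setting and measure how much each 8x8 block changes; blocks whose artifact grid is inconsistent with the rest of the image respond differently. Assumes Pillow and NumPy; names are illustrative.

import io
import numpy as np
from PIL import Image

def shift_recompress_energy(img, dx: int, dy: int, quality: int = 90) -> np.ndarray:
    """Per-block mean squared change after shifting the JPEG grid by
    (dx, dy) and recompressing; returns one value per 8x8 block."""
    arr = np.asarray(img.convert('L'), dtype=np.float64)
    # Cropping by (dy, dx) misaligns the new 8x8 grid with the original one.
    shifted = Image.fromarray(arr[dy:, dx:].astype(np.uint8))
    buf = io.BytesIO()
    shifted.save(buf, format='JPEG', quality=quality)
    recompressed = np.asarray(Image.open(buf), dtype=np.float64)
    diff = arr[dy:, dx:] - recompressed
    h, w = (diff.shape[0] // 8) * 8, (diff.shape[1] // 8) * 8
    blocks = diff[:h, :w].reshape(h // 8, 8, w // 8, 8)
    return (blocks ** 2).mean(axis=(1, 3))

A classifier can then be trained on statistics of these per-block energies to separate untouched regions from recompressed ones.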

34 citations

Patent
Miyane Toshiki, Sekimoto Uichi
30 Jan 1996
TL;DR: In this patent, the inverse quantization table generator 250 generates quantization tables from the compressed image data ZZ, in which a quantization level coefficient QCx is inserted between block data units.
Abstract: The compressed image data ZZ includes code data representing a quantization level coefficient QCx inserted between block data units. DCT coefficients QF(u,v) and a quantization level coefficient QCx, which are decoded from the compressed image data ZZ, are multiplied in the inverse quantization table generator 250 to generate a quantization table QT, and the inverse quantization unit 250 executes inverse quantization with the quantization table QT. Since the quantization level coefficient QCx is inserted between block data units in the compressed image data, the quantization table QT is renewed every time a new quantization level coefficient QCx is decoded. The compressed image data also includes a special type of data, null run data, representing a series of pixel blocks having an identical image pattern.
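
A minimal sketch of the dequantization flow the abstract describes, assuming QT is obtained by scaling a base table with the most recently decoded QCx (the base table and names here are hypothetical, not taken from the patent):

import numpy as np

BASE_TABLE = np.full((8, 8), 16.0)  # stand-in base quantization table

def rebuild_quant_table(qcx: float, base: np.ndarray = BASE_TABLE) -> np.ndarray:
    """Regenerate the quantization table QT whenever a new quantization
    level coefficient QCx is decoded from the stream."""
    return base * qcx

def dequantize_block(qf: np.ndarray, qt: np.ndarray) -> np.ndarray:
    """Inverse quantization: scale decoded coefficients QF(u,v) by QT(u,v)."""
    return qf * qt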

34 citations

Journal ArticleDOI
TL;DR: The proposed method performs a multiscale analysis on the neighborhood of each pixel, determines the presence and scale of contour artifacts, and probabilistically dithers (perturbs) the color of the pixel.
Abstract: A method is proposed for reducing the visibility of "contour artifacts," i.e., false contours resulting from color quantization in digital images. The method performs a multiscale analysis on the neighborhood of each pixel, determines the presence and scale of contour artifacts, and probabilistically dithers (perturbs) the color of the pixel. The overall effect is to "break down" the false contours, making them less visible. The proposed method may be used to reduce contour artifacts at the same bit depth as the input image or at higher bit depths. The contour artifact detection mechanism ensures that artifact-free regions remain unaffected during the process.
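
A toy sketch of the dithering step alone (the paper's multiscale detection is omitted; low local variance is used here as a crude proxy for contour-prone smooth regions, and all names are illustrative):

import numpy as np

def dither_smooth_regions(img: np.ndarray, var_thresh: float = 2.0, seed: int = 0) -> np.ndarray:
    """Probabilistically perturb pixels in smooth regions by -1, 0, or +1
    level to break up false contours; textured regions are left untouched."""
    rng = np.random.default_rng(seed)
    x = img.astype(np.int16)
    # Local variance over 3x3 neighborhoods as a smoothness measure.
    pad = np.pad(x.astype(np.float64), 1, mode='edge')
    windows = np.lib.stride_tricks.sliding_window_view(pad, (3, 3))
    smooth = windows.var(axis=(2, 3)) < var_thresh
    noise = rng.integers(-1, 2, size=x.shape)
    out = np.where(smooth, x + noise, x)
    return np.clip(out, 0, 255).astype(np.uint8)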

34 citations

Proceedings ArticleDOI
10 Dec 2002
TL;DR: The latest transforms for H.26L can be computed exactly in integer arithmetic, avoiding inverse-transform mismatch problems, and the new designs minimize computational complexity, especially on low-end processors.
Abstract: This paper presents an overview of the latest transform and quantization designs for H.26L. Unlike the popular discrete cosine transform (DCT) used in previous standards, the transforms in H.26L can be computed exactly in integer arithmetic, thus avoiding inverse transform mismatch problems. The new transforms can also be computed without multiplications, just additions and shifts, in 16-bit arithmetic, thus minimizing computational complexity, especially for low-end processors. By using short tables, the new quantization formulas use multiplications but avoid divisions.
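
For reference, the 4x4 core transform adopted in this design can be computed with only additions and shifts; below is a sketch of one common butterfly arrangement, applied first to rows and then to columns (a minimal illustration, not the full codec path):

def transform_4(x0: int, x1: int, x2: int, x3: int):
    """4-point forward core transform using only adds and shifts."""
    a, b = x0 + x3, x1 + x2
    c, d = x1 - x2, x0 - x3
    return a + b, (d << 1) + c, a - b, d - (c << 1)

def forward_transform_4x4(block):
    """Apply the 4-point transform to each row, then to each column."""
    rows = [transform_4(*row) for row in block]
    out_cols = [transform_4(*col) for col in zip(*rows)]
    return [list(r) for r in zip(*out_cols)]

Because every step is an integer add or shift, encoder and decoder compute bit-identical results, which is what eliminates the inverse-transform mismatch of earlier DCT-based standards.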

34 citations


Network Information
Related Topics (5)

Feature extraction: 111.8K papers, 2.1M citations (84% related)
Image segmentation: 79.6K papers, 1.8M citations (84% related)
Feature (computer vision): 128.2K papers, 1.7M citations (84% related)
Image processing: 229.9K papers, 3.5M citations (83% related)
Robustness (computer science): 94.7K papers, 1.6M citations (81% related)
Performance
Metrics
No. of papers in the topic in previous years:

Year    Papers
2022    8
2021    354
2020    283
2019    294
2018    259
2017    295