Topic

Quantization (image processing)

About: Quantization (image processing) is a research topic. Over the lifetime, 7,977 publications have been published within this topic, receiving 126,632 citations.


Papers
Proceedings ArticleDOI
David A. Clunie
18 May 2000
TL;DR: In this article, JPEG-LS and JPEG 2000 were evaluated on 3,679 grayscale medical images from multiple anatomical regions, modalities, and vendors; both outperformed existing JPEG (3.04 with optimum predictor choice per image, 2.79 for previous-pixel prediction as most commonly used in DICOM).
Abstract: Proprietary compression schemes have a cost and risk associated with their support, end of life and interoperability. Standards reduce this cost and risk. The new JPEG-LS process (ISO/IEC 14495-1), and the lossless mode of the proposed JPEG 2000 scheme (ISO/IEC CD15444-1), new standard schemes that may be incorporated into DICOM, are evaluated here. Three thousand six hundred and seventy-nine (3,679) single-frame grayscale images from multiple anatomical regions, modalities and vendors were tested. For all images combined, JPEG-LS and JPEG 2000 performed equally well (3.81), almost as well as CALIC (3.91), a complex predictive scheme used only as a benchmark. Both outperformed existing JPEG (3.04 with optimum predictor choice per image, 2.79 for previous-pixel prediction as most commonly used in DICOM). Text dictionary schemes performed poorly (gzip 2.38), as did image dictionary schemes without statistical modeling (PNG 2.76). Proprietary transform-based schemes did not perform as well as JPEG-LS or JPEG 2000 (S+P Arithmetic 3.4, CREW 3.56). Stratified by modality, JPEG-LS compressed CT images (4.00), MR (3.59), NM (5.98), US (3.4), IO (2.66), CR (3.64), DX (2.43), and MG (2.62). CALIC always achieved the highest compression except for one modality, for which JPEG-LS did better (MG digital vendor A: JPEG-LS 4.02, CALIC 4.01). JPEG-LS outperformed existing JPEG for all modalities. The use of standard schemes can achieve state-of-the-art performance regardless of modality. JPEG-LS is simple, easy to implement, consumes less memory, and is faster than JPEG 2000, though JPEG 2000 will offer lossy and progressive transmission. It is recommended that DICOM add transfer syntaxes for both JPEG-LS and JPEG 2000.

121 citations
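The compression figures quoted above are ratios of uncompressed to compressed size. As a rough illustration only, the sketch below measures such a ratio for lossless JPEG 2000 on one grayscale image; it assumes a Pillow build with OpenJPEG support, the filename is a placeholder, and the 8-bit assumption is a simplification (medical images are often 10-16 bit). JPEG-LS itself would need a separate codec (e.g. CharLS bindings) and is not shown.

```python
# Minimal sketch: compression ratio = uncompressed bytes / compressed bytes,
# for lossless JPEG 2000 via Pillow (assumes an OpenJPEG-enabled build).
import io
from PIL import Image

def jp2_lossless_ratio(path):
    img = Image.open(path).convert("L")   # single-frame grayscale, as in the study
    raw_bytes = img.width * img.height    # 1 byte/pixel at 8 bits (simplification)
    buf = io.BytesIO()
    # irreversible=False selects the reversible (lossless) wavelet path
    img.save(buf, format="JPEG2000", irreversible=False)
    return raw_bytes / buf.getbuffer().nbytes

print(jp2_lossless_ratio("scan.png"))     # "scan.png" is a hypothetical input file
```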

Journal ArticleDOI
TL;DR: This survey studies the basic BTC algorithm and its improvements by dividing it into three separate tasks: performing the quantization, coding the quantization data, and coding the bit plane.
Abstract: Block truncation coding (BTC) is a lossy moment-preserving quantization method for compressing digital gray-level images. Its advantages are simplicity, fault tolerance, relatively high compression efficiency and good quality of the decoded image. Several improvements of the basic method have recently been proposed in the literature. In this survey we will study the basic algorithm and its improvements by dividing it into three separate tasks: performing the quantization, coding the quantization data and coding the bit plane. Each phase of the algorithm will be analyzed separately. On the basis of the analysis, a combined BTC algorithm will be proposed and comparisons to the standard JPEG algorithm will be made.

119 citations
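For reference, the sketch below implements the basic two-level moment-preserving BTC quantizer the survey builds on (the classic Delp-Mitchell variant). Per block it produces exactly the three pieces of data the survey's task division refers to: the quantization (thresholding at the block mean), the quantization data (the two reconstruction levels) and the bit plane. It is a minimal illustration, not any of the surveyed improvements.

```python
import numpy as np

def btc_encode(block):
    """Two-level moment-preserving BTC for one block (Delp-Mitchell)."""
    x = block.astype(np.float64)
    n = x.size
    mean, std = x.mean(), x.std()
    bitplane = x >= mean                        # the bit plane: 1 bit per pixel
    q = int(bitplane.sum())                     # pixels at or above the mean
    if q in (0, n):                             # flat block: one level suffices
        return bitplane, mean, mean
    a = mean - std * np.sqrt(q / (n - q))       # low reconstruction level
    b = mean + std * np.sqrt((n - q) / q)       # high reconstruction level
    return bitplane, a, b                       # levels preserve mean and variance

def btc_decode(bitplane, a, b):
    return np.where(bitplane, b, a)

blk = np.array([[ 12, 200,  13,  14],
                [ 11, 190, 210,  15],
                [ 12,  13, 205, 199],
                [ 10,  12,  14, 195]])
bp, a, b = btc_encode(blk)
print(np.round(btc_decode(bp, a, b)))           # 4x4 block -> 16 bits + two levels
```

With 4x4 blocks and 8-bit levels this costs 2 bits per pixel before any further coding of the levels or the bit plane, which is where the survey's second and third tasks come in.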

Proceedings ArticleDOI
28 Dec 2000
TL;DR: Evaluating JPEG 2000 versus JPEG-LS and MPEG-4 VTC, as well as the older but widely used JPEG, shows that the choice of the “best” standard depends strongly on the application at hand.
Abstract: JPEG 2000, the new ISO/ITU-T standard for still image coding, is about to be finished. Other new standards have been recently introduced, namely JPEG-LS and MPEG-4 VTC. This paper compares the set of features offered by JPEG 2000, and how well they are fulfilled, versus JPEG-LS and MPEG-4 VTC, as well as the older but widely used JPEG and more recent PNG. The study concentrates on compression efficiency and functionality set, while addressing other aspects such as complexity. Lossless compression efficiency as well as the fixed and progressive lossy rate-distortion behaviors are evaluated. Robustness to transmission errors, Region of Interest coding and complexity are also discussed. The principles behind each algorithm are briefly described. The results show that the choice of the "best" standard depends strongly on the application at hand, but that JPEG 2000 supports the widest set of features among the evaluated standards, while providing superior rate-distortion performance in most cases.

119 citations
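As a hedged illustration of the kind of rate-distortion measurement such a comparison relies on, the sketch below records one (bits per pixel, PSNR) point for the baseline JPEG codec, one of the evaluated standards, via Pillow. The input filename and the quality setting are arbitrary placeholders; a full evaluation would sweep rates across all codecs.

```python
import io
import numpy as np
from PIL import Image

def rd_point(img, quality):
    """One (bits-per-pixel, PSNR) point for Pillow's baseline JPEG encoder."""
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=quality)
    buf.seek(0)                                            # rewind before decoding
    rec = np.asarray(Image.open(buf), dtype=np.float64)
    orig = np.asarray(img, dtype=np.float64)
    mse = np.mean((orig - rec) ** 2)
    psnr = 10 * np.log10(255.0 ** 2 / mse)                 # 8-bit peak signal
    bpp = 8 * buf.getbuffer().nbytes / (img.width * img.height)
    return bpp, psnr

img = Image.open("test.png").convert("L")                  # hypothetical input file
print(rd_point(img, quality=75))                           # arbitrary quality setting
```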

Journal ArticleDOI
TL;DR: Experimental results show that the proposed high-capacity reversible data hiding method achieves both high capacity and high image quality.

118 citations

Book ChapterDOI
01 Jan 1991
TL;DR: This chapter discusses efficient statistical computations for optimal color quantization based on variance minimization, a 3D clustering process that leads to significant image data compression, making extra frame buffer available for animation and reducing bandwidth requirements.
Abstract: Publisher Summary This chapter discusses efficient statistical computations for optimal color quantization. Color quantization is a must when using an inexpensive 8-bit color display to display high-quality color images. Even when 24-bit full color displays become commonplace in the future, quantization will still be important because it leads to significant image data compression, making extra frame buffer available for animation and reducing bandwidth requirements. Color quantization is a 3D clustering process. A color image in an RGB mode corresponds to a three-dimensional discrete density. In this chapter, quantization based on variance minimization is discussed. Efficient computations of color statistics are described. An optimal color quantization algorithm is presented. The algorithm was implemented on a SUN 3/80 workstation. It took only 10 s to quantize a 256 × 256 image. The impact of optimizing partitions is very positive. The new algorithm achieved, on average, one-third and one-ninth of the mean-square error of the median-cut and Wan et al. algorithms, respectively.

117 citations
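Minimizing quantization variance over a 3D clustering of RGB pixels is the same objective that k-means optimizes, so a minimal k-means sketch (below) illustrates the idea; it is not the chapter's own partition-optimization algorithm, and the array shapes and parameters are illustrative.

```python
import numpy as np

def kmeans_palette(pixels, k, iters=20, seed=0):
    """Variance-minimizing palette: k-means over (N, 3) rows of RGB values."""
    rng = np.random.default_rng(seed)
    centers = pixels[rng.choice(len(pixels), size=k, replace=False)].copy()
    for _ in range(iters):
        # assign every pixel to its nearest palette color
        dists = ((pixels[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = dists.argmin(axis=1)
        # move each palette color to the mean of its cluster
        for j in range(k):
            members = pixels[labels == j]
            if len(members):
                centers[j] = members.mean(axis=0)
    return centers, labels

pixels = np.random.default_rng(1).integers(0, 256, size=(4096, 3)).astype(np.float64)
palette, labels = kmeans_palette(pixels, k=8)
quantized = palette[labels]                    # each pixel mapped to its palette color
print(((pixels - quantized) ** 2).mean())      # the mean-square error being minimized
```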


Network Information
Related Topics (5)
Feature extraction: 111.8K papers, 2.1M citations (84% related)
Image segmentation: 79.6K papers, 1.8M citations (84% related)
Feature (computer vision): 128.2K papers, 1.7M citations (84% related)
Image processing: 229.9K papers, 3.5M citations (83% related)
Robustness (computer science): 94.7K papers, 1.6M citations (81% related)
Performance Metrics
No. of papers in the topic in previous years:

Year    Papers
2022         8
2021       354
2020       283
2019       294
2018       259
2017       295