Topic

Quantization (image processing)

About: Quantization (image processing) is a research topic. Over the lifetime, 7977 publications have been published within this topic, receiving 126632 citations.


Papers
Proceedings ArticleDOI
02 Nov 2004
TL;DR: It is shown that the MOS predictions by the proposed tool are a better indicator of perceived image quality than PSNR, especially for highly compressed images.
Abstract: This paper describes an objective comparison of the image quality of different encoders. Our approach is based on estimating the visual impact of compression artifacts on perceived quality. We present a tool that measures these artifacts in an image and uses them to compute a prediction of the Mean Opinion Score (MOS) obtained in subjective experiments. We show that the MOS predictions by our proposed tool are a better indicator of perceived image quality than PSNR, especially for highly compressed images. For the encoder comparison, we compress a set of 29 test images with two JPEG encoders (Adobe Photoshop and IrfanView) and three JPEG2000 encoders (JasPer, Kakadu, and IrfanView) at various compression ratios. We compute blockiness, blur, and MOS predictions as well as PSNR of the compressed images. Our results show that the IrfanView JPEG encoder produces consistently better images than the Adobe Photoshop JPEG encoder at the same data rate. The differences between the JPEG2000 encoders in our test are less pronounced; JasPer comes out as the best codec, closely followed by IrfanView and Kakadu. Comparing the JPEG- and JPEG2000-encoding quality of IrfanView, we find that JPEG has a slight edge at low compression ratios, while JPEG2000 is the clear winner at medium and high compression ratios.
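For context, the PSNR baseline that the tool is compared against is defined as 10·log10(MAX²/MSE) in decibels. A minimal Python sketch, using a synthetic 8-bit test image rather than real encoder output (nothing below comes from the paper's tool):

```python
import numpy as np

def psnr(reference: np.ndarray, distorted: np.ndarray, max_val: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB between two equal-shape images."""
    mse = np.mean((reference.astype(np.float64) - distorted.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val**2 / mse)

# Synthetic example: an 8-bit image plus small "compression" noise.
rng = np.random.default_rng(0)
ref = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
noise = rng.integers(-5, 6, size=ref.shape)
dist = np.clip(ref.astype(np.int16) + noise, 0, 255).astype(np.uint8)
print(f"PSNR: {psnr(ref, dist):.2f} dB")
```

The paper's argument is that such a single global error average correlates poorly with perceived quality once artifacts like blockiness and blur dominate, which is why the tool measures those artifacts directly.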

67 citations

Journal ArticleDOI
TL;DR: The authors' study is unlike previous studies of the effects of lossy compression in that they consider nonbinary detection tasks, simulate actual diagnostic practice instead of using paired tests or confidence rankings, and use statistical methods that are more appropriate for nonbinary clinical data than are the popular receiver operating characteristic curves.
Abstract: The authors apply a lossy compression algorithm to medical images, and quantify the quality of the images by the diagnostic performance of radiologists, as well as by traditional signal-to-noise ratios and subjective ratings. The authors' study is unlike previous studies of the effects of lossy compression in that they consider nonbinary detection tasks, simulate actual diagnostic practice instead of using paired tests or confidence rankings, use statistical methods that are more appropriate for nonbinary clinical data than are the popular receiver operating characteristic curves, and use low-complexity predictive tree-structured vector quantization for compression rather than DCT-based transform codes combined with entropy coding. The authors' diagnostic tasks are the identification of nodules (tumors) in the lungs and lymphadenopathy in the mediastinum from computerized tomography (CT) chest scans. Radiologists read both uncompressed and lossy compressed versions of images. For the image modality, compression algorithm, and diagnostic tasks the authors consider, the original 12 bits per pixel (bpp) CT image can be compressed to between 1 bpp and 2 bpp with no significant changes in diagnostic accuracy. The techniques presented here for evaluating image quality do not depend on the specific compression algorithm and are useful new methods for evaluating the benefits of any lossy image processing technique.
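The codec named here is predictive tree-structured vector quantization. As a rough illustration of plain vector quantization only, below is a minimal flat-codebook sketch in Python; it omits the prediction and tree search of the paper's method, and its codebook is sampled rather than trained:

```python
import numpy as np

def vq_encode(blocks: np.ndarray, codebook: np.ndarray) -> np.ndarray:
    """Index of the nearest codeword (squared Euclidean distance) per block."""
    d = ((blocks[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    return d.argmin(axis=1)

def vq_decode(indices: np.ndarray, codebook: np.ndarray) -> np.ndarray:
    """Reconstruct each block by codebook lookup."""
    return codebook[indices]

# Toy example: 2x2 blocks of a random 8-bit image, quantized with only
# 4 codewords for 16 blocks (a real codec would train the codebook).
rng = np.random.default_rng(1)
img = rng.integers(0, 256, size=(8, 8)).astype(np.float64)
blocks = img.reshape(4, 2, 4, 2).transpose(0, 2, 1, 3).reshape(-1, 4)
codebook = blocks[rng.choice(len(blocks), size=4, replace=False)]
indices = vq_encode(blocks, codebook)          # 2 bits per block vs 32 raw
reconstructed = vq_decode(indices, codebook)   # lossy reconstruction
```

Each block is replaced by a short codeword index, which is where the rate reduction comes from; the tree-structured variant used in the paper reduces the nearest-codeword search from linear to logarithmic in codebook size.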

67 citations

Proceedings ArticleDOI
01 Dec 2011
TL;DR: An offline ECG compression technique based on encoding of successive sample differences is proposed; it is presently being implemented in a wireless telecardiology system using a standalone embedded system.
Abstract: An offline ECG compression technique based on encoding of successive sample differences is proposed. The encoded elements are generated through four stages, viz., down-sampling of raw samples and normalization of successive sample differences; data grouping; magnitude and sign encoding; and finally zero-element compression. Initially, the compression algorithm is validated with short-duration raw ECG samples from the PTB database under PhysioNet. MATLAB simulation with ptb-db data under 8-bit quantization yields a compression ratio (CR) of 9.02 and a percentage root-mean-square difference (PRD) of 2.51; with mit-db data the figures are 4.68 and 0.739, respectively. The algorithm is presently being implemented in a wireless telecardiology system using a standalone embedded system.
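For reference, the two figures of merit quoted follow their standard definitions: CR is the ratio of input bits to output bits, and PRD = 100 · sqrt(Σ(x − x̂)² / Σ x²). A minimal Python sketch of the difference-encoding core with an 8-bit uniform quantizer; the synthetic signal and quantizer design are illustrative, and the paper's pipeline additionally normalizes, groups, sign-encodes, and compresses zero elements:

```python
import numpy as np

def delta_encode(samples: np.ndarray) -> np.ndarray:
    """First sample followed by successive sample differences."""
    return np.concatenate(([samples[0]], np.diff(samples)))

def delta_decode(deltas: np.ndarray) -> np.ndarray:
    """Invert delta encoding by cumulative summation."""
    return np.cumsum(deltas)

def prd(original: np.ndarray, reconstructed: np.ndarray) -> float:
    """Percentage root-mean-square difference."""
    return 100.0 * np.sqrt(np.sum((original - reconstructed) ** 2) / np.sum(original**2))

# Toy ECG-like signal: delta-encode, quantize differences to 8 bits, rebuild.
t = np.linspace(0.0, 1.0, 500)
ecg = np.sin(2 * np.pi * 5 * t) + 0.1 * np.sin(2 * np.pi * 50 * t)
deltas = delta_encode(ecg)
step = (deltas.max() - deltas.min()) / 255.0   # 8-bit uniform quantizer
q = np.round((deltas - deltas.min()) / step)
rec = delta_decode(q * step + deltas.min())
print(f"PRD: {prd(ecg, rec):.3f} %")
```

Successive differences of a smooth signal cluster near zero, which is the redundancy the paper's normalization and zero-element compression stages are designed to exploit.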

67 citations

Patent
Brian Astle
26 Oct 1994
TL;DR: In this paper, an inverse transform is applied to sets of transform coefficients to generate decoded regions of a decoded video frame, where the discontinuities for boundaries between adjacent regions are used to adjust one or more of the transform coefficients.
Abstract: Encoded video signals comprise sets of transform coefficients (e.g., DCT coefficients) corresponding to different regions of a video frame. An inverse transform is applied to sets of transform coefficients to generate decoded regions of a decoded video frame. The discontinuities for boundaries between adjacent regions are used to adjust one or more of the transform coefficients. The adjusted sets of transform coefficients are then used to generate filtered regions of a filtered video frame corresponding to the decoded video frame. In a preferred embodiment, the transform coefficients are DCT coefficients and the DC and first two AC DCT coefficients are sequentially adjusted to correct for quantization errors in the encoding process.
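A minimal sketch of the mechanism the claims describe: decode blocks with an inverse DCT, measure the discontinuity across a shared boundary, and adjust the DC coefficient to cancel it. The flat test blocks and one-shot DC fix below are simplifications of the patent's sequential adjustment of the DC and first two AC coefficients:

```python
import numpy as np
from scipy.fft import dctn, idctn

def boundary_step(left: np.ndarray, right: np.ndarray) -> float:
    """Signed mean step across the shared vertical boundary (right minus left)."""
    return float((right[:, 0] - left[:, -1]).mean())

# Two flat 8x8 blocks; the right block's DC coefficient receives a simulated
# quantization error, producing a visible blocking step at the boundary.
left = np.full((8, 8), 128.0)
right = np.full((8, 8), 128.0)
c_left = dctn(left, norm="ortho")
c_right = dctn(right, norm="ortho")
c_right[0, 0] += 40.0  # DC error: shifts the decoded block mean by 40/8 = 5

step = boundary_step(idctn(c_left, norm="ortho"), idctn(c_right, norm="ortho"))
print(f"step before: {step:.2f}")  # ~5.00

# For an orthonormal 8x8 DCT the DC term is 8x the block mean, so cancel:
c_right[0, 0] -= step * 8.0
step = boundary_step(idctn(c_left, norm="ortho"), idctn(c_right, norm="ortho"))
print(f"step after:  {step:.2f}")  # ~0.00
```

Adjusting coefficients rather than filtering pixels keeps the correction in the transform domain, so the filtered frame is regenerated simply by re-running the inverse transform on the adjusted coefficient sets.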

67 citations

Patent
17 Jun 1996
TL;DR: A memory-efficient system is presented for storing color-correction information for liquid crystal tuning filters used with electronic imaging cameras to produce color images; the correction information is stored with maximum possible gain to optimize accuracy prior to compression.
Abstract: A memory-efficient system is described for storing the color-correction information required when liquid crystal tuning filters are used with electronic imaging cameras to produce color images; the correction information is stored with maximum possible gain to optimize accuracy prior to compression. The system bins the color-correction image, for example from a 4K×4K CCD sensor down to a 500×500 or 1K×1K file, and then applies a JPEG and/or wavelet compression algorithm with a default configuration and/or a custom quantization table that allocates more bits to low-frequency changes and fewer to high-frequency changes. After compression, the compressed R, G, B files and an n-point correction executable algorithm are stored on floppy disk or CD-ROM and automatically take control of image enhancement when invoked by the photographer.
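The binning step is effectively block-averaging the correction frame before JPEG/wavelet compression. A minimal numpy sketch for the 4K-to-1K case; the synthetic gain map stands in for real sensor calibration data:

```python
import numpy as np

def bin_image(img: np.ndarray, factor: int) -> np.ndarray:
    """Downsample by averaging non-overlapping factor x factor pixel blocks."""
    h, w = img.shape
    assert h % factor == 0 and w % factor == 0, "dimensions must divide evenly"
    return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

# Example: bin a synthetic 4096x4096 correction frame down to 1024x1024.
rng = np.random.default_rng(3)
gain_map = rng.normal(1.0, 0.02, size=(4096, 4096)).astype(np.float32)
binned = bin_image(gain_map, 4)
print(binned.shape)  # (1024, 1024)
```

Averaging before compression both shrinks the data 16-fold up front and suppresses pixel noise, consistent with the custom quantization table then spending its bits on the low-frequency structure that matters for color correction.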

67 citations


Network Information
Related Topics (5)

Feature extraction: 111.8K papers, 2.1M citations (84% related)
Image segmentation: 79.6K papers, 1.8M citations (84% related)
Feature (computer vision): 128.2K papers, 1.7M citations (84% related)
Image processing: 229.9K papers, 3.5M citations (83% related)
Robustness (computer science): 94.7K papers, 1.6M citations (81% related)
Performance Metrics
No. of papers in the topic in previous years:

Year    Papers
2022    8
2021    354
2020    283
2019    294
2018    259
2017    295