scispace - formally typeset

Quantization (image processing)

About: Quantization (image processing) is a research topic. Over the lifetime, 7977 publications have been published within this topic receiving 126632 citations.


Papers
Patent
Norman Dennis Richards1
22 Jul 1991
TL;DR: In this article, a method of encoding an image for CD-I players is described: a first matrix (M1) of 768×560 pixel component values is decimation filtered to a 384×560 matrix (M2) and DYUV-encoded for storage on a compact disc; the residual between the original image and its decoded, interpolation-filtered reconstruction is then encoded separately, with negative difference values mapped into the guard range below black level.
Abstract: A method of encoding an image for CD-I players comprises obtaining the pixel information as a first matrix (M1) of 768×560 pixel component values, decimation filtering (1) the first matrix (M1) to produce a second matrix (M2) of 384×560 pixel component values, encoding (2) the second matrix (M2) to produce a first set of DYUV digital data (RDD1) for storage on a compact disc (SM), applying the digital data (RDD1) to a decoder (3) to form a third matrix (M3) of 384×560 pixel component values, interpolation filtering (4) the third matrix (M3) to form a fourth matrix (M4) of 768×560 pixel component values, forming the difference (5) between the first (M1) and fourth (M4) matrices to produce a fifth matrix (M5) of 768×560 difference values and encoding the fifth matrix (M5) as respective multi-bit and/or run length codes (RDD1) for storage on a compact disc (SM). Compatibility with known hardware (the CD-I standard `basecase` player) is obtained by encoding negative difference values using the quantization levels in the guard range (0-15) below black level (16, on a scale of zero to 255).

33 citations
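The guard-range trick described in the abstract above can be sketched in a few lines. This is an assumed mapping reconstructed from the description; the function names and clamping behaviour are hypothetical, not the actual CD-I codec:

```python
# Hypothetical sketch of the guard-range encoding described above, not the
# actual CD-I codec: pixel components live on a 0-255 scale with black at
# level 16, so levels 0-15 (the guard range) are free to carry negative
# difference values.

BLACK = 16  # black level on the 0-255 component scale

def encode_difference(diff):
    """Map a signed difference value to an unsigned storage code.

    Non-negative differences use levels BLACK and above; negative
    differences land in the guard range 0..BLACK-1 (assumed mapping).
    """
    return max(0, min(BLACK + diff, 255))

def decode_difference(code):
    """Invert the mapping to recover the signed difference."""
    return code - BLACK
```

With this mapping a difference of −5 is stored as level 11, inside the guard range, so a `basecase` player that clips everything below black simply renders it as black.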

Journal ArticleDOI
TL;DR: An image hiding technique with computer-generated phase codes is presented: the hidden image is embedded into an original host image as a complex function whose amplitude closely resembles the original host.

33 citations
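The summary gives few details, but the general idea of amplitude/phase hiding can be illustrated with a minimal sketch. The construction below is an assumption for illustration, not the paper's actual method:

```python
import numpy as np

# Illustrative sketch (assumed construction, not the paper's method):
# encode the hidden image in the phase of a complex field whose
# magnitude is the host image, so the amplitude alone shows the host.

host = np.array([[0.5, 0.8], [0.2, 1.0]])     # host image, positive values
hidden = np.array([[0.1, 0.9], [0.4, 0.6]])   # hidden image in [0, 1)

phase = 2 * np.pi * hidden                    # hidden image as phase codes
field = host * np.exp(1j * phase)             # complex function

amplitude = np.abs(field)                     # closely resembles the host
recovered = np.angle(field) % (2 * np.pi) / (2 * np.pi)  # hidden image back
```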

Proceedings ArticleDOI
05 Jun 2016
TL;DR: This work analyzes error-propagation sensitivity in the DCT network, uses this information to model the impact of introduced errors on JPEG output quality, and formulates a novel optimization problem that maximizes power savings under an error budget.
Abstract: JPEG compression based on the discrete cosine transform (DCT) is a key building block in low-power multimedia applications. We use approximate computing to exploit the error tolerance of JPEG and formulate a novel optimization problem that maximizes power savings under an error budget. We analyze the error propagation sensitivity in the DCT network and use this information to model the impact of introduced errors on the output quality. Simulations show up to 15% reduction in area and delay which corresponds to 40% power savings at iso-delay.

33 citations
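The paper's optimization formulation is not reproduced here, but the underlying quality/approximation trade-off can be illustrated with a toy experiment: zeroing high-frequency 2-D DCT coefficients (a crude stand-in for hardware approximation) increases reconstruction error as fewer coefficients are kept. This is an assumed illustration, not the paper's error-propagation model:

```python
import numpy as np

# Toy illustration of the quality/approximation trade-off (not the paper's
# error-propagation model): keep only the top-left k x k DCT coefficients
# and measure the reconstruction error of an 8x8 block.

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix (rows = frequencies)."""
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0] /= np.sqrt(2)
    return C * np.sqrt(2 / n)

C = dct_matrix()
block = np.outer(np.arange(8.0), np.ones(8))  # smooth 8x8 test block
coeffs = C @ block @ C.T                      # forward 2-D DCT

def approx_error(keep):
    """Mean squared error after keeping only keep x keep coefficients."""
    mask = np.zeros_like(coeffs)
    mask[:keep, :keep] = 1.0
    recon = C.T @ (coeffs * mask) @ C         # inverse 2-D DCT
    return float(np.mean((block - recon) ** 2))

errors = [approx_error(k) for k in (8, 4, 2, 1)]  # error grows as k shrinks
```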

01 Jan 2012
TL;DR: This analysis of various compression techniques provides knowledge for identifying their advantageous features and helps in choosing the correct compression method.
Abstract: With the rapid development of digital technology in consumer electronics, the demand to preserve raw image data for further editing or repeated compression is increasing. Image compression minimizes the size in bytes of an image without degrading its quality to an unacceptable level. There are several different ways in which images can be compressed. This paper analyzes various image compression techniques. In addition, specific methods are presented illustrating the application of such techniques to real-world images. We present the various steps involved in the general procedure for compressing images, and provide the basics of image coding with a discussion of vector quantization and one of its main techniques, wavelet compression under vector quantization. This analysis of various compression techniques provides knowledge for identifying their advantageous features and helps in choosing the correct compression method.

33 citations
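Vector quantization, mentioned in the abstract above, can be sketched in a few lines: blocks of pixel values are replaced by the index of the nearest codebook vector. The codebook and data below are made-up toy values, not from the paper:

```python
import numpy as np

# Minimal vector-quantization sketch with a made-up 2-D codebook: each
# input vector is stored as the index of its nearest codeword, so storage
# drops from one value per component to one small index per vector.

codebook = np.array([[0.0, 0.0], [1.0, 1.0], [0.0, 1.0]])

def quantize(vectors, codebook):
    """Index of the nearest codeword (squared Euclidean distance)."""
    d = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    return d.argmin(axis=1)

def dequantize(indices, codebook):
    """Reconstruct approximate vectors from their codeword indices."""
    return codebook[indices]

data = np.array([[0.1, 0.1], [0.9, 1.1], [0.2, 0.8]])
idx = quantize(data, codebook)       # each row mapped to nearest codeword
recon = dequantize(idx, codebook)    # lossy reconstruction
```

In a real codec the codebook itself is learned from training image blocks (e.g. with the LBG/k-means algorithm) rather than fixed by hand.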

Proceedings ArticleDOI
03 Dec 2010
TL;DR: This paper proposes an anti-forensic technique capable of removing artifacts indicative of wavelet-based image compression from an image, and shows that this technique is capable of fooling current forensic image compression detection algorithms 100% of the time.
Abstract: Because digital images can be modified with relative ease, considerable effort has been spent developing image forensic algorithms capable of tracing an image's processing history. In contrast to this, relatively little consideration has been given to anti-forensic operations designed to mislead forensic techniques. In this paper, we propose an anti-forensic technique capable of removing artifacts indicative of wavelet-based image compression from an image. Our technique operates by adding anti-forensic dither to a previously compressed image's wavelet coefficients so that the anti-forensically modified wavelet coefficient distribution matches a model of the coefficient distribution before compression. Simulation results show that our algorithm is capable of fooling current forensic image compression detection algorithms 100% of the time.

33 citations
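The mechanism is easy to illustrate: quantized wavelet coefficients cluster on a comb of multiples of the step size, and dither spreads them back out. The sketch below uses uniform dither within each quantization bin as a simplification; the paper instead matches a fitted model of the pre-compression coefficient distribution:

```python
import numpy as np

# Simplified sketch of anti-forensic dither (the paper matches a fitted
# pre-compression coefficient model; here uniform in-bin dither is enough
# to destroy the tell-tale comb of quantized coefficients).

rng = np.random.default_rng(0)
q = 4.0                                        # assumed quantization step
coeffs = rng.laplace(scale=10.0, size=10000)   # stand-in wavelet coefficients
quantized = np.round(coeffs / q) * q           # compression leaves multiples of q

dither = rng.uniform(-q / 2, q / 2, size=quantized.shape)
restored = quantized + dither                  # comb artifact smoothed away

on_grid_before = float(np.mean(quantized % q == 0))  # all on the grid
on_grid_after = float(np.mean(restored % q == 0))    # essentially none
```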


Network Information
Related Topics (5)
Feature extraction: 111.8K papers, 2.1M citations (84% related)
Image segmentation: 79.6K papers, 1.8M citations (84% related)
Feature (computer vision): 128.2K papers, 1.7M citations (84% related)
Image processing: 229.9K papers, 3.5M citations (83% related)
Robustness (computer science): 94.7K papers, 1.6M citations (81% related)
Performance
Metrics
No. of papers in the topic in previous years
Year  Papers
2022  8
2021  354
2020  283
2019  294
2018  259
2017  295