Topic

Quantization (image processing)

About: Quantization (image processing) is a research topic. Over its lifetime, 7,977 publications have been published within this topic, receiving 126,632 citations.


Papers
Proceedings ArticleDOI
13 Nov 1994
TL;DR: A new method of PVQ is proposed which not only improves upon the image compression performance of typical JPEG implementations, but also demonstrates excellent resilience to channel error.
Abstract: The robustness of image and video compression in the presence of time-varying channel error has received increased interest with the emergence of portable digital receivers and computers. To achieve robust compression, pyramid vector quantization (PVQ) can be used. It is a fixed-rate quantization scheme suited to Laplacian-like sources, such as those arising from transform and subband image coding. The authors propose a new method of PVQ which not only improves upon the image compression performance of typical JPEG implementations, but also demonstrates excellent resilience to channel error.
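As an illustration of the fixed-rate PVQ idea: the codebook is the set of integer vectors with a fixed L1 norm K (Fischer's pyramid), so quantization amounts to scaling the input onto that pyramid and allocating the K unit pulses. The sketch below is a generic greedy allocation, not the authors' proposed method; the function name and the floor-then-top-up pulse placement are illustrative assumptions.

```python
import numpy as np

def pvq_quantize(x, K):
    """Map x to a nearby codeword of the PVQ pyramid
    {z integer-valued : sum(|z_i|) == K} (greedy sketch)."""
    x = np.asarray(x, dtype=float)
    mag = np.abs(x)
    total = mag.sum()
    if total == 0:                   # degenerate input: put all pulses on z_0
        z = np.zeros(len(x), dtype=int)
        z[0] = K
        return z
    y = K * mag / total              # scale magnitudes onto the pyramid face
    z = np.floor(y).astype(int)      # floor never overshoots the pulse budget
    deficit = K - int(z.sum())       # pulses still to place
    for i in np.argsort(y - z)[::-1][:deficit]:
        z[i] += 1                    # top up the largest fractional parts
    return np.where(x < 0, -1, 1) * z
```

Because every codeword carries exactly K pulses, each index can be coded with a fixed number of bits, which is what gives PVQ its resilience to channel errors: a bit error corrupts one codeword without desynchronizing the rest of the stream.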

45 citations

Proceedings ArticleDOI
04 Oct 1998
TL;DR: A modification of the discrete cosine transform (DCT) is presented that produces integer coefficients from which the original image data can be reconstructed losslessly, together with an embedded coding scheme that incorporates this lossless DCT and experimental rate-distortion curves for the scheme.
Abstract: This paper introduces a modification of the discrete cosine transform (DCT) that produces integer coefficients from which the original image data can be reconstructed losslessly. It describes an embedded coding scheme which incorporates this lossless DCT and presents some experimental rate-distortion curves for this scheme. The results show that the lossless compression ratio of the proposed scheme exceeds that of the lossless JPEG predictive coding scheme. On the other hand, in lossy operation the rate-distortion curve of the proposed scheme is very close to that of lossy JPEG. Also, the transform coefficients of the proposed scheme can be decoded with the ordinary DCT at the expense of a small error, which is only significant in lossless operation.
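The paper does not spell out its factorization in this abstract, but a common way to build such an integer-to-integer DCT is to decompose each plane rotation in the DCT flowgraph into three lifting shears and round inside each shear: the rounding makes the outputs integers while the cascade stays exactly invertible. A minimal, illustrative sketch of one such rotation:

```python
import math

def lift_rotate(a, b, theta):
    """Integer-to-integer plane rotation via three lifting shears.
    Rounding inside each shear keeps the map exactly invertible."""
    t = math.tan(theta / 2.0)
    s = math.sin(theta)
    a -= round(t * b)
    b += round(s * a)
    a -= round(t * b)
    return a, b

def lift_unrotate(a, b, theta):
    """Exact inverse: undo the shears in reverse order."""
    t = math.tan(theta / 2.0)
    s = math.sin(theta)
    a += round(t * b)
    b -= round(s * a)
    a += round(t * b)
    return a, b

# integer in, integer out, recovered exactly
assert lift_unrotate(*lift_rotate(5, 3, math.pi / 8), math.pi / 8) == (5, 3)
```

In this lifting view, the small error mentioned in the abstract when decoding with the ordinary DCT comes from the rounding terms, which a floating-point DCT simply ignores.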

45 citations

Posted Content
TL;DR: This paper investigates the fundamental trade-off between the number of bits needed to encode compressed vectors and the compression error, introducing an efficient compression operator, Sparse Dithering, that comes very close to the worst-case lower bound, and a simple operator, Spherical Compression, that naturally achieves the average-case lower bound.
Abstract: Communicating information, like gradient vectors, between computing nodes in distributed and federated learning is typically an unavoidable burden, resulting in scalability issues. Indeed, communication might be slow and costly. Recent advances in communication-efficient training algorithms have reduced this bottleneck by using compression techniques, in the form of sparsification, quantization, or low-rank approximation. Since compression is a lossy, or inexact, process, the iteration complexity is typically worsened; but the total communication complexity can improve significantly, possibly leading to large computation time savings. In this paper, we investigate the fundamental trade-off between the number of bits needed to encode compressed vectors and the compression error. We perform both worst-case and average-case analysis, providing tight lower bounds. In the worst-case analysis, we introduce an efficient compression operator, Sparse Dithering, which is very close to the lower bound. In the average-case analysis, we design a simple compression operator, Spherical Compression, which naturally achieves the lower bound. Thus, our new compression schemes significantly outperform the state of the art. We conduct numerical experiments to illustrate this improvement.
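The paper's Sparse Dithering and Spherical Compression operators are not specified in this abstract, but the dithering primitive they relate to is standard: stochastic (dithered) rounding to a uniform grid gives an unbiased, low-bit encoding of a gradient vector. A minimal QSGD-style sketch, with illustrative names:

```python
import numpy as np

def dithered_quantize(x, levels=16, rng=None):
    """Unbiased dithered quantizer: each coordinate of x / ||x||
    is stochastically rounded to one of `levels` uniform levels,
    so that E[Q(x)] = x (a standard building block, not the
    paper's Sparse Dithering operator)."""
    rng = rng or np.random.default_rng()
    x = np.asarray(x, dtype=float)
    norm = np.linalg.norm(x)
    if norm == 0.0:
        return np.zeros_like(x)
    y = np.abs(x) / norm * levels              # in [0, levels]
    low = np.floor(y)
    q = low + (rng.random(x.shape) < y - low)  # round up w.p. frac(y)
    return np.sign(x) * q * (norm / levels)    # unbiased reconstruction
```

Each coordinate then costs only a sign plus a small integer, with one float for the norm; shrinking `levels` saves bits but raises the variance, which is exactly the bits-versus-error trade-off the paper analyzes.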

45 citations

Proceedings ArticleDOI
01 Jan 2000
TL;DR: This paper describes how the JPEG 2000 syntax and file format support the standard's features; it explains the decomposition of the image into the codestream, along with the associated syntax markers, and offers examples of how the syntax enables some of the features of JPEG 2000.
Abstract: As the resolution and pixel fidelity of digital imagery grow, there is a greater need for more efficient compression and extraction of images and sub-images. The ability to handle many types of image data, extract images at different resolutions and quality, lossless and lossy, zoom and pan, and extract regions-of-interest are the new measures of image compression system performance. JPEG 2000 is designed to address the needs of high quality imagery. This paper describes how the JPEG 2000 syntax and file format support these features. The decomposition of the image into the codestream is described along with associated syntax markers. Examples of how the syntax enables some of the features of JPEG 2000 are offered.
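The "syntax markers" mentioned here are two-byte codes in the codestream (SOC, SIZ, COD, QCD, SOT, ...), each followed, except for SOC, by a length-prefixed segment. A hedged sketch of walking the main header of a raw codestream; the marker codes are from JPEG 2000 Part 1, while the function itself is illustrative:

```python
import struct

# A few main-header marker codes from the JPEG 2000 codestream syntax
MARKERS = {0xFF4F: "SOC", 0xFF51: "SIZ", 0xFF52: "COD",
           0xFF5C: "QCD", 0xFF64: "COM", 0xFF90: "SOT"}

def main_header_markers(buf):
    """Yield (offset, marker_name) over the main header of a raw
    JPEG 2000 codestream, stopping at the first tile-part (SOT).
    Sketch only: assumes a well-formed stream beginning with SOC."""
    pos = 0
    while pos + 2 <= len(buf):
        (code,) = struct.unpack(">H", buf[pos:pos + 2])
        yield pos, MARKERS.get(code, hex(code))
        if code == 0xFF90:            # tile data follows SOT; stop scanning
            return
        pos += 2
        if code != 0xFF4F:            # every marker but SOC carries a length
            (seg_len,) = struct.unpack(">H", buf[pos:pos + 2])
            pos += seg_len            # length includes its own two bytes
```

Features such as resolution scalability and region-of-interest extraction work precisely because a decoder can locate and subset these marker segments without decoding the whole stream.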

44 citations

Patent
Shi-hwa Lee
23 Sep 1996
TL;DR: A video coding method and encoder that handle accumulated errors: motion vectors and a difference image are generated and passed through discrete cosine transform (DCT), quantization, and variable-length coding, while accumulated errors are filtered out of the motion-compensated image of the reconstructed previous frame with its edges preserved.
Abstract: The present invention relates to a method of video coding that processes accumulated errors, and to an encoder therefor, the method comprising the steps of: (a) generating motion vectors of an input image in a predetermined unit and the difference image between the input image on the current frame and an image obtained by filtering a motion-compensated image of the reconstructed previous frame, and then performing discrete cosine transform (DCT), quantization and variable-length coding on the difference image; (b) generating the motion-compensated image from the reconstructed previous frame and the motion vectors; and (c) filtering off accumulated errors while preserving the edges within the motion-compensated image of the reconstructed previous frame. Randomly distributed noise due to accumulated errors can thereby be removed, and the number of generated bits can be reduced by filtering off the random accumulated errors, which have high-frequency characteristics, before coding.
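To make steps (a) through (c) concrete, here is a schematic of the coding loop; the stage functions (estimate_motion, motion_compensate, dct_q_vlc) are hypothetical placeholders, and a 3x3 median filter stands in for the patent's unspecified edge-preserving filter:

```python
import numpy as np
from scipy.ndimage import median_filter

def encode_frame(current, recon_prev,
                 estimate_motion, motion_compensate, dct_q_vlc):
    """Schematic of steps (a)-(c); all stage callables are
    hypothetical placeholders for the patent's components."""
    # (a)/(b): motion vectors and motion-compensated previous frame
    mv = estimate_motion(current, recon_prev)
    predicted = motion_compensate(recon_prev, mv)
    # (c): filter off accumulated high-frequency errors before coding
    # (median filter used here only as a stand-in edge-preserving filter)
    predicted = median_filter(predicted, size=3)
    # (a): code the difference image with DCT, quantization, VLC
    residual = current.astype(np.int32) - predicted.astype(np.int32)
    return mv, dct_q_vlc(residual)
```

Filtering the prediction before coding removes the random, high-frequency accumulated noise, which is what reduces the bit count in the patent's claim.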

44 citations


Network Information
Related Topics (5)
Feature extraction: 111.8K papers, 2.1M citations, 84% related
Image segmentation: 79.6K papers, 1.8M citations, 84% related
Feature (computer vision): 128.2K papers, 1.7M citations, 84% related
Image processing: 229.9K papers, 3.5M citations, 83% related
Robustness (computer science): 94.7K papers, 1.6M citations, 81% related
Performance Metrics
No. of papers in the topic in previous years:
Year    Papers
2022    8
2021    354
2020    283
2019    294
2018    259
2017    295