Topic

Quantization (image processing)

About: Quantization (image processing) is a research topic. Over the lifetime, 7,977 publications have been published within this topic, receiving 126,632 citations.


Papers
Posted Content
TL;DR: Bi3D estimates depth via a series of binary classifications: rather than testing whether objects are at a particular depth D, as existing stereo methods do, it classifies them as being closer or farther than D, offering a powerful mechanism to balance accuracy and latency.
Abstract: Stereo-based depth estimation is a cornerstone of computer vision, with state-of-the-art methods delivering accurate results in real time. For several applications such as autonomous navigation, however, it may be useful to trade accuracy for lower latency. We present Bi3D, a method that estimates depth via a series of binary classifications. Rather than testing if objects are at a particular depth $D$, as existing stereo methods do, it classifies them as being closer or farther than $D$. This property offers a powerful mechanism to balance accuracy and latency. Given a strict time budget, Bi3D can detect objects closer than a given distance in as little as a few milliseconds, or estimate depth with arbitrarily coarse quantization, with complexity linear with the number of quantization levels. Bi3D can also use the allotted quantization levels to get continuous depth, but in a specific depth range. For standard stereo (i.e., continuous depth on the whole range), our method is close to or on par with state-of-the-art, finely tuned stereo methods.

26 citations
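The binary-plane formulation can be illustrated with a short sketch (a toy illustration, not the authors' network): assuming a stack of per-pixel probabilities that a scene point is closer than each candidate depth plane D_k, a coarse, quantized depth map follows by counting how many planes the point lies beyond. All names, shapes, and values below are hypothetical.

import numpy as np

def depth_from_binary_planes(prob_closer, planes):
    # prob_closer: (K, H, W) per-pixel probabilities that the point is
    # closer than planes[k]; planes: (K,) increasing candidate depths.
    prob_farther = 1.0 - prob_closer
    # The expected number of planes the point lies beyond acts as a quantized
    # depth index; cost is linear in the number of planes K.
    idx = np.clip(np.round(prob_farther.sum(axis=0)).astype(int),
                  0, len(planes) - 1)
    return planes[idx]

# Toy usage with random "classifier outputs".
K, H, W = 8, 4, 4
planes = np.linspace(1.0, 8.0, K)
probs = np.random.rand(K, H, W)
coarse_depth = depth_from_binary_planes(probs, planes)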

Proceedings ArticleDOI
10 Dec 2012
TL;DR: Experimental results show that the improved double quantization detection method introduced in this paper can support reliable, large-scale digital image evidence authenticity verification with consistently good accuracy in practical applications.
Abstract: Double JPEG image compression detection, or more specifically, double quantization detection, is an important digital image forensic method to detect the presence of image forgery or tampering. In this paper, we introduce an improved double quantization detection method that raises the accuracy of JPEG image tampering detection. We evaluate our detection method using the publicly available CASIA authentic and tampered image data set of 9501 JPEG images. We carry out 20 rounds of experiments with stringent parameter settings imposed on our detection method to demonstrate its robustness. In each round, the classifier is trained on a unique, non-overlapping small subset comprising 1/20 of the tampered and 1/72 of the authentic images, yielding a training set of about 100 images per class, with the remaining 19/20 of the tampered and 71/72 of the authentic images used for testing. Through the experiments, we show an average improvement of 40.31% and 44.85% in the true negative (TN) rate and true positive (TP) rate, respectively, when compared with the current state-of-the-art method. The average TN and TP rates obtained from the 20 rounds of experiments carried out using our detection method are 90.81% and 76.95%, respectively. The experimental results show that our JPEG image forensics method can support reliable, large-scale digital image evidence authenticity verification with consistently good accuracy. The low training-to-testing data ratio also indicates that our method remains robust in practical applications even when only a relatively small training data set is available.

26 citations
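The double-quantization cue that such detectors build on can be sketched numerically (a toy illustration of the artifact, not the paper's detector): coefficients quantized once with step q2 leave a smooth histogram, whereas coefficients quantized with q1 and then re-quantized with q2 show periodic peaks and gaps, which a simple spectral score can pick up. The score and parameters below are assumptions for illustration only.

import numpy as np

def quantize(x, q):
    return np.round(x / q) * q

def periodicity_score(hist):
    # Strength of the largest non-DC component in the histogram's spectrum;
    # it tends to be much larger for double-quantized coefficients.
    spec = np.abs(np.fft.rfft(hist - hist.mean()))
    return spec[1:].max() / (spec[1:].mean() + 1e-9)

rng = np.random.default_rng(0)
coeffs = rng.laplace(scale=20.0, size=100_000)    # DCT-like coefficient distribution

q1, q2 = 7, 5
single = quantize(coeffs, q2) / q2                # compressed once
double = quantize(quantize(coeffs, q1), q2) / q2  # compressed twice

bins = np.arange(-30, 31)
h_single, _ = np.histogram(single, bins=bins)
h_double, _ = np.histogram(double, bins=bins)
print(periodicity_score(h_single), periodicity_score(h_double))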

Patent
20 Nov 2003
TL;DR: A code stream generating part converts image data into two-dimensional wavelet coefficients, quantizes them, and codes the quantization result so as to compress the image data and generate a code stream, as discussed by the authors.
Abstract: A code stream generating part converts image data into two-dimensional wavelet coefficients, quantizes them, and codes the quantization result so as to compress the image data and generate a code stream. An additional information creating part creates additional information concerning the image data, and an additional information embedding part embeds the thus-created additional information into the code stream as a code in an off-rule zone which is not decoded by the JPEG 2000 standard rule.

26 citations
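The transform-and-quantize stage described in this patent summary can be sketched with PyWavelets; the entropy coding and the JPEG 2000 code-stream syntax in which the extra information is hidden are omitted, and the wavelet, decomposition level, and step size below are arbitrary choices for illustration.

import numpy as np
import pywt

def wavelet_quantize(image, wavelet="haar", level=2, step=8.0):
    # 2-D wavelet decomposition followed by uniform scalar quantization
    # of the approximation and detail bands.
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    out = [np.round(coeffs[0] / step)]
    for bands in coeffs[1:]:
        out.append(tuple(np.round(b / step) for b in bands))
    return out

def wavelet_dequantize(quantized, wavelet="haar", step=8.0):
    coeffs = [quantized[0] * step]
    for bands in quantized[1:]:
        coeffs.append(tuple(b * step for b in bands))
    return pywt.waverec2(coeffs, wavelet)

img = np.random.rand(64, 64) * 255
reconstructed = wavelet_dequantize(wavelet_quantize(img))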

Journal ArticleDOI
TL;DR: A layered discrete cosine transform (DCT) image compression scheme, which generates an embedded bit stream for DCT coefficients according to their importance, which allows progressive image transmission and simplifies the rate-control problem.
Abstract: Motivated by Shapiro's (1993) embedded zerotree wavelet (EZW) coding and Taubman and Zakhor's (1994) layered zero coding (LZC), we propose a layered discrete cosine transform (DCT) image compression scheme, which generates an embedded bit stream for DCT coefficients according to their importance. The new method allows progressive image transmission and simplifies the rate-control problem. In addition to these functionalities, it provides a substantial rate-distortion improvement over the JPEG standard when the bit rates become low. For example, we observe a bit rate reduction with respect to the JPEG Huffman and arithmetic coders by about 60% and 20%, respectively, for a bit rate around 0.1 bits per pixel.

26 citations
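The "embedded bit stream" idea can be sketched as bit-plane ordering of DCT coefficients (a toy sketch of the layered principle, not the paper's LZC-style coder): the most significant bit planes are emitted first, so truncating the stream simply yields a more coarsely quantized reconstruction. Function names and the plane count are hypothetical.

import numpy as np
from scipy.fft import dctn, idctn

def to_bitplanes(block, n_planes=12):
    # Quantize DCT coefficients to integers and split their magnitudes
    # into bit planes, most significant first.
    coeffs = np.round(dctn(block, norm="ortho")).astype(int)
    signs, mags = np.sign(coeffs), np.abs(coeffs)
    planes = [(mags >> p) & 1 for p in range(n_planes - 1, -1, -1)]
    return signs, planes

def from_bitplanes(signs, planes, keep):
    # Rebuild using only the first `keep` planes: fewer planes means a
    # coarser quantization of the coefficients (a truncated embedded stream).
    n = len(planes)
    mags = np.zeros_like(signs)
    for i, plane in enumerate(planes[:keep]):
        mags |= plane << (n - 1 - i)
    return idctn((signs * mags).astype(float), norm="ortho")

block = np.random.rand(8, 8) * 255
signs, planes = to_bitplanes(block)
coarse = from_bitplanes(signs, planes, keep=6)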

Proceedings ArticleDOI
07 Nov 2009
TL;DR: A novel yet counterintuitive technique to "denoise" JPEG images by adding Gaussian noise to a resized and JPEG-compressed image, so that the periodicity due to JPEG compression is suppressed while that due to resizing is retained.
Abstract: A common problem affecting most image resizing detection algorithms is that they are susceptible to JPEG compression. This is because JPEG introduces periodic artifacts, as it works on 8×8 blocks. We propose a novel yet counterintuitive technique to "denoise" JPEG images by adding Gaussian noise. We add a suitable amount of Gaussian noise to a resized and JPEG-compressed image so that the periodicity due to JPEG compression is suppressed while that due to resizing is retained. The controlled Gaussian noise addition works better than median filtering and weighted-averaging-based filtering for suppressing the JPEG-induced periodicity.

26 citations
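The controlled-noise idea amounts to a very small preprocessing step applied before a resampling detector is run; the sketch below is a hedged illustration of that step, where sigma is a hypothetical tuning parameter rather than a value from the paper.

import numpy as np

def add_controlled_noise(jpeg_decoded, sigma=2.0, seed=0):
    # Add zero-mean Gaussian noise to a decoded JPEG image so that the weak
    # 8x8 blocking periodicity is masked, while the (typically stronger)
    # interpolation periodicity left by resizing is intended to survive.
    rng = np.random.default_rng(seed)
    noisy = jpeg_decoded.astype(float) + rng.normal(0.0, sigma, jpeg_decoded.shape)
    return np.clip(noisy, 0, 255).astype(np.uint8)

image = (np.random.rand(128, 128) * 255).astype(np.uint8)  # stand-in for a decoded JPEG
dithered = add_controlled_noise(image, sigma=2.0)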


Network Information
Related Topics (5)
Feature extraction
111.8K papers, 2.1M citations
84% related
Image segmentation
79.6K papers, 1.8M citations
84% related
Feature (computer vision)
128.2K papers, 1.7M citations
84% related
Image processing
229.9K papers, 3.5M citations
83% related
Robustness (computer science)
94.7K papers, 1.6M citations
81% related
Performance
Metrics
No. of papers in the topic in previous years
Year	Papers
2022	8
2021	354
2020	283
2019	294
2018	259
2017	295