
Quantization (image processing)

About: Quantization (image processing) is a research topic. Over the lifetime, 7,977 publications have been published within this topic, receiving 126,632 citations.


Papers
Book ChapterDOI
18 May 2009
TL;DR: Custom JPEG quantization matrices are proposed for use in iris recognition. Compared with the default quantization table, they yield superior matching results in terms of average Hamming distance and an improved ROC, especially at low FAR.
Abstract: Custom JPEG quantization matrices are proposed for use in compression within iris recognition. Superior matching results in terms of average Hamming distance and an improved ROC are found as compared to the use of the default quantization table, especially at low FAR. This leads to improved user convenience in cases where high security is required.
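For readers unfamiliar with how a quantization matrix acts on a JPEG block, the following sketch shows the standard mechanism the paper customizes: each DCT coefficient is rounded to the nearest multiple of its entry in the quantization table. The table below is the standard JPEG luminance table; the paper's custom matrices would replace these entries (the code is illustrative, not the paper's implementation).

```python
import numpy as np

# Standard JPEG luminance quantization table (ITU-T T.81, Annex K).
# A custom matrix, as proposed in the paper, would replace these values.
Q = np.array([
    [16, 11, 10, 16,  24,  40,  51,  61],
    [12, 12, 14, 19,  26,  58,  60,  55],
    [14, 13, 16, 24,  40,  57,  69,  56],
    [14, 17, 22, 29,  51,  87,  80,  62],
    [18, 22, 37, 56,  68, 109, 103,  77],
    [24, 35, 55, 64,  81, 104, 113,  92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103,  99],
])

def quantize(dct_block, q=Q):
    """Round each DCT coefficient to the nearest multiple of its step size."""
    return np.round(dct_block / q).astype(int)

def dequantize(indices, q=Q):
    """Reconstruct approximate coefficients from the quantized indices."""
    return indices * q

rng = np.random.default_rng(0)
block = rng.normal(0, 50, (8, 8))           # stand-in for one 8x8 DCT block
rec = dequantize(quantize(block))
# Each coefficient's reconstruction error is bounded by half its step size.
print(np.all(np.abs(rec - block) <= Q / 2))
```

Larger entries in the table quantize more coarsely; a custom matrix shifts where the coding error lands in the frequency domain, which is the lever the paper exploits for iris matching.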

27 citations

Proceedings ArticleDOI
26 May 2013
TL;DR: A novel technique is proposed to detect the double quantization that results from double compression of a tampered video. It can detect tampering of I, P, or B frames in a GOP with high accuracy, and can detect forgery under a wide range of double-compression bitrates and quantization scale factors.
Abstract: In this paper, we propose a novel technique to detect double quantization, which results from double compression of a tampered video. The proposed algorithm uses principles of estimation theory to detect double quantization. Each pixel of a given frame is estimated from the spatially colocated pixels of all the other frames in a Group of Pictures (GOP). The error between the true and estimated values is subjected to a threshold to identify the double-compressed frame or frames in a GOP. The advantage of this algorithm is that it can detect tampering of I, P or B frames in a GOP with high accuracy. In addition, the technique can also detect forgery under a wide range of double-compression bitrates or quantization scale factors. We compare our experimental results against popular video forgery detection techniques and establish the effectiveness of the proposed technique.
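The estimate-and-threshold step described in the abstract can be sketched as follows. This is a deliberately simplified toy: it estimates each pixel of a frame as the mean of the co-located pixels in the other frames of the GOP and flags frames whose mean absolute error exceeds a threshold (the paper's actual estimator and threshold selection are not reproduced here).

```python
import numpy as np

def flag_anomalous_frames(gop, threshold):
    """gop: array of shape (n_frames, H, W) of grayscale frames.
    Returns a boolean mask marking frames whose estimation error
    from the co-located pixels of the other frames is large."""
    n = gop.shape[0]
    flags = np.zeros(n, dtype=bool)
    for k in range(n):
        others = np.delete(gop, k, axis=0)        # all frames except frame k
        estimate = others.mean(axis=0)            # co-located pixel mean
        error = np.abs(gop[k] - estimate).mean()  # mean absolute error
        flags[k] = error > threshold
    return flags

# A static 5-frame GOP with one outlier frame: only frame 2 is flagged.
gop = np.full((5, 4, 4), 100.0)
gop[2] += 30.0
print(flag_anomalous_frames(gop, threshold=10.0))
```

The leave-one-out structure mirrors the paper's idea: a frame that went through a different quantization history than its neighbors is poorly predicted by them, so its estimation error stands out.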

27 citations

Patent
Nenad Rijavec1
19 Feb 2002
TL;DR: A system and method is presented for compressing raster image data that efficiently processes data containing the same value for each pixel. However, the method is limited to a single image or a single color plane.
Abstract: A system and method for compressing raster image data that efficiently processes data containing the same value for each pixel. Images are compressed according to the Joint Photographic Experts Group (JPEG) standard. Raster image data for an image or a single color plane is analyzed; if the image is determined to contain the same value for each pixel, the processing emits pre-computed compressed data segments that replicate the output of JPEG compression.
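The fast path the patent describes can be sketched as follows. The function names and the one-byte placeholder segment are illustrative only, not taken from the patent; the point is the shape of the shortcut: detect a constant plane, then replicate a pre-computed compressed segment instead of running the full JPEG pipeline per 8x8 block.

```python
import numpy as np

def is_constant_plane(plane):
    """True if every pixel in the color plane holds the same value."""
    return plane.size > 0 and bool(np.all(plane == plane.flat[0]))

def compress_plane(plane, precomputed_segment=b"\x00"):
    """Illustrative fast path: for a constant plane, replicate one
    pre-computed compressed segment per 8x8 block instead of encoding."""
    if is_constant_plane(plane):
        n_blocks = (plane.shape[0] // 8) * (plane.shape[1] // 8)
        return precomputed_segment * n_blocks
    raise NotImplementedError("non-constant plane: fall back to full JPEG encoding")

print(is_constant_plane(np.full((16, 16), 255)))  # a solid-white plane
```

Checking the constant-value condition is a single linear pass over the data, so the savings come from skipping the per-block DCT, quantization, and entropy coding entirely.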

27 citations

Proceedings ArticleDOI
26 Mar 2003
TL;DR: For lossy compression of medical images, JPEG2000 is found to be more acceptable than, and superior to, the more conventional JPEG standard.
Abstract: Due to the constraints on bandwidth and storage capacity, medical images must be compressed before transmission and storage. However, when the image is compressed, especially at lower bit rates, the image fidelity is reduced, a situation which cannot be tolerated in the medical field. The paper studies the compression performance of the new JPEG2000 and the more conventional JPEG standards. The parameters compared include the compression efficiency, peak signal-to-noise ratio (PSNR), picture quality scale (PQS), and mean opinion score (MOS). Three types of medical images are used - X-ray, magnetic resonance imaging (MRI) and ultrasound. Overall, the study shows that JPEG2000 compression is more acceptable than, and superior to, JPEG in lossy compression.
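Of the metrics the study compares, PSNR is the one with a simple closed form. A minimal implementation for 8-bit images follows (this is the standard textbook definition, not code from the paper):

```python
import numpy as np

def psnr(original, compressed, peak=255.0):
    """Peak signal-to-noise ratio in dB between two same-size images."""
    mse = np.mean((original.astype(float) - compressed.astype(float)) ** 2)
    if mse == 0:
        return float("inf")  # identical images: no distortion
    return 10.0 * np.log10(peak ** 2 / mse)

ref = np.full((4, 4), 100, dtype=np.uint8)
deg = ref.copy()
deg[0, 0] += 10            # perturb one pixel by 10 gray levels
print(round(psnr(ref, deg), 2))
```

Higher PSNR means less distortion, which is why it serves as the objective counterpart to the subjective MOS scores in comparisons like the one above; PQS and MOS have no such one-line formula.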

27 citations

Journal ArticleDOI
TL;DR: The theoretical analysis presented in this paper provides some new insights into the behavior of local variance under JPEG compression, which has the potential to be used in some areas of image processing and analysis, such as image enhancement, image quality assessment, and image filtering.
Abstract: The local variance of image intensity is a typical measure of image smoothness. It has been extensively used, for example, to measure the visual saliency or to adjust the filtering strength in image processing and analysis. However, to the best of our knowledge, no analytical work has been reported about the effect of JPEG compression on image local variance. In this paper, a theoretical analysis on the variation of local variance caused by JPEG compression is presented. First, the expectation of intensity variance of $8\times 8$ non-overlapping blocks in a JPEG image is derived. The expectation is determined by the Laplacian parameters of the discrete cosine transform coefficient distributions of the original image and the quantization step sizes used in the JPEG compression. Second, some interesting properties that describe the behavior of the local variance under different degrees of JPEG compression are discussed. Finally, both the simulation and the experiments are performed to verify our derivation and discussion. The theoretical analysis presented in this paper provides some new insights into the behavior of local variance under JPEG compression. Moreover, it has the potential to be used in some areas of image processing and analysis, such as image enhancement, image quality assessment, and image filtering.
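The quantity whose expectation the paper derives, the intensity variance of 8x8 non-overlapping blocks, is easy to compute empirically. The sketch below is a simple counterpart to the derived expectation, not the paper's analysis; variable names are illustrative.

```python
import numpy as np

def block_variances(img, bs=8):
    """Intensity variance of each non-overlapping bs x bs block of a
    2-D grayscale image; trailing rows/columns that do not fill a
    whole block are dropped."""
    h = (img.shape[0] // bs) * bs
    w = (img.shape[1] // bs) * bs
    # Carve the image into a (rows, cols) grid of bs x bs blocks.
    blocks = img[:h, :w].reshape(h // bs, bs, w // bs, bs).swapaxes(1, 2)
    return blocks.reshape(-1, bs * bs).var(axis=1)

img = np.zeros((16, 16))
img[:8, :8] = np.arange(64).reshape(8, 8)  # one non-constant block
print(block_variances(img))                # first block varies, rest are flat
```

Comparing these empirical block variances before and after JPEG compression at different quality factors is one way to observe the behavior the paper characterizes analytically via the Laplacian coefficient model and the quantization step sizes.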

27 citations


Network Information
Related Topics (5)
- Feature extraction: 111.8K papers, 2.1M citations (84% related)
- Image segmentation: 79.6K papers, 1.8M citations (84% related)
- Feature (computer vision): 128.2K papers, 1.7M citations (84% related)
- Image processing: 229.9K papers, 3.5M citations (83% related)
- Robustness (computer science): 94.7K papers, 1.6M citations (81% related)
Performance
Metrics
No. of papers in the topic in previous years:

Year    Papers
2022    8
2021    354
2020    283
2019    294
2018    259
2017    295