scispace - formally typeset
Topic

Quantization (image processing)

About: Quantization (image processing) is a research topic. Over the lifetime, 7977 publications have been published within this topic receiving 126632 citations.


Papers
Journal ArticleDOI
TL;DR: A new data hiding method based on Adaptive BTC Edge Quantization (ABTC-EQ) using an optimal pixel adjustment process (OPAP) to optimize two quantization levels to improve the embedding capacity and quality of an image.
Abstract: We present a new data hiding method based on Adaptive BTC Edge Quantization (ABTC-EQ) that uses an optimal pixel adjustment process (OPAP) to optimize two quantization levels. We choose ABTC-EQ as the cover media because it is superior to AMBTC in maintaining a high-quality image after encoding. ABTC-EQ is represented as a trio (Q1, Q2, [Q3], BM), where the Qi are quantization levels (Q1 ≤ Q2 ≤ Q3) and BM is a bitmap. The number of quantization levels is two or three, depending on whether the cover image block contains an edge. Before embedding secret bits, every block is categorized as smooth or complex by a threshold. For a 4×4 smooth block, sixteen secret bits directly replace the block's bitmap, embedding the message directly. For complex blocks, the OPAP method conceals 1 bit each in the LSB and the second LSB, and maintains image quality by minimizing the errors that occur in the embedding procedure. Experimental results demonstrate that the proposed scheme performs satisfactorily in terms of embedding capacity and image quality.

26 citations
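The OPAP step mentioned above is a standard refinement of plain LSB substitution: after the secret bits overwrite the k least significant bits, the pixel is shifted by ±2^k when that reduces distortion without disturbing the embedded bits. A minimal sketch under those assumptions (the function name and parameters are illustrative, not the authors' ABTC-EQ code):

```python
def opap_embed(pixel, bits, k):
    """Embed k secret bits into the k LSBs of an 8-bit pixel,
    then adjust the result to minimize distortion (OPAP)."""
    secret = int(bits, 2)                     # k secret bits as an integer
    stego = (pixel & ~(2**k - 1)) | secret    # plain LSB substitution
    err = stego - pixel
    # Shift by +/- 2**k when that reduces the error; the k embedded
    # LSBs are unchanged by a multiple-of-2**k adjustment.
    if err > 2**(k - 1) and stego - 2**k >= 0:
        stego -= 2**k
    elif err < -2**(k - 1) and stego + 2**k <= 255:
        stego += 2**k
    return stego
```

For example, embedding the bits '11' (k = 2) into pixel value 100 gives 103 by plain substitution (error 3), which OPAP adjusts down to 99 (error 1) while the two embedded LSBs remain 11.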

Journal ArticleDOI
TL;DR: A new algorithm is proposed for forgery detection in MPEG videos using spatial and time domain analysis of quantization effect on DCT coefficients of I and residual errors of P frames to identify malicious inter-frame forgery comprising frame insertion or deletion.
Abstract: In this paper, a new algorithm is proposed for forgery detection in MPEG videos using spatial- and time-domain analysis of the quantization effect on the DCT coefficients of I frames and the residual errors of P frames. The proposed algorithm consists of three modules: double compression detection, malicious tampering detection, and decision fusion. The double compression detection module employs spatial-domain analysis of the first-significant-digit distribution of DCT coefficients in I frames to separate single- and double-compressed videos using an SVM classifier. Double compression does not necessarily imply malicious tampering. Therefore, the malicious tampering detection module uses time-domain analysis of the quantization effect on the residual errors of P frames to identify inter-frame forgery comprising frame insertion or deletion. Finally, the decision fusion module classifies input videos into three categories: single-compressed videos, double-compressed videos without malicious tampering, and double-compressed videos with malicious tampering. Experimental results and comparison with other methods show the efficiency of the proposed algorithm.

26 citations
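First-significant-digit analysis of DCT coefficients is typically grounded in Benford's law: in singly compressed images, leading digits of coefficient magnitudes follow a logarithmic distribution, and requantization disturbs it. A generic sketch of the feature extraction (illustrative names; not the paper's exact pipeline or SVM features):

```python
import math
from collections import Counter

def first_digit_distribution(coeffs):
    """Empirical distribution of the first significant digit
    of the nonzero coefficients (digits 1-9)."""
    digits = []
    for c in coeffs:
        c = abs(c)
        if c == 0:
            continue
        # Shift the leading digit into the units place.
        d = int(c / 10 ** math.floor(math.log10(c)))
        digits.append(d)
    total = len(digits)
    counts = Counter(digits)
    return {d: counts.get(d, 0) / total for d in range(1, 10)}

def benford(d):
    """Benford's-law reference probability for leading digit d."""
    return math.log10(1 + 1 / d)
```

The nine empirical frequencies (or their deviation from `benford(d)`) can then serve as the feature vector fed to a classifier.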

Proceedings ArticleDOI
01 Sep 2012
TL;DR: For each step of image processing chain, a statistical study of pixels' properties is performed to finally obtain a model of Discrete Cosine Transform (DCT) coefficients distribution.
Abstract: We propose a statistical model of natural images in JPEG format. Image acquisition comprises three principal stages. First, a RAW image is obtained from the sensor of a Digital Still Camera (DSC). Then, the RAW image undergoes post-acquisition processes such as demosaicking, white balancing, and γ-correction to improve its visual quality. Finally, the processed image goes through JPEG compression. For each step of this image processing chain, a statistical study of pixel properties is performed, finally yielding a model of the Discrete Cosine Transform (DCT) coefficient distribution.

26 citations
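The JPEG stage that the model ultimately describes operates on 8×8 blocks: a 2-D DCT-II followed by uniform quantization of each coefficient. A self-contained sketch of those two steps (a naive O(N^4) transform written for clarity, not the paper's code; a real quantizer uses a per-coefficient matrix rather than one step size):

```python
import math

def dct2_8x8(block):
    """Naive 2-D DCT-II on an 8x8 block, as used by JPEG."""
    N = 8
    def alpha(u):
        return math.sqrt(1 / N) if u == 0 else math.sqrt(2 / N)
    out = [[0.0] * N for _ in range(N)]
    for u in range(N):
        for v in range(N):
            s = 0.0
            for x in range(N):
                for y in range(N):
                    s += (block[x][y]
                          * math.cos((2 * x + 1) * u * math.pi / (2 * N))
                          * math.cos((2 * y + 1) * v * math.pi / (2 * N)))
            out[u][v] = alpha(u) * alpha(v) * s
    return out

def quantize(coeffs, q):
    """Uniform quantization of DCT coefficients with step size q."""
    return [[round(c / q) for c in row] for row in coeffs]
```

For a constant block of value 128, all energy lands in the DC coefficient (1024 here), and quantization with step 16 maps it to 64 with every AC coefficient at zero.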

Patent
14 Apr 2011
TL;DR: In this article, a computerized method is presented for independent disjoint block-level recompression of a first image generated by independent coding of disjoint blocks in a precursor image.
Abstract: A computerized method for independent disjoint block-level recompression of a first image generated by independent coding of disjoint blocks in a precursor image, the first image having at least one first quantization matrix associated with it. The method comprises performing, using a processor, at least one independent disjoint block-level compression operation on the first image, thereby generating a re-compressed second image. This includes generating a new quantization matrix and using it for the independent disjoint block-level compression, computing the rounding error created by the quantization process that utilizes the new quantization matrix, and, if needed, adjusting at least one value of the new quantization matrix to reduce that rounding error.

26 citations
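The rounding error the patent refers to arises because quantization maps each DCT coefficient to the nearest multiple of its quantization-matrix entry. A minimal sketch of measuring that error for a candidate matrix (illustrative only, not the patented adjustment procedure):

```python
def quantization_rounding_error(coeffs, qmatrix):
    """Total squared rounding error introduced by quantizing a
    block of DCT coefficients with a given quantization matrix."""
    err = 0.0
    for row_c, row_q in zip(coeffs, qmatrix):
        for c, q in zip(row_c, row_q):
            reconstructed = round(c / q) * q  # nearest multiple of q
            err += (c - reconstructed) ** 2
    return err
```

An adjustment loop in the spirit of the patent would recompute this error after lowering individual matrix entries and keep changes that reduce it.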

Journal ArticleDOI
TL;DR: Wavelet compression of amplitude/phase and real/imaginary parts of the Fourier spectrum of filtered off-axis digital holograms is compared and the combination of frequency filtering, compression of the obtained spectral components, and extra compression ofThe wavelet decomposition coefficients by threshold processing and quantization is analyzed.
Abstract: Compression of digital holograms allows one to store, transmit, and reconstruct large sets of holographic data. There are many digital image compression methods, and wavelets are usually used for this task. However, compression of digital holograms has many significant peculiarities. As a result, it is preferable to use a set of methods that includes filtering, scalar and vector quantization, wavelet processing, etc. In conjunction, these methods achieve acceptable quality of reconstructed images at significant compression ratios. In this paper, wavelet compression of the amplitude/phase and real/imaginary parts of the Fourier spectrum of filtered off-axis digital holograms is compared. The combination of frequency filtering, compression of the obtained spectral components, and extra compression of the wavelet decomposition coefficients by threshold processing and quantization is analyzed. Computer-generated and experimentally recorded digital holograms are compressed, and the quality of the reconstructed images is estimated. The results demonstrate compression ratios of up to 380 using the real/imaginary parts; amplitude/phase compression achieves ratios a factor of 2–4 lower for similar quality of the reconstructed objects.

26 citations
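The "extra compression of the wavelet decomposition coefficients by threshold processing" can be illustrated with the simplest wavelet, the Haar transform: most detail coefficients of smooth data are small, so zeroing those below a threshold yields a sparse, compressible representation. A generic sketch (not the paper's pipeline, which uses the Fourier spectrum of off-axis holograms):

```python
def haar_1d(signal):
    """One level of the 1-D Haar transform: pairwise averages
    (approximation) and pairwise half-differences (detail)."""
    avg = [(a + b) / 2 for a, b in zip(signal[::2], signal[1::2])]
    det = [(a - b) / 2 for a, b in zip(signal[::2], signal[1::2])]
    return avg, det

def threshold(coeffs, t):
    """Hard thresholding: zero out coefficients smaller than t."""
    return [c if abs(c) >= t else 0.0 for c in coeffs]
```

The surviving coefficients are then quantized and entropy-coded; the threshold trades reconstruction quality against compression ratio.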


Network Information
Related Topics (5)
Feature extraction
111.8K papers, 2.1M citations
84% related
Image segmentation
79.6K papers, 1.8M citations
84% related
Feature (computer vision)
128.2K papers, 1.7M citations
84% related
Image processing
229.9K papers, 3.5M citations
83% related
Robustness (computer science)
94.7K papers, 1.6M citations
81% related
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2022    8
2021    354
2020    283
2019    294
2018    259
2017    295