
Quantization (image processing)

About: Quantization (image processing) is a research topic. Over its lifetime, 7977 publications have been published within this topic, receiving 126632 citations.


Papers
Proceedings ArticleDOI
18 Jun 2018
TL;DR: The proposed CLIP-Q method (Compression Learning by In-Parallel Pruning-Quantization) performs pruning and quantization jointly to exploit their complementary nature, compressing AlexNet by 51-fold, GoogLeNet by 10-fold, and ResNet-50 by 15-fold while preserving the uncompressed network accuracies on ImageNet.
Abstract: Deep neural networks enable state-of-the-art accuracy on visual recognition tasks such as image classification and object detection. However, modern deep networks contain millions of learned weights; a more efficient utilization of computation resources would assist in a variety of deployment scenarios, from embedded platforms with resource constraints to computing clusters running ensembles of networks. In this paper, we combine network pruning and weight quantization in a single learning framework that performs pruning and quantization jointly, and in parallel with fine-tuning. This allows us to take advantage of the complementary nature of pruning and quantization and to recover from premature pruning errors, which is not possible with current two-stage approaches. Our proposed CLIP-Q method (Compression Learning by In-Parallel Pruning-Quantization) compresses AlexNet by 51-fold, GoogLeNet by 10-fold, and ResNet-50 by 15-fold, while preserving the uncompressed network accuracies on ImageNet.

173 citations
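
The core idea above, pruning and quantization applied to the same weights rather than in sequence, can be illustrated in a few lines. Below is a minimal sketch in Python/NumPy, not the authors' CLIP-Q training procedure: it prunes one tensor by magnitude and quantizes the survivors onto a small shared k-means codebook; the pruning fraction, codebook size, and function names are illustrative assumptions.

```python
import numpy as np

def prune_and_quantize(weights, prune_frac=0.5, num_levels=16, kmeans_iters=10):
    """Prune and quantize a weight tensor (illustrative sketch only).

    Zeroes out the smallest-magnitude fraction of weights, then maps the
    survivors onto a small shared codebook via 1-D k-means, mimicking the
    pruning + weight-sharing combination used by compression methods such
    as CLIP-Q. All parameter values here are illustrative.
    """
    w = weights.ravel().copy()
    # Pruning: zero out the smallest-magnitude weights.
    threshold = np.quantile(np.abs(w), prune_frac)
    mask = np.abs(w) > threshold
    survivors = w[mask]

    # Quantization: fit a 1-D k-means codebook over the surviving weights.
    codebook = np.linspace(survivors.min(), survivors.max(), num_levels)
    for _ in range(kmeans_iters):
        assign = np.argmin(np.abs(survivors[:, None] - codebook[None, :]), axis=1)
        for k in range(num_levels):
            if np.any(assign == k):
                codebook[k] = survivors[assign == k].mean()

    w[mask] = codebook[assign]
    w[~mask] = 0.0
    return w.reshape(weights.shape), mask.reshape(weights.shape), codebook

# Example: compress one random "layer" and report the resulting sparsity.
rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(256, 256))
W_c, mask, codebook = prune_and_quantize(W)
print("nonzero fraction:", mask.mean(), "unique values:", len(np.unique(W_c)))
```

In CLIP-Q proper, these two operations run in parallel with fine-tuning, which is what lets the network recover from premature pruning errors; the sketch applies them once, post hoc.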

Journal ArticleDOI
01 Jan 2018
TL;DR: Experimental results show that the proposed watermarking algorithm achieves better watermark invisibility and stronger robustness against common attacks, e.g., JPEG compression, cropping, and added noise.
Abstract: This paper proposes a new blind watermarking algorithm that embeds a binary watermark into the blue component of an RGB image in the spatial domain, in order to resolve the problem of copyright protection. For embedding, the generation principle and distribution features of the direct current (DC) coefficient are used to modify pixel values directly in the spatial domain, and four different sub-watermarks are embedded into different areas of the host image. For extraction, each sub-watermark is recovered blindly from the DC coefficients of the watermarked image and the key-based quantization step, and then a statistical rule and the proposed "first to select, second to combine" method are used to form the final watermark. Hence, the proposed algorithm is executed in the spatial domain rather than the discrete cosine transform (DCT) domain, combining the simplicity and speed of spatial-domain processing with the high robustness characteristic of DCT-domain methods. Experimental results show that the proposed watermarking algorithm obtains better watermark invisibility and stronger robustness against common attacks, e.g., JPEG compression, cropping, and added noise. Comparison results also show the advantages of the proposed method.

172 citations
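
To make the DC-quantization idea concrete, here is a minimal sketch assuming a simple parity-based quantization rule on the block mean (the DC coefficient of an 8x8 block is proportional to its mean, so the modification can stay in the spatial domain). The step size, block size, and odd/even bit rule are illustrative assumptions; the sketch omits the paper's four-fold redundant embedding and its "first to select, second to combine" extraction.

```python
import numpy as np

BLOCK = 8      # block size (the DC of an 8x8 DCT block is proportional to its mean)
STEP = 12.0    # key-based quantization step (illustrative value)

def embed_bit(block, bit, step=STEP):
    """Embed one bit by quantizing the block mean (~DC coefficient).

    The mean is snapped to the centre of a quantization bin whose parity
    encodes the bit; the required shift is spread uniformly over the
    pixels, so the embedding works directly in the spatial domain.
    """
    mean = block.mean()
    q = np.floor(mean / step)
    if int(q) % 2 != bit:            # move to a bin with the right parity
        q += 1
    target = (q + 0.5) * step        # bin centre, robust to small noise
    return np.clip(block + (target - mean), 0, 255)

def extract_bit(block, step=STEP):
    """Blind extraction: read the parity of the quantized block mean."""
    return int(np.floor(block.mean() / step)) % 2

# Round-trip demo on one random block (mid-range values, so no clipping).
rng = np.random.default_rng(1)
blk = rng.integers(30, 220, size=(BLOCK, BLOCK)).astype(float)
for b in (0, 1):
    assert extract_bit(embed_bit(blk, b)) == b
print("round-trip OK")
```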

Proceedings ArticleDOI
22 May 2011
TL;DR: A statistical test to discriminate between original and forged regions in JPEG images, under the hypothesis that the former are doubly compressed while the latter are singly compressed; it demonstrates better discrimination than previously proposed methods.
Abstract: In this paper, we propose a statistical test to discriminate between original and forged regions in JPEG images, under the hypothesis that the former are doubly compressed while the latter are singly compressed. New probability models for the DCT coefficients of singly and doubly compressed regions are proposed, together with a reliable method for estimating the primary quantization factor in the case of double compression. Based on these models, the probability of each DCT block being forged is derived. Experimental results demonstrate better discrimination than previously proposed methods.

172 citations
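
The sketch below illustrates, in simplified form, how a primary quantization factor can be estimated: simulate double quantization for each candidate q1 and keep the one whose coefficient histogram best matches the observed one. The Laplacian coefficient model, the L1 histogram distance, and all parameter values are assumptions for illustration, not the paper's probability models.

```python
import numpy as np

def double_quantize(x, q1, q2):
    """Quantize with step q1, dequantize, then requantize with step q2."""
    return np.round(np.round(x / q1) * q1 / q2)

def estimate_primary_q(observed_bins, q2, candidates, n=200_000, seed=0):
    """Estimate the primary quantization step q1 (toy illustration).

    For each candidate q1, simulate double quantization of Laplacian-
    distributed coefficients (a common model for DCT coefficients) and
    keep the candidate whose histogram is closest to the observed one.
    """
    rng = np.random.default_rng(seed)
    coeffs = rng.laplace(scale=10.0, size=n)
    edges = np.arange(-20, 22) - 0.5
    best_q1, best_dist = None, np.inf
    for q1 in candidates:
        sim = double_quantize(coeffs, q1, q2)
        h, _ = np.histogram(sim, bins=edges, density=True)
        dist = np.abs(h - observed_bins).sum()   # L1 histogram distance
        if dist < best_dist:
            best_q1, best_dist = q1, dist
    return best_q1

# Demo: coefficients were first quantized with q1=5, then with q2=3.
rng = np.random.default_rng(1)
obs = double_quantize(rng.laplace(scale=10.0, size=200_000), q1=5, q2=3)
obs_hist, _ = np.histogram(obs, bins=np.arange(-20, 22) - 0.5, density=True)
print("estimated q1:", estimate_primary_q(obs_hist, q2=3, candidates=range(2, 9)))
```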

Proceedings ArticleDOI
07 Sep 2009
TL;DR: This work describes how double quantization can introduce statistical artifacts that, while not visible, can be quantified, measured, and used to detect tampering.
Abstract: We describe a technique for detecting double quantization in digital video that results from double MPEG compression or from combining two videos of different qualities (e.g., green-screening). We describe how double quantization introduces statistical artifacts that, while not visible, can be quantified, measured, and used to detect tampering. The technique can detect highly localized tampering in regions as small as 16 x 16 pixels.

171 citations
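
The statistical artifact in question is easy to reproduce: requantizing already-quantized values with a different step leaves periodic peaks and gaps in the histogram that single quantization does not produce. The short sketch below demonstrates this on synthetic Laplacian "coefficients"; the steps and distribution are illustrative, and the same fingerprint applies whether the coefficients come from MPEG or JPEG blocks.

```python
import numpy as np

def quantize(x, q):
    return np.round(x / q) * q  # quantize then dequantize with step q

rng = np.random.default_rng(0)
coeffs = rng.laplace(scale=8.0, size=100_000)   # stand-in for DCT coefficients

single = np.round(quantize(coeffs, 2) / 2)              # quantized once, step 2
double = np.round(quantize(quantize(coeffs, 3), 2) / 2) # step 3, then step 2

edges = np.arange(-10, 12) - 0.5
h1, _ = np.histogram(single, bins=edges)
h2, _ = np.histogram(double, bins=edges)

# Double quantization (3 then 2) leaves periodic empty and peaked bins,
# while single quantization yields a smooth unimodal histogram.
for v, a, b in zip(range(-10, 11), h1, h2):
    print(f"{v:4d}  single={a:6d}  double={b:6d}")
```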

Journal ArticleDOI
TL;DR: The algorithm is based on the observation that, when a JPEG image is repeatedly recompressed with the same quantization matrix, the number of differing JPEG coefficients between successive versions generally decreases monotonically.
Abstract: Detection of double Joint Photographic Experts Group (JPEG) compression is of great significance in the field of digital forensics. Successful approaches have been presented for detecting double JPEG compression when the primary and secondary compressions use different quantization matrices; however, no detection method had been reported for the case where they use the same quantization matrix. In this paper, we present a method that can detect double JPEG compression with the same quantization matrix. Our algorithm is based on the observation that, when a JPEG image is recompressed with the same quantization matrix over and over again, the number of differing JPEG coefficients (i.e., quantized discrete cosine transform coefficients) between two successive versions generally decreases monotonically. For example, the number of differing JPEG coefficients between the singly and doubly compressed images is generally larger than the number between the corresponding doubly and triply compressed images. Via a novel random perturbation strategy applied to the JPEG coefficients of the recompressed test image, we can find a "proper" perturbation ratio. For different images, this universal "proper" ratio generates a dynamically adapted threshold, which can be used to discriminate singly compressed images from doubly compressed ones. Furthermore, our method has the potential to detect triple, quadruple, and higher-order JPEG compression.

171 citations
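
The underlying observation is easy to reproduce with a simplified JPEG-like pipeline (blockwise 8x8 DCT, uniform quantization, rounding and clipping back to pixels) standing in for a real codec; the quantization step and the random test image below are illustrative assumptions. Counting the quantized coefficients that change between successive recompressions with the same step shows the generally monotone decrease the detector exploits.

```python
import numpy as np
from scipy.fft import dctn, idctn

Q = 10  # uniform quantization step standing in for a JPEG quality matrix

def jpeg_like_coeffs(img, q=Q):
    """Blockwise 8x8 DCT + uniform quantization (simplified JPEG model)."""
    h, w = img.shape
    coeffs = np.empty((h // 8, w // 8, 8, 8))
    for i in range(h // 8):
        for j in range(w // 8):
            block = img[8*i:8*i+8, 8*j:8*j+8] - 128.0
            coeffs[i, j] = np.round(dctn(block, norm='ortho') / q)
    return coeffs

def decode(coeffs, q=Q):
    """Dequantize, inverse DCT, and round/clip back to 8-bit pixels."""
    h, w = coeffs.shape[0] * 8, coeffs.shape[1] * 8
    img = np.empty((h, w))
    for i in range(coeffs.shape[0]):
        for j in range(coeffs.shape[1]):
            img[8*i:8*i+8, 8*j:8*j+8] = idctn(coeffs[i, j] * q, norm='ortho') + 128.0
    return np.clip(np.round(img), 0, 255)

# Recompress repeatedly with the SAME step; the pixel-domain rounding and
# clipping cause coefficient changes, whose count generally shrinks as the
# image converges toward a fixed point of the compression map.
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64)).astype(float)
prev = jpeg_like_coeffs(img)
for k in range(1, 6):
    img = decode(prev)
    cur = jpeg_like_coeffs(img)
    print(f"recompression {k}: changed coefficients = {int((cur != prev).sum())}")
    prev = cur
```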


Network Information
Related Topics (5)
Feature extraction: 111.8K papers, 2.1M citations (84% related)
Image segmentation: 79.6K papers, 1.8M citations (84% related)
Feature (computer vision): 128.2K papers, 1.7M citations (84% related)
Image processing: 229.9K papers, 3.5M citations (83% related)
Robustness (computer science): 94.7K papers, 1.6M citations (81% related)
Performance Metrics
No. of papers in the topic in previous years:

Year    Papers
2022    8
2021    354
2020    283
2019    294
2018    259
2017    295