Topic

Quantization (image processing)

About: Quantization (image processing) is a research topic. Over the lifetime, 7977 publications have been published within this topic receiving 126632 citations.


Papers
Posted Content
TL;DR: In this paper, the authors focus on compressing and accelerating deep models whose network weights are represented with very small numbers of bits, referred to as extremely low-bit neural networks, and model this problem as a discretely constrained optimization problem.
Abstract: Although deep learning models are highly effective for various learning tasks, their high computational costs prohibit deployment in scenarios where memory or computational resources are limited. In this paper, we focus on compressing and accelerating deep models with network weights represented by very small numbers of bits, referred to as extremely low-bit neural networks. We model this problem as a discretely constrained optimization problem. Borrowing the idea of the Alternating Direction Method of Multipliers (ADMM), we decouple the continuous parameters from the discrete constraints of the network and cast the original hard problem into several subproblems. We propose to solve these subproblems using extragradient and iterative quantization algorithms, which lead to considerably faster convergence than conventional optimization methods. Extensive experiments on image recognition and object detection verify that the proposed algorithm is more effective than state-of-the-art approaches for extremely low-bit neural networks.
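The projection onto a discrete weight set at the heart of such methods can be sketched as follows. This is a minimal illustration using a ternary {-a, 0, +a} codebook with a simple threshold heuristic, not the paper's exact extragradient/ADMM procedure; the function name and constants are illustrative.

```python
import numpy as np

def quantize_ternary(w):
    """Project continuous weights onto {-a, 0, +a}.

    A simplified stand-in for the iterative quantization step:
    keep the largest-magnitude entries, zero the rest, and pick
    the scale a that best fits the kept entries.
    """
    # Threshold heuristic: entries below ~0.7 * mean|w| are zeroed.
    delta = 0.7 * np.mean(np.abs(w))
    mask = np.abs(w) > delta
    if not mask.any():
        return np.zeros_like(w)
    a = np.mean(np.abs(w[mask]))  # least-squares-optimal scale for kept entries
    return a * np.sign(w) * mask

w = np.array([0.9, -0.05, 0.4, -0.8, 0.02])
q = quantize_ternary(w)
```

In the full ADMM scheme this projection alternates with continuous (extragradient) updates of the unquantized weights; the sketch above shows only the discrete projection.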

115 citations

Proceedings ArticleDOI
TL;DR: A set of color spaces that allow reversible mapping between red-green-blue and luma-chroma representations in integer arithmetic can improve coding gain by over 0.5 dB with respect to the popular YCbCr transform, while achieving much lower computational complexity.
Abstract: This paper reviews a set of color spaces that allow reversible mapping between red-green-blue and luma-chroma representations in integer arithmetic. The YCoCg transform and its reversible form YCoCg-R can improve coding gain by over 0.5 dB with respect to the popular YCbCr transform, while achieving much lower computational complexity. We also present extensions of the YCoCg transform for four-channel CMYK pixel data. Thanks to their reversibility under integer arithmetic, these transforms are useful for both lossy and lossless compression. Versions of these transforms are used in the HD Photo image coding technology (which is the basis for the upcoming JPEG XR standard) and in recent editions of the H.264/MPEG-4 AVC video coding standard.
Keywords: Image coding, color transforms, lossless coding, YCoCg, JPEG, JPEG XR, HD Photo.
1. INTRODUCTION: In color image compression, the input image usually has three color values per pixel: red, green, and blue (RGB). Independent compression of each of the R, G, and B color planes is possible (and explicitly allowed in standards such as JPEG 2000).
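YCoCg-R owes its exact reversibility to lifting: each step adds a shifted version of one channel to another, so each can be undone exactly in integer arithmetic. A minimal per-sample sketch:

```python
def rgb_to_ycocg_r(r, g, b):
    """Forward YCoCg-R lifting transform (integer, exactly reversible)."""
    co = r - b
    t = b + (co >> 1)
    cg = g - t
    y = t + (cg >> 1)
    return y, co, cg

def ycocg_r_to_rgb(y, co, cg):
    """Inverse transform: undo the lifting steps in reverse order."""
    t = y - (cg >> 1)
    g = cg + t
    b = t - (co >> 1)
    r = b + co
    return r, g, b

# Round trip is lossless for any integer RGB triple, e.g.:
assert ycocg_r_to_rgb(*rgb_to_ycocg_r(10, 200, 37)) == (10, 200, 37)
```

Because `>>` discards the low bit, neither direction is a pure matrix multiply, yet composing them recovers the input bit-exactly; the chroma channels Co and Cg need one extra bit of range compared to the inputs.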

114 citations

Proceedings ArticleDOI
14 Mar 2010
TL;DR: It is shown how the proper addition of noise to an image's discrete cosine transform coefficients can sufficiently remove quantization artifacts which act as indicators of JPEG compression while introducing an acceptable level of distortion.
Abstract: The widespread availability of photo editing software has made it easy to create visually convincing digital image forgeries. To address this problem, there has been much recent work in the field of digital image forensics. There has been little work, however, in the field of anti-forensics, which seeks to develop a set of techniques designed to fool current forensic methodologies. In this work, we present a technique for disguising an image's JPEG compression history. An image's JPEG compression history can be used to provide evidence of image manipulation, supply information about the camera used to generate an image, and identify forged regions within an image. We show how the proper addition of noise to an image's discrete cosine transform coefficients can sufficiently remove quantization artifacts which act as indicators of JPEG compression while introducing an acceptable level of distortion. Simulation results are provided to verify the efficacy of this anti-forensic technique.
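The core idea can be sketched as follows: quantized DCT coefficients cluster at multiples of the quantization step, so adding dither within each quantization bin hides that comb-like histogram. This simplified sketch uses plain uniform noise; the paper matches the noise to a per-bin coefficient model, and the function name here is illustrative.

```python
import numpy as np

def mask_quantization(coeffs, q, rng=None):
    """Add dither so dequantized DCT coefficients no longer cluster
    at multiples of the quantization step q (simplified sketch)."""
    rng = np.random.default_rng(0) if rng is None else rng
    noise = rng.uniform(-q / 2, q / 2, size=coeffs.shape)
    return coeffs + noise

q = 8.0
# Dequantized JPEG coefficients: every value is a multiple of q.
coeffs = q * np.round(np.arange(-40.0, 40.0) / q)
smoothed = mask_quantization(coeffs, q)
```

Keeping the noise inside (-q/2, q/2) bounds the distortion per coefficient, which is why the technique can erase the quantization fingerprint at an acceptable visual cost.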

114 citations

Book ChapterDOI
07 May 2006
TL;DR: The authors propose an approach that can detect doctored JPEG images and further locate the doctored parts by examining the double quantization effect hidden among the DCT coefficients.
Abstract: The steady improvement in image/video editing techniques has enabled people to synthesize realistic images/videos conveniently. Legal issues may arise when a doctored image cannot be distinguished from a real one by visual examination. Recognizing that it might be impossible to develop a method that is universal for all kinds of images, and that JPEG is the most frequently used image format, we propose an approach that can detect doctored JPEG images and further locate the doctored parts, by examining the double quantization effect hidden among the DCT coefficients. To date, this approach is the only one that can locate the doctored parts automatically. It has several other advantages: the ability to detect images doctored by different kinds of synthesizing methods (such as alpha matting and inpainting, besides simple image cut/paste), the ability to work without fully decompressing the JPEG images, and its fast speed. Experiments show that our method is effective for JPEG images, especially when the compression quality is high.
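The double quantization effect itself is easy to reproduce: quantizing with one step size and then requantizing with a different one leaves periodic peaks and gaps in the coefficient histogram. A toy sketch, with synthetic Laplacian-distributed values standing in for real DCT coefficients (the step sizes and distribution are illustrative assumptions):

```python
import numpy as np

def double_quantize(x, q1, q2):
    """Quantize with step q1, dequantize, then requantize with step q2,
    as happens when a JPEG is re-saved at a different quality."""
    once = q1 * np.round(x / q1)
    return np.round(once / q2).astype(int)

rng = np.random.default_rng(1)
x = rng.laplace(scale=10.0, size=200_000)   # toy model of AC coefficients

single = np.round(x / 3).astype(int)        # quantized once with step 3
double = double_quantize(x, q1=5, q2=3)     # quantized with 5, then 3

# Singly quantized data fills every integer bin; doubly quantized data
# can only land on round(5k/3), so bins like +/-1 stay empty -- the
# periodic signature the detector looks for.
```

A forensic detector exploits exactly these empty or over-full bins; doctored regions pasted in after the first compression lack the signature, which is what allows localization.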

114 citations

Journal ArticleDOI
TL;DR: It is found that the performance of steganalysis techniques is affected by the JPEG quality factor, and that JPEG recompression artifacts serve as a source of confusion for almost all steganalysis techniques.
Abstract: We investigate the performance of state-of-the-art universal steganalyzers proposed in the literature. These universal steganalyzers are tested against a number of well-known steganographic embedding techniques that operate in both the spatial and transform domains. Our experiments are performed using a large data set of JPEG images obtained by randomly crawling a set of publicly available websites. The image data set is categorized with respect to size, quality, and texture to determine their potential impact on steganalysis performance. To establish a comparative evaluation of techniques, undetectability results are obtained at various embedding rates. In addition to variation in cover image properties, our comparison also takes into consideration different message length definitions and computational complexity issues. Our results indicate that the performance of steganalysis techniques is affected by the JPEG quality factor, and JPEG recompression artifacts serve as a source of confusion for almost all steganalysis techniques. © 2006

113 citations


Network Information
Related Topics (5)
- Feature extraction: 111.8K papers, 2.1M citations (84% related)
- Image segmentation: 79.6K papers, 1.8M citations (84% related)
- Feature (computer vision): 128.2K papers, 1.7M citations (84% related)
- Image processing: 229.9K papers, 3.5M citations (83% related)
- Robustness (computer science): 94.7K papers, 1.6M citations (81% related)
Performance Metrics
No. of papers in the topic in previous years:

Year    Papers
2022    8
2021    354
2020    283
2019    294
2018    259
2017    295