
Quantization (image processing)

About: Quantization (image processing) is a research topic. Over the lifetime, 7977 publications have been published within this topic receiving 126632 citations.


Papers
Proceedings ArticleDOI
25 Apr 2007
TL;DR: A quantitative comparison between the energy costs associated with direct transmission of uncompressed images and sensor platform-based JPEG compression followed by transmission of the compressed image data is presented.
Abstract: Energy-efficient image communication is one of the most important goals for a large class of current and future sensor network applications. This paper presents a quantitative comparison between the energy costs associated with 1) direct transmission of uncompressed images and 2) sensor platform-based JPEG compression followed by transmission of the compressed image data. JPEG compression computations are mapped onto various resource-constrained sensor platforms using a design environment that allows computation using the minimum integer and fractional bit-widths needed in view of other approximations inherent in the compression process and choice of image quality parameters. Detailed experimental results examining the tradeoffs in processor resources, processing/transmission time, bandwidth utilization, image quality, and overall energy consumption are presented.
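The crossover the paper measures can be sketched with a back-of-the-envelope energy model: on-node compression pays a compute cost per input bit but shrinks the bitstream the radio must send. The per-bit energy constants and the compression ratio below are illustrative assumptions, not values from the paper.

```python
# Back-of-the-envelope energy comparison for image delivery on a sensor node.
# All platform constants are illustrative assumptions, not figures from the paper.

E_TX_PER_BIT = 0.6e-6    # J per bit transmitted over the radio (assumed)
E_CPU_PER_BIT = 0.05e-6  # J of compute per input bit for JPEG encoding (assumed)

def transmit_energy(num_bits):
    """Energy to send num_bits directly over the radio."""
    return num_bits * E_TX_PER_BIT

def compress_then_transmit_energy(num_bits, compression_ratio):
    """Energy to JPEG-compress on the node, then send the smaller bitstream."""
    compute = num_bits * E_CPU_PER_BIT
    radio = (num_bits / compression_ratio) * E_TX_PER_BIT
    return compute + radio

# A 320x240 8-bit grayscale image:
raw_bits = 320 * 240 * 8
direct = transmit_energy(raw_bits)
compressed = compress_then_transmit_energy(raw_bits, compression_ratio=10)
print(f"direct: {direct:.4f} J, compress+tx: {compressed:.4f} J")
```

Under these assumed constants compression wins; the paper's point is that the outcome depends on exactly these platform-specific costs, which its experiments quantify.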

33 citations

Proceedings ArticleDOI
19 Oct 2009
TL;DR: A novel method of JPEG steganalysis is proposed: based on the observation that DCT coefficients follow a bivariate generalized Gaussian distribution, neighboring joint density features are extracted on both an intra-block and an inter-block basis.
Abstract: Detection of information hiding in JPEG images is an active problem in the steganalysis community, since JPEG is a widely used compression standard and several steganographic systems have been designed for covert communication in JPEG images. In this paper, we propose a novel method of JPEG steganalysis. Based on an observation of the bi-variate generalized Gaussian distribution in the Discrete Cosine Transform (DCT) domain, neighboring joint density features are extracted on both an intra-block and an inter-block basis. Support Vector Machines (SVMs) are applied for detection. Experimental results indicate that the new method markedly improves on the current state of the art in detecting several steganographic systems in JPEG images. Our study also shows that detection performance is more accurately evaluated in terms of both image complexity and information-hiding ratio.
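The intra-block part of such feature extraction can be sketched as follows: take the 8x8 block DCT of the image and histogram the joint occurrences of absolute values of horizontally adjacent coefficients within each block. This is a minimal numpy sketch under assumed details (rounding, clipping threshold `T`); the paper's exact feature definition may differ.

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix (rows = frequencies)."""
    k = np.arange(n)
    M = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    M[0, :] *= 1 / np.sqrt(2)
    return M * np.sqrt(2 / n)

def block_dct(img, n=8):
    """8x8 block DCT of a grayscale image (dimensions divisible by n)."""
    M = dct_matrix(n)
    h, w = img.shape
    blocks = img.reshape(h // n, n, w // n, n).swapaxes(1, 2)  # (bh, bw, n, n)
    return M @ blocks @ M.T

def intra_block_joint_density(coeffs, T=3):
    """Joint density of absolute values of horizontally adjacent DCT
    coefficients inside each block, magnitudes clipped at T (assumed)."""
    a = np.minimum(np.abs(np.round(coeffs)), T).astype(int)
    left, right = a[..., :, :-1], a[..., :, 1:]
    hist = np.zeros((T + 1, T + 1))
    np.add.at(hist, (left.ravel(), right.ravel()), 1)  # accumulate pair counts
    return hist / hist.sum()

rng = np.random.default_rng(0)
img = rng.integers(0, 256, (64, 64)).astype(float)
feats = intra_block_joint_density(block_dct(img))
print(feats.shape)  # (T+1, T+1) feature matrix that sums to 1
```

The resulting density matrix (and its inter-block counterpart, computed across corresponding coefficients of adjacent blocks) would then be flattened into the feature vector fed to the SVM.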

33 citations

Proceedings ArticleDOI
01 Jan 2017
TL;DR: Results show that the proposed method outperforms direct application of reference state-of-the-art image encoders in terms of BD-PSNR gain and bit-rate reduction.
Abstract: This paper proposes an algorithm for lossy compression of unfocused light field images. The raw light field is preprocessed by demosaicing, devignetting, and slicing of the raw lenslet-array image. The slices are then rearranged into tiles and compressed by the standard JPEG 2000 encoder. The experimental analysis compares the performance of the proposed method against direct compression with JPEG 2000 and JPEG XR in terms of BD-PSNR gain and bit-rate reduction. The results show that the proposed method outperforms direct application of these reference state-of-the-art image encoders.
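The slicing-and-tiling step can be sketched as follows: under a U x V microlens pitch, stride-slicing the (demosaiced, devignetted) lenslet image collects same-position pixels under every microlens into sub-aperture views, which are then tiled into one 2-D image for a standard encoder such as JPEG 2000. The function names and the toy pitch below are hypothetical, not the paper's code.

```python
import numpy as np

def slice_lenslet(raw, U, V):
    """Split a raw lenslet-array image into U*V sub-aperture views:
    view (u, v) collects pixel (u, v) under every microlens."""
    H, W = raw.shape
    assert H % U == 0 and W % V == 0
    return np.stack([[raw[u::U, v::V] for v in range(V)] for u in range(U)])

def tile_views(views):
    """Rearrange the (U, V, h, w) stack of views into one 2-D mosaic
    of tiles, ready for a standard 2-D encoder."""
    U, V, h, w = views.shape
    return views.transpose(0, 2, 1, 3).reshape(U * h, V * w)

raw = np.arange(36.0).reshape(6, 6)  # toy 6x6 "lenslet" image, 3x3 microlens pitch
views = slice_lenslet(raw, 3, 3)     # nine 2x2 sub-aperture views
mosaic = tile_views(views)           # 6x6 mosaic of tiles
print(views.shape, mosaic.shape)
```

Grouping each view into its own tile exposes the strong inter-pixel correlation within a view, which is what lets the 2-D encoder outperform direct compression of the raw lenslet image.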

33 citations

Proceedings ArticleDOI
27 Mar 2000
TL;DR: It is shown that the data hiding capacity is at most equal to the loss in storage efficiency bit rate if watermarking and quantization for lossy compression occur in the same domain.
Abstract: We derive capacity bounds for watermarking and data hiding in the presence of just noticeable difference (JND) perceptual coding for a class of techniques that do not suffer from host signal interference. By modeling the lossy compression distortions on the hidden data using non-Gaussian statistics, we demonstrate that binary antipodal channel codes achieve capacity. It is shown that the data hiding capacity is at most equal to the loss in storage efficiency bit rate if watermarking and quantization for lossy compression occur in the same domain.

33 citations

Proceedings ArticleDOI
02 Apr 2001
TL;DR: A novel linear model for the process of quantization is proposed which leads to analytical results estimating the data hiding capacity for various watermarking domains, and appropriate transforms for robust spread spectrum data hiding in the face of JPEG compression are predicted.
Abstract: We determine the watermark domain that maximizes data hiding capacity. We focus on the situation in which the watermarked signal undergoes lossy compression involving quantization in a specified compression domain. A novel linear model for the process of quantization is proposed which leads to analytical results estimating the data hiding capacity for various watermarking domains. Using this framework we predict appropriate transforms for robust spread spectrum data hiding in the face of JPEG compression. Simulation results verify our theoretical observations. We find that a repetition code used in conjunction with spread spectrum watermarking in a different domain than employed for compression improves data hiding capacity.
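A minimal sketch of spread-spectrum data hiding with a repetition code, the combination the paper evaluates: each bit is spread over many samples with a pseudo-random +/-1 chip sequence and recovered by correlation. The embedding strength, repetition factor, and Gaussian host model below are assumptions for illustration, not the paper's parameters.

```python
import numpy as np

rng = np.random.default_rng(1)

def embed(host, bits, alpha=2.0, rep=64):
    """Spread-spectrum embedding: each bit is repeated over `rep` host
    samples using a pseudo-random +/-1 chip sequence (assumed scheme)."""
    chips = rng.choice([-1.0, 1.0], size=len(bits) * rep)
    symbols = np.repeat(2.0 * np.asarray(bits) - 1.0, rep)  # 0/1 -> -1/+1
    marked = host.copy()
    marked[:len(symbols)] += alpha * symbols * chips
    return marked, chips

def detect(marked, chips, rep=64):
    """Correlate each length-`rep` segment with its chips; the sign of
    the correlation sum recovers the bit (a majority vote, in effect)."""
    n = len(chips)
    corr = (marked[:n] * chips).reshape(-1, rep).sum(axis=1)
    return (corr > 0).astype(int)

host = rng.normal(0, 10, 4096)  # stand-in for transform-domain coefficients
bits = [1, 0, 1, 1, 0, 0, 1, 0]
marked, chips = embed(host, bits)
print(detect(marked, chips).tolist())  # recovers `bits` with high probability
```

The repetition factor `rep` trades payload for robustness; the paper's point is that choosing the embedding domain to differ from the compression domain raises the capacity this scheme achieves under JPEG quantization.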

33 citations


Network Information
Related Topics (5)
- Feature extraction: 111.8K papers, 2.1M citations (84% related)
- Image segmentation: 79.6K papers, 1.8M citations (84% related)
- Feature (computer vision): 128.2K papers, 1.7M citations (84% related)
- Image processing: 229.9K papers, 3.5M citations (83% related)
- Robustness (computer science): 94.7K papers, 1.6M citations (81% related)
Performance Metrics
Number of papers in the topic in previous years:

Year | Papers
2022 | 8
2021 | 354
2020 | 283
2019 | 294
2018 | 259
2017 | 295