Topic

Quantization (image processing)

About: Quantization (image processing) is a research topic. Over the lifetime, 7,977 publications have been published within this topic, receiving 126,632 citations.


Papers
Journal ArticleDOI
Lihua Tian, Nanning Zheng, Jianru Xue, Ce Li, Xiaofeng Wang
TL;DR: Experimental results on standard benchmarks demonstrate that, compared with state-of-the-art watermarking schemes, the proposed method is more robust to white noise, filtering, and JPEG compression attacks, and can effectively detect tampering and localize forgeries.
Abstract: This paper proposes an integrated visual saliency-based watermarking approach that can be used for both synchronous image authentication and copyright protection. First, regions of interest (ROIs), which are not of fixed size and capture the most important information in an image, are extracted automatically using a proto-object-based saliency attention model. Second, to resist common signal processing attacks, an improved quantization method is employed to embed the copyright information into the DCT coefficients of each ROI. Finally, the edge map of an ROI is chosen as the fragile watermark and is embedded into the DWT domain of the watermarked image to further resist tampering attacks. Using ROI-based visual saliency as a bridge, the proposed method achieves image authentication and copyright protection synchronously while preserving much more robust information. Experimental results on standard benchmarks demonstrate that, compared with state-of-the-art watermarking schemes, the proposed method is more robust to white noise, filtering, and JPEG compression attacks. Furthermore, it can effectively detect tampering and localize forgeries.

40 citations
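
A minimal sketch of the quantization-based embedding idea described in this abstract, using quantization index modulation (QIM) on one mid-frequency DCT coefficient of an 8x8 block; the step size delta, the coefficient position, and the use of SciPy's orthonormal DCT are illustrative assumptions, not the paper's exact scheme.

    import numpy as np
    from scipy.fft import dctn, idctn

    def embed_bit_qim(block, bit, delta=16.0, pos=(2, 3)):
        """Embed one watermark bit by forcing the parity of one quantized
        mid-frequency DCT coefficient (quantization index modulation)."""
        coeffs = dctn(block, norm='ortho')
        q = int(np.round(coeffs[pos] / delta))
        if q % 2 != bit:
            # Move to the nearest lattice point with the required parity.
            q += 1 if coeffs[pos] / delta >= q else -1
        coeffs[pos] = q * delta
        return idctn(coeffs, norm='ortho')

    def extract_bit_qim(block, delta=16.0, pos=(2, 3)):
        """Recover the bit from the parity of the quantized coefficient."""
        coeffs = dctn(block, norm='ortho')
        return int(np.round(coeffs[pos] / delta)) % 2

    block = np.random.rand(8, 8) * 255.0
    marked = embed_bit_qim(block, bit=1)
    assert extract_bit_qim(marked) == 1

Because the coefficient is snapped onto a lattice of spacing delta, moderate noise or recompression leaves the parity, and hence the embedded bit, recoverable.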

Proceedings ArticleDOI
01 Sep 2013
TL;DR: A novel quantization table for the widely-used JPEG compression standard which leads to improved feature detection performance and is based on the observed impact of scale-space processing on the DCT basis functions.
Abstract: Keypoint or interest point detection is the first step in many computer vision algorithms. The detection performance of the state-of-the-art detectors is, however, strongly influenced by compression artifacts, especially at low bit rates. In this paper, we design a novel quantization table for the widely-used JPEG compression standard which leads to improved feature detection performance. After analyzing several popular scale-space based detectors, we propose a novel quantization table which is based on the observed impact of scale-space processing on the DCT basis functions. Experimental results show that the novel quantization table outperforms the JPEG default quantization table in terms of feature repeatability, number of correspondences, matching score, and number of correct matches.

40 citations
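
For context, the lossy step that any JPEG quantization table controls can be sketched as below; the standard JPEG luminance table is used purely for illustration here, not the detector-friendly table proposed in the paper.

    import numpy as np
    from scipy.fft import dctn, idctn

    # Standard JPEG luminance quantization table (Annex K), for illustration only.
    Q_LUMA = np.array([
        [16, 11, 10, 16,  24,  40,  51,  61],
        [12, 12, 14, 19,  26,  58,  60,  55],
        [14, 13, 16, 24,  40,  57,  69,  56],
        [14, 17, 22, 29,  51,  87,  80,  62],
        [18, 22, 37, 56,  68, 109, 103,  77],
        [24, 35, 55, 64,  81, 104, 113,  92],
        [49, 64, 78, 87, 103, 121, 120, 101],
        [72, 92, 95, 98, 112, 100, 103,  99],
    ], dtype=np.float64)

    def quantize_block(block, table):
        """DCT, divide by the table, round: the lossy step on which both
        compression quality and (per the paper) keypoint repeatability depend."""
        return np.round(dctn(block - 128.0, norm='ortho') / table)

    def dequantize_block(qcoeffs, table):
        """Scale the indices back and invert the DCT."""
        return idctn(qcoeffs * table, norm='ortho') + 128.0

    block = np.random.randint(0, 256, (8, 8)).astype(np.float64)
    recon = dequantize_block(quantize_block(block, Q_LUMA), Q_LUMA)

Designing a different table amounts to choosing where in this divide-and-round step the high-frequency content that scale-space detectors rely on is preserved or discarded.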

Proceedings ArticleDOI
23 Oct 2009
TL;DR: This work analyzes in more detail the performance of existing approaches, evaluating their effectiveness on input datasets that differ in resolution, compression ratio, and type of forgery (e.g., duplicated regions or image composition).
Abstract: One of the key characteristics of digital images with a discrete representation is their pliability to manipulation. Recent trends in the unsupervised detection of digital forgery include several advanced strategies devoted to revealing anomalies by considering different aspects of the multimedia content. One promising approach, among others, exploits the statistical distribution of DCT coefficients to reveal irregularities caused by a signal superimposed over the original one (e.g., copy and paste). As recently shown, the ratio between the quantization tables used to compress the signal before and after the malicious forgery alters the histograms of the DCT coefficients, especially for basis functions that are close in terms of frequency content. In this work we analyze in more detail the performance of existing approaches, evaluating their effectiveness on input datasets that differ in resolution, compression ratio, and type of forgery (e.g., duplicated regions or image composition). We also present post-processing techniques that can be applied to the forged image to degrade the performance of current state-of-the-art solutions. Finally, we conclude the paper with future improvements aimed at increasing the robustness and reliability of forgery detection in the DCT domain.

40 citations
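
A rough sketch of the DCT-histogram analysis such detectors rely on: collect one sub-band's coefficients over all 8x8 blocks and inspect the histogram, where double compression with mismatched quantization tables leaves periodic peaks or gaps. The chosen sub-band, bin range, and block scan are illustrative assumptions.

    import numpy as np
    from scipy.fft import dctn

    def subband_histogram(image, pos=(1, 1), bins=101, value_range=(-50, 50)):
        """Histogram of one DCT sub-band over all non-overlapping 8x8 blocks.

        After double JPEG compression with mismatched quantization tables this
        histogram shows periodic peaks or gaps, which DCT-domain forgery
        detectors look for."""
        h, w = image.shape
        values = []
        for y in range(0, h - 7, 8):
            for x in range(0, w - 7, 8):
                block = image[y:y + 8, x:x + 8].astype(np.float64) - 128.0
                values.append(dctn(block, norm='ortho')[pos])
        return np.histogram(values, bins=bins, range=value_range)

    image = np.random.randint(0, 256, (64, 64))
    hist, bin_edges = subband_histogram(image)

Post-processing attacks of the kind discussed in the paper aim precisely at smoothing out these periodic histogram artifacts so that the detector's statistics no longer fire.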

Posted Content
TL;DR: This paper shows that, for autoencoders, comparable rate-distortion performance can be obtained with a single learned transform, which saves a great deal of training time.
Abstract: This paper explores the problem of learning transforms for image compression via autoencoders. Usually, the rate-distortion performance of image compression is tuned by varying the quantization step size. For autoencoders, this would in principle require learning one transform per rate-distortion point, each at a given quantization step size. Here, we show that comparable performance can be obtained with a single learned transform. The different rate-distortion points are then reached by varying the quantization step size at test time. This approach saves a great deal of training time.

40 citations
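
The test-time mechanism the abstract describes, one fixed transform with a variable uniform quantizer, can be sketched as follows; the entropy-based rate and latent-space MSE used here are stand-in measures, and the learned transform itself is omitted.

    import numpy as np

    def rd_sweep(latent, steps=(0.5, 1.0, 2.0, 4.0)):
        """Trace several rate-distortion points with one fixed transform by
        only changing the uniform quantization step size at test time.

        Rate is approximated by the entropy of the quantization indices and
        distortion by MSE in latent space; both are stand-ins."""
        points = []
        for step in steps:
            indices = np.round(latent / step).astype(int)
            rec = indices * step                      # dequantized latent
            _, counts = np.unique(indices, return_counts=True)
            p = counts / counts.sum()
            rate = float(-(p * np.log2(p)).sum())     # bits per latent sample
            mse = float(np.mean((latent - rec) ** 2))
            points.append((step, rate, mse))
        return points

    latent = np.random.randn(10000)
    for step, rate, mse in rd_sweep(latent):
        print(f"step={step:4.1f}  rate={rate:5.2f} bits  mse={mse:.4f}")

A larger step lowers the index entropy (rate) at the cost of higher quantization error, which is exactly the trade-off the paper sweeps without retraining the transform.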

Journal ArticleDOI
TL;DR: Results are presented which show that LMS may provide a reduction of almost 2 bits per symbol in transmitted bit rate compared to DPCM when distortion levels are approximately the same for both methods.
Abstract: The LMS algorithm may be used to adapt the coefficients of an adaptive prediction filter for image source encoding. Results are presented which show that LMS may provide a reduction of almost 2 bits per symbol in transmitted bit rate compared to DPCM when distortion levels are approximately the same for both methods. Alternatively, LMS can be used in fixed bit-rate environments to decrease the reconstructed image distortion. When compared with fixed-coefficient DPCM, reconstructed image distortion is reduced by as much as 6-7 dB using LMS. Lastly, pictorial results representative of LMS processing are presented.

40 citations
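
A minimal sketch of an LMS-adapted linear predictor producing DPCM residuals on a 1-D pixel scan; the predictor order, step size mu, and the toy signal are illustrative assumptions, not the paper's configuration.

    import numpy as np

    def lms_dpcm(signal, order=3, mu=1e-6):
        """DPCM residuals with an LMS-adapted linear predictor: the coefficients
        are updated from each prediction error, so they track local statistics."""
        w = np.zeros(order)
        residuals = np.zeros(len(signal))
        for n in range(order, len(signal)):
            x = signal[n - order:n][::-1]      # most recent samples first
            e = signal[n] - w @ x              # prediction error to be coded
            residuals[n] = e
            w += mu * e * x                    # LMS coefficient update
        return residuals, w

    # A toy 1-D pixel scan; lower residual variance is what translates into
    # the bit-rate savings over fixed-coefficient DPCM reported above.
    row = np.cumsum(np.random.randn(512)) + 128.0
    res, weights = lms_dpcm(row)
    print("residual variance:", res.var(), "signal variance:", row.var())

Because the residuals have a much smaller variance than the raw samples, they can be quantized with fewer bits at the same distortion, which is the source of the reported gain over fixed-coefficient DPCM.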


Network Information
Related Topics (5)
Feature extraction: 111.8K papers, 2.1M citations (84% related)
Image segmentation: 79.6K papers, 1.8M citations (84% related)
Feature (computer vision): 128.2K papers, 1.7M citations (84% related)
Image processing: 229.9K papers, 3.5M citations (83% related)
Robustness (computer science): 94.7K papers, 1.6M citations (81% related)
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2022    8
2021    354
2020    283
2019    294
2018    259
2017    295