Topic

Quantization (image processing)

About: Quantization (image processing) is a research topic. Over its lifetime, 7,977 publications have been published within this topic, receiving 126,632 citations.


Papers
Journal ArticleDOI
01 Oct 2015
TL;DR: A novel, efficient and robust watermarking scheme for protecting document image contents is proposed, and experimental results show that the technique is highly robust.
Abstract: A novel, efficient and robust watermarking scheme for protecting document image contents is proposed in this work. An integer wavelet-based scheme is developed for robustly embedding a compressed version of a binary watermark logo. At the sender side, the source document image is divided into empty and non-empty segments depending on the absence or presence of information. Watermarking is applied only to the non-empty segments, which reduces the required embedding capacity. The binary watermark logo is compressed using a binary block coding technique with an appropriate block size. A level-2 integer wavelet transform is applied to the non-empty segments of the source document image. The LL sub-band at level 2 is subdivided into blocks of uniform size, and the compressed watermark bitstream is embedded into them redundantly using a quantization technique. Multiple copies of the compressed watermark are therefore available, and no single block needs to carry the entire compressed watermark stream. At the receiver side, the segments extracted from each set of blocks are merged into a single bitstream, which is then decoded to recover the binary watermark. The extracted and embedded watermarks are compared, and the authentication decision is made by majority voting. Given the quantization step size, the logo size, and the wavelet transform level, the watermark is extracted without access to the original image. The experimental results show that the proposed technique is highly robust. Performance is measured in terms of Peak Signal to Noise Ratio (PSNR) and Normalized Correlation Coefficient (NCC), and the results show that the proposed approach outperforms existing methods. In the proposed scheme, the block-coding level serves as the key for watermark decompression, providing an additional layer of security.
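The embedding step above relies on quantizing wavelet coefficients. Below is a minimal Python sketch of a quantization-index-modulation (QIM) style rule applied to the LL sub-band of a 2-level wavelet decomposition, assuming PyWavelets is available; the step size `delta`, the `haar` wavelet (a standard, non-integer transform used here for simplicity), and the helper names are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np
import pywt  # PyWavelets; assumed available for the wavelet decomposition

def embed_bits_qim(coeffs, bits, delta=8.0):
    """Embed one bit per coefficient with a simple QIM-style rule.

    Snap each coefficient to a multiple of delta, then offset by delta/2
    when the bit is 1. This only illustrates quantization-based embedding;
    the paper's block layout, redundancy, and logo compression are omitted.
    """
    flat = coeffs.flatten().astype(float)
    for i, bit in enumerate(bits):
        q = np.round(flat[i] / delta) * delta
        flat[i] = q + (delta / 2.0 if bit else 0.0)
    return flat.reshape(coeffs.shape)

def extract_bits_qim(coeffs, n_bits, delta=8.0):
    """Blind extraction: recover the bits from the coefficients alone."""
    flat = coeffs.flatten().astype(float)[:n_bits]
    # A coefficient at an odd multiple of delta/2 decodes as 1, an even multiple as 0.
    return [int(abs(c / (delta / 2.0)) % 2 > 0.5) for c in flat]

# Usage: embed a short bitstream into the LL sub-band of a stand-in document segment.
image = np.random.randint(0, 256, (64, 64)).astype(float)
coeffs2 = pywt.wavedec2(image, "haar", level=2)
ll = coeffs2[0]
watermark_bits = [1, 0, 1, 1, 0, 0, 1, 0]
ll_marked = embed_bits_qim(ll, watermark_bits)
print(extract_bits_qim(ll_marked, len(watermark_bits)))  # prints the embedded bits back
```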

32 citations

Proceedings ArticleDOI
26 Sep 2001
TL;DR: A new and statistically robust algorithm is presented that improves the performance of the standard DCT compression algorithm in both perceived quality and compressed size.
Abstract: The paper presents a new and statistically robust algorithm that improves the performance of the standard DCT compression algorithm in both perceived quality and compressed size. The proposed approach combines an information-theoretic/statistical analysis with HVS (human visual system) response functions. The methodology yields a suitable quantization table for specific classes of images and specific viewing conditions. The paper presents a case study in which the right parameters are learned, after an extensive experimental phase, for three specific classes: document, landscape and portrait. The results show both perceptual and measured (in terms of PSNR) improvement. A further application shows how significant improvement can be obtained by profiling the relative DCT error inside the processing pipeline of images acquired by typical digital sensors.
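For reference, the sketch below shows the generic JPEG-style step that a class-specific quantization table plugs into: an 8x8 block is DCT-transformed, divided element-wise by the table, and rounded. The table `Q` here is purely illustrative, not one of the learned tables from the paper.

```python
import numpy as np
from scipy.fft import dctn, idctn  # SciPy's multidimensional DCT

# Illustrative quantization table: smaller values preserve low frequencies,
# larger values discard high ones (values are assumptions, not learned).
Q = np.clip(np.add.outer(np.arange(8), np.arange(8)) * 6 + 10, 10, 120)

def quantize_block(block, table):
    """JPEG-style step: 2-D DCT, divide by the table, round to integers."""
    coeffs = dctn(block - 128.0, norm="ortho")
    return np.round(coeffs / table)

def dequantize_block(qcoeffs, table):
    """Inverse step: multiply back by the table and invert the DCT."""
    return idctn(qcoeffs * table, norm="ortho") + 128.0

block = np.random.randint(0, 256, (8, 8)).astype(float)  # one 8x8 image tile
q = quantize_block(block, Q)
rec = dequantize_block(q, Q)
mse = np.mean((block - rec) ** 2)
print("nonzero coefficients:", int(np.count_nonzero(q)), "MSE:", round(mse, 2))
```

A coarser table zeroes more coefficients (smaller coded size) at the cost of higher MSE; tuning that trade-off per image class is the paper's goal.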

32 citations

Book ChapterDOI
19 Jun 2017
TL;DR: This paper proposes an adaptation of the recent guided fireworks algorithm, from the class of swarm intelligence algorithms, for quantization table optimization, tests the proposed approach on standard benchmark images, and compares the results with other approaches from the literature.
Abstract: Digital images are useful and ubiquitous, but their large size makes storage demanding. The JPEG lossy compression algorithm is the prevailing standard that addresses this problem. It offers different levels of compression (and corresponding quality) through recommended quantization tables. These tables can be optimized for better image quality at the same level of compression, which poses a hard combinatorial optimization problem for which stochastic metaheuristics have proved efficient. In this paper we propose an adaptation of the recent guided fireworks algorithm, from the class of swarm intelligence algorithms, for quantization table optimization. We tested the proposed approach on standard benchmark images and compared the results with other approaches from the literature. Under various image similarity metrics, our approach proved more successful.
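To make the optimization setting concrete, here is a hedged sketch of stochastic quantization-table search: a simple perturb-and-accept loop standing in for the guided fireworks algorithm (which is not reproduced here), with the count of nonzero quantized coefficients as a crude proxy for coded size. All parameter values are illustrative assumptions.

```python
import numpy as np
from scipy.fft import dctn, idctn

def jpeg_roundtrip(img, table):
    """Quantize and dequantize every 8x8 block; return reconstruction and a size proxy."""
    rec = np.zeros_like(img, dtype=float)
    nonzeros = 0
    for y in range(0, img.shape[0], 8):
        for x in range(0, img.shape[1], 8):
            c = dctn(img[y:y+8, x:x+8] - 128.0, norm="ortho")
            q = np.round(c / table)
            nonzeros += np.count_nonzero(q)  # crude stand-in for coded size
            rec[y:y+8, x:x+8] = idctn(q * table, norm="ortho") + 128.0
    return rec, nonzeros

def fitness(img, table, target_nonzeros):
    """Penalize distortion while keeping the (proxy) compression level roughly fixed."""
    rec, nz = jpeg_roundtrip(img, table)
    return np.mean((img - rec) ** 2) + 0.01 * abs(nz - target_nonzeros)

rng = np.random.default_rng(0)
img = rng.integers(0, 256, (64, 64)).astype(float)
table = np.full((8, 8), 40.0)              # start from a flat table
_, target = jpeg_roundtrip(img, table)

best, best_fit = table, fitness(img, table, target)
for _ in range(200):
    # "Spark": perturb a few entries of the current best table, staying in [1, 255].
    cand = best.copy()
    idx = rng.integers(0, 8, size=(3, 2))
    cand[idx[:, 0], idx[:, 1]] = np.clip(
        cand[idx[:, 0], idx[:, 1]] + rng.normal(0, 8, 3), 1, 255)
    f = fitness(img, cand, target)
    if f < best_fit:
        best, best_fit = cand, f
print("best fitness:", round(best_fit, 2))
```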

32 citations

Book ChapterDOI
01 Oct 2010
TL;DR: An algorithm is proposed that can locate tampered region(s) in a losslessly compressed tampered image when its unchanged region is the output of a JPEG decompressor; results prove the effectiveness of the proposed method.
Abstract: With the availability of various digital image editing tools, seeing is no longer believing. In this paper, we focus on tampered region localization for image forensics. We propose an algorithm that can locate tampered region(s) in a losslessly compressed tampered image when its unchanged region is the output of a JPEG decompressor. We find that the tampered region and the unchanged region respond differently to JPEG compression: the tampered region exhibits stronger high-frequency quantization noise than the unchanged region. We employ PCA to separate quantization noise at different spatial frequencies (low, medium, and high) and extract the high-frequency quantization noise for tampered region localization. Post-processing is applied to obtain the final localization result. Experimental results prove the effectiveness of the proposed method.
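A rough illustration of the underlying idea: describe each block by its frequency content, project the block features with PCA, and flag blocks whose scores are outliers (consistent with stronger high-frequency quantization noise). This is a simplified stand-in, not the paper's pipeline; the DCT-magnitude features, synthetic image, and 2-sigma threshold are all assumptions.

```python
import numpy as np
from scipy.fft import dctn
from sklearn.decomposition import PCA  # assumed available

def blockwise_dct_features(img, block=8):
    """Collect per-block DCT magnitude vectors, which carry the quantization-noise signature."""
    feats, coords = [], []
    for y in range(0, img.shape[0] - block + 1, block):
        for x in range(0, img.shape[1] - block + 1, block):
            c = dctn(img[y:y+block, x:x+block] - 128.0, norm="ortho")
            feats.append(np.abs(c).flatten())
            coords.append((y, x))
    return np.array(feats), coords

# Synthetic example: a "tampered" patch with stronger high-frequency content.
rng = np.random.default_rng(1)
img = rng.normal(128, 2, (64, 64))
img[16:32, 16:32] += rng.normal(0, 12, (16, 16))  # stand-in for tampering noise

feats, coords = blockwise_dct_features(img)
# Project block features onto the first principal component; outlying scores flag candidates.
scores = PCA(n_components=1).fit_transform(feats).ravel()
outliers = np.where(np.abs(scores - scores.mean()) > 2 * scores.std())[0]
print("candidate tampered blocks:", [coords[i] for i in outliers])
```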

32 citations

Journal ArticleDOI
TL;DR: Experimental results show that the proposed method can improve the compression performance and can achieve a balance between the compression ratio and the image visual quality.
Abstract: An optimized medical image compression algorithm based on wavelet transform and improved vector quantization is introduced. The goal of the proposed method is to preserve the diagnostically relevant information of the medical image at a high compression ratio. A wavelet transform is first applied to the image. The lowest-frequency sub-band of wavelet coefficients is compressed losslessly; each high-frequency sub-band is compressed with an optimized vector quantization scheme using variable block sizes. In the novel vector quantization method, the local fractal dimension (LFD) is used to analyze the local complexity of each wavelet coefficient sub-band, and an optimal quadtree method then partitions each sub-band into sub-blocks of several sizes. A modified K-means approach based on an energy function is used in the codebook training phase, and vector quantization coding is finally applied to the different types of sub-blocks. To verify the effectiveness of the proposed algorithm, JPEG, JPEG2000, and a fractal coding approach were chosen as baselines. Experimental results show that the proposed method improves compression performance and achieves a balance between compression ratio and image visual quality.
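As background for the codebook-training step, the following is a plain K-means vector quantization sketch over 4x4 sub-band blocks. The paper's local-fractal-dimension analysis, quadtree partitioning, and energy-based K-means modification are not reproduced here; all sizes and names are illustrative assumptions.

```python
import numpy as np

def train_codebook(vectors, k=16, iters=20, seed=0):
    """Plain K-means codebook training on sub-band block vectors."""
    rng = np.random.default_rng(seed)
    codebook = vectors[rng.choice(len(vectors), k, replace=False)].copy()
    for _ in range(iters):
        # Assign each block vector to its nearest codeword (Euclidean distance).
        d = np.linalg.norm(vectors[:, None, :] - codebook[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Move each codeword to the centroid of its assigned vectors.
        for j in range(k):
            members = vectors[labels == j]
            if len(members):
                codebook[j] = members.mean(axis=0)
    return codebook, labels

# Usage: vector-quantize 4x4 blocks of a stand-in high-frequency wavelet sub-band.
rng = np.random.default_rng(2)
subband = rng.normal(0, 5, (32, 32))
blocks = subband.reshape(8, 4, 8, 4).swapaxes(1, 2).reshape(-1, 16)
codebook, labels = train_codebook(blocks, k=8)
reconstructed = codebook[labels]  # each block replaced by its nearest codeword
print("distortion (MSE):", round(float(np.mean((blocks - reconstructed) ** 2)), 3))
```

Only the codeword indices (and the codebook) need to be stored, which is where the compression gain comes from; the paper varies the block size per region to spend fewer bits on smooth areas.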

32 citations


Network Information
Related Topics (5)
Feature extraction
111.8K papers, 2.1M citations
84% related
Image segmentation
79.6K papers, 1.8M citations
84% related
Feature (computer vision)
128.2K papers, 1.7M citations
84% related
Image processing
229.9K papers, 3.5M citations
83% related
Robustness (computer science)
94.7K papers, 1.6M citations
81% related
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2022    8
2021    354
2020    283
2019    294
2018    259
2017    295