Topic

Quantization (image processing)

About: Quantization (image processing) is a research topic. Over the lifetime, 7977 publications have been published within this topic receiving 126632 citations.


Papers
Journal ArticleDOI
TL;DR: An optimized watermark extraction scheme using an adaptive receiver for quantization-based watermarking is presented, which improves watermark robustness against median filtering, image intensity Direct Current (DC) change, histogram equalization, color reduction and image intensity linear scaling.

Abstract: In this paper, the watermarking channel is modeled as a generalized channel with fading and nonzero-mean additive noise. In order to improve watermark robustness against this generalized channel, we present an optimized watermark extraction scheme that uses an adaptive receiver for quantization-based watermarking. In the proposed extraction scheme, we adaptively estimate the decision zone of the binary data bits and the quantization step size. A training sequence is embedded into the original image together with the informative watermark, and the estimation of the decision zone takes advantage of the response function of the training sequence. Compared with watermarking schemes without receiver adaptation, the main improvement is the enhanced robustness against median filtering, image intensity Direct Current (DC) change, histogram equalization, color reduction, image intensity linear scaling, and image intensity nonlinear scaling such as Gamma correction.
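The paper's adaptive estimation of the decision zone and quantization step cannot be reproduced from the abstract alone, but the quantization-based embedding it builds on can be illustrated with a generic dither-modulation (QIM) sketch. The following Python/NumPy code is a minimal illustration under assumed parameters (host coefficients, a fixed quantization step, mild additive noise); it is not the authors' adaptive receiver.

```python
import numpy as np

def qim_embed(coeffs, bits, step):
    """Dither-modulation QIM: quantize each coefficient onto the lattice
    selected by its data bit (bit 1 uses a lattice offset by step/2)."""
    coeffs = np.asarray(coeffs, dtype=float)
    offsets = np.where(np.asarray(bits) == 0, 0.0, step / 2.0)
    return np.round((coeffs - offsets) / step) * step + offsets

def qim_extract(received, step):
    """Minimum-distance decoding: pick the lattice closest to each sample."""
    received = np.asarray(received, dtype=float)
    d0 = np.abs(received - np.round(received / step) * step)
    d1 = np.abs(received - (np.round((received - step / 2.0) / step) * step + step / 2.0))
    return (d1 < d0).astype(int)

# Toy example: 8 bits embedded in 8 coefficients, then mild additive noise.
rng = np.random.default_rng(0)
host = rng.uniform(0.0, 255.0, size=8)
bits = rng.integers(0, 2, size=8)
marked = qim_embed(host, bits, step=12.0)
noisy = marked + rng.normal(0.0, 1.0, size=8)   # stands in for the channel distortion
print(np.array_equal(qim_extract(noisy, step=12.0), bits))  # True for small noise
```

An attack such as a DC change shifts every sample off its lattice by the same amount, which is why the paper estimates the decision zone and step size from an embedded training sequence rather than assuming they are unchanged at the receiver.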

33 citations

Journal ArticleDOI
TL;DR: This study shows that different wavelet implementations vary in their capacity to differentiate themselves from the old, established lossy JPEG, and corroborates other research showing that wavelet compression yields better quality at constant compressed file sizes than JPEG.

Abstract: This presentation focuses on the quantitative comparison of three lossy compression methods applied to a variety of 12-bit medical images. One Joint Photographic Experts Group (JPEG) and two wavelet algorithms were used on a population of 60 images. The medical images were obtained in Digital Imaging and Communications in Medicine (DICOM) file format and ranged in matrix size from 256 × 256 (magnetic resonance [MR]) to 2,560 × 2,048 (computed radiography [CR], digital radiography [DR], etc). The algorithms were applied to each image at multiple levels of compression such that comparable compressed file sizes were obtained at each level. Each compressed image was then decompressed, and quantitative analysis was performed to compare each compressed-then-decompressed image with its corresponding original image. The statistical measures computed were sum of absolute differences, sum of squared differences, and peak signal-to-noise ratio (PSNR). Our results verify other research studies which show that wavelet compression yields better compression quality at constant compressed file sizes compared with JPEG. The DICOM standard does not yet include wavelet as a recognized lossy compression standard. For implementers and users to adopt wavelet technology as part of their image management and communication installations, there have to be significant differences in quality and compressibility compared with JPEG to justify expensive software licenses and the introduction of proprietary elements in the standard. Our study shows that different wavelet implementations vary in their capacity to differentiate themselves from the old, established lossy JPEG.
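The three statistical measures named above are straightforward to compute per image pair. The following Python/NumPy sketch assumes 12-bit data (peak value 4095) and is only an illustration of the metrics, not the authors' evaluation code.

```python
import numpy as np

def quality_metrics(original, decompressed, max_val=4095):
    """Compare an original image with its compressed-then-decompressed version.
    Returns (sum of absolute differences, sum of squared differences, PSNR in dB).
    max_val = 4095 corresponds to 12-bit medical image data."""
    o = np.asarray(original, dtype=np.float64)
    d = np.asarray(decompressed, dtype=np.float64)
    diff = o - d
    sad = np.abs(diff).sum()
    ssd = np.square(diff).sum()
    mse = ssd / diff.size
    psnr = float("inf") if mse == 0 else 10.0 * np.log10(max_val ** 2 / mse)
    return sad, ssd, psnr
```

Holding the compressed file size fixed across codecs, as the study does, makes these per-image numbers directly comparable between the JPEG and wavelet reconstructions.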

33 citations

Patent
19 Dec 2002
TL;DR: In this article, the authors propose an encoder that encodes image data using predictive coding with motion detection, increasing motion detection precision while suppressing the increase in the amount of data processed during predictive coding.

Abstract: PROBLEM TO BE SOLVED: To provide an encoder which encodes image data according to predictive coding including motion detection, and which increases motion detection precision while suppressing an increase in the amount of data processed in the predictive coding. SOLUTION: The encoder includes: a 4×4 block divider 101 for dividing image data corresponding to a target frame to be processed into 4×4 pixel blocks; a motion compensation unit 111 for performing motion compensation on the image data in units of 4×4 pixel blocks, thereby generating predicted data; and an 8×8 pixel block configuration unit 103 for transforming the difference data between the image data of a 4×4 pixel block and the predicted data of that block into prediction error data of an 8×8 pixel block. The encoder then carries out a DCT process, a quantization process, and a variable-length coding process on the prediction error data in units of 8×8 pixel blocks. COPYRIGHT: (C)2003, JPO
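The transform-and-quantization stage that follows the 8×8 block assembly can be sketched as below. This is a generic Python/NumPy illustration, not the patented encoder: the orthonormal DCT matrix, the uniform quantization step q_step, and the function names are assumptions, and motion compensation and variable-length coding are omitted.

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix D, so that D @ block @ D.T is the 2-D DCT."""
    k = np.arange(n)
    d = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    d[0, :] /= np.sqrt(2.0)
    return d * np.sqrt(2.0 / n)

D = dct_matrix()

def encode_block(residual_8x8, q_step):
    """Transform an 8x8 prediction-error block and apply uniform quantization;
    the integer coefficients would then go to the variable-length coder."""
    coeffs = D @ residual_8x8 @ D.T
    return np.round(coeffs / q_step).astype(int)

def decode_block(q_coeffs, q_step):
    """Dequantize and inverse-transform to reconstruct the prediction error."""
    return D.T @ (q_coeffs * q_step) @ D

# Toy example: a random 8x8 residual block, coarsely quantized and reconstructed.
rng = np.random.default_rng(0)
residual = rng.normal(0.0, 10.0, size=(8, 8))
recon = decode_block(encode_block(residual, q_step=8.0), q_step=8.0)
```

This matches the split the abstract describes: motion compensation operates at the finer 4×4 granularity for precision, while the DCT, quantization, and variable-length coding are applied per 8×8 block.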

33 citations

Book
01 Jan 1997
TL;DR: A book covering image gathering, reconstruction and restoration, information-theoretical assessment, multiresolution decomposition, and electro-optical design, with appendices on sensitivity and spatial response, photodetector noise, insufficient sampling, quantization, and quantitative assessment of image quality.
Abstract: Preface. 1. Introduction. 2. Image Gathering and Reconstruction. 3. Image Gathering and Restoration. 4. Information-Theoretical Assessment. 5. Multiresolution Decomposition. 6. Multiresponse Image Gathering and Restoration. 7. Electro-Optical Design. A: Sensitivity and Spatial Response. B: Photodetector Noise. C: Insufficient Sampling. D: Quantization. E: Quantitative Assessment of Image Quality. Index.

33 citations

Book ChapterDOI
10 Nov 2010
TL;DR: A new method based on the firefly algorithm to construct the codebook of vector quantization is proposed; the reconstructed images have higher quality than those generated by the LBG and PSO-LBG algorithms and are not significantly different from those of the HBMO-LBG algorithm.

Abstract: Vector quantization (VQ) is a powerful technique in digital image compression. Traditional, widely used methods such as the Linde-Buzo-Gray (LBG) algorithm tend to produce locally optimal codebooks. This paper proposes a new method based on the firefly algorithm to construct the codebook of vector quantization. The proposed method uses the LBG method to initialize the firefly algorithm and is called the FF-LBG algorithm. The FF-LBG algorithm is compared with three other methods: the LBG, PSO-LBG and HBMO-LBG algorithms. Experimental results show that the proposed FF-LBG algorithm is computationally faster than the PSO-LBG and HBMO-LBG algorithms. Furthermore, the reconstructed images have higher quality than those generated by the LBG and PSO-LBG algorithms and are not significantly different from those of the HBMO-LBG algorithm.
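For context, the LBG (generalized Lloyd) training that FF-LBG uses for initialization can be sketched as follows. This is a minimal Python/NumPy illustration of plain LBG under assumed parameters (4×4 blocks flattened to 16-dimensional vectors, a 16-word codebook); the firefly-based search itself is not reproduced here.

```python
import numpy as np

def lbg_codebook(vectors, codebook_size, iters=20, seed=0):
    """Generalized Lloyd / LBG training: alternately assign each training vector
    to its nearest codeword and move each codeword to the centroid of its cell."""
    rng = np.random.default_rng(seed)
    vectors = np.asarray(vectors, dtype=float)
    codebook = vectors[rng.choice(len(vectors), size=codebook_size, replace=False)].copy()
    for _ in range(iters):
        # Nearest-codeword assignment under squared Euclidean distortion.
        dists = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
        labels = dists.argmin(axis=1)
        for j in range(codebook_size):
            members = vectors[labels == j]
            if len(members):
                codebook[j] = members.mean(axis=0)
    return codebook

# Toy example: a 16-word codebook for 1,000 random 4x4 blocks (16-dim vectors).
rng = np.random.default_rng(1)
blocks = rng.integers(0, 256, size=(1000, 16)).astype(float)
codebook = lbg_codebook(blocks, codebook_size=16)
```

Because each Lloyd iteration only reduces distortion locally, the result depends on the initial codewords; this is the local-optimum limitation that the firefly search in FF-LBG is meant to overcome.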

33 citations


Network Information
Related Topics (5)
Feature extraction: 111.8K papers, 2.1M citations, 84% related
Image segmentation: 79.6K papers, 1.8M citations, 84% related
Feature (computer vision): 128.2K papers, 1.7M citations, 84% related
Image processing: 229.9K papers, 3.5M citations, 83% related
Robustness (computer science): 94.7K papers, 1.6M citations, 81% related
Performance Metrics
No. of papers in the topic in previous years:
Year | Papers
2022 | 8
2021 | 354
2020 | 283
2019 | 294
2018 | 259
2017 | 295