Topic

Quantization (image processing)

About: Quantization (image processing) is a research topic. Over its lifetime, 7,977 publications have been published within this topic, receiving 126,632 citations.


Papers
Proceedings ArticleDOI
04 Nov 2000
TL;DR: JPEG2000 is intended not only to provide rate-distortion and subjective image quality performance superior to the existing JPEG standard, but also to provide functionality that the current JPEG standard either cannot address efficiently or cannot address at all.
Abstract: This paper presents an overview of the upcoming JPEG2000 still picture compression standard. JPEG2000 is intended not only to provide rate-distortion and subjective image quality performance superior to the existing JPEG standard, but also to provide functionality that the current JPEG standard either cannot address efficiently or cannot address at all. Lossless and lossy compression, encoding of very large images, progressive transmission by pixel accuracy and by resolution, robustness to the presence of bit errors, and region-of-interest coding are some representative examples of its features.

39 citations
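
As a concrete illustration of the quantization step these features rest on, below is a minimal sketch of deadzone uniform scalar quantization, the style of scalar quantizer JPEG2000 Part 1 applies to wavelet subband coefficients. The function names, the fixed step size, and the mid-point reconstruction offset are illustrative choices, not taken from the paper.

```python
# Minimal sketch of deadzone uniform scalar quantization, the style of scalar
# quantizer JPEG2000 Part 1 applies to wavelet subband coefficients.
# Step-size signalling, per-subband handling, and entropy coding are omitted.
import numpy as np

def deadzone_quantize(coeffs: np.ndarray, step: float) -> np.ndarray:
    """Map coefficients to signed integer indices; magnitudes below `step`
    fall in the central deadzone and quantize to zero."""
    return np.sign(coeffs) * np.floor(np.abs(coeffs) / step)

def deadzone_dequantize(indices: np.ndarray, step: float, r: float = 0.5) -> np.ndarray:
    """Reconstruct each nonzero index a fraction `r` into its interval
    (r = 0.5 is the usual mid-point choice)."""
    return np.sign(indices) * (np.abs(indices) + r) * step * (indices != 0)

coeffs = np.array([-7.3, -0.4, 0.0, 2.1, 9.8])
q = deadzone_quantize(coeffs, step=2.0)    # indices -3, 0, 0, 1, 4 (-0.4 lands in the deadzone)
rec = deadzone_dequantize(q, step=2.0)     # roughly -7, 0, 0, 3, 9
```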

Journal ArticleDOI
TL;DR: Experimental results show that the newly built JND model can effectively enhance the robustness of the quantization watermarking scheme, and a new logarithmic quantization watermarking scheme is presented based on the proposed model to verify its feasibility and effectiveness.

39 citations
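
The paper's JND-driven embedding rule is not given in this summary, so as a rough, hedged illustration of the general idea of logarithmic quantization watermarking, here is a sketch of binary dithered quantization index modulation (QIM) applied in a log-compressed (mu-law style) domain. The compression law, step size, and function names are assumptions for illustration, not the paper's scheme.

```python
# Hedged sketch of logarithmic quantization index modulation (QIM): compress
# the host coefficient with a mu-law style map, embed one bit by dithered
# uniform quantization, then expand back. The JND-driven step-size selection
# of the paper is NOT reproduced; `step` stands in for whatever the JND model
# would supply, and coefficients are assumed normalized to [-1, 1].
import numpy as np

MU = 255.0

def _compress(x):
    return np.sign(x) * np.log1p(MU * np.abs(x)) / np.log1p(MU)

def _expand(y):
    return np.sign(y) * np.expm1(np.abs(y) * np.log1p(MU)) / MU

def embed_bit(x: float, bit: int, step: float) -> float:
    """Embed one watermark bit into coefficient x by quantizing the
    log-compressed value onto one of two interleaved lattices."""
    dither = (bit - 0.5) * step / 2.0
    y_q = step * np.round((_compress(x) - dither) / step) + dither
    return float(_expand(y_q))

def detect_bit(x_received: float, step: float) -> int:
    """Minimum-distance detection: pick the lattice closest to the
    received coefficient in the compressed domain."""
    y = _compress(x_received)
    def dist(bit):
        dither = (bit - 0.5) * step / 2.0
        return abs(y - (step * np.round((y - dither) / step) + dither))
    return int(dist(1) < dist(0))

x_marked = embed_bit(0.37, bit=1, step=0.05)
assert detect_bit(x_marked, step=0.05) == 1
```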

Journal ArticleDOI
TL;DR: This work proposes to jointly dequantize and contrast-enhance JPEG images captured in poor lighting conditions in a single graph-signal restoration framework, adopting accelerated proximal gradient (APG) algorithms in the transform domain with backtracking line search for further speedup.
Abstract: JPEG images captured in poor lighting conditions suffer from both low luminance contrast and coarse quantization artifacts due to lossy compression. Performing dequantization and contrast enhancement in separate back-to-back steps would amplify the residual compression artifacts, resulting in low visual quality. Leveraging recent developments in graph signal processing (GSP), we propose to jointly dequantize and contrast-enhance such images in a single graph-signal restoration framework. Specifically, we separate each observed pixel patch into illumination and reflectance via Retinex theory, where we define a generalized smoothness prior and a signed graph smoothness prior according to their respective unique signal characteristics. Given only a transform-coded image patch, we compute robust edge weights for each graph via low-pass filtering in the dual graph domain. We compute the illumination and reflectance components for each patch alternately, adopting accelerated proximal gradient (APG) algorithms in the transform domain, with backtracking line search for further speedup. Experimental results show that our generated images noticeably outperform state-of-the-art schemes in subjective quality evaluation.

39 citations
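
The accelerated proximal gradient iteration with backtracking that the abstract mentions follows the standard FISTA pattern; below is a hedged, generic sketch for minimizing f(x) + g(x) with a smooth f and a prox-friendly g. The paper's graph-smoothness priors and Retinex split are not reproduced; a least-squares data term and an l1 penalty stand in purely to make the loop concrete.

```python
# Hedged, generic sketch of accelerated proximal gradient (FISTA-style) with
# backtracking line search for min_x f(x) + g(x), f smooth and g prox-friendly.
# The paper's graph-smoothness priors and Retinex split are NOT reproduced;
# a least-squares data term and an l1 penalty stand in to make the loop concrete.
import numpy as np

def apg_backtracking(A, b, lam=0.1, L0=1.0, eta=2.0, iters=200):
    f = lambda x: 0.5 * np.sum((A @ x - b) ** 2)            # smooth data term
    grad_f = lambda x: A.T @ (A @ x - b)
    prox_g = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - lam * t, 0.0)  # soft-threshold

    x = np.zeros(A.shape[1]); y = x.copy(); t = 1.0; L = L0
    for _ in range(iters):
        grad_y = grad_f(y)
        while True:  # backtracking: grow L until the quadratic upper bound at y holds
            x_new = prox_g(y - grad_y / L, 1.0 / L)
            d = x_new - y
            if f(x_new) <= f(y) + grad_y @ d + 0.5 * L * (d @ d):
                break
            L *= eta
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0    # Nesterov momentum weight
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)
        x, t = x_new, t_new
    return x

# Toy usage: recover a sparse vector from a few random measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100); x_true[:5] = 3.0
x_hat = apg_backtracking(A, A @ x_true, lam=0.5, iters=300)
```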

Journal ArticleDOI
TL;DR: An encrypted digital holographic data reconstruction method with data compression is proposed, and it is shown that the number of quantization levels of the digital hologram can be reduced.
Abstract: This paper is a revision of a paper presented at the SPIE conference on Algorithms and Systems for Optical Processing V, Jul. 2001, San Diego, California. The paper presented there appears (unrefereed) in SPIE Proceedings Vol. 4471. An encrypted digital holographic data reconstruction method with data compression is proposed. We show that the number of quantization levels of the digital hologram can be reduced. By computer simulations, we confirm that the method is especially useful for binary images. For gray-scale images, we propose a bit plane decomposition method. By this method, we show that both high reconstructed image quality and a high compression ratio can be achieved.

39 citations
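
As a small illustration of the bit-plane decomposition proposed for gray-scale images, here is a sketch that splits an 8-bit image into binary planes and reassembles it. The encryption and holographic reconstruction steps of the paper are not shown, and the function names are illustrative.

```python
# Minimal sketch of bit-plane decomposition: split an 8-bit gray-scale image
# into binary planes and reassemble it. The encryption and holographic
# reconstruction steps of the paper are not shown.
import numpy as np

def to_bit_planes(img: np.ndarray, bits: int = 8) -> np.ndarray:
    """Return an array of shape (bits, H, W); plane k holds bit k (k = 0 is the LSB)."""
    return np.stack([(img >> k) & 1 for k in range(bits)]).astype(np.uint8)

def from_bit_planes(planes: np.ndarray) -> np.ndarray:
    """Reassemble the image by weighting each plane with 2**k."""
    return sum(planes[k].astype(np.uint16) << k for k in range(planes.shape[0])).astype(np.uint8)

img = np.arange(16, dtype=np.uint8).reshape(4, 4) * 16   # small synthetic test image
planes = to_bit_planes(img)
assert np.array_equal(from_bit_planes(planes), img)
```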

Proceedings ArticleDOI
29 Dec 2011
TL;DR: This paper analyzes the relevant characteristics of SIFT features, categorizes the image macroblocks into several groups, and proposes a novel rate-distortion model based on the SIFT feature matching score.
Abstract: For image compression applications where the information sink is not a person but a computer algorithm, the image encoder should control the encoding process in such a way that the important and relevant features of the image are preserved after compression. In this paper, our goal is to preserve the strongest SIFT features in JPEG-encoded images. We analyze the relevant characteristics of SIFT features and categorize the image macroblocks into several groups. We then propose a novel rate-distortion model based on the SIFT feature matching score. The dependency between the quantization table in the JPEG file and the common Lagrange multiplier is obtained from a training image database. For a given image quality, we exploit this relationship to perform R-D optimization for each group. Our results show that the proposed algorithm achieves better feature preservation than standard JPEG encoding. The proposed approach is fully standard compatible.

39 citations
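
The per-group R-D optimization described above amounts to choosing, for each macroblock group, the quantization setting that minimizes a Lagrangian cost J = D + λR. Below is a hedged sketch of that selection step with hypothetical distortion and rate curves; in the paper, D would come from the SIFT feature matching score and λ from the quantization-table relation trained on the image database.

```python
# Hedged sketch of the per-group Lagrangian rate-distortion choice described
# above: pick the quantization scaling that minimizes J = D + lambda * R.
# The distortion and rate curves below are hypothetical stand-ins; in the
# paper, D comes from the SIFT feature matching score and lambda from the
# quantization-table relation trained on an image database.
from typing import Callable, Sequence

def pick_q_scale(q_scales: Sequence[float],
                 distortion: Callable[[float], float],
                 rate: Callable[[float], float],
                 lam: float) -> float:
    """Return the quantization scale with the lowest Lagrangian cost D + lam * R."""
    return min(q_scales, key=lambda q: distortion(q) + lam * rate(q))

# Toy usage with made-up, monotone D(q) and R(q) curves.
q_best = pick_q_scale(
    q_scales=[0.5, 1.0, 2.0, 4.0],
    distortion=lambda q: q ** 2,   # hypothetical: distortion grows with coarser quantization
    rate=lambda q: 8.0 / q,        # hypothetical: rate shrinks with coarser quantization
    lam=0.25,
)
```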


Network Information
Related Topics (5)
Feature extraction: 111.8K papers, 2.1M citations, 84% related
Image segmentation: 79.6K papers, 1.8M citations, 84% related
Feature (computer vision): 128.2K papers, 1.7M citations, 84% related
Image processing: 229.9K papers, 3.5M citations, 83% related
Robustness (computer science): 94.7K papers, 1.6M citations, 81% related
Performance Metrics
No. of papers in the topic in previous years

Year    Papers
2022    8
2021    354
2020    283
2019    294
2018    259
2017    295