Topic

Quantization (image processing)

About: Quantization (image processing) is a research topic. Over the lifetime, 7,977 publications have been published within this topic, receiving 126,632 citations.


Papers
Journal ArticleDOI
TL;DR: A new adaptive page segmentation method is proposed to extract text blocks from the cover images of various color technical journals, speeding up processing and reducing computational complexity on true-color images.

46 citations

Journal ArticleDOI
TL;DR: In this paper, the authors proposed a general framework for selecting quantizers in each spatial and spectral region of an image to achieve a desired target rate while minimizing distortion, and showed that the resulting rate controller delivers accurate output rates and strong rate-distortion performance, remaining highly competitive with state-of-the-art transform coding.
Abstract: Predictive coding is attractive for compression on board spacecraft due to its low computational complexity, modest memory requirements, and the ability to accurately control quality on a pixel-by-pixel basis. Traditionally, predictive compression focused on the lossless and near-lossless modes of operation, where the maximum error can be bounded but the rate of the compressed image is variable. Rate control is considered a challenging problem for predictive encoders due to the dependencies between quantization and prediction in the feedback loop and the lack of a signal representation that packs the signal's energy into few coefficients. In this paper, we show that it is possible to design a rate control scheme intended for onboard implementation. In particular, we propose a general framework to select quantizers in each spatial and spectral region of an image to achieve the desired target rate while minimizing distortion. The rate control algorithm allows achieving lossy and near-lossless compression, as well as any in-between type of compression, e.g., lossy compression with a near-lossless constraint. While the framework is independent of the specific predictor used, to demonstrate its performance we tailor it to the predictor adopted by the CCSDS-123 lossless compression standard, obtaining an extension that allows performing lossless, near-lossless, and lossy compression in a single package. We show that the rate controller achieves excellent accuracy in the output rate and excellent rate-distortion characteristics, and is extremely competitive with state-of-the-art transform coding.

46 citations
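The feedback loop the abstract refers to is compact enough to sketch. The Python fragment below is a minimal illustration rather than the paper's method: it uses a trivial previous-pixel predictor (the real scheme uses the CCSDS-123 predictor and selects the quantizer step per region to hit a target rate), and the `near_lossless_encode` helper is an assumed name. It shows why quantization and prediction are coupled: the predictor must run on reconstructed samples, and a uniform step of 2m+1 on the residuals caps every pixel's error at m.

```python
import numpy as np

def near_lossless_encode(row, m):
    """Quantize prediction residuals in a closed loop so that the
    per-pixel reconstruction error never exceeds m (near-lossless).
    A previous-pixel predictor stands in for the CCSDS-123 one."""
    step = 2 * m + 1                       # uniform quantizer step
    indices = np.empty(len(row), dtype=np.int64)
    recon = np.empty_like(row)
    pred = 0                               # predictor state
    for i, x in enumerate(row):
        resid = int(x) - pred
        q = round(resid / step)            # quantizer index to transmit
        indices[i] = q
        recon[i] = pred + q * step         # what the decoder will see
        pred = int(recon[i])               # predict from *reconstructed* data,
                                           # keeping encoder and decoder in sync
    return indices, recon

row = np.array([100, 102, 101, 107, 110, 108], dtype=np.int64)
idx, rec = near_lossless_encode(row, m=2)
assert np.max(np.abs(row - rec)) <= 2      # error bound holds; m=0 is lossless
```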

Journal ArticleDOI
TL;DR: A novel approach to secret image sharing based on a (k,n)-threshold scheme with the additional capability of share data reduction is proposed; it is suitable for certain application environments, such as mobile or handheld devices, where only a small amount of network traffic and storage space is allowed.
Abstract: A novel approach to secret image sharing based on a (k,n)-threshold scheme with the additional capability of share data reduction is proposed. A secret image is first transformed into the frequency domain using the discrete cosine transform (DCT), which is applied in most compression schemes. All the DCT coefficients except the first 10 lower-frequency ones are then discarded, and the values of the 2nd through 10th coefficients are disarranged in such a way that they cannot be recovered without the first coefficient and that their inverse DCT cannot reveal the details of the original image. Finally, the first coefficient is encoded into a number of shares for a group of secret-sharing participants, and the remaining nine manipulated coefficients are made accessible to the public. The overall effect of this scheme is effective secret sharing with good reduction of share data. The scheme is thus suitable for certain application environments, such as mobile or handheld devices, where only a small amount of network traffic for share transmission and a small amount of space for data storage are allowed. Good experimental results proving the feasibility of the proposed approach are also included.

46 citations
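The (k,n)-threshold step can be made concrete. Below is a minimal Python sketch assuming Shamir's polynomial secret sharing over a prime field for the first (DC) coefficient; the paper's exact share construction and the DCT front end are omitted, and `PRIME`, `make_shares`, and `recover` are illustrative names.

```python
import random

PRIME = 2**31 - 1   # prime field for the arithmetic (any prime > the secret works)

def make_shares(secret, k, n):
    """Split `secret` into n shares such that any k of them reconstruct it."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
    poly = lambda x: sum(c * pow(x, e, PRIME) for e, c in enumerate(coeffs)) % PRIME
    return [(x, poly(x)) for x in range(1, n + 1)]

def recover(shares):
    """Lagrange-interpolate the degree-(k-1) polynomial at x = 0."""
    secret = 0
    for xi, yi in shares:
        num, den = 1, 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret

dc = 1530                                # e.g. the quantized first DCT coefficient
shares = make_shares(dc, k=3, n=5)
assert recover(shares[:3]) == dc         # any 3 of the 5 shares suffice
assert recover(shares[2:]) == dc
```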

Journal ArticleDOI
TL;DR: A novel blind color image watermarking scheme based on the Contourlet transform and Hessenberg decomposition is proposed to protect the digital copyright of color images, offering higher imperceptibility and robustness against most common image attacks in comparison with other related methods.
Abstract: In this paper, a novel blind color image watermarking scheme based on the Contourlet transform and Hessenberg decomposition is proposed to protect the digital copyright of color images. First, each color channel of the host image is transformed by the Contourlet transform, and its low-frequency sub-band is divided into 4 × 4 non-overlapping coefficient blocks. Second, a coefficient block selected by an MD5-based hash pseudo-random algorithm is decomposed by Hessenberg decomposition. Third, the watermark information, permuted by the Arnold transform, is embedded into the biggest-energy element of the upper Hessenberg matrix by a quantization technique. In the extraction process, the quantization strength is used to blindly extract the watermark information from the attacked host image without the help of the original image. The results show that the proposed scheme has higher imperceptibility and robustness against most common image attacks in comparison with other related methods.

46 citations
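The embedding step pairs a matrix decomposition with quantization of a single entry. Here is a minimal Python sketch of just the Hessenberg-plus-quantization core, assuming scipy and a QIM-style lattice; the Contourlet transform, Arnold permutation, and MD5-based block selection from the paper are omitted, and `DELTA`, `embed_bit`, and `extract_bit` are illustrative names, not the paper's implementation.

```python
import numpy as np
from scipy.linalg import hessenberg

DELTA = 20.0   # quantization strength (assumed value; the paper tunes it)

def embed_bit(block, bit):
    """Embed one watermark bit by quantizing the largest-magnitude
    entry of the block's upper Hessenberg matrix (QIM-style)."""
    H, Q = hessenberg(block, calc_q=True)            # block = Q @ H @ Q.T
    i, j = np.unravel_index(np.argmax(np.abs(H)), H.shape)
    # snap the entry onto the lattice DELTA * (2k + bit)
    H[i, j] = 2 * DELTA * np.round((H[i, j] - bit * DELTA) / (2 * DELTA)) + bit * DELTA
    return Q @ H @ Q.T                               # rebuild the watermarked block

def extract_bit(block):
    """Blind extraction: re-derive the Hessenberg form and read the
    bit back from the quantized entry (no original image needed)."""
    H = hessenberg(block)
    i, j = np.unravel_index(np.argmax(np.abs(H)), H.shape)
    return int(np.round(H[i, j] / DELTA)) % 2

rng = np.random.default_rng(1)
blk = rng.uniform(0.0, 50.0, (4, 4))
blk[0, 0] = 200.0            # a dominant entry keeps the argmax stable
marked = embed_bit(blk, 1)
assert extract_bit(marked) == 1
```

Perturbations smaller than DELTA/2 on the quantized entry still decode to the same bit, which is the source of the robustness the abstract claims.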

Proceedings ArticleDOI
16 Sep 2011
TL;DR: A spatial-division design shows a speedup of 72x in the four-GPU implementation of the PPVQ compression scheme, which consists of linear prediction, bit-depth partitioning, vector quantization, and entropy coding.
Abstract: For ultraspectral sounder data, which feature thousands of channels at each observation location, lossless compression is desirable to save storage space and transmission time without losing precision in the retrieval of geophysical parameters. Predictive partitioned vector quantization (PPVQ) has proven to be an effective lossless compression scheme for ultraspectral sounder data. It consists of linear prediction, bit-depth partitioning, vector quantization, and entropy coding. In our previous work, the two most time-consuming stages, linear prediction and vector quantization, were identified for GPU implementation. For GIFTS data, using a spectral-division strategy for sharing the compression workload among four GPUs, a speedup of ~42x was achieved. To further enhance the speedup, this work explores a spatial-division strategy for sharing the workload in processing the six parts of a GIFTS datacube. As a result, the total processing time of a GIFTS datacube on four GPUs can be less than 13 seconds, which is equivalent to a speedup of ~72x. The use of multiple GPUs for PPVQ compression is thus promising as a low-cost and effective compression solution for ultraspectral sounder data for rebroadcast use.

46 citations
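The two stages the authors offload to GPUs are easy to caricature in NumPy. The sketch below is an illustration rather than the PPVQ implementation: it uses an order-1 spectral predictor and a brute-force nearest-centroid search, and `linear_predict`/`vq_assign` are assumed names. In the real scheme the residuals are bit-depth partitioned and the VQ output is entropy coded together with the quantization errors, so the pipeline remains lossless; the exhaustive distance computation in the second stage is what maps naturally onto GPUs.

```python
import numpy as np

def linear_predict(cube):
    """Stage 1: predict each spectral channel from the previous one
    (a toy order-1 predictor; PPVQ fits higher-order predictors)."""
    resid = cube.astype(np.int64)
    resid[1:] -= cube[:-1]                 # residual = channel - previous channel
    return resid

def vq_assign(vectors, codebook):
    """Stage 2: nearest-centroid vector quantization via brute-force
    squared-distance search, the GPU-friendly hot loop."""
    d2 = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    return np.argmin(d2, axis=1)           # codeword index per vector

# toy "datacube": 16 channels of 8x8 pixels
rng = np.random.default_rng(0)
cube = rng.integers(0, 1024, (16, 8, 8))
resid = linear_predict(cube)
vectors = resid.reshape(16, -1).T.astype(np.float64)   # one vector per pixel
codebook = vectors[rng.choice(len(vectors), 4, replace=False)]
indices = vq_assign(vectors, codebook)
print(np.bincount(indices, minlength=4))   # codeword usage histogram
```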


Network Information
Related Topics (5)
Feature extraction: 111.8K papers, 2.1M citations, 84% related
Image segmentation: 79.6K papers, 1.8M citations, 84% related
Feature (computer vision): 128.2K papers, 1.7M citations, 84% related
Image processing: 229.9K papers, 3.5M citations, 83% related
Robustness (computer science): 94.7K papers, 1.6M citations, 81% related
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2022    8
2021    354
2020    283
2019    294
2018    259
2017    295