
Quantization (image processing)

About: Quantization (image processing) is a research topic. Over its lifetime, 7,977 publications have been published on this topic, receiving 126,632 citations.


Papers
Proceedings ArticleDOI
18 May 2008
TL;DR: An effective Markov process (MP) based JPEG steganalysis scheme, which utilizes both the intrablock and interblock correlations among JPEG coefficients, is presented.
Abstract: JPEG image steganalysis has attracted increasing attention recently. In this paper, we present an effective Markov process (MP) based JPEG steganalysis scheme, which utilizes both the intrablock and interblock correlations among JPEG coefficients. We compute a transition probability matrix for each difference JPEG 2-D array to utilize the intrablock correlation, and "averaged" transition probability matrices for the difference mode 2-D arrays to utilize the interblock correlation. All elements of these matrices are used as features for steganalysis. Experiments over an image database of 7,560 JPEG images demonstrate that this approach greatly improves JPEG steganalysis capability and outperforms prior art.

248 citations
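
As a rough illustration of the feature extraction described in the abstract above, the sketch below computes the transition probability matrix of a horizontal difference array of quantized DCT coefficients. The single direction, the threshold T = 4, and all names are illustrative assumptions, not the paper's code; the paper forms such matrices for several directions and for inter-block difference arrays as well, and concatenates their elements into the feature vector.

```python
import numpy as np

def transition_matrix(diff, T=4):
    """Empirical transition probability matrix of a difference array,
    with values clipped to [-T, T] before counting transitions."""
    d = np.clip(diff, -T, T)
    m = np.zeros((2 * T + 1, 2 * T + 1))
    src = d[:, :-1].ravel() + T   # current element, shifted to [0, 2T]
    dst = d[:, 1:].ravel() + T    # horizontally adjacent element
    np.add.at(m, (src, dst), 1)   # count transitions src -> dst
    rows = m.sum(axis=1, keepdims=True)
    return m / np.maximum(rows, 1)  # row-normalize into probabilities

# Hypothetical usage on absolute-valued quantized DCT coefficients:
coeffs = np.abs(np.random.randint(-20, 21, size=(512, 512)))
h_diff = coeffs[:, :-1] - coeffs[:, 1:]       # horizontal difference 2-D array
features = transition_matrix(h_diff).ravel()  # (2T+1)^2 = 81 features
```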

Patent
Navin Chaddha
23 Mar 2000
TL;DR: A multimedia compression system is presented that generates frame-rate-scalable data in the case of video and, more generally, universally scalable data, i.e., data scalable across all of its relevant characteristics.
Abstract: A multimedia compression system for generating frame-rate-scalable data in the case of video and, more generally, universally scalable data. Universally scalable data is scalable across all of the relevant characteristics of the data. In the case of video, these characteristics include frame rate, resolution, and quality. The scalable data generated by the compression system is comprised of multiple additive layers for each characteristic across which the data is scalable. In the case of video, the frame rate layers are additive temporal layers, the resolution layers are additive base and enhancement layers, and the quality layers are additive index planes of embedded codes. Various techniques can be used for generating each of these layers (e.g., Laplacian pyramid decomposition or wavelet decomposition for the resolution layers; tree-structured vector quantization or tree-structured scalar quantization for the quality layers). The compression system further provides for embedded inter-frame compression in the context of frame rate scalability, and non-redundant layered multicast network delivery of the scalable data.

245 citations
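
The additive resolution layers in the patent can be illustrated with a Laplacian pyramid decomposition, one of the techniques it names. The sketch below is a minimal version assuming grayscale input with dimensions divisible by 2**(levels - 1); the function names are mine. A real codec would quantize and entropy-code each layer; the point here is only that the layers are additive, so summing them (with upsampling) reconstructs the image.

```python
import numpy as np
from scipy import ndimage

def laplacian_layers(img, levels=3):
    """Base + enhancement resolution layers via a Laplacian pyramid."""
    layers = []
    current = img.astype(float)
    for _ in range(levels - 1):
        base = ndimage.zoom(current, 0.5, order=1)    # downsampled base
        predicted = ndimage.zoom(base, 2.0, order=1)  # prediction from base
        layers.append(current - predicted)            # additive enhancement layer
        current = base
    layers.append(current)                            # coarsest base layer
    return layers

def reconstruct(layers):
    """Sum the layers back up, upsampling as we go."""
    current = layers[-1]
    for enh in reversed(layers[:-1]):
        current = ndimage.zoom(current, 2.0, order=1) + enh
    return current
```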

Journal ArticleDOI
TL;DR: This work quantifies the number of Fourier coefficients that can be removed from the hologram domain, and the lowest level of quantization achievable, without incurring significant loss in correlation performance or significant error in the reconstructed object domain.
Abstract: We present the results of applying lossless and lossy data compression to a three-dimensional object reconstruction and recognition technique based on phase-shift digital holography. We find that the best lossless (Lempel-Ziv, Lempel-Ziv-Welch, Huffman, Burrows-Wheeler) compression rates can be expected when the digital hologram is stored in an intermediate coding of separate data streams for real and imaginary components. The lossy techniques are based on subsampling, quantization, and discrete Fourier transformation. For various degrees of speckle reduction, we quantify the number of Fourier coefficients that can be removed from the hologram domain, and the lowest level of quantization achievable, without incurring significant loss in correlation performance or significant error in the reconstructed object domain.

240 citations
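
The lossy stages the abstract describes (discrete Fourier transform, removal of coefficients, quantization) can be sketched as follows, assuming a complex-valued hologram field. The keep fraction and bit depth are illustrative parameters, not the paper's operating points, and the function is mine, not the authors' implementation.

```python
import numpy as np

def compress_hologram(field, keep_fraction=0.25, bits=4):
    """DFT -> discard small coefficients -> uniform quantization of the
    real and imaginary streams -> inverse DFT (a sketch, not the paper's
    exact pipeline)."""
    F = np.fft.fft2(field)
    # Remove all but the largest-magnitude Fourier coefficients.
    thresh = np.quantile(np.abs(F), 1 - keep_fraction)
    F[np.abs(F) < thresh] = 0.0
    # Uniformly quantize real and imaginary parts to 2**bits levels each.
    def quantize(x, levels=2 ** bits):
        lo, hi = x.min(), x.max()
        step = (hi - lo) / (levels - 1)
        if step == 0:
            return x
        return np.round((x - lo) / step) * step + lo
    Fq = quantize(F.real) + 1j * quantize(F.imag)
    return np.fft.ifft2(Fq)  # reconstructed complex object field

# Hypothetical usage on a simulated complex field:
field = np.exp(1j * 2 * np.pi * np.random.rand(256, 256))
rec = compress_hologram(field)
```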

Journal ArticleDOI
TL;DR: A one-stage supervised deep hashing framework (SDHP) is proposed to learn high-quality binary codes, and a deep convolutional neural network is implemented to enforce the learned codes to meet the following criteria.
Abstract: Image content analysis is an important surround perception modality of intelligent vehicles. In order to efficiently recognize the on-road environment from a large-scale scene database via image content analysis, relevant image retrieval becomes one of the fundamental problems. To improve the efficiency of calculating similarities between images, hashing techniques have received increasing attention. Most existing hash methods generate suboptimal binary codes, as the hand-crafted feature representation is not optimally compatible with the binary codes. In this paper, a one-stage supervised deep hashing framework (SDHP) is proposed to learn high-quality binary codes. A deep convolutional neural network is implemented, and we enforce the learned codes to meet the following criteria: 1) similar images should be encoded into similar binary codes, and vice versa; 2) the quantization loss from Euclidean space to Hamming space should be minimized; and 3) the learned codes should be evenly distributed. The method is further extended into SDHP+ to improve the discriminative power of the binary codes. Extensive experimental comparisons with state-of-the-art hashing algorithms are conducted on CIFAR-10 and NUS-WIDE: the MAP of SDHP reaches 87.67% and 77.48% with 48-bit codes, respectively, and the MAP of SDHP+ reaches 91.16% and 81.08% with 12-bit and 48-bit codes on CIFAR-10 and NUS-WIDE, respectively. These results show that the proposed method clearly improves search accuracy.

239 citations
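
The three criteria listed in the abstract correspond to loss terms that are common across the deep hashing literature. The PyTorch sketch below is one such formulation, not the paper's exact losses; the names and the absence of weighting terms are my simplifications.

```python
import torch

def hashing_losses(u, sim):
    """Three deep-hashing loss terms for a batch of real-valued network
    outputs u (shape [n, bits]) and a pairwise similarity matrix sim
    with entries in {0, 1}."""
    # 1) Similarity: inner products of relaxed codes should agree with
    #    the labels, mapped from {0, 1} to {-1, +1}.
    inner = u @ u.t() / u.size(1)
    sim_loss = ((inner - (2 * sim - 1)) ** 2).mean()
    # 2) Quantization: push relaxed codes toward the binary points {-1, +1},
    #    minimizing the Euclidean-to-Hamming quantization loss.
    quant_loss = ((u - u.sign()) ** 2).mean()
    # 3) Balance: each bit should be evenly distributed (mean near zero).
    balance_loss = (u.mean(dim=0) ** 2).mean()
    return sim_loss, quant_loss, balance_loss

# At inference time the binary codes are simply the signs: codes = u.sign()
```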


Network Information
Related Topics (5)

Topic                           Papers    Citations   Related
Feature extraction              111.8K    2.1M        84%
Image segmentation              79.6K     1.8M        84%
Feature (computer vision)       128.2K    1.7M        84%
Image processing                229.9K    3.5M        83%
Robustness (computer science)   94.7K     1.6M        81%
Performance Metrics

No. of papers in the topic in previous years:

Year    Papers
2022    8
2021    354
2020    283
2019    294
2018    259
2017    295