Topic

Quantization (image processing)

About: Quantization (image processing) is a research topic. Over the lifetime, 7,977 publications have been published within this topic, receiving 126,632 citations.


Papers
Journal ArticleDOI
02 Oct 2006
TL;DR: In this article, blind classifiers are constructed for detecting steganography in JPEG images and assigning stego images to six popular JPEG embedding algorithms, using 23 calibrated DCT features calculated from the luminance component.
Abstract: The goal of forensic steganalysis is to detect the presence of embedded data and to eventually extract the secret message. A necessary step towards extracting the data is determining the steganographic algorithm used to embed it. In this paper, we construct blind classifiers capable of detecting steganography in JPEG images and assigning stego images to six popular JPEG embedding algorithms. The classifiers are support vector machines that use 23 calibrated DCT features calculated from the luminance component.

29 citations
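To make the calibration idea concrete, the sketch below computes one simple calibrated feature: a blockwise-DCT coefficient histogram of the image minus the same histogram of a copy cropped by 4 pixels. This is only a minimal illustration of the calibration principle behind the paper's 23 features; the function names, the histogram bin range, and the use of NumPy/SciPy are assumptions, not details taken from the paper.

```python
# Minimal sketch of a calibrated DCT feature (illustrative, not the
# paper's exact feature set).  Assumes `lum` is a 2-D grayscale
# luminance array, e.g. loaded from a decompressed JPEG.
import numpy as np
from scipy.fftpack import dct

def blockwise_dct(lum):
    """2-D DCT of every non-overlapping 8x8 block."""
    h, w = lum.shape
    h, w = h - h % 8, w - w % 8                      # trim to multiples of 8
    blocks = lum[:h, :w].reshape(h // 8, 8, w // 8, 8).transpose(0, 2, 1, 3)
    return dct(dct(blocks.astype(float), axis=-1, norm='ortho'),
               axis=-2, norm='ortho')

def dct_histogram(coeffs):
    """Normalized histogram of rounded AC coefficients (one crude feature)."""
    ac = coeffs.reshape(-1, 64)[:, 1:]               # index 0 is the DC term
    hist, _ = np.histogram(np.round(ac), bins=np.arange(-8.5, 9.5))
    return hist / max(hist.sum(), 1)

def calibrated_feature(lum):
    """Feature of the image minus feature of a 4-pixel-cropped copy.

    Cropping desynchronizes the 8x8 JPEG grid, so the cropped image
    approximates the original cover source; the difference isolates
    embedding artifacts.  (In the full method the cropped image is also
    recompressed with the same quantization table before comparison.)
    """
    return dct_histogram(blockwise_dct(lum)) - dct_histogram(blockwise_dct(lum[4:, 4:]))
```

A vector of such differences, one per feature type, is what a support vector machine would then be trained on to separate the six embedding algorithms.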

Journal ArticleDOI
TL;DR: The experimental results show that, compared with existing schemes, the proposed watermarking scheme achieves higher performance, including better invisibility, stronger robustness, and shorter execution time.
Abstract: To realize effective copyright protection of color images, a blind color image watermarking scheme with high performance in the spatial domain is proposed in this paper, combining the advantages of spatial-domain and frequency-domain watermarking schemes. The presented scheme does not require a real discrete cosine transform (DCT) or discrete Hartley transform (DHT); instead, it uses different quantization steps to complete the embedding and blind extraction of the color watermark in the spatial domain, exploiting the unique characteristics of the direct current (DC) components of the DCT and DHT. The contributions of this paper include the following: (1) the scheme combines the strengths of spatial-domain and frequency-domain watermarking, giving it fast speed and strong robustness; (2) the scheme makes full use of the energy-aggregation characteristics of image blocks, so the invisibility of the watermark is greatly improved; and (3) different quantization steps are chosen to embed and extract the watermark in different layers, which effectively reduces the range of pixel-value modification. The experimental results show that, compared with existing schemes, the proposed watermarking scheme achieves higher performance, including better invisibility, stronger robustness, and shorter execution time.

29 citations
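The quantization mechanism the abstract describes can be illustrated with quantization index modulation (QIM) on the block mean: because the DC coefficient of an orthonormal 8x8 DCT is exactly 8 times the block mean, quantizing the mean in the spatial domain quantizes the DC component without computing any transform. The step size and function names below are illustrative assumptions; the paper's actual scheme chooses different quantization steps per layer.

```python
# Minimal sketch of DC-component quantization watermarking via the
# block mean (quantization index modulation).  DELTA is an assumed
# step size: larger values are more robust but less invisible.
import numpy as np

DELTA = 12.0

def embed_bit(block, bit, delta=DELTA):
    """Embed one bit by snapping the block mean onto a bit-dependent lattice."""
    mean = block.mean()
    # lattice for bit 0: multiples of delta; for bit 1: offset by delta/2
    target = delta * np.round((mean - bit * delta / 2) / delta) + bit * delta / 2
    # shifting every pixel equally moves only the DC component
    # (clipping near 0/255 is ignored in this sketch)
    return np.clip(block + (target - mean), 0, 255)

def extract_bit(block, delta=DELTA):
    """Blind extraction: decide which lattice the received mean is closer to."""
    mean = block.mean()
    d0 = abs(mean - delta * np.round(mean / delta))
    d1 = abs(mean - (delta * np.round((mean - delta / 2) / delta) + delta / 2))
    return 0 if d0 <= d1 else 1
```

Extraction needs only the step size, not the original image, which is what makes the scheme blind.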

Journal ArticleDOI
01 Sep 2017
TL;DR: The results demonstrate that the proposed secret key generation scheme can establish shared secret keys between transceivers even when their measurement sequences contain many discrepancies.
Abstract: For key generation between wireless transceivers, key generation leveraging channel reciprocity is a promising alternative to public key cryptography. Several existing schemes have validated its feasibility in real environments. However, in some scenarios, the channel measurements collected by the involved transceivers are highly correlated but not identical, i.e., their measurement sequences contain many discrepancies, which makes it difficult to extract a shared key from these measurements. In this paper, we propose a scheme to achieve secret key generation from wireless channels. In the proposed scheme, to reduce these discrepancies and achieve efficient key generation, the involved transceivers separately apply a compressor based on the discrete wavelet transform (DWT) to pre-process their measurements. Then, multi-level quantization is applied to the output of the DWT-based compressor. An encoding scheme based on Gray code is employed to establish the bit sequence and to further reduce the resulting bit mismatch rate, so that efficient information reconciliation can be implemented. The shared key between the transceivers is then derived after information reconciliation. Finally, 2-universal hash functions are used to guarantee the randomness of the shared secret key. Several experiments in real environments were conducted to validate the proposed scheme. The results demonstrate that it can generate shared secret keys between transceivers even when their measurement sequences contain many discrepancies.

29 citations
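The multi-level quantization and Gray-coding stage can be sketched as follows: each transceiver sorts its (DWT-compressed) measurements into 2^n equiprobable levels using quantile thresholds, then emits the Gray code of each level so that a one-level disagreement between the two sides costs only a single bit flip. The quantile-based thresholds and all names below are assumptions for illustration; the DWT pre-processing, reconciliation, and hashing stages are omitted.

```python
# Minimal sketch of multi-level quantization with Gray coding for
# channel-based key generation.  Quantile thresholds (an assumption)
# make every quantization level equally likely.
import numpy as np

def gray_code(level):
    """Map an integer level to its Gray code: adjacent levels differ by 1 bit."""
    return level ^ (level >> 1)

def quantize_to_bits(measurements, bits_per_sample=2):
    levels = 2 ** bits_per_sample
    edges = np.quantile(measurements, np.linspace(0, 1, levels + 1)[1:-1])
    idx = np.searchsorted(edges, measurements)       # level index per sample
    bits = []
    for level in idx:
        g = gray_code(int(level))
        bits.extend((g >> b) & 1 for b in reversed(range(bits_per_sample)))
    return bits

# Two noisy observations of the same channel trace mostly agree:
rng = np.random.default_rng(0)
trace = rng.normal(size=256)
alice = quantize_to_bits(trace + rng.normal(scale=0.01, size=256))
bob = quantize_to_bits(trace + rng.normal(scale=0.01, size=256))
mismatch = sum(a != b for a, b in zip(alice, bob)) / len(alice)
print(f"bit mismatch rate: {mismatch:.3f}")          # small, thanks to Gray coding
```

The residual mismatches are exactly what the information-reconciliation step then corrects before hashing.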

Proceedings ArticleDOI
11 Jul 2011
TL;DR: The combination of the two techniques, named the improved-DWT-DCT compression technique, is shown to yield better performance than DCT-based JPEG in terms of PSNR.
Abstract: In this paper, a hybrid technique using the discrete cosine transform (DCT) and the discrete wavelet transform (DWT) is presented. We show evaluation and comparison results for DCT, DWT, and hybrid DWT-DCT compression techniques. Using the Peak Signal-to-Noise Ratio (PSNR) as a measure of quality, we show that DWT with a two-threshold method, named "improved-DWT", provides better image quality than DCT and than DWT with a one-threshold method. Finally, we show that the combination of the two techniques, named the improved-DWT-DCT compression technique, yields better performance than DCT-based JPEG in terms of PSNR.

29 citations
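For reference, PSNR and a basic one-threshold DWT compressor look as follows; the paper's improved-DWT applies two thresholds instead of one. The use of PyWavelets, the Haar wavelet, and the threshold value are assumptions for illustration.

```python
# Minimal sketch of PSNR and one-threshold DWT compression (the
# baseline that the paper's two-threshold "improved-DWT" refines).
import numpy as np
import pywt  # PyWavelets (assumed dependency)

def psnr(original, reconstructed, peak=255.0):
    """Peak Signal-to-Noise Ratio in dB; higher means better quality."""
    mse = np.mean((original.astype(float) - reconstructed.astype(float)) ** 2)
    return float('inf') if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def dwt_compress(img, threshold=10.0, wavelet='haar', level=2):
    """Zero out small detail coefficients, then reconstruct the image."""
    coeffs = pywt.wavedec2(img.astype(float), wavelet, level=level)
    kept = [coeffs[0]] + [
        tuple(pywt.threshold(c, threshold, mode='hard') for c in detail)
        for detail in coeffs[1:]
    ]
    return pywt.waverec2(kept, wavelet)
```

Sweeping the threshold trades compressed size against the PSNR of the reconstruction, which is how such methods are compared against DCT-based JPEG.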

Proceedings ArticleDOI
14 Jun 2020
TL;DR: An effective method named Probability Weighted Compact Feature Learning (PWCF) is proposed, which provides inter-domain correlation guidance to promote cross-domain retrieval accuracy and learns a series of compact binary codes to improve retrieval speed.
Abstract: Domain-adaptive image retrieval includes single-domain retrieval and cross-domain retrieval. Most existing image retrieval methods focus only on single-domain retrieval, which assumes that the distributions of the retrieval database and the queries are similar. In practical applications, however, the discrepancies between retrieval databases, often captured under ideal illumination/pose/background/camera conditions, and queries, usually obtained under uncontrolled conditions, are very large. In this paper, considering the practical application, we focus on the challenging cross-domain retrieval setting. To address the problem, we propose an effective method named Probability Weighted Compact Feature Learning (PWCF), which provides inter-domain correlation guidance to promote cross-domain retrieval accuracy and learns a series of compact binary codes to improve retrieval speed. First, we derive our loss function through Maximum A Posteriori (MAP) estimation: a Bayesian-perspective (BP) induced focal-triplet loss, a BP-induced quantization loss, and a BP-induced classification loss. Second, we propose a common manifold structure between domains to explore the potential correlation across domains. Because the original feature representation is biased by the inter-domain discrepancy, this manifold structure is difficult to construct; we therefore propose a new feature, the Histogram Feature of Neighbors (HFON), from a sample-statistics perspective. Extensive experiments on various benchmark databases validate that our method outperforms many state-of-the-art image retrieval methods for domain-adaptive image retrieval. The source code is available at https://github.com/fuxianghuang1/PWCF.

29 citations
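Of the three BP-induced losses, the quantization loss is the easiest to illustrate in isolation: it pulls each continuous embedding toward its nearest binary code, so that sign-binarization at retrieval time loses little information. The sketch below shows only this generic mechanism in PyTorch (an assumed framework); PWCF's actual probability-weighted formulation is given in the paper and repository.

```python
# Minimal sketch of a quantization loss for learning compact binary
# codes (generic mechanism, not PWCF's exact BP-induced loss).
import torch

def quantization_loss(features):
    """Mean squared gap between embeddings and their {-1,+1} codes."""
    binary = torch.sign(features.detach())   # target codes, no gradient
    return ((features - binary) ** 2).mean()

# Toy usage: 16-bit codes for a batch of 4 embeddings.
feats = torch.randn(4, 16, requires_grad=True)
loss = quantization_loss(feats)
loss.backward()                              # gradients pull feats toward +/-1
```

In training, such a term is added to the retrieval losses (here, the focal-triplet and classification losses), so the learned features stay both discriminative and nearly binary.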


Network Information
Related Topics (5)
Feature extraction: 111.8K papers, 2.1M citations, 84% related
Image segmentation: 79.6K papers, 1.8M citations, 84% related
Feature (computer vision): 128.2K papers, 1.7M citations, 84% related
Image processing: 229.9K papers, 3.5M citations, 83% related
Robustness (computer science): 94.7K papers, 1.6M citations, 81% related
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2022    8
2021    354
2020    283
2019    294
2018    259
2017    295