Topic
Quantization (image processing)
About: Quantization (image processing) is a research topic. Over the lifetime, 7,977 publications have been published within this topic, receiving 126,632 citations.
Papers
TL;DR: Experimental results show that the proposed unsupervised algorithm for segmenting salient regions in color images achieves excellent segmentation performance with very efficient computation.
Abstract: In this paper, we propose a novel unsupervised algorithm for the segmentation of salient regions in color images. The algorithm has three phases. In the first phase, we use nonparametric density estimation to extract candidate dominant colors in an image, which are then used to quantize the image. The label map of the quantized image forms the initial regions of the segmentation. In the second phase, we define a salient region by the properties of being conspicuous, compact, and complete. Based on this definition, two new parameters are proposed: the "Importance index," which measures the importance of a region, and the "Merging likelihood," which measures the suitability of merging two regions. Initial regions are merged based on these two parameters. In the third phase, a similarity check further merges the surviving regions. Experimental results show that the proposed method achieves excellent segmentation performance for most of our test images. In addition, the computation is very efficient.
37 citations
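The quantization step described above can be sketched in a few lines of Python. This is a simplified stand-in for the paper's method: the nonparametric density estimation is replaced by a fixed set of dominant colors, and `dominant_color_quantize` is an illustrative name, not code from the paper.

```python
import numpy as np

def dominant_color_quantize(image, colors):
    """Map each pixel to its nearest dominant color (Euclidean in RGB).

    `image`: (H, W, 3) array; `colors`: (K, 3) array of dominant colors.
    Returns the quantized image and the label map used as initial regions.
    """
    pixels = image.reshape(-1, 3).astype(float)
    # squared distance from every pixel to every candidate dominant color
    d2 = ((pixels[:, None, :] - colors[None, :, :]) ** 2).sum(axis=2)
    labels = d2.argmin(axis=1)
    quantized = colors[labels].reshape(image.shape)
    return quantized, labels.reshape(image.shape[:2])

# toy example: a 2x2 image quantized against two dominant colors
img = np.array([[[250, 10, 10], [245, 5, 5]],
                [[10, 10, 250], [5, 5, 245]]], dtype=float)
cols = np.array([[255, 0, 0], [0, 0, 255]], dtype=float)
q, label_map = dominant_color_quantize(img, cols)
```

In the paper's pipeline, the connected components of `label_map` would seed the region-merging phases.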
TL;DR: A novel post-processing algorithm based on increasing inter-block correlation is presented to reduce blocking artifacts. It first smooths the three lowest-frequency discrete cosine transform coefficients between neighboring blocks to reduce blocking artifacts in flat regions, to which the human visual system is most sensitive.
Abstract: Block-based coding introduces an undesirable discontinuity between neighboring blocks in reconstructed images. This image degradation, referred to as blocking artifacts, arises mainly from the loss of inter-block correlation during quantization of the discrete cosine transform (DCT) coefficients. In many multimedia broadcasting applications, such as television, decoded video sequences suffer from blocking artifacts. In this paper, we present a novel post-processing algorithm, based on increasing inter-block correlation, that reduces blocking artifacts. We first smooth the three lowest-frequency DCT coefficients between neighboring blocks to reduce blocking artifacts in flat regions, to which the human visual system is most sensitive. We then group each edge block with its matched blocks and apply group-based filtering to increase the correlation between the grouped blocks. This suppresses blocking artifacts in edge regions while preserving details. In addition, the algorithm is extended to reduce flickering artifacts as well as blocking artifacts in video sequences. Experimental results show that the proposed method successfully alleviates blocking artifacts in both images and videos coded at low bit-rates.
37 citations
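The first stage, smoothing the three lowest-frequency DCT coefficients across a block boundary, can be illustrated with a minimal sketch. It assumes plain averaging of those coefficients rather than the paper's actual smoothing rule, and all function names are illustrative.

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix, so dct2(x) = M @ x @ M.T."""
    k = np.arange(n)
    M = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    M[0] /= np.sqrt(2.0)
    return M

def smooth_low_freq(block_a, block_b, M):
    """Average the three lowest-frequency DCT coefficients of two
    adjacent 8x8 blocks, then transform back to the pixel domain."""
    A, B = M @ block_a @ M.T, M @ block_b @ M.T
    for u, v in [(0, 0), (0, 1), (1, 0)]:           # DC and two lowest AC terms
        A[u, v] = B[u, v] = 0.5 * (A[u, v] + B[u, v])
    return M.T @ A @ M, M.T @ B @ M

# two flat neighboring blocks with a visible 10-level step between them
a = np.full((8, 8), 100.0)
b = np.full((8, 8), 110.0)
a2, b2 = smooth_low_freq(a, b, dct_matrix())
```

For these flat blocks only the DC coefficient is nonzero, so averaging pulls both blocks to the common level 105 and the boundary step disappears.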
24 Aug 2015
TL;DR: The proposed JPEG-compression-resistant adaptive steganography algorithm not only achieves a high rate of correctly extracted messages after JPEG compression (increasing from about 60% to nearly 100% compared with J-UNIWARD steganography at JPEG quality factor 75), but also shows strong resistance to detection.
Abstract: Current typical adaptive steganography algorithms cannot extract the embedded secret messages correctly after JPEG compression. To solve this problem, a JPEG-compression-resistant adaptive steganography algorithm is proposed. The embedding domain is determined by exploiting the relationships between DCT coefficients, and the modification magnitude of each DCT coefficient is set according to the quality factor of the JPEG compression. To ensure completely correct extraction of the embedded messages after JPEG compression, Reed-Solomon (RS) codes are used to encode the messages before embedding. In addition, a distortion value for each DCT coefficient is computed based on the energy function of PQe steganography and the distortion function of J-UNIWARD steganography, and syndrome-trellis codes (STCs) are then used to embed the encoded messages into the DCT coefficients with smaller distortion values. Experimental results under different JPEG quality factors and payloads demonstrate that the proposed algorithm not only achieves a high rate of correctly extracted messages after JPEG compression (increasing from about 60% to nearly 100% compared with J-UNIWARD steganography at JPEG quality factor 75), but also shows strong resistance to detection.
37 citations
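The core robustness idea, embedding so the payload survives requantization, can be illustrated with a toy scheme that is far simpler than the paper's method (which uses RS codes and STCs): one bit is encoded in the ordering of a coefficient pair, with a separation margin larger than the worst-case rounding error of the recompression step. Everything below is an illustrative assumption, not the paper's embedding rule.

```python
def embed_bit(c1, c2, bit, margin):
    """Encode one bit in the order of two coefficients, pushing them at
    least `margin` apart so the ordering survives requantization.
    Toy scheme for illustration only."""
    lo, hi = min(c1, c2), max(c1, c2)
    if hi - lo < margin:
        mid = 0.5 * (lo + hi)
        lo, hi = mid - 0.5 * margin, mid + 0.5 * margin
    return (hi, lo) if bit else (lo, hi)

def extract_bit(c1, c2):
    return 1 if c1 > c2 else 0

def requantize(c, step):
    """Model JPEG recompression as rounding onto the quantizer grid."""
    return round(c / step) * step

step = 8            # assumed quantization step of the recompression channel
margin = 3 * step   # separation larger than the worst-case rounding error
recovered = []
for bit in (0, 1):
    c1, c2 = embed_bit(5.0, 7.0, bit, margin)
    c1, c2 = requantize(c1, step), requantize(c2, step)
    recovered.append(extract_bit(c1, c2))
```

Since requantization moves each coefficient by at most half a step, any margin above one step preserves the ordering, so both bits come back intact.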
TL;DR: This paper extracts a perceptual representation of an original color image, a statistical signature obtained by modifying the general color signature, which consists of a set of points with statistical volumes, and presents a novel dissimilarity measure for statistical signatures, the Perceptually Modified Hausdorff Distance (PMHD), based on the Hausdorff distance.
Abstract: In most content-based image retrieval systems, color information is used extensively for its simplicity and generality. Owing to its compactness in characterizing global information, a uniform quantization of colors, i.e., a histogram, has been the most commonly used color descriptor. However, a cluster-based representation, or signature, has been proven to be more compact and theoretically sound than a histogram, increasing the discriminatory power and reducing the gap between human perception and computer-aided retrieval. Despite these advantages, only a few papers have addressed dissimilarity measures based on the cluster-based nonuniform quantization of colors. In this paper, we extract a perceptual representation of an original color image, a statistical signature obtained by modifying the general color signature, which consists of a set of points with statistical volumes. We also present a novel dissimilarity measure for statistical signatures, called the Perceptually Modified Hausdorff Distance (PMHD), which is based on the Hausdorff distance. As a result, the proposed retrieval system views an image as a statistical signature and uses the PMHD as the metric between statistical signatures. The precision-versus-recall results show that the proposed dissimilarity measure generally outperforms all other dissimilarity measures on an unmodified commercial image database.
37 citations
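A Hausdorff-style distance between two weighted color signatures can be sketched as follows. This is a simplified stand-in, assuming a weighted nearest-neighbor average in each direction; the paper's PMHD uses a different perceptual weighting, and all names here are illustrative.

```python
import numpy as np

def directed_distance(A, wA, B):
    """Weighted mean of each point's distance to its nearest neighbor in
    the other signature (a common 'modified Hausdorff' direction)."""
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)
    return float((wA * d.min(axis=1)).sum() / wA.sum())

def signature_distance(A, wA, B, wB):
    """Symmetrize with the max of the two directions, as the Hausdorff
    distance does. Simplified stand-in for the paper's PMHD."""
    return max(directed_distance(A, wA, B), directed_distance(B, wB, A))

# two-color signatures in RGB; weights play the role of cluster volumes
sig1 = np.array([[255.0, 0.0, 0.0], [0.0, 0.0, 255.0]])
w1 = np.array([0.7, 0.3])
sig2 = np.array([[250.0, 0.0, 0.0], [0.0, 0.0, 250.0]])
w2 = np.array([0.6, 0.4])
d_same = signature_distance(sig1, w1, sig1, w1)
d_diff = signature_distance(sig1, w1, sig2, w2)
```

A retrieval system in this style would rank database images by `signature_distance` against the query's signature.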