Topic

Quantization (image processing)

About: Quantization (image processing) is a research topic. Over the lifetime, 7977 publications have been published within this topic receiving 126632 citations.


Papers
Patent
26 Nov 2002
TL;DR: In this paper, a color character area discrimination unit divides an input color image into 16x16-pixel blocks and determines whether each block expresses color characters. In accordance with the determination result, a sub-sampling ratio switching unit switches the sampling ratio for the color components YCrCb composing the color image data so that the sampling ratio is Y:Cr:Cb=4:2:2 or Y:Cr:Cb=4:1:1.
Abstract: A color image processing apparatus which codes image data while suppressing image deterioration. A color character area discrimination unit divides an input color image into 16x16-pixel blocks, and determines whether or not each block expresses color characters. In accordance with the determination result, a sub-sampling ratio switching unit switches the sampling ratio for each of the color components YCrCb composing the color image data so that the sampling ratio is Y:Cr:Cb=4:2:2 or Y:Cr:Cb=4:1:1. Subsequently, sampling is performed in accordance with the switched sampling ratio, and DCT, linear quantization and entropy coding are then performed.
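
As an illustrative sketch of the two coding steps the abstract names, per-block switching of the chroma sub-sampling ratio and linear quantization of DCT coefficients, the Python below shows one plausible realization. All function names, the text-block flag, and the quantization table are hypothetical, not taken from the patent:

```python
import numpy as np

def subsample_chroma(y, cr, cb, is_text_block):
    """Switch the chroma sub-sampling ratio per block: 4:2:2 keeps more
    chroma for blocks flagged as color characters, 4:1:1 keeps less.
    (Hypothetical helper; horizontal-only subsampling is an assumption.)"""
    if is_text_block:
        cr_s, cb_s = cr[:, ::2], cb[:, ::2]   # 4:2:2 -> halve chroma horizontally
    else:
        cr_s, cb_s = cr[:, ::4], cb[:, ::4]   # 4:1:1 -> quarter chroma horizontally
    return y, cr_s, cb_s

def linear_quantize(dct_block, q_table):
    """Uniform (linear) quantization of a block of DCT coefficients."""
    return np.round(dct_block / q_table).astype(np.int32)
```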

35 citations

Proceedings ArticleDOI
26 Feb 2015
TL;DR: The implementation shows that the proposed modified fragile watermarking technique can be used, with promising results, as an effective alternative approach to recovering images from tampered areas.
Abstract: Fragile watermarking was devised for authentication and content integrity verification. This paper introduces a modified fragile watermarking technique for image recovery: it can both detect a tampered image and recover the tampered region. The modified approach resists attacks such as the birthday attack, collage attack, and quantization attacks. Using non-sequential, randomized block chaining created on the basis of a secret key, the technique achieves a high degree of recovery of tampered regions. Watermark information and recovery information for each image block are embedded into the block, and each block is linked with the next randomly generated block of the image. To obtain the first watermark image, the technique uses the original image and the watermarked image; to obtain the self-embedded image, a shuffled copy of the original image is merged onto the original image to produce the final shuffled image. Finally, the first watermark image is merged with the shuffled image to produce the final watermarked image. During recovery, the above process is reversed to obtain the original image from the tampered one; tampered blocks are recovered by comparing block-by-block mean values. The technique applies to color as well as grayscale images. The implementation shows that the proposed modified technique can serve, with promising results, as an effective alternative approach to recovering images from tampered areas.
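
The mean-based recovery step lends itself to a short sketch. The Python below compares block-by-block mean values and overwrites blocks flagged as tampered with the mean stored in the watermark; it omits the key-driven block chaining and shuffling, and the function names and tolerance are assumptions, not the paper's exact procedure:

```python
import numpy as np

def block_means(img, bs=8):
    """Mean intensity of each bs x bs block of a grayscale image whose
    sides are multiples of bs (the per-block recovery payload)."""
    h, w = img.shape
    return img.reshape(h // bs, bs, w // bs, bs).mean(axis=(1, 3))

def recover(img, stored_means, bs=8, tol=2.0):
    """Mark blocks whose current mean deviates from the embedded mean by
    more than tol, then fill them with the stored mean as a coarse recovery.
    (tol is a hypothetical tolerance parameter.)"""
    out = img.astype(float).copy()
    tampered = np.abs(block_means(img, bs) - stored_means) > tol
    for i, j in zip(*np.nonzero(tampered)):
        out[i * bs:(i + 1) * bs, j * bs:(j + 1) * bs] = stored_means[i, j]
    return out, tampered
```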

35 citations

Proceedings ArticleDOI
01 May 1994
TL;DR: In this article, gaze-contingent processing was implemented by adaptively varying image quality within each video field such that image quality was maximal in the region most likely to be viewed and was reduced in the periphery.
Abstract: Subjects rated the subjective image quality of video sequences that were processed using gaze-contingent techniques. Gaze-contingent processing was implemented by adaptively varying image quality within each video field such that image quality was maximal in the region most likely to be viewed and was reduced in the periphery. This was accomplished by blurring the image or by introducing quantization artifacts. Results showed that provision of a gaze-contingent, high-resolution region had a modest beneficial effect on perceived image quality, compared to having a high-resolution region that was not gaze-contingent. Given the modest benefits and high cost of implementation, we conclude that gaze-contingent processing is not suitable for general purpose image processing.
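
One plausible way to introduce such gaze-contingent quantization artifacts is to coarsen the intensity quantization step with distance from the gaze point. The sketch below is an assumption-laden illustration, not the authors' implementation; the linear falloff of the step size and all names are hypothetical:

```python
import numpy as np

def foveated_quantize(img, gaze_xy, q_near=1.0, q_far=32.0):
    """Quantize grayscale intensities more coarsely with distance from the
    gaze point, degrading the periphery while preserving the foveal region."""
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    dist = np.hypot(xx - gaze_xy[0], yy - gaze_xy[1])
    # Linear interpolation of the quantization step (an assumption).
    step = q_near + (q_far - q_near) * dist / dist.max()
    return np.clip(np.round(img / step) * step, 0, 255).astype(img.dtype)
```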

35 citations

Journal ArticleDOI
TL;DR: It is shown that JPEG-based PQ data hiding distorts linear dependencies of rows/columns of pixel values, and the proposed features can be exploited within a simple classifier for the steganalysis of PQ.
Abstract: Perturbed quantization (PQ) data hiding is almost undetectable with current steganalysis methods. We briefly describe PQ and propose singular value decomposition (SVD)-based features for the steganalysis of JPEG-based PQ data hiding in images. We show that JPEG-based PQ data hiding distorts linear dependencies of rows/columns of pixel values, and that the proposed features can be exploited within a simple classifier for the steganalysis of PQ. The proposed steganalyzer detects PQ embedding on relatively smooth stego images with 70% detection accuracy on average across different embedding rates.
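
The general idea behind SVD-based features can be sketched briefly: embedding that disturbs the linear dependencies among block rows and columns tends to perturb the small singular values. The block size, block tiling, and averaging below are assumptions rather than the paper's exact feature set; the resulting vector could then feed a simple classifier:

```python
import numpy as np

def svd_block_features(img, bs=8):
    """Average singular-value spectrum over non-overlapping bs x bs blocks
    of a grayscale image; embedding that breaks linear row/column
    dependencies tends to inflate the small singular values."""
    h, w = img.shape
    spectra = [np.linalg.svd(img[i:i + bs, j:j + bs].astype(float),
                             compute_uv=False)
               for i in range(0, h - bs + 1, bs)
               for j in range(0, w - bs + 1, bs)]
    return np.mean(spectra, axis=0)  # one bs-dimensional feature vector
```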

35 citations

Patent
21 Aug 2001
TL;DR: In this article, an apparatus and method for compressing and decompressing digital image files is described, where an input image file, which may be of bilevel, grayscale, or color file type, is subdivided into one or more sub-images.
Abstract: An apparatus and method for compressing and decompressing digital image files is described. At the image encoder/compressor, an input image file, which may be of bilevel, grayscale, or color file type, is subdivided into one or more sub-images. Each sub-image may be separately compressed. In the case of grayscale or color input images, the input image may be pre-processed using a threshold parameter to quantize the intensity and/or color vectors into a compressed palette of intensities and/or color vectors. A forward transform is performed on each pre-processed sub-image, based on a binary quincunx image pyramid and a set of pixel value prediction equations. The output of the forward transform comprises a set of prediction error signals and a coarse low-band signal. Each of the prediction error signals and the coarse low-band signal are run length and/or tree encoded, and all of the outputs of the run length and/or tree encoders are combined into a single array, which is encoded using a Huffman coding algorithm or an arithmetic coding algorithm. The output of the Huffman encoder or its equivalent is written to a compressed output file in a format conducive to image reconstruction at the image decompressor. The compressed output file is stored and/or transmitted to an image decoder/decompressor. At the image decoder/decompressor, the encoding/compression process is reversed to generate a decompressed image file, which may be stored and/or displayed as appropriate. Embodiments of the system may implement either lossless compression or lossy compression, where lossiness may be due to transmission of fewer pixels than those present in the original image and/or to quantization of the pixel intensities and/or color vectors into fewer intensity/color levels than those present in the original image.
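
The threshold-driven palette quantization mentioned as pre-processing can be sketched in a few lines (the quincunx transform and entropy-coding stages are omitted). The greedy level-merging rule below is an assumption for illustration, not the patent's exact procedure:

```python
import numpy as np

def quantize_palette(img, threshold):
    """Merge grayscale intensity levels closer than `threshold` into a
    reduced palette (greedy merge, an assumption), then map each pixel
    to its nearest palette entry."""
    levels = np.unique(img)              # sorted distinct intensities
    palette = [int(levels[0])]
    for v in levels[1:]:
        if v - palette[-1] > threshold:  # keep a level only if far enough
            palette.append(int(v))
    palette = np.asarray(palette)
    # Nearest-palette-entry lookup for every pixel.
    idx = np.abs(img[..., None].astype(int) - palette).argmin(axis=-1)
    return palette[idx].astype(img.dtype)
```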

35 citations


Network Information
Related Topics (5)
Feature extraction: 111.8K papers, 2.1M citations, 84% related
Image segmentation: 79.6K papers, 1.8M citations, 84% related
Feature (computer vision): 128.2K papers, 1.7M citations, 84% related
Image processing: 229.9K papers, 3.5M citations, 83% related
Robustness (computer science): 94.7K papers, 1.6M citations, 81% related
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2022    8
2021    354
2020    283
2019    294
2018    259
2017    295