
Quantization (image processing)

About: Quantization (image processing) is a research topic. Over its lifetime, 7,977 publications have been published within this topic, receiving 126,632 citations.


Papers
Proceedings Article
01 Dec 2012
TL;DR: A robust median filtering detection scheme based on an autoregressive model of the median filtered residual, which not only achieves much better performance than existing state-of-the-art methods but also has a very small feature dimension, i.e., 10-D.
Abstract: One important aspect of multimedia forensics is exposing an image's processing history. Median filtering is a popular noise removal and image enhancement tool, and it has also recently become an effective anti-forensics tool. An image is usually saved in a compressed format such as the JPEG format. The forensic detection of median filtering from a JPEG compressed image remains challenging, because typical filter characteristics are suppressed by JPEG quantization and blocking artifacts. In this paper, we introduce a robust median filtering detection scheme based on an autoregressive model of the median filtered residual. Median filtering is first applied to a test image, and the difference between the initial image and the filtered output image is called the median filtered residual (MFR). The MFR is used as the forensic fingerprint. Thus, the interference from image edges and texture, which is regarded as a limitation of the existing forensic methods, can be reduced. Because the overlapped window filtering introduces correlation among the pixels of the MFR, an autoregressive (AR) model of the MFR is calculated and the AR coefficients are used by a support vector machine (SVM) for classification. Experimental results show that the proposed median filtering detection method is very robust to JPEG post-compression with a quality factor as low as 30. It distinguishes well between median filtering and other manipulations, such as Gaussian filtering, average filtering, and rescaling, and performs well on low-resolution images of size 32 × 32. The proposed method not only achieves much better performance than existing state-of-the-art methods but also has a very small feature dimension, i.e., 10-D.
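As a rough illustration of the MFR-plus-AR feature idea described in the abstract (this is not the authors' exact implementation; the window size, the row-wise 1-D AR fit, and all function names below are simplifying assumptions), a minimal Python sketch:

    import numpy as np
    from scipy.ndimage import median_filter

    def median_filtered_residual(image, size=3):
        # MFR: difference between the input image and its median-filtered version.
        img = image.astype(np.float64)
        return img - median_filter(img, size=size)

    def ar_coefficients(signal, order=10):
        # Least-squares fit of an order-p autoregressive model to a 1-D signal;
        # the p coefficients serve as the (10-D) forensic feature vector.
        s = signal - signal.mean()
        # Regression matrix of lagged samples: column k holds lag k+1.
        X = np.column_stack([s[order - k - 1:len(s) - k - 1] for k in range(order)])
        y = s[order:]
        coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
        return coeffs

    # Usage sketch: compute a 10-D feature per image from the raveled MFR,
    # then train a classifier (e.g., sklearn.svm.SVC) on many such features.
    # feature = ar_coefficients(median_filtered_residual(img).ravel(), order=10)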

28 citations

Proceedings ArticleDOI
Paul G. Roetling
09 Jul 1976
TL;DR: In this paper, it was shown that the visual MTF can be interpreted in terms of perceptible levels as a function of spatial frequency and that the total information perceived by the eye is much less than 8 bits times the number of pixels.
Abstract: Sample spacing and quantization levels are usually chosen for digitizing images such that the eye should not see degradations due to either process. Sample spacing is chosen based on the resolution (or high-frequency) limit of the eye, and quantization is based on perception of low-contrast differences at lower frequencies. This process results in digitization at about 8 bit/pixel and 20 pixel/mm, but, because it is based on two different visual limits, the total number of bits overestimates the information perceived by the eye. The visual MTF can be interpreted in terms of perceptible levels as a function of spatial frequency. We show by this interpretation that the total information perceived by the eye is much less than 8 bits times the number of pixels. We consider the classic halftone as an image coding process, yielding 1 bit/pixel. This approach indicates that halftones approximate the proper distribution of levels as a function of spatial frequency; therefore we have a possible explanation of why halftone images retain most of the visual quality of the original.
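To make the "classic halftone as 1 bit/pixel coding" point concrete, a generic ordered-dither halftone is sketched below in Python; this is an illustration, not the paper's analysis, and the Bayer threshold matrix and function name are assumptions:

    import numpy as np

    # 4x4 Bayer matrix: thresholds spread over the intensity range so that fine
    # spatial structure carries the grey-level information the eye can still see.
    BAYER4 = (1.0 / 16.0) * np.array([[ 0,  8,  2, 10],
                                      [12,  4, 14,  6],
                                      [ 3, 11,  1,  9],
                                      [15,  7, 13,  5]])

    def ordered_halftone(gray):
        # Reduce an 8-bit grayscale image to 1 bit/pixel with ordered dithering.
        g = gray.astype(np.float64) / 255.0
        h, w = g.shape
        thresh = np.tile(BAYER4, (h // 4 + 1, w // 4 + 1))[:h, :w]
        return (g > thresh).astype(np.uint8)  # 0/1 image, i.e. 1 bit per pixel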

28 citations

Proceedings ArticleDOI
22 Oct 2003
TL;DR: This paper introduces an algorithm based on transform coding to compress the interval information of the cells in a dataset, thus eliminating the need to retrieve the original representation of the intervals at run-time.
Abstract: In this paper, we present a space-efficient algorithm for speeding up isosurface extraction. Even though there exist algorithms that can achieve optimal search performance in identifying isosurface cells, they prove impractical for large datasets due to a high storage overhead. With the dual goals of achieving fast isosurface extraction and simultaneously reducing the space requirement, we introduce an algorithm based on transform coding to compress the interval information of the cells in a dataset. Compression is achieved by first transforming the cell intervals (minima, maxima) into a form which allows more efficient compaction. This is followed by a dataset-optimized non-uniform quantization stage. The compressed data is stored in a data structure that allows fast searches in the compression domain, thus eliminating the need to retrieve the original representation of the intervals at run-time. The space requirement of our search data structure is the mandatory cost of storing every cell ID once, plus an overhead for quantization information. The overhead is typically on the order of a few hundredths of the dataset size.
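A minimal Python sketch of the general idea follows: transform the (min, max) cell intervals, quantize them with a dataset-derived non-uniform codebook, and answer isosurface queries directly in the compressed domain. This is not the authors' algorithm; the (min, span) transform, the quantile codebooks, and all names are illustrative assumptions:

    import numpy as np

    def quantile_codebook(values, bits=8):
        # Dataset-optimized non-uniform quantizer: 2**bits codewords placed
        # at the empirical quantiles of the training values.
        return np.quantile(values, np.linspace(0.0, 1.0, 2 ** bits))

    def encode_intervals(mins, maxs, bits=8):
        # Transform each cell interval (min, max) into (min, span) and quantize
        # both parts, rounding the minimum down and the span up so the decoded
        # interval always contains the original one.
        cb_min = quantile_codebook(mins, bits)
        idx_min = np.clip(np.searchsorted(cb_min, mins, side='right') - 1,
                          0, len(cb_min) - 1)
        spans = maxs - cb_min[idx_min]      # span measured from the decoded minimum
        cb_span = quantile_codebook(spans, bits)
        idx_span = np.clip(np.searchsorted(cb_span, spans, side='left'),
                           0, len(cb_span) - 1)
        return idx_min, idx_span, cb_min, cb_span

    def active_cells(isovalue, idx_min, idx_span, cb_min, cb_span):
        # Isosurface-cell query carried out on the compressed representation:
        # the original (min, max) pairs are never needed at run-time.
        lo = cb_min[idx_min]
        hi = lo + cb_span[idx_span]
        return np.nonzero((lo <= isovalue) & (isovalue <= hi))[0]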

28 citations

Patent
31 Aug 2009
TL;DR: An Image Signal Processing unit (ISP) is described that has at least one fixed-size line buffer smaller than the width of the image buffer; the image buffer is divided into regions that are loaded sequentially into the line buffer for processing.
Abstract: An Image Signal Processing unit (ISP) has at least one fixed-size line buffer which is smaller than the width of the image buffer. To handle the image data, the image buffer is divided into regions which are sequentially loaded into the at least one fixed-size line buffer of the ISP for processing. Since functions of the ISP operate on neighboring pixels of the target pixel, the margins of the regions need to be transmitted as well. After processing by the ISP, the data is encoded, which includes a DCT, quantization, and VLC. The result is then stored in segments in a buffer storage. The VLC stage also inserts a restart marker, which is used as a pointer to stitch all the segments together, thus producing a new and seamless image.
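A simplified Python sketch of the region-plus-margin idea follows; it is not the patented ISP pipeline, and the vertical-strip layout, parameter names, and 2-D array model are assumptions made for illustration:

    import numpy as np

    def split_into_regions(image, buffer_width, margin):
        # Divide an image wider than the fixed-size line buffer into vertical
        # strips, each padded with 'margin' extra columns on both sides so that
        # neighborhood operations have the pixels they need at strip borders.
        h, w = image.shape[:2]
        core = buffer_width - 2 * margin    # columns actually produced per strip
        regions = []
        for x0 in range(0, w, core):
            lo = max(x0 - margin, 0)
            hi = min(x0 + core + margin, w)
            regions.append((x0, min(x0 + core, w), image[:, lo:hi]))
        return regions                      # (start col, end col, padded strip)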

28 citations

Proceedings ArticleDOI
01 Jan 2003
TL;DR: This work proposes an effective watermarking scheme for embedding and extraction based on the JPEG2000 codec process that is robust to attacks such as blurring, edge enhancement, mosaicing, and more.
Abstract: We propose an effective watermarking scheme for embedding and extraction based on the JPEG2000 codec process. Our embedding algorithm applies the torus automorphism technique to break up the watermark, which is then embedded into the bitstreams after the JPEG2000 quantization step but prior to entropy coding. A distortion-reduction technique is used on the compressed image to lessen the image degradation caused by embedding. Our watermarking scheme is simple and easy to implement. Furthermore, it is robust to attacks such as blurring, edge enhancement, mosaicing, and more.
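For context on the scrambling step, a generic torus automorphism (a generalized Arnold cat map) applied to a square watermark might look like the Python sketch below; the paper's exact map parameters are not given here, so k, the iteration count, and the function name are illustrative:

    import numpy as np

    def torus_automorphism(watermark, k=1, iterations=1):
        # Scramble a square N x N watermark with the torus automorphism
        # (x, y) -> (x + y, k*x + (k + 1)*y) mod N. The map matrix has
        # determinant 1, so it is a bijection on the grid and invertible;
        # repeated application breaks up spatial structure before embedding.
        n = watermark.shape[0]
        assert watermark.shape == (n, n), "map is defined on a square grid"
        out = watermark.copy()
        xs, ys = np.meshgrid(np.arange(n), np.arange(n), indexing='ij')
        for _ in range(iterations):
            scrambled = np.empty_like(out)
            nx = (xs + ys) % n
            ny = (k * xs + (k + 1) * ys) % n
            scrambled[nx, ny] = out[xs, ys]
            out = scrambled
        return out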

28 citations


Network Information
Related Topics (5)
Feature extraction: 111.8K papers, 2.1M citations, 84% related
Image segmentation: 79.6K papers, 1.8M citations, 84% related
Feature (computer vision): 128.2K papers, 1.7M citations, 84% related
Image processing: 229.9K papers, 3.5M citations, 83% related
Robustness (computer science): 94.7K papers, 1.6M citations, 81% related
Performance Metrics
No. of papers in the topic in previous years:

Year | Papers
2022 | 8
2021 | 354
2020 | 283
2019 | 294
2018 | 259
2017 | 295