scispace - formally typeset

Quantization (image processing)

About: Quantization (image processing) is a research topic. Over the lifetime, 7977 publications have been published within this topic receiving 126632 citations.


Papers
Patent
17 Jul 2002
TL;DR: In this article, a quantization table is generated that specifies frequency bands to be filtered, the DCT coefficients are digitized using the quantization table, and it is preferred that the coefficients be ordered in a zig-zag sequence to facilitate run-length encoding.
Abstract: The method for adjusting quality during image capture includes computing a discrete cosine transform of a digital image to create DCT coefficients. A quantization table is generated that specifies frequency bands to be filtered and the DCT coefficients are digitized using the quantization table. It is preferred that the DCT coefficients be ordered in a zig-zag sequence to facilitate run-length encoding.
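The pipeline described in this abstract (2-D DCT of a block, per-band quantization against a table, zig-zag ordering for run-length encoding) can be sketched as follows. The flat quantization table and the uniform test block are illustrative placeholders, not the patent's tables:

```python
import numpy as np

def dct_matrix(n=8):
    # Orthonormal DCT-II basis matrix: C @ block @ C.T yields the 2-D DCT.
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0] *= 1 / np.sqrt(2)
    return C * np.sqrt(2 / n)

def zigzag_indices(n=8):
    # Order (row, col) pairs along anti-diagonals, alternating direction,
    # so low-frequency coefficients come first for run-length encoding.
    return sorted(((r, c) for r in range(n) for c in range(n)),
                  key=lambda rc: (rc[0] + rc[1],
                                  rc[0] if (rc[0] + rc[1]) % 2 else rc[1]))

def quantize_block(block, qtable):
    C = dct_matrix(block.shape[0])
    coeffs = C @ block @ C.T                  # 2-D DCT of the block
    quant = np.round(coeffs / qtable)         # per-band quantization
    return [quant[r, c] for r, c in zigzag_indices(block.shape[0])]

# Hypothetical flat table; real JPEG tables quantize high-frequency
# bands more coarsely, which is how specific bands get filtered.
qtable = np.full((8, 8), 16.0)
block = np.full((8, 8), 128.0)               # uniform block: only DC survives
zz = quantize_block(block, qtable)
```

For the uniform block, only the first (DC) entry of the zig-zag sequence is nonzero, which is exactly the long-zero-run structure that run-length encoding exploits.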

30 citations

Patent
04 Sep 2008
TL;DR: An image processing apparatus for generating a motion compensation image is described in this paper. It calculates the relative position between each pixel of the motion compensation image and a reference pixel of a reference image on the basis of motion vector information, quantizes that relative-position information, and computes the output pixel values from the quantized positions and the reference pixels.
Abstract: An image processing apparatus for generating a motion compensation image. The image processing apparatus includes: a pixel-relative-position calculation section calculating a relative position between a pixel position of a pixel constituting the motion compensation image and a pixel position of a reference pixel of a reference image to be used for calculating a pixel value of the pixel on the basis of motion vector information; a relative-position quantization section performing quantization processing of relative-position information calculated by the pixel-relative-position calculation section and generating quantized-relative-position information; and a motion-compensation-image generation section generating a motion compensation image by calculating a pixel value of a constituent pixel of the motion compensation image on the basis of the quantized-relative-position information and a pixel value of the reference pixel.

30 citations

Proceedings ArticleDOI
30 Oct 2000
TL;DR: This work proposes a fast face detection algorithm that works directly in the compressed DCT domain and analyzes both the color and the texture information contained in the DCT parameters, and can therefore generate more reliable detection results.
Abstract: We propose a fast face detection algorithm that works directly in the compressed DCT domain. Unlike previous DCT-domain processing designs that are mainly based on skin-color detection, our algorithm analyzes both the color and the texture information contained in the DCT parameters, and can therefore generate more reliable detection results. Our texture analysis is mainly based on statistical model training and detection. A number of fundamental problems, e.g., block quantization, preprocessing in the DCT domain, and feature vector selection and classification in the DCT domain, are discussed.
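One reason DCT-domain color analysis is fast, as the abstract suggests, is that the DC coefficient of an orthonormal 8x8 DCT equals 8 times the block mean, so block-average chrominance is available without any inverse transform. A sketch of a coarse skin-color test built on that fact (the Cb/Cr thresholds are illustrative values from the general skin-detection literature, not this paper's model):

```python
import numpy as np

def skin_mask_from_dc(dc_cb, dc_cr, n=8):
    """Coarse per-block skin-color mask computed directly from the DC
    coefficients of the chrominance (Cb, Cr) DCT blocks. Sketch only:
    the thresholds below are a hypothetical skin-tone box, not the
    statistical model trained in the paper."""
    cb = dc_cb / n          # DC of orthonormal 8x8 DCT = 8 * block mean
    cr = dc_cr / n
    # Hypothetical Cb/Cr skin box; verify against data before use.
    return (cb >= 77) & (cb <= 127) & (cr >= 133) & (cr <= 173)
```

Blocks passing this cheap color gate would then be handed to the texture classifier, which the paper builds from statistical model training on DCT features.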

30 citations

Proceedings ArticleDOI
23 Oct 2009
TL;DR: The study shows that the proposed method to detect resized and spliced JPEG images, which are widely used in image forgery, is highly effective; detection performance is related to both image complexity and resize scale factor.
Abstract: Today's ubiquitous digital media are easily tampered with by, e.g., removing objects from or adding objects into images without leaving any obvious clues. JPEG is the most widely used standard for digital images, and it can be easily doctored. It is therefore necessary to have reliable methods to detect forgery in JPEG images for applications in law enforcement, forensics, etc. In this paper, based on the correlation of neighboring Discrete Cosine Transform (DCT) coefficients, we propose a method to detect resized JPEG images and spliced images, which are widely used in image forgery. In detail, the neighboring joint density features of the DCT coefficients are extracted; then Support Vector Machines (SVM) are applied to the features for detection. To improve the evaluation of JPEG resize detection, we utilize the shape parameter of the generalized Gaussian distribution (GGD) of DCT coefficients to measure image complexity. The study shows that our method is highly effective in detecting resizing and splicing forgery in JPEG images. In the detection of resized JPEG images, the performance is related to both image complexity and resize scale factor. At the same scale factor, the detection performance on high-complexity images is, as can be expected, lower than that on low-complexity images.
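The "neighboring joint density" feature family the abstract describes can be sketched as a normalized 2-D histogram over pairs of horizontally adjacent quantized DCT coefficients, flattened into a vector for the SVM. The clipping threshold and the horizontal-pairs-only choice here are simplifying assumptions; the paper's exact feature definition may differ:

```python
import numpy as np

def neighboring_joint_density(coeffs, t=3):
    """Joint density of horizontally adjacent DCT coefficient pairs
    (sketch). Coefficients are clipped to [-t, t]; the normalized 2-D
    histogram of (c[i, j], c[i, j+1]) pairs is flattened into a
    (2t+1)^2-dimensional feature vector, which would then be fed to
    an SVM classifier as in the paper."""
    c = np.clip(coeffs, -t, t).astype(int)
    left = c[:, :-1].ravel() + t        # shift values into index range
    right = c[:, 1:].ravel() + t
    hist = np.zeros((2 * t + 1, 2 * t + 1))
    np.add.at(hist, (left, right), 1.0)  # unbuffered accumulation
    return (hist / hist.sum()).ravel()
```

Resizing or splicing disturbs the correlation between neighboring coefficients, so forged and authentic images land in different regions of this feature space.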

30 citations

Posted Content
TL;DR: This work explores a network-binarization approach for SR tasks without sacrificing much reconstruction accuracy, and shows that binarized SR networks achieve qualitative and quantitative results comparable to their real-weight counterparts.
Abstract: Deep convolutional neural networks (DCNNs) have recently demonstrated high-quality results in single-image super-resolution (SR). DCNNs often suffer from over-parametrization and large amounts of redundancy, which result in inefficient inference and high memory usage, preventing massive deployment on mobile devices. As a way to significantly reduce model size and computation time, binarized neural networks have so far only been shown to excel on semantic-level tasks such as image classification and recognition. However, little network-quantization effort has been spent on image enhancement tasks like SR, as network quantization is usually assumed to sacrifice pixel-level accuracy. In this work, we explore a network-binarization approach for SR tasks without sacrificing much reconstruction accuracy. To achieve this, we binarize the convolutional filters in only the residual blocks, and adopt a learnable weight for each binary filter. We evaluate this idea on several state-of-the-art DCNN-based architectures, and show that binarized SR networks achieve qualitative and quantitative results comparable to their real-weight counterparts. Moreover, the proposed binarization strategy can help reduce model size by 80% when applied to SRResNet, and could potentially speed up inference by 5 times.
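The core operation in the abstract above — replacing each full-precision convolutional filter with its signs plus one scalar weight — can be sketched as follows. The paper learns the per-filter scale end-to-end; initializing it to the mean absolute weight (the XNOR-Net convention) is a sketch-level assumption, not the paper's training recipe:

```python
import numpy as np

def binarize_filter(w):
    """Binarize one convolutional filter: keep only sign(w) (1 bit per
    weight) plus a single full-precision scale per filter. Sketch:
    the scale alpha is initialized to mean |w| here, whereas the paper
    treats it as a learnable parameter."""
    alpha = np.abs(w).mean()          # per-filter scale
    b = np.sign(w)
    b[b == 0] = 1.0                   # convention: treat sign(0) as +1
    return alpha, b

w = np.array([[0.5, -0.25], [0.75, -1.0]])
alpha, b = binarize_filter(w)
approx = alpha * b                    # reconstruction used at inference
```

Storing 1-bit signs plus one float per filter instead of 32-bit weights is where the roughly 80% model-size reduction reported for SRResNet comes from.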

30 citations


Network Information
Related Topics (5)
Feature extraction
111.8K papers, 2.1M citations
84% related
Image segmentation
79.6K papers, 1.8M citations
84% related
Feature (computer vision)
128.2K papers, 1.7M citations
84% related
Image processing
229.9K papers, 3.5M citations
83% related
Robustness (computer science)
94.7K papers, 1.6M citations
81% related
Performance
Metrics
No. of papers in the topic in previous years
Year	Papers
2022	8
2021	354
2020	283
2019	294
2018	259
2017	295