Topic

Quantization (image processing)

About: Quantization (image processing) is a research topic. Over the lifetime, 7977 publications have been published within this topic receiving 126632 citations.


Papers
01 Jan 2008
TL;DR: This paper attempts to implement basic JPEG compression using only basic MATLAB functions. JPEG, a still-frame compression standard based on the Discrete Cosine Transform, is adequate for most compression applications.
Abstract: Image compression is the application of data compression to digital images and can be lossy or lossless. This paper attempts to implement basic JPEG compression using only basic MATLAB functions. Lossy compression techniques are used here, in regions where the data loss does not affect image clarity. Image compression addresses the problem of reducing the amount of data required to represent a digital image: it removes redundancy, that is, duplicate data, and reduces the storage needed to hold an image. For this purpose JPEG is used. JPEG is a still-frame compression standard based on the Discrete Cosine Transform and is adequate for most compression applications. The discrete cosine transform (DCT) is a mathematical function that transforms digital image data from the spatial domain to the frequency domain.
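A rough sketch of the JPEG-style pipeline the abstract describes, in Python with NumPy/SciPy rather than MATLAB: one 8×8 block is level-shifted, DCT-transformed into the frequency domain, quantized with the standard luminance table, then dequantized and inverse-transformed. Entropy coding, chroma handling, and block tiling are omitted, and the block values are illustrative.

```python
# Sketch of JPEG-style coding of one 8x8 block (not the paper's MATLAB code).
import numpy as np
from scipy.fft import dctn, idctn

# Standard JPEG luminance quantization table (quality ~50).
Q = np.array([
    [16, 11, 10, 16,  24,  40,  51,  61],
    [12, 12, 14, 19,  26,  58,  60,  55],
    [14, 13, 16, 24,  40,  57,  69,  56],
    [14, 17, 22, 29,  51,  87,  80,  62],
    [18, 22, 37, 56,  68, 109, 103,  77],
    [24, 35, 55, 64,  81, 104, 113,  92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103,  99],
], dtype=float)

def compress_block(block):
    """Level-shift, DCT-transform and quantize an 8x8 block of pixel values."""
    coeffs = dctn(block - 128.0, norm='ortho')    # spatial -> frequency domain
    return np.round(coeffs / Q).astype(np.int32)  # lossy step: coarse quantization

def decompress_block(qcoeffs):
    """Dequantize, inverse-transform and undo the level shift."""
    return np.clip(idctn(qcoeffs * Q, norm='ortho') + 128.0, 0, 255)

block = np.random.randint(0, 256, (8, 8)).astype(float)
recon = decompress_block(compress_block(block))
print("max pixel error:", np.abs(block - recon).max())
```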

52 citations

Journal ArticleDOI
TL;DR: A blind watermark embedding/detection method that embeds watermarks into H.264's I pictures; the embedded watermarks survive H.264's compression attacks with good invisibility.
Abstract: We present a blind watermark embedding/detection algorithm to embed watermarks into H.264's I pictures. The embedded watermark can survive H.264's compression attacks with more than a 40:1 compression ratio in I pictures. One pair of predicted discrete cosine transform (DCT) coefficients within blocks of size 4×4 is used to embed 1 bit of watermark information. The embedding locations of the DCT coefficients are switched from lower subbands to higher subbands in a predefined order while the distortions between predicted values and original values are larger than a tolerable bound. With these methods, the embedded watermarks can survive H.264's compression attacks with good invisibility.
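The coefficient-pair idea can be illustrated with a small sketch (not the paper's exact algorithm): one bit is embedded per 4×4 block by enforcing an order relation, with a margin, between two DCT coefficients; detection simply compares the pair. The coefficient positions and margin below are assumed for illustration.

```python
# Illustrative coefficient-pair watermarking on a 4x4 block (assumed parameters).
import numpy as np
from scipy.fft import dctn, idctn

PAIR = ((1, 2), (2, 1))  # assumed mid-frequency coefficient positions
MARGIN = 2.0             # enforced gap: larger = more robust, more visible

def embed_bit(block, bit):
    c = dctn(np.asarray(block, dtype=float), norm='ortho')
    a, b = c[PAIR[0]], c[PAIR[1]]
    mean = (a + b) / 2.0
    if bit == 1 and a < b + MARGIN:     # bit 1: first coefficient must dominate
        c[PAIR[0]], c[PAIR[1]] = mean + MARGIN / 2, mean - MARGIN / 2
    elif bit == 0 and b < a + MARGIN:   # bit 0: second coefficient must dominate
        c[PAIR[1]], c[PAIR[0]] = mean + MARGIN / 2, mean - MARGIN / 2
    return idctn(c, norm='ortho')

def detect_bit(block):
    c = dctn(np.asarray(block, dtype=float), norm='ortho')
    return int(c[PAIR[0]] > c[PAIR[1]])

block = np.random.randint(0, 256, (4, 4))
print(detect_bit(embed_bit(block, 1)), detect_bit(embed_bit(block, 0)))  # 1 0
```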

52 citations

Proceedings ArticleDOI
E. Linzer, Ephraim Feig
14 Apr 1991
TL;DR: The authors present novel scaled discrete cosine transform (DCT) and inverse scaled DCT algorithms designed for a fused multiply/add architecture, and discuss in detail the most popular case used in image processing, 8×8 blocks.
Abstract: The authors present novel scaled discrete cosine transform (DCT) and inverse scaled DCT algorithms designed for a fused multiply/add architecture. Since the most popular case used in image processing involves 8×8 blocks (both the emerging JPEG and MPEG standards call for DCT coding on blocks of this size), the authors discuss this case in detail. The scaled DCT and inverse scaled DCT each use 416 operations, so that, combined with scaling or descaling, each uses 480 operations. For the inverse, the descaling can be combined with computation of the IDCT (inverse DCT). If multiplicative constants, which depend on the quantization matrix, can be computed offline, then the descaling and IDCT can be computed simultaneously with 417 operations.
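The underlying principle, folding the DCT's per-coefficient scale factors into constants precomputed offline from the quantization matrix, can be sketched as below. This is only an illustration under assumed matrices, not the 416/480-operation algorithm itself; SciPy's orthonormal DCT stands in for a true fast scaled transform.

```python
# Folding DCT scale factors into the quantization constants (illustrative only).
import numpy as np
from scipy.fft import dctn

N = 8
s = np.full(N, np.sqrt(2.0 / N))   # orthonormal DCT-II scale factors
s[0] = np.sqrt(1.0 / N)
S = np.outer(s, s)                 # 2-D per-coefficient scaling

Q = np.full((N, N), 16.0)          # placeholder quantization matrix
Q_scaled = Q / S                   # constants computed offline from Q

def quantize_via_scaled_dct(block):
    unscaled = dctn(block, norm='ortho') / S   # "scaled DCT" output (no normalization)
    return np.round(unscaled / Q_scaled)       # descaling folded into quantization

def quantize_plain(block):
    return np.round(dctn(block, norm='ortho') / Q)

block = np.random.rand(N, N) * 255.0
print(np.allclose(quantize_via_scaled_dct(block), quantize_plain(block)))  # True
```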

52 citations

Patent
01 Mar 1995
TL;DR: In this article, an image processing apparatus for extracting lines from an image using the Hough transform is presented, where a processor element is assigned to each quantization point in a Hough space and each processor element calculates intersections of the scanning line and the line corresponding to this processor element once per scanning line.
Abstract: An image processing apparatus for extracting lines from an image using the Hough transform. A processor element is assigned to each quantization point in the Hough space. Each processor element calculates, once per scanning line of the image, the intersection of that scanning line with the line corresponding to the processor element. Black pixels (Hough transform object points) on the scanning line are obtained sequentially. The coordinate values of the intersection are compared with those of the Hough transform object point, and when they agree, a vote is cast into the processor element's ballot box memory. The accumulated voting results constitute the Hough transform data. This makes it possible to implement a high-speed Hough transform and to reduce the size of the apparatus.
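For reference, the same voting idea (a ballot-box accumulator indexed by quantized (rho, theta) points, incremented once per object point) can be sketched as a conventional software Hough transform; the per-processor-element parallel hardware of the patent is not modeled, and the resolution parameters are arbitrary.

```python
# Software sketch of Hough-transform line voting (not the patent's architecture).
import numpy as np

def hough_lines(binary_image, n_theta=180, n_rho=200):
    h, w = binary_image.shape
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    rho_max = np.hypot(h, w)
    accumulator = np.zeros((n_rho, n_theta), dtype=np.int64)   # the "ballot box"
    ys, xs = np.nonzero(binary_image)                          # Hough transform object points
    for x, y in zip(xs, ys):
        rho = x * np.cos(thetas) + y * np.sin(thetas)          # rho at every quantized theta
        rho_idx = np.round((rho + rho_max) / (2 * rho_max) * (n_rho - 1)).astype(int)
        accumulator[rho_idx, np.arange(n_theta)] += 1          # vote
    return accumulator, thetas

img = np.zeros((64, 64), dtype=np.uint8)
img[np.arange(64), np.arange(64)] = 1                          # a diagonal line
acc, thetas = hough_lines(img)
print(np.unravel_index(acc.argmax(), acc.shape))               # accumulator peak = detected line
```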

52 citations

Patent
19 Dec 1996
TL;DR: In this article, a process for decoding MPEG encoded image data stored in a system memory utilizing a configurable image decoding apparatus is presented; it comprises extracting macroblock information from the MPEG encoded image data, the macroblocks containing image data and motion compensation data.
Abstract: A process for decoding MPEG encoded image data stored in a system memory utilizing a configurable image decoding apparatus. The process comprises the steps of: (a) extracting macroblock information from said MPEG encoded image data, the macroblocks containing image data and motion compensation data; (b) extracting a series of parameters from the MPEG encoded image data for decoding the MPEG encoded data; (c) determining quantization factors from the encoded image data; (d) configuring the configurable image decoding apparatus, including (i) configuring a means for parsing the macroblock data into motion vectors and image data with the series of parameters for decoding the encoded data, and (ii) configuring a means for performing inverse quantization with the quantization coefficients; (e) determining a decoding order of the extracted macroblock information to be decoded; (f) providing said extracted macroblock information to the parsing means in the decoding order; (g) combining decoded image data with motion vectors extracted by the parsing means; and (h) storing the combined data in the system memory.
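As an illustration of the inverse-quantization stage in steps (c)/(d)(ii), a simplified MPEG-1-style intra reconstruction (quantized level times quantizer scale and weighting matrix) is sketched below; sign handling, mismatch control, the separate DC rule, and integer truncation of the actual standard are omitted, and the matrices are placeholders.

```python
# Simplified sketch of MPEG-style inverse quantization (assumed flat weighting matrix).
import numpy as np

def inverse_quantize_intra(levels, weight_matrix, quantizer_scale):
    """levels: 8x8 quantized coefficients -> approximate reconstructed DCT coefficients."""
    return (2 * levels * quantizer_scale * weight_matrix) / 16.0  # truncation to int omitted

W = np.full((8, 8), 16)                      # placeholder weighting matrix
levels = np.random.randint(-10, 11, (8, 8))  # placeholder quantized coefficients
print(inverse_quantize_intra(levels, W, quantizer_scale=8))
```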

51 citations


Network Information
Related Topics (5)
Feature extraction
111.8K papers, 2.1M citations
84% related
Image segmentation
79.6K papers, 1.8M citations
84% related
Feature (computer vision)
128.2K papers, 1.7M citations
84% related
Image processing
229.9K papers, 3.5M citations
83% related
Robustness (computer science)
94.7K papers, 1.6M citations
81% related
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2022    8
2021    354
2020    283
2019    294
2018    259
2017    295