Topic

Quantization (image processing)

About: Quantization (image processing) is a research topic. Over the lifetime, 7,977 publications have been published within this topic, receiving 126,632 citations.


Papers
Proceedings Article
01 Jan 2002
TL;DR: The experimental results show that there are a number of parameters that control the effectiveness of ROI coding, the most important being the size and number of regions of interest, code-block size, and target bit rate.
Abstract: This paper details work undertaken on the application of JPEG 2000, the recent ISO/ITU-T image compression standard based on wavelet technology, to region of interest (ROI) coding. The paper briefly outlines the JPEG 2000 encoding algorithm and explains how the packet structure of the JPEG 2000 bit-stream enables an encoded image to be decoded in a variety of ways dependent upon the application. The three methods by which ROI coding can be achieved in JPEG 2000 (tiling; coefficient scaling; and codeblock selection) are then outlined and their relative performance empirically investigated. The experimental results show that there are a number of parameters that control the effectiveness of ROI coding, the most important being the size and number of regions of interest, code-block size, and target bit rate. Finally, some initial results are presented on the application of ROI coding to face images.

74 citations
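
The coefficient-scaling method mentioned in the entry above amounts to promoting the wavelet coefficients inside the region of interest to higher bit-planes, so that they are retained before background coefficients when the bit-stream is truncated at a target rate. The sketch below is a minimal illustration of that idea in NumPy, not the paper's implementation; the block size, mask, and shift value are hypothetical.

```python
import numpy as np

def roi_scale(coeffs, roi_mask, shift):
    """Left-shift quantized wavelet coefficients inside the ROI so they occupy
    higher bit-planes than background coefficients and survive more truncation."""
    scaled = coeffs.astype(np.int64)
    scaled[roi_mask] <<= shift            # promote ROI coefficients by `shift` bit-planes
    return scaled

def roi_unscale(coeffs, roi_mask, shift):
    """Decoder side: shift ROI coefficients back down to their original range."""
    restored = coeffs.copy()
    restored[roi_mask] >>= shift
    return restored

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    c = rng.integers(0, 256, size=(8, 8))          # toy quantized coefficient block
    mask = np.zeros((8, 8), dtype=bool)
    mask[2:6, 2:6] = True                          # hypothetical region of interest
    enc = roi_scale(c, mask, shift=4)
    assert np.array_equal(roi_unscale(enc, mask, shift=4), c)
```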

Journal ArticleDOI
TL;DR: DCT and DWT compression techniques are analyzed and implemented using TinyOS on the TelosB hardware platform; experimental results show that the overall performance of DWT is better than that of DCT, while DCT provides a better compression ratio than DWT.

74 citations
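
The DCT-versus-DWT trade-off summarized above largely comes down to how strongly each transform concentrates a block's energy into a few coefficients versus its computational cost. The toy sketch below (not the paper's TinyOS/TelosB code, and assuming SciPy is available) simply counts how many DCT and one-level Haar DWT coefficients are needed to retain 99% of the energy of a small test block.

```python
import numpy as np
from scipy.fft import dctn

def haar_dwt2_level1(block):
    """One level of the 2-D orthonormal Haar wavelet transform."""
    h = (block[:, 0::2] + block[:, 1::2]) / np.sqrt(2)   # row low-pass
    g = (block[:, 0::2] - block[:, 1::2]) / np.sqrt(2)   # row high-pass
    ll = (h[0::2, :] + h[1::2, :]) / np.sqrt(2)
    lh = (h[0::2, :] - h[1::2, :]) / np.sqrt(2)
    hl = (g[0::2, :] + g[1::2, :]) / np.sqrt(2)
    hh = (g[0::2, :] - g[1::2, :]) / np.sqrt(2)
    return np.block([[ll, hl], [lh, hh]])

def coeffs_for_energy(coeffs, fraction=0.99):
    """Smallest number of coefficients whose squared sum reaches `fraction` of the total energy."""
    e = np.sort(coeffs.ravel() ** 2)[::-1]
    return int(np.searchsorted(np.cumsum(e), fraction * e.sum()) + 1)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    x = np.cumsum(rng.normal(size=(8, 8)), axis=1)       # smooth-ish toy 8x8 block
    print("DCT coefficients for 99% energy: ", coeffs_for_energy(dctn(x, norm="ortho")))
    print("Haar coefficients for 99% energy:", coeffs_for_energy(haar_dwt2_level1(x)))
```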

Journal ArticleDOI
TL;DR: The local cosine transform (LCT) can be added as an optional step for improving the quality of existing DCT (JPEG) encoders by reducing and smoothing the blocking effect.
Abstract: This paper presents the local cosine transform (LCT) as a new method for the reduction and smoothing of the blocking effect that appears at low bit rates in image coding algorithms based on the discrete cosine transform (DCT). In particular, the blocking effect appears in the JPEG baseline sequential algorithm. Two types of LCT were developed: LCT-IV is based on the DCT type IV, and LCT-II is based on DCT type II, which is known as the standard DCT. At the encoder side the image is first divided into small blocks of pixels. Both types of LCT have basis functions that overlap adjacent blocks. Prior to the DCT coding algorithm a preprocessing phase in which the image is multiplied by smooth cutoff functions (or bells) that overlap adjacent blocks is applied. This is implemented by folding the overlapping parts of the bells back into the original blocks, and thus it permits the DCT algorithm to operate on the resulting blocks. At the decoder side the inverse LCT is performed by unfolding the samples back to produce the overlapped bells. The purpose of the multiplication by the bell is to reduce the gaps and inaccuracies that may be introduced by the encoder during the quantization step. LCT-IV and LCT-II were applied on images as a preprocessing phase followed by the JPEG baseline sequential compression algorithm. For LCT-IV, the DCT type IV replaced the standard DCT as the kernel of the transform coding. In both cases, for the same low bit rates the blocking effect was smoothed and reduced while the image quality in terms of mean-square error became better. Subjective tests performed on a group of observers also confirm these results. Thus the LCT can be added as an optional step for improving the quality of existing DCT (JPEG) encoders. Advantages over other methods that attempt to reduce the blocking effect due to quantization are also described.

73 citations
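
The folding step described in the abstract above can be illustrated in one dimension: samples in the overlap region on either side of a block boundary are combined through a smooth rising cutoff r satisfying r(t)^2 + r(-t)^2 = 1, and the adjoint (unfolding) operation recovers them exactly at the decoder. The sketch below is a minimal 1-D illustration under those assumptions; the bell choice and overlap width are illustrative, not those of the paper.

```python
import numpy as np

def bell(m):
    """Rising cutoff r(t) = sin(pi/4 * (1 + t)) sampled on (0, 1).
    Returns r(+t) and r(-t); they satisfy r(t)^2 + r(-t)^2 = 1."""
    t = (np.arange(m) + 0.5) / m
    return np.sin(0.25 * np.pi * (1 + t)), np.sin(0.25 * np.pi * (1 - t))

def fold(x, n0, m):
    """Fold the smooth overlap of half-width m around block boundary n0 back into the blocks."""
    rp, rm = bell(m)
    y = x.copy()
    right = x[n0:n0 + m].copy()          # samples just right of the boundary
    left = x[n0 - m:n0][::-1].copy()     # mirrored samples just left of the boundary
    y[n0:n0 + m] = rp * right + rm * left
    y[n0 - m:n0][::-1] = rp * left - rm * right
    return y

def unfold(y, n0, m):
    """Adjoint of fold(): recovers the original samples exactly."""
    rp, rm = bell(m)
    x = y.copy()
    right = y[n0:n0 + m].copy()
    left = y[n0 - m:n0][::-1].copy()
    x[n0:n0 + m] = rp * right - rm * left
    x[n0 - m:n0][::-1] = rp * left + rm * right
    return x

if __name__ == "__main__":
    sig = np.random.default_rng(2).normal(size=16)
    folded = fold(sig, n0=8, m=4)        # fold around the boundary between two 8-sample blocks
    assert np.allclose(unfold(folded, n0=8, m=4), sig)
```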

Proceedings Article
01 Sep 1992
TL;DR: This approach falls under the general rubric of visible surface algorithms, providing an object-space algorithm which under certain conditions requires only sub-linear time for a partitioning-tree-represented model, and in general exploits occlusion so that the computational cost converges toward the complexity of the image as the depth complexity increases.
Abstract: While almost all research on image representation has assumed an underlying discrete space, the most common sources of images have the structure of the continuum. Although employing discrete space representations leads to simple algorithms, among its costs are quantization errors, significant verbosity and lack of structural information. A neglected alternative is the use of continuous space representations. In this paper we discuss one such representation and algorithms for its generation from views of 3D continuous space geometric models. For this we use binary space partitioning trees for representing both the model and the image. Our approach falls under the general rubric of visible surface algorithms, providing an object-space algorithm which under certain conditions requires only sub-linear time for a partitioning-tree-represented model, and in general exploits occlusion so that the computational cost converges toward the complexity of the image as the depth complexity increases. Visible edges can also be generated as a step following visible surface determination. However, an important contextual difference is that the resulting image trees are used in subsequent continuous space operations. These include affine transformations, set operations, and metric calculations, which can be used to provide image compositing, incremental image modification in a sequence of frames, and facilitating matching for computer vision/robotics. Image trees can also be used with the hemicube and light buffer illumination methods as a replacement for regular grids, thereby providing exact rather than approximate visibility.

Discrete vs. Continuous Space

We have come to think of images as synonymous with a 2D array of pixels. However, this is an artifact of the transducers we use to convert between the physical domain and the informational domain. Physical space at the resolution with which we are concerned is most effectively modeled mathematically as being continuous; that is, as having the structure of the Reals (or at least the Rationals) as opposed to the structure of the Integers. Modeling space as being defined on a regular lattice, while simple, is verbose and induces quantization which reduces accuracy and can introduce visible artifacts. Using nothing other than a lattice for the representation provides no image-dependent structure such as edges. Consider applying to a discrete image an affine transformation, an elementary spatial operation. The solution for this is developed by reasoning not merely in discrete space but in the continuous domain as well: samples are used to reconstruct a "virtual" continuous function which is then resampled. However, the quantization effects can become rather apparent should the transform entail a significant increase in size and a rotation by some small angle, despite the use of high-quality filters. This is due to such factors as ringing, blurring, aliasing, and anisotropic effects which cannot all be simultaneously minimized (see, for example, [Mitchell and Netravali 88]). More importantly, discontinuities become increasingly smeared as one increases the size, since the convolution assumes a band-limited signal, i.e. an image with no edges. This has practical implications when texture mapping is used to define the color of surfaces in 3D: since a texture map can be enlarged arbitrarily, a brick texture, for example, will become diffuse instead of exhibiting distinctly separate bricks.
Now consider applying affine transformations to images represented by quadtrees, a spatial structure, developed within the context of a finite discrete space, for reducing verbosity and inducing structure on an image. The algorithm for constructing the new quadtree of the transformed image seems relatively complicated when compared to the corresponding algorithms for continuous space representations: it must resample each transformed leaf node and construct an entirely new tree. In contrast, boundary representations, simplicial decompositions, or binary space partitioning trees only require transforming points and/or

73 citations
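
The partitioning-tree representation described in the entry above stores an image as a recursive subdivision of the continuous plane by lines rather than as a pixel lattice, so spatial queries reduce to walking the tree. The toy sketch below (hypothetical region labels and splitting lines, not the authors' data structure) shows the basic node layout and point classification.

```python
from dataclasses import dataclass
from typing import Union

@dataclass
class Leaf:
    """A homogeneous region of the image plane (e.g. a constant colour)."""
    value: str

@dataclass
class Node:
    """Interior node: the line a*x + b*y + c = 0 splits the plane into two half-planes."""
    a: float
    b: float
    c: float
    front: Union["Node", Leaf]   # side where a*x + b*y + c >= 0
    back: Union["Node", Leaf]    # side where a*x + b*y + c < 0

def classify(tree, x, y):
    """Walk the partitioning tree to find the region containing the point (x, y)."""
    while isinstance(tree, Node):
        tree = tree.front if tree.a * x + tree.b * y + tree.c >= 0 else tree.back
    return tree.value

if __name__ == "__main__":
    # Hypothetical image: left half "red"; right half split into "sky"/"ground" by a diagonal.
    img = Node(1, 0, -0.5,                                 # vertical line x = 0.5
               Node(1, -1, 0, Leaf("ground"), Leaf("sky")),
               Leaf("red"))
    print(classify(img, 0.25, 0.9))   # -> red
    print(classify(img, 0.9, 0.2))    # -> ground
```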

Patent
21 Jun 1996
TL;DR: In this patent, a method of assigning quantization values for use during the compression of images is disclosed, which includes constructing a non-parametric model during a training phase based on relationships between image characteristics, quantization values, and the bit resources required to encode images.
Abstract: A method of assigning quantization values for use during the compression of images is disclosed. The method includes constructing a non-parametric model during a training phase based on relationships between image characteristics, quantization values, and required bit resources to encode images. The model is built by considering a wide sample of images. The consideration includes determining the temporal and spatial characteristics of the images, compressing the images over a range of quantization values, and recording the resultant required bit resource on a per characterization/quantization level basis. Once built, the model may be used during real time compression by determining the characteristics of the input image and using the allocated resource to find a corresponding match in the non-parametric model. The associated quantization value is then assigned to the input image.

73 citations
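
The scheme described above amounts to an offline table relating image characteristics, quantization value, and resulting bit cost, which is consulted at encode time to pick a quantizer that fits a given bit budget. The sketch below is a simplified illustration of that lookup idea; the activity classes, quantization values, and bit counts are made up for illustration.

```python
import bisect
from collections import defaultdict

class QuantizerTable:
    """Non-parametric model built during training: for each image-activity class,
    record the bits produced when encoding at each candidate quantization value."""

    def __init__(self):
        # activity class -> list of (bits, quant) pairs, kept sorted by bit cost
        self.table = defaultdict(list)

    def add_observation(self, activity_class, quant, bits):
        bisect.insort(self.table[activity_class], (bits, quant))

    def pick_quant(self, activity_class, bit_budget):
        """Return the quantization value of the costliest recorded entry that still fits the budget."""
        entries = self.table[activity_class]
        i = bisect.bisect_right(entries, (bit_budget, float("inf")))
        if i == 0:
            # Nothing fits: fall back to the coarsest known quantizer, if any.
            return entries[0][1] if entries else None
        return entries[i - 1][1]

if __name__ == "__main__":
    model = QuantizerTable()
    # Hypothetical training observations: (activity class, quant value, bits required)
    for cls, q, bits in [("high", 4, 90_000), ("high", 8, 60_000), ("high", 16, 35_000),
                         ("low", 4, 40_000), ("low", 8, 25_000)]:
        model.add_observation(cls, q, bits)
    print(model.pick_quant("high", bit_budget=65_000))   # -> 8
    print(model.pick_quant("low", bit_budget=30_000))    # -> 8
```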


Network Information
Related Topics (5)
Feature extraction: 111.8K papers, 2.1M citations, 84% related
Image segmentation: 79.6K papers, 1.8M citations, 84% related
Feature (computer vision): 128.2K papers, 1.7M citations, 84% related
Image processing: 229.9K papers, 3.5M citations, 83% related
Robustness (computer science): 94.7K papers, 1.6M citations, 81% related
Performance Metrics
No. of papers in the topic in previous years:
Year   Papers
2022   8
2021   354
2020   283
2019   294
2018   259
2017   295