
Showing papers on "Quantization (image processing) published in 1991"


Journal ArticleDOI
TL;DR: The Baseline method has been by far the most widely implemented JPEG method to date, and is sufficient in its own right for a large number of applications.
Abstract: For the past few years, a joint ISO/CCITT committee known as JPEG (Joint Photographic Experts Group) has been working to establish the first international compression standard for continuous-tone still images, both grayscale and color. JPEG’s proposed standard aims to be generic, to support a wide variety of applications for continuous-tone images. To meet the differing needs of many applications, the JPEG standard includes two basic compression methods, each with various modes of operation. A DCT-based method is specified for “lossy” compression, and a predictive method for “lossless” compression. JPEG features a simple lossy technique known as the Baseline method, a subset of the other DCT-based modes of operation. The Baseline method has been by far the most widely implemented JPEG method to date, and is sufficient in its own right for a large number of applications. This article provides an overview of the JPEG standard, and focuses in detail on the Baseline method.

3,944 citations
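
To make the Baseline path described above concrete, here is a minimal sketch of the lossy steps for a single 8×8 block: level shift, forward DCT, and uniform quantization against a table, followed by the decoder-side inverse. The flat quantization table and the random block are illustrative stand-ins; the standard's example tables and the entropy-coding stage are not reproduced.

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix (n x n)."""
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0, :] *= 1 / np.sqrt(2)
    return C * np.sqrt(2 / n)

C = dct_matrix(8)
Q = np.full((8, 8), 16.0)                       # illustrative quantization table

block = np.random.randint(0, 256, (8, 8)).astype(np.float64)
coeffs = C @ (block - 128) @ C.T                # level shift + 2-D forward DCT
quantized = np.round(coeffs / Q).astype(int)    # uniform quantization

# Decoder side: dequantize, inverse DCT, undo the level shift.
recon = C.T @ (quantized * Q) @ C + 128
print("max reconstruction error:", np.abs(recon - block).max())
```

Coarser tables (larger Q entries) trade reconstruction error for fewer bits after entropy coding, which is the knob the Baseline method exposes.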


Patent
10 Jun 1991
TL;DR: In this paper, an image data compression technique is described which utilizes calculating means and a selected series of bit calculating stages having delays, to estimate one or more quantization parameters for such data.
Abstract: An image data compression technique is described which utilizes calculating means and a selected series of bit calculating stages having delays, to estimate one or more quantization parameters for such data. The estimation process preferably is iterated a number of times, with the values found through each estimation being used as the trial values for subsequent estimations. In addition, an initial trial value is selected by a data look ahead technique, which assures that its value is within range of the final quantization parameter used to quantize the data. The final quantization parameter ensures that the compressed data fits within a predetermined number of encoded data bits to be transmitted or recorded, for example, in a recording medium.

146 citations


Book ChapterDOI
01 Jan 1991
TL;DR: This chapter discusses efficient statistical computations for optimal color quantization based on variance minimization, a 3D clustering process that leads to significant image data compression, making extra frame buffer available for animation and reducing bandwidth requirements.
Abstract: Publisher Summary This chapter discusses efficient statistical computations for optimal color quantization. Color quantization is a must when using an inexpensive 8-bit color display to display high-quality color images. Even when 24-bit full color displays become commonplace in the future, quantization will still be important because it leads to significant image data compression, making extra frame buffer available for animation and reducing bandwidth requirements. Color quantization is a 3D clustering process. A color image in an RGB mode corresponds to a three-dimensional discrete density. In this chapter, quantization based on variance minimization is discussed. Efficient computations of color statistics are described. An optimal color quantization algorithm is presented. The algorithm was implemented on a SUN 3/80 workstation. It took only 10 s to quantize a 256 × 256 image. The impact of optimizing partitions is very positive. The new algorithm achieved, on average, one-third and one-ninth of the mean-square errors of the median-cut and Wan et al. algorithms, respectively.

117 citations
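
A simplified sketch of variance-based palette design in RGB space, in the spirit of the chapter's variance-minimization approach but not its exact algorithm (the efficient moment statistics are omitted): repeatedly split the cluster with the largest squared error along its highest-variance channel at the mean, then map pixels to cluster centroids.

```python
import numpy as np

def variance_split_palette(pixels, n_colors=8):
    """Greedy variance-minimizing palette from an (N, 3) float RGB array."""
    clusters = [pixels]
    while len(clusters) < n_colors:
        sse = [((c - c.mean(axis=0)) ** 2).sum() for c in clusters]
        c = clusters.pop(int(np.argmax(sse)))        # worst cluster first
        axis = int(np.argmax(c.var(axis=0)))         # channel with most spread
        thr = c[:, axis].mean()
        left, right = c[c[:, axis] <= thr], c[c[:, axis] > thr]
        if len(left) == 0 or len(right) == 0:        # degenerate split: stop early
            clusters.append(c)
            break
        clusters += [left, right]
    return np.array([c.mean(axis=0) for c in clusters])

rgb = np.random.randint(0, 256, (256 * 256, 3)).astype(np.float64)
palette = variance_split_palette(rgb, n_colors=8)
idx = np.argmin(((rgb[:, None, :] - palette[None, :, :]) ** 2).sum(-1), axis=1)
quantized = palette[idx]                             # indexed, 8-color image
```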


Journal ArticleDOI
TL;DR: This work proposes an adaptive multiscale method, where the discretization scale is chosen locally according to an estimate of the relative error in the velocity estimation, based on image properties, and provides substantially better estimates of optical flow than do conventional algorithms, while adding little computational cost.
Abstract: Single-scale approaches to the determination of the optical flow field from the time-varying brightness pattern assume that spatio-temporal discretization is adequate for representing the patterns and motions in a scene. However, the choice of an appropriate spatial resolution is subject to conflicting, scene-dependent, constraints. In intensity-based methods for recovering optical flow, derivative estimation is more accurate for long wavelengths and slow velocities (with respect to the spatial and temporal discretization steps). On the contrary, short wavelengths and fast motions are required in order to reduce the errors caused by noise in the image acquisition and quantization process. Estimating motion across different spatial scales should ameliorate this problem. However, homogeneous multiscale approaches, such as the standard multigrid algorithm, do not improve this situation, because an optimal velocity estimate at a given spatial scale is likely to be corrupted at a finer scale. We propose an adaptive multiscale method, where the discretization scale is chosen locally according to an estimate of the relative error in the velocity estimation, based on image properties. Results for synthetic and video-acquired images show that our coarse-to-fine method, fully parallel at each scale, provides substantially better estimates of optical flow than do conventional algorithms, while adding little computational cost.

101 citations
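
The intensity-based estimation in the abstract can be illustrated at a single scale with a least-squares solve on spatial and temporal gradients; the normalized residual below is a crude stand-in for the kind of local error estimate that adaptive scale selection relies on. The adaptive multiscale machinery itself is not reproduced, and the window and gradient choices here are assumptions.

```python
import numpy as np

def window_flow(frame0, frame1):
    """Least-squares intensity-based flow (u, v) for one image window."""
    Iy, Ix = np.gradient(frame0)                 # spatial derivatives
    It = frame1 - frame0                         # temporal derivative
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
    b = -It.ravel()
    uv, residual, rank, _ = np.linalg.lstsq(A, b, rcond=None)
    # relative residual: a large value suggests this scale is unreliable here
    rel_err = np.sqrt(residual[0]) / (np.linalg.norm(b) + 1e-12) if residual.size else 0.0
    return uv, rel_err

# synthetic check: a smooth pattern shifted one pixel to the right
x = np.linspace(0, 4 * np.pi, 64)
f0 = np.sin(x)[None, :] * np.cos(x)[:, None]
f1 = np.roll(f0, 1, axis=1)
(u, v), err = window_flow(f0, f1)
print(u, v, err)                                 # u should come out close to 1
```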


Proceedings ArticleDOI
01 Jun 1991
TL;DR: Rather than studying perceptually lossless compression, research must be carried out to determine what types of lossy transformations are least disturbing to the human observer.
Abstract: Several topics connecting basic vision research to image compression and image quality are discussed: (1) A battery of about 7 specially chosen simple stimuli should be used to tease apart the multiplicity of factors affecting image quality. (2) A 'perfect' static display must be capable of presenting about 135 bits/min². This value is based on the need for 3 pixels/min and 15 bits/pixel. (3) Image compression allows the reduction from 135 to about 20 bits/min² for perfect image quality. 20 bits/min² is the information capacity of human vision. (4) A presumed weakness of the JPEG standard is that it does not allow for Weber's Law nonuniform quantization. We argue that this is an advantage rather than a weakness. (5) It is suggested that all compression studies should report two numbers separately: the amount of compression achieved from quantization and the amount from redundancy coding. (6) The DCT, wavelet and viewprint representations are compared. (7) Problems with extending perceptual losslessness to moving stimuli are discussed. Our approach of working with a 'perfect' image on a 'perfect' display with 'perfect' compression is not directly relevant to the present situation with severely limited channel capacity. Rather than studying perceptually lossless compression, we must carry out research to determine what types of lossy transformations are least disturbing to the human observer. Transmission of 'perfect', lossless images will not be practical for many years.

90 citations
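
Points (2) and (3) of the abstract are simple arithmetic; a quick check of the quoted numbers (bits per square arc-minute of visual angle):

```python
# Point (2): capacity needed by a 'perfect' static display.
pixels_per_arcmin = 3                # linear sampling density cited in the abstract
bits_per_pixel = 15
bits_per_sq_arcmin = pixels_per_arcmin ** 2 * bits_per_pixel
print(bits_per_sq_arcmin)            # 9 * 15 = 135 bits/min^2, as stated

# Point (3): compressing down to the ~20 bits/min^2 capacity of human vision.
print(round(bits_per_sq_arcmin / 20, 1))   # roughly a 6.8:1 reduction
```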


Patent
William B. Pennebaker
17 May 1991
TL;DR: In this paper, a scaling factor for the quantization tables of the multiple image components is defined, and the scaling factor signals changes in quantization for successive blocks of the image data.
Abstract: A system and method for masking adaptive quantization during compressed image data transmission by defining a scaling factor for the quantization tables of the multiple image components, wherein the scaling factor signals changes in quantization for successive blocks of the image data. The scaling factor is transmitted as a further component together with the image components to thereby signal adaptive quantization of the image data.

73 citations
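
A hypothetical sketch of the signalling idea in this patent: one scale factor per block, carried as an extra component, rescales the shared base quantization table so the decoder can mirror the adaptive quantization. The flat base table, the scale values, and the function names are illustrative assumptions, not taken from the patent text.

```python
import numpy as np

base_table = np.full((8, 8), 16.0)          # illustrative shared quantization table

def quantize_block(dct_block, scale):
    """Encoder: quantize one 8x8 block of DCT coefficients with a scaled table."""
    return np.round(dct_block / (base_table * scale)).astype(int)

def dequantize_block(q_block, scale):
    """Decoder: the transmitted scale factor reproduces the same effective table."""
    return q_block * (base_table * scale)

scales = [1.0, 0.5, 2.0]                    # e.g. finer quantization for busy blocks
blocks = [np.random.randn(8, 8) * 100 for _ in scales]
coded   = [quantize_block(b, s) for b, s in zip(blocks, scales)]
decoded = [dequantize_block(q, s) for q, s in zip(coded, scales)]
```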


Journal ArticleDOI
TL;DR: It is shown that there exists an optimal threshold for the quantization in BTC algorithms (fixed and variable) that minimizes the errors and the use of the variable BTC (vBTC) with optimal threshold leads to a reduction of the error in the reconstructed images.
Abstract: A variable block truncation coding (BTC) algorithm is proposed for image compression. It is shown that there exists an optimal threshold for the quantization in BTC algorithms (fixed and variable) that minimizes the errors. Compared to the fixed BTC (fBTC), the variable BTC (vBTC) gives better performance on all the tested images. The use of vBTC with the optimal threshold reduces the error in the reconstructed images by almost 40% relative to the error obtained with fBTC. This enhanced performance suggests that vBTC with the optimal threshold is a better alternative to fixed block truncation coding.

73 citations
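
A sketch of two-level block truncation coding for one block, with a brute-force search over candidate thresholds to minimize squared error, in the spirit of the optimal-threshold result above; the paper's analysis and its variable-block extension are not reproduced.

```python
import numpy as np

def btc_encode(block, threshold):
    """Two-level BTC: a bitmap plus the means of the low and high groups."""
    bitmap = block > threshold
    lo = block[~bitmap].mean() if (~bitmap).any() else 0.0
    hi = block[bitmap].mean() if bitmap.any() else lo
    return bitmap, lo, hi

def btc_decode(bitmap, lo, hi):
    return np.where(bitmap, hi, lo)

def optimal_threshold(block):
    """Exhaustive search over pixel values for the minimum-error threshold."""
    best_t, best_err = float(block.mean()), np.inf
    for t in np.unique(block)[:-1]:          # splitting above the maximum is pointless
        err = ((btc_decode(*btc_encode(block, t)) - block) ** 2).sum()
        if err < best_err:
            best_t, best_err = float(t), err
    return best_t, best_err

block = np.random.randint(0, 256, (4, 4)).astype(np.float64)
t, err = optimal_threshold(block)
print("optimal threshold:", t, "squared error:", err)
```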


Proceedings ArticleDOI
Eric Viscito, Cesar A. Gonzales
01 Nov 1991
TL;DR: This paper describes an MPEG encoder designed to produce good quality coded sequences for a wide range of video source characteristics and over a range of bit rates.
Abstract: The emerging ISO MPEG video compression standard is a hybrid algorithm which employs motion compensation, spatial discrete cosine transforms, quantization, and Huffman coding. The MPEG standard specifies the syntax of the compressed data stream and the method of decoding, but leaves considerable latitude in the design of the encoder. Although the algorithm is geared toward fixed-bit-rate storage media, the rules for bit rate control allow a good deal of variation in the number of bits allocated to each picture. In addition, the allocation of bits within a picture is subject to no rules whatsoever. One would like to design an encoder that optimizes visual quality of the decoded video sequence subject to these bit rate restrictions. However, this is difficult due to the elusive nature of a quantitative distortion measure for images and motion sequences that correlates well with human perception. This paper describes an MPEG encoder designed to produce good quality coded sequences for a wide range of video source characteristics and over a range of bit rates. The novel parts of the algorithm include a temporal bit allocation strategy, spatially adaptive quantization, and a bit rate control scheme.

70 citations
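
The abstract leaves the bit rate control scheme unspecified; the sketch below is a heavily simplified, hypothetical buffer-based controller of the general kind such encoders use: the quantizer scale for the next picture is raised or lowered according to the fullness of a virtual buffer. All constants and the mapping are assumptions, and the picture-type (I/P/B) weighting of a real MPEG encoder is omitted.

```python
def rate_control(frame_bits, target_bits_per_frame, buffer_size, q_min=1, q_max=31):
    """Yield a quantizer scale per picture based on virtual-buffer fullness."""
    fullness = buffer_size / 2.0                     # start half full
    for bits in frame_bits:                          # bits actually spent per picture
        fullness = min(max(fullness + bits - target_bits_per_frame, 0.0), buffer_size)
        # fuller buffer -> coarser quantization for the next picture
        yield int(round(q_min + (q_max - q_min) * fullness / buffer_size))

demo_bits = [120_000, 90_000, 150_000, 80_000]       # pretend output of an encoder
print(list(rate_control(demo_bits, target_bits_per_frame=100_000, buffer_size=400_000)))
```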


Patent
Wieland Jass
04 Nov 1991
TL;DR: In this article, a process for adaptive quantization with quantization errors which have little disturbing visual effects, using a digital block-related process for data reduction in digital images or image sequences, is described.
Abstract: A process for adaptive quantization with quantization errors which have little disturbing visual effect, using a digital block-related process for data reduction in digital images or image sequences. In this process, an image to be transmitted is subdivided into a multiplicity of blocks, and a parameter for setting the quantization assigned to each block is calculated. This parameter is obtained by subdividing each block into subregions and computing an activity measure for each subregion, from which a quantization parameter for that subregion is determined. Finally, for every block, the quantization parameters of all its subregions are summed and multiplicatively scaled.

67 citations
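
A sketch of the block/subregion structure described above, with variance standing in for the unspecified activity measure and a logarithmic mapping standing in for the unspecified activity-to-parameter rule; the per-block summation and multiplicative scaling follow the abstract, everything else is an assumption.

```python
import numpy as np

def block_quantization_parameter(block, sub=4, global_scale=1.0):
    """Split a block into sub x sub subregions, derive a quantization parameter
    from each subregion's activity, then sum and multiplicatively scale."""
    h, w = block.shape
    q_params = []
    for r in range(0, h, h // sub):
        for c in range(0, w, w // sub):
            region = block[r:r + h // sub, c:c + w // sub]
            activity = region.var()                    # assumed activity measure
            q_params.append(1.0 + np.log1p(activity))  # assumed mapping
    return global_scale * sum(q_params)

image = np.random.randint(0, 256, (64, 64)).astype(np.float64)
bs = 16
q_map = [block_quantization_parameter(image[y:y + bs, x:x + bs], sub=4)
         for y in range(0, 64, bs) for x in range(0, 64, bs)]
print(len(q_map), round(min(q_map), 2), round(max(q_map), 2))
```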


Journal ArticleDOI
TL;DR: A discussion of the JPEG standard is presented in the first part, with an explanation of all the basic tools available in the standard; two typical applications are then described to demonstrate the potential of this worldwide standard.
Abstract: After three years of active and constructive international competition, a consensus was reached by the International Standards Organization (ISO) and the International Consultative Committee for Telephone and Telegraph (CCITT) in 1988 on one compression technique for still images. The collaboration of the CCITT and ISO took the form of the JPEG (Joint Photographic Experts Group), which is now referred to as the ISO/IEC JTC1/SC2/WG10. Since then, the large-scale development of compatible interactive applications has begun to expand rapidly. A discussion of the JPEG standard is presented in the first part with an explanation of all the basic tools available in the standard. Then, to demonstrate the potential of the JPEG standard, two typical applications are described. One of them concerns the application to the printing industry currently under development in Japan, and the other deals with the large-scale introduction of photovideotex in Europe. Finally, other areas of active development around JPEG are briefly summarized to give the reader references and an overview of the evolving propagation of this worldwide standard.

60 citations


Proceedings ArticleDOI
E. Linzer, Ephraim Feig
14 Apr 1991
TL;DR: The authors present novel scaled discrete cosine transform (DCT) and inverse scaled DCT algorithms designed for fused multiply/add architectures, and discuss in detail the most popular case used in image processing, which involves 8×8 blocks.
Abstract: The authors present novel scaled discrete cosine transform (DCT) and inverse scaled DCT algorithms designed for fused multiply/add architecture. Since the most popular case used in image processing involves 8×8 blocks (both emerging JPEG and MPEG standards call for DCT coding on blocks of this size), the authors discuss this case in detail. The scaled DCT and inverse scaled DCT each use 416 operations, so that, combined with scaling or descaling, each uses 480 operations. For the inverse, the descaling can be combined with computation of the IDCT (inverse DCT). If multiplicative constants, which depend on the quantization matrix, can be computed offline, then the descaling and IDCT can be computed simultaneously with 417 operations.
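
The "descaling combined with quantization" point can be illustrated without the 416/417-operation factorization itself: if a scaled DCT delivers coefficients divided by known per-coefficient constants, those constants can be folded offline into the quantization table, so no separate descaling pass is needed. The sketch below only demonstrates that algebraic equivalence with a full matrix DCT and made-up scale factors; it is not the Linzer-Feig algorithm.

```python
import numpy as np

def dct_matrix(n=8):
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0, :] *= 1 / np.sqrt(2)
    return C * np.sqrt(2 / n)

C = dct_matrix(8)
block = np.random.randn(8, 8)
Q = np.full((8, 8), 12.0)                     # illustrative quantization matrix

s_1d = 1.0 + 0.25 * np.arange(8)              # made-up per-coefficient scale factors
S = np.outer(s_1d, s_1d)
true_coeffs   = C @ block @ C.T
scaled_coeffs = true_coeffs / S               # what a scaled DCT would output

q_separate = np.round((scaled_coeffs * S) / Q)   # descale, then quantize
q_folded   = np.round(scaled_coeffs / (Q / S))   # quantize with a pre-scaled table
print(np.array_equal(q_separate, q_folded))      # True (barring rounding ties)
```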

Patent
14 Mar 1991
TL;DR: A method of determining a parameter for encoding an image in accordance with an encoding parameter of a previously encoded image is disclosed.
Abstract: This invention relates to an image encoding method and apparatus and, more particularly, to determination of an encoding parameter. According to this invention, a method of determining a parameter for encoding an image to be encoded in accordance with an encoding parameter of the already encoded image is disclosed.

Journal ArticleDOI
TL;DR: By allowing the algorithm to adapt to the local picture statistics and by paying particular attention to the nature and reproduction of edges in the picture, the authors are able to substantially improve the visual picture quality and at the same time allow for a moderate increase in the compression ratio.
Abstract: Block truncation coding-vector quantization (BTC-VQ) is an extremely simple non-adaptive block-based image compression technique. It has a relatively low compression ratio; however, the simplicity of the algorithm makes it an attractive option. Its main drawback is the fact that the reconstructed pictures suffer from ragged edges. In this paper we show that by allowing the algorithm to adapt to the local picture statistics and by paying particular attention to the nature and reproduction of edges in the picture, we are able to substantially improve the visual picture quality and at the same time allow for a moderate increase in the compression ratio.

Journal ArticleDOI
TL;DR: Techniques for the display of natural colour images on high-resolution graphics displays are described, including adaptation of colour quantization techniques to the high-resolution case.

Patent
Norman Dennis Richards1
22 Jul 1991
TL;DR: A method of encoding an image for CD-I players comprises obtaining the pixel information as a first matrix (M1) of 768×560 pixel component values, decimation filtering (1) the first matrix to produce a second matrix (M2) of 384×560 pixel component values, and encoding (2) the second matrix to produce a first set of DYUV digital data (RDD1) for storage on a compact disc (SM).
Abstract: A method of encoding an image for CD-I players comprises obtaining the pixel information as a first matrix (M1) of 768×560 pixel component values, decimation filtering (1) the first matrix (M1) to produce a second matrix (M2) of 384×560 pixel component values, encoding (2) the second matrix (M2) to produce a first set of DYUV digital data (RDD1) for storage on a compact disc (SM), applying the digital data (RDD1) to a decoder (3) to form a third matrix (M3) of 384×560 pixel component values, interpolation filtering (4) the third matrix (M3) to form a fourth matrix (M4) of 768×560 pixel component values, forming the difference (5) between the first (M1) and fourth (M4) matrices to produce a fifth matrix (M5) of 768×560 difference values and encoding the fifth matrix (M5) as respective multi-bit and/or run length codes (RDD1) for storage on a compact disc (SM). Compatibility with known hardware (the CD-I standard `basecase` player) is obtained by encoding negative difference values using the quantization levels in the guard range (0-15) below black level (16, on a scale of zero to 255).

Journal ArticleDOI
TL;DR: A charge coupled device (CCD)-based image processor that performs 2D filtering of a gray-level image with 20 programmable 8-bit 7×7 spatial filters is described and the effect of weight quantization imposed by use of this CCD device on the performance of the neocognitron is presented.
Abstract: A charge coupled device (CCD)-based image processor that performs 2D filtering of a gray-level image with 20 programmable 8-bit 7×7 spatial filters is described. The processor consists of an analog input buffer, 49 multipliers, and 49 8-bit 20-stage local memories in a 29-mm² chip area. Better than 99.999% charge transfer efficiency and greater than 42-dB dynamic range have been achieved by the processor, which performs one billion arithmetic operations per second and dissipates less than 1 W when clocked at 10 MHz. The device is also suited for neural networks with local connections and replicated weights. Implementation of a specific neural network, the neocognitron, based on this CCD processor has been simulated. The effect of weight quantization imposed by use of this CCD device on the performance of the neocognitron is presented.

Journal ArticleDOI
TL;DR: An adaptive image coding technique-two-channel conjugate classified discrete cosine transform/vector quantization (TCCCDCT/VQ)-is proposed to efficiently exploit correlation in large image blocks by taking advantage of the discrete Cosine transform and vector quantization.
Abstract: An adaptive image coding technique, two-channel conjugate classified discrete cosine transform/vector quantization (TCCCDCT/VQ), is proposed to efficiently exploit correlation in large image blocks by taking advantage of the discrete cosine transform and vector quantization. In the transform domain, a classified discrete cosine transform/vector quantization (CDCT/VQ) scheme is proposed and TCCCDCT/VQ is developed based on the CDCT/VQ scheme. These two techniques were applied to encode test images at about 0.51 b/pixel and 0.73 b/pixel. The performances of both techniques over a noisy channel have also been tested in the transform domain. The performances of both adaptive VQ techniques are perceptually very similar for the noise-free channel case. However, when channel error is injected for the same bit rate, the TCCCDCT/VQ results in less visible distortion than the CDCT/VQ, which is based on ordinary VQ.

Patent
16 Jul 1991
TL;DR: An image processing apparatus includes an input unit for inputting data on an object pixel, a calculation unit for calculating an average density value of a predetermined area, and a quantization unit for converting the data on the object pixel into multi-level data.
Abstract: An image processing apparatus includes an input unit for inputting data on an object pixel, a calculation unit for calculating an average density value of a predetermined area, and a quantization unit for converting the data on the object pixel into multi-level data on the basis of the average density value obtained by the calculation unit.

Proceedings ArticleDOI
01 May 1991
TL;DR: 3D-DCT compression is a viable technique for efficiently reducing the size of data volumes which must be analyzed with various rendering methods, and oblique angle slicing, which involves the fewest operations, was found to be the most demanding of small compression errors.
Abstract: We performed volume compression on CT and MR data sets, each consisting of 256 × 256 × 64 or 32 images, using three-dimensional (3D) DCT followed by quantization, adaptive bit-allocation, and Huffman encoding. Cuberille-based surface rendering and oblique angle slicing was performed on the reconstructed compression data using a multi-stream vector processor. For CT images 3D-DCT was found to be successful in exploiting the additional degree of voxel correlations between image frames, resulting in compression efficiency greater than 2D-DCT of individual images. During rendering operations, a substantial amount of thresholding, resampling, and filtering operations are performed on the data. At compression ratios in the range 6 - 15:1, 3D compression was not found to have any adverse visual impact on rendered output. Of these two methods, oblique angle slicing, which involves the fewest operations, was found to be the most demanding of small compression errors. We conclude that 3D-DCT compression is a viable technique for efficiently reducing the size of data volumes which must be analyzed with various rendering methods.
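
A sketch of the separable 3D DCT underlying the pipeline above, applied to one small voxel block via matrix multiplication along each axis; the quantization, adaptive bit allocation, and Huffman coding stages are omitted.

```python
import numpy as np

def dct_matrix(n):
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0, :] *= 1 / np.sqrt(2)
    return C * np.sqrt(2 / n)

def dct3(vol):
    """Separable 3-D DCT: apply the 1-D orthonormal transform along each axis."""
    out = vol.astype(np.float64)
    for axis in range(3):
        C = dct_matrix(out.shape[axis])
        out = np.moveaxis(np.tensordot(C, out, axes=([1], [axis])), 0, axis)
    return out

def idct3(coeffs):
    out = coeffs.astype(np.float64)
    for axis in range(3):
        C = dct_matrix(out.shape[axis])
        out = np.moveaxis(np.tensordot(C.T, out, axes=([1], [axis])), 0, axis)
    return out

block = np.random.rand(8, 8, 8)
coeffs = dct3(block)
print(np.allclose(idct3(coeffs), block))   # True: lossless until quantization
```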

Proceedings ArticleDOI
08 Apr 1991
TL;DR: An overview of the previous results expands the argument that frequency-amplitude response curves that arise quite naturally in problems involving human visual and audio perception should be used to decide the quantization strategy for wavelet coefficients and the norm in which to measure the error in compressed data.
Abstract: A theory developed by DeVore, Jawerth and Popov of nonlinear approximation by both orthogonal and nonorthogonal wavelets has been applied to problems in surface and image compression. This theory relates precisely the norms in which the error is measured, the rate of decay in that error as the compression decreases, and the smoothness of the data. This overview of the previous results expands the argument, made earlier for image compression, that frequency-amplitude response curves that arise quite naturally in problems involving human visual and audio perception should be used to decide the quantization strategy for wavelet coefficients and the norm in which to measure the error in compressed data.

Journal ArticleDOI
TL;DR: Efficient adaptive discrete cosine transform (DCT) encoding for data compression is described, and by adopting nonlinear quantizers, the Huffman code table was successfully compressed with very little degradation in the images.
Abstract: Efficient adaptive discrete cosine transform (DCT) encoding for data compression is described. This algorithm is simple and is suitable for digital still cameras. At a bit rate of 1.2 b/pixel, a signal-to-noise (S/N) ratio of about 40 dB was achieved when applied to the Standard Image Database (SIDBA) 'girl Y' images. The data for the DCT coefficients are compressed by adaptive run-length coding, in which either the run-length coding or the non-run-length coding was selected based on the number of zero data in each of the subdivided pictures. By adopting nonlinear quantizers, the Huffman code table was successfully compressed with very little degradation in the images.

Proceedings ArticleDOI
25 Feb 1991
TL;DR: The CL550 provides a single-chip real-time solution for image compression and decompression applications that implements the baseline ISO/CCITT Joint Photographic Experts Group (JPEG) proposed international standard for image compression and decompression.
Abstract: The CL550 provides a single-chip real-time solution for image compression and decompression applications. It implements the baseline ISO/CCITT Joint Photographic Experts Group (JPEG) proposed international standard for image compression and decompression. The CL550 is designed for applications that manipulate high-quality digital pictures and motion sequences. The CL550 can encode and decode gray-scale and color images at video rates. The compression ratio is controlled by on-chip quantization tables. Images can be compressed from 8:1 to 100:1 depending on the quality, storage, and bandwidth requirements of each application.

Proceedings ArticleDOI
14 Apr 1991
TL;DR: It is shown theoretically that CC2VQ performs as well as single-stage VQ under asymptotic conditions and some experimental results for the vector quantization of speech waveforms are presented to support these theoretical results.
Abstract: Two-stage vector quantization (2VQ) reduces the complexity of single-stage VQ at the cost of reduced performance. Cell-conditioned (CC) 2VQ is proposed to improve the performance of 2VQ, while retaining the advantage of reduced complexity. In CC2VQ, the error vector from the first stage is transformed prior to its quantization by the second stage. The effect of transformation is to cause the size and orientation of different first-stage cells to be as similar as possible. It is shown theoretically that CC2VQ performs as well as single-stage VQ under asymptotic conditions. Some experimental results for the vector quantization of speech waveforms are presented to support these theoretical results.
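
For reference, plain two-stage VQ (without the cell-conditioned transformation that is the paper's contribution) looks like this: the first codebook quantizes the input vector, the second quantizes the residual, and the reproduction is the sum of the two codevectors. The random codebooks here are stand-ins for trained ones.

```python
import numpy as np

def nearest(codebook, x):
    """Index of the codevector closest to x in Euclidean distance."""
    return int(np.argmin(((codebook - x) ** 2).sum(axis=1)))

def two_stage_vq(x, cb1, cb2):
    i = nearest(cb1, x)             # first-stage index
    j = nearest(cb2, x - cb1[i])    # second stage quantizes the error vector
    return i, j, cb1[i] + cb2[j]    # reproduction vector

rng = np.random.default_rng(0)
dim = 8
cb1 = rng.normal(size=(64, dim))            # stand-in codebooks
cb2 = 0.3 * rng.normal(size=(64, dim))      # second stage covers smaller errors
x = rng.normal(size=dim)
i, j, x_hat = two_stage_vq(x, cb1, cb2)
print("distortion:", float(((x - x_hat) ** 2).sum()))
```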

Journal ArticleDOI
TL;DR: This paper proposes two adaptive algorithms for image vector quantization which provide a good compromise between coding performance and computational complexity, achieving very good performance at reduced complexity.
Abstract: Vector quantization (VQ) is a powerful technique for low bit-rate image coding. The two basic steps in vector quantization are codebook generation and encoding. In VQ, a universal codebook is usually designed from a training set of vectors drawn from many different kinds of images. The coding performance of vector quantization can be improved by employing adaptive techniques. The applicability of vector quantization is, however, limited by its computational complexity. In this paper, we propose two adaptive algorithms for image vector quantization which provide a good compromise between coding performance and computational complexity resulting in a very good performance at a reduced complexity. In the first algorithm, a subset of codewords from a universal codebook is used to code an image. The second algorithm starts with the reduced codebook and requires one iteration to adapt the codewords to the image to be coded. Simulation results demonstrate the gains in coding performance and the savings in computational complexity.

Journal ArticleDOI
TL;DR: A protocol is described for subjective and objective evaluation of the fidelity of compressed/decompressed images compared to originals and the results of its application to four representative and promising compression methods are presented.
Abstract: Image compression at rates of 10:1 or greater could make picture archiving and communication systems (PACS) much more responsive and economically attractive. A protocol is described for subjective and objective evaluation of the fidelity of compressed/decompressed images compared to originals. The results of its application to four representative and promising compression methods are presented. The four compression methods examined are predictive pruned tree-structured vector quantization, fractal compression, the full-frame discrete cosine transform with equal weighting of block bit allocation, and the full-frame discrete cosine transform with human visual system weighting of block bit allocation. A protocol was developed for side-by-side observer comparison of reconstructed images with originals. Three 1024 × 1024 computed radiography (CR) images and two 512 × 512 x-ray computed tomography (CT) images were viewed at six bit rates by nine radiologists at the University of Washington Medical Center. The radiologists' subjective evaluations of image fidelity were compared to calculations of mean square error for each decompressed image.

Journal ArticleDOI
S.W. Ra, J.-K. Kim
TL;DR: A new fast search algorithm for vector quantisation using the weight of image vectors is proposed, making use of the observation that the two codevectors are close to each other in most real images.
Abstract: A new fast search algorithm for vector quantisation using the weight of image vectors is proposed. The codevectors are sorted according to their weights, and the search for the codevector having the minimum Euclidean distance to a given input vector starts with the one having the minimum weight-distance, making use of the observation that the two codevectors are close to each other in most real images. The search is then made to terminate as soon as a simple yet novel test reports that any remaining vector in the codebook should have a larger Euclidean distance. Simulations show that the number of calculations can be reduced by up to a factor of four compared with the well-known partial distance method.
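
A sketch of the weight-ordering idea: codevectors are sorted by weight (here, the component sum), the search starts at the entry whose weight is closest to the input's, and a search direction is abandoned once the weight gap alone guarantees a larger Euclidean distance, since |w(x) - w(c)| <= sqrt(k)·||x - c|| by the Cauchy-Schwarz inequality. The exact termination test of the paper may differ; this only illustrates the principle.

```python
import numpy as np

def weight_ordered_search(codebook, x):
    """Exact nearest-codevector search over a codebook sorted by component sum."""
    k = codebook.shape[1]
    order = np.argsort(codebook.sum(axis=1))
    cb, w = codebook[order], codebook.sum(axis=1)[order]
    wx = x.sum()
    best_i, best_d = -1, np.inf
    lo, hi = int(np.searchsorted(w, wx)) - 1, int(np.searchsorted(w, wx))
    while lo >= 0 or hi < len(cb):
        for idx, up in ((hi, True), (lo, False)):
            if not 0 <= idx < len(cb):
                continue
            if abs(w[idx] - wx) / np.sqrt(k) >= best_d:
                # weight gap already rules out this and all further codevectors
                if up: hi = len(cb)
                else:  lo = -1
                continue
            d = np.linalg.norm(cb[idx] - x)
            if d < best_d:
                best_i, best_d = int(order[idx]), d
            if up: hi += 1
            else:  lo -= 1
    return best_i, best_d

rng = np.random.default_rng(1)
codebook, x = rng.normal(size=(256, 16)), rng.normal(size=16)
i, d = weight_ordered_search(codebook, x)
print(i == int(np.argmin(np.linalg.norm(codebook - x, axis=1))))   # True
```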

Journal ArticleDOI
01 Aug 1991
TL;DR: A digital video codec has been developed for the Zenith/AT&T HDTV (high-definition TV) system for terrestrial broadcast over NTSC taboo channels that results in a highly robust reception and decoding of the compressed video signal.
Abstract: A digital video codec has been developed for the Zenith/AT&T HDTV (high-definition TV) system for terrestrial broadcast over NTSC taboo channels. The codec works on an image progressively scanned with 1575 scan lines every 1/30th of a second and achieves a compression ratio of approximately 50 to 1. The transparent image quality is achieved using motion-compensated transform coding coupled with a perceptual criterion to determine the quantization accuracy required for each transform coefficient. The combination of a sophisticated encoded video format and advanced bit error protection techniques results in a highly robust reception and decoding of the compressed video signal.

Patent
Kataoka Tatsuhito
09 Aug 1991
TL;DR: An image processing method and apparatus in which image data is divided into blocks each containing a plurality of pixels and is coded and decoded on the basis of such blocks is described in this article.
Abstract: An image processing method and apparatus in which image data is divided into blocks each containing a plurality of pixels and is coded and decoded on the basis of such blocks. When the image data is enlarged or reduced, enlargement is conducted before the coding and decoding, while reduction is performed after coding and decoding, whereby quantization errors caused by the coding and decoding are minimized.

Proceedings ArticleDOI
08 Apr 1991
TL;DR: A class of lossy data compression algorithms capable of encoding images so that the loss of information complies with certain distortion requirements, based on tree-structured vector quantizers (TSVQ), are presented.
Abstract: This paper presents a class of lossy data compression algorithms capable of encoding images so that the loss of information complies with certain distortion requirements. The developed algorithms are based on tree-structured vector quantizers (TSVQ). The first distortion controlled algorithm uses variable-size image blocks encoded on quad-tree data structures to efficiently encode image areas with different information content. Another class of distortion controlled algorithms presented is based on recursive quantization of error image blocks that represent the difference between the current approximation and the original block. The progressive compression properties of these algorithms are described. Compression/distortion performance using satellite images provided by NASA is better than that of the TSVQ algorithms at high bit rates.

Journal ArticleDOI
TL;DR: An 8-b quantization scheme to reduce the data volume for single-look complex scattering matrix data measured by polarimetric imaging radar systems is described, and it is shown, with measured data, that the signal to quantization noise ratio for the compression scheme is more than 35 dB for the cross-polarized channels, and more than 45dB for the copolarization channels.
Abstract: An 8-b quantization scheme to reduce the data volume for single-look complex scattering matrix data measured by polarimetric imaging radar systems is described. The scattering matrices are not symmetrized before compression, thereby retaining information about background noise and system effects. The data volume is reduced by a factor of 3.2. It is shown, with measured data, that the signal to quantization noise ratio for the compression scheme is more than 35 dB for the cross-polarized channels, and more than 45 dB for the copolarized channels. >