
Showing papers on "Image compression" published in 1986


Patent
Victor S. Miller1, Mark N. Wegman1
11 Aug 1986
TL;DR: In this patent, communications between a host computing system and a number of remote terminals are enhanced by a data compression method that modifies Lempel-Ziv coding with new character and new string extensions to improve the compression ratio, and with a least-recently-used deletion routine that limits the encoding tables to a fixed size.
Abstract: Communications between a Host Computing System and a number of remote terminals are enhanced by a data compression method which modifies the data compression method of Lempel and Ziv by addition of new character and new string extensions to improve the compression ratio, and by a least-recently-used deletion routine that limits the encoding tables to a fixed size, significantly improving data transmission efficiency.

162 citations
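
A minimal sketch of the general idea, not the patented method itself: an LZW-style string table whose size is bounded by evicting least-recently-used multi-byte entries and reusing their codes. The function name and table size are illustrative.

```python
from collections import OrderedDict

def lzw_lru_encode(data: bytes, max_table: int = 512):
    """Encode `data` as a list of integer codes, evicting the least
    recently used multi-byte table entry once the table is full."""
    table = OrderedDict((bytes([i]), i) for i in range(256))
    next_code = 256
    out, s = [], b""
    for byte in data:
        c = bytes([byte])
        if s + c in table:
            s += c
            table.move_to_end(s)          # refresh LRU rank
            continue
        out.append(table[s])
        table.move_to_end(s)
        if next_code < max_table:         # room left: assign a fresh code
            code, next_code = next_code, next_code + 1
        else:                             # table full: evict LRU string, reuse its code
            victim = next(k for k in table if len(k) > 1)
            code = table.pop(victim)
        table[s + c] = code               # new string extension
        s = c
    if s:
        out.append(table[s])
    return out
```

A matching decoder would have to mirror the same extension and eviction steps so that both copies of the table stay synchronized.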


Journal ArticleDOI
TL;DR: Despite the simplicity of the decoder, the performance of the adaptive vector quantizer is found to be comparable with block transform coding schemes for both monochrome and color pictures.
Abstract: The paper demonstrates the application of adaptive vector quantization techniques to the coding of monochromatic and color pictures. Codebooks of representative vectors are generated for different portions of the image. By means of extensive simulations, the performance of the coder for monochrome images is estimated as a function of the various coding parameters-vector dimensionality, number of representative vectors, and degree of adaptivity. It is shown that for a vector size of dimension 4, there is a zone of operation, ranging from 1.0 to 1.5 bits/pixel, where adaptive vector quantization is advantageous. Despite the simplicity of the decoder, the performance of the adaptive vector quantizer is found to be comparable with block transform coding schemes for both monochrome and color pictures.

147 citations
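
A sketch of the core (non-adaptive) vector-quantization step the paper builds on, assuming 2x2 pixel blocks (dimension 4) and an LBG/k-means-style training procedure; the per-region codebook adaptation studied in the paper is not reproduced, and all names are illustrative. With dimension-4 vectors, codebooks of 16 to 64 codewords correspond to the 1.0 to 1.5 bits/pixel range mentioned above.

```python
import numpy as np

def train_codebook(vectors: np.ndarray, n_codes: int, iters: int = 20,
                   seed: int = 0) -> np.ndarray:
    """LBG/k-means-style codebook training on row vectors."""
    rng = np.random.default_rng(seed)
    codebook = vectors[rng.choice(len(vectors), n_codes, replace=False)].astype(float)
    for _ in range(iters):
        idx = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1).argmin(1)
        for k in range(n_codes):
            members = vectors[idx == k]
            if len(members):
                codebook[k] = members.mean(0)   # centroid update
    return codebook

def vq_encode(image: np.ndarray, codebook: np.ndarray) -> np.ndarray:
    """Map each non-overlapping 2x2 block to the index of its nearest codeword."""
    h, w = (image.shape[0] // 2) * 2, (image.shape[1] // 2) * 2
    blocks = (image[:h, :w].astype(float)
              .reshape(h // 2, 2, w // 2, 2)
              .transpose(0, 2, 1, 3)
              .reshape(-1, 4))
    return ((blocks[:, None, :] - codebook[None, :, :]) ** 2).sum(-1).argmin(1)
```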


Patent
28 Jan 1986
TL;DR: An image signal processing system reads an image to obtain image signals, compresses the image signals for storage, and expands the stored compressed signals to regenerate the original image, as discussed by the authors.
Abstract: An image signal processing system reads an image to obtain image signals, then compresses the image signals for storage, and expands the stored compressed signals for regenerating the original image. Efficient extraction of a part of the image is achieved by means of an extraction signal, and the image expansion is terminated when the extraction of the image is completed.

48 citations


Proceedings ArticleDOI
E. Walach1, E. Karnin
07 Apr 1986
TL;DR: The proposed approach is very simple (both conceptually and from the point of view of computational complexity), and it seems to be well suited to the psycho-visual characteristics of the human eye.
Abstract: We introduce a new approach to the issue of lossy data compression. The basic concept has been inspired by the theory of fractal geometry. The idea is to traverse the entire data string utilizing a fixed-length "yardstick". The coding is achieved by transmitting only the sign bit (to distinguish between ascent and descent) and the horizontal distance covered by the "yardstick". All data values are estimated, at the receiver's site, based on this information. We have applied this approach in the context of image compression, and the preliminary results seem to be very promising. Indeed, the proposed approach is very simple (both conceptually and from the point of view of computational complexity), and it seems to be well suited to the psycho-visual characteristics of the human eye. The paper includes a brief description of the coding concept. Next, a number of possible modifications and extensions are discussed. Finally, a number of simulations are included in order to support the theoretical derivations. Good-quality images are achieved with as little as 0.5 bit/pel.

41 citations
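
One possible reading of the "yardstick" scheme, sketched for a single scan line: advance along the data until the straight-line distance from the segment start reaches a fixed yardstick length, then emit only a sign bit and the horizontal run; the decoder interpolates linearly. The exact geometry used by the authors may differ, so this is purely illustrative.

```python
import math

def yardstick_encode(signal, L: float = 8.0):
    """Encode a 1-D signal as a start value plus (sign, horizontal_run) pairs."""
    pairs, i, n = [], 0, len(signal)
    while i < n - 1:
        j = i + 1
        # extend until the chord from sample i reaches the yardstick length L
        while j < n - 1 and math.hypot(j - i, signal[j] - signal[i]) < L:
            j += 1
        pairs.append((1 if signal[j] >= signal[i] else 0, j - i))
        i = j
    return signal[0], pairs

def yardstick_decode(first, pairs, L: float = 8.0):
    """Reconstruct by linear interpolation between yardstick endpoints."""
    out = [float(first)]
    for sign, run in pairs:
        start = out[-1]
        dy = math.sqrt(max(L * L - run * run, 0.0)) * (1 if sign else -1)
        for k in range(1, run + 1):
            out.append(start + dy * k / run)
    return out
```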


Proceedings ArticleDOI
20 Nov 1986
TL;DR: An adaptive cosine transform coding scheme capable of real-time operation incorporates human visual system properties into the coding process; results show that the subjective quality of the processed images is significantly improved even at a low bit rate.
Abstract: An adaptive cosine transform coding scheme capable of real-time operation is described. It incorporates the human visual system properties into the coding scheme. Results showed that the subjective quality of the processed images is significantly improved even at a low bit rate of 0.2 bit/pixel. With the adaptive scheme, an average of 0.26 bit/pixel can be achieved with very little perceivable degradation.

40 citations
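
The abstract does not give the HVS model, so the sketch below only illustrates the general mechanism of such schemes: weighting 8x8 DCT coefficients by a contrast-sensitivity-like curve before uniform quantization, so less visible frequencies receive coarser steps. The weighting function and step size here are invented for illustration, not taken from the paper.

```python
import numpy as np

def dct_matrix(n: int = 8) -> np.ndarray:
    """Orthonormal DCT-II basis matrix."""
    k = np.arange(n)[:, None]
    x = np.arange(n)[None, :]
    m = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * x + 1) * k / (2 * n))
    m[0] /= np.sqrt(2.0)
    return m

def hvs_weights(n: int = 8, f_peak: float = 3.0) -> np.ndarray:
    """Illustrative contrast-sensitivity-like weights, peaking at mid
    frequencies (NOT the model used in the paper)."""
    f = np.hypot(np.arange(n)[:, None], np.arange(n)[None, :])
    w = (0.2 + f / f_peak) * np.exp(-f / f_peak)
    w[0, 0] = w.max()                 # keep the DC term at full precision
    return w / w.max()

def code_block(block: np.ndarray, step: float = 16.0) -> np.ndarray:
    """Weighted uniform quantization of one 8x8 block of pixels."""
    d = dct_matrix()
    coeff = d @ (block.astype(float) - 128.0) @ d.T
    return np.round(coeff * hvs_weights() / step).astype(int)
```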


Journal ArticleDOI
TL;DR: A novel progressive quantization scheme is developed for optimal progressive transmission of transformed diagnostic images that delivers intermediately reconstructed images of comparable quality twice as fast as the more usual zig-zag sampled approach.
Abstract: In radiology, as a result of the increased utilization of digital imaging modalities, such as computed tomography (CT) and magnetic resonance imaging (MRI), over a third of the images produced in a typical radiology department are currently in digital form, and this percentage is steadily increasing. Image compression provides a means for the economical storage and efficient transmission of these diagnostic pictures. The level of coding distortion that can be accepted for clinical diagnosis purposes is not yet well-defined. In this paper we introduce some constraints on the design of existing transform codes in order to achieve progressive image transmission efficiently. The design constraints allow the image quality to be asymptotically improved such that the proper clinical diagnoses are always possible. The modified transform code outperforms simple spatial-domain codes by providing higher quality of the intermediately reconstructed images. The improvement is 10 dB for a compression factor of 256:1, and it is as high as 17.5 dB for a factor of 8:1. A novel progressive quantization scheme is developed for optimal progressive transmission of transformed diagnostic images. Combined with a discrete cosine transform, the new approach delivers intermediately reconstructed images of comparable quality twice as fast as the more usual zig-zag sampled approach. The quantization procedure is suitable for hardware implementation.

34 citations
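
The paper's progressive quantization scheme is not described in the abstract, so the sketch below shows only the zig-zag-sampled baseline it is compared against: 8x8 transform coefficients sent in successive passes so the receiver can reconstruct intermediate images of increasing quality. Names and pass size are illustrative.

```python
def zigzag_order(n: int = 8):
    """(row, col) positions of an n x n block in zig-zag order."""
    return sorted(((r, c) for r in range(n) for c in range(n)),
                  key=lambda rc: (rc[0] + rc[1],
                                  rc[0] if (rc[0] + rc[1]) % 2 else rc[1]))

def progressive_passes(coeff_blocks, coeffs_per_pass: int = 8):
    """Yield (positions, values) for each transmission pass.

    coeff_blocks: numpy array of shape (num_blocks, 8, 8) holding DCT
    coefficients. Each pass carries the next few zig-zag positions of
    every block; the receiver accumulates them and inverse-transforms to
    obtain a progressively refined picture."""
    order = zigzag_order(8)
    for start in range(0, 64, coeffs_per_pass):
        chunk = order[start:start + coeffs_per_pass]
        rows = [r for r, _ in chunk]
        cols = [c for _, c in chunk]
        yield chunk, coeff_blocks[:, rows, cols]
```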


Patent
30 May 1986
TL;DR: In this article, a compressor compresses the edges of a wide screen image to provide a compressed wide screen video signal which may be displayed on a conventional 4:3 aspect ratio receiver with the compressed edge portions largely hidden from view because of receiver overscan.
Abstract: A compressor compresses the edges of a wide screen image to provide a compressed wide screen video signal which may be displayed on a conventional 4:3 aspect ratio receiver with the compressed edge portions largely hidden from view because of receiver overscan. Complementary edge expansion restores the compressed signal to its original form for display by a wide screen receiver. The relative proportions of compression applied to the left and right edges of the wide screen images are varied to reduce the appearance of edge distortion in the compressed signal when displayed on a standard aspect ratio receiver and to reduce the appearance of loss of edge resolution in the expanded signal when displayed on a wide screen receiver.

24 citations


Proceedings ArticleDOI
10 Dec 1986
TL;DR: A recursive algorithm for DCT whose structure allows the generation of the next higher-order DCT from two identical lower order DCT's, which requires fewer multipliers and adders than other DCT algorithms.
Abstract: The Discrete Cosine Transform (DCT) has found wide applications in various fields, including image data compression, because it operates like the Karhunen-Loeve Transform for stationary random data. This paper presents a recursive algorithm for DCT whose structure allows the generation of the next higher-order DCT from two identical lower order DCT's. As a result, the method for implementing this recursive DCT requires fewer multipliers and adders than other DCT algorithms.

21 citations
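
The abstract does not spell out the paper's factorization, so the sketch below uses the well-known Lee-style split merely to show the general structure referred to: one length-N DCT-II assembled from two length-N/2 DCT-IIs (N a power of two).

```python
import math

def dct2(x):
    """Unnormalized DCT-II, X[k] = sum_n x[n] * cos(pi*(2n+1)*k / (2N)),
    computed recursively: one length-N transform from two length-N/2
    transforms (N must be a power of two)."""
    n = len(x)
    if n == 1:
        return [x[0]]
    half = n // 2
    # butterflies feeding the two half-size transforms
    even_in = [x[i] + x[n - 1 - i] for i in range(half)]
    odd_in = [(x[i] - x[n - 1 - i]) /
              (2.0 * math.cos(math.pi * (2 * i + 1) / (2 * n)))
              for i in range(half)]
    E = dct2(even_in)           # produces the even-indexed outputs
    G = dct2(odd_in) + [0.0]    # odd-indexed outputs come from G[k] + G[k+1]
    out = [0.0] * n
    for k in range(half):
        out[2 * k] = E[k]
        out[2 * k + 1] = G[k] + G[k + 1]
    return out
```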


Book ChapterDOI
01 Jan 1986
TL;DR: A novel approach to image data compression is proposed which uses a stochastic learning automaton to predict the conditional probability distribution of the adjacent pixels and these conditional probabilities are used to code the gray level values using a Huffman coder.
Abstract: A novel approach to image data compression is proposed which uses a stochastic learning automaton to predict the conditional probability distribution of the adjacent pixels. These conditional probabilities are used to code the gray level values using a Huffman coder. The system achieves a 4/1.7 compression ratio. This performance is achieved without any degradation to the received image.

17 citations
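
A rough sketch of the coding side under stated simplifications: the stochastic learning automaton is replaced here by plain conditional frequency counting, and one Huffman code is built per context (the quantized value of the left neighbour). All names and the context choice are illustrative.

```python
import heapq
import itertools
from collections import Counter, defaultdict

def huffman_code(freqs: Counter) -> dict:
    """Map symbol -> bit string for a frequency table."""
    if len(freqs) == 1:
        return {next(iter(freqs)): "0"}
    tiebreak = itertools.count()          # avoids comparing dicts in the heap
    heap = [(f, next(tiebreak), {s: ""}) for s, f in freqs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + b for s, b in c1.items()}
        merged.update({s: "1" + b for s, b in c2.items()})
        heapq.heappush(heap, (f1 + f2, next(tiebreak), merged))
    return heap[0][2]

def conditional_codes(image, levels: int = 16) -> dict:
    """One Huffman code per context; plain counting stands in for the
    paper's learning automaton, which is not reproduced here."""
    contexts = defaultdict(Counter)
    for row in image:
        prev = 0
        for p in row:
            contexts[prev * levels // 256][int(p)] += 1
            prev = int(p)
    return {ctx: huffman_code(c) for ctx, c in contexts.items()}
```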


Proceedings ArticleDOI
01 Apr 1986
TL;DR: The Warp implementation of the 2-dimensional Discrete Cosine Transform and singular value decomposition is outlined; these operations are crucial to many real-time signal processing tasks.
Abstract: Warp is a programmable systolic array machine designed by CMU and built together with its industrial partners, GE and Honeywell. The first large-scale version of the machine, with an array of 10 linearly connected cells, will become operational in January 1986. Each cell in the array is capable of performing 10 million 32-bit floating-point operations per second (10 MFLOPS). The 10-cell array can achieve a performance of 50 to 100 MFLOPS for a large variety of signal processing operations such as digital filtering, image compression, and spectral decomposition. The machine, augmented by a Boundary Processor, is particularly effective for computationally expensive matrix algorithms such as solution of linear systems, QR-decomposition and singular value decomposition, that are crucial to many real-time signal processing tasks. This paper outlines the Warp implementation of the 2-dimensional Discrete Cosine Transform and singular value decomposition.

14 citations


Proceedings Article
01 Jan 1986
TL;DR: An algorithm is presented for image data compression based upon vector quantization of the two-dimensional discrete cosine transform coefficients; the ac energies of the transformed blocks are used to classify them into eight different ac classes.
Abstract: In this paper, an algorithm is presented for image data compression based upon vector quantization of the two-dimensional discrete cosine transformed coefficients. The ac energies of the transformed blocks are used to classify them into eight different ac classes. The ac coefficients of the transformed blocks of class one are set to zero, while those of classes two through eight are transmitted by seven different code books. The dc coefficients of all eight classes are scalar quantized by an adaptive uniform quantizer. As a result, only 4.5 bits instead of eight bits are required to transmit the dc coefficient with negligible additional degradation. Overall, this algorithm requires approximately 0.75 bits per pixel and gives an average reconstruction error of 7.1.

Proceedings ArticleDOI
12 Jun 1986
TL;DR: Vector Quantization has only recently been applied to image data compression, but shows promise of outperforming more traditional transform coding methods, especially at high compression.
Abstract: This paper addresses the problem of data compression of medical imagery such as X-rays, Computer Tomography, Magnetic Resonance, Nuclear Medicine and Ultrasound. The Discrete Cosine Transform (DCT) has been extensively studied for image data compression, and good compression has been obtained without unduly sacrificing image quality. Vector Quantization has only recently been applied to image data compression, but shows promise of outperforming more traditional transform coding methods, especially at high compression. Vector Quantization is quite well suited for those applications where the images to be processed are very much alike, or can be grouped into a small number of classifications. These and similar studies continue to suffer from the lack of a uniformly agreed upon measure of image quality. This is also exacerbated by the large variety of electronic displays and viewing conditions.

Proceedings ArticleDOI
Jorma Rissanen1
01 Oct 1986
TL;DR: A lossless image compression system is described, which consists of a statistical model and an arithmetic code that collects the occurrence counts of the prediction errors, conditioned on past pixels forming a "context".
Abstract: A lossless image compression system is described, which consists of a statistical model and an arithmetic code. The model first performs a prediction of each pixel by a plane, and then it collects the occurrence counts of the prediction errors, conditioned on past pixels forming a "context". The counts are collected in a tree, constructed adaptively, and the size of the context with which each pixel is encoded is optimized.
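
A partial sketch of the two modelling stages named above, under assumptions: the "plane" prediction is taken to be the common W + N - NW predictor, and a fixed two-neighbour context replaces the adaptively grown context tree; the arithmetic coder itself is not shown.

```python
import numpy as np
from collections import Counter, defaultdict

def planar_prediction_errors(img: np.ndarray) -> np.ndarray:
    """Predict each pixel by the plane through its west, north and
    north-west neighbours (pred = W + N - NW); return the error image."""
    img = img.astype(int)
    pred = np.zeros_like(img)
    pred[1:, 1:] = img[1:, :-1] + img[:-1, 1:] - img[:-1, :-1]
    pred[0, 1:] = img[0, :-1]          # first row: predict from the west
    pred[1:, 0] = img[:-1, 0]          # first column: predict from the north
    return img - pred

def context_counts(img: np.ndarray, err_bins: int = 9):
    """Occurrence counts of (clipped) prediction errors, conditioned on a
    small context formed from neighbouring errors. A fixed context is used
    here instead of the paper's adaptively constructed context tree."""
    err = planar_prediction_errors(img)
    binned = np.clip(err, -(err_bins // 2), err_bins // 2)
    counts = defaultdict(Counter)
    for r in range(1, img.shape[0]):
        for c in range(1, img.shape[1]):
            ctx = (int(binned[r, c - 1]), int(binned[r - 1, c]))
            counts[ctx][int(binned[r, c])] += 1
    return counts                      # these counts would drive the arithmetic coder
```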


Proceedings ArticleDOI
01 May 1986
TL;DR: A nonlinear mathematical model for the human visual system was selected as a pre-processing stage for monochrome and color digital image compression and leads to an image quality metric which compares to subjective evaluations of an extensive image set with a correlation of 0.92.
Abstract: A nonlinear mathematical model for the human visual system (HVS) was selected as a pre-processing stage for monochrome and color digital image compression. Rate distortion curves and derived power spectra were used to develop coding algorithms in the preprocessed "perceptual space." Black and white images were compressed to 0.1 bit per pel. In addition, color images were compressed to 1 bit per pel (1/3 bit per pel per color) with less than 1 percent mean square error and no visible degradations. Minor distortions are in-curred with compressions down to 1/4 bit per pel (1/2 bit per pel per color). It appears that the perceptual power spectrum coding technique "puts" the noise where one cannot see it. The result is bit rates up to an order of magnitude lower than those previously obtained with comparable quality. In addition, the model leads to an image quality metric which compares to subjective evaluations of an extensive image set with a correlation of 0.92.


Proceedings ArticleDOI
Narciso Garcia1, C. Munoz, Alberto Sanz
01 May 1986
TL;DR: Hierarchical encoding, initially developed for image decomposition, is a feasible alternative for image transmission and storage; several independent compression strategies can be implemented and, therefore, applied at the same time.
Abstract: Hierarchical encoding, initially developed for image decomposition, is a feasible alternative for image transmission and storage. Several independent compression strategies can be implemented and, therefore, applied at the same time.
Lossless encoding:
- Universal statistical compression on the hierarchical code. A unique Huffman code, valid for every hierarchical transform, is built.
Lossy encoding:
- Improvement of the intermediate approximations, as this can decrease the effective bit rate for transmission applications. Interpolating schemes and non-uniform spatial out-growth help solve this problem.
- Prediction strategies on the hierarchical code. A three-dimensional predictor (space and hierarchy) on the code-pyramid reduces the information required to build new layers.
- Early branch ending. Analysis of image homogeneities detects areas of similar values that can be approximated by a unique value.
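
A minimal sketch of the kind of hierarchical decomposition these strategies operate on, assuming a simple 2x2 mean pyramid (the paper's actual transform may differ): coarser layers approximate the image, and each finer layer is coded as a correction to the up-sampled layer above it.

```python
import numpy as np

def build_pyramid(img: np.ndarray, levels: int = 4):
    """Mean pyramid: each layer halves the resolution by 2x2 averaging."""
    pyramid = [img.astype(float)]
    for _ in range(levels - 1):
        a = pyramid[-1]
        h, w = (a.shape[0] // 2) * 2, (a.shape[1] // 2) * 2
        a = a[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
        pyramid.append(a)
    return pyramid

def layer_corrections(pyramid):
    """Differences between each layer and the up-sampled coarser layer;
    transmitting the top layer plus these (entropy-coded) corrections
    reconstructs the image level by level."""
    corrections = []
    for fine, coarse in zip(pyramid[:-1], pyramid[1:]):
        up = np.kron(coarse, np.ones((2, 2)))       # nearest-neighbour up-sampling
        corrections.append(fine[:up.shape[0], :up.shape[1]] - up)
    return corrections
```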


Proceedings ArticleDOI
20 Nov 1986
TL;DR: An application of the new approach to the classical linear predictive coding (LPC) of images and an HVS-based segmentation technique for second-generation coders are discussed.
Abstract: Recently, ways to obtain a new generation of image-coding techniques have been proposed. The incorporation of human visual system (HVS) models and of image analysis tools, such as segmentation, are two defining features of these techniques. In this paper, an application of the new approach to the classical linear predictive coding (LPC) of images and an HVS-based segmentation technique for second-generation coders are discussed. In the case of LPC, the error image is encoded using an image decomposition approach and binary image coding. This improves the compression ratio while keeping the quality nearly the same. The new segmentation technique can be used in single-frame image coding applications to obtain acceptable images at extremely high compression rates.

Proceedings ArticleDOI
10 Dec 1986
TL;DR: This paper describes an image compression method based on a hierarchical segmentation scheme into polygonal homogeneous regions, forming a compact representation of region shapes together with a suitable approximation of each region's content.
Abstract: This paper describes an image compression method based on a hierarchical segmentation scheme into polygonal homogeneous regions. Adequate homogeneity criteria are selected through statistical discriminant analysis. Once segmentation is completed, the coding process consists in forming a compact representation of region shapes together with a suitable approximation of each region's content. Two types of coding are presented: a graphics-quality coding based on zero-order approximation of region content and a television-quality coding based on moment-preserving bi-level truncation.
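
The segmentation step is not sketched here; the code below only illustrates the moment-preserving bi-level truncation mentioned for the television-quality mode, applied to a rectangular block standing in for a polygonal region. The region is replaced by a bitmap and two levels chosen to preserve its mean and variance.

```python
import numpy as np

def bilevel_truncate(region: np.ndarray):
    """Moment-preserving bi-level approximation of a region.
    Returns a bitmap plus two levels preserving the region's mean and variance."""
    x = region.astype(float).ravel()
    n = x.size
    mean, std = x.mean(), x.std()
    bitmap = x >= mean
    q = int(bitmap.sum())                         # number of "high" pixels
    if q in (0, n) or std == 0.0:                 # flat region: one level suffices
        return bitmap.reshape(region.shape), mean, mean
    low = mean - std * np.sqrt(q / (n - q))
    high = mean + std * np.sqrt((n - q) / q)
    return bitmap.reshape(region.shape), low, high

def reconstruct(bitmap: np.ndarray, low: float, high: float) -> np.ndarray:
    return np.where(bitmap, high, low)
```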

Proceedings Article
01 Jan 1986
TL;DR: It is shown that VQ can be mapped onto VLSI implementation via systolic type architecture, making real time application possible and using multistage or cascade VQ high quality images can be obtained at very low bit rates for real time applications while using smaller codebooks than is necessary in single stage VQ.
Abstract: Many image compression or coding techniques have been developed to reduce the amount of bits of information needed to represent digital images. Among these, Vector Quantization (VQ) seems to have the edge; its theoretical distortion is lower than that of other block coding techniques at comparable bit rates. However, the application of Vector Quantization remains limited due to its high computational complexity. Presently, it is limited to low to medium quality image compression. In this paper it is shown that VQ can be mapped onto VLSI implementation via systolic type architecture, making real time application possible. In addition, it is shown that using multistage or cascade VQ high quality images can be obtained at very low bit rates for real time applications while using smaller codebooks than is necessary in single stage VQ. Examples of processed images are presented.
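
A minimal sketch of the multistage (cascade) idea described above: a second small codebook quantizes the residual left by the first, so two small codebooks approximate the effect of one much larger codebook at a fraction of the search cost. The codebooks are assumed to be trained beforehand (e.g., with an LBG-style procedure); the systolic VLSI mapping is not addressed here.

```python
import numpy as np

def nearest(codebook: np.ndarray, vectors: np.ndarray) -> np.ndarray:
    """Index of the nearest codeword for each row of `vectors`."""
    d = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    return d.argmin(1)

def multistage_vq_encode(vectors, codebook1, codebook2):
    """Two-stage (cascade) VQ: the second stage quantizes the residual
    left by the first stage."""
    i1 = nearest(codebook1, vectors)
    residual = vectors - codebook1[i1]
    i2 = nearest(codebook2, residual)
    return i1, i2

def multistage_vq_decode(i1, i2, codebook1, codebook2):
    return codebook1[i1] + codebook2[i2]
```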


Proceedings ArticleDOI
20 Nov 1986
TL;DR: A two-stage transform coding scheme is proposed to reduce discontinuities between subimages in low-bit-rate applications; preliminary simulation results from the proposed scheme and the traditional method are compared.
Abstract: With the advancement in computational speed, transform coding has been a promising technique in image data compression. Traditionally, an image is divided into exclusive rectangular blocks or subimages. Each subimage is a partial scene of the original image and they are processed independently. In low-bit-rate applications, block boundary effects can develop due to discontinuities between the subimages. A two-stage transform coding scheme to reduce this effect is proposed. The first-stage transformation is applied to the subimages, each being a reduced under-sampled image of the original. The second-stage transformation is applied to the transform coefficients obtained at the first stage. A simple coder with a discrete Walsh-Hadamard transform and uniform quantization is used to compare the preliminary simulation results obtained from the proposed scheme and the traditional method.

Proceedings ArticleDOI
01 May 1986
TL;DR: This approach is optimized with several steps of refinements in order to improve the compression ratio (up to 120:1) and the quality of the decoded image.
Abstract: The directional decomposition has already been introduced as a pertinent tool for image coding. In this paper, this approach is optimized with several steps of refinements in order to improve the compression ratio (up to 120:1) and the quality of the decoded image. After the presentation of the coding strategy, results are shown for various real images.

Proceedings ArticleDOI
05 May 1986
TL;DR: The Discrete Cosine Transform (DCT) is recognized as an important tool for image compression techniques and its use in image restoration is not well known as mentioned in this paper, however, it is shown that the DCT can play an interesting role in the deconvolution problem for linear imaging systems with finite, invariant and symmetric impulse response.
Abstract: The Discrete Cosine Transform (DCT) is recognized as an important tool for image compression techniques. Its use in image restoration is, however, not well known. It is the aim of this paper to provide a restoration method for a sequence of images using the DCT both for deblurring and for noise reduction. It is shown that the DCT can play an interesting role in the deconvolution problem for linear imaging systems with finite, invariant and symmetric impulse response. It is further shown that the noise reduction can be performed on an image sequence using a time-adaptive Kalman filter in the domain of the Karhunen-Loeve transform, which is approximated by the DCT.

Proceedings ArticleDOI
01 May 1986
TL;DR: Using vector quantization in a transform domain, TV images are coded by exploiting spatial redundancies of small 4x4 blocks of pixels after a DCT (or Hadamard) transform.
Abstract: First, a DCT (or Hadamard) transform is performed on these blocks. A classification algorithm ranks them into classes based on visual and transform properties. For each class, the high-energy-carrying coefficients are retained and, using vector quantization, a codebook is built for the remaining AC part of the transformed blocks. The codewords are referenced by an index; each block is then coded by specifying its DC coefficient and associated index. TV images contain many similar areas that can be coded with a unique representation. Furthermore, by including visual perception criteria for these areas in the coder, one can select the appropriate minimum information to be transmitted so that the distortion between the original and reconstructed image is minimum. Vector quantization, as demonstrated recently (1), (2), (3), can remove spatial redundancies between blocks of pixels. In this paper, this method is applied in a transform domain to take advantage of some interesting properties. A general coding scheme is given in Figure 1.

Proceedings ArticleDOI
01 May 1986
TL;DR: Two fixed rate transform coding techniques use a simple non-stationary model for images to characterize the random generators based on a training sequence and encode the new images using the learned parameters.
Abstract: Two fixed rate transform coding techniques are presented in this paper. They use a simple non-stationary model for images. This model supposes that the transformed sub-blocks of an image signal could be generated by switching the outputs of several random vector generators. It leads to two adaptive block quantizer techniques which characterize the random generators based on a training sequence and encode the new images using the learned parameters. These techniques show some improvements with respect to the well-known Chen-Smith method. The encoder-decoder complexity is lower and the techniques are easily hardware implementable.

Proceedings ArticleDOI
01 Apr 1986
TL;DR: This work proposes a method that uses Adaptive Delta Modulation with one stage look ahead as a preprocessor in conjunction with vector quantization of the appropriately defined error signals to reduce the dynamic range of the signals encoded using VQ.
Abstract: In the search for reduction of the unacceptably high bit rates of PCM digital images, vector quantization seems to have the edge: its theoretical distortion is lower than that of other block coding methods at comparable bit rates. However, to obtain high quality pictures, when used as a stand-alone process, a vector quantizer requires a codebook of large size and is excessively computationally intensive. For the dynamic range of the original images, 0 to 255, it is doubtful that a single optimal and universal codebook of manageable size can be obtained. Hence, to reduce the dynamic range of the signals encoded using VQ, we propose a method that uses Adaptive Delta Modulation with one-stage look-ahead as a preprocessor, in conjunction with vector quantization of the appropriately defined error signals. Using such a scheme, it is shown that good quality composite color video images can be obtained using 1.5 to 2.0 bits per pixel and codebooks which are small and universal.
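
A sketch of the delta-modulation preprocessing stage only, using a conventional step-size adaptation rule; the paper's one-stage look-ahead and the subsequent vector quantization of the error signals are not reproduced, and all parameters are illustrative.

```python
import numpy as np

def adm_encode(signal, step0: float = 4.0, grow: float = 1.5,
               shrink: float = 0.66, step_min: float = 1.0,
               step_max: float = 64.0):
    """Basic adaptive delta modulation: one bit per sample, with the step
    size enlarged after equal consecutive bits and reduced otherwise.
    Returns the bit stream and the residual errors, which would then be
    handed to the vector-quantization stage."""
    bits, errors = [], []
    est, step, prev_bit = float(signal[0]), step0, None
    for s in signal:
        bit = 1 if s >= est else 0
        est += step if bit else -step
        errors.append(s - est)                    # residual for the VQ stage
        step *= grow if bit == prev_bit else shrink
        step = min(max(step, step_min), step_max)
        bits.append(bit)
        prev_bit = bit
    return np.array(bits), np.array(errors)
```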

Proceedings ArticleDOI
10 Dec 1986
TL;DR: It is shown that the performance of image coding techniques can be enhanced via the utilization of a priori knowledge, and pre-enhancement of the detected features within the image prior to coding is shown to noticeably reduce the severity of the coding degradation.
Abstract: It is shown that the performance of image coding techniques can be enhanced via the utilization of a priori knowledge. Critical features of the image are first identified and then they are accounted for more favorably in the coding process. For satellite imagery, thin lines and point objects constitute critical features of interest whose preservation in the coding process is crucial. For the human visual system, the impact of the coding degradation at low rates is much more detrimental for these features than for the edges which constitute boundaries between regions of different contrasts. A highly non-linear, matched-filter-based algorithm to detect such features has been developed. Pre-enhancement (highlighting) of the detected features within the image prior to coding is shown to noticeably reduce the severity of the coding degradation. A yet more robust approach is the pre-enhancement of the slightly smoothed image. This operation gives rise to an image in which all critical thin lines and point objects are crisp and well-defined, at the cost of non-essential edges of the image being slightly rounded off. For the transform coding techniques, distortion parameter readjustment and variable-block-size coding provide promising alternatives to the pre-enhancement approaches. In the former, the sub-blocks containing any part of the detected critical features are kept within a low distortion bound via the local rate adjustment mechanism. The latter approach is similar to the former except that the image is partitioned into varying-size sub-blocks based on the extracted feature map.

Proceedings ArticleDOI
01 May 1986
TL;DR: In the framework of directional decomposition-based image coding, a new strategy is proposed in order to improve previous results and particular attention is devoted to the representation of the high-frequency component of the image.
Abstract: In the framework of directional decomposition-based image coding, a new strategy is proposed in order to improve previous results. Particular attention is devoted to the representation of the high-frequency component of the image, that is critical for both the quality of the decoded picture and the compression ratio obtained.