
Showing papers on "Lossless compression published in 1990"


Journal ArticleDOI
TL;DR: The theoretical bases behind the direct ECG data compression schemes are presented and classified into three categories: tolerance-comparison compression, DPCM, and entropy coding methods and a framework for evaluation and comparison of ECG compression schemes is presented.
Abstract: Electrocardiogram (ECG) compression techniques are compared, and a unified view of these techniques is established. ECG data compression schemes are presented in two major groups: direct data compression and transformation methods. The direct data compression techniques are ECG differential pulse code modulation (DPCM) and entropy coding, AZTEC, Turning-point, CORTES, Fan and SAPA algorithms, peak-picking, and cycle-to-cycle compression methods. The transformation methods include Fourier, Walsh, and Karhunen-Loeve transforms. The theoretical bases behind the direct ECG data compression schemes are presented and classified into three categories: tolerance-comparison compression, DPCM, and entropy coding methods. A framework for evaluation and comparison of ECG compression schemes is presented.

690 citations


Patent
18 Jun 1990
TL;DR: In this paper, a method and apparatus for compressing digital data that is represented as a sequence of characters drawn from an alphabet is presented, where an input data block is processed into an output data block composed of sections of variable length.
Abstract: A method and apparatus for compressing digital data that is represented as a sequence of characters drawn from an alphabet. An input data block is processed into an output data block composed of sections of variable length. Unlike most prior-art methods, which emphasize the creation of a dictionary comprising a tree with nodes or a set of strings, the present invention creates its own pointers from the sequence of characters previously processed and places the highest priority on maximizing the data rate-compression factor product. The use of previously input data as the dictionary, combined with a hashing algorithm to find candidates for string matches and the absence of a traditional string-matching table and its associated search time, allows the compressor to process the input data block very quickly. The result is a high data rate-compression factor product, achieved because there is no string storage table and each match is tested against only one candidate string.
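The mechanism claimed here — a hash of the next few input characters pointing at a single previous position, which is the only match candidate tested — can be sketched in a few lines. This is not the patented method itself; the token format, window size, and minimum match length below are illustrative assumptions.

```python
def compress(data: bytes, min_match=4, window=4096):
    """Minimal LZ77-style sketch: a hash of the next few bytes points to one
    previous position; that single candidate is the only match tested."""
    table = {}          # hash key (4-byte prefix) -> last position seen
    out = []            # tokens: ('lit', byte) or ('copy', offset, length)
    i = 0
    while i < len(data):
        key = data[i:i + min_match]
        cand = table.get(key)
        table[key] = i
        if cand is not None and i - cand <= window:
            # Extend the single candidate match as far as it goes.
            length = 0
            while (i + length < len(data)
                   and data[cand + length] == data[i + length]):
                length += 1
            if length >= min_match:
                out.append(('copy', i - cand, length))
                i += length
                continue
        out.append(('lit', data[i]))
        i += 1
    return out

def decompress(tokens):
    buf = bytearray()
    for t in tokens:
        if t[0] == 'lit':
            buf.append(t[1])
        else:
            _, offset, length = t
            for _ in range(length):
                buf.append(buf[-offset])   # byte-by-byte copy handles overlaps
    return bytes(buf)

sample = b"abracadabra abracadabra abracadabra"
assert decompress(compress(sample)) == sample
```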

158 citations


Patent
20 Feb 1990
TL;DR: In this paper, the authors propose an image processing apparatus consisting of a memory for storing image data, a loss-compression circuit, an expansion circuit for expanding the loss-compressed image data, and a difference circuit for calculating the difference between the original image data in the image memory and the expanded image data from the expansion circuit.
Abstract: An image processing apparatus includes a memory for storing image data, a loss-compression circuit for loss-compressing the image data, an expansion circuit for expanding the loss-compressed image data, and a difference circuit for calculating a difference between the original image data of the image memory and the expanded image data of the expansion circuit. A lossless-compression circuit lossless-compresses the difference image data obtained by the difference circuit. A multiplexer multiplexes the loss-compressed image data output from the loss-compression circuit with the lossless-compressed difference data obtained by the lossless-compression circuit.
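The two-path structure (lossy base layer plus losslessly coded residual) is straightforward to emulate in software. In the sketch below, coarse quantization stands in for the loss-compression circuit and zlib for the lossless-compression circuit; both are assumptions for illustration, not the circuits of the patent.

```python
import zlib
import numpy as np

def encode(image: np.ndarray, step: int = 16):
    """Lossy base layer (coarse quantization) + losslessly coded residual."""
    lossy = (image // step) * step                     # stand-in "loss-compression circuit"
    residual = image.astype(np.int16) - lossy          # "difference circuit"
    base_stream = zlib.compress(lossy.astype(np.uint8).tobytes())
    residual_stream = zlib.compress(residual.astype(np.int16).tobytes())
    return base_stream, residual_stream                # "multiplexed" as a pair

def decode(base_stream, residual_stream, shape):
    lossy = np.frombuffer(zlib.decompress(base_stream), dtype=np.uint8).reshape(shape)
    residual = np.frombuffer(zlib.decompress(residual_stream), dtype=np.int16).reshape(shape)
    return (lossy.astype(np.int16) + residual).astype(np.uint8)

img = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)
b, r = encode(img)
assert np.array_equal(decode(b, r, img.shape), img)    # exact reconstruction
```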

131 citations


Journal ArticleDOI
TL;DR: This letter has found that using the wavelet transform in time and space, combined with a multiresolution approach, leads to an efficient and effective method of compression.
Abstract: This letter presents results on using wavelet transforms in both space and time for compression of real-time digital video data. The advantages of the wavelet transform for static image analysis are well known. We have found that using the wavelet transform in time and space, combined with a multiresolution approach, leads to an efficient and effective method of compression. In addition, the computational requirements are considerably less than for other compression methods and are more suited to VLSI implementation. Some preliminary results of compression on a sample video are presented.
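As a rough illustration of transforming in both time and space, the sketch below applies a single-level Haar wavelet transform along the temporal and the two spatial axes of a small video cube and keeps only the largest coefficients. It is a toy stand-in written with numpy, not the authors' filter bank or full multiresolution scheme.

```python
import numpy as np

def haar_1d(x, axis):
    """One level of the Haar transform along one axis (length must be even)."""
    even = np.take(x, np.arange(0, x.shape[axis], 2), axis=axis)
    odd = np.take(x, np.arange(1, x.shape[axis], 2), axis=axis)
    return np.concatenate([(even + odd) / 2.0, (even - odd) / 2.0], axis=axis)

def haar_3d(video):
    """Apply the transform along time (axis 0) and both spatial axes."""
    out = video.astype(float)
    for axis in (0, 1, 2):
        out = haar_1d(out, axis)
    return out

video = np.random.rand(8, 32, 32)            # (frames, rows, cols)
coeffs = haar_3d(video)
# Crude "compression": keep only the largest 10 percent of coefficients.
threshold = np.quantile(np.abs(coeffs), 0.9)
kept = np.where(np.abs(coeffs) >= threshold, coeffs, 0.0)
print("nonzero coefficients:", np.count_nonzero(kept), "of", coeffs.size)
```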

97 citations


Journal ArticleDOI
TL;DR: Tree compression can be seen as a trade-off problem between time and space in which the authors can choose different strategies depending on whether they prefer better compression results or more efficient operations in the compressed structure.
Abstract: Different methods for compressing trees are surveyed and developed. Tree compression can be seen as a trade-off problem between time and space in which we can choose different strategies depending on whether we prefer better compression results or more efficient operations in the compressed structure. Of special interest is the case where space can be saved while preserving the functionality of the operations; this is called data optimization. The general compression scheme employed here consists of separate linearization of the tree structure and the data stored in the tree. Also some applications of the tree compression methods are explored. These include the syntax-directed compression of program files, the compression of pixel trees, trie compaction and dictionaries maintained as implicit data structures.
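The separate linearization of tree structure and tree data mentioned above can be illustrated with a binary tree: a preorder bit sequence records the shape (1 for a node, 0 for an empty subtree) while the node values are emitted in the same order. This is a generic sketch of the idea, not one of the specific methods surveyed in the paper.

```python
class Node:
    def __init__(self, value, left=None, right=None):
        self.value, self.left, self.right = value, left, right

def linearize(node, shape, data):
    """Preorder walk: shape gets 1 for a node, 0 for an empty subtree;
    data collects the stored values in the same order."""
    if node is None:
        shape.append(0)
        return
    shape.append(1)
    data.append(node.value)
    linearize(node.left, shape, data)
    linearize(node.right, shape, data)

def delinearize(shape, data):
    """Rebuild the tree from the two separate streams."""
    it_shape, it_data = iter(shape), iter(data)
    def build():
        if next(it_shape) == 0:
            return None
        node = Node(next(it_data))
        node.left = build()
        node.right = build()
        return node
    return build()

tree = Node('a', Node('b', Node('d')), Node('c'))
shape, data = [], []
linearize(tree, shape, data)
print(shape)   # [1, 1, 1, 0, 0, 0, 1, 0, 0]
print(data)    # ['a', 'b', 'd', 'c']
rebuilt = delinearize(shape, data)
assert rebuilt.left.left.value == 'd'
```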

81 citations


Journal ArticleDOI
TL;DR: A model of the CSF is described that includes changes as a function of image noise level by using the concepts of internal visual noise, and is tested in the context of image compression with an observer study.
Abstract: The visual contrast sensitivity function (CSF) has found increasing use in image compression as new algorithms optimize the display-observer interface in order to reduce the bit rate and increase the perceived image quality. In most compression algorithms, increasing the quantization intervals reduces the bit rate at the expense of introducing more quantization error. The CSF can be used to distribute this error as a function of spatial frequency such that it is undetectable by the human observer. Thus, instead of being mathematically lossless, the compression algorithm can be designed to be visually lossless, with the advantage of a significantly reduced bit rate. However, the CSF is strongly affected by image noise, changing in both shape and peak sensitivity. This work describes a model of the CSF that includes these changes as a function of image noise level by using the concepts of internal visual noise. The model is tested in the context of image compression with an observer study.

71 citations


Patent
26 Apr 1990
TL;DR: In this paper, a computer program is used to compress color graphics animation sequences for local storage and transmission to remotely networked sites in a significantly reduced size, allowing playback of these animations on a wide variety of workstations ranging from personal computer class machines to high end scientific and engineering workstations such as Sun and Silicon Graphics machines.
Abstract: A computer program is used to compress color graphics animation sequences for local storage and transmission to remotely networked sites in a significantly reduced size. It allows playback of these animations on a wide variety of workstations ranging from personal computer class machines to high end scientific and engineering workstations such as Sun and Silicon Graphics machines. The basic raster image to be compressed is full color with 24 bits per pixel. The compressed form of a raster image is such that the original image can be reconstructed exactly, with no loss of information, on the same computer that compressed it (typically a large mainframe supercomputer) or on a smaller scientific or engineering workstation. A precompression conversion of a full color image to palette form is possible if the computer on which the reconstruction and display is to take place has limited color capabilities; this is the case for lower cost personal computers and workstations. The time required to compress, and the size of the compressed form, are allowed to be data dependent in order to achieve 100 percent lossless compression. The algorithm is specially designed for raster animations, in such a way that a reasonable frame rate (number of frames per second during decompression) can be achieved across a wide range of personal computers, workstations and mainframes; typical frame rates are 5 to 30 frames per second on conventional workstations. The compression ratio achieved is nominally 100:1 for computer generated animations of 3-dimensional scenes or data visualization.

60 citations


Journal ArticleDOI
TL;DR: The notion of a formal grammar as a flexible model of text generation that encompasses most of the models offered before as well as extending the possibility of compression to a much more general class of languages is proposed.
Abstract: Text compression is of considerable theoretical and practical interest. It is, for example, becoming increasingly important for satisfying the requirements of fitting a large database onto a single CD-ROM. Many of the compression techniques discussed in the literature are model based. We here propose the notion of a formal grammar as a flexible model of text generation that encompasses most of the models offered before and, in principle, extends the possibility of compression to a much more general class of languages. Assuming a general model of text generation, a derivation is given of the well known Shannon entropy formula, making possible a theory of information based upon text representation rather than on communication. The ideas are shown to apply to a number of commonly used text models. Finally, we focus on a Markov model of text generation, suggest an information theoretic measure of similarity between two probability distributions, and develop a clustering algorithm based on this measure. This algorithm allows us to cluster Markov states, and thereby base our compression algorithm on a smaller number of probability distributions than would otherwise have been required. A number of theoretical consequences of this approach to compression are explored, and a detailed example is given.
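The clustering step can be illustrated with a stand-in similarity measure. The paper proposes its own information-theoretic measure; the sketch below assumes the Jensen-Shannon divergence instead, and greedily merges Markov states whose next-symbol distributions are close. The threshold and the example distributions are hypothetical.

```python
import numpy as np

def js_divergence(p, q):
    """Jensen-Shannon divergence: a symmetric, information-theoretic measure
    of similarity between two probability distributions (a stand-in for the
    measure suggested in the paper)."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    m = 0.5 * (p + q)
    def kl(a, b):
        mask = a > 0
        return np.sum(a[mask] * np.log2(a[mask] / b[mask]))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def cluster_states(distributions, threshold=0.05):
    """Greedy clustering: merge Markov states whose next-symbol
    distributions are closer than the threshold."""
    clusters = []   # each cluster is a list of state indices
    for i, dist in enumerate(distributions):
        for cluster in clusters:
            rep = distributions[cluster[0]]
            if js_divergence(rep, dist) < threshold:
                cluster.append(i)
                break
        else:
            clusters.append([i])
    return clusters

# Next-symbol distributions of four hypothetical Markov states over {a, b, c}.
states = [
    [0.70, 0.20, 0.10],
    [0.68, 0.22, 0.10],   # nearly identical to state 0 -> same cluster
    [0.10, 0.10, 0.80],
    [0.33, 0.33, 0.34],
]
print(cluster_states(states))   # [[0, 1], [2], [3]]
```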

26 citations


Journal ArticleDOI
N. Fache, D. De Zutter
TL;DR: In this paper, a coupled transmission line model describing the two fundamental modes of any two-conductor (above a ground plane or shielded) dispersive or nondispersive lossless waveguide system is given.
Abstract: A coupled transmission line model describing the two fundamental modes of any two-conductor (above a ground plane or shielded) dispersive or nondispersive lossless waveguide system is given. The model is based on a power-current formulation of the impedances but does not need an a priori supposition about the power distribution over each transmission line. The analysis is extended to lossy structures and to the multiconductor situation. Impedance calculations for a typical coupled microstrip configuration are used to illustrate the approach.

23 citations


Journal ArticleDOI
TL;DR: In this paper, progress is reported on the open problem of finding a structure that covers all 2D FIR lossless matrices of a given degree, and a structure which completely covers a well-defined subclass of 2D digital FIR lossless matrices is obtained.
Abstract: The role of one-dimensional (1-D) digital finite impulse response (FIR) lossless matrices in the design of FIR perfect reconstruction quadrature mirror filter (QMF) banks has been explored previously. Structures which can realize the complete family of FIR lossless transfer matrices have also been developed, with QMF application in mind. For the case of 2-D QMF banks, the same concept of lossless polyphase matrix has been used to obtain perfect reconstruction. However, the problem of finding a structure to cover all 2-D FIR lossless matrices of a given degree has not been solved. Progress in this direction is reported. A structure which completely covers a well-defined subclass of 2-D digital FIR lossless matrices is obtained.

21 citations


Proceedings ArticleDOI
01 Jul 1990
TL;DR: In this paper, a 3D cosine transform based image compression method was proposed to take advantage of the correlations between adjacent pixels in an image for time-sequenced studies.
Abstract: Transform based compression methods achieve their effect by taking advantage of the correlations between adjacent pixels in an image. The increasing use of three-dimensional imaging studies in radiology requires new techniques for image compression. For time-sequenced studies such as digital subtraction angiography, pixels are correlated between images, as well as within an image. By using three-dimensional cosine transforms, correlations in time as well as space can be exploited for image compression. Sequences of up to eight 512 x 512 x 8-bit images were compressed using a single full volume three-dimensional cosine transform, followed by quantization and bit-allocation. The quantization process is a uniform thresholding type and an adaptive three-dimensional bit-allocation table is used. The resultant image fidelity vs. compression ratio was shown to be superior to that achieved by compressing each image individually.
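A software analogue of this pipeline (3-D DCT over time and space, then quantization) might look like the following sketch. SciPy's dctn is used for the transform, and uniform thresholding stands in for the paper's adaptive bit-allocation table; the frame sizes and keep fraction are assumptions.

```python
import numpy as np
from scipy.fft import dctn, idctn

def compress_sequence(volume, keep_fraction=0.05):
    """3-D DCT over (time, row, col), followed by uniform thresholding:
    a simplified stand-in for the paper's quantization and bit allocation."""
    coeffs = dctn(volume.astype(float), norm='ortho')
    threshold = np.quantile(np.abs(coeffs), 1.0 - keep_fraction)
    return np.where(np.abs(coeffs) >= threshold, coeffs, 0.0)

def reconstruct(coeffs):
    return idctn(coeffs, norm='ortho')

# Eight synthetic 64 x 64 frames standing in for a 512 x 512 x 8-bit sequence.
frames = np.random.rand(8, 64, 64) + np.linspace(0, 1, 8)[:, None, None]
sparse = compress_sequence(frames)
restored = reconstruct(sparse)
rmse = np.sqrt(np.mean((frames - restored) ** 2))
print(f"kept {np.count_nonzero(sparse)} of {sparse.size} coefficients, RMSE {rmse:.4f}")
```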


Proceedings ArticleDOI
02 Dec 1990
TL;DR: The VLSI design of a new run-length encoding chip that can provide on-the-fly compression of images within real-time networks is described.
Abstract: A scheme that can be used to compress image and textlike data efficiently and to meet the real-time constraints of distributed simulation networks is presented. An initial transformation step is applied to image data in order to induce desirable properties that allow the efficient application of popular lossless text compression schemes to image data. The VLSI design of a new run-length encoding chip that can provide on-the-fly compression of images within real-time networks is described.
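The abstract does not spell out the transformation step beyond saying it induces properties favourable to lossless text-style coders; a common choice with that effect is differencing (here, XOR) against the previous row, followed by run-length encoding. The sketch below assumes exactly that and is illustrative only, not the chip's actual scheme.

```python
import numpy as np

def xor_previous_row(image: np.ndarray) -> np.ndarray:
    """Assumed transformation step: XOR each row with the row above so that
    vertically similar image regions become long runs of zero bytes."""
    out = image.copy()
    out[1:] ^= image[:-1]     # uses the original rows, so the step is invertible
    return out

def rle_encode(data: bytes):
    """Byte-wise run-length encoding: (value, run length) pairs, runs <= 255."""
    runs = []
    i = 0
    while i < len(data):
        j = i
        while j < len(data) and data[j] == data[i] and j - i < 255:
            j += 1
        runs.append((data[i], j - i))
        i = j
    return runs

def rle_decode(runs) -> bytes:
    return b''.join(bytes([value]) * length for value, length in runs)

image = np.zeros((32, 32), dtype=np.uint8)
image[8:24, 8:24] = 200                      # a flat square region
transformed = xor_previous_row(image)
runs = rle_encode(transformed.tobytes())
assert rle_decode(runs) == transformed.tobytes()
print(f"{image.size} bytes -> {len(runs)} runs")
```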

Proceedings ArticleDOI
02 Dec 1990
TL;DR: The proposed architecture is systolic and uses the principles of pipelining and parallelism in order to obtain high speed and throughput in the LZ technique for data compression.
Abstract: The authors describe a parallel algorithm and architecture for implementing the LZ technique for data compression. Data compression is the reduction of redundancy in data representation in order to decrease storage and communication costs. The LZ-based compression method is a very powerful technique and gives very high compression efficiency for text as well as image data. The proposed architecture is systolic and uses the principles of pipelining and parallelism in order to obtain high speed and throughput. The order of complexity of the computations is reduced from n² to n. The data compression hardware can be integrated into real time systems so that data can be compressed and decompressed on-the-fly. The basic processor cell for the systolic array is currently being implemented using CMOS VLSI technology.

Journal ArticleDOI
TL;DR: Digital lossless transfer matrices and vectors (power-complementary vectors) are discussed for applications in digital filter bank systems, both single rate and multirate.
Abstract: Digital lossless transfer matrices and vectors (power-complementary vectors) are discussed for applications in digital filter bank systems, both single rate and multirate. Two structures for the implementation of rational lossless systems are presented. The first structure represents a characterization of single-input, multioutput lossless systems in terms of complex planar rotations, whereas the second structure offers a representation of M-input, M-output lossless systems in terms of unit-norm vectors. This property makes the second structure desirable in applications that involve optimization of the parameters. Modifications of the second structure for implementing single-input, multioutput, and lossless bounded real (LBR) systems are also included. The main importance of the structures is that they are completely general, i.e. they span the entire set of M*1 and M*M lossless systems. This is demonstrated by showing that any such system can be synthesized using these structures. The structures are also minimal in the sense that they use the smallest number of scalar delays and parameters to implement a lossless system of given degree and dimensions. A design example to demonstrate the main results is included.
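The role of planar rotations can already be seen in the memoryless (degree-zero) case: a cascade of Givens rotations builds an orthogonal matrix, which preserves signal energy, the defining lossless property. The sketch below shows only this special case; the structures in the paper interleave delays with the rotation stages, which this toy example omits.

```python
import numpy as np

def planar_rotation(size, i, j, theta):
    """A Givens (planar) rotation acting on coordinates i and j of an
    M-dimensional space."""
    G = np.eye(size)
    c, s = np.cos(theta), np.sin(theta)
    G[i, i], G[j, j] = c, c
    G[i, j], G[j, i] = -s, s
    return G

def lossless_matrix(size, angles):
    """Cascade of planar rotations over all coordinate pairs -> orthogonal matrix."""
    M = np.eye(size)
    k = 0
    for i in range(size):
        for j in range(i + 1, size):
            M = planar_rotation(size, i, j, angles[k]) @ M
            k += 1
    return M

rng = np.random.default_rng(0)
size = 4
angles = rng.uniform(0, 2 * np.pi, size * (size - 1) // 2)
M = lossless_matrix(size, angles)
x = rng.standard_normal(size)
# Losslessness in the memoryless case: output energy equals input energy.
assert np.isclose(np.linalg.norm(M @ x), np.linalg.norm(x))
assert np.allclose(M.T @ M, np.eye(size))
```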

J. Waite
03 Dec 1990
TL;DR: In this paper, the authors present a tutorial introduction to the theory of iterated function systems (IFSs) as applied to digital images and digital image compression, first developed by Jacquin (see ICASSP '90).
Abstract: Presents a tutorial introduction to the theory of iterated function systems (IFSs) as applied to digital images and digital image compression, first developed by Jacquin (see ICASSP '90). There have been a number of excellent mathematical accounts of IFS theory applied to images but the aim is to show, using only elementary methods, why the IFS approach to digital image compression works. The author describes how an arbitrary image can be encoded using an IFS starting with a very brief overview of IFS based coding.

Proceedings ArticleDOI
01 Jun 1990
TL;DR: This paper introduces a reduced-difference pyramid data structure where the number of nodes, corresponding to a set of decorrelated difference values, is exactly equal to the number of pixels.
Abstract: Pyramid data structures have found an important role in progressive image transmission. In these data structures, the image is hierarchically represented where each level corresponds to a reduced-resolution approximation. To achieve progressive image transmission, the pyramid is transmitted starting from the top level. However, in the usual pyramid data structures, extra significant bits may be required to accurately record the node values, the number of data to be transmitted may be expanded and the node values may be highly correlated. In this paper, we introduce a reduced-difference pyramid data structure where the number of nodes, corresponding to a set of decorrelated difference values, is exactly equal to the number of pixels. Experimental results demonstrate that the reduced-difference pyramid results in lossless progressive image transmission with some degree of compression. By using an appropriate interpolation method, reasonable quality approximations are achieved at a bit rate less than 0.1 bits/pixel and excellent quality at a bit rate of about 1.2 bits/pixel.
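A simplified version of the counting argument: keep one pixel per 2 x 2 block at each level and store the other three pixels of the block as differences from the retained one, so the total number of stored values equals the number of pixels and reconstruction is exact. The paper's actual reduced-difference construction and interpolation are more refined than this sketch, which is only meant to show the node-count property and the coarse-to-fine transmission order.

```python
import numpy as np

def build_pyramid(image):
    """Simplified reduced-difference-style pyramid: at each level keep the
    top-left pixel of every 2x2 block and store the other three pixels as
    differences from it. Total stored values == number of pixels."""
    levels = []
    current = image.astype(np.int32)
    while current.shape[0] > 1:
        kept = current[0::2, 0::2]
        diffs = np.stack([current[0::2, 1::2] - kept,
                          current[1::2, 0::2] - kept,
                          current[1::2, 1::2] - kept])
        levels.append(diffs)
        current = kept
    return current, levels[::-1]    # top value first, then coarse-to-fine diffs

def reconstruct(top, levels):
    current = top
    for diffs in levels:
        h, w = current.shape
        full = np.zeros((2 * h, 2 * w), dtype=np.int32)
        full[0::2, 0::2] = current
        full[0::2, 1::2] = current + diffs[0]
        full[1::2, 0::2] = current + diffs[1]
        full[1::2, 1::2] = current + diffs[2]
        current = full
    return current

image = np.random.randint(0, 256, size=(16, 16))
top, levels = build_pyramid(image)
stored = top.size + sum(d.size for d in levels)
assert stored == image.size                               # node count equals pixel count
assert np.array_equal(reconstruct(top, levels), image)    # lossless reconstruction
```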

Journal ArticleDOI
TL;DR: This study examines whether ROI compression can provide greater compression or diagnostic accuracy than uniform quadtree compression in single CT images from 75 consecutive abdominal examinations.
Abstract: A quadtree-based data compression algorithm can provide different levels of compression within and outside of regions of interest (ROIs). The current study examines whether ROI compression can provide greater compression or diagnostic accuracy than uniform quadtree compression. In 75 single CT images from 75 consecutive abdominal examinations, 43 abnormalities were identified and surrounded by ROIs. Three radiologists interpreted the images following (1) 50:1 compression of the entire image; (2) ROI compression at five decreasing compression ratios (with 50:1 compression outside the ROI); and (3) reversible (lossless) compression of the entire image. Reversible compression (compression ratio 3:1) yielded a sensitivity of 96%. ROI compression of 15:1 was achieved with no loss of sensitivity; ROI compression of 28:1 yielded a sensitivity of 91% (not significantly different). At any given compression ratio, diagnostic sensitivity was greater with ROI compression than with uniform quadtree compression. For purposes of image archiving, quadtree-based ROI compression is superior to uniform compression of CT images.
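A generic quadtree with a region-dependent error tolerance captures the idea of ROI compression: blocks inside the region of interest must be nearly uniform before they become leaves, while background blocks may be merged aggressively. This sketch is not the study's algorithm; the thresholds, test image, and ROI below are all hypothetical.

```python
import numpy as np

def quadtree(block, origin, roi, tol_roi=1.0, tol_bg=20.0, min_size=2):
    """Recursively split a square block; a leaf stores only the block mean.
    The allowed error is tight inside the region of interest (ROI) and loose
    outside, so the ROI keeps more detail."""
    y, x = origin
    size = block.shape[0]
    inside = roi[y:y + size, x:x + size].any()
    tol = tol_roi if inside else tol_bg
    if size <= min_size or block.std() <= tol:
        return {'leaf': True, 'mean': float(block.mean()), 'size': size}
    half = size // 2
    children = []
    for dy in (0, half):
        for dx in (0, half):
            sub = block[dy:dy + half, dx:dx + half]
            children.append(quadtree(sub, (y + dy, x + dx), roi,
                                     tol_roi, tol_bg, min_size))
    return {'leaf': False, 'children': children, 'size': size}

def count_leaves(node):
    if node['leaf']:
        return 1
    return sum(count_leaves(c) for c in node['children'])

image = np.add.outer(np.arange(64.0), np.arange(64.0))   # smooth synthetic test image
roi = np.zeros((64, 64), dtype=bool)
roi[16:32, 16:32] = True                                  # hypothetical abnormality region
tree = quadtree(image, (0, 0), roi)
print("leaves stored:", count_leaves(tree), "vs", image.size, "pixels")
```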

Book ChapterDOI
01 Jan 1990-Sequence
TL;DR: The authors use the term data to mean digital data: data that is represented as a sequence of characters drawn from the input alphabet Σ. Lossless compression must allow exact reconstruction of the original data, whereas lossy compression need only reconstruct an acceptable approximation, where acceptability is defined by fidelity criteria provided to the algorithm.
Abstract: We use the term data to mean digital data: data that is represented as a sequence of characters drawn from the input alphabet Σ. Data compression is the process of encoding (“compressing”) a body of data into a smaller body of data. With lossless data compression, it must be possible for the compressed data to be decoded (“decompressed”) back to the original data, whereas with lossy data compression, compressed data need only be decoded back to some acceptable approximation to the original data (where “acceptable” is defined by fidelity criteria that are provided to the algorithm).
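The distinction can be made concrete with a short check, assuming Python's standard zlib as the lossless coder and coarse rounding as a stand-in lossy coder (neither choice comes from the chapter itself):

```python
import zlib

original = b"lossless compression must restore the input exactly" * 10

# Lossless: the decoded data is identical to the original.
decoded = zlib.decompress(zlib.compress(original))
assert decoded == original

# Lossy (illustrative stand-in): quantize each byte to a multiple of 8.
# Decoding yields only an approximation, judged against a fidelity criterion.
lossy = bytes((b // 8) * 8 for b in original)
max_error = max(abs(a - b) for a, b in zip(original, lossy))
assert lossy != original and max_error <= 7
```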

Proceedings ArticleDOI
01 Aug 1990
TL;DR: A technique is presented for intraframe color image data compression which produces visually lossless imagery compared to the original because predicted values are used for codebook selection instead of the computation and coding of a residual signal.
Abstract: A technique is presented for intraframe color image data compression which produces visually lossless imagery compared to the original. This algorithm consists of a color vector quantizer operating in the Luv uniform color space, followed by a reversible codeword assignment strategy that uses prediction to achieve conditional-entropy-type bit rates. Unlike differential pulse code modulation (DPCM), predicted values are used for codebook selection instead of the computation and coding of a residual signal. 1. INTRODUCTION: Frequently in digital imaging applications, data compression techniques are required due to system memory and/or transmission speed constraints that cannot be satisfied by quantization alone for a given level of image quality. Examples include television, facsimile, and other digital image storage applications. As an illustration of the amount of data necessary to represent an original digital image, consider a 512 x 512 monochrome image with 8 bits/pixel of tonal resolution. These numbers are typical of display applications. In uncompressed form this image requires 256

01 Jan 1990
TL;DR: In this paper, a method for improving the compression ratio of the Lempel-Ziv data compression algorithm is presented, and curves comparing the compression efficiencies of the improved Lempel-Ziv algorithm, the Lempel-Ziv-Welch algorithm, and the Lempel-Ziv algorithm are given.
Abstract: Recently, people have become interested in improving the capacity of storage systems through the use of information lossless data compression. Several algorithms are being investigated and implemented. One such algorithm is the Lempel-Ziv data compression algorithm. However, before implementing any algorithm, attempts need to be made to optimize the algorithm's compression efficiency over a wide range of data. In this talk, a method for improving the compression ratio of the Lempel-Ziv data compression algorithm is presented. First, the Lempel-Ziv algorithm is explained. This is followed by an explanation of the improved Lempel-Ziv algorithm. Finally, curves comparing the compression efficiencies of the improved Lempel-Ziv algorithm, the Lempel-Ziv algorithm, and the Lempel-Ziv-Welch algorithm are given.
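The talk's specific improvement is not described in the abstract, but one of the baselines it is compared against can be sketched: below is a textbook Lempel-Ziv-Welch coder, written for clarity rather than speed (fixed-width integer codes rather than a packed bitstream).

```python
def lzw_compress(data: bytes):
    """Textbook LZW: the dictionary starts with all single bytes and grows
    by one phrase per output code; output is a list of dictionary indices."""
    table = {bytes([i]): i for i in range(256)}
    phrase, codes = b"", []
    for byte in data:
        candidate = phrase + bytes([byte])
        if candidate in table:
            phrase = candidate
        else:
            codes.append(table[phrase])
            table[candidate] = len(table)
            phrase = bytes([byte])
    if phrase:
        codes.append(table[phrase])
    return codes

def lzw_decompress(codes):
    table = {i: bytes([i]) for i in range(256)}
    prev = table[codes[0]]
    out = [prev]
    for code in codes[1:]:
        # The "code not yet in table" case is the classic LZW special case.
        entry = table[code] if code in table else prev + prev[:1]
        out.append(entry)
        table[len(table)] = prev + entry[:1]
        prev = entry
    return b"".join(out)

text = b"to be or not to be, that is the question " * 20
codes = lzw_compress(text)
assert lzw_decompress(codes) == text
print(f"{len(text)} bytes -> {len(codes)} codes")
```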

Journal ArticleDOI
TL;DR: The synthesis of real digital filters based on the lossless bounded real (LBR) two-pair extraction procedure is extended to the complex domain and permits the synthesis of any bounded complex (BC) or lossless bounded complex (LBC) digital transfer function.
Abstract: The synthesis of real digital filters based on the lossless bounded real (LBR) two-pair extraction procedure is extended to the complex domain. The only component then required is a first-order reciprocal lossless bounded complex (LBC) two-pair. This two-pair permits the synthesis of any bounded complex (BC) or lossless bounded complex (LBC) digital transfer function. Lastly, a unitary implementation in terms of Givens (planar) rotation modules is given for the proposed two-pair.

Proceedings ArticleDOI
01 Oct 1990
TL;DR: The present analysis suggests that the visual system requires many more bits/mm2 than indicated by the results of other researchers, who find that 0.5 bits/mm2 are sufficient to represent an image without perceptible loss.
Abstract: The information gathering capacity of the visual system can be specified in units of bits/mm2. The fall-off in sensitivity of the human visual system at high spatial frequencies allows a reduction in the bits/mm2 needed to specify an image. A variety of compression schemes attempt to achieve a further reduction in the number of bits/mm2 while maintaining perceptual losslessness. This paper makes the point that whenever one reports the results of an image compression study, two numbers should be provided. The first is the number of bits/mm2 that can be achieved using properties of the human visual system, but ignoring the redundancy of the image (entropy coding). The second number is the bits/mm2 including the effects of entropy coding. The first number depends mainly on the properties of the visual system; the second number includes, in addition, the properties of the image. The Discrete Cosine Transform (DCT) compression method is used to determine the first number. It is shown that the DCT requires between 16 and 24 bits/mm2 for perceptually lossless encoding of images, depending on the size of the blocks into which the image is subdivided. In addition, the efficiency of DCT compression is found to be limited by its susceptibility to interference from adjacent maskers. The present analysis suggests that the visual system requires many more bits/mm2 than indicated by the results of other researchers, who find that 0.5 bits/mm2 are sufficient to represent an image without perceptible loss.

Proceedings ArticleDOI
01 May 1990
TL;DR: In this article, two techniques for compressing digitized images, vector quantization and enhanced differential pulse code modulation, are explained, and comparisons are drawn between the human vision system and artificial compression techniques.
Abstract: Digital compression of video images is a possible avenue for HDTV transmission. Compression needs to be optimized while picture quality remains high. Two techniques for compressing digitized images, vector quantization and enhanced differential pulse code modulation, are explained, and comparisons are drawn between the human vision system and artificial compression techniques. Suggestions for improving compression algorithms through the use of neural and analog circuitry are given.

Patent
Dipl.-Ing. Gottlieb Schwarz
30 Nov 1990
TL;DR: In this patent, characters of constant word length, coded in accordance with a coding rule, are compressed by a method that uses the most significant bit of each character to control the data compression.
Abstract: Known methods use various algorithms for data compression. The method according to the invention, for data compression of characters of constant word length coded in accordance with a coding rule, uses the most significant bit of the character to control the data compression. The method according to the invention can be advantageously used with coded characters having a constant word length.


Proceedings ArticleDOI
01 Jul 1990
TL;DR: A new decomposition method using image splitting and gray-level remapping has been proposed for image compression, particularly for images with high contrast resolution, which has been found to produce a low mean-square-error and a high compression ratio.
Abstract: A new decomposition method using image splitting and gray-level remapping has been proposed for image compression, particularly for images with high contrast resolution. The effects of this method are especially evident in our radiological image compression study. In our experiments, we tested the impact of this decomposition method on image compression by employing it with two coding techniques on a set of clinically used CT images and several laser film digitized chest radiographs. One of the compression techniques used was full-frame bit-allocation in the discrete cosine transform domain, which has been proven to be an effective technique for radiological image compression. The other compression technique used was vector quantization with pruned tree-structured encoding, which through recent research has also been found to produce a low mean-square-error and a high compression ratio. The parameters we used in this study were mean-square-error and the bit rate required for the compressed file. In addition to these parameters, the difference between the original and reconstructed images will be presented so that the specific artifacts generated by both techniques can be discerned by visual perception.


Proceedings ArticleDOI
01 Nov 1990
TL;DR: Factor Analysis of Dynamic Structures (FADS), which processes dynamic image studies (nuclear medicine, CT or MRI) to estimate underlying physiological functions while simultaneously compressing the data, is used as a first step of image sequence compression.
Abstract: Today the necessity to compress data in order to improve archiving and transmission of images is a widely held opinion in the medical imaging field. Medical image sequence compression is a real challenge which has not yet received much attention, even though the use of 3D orthogonal transforms to achieve interframe compression has been considered. We used Factor Analysis of Dynamic Structures (FADS), which processes dynamic image studies (nuclear medicine, CT or MRI) to estimate underlying physiological functions while simultaneously compressing the data, as a first step of image sequence compression. To compress 3D spatial sequences, the proposed method is based on Correspondence Analysis (CA) of an array obtained after dividing the 3D initial data into cubic subarrays. To improve compression, an adaptive coding is applied to the results obtained after the factor or correspondence analysis. Compression ratios as high as 20:1 to 100:1 were achieved.

Proceedings ArticleDOI
01 Aug 1990
TL;DR: Under this system, distributed processing in the image compression processor and the image reconstruction displays reduces the load on the host computer, and supplies an environment where the control routines for PACS and the hospital information system (HIS) can co-operate.
Abstract: We previously developed an image reconstruction display for reconstructing images compressed by our hybrid compression algorithm. The hybrid algorithm, which improves image quality, applies Discrete Cosine Transform coding (DCT) and Block Truncation Coding (BTC) adaptively to an image, according to its local properties. This reconstruction display receives the compressed data from the host computer through a BMC channel and quickly reconstructs good quality images using a pipeline-based microprocessor. This paper describes a prototype of a system for compression and reconstruction of medical images. It also describes the architecture of the image compression processor, one of the components of the system. This system consists of the image compression processor, a host main-frame computer and reconstruction displays. Under this system, distributed processing in the image compression processor and the image reconstruction displays reduces the load on the host computer, and supplies an environment where the control routines for PACS and the hospital information system (HIS) can co-operate. The compression processor consists of a maximum of four parallel compression units with communication ports. In this architecture, the hybrid algorithm, which includes serial operations, can be processed at high speed by communicating the internal data. In experiments, the compression system proved effective: the compression processor compressed a 1k x 1k image in about 2 seconds using four compression units. The three reconstruction displays showed the image at almost the same time. Display took less than 7 seconds for the compressed image, compared with 28 seconds for the original image.© (1990) COPYRIGHT SPIE--The International Society for Optical Engineering. Downloading of the abstract is permitted for personal use only.