
Showing papers on "Lossless compression published in 1984"


Journal ArticleDOI
TL;DR: A new compression algorithm is introduced that is based on principles not found in existing commercial methods, in that it dynamically adapts to the redundancy characteristics of the data being compressed; an investigation of its applications also serves to illustrate system problems inherent in using any compression scheme.
Abstract: Data stored on disks and tapes or transferred over communications links in commercial computer systems generally contains significant redundancy. A mechanism or procedure which recodes the data to lessen the redundancy could possibly double or triple the effective data densities in stored or communicated data. Moreover, if compression is automatic, it can also help offset the rise of software development costs. A transparent compression mechanism could permit the use of "sloppy" data structures, in that empty space or sparse encoding of data would not greatly expand the use of storage space or transfer time; however, that requires a good compression procedure. Several problems encountered when common compression methods are integrated into computer systems have prevented the widespread use of automatic data compression. For example: (1) poor runtime execution speeds interfere with the attainment of very high data rates; (2) most compression techniques are not flexible enough to process different types of redundancy; (3) blocks of compressed data that have unpredictable lengths present storage space management problems. Each compression strategy poses a different set of these problems and, consequently, the use of each strategy is restricted to applications where its inherent weaknesses present no critical problems. This article introduces a new compression algorithm that is based on principles not found in existing commercial methods. This algorithm avoids many of the problems associated with older methods in that it dynamically adapts to the redundancy characteristics of the data being compressed. An investigation into possible application of this algorithm yields insight into the compressibility of various types of data and serves to illustrate system problems inherent in using any compression scheme. For readers interested in simple but subtle procedures, some details of this algorithm and its implementations are also described. The focus throughout this article will be on transparent compression, in which the computer programmer is not aware of the existence of compression except in system performance. This form of compression is "noiseless": the decompressed data is an exact replica of the input data, and the compression apparatus is given no special program information, such as data type or usage statistics. Transparency is perceived to be important because putting an extra burden on the application programmer would cause ... [This article was written while Welch was employed at Sperry Research Center; he is now with Digital Equipment Corporation.]
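
The abstract stops short of naming the algorithm; this is Welch's 1984 article, whose method is now generally known as LZW. As a rough illustration of the "dynamically adapts to the redundancy of the data" idea, here is a minimal Python sketch of an LZW-style adaptive dictionary coder, assuming byte input and an unbounded code table (the article's fixed-width codes and table-reset strategy are omitted).

```python
def lzw_compress(data: bytes) -> list[int]:
    """Adaptive dictionary coding in the style of LZW: the code table is
    built on the fly from the data itself, so no statistics are sent."""
    table = {bytes([i]): i for i in range(256)}  # seed with single bytes
    next_code = 256
    w = b""
    out = []
    for byte in data:
        wc = w + bytes([byte])
        if wc in table:
            w = wc                      # extend the current match
        else:
            out.append(table[w])        # emit code for the longest known string
            table[wc] = next_code       # learn the new string
            next_code += 1
            w = bytes([byte])
    if w:
        out.append(table[w])
    return out


def lzw_decompress(codes: list[int]) -> bytes:
    """Rebuilds the same table while decoding, so no table is transmitted."""
    table = {i: bytes([i]) for i in range(256)}
    next_code = 256
    w = table[codes[0]]
    out = [w]
    for code in codes[1:]:
        if code in table:
            entry = table[code]
        else:                           # the classic "KwKwK" special case
            entry = w + w[:1]
        out.append(entry)
        table[next_code] = w + entry[:1]
        next_code += 1
        w = entry
    return b"".join(out)


if __name__ == "__main__":
    sample = b"TOBEORNOTTOBEORTOBEORNOT"
    codes = lzw_compress(sample)
    assert lzw_decompress(codes) == sample
    print(len(sample), "bytes ->", len(codes), "codes")
```

Because the dictionary is reconstructed identically on both sides, the decoder needs no side information, which is the transparency property the abstract emphasizes.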

2,426 citations


Proceedings Article
01 Jan 1984

44 citations


Journal ArticleDOI
TL;DR: It is shown through exhaustive analysis that the direct data compression technique utilizing adaptive least-squares curve fitting yields a relatively fast and efficient representation of ECG signals at about 1.6 bits/sample, while maintaining visual fidelity and a normalized mean-squared error less than 1%.
Abstract: Many different techniques have recently been proposed for efficient storage of ECG data with data compression as one of the main objectives. Although high compression ratios have been claimed for some of these techniques, the techniques did not always account for the word lengths of the parameters representing the compressed signal. The authors feel that any technique can be meaningfully evaluated only if the resulting compression is expressed in bits/sample rather than the compression ratio that is often used in this field. This paper provides a critical evaluation of two classes of techniques, the direct data compression techniques and the transformation techniques. It is shown through exhaustive analysis that the direct data compression technique utilizing adaptive least-squares curve fitting yields a relatively fast and efficient representation of ECG signals at about 1.6 bits/sample, while maintaining visual fidelity and a normalized mean-squared error less than 1%.
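
The authors' insistence on bits/sample rather than compression ratio is easy to make concrete. The short Python calculation below uses invented numbers (the ADC resolution, segment count, and parameter word lengths are assumptions, not figures from the paper) to show how the two measures are computed and why the ratio alone can mislead.

```python
# Illustrative arithmetic only: the word lengths, segment count, and sampling
# parameters below are assumptions, not values from the paper.
original_samples = 10_000
original_bits_per_sample = 10            # assumed ADC resolution

segments = 1_250                         # assumed number of fitted curve segments
bits_per_segment = 8 + 2 * 12            # assumed: 8-bit length + two 12-bit coefficients

compressed_bits = segments * bits_per_segment
bits_per_sample = compressed_bits / original_samples
compression_ratio = original_samples * original_bits_per_sample / compressed_bits

print(f"{bits_per_sample:.2f} bits/sample versus a {compression_ratio:.1f}:1 ratio")
# The same compressed stream quoted against 12-bit originals would claim 3:1,
# while the bits/sample figure would not change.
```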

11 citations


Journal ArticleDOI
John Grant1
TL;DR: The dependency-preserving and lossless-join properties for relational database decomposition are generalized to constraint-preserving and lossless properties for mappings between database systems.

7 citations


Journal ArticleDOI
Moshe Y. Vardi1
TL;DR: This note supplies an alternative proof of the characterization of a database decomposition as lossless if and only if one of the relation schemes is a key for the universal scheme.
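
As an illustration of the key condition in this characterization, the Python sketch below computes attribute closures and checks whether some scheme in a decomposition is a superkey of the universal scheme. The schema and functional dependencies are invented for illustration, and the sketch says nothing about dependency preservation or about Vardi's proof itself.

```python
# Minimal sketch of the key test quoted above; schema and functional
# dependencies are invented, and dependency preservation is not checked.

def closure(attrs, fds):
    """Attribute closure of `attrs` under the functional dependencies `fds`."""
    result = set(attrs)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            if lhs <= result and not rhs <= result:
                result |= rhs
                changed = True
    return frozenset(result)


def is_superkey(scheme, universal, fds):
    return closure(scheme, fds) >= universal


# Hypothetical universal scheme R(A, B, C, D) with A -> B and B -> C.
U = frozenset("ABCD")
fds = [(frozenset("A"), frozenset("B")), (frozenset("B"), frozenset("C"))]

decomposition = [frozenset("ABD"), frozenset("BC")]
print([sorted(s) for s in decomposition],
      "contains a key for U:", any(is_superkey(s, U, fds) for s in decomposition))
```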

4 citations


Proceedings ArticleDOI
Narciso Garcia1, C. Munoz, A. Sanz
01 Mar 1984
TL;DR: A universal statistical compression code based on a hierarchical transform of an image, valid for every hierarchical transform, is presented and tested, showing that nearly 2:1 lossless compression is achieved while keeping the computing requirements of the approach low.
Abstract: A universal statistical compression code based on a hierarchical transform of an image is presented. The properties of the transform that make it valuable for lossless image compression are studied. Based on them, a unique Huffman code, valid for every hierarchical transform, is constructed. Performance of the proposed coding scheme has been tested on a complete collection of images including small objects, faces, groups, remote sensing, ... obtained under different conditions. Results show that, on average, nearly 2:1 lossless compression is achieved while keeping the computing requirements of the approach low.
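
Neither the hierarchical transform nor the universal Huffman code is specified in the abstract, so the Python sketch below substitutes a simple integer mean pyramid with child-minus-parent residuals and a per-image Huffman code, purely to illustrate the "hierarchical transform, then entropy-code the residuals" structure. The synthetic test image and the printed ratio are illustrative only and are not the paper's results.

```python
# A minimal sketch, not the authors' transform: a mean pyramid with
# child-minus-parent residuals (lossless with integer arithmetic),
# followed by a Huffman code built from the residual histogram.
import heapq
from collections import Counter

import numpy as np


def mean_pyramid(img: np.ndarray) -> list[np.ndarray]:
    """Levels from full resolution down to a single coarse block mean."""
    levels = [img.astype(np.int64)]
    while levels[-1].shape[0] > 1:
        a = levels[-1]
        a = a.reshape(a.shape[0] // 2, 2, a.shape[1] // 2, 2)
        levels.append(a.sum(axis=(1, 3)) // 4)   # integer block means
    return levels


def residuals(levels: list[np.ndarray]) -> np.ndarray:
    """Child-minus-parent differences for every level but the coarsest."""
    diffs = []
    for fine, coarse in zip(levels[:-1], levels[1:]):
        parent = np.repeat(np.repeat(coarse, 2, axis=0), 2, axis=1)
        diffs.append((fine - parent).ravel())
    return np.concatenate(diffs)


def huffman_lengths(counts: Counter) -> dict:
    """Code length per symbol for a Huffman code over the given histogram."""
    if len(counts) == 1:
        return {next(iter(counts)): 1}
    heap = [(c, i, {s: 0}) for i, (s, c) in enumerate(counts.items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        c1, _, m1 = heapq.heappop(heap)
        c2, _, m2 = heapq.heappop(heap)
        heapq.heappush(heap, (c1 + c2, tie, {s: d + 1 for s, d in {**m1, **m2}.items()}))
        tie += 1
    return heap[0][2]


if __name__ == "__main__":
    # Smooth synthetic 8-bit image stands in for a real test picture.
    x = np.linspace(0, 4 * np.pi, 256)
    img = (127 + 100 * np.outer(np.sin(x), np.cos(x))).astype(np.uint8)

    levels = mean_pyramid(img)
    counts = Counter(residuals(levels).tolist())
    lengths = huffman_lengths(counts)
    coded_bits = sum(counts[s] * lengths[s] for s in counts) + levels[-1].size * 8
    print(f"{img.size * 8 / coded_bits:.2f}:1 (ignoring code-table overhead)")
```

Because the block means are computed with integer division, each level is exactly recoverable from the coarser level plus the stored residuals, so the scheme is lossless in the same sense the abstract uses.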

4 citations


Proceedings ArticleDOI
09 Jan 1984
TL;DR: This paper reports the results of an effort to explore the potential of existing compression techniques for synthetic aperture radar (SAR) imagery; further compression on the order of 8:1, with little image degradation, is achievable on most portions of the scene examined.
Abstract: This paper reports the results of an effort to explore the potential of existing compression techniques for synthetic aperture radar (SAR) imagery. Both adaptive and non-adaptive transform coding techniques were used to simulate an end-to-end system in which SAR imagery goes through block quantization, re-sampling, filtering, and encoding to achieve the desired rate reduction with the minimum possible degradation in image quality. Although this investigation used limited amounts of SAR data, the approach is not specific to a particular case and is applicable to the compression of various SAR imagery. Using simple low-pass filtering, resampling, and fast Fourier transform techniques, 2:1 or 4:1 data compression leaves all details of the original imagery intact and produces no degradation in image quality based on subjective visual examination. Using semi-adaptive bit-mapping techniques and assigning bit rates of 4, 2, and 1 bit per pixel, further compression on the order of 8:1, with little image degradation, is achievable on most portions of the scene examined. This approach has potential for even higher compression ratios if a more adaptive bit-mapping scheme is used.
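
As a rough sketch of the non-adaptive path described above (low-pass filtering plus resampling via the FFT), and not of the authors' block-quantization and bit-mapping pipeline, the Python snippet below crops the centred 2-D spectrum of an image, halving each dimension. The test array is a synthetic stand-in, not SAR data.

```python
# Hedged sketch of FFT low-pass filtering plus resampling: keeping the
# low-frequency quarter of the spectrum halves each image dimension,
# i.e. a 4:1 reduction in sample count (2:1 per axis).
import numpy as np


def fft_downsample(img: np.ndarray, factor: int = 2) -> np.ndarray:
    """Low-pass filter and resample by cropping the centred 2-D spectrum."""
    f = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    nh, nw = h // factor, w // factor
    top, left = (h - nh) // 2, (w - nw) // 2
    cropped = f[top:top + nh, left:left + nw]
    # Divide by factor**2 so the mean intensity is preserved after the
    # inverse transform on the smaller grid.
    small = np.fft.ifft2(np.fft.ifftshift(cropped)) / factor**2
    return small.real


if __name__ == "__main__":
    scene = np.add.outer(np.arange(128.0), np.arange(128.0))   # smooth stand-in image
    small = fft_downsample(scene, factor=2)
    print(scene.shape, "->", small.shape)                      # (128, 128) -> (64, 64)
```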

2 citations


Journal ArticleDOI
TL;DR: A process of sample preselection allows high compression of smooth signals, and the number of stored data values is not tied to the number of samples taken from a given waveform, which allows for good efficiency in the observation and compression of largely unknown signals.
Abstract: A new data compression method is presented; the implementation of the algorithm is based on the classical theory of numerical analysis. The values of the k-th order finite differences of the samples are calculated, and their greatest value determines the length of the time interval that will be compressed by means of k data values stored in memory. In this time domain analysis method (TDAM), the desired peak error can be fixed in advance. Logically, the length of the compressed interval is also a function of this peak error. A polynomial interpolation passing through the stored data performs the reconstruction of the compressed samples. To improve the method, a process of sample preselection is used, allowing high compression of smooth signals. Moreover, the number of stored data values is not tied to the number of samples taken from a given waveform. This procedure allows for good efficiency in the observation and compression of largely unknown signals, as the experimental results show.
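
The Python sketch below captures the spirit of the method rather than the published TDAM rule: each segment is reduced to k stored samples, and the segment is grown as long as the interpolating polynomial through those samples stays within a preset peak error (the paper instead derives the interval length from the largest k-th order finite difference). The choices of k, the error bound, and the test signal are arbitrary.

```python
# Greedy peak-error-bounded segmentation with polynomial reconstruction;
# a sketch in the spirit of TDAM, not the published interval-length rule.
import numpy as np


def fit_error(x, start, stop, k):
    """Peak error over [start, stop] of the polynomial through k stored samples."""
    idx = np.unique(np.linspace(start, stop, k).round().astype(int))
    coef = np.polyfit(idx - start, x[idx], len(idx) - 1)   # local coordinates
    t = np.arange(start, stop + 1)
    return np.max(np.abs(np.polyval(coef, t - start) - x[t])), (idx, x[idx])


def compress(x, k=4, eps=0.01):
    """Each segment is represented by k stored samples within peak error eps."""
    segments, start, n = [], 0, len(x)
    while start < n:
        stop = min(start + k - 1, n - 1)
        _, stored = fit_error(x, start, stop, k)           # k points: exact fit
        while stop + 1 < n:                                # grow while error allows
            err, candidate = fit_error(x, start, stop + 1, k)
            if err > eps:
                break
            stop, stored = stop + 1, candidate
        segments.append(stored)
        start = stop + 1
    return segments


if __name__ == "__main__":
    t = np.linspace(0.0, 1.0, 1000)
    signal = np.sin(2 * np.pi * 3 * t) + 0.2 * np.sin(2 * np.pi * 11 * t)
    segments = compress(signal, k=4, eps=0.01)
    stored = sum(len(s[1]) for s in segments)
    print(f"{signal.size} samples -> {stored} stored values "
          f"({signal.size / stored:.1f}:1 in sample count)")
```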

2 citations


Proceedings ArticleDOI
01 Mar 1984
TL;DR: This work proposes that increased compression may be achieved by a decomposition of the compression problem into two steps, one to extract the global redundancy in an image and the other to code the resulting localized data.
Abstract: The emphasis of many algorithms that have been proposed for the compression of binary images has been the efficient coding of local redundancy in the data. We propose that increased compression may be achieved by decomposing the compression problem into two steps. The goal of the first step is to extract the global redundancy in an image; this is achieved by a color shrinking algorithm. The goal of the second step is to code the resulting localized data.
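
The color shrinking algorithm itself is not described in this abstract, so the Python sketch below stands in for it with ordinary components: the globally empty rows and columns are recorded first (a crude form of global redundancy extraction), and the remaining block is run-length encoded. This only illustrates the two-step structure, not the authors' method.

```python
# Hedged stand-in for the two-step idea, not the authors' color-shrinking
# method: step 1 records which rows/columns are globally non-empty, step 2
# run-length encodes the remaining block.
import numpy as np


def two_step_encode(img: np.ndarray):
    """img: 2-D boolean array with True marking foreground pixels."""
    rows = np.flatnonzero(img.any(axis=1))       # global step: occupied rows
    cols = np.flatnonzero(img.any(axis=0))       # ... and occupied columns
    if rows.size == 0:                           # blank image: nothing local to code
        return img.shape, None, []
    box = (int(rows[0]), int(rows[-1]) + 1, int(cols[0]), int(cols[-1]) + 1)
    core = img[box[0]:box[1], box[2]:box[3]].ravel()
    # local step: alternating run lengths inside the bounding box
    changes = np.flatnonzero(np.diff(core.astype(np.int8))) + 1
    runs = np.diff(np.concatenate(([0], changes, [core.size]))).tolist()
    return img.shape, box, [int(core[0])] + runs


if __name__ == "__main__":
    img = np.zeros((256, 256), dtype=bool)
    img[100:140, 60:200] = True                  # a single foreground rectangle
    shape, box, code = two_step_encode(img)
    print(box, "->", len(code), "coded values instead of", img.size, "pixels")
```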

2 citations


Book ChapterDOI
01 Jan 1984
TL;DR: In this article, the authors present the theory leading to a theorem that describes all the rational solutions of the lossless inverse scattering problem (LIS-problem) for lossless networks.
Abstract: We present the theory leading to a theorem that describes all the rational solutions of the lossless inverse scattering problem (LIS-problem) for lossless networks. They are parametrized by a set of points in the closed unit disc of the complex plane. Quite a few classical problems in estimation theory and network theory may be viewed as a special case of the LIS problem. We present a global method to construct LIS solutions using reproducing kernel Hilbert space methods. Finally, we give connections with applications and with some classical interpolation problems and relate the results to maximum entropy approximation theory.

1 citation