
Showing papers on "Lossless compression published in 1985"


Journal ArticleDOI
TL;DR: The algorithm described here provides an efficient code for the boundary of each region by taking advantage of certain first-order constraints related to the segmentation algorithm, the result being an asymptotic decrease in the number of bits per contour point.

153 citations


Journal ArticleDOI
TL;DR: Parallel algorithms for data compression by textual substitution that are suitable for VLSI implementation are studied and both “static” and “dynamic” dictionary schemes are considered.
Abstract: Parallel algorithms for data compression by textual substitution that are suitable for VLSI implementation are studied. Both “static” and “dynamic” dictionary schemes are considered.
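As a rough illustration of the dictionary-based textual substitution that the paper parallelizes (a sequential toy sketch; the phrases and one-byte codes below are invented, and the paper's actual contribution is VLSI-suitable parallel versions of such schemes):

```python
# Toy static-dictionary textual substitution. The dictionary entries and
# single-byte codes are illustrative, not taken from the paper.
STATIC_DICT = {"the ": "\x01", "and ": "\x02", "ing ": "\x03"}

def substitute_encode(text):
    # Replace each dictionary phrase with its short code.
    for phrase, code in STATIC_DICT.items():
        text = text.replace(phrase, code)
    return text

def substitute_decode(data):
    # Codes are unique bytes, so the replacement order does not matter.
    for phrase, code in STATIC_DICT.items():
        data = data.replace(code, phrase)
    return data
```

A "static" scheme fixes the dictionary in advance, as here; a "dynamic" scheme grows it from the input as encoding proceeds.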

97 citations


Journal ArticleDOI
TL;DR: In this article, the Schur and fast Cholesky recursions are used to study several inverse problems such as the reconstruction of nonuniform lossless transmission lines, the inverse problem for a layered acoustic medium, and the linear least-squares estimation of stationary stochastic processes.
Abstract: The Schur algorithm and its time-domain counterpart, the fast Cholesky recursions, are efficient signal processing algorithms well adapted to the study of inverse scattering problems. These algorithms use a layer-stripping approach to reconstruct a lossless scattering medium described by symmetric two-component wave equations which model the interaction of right- and left-propagating waves. In this paper, the Schur and fast Cholesky recursions are presented and used to study several inverse problems such as the reconstruction of nonuniform lossless transmission lines, the inverse problem for a layered acoustic medium, and the linear least-squares estimation of stationary stochastic processes. The inverse scattering problem for asymmetric two-component wave equations corresponding to lossy media is also examined and solved by using two coupled sets of Schur recursions. This procedure is then applied to the inverse problem for lossy transmission lines.
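The Schur recursion at the heart of these algorithms is standard; a minimal numeric sketch (the textbook generator form that computes reflection coefficients from an autocorrelation sequence, not the paper's scattering-medium formulation) looks like:

```python
def schur_reflection_coeffs(r):
    """Reflection coefficients from an autocorrelation sequence r[0..p]
    via the Schur recursion (generic textbook form)."""
    p = len(r) - 1
    g = list(r)    # "upper" generator g_j(k)
    gs = list(r)   # "lower" generator g*_j(k)
    gammas = []
    for j in range(1, p + 1):
        gamma = -g[j] / gs[j - 1]     # reflection coefficient of layer j
        gammas.append(gamma)
        g_next, gs_next = g[:], gs[:]
        for k in range(j, p + 1):
            # Layer-stripping update of both generators.
            g_next[k] = g[k] + gamma * gs[k - 1]
            gs_next[k] = gs[k - 1] + gamma * g[k]
        g, gs = g_next, gs_next
    return gammas
```

Each pass "strips" one layer of the medium, which is why the same recursion serves both the transmission-line reconstruction and the least-squares estimation problems the abstract lists.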

54 citations


Journal ArticleDOI
TL;DR: Run-length data compression techniques that preserve image content are described, a simple method for efficient picture archiving is illustrated, and a general solution to the optimal run-length compression of digital data is outlined.
Abstract: Run-length data compression techniques are described that preserve image content. After decompression, images are restored to their original state without loss in image gray scale or resolution. The first technique introduces terminology and illustrates a simple method for efficient picture archiving. It demonstrates the principle of run-length techniques. A second more general approach encodes picture information in a manner that adapts to local variation in pixel standard deviation. Among several options of compression formats, the one that delivers the best local compression is selected. Results of our compression techniques are given for several hundred computed tomography (CT) pictures with comparison to image entropy measures. A general solution to the optimal run-length compression of digital data is outlined. Routine application of the locally optimal method is also described.
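The run-length principle the abstract demonstrates can be sketched as a lossless round trip (a generic illustration; the paper's locally adaptive, optimal formats are more elaborate):

```python
def rle_encode(pixels):
    """Run-length encode a flat pixel sequence as [value, count] pairs."""
    runs = []
    for p in pixels:
        if runs and runs[-1][0] == p:
            runs[-1][1] += 1          # extend the current run
        else:
            runs.append([p, 1])       # start a new run
    return runs

def rle_decode(runs):
    """Invert rle_encode exactly: no loss of gray scale or resolution."""
    out = []
    for value, count in runs:
        out.extend([value] * count)
    return out
```

Decoding restores the image bit-for-bit, which is the "restored to their original state" property the abstract claims; the compression win depends on how long the runs are.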

27 citations


Proceedings ArticleDOI
16 Sep 1985
TL;DR: Experience with more than one hundred radiographic images reveals that a 3:1 compression ratio is attainable with error-free methods, while for irreversible compression the achievable ratio depends on the image type, the image size and the number of bits per pixel.
Abstract: Some error-free and irreversible data compression techniques applied to radiographic images are discussed in this paper. In the case of error-free compression, clipping and bit truncation, run-length coding, run-zero coding and Huffman coding are reviewed. In each case, an example is given to explain the steps involved. In the case of irreversible compression, the full-frame bit allocation in the cosine transform domain method is described. Utilizing these compression techniques, we have compressed more than one hundred radiographic images of different types. Our experience reveals that (a) it is possible to obtain a 3:1 compression ratio for error-free methods, and (b) for irreversible compression, the compression ratio achieved depends on the image type, the image size and the number of bits per pixel. In general, a 10:1 compression ratio can be achieved for a 512x512x8 image, and 16:1 for a 1024x1024x8 image. Reconstructed images from these high-compression-ratio data do not appear to have visual degradation relative to the original images.
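Huffman coding, one of the error-free methods reviewed, can be sketched in a few lines (a generic textbook construction, not the paper's implementation; the sample string is illustrative):

```python
import heapq
from collections import Counter

def huffman_codes(data):
    """Build a Huffman code table (symbol -> bit string) for the symbols
    in data. Generic textbook construction for illustration."""
    freq = Counter(data)
    # Heap entries: (count, tie-breaker, partial code table).
    heap = [(n, i, {s: ""}) for i, (s, n) in enumerate(freq.items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        n1, _, c1 = heapq.heappop(heap)
        n2, _, c2 = heapq.heappop(heap)
        # Merge the two least frequent subtrees, prefixing their codes.
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, (n1 + n2, tie, merged))
        tie += 1
    return heap[0][2]
```

For "aaabbc" the most frequent symbol gets the shortest code, so the six symbols need 9 bits instead of 48 at 8 bits/symbol; on real radiographic data the skew is milder, consistent with the roughly 3:1 error-free ratios the paper reports.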

12 citations



Book ChapterDOI
01 Jan 1985
TL;DR: A constructive coding theorem is demonstrated that provides a useful performance criterion for practical finite data compression algorithms and leads to a universal encoding algorithm which is asymptotically optimal for all sequences.
Abstract: For every individual infinite sequence x, a quantity ρ(x) is defined, called the normalized complexity (or compressibility) of x, which is shown to be the asymptotically attainable lower bound on the compression ratio (i.e., normalized encoded length) that can be achieved for x by any finite-state information-lossless encoder. This is demonstrated by a constructive coding theorem and its converse which, apart from their asymptotic significance, also provide a useful performance criterion for practical finite data compression algorithms, and lead to a universal encoding algorithm which is asymptotically optimal for all sequences.
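The universal encoder rests on incremental parsing of the sequence into distinct phrases; a minimal sketch of that parsing, together with the crude per-symbol code-length bound c(log2 c + 1)/n derived from the phrase count (used here as an illustrative stand-in for the normalized complexity, not the paper's exact quantity):

```python
from math import log2

def incremental_parse(s):
    """Split s into distinct phrases, each equal to a previously seen
    phrase extended by one symbol (Lempel-Ziv incremental parsing)."""
    seen, phrase, phrases = set(), "", []
    for ch in s:
        phrase += ch
        if phrase not in seen:
            seen.add(phrase)
            phrases.append(phrase)
            phrase = ""
    if phrase:                 # possibly incomplete final phrase
        phrases.append(phrase)
    return phrases

def compressibility_upper_bound(s):
    """Per-symbol bound c*(log2(c) + 1)/n from the phrase count c:
    compressible sequences parse into few phrases."""
    c, n = len(incremental_parse(s)), len(s)
    return c * (log2(c) + 1) / n
```

A highly regular sequence parses into few, long phrases and so scores a much smaller bound than an incompressible one, which is the intuition behind using finite-state compressibility as a performance criterion.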

5 citations


Proceedings ArticleDOI
11 Jul 1985
TL;DR: It is concluded that ESPCM is a practical, real-time (on-board) compression algorithm that offers compression ratios approaching DPCM with no information loss and little or no increase in complexity.
Abstract: Advanced Landsat Sensor (ALS) technology has produced requirements for increasing data rates that may exceed space to ground data link capacity, so that identification of appropriate data compression techniques is of interest. Unlike many other applications, Landsat requires information lossless compression. DPCM, Interpolated DPCM, and error-correcting successive-difference PCM (ESPCM) are compared, leading to the conclusion that ESPCM is a practical, real-time (on-board) compression algorithm. ESPCM offers compression ratios approaching DPCM with no information loss and little or no increase in complexity. Moreover, adaptive ESPCM (AESPCM) yields an average compression efficiency of 84% relative to successive difference entropy, and 97% relative to scene entropy. Compression ratios vary from a low of 1.18 for a high entropy (6.64 bits/pixel) mountain scene to a high of 2.38 for low entropy (2.54 bits/pixel) ocean data. The weighted average lossless compression ratio to be expected, using a representative selection of Landsat Thematic Mapper eight-bit data as a basis, appears to be approximately 2.1, for an average compressed data rate of about 3.7 bits/pixel.
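Plain successive-difference coding, the lossless core that these PCM variants build on, can be sketched as follows (this omits the error-correcting part specific to ESPCM, which the abstract does not detail):

```python
def successive_difference_encode(pixels):
    """Lossless successive-difference coding: keep the first sample,
    then store each sample as its difference from the previous one."""
    diffs = [pixels[0]]
    for prev, cur in zip(pixels, pixels[1:]):
        diffs.append(cur - prev)
    return diffs

def successive_difference_decode(diffs):
    """Exactly invert the encoding by accumulating the differences."""
    out = [diffs[0]]
    for d in diffs[1:]:
        out.append(out[-1] + d)
    return out
```

The differences of a smooth scene cluster near zero and so have lower entropy than the raw samples, which is why the reported ratios track successive-difference entropy and fall for high-entropy (mountain) scenes but rise for low-entropy (ocean) ones.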

2 citations


Proceedings ArticleDOI
01 Apr 1985
TL;DR: A predictive approach for hierarchical line encoding is presented, giving rise to a two-step procedure (hierarchical plus predictive); both predictor and quantiser are adapted to the different pixel weights within the hierarchical code, the selective quantiser being developed as a function of the code layer.
Abstract: Properties related to the statistical redundancy of hierarchical image codes have been unveiled recently, allowing for a potentially high bit-rate reduction in storage or transmission. Appropriate schemes can be developed to take advantage of these properties, for either lossless or lossy applications. Here, a predictive approach for hierarchical line encoding is presented, giving rise to a two-step procedure (hierarchical plus predictive). Both predictor and quantiser have been adapted to the different pixel weights within the hierarchical code; the selective quantiser is accordingly developed as a function of the code layer. Zero-bit quantisation is also used to reduce the bit rate in the large uniform areas of the last code layers. Performance of the proposed scheme has been tested on a representative set of images and remains good even with high-entropy pictures. Results offer encoding at a bit rate around 0.5 bit/pixel while subjective quality is kept high.
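A minimal 1-D sketch of the two-step idea (hierarchical decomposition plus prediction): the midpoint-mean predictor and integer residuals below are illustrative stand-ins, not the paper's layer-adapted predictor and quantiser.

```python
def hierarchical_residuals(line):
    """Dyadic hierarchical coding of a scan line (length 2**k + 1):
    coarse-to-fine, each new pixel is predicted as the mean of its two
    already-coded neighbours and only the residual is stored."""
    n = len(line) - 1
    residuals = [line[0], line[-1]]   # top layer: endpoints sent verbatim
    step = n
    while step > 1:
        half = step // 2
        for i in range(half, n, step):
            pred = (line[i - half] + line[i + half]) // 2
            residuals.append(line[i] - pred)
        step = half
    return residuals

def hierarchical_decode(residuals, n):
    """Rebuild the line losslessly by replaying the same layer order."""
    line = [None] * (n + 1)
    line[0], line[-1] = residuals[0], residuals[1]
    idx = 2
    step = n
    while step > 1:
        half = step // 2
        for i in range(half, n, step):
            pred = (line[i - half] + line[i + half]) // 2
            line[i] = pred + residuals[idx]
            idx += 1
        step = half
    return line
```

In smooth regions the residuals of the finer layers are near zero, which is where a layer-dependent quantiser (down to zero bits in uniform areas, as the abstract describes) saves most of the rate.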

2 citations


Journal ArticleDOI
01 Jan 1985
TL;DR: In this paper, a simple yet comprehensive proof of an important sensitivity formula for lossless two-ports stated by Orchard, Temes, and Cataltepe is presented, invoking the principle of conservation of energy and the lossless property of the network under consideration and employing the Cauchy-Riemann equations of complex differentiation.
Abstract: A simple yet comprehensive proof of an important sensitivity formula for lossless two-ports stated by Orchard, Temes, and Cataltepe is presented. Our derivation invokes the principle of conservation of energy and the lossless property of the network under consideration, and employs the Cauchy-Riemann equations of complex differentiation. Hence, it has a clear physical interpretation as well as mathematical elegance.

Book ChapterDOI
01 Jan 1985
TL;DR: In this paper, the authors developed numerical techniques to reconstruct the acoustic impedance, density, and sound velocity profiles of a one-dimensional lossless inhomogeneous medium and investigated the effect of noise, limited transducer bandwidth, and deconvolution on the algorithms.
Abstract: In our previous work, we developed numerical techniques to reconstruct the acoustic impedance, density, and sound velocity profiles of a one-dimensional lossless inhomogeneous medium and investigated the effect of noise, limited transducer bandwidth, and deconvolution on the algorithms [1–2]. In addition, we studied the direct problem in lossy media and extended the previously developed transmission matrix method and impediography method [3] to lossy media [4].