
Showing papers on "Lossless compression" published in 1986


Journal ArticleDOI
TL;DR: A data compression scheme that exploits locality of reference, such as occurs when words are used frequently over short intervals and then fall into long periods of disuse, is described; the scheme provably never performs much worse than Huffman coding and can perform substantially better.
Abstract: A data compression scheme that exploits locality of reference, such as occurs when words are used frequently over short intervals and then fall into long periods of disuse, is described. The scheme is based on a simple heuristic for self-organizing sequential search and on variable-length encodings of integers. We prove that it never performs much worse than Huffman coding and can perform substantially better; experiments on real files show that its performance is usually quite close to that of Huffman coding. Our scheme has many implementation advantages: it is simple, allows fast encoding and decoding, and requires only one pass over the data to be compressed (static Huffman coding takes two passes).
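
The two ingredients named above are easy to illustrate. Below is a minimal sketch, not the authors' exact algorithm: a move-to-front list over words, with each word's rank sent as an Elias gamma code; the convention that rank "table size + 1" escapes to a literal new word is an assumption added for illustration.

```python
def elias_gamma(n: int) -> str:
    """Variable-length encoding of a positive integer: small n -> few bits."""
    b = bin(n)[2:]
    return "0" * (len(b) - 1) + b

def mtf_encode(words):
    """Yield (bits, literal) pairs; recently used words get short codes."""
    table = []                                    # self-organizing list
    for w in words:
        if w in table:
            i = table.index(w)
            table.pop(i)
            yield elias_gamma(i + 1), None        # rank 1 = front of list
        else:
            yield elias_gamma(len(table) + 1), w  # escape rank, then the word
        table.insert(0, w)                        # move (or insert) to front

for bits, literal in mtf_encode("the cat saw the dog saw the cat".split()):
    print(bits, literal or "")
```

Because each reuse moves a word to the front, words that recur in bursts sit near the front and cost only a bit or two, which is exactly the locality-of-reference effect the scheme exploits.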

564 citations


Journal ArticleDOI
TL;DR: The proposed picture compressibility is shown to possess the properties that one would expect and require of a suitably defined concept of two-dimensional entropy for arbitrary probabilistic ensembles of infinite pictures.
Abstract: Distortion-free compressibility of individual pictures, i.e., two-dimensional arrays of data, by finite-state encoders is investigated. For every individual infinite picture I, a quantity ρ(I), called the compressibility of I, is defined and shown to be the asymptotically attainable lower bound on the compression ratio achievable for I by any finite-state information-lossless encoder. This is demonstrated by means of a constructive coding theorem and its converse that, apart from their asymptotic significance, might also provide useful criteria for finite and practical data-compression tasks. The proposed picture compressibility is also shown to possess the properties that one would expect and require of a suitably defined concept of two-dimensional entropy for arbitrary probabilistic ensembles of infinite pictures. While the definition of ρ(I) allows the use of different machines for different pictures, the constructive coding theorem leads to a universal compression scheme that is asymptotically optimal for every picture. The results are readily extendable to data arrays of any finite dimension.
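
As a much-simplified, one-dimensional analogue of such a universal scheme (an illustration only; the paper's encoder works directly on the 2D array), one can scan the picture row by row and apply LZ78-style incremental parsing; the phrase count then yields a crude per-pixel compressibility estimate:

```python
import math

def lz78_compressibility_estimate(picture):
    """Rough bits-per-symbol estimate from an LZ78 parse of a row-major scan."""
    s = [p for row in picture for p in row]      # flatten the 2D array
    phrases, cur = set(), tuple()
    for sym in s:
        cur += (sym,)
        if cur not in phrases:                   # a new distinct phrase ends
            phrases.add(cur)
            cur = tuple()
    c = len(phrases) + (1 if cur else 0)
    return c * (math.log2(c) + 1) / len(s)       # ~bits per phrase pointer

pic = [[0, 0, 1, 1], [0, 0, 1, 1], [1, 1, 0, 0], [1, 1, 0, 0]]
print(lz78_compressibility_estimate(pic))
```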

217 citations


Patent
Victor S. Miller1, Mark N. Wegman1
11 Aug 1986
TL;DR: Communications between a host computing system and a number of remote terminals are enhanced by a data compression method that modifies Lempel-Ziv coding with new-character and new-string extensions to improve the compression ratio, and with a least-recently-used deletion routine that limits the encoding tables to a fixed size.
Abstract: Communications between a Host Computing System and a number of remote terminals are enhanced by a data compression method which modifies the data compression method of Lempel and Ziv by the addition of new-character and new-string extensions to improve the compression ratio, and by a least-recently-used deletion routine that limits the encoding tables to a fixed size, significantly improving data transmission efficiency.
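
A hedged sketch of the central idea, assuming a standard LZW-style string table (the patent's exact extension rules may differ): the table is bounded to a fixed size by evicting the least-recently-used multi-byte string and reusing its code.

```python
from collections import OrderedDict

def lzw_lru_encode(data: bytes, max_entries: int = 4096):
    # Table starts with all single bytes; OrderedDict order tracks recency.
    table = OrderedDict((bytes([i]), i) for i in range(256))
    next_code, codes, s = 256, [], b""
    for byte in data:
        c = bytes([byte])
        if s + c in table:
            s += c                                # extend the current match
            continue
        codes.append(table[s])
        table.move_to_end(s)                      # s was just used
        if len(table) >= max_entries:             # table full:
            victim = next(k for k in table if len(k) > 1)
            new_code = table.pop(victim)          # evict LRU string, reuse code
        else:
            new_code, next_code = next_code, next_code + 1
        table[s + c] = new_code                   # new string extension
        s = c
    if s:
        codes.append(table[s])
    return codes

print(lzw_lru_encode(b"to be or not to be, that is to be", max_entries=260))
```

The fixed table size is what keeps memory bounded over long sessions, and evicting by recency rather than freezing the table lets the dictionary track a drifting character stream.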

162 citations


Proceedings ArticleDOI
E. Walach1, E. Karnin
07 Apr 1986
TL;DR: The proposed approach is very simple (both conceptually and in computational complexity), and it seems to be well suited to the psycho-visual characteristics of the human eye.
Abstract: We introduce a new approach to the issue of lossy data compression. The basic concept has been inspired by the theory of fractal geometry. The idea is to traverse the entire data string utilizing a fixed-length "yardstick". The coding is achieved by transmitting only the sign bit (to distinguish between ascent and descent) and the horizontal distance covered by the "yardstick". All data values are estimated, at the receiver's site, from this information. We have applied this approach in the context of image compression, and the preliminary results seem very promising. Indeed, the proposed approach is very simple (both conceptually and in terms of computational complexity), and it seems to be well suited to the psycho-visual characteristics of the human eye. The paper includes a brief description of the coding concept. Next, a number of possible modifications and extensions are discussed. Finally, a number of simulations are included in order to support the theoretical derivations. Good-quality images are achieved at rates as low as 0.5 bit/pel.
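
Under one plausible reading of the abstract (the paper's exact placement rule may differ), a "yardstick" coder can be sketched as follows: a fixed vertical yardstick of height Y is walked along the signal, and for each placement only a sign bit plus the horizontal distance covered are transmitted; the decoder ramps linearly between endpoints. The height Y = 8 and the rounding are illustrative assumptions.

```python
def yardstick_encode(signal, Y=8):
    runs, i = [], 0
    while i < len(signal) - 1:
        start, j = signal[i], i
        # advance until the signal has moved by at least Y (or data ends)
        while j < len(signal) - 1 and abs(signal[j + 1] - start) < Y:
            j += 1
        if j < len(signal) - 1:
            j += 1
        runs.append((1 if signal[j] >= start else 0, j - i))  # sign + distance
        i = j
    return signal[0], runs

def yardstick_decode(first, runs, Y=8):
    out, y = [first], first
    for sign, dx in runs:
        target = y + Y if sign else y - Y          # assume a full yardstick
        for k in range(1, dx + 1):                 # linear ramp across the run
            out.append(round(y + (target - y) * k / dx))
        y = target
    return out

sig = [10, 12, 15, 20, 24, 23, 18, 12, 9, 8, 11, 16]
first, runs = yardstick_encode(sig)
print(runs)                     # a few (sign, distance) pairs for 12 samples
print(yardstick_decode(first, runs))
```

The reconstruction is lossy, as the abstract says: amplitudes are quantized to multiples of Y, which is the price paid for spending roughly one sign bit plus a short distance code per yardstick placement.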

41 citations


Proceedings ArticleDOI
12 Jun 1986
TL;DR: Vector Quantization has only recently been applied to image data compression, but shows promise of outperforming more traditional transform coding methods, especially at high compression ratios.
Abstract: This paper addresses the problem of data compression of medical imagery such as X-rays, Computer Tomography, Magnetic Resonance, Nuclear Medicine and Ultrasound. The Discrete Cosine Transform (DCT) has been extensively studied for image data compression, and good compression has been obtained without unduly sacrificing image quality. Vector Quantization has only recently been applied to image data compression, but shows promise of outperforming more traditional transform coding methods, especially at high compression ratios. Vector Quantization is particularly well suited to applications where the images to be processed are very much alike, or can be grouped into a small number of classes. These and similar studies continue to suffer from the lack of a uniformly agreed-upon measure of image quality, a problem exacerbated by the wide variety of electronic displays and viewing conditions.
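
To make the Vector Quantization idea concrete, here is a minimal sketch, not taken from the paper: 4x4 image blocks are mapped to the nearest entry of a small codebook trained with plain k-means, so each block is transmitted as a single index. Block size, codebook size, and the random training image are illustrative assumptions.

```python
import numpy as np

def blockify(img, b=4):
    """Split an (h, w) image into flattened b x b blocks, row-major."""
    h, w = img.shape
    blocks = img[:h - h % b, :w - w % b].reshape(h // b, b, w // b, b)
    return blocks.transpose(0, 2, 1, 3).reshape(-1, b * b)

def train_codebook(blocks, k=32, iters=10, seed=0):
    """Plain k-means (Lloyd's algorithm) over the training blocks."""
    rng = np.random.default_rng(seed)
    codebook = blocks[rng.choice(len(blocks), k, replace=False)].copy()
    for _ in range(iters):
        d = ((blocks[:, None, :] - codebook[None]) ** 2).sum(-1)
        nearest = d.argmin(1)
        for j in range(k):
            members = blocks[nearest == j]
            if len(members):
                codebook[j] = members.mean(0)    # recenter each codeword
    return codebook

def vq_encode(blocks, codebook):
    d = ((blocks[:, None, :] - codebook[None]) ** 2).sum(-1)
    return d.argmin(1)                           # one small index per block

img = np.random.default_rng(1).integers(0, 256, (64, 64)).astype(float)
blocks = blockify(img)
idx = vq_encode(blocks, train_codebook(blocks))
print(idx.shape)                # 256 blocks, each coded as a 5-bit index
```

This also shows why VQ favors image sets that are "very much alike": a codebook trained on one class of images transfers to similar images, but its distortion grows quickly on images drawn from a different population.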

9 citations


Proceedings ArticleDOI
02 Jun 1986
TL;DR: In this paper, an extension of traveling wave amplifiers is applied to a lossless 2-18 GHz two-port combiner, realized in hybrid microstrip technology, whose amplitude weighting capability allows its use in vector phase shifters for phased array applications.
Abstract: An extension of traveling wave amplifiers is applied to a lossless 2-18 GHz two-port combiner, realized in hybrid microstrip technology. Its amplitude weighting capability allows its use in vector phase shifters for phased array applications; it also makes it possible to realize a lossless n-port combiner by cascading several identical modules.

8 citations


Proceedings ArticleDOI
Jorma Rissanen1
01 Oct 1986
TL;DR: A lossless image compression system is described, consisting of a statistical model and an arithmetic code; the model collects the occurrence counts of the prediction errors, conditioned on past pixels forming a "context".
Abstract: A lossless image compression system is described, which consists of a statistical model and an arithmetic code. The model first performs a prediction of each pixel by a plane, and then it collects the occurrence counts of the prediction errors, conditioned on past pixels forming a "context". The counts are collected in a tree, constructed adaptively, and the size of the context with which each pixel is encoded is optimized.
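
A minimal sketch of the modeling step (the paper's adaptive context tree and the arithmetic coder itself are omitted; the crude two-gradient context below is an assumption for illustration): each pixel is predicted by the plane through its west, north, and north-west neighbours, and error counts are accumulated per context.

```python
from collections import defaultdict

def context_model(image):
    """Collect counts of planar-prediction errors, bucketed by context."""
    counts = defaultdict(lambda: defaultdict(int))
    h, w = len(image), len(image[0])
    for y in range(1, h):
        for x in range(1, w):
            W, N, NW = image[y][x-1], image[y-1][x], image[y-1][x-1]
            pred = W + N - NW                    # prediction by a plane
            err = image[y][x] - pred
            ctx = (min(abs(W - NW), 3), min(abs(N - NW), 3))
            counts[ctx][err] += 1                # statistics for the coder
    return counts

img = [[10, 12, 13, 13],
       [11, 13, 15, 14],
       [12, 14, 16, 16]]
for ctx, errs in sorted(context_model(img).items()):
    print(ctx, dict(errs))
```

The conditional counts are exactly what an arithmetic coder needs: within each context, P(err | ctx) is estimated from the counts, and the coder spends about -log2 P(err | ctx) bits per pixel.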

7 citations


Proceedings ArticleDOI
Narciso Garcia1, C. Munoz, Alberto Sanz
01 May 1986
TL;DR: Hierarchical encoding, initially developed for image decomposition, is a feasible alternative for image transmission and storage; several independent compression strategies can be implemented and therefore applied at the same time.
Abstract: Hierarchical encoding, initially developed for image decomposition, is a feasible alternative for image transmission and storage. Several independent compression strategies can be implemented and therefore applied at the same time.
Lossless encoding:
o Universal statistical compression on the hierarchical code. A unique Huffman code, valid for every hierarchical transform, is built.
Lossy encoding:
o Improvement of the intermediate approximations, as this can decrease the effective bit rate for transmission applications. Interpolating schemes and non-uniform spatial out-growth help solve this problem.
o Prediction strategies on the hierarchical code. A three-dimensional predictor (space and hierarchy) on the code-pyramid reduces the information required to build new layers.
o Early branch ending. Analysis of image homogeneities detects areas of similar values that can be approximated by a unique value.
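
A hedged sketch of the pyramid structure such hierarchical schemes are built on (the paper's exact hierarchical transform, Huffman code, and predictor are not reproduced; 2x2 averaging and power-of-two image sizes are assumptions): each layer halves the resolution, and only the residuals needed to rebuild the next finer layer are kept, so coarse approximations are available early in a transmission.

```python
import numpy as np

def build_pyramid(img):
    """Mean pyramid: repeated 2x2 averaging, coarsest layer first."""
    layers = [img.astype(float)]
    while layers[-1].shape[0] > 1:
        a = layers[-1]
        layers.append((a[0::2, 0::2] + a[0::2, 1::2] +
                       a[1::2, 0::2] + a[1::2, 1::2]) / 4.0)
    return layers[::-1]

def layer_residuals(layers):
    """Residuals between each layer and the up-sampled coarser layer."""
    res = [layers[0]]
    for coarse, fine in zip(layers[:-1], layers[1:]):
        up = coarse.repeat(2, axis=0).repeat(2, axis=1)
        res.append(fine - up)           # mostly small values -> compress well
    return res

img = np.arange(64, dtype=float).reshape(8, 8)
for r in layer_residuals(build_pyramid(img)):
    print(r.shape)                      # (1, 1), (2, 2), (4, 4), (8, 8)
```

Predicting each finer layer from the coarser one is the one-level version of the three-dimensional (space plus hierarchy) prediction mentioned above: the better the up-sampling predictor, the smaller the residuals that remain to be coded.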

6 citations



Journal ArticleDOI
TL;DR: The given solutions appear for the first time in relation to switched-capacitor filters, but they rely on classical techniques in the area of passive (distributed) networks and cannot be obtained from lumped-element filters.
Abstract: Exact analytic techniques are given for the design of strays-insensitive lossless discrete integrator (LDI) switched-capacitor low-pass and high-pass filters, with phase linearity taken into consideration. An improved synthesis algorithm is also presented for the low-pass case. The given solutions appear for the first time in relation to switched-capacitor filters, but they rely on classical techniques in the area of passive (distributed) networks and cannot be obtained from lumped-element filters. Due to the paucity of useful results in this area, a semi-tutorial style is adopted in order to make the design techniques accessible to as wide a readership as possible.

2 citations



Book ChapterDOI
01 Jan 1986
TL;DR: This chapter explores the possibility of an efficient source coding technique that is self-error-correcting, avoiding the efficiency loss of the segmentation schemes often employed in facsimile coding.
Abstract: A major problem of the predictive source coding of images is the inevitable fact that errors often propagate and destroy large portions of the reconstructed image. While this occurs for all source coding techniques, including information-reducing but synchronous DPCM, it is particularly troublesome for information-preserving, asynchronous, predictive techniques. There have been many efforts to deal with this problem, including, as often employed in facsimile coding, dividing the image into segments that are alternately source encoded and left uncompressed, so that significant distortions can be limited to a single segment. All such efforts reduce the efficiency of the code. This chapter explores the possibility of an efficient source coding technique that is self-error-correcting.
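
The trade-off described above is easy to demonstrate with a toy predictive (delta) coder, a sketch written for this summary rather than taken from the chapter: without segmentation one corrupted code word damages everything after it, while restarting the predictor every few samples confines the damage to one segment at the cost of sending each segment's first value uncompressed.

```python
def delta_encode(xs, seg=None):
    out, prev = [], 0
    for i, x in enumerate(xs):
        if seg and i % seg == 0:
            prev = 0                    # resynchronize: send x uncompressed
        out.append(x - prev)
        prev = x
    return out

def delta_decode(ds, seg=None):
    out, prev = [], 0
    for i, d in enumerate(ds):
        if seg and i % seg == 0:
            prev = 0
        prev += d
        out.append(prev)
    return out

data = list(range(10, 30))
for seg in (None, 5):
    code = delta_encode(data, seg)
    code[7] += 100                      # corrupt one code word in transit
    decoded = delta_decode(code, seg)
    bad = [i for i, (a, b) in enumerate(zip(decoded, data)) if a != b]
    print("segment length:", seg, "-> corrupted positions:", bad)
```

With no segmentation every position from 7 onward is wrong; with segments of 5 only positions 7-9 are, which is precisely the containment that segmentation buys and that a self-error-correcting code would aim to achieve without the overhead.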

Journal Article
TL;DR: An extension of the method based on the Goertzel algorithm is given, applying general two-output second-order lossless discrete resonators; the proposed resonator structure uses simple SC integrators, avoiding the circuit-complexity problems of SC delay elements.
Abstract: An extension of the method based on the Goertzel algorithm is given, applying general two-output second-order lossless discrete resonators. In the proposed structure of resonators, simple SC integrators are used, avoiding the problems of circuit complexity of SC delay elements. The different resonators of the bank are of the same topology and differ only by one capacitor value. The solution is, therefore, very suitable for integration.
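
For reference, the Goertzel algorithm that the resonator bank builds on, in its textbook software form (each second-order recursion below corresponds to one resonator of the bank, realized in the paper with SC integrators rather than in code):

```python
import math

def goertzel(samples, k):
    """Evaluate the single DFT bin X[k] with a second-order resonator."""
    N = len(samples)
    w = 2.0 * math.pi * k / N
    coeff = 2.0 * math.cos(w)
    s1 = s2 = 0.0
    for x in samples:
        s0 = x + coeff * s1 - s2       # lossless second-order recursion
        s2, s1 = s1, s0
    return complex(s1 - s2 * math.cos(w), s2 * math.sin(w))

N = 64
tone = [math.sin(2 * math.pi * 5 * n / N) for n in range(N)]
print(abs(goertzel(tone, 5)))          # ~N/2 = 32 at the tone's own bin
```

The per-resonator state is just (s1, s2) plus the coefficient 2cos(w), which matches the abstract's observation that the bank's resonators can share one topology and differ only in a single element value.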

Journal ArticleDOI
TL;DR: A theoretical analysis of the propagation constant of an optical fiber with lossless and lossy jackets in the long-wavelength range is carried out; two determinants representing the eigenvalue equations for the lossless and lossy coatings are derived, the eigenvalues are computed, and the results are discussed.
Abstract: A theoretical investigation of the propagation constant of an optical fibre with lossless and lossy jackets has been carried out. In this analysis, we consider a single-mode step-index fibre in the long-wavelength range. Two determinants representing the eigenvalue equations for the lossless and lossy coatings are derived, the eigenvalues are computed, and the results are discussed.