Showing papers on "Lossless compression published in 1987"


Journal ArticleDOI
TL;DR: The state of the art in data compression is arithmetic coding, not the better-known Huffman method: arithmetic coding gives greater compression, is faster for adaptive models, and clearly separates the model from the channel encoding.
Abstract: The state of the art in data compression is arithmetic coding, not the better-known Huffman method. Arithmetic coding gives greater compression, is faster for adaptive models, and clearly separates the model from the channel encoding.
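The interval-narrowing idea is compact enough to sketch. Below is a minimal, illustrative toy in Python, not the paper's coder: it uses floating-point intervals and a fixed model for clarity, whereas practical arithmetic coders use integer arithmetic with incremental bit output to sidestep precision limits.

```python
# Toy arithmetic coder: each symbol narrows [low, high) in proportion to
# its probability; any number in the final interval identifies the message.
def encode(symbols, probs):
    cum, ranges = 0.0, {}
    for s, p in probs.items():            # cumulative ranges per symbol
        ranges[s] = (cum, cum + p)
        cum += p
    low, high = 0.0, 1.0
    for s in symbols:
        span = high - low
        c_lo, c_hi = ranges[s]
        low, high = low + span * c_lo, low + span * c_hi
    return (low + high) / 2               # a representative of the interval

def decode(code, n, probs):
    cum, ranges = 0.0, {}
    for s, p in probs.items():
        ranges[s] = (cum, cum + p)
        cum += p
    out = []
    for _ in range(n):
        for s, (c_lo, c_hi) in ranges.items():
            if c_lo <= code < c_hi:
                out.append(s)
                code = (code - c_lo) / (c_hi - c_lo)  # rescale for next symbol
                break
    return "".join(out)

probs = {"a": 0.6, "b": 0.3, "c": 0.1}
x = encode("abac", probs)
assert decode(x, 4, probs) == "abac"
```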

3,188 citations



Journal ArticleDOI
TL;DR: A lossless progressive transmission method for grey-scale images which concentrates early transmission efforts on areas of greater image information content is described; it is computationally simple, with a complexity that grows linearly with the number of pixels.
Abstract: A lossless progressive transmission method for grey-scale images is described which concentrates early transmission effort on areas of greater image information content. The receiver does not have a priori knowledge of which image areas are to receive preferential treatment, and the preferential level of resolution is the pixel. The method makes use of simultaneous geometric and information-content decompositions. It is computationally simple, with a complexity that grows linearly with the number of pixels. The compression achieved approaches that of nonprogressive lossless methods and is approximately the same as that of homogeneous progressive lossless methods. Extensions of the method for progressive transmission with limited distortion and greater compression are also discussed.
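The paper's exact decompositions are not reproduced here, but the control flow of information-driven progressive transmission can be sketched: quad-tree regions are queued by a detail measure so the most informative areas are refined first. The measure (intensity range), the representative (the region mean), and power-of-two dimensions below are assumptions for illustration only.

```python
# Hedged sketch: a priority queue orders quad-tree regions by detail, so
# coarse approximations of busy areas are transmitted before flat ones.
import heapq
import numpy as np

def transmit_order(img):
    h, w = img.shape                       # assumes power-of-two dimensions
    heap, tie = [(-int(np.ptp(img)), 0, (0, 0, h, w))], 1
    while heap:
        _, _, (y, x, hh, ww) = heapq.heappop(heap)
        yield (y, x, hh, ww), int(img[y:y+hh, x:x+ww].mean())  # send approximation
        if hh > 1 and ww > 1:              # refine: queue the four quadrants
            for dy, dx in ((0, 0), (0, 1), (1, 0), (1, 1)):
                sy, sx = y + dy * hh // 2, x + dx * ww // 2
                sub = img[sy:sy + hh // 2, sx:sx + ww // 2]
                heapq.heappush(heap, (-int(np.ptp(sub)), tie,
                                      (sy, sx, hh // 2, ww // 2)))
                tie += 1                   # unique tiebreak keeps tuples comparable

img = np.arange(16, dtype=np.uint8).reshape(4, 4)
for region, approx in list(transmit_order(img))[:5]:
    print(region, approx)                  # highest-detail regions come first
```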

40 citations


Journal ArticleDOI
TL;DR: Some error-free and irreversible two-dimensional discrete-cosine-transform (2D-DCT) coding image-compression techniques applied to radiological images are discussed in this paper.
Abstract: Some error-free and irreversible two-dimensional discrete-cosine-transform (2D-DCT) coding image-compression techniques applied to radiological images are discussed in this paper. Run-length coding and Huffman coding are described, and examples are given for error-free image compression. In the case of irreversible 2D-DCT coding, the block-quantization technique and the full-frame bit-allocation (FFBA) technique are described. Error-free image compression can achieve a compression ratio from 2:1 to 3:1, whereas the irreversible 2D-DCT coding compression technique can, in general, achieve a much higher acceptable compression ratio. The currently available block-quantization hardware may lead to visible block artifacts at certain compression ratios, but FFBA may be employed at the same or higher compression ratios without generating such artifacts. An even higher compression ratio can be achieved if the image is compressed by using first FFBA and then Huffman coding. The disadvantages of FFBA are that it is sensitive to sharp edges and that no hardware is available. This paper also describes the design of the FFBA technique.
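As a rough illustration of the irreversible branch, here is a sketch of the familiar block 2D-DCT baseline with uniform quantisation, using SciPy's dctn. The paper's FFBA instead allocates bits over the DCT of the whole frame (which is what avoids block artifacts); block size and quantiser step below are illustrative.

```python
# Block 2D-DCT coding sketch: transform each 8x8 block, coarsely quantise
# the coefficients, and inverse-transform to see the reconstruction error.
import numpy as np
from scipy.fft import dctn, idctn

def block_dct_codec(img, step=32.0, bs=8):
    out = np.empty_like(img, dtype=float)
    for y in range(0, img.shape[0], bs):
        for x in range(0, img.shape[1], bs):
            blk = img[y:y+bs, x:x+bs].astype(float)
            coef = dctn(blk, norm="ortho")
            coef = np.round(coef / step) * step      # uniform quantisation
            out[y:y+bs, x:x+bs] = idctn(coef, norm="ortho")
    return out

img = np.random.default_rng(0).integers(0, 256, (64, 64)).astype(np.uint8)
rec = block_dct_codec(img)
print("RMS error:", np.sqrt(np.mean((rec - img) ** 2)))
```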

24 citations


Proceedings ArticleDOI
01 Nov 1987
TL;DR: A family of compression methods is described that uses a hash table to look up prediction information; the methods are especially apt for “on-the-fly” compression of transmitted data and could be a basis for specialized hardware.
Abstract: Knowledge of a short substring constitutes a good basis for guessing the next character in a natural-language text. This observation underlies predictive text compression, in which subsequent characters are repeatedly guessed and the outcomes encoded. The paper describes a family of such compression methods, using a hash table for searching the prediction information. The experiments show that the methods produce good compression gains and, moreover, are very fast. The one-pass versions are especially apt for “on-the-fly” compression of transmitted data, and could be a basis for specialized hardware.
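A hedged sketch of the hashing idea: the last k characters are hashed to index a table whose entry is the predicted next character. A hit could be encoded in a single bit; the sketch only measures the hit rate. The table layout and one-pass update rule are assumptions, not the paper's exact scheme.

```python
# Prediction via hashed contexts: remember the latest successor of each
# hashed k-character context and count how often the guess is right.
def predictive_pass(text, k=3, table_size=1 << 16):
    table = [None] * table_size
    hits = 0
    for i in range(len(text)):
        slot = hash(text[max(0, i - k):i]) % table_size  # hash the context
        if table[slot] == text[i]:
            hits += 1                    # correct guess: cheap to encode
        table[slot] = text[i]            # learn the latest successor (one pass)
    return hits / max(1, len(text))

sample = "the quick brown fox jumps over the lazy dog " * 50
print(f"prediction hit rate: {predictive_pass(sample):.2%}")
```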

20 citations


Journal ArticleDOI
TL;DR: A simple and direct algebraic proof is given to show that a rational $m \times n$ ($m \geq n$) lossless transfer function has an orthogonal realization.
Abstract: In this paper, a simple and direct algebraic proof is given to show that a rational $m \times n$ ($m \geq n$) lossless transfer function has an orthogonal realization. The proof exploits two interesting properties of the impulse response of a lossless transfer function.
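For reference, the standard losslessness (paraunitary) condition in this setting can be stated as follows; this is the textbook definition, not anything specific to the paper's argument:

```latex
% Losslessness (paraunitary) condition for a stable, rational m x n
% transfer matrix H(z):
\[
  \tilde{H}(z)\,H(z) \;=\; H^{T}(z^{-1})\,H(z) \;=\; I_{n},
\]
% so on the unit circle H(e^{j\omega}) has orthonormal columns and the
% system conserves signal energy between input and output.
```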

13 citations


Patent
04 Sep 1987
TL;DR: In this article, two input channels of vector data are compressed and encoded through a vector-quantizer encoder to provide a first stage of data compression, and the output of the first encoder is further decoded and re-encoded in a novel high-speed computing mapping means which may be implemented in the form of a look-up table.
Abstract: Apparatus is provided for performing two stages of high-speed compression of vector data inputs. Two input channels of vector data are compressed and encoded through a vector-quantizer encoder to provide a first stage of data compression. The output of the first encoder is further decoded, then compressed and encoded in a novel high-speed computing mapping means which may be implemented in the form of a look-up table. The vector-quantized, encoded output achieves double the data compression of the single first stage. The second stage of data compression causes very little degradation of the data from the first stage.
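A hedged sketch of the two-stage structure: stage one is a plain nearest-neighbour vector quantiser; stage two re-quantises a pair of decoded stage-one codewords with a coarser codebook, and because the input to stage two is just a pair of small indices, that mapping can be precomputed as a look-up table. The codebooks below are random placeholders, not the patent's.

```python
# Two-stage VQ: 4-bit indices from stage 1, pairs re-encoded to 6-bit codes
# by a precomputed (16 x 16) look-up table over a coarser 8-d codebook.
import numpy as np

rng = np.random.default_rng(1)
cb1 = rng.normal(size=(16, 4))   # stage 1: 16 codewords (4-bit index), 4-d data

def vq_encode(v, codebook):
    return int(np.argmin(np.sum((codebook - v) ** 2, axis=1)))  # nearest codeword

cb2 = rng.normal(size=(64, 8))   # stage 2: 64 codewords (6-bit index), 8-d pairs
pair_table = np.empty((16, 16), dtype=np.uint8)
for a in range(16):
    for b in range(16):          # decode the index pair, re-encode with stage 2
        pair_table[a, b] = vq_encode(np.concatenate([cb1[a], cb1[b]]), cb2)

def two_stage_encode(v_a, v_b):
    i_a, i_b = vq_encode(v_a, cb1), vq_encode(v_b, cb1)
    return int(pair_table[i_a, i_b])   # run-time stage 2 is one table access

print("two-stage code:", two_stage_encode(rng.normal(size=4), rng.normal(size=4)))
```

In this toy setting, two 4-bit stage-one indices (8 bits) become one 6-bit code, which is where the extra compression comes from.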

13 citations


01 Mar 1987
TL;DR: In this article, the feasibility of micro-computer based simulation of scalar wave propagation in various media was investigated, including lossless media and media with a loss coefficient which is linear in frequency.
Abstract: This thesis investigates the feasibility of micro-computer based simulation of scalar wave propagation in various media. Models for lossless media and media with a loss coefficient which is linear in frequency have been coded in FORTRAN and simulated successfully on a commercially available micro-computer, with simulation times less than 30 minutes. The spatial impulse responses for classical problems using square and circular-piston excitation are presented graphically, along with new, innovative spatial excitation source shapes. Keywords include: Acoustic imaging, Lossy media, Ultrasonic, Propagation, Transfer functions, and Linear systems.
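The thesis's models are coded in FORTRAN; purely as an assumed illustration of the lossless-medium case, here is a minimal 1-D scalar-wave finite-difference scheme in Python (the frequency-linear loss model is not reproduced, and all grid constants are illustrative).

```python
# Second-order finite-difference update for the 1-D lossless wave equation,
# u_tt = c^2 u_xx, with fixed (zero) boundaries and an impulse source.
import numpy as np

c, dx, dt, n, steps = 1.0, 0.01, 0.005, 200, 400
r2 = (c * dt / dx) ** 2        # Courant number squared (must be <= 1 for stability)
u_prev, u = np.zeros(n), np.zeros(n)
u[n // 2] = 1.0                # initial impulse at the centre of the grid
for _ in range(steps):
    u_next = np.zeros(n)
    u_next[1:-1] = (2 * u[1:-1] - u_prev[1:-1]
                    + r2 * (u[2:] - 2 * u[1:-1] + u[:-2]))
    u_prev, u = u, u_next      # boundaries stay zero (fixed ends)
print("peak amplitude after propagation:", float(np.abs(u).max()))
```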

3 citations


Dissertation
01 Jan 1987
TL;DR: An arithmetic universal coding scheme is developed to achieve the entropy of an information source given the probabilities from the modeler; to avoid a model file becoming too large, nonadaptive models are transformed into adaptive models simply by keeping the appropriate counts of symbols or strings.
Abstract: In data compression systems, most existing codes have integer code lengths because of the nature of block codes. One way to improve compression is to use codes with noninteger lengths. We developed an arithmetic universal coding scheme to achieve the entropy of an information source with the given probabilities from the modeler. We show how the universal coder can be used for both the probabilistic and nonprobabilistic approaches in data compression. To avoid a model file becoming too large, nonadaptive models are transformed into adaptive models simply by keeping the appropriate counts of symbols or strings. The idea of the universal coder comes from the Elias code, so it is quite close to the arithmetic code suggested by Rissanen and Langdon. The major difference is the way that we handle the precision problem and the carry-over problem. Ten to twenty percent extra compression can be expected as a result of using the universal coder. The process of adaptive modeling, however, may be forty times slower unless parallel algorithms are used to speed up the probability-calculating process.
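A hedged sketch of the adaptivity idea: instead of shipping a model file, coder and decoder keep identical symbol counts and derive probabilities from them, updating after every symbol so both sides stay in lockstep. Class and variable names are illustrative, not the thesis's.

```python
# Adaptive model from counts: probabilities come from running frequencies,
# and the decoder performs the same updates, so no model file is needed.
import math
from collections import Counter

class AdaptiveModel:
    def __init__(self, alphabet):
        self.counts = Counter({s: 1 for s in alphabet})  # add-one start: uniform

    def prob(self, symbol):
        return self.counts[symbol] / sum(self.counts.values())

    def update(self, symbol):
        self.counts[symbol] += 1    # the decoder performs the identical update

model = AdaptiveModel("ab")
bits = 0.0
for s in "aababaaab":
    bits += -math.log2(model.prob(s))  # ideal arithmetic-code length for s
    model.update(s)
print(f"ideal code length: {bits:.2f} bits for 9 symbols")
```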

3 citations


Journal ArticleDOI
TL;DR: Reversible data compression of computer-generated digital color pictures is studied, and a simple algorithm called “skip-on-equal” is shown to result in a compression ratio around six.
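The summary gives the algorithm's name but not its definition. One plausible reading, offered purely as an assumption, is a run-length scheme that skips over runs of identical pixels, which would suit the large flat regions typical of computer-generated pictures.

```python
# Assumed "skip-on-equal" reading: scan forward while pixels stay equal,
# emit (value, run length) pairs, and reverse the process losslessly.
def skip_on_equal_encode(pixels):
    out, i = [], 0
    while i < len(pixels):
        j = i
        while j + 1 < len(pixels) and pixels[j + 1] == pixels[i]:
            j += 1                          # skip while pixels stay equal
        out.append((pixels[i], j - i + 1))  # (value, run length)
        i = j + 1
    return out

def skip_on_equal_decode(runs):
    return [v for v, n in runs for _ in range(n)]

row = [7] * 100 + [3, 3, 9] + [7] * 50
enc = skip_on_equal_encode(row)
assert skip_on_equal_decode(enc) == row
print(f"{len(row)} pixels -> {len(enc)} runs")
```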

2 citations


01 Jan 1987
TL;DR: In this article, a parallel compression algorithm for the 16,384-processor MPP machine was developed, which can be viewed as a combination of on-line dynamic lossless text-compression techniques and vector quantization.
Abstract: A parallel compression algorithm for the 16,384-processor MPP machine was developed. The serial version of the algorithm can be viewed as a combination of on-line dynamic lossless text-compression techniques (which employ simple learning strategies) and vector quantization. These concepts are described, as is how they are combined to form a new strategy for performing dynamic on-line lossy compression. Finally, the implementation of this algorithm in a massively parallel fashion on the MPP is discussed.
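Purely as an assumed illustration of "lossy compression with simple learning", here is an on-line vector quantiser whose codebook adapts as data arrive. The MPP version would parallelise the nearest-codeword search across its 16,384 processors; nothing below comes from the paper itself.

```python
# On-line VQ with a simple learning rule: emit the nearest codeword's index
# and nudge that codeword toward the input, so the codebook tracks the data.
import numpy as np

def online_vq(stream, k=8, lr=0.1, seed=0):
    codebook = np.random.default_rng(seed).normal(size=(k, stream.shape[1]))
    indices = []
    for v in stream:
        i = int(np.argmin(np.sum((codebook - v) ** 2, axis=1)))
        indices.append(i)                      # emit the code for this vector
        codebook[i] += lr * (v - codebook[i])  # nudge the winner toward the input
    return indices, codebook

stream = np.random.default_rng(2).normal(size=(100, 4))
codes, _ = online_vq(stream)
print("first ten codes:", codes[:10])
```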

Proceedings ArticleDOI
13 Oct 1987
TL;DR: The concept of simultaneous decomposition is extended to include a third component, grey-level approximation; the technique is computationally simple and intended for general-purpose processor architectures.
Abstract: Swift recognition of grey-scale images transmitted through low-bandwidth channels has been demonstrated by various progressive techniques in which a series of gradually refined image approximations are received and displayed. A previous technique demonstrated non-homogeneous progressive transmission in which image content controls transmission priorities, resulting in a decrease of the time required to receive a usable image. The non-homogeneous technique utilized two simultaneous decompositions: a quad-tree-based spatial decomposition and a subimage information measure. The concept of simultaneous decomposition is extended to include a third component: grey-level approximation. Just as an undecomposed quad-subtree provides a spatial approximation, values of pixels used as quad-subtree representatives are initially approximated and later refined. The three simultaneous decompositions are integrated so as to achieve, for a given transmission time, a higher degree of received-image usefulness. The receiver does not have a priori knowledge about which image areas are to receive preferential treatment, and the level of preference is the pixel. The total transmission time for the series of approximations concluding in lossless reception, including preferential-decomposition overhead, is comparable to the time required by non-progressive lossless methods. The technique is computationally simple and intended for general-purpose processor architectures.
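The grey-level component can be sketched independently of the spatial machinery: a quad-subtree representative is first sent as its high-order bits and refined in later passes until it is exact. The 3/3/2-bit split below is an assumption for illustration, not the paper's schedule.

```python
# Grey-level approximation by bit refinement: each stage reveals more
# high-order bits of the representative pixel; the final stage is lossless.
def grey_level_stages(value, bits=8, stage_bits=(3, 3, 2)):
    approxs, sent = [], 0
    for b in stage_bits:
        sent += b
        mask = ((1 << sent) - 1) << (bits - sent)  # keep the bits sent so far
        approxs.append(value & mask)
    return approxs                                 # last entry equals value

print(grey_level_stages(0b10110101))  # [160, 180, 181]
```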

Proceedings ArticleDOI
09 Nov 1987
TL;DR: In this paper, a lossy nonlinear directional coupler is studied numerically, consisting of two lossless planar optical waveguides coupled through a lossy multiple-quantum-well nonlinear medium.
Abstract: A lossy nonlinear directional coupler is studied numerically. It consists of two lossless planar optical waveguides coupled through a lossy multiple quantum well nonlinear medium. GaAs-based material parameters were used for the analysis, and the input/output optical transfer characteristics are presented here.
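The paper's model is not given here; a standard starting point for such studies is the coupled-mode description, sketched below with loss folded into a complex coupling coefficient and an assumed Kerr-like nonlinearity. All parameter values are illustrative, not the paper's GaAs MQW data.

```python
# Coupled-mode sketch: two guided amplitudes exchange power through a lossy
# coupling, with an intensity-dependent phase term modelling nonlinearity.
import numpy as np
from scipy.integrate import solve_ivp

kappa = 1.0 - 0.1j   # coupling through the lossy MQW medium (imag part = loss)
gamma = 0.5          # assumed Kerr-like nonlinear coefficient

def rhs(z, a):
    # Pack complex a1, a2 as four reals so solve_ivp can integrate them.
    a1, a2 = a[0] + 1j * a[1], a[2] + 1j * a[3]
    d1 = 1j * kappa * a2 + 1j * gamma * abs(a1) ** 2 * a1
    d2 = 1j * kappa * a1 + 1j * gamma * abs(a2) ** 2 * a2
    return [d1.real, d1.imag, d2.real, d2.imag]

sol = solve_ivp(rhs, (0.0, np.pi), [1.0, 0.0, 0.0, 0.0])  # light enters guide 1
a1 = sol.y[0, -1] + 1j * sol.y[1, -1]
a2 = sol.y[2, -1] + 1j * sol.y[3, -1]
print(f"output powers: |a1|^2 = {abs(a1)**2:.3f}, |a2|^2 = {abs(a2)**2:.3f}")
```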

DOI
01 Dec 1987
TL;DR: An approach to image data compression using the 2-D lattice modelling method is presented; the realisation includes uniform quantisation and entropy coding of the predictor's prediction errors.
Abstract: An approach to image data compression using the 2-D lattice modelling method is presented. In addition to the 2-D lattice predictor, this realisation includes uniform quantisation and entropy coding of the predictor's prediction errors. Results show that coded pictures with a signal-to-noise ratio of 30.5 dB can be obtained at an information rate of 0.8 bit/pixel.
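As a rough sketch of this coding chain, the snippet below substitutes a fixed planar predictor for the adaptive 2-D lattice model, applies uniform quantisation to the prediction error, and reports the empirical entropy of the quantised errors as a stand-in for the entropy coder's rate. It is open-loop for brevity; a real codec predicts from reconstructed pixels.

```python
# Predict -> quantise prediction error -> estimate entropy-coded rate.
import numpy as np

def predictive_code(img, step=4):
    img = img.astype(float)
    pred = np.zeros_like(img)
    # Planar predictor: left + above - upper-left (stand-in for the lattice model).
    pred[1:, 1:] = img[1:, :-1] + img[:-1, 1:] - img[:-1, :-1]
    q = np.round((img - pred) / step)            # uniform quantisation of errors
    vals, counts = np.unique(q, return_counts=True)
    p = counts / counts.sum()
    rate = -(p * np.log2(p)).sum()               # empirical entropy, bit/pixel
    recon = pred + q * step                      # open-loop reconstruction
    mse = np.mean((recon - img) ** 2)
    snr = 10 * np.log10(img.var() / mse) if mse > 0 else float("inf")
    return rate, snr

img = np.random.default_rng(3).integers(0, 256, (64, 64))
print("rate (bit/pixel), SNR (dB):", predictive_code(img))
```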