
Showing papers on "Lossless compression published in 1992"


Journal ArticleDOI
TL;DR: The author provides an overview of the JPEG standard, and focuses in detail on the Baseline method, which has been by far the most widely implemented JPEG method to date, and is sufficient in its own right for a large number of applications.
Abstract: A joint ISO/CCITT committee known as JPEG (Joint Photographic Experts Group) has been working to establish the first international compression standard for continuous-tone still images, both grayscale and color. JPEG's proposed standard aims to be generic, to support a wide variety of applications for continuous-tone images. To meet the differing needs of many applications, the JPEG standard includes two basic compression methods, each with various modes of operation. A DCT (discrete cosine transform)-based method is specified for 'lossy' compression, and a predictive method for 'lossless' compression. JPEG features a simple lossy technique known as the Baseline method, a subset of the other DCT-based modes of operation. The Baseline method has been by far the most widely implemented JPEG method to date, and is sufficient in its own right for a large number of applications. The author provides an overview of the JPEG standard, and focuses in detail on the Baseline method. >
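To make the DCT-based path concrete, here is a minimal Python sketch of the 8x8 forward DCT and a uniform quantization step. The flat quantizer step of 16 is an illustrative assumption; the standard's Baseline method uses full quantization tables, zigzag ordering, and entropy coding, none of which are shown here.

import numpy as np

def dct_1d(v):
    # Orthonormal DCT-II of a length-N vector.
    N = len(v)
    n = np.arange(N)
    c = np.array([np.sqrt(1.0 / N)] + [np.sqrt(2.0 / N)] * (N - 1))
    return np.array([c[k] * np.sum(v * np.cos((2 * n + 1) * k * np.pi / (2 * N))) for k in range(N)])

def dct_8x8(block):
    # Separable 2-D DCT: transform columns, then rows.
    return np.apply_along_axis(dct_1d, 1, np.apply_along_axis(dct_1d, 0, block))

block = np.random.randint(0, 256, (8, 8)).astype(float) - 128.0   # level shift, as in JPEG
coeffs = np.round(dct_8x8(block) / 16.0)                          # illustrative flat quantizer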

3,425 citations


Journal ArticleDOI
TL;DR: If pictures can be characterized by their membership in the smoothness classes considered, then wavelet-based methods are near-optimal within a larger class of stable transform-based, nonlinear methods of image compression.
Abstract: A novel theory is introduced for analyzing image compression methods that are based on compression of wavelet decompositions. This theory precisely relates (a) the rate of decay in the error between the original image and the compressed image as the size of the compressed image representation increases (i.e., as the amount of compression decreases) to (b) the smoothness of the image in certain smoothness classes called Besov spaces. Within this theory, the error incurred by the quantization of wavelet transform coefficients is explained. Several compression algorithms based on piecewise constant approximations are analyzed in some detail. It is shown that, if pictures can be characterized by their membership in the smoothness classes considered, then wavelet-based methods are near-optimal within a larger class of stable transform-based, nonlinear methods of image compression. Based on previous experimental research it is argued that in most instances the error incurred in image compression should be measured in the integral sense instead of the mean-square sense. >

1,038 citations


Journal ArticleDOI
TL;DR: A novel design procedure is presented based on the two-channel lossless lattice that enables the design of a large class of FIR (finite impulse response)-PR filter banks, and includes the N=2M case.
Abstract: The authors obtain a necessary and sufficient condition on the 2M (M=number of channels) polyphase components of a linear-phase prototype filter of length N=2mM (where m=an arbitrary positive integer), such that the polyphase component matrix of the modulated filter is lossless. The losslessness of the polyphase component matrix, in turn, is sufficient to ensure that the analysis/synthesis system satisfies perfect reconstruction (PR). Using this result, a novel design procedure is presented based on the two-channel lossless lattice. This enables the design of a large class of FIR (finite impulse response)-PR filter banks, and includes the N=2M case. It is shown that this approach requires fewer parameters to be optimized than in the pseudo-QMF (quadrature mirror filter) designs and in the lossless lattice based PR-QMF designs (for equal length filters in the three designs). This advantage becomes significant when designing long filters for large M. The design procedure and its other advantages are described in detail. Design examples and comparisons are included.
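For reference, in the standard filter-bank notation (an assumption about conventions, not the paper's own symbols), losslessness of the polyphase matrix E(z) is the paraunitary condition, and perfect reconstruction follows by choosing the synthesis polyphase matrix as a delayed, scaled para-conjugate:

\tilde{E}(z)\,E(z) = c\,I, \qquad \tilde{E}(z) \triangleq E_{*}^{T}(z^{-1}), \quad c > 0,
\qquad R(z) = \frac{z^{-K}}{c}\,\tilde{E}(z) \;\Rightarrow\; \hat{x}(n) = x(n - n_0).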

395 citations


Journal ArticleDOI
TL;DR: Results from an image compression scheme based on iterated transforms are presented as a function of several encoding parameters including maximum allowed scale factor, number of domains, resolution of scale and offset values, minimum range size, and target fidelity.

231 citations


Patent
08 Apr 1992
TL;DR: In this paper, a method and apparatus for image compression suitable for personal computer applications, which compresses and stores data in two steps, is presented, where an image is captured in real-time and compressed using an efficient method and stored to a hard disk.
Abstract: A method and apparatus for image compression suitable for personal computer applications, which compresses and stores data in two steps. An image is captured in real-time and compressed using an efficient method and stored to a hard-disk. At some later time, the data is further compressed in non-real-time using a computationally more intense algorithm that results in a higher compression ratio. The two-step approach allows the storage reduction benefits of a highly sophisticated compression algorithm to be achieved without requiring the computational resources to perform this algorithm in real-time. A compression algorithm suitable for performing the first compression step on a host processor in a personal computer is also described. The first compression step accepts 4:2:2 YCrCb data from the video digitizer. The two chrominance components are averaged and a pseudo-random number is added to all components. The resulting values are quantized and packed into a single 32-bit word representing a 2×2 array of pixels. The seed value for the pseudo-random number is remembered so that the pseudo-random noise can be removed before performing the second compression step.
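A minimal Python sketch of the kind of first-step packing described (average the chrominance components, add seeded pseudo-random dither, quantize, and pack a 2x2 pixel group into one 32-bit word). The bit allocation of 6 bits per luma sample and 4 bits for the averaged chroma is an illustrative assumption, not the patent's actual format.

import random

def pack_2x2(y4, cr, cb, seed):
    # y4: four 8-bit luma samples of a 2x2 block; cr, cb: 8-bit chroma samples for the block.
    rng = random.Random(seed)                 # the seed is kept so the dither can be removed later
    c = (cr + cb) // 2                        # average the two chrominance components
    dithered = [min(255, v + rng.randrange(4)) for v in y4 + [c]]
    yq = [v >> 2 for v in dithered[:4]]       # 6 bits per luma sample
    cq = dithered[4] >> 4                     # 4 bits for the averaged chroma
    word = 0
    for v in yq:
        word = (word << 6) | v                # 4 x 6 = 24 luma bits
    return (word << 4) | cq                   # + 4 chroma bits = 28 of the 32 bits used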

154 citations


Journal ArticleDOI
TL;DR: A tutorial review of a classic paper by Samuel J. Mason (1954), which contained the first definition of a unilateral power gain for a linear two-port and the first proof that this gain is invariant with respect to linear lossless reciprocal four-port embeddings, is presented.
Abstract: A tutorial review of a classic paper by Samuel J. Mason (1954) is presented. That paper contained the first definition of a unilateral power gain for a linear two-port and the first proof that this gain is invariant with respect to linear lossless reciprocal four-port embeddings, thereby making it useful as a figure of merit intrinsic to the device. In this work, that original paper is brought up to date, a tutorial exposition of its contents is presented in a modern form, and its significance and applications in microwave engineering are discussed. The subsequent advances in the subject area are summarized, so that the original paper can be placed within a broader context and understood with a more general perspective.
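For context, Mason's unilateral gain in its commonly quoted two-port immittance-parameter form (here with y-parameters) is

U \;=\; \frac{\lvert y_{21} - y_{12}\rvert^{2}}{4\left[\operatorname{Re}(y_{11})\operatorname{Re}(y_{22}) - \operatorname{Re}(y_{12})\operatorname{Re}(y_{21})\right]},

which is invariant under lossless reciprocal embeddings and falls to unity at the device's maximum frequency of oscillation.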

133 citations


Journal ArticleDOI
TL;DR: A novel two-dimensional linear predictive coder, developed by extending the multichannel version of the Burg algorithm to two dimensions, outperforms the other reversible compression methods evaluated while offering certain advantages in implementation.
Abstract: The performances of a number of block-based, reversible, compression algorithms suitable for compression of very-large-format images (4096*4096 pixels or more) are compared to that of a novel two-dimensional linear predictive coder developed by extending the multichannel version of the Burg algorithm to two dimensions. The compression schemes implemented are: Huffman coding, Lempel-Ziv coding, arithmetic coding, two-dimensional linear predictive coding (in addition to the aforementioned one), transform coding using discrete Fourier-, discrete cosine-, and discrete Walsh transforms, linear interpolative coding, and combinations thereof. The performances of these coding techniques for a few mammograms and chest radiographs digitized to sizes up to 4096*4096 10 b pixels are discussed. Compression from 10 b to 2.5-3.0 b/pixel on these images has been achieved without any loss of information. The modified multichannel linear predictor outperforms the other methods while offering certain advantages in implementation. >
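As a point of reference for the predictive schemes compared above, a minimal Python sketch of fixed causal prediction and residual entropy estimation. The fixed three-neighbour planar predictor is an illustrative stand-in; the paper's coder adapts its coefficients via a two-dimensional Burg procedure.

import numpy as np

def prediction_residuals(img):
    # Predict each pixel from its west, north, and north-west neighbours
    # (a classic fixed planar predictor) and return the integer residuals.
    img = img.astype(np.int64)
    pred = np.zeros_like(img)
    pred[1:, 1:] = img[1:, :-1] + img[:-1, 1:] - img[:-1, :-1]
    pred[0, 1:] = img[0, :-1]
    pred[1:, 0] = img[:-1, 0]
    return img - pred                      # lossless: image = prediction + residual

def entropy_bits_per_pixel(residuals):
    # First-order entropy of the residuals, a rough bound on achievable bits/pixel.
    _, counts = np.unique(residuals, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())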

128 citations


Patent
02 Sep 1992
TL;DR: In this article, a general-purpose, single-pass, adaptive, and lossless data compression invention implements an LZ1-like method using a hash-based architecture, which is suitable for use in data storage and data communications applications.
Abstract: A general-purpose, single-pass, adaptive, and lossless data compression invention implements an LZ1-like method using a hash-based architecture. It is suitable for use in data storage and data communications applications. Implementation efficiency, in terms of required memory and logic gates relative to the typical compression ratio achieved, is highly optimized. An easy-to-implement and quick-to-verify hash function is used. Differential copy lengths may be used to reduce the number of bits required to encode the copy-length field within copy tokens. That is, if multiple matches to a sequence of input bytes are found in the current window, then the length of the copy may be encoded as the difference between the lengths of the longest and the second-longest match, which results in a smaller copy length which likely has a shorter encoded representation. To further increase the compression achieved, literals are not used, but rather input bytes without window matches are mapped into alphabet tokens of variable length using a unary-length code. Other unary-length codes are used to represent the copy-length field and the displacement field within copy tokens.
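A minimal Python sketch of hash-style match finding over a sliding window, the core step of an LZ1-like coder such as the one described. The window size, minimum match length, and the use of a Python dict in place of a fixed-size hash table are illustrative simplifications, and the patent's differential copy lengths and unary token codes are not reproduced.

WINDOW = 2048
MIN_MATCH = 3

def lz1_tokens(data: bytes):
    # Emit (offset, length) copy tokens or single literal bytes.
    table = {}                                   # last position at which each 3-byte key was seen
    i, out = 0, []
    while i < len(data):
        key = data[i:i + MIN_MATCH]
        j = table.get(key, -1)
        if j >= 0 and i - j <= WINDOW:
            length = MIN_MATCH
            while i + length < len(data) and data[j + length] == data[i + length]:
                length += 1                      # greedily extend the match
            out.append(('copy', i - j, length))
        else:
            out.append(('lit', data[i]))
            length = 1
        for k in range(i, i + length):           # update the table as we advance
            table[data[k:k + MIN_MATCH]] = k
        i += length
    return out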

124 citations


Patent
Ke-Chiang Chu1
18 Dec 1992
TL;DR: In this paper, a data compression process and system that identifies the data type of an input data stream and then selects in response to the identified data type at least one data compression method from a set of data compression methods that provides an optimal compression ratio for that particular data type, thus maximizing the compression ratio of that data stream.
Abstract: A data compression process and system that identifies the data type of an input data stream and then selects, in response to the identified data type, at least one data compression method from a set of data compression methods that provides an optimal compression ratio for that particular data type, thus maximizing the compression ratio for that input data stream. The data compression process also provides means to alter the rate of compression during data compression, for added flexibility and efficiency. Furthermore, a system memory allocation process is provided to allow system or user control over the amount of system memory allocated to the memory-intensive data compression process. This process estimates the memory required to compress the input data stream and allocates only that amount of system memory, for memory-allocation efficiency.
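A minimal Python sketch of the selection idea: inspect the input, then dispatch to a compressor expected to suit that data type. The detection heuristic and the two stand-in compressors (zlib, bz2) are illustrative assumptions, not the patent's methods.

import bz2, zlib

def detect_type(data: bytes) -> str:
    # Crude heuristic: mostly printable ASCII in the first 4 KB -> "text", otherwise "binary".
    printable = sum(32 <= b < 127 or b in (9, 10, 13) for b in data[:4096])
    return 'text' if printable > 0.95 * min(len(data), 4096) else 'binary'

COMPRESSORS = {'text': bz2.compress, 'binary': zlib.compress}

def compress(data: bytes) -> bytes:
    return COMPRESSORS[detect_type(data)](data)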

105 citations


Proceedings ArticleDOI
24 Mar 1992
TL;DR: The authors consider choosing Sigma to be an alphabet whose symbols are the words of English or, in general, alternate maximal strings of alphanumeric characters and nonalphanumeric characters to take advantage of longer-range correlations between words and achieve better compression.
Abstract: Text compression algorithms are normally defined in terms of a source alphabet Sigma of 8-bit ASCII codes. The authors consider choosing Sigma to be an alphabet whose symbols are the words of English or, in general, alternate maximal strings of alphanumeric characters and nonalphanumeric characters. The compression algorithm would be able to take advantage of longer-range correlations between words and thus achieve better compression. The large size of Sigma leads to some implementation problems, but these are overcome to construct word-based LZW, word-based adaptive Huffman, and word-based context modelling compression algorithms. >
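A minimal Python sketch of the alphabet construction described above: the input is split into alternating maximal runs of alphanumeric and non-alphanumeric characters, and these runs become the symbols of Sigma that a word-based LZW, adaptive Huffman, or context-modelling back end (not shown) would then code.

import re

def word_tokens(text: str):
    # Alternate maximal alphanumeric / non-alphanumeric strings become the symbols of Sigma.
    return re.findall(r'[A-Za-z0-9]+|[^A-Za-z0-9]+', text)

tokens = word_tokens("The cat sat on the mat, twice.")
# ['The', ' ', 'cat', ' ', 'sat', ' ', 'on', ' ', 'the', ' ', 'mat', ', ', 'twice', '.']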

91 citations


Patent
01 Jun 1992
TL;DR: In this paper, a dictionary of finite size is used to facilitate the compression and decompression of data, and the data entries in the second dictionary represent the entries of the first dictionary that compress the greatest amount of input data.
Abstract: A class of lossless data compression algorithms uses a memory-based dictionary of finite size to facilitate the compression and decompression of data. When the current dictionary (CD) fills up with encoded character strings, it is reset, thereby losing the compression information previously contained in the dictionary. To reduce the loss in data compression caused by dictionary resets, a second, standby dictionary (SD) is used to simultaneously store a subset of the encoded data entries stored in the first dictionary. The data entries in the second dictionary represent the data entries of the first dictionary that compress the greatest amount of input data. When the first dictionary is ready to be reset, the first dictionary is replaced with the second dictionary, maintaining high data compression and freeing up memory space for new encoded data strings.
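A minimal Python sketch of the idea on top of a plain LZW encoder. As a simplification, the standby set is built at reset time from the phrases that covered the most input bytes, rather than being maintained concurrently as in the patent, and a decoder would have to apply the same retention rule to stay in step.

def lzw_with_standby(data: bytes, max_size=4096, standby_frac=0.25):
    cd = {bytes([i]): i for i in range(256)}        # current dictionary
    usage = {}                                      # input bytes covered by each phrase
    out, w = [], b''
    for b in data:
        wb = w + bytes([b])
        if wb in cd:
            w = wb
            continue
        out.append(cd[w])
        usage[w] = usage.get(w, 0) + len(w)
        if len(cd) < max_size:
            cd[wb] = len(cd)
        else:
            # Reset: keep the phrases that covered the most input as the seed of the new dictionary.
            keep = sorted(usage, key=usage.get, reverse=True)[:int(max_size * standby_frac)]
            cd = {bytes([i]): i for i in range(256)}
            for ph in keep:
                if ph not in cd:
                    cd[ph] = len(cd)
            usage = {}
        w = bytes([b])
    if w:
        out.append(cd[w])
    return out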

Patent
23 Dec 1992
TL;DR: In this article, multiple hash tables are used based on different subblock sizes for string matching, and this improves the compression ratio and rate of compression, while using multiple hashing tables with a recoverable hashing method further improves compression ratio.
Abstract: Compressing a sequence of characters drawn from an alphabet uses string substitution with no a priori information. An input data block is processed into an output data block comprised of variable length incompressible data sections and variable length compressed token sections. Multiple hash tables are used based on different subblock sizes for string matching, and this improves the compression ratio and rate of compression. The plurality of uses of the multiple hash tables allows for selection of an appropriate compression data rate and/or compression factor in relation to the input data. Using multiple hashing tables with a recoverable hashing method further improves compression ratio and compression rate. Each incompressible data section contains means to distinguish it from compressed token sections.

Journal ArticleDOI
TL;DR: A VLSI implementation of a lossless data compression algorithm is reported and its performance on several 8-b test images exceeds other techniques employing differential pulse code modulation followed by arithmetic coding, adaptive Huffman coding, and a Lempel-Ziv-Welch (LZW) algorithm.
Abstract: A VLSI implementation of a lossless data compression algorithm is reported. This is the first implementation of an encoder/decoder chip set that uses the Rice algorithm (see JPL Publication 91-1, 1991); an introduction to the algorithm and a description of the high-performance hardware are provided. The algorithm is adaptive over a wide entropy range. Its performance on several 8-b test images exceeds other techniques employing differential pulse code modulation (DPCM) followed by arithmetic coding, adaptive Huffman coding, and a Lempel-Ziv-Welch (LZW) algorithm. A major feature of the algorithm is that it requires no look-up tables or external RAM. Only 71000 transistors are required to implement the encoder and decoder. Each chip was fabricated in a 1.0-µm CMOS process and both are only 5 mm on a side. A comparison is made with other hardware realizations. Under laboratory conditions, the encoder compresses at a rate in excess of 50 Msamples/s and the decoder operates at 25 Msamples/s. The current implementation processes quantized data from 4 to 14 b/sample.
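For background, a minimal Python sketch of the Golomb-Rice code family that underlies the Rice algorithm's coding options (unary quotient plus k low-order remainder bits, after folding signed prediction errors onto the non-negative integers); the chip set's adaptive selection of k per block of samples is not shown.

def rice_encode(value: int, k: int) -> str:
    # Map a non-negative integer to unary(quotient) followed by a k-bit remainder.
    q, r = value >> k, value & ((1 << k) - 1)
    return '1' * q + '0' + (format(r, f'0{k}b') if k else '')

def zigzag(e: int) -> int:
    # Fold signed prediction errors: 0, -1, 1, -2, 2, ... -> 0, 1, 2, 3, 4, ...
    return 2 * e if e >= 0 else -2 * e - 1

print(rice_encode(zigzag(-3), k=2))   # zigzag(-3) = 5 -> '1001'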

Journal ArticleDOI
TL;DR: This work presents two new methods (called MLP and PPPM) for lossless compression, both involving linear prediction, modeling prediction errors by estimating the variance of a Laplace distribution, and coding using arithmetic coding applied to precomputed distributions.
Abstract: We give a new paradigm for lossless image compression, with four modular components: pixel sequence, prediction, error modeling and coding. We present two new methods (called MLP and PPPM) for lossless compression, both involving linear prediction, modeling prediction errors by estimating the variance of a Laplace distribution, and coding using arithmetic coding applied to precomputed distributions. The MLP method is both progressive and parallelizable. We give results showing that our methods perform significantly better than other currently used methods for lossless compression of high resolution images, including the proposed JPEG standard. We express our results both in terms of the compression ratio and in terms of a useful new measure of compression efficiency, which we call compression gain.
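A minimal Python sketch of the modular steps named above: a causal linear prediction, followed by a Laplace error model whose scale is estimated from the errors. The particular predictor and the global (rather than context-dependent) scale estimate are illustrative choices, not the MLP or PPPM definitions, and the arithmetic coder is omitted.

import numpy as np

def predict_and_model(img):
    img = img.astype(float)
    pred = np.zeros_like(img)
    pred[1:, 1:] = 0.5 * img[1:, :-1] + 0.5 * img[:-1, 1:]   # simple causal linear predictor
    err = img - pred                                          # border pixels predicted as zero, for brevity
    # Laplace maximum-likelihood estimate of the scale b from the prediction errors:
    # p(e) = exp(-|e| / b) / (2 b),  b_hat = mean(|e|).
    b_hat = np.abs(err).mean()
    return err, b_hat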

Patent
28 Aug 1992
TL;DR: In this paper, a method and apparatus for storing compressed bit map images in a laser printer is described, where bit maps representing a page of data are divided into bands and compressed into the printer memory and when needed by the interpreter/rasterizer, they are decompressed into another portion of that memory, or when desired to print those bands they are directly transmitted to a decompression engine.
Abstract: A method and apparatus for storing compressed bit map images in a laser printer. Bit map images representing a page of data are divided into bands and compressed into the printer memory. Then, when needed by the interpreter/rasterizer, they are decompressed into another portion of that memory, or when desired to print those bands they are directly transmitted to a decompression engine. The bands of bit map image are compressed using a Lempel-Ziv algorithm that contains improvements allowing compression towards the end of the band and improves the compression speed at the beginning of the band by initializing a hash table. Further, the interpreter/rasterizer switches between compression routines depending on the available memory and the desired speed of compression. The compression routine requests supplemental destination buffers when it needs additional memory in which to compress data. Finally, the compression continues to add margin white space during the compression of the uncompressed bit images so that margin what space need not be stored in the uncompressed bit image.

Proceedings ArticleDOI
24 Mar 1992
TL;DR: The authors show that compression can be efficiently parallelized, with a computational advantage obtained when the dictionary has the prefix property, and that the approach can be generalized to the sliding window method, where the dictionary is a window that passes continuously over the input string.
Abstract: The authors study parallel algorithms for lossless data compression via textual substitution. Dynamic dictionary compression is known to be P-complete; however, if the dictionary is given in advance, they show that compression can be efficiently parallelized and a computational advantage is obtained when the dictionary has the prefix property. The approach can be generalized to the sliding window method, where the dictionary is a window that passes continuously from left to right over the input string.


Journal ArticleDOI
TL;DR: The simple strategy of treating bit-planes as independent bi-level images for JBIG coding yields compressions at least comparable to and sometimes better than the JPEG standard in its lossless mode, making it attractive in a wide variety of environments.
Abstract: The JBIG coding standard like the G3 and G4 facsimile standards defines a method for the lossless (bit-preserving) compression of bi-level (two-tone or black/white) images. One advantage it has over G3/G4 is superior compression, especially on bi-level images rendering greyscale via halftoning. On such images compression improvements as large as a factor of ten are common. A second advantage of the JBIG standard is that it can be parameterized for progressive coding. Progressive coding has application in image databases that must serve displays of differing resolution, image databases delivering images to CRT displays over medium rate (say, 9.6 to 64 kbit/s) channels, and image transmission services using packet networks having packet priority classes. It is also possible to parameterize for sequential coding in applications not benefiting from progressive buildup. It is possible to effectively use the JBIG coding standard for coding greyscale and color images as well as bi-level images. The simple strategy of treating bit-planes as independent bi-level images for JBIG coding yields compressions at least comparable to and sometimes better than the JPEG standard in its lossless mode. The excellent compression and great flexibility of JBIG coding make it attractive in a wide variety of environments.
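A minimal Python sketch of the bit-plane strategy: split a greyscale image into bi-level planes, each of which would then be handed to a JBIG coder. Gray-coding the pixel values first is a common refinement shown here as an option; the abstract does not specify it.

import numpy as np

def bit_planes(img, bits=8, gray_code=False):
    # Split a greyscale image into `bits` bi-level images, most significant plane first.
    img = img.astype(np.uint16)
    if gray_code:
        img = img ^ (img >> 1)            # optional Gray coding before plane splitting
    return [(img >> b) & 1 for b in range(bits - 1, -1, -1)]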

Journal ArticleDOI
TL;DR: Pipelined and parallel architectures for high-speed implementation of Huffman and Viterbi decoders (both of which belong to the class of tree-based decoders) are presented, and an incremental computation technique is used to obtain efficient parallel implementations.
Abstract: Pipelined and parallel architectures for high-speed implementation of Huffman and Viterbi decoders (both of which belong to the class of tree-based decoders) are presented. Huffman decoders are used for lossless compression. The Viterbi decoder is commonly used in communications systems. The achievable speed in these decoders is inherently limited due to the sequential nature of their computation. This speed limitation is overcome using a previously proposed technique of look-ahead computation. The incremental computation technique is used to obtain efficient parallel (or block) implementations. The decomposition technique is exploited to reduce the hardware complexity in pipelined Viterbi decoders, but not in Huffman decoders. Logic minimization is used to reduce the hardware overhead complexity in pipelined Huffman decoders. >

Patent
12 Oct 1992
TL;DR: A method and system for data compression is presented that calculates the differences between adjacent data values, identifies a plurality of frequently occurring differences, tracks their frequency of occurrence, and assigns each identified difference an encoding whose length is based on that frequency, with all other differences falling back to a second encoding.
Abstract: A method and system for compression and decompression. The system compresses data by calculating the differences between adjacent data values, identifying a plurality of frequently occurring differences, tracking the frequency of occurrence of the identified differences, generating a first encoding for the identified differences wherein the length of the encoding is based on the frequency of occurrence, generating a second encoding for the differences other than the identified differences, and, for each calculated difference, encoding it using the first encoding when it is an identified difference and using the second encoding when it is not, to effect the compression of the data.
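A minimal Python sketch of the scheme as described: take adjacent differences, give the most frequently occurring differences short codes, and use an escape plus a generic encoding for all other differences. The static code assignment, the 2-bit short codes, and the 16-bit escape format are illustrative; the patent tracks the frequencies adaptively.

from collections import Counter

def delta_encode(values, n_frequent=4):
    diffs = [b - a for a, b in zip(values, values[1:])]
    frequent = [d for d, _ in Counter(diffs).most_common(n_frequent)]
    short = {d: format(i, '02b') for i, d in enumerate(frequent)}     # 2-bit short codes
    out = []
    for d in diffs:
        if d in short:
            out.append('0' + short[d])                   # flag + short code for an identified difference
        else:
            out.append('1' + format(d & 0xFFFF, '016b')) # escape flag + raw 16-bit two's-complement difference
    return values[0], ''.join(out)                       # first value sent as a header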

Proceedings ArticleDOI
24 Mar 1992
TL;DR: In this study, the Bostelmann (1974) technique is considered for use at all resolutions, whereas in the arithmetic-coded JPEG lossless mode the technique is applied only at the 16-bit-per-pixel resolution.
Abstract: The JPEG lossless arithmetic coding algorithm and a predecessor algorithm called Sunset both employ adaptive arithmetic coding with the context model and parameter reduction approach of Todd et al. The authors compare the Sunset and JPEG context models for the lossless compression of gray-scale images, and derive new algorithms based on the strengths of each. The context model and binarization tree variations are compared in terms of their speed (the number of binary encodings required per test image) and their compression gain. In this study, the Bostelmann (1974) technique is studied for use at all resolutions, whereas in the arithmetic coded JPEG lossless, the technique is applied only at the 16-bit per pixel resolution. >

Proceedings ArticleDOI
11 Oct 1992
TL;DR: A quantization technique to better approximate the higher coefficients has been used to obtain an accurate representation of the signal and the tradeoffs between accuracy, speed, and compression ratio are discussed.
Abstract: Orthogonal transforms provide alternate signal representations that can be useful for electrocardiogram (ECG) data compression. The goal is to select as small a subset of the transform coefficients as possible which contain the most information about the signal, without introducing objectionable error after reconstruction. With a sampling rate of 1 kHz, more than 99% of the power in the DCT is contained within the first 20% of the coefficients. Despite this result a 5:1 compression ratio cannot be obtained by merely substituting zero for the remaining 80%. The coefficients after the first 20%, although of relatively small magnitude, preserve the DC integrity of the signal. Approximating these components as zero leads to introduction of spurious sinusoidal terms in the reconstructed signal. A quantization technique to better approximate the higher coefficients has been used to obtain an accurate representation of the signal. The tradeoffs between accuracy, speed, and compression ratio are discussed. >
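A minimal Python sketch of the coefficient-subset idea: transform an ECG segment with an orthonormal DCT, keep the first 20% of the coefficients at full precision, and coarsely quantize (rather than zero) the remainder before reconstruction. The step sizes are illustrative.

import numpy as np

def dct_ii(x):
    # Orthonormal DCT-II.
    N = len(x)
    n, k = np.arange(N), np.arange(N)[:, None]
    c = np.where(k == 0, np.sqrt(1 / N), np.sqrt(2 / N))
    return (c * np.cos(np.pi * (2 * n + 1) * k / (2 * N))) @ x

def idct_ii(X):
    # Inverse of the orthonormal DCT-II (i.e., the orthonormal DCT-III).
    N = len(X)
    n, k = np.arange(N)[:, None], np.arange(N)
    c = np.where(k == 0, np.sqrt(1 / N), np.sqrt(2 / N))
    return np.cos(np.pi * (2 * n + 1) * k / (2 * N)) @ (c * X)

def compress_segment(x, keep_frac=0.2, coarse_step=8.0):
    X = dct_ii(np.asarray(x, float))
    m = int(keep_frac * len(X))
    X[m:] = np.round(X[m:] / coarse_step) * coarse_step   # coarse, but not zeroed out
    return idct_ii(X)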

Proceedings ArticleDOI
27 Aug 1992
TL;DR: The numerous problems that confront vision researchers entering the field of image compression are discussed and special attention is paid to the connection between the contrast sensitivity function and the JPEG quantization matrix.
Abstract: This paper asks how the vision community can contribute to the goal of achieving perceptually lossless image fidelity with maximum compression. In order to maintain a sharp focus the discussion is restricted to the JPEG-DCT image compression standard. The numerous problems that confront vision researchers entering the field of image compression are discussed. Special attention is paid to the connection between the contrast sensitivity function and the JPEG quantization matrix.

Proceedings ArticleDOI
24 Mar 1992
TL;DR: A new method for error modeling applicable to the multi-level progressive (MLP) algorithm for hierarchical lossless image compression is presented, based on a concept called the variability index, which provides accurate models for pixel prediction errors without requiring explicit transmission of the models.
Abstract: The authors present a new method for error modeling applicable to the multi-level progressive (MLP) algorithm for hierarchical lossless image compression. This method, based on a concept called the variability index, provides accurate models for pixel prediction errors without requiring explicit transmission of the models. They also use the variability index to show that prediction errors do not always follow the Laplace distribution, as is commonly assumed; replacing the Laplace distribution with a more general distribution further improves compression. They describe a new compression measurement called compression gain, and give experimental results showing that the using variability index gives significantly better compression than other methods in the literature. >

Journal ArticleDOI
TL;DR: The proposed adaptive filter makes real-time rational subspace estimation an accessible alternative to computationally expensive offline techniques.
Abstract: Rational models have been studied as a tractable attempt to account for frequency dependencies using a finite parameter description. In this context, the following problem is addressed: Given time-domain measurements, estimate rational orthonormal spanning vectors for the signal and noise subspaces. It is shown that the problem can be rephrased as adapting a lossless transfer matrix so as to maximize the power split between two sets of output bins. An efficient and numerically robust adaptive filtering algorithm is derived for lossless multivariable lattice filters, and the system can be programmed in real time using CORDIC processors. The adaptive filter equations are consistent with the proper subspace identification if the subspace filter satisfies a sufficient order condition. In under-modeled scenarios the stable stationary points are characterized by a minimized Rayleigh quotient which leads to good subspace fits. The proposed adaptive filter makes real-time rational subspace estimation an accessible alternative to computationally expensive offline techniques. >


Journal ArticleDOI
TL;DR: An efficient data compression method is proposed for digitized fingerprint images using B-spline functions and it is believed that this method of representation may also enable more effective algorithms to be developed to perform feature extraction, classification, or recognition of fingerprints.

Patent
05 Nov 1992
TL;DR: In this article, a data compression and decompression method was proposed utilizing a sliding window dictionary in combination with an adaptive dictionary. But only data which satisfy certain criteria is entered into the adaptive dictionary, and matched data is replaced with a pointer to the dictionary entry.
Abstract: A data compression and decompression method and apparatus utilizing a sliding window dictionary in combination with an adaptive dictionary. Incoming data moves through a buffer and is compared against both the sliding window dictionary and the adaptive dictionary, and matched data is replaced with a pointer to the dictionary entry. All incoming data is entered into the sliding window dictionary, but only data which satisfies certain criteria is entered into the adaptive dictionary.
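A minimal Python sketch of the dual-dictionary lookup: every position is searched in both the sliding window and the adaptive dictionary, all data passes through the window, and only phrases meeting an entry criterion are promoted to the adaptive dictionary. The fixed phrase length and the "seen at least twice" criterion are illustrative stand-ins for the patent's unspecified criteria.

def dual_dictionary_compress(data: bytes, window=4096, n=4):
    # Fixed-length n-byte phrases keep the sketch simple; real schemes use variable lengths.
    adaptive, counts, out, i = {}, {}, [], 0
    while i + n <= len(data):
        phrase = data[i:i + n]
        if phrase in adaptive:
            out.append(('adict', adaptive[phrase]))               # pointer to adaptive-dictionary entry
            i += n
        else:
            start = max(0, i - window)
            j = data[start:i].find(phrase)                        # search the sliding window
            if j != -1:
                out.append(('window', i - (start + j)))           # backward offset into the window
                i += n
            else:
                out.append(('lit', data[i]))
                i += 1
        counts[phrase] = counts.get(phrase, 0) + 1
        if counts[phrase] >= 2:                                   # illustrative entry criterion
            adaptive.setdefault(phrase, len(adaptive))
    out.extend(('lit', b) for b in data[i:])                      # trailing bytes as literals
    return out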

Journal ArticleDOI
TL;DR: Decomposition of images with the Haar orthonormal basis, an important member of the compactly supported wavelets, combined with a quadtree-structured hierarchical coding technique, is used in this work to obtain high image compression efficiency with time complexity linear in the number of pixels.
Abstract: Decomposition of images with the Haar orthonormal basis, which is an important member of the compactly supported wavelets, and a quadtree-structured hierarchical coding technique are used in this work to obtain high image compression efficiency and time complexity linear in the number of pixels. An exhaustive testing of the algorithm has been performed on images of different complexity, typical of several application environments (image transmission and storage, remote control of intelligent robots). The results of the experiments are presented and discussed. Finally, a comparison of the quality performance of these techniques with the JPEG (Block Cosine Transform coding) compression technique is presented.
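A minimal Python sketch of one level of the 2-D Haar decomposition (block averages plus three difference subbands over 2x2 neighbourhoods); the quadtree-structured coding of the resulting coefficients is not shown.

import numpy as np

def haar_level(img):
    # One 2-D Haar step: split into an approximation and three detail subbands.
    # Assumes even image dimensions.
    a = img[0::2, 0::2].astype(float); b = img[0::2, 1::2].astype(float)
    c = img[1::2, 0::2].astype(float); d = img[1::2, 1::2].astype(float)
    ll = (a + b + c + d) / 2.0          # approximation (scaled block average)
    lh = (a + b - c - d) / 2.0          # top/bottom difference detail
    hl = (a - b + c - d) / 2.0          # left/right difference detail
    hh = (a - b - c + d) / 2.0          # diagonal difference detail
    return ll, lh, hl, hh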

Book ChapterDOI
01 Jan 1992
TL;DR: The method has been designed to be computationally efficient: compression or decompression of a 512 × 512 image requires only 4 seconds on a Sun SPARCstation 1.
Abstract: An effective technique for image compression may be based on the H-transform (Fritze et al. 1977). The method that we have developed can be used for either lossless or lossy compression. The Guide Star digitised sky survey images can be compressed by at least a factor of 10 with no major losses in the astrometric and photometric properties of the compressed images. The method has been designed to be computationally efficient: compression or decompression of a 512 × 512 image requires only 4 seconds on a Sun SPARCstation 1.
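A minimal Python sketch of the recursive structure of an H-transform-style decomposition: a 2x2 sums-and-differences step is applied repeatedly to the block of sums, and because all quantities remain integers the transform can be inverted exactly, which is what permits a lossless mode. The unnormalized sums/differences and the square, power-of-two image size are illustrative assumptions.

import numpy as np

def h_transform(img, levels):
    # Repeatedly replace the low-frequency (sum) block with its own 2x2 sums/differences.
    out = img.astype(np.int64).copy()
    n = out.shape[0]                       # assumes a square image whose side is divisible by 2**levels
    for _ in range(levels):
        a, b = out[0:n:2, 0:n:2].copy(), out[0:n:2, 1:n:2].copy()
        c, d = out[1:n:2, 0:n:2].copy(), out[1:n:2, 1:n:2].copy()
        h = n // 2
        out[:h, :h]   = a + b + c + d      # sums: recursed on at the next level
        out[:h, h:n]  = a - b + c - d      # difference coefficients (exact integers,
        out[h:n, :h]  = a + b - c - d      #  so the transform is invertible losslessly)
        out[h:n, h:n] = a - b - c + d
        n = h
    return out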