
Showing papers on "Lossless compression published in 1996"


Book
01 Jan 1996
TL;DR: A comprehensive textbook covering lossless compression (Huffman coding, arithmetic coding, dictionary techniques, lossless image compression) and lossy compression (scalar and vector quantization, differential encoding, subband, transform, analysis/synthesis, and video coding), with applications including the JBIG, JPEG, and MPEG standards.
Abstract: Preface
1 Introduction 1.1 Compression Techniques 1.1.1 Lossless Compression 1.1.2 Lossy Compression 1.1.3 Measures of Performance 1.2 Modeling and Coding 1.3 Organization of This Book 1.4 Summary 1.5 Projects and Problems
2 Mathematical Preliminaries 2.1 Overview 2.2 A Brief Introduction to Information Theory 2.3 Models 2.3.1 Physical Models 2.3.2 Probability Models 2.3.3 Markov Models 2.3.4 Summary 2.5 Projects and Problems
3 Huffman Coding 3.1 Overview 3.2 "Good" Codes 3.3 The Huffman Coding Algorithm 3.3.1 Minimum Variance Huffman Codes 3.3.2 Length of Huffman Codes 3.3.3 Extended Huffman Codes 3.4 Nonbinary Huffman Codes 3.5 Adaptive Huffman Coding 3.5.1 Update Procedure 3.5.2 Encoding Procedure 3.5.3 Decoding Procedure 3.6 Applications of Huffman Coding 3.6.1 Lossless Image Compression 3.6.2 Text Compression 3.6.3 Audio Compression 3.7 Summary 3.8 Projects and Problems
4 Arithmetic Coding 4.1 Overview 4.2 Introduction 4.3 Coding a Sequence 4.3.1 Generating a Tag 4.3.2 Deciphering the Tag 4.4 Generating a Binary Code 4.4.1 Uniqueness and Efficiency of the Arithmetic Code 4.4.2 Algorithm Implementation 4.4.3 Integer Implementation 4.5 Comparison of Huffman and Arithmetic Coding 4.6 Applications 4.6.1 Bi-Level Image Compression-The JBIG Standard 4.6.2 Image Compression 4.7 Summary 4.8 Projects and Problems
5 Dictionary Techniques 5.1 Overview 5.2 Introduction 5.3 Static Dictionary 5.3.1 Digram Coding 5.4 Adaptive Dictionary 5.4.1 The LZ77 Approach 5.4.2 The LZ78 Approach 5.5 Applications 5.5.1 File Compression-UNIX COMPRESS 5.5.2 Image Compression-The Graphics Interchange Format (GIF) 5.5.3 Compression over Modems-V.42 bis 5.6 Summary 5.7 Projects and Problems
6 Lossless Image Compression 6.1 Overview 6.2 Introduction 6.3 Facsimile Encoding 6.3.1 Run-Length Coding 6.3.2 CCITT Group 3 and 4-Recommendations T.4 and T.6 6.3.3 Comparison of MH, MR, MMR, and JBIG 6.4 Progressive Image Transmission 6.5 Other Image Compression Approaches 6.5.1 Linear Prediction Models 6.5.2 Context Models 6.5.3 Multiresolution Models 6.5.4 Modeling Prediction Errors 6.6 Summary 6.7 Projects and Problems
7 Mathematical Preliminaries 7.1 Overview 7.2 Introduction 7.3 Distortion Criteria 7.3.1 The Human Visual System 7.3.2 Auditory Perception 7.4 Information Theory Revisited 7.4.1 Conditional Entropy 7.4.2 Average Mutual Information 7.4.3 Differential Entropy 7.5 Rate Distortion Theory 7.6 Models 7.6.1 Probability Models 7.6.2 Linear System Models 7.6.3 Physical Models 7.7 Summary 7.8 Projects and Problems
8 Scalar Quantization 8.1 Overview 8.2 Introduction 8.3 The Quantization Problem 8.4 Uniform Quantizer 8.5 Adaptive Quantization 8.5.1 Forward Adaptive Quantization 8.5.2 Backward Adaptive Quantization 8.6 Nonuniform Quantization 8.6.1 pdf-Optimized Quantization 8.6.2 Companded Quantization 8.7 Entropy-Coded Quantization 8.7.1 Entropy Coding of Lloyd-Max Quantizer Outputs 8.7.2 Entropy-Constrained Quantization 8.7.3 High-Rate Optimum Quantization 8.8 Summary 8.9 Projects and Problems
9 Vector Quantization 9.1 Overview 9.2 Introduction 9.3 Advantages of Vector Quantization over Scalar Quantization 9.4 The Linde-Buzo-Gray Algorithm 9.4.1 Initializing the LBG Algorithm 9.4.2 The Empty Cell Problem 9.4.3 Use of LBG for Image Compression 9.5 Tree-Structured Vector Quantizers 9.5.1 Design of Tree-Structured Vector Quantizers 9.6 Structured Vector Quantizers 9.6.1 Pyramid Vector Quantization 9.6.2 Polar and Spherical Vector Quantizers 9.6.3 Lattice Vector Quantizers 9.7 Variations on the Theme 9.7.1 Gain-Shape Vector Quantization 9.7.2 Mean-Removed Vector Quantization 9.7.3 Classified Vector Quantization 9.7.4 Multistage Vector Quantization 9.7.5 Adaptive Vector Quantization 9.8 Summary 9.9 Projects and Problems
10 Differential Encoding 10.1 Overview 10.2 Introduction 10.3 The Basic Algorithm 10.4 Prediction in DPCM 10.5 Adaptive DPCM (ADPCM) 10.5.1 Adaptive Quantization in DPCM 10.5.2 Adaptive Prediction in DPCM 10.6 Delta Modulation 10.6.1 Constant Factor Adaptive Delta Modulation (CFDM) 10.6.2 Continuously Variable Slope Delta Modulation 10.7 Speech Coding 10.7.1 G.726 10.8 Summary 10.9 Projects and Problems
11 Subband Coding 11.1 Overview 11.2 Introduction 11.3 The Frequency Domain and Filtering 11.3.1 Filters 11.4 The Basic Subband Coding Algorithm 11.4.1 Bit Allocation 11.5 Application to Speech Coding-G.722 11.6 Application to Audio Coding-MPEG Audio 11.7 Application to Image Compression 11.7.1 Decomposing an Image 11.7.2 Coding the Subbands 11.8 Wavelets 11.8.1 Families of Wavelets 11.8.2 Wavelets and Image Compression 11.9 Summary 11.10 Projects and Problems
12 Transform Coding 12.1 Overview 12.2 Introduction 12.3 The Transform 12.4 Transforms of Interest 12.4.1 Karhunen-Loeve Transform 12.4.2 Discrete Cosine Transform 12.4.3 Discrete Sine Transform 12.4.4 Discrete Walsh-Hadamard Transform 12.5 Quantization and Coding of Transform Coefficients 12.6 Application to Image Compression-JPEG 12.6.1 The Transform 12.6.2 Quantization 12.6.3 Coding 12.7 Application to Audio Compression 12.8 Summary 12.9 Projects and Problems
13 Analysis/Synthesis Schemes 13.1 Overview 13.2 Introduction 13.3 Speech Compression 13.3.1 The Channel Vocoder 13.3.2 The Linear Predictive Coder (Gov.Std.LPC-10) 13.3.3 Code Excited Linear Prediction (CELP) 13.3.4 Sinusoidal Coders 13.4 Image Compression 13.4.1 Fractal Compression 13.5 Summary 13.6 Projects and Problems
14 Video Compression 14.1 Overview 14.2 Introduction 14.3 Motion Compensation 14.4 Video Signal Representation 14.5 Algorithms for Videoconferencing and Videophones 14.5.1 ITU-T Recommendation H.261 14.5.2 Model-Based Coding 14.6 Asymmetric Applications 14.6.1 The MPEG Video Standard 14.7 Packet Video 14.7.1 ATM Networks 14.7.2 Compression Issues in ATM Networks 14.7.3 Compression Algorithms for Packet Video 14.8 Summary 14.9 Projects and Problems
A Probability and Random Processes A.1 Probability A.2 Random Variables A.3 Distribution Functions A.4 Expectation A.5 Types of Distribution A.6 Stochastic Process A.7 Projects and Problems
B A Brief Review of Matrix Concepts B.1 A Matrix B.2 Matrix Operations
C Codes for Facsimile Encoding
D The Root Lattices
Bibliography
Index

2,311 citations


Journal ArticleDOI
TL;DR: A new image multiresolution transform suited for both lossless (reversible) and lossy compression is proposed; the entropy obtained with the new transform is smaller than that obtained with predictive coding of similar complexity.
Abstract: We propose a new image multiresolution transform that is suited for both lossless (reversible) and lossy compression. The new transformation is similar to the subband decomposition, but can be computed with only integer addition and bit-shift operations. During its calculation, the number of bits required to represent the transformed image is kept small through careful scaling and truncations. Numerical results show that the entropy obtained with the new transform is smaller than that obtained with predictive coding of similar complexity. In addition, we propose entropy-coding methods that exploit the multiresolution structure, and can efficiently compress the transformed image for progressive transmission (up to exact recovery). The lossless compression ratios are among the best in the literature, and simultaneously the rate versus distortion performance is comparable to those of the most efficient lossy compression methods.
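A minimal sketch of the kind of reversible integer transform the abstract describes, using only integer addition and bit shifts; this is the classic S-transform pair shown for illustration, not necessarily the exact transform proposed in the paper.

```python
# Illustrative S-transform pair: integer adds and shifts only, exactly invertible.
def forward(a: int, b: int) -> tuple[int, int]:
    h = a - b          # high-pass: difference
    l = b + (h >> 1)   # low-pass: truncated average, equals (a + b) >> 1
    return l, h

def inverse(l: int, h: int) -> tuple[int, int]:
    b = l - (h >> 1)   # undo the truncated average
    a = h + b          # undo the difference
    return a, b

# Exact reconstruction holds for every integer pair.
assert all(inverse(*forward(a, b)) == (a, b)
           for a in range(-8, 9) for b in range(-8, 9))
```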

738 citations


01 May 1996
TL;DR: This specification defines a lossless compressed data format that compresses data using a combination of the LZ77 algorithm and Huffman coding, with efficiency comparable to the best currently available general-purpose compression methods.
Abstract: This specification defines a lossless compressed data format that compresses data using a combination of the LZ77 algorithm and Huffman coding, with efficiency comparable to the best currently available general-purpose compression methods. The data can be produced or consumed, even for an arbitrarily long sequentially presented input data stream, using only an a priori bounded amount of intermediate storage. The format can be implemented readily in a manner not covered by patents.
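For a quick feel of the format in practice, Python's zlib module (which implements this DEFLATE specification) can produce and consume a raw DEFLATE stream; the payload below is illustrative only.

```python
import zlib

data = b"lossless compression with LZ77 matches and Huffman codes " * 64

# wbits=-15 requests a raw DEFLATE stream (no zlib or gzip wrapper).
deflater = zlib.compressobj(level=9, method=zlib.DEFLATED, wbits=-15)
stream = deflater.compress(data) + deflater.flush()

inflater = zlib.decompressobj(wbits=-15)
assert inflater.decompress(stream) + inflater.flush() == data
print(f"{len(data)} bytes -> {len(stream)} bytes")
```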

718 citations


Proceedings ArticleDOI
31 Mar 1996
TL;DR: LOCO-I as discussed by the authors combines the simplicity of Huffman coding with the compression potential of context models, thus "enjoying the best of both worlds." The algorithm is based on a simple fixed context model, which approaches the capability of the more complex universal context modeling techniques for capturing high-order dependencies.
Abstract: LOCO-I (low complexity lossless compression for images) is a novel lossless compression algorithm for continuous-tone images which combines the simplicity of Huffman coding with the compression potential of context models, thus "enjoying the best of both worlds." The algorithm is based on a simple fixed context model, which approaches the capability of the more complex universal context modeling techniques for capturing high-order dependencies. The model is tuned for efficient performance in conjunction with a collection of (context-conditioned) Huffman codes, which is realized with an adaptive, symbol-wise, Golomb-Rice code. LOCO-I attains, in one pass, and without recourse to the higher complexity arithmetic coders, compression ratios similar or superior to those obtained with state-of-the-art schemes based on arithmetic coding. In fact, LOCO-I is being considered by the ISO committee as a replacement for the current lossless standard in low-complexity applications.
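A minimal sketch of the symbol-wise Golomb-Rice coding step mentioned above, with the usual folding of signed prediction errors onto non-negative integers; the parameter choice here is illustrative, not LOCO-I's actual context-dependent rule.

```python
def fold(e: int) -> int:
    """Map a signed prediction error to 0, 1, 2, ... (order 0, -1, 1, -2, ...)."""
    return 2 * e if e >= 0 else -2 * e - 1

def rice_encode(value: int, k: int) -> str:
    """Rice code: unary quotient, '0' terminator, then k-bit binary remainder."""
    q, r = value >> k, value & ((1 << k) - 1)
    return "1" * q + "0" + (format(r, f"0{k}b") if k else "")

for e in (-3, -1, 0, 2, 5):
    print(e, "->", rice_encode(fold(e), k=2))
```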

625 citations


01 May 1996
TL;DR: This specification defines a lossless compressed data format that is compatible with the widely used GZIP utility and includes a cyclic redundancy check value for detecting data corruption.
Abstract: This specification defines a lossless compressed data format that is compatible with the widely used GZIP utility. The format includes a cyclic redundancy check value for detecting data corruption. The format presently uses the DEFLATE method of compression but can be easily extended to use other compression methods. The format can be implemented readily in a manner not covered by patents.
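A hedged illustration using Python's gzip and zlib modules, which implement this format: the member trailer carries a CRC-32 of the uncompressed data followed by its length modulo 2^32, both little-endian.

```python
import gzip, struct, zlib

data = b"payload for the gzip container format\n" * 40
member = gzip.compress(data)

assert gzip.decompress(member) == data

# Last 8 bytes of a member: CRC-32 then ISIZE, each a little-endian uint32.
crc, isize = struct.unpack("<II", member[-8:])
assert crc == zlib.crc32(data)
assert isize == len(data) % 2**32
```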

424 citations


01 May 1996
TL;DR: This specification defines a lossless compressed data format that can be produced or consumed, even for an arbitrarily long sequentially presented input data stream, using only an a priori bounded amount of intermediate storage.
Abstract: This specification defines a lossless compressed data format. The data can be produced or consumed, even for an arbitrarily long sequentially presented input data stream, using only an a priori bounded amount of intermediate storage. The format presently uses the DEFLATE compression method but can be easily extended to use other compression methods. It can be implemented readily in a manner not covered by patents. This specification also defines the ADLER-32 checksum (an extension and improvement of the Fletcher checksum), used for detection of data corruption, and provides an algorithm for computing it.
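A minimal sketch of the ADLER-32 checksum defined here: two running sums modulo 65521 (the largest prime below 2^16), packed as (s2 << 16) | s1, and checked against zlib.adler32.

```python
import zlib

MOD_ADLER = 65521  # largest prime below 2**16

def adler32(data: bytes) -> int:
    s1, s2 = 1, 0
    for byte in data:
        s1 = (s1 + byte) % MOD_ADLER
        s2 = (s2 + s1) % MOD_ADLER
    return (s2 << 16) | s1

payload = b"an arbitrarily long, sequentially presented input data stream"
assert adler32(payload) == zlib.adler32(payload)
```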

422 citations


Journal ArticleDOI
TL;DR: The sequential, lossless compression schemes obtained when the context modeler is used with an arithmetic coder are tested with a representative set of gray-scale images, and the compression ratios are compared with state-of-the-art algorithms available in the literature.
Abstract: Inspired by theoretical results on universal modeling, a general framework for sequential modeling of gray-scale images is proposed and applied to lossless compression. The model is based on stochastic complexity considerations and is implemented with a tree structure. It is efficiently estimated by a modification of the universal algorithm context. Several variants of the algorithm are described. The sequential, lossless compression schemes obtained when the context modeler is used with an arithmetic coder are tested with a representative set of gray-scale images. The compression ratios are compared with those obtained with state-of-the-art algorithms available in the literature, with the results of the comparison consistently favoring the proposed approach.

239 citations


Journal ArticleDOI
TL;DR: By using an error correction method that approximates the quantization error of the reconstructed coefficients, this work minimizes distortion for a given compression rate at low computational cost.
Abstract: Schemes for image compression of black-and-white images based on the wavelet transform are presented. The multiresolution nature of the discrete wavelet transform proves to be a powerful tool for representing images decomposed along the vertical and horizontal directions using the pyramidal multiresolution scheme. The wavelet transform decomposes the image into a set of subimages called shapes with different resolutions corresponding to different frequency bands. Hence, different allocations are tested, assuming that details at high resolution and diagonal directions are less visible to the human eye. The resultant coefficients are vector quantized (VQ) using the LBG algorithm. By using an error correction method that approximates the quantization error of the reconstructed coefficients, we minimize distortion for a given compression rate at low computational cost. Several compression techniques are tested. In the first experiment, several 512×512 images are trained together and common code tables are created. Using these tables, the training-set black-and-white images achieve a compression ratio of 60-65 and a PSNR of 30-33 dB. To investigate the compression of images not part of the training set, many 480×480 images of uncalibrated faces are trained together to yield global code tables. Images of faces outside the training set are compressed and reconstructed using the resulting tables. The compression ratio is 40; PSNRs are 30-36 dB. Images from the training set have similar compression values and quality. Finally, another compression method based on the end vector bit allocation is examined.
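A minimal sketch of LBG codebook training (essentially k-means on flattened image blocks), shown only to illustrate the vector-quantization step; the block size, codebook size, and initialization here are arbitrary choices, not the paper's.

```python
import numpy as np

def lbg(vectors: np.ndarray, codebook_size: int, iters: int = 20) -> np.ndarray:
    rng = np.random.default_rng(0)
    codebook = vectors[rng.choice(len(vectors), codebook_size, replace=False)].copy()
    for _ in range(iters):
        # Nearest-codeword assignment under squared Euclidean distance.
        dists = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
        labels = dists.argmin(axis=1)
        # Centroid update; an empty cell keeps its previous codeword.
        for k in range(codebook_size):
            members = vectors[labels == k]
            if len(members):
                codebook[k] = members.mean(axis=0)
    return codebook

blocks = np.random.rand(2000, 16)            # e.g. flattened 4x4 image blocks
codebook = lbg(blocks, codebook_size=64)
indices = ((blocks[:, None, :] - codebook[None, :, :]) ** 2).sum(-1).argmin(1)
```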

223 citations


Patent
03 May 1996
TL;DR: In this article, a reversible wavelet filter is used to generate coefficients from input data, such as image data, and an entropy coder performs entropy coding on the embedded codestream to produce the compressed data stream.
Abstract: A compression and decompression system in which a reversible wavelet filter is used to generate coefficients from input data, such as image data. The reversible wavelet filter is an efficient transform implemented with integer arithmetic that has exact reconstruction. The present invention uses the reversible wavelet filter in a lossless system (or lossy system) in which an embedded codestream is generated from the coefficients produced by the filter. An entropy coder performs entropy coding on the embedded codestream to produce the compressed data stream.

171 citations


Proceedings ArticleDOI
07 May 1996
TL;DR: This work proposes a context-based, adaptive, lossless image codec (CALIC), which obtains higher lossless compression of continuous-tone images than other techniques reported in the literature and has relatively low time and space complexities.
Abstract: We propose a context-based, adaptive, lossless image codec (CALIC). CALIC obtains higher lossless compression of continuous-tone images than other techniques reported in the literature. This high coding efficiency is accomplished with relatively low time and space complexities. CALIC puts heavy emphasis on image data modeling. A unique feature of CALIC is the use of a large number of modeling contexts to condition a non-linear predictor and make it adaptive to varying source statistics. The non-linear predictor adapts via an error feedback mechanism. In this adaptation process, CALIC only estimates the expectation of prediction errors conditioned on a large number of contexts rather than estimating a large number of conditional error probabilities. The former estimation technique can afford a large number of modeling contexts without suffering from the sparse context problem. The low time and space complexities of CALIC are attributed to efficient techniques for forming and quantizing modeling contexts.
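A hedged sketch of the error-feedback idea the abstract describes: keep a running mean of past prediction errors per quantized context and add it back to a base prediction. Context formation and the base nonlinear predictor are placeholders here, not the actual CALIC definitions.

```python
from collections import defaultdict

class ContextBiasCorrector:
    """Running per-context mean of prediction errors, fed back into the prediction
    (a simplified stand-in for the error-feedback mechanism described above)."""

    def __init__(self) -> None:
        self.err_sum = defaultdict(int)
        self.count = defaultdict(int)

    def predict(self, base_prediction: int, context: int) -> int:
        n = self.count[context]
        bias = round(self.err_sum[context] / n) if n else 0
        return base_prediction + bias

    def update(self, context: int, actual: int, prediction: int) -> None:
        self.err_sum[context] += actual - prediction
        self.count[context] += 1
```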

165 citations


Journal ArticleDOI
TL;DR: A new lossless algorithm is presented that exploits the interblock correlation in the index domain to achieve significant reduction of bit rates without introducing extra coding distortion when compared to memoryless VQ.
Abstract: In memoryless vector quantization (VQ) for images, each block is quantized independently and its corresponding index is sent to the decoder. This paper presents a new lossless algorithm that exploits the interblock correlation in the index domain. We compare the current index with previous indices in a predefined search path, and then send the corresponding search order to the decoder. The new algorithm achieves significant reduction of bit rates without introducing extra coding distortion when compared to memoryless VQ. It is very simple and computationally efficient.
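A hedged sketch of the idea: rather than transmitting the VQ index itself, transmit its position in a predefined search path of previously coded indices (for example, the neighbouring blocks' indices), falling back to the raw index on a miss; the actual search path and escape coding in the paper may differ.

```python
def encode_index(index: int, search_path: list[int]) -> tuple[bool, int]:
    """Return (hit, value): the search order on a hit, the raw index on a miss."""
    if index in search_path:
        return True, search_path.index(index)
    return False, index

def decode_index(hit: bool, value: int, search_path: list[int]) -> int:
    """Invert encode_index given the same search path."""
    return search_path[value] if hit else value
```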

Proceedings ArticleDOI
TL;DR: A preliminary version of a foveated imaging system, implemented on a general-purpose computer, greatly reduces the transmission bandwidth of images by exploiting the fact that the spatial resolution of the human eye is space variant, decreasing with increasing eccentricity from the point of gaze.
Abstract: We have developed a preliminary version of a foveated imaging system, implemented on a general-purpose computer, which greatly reduces the transmission bandwidth of images. The system is based on the fact that the spatial resolution of the human eye is space variant, decreasing with increasing eccentricity from the point of gaze. By taking advantage of this fact, it is possible to create an image that is almost perceptually indistinguishable from a constant-resolution image but requires substantially less information to code it. This is accomplished by degrading the resolution of the image so that it matches the space-variant degradation in the resolution of the human eye. Eye movements are recorded so that the high-resolution region of the image can be kept aligned with the high-resolution region of the human visual system. This system has demonstrated that significant reductions in bandwidth can be achieved while still maintaining access to high detail at any point in an image. The system has been tested using 256 by 256 8-bit gray-scale images with a 20-degree field of view and eye-movement update rates of 30 Hz (display refresh was 60 Hz). Users of the system have reported minimal perceptual artifacts at bandwidth reductions of up to 94.7% (a factor of 18.8). Bandwidth-reduction factors of over 100 are expected once lossless compression techniques are added to the system.

Patent
18 Jul 1996
TL;DR: A lossless image compression encoder/decoder system with a context determination circuit and a code table generator is described; the encoder uses the context of a pixel to be encoded to predict its value and encodes the prediction error with a context-specific coding table.
Abstract: A lossless image compression encoder/decoder system having a context determination circuit and a code table generator. The image compressor uses the context of a pixel to be encoded to predict the value of the pixel and determines a prediction error. The image compressor contains a context quantizer that quantizes the context of pixels. The image compressor counts the error values for each quantized context and uses these counts to generate context-specific coding tables for each quantized context. As it encodes a particular pixel, the encoder looks up the prediction error in the context-specific coding table for the context of the pixel and encodes that value. To decompress an image, the decompressor determines and quantizes the context of each pixel being decoded. The decompressor uses the same pixels as the compressor to determine the context. The decompressor retrieves from the context-specific coding table the error value corresponding to the coded pixel. The decompressor uses a predictor to predict the value of the pixel based on the context and adds the error value to determine the actual value of the pixel. In one embodiment the image compressor uses an alphabet extension, embedded in its context model, in specific low gradient contexts to reduce the redundancy of the encoding. Other systems and methods are disclosed.

Proceedings ArticleDOI
31 Mar 1996
TL;DR: A text compression scheme dedicated to DNA sequences is presented; a good lossless compression scheme is able to distinguish between "random" and "significant" repeats, with theoretical grounding in Kolmogorov complexity theory.
Abstract: We present a text compression scheme dedicated to DNA sequences. The exponential growth in the number of sequences creates a real need for analysis tools. A specific need emerges for methods that classify sequences upon various criteria, one of which is sequence repetitiveness. A good lossless compression scheme is able to distinguish between "random" and "significant" repeats. Theoretical bases for this statement are found in Kolmogorov complexity theory.

Patent
Stuart T. Laney
09 May 1996
TL;DR: In this article, image data is broken down into cells and iteratively compressed using compression formats that are most appropriate for the contents of the cells, and a determination is first made whether a cell is substantially identical to a succeeding cell in a previous frame.
Abstract: A technique for compressing digital video data provides improved compression over conventional block compression techniques. In this technique, image data is broken down into cells and iteratively compressed. The cells are compressed using compression formats that are most appropriate for the contents of the cells. A determination is first made whether a cell is substantially identical to a succeeding cell in a previous frame. If the cell is substantially identical to the cell in the previous frame, the cell is encoded in compressed form as a duplicate of the previous cell. Moreover, solid-color compression approaches, two-color compression approaches and eight-color compression approaches may be integrated into the compression technique.

Patent
Kurt Dobson, Peter Rigstad, Kevin Smart, Nathan Whitney, Jack Yang
14 Oct 1996
TL;DR: The disclosed compression method utilizes a combination of both lossy and lossless compression to achieve significant compression while retaining very high subjective quality of the reconstructed or decompressed signal.
Abstract: The disclosed compression method utilizes a combination of both lossy and lossless compression to achieve significant compression while retaining very high subjective quality of the reconstructed or decompressed signal. Methods and apparatus for compression and decompression of digital audio data are provided. In one preferred embodiment, the compression method or apparatus has a bit rate control feedback loop particularly well suited to matching the output bit rate of the data compressor to the bandwidth capacity of a communication channel. Disclosed embodiments trade-off various error sources in order to keep perceptible distortion levels to a minimum for a fixed bit rate. Preferred embodiments also utilize a unique combination of run length and Huffman encoding methods in order to take advantage of both local and global statistics.

Patent
27 Jun 1996
TL;DR: A method and apparatus for performing video image compression and decompression are described, in which the wavelets applied to sample locations at the boundaries of image intervals are different from those applied to samples within the intervals.
Abstract: A method and apparatus for performing video image compression and decompression are disclosed. The video image compression is performed using boundary-spline-wavelet decomposition, in which the wavelets applied to sample locations at the boundaries of image intervals are different from those applied to sample locations within the intervals. As a result, boundary effect artifacts that arise from the use of wavelets requiring support outside of the interval are avoided. The decomposition is performed first for horizontal rows of the image data, and then in a vertical direction upon the results of the first decomposition. Quantization serves to locally round off the higher frequency components of the decomposition, and the decomposition is repeated until the desired compression ratio is obtained. Lossless compression may then be applied to the decomposed image data, and the compressed image is transmitted or stored, depending upon the application. Decompression is effected by lossless decompression of the received data, followed by reconstruction of the image using boundary-spline-wavelets, repeated as necessary to fully reconstruct the image. The reconstructed image can then be displayed on a conventional video display. Compression and decompression of still images with even higher compression ratios may also be performed, while maintaining the high quality of the image.

Patent
30 Aug 1996
TL;DR: In this paper, a lossless image compression encoder/decoder system with a context determination circuit and a code generator is proposed, where the image compressor determines a Golomb parameter based on the context and historical information gathered during the coding of an image.
Abstract: A lossless image compression encoder/decoder system having a context determination circuit and a code generator. The image compressor uses the context of a pixel to be encoded to predict the value of the pixel, determines a prediction error, and maps the prediction error to a mapped value having a distribution suitable for Golomb encoding. The image compressor contains a context quantizer that quantizes the context of pixels. The image compressor determines a Golomb parameter based on the context and historical information gathered during the coding of an image. To avoid systematic prediction biases in an image, the image compressor adjusts the distribution of prediction residuals to a distribution suitable for Golomb coding. As it encodes a particular pixel, the encoder uses the Golomb parameter to determine a Golomb code for the prediction error and encodes that value. To decompress an image, the decompressor determines and quantizes the context of each pixel being decoded. The decompressor uses the same pixels as the compressor to determine the context. The decompressor uses the context and historical information gathered during the decompression of the image to determine a Golomb parameter for the context in which the pixel occurred. The decompressor retrieves from the compressed image the code for the pixel. Using the Golomb parameter and the retrieved code, the decompressor determines the mapped value of the code. The decompressor then uses the inverse mapping to determine the error value. The decompressor uses a predictor to predict the value of the pixel based on the context and adds the error value to determine the actual value of the pixel. In one embodiment the image compressor uses an alphabet extension, embedded in its context model, in specific low-gradient contexts to reduce the redundancy of the encoding. Other systems and methods are disclosed.
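A hedged sketch of one well-known way to derive a Golomb/Rice parameter from per-context history (a count N of coded samples and an accumulated sum A of absolute errors); the exact rule used in the patent may differ.

```python
def golomb_parameter(n_samples: int, abs_error_sum: int) -> int:
    """Smallest k with N * 2**k >= A; larger average errors give larger k."""
    k = 0
    while (n_samples << k) < abs_error_sum:
        k += 1
    return k

# Example: a context that has seen 16 samples whose |errors| sum to 100.
assert golomb_parameter(16, 100) == 3   # 16*8 = 128 >= 100, while 16*4 = 64 < 100
```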

Journal ArticleDOI
TL;DR: Adaptive DPCM methods using linear prediction are described for the lossless compression of hyperspectral images recorded by the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS); good predictors are described whose performance closely approaches limits imposed by sensor noise.
Abstract: Adaptive DPCM methods using linear prediction are described for the lossless compression of hyperspectral (224-band) images recorded by the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS). The methods have two stages-predictive decorrelation (which produces residuals) and residual encoding. Good predictors are described, whose performance closely approaches limits imposed by sensor noise. It is imperative that these predictors make use of the high spectral correlations between bands. The residuals are encoded using variable-length coding (VLC) methods, and compression is improved by using eight codebooks whose design depends on the sensor's noise characteristics. Rice (1979) coding has also been evaluated; it loses 0.02-0.05 b/pixel compression compared with better VLC methods but is much simpler and faster. Results for compressing ten AVIRIS images are reported.
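A hedged sketch of the kind of inter-band (spectral) prediction the abstract emphasizes: each band is predicted from the co-located pixels of the previous band with a least-squares gain, and only integer residuals are kept for entropy coding. The paper's actual predictors are more elaborate; array shapes and names here are illustrative.

```python
import numpy as np

def spectral_residuals(cube: np.ndarray) -> np.ndarray:
    """cube: (bands, rows, cols) integer array; returns integer residuals."""
    residuals = np.empty(cube.shape, dtype=np.int64)
    residuals[0] = cube[0]                       # first band coded as-is
    for b in range(1, cube.shape[0]):
        ref = cube[b - 1].astype(np.int64)
        cur = cube[b].astype(np.int64)
        gain = (ref * cur).sum() / max((ref * ref).sum(), 1)   # least-squares gain
        residuals[b] = cur - np.rint(gain * ref).astype(np.int64)
    return residuals

cube = np.random.randint(0, 1024, size=(8, 32, 32))   # toy 10-bit data cube
res = spectral_residuals(cube)
```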

Proceedings ArticleDOI
N.J. Larsson
31 Mar 1996
TL;DR: It is shown that the scheme can be applied to PPM-style compression, obtaining an algorithm that runs in linear time, and in space bounded by an arbitrarily chosen window size.
Abstract: A practical scheme for maintaining an index for a sliding window in optimal time and space, by use of a suffix tree, is presented. The index supports location of the longest matching substring in time proportional to the length of the match. The total time for build and update operations is proportional to the size of the input. The algorithm, which is simple and straightforward, is presented in detail. The most prominent lossless data compression scheme, when considering compression performance, is prediction by partial matching with unbounded context lengths (PPM). However, previously presented algorithms are hardly practical, considering their extensive use of computational resources. We show that our scheme can be applied to PPM-style compression, obtaining an algorithm that runs in linear time, and in space bounded by an arbitrarily chosen window size. Application to Ziv-Lempel (1977) compression methods is straightforward and the resulting algorithm runs in linear time.

Patent
22 Apr 1996
TL;DR: In this article, an efficient method for compressing audio and other sampled data signals without loss, or with a controlled amount of loss, is described, which contains a subset selector, an approximator, an adder, two derivative encoders, a header encoder, and a compressed block formatter.
Abstract: An efficient method for compressing audio and other sampled data signals without loss, or with a controlled amount of loss, is described The compression apparatus contains a subset selector, an approximator, an adder, two derivative encoders, a header encoder, and a compressed block formatter The decompression apparatus contains a compressed block parser, a header decoder, two integration decoders, an approximator, and an adder The compressor first divides each block of input samples into a first subset and a second subset The approximator uses the first subset samples to approximate the second subset samples An error signal is created by subtracting the approximated second subset samples from the actual second subset samples The first subset samples and error signal are separately encoded by the derivative encoders, which select the signal's derivative that requires the least amount of storage for a block floating point representation A compressed block formatter combines the compression control parameters, encoded subset array, and encoded error array into a compressed block The decompression apparatus first parses the compressed block into a header, an encoded first subset array, and an encoded error array The header decoder recovers the compression control parameters from the header Using the compression control parameters, the integration decoders reconstruct the first subset and error arrays from their block floating point representations The approximator uses the first subset samples to approximate the original second subset samples The adder combines the subset samples, the error samples, and the approximated second subset samples to identically re-create the original, uncompressed signal An indexing method is described which allows random access to specific uncompressed samples within the stream of compressed blocks

Patent
15 May 1996
TL;DR: At each sample instant, the input to the quantiser is jointly responsive to a first sample value of the signal input to the prediction filter, a second sample value of the input signal at a previous sample instant, and an output value of the quantiser at a previous sample instant.
Abstract: In a method of lossless processing of an integer-valued signal in a prediction filter which includes a quantiser, the numerator of the prediction filter is implemented prior to the quantiser and the denominator of the prediction filter is implemented recursively around the quantiser to reduce the peak data rate of the output signal. In the lossless processor, at each sample instant, the input to the quantiser is jointly responsive to a first sample value of a signal input to the prediction filter, a second sample value of a signal input to the prediction filter at a previous sample instant, and an output value of the quantiser at a previous sample instant. In a preferred embodiment, the prediction filter includes noise shaping for affecting the output of the quantiser.

Journal ArticleDOI
01 Oct 1996
TL;DR: A novel neural network technique for video compression is described, using a "point-process" type neural network model the authors have developed which is closer to biophysical reality and is mathematically much more tractable than standard models.
Abstract: In this paper we describe a novel neural network technique for video compression, using a "point-process" type neural network model we have developed which is closer to biophysical reality and is mathematically much more tractable than standard models. Our algorithm uses an adaptive approach based upon the users' desired video quality Q, and achieves compression ratios of up to 500:1 for moving gray-scale images, based on a combination of motion detection, compression, and temporal subsampling of frames. This leads to a compression ratio of over 1000:1 for full-color video sequences with the addition of the standard 4:1:1 spatial subsampling ratios in the chrominance images. The signal-to-noise ratio ranges from 29 dB to over 34 dB. Compression is performed using a combination of motion detection, neural networks, and temporal subsampling of frames. A set of neural networks is used to adaptively select the desired compression of each picture block as a function of the reconstruction quality. The motion detection process separates out regions of the frame which need to be retransmitted. Temporal subsampling of frames, along with reconstruction techniques, lead to the high compression ratios.

Proceedings ArticleDOI
TL;DR: In this paper, the authors measured visual detection thresholds for samples of DWT uniform quantization noise in Y, Cb, and Cr color channels and proposed a mathematical model for DWT noise detection thresholds that is a function of level, orientation and display visual resolution.
Abstract: The discrete wavelet transform (DWT) decomposes an image into bands that vary in spatial frequency and orientation. It is widely used for image compression. Measures of the visibility of DWT quantization errors are required to achieve optimal compression. Uniform quantization of a single band of coefficients results in an artifact that is the sum of a lattice of random-amplitude basis functions of the corresponding DWT synthesis filter, which we call DWT uniform quantization noise. We measured visual detection thresholds for samples of DWT uniform quantization noise in Y, Cb, and Cr color channels. The spatial frequency of a wavelet at level L is r·2^-L, where r is the display visual resolution in pixels/degree. Amplitude thresholds increase rapidly with spatial frequency. Thresholds also increase from Y to Cr to Cb, and with orientation from low-pass to horizontal/vertical to diagonal. We propose a mathematical model for DWT noise detection thresholds that is a function of level, orientation, and display visual resolution. This allows calculation of a 'perceptually lossless' quantization matrix for which all errors are in theory below the visual threshold. The model may also be used as the basis for adaptive quantization schemes.
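As a worked instance of the spatial-frequency relation quoted above (the numbers are illustrative only):

```latex
f = r\,2^{-L} \ \text{cycles/degree,}\qquad
\text{e.g. } r = 32\ \text{pixels/degree},\ L = 3
\;\Rightarrow\; f = 32 \cdot 2^{-3} = 4\ \text{cycles/degree.}
```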

Patent
18 Mar 1996
TL;DR: A block adaptive differential pulse code modulation (DPCM) system is proposed which includes a lossless DPCM processor responsive to blocks of pixel values for producing encoder command signals; a lossy DPCM compressor responsive to blocks of pixel values for producing encoder command signals; and an encoder for receiving encoder command signals and producing a compressed encoded bit stream.
Abstract: A block adaptive differential pulse code modulation (DPCM) system includes a lossless DPCM processor responsive to blocks of pixel values for producing encoder command signals; a lossy DPCM compressor responsive to blocks of pixel values for producing encoder command signals; an encoder for receiving encoder command signals and producing a compressed encoded bit stream; and a switch responsive to a compression configuration signal and to the encoder command signals from the lossy compressor for selectively passing the encoder command signals from the lossless processor or the lossy compressor to the encoder.

Proceedings ArticleDOI
31 Mar 1996
TL;DR: This work considers an intermediate approach in which multiple compressors jointly construct a dictionary; the result is parallel speedup with compression performance similar to the sequential case.
Abstract: It is often desirable to compress or decompress relatively small blocks of data at high bandwidth and low latency (for example, for data fetches across a high speed network). Sequential compression may not satisfy the speed requirement, while simply splitting the block into smaller subblocks for parallel compression yields poor compression performance due to small dictionary sizes. We consider an intermediate approach, where multiple compressors jointly construct a dictionary. The result is parallel speedup, with compression performance similar to the sequential case.

Patent
03 Jul 1996
TL;DR: To convert a mixture of text data and dot-mapped image data to compressed dot-mapped data not exceeding a given size, the text data are converted to dot-mapped data and losslessly compressed, after which the dot-mapped image data are down-sampled as necessary.
Abstract: To convert a mixture of text data and dot-mapped image data to compressed dot-mapped data not exceeding a given size, the text data are converted to dot-mapped data, then losslessly compressed, after which the dot-mapped image data are down-sampled as necessary. To compress dot-mapped data to within a given size, lossless compression, lossy compression, and down-sampling followed by compression are attempted until the necessary size reduction is achieved. To convert a page of object data to compressed dot-mapped data, the objects are classified and prioritized, then rasterized and compressed by different methods according to their priorities. The compression ratios are predicted and monitored, and the compression parameters are modified according to the prediction error.

Patent
Jay Yogeshwar
24 Jun 1996
TL;DR: In this paper, a portion of the decompressed video data is recompressed as intraframe data to serve as an anchor frame, and then a reference picture is selected and the reference frame data is decompressed.
Abstract: A compressed video decoder receives a stream of compressed video data in a channel buffer, and decompresses it. A portion of the decompressed video data is recompressed as intraframe data to serve as an anchor frame. The intraframe data is motion compensated, and then a reference picture is selected. The reference frame data is decompressed. Next, a region of interest is selected and stored in display memory for the decompressed form. The recompression of data to be used as an anchor frame can be done in a substantially lossless manner by using DCT with the same quantization matrix, quantizer scale, and field/frame coding type.

Journal ArticleDOI
TL;DR: This paper investigates the problem of ordering the color table such that the absolute sum of prediction errors is minimized, gives two heuristic solutions, and demonstrates that significant improvements in bit rates can be achieved over dictionary-based coding schemes commonly employed for color-mapped images.
Abstract: Linear predictive techniques perform poorly when used with color-mapped images where pixel values represent indices that point to color values in a look-up table. Reordering the color table, however, can lead to a lower entropy of prediction errors. In this paper, we investigate the problem of ordering the color table such that the absolute sum of prediction errors is minimized. The problem turns out to be intractable, even for the simple case of one-dimensional (1-D) prediction schemes. We give two heuristic solutions for the problem and use them for ordering the color table prior to encoding the image by lossless predictive techniques. We demonstrate that significant improvements in actual bit rates can be achieved over dictionary-based coding schemes that are commonly employed for color-mapped images.
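A hedged illustration of the general idea only (not the paper's heuristics): any permutation of the color table changes the index image and hence the prediction errors, and even a simple luminance sort tends to place perceptually similar colors at nearby indices, lowering the entropy of index differences.

```python
def reorder_by_luminance(palette: list[tuple[int, int, int]]) -> list[int]:
    """Return a permutation of palette indices, darkest first (illustrative heuristic)."""
    luma = lambda rgb: 0.299 * rgb[0] + 0.587 * rgb[1] + 0.114 * rgb[2]
    return sorted(range(len(palette)), key=lambda i: luma(palette[i]))

# remap[old_index] gives the new index after reordering the color table.
palette = [(255, 0, 0), (10, 10, 10), (200, 200, 200), (0, 0, 255)]
order = reorder_by_luminance(palette)
remap = {old: new for new, old in enumerate(order)}
```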

Journal ArticleDOI
TL;DR: It is seen that even with motion compensation, schemes that utilize only temporal correlations do not perform significantly better than schemes that utilize only spectral correlations, so hybrid schemes that make use of both spectral and temporal correlations are examined.
Abstract: We investigate lossless compression schemes for video sequences. A simple adaptive prediction scheme is presented that exploits temporal correlations or spectral correlations in addition to spatial correlations. It is seen that even with motion compensation, schemes that utilize only temporal correlations do not perform significantly better than schemes that utilize only spectral correlations. Hence, we look at hybrid schemes that make use of both spectral and temporal correlations. The hybrid schemes give significant improvement in performance over other techniques. Besides prediction schemes, we also look at some simple error modeling techniques that take into account prediction errors made in spectrally and/or temporally adjacent pixels in order to efficiently encode the prediction residual. Implementation results on standard test sequences indicate that significant improvements can be obtained by the proposed techniques.