
Showing papers on "Data compression" published in 1993


Journal ArticleDOI
J.M. Shapiro
TL;DR: The embedded zerotree wavelet algorithm (EZW) is a simple, yet remarkably effective, image compression algorithm, having the property that the bits in the bit stream are generated in order of importance, yielding a fully embedded code.
Abstract: The embedded zerotree wavelet algorithm (EZW) is a simple, yet remarkably effective, image compression algorithm, having the property that the bits in the bit stream are generated in order of importance, yielding a fully embedded code. The embedded code represents a sequence of binary decisions that distinguish an image from the "null" image. Using an embedded coding algorithm, an encoder can terminate the encoding at any point, thereby allowing a target rate or target distortion metric to be met exactly. Also, given a bit stream, the decoder can cease decoding at any point in the bit stream and still produce exactly the same image that would have been encoded at the bit rate corresponding to the truncated bit stream. In addition to producing a fully embedded bit stream, the EZW consistently produces compression results that are competitive with virtually all known compression algorithms on standard test images. Yet this performance is achieved with a technique that requires absolutely no training, no pre-stored tables or codebooks, and requires no prior knowledge of the image source. The EZW algorithm is based on four key concepts: (1) a discrete wavelet transform or hierarchical subband decomposition, (2) prediction of the absence of significant information across scales by exploiting the self-similarity inherent in images, (3) entropy-coded successive-approximation quantization, and (4) universal lossless data compression which is achieved via adaptive arithmetic coding.
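The embedded-stream property can be illustrated with a much-simplified, hypothetical sketch: plain successive-approximation (bit-plane) coding of a handful of coefficients, with none of EZW's zerotree symbols or adaptive arithmetic coding. Truncating the bit list at any point still decodes to a coarser version of the same data. The function names and example coefficients below are illustrative only, not Shapiro's implementation.

```python
import numpy as np

def sa_encode(coeffs, num_passes=6):
    """Successive-approximation (bit-plane) coding of coefficient
    magnitudes: coarse bits are emitted first, so any prefix of the
    bit list still decodes to a usable approximation."""
    c = np.asarray(coeffs, dtype=float)
    t0 = 2.0 ** np.floor(np.log2(np.abs(c).max()))   # initial threshold
    residual = np.abs(c).copy()
    bits, t = [], t0
    for _ in range(num_passes):
        for i in range(residual.size):
            if residual[i] >= t:
                bits.append(1)
                residual[i] -= t
            else:
                bits.append(0)
        t /= 2.0
    return t0, np.sign(c), bits

def sa_decode(t0, signs, bits, n):
    """Decode any prefix of the bit stream; unread refinement bits are
    simply treated as zero, mimicking truncation of an embedded code."""
    recon, t = np.zeros(n), t0
    for k, b in enumerate(bits):
        if b:
            recon[k % n] += t
        if k % n == n - 1:          # a full pass finished
            t /= 2.0
    return signs * recon

coeffs = np.array([63.0, -34.0, 49.0, 10.0, 7.0, 13.0, -12.0, 7.0])
t0, signs, bits = sa_encode(coeffs)
print(sa_decode(t0, signs, bits, coeffs.size))                    # close to coeffs
print(sa_decode(t0, signs, bits[: len(bits) // 2], coeffs.size))  # coarser, still valid
```

In the actual algorithm the dominant/subordinate passes, zerotree symbols, and adaptive arithmetic coder make the same prefix-decodable stream far more compact.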

5,559 citations


Journal ArticleDOI
01 Oct 1993
TL;DR: It is proposed that fundamental limits in the science can be expressed by the semiquantitative concepts of perceptual entropy and the perceptual distortion-rate function, and current compression technology is examined in that framework.
Abstract: The notion of perceptual coding, which is based on the concept of distortion masking by the signal being compressed, is developed. Progress in this field as a result of advances in classical coding theory, modeling of human perception, and digital signal processing, is described. It is proposed that fundamental limits in the science can be expressed by the semiquantitative concepts of perceptual entropy and the perceptual distortion-rate function, and current compression technology is examined in that framework. Problems and future research directions are summarized.

905 citations


Book
01 Mar 1993
TL;DR: Fractal Image Compression, as discussed by the authors, associates a fractal with an image: fractals are geometric or data structures which do not simplify under magnification, yet they can be described in terms of a few succinct rules while containing much or all of the image information.
Abstract: Fractals are geometric or data structures which do not simplify under magnification. Fractal Image Compression is a technique which associates a fractal to an image. On the one hand, the fractal can be described in terms of a few succinct rules, while on the other, the fractal contains much or all of the image information. Since the rules are described with fewer bits of data than the image, compression results. Data compression with fractals is an approach to reach high compression ratios for large data streams related to images. The high compression ratios are attained at a cost of large amounts of computation. Both lossless and lossy modes are supported by the technique. The technique is stable in that small errors in codes lead to small errors in image data. Applications to the NASA mission are discussed.

673 citations


Proceedings ArticleDOI
27 Apr 1993
TL;DR: A technique using boundary matching to compensate for lost or erroneously received motion vectors in motion-compensated video coding is proposed; results show that the extended boundary matching algorithm (EBMA) yields better image quality.
Abstract: A technique using boundary matching to compensate for lost or erroneously received motion vectors in motion-compensated video coding is proposed. This technique, called the boundary matching algorithm, produces noticeably better results than those reported previously. It is first assumed that the displaced frame differences contain no errors. This assumption is then relaxed by proposing an algorithm (the extended boundary matching algorithm, or EBMA) which can recover both the missing displaced frame differences and the missing motion vectors. The images obtained using these methods and some other methods are compared, and they clearly show that EBMA achieves better image quality.
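A rough sketch of the boundary-matching idea, not the paper's exact algorithm: for a lost block, each candidate motion vector (typically the zero vector and the neighbours' vectors) fetches a block from the previous frame, and the candidate whose edge pixels best match the correctly received pixels surrounding the hole wins. Array shapes, names, and the absolute-difference boundary cost are assumptions.

```python
import numpy as np

def conceal_block(prev, cur, top, left, bs, candidates):
    """Choose the candidate motion vector whose motion-compensated block
    (taken from the previous frame) best matches the correctly received
    pixels bordering the lost block in the current frame."""
    best_mv, best_cost = None, np.inf
    for dy, dx in candidates:
        y, x = top + dy, left + dx
        if y < 0 or x < 0 or y + bs > prev.shape[0] or x + bs > prev.shape[1]:
            continue                                  # candidate leaves the frame
        block = prev[y:y + bs, x:x + bs].astype(float)
        cost = 0.0
        if top > 0:                                   # row just above the hole
            cost += np.abs(block[0] - cur[top - 1, left:left + bs]).sum()
        if left > 0:                                  # column just left of the hole
            cost += np.abs(block[:, 0] - cur[top:top + bs, left - 1]).sum()
        if top + bs < cur.shape[0]:                   # row just below the hole
            cost += np.abs(block[-1] - cur[top + bs, left:left + bs]).sum()
        if cost < best_cost:
            best_mv, best_cost = (dy, dx), cost
    return best_mv

# Typical (hypothetical) call: candidates are the zero vector and neighbours' vectors.
# mv = conceal_block(prev_frame, cur_frame, top=64, left=32, bs=16,
#                    candidates=[(0, 0), (2, -1), (-3, 4)])
```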

453 citations


Proceedings ArticleDOI
01 Jan 1993
TL;DR: The method treats each DCT coefficient as an approximation to the local response of a visual "channel" and estimates the quantization matrix for a particular image that yields minimum bit rate for a given total perceptual error, or minimum perceptual error for a given bit rate.
Abstract: Many image compression standards (JPEG, MPEG, H.261) are based on the Discrete Cosine Transform (DCT). However, these standards do not specify the actual DCT quantization matrix. We have previously provided mathematical formulae to compute a perceptually lossless quantization matrix. Here I show how to compute a matrix that is optimized for a particular image. The method treats each DCT coefficient as an approximation to the local response of a visual 'channel'. For a given quantization matrix, the DCT quantization errors are adjusted by contrast sensitivity, light adaptation, and contrast masking, and are pooled non-linearly over the blocks of the image. This yields an 8x8 'perceptual error matrix'. A second non-linear pooling over the perceptual error matrix yields total perceptual error. With this model we may estimate the quantization matrix for a particular image that yields minimum bit rate for a given total perceptual error, or minimum perceptual error for a given bit rate. Custom matrices for a number of images show clear improvement over image-independent matrices. Custom matrices are compatible with the JPEG standard, which requires transmission of the quantization matrix.
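A minimal sketch of the kind of perceptual pooling the abstract describes, assuming per-coefficient visibility thresholds are already available and using Minkowski exponents of 4 for both pooling stages; light adaptation and contrast masking, which the paper's model includes, are omitted, and all names are illustrative.

```python
import numpy as np

def perceptual_error(dct_error, thresholds, beta_space=4.0, beta_freq=4.0):
    """Two-stage Minkowski pooling of threshold-scaled DCT quantization
    errors: over blocks first (giving an 8x8 'perceptual error matrix'),
    then over frequencies (giving one scalar)."""
    # dct_error: (num_blocks, 8, 8) quantization errors; thresholds: (8, 8)
    d = np.abs(dct_error) / thresholds
    per_freq = (d ** beta_space).sum(axis=0) ** (1.0 / beta_space)   # 8x8 matrix
    return (per_freq ** beta_freq).sum() ** (1.0 / beta_freq)
```

A quantization-matrix search would then adjust the 64 step sizes to trade this scalar error against bit rate.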

336 citations


Journal ArticleDOI
TL;DR: The Joint Photographic Experts Group (JPEG) and Motion Picture Experts Group (MPEG) algorithms for image and video compression are modified to incorporate block interleaving in the spatial domain and DCT coefficient segmentation in the frequency domain to conceal the errors due to packet loss.
Abstract: The applications of discrete cosine transform (DCT)-based image- and video-coding methods in the asynchronous transfer mode (ATM) environment are considered. Coding and reconstruction mechanisms are jointly designed to achieve a good compromise among compression gain, system complexity, processing delay, error-concealment capability, and reconstruction quality. The Joint Photographic Experts Group (JPEG) and Motion Picture Experts Group (MPEG) algorithms for image and video compression are modified to incorporate block interleaving in the spatial domain and DCT coefficient segmentation in the frequency domain to conceal the errors due to packet loss. A new algorithm is developed that recovers the damaged regions by adaptive interpolation in the spatial, temporal, and frequency domains. The weights used for spatial and temporal interpolations are varied according to the motion content and loss patterns of the damaged regions. When combined with proper layered transmission, the proposed coding and reconstruction methods can handle very high packet-loss rates at only a slight cost in compression gain, system complexity, and processing delay.

298 citations


Proceedings ArticleDOI
08 Sep 1993
TL;DR: A perception-based model that predicts subjective ratings from these objective measurements, and a demonstration of the correlation between the model's predictions and viewer panel ratings are presented.
Abstract: The Institute for Telecommunication Sciences (ITS) has developed an objective video quality assessment system that emulates human perception. The system returns results that agree closely with quality judgements made by a large panel of viewers. Such a system is valuable because it provides broadcasters, video engineers and standards organizations with the capability for making meaningful video quality evaluations without convening viewer panels. The issue is timely because compressed digital video systems present new quality measurement questions that are largely unanswered. The perception-based system was developed and tested for a broad range of scenes and video technologies. The 36 test scenes contained widely varying amounts of spatial and temporal information. The 27 impairments included digital video compression systems operating at line rates from 56 kbits/sec to 45 Mbits/sec with controlled error rates, NTSC encode/decode cycles, VHS and S-VHS record/play cycles, and VHF transmission. Subjective viewer ratings of the video quality were gathered in the ITS subjective viewing laboratory that conforms to CCIR Recommendation 500-3. Objective measures of video quality were extracted from the digitally sampled video. These objective measurements are designed to quantify the spatial and temporal distortions perceived by the viewer. This paper presents the following: a detailed description of several of the best ITS objective measurements, a perception-based model that predicts subjective ratings from these objective measurements, and a demonstration of the correlation between the model's predictions and viewer panel ratings. A personal computer-based system is being developed that will implement these objective video quality measurements in real time. These video quality measures are being considered for inclusion in the Digital Video Teleconferencing Performance Standard by the American National Standards Institute (ANSI) Accredited Standards Committee T1, Working Group T1A1.5.

286 citations


Journal ArticleDOI
TL;DR: A new algorithm for ECG signal compression is introduced that can be considered a generalization of the recently published average beat subtraction method, and was found superior at any bit rate.
Abstract: A new algorithm for ECG signal compression is introduced. The compression system is based on the subautoregression (SAR) model, known also as the long-term prediction (LTP) model. The periodicity of the ECG signal is employed in order to further reduce redundancy, thus yielding high compression ratios. The suggested algorithm was evaluated using an in-house database. Very low bit rates on the order of 70 b/s are achieved with a relatively low reconstruction error (percent RMS difference, PRD) of less than 10%. The algorithm was compared, using the same database, with the conventional linear prediction (short-term prediction, STP) method, and was found superior at any bit rate. The suggested algorithm can be considered a generalization of the recently published average beat subtraction method.
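A one-tap sketch of long-term (beat-to-beat) prediction, assuming the beat period has already been estimated; the paper's SAR/LTP model is richer (several taps around the period, adapted parameters), but the residual-energy idea is the same. Function names are illustrative.

```python
import numpy as np

def ltp_residual(x, period, gain=1.0):
    """One-tap long-term prediction: each sample is predicted by the
    sample one beat period earlier; the low-energy residual is what
    gets quantized and entropy coded."""
    x = np.asarray(x, dtype=float)
    r = x.copy()
    r[period:] -= gain * x[:-period]
    return r

def ltp_reconstruct(r, period, gain=1.0):
    """Invert the prediction sample by sample."""
    x = np.array(r, dtype=float)
    for n in range(period, len(x)):
        x[n] += gain * x[n - period]
    return x
```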

262 citations


Proceedings ArticleDOI
01 Jun 1993
TL;DR: This paper adapts three well-known data compressors to get three simple, deterministic, and universal prefetchers, and concludes that prediction for prefetching based on data compression techniques holds great promise.
Abstract: An important issue that affects response time performance in current OODB and hypertext systems is the I/O involved in moving objects from slow memory to cache. A promising way to tackle this problem is to use prefetching, in which we predict the user's next page requests and get those pages into cache in the background. Current databases perform limited prefetching using techniques derived from older virtual memory systems. A novel idea of using data compression techniques for prefetching was recently advocated in [KrV, ViK], in which prefetchers based on the Lempel-Ziv data compressor (the UNIX compress command) were shown theoretically to be optimal in the limit. In this paper we analyze the practical aspects of using data compression techniques for prefetching. We adapt three well-known data compressors to get three simple, deterministic, and universal prefetchers. We simulate our prefetchers on sequences of page accesses derived from the OO1 and OO7 benchmarks and from CAD applications, and demonstrate significant reductions in fault-rate. We examine the important issues of cache replacement, size of the data structure used by the prefetcher, and problems arising from bursts of “fast” page requests (that leave virtually no time between adjacent requests for prefetching and bookkeeping). We conclude that prediction for prefetching based on data compression techniques holds great promise.
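A toy illustration of prediction-by-compression in the LZ78 style, not a reimplementation of the paper's three adapted prefetchers: the page trace is parsed into phrases exactly as an LZ78 compressor would, and the children of the current parse-tree node, ranked by count, are the pages one might prefetch. Cache replacement and the timing issues discussed in the paper are ignored; names are assumptions.

```python
from collections import defaultdict

class LZPrefetcher:
    """LZ78-flavoured predictor: page accesses are parsed into phrases as
    an LZ78 compressor would, and the children of the current parse-tree
    node (ranked by observed count) are the prefetch suggestions."""

    def __init__(self):
        self.root = {}                    # trie of phrases
        self.counts = defaultdict(int)    # phrase -> occurrence count
        self.node, self.path = self.root, ()

    def predict(self, k=1):
        return sorted(self.node, key=lambda p: -self.counts[self.path + (p,)])[:k]

    def access(self, page):
        key = self.path + (page,)
        self.counts[key] += 1
        if page in self.node:             # phrase continues
            self.node, self.path = self.node[page], key
        else:                             # phrase ends: add leaf, restart at root
            self.node[page] = {}
            self.node, self.path = self.root, ()

pf = LZPrefetcher()
for page in [1, 2, 3, 1, 2, 3, 1, 2, 3, 1]:
    hint = pf.predict()                   # pages to fetch in the background
    pf.access(page)
```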

260 citations


Proceedings ArticleDOI
30 Mar 1993
TL;DR: A new method gives compression comparable with the JPEG lossless mode, with about five times the speed, based on a novel use of two neighboring pixels for both prediction and error modeling.
Abstract: A new method gives compression comparable with the JPEG lossless mode, with about five times the speed. The method, called FELICS, is based on a novel use of two neighboring pixels for both prediction and error modeling. For coding, the authors use single bits, adjusted binary codes, and Golomb or Rice codes. For the latter they present and analyze a provably good method for estimating the single coding parameter.
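A small sketch of the Golomb/Rice side of this scheme: encoding a nonnegative residual with parameter k and choosing k to minimise total code length. FELICS itself estimates the parameter adaptively per context and also uses single bits and adjusted binary codes for in-range values; the batch estimator below is only illustrative.

```python
def rice_encode(value, k):
    """Rice code for a nonnegative integer: unary quotient, '0' stop bit,
    then k binary remainder bits."""
    q, r = value >> k, value & ((1 << k) - 1)
    return '1' * q + '0' + (format(r, '0{}b'.format(k)) if k else '')

def best_rice_k(values, k_max=8):
    """Pick the parameter k minimising total code length over a batch of
    nonnegative values (each code costs (v >> k) + 1 + k bits)."""
    return min(range(k_max + 1), key=lambda k: sum((v >> k) + 1 + k for v in values))

print(rice_encode(9, 2))            # '11001'
print(best_rice_k([3, 7, 2, 12, 5]))
```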

259 citations


Journal ArticleDOI
TL;DR: Experiments indicate that the proposed adaptive wavelet selection procedure by itself can achieve almost transparent coding of monophonic compact disk (CD) quality signals at bit rates of 64-70 kilobits per second (kb/s).
Abstract: Describes a novel wavelet based audio synthesis and coding method. The method uses optimal adaptive wavelet selection and wavelet coefficients quantization procedures together with a dynamic dictionary approach. The adaptive wavelet transform selection and transform coefficient bit allocation procedures are designed to take advantage of the masking effect in human hearing. They minimize the number of bits required to represent each frame of audio material at a fixed distortion level. The dynamic dictionary greatly reduces statistical redundancies in the audio source. Experiments indicate that the proposed adaptive wavelet selection procedure by itself can achieve almost transparent coding of monophonic compact disk (CD) quality signals (sampled at 44.1 kHz) at bit rates of 64-70 kilobits per second (kb/s). The combined adaptive wavelet selection and dynamic dictionary coding procedures achieve almost transparent coding of monophonic CD quality signals at bit rates of 48-66 kb/s.

Proceedings ArticleDOI
30 Mar 1993
TL;DR: The authors propose a lossless algorithm based on regularities in DNA sequences, such as the presence of palindromes; the results, although not satisfactory, are far better than those of classical algorithms.
Abstract: The authors propose a lossless algorithm based on regularities, such as the presence of palindromes, in the DNA. The results obtained, although not satisfactory, are far better than those of classical algorithms.
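A naive sketch of the kind of regularity such an algorithm can exploit: a factor that already occurred earlier, either directly or as a reverse complement (a biological "palindrome"), can be coded as a reference instead of literal bases. The quadratic search and the function names are illustrative, not the authors' method.

```python
COMPLEMENT = str.maketrans('ACGT', 'TGCA')

def reverse_complement(s):
    return s.translate(COMPLEMENT)[::-1]

def longest_regularity(seq, pos, min_len=4):
    """Longest factor starting at `pos` that already occurred earlier,
    either directly or as a reverse complement; such factors can be coded
    as (kind, offset, length) references instead of literal bases."""
    best, history = None, seq[:pos]
    for length in range(min_len, len(seq) - pos + 1):
        factor = seq[pos:pos + length]
        rc = reverse_complement(factor)
        if factor in history:
            best = ('repeat', history.index(factor), length)
        elif rc in history:
            best = ('palindrome', history.index(rc), length)
        else:
            break                        # longer factors cannot match either
    return best

print(longest_regularity('AACGGGGCGTT', 7))   # ('palindrome', 0, 4)
```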

Patent
17 Dec 1993
TL;DR: A video compression system comprises a pre-processing section (102), an encoder (106), and a post-processing section (114); the pre-processing section employs a median decimation filter (122) which combines median filtering and decimation.
Abstract: A video compression system comprises a pre-processing section (102), an encoder (106), and a post-processing section (114). The pre-processing section (102) employs a median decimation filter (122) which combines median filtering and decimation. The pre-processing section (102) also employs adaptive temporal filtering and content-adaptive noise reduction filtering to provide images with proper smoothness and sharpness to match the encoder characteristics. The encoder (106) employs a two-pass look-ahead allocation rate buffer control scheme in which the number of bits allocated and the number subsequently generated for each block may differ. In the first pass, the mean square error for each block is estimated to determine the number of bits assigned to each block in a frame. In the second pass, the degree of compression is controlled as a function of the total number of bits generated for all the preceding blocks and the sum of the bits allocated to those preceding blocks.

Journal ArticleDOI
TL;DR: It is shown how the algebraic operations of pixel-wise and scalar addition and multiplication, which are the basis for many image transformations, can be implemented on compressed images.
Abstract: A family of algorithms that implement operations on compressed digital images is described. These algorithms allow many traditional image manipulation operations to be performed 50 to 100 times faster than their brute-force counterparts. It is shown how the algebraic operations of pixel-wise and scalar addition and multiplication, which are the basis for many image transformations, can be implemented on compressed images. These operations are used to implement two common video transformations: dissolving one video sequence into another and subtitling. The performance of these operations is compared with the brute-force approach. The limitations of the technique, extensions to other compression standards and the relationship of this research to other work in the area are discussed.
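The algebra behind such compressed-domain operations follows from the linearity of the DCT. The sketch below assumes dequantized DCT coefficients stored as an array of 8x8 blocks produced by an orthonormal transform, and it ignores the quantization and entropy-coding layers that the paper's algorithms also handle; all names are illustrative.

```python
import numpy as np

def scale_compressed(dct_blocks, alpha):
    """Pixel-wise scalar multiplication: the DCT is linear, so scaling
    every coefficient scales every pixel by the same factor."""
    return alpha * dct_blocks

def brighten_compressed(dct_blocks, offset, block_size=8):
    """Pixel-wise addition of a constant: only each block's DC coefficient
    changes (by offset * block_size for an orthonormal 2-D DCT)."""
    out = dct_blocks.copy()
    out[:, 0, 0] += offset * block_size
    return out

def dissolve_compressed(a_blocks, b_blocks, alpha):
    """Dissolve one sequence's frame into another's, directly in the
    transform domain: alpha*A + (1 - alpha)*B."""
    return alpha * a_blocks + (1.0 - alpha) * b_blocks
```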

Proceedings ArticleDOI
J.M. Shapiro
30 Mar 1993
TL;DR: The algorithm consistently produces compression results that are competitive with virtually all known compression algorithms on standard test images, but requires absolutely no training, no pre-stored tables or codebooks, and no prior knowledge of the image source.
Abstract: This paper describes a simple, yet remarkably effective, image compression algorithm, having the property that the bits in the bit stream are generated in order of importance. A fully embedded code represents a sequence of binary decisions that distinguish an image from the 'null' image. Using an embedded coding algorithm, an encoder can terminate the encoding at any point thereby allowing a target rate or target distortion metric to be met exactly. Also, the decoder can cease decoding at any point in the bit stream and still produce exactly the same image that would have been encoded at the bit rate corresponding to the truncated bit stream. The algorithm consistently produces compression results that are competitive with virtually all known compression algorithms on standard test images, but requires absolutely no training, no pre-stored tables or codebooks, and no prior knowledge of the image source. It is based on four key concepts: (1) wavelet transform or hierarchical subband decomposition, (2) prediction of the absence of significant information across scales by exploiting the self-similarity inherent in images, (3) entropy-coded successive-approximation quantization, and (4) universal lossless data compression achieved via adaptive arithmetic coding.

Proceedings ArticleDOI
03 May 1993
TL;DR: From this analysis, an improved method is proposed, and it is shown that the new method can increase the PSNR by up to 1.3 dB over the original method.
Abstract: The zero-tree method for image compression, proposed by J. Shapiro (1992), is studied. The method is presented in a more general perspective, so that its characteristics can be better understood. From this analysis, an improved method is proposed, and it is shown that the new method can increase the PSNR by up to 1.3 dB over the original method.

Journal ArticleDOI
01 Sep 1993
TL;DR: This work addresses several aspects of reversible data compression in database systems: general concepts of data compression, a number of compression techniques, a comparison of the effects of compression on common data types, advantages and disadvantages of compressing data, and future research needs.
Abstract: Despite the fact that computer memory costs have decreased dramatically over the past few years, data storage still remains, and will probably always remain, an important cost factor for many large-scale database applications. Compressing data in a database system is attractive for two reasons: data storage reduction and performance improvement. Storage reduction is a direct and obvious benefit, while performance improves because smaller amounts of physical data need to be moved for any particular operation on the database. We address several aspects of reversible data compression and compression techniques: general concepts of data compression; a number of compression techniques; a comparison of the effects of compression on common data types; advantages and disadvantages of compressing data; and future research needs.

Proceedings ArticleDOI
22 Oct 1993
TL;DR: A new image transformation suited for reversible (lossless) image compression is presented; it uses a simple pyramid multiresolution scheme enhanced via predictive coding, and when used for lossy compression its rate-distortion performance is comparable to other efficient lossy compression methods.
Abstract: In this paper a new image transformation suited for reversible (lossless) image compression is presented. It uses a simple pyramid multiresolution scheme which is enhanced via predictive coding. The new transformation is similar to the subband decomposition, but it uses only integer operations. The number of bits required to represent the transformed image is kept small through careful scaling and truncations. The lossless coding compression rates are smaller than those obtained with predictive coding of equivalent complexity. It is also shown that the new transform can be effectively used, with the same coding algorithm, for both lossless and lossy compression. When used for lossy compression, its rate-distortion performance is comparable to that of other efficient lossy compression methods.
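One level of a reversible integer average/difference transform (often called the S transform) shows how a subband-like decomposition can be built from integer operations only; the paper's scheme extends this idea to a 2-D pyramid with prediction, which is not reproduced in this sketch.

```python
import numpy as np

def s_transform_1d(x):
    """One level of the integer average/difference ('S') transform."""
    a, b = x[0::2].astype(int), x[1::2].astype(int)
    return (a + b) >> 1, a - b            # truncated mean, difference

def inverse_s_transform_1d(low, high):
    """Exact integer inverse of s_transform_1d."""
    a = low + ((high + 1) >> 1)
    b = a - high
    return a, b

x = np.array([10, 12, 13, 13, 200, 202, 7, 5])
low, high = s_transform_1d(x)
a, b = inverse_s_transform_1d(low, high)
recovered = np.empty_like(x)
recovered[0::2], recovered[1::2] = a, b
assert np.array_equal(recovered, x)       # lossless round trip
```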

Patent
22 Jul 1993
TL;DR: In this paper, a variable-size block multi-resolution motion estimation (MRME) scheme is presented, which can be used to estimate motion vectors in subband coding, wavelet coding and other pyramid coding systems for video compression.
Abstract: A novel variable-size block multi-resolution motion estimation (MRME) scheme is presented. The motion estimation scheme can be used to estimate motion vectors in subband coding, wavelet coding, and other pyramid coding systems for video compression. In the MRME scheme, the motion vectors in the highest layer of the pyramid are estimated first; these motion vectors are then used as the initial estimate for the next layer and gradually refined. The block size varies with the level of the pyramid. This scheme not only considerably reduces the searching and matching time but also provides a meaningful characterization of the intrinsic motion structure. In addition, the variable-size approach avoids the drawback of constant-size MRME in describing the motion of small objects. The proposed variable-block-size MRME scheme can be used to estimate motion vectors for different video source formats and resolutions, including video telephone, NTSC/PAL/SECAM, and HDTV applications.
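A bare-bones coarse-to-fine sketch of multi-resolution motion estimation, assuming a grayscale frame pair, block positions divisible by the coarsest scale, and a fixed (not variable) block size per level; the patent's variable-size refinement and subband-specific details are omitted, and all names are assumptions.

```python
import numpy as np

def downsample(img):
    """Average 2x2 blocks to build the next (coarser) pyramid level."""
    h, w = img.shape[0] & ~1, img.shape[1] & ~1
    x = img[:h, :w]
    return (x[0::2, 0::2] + x[1::2, 0::2] + x[0::2, 1::2] + x[1::2, 1::2]) / 4.0

def refine(cur, ref, top, left, bs, center, radius):
    """Small full search around `center`; returns the SAD-best vector."""
    best, best_cost = center, np.inf
    for dy in range(center[0] - radius, center[0] + radius + 1):
        for dx in range(center[1] - radius, center[1] + radius + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + bs > ref.shape[0] or x + bs > ref.shape[1]:
                continue
            cost = np.abs(cur[top:top + bs, left:left + bs]
                          - ref[y:y + bs, x:x + bs]).sum()
            if cost < best_cost:
                best, best_cost = (dy, dx), cost
    return best

def mrme(cur, ref, top, left, bs=16, levels=3, radius=2):
    """Estimate the vector at the coarsest level, then double it and
    refine it with a small search at each finer level."""
    cur_pyr, ref_pyr = [cur.astype(float)], [ref.astype(float)]
    for _ in range(levels - 1):
        cur_pyr.append(downsample(cur_pyr[-1]))
        ref_pyr.append(downsample(ref_pyr[-1]))
    mv = (0, 0)
    for lvl in range(levels - 1, -1, -1):
        s = 2 ** lvl
        mv = refine(cur_pyr[lvl], ref_pyr[lvl],
                    top // s, left // s, max(bs // s, 2), mv, radius)
        if lvl:
            mv = (2 * mv[0], 2 * mv[1])
    return mv
```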

PatentDOI
Xuedong Huang, Shenzhi Zhang
TL;DR: A data compression system greatly compresses the stored data used by a speech recognition system employing hidden Markov models (HMM) to recognize spoken words without having to decompress the entire output probability table.
Abstract: A data compression system greatly compresses the stored data used by a speech recognition system employing hidden Markov models (HMM). The speech recognition system vector quantizes the acoustic space spoken by humans by dividing it into a predetermined number of acoustic features that are stored as codewords in a vector quantization (output probability) table or codebook. For each spoken word, the speech recognition system calculates an output probability value for each codeword, the output probability value representing an estimated probability that the word will be spoken using the acoustic feature associated with the codeword. The probability values are stored in an output probability table indexed by each codeword and by each word in a vocabulary. The output probability table is arranged to allow compression of the probability values associated with each codeword based on other probability values associated with the same codeword, thereby compressing the stored output probability table. By compressing the probability values associated with each codeword separately from the probability values associated with other codewords, the speech recognition system can recognize spoken words without having to decompress the entire output probability table. In a preferred embodiment, additional compression is achieved by quantizing the probability values into 16 buckets with an equal number of probability values in each bucket. By quantizing the probability values into buckets, additional redundancy is added to the output probability table, which allows the output probability table to be compressed further.
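A small sketch of the equal-population ("bucket") quantization step, assuming one array of output probabilities per codeword: each value is replaced by a 4-bit bucket index plus a 16-entry table of representatives. The further cross-codeword compression described in the patent is not shown, and the representative choice (bucket mean) is an assumption.

```python
import numpy as np

def bucket_quantize(probs, n_buckets=16):
    """Equal-population quantization: sort the probabilities, split them
    into n_buckets groups of (nearly) equal size, and replace each value
    by its bucket index plus a small table of bucket representatives."""
    order = np.argsort(probs)
    indices = np.empty(len(probs), dtype=np.uint8)
    reps = np.empty(n_buckets)
    for b, idx in enumerate(np.array_split(order, n_buckets)):
        indices[idx] = b
        reps[b] = probs[idx].mean()
    return indices, reps

probs = np.random.default_rng(1).random(1000)
indices, reps = bucket_quantize(probs)
approx = reps[indices]        # 4 bits per value plus a 16-entry table
```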

Journal ArticleDOI
TL;DR: The analysis of 3-D scenes consisting of nonrigid moving objects using 2-D image sequences is discussed, whereby the scene parameters are transmitted and the output sequence is synthesized at the receiver by employing an analysis-by-synthesis approach.
Abstract: The analysis of 3-D scenes consisting of nonrigid moving objects using 2-D image sequences is discussed. A parametric description of dynamic objects is extracted, and the time-variant scene parameters are estimated throughout the sequence by employing an analysis-by-synthesis approach. Images are synthesized from the parametric scene description and compared with the original images input to the camera. Frame differences between the synthesized and original images are evaluated to obtain an estimated scene parameter update. The analysis method is applied to videophone scenes in the form of a data compression algorithm, whereby the scene parameters are transmitted and the output sequence is synthesized at the receiver.

Journal ArticleDOI
TL;DR: Several standard compression algorithms for image and video developed in recent years are described; compatibility among different applications and manufacturers is a key motivation.
Abstract: Most image or video applications involving transmission or storage require some form of data compression to reduce the otherwise inordinate demand on bandwidth and storage. Compatibility among different applications and manufacturers is very desirable, and often essential. This paper describes several standard compression algorithms developed in recent years.

Patent
21 Dec 1993
TL;DR: A method for compressing video movie data to a specified target size using intraframe and interframe compression schemes; when lossless run-length encoding alone cannot reach the target, a color tolerance is adjusted until the compressed size fits.
Abstract: A method for compressing video movie data to a specified target size using intraframe and interframe compression schemes. In intraframe compression, a frame of the movie is compressed by comparing adjacent pixels within the same frame. In contrast, interframe compression compresses by comparing similarly situated pixels of adjacent frames. The method begins by compressing the first frame of the video movie using intraframe compression. The first stage of the intraframe compression process does not degrade the quality of the original data, e.g., the method uses run length encoding based on the pixels' color values to compress the video data. However, in circumstances where lossless compression is not sufficient, the method utilizes a threshold value, or tolerance, to achieve further compression. In these cases, if the color variance between pixels is less than or equal to the tolerance, the method will encode the two pixels using a single color value--otherwise, the method will encode the two pixels using different color values. The method increases or decreases the tolerance to achieve compression within the target range. In cases where compression within the target range results in an image of unacceptable quality, the method will split the raw data in half and compress each portion of data separately. Frames after the first frame are generally compressed using a combination of intraframe and interframe compression. Additionally, the method periodically encodes frames using intraframe compression only in order to enhance random frame access.
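A single-channel sketch of run-length encoding with a tolerance, plus a crude search that raises the tolerance until the output fits a target size, in the spirit of the target-size control described above; the actual method works on full color values and also switches between intraframe and interframe modes, and all names are illustrative.

```python
def rle_with_tolerance(pixels, tol=0):
    """Run-length encode one channel of a scan line; a pixel joins the
    current run if it differs from the run's value by at most `tol`
    (tol=0 is lossless)."""
    runs, run_value, run_len = [], pixels[0], 1
    for p in pixels[1:]:
        if abs(p - run_value) <= tol:
            run_len += 1
        else:
            runs.append((run_value, run_len))
            run_value, run_len = p, 1
    runs.append((run_value, run_len))
    return runs

def compress_to_target(pixels, target_runs, max_tol=64):
    """Raise the tolerance until the run list is short enough."""
    for tol in range(max_tol + 1):
        runs = rle_with_tolerance(pixels, tol)
        if len(runs) <= target_runs:
            break
    return tol, runs

print(compress_to_target([10, 11, 12, 40, 41, 42, 90, 91], target_runs=3))
# -> (2, [(10, 3), (40, 3), (90, 2)])
```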

Journal ArticleDOI
TL;DR: Experimental results demonstrate that GMS is more robust than other algorithms in locating the global optimum and is computationally simpler than the full search algorithm.
Abstract: A new approach to block-based motion estimation for video compression, called the genetic motion search (GMS) algorithm, is introduced. It makes use of the genetic algorithm (GA), a search concept modeled on natural selection. In contrast to existing fast algorithms, which rely on the assumption that the matching error decreases monotonically as the search point moves closer to the global optimum, the GMS algorithm is not fundamentally limited by this restriction. Experimental results demonstrate that GMS is more robust than other algorithms in locating the global optimum and is computationally simpler than the full search algorithm. GMS is also suitable for VLSI implementation because of its regularity and high architectural parallelism.
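A toy genetic search for one block's motion vector, with fitness = negative SAD, truncation selection, averaging crossover, and small random mutations; the population size, operators, and encoding are assumptions, not the paper's GMS design.

```python
import numpy as np

rng = np.random.default_rng(0)

def sad(cur, ref, top, left, dy, dx, bs):
    y, x = top + dy, left + dx
    if y < 0 or x < 0 or y + bs > ref.shape[0] or x + bs > ref.shape[1]:
        return np.inf
    return np.abs(cur[top:top + bs, left:left + bs].astype(float)
                  - ref[y:y + bs, x:x + bs]).sum()

def genetic_motion_search(cur, ref, top, left, bs=16, search=16,
                          pop_size=20, generations=10):
    """Evolve a population of candidate vectors for one block:
    fitness = -SAD, truncation selection, averaging crossover,
    small random mutation."""
    pop = rng.integers(-search, search + 1, size=(pop_size, 2))
    for _ in range(generations):
        costs = np.array([sad(cur, ref, top, left, dy, dx, bs) for dy, dx in pop])
        parents = pop[np.argsort(costs)[:pop_size // 2]]        # keep the fittest half
        kids = []
        while len(kids) < pop_size - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]  # pick two parents
            child = (a + b) // 2 + rng.integers(-2, 3, 2)       # crossover + mutation
            kids.append(np.clip(child, -search, search))
        pop = np.vstack([parents] + kids)
    costs = np.array([sad(cur, ref, top, left, dy, dx, bs) for dy, dx in pop])
    return tuple(pop[np.argmin(costs)])
```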

Patent
18 Nov 1993
TL;DR: An improved electronic solid-state record/playback device (SSRPD) and electronic system may be used to record and play back information such as audio, video, control, and other data.
Abstract: An improved electronic solid-state record/playback device (SSRPD) and electronic system may be used to record and play back information such as audio, video, control, and other data. The SSRPD uses no tape or moving parts in the actual record/playback process but includes an audio and/or video and/or other data record/playback module (RPM), which performs all of the record signal conversion, recording and data compression algorithms, digital signal processing, and playback signal conversion. The SSRPD has program input processing and control output processing modules so that other devices may be controlled in different ways, including interactive control. A time and control processor module facilitates internal synchronization of the SSRPD audio, video, and control information, as well as synchronization with other devices. The SSRPD information is recorded into internal resident memory. The novel interface allows information to be exchanged without degradation, via a digital Portable Storage Device (PSD) which may be a Random Access Memory card (RAM card), with other SSRPDs as well as with a special Computer Interface Device (CID). The CID is an intelligent device that connects to a standard computing device such as a PC and facilitates functions such as reading, writing, editing, and archiving PSD data, as well as performing diagnostic routines.

Journal ArticleDOI
01 Aug 1993
TL;DR: This method can compensate for many motion types, such as scaling and rotation, where conventional block matching fails, and it can be incorporated in current hybrid video compression systems with little additional complexity and no change in the bit stream syntax.
Abstract: A method for temporal prediction of image sequences is proposed. The motion vectors of conventional block-based motion compensation schemes are used to convey a mapping of a selected set of image points, instead of blocks, between the previous and the current image. The prediction is made by geometrically transforming, or warping, the previous image using the point pairs defined by the mapping as fixed points in the transformation. This method produces a prediction image without block artifacts and can compensate for many motion types where conventional block matching fails, such as scaling and rotation. It can also be incorporated in existing hybrid video compression systems with little additional complexity and few or no changes in the bit stream syntax. It is shown that a significant subjective improvement in the prediction as well as a consistent reduction in the objectively measured prediction error is obtained.

Paul G. Howard
02 Jan 1993
TL;DR: It is shown that high-efficiency lossless compression of both text and grayscale images can be obtained by using appropriate models in conjunction with arithmetic coding, and that greatly increased speed can be achieved at only a small cost in compression efficiency.
Abstract: Our thesis is that high compression efficiency for text and images can be obtained by using sophisticated statistical compression techniques, and that greatly increased speed can be achieved at only a small cost in compression efficiency. Our emphasis is on elegant design and mathematical as well as empirical analysis. We analyze arithmetic coding as it is commonly implemented and show rigorously that almost no compression is lost in the implementation. We show that high-efficiency lossless compression of both text and grayscale images can be obtained by using appropriate models in conjunction with arithmetic coding. We introduce a four-component paradigm for lossless image compression and present two methods that give state-of-the-art compression efficiency. In the text compression area, we give a small improvement on the preferred method in the literature. We show that we can often obtain significantly improved throughput at the cost of slightly reduced compression. The extra speed comes from simplified coding and modeling. Coding is simplified by using prefix codes when arithmetic coding is not necessary, and by using a new practical version of arithmetic coding, called quasi-arithmetic coding, when the precision of arithmetic coding is needed. We simplify image modeling by using small prediction contexts and making plausible assumptions about the distributions of pixel intensity values. For text modeling we use self-organizing-list heuristics and low-precision statistics.

Journal ArticleDOI
01 Sep 1993
TL;DR: A review is presented of vector quantization, the mapping of pixel intensity vectors into binary vectors indexing a limited number of possible reproductions, which is a popular image compression algorithm.
Abstract: A review is presented of vector quantization, the mapping of pixel intensity vectors into binary vectors indexing a limited number of possible reproductions, which is a popular image compression algorithm. Compression has traditionally been done with little regard for image processing operations that may precede or follow the compression step. Recent work has used vector quantization both to simplify image processing tasks, such as enhancement, classification, halftoning, and edge detection, and to reduce the computational complexity by performing the tasks simultaneously with the compression. The fundamental ideas of vector quantization are explained, and vector quantization algorithms that perform image processing are surveyed.
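The core mapping of vector quantization in a few lines, assuming the codebook has already been trained (e.g., by a k-means/LBG-style procedure, which is out of scope here): each pixel-intensity vector is replaced by the index of its nearest codeword. Shapes and names are illustrative.

```python
import numpy as np

def vq_encode(vectors, codebook):
    """Map each pixel-intensity vector (rows of `vectors`, shape (N, d))
    to the index of its nearest codeword (rows of `codebook`, shape (K, d));
    only the indices are stored or transmitted."""
    d = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    return d.argmin(axis=1)

def vq_decode(indices, codebook):
    """Look the indices back up in the codebook."""
    return codebook[indices]
```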

Patent
03 Sep 1993
TL;DR: In this article, a variable-size multi-resolution motion compensation (MRMC) prediction scheme is used to produce displaced residual wavelets (DRWs), which are then adaptively quantized with their respective bit maps.
Abstract: A video coding scheme based on wavelet representation performs motion compensation in the wavelet domain rather than spatial domain. This inter-frame wavelet transform coding scheme preferably uses a variable-size multi-resolution motion compensation (MRMC) prediction scheme. The MRMC scheme produces displaced residual wavelets (DRWs). An optimal bit allocation algorithm produces a bit map for each DRW, and each DRW is then adaptively quantized with its respective bit map. Each quantized DRW is then coded into a bit stream.

Patent
26 May 1993
TL;DR: In this paper, a standby dictionary is used to store a subset of encoded data entries previously stored in a current dictionary to reduce the loss in data compression caused by dictionary resets, and data is compressed/decompressed according to the address location of data entries contained within a dictionary built in a content addressable memory (CAM).
Abstract: A class of lossless data compression algorithms use a memory-based dictionary of finite size to facilitate the compression and decompression of data. To reduce the loss in data compression caused by dictionary resets, a standby dictionary is used to store a subset of encoded data entries previously stored in a current dictionary. In a second aspect of the invention, data is compressed/decompressed according to the address location of data entries contained within a dictionary built in a content addressable memory (CAM). In a third aspect of the invention, the minimum memory/high compression capacity of the standby dictionary scheme is combined with the fast single-cycle per character encoding/decoding capacity of the CAM circuit. The circuit uses multiple dictionaries within the storage locations of a CAM to reduce the amount of memory required to provide a high data compression ratio.