
Showing papers on "Data compression published in 1994"


01 Jan 1994
TL;DR: A block-sorting, lossless data compression algorithm is described, together with an implementation of that algorithm, and the performance of the implementation is compared with widely available data compressors running on the same hardware.
Abstract: We describe a block-sorting, lossless data compression algorithm, and our implementation of that algorithm. We compare the performance of our implementation with widely available data compressors running on the same hardware. The algorithm works by applying a reversible transformation to a block of input …
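To make the block-sorting idea concrete, here is a minimal Python sketch of the forward transform only (this is not the authors' implementation; the naive rotation sort and the function name are illustrative). It returns the last column of the sorted rotations of the block together with the index of the original block, which is enough to invert the transform exactly.

```python
def block_sort_transform(block: bytes):
    """Reversible block-sorting transform: the last column of the
    lexicographically sorted rotations of `block`, plus the position of the
    original block among those rotations."""
    n = len(block)
    order = sorted(range(n), key=lambda i: block[i:] + block[:i])
    last_column = bytes(block[(i - 1) % n] for i in order)
    original_index = order.index(0)
    return last_column, original_index

# The transform clusters symbols that share a context, so a move-to-front pass
# followed by a simple entropy coder compresses the result well.
print(block_sort_transform(b"banana"))  # (b'nnbaaa', 3)
```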

2,753 citations


Journal ArticleDOI
TL;DR: Simulation results show that, as compared to TSS, NTSS is much more robust, produces smaller motion compensation errors, and has comparable computational complexity.
Abstract: The three-step search (TSS) algorithm has been widely used as the motion estimation technique in some low bit-rate video compression applications, owing to its simplicity and effectiveness. However, TSS uses a uniformly allocated checking point pattern in its first step, which becomes inefficient for the estimation of small motions. A new three-step search (NTSS) algorithm is proposed in the paper. The features of NTSS are that it employs a center-biased checking point pattern in the first step, which is derived by making the search adaptive to the motion vector distribution, and a halfway-stop technique to reduce the computation cost. Simulation results show that, as compared to TSS, NTSS is much more robust, produces smaller motion compensation errors, and has comparable computational complexity.
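As a rough illustration of the center-biased first step and the halfway-stop rule, here is a hedged Python sketch assuming a maximum displacement of ±7 (so the TSS step size starts at 4); the SAD cost function, argument names, and frame handling are placeholders rather than the authors' code.

```python
import numpy as np

def sad(cur_block, ref, x, y, dx, dy):
    """Sum of absolute differences for the candidate displaced by (dx, dy);
    the candidate block is assumed to lie inside the reference frame."""
    h, w = cur_block.shape
    cand = ref[y + dy:y + dy + h, x + dx:x + dx + w]
    return int(np.abs(cur_block.astype(int) - cand.astype(int)).sum())

def ntss_first_step(cur_block, ref, x, y):
    # 17 checking points: the 9 TSS points at step size 4 plus the 8 points
    # immediately around the centre (the centre-biased pattern).
    tss_points = [(dx, dy) for dx in (-4, 0, 4) for dy in (-4, 0, 4)]
    centre_points = [(dx, dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0)]
    best = min(tss_points + centre_points,
               key=lambda d: sad(cur_block, ref, x, y, *d))
    # Halfway-stop: a minimum at or next to the centre ends the search early
    # (the full algorithm performs one small refinement in the latter case).
    if best == (0, 0) or best in centre_points:
        return best, True
    return best, False   # otherwise continue with the remaining TSS steps
```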

1,689 citations


Journal ArticleDOI
TL;DR: It is proposed that compact coding schemes are insufficient to account for the receptive field properties of cells in the mammalian visual pathway and suggested that natural scenes, to a first approximation, can be considered as a sum of self-similar local functions (the inverse of a wavelet).
Abstract: A number of recent attempts have been made to describe early sensory coding in terms of a general information processing strategy. In this paper, two strategies are contrasted. Both strategies take advantage of the redundancy in the environment to produce more effective representations. The first is described as a "compact" coding scheme. A compact code performs a transform that allows the input to be represented with a reduced number of vectors (cells) with minimal RMS error. This approach has recently become popular in the neural network literature and is related to a process called Principal Components Analysis (PCA). A number of recent papers have suggested that the optimal compact code for representing natural scenes will have units with receptive field profiles much like those found in the retina and primary visual cortex. However, in this paper, it is proposed that compact coding schemes are insufficient to account for the receptive field properties of cells in the mammalian visual pathway. In contrast, it is proposed that the visual system is near to optimal in representing natural scenes only if optimality is defined in terms of "sparse distributed" coding. In a sparse distributed code, all cells in the code have an equal response probability across the class of images but have a low response probability for any single image. In such a code, the dimensionality is not reduced. Rather, the redundancy of the input is transformed into the redundancy of the firing pattern of cells. It is proposed that the signature for a sparse code is found in the fourth moment of the response distribution (i.e., the kurtosis). In measurements with 55 calibrated natural scenes, the kurtosis was found to peak when the bandwidths of the visual code matched those of cells in the mammalian visual cortex. Codes resembling "wavelet transforms" are proposed to be effective because the response histograms of such codes are sparse (i.e., show high kurtosis) when presented with natural scenes. It is proposed that the structure of the image that allows sparse coding is found in the phase spectrum of the image. It is suggested that natural scenes, to a first approximation, can be considered as a sum of self-similar local functions (the inverse of a wavelet). Possible reasons for why sensory systems would evolve toward sparse coding are presented.
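For reference, the sparseness signature described above, the kurtosis of a code's response histogram, is straightforward to compute; the sketch below is purely illustrative (no filter bank or calibrated natural images included) and uses synthetic distributions in place of real responses.

```python
import numpy as np

def kurtosis(responses: np.ndarray) -> float:
    """Fourth moment of the zero-mean responses divided by the squared
    variance: ~3 for a Gaussian, much larger for a sparse, heavy-tailed code."""
    r = responses - responses.mean()
    return float(np.mean(r ** 4) / np.mean(r ** 2) ** 2)

print(kurtosis(np.random.normal(size=100_000)))   # ~3 (dense, Gaussian-like)
print(kurtosis(np.random.laplace(size=100_000)))  # ~6 (sparser, heavy-tailed)
```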

990 citations


Proceedings ArticleDOI
01 Oct 1994
TL;DR: The main findings are that the tail behavior of the marginal bandwidth distribution can be accurately described using “heavy-tailed” distributions and the autocorrelation of the VBR video sequence decays hyperbolically and can be modeled using self-similar processes.
Abstract: We present a detailed statistical analysis of a 2-hour long empirical sample of VBR video. The sample was obtained by applying a simple intraframe video compression code to an action movie. The main findings of our analysis are (1) the tail behavior of the marginal bandwidth distribution can be accurately described using “heavy-tailed” distributions (e.g., Pareto); (2) the autocorrelation of the VBR video sequence decays hyperbolically (equivalent to long-range dependence) and can be modeled using self-similar processes. We combine our findings in a new (non-Markovian) source model for VBR video and present an algorithm for generating synthetic traffic. Trace-driven simulations show that statistical multiplexing results in significant bandwidth efficiency even when long-range dependence is present. Simulations of our source model show long-range dependence and heavy-tailed marginals to be important components which are not accounted for in currently used VBR video traffic models.
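As a small illustration of the heavy-tailed marginal reported here (not the authors' source model), Pareto-distributed frame sizes can be drawn by inverse-CDF sampling; the shape and scale parameters below are placeholders.

```python
import numpy as np

def pareto_frame_sizes(n, alpha=1.5, x_m=1000.0, seed=0):
    """Draw n samples with survival function P(X > x) = (x_m / x) ** alpha."""
    rng = np.random.default_rng(seed)
    u = rng.uniform(size=n)
    return x_m / u ** (1.0 / alpha)   # inverse-CDF sampling

sizes = pareto_frame_sizes(10_000)
print(sizes.mean(), sizes.max())      # occasional very large frames dominate
```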

956 citations


Journal ArticleDOI
TL;DR: A full color video compression strategy, based on 3-D subband coding with camera pan compensation, to generate a single embedded bit stream supporting multiple decoder display formats and a wide, finely gradated range of bit rates is proposed.
Abstract: We propose a full color video compression strategy, based on 3-D subband coding with camera pan compensation, to generate a single embedded bit stream supporting multiple decoder display formats and a wide, finely gradated range of bit rates. An experimental implementation of our algorithm produces a single bit stream, from which suitable subsets are extracted to be compatible with many decoder frame sizes and frame rates and to satisfy transmission bandwidth constraints ranging from several tens of kilobits per second to several megabits per second. Reconstructed video quality from any of these bit stream subsets is often found to exceed that obtained from an MPEG-1 implementation, operated with equivalent bit rate constraints, in both perceptual quality and mean squared error. In addition, when restricted to 2-D, the algorithm produces some of the best results available in still image compression.

688 citations


Journal Article
TL;DR: This article describes a simple general-purpose data compression algorithm, called Byte Pair Encoding (BPE), which provides almost as much compression as the popular Lempel, Ziv, and Welch method.
Abstract: Data compression is becoming increasingly important as a way to stretch disk space and speed up data transfers. This article describes a simple general-purpose data compression algorithm, called Byte Pair Encoding (BPE), which provides almost as much compression as the popular Lempel, Ziv, and Welch (LZW) method [3, 2]. (I mention the LZW method in particular because it delivers good overall performance and is widely used.) BPE’s compression speed is somewhat slower than LZW’s, but BPE’s expansion is faster. The main advantage of BPE is the small, fast expansion routine, ideal for applications with limited memory. The accompanying C code provides an efficient implementation of the algorithm.
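The core of BPE is easy to sketch. The Python fragment below is a hedged re-implementation of the idea rather than Gage's C code: it repeatedly replaces the most frequent adjacent byte pair with an unused byte value and records each substitution in a table; expansion simply replays the table in reverse.

```python
from collections import Counter

def bpe_compress(data: bytes):
    """Byte Pair Encoding sketch: returns (compressed bytes, substitution table)."""
    data = bytearray(data)
    table = {}                                     # substitute byte -> replaced pair
    while True:
        unused = set(range(256)) - set(data) - set(table)
        pairs = Counter(zip(data, data[1:]))
        if not unused or not pairs:
            break
        pair, count = pairs.most_common(1)[0]
        if count < 2:
            break                                  # nothing worth replacing
        sub = unused.pop()
        table[sub] = pair
        out, i = bytearray(), 0
        while i < len(data):                       # rewrite the buffer in one pass
            if i + 1 < len(data) and (data[i], data[i + 1]) == pair:
                out.append(sub)
                i += 2
            else:
                out.append(data[i])
                i += 1
        data = out
    return bytes(data), table
```

Expansion walks the compressed bytes and recursively substitutes each table entry back, which is why the decoder can be tiny and fast, the property the article emphasizes.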

657 citations


Patent
09 May 1994
TL;DR: An apparatus and method are described for converting an input data character stream into a variable length encoded data stream in a data compression system, using a history array that stores portions of the input data stream.
Abstract: An apparatus and method are disclosed for converting an input data character stream into a variable length encoded data stream in a data compression system. The data compression system includes a history array. The history array has a plurality of entries, and each entry of the history array is for storing a portion of the input data stream. The method for converting the input data character stream includes the following steps: performing a search in the history array for the longest data string which matches the input data string; if a matching data string is found within the history array, encoding the longest matching data string by appending to the encoded data stream a tag indicating that a matching data string was found, together with a string substitution code; and if no matching data string is found within the history array, encoding the first character of the input data string by appending to the encoded data stream a raw data tag indicating that no matching data string was found, followed by the first character of the input data string.
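A hedged sketch of the encoding rule described in the claim: emit a match token (tag 1, offset, length) for the longest string found in the history, or a raw-data token (tag 0, literal) when no useful match exists. The token layout, window size, and minimum match length are illustrative choices, not the patent's bit-level format.

```python
def encode(data: bytes, window: int = 4096, min_len: int = 2):
    """Greedy longest-match encoder over a sliding history window."""
    out, i = [], 0
    while i < len(data):
        start = max(0, i - window)
        best_len, best_off = 0, 0
        for j in range(start, i):                  # scan the history array
            length = 0
            while (i + length < len(data) and j + length < i
                   and data[j + length] == data[i + length]):
                length += 1
            if length > best_len:
                best_len, best_off = length, i - j
        if best_len >= min_len:
            out.append((1, best_off, best_len))    # tag: matching string found
            i += best_len
        else:
            out.append((0, data[i]))               # raw-data tag: literal character
            i += 1
    return out
```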

429 citations


01 Jan 1994
TL;DR: Some simple functions to compute the discrete cosine transform and to compress images are developed, illustrating the use of Mathematica in image processing and providing the reader with the basic tools for further exploration of this subject.
Abstract: The discrete cosine transform (DCT) is a technique for converting a signal into elementary frequency components. It is widely used in image compression. Here we develop some simple functions to compute the DCT and to compress images. These functions illustrate the power of Mathematica in the prototyping of image processing algorithms. The rapid growth of digital imaging applications, including desktop publishing, multimedia, teleconferencing, and high-definition television (HDTV) has increased the need for effective and standardized image compression techniques. Among the emerging standards are JPEG, for compression of still images [Wallace 1991]; MPEG, for compression of motion video [Puri 1992]; and CCITT H.261 (also known as Px64), for compression of video telephony and teleconferencing. All three of these standards employ a basic technique known as the discrete cosine transform (DCT). Developed by Ahmed, Natarajan, and Rao [1974], the DCT is a close relative of the discrete Fourier transform (DFT). Its application to image compression was pioneered by Chen and Pratt [1984]. In this article, I will develop some simple functions to compute the DCT and show how it is used for image compression. We have used these functions in our laboratory to explore methods of optimizing image compression for the human viewer, using information about the human visual system [Watson 1993]. The goal of this paper is to illustrate the use of Mathematica in image processing and to provide the reader with the basic tools for further exploration of this subject.
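Since the article's own code is written in Mathematica, here is an equivalent NumPy sketch of the orthonormal 8×8 DCT-II and its inverse; the function names are ours.

```python
import numpy as np

N = 8
k = np.arange(N)
# Orthonormal DCT-II basis: C[u, x] = sqrt(2/N) * cos(pi * (2x + 1) * u / (2N)),
# with the u = 0 row scaled by 1/sqrt(2).
C = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * N))
C[0, :] /= np.sqrt(2.0)

def dct2(block):
    """Forward 2-D DCT of an 8x8 block."""
    return C @ block @ C.T

def idct2(coeffs):
    """Inverse 2-D DCT."""
    return C.T @ coeffs @ C

block = np.random.randint(0, 256, (8, 8)).astype(float) - 128
assert np.allclose(idct2(dct2(block)), block)   # the transform itself is lossless;
                                                # compression comes from quantization
```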

364 citations


Journal ArticleDOI
TL;DR: The authors combine the designed motion compensation method with the discrete cosine transform to simulate coding of CIF format images at the transmission rate of 16 kb/s; the amount of motion information required to achieve a fixed level of prediction error is reduced to one fourth that of block matching.
Abstract: Given the small bit allocation for motion information in very low bit-rate coding, motion compensation using the block matching (BMA) algorithm fails to maintain an acceptable level of prediction errors. The reason is that the motion model, or spatial transformation, assumed in block matching cannot approximate the motion in the real world precisely with a small number of parameters (i.e. motion vectors). To develop an effective motion compensation method for very low bit-rate video coding, the authors address the issues of 1) adopting more sophisticated spatial transformations than block matching, and 2) developing a motion estimation algorithm that is suitable for these spatial transformations. The spatial transformations discussed are affine transformation, bilinear transformation and perspective transformation. Two new motion estimation algorithms, a matching-based algorithm and its fast version, are presented. The performance of the motion compensation methods, which combine the spatial transformations and the proposed motion estimation algorithms, is evaluated theoretically and experimentally using the following criteria: prediction error, amount of motion information, and computational cost. The simulation results show that in the proposed method, the amount of motion information required to achieve a fixed level of prediction error is reduced to one fourth that of block matching. The authors combine the designed motion compensation method with the discrete cosine transform to simulate coding of CIF format images at the transmission rate of 16 kb/s.
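To make the contrast with block matching concrete, the sketch below shows warping-based prediction with an affine spatial transform, one of the three transformations compared in the paper; the parameter layout and the crude bilinear sampler are illustrative, not the authors' implementation.

```python
import numpy as np

def affine_predict(ref, block_xy, size, params):
    """Predict a size x size block at block_xy by sampling the reference frame
    at affinely transformed coordinates: x' = a*x + b*y + c, y' = d*x + e*y + f."""
    a, b, c, d, e, f = params
    x0, y0 = block_xy
    ys, xs = np.mgrid[y0:y0 + size, x0:x0 + size]
    xr = a * xs + b * ys + c
    yr = d * xs + e * ys + f
    # Bilinear interpolation at the (generally non-integer) reference positions;
    # frame borders are handled crudely by clipping.
    x_i, y_i = np.floor(xr).astype(int), np.floor(yr).astype(int)
    wx, wy = xr - x_i, yr - y_i
    x_i = np.clip(x_i, 0, ref.shape[1] - 2)
    y_i = np.clip(y_i, 0, ref.shape[0] - 2)
    return ((1 - wx) * (1 - wy) * ref[y_i, x_i]
            + wx * (1 - wy) * ref[y_i, x_i + 1]
            + (1 - wx) * wy * ref[y_i + 1, x_i]
            + wx * wy * ref[y_i + 1, x_i + 1])
```

With params = (1, 0, dx, 0, 1, dy) this reduces to pure translation, i.e. the block-matching special case that the more general transformations extend.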

308 citations


Journal ArticleDOI
TL;DR: A lossless algorithm is presented, biocompress-2, to compress the information contained in DNA and RNA sequences, based on the detection of regularities, such as the presence of palindromes, which leads to the highest compression of DNA.
Abstract: Universal data compression algorithms fail to compress genetic sequences. This is due to the specificity of this particular kind of “text.” We analyze in some detail the properties of the sequences which cause the failure of classical algorithms. We then present a lossless algorithm, biocompress-2, to compress the information contained in DNA and RNA sequences, based on the detection of regularities such as the presence of palindromes. The algorithm combines substitutional and statistical methods and, to the best of our knowledge, leads to the highest compression of DNA. The results, although not satisfactory, give insight into the necessary correlation between compression and comprehension of genetic sequences.
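One of the regularities mentioned, the biological palindrome (a factor equal to the reverse complement of an earlier factor), can be detected with a few lines of Python. This is a naive illustration of the idea, not biocompress-2 itself; the window length and the substring search are arbitrary choices.

```python
COMPLEMENT = str.maketrans("ACGT", "TGCA")

def reverse_complement(s: str) -> str:
    return s.translate(COMPLEMENT)[::-1]

def longest_palindromic_repeat(seq: str, pos: int, history_len: int = 1024):
    """Length of the longest factor starting at pos whose reverse complement
    occurs in the preceding window; such factors can be coded as references."""
    history = seq[max(0, pos - history_len):pos]
    best, length = 0, 1
    while pos + length <= len(seq):
        if reverse_complement(seq[pos:pos + length]) not in history:
            break
        best = length
        length += 1
    return best

print(longest_palindromic_repeat("ACGTTTTAACGT", pos=8))  # 4 ("ACGT" ~ revcomp of "ACGT")
```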

302 citations


01 Jan 1994
TL;DR: Three approaches to the measurement of medical image quality are described: signal-to-noise ratio (SNR), subjective rating, and diagnostic accuracy; these are compared and contrasted in a particular application, and recently developed methods for determining the diagnostic accuracy of lossy compressed medical images are considered.
Abstract: Compressing a digital image can facilitate its transmission, storage, and processing. As radiology departments become increasingly digital, the quantities of their imaging data are forcing consideration of compression in picture archiving and communication systems. Significant compression is achievable only by lossy algorithms, which do not permit the exact recovery of the original images.

Journal ArticleDOI
TL;DR: A heuristic algorithm based on Lagrangian optimization using an operational rate-distortion framework that, with computing complexity reduced by an order of magnitude, approaches the optimally achievable performance.
Abstract: The authors formalize the description of the buffer-constrained adaptive quantization problem. For a given set of admissible quantizers used to code a discrete nonstationary signal sequence in a buffer-constrained environment, they formulate the optimal solution. They also develop slightly suboptimal but much faster approximations. These solutions are valid for any globally minimum distortion criterion, which is additive over the individual elements of the sequence. As a first step, they define the problem as one of constrained, discrete optimization and establish its equivalence to some of the problems studied in the field of integer programming. Forward dynamic programming using the Viterbi algorithm is shown to provide a way of computing the optimal solution. Then, they provide a heuristic algorithm based on Lagrangian optimization using an operational rate-distortion framework that, with computing complexity reduced by an order of magnitude, approaches the optimally achievable performance. The algorithms can serve as a benchmark for assessing the performance of buffer control strategies and are useful for applications such as multimedia workstation displays, video encoding for CD-ROMs, and buffered JPEG coding environments, where processing delay is not a concern but decoding buffer size has to be minimized.
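The Lagrangian heuristic can be sketched as follows, in a deliberately simplified form: per-block rate-distortion tables rd[i][q] = (rate, distortion) are assumed to be given, the buffer dynamics are collapsed into a single rate budget, and the bisection on the multiplier is our own choice of search, not necessarily the authors'.

```python
def lagrangian_allocate(rd, rate_budget, lo=0.0, hi=1e6, iters=50):
    """Pick one quantizer per block by minimizing D + lambda * R, adjusting
    lambda by bisection until the total rate meets the budget."""
    def pick(lmbda):
        choices, total_rate, total_dist = [], 0.0, 0.0
        for block in rd:                           # each block decides independently
            q = min(range(len(block)),
                    key=lambda j: block[j][1] + lmbda * block[j][0])
            choices.append(q)
            total_rate += block[q][0]
            total_dist += block[q][1]
        return choices, total_rate, total_dist

    for _ in range(iters):                         # bisection on the multiplier
        mid = 0.5 * (lo + hi)
        _, rate, _ = pick(mid)
        if rate > rate_budget:
            lo = mid                               # too many bits: penalize rate more
        else:
            hi = mid
    return pick(hi)                                # hi satisfies the rate budget
```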

Journal ArticleDOI
01 Jun 1994
TL;DR: Current activity in speech compression is dominated by research and development of a family of techniques commonly described as code-excited linear prediction (CELP) coding, which offer a quality versus bit rate tradeoff that significantly exceeds most prior compression techniques.
Abstract: Speech and audio compression has advanced rapidly in recent years spurred on by cost-effective digital technology and diverse commercial applications. Recent activity in speech compression is dominated by research and development of a family of techniques commonly described as code-excited linear prediction (CELP) coding. These algorithms exploit models of speech production and auditory perception and offer a quality versus bit rate tradeoff that significantly exceeds most prior compression techniques for rates in the range of 4 to 16 kb/s. Techniques have also been emerging in recent years that offer enhanced quality in the neighborhood of 2.4 kb/s over traditional vocoder methods. Wideband audio compression is generally aimed at a quality that is nearly indistinguishable from consumer compact-disc audio. Subband and transform coding methods combined with sophisticated perceptual coding techniques dominate in this arena with nearly transparent quality achieved at bit rates in the neighborhood of 128 kb/s per channel.

Proceedings ArticleDOI
27 Jun 1994
TL;DR: In this article, a subband coding scheme based on estimation and exploitation of just-noticeable-distortion (JND) profile is presented to maintain high image quality with low bit-rates.
Abstract: To maintain high image quality with low bit-rates, an effective coding algorithm should remove not only statistical correlation but also perceptual redundancy from image signals. A subband coding scheme based on the estimation and exploitation of a just-noticeable-distortion (JND) profile is presented.

Patent
30 Mar 1994
TL;DR: Forward and inverse quasi-perfect reconstruction transforms are used to generate a wavelet decomposition and to reconstruct data values close to the original data values; these transforms utilize special filters at the boundaries of the data being transformed and/or inverse transformed.
Abstract: A compression and decompression method uses a wavelet decomposition, frequency based tree encoding, tree based motion encoding, frequency weighted quantization, Huffman encoding, and/or tree based activity estimation for bit rate control. Forward and inverse quasi-perfect reconstruction transforms are used to generate the wavelet decomposition and to reconstruct data values close to the original data values. The forward and inverse quasi-perfect reconstruction transforms utilize special filters at the boundaries of the data being transformed and/or inverse transformed. Structures and methods are disclosed for traversing wavelet decompositions. Methods are disclosed for increasing software execution speed in the decompression of video. Fixed or variable length tokens are included in a compressed data stream to indicate changes in encoding methods used to generate the compressed data stream.

Journal ArticleDOI
TL;DR: Experiments involving several MR and US images show that the entropy-coded DPCM method can provide compression in the range from 4 to 10 with a peak SNR of about 50 dB for 8-bit medical images, and a comparison with the JPEG standard reveals that it can provide about 7 to 8 dB higher SNR for the same compression performance.
Abstract: The near-lossless, i.e., lossy but high-fidelity, compression of medical images using the entropy-coded DPCM method is investigated. A source model with multiple contexts and arithmetic coding are used to enhance the compression performance of the method. In implementing the method, two different quantizers, each with a large number of quantization levels, are considered. Experiments involving several MR (magnetic resonance) and US (ultrasound) images show that the entropy-coded DPCM method can provide compression in the range from 4 to 10 with a peak SNR of about 50 dB for 8-bit medical images. The use of multiple contexts is found to improve the compression performance by about 25% to 30% for MR images and 30% to 35% for US images. A comparison with the JPEG standard reveals that the entropy-coded DPCM method can provide about 7 to 8 dB higher SNR for the same compression performance.
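A minimal sketch of the DPCM and quantization stage follows (the multiple contexts and the arithmetic coder are omitted; the predictor and step size are illustrative). A uniform quantizer with step 2*delta + 1 keeps every reconstructed pixel within ±delta of the original, which is what makes the scheme near-lossless.

```python
import numpy as np

def dpcm_encode(image, delta=2):
    """Entropy-coded-DPCM front end: predict, quantize the prediction error,
    and track the decoder-side reconstruction."""
    img = image.astype(int)
    recon = np.zeros_like(img)
    symbols = np.zeros_like(img)
    step = 2 * delta + 1
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            pred = recon[y, x - 1] if x > 0 else (recon[y - 1, x] if y > 0 else 128)
            e = img[y, x] - pred
            q = int(np.round(e / step))       # quantized error, sent to the entropy coder
            symbols[y, x] = q
            recon[y, x] = pred + q * step     # what the decoder will reconstruct
    assert np.max(np.abs(recon - img)) <= delta
    return symbols, recon
```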

Patent
21 Jan 1994
TL;DR: In this article, a dynamic list of coefficient indices (114-120) is generated containing only the indices for which a symbol must be encoded, rather than a static list of coefficients in which each coefficient must be individually checked to see whether a symbol should be encoded.
Abstract: An apparatus and method for signal, image, or video compression that achieves high compression efficiency in a computationally efficient manner and corresponding decoder apparatus and methods are disclosed. This technique uses zerotree coding (126) of wavelet coefficients (100) in a much more efficient manner than the previous techniques. The key is the dynamic generation of the list of coefficient indices (114-120) to be scanned, whereby the dynamically generated list (114-120) only contains coefficient indices for which a symbol must be encoded. This is a dramatic improvement over the prior art in which a static list of coefficient indices is used and each coefficient must be individually checked to see whether a) a symbol must be encoded, or b) it is completely predictable. Additionally, using dynamic list generation (114-120), the greater the compression of the signal, the less time it takes to perform the compression. Thus, using dynamic list generation (114-120), the computational burden is proportional to the size of the output compressed bit stream (134) instead of being proportional to the size of the input signal or image.

Journal ArticleDOI
TL;DR: A rate-distortion optimal way to threshold or drop the DCT coefficients of the JPEG and MPEG compression standards using a fast dynamic programming recursive structure.
Abstract: We show a rate-distortion optimal way to threshold or drop the DCT coefficients of the JPEG and MPEG compression standards. Our optimal algorithm uses a fast dynamic programming recursive structure. The primary advantage of our approach lies in its complete compatibility with standard JPEG and MPEG decoders.

Journal ArticleDOI
TL;DR: A simple way to get better compression performance (in the MSE sense) via quadtree decomposition, using a near-optimal choice of the threshold for quadtree decomposition and a bit allocation procedure based on equations derived from rate-distortion theory.
Abstract: Quadtree decomposition is a simple technique used to obtain an image representation at different resolution levels. This representation can be useful for a variety of image processing and image compression algorithms. This paper presents a simple way to get better compression performance (in the MSE sense) via quadtree decomposition, by using a near-optimal choice of the threshold for quadtree decomposition and a bit allocation procedure based on equations derived from rate-distortion theory. The rate-distortion performance of the improved algorithm is calculated for a Gaussian field, and it is examined via simulation over benchmark gray-level images. In both cases, significant improvement in compression performance is shown.
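For concreteness, the basic threshold-driven quadtree split can be sketched as below; the paper's actual contribution, choosing the threshold and the per-block bit allocation near-optimally from rate-distortion equations, is not reproduced, and the variance test and mean-value leaves are illustrative.

```python
import numpy as np

def quadtree(block, threshold, x=0, y=0, leaves=None):
    """Recursively split a block into quadrants while its variance exceeds the
    threshold; each leaf is recorded as (x, y, height, width, mean value)."""
    if leaves is None:
        leaves = []
    h, w = block.shape
    if h <= 2 or w <= 2 or block.var() <= threshold:
        leaves.append((x, y, h, w, float(block.mean())))
        return leaves
    h2, w2 = h // 2, w // 2
    quadtree(block[:h2, :w2], threshold, x,      y,      leaves)
    quadtree(block[:h2, w2:], threshold, x + w2, y,      leaves)
    quadtree(block[h2:, :w2], threshold, x,      y + h2, leaves)
    quadtree(block[h2:, w2:], threshold, x + w2, y + h2, leaves)
    return leaves

image = np.random.rand(64, 64)
print(len(quadtree(image, threshold=0.05)))   # smoother images yield fewer leaves
```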

Journal ArticleDOI
TL;DR: The paper describes fully pipelined parallel architectures for the 3-step hierarchical search block-matching algorithm, a fast motion estimation algorithm for video coding, which provide efficient solutions for real-time motion estimations required by video applications of various data rates, from low bit-rate video to HDTV systems.
Abstract: The paper describes fully pipelined parallel architectures for the 3-step hierarchical search block-matching algorithm, a fast motion estimation algorithm for video coding. The advantage of this algorithm is fully exploited through intelligent data arrangement and memory configuration. Techniques for reducing interconnections and external memory accesses were also developed. Because of their low costs, high speeds, and low memory bandwidth requirements, the proposed 3-PE, 9-PE, and 27-PE architectures provide efficient solutions for the real-time motion estimation required by video applications of various data rates, from low bit-rate video to HDTV systems.

Journal ArticleDOI
TL;DR: The proposed one-dimensional full search (1DFS) algorithm is more suitable for real-time hardware realization of a VLSI motion estimator for video applications and achieves a good compromise between computational complexity and performance.
Abstract: A new hardware-oriented algorithm called the one-dimensional full search (1DFS) is presented for block-matching motion estimation in video compression. The simulation for this algorithm follows the H.261 and MPEG international standards. In the MPEG simulation, structures with 1-, 2- and 3-frame interpolation are compared. The performance of 1DFS is superior to that of other fast search algorithms, and it has more regular data flow, more data reuse and less control overhead. It is an alternative to 2D full search block matching and achieves a good compromise between computational complexity and performance. With competitive performance and reasonable computational complexity, the proposed method is more suitable for real-time hardware realization of a VLSI motion estimator for video applications.

Proceedings ArticleDOI
15 Oct 1994
TL;DR: Methods to support variable rate browsing for MPEG-like video streams and minimize the additional resources required are devised, and it is shown that the proposed method is a viable approach to video browsing.
Abstract: In a video-on-demand (VOD) system, it is desirable to provide the user with interactive browsing functions such as “fast forward” and “fast backward.” However, these functions usually require a significant amount of additional resources from the VOD system in terms of storage space, retrieval throughput, network bandwidth, etc. Moreover, prevalent video compression techniques such as MPEG impose additional constraints on the process since they introduce inter-frame dependencies. In this paper, we devise methods to support variable rate browsing for MPEG-like video streams and minimize the additional resources required. Specifically, we consider retrieval for a disk-array-based video server and address the problem of distributing the retrieval requests across the disks. Our overall approach for interactive browsing comprises (1) a storage method, (2) placement and sampling methods, and (3) a playout method, where the placement and sampling methods are two alternatives for video segment selection. The segment sampling scheme supports browsing at any desired speed, while minimizing the variation in the number of video segments skipped between samplings. On the other hand, the segment placement scheme supports completely uniform segment sampling across the disk array for some specific speedup rates. Experiments on the visual effect of the proposed segment skipping approach have been conducted on MPEG data. It is shown that the proposed method is a viable approach to video browsing.

Patent
29 Nov 1994
TL;DR: A negotiation handshake protocol is described which enables the two sites to negotiate the compression rate based on factors such as the speed or data bandwidth of the communications connection between the two sites, the data demand between the sites, and the amount of silence detected in the speech signal.
Abstract: The present invention includes software and hardware components to enable digital data communication over standard telephone lines. The present invention converts analog voice signals to digital data, compresses that data, and places the compressed speech data into packets for transfer over the telephone lines to a remote site. A voice control digital signal processor (DSP) operates to use one of a plurality of speech compression algorithms which produce a scaleable amount of compression. The rate of compression is inversely proportional to the quality of the speech the compression algorithm is able to reproduce: the higher the compression, the lower the reproduction quality. The selection of the rate of compression is dependent on such factors as the speed or data bandwidth of the communications connection between the two sites, the data demand between the sites, and the amount of silence detected in the speech signal. The voice compression rate is dynamically changed as the aforementioned factors change. A negotiation handshake protocol is described which enables the two sites to negotiate the compression rate based on these factors.

Journal ArticleDOI
TL;DR: Experimental results, obtained by computer simulations, confirm that the presented coding algorithm is a promising scheme for the application at extremely low transmission bit rates.
Abstract: An object-based analysis-synthesis image sequence coder for transmission bit rates between 8 and 16 kbit/s is presented. Each moving object is described by three sets of parameters defining its shape, motion, and colour. Coding is based on the source model of flexible 2D objects which move translationally in the image plane, as it has been used in an implementation for a 64 kbit/s ISDN videophone by Hotter (1990). In order to cut down the bit rate from 64 kbit/s to 8 kbit/s, QCIF image resolution instead of CIF resolution is applied. Image analysis and coding of object parameters have been adapted to the reduced resolution and to the changed parameter statistics, respectively. In addition to Hotter's coder, predictive coding is used for encoding polygons and splines to improve the coding efficiency of shapes. Vector quantization is applied instead of DCT for coding the luminance and chrominance parameters of the object textures. Uncovered background regions are encoded by applying adaptive prediction from either the neighbouring static background or a special background memory. Experimental results, obtained by computer simulations, confirm that the presented coding algorithm is a promising scheme for the application at extremely low transmission bit rates. This is shown by comparing the picture qualities obtained with the presented algorithm and a block-based hybrid-DCT scheme corresponding to H.261/RM8 at 11 kbit/s.

Journal ArticleDOI
01 Jun 1994
TL;DR: This overview focuses on a comparison of the lossless compression capabilities of the international standard algorithms for still image compression known as MH, MR, MMR, JBIG, and JPEG.
Abstract: This overview focuses on a comparison of the lossless compression capabilities of the international standard algorithms for still image compression known as MH, MR, MMR, JBIG, and JPEG. Where the algorithms have parameters to select, these parameters have been carefully set to achieve maximal compression. Compression variations due to differences in data are illustrated, and the scaling of these compression results with spatial resolution or amplitude precision is explored. These algorithms are also summarized in terms of the compression technology they utilize, with further references given for precise technical details and the specific international standards involved.

Journal Article
TL;DR: Dolby AC-3 is a flexible audio data compression technology capable of encoding a range of audio channel formats into a low rate bit stream, based on a transform filter bank and psychoacoustics.
Abstract: Dolby AC-3 is a flexible audio data compression technology capable of encoding a range of audio channel formats into a low rate bit stream. Channel formats range from monophonic to 5.1 channels, and may include a number of associated audio services. Based on a transform filter bank and psychoacoustics, AC-3 includes the novel features of transmission of a variable frequency resolution spectral envelope and hybrid backward/forward adaptive bit allocation.

Patent
25 Jan 1994
TL;DR: In this paper, the authors proposed a method for performing image compression that eliminates redundant and invisible image components using a Discrete Cosine Transform (DCT) and each DCT coefficient yielded by the transform is quantized by an entry in a quantization matrix which determines the perceived image quality and the bit rate of the image being compressed.
Abstract: A method for performing image compression that eliminates redundant and invisible image components. The image compression uses a Discrete Cosine Transform (DCT), and each DCT coefficient yielded by the transform is quantized by an entry in a quantization matrix, which determines the perceived image quality and the bit rate of the image being compressed. The present invention adapts or customizes the quantization matrix to the image being compressed. The quantization matrix is designed using visual masking by luminance and contrast, together with an error pooling technique, resulting in a minimum perceptual error for any given bit rate, or a minimum bit rate for a given perceptual error.
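As background on how a quantization matrix acts in DCT-based (JPEG-style) coding, here is a two-function sketch; the patent's contribution, adapting the matrix through luminance and contrast masking and error pooling, is not shown, and the function names are ours.

```python
import numpy as np

def quantize(dct_block: np.ndarray, qmatrix: np.ndarray) -> np.ndarray:
    """Divide each DCT coefficient by its matrix entry and round: larger
    entries discard more of that spatial frequency."""
    return np.round(dct_block / qmatrix).astype(int)

def dequantize(qcoeffs: np.ndarray, qmatrix: np.ndarray) -> np.ndarray:
    """Approximate reconstruction used by the decoder."""
    return qcoeffs * qmatrix
```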

Patent
Chong U. Lee1, Pian Donald T1
01 Feb 1994
TL;DR: In this article, a video compression system and method for compressing video data for transmission or storage by reducing the temporal redundancy in the video data is described, where a frame of video data is divided into a variable number of blocks of pixel data of varying size, and each block is compared to a window of pixel data in a reference frame, typically the previous frame.
Abstract: A video compression system and method for compressing video data for transmission or storage by reducing the temporal redundancy in the video data is described. A frame of video data is divided into a variable number of blocks of pixel data of varying size, and each block of data is compared to a window of pixel data in a reference frame of pixel data, typically the previous frame. A best matched block of pixel data is selected from the window of pixel data in the reference frame, and a displacement vector is assigned to describe the selected block location in the reference frame relative to the current block of pixel data. The number and size of the blocks of pixel data are permitted to vary, in order to adapt to motion discontinuities in the sequential frames of pixel data. This is to allow prediction blocks of pixel data in the current frame to be smaller in areas of high activity, while maintaining high levels of compression, achieved by using larger prediction blocks, in areas of the frame with low levels of activity. A frame of predicted pixel data is assembled from variable size blocks of prediction data and subtracted from the current frame of pixel data. Only the residual difference, the displacement vectors and an indication of the block sizes used in the prediction are needed for transmission or storage.

Patent
07 Feb 1994
TL;DR: In this article, a standby dictionary is used to store a subset of encoded data entries previously stored in a current dictionary, and a selective overwrite dictionary swapping technique allows all data entries to be used at all times for encoding character strings.
Abstract: A class of lossless data compression algorithms use a memory-based dictionary of finite size to facilitate the compression and decompression of data. To reduce the loss in data compression caused by dictionary resets, a standby dictionary is used to store a subset of encoded data entries previously stored in a current dictionary. In a second aspect of the invention, data is compressed/decompressed according to the address location of data entries contained within a dictionary built in a content addressable memory (CAM). In a third aspect of the invention, the minimum memory/high compression capacity of the standby dictionary scheme is combined with the fast single-cycle per character encoding/decoding capacity of the CAM circuit. In a fourth aspect of the invention, a selective overwrite dictionary swapping technique is used to allow all data entries to be used at all times for encoding character strings.
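A hedged sketch of the standby-dictionary idea: while the current dictionary fills up, strings that are actually used for coding are also entered into a standby dictionary, and when the current dictionary becomes full it is swapped for the standby one instead of being reset to empty. The LZW-style framing, code assignment, and sizes below are illustrative only, not the patented design (and no matching decoder is shown).

```python
def compress_with_standby(data: bytes, max_entries: int = 4096):
    """LZW-flavoured encoder that swaps in a standby dictionary when full."""
    current = {bytes([i]): i for i in range(256)}
    standby = {bytes([i]): i for i in range(256)}
    codes, s = [], b""
    for byte in data:
        t = s + bytes([byte])
        if t in current:
            s = t
            continue
        codes.append(current[s])
        standby.setdefault(s, len(standby))        # useful strings survive a swap
        if len(current) < max_entries:
            current[t] = len(current)              # normal dictionary growth
        else:
            # Dictionary full: swap in the standby dictionary rather than
            # resetting to an empty one, then start building a fresh standby.
            current, standby = standby, {bytes([i]): i for i in range(256)}
        s = bytes([byte])
    if s:
        codes.append(current[s])
    return codes
```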

Proceedings Article
12 Sep 1994
TL;DR: Various design issues arise in the use of data compression in the dbms: the choice of algorithm, statistics collection, hardware versus software based compression, location of the compression function in the overall computer system architecture, unit of compression, update in place, and the application of the log to compressed data.
Abstract: Computers running database management applications often manage large amounts of data. Typically, the price of the I/O subsystem is a considerable portion of the cost of the computing hardware. Fierce price competition demands every possible savings. Lossless data compression methods, when appropriately integrated with the dbms, yield significant savings: roughly speaking, a slight increase in cpu cycles is more than offset by savings in the I/O subsystem. Various design issues arise in the use of data compression in the dbms: the choice of algorithm, statistics collection, hardware versus software based compression, location of the compression function in the overall computer system architecture, unit of compression, update in place, and the application of the log to compressed data. These are methodically examined and the trade-offs discussed in the context of the choices made for IBM's DB2 dbms product.