
Showing papers on "Data compression published in 1987"


Journal ArticleDOI
TL;DR: The state of the art in data compression is arithmetic coding, not the better-known Huffman method, which gives greater compression, is faster for adaptive models, and clearly separates the model from the channel encoding.
Abstract: The state of the art in data compression is arithmetic coding, not the better-known Huffman method. Arithmetic coding gives greater compression, is faster for adaptive models, and clearly separates the model from the channel encoding.

3,188 citations
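
The interval-narrowing idea that gives arithmetic coding its edge over Huffman coding fits in a few lines. Below is a minimal float-based sketch with an assumed fixed three-symbol model; a practical coder, including the one the paper describes, uses integer arithmetic with renormalization to avoid precision loss.

```python
# Minimal float-based arithmetic coder for a fixed model (illustrative
# only; a practical coder uses integer arithmetic and renormalization).
def arith_encode(message, model):
    """Narrow [low, high) by each symbol's probability slice; any number
    inside the final interval identifies the entire message."""
    # Cumulative ranges: each symbol owns a slice of [0, 1).
    ranges, lo = {}, 0.0
    for sym, p in model.items():
        ranges[sym] = (lo, lo + p)
        lo += p
    low, high = 0.0, 1.0
    for sym in message:
        span = high - low
        s_lo, s_hi = ranges[sym]
        low, high = low + span * s_lo, low + span * s_hi
    return (low + high) / 2  # skewed models leave wide intervals -> short codes

model = {"a": 0.6, "b": 0.3, "c": 0.1}  # hypothetical probabilities
print(arith_encode("aab", model))
```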


Journal ArticleDOI
TL;DR: A variety of data compression methods are surveyed, from the work of Shannon, Fano, and Huffman in the late 1940s to a technique developed in 1986, which has important application in the areas of file storage and distributed systems.
Abstract: This paper surveys a variety of data compression methods spanning almost 40 years of research, from the work of Shannon, Fano, and Huffman in the late 1940s to a technique developed in 1986. The aim of data compression is to reduce redundancy in stored or communicated data, thus increasing effective data density. Data compression has important application in the areas of file storage and distributed systems. Concepts from information theory as they relate to the goals and evaluation of data compression methods are discussed briefly. A framework for evaluation and comparison of methods is constructed and applied to the algorithms presented. Comparisons of both theoretical and empirical natures are reported, and possibilities for future research are suggested.

581 citations
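
Among the methods such a survey covers, Huffman's construction is the easiest to sketch: repeatedly merge the two least frequent subtrees. The heapq-based helper below is a generic illustration, not code from the paper.

```python
# Huffman code construction by repeatedly merging the two least
# frequent subtrees (a generic sketch, not taken from the survey).
import heapq
from collections import Counter

def huffman_codes(text):
    heap = [[freq, i, {sym: ""}] for i, (sym, freq)
            in enumerate(Counter(text).items())]
    heapq.heapify(heap)
    tiebreak = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, [f1 + f2, tiebreak, merged])
        tiebreak += 1
    return heap[0][2]

print(huffman_codes("abracadabra"))  # frequent symbols get shorter codewords
```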


Book
13 May 1987

553 citations


Journal ArticleDOI
TL;DR: This paper presents a recursive algorithm for DCT with a structure that allows the generation of the next higher order DCT from two identical lower order DCT's.
Abstract: The discrete cosine transform (DCT) is widely applied in various fields, including image data compression, because it operates like the Karhunen-Loeve transform for stationary random data. This paper presents a recursive algorithm for DCT with a structure that allows the generation of the next higher order DCT from two identical lower order DCT's. As a result, the method for implementing this recursive DCT requires fewer multipliers and adders than other DCT algorithms.

483 citations
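
For reference, the transform the recursive algorithm computes is the standard DCT-II. The direct O(N^2) sketch below only states the (unnormalized) definition; it does not reproduce the paper's recursive factorization.

```python
# The (unnormalized) DCT-II definition the recursive algorithm computes;
# this direct O(N^2) form states the transform only and does not
# reproduce the paper's recursive decomposition.
import math

def dct_ii(x):
    N = len(x)
    return [sum(x[n] * math.cos(math.pi * (2 * n + 1) * k / (2 * N))
                for n in range(N))
            for k in range(N)]

print([round(c, 3) for c in dct_ii([1.0, 2.0, 3.0, 4.0])])
```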


Journal ArticleDOI
TL;DR: Experimental results reported here indicate that the Markov modelling approach generally achieves much better data compression than that observed with competing methods on typical computer data.
Abstract: A method of dynamically constructing Markov chain models that describe the characteristics of binary messages is developed. Such models can be used to predict future message characters and can therefore be used as a basis for data compression. To this end, the Markov modelling technique is combined with Guazzo's arithmetic coding scheme to produce a powerful method of data compression. The method has the advantage of being adaptive: messages may be encoded or decoded with just a single pass through the data. Experimental results reported here indicate that the Markov modelling approach generally achieves much better data compression than that observed with competing methods on typical computer data.

255 citations
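
The adaptive, single-pass character of the method can be illustrated with an order-1 binary model that updates its counts as it codes, keeping encoder and decoder in lockstep. The sketch below supplies only the probability estimate that an arithmetic coder such as Guazzo's would consume; the coder itself is omitted.

```python
# Adaptive order-1 binary model: counts are updated in the same single
# pass that codes the data, so encoder and decoder stay in lockstep.
# The probability would drive an arithmetic coder (omitted here).
from collections import defaultdict

class Order1BitModel:
    def __init__(self):
        # counts[prev_bit] = [count of 0s, count of 1s], Laplace-smoothed
        self.counts = defaultdict(lambda: [1, 1])

    def prob_of_one(self, prev_bit):
        zeros, ones = self.counts[prev_bit]
        return ones / (zeros + ones)

    def update(self, prev_bit, bit):
        self.counts[prev_bit][bit] += 1

model, prev = Order1BitModel(), 0
for bit in [0, 0, 1, 0, 0, 1, 0, 0]:
    p1 = model.prob_of_one(prev)  # estimate fed to the coder for this bit
    model.update(prev, bit)       # adapt after coding; no second pass
    prev = bit
print(dict(model.counts))
```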


Proceedings ArticleDOI
01 Jan 1987
TL;DR: An efficient search technique is presented which minimizes the computations necessary for estimating the motion in video-sequences by the block matching method and the theoretical basis for conducting such a reduced search is discussed.
Abstract: We present an efficient search technique which minimizes the computations necessary for estimating the motion in video sequences by the block matching method. We also discuss the theoretical basis for conducting such a reduced search by our technique. We then present two algorithms which employ the proposed technique for estimating the motion typical of a video-conferencing environment. Next, the results of computer simulations on a real video sequence are included which demonstrate the effectiveness of the proposed technique. Finally, the results of a study of statistical properties of block motion-compensated frame difference signals are also summarized, to assist in the future choice of a coding strategy for such signals.

229 citations
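
As a baseline for what a reduced search prunes, here is exhaustive block matching with a sum-of-absolute-differences cost over a small window. The abstract does not specify the pruning rule, so only the full search is sketched; block size and search radius are illustrative.

```python
# Exhaustive block matching with a sum-of-absolute-differences (SAD)
# cost; the paper prunes this search, but its rule is not given in the
# abstract, so only the baseline is shown.
import numpy as np

def best_motion_vector(ref, cur, top, left, bsize=8, radius=4):
    block = cur[top:top + bsize, left:left + bsize].astype(np.int32)
    best, best_cost = (0, 0), np.inf
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + bsize > ref.shape[0] or x + bsize > ref.shape[1]:
                continue  # candidate block falls outside the frame
            cand = ref[y:y + bsize, x:x + bsize].astype(np.int32)
            cost = np.abs(block - cand).sum()
            if cost < best_cost:
                best, best_cost = (dy, dx), cost
    return best

rng = np.random.default_rng(0)
ref = rng.integers(0, 256, (32, 32), dtype=np.uint8)
cur = np.roll(ref, (1, 2), axis=(0, 1))    # synthetic shift: down 1, right 2
print(best_motion_vector(ref, cur, 8, 8))  # recovers (dy, dx) = (-1, -2)
```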


Journal ArticleDOI
TL;DR: The Karhunen-Loeve transform, which optimally extracts coherent information from multichannel input data in a least-squares sense, is used for two specific problems in seismic data processing.
Abstract: The Karhunen-Loeve transform, which optimally extracts coherent information from multichannel input data in a least-squares sense, is used for two specific problems in seismic data processing. The first is the enhancement of stacked seismic sections by a reconstruction procedure which increases the signal-to-noise ratio by removing from the data that information which is incoherent trace-to-trace. The technique is demonstrated on synthetic data examples and works well on real data. The Karhunen-Loeve transform is useful for data compression for the transmission and storage of stacked seismic data. The second problem is the suppression of multiples in CMP or CDP gathers. After moveout correction with the velocity associated with the multiples, the gather is reconstructed using the Karhunen-Loeve procedure, and the information associated with the multiples omitted. Examples of this technique for synthetic and real data are presented.

165 citations
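
The reconstruction step the abstract describes, keeping only the most coherent components, can be sketched as an eigendecomposition of the trace-to-trace covariance. The synthetic data and component count below are assumptions for illustration.

```python
# KLT enhancement sketch: eigenvectors of the trace-to-trace covariance
# form the basis; keeping only the strongest components drops the
# trace-incoherent energy. Synthetic data; component count is assumed.
import numpy as np

def klt_reconstruct(traces, keep):
    """traces: (n_traces, n_samples); keep: number of components."""
    mean = traces.mean(axis=0)
    centered = traces - mean
    cov = centered @ centered.T / traces.shape[1]
    w, v = np.linalg.eigh(cov)            # eigenvalues in ascending order
    basis = v[:, -keep:]                  # most coherent components
    return mean + basis @ (basis.T @ centered)

rng = np.random.default_rng(1)
signal = np.outer(np.ones(10), np.sin(np.linspace(0, 6, 200)))
noisy = signal + 0.3 * rng.standard_normal(signal.shape)
err = np.abs(klt_reconstruct(noisy, 1) - signal).mean()
print(err)  # well below the 0.3 noise level
```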


Patent
18 Sep 1987
TL;DR: In this article, the authors present methods and apparatus for processing signals to remove redundant information and make the signals more suitable for transfer through a limited-bandwidth medium using mean-square difference signals.
Abstract: The present invention relates to methods and apparatus for processing signals to remove redundant information thereby making the signals more suitable for transfer through a limited-bandwidth medium. The present invention specifically relates to methods and apparatus useful in video compression systems. Typically, the system determines differences between the current input signals and the previous input signals using mean-square difference signals. These mean-square signals are processed and compared with one or more thresholds for determining one of several modes of operation. After processing in some mode, the processed signals are in the form of digital numbers and these digital numbers are coded, using ordered redundancy coding, and transmitted to a receiver.

146 citations



Patent
24 Feb 1987
TL;DR: In this paper, a method and apparatus for compressing data, particularly useful in a modem, is described, where parallel encoding and decoding tables are provided at the encoder and the decoder, and are updated for each character processed.
Abstract: A method and apparatus for compressing data, particularly useful in a modem. The preferred method is implemented in a microprocessor within a modem, and dynamically adapts to changing data statistics. Parallel encoding and decoding tables are provided at the encoder and the decoder, and are updated for each character processed. Each table has a plurality of digital compression codes associated with characters of an alphabet. In response to an item of data presented for encoding, a compression code which corresponds to the character presented for encoding is selected using the encoding table. The selected compression code is provided as an output. Periodically, the association between the codes and the characters of the alphabet in the table is adjusted as a function of the frequency of occurrence of characters of the alphabet, over a plurality of characters. As the frequency of occurrence of characters presented for encoding changes, the more frequently occurring characters become associated with the shorter codes in the encoding table. By performing the same steps in the decoding table, the compression codes are used at the decoder to select a character of the alphabet, which is provided as a decoded output.

118 citations
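
A toy version of the patent's adaptive table: count character frequencies and periodically re-rank the alphabet so frequent characters sit at the short-code end. The code lengths (represented here just by table position) and the update period are illustrative assumptions, not the patent's values.

```python
# Toy adaptive table: frequent characters migrate toward position 0,
# which stands in for the shortest code. Period and code lengths are
# illustrative assumptions, not the patent's.
class AdaptiveTable:
    def __init__(self, alphabet, period=8):
        self.order = list(alphabet)       # position 0 = shortest code
        self.freq = {c: 0 for c in alphabet}
        self.period, self.seen = period, 0

    def code_for(self, ch):
        idx = self.order.index(ch)        # small index => short code
        self.freq[ch] += 1
        self.seen += 1
        if self.seen % self.period == 0:  # periodic re-rank of the table
            self.order.sort(key=lambda c: -self.freq[c])
        return idx

enc = AdaptiveTable("abcdefgh")
for ch in "aaabbbbbbba":
    enc.code_for(ch)
print(enc.order[:3])  # 'b' and 'a' now sit at the short-code end
```

The decoder runs the identical update on its own copy of the table, so both sides stay synchronized without transmitting the table itself.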


Patent
15 Oct 1987
TL;DR: In this article, a pair of data compression/decompression translation tables are constructed based on the data which is to be compressed or decompressed, with one table used to compress or decompress while the other is being rebuilt, thus reflecting the characteristics of the most recent input data.
Abstract: Data compression/decompression apparatus and methods are provided which exhibit significant data compression improvement over prior art methods and apparatus. This is achieved by providing an adaptive characteristic in which a pair of data compression/decompression translation tables are constructed based on the data which is to be compressed or decompressed. One table is used to compress or decompress while the other is being rebuilt, thus reflecting the characteristics of the most recent input data.
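
The one-table-in-use, one-table-rebuilding idea can be sketched as double buffering. The hypothetical build_table() below ranks bytes by recent frequency as a stand-in for real code assignment; the window size and table form are assumptions, not the patent's.

```python
# Double-buffered tables: compress with the active table while counts
# for the next one accumulate, then swap. build_table() is a
# hypothetical stand-in that ranks bytes by recent frequency.
from collections import Counter

class DoubleBufferedCompressor:
    def __init__(self, window=1024):
        self.active = self.build_table(Counter())
        self.pending = Counter()
        self.window = window

    @staticmethod
    def build_table(counts):
        ranked = [b for b, _ in counts.most_common()] or list(range(256))
        return {b: i for i, b in enumerate(ranked)}  # rank ~ code length

    def encode_byte(self, b):
        self.pending[b] += 1
        if sum(self.pending.values()) >= self.window:
            self.active = self.build_table(self.pending)  # swap tables
            self.pending = Counter()                      # start rebuilding
        return self.active.get(b, 255)

c = DoubleBufferedCompressor(window=4)
print([c.encode_byte(b) for b in b"aabbaabb"])
```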

Patent
18 Aug 1987
TL;DR: In this article, a first arithmetic coding encoder is introduced, characterized by a first set of encoding conventions, which encoder generates a code stream that points to an interval along a number line in response to decision event inputs.
Abstract: A data compression/de-compression system includes a first arithmetic coding encoder, characterized by a first set of encoding conventions, which encoder generates a code stream that points to an interval along a number line in response to decision event inputs. The code stream can be adjusted to point to the same interval as code streams generated by one or more other arithmetic coding encoders characterized by encoding conventions differing in some way from those in the first set. In a binary context, optimal hardware encoders increment or decrement the value of the code stream in response to each occurrence of a more probable decision event while optimal software so changes the code stream value for each occurrence of a less likely event. According to the invention, the code streams for optimal hardware encoders and optimal software encoders are made either identical or compatible to enable similar decoding for each. Identical or compatible code streams are obtained from encoders having different event sequence or symbol ordering along intervals on the number line. Moreover, various hardware and software decoders--with respective symbol ordering and other conventions--can be used in conjunction with encoders having respective conventions, wherein each decoder retrieves the same sequence of decisions for a code stream pointing to a given interval. In both encoding and decoding, the present invention overcomes finite precision problems of carry propagation and borrow propagation by handling data in bytes and bit stuffing at byte boundaries and by pre-borrowing as required.

Journal ArticleDOI
TL;DR: The main advantages of space compression are the reduction in the number of pins monitored by the tester and the minimization of the memory space required for reference signatures.
Abstract: The main advantages of space compression are the reduction in the number of pins monitored by the tester and the minimization of the memory space required for reference signatures. Compression, however, may reduce fault coverage. We investigate output data modification with the objective of improving the efficiency of syndrome testing and significantly reducing the error probability. It was found that this approach to output modification depends strongly upon the functions tested and may, in some cases, result in a complicated testing circuit. A design algorithm is proposed which combines space compression and output modification. This algorithm will often minimize the disadvantages of both approaches while maintaining error coverage information.

Proceedings ArticleDOI
Pierre Duhamel, H. H'Mida
06 Apr 1987
TL;DR: Two new implementations of DCTs are proposed which have several interesting features as far as VLSI implementation is concerned, and are mainly based on a new formulation of a length-2^n DCT as a cyclic convolution.
Abstract: Small-length discrete cosine transforms (DCTs) are used for image data compression. In that case, length-8 or length-16 DCTs need to be performed at video rate. We propose two new implementations of DCTs which have several interesting features as far as VLSI implementation is concerned. The first, using modulo arithmetic, needs only one multiplication per input point, so that a single multiplier is needed on-chip. The second, based on a decomposition of the DCT into polynomial products and evaluation of these polynomial products by distributed arithmetic, results in a very small chip with great regularity and testability. Furthermore, the same structure can be used for FFT computation by changing only the ROM part of the chip. Both new architectures are mainly based on a new formulation of a length-2^n DCT as a cyclic convolution, which is explained in the first section of the paper.

Patent
07 Dec 1987
TL;DR: In this paper, a method of data compression for recording data on a recording medium such as a magnetic tape, a method of data restoration for data which has been compressed for recording, and an apparatus for data compression and restoration are presented.
Abstract: A method of data compression for recording data on a recording medium such as a magnetic tape, a method of data restoration for data which has been compressed for recording, and an apparatus for data compression and restoration prescribe the data to be compressed based on type or value and encode the compression-object data, thereby reducing the number of bits necessary to indicate the compression-object data. Compression is implemented only for consecutive data whose number of repeating consecutive bytes is fewer than a certain number, thereby reducing the number of bits necessary to indicate the number of bytes of the consecutive data. A compression mark indicative of compression is appended, either at the front or rear of the compressed data, which consists of data made by encoding the consecutive compression-object data and data indicating its number of bytes; in this way an input data string can be compressed drastically, and compressed data, even including errors, can be restored.
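
A hedged sketch of the compression-mark scheme: runs become a marker byte, the repeated byte, and a count, while short runs pass through. The marker value and minimum run length are illustrative, and escaping of literal marker bytes in the input is omitted.

```python
# Run-length sketch with a compression mark: a run becomes
# (mark, byte, count); short runs pass through unchanged. MARK and
# MIN_RUN are illustrative; escaping literal MARK bytes is omitted.
MARK = 0xFF
MIN_RUN = 4

def rle_encode(data: bytes) -> bytes:
    out, i = bytearray(), 0
    while i < len(data):
        j = i
        while j < len(data) and data[j] == data[i] and j - i < 255:
            j += 1
        run = j - i
        if run >= MIN_RUN:
            out += bytes([MARK, data[i], run])  # mark + byte + count
        else:
            out += data[i:j]                    # too short to compress
        i = j
    return bytes(out)

print(rle_encode(b"abbbbbbbbc").hex())  # -> '61ff620863'
```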

Proceedings ArticleDOI
13 Oct 1987
TL;DR: Sub-band coding has been investigated for the novel application of video transmission over packet-switched networks in this paper, which divides the input signal into frequency bands in all three dimensions, and yields high compression with sustained good quality.
Abstract: Sub-band coding has been investigated for the novel application of video transmission over packet-switched networks. The scheme, which divides the input signal into frequency bands in all three dimensions, seems promising in that it lends itself to parallel implementation; it is robust enough to handle errors due to lost packets; and it yields high compression with sustained good quality. Moreover, it may be well integrated with the network to handle issues like flow control and error handling. The article presents the underlying design goals together with a software implementation and associated results.
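
The simplest instance of the sub-band split is a one-dimensional two-band (Haar) filter pair with perfect reconstruction; the paper applies such splits along x, y, and time. This sketch is generic, not the paper's filter bank.

```python
# One-dimensional two-band (Haar) split with perfect reconstruction,
# the simplest instance of a sub-band scheme; the paper applies such
# splits along x, y, and time.
def haar_analysis(x):
    low = [(a + b) / 2 for a, b in zip(x[0::2], x[1::2])]   # averages
    high = [(a - b) / 2 for a, b in zip(x[0::2], x[1::2])]  # details
    return low, high

def haar_synthesis(low, high):
    out = []
    for l, h in zip(low, high):
        out += [l + h, l - h]  # exactly inverts the analysis step
    return out

low, high = haar_analysis([10, 12, 9, 11, 50, 52, 48, 50])
print(low, high)               # energy concentrates in the low band
print(haar_synthesis(low, high))
```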

Book
01 Jan 1987
TL;DR: In this paper, the authors provide professionals and students with a path to faster data transmission times and reduced transmission costs with their in-depth examination of practical and easy-to-implement data-compression techniques.
Abstract: From the Publisher: Provides professionals and students with a path to faster data transmission times and reduced transmission costs with its in-depth examination of practical and easy-to-implement data-compression techniques. Retaining all data compression fundamentals from the first two editions, the Third Edition expands to include information on the structure and operation of several popular compression algorithms new to the market, including Microcom Networking Protocol (MNP) Class 5 data compression and MNP Class 7 Enhanced Data Compression. Numerous methods to enhance the efficiency of both character-oriented and statistical compression techniques are included, as is a new chapter on character compression that discusses three methods for obtaining the special compression-indicating character.

Journal ArticleDOI
TL;DR: A unified treatment of the various techniques to reduce the output data from a unit under test is given and the characteristics of time compression schemes with respect to errors detected are developed.
Abstract: A unified treatment of the various techniques to reduce the output data from a unit under test is given. The characteristics of time compression schemes with respect to errors detected are developed. The use of two or more of these methods together is considered. Methods to design efficient test compression structures for built-in-tests are proposed. The feasibility of the proposed approach is demonstrated by simulation results.

Patent
25 Aug 1987
TL;DR: In this article, a digital compression filter with poles at the zero locations, but shifted inside the unit circle to prevent error-ramp build-up was used to reduce the bit-rate needed for accurate transmission.
Abstract: Audio signals such as ECG, speech and music are digitally processed to reduce the bit-rate needed for accurate transmission, known as minimizing the entropy of the signal. The transmitter features a digital compression filter with zeros restricted to certain points on the unit circle, and Huffman encoding for transmission. The receiver features a digital decompression filter with poles at the zero locations, but shifted inside the unit circle to prevent error-ramp build-up.
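
The zero/pole pairing in the patent can be illustrated with the simplest case: a single zero at z = 1 (a first difference) for compression, paired with a decompression pole at z = r just inside the unit circle so that accumulated errors decay rather than ramp. The value of r is an illustrative assumption.

```python
# Simplest case of the zero/pole pairing: a first difference (zero at
# z = 1) compresses; the decompression pole sits at z = r just inside
# the unit circle so accumulated errors decay. r is illustrative.
def compress(x):
    return [x[0]] + [x[n] - x[n - 1] for n in range(1, len(x))]

def decompress(y, r=0.999):
    out = [y[0]]
    for n in range(1, len(y)):
        out.append(y[n] + r * out[-1])  # leaky integrator, pole at z = r
    return out

x = [float(v) for v in range(10)]
print(decompress(compress(x)))  # ~x, with a tiny bias from r < 1
```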

Patent
Richards Norman Dennis
06 Apr 1987
TL;DR: In this article, the pixel information representing an image for display is encoded using data compression into display data which can be stored on a compact disc, and complementary decoding to obtain the original data consists in restituting the second matrix by interpolation filtering the coded fourth matrix, and combining the restituted second matrix with decoded third matrix.
Abstract: Pixel information representing an image for display is encoded using data compression into display data which can be stored on a compact disc. The data compression consists in obtaining the pixel information as a first matrix of high-resolution pixel values; subtracting from this first matrix a second matrix composed of lower-resolution pixel values, produced by low-pass filtering the first matrix, to produce a third matrix of difference values; decimation-filtering the second matrix to produce a fourth matrix of less dense lower-resolution pixel values; and encoding the third and fourth matrices. The complementary decoding to obtain the original data consists in restituting the second matrix by interpolation-filtering the coded fourth matrix, and combining the restituted second matrix with the decoded third matrix. The coding of the third matrix takes into account rate-of-change pixel-value information obtained by delta coding the fourth matrix, and the decoding of the third matrix takes into account equivalent rate-of-change pixel information available at the interpolation filtering.
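
A one-dimensional sketch of the encode/decode pipeline: lowpass, subtract to get the difference band, decimate the low band; then interpolate and add back to decode. The 2-tap filters are crude stand-ins for the patent's filters, so reconstruction here is close but not exact.

```python
# One-dimensional pipeline sketch: lowpass, subtract (difference band),
# decimate the low band; decode by interpolating and adding back. The
# 2-tap filters are crude stand-ins, so reconstruction is approximate.
import numpy as np

def encode(x):
    low = np.convolve(x, [0.5, 0.5], mode="same")  # "second matrix"
    diff = x - low                                 # "third matrix"
    coarse = low[::2]                              # "fourth matrix"
    return diff, coarse

def decode(diff, coarse):
    low = np.repeat(coarse, 2)[: len(diff)]        # crude interpolation
    return low + diff

x = np.sin(np.linspace(0, 3, 16))
diff, coarse = encode(x)
print(np.abs(decode(diff, coarse) - x).max())      # small but nonzero
```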

Patent
02 Jun 1987
TL;DR: In this paper, a video transceiver includes a compressor which grabs a pair of blocks of image data from a video frame store and loads them into a dual port memory, both processors operating in parallel to speed up the entire process.
Abstract: A video transceiver includes a compressor which grabs a pair of blocks of image data from a video frame store and loads them into a dual port memory. A first processor, working through one of the dual ports, performs a portion of an image compression algorithm, while a second processor, working through a second one of the dual ports, performs the remainder of the compression algorithm on each one of the two pairs of blocks, both processors operating in parallel to speed up the entire process. A key word in each block is changed in accordance with each step, so that each of the two processors is prevented from grabbing the wrong one of the two blocks from the dual port memory. The resulting compressed data is queued in a temporary buffer, from which it is returned to another portion of the video frame store in serial rather than block fashion, in preparation for serial transmission.

Proceedings ArticleDOI
J. Lynch, J. Josenhans, R. Crochiere
06 Apr 1987
TL;DR: A new algorithmic technique is presented for efficiently implementing the end-point decisions necessary to separate and segment speech from noisy background environments and for silence compression of speech in which speech segments are encoded with a low bit-rate encoding scheme and silence information is characterized by a set of parameters.
Abstract: A new algorithmic technique is presented for efficiently implementing the end-point decisions necessary to separate and segment speech from noisy background environments. The algorithm utilizes a set of computationally efficient production rules that are used to generate speech and noise metrics continuously from the input speech waveform. These production rules are based on statistical assumptions about the characteristics of the speech and noise waveform and are generated via time-domain processing to achieve a zero delay decision. An end-pointer compares the speech and silence metrics using an adaptive thresholding scheme with a hysteresis characteristic to control the switching speed of the speech/silence decision. The paper further describes the application of this algorithm to silence compression of speech in which speech segments are encoded with a low bit-rate encoding scheme and silence information is characterized by a set of parameters. In the receiver the resulting packetized speech is reconstructed by decoding the speech segments and reconstructing the silence intervals through a noise substitution process in which the amplitude and duration of background noise is defined by the silence parameters. A noise generation technique is described which utilizes an 18th order polynomial to generate a spectrally flat pseudo-random sequence that is filtered to match the mean coloration of acoustical background noise. A technique is further described in which the speech/silence transitions are merged rather than switched to achieve maximum subjective performance of the compression technique. The above silence compression algorithm has been implemented in a single DSP-20 signal processing chip using sub-band coding for speech encoding. Using this system, experiments were conducted to evaluate the performance of the technique and to verify the robustness of the endpoint and silence compression over a wide range of background noise conditions.
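
The hysteresis in the end-pointer can be sketched with separate on and off thresholds on per-frame energy, which keeps the speech/silence decision from chattering. The threshold values and the energy feature here are illustrative assumptions, not the paper's production rules.

```python
# Endpointing with hysteresis: separate on/off thresholds on per-frame
# energy keep the speech/silence decision from chattering. Thresholds
# and the energy feature are illustrative assumptions.
def endpoint(frame_energies, on_thresh=2.0, off_thresh=1.0):
    speech, flags = False, []
    for e in frame_energies:
        if speech and e < off_thresh:        # must fall well below to stop
            speech = False
        elif not speech and e > on_thresh:   # must rise well above to start
            speech = True
        flags.append(speech)
    return flags

print(endpoint([0.2, 0.4, 3.1, 2.5, 1.5, 0.8, 0.3]))
# -> [False, False, True, True, True, False, False]
```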


Journal ArticleDOI
TL;DR: A lossless progressive transmission method for grey-scale images which concentrates early transmission efforts on areas of greater image information content is described, and is computationally simple with a complexity which grows linearly with the number of pixels.
Abstract: A lossless progressive transmission method for grey-scale images which concentrates early transmission efforts on areas of greater image information content is described. The receiver does not have a priori knowledge of which image areas are to receive preferential treatment, and the preferential level of resolution is the pixel. The method makes use of simultaneous geometric and information content decompositions. The method is computationally simple with a complexity which grows linearly with the number of pixels. Compression achieved approaches that obtained by nonprogressive lossless methods, and is approximately the same as for homogeneous progressive lossless methods. Extensions of the method for progressive transmission with limited distortion and greater compression are also discussed.

Patent
Glen George Langdon, Jr.
20 Mar 1987
TL;DR: In this article, a method and apparatus for compressing multilevel signals is described, which is based upon prediction errors and probability distributions, and is made efficient with a partition which is a function of the sign and the number of significant bits in the prediction errors.
Abstract: This invention relates to method and apparatus for compressing multilevel signals. The compression is based upon prediction errors and probability distributions. Compression is improved by conditioning the probability distributions using context of previous events. Storage required for storing the probability distributions is reduced by partitioning the prediction errors into predetermined ranges which become the coding events and contexts. Compression is made efficient with a partition which is a function of the sign and the number of significant bits in the prediction errors.
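
The partitioning step can be sketched directly: map each prediction error to a bucket keyed by its sign and its number of significant bits, so the coder keeps one probability distribution per bucket rather than per error value. The bucket labels below are an illustrative encoding.

```python
# Bucket prediction errors by sign and number of significant bits so the
# coder stores one probability distribution per bucket rather than per
# value. The bucket labels are an illustrative encoding.
def error_bucket(e: int) -> str:
    if e == 0:
        return "zero"
    sign = "+" if e > 0 else "-"
    return f"{sign}{abs(e).bit_length()}"  # bit_length = significant bits

for e in [0, 1, -1, 3, -6, 100]:
    print(e, error_bucket(e))  # e.g. 100 -> '+7' (seven significant bits)
```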

Patent
Toshio Koga, Junichi Ohki
30 Oct 1987
TL;DR: In this paper, a vector quantizer selects one of the vectors retrieved from the memory which is nearest to the value of the interframe differential image sequence and generates an index signal representative of the selected vector, which index signal is encoded and transmitted to a destination.
Abstract: "Data Compression Using Orthogonal Transform and Vector Quantization" ABSTRACT OF THE DISCLOSURE In an image communication system, an input image sequence is converted into a block-formatted sequence. data compression signal indicative of the amount of moving blocks in the block-formatted sequence is generated to individually control a plurality of vector quantizers each having a particular frequency band and a memory containing output vectors. The output vectors of each of the vector quantizers is representative of inverse orthogonal transform of a code table of optimum quantized vectors in the particular frequency band, the optimum quantized vectors being orthogonal transform of interframe differential training image sequences. The output vectors is retrievable from the memory as a function of an interframe differential image sequence, or prediction error. Each vector quantizer selects one of the vectors retrieved from the memory which is nearest to the value of the interframe differential image sequence and generates an index signal representative of the selected vector, which index signal is encoded and transmitted to a destination. The outputs of the vector quantizers are processed by inverse vector quantizers to generate a predictive image sequence. The prediction error is detected as a difference between the predictive image sequence and the block-formatted sequence.

Book
01 Jan 1987
TL;DR: Part 1, Logic design: binary numbers; Boolean algebra and minimization; combinational circuit design; sequential circuit design; logic circuit implementation.
Abstract: In Part III of this book, we discovered that many applications and digital systems take advantage of data compression, be it for storage savings, bandwidth reduction, or transmission time savings. Incorporating data compression in a digital system requires careful design to assure it will provide the intended benefits and operate harmoniously with other system functions. In this chapter, we introduce decisions that designers face when choosing to integrate data compression in their digital systems.

Journal ArticleDOI
TL;DR: Investigation of information-preserving compression of Landsat image data based on an entropy study indicates a 3:1 compression ratio can be achieved with a simple real-time compression scheme.
Abstract: This paper investigates information-preserving compression of Landsat image data based on an entropy study. Measurements of the statistical information in actual Landsat-4 images indicate a 3:1 compression ratio can be achieved with a simple real-time compression scheme.
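
The kind of first-order entropy measurement behind the 3:1 figure is straightforward to sketch. The data here is synthetic, so the implied ratio differs from the Landsat-4 measurement, which is a property of the actual image statistics.

```python
# First-order entropy estimate of the kind behind the 3:1 figure; the
# synthetic data gives a different ratio than actual Landsat-4 images.
import numpy as np

def entropy_bits(values):
    _, counts = np.unique(values, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(3)
pixels = rng.normal(128, 8, 10_000).astype(np.uint8)  # smooth-ish image
h = entropy_bits(pixels)
print(h, 8 / h)  # bits/pixel and implied lossless compression ratio
```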

Patent
11 Mar 1987
TL;DR: In this article, a transform approach to image coding where pixels are coded in the order prescribed by a predetermined "List" is proposed, where the code for each pixel is developed by computing a prediction for the pixel based on the known pixel values in the neighborhood of the pixel and subtracting this prediction from the true value of the pixels.
Abstract: A transform approach to image coding where pixels are coded in the order prescribed by a predetermined "List". The code for each pixel is developed by computing a prediction for the pixel based on the known pixel values in the neighborhood of the pixel and subtracting this prediction from the true value of the pixel. This results in a sequence of predominantly non-zero valued pixel prediction errors at the beginning of the encoding process (of each block), and predominantly zero valued pixel prediction errors towards the end of the encoding process. Enhancements to this approach include adaptive encoding, where pixel sequences having expected zero prediction errors are not encoded, and run length encoding of the signals to be transmitted or stored for augmenting the encoding process.
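
The per-pixel step is easy to sketch: predict each pixel from already-coded neighbors and keep only the error, which is small over smooth regions. The raster order and averaging predictor below stand in for the patent's predetermined "List" and its neighborhood rule.

```python
# Predict each pixel from already-coded neighbors and keep only the
# error; errors are small over smooth regions and cheap to entropy-code.
# Raster order and the averaging predictor stand in for the patent's
# predetermined "List" and neighborhood rule.
import numpy as np

def prediction_errors(img):
    h, w = img.shape
    err = np.zeros((h, w), dtype=np.int32)
    for y in range(h):
        for x in range(w):
            left = int(img[y, x - 1]) if x else 0
            up = int(img[y - 1, x]) if y else 0
            if x and y:
                pred = (left + up) // 2
            else:
                pred = left if x else up   # border pixels: one neighbor
            err[y, x] = int(img[y, x]) - pred
    return err

img = np.tile(np.arange(8, dtype=np.uint8) * 10, (8, 1))
print(prediction_errors(img))  # mostly small values
```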

Journal ArticleDOI
Robert J. Moorhead, S. Rajala, L. Cook
TL;DR: This paper presents and analyzes a pel-recursive, motion-compensated, image sequence compression algorithm and indicates that implementing the analytical model as opposed to the generally used heuristic technique does yield a decrease in the information rate and the computational requirements.
Abstract: This paper presents and analyzes a pel-recursive, motion-compensated, image sequence compression algorithm [1]. The analysis retains all the terms of the Taylor series expansion and yields a set of equations for which the convergence criteria and the convergence rate of the motion estimate are more easily seen. The existing motion prediction schemes are also reviewed, and a new motion prediction scheme is presented which is shown to be superior to the existing schemes. Simulations run on actual image sequences to verify the analytical results indicate that implementing the analytical model, as opposed to the generally used heuristic technique, does yield a decrease in the information rate and the computational requirements. Simulation results are also included which use the "projection-along-the-motion-trajectory" or PAMT prediction scheme. Third, zeroth-order entropy encoding is shown to reduce the bit rate on the order of 12 percent, and to reduce the mean square error in the reconstructed images on the order of 60 percent, when compared to first-order entropy encoding. Fourth, field-to-field motion prediction is compared to frame-to-frame motion prediction.