Showing papers on "Data compression published in 1984"


Journal Article
TL;DR: During the past few years several design algorithms have been developed for a variety of vector quantizers and the performance of these codes has been studied for speech waveforms, speech linear predictive parameter vectors, images, and several simulated random processes.
Abstract: A vector quantizer is a system for mapping a sequence of continuous or discrete vectors into a digital sequence suitable for communication over or storage in a digital channel. The goal of such a system is data compression: to reduce the bit rate so as to minimize communication channel capacity or digital storage memory requirements while maintaining the necessary fidelity of the data. The mapping for each vector may or may not have memory in the sense of depending on past actions of the coder, just as in well established scalar techniques such as PCM, which has no memory, and predictive quantization, which does. Even though information theory implies that one can always obtain better performance by coding vectors instead of scalars, scalar quantizers have remained by far the most common data compression system because of their simplicity and good performance when the communication rate is sufficiently large. In addition, relatively few design techniques have existed for vector quantizers. During the past few years several design algorithms have been developed for a variety of vector quantizers and the performance of these codes has been studied for speech waveforms, speech linear predictive parameter vectors, images, and several simulated random processes. It is the purpose of this article to survey some of these design techniques and their applications.
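
For a concrete sense of the codebook design loop such surveys cover, here is a minimal LBG/k-means-style vector quantizer sketch in Python/NumPy; the training data, codebook size, and stopping rule are illustrative choices, not taken from the article.

```python
import numpy as np

def train_codebook(training, codebook_size, iters=50, seed=0):
    """Generic LBG/k-means style codebook design (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    # Initialize codewords from randomly chosen training vectors.
    codebook = training[rng.choice(len(training), codebook_size, replace=False)].copy()
    for _ in range(iters):
        # Nearest-neighbour partition of the training set.
        d = ((training[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        # Centroid update; keep the old codeword if a cell is empty.
        for j in range(codebook_size):
            members = training[labels == j]
            if len(members):
                codebook[j] = members.mean(axis=0)
    return codebook

def encode(vectors, codebook):
    """Map each vector to the index of its nearest codeword."""
    d = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    return d.argmin(axis=1)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    data = rng.normal(size=(2000, 4))            # 4-dimensional source vectors
    cb = train_codebook(data, codebook_size=16)  # 16 codewords -> 4 bits/vector = 1 bit/sample
    idx = encode(data, cb)
    mse = ((data - cb[idx]) ** 2).mean()
    print(f"rate = {np.log2(len(cb)) / data.shape[1]:.2f} bits/sample, MSE = {mse:.3f}")
```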

2,743 citations


Journal ArticleDOI
TL;DR: A new compression algorithm is introduced that is based on principles not found in existing commercial methods in that it dynamically adapts to the redundancy characteristics of the data being compressed, and serves to illustrate system problems inherent in using any compression scheme.
Abstract: Data stored on disks and tapes or transferred over communications links in commercial computer systems generally contains significant redundancy. A mechanism or procedure which recodes the data to lessen the redundancy could possibly double or triple the effective data densities in stored or communicated data. Moreover, if compression is automatic, it can also aid in the reduction of software development costs. A transparent compression mechanism could permit the use of "sloppy" data structures, in that empty space or sparse encoding of data would not greatly expand the use of storage space or transfer time; however, that requires a good compression procedure. Several problems encountered when common compression methods are integrated into computer systems have prevented the widespread use of automatic data compression. For example: (1) poor runtime execution speeds interfere with the attainment of very high data rates; (2) most compression techniques are not flexible enough to process different types of redundancy; (3) blocks of compressed data that have unpredictable lengths present storage space management problems. Each compression strategy poses a different set of these problems and, consequently, the use of each strategy is restricted to applications where its inherent weaknesses present no critical problems. This article introduces a new compression algorithm that is based on principles not found in existing commercial methods. This algorithm avoids many of the problems associated with older methods in that it dynamically adapts to the redundancy characteristics of the data being compressed. An investigation into possible application of this algorithm yields insight into the compressibility of various types of data and serves to illustrate system problems inherent in using any compression scheme. For readers interested in simple but subtle procedures, some details of this algorithm and its implementations are also described. The focus throughout this article will be on transparent compression in which the computer programmer is not aware of the existence of compression except in system performance. This form of compression is "noiseless": the decompressed data is an exact replica of the input data, and the compression apparatus is given no special program information, such as data type or usage statistics. Transparency is perceived to be important because putting an extra burden on the application programmer would cause
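
The adaptive algorithm introduced in this article underlies what is now widely known as LZW. As a rough illustration only, the following Python sketch implements the standard textbook form of that dictionary-based scheme; code widths, dictionary reset policy, and the other practical details of real implementations are omitted.

```python
def lzw_compress(data: bytes):
    """Minimal LZW-style compressor: emits a list of dictionary codes."""
    dictionary = {bytes([i]): i for i in range(256)}   # start with all single bytes
    next_code = 256
    w = b""
    out = []
    for byte in data:
        wc = w + bytes([byte])
        if wc in dictionary:
            w = wc                       # extend the current match
        else:
            out.append(dictionary[w])    # emit code for the longest known string
            dictionary[wc] = next_code   # adapt: add the newly seen string
            next_code += 1
            w = bytes([byte])
    if w:
        out.append(dictionary[w])
    return out

def lzw_decompress(codes):
    """Inverse mapping; rebuilds the same dictionary on the fly."""
    dictionary = {i: bytes([i]) for i in range(256)}
    next_code = 256
    w = dictionary[codes[0]]
    out = [w]
    for code in codes[1:]:
        if code in dictionary:
            entry = dictionary[code]
        else:                            # the special "code just created" case
            entry = w + w[:1]
        out.append(entry)
        dictionary[next_code] = w + entry[:1]
        next_code += 1
        w = entry
    return b"".join(out)

if __name__ == "__main__":
    text = b"TOBEORNOTTOBEORTOBEORNOT"
    codes = lzw_compress(text)
    assert lzw_decompress(codes) == text
    print(len(text), "bytes ->", len(codes), "codes")
```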

2,426 citations


Journal ArticleDOI
TL;DR: This paper describes how the conflict can be resolved with partial string matching, and reports experimental results which show that mixed-case English text can be coded in as little as 2.2 bits/character with no prior knowledge of the source.
Abstract: The recently developed technique of arithmetic coding, in conjunction with a Markov model of the source, is a powerful method of data compression in situations where a linear treatment is inappropriate. Adaptive coding allows the model to be constructed dynamically by both encoder and decoder during the course of the transmission, and has been shown to incur a smaller coding overhead than explicit transmission of the model's statistics. But there is a basic conflict between the desire to use high-order Markov models and the need to have them formed quickly as the initial part of the message is sent. This paper describes how the conflict can be resolved with partial string matching, and reports experimental results which show that mixed-case English text can be coded in as little as 2.2 bits/character with no prior knowledge of the source.
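
As a simplified stand-in for the prediction-by-partial-matching idea, the sketch below measures the ideal code length of text under a single fixed-order adaptive context model; it omits the escape mechanism and the blending of multiple orders, and replaces the arithmetic coder with a -log2(p) sum, so the figure it prints is only indicative.

```python
import math
from collections import defaultdict

def adaptive_code_length(text: str, order: int = 2) -> float:
    """Ideal code length (bits) of `text` under an adaptive fixed-order context
    model with add-one smoothing over 128 ASCII symbols.  This is a simplified
    stand-in for PPM: one order, no escapes, and -log2(p) replaces the coder."""
    alphabet = 128
    counts = defaultdict(lambda: defaultdict(int))   # context -> symbol -> count
    totals = defaultdict(int)                        # context -> total count
    bits = 0.0
    for i, ch in enumerate(text):
        ctx = text[max(0, i - order):i]
        c = counts[ctx]
        p = (c[ch] + 1) / (totals[ctx] + alphabet)   # probability before updating
        bits += -math.log2(p)
        c[ch] += 1                                   # adapt after coding the symbol
        totals[ctx] += 1
    return bits

if __name__ == "__main__":
    sample = "the quick brown fox jumps over the lazy dog. " * 200
    bits = adaptive_code_length(sample)
    print(f"{bits / len(sample):.2f} bits/character")
```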

1,318 citations


Journal ArticleDOI
TL;DR: The cost of a number of sequential coding search algorithms is analyzed in a systematic manner and it is found that algorithms that utilize sorting are much more expensive to use than those that do not; metric-first searching regimes are less efficient than breadth-first or depth-first regimes.
Abstract: The cost of a number of sequential coding search algorithms is analyzed in a systematic manner. These algorithms search code trees, and find use in data compression, error correction, and maximum likelihood sequence estimation. The cost function is made up of the size of and number of accesses to storage. It is found that algorithms that utilize sorting are much more expensive to use than those that do not; metric-first searching regimes are less efficient than breadth-first or depth-first regimes. Cost functions are evaluated using experimental data obtained from data compression and error correction studies.

623 citations


Journal ArticleDOI
TL;DR: An interactive image communication system transmitting image information stored in a central database over low-bitrate channels is described, and a data compression technique is applied in combination with a progressive image transmission procedure to shorten the transmission time.
Abstract: An interactive image communication system is described that transmits image information stored in a central database over low-bitrate channels. To shorten the transmission time, a data compression technique is applied in combination with a progressive image transmission procedure. Furthermore, human interaction is implemented in order to select specific areas for picture buildup or to reject additional image sharpening completely. The mean transmission time for each picture is substantially reduced if the transmission parameters of the investigated system are adapted to the visual threshold performance of the human eye. The adaptation is realized by means of the given classification algorithm. For a set of portrait pictures, more than 95 percent of the transmission time can be saved by this adaptation in comparison with PCM transmission.

102 citations


Patent
06 Aug 1984
TL;DR: In this article, an input data string containing more repeated data items than a specified value is transformed into a data string whose format includes a first region where non-compressed data are placed and a second region including a datum representative of a compressed data string section together with information indicating the number of repeated data items.
Abstract: Method of data compression and restoration wherein an input data string containing more repeated data items than a specified value is transformed into a data string with a format comprising: a first region where non-compressed data are placed; a second region including a datum representative of a data string section that has undergone compression and information indicating the number of repeated data items, i.e., the length of that section; and control information, inserted at the front and back of the first region, indicating the number of data items in the first region. The transformed data string is recorded on the recording medium. For data reproduction, the first and second regions are identified from the control information read from the recording medium, and the compressed data string section is transformed back into the original data string of repeated data.
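
A rough sketch of the general idea, not the patented format: literal bytes are gathered into "non-compressed" regions with a count header, while sufficiently long repeats are coded as a (count, value) pair.

```python
RUN_THRESHOLD = 3   # repeats shorter than this are cheaper to keep as literals

def rle_encode(data: bytes) -> bytes:
    """Toy two-region run-length format (illustrative, not the patented layout):
    header 0x00..0x7F -> (header + 1) literal bytes follow;
    header 0x80..0xFF -> repeat the next byte ((header & 0x7F) + 1) times."""
    out = bytearray()
    literals = bytearray()

    def flush_literals():
        # Emit pending literal bytes in chunks of at most 128.
        for k in range(0, len(literals), 128):
            chunk = literals[k:k + 128]
            out.append(len(chunk) - 1)
            out.extend(chunk)
        literals.clear()

    i = 0
    while i < len(data):
        run = 1
        while i + run < len(data) and data[i + run] == data[i] and run < 128:
            run += 1
        if run >= RUN_THRESHOLD:
            flush_literals()
            out.append(0x80 | (run - 1))      # compressed (repeated-data) region
            out.append(data[i])
        else:
            literals.extend(data[i:i + run])  # non-compressed region
        i += run
    flush_literals()
    return bytes(out)

def rle_decode(blob: bytes) -> bytes:
    out = bytearray()
    i = 0
    while i < len(blob):
        header = blob[i]
        if header & 0x80:
            out.extend(bytes([blob[i + 1]]) * ((header & 0x7F) + 1))
            i += 2
        else:
            n = header + 1
            out.extend(blob[i + 1:i + 1 + n])
            i += 1 + n
    return bytes(out)

if __name__ == "__main__":
    sample = b"ABCAAAAAAAABCDEFFFFFFFFFFFG"
    packed = rle_encode(sample)
    assert rle_decode(packed) == sample
    print(len(sample), "->", len(packed), "bytes")
```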

71 citations


Patent
Gerald Goertzel1, Joan L. Mitchell1
15 Mar 1984
TL;DR: In this paper, a continuously adaptive probability decision model for data compression for transfer (storage or communication) is proposed that closely approaches the compression entropy limit.
Abstract: Data compression for transfer (storage or communication) by a continuously adaptive probability decision model closely approaches the compression entropy limit. Sender and receiver perform symmetrical compression/decompression of binary decision n according to probabilities calculated independently from the transfer sequence of the 1 ... n-1 preceding binary decisions. Sender and receiver dynamically adapt the model probabilities, as a cumulative function of previously presented decisions, for optimal compression/decompression. Adaptive models for sender and receiver are symmetrical, to preserve data identity; transfer optimization is the intent. The source model includes a state generator and an adaptive probability generator, which dynamically modify the coding of decisions according to state, probability and bit signals, and adapt for the next decision. The system calculates probability history for all decisions, including the current decision, but uses the probability history for decision n-1 (the penultimately current decision) for encoding decision n (the dynamically current decision). The system, separately at source and destination, reconfigures the compression/expansion algorithm up to decision n-1, codes each decision in the data stream optimally, according to its own character in relation to the calculated probability history, and dynamically optimizes the current decision according to the transfer optimum of the data stream previously transferred. The receiver operates symmetrically to the sender. Sender and receiver adapt identically, and adapt to the same decision sequence, so that their dynamically reconfigured compression/expansion algorithms remain symmetrical, even though the actual algorithms may change with each decision as a function of dynamic changes in probability history.
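
A minimal sketch of the adaptive-probability idea, not the patented coder: both ends keep identical decision counts per context, use the estimate formed from decisions 1 ... n-1 to code decision n, and only then update, so sender and receiver stay in lock step. The -log2(p) sum stands in for the actual entropy coder, and the context order is an arbitrary choice for the example.

```python
import math
from collections import defaultdict

class AdaptiveBinaryModel:
    """Symmetric adaptive probability estimator for binary decisions.
    Sender and receiver both run this; since each updates only after using
    the estimate, the two stay in lock step (illustrative sketch)."""

    def __init__(self):
        self.counts = defaultdict(lambda: [1, 1])   # context -> [zeros, ones]

    def p_one(self, context) -> float:
        zeros, ones = self.counts[context]
        return ones / (zeros + ones)

    def update(self, context, bit: int):
        self.counts[context][bit] += 1

def ideal_code_length(bits, order=3):
    """Bits needed by an ideal entropy coder driven by the adaptive model,
    using the previous `order` decisions as the context/state."""
    model = AdaptiveBinaryModel()
    total = 0.0
    for n, bit in enumerate(bits):
        ctx = tuple(bits[max(0, n - order):n])
        p1 = model.p_one(ctx)                  # estimate from decisions 1..n-1
        total += -math.log2(p1 if bit else 1.0 - p1)
        model.update(ctx, bit)                 # adapt after coding decision n
    return total

if __name__ == "__main__":
    # A biased, somewhat repetitive decision stream.
    stream = ([1, 1, 1, 0] * 500) + ([0] * 500)
    print(f"{ideal_code_length(stream) / len(stream):.3f} bits/decision")
```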

47 citations


Proceedings Article
01 Jan 1984

44 citations


Patent
31 Dec 1984
TL;DR: In this paper, a method and apparatus for data compression in a digital imaging process is disclosed, whereby a more efficient reduction of memory space is obtained when a dithered image is of the so-called "grey scale" type.
Abstract: A method and apparatus for data compression in a digital imaging process is disclosed. More specifically, a method is disclosed whereby a more efficient reduction of memory space is obtained when a dithered image is of the so-called "grey scale" type. The binary bits representing the pixels of the dithered image are differentiated to obtain groups of ones or zeros, so that these groups may be represented by a code, thereby saving memory space. One embodiment uses the Exclusive OR function in the differentiation process.
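
One plausible reading of the differentiation step, sketched below with illustrative parameters: XOR each row of the dithered bitmap with the row one dither-matrix period above it, so that areas of constant grey collapse into long runs of zeros that a run-length code can then exploit.

```python
import numpy as np

def xor_differentiate(bitmap: np.ndarray, period: int = 4) -> np.ndarray:
    """XOR every row with the row `period` rows above it (one plausible reading
    of the differentiation step; the first `period` rows are kept as-is)."""
    out = bitmap.copy()
    out[period:] ^= bitmap[:-period]
    return out

def run_lengths(bits: np.ndarray):
    """Lengths of consecutive equal bits in the flattened bit array."""
    flat = bits.ravel()
    change = np.flatnonzero(np.diff(flat)) + 1
    edges = np.concatenate(([0], change, [flat.size]))
    return np.diff(edges)

if __name__ == "__main__":
    # Synthetic "grey scale" image dithered with the standard 4x4 Bayer matrix.
    bayer = np.array([[ 0,  8,  2, 10],
                      [12,  4, 14,  6],
                      [ 3, 11,  1,  9],
                      [15,  7, 13,  5]]) / 16.0
    h, w = 64, 64
    grey = np.linspace(0.0, 1.0, w)[None, :].repeat(h, axis=0)   # horizontal ramp
    threshold = np.tile(bayer, (h // 4, w // 4))
    dithered = (grey > threshold).astype(np.uint8)

    plain_runs = run_lengths(dithered)
    diff_runs = run_lengths(xor_differentiate(dithered))
    print("runs before differentiation:", len(plain_runs))
    print("runs after  differentiation:", len(diff_runs))
```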

35 citations


Proceedings ArticleDOI
Roland Wilson1
01 Mar 1984
TL;DR: A new class of predictive coding algorithms, based on the quad-tree image representation, is described, and data compression schemes based on these algorithms have been found to produce acceptable images at rates as low as 0.25 bit/pixel.
Abstract: A new class of predictive coding algorithms, based on the quad-tree image representation, is described. Data compression schemes based on these algorithms have been found to produce acceptable images (peak-rms SNR > 30dB) at rates as low as 0.25 bit/pixel.
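
A sketch of the quad-tree image representation on which such coders are built (the predictive coding itself is omitted): blocks are split into four quadrants until they are nearly uniform, and each leaf is represented by its mean. Threshold, minimum block size, and the test image are illustrative.

```python
import numpy as np

def quadtree(block, threshold=10.0, min_size=2):
    """Recursively split a square block into four quadrants until its standard
    deviation falls below `threshold` or the block reaches `min_size`.
    Returns a nested structure of leaf means (illustrative parameters)."""
    if block.std() <= threshold or block.shape[0] <= min_size:
        return float(block.mean())                 # leaf: one value for the block
    h, w = block.shape[0] // 2, block.shape[1] // 2
    return [quadtree(block[:h, :w], threshold, min_size),
            quadtree(block[:h, w:], threshold, min_size),
            quadtree(block[h:, :w], threshold, min_size),
            quadtree(block[h:, w:], threshold, min_size)]

def reconstruct(tree, size):
    """Paint the quad-tree back into a square image of the given size."""
    img = np.empty((size, size))
    if not isinstance(tree, list):
        img[:] = tree
        return img
    h = size // 2
    img[:h, :h] = reconstruct(tree[0], h)
    img[:h, h:] = reconstruct(tree[1], h)
    img[h:, :h] = reconstruct(tree[2], h)
    img[h:, h:] = reconstruct(tree[3], h)
    return img

def count_leaves(tree):
    return 1 if not isinstance(tree, list) else sum(count_leaves(t) for t in tree)

if __name__ == "__main__":
    # Synthetic 128x128 test image: a bright disc on a dark background.
    n = 128
    y, x = np.mgrid[:n, :n]
    image = np.where((x - 64) ** 2 + (y - 40) ** 2 < 900, 200.0, 50.0)
    tree = quadtree(image)
    approx = reconstruct(tree, n)
    rmse = np.sqrt(((image - approx) ** 2).mean())
    print(f"{count_leaves(tree)} leaves for {n * n} pixels, RMSE = {rmse:.1f}")
```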

30 citations


Journal ArticleDOI
TL;DR: Test results verify that at a bit error probability of 10^-6 or less, this concatenated coding system does provide a coding gain of 2.5 dB or more over the Viterbi-decoded convolutional-only coding system.
Abstract: The development of sophisticated adaptive source coding algorithms together with inherent error sensitivity problems fostered the need for efficient space communication at very low bit error probabilities (≤ 10^-6). This led to the specification and implementation of a concatenated coding system using an interleaved Reed-Solomon code as the outer code and a Viterbi-decoded convolutional code as the inner code. This paper presents the experimental results of this channel coding system under an emulated S-band uplink and X-band downlink two-way space communication channel, where both uplink and downlink have strong carrier power. Test results verify that at a bit error probability of 10^-6 or less, this concatenated coding system does provide a coding gain of 2.5 dB or more over the Viterbi-decoded convolutional-only coding system. These tests also show that a desirable interleaving depth for the Reed-Solomon outer code is 8 or more. The impact of this "virtually" error-free space communication link on the transmission of images is discussed and examples of simulation results are given.
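
The interleaving depth referred to in the tests belongs to a block interleaver of the outer Reed-Solomon code. The sketch below, with arbitrary depth and codeword length, shows why depth matters: a burst of errors out of the inner Viterbi decoder is spread over several Reed-Solomon codewords.

```python
def interleave(codewords):
    """Block interleaver: write one codeword per row, read out column by column.
    `codewords` is a list of equal-length symbol lists; the interleaving depth
    is the number of rows."""
    depth, length = len(codewords), len(codewords[0])
    return [codewords[r][c] for c in range(length) for r in range(depth)]

def deinterleave(stream, depth):
    """Inverse of `interleave`: rebuild the original rows."""
    length = len(stream) // depth
    rows = [[None] * length for _ in range(depth)]
    for i, sym in enumerate(stream):
        rows[i % depth][i // depth] = sym
    return rows

if __name__ == "__main__":
    depth, n = 8, 16                     # illustration: depth 8, 16-symbol "codewords"
    cws = [[f"c{r}s{c}" for c in range(n)] for r in range(depth)]
    stream = interleave(cws)
    # A burst of 8 consecutive channel errors hits 8 *different* codewords:
    burst = stream[40:48]
    print(sorted({sym[:2] for sym in burst}))
    assert deinterleave(stream, depth) == cws
```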

Patent
07 Mar 1984
TL;DR: In this paper, a circuit tester and test method are described that compress the amount of data stored in local test data RAMs for the implementation of a circuit test, thereby reducing the amount of data that must be downloaded to the local test data RAMs and improving test throughput.
Abstract: A circuit tester and test method are described that compress the amount of data stored in local test data RAMs (13) for the implementation of a circuit test, thereby reducing the amount of data that must be downloaded to the local test data RAMs (13) and improving test throughput. Derivative data vectors are utilized in addition to raw data vectors as part of the data compression technique. Further compression results from storing only unique data vectors in the local test data RAMs (13) and utilizing a sequencer (16) to control the order in which the unique data vectors are used. The sequencer includes test program logic (15, 17) and logic capable of implementing indirect counters (19) on test pins.

Journal ArticleDOI
TL;DR: A study of a low-rate monochrome video compression system that is a conditional-replenishment coder that uses two-dimensional Walsh-transform coding within each video frame, augmented with a motion-prediction algorithm that measures spatial displacement parameters from frame to frame, and codes the data using these parameters.
Abstract: A study of a low-rate monochrome video compression system is presented in this paper. This system is a conditional-replenishment coder that uses two-dimensional Walsh-transform coding within each video frame. The conditional-replenishment algorithm works by transmitting only the portions of an image that are changing in time. This system is augmented with a motion-prediction algorithm that measures spatial displacement parameters from frame to frame, and codes the data using these parameters. A comparison is made between the conditional-replenishment system with, and without, the motion-prediction algorithm. Subsampling in time is used to maintain the data rate at a fixed value. Average bit rates of 1 bit/picture element (pel) to 1/16 bit/pel are considered. The resultant performance of the compression simulations is presented in terms of the average frame rates produced.
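
A sketch of the two main ingredients, with illustrative block size and threshold: a fast Walsh-Hadamard transform applied to square blocks, and a conditional-replenishment test that selects only the blocks whose content has changed from the previous frame.

```python
import numpy as np

def fwht(a):
    """In-place fast Walsh-Hadamard transform of a 1-D array (length a power of 2)."""
    h, n = 1, len(a)
    while h < n:
        for i in range(0, n, h * 2):
            for j in range(i, i + h):
                x, y = a[j], a[j + h]
                a[j], a[j + h] = x + y, x - y
        h *= 2
    return a

def wht2(block):
    """Separable 2-D Walsh-Hadamard transform of a square block."""
    out = block.astype(float)
    for r in range(out.shape[0]):
        fwht(out[r])
    for c in range(out.shape[1]):
        fwht(out[:, c])
    return out

def changed_blocks(prev, curr, block=8, threshold=5.0):
    """Conditional replenishment: return coordinates of blocks whose mean absolute
    change exceeds the threshold -- only these would be transformed and transmitted."""
    coords = []
    for y in range(0, curr.shape[0], block):
        for x in range(0, curr.shape[1], block):
            if np.abs(curr[y:y+block, x:x+block] - prev[y:y+block, x:x+block]).mean() > threshold:
                coords.append((y, x))
    return coords

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    prev = rng.normal(128, 10, size=(64, 64))
    curr = prev.copy()
    curr[24:40, 24:40] += 40            # a moving object changes one region
    moved = changed_blocks(prev, curr)
    print(len(moved), "of", (64 // 8) ** 2, "blocks replenished")
    coeffs = wht2(curr[24:32, 24:32])   # transform one changed block
    print("DC coefficient:", coeffs[0, 0])
```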

Patent
01 May 1984
TL;DR: In this article, a line position is detected from the image signal and normalized, and its relative normalized line position from a line to be coded and an immediately preceding coded line is encoded.
Abstract: In an image data compression system for encoding a binary or multilevel image signal into a compressed code, a line position is detected from the image signal and normalized, its relative normalized line position is calculated from a line to be coded and the immediately preceding coded line, and then the relative normalized line position is encoded. Run positions (where pel values change) on the same line in the image signal are detected and normalized, and the relative positions of adjacent normalized run positions are calculated and encoded.

Journal ArticleDOI
TL;DR: This correspondence introduces an adaptive realization of the maximum likelihood (ML) processor for time delay estimation (TDE) and a modified ML processor, which requires fewer computations but still performs better than the other when implemented in an adaptive way.
Abstract: This correspondence introduces an adaptive realization of the maximum likelihood (ML) processor for time delay estimation (TDE). Also presented is a modified ML processor, which requires fewer computations but still performs better than the other when implemented in an adaptive way. Widrow's least mean square (LMS) adaptive filter algorithm is used to implement the two methods. Simulation results comparing these processors with other existing adaptive TDE algorithms are also presented.

Patent
30 Oct 1984
TL;DR: In this paper, the Hadamard transform of video data is used for video compression, and the data can then be reproduced, unpacked and decoded to regenerate the original video signals, thereby achieving a very high degree of compression with little or no degradation of image quality.
Abstract: Video compression is achieved by taking the Hadamard transform of video data, comparing Hadamard coefficients from consecutive lines, encoding the changed coefficient values via an entropy coding technique, removing superfluous bits and storing the data on a magnetic tape. The stored data can then be reproduced, unpacked and decoded to regenerate the original video data signals, thereby achieving a very high degree of compression with little or no degradation of image quality.

DOI
01 May 1984
TL;DR: The problems which had to be overcome in order to construct a display directly exploiting the quad tree and related forms are addressed and the potential of such a display for rapidly handling very large amounts of pictorial data and for application to picture-archive retrieval is discussed.
Abstract: A colour raster display system based on picture encoding is described. Encoding pictures as a means of data compression has a long history and is a technique of interest to anyone working with complex pictures. Its primary value is the lower bandwidths required of transmission channels, but its more frugal use of store is not to be overlooked. A quad tree as a means of encoding area-coherent pictures is used because it retains spatial information in a form which is still amenable to processing. The combination of this with compression makes this a particularly useful form of coding for raster-style colour pictures, and this paper addresses the problems which had to be overcome in order to construct a display directly exploiting the quad tree and related forms. The potential of such a display for rapidly handling very large amounts of pictorial data and for application to picture-archive retrieval is also discussed. The system has been constructed, is fully operational and incorporates roam and zoom operations. Sample pictures are included.


Patent
30 Nov 1984
TL;DR: In this article, a progressive scan processor including memories for time compressing a video input signal and doubling the line rate to reduce visible line structure when the double line-rate signal is displayed.
Abstract: A television receiver/monitor includes a progressive scan processor including memories for time compressing a video input signal and doubling the line rate to reduce visible line structure when the double line-rate signal is displayed. The memories are controlled to provide a video compression factor (2.5:1) greater than the display line rate increase (2:1) to provide a display retrace time (10.8 micro-seconds) substantially equal to the blanking interval (11.0 micro-seconds) of the video input signal thereby decreasing display power losses and horizontal drive requirements.

Journal ArticleDOI
TL;DR: It is shown through exhaustive analysis that the direct data compression technique utilizing adaptive least-squares curve fitting yields a relatively fast and efficient representation of ECG signals at about 1.6 bits/sample, while maintaining visual fidelity and a normalized mean-squared error less than 1%.
Abstract: Many different techniques have recently been proposed for efficient storage of ECG data with data compression as one of the main objectives. Although high compression ratios have been claimed for some of these techniques, the techniques did not always include the word length considerations with regard to the parameters representing the compressed signal. The authors feel that any technique can be meaningfully evaluated only if the resulting compression is expressed in bits/sample rather than the compression ratio that is often used in this field. This paper provides a critical evaluation of two classes of techniques, the direct data compression technique and the transformation technique. It is shown through exhaustive analysis that the direct data compression technique utilizing adaptive least-squares curve fitting yields a relatively fast and efficient representation of ECG signals at about 1.6 bits/sample, while maintaining visual fidelity and a normalized mean-squared error less than 1%.
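
A generic sketch of the direct-data-compression family discussed here (not the specific adaptive least-squares algorithm evaluated in the paper): straight-line segments are grown along the ECG samples until a tolerance is exceeded, and only the segment endpoints are kept.

```python
import numpy as np

def piecewise_linear_compress(signal, tol=0.02):
    """Greedy first-order (straight-line) fit: grow each segment until the
    maximum deviation from the line through its endpoints exceeds `tol`,
    then start a new segment.  Returns the indices of the kept samples."""
    keep = [0]
    start = 0
    for i in range(2, len(signal)):
        xs = np.arange(start, i + 1)
        line = np.interp(xs, [start, i], [signal[start], signal[i]])
        if np.max(np.abs(signal[start:i + 1] - line)) > tol:
            keep.append(i - 1)      # close the segment at the last good point
            start = i - 1
    keep.append(len(signal) - 1)
    return np.array(keep)

def reconstruct(keep, values, n):
    """Linear interpolation between the kept samples."""
    return np.interp(np.arange(n), keep, values)

if __name__ == "__main__":
    t = np.linspace(0, 2, 1000)
    # Crude synthetic "ECG-like" trace: slow baseline plus two sharp beats.
    ecg = 0.05 * np.sin(2 * np.pi * 1.2 * t)
    for beat in (0.4, 1.4):
        ecg += np.exp(-((t - beat) / 0.015) ** 2)
    keep = piecewise_linear_compress(ecg, tol=0.02)
    approx = reconstruct(keep, ecg[keep], len(ecg))
    prd = 100 * np.sqrt(((ecg - approx) ** 2).sum() / (ecg ** 2).sum())
    print(f"kept {len(keep)} of {len(ecg)} samples, PRD = {prd:.2f}%")
```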

Proceedings ArticleDOI
01 Mar 1984
TL;DR: This paper examines the performance of a coding system operating at either 6,500 bits per second or 13,000 bits per second which incorporates a full-search vector quantizer to encode the outputs coming from a complete subband coder filter bank.
Abstract: Vector Quantization (VQ) and subband coding (SBC) are two of the most efficient data compression systems in the field of medium-to-low rate speech waveform coding. In this paper, we examine the performance of a coding system operating at either 6,500 bits per second or 13,000 bits per second which incorporates a full-search vector quantizer to encode the outputs coming from a complete subband coder filter bank.
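
As a toy stand-in for the full system, the sketch below splits a signal into two bands with the 2-tap Haar QMF pair and quantizes each band at a different rate; a scalar quantizer replaces the paper's vector quantizer, and the signal, rates, and filters are illustrative.

```python
import numpy as np

def haar_analysis(x):
    """Split a signal into low- and high-band halves with the 2-tap Haar pair
    (a toy stand-in for a full subband coder filter bank)."""
    x = x[: len(x) // 2 * 2]
    pairs = x.reshape(-1, 2)
    low = (pairs[:, 0] + pairs[:, 1]) / np.sqrt(2)
    high = (pairs[:, 0] - pairs[:, 1]) / np.sqrt(2)
    return low, high

def haar_synthesis(low, high):
    """Perfect-reconstruction inverse of `haar_analysis`."""
    pairs = np.empty((len(low), 2))
    pairs[:, 0] = (low + high) / np.sqrt(2)
    pairs[:, 1] = (low - high) / np.sqrt(2)
    return pairs.ravel()

def uniform_quantize(band, bits):
    """Scalar stand-in for the paper's vector quantizer: `bits` per sample."""
    levels = 2 ** bits
    lo, hi = band.min(), band.max()
    step = (hi - lo) / levels or 1.0
    idx = np.clip(((band - lo) / step).astype(int), 0, levels - 1)
    return lo + (idx + 0.5) * step

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    t = np.arange(8000) / 8000
    speechlike = np.sin(2 * np.pi * 180 * t) + 0.2 * rng.normal(size=t.size)
    low, high = haar_analysis(speechlike)
    # Spend more bits on the energetic low band than on the high band.
    rec = haar_synthesis(uniform_quantize(low, 4), uniform_quantize(high, 2))
    snr = 10 * np.log10((speechlike ** 2).mean() / ((speechlike - rec) ** 2).mean())
    print(f"average rate = {(4 + 2) / 2:.1f} bits/sample, SNR = {snr:.1f} dB")
```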

Proceedings ArticleDOI
15 Jun 1984
TL;DR: The theory and application of these techniques leads to an optimal bit-efficient quantization format with a number of interesting properties, which can have beneficial effects on hardware costs, arithmetic processing, and data compression.
Abstract: Physical signals acquired, quantized and processed in imaging systems are inherently noisy. Considering the source signal and noise statistics when designing a quantizing and processing system leads to a number of general and practical results. The theory and application of these techniques leads to an optimal bit-efficient quantization format with a number of interesting properties. Applying these can have beneficial effects on hardware costs, arithmetic processing, and data compression.

Journal ArticleDOI
TL;DR: The outline of an approach for image data compression using a 2-D lattice predictor is presented, and preliminary results indicate that acceptable quality images are obtained at information rates of 1.16 to 1.38 bits/pixel and signal/noise ratios of 20.6 to 22.5 dB.
Abstract: The outline of an approach for image data compression using a 2-D lattice predictor is presented. Preliminary results indicate that acceptable quality images (quantised to 15 levels) at information rates, bit rates and signal/noise ratios ranging, respectively, from 1.16 to 1.38 bpp, 1.19 to 1.40 bpp and 20.6 to 22.5 dB have been obtained for lattice stages 1 to 5.


Proceedings ArticleDOI
04 Dec 1984
TL;DR: This paper describes a video compression technique which utilizes the alternating projection theorem for convex sets; the coder can be made more robust by adding additional convex sets or by using it in conjunction with other coding schemes such as motion compensation.
Abstract: This paper describes a video compression technique which utilizes the alternating projection theorem for convex sets. The image to be transmitted is determined to be in certain convex sets and parameters defining these sets are sent. The receiver can then use the method of successive projections to locate an image which is in the intersection of the sets. If the intersection is small then the image determined should be close to the desired image. The coder can be made more robust by easily adding additional convex sets or using it in conjunction with other coding schemes such as motion compensation.
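
A minimal numeric illustration of the successive-projection machinery: two convex sets (a box constraint on pixel values and an affine set fixing some linear measurements) are projected onto alternately. The sets and dimensions are illustrative, not those used in the paper.

```python
import numpy as np

def project_box(x, lo=0.0, hi=255.0):
    """Projection onto the convex set {x : lo <= x_i <= hi}."""
    return np.clip(x, lo, hi)

def project_affine(x, A, b):
    """Projection onto the convex (affine) set {x : A x = b},
    using the pseudo-inverse:  x - A^+ (A x - b)."""
    return x - np.linalg.pinv(A) @ (A @ x - b)

def pocs(A, b, iters=200):
    """Alternate projections; converges to a point in the intersection
    when the intersection is non-empty."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = project_box(project_affine(x, A, b))
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    truth = rng.uniform(0, 255, size=16)          # the "image" the receiver must recover
    A = rng.normal(size=(10, 16))                 # 10 transmitted linear measurements
    b = A @ truth
    x = pocs(A, b)
    print("residual ||Ax - b|| =", round(float(np.linalg.norm(A @ x - b)), 4))
    print("distance to truth   =", round(float(np.linalg.norm(x - truth)), 2))
```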

Patent
Dimitris Anastassiou1
08 Jun 1984
TL;DR: In this article, a first bit plane (bit array) is generated of the graphic image, and then, by comparison/logic circuitry, a second bit plane is generated which identifies only pixels of the graphics image representing edges e.g. characters or lines in the image.
Abstract: A first bit plane (bit array) of the graphic image is generated, and then, by comparison/logic circuitry, a second bit plane is generated which identifies only the pixels of the graphic image representing edges, e.g. of characters or lines in the image. The data stream representing the image from the first bit plane is supplemented by edge bits from the second bit plane.

Journal ArticleDOI
TL;DR: In this article, the linear-quadratic-Gaussian problem of quantized control subject to quantized measurements is considered with the measurement and control quantizers formulated as data compression systems, and the optimum closed-loop control is shown to be separable in control law, control quantizer and estimation.
Abstract: The linear-quadratic-Gaussian problem of quantized control subject to quantized measurements is considered with the measurement and control quantizers formulated as data compression systems. The optimum closed-loop control is shown to be separable in control law, control quantizer and estimation. However, estimation and measurement quantization are non-separable. The optimum measurement quantizer is of a differential structure but requires optimum state estimation at both encoder and decoder. A fixed total number of quantization levels should be equally allocated to the measurement and control quantizers, and the quantizer design depends on both the known control law and the solution to the matrix Riccati equation. A suboptimum control structure is developed, and the effect of quantization error on estimation error is shown to be stable.

Patent
Adachi Eiichi1
04 Jun 1984
TL;DR: In this article, a data compression device for a facsimile apparatus or like picture processing apparatus is operable with a desirable coding efficiency to remarkably reduce the transmission time when the absolute value of a difference between the numbers of transition points of picture signals associated with two adjacent scanning lines is larger than a predetermined run difference coefficient.
Abstract: A data compression device for a facsimile apparatus or like picture processing apparatus is operable with a desirable coding efficiency to remarkably reduce the transmission time. When the absolute value of a difference between the numbers of transition points of picture signals associated with two adjacent scanning lines is larger than a predetermined run difference coefficient, the device codes the picture signals by a one-dimensional mode coding operation which occurs with respect to the main scan direction only. When the former is equal to or smaller than the latter, the device codes the picture signals in a two-dimensional mode coding operation which occurs with respect to both the main scan direction and the subscan direction.
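
A sketch of the mode-decision rule as described, with an illustrative run difference coefficient and synthetic scan lines: count the transition points on the current and previous lines and pick one-dimensional or two-dimensional coding according to the absolute difference.

```python
def transition_points(line):
    """Positions where the pel value changes along a scan line."""
    return [i for i in range(1, len(line)) if line[i] != line[i - 1]]

def choose_mode(prev_line, curr_line, run_difference_coefficient=3):
    """Mode decision sketch: 1-D coding when the two lines have very different
    numbers of transitions, 2-D (vertically predictive) coding otherwise.
    The coefficient value is illustrative, not taken from the patent."""
    diff = abs(len(transition_points(curr_line)) - len(transition_points(prev_line)))
    return "1D" if diff > run_difference_coefficient else "2D"

if __name__ == "__main__":
    prev_line = [0] * 20 + [1] * 10 + [0] * 20    # 2 transitions
    similar   = [0] * 22 + [1] * 9  + [0] * 19    # 2 transitions -> correlates well
    busy      = [0, 1] * 10 + [0] * 30            # many transitions -> little correlation
    print(choose_mode(prev_line, similar))        # expected: 2D
    print(choose_mode(prev_line, busy))           # expected: 1D
```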


Proceedings ArticleDOI
04 Dec 1984
TL;DR: The Pipelined Resampling Processor was presented to this conference in 1983 in concept as a practical solution to real-time high-resolution geometric image rectification needs, and performance results obtained from the development unit completed in late 1983 are presented.
Abstract: The Pipelined Resampling Processor (PRP) was presented to this conference in 1983 in concept as a practical solution to real-time high resolution geometric image rectification needs. In the present report we present performance results obtained from the PRP development unit completed in late 1983. The small size, weight, and power requirements of the PRP and its high throughput make it very well suited for space and airborne applications where goemetric correction of image data must be done autonomously in real-time. This high resolution geometric correction is a necessary adjunct to applications using frame differencing or frame averaging for motion compensation, moving target indication, noise suppression or data compression as well as applications requiring precise correction of focal plane sampling distortions. By equipping the PRP with manual controls, image geometry can be manipulated at video rates from a console to achieve a much higher image analysis throughput than is possible for more general purpose processing facilities. In mapping, merging, classifi-cation and registration applications, interactive video rate processing will be important in bringing these and other image analysis techniques out of the laboratory and into an operational environment.