
Showing papers on "Data compression published in 1973"


Journal ArticleDOI
P. Ready1, P. Wintz1
TL;DR: The Karhunen-Loeve transformation is applied to multispectral data for information extraction, SNR improvement, and data compression; applied in the spectral dimension, it provides a set of uncorrelated principal component images that are very useful in automatic classification and human interpretation.
Abstract: The Karhunen-Loeve transformation is applied to multispectral data for information extraction, SNR improvement, and data compression. When applied in the spectral dimension, the transform provides a set of uncorrelated principal component images very useful in automatic classification and human interpretation. Significant improvements in SNR and estimates of the noise variance are also shown to be possible in the spectral dimension. Data compression results using the transform on one-, two-, and three-dimensional blocks over three general types of terrain are presented.

184 citations
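As a rough illustration of the spectral Karhunen-Loeve transform described above, the sketch below computes principal component images from a multispectral cube with NumPy. The band count, image size, and the function name `spectral_klt` are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch, assuming a (rows, cols, bands) multispectral cube; the
# Karhunen-Loeve transform is applied in the spectral dimension only.
import numpy as np

def spectral_klt(cube):
    """Return principal component images and the variance of each component."""
    rows, cols, bands = cube.shape
    pixels = cube.reshape(-1, bands).astype(float)   # one spectral vector per pixel
    centered = pixels - pixels.mean(axis=0)
    cov = np.cov(centered, rowvar=False)             # bands x bands spectral covariance
    eigvals, eigvecs = np.linalg.eigh(cov)           # eigh returns ascending order
    order = np.argsort(eigvals)[::-1]                # strongest component first
    pcs = centered @ eigvecs[:, order]               # uncorrelated principal components
    return pcs.reshape(rows, cols, bands), eigvals[order]

# Example with random data standing in for real multispectral imagery.
cube = np.random.rand(64, 64, 4)
pc_images, variances = spectral_klt(cube)
print(variances)   # variance carried by each principal component image
```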


Journal ArticleDOI
TL;DR: Waveform segmentation is treated as a problem of piecewise linear uniform (minmax) approximation; the resulting segmentation can be used for pattern recognition, data compression, and nonlinear filtering, not only for waveforms but also for pictures and maps.
Abstract: Waveform segmentation is treated as a problem of piecewise linear uniform (minmax) approximation. Various algorithms are reviewed and a new one is proposed based on discrete optimization. Examples of its applications are shown on terrain profiles, scanning electron microscope data, and electrocardiograms. The processing is sufficiently fast to allow its use on-line. The results of the segmentation can be used for pattern recognition, data compression, and nonlinear filtering not only for waveforms but also for pictures and maps. In the latter case some additional preprocessing is required and it is described in [19].

144 citations
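To make the minmax idea concrete, here is a small greedy sketch that closes a segment as soon as the straight line through its end points misses an interior sample by more than a tolerance. It is a simplified stand-in, not the paper's discrete-optimization algorithm; the tolerance and test waveform are assumptions.

```python
# Greedy piecewise-linear segmentation under a uniform (max-error) tolerance.
import numpy as np

def segment_waveform(y, tol):
    """Return (start, end) index pairs of segments whose end-point line
    stays within `tol` of every intermediate sample."""
    segments, start, n = [], 0, len(y)
    for end in range(1, n):
        xs = np.arange(start, end + 1)
        # straight line through the two current end points
        line = y[start] + (y[end] - y[start]) * (xs - start) / (end - start)
        if np.max(np.abs(y[start:end + 1] - line)) > tol:
            segments.append((start, end - 1))   # previous point closes the segment
            start = end - 1                     # segments share their end points
    segments.append((start, n - 1))
    return segments

y = np.sin(np.linspace(0, 3 * np.pi, 200)) + 0.02 * np.random.randn(200)
print(segment_waveform(y, tol=0.1))
```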


Patent
12 Jan 1973
TL;DR: In this patent, a delta modulation loop in the encoder responds to the delta modulation commands so as to maintain a digital word of the same value that the same commands create in the decoder section; that value is compared with incoming words, and the operation shifts into a coarse mode when the delta-modulated word differs from the incoming word by more than three levels.
Abstract: A digital video data compression system compresses digital words as small as four bits into two-bit words by means of a combination of coarse data compression and delta modulation data compression. In response to the two most significant bits being the same for five contiguous words, the apparatus switches into a delta modulation mode in which the digital word representing the video brightness (sixteen shades of gray in the four-bit embodiment herein) is incremented or decremented by one level or left unchanged. In the encoder section, a delta modulation loop responds to the delta modulation commands to maintain a digital word of the same value as is created by the same commands in the decoder section. That value is compared with incoming words so as to shift the operation into a coarse mode when the delta-modulated word varies from the incoming word by more than three levels; in the coarse mode, the two-bit compressed word represents the two most significant bits of the video data word. When in the delta modulation mode, one of the four combinations representable by the two bits is a command to shift into the coarse mode; when in the coarse mode, a return to the delta mode is effected by sending the signal for pure black (ZERO, ZERO) followed by the signal for pure white (ONE, ONE), since this is a least likely signal combination to occur. When this combination occurs naturally, it is automatically changed to a lesser shift by sending ZERO, ZERO followed by ONE, ZERO, in order to avoid ambiguity. Clocking, switching, comparing and other functions are disclosed.

36 citations
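A hedged sketch of the two-mode encoder logic the abstract above describes follows: 2-bit coarse codes carry the two most significant bits of each 4-bit sample, and a delta-modulation loop tracks the signal in single-level steps. The specific code assignments (UP, DOWN, HOLD, ESCAPE) and the mode-switch bookkeeping are assumptions for illustration; the decoder mirror and the black/white escape sequence are omitted.

```python
# Encoder-only sketch; all 2-bit code assignments are assumed.
UP, DOWN, HOLD, ESCAPE = 0b01, 0b10, 0b11, 0b00

def encode(samples):
    """samples: 4-bit brightness values (0..15) -> list of 2-bit codes."""
    codes, mode, tracked = [], "coarse", 0
    prev_msb, same_msb_run = None, 0
    for s in samples:
        if mode == "delta":
            if abs(s - tracked) <= 3:             # track holds: stay in delta mode
                step = UP if s > tracked else DOWN if s < tracked else HOLD
                tracked += (step == UP) - (step == DOWN)
                codes.append(step)
                continue
            codes.append(ESCAPE)                  # track lost: signal return to coarse mode
            mode, prev_msb, same_msb_run = "coarse", None, 0
        msb = s >> 2                              # coarse mode: send the two MSBs
        codes.append(msb)
        same_msb_run = same_msb_run + 1 if msb == prev_msb else 1
        prev_msb = msb
        if same_msb_run >= 5:                     # low activity: enter delta-modulation mode
            mode, tracked = "delta", msb << 2
    return codes

print(encode([5, 5, 5, 5, 5, 6, 7, 7, 6, 15, 14, 2]))
```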


Patent
Duane E Mcintosh1
25 Oct 1973
TL;DR: In this patent, data compression apparatus is described that operates in either a bit pair coding mode or a word coding mode; word coding is used when the data contain enough consecutive identical words to permit greater compression on a word basis than on a bit pair basis.
Abstract: Data compression apparatus is disclosed which is operable in either a bit pair coding mode or a word coding mode depending on the degree of redundancy of the data to be encoded. Consecutive words of data to be encoded are compared, and if the words are not identical within a predefined tolerance the data is encoded on a bit pair basis. The bit pair encoding produces transitions in the coded output signals at the beginning of the first of two bit cells which contain a discrete pair of 1's and at the middle of the first of two bit cells which contain a discrete pair of 0's. If the data to be encoded contains a sufficient number of consecutive identical words to permit a greater data compression on a word basis rather than a bit pair basis, the first word in the consecutive identical words is encoded on a bit pair basis to identify the bit pattern, and a unique transitional pattern incapable of being generated during bit pair coding is generated to identify the number of succeeding words which are identical with the first word.

35 citations
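The sketch below illustrates only the mode decision the abstract describes: runs of identical data words are replaced by one word plus a repeat count, and everything else falls back to per-word encoding. The patent's transition-level bit-pair waveform coding is not reproduced; MIN_RUN and the token format are assumptions.

```python
MIN_RUN = 4  # assumed threshold at which word coding beats bit-pair coding

def compress_words(words):
    out, i = [], 0
    while i < len(words):
        run = 1
        while i + run < len(words) and words[i + run] == words[i]:
            run += 1
        if run >= MIN_RUN:
            out.append(("word", words[i], run))   # one pattern + count of identical words
        else:
            for w in words[i:i + run]:
                out.append(("bitpair", w))        # stand-in for per-word bit-pair coding
        i += run
    return out

print(compress_words([7, 7, 7, 7, 7, 3, 5, 5, 9]))
```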




Journal ArticleDOI
TL;DR: This paper develops data reduction procedures in terms of modern estimation theory, specifically a Kalman filter model, and illustrates the utility of this model as an analysis tool by means of an example based on a uniform tube which provides a qualitative assessment of the potential of the technique for application to real speech signals.
Abstract: Efficient coding of continuous speech signals for digital representation has attracted much interest in recent years. The underlying aim of efficient coding methods is to reduce the channel capacity required to represent a signal to meet a specific reconstruction fidelity criterion. To achieve this objective, modern speech data compression techniques rely on two very similar procedures. One procedure uses predictive deconvolution, which subtracts from the current signal value that portion which can be predicted from its past and thus removes redundancy in the speech by removing sequential correlation; the signal then requires fewer bits for equivalent quantization error. The second procedure involves identification of a complete mathematical model of the speech producing mechanism, together with determination of the characteristics of the source that drives it. Data reduction is again achieved since the rate of change of the parameters of the speech model is much smaller than the rate of change of the speech waveform. This paper develops these data reduction procedures in terms of modern estimation theory, specifically a Kalman filter model, and illustrates the utility of this model as an analysis tool by means of an example based on a uniform tube, which provides a qualitative assessment of the potential of the technique for application to real speech signals.

13 citations
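As a toy illustration of the first procedure (predictive deconvolution), the sketch below fits a short linear predictor by least squares and returns the low-variance residual that would need fewer bits to quantize. It is not the paper's Kalman-filter formulation; the predictor order and test signal are assumptions.

```python
# Minimal linear-prediction sketch: remove sequential correlation, keep the residual.
import numpy as np

def prediction_residual(x, order=2):
    """Fit an `order`-tap linear predictor and return (taps, prediction residual)."""
    n = len(x)
    X = np.column_stack([x[order - k - 1:n - k - 1] for k in range(order)])
    target = x[order:]
    coeffs, *_ = np.linalg.lstsq(X, target, rcond=None)  # least-squares predictor taps
    residual = target - X @ coeffs
    return coeffs, residual

x = np.sin(np.linspace(0, 20, 500)) + 0.05 * np.random.randn(500)
coeffs, residual = prediction_residual(x, order=2)
print(np.var(x), np.var(residual))   # the residual variance is much smaller
```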



Patent
Mutsuo Ogawa1
04 Oct 1973
TL;DR: In this patent, analog signals are quantized and converted into digital N-pulse binary codes representing 2^N discrete levels, and the digital coded signals are subjected to data compression in which bits in the same digit position are compressed and then transmitted.
Abstract: Analog signals are quantized and converted into a digital N-pulse binary code to represent 2^N discrete levels. The digital coded signals are subjected to data compression in such a way that bits in the same digit position are compressed and then transmitted.

8 citations
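A minimal sketch of the regrouping the abstract describes follows: the same digit position of every N-bit sample is collected into a bit plane, and each plane is then run-length coded. The run-length step is an assumed stand-in for whatever per-plane compression the patent applies.

```python
def bit_planes(samples, n_bits):
    """Group the same digit position of every sample into one bit plane (MSB first)."""
    return [[(s >> b) & 1 for s in samples] for b in range(n_bits - 1, -1, -1)]

def run_length(bits):
    """Run-length code one bit plane as (value, run) pairs."""
    runs, count = [], 1
    for prev, cur in zip(bits, bits[1:]):
        if cur == prev:
            count += 1
        else:
            runs.append((prev, count))
            count = 1
    runs.append((bits[-1], count))
    return runs

samples = [12, 13, 13, 12, 12, 12, 3, 3]        # 4-bit quantized values
for plane in bit_planes(samples, n_bits=4):
    print(plane, "->", run_length(plane))       # high-order planes compress well
```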



03 Jan 1973
TL;DR: The results are presented of a study of source coding, modulation/channel coding, and systems techniques for application to teleconferencing over high data rate digital communication satellite links.
Abstract: The results are presented of a study of source coding, modulation/channel coding, and systems techniques for application to teleconferencing over high data rate digital communication satellite links. Simultaneous transmission of video, voice, data, and/or graphics is possible in various teleconferencing modes and one-way, two-way, and broadcast modes are considered. A satellite channel model including filters, limiter, a TWT, detectors, and an optimized equalizer is treated in detail. A complete analysis is presented for one set of system assumptions which exclude nonlinear gain and phase distortion in the TWT. Modulation, demodulation, and channel coding are considered, based on an additive white Gaussian noise channel model which is an idealization of an equalized channel. Source coding with emphasis on video data compression is reviewed, and the experimental facility utilized to test promising techniques is fully described.

5 citations


Journal ArticleDOI
TL;DR: The paper presents and compares quantitatively various compression techniques based on the Shannon-Fano, ‘run-length’ and Hadamard transformation methods of source encoding, and the compression ratios obtained when applying the techniques to actual satellite data are given.
Abstract: This paper reports on an investigation into the application of data compression techniques as a means of reducing the ‘on-ground’ data storage requirements that are associated with many space research programmes. The paper presents and compares quantitatively various compression techniques based on the Shannon-Fano, ‘run-length’ and Hadamard transformation methods of source encoding. The compression ratios obtained when applying the techniques to actual satellite data are given and some new basic theory relating to ‘run-length’ encoding is presented.
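For reference, here is a compact sketch of Shannon-Fano source encoding, one of the three techniques the paper compares; the symbol probabilities are illustrative, and real satellite telemetry statistics would take their place.

```python
def shannon_fano(symbols):
    """symbols: list of (symbol, probability) -> dict symbol -> binary code string."""
    symbols = sorted(symbols, key=lambda sp: sp[1], reverse=True)
    codes = {s: "" for s, _ in symbols}

    def split(group):
        if len(group) < 2:
            return
        total = sum(p for _, p in group)
        acc, best_cut, best_diff = 0.0, 1, float("inf")
        for i, (_, p) in enumerate(group[:-1], start=1):
            acc += p
            diff = abs(total - 2 * acc)          # imbalance if we cut after position i
            if diff < best_diff:
                best_cut, best_diff = i, diff
        for s, _ in group[:best_cut]:
            codes[s] += "0"
        for s, _ in group[best_cut:]:
            codes[s] += "1"
        split(group[:best_cut])
        split(group[best_cut:])

    split(symbols)
    return codes

print(shannon_fano([("a", 0.4), ("b", 0.3), ("c", 0.2), ("d", 0.1)]))
```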

01 Jan 1973
TL;DR: A method of instantaneous expansion of quantization levels, in which two codewords in the codebook are reserved to perform a fold-over in quantization, is implemented for error-free coding of data with incomplete knowledge of the probability density function.
Abstract: A general formulation of the data compression system is presented. A method of instantaneous expansion of quantization levels, by reserving two codewords in the codebook to perform a folding over in quantization, is implemented for error-free coding of data with incomplete knowledge of the probability density function. Results for simple DPCM with folding and an adaptive transform coding technique followed by a DPCM technique are compared using ERTS-1 data.
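One way to read the folding idea is sketched below: a simple DPCM coder reserves two codewords that signal a fold past the top or bottom of the difference range, so out-of-range differences can still be coded exactly without prior knowledge of the probability density function. The level set, codeword assignments, and folding rule are all assumptions for illustration.

```python
FOLD_UP, FOLD_DOWN = 6, 7          # two reserved codewords in an 8-word codebook
LEVELS = [-2, -1, 0, 1, 2, 3]      # the remaining codewords carry these differences

def dpcm_fold_encode(samples):
    """Lossless DPCM on integer samples, folding when the difference is out of range."""
    codes, pred = [], 0
    for s in samples:
        diff = s - pred
        while diff > max(LEVELS):              # fold the difference down into range
            codes.append(FOLD_UP)
            diff -= max(LEVELS)
            pred += max(LEVELS)
        while diff < min(LEVELS):              # fold the difference up into range
            codes.append(FOLD_DOWN)
            diff -= min(LEVELS)
            pred += min(LEVELS)
        codes.append(LEVELS.index(diff))       # exact in-range difference
        pred += diff
    return codes

print(dpcm_fold_encode([0, 1, 9, 8, -4]))
```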

Journal ArticleDOI
TL;DR: The information compression of videophone signals takes advantage of the so-called Hadamard transformation, and acoustic-surface-wave tapped delay lines allow such a transform to be realized efficiently in a low-cost real-time device.
Abstract: An application of acoustic surface waves for image processing is reported. The information compression of videophone signals takes advantage of the so-called Hadamard transformation. Acoustic-surface-wave tapped delay lines are efficient for the achievement of such a transform in a low-cost real-time device. The results of initial experiments are reported.
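To show the transform itself, independently of the acoustic-surface-wave hardware, the sketch below builds a Hadamard matrix by the Sylvester construction and applies it to an 8-sample block; the block length and sample values are assumptions.

```python
import numpy as np

def hadamard(n):
    """Sylvester construction of the n x n Hadamard matrix (n a power of two)."""
    H = np.array([[1]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

block = np.array([3, 3, 4, 5, 5, 4, 3, 3], dtype=float)
H = hadamard(8)
coeffs = H @ block / 8        # transform: most energy sits in a few coefficients
reconstructed = H @ coeffs    # H is its own inverse up to the 1/n factor
print(coeffs)
print(reconstructed)
```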


Proceedings ArticleDOI
01 Dec 1973
TL;DR: Prefilter trade studies with a two-dimensional system show that the algorithm may provide a large reduction in filter computations with only a small increase in RMS error.
Abstract: Data filtering with presmoothed measurements is a useful technique for data compression in high data rate, noisy measurement systems. A sequential filtering algorithm is derived for processing integrated measurements, and the algorithm is applied to data filtering with discrete presmoothed measurements. Prefilter trade studies with a two-dimensional system show that the algorithm may provide a large reduction in filter computations with only a small increase in RMS error.
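The compression mechanism can be sketched as follows: batches of high-rate noisy samples are presmoothed into single measurements, so the sequential filter only runs once per batch. The scalar constant-signal Kalman filter below is a stand-in under that assumption, not the paper's integrated-measurement algorithm; all parameters are illustrative.

```python
import numpy as np

def filter_presmoothed(raw, batch, meas_var):
    """Average each batch into one measurement and run a scalar Kalman update per batch."""
    x, p = 0.0, 1e6                     # state estimate and its variance (diffuse start)
    r = meas_var / batch                # averaging a batch cuts the measurement variance
    estimates = []
    for i in range(0, len(raw) - batch + 1, batch):
        z = raw[i:i + batch].mean()     # presmoothed (compressed) measurement
        k = p / (p + r)                 # Kalman gain for a constant scalar state
        x, p = x + k * (z - x), (1 - k) * p
        estimates.append(x)
    return estimates

rng = np.random.default_rng(0)
raw = 5.0 + rng.normal(0, 1.0, 1000)    # constant signal in unit-variance noise
print(filter_presmoothed(raw, batch=50, meas_var=1.0)[-1])   # close to 5.0
```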


Journal ArticleDOI
TL;DR: The Lynch-Davisson (L-D) code is shown to be efficient for compressing line-scanned weather charts wherein each line is divided into equal-length segments, and a separate L-D code is generated for each segment.
Abstract: The Lynch-Davisson (L-D) code is shown to be efficient for compressing line-scanned weather charts wherein each line is divided into equal-length segments, and a separate L-D code is generated for each segment. As the number of segments increases, the L-D code length decreases appreciably, thereby simplifying the encoding and decoding operations. However, the accompanying decrease in the overall compression ratio is relatively small.
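A minimal sketch of enumerative (Lynch-Davisson style) coding of one scan-line segment follows: the codeword is the count of ones plus the lexicographic rank of the pattern among all patterns with that count. The segment length and contents are assumptions.

```python
from math import ceil, comb, log2

def ld_encode(bits):
    """Enumerative code of one binary segment: (number of ones, lexicographic rank)."""
    n, remaining = len(bits), sum(bits)
    w, rank = remaining, 0
    for k, b in enumerate(bits):
        if b == 1:
            rank += comb(n - k - 1, remaining)   # same-prefix patterns with a 0 here rank lower
            remaining -= 1
    return w, rank

segment = [0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0]   # a sparse 12-bit segment
w, rank = ld_encode(segment)
bits_needed = ceil(log2(len(segment) + 1)) + ceil(log2(comb(len(segment), w)))
print(w, rank, bits_needed)                       # 8 code bits instead of 12 raw bits
```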