
Showing papers on "Data compression published in 1972"



Journal ArticleDOI
TL;DR: It is shown that under certain conditions there is redundancy in the data and the estimator can be run at a lower rate using compressed data with practically the same performance as when no data compression is utilized.
Abstract: The problem of the existence of redundancy in the data in a recursive estimation problem is investigated. Given a certain data rate, should the estimator be run at the same rate? It is shown that under certain conditions there is redundancy in the data and the estimator can be run at a lower rate using compressed data with practically the same performance as when no data compression is utilized. It is also pointed out that, although at the higher rate there is redundancy in the data, the performance deteriorates noticeably when the data rate is lowered. Conditions for the existence of redundancy in the data and the procedure to remove it are presented. The procedure to compress the data is obtained so as to preserve the information in the sense of Fisher. The effect of data compression is a reduction in the computation requirements by a factor equal to the compression ratio. Such a reduction might be important in real-time applications in which the computing power is limited or too expensive. The application of this technique to the tracking of a reentry vehicle with a linearized filter is discussed in more detail and simulation results are presented.

23 citations
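
As a minimal sketch of the idea (not the paper's derivation), assume m scalar measurements of an approximately constant state with identical noise statistics: their mean is a sufficient statistic, so the estimator can process one compressed measurement with covariance R/m in place of m raw ones, preserving the Fisher information while cutting computation by the compression ratio.

```python
import numpy as np

def compress_block(z_block, R):
    """Replace m identically distributed measurements z_i = H x + v_i,
    v_i ~ N(0, R), by their mean: a sufficient statistic that keeps the
    block's Fisher information while the estimator runs at 1/m the rate."""
    m = len(z_block)
    return np.mean(z_block, axis=0), R / m

# Example: 4:1 compression of noisy scalar observations of x = 2.0.
rng = np.random.default_rng(0)
z = 2.0 + rng.normal(0.0, 1.0, size=4)   # four raw measurements, R = 1
z_bar, R_bar = compress_block(z, 1.0)    # one compressed measurement, R = 1/4
```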


01 Jan 1972
TL;DR: Transform coding and block quantization techniques are applied to multispectral aircraft scanner data and digitized satellite imagery, and an appropriate mathematical model is proposed.
Abstract: Transform coding and block quantization techniques are applied to multispectral aircraft scanner data and digitized satellite imagery. The multispectral source is defined and an appropriate mathematical model proposed. The Karhunen-Loeve, Fourier, and Hadamard encoders are considered and are compared to the rate distortion function for the equivalent Gaussian source and to the performance of the single-sample PCM encoder.

16 citations
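
The paper's Karhunen-Loeve, Fourier, and Hadamard encoders are not reproduced here; as a toy stand-in, the sketch below transforms 8-sample blocks with an orthonormal Hadamard matrix and keeps only the low-order coefficients. A real block quantizer would instead allocate bits to each coefficient in proportion to its variance.

```python
import numpy as np
from scipy.linalg import hadamard

def hadamard_block_coder(x, block=8, keep=4):
    """Transform each block with an orthonormal Hadamard matrix, discard
    the high-order coefficients (a crude stand-in for block quantization),
    and invert. The nominal compression ratio is block / keep."""
    n = -(-len(x) // block) * block            # round length up to a full block
    x = np.pad(np.asarray(x, float), (0, n - len(x)))
    H = hadamard(block) / np.sqrt(block)       # orthonormal: H @ H == I
    coeffs = x.reshape(-1, block) @ H          # forward transform per block
    coeffs[:, keep:] = 0.0                     # retain only 'keep' coefficients
    return (coeffs @ H).reshape(-1)            # inverse transform
```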


Patent
24 Nov 1972
TL;DR: In this article, an electronic speaking machine has its vocabulary stored in a solid state memory so that the device, with the possible exception of the sound generator, employs no moving parts.
Abstract: An electronic speaking machine has its vocabulary stored in a solid state memory so that the device, with the possible exception of the sound generator, employs no moving parts. The machine is capable of reproducing any spoken word by storing a digital representation of that word in its vocabulary. To reduce storage space, data compression is employed to reduce the data obtained from sampling an audio signal of the spoken word. Because only fixed words are stored, the data compression technique employed can be optimized for each stored word. A particular word is selected by applying the proper "select code" to the input of the apparatus. A "start of word" signal then causes a clock to sequence a counter through the addresses in the memory where the digital data representing the word is stored. Inasmuch as the stored digital data has a non-linear relationship to the original data, the non-linear data read out of the memory is transformed by a non-linear mapper to digital data having a linear relationship to the original data. A digital to analog converter transforms the linear digital values into an audio signal that is then filtered to obtain a reconstruction of the original audio signal of the spoken word. The reconstructed audio signal can then be used as the input to a conventional amplifier and speaker system.

15 citations
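
The patent's non-linear mapper expands companded memory samples back to linear amplitudes ahead of the digital-to-analog converter. The patent does not name the mapping; the sketch below assumes the classic mu-law curve used for speech, purely as an illustration.

```python
import numpy as np

MU = 255.0   # assumed companding constant; the patent does not specify one

def mu_law_expand(y):
    """Expand nonlinearly companded samples y in [-1, 1] back to linear
    amplitudes: the role of the patent's non-linear mapper, placed
    between the memory readout and the digital-to-analog converter."""
    return np.sign(y) * ((1.0 + MU) ** np.abs(y) - 1.0) / MU
```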


Patent
Abe Takeshi1
27 Dec 1972
TL;DR: In this article, a video signal data compression system is presented in which the video signals obtained by scanning an image or subject copy by a facsimile scanner are quantized and sampled so as to derive quantized bit patterns consisting of 1's representing black elementary areas and 0's representing white elementary areas.
Abstract: A video signal data compression system is provided in which the video signals obtained by scanning an image or subject copy by a facsimile scanner are quantized and sampled so as to derive quantized bit patterns consisting of 1's representing black elementary areas and 0's representing white elementary areas, and the bits of the two quantized bit patterns for adjacent scanning lines are alternately rearranged by interleaving so as to provide a synthesized bit pattern. Thereafter the synthesized bit pattern is coded and compressed.

14 citations
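
A minimal sketch of the premise: adjacent facsimile lines are vertically correlated, so alternately interleaving their bits tends to lengthen the runs in the synthesized pattern, which then run-length codes better. The run-length coder below is generic, not the patent's specific code.

```python
def interleave_and_run_length(line_a, line_b):
    """Alternate the bits of two adjacent scan lines (1 = black, 0 = white)
    into one synthesized pattern, then run-length code it."""
    synthesized = [bit for pair in zip(line_a, line_b) for bit in pair]
    runs, current, count = [], synthesized[0], 1
    for bit in synthesized[1:]:
        if bit == current:
            count += 1
        else:
            runs.append((current, count))
            current, count = bit, 1
    runs.append((current, count))
    return runs

# Identical adjacent lines double the run lengths:
# [(1, 4), (0, 4)] instead of [(1, 2), (0, 2)] for each line alone.
print(interleave_and_run_length([1, 1, 0, 0], [1, 1, 0, 0]))
```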


Journal ArticleDOI
TL;DR: A two-dimensional sampled image can be represented by an equal number of spatial frequencies in the Fourier domain, but due to physical limitations of the sampling process, there exists an effective cutoff frequency beyond which no information is preserved.
Abstract: A two-dimensional sampled image can be represented by an equal number of spatial frequencies in the Fourier domain. However, due to physical limitations of the sampling process, there exists an effective cutoff frequency beyond which no information is preserved. The knowledge of this cutoff frequency is very important, since a considerable amount of additional noise can be introduced by frequency components above this cutoff. Availability of a sharp edge within the image allows, within the linear theory, the estimation of the transfer function of the digitizing process itself. This calculated transfer function may be used for image enhancement below the cutoff frequency. In addition, a significant amount of data compression may be achieved by removing all spatial frequency terms above the cutoff frequency. An example of this technique is developed utilizing a tribar resolution chart sampled at 1024 x 1024 points.

9 citations
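
The compression step described amounts to discarding every spatial-frequency term above the measured cutoff; only the surviving coefficients need be stored. A sketch of that truncation follows (the transfer-function estimation from a sharp edge is not shown).

```python
import numpy as np

def truncate_above_cutoff(image, cutoff):
    """Zero all spatial frequencies whose radial distance from the origin
    exceeds the effective cutoff; only the retained terms carry information."""
    F = np.fft.fftshift(np.fft.fft2(image))
    rows, cols = image.shape
    y, x = np.ogrid[:rows, :cols]
    radius = np.hypot(y - rows // 2, x - cols // 2)
    F[radius > cutoff] = 0.0
    kept = int(np.count_nonzero(radius <= cutoff))   # coefficients to store
    restored = np.fft.ifft2(np.fft.ifftshift(F)).real
    return restored, kept
```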


Journal ArticleDOI
01 Jan 1972
TL;DR: Based upon a first-order Markov model of video data, an upper bound on compression ratio is found for run-length encoding and compared with that of the Markov source.
Abstract: Based upon a first-order Markov model of video data, an upper bound on compression ratio is found for run-length encoding. The bound is compared with that of the Markov source.

5 citations
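
The paper's exact bound is not reproduced here; the sketch below, assuming a symmetric binary first-order Markov source with transition probability q, computes the quantities involved: the mean (geometric) run length, the ratio achieved by a fixed-length run code, and the entropy-rate ceiling that no coder can exceed.

```python
import numpy as np

def binary_entropy(p):
    """Binary entropy in bits per symbol."""
    p = np.clip(p, 1e-12, 1.0 - 1e-12)
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

def markov_rle_figures(q, code_bits=8):
    """For a symmetric binary first-order Markov source with transition
    probability q: run lengths are geometric with mean 1/q, a fixed-length
    run code spends code_bits per run, and the entropy rate h(q) bounds
    every lossless coder."""
    mean_run = 1.0 / q                          # expected run length
    rle_ratio = mean_run / code_bits            # 1 bit/pel in, code_bits/run out
    ceiling = 1.0 / binary_entropy(q)           # Markov-source limit
    return mean_run, rle_ratio, ceiling
```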


01 Jul 1972
TL;DR: This report describes the initial evaluation of a text compression algorithm against Computer-Aided Instruction (CAI) material; a theoretical compression is calculated using a probabilistic model of the compression algorithm and verified against the measured compression.
Abstract: This report describes the initial evaluation of a text compression algorithm against Computer-Aided Instruction (CAI) material. A review of some concepts related to statistical text compression is followed by a detailed description of a practical text compression algorithm. A simulation of the algorithm was programmed and used to obtain compression ratios for a small sample of both traditional frame-structured CAI material and a new type of information-structured CAI material. The resulting compression ratios are near 1.5 to one for both types of material. The simulation program was modified to apply the algorithm to the lesson files of a particular frame-structured CAI subsystem used in the Air Force Phase II Base Level System. The compression in this case was found to be 1.3 to one because of the presence in the lesson file of uncompressible frame-formatting bytes. The modified simulation program was also used to take letter occurrence statistics on the text being compressed. From these, a theoretical compression was calculated using a probabilistic model of the compression algorithm. Theoretical compression was within two per cent of measured compression, thus verifying the model's applicability. The report closes with the raising of some questions and a discussion of future work.
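
The report's probabilistic model of its algorithm is not reproduced here; as an illustration of how letter-occurrence statistics yield a theoretical compression figure, the sketch below compares fixed 8-bit characters against the zeroth-order entropy of the measured letter frequencies.

```python
from collections import Counter
import math

def theoretical_compression(text, code_bits=8):
    """Zeroth-order estimate: ratio of fixed-length character coding to
    the entropy implied by the measured letter-occurrence statistics."""
    counts = Counter(text)
    n = len(text)
    entropy = -sum((c / n) * math.log2(c / n) for c in counts.values())
    return code_bits / entropy

# usage: theoretical_compression(open("lesson.txt").read())
```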

Patent
J Garcia1
11 Dec 1972
TL;DR: In this article, a data compaction method is proposed for encoding run lengths of black and white information (or any two grey levels) as pertaining to facsimile, where a mechanical scanner and a semiconductor memory are joined into a compact solid state device with virtually no scanning speed limitations.
Abstract: A data compaction method is proposed for encoding run lengths of black and white information (or any two grey levels) as pertaining to facsimile. A mechanical scanner and a semiconductor memory are joined into a compact solid state device incorporating variable rate scanning with virtually no scanning speed limitations.

Journal ArticleDOI
TL;DR: A system for simulating video bandwidth compression techniques is described, which accepts up to 10 s of real-time video and slows the scan rate so that this information can be converted for storage and processing on a digital computer.
Abstract: A system for simulating video bandwidth compression techniques is described. The simulation facility accepts up to 10 s of real-time video and slows the scan rate so that this information can be converted for storage and processing on a digital computer. The processed information is then converted back to real-time video for visual evaluation.

01 Apr 1972
TL;DR: In this article, the authors classified source encoding techniques into sampling and analog-to-digital conversion, codebook techniques, predictive subtractive coding and delta modulation, along with aperture and partitioning techniques.
Abstract: Data compression, in the paper, is used to denote the reducing of an input data set prior to transmission, as opposed to data reduction, the analytical processing of the set upon reception. Source encoding techniques are classified into sampling and analog-to-digital conversion, codebook techniques, predictive subtractive coding and delta modulation, along with aperture and partitioning techniques. Some of the most recent work discussed includes adaptive predictive coding, picture bandwidth compression and source coding, and techniques for subjective performance measures. The paper concludes with an information theoretic approach to data compression.
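
Of the techniques the survey classifies, delta modulation is compact enough to sketch: each sample is coded as one bit, the sign of the prediction error, and the decoder integrates the same fixed steps. The step size below is an arbitrary illustrative choice.

```python
def delta_modulate(samples, step=0.05):
    """Encode each sample as 1 bit: is it above or below the running
    staircase estimate? The estimate then moves one step toward it."""
    bits, estimate = [], 0.0
    for s in samples:
        bit = 1 if s >= estimate else 0
        bits.append(bit)
        estimate += step if bit else -step
    return bits

def delta_demodulate(bits, step=0.05):
    """Rebuild the staircase approximation by integrating the bit stream."""
    out, estimate = [], 0.0
    for bit in bits:
        estimate += step if bit else -step
        out.append(estimate)
    return out
```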

Proceedings ArticleDOI
01 Dec 1972
TL;DR: A technique of image compression through linear transformation is presented which reduces the image information while generating a set of features for optimal image discrimination; the feature generation is optimal for image classification rather than for image representation.
Abstract: This paper is concerned with a technique of image compression through linear transformation which reduces the image information while generating a set of features for optimal image discrimination. This method consists of partitioning the original image into non-overlapping sub-images and applying the transgeneration technique to the sub-images. The objective of this transgeneration technique is image data compression and feature generation that is optimal for image classification rather than for image representation. The technique is applied to transgenerate features from scintigraphic images for the detection of brain tumors. Some performance results for the classification of normal/abnormal classes of brain scans are presented. Some possible extensions and modifications of this work are briefly described.
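
A minimal sketch of the partition-and-transform step, with the paper's construction of the discriminant transform itself left abstract: each non-overlapping sub-image is flattened and passed through a linear map W whose rows would be chosen (e.g. by a Fisher-type criterion) for class separation rather than for reconstruction.

```python
import numpy as np

def transgenerate_features(image, W, block=8):
    """Partition the image into non-overlapping block x block sub-images
    and apply the linear transform W (shape k x block**2) to each,
    yielding k discriminant features per sub-image."""
    rows, cols = image.shape
    feats = []
    for i in range(0, rows - block + 1, block):
        for j in range(0, cols - block + 1, block):
            sub = image[i:i + block, j:j + block].ravel()
            feats.append(W @ sub)
    return np.concatenate(feats)   # feature vector handed to a classifier
```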

Journal ArticleDOI
TL;DR: A proposed self-adaptive approach to data compression is described, along with a method of extending the approach to a multichannel data compression system that allows more efficient use of the communication link.
Abstract: The increased complexity in spacecraft systems and experiment packages has resulted in large quantities of data to be processed and transmitted to ground stations, often in remote locations. The automatic removal of redundant information before transmission would allow more efficient use of the communication link, as well as yield a reduction in the net amount of information to be processed on the ground. This paper describes a proposed self-adaptive approach to data compression, and a method of extending the approach to a multichannel data compression system.
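
The abstract does not spell out the redundancy-removal rule; a common choice in telemetry of the period, assumed here purely for illustration, is the zero-order predictor with a tolerance aperture: a sample is transmitted only when it leaves the band around the last transmitted value. A self-adaptive system would additionally vary the aperture with channel load, which is not shown.

```python
def zero_order_compress(samples, aperture):
    """Transmit (index, value) only when a sample drifts more than
    'aperture' from the last transmitted value; the receiver holds the
    last value for every suppressed (redundant) sample."""
    kept = [(0, samples[0])]
    last = samples[0]
    for k, s in enumerate(samples[1:], start=1):
        if abs(s - last) > aperture:
            kept.append((k, s))
            last = s
    return kept
```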

01 Jul 1972
TL;DR: An objective evaluation method is developed for determining the goodness of various data compression techniques applied to radar landmass simulation data and the results obtained demonstrates the inadequacy of the terrain RMS error criteria and the degrading smoothing effect introduced by polynomial data compression.
Abstract: : The objective of this research is to develop an objective evaluation method for determining the goodness of various data compression techniques applied to radar landmass simulation data, to validate the objective evaluation method by comparison with a subjective evaluation performed by experienced radar operators and to compare the objective evaluation with the currently used terrain RMS error criteria. This research consists of the computation of simulated radar shadow displays, the computation of various measures of goodness which are combined in a performance criteria, and the subjective evaluation of the simulated displays. The objective evaluation method which is developed measures the goodness of the data compression techniques in terms of the usefulness of the simulated radar displays. The results obtained demonstrates the inadequacy of the terrain RMS error criteria and the degrading smoothing effect introduced by polynomial data compression. It is recommended that, prior to the implementation of any terrain data compression technique in a radar landmass simulator, it be evaluated in the manner proposed.