
Showing papers on "Data compression published in 1978"


Patent
15 Dec 1978
TL;DR: In this article, a digital video compression and expansion system and its methods for compressing and expanding digitalized video signals in real time at rates up to NTSC color broadcast rates are disclosed.
Abstract: A digital video compression and expansion system and its methods for compressing and expanding digitalized video signals in real time at rates up to NTSC color broadcast rates are disclosed. The system compressor receives digitalized video frames divided into subframes, performs in a single pass a spatial domain to transform domain transformation in two dimensions of the picture elements of each subframe, normalizes the resultant coefficients by a normalization factor having a predetermined compression ratio component and an adaptive rate buffer capacity control feedback component, to provide compression, encodes the coefficients with a minimum redundancy coding scheme and stores them in a first rate buffer memory asynchronously at a high data transfer rate from which they are put out at a slower, synchronous rate. The compressor adaptively determines the rate buffer capacity control feedback component in relation to instantaneous data content of the rate buffer memory in relation to its capacity, and it controls the absolute quantity of data resulting from the normalization step so that the buffer memory is never completely emptied and never completely filled. In expansion, the system essentially mirrors in reverse the steps performed during compression. An efficient, high speed decoder forms an important aspect of the present invention. The compression system forms an important element of a disclosed color broadcast compression system.
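
A minimal sketch of the rate-buffer feedback idea described in this abstract, assuming an illustrative feedback law (the names, the linear fullness term, and the rounding quantizer are not from the patent): the normalization factor grows with buffer fullness, so coefficients are quantized more coarsely as the buffer fills and the buffer is never driven to overflow.

```python
# Illustrative sketch of rate-buffer-driven normalization (hypothetical parameters).

def normalization_factor(base_factor, buffer_fill, buffer_capacity, gain=2.0):
    """Combine a fixed compression-ratio component with an adaptive
    feedback component proportional to how full the rate buffer is."""
    fullness = buffer_fill / buffer_capacity          # 0.0 (empty) .. 1.0 (full)
    return base_factor * (1.0 + gain * fullness)      # coarser steps as buffer fills

def compress_subframe(coeffs, base_factor, buffer_fill, buffer_capacity):
    """Normalize (quantize) the transform coefficients of one subframe."""
    n = normalization_factor(base_factor, buffer_fill, buffer_capacity)
    return [round(c / n) for c in coeffs]             # quantized coefficients for entropy coding

# The same coefficients are quantized more coarsely when the buffer is nearly full.
coeffs = [103.2, -41.7, 12.5, -3.1, 0.8]
print(compress_subframe(coeffs, base_factor=4.0, buffer_fill=100, buffer_capacity=1000))
print(compress_subframe(coeffs, base_factor=4.0, buffer_fill=900, buffer_capacity=1000))
```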

390 citations


Journal ArticleDOI
TL;DR: A bit-vector can be compressed if the frequency of zeroes (or, equally, of ones) differs from 0.5, or if the vector is clustered in some way (i.e. not random).
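
As a concrete illustration of both effects (a generic sketch, not the paper's scheme): run lengths capture clustering, and the zero-order entropy shows the gain available whenever the density of ones differs from 0.5.

```python
import math

def run_lengths(bits):
    """Lengths of maximal runs of equal bits, e.g. 0001100 -> [3, 2, 2]."""
    runs, count = [], 1
    for prev, cur in zip(bits, bits[1:]):
        if cur == prev:
            count += 1
        else:
            runs.append(count)
            count = 1
    runs.append(count)
    return runs

def zero_order_entropy(bits):
    """Bits per symbol achievable if only the density of ones (not clustering) is exploited."""
    p = sum(bits) / len(bits)
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

vector = [0]*40 + [1]*8 + [0]*52              # skewed and clustered bit-vector
print(run_lengths(vector))                    # [40, 8, 52]: three runs instead of 100 bits
print(round(zero_order_entropy(vector), 3))   # ~0.40 bits per bit < 1, so compressible
```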

233 citations


Journal ArticleDOI
TL;DR: The finite-state complexity of a sequence plays a role similar to that of entropy in classical information theory (which deals with probabilistic ensembles of sequences rather than an individual sequence).
Abstract: A quantity called the finite-state complexity is assigned to every infinite sequence of elements drawn from a finite set. This quantity characterizes the largest compression ratio that can be achieved in accurate transmission of the sequence by any finite-state encoder (and decoder). Coding theorems and converses are derived for an individual sequence without any probabilistic characterization, and universal data compression algorithms are introduced that are asymptotically optimal for all sequences over a given alphabet. The finite-state complexity of a sequence plays a role similar to that of entropy in classical information theory (which deals with probabilistic ensembles of sequences rather than an individual sequence). For a probabilistic source, the expectation of the finite-state complexity of its sequences is equal to the source's entropy. The finite-state complexity is of particular interest when the source statistics are unspecified.
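
The universal algorithms in question are of the incremental-parsing (Lempel-Ziv) kind. Below is a minimal sketch of such a parser, splitting the input into phrases that each extend a previously seen phrase by one symbol; it illustrates the idea rather than reproducing the paper's exact construction.

```python
def lz78_parse(sequence):
    """Incrementally parse a string into (dictionary_index, next_symbol) pairs.
    Index 0 denotes the empty phrase; each new phrase = an earlier phrase + one symbol."""
    dictionary = {"": 0}              # phrase -> index
    output, phrase = [], ""
    for symbol in sequence:
        if phrase + symbol in dictionary:
            phrase += symbol          # keep extending the current match
        else:
            output.append((dictionary[phrase], symbol))
            dictionary[phrase + symbol] = len(dictionary)
            phrase = ""
    if phrase:                        # flush a trailing match, if any
        output.append((dictionary[phrase[:-1]], phrase[-1]))
    return output

# As the number of phrases grows slowly relative to the input length, the per-symbol
# cost of coding (index, symbol) pairs approaches the sequence's finite-state compressibility.
print(lz78_parse("aababbbaaabaa"))
```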

202 citations


Journal ArticleDOI
TL;DR: General source coding theorems are proved in order to justify using the optimal test channel transition probability distribution for allocating the information rate among the DFT coefficients and for calculating arbitrary performance measures on actual optimal codes.
Abstract: Distortion-rate theory is used to derive absolute performance bounds and encoding guidelines for direct fixed-rate minimum mean-square error data compression of the discrete Fourier transform (DFT) of a stationary real or circularly complex sequence. Both real-part-imaginary-part and magnitude-phase-angle encoding are treated. General source coding theorems are proved in order to justify using the optimal test channel transition probability distribution for allocating the information rate among the DFT coefficients and for calculating arbitrary performance measures on actual optimal codes. This technique has yielded a theoretical measure of the relative importance of phase angle over the magnitude in magnitude-phase-angle data compression. The result is that the phase angle must be encoded with 0.954 nats, or 1.37 bits, more rate than the magnitude for rates exceeding 3.0 nats per complex element. This result and the optimal error bounds are compared to empirical results for efficient quantization schemes.
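
For reference, the two quoted figures are related by the usual nats-to-bits conversion:

```latex
0.954\ \text{nats}\times\frac{1\ \text{bit}}{\ln 2\ \text{nats}}
  \;=\;\frac{0.954}{0.693}\;\approx\;1.37\ \text{bits}.
```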

85 citations


Proceedings ArticleDOI
01 May 1978
TL;DR: A general model for data compression is presented which includes most data compression systems in the literature as special cases as well as how the work of other authors (such as Lempel-Ziv) relates to this model.
Abstract: A general model for data compression is presented which includes most data compression systems in the literature as special cases. All macro schemes are based on the principle of finding redundant strings or patterns and replacing them by pointers to a common copy. Different varieties of macro schemes may be defined by varying the interpretation of pointers, for instance, a pointer may indicate a substring of the compressed string, a substring of the original string, or a substring of some other string such as an external dictionary. Other varieties of macro schemes may be defined by restricting the type of overlapping or recursion that may be used. Trade-offs between different varieties of macro schemes, exact lower bounds on the amount of compression obtainable, and the complexity of encoding and decoding are discussed, as well as how the work of other authors (such as Lempel-Ziv) relates to this model.
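
A toy illustration of the pointer principle common to these macro schemes (a greedy scheme assumed for illustration, not one analyzed in the paper): repeated substrings are replaced by (position, length) pointers to an earlier copy.

```python
def macro_compress(text, min_len=3):
    """Greedy 'original pointer' macro scheme: replace a substring with a
    (position, length) pointer to an earlier copy in the string."""
    out, i = [], 0
    while i < len(text):
        best_pos, best_len = -1, 0
        for j in range(i):                       # search the already-scanned prefix
            length = 0
            while (i + length < len(text) and j + length < i
                   and text[j + length] == text[i + length]):
                length += 1
            if length > best_len:
                best_pos, best_len = j, length
        if best_len >= min_len:
            out.append(("ptr", best_pos, best_len))   # pointer to a common copy
            i += best_len
        else:
            out.append(text[i])                       # literal symbol
            i += 1
    return out

def macro_expand(tokens):
    s = ""
    for t in tokens:
        s += s[t[1]:t[1] + t[2]] if isinstance(t, tuple) else t
    return s

compressed = macro_compress("the cat sat on the mat, the cat sat still")
assert macro_expand(compressed) == "the cat sat on the mat, the cat sat still"
print(compressed)
```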

77 citations


01 Nov 1978
TL;DR: A new method for coding and segmentation of binary pictures is presented, related to the common chain-code, but the coder works in raster scan mode, and therefore no intermediate storage for the image is needed in connection with raster scanning devices.
Abstract: A new method for coding and segmentation of binary pictures is presented. The code is related to the common chain-code, but the coder works in raster scan mode, and therefore no intermediate storage for the image is needed in connection with raster scan devices. The code efficiency of the new code is compared with that of chain-code. Coding, segmentation, and ordering of objects and holes can be done concurrently.
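
For background, the "common chain-code" referred to is the Freeman code, which stores a contour as a start point plus one 3-bit direction symbol per boundary step. A generic sketch (not the paper's raster-scan coder):

```python
# Freeman 8-direction chain code: each neighbour offset gets a 3-bit symbol (0-7).
DIRS = [(1, 0), (1, -1), (0, -1), (-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1)]

def chain_code(contour):
    """Encode a pixel contour as a start point plus 3-bit direction symbols."""
    codes = []
    for (x0, y0), (x1, y1) in zip(contour, contour[1:]):
        codes.append(DIRS.index((x1 - x0, y1 - y0)))   # each step is one of 8 neighbours
    return contour[0], codes

def decode_chain(start, codes):
    pts, (x, y) = [start], start
    for c in codes:
        dx, dy = DIRS[c]
        x, y = x + dx, y + dy
        pts.append((x, y))
    return pts

contour = [(0, 0), (1, 0), (2, 0), (2, 1), (1, 1), (0, 1), (0, 0)]
start, codes = chain_code(contour)
assert decode_chain(start, codes) == contour
print(codes)    # [0, 0, 6, 4, 4, 2] -- 3 bits per boundary step
```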

67 citations


01 May 1978
TL;DR: In this report, image compression procedures based on transform techniques are reviewed and the emphasis is on adaptive and, especially, on rate adaptive algorithms.
Abstract: In recent years, considerable research effort has been devoted to optimizing various digital communication channels. The significant growth of both commercial and military communication systems has emphasized the need for efficient data compression procedures. In the case of image transmission, the requirement is particularly demanding because of the significant amount of information to be transmitted. In this report, image compression procedures based on transform techniques are reviewed. The emphasis is on adaptive and, especially, on rate adaptive algorithms. The discussion includes an historical review, theoretical development, and illustrative examples of the transform image coding field. (Author)

55 citations


Journal ArticleDOI
TL;DR: It is shown that the z^{-2} dependence from the lidar equation can be converted to almost constant signal amplitude for distances up to 3 km by suitable and realistic choice of the geometric parameters.
Abstract: The dynamic range of lidar return signals has been calculated for coaxial transmitter–receiver geometries via the spatial distribution of irradiance in the detector plane as a function of distance z. It is shown that the z−2 dependence from the lidar equation can be converted to almost constant signal amplitude for distances up to 3 km by suitable and realistic choice of the geometric parameters. On the other hand, no signal degradation with respect to the lidar equation occurs at large distances. This geometrical compression of lidar return signal amplitudes is rated superior to electronic compression methods such as gain switching or logarithmic amplification.
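
For context, the z^{-2} dependence is that of the standard single-scattering lidar equation (general background, not a formula taken from the paper); the geometric overlap factor O(z) between the laser beam and the receiver field of view is what a suitable coaxial geometry shapes to offset it at short range:

```latex
P(z) \;=\; \frac{K\,O(z)\,\beta(z)}{z^{2}}\,
\exp\!\Big(-2\int_{0}^{z}\alpha(z')\,dz'\Big),
\qquad 0 \le O(z) \le 1,
```

with K the system constant, β the backscatter coefficient, and α the extinction coefficient.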

51 citations


Journal ArticleDOI
TL;DR: Several simple ad hoc techniques for obtaining a good low rate "fake process" for the original source are introduced and shown by simulation to provide an improvement of typically 1-2 dB over optimum quantization, delta modulation, and predictive quantization for one-bit per symbol compression of Gaussian memoryless, autoregressive, and moving average sources.
Abstract: The problem of designing a good decoder for a time-invariant tree-coding data compression system is equivalent to that of finding a good low rate "fake process" for the original source, where the fake is produced by a time-invariant nonlinear filtering of an independent, identically distributed sequence of uniformly distributed discrete random variables and "goodness" is measured by the ρ̄-distance between the fake and the original source. Several simple ad hoc techniques for obtaining such fake processes are introduced and shown by simulation to provide an improvement of typically 1-2 dB over optimum quantization, delta modulation, and predictive quantization for one-bit per symbol compression of Gaussian memoryless, autoregressive, and moving average sources. In addition, the fake process viewpoint provides a new intuitive explanation of why delta modulation and predictive quantization work as well as they do on Gaussian autoregressive sources.

33 citations


Patent
12 Jun 1978
TL;DR: In this article, a system and apparatus for compressing a binary data message generated by a digital input device is disclosed: the message is examined on the basis of information content, all data relating to redundant information previously generated or known is deleted, and preselected portions of the non-redundant data are encoded, compressing the data to a minimum amount without losing the informational content of the original data.
Abstract: A system and apparatus for compressing a binary data message generated by a digital input device is disclosed. A data message generated in a data terminal device as part of a merchandise transaction is examined on the basis of information content; all data relating to redundant information previously generated or known is deleted, and preselected portions of the non-redundant data are encoded. This compresses the data to a minimum amount without losing the informational content of the original data, thereby allowing the compressed data to be stored in a relatively small storage unit located in the data terminal device. A compressed data record is generated including an encoded start-of-record character which may signify, in addition to the start of the compressed data record, the type of merchandise transaction being processed. Other non-redundant data which can be determined by knowing the corresponding data of a previous data message are also deleted, with only the data required to reconstruct the original data being retained in the storage unit.

32 citations


Journal ArticleDOI
TL;DR: Variable-rate universal source codes are data compression schemes that are optimum for the coding of a collection of sources subject to a fixed average distortion constraint.
Abstract: Variable-rate universal source codes are data compression schemes that are optimum for the coding of a collection of sources (e.g., a source with unknown parameters) subject to a fixed average distortion constraint. Existence of variable-rate universal source codes is demonstrated for very general classes of sources and distortion measures.

Journal ArticleDOI
B. R. Hunt1
TL;DR: An optical system which captures the essential features of DPCM without optical feedback is introduced, and a simulation of this optical system by means of digital image processing is presented, and performance data are also included.
Abstract: Image bandwidth compression is dominated by digital methods for carrying out the required computations. This paper discusses the general problem of using optics to realize the computations in bandwidth compression. A common method of digital bandwidth compression, feedback differential pulse code modulation (DPCM), is reviewed, and the obstacles to making a direct optical analogy to feedback DPCM are discussed. Instead of a direct optical analogy to DPCM, an optical system which captures the essential features of DPCM without optical feedback is introduced. The essential features of this incoherent optical system are encoding of low-frequency information and generation of difference samples which can be coded with a small number of bits. A simulation of this optical system by means of digital image processing is presented, and performance data are also included.
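
As background for the comparison above, here is a minimal digital feedback-DPCM loop in its generic textbook form (not the paper's optical system): each sample is predicted from the previously reconstructed sample, and only the quantized prediction difference is coded.

```python
def dpcm_encode(samples, step):
    """Feedback DPCM: code the quantized difference between each sample and
    the previously *reconstructed* sample (the decoder's prediction)."""
    codes, recon = [], 0.0
    for x in samples:
        q = round((x - recon) / step)     # small differences -> few bits
        codes.append(q)
        recon += q * step                 # track the decoder's reconstruction
    return codes

def dpcm_decode(codes, step):
    out, recon = [], 0.0
    for q in codes:
        recon += q * step
        out.append(recon)
    return out

samples = [10.0, 10.5, 11.2, 11.0, 10.4, 9.9]
codes = dpcm_encode(samples, step=0.25)
print(codes)                              # [40, 2, 3, -1, -2, -2]
print(dpcm_decode(codes, step=0.25))      # reconstruction within half a step of the input
```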


J. G. Wolff1
18 Jul 1978
TL;DR: A discovery procedure is described for non-recursive context-free phrase structure grammars which is based on three data compression principles and works without the need for semantic information, negative samples, a teacher or pre-segmented data.
Abstract: A discovery procedure is described for non-recursive context-free phrase structure grammars which is based on three data compression principles. It works without the need for semantic information, negative samples, a teacher or pre-segmented data.

Patent
19 Jun 1978
TL;DR: In this article, the correlation between data in two adjacent scan lines is sensed; when the degree of correlation is above a predetermined value the data is compressed by two-dimensional compression, and when it is below that value the data is compressed by one-dimensional compression.
Abstract: Correlation between data in two adjacent scan lines is sensed. When the degree of correlation is above a predetermined value the data is compressed by means of two dimensional compression. When the degree of correlation is below the predetermined value the data is compressed by means of one dimensional compression. Typically, both the one and two dimensional compression systems are based on run length encoding. After a predetermined number of scan lines have been compressed by two dimensional compression, one scan line may be compressed by one dimensional compression to prevent accumulation of error.
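
A schematic of the selection logic (illustrative correlation measure and thresholds; the patent's 1D and 2D coders are run-length based and are not reproduced here): the fraction of matching pels between adjacent scan lines picks the mode, and a 1D line is forced periodically to stop error accumulation.

```python
def choose_modes(lines, threshold=0.75, force_1d_every=4):
    """Pick a compression mode per scan line from its correlation with the
    line above. Returns a list of '1D' / '2D' decisions."""
    modes, consecutive_2d = [], 0
    prev = None
    for line in lines:
        if prev is None:
            correlated = False
        else:
            matches = sum(a == b for a, b in zip(line, prev))
            correlated = matches / len(line) >= threshold
        if correlated and consecutive_2d < force_1d_every:
            modes.append("2D")            # exploit vertical redundancy
            consecutive_2d += 1
        else:
            modes.append("1D")            # run-length code the line on its own
            consecutive_2d = 0
        prev = line
    return modes

scan = [
    [0]*8, [0]*8, [0]*4 + [1]*4, [0]*4 + [1]*4,
    [0]*4 + [1]*4, [0]*4 + [1]*4, [0]*4 + [1]*4, [1]*8,
]
print(choose_modes(scan))
```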

Journal ArticleDOI
TL;DR: By application of results of an earlier study in compression coding, efficient encoding and decoding procedures are presented for use in on-line transmission of data.
Abstract: In this paper a simple algorithm is used for selection of a set of codeable substrings that occur at the front or rear of the words in a textual data base. Since the words are assumed to be non-repeating, the technique is useful for data compression of dictionaries. The time complexity of the algorithm is governed by the associated sorting algorithm and hence is O(n log n). It has been applied to three sample data bases, consisting of words selected from street names, authors' names, or general written English text. The results show that the substrings at the rear of the words yield better compression than those at the front. By application of results of an earlier study in compression coding, efficient encoding and decoding procedures are presented for use in on-line transmission of data.

Patent
07 Jun 1978
TL;DR: In this paper, the authors proposed a data compression system to further reduce previously-formed digital words representing samples of non-uniformly quantized and encoded analog speech by normalizing and reencoding the original input digital word samples into smaller words using block coding wherein the input words are bit-reduced by normalising a block of samples to the maximum amplitude sample value.
Abstract: This data compression system further reduces previously-formed digital words representing samples of non-uniformly quantized and encoded analog speech, by normalizing and re-encoding the original input digital word samples into smaller words using block coding, wherein the input words are bit-reduced by normalizing a block of samples to the maximum-amplitude sample value. Features of the invention include concatenating the most significant bit of the maximum-value block code (first digital word) with the least significant bit of the segment-bits part of the normalized input digital word (third digital word).
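
A rough sketch of block normalization of this kind, with hypothetical block and word sizes (the patent's segment-bit concatenation details are omitted): each block is scaled to its maximum magnitude so the samples fit in smaller words, and the per-block maximum is carried alongside.

```python
def block_compress(samples, block_size=8, out_bits=4):
    """Normalize each block to its maximum magnitude and re-quantize
    the normalized samples to a smaller word size."""
    levels = 2 ** (out_bits - 1) - 1            # e.g. 7 levels for 4-bit signed words
    blocks = []
    for i in range(0, len(samples), block_size):
        block = samples[i:i + block_size]
        peak = max(abs(s) for s in block) or 1
        codes = [round(s / peak * levels) for s in block]   # small words
        blocks.append((peak, codes))            # the peak is sent once per block
    return blocks

def block_expand(blocks, out_bits=4):
    levels = 2 ** (out_bits - 1) - 1
    out = []
    for peak, codes in blocks:
        out.extend(c / levels * peak for c in codes)
    return out

speech = [12, 40, -260, -180, 35, 6, -2, 90, 3, 1, -4, 2, 5, -1, 0, 2]
packed = block_compress(speech)
print(packed)
print([round(x) for x in block_expand(packed)])
```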

01 Apr 1978
TL;DR: In recent years, several measures of distortion between speech waveforms have been proposed as substitutes for the traditional but subjectively inadequate mean-squared error; as discussed by the authors, these measures depend on the power spectral densities or linear models of the speech process.
Abstract: In recent years, several measures of distortion between speech waveforms have been proposed as substitutes for the traditional but subjectively inadequate mean-squared error. All of these measures involve some form of distortion measure between the second-order properties of the speech processes producing the waveforms instead of an average of the waveform error power. In particular, they depend on the power spectral densities or linear models of the speech process. In this report the properties and interrelations of several such measures are developed. In particular, the relative strengths or equivalences of the various implications and applications of these measures to prediction, detection, and coding are summarized. (Author)

Journal ArticleDOI
TL;DR: A first-order predictor routine and an encoding routine were implemented on an Intel-8080-based microprocessor system to determine their ability to compress seismic data and it appears that higher order predictors offer no significant advantage over the first- order predictor in compressing seismic data.
Abstract: A first-order predictor routine and an encoding routine were implemented on an Intel-8080-based microprocessor system to determine their ability to compress seismic data. The predictor routine obtained a maximum compression ratio of 2.33. It appears that higher order predictors offer no significant advantage over the first-order predictor in compressing seismic data. The encoder routine obtained a larger compression ratio of 3.94. The accuracy of reconstructed seismic traces is bounded by a maximum average error per point. For the predictor, the maximum average error per point is shown to be equal to the specified user tolerance. For the encoder, it is shown to be proportional to the maximum quantization error associated with the ranges used in the encoding algorithm. In general, the average error per point is found to be equal to the square root of the mean-squared error regardless of the compression technique used. A design for a basic data acquisition system utilizing data compression to reduce memory requirements is proposed. Examination of the design indicates that such a system can process no more than three separate data channels at a 1-ms sampling rate and that such a system is cost effective only when small numbers of data channels are to be processed.
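
A sketch of a tolerance-based first-order predictor of the kind described (parameter names and details are assumptions, not the authors' 8080 routine): a sample is retained only when linear extrapolation from the last two retained samples misses it by more than the user tolerance, so the reconstruction error per point stays within that tolerance.

```python
def first_order_compress(samples, tolerance):
    """Keep only samples that a first-order (linear extrapolation) predictor
    misses by more than `tolerance`. Returns (index, value) pairs."""
    kept = [(0, samples[0]), (1, samples[1])]       # the predictor needs two seed samples
    for i in range(2, len(samples)):
        (i1, v1), (i2, v2) = kept[-2], kept[-1]
        predicted = v2 + (v2 - v1) / (i2 - i1) * (i - i2)   # extrapolate from last two kept
        if abs(predicted - samples[i]) > tolerance:
            kept.append((i, samples[i]))            # prediction failed: retain the sample
    return kept

def first_order_expand(kept, n):
    """Rebuild the full trace; discarded samples are regenerated by the same
    predictor, so the error per point is bounded by the tolerance."""
    kept_dict, out, recent = dict(kept), [], []
    for i in range(n):
        if i in kept_dict:
            out.append(kept_dict[i])
            recent = (recent + [(i, kept_dict[i])])[-2:]
        else:
            (i1, v1), (i2, v2) = recent
            out.append(v2 + (v2 - v1) / (i2 - i1) * (i - i2))
    return out

trace = [0, 2, 4, 6, 8, 9, 12, 20, 30, 41]
kept = first_order_compress(trace, tolerance=1.0)
print(len(trace) / len(kept), kept)                 # compression ratio and retained samples
print(first_order_expand(kept, len(trace)))
```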

Proceedings ArticleDOI
07 Dec 1978
TL;DR: An efficient facsimile data coding scheme (CSM) is presented which combines an extended run-length coding technique with a symbol recognition technique; results are demonstrated for both CCITT and Xerox standard documents.
Abstract: Presented here is an efficient facsimile data coding scheme (CSM) which combines an extended run-length coding technique with a symbol recognition technique. The CSM scheme first partitions the data into run-length regions and symbol regions. The run-length regions are then coded by a modified Interline Coding technique, while the data within the symbol region is further subpartitioned into regions defined as symbols. A prototype symbol library is maintained, and as each new symbol is encountered, it is compared with each element of the library. These comparisons produce a signature for the new symbol. A tolerance threshold is used to evaluate the "goodness" of the comparison. If the tolerance threshold indicates a matching symbol, then only the location and the library address need be transmitted. Otherwise the new symbol is both transmitted and placed in the library. For finite-sized libraries a scoring system determines which elements of the library are to be replaced by new prototypes. Simulation results are demonstrated for both CCITT and Xerox standard documents.
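
A schematic of the symbol-matching step (illustrative; the actual CSM signature, threshold, and scoring rules are not specified here): each extracted symbol bitmap is compared pel-by-pel against the library prototypes, and a mismatch count within the tolerance means only the location and library address need be sent.

```python
def mismatch(a, b):
    """Number of differing pels between two equally sized binary bitmaps."""
    return sum(pa != pb for row_a, row_b in zip(a, b) for pa, pb in zip(row_a, row_b))

def code_symbol(symbol, library, tolerance=2):
    """Return ('match', index) if a library prototype is close enough,
    otherwise add the symbol as a new prototype and transmit it."""
    for idx, proto in enumerate(library):
        if len(proto) == len(symbol) and mismatch(proto, symbol) <= tolerance:
            return ("match", idx)              # only location + library address are sent
    library.append(symbol)
    return ("new", symbol)                     # transmit the bitmap and store it

library = []
glyph_a  = [[0,1,0], [1,0,1], [1,1,1], [1,0,1]]
glyph_a2 = [[0,1,0], [1,0,1], [1,1,1], [1,1,1]]   # noisy repeat of the same glyph
print(code_symbol(glyph_a,  library))             # ('new', ...): first occurrence
print(code_symbol(glyph_a2, library))             # ('match', 0): within tolerance
```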

15 Jun 1978
TL;DR: The efficiency of various deep space communication systems which are required to transmit both imaging and a typically error-sensitive class of data called general science and engineering (gse) is compared.
Abstract: Various communication systems were considered which are required to transmit both imaging and a typically error-sensitive class of data called general science/engineering (gse) over a Gaussian channel. The approach jointly treats the imaging and gse transmission problems, allowing comparisons of systems which include various channel coding and data compression alternatives. Actual system comparisons include an Advanced Imaging Communication System (AICS) which exhibits the rather significant potential advantages of sophisticated data compression coupled with powerful yet practical channel coding.

Book
01 Feb 1978
TL;DR: Several intuitive design approaches and also a general design philosophy based upon the generation of fake processes i.e., finite entropy processes which are close to the process one wishes to compress are presented.
Abstract: Recent results in information theory promise the existence of tree and trellis data compression systems operating near the Shannon theoretical bound, but provide little indication of how actually to design such systems. Presented here are several intuitive design approaches and also a general design philosophy based upon the generation of fake processes, i.e., finite entropy processes which are close (in the generalized Ornstein distance) to the process one wishes to compress. Most of the design procedures can be used for a wide class of sources. Performance is evaluated, via simulations, for memoryless, autoregressive and moving average Gaussian sources and compared to traditional data compression systems. The new schemes typically provide 1-2 dB improvement in performance over the traditional schemes at a rate of 1 bit/symbol. The inevitable increase in complexity is moderate in most cases.

01 Apr 1978
TL;DR: The development of a speech processing computer facility with the ultimate goal of transmitting narrowband speech in real time over the ARPA Network and a reliable method for measuring subjective speech quality are described.
Abstract: This report describes our work in the past three years on data compression and quality evaluation of digital speech. We developed and implemented linear predictive coding (LPC) techniques with the overall objective of digitally transmitting high quality speech at the lowest possible average data rates over packet-switched communication media. Major techniques reported include: covariance lattice method of linear prediction analysis, adaptive lattice methods, linear predictive spectral warping, improved quantization of LPC parameters, variable frame rate transmission of LPC parameters based on a functional perceptual model of speech, and a mixed-source model for the LPC synthesizer to produce more natural-sounding speech. Also, we developed a reliable method for measuring subjective speech quality. This method was employed to formally demonstrate the quality improvements provided by our speech analysis/synthesis techniques as well as for studying speech quality as a function of LPC parameters. As subjective procedures are generally expensive and time-consuming, we developed and tested several objective procedures for speech quality evaluation. The results from these objective procedures were found to be highly correlated to the corresponding subjective quality judgments. Another highlight of our work is the development of a speech processing computer facility with the ultimate goal of transmitting narrowband speech in real time over the ARPA Network.


Proceedings ArticleDOI
07 Dec 1978
TL;DR: Adaptive Hybrid Picture Coding is considered as a method of extracting subsources from the composite sources in such a way that the over-all communication problem can be viewed as two different, but connected communication requirements.
Abstract: The approaches to achieving data compression when the source is a class of images have generally been variants of either unitary transform encoding or time domain encoding. Various hybrid approaches using DPCM in tandem with unitary transforms have been suggested. However, the problems of picture statistics dependence and error propagation cannot be solved by these approaches because the transformed picture elements form a non-stationary signal class. Naturally, a constant set of DPCM predictor coefficients cannot be optimal for all users. However, a composite non-stationary signal source can be decomposed into simpler subsources if it exhibits certain characteristics. Adaptive Hybrid Picture Coding (AHPC) is considered as a method of extracting these subsources from the composite sources in such a way that the over-all communication problem can be viewed as two different, but connected communication requirements. One requirement is the transmission of a set of sequences that are formed by the predictor coefficients. Each of these sequences forms a subsource. The additional requirement is the transmission of the error sequence. An intermediate fidelity requirement is presented which describes the effects of predictor parameter distortion on the transmission requirements for the error signal. The rate distortion bound on the channel requirements for the transmission of the predictor coefficients and the error signal is determined subject to a dual fidelity criterion. The signal class is a set of one dimensional unitary transformed images.

Patent
16 Oct 1978
TL;DR: In this article, a dual-mode encoding and decoding procedure enables image data to be compressed optionally in one-dimensional (1D) or two-dimensional(2D) mode.
Abstract: A dual-mode encoding and decoding procedure enables image data to be compressed optionally in one-dimensional (1D) mode or two-dimensional (2D) mode. In 1D mode, color transitions in the image are encoded as run length features only. In 2D mode, the transitions are encoded as vertical correlation features wherever possible, and where this is not possible, the transitions are encoded as run length features. The compression achieved by run length encoding in 2D mode may be enhanced in those instances where the "history line" which precedes the current scan line contains a transition located between points that are vertically aligned with the beginning and end points of the run currently being encoded. Run length counting is suspended for those pels in the current run that could have been referenced to the history transition if the run had ended with any of these pels, thereby enabling the run to be encoded as though it contained fewer pels than its actual length. Compression may be enhanced still further by dynamically interchanging the variable-length bit patterns respectively representing certain vertical correlation and run length prefix codes, depending upon whether the preceding transition was encoded as a vertical correlation feature or a run length feature.

Proceedings ArticleDOI
07 Dec 1978
TL;DR: An end-to-end (film transparency in to image out) mechanization of a high-resolution, high-speed film scanning system employing optically butted linear CCD's is described, removing objectionable artifacts caused by CCD nonuniformities.
Abstract: The advent of solid-state linear charge-coupled devices (CCD's), as well as the recent developments in high-speed digital-computing elements, has made possible real-time processing of video image information at high data rates. An end-to-end (film transparency in to image out) mechanization of a high-resolution, high-speed film scanning system employing optically butted linear CCD's is described. The system includes a precision film scanner, a digital data transmitter that performs 2:1 differential pulse code modulation (DPCM) data compression as well as data serialization, and a digital data receiver that performs the inverse data compression algorithm. End-to-end system offset and gain correction are performed in real time on a pixel-by-pixel basis, removing objectionable artifacts caused by CCD nonuniformities. The resulting video data stream can either be viewed on a soft copy TV monitor or be read into a hard copy laser recorder. The scan spot size and pitch are less than 3 microns.


Patent
16 Oct 1978
TL;DR: In this paper, the compression factor of the entire original is enhanced by dividing the picture signals into several blocks, selecting for each block the compression system suited to the type of content in the original, carrying out the compression block by block, and transmitting in preference the most highly compressed output.
Abstract: PURPOSE: To enhance the compression factor of the entire original by dividing the picture signals into several blocks, selecting for each block the compression system suited to the type of content in the original, carrying out the compression block by block, and then transmitting in preference the most highly compressed output. CONSTITUTION: Input signal 1 is supplied to the block division circuit consisting of buffer circuit 2 and address counter 3. When the count value of counter 3 reaches the value obtained by dividing the number of bits in one scanning line by the number of blocks, the contents of circuit 2 are supplied to compression circuits 4∼4'''. Circuits 4∼4''' implement several types of compression systems; the compression suited to the original type is carried out, and the compressed signals are supplied to buffer circuits 5∼5'''. These compressed signals are assigned addresses in sequence via address counters 6∼6''', and the count values are compared at comparator 7. Gate circuit 8 opens the gate for the minimum-count signal, a flag bit is added through flag generation circuit 9, and the compressed data is delivered via output buffer circuit 10. With this constitution, the compression factor can be enhanced greatly for the entire page, including halftone photographs, text, line work and other content.
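
The CONSTITUTION amounts to running several compressors on each block and keeping the shortest output, tagged with a flag identifying the chosen system. A minimal sketch of that selection (the candidate compressors below are placeholders, not those in the patent):

```python
import zlib

def rle(data: bytes) -> bytes:
    """Toy run-length coder, standing in for one of the candidate compressors."""
    out, i = bytearray(), 0
    while i < len(data):
        run = 1
        while i + run < len(data) and data[i + run] == data[i] and run < 255:
            run += 1
        out += bytes([run, data[i]])
        i += run
    return bytes(out)

COMPRESSORS = [rle, zlib.compress]              # placeholder compression systems

def compress_block(block: bytes) -> bytes:
    """Try every candidate compressor on the block and keep the shortest
    output, prefixed by a one-byte flag saying which system was chosen."""
    candidates = []
    for flag, f in enumerate(COMPRESSORS):
        payload = f(block)
        candidates.append((len(payload), flag, payload))
    size, flag, payload = min(candidates)
    return bytes([flag]) + payload

line = bytes(64) + b"some text in the line" + bytes(64)     # mixed blank/text content
blocks = [line[i:i + 32] for i in range(0, len(line), 32)]  # split one scan line into blocks
for b in blocks:
    out = compress_block(b)
    print(f"flag={out[0]} size={len(out)} (original {len(b)})")
```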