Showing papers on "Data compression published in 1975"


Journal ArticleDOI
TL;DR: The weighted sum of the absolute values of the transform coefficients, defined herein as the activity index, is proposed as an objective measure of scene busyness (i.e., the density of significant scene detail).
Abstract: The weighted sum of the absolute values of the transform coefficients, defined herein as the activity index, is proposed as an objective measure of scene busyness (i.e., the density of significant scene detail). For an image divided into subpictures, it is possible to classify each subpicture into a finite number (say four) of categories according to its computed activity index. A different coding scheme, involving different truncation and quantization rules and hence a different number of bits, is used for each activity category. Data compression is efficiently achieved by assigning more bits to code those portions of the image showing the most detail.
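
A minimal sketch of the block-classification step, assuming a 2-D DCT as the transform, unit weights, and quartile thresholds to form the four activity categories (the paper does not fix these choices):

```python
import numpy as np
from scipy.fft import dctn

def activity_index(block, weights=None):
    """Weighted sum of the absolute transform coefficients of one subpicture."""
    coeffs = dctn(block, norm="ortho")          # assumed transform: orthonormal 2-D DCT
    if weights is None:
        weights = np.ones_like(coeffs)          # assumed weights: all ones
    return float(np.sum(weights * np.abs(coeffs)))

def classify_blocks(image, block=16, n_classes=4):
    """Split an image into subpictures and bin each by its activity index."""
    h, w = image.shape
    indices = [activity_index(image[r:r + block, c:c + block])
               for r in range(0, h - block + 1, block)
               for c in range(0, w - block + 1, block)]
    # assumed rule: quantile thresholds give the activity categories
    thresholds = np.quantile(indices, np.linspace(0, 1, n_classes + 1)[1:-1])
    return np.digitize(indices, thresholds)     # category 0 (flat) .. 3 (busy)

img = np.random.default_rng(0).normal(size=(64, 64))
print(classify_blocks(img)[:8])
```

A coder built on this would give category-3 blocks the largest bit budget and category-0 blocks the smallest, which is the bit-assignment idea the abstract describes.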

72 citations


Journal ArticleDOI
TL;DR: Performance in terms of mean-square reconstruction error versus bit rate can be shown to parallel the theoretical rate distortion function for the first-order Markov process, differing from it by about 0.6 bits/sample at low bit rates.
Abstract: Predictive coders have been suggested for use as analog data compression devices. Exact expressions for reconstructed signal error have been rare in the literature. In fact, most results reported in the literature are based on the assumption of Gaussian statistics for prediction error. Predictive coding of first-order Gaussian Markov sequences is considered in this paper. A numerical iteration technique is used to solve for the prediction error statistics expressed as an infinite series in terms of Hermite polynomials. Several interesting properties of predictive coding are thereby demonstrated. First, prediction error is in fact close to Gaussian, even for the binary quantizer. Second, quantizer levels may be optimized at each iteration according to the calculated density. Finally, the existence of correlation between successive quantizer outputs is shown. Using the series solutions described above, performance in terms of mean-square reconstruction error versus bit rate can be shown to parallel the theoretical rate distortion function for the first-order Markov process, differing from it by about 0.6 bits/sample at low bit rates.
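
The following toy simulation, a sketch rather than the paper's Hermite-series iteration, runs DPCM on a unit-variance first-order Gaussian Markov source with a 1-bit quantizer and reports the empirical mean-square reconstruction error:

```python
import numpy as np

def dpcm_ar1(n=20000, rho=0.9, seed=0):
    """DPCM of a unit-variance first-order Gaussian Markov source with a 1-bit
    quantizer; returns the empirical mean-square reconstruction error.
    The quantizer levels +/- sqrt(2/pi)*sigma_e are the Gaussian-optimal choice,
    i.e. exactly the Gaussian approximation the paper examines."""
    rng = np.random.default_rng(seed)
    x = np.zeros(n)
    for t in range(1, n):                               # x[t] = rho*x[t-1] + w[t]
        x[t] = rho * x[t - 1] + rng.normal(0.0, np.sqrt(1 - rho ** 2))
    level = np.sqrt(2 / np.pi) * np.sqrt(1 - rho ** 2)  # 1-bit quantizer level
    xhat_prev, sq_err = 0.0, 0.0
    for t in range(n):
        pred = rho * xhat_prev                          # predict from past reconstruction
        q = level if x[t] - pred >= 0 else -level       # quantize the prediction error
        xhat = pred + q                                 # receiver's reconstruction
        sq_err += (x[t] - xhat) ** 2
        xhat_prev = xhat
    return sq_err / n

print(dpcm_ar1())    # mean-square error at 1 bit/sample
```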

60 citations


Patent
23 Jun 1975
TL;DR: In this paper, a space communication system incorporating a concatenated Reed Solomon Viterbi coding channel is disclosed for transmitting compressed and uncompressed data from a spacecraft to a data processing center on Earth.
Abstract: A space communication system incorporating a concatenated Reed Solomon Viterbi coding channel is disclosed for transmitting compressed and uncompressed data from a spacecraft to a data processing center on Earth. Imaging (and other) data is first compressed into source blocks which are then coded by a Reed Solomon coder and interleaver with parameters J=8, E=16, I=16, followed by a convolutional encoder of parameters k=7, ν=2. The received data is first decoded by a Viterbi decoder, followed by a Reed Solomon decoder and deinterleaver. The output of the latter is then decompressed, based on the compression criteria used in compressing the data in the spacecraft. The decompressed data is processed to reconstruct an approximation of the original data-producing condition or images.
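
Only the symbol-interleaving step is sketched below, under the usual reading of the parameters (J=8-bit Reed-Solomon symbols, codeword length 255 with 2E=32 parity symbols, interleaving depth I=16); the Reed-Solomon and Viterbi codecs themselves are not reproduced:

```python
import numpy as np

# Assumed reading of the parameters: J = 8-bit symbols, so the Reed-Solomon
# codeword length is 2**8 - 1 = 255 with 2*E = 32 parity symbols, and the
# interleaving depth is I = 16.
N, I = 255, 16

def interleave(codewords):
    """Write I codewords as rows and transmit column by column, so a burst of
    channel (Viterbi decoder) errors is spread over many codewords."""
    block = np.asarray(codewords, dtype=np.uint8).reshape(I, N)
    return block.T.reshape(-1)

def deinterleave(symbols):
    """Undo the interleaving; each row is again one codeword for the RS decoder."""
    return np.asarray(symbols, dtype=np.uint8).reshape(N, I).T

data = np.random.default_rng(0).integers(0, 256, size=I * N, dtype=np.uint8)
assert np.array_equal(deinterleave(interleave(data)), data.reshape(I, N))
```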

57 citations


Journal ArticleDOI
01 Aug 1975
TL;DR: Data compression experience with four large data bases is described, together with the choice of data structures, compression strategies, and storage devices that are optimal for the usage pattern observed on each cluster of commonly accessed data.
Abstract: The conventional general-purpose data management system tends to make inefficient use of storage space. By reducing the physical size of the data, substantial savings are available in the area of storage cost. Reduction ratios of 4:1 and more are realizable. This also reduces the amount of I/O time required to physically transfer data between secondary and primary memories. Since I/O time tends to be the pacing factor when processing large data bases, this could produce a 4:1 or greater reduction in response time. Data compression experience with four large data bases is described. In some applications, only a small fraction of the data transferred in an individual I/O operation is relevant to the query being processed. If usage patterns are measured and data records and fields are rearranged so that those which are commonly referenced together are also physically stored together, then additional savings are available. Once the data base has been partitioned into clusters of commonly accessed data, further efficiencies can be obtained by choosing data structures, compression strategies, and storage devices that are optimal for the recent usage pattern observed on that cluster.

52 citations


Journal ArticleDOI
TL;DR: In this paper, a non-block source coding (data compression) technique is introduced and a source coding theorem is proved using recently developed techniques from ergodic theory; the existence theorem is valid for all stationary aperiodic sources with finite alphabets and all ergodic sources with separable alphabets.
Abstract: A new nonblock source coding (data compression) technique is introduced and a source coding theorem is proved using recently developed techniques from ergodic theory. The existence theorem is valid for all stationary aperiodic sources (e.g., ergodic sources) with finite alphabets and all ergodic sources with separable alphabets and is proved without Shannon-style random coding arguments. The coding technique and the optimal performance bounds are compared and contrasted with Shannon block coding techniques.

44 citations


Journal ArticleDOI
TL;DR: This work proposes using a convolutional encoder for joint source and channel encoding; when the channel is noiseless, the scheme reduces to a convolutional source code that is simpler to encode than any other optimal noiseless source code known to date.
Abstract: In certain communications problems, such as remote telemetry, it is important that any operations performed at the transmitter be of a simple nature, while operations performed at the receiver can frequently be orders of magnitude more complex. Channel coding is well matched to such situations while conventional source coding is not. To overcome this difficulty of usual source coding, we propose using a convolutional encoder for joint source and channel encoding. When the channel is noiseless this scheme reduces to a convolutional source code that is simpler to encode than any other optimal noiseless source code known to date. In either case, decoding can be a minor variation on sequential decoding.
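
A sketch of the transmitter-side simplicity the abstract appeals to: a rate-1/2, constraint-length-7 convolutional encoder is nothing more than a shift register and two parity taps. The 171/133 octal generator pair is a common choice assumed here, not taken from the paper:

```python
def conv_encode(bits, taps=(0o171, 0o133), k=7):
    """Rate-1/2, constraint-length-7 convolutional encoder: one k-bit shift
    register and two parity (XOR) taps per input bit."""
    state, out = 0, []
    for b in bits:
        state = ((state << 1) | b) & ((1 << k) - 1)     # shift the new bit in
        for g in taps:
            out.append(bin(state & g).count("1") & 1)   # parity of the tapped bits
    return out

print(conv_encode([1, 0, 1, 1, 0]))   # 2 output bits per source bit
```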

34 citations


Patent
31 Jan 1975
TL;DR: In this paper, the authors proposed a random walk through Pascal's triangle which is directed by the incoming random source sequence, and the random walk starts at the apex of the triangle and proceeds downward according to an algorithm until it terminates at a boundary which has been constructed in such a way that the encoding of each source sequence can be accomplished in a fixed number of bits.
Abstract: A method and apparatus for data compression which utilizes a random walk through Pascal's triangle which is directed by the incoming random source sequence. The random walk starts at the apex of Pascal's triangle and proceeds downward according to an algorithm until it terminates at a boundary which has been constructed in such a way that the encoding of each source sequence can be accomplished in a fixed number of bits. The fixed-length encoded block consists of a prefix to determine the boundary crossing point in Pascal's triangle and a suffix which represents the encoded form of the input sequence relative to that starting point. Theoretically optimal entropy encoding is achieved by this method.
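
The walk through Pascal's triangle is a form of enumerative coding. The sketch below ranks and unranks fixed-weight binary words using binomial coefficients; the patent's boundary construction, which yields fixed-length blocks for arbitrary source sequences, is not reproduced:

```python
from math import comb

def rank(bits):
    """Index of a fixed-weight binary word among all words with the same length
    and weight, accumulated from binomial coefficients (Pascal's triangle)."""
    index, ones_left = 0, sum(bits)
    n = len(bits)
    for i, b in enumerate(bits):
        if b:
            index += comb(n - i - 1, ones_left)   # words that put a 0 here come first
            ones_left -= 1
    return index

def unrank(index, n, w):
    """Walk back down the triangle to recover the word from its index."""
    bits, ones_left = [], w
    for i in range(n):
        c = comb(n - i - 1, ones_left)
        if ones_left and index >= c:
            bits.append(1)
            index -= c
            ones_left -= 1
        else:
            bits.append(0)
    return bits

x = [0, 1, 1, 0, 1, 0, 0, 1]
assert unrank(rank(x), len(x), sum(x)) == x
```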

19 citations


Book ChapterDOI
Toby Berger
01 Jan 1975
TL;DR: The rate distortion function is defined and a powerful iterative algorithm for calculating it is described and Shannon’s source coding theorems are stated and heuristically discussed.
Abstract: In this introductory lecture we present the rudiments of rate distortion theory, the branch of information theory that treats data compression problems. The rate distortion function is defined and a powerful iterative algorithm for calculating it is described. Shannon’s source coding theorems are stated and heuristically discussed.
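
The iterative algorithm referred to is presumably Blahut's; a minimal reconstruction for one point on R(D), parameterized by the slope s <= 0, looks like this:

```python
import numpy as np

def rate_distortion_point(p, d, s, iters=500):
    """One point on R(D) by Blahut-style alternating minimization.
    p: source probabilities, d: distortion matrix d[x, y], s <= 0: slope parameter."""
    p, d = np.asarray(p, float), np.asarray(d, float)
    q = np.full(d.shape[1], 1.0 / d.shape[1])       # output marginal, start uniform
    for _ in range(iters):
        A = q * np.exp(s * d)                       # unnormalized test channel
        Q = A / A.sum(axis=1, keepdims=True)        # Q[x, y] = Q(y | x)
        q = p @ Q                                   # re-estimate the output marginal
    D = float(np.sum(p[:, None] * Q * d))
    R = float(np.sum(p[:, None] * Q * np.log2(Q / q)))   # mutual information in bits
    return R, D

# Equiprobable binary source with Hamming distortion, where R(D) = 1 - h(D).
print(rate_distortion_point([0.5, 0.5], [[0, 1], [1, 0]], s=-3.0))
```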

16 citations


Journal ArticleDOI
TL;DR: Vocabulary characteristics of various fields are described, and it is shown how the data base may be stored in a compressed form by use of restricted variable length codes that produce a compression not greatly in excess of the optimum that could be achieved through use of Huffman codes.
Abstract: Consideration is given to a document data base that is structured for information retrieval purposes by means of an inverted index and term dictionary. Vocabulary characteristics of various fields are described, and it is shown how the data base may be stored in a compressed form by use of restricted variable length codes that produce a compression not greatly in excess of the optimum that could be achieved through use of Huffman codes. The coding is word oriented. An alternative scheme of word fragment coding is described. It has the advantage that it allows the use of a small dictionary, but is less efficient with respect to compression of the data base.
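
For comparison purposes, the Huffman baseline against which the restricted codes are measured can be computed directly from word frequencies; this sketch builds the optimal code with a heap (the paper's restricted variable length codes are not reproduced):

```python
import heapq
from collections import Counter

def huffman_code(freqs):
    """Optimal binary code for a word dictionary, built with a heap of subtrees."""
    heap = [[f, i, {w: ""}] for i, (w, f) in enumerate(freqs.items())]
    heapq.heapify(heap)
    tiebreak = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)             # two least frequent subtrees
        f2, _, c2 = heapq.heappop(heap)
        merged = {w: "0" + c for w, c in c1.items()}
        merged.update({w: "1" + c for w, c in c2.items()})
        heapq.heappush(heap, [f1 + f2, tiebreak, merged])
        tiebreak += 1
    return heap[0][2]

words = Counter("to be or not to be that is the question".split())
code = huffman_code(words)
avg_bits = sum(words[w] * len(code[w]) for w in words) / sum(words.values())
print(code, avg_bits)
```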

15 citations



Proceedings ArticleDOI
30 Oct 1975
TL;DR: A new technique which jointly applies clustering and source encoding concepts to obtain data compression is described, and its application to multispectral image data of the Earth Resources Technology Satellite is demonstrated.
Abstract: This paper describes a new technique which jointly applies clustering and source encoding concepts to obtain data compression. The cluster compression technique basically uses clustering to extract features from the measurement data set which are used to describe characteristics of the entire data set. In addition, the features may be used to approximate each individual measurement vector by forming a sequence of scalar numbers which define each measurement vector in terms of the cluster features. This sequence, called the feature map, is then efficiently represented by using source encoding concepts. A description of a practical cluster compression algorithm is given and experimental results are presented to show trade-offs and characteristics of various implementations. Examples are provided which demonstrate the application of cluster compression to multispectral image data of the Earth Resources Technology Satellite.
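
A toy version of the idea, assuming plain k-means as the clustering step and cluster indices as the feature map (the paper's practical algorithm differs in detail):

```python
import numpy as np

def cluster_compress(X, k=8, iters=20, seed=0):
    """Toy cluster compression: k-means centroids act as the cluster 'features',
    and each measurement vector is replaced by the index of its nearest feature
    (the feature map), which a source coder can then represent compactly."""
    rng = np.random.default_rng(seed)
    X = np.asarray(X, dtype=float)                        # shape (n_vectors, n_bands)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centers) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)  # update each feature
    return labels, centers                                # feature map + features

X = np.random.default_rng(1).normal(size=(1000, 4))       # e.g. 4 spectral bands
labels, centers = cluster_compress(X)
print(labels[:20])
```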

Journal ArticleDOI
TL;DR: The averaging technique is applied to aircraft navigation using two DMEs and either air data or inertial data, demonstrating that significant computation time reductions are possible with some covariance degradation.
Abstract: The goal of data compression or preprocessing is to reduce computational requirements in a Kalman filter while retaining adequate estimation accuracy. A technique which averages batches of data is considered here. Guidelines are given for averaging, such as how often to sample the data, when to average, and how many data points to average. The averaging technique is applied to aircraft navigation using two DMEs and either air data or inertial data. The results demonstrate that significant computation time reductions are possible with some covariance degradation. VOR/DME flight data of aircraft landing approaches were used to verify the analysis and show agreement with the analytical predictions.
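
A sketch of the pre-averaging step only, under the simplifying assumption of scalar measurements taken at a uniform rate:

```python
import numpy as np

def average_batches(z, batch=5):
    """Replace every `batch` raw measurements with their mean. The filter then
    updates at 1/batch of the raw rate with a pseudo-measurement whose noise
    variance is reduced by the batch size, at the cost of some smearing of the
    dynamics (the covariance degradation the paper quantifies)."""
    z = np.asarray(z, dtype=float)
    n = (len(z) // batch) * batch                 # drop any incomplete tail batch
    return z[:n].reshape(-1, batch).mean(axis=1)

print(average_batches(np.arange(12.0), batch=5))  # -> [2. 7.]
```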

Proceedings ArticleDOI
30 Oct 1975
TL;DR: The coding methods suitable for bandwidth compression of multispectral imagery are considered, and the candidates are reduced to three recommended methods: KL-2 Dimensional DPCM, KL-cosine-DPCM, and a cluster coding method.
Abstract: The coding methods suitable for bandwidth compression of multispectral imagery are considered. These methods are compared using various criteria of optimality such as MSE, signal-to-noise ratio, and recognition accuracy, as well as computational complexity. Based on these results the candidate methods are reduced to three recommended methods. These are KL-2 Dimensional DPCM, KL-cosine-DPCM, and a cluster coding method. The performance of the recommended methods is examined in the presence of a noisy channel and when concatenated with a variable-rate (Huffman) encoder.

Journal ArticleDOI
TL;DR: In this article, maximum likelihood and minimum transform chi square estimators are investigated for the local processor to estimate a signal corrupted by noise that is sampled and quantized at a high data rate.
Abstract: This paper treats the problem of estimating a signal corrupted by noise that is sampled and quantized at a high data rate. Local and global processors are proposed to achieve data compression that permits near optimal extraction of information. Two techniques, maximum likelihood and minimum transform chi square (both in the class of best asymptotically normal estimators), are investigated for the local processor. Simulation results are presented to demonstrate the feasibility of the approach.

Proceedings ArticleDOI
30 Oct 1975
TL;DR: This work addresses the individual error sources in transform image coding by separating the component of the MSE introduced by transform coefficient deletion from requantization effects.
Abstract: The mean square error (MSE) is a classical measure of image distortion. However, this metric is generally not a faithful indication of subjective image quality. We attempt to correct this deficiency by addressing the individual error sources in transform image coding. Specifically, the component of the MSE introduced by transform coefficient deletion is separated from requantization effects. Results are demonstrated in both numerical and pictorial form.
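
A sketch of the error separation, assuming an orthonormal 2-D DCT, largest-magnitude coefficient selection, and a uniform requantizer (none of which are specified by the abstract):

```python
import numpy as np
from scipy.fft import dctn, idctn

def mse_components(block, keep=64, step=0.5):
    """Split one block's coding MSE into a coefficient-deletion part and a
    requantization part. Because the transform is orthonormal, squared error in
    the transform domain equals squared error in the picture domain, so the
    two components add up to the total MSE."""
    C = dctn(block, norm="ortho")
    keep_idx = np.argsort(np.abs(C), axis=None)[::-1][:keep]   # largest coefficients
    mask = np.zeros(C.shape, dtype=bool)
    mask.flat[keep_idx] = True
    Cq = np.where(mask, step * np.round(C / step), 0.0)        # uniform requantizer
    deletion_mse = np.sum(C[~mask] ** 2) / C.size
    requant_mse = np.sum((C[mask] - Cq[mask]) ** 2) / C.size
    total_mse = np.mean((block - idctn(Cq, norm="ortho")) ** 2)
    return deletion_mse, requant_mse, total_mse

blk = np.random.default_rng(0).normal(size=(16, 16))
d_del, d_req, d_tot = mse_components(blk)
print(d_del, d_req, d_tot, abs(d_del + d_req - d_tot) < 1e-12)
```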

Book ChapterDOI
01 Jan 1975
TL;DR: The basic purpose of data compression is to massage a data stream to reduce the average bit rate required for transmission or storage by removing unwanted redundancy and/or unnecessary precision.
Abstract: The basic purpose of data compression is to massage a data stream to reduce the average bit rate required for transmission or storage by removing unwanted redundancy and/or unnecessary precision. A mathematical formulation of data compression providing figures of merit and bounds on optimal performance was developed by Shannon [1,2], both for the case where a perfect compressed reproduction is required and for the case where a certain specified average distortion is allowable. Unfortunately, however, Shannon’s probabilistic approach requires precise advance knowledge of the statistical description of the process to be compressed - a demand rarely met in practice. The coding theorems only apply, or are meaningful, when the source is stationary and ergodic.

Proceedings ArticleDOI
30 Oct 1975
TL;DR: A real-time digital video processor using Hadamard transform techniques to reduce video bandwidth is described and algorithms related to spatial compression, temporal compression, and the adaptive selection of parameter sets are described.
Abstract: A real-time digital video processor using Hadamard transform techniques to reduce video bandwidth is described. The processor can be programmed with different parameters to investigate various algorithms for bandwidth compression. The processor is also adaptive in that it can select different parameter sets to trade off spatial resolution for temporal resolution in the regions of the picture that are moving. Algorithms used in programming the system are described along with results achieved at various levels of compression. The algorithms relate to spatial compression, temporal compression, and the adaptive selection of parameter sets.
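
A sketch of the per-block processing, assuming 8x8 blocks, an orthonormal Hadamard matrix, and simple largest-coefficient truncation; the processor's adaptive parameter selection is not modeled:

```python
import numpy as np
from scipy.linalg import hadamard

def hadamard_block(block, keep=16):
    """2-D Hadamard transform of one 8x8 block followed by truncation to the
    `keep` largest coefficients; the transform needs only additions and
    subtractions, which is why it suited real-time hardware."""
    n = block.shape[0]
    H = hadamard(n) / np.sqrt(n)                   # orthonormal Hadamard matrix
    C = H @ block @ H.T                            # forward 2-D transform
    order = np.argsort(np.abs(C), axis=None)[::-1][:keep]
    Ct = np.zeros_like(C)
    Ct.flat[order] = C.flat[order]                 # keep the largest coefficients
    return H.T @ Ct @ H                            # reconstruction

blk = np.random.default_rng(0).normal(size=(8, 8))
print(np.mean((blk - hadamard_block(blk)) ** 2))   # reconstruction MSE
```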


Journal ArticleDOI
TL;DR: Certain sequences that have zero aperiodic autocorrelation except for zero and the maximum shifts are described, useful in radar pulse compression.
Abstract: Certain sequences that have zero aperiodic autocorrelation except for zero and the maximum shifts are described. They are useful in radar pulse compression.
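
Checking the property is straightforward; the length-3 sequence below is only an illustrative example of zero interior sidelobes, not one of the sequences from the paper:

```python
import numpy as np

def aperiodic_autocorrelation(seq):
    """Aperiodic (linear) autocorrelation for shifts 0 .. N-1."""
    a = np.asarray(seq, dtype=float)
    n = len(a)
    return np.array([np.dot(a[: n - k], a[k:]) for k in range(n)])

# Zero everywhere except the zero shift and the maximum shift:
print(aperiodic_autocorrelation([1, 1, -1]))   # -> [ 3.  0. -1.]
```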

Patent
Ishii Atsushi, Kiyoshi Oikawa, Tadaaki Suzuki, Tetsuro Morita, Yoshio Iizuka
18 Dec 1975

Proceedings ArticleDOI
30 Oct 1975
TL;DR: A bandwidth compression system for the transmission of video images from remotely piloted vehicles has been built and demonstrated; its novel features are the use of the Constant Area Quantization (CAQ) technique to obtain a spatial bit rate reduction of 6:1 and a rugged and compact scan convertor, based on a core memory, to accommodate temporal frame rate reduction.
Abstract: A bandwidth compression system for the transmission of video images from remotely piloted vehicles has been built and demonstrated. Novel features of this system are the use of the Constant Area Quantization (CAQ) technique to obtain spatial bit rate reduction of 6:1 and a rugged and compact scan convertor, based on a core memory, to accommodate temporal frame rate reduction. Based on the ability of the human eye to perceive more detail in high contrast regions than in low, the CAQ method transmits higher resolution in the former areas. The original six-bit digitized video is converted to a three level signal by the quantizing circuit and then Huffman-encoded to exploit its statistical properties and reduce it further to one bit per pixel. These circuits operate on one line of the picture at a time, and can handle information at full video (10 MHz) rate. The compressed information when received on the ground is stored in coded form in a two-frame (500,000 bit) digital core memory. One frame of the memory is filled while the other is being displayed and then the two are interchanged. Decoding and reconstruction of the video are performed between the memory and the display.

Journal ArticleDOI
TL;DR: An efficient algorithm is developed to reconstruct only those rows of Z satisfying the conditions specified by a given data retrieval descriptor, which illustrates that using unambiguous bit matrices as data files is desirable not only for the purpose of data compression but also for the purpose of fast data retrieval.
Abstract: Algorithms to check whether a bit matrix is unambiguous or a sum set is unique are given. Let an unambiguous bit matrix Z be represented by its row sums and column sums. An efficient algorithm is developed to reconstruct only those rows of Z satisfying the conditions specified by a given data retrieval descriptor. This algorithm illustrates that using unambiguous bit matrices as data files is desirable not only for the purpose of data compression but also for the purpose of fast data retrieval.
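
One way to test unambiguity, used here as a sketch and not necessarily the paper's own algorithm, is the classical interchange criterion: a 0-1 matrix is the unique matrix with its row and column sums exactly when it contains no 2x2 submatrix equal to [[1,0],[0,1]] or [[0,1],[1,0]]:

```python
import numpy as np
from itertools import combinations

def is_unambiguous(Z):
    """True iff the 0-1 matrix Z is the only matrix with its row and column sums,
    tested via the interchange criterion."""
    Z = np.asarray(Z)
    for r1, r2 in combinations(range(Z.shape[0]), 2):
        for c1, c2 in combinations(range(Z.shape[1]), 2):
            s = Z[np.ix_([r1, r2], [c1, c2])]
            if s[0, 0] == s[1, 1] != s[0, 1] == s[1, 0]:   # an interchange exists
                return False
    return True

print(is_unambiguous([[1, 1, 0], [1, 0, 0], [0, 0, 0]]))   # True
print(is_unambiguous([[1, 0], [0, 1]]))                    # False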

Proceedings Article
01 Jan 1975
TL;DR: The concept is used to define an Information Extraction System intended to remove data redundancy early in the sensor to user link in both classification and film interpretation applications.
Abstract: This paper describes a concept of Joint Classification and Data Compression of multidimensional information sources in the context of applications to Earth Resources Technology Satellites. The concept is used to define an Information Extraction System intended to remove data redundancy early in the sensor-to-user link in both classification and film interpretation applications. The approach uses cascaded partially supervised clustering to extract spectral intensity features from spatially local sources. Data compression is then used to efficiently represent spatial features within the spectral intensity feature map.


Book ChapterDOI
01 Jan 1975
TL;DR: A tutorial description of numerous recent approaches and results generalizing the Shannon approach to unknown statistical environments is given, with simple examples and empirical results to illustrate the essential ideas.
Abstract: A rigorous real-variables treatment of general data compression and encoding problems is given, centered on formulation and proof of relevant existence theorems and a unified formulation of source coding (both noiseless and with a fidelity criterion) in inaccurately or incompletely specified statistical environments. Difficulties in modeling of sources with unknown or imperfectly known statistical descriptions are analyzed, and source codes (SC) are classified (variable-rate noiseless SC, fixed-rate noiseless SC) and analyzed, along with types of code sequences (weighted-universal, maximin-universal, strongly or weakly minimax-universal). Universal coding on video data, variable-rate coding with distortion, and distortion-rate functions are discussed. Design strategies for universal coding algorithms are suggested, but the article is not oriented to specific methods of synthesizing data compression systems.

Journal ArticleDOI
TL;DR: In this paper, a method of representing functions of two variables as a series of products of functions of one variable is considered, and the results of the application of this expansion to approximation problems, data compression, filtering of very noisy signals and the reduction of two-dimensional inverse problems to a small number of one-dimensional operations are discussed.
Abstract: A method of representing functions of two variables as a series of products of functions of one variable is considered. The results of the application of this expansion to approximation problems, data compression, filtering of very noisy signals and the reduction of two-dimensional inverse problems to a small number of one-dimensional operations are discussed.
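
On a sampled grid, one concrete realization of such an expansion is the truncated singular value decomposition; a short sketch (the paper's own construction may differ):

```python
import numpy as np

def separable_approximation(F, terms):
    """Approximate a sampled function of two variables by a short sum of
    products u_k(x) * v_k(y); on a grid this is a truncated SVD."""
    U, s, Vt = np.linalg.svd(np.asarray(F, dtype=float), full_matrices=False)
    return (U[:, :terms] * s[:terms]) @ Vt[:terms, :]

x = np.linspace(0, 1, 64)[:, None]
y = np.linspace(0, 1, 64)[None, :]
F = np.sin(3 * x + y) + 0.1 * x * y       # sum of three separable products
F3 = separable_approximation(F, terms=3)  # three terms reproduce it almost exactly
print(np.max(np.abs(F - F3)))
```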

Proceedings ArticleDOI
B. G. Haskell, P. L. Gordon
30 Oct 1975
TL;DR: The channel capacity required for long distance digital transmission of video signals can be substantially reduced through a system using interframe coding because of the amount of frame-to-frame redundancy which exists in a video source output.
Abstract: The channel capacity required for long distance digital transmission of video signals can be substantially reduced through a system using interframe coding. This is possible because of the amount of frame-to-frame redundancy which exists in a video source output. Through appropriate signal processing it is feasible to send only the changed area of each frame rather than the full frame provided that the previous frame values are retained in memories.
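
A miniature version of conditional replenishment under the assumptions of block-wise comparison and a mean-absolute-difference threshold (the actual interframe coder is more elaborate):

```python
import numpy as np

def changed_blocks(prev_frame, cur_frame, block=8, threshold=2.0):
    """Compare each block of the current frame with the stored previous frame
    and return only the blocks whose mean absolute difference exceeds the
    threshold; only these need to be transmitted, and the receiver patches them
    into its retained copy of the previous frame."""
    h, w = cur_frame.shape
    updates = []
    for r in range(0, h - block + 1, block):
        for c in range(0, w - block + 1, block):
            diff = np.abs(cur_frame[r:r + block, c:c + block].astype(float)
                          - prev_frame[r:r + block, c:c + block])
            if diff.mean() > threshold:
                updates.append((r, c, cur_frame[r:r + block, c:c + block]))
    return updates   # list of (row, col, block) updates to transmit
```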


Journal ArticleDOI
TL;DR: Alternative methods for compression of numbers stored in binary coded decimal (BCD) form are presented and the relative advantages of these methods are discussed.
Abstract: Tien Chi Chen and Irving T. Ho in their paper “Storage-Efficient Representation of Decimal Data” [1] present a scheme for the compression of numbers stored in binary coded decimal (BCD) form. Their compression scheme involves coding two or three digits at a time in seven bit or ten bit fields and requires only permutations, deletions, and insertions for coding and decoding. We would like to briefly present some alternatives to their compression method and to discuss the relative advantages of these other methods.
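
The most direct alternative, binary coding of three-digit groups, already reaches the same 10-bit density as the Chen-Ho code (999 < 1024); the sketch below illustrates that packing only and is not necessarily one of the letter's own alternatives:

```python
def pack3(d1, d2, d3):
    """Pack three decimal digits into 10 bits by plain binary coding of the
    group value (0..999 < 1024), versus 12 bits in BCD."""
    return d1 * 100 + d2 * 10 + d3

def unpack3(v):
    return v // 100, (v // 10) % 10, v % 10

assert unpack3(pack3(9, 4, 7)) == (9, 4, 7)
```

The Chen-Ho scheme reaches the same density but needs only bit permutations, insertions, and deletions rather than a decimal-binary conversion, which is the trade-off the alternatives revisit.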