
Showing papers on "Data compression published in 1976"


Proceedings ArticleDOI
12 Apr 1976
TL;DR: The utility and effectiveness of these transforms are evaluated in terms of some standard performance criteria such as computational complexity, variance distribution, mean-square error, correlated rms error, rate distortion, data compression, classification error, and digital hardware realization.
Abstract: A tutorial-review paper on discrete orthogonal transforms and their applications in digital signal and image (both monochrome and color) processing is presented. Various transforms such as discrete Fourier, discrete cosine, Walsh-Hadamard, slant, Haar, discrete linear basis, Hadamard-Haar, rapid, lower triangular, generalized Haar, slant Haar and Karhunen-Loeve are defined and developed. Pertinent properties of these transforms such as power spectra, cyclic and dyadic convolution and correlation are outlined. Efficient algorithms for fast implementation of these transforms based on matrix partitioning or matrix factoring are presented. The application of these transforms in speech and image processing, spectral analysis, digital filtering (linear, nonlinear, optimal and suboptimal), nonlinear systems analysis, spectrography, digital holography, industrial testing, spectrometric imaging, feature selection, and pattern recognition is presented. The utility and effectiveness of these transforms are evaluated in terms of some standard performance criteria such as computational complexity, variance distribution, mean-square error, correlated rms error, rate distortion, data compression, classification error, and digital hardware realization.

928 citations
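A small, self-contained sketch of the energy-compaction property that makes these transforms useful for data compression. This is a naive O(N^2) DCT-II in plain Python; the fast matrix-factoring algorithms the paper surveys are not reproduced here, and the ramp test signal is an arbitrary illustrative choice.

```python
import math

def dct2(x):
    """Naive O(N^2) DCT-II; the paper surveys fast factorizations instead."""
    n = len(x)
    return [sum(x[i] * math.cos(math.pi * k * (2 * i + 1) / (2 * n))
                for i in range(n))
            for k in range(n)]

# A smooth, highly correlated signal (a ramp) concentrates its energy
# in the first few transform coefficients.
signal = [1.0 - i / 31 for i in range(32)]
coeffs = dct2(signal)
low = sum(c * c for c in coeffs[:4])
total = sum(c * c for c in coeffs)
print(round(low / total, 3))  # close to 1.0: a few coefficients carry nearly all the energy
```

Keeping the large low-order coefficients and quantizing or discarding the rest is the basic mechanism behind transform data compression, and it is what the variance-distribution and mean-square-error criteria above measure.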



Journal ArticleDOI
A. Jain
TL;DR: In this paper, the Karhunen-Loeve transform for a class of signals is proven to be a set of periodic sine functions, and this Karhunen-Loeve series expansion can be obtained via an FFT algorithm.
Abstract: The Karhunen-Loeve transform for a class of signals is proven to be a set of periodic sine functions and this Karhunen-Loeve series expansion can be obtained via an FFT algorithm. The resulting fast algorithm could be useful in data compression and other mean-square signal processing applications.

215 citations
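Jain's result says the KLT basis for this signal class consists of sampled sine functions, i.e. a discrete sine transform, which (unlike a general KLT) can be computed fast. A minimal sketch verifying the key property, orthogonality of the sine basis; the size N and normalization are illustrative choices, not taken from the paper:

```python
import math

N = 8
# Sine basis vectors: row k is sin(pi*(i+1)*(k+1)/(N+1)), i = 0..N-1.
basis = [[math.sin(math.pi * (i + 1) * (k + 1) / (N + 1)) for i in range(N)]
         for k in range(N)]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# Distinct sine vectors are orthogonal and share the same squared norm
# (N+1)/2, so the transform is orthogonal up to scale -- the property
# a Karhunen-Loeve transform needs.
assert abs(dot(basis[0], basis[1])) < 1e-9
assert abs(dot(basis[2], basis[2]) - (N + 1) / 2) < 1e-9
```

Because each basis vector is a sampled sine, the forward transform can be embedded in an FFT of an odd-symmetric extension of the signal, which is what makes the algorithm fast.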


A. Jain
01 Sep 1976
TL;DR: The Karhunen-Loeve transform for a class of signals is proven to be a set of periodic sine functions and this Karhunen-Loeve series expansion can be obtained via an FFT algorithm, which could be useful in data compression and other mean-square signal processing applications.
Abstract: The Karhunen-Loeve transform for a class of signals is proven to be a set of periodic sine functions and this Karhunen-Loeve series expansion can be obtained via an FFT algorithm. The resulting fast algorithm could be useful in data compression and other mean-square signal processing applications.

211 citations


Patent
27 Feb 1976
TL;DR: In this article, a line image or a line signature is optically scanned to generate digital signals for storage in an image matrix and the digital signals represent black and white cells defining the line signature or line image and are processed by tracing the image boundary.
Abstract: A line image or a line signature is optically scanned to generate digital signals for storage in an image matrix. These digital signals represent black and white cells defining the line signature or line image and are initially processed by tracing the image boundary. During the tracing a "thinning" or "peeling off" operation is performed that evaluates black cells in the image matrix for conversion into white cell digital signals. This thinning or peeling off process, also identified as data compression, continues until the line signature or line image is composed of a single cell thickness. The final phase of the data compression operation includes another boundary tracing of the one cell thick image, and connecting a sequence of boundary points defining each black cell to form a string of vectors which represent the signature. The resulting vector catalog comprises a vector starting point and vector directions, which are encoded and stored for future retrieval. When a stored line signature or line image is to be retrieved for display, the encoded vector data is recalled from storage to regenerate the original vectors on a cathode ray tube. This operation is known as data decompression and produces on the cathode ray tube a synthesis of the original line image or line signature. The compression and decompression operations, except thinning or peeling off, are also applicable to textured images or images having grayscale and thickness. Such textured images are first subdivided into binary images, each representing one bit of the grayscale, then the vector boundary encoding process is completed without thinning. The encoded vectors are stored for subsequent retrieval and display.

95 citations
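A minimal sketch of the final encoding stage described above: once the image is thinned to a single cell thickness, the traced line can be stored as a starting point plus a string of direction vectors. The 8-direction chain code below is an illustrative stand-in, not the patent's exact vector format:

```python
# Eight neighbor directions, indexed 0..7 (assumed ordering for this sketch).
DIRS = [(1, 0), (1, 1), (0, 1), (-1, 1), (-1, 0), (-1, -1), (0, -1), (1, -1)]

def encode(points):
    """Ordered cells of a one-cell-thick line -> (start point, direction codes)."""
    codes = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        codes.append(DIRS.index((x1 - x0, y1 - y0)))
    return points[0], codes

def decode(start, codes):
    """Replay the direction codes from the start point to rebuild the line."""
    pts = [start]
    for c in codes:
        dx, dy = DIRS[c]
        x, y = pts[-1]
        pts.append((x + dx, y + dy))
    return pts

line = [(0, 0), (1, 1), (2, 2), (3, 2), (4, 2)]
start, codes = encode(line)
assert decode(start, codes) == line  # lossless round trip
```

Each step needs only 3 bits instead of a full coordinate pair, which is where the compression comes from.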


Journal ArticleDOI
Frank Rubin
TL;DR: A system for the compression of data files, viewed as strings of characters, is presented, which applies equally well to English, to PL/I, or to digital data.
Abstract: A system for the compression of data files, viewed as strings of characters, is presented. The method is general, and applies equally well to English, to PL/I, or to digital data. The system consists of an encoder, an analysis program, and a decoder. Two algorithms for encoding a string differ slightly from earlier proposals. The analysis program attempts to find an optimal set of codes for representing substrings of the file. Four new algorithms for this operation are described and compared. Various parameters in the algorithms are optimized to obtain a high degree of compression for sample texts.

87 citations
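A toy illustration of the core idea of coding substrings rather than single characters. The code table here is hand-picked; Rubin's analysis program searches for an optimal code set, which this sketch does not attempt:

```python
# Hand-picked table mapping common substrings to single-byte escape codes
# (illustrative only; a real system would derive the table from the file).
TABLE = {"the ": "\x01", "ing ": "\x02", "and ": "\x03"}
INV = {v: k for k, v in TABLE.items()}

def compress(text):
    for sub, code in TABLE.items():
        text = text.replace(sub, code)
    return text

def expand(text):
    for code, sub in INV.items():
        text = text.replace(code, sub)
    return text

sample = "the cat and the dog running and jumping "
packed = compress(sample)
assert expand(packed) == sample   # lossless
assert len(packed) < len(sample)  # each 4-char substring became 1 byte
```

The interesting part of the paper is not this replacement step but choosing which substrings deserve codes, which is what its four analysis algorithms compare.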


Journal ArticleDOI
TL;DR: A "universal" generalization of syndrome-source-coding is formulated which provides robustly effective distortionless coding of source ensembles and can achieve arbitrarily small distortion with the number of compressed digits per source digit arbitrarily close to the entropy of a binary memoryless source.
Abstract: A method of using error-correcting codes to obtain data compression, called syndrome-source-coding, is described in which the source sequence is treated as an error pattern whose syndrome forms the compressed data. It is shown that syndrome-source-coding can achieve arbitrarily small distortion with the number of compressed digits per source digit arbitrarily close to the entropy of a binary memoryless source. A "universal" generalization of syndrome-source-coding is formulated which provides robustly effective distortionless coding of source ensembles. Two examples are given comparing the performance of noiseless universal syndrome-source-coding to 1) run-length coding and 2) Lynch-Davisson-Schalkwijk-Cover universal coding for an ensemble of binary memoryless sources.

71 citations
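A concrete sketch of the syndrome idea using the Hamming (7,4) code (my choice of code for illustration): a binary source block with at most one 1 per 7 digits is treated as an error pattern, and only its 3-bit syndrome is kept; since each such pattern has a distinct syndrome, reconstruction is exact.

```python
# Columns of the Hamming (7,4) parity-check matrix: the 3-bit values 1..7.
H_COLUMNS = [(j >> 2 & 1, j >> 1 & 1, j & 1) for j in range(1, 8)]

def syndrome(block):
    """7 source bits (the 'error pattern') -> 3 compressed bits."""
    s = [0, 0, 0]
    for j, bit in enumerate(block):
        if bit:
            for r in range(3):
                s[r] ^= H_COLUMNS[j][r]
    return tuple(s)

def reconstruct(s):
    """3-bit syndrome -> the unique weight-at-most-1 source block."""
    block = [0] * 7
    if s != (0, 0, 0):
        block[H_COLUMNS.index(s)] = 1
    return block

for pos in range(7):
    block = [0] * 7
    block[pos] = 1
    assert reconstruct(syndrome(block)) == block
assert reconstruct(syndrome([0] * 7)) == [0] * 7
```

For sparse sources this gives 3 compressed digits per 7 source digits with zero distortion, illustrating how the syndrome rate can approach the source entropy as the paper proves.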


Proceedings ArticleDOI
13 Dec 1976
TL;DR: A practical video bandwidth compression system is described in detail that reduces transmission data rates to less than 1 bit/pel while maintaining good picture quality.
Abstract: A practical video bandwidth compression system is described in detail. Video signals are Haar transformed in 2 dimensions and the resulting transform coefficients are adaptively filtered to reduce transmission data rates to less than 1 bit/pel, while maintaining good picture quality. Error detection and compensation is built into the transmission bit structure. © (1976) COPYRIGHT SPIE--The International Society for Optical Engineering. Downloading of the abstract is permitted for personal use only.

37 citations
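One level of the 2-D Haar transform on a 2x2 block can be sketched in a few lines (the paper's adaptive filtering of the resulting coefficients is not reproduced here):

```python
def haar2x2(a, b, c, d):
    """2x2 pixel block -> (average, horizontal, vertical, diagonal detail)."""
    return ((a + b + c + d) / 4, (a - b + c - d) / 4,
            (a + b - c - d) / 4, (a - b - c + d) / 4)

def inv_haar2x2(s, h, v, g):
    """Exact inverse of haar2x2."""
    return (s + h + v + g, s - h + v - g, s + h - v - g, s - h - v + g)

block = (10.0, 12.0, 11.0, 13.0)
coeffs = haar2x2(*block)
assert inv_haar2x2(*coeffs) == block
# Smooth blocks put most energy in the average term, so the detail
# coefficients can be coarsely quantized or dropped to get below 1 bit/pel.
```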


ReportDOI
12 Apr 1976
TL;DR: An adaptive transform coding algorithm based on a recursive procedure in the transform domain has been developed in which both the quantization parameters and the bit assignment are dynamically determined and thus closely matched to the actual image structure.
Abstract: An adaptive transform coding algorithm based on a recursive procedure in the transform domain has been developed. Both the quantization parameters and the bit assignment are dynamically determined and thus are closely matched to the actual image structure. Overhead requirements are minimal.

31 citations


Patent
Ishiguro Tatsuo
08 Apr 1976
TL;DR: In this article, the coding error occurring in the interframe coding process is effectively corrected by low-bit intraframe coding with limited increase in the volume of coding information, which permits raising the significance determination threshold value without impairment of picture quality.
Abstract: A television signal encoder uses correlation between frames for data compression or reduction in redundancy of information to be transmitted. The coding error occurring in the interframe coding process is effectively corrected by low-bit intraframe coding with limited increase in the volume of coding information. This permits raising the significance determination threshold value without impairment of picture quality.

30 citations


Journal ArticleDOI
TL;DR: A real-time digital video processor using Hadamard transform techniques to reduce video bandwidth is described and algorithms related to spatial compression, temporal compression, and the adaptive selection of parameter sets are described.
Abstract: A real-time digital video processor using Hadamard transform techniques to reduce video bandwidth is described. The processor can be programmed with different parameters to investigate various algorithms for bandwidth compression. The processor is also adaptive in that it can select different parameter sets to trade-off spatial resolution for temporal resolution in the regions of the picture that are moving. Algorithms used in programming the system are described along with results achieved at various levels of compression. The algorithms relate to spatial compression, temporal compression, and the adaptive selection of parameter sets.

Patent
21 Jan 1976
TL;DR: In this article, an improved automatically adaptive arrangement for advantageously incorporating nearly instantaneous companding (NIC) and priority trunk rotation in a plurality of frames, called a multiframe, is presented.
Abstract: A digital speech interpolation (DSI) system advantageously utilizes speech inactivity time to reduce the bit rate by compressing digital characters from a plurality of trunks onto a lesser plurality of channels. A signaling arrangement is typically employed therein to signal a receiver as to the activity of a trunk. If the number of active trunks exceeds the number of channels, an overload may exist. Known arrangements for mitigating overload typically include apparatus responsive to an activity signal for truncating one or more bits from the digital characters and for transmitting the truncated characters. Unfortunately, quantization noise is increased and digital precision decreased in such arrangements. The system disclosed herein includes an improved automatically adaptive arrangement for advantageously incorporating nearly instantaneous companding (NIC) and priority trunk rotation in a plurality of frames, called a multiframe. Thereby, a mitigation of overload as well as a lessening of quantization noise relative to known DSI arrangements is attained.

Proceedings ArticleDOI
13 Dec 1976
TL;DR: An operational video compressor was built using field-to-field differencing on fields processed with Landau and Slepian's Hadamard transform method, yielding a substantial performance improvement.
Abstract: An adaptive video compressor was designed as part of the CTS Digital Video Curriculum Sharing Experiment. The compressor was constructed using field-to-field differencing on fields processed using a Hadamard transform method. The spatial resolution was improved using a fixed-rate, three-mode adaptive system to compress each field.


Proceedings ArticleDOI
13 Dec 1976
TL;DR: The channel rate equalization problem inherent in variable rate coding is analysed and specific solutions are developed for adaptive transform coding algorithms.
Abstract: The channel rate equalization problem inherent in variable rate coding is analysed in this paper. Specific solutions are developed for adaptive transform coding algorithms. The actual algorithms depend on either pretransform or post-transform buffering. Simulations indicate small performance variations between the techniques.


Journal ArticleDOI
TL;DR: The goal of this paper is to provide a unified tutorial development of the various algorithms used and proposed for speech data compression by providing sufficient theoretical background to establish the algorithm relationships without stressing mathematical rigor.

Proceedings ArticleDOI
13 Dec 1976
TL;DR: Analytical results generally indicate that due to the lack of high spatial correlation in the Rayleigh distributed radar surface reflectivity, application of data compression to SAR signals and images under the square difference fidelity criterion may be less effective than its application to images obtained using incoherent illumination.
Abstract: This paper describes some analytical results relative to the effectiveness of applying data compression techniques for efficient transmission of synthetic aperture radar (SAR) signals and images. A Rayleigh target model is assumed in the analysis. It is also assumed that all surface reflectivity information is of interest and needs to be transmitted. Spectral characteristics of radar echo signals and processed images are analyzed. Analytical results generally indicate that due to the lack of high spatial correlation in the Rayleigh distributed radar surface reflectivity, application of data compression to SAR signals and images under the square difference fidelity criterion may be less effective than its application to images obtained using incoherent illumination. On the other hand, if certain random variations in radar images are considered as undesirable, a substantial compression ratio may be achieved by removing such variations.

01 Feb 1976
TL;DR: The fast KL transform algorithm was applied for data compression of a set of four ERTS multispectral images and its performance was compared with other techniques previously studied on the same image data.
Abstract: The fast KL transform algorithm was applied for data compression of a set of four ERTS multispectral images and its performance was compared with other techniques previously studied on the same image data. The performance criteria used here are mean square error and signal to noise ratio. The results obtained show a superior performance of the fast KL transform coding algorithm on the data set used with respect to the above-stated performance criteria. A summary of the results is given in Chapter I and details of comparisons and discussion of conclusions are given in Chapter IV.

Journal ArticleDOI
TL;DR: It is shown that the compressor or expander can be simulated by comb filtering followed by frequency translation of the tooth contents, and provides a flexible technique for the two-dimensional smoothing of television pictures in which simultaneous control of vertical and horizontal resolution is possible.
Abstract: The bandwidth compressor-expander developed by Gabor more than 25 years ago is reexamined. It is shown that the compressor or expander can be simulated by comb filtering followed by frequency translation of the tooth contents. With a proper choice of window function, the comb teeth do not overlap and no ambiguity is introduced by the coding process. When applied to television messages, with the line harmonics chosen as preferred frequencies, compression or expansion is best described as providing a line frequency standards conversion. The combined compression-expansion system gives a reduced transmission bandwidth at the expense of vertical resolution and provides a flexible technique for the two-dimensional smoothing of television pictures in which simultaneous control of vertical and horizontal resolution is possible. A sampled signal form of the compressor-expander can be realized in digital form by using shift registers as delay lines. The digital version permits simple control of the compression ratio and preferred frequencies.

Patent
05 Jun 1976
TL;DR: In this article, a data compression system for data transmission is designed to eliminate redundancies in bit series; the series of input bits at the transmitter is processed through a run-length-coding station (2) to produce groups of 3 bits that are entered into a six-stage shift register (31).
Abstract: A data compression system for data transmission is designed to eliminate redundancies in bit series. The series of input bits at the transmitter is processed through a run-length-coding station (2) to produce groups of 3 bits that are entered into a six-stage shift register (31). The two words in the shift register are compared (32) and, if they are identical, a counter (33) is activated (I). Switching stages (4, 5, 6) are controlled (35) to combine the shift register outputs with the counter outputs in a second run-length-coding station (7). If they are identical, the second run-length-coding station produces an output with the redundancy eliminated.

Journal ArticleDOI
TL;DR: In this signal processing system for video disc and home use VTR based on the time-division-multiplex technique, a burst sinusoidal signal is added in the horizontal synchronizing pulse and a playback time reference is derived from it.
Abstract: This paper reports on a signal processing system for video disc and home use VTR based on the time-division-multiplex technique. In this system, the chrominance signal is compressed along the time axis and put in the horizontal blanking period. Thus, it is time-division-multiplexed with the luminance signal. The time-compression is achieved by the time conversion operation of specially developed analog memory. This memory transfers majority carriers in bulk silicon and is provided with sufficient frequency range for the time compression and expansion. In order to eliminate the influence of time base error which is generally introduced during recording and playback processes, a burst sinusoidal signal is added in the horizontal synchronizing pulse and a playback time reference is derived from it.

Proceedings ArticleDOI
13 Dec 1976
TL;DR: In this paper, an approach is proposed that utilizes modern adaptive estimation and identification theory techniques to learn the picture statistics in real time so that an optimal set of coefficients can be identified as the signal statistics change.
Abstract: Historically, the data compression techniques utilized to process image data have been unitary transform encoding or time-domain encoding. Recently, these two approaches have been combined into a hybrid transform-domain/time-domain system. The hybrid system incorporates some of the advantages of both concepts and eliminates some of the disadvantages of each. However, the problems of picture statistics dependence and error propagation still exist. This is due to the fact that the transformed coefficients are non-stationary processes, which implies that a constant DPCM coefficient set cannot be optimal for all scenes. In this paper, an approach is suggested that has the potential of eliminating or greatly alleviating these problems. The approach utilizes modern adaptive estimation and identification theory techniques to "learn" the picture statistics in real time so that an optimal set of coefficients can be identified as the signal statistics change. In this way, the dependency of the system on the picture statistics is greatly reduced. Furthermore, by updating and transmitting a new set of predictor coefficients periodically, the channel error propagation problem is alleviated.
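The adaptive-prediction idea can be sketched with a first-order DPCM predictor whose coefficient is learned online. LMS is used here as a simple stand-in for the estimation and identification techniques the paper proposes; the step size and test signal are illustrative choices:

```python
import math

signal = [math.sin(0.2 * n) for n in range(200)]  # correlated test input

a, mu = 0.0, 0.05          # predictor coefficient and LMS step size (assumed)
residual_energy = 0.0
for n in range(1, len(signal)):
    pred = a * signal[n - 1]
    err = signal[n] - pred          # the DPCM residual that would be coded
    a += mu * err * signal[n - 1]   # LMS update: learn the statistics online
    residual_energy += err * err

signal_energy = sum(x * x for x in signal)
# After adaptation the residual carries far less energy than the signal,
# which is what makes coding the residual instead of the signal pay off.
print(residual_energy < signal_energy)
```

Because the coefficient is re-estimated from the data as it arrives, the predictor tracks changing statistics instead of relying on a constant coefficient set.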

David B. Cooper
01 Apr 1976
TL;DR: Models are described that can accurately represent the type of line drawings occurring in teleconferencing and transmission for remote classrooms and that permit considerable data compression.
Abstract: Models which can be used to accurately represent the type of line drawings which occur in teleconferencing and transmission for remote classrooms and which permit considerable data compression were described. The objective was to encode these pictures in binary sequences of shortest length but such that the pictures can be reconstructed without loss of important structure. It was shown that exploitation of reasonably simple structure permits compressions in the range of 30-100 to 1. When dealing with highly stylized material such as electronic or logic circuit schematics, it is unnecessary to reproduce configurations exactly. Rather, the symbols and configurations must be understood and reproduced, but one can use fixed font symbols for resistors, diodes, capacitors, etc. Compression of pictures of natural phenomena can be realized by taking a similar approach, or essentially zero-error reproducibility can be achieved but at a lower level of compression.

Journal ArticleDOI
TL;DR: The requirements, design, implementation, and flight performance of an on-board image compression system for the lunar orbiting Radio Astronomy Explorer-2 (RAE-2) spacecraft are described.
Abstract: The requirements, design, implementation, and flight performance of an on-board image compression system for the lunar orbiting Radio Astronomy Explorer-2 (RAE-2) spacecraft are described. The image to be compressed is a panoramic camera view of the long radio astronomy antenna booms used for gravity-gradient stabilization of the spacecraft. A compression ratio of 32 to 1 is obtained by a combination of scan line skipping and adaptive run-length coding. The compressed imagery data are convolutionally encoded for error protection. This image compression system occupies about 1000 cm2 and consumes 0.4 W.
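The run-length element of the scheme can be sketched as follows; the RAE-2 code word format, line skipping, and convolutional error protection are not reproduced:

```python
def rle_encode(bits):
    """Collapse a scan line into (value, run length) pairs."""
    runs, count, cur = [], 1, bits[0]
    for b in bits[1:]:
        if b == cur:
            count += 1
        else:
            runs.append((cur, count))
            cur, count = b, 1
    runs.append((cur, count))
    return runs

def rle_decode(runs):
    """Expand (value, run length) pairs back into the scan line."""
    out = []
    for value, count in runs:
        out.extend([value] * count)
    return out

# A mostly-dark scan line with two bright features (illustrative data).
scan_line = [0] * 20 + [1] * 3 + [0] * 40 + [1] * 2 + [0] * 15
runs = rle_encode(scan_line)
assert rle_decode(runs) == scan_line
assert len(runs) == 5  # five runs replace an 80-sample line
```

Adaptivity in the flight system amounts to choosing how runs are coded based on the image content; combined with skipping unchanged scan lines, this is how a ratio on the order of 32 to 1 becomes reachable on sparse boom imagery.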

Patent
20 Dec 1976
TL;DR: A data compression system that performs accurate and efficient information-density conversion by obtaining the video information of a specific domain at a given video information density.
Abstract: PURPOSE: To realize a data compression system that performs accurate and efficient information-density conversion, by obtaining the video information of a specific domain at a given video information density. COPYRIGHT: (C)1978,JPO&Japio

Journal ArticleDOI
TL;DR: The distortion-delay functions and regions are explicitly derived for single channel systems, and it is shown that their evaluation can be performed by solving the source coding problem and the network's queuing problem separately.
Abstract: The problem of data compression for communication networks is considered. The system performance criterion is the signal distortion resulting both from data compression and from average message delay through the network. The delay-distortion function is defined as the smallest message delay among all data-compression schemes that yield the given distortion value. The distortion-delay region is similarly defined. The capacity region is defined to include all incoming message rates for which there exists a set of data-compression schemes yielding a prescribed network distortion-delay value. The basic characteristics of these functions and regions are derived. In particular, it is shown that their evaluations can be performed by solving separately the source coding problem and the network's queuing problem. The distortion-delay functions and regions are explicitly derived for single channel systems.

01 Dec 1976
TL;DR: An efficient data compression system is presented for satellite pictures and two grey level pictures derived from satellite pictures; double delta coding proves the most efficient and can be significantly improved by applying a background skipping technique.
Abstract: An efficient data compression system is presented for satellite pictures and two grey level pictures derived from satellite pictures. The compression techniques take advantage of the correlation between adjacent picture elements. Several source coding methods are investigated. Double delta coding is presented and shown to be the most efficient. Both the predictive differential quantizing technique and double delta coding can be significantly improved by applying a background skipping technique. An extension code is constructed. This code requires very little storage space and operates efficiently. Simulation results are presented for various coding schemes and source codes.
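One natural reading of double delta coding is second-order differencing, sketched below (the paper's exact definition may differ): differencing a smooth scan line twice leaves mostly small values that a short code, such as the extension code mentioned above, can represent cheaply.

```python
def delta(xs):
    """First element verbatim, then successive differences."""
    return [xs[0]] + [b - a for a, b in zip(xs, xs[1:])]

def undelta(ds):
    """Inverse of delta: running sum."""
    out = [ds[0]]
    for d in ds[1:]:
        out.append(out[-1] + d)
    return out

row = [100, 102, 104, 106, 107, 108, 110]  # correlated picture elements
dd = delta(delta(row))                     # difference twice
assert undelta(undelta(dd)) == row         # lossless round trip
print(dd)  # after the first two entries, only small values remain
```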

Proceedings ArticleDOI
Robert T. P. Wang
13 Dec 1976
TL;DR: This paper describes the general scenario in which the RPV video capability will be used and major techniques discussed will include transform coding on subpictures, hybrid coding, slow frame rate data processing and source content adaptive variable resolution data processing.
Abstract: The challenge of providing a robust six megahertz video data channel for remotely piloted vehicles (RPV's) has caused several old standby techniques to be reviewed. Spread spectrum methods, the most promising class for robust channel signalling, require source bandwidths much smaller than those of the channel. Consequently, the application of video data compression algorithms to limit source bandwidths plays an important role. Several video data compression techniques that have been applied to the RPV problem to date are discussed, ranging from direct pictorial redundancy reduction, to reduction of frame rate, to data synthesis. These illustrate how special requirements of the RPV video problem are addressed in the designs. This paper describes the general scenario in which the RPV video capability will be used. Major techniques discussed include transform coding on subpictures, hybrid coding, slow frame rate data processing and source content adaptive variable resolution data processing. The RPV video communications system truly provides a new challenge to the video data compression community.

Journal ArticleDOI
TL;DR: A new narrow-band TV system called Sampledot, which produces a live picture with motion and sharpness that is a satisfactory replica of conventional TV, is described, and bandwidth compression ratios of 10:1 have been demonstrated with relatively simple electronics.
Abstract: A new narrow-band TV system called Sampledot, which produces a live picture with motion and sharpness that is a satisfactory replica of conventional TV, is described. Bandwidth compression ratios of 10:1 have been demonstrated with relatively simple electronics. Higher compression ratios are projected using the technique with a dynamic display memory or storage. The system is compatible with NTSC or EIA video cameras and monitors. Sampledot works on the principle of gating the line-scan video signal raster with a pseudorandom (PR) dot-sample matrix. About 3 percent, or less, of the picture is sent every fast scan field instead of the usual 50 percent. At the receiving end, the monitor raster is gated in step with the PR matrix. The natural integration effects of the eye-brain characteristics plus optional display memory, the large redundancy of TV video, and the high degree of correlation between adjacent TV pixels are exploited in the Sampledot technique.
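The core synchronization trick can be sketched in a few lines: transmitter and receiver derive the same pseudorandom dot positions from a shared seed, so only the sampled values need to be sent. The raster dimensions, sampling fraction, and seeding scheme below are illustrative, not the Sampledot hardware's actual PR matrix:

```python
import random

WIDTH, HEIGHT, FRACTION = 64, 48, 0.03  # illustrative raster, ~3% per field

def dot_positions(field_number, seed=1976):
    # Both ends run the same generator, so the gating stays in step.
    rng = random.Random(seed * 100000 + field_number)
    k = int(WIDTH * HEIGHT * FRACTION)
    return rng.sample(range(WIDTH * HEIGHT), k)

tx = dot_positions(field_number=7)
rx = dot_positions(field_number=7)
assert tx == rx        # receiver gates its raster in step with the PR matrix
assert len(tx) == 92   # about 3 percent of the 64x48 picture per field
```

Sending only these samples each fast-scan field, and letting eye-brain integration (or a display memory) fill in the rest, is what yields the 10:1 style bandwidth reductions described above.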