
Showing papers on "Data compression published in 1977"


Journal ArticleDOI
TL;DR: The compression ratio achieved by the proposed universal code uniformly approaches the lower bounds on the compression ratios attainable by block-to-variable codes and variable- to-block codes designed to match a completely specified source.
Abstract: A universal algorithm for sequential data compression is presented. Its performance is investigated with respect to a nonprobabilistic model of constrained sources. The compression ratio achieved by the proposed universal code uniformly approaches the lower bounds on the compression ratios attainable by block-to-variable codes and variable-to-block codes designed to match a completely specified source.

5,844 citations
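The algorithm in this paper is the now-familiar LZ77 sliding-window scheme. As a rough illustration of the parsing idea (not the paper's exact construction), the following Python sketch emits (offset, length, next-symbol) triples from a greedy longest-match search; the window size and the naive search are illustrative assumptions.

```python
# Minimal LZ77-style parse: emit (offset, length, next_symbol) triples.
def lz77_parse(data: bytes, window: int = 4096):
    tokens, i = [], 0
    while i < len(data):
        best_off, best_len = 0, 0
        for j in range(max(0, i - window), i):            # search the window
            length = 0
            while (j + length < i                          # no overlapping matches, keep it simple
                   and i + length + 1 < len(data)          # always leave a literal byte
                   and data[j + length] == data[i + length]):
                length += 1
            if length > best_len:
                best_off, best_len = i - j, length
        tokens.append((best_off, best_len, data[i + best_len]))
        i += best_len + 1
    return tokens

def lz77_unparse(tokens):
    out = bytearray()
    for off, length, nxt in tokens:
        for _ in range(length):
            out.append(out[-off])                          # copy from the decoded window
        out.append(nxt)
    return bytes(out)

# round trip: lz77_unparse(lz77_parse(b"abcabcabcd")) == b"abcabcabcd"
```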


Journal ArticleDOI
TL;DR: It is shown that two useful detection criteria lead to quantization which gives the minimum mean-squared error between the quantized output and the locally optimum nonlinear transform for each data sample.
Abstract: Optimum quantization of data, primarily for signal detection applications, is considered. It is shown that two useful detection criteria lead to quantization which gives the minimum mean-squared error between the quantized output and the locally optimum nonlinear transform for each data sample. This criterion is an extension of the usual minimum distortion criterion for optimum quantizers. Numerical results show that it leads to optimum quantizers which can be considerably better in their performance for non-Gaussian inputs than the minimum-distortion quantizers.

167 citations
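For context, the "usual minimum distortion criterion" that this paper extends is the classical minimum-MSE scalar quantizer design. The sketch below is that conventional Lloyd-Max iteration on sample data, not the paper's detection-oriented quantizer; the level count and the quantile initialization are assumptions.

```python
import numpy as np

# Conventional Lloyd-Max design of a minimum-MSE scalar quantizer from samples.
def lloyd_max(samples, levels=8, iters=50):
    x = np.sort(np.asarray(samples, dtype=float))
    reps = np.quantile(x, (np.arange(levels) + 0.5) / levels)   # initial reproduction levels
    for _ in range(iters):
        thresholds = (reps[:-1] + reps[1:]) / 2                 # nearest-neighbour cell boundaries
        idx = np.searchsorted(thresholds, x)                    # assign samples to cells
        reps = np.array([x[idx == k].mean() if np.any(idx == k) else reps[k]
                         for k in range(levels)])               # centroid update
    thresholds = (reps[:-1] + reps[1:]) / 2
    return thresholds, reps

# e.g. lloyd_max(np.random.default_rng(0).standard_normal(10000), levels=4)
```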


Journal ArticleDOI
TL;DR: A theoretical and experimental extension of two-dimensional transform coding and hybrid transform/DPCM coding techniques to the coding of sequences of correlated image frames for Markovian image sources is presented.
Abstract: Two-dimensional transform coding and hybrid transform/DPCM coding techniques have been investigated extensively for image coding. This paper presents a theoretical and experimental extension of these techniques to the coding of sequences of correlated image frames. Two coding methods are analyzed: three-dimensional cosine transform coding, and two-dimensional cosine transform coding within an image frame combined with DPCM coding between frames. Theoretical performance estimates are developed for the coding of Markovian image sources. Simulation results are presented for transmission over error-free and binary symmetric channels.

143 citations
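A minimal sketch of the second of the two methods (a 2-D cosine transform within each frame combined with DPCM on the coefficients between frames), assuming square frames, a uniform quantizer, and a single transform over the whole frame rather than the paper's block structure:

```python
import numpy as np

def dct_matrix(n):
    k = np.arange(n).reshape(-1, 1)
    i = np.arange(n).reshape(1, -1)
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    C[0, :] = np.sqrt(1.0 / n)                       # orthonormal DCT-II basis
    return C

def hybrid_encode(frames, step=8.0):
    n = frames[0].shape[0]                           # assumes square n x n frames
    C = dct_matrix(n)
    prev = np.zeros((n, n))                          # coefficients the decoder has so far
    stream = []
    for f in frames:
        coeff = C @ f @ C.T                          # 2-D cosine transform of the frame
        q = np.round((coeff - prev) / step)          # DPCM between frames + uniform quantization
        prev = prev + q * step                       # decoder-tracked reconstruction
        stream.append(q.astype(int))
    return stream                                    # decode: accumulate q*step, then C.T @ (.) @ C
```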


Journal ArticleDOI
TL;DR: In this article the adaptive systems are divided into four categories, and the theoretical and implementational problems of the optimum system are discussed and the assumptions made to overcome these problems are outlined.
Abstract: The following is a survey of the technical literature on adaptive coding of imagery. Section 1 briefly discusses the general problem of image data compression. The optimum image data compression system, from a theoretical viewpoint, is presented in Section 1.1. The theoretical and implementational problems of the optimum system are discussed and the assumptions made to overcome these problems are outlined. One important assumption is stationarity, which does not hold for most imagery. In adaptive systems the parameters are varied according to changes in signal statistics, optimizing system performance for nonstationary signals. In this article the adaptive systems are divided into four categories. Section 2 is a survey of adaptive transform coding systems. Section 3 discusses adaptive predictive coding systems. Sections 4 and 5 discuss adaptive cluster coding and adaptive entropy techniques, respectively.

110 citations


Journal ArticleDOI
TL;DR: The field of the efficient coding of color television signals is reviewed, pushed primarily by the desire in the television area to find digital coding standards accepted by both broadcasters and carriers and suitable for use with NTSC, PAL and SECAM television systems.
Abstract: This paper reviews the field of the efficient coding of color television signals. Because this paper is perhaps the first review on this topic, some background is given in the areas of colorimetry, visual perception of color and color television systems. We assume that the reader has some familiarity with luminance encoding techniques. Coding techniques themselves are divided into two broad groups: component coding methods in which each component (usually three) is coded separately, and composite coding methods in which the composite television signal with its "color" modulated subcarrier is processed as a single entity. Both approaches are covered in detail. The field is still growing, pushed primarily by the desire in the television area to find digital coding standards accepted by both broadcasters and carriers and suitable for use with NTSC, PAL and SECAM television systems. We discuss this aspect by comparing composite and component coding methods.

100 citations


Journal ArticleDOI
01 May 1977
TL;DR: A data compression technique is given which yields a compression ratio slightly better than 12 to 1 for cardiograms and is implementable with either a mini- or microcomputer.
Abstract: A data compression technique is given which yields a compression ratio slightly better than 12 to 1 for cardiograms and is implementable with either a mini- or microcomputer. The technique involves two applications of the discrete Karhunen-Loeve expansion (also known as intrinsic components, principal factors, or principal components). The first application reduces the effects of respiration and the various orientations of different patients' hearts, and requires the solution of a 3 × 3 matrix eigenvalue, eigenvector problem for each beat. The second application involves expressing the transformed cardiogram in a Karhunen-Loeve series, and requires the solution of the eigenvalue, eigenvector problem for a large matrix. However, the solution, which must be obtained only once for all time, can be performed off line. (The same eigenvectors are used for all patients.) Comparisons are given of the cardiograms reconstructed from the compressed data with the original cardiograms.

88 citations
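A sketch of the second stage of this scheme, expanding beat vectors in a Karhunen-Loeve (principal-component) basis estimated from training data. The per-beat 3 × 3 eigenproblem that removes respiration and orientation effects is omitted, and the number of retained components is an illustrative assumption:

```python
import numpy as np

def kl_train(training_beats, keep=30):
    # training_beats: (n_beats, beat_length) array of sampled beats
    mean = training_beats.mean(axis=0)
    X = training_beats - mean
    cov = X.T @ X / len(X)
    vals, vecs = np.linalg.eigh(cov)       # eigenvalues in ascending order
    return mean, vecs[:, -keep:]           # keep the dominant eigenvectors

def kl_compress(beat, mean, basis):
    return basis.T @ (beat - mean)         # 'keep' coefficients instead of the full beat

def kl_reconstruct(coeffs, mean, basis):
    return mean + basis @ coeffs           # approximate beat from the compressed data
```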


01 Sep 1977
TL;DR: It is shown that in general there is an inherent and significant loss of optimality if a joint source/channel linear encoder is used when the goal is relaxed to reproduction of the source within some specified non-negligible distortion.
Abstract: The advantages and disadvantages of combining the functions of source coding ('data compression') and channel coding ('error correction') into a single coding unit are considered. Particular attention is given to linear encoders, both for sources and for channels, because their ease of implementation makes their use desirable in practice. It is shown that, without loss of optimality, a joint source/channel linear encoder may be used when the goal is the distortionless reproduction of the source at the destination. On the other hand, it is shown that in general there is an inherent and significant loss of optimality if a joint source/channel linear encoder is used when the goal is relaxed to reproduction of the source within some specified non-negligible distortion.

85 citations


Journal ArticleDOI
A. K. Jain1
TL;DR: In this article, the fast Karhunen-Loeve transform is extended to images with nonseparable or nearly isotropic covariance functions, or both, for image restoration, data compression, edge detection, image synthesis, etc.
Abstract: Stochastic representation of discrete images by partial differential equation operators is considered. It is shown that these representations can fit random images, with nonseparable, isotropic covariance functions, better than other common covariance models. Application of these models in image restoration, data compression, edge detection, image synthesis, etc., is possible. Different representations based on classification of partial differential equations are considered. Examples on different images show the advantages of using these representations. The previously introduced notion of fast Karhunen-Loeve transform is extended to images with nonseparable or nearly isotropic covariance functions, or both.

80 citations


Patent
14 Jan 1977
TL;DR: In this article, the left-hand boundary of a character is developed in the form of a sequence of Freeman direction codes, the codes being stored in digital form within a processor.
Abstract: A data compression system is disclosed in which the left-hand boundary of a character is developed in the form of a sequence of Freeman direction codes, the codes being stored in digital form within a processor. The number of binary data bits required to define the character using different criteria is then generated and compared to determine which criteria defines the character in the minimum amount of binary data bits.

50 citations
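For reference, a minimal sketch of Freeman 8-direction chain coding of a boundary given as adjacent pixel coordinates; the direction numbering is the common convention, and the patent's criteria for choosing the minimum-bit description are not reproduced:

```python
# Freeman 8-direction chain coding of a boundary given as adjacent (row, col) points.
FREEMAN = {(0, 1): 0, (-1, 1): 1, (-1, 0): 2, (-1, -1): 3,
           (0, -1): 4, (1, -1): 5, (1, 0): 6, (1, 1): 7}

def chain_code(points):
    codes = []
    for (r0, c0), (r1, c1) in zip(points, points[1:]):
        codes.append(FREEMAN[(r1 - r0, c1 - c0)])    # direction from one pixel to the next
    return codes

# e.g. chain_code([(5, 2), (4, 2), (3, 3)]) -> [2, 1]
```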


01 Jan 1977
TL;DR: The spectral mean vector of a blob can be regarded as a defined feature and used in a conventional pattern recognition procedure; the benefits are ease in locating training units in imagery, data compression by a factor of 10 to 30 depending on the application, and reduction of scanner noise with consequent potential improvements in classification/proportion estimation performance.
Abstract: A basic concept of Multispectral Scanner data processing was developed for use in agricultural inventories; namely, to introduce spatial coordinates of each pixel into the vector description of the pixel and to use this information along with the spectral channel values in a conventional unsupervised clustering of the scene. The result is to isolate spectrally homogeneous field-like patches (called blobs). The spectral mean vector of a blob can be regarded as a defined feature and used in a conventional pattern recognition procedure. The benefits of its use are: ease in locating training units in imagery; data compression by a factor of 10 to 30, depending on the application; and reduction of scanner noise, with consequent potential improvements in classification/proportion estimation performance.

39 citations


01 Dec 1977
TL;DR: The Cluster Compression Algorithm (CCA), which was developed to reduce costs associated with transmitting, storing, distributing, and interpreting LANDSAT multispectral image data is described and experimental results are presented to show trade-offs and characteristics of the various implementations.
Abstract: The Cluster Compression Algorithm (CCA), which was developed to reduce costs associated with transmitting, storing, distributing, and interpreting LANDSAT multispectral image data is described. The CCA is a preprocessing algorithm that uses feature extraction and data compression to more efficiently represent the information in the image data. The format of the preprocessed data enables simply a look-up table decoding and direct use of the extracted features to reduce user computation for either image reconstruction, or computer interpretation of the image data. Basically, the CCA uses spatially local clustering to extract features from the image data to describe spectral characteristics of the data set. In addition, the features may be used to form a sequence of scalar numbers that define each picture element in terms of the cluster features. This sequence, called the feature map, is then efficiently represented by using source encoding concepts. Various forms of the CCA are defined and experimental results are presented to show trade-offs and characteristics of the various implementations. Examples are provided that demonstrate the application of the cluster compression concept to multi-spectral images from LANDSAT and other sources.
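A minimal sketch of the cluster-compression idea: cluster the pixel vectors of a local region, keep the cluster mean vectors as features, and replace each pixel by the index of its cluster (the feature map), which a receiver decodes by table look-up. The use of plain k-means, the cluster count, and the single-block treatment are illustrative assumptions, not the CCA's exact clustering:

```python
import numpy as np

def cluster_compress(block, k=4, iters=20, seed=0):
    # block: (H, W, nbands) multispectral image region
    pixels = block.reshape(-1, block.shape[-1]).astype(float)
    rng = np.random.default_rng(seed)
    centers = pixels[rng.choice(len(pixels), k, replace=False)]
    for _ in range(iters):
        d = ((pixels[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(axis=1)                              # nearest cluster per pixel
        for j in range(k):
            if np.any(labels == j):
                centers[j] = pixels[labels == j].mean(axis=0)  # update cluster features
    return centers, labels.reshape(block.shape[:2])            # features + feature map

def cluster_decode(centers, feature_map):
    return centers[feature_map]                                # look-up table decoding
```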

Patent
Jyoichi Fuwa1
27 Sep 1977
TL;DR: In this article, a data compression unit reads out the signals and performs data compression thereon asynchronously with storage of the signals in the buffer memory, and the data compression rate decreases in accordance with the proportion of high density areas of the document, as the electrical data signals accumulate in buffer memory.
Abstract: For facsimile transmission or the like an optical system scans an original document and produces quantized electrical signals which are stored in a buffer memory. A data compression unit reads out the signals and performs data compression thereon asynchronously with storage of the signals in the buffer memory. The data compression rate decreases in accordance with the proportion of high density areas of the document, and the electrical data signals accumulate in the buffer memory. The scanning speed is automatically decreased as the amount of signals in the buffer memory increases so that the scanning speed is controlled to correspond to the data compression rate.

Journal ArticleDOI
TL;DR: In this article, a fast numerical algorithm is presented for recompressing signals propagated over different path lengths in a dispersive duct, and an appropriate distortion of a signal spectrum recompresses dispersed arrivals regardless of their arrival times.
Abstract: A fast numerical algorithm is presented for recompressing signals propagated over different path lengths in a dispersive duct. It is shown that an appropriate distortion of a signal spectrum recompresses dispersed arrivals regardless of their arrival times. The method is applied to the preprocessing of seismic data acquired in an underground pulse-echo location system for imaging faults in coal seams.

Book ChapterDOI
01 Jan 1977
TL;DR: This paper will show how similar computational modules can be configured to provide similar computational advantages for a large class of time-variant linear transforms, including one-dimensional and multi-dimensional discrete Fourier transforms and one-dimensional and two-dimensional discrete cosine transforms.
Abstract: A large portion of the computational load for many signal processing problems consists of the computation of linear transforms. For time-invariant linear transforms such as cross convolution or matched filtering, the transversal filter provides a highly parallel computational module with high throughput and minimal control overhead. This paper will show how similar computational modules can be configured to provide similar computational advantages for a large class of time-variant linear transforms including one-dimensional and multi-dimensional discrete Fourier transforms and one-dimensional and two-dimensional discrete cosine transforms. Furthermore, time-variant transform modules may be combined to implement high capacity time-invariant linear transforms. The implementation of these techniques using surface acoustic wave (SAW) and charge coupled device (CCD) technology permits the real-time solution of several important signal processing problems including image data compression, spectrum analysis, convolutional array scanning and beamforming. Advanced digital and integrated analog/digital architectures will permit these fast processing techniques to be extended to high accuracy and adaptive processing tasks.

01 Nov 1977
TL;DR: An efficient adaptive encoding technique using a new implementation of the Fast Discrete Cosine Transform (FDCT) for bandwidth compression of monochrome and color images is described, demonstrating excellent performance in terms of mean square error and direct comparison of original and reconstructed images.
Abstract: An efficient adaptive encoding technique using a new implementation of the Fast Discrete Cosine Transform (FDCT) for bandwidth compression of monochrome and color images is described. Practical system application is attained by maintaining a balance between complexity of implementation and performance. FDCT sub-blocks are sorted into four classes according to level of image activity, measured by the total ac energy within each sub-block. Adaptivity is provided by distributing bits between classes, favoring higher levels of activity over lower levels. Excellent performance is demonstrated in terms of mean square error and direct comparison of original and reconstructed images. Results are presented for both noiseless and noisy transmission at a total rate of 1 bit and 0.5 bit per pixel for a monochrome image and for a total rate of 2 bits and 1 bit per pixel for a color image. In every case the total bit rate includes all overhead required for image reconstruction and bit protection.
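The adaptivity described above amounts to ranking transform sub-blocks by their AC energy and spending more bits on the busier classes. A small sketch of that classification and bit-weighting step, with assumed quartile thresholds and assumed weights (the paper's actual allocation tables are not reproduced):

```python
import numpy as np

def classify_blocks(coeff_blocks):
    # coeff_blocks: list of 2-D DCT coefficient arrays for the sub-blocks
    ac_energy = np.array([(b ** 2).sum() - b[0, 0] ** 2 for b in coeff_blocks])
    cuts = np.quantile(ac_energy, [0.25, 0.5, 0.75])        # four equal-size activity classes
    return np.searchsorted(cuts, ac_energy)                  # class index 0..3 per block

def bits_for_class(cls, base_bits=1.0):
    # favour higher-activity classes when distributing the bit budget
    return base_bits * {0: 0.5, 1: 0.75, 2: 1.25, 3: 1.5}[int(cls)]
```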

Patent
24 Feb 1977
TL;DR: In this paper, the black and white binary bit pattern of each scan line is compared with that of the adjacent scan line just above it, and any bit pattern variations are identified as belonging to one of a number of predetermined modes.
Abstract: In a facsimile scanning and transmission system, the black and white binary bit pattern of each scan line is compared with that of the adjacent scan line just above it, and any bit pattern variations are identified as belonging to one of a number of predetermined modes. Each variation or bit difference mode, as well as a no variation mode, is assigned a coded bit pattern or sequence, and only these coded bit patterns are transmitted. Variations not conforming to one of the predetermined modes may be transmitted uncoded, after being preceded by an identification code. The net result is significant data compression with no loss in intelligence. The process is easily reversed at the receiver end to reconstruct the facsimile picture.
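A toy sketch of the line-to-line idea: transitions on the current scan line are coded relative to nearby transitions on the line above when possible, and coded directly otherwise. The mode names, the +/-2 search window, and the absence of run-length codes are simplifying assumptions, not the patent's mode table:

```python
def transitions(line):
    # positions where the binary line changes colour
    return [i for i in range(1, len(line)) if line[i] != line[i - 1]]

def code_line(cur, above):
    cur_t, above_t = transitions(cur), transitions(above)
    symbols = []
    for t in cur_t:
        near = [a for a in above_t if abs(a - t) <= 2]
        if near:
            symbols.append(("V", t - near[0]))     # small offset from the reference line
        else:
            symbols.append(("H", t))               # no nearby reference transition: send directly
    return symbols
```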

Journal ArticleDOI
TL;DR: This paper describes several adaptive delta modulators designed to encode video signals and describes and illustrates the effect of large amounts of motion on the reconstructed picture.
Abstract: This paper describes several adaptive delta modulators designed to encode video signals. One- and two-dimensional ADM algorithms are discussed and compared. Results are shown for bit rates of 2 bits/pixel, 1 bit/pixel and 0.5 bits/pixel. Pictures showing the difference between the encoded-decoded pictures and the original pictures are presented. Results are also presented to illustrate the effect of channel errors on the reconstructed picture. A two-dimensional ADM using interframe encoding is also presented. This system operates at the rate of 2 bits/pixel and produces excellent quality pictures when there is little motion. We also describe and illustrate the effect of large amounts of motion on the reconstructed picture.
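A sketch of a one-dimensional adaptive delta modulator of the kind compared in the paper: one bit per sample, with the step size grown when successive bits agree and shrunk when they alternate. The adaptation constants are illustrative assumptions:

```python
import numpy as np

def adm_encode(x, step0=1.0, grow=1.5, shrink=0.66):
    est, step, prev_bit = 0.0, step0, 0
    bits, recon = [], []
    for s in x:
        bit = 1 if s >= est else 0                     # one bit per sample
        step *= grow if bit == prev_bit else shrink    # adapt the step size
        est += step if bit else -step                  # tracked reconstruction
        bits.append(bit)
        recon.append(est)
        prev_bit = bit
    return bits, np.array(recon)
```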

Proceedings ArticleDOI
08 Dec 1977
TL;DR: A conditional replenishment transform video compressor has been simulated in preparation for hardware design, which operates at a fixed rate and uses compressed frame memories at the transmitter and receiver.
Abstract: Most interframe video compression using transforms uses three-dimensional transforms or differential pulse code modulation (DPCM) between successive frames of two-dimensionally transformed video. Conditional replenishment work is nearly all based on DPCM of individual picture elements, although conditional replenishment of transform subpictures can take advantage of the predefined subpictures for addressing and can use the most significant transform vectors for change detection without decoding the compressed image. A conditional replenishment transform video compressor has been simulated in preparation for hardware design. The system operates at a fixed rate and uses compressed frame memories at the transmitter and receiver. Performance is a function of transmission rate and memory capacity and is dependent on the motion content of the compressed scene.


Proceedings ArticleDOI
09 May 1977
TL;DR: A data compression technique, Adaptive Differential PCM with Time Assignment Speech Interpolation (TASI), that is capable of reducing the bit rate required for PCM encoded speech is described and evaluated using computer simulation.
Abstract: A data compression technique, Adaptive Differential PCM (ADPCM) with Time Assignment Speech Interpolation (TASI), that is capable of reducing the bit rate required for PCM encoded speech is described. The particular case of 2:1 compression in a T1 system environment is described in detail and evaluated using computer simulation. The ADPCM/TASI system has a wider dynamic range and less degradation under loading than standard PCM. Signal-to-noise ratios provide an objective metric. An audio tape containing computer-processed speech for various ADPCM/TASI systems in various environments accompanies this presentation.

Journal ArticleDOI
TL;DR: Applications to handwriting recognition are presented, starting from a series of complete pages, as well as the extraction of cadastral structures from aerial photographs.
Abstract: Orthogonalized components of images are provided by the association of pure optical processing and digital procedures, in order to implement statistical pattern recognition operations. In a first step, preprocessed data are optically delivered—typically by sampling Fourier spectra that are to be modelled for information compression purposes. They satisfy both the orthonormal and dimensionally reduced description of the considered images which are not to be entirely digitized. The extraction of features consists of computing the dominant eigenvectors of the data covariance matrix, or the Fourier description of differences between spectra. Images are classified in the reduced eigenvector space. Applications to handwriting recognition are presented, starting from a series of complete pages, as well as the extraction of cadastral structures from aerial photographs.

Book ChapterDOI
01 Jan 1977
TL;DR: It is shown that it is also possible to split the source output into variable-length blocks which can be coded with a fixed-length code such that the efficiency also converges to the entropy of the source.
Abstract: By coding fixed-length blocks of symbols from an information source with a minimum redundancy (the variable-length Huffman code) the source entropy is approached as a function of the block size. In the present paper it is shown that it is also possible to split the source output into variable-length blocks which can be coded with a fixed-length code such that the efficiency also converges to the entropy of the source. An algorithm for optimal splitting is given, as well as a proof of the convergence.
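The variable-to-fixed-length idea described here is closely related to Tunstall coding. The sketch below builds such a dictionary for a memoryless source by repeatedly expanding the most probable word until the dictionary fills the fixed-length code space; it illustrates the idea rather than reproducing the paper's optimal splitting algorithm:

```python
import heapq

def v2f_dictionary(probs, bits):
    # probs: dict symbol -> probability of a memoryless source (at least two symbols)
    heap = [(-p, (s,)) for s, p in probs.items()]           # leaves of the parse tree
    heapq.heapify(heap)
    while len(heap) + len(probs) - 1 <= 2 ** bits:
        negp, word = heapq.heappop(heap)                     # most probable word so far
        for s, p in probs.items():                           # split it by one more symbol
            heapq.heappush(heap, (negp * p, word + (s,)))
    return {word: index for index, (_, word) in enumerate(sorted(heap))}

# e.g. v2f_dictionary({'a': 0.7, 'b': 0.3}, bits=3) maps 8 variable-length words to 3-bit indices
```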

Proceedings ArticleDOI
08 Dec 1977
TL;DR: Micro-Adaptive Picture Sequencing (MAPS), a computationally-efficient contrast-adaptive variable-resolution digital image coding technique, is described, based on the combination of a simple vision heuristic and a highly nonlinear spatial encoding.
Abstract: Micro-Adaptive Picture Sequencing (MAPS), a computationally-efficient contrast-adaptive variable-resolution digital image coding technique, is described. Both compression and decompression involve only integer operations with no multiplies or explicit divides. The compression step requires less than 20 operations per pixel and the decompression step even fewer. MAPS is based on the combination of a simple vision heuristic and a highly nonlinear spatial encoding. The heuristic asserts that the fine detail in an image is noticed primarily when it is sharply defined in contrast while larger more diffuse features are perceived at much lower contrasts. The coding scheme then exploits the spatial redundancy implied by this heuristic to maintain high resolution where sharp definition exists and to reduce resolution elsewhere. Application of MAPS to several imagery types with compressions extending to below 0.2 bits per pixel is illustrated.
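A toy sketch of the contrast-adaptive, variable-resolution idea: starting from 2 × 2 blocks, a block is replaced by its mean wherever its internal contrast falls below a threshold, keeping full resolution only where detail is sharply defined. The single merge level, the floating-point arithmetic, and the threshold are simplifications; MAPS itself builds a multi-level sequence using only integer operations:

```python
import numpy as np

def merge_low_contrast(img, threshold=8):
    # img: 2-D grayscale array; returns a variable-resolution approximation
    out = img.astype(float)
    h, w = img.shape
    for r in range(0, h - 1, 2):
        for c in range(0, w - 1, 2):
            block = img[r:r + 2, c:c + 2].astype(float)
            if block.max() - block.min() < threshold:     # low contrast: drop to half resolution
                out[r:r + 2, c:c + 2] = block.mean()
    return out
```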

Patent
17 Jun 1977
TL;DR: Whether inversion of the binary information of a picture element pattern is advantageous is decided from the binary information of the pattern itself, in order to increase the effect of the data compression process.
Abstract: PURPOSE: To increase the effect of the data compression process by deciding, from the binary information of the picture element pattern, whether inversion of that binary information is suitable.

Patent
08 Jun 1977
TL;DR: Data compression efficiency is improved by coding patterns of two-dimensional binary picture information, such as a facsimile signal, with a coding unit as large as a fixed two-dimensional block.
Abstract: PURPOSE: To improve the efficiency of data compression by coding patterns of two-dimensional binary picture information, such as a facsimile signal, with a coding unit as large as a fixed two-dimensional block.

Proceedings ArticleDOI
08 Dec 1977
TL;DR: A previously-developed block adaptive Differential Pulse Code Modulation (DPCM) procedure has been combined with a buffer feedback technique and the result is an efficient variable rate DPCM algorithm.
Abstract: A previously developed block adaptive Differential Pulse Code Modulation (DPCM) procedure has been combined with a buffer feedback technique. The result is an efficient variable-rate DPCM algorithm. The new technique is fully adaptive, yet it retains the basic simplicity of DPCM. It utilizes the appropriate quantizer parameters and also assigns the available channel bandwidth according to need as determined by the local image structure. A buffer feedback procedure, previously reported by the authors, has been generalized and was implemented to control the bit rate selection. Examples demonstrate that the algorithm is successful in achieving adaptivity objectives. Although buffer control requires additional hardware, because of its relatively low speed the impact on overall hardware complexity is negligible.
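A sketch of the buffer-feedback part of such a scheme: the per-block bit rate is selected from the current buffer fullness, so a filling buffer forces coarser coding and the output matches the channel rate on average. The thresholds, rates, and update rule are illustrative assumptions, not the authors' parameters:

```python
def select_rate(buffer_fullness, rates=(3.0, 2.0, 1.5, 1.0)):
    # buffer_fullness in [0, 1]; a fuller buffer gets fewer bits per pixel
    if buffer_fullness < 0.25:
        return rates[0]
    if buffer_fullness < 0.5:
        return rates[1]
    if buffer_fullness < 0.75:
        return rates[2]
    return rates[3]

def update_buffer(fullness, bits_produced, channel_bits, capacity):
    # buffer fills by what the coder produced, drains by what the channel carried
    return min(1.0, max(0.0, fullness + (bits_produced - channel_bits) / capacity))
```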

Proceedings ArticleDOI
08 Dec 1977
TL;DR: It is shown that an off-the-shelf microprocessor chip, the Am 2901 4-bit bipolar slice, can be employed in a 12-bit configuration to perform TV imagery data compression in real time.
Abstract: Applying a new algorithm for the Discrete Cosine Transform superior to any published to date, it is shown that an off-the-shelf microprocessor chip, the Am 2901 4-bit bipolar slice, can be employed in a 12-bit configuration to perform TV imagery data compression in real time. The method of compression is the hybrid technique due to Habibi -- DCT along a 32-pixel segment of a TV line and Differential Pulse Code Modulation line to line, thus processing only one eighth of each field at one time. The significance of the new algorithm is that it permits an all-digital implementation of a TV data compression system for Remotely Piloted Vehicles and spacecraft using "off-the-shelf" circuitry.

Patent
29 Jun 1977
TL;DR: Switching to one-dimensional coding according to the correlation of the picture signal between scanning lines in facsimile communication obtains maximum compression while automatically limiting the effect of errors.
Abstract: PURPOSE: To obtain maximum compression and to automatically limit the effect of errors, by switching to one-dimensional coding according to the correlation of the picture signal between scanning lines in facsimile communication.

Journal ArticleDOI
TL;DR: In this article, a modified slant and a modified Haar transform are proposed for monochrome image processing in nonoverlapping blocks; a (4 × 4) or (8 × 8) subpicture block size is reasonable in terms of both mean-square error and subjective picture quality.

Proceedings ArticleDOI
B. R. Hunt1
08 Dec 1977
TL;DR: A method for realizing interpolative DPCM is discussed, and can be implemented solely with incoherent optics and analog electronics, with results showing performance comparable to conventional D PCM.
Abstract: The causal predictor methods of DPCM image data compression can be replaced by non-causal or interpolative DPCM compression. A method for realizing interpolative DPCM is discussed; it can be implemented solely with incoherent optics and analog electronics. A digital simulation of this method is presented, with results showing performance comparable to conventional DPCM.
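A sketch of the interpolative (non-causal) prediction idea on one image line: every other pixel is kept as an anchor, the skipped pixels are predicted by interpolating between their two neighbours, and only the interpolation error is quantized. The 2:1 subsampling and uniform quantizer are illustrative assumptions; the paper's optical/analog realization is not modelled:

```python
import numpy as np

def idpcm_encode(line, step=4.0):
    anchors = line[::2].astype(float)                # kept at full precision
    pred = (anchors[:-1] + anchors[1:]) / 2.0        # non-causal prediction of skipped pixels
    resid = line[1:len(pred) * 2:2].astype(float) - pred
    return anchors, np.round(resid / step)           # quantized interpolation error

def idpcm_decode(anchors, q, step=4.0):
    pred = (anchors[:-1] + anchors[1:]) / 2.0
    mid = pred + q * step
    out = np.empty(len(anchors) + len(mid))
    out[0::2], out[1::2] = anchors, mid              # interleave anchors and interpolated pixels
    return out                                       # (last pixel of an even-length line omitted here)
```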