
Showing papers on "Data compression published in 1988"


Journal ArticleDOI
John Daugman1
TL;DR: A three-layered neural network based on interlaminar interactions involving two layers with fixed weights and one layer with adjustable weights finds coefficients for complete conjoint 2-D Gabor transforms without restrictive conditions for image analysis, segmentation, and compression.
Abstract: A three-layered neural network is described for transforming two-dimensional discrete signals into generalized nonorthogonal 2-D Gabor representations for image analysis, segmentation, and compression. These transforms are conjoint spatial/spectral representations, which provide a complete image description in terms of locally windowed 2-D spectral coordinates embedded within global 2-D spatial coordinates. In the present neural network approach, based on interlaminar interactions involving two layers with fixed weights and one layer with adjustable weights, the network finds coefficients for complete conjoint 2-D Gabor transforms without restrictive conditions. In wavelet expansions based on a biologically inspired log-polar ensemble of dilations, rotations, and translations of a single underlying 2-D Gabor wavelet template, image compression is illustrated with ratios up to 20:1. Also demonstrated is image segmentation based on the clustering of coefficients in the complete 2-D Gabor transform. >
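
The elementary functions in such a transform are 2-D Gabor wavelets: Gaussian envelopes modulating complex plane waves. As a minimal sketch (parameter names and values here are illustrative, not taken from the paper), one such wavelet can be sampled on a small grid:

```python
import math

def gabor_2d(x, y, sigma=2.0, f0=0.25, theta=0.0):
    """Complex 2-D Gabor elementary function: a circular Gaussian envelope
    modulating a complex plane wave of spatial frequency f0 at angle theta."""
    xr = x * math.cos(theta) + y * math.sin(theta)
    envelope = math.exp(-(x * x + y * y) / (2.0 * sigma * sigma))
    phase = 2.0 * math.pi * f0 * xr
    return complex(envelope * math.cos(phase), envelope * math.sin(phase))

# Sample one wavelet on a 9x9 grid centred at (4, 4); a bank of such kernels
# (dilated, rotated, translated) would form the projection layer's weights.
kernel = [[gabor_2d(x - 4, y - 4) for x in range(9)] for y in range(9)]
```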

1,977 citations


Journal Article
TL;DR: The applications of digital data compression and the major components of compression systems are described and data modeling is discussed, and the role of entropy and data statistics is examined.
Abstract: The applications of digital data compression and the major components of compression systems are described. Data modeling is discussed, and the role of entropy and data statistics is examined. Gray-scale image modeling is used to illustrate some of these mechanisms. The coding mechanisms are examined, and prefix codes are explained. Arithmetic coding is considered. >
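
To make the role of entropy concrete, a zero-order estimate can be computed directly from symbol counts; the sketch below (not from the article) gives the bits-per-symbol bound that a prefix or arithmetic coder driven by a memoryless model approaches:

```python
import math
from collections import Counter

def zero_order_entropy(data: bytes) -> float:
    """Zero-order entropy in bits/symbol: a lower bound for any code that
    treats symbols as independent and identically distributed."""
    counts = Counter(data)
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

print(zero_order_entropy(b"abracadabra"))  # ~2.04 bits/symbol
```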

440 citations


Journal ArticleDOI
TL;DR: An efficient subband coding method for encoding monochrome and color images is presented and results indicate that good quality reproduction can be achieved at high compression rates.
Abstract: An efficient subband coding method for encoding monochrome and color images is presented. In this method, the spectrum of the input image signal is decomposed nonuniformly into seven bands. In the coding process the lowest band is DPCM (differential pulse-code modulation)-coded while the higher bands are PCM-coded. The PCM quantizer is designed such that it reduces the subjective effect of quantization noise, and it also helps in an efficient transmission of higher band signals at low transmission rates. For encoding color images, the red (R), green (G), and blue (B) components of color images are first transformed into a luminance (Y) and two chrominance components (I, Q). Each component is then decomposed separately into seven narrowbands. The higher-band signals corresponding to the luminance component are retained and coded separately, while the higher bands associated with the I and Q components are discarded. The computer simulation results are presented in terms of average b/pixel and the quality of the reconstructed pictures. These results indicate that good quality reproduction can be achieved at high compression rates. >
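
The luminance/chrominance separation used above is a fixed linear transform; a sketch of the standard NTSC RGB-to-YIQ conversion follows (the coefficients are the commonly cited ones and are assumed here, not quoted from the paper):

```python
def rgb_to_yiq(r, g, b):
    """NTSC RGB -> YIQ: Y carries the luminance, I and Q the chrominance
    components that the coder decomposes (and partly discards) separately."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    i = 0.596 * r - 0.274 * g - 0.322 * b
    q = 0.211 * r - 0.523 * g + 0.312 * b
    return y, i, q

print(rgb_to_yiq(255, 128, 0))  # a saturated orange: high Y, strong I, small Q
```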

225 citations


Journal ArticleDOI
TL;DR: A hierarchical decorrelation method based on interpolation (HINT) appears to outperform all other methods considered; results are presented in terms of entropy.
Abstract: The performance of several reversible, intraframe compression methods is compared by applying them to angiographic and magnetic resonance (MR) images. Reversible data compression involves two consecutive steps: decorrelation and coding. The result of the decorrelation step is presented in terms of entropy. Because Huffman coding generally approximates these entropy measures within a few percent, coding has not been investigated separately. It appears that a hierarchical decorrelation method based on interpolation (HINT) outperforms all other methods considered. The compression ratio is around 3 for angiographic images of 8-9 b/pixel, but is considerably less for MR images whose noise level is substantially higher. >
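
HINT operates hierarchically on 2-D images, but its core idea of predicting intermediate samples by interpolating already-known neighbours and keeping only the prediction errors can be sketched in one dimension; the version below is an assumed simplification for illustration:

```python
def hint_1d(samples):
    """One level of interpolative decorrelation (1-D analogue of HINT):
    keep the even-indexed samples, and replace each odd-indexed sample by its
    error relative to the average of its two even neighbours. The integer
    prediction makes the step exactly reversible, so it suits lossless coding;
    the decoder re-runs the same prediction and adds the residual back."""
    coarse = samples[::2]
    residuals = []
    for k in range(1, len(samples), 2):
        left = samples[k - 1]
        right = samples[k + 1] if k + 1 < len(samples) else left
        prediction = (left + right) // 2
        residuals.append(samples[k] - prediction)
    return coarse, residuals

print(hint_1d([10, 12, 14, 13, 12]))  # coarse samples plus small residuals
```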

223 citations


Journal ArticleDOI
TL;DR: An algorithm for data compression of grey-level images is presented, based on coding the geometric and grey-level information of the contours in the image and using the Laplacian pyramid coding algorithm to give intelligible reconstructed images.
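
The Laplacian pyramid mentioned in the summary stores, at each level, the residual between the signal and an expanded copy of its reduced version; a tiny 1-D sketch of that decomposition (using crude averaging filters in place of the Burt-Adelson kernels) might look like this:

```python
def laplacian_pyramid_1d(signal, levels=3):
    """1-D analogue of a Laplacian pyramid: each level keeps the residual
    between the current signal and an expanded copy of its half-resolution
    version; the final element is the coarsest approximation. The residuals
    are sparse and near-zero, which is what makes them cheap to code."""
    pyramid = []
    current = list(signal)
    for _ in range(levels):
        if len(current) < 2:
            break
        # Reduce: average non-overlapping pairs (a crude smoothing filter).
        coarse = [(current[i] + current[min(i + 1, len(current) - 1)]) / 2
                  for i in range(0, len(current), 2)]
        # Expand: repeat each coarse sample back up to the original length.
        expanded = [coarse[i // 2] for i in range(len(current))]
        pyramid.append([c - e for c, e in zip(current, expanded)])
        current = coarse
    pyramid.append(current)
    return pyramid
```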

192 citations


Patent
26 Jan 1988
TL;DR: In this paper, a television sensor mounted on a vehicle provides video image information which is reduced in bandwidth by a factor of approximately 1000:1 for transmission by narrow band RF data link to a remote control station.
Abstract: Methods and apparatus are provided for driving a vehicle from a remote control station achieving tele-operation of the vehicle. A television sensor mounted on the vehicle provides video image information which is reduced in bandwidth by a factor of approximately 1000:1 for transmission by narrow band RF data link to a remote control station. The large bandwidth reduction is accomplished by first sampling the video output of the sensor at a reduced frame rate and then compressing the data further by standard data compression techniques. Vehicle position and attitude data which may be derived from the vehicle's on board inertial reference unit are also transmitted via narrow band data link to the control station. At the control station, the data is first reconstructed by a technique which is the complement of the compression technique, and the instantaneous position and attitude data are used to compute transform coefficients that are used by a pipeline processor to extrapolate the frame data to generate a real time video display that enables an operator to drive the vehicle by transmitting appropriate control signals to the vehicle.

181 citations


Patent
02 Dec 1988
TL;DR: In this paper, bank checks are processed at a high rate of speed on an image processing system which optically scans the documents and converts optically perceptible data on the documents into video image data.
Abstract: Documents, such as bank checks, are processed at a high rate of speed on an image processing system which optically scans the documents and converts optically perceptible data on the documents into video image data. The video image data from the scanner is compressed by data compression techniques and the compressed data is sent over a high speed data channel to a high speed mass storage device which receives and temporarily stores the data. A lower speed mass data storage device, such as an optical disk, is connected for receiving at a lower data transfer rate the compressed video image data and for storing the video image data for subsequent retrieval. The system also includes real-time quality control systems for monitoring the quality of the image data to detect the existence of unacceptable image quality data and for generating a signal which can be used for immediately stopping the generation of unacceptable quality image data.

175 citations


Journal ArticleDOI
TL;DR: In this article, a data compression technique called self-testable and error-propagating space compression is proposed and analyzed, and the use of these gates in the design of self-testing and error propagating space compressors is discussed.
Abstract: A data compression technique called self-testable and error-propagating space compression is proposed and analyzed. Faults in a realization of Exclusive-OR and Exclusive-NOR gates are analyzed, and the use of these gates in the design of self-testing and error-propagating space compressors is discussed. It is argued that the proposed data-compression technique reduces the hardware complexity in built-in self-test (BIST) logic designs using external tester environments. >
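
A space compressor of this kind funnels many circuit outputs into a few observable lines through parity (Exclusive-OR) trees; the sketch below illustrates only the basic parity compression, not the paper's self-testing design rules:

```python
from functools import reduce

def xor_space_compress(output_lines):
    """Collapse a list of circuit-output bits into a single parity bit.
    Any single-bit error on the inputs flips the parity, so the error
    propagates to the compressed output instead of being masked."""
    return reduce(lambda a, b: a ^ b, output_lines, 0)

# Fault-free response versus a response with one faulty line:
print(xor_space_compress([1, 0, 1, 1]))   # 1
print(xor_space_compress([1, 0, 0, 1]))   # 0 -> the single error is visible
```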

126 citations


Proceedings ArticleDOI
25 Oct 1988
TL;DR: It is shown that a family of contours extracted from an image can be modelled geometrically as a single entity, based on the theory of recurrent iterated function systems (RIFS), a rich source for deterministic images, including curves which cannot be generated by standard techniques.
Abstract: A new fractal technique for the analysis and compression of digital images is presented. It is shown that a family of contours extracted from an image can be modelled geometrically as a single entity, based on the theory of recurrent iterated function systems (RIFS). RIFS structures are a rich source for deterministic images, including curves which cannot be generated by standard techniques. Control and stability properties are investigated. We state a control theorem - the recurrent collage theorem - and show how to use it to constrain a recurrent IFS structure so that its attractor is close to a given family of contours. This closeness is not only perceptual; it is measured by means of a min-max distance, for which shape and geometry are important but slight shifts are not. It is therefore the right distance to use for geometric modeling. We show how a very intricate geometric structure, at all scales, is inherently encoded in a few parameters that describe entirely the recurrent structures. Very high data compression ratios can be obtained. The decoding phase is achieved through a simple and fast reconstruction algorithm. Finally, we suggest how higher dimensional structures could be designed to model a wide range of digital textures, thus leading our research towards a complete image compression system that will take its input from some low-level image segmenter.
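
The attractor of an iterated function system can be rendered with the simple "chaos game"; the sketch below uses an ordinary (non-recurrent) IFS with assumed affine maps, so it only hints at the recurrent construction described in the paper:

```python
import random

# Affine contractions (a, b, c, d, e, f): (x, y) -> (a*x + b*y + e, c*x + d*y + f).
# These particular maps are illustrative; they generate a Sierpinski-like attractor.
MAPS = [
    (0.5, 0.0, 0.0, 0.5, 0.0, 0.0),
    (0.5, 0.0, 0.0, 0.5, 0.5, 0.0),
    (0.5, 0.0, 0.0, 0.5, 0.25, 0.5),
]

def chaos_game(n_points=10000):
    """Iterate randomly chosen contractions; the orbit converges onto the IFS
    attractor, which is entirely encoded by the few map coefficients above --
    this is the sense in which very high compression ratios are possible."""
    x, y = 0.0, 0.0
    points = []
    for _ in range(n_points):
        a, b, c, d, e, f = random.choice(MAPS)
        x, y = a * x + b * y + e, c * x + d * y + f
        points.append((x, y))
    return points
```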

120 citations


Proceedings ArticleDOI
11 Apr 1988
TL;DR: A novel coding scheme has been developed which is based on multidimensional subband coding that yields high compression with sustained good quality and compares favorably to DCT with interframe prediction and inter/intra frame DPCM.
Abstract: A novel coding scheme has been developed which is based on multidimensional subband coding. The digital video signal is filtered and sub-sampled in all three dimensions (temporally, horizontally and vertically) to yield the subbands, from which the input signal can be losslessly reconstructed in the absence of coding loss. The subbands can be more efficiently coded than the input signal in terms of compression and quality, because the restricted information in each band allows well-tailored encoding. The computational complexity of this coding scheme compares favorably to DCT with interframe prediction and inter/intra frame DPCM. The scheme has an architectural structure suitable for parallel implementation, and it yields high compression with sustained good quality. >
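
Each three-dimensional split can be assembled from one-dimensional two-band splits applied in turn along time, rows, and columns; the two-tap pair below is an assumed stand-in for the longer filters a practical coder would use:

```python
def two_band_split(samples):
    """One-dimensional two-band analysis (even-length input assumed):
    pairwise averages form the lowpass band, pairwise half-differences the
    highpass band. Applying this along the temporal, horizontal and vertical
    axes in turn yields the 3-D subbands."""
    low = [(samples[i] + samples[i + 1]) / 2 for i in range(0, len(samples) - 1, 2)]
    high = [(samples[i] - samples[i + 1]) / 2 for i in range(0, len(samples) - 1, 2)]
    return low, high

def two_band_merge(low, high):
    """Exact inverse of two_band_split: the split is lossless before quantization."""
    out = []
    for l, h in zip(low, high):
        out.extend([l + h, l - h])
    return out
```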

120 citations


Patent
Edward R. Fiala1, Daniel H. Greene1
29 Apr 1988
TL;DR: In this article, the source data is encoded by literal codewords of varying length value, with or without copy codewords of varying length and displacement value.
Abstract: In accordance with the present invention source data is encoded by literal codewords of varying length value, with or without the encoding of copy codewords of varying length and displacement value. Copy codeword encoding is central to textual substitution-style data compression, but the encoding of variable length literals may be employed for other types of data compression.
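
Textual-substitution compressors of this style emit a copy codeword (length, displacement) when the upcoming text repeats something in a sliding window, and a literal codeword otherwise. The greedy sketch below illustrates the idea only; the window size, minimum match length, and codeword formats are assumptions, not the patent's:

```python
def copy_literal_encode(data: bytes, window: int = 4096, min_copy: int = 3):
    """Greedy textual substitution: emit ('copy', length, displacement) when a
    match of at least min_copy bytes exists in the preceding window, otherwise
    emit ('literal', byte). A real coder would pack these into variable-length
    codewords and batch consecutive literals."""
    out, i = [], 0
    while i < len(data):
        best_len, best_disp = 0, 0
        for j in range(max(0, i - window), i):
            length = 0
            while i + length < len(data) and data[j + length] == data[i + length]:
                length += 1
            if length > best_len:
                best_len, best_disp = length, i - j
        if best_len >= min_copy:
            out.append(('copy', best_len, best_disp))
            i += best_len
        else:
            out.append(('literal', data[i]))
            i += 1
    return out

# Three literals, then an overlapping copy of length 6 at displacement 3, then 'd'.
print(copy_literal_encode(b"abcabcabcd"))
```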

Journal ArticleDOI
TL;DR: The splay-prefix algorithm is one of the simplest and fastest adaptive data compression algorithms based on the use of a prefix code and applications of these algorithms to encryption and image processing are suggested.
Abstract: The splay-prefix algorithm is one of the simplest and fastest adaptive data compression algorithms based on the use of a prefix code. The data structures used in the splay-prefix algorithm can also be applied to arithmetic data compression. Applications of these algorithms to encryption and image processing are suggested.

Proceedings Article
01 Jan 1988
TL;DR: It is argued that the proposed data-compression technique reduces the hardware complexity in built-in self-test (BIST) logic designs using external tester environments.
Abstract: Presentation and analysis of a compression technique, called space compression, applicable to the compression of test data in either the BIST environment or an external tester environment.

Journal ArticleDOI
TL;DR: The implications of strong signal compression for the signal-to-noise ratio lead to the formulation of a two-step optimal experimental setup for system identification and parameter estimation of linear systems.
Abstract: An overview is given of existing analytical and numerical methods for the compression of the peaks of discrete, finite sums of sines. A novel method that compresses the signals optimally or almost optimally is presented. The algorithm is extended to the simultaneous compression of the input and output signals of a linear system. The implications of strong signal compression for the signal-to-noise ratio lead to the formulation of a two-step optimal experimental setup for system identification and parameter estimation of linear systems. >
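
The "compression" here is the reduction of a multisine's crest (peak) factor by choosing its phases; the snippet below simply evaluates that crest factor for two assumed phase sets, to make the quantity being minimized concrete (it does not implement the paper's optimization):

```python
import math

def crest_factor(phases, n_samples=4096):
    """Crest factor (peak / RMS) of a sum of equal-amplitude harmonics with
    the given phases, sampled over one fundamental period."""
    signal = []
    for n in range(n_samples):
        t = n / n_samples
        signal.append(sum(math.cos(2 * math.pi * (k + 1) * t + p)
                          for k, p in enumerate(phases)))
    peak = max(abs(s) for s in signal)
    rms = math.sqrt(sum(s * s for s in signal) / n_samples)
    return peak / rms

K = 8
print(crest_factor([0.0] * K))                                      # zero phases: large peak
print(crest_factor([math.pi * k * (k + 1) / K for k in range(K)]))  # Schroeder-type phases: much smaller
```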

Journal ArticleDOI
R. B. Arps1, T. K. Truong1, D. J. Lu1, R. C. Pasco1, T. D. Friedman1 
TL;DR: A VLSI chip for data compression has been implemented based on a general-purpose adaptive binary arithmetic coding (ABAC) architecture, which permits the reuse of adapter and arithmetic coder logic in a universal way, which together with application-specific model logic can create a variety of powerful compression systems.
Abstract: A VLSI chip for data compression has been implemented based on a general-purpose adaptive binary arithmetic coding (ABAC) architecture. This architecture permits the reuse of adapter and arithmetic coder logic in a universal way, which together with application-specific model logic can create a variety of powerful compression systems. The specific version of the adapter/coder used herein is the "Q-Coder," described in various companion papers. The hardware implementation is in a single HCMOS chip, to maximize speed and minimize cost. The primary purpose of the chip is to provide superior data compression performance for bilevel image data by using conditional binary source models together with adaptive arithmetic coding. The coding scheme implemented is called the Adaptive Bilevel Image Compression (ABIC) algorithm. On business documents, it consistently outperforms such nonadaptive algorithms as the CCITT Group 4 (T.6) Standard and comes into its own when adapting to documents scanned at different resolutions or which include significantly different data such as digital halftones. The multi-purpose nature of the chip allows access to internal partition combinations such as the "Q" adapter/coder, which in combination with external logic can be used to realize hardware for other compression applications. On-chip memory limitations can also be overcome by the addition of external memory in special cases. Other options include the uploading and downloading of adaptive statistics and choices to encode or decode, with or without adaptation of these statistics.

Journal ArticleDOI
TL;DR: Video coding has been investigated for the novel application of video transmission over packet-switched networks and seems promising in that it lends itself to parallel implementation, is robust enough to handle errors due to lost packets, and yields high compression with sustained good quality.
Abstract: Video coding has been investigated for the novel application of video transmission over packet-switched networks. The underlying design goals are presented, together with a software implementation of a coding scheme which divides the input signal into frequency bands in all three dimensions. The scheme seems promising in that it lends itself to parallel implementation, is robust enough to handle errors due to lost packets, and yields high compression with sustained good quality. Moreover, it may be well integrated with the network. An overview of the main issues is given, together with the results obtained from a simulated system.

Patent
24 Mar 1988
TL;DR: In this paper, a method and apparatus for encoding interframe error data in an image transmission system, and in particular in a motion compensated image transmission systems for transmitting a sequence of image frames from a transmitter to a receiver, employ hierarchial vector quantization and arithmetic coding to increase the data compression of the images being transmitted.
Abstract: A method and apparatus for encoding interframe error data in an image transmission system, and in particular in a motion compensated image transmission system for transmitting a sequence of image frames from a transmitter to a receiver, employ hierarchial vector quantization and arithmetic coding to increase the data compression of the images being transmitted. The method and apparatus decimate an interframe predicted image data and an uncoded current image data, and apply hierarchial vector quantization encoding to the resulting pyramid data structures. Lossy coding is applied on a level-by-level basis for generating the encoded data representation of the image difference between the predicted image data and the uncoded original image. The method and apparatus are applicable to systems transmitting a sequence of image frames both with and without motion compensation. The method and apparatus feature blurring those blocks of the predicted image data which fail to adequately represent the current image at a pyramid structural level and shifting block boundaries to increase the efficiency of the vector quantization coding mechanism. The method further features techniques when gain/shape vector quantization is employed for decreasing the data which must be sent to the receiver by varying the size of the shape code book as a function of the gain associated with the shape. Thresholding and the deletion of isolated blocks of data also decrease transmission requirements without objectionable loss of image quality.

Journal ArticleDOI
TL;DR: Using an arithmetic coder as the low-level encoding unit, it is shown how practical encoding systems can be constructed from syntactic models formulated using context-free grammars augmented with derivation step probabilities.
Abstract: The use of syntactic information source models for the source encoding (data compression) of messages is described. Syntactic models formulated using context-free grammars augmented with derivation step probabilities are considered. Using an arithmetic coder as the low-level encoding unit, it is shown how practical encoding systems can be constructed from such models. Application of the techniques to the encoding of syntactically correct Pascal computer programs is described, and additional techniques including the use of symbol tables are introduced. The resultant syntactic encoders achieve compression of Pascal programs approaching 90%. >
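
Once derivation-step probabilities are attached to the grammar, the ideal code length of a message is the sum of -log2 of the probabilities of the rules used in its derivation, which an arithmetic coder approaches to within a couple of bits; the toy grammar below is invented purely to show the computation (terminal symbols such as identifiers would still need their own coding, which is where symbol tables come in):

```python
import math

# Toy probabilistic grammar (assumed for illustration): each nonterminal maps
# to a list of (production, probability) pairs.
GRAMMAR = {
    "STMT": [(["ID", "=", "EXPR", ";"], 1.0)],
    "EXPR": [(["ID"], 0.5), (["ID", "+", "EXPR"], 0.5)],
}

def derivation_bits(rule_choices):
    """Ideal code length, in bits, of a derivation given as a list of
    (nonterminal, rule_index) choices; an arithmetic coder driven by the
    same model would come within a couple of bits of this total."""
    bits = 0.0
    for nonterminal, index in rule_choices:
        _, prob = GRAMMAR[nonterminal][index]
        bits += -math.log2(prob)
    return bits

# Derivation of "ID = ID + ID ;": STMT rule 0, then EXPR rule 1, then EXPR rule 0.
print(derivation_bits([("STMT", 0), ("EXPR", 1), ("EXPR", 0)]))  # 2.0 bits
```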

Journal ArticleDOI
TL;DR: A real-time compression algorithm that represents a modification of the amplitude zone time epoch coding (AZTEC) technique extended with several statistical parameters used to calculate the variable threshold has been developed and applied in the design of a pacemaker followup system.
Abstract: A real-time compression algorithm has been developed which is suitable for both real-time ECG (electrocardiogram) transmission and ECG data storing. The algorithm represents a modification of the amplitude zone time epoch coding (AZTEC) technique extended with several statistical parameters used to calculate the variable threshold. The proposed algorithm has been applied in the design of a pacemaker followup system for the online ECG data transmission from the pacemaker implanted in a human being to the computer system located at the clinic. >
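
AZTEC-style compression replaces runs of ECG samples that stay within a threshold by a single (duration, value) plateau; the sketch below shows only that plateau extraction with a fixed threshold, omitting slope coding and the paper's statistically adapted threshold:

```python
def aztec_plateaus(samples, threshold=5):
    """Greedy plateau extraction: extend the current run while all of its
    samples fit inside a window of width `threshold`, then emit the run as
    (duration, midpoint value). Slopes and adaptive thresholds are omitted."""
    plateaus, start = [], 0
    while start < len(samples):
        lo = hi = samples[start]
        end = start + 1
        while end < len(samples):
            new_lo, new_hi = min(lo, samples[end]), max(hi, samples[end])
            if new_hi - new_lo > threshold:
                break
            lo, hi = new_lo, new_hi
            end += 1
        plateaus.append((end - start, (hi + lo) / 2))
        start = end
    return plateaus

print(aztec_plateaus([0, 1, 2, 1, 40, 41, 39]))  # two plateaus replace seven samples
```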

Proceedings ArticleDOI
28 Nov 1988
TL;DR: The authors describe the existing reference model and the evolution toward the latest reference model (RM5), which is going to be submitted for standardization, and some examples for possible improvements are presented.
Abstract: A description is given of studies of source coding in the specialists group on coding for visual telephony in CCITT SG XV. Using a macro block approach, the source coding algorithm will have the capability of operating from 64 kb/s for videophone applications up to 2 Mb/s for high-quality videoconferencing applications. The authors describe the existing reference model and the evolution toward the latest reference model (RM5), which is going to be submitted for standardization. Theoretical information is provided and some examples for possible improvements are presented for the most important techniques used in the reference model, among which are quantization, variable block size, scanning classes, loop filter, entropy coding, multiple vs. single variable-length coding, and block-type discrimination. >

Journal ArticleDOI
TL;DR: This paper introduces the modern approach to text compression and describes a highly effective adaptive method, with particular emphasis on its potential for protecting messages from eavesdroppers.

Proceedings ArticleDOI
07 Jun 1988
TL;DR: An image compression system that incorporates Peano scan with a fractal-based coding scheme for intraframe compression and achieves high picture quality with a bit rate of <1 bit per pixel and is suitable for many applications in which complexity is a major concern.
Abstract: The authors describe an image compression system that incorporates Peano scan with a fractal-based coding scheme for intraframe compression and achieves high picture quality with a bit rate of less than 1 bit per pixel. The system is suitable for many applications in which complexity is a major concern. >

Patent
03 Aug 1988
TL;DR: In this article, an adaptive data compression system is reset when performance drops below a predetermined threshold to permit greater compression of long files with evolving distributions of symbol combinations, where initially unassigned codes are strategically assigned to symbol combinations as they are encountered in the data stream.
Abstract: An adaptive data compression system is reset when performance drops below a predetermined threshold to permit greater compression of long files with evolving distributions of symbol combinations. The compression system uses a resettable dictionary in which initially unassigned codes are strategically assigned to symbol combinations as they are encountered in the data stream. The difference between the bit-lengths of corresponding lengths of the compressor input and output is compared with a value representing a predetermined performance threshold. The dictionary can be reset if the actual performance falls below the performance threshold. The reset can be inhibited while the dictionary is less than half-full to ensure that performance measures are statistically significant. However, if the performance is such that data expansion is occurring, reset is not so delayed.
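
The reset decision can be sketched as a small monitor that tracks bits in and bits out since the last reset; the half-full guard and the immediate reset on expansion follow the description above, while the threshold value and interface are assumptions:

```python
class ResetMonitor:
    """Decide when an adaptive dictionary should be reset: only once the
    dictionary is at least half full (so the measurement is statistically
    meaningful), when the compression ratio falls below a threshold, or
    immediately if the 'compressed' output is actually expanding."""
    def __init__(self, dict_size, threshold_ratio=1.1):
        self.dict_size = dict_size
        self.threshold_ratio = threshold_ratio
        self.bits_in = 0
        self.bits_out = 0

    def record(self, bits_in, bits_out):
        """Accumulate the sizes of a block before and after compression."""
        self.bits_in += bits_in
        self.bits_out += bits_out

    def should_reset(self, entries_used):
        if self.bits_out > self.bits_in:            # expansion: reset regardless
            return True
        if entries_used < self.dict_size // 2:      # too early to judge
            return False
        return self.bits_in < self.threshold_ratio * self.bits_out

    def reset(self):
        self.bits_in = self.bits_out = 0
```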

Proceedings ArticleDOI
12 Sep 1988
TL;DR: A signature analysis technique is presented, which achieves smaller aliasing probability than other recently proposed schemes, and it is shown that there exist compression techniques for which the aliasing probability can be reduced to zero asymptotically.
Abstract: A coding theory framework is developed for analysis and synthesis of compression techniques in the built-in self test (BIST) environment. Using this framework, exact expressions are derived for the linear feedback shift register aliasing probability. These are shown to be more accurate than earlier ones. Also shown is that there exist compression techniques for which the aliasing probability can be reduced to zero asymptotically. An error model is presented that incorporates the effects of faults on output response. It is shown that the coding theory framework correlates well with this proposed error model. A signature analysis technique is presented, which achieves smaller aliasing probability than other recently proposed schemes. >
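
Signature analysis compacts a long response stream into the final state of a linear feedback shift register; the sketch below is a generic single-input signature register with an assumed CRC-16-style polynomial, not the improved scheme the paper proposes:

```python
def lfsr_signature(bits, poly=0x1021, width=16):
    """Shift the response bit stream through a single-input LFSR (here a
    CRC-16-CCITT-style polynomial, chosen only as an example) and return the
    final register state as the signature. Two different streams that leave
    the same state alias, and the corresponding fault escapes detection."""
    state = 0
    for bit in bits:
        feedback = ((state >> (width - 1)) & 1) ^ bit
        state = (state << 1) & ((1 << width) - 1)
        if feedback:
            state ^= poly
    return state

print(hex(lfsr_signature([1, 0, 1, 1, 0, 0, 1, 0])))
```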

Proceedings ArticleDOI
11 Apr 1988
TL;DR: This work investigates the application of the subband concept to the efficient coding of color images; reconstruction is performed by decoding and merging the reinterpolated subband images.
Abstract: Subband coding (SBC) has been shown to be an effective method for coding monochrome images at low bit rates. SBC systems decompose the input image into a small number of decimated, spectrally disjoint subband images which are then coded by a variety of means. Reconstruction is performed by decoding and merging the reinterpolated subband images. This work investigates the application of the subband concept to the efficient coding of color images. The color representations are cast into a subband coding framework to form color image coding systems. Differential PCM and finite-state vector quantization methods are used in coding the subband images. Bit rates below 1 bit/pixel are obtainable for high-quality representations of 256*256 color images. >

Proceedings ArticleDOI
16 Dec 1988
TL;DR: In this paper, the authors explored noisy compression algorithms which may compress multispectral data by up to 30:1 or more, using both traditional and mission-oriented criteria (e.g., feature classification consistency).
Abstract: The Earth Orbiting Satellite (EOS), scheduled for launch in the mid-1990s, will include a next-generation multispectral imaging system (HIRIS) having unprecedented spatial and spectral resolution. Its high resolution, however, comes at the cost of a raw data rate which exceeds the communication channel capacity assigned to the entire EOS mission. This paper explores noisy compression algorithms which may compress multispectral data by up to 30:1 or more. Algorithm performance is measured using both traditional and mission-oriented criteria (e.g., feature classification consistency). It is shown that vector quantization, merged with suitable preprocessing techniques, is the most viable candidate.
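
Vector quantization maps each input vector (here, the spectral vector of a pixel) to the index of its nearest codeword; the minimal encoder below assumes a codebook has already been trained (for example with the LBG algorithm) by whatever preprocessing the mission pipeline uses:

```python
def vq_encode(vectors, codebook):
    """Map each input vector to the index of the closest codeword (squared
    Euclidean distance). The compressed representation is just the list of
    indices; the decoder simply looks the codewords back up."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return [min(range(len(codebook)), key=lambda k: sq_dist(v, codebook[k]))
            for v in vectors]

# Toy 2-entry codebook for 3-band spectra (illustrative values only).
codebook = [[10, 12, 11], [40, 42, 39]]
print(vq_encode([[11, 11, 12], [41, 40, 40]], codebook))  # [0, 1]
```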

Patent
03 Nov 1988
TL;DR: In this article, a data modem includes a data compression circuit which compresses incoming data prior to transmission, and the compression ratio obtained from the compression process is used to select a constellation for transmission of the data.
Abstract: A data modem includes a data compression circuit which compresses incoming data prior to transmission. The compression ratio obtained from the compression process is used to select a constellation for transmission of the data. When higher compression rates are achieved, fewer constellation points (symbols) representing fewer bits per point are transmitted. In this manner, incoming data rate can be held constant while utilizing a more robust constellation which has greater immunity to transmission line impairments. This reduces errors and retransmissions which can ultimately lead to reduction in effective throughput in conventional systems.

Patent
01 Apr 1988
TL;DR: In this paper, a halftone image data compression method is described, where the image data is divided into a plurality of blocks and orthogonal transformation is performed in units of blocks to obtain transformed coefficients.
Abstract: A halftone image data compression method includes steps wherein digital halftone image data is divided into a plurality of blocks, orthogonal transformation is performed in units of blocks to obtain transform coefficients, and the transform coefficients are quantized and coded. A final quantization interval is determined for each block, using as a parameter the frequency with which the AC transform coefficients of that block are quantized to zero.
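
One way to realize such a per-block decision is to try candidate quantization intervals, count how many AC coefficients each drives to zero, and select accordingly; the selection rule in the sketch below is invented for illustration and is not the patent's parameterization:

```python
def choose_quant_interval(ac_coeffs, candidates=(4, 8, 16, 32), target_zero_fraction=0.75):
    """Pick the smallest candidate quantization interval for which at least
    target_zero_fraction of the block's AC coefficients quantize to zero, so
    that busy blocks receive coarser quantization. The rule itself is an
    assumed stand-in for the patent's parameterized decision."""
    for q in candidates:
        zeros = sum(1 for c in ac_coeffs if abs(c) < q / 2)  # rounds to zero
        if zeros / len(ac_coeffs) >= target_zero_fraction:
            return q
    return candidates[-1]

def quantize(ac_coeffs, q):
    """Uniform quantization with interval q (round to the nearest level)."""
    return [round(c / q) for c in ac_coeffs]

block_ac = [30, -12, 5, 3, -2, 1, 0, 0]
q = choose_quant_interval(block_ac)
print(q, quantize(block_ac, q))
```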

Journal ArticleDOI
TL;DR: With this scheme of two-dimensional (2-D) spline interpolation for image reconstruction, a compression rate of greater than 12 is achieved for X-ray left ventricular cineangiogram image data compression.

Patent
03 Aug 1988
TL;DR: A data compression system implementing expansion protection employs one or more pairs of FIFOs to compare the lengths of raw and processed versions of a block of received data as discussed by the authors, where the shorter version is transmitted so that the data transmitted by the compression system is at most negligibly expanded relative to the system input.
Abstract: A data compression system implementing expansion protection employs one or more pairs of FIFOs to compare the lengths of raw and processed versions of a block of received data. A data compression system includes a data compressor (11), a controller (19), a FIFO (13) for compressed data, and another FIFO (15) for uncompressed data. The FIFOs (13 and 15) are used to compare the length of a data block processed by the data compressor (11) with the raw version of the data block. The shorter version is transmitted so that the data transmitted by the data compression system is at most negligibly expanded relative to the system input. A code injector (17) inserts a code into the output stream to indicate the beginning of the transmission of a raw data block so that a receiving or retrieving system can determine whether the data following needs to be decompressed or not. Further codes can be injected to indicate a switch from raw data to processed data in the output of the compression system.
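
Stripped to its essentials, expansion protection compresses a block, compares lengths, and transmits whichever version is shorter behind a one-byte marker so the receiver knows whether to decompress; the sketch below uses zlib and an invented marker convention purely as stand-ins for the patented hardware:

```python
import zlib

RAW, COMPRESSED = b"\x00", b"\x01"   # assumed one-byte markers

def protect(block: bytes) -> bytes:
    """Emit the compressed block only if it is actually shorter than the raw
    block; otherwise pass the raw data through, so the output never grows by
    more than the single marker byte."""
    packed = zlib.compress(block)
    if len(packed) < len(block):
        return COMPRESSED + packed
    return RAW + block

def unprotect(message: bytes) -> bytes:
    """Receiver side: inspect the marker to decide whether to decompress."""
    marker, payload = message[:1], message[1:]
    return zlib.decompress(payload) if marker == COMPRESSED else payload

print(unprotect(protect(b"AAAAAAAAAAAAAAAAAAAA")))  # compressible: sent compressed
print(unprotect(protect(b"\x07\x99\x42")))          # incompressible: sent raw
```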