
Showing papers on "Data compression published in 1990"


Journal ArticleDOI
TL;DR: A new competitive-learning algorithm based on the “conscience” learning method is introduced that is shown to be efficient and yields near-optimal results in vector quantization for data compression.

726 citations


Journal ArticleDOI
TL;DR: The theoretical bases behind the direct ECG data compression schemes are presented and classified into three categories: tolerance-comparison compression, DPCM, and entropy coding methods and a framework for evaluation and comparison of ECG compression schemes is presented.
Abstract: Electrocardiogram (ECG) compression techniques are compared, and a unified view of these techniques is established. ECG data compression schemes are presented in two major groups: direct data compression and transformation methods. The direct data compression techniques are ECG differential pulse code modulation (DPCM) and entropy coding, AZTEC, Turning-point, CORTES, Fan and SAPA algorithms, peak-picking, and cycle-to-cycle compression methods. The transformation methods include Fourier, Walsh, and Karhunen-Loeve transforms. The theoretical bases behind the direct ECG data compression schemes are presented and classified into three categories: tolerance-comparison compression, DPCM, and entropy coding methods. A framework for evaluation and comparison of ECG compression schemes is presented. >

690 citations
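
To make the "direct data compression" category surveyed above concrete, here is a minimal first-order DPCM sketch in Python; the quantizer step size and the synthetic test signal are arbitrary illustrative choices, not parameters taken from the survey.

```python
import numpy as np

def dpcm_encode(x, step=8):
    """First-order DPCM: quantize the difference between each sample and the
    prediction reconstructed so far (closed-loop prediction)."""
    codes = np.empty(len(x), dtype=int)
    pred = 0.0
    for i, sample in enumerate(x):
        q = int(round((sample - pred) / step))   # quantized residual
        codes[i] = q
        pred += q * step                         # decoder-side reconstruction
    return codes

def dpcm_decode(codes, step=8):
    recon = np.empty(len(codes))
    pred = 0.0
    for i, q in enumerate(codes):
        pred += q * step
        recon[i] = pred
    return recon

# toy "ECG-like" test signal; an entropy coder for `codes` would follow in a real scheme
x = 100 * np.sin(np.linspace(0, 6 * np.pi, 500)) + np.random.randn(500)
recon = dpcm_decode(dpcm_encode(x))
print("max reconstruction error:", np.abs(recon - x).max())   # bounded by step / 2
```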


Journal ArticleDOI
TL;DR: It is shown that the estimates made by Cleary and Witten of the resources required to implement the PPM scheme can be revised to allow for a tractable and useful implementation.
Abstract: The prediction by partial matching (PPM) data compression algorithm developed by J. Cleary and I. Witten (1984) is capable of very high compression rates, encoding English text in as little as 2.2 b/character. It is shown that the estimates made by Cleary and Witten of the resources required to implement the scheme can be revised to allow for a tractable and useful implementation. In particular, a variant is described that encodes and decodes at over 4 kB/s on a small workstation and operates within a few hundred kilobytes of data space, but still obtains compression of about 2.4 b/character for English text.

437 citations
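
As a rough illustration of the context-modelling idea behind PPM (not Moffat's implementation, which couples the model to an arithmetic coder and uses carefully engineered data structures), the toy sketch below keeps order-0..2 byte contexts, escapes to shorter contexts when a symbol is unseen, and reports the ideal code length; the maximum order and the escape estimator are arbitrary choices, and exclusions are omitted.

```python
from collections import defaultdict, Counter
from math import log2

class ToyPPM:
    """Order-2 byte context model with escape to shorter contexts.  It returns
    the ideal code length -log2(p) per symbol; a real coder would drive an
    arithmetic coder with these probabilities."""

    def __init__(self, order=2):
        self.order = order
        self.tables = [defaultdict(Counter) for _ in range(order + 1)]  # index = context length

    def _prob(self, context, symbol):
        p = 1.0
        for k in range(min(self.order, len(context)), -1, -1):
            counts = self.tables[k][context[len(context) - k:]]
            total = sum(counts.values())
            if total == 0:
                continue                                   # nothing seen here yet: free escape
            distinct = len(counts)
            if counts[symbol] > 0:
                return p * counts[symbol] / (total + distinct)
            p *= distinct / (total + distinct)             # pay for an escape, try a shorter context
        return p / 256                                     # order -1: uniform over bytes

    def _update(self, context, symbol):
        for k in range(min(self.order, len(context)) + 1):
            self.tables[k][context[len(context) - k:]][symbol] += 1

    def code_length(self, data: bytes) -> float:
        bits = 0.0
        for i, sym in enumerate(data):
            ctx = data[max(0, i - self.order):i]
            bits += -log2(self._prob(ctx, sym))
            self._update(ctx, sym)
        return bits

text = b"the quick brown fox jumps over the lazy dog " * 50
print("bits/char:", ToyPPM().code_length(text) / len(text))
```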


Patent
28 Mar 1990
TL;DR: In this article, a log polar mapper was used to match the image display to human perceptual resolution to achieve a 25:1 compression ratio with no loss of perceived cues, and then the perceptual channels were separated into low resolution, high discrimination level color and high resolution, low discrimination level contrast edges.
Abstract: Video image compression apparatus and method provides full color, wide field of view, real time imagery having high central resolution. Compression ratios of 1600:1 are achieved thus reducing a required data transmission bandwidth sufficiently to abolish line-of-sight restrictions. Data compression apparatus and method (a) uses a log polar mapper to match the image display to human perceptual resolution to achieve a 25:1 compression ratio with no loss of perceived cues, (b) separates perceptual channels into low resolution, high discrimination level color and high resolution, low discrimination level contrast edges to yield an additional 8:1 compression ratio and (c) applies a data compression technique to yield an additional 8:1 compression ratio. A Gaussian filter is employed in generating a display of the imagery from the compressed data. An operator is provided a capability to rapidly move the high resolution window to any point of interest within the display.

221 citations
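
Step (a), the log polar mapping, can be pictured with a small sketch; the ring and wedge counts and the minimum radius below are made-up parameters, not the patent's, and the sketch simply bin-averages pixels rather than modelling human acuity.

```python
import numpy as np

def log_polar_map(img, n_rings=32, n_wedges=64, r_min=2.0):
    """Average Cartesian pixels into log-polar bins centred on the image:
    ring spacing grows exponentially with eccentricity, so resolution is
    high near the centre and coarse in the periphery."""
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    dy, dx = yy - h / 2.0, xx - w / 2.0
    r = np.hypot(dx, dy)
    theta = np.arctan2(dy, dx)                      # -pi .. pi
    r_max = r.max()
    # ring index: logarithmic in radius; wedge index: uniform in angle
    ring = np.floor(n_rings * np.log(np.maximum(r, r_min) / r_min)
                    / np.log(r_max / r_min)).astype(int)
    ring = np.clip(ring, 0, n_rings - 1)
    wedge = ((theta + np.pi) / (2 * np.pi) * n_wedges).astype(int) % n_wedges
    out = np.zeros((n_rings, n_wedges))
    count = np.zeros((n_rings, n_wedges))
    np.add.at(out, (ring, wedge), img)
    np.add.at(count, (ring, wedge), 1)
    return out / np.maximum(count, 1)

img = np.random.rand(512, 512)
mapped = log_polar_map(img)
print(f"{img.size / mapped.size:.0f}:1 reduction in samples")   # ~128:1 with these bin counts
```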


Journal ArticleDOI
Peter Strobach1
TL;DR: A new type of scene adaptive coder that involves a quadtree mean decomposition of the motion-compensated frame-to-frame difference signal followed by a scalar quantization of the local means to achieve a subjective image quality that is as good or better than that of the traditional transform-based counterpart.
Abstract: A new type of scene adaptive coder has been developed. It involves a quadtree mean decomposition of the motion-compensated frame-to-frame difference signal followed by a scalar quantization of the local means. As a fundamental property, the new coding algorithm treats the displacement estimation problem and the quadtree construction problem as a unit. The displacement vector and the related quadtree are jointly optimized in order to minimize the direct frame-to-frame update information rate (in bits), which turns up as a new and more adequate cost function in displacement estimation. This guarantees the highest possible data compression ratio at a given quality threshold. Excellent results have been obtained for coding of color image sequences at a rate of 64 kb/s. The quadtree concept entails a much lower computational complexity compared to the conventional motion-compensated transform coder while achieving a subjective image quality that is as good or better than that of the traditional transform-based counterpart. >

213 citations
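
A minimal sketch of the quadtree mean decomposition applied to a difference frame; the threshold, block size and test data are arbitrary, and the joint optimization with displacement estimation described in the abstract is omitted.

```python
import numpy as np

def quadtree_means(diff, x, y, size, threshold, out):
    """Recursively split the (motion-compensated) difference image: if every
    pixel of a block is within `threshold` of the block mean, emit a single
    mean for the whole block; otherwise split it into four quadrants."""
    block = diff[y:y + size, x:x + size]
    mean = block.mean()
    if size == 1 or np.abs(block - mean).max() <= threshold:
        out.append((x, y, size, mean))        # one value stands in for size*size pixels
        return
    half = size // 2
    for dy in (0, half):
        for dx in (0, half):
            quadtree_means(diff, x + dx, y + dy, half, threshold, out)

# toy difference frame: flat except for a small "moving" region
diff = np.zeros((64, 64))
diff[20:28, 30:38] = np.random.randn(8, 8) * 20
leaves = []
quadtree_means(diff, 0, 0, 64, threshold=4.0, out=leaves)
print(len(leaves), "leaf means instead of", diff.size, "pixel differences")
```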


Proceedings ArticleDOI
03 Apr 1990
TL;DR: An approach to digital image coding, rooted in iterated transformation theory (ITT) and referred to as ITT-based coding, is proposed, which is a fractal block-coding method which relies on the assumption that image redundancy can be efficiently exploited through block self-transformability.
Abstract: An approach to digital image coding, rooted in iterated transformation theory (ITT) and referred to as ITT-based coding, is proposed. It is a fractal block-coding method which relies on the assumption that image redundancy can be efficiently exploited through block self-transformability. The coding-decoding system is based on the construction, for any given original image to encode, of an image transformation of a special kind which (when iterated on any initial image) produces a sequence of images that converges to a fractal approximation of the original. The requirements on the transformation are that (i) it is contractive in the metric space of images endowed with the L2 metric, (ii) it leaves the original image approximately invariant, and (iii) its complexity is smaller than that of the original image. The fully automated ITT-based system has comparable performance, in terms of signal-to-noise ratio and bit rate, to state-of-the-art vector quantizers, with which it shares some features.

192 citations
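
The block self-transformability idea can be sketched as a toy fractal block coder: each range block is approximated by a spatially contracted domain block under an affine grey-level map, and decoding iterates the stored transformation from an arbitrary start image. The block sizes, domain search grid and scale clamp below are arbitrary simplifications, not the parameters of the ITT-based system.

```python
import numpy as np

R, D = 4, 8        # range block size and domain block size (domain shrinks 2x to range size)

def shrink(block):
    """2x2 averaging so an 8x8 domain block matches a 4x4 range block."""
    return block.reshape(R, 2, R, 2).mean(axis=(1, 3))

def encode(img, step=8):
    """For every range block pick the domain block and grey-level map s*d + o
    that best approximate it in the L2 sense; clamping |s| < 1 keeps the
    overall transformation contractive."""
    h, w = img.shape
    domains = [(x, y, shrink(img[y:y + D, x:x + D]).ravel())
               for y in range(0, h - D + 1, step)
               for x in range(0, w - D + 1, step)]
    code = []
    for ry in range(0, h, R):
        for rx in range(0, w, R):
            r = img[ry:ry + R, rx:rx + R].ravel()
            best = None
            for dx, dy, d in domains:
                dc, rc = d - d.mean(), r - r.mean()
                s = np.clip(np.dot(dc, rc) / (np.dot(dc, dc) + 1e-9), -0.9, 0.9)
                o = r.mean() - s * d.mean()
                err = np.sum((s * d + o - r) ** 2)
                if best is None or err < best[0]:
                    best = (err, dx, dy, s, o)
            code.append((rx, ry) + best[1:])
    return code

def decode(code, shape, iters=10):
    """Iterate the stored transformation from an arbitrary start image; by
    contractivity the iterates converge near the encoded original."""
    img = np.zeros(shape)
    for _ in range(iters):
        nxt = np.empty(shape)
        for rx, ry, dx, dy, s, o in code:
            nxt[ry:ry + R, rx:rx + R] = s * shrink(img[dy:dy + D, dx:dx + D]) + o
        img = nxt
    return img

img = np.kron(np.random.rand(8, 8), np.ones((8, 8))) * 255    # 64x64 blocky test image
rec = decode(encode(img), img.shape)
print("RMSE:", np.sqrt(np.mean((rec - img) ** 2)))
```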


Journal ArticleDOI
TL;DR: A novel framework for digital image compression called visual pattern image coding, or VPIC, is presented; a set of visual patterns is defined independently of the images to be coded, and there is no training phase required.
Abstract: A novel framework for digital image compression called visual pattern image coding, or VPIC, is presented. In VPIC, a set of visual patterns is defined independently of the images to be coded. Each visual pattern is a subimage of limited spatial support that is visually meaningful to a normal human observer. The patterns are used as a basis for efficient image representation; since it is assumed that the images to be coded are natural optical images to be viewed by human observers, visual pattern design is developed using relevant psychophysical and physiological data. Although VPIC bears certain resemblances to block truncation coding (BTC) and vector quantization (VQ) image coding, there are important differences. First, there is no training phase required: the visual patterns derive from models of perceptual mechanisms; second, the assignment of patterns to image regions is not based on a standard (norm) error criterion, and expensive search operations are eliminated.

144 citations


Journal ArticleDOI
TL;DR: The quality (SNR value) of the images encoded by the proposed A-VQ method is the same as that of a memoryless vector quantizer, but the bit rate would be reduced by a factor of approximately two when compared to aMemoryless Vector quantizer.
Abstract: A novel vector quantization scheme, called the address-vector quantizer (A-VQ), is proposed. It is based on exploiting the interblock correlation by encoding a group of blocks together using an address-codebook. The address-codebook consists of a set of address-codevectors where each codevector represents a combination of addresses (indexes). Each element of this codevector is an address of an entry in the LBG-codebook, representing a vector quantized block. The address-codebook consists of two regions: one is the active (addressable) region, and the other is the inactive (nonaddressable) region. During the encoding process the codevectors in the address-codebook are reordered adaptively in order to bring the most probable address-codevectors into the active region. When encoding an address-codevector, the active region of the address-codebook is checked, and if such an address combination exists, its index is transmitted to the receiver. Otherwise, the address of each block is transmitted individually. The quality (SNR value) of the images encoded by the proposed A-VQ method is the same as that of a memoryless vector quantizer, but the bit rate is reduced by a factor of approximately two when compared to a memoryless vector quantizer.

138 citations


Proceedings ArticleDOI
01 May 1990
TL;DR: Methods of compressing HDTV (high-definition television) image sequences are investigated and a new combination of median adaptive intraframe prediction and arithmetic coding of the resultant prediction error results in fully reversible compression with low computational complexity.
Abstract: Methods of compressing HDTV (high-definition television) image sequences are investigated. Best results are achieved by a new combination of median adaptive intraframe prediction and arithmetic coding of the resultant prediction error. Fully reversible compression with low computational complexity is possible. The method has been used to compress the simulated progressive scan HDTV image sequence 'Kiel Harbor' to a rate less than that of an (uncompressed) interlace source. Interlaced HDTV images have been compressed to approximately 55% of their original rate. Intended applications are image storage and full-quality data exchange. >

132 citations
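
A minimal sketch of median adaptive intraframe prediction in one common formulation (predictor set W, N, W + N - NW), which may differ in detail from the paper's; the arithmetic coding of the residuals is omitted, and borders are simply treated as zero.

```python
import numpy as np

def median_adaptive_residuals(img):
    """Predict each pixel as the median of the left neighbour (W), the upper
    neighbour (N) and the planar estimate W + N - NW; the residuals are what
    would be handed to the arithmetic coder."""
    img = img.astype(int)
    h, w = img.shape
    res = np.zeros_like(img)
    for y in range(h):
        for x in range(w):
            W = img[y, x - 1] if x > 0 else 0
            N = img[y - 1, x] if y > 0 else 0
            NW = img[y - 1, x - 1] if x > 0 and y > 0 else 0
            pred = int(np.median([W, N, W + N - NW]))
            res[y, x] = img[y, x] - pred
    return res

# on a smooth ramp the residuals collapse to two values -> very low entropy
ramp = np.add.outer(np.arange(64), np.arange(64))
print("distinct residual values:", np.unique(median_adaptive_residuals(ramp)))
```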


Journal ArticleDOI
TL;DR: A unified coding method for image and text data is proposed; results show that a text of about 4 to 8 kbytes can be embedded into a 256 × 256 pixel image while the image retains an S/N ratio of about 30 to 40 dB.
Abstract: A unified coding method for image and text data is proposed in this paper. This scheme enables a text to be embedded into an image by using a redundant region of the frequency space. Our results show that a text of about 4 to 8 kbytes can be embedded into a 256 × 256 pixel image while the image retains an S/N ratio of about 30 to 40 dB.

118 citations


Journal ArticleDOI
TL;DR: An optimal class of distances satisfying an orthogonality condition analogous to that enjoyed by linear projections in Hilbert space is derived and possess the geometric properties of cross entropy useful in speech and image compression, pattern classification, and cluster analysis.
Abstract: Minimum distance approaches are considered for the reconstruction of a real function from finitely many linear functional values. An optimal class of distances satisfying an orthogonality condition analogous to that enjoyed by linear projections in Hilbert space is derived. These optimal distances are related to measures of distances between probability distributions recently introduced by C.R. Rao and T.K. Nayak (1985) and possess the geometric properties of cross entropy useful in speech and image compression, pattern classification, and cluster analysis. Several examples from spectrum estimation and image processing are discussed. >

Journal ArticleDOI
TL;DR: The range of applicability of nonlinear interpolative vector quantization is illustrated with examples in which optimal nonlinear estimation from quantized data is needed for efficient signal compression.
Abstract: A process by which a reduced-dimensionality feature vector can be extracted from a high-dimensionality signal vector and then vector quantized with lower complexity than direct quantization of the signal vector is discussed. In this procedure, a receiver must estimate, or interpolate, the signal vector from the quantized features. The task of recovering a high-dimensional signal vector from a reduced-dimensionality feature vector can be viewed as a generalized form of interpolation or prediction. A way in which optimal nonlinear interpolation can be achieved with negligible complexity, eliminating the need for ad hoc linear or nonlinear interpolation techniques, is presented. The range of applicability of nonlinear interpolative vector quantization is illustrated with examples in which optimal nonlinear estimation from quantized data is needed for efficient signal compression. >

Patent
05 Nov 1990
TL;DR: In this paper, a Discrete Cosine Transform (DCT) is performed on blocks of the image data and the average RMS values for the coefficients of the images are determined.
Abstract: In a method for creating a scan sequence for a single-chip color camera, an analysis of typical images, captured through a color filter array, is performed to determine an optimal scan sequence for the particular array and/or the typical images. The images, after being digitally captured, are separated into red, green, and blue image databases, and the color databases are processed separately. A Discrete Cosine Transform is performed on blocks of the image data and the average RMS values for the coefficients of the images are determined. The RMS values are sorted in descending order to produce a descending scan sequence that optimizes the performance of run length coding schemes. The scan sequence can be stored in a hardware, firmware or software lookup table as a list of block coordinates or indices and used by the camera system to convert two-dimensional blocks of coefficients into one-dimensional lists of coefficients suitable for run length coding. The block coefficients are used to convert the decoded coefficients into image blocks before presentation on a color CRT or before producing a color print. By incorporating the method into an image capture system, an adaptive system is produced which will optimize coding for different image environments and/or different color filter arrays suitable for the different environments.
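
A sketch of the core scan-sequence construction on a single colour channel, assuming scipy's DCT as a stand-in for the camera's transform; the block size and test data are arbitrary, and the colour-filter-array separation step is omitted.

```python
import numpy as np
from scipy.fft import dctn

def rms_scan_sequence(channel, block=8):
    """Tile one colour channel into blocks, take the 2-D DCT of each block and
    order the coefficient positions by descending RMS value; the result is a
    lookup table mapping scan position -> (row, col) within a block."""
    h, w = channel.shape
    h, w = h - h % block, w - w % block
    tiles = (channel[:h, :w]
             .reshape(h // block, block, w // block, block)
             .transpose(0, 2, 1, 3)
             .reshape(-1, block, block))
    coeffs = np.stack([dctn(t, norm='ortho') for t in tiles])
    rms = np.sqrt((coeffs ** 2).mean(axis=0))                 # per-position RMS over all blocks
    flat_order = np.argsort(-rms, axis=None)
    return [(int(r), int(c)) for r, c in zip(*np.unravel_index(flat_order, rms.shape))]

channel = np.random.rand(256, 256) * 255                      # stand-in for one colour plane
scan = rms_scan_sequence(channel)
print("first six scan positions:", scan[:6])                  # the DC term (0, 0) comes first
```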

Patent
13 Dec 1990
TL;DR: In this paper, a method for maximizing data compression by optimizing model selection during coding of an input stream of data symbols, wherein at least two models are run and compared, and the model with the best coding performance for a given size segment or block of compressed data is selected such that only its block is used in an output data stream.
Abstract: A system and method for maximizing data compression by optimizing model selection during coding of an input stream of data symbols, wherein at least two models are run and compared, and the model with the best coding performance for a given-size segment or block of compressed data is selected such that only its block is used in an output data stream. The best performance is determined by 1) respectively producing comparable-size blocks of compressed data from the input stream with the use of the two, or more, models and 2) selecting the model which compresses the most input data. In the preferred embodiment, respective strings of data are produced with each model from the symbol data and are coded with an adaptive arithmetic coder into the compressed data. Each block of compressed data is started by coding the decision to use the model currently being run and all models start with the arithmetic coder parameters established at the end of the preceding block. Only the compressed code stream of the best model is used in the output and that code stream has in it the overhead for selection of that model. Since the decision as to which model to run is made in the compressed data domain, i.e., the best model is chosen on the basis of which model coded the most input symbols for a given-size compressed block, rather than after coding a given number of input symbols, the model selection decision overhead scales with the compressed data. Successively selected compressed blocks are combined as an output code stream to produce an optimum output of compressed data, from input symbols, for storage or transmission.
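
A loose sketch of the "run several models, keep the best block" idea, using zlib and bz2 from the standard library as stand-in models; note that, unlike the patent, this sketch fixes the input segment size and compares output sizes, rather than fixing the compressed block size and comparing how much input each model consumed.

```python
import bz2, zlib

def compress_best_per_segment(data: bytes, seg_size=4096):
    """Run two stand-in 'models' on each segment and keep whichever output is
    smaller, prefixing one byte that tells the decoder which model won."""
    out = []
    for i in range(0, len(data), seg_size):
        seg = data[i:i + seg_size]
        candidates = [(b'\x00', zlib.compress(seg, 9)), (b'\x01', bz2.compress(seg))]
        tag, block = min(candidates, key=lambda c: len(c[1]))
        out.append(tag + len(block).to_bytes(4, 'big') + block)
    return b''.join(out)

def decompress(stream: bytes) -> bytes:
    out, i = [], 0
    while i < len(stream):
        tag, size = stream[i], int.from_bytes(stream[i + 1:i + 5], 'big')
        block = stream[i + 5:i + 5 + size]
        out.append(zlib.decompress(block) if tag == 0 else bz2.decompress(block))
        i += 5 + size
    return b''.join(out)

data = (b'abc' * 5000) + bytes(range(256)) * 40
packed = compress_best_per_segment(data)
assert decompress(packed) == data
print(len(data), "->", len(packed), "bytes")
```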

Patent
09 Nov 1990
TL;DR: In this paper, a nonadaptive predictor, a nonuniform quantizer and a multi-level Huffman coder are incorporated into a differential pulse code modulation system for coding and decoding broadcast video signals in real time.
Abstract: A non-adaptive predictor, a nonuniform quantizer and a multi-level Huffman coder are incorporated into a differential pulse code modulation system for coding and decoding broadcast video signals in real time.

Patent
16 Mar 1990
TL;DR: In this paper, an adaptive compression/decompression method for color video data with an anti-aliasing mode was proposed, where the user settable thresholds can be used to shift the types of compression used.
Abstract: An adaptive compression/decompression method for color video data with an anti-aliasing mode. 4×4 blocks of pixel data are examined to determine which one of four compression techniques should be used on each block. User settable thresholds can be used to shift the types of compression used. Highest compression is obtained when more data is stored in run length blocks of single colors and lowest compression when more data is stored as two colors with a 32-bit bitmap for each 4×4 block. One type of compression used provides anti-aliasing.
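
A sketch of the per-block classification step only; the colour-count rule and threshold below are invented stand-ins for the patent's four compression techniques and user-settable thresholds, and the anti-aliasing mode is not reproduced.

```python
import numpy as np
from collections import Counter

def classify_blocks(img, threshold=2):
    """Route each 4x4 block to a representation based on how many distinct
    colours it contains; `threshold` plays the role of a user-settable knob
    that biases blocks toward cheaper encodings."""
    h, w, _ = img.shape
    plan = []
    for y in range(0, h, 4):
        for x in range(0, w, 4):
            block = img[y:y + 4, x:x + 4].reshape(-1, 3)
            distinct = len(np.unique(block, axis=0))
            if distinct == 1:
                plan.append(('single_colour_run', x, y))       # cheapest case
            elif distinct <= threshold:
                plan.append(('two_colour_bitmap', x, y))       # two colours + per-pixel map
            else:
                plan.append(('raw', x, y))                     # store pixels as-is
    return plan

img = np.zeros((64, 64, 3), dtype=np.uint8)
img[:, 30:] = (255, 0, 0)                                      # right part of the frame is flat red
print(Counter(kind for kind, _, _ in classify_blocks(img)))
```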

Journal ArticleDOI
TL;DR: This letter has found that using the wavelet transform in time and space, combined with a multiresolution approach, leads to an efficient and effective method of compression.
Abstract: This letter presents results on using wavelet transforms in both space and time for compression of real-time digital video data. The advantages of the wavelet transform for static image analysis are well known. We have found that using the wavelet transform in time and space, combined with a multiresolution approach, leads to an efficient and effective method of compression. In addition, the computational requirements are considerably lower than those of other compression methods and are better suited to VLSI implementation. Some preliminary results of compression on a sample video are presented.
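
The letter does not specify its wavelet or quantizer, so the sketch below uses a single-level separable Haar transform along time, height and width purely to illustrate the space-time decomposition: for a static (or slowly changing) scene most coefficients vanish, which is what the subsequent coding exploits.

```python
import numpy as np

def haar_1d(a, axis):
    """One Haar analysis step along one axis: pairwise averages (low band)
    followed by pairwise differences (high band)."""
    a = np.moveaxis(a, axis, 0)
    lo = (a[0::2] + a[1::2]) / 2.0
    hi = (a[0::2] - a[1::2]) / 2.0
    return np.moveaxis(np.concatenate([lo, hi]), 0, axis)

def haar_3d(video):
    """Apply the Haar step along time, height and width in turn, producing
    eight space-time subbands; for slowly changing scenes almost all of the
    energy collects in the low-low-low band."""
    out = video.astype(float)
    for axis in range(3):
        out = haar_1d(out, axis)
    return out

frame = np.kron(np.random.rand(8, 8), np.ones((8, 8)))    # blocky 64x64 test frame
video = np.repeat(frame[None], 8, axis=0)                  # idealised static scene, 8 frames
coeffs = haar_3d(video)
print(f"{(np.abs(coeffs) < 1e-9).mean():.0%} of coefficients vanish")   # ~88% for this toy scene
```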

Journal ArticleDOI
TL;DR: A special case of the data compression problem is presented, in which a powerful encoder transmits a coded file to a decoder that has severely constrained memory.
Abstract: A special case of the data compression problem is presented, in which a powerful encoder transmits a coded file to a decoder that has severely constrained memory. A data structure that achieves minimum storage is presented, and alternative methods that sacrifice a small amount of storage to attain faster decoding are described.

Journal ArticleDOI
01 May 1990
TL;DR: Time-recursive motion compensation prediction is introduced, in which all previously displayed history is used to predict the missing pixel values, and how to apply this algorithm to pyramid coding to achieve a better compression rate and compatibility with other lesser resolution standards is discussed.
Abstract: Most improved-definition television (IDTV) receivers use progressive scanning to reduce artifacts associated with interlacing (e.g. interline flicker, line crawl). Some novel techniques of motion compensated interpolation of the missing lines of interlaced monochrome and color sequences, reducing the artifacts associated with interlacing, and effectively increasing the vertical resolution of the image sequences are proposed. Time-recursive motion compensation prediction is introduced, in which all previously displayed history (not just the previous field) is used to predict the missing pixel values. The next future field is also used for the same purpose, by a lookahead scheme. Motion estimation is done using a quadtree-based segmented block-matching technique with half-pixel accuracy. To avoid artifacts and obtain full resolution in still regions, such as background, motion adaptation is also used. How to apply this algorithm to pyramid coding to achieve a better compression rate and compatibility with other lesser resolution standards is discussed. >

Journal ArticleDOI
TL;DR: Tree compression can be seen as a trade-off problem between time and space in which the authors can choose different strategies depending on whether they prefer better compression results or more efficient operations in the compressed structure.
Abstract: Different methods for compressing trees are surveyed and developed. Tree compression can be seen as a trade-off problem between time and space in which we can choose different strategies depending on whether we prefer better compression results or more efficient operations in the compressed structure. Of special interest is the case where space can be saved while preserving the functionality of the operations; this is called data optimization. The general compression scheme employed here consists of separate linearization of the tree structure and the data stored in the tree. Also some applications of the tree compression methods are explored. These include the syntax-directed compression of program files, the compression of pixel trees, trie compaction and dictionaries maintained as implicit data structures.
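
The "separate linearization of the tree structure and the data" idea can be sketched as follows for binary trees; the preorder bit encoding is one common choice, not necessarily the one used in the survey, and each of the two streams could then be compressed with whatever method suits it.

```python
def linearize(tree):
    """Separate a binary tree, given as nested tuples (value, left, right) or
    None, into (structure_bits, values): structure_bits is a preorder bit
    string ('1' = node present, '0' = empty subtree), values is a flat list."""
    bits, values = [], []

    def walk(node):
        if node is None:
            bits.append('0')
            return
        value, left, right = node
        bits.append('1')
        values.append(value)
        walk(left)
        walk(right)

    walk(tree)
    return ''.join(bits), values

def delinearize(bits, values):
    """Rebuild the identical tree from the two streams."""
    it_bits, it_vals = iter(bits), iter(values)

    def build():
        if next(it_bits) == '0':
            return None
        return (next(it_vals), build(), build())   # value, then left, then right

    return build()

tree = ('root', ('a', None, ('c', None, None)), ('b', None, None))
bits, values = linearize(tree)
assert delinearize(bits, values) == tree
print(bits, values)   # 110100100 ['root', 'a', 'c', 'b']
```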


Journal ArticleDOI
TL;DR: An image coding method for low bit rates based on alternate use of the discrete cosine transform and the discrete sine transform on image blocks achieves the removal of redundancies in the correlation between neighboring blocks as well as the preservation of continuity across the block boundaries.
Abstract: An image coding method for low bit rates is proposed. It is based on alternate use of the discrete cosine transform (DCT) and the discrete sine transform (DST) on image blocks. This procedure achieves the removal of redundancies in the correlation between neighboring blocks as well as the preservation of continuity across the block boundaries. An outline of the mathematical justification of the method, assuming a certain first-order Gauss-Markov model, is given. The resulting coding method is then adapted to nonstationary real images by locally adapting the model parameters and improving the block classification technique. Simulation results are shown and compared with the performance of related previous methods, namely adaptive DCT and fast Karhunen-Loeve transform (FKLT). >

Journal ArticleDOI
TL;DR: A model of the CSF is described that includes changes as a function of image noise level by using the concepts of internal visual noise, and is tested in the context of image compression with an observer study.
Abstract: The visual contrast sensitivity function (CSF) has found increasing use in image compression as new algorithms optimize the display- observer interface in order to reduce the bit rate and increase the perceived image quality. In most compression algorithms, increasing the quantization intervals reduces the bit rate at the expense of introducing more quantization error. The CSF can be used to distribute this error as a function of spatial frequency such that it is undetectable by the human observer. Thus, instead of being mathematically lossless, the compression algorithm can be designed to be visually lossless, with the advantage of a significantly reduced bit rate. However, the CSF is strongly affected by image noise, changing in both shape and peak sensitivity. This work describes a model of the CSF that includes these changes as a function of image noise level by using the concepts of internal visual noise. The model is tested in the context of image compression with an observer study.

Patent
21 Sep 1990
TL;DR: In this article, a method and apparatus for performing data compression that does not require interpolation of pixel data in order to define image blocks is presented, where spatially interleaved image blocks composed of high frequency image components are sampled at a pitch or spatial sample frequency equal to that of the low-frequency image components.
Abstract: A method and apparatus for performing data compression is disclosed that does not require interpolation of pixel data in order to define image blocks. More specifically, the present invention provides spatially interleaved image blocks composed of high frequency image components by sampling the high frequency image components at a pitch or spatial sample frequency equal to that of the low frequency image components. The present invention provides the added advantage of reducing the number of image blocks that must be defined in order to perform data compression.

Patent
19 Mar 1990
TL;DR: In this paper, a data compression method which recognizes the adverse conditions of duochrominance-isoluminance and nonlinear color distribution is presented, where an m×n block of pixel data is examined to compute two colors and a bitmap which best represent the block, generally using a luminance partitioning technique.
Abstract: A data compression method which recognizes the adverse conditions of duochrominance-isoluminance and nonlinear color distribution. An m×n block of pixel data is examined to compute two colors and a bitmap which best represent the block, generally using a luminance partitioning technique. The original data and the compressed data are examined to determine if the resultant decompressed image will contain artifacts associated with duochrominance-isoluminance or nonlinear color distribution. If these artifacts would occur in the decompressed data, the compressed data is not used; rather, the block is represented by storing the color of each pixel. This method produces compressed images of excellent quality.
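
A sketch of the luminance-partitioning step on one block; the Rec. 601 luma weights are an assumption, and the patent's artifact tests (duochrominance-isoluminance, nonlinear colour distribution) and the raw-pixel fallback decision are not implemented.

```python
import numpy as np

def compress_block(block):
    """Represent an m x n RGB block with two colours plus a 1-bit-per-pixel map:
    pixels are split about the mean luminance and each half is replaced by its
    average colour."""
    pixels = block.reshape(-1, 3).astype(float)
    luma = pixels @ np.array([0.299, 0.587, 0.114])   # assumed luma weights
    mask = luma >= luma.mean()
    if mask.all() or not mask.any():                  # flat block: one colour suffices
        return pixels.mean(axis=0), pixels.mean(axis=0), mask
    hi = pixels[mask].mean(axis=0)
    lo = pixels[~mask].mean(axis=0)
    return hi, lo, mask

def decompress_block(hi, lo, mask, shape):
    return np.where(mask[:, None], hi, lo).reshape(shape)

block = np.zeros((4, 4, 3))
block[:, :2] = (200, 30, 30)                          # left half red, right half black
hi, lo, mask = compress_block(block)
rec = decompress_block(hi, lo, mask, block.shape)
print("max error:", np.abs(rec - block).max())        # exact for a true two-colour block
```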

Patent
28 Sep 1990
TL;DR: In this paper, an electronic still camera is used for converting a taken optical image of a subject into digital image data, and recording it into a memory card incorporating semiconductor memories.
Abstract: The invention is based on an electronic still camera for converting a taken optical image of a subject into digital image data, and recording it into a memory card incorporating semiconductor memories. The digital image data obtained from this electronic still camera is stored in a large-capacity recording medium through an exclusive recording device, or displayed in plural monitors, or stored on a disk or magnetic tape by existing recording devices, or even compressed, expanded, edited or processed, so that it may satisfy versatile requests of users. Image data processing apparatus converts a taken optical image into digital image data, compresses the digital image data, and records it in a memory. The apparatus includes means for dividing a screen into a plurality of blocks, calculating an activity by digitizing the complexity of an image data of each block and an activity of the entire screen of the image data, determining a code amount of the entire screen by setting a data compression rate based on the activity of the entire screen, and determining code amount allotted to each block in proportion to the activity calculated for each block, thereby controlling the code amount of each block. Second means can be provided for setting a code amount of the entire screen based on a manually set data compression rate, and then setting the code amount for each of the divided blocks based on the activity calculated for each block, thereby controlling the code amount of each block.

Journal ArticleDOI
TL;DR: DigiCipher provides full HDTV performance with virtually no visible transmission impairments due to noise, multipath, and interference, making it ideal for simulcast HDTV transmission using unused or prohibited channels.
Abstract: DigiCipher, an all-digital HDTV (high-definition television) system, with transmission over a single 6 MHz VHF or UHF channel, is described. It provides full HDTV performance with virtually no visible transmission impairments due to noise, multipath, and interference. It offers high picture quality, while the complexity of the decoder is low. Furthermore, low transmitting power can be used, making it ideal for simulcast HDTV transmission using unused or prohibited channels. DigiCipher can also be used for cable and satellite transmission of HDTV. There is no satellite receive dish size penalty (compared to FM-NTSC) in the satellite delivery of DigiCipher HDTV. To achieve full HDTV performance in a single 6 MHz bandwidth, a highly efficient compression algorithm based on DCT (discrete cosine transform) coding is used. Through the extensive use of computer simulation, the compression algorithm has been refined and optimized. Computer simulation results show excellent video quality for a variety of HDTV material. For error-free transmission of the digital data, powerful error correction coding combined with adaptive equalization is used. At a carrier-to-noise ratio above 19 dB, essentially error-free reception can be achieved.

Journal ArticleDOI
TL;DR: An artificial neural network is proposed which is able to compress an image by computing a nonlinear nonorthogonal transform and its inverse and gives satisfactory performances both for the learned and for different unlearned images.
Abstract: An artificial neural network is proposed which is able to compress an image by computing a nonlinear nonorthogonal transform and its inverse. The network is trained with small blocks extracted from an image; after the learning phase, it proves to give satisfactory performances both for the learned and for different unlearned images.

Journal ArticleDOI
TL;DR: A data-compression algorithm for digital Holter recording using artificial neural networks (ANNs) is described, using a three-layer ANN that has a hidden layer with a few units to extract features of the ECG waveform as a function of the activation levels of the hidden layer units.
Abstract: A data-compression algorithm for digital Holter recording using artificial neural networks (ANNs) is described. A three-layer ANN that has a hidden layer with a few units is used to extract features of the ECG (electrocardiogram) waveform as a function of the activation levels of the hidden layer units. The number of output and input units is the same. The backpropagation algorithm is used for learning. The network is tuned with supervised signals that are the same as the input signals. One network (network 1) is used for data compression and another (network 2) is used for learning with current signals. Once the network is tuned, the common waveform features are encoded by the interconnecting weights of the network. The activation levels of the hidden units then express the respective features of the waveforms for each consecutive heartbeat.

Journal ArticleDOI
TL;DR: A scheme is proposed which is based on vector quantization (VQ) for the data-compression of multichannel ECG waveforms, and both m-AZTEC and CVQ provide data compression, and their performance improves as the number of channels increases.
Abstract: A scheme is proposed which is based on vector quantization (VQ) for the data-compression of multichannel ECG waveforms. N-channel ECG is first coded using m-AZTEC, a new, multichannel extension of the AZTEC algorithm. As in AZTEC, the waveform is approximated using only lines and slopes; however, in m-AZTEC, the N channels are coded simultaneously into a sequence of N+1 dimensional vectors, thus exploiting the correlation that exists across channels in the AZTEC duration parameter. Classified VQ (CVQ) of the m-AZTEC output is next performed to exploit the correlation in the other AZTEC parameter, namely, the value parameter. CVQ preserves the waveform morphology by treating the lines and slopes as two perceptually distinct classes. Both m-AZTEC and CVQ provide data compression, and their performance improves as the number of channels increases. >