
Showing papers on "Data compression published in 1983"


Journal ArticleDOI
Jorma Rissanen
TL;DR: A universal data compression algorithm is described which is capable of compressing long strings generated by a "finitely generated" source, with a near optimum per symbol length without prior knowledge of the source.
Abstract: A universal data compression algorithm is described which is capable of compressing long strings generated by a "finitely generated" source, with a near optimum per symbol length without prior knowledge of the source. This class of sources may be viewed as a generalization of Markov sources to random fields. Moreover, the algorithm does not require a working storage much larger than that needed to describe the source generating parameters.

708 citations



Patent
17 Aug 1983
TL;DR: In this article, variable length data (e.g., for hospital patients) is embedded in a B-tree type index structure of a relational data base, and a logically related inverted B-tree index is used to access the original index.
Abstract: Variable length data (e.g., for hospital patients) is embedded in a B-tree type index structure of a relational data base. A logically related inverted B-tree index is used to access the original index. Access time, and storage space for the inverted lists, are decreased by data compression techniques and by encoding certain inverted list parameters in sparse array bit maps.
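
As a rough illustration of the sparse-bit-map idea mentioned above (not the patent's actual encoding), an inverted list can be stored as only the non-zero bytes of a bit map over record numbers:

```python
# Hypothetical sketch: an inverted list kept as a sparse bit map.
# Record numbers holding a given key value are marked as set bits;
# only the non-zero bytes of the map need be stored.

def to_sparse_bitmap(record_ids, num_records):
    """Return {byte_index: byte_value} for the set of matching records."""
    bitmap = bytearray((num_records + 7) // 8)
    for rid in record_ids:
        bitmap[rid // 8] |= 1 << (rid % 8)
    return {i: b for i, b in enumerate(bitmap) if b}  # keep non-zero bytes only

def from_sparse_bitmap(sparse, num_records):
    """Recover the record-id list from the sparse byte map."""
    return [i * 8 + bit
            for i, b in sparse.items()
            for bit in range(8) if b & (1 << bit)]

ids = [3, 4, 5, 900, 901]
sparse = to_sparse_bitmap(ids, 1024)      # 2 stored bytes instead of 128
assert from_sparse_bitmap(sparse, 1024) == ids
```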

150 citations


Journal ArticleDOI
TL;DR: Three fast and efficient "scan-along" algorithms for compressing digitized electrocardiographic data are described, based on the minimum perimeter polygonal approximation for digitized curves.
Abstract: Three fast and efficient "scan-along" algorithms for compressing digitized electrocardiographic data are described. These algorithms are "scan-along" in the sense that they produce the compressed data in real time as the electrocardiogram is generated. The algorithms are based on the minimum perimeter polygonal approximation for digitized curves. The approximation restricts the maximum error to be no greater than a specified value. Our algorithms achieve a compression ratio of ten on a database of 8000 5-beat abnormal electrocardiograms sampled at 250 Hz and a compression ratio of eleven on a database of 600 3-beat normal electrocardiograms (different from the preceding database) sampled at 500 Hz.
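
A minimal sketch of a "scan-along" compressor in this spirit, using the classic "fan" slope test as a simplified stand-in for the authors' minimum-perimeter construction; `eps` is the specified maximum error, and reconstruction is by linear interpolation between the emitted vertices:

```python
# Hedged sketch: emit a vertex only when no straight segment from the last
# vertex can stay within +/- eps of every intervening sample (the "fan" test).
# Simplified stand-in, not the paper's minimum-perimeter algorithm.

def fan_compress(samples, eps):
    vertices = [(0, samples[0])]           # (index, value) pairs
    anchor_i, anchor_v = 0, samples[0]
    lo_slope, hi_slope = float("-inf"), float("inf")
    for i in range(1, len(samples)):
        dt = i - anchor_i
        # Tighten the fan of admissible slopes through the anchor.
        lo_slope = max(lo_slope, (samples[i] - eps - anchor_v) / dt)
        hi_slope = min(hi_slope, (samples[i] + eps - anchor_v) / dt)
        if lo_slope > hi_slope:            # no single segment fits any more
            anchor_i, anchor_v = i - 1, samples[i - 1]
            vertices.append((anchor_i, anchor_v))
            lo_slope = (samples[i] - eps - anchor_v) / (i - anchor_i)
            hi_slope = (samples[i] + eps - anchor_v) / (i - anchor_i)
    vertices.append((len(samples) - 1, samples[-1]))
    return vertices
```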

145 citations


Journal ArticleDOI
Glen George Langdon, Jr.
TL;DR: The symbolwise equivalent of the Ziv-Lempel algorithm is extended to incomplete parse trees and requires the proper handling of the comma when one phrase is the prefix of another phrase.
Abstract: The Ziv-Lempel compression algorithm is a string matching and parsing approach to data compression. The symbolwise equivalent for parsing models has been defined by Rissanen and Langdon and gives the same ideal codelength at the same cost in coding parameters. By describing the context and coding parameter for each symbol, an insight is provided into how the Ziv-Lempel method achieves compression. This treatment does not employ a probabilistic source for the data string. The Ziv-Lempel method effectively counts symbol instances within parsed phrases. The coding parameter for each symbolwise context is determined by cumulative count ratios. The code string length increase for a symbol y following substring s, under the symbolwise equivalent, is the log of the ratio of node counts in subtrees s and s·y of the Ziv-Lempel parsing tree. To demonstrate the symbolwise equivalent of the Ziv-Lempel algorithm, we extend the work of Rissanen and Langdon to incomplete parse trees. The result requires the proper handling of the comma when one phrase is the prefix of another phrase.
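
A small sketch of this accounting, assuming a plain LZ78 parse (the paper's comma handling and incomplete-tree refinements are omitted): each trie node's subtree size plays the role of the count, and along one phrase the per-symbol costs telescope to log2 of the trie size, the familiar per-phrase cost of Ziv-Lempel coding:

```python
# Hedged sketch: charge symbol y extending prefix s the length
# log2(|subtree(s)| / |subtree(s.y)|) in the LZ78 parse trie.

from math import log2

def lz78_parse(data):
    """Return the LZ78 parse trie as nested dicts keyed by symbol."""
    root, node = {}, None
    for ch in data:
        if node is None:
            node = root                    # start a new phrase at the root
        if ch in node:
            node = node[ch]                # keep extending the current phrase
        else:
            node[ch] = {}                  # phrase ends; grow the trie by a leaf
            node = None
    return root

def subtree_size(node):
    return 1 + sum(subtree_size(k) for k in node.values())

trie = lz78_parse("abababbabbbab")
root_size = subtree_size(trie)
for sym, child in trie.items():            # cost of one symbol in the empty context
    print(sym, log2(root_size / subtree_size(child)))
```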

99 citations


Patent
21 Jul 1983
TL;DR: In this paper, a video image is cyclically assembled in low-resolution and high-resolution phases from digitized data representing gray level intensity for individual pixels which have been grouped into pixel cells.
Abstract: An improved apparatus for rapidly compressing, expanding, and displaying broad band information which is transmitted over a narrow band communications channel. In the preferred embodiment, a video image is cyclically assembled in low-resolution and high-resolution phases from digitized data representing gray level intensity for individual pixels which have been grouped into pixel cells. During the initial cycle of the low-resolution phase, a representative sample of cell intensity values is transmitted by a sending station to a receiving station according to a video compression routine. The receiving station then uses a video expansion routine to calculate an intensity value for those pixels whose intensity values were not transmitted and displays an initial image. This image is refined during subsequent low-resolution cycles by additional transmissions from the sending station which replace the calculated cell intensity values with an actual or better approximation value for that pixel. During the high-resolution phase, an error determination routine or external input from a viewer selects those pixels containing the greatest deviation in intensity levels from the input video image. The error compression and expansion routines substitute a plurality of individual pixel intensity values for previously calculated intensity values. The present invention also discloses an apparatus for allowing color capable stations to send or receive color transmissions while retaining the capability to interact with noncolor stations. Color data is compressed and interleaved with black and white data by a color capable sending station and subsequently separated and expanded by a color capable receiving station.
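
A toy sketch of the progressive idea (not the patent's actual compression and expansion routines): a low-resolution pass sends one sample per 4x4 cell, the receiver replicates it across the cell, and refinement passes retransmit the worst pixels exactly. Cell size and budget are illustrative:

```python
import numpy as np

def low_res_pass(image, cell=4):
    """Sender: one gray-level sample per cell."""
    return image[::cell, ::cell]

def expand(samples, shape, cell=4):
    """Receiver: fill every pixel of a cell with its representative."""
    full = np.repeat(np.repeat(samples, cell, axis=0), cell, axis=1)
    return full[:shape[0], :shape[1]]

def refinement_pass(image, approx, budget):
    """Retransmit the 'budget' pixels with the greatest deviation."""
    err = np.abs(image.astype(int) - approx.astype(int))
    worst = np.unravel_index(np.argsort(err, axis=None)[-budget:], image.shape)
    approx = approx.copy()
    approx[worst] = image[worst]
    return approx

img = (np.arange(64 * 64).reshape(64, 64) % 256).astype(np.uint8)
approx = expand(low_res_pass(img), img.shape)      # initial coarse image
approx = refinement_pass(img, approx, budget=512)  # one refinement cycle
```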

86 citations


Journal ArticleDOI
TL;DR: The feasibility of using a fast Walsh transform algorithm to implement a real-time microprocessor-based e.c.g. data-compression system was studied.
Abstract: The feasibility of using a fast Walsh transform algorithm to implement a real-time microprocessor-based e.c.g. data-compression system was studied. Using the mean square error between the original and reconstructed e.c.g. signals as a measure of the utility of the reconstructed signals, the limit to which an e.c.g. signal could be compressed and still yield an acceptable reconstruction was determined. The possibility of enhancing the quality of the reconstructed signals using linear filtering techniques was also investigated.
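
A minimal sketch of the ingredients (fast Walsh-Hadamard transform, coefficient truncation, mean square error of the reconstruction); keeping the K largest coefficients is an illustrative choice, not the limit the authors determined, and the block length must be a power of two:

```python
# Hedged sketch: WHT is its own inverse up to a factor of 1/N.

def fwht(x):
    """Fast Walsh-Hadamard transform (unnormalized) of a power-of-2 block."""
    x = list(x)
    h = 1
    while h < len(x):
        for i in range(0, len(x), h * 2):
            for j in range(i, i + h):
                x[j], x[j + h] = x[j] + x[j + h], x[j] - x[j + h]
        h *= 2
    return x

def compress_block(block, keep):
    coeffs = fwht(block)
    order = sorted(range(len(coeffs)), key=lambda k: abs(coeffs[k]), reverse=True)
    kept = {k: coeffs[k] for k in order[:keep]}          # transmitted data
    recon = fwht([kept.get(k, 0.0) for k in range(len(coeffs))])
    recon = [v / len(coeffs) for v in recon]             # inverse scaling
    mse = sum((a - b) ** 2 for a, b in zip(block, recon)) / len(block)
    return kept, recon, mse
```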

72 citations


Patent
18 Nov 1983
TL;DR: In this paper, an image compression system where the image is arranged in a block of elements which are Hadamard transformed and then quantized into sequences is presented, and all the sequences are then encoded into a variable length code depending on the combination of the sequences.
Abstract: An image compression system wherein the image is arranged in a block (1) of elements which are Hadamard transformed (2) and then quantized (3) into sequences. The DC component is predicted (4) into a DC error prediction term and all the sequences are then encoded (5) into a variable length code depending on the combination of the sequences. The variable length codes are written into a buffer memory (6) at an input bit rate and read therefrom at an output bit rate asynchronously with the writing. A counter (10) keeps track of the amount of unused space in the buffer memory and a quantization characteristic selector (9) uses this amount to determine the quantization level.
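
A rough sketch of that feedback loop: the fuller the output buffer, the coarser the quantization step chosen for the next block, so the asynchronous read-out rate is never overrun. Thresholds, step sizes, and the bit-cost model are illustrative, not from the patent:

```python
def select_step(buffer_fill, capacity, steps=(2, 4, 8, 16, 32)):
    """Map buffer occupancy (in bits) to one of a few quantizer step sizes."""
    level = int(len(steps) * buffer_fill / capacity)
    return steps[min(level, len(steps) - 1)]

def encode_blocks(blocks, capacity=4096, out_rate=256):
    fill = 0
    for block in blocks:                       # block: transform coefficients
        step = select_step(fill, capacity)
        codes = [round(c / step) for c in block]   # coarse quantization
        bits = sum(max(1, v.bit_length() + 1) for v in map(abs, codes))
        fill = max(0, min(capacity, fill + bits - out_rate))  # write in, read out
        yield step, codes
```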

64 citations


Journal ArticleDOI
TL;DR: The gray code is the best binary code system for DF-expression in the information-preserving sense, and a new data reduction technique in terms of uniformalization of excessively complicated regions in bit-planes is proposed.
Abstract: A picture coding strategy titled DF-expression (Depth-First picture expression) is studied here from another point of view. The basic idea of DF-expression is briefly reviewed first. Its capability in data compression is demonstrated using 1024 × 1024 binary pictures. Then the new aspects of DF-expression are studied in reference to picture processing algorithms on the coded form. These include circular shiftings, spectrum of primitives, logical operations, etc. Application of DF-expression to gray images (or multivalued pictures) is the next topic of the paper. DF-expression is applicable to bit-plane coding of any binary image data. Our conclusion on this point is that the gray code is the best binary code system for DF-expression in the information-preserving sense. Information-lossy data reduction is another topic of this paper. The authors propose a new data reduction technique in terms of uniformalization of excessively complicated regions in bit-planes. Experimental study follows using a 256 × 256 four-bit test picture. Finally, conclusions and other possible applications are noted.
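
A compact sketch of a depth-first expression for a binary picture, assuming a square power-of-two image and the symbols 'B', 'W', and '(' for black, white, and mixed quadrants (the paper's exact alphabet may differ). Busy regions cost many symbols, which is why the lossy variant "uniformalizes" them first:

```python
def df_expression(img, x=0, y=0, size=None):
    """Depth-first quadtree code of a square binary image (0=white, 1=black)."""
    size = size or len(img)
    cells = {img[y + dy][x + dx] for dy in range(size) for dx in range(size)}
    if cells == {0}:
        return "W"                           # uniform white quadrant
    if cells == {1}:
        return "B"                           # uniform black quadrant
    h = size // 2                            # mixed: recurse NW, NE, SW, SE
    return "(" + "".join(df_expression(img, x + dx, y + dy, h)
                         for dy in (0, h) for dx in (0, h))

img = [[0, 0, 1, 1],
       [0, 0, 1, 1],
       [0, 1, 1, 1],
       [1, 1, 1, 1]]
print(df_expression(img))                    # -> "(WB(WBBBB"
```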

59 citations


Book
01 Jan 1983

49 citations


PatentDOI
Aaron J. Davis
TL;DR: In this paper, a seismic data compression technique is described which comprises sampling each individual seismic trace, operating upon a set number of samples to generate a predicted sample, quantizing the difference between the next sample and the predicted value of the sample, and transmitting the quantum number, whereby the amount of information which need be transmitted is limited.
Abstract: A seismic data compression technique is disclosed which comprises sampling each individual seismic trace, operating upon a set number of samples to generate a predicted sample and quantizing the difference between the next sample and the predicted value of the sample, and transmitting the quantum number whereby the amount of information which need be transmitted is limited. In a preferred embodiment, a linear prediction differential pulse code modulation scheme is used to provide the predicted value, while an adaptive quantization scheme is used to quantize the error value to be transmitted, thus yielding further improvements in accuracy. A feedback loop can be applied to the decompression operation to limit quantization noise and further improve the fidelity of representation of the decompressed signals.
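
A minimal sketch of the two ingredients named in the claim: a fixed linear predictor over the last few reconstructed samples, and an adaptive quantizer whose step grows after large residuals and shrinks after small ones. Predictor coefficients and adaptation factors are illustrative only:

```python
def dpcm_encode(trace, coeffs=(1.8, -0.8), step=1.0):
    """Yield the quantum numbers for a seismic trace (DPCM, adaptive step)."""
    history = [0.0] * len(coeffs)          # reconstructed samples, newest first
    for s in trace:
        pred = sum(c * h for c, h in zip(coeffs, history))
        q = round((s - pred) / step)       # transmit only this quantum number
        yield q
        recon = pred + q * step            # decoder-tracked reconstruction
        history = [recon] + history[:-1]
        step *= 1.5 if abs(q) > 2 else 0.9 # adaptive quantization
        step = min(max(step, 0.01), 1e6)
        # the decoder repeats the same prediction/step recursion driven by q
```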

Patent
Dimitris Anastassiou
30 Jun 1983
TL;DR: In this paper, the data compression apparatus and method disclosed separates the graphics image into at least first and second bit planes, identifies edge pixels from the first bit plane indicating a black/white change, locates the edge pixels, and generates at least a single bit for each edge pixel indicating whether the edge pixel has a maximum intensity value such as black or white or an intermediate gray intensity value.
Abstract: Graphic images are generally considered to be those images comprised of text and/or drawings. Data compression of graphics images is desired whenever a fast image transmission speed is desired in a limited band width channel. It is also used for storage of a large number of images in a limited capacity storage system. A high compression ratio is achieved by thresholding the graphics image to a bilevel black-white image at one bit per pixel and then employing a second data compression on the black-white image. At low resolution, bilevel images have poor quality at edges and a quality improvement is needed. The data compression apparatus and method disclosed separates the graphics image into at least first and second bit planes, identifies edge pixels from the first bit plane indicating a black/white change, locates the edge pixels, and generates at least a single bit for each edge pixel indicating whether the edge pixel has a maximum intensity value such as black or white or an intermediate gray intensity value. Intermediate values are not allowed except at edge pixels, which enhances both quality and compressibility of the resulting graphics image.

Patent
14 Feb 1983
TL;DR: In this article, a first compression using Time Domain Harmonic Scaling (TDHS), where two periods of voiced data are averaged together, followed by a second compression using Continuously Variable Slope Delta Modulation (CVSD).
Abstract: Speech signal data compression is improved by a first compression using Time Domain Harmonic Scaling (TDHS), wherein two periods of voiced data are averaged together, followed by a second compression using Continuously Variable Slope Delta Modulation (CVSD). Pitch period detection and sampling-rate conversion are also features of the invention.
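
A small sketch of the CVSD stage alone (the TDHS averaging step is omitted): one bit per sample, with the step size raised whenever the last three bits agree (slope overload) and decayed otherwise. The 3-bit run rule is the classic CVSD heuristic; the constants are illustrative:

```python
def cvsd_encode(samples, step=0.01, min_step=0.01, max_step=1.0):
    """Continuously variable slope delta modulation: one bit per sample."""
    est, bits = 0.0, []
    for s in samples:
        bit = 1 if s >= est else 0
        bits.append(bit)
        if len(bits) >= 3 and bits[-1] == bits[-2] == bits[-3]:
            step = min(step * 1.2, max_step)   # slope overload: grow step
        else:
            step = max(step * 0.9, min_step)
        est += step if bit else -step          # integrate the delta stream
    return bits
```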

Journal ArticleDOI
TL;DR: A wealth of literature on data compression is reviewed and facts and guidelines are presented which will assist system designers in evaluating the costs and benefits of compression and in selecting techniques appropriate for their needs.

Patent
01 Feb 1983
TL;DR: In this paper, a method for compressing character or pictorial image data is described which reduces the quantity of data by establishing sampling points and blocks and by storing information specifying the outlines of a character, where the outlines of characters, pictorial images, or the like are approximated with sets of functional curves or straight lines.
Abstract: A method for compressing character or pictorial image data which compresses the quantity of data by means of a method for establishing sampling points and a method for establishing blocks, as well as by storing information for specifying the outlines of a character, in the case where the outlines of characters, pictorial images, or the like are approximated with sets of functional curves or straight lines to effect data compression.

Journal ArticleDOI
TL;DR: It has been found that the Mandala sorting of the block cosine domain results in a more effective domain for selecting target identification parameters, and useful features from this Mandala/cosine domain are developed based upon correlation parameters and homogeneity measures which appear to successfully discriminate between natural and man-made objects.
Abstract: The problem of recognition of objects in images is investigated from the simultaneous viewpoints of image bandwidth compression and automatic target recognition. A scenario is suggested in which recognition is implemented on features in the block cosine transform domain, which is useful for data compression as well. While most image frames would be processed by the automatic recognition algorithms in the compressed domain without need for image reconstruction, this still allows for visual image classification of targets with poor recognition rates (by human viewing at the receiving terminal). It has been found that the Mandala sorting of the block cosine domain results in a more effective domain for selecting target identification parameters. Useful features from this Mandala/cosine domain are developed based upon correlation parameters and homogeneity measures which appear to successfully discriminate between natural and man-made objects. The Bhattacharyya feature discriminator is used to provide a 10:1 compression of the feature space for implementation of simple statistical decision surfaces (Gaussian and minimum distance classification). Imagery sensed in the visible spectrum with a resolution of approximately 5-10 ft is used to illustrate the success of the technique on targets such as ships to be separated from clouds. A data set of 38 images is used for experimental verification, with typical classification results ranging from the high 80s to the low 90s percent depending on the options chosen.

Proceedings ArticleDOI
26 Oct 1983
TL;DR: An image coding technique, based on a simplified description of regions composing the image, is presented, which leads to compression ratios greater than 50 to 1.
Abstract: An image coding technique, based on a simplified description of regions composing the image, is presented. Each region of the image is made of the maximum number of adjacent picture elements (pels) whose grey level evolution contains no sharp discontinuities. The pels within the regions provide the texture information whereas the region boundary points represent the contour information. Image coding is carried out by approximating the contour information and the texture information in each region. This is done by using different global analytical functions for each component. This adaptive image coding scheme leads to compression ratios greater than 50 to 1.

Proceedings ArticleDOI
R. Pieraccini, R. Billi
01 Apr 1983
TL;DR: In this work three different pattern compression techniques are compared on the basis of efficiency as well as recognition performance when applied to pattern matching by means of dynamic programming in a speaker dependent context.
Abstract: It is well known that the isolated word recognition strategy based on pattern matching gives good performance; however, in order to achieve efficient implementation it is necessary to develop techniques to reduce computational complexity and memory requirements, especially when the vocabulary size is not very small. In this work three different pattern compression techniques are compared on the basis of efficiency as well as recognition performance when applied to pattern matching by means of dynamic programming in a speaker dependent context.

Patent
Mario Caneschi, Giorgio Tadini
15 Mar 1983
TL;DR: In this article, a shift register controlled by a logic unit capable of recognizing a group of least significant bits is used to allow additional shifting of the register, and addressing of the second ROM with a bit number less than the maximum length of the codes.
Abstract: One-dimensional compression is effected by addressing a first ROM (62) with information in respect of the length and the type of run. One-dimensional decompression is effected by addressing, with the compressed code, a second ROM bearing the information in respect of the runs and the length of the code. That is provided by a shift register controlled by a logic unit capable of recognising a group of least significant bits, to permit additional shifting of the register, and addressing of the second ROM with a bit number less than the maximum length of the codes. For two-dimensional compression or decompression, there is provided a pair of RAMs for temporarily storing a reference line which, together with the current line, actuates a logic means (38) for controlling coding and decoding of the latter. For compression of the medium tones, besides the runs of the two colors, coding is effected in respect of the runs of the alternations of the two colors defined on the basis of the color of the initial pixel, and the type of the successive run is defined. Besides the coding and decoding for two-color images, the two ROMs also carry the coding and decoding for images with the half tone. Finally, the apparatus comprises an archive memory which can be connected to the compression module for archival storage of the images with the maximum degree of compression. By instead connecting the decompression module between the archive memory and the compression module, the image can be transmitted with a type of compression compatible with the receiving station.

Journal ArticleDOI
TL;DR: Radiometric and geometric transforms are derived which generate nearly stationary images in the first and second moments to enhance the performance of nonadaptive processing techniques, in particular data compression.
Abstract: The statistical behavior of images is inherently nonstationary. Unfortunately, most image processing algorithms assume stationary image models. Spatially adaptive algorithms have been developed which take into account local image statistics. In this paper we derive radiometric and geometric transforms which generate nearly stationary (block stationary) images in the first and second moments. We show that true stationarity is impossible to realize. The aim of these transformations is to enhance the performance of nonadaptive processing techniques, in particular data compression.

Patent
26 Aug 1983
TL;DR: In this article, a method and apparatus for data compression in a digital facsimile document transmission system is described, in which only alternate scan lines are transmitted, i.e., every other line is deleted in transmission, and at the receiver, the missing scan lines are interpolated from the transmitted data.
Abstract: A method and apparatus are provided for data compression in a digital facsimile document transmission system. In accordance with the method, only alternate scan lines are transmitted, i.e., every other line is deleted in transmission, and at the receiver, the missing scan lines are interpolated from the transmitted data. This compression technique provides a fixed compression ratio of 2 to 1 regardless of the complexity of the input document. The interpolation method relies on comparing, for each picture element to be interpolated, the colors of one or more pairs of adjacent picture elements of the transmitted scan lines and making a decision as to the color of the picture element to be interpolated on this basis.
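
A toy version of the receiver side (the patent's exact comparison rules may differ): each missing element looks at the pair of elements directly above and below, then at a diagonal pair, before falling back to white:

```python
def interpolate_line(above, below):
    """Guess one deleted scan line from its transmitted neighbours (0=white)."""
    line = []
    for i in range(len(above)):
        if above[i] == below[i]:
            line.append(above[i])          # vertical pair agrees
        elif i > 0 and above[i - 1] == below[min(i + 1, len(below) - 1)]:
            line.append(above[i - 1])      # a diagonal pair agrees
        else:
            line.append(0)                 # fall back to white
    return line

def receive(transmitted_lines):
    """Rebuild the document from alternate scan lines (2:1 compression)."""
    full = []
    for k, line in enumerate(transmitted_lines):
        full.append(line)
        if k + 1 < len(transmitted_lines):
            full.append(interpolate_line(line, transmitted_lines[k + 1]))
    return full
```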

Patent
David C. Bullis
15 Mar 1983
TL;DR: A data compression interface is characterized by a memory system having an architecture configured from a first and a second serial memory connected in parallel, where one memory serves during alternate frames as a data collection memory while the other serves during that same frame as an output memory.
Abstract: A data compression interface is characterized by a memory system having an architecture configured from a first and a second serial memory connected in parallel. One memory serves during alternate frames as a data collection memory while the other serves during that same frame as an output memory.

Proceedings ArticleDOI
01 Oct 1983
TL;DR: Issues related to displacement estimation and motion compensation techniques in the low bit rate coding environment are discussed and results show that acceptable picture quality is obtained at 50 kb/s.
Abstract: This paper describes the coding algorithm and the simulation of a multimode movement compensated interframe video coder operating at 50 kb/s for possible use in the proposed National Command Authority Teleconferencing System (NCATS)[1]. The system provides multisite, multimedia conferencing. The coding algorithm combines several data rate reduction techniques in what is termed as a multimode interframe video coder. A key element to achieving the required bit rate reduction while maintaining acceptable picture quality is the utilization of motion compensation techniques. In this paper issues related to displacement estimation and motion compensation techniques in the low bit rate coding environment are discussed. Simulation results using several color video sequences with varying amounts of motion are presented. These results show that acceptable picture quality is obtained at 50 kb/s.
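
A minimal sketch of the displacement estimation such a coder relies on: full-search block matching within a small window, minimizing the sum of absolute differences, so that only motion vectors and small residuals need be coded. Block size and search range are illustrative:

```python
import numpy as np

def motion_vector(prev, cur, by, bx, bsize=8, search=4):
    """Best (dy, dx) for the block of `cur` at (by, bx) against `prev`."""
    block = cur[by:by + bsize, bx:bx + bsize].astype(int)
    best, best_mv = None, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = by + dy, bx + dx
            if y < 0 or x < 0 or y + bsize > prev.shape[0] or x + bsize > prev.shape[1]:
                continue                   # candidate falls outside the frame
            sad = np.abs(block - prev[y:y + bsize, x:x + bsize].astype(int)).sum()
            if best is None or sad < best:
                best, best_mv = sad, (dy, dx)
    return best_mv, best                   # vector plus matching error
```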

Journal ArticleDOI
TL;DR: It is shown that compression factors between 5 and 10 may be achieved without loss of diagnostic information and it is furthermore demonstrated that storage of the images in the form of Fourier coefficients leads to advantages in fast retrieval, enhancement of morphology, and possibility of further quantitative analysis.
Abstract: While digital techniques in radiology develop rapidly, problems arise with archival storage and communication of image data. This paper reports on experiments concerning data reduction of digital image sequences of the heart and the brain. The time‐intensity curves corresponding to every picture element are subjected to the Fourier transform and reconstructed from a number of coefficients smaller than the original number of images. The reconstruction error is assessed by visual inspection and by determining the mean‐square deviation of the original and the reconstructed curve. It is shown that compression factors between 5 and 10 may be achieved without loss of diagnostic information. It is furthermore demonstrated that storage of the images in the form of Fourier coefficients leads to advantages in fast retrieval, enhancement of morphology, and possibility of further quantitative analysis.
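
A small sketch of the per-pixel Fourier reduction described, assuming the image sequence is stacked as a (time, height, width) array; keeping K of the temporal coefficients gives roughly the compression factors the paper reports:

```python
import numpy as np

def compress_curves(sequence, keep):
    """Per-pixel time-intensity curves kept as `keep` Fourier coefficients."""
    spectra = np.fft.rfft(sequence, axis=0)
    spectra[keep:] = 0                         # discard high temporal frequencies
    recon = np.fft.irfft(spectra, n=sequence.shape[0], axis=0)
    mse = np.mean((sequence - recon) ** 2, axis=0)
    return recon, mse                          # per-pixel mean-square deviation
```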

Patent
Paul H. Bardell, William H. McAnney
05 Oct 1983
TL;DR: In this paper, the LSSD scan paths (18) on a number of logic circuit chips (10) are modified and connected together in series to simultaneously serve as a random signal generator and data compression circuit to perform random stimuli signature generation.
Abstract: The LSSD scan paths (18) on a number of logic circuit chips (10) are modified and connected together in series to simultaneously serve as a random signal generator and data compression circuit to perform random stimuli signature generation.
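
A rough software analogue of the signature-compression half of the idea: a linear feedback shift register XORs the response stream into its feedback, compressing many test responses into one short signature. Tap positions and seed are illustrative, not from the patent:

```python
def lfsr_signature(response_bits, width=16, taps=(15, 11, 2, 0)):
    state = 1                                  # non-zero seed
    for bit in response_bits:
        feedback = bit
        for t in taps:
            feedback ^= (state >> t) & 1       # XOR response into the feedback
        state = ((state << 1) | feedback) & ((1 << width) - 1)
    return state                               # compare with fault-free signature

good = lfsr_signature([1, 0, 1, 1, 0, 0, 1, 0])
bad  = lfsr_signature([1, 0, 1, 1, 0, 1, 1, 0])
assert good != bad                             # a single-bit error changes the signature
```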

Journal ArticleDOI
TL;DR: The design, implementation, and performance of a video bandwidth compression system is described, and the quality of the reconstructed video as predicted by computer simulation has been demonstrated by the actual hardware performance.
Abstract: The design, implementation, and performance of a video bandwidth compression system is described. In this system, compression is obtained by several methods including the use of DCT/DPCM hybrid coding, frame rate reduction, and resolution reduction. The overall compression ratio is up to 1000:1. The hardware-constrained design of the DCT and the DPCM is described and a new method is derived to solve the optimum integer bit-assignment problem associated with the block quantization process in the DPCM. Computer simulation results are presented which predict that the performance of the system using the derived optimal bit assignment method is superior to those obtained by other bit assignment methods. The real-time hybrid coding system design is optimized for a set of "modified" average statistics to compress a wide variety of input video images. This approach eliminates the problem of nonzero dc mean value which could otherwise cause serious degradations in the system performance. The compression system is fully implemented and the quality of the reconstructed video as predicted by computer simulation has been demonstrated by the actual hardware performance. The PSNR of the reconstructed imagery is in excess of 36 dB at 2 bits per pixel.
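
For context, the familiar real-valued optimum that integer bit-assignment methods start from (often attributed to Huang and Schultheiss); the paper derives its own integer-constrained solution, so this is background, not the authors' method:

```latex
% Coefficient k of variance \sigma_k^2 receives, for an average budget of
% \bar{b} bits over the N coefficients of a block,
\[
  b_k \;=\; \bar{b} \;+\; \frac{1}{2}\log_2
  \frac{\sigma_k^2}{\bigl(\prod_{j=1}^{N}\sigma_j^2\bigr)^{1/N}}
\]
% so coefficients with above-geometric-mean variance get extra bits.
```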

Journal ArticleDOI
TL;DR: Compression techniques are introduced that reduce the amount of tape needed to store image data and the time to do so; the benefits have also been applied to CT disk systems in which limited on-line memory requires compact file storage.
Abstract: Large digital files are inherent to computed tomography (CT) image data. The CT installations that routinely archive patient data are penalized in computer time, technologist's time, tape purchase, and file space. This paper introduces compression techniques that reduce the amount of tape needed to store image data and the amount of computer time to do so. The benefits delivered by this technique have also been applied to CT disk systems in which limited on-line memory requires compact file storage. Typical reductions of 40 to 50% of original file space are reported.

Patent
21 Nov 1983
TL;DR: In this article, the authors proposed a method and apparatus that improves data compression, resolution and coding efficiency by eliminating transitions between gray levels at edges in an image, converting all gray levels to a common value to achieve a 3-level representation of a graphics image, and reversibly converting the 3-level representation to a bilevel double-resolution representation by increasing the data sampling rate and therefore allowing the use of two-level data compression techniques.
Abstract: Graphics images are generally considered to be those images comprised of text and/or line drawings. Data compression of graphics images is desired whenever a fast image transmission speed is desired in a limited band width channel. It is also used for storage of a large number of images in a limited capacity storage system. The method and apparatus described herein improves data compression, resolution and coding efficiency by eliminating transitions between gray levels at edges in an image, converting all gray levels to a common value to achieve a 3-level representation of a graphics image, and reversibly converting the 3-level representation to a bilevel double-resolution representation by increasing the data sampling rate, therefore allowing the use of two-level data compression techniques. A high resolution display or printed output may be obtained from the bilevel multiresolution representation.

Journal ArticleDOI
TL;DR: An improvement based on this viewpoint for the Fourier transform coding, which possesses simple spatial domain relations, is presented.
Abstract: Image transform coding is first briefly reviewed using conventional viewpoints. Then a new spatial domain interpretation is given to image transform coding. An improvement based on this viewpoint for the Fourier transform coding, which possesses simple spatial domain relations, is presented.

Proceedings ArticleDOI
Jorma Rissanen
01 Dec 1983
TL;DR: A connection between the problems of prediction, data compression, and statistical estimation is established with the central notion being the information in a string relative to a class of processes.
Abstract: In this paper, a connection between the problems of prediction, data compression, and statistical estimation is established, with the central notion being the information in a string relative to a class of processes. The earlier derived MDL criterion for estimation of parameters, including their number, is given a fundamental information-theoretic justification by showing that its estimators achieve the information in the strings. It is also shown that one cannot do prediction in Gaussian ARMA processes below a bound, which is completely determined by the same estimators.
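
The two-part MDL criterion referred to, in its standard form:

```latex
% Among model classes with k parameters, choose the one minimizing the
% total code length of the data plus that of the parameters,
\[
  \min_{k}\;\min_{\theta \in \Theta_k}
  \left\{ -\log P_{\theta}(x^{n}) \;+\; \frac{k}{2}\log n \right\}
\]
% x^n: the observed string of length n;  \Theta_k: the k-parameter family.
```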