
Showing papers on "Data compression published in 1980"


Patent
02 Jun 1980
TL;DR: In this article, an in-line data compression system which reduces the number of binary bits required to transmit a given text or similar message over a data network such as Telex or TWX was proposed.
Abstract: An in-line data compression system which reduces the number of binary bits required to transmit a given text or similar message over a data network such as Telex or TWX. The compression unit can transmit or receive standard messages, or can transmit compressed and encrypted messages to remote stations and decrypt and decompress messages received from remote stations. The text data is compressed by identifying each word, searching for the word in a fixed library of words, and transmitting a first escape code plus the library address if the word is found. If the word is not found in the fixed library, a search is made for the word in a reconfiguration library and a second escape code plus the reconfiguration library address is transmitted if the word is found. If the word is not found in the reconfiguration library, the word is transmitted one character at a time using variable length character codes produced by a "Huffman" type code generator. The reconfiguration library is compiled by placing each word which is not in the library in the reconfiguration library before it is transmitted by variable length code. Then the second and each subsequent time that the same word is found in the message, the second escape code plus the address in the reconfiguration library will be transmitted in lieu of the Huffman coded characters of the word. The system is also applicable to compression of other types of data, serial or parallel, such as digital color television, for example.
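The two-library lookup described above can be sketched in a few lines. This is an illustrative reconstruction, not the patent's implementation: the library contents, escape codes, and the "RAW" token standing in for Huffman-coded characters are all assumptions.

```python
# Sketch of the patent's word-library compression: fixed library hit ->
# (ESC1, index); reconfiguration-library hit -> (ESC2, index); otherwise
# the word is spelled out (Huffman-coded in the patent, plain here) and
# added to the reconfiguration library for later hits.

FIXED_LIB = ["the", "and", "message", "data"]   # hypothetical fixed library
ESC1, ESC2 = "\x01", "\x02"                     # escape codes (assumed)

def compress(text):
    recon_lib = []          # reconfiguration library, built as we go
    out = []
    for word in text.split():
        if word in FIXED_LIB:
            out.append((ESC1, FIXED_LIB.index(word)))
        elif word in recon_lib:
            out.append((ESC2, recon_lib.index(word)))
        else:
            recon_lib.append(word)          # first occurrence: remember it
            out.append(("RAW", word))       # and send it spelled out
    return out

tokens = compress("the quick quick data")
# the first "quick" goes out raw; the second hits the reconfiguration library
```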

214 citations


Journal ArticleDOI
TL;DR: It was made clear that the DF-expression is a very effective data compression technique for binary pictorial patterns, not only because it yields high compression but also because its coding and decoding algorithms are very feasible.
Abstract: A method of representing a binary pictorial pattern is developed. Its original idea comes from a sequence of terminal symbols of a context-free grammar. It is a promising technique of data compression for ordinary binary-valued pictures such as texts, documents, charts, etc. Fundamental notions like complexity, primitives, simplifications, and other items about binary-valued pictures are introduced at the beginning. A simple context-free grammar G is also introduced. It is shown that every binary-valued picture is interpretable as a terminal sequence of that G. The DF-expression is defined as the reduced terminal sequence of G. It represents the original picture in every detail and contains no surplus data for reproducing it. A quantitative discussion about the total data of a DF-expression leads to the conclusion that any binary-valued picture with complexity less than 0.47 is expressed by the DF-expression with fewer data than the original ones. The coding algorithm of original data into the DF-expression is developed. It is very simple and recursively executable. Experiments were carried out using a PDS (photo digitizing system), where test pictures were texts, charts, diagrams, etc. with 20 cm × 20 cm size. Data compression techniques in facsimile were also simulated on the same test pictures. Throughout these studies it was made clear that the DF-expression is a very effective technique as a data compression for binary pictorial patterns not only because it yields high data compression but also because its coding and decoding algorithms are very feasible.
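The recursive, depth-first decomposition behind the DF-expression can be illustrated with a quadtree-style coder: a uniform block is emitted as a single colour symbol, a mixed block as a split marker followed by its four quadrants in depth-first order. The symbols 'B', 'W', and '(' are illustrative stand-ins, not the paper's actual grammar terminals.

```python
# Quadtree-style sketch of depth-first coding of a binary picture.

def encode(img, x, y, size):
    block = [img[y + j][x + i] for j in range(size) for i in range(size)]
    if all(block) or not any(block):
        return "B" if block[0] else "W"      # uniform block: one symbol
    h = size // 2                            # mixed block: split and recurse
    return "(" + "".join(encode(img, x + dx, y + dy, h)
                         for dy in (0, h) for dx in (0, h))

img = [[1, 1, 0, 0],
       [1, 1, 0, 0],
       [0, 0, 0, 0],
       [0, 0, 1, 0]]
code = encode(img, 0, 0, 4)   # short string; only mixed regions cost detail
```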

154 citations


Journal ArticleDOI
TL;DR: New high-speed algorithms together with fast digital hardware have produced a system for missile and aircraft identification and tracking that possesses a degree of ``intelligence'' not previously implemented in a real-time tracking system.
Abstract: Object identification and tracking applications of pattern recognition at video rates is a problem of wide interest, with previous attempts limited to very simple threshold or correlation (restricted window) methods. New high-speed algorithms together with fast digital hardware have produced a system for missile and aircraft identification and tracking that possesses a degree of ``intelligence'' not previously implemented in a real-time tracking system. Adaptive statistical clustering and projection-based classification algorithms are applied in real time to identify and track objects that change in appearance through complex and nonstationary background/foreground situations. Fast estimation and prediction algorithms combine linear and quadratic estimators to provide speed and sensitivity. Weights are determined to provide a measure of confidence in the data and resulting decisions. Strategies based on maximizing the probability of maintaining track are developed. This paper emphasizes the theoretical aspects of the system and discusses the techniques used to achieve real-time implementation.

138 citations


Journal ArticleDOI
01 Jul 1980
TL;DR: A facsimile data compression system, called combined symbol matching (CSM), is presented; its compression ratio exceeds that obtained with the best run-length coding techniques by a factor of two or more for text-predominate documents and is comparable for graphics-predominate documents.
Abstract: A facsimile data compression system, called combined symbol matching (CSM), is presented. The system operates in two modes: facsimile and symbol recognition. In the facsimile mode, a symbol blocking operator isolates document symbols such as alphanumeric characters and other recurring binary patterns. The first symbol encountered is placed in a library, and as each new symbol is detected, it is compared with each entry of the library. If the comparison is within a tolerance, the library identification code is transmitted along with the symbol location coordinates. Otherwise, the new symbol is placed in the library and its binary pattern is transmitted. Nonisolated symbols are left behind as a residue, and are coded by a two-dimensional run-length coding method. In the symbol recognition mode, the library is prerecorded and each entry is labeled with its ASCII code. As each character is recognized, only the ASCII code is transmitted. Computer simulation results are presented for the CCITT standard documents. With text-predominate documents, the compression ratio obtained with the CSM algorithm in the facsimile mode exceeds that obtained with the best run-length coding techniques by a factor of two or more and is comparable for graphics-predominate documents. In the symbol recognition mode, compression ratios of 250:1 have been achieved on business letter documents.
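The library-matching step at the heart of the facsimile mode can be sketched as follows. The tolerance value, the Hamming-distance comparison, and the token representation are assumptions for illustration; the paper's actual matcher and coding are more elaborate.

```python
# Sketch of CSM library matching: a symbol matching a library entry within
# a pixel-error tolerance costs only its library id plus location; a new
# symbol is transmitted in full and added to the library.

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def code_symbols(symbols, tolerance=2):
    library, out = [], []
    for pattern, (x, y) in symbols:
        for i, entry in enumerate(library):
            if hamming(pattern, entry) <= tolerance:
                out.append(("ID", i, x, y))     # library hit: id + coords
                break
        else:
            library.append(pattern)
            out.append(("NEW", pattern, x, y))  # pattern sent in full
    return out

A  = (0,1,1,0, 1,0,0,1, 1,1,1,1, 1,0,0,1)       # toy 4x4 symbol bitmap
A2 = (0,1,1,0, 1,0,0,1, 1,1,1,1, 1,0,1,1)       # same symbol, one pel noisy
codes = code_symbols([(A, (0, 0)), (A2, (10, 0))])
```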

98 citations


Patent
31 Oct 1980
TL;DR: In this paper, a finite-impulse response digital compression filter is used to generate estimated signal values which are subtracted from actual signal values to provide a sequence of difference signals, and the difference signals are encoded using a truncated Huffman type encoding method and means, and transmitted to a remote receiver and/or are recorded.
Abstract: Digital data compression method and means are disclosed which allow for transmission of digital data over a short time period and/or narrow bandwidth transmission line. Also a maximum amount of information may be stored on a movable recording medium using data compression method of this invention. Digital signals to be stored and/or transmitted first are compressed using a finite-impulse response digital compression filter which generates estimated signal values which are subtracted from actual signal values to provide a sequence of difference signals. The difference signals are encoded using a truncated Huffman type encoding method and means, and the encoded signals are transmitted to a remote receiver and/or are recorded. The receiver includes a decoder and digital reconstruction filter for exact reproduction of transmitted digital signals. The invention is well adapted for storage and/or transmission of three lead electrocardiogram (ECG) signals, recording and playback of music, and the like.
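The predict-subtract-encode pipeline above can be sketched in 1-D. The 2-tap linear-extrapolation coefficients are an illustrative choice, not the patent's filter, and the entropy-coding stage (the truncated Huffman coder) is omitted; the point is that the receiver's mirror-image filter reconstructs the signal exactly.

```python
# Sketch of FIR prediction with residual transmission: only the (small)
# differences between actual and estimated samples go to the entropy coder.

COEFFS = [2.0, -1.0]    # estimate x[n] ~ 2*x[n-1] - x[n-2] (assumed taps)

def to_differences(samples):
    diffs = []
    for n, x in enumerate(samples):
        est = sum(c * samples[n - 1 - k] for k, c in enumerate(COEFFS)
                  if n - 1 - k >= 0)
        diffs.append(x - est)       # residual handed to the Huffman coder
    return diffs

def reconstruct(diffs):
    out = []
    for n, d in enumerate(diffs):   # receiver mirrors the predictor exactly
        est = sum(c * out[n - 1 - k] for k, c in enumerate(COEFFS)
                  if n - 1 - k >= 0)
        out.append(d + est)
    return out

signal = [0.0, 1.0, 2.0, 3.0, 5.0, 7.0]
residual = to_differences(signal)       # mostly zeros for smooth signals
assert reconstruct(residual) == signal  # lossless round trip
```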

91 citations


Journal ArticleDOI
01 Jul 1980
TL;DR: This paper gives a review of block coding for picture data compression; with a slight increase in complexity, block coding can be made adaptive in a number of ways, leading to much higher compressions.
Abstract: This paper gives a review of block coding for picture data compression. Block coding has been devised primarily for coding of graphics, but it has subsequently been extended to multilevel pictures. All the proposed codes are simple suboptimum prefix codes. Their simplicity makes them suitable for real-time applications. Although blocks can be of any shape, higher efficiencies are obtained with two-dimensional blocks, thus exploiting the inherent two-dimensional correlation of pictures. According to the value of a preset parameter, block coding can be either information lossless or information lossy. In the former case, the original digitized picture can be exactly reconstructed from its coded version. In the latter case, where the compression is much higher, distortions possess easily identified features. An appropriate filtering can restore the decoded picture satisfactorily. With a slight increase in complexity, block coding can be made adaptive in a number of ways, leading to much higher compressions. For each case, comprehensive theoretical models are developed to predict the performances and to optimize the parameters. The dependence of the compression ratio on image resolution for each specific code is also examined.
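One of the simplest codes in this family, white-block skipping, illustrates how such prefix codes work: an all-white block costs a single '0' bit, any other block a '1' prefix plus its raw bits. This 1-D sketch (2-D blocks work the same way on flattened blocks) is illustrative, not a specific code from the paper.

```python
# White-block-skipping sketch: lossless, and efficient when white dominates.

def encode_blocks(bits, block=4):
    out = []
    for i in range(0, len(bits), block):
        chunk = bits[i:i + block]
        if not any(chunk):
            out.append("0")                    # all-white: prefix only
        else:
            out.append("1" + "".join(map(str, chunk)))
    return "".join(out)

row = [0,0,0,0, 0,1,1,0, 0,0,0,0, 0,0,0,0]
coded = encode_blocks(row)     # 8 code bits for 16 picture bits
```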

63 citations


Patent
09 Jun 1980
TL;DR: In this paper, a data stream representing the data to be compressed is input to a buffer which is of a size to store sufficient information for prediction purposes, and a predictor is responsive to the buffer for producing a predicted data representation from a plurality of data units comprising a two dimensional matrix.
Abstract: Data compression, for either storage or transmission, of facsimile information is effected employing a two dimensional, non-contiguous prediction matrix. A data stream representing the data to be compressed is input to a buffer which is of a size to store sufficient information for prediction purposes. A predictor is responsive to the buffer for producing a predicted data representation from a plurality of data units comprising a two dimensional matrix. A selector is responsive to the data unit employed in the prediction process for making a select/non-select determination. For those data units which are selected, a comparator compares the predicted status of the data unit with the actual status of the data unit. At least one run length encoder is responsive to the comparator for run length encoding successive correct predictions and a following incorrect prediction. An output buffer is provided for storing the run length encoded output of the run length encoder as well as representations of the non-selected data units. By using plural run length encoders, each can be optimized for the encoded data by correlating prediction difficulty with code length, i.e., easy predictions are encoded by long code words and hard predictions are encoded with short words. Further, the unselected-class data units correspond to the most difficult predictions, and these are not coded.
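The run-length-encoding of "successive correct predictions and a following incorrect prediction" can be sketched with a trivial previous-pel predictor standing in for the patent's two-dimensional matrix (the predictor and initial state are assumptions).

```python
# Sketch: each run counts prediction hits up to the next miss; the runs,
# not the pels, are what get coded.

def prediction_runs(pels):
    runs, correct = [], 0
    prev = 0                      # assumed initial prediction state
    for p in pels:
        if p == prev:             # prediction hit
            correct += 1
        else:                     # miss terminates the run
            runs.append(correct)
            correct = 0
        prev = p
    runs.append(correct)          # trailing run of hits
    return runs

stream = [0, 0, 0, 1, 1, 1, 1, 0, 0]
runs = prediction_runs(stream)    # long runs -> few symbols to code
```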

63 citations


Patent
Robert Paul Davidson1
26 Sep 1980
TL;DR: In this article, a logic structure for an LSI digital circuit includes data compression circuitry for deriving a signature word from the data on a multiplicity of internal nodes which are not directly accessible from the terminals of the circuit.
Abstract: A logic structure for an LSI digital circuit includes data compression circuitry for deriving a signature word from the data on a multiplicity of internal nodes which are not directly accessible from the terminals of the circuit. The signature word provides error information concerning the data on the internal nodes which is not otherwise available for testing purposes. The addition of data compression circuitry facilitates the testing of LSI digital circuits and can be implemented with minimal overhead chip area.
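Signature words of this kind are conventionally produced by a linear-feedback shift register compacting the node values; the sketch below models that in software. The 16-bit CRC-CCITT polynomial is an illustrative choice, not taken from the patent.

```python
# LFSR signature analysis sketch: many internal-node samples compress to
# one signature word; any single-bit error changes the signature.

POLY = 0x1021            # CRC-CCITT feedback taps (assumed)

def signature(node_bits):
    sig = 0
    for b in node_bits:                      # one shift per sampled node
        fb = ((sig >> 15) & 1) ^ b
        sig = ((sig << 1) & 0xFFFF) ^ (POLY if fb else 0)
    return sig

good = [1, 0, 1, 1, 0, 0, 1, 0] * 4
bad = good[:]
bad[13] ^= 1                                 # single internal-node error
assert signature(good) != signature(bad)     # error is visible in signature
```

Because the register is linear over GF(2), a single-bit difference in the input stream always yields a different signature; only multi-bit errors can alias.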

40 citations


Journal ArticleDOI
01 Jan 1980
TL;DR: A comparative study shows that the proposed coding scheme performs much better than conventional predictive coding schemes and achieves a compression factor of about 8:1 for 8 gray-level image data.
Abstract: A technique of compressing image data derived from personal checks which possess several gray levels is described. Check images consist of both essential information such as printed and handwritten characters and nonessential background pattern or picture. Only the character plane is to be coded. Our proposed technique is divided into two phases: character plane extraction and character plane coding. In the first phase, a character plane which is composed of character pels on a uniform background is extracted from an original digital check image by using a combination of fundamental techniques of image segmentation. In the second phase, the extracted character plane is separated into a bit plane and a gray-level plane. The bit plane which preserves the position information of character pels on the character plane is conditional entropy coded. An adaptive two- or one-dimensional predictive coding scheme is applied to the gray-level plane which consists of only the character pels on the character plane. The check data are stored for further use as a combination of the codes derived from the bit plane encoder and the gray-level encoder in a check processing machine. A comparative study shows that the proposed coding scheme performs much better than conventional predictive coding schemes. For 8 gray-level image data, a compression factor of about 8:1 has been achieved.

38 citations


PatentDOI
TL;DR: In this paper, the data is compressed by discarding signal spectral samples which do not vary from previously stored spectral samples by a threshold amount, a technique termed difference-gating, which also reduces recognition errors.

Abstract: An improved acoustic signal recognition system, suitable for speech or other acoustic pattern recognition, features data compression of original signal data to reduce requirements for data storage and comparison, as well as to reduce recognition errors. The data is compressed by discarding signal spectral samples which do not vary from previously stored spectral samples by a threshold amount, using difference-gating.
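Difference-gating as described reduces to a simple keep-or-discard rule over spectral frames. The frame format, distance measure, and threshold below are assumptions for illustration.

```python
# Difference-gating sketch: keep a frame only if it differs from the last
# *kept* frame by at least a threshold; redundant frames are discarded.

def difference_gate(frames, threshold=3):
    kept = [frames[0]]                       # always keep the first frame
    for f in frames[1:]:
        change = sum(abs(a - b) for a, b in zip(f, kept[-1]))
        if change >= threshold:
            kept.append(f)                   # significant change: store it
    return kept

frames = [(10, 20, 30),
          (10, 21, 30),    # nearly identical -> dropped
          (15, 25, 30),    # big change -> kept
          (15, 25, 31)]    # nearly identical -> dropped
compressed = difference_gate(frames)
```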

30 citations


Journal ArticleDOI
TL;DR: It is shown here that the Hough transform may be used for encoding of line curves and waveforms that consist of the concatenation of curves from an underlying set of families of curves.

Journal ArticleDOI
TL;DR: Source encoding for digital image transmission is revisited with an energy distribution approach in the perceptual domain and the cosine transform is utilized on a partitioned image, suggesting a more rapid hardware implementation.
Abstract: Source encoding for digital image transmission is revisited with an energy distribution approach in the perceptual domain. Past investigations have utilized power spectral density in conjunction with the Frei eye model and full-image Fourier transform coding. In this investigation, the cosine transform is utilized on a partitioned image. A cosine energy function is defined and weighted by the eye model. This results in a circularly symmetric form of a bit map which simplifies source coding. This approach outperforms a standard bit allocation procedure, allowing graceful degradation at 1, 0.75, and 0.5 bits/pixel. Analysis includes the perceptual mean square error and peak signal-to-noise ratio as metrics of performance. This procedure suggests a more rapid hardware implementation.
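The partitioned cosine transform on which the eye-model weighting operates is a 2-D DCT-II per block. The sketch below shows only that transform (the energy function, weighting, and bit allocation are omitted); the block size is illustrative.

```python
import math

# 2-D DCT-II of one N x N image block (orthonormal form): the transform
# applied per partition before perceptual weighting and bit allocation.

def dct2(block):
    n = len(block)
    def c(k):                                 # orthonormal scale factors
        return math.sqrt(1.0 / n) if k == 0 else math.sqrt(2.0 / n)
    return [[c(u) * c(v) * sum(block[y][x]
                * math.cos((2 * x + 1) * u * math.pi / (2 * n))
                * math.cos((2 * y + 1) * v * math.pi / (2 * n))
                for y in range(n) for x in range(n))
             for u in range(n)] for v in range(n)]

flat = [[5.0] * 4 for _ in range(4)]          # constant block:
coeffs = dct2(flat)                           # all energy lands in DC
```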

Journal ArticleDOI
TL;DR: Some of the considerations and comparison criteria presented, even though not completely general because extracted from experimental results, can be useful in selecting and defining the more pertinent data compression system for the different practical applications.
Abstract: Several data compression methods are reviewed for signal and image digital processing and transmission, including both established and more recent techniques. Methods of prediction-interpolation, differential pulse code modulation, delta modulation and transformations are examined in some detail. The processing of two-dimensional data is also considered. Results of the application of these techniques to space telemetry and biomedical digital signal processing and telemetry systems are presented. Some of the considerations and comparison criteria presented, even though not completely general because extracted from experimental results, can be useful in selecting and defining the more pertinent data compression system for the different practical applications.

Proceedings ArticleDOI
C. Galand1, D. Esteban
01 Apr 1980
TL;DR: This paper focuses on the implementation and software techniques which were used to achieve a complete sub-band coder in about 1.3 million instructions per second on a processor with a 16-bit instruction and data flow.
Abstract: The availability of medium performance microprocessors, in conjunction with different concepts to generate real-time efficient signal processing software on microprocessors without hardwired multipliers, allows the real-time implementation of sub-band voice compression algorithms. This paper deals with the implementation of a 16 kbps (8 kHz, 2 bits per sample) 8 sub-band coder assuming a bank of 40-tap QMF decimator and interpolator filters, an adaptive allocation of the bit resource based on channel activity, and a straight block PCM coding of the decimated samples. The paper focuses on the implementation and software techniques which were used to achieve a complete sub-band coder in about 1.3 million instructions per second on a processor with a 16-bit instruction and data flow.

Journal ArticleDOI
T. Usubuchi1, T. Omachi, K. Iinuma
01 Jul 1980
TL;DR: An adaptive predictive coding method, which efficiently encodes newspaper pages with printed text and screened photographs is presented, and the data compression ratio is improved by about two times for a screened photograph and is almost the same or slightly higher for printed text.
Abstract: An adaptive predictive coding method, which efficiently encodes newspaper pages with printed text and screened photographs is presented. This coding technique utilizes two kinds of predictors with different reference picture elements (pels). One is applied to printed text and the other is applied to screened photographs. The Dth previous pel and its neighboring pels are adopted as the reference pels for the photograph predictor, where distance D coincides with the screen period. Comparing the adaptive predictive coding with a typical document facsimile coding, the data compression ratio is improved by about two times for a screened photograph (compression ratio is 5 ∼ 6) and is almost the same or slightly higher for printed text. Computer simulation shows that if a 500-kbits buffer memory is employed, it is possible to transmit most pages, including an extreme case of a 100 percent photograph page, with a 4500-rev/min scanner at a transmission bit rate of 128 kbit/s. For average pages the revolution speed can be raised to 6000 rev/min. Page transmission time of about 5 min in analog facsimile through a 48-kHz band can be reduced to 1.8 min by adopting the digital transmission with a 128-kbit/s data modem and the adaptive predictive coding technique, when the facsimile revolution speed is set at 6000 rev/min.

BookDOI
01 Jan 1980
TL;DR: This volume collects papers on image filtering, neighborhood operators, discrete and probabilistic relaxation, and applications including skeletonization, biomedical image analysis, and data compression in remote sensing.
Abstract: I. Filtering.- Transfer Function Analysis of Picture Processing Operators.- Enhancement, Filtering and Preprocessing Techniques.- II. Neighborhood Operators.- Clip-A User's Viewpoint.- Principles, Criteria and Algorithms in Mathematical Morphology.- Skeletonization in Quantitative Metallography.- III. Discrete and Probabilistic Relaxation.- A Relational View of Text Image Processing.- Cooperative Processes.- Local Structure, Consistency and Continuous Relaxation.- Scene Matching Methods.- IV. Applications.- Digital Imagery Processing - With Special Reference to Data Compression in Remote Sensing.- Biomedical Image Analysis.- Image Data Analysis in Remote Sensing.- A 4 View Automatic Measuring System for Bubble Chamber Film.- On the use of a Peano Scanning in Image Processing.

Journal ArticleDOI
TL;DR: A two-stage approach to reducing the stored compression/decompression table in field-level data file compression is studied, including the required additional table decompression time; it appears that the approach has limitations and is not completely satisfactory.

Abstract: The paper is concerned with the reduction of overhead storage, i.e., the stored compression/decompression (C/D) table, in field-level data file compression. A large C/D table can occupy a large fraction of main memory space during compression and decompression, and may cause excessive page swapping in virtual memory systems. A two-stage approach is studied, including the required additional C/D table decompression time. It appears that the approach has limitations and is not completely satisfactory.

Journal ArticleDOI
TL;DR: A new data compression method is presented which is applicable to systems whose observables are a linear combination of only a part of the system states, and the peculiar Kalman filter form of this case is used.
Abstract: In this work a new data compression method is presented which is applicable to systems whose observables are a linear combination of only a part of the system states. This characteristic is quite common in applied problems, especially in inertial navigation systems (INS). As a result, the formulation of the Kalman filter can be revised to yield a peculiar form for the covariance and state update which is the foundation of the new data compression method. The computations, according to this method, are divided into fast and slow rates. At the fast rate, which is determined by the availability of the measured data, a reduced-order Kalman filter is propagated and updated. The full-order system is propagated and updated only at a slow rate chosen by the designer. Utilizing the peculiar Kalman filter form of this case, the full-order system update is performed on the basis of the output of the reduced-order filter at each slow rate update time. Results of the application of this new data compression method to an INS are presented.

Journal ArticleDOI
R.B. Arps1
01 Jul 1980
TL;DR: This bibliography lists published papers that specifically relate to the compression of digitized binary images from graphics data that is basically two-level: information on background.
Abstract: This bibliography lists published papers that specifically relate to the compression of digitized binary images from graphics data. By graphics data we mean printed matter, handwriting, line drawings, or any image data that is basically two-level: information on background. The data compression covered here assumes information reduction to images with two levels of signal amplitude followed by redundancy reduction of these binary (i.e., black/white) images.

Journal ArticleDOI
TL;DR: By the two-dimensional fast Hadamard transform, non-linear characteristics of the electrode process have been compressed into a small matrix which can be fed to an automatic retrieval system or a learning machine.

Journal ArticleDOI
TL;DR: The minimax optimization criterion for the segmented compression characteristic for speech signal is formulated and the optimal seven segment compression characteristic is given.
Abstract: The minimax optimization criterion for the segmented compression characteristic is formulated. For speech signal, the optimal seven segment compression characteristic is given.

Journal ArticleDOI
TL;DR: This communication points out that both prediction and interpolation for data compression of ECG can be viewed as linear filtering and give the same result in terms of the amount of data compression achieved.
Abstract: A recent paper1 discusses prediction and interpolation for data compression of ECG. This communication points out that both methods can be viewed as linear filtering. They are therefore equivalent and give the same result in terms of the amount of data compression achieved.

Journal ArticleDOI
B. Doherty1
TL;DR: An extension of this strategy which allows systematic comparison with schemes based on non-Markovian grammars, such as relative address coding (RAC), is described.
Abstract: Ueno et al. [1] developed a strategy for systematic comparison of facsimile data compression schemes based on Markovian models. An extension of this strategy is described which allows systematic comparison with schemes based on non-Markovian grammars, such as relative address coding (RAC).

Patent
30 Jan 1980
TL;DR: In this paper, a system for time-compressing radar video is presented to improve the grac presentation thereof on a CRT display, where the video is pipelined as cell pairs into and out of a pair of parallelly-arranged memory banks by means of input and output latches.
Abstract: A system for time-compressing radar video is disclosed to improve the graphic presentation thereof on a CRT display. Quantized input video is pipelined as cell pairs into and out of a pair of parallelly-arranged memory banks by means of input and output latches. The memory banks are controlled to have alternating read and write modes which are interchanged in synchronism with the video scan period, so that a preceding scan is read out of one bank at a controlled read clock frequency while a current scan is written into the other bank at a fixed write clock frequency; the degree of video compression is determined by the ratio of the read-to-write frequencies. A pair of series-connected rate multipliers are each fed 6-bit rate words for varying the read clock frequency on either a linear or non-linear scale. Video integration is provided at the inputs of each memory bank by magnitude comparators which compare the stored contents of each range cell in the banks with the current input video for the respective cell, and steer multiplexers to either retain or rewrite the contents of the cell.

Proceedings Article
01 Jan 1980

TL;DR: A new method of image coding by autoregressive (AR) synthesis is presented and is shown to give superior resolution and to suppress the "block effects" present in block-by-block transform coding methods.
Abstract: A new method of image coding by autoregressive (AR) synthesis is presented. The physics of image formation suggests that an image may be considered as a power spectrum. Using this formulation, a cosine transform of the sampled image is shown to yield a set of autocorrelations. These are used to find an equivalent AR model whose parameters are encoded for transmission. Compared to conventional cosine transform coding, this method is shown to give superior resolution and to suppress the "block effects" present in block-by-block transform coding methods. A distinction between this method and the linear predictive coding (LPC) used for speech data compression is made. Extensions and examples for two-dimensional images are given.
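The step from autocorrelations to an equivalent AR model is classically done with the Levinson-Durbin recursion; a 1-D sketch is shown below for brevity (the paper works in two dimensions, and the test autocorrelations here are a synthetic AR(1) example, not from the paper).

```python
# Levinson-Durbin sketch: autocorrelations r[0..p] -> AR coefficients
# a[1..p] (with a[0] = 1) and the prediction-error power.

def levinson_durbin(r, order):
    a = [0.0] * (order + 1)
    a[0] = 1.0
    err = r[0]
    for m in range(1, order + 1):
        k = -sum(a[j] * r[m - j] for j in range(m)) / err   # reflection coeff
        new_a = a[:]
        for j in range(1, m):
            new_a[j] = a[j] + k * a[m - j]
        new_a[m] = k
        a, err = new_a, err * (1.0 - k * k)
    return a, err

# autocorrelations of an AR(1) process x[n] = 0.5*x[n-1] + e[n]:
r = [1.0, 0.5, 0.25, 0.125]
coeffs, err = levinson_durbin(r, 1)   # recovers a[1] = -0.5, i.e. pole 0.5
```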


Proceedings ArticleDOI
09 Apr 1980
TL;DR: This paper describes in general the three main approaches to speech synthesis, including Waveform digitization with unique compression algorithms, and the procedure for generating ROM code and key circuit elements of the control chip are presented.
Abstract: This paper describes in general the three main approaches to speech synthesis. The details of Waveform digitization with unique compression algorithms are described. A system for implementing this technique has been developed and realized in software and silicon. The procedure for generating ROM code and key circuit elements of the control chip are presented.

Journal ArticleDOI
01 Dec 1980
TL;DR: In this paper, a recursive linear optimal estimation method is presented which processes a batch of measured data at each estimation point and serves as a data compression scheme, which is an integrated one in the sense that the optimization is performed on a pre-filter/estimator combination.
Abstract: A recursive linear optimal estimation method is presented which processes a batch of measured data at each estimation point. Due to this feature the new method serves as a data compression scheme. This scheme is an integrated one in the sense that the optimization is performed on a pre-filter/estimator combination. The method is derived and compared for accuracy performance and complexity with the Kalman filter. It is found that a considerable saving in computational effort, with a minimal degradation of accuracy, is achieved using this method. An example is presented and efficient algorithms to implement this method are developed.

Proceedings ArticleDOI
22 Aug 1980
TL;DR: Interpolated DPCM is a mechanism for separating an image into low- and high- spatial frequency components, with a similar amount of data compression being achieved.
Abstract: The increasing complexity and variety of image sensors has been the source of interest in the development of data compression for images. Image data compression has become one of the most active topics of research in digital image processing as a result [1]. The continued evolution of digital circuitry has caused the focus of data compression research to lie in digital implementations. However, there is also a potential for optical computations in image data compression, as was demonstrated in the concepts of interpolated DPCM [2]. The method of DPCM data compression is one of the most thoroughly studied techniques. DPCM achieves data compression by separating the image information into two parts: the low spatial frequencies and the high spatial frequencies. Low spatial frequencies are retained by exploiting their predictability; high spatial frequencies are retained at fewer significant bits, and substantial data compression is achieved. Interpolated DPCM is a mechanism for separating an image into low- and high-spatial-frequency components, with a similar amount of data compression being achieved. The computations to achieve the separation can be implemented by simple incoherent optical devices [2].
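The low/high-frequency separation of interpolated DPCM can be sketched in 1-D: subsampled values carry the low frequencies, linear interpolation predicts the skipped samples, and only coarse residuals are kept for them. The subsampling factor and quantiser step are illustrative assumptions.

```python
# Interpolated-DPCM sketch (1-D): exact low-frequency samples plus coarsely
# quantised interpolation residuals for the samples in between.

STEP = 4                                     # residual quantiser step

def idpcm_encode(x):
    low = x[::2]                             # subsampled low-frequency part
    resid = []
    for i in range(1, len(x) - 1, 2):        # interpolate skipped samples
        pred = (x[i - 1] + x[i + 1]) / 2.0
        resid.append(round((x[i] - pred) / STEP))  # coarse residual
    return low, resid

def idpcm_decode(low, resid):
    out = []
    for i, r in enumerate(resid):
        out.append(low[i])
        pred = (low[i] + low[i + 1]) / 2.0
        out.append(pred + r * STEP)          # prediction + coarse correction
    out.append(low[len(resid)])
    return out

x = [0, 10, 20, 26, 40, 50, 60]
low, resid = idpcm_encode(x)                 # resid is small where the
approx = idpcm_decode(low, resid)            # signal is smooth
```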