
Showing papers on "Data compression published in 1986"


Journal ArticleDOI
TL;DR: A data compression scheme that exploits locality of reference, such as occurs when words are used frequently over short intervals and then fall into long periods of disuse, is described; the authors prove that it never performs much worse than Huffman coding and can perform substantially better.
Abstract: A data compression scheme that exploits locality of reference, such as occurs when words are used frequently over short intervals and then fall into long periods of disuse, is described. The scheme is based on a simple heuristic for self-organizing sequential search and on variable-length encodings of integers. We prove that it never performs much worse than Huffman coding and can perform substantially better; experiments on real files show that its performance is usually quite close to that of Huffman coding. Our scheme has many implementation advantages: it is simple, allows fast encoding and decoding, and requires only one pass over the data to be compressed (static Huffman coding takes two passes).
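A minimal sketch of the two ingredients the abstract names, a self-organizing (move-to-front) word list and a variable-length integer code. The Elias gamma code and the word-level escape convention used here are illustrative choices, not necessarily the paper's:

```python
def elias_gamma(n: int) -> str:
    """Prefix-free variable-length code: small positive integers get short codewords."""
    b = bin(n)[2:]
    return "0" * (len(b) - 1) + b

def mtf_encode(words):
    """Move-to-front list + variable-length integer codes: words seen recently
    sit near the front of the list, so their positions (and codewords) are short."""
    recent, out = [], []
    for w in words:
        if w in recent:
            pos = recent.index(w)
            out.append(elias_gamma(pos + 1))            # transmit the list position
            recent.pop(pos)
        else:
            out.append(elias_gamma(len(recent) + 1))    # escape: one past the list end,
            out.append(w)                               # followed by the word spelled out
        recent.insert(0, w)                             # move (or insert) to the front
    return out

print(mtf_encode("the cat sat on the mat and the cat sat".split()))
```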

564 citations


Journal ArticleDOI
TL;DR: The proposed picture compressibility is shown to possess the properties that one would expect and require of a suitably defined concept of two-dimensional entropy for arbitrary probabilistic ensembles of infinite pictures.
Abstract: Distortion-free compressibility of individual pictures, i.e., two-dimensional arrays of data, by finite-state encoders is investigated. For every individual infinite picture I, a quantity ρ(I) is defined, called the compressibility of I, which is shown to be the asymptotically attainable lower bound on the compression ratio that can be achieved for I by any finite-state information-lossless encoder. This is demonstrated by means of a constructive coding theorem and its converse that, apart from their asymptotic significance, might also provide useful criteria for finite and practical data-compression tasks. The proposed picture compressibility is also shown to possess the properties that one would expect and require of a suitably defined concept of two-dimensional entropy for arbitrary probabilistic ensembles of infinite pictures. While the definition of ρ(I) allows the use of different machines for different pictures, the constructive coding theorem leads to a universal compression scheme that is asymptotically optimal for every picture. The results are readily extendable to data arrays of any finite dimension.

217 citations


Journal ArticleDOI
TL;DR: The method of Fourier descriptors (FDs) is presented for ECG data compression; it is resistant to noisy signals and simple, requiring only implementation of a forward and an inverse FFT.
Abstract: The method of Fourier descriptors (FD's) is presented for ECG data compression. The two-lead ECG data are segmented into QRS complexes and S-Q intervals, expressed as a complex sequence, and are Fourier transformed to obtain the FD's. A few lower order descriptors symmetrically situated with respect to the dc coefficient represent the data in the Fourier (compressed) domain. While compression ratios of 10:1 are feasible for the S-Q interval, the clinical information requirements limit this ratio to 3:1 for the QRS complex. With an overall compression ratio greater than 7, the quality of the reconstructed signal is well suited for morphological studies. The method is resistant to noisy signals and is simple, requiring implementation of forward and inverse FFT. The results of compression of ECG data obtained from more than 50 subjects with rhythm and morphological abnormalities are presented.
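A minimal numpy sketch of the compression step as described: a segment from the two leads is treated as a complex sequence, transformed with the FFT, and only a few low-order coefficients on either side of the DC term are kept. The segment, lead pairing, and coefficient count below are illustrative, not the paper's exact parameters:

```python
import numpy as np

def fd_compress(segment_lead1, segment_lead2, keep=8):
    """Keep only low-order Fourier descriptors symmetric about DC."""
    z = np.asarray(segment_lead1) + 1j * np.asarray(segment_lead2)  # two leads -> complex sequence
    fd = np.fft.fft(z)
    idx = np.r_[0:keep + 1, len(z) - keep:len(z)]   # DC plus +keep and -keep coefficients
    return idx, fd[idx], len(z)

def fd_reconstruct(idx, coeffs, n):
    full = np.zeros(n, dtype=complex)
    full[idx] = coeffs
    z = np.fft.ifft(full)
    return z.real, z.imag                            # back to the two leads

# Toy example: a beat-like waveform on two leads (stand-in for a QRS complex)
t = np.linspace(0, 1, 200)
lead1 = np.exp(-((t - 0.5) ** 2) / 0.002)
lead2 = 0.5 * lead1
idx, coeffs, n = fd_compress(lead1, lead2, keep=8)
r1, r2 = fd_reconstruct(idx, coeffs, n)
print("stored coefficients:", len(coeffs), "of", n)
```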

183 citations


Patent
20 Aug 1986
TL;DR: In this paper, an efficient speech compression algorithm is applied to transform a sequence of digitized speech samples into a much shorter sequence of compression variables, which are further processed to construct a minimum-length bit string, and an identifying header is appended to form a packet.
Abstract: An efficient system for simultaneously conveying a large number of telephone conversations over a much smaller number of relatively low-bandwidth digital communication channels is disclosed. Each incoming telephone speech signal is filtered, periodically sampled, and digitized. An efficient computational speech compression algorithm is applied to transform a sequence of digitized speech samples into a much shorter sequence of compression variables. The compression variables sequence is further processed to construct a minimum-length bit string, and an identifying header is appended to form a packet. Only a few packets containing information on representative background noise are generated during pauses in speech, thereby conserving digital bandwidth. The packets are queued and transmitted asynchronously over the first available serial digital communication channel. Numerical feedback to the compression algorithm is employed which results in the packet size being reduced during periods of high digital channel usage. Packet header information is utilized to establish a "virtual circuit" between sender and receiver.

173 citations


Journal ArticleDOI
31 Aug 1986
TL;DR: A method is developed for surface-fitting from sampled data based on an adaptive subdivision approach, a technique previously used for the design and display of free-form curved surface objects; the method is simple in concept, yet realizes efficient data compression.
Abstract: A method is developed for surface-fitting from sampled data. Surface-fitting is the process of constructing a compact representation to model the surface of an object based on a fairly large number of given data points. In our case, the data is obtained from a real object using an automatic three-dimensional digitizing system. The method is based on an adaptive subdivision approach, a technique previously used for the design and display of free-form curved surface objects. Our approach begins with a rough approximating surface and progressively refines it in successive steps in regions where the data is poorly approximated. The method has been implemented using a parametric piecewise bicubic Bernstein-Bezier surface possessing G1 geometric continuity. An advantage of this approach is that the refinement is essentially local, reducing the computational requirements, which permits the processing of large databases. Furthermore, the method is simple in concept, yet realizes efficient data compression. Some experimental results are given which show that the representation constructed by this method is faithful to the original database.
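The refinement loop can be illustrated with a much simpler stand-in: bilinear patches over a height field, subdivided only where the approximation error exceeds a tolerance. The paper itself fits bicubic Bernstein-Bezier patches with G1 continuity; that machinery (and patch-boundary continuity) is not reproduced here:

```python
import numpy as np

def fit_patch(z, r0, r1, c0, c1):
    """Bilinear patch through the four corner samples of the region
    (a stand-in for the paper's bicubic Bernstein-Bezier patches)."""
    rows = np.linspace(0, 1, r1 - r0)[:, None]
    cols = np.linspace(0, 1, c1 - c0)[None, :]
    zc = z[[r0, r0, r1 - 1, r1 - 1], [c0, c1 - 1, c0, c1 - 1]]
    return (zc[0] * (1 - rows) * (1 - cols) + zc[1] * (1 - rows) * cols
            + zc[2] * rows * (1 - cols) + zc[3] * rows * cols)

def adaptive_fit(z, r0, r1, c0, c1, tol, patches):
    """Start coarse; subdivide only where the data is poorly approximated."""
    approx = fit_patch(z, r0, r1, c0, c1)
    err = np.abs(z[r0:r1, c0:c1] - approx).max()
    if err <= tol or (r1 - r0) <= 2 or (c1 - c0) <= 2:
        patches.append((r0, r1, c0, c1))             # accept this patch
        return
    rm, cm = (r0 + r1) // 2, (c0 + c1) // 2          # otherwise split into four
    for a, b in [(r0, rm), (rm, r1)]:
        for c, d in [(c0, cm), (cm, c1)]:
            adaptive_fit(z, a, b, c, d, tol, patches)

# Toy height field: flat except for one bump, so refinement stays local
y, x = np.mgrid[0:64, 0:64]
z = np.exp(-((x - 40) ** 2 + (y - 20) ** 2) / 40.0)
patches = []
adaptive_fit(z, 0, 64, 0, 64, tol=0.05, patches=patches)
print(len(patches), "patches instead of", 64 * 64, "samples")
```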

169 citations


Patent
Victor S. Miller1, Mark N. Wegman1
11 Aug 1986
TL;DR: In this paper, a data compression method for communications between a host computing system and a number of remote terminals is enhanced by adding new character and string extensions to improve the compression ratio and by a least-recently-used deletion routine that limits the encoding tables to a fixed size.
Abstract: Communications between a Host Computing System and a number of remote terminals are enhanced by a data compression method which modifies the data compression method of Lempel and Ziv by the addition of new character and new string extensions to improve the compression ratio, and by a least-recently-used deletion routine that limits the encoding tables to a fixed size, significantly improving data transmission efficiency.
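A rough sketch of the dictionary-management idea, using an LZW-style string table with least-recently-used eviction once a fixed size is reached. The patent's exact character/string extension rules and table format are not reproduced, and a real codec would mirror the evictions at the decoder and reuse evicted codes:

```python
from collections import OrderedDict

def lzw_lru_encode(data: bytes, max_entries: int = 4096):
    """LZW-style encoding with a least-recently-used limit on the string table."""
    table = OrderedDict((bytes([i]), i) for i in range(256))   # single characters
    next_code, out, s = 256, [], b""
    for byte in data:
        s_next = s + bytes([byte])
        if s_next in table:
            table.move_to_end(s_next)                 # mark the string as recently used
            s = s_next
            continue
        out.append(table[s])                          # emit code for the longest known string
        table.move_to_end(s)
        if len(table) >= max_entries:                 # table full: evict the least recently
            for k, v in table.items():                # used multi-character string (single
                if v >= 256:                          # characters are never evicted)
                    del table[k]
                    break
        table[s_next] = next_code                     # new string extension
        next_code += 1                                # (a real codec reuses evicted codes so
        s = bytes([byte])                             # the code width stays bounded)
    if s:
        out.append(table[s])
    return out

print(lzw_lru_encode(b"abracadabra abracadabra", max_entries=300))
```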

162 citations


Patent
19 Sep 1986
TL;DR: In this paper, a data compression system for increasing the speed of a data transmission system over a communication channel with a predefined data transmission rate is presented, where a table changer is used to select from among the encoding tables the one which minimizes the bit length of the encoded data for a preselected sample of the input data.
Abstract: A data compression system is disclosed for increasing the speed of data transmission over a communication channel with a predefined data transmission rate. The system has two data compression units, one on each end of the channel, coupled to first and second data processing systems. Input data from either data processing system is encoded using a selected one of a plurality of encoding tables, each of which defines a method of encoding data using codes whose length varies inversely with the frequency of units of data in a predefined set of data. Whenever an analysis of the encoded data indicates that the data is not being efficiently compressed, the system invokes a table changer for selecting from among the encoding tables the one which minimizes the bit length of the encoded data for a preselected sample of the input data. If a new table is selected, a table change code which corresponds to the selected table is added to the encoded data. Also, a dynamic table builder builds a new encoding table, to be included in the set of available encoding tables, using a preselected portion of the previously encoded input data whenever an analysis of the encoded data indicates that a new encoding table will enhance compression. Each data compression unit includes a data decoder for decoding encoded data sent over the channel by the other unit. The data decoder uses a set of decoding tables corresponding to the encoding tables, means for selecting a new table when a table change code is received, and means for building a new decoding table when it receives a table change code which indicates that the encoded data following the table change code was encoded using a new encoding table.
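A toy sketch of the table-selection step: measure a sample of the input against each available encoding table and switch, emitting a table-change code, only when the saving outweighs the cost of the change code. The tables, block size, and change-code cost below are invented for illustration; the patent's dynamic table builder is not modeled:

```python
# Each "encoding table" maps a byte to a hypothetical codeword length in bits;
# shorter lengths are assigned to bytes assumed frequent under that table.
TABLES = {
    "text":   lambda b: 4 if 32 <= b < 128 else 12,   # favors printable ASCII
    "binary": lambda b: 8,                            # flat 8 bits per byte
    "sparse": lambda b: 2 if b == 0 else 10,          # favors runs of zero bytes
}
TABLE_CHANGE_BITS = 16                                # cost of a table-change code

def encoded_length(sample: bytes, table) -> int:
    return sum(table(b) for b in sample)

def choose_tables(data: bytes, block: int = 256, current: str = "binary"):
    """For each block, switch tables only if the saving on a sample of the
    input outweighs the cost of emitting the table-change code."""
    plan, total_bits = [], 0
    for i in range(0, len(data), block):
        sample = data[i:i + block]
        best = min(TABLES, key=lambda name: encoded_length(sample, TABLES[name]))
        if best != current and (encoded_length(sample, TABLES[current])
                                - encoded_length(sample, TABLES[best])) > TABLE_CHANGE_BITS:
            total_bits += TABLE_CHANGE_BITS           # emit the change code
            current = best
        plan.append(current)
        total_bits += encoded_length(sample, TABLES[current])
    return plan, total_bits

plan, bits = choose_tables(b"\x00" * 300 + b"hello world, hello tables" * 20)
print(plan, bits)
```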

147 citations


Proceedings ArticleDOI
M. Lukacs1
07 Apr 1986
TL;DR: The use of digital predictive coding as a means of data compression for the transmission or storage of a set of spatially related images needed for an autostereoscopic display and a new sort of predictor called Disparity Corrected Prediction are described.
Abstract: Three dimensional display of moving images greatly enhances realism and adds a unique sense of "presence". Three dimensional video systems have been kept from widespread application by two technical problems, the need for glasses, viewing hoods, or other cumbersome devices for image steering, and the high bandwidths needed for transmission. Devices that avoid the discomfort of headgear by using autostereoscopic (pseudo-holographic) displays are known, but these methods require even higher bandwidths to be effective. This paper introduces the use of digital predictive coding as a means of data compression for the transmission or storage of a set of spatially related images needed for an autostereoscopic display. (Interframe coding without frame memories.) The algorithms, implementations, and application of a new sort of predictor called Disparity Corrected Prediction are described.
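A minimal sketch of disparity-compensated block prediction between two adjacent views, assuming a purely horizontal disparity search with a sum-of-absolute-differences criterion; the paper's actual predictor and search strategy are not reproduced:

```python
import numpy as np

def disparity_corrected_prediction(left, right, block=8, max_disp=16):
    """Predict each block of `right` from a horizontally shifted block of `left`
    and return the prediction, the chosen disparities, and the residual to code."""
    h, w = right.shape
    pred = np.zeros_like(right, dtype=float)
    disparities = []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            target = right[y:y + block, x:x + block].astype(float)
            best_d, best_err = 0, np.inf
            for d in range(0, min(max_disp, x) + 1):        # candidate horizontal shifts
                cand = left[y:y + block, x - d:x - d + block].astype(float)
                err = np.abs(target - cand).sum()           # sum of absolute differences
                if err < best_err:
                    best_d, best_err = d, err
            pred[y:y + block, x:x + block] = left[y:y + block, x - best_d:x - best_d + block]
            disparities.append(best_d)
    residual = right.astype(float) - pred                   # this is what gets coded
    return pred, disparities, residual

left = np.random.randint(0, 256, (32, 64))
right = np.roll(left, 4, axis=1)                             # simulated adjacent view
_, disp, res = disparity_corrected_prediction(left, right)
print(disp[:8], float(np.abs(res[:, 8:]).mean()))
```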

118 citations


Journal ArticleDOI
TL;DR: An OPM/L data compression scheme suggested by Ziv and Lempel, LZ77, is applied to text compression, and a slightly modified version suggested by Storer and Szymanski, LZSS, is found to achieve compression ratios as good as most existing schemes for a wide range of texts.
Abstract: An OPM/L data compression scheme suggested by Ziv and Lempel, LZ77, is applied to text compression. A slightly modified version suggested by Storer and Szymanski, LZSS, is found to achieve compression ratios as good as most existing schemes for a wide range of texts. LZSS decoding is very fast, and comparatively little memory is required for encoding and decoding. Although the time complexity of LZ77 and LZSS encoding is O(M) for a text of M characters, straightforward implementations are very slow. The time consuming step of these algorithms is a search for the longest string match. Here a binary search tree is used to find the longest string match, and experiments show that this results in a dramatic increase in encoding speed. The binary tree algorithm can be used to speed up other OPM/L schemes, and other applications where a longest string match is required. Although the LZSS scheme imposes a limit on the length of a match, the binary tree algorithm will work without any limit.
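A compact LZSS-style encoder using the straightforward (slow) window search for the longest match; the paper's point is precisely that this search can be replaced by a binary search tree for a dramatic speedup, but the tree maintenance is omitted here for brevity:

```python
def lzss_encode(data: bytes, window: int = 4096, min_match: int = 3, max_match: int = 18):
    """Emit literals and (offset, length) references into a sliding window."""
    out, i = [], 0
    while i < len(data):
        best_len, best_off = 0, 0
        start = max(0, i - window)
        for j in range(start, i):                      # brute-force longest-match search;
            length = 0                                 # the paper replaces this loop with
            while (length < max_match                  # a binary search tree
                   and i + length < len(data)
                   and data[j + length] == data[i + length]):
                length += 1
            if length > best_len:
                best_len, best_off = length, i - j
        if best_len >= min_match:
            out.append(("ref", best_off, best_len))    # pointer into the window
            i += best_len
        else:
            out.append(("lit", data[i]))               # literal byte
            i += 1
    return out

print(lzss_encode(b"abcabcabcabcxyz"))
```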

88 citations



Patent
Toshio Koga1
19 Mar 1986
TL;DR: In this paper, a motion compensated interframe prediction error is orthogonally transformed and then quantized; the quantized error signal then undergoes inverse orthogonal transformation.
Abstract: In order to encode image signals using interframe correlation while maintaining high data compression as well as high quality in the resulting images, a motion compensated interframe prediction error is orthogonally transformed and then subjected to quantization. Thereafter, the quantized error signal undergoes inverse orthogonal transformation. In order to compensate for distortions caused by the quantization, the difference between the inversely orthogonally transformed error signal and the original prediction error signal is quantized. By selecting the quantization characteristics, the influence or contribution of the orthogonal transformation can be controlled continuously.


Proceedings ArticleDOI
E. Walach1, E. Karnin
07 Apr 1986
TL;DR: The proposed approach is very simple (both conceptually and from the point of view of computational complexity), and it seems to be well suited to the psycho-visual characteristics of the human eye.
Abstract: We introduce a new approach to the issue of lossy data compression. The basic concept has been inspired by the theory of fractal geometry. The idea is to traverse the entire data string utilizing a fixed-length "yardstick". The coding is achieved by transmitting only the sign bit (to distinguish between ascent and descent) and the horizontal distance covered by the "yardstick". All data values are estimated, at the receiver's site, based on this information. We have applied this approach in the context of image compression, and the preliminary results seem to be very promising. Indeed, the proposed approach is very simple (both conceptually and from the point of view of computational complexity), and it seems to be well suited to the psycho-visual characteristics of the human eye. The paper includes a brief description of the coding concept. Next, a number of possible modifications and extensions are discussed. Finally, a number of simulations are included in order to support the theoretical derivations. Good quality images are achieved with as low as 0.5 bit/pel.
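One plausible reading of the scheme, for illustration only: from the current point, advance until the fixed-length chord (the "yardstick") is used up, transmit the sign of the vertical change and the horizontal distance covered, and let the receiver recover the vertical change from the yardstick geometry. The authors' exact coder may differ:

```python
import math

def yardstick_encode(samples, L=4.0):
    """Walk a fixed-length yardstick along the signal; emit (sign, run) pairs."""
    out, i, y = [], 0, samples[0]
    while i < len(samples) - 1:
        j = i + 1
        while (j < len(samples) - 1
               and math.hypot(j - i, samples[j] - y) < L):
            j += 1                                    # extend until the chord reaches length L
        dy = samples[j] - y
        out.append((1 if dy >= 0 else -1, j - i))     # sign bit + horizontal distance
        y += math.copysign(math.sqrt(max(L * L - (j - i) ** 2, 0.0)), dy)
        i = j                                         # track the decoder's estimate y
    return out

def yardstick_decode(pairs, y0=0.0, L=4.0):
    ys, y = [y0], y0
    for sign, run in pairs:
        y += sign * math.sqrt(max(L * L - run * run, 0.0))
        ys.append(y)                                  # endpoint estimates; in-between values
    return ys                                         # would be interpolated

sig = [math.sin(t / 5.0) * 10 for t in range(100)]
pairs = yardstick_encode(sig, L=4.0)
print(len(pairs), "pairs for", len(sig), "samples")
```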

Patent
12 Dec 1986
TL;DR: In this paper, the authors propose a data compression method and apparatus particularly suitable for use in electrical power line fault data recorders; the system performs both gain compression and frequency compression, and for gain compression a predetermined number of samples are analyzed to determine a gain setting common to each sample in the set.
Abstract: A data compression method and apparatus particularly suitable for use in electrical power line fault data recorders. The system performs both gain compression and frequency compression. For gain compression, a predetermined number of samples are analyzed to determine a gain setting common to each sample in the set of samples. A reduced data string, consisting of a gain code and data words having fewer bits than the input words, is transmitted as the compressed data string. For frequency compression, a sample set representing the input signal is decimated until only enough data samples remain to satisfy the Nyquist criterion for the highest frequency component of interest. The frequency-compressed output data string comprises a frequency code representing the highest frequency of interest followed by the set of decimated data samples.
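A rough sketch of the two steps as described, with illustrative block size, word widths, and rates (a real recorder would also low-pass filter before decimating):

```python
import numpy as np

def gain_compress(block, out_bits=8):
    """Choose one right-shift (gain code) for the whole block so every sample
    fits in out_bits, then transmit the gain code plus the shortened words."""
    peak = int(np.max(np.abs(block)))
    shift = max(0, peak.bit_length() - (out_bits - 1))   # -1 leaves room for the sign
    return shift, (np.asarray(block) >> shift).astype(np.int16)

def frequency_compress(samples, fs, f_max):
    """Decimate until the rate just satisfies the Nyquist criterion for f_max
    (anti-alias filtering before decimation is omitted in this sketch)."""
    factor = max(1, int(fs // (2 * f_max)))
    return factor, samples[::factor]

# 16-bit fault-recorder style samples at 10 kHz, highest frequency of interest 500 Hz
x = (np.sin(2 * np.pi * 60 * np.arange(1000) / 10_000) * 20000).astype(np.int32)
gain, words = gain_compress(x[:256], out_bits=8)
factor, decimated = frequency_compress(x, fs=10_000, f_max=500)
print("gain code:", gain, "sample words:", words[:4], "decimation factor:", factor)
```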



Proceedings ArticleDOI
01 Apr 1986
TL;DR: This paper presents a one-chip operator achieving a full 16 × 16 DCT computation at video rate; a low-cost implementation of such a high-speed DCT operator would lower the price of CODECs and could open new fields of application for the DCT in real-time image processing.
Abstract: The Discrete Cosine Transform [1] is a good but computationally demanding first step of many image coding and compression algorithms for good quality, low rate transmissions. A low-cost implementation of a high-speed DCT operator would lower the price of CODECs and could open new fields of applications for the DCT in real time image processing. This paper presents a one-chip operator achieving a full 16 × 16 DCT computation at video rate. Algorithmic, architectural and implementation choices, combined with a careful optimization of the layout, have made it possible to design a reasonable-size chip exhibiting such high performance.
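For reference, the transform such a chip evaluates on an N × N block (here N = 16) is the two-dimensional DCT, written below in the common orthonormal DCT-II convention; the paper's own scaling may differ:

$$
X(u,v) = \frac{2}{N}\,C(u)\,C(v)\sum_{m=0}^{N-1}\sum_{n=0}^{N-1} x(m,n)\,
\cos\!\frac{(2m+1)u\pi}{2N}\,\cos\!\frac{(2n+1)v\pi}{2N},
\qquad C(0)=\tfrac{1}{\sqrt{2}},\quad C(k)=1 \text{ for } k>0.
$$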

Proceedings ArticleDOI
01 Dec 1986
TL;DR: A new speech processing and coding method is proposed which makes use of perceptual redundancy for slowly varying short-time phase characteristics and is shown to be superior to other coders both in terms of SNR and subjective quality.
Abstract: A new speech processing and coding method is proposed which makes use of perceptual redundancy for slowly varying short-time phase characteristics. The method employs waveform conversion through a phase-equalizing filter, which is based on the time domain matched filter for the residue of Linear Predictive Coding (LPC). Phase-equalized speech is found to be almost perceptually equivalent and to be efficiently encoded by a two-stage quantization. In the first stage, vector quantization is performed for the pulse pattern in the time domain. In the second stage, vector-scalar quantization is applied to the spectral components using adaptive bit allocation. The proposed coder is proven to be superior to other coders both in terms of the SNR and the subjective quality. The averaged subjective quality at 9.6 kbps is comparable to that of a 6 bit log PCM.

Journal ArticleDOI
TL;DR: A new gray-scale image coding technique has been developed in which an extended DPCM approach is combined with entropy coding; it has been implemented in a freeze-frame videoconferencing system now operational at IBM sites throughout the world.
Abstract: A new gray-scale image coding technique has been developed, in which an extended DPCM approach has been combined with entropy coding. This technique has been implemented in a freeze-frame videoconferencing system which is now operational at IBM sites throughout the world. Following image preprocessing, the two fields of the interlaced 512 x 480 pixel video frame are compressed sequentially with different algorithms. The reconstructed image quality is improved by subsequent image postprocessing, the final reconstructed image being almost indistinguishable from the original image. Typical gray-scale video images compress to about a half bit per pixel and transmit over 4.8 kbit/s dial-up telephone lines in about a half minute. The gray-scale image processing and compression algorithms are described in this paper.
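The DPCM half of such a scheme is easy to sketch; below is a minimal previous-pixel predictor with a uniform quantizer, whose small integer outputs are what the entropy coder would then compress. The predictor, step size, and start value are illustrative, not the system's actual field-dependent algorithms:

```python
def dpcm_encode(row, step=8):
    """Quantize the prediction error against the previous reconstructed pixel."""
    codes, prediction = [], 128                      # start from mid-gray
    for pixel in row:
        error = pixel - prediction
        q = round(error / step)                      # these small integers are what the
        codes.append(q)                              # entropy coder then compresses
        prediction = max(0, min(255, prediction + q * step))  # track the decoder's value
    return codes

def dpcm_decode(codes, step=8):
    row, prediction = [], 128
    for q in codes:
        prediction = max(0, min(255, prediction + q * step))
        row.append(prediction)
    return row

original = [120, 122, 125, 200, 198, 60, 61, 63]
codes = dpcm_encode(original)
print(codes, dpcm_decode(codes))
```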

Patent
31 Jul 1986
TL;DR: In this article, a low bandwidth video teleconferencing system and method is described, which employs novel data compression techniques by which continuous transmission of imagery at a rate of 9600 bits/second is possible.
Abstract: A low bandwidth video teleconferencing system and method is disclosed. The video teleconferencing system employs novel data compression techniques by which continuous transmission of imagery at a rate of 9600 bits/second is possible. A sketch coder converts the grey scale image to be transmitted to a sketch or line drawing, which comprises an outline of the principal boundaries plus shading to represent depth. The bandwidth required for the data representing the sketch is then compressed by two-dimensional run length coding techniques which exploit interframe and interline redundancy as well as intraline redundancy to generate a binary transmission code. Other features are also provided.

Proceedings ArticleDOI
10 Dec 1986
TL;DR: A recursive algorithm for the DCT is presented whose structure allows generation of the next higher-order DCT from two identical lower-order DCTs and which requires fewer multipliers and adders than other DCT algorithms.
Abstract: The Discrete Cosine Transform (DCT) has found wide applications in various fields, including image data compression, because it operates like the Karhunen-Loeve Transform for stationary random data. This paper presents a recursive algorithm for the DCT whose structure allows the generation of the next higher-order DCT from two identical lower-order DCTs. As a result, the method for implementing this recursive DCT requires fewer multipliers and adders than other DCT algorithms.

Journal ArticleDOI
TL;DR: A Video-Data-Acquisition-System (VDAS) has been developed to record image data from a scintillating glass fiber-optic target developed for High Energy Physics.
Abstract: A Video-Data-Acquisition-System (VDAS) has been developed to record image data from a scintillating glass fiber-optic target developed for High Energy Physics.[1],[2] The major components of the VDAS are a flash ADC, a "real time" high speed data compactor, and high speed 8 megabyte FIFO memory. The data rates through the system are in excess of 30 megabytes/second. The compactor is capable of reducing the amount of data needed to reconstruct typical images by as much as a factor of 20. The FIFO uses only standard NMOS DRAMS and TTL components to achieve its large size and high speed at relatively low power and cost.

Journal ArticleDOI
TL;DR: In this article, a scale-invariant segregation of image structure solely on the basis of orientation content is described, which is complementary to the well-explored method of filtering in spatial frequency bands; the latter technique is rotation invariant, whereas the former technique is scale invariant.
Abstract: A method is described for scale-invariant segregation of image structure solely on the basis of orientation content. This kind of image decomposition is an unexplored image-processing method that is complementary to the well-explored method of filtering in spatial frequency bands; the latter technique is rotation-invariant, whereas the former technique is scale-invariant. The complementarity of these two approaches is explicit in the fact that orientation and spatial frequency are orthogonal variables in the two-dimensional Fourier plane, and the filters employed in the one method depend only on the radial variable, whereas those employed in the other method depend only on the angular variable. The biological significance of multiscale (spatial frequency selective) image analysis has been well-recognized and often cited, yet orientation selectivity is a far more striking property of neural architecture in cortical visual areas. In the present paper, we begin to explore some coding properties of the scale-invariant orientation variable, paying particular attention to its perceptual significance in texture segmentation and compact image coding. Examples of orientation-coded pictures are presented with data compression to 0.3 bits per pixel.

Patent
05 Jun 1986
TL;DR: In this paper, an information signal delay system utilizes a solid-state memory for continuously storing the information and reading it out on a time-delayed basis, where the time delay is related to the anticipated reaction time it takes to cycle completely through all of the address locations in that portion of the memory being used to store the information.
Abstract: An information signal delay system utilizes a solid-state memory (20) for continuously storing the information and reading it out on a time-delayed basis. An information signal is converted by a converter (14) into a digital format and compressed using conventional compression algorithms in an analyzer and converter (14). The compressed digital signal is then sequentially written into successive locations in a random access memory (20). These locations are sequentially addressed at a later point in time to read the digitized information out of the memory on a time-delayed basis relative to when it was stored in the memory. The time delay is related to the anticipated reaction time it takes to cycle completely through all of the address locations in that portion of the memory being used to store the information. The digitized information that is read out of the memory can be synthesized by a synthesizer (24) or otherwise suitably processed to reconstruct the original information signal as a delayed signal.
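The delay mechanism amounts to a circular buffer whose read-out lags the write-in by the full buffer length; a minimal sketch, leaving out the compression and synthesis stages described above:

```python
class DelayLine:
    """Fixed delay implemented as a circular (ring) buffer in memory:
    each slot is read out one full cycle after it was written."""
    def __init__(self, delay_samples: int):
        self.buffer = [0] * delay_samples
        self.index = 0

    def process(self, sample):
        delayed = self.buffer[self.index]       # read the value stored one full
        self.buffer[self.index] = sample        # cycle ago, then store the new one
        self.index = (self.index + 1) % len(self.buffer)
        return delayed

line = DelayLine(delay_samples=4)
print([line.process(s) for s in [1, 2, 3, 4, 5, 6, 7, 8]])
# -> [0, 0, 0, 0, 1, 2, 3, 4]: output lags the input by exactly the buffer length
```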

Book ChapterDOI
01 Jan 1986
TL;DR: A novel approach to image data compression is proposed which uses a stochastic learning automaton to predict the conditional probability distribution of the adjacent pixels and these conditional probabilities are used to code the gray level values using a Huffman coder.
Abstract: A novel approach to image data compression is proposed which uses a stochastic learning automaton to predict the conditional probability distribution of the adjacent pixels. These conditional probabilities are used to code the gray level values using a Huffman coder. The system achieves a compression ratio of 4/1.7 (approximately 2.35:1). This performance is achieved without any degradation to the received image.

Patent
24 Oct 1986
TL;DR: In this paper, a multimode scrambling system for video signal transmission systems is described, which provides for baseband video scrambling controlled by a central originating computer facility, where the scrambling of each field of a video changes on a per field basis.
Abstract: A multimode scrambling system for video signal transmission systems is described. The system provides for baseband video scrambling controlled by a central originating computer facility. The scrambling of each field of the video signal changes on a per-field basis. The video signal is scrambled in several modes including vertical interval scrambling, alternate line inversion, bogus sync pulse generation, video compression and video offset techniques. A scrambling sequence, generated by a unique algorithm, is sent during the vertical interval to each system decoder. The algorithm is reordered for each of a plurality of fields of the video signal and the reordering position is identified by a unique synchronization pulse transmitted during the vertical interval of the video signal. Additional security measures are provided to inhibit a subscriber from avoiding a transmitted disable command, or from attempting to defeat the mechanical packaging security of the subscriber decoder.

Journal ArticleDOI
TL;DR: The feasibility of digital video recording for consumer applications has been investigated with a view to a high image quality with full PAL bandwidth and the use of standard V2000 mechanics, ferrite heads and chromium-dioxide tape.
Abstract: The feasibility of digital video recording for consumer applications has been investigated with a view to a high image quality with full PAL bandwidth. A dominant boundary condition was that there should be no loss of maximum playing-time as compared to the analog system. To meet this requirement, compression of the digitized video signal with DPCM and Hadamard Transform coding has been applied. Other requirements were the use of standard V2000 mechanics, ferrite heads and chromium-dioxide tape.

DOI
01 Jun 1986
TL;DR: From the tested range of 2-D and 3-D techniques, ‘relative address coding’ is most successful, being about 10% more efficient than its nearest rival, except for high-resolution pictures with little movement, where ‘block location coding’ of difference frames shows a small advantage.
Abstract: An analysis of two-level picture compression techniques applied to low-resolution moving images is reported. The object is to discover the most suitable technique for visual communication at low data rates using feature-extracted ‘cartoons’. Reversible facsimile compression techniques are reviewed, then results presented to demonstrate their relative performance. In general, three-dimensional techniques prove to be most efficient, since they exploit both spatial and temporal relationships in the picture. However, the lower compression of two-dimensional coding is balanced by its superior error recovery performance, and it is therefore recommended for very low-data-rate transmission over conventional telephone lines, where error rates are high. From the tested range of 2-D and 3-D techniques, ‘relative address coding’ is most successful, being about 10% more efficient than its nearest rival, except for high-resolution pictures with little movement, where ‘block location coding’ of difference frames shows a small advantage. Application of irreversible preprocessing improves the compression performance, but the gain is small, and requires additional processing power.

Journal ArticleDOI
31 Aug 1986
TL;DR: The differential compiler performs temporal-domain image data compression using frame replenishment coding on successive frames of animation stored in memory as bitmaps and saves only the differences, resulting in a significant reduction in storage requirements and allowing animation on general-purpose computers which would otherwise be too slow or have insufficient memory.
Abstract: A program for the real-time display of computer animation on a bit-mapped raster display is presented. The differential compiler performs temporal domain image data compression using frame replenishment coding on successive frames of animation stored in memory as bitmaps and saves only the differences. A small run-time interpreter then retrieves and displays the differences in real time to create the animated effect. This results in a significant reduction in storage requirements, and allows animation on general purpose computers which would otherwise be too slow or have insufficient memory. Frame creation is both device and method independent. An animation environment supports interactive editing capabilities, reconstructing any arbitrary desired frame for later modification. Frames can be added, modified, or deleted, and the animated sequence can be viewed at any point during the session. The compiler is automatically called as needed; its operation is transparent to the user. The compiler is described in detail, both in terms of data compression and the requirements of interactive animation editing.
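A minimal sketch of the frame-replenishment idea: compare successive bitmap frames, keep only the positions and new values of pixels that changed, and let a small run-time interpreter patch them in at playback. The pixel-level diff format here is illustrative; the compiler's actual encoding and editing support are not modeled:

```python
def frame_difference(prev_frame, next_frame):
    """Store only the pixels that changed between two successive frames."""
    return [(i, new) for i, (old, new) in enumerate(zip(prev_frame, next_frame))
            if old != new]

def apply_difference(frame, diff):
    """The run-time 'interpreter': patch the changed pixels into the current frame."""
    frame = list(frame)
    for i, value in diff:
        frame[i] = value
    return frame

frame0 = [0, 0, 0, 1, 1, 0, 0, 0]
frame1 = [0, 0, 1, 1, 0, 0, 0, 0]       # the shape moved one pixel to the left
diff = frame_difference(frame0, frame1)
print(diff)                              # [(2, 1), (4, 0)] -- two entries instead of a full frame
print(apply_difference(frame0, diff) == frame1)
```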

Proceedings ArticleDOI
K. Iinuma1, T. Koga1, K. Niwa1, Y. Iijima1
01 May 1986
TL;DR: Based upon the algorithms described in this paper a practical codec has been developed for videoconference use at sub-primary rate and provides good picture quality even at a 384 kb/s transmission bit rate.
Abstract: This paper describes an adaptive intra-interframe codec with motion-compensation followed by an entropy coding for prediction error signal as well as for motion vector information. This adaptive prediction is highly efficient even for very fast motion as well as scene change where motion compensation is ineffective. Prediction error and vector information are code-converted for transmission by means of an entropy coding where contiguous zero signal is run-length coded and non-zero signal is Huffman-coded. Based upon the algorithms described in this paper a practical codec has been developed for videoconference use at sub-primary rate. According to a brief subjective evaluation, the codec provides good picture quality even at a 384 kb/s transmission bit rate.