
Showing papers on "Entropy encoding published in 1994"


Journal ArticleDOI
01 Jan 1994
TL;DR: This work shows how arithmetic coding works and describes an efficient implementation that uses table lookup as a fast alternative to arithmetic operations; the implementation can be sped up further by parallel processing.
Abstract: Arithmetic coding provides an effective mechanism for removing redundancy in the encoding of data. We show how arithmetic coding works and describe an efficient implementation that uses table lookup as a fast alternative to arithmetic operations. The reduced-precision arithmetic has a provably negligible effect on the amount of compression achieved. We can speed up the implementation further by use of parallel processing. We discuss the role of probability models and how they provide probability information to the arithmetic coder. We conclude with perspectives on the comparative advantages and disadvantages of arithmetic coding.

266 citations
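
As a rough illustration of the interval-narrowing idea behind arithmetic coding (and not of the table-lookup, reduced-precision implementation the paper describes), the following Python sketch encodes and decodes a short message against a fixed probability model; the model, the symbols, and the use of floating point are assumptions made purely for clarity.

    # Toy arithmetic coder: narrows [low, high) for each symbol of a short
    # message under a fixed model. Floating point is used for clarity only;
    # a practical coder uses reduced-precision integer arithmetic.

    def cumulative(model):
        # Build a cumulative interval per symbol from a {symbol: prob} model.
        intervals, low = {}, 0.0
        for sym, p in model.items():
            intervals[sym] = (low, low + p)
            low += p
        return intervals

    def encode(message, model):
        intervals = cumulative(model)
        low, high = 0.0, 1.0
        for sym in message:
            lo, hi = intervals[sym]
            width = high - low
            low, high = low + width * lo, low + width * hi
        return (low + high) / 2          # any number inside the final interval

    def decode(code, length, model):
        intervals = cumulative(model)
        out = []
        for _ in range(length):
            for sym, (lo, hi) in intervals.items():
                if lo <= code < hi:
                    out.append(sym)
                    code = (code - lo) / (hi - lo)   # rescale and continue
                    break
        return out

    model = {"a": 0.6, "b": 0.3, "c": 0.1}           # assumed toy model
    msg = list("abacab")
    code = encode(msg, model)
    assert decode(code, len(msg), model) == msg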


Patent
29 Jun 1994
TL;DR: In this article, a data compression technique was described for use in systems employing frequency separation, such as wavelet separation, in which the sub bands (0 to 9) have different numbers of frequency component values therein.
Abstract: A data compression technique is described for use in systems employing frequency separation, such as wavelet separation, in which the sub bands (0 to 9) have different numbers of frequency component values therein. The sub bands are scanned to form a stream of data for feeding to an entropy encoder (30) in an order in which a number of samples from each sub band are taken in turn, that number being proportional to how many frequency values are in the particular sub band.

90 citations
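
A minimal sketch of the scanning order the patent describes, assuming three sub bands and a simple proportional chunking rule: samples are taken from each sub band in turn, the number taken being proportional to how many frequency values that sub band holds, to form one stream for the entropy encoder.

    # Interleave sub band samples into one stream, taking from each sub band
    # in turn a number of samples proportional to how many values it holds.

    def interleave(subbands):
        # subbands: list of lists of frequency component values (coarse to fine)
        smallest = min(len(b) for b in subbands)
        ratios = [len(b) // smallest for b in subbands]   # proportional share
        positions = [0] * len(subbands)
        stream = []
        while any(p < len(b) for p, b in zip(positions, subbands)):
            for i, band in enumerate(subbands):
                take = ratios[i]
                stream.extend(band[positions[i]:positions[i] + take])
                positions[i] += take
        return stream

    # Example: three sub bands with 4, 8 and 16 values (sizes are assumptions).
    bands = [list(range(4)), list(range(100, 108)), list(range(200, 216))]
    print(interleave(bands)[:14])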


PatentDOI
TL;DR: The stereophonic embodiment eliminates redundancies in the sum and difference signals, so that the stereo coding uses significantly less than twice the bit rate of the comparable monaural signal.
Abstract: A technique for the masking of quantizing noise in the coding of audio signals is usable with the types of channel coding known as "noiseless" or Huffman coding and with variable radix packing. In a multichannel environment, noise masking thresholds may be determined by combining sets of power spectra for each of the channels. The stereophonic embodiment eliminates redundancies in the sum and difference signals, so that the stereo coding uses significantly less than twice the bit rate of the comparable monaural signal. The technique can be used both in transmission of signals and in recording for reproduction, particularly recording and reproduction of music. Compatibility with the ISDN transmission rates known as 1B, 2B and 3B rates has been achieved.

80 citations
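
To make the sum and difference idea concrete, the sketch below replaces a left/right pair with its sum and difference signals and inverts the step exactly; the integer samples are assumed values, and the patent's actual coder of course works on masked, quantized spectra rather than raw samples.

    # Sum/difference (mid/side) transform for a stereo pair and its exact inverse.

    def to_sum_diff(left, right):
        s = [l + r for l, r in zip(left, right)]   # sum channel
        d = [l - r for l, r in zip(left, right)]   # difference channel
        return s, d

    def from_sum_diff(s, d):
        left = [(si + di) // 2 for si, di in zip(s, d)]
        right = [(si - di) // 2 for si, di in zip(s, d)]
        return left, right

    L = [100, 102, 99, 101]
    R = [98, 101, 99, 100]
    S, D = to_sum_diff(L, R)
    assert from_sum_diff(S, D) == (L, R)   # lossless round trip
    print(D)                               # small values, cheap to entropy code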


Journal ArticleDOI
TL;DR: A video coding method is proposed which is based upon fractal block coding, which utilizes a novel three-dimensional partitioning of input frames for which a number of efficient block-matching search methods can be used, and permits spatio-temporal splitting of the input blocks to improve overall encoding quality.
Abstract: A video coding method is proposed which is based upon fractal block coding. The method utilizes a novel three-dimensional partitioning of input frames for which a number of efficient block-matching search methods can be used, and permits spatio-temporal splitting of the input blocks to improve overall encoding quality. After describing the basic fractal block coding algorithm, the details of the proposed three-dimensional algorithm are presented along with encoding and decoding results from two standard video test sequences, representative of video-conferencing data. These results indicate that average compression rates ranging from 40 to 77 can be obtained with subjective reconstruction quality of video-conferencing quality. The results also indicate that, in order to meet the compression rates required for very low bit rate coding, it is necessary to employ additional techniques such as entropy encoding of the fractal transformation coefficients.

78 citations


Proceedings ArticleDOI
13 Nov 1994
TL;DR: This paper investigates the classification technique, applied to subband coding of images, as a way of exploiting the non-stationary nature of image subbands, and proposes a method for reducing the side rate which exploits the dependence between subbands as well as the within band dependence.
Abstract: This paper investigates the classification technique, applied to subband coding of images, as a way of exploiting the non-stationary nature of image subbands. An algorithm for maximizing the classification gain is presented. Each subband is optimally classified and the classification map is sent as side information. After optimum rate allocation, the classes are encoded using an arithmetic and trellis coded quantization (ACTCQ) system. We compare this approach with other approaches for classification proposed in the literature. We propose a method for reducing the side rate which exploits the dependence between subbands as well as the within-band dependence.

66 citations


Journal ArticleDOI
TL;DR: The hardware design of a high speed and memory efficient Huffman decoder, introduced in Hashemian (1993), is presented; the method is shown to be extremely efficient in its memory requirement and fast in searching for the desired symbols.
Abstract: The hardware design of a high speed and memory efficient Huffman decoder, introduced in Hashemian (1993), is presented. The algorithm developed is based on a specific Huffman tree structure using a code-bit clustering scheme. The method is shown to be extremely efficient in its memory requirement, and fast in searching for the desired symbols. For experimental video data with code-words extended up to 13 bits, the entire memory space needed is shown to be 122 words in size, compared with the 2^13 = 8192 words that would normally be required. The design of the decoder is carried out using the Si-gate CMOS process.

65 citations
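
For orientation, a bit-by-bit prefix-code decoder is sketched below with an assumed four-symbol codebook; the paper's contribution is the clustered tree layout and hardware search that shrink the lookup memory to 122 words, which this toy Python walk does not attempt to reproduce.

    # Decode a bit string against a prefix (Huffman) codebook by accumulating
    # bits until they match a codeword.

    CODEBOOK = {"0": "a", "10": "b", "110": "c", "111": "d"}   # assumed codes

    def huffman_decode(bits, codebook):
        out, current = [], ""
        for bit in bits:
            current += bit
            if current in codebook:          # reached a leaf of the code tree
                out.append(codebook[current])
                current = ""
        if current:
            raise ValueError("truncated code stream")
        return out

    print(huffman_decode("0101100111", CODEBOOK))   # ['a', 'b', 'c', 'a', 'd']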


Patent
Cheung Auyeung1
01 Dec 1994
TL;DR: In this paper, a method for adaptive entropy encoding/decoding of a plurality of quantised transform coefficients in a video/image compression system is presented, where a predetermined number of quantized transform coefficients are received in a predetermined order, giving a generally decreasing average power.
Abstract: The present invention is a method (100) and apparatus (300) for adaptive entropy encoding/decoding of a plurality of quantised transform coefficients in a video/image compression system. For encoding, first, a predetermined number of quantized transform coefficients are received in a predetermined order, giving a generally decreasing average power. Then the quantized transform coefficients are parsed into a plurality of coefficient groups. When the last coefficient group comprises all zero quantized coefficients, it is discarded. The coefficient groups are then converted into a plurality of parameter sets in the predetermined order. A current parameter set is obtained from the parameter sets in the reverse order of the predetermined order. A current entropy encoder is selected adaptively based on the previously selected entropy encoder and the previous parameter set. The current parameter set is encoded by the current entropy encoder to provide entropy encoded information bits.

38 citations
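
A sketch of the grouping step described above, under assumed values: coefficients arriving in decreasing-power order are parsed into fixed-size groups and a trailing all-zero group is discarded before the adaptive choice of entropy coder, which is not modelled here.

    # Parse quantized transform coefficients into groups and drop a trailing
    # all-zero group before entropy coding.

    def parse_into_groups(coeffs, group_size):
        groups = [coeffs[i:i + group_size] for i in range(0, len(coeffs), group_size)]
        if groups and all(c == 0 for c in groups[-1]):
            groups.pop()                     # last group is all zero: discard it
        return groups

    # 16 coefficients in decreasing-power order (values are assumptions).
    coeffs = [34, -12, 7, 5, 3, -2, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0]
    print(parse_into_groups(coeffs, 4))
    # -> [[34, -12, 7, 5], [3, -2, 1, 1], [0, 1, 0, 0]]  (final zero group dropped)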


Proceedings ArticleDOI
13 Nov 1994
TL;DR: The problem of efficient transmission of multiple streams of variable-length coded data is solved by a unique coded data interleave method that is generalizable to any lossless or lossy system with deterministic decompression.
Abstract: Efforts to build high-speed hardware for many different entropy coders are limited by fundamental feedback loops. A method that allows for parallel compression in hardware is described. This parallelism results in extremely high rates, 100 million symbols/second or higher. The system is generalizable to any lossless or lossy system with deterministic decompression. Prototype hardware that divides the data into multiple streams that feed parallel coders is presented. The problem of efficient transmission of multiple streams of variable-length coded data is solved by a unique coded data interleave method.

37 citations
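
A rough software analogue of the parallelism described above, with zlib standing in for the entropy coder and a round-robin split as an assumed division rule; the paper's interleaving of the variable-length outputs for transmission is the hard part and is not reproduced here.

    import zlib

    # Split data into K independent streams so that K coders can run in parallel;
    # each stream is compressed on its own and all are decompressed and re-merged.

    def split_streams(data, k):
        return [data[i::k] for i in range(k)]            # round-robin division

    def merge_streams(streams):
        total = sum(len(s) for s in streams)
        out = bytearray(total)
        for i, s in enumerate(streams):
            out[i::len(streams)] = s
        return bytes(out)

    data = b"an example byte stream for parallel entropy coding" * 100
    parts = split_streams(data, 4)
    coded = [zlib.compress(p) for p in parts]            # could run concurrently
    decoded = [zlib.decompress(c) for c in coded]
    assert merge_streams(decoded) == data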


Journal ArticleDOI
TL;DR: A new technique for coding gray-scale images for facsimile transmission and printing on a laser printer using a perceptually based subband coding approach that uses a perceptual masking model that was empirically derived for printed images using a specific printer and halftoning technique.
Abstract: The authors present a new technique for coding gray-scale images for facsimile transmission and printing on a laser printer. They use a gray-scale image encoder so that it is only at the receiver that the image is converted to a binary pattern and printed. The conventional approach is to transmit the image in halftoned form, using entropy coding (e.g., CCITT Group 3 or JBIG). The main advantages of the new approach are that one can get higher compression rates and that the receiver can tune the halftoning process to the particular printer. They use a perceptually based subband coding approach. It uses a perceptual masking model that was empirically derived for printed images using a specific printer and halftoning technique. In particular, they used a 300 dots/inch write-black laser printer and a standard halftoning scheme ("classical") for that resolution. For nearly transparent coding of gray-scale images, the proposed technique requires lower rates than the standard facsimile techniques.

34 citations


Journal ArticleDOI
TL;DR: A segmentation algorithm based on morphological operators is applied to code the motion compensated prediction error images, or displaced frame differences (DFD); this approach is more efficient than transform based coding.

32 citations


Proceedings ArticleDOI
03 Aug 1994
TL;DR: The first stage of a two stage lossless data compression algorithm consists of a lossless adaptive predictor and the second stage employs arithmetic coding.
Abstract: This paper describes the first stage of a two stage lossless data compression algorithm. The first stage consists of a lossless adaptive predictor. The term lossless implies that the original data can be recovered exactly. The second stage employs arithmetic coding. Results are presented for a seismic data base.
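
A minimal sketch of a first stage of this kind, assuming a one-tap sign-LMS predictor rather than the authors' predictor: the integer residuals can be inverted exactly, so they are what the second-stage arithmetic coder would see.

    # Lossless adaptive prediction: encoder and decoder run the same predictor
    # on previously reconstructed samples, so transmitting the integer residuals
    # allows exact recovery of the input.

    def adaptive_residuals(samples, step=0.01):
        w, prev, residuals = 0.0, 0, []
        for x in samples:
            pred = int(round(w * prev))
            e = x - pred                     # integer residual to entropy code
            residuals.append(e)
            w += step * (1 if e > 0 else -1 if e < 0 else 0) * prev   # sign-LMS
            prev = x
        return residuals

    def reconstruct(residuals, step=0.01):
        w, prev, out = 0.0, 0, []
        for e in residuals:
            pred = int(round(w * prev))
            x = pred + e
            out.append(x)
            w += step * (1 if e > 0 else -1 if e < 0 else 0) * prev
            prev = x
        return out

    data = [10, 12, 13, 15, 14, 16, 18, 17, 19, 21]    # assumed test samples
    assert reconstruct(adaptive_residuals(data)) == data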

Proceedings ArticleDOI
19 Apr 1994
TL;DR: A previously introduced model circumvents some of the dimensionality-related difficulties while maintaining accuracy sufficient to account for much of the high-order, nonlinear statistical interdependence of samples.
Abstract: The performance of a statistical signal processing system is determined in large part by the accuracy of the probabilistic model it employs. Accurate modeling often requires working in several dimensions, but doing so can introduce dimensionality-related difficulties. A previously introduced model circumvents some of these difficulties while maintaining accuracy sufficient to account for much of the high-order, nonlinear statistical interdependence of samples. Properties of this model are reviewed, and its power demonstrated by application to image restoration and compression. Also described is a vector quantization (VQ) scheme which employs the model in entropy coding a Z^N lattice. The scheme has the advantage over standard VQ of bounding maximum instantaneous errors.

Proceedings ArticleDOI
29 Mar 1994
TL;DR: While the algorithm presented herein is designed to be used as a post-compressor in a lossy audio transform coding system, it is well suited for any instance where non-stationary source outputs must be compressed.
Abstract: A new adaptive algorithm for lossless compression of digital audio is presented. The algorithm is derived from ideas from both dictionary coding and source-modeling. An adaptive Lempel-Ziv (1977) style fixed dictionary coder is used to build a source model that fuels an arithmetic coder. As a result, variable length strings drawn from the source alphabet are mapped onto variable length strings that are on average shorter. The authors show that this algorithm outperforms arithmetic coding or Lempel-Ziv coding working alone on the same source (in their experiments the source is an ADPCM quantizer). Adaptation heuristics for the Lempel-Ziv coder, relevant data structures, and a discussion of audio source modeling (entropy estimation) experiments are described. While the algorithm presented herein is designed to be used as a post-compressor in a lossy audio transform coding system, it is well suited for any instance where non-stationary source outputs must be compressed.

Book
25 Oct 1994
TL;DR: This book introduces ATM networks and TV/HDTV coding algorithms, develops bit rate models and coding control algorithms for variable bit rate video, and analyzes statistical multiplexing, the ATM adaptation layer, and transmission of coded video over ATM networks.
Abstract: Preface. Contents. List of Figures. List of Tables. Introduction to ATM Networks. Introduction. Data Communications on Networks. Broad-band ISDN. Potential User Services. ATM-OSI Layers. Cell Structure. ATM Switching Techniques. ATM Multiplexers. ATM Network Architecture. Traffic Models. Traffic Descriptors. ATM Network Management. Conclusions. Bibliography. ATM Terminology. TV and HDTV Coding Algorithms. Introduction. Sampling Theorem and Spectra. Theoretical Introduction to Coding. Techniques of Signal Decomposition. Lapped Orthogonal Transforms. Wavelet Transform and Multiresolution. Quantization. Variable-Length Entropy Coding. Quantizer Rate-Distortion Function. Extensions of the OSI Layering. TV Encoding Architectures. Conclusions. Bibliography. Bit Rate Models. Introduction. Experiment with Actual Time Digital TV. Complete Model Description. Statistical Modeling Approach. Statistics at the Programme Layer. Statistics on other Coders. Conclusions. Bibliography. Markov Modulated Poisson Processes. Video Signal Models. Auto-regressive Estimation of DNSPP. Detection of Scene Changes. Coding Control Algorithms. Introduction. Loop with Linear Feedback Response. Regulators in the Feedback Chain. Optimum Control Algorithm. Regulators against Controllers. The Use of Neural Networks. Conclusions. Bibliography. Complements on the Non-Linearities. Statistical Multiplexing. Introduction. Superposition of ATM Traffics. Queueing Models for ATM Multiplexers. Performances of ATM Multiplexers. Conclusions. Bibliography. Properties of Renewal Processes. Autocovariance of Counting Processes. ATM Adaptation Layer. Introduction. Cell Losses and Bit Errors. Synchronization of Decoders. Conclusions. Bibliography. Transmission on ATM Networks. Introduction. Principle of Queueing Networks. Allocation of Network Resources. Enforcement Actions. Switching Operations on Traffics. Input-Output Relations in Multiplexers. Congestion Control Strategies. Conclusions. Bibliography. Glossary. Abbreviations. Index.

Patent
14 Oct 1994
TL;DR: In this paper, a data encoding system for encoding input color pixel data and outputting encoded data is presented. It includes a reference pixel generating device which outputs reference pixel data corresponding to the input color pixel data to be encoded; a predictor having a color order table which sets the color ranks of color codes for every reference pixel pattern and which reads and outputs the color rank of the corresponding color code from the color order table on the basis of the color pixel data and its reference pixel data; and an entropy encoding device which converts the color rank data into encoded data.
Abstract: A data encoding system for encoding input color pixel data and outputting encoded data. The data encoding system includes a reference pixel generating device which outputs reference pixel data corresponding to the input color pixel data to be encoded; a predictor having a color order table which sets the color ranks of color codes for every reference pixel pattern, reads and outputs the color rank of the corresponding color code from the color order table on the basis of the color pixel data to be encoded and its reference pixel data; and an entropy encoding device which converts the color ranking data into encoded data and outputs the encoded data.
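
A toy version of the ranking idea, assuming a context of the left and upper neighbours and a move-to-front update: for each reference pixel pattern a table orders the colours, the rank of the actual colour is emitted, and the table is updated so frequent colours stay near rank zero for the entropy encoder.

    # Context-based colour ranking: emit the rank of each pixel's colour within
    # a per-context ordering, updating the ordering move-to-front style.

    from collections import defaultdict

    def colour_ranks(image, palette):
        tables = defaultdict(lambda: list(palette))   # one colour order per context
        h, w = len(image), len(image[0])
        ranks = []
        for y in range(h):
            for x in range(w):
                left = image[y][x - 1] if x > 0 else 0
                up = image[y - 1][x] if y > 0 else 0
                table = tables[(left, up)]            # reference pixel pattern
                colour = image[y][x]
                rank = table.index(colour)
                ranks.append(rank)
                table.insert(0, table.pop(rank))      # move observed colour to front
        return ranks

    img = [[1, 1, 2, 2],
           [1, 1, 2, 2],
           [3, 3, 2, 2]]
    print(colour_ranks(img, palette=[0, 1, 2, 3]))    # mostly small ranks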

Journal ArticleDOI
Ralf Steinmetz1
TL;DR: Details are outlined about the techniques developed by CCITT (H.261, i.e., px64) and in the ISO/IEC (JPEG, MPEG) standardization bodies and the proprietary DVI system, and the essential requirements for these techniques in the scope of multimedia systems and applications are stated.
Abstract: Integrated multimedia systems process text, graphics, and other discrete media as well as digital audio and video data. Considerable amounts of graphics, audio and video data in their uncompressed form, especially moving pictures, require storage and digital network capacities that will not be available in the near future. Nevertheless, local, as well as networked, multimedia applications and systems have become realities. In order to cope with these storage and communication requirements in such integrated multimedia systems, compression technology is essential. This paper starts with a brief motivation of the need for compression and subsequently states the essential requirements for these techniques in the scope of multimedia systems and applications. As most of these techniques apply the same principles, namely, the source, entropy, and hybrid coding fundamentals, these are explained in detail. Based on a general framework of the steps encountered in a compression system -- data preparation, processing, quantization, and entropy coding -- this paper outlines details about the techniques developed by CCITT (H.261, i.e., px64) and in the ISO/IEC (JPEG, MPEG) standardization bodies, and the proprietary DVI system.

Patent
27 Jun 1994
TL;DR: In this paper, an encoding apparatus is provided, consisting of a time series sample buffer for dividing an input signal into blocks, an orthogonal transform encoding section for transforming the signals of every block into spectrum signals, and an entropy encoding section.
Abstract: An encoding apparatus for encoding a digital signal and a decoding apparatus for decoding the encoded signal. This encoding apparatus comprises a time series sample buffer for dividing an input signal into blocks, an orthogonal transform encoding section for transforming the signals of every block into spectrum signals, and an entropy encoding section for applying variable length encoding to all or a portion of the spectrum signals of every block. At the entropy encoding section, there are provided an upper limit setting circuit for setting an upper limit on the number of bits per block of the encoded and outputted signal, and a bit number judging circuit for stopping the output of a portion of the spectrum signals in any block where more bits than the upper limit would be required. The number of bits per block is thus bounded, so that recording or transmission of a portion of the spectrum signals is stopped in any block exceeding the limit. In this way, the quantity of information that undergoes encoding, recording or transmission and decoding can be reduced.

Proceedings ArticleDOI
A.D. Wyner1
27 Jun 1994
TL;DR: In this paper, the author applies pattern matching results to three problems in information theory and also discusses the characterisation of a probability law.
Abstract: The author applies pattern matching results to three problems in information theory. The characterisation of a probability law is also discussed.

Journal ArticleDOI
TL;DR: In the implementation of Peanoscanning, tested on seven natural images, Peano-differential coding with an entropy coder gave the best results of reversible compression from 8 bits/pixel to about 5 bits/pixel, which was better than predictive coding of equivalent raster-scanned data.
Abstract: Peanoscanning was used to obtain the pixels from an image by following a scan path described by a space-filling curve, the Peano-Hilbert curve. The Peanoscanned data were then compressed without loss of information by direct Huffman, arithmetic, and Lempel-Ziv-Welch coding, as well as predictive and transform coding. In our implementation, tested on seven natural images, Peano-differential coding with an entropy coder gave the best results of reversible compression from 8 bits/pixel to about 5 bits/pixel, which was better than predictive coding of equivalent raster-scanned data. An efficient implementation of the Peanoscanning operation based on the symmetry exhibited by the Peano-Hilbert curve is also suggested.
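
A small sketch of Peano-differential coding: pixels are read along the Peano-Hilbert curve and consecutive differences are taken, leaving a low-entropy residual stream for the entropy coder. The index-to-coordinate routine is the standard iterative Hilbert conversion and the 4x4 ramp image is an assumed example; neither is the authors' implementation.

    # Scan a 2^k x 2^k image along the Peano-Hilbert curve and difference
    # consecutive pixels; smooth images give small differences that an
    # entropy coder compresses well.

    def hilbert_d2xy(n, d):
        # Convert curve index d into (x, y) on an n x n grid (n a power of two).
        x = y = 0
        t = d
        s = 1
        while s < n:
            rx = 1 & (t // 2)
            ry = 1 & (t ^ rx)
            if ry == 0:                      # rotate quadrant if needed
                if rx == 1:
                    x, y = s - 1 - x, s - 1 - y
                x, y = y, x
            x += s * rx
            y += s * ry
            t //= 4
            s *= 2
        return x, y

    def peano_differential(image):
        n = len(image)
        scan = [image[y][x] for x, y in (hilbert_d2xy(n, d) for d in range(n * n))]
        return [scan[0]] + [b - a for a, b in zip(scan, scan[1:])]

    img = [[i + j for j in range(4)] for i in range(4)]   # assumed smooth 4x4 ramp
    print(peano_differential(img))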

Proceedings ArticleDOI
13 Nov 1994
TL;DR: This work proposes an optimal bit allocation strategy for PCVQ through the explicit incorporation of an entropy constraint within the product code framework, and proposes an iterative, locally optimal encoding strategy to improve performance over greedy encoding at a small cost in complexity.
Abstract: While product code VQ is an effective paradigm for reducing the encoding search and memory requirements of vector quantization, a significant limitation of this approach is the heuristic nature of bit allocation among the product code features. We propose an optimal bit allocation strategy for PCVQ through the explicit incorporation of an entropy constraint within the product code framework. Unrestricted entropy-constrained PCVQs require joint entropy codes over all features and concomitant encoding and memory storage complexity. To retain manageable complexity, we propose "product-based" entropy code structures, including independent and conditional feature entropy codes. We also propose an iterative, locally optimal encoding strategy to improve performance over greedy encoding at a small cost in complexity. This approach is applicable to a large class of product code schemes, allowing joint entropy coding of feature indices without exhaustive encoding. Simulations demonstrate performance gains for image coding based on the mean-gain-shape product code structure.

Proceedings ArticleDOI
16 Sep 1994
TL;DR: This study develops an image compression algorithm based on a weak membrane model of the image which allows it to determine edge contours, represented as line processes, by minimizing a nonconvex energy functional associated with a membrane, and to reconstruct the original image by using the same model.
Abstract: Object boundaries detected by edge detection algorithms provide a rich, meaningful and sparse description of an image. In this study, we develop an image compression algorithm based on such a sparse description, which is obtained by using a weak membrane model of the image. In this approach, the image is modelled as a collection of smooth regions separated by edge contours. This model allows us to determine edge contours, represented as line processes, by minimizing a nonconvex energy functional associated with a membrane, and to reconstruct the original image by using the same model. Thus, unlike previous work, where edges are first obtained by a convolution-based edge detection algorithm and the surface is then reconstructed by a completely different process such as interpolation, in our approach the same process is used both for detecting edges and for reconstructing surfaces from them. We coded the line processes by using run length coding and the sparse data around line processes by using entropy coding. We evaluate the performance of the algorithm qualitatively and quantitatively on various synthetic and real images, and show that good quality images can be obtained for moderate compression ratios such as 5:1, while this ratio may reach up to 20:1 for some images.
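
The line-process (edge) map in this scheme is binary and mostly zero, so run length coding suits it well; a generic run-length encoder/decoder for one row of such a map is sketched below with assumed data, as an illustration rather than the authors' exact coder.

    # Run-length code a binary line-process map: store the starting value and
    # the lengths of alternating runs.

    def rle_encode(bits):
        if not bits:
            return (0, [])
        runs, current, count = [], bits[0], 0
        for b in bits:
            if b == current:
                count += 1
            else:
                runs.append(count)
                current, count = b, 1
        runs.append(count)
        return (bits[0], runs)

    def rle_decode(first, runs):
        out, value = [], first
        for length in runs:
            out.extend([value] * length)
            value = 1 - value
        return out

    line_process = [0, 0, 0, 1, 1, 0, 0, 0, 0, 1, 0, 0]   # assumed edge map row
    first, runs = rle_encode(line_process)
    assert rle_decode(first, runs) == line_process
    print(runs)   # [3, 2, 4, 1, 2]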

Proceedings ArticleDOI
16 Sep 1994
TL;DR: Image compression methods for progressive transmission using optimal subband/wavelet decomposition, partition priority coding (PPC) and multiple distribution entropy coding (MDEC) are presented.
Abstract: Image compression methods for progressive transmission using optimal subband/wavelet decomposition, partition priority coding (PPC) and multiple distribution entropy coding (MDEC) are presented. In the proposed coder, hierarchical wavelet decomposition of the original image is achieved using wavelets generated by IIR minimum variance filters. The smoothed subband coefficients are coded by an efficient triple state DPCM coder and the corresponding prediction error is Lloyd-Max quantized. The detail coefficients are coded using a novel hierarchical PPC (HPPC) approach. That is, given a suitable partitioning of their absolute range, the detail coefficients are ordered based on their decomposition level and magnitude, and the address map is appropriately coded. Finally, adaptive MDEC is applied to both the DPCM and HPPC outputs by considering a division of the source of the quantized coefficients into multiple subsources and adaptive arithmetic coding based on their corresponding histograms.

Proceedings ArticleDOI
19 Apr 1994
TL;DR: This work proposes a general image-independent coding scheme which is applied to the coding of lattice quantized wavelet coefficient vectors and demonstrates that the quantizing and coding complexity can be reduced through reducing the dimension of the wavelet vectors by means of principal components analysis and a perceptually based nonlinearity.
Abstract: Lattice vector quantization has recently attracted some interest as an alternative to full-search VQ for signal and image coding problems. It is considerably more computationally efficient, and it avoids the difficult codebook design problem. Furthermore, it has been noted that the optimal high bit rate entropy constrained vector quantizer will approximate a lattice. Indeed for lattice VQ to be competitive, the quantized vectors should be entropy coded. This has traditionally been performed on a per image basis, a complex and inefficient process. We propose a general image-independent coding scheme which we apply to the coding of lattice quantized wavelet coefficient vectors. We also demonstrate that the quantizing and coding complexity can be reduced through reducing the dimension of the wavelet vectors by means of principal components analysis and a perceptually based nonlinearity.

Proceedings ArticleDOI
13 Nov 1994
TL;DR: Experimental results show that the new method outperforms standard entropy-constrained residual vector quantization while also requiring lower encoding complexity and memory requirements.
Abstract: An extension of entropy-constrained residual vector quantization is presented where inter-vector dependencies are exploited. The method, which the authors call conditional entropy-constrained residual vector quantization, employs a high-order entropy conditioning strategy that captures local information in the neighboring vectors. The complexity of the proposed design algorithm is relatively low, due mainly to the efficiency of the multistage structure of the residual vector quantizer, but also to the effectiveness of the searching techniques used to locate the best conditioning spatial-stage region of support. Experimental results show that the new method outperforms standard entropy-constrained residual vector quantization while also requiring lower encoding complexity and memory requirements.

Proceedings ArticleDOI
03 Aug 1994
TL;DR: The fundamentals of lossless signal coding are introduced, and a wide variety of decorrelation and entropy coding techniques are discussed.
Abstract: Lossless compression of signals is of interest in a wide variety of fields such as geophysics, telemetry, nondestructive evaluation and medical imaging, where vast amounts of data must be transmitted or stored, and exact recovery of the original data is required. Nearly all lossless signal coding techniques consist of a decorrelation stage followed by an entropy coding stage. In this paper, fundamentals of lossless signal coding are introduced, and a wide variety of decorrelation and entropy coding techniques are discussed.
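
The decorrelation-then-entropy-coding structure can be illustrated by measuring how a simple first-difference decorrelator lowers the empirical zeroth-order entropy of a signal; the sinusoidal test signal and the differencing predictor are assumptions, and real systems use stronger decorrelators and true entropy coders.

    import math
    from collections import Counter

    # Decorrelate by first differencing and compare the empirical zeroth-order
    # entropy (bits/sample) before and after: the entropy coding stage then
    # spends roughly that many bits per sample.

    def entropy(samples):
        counts = Counter(samples)
        n = len(samples)
        return -sum(c / n * math.log2(c / n) for c in counts.values())

    def decorrelate(samples):
        return [samples[0]] + [b - a for a, b in zip(samples, samples[1:])]

    signal = [int(50 + 20 * math.sin(i / 5)) for i in range(1000)]  # assumed source
    print("raw entropy      :", round(entropy(signal), 2), "bits/sample")
    print("residual entropy :", round(entropy(decorrelate(signal)), 2), "bits/sample")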

Proceedings ArticleDOI
13 Nov 1994
TL;DR: A new approach is presented, called binary-decomposed (BD) high-order entropy coding, that significantly reduces the complexity of the implementation and increases the accuracy in estimating the statistical model.
Abstract: Information theory indicates that coding efficiency can be improved by utilizing high-order entropy coding (HOEC). However, serious implementation difficulties limit the practical value of HOEC for grayscale image compression. We present a new approach, called binary-decomposed (BD) high-order entropy coding, that significantly reduces the complexity of the implementation and increases the accuracy in estimating the statistical model. In this approach a grayscale image is first decomposed into a group of binary sub-images, each corresponding to one of the gray levels. When HOEC is applied to these sub-images instead of the original image, the subsequent coding is made simpler and more accurate statistically. We apply this coding technique in lossless compression of medical images and imaging data, and demonstrate that the performance advantage of this approach is significant.
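
A tiny sketch of the decomposition step only, on an assumed 3x3 image: the grayscale image is split into one binary sub-image per gray level, each of which could then be handed to a high-order, context-based binary entropy coder (not shown).

    # Decompose a grayscale image into binary sub-images, one per gray level;
    # pixel (y, x) of sub-image g is 1 exactly where the original pixel equals g.

    def binary_decompose(image):
        levels = sorted({p for row in image for p in row})
        return {g: [[1 if p == g else 0 for p in row] for row in image]
                for g in levels}

    img = [[0, 0, 1],
           [2, 1, 1],
           [2, 2, 0]]
    for level, sub in binary_decompose(img).items():
        print(level, sub)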

Journal ArticleDOI
01 Sep 1994-Fractals
TL;DR: Using a first order (bilinear) Bath Fractal Transform (BFT), useful video sequences of talking heads with transmission rates as low as 40 KBits/sec are obtained using standard QCIF frames.
Abstract: Using a first order (bilinear) Bath Fractal Transform (BFT), we have obtained useful video sequences of talking heads with transmission rates as low as 40 KBits/sec. Previously, fractal image coding had been computationally asymmetric. In our work, 8 by 8 pixel blocks are coded in 250 µs and decoded in 200 µs on a 33 MHz i-486 based PC. This is of significance in consumer electronics, such as personal communications, where inexpensive coding systems will have an advantage over more expensive methods using DSP or custom chips to achieve the necessary speed. With a simple quantization and entropy coding scheme applied to standard QCIF frames, at 40 KBits/sec we achieve coding of 40% of each frame in a 25 Hz video sequence, equivalent to 100% at 10 Hz.

Patent
Yasushi Ooi1
28 Oct 1994
TL;DR: In this article, a data processing system for picture coding is presented, including a data memory for storing discrete cosine transform (DCT) coefficient data successively transferred one after another.
Abstract: A data processing system for picture coding includes a data memory for storing discrete cosine transform (DCT) coefficient data successively transferred one after another, a flipflop set prior to the successive transfer of the DCT coefficient data, and a non-zero detector for detecting non-zero data when the DCT coefficient data is successively transferred. When non-zero data is detected, the non-zero detector resets the flipflop. When the successive transfer of the DCT coefficient data has been completed, an entropy coding central processing unit (CPU) discriminates, on the basis of the condition of the flipflop, whether or not all of the data stored in the data memory is zero, so that if the condition of the flipflop indicates that all of the data stored in the data memory is zero, the entropy coding CPU does not read the data memory.
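
A software analogue of the flip-flop logic, with an assumed 64-coefficient block layout: a flag is set before a block of DCT coefficients is transferred and cleared as soon as a non-zero value is seen, so the entropy-coding stage can skip reading blocks that are entirely zero.

    # Track whether an entire block of DCT coefficients is zero while it is
    # being transferred, so the entropy coder can skip all-zero blocks.

    def transfer_block(coefficients):
        memory = []
        all_zero = True                 # "flip-flop" set before the transfer
        for c in coefficients:
            memory.append(c)
            if c != 0:
                all_zero = False        # non-zero detector resets the flip-flop
        return memory, all_zero

    for block in ([0] * 64, [0] * 10 + [3] + [0] * 53):
        memory, all_zero = transfer_block(block)
        if all_zero:
            print("skip: block is all zero, entropy coder does not read memory")
        else:
            print("entropy code", sum(1 for c in memory if c), "non-zero coefficients")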

Patent
14 Feb 1994
TL;DR: In this article, a method of lossless data compression for efficient coding of an electronic signal of information sources of very low information rate is disclosed, which allows direct coding and decoding of the n-bit positive integer binary digital data differences without the use of codebooks.
Abstract: A method of lossless data compression for efficient coding of an electronic signal of information sources of very low information rate is disclosed. In this method, S represents a non-negative source symbol set {s_0, s_1, s_2, ..., s_(N-1)} of N symbols with s_i = i. The difference between binary digital data is mapped into symbol set S. Consecutive symbols in symbol set S are then paired into a new symbol set Γ, which defines a non-negative symbol set containing the symbols {γ_m} obtained as the extension of the original symbol set S. These pairs are then mapped into a comma code, which is defined as a coding scheme in which every codeword is terminated with the same comma pattern, such as a 1. This allows direct coding and decoding of the n-bit positive integer binary digital data differences without the use of codebooks.
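
A short sketch of the comma code itself, applied to assumed difference values: each non-negative integer n is written as n zeros followed by the terminating 1, so the stream decodes directly with no codebook; the patent's pairing of consecutive symbols before coding is omitted here.

    # Comma (unary) code: a non-negative integer n becomes n zeros followed by
    # the comma bit '1', so the decoder needs no codebook, only the comma.

    def comma_encode(values):
        return "".join("0" * v + "1" for v in values)

    def comma_decode(bits):
        values, run = [], 0
        for b in bits:
            if b == "0":
                run += 1
            else:                      # comma reached: emit the run length
                values.append(run)
                run = 0
        return values

    diffs = [0, 1, 0, 3, 2, 0, 0, 1]   # assumed mapped data differences
    bits = comma_encode(diffs)
    assert comma_decode(bits) == diffs
    print(bits)                        # 101100010011101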

Patent
08 Jun 1994
TL;DR: In this article, the authors proposed to realize efficient image communication by extracting the only part corresponding to a partial image, reconstituting encoded data into partial encoded data and transmitting the data to an image receiver.
Abstract: PURPOSE: To realize efficient image communication by extracting the only part corresponding to a partial image, reconstituting encoded data into partial encoded data and transmitting the data to an image receiver. CONSTITUTION: The quantization circuit 13 of an encoding part 2A performs linear quantization for each coefficient, delivers the quantized coefficient to an entropy encoding circuit 14, generates the DC coefficient data of each quantized block and outputs the data to a DC coefficient table 15. A code reconstitution transmission part 17 segments necessary partial images from the encoded data of the whole of images and transmits the partical images to each reception terminal. A location/size reception circuit 18 receives the data of the locations and sizes of the necessary partial images from the reception terminal from the outside. The received location/size data is transmitted to a partial block extraction circuit 19 and a DC entropy encoding circuit 20. Further, the data of reception opposity party is imparted to a transmission circuit 22. COPYRIGHT: (C)1995,JPO