
Showing papers on "Entropy encoding published in 1983"


Proceedings ArticleDOI
01 Apr 1983
TL;DR: This realization includes several heretofore uncombined and novel approaches to the imagery compression problem, including 2-D lattice filter prediction, adaptive quantization, and entropy coding.
Abstract: Extremely efficient compression algorithms can be devised for digital imagery data via an extension of the linear predictive coding methods utilized extensively in speech processing. When appropriately employed, these methods introduce minimal distortion. Such realizations can operate in real time, completely in the spatial domain, and are capable of reducing imagery storage requirements by an order of magnitude. This paper describes such an extension of linear predictive coding techniques for imagery data compression applications both theoretically and experimentally. This realization includes several heretofore uncombined and novel approaches to the imagery compression problem, including 2-D lattice filter prediction, adaptive quantization, and entropy coding. System implementation shows proof of feasibility and favorable performance in comparison with alternative transform techniques using the standard Minimum Mean Square Error (MMSE) fidelity criterion.
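
As a rough illustration of the predict-quantize-entropy-code pipeline this abstract describes, here is a minimal sketch in Python. It substitutes a fixed causal planar predictor for the paper's 2-D lattice filter, and the quantizer step, synthetic image, and entropy estimate are illustrative assumptions rather than the authors' design.

```python
import numpy as np

def predict_residual(img, step=4):
    """Causal 2-D prediction followed by uniform quantization of the residual."""
    img = img.astype(np.float64)
    pred = np.zeros_like(img)
    # Planar predictor from the west, north, and north-west neighbours.
    pred[1:, 1:] = img[1:, :-1] + img[:-1, 1:] - img[:-1, :-1]
    return np.round((img - pred) / step).astype(np.int64)

def entropy_bits_per_pixel(symbols):
    """First-order entropy of the quantized residual, in bits/pixel."""
    _, counts = np.unique(symbols, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(0)
# Smooth synthetic "image": prediction should drive residuals toward zero.
img = np.cumsum(np.cumsum(rng.normal(size=(64, 64)), axis=0), axis=1)
q = predict_residual(img)
print(f"residual entropy: {entropy_bits_per_pixel(q):.2f} bits/pixel")
```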

5 citations


Journal ArticleDOI
TL;DR: A hybrid differential pulse-code-modulation (DPCM) system is presented to demonstrate the application of fuzzy set theory in data compression.
Abstract: A hybrid differential pulse-code-modulation (DPCM) system is presented to demonstrate the application of fuzzy set theory in data compression. The system applies entropy coding, a Laplacian quantiser, and fuzzy enhancement operations to the DPCM signal. A saving of about 0.5 bit is obtained when a DPCM signal is quantised to 32 levels and then Huffman-coded.
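
To make the DPCM-plus-Huffman part of this scheme concrete, here is a minimal sketch assuming a first-order DPCM loop, a 32-level uniform quantizer, and a standard heap-based Huffman coder; the fuzzy enhancement step is omitted, and the slowly wandering test signal is an illustrative assumption.

```python
import heapq
from collections import Counter

import numpy as np

def dpcm_quantize(x, levels=32, step=0.25):
    """First-order DPCM loop: quantize the prediction error and track the
    decoder-side reconstruction so quantization error does not accumulate."""
    half = levels // 2
    recon, out = 0.0, []
    for s in x:
        e = s - recon                                  # prediction error
        q = int(np.clip(round(e / step), -half, half - 1))
        out.append(q)
        recon += q * step                              # decoded value
    return out

def huffman_lengths(symbols):
    """Per-symbol Huffman code lengths, built with a binary heap."""
    heap = [(n, i, {s: 0}) for i, (s, n) in enumerate(Counter(symbols).items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        n1, _, d1 = heapq.heappop(heap)
        n2, _, d2 = heapq.heappop(heap)
        merged = {s: d + 1 for s, d in {**d1, **d2}.items()}  # one level deeper
        heapq.heappush(heap, (n1 + n2, tie, merged))
        tie += 1
    return heap[0][2]

rng = np.random.default_rng(1)
sig = np.cumsum(rng.normal(scale=0.5, size=5000))      # slowly wandering signal
syms = dpcm_quantize(sig)
lengths = huffman_lengths(syms)
avg = sum(lengths[s] for s in syms) / len(syms)
print(f"fixed-rate: 5.00 bits/sample, Huffman: {avg:.2f} bits/sample")
```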

5 citations


Journal ArticleDOI
P. Bocci1, J. LoCicero
TL;DR: Using a maximum likelihood predictor in tandem with run-length and Huffman coding, bit rate reductions of 11-25 percent are achieved for the CVSD rates considered; the buffer requirements needed to support these entropy coders are also studied.
Abstract: We present the results of a study to reduce the bit rate of speech that has been digitized with a continuously variable slope delta modulator (CVSD) operating at 16, 24, and 32 kbits/s. The theoretical reduction is found from the bit stream entropy. The actual reduction, via Huffman coding, is within 1-2 percent of the theoretical value. The conditional entropy indicates that additional bit rate reduction can be achieved if we use a set of Huffman codes, conditioned on the past CVSD bits. A third technique, tandem coding, using a maximum likelihood predictor in tandem with run-length and Huffman coding, is also investigated. Using these entropy techniques, bit rate reductions of 11-25 percent are achieved for the CVSD rates considered. The paper concludes with a study of the buffer requirements needed to support these entropy coders.
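
The entropy measurements this abstract relies on are easy to reproduce in miniature. The sketch below estimates the first-order entropy of a 1-bit delta-modulator stream and its conditional entropy given the previous k bits; the toy delta modulator and sine input are illustrative assumptions, not the paper's CVSD coder.

```python
import numpy as np

def delta_modulate(x, step=0.05):
    """Toy 1-bit delta modulator: emit the sign of the tracking error."""
    recon, bits = 0.0, []
    for s in x:
        b = 1 if s >= recon else 0
        bits.append(b)
        recon += step if b else -step
    return np.array(bits)

def conditional_entropy(bits, k):
    """Estimate H(B_n | previous k bits) from empirical context counts."""
    ctx = {}
    for i in range(k, len(bits)):
        key = tuple(bits[i - k:i])
        ctx.setdefault(key, [0, 0])[bits[i]] += 1
    n = len(bits) - k
    h = 0.0
    for c0, c1 in ctx.values():
        for c in (c0, c1):
            if c:
                h -= (c / n) * np.log2(c / (c0 + c1))
    return h

t = np.linspace(0, 40 * np.pi, 16000)
bits = delta_modulate(np.sin(t))
print(f"H(B)        = {conditional_entropy(bits, 0):.3f} bits/bit")
print(f"H(B | prev4)= {conditional_entropy(bits, 4):.3f} bits/bit")
```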

2 citations




Journal ArticleDOI
TL;DR: Two generalizations of Shannon's inequality are proved in a simple way for entropy of order α and type β, and the corresponding noiseless coding theorems are established.
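
For context, the classical case being generalized can be stated compactly; the exact order-α, type-β definitions used by the authors are not reproduced here, so the following LaTeX note covers only the standard Shannon inequality and its coding corollary.

```latex
% The classical inequality being generalized; the order-alpha, type-beta
% versions in the paper follow the same pattern with generalized entropies.
\documentclass{article}
\begin{document}
For probability distributions $P=(p_1,\dots,p_n)$ and $Q=(q_1,\dots,q_n)$,
Shannon's inequality states
\[
  -\sum_{i=1}^{n} p_i \log_D p_i \;\le\; -\sum_{i=1}^{n} p_i \log_D q_i,
\]
with equality iff $P=Q$. Choosing $q_i = D^{-l_i}$ for codeword lengths
$l_i$ satisfying Kraft's inequality $\sum_i D^{-l_i} \le 1$ turns this into
the noiseless coding bound $H(P) \le \bar{L} = \sum_i p_i\, l_i$ on the
mean codeword length.
\end{document}
```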

1 citation


Proceedings ArticleDOI
01 Apr 1983
TL;DR: A new approach to the encoding of speech signals is proposed based on directly maximizing the first-order entropy of the data sent down the channel; it is formulated recursively and can be implemented via lookup tables.
Abstract: A new approach to the encoding of speech signals is proposed based on directly maximizing the first-order entropy of the data sent down the channel. It is formulated recursively and can be implemented via lookup tables. A corresponding decoder is also derived and can be implemented in a similar manner. The performance of the system is competitive with other methods but differs qualitatively, as the design is based on entropy considerations rather than on minimizing mean square error, as in most other methods.
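
One simple way to realize entropy-maximizing quantization via table lookup is to place the encoder thresholds at empirical quantiles, making every channel symbol equiprobable. The sketch below takes that route; the paper's recursive formulation is not reproduced, and the Laplacian training data and 3-bit table are illustrative assumptions.

```python
import numpy as np

def build_tables(train, bits=3):
    """Encoder thresholds at quantiles; decoder table of per-cell means."""
    levels = 2 ** bits
    edges = np.quantile(train, np.linspace(0, 1, levels + 1)[1:-1])
    cells = np.searchsorted(edges, train)
    decode = np.array([train[cells == k].mean() for k in range(levels)])
    return edges, decode

rng = np.random.default_rng(2)
train = rng.laplace(size=20000)           # speech-like heavy-tailed amplitudes
edges, decode = build_tables(train, bits=3)

test = rng.laplace(size=20000)
sym = np.searchsorted(edges, test)        # encoder: pure table lookup
_, counts = np.unique(sym, return_counts=True)
p = counts / counts.sum()
snr = 10 * np.log10(np.var(test) / np.mean((test - decode[sym]) ** 2))
print(f"symbol entropy: {-(p * np.log2(p)).sum():.3f} bits (max 3)")
print(f"reconstruction SNR: {snr:.1f} dB")
```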

Proceedings ArticleDOI
01 Apr 1983
TL;DR: The quantizer using entropy coding is shown to provide significant improvements over the other quantizers, both on its own and when incorporated in an adaptive predictive coder (APC) to quantize residual signals produced by inverse filtering.
Abstract: When applied to signal quantization, it is well known that entropy coding yields results closer to the limit of the rate-distortion function than optimum quantizers do. However, entropy coding produces a variable number of bits per signal sample. This drawback can be partially avoided by an algorithmic procedure that keeps the number of bits per block of signal samples constant. In the first part of the paper, three conventional feed-forwardly adapted quantizers, namely the block companded PCM (BCPCM) quantizer, the Max optimum quantizer, and the optimum uniform quantizer, are briefly reviewed, and the algorithmic procedure of the proposed quantizer is described. The performance of the four methods is then compared in terms of signal-to-quantizing-noise ratio when operating on signals having a Laplacian probability density function, for various block lengths and bit rates. The quantizer using entropy coding is shown to provide significant improvements over the other quantizers. Finally, the four quantizers are incorporated in an adaptive predictive coder (APC) and used to quantize residual signals produced by inverse filtering. Again, the quantizer using entropy coding is shown to provide the best performance.
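
The block-constraining idea is straightforward to sketch: per block, coarsen the quantizer step until the entropy-coded bit count fits a fixed budget, so the channel rate stays constant per block. The code below does this under stated assumptions (a Laplacian block, an ideal entropy-code length in place of an actual Huffman code, and a multiplicative step search); it is not the paper's exact procedure.

```python
import numpy as np

def code_bits(symbols):
    """Ideal entropy-code length (bits) for one block of quantizer indices."""
    _, counts = np.unique(symbols, return_counts=True)
    p = counts / counts.sum()
    return float(-(counts * np.log2(p)).sum())

def quantize_block(block, budget_bits):
    """Smallest step (on a geometric grid) whose coded block fits the budget."""
    step = 0.05
    while True:
        q = np.round(block / step).astype(int)
        if code_bits(q) <= budget_bits:
            return q, step
        step *= 1.25                      # coarser quantization, fewer bits

rng = np.random.default_rng(3)
x = rng.laplace(size=256)                 # Laplacian residual-like block
q, step = quantize_block(x, budget_bits=256 * 2)   # 2 bits/sample budget
err = x - q * step
snr = 10 * np.log10(np.var(x) / np.mean(err ** 2))
print(f"step={step:.3f}, bits={code_bits(q):.0f}, SNR={snr:.1f} dB")
```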