
Showing papers on "Entropy encoding" published in 1977


Book
01 Jan 1977
TL;DR: In this book, the authors cover entropy and mutual information, the capacity-cost and rate-distortion functions of discrete memoryless channels and sources, the Gaussian channel and source, and the source-channel coding theorem, followed by linear, BCH, Goppa, and convolutional codes and variable-length source coding, with a survey of advanced topics closing each part.
Abstract: 1. Entropy and mutual information 2. Discrete memoryless channels and their capacity-cost functions 3. Discrete memoryless sources and their rate-distortion functions 4. The Gaussian channel and source 5. The source-channel coding theorem 6. Survey of advanced topics for part I 7. Linear codes 8. BCH, Goppa, and related codes 9. Convolutional codes 10. Variable-length source coding 11. Survey of advanced topics for part II.
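For reference (these are the standard definitions, stated here rather than quoted from the book), the quantities underlying the first chapters are the entropy of a discrete source X and the mutual information between a channel input X and output Y:

    H(X) = -\sum_{x} p(x)\,\log_2 p(x), \qquad
    I(X;Y) = \sum_{x,y} p(x,y)\,\log_2 \frac{p(x,y)}{p(x)\,p(y)} = H(X) - H(X \mid Y).

The capacity-cost and rate-distortion functions of chapters 2 and 3 are then obtained by maximizing and minimizing I(X;Y), respectively, subject to the cost or distortion constraint.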

673 citations


Journal ArticleDOI
TL;DR: In this article, adaptive image coding systems are divided into four categories; the theoretical and implementational problems of the optimum system are discussed, and the assumptions made to overcome these problems are outlined.
Abstract: The following is a survey of the technical literature on adaptive coding of imagery. Section 1 briefly discusses the general problem of image data compression. The optimum image data compression system, from a theoretical viewpoint, is presented in Section 1.1. The theoretical and implementational problems of the optimum system are discussed, and the assumptions made to overcome these problems are outlined. One important assumption is stationarity, which does not hold for most imagery. In adaptive systems the parameters are varied according to changes in signal statistics, optimizing system performance for nonstationary signals. In this article the adaptive systems are divided into four categories. Section 2 is a survey of adaptive transform coding systems. Section 3 discusses adaptive predictive coding systems. Sections 4 and 5 discuss adaptive cluster coding and adaptive entropy techniques, respectively.

110 citations


Journal ArticleDOI
TL;DR: Rate-distortion functions for 2-dimensional homogeneous isotropic images are compared with the performance of five source encoders designed for such images; with the mean-square error distortion measure, 6-pel DPCM with entropy coding performed best.
Abstract: Rate-distortion functions for 2-dimensional homogeneous isotropic images are compared with the performance of five source encoders designed for such images. Both unweighted and frequency-weighted mean-square error distortion measures are considered. The coders considered are a) differential pulse code modulation (DPCM) using six previous samples or picture elements (pels) in the prediction, herein called 6-pel DPCM, b) simple DPCM using single-sample prediction, c) 6-pel DPCM followed by entropy coding, d) 8 × 8 discrete cosine transform coding, and e) 4 × 4 Hadamard transform coding. Other transform coders were studied and found to have about the same performance as the two transform coders above. With the mean-square error distortion measure, 6-pel DPCM with entropy coding performed best. Next best were the 8 × 8 discrete cosine transform coder and the 6-pel DPCM, which had approximately the same distortion. Next were the 4 × 4 Hadamard and simple DPCM, in that order. The relative performance of the coders changed slightly when the distortion measure was frequency-weighted mean-square error. From R = 1 to 3 bits/pel, the range studied here, the performances of all the coders were separated by only about 4 dB.
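As a rough illustration of the simplest coder in the comparison, single-sample-prediction DPCM followed by entropy coding, the sketch below (plain Python with hypothetical function names, not code from the paper) quantizes prediction errors along one image row and reports the first-order entropy of the quantizer indices, the rate an ideal entropy coder would approach. The 6-pel coder in the paper differs in that it predicts from six previous pels on the current and preceding lines.

    import numpy as np

    def dpcm_encode_row(row, step=4):
        # Single-sample-prediction DPCM along one image row: predict each pel by
        # the previous reconstructed pel and quantize the prediction error with a
        # uniform quantizer of the given step size.
        pred = 0.0
        indices, recon = [], []
        for pel in row.astype(float):
            q = int(round((pel - pred) / step))  # quantizer index to be entropy coded
            pred = pred + q * step               # decoder-side reconstruction
            indices.append(q)
            recon.append(pred)
        return np.array(indices), np.array(recon)

    def first_order_entropy(indices):
        # Entropy of the quantizer-index distribution, in bits/pel.
        _, counts = np.unique(indices, return_counts=True)
        p = counts / counts.sum()
        return float(-(p * np.log2(p)).sum())

    row = np.array([52, 55, 61, 66, 70, 61, 64, 73])  # example pel values
    indices, recon = dpcm_encode_row(row)
    print(first_order_entropy(indices), "bits/pel")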

64 citations



Journal ArticleDOI
01 Jul 1977
TL;DR: A variable-length source coding theorem is proved for a pair of discrete memoryless correlated information sources and the Huffman encoding procedure is generalized for this case.
Abstract: A variable-length source coding theorem is proved for a pair of discrete memoryless correlated information sources. The average length of codewords per source letter for source X, given side information Y, is bounded below by the conditional entropy H(X|Y) and above by the same entropy plus J/L, where L is the number of source letters encoded and J is the size of the ensemble Y. The Huffman encoding procedure is also generalized to this case.
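Writing \bar{n} for the average codeword length per source letter (a symbol introduced here for illustration; the paper's own notation may differ), the stated bounds read:

    H(X \mid Y) \;\le\; \bar{n} \;\le\; H(X \mid Y) + \frac{J}{L},

so the overhead above the conditional entropy vanishes as the number of encoded source letters L grows.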

2 citations


H. W. Jones
01 Mar 1977
TL;DR: In this article, the well-known algorithm of Max is used to determine the minimum-distortion quantizers for normal, two-sided exponential, and specialized two-sided gamma input distributions and for mean-square, magnitude, and relative magnitude error distortion criteria.
Abstract: The well-known algorithm of Max is used to determine the minimum distortion quantizers for normal, two-sided exponential, and specialized two-sided gamma input distributions and for mean-square, magnitude, and relative magnitude error distortion criteria. The optimum equally-spaced and unequally-spaced quantizers are found, with the resulting quantizer distortion and entropy. The quantizers, and the quantizers with entropy coding, are compared to the rate distortion bounds for mean-square and magnitude error.
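For concreteness, here is a minimal sketch of the Max (Lloyd-Max) iteration for a unit-variance normal input and the mean-square error criterion; the function name is hypothetical, and the paper additionally treats exponential and gamma inputs, magnitude-error criteria, and equally-spaced quantizers.

    import numpy as np
    from scipy.stats import norm

    def max_quantizer_gaussian(n_levels, n_iter=200):
        # Minimum mean-square-error (Max) quantizer for a unit-variance Gaussian
        # source: alternately place decision thresholds midway between adjacent
        # reconstruction levels, then move each level to the centroid
        # (conditional mean) of its decision interval.
        levels = np.linspace(-2.0, 2.0, n_levels)
        for _ in range(n_iter):
            thresholds = (levels[:-1] + levels[1:]) / 2.0
            edges = np.concatenate(([-np.inf], thresholds, [np.inf]))
            num = norm.pdf(edges[:-1]) - norm.pdf(edges[1:])    # integral of x*phi(x) per interval
            prob = norm.cdf(edges[1:]) - norm.cdf(edges[:-1])   # probability of each interval
            levels = num / prob
        thresholds = (levels[:-1] + levels[1:]) / 2.0
        edges = np.concatenate(([-np.inf], thresholds, [np.inf]))
        prob = norm.cdf(edges[1:]) - norm.cdf(edges[:-1])
        return levels, prob

    levels, prob = max_quantizer_gaussian(4)
    print(levels)                                 # about [-1.51, -0.45, 0.45, 1.51]
    print(-(prob * np.log2(prob)).sum(), "bits")  # output entropy, the target rate for entropy coding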

1 citation


Journal ArticleDOI
TL;DR: A method of joint source and channel encoding using block codes is presented, based on a variation of syndrome encoding, which results in both data compression and an error-detection capability for appropriate source probability distributions.
Abstract: A method of joint source and channel encoding using block codes is presented. The method is based on a variation of syndrome encoding. Making use of negacyclic codes, the technique results in both data compression and an error-detection capability for appropriate source probability distributions. Both situations, with and without distortion, are considered. An orthogonal extension of the above scheme is also introduced in the concluding remarks.
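The general idea of syndrome encoding for compression can be illustrated with a toy example; the sketch below (plain Python, using a binary (7,4) Hamming code rather than the negacyclic codes of the paper, with hypothetical function names) compresses a low-weight source block to its syndrome and recovers it as the coset leader.

    import itertools
    import numpy as np

    H = np.array([[1, 0, 1, 0, 1, 0, 1],   # parity-check matrix of the binary (7,4) Hamming code
                  [0, 1, 1, 0, 0, 1, 1],
                  [0, 0, 0, 1, 1, 1, 1]])

    def syndrome_encode(block):
        # Compress a length-7 binary source block to its 3-bit syndrome s = H x^T (mod 2).
        return H @ block % 2

    def syndrome_decode(syndrome):
        # Recover the coset leader: the minimum-weight block with the given syndrome.
        # This is exact whenever the source block has at most one 1, because the
        # weight-0 and weight-1 patterns are the coset leaders of the Hamming code.
        best = None
        for bits in itertools.product([0, 1], repeat=7):
            x = np.array(bits)
            if np.array_equal(H @ x % 2, syndrome) and (best is None or x.sum() < best.sum()):
                best = x
        return best

    block = np.array([0, 0, 0, 0, 1, 0, 0])           # a sparse 7-bit source block
    s = syndrome_encode(block)                         # transmitted as 3 bits
    assert np.array_equal(syndrome_decode(s), block)   # recovered exactly

For source distributions concentrated on low-weight blocks this yields compression; the negacyclic construction in the paper additionally provides error detection and covers the case with distortion.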