
Showing papers on "Canonical Huffman code published in 1982"


Journal ArticleDOI
TL;DR: A classification of all probability distributions over the finite alphabet of an information source is given, where the classes are the sets of distributions sharing the same binary Huffman code.
Abstract: A classification of all probability distributions over the finite alphabet of an information source is given, where the classes are the sets of distributions sharing the same binary Huffman code. Such a classification can be used in noiseless coding when the distribution of the finite memoryless source varies in time or becomes known only gradually. Instead of applying the Huffman algorithm to each new estimate of the probability distribution, if a simple test based on the above classification is passed, then the previously used Huffman code is also optimal for the new distribution.
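
The abstract does not spell out the paper's test, so the following Python sketch (helper names are mine) only illustrates the underlying idea: rebuild Huffman code-word lengths for the new probability estimate and compare expected lengths, accepting the old code whenever it still attains the minimum. The point of the paper's classification is to reach the same conclusion without the rebuild.

```python
import heapq

def huffman_lengths(probs):
    """Code-word lengths of a binary Huffman code for the given probabilities."""
    if len(probs) == 1:
        return [1]
    # Heap entries: (probability, tie-break counter, symbol indices under the node).
    heap = [(p, i, [i]) for i, p in enumerate(probs)]
    heapq.heapify(heap)
    lengths = [0] * len(probs)
    counter = len(probs)
    while len(heap) > 1:
        p1, _, s1 = heapq.heappop(heap)
        p2, _, s2 = heapq.heappop(heap)
        for i in s1 + s2:            # every symbol under the merged node moves one level deeper
            lengths[i] += 1
        heapq.heappush(heap, (p1 + p2, counter, s1 + s2))
        counter += 1
    return lengths

def still_optimal(old_lengths, new_probs, tol=1e-12):
    """True iff the previously used code-word lengths still achieve the minimum
    expected length for new_probs, i.e. the old Huffman code is still optimal."""
    fresh = huffman_lengths(new_probs)
    old_cost = sum(p * l for p, l in zip(new_probs, old_lengths))
    new_cost = sum(p * l for p, l in zip(new_probs, fresh))
    return old_cost <= new_cost + tol

old = huffman_lengths([0.4, 0.3, 0.2, 0.1])
print(still_optimal(old, [0.38, 0.32, 0.19, 0.11]))   # True: the old code stays optimal
```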

47 citations


Journal ArticleDOI
TL;DR: The well-known algorithm of David A. Huffman for finding minimum-redundancy codes has found many diverse applications, and in recent years it has been extended in a variety of ways, as surveyed in this paper.
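
For reference, here is a minimal sketch of the construction the review surveys (the standard textbook algorithm, not code from the paper):

```python
import heapq

def huffman_code(freqs):
    """Classic Huffman construction: map each symbol to a prefix-free bit string."""
    # Heap entries: (weight, tie-break counter, tree); a tree is a symbol or a (left, right) pair.
    heap = [(w, i, sym) for i, (sym, w) in enumerate(freqs.items())]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        w1, _, a = heapq.heappop(heap)      # the two least likely subtrees...
        w2, _, b = heapq.heappop(heap)
        heapq.heappush(heap, (w1 + w2, counter, (a, b)))   # ...are merged under a new node
        counter += 1
    _, _, root = heap[0]

    code = {}
    def walk(node, prefix):
        if isinstance(node, tuple):         # internal node: recurse with 0 / 1 appended
            walk(node[0], prefix + "0")
            walk(node[1], prefix + "1")
        else:                               # leaf: record the symbol's code word
            code[node] = prefix or "0"
    walk(root, "")
    return code

print(huffman_code({"a": 45, "b": 13, "c": 12, "d": 16, "e": 9, "f": 5}))
```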

28 citations


Journal ArticleDOI
TL;DR: A summary of one method of data compression using Huffman variable-length coding is presented, along with statistics of its effectiveness for various file types and practical techniques for integrating such methods into a small computer system.
Abstract: The performance of most small computer systems is determined, to a large extent, by the characteristics of their mass storage devices. Data compression can expand the storage capacity of such devices and also slightly increase their speed. Here, a summary of one method of data compression using Huffman variable-length coding is presented, along with statistics of its effectiveness for various file types and practical techniques for integrating such methods into a small computer system.
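
As a rough illustration of how such savings can be estimated, the sketch below (the input literal stands in for real file contents) counts the bits a byte-wise Huffman code would need, ignoring the code-table and record-structure overhead that a practical implementation must also handle:

```python
import heapq
from collections import Counter

def huffman_compressed_bits(data: bytes) -> int:
    """Bits needed to Huffman-code `data` byte-wise (code-table overhead ignored)."""
    freq = Counter(data)
    if len(freq) <= 1:
        return len(data)                       # degenerate case: one bit per byte
    # Heap entries: (count, tie-break counter, byte values under the node).
    heap = [(n, i, [b]) for i, (b, n) in enumerate(freq.items())]
    heapq.heapify(heap)
    depth = {b: 0 for b in freq}
    tick = len(heap)
    while len(heap) > 1:
        n1, _, s1 = heapq.heappop(heap)
        n2, _, s2 = heapq.heappop(heap)
        for b in s1 + s2:
            depth[b] += 1                      # merged symbols sink one level deeper
        heapq.heappush(heap, (n1 + n2, tick, s1 + s2))
        tick += 1
    return sum(freq[b] * depth[b] for b in freq)

data = b"AAAABBBCCD" * 1000                    # stand-in for the contents of a data file
bits = huffman_compressed_bits(data)
print(f"{len(data)} bytes -> ~{bits / 8:.0f} bytes ({bits / (8 * len(data)):.1%})")
```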

22 citations


Journal ArticleDOI
TL;DR: The minimum-redundancy code with the minimum variance of the word length is characterized; it is shown to be unique in a certain sense and to minimize a general class of functions of minimum-redundancy codes.
Abstract: Huffman's well-known coding method constructs a minimum-redundancy code which minimizes the expected value of the word length. In this paper, we characterize the minimum-redundancy code with the minimum variance of the word length. An algorithm is given to construct such a code, and the code is shown to be unique in a certain sense. Furthermore, the code has a stronger property: it minimizes a general class of functions of minimum-redundancy codes, as long as the functions are nondecreasing with respect to the path lengths from the root to the internal nodes of the corresponding decoding trees.
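
One commonly cited tie-breaking rule for this problem is to merge, among items of equal weight, the nodes created earliest (original symbols before merged subtrees), which is widely stated to yield a minimum-variance Huffman code. The sketch below implements that rule purely for illustration; it is not necessarily the paper's own construction.

```python
import heapq

def min_variance_huffman_lengths(probs):
    """Huffman code-word lengths with ties broken in favour of the
    earliest-created nodes (leaves before merged subtrees)."""
    # Heap entries: (probability, creation order, symbol indices under the node).
    heap = [(p, i, [i]) for i, p in enumerate(probs)]
    heapq.heapify(heap)
    lengths = [0] * len(probs)
    order = len(probs)
    while len(heap) > 1:
        p1, _, s1 = heapq.heappop(heap)
        p2, _, s2 = heapq.heappop(heap)
        for i in s1 + s2:
            lengths[i] += 1
        heapq.heappush(heap, (p1 + p2, order, s1 + s2))
        order += 1
    return lengths

p = [0.4, 0.2, 0.2, 0.1, 0.1]
L = min_variance_huffman_lengths(p)            # [2, 2, 2, 3, 3] rather than [1, 2, 3, 4, 4]
mean = sum(pi * li for pi, li in zip(p, L))
var = sum(pi * (li - mean) ** 2 for pi, li in zip(p, L))
print(L, round(mean, 3), round(var, 3))        # same expected length, smaller variance
```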

9 citations


Patent
09 Feb 1982
TL;DR: In this patent, a binary data transmission system is described that uses a code consisting of n-bit code words of a quasi-cyclic code with k information bits and n-k check bits.
Abstract: A binary data transmission system uses a code consisting of n-bit code words of a quasi-cyclic code with k information bits and n-k check bits. An encoder and a decoder with an error-detection and -correction facility are provided which use a quasi-cyclic code that has a high error-correcting capacity and requires little storage space.
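
The abstract does not give the actual generator, so the following is only a loose sketch of the systematic structure (k information bits followed by n-k check bits) using an arbitrary rate-1/2 quasi-cyclic example built from a circulant block; none of the parameters come from the patent.

```python
def circulant(first_row):
    """m x m circulant matrix: each row is the previous one rotated right by one."""
    m = len(first_row)
    return [[first_row[(j - i) % m] for j in range(m)] for i in range(m)]

# Illustrative (10, 5) systematic quasi-cyclic code: generator G = [I_m | C], C circulant.
# The first row below is an arbitrary example, not the patent's code.
C = circulant([1, 1, 0, 1, 0])

def encode(info):
    """k = m information bits followed by n - k = m check bits (arithmetic mod 2).
    Cyclically shifting both halves of a code word by one position gives another
    code word, which is the quasi-cyclic structure being illustrated."""
    m = len(C)
    checks = [sum(info[i] * C[i][j] for i in range(m)) % 2 for j in range(m)]
    return list(info) + checks

print(encode([1, 0, 1, 1, 0]))   # 10-bit code word: 5 info bits + 5 check bits
```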

6 citations


Patent
Gerald Goertzel
17 Mar 1982
TL;DR: In this patent, a technique is proposed for appending prefixed code words to a code word stream, conditioned by each least probable symbol in a binary source symbol string; the length of each requested code word is composed of two components.
Abstract: This invention relates to a technique for appending prefixed code words to a code word stream conditioned by each least probable symbol in a binary source symbol string. The generation of the prefixed code word from a bounded code word space is managed dynamically by a memory (9) requiring only space availability measures (47, 51) at the start and end of the encoding cycle as input. The memory delivers the prefixed code word of an appropriate derived length to output devices (7) for further processing. The encoder (2), responsive to each input triplet of symbol, symbol estimate, and probability-of-occurrence measure, generates the space availability measures PC(i) and PG(i-1), and modifies the memory contents so as to reduce the remaining code space. This reduction reflects the fact that the particular code word delivered has been used up and cannot be the prefix of any future delivered code word. The length of each requested code word is composed of two components: the run of most probable symbols, and the terminating least probable symbol. When the requested code word length exceeds what the memory can deliver, the code word length is decreased to the maximum permissible. When the code word space is exhausted, the memory is considered to have overflowed, and a reset occurs in which the code capacity is reinitialized.
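
The abstract describes each code word as covering a run of most probable symbols (MPS) plus a terminating least probable symbol (LPS); the memory-managed code-space bookkeeping (PC(i), PG(i-1), overflow reset) cannot be reconstructed from it. The sketch below only illustrates the run decomposition, using a Golomb/Rice code as a stand-in for the dynamically derived prefixed code words; it is not the patented scheme.

```python
def runs_of_mps(bits, mps=0):
    """Split a binary symbol string into runs of the most probable symbol (MPS),
    each terminated by one least probable symbol (LPS). A trailing MPS run with
    no terminator is returned with terminated=False."""
    runs, count = [], 0
    for b in bits:
        if b == mps:
            count += 1
        else:
            runs.append((count, True))   # run of `count` MPS, then one LPS
            count = 0
    if count:
        runs.append((count, False))
    return runs

def rice_codeword(run_len, k=2):
    """Stand-in prefixed code word for one run: Rice code with parameter k,
    chosen only to make the two length components visible (unary part for the
    MPS-run length, fixed-size part closing the word)."""
    q, r = divmod(run_len, 1 << k)
    return "1" * q + "0" + format(r, f"0{k}b")

bits = [0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 1]
for run, terminated in runs_of_mps(bits):
    print(run, terminated, rice_codeword(run))
```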

3 citations


Journal ArticleDOI
TL;DR: This paper presents a method of right-to-left code generation for arithmetic expressions given in postfix notation; it is faster and requires less memory than a code generator working from the binary tree.
Abstract: This paper presents a method of right-to-left code generation for arithmetic expressions given in postfix notation. The number of generated instruction lines is equal to the number obtained from the binary tree structure. However, the right-to-left code generator is faster and requires less memory than the code generator working from the binary tree. These results were obtained by introducing a vector, called the vector-generatrice, assigned to every postfix string. A translation grammar is defined which generates the postfix string and its associated vector-generatrice. An experiment was performed on a set of arithmetic expressions to compare the right-to-left code generator with the code generator working from the binary tree.
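
The paper's right-to-left generator and its vector-generatrice are not reproduced in the abstract. Purely for orientation, the sketch below is a conventional left-to-right, stack-based postfix code generator (neither the paper's method nor its binary-tree baseline); it fixes what "one instruction line per operator" means in this context.

```python
def gen_code(postfix):
    """Stack-based code generation from a postfix string: one instruction line per
    operator, with operands and temporaries tracked on a stack."""
    stack, lines, temp = [], [], 0
    for tok in postfix.split():
        if tok in "+-*/":
            right = stack.pop()
            left = stack.pop()
            temp += 1
            dest = f"t{temp}"
            lines.append(f"{dest} := {left} {tok} {right}")
            stack.append(dest)
        else:
            stack.append(tok)          # operand: just push its name
    return lines

for line in gen_code("a b c * + d -"):   # (a + b*c) - d
    print(line)
```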

1 citation