
Showing papers on "Canonical Huffman code published in 1987"


Journal ArticleDOI
TL;DR: A new one-pass algorithm for constructing dynamic Huffman codes is introduced and analyzed, and it is shown that the number of bits used by the new algorithm to encode a message containing t letters is fewer than t bits more than that used by the conventional two-pass Huffman scheme, independent of the alphabet size.
Abstract: A new one-pass algorithm for constructing dynamic Huffman codes is introduced and analyzed. We also analyze the one-pass algorithm due to Faller, Gallager, and Knuth. In each algorithm, both the sender and the receiver maintain equivalent dynamically varying Huffman trees, and the coding is done in real time. We show that the number of bits used by the new algorithm to encode a message containing t letters is fewer than t bits more than that used by the conventional two-pass Huffman scheme, independent of the alphabet size.
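
For contrast with the one-pass schemes the abstract discusses, the conventional two-pass construction it is measured against looks roughly like the following Python sketch (a minimal illustration of static Huffman coding, not Vitter's dynamic algorithm; all names are ours):

import heapq
from collections import Counter

def huffman_code(message):
    # Pass 1: count symbol frequencies over the whole message.
    freq = Counter(message)
    heap = [(w, i, sym) for i, (sym, w) in enumerate(freq.items())]
    heapq.heapify(heap)
    if len(heap) == 1:                      # degenerate one-letter alphabet
        return {heap[0][2]: "0"}
    tick = len(heap)                        # unique tiebreaker for the heap
    while len(heap) > 1:
        w1, _, t1 = heapq.heappop(heap)     # merge the two lightest subtrees
        w2, _, t2 = heapq.heappop(heap)
        tick += 1
        heapq.heappush(heap, (w1 + w2, tick, (t1, t2)))
    code = {}
    def walk(tree, prefix):                 # assign 0/1 labels down the tree
        if isinstance(tree, tuple):
            walk(tree[0], prefix + "0")
            walk(tree[1], prefix + "1")
        else:
            code[tree] = prefix
    walk(heap[0][2], "")
    return code

# Pass 2: encode, reusing the finished table.
message = "abracadabra"
table = huffman_code(message)
encoded = "".join(table[c] for c in message)

The two passes are visible directly: Counter must read the whole message before any symbol can be encoded, which is exactly the step the dynamic algorithms above avoid.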

368 citations


Patent
13 Apr 1987
TL;DR: A code converter has a network of logic circuits connected in reverse-binary-tree fashion, with logic paths between leaf nodes and a common root node; characters are compressed from standard codes to variable-length Huffman code by pulse-applying connections from a decoder to the paths.
Abstract: A code converter has a network of logic circuits connected in reverse-binary-tree fashion, with logic paths between leaf nodes and a common root node. Characters are compressed from standard codes to variable-length Huffman code by pulse-applying connections from a decoder to the paths. An OR-gate is connected to the "1" branches of the network to deliver the output code. Characters are decompressed from Huffman to standard codes by connecting the Huffman code to control the clocked logic circuits, which deliver a pulse from the root node to one of the inputs of an encoder. A feedback loop connects the root node and the path end nodes to initiate the next conversion. Alternative embodiments have decoder staging to minimize delay and parallel compressed-code output.
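
In software terms, the compression direction of such a converter amounts to a codeword lookup plus bit packing. A minimal sketch under that reading (the three-symbol code table is invented for illustration and is not the patent's circuit):

def pack_bits(symbols, table):
    # Map each fixed-length input symbol to its variable-length
    # codeword, then pack the bit string into bytes, MSB first.
    bits = "".join(table[s] for s in symbols)
    bits += "0" * (-len(bits) % 8)           # zero-pad the final byte
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

# Invented table: the most frequent symbol gets the shortest codeword.
table = {"a": "0", "b": "10", "c": "11"}
packed = pack_bits("abacab", table)          # 9 code bits -> 2 bytes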

53 citations


Journal ArticleDOI
H. Tanaka1
TL;DR: The tree structure is represented by a two-dimensional array, which serves as the state-transition table of a finite-state automaton for decoding Huffman codes.
Abstract: The data structure of Huffman codes and its application to efficient encoding and decoding of Huffman codes are studied in detail. The tree structure is represented by a two-dimensional array, which serves as the state-transition table of the finite-state decoding automaton. Inverting it produces a one-dimensional state-transition table of a semiautonomous finite-state sequential machine, which can be used as a Huffman encoder with a push-down stack. The encoding and decoding procedures are simple and efficient; they are not only easy to implement in simple hardware but also well suited to software implementation.
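
One way to read the two-dimensional array is one row per internal tree node and one column per input bit, with each entry either moving to another state or emitting a letter and returning to the root. A sketch of that reading (our own toy code and table, not Tanaka's construction):

# States are internal nodes of the tree for the toy code
# a -> 0, b -> 10, c -> 11.  Each entry is ("goto", next_state)
# or ("emit", symbol); emitting returns the automaton to the root.
table = [
    [("emit", "a"), ("goto", 1)],    # state 0: the root
    [("emit", "b"), ("emit", "c")],  # state 1: after a leading 1
]

def decode(bits, table):
    state, out = 0, []
    for bit in bits:
        action, arg = table[state][int(bit)]
        if action == "emit":
            out.append(arg)
            state = 0            # resynchronise at the root
        else:
            state = arg
    return "".join(out)

decoded = decode("0100110", table)   # -> "abaca"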

37 citations


Journal ArticleDOI
TL;DR: A necessary and sufficient condition for the most likely letter of any discrete source to be coded by a single symbol with a D-ary Huffman code, D ≥ 2, is derived, and a lower bound on the redundancy of a D-ary Huffman code is established.
Abstract: A necessary and sufficient condition for the most likely letter of any discrete source to be coded by a single symbol with a D-ary Huffman code, D ≥ 2, is derived. As a consequence, a lower bound on the redundancy of a D-ary Huffman code is established.
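
The condition is easy to probe empirically: construct a D-ary Huffman code and inspect the codeword length of the most likely letter. A sketch of the standard D-ary construction (dummy zero-probability leaves pad the alphabet so every merge combines exactly D subtrees; the probabilities below are arbitrary):

import heapq

def dary_codeword_lengths(probs, D):
    # Pad with zero-probability dummies so that (n - 1) is divisible
    # by (D - 1): then every merge combines exactly D subtrees.
    n = len(probs)
    pad = (1 - n) % (D - 1)
    heap = [(p, i, {i: 0}) for i, p in enumerate(probs)]
    heap += [(0.0, n + j, {}) for j in range(pad)]
    heapq.heapify(heap)
    tick = n + pad
    while len(heap) > 1:
        merged, total = {}, 0.0
        for _ in range(D):                   # merge the D least likely subtrees
            p, _, lengths = heapq.heappop(heap)
            total += p
            for sym, depth in lengths.items():
                merged[sym] = depth + 1      # every leaf sinks one level
        tick += 1
        heapq.heappush(heap, (total, tick, merged))
    return heap[0][2]

lengths = dary_codeword_lengths([0.5, 0.2, 0.15, 0.1, 0.05], D=3)
# lengths[0] == 1: here the most likely letter is coded by a single symbol.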

6 citations


Patent
14 May 1987
TL;DR: The code conversion circuit of the invention converts decimal numbers from the special four-field code shown in Figure 2 into the one-out-of-ten code shown in Figure 1.
Abstract: The code conversion circuit according to the subject-matter of the invention is used to convert decimal numbers from the special four-field code shown in Figure 2 into the one-out-of-ten code shown in Figure 1. This special four-field code cannot be complemented by negation. Its use has the advantage that both code conversion circuits are very simple. Compared with the code conversion circuit described in P 3529224.5, the main difference is that the digit 0 is read from the output of an AND circuit (1).
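
The abstract does not reproduce the special four-field code itself, so as a stand-in the following sketch converts an ordinary BCD digit to the one-out-of-ten (one-hot decimal) code; only the output side matches the patent, and the BCD input side is our assumption:

def bcd_to_one_out_of_ten(digit):
    # One-out-of-ten (one-hot decimal): exactly one of ten lines is 1.
    if not 0 <= digit <= 9:
        raise ValueError("not a decimal digit")
    return [1 if line == digit else 0 for line in range(10)]

assert bcd_to_one_out_of_ten(3) == [0, 0, 0, 1, 0, 0, 0, 0, 0, 0]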

2 citations


Journal ArticleDOI
01 Mar 1987
TL;DR: Although for certain sources the proposed code equals or nearly equals the Huffman code in compression performance, in general it is less efficient; however, its self-synchronising property and its relatively simple hardware realisation make it valuable for practical applications.
Abstract: An algorithm for obtaining a self-synchronising M-ary code (M ⩾ 2) enabling the compression of data from a stationary discrete memoryless source is proposed. After the code algorithm is presented, its properties are analysed and the implementation of the code is described. The proposed code is compared to the Huffman code with regard to average codeword length, the possibility of self-synchronisation, and the complexity of hardware realisation. Although for certain sources the proposed code is equal or nearly equal to the Huffman code in data compression, in general it is less efficient. However, its self-synchronising property and its relatively simple hardware realisation make this code valuable for practical applications.
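
The paper's algorithm is not given in the abstract, but the self-synchronisation property itself is easy to demonstrate with a simpler code. In the comma code sketched below (binary, so M = 2, and entirely our own example), every codeword ends in a reserved '1', so the decoder regains codeword boundaries at the first comma after an error:

def comma_decode(bits, symbols):
    # Codewords are runs of '0' closed by a '1': '1', '01', '001', ...
    # The closing '1' is the synchronisation point.
    out, run = [], 0
    for b in bits:
        if b == "1":
            if run < len(symbols):
                out.append(symbols[run])
            run = 0                      # a new codeword starts here
        else:
            run += 1
    return "".join(out)

symbols = "abc"                          # 'a' -> '1', 'b' -> '01', 'c' -> '001'
clean = comma_decode("101001", symbols)  # -> "abc"
hit = comma_decode("111001", symbols)    # one bit flipped: -> "aaac"; only
                                         # the damaged prefix misdecodes, and
                                         # the decoder is back in step by 'c'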

2 citations


Patent
04 Jun 1987
TL;DR: A block of code symbols is protected by a product code or pseudo product code; when the syndromes of the two code words in question become zero, the flags are cleared and the flag counts are decremented.
Abstract: A block of code symbols is protected by a product code or pseudo product code. First, syndromes are formed for all code words, and every code word whose syndrome differs from zero is provided with a flag. Each symbol that is not part of the redundancy belongs to a first code word and also to a second code word, and the flags are counted separately for the first and second code words. The code words are treated in turn and the error location is determined. When the location is part of both a disturbed first code word and a disturbed second code word, the error is corrected; when the second code word is not disturbed, the error is not corrected. After a correction, the syndromes are updated: when the syndromes of the two code words in question are equal to zero, the flags are cleared and the flag counts are decremented. Otherwise, the flags are temporarily left unchanged.
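
A toy version of the flagging idea, with even-parity row and column checks standing in for the patent's product code (the patent's flag bookkeeping is more involved; everything below is our illustration):

def parity(bits):
    return sum(bits) % 2

def locate_single_error(block):
    # Flag every row and column whose even-parity check fails;
    # a single-symbol error sits at the flagged intersection.
    rows, cols = len(block), len(block[0])
    bad_rows = [r for r in range(rows) if parity(block[r])]
    bad_cols = [c for c in range(cols)
                if parity([block[r][c] for r in range(rows)])]
    if len(bad_rows) == 1 and len(bad_cols) == 1:
        return bad_rows[0], bad_cols[0]  # both code words disturbed: correct
    return None                          # no flags, or ambiguous: leave alone

# 2x2 data bits bordered by a parity bit per row and per column.
block = [[1, 0, 1],
         [1, 1, 0],
         [0, 1, 1]]
block[1][0] ^= 1                         # inject a single-bit error
r, c = locate_single_error(block)        # -> (1, 0)
block[r][c] ^= 1                         # flip it back: corrected

A single-bit error disturbs exactly one row check and one column check, so it is corrected at their intersection; if only one direction is flagged, the sketch, like the patent, declines to correct.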

Proceedings ArticleDOI
01 Apr 1987
TL;DR: Two standard reversible coding algorithms, Ziv-Lempel and a dynamic Huffman algorithm, are applied to various types of speech data; neither showed much promise on small amounts of data, but both performed well on large amounts.
Abstract: Two standard reversible coding algorithms, Ziv-Lempel and a dynamic Huffman algorithm, are applied to various types of speech data. The data tested were PCM, DPCM, and prediction residuals from LPC. Neither algorithm showed much promise on small amounts of data, but both performed well on large amounts. Typically, the Ziv-Lempel coding required about 12 seconds of data (at 8000 samples per second) to reach a stable compression rate. The dynamic Huffman coding took much less time to "warm up", often needing only about 64 milliseconds. Approximately 66 seconds of PCM with 12 bits per sample was compressed 6.4% by the Ziv-Lempel coding and 20.7% by the dynamic Huffman coding. The corresponding figures for DPCM with 13 bits per sample are 17.7% and 35.6%, respectively. The prediction residuals had compression rates very close to those of DPCM, regardless of whether 1, 2, 5, or 10 prediction coefficients were used.
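
The experiment is easy to reproduce in spirit with Python's zlib, whose DEFLATE format combines an LZ77 stage with Huffman coding; it stands in for, but is not, the two 1987 algorithms, and the sine wave below is a synthetic stand-in for real speech:

import math
import zlib

# Synthetic stand-in for speech: an 8-bit quantised sine wave at a
# nominal 8000 samples/second (the paper used real PCM recordings).
pcm = bytes(int(127 + 120 * math.sin(i / 20)) for i in range(8000))
dpcm = bytes((pcm[i] - pcm[i - 1]) % 256 for i in range(1, len(pcm)))

for name, data in (("PCM", pcm), ("DPCM", dpcm)):
    packed = zlib.compress(data, level=9)    # DEFLATE: LZ77 + Huffman
    saved = 100 * (1 - len(packed) / len(data))
    print(f"{name}: {saved:.1f}% saved")

One would expect the differenced stream to compress better, mirroring the PCM-versus-DPCM gap the paper reports.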