
Showing papers on "Canonical Huffman code published in 2004"


Patent
03 Aug 2004
TL;DR: In this paper, a method for decoding multiple coded symbols from a coded input symbol stream in a single clock cycle is presented, where the Huffman look-up table is extended to decode multiple symbols in one clock cycle in a first embodiment, and multiple DCT coefficient symbols in an alternate embodiment.
Abstract: A method is disclosed for decoding multiple-coded symbols from a coded input symbol stream in a single clock cycle. The method constructs an original Huffman look-up table by extending the associated Huffman tree to decode multiple symbols in one clock cycle in a first embodiment and decodes multiple DCT coefficient symbols in an alternate embodiment. An advantage of the method is that the depth of the new Huffman tree is adjustable thereby making the method easily adaptable to various hardware architectures. A further advantage of the present invention is that the decoding process speed is significantly increased while the size of the lookup table is nominally increased.
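The patent gives no implementation details; the following is a minimal sketch of the general extended-lookup idea, in which one fixed-width table access can decode several symbols at once. The toy prefix code and the index width K are our own assumptions, and the patent's actual construction may differ.

```python
CODE = {"a": "0", "b": "10", "c": "11"}  # hypothetical toy prefix code
K = 4  # index width of the extended table (the adjustable "depth")

def build_table(code, k):
    """Map every k-bit pattern to (symbols it decodes, bits consumed)."""
    table = {}
    for idx in range(2 ** k):
        bits = format(idx, f"0{k}b")
        symbols, pos = [], 0
        while True:
            for sym, cw in code.items():
                if bits.startswith(cw, pos):
                    symbols.append(sym)
                    pos += len(cw)
                    break
            else:
                break  # leftover bits are a partial codeword
        table[bits] = (symbols, pos)
    return table

def decode(bits, table, k, n_symbols):
    """Decode n_symbols, consuming up to k bits per table access."""
    out, pos = [], 0
    while len(out) < n_symbols:
        chunk = bits[pos:pos + k].ljust(k, "0")  # pad the final chunk
        symbols, used = table[chunk]
        out.extend(symbols)
        pos += used
    return out[:n_symbols]
```

The table grows to 2^K entries, which is the speed/size trade-off the patent refers to: a wider index decodes more symbols per access but enlarges the lookup table.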

43 citations


Journal ArticleDOI
TL;DR: This letter introduces a new technique for designing and decoding Huffman codes, based on a condensed Huffman table (CHT) that is smaller than the ordinary Huffman table and leads to fast decoding.
Abstract: This letter introduces a new technique for designing and decoding Huffman codes. The key idea is to define a condensed Huffman table (CHT) that is smaller than the ordinary Huffman table and which leads to fast decoding. For example, the new approach has been shown to reduce the memory consumption by a factor of eight, compared with the single-side grown Huffman table.
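The condensed-table construction itself is only described in the letter; as background, the canonical Huffman codes that such compact tables build on can be reconstructed from codeword lengths alone, so a decoder need not store the codewords. The lengths below are a hypothetical example.

```python
lengths = {"a": 1, "b": 2, "c": 3, "d": 3}  # hypothetical (symbol, length) pairs

def canonical_codes(lengths):
    """Rebuild canonical codewords from codeword lengths alone:
    codewords are assigned in numerical order, shortest lengths first."""
    code, prev_len, out = 0, 0, {}
    for sym, ln in sorted(lengths.items(), key=lambda kv: (kv[1], kv[0])):
        code <<= ln - prev_len      # left-align to the new length
        out[sym] = format(code, f"0{ln}b")
        code += 1
        prev_len = ln
    return out

def decode(bits, lengths):
    inv = {cw: s for s, cw in canonical_codes(lengths).items()}
    out, cur = [], ""
    for b in bits:
        cur += b
        if cur in inv:              # prefix-free: first hit is the symbol
            out.append(inv[cur])
            cur = ""
    return "".join(out)
```

This lengths-only representation is what makes memory reductions like the reported factor of eight possible in the first place.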

35 citations


Journal ArticleDOI
G. Lakhani1
TL;DR: A minor modification to the Huffman coding of the JPEG baseline compression algorithm to exploit a well-observed characteristic that, when a discrete cosine transform block is traversed in the zigzag order, ac coefficients generally decrease in size and the runs of zero coefficients increase in length.
Abstract: It is a well-observed characteristic that, when a discrete cosine transform block is traversed in the zigzag order, ac coefficients generally decrease in size and the runs of zero coefficients increase in length. This paper presents a minor modification to the Huffman coding of the JPEG baseline compression algorithm to exploit this characteristic. During the run-length coding, instead of pairing a nonzero ac coefficient with the run-length of the preceding zero coefficients, our encoder pairs it with the run-length of subsequent zeros. This small change makes it possible for our codec to code a pair using a separate Huffman code table optimized for the position of the nonzero coefficient denoted by the pair. These position-dependent code tables can be encoded efficiently without incurring a sizable overhead. Experimental results show that our encoder produces a further reduction in the ac coefficient Huffman code size by about 10%-15%.
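The paper's central change, pairing each nonzero coefficient with the run of subsequent zeros rather than preceding ones, can be sketched on a toy zigzag sequence. The sequence, and the omission of JPEG size categories and the end-of-block marker, are our simplifications.

```python
def pairs_subsequent(coeffs):
    """Pair each nonzero ac coefficient with the run-length of the
    zeros that follow it (the modification), rather than with the
    zeros that precede it (baseline JPEG). Size categories and the
    end-of-block marker are omitted; assumes coeffs[0] is nonzero."""
    pairs, i = [], 0
    while i < len(coeffs):
        value = coeffs[i]
        j = i + 1
        while j < len(coeffs) and coeffs[j] == 0:
            j += 1
        pairs.append((value, j - i - 1))
        i = j
    return pairs
```

Because each pair is anchored at the position of its nonzero coefficient, the encoder can select a Huffman table conditioned on that position, which is where the reported 10%-15% reduction comes from.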

32 citations


Patent
07 Oct 2004
TL;DR: A JPEG Huffman decoder capable of simultaneously decoding multiple coefficients and/or symbols in a single table lookup is described, together with methods for designing, building, and using such a lookup table.
Abstract: Method and apparatus for use in Huffman decoding are described. In exemplary systems, a JPEG Huffman decoder is capable of simultaneously decoding multiple coefficients and/or symbols in a single table lookup. Methods for designing, building, and using such a table are included. Other embodiments are described and claimed.

20 citations


Patent
22 Oct 2004
TL;DR: In this article, a system and method for encoding information is disclosed, in which the high protection code may be a turbo code, the low protection code may be a trellis coded modulation code, and the collection of bits is then mapped according to a diagonally shifted QAM constellation technique.
Abstract: A system and method for encoding information is disclosed. In one embodiment, information is encoded using a high protection code for the least significant bit and a low protection code for the next three most significant bits. The remaining bits are uncoded. The high protection code may be a turbo code and the low protection code may be a trellis coded modulation code. In this embodiment, the collection of bits is then mapped according to a diagonally shifted QAM constellation technique.

15 citations


Proceedings ArticleDOI
23 Mar 2004
TL;DR: A bitwise KMP algorithm is proposed that can move one extra bit in the case of a mismatch, since the alphabet is binary, to overcome the problem of false matches in Huffman encoded texts.
Abstract: This paper presents compressed pattern matching in Huffman encoded texts. A modified Knuth-Morris-Pratt (KMP) algorithm is used in order to overcome the problem of false matches. This paper also proposes a bitwise KMP algorithm that can move one extra bit in the case of a mismatch, since the alphabet is binary. The KMP algorithm is combined with two Huffman decoding algorithms, called sk-kmp and win-kmp, to handle more than a single bit per machine operation, and skeleton trees are used for efficient decoding of Huffman encoded texts.
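The false-match problem the paper addresses is easy to reproduce: a pattern's bit encoding can occur in the encoded text at a position that is not a codeword boundary. The toy code below is ours, not the paper's.

```python
CODE = {"a": "0", "b": "10", "c": "11"}  # hypothetical toy prefix code

def encode(text):
    return "".join(CODE[ch] for ch in text)

bits = encode("bb")    # "1010" - the text contains no "a"
pattern = encode("a")  # "0"
# A naive bit-level search still reports hits at offsets 1 and 3,
# because those zeros are internal to the codeword of "b":
hits = [i for i in range(len(bits)) if bits.startswith(pattern, i)]
```

A modified KMP, or the skeleton-tree decoding the paper uses, rejects such hits by tracking codeword alignment while scanning.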

13 citations


Book ChapterDOI
05 Oct 2004
TL;DR: This paper implements word-based adaptive Huffman compression, showing that it obtains very competitive compression ratios, and shows how End-Tagged Dense Code can be turned into a faster and much simpler adaptive compression method which obtains almost the same compression ratios.
Abstract: One of the most successful natural language compression methods is word-based Huffman. However, such a two-pass semi-static compressor is not well suited to many interesting real-time transmission scenarios. A one-pass adaptive variant of Huffman exists, but it is character-oriented and rather complex. In this paper we implement word-based adaptive Huffman compression, showing that it obtains very competitive compression ratios. Then, we show how End-Tagged Dense Code, an alternative to word-based Huffman, can be turned into a faster and much simpler adaptive compression method which obtains almost the same compression ratios.
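As background, a common formulation of End-Tagged Dense Code in the literature marks the last byte of each codeword with its high bit, so codewords are self-delimiting and can be assigned densely by word frequency rank. The sketch below shows only this static codeword assignment, not the paper's adaptive variant.

```python
def etdc_encode(rank):
    """Map a word's frequency rank (0 = most frequent) to an
    End-Tagged Dense Code: the high bit of a byte marks the final
    byte of a codeword, so no separator is needed."""
    out = [rank % 128 + 128]       # tagged final byte
    rank //= 128
    while rank > 0:
        rank -= 1                  # bijective base-128: no codeword wasted
        out.append(rank % 128)
        rank //= 128
    return bytes(reversed(out))

def etdc_decode(data):
    """Invert etdc_encode on a single codeword."""
    rank = 0
    for b in data[:-1]:
        rank = rank * 128 + b + 1
    return rank * 128 + (data[-1] - 128)
```

Rank 0 maps to the single byte 128, ranks through 127 stay one byte long, and longer codewords follow bijectively, so every byte pattern is used; this density is what lets ETDC approach word-based Huffman's compression ratios with far simpler coding.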

13 citations


Patent
12 Mar 2004
TL;DR: In this article, a method of decoding a turbo product code (TPC) code word comprises iteratively decoding the TPC code word using an iterative decoder, and then terminating the iterative decoding when the code word satisfies a cyclic redundancy check (CRC).
Abstract: A method of decoding a turbo product code (TPC) code word comprises iteratively decoding the TPC code word using an iterative decoder. The method further comprises terminating the iterative decoding when the TPC code word satisfies a cyclic redundancy check (CRC). The TPC code word can include a plurality of square code blocks of user data, with CRC data bits appended to one of the plurality of code blocks instead of replacing user data within the code blocks. Apparatus for implementing the method are also provided.

12 citations


Journal ArticleDOI
TL;DR: The following simple compression problem is NP-hard: given a collection of documents, find the pair of Huffman dictionaries that minimizes the total compressed size of the collection, where the best dictionary from the pair is used to compress each document.
Abstract: We show that the following simple compression problem is NP-hard: given a collection of documents, find the pair of Huffman dictionaries that minimizes the total compressed size of the collection, where the best dictionary from the pair is used to compress each document. We also show the NP-hardness of finding optimal multiple preset dictionaries for LZ77-based compression schemes. Our reductions make use of the catalog segmentation problem, a natural partitioning problem. Our results justify heuristic attacks used in practice.

12 citations


Proceedings ArticleDOI
C. Bauer1, M. Vinton1
01 Jan 2004
TL;DR: This paper presents two novel optimization techniques that for the first time find optimal solutions for the AAC optimization problem.
Abstract: The MPEG-4 AAC audio encoder achieves low bit rates by appropriately choosing the two encoding parameters: Huffman code books and scale factors. Existing AAC implementations solve the implied optimization problem using a heuristic two-loop search. This paper presents two novel optimization techniques that for the first time find optimal solutions for the AAC optimization problem.

11 citations


Journal ArticleDOI
TL;DR: Analytical and experimental results suggest that the new algorithm is very useful in improving the 0/1 balance property for Huffman codes and RVLCs.
Abstract: This letter proposes a novel algorithm to obtain a suboptimal solution for the balance of bit distribution after Huffman coding. The algorithm is simple, and can be embedded in the conventional Huffman coding process. In addition, the letter also discusses the bit-balance problem for reversible variable-length codes (RVLCs) based on Huffman coding. Analytical and experimental results suggest that the new algorithm is very useful in improving the 0/1 balance property for Huffman codes and RVLCs.

Proceedings ArticleDOI
23 Mar 2004
TL;DR: TSC is a variable-length sub-optimal code that satisfies the prefix property and is beneficial in many types of applications: speeding up string matching over compressed text, speeding up the decoding process, robust error detection and recovery during transmission, as well as general-purpose integer representation.
Abstract: In this paper, a new coding technique called tagged sub-optimal code (TSC) is proposed. TSC is a variable-length sub-optimal code that satisfies the prefix property. The TSC technique is beneficial in many types of applications: speeding up string matching over compressed text, speeding up the decoding process, robust error detection and recovery during transmission, as well as general-purpose integer representation. The experimental results show that string matching over TSC-compressed text is 8.9 times faster than over text compressed with Huffman encoding, and that decoding is 3 times faster.

Book ChapterDOI
Hirofumi Muratani1
23 May 2004
TL;DR: An improved construction of a binary fingerprinting code is evaluated and the code length of the improved construction becomes about a tenth of the original c-secure CRT code.
Abstract: An improved construction of a binary fingerprinting code is evaluated. The c-secure CRT code, a variant of the c-secure code, has shorter code length than the original construction by Boneh and Shaw. Recently, two improvements to this code have been proposed. We provide conditions determining the code length of the construction combined with these two improvements and provide the optimal setting of parameters. We compare the code length of the improved construction with that of the original construction. For any size of collusion, the code length is improved. In particular, for the collusion size c ≥ 32, the code length of the improved code becomes about a tenth of the original c-secure CRT code.

Book ChapterDOI
TL;DR: An exact algorithm for constructing an optimal even code is described, with complexity O(n^3 log n), where n is the number of symbols; a heuristic is also given whose codes cost at most 50% more than a Huffman code for the same probabilities.
Abstract: Even codes are Huffman-based prefix codes with the additional property of being able to detect the occurrence of an odd number of 1-bit errors in the message. They were defined motivated by a problem posed by Hamming in 1980. Even codes have been studied for the case in which the symbols have uniform probabilities. In the present work, we consider the general situation of arbitrary probabilities. We describe an exact algorithm for constructing an optimal even code. The algorithm has complexity O(n^3 log n), where n is the number of symbols. Further, we describe a heuristic for constructing a nearly optimal even code, which requires O(n log n) time. The cost of an even code constructed by the heuristic is at most 50% higher than the cost of a Huffman code for the same probabilities, that is, less than 50% higher than the cost of the corresponding optimal even code. However, computer experiments have shown that, for practical purposes, this value seems to be much less: at most 5%, for n large enough. This corresponds to the overhead in the size of the encoded message for having the ability to detect an odd number of 1-bit errors.
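The error-detecting property is simple to demonstrate: when every codeword contains an even number of 1-bits, the whole encoded message has even parity, and flipping any odd number of bits breaks it. The toy even code below is a hypothetical example, not one produced by the paper's algorithm.

```python
# Each codeword has an even number of 1-bits, so any concatenation
# of codewords has even parity overall.
EVEN_CODE = {"a": "00", "b": "11", "c": "0110"}

def parity_ok(bits):
    """Accept only messages with an even number of 1-bits."""
    return bits.count("1") % 2 == 0

msg = "".join(EVEN_CODE[s] for s in "abc")  # "00110110", parity even
corrupted = "1" + msg[1:]                   # one flipped bit: parity odd
```

The 50% cost bound in the abstract is the price of imposing this parity constraint on top of the prefix property.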

Proceedings ArticleDOI
05 Apr 2004
TL;DR: Several comparisons are made between the new algorithms, which are based on adaptive codes of order one and Huffman codes, and the well-known Huffman encoder for various input data strings.
Abstract: We present new algorithms for data compression, based on adaptive codes of order one and Huffman codes. Several comparisons between our algorithms and the well-known Huffman encoder for various input data strings are made. Some comparisons are also made with the LZ encoder.

Patent
Katsumi Otsuka1
01 Apr 2004
TL;DR: In this article, the scale of the Huffman table used for decoding is reduced by discriminating the type of a queued variable-length code word from the pattern of a predetermined number of bits at its start, so that only data of sufficient code word length is compared against the table.
Abstract: This invention reduces the scale of a Huffman table used for decoding. A queuing unit queues a variable-length code word from a received bitstream. A switch circuit discriminates the type of the code word in accordance with the pattern of a predetermined number of bits at the start of the queued variable-length code word, extracts data having a sufficient code word length from a predetermined bit position on the basis of the discrimination result, and outputs the result to a Huffman table. The Huffman table compares the data from the switch circuit with a variable-length code word stored in advance, and when the data and the variable-length code word coincide, outputs first symbol data. The Huffman table also generates a sum value for the first symbol data, and generates two second symbols from the sum result. A selection unit selects and outputs one of the first symbol and two second symbols in accordance with the type of the received code. A selection unit selects and outputs one of the symbol selected by the selection unit and a symbol from an FLC decoder on the basis of the data queued by the queuing unit.

Patent
Su-Hyun Kim1, Cha Soon Back1
21 Oct 2004
TL;DR: A method for constructing and searching an improved Huffman table, capable of improving decoding efficiency over an existing Huffman table, is presented in this paper for a decoding method and apparatus using a Huffman code.
Abstract: A method for constructing and searching an improved Huffman table which is capable of improving the efficiency over an existing Huffman table in a decoding method and apparatus using a Huffman code. The method includes creating an improved Huffman table containing an increment of the Huffman code length and the number of times that code length repeats; generating a new bit string by extracting bits by the increment of the Huffman code length and appending the extracted bits to the end of the previous bit string; and reading the values of the corresponding codes, up to the number of repetitions, from the improved Huffman table and determining whether a value identical to the value of the new bit string is present. According to the present invention, unnecessary consumption of system resources can be reduced by decreasing the number of times the search and comparison routines for Huffman codes are invoked.

Journal ArticleDOI
TL;DR: A combined technique to reduce redundancy and provide error control is presented and it is shown that this scheme gives a reduction in overall complexity without loss of error performance as compared to that employing a trellis decoder of GAC code followed by a Huffman decoder.
Abstract: In this paper, a combined technique to reduce redundancy and provide error control is presented. The proposed technique involves the concatenation of Huffman code and Modified Generalised Array Code. It is shown that this scheme gives a reduction in overall complexity without loss of error performance as compared to that employing a trellis decoder of GAC code followed by a Huffman decoder. An algorithm for the design of a combined trellis code is proposed. The proposed design is implemented in software and its error performances are compared with those of the separate Huffman and Modified Generalised Array Codes as well as the uncoded schemes. Furthermore, a comparative complexity study in terms of the total number of states and computations is carried out for the combined and separate schemes. This proposed combined code is suitable for both information storage devices and data transmission.

Journal ArticleDOI
TL;DR: An algorithm for parallel construction of Huffman codes in O(\frac{n}{\sqrt{p}} \log p) time with p processors is presented, improving the previous result of Levcopoulos and Przytycka.
Abstract: We present an algorithm for parallel construction of Huffman codes in $O(\frac{n}{\sqrt{p}} \log p)$ time with p processors, where p>1, improving the previous result of Levcopoulos and Przytycka. We also show that a greedy Huffman tree can be constructed in $O(\sqrt{n} \log n)$ time with n processors.

Proceedings ArticleDOI
Yuriy Reznik1
27 Jun 2004
TL;DR: The results characterize the second-order properties of block Shannon and Huffman codes, showing that the redundancy is distributed very uniformly among the codewords.
Abstract: This paper presents the second-order properties of minimum redundancy block codes. A block Huffman code constructed for a binary memoryless source with known symbol probabilities is considered. The results show that the redundancy of block Shannon and Huffman codes has small variance, i.e., it is distributed very uniformly among the codewords.

Posted Content
TL;DR: A Fibonacci connection between non-decreasing sequences of positive integers producing maximum-height Huffman trees and the Wythoff array has been proved.
Abstract: A Fibonacci connection between non-decreasing sequences of positive integers producing maximum-height Huffman trees and the Wythoff array has been proved.

Proceedings ArticleDOI
05 Apr 2004
TL;DR: A new technique is introduced for optimally encoding a source whose statistical properties are described by a first-order model, with overall computational cost lower than that of the Huffman code.
Abstract: A new technique is introduced for optimally encoding a given source whose statistical properties are described by a first-order model. The calculation of codeword lengths is based on constructing a new source whose statistics are determined by the consecutive redistribution of the probabilities of symbols in accordance with their original probabilities at each stage of the encoding. The proposed method performs equally well for different orders of symbol probabilities. While codewords are generated by a separate combinatorial procedure, the overall computational cost of the proposed method is lower than that of the Huffman code.

01 Jan 2004
TL;DR: Theoretical analysis and experimental study show that the variable-tail code provides higher compression efficiency than Golomb coding, is suitable for different test sets, and provides a high compression ratio.
Abstract: Test resource partitioning is an efficient method to reduce test cost. This paper presents a novel and efficient code, i.e., the variable-tail code, for test data compression. Theoretical analysis and experimental study show that the variable-tail code can provide higher compression efficiency than Golomb coding. It is suitable for different test sets and can provide a high compression ratio. The decoder of the variable-tail code is simple and easy to implement. In order to achieve a higher compression ratio, an efficient test vector reordering algorithm (ERA) incorporating a dynamic X-bit assignment procedure is presented. Experimental results demonstrate the efficiency of the proposed code and algorithm.

Patent
15 Mar 2004
TL;DR: In this paper, a computer system for identifying an individual using a biometric characteristic of the individual includes a sensor for generating a first code, and a controller for storing the first code and a dynamic binary code conversion algorithm.
Abstract: A computer system for identifying an individual using a biometric characteristic of the individual includes a biometric sensor for generating a first code, and a controller including a memory for storing the first code and a dynamic binary code conversion algorithm. When the controller receives a sensor code from the biometric sensor, it compares the sensor code with the first code stored in the memory, and if the identity between the sensor code and the first code is verified, the controller generates a first binary code by means of the dynamic binary code conversion algorithm and outputs the first binary code from which the computer system generates a second binary code by means of the dynamic binary code conversion algorithm. The computer system then verifies the identity of the individual if the second binary code matches the first binary code.

Journal Article
TL;DR: In this paper, an improved construction of a binary fingerprinting code is evaluated and compared with the original construction, and the optimal setting of parameters for any size of collusion is provided.
Abstract: An improved construction of a binary fingerprinting code is evaluated. The c-secure CRT code, a variant of the c-secure code, has shorter code length than the original construction by Boneh and Shaw. Recently, two improvements to this code have been proposed. We provide conditions determining the code length of the construction combined with these two improvements and provide the optimal setting of parameters. We compare the code length of the improved construction with that of the original construction. For any size of collusion, the code length is improved. In particular, for the collusion size c > 32, the code length of the improved code becomes about a tenth of the original c-secure CRT code.

01 Jan 2004
TL;DR: In this paper, the authors used static and adaptive Huffman algorithms to compress text data and compared them, finding that the adaptive Huffman algorithm gives the best compression performance.
Abstract: Text compression algorithms are normally defined in terms of a source alphabet of 8-bit ASCII codes. The Huffman algorithm is among the most popular methods of text compression. This research used static and adaptive Huffman algorithms to compress text data and compared them. Greater variation in the characters that occur decreases the compression ratio. The static Huffman algorithm compresses and decompresses faster than the adaptive Huffman algorithm, but the adaptive Huffman algorithm achieves the best compression performance. Keywords: text compression, static and adaptive Huffman algorithm.
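For reference, the static (two-pass) Huffman construction that this comparison starts from can be sketched with a priority queue; the adaptive variant instead updates its model after every symbol. This is the generic textbook construction, not the specific implementation benchmarked in the paper.

```python
import heapq
from itertools import count

def huffman_code(freqs):
    """Static Huffman: first pass collects frequencies, second pass
    encodes with the resulting code. Assumes at least two symbols."""
    tiebreak = count()  # stable ordering for equal frequencies
    heap = [(f, next(tiebreak), {sym: ""}) for sym, f in freqs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)  # two least frequent subtrees
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, (f1 + f2, next(tiebreak), merged))
    return heap[0][2]
```

An adaptive coder avoids the first pass (and avoids transmitting the model) at the cost of rebuilding or adjusting the tree as it goes, which is the speed/ratio trade-off the abstract reports.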

Proceedings ArticleDOI
27 Jun 2004
TL;DR: A new code design algorithm is proposed for robust RVLC by enhancing the free distance and exploiting properties of the Huffman code and the average length function satisfying the distance condition in order to improve the code performance.
Abstract: In recent years, reversible variable-length codes (RVLC) have been proposed to recover correct information from erroneous bitstreams. The free distance of a binary code indicates the strength of the code against transmission errors. Since the minimum free distance of an RVLC is one, encoded data cannot be guaranteed strong protection. We propose a new code design algorithm for robust RVLC that enhances the free distance. In the proposed algorithm, we exploit properties of the Huffman code and an average length function satisfying the distance condition in order to improve the code performance.
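The defining constraint behind such designs is that an RVLC must be decodable in both directions, which the usual constructions guarantee by making the codeword set both prefix-free and suffix-free. A small checker (our illustration, not the paper's design algorithm) makes the condition concrete.

```python
def is_reversible(codewords):
    """Check that no codeword is a prefix or a suffix of another,
    the usual structural condition for a reversible VLC."""
    for a in codewords:
        for b in codewords:
            if a != b and (b.startswith(a) or b.endswith(a)):
                return False
    return True
```

The paper's contribution sits on top of this: among the many affix-free code sets, it searches for ones with larger free distance, so that errors corrupt fewer decoded symbols.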

Proceedings ArticleDOI
20 Oct 2004
TL;DR: An SGH-based data structure is proposed that yields a memory-efficient, constant-time Huffman decoding algorithm: it spends constant time decoding each symbol and needs less memory.
Abstract: We propose an SGH-based data structure to obtain a memory-efficient, constant-time Huffman decoding algorithm. In cooperation with Aggarwal and Narayan's algorithm, the proposed data structure spends constant time on decoding each symbol and needs less memory. In addition, we derive some properties to reduce the codeword searching time. We believe that the proposed approach is useful for various multimedia coding applications, especially when the applications run on memory-constrained devices.

Patent
16 Feb 2004
TL;DR: In this paper, the Huffman table is reduced by searching for the start of a variable-length code word from an input bit stream, and then extracting data of a sufficient code length from a predetermined bit position, based on the decision result.
Abstract: PROBLEM TO BE SOLVED: To reduce the size of a Huffman table. SOLUTION: A searching part 101 searches for the start of a variable-length code word in an input bit stream. A switching circuit 102 decides the kind of the code word according to the pattern of the first predetermined number of bits in the located variable-length code word, extracts data of a sufficient code length from a predetermined bit position based on the decision result, and outputs the result to a Huffman table 104. The Huffman table 104 compares the data from the switching circuit 102 with a variable-length code word stored in advance, and outputs the data as first symbol data when there is a match. Further, an added value is generated (107, 108) for the first symbol, and the addition result is output as two second symbols. A selection part 106 selects and outputs either the first symbol or the two second symbols, according to the kind of the input code. Moreover, a selection part 109 selects and outputs either the symbol selected by the selection part 106 or a symbol from an FLC decoder 110, based on the data found by the searching part 101.

Posted Content
TL;DR: Fibonacci-like polynomials produced by m-ary Huffman codes for absolutely ordered sequences have been described.
Abstract: Fibonacci-like polynomials produced by m-ary Huffman codes for absolutely ordered sequences have been described.