
Showing papers on "Canonical Huffman code published in 2016"


Proceedings ArticleDOI
18 Mar 2016
TL;DR: A Huffman-coding-based data compression algorithm is proposed and tested in the MATLAB environment which significantly reduces the size of a one-dimensional data array.
Abstract: For the large data arrays required in various applications, the memory needed for storage and the bandwidth needed for transfer of that bulk data become problematic. If the array size can be reduced without losing data, the storage and transfer problem can be avoided. In this paper, a Huffman-coding-based data compression algorithm is proposed and tested in the MATLAB environment which significantly reduces the size of a one-dimensional data array. Though the algorithm is tested with number arrays only, it can be extended to character arrays with slight modification.
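The paper's MATLAB implementation is not shown in the abstract; as an illustration of the underlying idea, a minimal Python sketch of Huffman-compressing a one-dimensional number array might look like this (all names are hypothetical, not the authors' code):

```python
import heapq
from collections import Counter

def huffman_codes(data):
    """Build a Huffman code table for the symbols in `data`."""
    freq = Counter(data)
    if len(freq) == 1:  # degenerate case: a single distinct symbol
        return {next(iter(freq)): "0"}
    # Heap entries: (frequency, tie-breaker, {symbol: code-so-far}).
    # The tie-breaker keeps tuple comparison away from the dicts.
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        # Prefix one subtree's codes with 0, the other's with 1, and merge.
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, (f1 + f2, counter, merged))
        counter += 1
    return heap[0][2]

def compress(data):
    """Return the concatenated bit string and the code table."""
    codes = huffman_codes(data)
    return "".join(codes[s] for s in data), codes

# A 1-D number array: 8 values at 3+ bits each, compressed to 14 bits.
data = [7, 7, 7, 7, 3, 3, 5, 1]
bits, codes = compress(data)
```

The most frequent value (7) gets a 1-bit code, the rare values 3-bit codes, which is where the size reduction comes from.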

13 citations


Proceedings ArticleDOI
01 Aug 2016
TL;DR: The techniques of Huffman coding and Double Huffman coding are discussed and their performance is compared.
Abstract: Huffman coding [11] is the most popular technique for generating prefix-free codes [7, 10]. It is an efficient algorithm in the field of source coding and produces the lowest possible number of code symbols for a single source symbol [1]. Huffman coding is the most widely used lossless compression technique [2]. However, it has some limitations [20, 21]: the method produces a code of few bits for a symbol with a high probability of occurrence and a large number of bits for a symbol with a low probability of occurrence [3]. In Double Huffman Coding, by contrast, once the codeword of a symbol has been generated it is further compressed on a binary basis, and through this technique a better result can be achieved. In this paper we discuss the techniques of Huffman coding and Double Huffman coding and compare their performance.

12 citations


Proceedings ArticleDOI
06 Apr 2016
TL;DR: The Huffman coding technique, which follows the lossless compression approach, is found to be an optimal solution for the transportation of data.
Abstract: Code compression is used to reduce codes so that digital data can be transported from the transmitter (source) to the receiver (destination). Fixed-length codes are converted into variable-length codes with varied numbers of bits. The Huffman coding technique is found to be an optimal solution for the transportation of data. It follows the lossless compression approach: symbols in the data are encoded as binary codewords, the text is divided into blocks of variable length, and these are condensed by an efficient encoding procedure using a limited set of codewords. To increase the compression ratio, these codewords can be reused for encoding different compatible blocks. An Android app has been created to demonstrate this technique for ease of understanding.

4 citations


Proceedings ArticleDOI
18 Mar 2016
TL;DR: This paper aims at improving the strength of the key by making use of modified Huffman Code which is a special kind of prefix code which is optimal in nature and is prevalently used for lossless data compression.
Abstract: Cryptography is a technique by which stored and transferred data in a particular form can be comprehended and processed only by those for whom it is intended. In the modern era, cryptography is most often associated with the transformation of plaintext into ciphertext using a process called encryption, and back to the original plaintext using a process called decryption. Strong cryptography refers to those cryptographic systems or components that provide considerable immunity to cryptanalysis. The potency of any cryptographic algorithm depends solely on the strength of the key used. This paper aims at improving the strength of the key by making use of a modified Huffman code, a special kind of prefix code which is optimal in nature and is prevalently used for lossless data compression. The outcome of Huffman coding is a variable-length code which can be used effectively for encoding a source character. The scheme uses the Blowfish algorithm for encryption and is implemented in Java.

4 citations


Proceedings ArticleDOI
13 Sep 2016
TL;DR: This paper provides the first combination of Huffman codes and word prediction, using both trigram and long short-term memory (LSTM) language models; results show a significant effect of the length of word prediction lists.
Abstract: Two approaches to reducing effort in switch-based text entry for augmentative and alternative communication devices are word prediction and efficient coding schemes, such as Huffman. However, the character distributions that inform the latter have never accounted for the use of the former. In this paper, we provide the first combination of Huffman codes and word prediction, using both trigram and long short-term memory (LSTM) language models. Results show a significant effect of the length of word prediction lists, and up to 41.46% switch-stroke savings using a trigram model.

2 citations


Proceedings ArticleDOI
01 Apr 2016
TL;DR: The Huffman algorithm is implemented in the Julia language and visualized in an interactive platform called IJulia, supported by Jupyter, creating a learning platform that increases students' understanding of the coding technique.
Abstract: In order to make learning the Huffman algorithm simpler and more familiar, we have implemented the Huffman algorithm in the Julia language, visualized in an interactive platform called IJulia supported by Jupyter; this creates a learning platform that increases students' understanding of the coding technique. It involves a set of commands that are understandable and familiar to the user. Data compression is an essential requirement for lossless data transmission: message bits are encoded at the transmitter end so as to avoid errors and redundancy during the transmission of information. A Huffman code is a prefix code which compresses the message bits given the frequency of occurrence of each character or the probability of each symbol. A Huffman (binary) tree is formed based on the occurrence of the symbols, and the symbols are then encoded. The Huffman algorithm implements a bottom-up approach. Every symbol encoded with the Huffman tree is decodable: encoded information is decoded by tracing from the root of the tree down to the required symbol. This paper introduces the implementation of this algorithm in the Julia programming language supported by the Jupyter platform. Huffman encoding is among the most efficient ways of compressing data.
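The authors' Julia/IJulia notebook is not reproduced in the abstract; the bottom-up tree construction and root-to-leaf decoding it describes can be sketched in Python roughly as follows (names are illustrative; assumes at least two distinct symbols):

```python
import heapq
from itertools import count

def build_tree(freqs):
    """Bottom-up Huffman construction: repeatedly merge the two
    least-frequent nodes. Leaves are symbols; internal nodes are
    (left, right) tuples."""
    tick = count()  # tie-breaker so tuples never compare node contents
    heap = [(f, next(tick), sym) for sym, f in freqs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)
        f2, _, right = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, next(tick), (left, right)))
    return heap[0][2]

def codes_from(tree, prefix=""):
    """Derive the code table by walking the tree: 0 = left, 1 = right."""
    if not isinstance(tree, tuple):  # leaf
        return {tree: prefix or "0"}
    table = codes_from(tree[0], prefix + "0")
    table.update(codes_from(tree[1], prefix + "1"))
    return table

def decode(tree, bits):
    """Trace from the root: 0 goes left, 1 goes right; emit at each leaf."""
    out, node = [], tree
    for b in bits:
        node = node[0] if b == "0" else node[1]
        if not isinstance(node, tuple):  # reached a leaf
            out.append(node)
            node = tree
    return out

freqs = {"a": 5, "b": 2, "c": 1, "d": 1}
tree = build_tree(freqs)
codes = codes_from(tree)
bits = "".join(codes[s] for s in "abacad")
```

Because the code is prefix-free, the decoder never needs lookahead: each root-to-leaf trace ends at exactly one symbol.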

2 citations


Proceedings ArticleDOI
18 Mar 2016
TL;DR: A systematic method for video compression is described using a new technique: a collaboration of the fast curvelet transform, the Burrows-Wheeler transform, and Huffman coding.
Abstract: With the improvement in quality of multimedia and video services, audiences have become more demanding. Because of bandwidth requirements and resolution problems, designers still search for new robust coding techniques. This paper describes a systematic method for video compression using a new technique: a collaboration of the fast curvelet transform, the Burrows-Wheeler transform, and Huffman coding. The number of elements in each matrix at the output of the fast curvelet transform is modified; Burrows-Wheeler and Shannon-Fano encoding are then applied. The Burrows-Wheeler transform (BWT) is mainly used for compressing any category of data regardless of its information content. The Huffman coding principle is that a compact binary string is used to represent the compressed stream; Huffman codes can be properly reconstructed because no codeword is a prefix of another. This technique is used for grayscale as well as color videos.

2 citations


Journal ArticleDOI
TL;DR: The results and analysis of the proposed system revealed that when a file is embedded in a cover video, the properties of the original video and the stego video are the same and the level of compression achieved is far above the average 20% normally obtained.
Abstract: Technology, as a product of knowledge, has vulnerabilities, so its development is continuously undertaken. Research in steganography and cryptography relates to the perception of secrecy and privacy, and based on this perception some basic requirements of steganographic applications are often ignored. Beyond security, steganographic applications are required to provide a high payload or embedding capacity as well as a very good and appreciable level of robustness. Emphasis has always been placed on security, to the point that capacity is often not mentioned or is ignored. Most steganographic applications or software currently on the market increase the size of the resultant file after embedding. Conceptually, the resultant file size is expected to increase when using an embedding technique, because noise is added to the low-order bits, which always increases the size. The main aim of this research is to ensure the same file size after embedding and also to drastically reduce the size of the file to be embedded. To obtain the same file size, the cover video was re-encoded and reconstructed using video encoding techniques. The file was then embedded in a converted frame using LSB substitution. The high capacity or payload was achieved by employing RSA and Huffman code compression algorithms. The results and analysis of the proposed system revealed that when a file is embedded in a cover video, the properties of the original video and the stego video are the same, and the level of compression achieved is far above the average of 20% normally obtained.
General Terms: Steganography, Cryptography, Payload, RSA, Huffman Code, LSB, Spatial Domain.

2 citations


Journal ArticleDOI
TL;DR: This remark presents a correction to Algorithm 673 (dynamic Huffman coding) [Vitter 1989] and its translation to MATLAB.
Abstract: This remark presents a correction to Algorithm 673 (dynamic Huffman coding) [Vitter 1989] and its translation to MATLAB.

1 citation



Proceedings ArticleDOI
06 Jul 2016
TL;DR: A technique for identifying clusters of JPEG files on a storage medium based on binary patterns is offered, together with experimental results from an attempt to build a similar technique for identifying sectors.
Abstract: File fragmentation proves to be a major challenge for the majority of file carving techniques. Following the work of Simson Garfinkel, Nasir Memon, and other authors, we seek a technique to identify digital fragments (clusters or sectors) of JPEG files on a digital storage medium, or at least to sort all the fragments of the storage by their probability of belonging to a JPEG file, from most probable to least probable. This paper offers a technique for identifying clusters of JPEG files on the storage medium based on binary patterns, together with the experimental results of an attempt to build a similar technique for identifying sectors.

Proceedings ArticleDOI
01 Aug 2016
TL;DR: This work proposes directive-based automatic code generation for a multiple-precision code from a C code with double precision, which enables users to check the accuracy dependency of many algorithms by adding a few directives to C codes withdouble precision.
Abstract: We propose directive-based automatic code generation for a multiple-precision code from a C code with double precision. The multiple-precision code uses the GNU Multiple Precision Arithmetic Library (GMP). Our code generation functions can be separated into binary operations by automatically creating a temporary variable, transforming C mathematical functions into corresponding GMP functions, and managing functions that return a double-precision value. Our proposed system enables users to check the accuracy dependency of many algorithms by adding a few directives to C codes with double precision.

Journal Article
TL;DR: The IW-DVC method exploits the special properties of depth data to achieve a high compression ratio while preserving the quality of the captured images, and removes redundant information between depth frames to further increase compression efficiency without sacrificing image quality.
Abstract: With the recent development of 3D display technologies, there is an increasing demand for realistic 3D video. However, efficient transmission and storage of depth data still present a challenging task for the research community in these applications. Consequently, a new method, called 3D Image Warping Based Depth Video Compression (IW-DVC), is proposed for fast and efficient compression of 3D video using Huffman coding. The IW-DVC method exploits the special properties of depth data to achieve a high compression ratio while preserving the quality of the captured images. The method combines egomotion estimation and 3D image warping techniques and includes a lossless coding scheme capable of adapting to depth data with a high dynamic range. IW-DVC operates at high speed, is suitable for real-time applications, and attains enhanced motion-compensation accuracy compared with conventional approaches. It also removes redundant information between depth frames to further increase compression efficiency without sacrificing image quality.

Proceedings ArticleDOI
25 Sep 2016
TL;DR: To further increase the hiding capacity of MP3 files, an improved algorithm is proposed in this paper which finds an additional 10 Huffman codeword pairs that meet the transparency requirements.
Abstract: To further increase the hiding capacity of MP3 files, an improved algorithm is proposed in this paper. Compared to previous work, this algorithm finds an additional 10 Huffman codeword pairs that meet the transparency requirements. With the newly found pairs, the hiding capacity is increased while transparency is still preserved.
Keywords: MP3; concealed message; Huffman codeword; codeword pair
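The specific codeword pairs found by the authors are not listed in the abstract, but the general pair-substitution idea behind this family of MP3 steganography schemes can be sketched as follows, with made-up equal-length pairs standing in for the real MP3 Huffman table entries:

```python
# Hypothetical equal-length Huffman codeword pairs. Swapping the members
# of a pair leaves the frame size unchanged; the real pairs come from the
# MP3 Huffman tables and must additionally meet transparency requirements.
PAIRS = [("0101", "0110"), ("11100", "11101")]

def embed(codewords, secret_bits):
    """Replace each pair member in the stream with the member whose
    index (0 or 1) equals the next secret bit."""
    members = {cw: pair for pair in PAIRS for cw in pair}
    out, bits = [], iter(secret_bits)
    for cw in codewords:
        if cw in members:
            bit = next(bits, None)
            if bit is not None:
                cw = members[cw][bit]
        out.append(cw)
    return out

def extract(codewords, n_bits):
    """Recover the message by reading back the index of each pair
    member encountered in the stego stream."""
    index = {cw: i for pair in PAIRS for i, cw in enumerate(pair)}
    bits = [index[cw] for cw in codewords if cw in index]
    return bits[:n_bits]

stream = ["0101", "000", "11101", "0110"]   # parsed Huffman codewords
stego = embed(stream, [1, 0, 1])
```

Finding more usable pairs, as the paper does, directly raises capacity: each additional pair occurrence in the stream carries one more hidden bit.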

Patent
11 Oct 2016
TL;DR: In this article, techniques for indicating reusability of an index that determines a Huffman codebook used to code data associated with a vector in a spherical harmonics domain are described.
Abstract: In general, techniques are described for indicating reusability of an index that determines a Huffman codebook used to code data associated with a vector in a spherical harmonics domain. The bitstream may comprise an indicator for whether to reuse, from a previous frame, at least one syntax element indicative of the index. The memory may be configured to store the bitstream.

Patent
23 Dec 2016
TL;DR: In this article, a Huffman tree of the selected type is produced by determining the number of nodes available to be allocated as leaves in each level of the Huffman tree, accounting for the leaves already allocated at every level.
Abstract: A method for generating Huffman codewords to encode a dataset includes selecting a Huffman tree type from a plurality of different Huffman tree types. Each of the Huffman tree types specifies a different range of codeword lengths in a Huffman tree. A Huffman tree of the selected type is produced by: determining the number of nodes available to be allocated as leaves in each level of the Huffman tree, accounting for the allocation of leaves in each level of the Huffman tree; allocating nodes to be leaves such that the number of nodes allocated in a given level of the Huffman tree is constrained to be no more than the number of nodes available to be allocated in that level; and assigning the leaves to symbols of the dataset based on an assignment strategy selected from a plurality of assignment strategies to produce symbol codeword information.
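The patent text gives no code, but its level-by-level leaf allocation is closely related to canonical Huffman construction, in which codewords are assigned directly from per-symbol code lengths (a level in the tree corresponds to a code length, and the available-nodes constraint corresponds to the Kraft inequality). A Python sketch of that standard canonical assignment, as used for example in DEFLATE, might look like:

```python
def canonical_codes(lengths):
    """Given {symbol: codeword length}, assign canonical Huffman codes:
    symbols are sorted by (length, symbol); each code is the previous
    code plus one, left-shifted whenever the length increases. The
    lengths must satisfy the Kraft inequality (sum of 2**-L <= 1),
    which is what the per-level node-availability constraint ensures."""
    code = 0
    prev_len = 0
    out = {}
    for sym, length in sorted(lengths.items(), key=lambda kv: (kv[1], kv[0])):
        code <<= (length - prev_len)  # descend to the next tree level
        out[sym] = format(code, "0{}b".format(length))
        code += 1
        prev_len = length
    return out

codes = canonical_codes({"a": 1, "b": 2, "c": 3, "d": 3})
```

Because the codes are fully determined by the lengths, a decoder only needs the length of each symbol's codeword, not the tree itself.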