Journal ArticleDOI

# Adapting the Knuth-Morris-Pratt algorithm for pattern matching in Huffman encoded texts

01 Mar 2006, Information Processing and Management (Pergamon Press, Inc.), Vol. 42, Iss. 2, pp. 429-439

TL;DR: A bitwise KMP algorithm is proposed that can shift one extra bit in the case of a mismatch, since the alphabet is binary; it is combined with two practical Huffman decoding schemes that handle more than a single bit per machine operation.

Abstract: In the present work we perform compressed pattern matching in binary Huffman encoded texts [Huffman, D. (1952). A method for the construction of minimum redundancy codes, Proc. of the IRE, 40, 1098-1101]. A modified Knuth-Morris-Pratt (KMP) algorithm is used in order to overcome the problem of false matches, i.e., an occurrence of the encoded pattern in the encoded text that does not correspond to an occurrence of the pattern itself in the original text. We propose a bitwise KMP algorithm that can move one extra bit in the case of a mismatch, since the alphabet is binary. To avoid processing any bit of the encoded text more than once, a preprocessed table is used to determine how far to back up when a mismatch is detected; it is defined so that we are always able to align the start of the encoded pattern with the start of a codeword in the encoded text. We combine our KMP algorithm with two practical Huffman decoding schemes which handle more than a single bit per machine operation: skeleton trees, defined by Klein [Klein, S. T. (2000). Skeleton trees for efficient decoding of Huffman encoded texts. Information Retrieval, 3, 7-23], and numerical comparisons between special canonical values and portions of a sliding window, presented by Moffat and Turpin [Moffat, A., & Turpin, A. (1997). On the implementation of minimum redundancy prefix codes. IEEE Transactions on Communications, 45, 1200-1207]. Experiments show that our algorithms search considerably faster than the "decompress then search" method, so files can be kept in their compressed form, saving memory space. When compression gain is important, these algorithms are better than cgrep [Ferragina, P., Tommasi, A., & Manzini, G. (2004). C Library to search over compressed texts, http://roquefort.di.unipi.it/~ferrax/CompressedSearch], which is only slightly faster than ours.
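A false match is easy to reproduce with a toy prefix code. The following sketch (Python; the three-symbol code is our own illustration, not taken from the paper) shows the encoded pattern occurring in the encoded text at a bit offset where no real occurrence exists:

```python
# A small prefix code, chosen only for illustration (not from the paper).
code = {"a": "0", "b": "10", "c": "11"}

def encode(s):
    """Concatenate the codewords of the symbols of s into one bit string."""
    return "".join(code[ch] for ch in s)

text, pattern = "ca", "b"
etext, epat = encode(text), encode(pattern)  # "110" and "10"

# "10" occurs in "110" at bit offset 1, straddling the inside of c's
# codeword -- a false match, since "b" never occurs in "ca".
print(epat in etext)    # True
print(pattern in text)  # False
```

The modified KMP algorithm rejects such alignments by only reporting occurrences whose start is aligned with a codeword boundary.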

Topics: Huffman coding (60%), Pattern matching (52%), Bitwise operation (51%)

### Summary

• A modified Knuth-Morris-Pratt (KMP) algorithm is used in order to overcome the problem of false matches, i.e., an occurrence of the encoded pattern in the encoded text that does not correspond to an occurrence of the pattern itself in the original text.
• The authors propose a bitwise KMP algorithm that can move one extra bit in the case of a mismatch, since the alphabet is binary.
• To avoid processing any encoded text bit more than once, a preprocessed table is used to determine how far to back up when a mismatch is detected, and is defined so that the encoded pattern is always aligned with the start of a codeword in the encoded text.
• The authors call the combined algorithms sk-kmp and win-kmp respectively.
• The following table compares their algorithms with cgrep of Moura et al. [2] and agrep which searches the uncompressed text.
• Columns three and four compare the compression performance (size of the compressed text as a percentage of the uncompressed text) of the Huffman code (huff) with cgrep.
• The next columns compare the processing time of pattern matching of these algorithms.
• The “decompress and search” methods, which decode using skeleton trees or Moffat and Turpin’s sliding window and search in parallel using agrep, are called sk-d and win-d respectively.
• The search times are average values for patterns ranging from infrequent to frequent ones.
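As a rough sketch of the bitwise matching step, classic KMP can be run directly over the bit strings (a minimal Python illustration over '0'/'1' strings; the paper's extra-bit shift and codeword-alignment backup table are not reproduced here):

```python
def failure(pattern):
    """Classic KMP failure function for a binary pattern ('0'/'1' string)."""
    fail = [0] * len(pattern)
    k = 0
    for i in range(1, len(pattern)):
        while k > 0 and pattern[i] != pattern[k]:
            k = fail[k - 1]
        if pattern[i] == pattern[k]:
            k += 1
        fail[i] = k
    return fail

def kmp_binary(text, pattern):
    """Report all bit offsets at which pattern occurs in text.

    Over a binary alphabet a mismatched text bit is necessarily the
    complement of the pattern bit, which is the observation behind the
    paper's extra-bit shift; this plain version uses only the classic
    failure function.
    """
    fail = failure(pattern)
    hits, k = [], 0
    for i, bit in enumerate(text):
        while k > 0 and bit != pattern[k]:
            k = fail[k - 1]
        if bit == pattern[k]:
            k += 1
        if k == len(pattern):
            hits.append(i - len(pattern) + 1)
            k = fail[k - 1]
    return hits

print(kmp_binary("0101101", "101"))  # [1, 4]
```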



Adapting the Knuth-Morris-Pratt Algorithm for Pattern Matching in Huffman Encoded Texts
Ajay Daptardar and Dana Shapira
{amax/shapird}@cs.brandeis.edu
Computer Science Department, Brandeis University, Waltham, MA
We perform compressed pattern matching in Huffman encoded texts. A modified Knuth-Morris-Pratt (KMP) algorithm is used in order to overcome the problem of false matches, i.e., an occurrence of the encoded pattern in the encoded text that does not correspond to an occurrence of the pattern itself in the original text. We propose a bitwise KMP algorithm that can move one extra bit in the case of a mismatch, since the alphabet is binary. To avoid processing any encoded text bit more than once, a preprocessed table is used to determine how far to back up when a mismatch is detected, and is defined so that the encoded pattern is always aligned with the start of a codeword in the encoded text. We combine our KMP algorithm with two Huffman decoding algorithms which handle more than a single bit per machine operation: skeleton trees defined by Klein [1], and numerical comparisons between special canonical values and portions of a sliding window presented in Moffat and Turpin [3]. We call the combined algorithms sk-kmp and win-kmp respectively.
The following table compares our algorithms with cgrep of Moura et al. [2] and agrep, which searches the uncompressed text. Columns three and four compare the compression performance (size of the compressed text as a percentage of the uncompressed text) of the Huffman code (huff) with cgrep. The next columns compare the pattern matching times of these algorithms. The "decompress and search" methods, which decode using skeleton trees or Moffat and Turpin's sliding window and search in parallel using agrep, are called sk-d and win-d respectively. The search times are average values for patterns ranging from infrequent to frequent ones.
| Files | Size (bytes) | Compression: huff (%) | Compression: cgrep (%) | Search: cgrep (s) | Search: sk-kmp (s) | Search: win-kmp (s) | Search: sk-d (s) | Search: win-d (s) |
|---|---|---|---|---|---|---|---|---|
| world192.txt | 2,473,400 | 50.88 | 32.20 | 0.07 | 0.13 | 0.08 | 0.21 | 0.13 |
| bible.txt | 4,047,392 | 49.70 | 26.18 | 0.05 | 0.22 | 0.13 | 0.36 | 0.22 |
| books.txt | 12,582,090 | 52.10 | 30.30 | 0.21 | 0.69 | 0.39 | 1.21 | 0.74 |
| 95-03-erp.txt | 23,976,547 | 34.49 | 25.14 | 0.18 | 1.10 | 0.65 | 1.80 | 1.11 |
As can be seen, the KMP variants are faster than the corresponding "decompress and search" methods but slower than cgrep. However, when compression performance is important, or when one does not want to re-compress Huffman encoded files in order to use cgrep, the proposed algorithms are the better choice.
References
[1] Klein, S. T., Skeleton trees for efficient decoding of Huffman encoded texts, Information Retrieval, 3, 7-23, 2000.
[2] Moura, E. S., Navarro, G., Ziviani, N. and Baeza-Yates, R., Fast and flexible word searching on compressed text, ACM TOIS, 18(2), 113-139, 2000.
[3] Turpin, A. and Moffat, A., Fast file search using text compression, Proc. 20th Australian Computer Science Conference, 1-8, 1997.
##### Citations

Journal ArticleDOI
Abdulsalam Alarabeyyat
TL;DR: The experimental results show that the proposed algorithm could achieve an excellent compression ratio without losing data when compared to the standard compression algorithms.
Abstract: The development of multimedia and digital imaging has led to a high quantity of data required to represent modern imagery. This requires large disk space for storage and long transmission times over computer networks, both of which are relatively expensive. These factors prove the need for image compression. Image compression addresses the problem of reducing the amount of space required to represent a digital image, yielding a compact representation and thereby reducing the image storage/transmission time requirements. The key idea is to remove the redundancy present within an image so as to reduce its size without affecting its essential information. We are concerned with lossless image compression in this paper. Our proposed approach is a mix of a number of already existing techniques. It works as follows: first, we apply the well-known Lempel-Ziv-Welch (LZW) algorithm to the image in hand. The output of the first step is forwarded to the second step, where the Bose, Chaudhuri and Hocquenghem (BCH) error detection and correction algorithm is used. To improve the compression ratio, the proposed approach applies the BCH algorithm repeatedly until "inflation" is detected. The experimental results show that the proposed algorithm achieves an excellent compression ratio without losing data when compared to the standard compression algorithms.

26 citations

• ...An advantage of this technique is that it allows for higher compression ratio than the lossless [3,4]....


Journal ArticleDOI
01 Sep 2017
TL;DR: This research modeled the search process of the Knuth-Morris-Pratt algorithm as an easy-to-understand visualization; the algorithm was selected because it is easy to learn and easy to implement in many programming languages.
Abstract: In this research we modeled the search process of the Knuth-Morris-Pratt algorithm in the form of an easy-to-understand visualization. The Knuth-Morris-Pratt algorithm was selected because it is easy to learn and easy to implement in many programming languages.

24 citations

Journal ArticleDOI
TL;DR: The Wavelet tree is adapted, in this paper, to Fibonacci codes, so that in addition to supporting direct access to the Fibonacci encoded file, it also increases the compression savings when compared to the original Fibonacci compressed file.
Abstract: A Wavelet tree is a data structure adjoined to a file that has been compressed by a variable length encoding, which allows direct access to the underlying file, resulting in the fact that the compressed file is not needed any more. We adapt, in this paper, the Wavelet tree to Fibonacci codes, so that in addition to supporting direct access to the Fibonacci encoded file, we also increase the compression savings when compared to the original Fibonacci compressed file. The improvements are achieved by means of a new pruning technique.

21 citations

### Cites methods from "Adapting the Knuth-Morris-Pratt alg..."

• ...This has also been applied on Huffman trees [20] producing a compact tree for efficient use, such as compressed pattern matching [29]....


Posted Content
TL;DR: This paper presents two efficient algorithms for the binary string matching problem adapted to completely avoid any reference to bits allowing to process pattern and text byte by byte.
Abstract: The binary string matching problem consists in finding all the occurrences of a pattern in a text where both strings are built over a binary alphabet. This is an interesting problem in computer science, since binary data are omnipresent in telecom and computer network applications. Moreover, the problem also finds applications in the field of image processing and in pattern matching on compressed texts. Recently it has been shown that adaptations of classical exact string matching algorithms are not very efficient on binary data. In this paper we present two efficient algorithms for the problem, adapted to completely avoid any reference to bits and so allowing pattern and text to be processed byte by byte. Experimental results show that the new algorithms outperform existing solutions in most cases.

14 citations

Journal ArticleDOI
TL;DR: The pruning procedure is improved and empirical evidence is given that when memory storage is of main concern, the suggested data structure outperforms other direct access techniques such as those due to Külekci, DACs and sampling, with a slowdown as compared to DACs and fixed length encoding.
Abstract: Given a file T, we suggest a data structure based on pruning a Huffman shaped Wavelet tree (WT) according to the underlying skeleton Huffman tree that enables direct access to the i-th element of T. This pruned WT is especially designed to support faster random access and save memory storage, at the price of less effective rank and select operations, as compared to the original Huffman shaped WT. The savings are significant only if the underlying alphabet is large enough. We give empirical evidence that when memory storage is of main concern, our suggested data structure generally outperforms other direct access techniques such as those due to Külekci, DACs and sampling, with a slowdown as compared to DACs and fixed length encoding. A Huffman shaped Wavelet tree is pruned by means of a skeleton Huffman tree. The data structure supports faster random access and saves memory storage. It enables an enhanced rank and select operation rather than an exhaustive one.

14 citations

### Cites methods from "Adapting the Knuth-Morris-Pratt alg..."

• ...Skeleton trees have been used to accelerate compressed pattern matching in [23]....


##### References

Journal ArticleDOI
01 Sep 1952
TL;DR: A minimum-redundancy code is one constructed in such a way that the average number of coding digits per message is minimized.
Abstract: An optimum method of coding an ensemble of messages consisting of a finite number of members is developed. A minimum-redundancy code is one constructed in such a way that the average number of coding digits per message is minimized.

5,013 citations

Journal ArticleDOI
TL;DR: An algorithm is presented which finds all occurrences of one given string within another, in running time proportional to the sum of the lengths of the strings, showing that the set of concatenations of even palindromes, i.e., the language $\{\alpha \alpha ^R\}^*$, can be recognized in linear time.
Abstract: An algorithm is presented which finds all occurrences of one given string within another, in running time proportional to the sum of the lengths of the strings. The constant of proportionality is low enough to make this algorithm of practical use, and the procedure can also be extended to deal with some more general pattern-matching problems. A theoretical application of the algorithm shows that the set of concatenations of even palindromes, i.e., the language $\{\alpha \alpha ^R\}^*$, can be recognized in linear time. Other algorithms which run even faster on the average are also considered.

3,023 citations

Journal ArticleDOI
TL;DR: The string-matching problem is a very common problem; there are many extensions to this problem, for example looking for a set of patterns, a pattern with "wild cards," or a regular expression.
Abstract: The string-matching problem is a very common problem. We are searching for a string P = p1p2...pm inside a large text file T = t1t2...tn, both sequences of characters from a finite character set Σ. The characters may be English characters in a text file, DNA base pairs, lines of source code, angles between edges in polygons, machines or machine parts in a production schedule, music notes and tempo in a musical score, and so forth. We want to find all occurrences of P in T; namely, we are searching for the set of starting positions F = {i | 1 ≤ i ≤ n - m + 1 such that t_i t_{i+1} ... t_{i+m-1} = P}. The two most famous algorithms for this problem are the Boyer-Moore algorithm [3] and the Knuth-Morris-Pratt algorithm [10]. There are many extensions to this problem; for example, we may be looking for a set of patterns, a pattern with "wild cards," or a regular expression. String-matching tools are included in every reasonable text editor, word processor, and many other applications.

782 citations

Journal ArticleDOI
TL;DR: A fast compression technique for natural language texts that allows a large number of variations over the basic word and phrase search capability, such as sets of characters, arbitrary regular expressions, and approximate matching.
Abstract: We present a fast compression technique for natural language texts. The novelties are that (1) decompression of arbitrary portions of the text can be done very efficiently, (2) exact search for words and phrases can be done on the compressed text directly, using any known sequential pattern-matching algorithm, and (3) word-based approximate and extended search can also be done efficiently without any decoding. The compression scheme uses a semistatic word-based model and a Huffman code where the coding alphabet is byte-oriented rather than bit-oriented. We compress typical English texts to about 30% of their original size, against 40% and 35% for Compress and Gzip, respectively. Compression time is close to that of Compress and approximately half of the time of Gzip, and decompression time is lower than that of Gzip and one third of that of Compress. We present three algorithms to search the compressed text. They allow a large number of variations over the basic word and phrase search capability, such as sets of characters, arbitrary regular expressions, and approximate matching. Separators and stopwords can be discarded at search time without significantly increasing the cost. When searching for simple words, the experiments show that running our algorithms on a compressed text is twice as fast as running the best existing software on the uncompressed version of the same text. When searching complex or approximate patterns, our algorithms are up to 8 times faster than the search on uncompressed text. We also discuss the impact of our technique in inverted files pointing to logical blocks and argue for the possibility of keeping the text compressed all the time, decompressing only for displaying purposes.

271 citations

### "Adapting the Knuth-Morris-Pratt alg..." refers background in this paper

• ...[2] and agrep which searches the uncompressed text....


Journal ArticleDOI
Abstract: The current explosion of stored information necessitates a new model of pattern matching, that of compressed matching. In this model one tries to find all occurrences of a pattern in a compressed text in time proportional to the compressed text size, i.e., without decompressing the text. The most effective general purpose compression algorithms are adaptive, in that the text represented by each compression symbol is determined dynamically by the data. As a result, the encoding of a substring depends on its location. Thus the same substring may "look different" every time it appears in the compressed text. In this paper we consider pattern matching without decompression in the UNIX Z-compression. This is a variant of the Lempel-Ziv adaptive compression scheme. If n is the length of the compressed text and m is the length of the pattern, our algorithms find the first pattern occurrence in time O(n + m^2) or O(n log m + m). We also introduce a new criterion to measure compressed matching algorithms, that of extra space. We show how to modify our algorithms to achieve a trade-off between the amount of extra space used and the algorithm's time complexity.

222 citations