Journal ArticleDOI

# A comparison of approximate string matching algorithms

01 Dec 1996-Software - Practice and Experience (John Wiley & Sons, Inc.)-Vol. 26, Iss: 12, pp 1439-1458
TL;DR: It turns out that none of the algorithms is the best for all values of the problem parameters, and the speed differences between the methods can be considerable.
Abstract: Experimental comparisons of the running time of approximate string matching algorithms for the k differences problem are presented. Given a pattern string, a text string, and an integer k, the task is to find all approximate occurrences of the pattern in the text with at most k differences (insertions, deletions, changes). We consider seven algorithms based on different approaches including dynamic programming, Boyer-Moore string matching, suffix automata, and the distribution of characters. It turns out that none of the algorithms is the best for all values of the problem parameters, and the speed differences between the methods can be considerable.

### Introduction

• Experimental comparison of the running time of approximate string matching algorithms for the k differences problem is presented.
• Tarhio and Ukkonen [8, 9] present an algorithm which is based on the Boyer-Moore approach and works in sublinear average time.
• The theoretical analyses given in the literature are helpful, but it is important that the theory be complemented with sufficiently extensive experimental comparisons.
• The algorithm evaluates a modified form of table D.


• For every C-diagonal, Algorithm GP performs an iteration that evaluates it from the two previous C-diagonals (lines 7–38).
• The evaluation of each entry starts with evaluating the Col value (line 11).
• The sequence is updated on lines 28–35. Procedure Within(d), called on line 14, tests if text position d is within some interval of the k first reference triples in the sequence.
• Instead of the whole C defined above, table C of the algorithm contains only three successive C-diagonals.
• The use of this buffer of three diagonals is organized with variables B1, B2, and B3.


• The scanning phase (lines 3–16) scans over the text and marks the parts that may contain approximate occurrences of P.
• Parameter x of call EDP(x) tells how many columns should be evaluated for one marked diagonal.
• The minimum value m for x is applicable for DC.
• The scanning phase is almost identical to the original algorithm.
• If f(x) and q(x) are the frequencies of character x in the pattern and in Q, variable Z has as its value the sum, over all characters x in Q, of max(q(x) − f(x), 0). The value of Z is computed together with table C, which maintains the difference f(x) − q(x) for every x.
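
The character-distribution bound in the last bullet can be sketched in Python (an illustration of the formula only; the function and variable names are ours, not the paper's):

```python
from collections import Counter

def char_distribution_bound(pattern: str, window: str) -> int:
    """Lower bound on the number of differences between `pattern` and the
    text window: every window character that cannot be matched by some
    pattern character forces at least one edit."""
    f = Counter(pattern)   # f(x): frequency of x in the pattern
    q = Counter(window)    # q(x): frequency of x in the window Q
    # Z = sum over x in Q of max(q(x) - f(x), 0)
    return sum(max(qx - f[x], 0) for x, qx in q.items())
```

For example, with pattern "cacd" and window "bbbb" no window character can be matched, so the bound is 4 and the window cannot contain an occurrence when k < 4.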


• In the former case there is no approximate occurrence at the current alignment, and in the latter case a potential approximate occurrence has been found.
• For determining the length of the shift, i.e. what is the next potential diagonal after h for marking, the authors search for the first diagonal after h where at least one of the characters t(h+m), t(h+m−1), …, t(h+m−k) matches the corresponding character of P.
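
The shift rule in the last bullet can be sketched in Python (our naive transcription of the sentence, not the authors' Ada code, which precomputes the shifts into a table; the cap m − k on the shift is our assumption):

```python
def abm_shift(pattern: str, text: str, h: int, k: int) -> int:
    """First diagonal h + s (s >= 1) on which at least one of the text
    characters t[h+m], t[h+m-1], ..., t[h+m-k] (1-based) matches the
    pattern character aligned with it; capped at m - k (our assumption)."""
    m = len(pattern)
    for s in range(1, m - k):
        for i in range(k + 1):
            t_pos = h + m - i        # 1-based position in the text
            p_pos = m - i - s        # aligned pattern position on diagonal h + s
            if p_pos >= 1 and 1 <= t_pos <= len(text) \
                    and text[t_pos - 1] == pattern[p_pos - 1]:
                return s
    return m - k                     # no character matched: take the maximal shift
```

For instance, with pattern "abcd", text "zzzb", h = 0 and k = 1, the examined characters are t4 = 'b' and t3 = 'z'; the first matching diagonal is found at shift 2 (t4 aligned with p2 = 'b').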


• The authors performed an extensive test program on all seven algorithms DP, EDP, GP, DC, UW, MM, and ABM described in the previous sections.
• In their tests, the authors used random patterns of varying lengths and random texts of length 100,000 characters over alphabets of different sizes.
• Because algorithms EDP, DC, MM, and ABM were better than the others, the authors studied relations of their execution times more carefully.
• The execution times of EDP and ABM on Sun (shown in Table II for some parameter values) were on the average 68 per cent and 60 per cent, respectively, of the corresponding times on Vaxstation.


SOFTWARE—PRACTICE AND EXPERIENCE, VOL. 1(1), 1–4 (JANUARY 1988)
A Comparison of
Approximate String Matching Algorithms
PETTERI JOKINEN, JORMA TARHIO, AND ESKO UKKONEN
Department of Computer Science, P.O. Box 26 (Teollisuuskatu 23), FIN-00014 University of Helsinki, Finland
(email: tarhio@cs.helsinki.fi)
SUMMARY

An experimental comparison of the running time of approximate string matching algorithms for the k differences problem is presented. Given a pattern string, a text string, and an integer k, the task is to find all approximate occurrences of the pattern in the text with at most k differences (insertions, deletions, changes). We consider seven algorithms based on different approaches, including dynamic programming, Boyer-Moore string matching, suffix automata, and the distribution of characters. It turns out that none of the algorithms is the best for all values of the problem parameters, and the speed differences between the methods can be considerable.

KEY WORDS String matching; edit distance; k differences problem
INTRODUCTION

We consider the k differences problem, a version of the approximate string matching problem. Given two strings, text T = t1 t2 … tn and pattern P = p1 p2 … pm, and an integer k, the task is to find the end points of all approximate occurrences of P in T. An approximate occurrence means a substring P′ of T such that at most k editing operations (insertions, deletions, changes) are needed to convert P′ to P.
There are several algorithms proposed for this problem; see e.g. the survey of Galil and Giancarlo [1]. The problem can be solved in time O(mn) by dynamic programming [2, 3]. A very simple improvement giving an O(kn) expected time solution for random strings is described by Ukkonen [3]. Later, Landau and Vishkin [4, 5], Galil and Park [6], and Ukkonen and Wood [7] gave different algorithms that consist of preprocessing the pattern in time O(m²) (or O(m)) and scanning the text in worst-case time O(kn). Tarhio and Ukkonen [8, 9] present an algorithm which is based on the Boyer-Moore approach and works in sublinear average time. There are also several other efficient solutions [10–17], and some [11–14] of them work in sublinear average time. Currently O(kn) is the best worst-case bound known if the preprocessing time is allowed to be at most O(m²). There are also fast algorithms [9, 17–20] for the k mismatches problem, a reduced form of the k differences problem in which a change is the only editing operation allowed.

It is clear that with such a multitude of different solutions to the same problem it is difficult to select a proper method for each particular approximate string matching task. The theoretical analyses given in the literature are helpful, but it is important that the theory be complemented with sufficiently extensive experimental comparisons.
We will present an experimental comparison of the running times of seven algorithms for the k differences problem. The tested algorithms are: two dynamic programming methods [2, 3], the Galil-Park algorithm [6], the Ukkonen-Wood algorithm [7], an algorithm counting the distribution of characters [18], the approximate Boyer-Moore algorithm [9], and an algorithm based on maximal matches between the pattern and the text [10]. (The last algorithm [10] is very similar to the linear algorithm of Chang and Lawler [11], although the two were invented independently.) We give brief descriptions of the algorithms as well as Ada code for their central parts. As our emphasis is on the experiments, the reader is advised to consult the original references for more detailed descriptions of the methods.

The paper is organized as follows. First, the framework based on edit distance is introduced. Then the seven algorithms are presented. Finally, the comparison of the algorithms is described and its results are summarized.
THE K DIFFERENCES PROBLEM

We use the concept of edit distance [21, 22] to measure the goodness of approximate occurrences of a pattern. The edit distance between two strings, A and B, in an alphabet Σ can be defined as the minimum number of editing steps needed to convert A to B. Each editing step is a rewriting step of the form a → ε (a deletion), ε → b (an insertion), or a → b (a change), where a and b are in Σ and ε is the empty string.
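
As a quick illustration of the definition, here is a minimal Python sketch of the standard edit distance computation (not part of the paper's code):

```python
def edit_distance(a: str, b: str) -> int:
    """Minimum number of insertions, deletions and changes converting a to b."""
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i                 # delete all of a[:i]
    for j in range(n + 1):
        d[0][j] = j                 # insert all of b[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            d[i][j] = min(d[i - 1][j] + 1,                        # deletion
                          d[i][j - 1] + 1,                        # insertion
                          d[i - 1][j - 1] + (a[i - 1] != b[j - 1]))  # change
    return d[m][n]
```

For example, the edit distance between "kitten" and "sitting" is 3 (two changes and one insertion).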
The k differences problem is, given pattern P = p1 p2 … pm and text T = t1 t2 … tn in an alphabet Σ, and an integer k, to find all such j that the edit distance (i.e., the number of differences) between P and some substring of T ending at tj is at most k. The basic solution of the problem is the following dynamic programming method [2, 3]: Let D be an (m + 1) by (n + 1) table such that D(i, j) is the minimum edit distance between p1 p2 … pi and any substring of T ending at tj. Then

D(0, j) = 0, 0 ≤ j ≤ n;

D(i, j) = min { D(i − 1, j) + 1,
                D(i − 1, j − 1) + (if pi = tj then 0 else 1),
                D(i, j − 1) + 1 }.
Table D can be evaluated column-by-column in time O(mn). Whenever D(m, j) is found to be at most k for some j, there is an approximate occurrence of P ending at tj with edit distance D(m, j) ≤ k. Hence j is a solution to the k differences problem. In Fig. 1 there is an example of table D for T = bcbacbbb and P = cacd. The pattern occurs at positions 5 and 6 of the text with at most 2 differences.
All the algorithms presented work within this model, but they utilize different approaches in restricting the number of entries that are necessary to evaluate in table D. Some of the algorithms work in two phases: scanning and checking. The scanning phase searches for potential occurrences of the pattern, and the checking phase verifies whether the suggested occurrences are good or not. The checking is always done using dynamic programming.

The comparison was carried out in 1991. Some of the newer methods will likely be faster than the tested algorithms for certain values of the problem parameters.
Figure 1. Table D for T = bcbacbbb and P = cacd.

| i \ j | 0 | 1 (b) | 2 (c) | 3 (b) | 4 (a) | 5 (c) | 6 (b) | 7 (b) | 8 (b) |
|-------|---|-------|-------|-------|-------|-------|-------|-------|-------|
| 0     | 0 | 0     | 0     | 0     | 0     | 0     | 0     | 0     | 0     |
| 1 (c) | 1 | 1     | 0     | 1     | 1     | 0     | 1     | 1     | 1     |
| 2 (a) | 2 | 2     | 1     | 1     | 1     | 1     | 1     | 2     | 2     |
| 3 (c) | 3 | 3     | 2     | 2     | 2     | 1     | 2     | 2     | 3     |
| 4 (d) | 4 | 4     | 3     | 3     | 3     | 2     | 2     | 3     | 3     |
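
The basic method just described can be sketched in Python as follows (a paraphrase of the trivial dynamic programming solution; the paper's actual code is in Ada):

```python
def k_differences_dp(pattern: str, text: str, k: int) -> list:
    """Report all end positions j (1-based) such that some substring of
    `text` ending at t_j is within k differences of `pattern`, by
    evaluating table D column by column in O(mn) time."""
    m, n = len(pattern), len(text)
    col = list(range(m + 1))        # column 0: D(i, 0) = i
    matches = []
    for j in range(1, n + 1):
        prev = col[:]               # column j - 1
        col[0] = 0                  # D(0, j) = 0
        for i in range(1, m + 1):
            cost = 0 if pattern[i - 1] == text[j - 1] else 1
            col[i] = min(col[i - 1] + 1,      # D(i-1, j) + 1
                         prev[i] + 1,         # D(i, j-1) + 1
                         prev[i - 1] + cost)  # D(i-1, j-1) + cost
        if col[m] <= k:             # D(m, j) <= k: occurrence ends at j
            matches.append(j)
    return matches
```

On the example of Fig. 1 (T = bcbacbbb, P = cacd, k = 2) this reports the end positions 5 and 6.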
ALGORITHMS

Dynamic programming

We consider two different versions of dynamic programming for the k differences problem. In the previous section we introduced the trivial solution which computes all entries of table D. The code of this algorithm is straightforward [2, 21], and we do not present it here. In the following, we refer to this solution as Algorithm DP.

Diagonal h of D, for h = −m, …, n, consists of all D(i, j) such that j − i = h. Considering computation along diagonals gives a simple way to limit unnecessary computation. It is easy to show that the entries on every diagonal h are monotonically increasing [22]. Therefore the computation along a diagonal can be stopped when the threshold value k + 1 is reached, because the rest of the entries on that diagonal will be greater than k. This leads to Algorithm EDP (Enhanced Dynamic Programming), working in average time O(kn) [3]. Algorithm EDP is shown in Fig. 2.
In Algorithm EDP, the text and the pattern are stored in tables T and P. Table D is evaluated a column at a time. The entries of the current column are stored in table H, and the value of D(i − 1, j − 1) is temporarily stored in variable C. A work space of O(m) is enough, because every D(i, j) depends only on the entries D(i − 1, j), D(i, j − 1), and D(i − 1, j − 1). Variable Top tells the row where the topmost diagonal still under the threshold value k + 1 intersects the current column. On line 12 an approximate occurrence is reported when row m is reached.
Galil-Park

The O(kn) algorithm presented by Galil and Park [6] is based on the diagonalwise monotonicity of the entries of table D. It also uses so-called reference triples that represent matching substrings of the pattern and the text. This approach was used already by Landau and Vishkin [4]. The algorithm evaluates a modified form of table D. The core of the algorithm is shown in Fig. 3 as Algorithm GP.
In the preprocessing of pattern P (procedure call Prefixes(P) on line 2), an upper triangular table Prefix(i, j), 1 ≤ i < j ≤ m, is computed, where Prefix(i, j) is the length of the longest common prefix of pi … pm and pj … pm.
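
This preprocessing can be sketched naively in Python (a quadratic-time illustration of what the Prefix table contains; the paper's Prefixes procedure is more refined):

```python
def prefixes(p: str) -> dict:
    """Prefix(i, j), 1 <= i < j <= m: length of the longest common prefix
    of the pattern suffixes p_i...p_m and p_j...p_m (1-based indices)."""
    m = len(p)
    table = {}
    for i in range(1, m + 1):
        for j in range(i + 1, m + 1):
            l = 0
            # extend while both suffixes agree and the shorter one lasts
            while j + l <= m and p[i - 1 + l] == p[j - 1 + l]:
                l += 1
            table[(i, j)] = l
    return table
```

For P = "cacd", for instance, Prefix(1, 3) = 1 (both suffixes "cacd" and "cd" start with 'c') and Prefix(2, 4) = 0.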
A reference triple (u, v, w) consists of a start position u, an end position v, and a diagonal w such that substring tu … tv matches substring pu−w … pv−w and tv+1 ≠ pv+1−w. Algorithm GP manipulates several triples; the components of the r-th triple are denoted U(r), V(r), and W(r).
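
The content of a reference triple can be illustrated with a short Python sketch that extends a match along a diagonal (1-based positions as in the paper; the helper function is ours):

```python
def reference_triple(pattern: str, text: str, u: int, w: int) -> tuple:
    """Extend a match along diagonal w starting at text position u
    (1-based): returns (u, v, w) such that t_u .. t_v equals
    p_{u-w} .. p_{v-w}, and t_{v+1} differs from p_{v+1-w} (or one of the
    strings ends)."""
    m, n = len(pattern), len(text)
    v = u - 1                          # empty match so far
    while v + 1 <= n and 1 <= v + 1 - w <= m and text[v] == pattern[v - w]:
        v += 1                         # text[v] is t_{v+1}; pattern[v-w] is p_{v+1-w}
    return (u, v, w)
```

For T = bcbacbbb and P = cacd, starting at u = 4 on diagonal w = 2 the match "ac" = t4 t5 = p2 p3 extends to v = 5, giving the triple (4, 5, 2).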
```
 1 begin
 2   Top := k + 1;
 3   for I in 0 .. m loop H(I) := I; end loop;
 4   for J in 1 .. n loop
 5     C := 0;
 6     for I in 1 .. Top loop
 7       if P(I) = T(J) then E := C;
 8       else E := Min((H(I-1), H(I), C)) + 1; end if;
 9       C := H(I); H(I) := E;
10     end loop;
11     while H(Top) > k loop Top := Top - 1; end loop;
12     if Top = m then Report_Match(J);
13     else Top := Top + 1; end if;
14   end loop;
15 end;
```

Figure 2. Algorithm EDP.
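
A Python transcription of Algorithm EDP may make the cutoff logic easier to follow (a sketch mirroring the Ada code above; the min with m on the initial Top is our guard for the case k + 1 > m):

```python
def edp(pattern: str, text: str, k: int) -> list:
    """Algorithm EDP: evaluate table D column by column, but only down to
    the cutoff row Top (rows below it are already known to exceed k)."""
    m, n = len(pattern), len(text)
    matches = []
    h = list(range(m + 1))          # H(I) = I: column j = 0
    top = min(k + 1, m)             # the min with m is our guard, not the paper's
    for j in range(1, n + 1):
        c = 0                       # D(0, j - 1)
        for i in range(1, top + 1):
            if pattern[i - 1] == text[j - 1]:
                e = c               # match: D(i, j) = D(i - 1, j - 1)
            else:
                e = min(h[i - 1], h[i], c) + 1
            c, h[i] = h[i], e       # c becomes the old D(i, j - 1)
        while h[top] > k:           # move the cutoff up past entries > k
            top -= 1
        if top == m:
            matches.append(j)       # D(m, j) <= k: occurrence ends at j
        else:
            top += 1
    return matches
```

On the example of Fig. 1 this reports the same end positions 5 and 6 as the full table.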
For diagonal d and integer e, let C(e, d) be the largest column j such that D(j − d, j) = e. In other words, the entries of value e on diagonal d of D end at column C(e, d). Now

C(e, d) = Col + Jump(Col + 1 − d, Col + 1)

holds, where

Col = max { C(e − 1, d − 1) + 1, C(e − 1, d) + 1, C(e − 1, d + 1) }

and Jump(i, j) is the length of the longest common prefix of pi … pm and tj … tn for all i, j.
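
The definition of C(e, d) can be checked against the example of Fig. 1 with a small Python sketch that reads C directly off table D (the algorithm itself never builds D; this is for illustration only):

```python
def c_value(pattern: str, text: str, e: int, d: int):
    """C(e, d): the largest column j with D(j - d, j) = e, computed from
    its definition via the full dynamic programming table D."""
    m, n = len(pattern), len(text)
    # build table D by the recurrence of the previous section
    D = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        D[i][0] = i
    for j in range(1, n + 1):
        for i in range(1, m + 1):
            cost = 0 if pattern[i - 1] == text[j - 1] else 1
            D[i][j] = min(D[i - 1][j] + 1, D[i][j - 1] + 1,
                          D[i - 1][j - 1] + cost)
    best = None
    for j in range(n + 1):          # scan diagonal d for entries of value e
        i = j - d
        if 0 <= i <= m and D[i][j] == e:
            best = j
    return best
```

For T = bcbacbbb, P = cacd and diagonal d = 1, the diagonal entries D(j − 1, j) for j = 1..5 are 0, 0, 1, 2, 2, so C(0, 1) = 2, C(1, 1) = 3 and C(2, 1) = 5.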
Let C-diagonal g consist of the entries C(e, d) such that e + d = g. For every C-diagonal, Algorithm GP performs an iteration that evaluates it from the two previous C-diagonals (lines 7–38). The evaluation of each entry starts with evaluating the Col value (line 11). The rest of the loop (lines 12–35) effectively finds the value Jump(Col + 1 − d, Col + 1) using the reference triples and table Prefix. A new C-value is stored on line 24.

The algorithm maintains an ordered sequence of reference triples. The sequence is updated on lines 28–35. Procedure Within(d), called on line 14, tests if text position d is within some interval of the k first reference triples in the sequence. In the positive case, variable R is updated to express the index of the reference triple whose interval contains text position d. A match is reported on line 26. Instead of the whole table C defined above, table C of the algorithm contains only three successive C-diagonals. The use of this buffer of three diagonals is organized with variables B1, B2, and B3.
Ukkonen-Wood

Another O(kn) algorithm, given by Ukkonen and Wood [7], has an overall structure identical to that of the algorithm of Galil and Park. However, no reference triples are used. Instead, to find the necessary values Jump(i, j), the text is scanned with a modified suffix automaton.
```
 1 begin
 2   Prefixes(P);
 3   for I in -1 .. k loop
 4     C(I, 1) := -Infinity; C(I, 2) := -1;
 5   end loop;
 6   B1 := 0; B2 := 1; B3 := 2;
 7   for J in 0 .. n - m + k loop
 8     C(-1, B1) := J; R := 0;
 9     for E in 0 .. k loop
10       H := J - E;
11       Col := Max((C(E-1, B2) + 1, C(E-1, B3) + 1, C(E-1, B1)));
12       Se := Col + 1; Found := false;
13       while not Found loop
14         if Within(Col + 1) then
15           F := V(R) - Col; G := Prefix(Col+1-H, Col+1-W(R));
16           if F = G then Col := Col + F;
17           else Col := Col + Min(F, G); Found := true; end if;
18         else
19           if Col - H < m and then P(Col+1-H) = T(Col+1) then
20             Col := Col + 1;
21           else Found := true; end if;
22         end if;
23       end loop;
24       C(E, B1) := Min(Col, m + H);
25       if C(E, B1) = H + m and then C(E-1, B2) < m + H then
26         Report_Match(H + m);
27       end if;
28       if V(E) >= C(E, B1) then
29         if E = 0 then U(E) := J + 1;
30         else U(E) := Max(U(E), V(E-1) + 1); end if;
31       else
32         V(E) := C(E, B1); W(E) := H;
33         if E = 0 then U(E) := J + 1;
34         else U(E) := Max(Se, V(E-1) + 1); end if;
35       end if;
36     end loop;
37     B := B1; B1 := B3; B3 := B2; B2 := B;
38   end loop;
39 end;
```

Figure 3. Algorithm GP.

##### Citations
Journal ArticleDOI
TL;DR: This work surveys the current techniques to cope with the problem of string matching that allows errors, and focuses on online searching and mostly on edit distance, explaining the problem and its relevance, its statistical behavior, its history and current developments, and the central ideas of the algorithms.
Abstract: We survey the current techniques to cope with the problem of string matching that allows errors. This is becoming a more and more relevant issue for many fast growing areas such as information retrieval and computational biology. We focus on online searching and mostly on edit distance, explaining the problem and its relevance, its statistical behavior, its history and current developments, and the central ideas of the algorithms and their complexities. We present a number of experiments to compare the performance of the different algorithms and show which are the best choices. We conclude with some directions for future work and open problems.

2,723 citations

### Cites background or methods from "A comparison of approximate string ..."

• ...There exist other surveys on approximate string matching, which are however too old for this fast moving area [Hall and Dowling 1980; Sankoff and Kruskal 1983; Apostolico and Galil 1985; Galil and Giancarlo 1988; Jokinen et al. 1996] (the last one was in its definitive form in 1991)....

[...]

• ...2 Ukkonen (1983). In 1983, Ukkonen [1985a] presented an algorithm able to compute the edit distance between two strings x and y in O(ed(x, y)²) time, or to check in time O(k²) whether that distance was ≤ k or not....

[...]

• ...Key: TU93 = [Tarhio and Ukkonen 1993], JTU96 = [Jokinen et al. 1996], Nav97a = [Navarro 1997a], CL94 = [Chang and Lawler 1994], Ukk92 = [Ukkonen 1992], BYN99 = [Baeza-Yates and Navarro 1999], WM92b = [Wu and Manber 1992b], BYP96 = [Baeza-Yates and Perleberg 1996], Shi96 = [Shi 1996], NBY99c = [Navarro and Baeza-Yates 1999c], Tak94 = [Takaoka 1994], CM94 = [Chang and Marr 1994], NBY98a = [Navarro and Baeza-Yates 1998a], NR00 = [Navarro and Raffinot 2000], ST95 = [Sutinen and Tarhio 1995], and GKHO97 = [Giegerich et al. 1997]. used Boyer-Moore-Horspool techniques [Boyer and Moore 1977; Horspool 1980]. We divide this area in two parts: moderate and very long patterns....

[...]

• ...On the left, the automaton of Ukkonen [1985b] where each column is...

[...]

• ...Key: BYN99 = [Baeza-Yates and Navarro 1999], NBY98a = [Navarro and Baeza-Yates 1998a], JTU96 = [Jokinen et al. 1996], Ukk92 = [Ukkonen 1992], CL94 = [Chang and Lawler 1994], WM92b = [Wu and Manber 1992b], TU93 = [Tarhio and Ukkonen 1993], Tak94 = [Takaoka 1994], Shi96 = [Shi 1996], ST95 =…...

[...]

Book
27 May 2002
TL;DR: This book presents a practical approach to string matching problems, focusing on the algorithms and implementations that perform best in practice, and includes all of the most significant new developments in complex pattern searching.
Abstract: This book presents a practical approach to string matching problems, focusing on the algorithms and implementations that perform best in practice. It covers searching for simple, multiple, and extended strings, as well as regular expressions, exactly and approximately. It includes all of the most significant new developments in complex pattern searching. The clear explanations, step-by-step examples, algorithms pseudo-code, and implementation efficiency maps will enable researchers, professionals, and students in bioinformatics, computer science, and software engineering to choose the most appropriate algorithms for their applications.

463 citations

Proceedings ArticleDOI
18 Dec 2006
TL;DR: The characteristics of personal names are discussed and potential sources of variations and errors are presented and a comprehensive number of commonly used, as well as some recently developed name matching techniques are overview.
Abstract: Finding and matching personal names is at the core of an increasing number of applications: from text and Web mining, search engines, to information extraction, deduplication and data linkage systems. Variations and errors in names make exact string matching problematic, and approximate matching techniques have to be applied. When compared to general text, however, personal names have different characteristics that need to be considered. In this paper we discuss the characteristics of personal names and present potential sources of variations and errors. We then overview a comprehensive number of commonly used, as well as some recently developed, name matching techniques. Experimental comparisons using four large name data sets indicate that there is no clear best matching technique.

351 citations

Patent
28 Sep 2001
TL;DR: In this paper, a data structure for annotating data files within a database is provided, which comprises a phoneme and word lattice which allows the quick and efficient searching of data files in response to a user's input query.
Abstract: A data structure is provided for annotating data files within a database. The annotation data comprises a phoneme and word lattice which allows the quick and efficient searching of data files within the database in response to a user's input query. The structure of the annotation data is such that it allows the input query to be made by voice and can be used for annotating various kinds of data files, such as audio data files, video data files, multimedia data files etc. The annotation data may be generated from the data files themselves or may be input by the user either from a voiced input or from a typed input.

314 citations

Journal ArticleDOI
TL;DR: This paper presents an overview of techniques that allow the linking of databases between organizations while at the same time preserving the privacy of these data, and presents a taxonomy of PPRL techniques to characterize these techniques along 15 dimensions.
Abstract: The process of identifying which records in two or more databases correspond to the same entity is an important aspect of data quality activities such as data pre-processing and data integration. Known as record linkage, data matching or entity resolution, this process has attracted interest from researchers in fields such as databases and data warehousing, data mining, information systems, and machine learning. Record linkage has various challenges, including scalability to large databases, accurate matching and classification, and privacy and confidentiality. The latter challenge arises because commonly personal identifying data, such as names, addresses and dates of birth of individuals, are used in the linkage process. When databases are linked across organizations, the issue of how to protect the privacy and confidentiality of such sensitive information is crucial to successful application of record linkage. In this paper we present an overview of techniques that allow the linking of databases between organizations while at the same time preserving the privacy of these data. Known as 'privacy-preserving record linkage' (PPRL), various such techniques have been developed. We present a taxonomy of PPRL techniques to characterize these techniques along 15 dimensions, and conduct a survey of PPRL techniques. We then highlight shortcomings of current techniques and discuss avenues for future research.

241 citations

##### References
Journal ArticleDOI
TL;DR: An algorithm is presented which solves the string-to-string correction problem in time proportional to the product of the lengths of the two strings.
Abstract: The string-to-string correction problem is to determine the distance between two strings as measured by the minimum cost sequence of “edit operations” needed to change the one string into the other. The edit operations investigated allow changing one symbol of a string into another single symbol, deleting one symbol from a string, or inserting a single symbol into a string. An algorithm is presented which solves this problem in time proportional to the product of the lengths of the two strings. Possible applications are to the problems of automatic spelling correction and determining the longest subsequence of characters common to two strings.

3,252 citations

Journal ArticleDOI
TL;DR: The algorithm has the unusual property that, in most cases, not all of the first i characters of the string are inspected.
Abstract: An algorithm is presented that searches for the location, “i,” of the first occurrence of a character string, “pat,” in another string, “string.” During the search operation, the characters of pat are matched starting with the last character of pat. The information gained by starting the match at the end of the pattern often allows the algorithm to proceed in large jumps through the text being searched. Thus the algorithm has the unusual property that, in most cases, not all of the first i characters of string are inspected. The number of characters actually inspected (on the average) decreases as a function of the length of pat. For a random English pattern of length 5, the algorithm will typically inspect i/4 characters of string before finding a match at i. Furthermore, the algorithm has been implemented so that (on the average) fewer than i + patlen machine instructions are executed. These conclusions are supported with empirical evidence and a theoretical analysis of the average behavior of the algorithm. The worst case behavior of the algorithm is linear in i + patlen, assuming the availability of array space for tables linear in patlen plus the size of the alphabet.

2,542 citations

Journal ArticleDOI
TL;DR: The string-matching problem is a very common problem; there are many extensions to this problem; for example, we may be looking for a set of patterns, a pattern with “wild cards,” or a regular expression.
Abstract: The string-matching problem is a very common problem. We are searching for a string P = p1 p2 … pm inside a large text file T = t1 t2 … tn, both sequences of characters from a finite character set Σ. The characters may be English characters in a text file, DNA base pairs, lines of source code, angles between edges in polygons, machines or machine parts in a production schedule, music notes and tempo in a musical score, and so forth. We want to find all occurrences of P in T; namely, we are searching for the set of starting positions F = {i | 1 ≤ i ≤ n − m + 1 such that ti ti+1 … ti+m−1 = P}. The two most famous algorithms for this problem are the Boyer-Moore algorithm and the Knuth-Morris-Pratt algorithm. There are many extensions to this problem; for example, we may be looking for a set of patterns, a pattern with “wild cards,” or a regular expression. String-matching tools are included in every reasonable text editor, word processor, and many other applications.

806 citations

Journal ArticleDOI
TL;DR: An improved algorithm is developed that works in time and in space O(s · min(m, n)), together with algorithms that can be used in conjunction with extended edit operation sets, including, for example, transposition of adjacent characters.
Abstract: The edit distance between strings a 1 … a m and b 1 … b n is the minimum cost s of a sequence of editing steps (insertions, deletions, changes) that convert one string into the other. A well-known tabulating method computes s as well as the corresponding editing sequence in time and in space O ( mn ) (in space O (min( m, n )) if the editing sequence is not required). Starting from this method, we develop an improved algorithm that works in time and in space O ( s · min( m, n )). Another improvement with time O ( s · min( m, n )) and space O ( s · min( s, m, n )) is given for the special case where all editing steps have the same cost independently of the characters involved. If the editing sequence that gives cost s is not required, our algorithms can be implemented in space O (min( s, m, n )). Since s = O (max( m, n )), the new methods are always asymptotically as good as the original tabulating method. As a by-product, algorithms are obtained that, given a threshold value t , test in time O ( t · min( m, n )) and in space O (min( t, m, n )) whether s ⩽ t . Finally, different generalized edit distances are analyzed and conditions are given under which our algorithms can be used in conjunction with extended edit operation sets, including, for example, transposition of adjacent characters.

672 citations

Journal ArticleDOI
06 Jan 1992
TL;DR: Two string distance functions that are computable in linear time give a lower bound for the edit distance (in the unit cost model), which leads to fast hybrid algorithms for the edit distance based string matching.
Abstract: We study approximate string matching in connection with two string distance functions that are computable in linear time. The first function is based on the so-called q-grams. An algorithm is given for the associated string matching problem that finds the locally best approximate occurrences of pattern P, |P| = m, in text T, |T| = n, in time O(n log(m − q)). The occurrences with distance ≤ k can be found in time O(n log k). The other distance function is based on finding maximal common substrings and allows a form of approximate string matching in time O(n). Both distances give a lower bound for the edit distance (in the unit cost model), which leads to fast hybrid algorithms for the edit distance based string matching.

665 citations