
Showing papers by "Richard Cole published in 1993"


Proceedings ArticleDOI
03 Nov 1993
TL;DR: An algorithm that, for string matching, computes a deterministic sample of a sufficiently long substring in constant time, solving the main open problem remaining in string matching.
Abstract: All algorithms below are optimal alphabet-independent parallel CRCW PRAM algorithms. In one dimension: Given a pattern string of length m for the string-matching problem, we design an algorithm that computes a deterministic sample of a sufficiently long substring in constant time. This problem used to be a bottleneck in the pattern preprocessing for one- and two-dimensional pattern matching. The best previous time bound was O(log^2 m / log log m). We use this algorithm to obtain the following results. 1. Improving the preprocessing of the constant-time text search algorithm from O(log^2 m / log log m) to O(log log m), which is now best possible. 2. A constant-time deterministic string-matching algorithm in the case that the text length n satisfies n = Ω(m^(1+ε)) for a constant ε > 0. 3. A simple probabilistic string-matching algorithm that has constant time with high probability for random input. 4. A constant expected time Las Vegas algorithm for computing the period of the pattern and all witnesses, and thus string matching itself, solving the main open problem remaining in string matching.
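As an illustration of two notions the abstract relies on, here is a minimal sequential Python sketch that computes the period of a pattern and a witness for every non-period shift. It is only a naive O(m^2) reference for the definitions, not the paper's constant expected time parallel CRCW PRAM algorithm; the function name is illustrative.

```python
def period_and_witnesses(pattern):
    """Naive sequential sketch (not the paper's parallel algorithm).
    The period of `pattern` is the smallest p > 0 with
    pattern[i] == pattern[i + p] for all i < len(pattern) - p.
    For every shift d that is not a period, a witness is an index j
    with pattern[j] != pattern[j + d], proving the mismatch."""
    m = len(pattern)
    witnesses = {}   # shift d -> witnessing index j, for non-period shifts
    period = m       # trivial period: the string shifted past itself
    for d in range(1, m):
        w = next((j for j in range(m - d) if pattern[j] != pattern[j + d]), None)
        if w is None:
            period = min(period, d)
        else:
            witnesses[d] = w
    return period, witnesses

# e.g. period_and_witnesses("abab") returns (2, {1: 0, 3: 0}):
# the period is 2, and shifts 1 and 3 are each refuted by index 0.
```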

61 citations


Proceedings ArticleDOI
01 Jun 1993
TL;DR: If faulty nodes are allowed to communicate, but not compute, then an N-node one-dimensional array can tolerate log^O(1) N worst-case faults and still emulate a fault-free array with constant slowdown, and this bound is tight.
Abstract: In this paper we study the ability of array-based networks to tolerate faults. We show that an N × N two-dimensional array can sustain N^(1-ε) worst-case faults, for any fixed ε > 0, and still emulate a fully functioning N × N array with only constant slowdown. We also observe that even if every node fails with some fixed probability p, with high probability the array can still emulate a fully functioning array with constant slowdown. Previously, no connected bounded-degree network was known to be able to tolerate constant-probability node failures without suffering more than a constant-factor loss in performance. Finally, we observe that if faulty nodes are allowed to communicate, but not compute, then an N-node one-dimensional array can tolerate log^O(1) N worst-case faults and still emulate a fault-free array with constant slowdown, and this bound is tight.
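To make the random-failure model concrete, the following Python sketch fails each node of an N × N array independently with probability p and reports the fraction of nodes in the largest surviving connected block. It only illustrates the fault model, not the paper's constant-slowdown emulation scheme; the function name and parameters are assumptions for the illustration.

```python
import random
from collections import deque

def largest_live_component(n, p, seed=0):
    """Fraction of an n x n array's nodes lying in the largest connected
    block of nodes that survive independent failures with probability p.
    Illustration of the failure model only, not the emulation scheme."""
    rng = random.Random(seed)
    alive = [[rng.random() >= p for _ in range(n)] for _ in range(n)]
    seen = [[False] * n for _ in range(n)]
    best = 0
    for si in range(n):
        for sj in range(n):
            if not alive[si][sj] or seen[si][sj]:
                continue
            size, queue = 0, deque([(si, sj)])
            seen[si][sj] = True
            while queue:
                i, j = queue.popleft()
                size += 1
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < n and 0 <= nj < n and alive[ni][nj] and not seen[ni][nj]:
                        seen[ni][nj] = True
                        queue.append((ni, nj))
            best = max(best, size)
    return best / (n * n)

# e.g. largest_live_component(100, 0.1) is typically close to 0.9,
# matching the intuition that most surviving nodes remain usable.
```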

24 citations


Journal ArticleDOI

8 citations


01 Jan 1993
TL;DR: The paper considers the exact number of character comparisons needed to find all occurrences of a pattern of length m in a text of length n using on-line and general algorithms, and obtains lower bounds of about (1 + 9/(4(m+1)))·n and (1 + 2/(m+3))·n comparisons, respectively.
Abstract: The paper considers the exact number of character comparisons needed to find all occurrences of a pattern of length m in a text of length n using on-line and general algorithms. For on-line algorithms, a lower bound of about (1 + 9/(4(m+1)))·n character comparisons is obtained. For general algorithms, a lower bound of about (1 + 2/(m+3))·n character comparisons is obtained. These lower bounds complement an on-line upper bound of about (1 + 8/(3(m+1)))·n comparisons obtained recently by Cole and Hariharan. The lower bounds are obtained by finding patterns with interesting combinatorial properties (these are the hard-to-find patterns). It is also shown that for some patterns off-line algorithms can be more efficient than on-line algorithms.
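For a concrete sense of the comparisons-per-text-character measure these bounds speak about, the sketch below runs the naive matcher and counts every character comparison it performs. It is a baseline for intuition only, not the on-line algorithm of Cole and Hariharan; the helper name is illustrative.

```python
def naive_comparisons(pattern, text):
    """Run the naive left-to-right matcher and count every character
    comparison, so comparisons / len(text) can be contrasted with the
    roughly (1 + 8/(3(m+1))) * n on-line upper bound."""
    m, n = len(pattern), len(text)
    comparisons, matches = 0, []
    for s in range(n - m + 1):
        j = 0
        while j < m:
            comparisons += 1
            if text[s + j] != pattern[j]:
                break
            j += 1
        if j == m:
            matches.append(s)
    return matches, comparisons

# e.g. for pattern "aab" in a long random text over {a, b} the ratio
# comparisons / n stays small, while adversarial texts such as "aaaa..."
# push the naive matcher toward m comparisons per text character.
```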

5 citations


Proceedings ArticleDOI
07 Jun 1993
TL;DR: The paper considers the exact number of character comparisons needed to find all occurrences of a pattern of length m in a text of length n using on-line and general algorithms; the lower bounds are obtained by finding patterns with interesting combinatorial properties.
Abstract: The paper considers the exact number of character comparisons needed to find all occurrences of a pattern of length m in a text of length n using on-line and general algorithms. For on-line algorithms, a lower bound of about (1 + 9/(4(m+1)))·n character comparisons is obtained. For general algorithms, a lower bound of about (1 + 2/(m+3))·n character comparisons is obtained. These lower bounds complement an on-line upper bound of about (1 + 8/(3(m+1)))·n comparisons obtained recently by Cole and Hariharan (1992). The lower bounds are obtained by finding patterns with interesting combinatorial properties (these are the hard-to-find patterns). It is also shown that for some patterns off-line algorithms can be more efficient than on-line algorithms.
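The constants quoted in the abstract can be evaluated directly. The short sketch below (an illustration, not code from the paper) prints the two lower-bound constants and the Cole-Hariharan upper-bound constant per text character for a few pattern lengths m, showing how quickly they approach 1.

```python
def comparison_bounds(m):
    """Per-text-character constants quoted in the abstract for a pattern of
    length m: on-line lower bound, general lower bound, on-line upper bound."""
    return 1 + 9 / (4 * (m + 1)), 1 + 2 / (m + 3), 1 + 8 / (3 * (m + 1))

for m in (2, 4, 8, 16, 32):
    lo_online, lo_general, up_online = comparison_bounds(m)
    print(f"m={m:2d}  on-line lower={lo_online:.3f}  "
          f"general lower={lo_general:.3f}  on-line upper={up_online:.3f}")
```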

1 citation