
Showing papers by "Gerth Stølting Brodal" published in 2006


Proceedings ArticleDOI
22 Jan 2006
TL;DR: In this article, a static cache-oblivious dictionary structure for string prefix queries is presented, which performs prefix queries in O(log_B n + |P|/B) I/Os, where n is the number of leaves in the trie, P is the query string, and B is the block size.
Abstract: We present static cache-oblivious dictionary structures for strings which provide analogues of tries and suffix trees in the cache-oblivious model. Our construction takes as input either a set of strings to store, a single string for which all suffixes are to be stored, a trie, a compressed trie, or a suffix tree, and creates a cache-oblivious data structure which performs prefix queries in O(log_B n + |P|/B) I/Os, where n is the number of leaves in the trie, P is the query string, and B is the block size. This query cost is optimal for unbounded alphabets. The data structure uses linear space.
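As a rough illustration of the query interface only (not of the cache-oblivious layout or the O(log_B n + |P|/B) I/O bound), here is a minimal pointer-based trie sketch in Python; all names are illustrative.

```python
# Minimal (non-cache-oblivious) trie sketch: illustrates the prefix-query
# interface only, not the paper's memory layout or O(log_B n + |P|/B) bound.

class TrieNode:
    def __init__(self):
        self.children = {}    # character -> TrieNode
        self.is_leaf = False  # marks the end of a stored string

def build_trie(strings):
    root = TrieNode()
    for s in strings:
        node = root
        for ch in s:
            node = node.children.setdefault(ch, TrieNode())
        node.is_leaf = True
    return root

def prefix_query(root, pattern):
    """Return True if some stored string has `pattern` as a prefix."""
    node = root
    for ch in pattern:
        if ch not in node.children:
            return False
        node = node.children[ch]
    return True

if __name__ == "__main__":
    trie = build_trie(["cache", "cacheoblivious", "trie", "suffix"])
    print(prefix_query(trie, "cach"))   # True
    print(prefix_query(trie, "tree"))   # False
```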

53 citations


Journal ArticleDOI
TL;DR: This paper presents techniques for speeding up the canonical neighbor-joining method, and shows that, on the examined instance collection, the running time of the presented algorithms grows as Θ(n²), already for medium-sized instances.
Abstract: Background The neighbor-joining method by Saitou and Nei is a widely used method for constructing phylogenetic trees. The formulation of the method gives rise to a canonical Θ(n³) algorithm upon which all existing implementations are based.
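For context, here is a minimal Python sketch of the canonical Θ(n³) neighbor-joining loop that the paper takes as its starting point: each of the roughly n join steps scans the full Θ(n²) Q-criterion matrix. It returns only the tree topology (branch lengths omitted) and models none of the paper's speed-up techniques; all function names are illustrative.

```python
# Canonical Theta(n^3) neighbor-joining loop (Saitou & Nei), for
# illustration only -- not the paper's accelerated variant.

def neighbor_joining(D, names):
    """D: symmetric distance matrix (list of lists); names: taxon labels.
    Returns a nested-tuple tree topology (branch lengths omitted)."""
    D = [row[:] for row in D]
    nodes = list(names)
    while len(nodes) > 2:
        n = len(nodes)
        row_sum = [sum(D[i]) for i in range(n)]
        # Pick the pair (i, j) minimising the Q-criterion:
        # Theta(n^2) work per iteration, Theta(n^3) overall.
        best, bi, bj = None, -1, -1
        for i in range(n):
            for j in range(i + 1, n):
                q = (n - 2) * D[i][j] - row_sum[i] - row_sum[j]
                if best is None or q < best:
                    best, bi, bj = q, i, j
        # Join i and j into a new internal node and update distances.
        new_dist = [0.5 * (D[bi][k] + D[bj][k] - D[bi][bj])
                    for k in range(n) if k not in (bi, bj)]
        new_node = (nodes[bi], nodes[bj])
        keep = [k for k in range(n) if k not in (bi, bj)]
        D = [[D[a][b] for b in keep] for a in keep]
        for idx, d in enumerate(new_dist):
            D[idx].append(d)
        D.append(new_dist + [0.0])
        nodes = [nodes[k] for k in keep] + [new_node]
    return (nodes[0], nodes[1])

if __name__ == "__main__":
    names = ["A", "B", "C", "D"]
    D = [[0, 5, 9, 9],
         [5, 0, 10, 10],
         [9, 10, 0, 8],
         [9, 10, 8, 0]]
    print(neighbor_joining(D, names))
```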

45 citations


Proceedings ArticleDOI
21 Oct 2006
TL;DR: This work develops the first linear-space data structures for dynamic planar point location in general subdivisions that achieve logarithmic query time and poly-logarithmic update time.
Abstract: We develop the first linear-space data structures for dynamic planar point location in general subdivisions that achieve logarithmic query time and poly-logarithmic update time.
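The authors' dynamic, linear-space structure is considerably more involved; purely as a point of reference for what a logarithmic-time query looks like, the sketch below uses the classic static slab decomposition, which answers vertical ray-shooting queries with two binary searches at the cost of O(n²) space and no support for updates. It is not the paper's data structure, and the class and method names are made up for the example.

```python
# Classic static slab-decomposition sketch for planar point location:
# O(log n) queries via two binary searches, but O(n^2) space and no
# updates. Illustration of the query style only, not the paper's
# dynamic, linear-space structure with poly-logarithmic updates.

import bisect

class SlabPointLocation:
    def __init__(self, segments):
        """segments: list of ((x1, y1), (x2, y2)) non-crossing segments."""
        xs = sorted({p[0] for seg in segments for p in seg})
        self.xs = xs
        self.slabs = []  # per slab: segments spanning it, ordered by height
        for x_left, x_right in zip(xs, xs[1:]):
            mid = 0.5 * (x_left + x_right)
            crossing = [seg for seg in segments
                        if min(seg[0][0], seg[1][0]) <= x_left
                        and max(seg[0][0], seg[1][0]) >= x_right]
            crossing.sort(key=lambda seg: self._y_at(seg, mid))
            self.slabs.append(crossing)

    @staticmethod
    def _y_at(seg, x):
        (x1, y1), (x2, y2) = seg
        if x1 == x2:                       # degenerate vertical segment
            return min(y1, y2)
        t = (x - x1) / (x2 - x1)
        return y1 + t * (y2 - y1)

    def segment_below(self, x, y):
        """Return the segment directly below (x, y), or None."""
        i = bisect.bisect_right(self.xs, x) - 1   # binary search on slabs
        if i < 0 or i >= len(self.slabs):
            return None
        below = None
        lo, hi = 0, len(self.slabs[i]) - 1        # binary search in the slab
        while lo <= hi:
            mid = (lo + hi) // 2
            seg = self.slabs[i][mid]
            if self._y_at(seg, x) <= y:
                below = seg
                lo = mid + 1
            else:
                hi = mid - 1
        return below

if __name__ == "__main__":
    segs = [((0, 0), (4, 0)), ((0, 2), (4, 3)), ((0, 5), (4, 5))]
    loc = SlabPointLocation(segs)
    print(loc.segment_below(2, 2.4))  # ((0, 0), (4, 0)), directly below (2, 2.4)
```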

37 citations


Journal Article
TL;DR: In this article, the authors presented an output-dependent expected running time of O((m + nℓ) log log σ + Sort) and O(m) space, where ℓ is the length of an LCIS, σ is the size of the alphabet, and Sort is the time to sort each input sequence.
Abstract: We present algorithms for finding a longest common increasing subsequence of two or more input sequences. For two sequences of lengths m and n, where m > n, we present an algorithm with an output-dependent expected running time of O((m + nℓ) log log σ + Sort) and O(m) space, where ℓ is the length of an LCIS, σ is the size of the alphabet, and Sort is the time to sort each input sequence. For k ≥ 3 length-n sequences we present an algorithm which improves the previous best bound by more than a factor k for many inputs. In both cases, our algorithms are conceptually quite simple but rely on existing sophisticated data structures. Finally, we introduce the problem of longest common weakly-increasing (or non-decreasing) subsequences (LCWIS), for which we present an O(m + n log n)-time algorithm for the 3-letter alphabet case. For the extensively studied longest common subsequence problem, comparable speedups have not been achieved for small alphabets.
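For orientation, the sketch below is the textbook O(mn) dynamic program for the LCIS of two sequences; it is not the paper's output-dependent algorithm and uses none of the sophisticated data structures the abstract refers to.

```python
# Textbook O(mn) dynamic program for the longest common increasing
# subsequence (LCIS) of two sequences; for illustration only -- not the
# paper's output-dependent algorithm.

def lcis_length(A, B):
    """Length of a longest common strictly increasing subsequence."""
    dp = [0] * len(B)          # dp[j]: LCIS length ending with B[j]
    for a in A:
        cur = 0                # best dp[k] seen so far with B[k] < a
        for j, b in enumerate(B):
            if a == b and cur + 1 > dp[j]:
                dp[j] = cur + 1
            elif b < a and dp[j] > cur:
                cur = dp[j]
    return max(dp, default=0)

if __name__ == "__main__":
    print(lcis_length([3, 4, 9, 1], [5, 3, 8, 9, 4, 1]))  # 2, e.g. [3, 9]
```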

25 citations


01 Jan 2006
TL;DR: This paper introduces the problem of longest common weakly-increasing (or non-decreasing) subsequences (LCWIS) and presents an O(min{m + n log n, m log log m})-time algorithm for the 3-letter alphabet case; for the extensively studied longest common subsequence problem, comparable speedups have not been achieved for small alphabets.

20 citations


Book ChapterDOI
05 Jul 2006
TL;DR: This paper introduces the problem of longest common weakly-increasing (or non-decreasing) subsequences (LCWIS) and presents an O(m + n log n)-time algorithm for the 3-letter alphabet case; for the extensively studied longest common subsequence problem, comparable speedups have not been achieved for small alphabets.
Abstract: We present algorithms for finding a longest common increasing subsequence of two or more input sequences. For two sequences of lengths m and n, where m ≥ n, we present an algorithm with an output-dependent expected running time of O((m + nℓ) log log σ + Sort) and O(m) space, where ℓ is the length of an LCIS, σ is the size of the alphabet, and Sort is the time to sort each input sequence. For k ≥ 3 length-n sequences we present an algorithm which improves the previous best bound by more than a factor k for many inputs. In both cases, our algorithms are conceptually quite simple but rely on existing sophisticated data structures. Finally, we introduce the problem of longest common weakly-increasing (or non-decreasing) subsequences (LCWIS), for which we present an O(m + n log n)-time algorithm for the 3-letter alphabet case. For the extensively studied longest common subsequence problem, comparable speedups have not been achieved for small alphabets.
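To make the LCWIS problem concrete, here is a simple O(mn) dynamic program for two sequences over an arbitrary alphabet, obtained by adapting the LCIS recurrence to allow equal consecutive values; it only illustrates the problem definition and is not the paper's O(m + n log n) algorithm for the 3-letter alphabet case.

```python
# Simple O(mn) dynamic program for the longest common weakly-increasing
# (non-decreasing) subsequence of two sequences over any alphabet.
# Problem illustration only -- not the paper's near-linear algorithm.

def lcwis_length(A, B):
    dp = [0] * len(B)              # dp[j]: LCWIS length ending with B[j]
    for a in A:
        cur = 0                    # best dp[k], k < j, with B[k] <= a
        for j, b in enumerate(B):
            old = dp[j]            # value before this round, so the same
                                   # element of A is never reused twice
            if a == b and cur + 1 > dp[j]:
                dp[j] = cur + 1
            if b <= a and old > cur:
                cur = old
    return max(dp, default=0)

if __name__ == "__main__":
    print(lcwis_length("aabc", "abcc"))  # 3, e.g. "abc"
```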

14 citations


Book ChapterDOI
11 Sep 2006
TL;DR: This paper presents an experimental study of various memory layouts of static skewed binary search trees, where each element in the tree is accessed with uniform probability, and shows that for many of the memory layouts this class of skewed binary search trees can perform better than perfectly balanced search trees.
Abstract: It is well-known that to minimize the number of comparisons a binary search tree should be perfectly balanced. Previous work has shown that a dominating factor in the running time of a search is the number of cache faults incurred, and that an appropriate memory layout of a binary search tree can reduce the number of cache faults by several hundred percent. Motivated by the fact that during a search, branching to the left or right at a node does not necessarily have the same cost, e.g. because of branch prediction schemes, in this paper we study the class of skewed binary search trees. For all nodes in a skewed binary search tree the ratio between the size of the left subtree and the size of the tree is a fixed constant (a ratio of 1/2 gives perfectly balanced trees). We present an experimental study of various memory layouts of static skewed binary search trees, where each element in the tree is accessed with uniform probability. Our results show that for many of the memory layouts we consider, skewed binary search trees can perform better than perfectly balanced search trees. The improvements in running time are on the order of 15%.
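As a rough illustration of the tree class being studied, the sketch below builds a static skewed binary search tree from a sorted array with a fixed left-subtree fraction alpha (alpha = 0.5 gives a perfectly balanced tree) and counts the nodes visited by a search. The memory layouts and branch-prediction effects that the paper actually measures are not modelled, and all names are illustrative.

```python
# Static skewed binary search tree built from a sorted array: at every
# node the left subtree receives ~alpha of the keys (alpha = 0.5 gives a
# perfectly balanced tree). Tree shape only; the paper's memory layouts
# and branch-prediction effects are not modelled here.

class Node:
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def build_skewed(sorted_keys, alpha):
    if not sorted_keys:
        return None
    i = int(alpha * (len(sorted_keys) - 1))   # root index so that the left
    return Node(sorted_keys[i],               # subtree holds ~alpha of keys
                build_skewed(sorted_keys[:i], alpha),
                build_skewed(sorted_keys[i + 1:], alpha))

def search(root, key):
    """Return (found, number_of_nodes_visited)."""
    node, visited = root, 0
    while node is not None:
        visited += 1
        if key == node.key:
            return True, visited
        node = node.left if key < node.key else node.right
    return False, visited

if __name__ == "__main__":
    keys = list(range(1024))
    for alpha in (0.5, 0.2):
        root = build_skewed(keys, alpha)
        found, visited = search(root, 1000)
        print(f"alpha={alpha}: found={found} after visiting {visited} nodes")
```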

6 citations


Book ChapterDOI
11 Sep 2006
TL;DR: This paper presents a purely functional implementation of search trees that requires O(log n) time for search and update operations and supports the join of two trees in worst-case constant time.
Abstract: We present a purely functional implementation of search trees that requires O(log n) time for search and update operations and supports the join of two trees in worst-case constant time. Hence, we solve an open problem posed by Kaplan and Tarjan as to whether it is possible to envisage a data structure supporting simultaneously the join operation in O(1) time and the search and update operations in O(log n) time.
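For contrast, the sketch below illustrates only the "purely functional" aspect via path copying: every update returns a new root while older versions remain valid and share structure. It is an unbalanced toy and achieves neither the paper's O(log n) search/update bounds nor its worst-case O(1) join.

```python
# Purely functional (persistent) binary search tree via path copying:
# an update returns a new root and leaves the old version intact.
# Illustrates persistence only -- it is unbalanced and does NOT achieve
# the paper's O(log n) bounds or its worst-case O(1) join.

from typing import NamedTuple, Optional

class Tree(NamedTuple):
    key: int
    left: Optional["Tree"]
    right: Optional["Tree"]

def insert(t, key):
    if t is None:
        return Tree(key, None, None)
    if key < t.key:
        return t._replace(left=insert(t.left, key))    # copy the path only
    if key > t.key:
        return t._replace(right=insert(t.right, key))
    return t  # key already present; share the structure unchanged

def contains(t, key):
    while t is not None:
        if key == t.key:
            return True
        t = t.left if key < t.key else t.right
    return False

if __name__ == "__main__":
    v1 = None
    for k in [5, 2, 8, 1]:
        v1 = insert(v1, k)
    v2 = insert(v1, 7)          # v1 remains a valid, unchanged version
    print(contains(v1, 7), contains(v2, 7))   # False True
```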

2 citations