
Showing papers by "Gerth Stølting Brodal published in 2007"


Book ChapterDOI
08 Oct 2007
TL;DR: This paper focuses on the design of resilient dictionaries, i.e., dictionaries able to operate correctly (at least) on the set of uncorrupted keys; it proposes an optimal deterministic static dictionary supporting searches in Θ(log n + δ) worst-case time and shows how to use it in a dynamic setting to support updates in O(log n + δ) amortized time.
Abstract: We investigate the problem of computing in the presence of faults that may arbitrarily (i.e., adversarially) corrupt memory locations. In the faulty memory model, any memory cell can get corrupted at any time, and corrupted cells cannot be distinguished from uncorrupted ones. An upper bound δ on the number of corruptions and O(1) reliable memory cells are provided. In this model, we focus on the design of resilient dictionaries, i.e., dictionaries which are able to operate correctly (at least) on the set of uncorrupted keys. We first present a simple resilient dynamic search tree, based on random sampling, with O(log n + δ) expected amortized cost per operation, and O(n) space complexity. We then propose an optimal deterministic static dictionary supporting searches in Θ(log n + δ) time in the worst case, and we show how to use it in a dynamic setting in order to support updates in O(log n + δ) amortized time. Our dynamic dictionary also supports range queries in O(log n + δ + t) worst case time, where t is the size of the output. Finally, we show that every resilient search tree (with some reasonable properties) must take Ω(log n + δ) worst-case time per search.

48 citations
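
The key obstacle in this model is that a single corrupted guide value can derail a binary search. As a deliberately naive illustration of the model, not the paper's construction, one can store every key in 2δ+1 copies and make each probe reliable by majority vote, which needs only O(1) reliable working memory via the Boyer-Moore majority algorithm. A minimal Python sketch under that assumption; it costs O(δ log n) time and O(nδ) space, whereas the paper achieves Θ(log n + δ) time with O(n) space:

def majority_read(copies):
    # Boyer-Moore majority vote in O(1) extra memory: with 2*delta + 1
    # copies and at most delta corruptions, the true value is a strict
    # majority and is therefore returned.
    candidate, count = None, 0
    for v in copies:
        if count == 0:
            candidate, count = v, 1
        elif v == candidate:
            count += 1
        else:
            count -= 1
    return candidate

def resilient_search(table, key):
    # Membership query on a sorted table whose entries are replicated:
    # table[i] is the list of 2*delta + 1 copies of the i-th key.
    # Majority voting makes every probe reliable, so plain binary
    # search cannot be misled by corruptions.
    lo, hi = 0, len(table) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        v = majority_read(table[mid])
        if v == key:
            return True
        if v < key:
            lo = mid + 1
        else:
            hi = mid - 1
    return False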


Proceedings ArticleDOI
09 Jun 2007
TL;DR: The worst-case I/O complexity of sparse-matrix dense-vector multiplication is determined up to a constant factor for large ranges of the parameters, and most matrices with kN nonzeros are shown to require this number of I/Os, even if the program may depend on the structure of the matrix.
Abstract: We analyze the problem of sparse-matrix dense-vector multiplication (SpMV) in the I/O-model. The task of SpMV is to compute y := Ax, where A is a sparse N × N matrix and x and y are vectors. Here, sparsity is expressed by the parameter k that states that A has a total of at most kN nonzeros, i.e., an average number of k nonzeros per column. The extreme choices for parameter k are well studied special cases, namely permuting for k = 1 and dense matrix-vector multiplication for k = N. We study the worst-case complexity of this computational task, i.e., what is the best possible upper bound on the number of I/Os depending on k and N only. We determine this complexity up to a constant factor for large ranges of the parameters. By our arguments, we find that most matrices with kN nonzeros require this number of I/Os, even if the program may depend on the structure of the matrix. The model of computation for the lower bound is a combination of the I/O-models of Aggarwal and Vitter, and of Hong and Kung. We study two variants of the problem, depending on the memory layout of A. If A is stored in column major layout, SpMV has I/O complexity Θ(min{(kN/B)(1 + log_{M/B}(N/max{M, k})), kN}) for k ≤ N^(1-ε) and any constant 0 < ε < 1. If the algorithm can choose the memory layout, the I/O complexity of SpMV is Θ(min{(kN/B)(1 + log_{M/B}(N/(kM))), kN}) for k ≤ N^(1/3). In the cache oblivious setting with the tall cache assumption M ≥ B^(1+ε), the I/O complexity is O((kN/B)(1 + log_{M/B}(N/k))) for A in column major layout.

41 citations
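
For concreteness, here is the computation being analyzed, with A in the column major (CSC) layout the abstract refers to. This is the plain RAM-model loop, not an I/O-efficient algorithm, and the names (spmv_csc, col_ptr, row_idx) are illustrative. The scattered updates to y, one per nonzero and potentially each in a different block, are exactly the access pattern the I/O bounds account for:

def spmv_csc(n, col_ptr, row_idx, val, x):
    # y := A x for an n x n sparse matrix in compressed sparse column
    # (CSC) form: the nonzeros of column j are val[col_ptr[j]:col_ptr[j+1]],
    # and row_idx[p] gives the row of the p-th stored nonzero.
    y = [0.0] * n
    for j in range(n):
        xj = x[j]
        for p in range(col_ptr[j], col_ptr[j + 1]):
            y[row_idx[p]] += val[p] * xj
    return y

# Example: a 3 x 3 identity matrix maps x to itself.
# spmv_csc(3, [0, 1, 2, 3], [0, 1, 2], [1.0, 1.0, 1.0], [1, 2, 3]) == [1.0, 2.0, 3.0]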


Book ChapterDOI
26 Aug 2007
TL;DR: This paper designs an optimal O(n + k) time algorithm and uses this algorithm to obtain algorithms solving the two-dimensional k maximal sums problem in O(m^2 · n + k) time, where the input is an m × n matrix with m ≤ n.
Abstract: Finding the sub-vector with the largest sum in a sequence of n numbers is known as the maximum sum problem. Finding the k sub-vectors with the largest sums is a natural extension of this, and is known as the k maximal sums problem. In this paper we design an optimal O(n + k) time algorithm for the k maximal sums problem. We use this algorithm to obtain algorithms solving the two-dimensional k maximal sums problem in O(m^2 · n + k) time, where the input is an m × n matrix with m ≤ n. We generalize this algorithm to solve the d-dimensional problem in O(n^(2d-1) + k) time. The space usage of all the algorithms can be reduced to O(n^(d-1) + k). This leads to the first algorithm for the k maximal sums problem in one dimension using O(n + k) time and O(k) space.

25 citations
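
As a point of reference for what the O(n + k) result improves on, the problem is easy to solve slowly: every sum of a contiguous sub-vector is a difference of two prefix sums, so one can enumerate all n(n+1)/2 of them and keep the k largest. A brute-force Python sketch for sanity checking small inputs, running in O(n^2 log k) time rather than the paper's O(n + k):

import heapq

def k_maximal_sums_bruteforce(a, k):
    # sum(a[i:j]) == prefix[j] - prefix[i] for 0 <= i < j <= n,
    # so the k maximal sums are the k largest such differences.
    prefix = [0]
    for x in a:
        prefix.append(prefix[-1] + x)
    n = len(a)
    all_sums = (prefix[j] - prefix[i]
                for i in range(n) for j in range(i + 1, n + 1))
    return heapq.nlargest(k, all_sums)

# Example: k_maximal_sums_bruteforce([2, -1, 3], 3) == [4, 3, 2]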


Book ChapterDOI
26 Aug 2007
TL;DR: A data structure is developed which maintains the set of vertices that participate in a maximum matching in O(log^2 |V|) amortized time per update and reports the status of a vertex (matched or unmatched) in constant worst-case time.
Abstract: We consider the problem of maintaining a maximum matching in a convex bipartite graph G = (V, E) under a set of update operations which includes insertions and deletions of vertices and edges. It is not hard to show that it is impossible to maintain an explicit representation of a maximum matching in sub-linear time per operation, even in the amortized sense. Despite this difficulty, we develop a data structure which maintains the set of vertices that participate in a maximum matching in O(log^2 |V|) amortized time per update and reports the status of a vertex (matched or unmatched) in constant worst-case time. Our structure can report the mate of a matched vertex in the maximum matching in worst-case O(min{k log^2 |V| + log |V|, |V| log |V|}) time, where k is the number of update operations since the last query for the same pair of vertices was made. In addition, we give an O(√|V| log^2 |V|)-time amortized bound for this pair query.

19 citations
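
For background on what the data structure maintains, the static version of the problem has a classical greedy solution (Glover's rule): sweep the right-hand vertices in order, and match each to the currently available interval that expires soonest. A sketch, assuming left vertex i is adjacent to the contiguous range intervals[i] = (lo, hi) of right vertices 0..m-1; the paper's dynamic data structure is considerably more involved:

import heapq

def convex_bipartite_matching(intervals, m):
    # Maximum matching in a convex bipartite graph. Sweep right
    # vertices 0..m-1; among intervals containing the sweep point,
    # greedily match the one with the smallest right endpoint.
    by_start = sorted(range(len(intervals)), key=lambda i: intervals[i][0])
    mate = [None] * len(intervals)    # mate[i] = matched right vertex
    active = []                       # heap of (hi, i) for started intervals
    ptr = 0
    for y in range(m):
        while ptr < len(by_start) and intervals[by_start[ptr]][0] <= y:
            i = by_start[ptr]
            heapq.heappush(active, (intervals[i][1], i))
            ptr += 1
        while active and active[0][0] < y:   # interval already expired
            heapq.heappop(active)
        if active:
            _, i = heapq.heappop(active)
            mate[i] = y
    return mate

# Example: intervals [(0, 1), (0, 0), (1, 2)] on 3 right vertices give
# mate == [1, 0, 2], a perfect matching.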


Book ChapterDOI
25 Jun 2007
TL;DR: The ComBack method extends the well-known hash compaction method such that full coverage of the state space is guaranteed and allows hash collisions to be resolved on-the-fly during state space exploration using backtracking to reconstruct the full state descriptors.
Abstract: This paper presents the ComBack method for explicit state space exploration. The ComBack method extends the well-known hash compaction method such that full coverage of the state space is guaranteed. Each encountered state is mapped into a compressed state descriptor (hash value) as in hash compaction. The method additionally stores for each state an integer representing the identity of the state and a backedge to a predecessor state. This allows hash collisions to be resolved on-the-fly during state space exploration using backtracking to reconstruct the full state descriptors when required for comparison with newly encountered states. A prototype implementation of the ComBack method is used to evaluate the method on several example systems and compare its performance to related methods. The results show a reduction in memory usage at an acceptable cost in exploration time.

15 citations
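
A compact way to see the idea is a breadth-first exploration whose visited table is keyed by the lossy compressed descriptor only, with a backedge per state id for on-demand reconstruction. The sketch below is a simplification under assumed interfaces (enabled(s) yields the events enabled in s, execute(s, ev) returns the successor state, h is the compression function), not the paper's implementation; in particular it keeps full descriptors on the queue for brevity:

from collections import deque

def comback_explore(initial, enabled, execute, h):
    # Visited states are recorded only as h(state) -> list of state ids.
    # Each id stores a backedge (predecessor id, event), so the full
    # descriptor of any visited state can be reconstructed by replaying
    # events from the initial state. Hash collisions are resolved by
    # reconstructing and comparing, so, unlike plain hash compaction,
    # no state is ever wrongly treated as already visited.
    backedge = {0: (None, None)}
    table = {h(initial): [0]}
    queue = deque([(0, initial)])

    def reconstruct(sid):
        events = []
        while backedge[sid][0] is not None:
            pred, ev = backedge[sid]
            events.append(ev)
            sid = pred
        state = initial
        for ev in reversed(events):
            state = execute(state, ev)
        return state

    next_id = 1
    while queue:
        sid, state = queue.popleft()
        for ev in enabled(state):
            succ = execute(state, ev)
            ids = table.setdefault(h(succ), [])
            if any(reconstruct(i) == succ for i in ids):
                continue                  # a true revisit, not a collision
            ids.append(next_id)
            backedge[next_id] = (sid, ev)
            queue.append((next_id, succ))
            next_id += 1
    return next_id                        # number of reachable states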


Proceedings ArticleDOI
01 Jan 2007
TL;DR: The algorithm developed herein has running time O(d^9 n log n), which makes it the first algorithm with a sub-quadratic worst-case running time for computing the quartet distance between non-binary trees.
Abstract: We present an algorithm for calculating the quartet distance between two evolutionary trees of bounded degree on a common set of n species. The previous best algorithm has running time O(d^2 n^2), where d is the maximum degree of any node in the two trees. The algorithm developed herein has running time O(d^9 n log n), which makes it the first algorithm for computing the quartet distance between non-binary trees with a sub-quadratic worst-case running time.

15 citations
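
The quantity being computed can be pinned down with a brute-force reference: a quartet of leaves {a, b, c, d} induces one of the topologies ab|cd, ac|bd, ad|bc, or a star, and the quartet distance counts quartets on which the two trees disagree. The four-point condition identifies the topology from leaf-to-leaf distances: the pairing with the strictly smallest distance sum is the split, and a three-way tie means a star. A Python sketch for sanity checking tiny inputs, roughly O(n^4) time versus the paper's O(d^9 n log n); note that published definitions differ in how unresolved quartets are weighted, and this sketch simply counts every disagreement:

from collections import deque
from itertools import combinations

def quartet_distance(tree1, tree2, leaves):
    # Trees are adjacency dicts {node: iterable of neighbours} with
    # unit-length edges; `leaves` lists the common species.
    def dists(tree, src):
        d = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for v in tree[u]:
                if v not in d:
                    d[v] = d[u] + 1
                    q.append(v)
        return d

    def topology(dist, a, b, c, d):
        # Four-point condition: for split ab|cd the sum
        # dist(a,b) + dist(c,d) is strictly smaller than the other two.
        sums = sorted([(dist[a][b] + dist[c][d], 'ab|cd'),
                       (dist[a][c] + dist[b][d], 'ac|bd'),
                       (dist[a][d] + dist[b][c], 'ad|bc')])
        return sums[0][1] if sums[0][0] < sums[1][0] else 'star'

    d1 = {u: dists(tree1, u) for u in leaves}
    d2 = {u: dists(tree2, u) for u in leaves}
    return sum(topology(d1, *q) != topology(d2, *q)
               for q in combinations(leaves, 4))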



Journal ArticleDOI
01 Jan 2007
TL;DR: Two optimal resilient static dictionaries are proposed, a randomized one and a deterministic one, together with a deterministic dynamic dictionary supporting searches in optimal O(log n + δ) worst case time, updates in O(log n + δ) amortized time, and range queries in O(log n + δ + k) worst case time, where k is the size of the output.
Abstract: In the resilient memory model any memory cell can get corrupted at any time, and corrupted cells cannot be distinguished from uncorrupted cells. An upper bound, δ, on the number of corruptions and O(1) reliable memory cells are provided. In this model, a data structure is denoted resilient if it gives the correct output on the set of uncorrupted elements. We propose two optimal resilient static dictionaries, a randomized one and a deterministic one. The randomized dictionary supports searches in O(log n + δ) expected time using O(log δ) random bits in the worst case, under the assumption that corruptions are not performed by an adaptive adversary. The deterministic static dictionary supports searches in O(log n + δ) time in the worst case. We also introduce a deterministic dynamic resilient dictionary supporting searches in O(log n + δ) time in the worst case, which is optimal, and updates in O(log n + δ) amortized time. Our dynamic dictionary supports range queries in O(log n + δ + k) worst case time, where k is the size of the output.

4 citations
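
The extra δ term in the range query bound has an intuitive source: in a sorted array with at most δ corrupted cells, the first value exceeding the upper endpoint does not prove the scan is finished, since that value may itself be corrupted. Only after δ + 1 oversized values can the scan safely stop. A sketch of this stopping rule, assuming `start` comes from a resilient search for the left endpoint; corrupted cells whose values happen to fall inside the range may be reported, which the resilient correctness guarantee (correct output on uncorrupted keys) permits:

def resilient_range_report(a, lo_key, hi_key, delta, start):
    # Report values in [lo_key, hi_key] from a sorted array with at
    # most delta corruptions, scanning right from `start`. Stop only
    # once delta + 1 values exceed hi_key: then at least one of them is
    # uncorrupted, so no uncorrupted in-range key lies further right.
    out = []
    too_big = 0
    i = start
    while i < len(a) and too_big <= delta:
        v = a[i]
        if v > hi_key:
            too_big += 1          # possibly a corruption, keep scanning
        elif v >= lo_key:
            out.append(v)         # values below lo_key are skipped:
                                  # past `start` they must be corrupted
        i += 1
    return out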


Proceedings ArticleDOI
01 Jan 2007
TL;DR: Two algorithms for calculating the quartet distance between all pairs of trees in a set of binary evolutionary trees on a common set of species exploit common substructure among the trees and perform significantly better on large sets of trees than performing distinct pairwise distance calculations.
Abstract: We present two algorithms for calculating the quartet distance between all pairs of trees in a set of binary evolutionary trees on a common set of species. The algorithms exploit common substructure among the trees to speed up the pairwise distance calculations, thus performing significantly better on large sets of trees than distinct pairwise distance calculations, as we illustrate experimentally, where we observe a speedup factor of around 130 in the best case.

3 citations