
Showing papers by "Gerth Stølting Brodal" published in 2011


Journal ArticleDOI
TL;DR: This work describes a simple algorithm which needs O(n log k + k log n) time to answer k median queries, improving previous algorithms by a logarithmic factor and matching a comparison lower bound for k = O(n).
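
For reference, the bound above can be contrasted with the naive per-query approach. The sketch below (illustrative only, not the paper's batched algorithm) answers a single range median query by randomized selection in expected time linear in the range length, so k such queries over large ranges cost expected Θ(kn).

```python
import random

def quickselect(xs, k):
    """Return the k-th smallest element (0-indexed) of xs in expected O(len(xs)) time."""
    while True:
        pivot = random.choice(xs)
        lo = [x for x in xs if x < pivot]
        hi = [x for x in xs if x > pivot]
        if k < len(lo):
            xs = lo                       # answer lies among the smaller elements
        elif k >= len(xs) - len(hi):
            k -= len(xs) - len(hi)        # answer lies among the larger elements
            xs = hi
        else:
            return pivot                  # answer equals the pivot value

def range_median(A, i, j):
    """Lower median of A[i..j] (inclusive, 0-indexed) -- one 'median query'."""
    sub = A[i:j + 1]
    return quickselect(sub, (len(sub) - 1) // 2)

A = [5, 1, 9, 3, 7, 2, 8]
print(range_median(A, 1, 5))  # median of [1, 9, 3, 7, 2] -> 3
```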

48 citations


Proceedings ArticleDOI
23 Jan 2011
TL;DR: A data structure using O((N/f) log_m n) space is constructed that achieves a query bound of O(log_B N + fK/B) I/Os, and it is shown that the requirement that the K smallest elements be reported in sorted order is what makes the problem hard.
Abstract: We study the following problem: Given an array A storing N real numbers, preprocess it to allow fast reporting of the K smallest elements in the subarray A[i, j] in sorted order, for any triple (i, j, K) with 1 ≤ i ≤ j ≤ N and 1 ≤ K ≤ j − i + 1. We are interested in scenarios where the array A is large, necessitating an I/O-efficient solution. For a parameter f with 1 ≤ f ≤ log_m n, we construct a data structure that uses O((N/f) log_m n) space and achieves a query bound of O(log_B N + fK/B) I/Os, where B is the block size, M is the size of the main memory, n := N/B, and m := M/B. Our main contribution is to show that this solution is nearly optimal. To be precise, we show that achieving a query bound of O(log^α n + fK/B) I/Os, for any constant α, requires Ω((N/f) log_M n / log(f^{-1} log_M n)) space, assuming B = Ω(log N). For M ≥ B^{1+ε}, this is within a log log_m n factor of the upper bound. The lower bound assumes indivisibility of records and holds even if we assume K is always set to j − i + 1. We also show that it is the requirement that the K smallest elements be reported in sorted order which makes the problem hard. If the K smallest elements in the query range can be reported in any order, then we can obtain a linear-size data structure with a query bound of O(log_B N + K/B) I/Os.
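
To make the query semantics concrete, here is a minimal in-memory sketch (assuming 1-indexed ranges as in the abstract): it reports the K smallest elements of A[i..j] in sorted order with a size-K heap. The paper's contribution is achieving this on disk-resident data in O(log_B N + fK/B) I/Os; this baseline ignores the I/O model entirely.

```python
import heapq

def topk_sorted(A, i, j, K):
    """K smallest elements of A[i..j] (1-indexed, inclusive) in sorted order.

    heapq.nsmallest maintains a size-K heap over the range, so a query
    costs O((j - i + 1) log K) comparisons -- fine in RAM, but far from
    the O(log_B N + fK/B) I/O bound the paper's structures achieve.
    """
    return heapq.nsmallest(K, A[i - 1:j])

A = [42, 7, 19, 3, 88, 15, 6]
print(topk_sorted(A, 2, 6, 3))  # 3 smallest of [7, 19, 3, 88, 15] -> [3, 7, 15]
```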

34 citations


Book ChapterDOI
04 Jul 2011
TL;DR: This work describes two data structures that support the reporting of the t maximal points that dominate a given query point, and allow for insertions and deletions of points in P, and presents a linear space data structure with first sublogarithmic worst case bounds for all operations in the RAM model.
Abstract: We consider the dynamic two-dimensional maxima query problem. Let P be a set of n points in the plane. A point is maximal if it is not dominated by any other point in P. We describe two data structures that support the reporting of the t maximal points that dominate a given query point, and allow for insertions and deletions of points in P. In the pointer machine model we present a linear space data structure with O(log n + t) worst case query time and O(log n) worst case update time. This is the first dynamic data structure for the planar maxima dominance query problem that achieves these bounds in the worst case. The data structure also supports the more general query of reporting the maximal points among the points that lie in a given 3-sided orthogonal range unbounded from above in the same complexity. We can support 4-sided queries in O(log^2 n + t) worst case time, and O(log^2 n) worst case update time, using O(n log n) space, where t is the size of the output. This improves the worst case deletion time of the dynamic rectangular visibility query problem from O(log^3 n) to O(log^2 n). We adapt the data structure to the RAM model with word size w, where the coordinates of the points are integers in the range U = {0, ..., 2^w − 1}. We present a linear space data structure that supports 3-sided range maxima queries in O(log n / log log n + t) worst case time and updates in O(log n / log log n) worst case time. These are the first sublogarithmic worst case bounds for all operations in the RAM model.
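
The following sketch makes the central query concrete on a static point set: among the points dominating a query point, report the maximal ones via a single sweep in decreasing x-order. It is a from-scratch O(n log n) baseline per query, not the paper's dynamic structure with O(log n + t) query time; all names are illustrative.

```python
def maxima_dominating(points, q):
    """Maximal points among those dominating q = (qx, qy).

    A point (x, y) dominates q if x >= qx and y >= qy; sweeping the
    candidates in decreasing x-order, a point is maximal exactly when
    its y exceeds every y seen so far.
    """
    qx, qy = q
    cand = [(x, y) for (x, y) in points if x >= qx and y >= qy]
    cand.sort(key=lambda p: (-p[0], -p[1]))
    result, best_y = [], float("-inf")
    for x, y in cand:
        if y > best_y:            # not dominated by any candidate with larger x
            result.append((x, y))
            best_y = y
    return result

P = [(1, 9), (3, 7), (5, 5), (7, 3), (2, 2), (6, 6)]
print(maxima_dominating(P, (2, 2)))  # -> [(7, 3), (6, 6), (3, 7)]
```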

28 citations


Book ChapterDOI
15 Aug 2011
TL;DR: For the dynamic version of the path minima problem on trees, this work gives comparison-based and RAM data structures that achieve optimal query time, and, when only leaf insertions and deletions are allowed, structures with significantly lower query times than when the edge-weights can be updated.
Abstract: In the path minima problem on trees each tree edge is assigned a weight and a query asks for the edge with minimum weight on a path between two nodes. For the dynamic version of the problem on a tree, where the edge-weights can be updated, we give comparison-based and RAM data structures that achieve optimal query time. These structures support inserting a node on an edge, inserting a leaf, and contracting edges. When only insertion and deletion of leaves in a tree are needed, we give two data structures that achieve optimal and significantly lower query times than when updating the edge-weights is allowed. One is a semigroup structure for which the edge-weights are from an arbitrary semigroup and queries ask for the semigroup-sum of the edge-weights on a given path. For the other structure the edge-weights are given in the word RAM. We complement these upper bounds with lower bounds for different variants of the problem.
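
For concreteness, here is a naive baseline for the query itself, assuming nodes store parent pointers, depths, and the weight of the edge to the parent (a representation chosen purely for illustration): walk both endpoints up to their lowest common ancestor, tracking the minimum edge weight. This costs time proportional to the path length, which the paper's structures reduce to optimal query time under updates.

```python
class Node:
    """Tree node storing its parent, depth, and the weight of the edge to the parent."""
    def __init__(self, parent=None, w=None):
        self.parent, self.w = parent, w
        self.depth = parent.depth + 1 if parent else 0

def path_minimum(u, v):
    """Minimum edge weight on the u-v path; O(path length) per query."""
    best = float("inf")
    while u is not v:
        if u.depth < v.depth:     # always step up from the deeper node
            u, v = v, u
        best = min(best, u.w)
        u = u.parent
    return best

r = Node()
a = Node(r, 4); b = Node(r, 2)
c = Node(a, 7)
print(path_minimum(c, b))  # path c-a-r-b: min(7, 4, 2) -> 2
```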

24 citations


Journal ArticleDOI
TL;DR: The paper shows that no cache-oblivious search structure can guarantee a search performance of fewer than lg e · log_B N memory transfers between any two levels of the memory hierarchy, and that as k grows, the search costs of the optimal k-level DAM search structure and the optimal cache-oblivious search structure rapidly converge.
Abstract: This paper gives tight bounds on the cost of cache-oblivious searching. The paper shows that no cache-oblivious search structure can guarantee a search performance of fewer than lg e · log_B N memory transfers between any two levels of the memory hierarchy. This lower bound holds even if all of the block sizes are limited to be powers of 2. The paper gives modified versions of the van Emde Boas layout, where the expected number of memory transfers between any two levels of the memory hierarchy is arbitrarily close to (lg e + O(lg lg B / lg B)) log_B N + O(1). This factor approaches lg e ≈ 1.443 as B increases. The expectation is taken over the random placement in memory of the first element of the structure. Because searching in the disk-access machine (DAM) model can be performed in log_B N + O(1) block transfers, this result establishes a separation between the (2-level) DAM model and the cache-oblivious model. The DAM model naturally extends to k levels. The paper also shows that as k grows, the search costs of the optimal k-level DAM search structure and the optimal cache-oblivious search structure rapidly converge. This result demonstrates that for a multilevel memory hierarchy, a simple cache-oblivious structure almost replicates the performance of an optimal parameterized k-level DAM structure.
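
The modified layouts start from the classic van Emde Boas layout, which the sketch below computes for a complete binary tree in heap numbering (root = 1): split the tree at half height, lay out the top tree recursively, then each bottom tree. The paper's variants add, among other things, a random placement of the structure's first element; the sketch shows only the deterministic skeleton.

```python
def veb_order(root, height):
    """van Emde Boas order of a complete binary tree's nodes in heap numbering.

    The subtree rooted at `root` has the given height; it is cut at half
    height, the top tree is emitted first, then the bottom trees left to right.
    """
    if height == 1:
        return [root]
    top_h = height // 2
    bot_h = height - top_h
    order = veb_order(root, top_h)
    leaves = [root]                      # leaves of the top tree, left to right
    for _ in range(top_h - 1):
        leaves = [c for v in leaves for c in (2 * v, 2 * v + 1)]
    for leaf in leaves:                  # children of top leaves root the bottom trees
        order += veb_order(2 * leaf, bot_h)
        order += veb_order(2 * leaf + 1, bot_h)
    return order

print(veb_order(1, 4))
# -> [1, 2, 3, 4, 8, 9, 5, 10, 11, 6, 12, 13, 7, 14, 15]
```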

19 citations


Journal ArticleDOI
TL;DR: In this article, an output-dependent expected running time of O((m + nℓ) log log σ + sort) and O(m) space was given, and the 3-letter alphabet case was also addressed.
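
The stated bounds appear to concern the longest common increasing subsequence (LCIS) problem, with m and n the input lengths, ℓ the output length, σ the alphabet size, and sort the time to sort the inputs; that reading is an inference from the TL;DR, not stated in it. Under that assumption, the classic O(nm) dynamic program below is the baseline that such output-dependent bounds improve on.

```python
def lcis_length(a, b):
    """Length of a longest common increasing subsequence of a and b.

    Classic O(len(a) * len(b)) dynamic program: f[j] is the length of the
    best LCIS ending exactly with b[j], updated one element of a at a time.
    """
    f = [0] * len(b)
    for x in a:
        best = 0                          # best LCIS ending with a value < x
        for j, y in enumerate(b):
            if y == x:
                f[j] = max(f[j], best + 1)
            elif y < x:
                best = max(best, f[j])
    return max(f, default=0)

print(lcis_length([3, 1, 2, 4], [1, 2, 3, 4]))  # -> 3, e.g. [1, 2, 4]
```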

17 citations


Book ChapterDOI
23 May 2011
TL;DR: This work provides a space-optimal counter which supports increment and decrement operations by reading at most n - 1 bits and writing at most 3 bits in the worst case; this is the first such representation that supports these operations by always reading strictly fewer than n bits.
Abstract: We consider the problem of representing numbers in close to optimal space and supporting increment, decrement, addition and subtraction operations efficiently. We study the problem in the bit probe model and analyse the number of bits read and written to perform the operations, both in the worst case and in the average case. A counter is space-optimal if it represents any number in the range [0, ..., 2^n − 1] using exactly n bits. We provide a space-optimal counter which supports increment and decrement operations by reading at most n − 1 bits and writing at most 3 bits in the worst case. To the best of our knowledge, this is the first such representation which supports these operations by always reading strictly less than n bits. For redundant counters where we only need to represent numbers in the range [0, ..., L] for some integer L < 2^n − 1 using n bits, we define the efficiency of the counter as the ratio between L + 1 and 2^n. We present various representations that achieve different trade-offs between the read and write complexities and the efficiency. We also give another representation of integers that uses n + O(log n) bits to represent integers in the range [0, ..., 2^n − 1] that supports efficient addition and subtraction operations, improving the space complexity of an earlier representation by Munro and Rahman [Algorithmica, 2010].
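
To illustrate the cost model, the sketch below instruments the ordinary binary counter with bit-probe accounting. Its increment reads and writes up to n bits in the worst case (a carry through all ones), which is precisely the behavior the paper's space-optimal counter avoids with at most n − 1 reads and 3 writes; the code is the baseline, not the paper's construction.

```python
def increment(bits):
    """Increment an n-bit little-endian counter in place; return (reads, writes).

    A run of low-order ones forces a carry chain: the all-ones counter
    reads and writes all n bits, the worst case of this representation.
    """
    reads = writes = 0
    for i in range(len(bits)):
        reads += 1
        if bits[i] == 0:
            bits[i] = 1                  # absorb the carry and stop
            writes += 1
            return reads, writes
        bits[i] = 0                      # carry past a one
        writes += 1
    return reads, writes                 # overflow wraps around to zero

c = [1, 1, 0, 0]                         # little-endian representation of 3
print(increment(c), c)                   # -> (3, 3) [0, 0, 1, 0]  (now 4)
```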

7 citations


Posted ContentDOI
TL;DR: This is the first implicit dictionary supporting predecessor and successor searches in the working-set bound, supporting insert(e) and delete(e) in O(log n) time and search(e) in O(log min(l_{p(e)}, l_{e}, l_{s(e)})) time.
Abstract: In this paper we present an implicit dynamic dictionary with the working-set property, supporting insert(e) and delete(e) in O(log n) time, predecessor(e) in O(log l_{p(e)}) time, successor(e) in O(log l_{s(e)}) time and search(e) in O(log min(l_{p(e)},l_{e}, l_{s(e)})) time, where n is the number of elements stored in the dictionary, l_{e} is the number of distinct elements searched for since element e was last searched for and p(e) and s(e) are the predecessor and successor of e, respectively. The time-bounds are all worst-case. The dictionary stores the elements in an array of size n using no additional space. In the cache-oblivious model the log is base B and the cache-obliviousness is due to our black box use of an existing cache-oblivious implicit dictionary. This is the first implicit dictionary supporting predecessor and successor searches in the working-set bound. Previous implicit structures required O(log n) time.
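
For intuition about the working-set bound, the sketch below computes l_e for every search in an access sequence, directly from the definition in the abstract. It illustrates the quantity appearing in the time bounds; it is not a sketch of the implicit dictionary itself.

```python
def working_set_numbers(accesses):
    """For each access to e, the number of distinct elements searched for
    since e was last searched for (None on a first access)."""
    last, out = {}, []
    for i, e in enumerate(accesses):
        if e in last:
            out.append(len(set(accesses[last[e] + 1:i])))
        else:
            out.append(None)
        last[e] = i
    return out

print(working_set_numbers(["a", "b", "c", "a", "b"]))
# -> [None, None, None, 2, 2]   ("a" saw {"b", "c"} since its last search)
```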

2 citations


Book ChapterDOI
08 Sep 2011
TL;DR: This work presents the first randomized paging approach that both has optimal competitiveness and selects victim pages in subquadratic time; it uses O(k) space and only O(log k) worst-case time per page request.
Abstract: In the field of online algorithms, paging is one of the most studied problems. For randomized paging algorithms a tight bound of H_k on the competitive ratio has been known for decades, yet existing algorithms matching this bound have high running times. We present the first randomized paging approach that both has optimal competitiveness and selects victim pages in subquadratic time. In fact, if k pages fit in internal memory, the best previous solution required O(k^2) time per request and O(k) space, whereas our approach also takes O(k) space, but only O(log k) time in the worst case per page request.
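
For context, the textbook randomized MARK algorithm below shows what randomized victim selection looks like; MARK is (2H_k − 1)-competitive, short of the optimal H_k the paper's algorithm attains with O(log k) worst-case time per request. The sketch favors clarity over speed (the sorted() call alone costs O(k log k)) and is not the paper's method.

```python
import random

def mark_paging(requests, k):
    """Randomized MARK: evict a uniformly random unmarked page on each fault;
    when every cached page is marked, unmark all (a new phase begins)."""
    cache, marked, faults = set(), set(), 0
    for p in requests:
        if p not in cache:
            faults += 1
            if len(cache) == k:
                if not cache - marked:          # all marked: start a new phase
                    marked = set()
                victim = random.choice(sorted(cache - marked))
                cache.remove(victim)
            cache.add(p)
        marked.add(p)                           # every requested page is marked
    return faults

random.seed(0)
print(mark_paging([1, 2, 3, 1, 4, 1, 2, 3], k=3))  # fault count depends on evictions
```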

2 citations