
Showing papers by "Alejandro López-Ortiz published in 2009"


Book ChapterDOI
05 Dec 2009
TL;DR: A data structure in the RAM model is presented supporting queries that given two indices i ≤ j and an integer k report the k smallest elements in the subarray A[i..j] in sorted order in optimal O(k) time.
Abstract: We study the following one-dimensional range reporting problem: On an array A of n elements, support queries that given two indices i ≤ j and an integer k report the k smallest elements in the subarray A[i..j] in sorted order. We present a data structure in the RAM model supporting such queries in optimal O(k) time. The structure uses O(n) words of space and can be constructed in O(n log n) time. The data structure can be extended to solve the online version of the problem, where the elements in A[i..j] are reported one-by-one in sorted order, in O(1) worst-case time per element. The problem is motivated by (and is a generalization of) a problem with applications in search engines: On a tree where leaves have associated rank values, report the highest-ranked leaves in a given subtree. Finally, the problem studied generalizes the classic range minimum query (RMQ) problem on arrays.
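For illustration, the query interface can be exercised with a naive baseline that simply scans the subarray; this is not the paper's O(k)-time structure, and the function name and heap-based shortcut are illustrative choices of this write-up, not taken from the paper.

import heapq

def sorted_range_report(A, i, j, k):
    # Naive baseline: report the k smallest elements of A[i..j] (inclusive)
    # in sorted order by scanning the whole subarray, which costs roughly
    # O((j - i + 1) + k log(j - i + 1)) per query, unlike the paper's
    # O(k)-time queries after O(n log n) preprocessing.
    return heapq.nsmallest(k, A[i:j + 1])

A = [5, 1, 4, 2, 8, 7, 3, 6]
print(sorted_range_report(A, 2, 6, 3))  # -> [2, 3, 4]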

54 citations


Journal ArticleDOI
TL;DR: This paper gives a finer separation of several known paging algorithms using a new technique called relative interval analysis, and shows that look-ahead is beneficial for a paging algorithm.
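As a toy illustration of the flavor of relative interval analysis, two paging algorithms can be compared head to head by the per-request difference of their fault counts; the LRU/FIFO pairing, the random workload, and the cache size below are illustrative assumptions, not taken from the paper.

import random
from collections import OrderedDict, deque

def lru_faults(requests, k):
    # Least-Recently-Used paging: keys kept in least- to most-recent order.
    cache, faults = OrderedDict(), 0
    for p in requests:
        if p in cache:
            cache.move_to_end(p)
        else:
            faults += 1
            if len(cache) == k:
                cache.popitem(last=False)
            cache[p] = True
    return faults

def fifo_faults(requests, k):
    # First-In-First-Out paging: evict the page that entered the cache first.
    cache, queue, faults = set(), deque(), 0
    for p in requests:
        if p not in cache:
            faults += 1
            if len(cache) == k:
                cache.discard(queue.popleft())
            cache.add(p)
            queue.append(p)
    return faults

# Per-request cost difference on one random workload; relative interval
# analysis studies the range of this quantity over all request sequences.
random.seed(0)
sigma = [random.randint(0, 9) for _ in range(10_000)]
print((lru_faults(sigma, 4) - fifo_faults(sigma, 4)) / len(sigma))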

36 citations


Book ChapterDOI
10 Sep 2009
TL;DR: This paper defines a measure for locality that is based on Denning’s working set model and expresses the performance of well-known algorithms in terms of this parameter, which introduces parameterized-style analysis to online algorithms.
Abstract: It is well-established that input sequences for paging and list update have locality of reference. In this paper we analyze the performance of algorithms for these problems in terms of the amount of locality in the input sequence. We define a measure for locality that is based on Denning’s working set model and express the performance of well-known algorithms in terms of this parameter. This introduces parameterized-style analysis to online algorithms. The idea is that rather than normalizing the performance of an online algorithm by an (optimal) offline algorithm, we explicitly express the behavior of the algorithm in terms of two more natural parameters: the size of the cache and Denning’s working set measure. This technique creates a performance hierarchy of paging algorithms which better reflects their intuitive relative strengths. It also reflects the intuition that a larger cache leads to better performance. We obtain a similar separation for list update algorithms. Lastly, we show that, surprisingly, certain randomized algorithms which are superior to MTF in the classical model are not so in the parameterized case, which matches experimental results.
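A simple working-set-style locality statistic, in the spirit of the measure described above, is the average number of distinct pages per sliding window; the window-based definition below is an illustrative simplification, not the paper's exact parameter.

from collections import Counter

def avg_working_set_size(requests, w):
    # Average number of distinct pages in each length-w window of the
    # request sequence; smaller values indicate more locality of reference.
    if len(requests) < w:
        return len(set(requests))
    counts, distinct, total = Counter(), 0, 0
    for t, p in enumerate(requests):
        counts[p] += 1
        if counts[p] == 1:
            distinct += 1
        if t >= w:                      # slide the window: drop requests[t - w]
            old = requests[t - w]
            counts[old] -= 1
            if counts[old] == 0:
                distinct -= 1
        if t >= w - 1:                  # a full window ends at position t
            total += distinct
    return total / (len(requests) - w + 1)

print(avg_working_set_size([1, 1, 2, 1, 2, 3, 3, 3, 1], w=3))  # -> 2.0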

23 citations


Book ChapterDOI
05 Dec 2009
TL;DR: This work considers the line-separable discrete unit disk cover problem (the set of disk centres can be separated from the set of points by a line) and presents an O(m^2 n)-time algorithm that finds an exact solution.
Abstract: Given m unit disks and n points in the plane, the discrete unit disk cover problem is to select a minimum subset of the disks to cover the points. This problem is NP-hard [11] and the best previous practical solution is a 38-approximation algorithm by Carmi et al. [4]. We first consider the line-separable discrete unit disk cover problem (the set of disk centres can be separated from the set of points by a line) for which we present an O(m^2 n)-time algorithm that finds an exact solution. Combining our line-separable algorithm with techniques from the algorithm of Carmi et al. [4] results in an O(m^2 n^4)-time 22-approximate solution to the discrete unit disk cover problem.
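For contrast with the exact and 22-approximate algorithms above, the generic greedy set-cover heuristic over the disk/point incidence sets is easy to state; the sketch below is only that classic O(log n)-approximation, and the coordinates and function name are illustrative assumptions.

import math

def greedy_disk_cover(centers, points, r=1.0):
    # Classic greedy set cover: repeatedly pick the disk covering the most
    # still-uncovered points. Not the paper's line-separable or 22-approximate
    # algorithm; it only guarantees an O(log n) approximation factor.
    covers = [{i for i, p in enumerate(points) if math.dist(c, p) <= r}
              for c in centers]
    uncovered, chosen = set(range(len(points))), []
    while uncovered:
        best = max(range(len(centers)), key=lambda d: len(covers[d] & uncovered))
        if not covers[best] & uncovered:
            raise ValueError("some point is not covered by any disk")
        chosen.append(best)
        uncovered -= covers[best]
    return chosen

disks = [(0.0, 0.0), (2.0, 0.0)]
pts = [(0.5, 0.0), (1.5, 0.0), (2.5, 0.0)]
print(greedy_disk_cover(disks, pts))  # -> [1, 0]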

20 citations


Book ChapterDOI
05 Dec 2009
TL;DR: This work presents the first adaptive data structure for two-dimensional orthogonal range search, and presents a novel algorithm of independent interest to decompose a point set into a minimum number of untangled, similarly directed monotonic chains in O(k^2 n + n log n) time.
Abstract: We present the first adaptive data structure for two-dimensional orthogonal range search. Our data structure is adaptive in the sense that it gives improved search performance for data that is better than the worst case (Demaine et al., 2000) [8]; in this case, data with more inherent sortedness. Given n points on the plane, the linear space data structure can answer range queries in O(log n + k + m) time, where m is the number of points in the output and k is the minimum number of monotonic chains into which the point set can be decomposed, which is O(n) in the worst case. Our result matches the worst-case performance of other optimal-time linear space data structures, or surpasses them when k = o(n). Our data structure can be made implicit, requiring no extra space beyond that of the data points themselves (Munro and Suwanda, 1980) [16], in which case the query time becomes O(k log n + m). We also present a novel algorithm of independent interest to decompose a point set into a minimum number of untangled, similarly directed monotonic chains in O(k^2 n + n log n) time.
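The parameter k above counts monotonic chains; a patience-style greedy shows how a point set can be split into few chains that are non-decreasing in y once sorted by x. This sketch illustrates only that simpler notion, not the paper's untangled, similarly directed decomposition, and the names are illustrative.

def monotone_chain_partition(points):
    # Greedily partition the points, taken in x order, into chains whose
    # y-coordinates never decrease; each point joins the compatible chain
    # whose last y is largest (the standard patience-style greedy for
    # minimizing the number of non-decreasing chains).
    chains = []
    for p in sorted(points):
        best = None
        for c in chains:
            if c[-1][1] <= p[1] and (best is None or c[-1][1] > best[-1][1]):
                best = c
        if best is None:
            chains.append([p])       # no compatible chain: open a new one
        else:
            best.append(p)
    return chains

pts = [(1, 5), (2, 1), (3, 6), (4, 2), (5, 7)]
for chain in monotone_chain_partition(pts):
    print(chain)   # [(1, 5), (3, 6), (5, 7)] and [(2, 1), (4, 2)]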

16 citations


Book ChapterDOI
04 Jun 2009
TL;DR: This paper performs an experimental comparison of various list update algorithms, both as stand-alone compression mechanisms and as a second stage of BWT-based compression, and shows that MTF outperforms other list update algorithms in practice after BWT.
Abstract: List update algorithms have been widely used as subroutines in compression schemes, most notably as part of Burrows-Wheeler compression. The Burrows-Wheeler transform (BWT), which is the basis of many state-of-the-art general-purpose compressors, applies a compression algorithm to a permuted version of the original text. List update algorithms are a common choice for this second stage of BWT-based compression. In this paper we perform an experimental comparison of various list update algorithms, both as stand-alone compression mechanisms and as a second stage of BWT-based compression. Our experiments show that MTF outperforms other list update algorithms in practice after BWT. This is consistent with the intuition that BWT increases locality of reference and with the result predicted by the locality-of-reference model of Angelopoulos et al. [1]. Lastly, we observe that due to an often neglected difference in the cost models, good list update algorithms may be far from optimal for BWT compression, and we construct an explicit example of this phenomenon, a fact that had not previously been supported theoretically in the literature.
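A minimal Move-to-Front encoder makes the second-stage role concrete: after a BWT, clustered repeats turn into runs of small indices that an entropy coder handles well. The alphabet handling below is simplified, and the example string is illustrative rather than an actual BWT output.

def mtf_encode(data, alphabet=None):
    # Move-to-Front: emit each symbol's current position in the list,
    # then move that symbol to the front of the list.
    table = list(alphabet) if alphabet is not None else sorted(set(data))
    out = []
    for s in data:
        i = table.index(s)
        out.append(i)
        table.insert(0, table.pop(i))
    return out

# A toy string with the clustered repeats typical of BWT output.
print(mtf_encode("bnnaaa"))  # -> [1, 2, 0, 2, 0, 0]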

14 citations


Proceedings Article
11 Jul 2009
TL;DR: This paper addresses the problem of designing an interruptible system in a setting in which n problem instances, all equally important, must be solved, and proposes a schedule that is optimal for the case of a single processor.
Abstract: In this paper we address the problem of designing an interruptible system in a setting in which n problem instances, all equally important, must be solved. The system involves scheduling executions of contract algorithms (which offer a trade-off between allowable computation time and quality of the solution) on m identical parallel processors. When an interruption occurs, the system must report a solution to each of the n problem instances. The quality of this output is then compared to that of the best possible algorithm that has foreknowledge of the interruption time and must, likewise, produce solutions to all n problem instances. This extends the well-studied setting in which only one problem instance is queried at interruption time. We propose a schedule which we prove is optimal for the case of a single processor. For multiple processors, we show that the quality of the schedule is within a small factor of optimal.
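A naive way to see the scheduling problem is a round-robin schedule that cycles through the n instances with geometrically growing contract lengths; the base-2 growth below is the classic single-instance doubling choice and is only an illustration, not the optimal schedule derived in the paper.

def round_robin_schedule(n, base=2.0, rounds=4):
    # Yield (instance, contract_length) pairs: cycle through the n problem
    # instances, growing the contract length geometrically each step.
    return [(j % n, base ** j) for j in range(rounds * n)]

for instance, length in round_robin_schedule(n=3, rounds=2):
    print(f"run a contract of length {length:g} on instance {instance}")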

13 citations


Book ChapterDOI
24 Jul 2009
TL;DR: This work describes polynomial-time approximations for both the minimization and decision versions of the Hausdorff core problem, and provides an argument supporting the hardness of the problem.
Abstract: Given a simple polygon P, we consider the problem of finding a convex polygon Q contained in P that minimizes H(P, Q), where H denotes the Hausdorff distance. We call such a polygon Q a Hausdorff core of P. We describe polynomial-time approximations for both the minimization and decision versions of the Hausdorff core problem, and we provide an argument supporting the hardness of the problem.
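For concreteness, the symmetric Hausdorff distance between two finite point sets can be computed directly; sampling only polygon vertices, as in this sketch, is a rough stand-in for the true polygon-to-polygon distance in the problem statement, and the example coordinates are illustrative.

import math

def hausdorff(P, Q):
    # Symmetric Hausdorff distance between two finite point sets.
    def directed(A, B):
        return max(min(math.dist(a, b) for b in B) for a in A)
    return max(directed(P, Q), directed(Q, P))

outer = [(0, 0), (4, 0), (4, 4), (0, 4)]   # vertices of a polygon P
inner = [(1, 1), (3, 1), (3, 3), (1, 3)]   # vertices of a candidate core Q
print(hausdorff(outer, inner))             # -> 1.414... (sqrt(2))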

4 citations


Proceedings ArticleDOI
19 Apr 2009
TL;DR: It is proved that VLB is no longer optimal on unrestricted topologies, and can require more capacity than shortest path routing to serve all traffic matrices on some topologies.
Abstract: Valiant load balancing (VLB), also called two-stage load balancing, is gaining popularity as a routing scheme that can serve arbitrary traffic matrices. To date, VLB network design is well understood on a logical full-mesh topology, where VLB is optimal even when nodes can fail. In this paper, we address the design and capacity provisioning of arbitrary VLB network topologies. First, we introduce an algorithm to determine whether VLB can serve all traffic matrices when a fixed number of arbitrary links fail, and we show how to find a min-cost expansion of the network - via link upgrades and installs - so that it is resilient to these failures. Additionally, we propose a method to design a new VLB network under the fixed-charge network design cost model. Finally, we prove that VLB is no longer optimal on unrestricted topologies, and can require more capacity than shortest path routing to serve all traffic matrices on some topologies. These results rely on a novel theorem that characterizes the capacity VLB requires of links crossing each cut (i.e., each partition of the network's nodes into two sets).
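The cut-capacity result is easiest to appreciate against the baseline full-mesh model of VLB, where every demand is split uniformly over all N nodes in two stages; the sketch below implements only that textbook full-mesh splitting, not the paper's arbitrary-topology analysis, and the variable names are illustrative.

def vlb_link_load(traffic):
    # Per-link load under two-stage Valiant load balancing on a full mesh.
    # Stage 1: node i spreads each unit it sources uniformly over all N nodes.
    # Stage 2: each intermediate node forwards the traffic to its destination.
    N = len(traffic)
    load = {(u, v): 0.0 for u in range(N) for v in range(N) if u != v}
    for i in range(N):
        for j in range(N):
            d = traffic[i][j]
            if d == 0:
                continue
            for k in range(N):                 # k is the intermediate node
                if k != i:
                    load[(i, k)] += d / N      # stage 1: i -> k
                if k != j:
                    load[(k, j)] += d / N      # stage 2: k -> j
    return load

# Uniform all-to-all demands on 4 nodes: every link ends up equally loaded.
N = 4
tm = [[0.0 if i == j else 1.0 for j in range(N)] for i in range(N)]
print(max(vlb_link_load(tm).values()))  # -> 1.5 (each node sources 3 units; 2 * 3 / 4)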

4 citations