
Showing papers by "Alejandro López-Ortiz published in 2011"


Book ChapterDOI
18 Feb 2011
TL;DR: This paper provides an algorithm with constant approximation factor 18 to find a minimum cardinality subset D* ⊆ D such that unit disks in D* cover all the points in P.
Abstract: Given a set P of n points and a set D of m unit disks on a 2-dimensional plane, the discrete unit disk cover (DUDC) problem is (i) to check whether each point in P is covered by at least one disk in D or not and (ii) if so, then find a minimum cardinality subset D* ⊆ D such that unit disks in D* cover all the points in P. The discrete unit disk cover problem is a geometric version of the general set cover problem which is NP-hard [14]. The general set cover problem is not approximable within c log |P|, for some constant c, but the DUDC problem was shown to admit a constant factor approximation. In this paper, we provide an algorithm with constant approximation factor 18. The running time of the proposed algorithm is O(n log n + m log m + mn). The previous best known tractable solution for the same problem was a 22-factor approximation algorithm with running time O(m^2 n^4).
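For readers who want to see how a DUDC instance is typically encoded, here is a minimal sketch of the generic greedy set-cover heuristic applied to points and unit-disk centers in the plane. It is only an O(log n)-approximation baseline, not the 18-factor algorithm from the paper, and all function and variable names are illustrative.

```python
from math import hypot

def greedy_dudc(points, centers, radius=1.0):
    # Greedy set cover on a discrete unit disk cover instance.
    # points, centers: lists of (x, y) pairs.  Returns indices of the chosen
    # disks, or None if some point lies outside every disk (part (i) fails).
    covers = [{i for i, (px, py) in enumerate(points)
               if hypot(px - cx, py - cy) <= radius}
              for (cx, cy) in centers]
    uncovered = set(range(len(points)))
    if uncovered - set().union(*covers):
        return None
    chosen = []
    while uncovered:
        # pick the disk that covers the most still-uncovered points
        best = max(range(len(centers)), key=lambda j: len(covers[j] & uncovered))
        chosen.append(best)
        uncovered -= covers[best]
    return chosen

# Two of the three disks suffice to cover the three points.
print(greedy_dudc([(0, 0), (0.5, 0), (3, 0)], [(0.2, 0), (3, 0.1), (10, 10)]))
```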

54 citations


Journal ArticleDOI
TL;DR: The paper shows that no cache-oblivious search structure can guarantee a search performance of fewer than lg e · log_B N memory transfers between any two levels of the memory hierarchy, and shows that as k grows, the search costs of the optimal k-level DAM search structure and the optimal cache-oblivious search structure rapidly converge.
Abstract: This paper gives tight bounds on the cost of cache-oblivious searching. The paper shows that no cache-oblivious search structure can guarantee a search performance of fewer than lg e · log_B N memory transfers between any two levels of the memory hierarchy. This lower bound holds even if all of the block sizes are limited to be powers of 2. The paper gives modified versions of the van Emde Boas layout, where the expected number of memory transfers between any two levels of the memory hierarchy is arbitrarily close to [lg e + O(lg lg B / lg B)] log_B N + O(1). This factor approaches lg e ≈ 1.443 as B increases. The expectation is taken over the random placement in memory of the first element of the structure. Because searching in the disk-access machine (DAM) model can be performed in log_B N + O(1) block transfers, this result establishes a separation between the (2-level) DAM model and cache-oblivious model. The DAM model naturally extends to k levels. The paper also shows that as k grows, the search costs of the optimal k-level DAM search structure and the optimal cache-oblivious search structure rapidly converge. This result demonstrates that for a multilevel memory hierarchy, a simple cache-oblivious structure almost replicates the performance of an optimal parameterized k-level DAM structure.
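As background, the sketch below produces the classic deterministic van Emde Boas (recursive) layout of a perfect binary search tree, the structure whose randomized and modified variants the paper analyzes; the function name and labelling scheme are illustrative, and the paper's shifted layouts are not reproduced.

```python
def veb_order(levels):
    # BFS labels (root = 1) of a perfect binary tree with `levels` levels,
    # listed in van Emde Boas (recursive, cache-oblivious) layout order.
    def layout(root, lv):
        if lv == 1:
            return [root]
        top = lv // 2                  # levels in the top recursive piece
        bottom = lv - top              # levels in each bottom piece
        order = layout(root, top)
        first = root << (top - 1)      # leftmost node on the top piece's last level
        for leaf in range(first, first + (1 << (top - 1))):
            order += layout(2 * leaf, bottom)       # left bottom subtree
            order += layout(2 * leaf + 1, bottom)   # right bottom subtree
        return order
    return layout(1, levels)

# A 4-level tree (15 nodes): nodes that are close in the recursion end up
# close in memory at every (power-of-two) block size.
print(veb_order(4))
```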

19 citations


Proceedings ArticleDOI
10 Mar 2011
TL;DR: A new algorithm for computing the schedule with minimal completion times and makespan is presented; it improves substantially over the best previously known algorithm, which has complexity O(mn^2).
Abstract: Consider the problem of scheduling a set of tasks of length p without preemption on m identical machines with given release and deadline times. We present a new algorithm for computing the schedule with minimal completion times and makespan. The algorithm has time complexity O(min(1, p/m) n^2), which improves substantially over the best previously known algorithm, whose complexity is O(mn^2).
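To make the input model concrete, here is a simple earliest-deadline-first greedy baseline for equal-length jobs with release times and deadlines on m identical machines. It only illustrates the setting: it is not the paper's O(min(1, p/m) n^2) algorithm, it can miss feasible schedules in tight instances, and all names are invented for this sketch.

```python
import heapq

def edf_schedule(jobs, m, p):
    # jobs: list of (release, deadline) pairs; each job has processing time p.
    # Greedy rule: whenever a machine becomes idle, run the released job with
    # the earliest deadline.  Returns (job, start_time) pairs, or None if the
    # greedy rule misses a deadline.
    by_release = sorted(range(len(jobs)), key=lambda j: jobs[j][0])
    free = [0.0] * m                        # times at which machines become idle
    heapq.heapify(free)
    ready, i, schedule = [], 0, []
    while len(schedule) < len(jobs):
        t = heapq.heappop(free)             # earliest idle machine
        while i < len(by_release) and jobs[by_release[i]][0] <= t:
            j = by_release[i]
            heapq.heappush(ready, (jobs[j][1], j))   # released jobs, keyed by deadline
            i += 1
        if not ready:                       # nothing released yet: idle until next release
            heapq.heappush(free, jobs[by_release[i]][0])
            continue
        deadline, j = heapq.heappop(ready)
        if t + p > deadline:
            return None
        schedule.append((j, t))
        heapq.heappush(free, t + p)
    return schedule

# Three unit-length jobs on two machines.
print(edf_schedule([(0, 2), (0, 1), (1, 3)], m=2, p=1))
```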

10 citations


Journal ArticleDOI
TL;DR: This paper gives algorithms to find all projections of a convex polyhedron such that a given set of edges, faces and/or vertices appear on the silhouette, and presents an algorithm to solve this problem in O(k^2) time for k edges.
Abstract: Silhouettes of polyhedra are an important primitive in application areas such as machine vision and computer graphics. In this paper, we study how to select view points of convex polyhedra such that the silhouette satisfies certain properties. Specifically, we give algorithms to find all projections of a convex polyhedron such that a given set of edges, faces and/or vertices appear on the silhouette. We present an algorithm to solve this problem in O(k^2) time for k edges. For orthogonal projections, we give an improved algorithm that is fully adaptive in the number l of connected components formed by the edges, and has a time complexity of O(k log k + kl). We then generalize this algorithm to edges and/or faces appearing on the silhouette.
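The geometric fact underlying silhouettes is that, under an orthogonal projection, an edge of a convex polyhedron lies on the silhouette exactly when one of its two incident faces is front-facing and the other back-facing. The toy test below illustrates this standard criterion only; it is not the paper's algorithm for finding all admissible view directions, and the names are illustrative.

```python
def edge_on_silhouette(n1, n2, d, eps=1e-9):
    # n1, n2: outward normals of the two faces incident to the edge.
    # d: orthogonal view direction.  The edge is on the silhouette iff the
    # two faces face opposite ways relative to d (generic view assumed).
    dot = lambda u, v: sum(a * b for a, b in zip(u, v))
    return dot(n1, d) * dot(n2, d) < -eps

# The cube edge shared by the +x and +y faces is a silhouette edge when
# viewed along (1, -1, 0), but not when viewed along (1, 1, 0).
print(edge_on_silhouette((1, 0, 0), (0, 1, 0), (1, -1, 0)))  # True
print(edge_on_silhouette((1, 0, 0), (0, 1, 0), (1, 1, 0)))   # False
```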

9 citations


Proceedings ArticleDOI
04 Jun 2011
TL;DR: This paper derives bounds on the competitive ratios of natural strategies to manage the cache, and shows that the offline problem is NP-complete, but that it admits an algorithm that runs in polynomial time in the length of the request sequences.
Abstract: Paging for multicore processors extends the classical paging problem to a setting in which several processes simultaneously share the cache. Recently, Hassidim [6] studied cache eviction policies for multicores under the traditional competitive analysis metric, showing that LRU is not competitive against an offline policy that has the power of arbitrarily delaying request sequences to its advantage. In this paper we study caching under the more conservative model in which requests must be served as they arrive. We derive bounds on the competitive ratios of natural strategies to manage the cache, and we show that the offline problem is NP-complete, but that it admits an algorithm that runs in polynomial time in the length of the request sequences.
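For reference, the snippet below simulates plain LRU on a single request sequence and counts page faults; it is a textbook single-core toy, not the shared-cache multicore model analyzed in the paper, and the interface is invented for illustration.

```python
from collections import OrderedDict

def lru_faults(requests, cache_size):
    # Count page faults of LRU serving one request sequence with a cache
    # holding `cache_size` pages.
    cache, faults = OrderedDict(), 0
    for page in requests:
        if page in cache:
            cache.move_to_end(page)        # refresh recency
        else:
            faults += 1
            if len(cache) == cache_size:
                cache.popitem(last=False)  # evict the least recently used page
            cache[page] = True
    return faults

# Classic worst case for LRU: a cyclic sequence one page larger than the cache
# faults on every request.
print(lru_faults("abcabcabc", 2))  # 9
```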

4 citations


Book ChapterDOI
17 Oct 2011
TL;DR: This paper introduces COCA Filters, a new type of Bloom filters which exploits the co-occurrence probability of words in documents to reduce the false positive error.
Abstract: We propose an indexing data structure based on a novel variation of Bloom filters. Signature files have been proposed in the past as a method to index large text databases, though they suffer from a high false positive error rate. In this paper we introduce COCA Filters, a new type of Bloom filter which exploits the co-occurrence probability of words in documents to reduce the false positive error. We show experimentally that by using this technique we can reduce the false positive error by up to 21.6 times for the same index size. Furthermore, Bloom filters can be replaced by COCA Filters wherever the co-occurrence of any two members of the universe is identifiable.
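As context, here is a textbook Bloom filter; COCA Filters keep this membership-test interface but choose hash positions using word co-occurrence statistics, which is the paper's contribution and is not reproduced in this sketch. Class and parameter names are illustrative.

```python
import hashlib

class BloomFilter:
    # Standard Bloom filter: k hash positions per item in an m-bit array.
    def __init__(self, size_bits, num_hashes):
        self.m, self.k = size_bits, num_hashes
        self.bits = bytearray((size_bits + 7) // 8)

    def _positions(self, item):
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def __contains__(self, item):
        # May report false positives, never false negatives.
        return all(self.bits[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(item))

bf = BloomFilter(size_bits=1024, num_hashes=3)
bf.add("cache")
print("cache" in bf, "oblivious" in bf)  # True, almost surely False
```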

3 citations


Book ChapterDOI
08 Sep 2011
TL;DR: It is proved that Move-to-Front (MTF) is the best list update algorithm under any such distribution; the working set property, which indicates good performance of an online algorithm on sequences with locality of reference, is also studied.
Abstract: In this paper we study the performance of list update algorithms under arbitrary distributions that exhibit strict locality of reference and prove that Move-to-Front (MTF) is the best list update algorithm under any such distribution. Furthermore, we study the working set property of online list update algorithms. The working set property indicates the good performance of an online algorithm on sequences with locality of reference. We show that no list update algorithm has the working set property. Nevertheless, we can distinguish among list update algorithms by comparing their performance in terms of the working set bound. We prove bounds for several well known list update algorithms and conclude that MTF attains the best performance in this context as well.
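A minimal sketch of Move-to-Front under the standard list update cost model (the cost of an access is the position of the requested item) may help fix intuition; it simply simulates MTF on a request sequence and is not tied to the paper's distributional analysis. Names are illustrative.

```python
def mtf_cost(initial_list, requests):
    # Serve `requests` with Move-to-Front and return the total access cost,
    # where accessing the item at position i (counted from 1) costs i.
    lst, total = list(initial_list), 0
    for x in requests:
        pos = lst.index(x) + 1
        total += pos
        lst.insert(0, lst.pop(pos - 1))  # move the accessed item to the front
    return total

# Locality of reference rewards MTF: repeated items quickly become cheap.
print(mtf_cost("abcde", "aaabbbaa"))  # 10
```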

3 citations


Journal ArticleDOI
TL;DR: The problem of reconstructing convex polygons and convex polyhedra given the number of visible edges and visible faces in some orthogonal projections is studied, and it is shown that the problem becomes NP-hard when the directions are covered by three or more planes.
Abstract: We study the problem of reconstructing convex polygons and convex polyhedra given the number of visible edges and visible faces in some orthogonal projections. In 2D, we find necessary and sufficient conditions for the existence of a feasible polygon of size N and give an algorithm to construct one, if it exists. When N is not known, we give an algorithm to find the maximum and minimum sizes of a feasible polygon. In 3D, when the directions are covered by a single plane we show that a feasible polyhedron can be constructed from a feasible polygon. We also give an algorithm to construct a feasible polyhedron when the directions are covered by two planes. Finally, we show that the problem becomes NP-hard when the directions are covered by three or more planes.
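The quantity the reconstruction works from, the number of visible edges of a convex polygon in an orthogonal projection, can be made concrete with the small sketch below: an edge is visible exactly when its outward normal points toward the viewer. This is only the standard visibility count, not the paper's reconstruction algorithm, and the names are illustrative.

```python
def visible_edges(poly, view):
    # poly: counter-clockwise vertex list of a convex polygon.
    # view: direction from the polygon toward the viewer (orthogonal projection).
    # An edge is visible iff its outward normal has positive dot product with view.
    n, count = len(poly), 0
    for i in range(n):
        (x1, y1), (x2, y2) = poly[i], poly[(i + 1) % n]
        nx, ny = y2 - y1, x1 - x2          # outward normal for CCW orientation
        if nx * view[0] + ny * view[1] > 0:
            count += 1
    return count

# A unit square shows 1 edge when viewed face-on and 2 from a diagonal direction.
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(visible_edges(square, (0, -1)), visible_edges(square, (1, -1)))  # 1 2
```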

3 citations


Book ChapterDOI
15 Aug 2011
TL;DR: An asymptotically optimal strategy is presented that is within a multiplicative factor of Θ(log(m - t)) of the optimal search cost; it incorporates three fundamental search paradigms, namely uniform search, doubling and hyperbolic dovetailing.
Abstract: We consider the problem of exploring m concurrent rays using a single searcher. The rays are disjoint with the exception of a single common point, and in each ray a potential target may be located. The objective is to design efficient search strategies for locating t targets (with t ≤ m). This setting generalizes the extensively studied ray search (or star search) problem, in which the searcher seeks a single target. In addition, it is motivated by applications such as the interleaved execution of heuristic algorithms, in which a certain number of heuristics are required to terminate successfully. We apply two different measures for evaluating the efficiency of the search strategy. The first measure is the standard metric in the context of ray-search problems, and compares the total search cost to the cost of an optimal algorithm that has full information on the targets. We present a strategy that achieves optimal competitive ratio under this metric. The second measure is based on a weakening of the optimal cost as proposed by Kirkpatrick [ESA 2009] and McGregor et al. [ESA 2009]. For this model, we present an asymptotically optimal strategy which is within a multiplicative factor of Θ(log(m - t)) from the optimal search cost. Interestingly, our strategy incorporates three fundamental search paradigms, namely uniform search, doubling and hyperbolic dovetailing. Moreover, for both measures, our results demonstrate that the problem of locating t targets in m rays is essentially as difficult as the problem of locating a single target in m - (t - 1) rays.
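To fix intuition for the doubling paradigm mentioned above, the sketch below simulates the classical geometric strategy for a single target on m rays, using the standard base r = m/(m - 1); the paper's multi-target strategies, which also use uniform search and hyperbolic dovetailing, are not reproduced, and all names are illustrative.

```python
def m_ray_search_cost(m, target_ray, target_dist, r=None):
    # Classic geometric strategy for ONE target on m >= 2 rays: on excursion i,
    # walk to depth r**i along ray (i mod m) and return to the origin, until
    # the target (at distance target_dist on ray target_ray) is passed.
    r = r if r is not None else m / (m - 1)   # standard base for m rays
    cost, i = 0.0, 0
    while True:
        ray, depth = i % m, r ** i
        if ray == target_ray and depth >= target_dist:
            return cost + target_dist         # walk out to the target and stop
        cost += 2 * depth                     # walk out and back
        i += 1

# Competitive ratio against a searcher who knows where the target is:
d = 100.0
print(m_ray_search_cost(2, 0, d) / d)         # below 9, the cow-path bound for m = 2
```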

Posted Content
TL;DR: Over the last fifteen years, web searching has seen tremendous improvements, but one of the main remaining challenges is to satisfy the users’ needs when they provide a poorly formulated query.
Abstract: Over the last fifteen years, web searching has seen tremendous improvements. Starting from a nearly random collection of matching pages in 1995, today, search engines tend to satisfy the user’s informational need on well-formulated queries. One of the main remaining challenges is to satisfy the users’ needs when they provide a poorly formulated query. When the pages matching the user’s original keywords are judged to be unsatisfactory, query expansion techniques are used to alter the result set. These techniques find