
Showing papers by "Alejandro López-Ortiz published in 2008"


Book ChapterDOI
07 Apr 2008
TL;DR: Move-to-Front (MTF) is proved to be the unique optimal algorithm for list update under a locality-of-reference model, resolving an open problem of Martinez and Roura: proposing a measure that successfully separates MTF from all other list-update algorithms.
Abstract: It is known that in practice, request sequences for the list update problem exhibit a certain degree of locality of reference. Motivated by this observation we apply the locality of reference model for the paging problem due to Albers et al. [STOC 2002/JCSS 2005] in conjunction with bijective analysis [SODA 2007] to list update. Using this framework, we prove that Move-to-Front (MTF) is the unique optimal algorithm for list update. This addresses the open question of defining an appropriate model for capturing locality of reference in the context of list update [Hester and Hirschberg ACM Comp. Surv. 1985]. Our results hold both for the standard cost function of Sleator and Tarjan [CACM 1985] and the improved cost function proposed independently by Martinez and Roura [TCS 2000] and Munro [ESA 2000]. This result resolves an open problem of Martinez and Roura, namely that of proposing a measure which successfully separates MTF from all other list-update algorithms.
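
To make the list-update setting concrete, the following is a minimal Python sketch (not code from the paper) of the Move-to-Front rule under the standard full-cost model of Sleator and Tarjan, where accessing the item at position i costs i and the accessed item is then moved to the front using free exchanges; the function name and cost accounting are illustrative choices.

```python
def mtf_total_cost(requests, initial_list):
    """Serve a request sequence with Move-to-Front (MTF).

    Standard full-cost model: accessing the item at position i (1-based)
    costs i; MTF then moves the accessed item to the front, which is free
    under the free-exchange rule.
    """
    lst = list(initial_list)
    total = 0
    for x in requests:
        pos = lst.index(x) + 1      # 1-based access cost
        total += pos
        lst.pop(pos - 1)            # move the requested item ...
        lst.insert(0, x)            # ... to the front
    return total

# Sequences with locality of reference (recently requested items repeat)
# keep MTF's total cost low.
print(mtf_total_cost("aabbbac", ["a", "b", "c", "d"]))
```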

35 citations


Journal ArticleDOI
13 Jul 2008
TL;DR: This paper addresses the setting in which, if an interruption occurs at time t, the system is given an additional window of time $$w(t) \le c \cdot t$$, for a constant c, within which the simulation must be completed, and it formulates extensions of performance measures for schedules in this setting.
Abstract: A contract algorithm is an algorithm which is given, as part of its input, a specified amount of allowable computation time. In contrast, interruptible algorithms may be interrupted throughout their execution, at which point they must report their current solution. Simulating interruptible algorithms by means of schedules of executions of contract algorithms in parallel processors is a well-studied problem with significant applications in AI. In the classical case, the interruptions are hard deadlines in which a solution must be reported immediately at the time the interruption occurs. In this paper we study the more general setting of scheduling contract algorithms in the presence of soft deadlines. This is motivated by the observation of practitioners that soft deadlines are as common an occurrence as hard deadlines, if not more common. In our setting, at the time t of interruption the algorithm is given an additional window of time w(t) ≤ c · t to continue the contract or, indeed, start a new contract (for some fixed constant c). We explore this variation using the acceleration ratio, which is the canonical measure of performance for these schedules, and derive schedules of optimal acceleration ratio for all functions w.
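
For intuition, the sketch below replays the classical single-processor exponential schedule of contract lengths 1, b, b^2, ... and numerically estimates an acceleration-ratio-style quantity: the worst ratio between the interruption time t and the length of the longest contract completed by the (possibly extended) deadline t + w(t), with w(t) = c · t. The schedule, parameter names, and brute-force evaluation are illustrative assumptions; they are not the constructions, nor necessarily the exact definition of acceleration ratio, used in the paper.

```python
def acceleration_ratio_estimate(base=2.0, c=0.0, num_contracts=40, samples=2000):
    """Estimate the worst ratio t / (longest contract finished by t + c*t)
    for the schedule running contracts of lengths base**0, base**1, ...
    back to back on a single processor."""
    lengths = [base ** i for i in range(num_contracts)]
    finish, t = [], 0.0
    for length in lengths:
        t += length
        finish.append(t)                      # completion time of this contract

    worst = 0.0
    horizon = finish[-2]                      # stay away from the schedule's end
    for k in range(1, samples + 1):
        t = horizon * k / samples             # candidate interruption time
        deadline = t + c * t                  # soft-deadline window w(t) = c*t
        done = [L for L, f in zip(lengths, finish) if f <= deadline]
        if done:
            worst = max(worst, t / max(done))
    return worst

# Hard deadlines (c = 0): the doubling schedule's ratio approaches 4.
print(acceleration_ratio_estimate(base=2.0, c=0.0))
# A window w(t) = 0.5 * t lowers the estimated worst-case ratio.
print(acceleration_ratio_estimate(base=2.0, c=0.5))
```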

16 citations


Proceedings ArticleDOI
14 Jun 2008
TL;DR: This paper proposes a model of low-degree parallelism (LoPRAM) that builds upon the RAM and PRAM models yet better reflects recent advances in parallel (multi-core) architectures, and shows that in many instances it naturally leads to work-optimal parallel algorithms via simple modifications to sequential algorithms.
Abstract: Over the last five years, major microprocessor manufacturers have released plans for a rapidly increasing number of cores per microprocessor, with upwards of 64 cores by 2015. In this setting, a sequential RAM computer will no longer accurately reflect the architecture on which algorithms are being executed. In this paper we propose a model of low degree parallelism (LoPRAM) which builds upon the RAM and PRAM models yet better reflects recent advances in parallel (multi-core) architectures. This model supports a high level of abstraction that simplifies the design and analysis of parallel programs. More importantly we show that in many instances it naturally leads to work-optimal parallel algorithms via simple modifications to sequential algorithms.
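
As a rough illustration of the kind of "simple modification to a sequential algorithm" that a small number of cores invites (an assumed example, not code from the paper), the sketch below parallelizes only the top level of mergesort: the input is cut into p chunks for a small p, each chunk is sorted by the ordinary sequential routine in its own worker, and the sorted runs are merged, keeping the total work at the sequential O(n log n) bound.

```python
# A LoPRAM-flavoured sketch (assumed example): with a small worker count p,
# parallelize only the top level of mergesort and keep total work O(n log n).
import heapq
import random
from concurrent.futures import ProcessPoolExecutor


def _sort_chunk(chunk):
    """Sequential subroutine executed by each worker."""
    return sorted(chunk)


def lopram_style_sort(data, p=4):
    """Cut data into p chunks, sort them in parallel, then merge the runs."""
    n = len(data)
    chunk_size = max(1, (n + p - 1) // p)
    chunks = [data[i:i + chunk_size] for i in range(0, n, chunk_size)]
    with ProcessPoolExecutor(max_workers=p) as pool:
        runs = list(pool.map(_sort_chunk, chunks))
    return list(heapq.merge(*runs))


if __name__ == "__main__":
    data = [random.randint(0, 10**6) for _ in range(100_000)]
    assert lopram_style_sort(data, p=4) == sorted(data)
```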

14 citations


Book ChapterDOI
07 Feb 2008
TL;DR: A new measure, derived from first principles and introduced by [Angelopoulos, Dorrigiv and Lopez-Ortiz, SODA 2007], is shown to correspond better to observed practice, and a framework derived from it generalizes to other online analysis settings.
Abstract: We compare the theory and practice of online algorithms, and show that in certain instances there is a large gap between the predictions from theory and observed practice. In particular, the competitive ratio, which is the main technique for the analysis of online algorithms, is known to produce unrealistic measures of performance in certain settings. Motivated by this, we first examine the case of paging. We present a study of the reasons behind this apparent failure of the theoretical model. We then show that a new measure, derived from first principles and introduced by [Angelopoulos, Dorrigiv and Lopez-Ortiz, SODA 2007], corresponds better to observed practice. Using these ideas, we derive a new framework termed the cooperative ratio that generalizes to other online analysis settings, and illustrate it with examples from list update.

9 citations


Book ChapterDOI
07 Feb 2008
TL;DR: It is proved that locality of reference does help under certain other cost models, which suggests that a new model combining aspects of the two previously proposed models may be preferable.
Abstract: The competitive ratio is the most common metric in online algorithm analysis. Unfortunately, it produces pessimistic measures and often fails to distinguish between paging algorithms that have vastly differing performance in practice. An apparent reason for this is that the model does not take into account the locality of reference evidenced by actual input sequences. Therefore many alternative measures have been proposed to overcome the observed shortcomings of competitive analysis in the context of paging algorithms. While a definitive answer to all the concerns has yet to be found, clear progress has been made in identifying specific flaws and possible fixes for them. In this paper we consider two previously proposed models of locality of reference and observe that, even if we restrict the input to sequences with high locality of reference, the performance of every on-line algorithm in terms of the competitive ratio does not improve. Then we prove that locality of reference is useful under some other cost models, which suggests that a new model combining aspects of both proposed models can be preferable. We also propose a new model for locality of reference and prove that the randomized marking algorithm has a better fault rate on sequences with high locality of reference. Finally, we generalize the existing models to several variants of the caching problem.
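
For reference, here is a minimal sketch of the randomized marking paging algorithm mentioned at the end of the abstract, together with an empirical fault rate; the request sequences and function name are illustrative only.

```python
import random


def marking_fault_rate(requests, k):
    """Simulate the randomized marking algorithm on a cache of size k.

    On a hit the page is marked.  On a fault, if every cached page is
    marked a new phase starts and all marks are cleared; then a uniformly
    random unmarked page is evicted and the new page enters marked.
    Returns the fault rate (faults per request).
    """
    cache, marked, faults = set(), set(), 0
    for page in requests:
        if page in cache:
            marked.add(page)
            continue
        faults += 1
        if len(cache) >= k:
            if not (cache - marked):        # all pages marked: new phase
                marked.clear()
            cache.remove(random.choice(sorted(cache - marked)))
        cache.add(page)
        marked.add(page)
    return faults / len(requests)


# Sequences with high locality of reference (small working set) fault less often.
local = [p for _ in range(200) for p in (1, 2, 3, 4)]
scattered = [random.randint(1, 50) for _ in range(800)]
print(marking_fault_rate(local, k=4), marking_fault_rate(scattered, k=4))
```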

6 citations


Proceedings ArticleDOI
25 Mar 2008
TL;DR: An experimental comparison of various list update algorithms, both as stand-alone compression mechanisms and as a second stage of BWT-based compression, showed that TR and FC usually outperform MTF and TS when BWT is not used.
Abstract: List update algorithms have been widely used as subroutines in compression schemes, most notably as part of Burrows-Wheeler compression. We performed an experimental comparison of various list update algorithms both as stand-alone compression mechanisms and as a second stage of BWT-based compression. We considered the following list update algorithms: move-to-front (MTF), sort-by-rank (SBR), frequency-count (FC), timestamp (TS), and transpose (TR) on text files of the Calgary Corpus. Our results showed that TR and FC usually outperform MTF and TS if we do not use BWT. This is in contrast with competitive analysis, in which MTF and TS are superior to TR and FC. After BWT, MTF and TS have comparable performance and always outperform FC and TR. Our experiments are consistent with the intuition that BWT increases locality of reference and with the result predicted by the locality of reference model of Angelopoulos et al. (2008).
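
To make the compression pipeline concrete, here is a minimal sketch of move-to-front coding as it is typically used after the BWT: each symbol is replaced by its current index in a recency list, so the runs of repeated symbols that the BWT tends to produce become runs of zeros that a later entropy coder can exploit. The byte alphabet and function names are illustrative, not the experimental setup of the paper.

```python
def mtf_encode(data: bytes) -> list[int]:
    """Move-to-front coding: emit each byte's index in a recency list."""
    table = list(range(256))
    out = []
    for b in data:
        i = table.index(b)
        out.append(i)
        table.pop(i)
        table.insert(0, b)      # most recent symbol moves to the front
    return out


def mtf_decode(codes: list[int]) -> bytes:
    """Inverse transform: look up each index and update the same list."""
    table = list(range(256))
    out = bytearray()
    for i in codes:
        b = table.pop(i)
        out.append(b)
        table.insert(0, b)
    return bytes(out)


# After a BWT, the input tends to contain long runs, so most codes are 0.
print(mtf_encode(b"aaabbbaaac"))
assert mtf_decode(mtf_encode(b"banana")) == b"banana"
```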

3 citations


Proceedings Article
01 Jan 2008
TL;DR: This paper addresses one more shortcoming of the standard model of geometric searching, motivated by the example of a vacuuming robot such as the Roomba(TM), which explores its environment using sophisticated motion planning algorithms with the goal of attaining complete coverage of the space.
Abstract: Searching in a geometric space is an active area of research, predating computer technology. The applications are varied, ranging from robotics, to search-and-rescue operations on the high seas [24, 23] as well as on land, such as in an avalanche [5] or an office space [12, 7, 13], to scheduling of heuristic algorithms for solvers searching an abstract solution space for a specific solution [16, 17, 22, 2, 19]. Within academia, the field has seen two marked boosts in activity. The first was motivated by the loss of weaponry off the coast of Spain in 1966 in what is known as the Palomares incident, and by the loss of the USS Thresher and Scorpion submarines in 1963 and 1968, respectively [24, 26]. A second renewed thrust took place in the late 1980s when the applications for autonomous robots became apparent. Geometric searching has proved a fertile ground within computational geometry for the design and analysis of search and recognition strategies under various initial conditions [14, 12, 6, 7, 8, 18, 20]. The basic search scenarios consist of exploring a one-dimensional object, such as a path or office corridor, usually modeled as the real line, and of exploring a two-dimensional scene, such as a room or a factory floor, usually modeled as a polygonal scene. However, in spite of numerous advances in the theoretical understanding of both of these scenarios, so far such solutions have generally had a limited impact in practice. Over the years various efforts have been made to address this situation, both in terms of isolated research papers attempting to narrow the gap, and in organized efforts such as the Algorithmic Foundations of Robotics conference and the Dagstuhl seminars on on-line robotics, which bring together theoreticians and practitioners. From these it is apparent that the cost model, and hence the solutions obtained from theoretical analysis, do not fully reflect real-life constraints. Several efforts have been made to resolve this, such as including the turn cost, the scanning cost, and error in navigation and reckoning [9, 10, 15, 20, 18]. In this paper we address one more shortcoming of the standard model. Consider for example a vacuuming robot, such as the Roomba(TM). Such a robot explores the environment using sophisticated motion planning algorithms with the goal of attaining complete coverage of the space.
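
As background for the one-dimensional scenario described above, the sketch below simulates the classical doubling (zig-zag) strategy for finding a target at an unknown position on the real line and reports the ratio of distance traveled to the target's distance; in this standard cost model the ratio is bounded by 9. This is textbook material included for illustration; it is not the model or algorithm contributed by the paper.

```python
def doubling_search_cost(target: float) -> float:
    """Distance traveled by the doubling strategy before reaching the target.

    The searcher starts at the origin and alternately walks 1, 2, 4, 8, ...
    units to the right and to the left, returning to the origin after each
    unsuccessful sweep.  A positive target lies to the right, a negative one
    to the left.
    """
    traveled, step, side = 0.0, 1.0, 1       # side = +1 sweeps right, -1 sweeps left
    while True:
        reach = side * step
        if (reach > 0) == (target > 0) and abs(reach) >= abs(target):
            return traveled + abs(target)    # target found during this sweep
        traveled += 2 * step                 # walk out and back to the origin
        step *= 2
        side = -side

# The travel-to-distance ratio stays below the classical bound of 9.
for d in (1.5, 10.0, -100.0, 1000.0):
    print(d, doubling_search_cost(d) / abs(d))
```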

2 citations


Proceedings ArticleDOI
01 Dec 2008
TL;DR: In this paper, the authors give lower bounds on the value of k and on the size of an equiprojective polyhedron, and show that there are no 3- or 4-equiprojective polyhedra.
Abstract: A convex polyhedron P is k-equiprojective if for all of its orthogonal projections, except those parallel to the faces of P, the number of vertices in the shadow boundary is k. Finding an algorithm to construct all equiprojective polyhedra is an open problem first posed in 1968. In this paper we give lower bounds on the value of k and on the size of an equiprojective polyhedron. We prove that there are no 3- or 4-equiprojective polyhedra and that the triangular prism is the only 5-equiprojective polyhedron. We also discover some new equiprojective polyhedra.
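
To make the definition concrete, the sketch below counts shadow-boundary vertices for a given projection direction by projecting the polyhedron's vertices onto a plane orthogonal to the direction and taking the size of their 2D convex hull; for a triangular prism and any direction not parallel to a face, the count comes out as 5, matching the characterization above. This is an illustrative check under a general-position assumption, not the construction or proof technique of the paper.

```python
import numpy as np


def shadow_boundary_size(vertices, direction):
    """Number of vertices on the shadow boundary of a convex polyhedron.

    The vertices are projected onto a plane orthogonal to `direction` and
    the size of the 2D convex hull of the projected points is returned.
    Assumes the direction is not parallel to any face (general position).
    """
    d = np.asarray(direction, dtype=float)
    d /= np.linalg.norm(d)
    helper = np.array([1.0, 0.0, 0.0]) if abs(d[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    u = np.cross(d, helper)
    u /= np.linalg.norm(u)
    v = np.cross(d, u)                      # (u, v) spans the projection plane
    pts = [(float(np.dot(p, u)), float(np.dot(p, v))) for p in np.asarray(vertices, float)]
    return len(convex_hull_2d(pts))


def convex_hull_2d(points):
    """Andrew's monotone chain: hull vertices of a 2D point set."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    lower, upper = [], []
    for chain, seq in ((lower, pts), (upper, list(reversed(pts)))):
        for p in seq:
            while len(chain) >= 2 and cross(chain[-2], chain[-1], p) <= 0:
                chain.pop()
            chain.append(p)
    return lower[:-1] + upper[:-1]


# Triangular prism: every generic projection has a 5-vertex shadow boundary.
prism = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 2), (1, 0, 2), (0, 1, 2)]
for direction in [(0.3, 0.4, 1.0), (1.0, 0.2, 0.1), (0.5, 0.7, -0.9)]:
    print(direction, shadow_boundary_size(prism, direction))
```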

2 citations


Proceedings Article
01 Jan 2008
TL;DR: The Low-degree PRAM (LoPRAM) model is based on the PRAM architecture and inherits many of its interesting theoretical properties; the key differences are that the number of processors or cores is bounded, the synchronization model is looser, and parallelism is used at a higher level unless explicitly specified otherwise.
Abstract: We propose a new model with a small degree of parallelism that reflects current and future multicore architectures in practice. The model is based on the PRAM architecture and hence inherits many of its interesting theoretical properties. The key observations and differences are that the degree of parallelism (i.e., the number of processors or cores) is bounded by O(log n), the synchronization model is looser, and the use of parallelism is at a higher level unless explicitly specified otherwise. Surprisingly, these three rather minor variants result in a model in which obtaining work-optimal algorithms is significantly easier than for the PRAM. The new model is called Low-degree PRAM, or LoPRAM for short. Lastly, we observe that there are thresholds in the complexity of programming at p = O(log n) and p = O(sqrt(n)) and provide references for specific problems for which this threshold has been formally proven.

1 citation