Author

Alejandro López-Ortiz

Bio: Alejandro López-Ortiz is an academic researcher from the University of Waterloo. The author has contributed to research in topics: Competitive analysis & List update problem. The author has an h-index of 33 and has co-authored 193 publications receiving 3719 citations. Previous affiliations of Alejandro López-Ortiz include Open Text Corporation & University of New Brunswick.


Papers
Journal ArticleDOI
TL;DR: This paper surveys the field of Algorithmic Foundations of the Internet, which is a new area within theoretical computer science and considers six sample topics that illustrate the techniques and challenges in this field.
Abstract: In this paper we survey the field of Algorithmic Foundations of the Internet, which is a new area within theoretical computer science. We consider six sample topics that illustrate the techniques and challenges in this field.

8 citations

Journal ArticleDOI
TL;DR: This paper gives matching (i.e., optimal) upper and lower bounds for the acceleration ratio when interruptible algorithms are simulated by contract algorithms using iterative deepening techniques, and shows how to evaluate the average acceleration ratio of the class of exponential strategies in the setting of n problem instances and m parallel processors.
Abstract: A contract algorithm is an algorithm which is given, as part of the input, a specified amount of allowable computation time. The algorithm must then complete its execution within the allotted time. An interruptible algorithm, in contrast, can be interrupted at an arbitrary point in time, at which point it must report its currently best solution. It is known that contract algorithms can simulate interruptible algorithms using iterative deepening techniques. This simulation is done at a penalty in the performance of the solution, as measured by the so-called acceleration ratio. In this paper we give matching (i.e., optimal) upper and lower bounds for the acceleration ratio under such a simulation. We assume the most general setting in which n problem instances must be solved by means of scheduling executions of contract algorithms in m identical parallel processors. This resolves an open conjecture of Bernstein, Finkelstein, and Zilberstein who gave an optimal schedule under the restricted setting of round robin and length-increasing schedules, but whose optimality in the general unrestricted case remained open. Lastly, we show how to evaluate the average acceleration ratio of the class of exponential strategies in the setting of n problem instances and m parallel processors. This is a broad class of schedules that tend to be either optimal or near-optimal, for several variants of the basic problem.

8 citations
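The simulation referenced above is easiest to picture in the base case of one problem instance and one processor: run contracts of geometrically increasing length back to back, and at interruption report the result of the longest contract that completed. Below is a minimal sketch of that idea, assuming a base-2 schedule; the function name and the measurement loop are mine, and the paper's actual results concern the general setting of n instances scheduled on m processors.

```python
def exponential_schedule(interruption_time, base=2.0):
    """Iterative deepening: run contracts of lengths base**0,
    base**1, ... back to back until interrupted. Return the length
    of the longest contract that finished, i.e. the amount of
    useful computation available at the interruption."""
    elapsed = 0.0
    longest_completed = 0.0
    i = 0
    while True:
        length = base ** i
        if elapsed + length > interruption_time:
            break  # this contract would be cut off mid-run
        elapsed += length
        longest_completed = length
        i += 1
    return longest_completed

# The acceleration ratio compares the interruption time against the
# longest completed contract; for base 2 it approaches 4 in the worst case.
for t in [10.0, 100.0, 1000.0]:
    done = exponential_schedule(t)
    print(t, done, t / done)
```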

Book ChapterDOI
19 Jun 2005
TL;DR: Much sharper upper and lower bounds are presented for a known polynomial-time approximation scheme due to Li, Ma and Wang for the Consensus-Pattern problem, an abstraction of motif finding, a common bioinformatics discovery task.
Abstract: We present sharper upper and lower bounds for a known polynomial-time approximation scheme due to Li, Ma and Wang [7] for the Consensus-Pattern problem. This NP-hard problem is an abstraction of motif finding, a common bioinformatics discovery task. The PTAS due to Li et al. is simple, and a preliminary implementation [8] gave reasonable results in practice. However, the previously known bounds on its performance are useless when runtimes are actually manageable. Here, we present much sharper lower and upper bounds on the performance of this algorithm that partially explain why its behavior is so much better in practice than what was previously predicted in theory. We also give specific examples of instances of the problem for which the PTAS performs poorly in practice, and show that the asymptotic performance bound given in the original proof matches the behaviour of a simple variant of the algorithm on a particularly bad instance of the problem.

8 citations
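For context, the scheme of Li, Ma and Wang enumerates small samples of candidate substrings and scores the column-wise majority string of each sample as a candidate consensus. The sketch below is my own minimal rendering of that enumeration (the names strings, L and r are mine); it is illustrative only, and its running time is exponential in the sample size r, which is precisely the accuracy/time trade-off the PTAS analysis quantifies.

```python
from itertools import product

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def lmers(s, L):
    """All length-L substrings of s."""
    return [s[i:i + L] for i in range(len(s) - L + 1)]

def consensus(sample):
    """Column-wise majority character over a tuple of L-mers."""
    return "".join(max(set(col), key=col.count) for col in zip(*sample))

def consensus_pattern_ptas(strings, L, r):
    """Enumerate every r-tuple (with repetition) of L-mers drawn from
    the input, use its majority consensus as a candidate motif, and
    score it by the best match in each string; keep the cheapest."""
    pool = [m for s in strings for m in lmers(s, L)]
    best_cost, best_motif = float("inf"), None
    for sample in product(pool, repeat=r):
        c = consensus(sample)
        cost = sum(min(hamming(c, m) for m in lmers(s, L)) for s in strings)
        if cost < best_cost:
            best_cost, best_motif = cost, c
    return best_motif, best_cost

print(consensus_pattern_ptas(["gattaca", "tacagat", "acatgat"], L=3, r=2))
```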

Journal ArticleDOI
TL;DR: It is shown that for graphs of size N and treewidth α, there is an online algorithm that receives O(n(log α + log log N)) bits of advice and optimally serves any sequence of length n, and it is proved that if a graph admits a system of μ collective tree (q, r)-spanners, then there is a (q + r)-competitive algorithm which requires O(n(log μ + log log N)) bits of advice.
Abstract: We consider the k-Server problem under the advice model of computation when the underlying metric space is sparse. On one side, we introduce O(1)-competitive algorithms for a wide range of sparse graphs. These algorithms require advice of (almost) linear size. We show that for graphs of size N and treewidth α, there is an online algorithm that receives O(n(log α + log log N)) bits of advice and optimally serves any sequence of length n. We also prove that if a graph admits a system of μ collective tree (q, r)-spanners, then there is a (q + r)-competitive algorithm which requires O(n(log μ + log log N)) bits of advice. Among other results, this gives a 3-competitive algorithm for planar graphs, when provided with O(n log log N) bits of advice. On the other side, we prove that advice of size Ω(n) is required to obtain a 1-competitive algorithm for sequences of length n even for the 2-server problem on a path metric of size N ≥ 3. Through another lower bound argument, we show that at least (n/2)(log α - 1.22) bits of advice is required to obtain an optimal solution for metric spaces of treewidth α, where 4 ≤ α < 2k.

8 citations
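To get a feel for the magnitudes involved, one can plug illustrative numbers into the two advice-size expressions from the abstract. The snippet below does exactly that; the constants hidden by the O-notation are not modeled, and the parameter values are arbitrary choices of mine.

```python
from math import log2

def upper_bound_bits(n, alpha, N):
    """Shape of the treewidth upper bound, O(n (log alpha + log log N)),
    with the hidden constant taken as 1 for illustration."""
    return n * (log2(alpha) + log2(log2(N)))

def lower_bound_bits(n, alpha):
    """The (n/2)(log alpha - 1.22) lower bound for optimal service."""
    return (n / 2) * (log2(alpha) - 1.22)

n, alpha, N = 10_000, 8, 2 ** 20  # hypothetical instance sizes
print(f"upper bound shape: {upper_bound_bits(n, alpha, N):,.0f} bits")
print(f"lower bound:       {lower_bound_bits(n, alpha):,.0f} bits")
```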

01 Jan 2007
TL;DR: This paper introduces cooperative analysis as an alternative general framework for the analysis of on-line algorithms, and shows that, surprisingly, certain randomized algorithms which are superior to MTF in the classical model are not so in the cooperative case, which matches experimental results.
Abstract: On-line algorithms are usually analyzed using competitive analysis, in which the performance of an on-line algorithm on a sequence is normalized by the performance of the optimal off-line algorithm on that sequence. In this paper we introduce cooperative analysis as an alternative general framework for the analysis of on-line algorithms. The idea is to normalize the performance of an on-line algorithm by a measure other than the performance of the off-line optimal algorithm OPT. We show that in many instances the performance of OPT on a sequence is a coarse approximation of the difficulty or complexity of a given input. Using a finer, more natural measure we can separate paging and list update algorithms which were otherwise indistinguishable under the classical model. This creates a performance hierarchy of algorithms which better reflects the intuitive relative strengths between them. Lastly, we show that, surprisingly, certain randomized algorithms which are superior to MTF in the classical model are not so in the cooperative case, which matches experimental results. This confirms that the ability of the on-line cooperative algorithm to ignore pathological worst cases can lead to algorithms that are more efficient in practice.

7 citations
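MTF, named in the abstract, is the classic move-to-front list update algorithm: serve a request by scanning to the item, then move it to the front. Below is a minimal sketch under the standard full-cost model; this is the textbook algorithm, not the paper's cooperative measure.

```python
def mtf_cost(initial_list, requests):
    """Serve requests with Move-To-Front: accessing the item at
    position i (1-based) costs i, after which the item is moved
    to the front of the list."""
    lst = list(initial_list)
    total = 0
    for x in requests:
        i = lst.index(x)           # 0-based position of the item
        total += i + 1             # full-cost model: pay the position
        lst.insert(0, lst.pop(i))  # move the accessed item to the front
    return total

print(mtf_cost("abc", "cccba"))  # repeated accesses to c become cheap
```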


Cited by
Proceedings ArticleDOI
22 Jan 2006
TL;DR: Some of the major results in random graphs and some of the more challenging open problems are reviewed, covering algorithmic and structural questions and touching on newer models, including those related to the WWW.
Abstract: We will review some of the major results in random graphs and some of the more challenging open problems. We will cover algorithmic and structural questions. We will touch on newer models, including those related to the WWW.

7,116 citations

Proceedings ArticleDOI
26 Aug 2001
TL;DR: An efficient algorithm for mining decision trees from continuously-changing data streams, based on the ultra-fast VFDT decision tree learner is proposed, called CVFDT, which stays current while making the most of old data by growing an alternative subtree whenever an old one becomes questionable, and replacing the old with the new when the new becomes more accurate.
Abstract: Most statistical and machine-learning algorithms assume that the data is a random sample drawn from a stationary distribution. Unfortunately, most of the large databases available for mining today violate this assumption. They were gathered over months or years, and the underlying processes generating them changed during this time, sometimes radically. Although a number of algorithms have been proposed for learning time-changing concepts, they generally do not scale well to very large databases. In this paper we propose an efficient algorithm for mining decision trees from continuously-changing data streams, based on the ultra-fast VFDT decision tree learner. This algorithm, called CVFDT, stays current while making the most of old data by growing an alternative subtree whenever an old one becomes questionable, and replacing the old with the new when the new becomes more accurate. CVFDT learns a model which is similar in accuracy to the one that would be learned by reapplying VFDT to a moving window of examples every time a new example arrives, but with O(1) complexity per example, as opposed to O(w), where w is the size of the window. Experiments on a set of large time-changing data streams demonstrate the utility of this approach.

1,790 citations
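CVFDT inherits VFDT's core mechanism, a split test based on the Hoeffding bound: a node splits only once enough examples have been seen that the apparent best attribute is unlikely to be overtaken. A minimal sketch of that test follows, with thresholds and parameter values chosen purely for illustration.

```python
from math import log, sqrt

def hoeffding_epsilon(value_range, delta, n):
    """Hoeffding bound: with probability 1 - delta, the observed mean
    of n i.i.d. samples is within epsilon of the true mean."""
    return sqrt(value_range ** 2 * log(1 / delta) / (2 * n))

def should_split(gain_best, gain_second, n, value_range=1.0, delta=1e-6):
    """Split when the gap between the two best attributes' gains
    exceeds epsilon, so the winner is unlikely to change with more data."""
    return gain_best - gain_second > hoeffding_epsilon(value_range, delta, n)

print(should_split(0.30, 0.25, n=1_000))    # False: not enough evidence yet
print(should_split(0.30, 0.25, n=100_000))  # True: the gap is now reliable
```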

Proceedings ArticleDOI
05 Nov 2003
TL;DR: This work studies and evaluates link estimator, neighborhood table management, and reliable routing protocol techniques, and narrows the design space through evaluations ranging from large-scale, high-level simulations to 50-node, in-depth empirical experiments.
Abstract: The dynamic and lossy nature of wireless communication poses major challenges to reliable, self-organizing multihop networks. These non-ideal characteristics are more problematic with the primitive, low-power radio transceivers found in sensor networks, and raise new issues that routing protocols must address. Link connectivity statistics should be captured dynamically through an efficient yet adaptive link estimator and routing decisions should exploit such connectivity statistics to achieve reliability. Link status and routing information must be maintained in a neighborhood table with constant space regardless of cell density. We study and evaluate link estimator, neighborhood table management, and reliable routing protocol techniques. We focus on a many-to-one, periodic data collection workload. We narrow the design space through evaluations on large-scale, high-level simulations to 50-node, in-depth empirical experiments. The most effective solution uses a simple time averaged EWMA estimator, frequency based table management, and cost-based routing.

1,735 citations
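The "simple time averaged EWMA estimator" the abstract singles out smooths per-window delivery rates with an exponentially weighted moving average. The sketch below shows the idea; the smoothing factor and the windowed interface are my assumptions, since the paper tunes its own parameters.

```python
class EWMALinkEstimator:
    """Track a link's delivery probability by exponentially smoothing
    the fraction of packets received in each measurement window."""

    def __init__(self, alpha=0.1):  # alpha: smoothing factor (assumed value)
        self.alpha = alpha
        self.quality = None  # estimated delivery probability

    def update(self, delivered_fraction):
        """Fold in the delivery rate observed over the last window."""
        if self.quality is None:
            self.quality = delivered_fraction
        else:
            self.quality = (self.alpha * delivered_fraction
                            + (1 - self.alpha) * self.quality)
        return self.quality

est = EWMALinkEstimator()
for window_rate in [1.0, 0.8, 0.0, 0.9]:  # a lossy link: rates fluctuate
    print(round(est.update(window_rate), 3))
```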

Journal ArticleDOI
TL;DR: Data Streams: Algorithms and Applications surveys the emerging area of algorithms for processing data streams and associated applications, which rely on metric embeddings, pseudo-random computations, sparse approximation theory and communication complexity.
Abstract: In the data stream scenario, input arrives very rapidly and there is limited memory to store the input. Algorithms have to work with one or few passes over the data, space less than linear in the input size or time significantly less than the input size. In the past few years, a new theory has emerged for reasoning about algorithms that work within these constraints on space, time, and number of passes. Some of the methods rely on metric embeddings, pseudo-random computations, sparse approximation theory and communication complexity. The applications for this scenario include IP network traffic analysis, mining text message streams and processing massive data sets in general. Researchers in Theoretical Computer Science, Databases, IP Networking and Computer Systems are working on the data stream challenges. This article is an overview and survey of data stream algorithmics and is an updated version of [1].

1,598 citations
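As one concrete instance of the small-space summaries this survey covers, a Count-Min sketch estimates item frequencies using space far below the input size. Below is a minimal sketch with my own width, depth, and hashing choices; the error guarantee noted in the comment is the standard one for this structure, not a claim from the survey.

```python
import random

class CountMinSketch:
    """Approximate frequency counts in sublinear space: d hash rows of
    width w. Estimates never undercount, and (loosely stated) overcount
    by about (e / w) times the stream length with probability 1 - e**-d."""

    def __init__(self, w=256, d=4, seed=0):
        rng = random.Random(seed)
        self.w, self.d = w, d
        self.salts = [rng.getrandbits(32) for _ in range(d)]
        self.table = [[0] * w for _ in range(d)]

    def _cells(self, item):
        for row, salt in enumerate(self.salts):
            yield row, hash((salt, item)) % self.w

    def add(self, item):
        for row, col in self._cells(item):
            self.table[row][col] += 1

    def estimate(self, item):
        # Taking the minimum over rows limits the overcount from collisions.
        return min(self.table[row][col] for row, col in self._cells(item))

cms = CountMinSketch()
for token in ["a"] * 100 + ["b"] * 5:
    cms.add(token)
print(cms.estimate("a"), cms.estimate("b"))  # close to 100 and 5
```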

Book
01 Jan 2006
TL;DR: Researchers from other fields should find in this handbook an effective way to learn about constraint programming and to possibly use some of the constraint programming concepts and techniques in their work, thus providing a means for a fruitful cross-fertilization among different research areas.
Abstract: Constraint programming is a powerful paradigm for solving combinatorial search problems that draws on a wide range of techniques from artificial intelligence, computer science, databases, programming languages, and operations research. Constraint programming is currently applied with success to many domains, such as scheduling, planning, vehicle routing, configuration, networks, and bioinformatics. The aim of this handbook is to capture the full breadth and depth of the constraint programming field and to be encyclopedic in its scope and coverage. While there are several excellent books on constraint programming, such books necessarily focus on the main notions and techniques and cannot also cover extensions, applications, and languages. The handbook gives a reasonably complete coverage of all these lines of work, based on constraint programming, so that a reader can have a rather precise idea of the whole field and its potential. Of course each line of work is dealt with in a survey-like style, where some details may be neglected in favor of coverage. However, the extensive bibliography of each chapter will help interested readers find suitable sources for the missing details. Each chapter of the handbook is intended to be a self-contained survey of a topic, and is written by one or more authors who are leading researchers in the area. The intended audience of the handbook is researchers, graduate students, higher-year undergraduates and practitioners who wish to learn about the state of the art in constraint programming. No prior knowledge about the field is necessary to be able to read the chapters and gather useful knowledge. Researchers from other fields should find in this handbook an effective way to learn about constraint programming and to possibly use some of the constraint programming concepts and techniques in their work, thus providing a means for a fruitful cross-fertilization among different research areas. The handbook is organized in two parts. The first part covers the basic foundations of constraint programming, including the history, the notion of constraint propagation, basic search methods, global constraints, tractability and computational complexity, and important issues in modeling a problem as a constraint problem. The second part covers constraint languages and solvers, several useful extensions to the basic framework (such as interval constraints, structured domains, and distributed CSPs), and successful application areas for constraint programming.

- Covers the whole field of constraint programming
- Survey-style chapters
- Five chapters on applications

Table of Contents
Foreword (Ugo Montanari)
Part I: Foundations
Chapter 1. Introduction (Francesca Rossi, Peter van Beek, Toby Walsh)
Chapter 2. Constraint Satisfaction: An Emerging Paradigm (Eugene C. Freuder, Alan K. Mackworth)
Chapter 3. Constraint Propagation (Christian Bessiere)
Chapter 4. Backtracking Search Algorithms (Peter van Beek)
Chapter 5. Local Search Methods (Holger H. Hoos, Edward Tsang)
Chapter 6. Global Constraints (Willem-Jan van Hoeve, Irit Katriel)
Chapter 7. Tractable Structures for CSPs (Rina Dechter)
Chapter 8. The Complexity of Constraint Languages (David Cohen, Peter Jeavons)
Chapter 9. Soft Constraints (Pedro Meseguer, Francesca Rossi, Thomas Schiex)
Chapter 10. Symmetry in Constraint Programming (Ian P. Gent, Karen E. Petrie, Jean-Francois Puget)
Chapter 11. Modelling (Barbara M. Smith)
Part II: Extensions, Languages, and Applications
Chapter 12. Constraint Logic Programming (Kim Marriott, Peter J. Stuckey, Mark Wallace)
Chapter 13. Constraints in Procedural and Concurrent Languages (Thom Fruehwirth, Laurent Michel, Christian Schulte)
Chapter 14. Finite Domain Constraint Programming Systems (Christian Schulte, Mats Carlsson)
Chapter 15. Operations Research Methods in Constraint Programming (John Hooker)
Chapter 16. Continuous and Interval Constraints (Frederic Benhamou, Laurent Granvilliers)
Chapter 17. Constraints over Structured Domains (Carmen Gervet)
Chapter 18. Randomness and Structure (Carla Gomes, Toby Walsh)
Chapter 19. Temporal CSPs (Manolis Koubarakis)
Chapter 20. Distributed Constraint Programming (Boi Faltings)
Chapter 21. Uncertainty and Change (Kenneth N. Brown, Ian Miguel)
Chapter 22. Constraint-Based Scheduling and Planning (Philippe Baptiste, Philippe Laborie, Claude Le Pape, Wim Nuijten)
Chapter 23. Vehicle Routing (Philip Kilby, Paul Shaw)
Chapter 24. Configuration (Ulrich Junker)
Chapter 25. Constraint Applications in Networks (Helmut Simonis)
Chapter 26. Bioinformatics and Constraints (Rolf Backofen, David Gilbert)

1,527 citations
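The handbook's first part centers on constraint propagation and backtracking search. As a taste of how those two pieces fit together, here is a toy solver of my own, combining backtracking with forward checking on a three-variable graph-colouring instance; it is an illustrative sketch, not code from the book.

```python
def solve_csp(domains, constraints, assignment=None):
    """Backtracking search with forward checking: assign a variable,
    prune now-inconsistent values from unassigned neighbours, recurse.
    constraints maps an ordered pair (x, y) to a predicate on values."""
    assignment = assignment or {}
    if len(assignment) == len(domains):
        return assignment
    var = next(v for v in domains if v not in assignment)
    for val in domains[var]:
        trial = dict(assignment, **{var: val})
        pruned, consistent = {}, True
        for (x, y), ok in constraints.items():
            if x == var and y not in trial:
                keep = [w for w in domains[y] if ok(val, w)]
                if not keep:  # a neighbour's domain was wiped out
                    consistent = False
                    break
                pruned[y] = keep
        if consistent:
            result = solve_csp(dict(domains, **pruned), constraints, trial)
            if result:
                return result
    return None  # dead end: caller tries the next value

# Colour three mutually adjacent nodes with three colours.
doms = {"A": [1, 2, 3], "B": [1, 2, 3], "C": [1, 2, 3]}
edges = [("A", "B"), ("B", "A"), ("A", "C"),
         ("C", "A"), ("B", "C"), ("C", "B")]
cons = {pair: (lambda a, b: a != b) for pair in edges}
print(solve_csp(doms, cons))  # e.g. {'A': 1, 'B': 2, 'C': 3}
```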