
Showing papers by "Alejandro López-Ortiz published in 2007"


Proceedings ArticleDOI
07 Jan 2007
TL;DR: In this article, it was shown that LRU is the unique optimum strategy for paging under a deterministic model, providing full theoretical backing to the empirical observation that LRU is preferable in practice.
Abstract: It has been experimentally observed that LRU and variants thereof are the preferred strategies for on-line paging. However, under most proposed performance measures for on-line algorithms the performance of LRU is the same as that of many other strategies which are inferior in practice. In this paper we first show that any performance measure which does not include a partition or implied distribution of the input sequences of a given length is unlikely to distinguish between any two lazy paging algorithms as their performance is identical in a very strong sense. This provides a theoretical justification for the use of a more refined measure. Building upon the ideas of concave analysis by Albers et al. [AFG05], we prove strict separation between LRU and all other paging strategies. That is, we show that LRU is the unique optimum strategy for paging under a deterministic model. This provides full theoretical backing to the empirical observation that LRU is preferable in practice.
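
For context, the sketch below (plain Python, illustrative names only) shows the LRU policy itself by counting the page faults it incurs on a request sequence. It is a minimal reference implementation of the algorithm under discussion, not the paper's refined performance measure.

from collections import OrderedDict

def lru_faults(requests, cache_size):
    # Count the page faults LRU incurs on a request sequence.
    cache = OrderedDict()              # pages ordered from least to most recently used
    faults = 0
    for page in requests:
        if page in cache:
            cache.move_to_end(page)    # hit: mark the page as most recently used
        else:
            faults += 1                # miss: the page must be brought in
            if len(cache) >= cache_size:
                cache.popitem(last=False)   # evict the least recently used page
            cache[page] = True
    return faults

# Example: with a cache of size 3, this sequence causes 5 faults.
print(lru_faults([1, 2, 3, 1, 4, 2, 1], cache_size=3))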

73 citations


Proceedings ArticleDOI
15 Oct 2007
TL;DR: It is theoretically proven that flooding is effective for regular random graphs, which is consistent with the experimental results, and the proposed hybrid algorithm has much better performance on power-law graphs.
Abstract: We study the performance of several search algorithms on unstructured peer-to-peer networks, both using classic search algorithms such as flooding and random walk, as well as a new hybrid algorithm proposed in this paper. This hybrid algorithm first uses flooding to find a sufficient number of nodes and then starts random walks from these nodes. We compare the performance of the search algorithms on several graphs corresponding to common topologies proposed for peer-to-peer networks. In particular, we consider binomial random graphs, regular random graphs, power-law graphs, and clustered topologies. Our experiments show that for binomial random graphs and regular random graphs all algorithms have similar performance. For power-law graphs, flooding is effective for a small number of messages, but for a large number of messages our hybrid algorithm outperforms it. Flooding is ineffective for clustered topologies, in which random walk is the best algorithm. For these topologies, our hybrid algorithm provides a compromise between flooding and random walk. We also compare the proposed hybrid algorithm with the k-walker algorithm on power-law and clustered topologies. Our experiments show that while they have close performance on clustered topologies, the hybrid algorithm has much better performance on power-law graphs. We theoretically prove that flooding is effective for regular random graphs, which is consistent with our experimental results.
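
The hybrid scheme described in the abstract can be sketched roughly as follows; this is an illustrative reconstruction in Python with assumed parameter names (seed_count, walk_steps), not the authors' implementation.

import random
from collections import deque

def hybrid_search(graph, start, target, seed_count=8, walk_steps=200):
    # graph: adjacency dict {node: [neighbours]}.
    if start == target:
        return True

    # Phase 1: breadth-first flooding until enough nodes have been reached.
    seen, queue = {start}, deque([start])
    while queue and len(seen) < seed_count:
        node = queue.popleft()
        for nb in graph[node]:
            if nb == target:
                return True
            if nb not in seen:
                seen.add(nb)
                queue.append(nb)

    # Phase 2: independent random walks starting from the flooded nodes.
    for node in list(seen):
        for _ in range(walk_steps):
            if node == target:
                return True
            if not graph[node]:        # dead end: abandon this walk
                break
            node = random.choice(graph[node])
    return False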

24 citations


Book ChapterDOI
17 Dec 2007
TL;DR: This paper gives a finer separation of several known paging algorithms and shows that lookahead is beneficial for a paging algorithm, a fact that is well known in practice but was, until recently, not verified by theory.
Abstract: In this paper we give a finer separation of several known paging algorithms. This is accomplished using a new technique that we call relative interval analysis. The technique compares the fault rate of two paging algorithms across the entire range of inputs of a given size rather than in the worst case alone. Using this technique we characterize the relative performance of LRU and LRU-2, as well as LRU and FWF, among others. We also show that lookahead is beneficial for a paging algorithm, a fact that is well known in practice but was, until recently, not verified by theory.
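
To illustrate the flavour of the technique, the sketch below brute-forces the normalized fault-count difference between LRU and FWF (flush-when-full) over all request sequences of a small fixed length. The toy parameters and function names are assumptions, and the exhaustive enumeration only mimics, at toy scale, the idea of looking at the entire range of inputs rather than the worst case alone.

from itertools import product
from collections import OrderedDict

def lru_faults(seq, k):
    cache, faults = OrderedDict(), 0
    for p in seq:
        if p in cache:
            cache.move_to_end(p)           # hit
        else:
            faults += 1                    # miss
            if len(cache) >= k:
                cache.popitem(last=False)  # evict least recently used
            cache[p] = True
    return faults

def fwf_faults(seq, k):
    cache, faults = set(), 0
    for p in seq:
        if p not in cache:
            faults += 1
            if len(cache) >= k:
                cache.clear()              # flush-when-full: empty the cache
            cache.add(p)
    return faults

def relative_interval(alg_a, alg_b, pages, n, k):
    # Range of the normalized difference (A - B) / n over all sequences of length n.
    diffs = [(alg_a(seq, k) - alg_b(seq, k)) / n for seq in product(pages, repeat=n)]
    return min(diffs), max(diffs)

# Toy run: 3 distinct pages, cache size 2, all request sequences of length 6.
print(relative_interval(lru_faults, fwf_faults, pages=(1, 2, 3), n=6, k=2))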

11 citations


01 Jan 2007
TL;DR: This paper introduces cooperative analysis as an alternative general framework for the analysis of on-line algorithms, and shows that, surprisingly, certain randomized algorithms which are superior to MTF in the classical model are not so in the cooperative case, which matches experimental results.
Abstract: On-line algorithms are usually analyzed using competitive analysis, in which the performance of an on-line algorithm on a sequence is normalized by the performance of the optimal off-line algorithm on that sequence. In this paper we introduce cooperative analysis as an alternative general framework for the analysis of on-line algorithms. The idea is to normalize the performance of an on-line algorithm by a measure other than the performance of the off-line optimal algorithm OPT. We show that in many instances the performance of OPT on a sequence is a coarse approximation of the difficulty or complexity of a given input. Using a finer, more natural measure we can separate paging and list update algorithms which were otherwise indistinguishable under the classical model. This creates a performance hierarchy of algorithms which better reflects the intuitive relative strengths between them. Lastly, we show that, surprisingly, certain randomized algorithms which are superior to MTF in the classical model are not so in the cooperative case, which matches experimental results. This confirms that the ability of the on-line cooperative algorithm to ignore pathological worst cases can lead to algorithms that are more efficient in practice.
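
The normalization idea can be made concrete with a small sketch: divide an on-line algorithm's cost by a measure of the input's own difficulty rather than by OPT's cost. The difficulty measure used below (the number of consecutive requests that differ) is a deliberately crude stand-in chosen for illustration, not the measure proposed in the paper.

from collections import OrderedDict

def lru_cost(seq, k):
    # Page faults of LRU with cache size k (same minimal sketch as above).
    cache, faults = OrderedDict(), 0
    for p in seq:
        if p in cache:
            cache.move_to_end(p)
        else:
            faults += 1
            if len(cache) >= k:
                cache.popitem(last=False)
            cache[p] = True
    return faults

def difficulty(seq):
    # Crude stand-in for an input-difficulty measure: how often the request
    # changes from one step to the next (a rough proxy for lack of locality).
    return max(1, sum(1 for a, b in zip(seq, seq[1:]) if a != b))

def cooperative_ratio(seq, k):
    # Cooperative-style normalization: on-line cost over input difficulty,
    # instead of the competitive ratio's cost(ALG) / cost(OPT).
    return lru_cost(seq, k) / difficulty(seq)

print(cooperative_ratio([1, 2, 1, 2, 3, 1, 2, 3], k=2))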

7 citations


Book ChapterDOI
12 Dec 2007
TL;DR: It is shown that the problem of constructing convex polygons and convex polyhedra, given the number of visible edges and visible faces from some orthogonal projections, becomes NP-complete when the projection directions are covered by three or more planes.
Abstract: We study the problem of constructing convex polygons and convex polyhedra given the number of visible edges and visible faces from some orthogonal projections. In 2D, we find necessary and sufficient conditions for the existence of a feasible polygon of size N and give an algorithm to construct one, if it exists. When N is not known, we give an algorithm to find the maximum and minimum size of a feasible polygon. In 3D, when the directions span a single plane we show that a feasible polyhedron can be constructed from a feasible polygon. We also give an algorithm to construct a feasible polyhedron when the directions are covered by two planes. Finally, we show that the problem becomes NP-complete for three or more planes.

6 citations


Book ChapterDOI
14 Aug 2007
TL;DR: This survey covers the main techniques currently in use to compute the provisioning capacities required in a resilient high-QoS network and observes that, for data-critical applications, a substantial amount of overprovisioning is in fact a fundamental step of any safe and acceptable solution to QoS and resiliency requirements.
Abstract: The two main alternatives for achieving high QoS on the public internet are (i) admission control and (ii) capacity overprovisioning. In the study of these alternatives the implicit (and sometimes explicit) message is that, ideally, QoS issues should be dealt with by means of sophisticated admission control (AC) algorithms, and only because of their complexity do providers fall back on the simpler, perhaps more cost-effective, yet "wasteful" solution of capacity overprovisioning (CO) (see e.g. Olifer and Olifer [Wiley&Sons, 2005], Parekh [IWQoS'2003], Milbrandt et al. [J.Comm. 2007]). In the present survey we observe that these two alternatives are far from being mutually exclusive. Rather, for data-critical applications, a substantial amount of "overprovisioning" is in fact a fundamental step of any safe and acceptable solution to QoS and resiliency requirements. We observe from examples in real life that in many cases large amounts of overprovisioning are already silently deployed within the internet domain and that in some restricted network settings they have become accepted practice even in the academic literature. Then we survey the main techniques currently in use to compute the provisioning capacities required in a resilient high-QoS network.

6 citations


Book ChapterDOI
14 Aug 2007
TL;DR: This talk gives an overview of the state of the art on networks and routing schemes that can support all traffic matrices, and of the need for backbone capacity with this property.
Abstract: At any given time, the traffic on the network can be described using a traffic matrix. Entry a_{i,j} in the matrix denotes the traffic originating in i with destination j currently in the network. As traffic demands are dynamic, the matrix itself is ever changing. Traditionally network capacity has been deployed so that it can support any traffic matrix with high probability, given the known traffic distribution patterns. Recently the need for resilience and reliability of the network for mission critical data has created the need for backbone capacity that can support all traffic matrices. In this talk we give an overview of the state of the art on networks and routing schemes with this property.
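
A minimal sketch of the traffic-matrix viewpoint, with assumed names: given one traffic matrix and a fixed routing, it aggregates the load on each link and checks it against the link capacities. Supporting all traffic matrices, the subject of the talk, requires reasoning over the whole set of admissible matrices rather than a single-matrix check like this one.

def link_loads(traffic, routes):
    # traffic[(i, j)] is the demand from i to j; routes[(i, j)] is the list of
    # directed links that the (i, j) flow traverses.
    loads = {}
    for (i, j), demand in traffic.items():
        for link in routes[(i, j)]:
            loads[link] = loads.get(link, 0) + demand
    return loads

def supports(traffic, routes, capacity):
    # True if every link's aggregate load fits within its capacity.
    return all(load <= capacity[link] for link, load in link_loads(traffic, routes).items())

# Toy example: three nodes on a path A - B - C with 5 units of capacity per link.
routes = {("A", "C"): [("A", "B"), ("B", "C")], ("C", "A"): [("C", "B"), ("B", "A")]}
traffic = {("A", "C"): 4, ("C", "A"): 2}
capacity = {("A", "B"): 5, ("B", "C"): 5, ("C", "B"): 5, ("B", "A"): 5}
print(supports(traffic, routes, capacity))   # this particular matrix fits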

3 citations


Proceedings Article
01 Jan 2007
TL;DR: In this paper, the adaptive/cooperative analysis (ACA) framework is used to analyze on-line algorithms for paging and list update problems, and it is shown that the ability of the adaptive algorithm to ignore pathological worst cases can lead to algorithms that are more efficient in practice.
Abstract: On-line algorithms are usually analyzed using competitive analysis, in which the performance of an on-line algorithm on a sequence is normalized by the performance of the optimal off-line algorithm on that sequence. In this paper we introduce adaptive/cooperative analysis as an alternative general framework for the analysis of on-line algorithms. This model gives promising results when applied to two well known on-line problems, paging and list update. The idea is to normalize the performance of an on-line algorithm by a measure other than the performance of the off-line optimal algorithm OPT. We show that in many instances the performance of OPT on a sequence is a coarse approximation of the difficulty or complexity of a given input. Using a finer, more natural measure we can separate paging and list update algorithms which were otherwise indistinguishable under the classical model. This creates a performance hierarchy of algorithms which better reflects the intuitive relative strengths between them. Lastly, we show that, surprisingly, certain randomized algorithms which are superior to MTF in the classical model are not so in the adaptive case. This confirms that the ability of the on-line adaptive algorithm to ignore pathological worst cases can lead to algorithms that are more efficient in practice.

2 citations