Author

Merkourios Karaliopoulos

Bio: Merkourios Karaliopoulos is an academic researcher at the National and Kapodistrian University of Athens. He has contributed to research on topics including centrality and network topology, has an h-index of 9, and has co-authored 19 publications receiving 238 citations. His previous affiliations include ETH Zurich and the Information Technology Institute.

Papers
Book ChapterDOI
01 Jan 2009
TL;DR: A detailed survey and taxonomy of routing metrics is presented, with emphasis on their strengths and weaknesses and their application to various types of network scenarios.
Abstract: Routing in wireless mesh networks has been an active area of research for many years, with many proposed routing protocols selecting shortest paths that minimize the path hop count. Whereas minimum hop count is the most popular metric in wired networks, in wireless networks interference- and energy-related considerations give rise to more complex trade-offs. A variety of routing metrics has therefore been proposed, especially for wireless mesh networks, giving routing algorithms high flexibility in selecting the best path as a compromise among throughput, end-to-end delay, and energy consumption. In this paper, we present a detailed survey and taxonomy of routing metrics. These metrics may have broadly different optimization objectives (e.g., optimize application performance, maximize battery lifetime, maximize network throughput), different methods of collecting the information required to produce metric values, and different ways of deriving end-to-end route quality from the individual link quality metrics. The presentation of the metrics is highly comparative, with emphasis on their strengths and weaknesses and their applicability to various types of network scenarios. We also discuss the main implications for practitioners and identify open issues for further research in the area.
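As a concrete illustration of the kind of metric such a survey covers (not taken from the paper itself), here is a minimal sketch of ETX, the expected transmission count, one of the best-known wireless routing metrics: link quality is estimated from forward and reverse delivery ratios, and the path metric is additive over links, so route selection prefers paths with fewer expected (re)transmissions rather than fewer hops.

```python
# Illustrative sketch: the ETX wireless routing metric.
# Link ETX = 1 / (d_f * d_r), where d_f and d_r are the forward and
# reverse delivery ratios of the link; the path metric is the sum of
# link ETX values. Delivery-ratio values below are made-up examples.

def link_etx(d_f: float, d_r: float) -> float:
    """Expected number of transmissions to deliver a frame over one link."""
    return 1.0 / (d_f * d_r)

def path_etx(links):
    """End-to-end metric: additive over the links of the path."""
    return sum(link_etx(d_f, d_r) for d_f, d_r in links)

# Two candidate paths: fewer hops over lossy links vs. more hops over clean links.
lossy_2hop = [(0.6, 0.5), (0.7, 0.6)]
clean_3hop = [(0.95, 0.9), (0.9, 0.9), (0.95, 0.95)]
print(path_etx(lossy_2hop))   # ~5.71: ETX prefers the longer, cleaner path
print(path_etx(clean_3hop))   # ~3.51
```

Note how the metric inverts the hop-count preference: the two-hop path loses because its expected retransmissions dominate.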

60 citations

Journal ArticleDOI
TL;DR: This paper proposes a simple yet accurate analytical model for the effect of interference on data reception probability, based only on passive measurements and information locally available at the node, and uses this model to design an efficient interference-aware routing protocol that performs as well as probing-based protocols, yet avoids all pitfalls related to active probe measurements.
Abstract: Interference is an inherent characteristic of wireless (multihop) communications. Adding interference-awareness to important control functions, e.g., routing, could significantly enhance overall network performance. Despite some initial efforts, it is not yet clearly understood how best to capture the effects of interference in routing protocol design. Most existing proposals aim at inferring its effect by actively probing the link. However, active probe measurements impose an overhead and may often misrepresent the link quality due to their interaction with other networking functions. In this paper we therefore follow a different approach and: 1) propose a simple yet accurate analytical model for the effect of interference on data reception probability, based only on passive measurements and information locally available at the node; 2) use this model to design an efficient interference-aware routing protocol that performs as well as probing-based protocols, yet avoids all pitfalls related to active probe measurements. To validate our proposal, we have performed experiments in a real testbed, set up in our indoor office environment. We show that the analytical predictions of our interference model match well with both experimental results and more complex analytical models proposed in the related literature. Furthermore, we demonstrate that a simple probeless routing protocol based on our model performs at least as well as well-known probe-based routing protocols in a large set of experiments covering both intraflow and interflow interference.
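The paper's analytical interference model is not reproduced in the abstract; as a simplified illustration of the probeless idea, the toy sketch below scores candidate paths by an end-to-end delivery probability built from per-link reception probabilities assumed to come from passive observation. Link independence, the probability values, and all names are illustrative assumptions, not the paper's model.

```python
# Toy sketch of probeless, reception-probability-driven route selection.
# Assumption: each link (u, v) carries an estimated data reception
# probability obtained passively (no probe packets); links are treated
# as independent and a path is scored by the product of its link
# probabilities. Values below are made-up.

import math

def path_delivery_prob(path, p):
    """p maps a directed link (u, v) to its estimated reception probability."""
    return math.prod(p[(u, v)] for u, v in zip(path, path[1:]))

def best_path(paths, p):
    """Pick the candidate path with the highest end-to-end delivery probability."""
    return max(paths, key=lambda path: path_delivery_prob(path, p))

p = {("a", "b"): 0.9,  ("b", "d"): 0.9,   # lightly interfered links
     ("a", "c"): 0.95, ("c", "d"): 0.5}   # one heavily interfered link
print(best_path([["a", "b", "d"], ["a", "c", "d"]], p))  # ['a', 'b', 'd']
```

The point of the sketch: once reception probabilities are available locally, route comparison needs no active probing at all.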

48 citations

Proceedings ArticleDOI
06 Sep 2011
TL;DR: The proposed iterative service migration algorithm, called cDSMA, is extensively evaluated over both synthetic and real-world network topologies and achieves remarkable accuracy and robustness, clearly outperforming typical local-search heuristics for service migration.
Abstract: As social networking sites provide increasingly richer context, user-centric service development is expected to explode, following the example of user-generated content. A major challenge for this emerging paradigm is how to make these services, exploding in number yet each of vanishing individual demand, available in a cost-effective manner; central to this task is determining the optimal service host location. We formulate this problem as a facility location problem and devise a distributed and highly scalable heuristic to solve it. Key to our approach is the introduction of a novel centrality metric. Wherever the service is generated, this metric helps to a) identify a small subgraph of candidate service host nodes with high service demand concentration capacity; b) project onto them a reduced yet accurate view of the global demand distribution; and, ultimately, c) pave the service migration path towards the location that minimizes the aggregate access cost over the whole network. The proposed iterative service migration algorithm, called cDSMA, is extensively evaluated over both synthetic and real-world network topologies. In all cases, it achieves remarkable accuracy and robustness, clearly outperforming typical local-search heuristics for service migration. Finally, we outline a realistic cDSMA protocol implementation with complexity up to two orders of magnitude lower than that of centralized solutions.
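cDSMA itself is a distributed heuristic; as a point of reference, the sketch below computes the centralized benchmark it chases in the facility location formulation: the 1-median, i.e., the host node minimizing the demand-weighted sum of shortest-path distances (the aggregate service access cost). The graph, demand values, and function names are made-up toy data.

```python
# Centralized 1-median benchmark for optimal service host placement.
# Toy data: a small path-like graph and a demand map (node -> weight).

from collections import deque

def hop_distances(adj, src):
    """BFS hop counts from src over an undirected graph (adjacency lists)."""
    dist, queue = {src: 0}, deque([src])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def one_median(adj, demand):
    """Node minimizing the demand-weighted sum of hop distances."""
    def cost(host):
        d = hop_distances(adj, host)
        return sum(w * d[u] for u, w in demand.items())
    return min(adj, key=cost)

adj = {0: [1], 1: [0, 2, 3], 2: [1], 3: [1, 4], 4: [3]}
demand = {0: 5, 2: 5, 4: 1}       # most demand concentrated around node 1
print(one_median(adj, demand))     # 1
```

An iterative migration scheme like cDSMA approximates this optimum without ever collecting the full demand map at one point, which is precisely what makes the distributed variant interesting.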

19 citations

Book ChapterDOI
09 May 2013
TL;DR: The paper assesses how well the egocentric metrics approximate the original sociocentric ones, determined under perfect network-wide information, and suggests that rank-correlation is a poor indicator for the approximability of centrality metrics.
Abstract: In many networks with distributed operation and self-organization features, acquiring global topological information is impractical, if feasible at all. Internet protocols drawing on node centrality indices may instead approximate them with their egocentric counterparts, computed over the nodes' ego-networks. Surprisingly, however, the approximation power of such localized ego-centered measurements has not been systematically evaluated in router-level topologies. More importantly, it is unclear how to practically interpret any positive correlation found between the two centrality metric variants. This paper addresses both issues using different datasets of ISP network topologies. We first assess how well the egocentric metrics approximate the original sociocentric ones, determined under perfect network-wide information. To this end, we use two measures: their rank-correlation and the overlap in the top-k node lists the two centrality metrics induce. Overall, the rank-correlation is high, on the order of 0.8-0.9, and, intuitively, becomes higher as we relax the ego-network definition to include the ego's r-hop neighborhood. On the other hand, the top-k node overlap is low, suggesting that the high rank-correlation is mainly due to nodes of lower rank. We then let the node centrality metrics drive elementary network operations, such as local search strategies. Our results suggest that, even under high rank-correlation, the locally determined metrics can hardly serve as effective aliases for the global ones. The implication for protocol designers is that rank-correlation is a poor indicator of the approximability of centrality metrics.
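The two comparison measures can be sketched directly. The snippet below computes a rank-correlation (Spearman's rho is used here; the exact coefficient in the paper is an assumption) and the top-k overlap between a global (sociocentric) and a local (egocentric) score vector; the score values are made-up toys.

```python
# Comparing two centrality rankings: rank-correlation and top-k overlap.
# Toy scores; no tie handling in the rank computation for brevity.

def spearman(x, y):
    """Spearman's rho over two dicts keyed by the same nodes."""
    nodes = sorted(x)
    def ranks(scores):
        order = sorted(nodes, key=lambda node: -scores[node])
        return {node: i for i, node in enumerate(order)}
    rx, ry = ranks(x), ranks(y)
    n = len(nodes)
    d2 = sum((rx[node] - ry[node]) ** 2 for node in nodes)
    return 1 - 6 * d2 / (n * (n * n - 1))

def top_k_overlap(x, y, k):
    """Fraction of the top-k nodes shared by the two rankings."""
    top = lambda scores: set(sorted(scores, key=lambda node: -scores[node])[:k])
    return len(top(x) & top(y)) / k

socio = {"a": 10, "b": 8, "c": 6, "d": 4, "e": 2}   # global centrality
ego   = {"a": 3,  "b": 9, "c": 5, "d": 4, "e": 1}   # ego-network estimate
print(spearman(socio, ego))        # ~0.4
print(top_k_overlap(socio, ego, 2))  # 0.5: only one of the top-2 agrees
```

The toy already shows the paper's warning in miniature: a positive rank-correlation can coexist with substantial disagreement about which nodes are actually at the top.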

19 citations

Proceedings ArticleDOI
25 Mar 2012
TL;DR: The paper proposes an innovative method for the performance analysis of opportunistic forwarding protocols over files logging mobile node encounters (contact traces); the method is modular, evolves in three steps, and draws on graph expansion techniques to capture forwarding contacts in sparse space-time graph constructs.
Abstract: The paper proposes an innovative method for the performance analysis of opportunistic forwarding protocols over files logging mobile node encounters (contact traces). The method is modular and evolves in three steps. It first carries out contact filtering to isolate the contacts that constitute message forwarding opportunities for given message coordinates and forwarding rules. It then draws on graph expansion techniques to capture these forwarding contacts in sparse space-time graph constructs. Finally, it runs standard shortest-path algorithms over these constructs and derives typical performance metrics such as message delivery delay and path hop count. The method is flexible in that it can easily assess protocol operation under various expressions of imperfect node cooperation. We describe it in detail, analyze its complexity, and evaluate it against discrete-event simulations for three representative randomized forwarding schemes. The match with the simulation results is excellent and is obtained with run times up to three orders of magnitude smaller than the duration of the simulations, rendering our method a valuable tool for the performance analysis of opportunistic forwarding schemes.
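A minimal sketch of the trace-driven idea, assuming unrestricted ("epidemic") forwarding as the simplest possible forwarding rule: one sweep over time-ordered contacts yields each node's earliest message reception time, and hence a delivery-delay bound. The paper's space-time graph construction generalizes this to arbitrary forwarding rules; the trace below is made-up.

```python
# Earliest message delivery over a contact trace under epidemic forwarding.
# contacts: (time, u, v) tuples, assumed sorted by time and bidirectional.

def earliest_delivery(contacts, src, dst, t_gen):
    """Return the earliest time dst can receive a message generated
    at src at time t_gen, or None if it is never delivered."""
    have = {src: t_gen}                      # node -> earliest reception time
    for t, u, v in contacts:
        if t < t_gen:
            continue                         # contact predates the message
        for a, b in ((u, v), (v, u)):        # contact works in both directions
            if a in have and have[a] <= t and t < have.get(b, float("inf")):
                have[b] = t                  # b receives a copy at time t
    return have.get(dst)

trace = [(1, "s", "a"), (2, "a", "b"), (3, "b", "d"), (4, "s", "d")]
print(earliest_delivery(trace, "s", "d", t_gen=0))   # 3, via s->a->b->d
print(earliest_delivery(trace, "s", "d", t_gen=2))   # 4, direct s->d only
```

Restricting which contacts count as forwarding opportunities (the paper's contact-filtering step) is what turns this delay lower bound into an evaluation of a specific forwarding protocol.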

13 citations


Cited by

Book ChapterDOI
01 Jan 1977
TL;DR: In the Hamadryas baboon, males are substantially larger than females, and a troop of baboons is subdivided into a number of ‘one-male groups’, consisting of one adult male and one or more females with their young.
Abstract: In the Hamadryas baboon, males are substantially larger than females. A troop of baboons is subdivided into a number of ‘one-male groups’, consisting of one adult male and one or more females with their young. The male prevents any of ‘his’ females from moving too far from him. Kummer (1971) performed the following experiment. Two males, A and B, previously unknown to each other, were placed in a large enclosure. Male A was free to move about the enclosure, but male B was shut in a small cage, from which he could observe A but not interfere. A female, unknown to both males, was then placed in the enclosure. Within 20 minutes male A had persuaded the female to accept his ownership. Male B was then released into the open enclosure. Instead of challenging male A, B avoided any contact, accepting A’s ownership.

2,364 citations

Journal ArticleDOI
TL;DR: In this article, the authors propose a method for solving the p-center problem on trees and demonstrate the duality of covering and constraining p-Center problems on trees.
Abstract: Ingredients of Locational Analysis (J. Krarup & P. Pruzan). The p-Median Problem and Generalizations (P. Mirchandani). The Uncapacitated Facility Location Problem (G. Cornuejols, et al.). Multiperiod Capacitated Location Models (S. Jacobsen). Decomposition Methods for Facility Location Problems (T. Magnanti & R. Wong). Covering Problems (A. Kolen & A. Tamir). p-Center Problems (G. Handler). Duality: Covering and Constraining p-Center Problems on Trees (B. Tansel, et al.). Locations with Spatial Interactions: The Quadratic Assignment Problem (R. Burkard). Locations with Spatial Interactions: Competitive Locations and Games (S. Hakimi). Equilibrium Analysis for Voting and Competitive Location Problems (P. Hansen, et al.). Location of Mobile Units in a Stochastic Environment (O. Berman, et al.). Index.

451 citations

Book ChapterDOI
21 May 2012
TL;DR: A centrality-based caching algorithm that exploits (ego network) betweenness centrality is proposed to improve the caching gain and eliminate the uncertainty in the performance of the simplistic random caching strategy.
Abstract: Ubiquitous in-network caching is one of the key aspects of information-centric networking (ICN), which has recently received widespread research interest. In one of the key relevant proposals, known as Networking Named Content (NNC), the premise is that leveraging in-network caching to store content in every node it traverses along the delivery path can enhance content delivery. We question such an indiscriminate universal caching strategy and investigate whether caching less can actually achieve more. Specifically, we investigate whether caching only in a subset of nodes along the content delivery path can achieve better performance in terms of cache and server hit rates. In this paper, we first study the behavior of NNC's ubiquitous caching and observe that even naive random caching at one intermediate node within the delivery path can achieve similar and, under certain conditions, even better caching gain. We then propose a centrality-based caching algorithm that exploits the concept of (ego network) betweenness centrality to improve the caching gain and eliminate the uncertainty in the performance of the simplistic random caching strategy. Our results suggest that our solution can consistently achieve better gain across both synthetic and real network topologies with different structural properties.
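Ego-network betweenness is attractive here because each node can compute it from purely local information. A sketch following the Everett-Borgatti formulation, on a made-up toy graph: for each pair of the ego's neighbors that are not directly linked, the ego earns 1/(number of two-hop paths between them inside the ego network).

```python
# Ego-network betweenness of a node, computable from local knowledge only.
# adj: dict mapping node -> set of neighbors (undirected graph); toy data.

def ego_betweenness(adj, v):
    """Everett-Borgatti ego betweenness of v: credit for brokering
    pairs of v's neighbors that have no direct link to each other."""
    nbrs = adj[v]
    total = 0.0
    ns = sorted(nbrs)
    for idx, i in enumerate(ns):
        for j in ns[idx + 1:]:
            if j in adj[i]:
                continue                 # directly linked: no broker needed
            # midpoints of two-hop paths i-x-j inside the ego network:
            # v itself, plus common neighbors of i and j that v also sees
            mids = {v} | (adj[i] & adj[j] & nbrs)
            total += 1.0 / len(mids)
    return total

adj = {"v": {"a", "b", "c"}, "a": {"v", "b"}, "b": {"v", "a"}, "c": {"v"}}
print(ego_betweenness(adj, "v"))     # 2.0: v alone brokers (a,c) and (b,c)
```

A caching decision in the spirit of the paper would then, for instance, keep a copy only at the on-path node with the highest such score, rather than at every node the content traverses.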

360 citations