
Showing papers presented at the International Conference on Peer-to-Peer Computing in 2009


Proceedings ArticleDOI
09 Oct 2009
TL;DR: The key features of peer-to-peer (P2P) systems are scalability and dynamism, so simulation is crucial in P2P research.
Abstract: The key features of peer-to-peer (P2P) systems are scalability and dynamism. The evaluation of a P2P protocol in realistic environments is very expensive and difficult to reproduce, so simulation is crucial in P2P research.

617 citations


Proceedings ArticleDOI
09 Oct 2009
TL;DR: Surveys the problem space for an ALTO approach that enables P2P applications to obtain information regarding network layer topology, taking into account recent developments in the IETF ALTO Working Group.
Abstract: Today, most P2P applications do not consider locality on the underlying network topology when choosing their neighbors on the P2P routing layer. As a result, participating peers may experience long delays and peers' ISPs suffer from a large amount of (costly) inter-ISP traffic. One potential solution to mitigate these problems is to have ISPs or third parties convey information regarding the underlying network topology to P2P-clients through a dedicated service. Following this approach, the IETF has recently formed an Application Layer Traffic Optimization (ALTO) working group for standardizing a protocol to enable P2P applications to obtain information regarding network layer topology. This paper comprises the problem space for such an ALTO approach, taking into account recent developments in the IETF ALTO Working Group. In particular, we will describe requirements for an ALTO protocol identified in the IETF, concrete protocols which have been proposed so far, and the overall challenges. In addition, we will discuss related issues such as privacy considerations, the relationship of an ALTO service with existing caching solutions, discovery mechanisms for an ALTO service, and security considerations.

105 citations


Proceedings ArticleDOI
09 Oct 2009
TL;DR: OverSim facilitates rapid prototyping of new overlay protocols by providing functions common to most overlay protocols, and allows the same implementations to be used in scalable simulations and in real networks.
Abstract: OverSim facilitates rapid prototyping of new overlay protocols by providing functions common to most overlay protocols. The framework allows the same implementations to be used in scalable simulations and in real networks. OverSim benefits from features like an efficient event scheduler and strong GUI support. The large number of implemented overlay protocols and the ability to collect various statistical data make OverSim a powerful tool and reference platform for the peer-to-peer research community. OverSim is well-documented, actively developed as an open source project at http://www.oversim.org/, has an active mailing list with a strong user base, and is open to contributions.

95 citations


Proceedings ArticleDOI
09 Oct 2009
TL;DR: A general and unified mathematical framework is presented to analyze a large class of chunk selection policies, yielding interesting observations on the optimal chunk selection policy, which becomes more greedy as the upload capacity of the server increases.
Abstract: Data-driven P2P streaming systems can potentially provide good playback rates to a large number of viewers. One important design problem in such P2P systems is to determine the optimal chunk selection policy that provides high-continuity playback under the server's upload capacity constraint. We present a general and unified mathematical framework to analyze a large class of chunk selection policies. The analytical framework is asymptotically exact when the number of viewers is large. More importantly, we provide some interesting observations on the optimal chunk selection policy: it has a characteristic shape and becomes more greedy as the upload capacity of the server increases. This insight helps content providers deploy large-scale streaming systems with a QoS guarantee under a given cost constraint.
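The greediness trade-off described in this abstract can be illustrated with a toy chunk-selection policy. This is a hedged sketch, not the paper's analytical model: the `greediness` knob, the rarity map, and the deadline ordering are illustrative assumptions.

```python
import random

def select_chunk(missing, rarity, greediness, rng=random):
    """Pick one chunk index from `missing`.

    With probability `greediness`, behave greedily: request the chunk
    closest to the playback deadline (lowest index). Otherwise request
    the rarest missing chunk (fewest known replicas in `rarity`).
    """
    if not missing:
        return None
    if rng.random() < greediness:
        return min(missing)                       # closest to deadline
    return min(missing, key=lambda c: rarity[c])  # rarest first
```

In the spirit of the paper's observation, a well-provisioned server lets peers lean greedy, while a scarce server favors rarest-first to keep chunks spreading.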

66 citations


Proceedings ArticleDOI
09 Oct 2009
TL;DR: The birth and evolution of Peer-to-Peer protocols have, for the most part, been about peer discovery, but only recently have the authors seen a real push to completely decentralized peer discovery to increase scalability and resilience.
Abstract: The birth and evolution of Peer-to-Peer (P2P) protocols have, for the most part, been about peer discovery. Napster, one of the first P2P protocols, was basically FTP/HTTP plus a way of finding hosts willing to send you the file. Since then, both the transfer and peer discovery mechanisms have improved, but only recently have we seen a real push to completely decentralized peer discovery to increase scalability and resilience.

49 citations


Proceedings ArticleDOI
09 Oct 2009
TL;DR: This paper's monitoring and management framework captures the live status of a peer-to-peer network in an exhaustive statistical representation and ensures that preset quality goals are reached and kept automatically.
Abstract: The peer-to-peer paradigm shows the potential to provide the same functionality and quality as client/server-based systems, but at much lower cost. In order to control the quality of peer-to-peer systems, monitoring and management mechanisms need to be applied. Both tasks are challenging in large-scale networks with autonomous, unreliable nodes. In this paper we present a monitoring and management framework for structured peer-to-peer systems. It captures the live status of a peer-to-peer network in an exhaustive statistical representation. Using principles of autonomic computing, a preset system state is approached through automated system re-configuration when a quality deviation is detected. Evaluation shows that the monitoring is very precise and lightweight, and that preset quality goals are reached and kept automatically.

44 citations


Proceedings ArticleDOI
09 Oct 2009
TL;DR: This paper considers the predominant approach of Biased Neighbor Selection and compares it with Biased Unchoking, an alternative locality-aware peer selection strategy proposed here, showing that the two mechanisms complement each other for the BitTorrent file sharing application and achieve the best performance when combined.
Abstract: Locality promotion in P2P content distribution networks is currently a major research topic. One of the goals of all discussed approaches is to reduce the inter-domain traffic that causes high costs for ISPs. However, the focus of the work in this field is generally on the type of locality information that is provided to the overlay and on the entities that exchange this information. An aspect that is mostly neglected is how this information is used by the peers. In this paper, we consider the predominant approach of Biased Neighbor Selection and compare it with Biased Unchoking, an alternative locality-aware peer selection strategy that we propose in this paper. We show that both mechanisms complement each other for the BitTorrent file sharing application and achieve the best performance when combined.
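The two mechanisms compared in this abstract can be sketched side by side. This is an illustrative approximation, not the paper's implementation; the peer dictionaries and the `local_fraction` parameter are assumptions.

```python
def biased_neighbor_selection(candidates, my_isp, n, local_fraction=0.8):
    """Biased Neighbor Selection: pick n neighbors, filling a target
    fraction of the slots with peers from the same ISP (locality)."""
    local = [p for p in candidates if p["isp"] == my_isp]
    remote = [p for p in candidates if p["isp"] != my_isp]
    k = min(len(local), int(n * local_fraction))
    return local[:k] + remote[:n - k]

def biased_unchoke(interested, my_isp):
    """Biased Unchoking: keep the neighbor set unchanged, but prefer
    local peers when deciding whom to unchoke (stable sort puts
    same-ISP peers first)."""
    return sorted(interested, key=lambda p: p["isp"] != my_isp)
```

The first mechanism biases whom a peer connects to; the second biases whom it uploads to among existing neighbors, which is why the two compose naturally.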

36 citations


Proceedings ArticleDOI
09 Oct 2009
TL;DR: This paper studies the lookup performance of locating nodes responsible for replicated information in Kad — one of the largest DHT networks existing currently and proposes solutions which either exploit the high routing table similarity or avoid the duplicate returns using multiple target keys.
Abstract: A Distributed Hash Table (DHT) is a structured overlay network service that provides a decentralized lookup for mapping objects to locations. In this paper, we study the lookup performance of locating nodes responsible for replicated information in Kad — one of the largest DHT networks currently in existence. Throughout the measurement study, we found that Kad lookups locate only 18% of the nodes storing replicated data. This failure leads to limited reliability and an inefficient use of resources during lookups. Ironically, we found that this poor performance is due to the high level of routing table similarity, despite the relatively high churn rate in the network. We propose solutions which either exploit the high routing table similarity or avoid duplicate returns by using multiple target keys.
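One of the remedies mentioned above, issuing lookups for multiple target keys near the object key so that parallel lookups stop converging on the same nodes, might be sketched as follows. The bit layout, parameter names, and key width are illustrative assumptions, not Kad's actual scheme.

```python
def lookup_targets(key, n_targets, low_bits=8):
    """Derive several lookup targets near `key` by varying only its
    low-order bits. The targets stay inside the replica zone (they
    share all high-order bits) but steer parallel lookups toward
    different replica holders instead of duplicating one return set."""
    base = key & ~((1 << low_bits) - 1)   # keep the high-order prefix
    step = (1 << low_bits) // n_targets   # spread evenly over low bits
    return [base | (i * step) for i in range(n_targets)]
```

Each derived target is then resolved independently; the union of the returned node sets should cover more of the replica holders than repeated lookups on the single original key.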

34 citations


Proceedings ArticleDOI
09 Oct 2009
TL;DR: A model of a swarm in BitTorrent where peers have arbitrary upload and download bandwidths is presented, which captures the effects of BitTorrent's well-known ‘tit-for-tat’ mechanism in bandwidth-inhomogeneous swarms and provides an accurate mathematical description of the resulting dynamics.
Abstract: A number of analytical models exists that capture various properties of the BitTorrent protocol. However, until now virtually all of these models have been based on the assumption that the peers in the system have homogeneous bandwidths. As this is highly unrealistic in real swarms, these models have very limited applicability. Most of all, these models implicitly ignore BitTorrent's most important property: peer selection based on the highest rate of reciprocity. As a result, these models are not suitable for understanding or predicting the properties of real BitTorrent networks. Furthermore, they are hardly of use in the design of realistic BitTorrent simulators and new P2P protocols. In this paper, we extend existing work by presenting a model of a swarm in BitTorrent where peers have arbitrary upload and download bandwidths. In our model we group peers with (roughly) the same bandwidth in classes, and then analyze the allocation of upload slots from peers in one class to peers in another class. We show that our model accurately predicts the bandwidth clustering phenomenon observed experimentally in other work, and we analyze the resulting data distribution in swarms. We validate our model with experiments using real BitTorrent clients. Our model captures the effects of BitTorrent's well-known ‘tit-for-tat’ mechanism in bandwidth-inhomogeneous swarms and provides an accurate mathematical description of the resulting dynamics.

33 citations


Proceedings ArticleDOI
09 Oct 2009
TL;DR: Analytical models are built to capture the performance of BitTorrent-like P2P systems in the presence of homogeneous and heterogeneous NAT peers, and biased optimistic unchoke strategies are proposed that considerably improve overall system performance.
Abstract: BitTorrent is nowadays one of the most popular peer-to-peer (P2P) applications on the Internet; on the other hand, Network Address Translation (NAT) has become pervasive in almost all networking scenarios. Despite NAT traversal efforts, it is still very likely that P2P applications behind NAT cannot properly receive incoming connection requests. Although this phenomenon has been widely observed, so far there is no quantitative study in the literature examining the impact of NAT on P2P applications. In this paper, we build analytical models to capture the performance of BitTorrent-like P2P systems in the presence of homogeneous and heterogeneous NAT peers. We further propose biased optimistic unchoke strategies that considerably improve overall system performance. The analytical models have been validated by simulation results, which also reveal some interesting facts about the coexistence of NAT and public peers in P2P systems.
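A biased optimistic unchoke in the spirit of this abstract could weight NAT peers more heavily when picking the optimistic slot, since they cannot accept incoming connections and are otherwise starved of upload opportunities. This is a hedged sketch; the weight value and peer representation are assumptions, not the paper's strategy.

```python
import random

def biased_optimistic_unchoke(choked, nat_weight=3.0, rng=random):
    """Pick the optimistic-unchoke target from the currently choked
    neighbors, giving NAT peers `nat_weight` times the selection
    probability of public peers.

    choked: list of {"id": ..., "nat": bool} dictionaries."""
    weights = [nat_weight if p["nat"] else 1.0 for p in choked]
    return rng.choices(choked, weights=weights, k=1)[0]
```

With `nat_weight=1.0` this degrades to BitTorrent's uniform optimistic unchoke, which makes the bias easy to toggle in a simulation.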

32 citations


Proceedings ArticleDOI
09 Oct 2009
TL;DR: Analyzes the effects of ID repetitions under simplified settings and finds that ID repetition degrades Kad's performance on publishing and searching but has an insignificant effect on the lookup process.
Abstract: ID uniqueness is essential in DHT-based systems, as peer lookup and resource searching rely on ID matching. Many previous works and measurements on Kad do not take into account that IDs among peers may not be unique. We observe that a significant portion of peers (19.5% of the peers in routing tables and 4.5% of the active peers, i.e. those who respond to the Kad protocol) do not have unique IDs. These repetitions can mislead measurements of the Kad network. We further observe that there are a large number of peers that frequently change their UDP ports, and that there are a few IDs that repeat a large number of times, where none of the peers with these IDs respond to the Kad protocol. We analyze the effects of ID repetitions under simplified settings and find that ID repetition degrades Kad's performance on publishing and searching, but has an insignificant effect on the lookup process. These measurements and analyses are useful in determining the sources of repetitions and in finding suitable parameters for publishing and searching.
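The kind of repetition statistics quoted above can be computed from crawl data with a few lines. This is an illustrative sketch of the measurement idea, not the authors' tooling; the record format is an assumption.

```python
from collections import Counter

def id_repetition_stats(peers):
    """peers: list of (kad_id, endpoint) pairs collected by a crawl.

    Returns the fraction of peer records whose ID is shared with at
    least one other endpoint, plus the three most repeated IDs, the
    two quantities the measurement above reports."""
    by_id = Counter(pid for pid, _ in peers)
    repeated = sum(c for c in by_id.values() if c > 1)
    return repeated / len(peers), by_id.most_common(3)
```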

Proceedings ArticleDOI
09 Oct 2009
TL;DR: This paper addresses a shortcoming of a previously proposed distributed PKI for P2P networks, which pushes security mechanisms to the edges of the network but relies on unaffordable maintenance operations using Byzantine agreements, by proposing efficient maintenance operations without any agreements.
Abstract: In decentralized P2P networks, many security mechanisms still rely on a central authority. This centralization creates a single point of failure and does not comply with the P2P principles. We previously proposed a distributed PKI for P2P networks which pushes security mechanisms to the edges of the network but relies on unaffordable maintenance operations using Byzantine agreements. In this paper, we address this shortcoming and propose efficient maintenance operations without any agreements. Our improvements allow a real deployment of this P2P PKI.

Proceedings ArticleDOI
09 Oct 2009
TL;DR: The ideas behind ModelNet are introduced that have made it a successful experimental platform and the latest additions to the methodology to test the next generation of network protocols and applications are highlighted.
Abstract: ModelNet is a network emulator designed for repeatable, large-scale experimentation with real networked systems. This talk introduces the ideas behind ModelNet that have made it a successful experimental platform. Beyond these core concepts, the talk highlights the latest additions to our methodology to test the next generation of network protocols and applications. Many of these developments address the datacenter compute environment: high-capacity networks, sophisticated infrastructure software (storage and virtualization), and complex network load. While these efforts significantly extend ModelNet's capabilities, there remain a number of open challenges, including incorporating new performance objectives (energy) and multicore architectures.

Proceedings ArticleDOI
09 Oct 2009
TL;DR: This paper proposes a new protocol for multi-view P2P streaming, called Divide-and-Conquer (DAC), which efficiently solves the inter-channel bandwidth competition problem using a divide-and-conquer strategy at the channel level, and thus is flexible to work with various streaming protocols.
Abstract: Multi-view peer-to-peer (P2P) live streaming systems have recently emerged, where a user can simultaneously watch multiple channels. Previous work on multi-view P2P streaming solves the fundamental inter-channel bandwidth competition problem at the individual peer level, and thus can be used with very limited types of streaming protocols. In this paper, we propose a new protocol for multi-view P2P streaming, called Divide-and-Conquer (DAC), which efficiently solves the inter-channel bandwidth competition problem using a divide-and-conquer strategy at the channel level, and thus is flexible to work with various streaming protocols. This makes DAC more suitable for upgrading current single-view P2P live streaming systems to multi-view P2P live streaming systems. Our extensive packet-level simulations show that DAC is efficient in allocating the overall system bandwidth among competing channels, is flexible in working with various streaming protocols, and is scalable in supporting a large number of users and channels.

Proceedings ArticleDOI
09 Oct 2009
TL;DR: This paper proposes and studies analytical models that assess the bandwidth consumption and the probability of losing data in storage systems that use erasure-coded redundancy, and proposes a new stochastic model based on a fluid approximation that better captures the system behavior.
Abstract: Peer-to-peer storage systems aim to provide reliable long-term storage at low cost. In such systems, peers fail continuously; hence the necessity of self-repairing mechanisms to achieve high durability. In this paper, we propose and study analytical models that assess the bandwidth consumption and the probability of losing data in storage systems that use erasure-coded redundancy. We show by simulations that the classical stochastic approach found in the literature, which models each block independently, gives a correct approximation of the system's average behavior but fails to capture its variations over time. These variations are caused by the simultaneous loss of multiple data blocks that results from a peer failing (or leaving the system). We then propose a new stochastic model based on a fluid approximation that better captures the system behavior. In addition to its expectation, it gives a correct estimation of its standard deviation. This new model is validated by simulations.
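The classical independent-block style of analysis mentioned above boils down to a binomial computation: an (n, k) erasure-coded object survives as long as at most n - k of its n fragments fail before repair. A minimal sketch, assuming independent per-fragment failures (which is exactly the assumption the paper criticizes for hiding correlated losses):

```python
from math import comb

def object_loss_prob(n, k, p):
    """Probability that an (n, k) erasure-coded object is lost, i.e.
    that more than n - k of its n fragments fail before repair, with
    independent per-fragment failure probability p."""
    return sum(comb(n, i) * p ** i * (1 - p) ** (n - i)
               for i in range(n - k + 1, n + 1))
```

For replication (k = 1) this reduces to p ** n, and for no redundancy (k = n) to the probability that any fragment fails, which makes the formula easy to sanity-check.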

Proceedings ArticleDOI
09 Oct 2009
TL;DR: This paper devise a streaming scheme which minimizes the maximum end-to-end streaming delay for a mesh-based overlay network paradigm, and presents a polynomial-time approximation algorithm which is bounded by a ratio of O.
Abstract: Peer-to-peer (P2P) technology provides a scalable solution in multimedia streaming. Many streaming applications, such as IPTV and video conferencing, have rigorous constraints on end-to-end delays. Obtaining assurances on meeting those delay constraints in dynamic and heterogeneous network environments is a challenge. In this paper, we devise a streaming scheme which minimizes the maximum end-to-end streaming delay for a mesh-based overlay network paradigm. We first formulate the minimum-delay P2P streaming problem, called the MDPS problem, and prove its NP-completeness. We then present a polynomial-time approximation algorithm for this problem, and show that the performance of our algorithm is bounded by a ratio of O. Our simulation study reveals the effectiveness of our algorithm, and shows a reasonable message overhead.

Proceedings ArticleDOI
Santosh Kulkarni1
09 Oct 2009
TL;DR: Presents the Badumna Network Suite, a network engine for Massively Multiplayer Online (MMO) applications that comprises a distributed network engine interfacing with existing MMO platforms, and discusses the issues involved in integrating such a technology with commercial gaming platforms.
Abstract: We present Badumna Network Suite, a network engine for Massively Multiplayer Online (MMO) applications. MMO applications such as World of Warcraft and Second Life use a client-server architecture. This architecture has several drawbacks, such as high deployment costs, a single point of failure, and lack of scalability. Badumna's goal is to scale to truly massive player counts using minimal operator-owned infrastructure and network resources. The key to achieving this goal is in forming a peer-to-peer network and distributing the processing on this network. By doing this, Badumna reduces hosting costs significantly and also increases the maximum number of users that can be allowed in a given region. The technology comprises a distributed network engine that interfaces with existing MMO platforms. This paper presents the technology and discusses the issues involved in integrating such a technology with commercial gaming platforms — the expectations and the challenges. We then present results from commercial trials and conclude by discussing the role peer-to-peer computing will play in defining the future of MMO technology.

Proceedings ArticleDOI
09 Oct 2009
TL;DR: Presents the SimGrid simulation framework, whose goal is to provide a generic evaluation tool for large-scale distributed computing; it employs a modular simulation kernel that supports the addition and use of new resource models without changes in the user code.
Abstract: We present the SimGrid simulation framework, whose goal is to provide a generic evaluation tool for large-scale distributed computing. Its main components are two APIs for researchers who study algorithms and need to prototype simulations quickly, and two for developers who can develop applications in the comfort of the simulated world before deploying them seamlessly in the real world. SimGrid employs a modular simulation kernel that supports the addition and use of new resource models without changes in the user code. We used this feature ourselves to implement several simulation models and even to integrate the GTNetS packet-level simulator.

Proceedings ArticleDOI
09 Oct 2009
TL;DR: This paper proposes a novel solution that enables all honest peers to protect themselves from sybils with high probability in large structured P2P systems and shows the effectiveness of the proposed system in defending against Sybil attack both analytically and experimentally.
Abstract: The Sybil attack is one of the most challenging problems that plague current decentralized Peer-to-Peer systems. In a Sybil attack, a single malicious user creates multiple peer identities known as sybils. These sybils are employed to target honest peers and hence subvert the system. In this paper, we propose a novel solution that enables all honest peers to protect themselves from sybils with high probability in large structured P2P systems. In our proposed sybil defense system, we associate every peer with another non-sybil peer known as SyMon. A given peer's SyMon is chosen dynamically such that the chances of both of them being sybils are very low. The chosen SyMon is entrusted with the responsibility of moderating the transactions involving the given peer, making it almost impossible for sybils to compromise the system. We show the effectiveness of our proposed system in defending against the Sybil attack both analytically and experimentally.

Proceedings ArticleDOI
09 Oct 2009
TL;DR: PlanetSim, a discrete event-based simulation framework for peer-to-peer overlay networks and services, is implemented in Java and offers researchers and developers modularity, flexibility, and clarity in its design and implementation.
Abstract: We introduce PlanetSim, a discrete event-based simulation framework for peer-to-peer overlay networks and services. It is implemented in Java and offers good qualities for both researchers and developers: a strong system development background, as well as modularity, flexibility, and clarity in its design and implementation. All this is corroborated by the sizable community using and supporting PlanetSim.

Proceedings ArticleDOI
09 Oct 2009
TL;DR: Presents Sorcery, a novel active challenge-response mechanism based on the notion that the side of an interaction holding dominant information can detect whether the other side is telling a lie; it effectively addresses the problem of deceptive behaviors and works better than existing reputation models.
Abstract: Deceptive behaviors of peers in Peer-to-Peer (P2P) content sharing systems have become a serious problem due to features of P2P overlay networks such as anonymity and self-organization. This paper presents Sorcery, a novel active challenge-response mechanism based on the notion that the side of an interaction holding dominant information can detect whether the other side is telling a lie. To give each client such dominant information, our approach introduces a social network to the P2P content sharing system; thus, a client can establish friend-relationships with peers who are either acquaintances in reality or reliable online friends. Using the confidential voting histories of friends as its own dominant information, the client can challenge a content provider on the votes that overlap between its friends and the provider, thus detecting whether the content provider is a deceiver. Moreover, Sorcery provides a punishment mechanism which can reduce the impact of deceptive behaviors, and our work also discusses some key practical issues. The experimental results illustrate that Sorcery can effectively address the problem of deceptive behaviors and works better than existing reputation models.
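The overlap-challenge idea in this abstract can be sketched as a comparison between a provider's claimed votes and the client's friends' confidential votes on the same items. A hedged sketch, with the data layout and the flagging threshold as illustrative assumptions:

```python
def challenge_provider(friend_votes, provider_claims, overlap):
    """friend_votes / provider_claims: dicts mapping content id -> vote.
    overlap: content ids that both the friends and the provider have
    voted on (the challenge set).

    Returns the fraction of challenged items on which the provider's
    claims contradict the friends' confidential votes; a high fraction
    marks the provider as a likely deceiver."""
    if not overlap:
        return 0.0
    mismatches = sum(1 for c in overlap
                     if provider_claims.get(c) != friend_votes.get(c))
    return mismatches / len(overlap)
```

The point of using friends' histories is that the provider cannot know which items are in the challenge set, so consistent lying is hard.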

Proceedings ArticleDOI
09 Oct 2009
TL;DR: T-SIZE is described, a peer counting protocol that is based on gossip-based aggregation that can handle extreme levels of churn, and automatically ensures that all participating nodes learn the outcome of the peer counting.
Abstract: This paper describes T-SIZE, a peer counting protocol that is based on gossip-based aggregation. Peer counting has become increasingly important, as the size of the network is often a crucial parameter used to guarantee robustness, small diameter, and load balance, or to generally optimize the system. Our work improves on previous work by providing a protocol that is eventually accurate, i.e. the estimate will eventually converge to the true peer count in the absence of churn. The protocol can handle extreme levels of churn, and automatically ensures that all participating nodes learn the outcome of the peer counting.
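The gossip-based aggregation that T-SIZE builds on is classically sketched as mass-conserving averaging: one node starts with value 1, all others with 0, and repeated pairwise averaging drives every value toward the mean 1/n, so each node estimates the size as 1/value. This toy synchronous simulation illustrates only that base scheme, not T-SIZE's churn handling or convergence detection.

```python
import random

def estimate_size(n, rounds=5000, seed=1):
    """Simulate gossip averaging on n nodes and return each node's
    size estimate (1 / local value). The pairwise exchange conserves
    total mass, which is what makes the estimate eventually exact."""
    rng = random.Random(seed)
    x = [1.0] + [0.0] * (n - 1)
    for _ in range(rounds):
        i, j = rng.randrange(n), rng.randrange(n)
        x[i] = x[j] = (x[i] + x[j]) / 2   # mass-conserving average
    return [1.0 / v if v else float("inf") for v in x]
```

Because the sum of all local values stays 1, any disagreement between nodes is purely transient; churn breaks exactly this invariant, which is the problem the paper addresses.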

Proceedings ArticleDOI
09 Oct 2009
TL;DR: This demonstration shows the component-oriented design and the evaluation of two P2P systems implemented in Kompics, Chord and Cyclon, and demonstrates how component-oriented design enables seamless switching between alternative protocols.
Abstract: We present a framework for building and evaluating P2P systems in simulation, local execution, and distributed deployment. Such uniform system evaluations increase confidence in the obtained results. We briefly introduce the Kompics component model and its P2P framework. We describe the component architecture of a Kompics P2P system and show how to define experiment scenarios for large dynamic systems. The same experiments are conducted in reproducible simulation, in real-time execution on a single machine, and distributed over a local cluster or a wide area network. This demonstration shows the component-oriented design and the evaluation of two P2P systems implemented in Kompics: Chord and Cyclon. We simulate the systems and then execute them in real time. During real-time execution, we monitor the dynamic behavior of the systems and interact with them through their web-based interfaces. We demonstrate how component-oriented design enables seamless switching between alternative protocols.

Proceedings ArticleDOI
09 Oct 2009
TL;DR: GO (Gossip Objects) is a per-node gossip platform developed in support of gossip protocols; its delivery heuristic is based on the observations that multiple rumors can often be squeezed into a single IP packet, and that indirect routing of rumors can speed up delivery.
Abstract: Gossip-based protocols are increasingly popular in large-scale distributed applications that disseminate updates to replicated or cached content. GO (Gossip Objects) is a per-node gossip platform that we developed in support of this class of protocols. In addition to making it easy to develop new gossip protocols and applications, GO allows nodes to join multiple gossip groups without losing the appealing fixed bandwidth guarantee of gossip protocols, and the platform optimizes rumor delivery latency in a principled manner. Our heuristic is based on the observations that multiple rumors can often be squeezed into a single IP packet, and that indirect routing of rumors can speed up delivery. We formalize these observations and develop a theoretical analysis of this heuristic. We have also implemented GO, and study the effectiveness of the heuristic by comparing it to the more standard random dissemination gossip strategy via simulation. We also evaluate GO on a trace from a popular distributed application.

Proceedings ArticleDOI
09 Oct 2009
TL;DR: Discusses the possibility of using accountability to secure gossip-based dissemination protocols built on asymmetric exchanges; the fact that gossip protocols are dynamic and randomized makes the approach robust against collusion and alleviates the need for cryptography.
Abstract: Peer-to-peer content dissemination applications suffer immensely from freeriders, i.e., nodes that do not provide their fair share. The Tit-for-Tat (TfT) incentives have received much attention as they help make such systems more robust against freeriding. However, these rely on an asymmetric component, namely opportunistic pushes, that let peers receive content without sending anything in return. Opportunistic push constitutes the Achilles' heel of TfT-based protocols as illustrated by the fact that all known attacks against them exploit it. This problem becomes even more serious when used by colluding freeriders. In this paper, we discuss the possibility of using accountability to secure gossip-based dissemination protocols based on asymmetric exchanges. The fact that gossip protocols are dynamic and randomized makes our approach robust against collusion and alleviates the need for cryptography. We present the challenges raised by an auditing approach and give insights into how to build a freerider-tracking protocol for gossip-based content dissemination.

Proceedings ArticleDOI
09 Oct 2009
TL;DR: The experimental results show that routing algorithms can obtain important benefits from reputation — even when peer lifetimes are short and the fraction of bad users is moderate, and a new routing protocol is proposed for Chord.
Abstract: Recently, it has been argued that reputation mechanisms could be used to improve routing by conditioning next-hop decisions on the past behavior of peers. However, churn may severely hinder the applicability of reputation mechanisms. In particular, short peer lifetimes imply that reputations are typically generated from a small number of transactions and are not very reliable. To examine how high rates of churn affect reputation systems, we present an analytical model to study the potential damage done by malicious peers combined with churn. With our model, we show that reputations cannot in general be expected to be reliable. We then analyze the impact of this result by proposing a new routing protocol for Chord. Mainly, the protocol exploits reputation to improve the decision about which neighbor to select as the next-hop peer. Our experimental results show that routing algorithms can obtain important benefits from reputation — even when peer lifetimes are short and the fraction of bad users is moderate.
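Reputation-conditioned next-hop selection on a Chord-like ring might look like the sketch below: among the fingers that make progress toward the key, prefer the most reputable one instead of blindly taking the closest preceding finger. This is an illustrative approximation of the idea, not the paper's protocol; the ring size, tie-breaking, and reputation scale are assumptions.

```python
def next_hop(self_id, fingers, reputation, key, ring_bits=16):
    """Pick the next hop toward `key` on a ring of 2**ring_bits ids.
    Candidates are fingers that lie strictly between us and the key
    (i.e. that make progress); among them we maximize reputation and
    break ties by remaining clockwise distance to the key."""
    size = 1 << ring_bits
    dist = lambda a, b: (b - a) % size
    candidates = [f for f in fingers if dist(self_id, f) < dist(self_id, key)]
    if not candidates:
        return None
    return max(candidates,
               key=lambda f: (reputation.get(f, 0.0), -dist(f, key)))
```

With an empty reputation map this degrades to plain greedy Chord routing, so the reputation bias can be studied as a pure overlay on the baseline.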

Proceedings ArticleDOI
09 Oct 2009
TL;DR: Gives a Trusted Computing-based solution that ties offers to monotonic counters in such a way that any attempt to not report an offer, or to report it falsely, will be detected.
Abstract: Peer-to-peer (P2P) based marketplaces have a number of advantages over traditional centralized systems (such as eBay). Peers form a distributed hash table and store sale offers for other peers. A key problem in such a system is ensuring that the peers store and report all sale offers fairly, and do not, for instance, favor their own offers. We give a solution to this problem based on Trusted Computing, but unlike other approaches we do not measure and restrict all firmware and software running on a peer. Instead, we tie offers to monotonic counters in such a way that any attempt to not report an offer, or to report it falsely, will be detected.
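The counter-binding idea can be illustrated with a minimal audit check: if a trusted monotonic counter is incremented once per stored offer, an honest peer's report must cover a gapless counter range, so a withheld offer leaves a hole. This sketch only shows the gap check; the actual binding and attestation of counter values to offers (the Trusted Computing part) is omitted.

```python
def audit_offers(reported, counter_now):
    """reported: list of (counter_value, offer) pairs a peer returns.
    counter_now: the current attested value of its monotonic counter.

    Honest peers report exactly the counter values 1..counter_now;
    any gap reveals a withheld offer, any extra or repeated value a
    fabricated one."""
    seen = sorted(c for c, _ in reported)
    return seen == list(range(1, counter_now + 1))
```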

Proceedings ArticleDOI
09 Oct 2009
TL;DR: The simulation results suggest that PIS/NC has the possibility of dramatically improving the lookup latency of DHTs, and presents Canary, a P IS/NC-based CAN whose d-dimensional logical space corresponds to that of Vivaldi.
Abstract: Network coordinates (NCs) construct a logical space that enables efficient and accurate estimation of network latency. Although many researchers have proposed NC-based strategies to reduce the lookup latency of distributed hash tables (DHTs), these strategies achieve only limited improvements: the nearest node to which a query should be forwarded is not always within a node's consideration scope, because conventional DHTs assign node IDs independently of the underlying physical network. In this paper, we propose an NC-based method of constructing a topology-aware DHT via a Proximity Identifier Selection strategy (PIS/NC). PIS/NC assigns each node an ID based on that node's NC. This paper presents Canary, a PIS/NC-based CAN whose d-dimensional logical space corresponds to that of Vivaldi. Our simulation results suggest that PIS/NC can dramatically improve the lookup latency of DHTs: whereas DHash++ reduces the median lookup latency of the original Chord by only 15%, Canary reduces that of the original CAN by 70%.
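The core of proximity identifier selection is deriving a node's CAN point from its Vivaldi coordinate rather than choosing it at random. A minimal sketch of such a mapping, assuming known coordinate bounds `lo`/`hi` (the paper does not specify a normalization, so the function name and constants are illustrative):

```python
# Hypothetical sketch of PIS/NC: a joining node maps its d-dimensional
# Vivaldi coordinate into CAN's [0, 1)^d logical space by clipping and
# linear scaling, so latency-close nodes become overlay neighbors.

def vivaldi_to_can_point(coord, lo=-100.0, hi=100.0):
    """Map a Vivaldi coordinate (e.g. in milliseconds per axis) into
    CAN's unit space; values outside [lo, hi] are clipped to the edge."""
    span = hi - lo
    return tuple(min(max((c - lo) / span, 0.0), 1.0 - 1e-9) for c in coord)
```

The node then joins the CAN zone containing the returned point, so the overlay's zone adjacency mirrors proximity in Vivaldi's latency space.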

Proceedings ArticleDOI
09 Oct 2009
TL;DR: In this paper, the authors proposed a solution to the piece rarity problem in BitTorrent by unifying a piece rarity factor with the BitTorrent Unchoking algorithm, which can optimize incentives in a swarm and help its constituent peers in achieving the equilibrium facilitating truly co-operative behavior.
Abstract: BitTorrent is an extensively adopted P2P content distribution system on the Internet. In spite of its pro-incentive approach and ease of implementation, recent research has empirically shown BitTorrent to be vulnerable to strategic manipulation by its constituent peers in a swarm. Moreover, honest piece revelation and free-riding are becoming increasing concerns. Our findings indicate that, to date, it is the orthogonal treatment of piece rarity and unchoking that has encouraged strategic manipulation, enabling unfair maximization of incentives in P2P systems. In this paper, we propose that the solution to these concerns lies in unifying a piece rarity factor with the BitTorrent unchoking algorithm. We also discuss a new Discount Parameter attack that compromises most tit-for-tat mechanisms. Our analysis demonstrates how under-reporting, as a piece revelation strategy in auction-based choking algorithms, could result in starvation. prTorrent, a novel approach based on BitTorrent, shows how strategic formulation of the piece rarity parameter can optimize incentives in a swarm and help its constituent peers reach an equilibrium that facilitates truly cooperative behavior.
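One way to read "unifying piece rarity with unchoking" is to weight each peer's upload rate by the rarity of the pieces it can still offer, rather than ranking by rate alone as classic tit-for-tat does. The combination rule and names below are illustrative assumptions, not prTorrent's exact formulation:

```python
# Hypothetical sketch: rarity-weighted unchoking. A peer offering rare,
# still-needed pieces outranks an equally fast peer offering common ones.

def rarity(piece, availability, swarm_size):
    """Rarer pieces (held by fewer peers) score closer to 1."""
    return 1.0 - availability.get(piece, 0) / swarm_size

def unchoke(peers, availability, swarm_size, have, slots=4):
    """peers: name -> (upload_rate, set_of_pieces_held);
    have: set of pieces we already hold. Returns peers to unchoke."""
    def score(name):
        rate, pieces = peers[name]
        useful = pieces - have          # only pieces we still need count
        if not useful:
            return 0.0
        avg_rarity = sum(rarity(p, availability, swarm_size)
                         for p in useful) / len(useful)
        return rate * avg_rarity
    return sorted(peers, key=score, reverse=True)[:slots]
```

Under such a rule, under-reporting pieces no longer pays the same way: hiding pieces shrinks a peer's `useful` set and thus its own unchoke score, which is the coupling the paper argues for.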

Proceedings ArticleDOI
09 Oct 2009
TL;DR: This work detects and analyzes the two basic methodologies used to achieve load-balancing: Iterative key re-distribution between neighbors and node migration and proposes a hybrid method that adaptively utilizes these two extremes to achieve both fast and cost-effective load- Balancing in distributed systems that support range queries.
Abstract: Distributed systems such as peer-to-peer overlays have been shown to efficiently support the processing of range queries over large numbers of participating hosts. In such systems, uneven load allocation has to be effectively tackled in order to minimize overloaded peers and optimize their performance. In this work, we identify and analyze the two basic methodologies used to achieve load-balancing: iterative key redistribution between neighbors and node migration. Based on this analysis, we propose a hybrid method that adaptively combines these two extremes to achieve both fast and cost-effective load-balancing in distributed systems that support range queries. As a case study, we offer an implementation on top of a Skip Graph, where we validate our findings under a variety of workloads. Our experimental analysis shows that the hybrid method converges 10% faster than simple neighbor item exchanges and is more than 70% more bandwidth-efficient than simple node migrations.
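The hybrid decision between the two extremes can be sketched as a threshold rule: shift keys to a neighbor when the imbalance is mild and local, and bring in a migrating node when it is severe. The threshold `theta`, the halving rule, and the function name are illustrative, assuming range-partitioned keys as in a Skip Graph:

```python
# Hypothetical sketch of the hybrid load-balancing step. A node compares
# its load with a neighbor's and picks the cheaper remedy: (a) shift the
# tail of its key range across the shared boundary, or (b) hand half of
# its range to an underloaded node that migrates next to the hotspot.

def balance_step(my_keys, neighbor_keys, theta=2.0):
    """Returns ('exchange', keys_to_shift) or ('migrate', keys_for_newcomer)."""
    if len(my_keys) <= theta * len(neighbor_keys):
        # Mild imbalance: iterative key redistribution with the neighbor.
        surplus = (len(my_keys) - len(neighbor_keys)) // 2
        return ("exchange", sorted(my_keys)[-surplus:] if surplus > 0 else [])
    # Severe imbalance: node migration takes over half of our range.
    half = len(my_keys) // 2
    return ("migrate", sorted(my_keys)[half:])
```

Exchanges are cheap but spread load slowly (one hop per round), while migrations resolve hotspots in one step at a high bandwidth cost; gating migration behind the threshold is what yields the fast-yet-frugal behavior the experiments report.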