
Showing papers presented at "International Conference on Peer-to-Peer Computing in 2013"


Proceedings ArticleDOI
19 Dec 2013
TL;DR: This paper analyzes how Bitcoin uses a multi-hop broadcast to propagate transactions and blocks through the network to update the ledger replicas, and verifies the conjecture that the propagation delay in the network is the primary cause for blockchain forks.
Abstract: Bitcoin is a digital currency that, unlike traditional currencies, does not rely on a centralized authority. Instead, Bitcoin relies on a network of volunteers that collectively implement a replicated ledger and verify transactions. In this paper we analyze how Bitcoin uses a multi-hop broadcast to propagate transactions and blocks through the network to update the ledger replicas. We then use the gathered information to verify the conjecture that the propagation delay in the network is the primary cause for blockchain forks. Blockchain forks should be avoided as they are symptomatic of inconsistencies among the replicas in the network. We then show what can be achieved by pushing the current protocol to its limit with unilateral changes to the client's behavior.
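The link between propagation delay and fork rate can be illustrated with a back-of-the-envelope calculation: if block discovery is modeled as a Poisson process with a roughly 10-minute mean interval, a fork occurs whenever a second block is found while the first one is still propagating. The sketch below is only an illustrative approximation of this reasoning, not the paper's measured model; the delay values are hypothetical.

```python
import math

def fork_probability(propagation_delay_s: float, block_interval_s: float = 600.0) -> float:
    """Probability that another block is found within the propagation window,
    assuming block discovery is a Poisson process with the given mean interval."""
    rate = 1.0 / block_interval_s            # expected blocks per second
    return 1.0 - math.exp(-rate * propagation_delay_s)

# Illustrative delays: a slow (12 s) vs. a fast (2 s) average propagation time.
for delay_s in (12.0, 2.0):
    print(f"delay = {delay_s:4.1f} s -> fork probability per block ~ {fork_probability(delay_s):.4f}")
```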

1,116 citations


Proceedings ArticleDOI
19 Dec 2013
TL;DR: A concept is presented that addresses this drawback of Bitcoin and allows it to be used for fast transactions; as a real-world application, a snack vending machine was modified to accept Bitcoin payments and make use of fast transaction confirmations.
Abstract: Cashless payments are nowadays ubiquitous and decentralized digital currencies like Bitcoin are increasingly used as means of payment. However, due to the delay of the transaction confirmation in Bitcoin, it is not used for payments that rely on quick transaction confirmation. We present a concept that addresses this drawback of Bitcoin and allows it to be used for fast transactions. We evaluate the performance of the concept using double-spending attacks and show that, employing our concept, the success of such attacks diminishes to less than 0.09%. Moreover, we present a real world application: We modified a snack vending machine to accept Bitcoin payments and make use of fast transaction confirmations.

152 citations


Proceedings ArticleDOI
19 Dec 2013
TL;DR: This paper presents an efficient methodology for estimating the number of active users in the BitTorrent Mainline DHT based on modeling crawling inaccuracies as a Bernoulli process, which guarantees a very accurate estimation and is able to provide the estimate in about 5 seconds.
Abstract: Peer-to-peer networks have been quite thoroughly measured over the past years; however, the BitTorrent Mainline DHT has received very little attention even though it is by far the largest of the currently active overlay systems, as our results show. As Mainline DHT differs from other systems, existing measurement methodologies are not appropriate for studying it. In this paper we present an efficient methodology for estimating the number of active users in the network. We have identified an omission in previous methodologies used to measure the size of the network, and our methodology corrects this. Our method is based on modeling crawling inaccuracies as a Bernoulli process. It guarantees a very accurate estimation and is able to provide the estimate in about 5 seconds. Through experiments in controlled situations, we demonstrate the accuracy of our method and show the causes of the inaccuracies in previous work by reproducing the incorrect results. Besides accurate network size estimates, our methodology can be used to detect network anomalies, in particular Sybil attacks in the network. We also report on the results from our measurements, which have been going on for almost 2.5 years and are the first long-term study of Mainline DHT.
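One way to correct for crawler misses is to treat each node as captured independently with some probability per crawl (a Bernoulli process) and to estimate the true population from the overlap of two independent crawls, in the spirit of capture-recapture estimation. The sketch below is an illustrative approximation of that idea, not the paper's exact estimator, and the node sets are made up.

```python
def estimate_population(crawl_a: set, crawl_b: set) -> float:
    """Lincoln-Petersen style estimate: if each node is captured independently
    with probability p per crawl, then |A| ~ p*N, |B| ~ p*N and |A & B| ~ p*p*N,
    so N ~ |A| * |B| / |A & B|."""
    overlap = len(crawl_a & crawl_b)
    if overlap == 0:
        raise ValueError("crawls share no nodes; cannot estimate")
    return len(crawl_a) * len(crawl_b) / overlap

# Toy example with synthetic node IDs (true population here is 10,000).
a = {f"node{i}" for i in range(0, 8000)}
b = {f"node{i}" for i in range(2000, 10000)}
print(f"estimated network size ~ {estimate_population(a, b):.0f}")
```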

74 citations


Proceedings ArticleDOI
19 Dec 2013
TL;DR: It is argued that denying service to uncooperative peers may not be the best long-term approach; the findings suggest that peer-to-peer live streaming can support uncooperative peers.
Abstract: Peer-to-Peer live streaming systems help content providers and distributors drastically reduce bandwidth costs by sharing costs among peers. Researchers have dedicated significant effort to developing techniques that discourage or exclude uncooperative peers from peer-to-peer systems. However, users are often unable to cooperate, e.g., users on a mobile device with limited, costly bandwidth. We study the impact of uncooperative peers on video discontinuity and latency using PlanetLab. We find that simple mechanisms, like forwarding video data requests to cooperative peers instead of wasting effort sending requests to uncooperative peers, allow peer-to-peer live streaming to serve 50% of uncooperative peers without performance degradation. We argue that denying service to uncooperative peers may not be the best long-term approach; our findings suggest that peer-to-peer live streaming can support uncooperative peers.

21 citations


Proceedings ArticleDOI
19 Dec 2013
TL;DR: This study developed a distributed SNS service, or autonomous distributed network, based on named data networking (NDN) with two-tier hierarchical ID-based encryption (HIDE) for authentication with the aim of disseminating safety confirmation after a large-scale disaster.
Abstract: During a disaster, it is important to exchange information such as the magnitude of the disaster and safety confirmation. However, the communication infrastructure, such as a mobile phone network, may not be available after a large-scale disaster because current communication infrastructures have centralized control structures. Thus, it is important to design a network that can maintain normal service using the remaining network resources, such as base stations and user terminals, even if the central servers are no longer available because of disconnections among servers. In this study, we developed a distributed SNS service, or autonomous distributed network, based on named data networking (NDN) with two-tier hierarchical ID-based encryption (HIDE) for authentication. We conducted a simulation of a residential area in the suburbs of Tokyo to demonstrate the performance of this SNS application, where it was used to disseminate safety confirmation. We present the average message delivery time for different decision intervals, synchronization intervals, and ratios of moving people.

18 citations


Proceedings ArticleDOI
19 Dec 2013
TL;DR: WPSS overcomes the limitations of existing protocols by executing short random walks over a stable topology and by using shortcuts (wormholes), thus limiting the rate of connection establishments and guaranteeing freshness of samples, respectively.
Abstract: State of the art gossip protocols for the Internet are based on the assumption that connection establishment between peers comes at negligible cost. Our experience with commercially deployed P2P systems has shown that this cost is much higher than generally assumed. As such, peer sampling services often cannot provide fresh samples because the service would require too high a connection establishment rate. In this paper, we present the wormhole-based peer sampling service (WPSS). WPSS overcomes the limitations of existing protocols by executing short random walks over a stable topology and by using shortcuts (wormholes), thus limiting the rate of connection establishments and guaranteeing freshness of samples, respectively. We show that our approach can decrease the connection establishment rate by one order of magnitude compared to the state of the art while providing the same levels of freshness of samples. This is achieved without sacrificing the desirable properties of a PSS for the Internet, such as robustness to churn and NAT-friendliness. We support our claims with a thorough measurement study in our deployed commercial system as well as in simulation.
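The core idea can be sketched as follows: each peer keeps a stable neighbour set (so connections are reused), and a sample is produced by a short random walk that occasionally jumps through a long-lived wormhole link into a distant region of the overlay. The toy simulation below is a minimal sketch under assumed parameters (ring topology, walk length, wormhole probability), not the deployed protocol.

```python
import random

def wpss_sample(neighbours, wormholes, start, walk_len=3, wormhole_prob=0.2):
    """One peer sample: a short random walk over the stable topology,
    occasionally taking a wormhole shortcut instead of a regular hop."""
    node = start
    for _ in range(walk_len):
        if wormholes.get(node) and random.random() < wormhole_prob:
            node = random.choice(wormholes[node])   # shortcut to a distant peer
        else:
            node = random.choice(neighbours[node])  # normal hop on a stable link
    return node

# Toy ring overlay of 10 peers with two wormhole shortcuts.
n = 10
neighbours = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}
wormholes = {0: [5], 3: [8]}
print("samples:", [wpss_sample(neighbours, wormholes, start=0) for _ in range(5)])
```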

17 citations


Proceedings ArticleDOI
19 Dec 2013
TL;DR: This paper uses cooperative game theory to dynamically assign each peer to a swarm that is commensurate with its upload contribution and the price it is willing to pay, and develops and simulates a distributed dynamic P2P streaming algorithm to dynamically adjust each peer's video version based on the collaborative behaviors of all peers.
Abstract: In dynamic streaming, a user can dynamically choose from different versions of the same video. In P2P dynamic streaming, there is one P2P swarm for each version, and within a swarm, peers can share video chunks with each other, thereby reducing the server's bandwidth cost. Due to economies of scale, cooperation among peers can also reduce the per-peer content price. In this paper, we use cooperative game theory to dynamically assign each peer to a version. To maximally incentivize peer cooperation, we use mechanism design to develop pricing schemes that reflect content and bandwidth cost savings derived from peer cooperation. With this approach, each peer is assigned to a swarm that is commensurate with its upload contribution and the price it is willing to pay. We also develop and simulate a distributed dynamic P2P streaming algorithm, consisting of chunk scheduling, token-based accounting, and video version switching, to dynamically adjust each peer's video version based on the collaborative behaviors of all peers.

17 citations


Proceedings ArticleDOI
19 Dec 2013
TL;DR: This work states that for live content delivery, network layer multicast would be desirable for ISPs as well as content providers to reduce the load due to parallel unicast connections for the same content.
Abstract: Internet video streaming causes the second largest transfer volume and is the second fastest growing application class in Internet traffic analysis [3]. In this context, the streaming of live content also becomes increasingly relevant as more traditional broadcasters start delivering content over the Internet. Today, live video streaming services rely on IP-unicast delivery or closed IP-multicast systems inside single administrative domains. Approaches such as Content Delivery Networks (CDNs) are used to improve the unicast delivery of content. They usually end at the edge of the residential broadband access Internet Service Provider (ISP) networks that connect end users to the Internet. For live content delivery, network layer multicast would be desirable for ISPs as well as content providers to reduce the load due to parallel unicast connections for the same content. Because of the well-known drawbacks and limitations of IP-multicast [2], however, network layer multicast support is usually not available.

15 citations


Proceedings ArticleDOI
19 Dec 2013
TL;DR: DirectDemocracyP2P is an open source platform developed in JAVA and offering peer-to-peer and mobile ad hoc wireless communication capabilities.
Abstract: DirectDemocracyP2P is an open source platform developed in Java, offering peer-to-peer and mobile ad hoc wireless communication capabilities. The platform offers an API supporting plugins, besides its main application: deliberative petition drives (aka citizens' initiatives with integrated argumentation) [1]. An authentication-by-reputation technique based on digital signatures and peer review [2], [3] is integrated into the platform via this main application. Each peer independently manages its database of items of interest. The items of interest are encapsulated as self-contained pieces of information and uniquely identifiable using a system of global identifiers (GIDs). Each GID consists of a combination of public keys with creation dates, or digest values. Communication is based on a combination of push and pull mechanisms [1].

15 citations


Proceedings ArticleDOI
09 Sep 2013
TL;DR: This paper examines the impact of two major antipiracy actions, the closure of Megaupload and the implementation of the French antipiracy law, on publishers in the largest BitTorrent portal, who are major providers of copyrighted content online.
Abstract: During recent years, a few countries have put in place online antipiracy laws and there have been some major enforcement actions against violators. This raises the question of to what extent antipiracy actions have been effective in deterring online piracy. This is a challenging issue to explore because of the difficulty of capturing user behavior and of identifying the subtle effects of various underlying (and potentially opposing) causes. In this paper, we tackle this question by examining the impact of two major antipiracy actions, the closure of Megaupload and the implementation of the French antipiracy law, on publishers in the largest BitTorrent portal, who are major providers of copyrighted content online. We capture snapshots of BitTorrent publishers at proper times relative to the targeted antipiracy event and use the trends in the number and the level of activity of these publishers to assess their reaction to these events. Our investigation illustrates the importance of examining the impact of antipiracy events on different groups of publishers and provides valuable insights on the effect of selected major antipiracy actions on publishers' behavior.

14 citations


Proceedings ArticleDOI
19 Dec 2013
TL;DR: Kaleidoscope is introduced, a novel routing/caching scheme designed to significantly reduce the cost of lookup operations in Kademlia by using a color-based distributed cache that greatly improves load balancing among the nodes and reduces the well documented hot spots problem.
Abstract: Kademlia is considered to be one of the most effective key based routing protocols. It is nowadays implemented in many file sharing peer-to-peer networks such as BitTorrent, KAD, and Gnutella. This paper introduces Kaleidoscope, a novel routing/caching scheme designed to significantly reduce the cost of lookup operations in Kademlia by using a color-based distributed cache. Moreover, Kaleidoscope greatly improves load balancing among the nodes and reduces the well documented hot spots problem. The paper also includes an extensive performance study demonstrating the benefits of Kaleidoscope.
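The colour-based idea can be illustrated with a small sketch: every node and every key derive a colour from their identifier, and lookup results are cached preferentially at nodes whose colour matches the key's, spreading cache load across colours. This is only an illustrative approximation of the scheme; the number of colours and the hash choice are assumptions, not parameters from the paper.

```python
import hashlib

NUM_COLORS = 8  # illustrative parameter

def color_of(identifier: bytes, num_colors: int = NUM_COLORS) -> int:
    """Derive a colour from a node ID or content key by hashing it."""
    return hashlib.sha1(identifier).digest()[0] % num_colors

def cache_candidates(nodes, key: bytes):
    """Nodes whose colour matches the key's colour are preferred cache locations."""
    key_color = color_of(key)
    return [n for n in nodes if color_of(n) == key_color]

nodes = [f"node-{i}".encode() for i in range(50)]
key = b"some-content-key"
print(f"key colour = {color_of(key)}, matching cache nodes = {len(cache_candidates(nodes, key))}")
```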

Proceedings ArticleDOI
19 Dec 2013
TL;DR: This work proposes a lightweight method that improves the locality of active swarms by 6% by suggesting geographically close peers with the Peer Exchange Protocol (PEX), without any modifications to the current system.
Abstract: BitTorrent, the most popular peer-to-peer (P2P) file-sharing protocol, accounts for a significant fraction of the traffic of the Internet. Using a novel technique, we measure live BitTorrent swarms on the Internet and confirm the conjecture that overlay networks formed by BitTorrent are not locality-aware, i.e., they include many unnecessary long distance connections. Attempts to improve the locality have failed because they require a modification of the existing protocol, or interventions by Internet service providers (ISPs). In contrast, we propose a lightweight method that improves the locality of active swarms by 6% by suggesting geographically close peers with the Peer Exchange Protocol (PEX), without any modifications to the current system. An improvement of locality not only benefits ISPs by reducing network transit costs; it also reduces the traffic over long-distance connections, which delays the need to expand the infrastructure and reduces power consumption. We expect that, if used on a large scale, our method would reduce the Internet's energy consumption by 8 TWh a year.
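A minimal sketch of the idea: when composing a PEX reply, a peer sorts the candidate peers it knows by great-circle distance from the requester and suggests the closest ones, so locality improves without any protocol change. The coordinates and candidate set below are made up for illustration.

```python
import math

def haversine_km(a, b):
    """Great-circle distance in km between two (lat, lon) points given in degrees."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371.0 * math.asin(math.sqrt(h))

def pex_suggestions(requester_loc, candidates, k=3):
    """Return the k geographically closest candidate peers for a PEX reply."""
    return sorted(candidates, key=lambda c: haversine_km(requester_loc, c[1]))[:k]

candidates = [("peer-A", (48.85, 2.35)),    # Paris
              ("peer-B", (35.68, 139.69)),  # Tokyo
              ("peer-C", (52.52, 13.40)),   # Berlin
              ("peer-D", (40.71, -74.01))]  # New York
print(pex_suggestions((50.11, 8.68), candidates, k=2))  # requester near Frankfurt
```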

Proceedings ArticleDOI
19 Dec 2013
TL;DR: Box2Box is presented, a new P2P file synchronization application which supports novel features not present in BitTorrent-Sync, and is demonstrated in several use cases, each targeting a different feature.
Abstract: Due to an increasing number of devices connected to the Internet, data synchronization becomes more important. Centrally managed storage services, such as Dropbox, are popular for synchronizing data between several devices. P2P-based approaches that run fully decentralized, such as BitTorrent-Sync, are starting to emerge. This paper presents Box2Box, a new P2P file synchronization application which supports novel features not present in BitTorrent-Sync. Box2Box is demonstrated in several use cases, each targeting a different feature.

Proceedings ArticleDOI
19 Dec 2013
TL;DR: This paper introduces the properties of nodes that are indicative of their reliability, and proposes a scheme to integrate these properties into the traditional random walks, and shows that integrating node properties into random walks results in much more robust reputation systems.
Abstract: Reputation systems are essential to establish trust and to provide incentives for cooperation among users in decentralized networks. In these systems, the most widely used algorithms for computing reputations are based on random walks. However, in decentralized networks where nodes have only a partial view of the system, random walk-based algorithms can be easily exploited by uncooperative and malicious nodes. Traditionally, a random walk only uses information about the adjacency of nodes, and ignores their structural and temporal properties. Nevertheless, the properties of nodes indicate their reliability, and so, random walks using much richer information about the nodes than simple adjacency may achieve higher robustness against malicious exploitations. In this paper, we introduce the properties of nodes that are indicative of their reliability, and we propose a scheme to integrate these properties into the traditional random walks. Particularly, we consider two common malicious exploitations of random walks in decentralized networks, uncooperative nodes and Sybil attacks, and we show that integrating node properties into random walks results in much more robust reputation systems. Our experimental evaluation in synthetic graphs and graphs derived from real-world networks covering a significant number of users, shows the effectiveness of the resulting biased random walks.
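The bias can be sketched as a random walk whose transition probabilities are weighted by per-node reliability scores (derived, for instance, from structural or temporal properties) instead of being uniform over neighbours, so walks rarely enter low-reliability regions such as suspected Sybils. The graph and scores below are made up; this is a sketch of the general idea, not the paper's exact scheme.

```python
import random

def biased_step(node, adjacency, reliability):
    """Choose the next node with probability proportional to its reliability score."""
    neighbours = adjacency[node]
    weights = [reliability[n] for n in neighbours]
    return random.choices(neighbours, weights=weights, k=1)[0]

def biased_walk(start, adjacency, reliability, length=10):
    path = [start]
    for _ in range(length):
        path.append(biased_step(path[-1], adjacency, reliability))
    return path

# Toy graph: node "s" is a suspected Sybil and is given a low reliability score.
adjacency = {"a": ["b", "s"], "b": ["a", "c"], "c": ["b", "s"], "s": ["a", "c"]}
reliability = {"a": 1.0, "b": 1.0, "c": 1.0, "s": 0.05}
print(biased_walk("a", adjacency, reliability))
```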

Proceedings ArticleDOI
19 Dec 2013
TL;DR: GeoSwarm combines the strengths of a BitTorrent-like download scheme with the locality awareness of an overlay for location-based search and its built-in replication mechanism to achieve a robust and fast download as well as a reliable storage of location-related multimedia content.
Abstract: Existing peer-to-peer (p2p) overlays for location-based services suffer from two major drawbacks: (i) they do not store data persistently under peer churn and (ii) they do not allow for the fast retrieval of large files, especially under asymmetric link conditions. This tremendously limits the use of current and future p2p location-based services as users are not able to share larger files such as high resolution pictures or video snippets. To overcome these two problems, we present GeoSwarm: a reliable multi-source download scheme for p2p location-based services. GeoSwarm combines the strengths of a BitTorrent-like download scheme with the locality awareness of an overlay for location-based search and its built-in replication mechanism. Thereby, a robust and fast download as well as a reliable storage of location-related multimedia content is achieved. Through extensive evaluation, we show that 95% of all downloads in GeoSwarm are carried out successfully even under churn, while downloads benefit from a 100% increased throughput in comparison to traditional single-source downloads.

Proceedings ArticleDOI
19 Dec 2013
TL;DR: This paper combines P2P dissemination on the social overlay with occasional access to the cloud, and shows that the protocol performs close to centralized architectures and incurs only modest monetary costs.
Abstract: Decentralized social networks are an emerging solution to the privacy issues plaguing mainstream centralized architectures. Social overlays (overlay networks mirroring the social relationships among node owners) are particularly intriguing, as they limit communication within one's friend circle. Previous work investigated efficient protocols for P2P dissemination in social overlays, but also showed that the churn induced by users, combined with the topology constraints posed by these overlays, may yield unacceptable latency. In this paper, we combine P2P dissemination on the social overlay with occasional access to the cloud. When updates from a friend are not received for a long time, the cloud serves as an external channel to verify their presence. The outcome is disseminated in a P2P fashion, quenching cloud access from other nodes and speeding dissemination of existing updates. We show that our protocol performs close to centralized architectures and incurs only modest monetary costs.

Proceedings ArticleDOI
19 Dec 2013
TL;DR: The last few years have seen an explosion of video on demand traffic carried over the Internet infrastructure, with the success of CDN-managed services such as Netflix, Hulu and especially YouTube, which independent research estimates to represent 20-30% of ISPs' incoming traffic.
Abstract: The last few years have seen an explosion of video on demand traffic carried over the Internet infrastructure. While P2P applications have been proposed to carry VoD and TV content, they have so far encountered limited adoption except in Asian countries. Part of the explanation is that (i) the current asymmetric network infrastructure does not offer enough system capacity to let a fully P2P VoD/TV service be self-sustainable, (ii) the actual capacity at nominal peers is often smaller than the available one due to inefficiency in NAT punching [1], and (iii) the non-elastic nature of the service makes the system inherently less robust than elastic file-sharing to dynamic changes in the instantaneously available bandwidth. The other part of the story can be summarized by the success of CDN-managed services such as Netflix, Hulu and especially YouTube: according to [2], about 3 billion YouTube videos are viewed and hundreds of thousands of videos are uploaded every day, with independent research confirming YouTube to represent 20-30% of ISPs' incoming traffic [3].

Proceedings ArticleDOI
19 Dec 2013
TL;DR: It is argued that loss-based congestion control protocols can fill large buffers, leading to a higher end-to-end delay, unlike low-priority or delay-based congestion control protocols.
Abstract: In this paper, we address the trade-off between data plane efficiency and control plane timeliness for BitTorrent performance. We argue that loss-based congestion control protocols can fill large buffers, leading to a higher end-to-end delay, unlike low-priority or delay-based congestion control protocols. We perform experiments for both the uTorrent and mainline BitTorrent clients, and we study the impact of uTP (a novel transport protocol proposed by BitTorrent) and several TCP congestion control algorithms (Cubic, New Reno, LP, Vegas and Nice) on the download completion time. Briefly, when all peers in the swarm use the same congestion control algorithm, we observe that the specific algorithm has only a limited impact on swarm performance. Conversely, when a mix of TCP congestion control algorithms coexists, peers employing a delay-based low-priority algorithm exhibit shorter completion times.

Proceedings ArticleDOI
01 Sep 2013
TL;DR: The efficiency of the BitTorrent protocol makes it especially suitable for massive content distribution while reducing bandwidth costs in the Cloud.
Abstract: In classic storage services, the transfer protocol used is usually HTTP. This means that all download requests are handled by a central server which sends the requested files in a single stream. However, such a transfer is limited by the narrowest network conditions along the path, or by the server being overloaded by requests from many clients. In this context, a number of studies have tried to combine BitTorrent content distribution technologies with Cloud environments. In fact, the efficiency of the BitTorrent protocol makes it especially suitable for massive content distribution while reducing bandwidth costs in the Cloud.

Proceedings ArticleDOI
19 Dec 2013
TL;DR: The community home gateway is presented, a computer system attached to a wireless community network router that is able to host platform and application services; it fills the gap left by the lack of a home gateway system that is open to service contributions and therefore constitutes an important step towards building P2P clouds with low-end devices.
Abstract: We present the community home gateway, a computer system attached to a wireless community network router, able to host platform and application services. Unlike current home gateways and other low-end systems, the community home gateway offers resource virtualization and can thus be used as a cloud resource. Furthermore, it fills the gap left by the lack of a home gateway system that is open to service contributions, and therefore constitutes an important step towards building P2P clouds with low-end devices.

Proceedings ArticleDOI
19 Dec 2013
TL;DR: This work finds that an Earliest-First chunk selection policy in conjunction with the Earliest-Deadline peer selection policy allows high download rates to be achieved, and takes advantage of abandonment by converting peers to “partial seeds”, which increases capacity.
Abstract: Peer-to-Peer (P2P) systems have evolved from being used for file sharing to delivering streaming video on demand (VoD). The policies adopted in P2P VoD, however, have not taken user viewing behavior - that users abandon videos - into account. We show that abandonment can result in increased interruptions and wasted resources. As a result, we reconsider the set of policies to use in the presence of abandonment. Our goal is to balance the conflicting needs of delivering videos without interruptions while minimizing wastage. We find that an Earliest-First chunk selection policy in conjunction with the Earliest-Deadline peer selection policy allows us to achieve high download rates. We take advantage of abandonment by converting peers to “partial seeds”; this increases capacity. We minimize wastage by using a playback lookahead window. We use analysis and simulation experiments using real-world traces to show the effectiveness of our approach.
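The two policies can be sketched together: among the chunks still missing inside the playback lookahead window, request the one with the earliest playback deadline, and direct the request to the neighbour whose own deadline is earliest. The window size, deadlines and data layout below are illustrative assumptions, not the paper's simulation setup.

```python
def next_request(missing_chunks, playback_pos, chunk_duration, lookahead, peers):
    """Earliest-First chunk selection within a lookahead window, combined with
    Earliest-Deadline peer selection. Returns (chunk_index, peer_id) or None."""
    window = [c for c in missing_chunks
              if playback_pos <= c * chunk_duration <= playback_pos + lookahead]
    if not window or not peers:
        return None
    chunk = min(window)                             # earliest playback deadline first
    peer = min(peers, key=lambda p: p["deadline"])  # peer that needs data soonest
    return chunk, peer["id"]

peers = [{"id": "p1", "deadline": 14.0}, {"id": "p2", "deadline": 9.5}]
print(next_request(missing_chunks={7, 9, 12}, playback_pos=6.0,
                   chunk_duration=1.0, lookahead=5.0, peers=peers))
```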

Proceedings ArticleDOI
01 Sep 2013
TL;DR: The prototype of the overlay Geodemlia is presented, which allows for both the persistent storage of location-based information and reliable search even under high churn rates.
Abstract: Location-based services have become increasingly popular in recent years due to the vast deployment of position-aware devices such as smartphones and tablet PCs and the ubiquitous availability of fast Internet connectivity. Existing location-based services are realized as cloud services, which incur considerable costs. Furthermore, they are not location-aware, leading to unnecessarily long transmission paths between the users and the cloud infrastructure. The concept of Peer-to-Peer has proven to be a valid alternative for realizing the functionality of location-based services, which has resulted in a plethora of approaches for location-based search [1], [4], [5]. Existing concepts, however, suffer from two major drawbacks: (i) they are not robust against high peer churn and (ii) they do not allow for the persistent storage of location-based data. To this end, in this demo we present the prototype of the overlay Geodemlia [3], which allows for both the persistent storage of location-based information and reliable search even under high churn rates. Location-based information in Geodemlia is stored in a location-aware way, reducing the length of the transmission path for store and search operations.

Proceedings ArticleDOI
19 Dec 2013
TL;DR: Evaluation shows that the presented self-calibration succeeds in learning the configuration-quality interdependencies and that peer-to-peer systems can learn and adapt their behavior according to desired quality goals.
Abstract: Peer-to-peer systems scale to millions of nodes and provide routing and storage functions with best effort quality. In order to provide a guaranteed quality of the overlay functions, even under strong dynamics in the network with regard to peer capacities, online participation and usage patterns, we propose to calibrate the peer-to-peer overlay and to autonomously learn which qualities can be reached. For that, we simulate the peer-to-peer overlay systematically under a wide range of parameter configurations and use neural networks to learn the effects of the configurations on the quality metrics. Thus, when the overlay operator chooses a specific quality setting, the network can tune itself to the learned parameter configurations that lead to the desired quality. Evaluation shows that the presented self-calibration succeeds in learning the configuration-quality interdependencies and that peer-to-peer systems can learn and adapt their behavior according to desired quality goals.
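The calibration loop can be sketched as: simulate the overlay over many parameter configurations, fit a regressor from configuration to the observed quality metric, then invert the learned mapping by searching a grid for the configuration whose predicted quality is closest to the operator's target. The sketch below assumes scikit-learn is available and uses a synthetic stand-in for the overlay simulator; it illustrates the loop, not the paper's actual simulator or network architecture.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def simulate(config):
    """Synthetic stand-in for the overlay simulator: quality as an unknown
    function of two knobs (e.g., replication degree and maintenance interval)."""
    replication, interval = config
    return 1.0 - np.exp(-replication) * (1.0 + 0.1 * interval)

rng = np.random.default_rng(0)
configs = rng.uniform([1.0, 1.0], [8.0, 30.0], size=(200, 2))
quality = np.array([simulate(c) for c in configs])

# Learn the configuration -> quality mapping with a small neural network.
model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0)
model.fit(configs, quality)

# Invert it: pick the grid configuration whose predicted quality is closest to a target.
grid = np.array([[r, i] for r in np.linspace(1, 8, 30) for i in np.linspace(1, 30, 30)])
target = 0.95
best = grid[np.argmin(np.abs(model.predict(grid) - target))]
print(f"target quality {target}: replication ~ {best[0]:.1f}, interval ~ {best[1]:.1f}")
```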

Proceedings ArticleDOI
19 Dec 2013
TL;DR: This paper investigates how to incorporate an advertisement mechanism into a P2P system to serve as an incentive to donate resources, especially for services in which users often interact with the system through mobile devices, and identifies payment models whose payment is super-linear with the availability of donated machines.
Abstract: In order for P2P systems to be viable, users must be given incentives to donate resources. Such incentives can be in the form of tit-for-tat like mechanisms, in which a user is rewarded with better service for contributing resources to the system. Alternatively, such incentives can be economic, i.e., users get paid for their contribution. In particular, the latter can be achieved through a P2P advertisement mechanism. This paper investigates how to incorporate an advertisement mechanism into a P2P system to serve as an incentive to donate resources, especially for services in which users often interact with the system through mobile devices. First, the precise P2P advertisement dissemination model is presented. Second, the paper proposes and explores several advertisement dissemination schemes combined with a few payment models and compares them through simulations. The reported results are encouraging for this direction and in particular identify payment models whose payment is super-linear with the availability of donated machines. This means that they serve as good incentives for owners of donated machines to keep them connected to the P2P network for long durations.

Proceedings ArticleDOI
19 Dec 2013
TL;DR: In this article, a simple overlay topology called OBST(k) is proposed, which is composed of k (rooted and directed) Binary Search Trees (BSTs), where k is a parameter.
Abstract: The design of scalable and robust overlay topologies has been a main research subject since the very origins of peer-to-peer (p2p) computing. Today, the corresponding optimization tradeoffs are fairly well-understood, at least in the static case and from a worst-case perspective. This paper revisits the peer-to-peer topology design problem from a self-organization perspective. We initiate the study of topologies which are optimized to serve the communication demand, or even self-adjusting as demand changes. The appeal of this new paradigm lies in the opportunity to go beyond the lower bounds and limitations imposed by a static, communication-oblivious, topology. For example, the goal of having short routing paths (in terms of hop count) no longer conflicts with the requirement of having low peer degrees. We propose a simple overlay topology OBST(k) which is composed of k (rooted and directed) Binary Search Trees (BSTs), where k is a parameter. We first prove some fundamental bounds on what can and cannot be achieved by optimizing a topology towards a static communication pattern (a static OBST(k)). In particular, we show that the number of BSTs that constitute the overlay can have a large impact on the routing costs, and that a single additional BST may reduce the amortized communication costs from Ω(log n) to O(1), where n is the number of peers. Subsequently, we discuss a natural self-adjusting extension of OBST(k), in which frequently communicating partners are “splayed together”.
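Routing over a single BST can be sketched directly: to go from peer u to peer v, climb towards their lowest common ancestor and then descend, so the hop count is bounded by the tree depth; OBST(k) keeps k such trees and can route each message over whichever tree is cheapest for that pair. The sketch below routes over one balanced BST implicitly defined by sorted peer IDs; the IDs are illustrative and the single-tree restriction is a simplification.

```python
def bst_path(sorted_keys, target, lo=0, hi=None):
    """Root-to-target path in the balanced BST implicitly defined over sorted_keys."""
    if hi is None:
        hi = len(sorted_keys)
    path = []
    while lo < hi:
        mid = (lo + hi) // 2
        path.append(sorted_keys[mid])
        if target == sorted_keys[mid]:
            return path
        if target < sorted_keys[mid]:
            hi = mid
        else:
            lo = mid + 1
    raise KeyError(target)

def hops(sorted_keys, u, v):
    """Routing distance between u and v: up to their lowest common ancestor, then down."""
    pu, pv = bst_path(sorted_keys, u), bst_path(sorted_keys, v)
    common = 0
    while common < min(len(pu), len(pv)) and pu[common] == pv[common]:
        common += 1
    return (len(pu) - common) + (len(pv) - common)

peers = list(range(1, 16))   # 15 peers form a perfectly balanced BST
print(hops(peers, 1, 15))    # leaves on opposite sides: up and over the root (6 hops)
print(hops(peers, 10, 11))   # adjacent keys: a single hop
```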

Proceedings ArticleDOI
19 Dec 2013
TL;DR: This paper presents ongoing work that explores the properties of GROUP in a more formal way, using a formalization based on Markov chains; the protocol has experimentally proved to be efficient and effective with respect to its aim.
Abstract: Over recent years, we have experienced a huge diffusion of Internet-connected computing devices. As a consequence, this has led to research on efficient and scalable approaches for managing the burden caused by the highly increased volume of data to be exchanged and processed. Efficient communication protocols are fundamental building blocks for realizing such approaches [1], [2]. Thus, several peer-to-peer protocols have been proposed. Gossip protocols [3]-[7] are a family of peer-to-peer protocols that have proved to be well suited for supporting a scalable and decentralized strategy for peer and data aggregation and diffusion. However, one of the typical limitations of Gossip protocols is the selfish behavior adopted by peers in defining their neighborhood and, as a consequence, the topology of the overlay they build. GROUP [8] is a Gossip protocol we conceived to overcome this limitation. It builds explicitly defined communities of peers that are identified by their leaders, each one elected in a distributed fashion. The protocol has experimentally proved to be efficient and effective with respect to its aim. However, no analytical study has been carried out so far. This paper presents ongoing work exploring the properties of GROUP in a more formal way. We conduct this preliminary investigation using a formalization based on Markov chains.

Proceedings ArticleDOI
19 Dec 2013
TL;DR: It is demonstrated that incentive-compatible video sharing between peers can be easily achieved with simple video coding and distribution designs, and that P2P adaptive streaming not only significantly reduces the load on the servers but also improves the stability of user-perceived video quality in the face of dynamic bandwidth changes.
Abstract: Adaptive streaming, such as Dynamic Adaptive Streaming over HTTP (DASH), has been widely deployed to provide uninterrupted video streaming service to users with dynamic network conditions. In this paper, we analytically study the potential of using P2P in conjunction with adaptive streaming. We first study the capacity of P2P adaptive streaming by developing utility maximization models that take into account peer heterogeneity, taxation-based incentives, and multi-version videos at discrete rates. We further develop stochastic models to study the performance of P2P adaptive streaming in the face of bandwidth variations and peer churn. Through analysis and simulations, we demonstrate that incentive-compatible video sharing between peers can be easily achieved with simple video coding and distribution designs. P2P adaptive streaming not only significantly reduces the load on the servers, but also improves the stability of user-perceived video quality in the face of dynamic bandwidth changes.

Proceedings ArticleDOI
19 Dec 2013
TL;DR: The number of users in the network, the number of anonymous applications, and the type of those applications are determined and the possibility of inferring which group of users is responsible for the activity of an anonymous application is explored.
Abstract: Anonymous communications have been growing exponentially, as more and more users shift to a privacy-preserving Internet and anonymise their peer-to-peer communications. Anonymous systems allow users to access different services while preserving their anonymity. We aim to characterise these anonymous systems, with a special focus on the I2P network. Current statistics services for the I2P network provide neither the types of applications deployed in the network nor the geographical localisation of users. Our objective is to determine the number of users in the network, the number of anonymous applications, and the types of those applications. We also explore the possibility of inferring which group of users is responsible for the activity of an anonymous application. Thus, we improve the current I2P statistics and gain better insights into the network.

Proceedings ArticleDOI
19 Dec 2013
TL;DR: A simple modification to BitTorrent is introduced which enables each peer to index a random subset of tracking data, i.e. the torrent ID and list of participating nodes, and the distribution of this tracking data is shown to be capable of supporting an accurate unstructured search for torrents.
Abstract: Current BitTorrent discovery methods rely on either centralised systems or structured peer-to-peer (P2P) networks. These methods present security weaknesses that can be exploited in order to censor or remove information from the network. To alleviate this threat, we propose incorporating an unstructured peer-to-peer information discovery mechanism that can be used in the event that the centralised or structured P2P mechanisms are compromised. Unstructured P2P information discovery has fewer security weaknesses. However, in this case, the performance of the search is nondeterministic since it is not practical to perform an exhaustive search. The search performance then strongly depends on the distribution of documents in the network. To determine the practicality of unstructured P2P search over BitTorrent, we first conducted a 64 day study of BitTorrent activities, looking at the distribution of 1.6 million torrents on 5.4 million peers. We found that the distribution of torrents follows a power law, which is not amenable to unstructured search. To address this, we introduce a simple modification to BitTorrent which enables each peer to index a random subset of tracking data, i.e. the torrent ID and list of participating nodes. A successful search is then one that finds a peer with tracking data, rather than a peer directly participating in the torrent. The distribution of this tracking data is shown to be capable of supporting an accurate unstructured search for torrents. We assess the overheads introduced by our extension and conclude that we would require small amounts of bandwidth, easily provided by current home broadband capabilities. We also simulate our extension to verify our model and to explore our extension's capabilities in different situations. We demonstrate that our extension can satisfy PAC search queries for torrents, under network churn and complex node behaviours.
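The effect of the modification can be sketched with a simple probability argument: if each peer independently indexes a given torrent's tracking record with probability r, then a search that probes m random peers fails only if none of them holds the record, so the success probability is 1 - (1 - r)^m, largely independent of how popular the torrent itself is. The replication fractions and probe counts below are assumptions for illustration, not the paper's figures.

```python
def search_success_probability(replication_fraction: float, peers_probed: int) -> float:
    """Probability that at least one probed peer indexes the torrent's tracking
    data, assuming each peer indexes it independently with the given probability."""
    return 1.0 - (1.0 - replication_fraction) ** peers_probed

for r in (0.001, 0.005):
    for m in (500, 1000, 2000):
        p = search_success_probability(r, m)
        print(f"replication = {r:.3f}, probes = {m:5d} -> success ~ {p:.3f}")
```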

Proceedings ArticleDOI
19 Dec 2013
TL;DR: A dynamic localized peer-to-peer IM that supports and exploits any number of dimensions is proposed; criteria for an efficient sector partitioning are determined, several approaches are discussed, and a suitable algorithm is presented.
Abstract: In Networked Virtual Environments (NVE), Interest Management (IM) is a key part of the system that determines which of a participant's actions have to be communicated to which subset of the other participants. Since traditional client/server approaches have several drawbacks, a variety of alternative peer-to-peer (P2P) approaches have been presented by the community [1]. In many of these approaches, interest management is also responsible for maintaining connectivity of the P2P network. Aiming at latency-sensitive applications, systems like VON [2] and pSense [3] use mutual notification mechanisms, building the topology based on the participants' virtual world proximities. A common limitation of these, however, is their fixed dimensionality: most are only designed for two spatial dimensions, a few are capable of handling three. In this work, we discuss options for dealing with an arbitrary number of dimensions. We propose a dynamic localized peer-to-peer IM that supports and exploits any number of dimensions. In our IM, peers communicate directly within their vision range. The space outside of the vision range is divided into sectors, each guarded by a sensor node that notifies them of approaching peers. We determine criteria for an efficient sector partitioning, discuss several approaches and present a suitable algorithm.
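The sector idea generalizes to an arbitrary number of dimensions by assigning each peer outside the vision range to a sector according to its direction vector, for instance by picking the most aligned of a fixed set of unit direction vectors; the nearest peer in each sector is then a natural candidate for the sensor role. The sketch below is an illustrative partitioning under these assumptions, not the paper's algorithm, and the positions are random.

```python
import numpy as np

def assign_sensors(me, peers, directions, vision_range):
    """Assign each peer outside the vision range to the sector whose unit direction
    vector is most aligned with the peer's direction from `me`; return, per sector,
    the closest such peer as a sensor-node candidate."""
    sectors = {i: [] for i in range(len(directions))}
    for pid, pos in peers.items():
        offset = pos - me
        dist = float(np.linalg.norm(offset))
        if dist <= vision_range:
            continue                                   # handled by direct mutual notification
        sector = int(np.argmax(directions @ (offset / dist)))
        sectors[sector].append((dist, pid))
    return {s: min(members)[1] for s, members in sectors.items() if members}

rng = np.random.default_rng(1)
dims = 3                                               # works unchanged for any dimensionality
directions = rng.normal(size=(6, dims))
directions /= np.linalg.norm(directions, axis=1, keepdims=True)
me = np.zeros(dims)
peers = {f"p{i}": rng.uniform(-10, 10, size=dims) for i in range(20)}
print(assign_sensors(me, peers, directions, vision_range=3.0))
```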