
Showing papers presented at "International Conference on Peer-to-Peer Computing in 2012"


Proceedings ArticleDOI
22 Oct 2012
TL;DR: CLIVE is a cloud-assisted P2P live streaming system that estimates the available capacity in the system through a gossip-based aggregation protocol and provisions the required resources from the cloud to guarantee a given level of QoS at low cost.
Abstract: Peer-to-peer (P2P) video streaming is an emerging technology that reduces the barrier to streaming live events over the Internet. Unfortunately, satisfying soft real-time constraints on the delay between the generation of the stream and its actual delivery to users is still a challenging problem. Bottlenecks in the available upload bandwidth, both at the media source and inside the overlay network, may limit the quality of service (QoS) experienced by users. A potential solution for this problem is assisting the P2P streaming network with a cloud computing infrastructure to guarantee a minimum level of QoS. In such an approach, rented cloud resources (helpers) are added on demand to the overlay to increase the total available bandwidth and the probability of receiving the video on time. Hence, the problem becomes minimizing the economic cost, provided that a set of QoS constraints is satisfied. The main contribution of this paper is CLIVE, a cloud-assisted P2P live streaming system that demonstrates the feasibility of these ideas. CLIVE estimates the available capacity in the system through a gossip-based aggregation protocol and provisions the required resources from the cloud to guarantee a given level of QoS at low cost. We perform extensive simulations and evaluate CLIVE using large-scale experiments under dynamic, realistic settings.
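The abstract names a gossip-based aggregation protocol without detailing it. A push-sum average, a standard gossip aggregation primitive, gives the flavor of how each peer can estimate a system-wide quantity (such as mean upload capacity) from purely local exchanges; this is a sketch under that assumption, not CLIVE's actual code:

```python
import random

def push_sum(values, rounds=30, seed=1):
    """Push-sum gossip sketch: every node holds a (sum, weight) pair,
    halves it each round, and pushes one half to a random peer.
    All local ratios sum/weight converge to the global average --
    here, the mean upload capacity the system needs to estimate."""
    random.seed(seed)                    # deterministic for the demo
    n = len(values)
    s = list(values)                     # running sums
    w = [1.0] * n                        # running weights
    for _ in range(rounds):
        for i in range(n):
            j = random.randrange(n)      # random gossip partner
            s[i] /= 2; w[i] /= 2         # keep one half locally
            s[j] += s[i]; w[j] += w[i]   # push the other half
    return [s[i] / w[i] for i in range(n)]

# Four peers with upload capacities 10..40; every local estimate
# approaches the true mean of 25.
estimates = push_sum([10.0, 20.0, 30.0, 40.0])
```

A real deployment would gossip over the overlay's neighbor links rather than a global random peer, but the convergence argument is the same.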

81 citations


Proceedings ArticleDOI
22 Oct 2012
TL;DR: Wuala as discussed by the authors is a popular online backup and file sharing system that has been successfully operated for several years, but very little is known about the design and implementation of Wuala.
Abstract: Wuala is a popular online backup and file sharing system that has been successfully operated for several years. Very little is known about the design and implementation of Wuala. We capture the network traffic exchanged between the machines participating in Wuala to reverse engineer the design and operation of Wuala. When Wuala was launched, it used a clever combination of centralized storage in data centers for long-term backup with peer-assisted file caching of frequently downloaded files. Large files are broken up into transmission blocks and additional transmission blocks are generated using a classical redundancy coding scheme. Multiple transmission blocks are sent in parallel to different machines and reliability is assured via a simple Automatic Repeat Request protocol on top of UDP. Recently, however, Wuala has adopted a pure client/server based architecture. Our findings and the underlying reasons are substantiated by an interview with a co-founder of Wuala. The main reasons are lower resource usage on the client side, which is important in the case of mobile terminals, a much simpler software architecture, and a drastic reduction in the cost of data transfers originating at the data center.
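The abstract only names "a classical redundancy coding scheme". A single-parity XOR code illustrates the principle of generating additional transmission blocks so that a lost block can be rebuilt; this is a minimal sketch, and Wuala's actual coding is a richer scheme:

```python
def encode_blocks(data: bytes, k: int):
    """Split data into k equal-size transmission blocks plus one XOR
    parity block; any single missing block is then recoverable."""
    size = -(-len(data) // k)  # ceil(len/k)
    blocks = [data[i * size:(i + 1) * size].ljust(size, b'\0')
              for i in range(k)]
    parity = bytearray(size)
    for b in blocks:
        for i, byte in enumerate(b):
            parity[i] ^= byte
    return blocks, bytes(parity)

def recover_block(blocks, parity, lost):
    """Rebuild the block at index `lost` by XOR-ing the parity block
    with all surviving blocks."""
    acc = bytearray(parity)
    for i, b in enumerate(blocks):
        if i != lost:
            for j, byte in enumerate(b):
                acc[j] ^= byte
    return bytes(acc)

blocks, parity = encode_blocks(b"peer-assisted caching!", 3)
rebuilt = recover_block(blocks, parity, lost=1)  # equals blocks[1]
```

Sending such blocks in parallel to different machines is exactly what makes the simple per-block ARQ mentioned in the abstract sufficient for reliability.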

58 citations


Proceedings ArticleDOI
22 Oct 2012
TL;DR: Nowadays, the growing necessity for secure and private off-site storage motivates the appearance of novel storage infrastructures where users interact just with a set of trustworthy participants, such as in Friend-to-Friend (F2F) networks.
Abstract: Nowadays, the growing necessity for secure and private off-site storage motivates the appearance of novel storage infrastructures. In this sense, it is increasingly common to find storage systems where users interact just with a set of trustworthy participants, such as in Friend-to-Friend (F2F) networks.

28 citations


Proceedings ArticleDOI
22 Oct 2012
TL;DR: The evaluation results reveal that, in contrast to the tree-based approach, Geodemlia provides on average a 46% better success ratio as well as an 18% better recall at a moderately higher traffic overhead of 13 bytes/s and an increased average response time of 0.2 s.
Abstract: Existing peer-to-peer overlay approaches for location-based search have proven to be a valid alternative to client-server-based schemes. One of the key issues of the peer-to-peer approach is the high churn rate caused by joining and leaving peers. To address this problem, this paper proposes a new location-aware peer-to-peer overlay termed Geodemlia to achieve a robust and efficient location-based search. To evaluate Geodemlia, a real-world workload model for peer-to-peer location-based services is derived from traces of Twitter. Using the workload model, a system parameter analysis of Geodemlia is conducted with the goal of finding a suitable parameter configuration. In addition, the scalability and robustness of Geodemlia is compared to a state-of-the-art tree-based approach by investigating the performance and costs of both overlays under an increasing number of peers, an increasing radius of area searches, an increasing level of churn, as well as for different peer placement and search request schemes. The evaluation results reveal that, in contrast to the tree-based approach, Geodemlia provides on average a 46% better success ratio as well as an 18% better recall at a moderately higher traffic overhead of 13 bytes/s and an increased average response time of 0.2 s.

27 citations


Proceedings ArticleDOI
Lu Han1, Magdalena Punceva1, Badri Nath1, S. Muthukrishnan1, Liviu Iftode1 
22 Oct 2012
TL;DR: In this article, the authors proposed four fully distributed social cache selection algorithms, evaluated their performance on five well-known graphs, and showed that these algorithms perform almost as well as the best known centralized approximation algorithm.
Abstract: Distributed online social networks (DOSN) have been proposed as an alternative to centralized Online Social Networks (OSN). In contrast to centralized OSNs, DOSNs have no central repository of all user data, nor do they impose control over how user data is accessed. Therefore, users keep control of their private data and are not at the mercy of the social network providers. However, one of the main problems in DOSNs is how to efficiently disseminate social updates among peers. In our previous work, we proposed social caches for social update dissemination in DOSNs. However, the selection of social caches requires knowledge of the entire social graph. In this paper, we propose four fully distributed social cache selection algorithms and evaluate their performance on five well-known graphs. Using simulations, we show that these algorithms perform almost as well as the best known centralized approximation algorithm. These distributed caching techniques can be used as a basis for various applications, such as those that represent fusions of social and vehicular networks.

27 citations


Proceedings ArticleDOI
22 Oct 2012
TL;DR: The best-performing neighborhood filtering strategy devised by the authors delivers almost all chunks to all peers with a play-out delay as low as 6 s, even with system loads close to 1.0.
Abstract: The performance of P2P-TV systems is driven by the overlay topology that peers form. Several proposals have been made in the past to optimize it, yet few experimental studies have corroborated the results. The aim of this work is to provide a comprehensive experimental comparison of different strategies for the construction and maintenance of the overlay topology in P2P-TV systems. To this end, we have implemented different fully-distributed strategies in a P2P-TV application, called PeerStreamer, that we use to run extensive experimental campaigns in a completely controlled set-up involving thousands of peers and spanning very different networking scenarios. Results show that the topological properties of the overlay have a deep impact on both user quality of experience and network load. Strategies based solely on random peer selection are greatly outperformed by smart, yet simple strategies that can be implemented with negligible overhead. Even in different and complex scenarios, the best-performing neighborhood filtering strategy we devised delivers almost all chunks to all peers with a play-out delay as low as 6 s, even with system loads close to 1.0. Results are confirmed by experiments on PlanetLab. PeerStreamer is open source, to make the results reproducible and allow further research by the community.

24 citations


Proceedings ArticleDOI
22 Oct 2012
TL;DR: In this article, a distributed algorithm for the incremental construction of a Delaunay overlay in a P2P network is presented, which employs a distributed version of the classical Edge Flipping procedure.
Abstract: P2P overlays based on Delaunay triangulations have been recently exploited to implement systems providing efficient routing and data broadcast solutions. Several applications such as Distributed Virtual Environments and geographical nearest neighbours selection benefit from this approach. This paper presents a novel distributed algorithm for the incremental construction of a Delaunay overlay in a P2P network. The algorithm employs a distributed version of the classical Edge Flipping procedure. Each peer builds the Delaunay links incrementally by exploiting a random peer sample returned by the underlying gossip level. The algorithm is then optimized by considering the Euclidean distance between peers to speed up the overlay convergence. We present theoretical results that prove the correctness of our approach along with a set of experiments that assess the convergence rate of the distributed algorithm.
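The paper builds on the classical Edge Flipping procedure, whose core is the incircle determinant test: a shared edge is flipped whenever the fourth point falls inside the circumcircle of the neighboring triangle. A textbook sketch of that predicate (not the paper's distributed code):

```python
def in_circle(a, b, c, d):
    """True iff point d lies strictly inside the circumcircle of the
    counter-clockwise triangle (a, b, c).  In edge flipping, the edge
    shared by triangles abc and abd is flipped exactly when this test
    reports a violation of the Delaunay condition."""
    ax, ay = a[0] - d[0], a[1] - d[1]
    bx, by = b[0] - d[0], b[1] - d[1]
    cx, cy = c[0] - d[0], c[1] - d[1]
    det = ((ax * ax + ay * ay) * (bx * cy - cx * by)
           - (bx * bx + by * by) * (ax * cy - cx * ay)
           + (cx * cx + cy * cy) * (ax * by - bx * ay))
    return det > 0

# (1,0), (0,1), (-1,0) lie on the unit circle: the origin is inside,
# (0,-2) is outside.
inside = in_circle((1, 0), (0, 1), (-1, 0), (0, 0))    # True
outside = in_circle((1, 0), (0, 1), (-1, 0), (0, -2))  # False
```

In the distributed setting each peer evaluates this test on the coordinates of its overlay neighbors and the candidates supplied by the gossip layer.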

22 citations


Proceedings ArticleDOI
22 Oct 2012
TL;DR: A piece-picking protocol that uses the transport features of Libswift in an essential way and its performance on both high-end and power-constrained low-end devices is investigated, comparing it to the state-of-the-art in P2P protocols.
Abstract: Video distribution is nowadays the dominant source of Internet traffic, and recent studies show that it is expected to reach 90% of the global consumer traffic by the end of 2015. Peer-to-peer assisted solutions have been adopted by many content providers with the aim of improving the scalability and reliability of their distribution network. While many solutions have been proposed, virtually all of them are at the overlay level, and so rely on the standard functionality of the transport layer. The Peer-to-Peer Streaming Protocol workgroup of the IETF has adopted the Libswift transport-layer protocol that is targeted at P2P traffic. In this paper we describe the design features and a first implementation of the Libswift protocol, and a piece-picking protocol that uses the transport features of Libswift in an essential way. We investigate its performance on both high-end and power-constrained low-end devices, comparing it to the state-of-the-art in P2P protocols.

22 citations


Proceedings ArticleDOI
22 Oct 2012
TL;DR: The peer-to-peer network of clones is used to compute the best strategy to patch the smartphones in such a way that the number of devices to patch is low (to reduce the load on the cellular infrastructure) and the worm is stopped quickly.
Abstract: The battery limits of today's smartphones require a solution. In the scientific community it is believed that a promising way of prolonging battery life is to offload mobile computation to the cloud. State-of-the-art offloading architectures consist of virtual copies of real smartphones (the clones) that run on the cloud, are synchronized with the corresponding devices, and help alleviate the computational burden on the real smartphones. Recently, it has been proposed to organize the clones in a peer-to-peer network in order to facilitate content sharing among the mobile smartphones. We believe that a P2P network of clones, aside from content sharing, can be a useful tool to solve critical security problems on the mobile network of smartphones. In particular, we consider the problem of computing an efficient patching strategy to stop worm spreading between smartphones. The peer-to-peer network of clones is used to compute the best strategy to patch the smartphones in such a way that the number of devices to patch is low (to reduce the load on the cellular infrastructure) and the worm is stopped quickly. We consider two well-defined worms, one spreading between the devices and one attacking the cloud before moving to the real smartphones; we describe CloudShield, a suite of protocols running on the peer-to-peer network of clones; and we show by experiments that CloudShield outperforms state-of-the-art worm-containment mechanisms for mobile wireless networks.

21 citations


Proceedings ArticleDOI
22 Oct 2012
TL;DR: In this article, the authors show through simulation on real datasets that churn induces relevant delays in information dissemination, which may ultimately hamper the practical application of social overlays and combine analytical and simulation techniques to enable the estimation of dissemination delays at a practical cost.
Abstract: Peer-to-peer systems based on an overlay network that mirrors the social relationships among the nodes' owners are increasingly attracting interest. Yet, the churn induced by the availability of users raises the question—still unanswered—of whether these social overlays represent a viable solution. Indeed, although constraining communication to take place only among “friends” brings many benefits, it also introduces significant limitations when healing the overlay in the presence of churn. This paper puts forth two contributions. First, we show through simulation on real datasets that churn induces relevant delays in information dissemination, which may ultimately hamper the practical application of social overlays. Yet, identifying opportunities for improvement and evaluating design alternatives through simulation is impractical, due to the size of the target networks, the large parameter space, and the many sources of randomness involved. Therefore, in our second contribution we combine analytical and simulation techniques to enable the estimation of dissemination delays at a practical cost.

17 citations


Proceedings ArticleDOI
22 Oct 2012
TL;DR: This work considers a peer-assisted content delivery system that aims to provide guaranteed average download rate to its customers, and shows that bandwidth demand peaks for contents with moderate popularity, and that careful system design is needed if locality is an important criterion when choosing cloud-based service provisioning.
Abstract: With the proliferation of cloud services, cloud-based systems can become a cost-effective means of on-demand content delivery. In order to make best use of the available cloud bandwidth and storage resources, content distributors need to have a good understanding of the tradeoffs between various system design choices. In this work we consider a peer-assisted content delivery system that aims to provide guaranteed average download rate to its customers. We show that bandwidth demand peaks for contents with moderate popularity, and identify these contents as candidates for cloud-based service. We then consider dynamic content bundling (inflation) and cross-swarm seeding, which were recently proposed to improve download performance, and evaluate their impact on the optimal choice of cloud service use. We find that much of the benefits from peer seeding can be achieved with careful torrent inflation, and that hybrid policies that combine bundling and peer seeding often reduce the delivery costs by 20% relative to only using seeding. Furthermore, all these peer-assisted policies reduce the number of files that would need to be pushed to the cloud. Finally, we show that careful system design is needed if locality is an important criterion when choosing cloud-based service provisioning.

Proceedings ArticleDOI
22 Oct 2012
TL;DR: This demo paper presents a mobile P2P video streaming and benchmarking platform that makes it possible to assess and compare the energy consumption of different approaches precisely through live assessments at runtime.
Abstract: The proliferation of wireless broadband technologies and mobile devices has led to an increase in mobile traffic, especially due to the rapid growth of real-time entertainment such as video streaming on smartphones. Mobile peer-to-peer-based content distribution schemes can help to relieve infrastructure-based mobile networks, but require participating nodes to provide resources, which can drain their batteries. Thus, the goal is to exploit mobile peers' resources while minimizing and balancing the energy consumption over all participating devices. Simulation models considering energy consumption lack precision, because they abstract away important parts of the hardware. On the other hand, prototypical energy measurements are more precise, but require a time-consuming implementation and assessment. To this end, this demo paper presents a mobile P2P video streaming and benchmarking platform that makes it possible to assess and compare the energy consumption of different approaches precisely through live assessments at runtime. The demonstrated platform includes a simple yet high-performance tree-based mobile P2P streaming overlay that can be used to easily implement and assess further streaming overlay approaches.

Proceedings ArticleDOI
22 Oct 2012
TL;DR: The incentive schemes used by cyberlockers are explored, examining to what extent they have helped to foster the current environment.
Abstract: This short paper presents a preliminary economic analysis of incentives in cyberlockers based on measurement results from three different types of sites related to cyberlockers. We found: • The fastest growing cyberlockers in terms of links posted on FilesTube and the Teh Paradox forum are those which offer incentive plans. • The end of incentive plans at Rapidshare was followed by a major drop-off in uploader activity on that site. The number of Rapidshare links on FilesTube fell by 67% in the months after the plans were terminated. Rapidshare links fell by 95% on the Teh Paradox forum. • It is possible for uploaders on FileSonic to earn sizeable amounts from their activities. The average daily earnings for those who posted screenshots on a web forum were $33.69, with one uploader reporting earnings of $226.27 per day. • FileSonic places a higher value on uploaders who can entice downloaders to buy premium memberships than on uploaders who post popular files. The average earnings for uploaders who chose the pay-per-sale incentive scheme were $46.10, versus $21.12 for those who chose the pay-per-download scheme.

Proceedings ArticleDOI
22 Oct 2012
TL;DR: The authors' models capture essential aspects of peer-assisted UGC systems, including system size, peer bandwidth heterogeneity, limited peer storage, and video characteristics; they also develop analytical models to understand the rate at which users would install P2P client applications to make peer-assisted UGC a success.
Abstract: User Generated Content (UGC) video applications, such as YouTube, are enormously popular. UGC systems can potentially reduce their distribution costs by allowing peers to store and redistribute the videos that they have seen in the past. We study peer-assisted UGC from three perspectives. First, we undertake a measurement study of the peer-assisted distribution system of Tudou (a popular UGC network in China), revealing several fundamental characteristics that models need to take into account. Second, we develop analytical models for peer-assisted distribution of UGC. Our models capture essential aspects of peer-assisted UGC systems, including system size, peer bandwidth heterogeneity, limited peer storage, and video characteristics. We apply these models to numerically study YouTube-like UGC services. And third, we develop analytical models to understand the rate at which users would install P2P client applications to make peer-assisted UGC a success. Our results provide a comprehensive study of peer-assisted UGC distribution, exposing its fundamental characteristics and limitations.

Proceedings ArticleDOI
22 Oct 2012
TL;DR: Peer2View's goals are to achieve substantial savings towards the source of the stream while providing the same quality of user experience as a CDN.
Abstract: Peer2View is a commercial peer-to-peer live video streaming (P2PLS) system. The novelty of Peer2View is threefold: i) It is the first P2PLS platform to support HTTP as transport protocol for live content, ii) The system supports both single and multi-bitrate streaming modes of operation, and iii) It makes use of an application-layer dynamic congestion control to manage priorities of transfers. Peer2View's goals are to achieve substantial savings towards the source of the stream while providing the same quality of user experience as a CDN.

Proceedings ArticleDOI
22 Oct 2012
TL;DR: In this article, a peer-to-peer live video system that enables MMOG players to stream screen-captured video of their game is presented, where players can use the system to show their skills, share experience with friends, or coordinate missions in strategy games.
Abstract: One of the most attractive features of Massively Multiplayer Online Games (MMOGs) is the possibility for users to interact with a large number of other users in a variety of collaborative and competitive situations. Gamers within an MMOG typically become members of active communities with mutual interests, shared adventures, and common objectives. This demonstration presents a peer-to-peer live video system that enables MMOG players to stream screen-captured video of their game. Players can use the system to show their skills, share experience with friends, or coordinate missions in strategy games.

Proceedings ArticleDOI
22 Oct 2012
TL;DR: In this article, a hybrid F2F personal storage system that combines resources of trusted friends with cloud storage for improving the service quality achievable by pure Friend-to-Friend (F2F) systems is presented.
Abstract: Personal storage is a mainstream service used by millions of users. Among the existing alternatives, Friend-to-Friend (F2F) systems are aimed to leverage a secure and private off-site storage service. However, the specific characteristics of these systems (reduced node degree, correlated availabilities) represent a hard obstacle to their performance. We present FriendBox: a hybrid F2F personal storage system that combines resources of trusted friends with Cloud storage for improving the service quality achievable by pure F2F systems.

Proceedings ArticleDOI
22 Oct 2012
TL;DR: Radiommender as discussed by the authors is a peer-to-peer on-line radio system that uses an implicit voting system that assigns songs to search terms and an affinity network that correlates user interest.
Abstract: Radiommender is a fully-distributed, peer-to-peer on-line radio. Users can share their music collection and explore the music collections of other users. The difference from current file sharing systems is that songs do not need to be searched for individually; a distributed recommender system is used to estimate user preference. Recommendation is done with a combination of an implicit voting system that assigns songs to search terms and an affinity network that correlates user interests. The demonstration shows the software functionality and includes the visualization of affinity graphs built by the recommender system.

Proceedings ArticleDOI
22 Oct 2012
TL;DR: Wang et al. as discussed by the authors proposed a SOcial Network integrated P2P file sharing system with enhanced Efficiency and Trustworthiness (SoNet) to fully and cooperatively leverage the common-interest, proximity-close and trust properties of OSN friends.
Abstract: Efficient and trustworthy file querying is important to the overall performance of peer-to-peer (P2P) file sharing systems. Emerging methods are beginning to address this challenge by exploiting online social networks (OSNs). However, current OSN-based methods simply cluster common-interest nodes for high efficiency or limit the interaction between social friends for high trustworthiness, which provides limited enhancement or contradicts the open and free service goal of P2P systems. Little research has been undertaken to fully and cooperatively leverage OSNs with integrated consideration of proximity and interest. In this work, we analyze a BitTorrent file sharing trace, which proves the necessity of proximity- and interest-aware clustering. Based on the trace study and OSN properties, we propose a SOcial Network integrated P2P file sharing system with enhanced Efficiency and Trustworthiness (SoNet) to fully and cooperatively leverage the common-interest, proximity-close and trust properties of OSN friends. SoNet uses a hierarchical distributed hash table (DHT) to cluster common-interest nodes, then further clusters proximity-close nodes into subclusters, and connects the nodes in a subcluster with social links. Thus, when queries travel along trustable social links, they also gain a higher probability of being successfully resolved by proximity-close nodes, simultaneously enhancing efficiency and trustworthiness. The results of trace-driven experiments on the real-world PlanetLab testbed demonstrate the higher efficiency and trustworthiness of SoNet compared with other systems.

Proceedings ArticleDOI
22 Oct 2012
TL;DR: It is found that allowing inter-swarm trades on short trading cycles can improve the throughput significantly; on the other hand, trading on long cycles does not pay off as the communication and management overhead becomes exceedingly large while the additional performance gains are marginal.
Abstract: Tit-for-tat trading lies at the heart of many incentive mechanisms for distributed systems where participants are anonymous. However, since the standard tit-for-tat approach is restricted to bilateral exchanges, data is transferred only between peers with direct and mutual interests. Generalizing tit-for-tat to multi-lateral trades where contributions can occur along cycles of interest may improve the performance of a system in terms of faster downloads without compromising the incentive-compatibility inherent to tit-for-tat trading. In this paper, we study the potential benefits and limitations of such a generalized trading in swarm-based peer-to-peer systems. Extensive simulations are performed to evaluate different techniques and to identify the crucial parameters influencing the obtainable throughput improvements and the corresponding tradeoffs. Moreover, we discuss extensions for overhead reduction and provide an optimized distributed implementation of our techniques. In summary, we find that allowing inter-swarm trades on short trading cycles can improve the throughput significantly; on the other hand, trading on long cycles does not pay off as the communication and management overhead becomes exceedingly large while the additional performance gains are marginal.
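The multi-lateral trades described above generalize tit-for-tat along cycles of interest. A small centralized sketch of enumerating such short cycles in a directed "who-wants-from-whom" graph conveys the idea; the paper's optimized distributed implementation differs:

```python
def trade_cycles(wants, max_len=3):
    """Enumerate directed cycles of length 2..max_len in an interest
    graph: wants[p] is the set of peers holding data that p wants.
    Each cycle is a multi-lateral trade; length-2 cycles are ordinary
    bilateral tit-for-tat exchanges."""
    cycles = set()

    def dfs(start, node, path):
        for nxt in wants.get(node, ()):
            if nxt == start and len(path) >= 2:
                i = path.index(min(path))        # canonical rotation
                cycles.add(tuple(path[i:] + path[:i]))
            elif nxt not in path and len(path) < max_len:
                dfs(start, nxt, path + [nxt])

    for p in wants:
        dfs(p, p, [p])
    return cycles

# a->b->c->a is a 3-way trade; x<->y is plain bilateral tit-for-tat.
found = trade_cycles({'a': {'b'}, 'b': {'c'}, 'c': {'a'},
                      'x': {'y'}, 'y': {'x'}})
```

The exponential growth of candidate paths with `max_len` is one intuition for the paper's finding that long trading cycles do not pay off: the discovery and management overhead grows much faster than the extra throughput.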

Proceedings ArticleDOI
22 Oct 2012
TL;DR: It is argued that future traffic classification must not rely on restricted local syntax information but must instead exploit global communication patterns and protocol semantics in order to keep pace with rapid application and protocol changes.
Abstract: With the beginning of the 21st century, emerging peer-to-peer networks ushered in a new era of large-scale media exchange. Faced with ever-increasing volumes of traffic, legal threats by copyright holders, and QoS demands of customers, network service providers are urged to apply traffic classification and shaping techniques. These systems are usually highly integrated to satisfy the harsh restrictions present in network infrastructure. They require constant maintenance and updates. Additionally, they raise legal issues and violate both the net neutrality and end-to-end principles.

Proceedings ArticleDOI
Bingshuang Liu1, Tao Wei1, Jianyu Zhang1, Jun Li2, Wei Zou1, Mo Zhou1 
22 Oct 2012
TL;DR: In this paper, a measurement system called Anthill is built to analyze Kad's performance quantitatively, and find that Kad's failures can be classified into four types: packet loss, selective Denial of Service (sDoS) nodes, search sequence miss, and publish/search space miss.
Abstract: Kad is one of the most popular peer-to-peer (P2P) networks deployed on today's Internet. Its reliability is important not only to the usability of the file-sharing service, but also to its capability to support other Internet services. However, Kad can only attain around a 91% lookup success ratio today. We build a measurement system called Anthill to analyze Kad's performance quantitatively, and find that Kad's failures can be classified into four types: packet loss, selective Denial of Service (sDoS) nodes, search sequence miss, and publish/search space miss. The first two are due to environment changes, the third is caused by the detachment of routing and content operations in Kad, and the last one shows the limitations of the Kademlia DHT algorithm under Kad's current configuration. Based on this analysis, we propose corresponding approaches for Kad, which achieve a success ratio of 99.8% with only moderate communication overhead.
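Kad's lookups follow the Kademlia DHT, whose routing is driven by the XOR distance metric. A minimal sketch of the metric and the "query the k closest known nodes" step of an iterative lookup (illustrative only, not Kad's wire protocol):

```python
def xor_distance(node_id: int, key: int) -> int:
    """Kademlia's metric: the XOR of two IDs, compared as an integer.
    Kad uses 128-bit IDs; small ints keep the example readable."""
    return node_id ^ key

def closest_nodes(key: int, known: list[int], k: int = 2) -> list[int]:
    """One iterative-lookup step: from the IDs learned so far, select
    the k nodes whose XOR distance to the target key is smallest."""
    return sorted(known, key=lambda n: xor_distance(n, key))[:k]

# Target key 0b1000: node 0b1001 differs only in the lowest bit, so
# it is closest; 0b0111 differs in every bit, so it is farthest.
nearest = closest_nodes(0b1000, [0b1001, 0b0001, 0b1111, 0b0111])
```

The "publish/search space miss" failure type arises precisely here: if publish and search converge on different sets of closest nodes, the lookup fails even though both sides behaved correctly.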

Proceedings ArticleDOI
22 Oct 2012
TL;DR: This work uses passwords, as they are the most common authentication mechanism in services on the Internet today, ensuring strong user familiarity, and presents a scheme to support logins based on users knowing a username-password pair, which allows P2P systems to emulate centralized password logins.
Abstract: One of the differences between typical peer-to-peer (P2P) and client-server systems is the existence of user accounts. While many P2P applications, like public file sharing, are anonymous, more complex services such as decentralized online social networks require user authentication. In these, the common approach to P2P authentication builds on the possession of cryptographic keys. A drawback with that approach is usability when users access the system from multiple devices, an increasingly common scenario. In this work, we present a scheme to support logins based on users knowing a username-password pair. We use passwords, as they are the most common authentication mechanism in services on the Internet today, ensuring strong user familiarity. In addition to password logins, we also present supporting protocols to provide functionality related to password logins, such as resetting a forgotten password via e-mail or security questions. Together, these allow P2P systems to emulate centralized password logins. The results of our performance evaluation indicate that incurred delays are well within acceptable bounds.
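The core idea of multi-device password logins is deriving a stable cryptographic secret from the username-password pair with a slow KDF, so that no key files need to be moved between devices. A sketch under that assumption; the function name, salt construction, and parameters are illustrative and not taken from the paper's protocol:

```python
import hashlib

def derive_login_secret(username: str, password: str) -> bytes:
    """Derive a per-user secret from the username-password pair.
    Because the derivation is deterministic, any device the user logs
    in from recomputes the same secret, which can then seed the keys
    used in the P2P system.  PBKDF2 with a high iteration count slows
    down offline guessing; the salt is bound to the username."""
    salt = hashlib.sha256(b"login-salt:" + username.encode()).digest()
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)

alice_phone = derive_login_secret("alice", "correct horse")
alice_laptop = derive_login_secret("alice", "correct horse")
wrong_guess = derive_login_secret("alice", "wrong horse")
```

The supporting protocols mentioned in the abstract (password reset via e-mail or security questions) then amount to controlled ways of replacing this derived secret.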

Proceedings ArticleDOI
22 Oct 2012
TL;DR: In this paper, the authors propose a mathematical model to minimize the expected download time of cloud-assisted peer-to-peer video-on-demand services, where peers are grouped into different classes according to the number of concurrent video downloads.
Abstract: We propose a mathematical model to minimize the expected download time of cloud-assisted peer-to-peer video-on-demand services. First, we define a simple fluid model that quantifies the evolution of peers, which are grouped into different classes according to the number of concurrent video downloads. Then, analytical expressions for the expected download time are obtained in steady state via Little's law. The goal is to minimize the expected download time with limited storage capacity in cache nodes of the network, called super-peers. The nature of this combinatorial problem is similar to the Multi-Knapsack Problem (MKP): the number of copies must be chosen for each video stream, subject to storage capacity constraints. We solve the problem with a greedy randomized technique. The performance of this cooperative system is compared with a traditional content delivery network. Finally, the new caching policy is tested in a real scenario. The results confirm that the swarm-assisted peer-to-peer service is both more economical and better suited to massive scenarios, whereas the performance of both systems is similar in small-scale instances.
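The Little's law step above is the simplest part of the model: in steady state the expected download time is the mean number of peers in the system divided by the peer arrival rate, E[T] = E[N]/λ. A one-line numeric sketch (numbers illustrative, not taken from the paper):

```python
def expected_download_time(arrival_rate: float, mean_peers: float) -> float:
    """Little's law, E[T] = E[N] / lambda: in steady state, the mean
    time a peer spends downloading equals the mean number of peers in
    the system divided by the rate at which peers arrive."""
    return mean_peers / arrival_rate

# If peers arrive at 2 per second and on average 50 are downloading,
# each download takes 25 s in expectation.
t = expected_download_time(arrival_rate=2.0, mean_peers=50.0)
```

The fluid model supplies E[N] per peer class; the knapsack-like optimization then chooses video copy counts at the super-peers to drive this quantity down under the storage budget.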

Proceedings ArticleDOI
22 Oct 2012
TL;DR: AntiLiar as mentioned in this paper uses a secure progress log that consists of commitments and one-time signatures, and maintains consistency for an expanded neighbor view to defend against cheating attacks in mesh-based streaming.
Abstract: Peer-to-peer (P2P) mesh-based streaming systems have gained widespread use for multicasting of audio and video. In approaches based on BitTorrent, members of the swarm share their content availability through gossiping and redistribute file pieces cached locally. However, such systems are vulnerable to cheating attacks such as fake reporting, selective omission, fake block attacks, and neighbor selection attacks. These attacks severely impact quality of service, waste resources, and discourage cooperation among participants. A defense mechanism, called AntiLiar, is proposed for defending against cheating attacks in mesh-based streaming. AntiLiar uses a secure progress log that consists of commitments and one-time signatures, and maintains consistency for an expanded neighbor view. Experimental results demonstrate that AntiLiar minimizes the required costs and improves service quality over alternatives in the face of cheating attacks.
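The abstract only names the progress log's building blocks, so the following is a deliberately simplified stand-in: a hash-chained commitment log (without the one-time signatures) that conveys why a peer cannot retroactively change its availability reports. All structure and names here are ours, not AntiLiar's actual format:

```python
import hashlib
import json

GENESIS = "0" * 64  # digest of the (empty) log before the first entry

def append_entry(log, report):
    """Append one content-availability report. Each entry commits to the
    digest of the previous entry, making the log tamper-evident."""
    prev = log[-1]["digest"] if log else GENESIS
    body = json.dumps({"prev": prev, "report": report}, sort_keys=True)
    digest = hashlib.sha256(body.encode()).hexdigest()
    log.append({"prev": prev, "report": report, "digest": digest})
    return log

def verify_chain(log):
    """Recompute every commitment; any rewritten report breaks the chain."""
    prev = GENESIS
    for entry in log:
        body = json.dumps({"prev": prev, "report": entry["report"]},
                          sort_keys=True)
        if entry["prev"] != prev:
            return False
        if entry["digest"] != hashlib.sha256(body.encode()).hexdigest():
            return False
        prev = entry["digest"]
    return True
```

A cheating peer that reported owning a block and later denies it (selective omission) would have to rewrite an earlier entry, which breaks every subsequent digest and is detectable by any neighbor holding the chain head.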

Proceedings ArticleDOI
22 Oct 2012
TL;DR: This demonstration presents the current status of development and deployment, a sample decentralized experiment, and an invitation to the broader large-scale distributed computing research community to participate in the project through an open call.
Abstract: Community-Lab is an open, distributed infrastructure for researchers to experiment with Community Networks: large-scale, self-organized, and decentralized networks and services built and operated by citizens, for citizens. The goal of Community-Lab is to advance research and empower society by understanding and removing obstacles for these networks and services. This article outlines Community-Lab's aim, development, characteristics, and infrastructure. This demonstration presents its current status of development and deployment, a sample decentralized experiment, and an invitation to the broader large-scale distributed computing research community to participate in the project through an open call.

Proceedings ArticleDOI
22 Oct 2012
TL;DR: BOOSTER as mentioned in this paper is a decentralized epidemic-based system that boosts reception quality by cooperatively repairing lossy packet streams among the community of DVB viewers, and it is shown that the upload bandwidth required by each node for significant recovery of a real DVB broadcast is on the order of 5 KB/s.
Abstract: Wireless broadcasting systems, such as Digital Video Broadcasting (DVB), are subject to signal degradation, which affects end users' reception quality. Reception quality can be improved by increasing signal strength, but this comes at significantly increased energy use and still does not guarantee error-free reception. In this paper we present BOOSTER, a fully decentralized epidemic-based system that boosts reception quality by cooperatively repairing lossy packet streams among the community of DVB viewers. To validate our system, we collected real data by deploying a set of DVB receivers geographically distributed in and around Amsterdam and Utrecht, The Netherlands. We implemented and tested our system in PeerSim, using the collected trace information as input. We present in detail the crucial design decisions, the algorithms that underpin our system, the realistic experimental methodology, and extensive results that demonstrate the feasibility and efficiency of this approach. In particular, we conclude that the upload bandwidth required by each node for significant recovery of a real DVB broadcast is on the order of 5 KB/s when nodes allow up to a 2-second repair delay: a rather trivial bandwidth for today's typical ADSL connections, with an acceptable introduced delay.
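One cooperative repair round can be sketched as an exchange of missing-packet sets under an upload budget. The function and parameter names are ours, and the packet-count budget is only an illustrative reading of the reported 5 KB/s figure, not BOOSTER's actual gossip protocol:

```python
def repair_round(received, neighbors_received, budget_pkts):
    """Plan one gossip repair round for a single DVB receiver.

    received: set of packet ids this peer got off the air;
    neighbors_received: {neighbor: set of packet ids that neighbor has};
    budget_pkts: upload budget for this round, in packets.
    Returns {neighbor: sorted list of packet ids to upload to them}.
    """
    plan = {}
    budget = budget_pkts
    for nb, theirs in neighbors_received.items():
        # Offer only packets we have and the neighbor is missing,
        # oldest ids first so they arrive within the repair window.
        missing = sorted(received - theirs)
        send = missing[:budget]
        plan[nb] = send
        budget -= len(send)
        if budget == 0:
            break
    return plan
```

Since reception errors are largely uncorrelated across geographically separated receivers, a few such rounds per repair window let peers fill each other's gaps, which is why the per-node upload cost stays so low.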

Proceedings ArticleDOI
22 Oct 2012
TL;DR: GoOD-TA as mentioned in this paper is a top-k algorithm for distributed information retrieval that is intended to ensure interoperability in unstructured information retrieval P2P systems, where semantic heterogeneity comes from the use of different ontologies.
Abstract: In unstructured information retrieval (IR) P2P systems, semantic heterogeneity arises from the use of different ontologies, while semantic interoperability refers to the ability of peers to communicate with each other. We treat these notions separately, as two distinct problems, and propose two independent and complementary solutions. The GoOD-TA protocol aims at reducing heterogeneity through ontology-driven topology adaptation. DiQuESh is a top-k algorithm for distributed information retrieval that is intended to ensure interoperability. This distinction highlights their respective benefits for IR performance and leads to a modular architecture. For our experiments we obtained a set of actively used real-world ontologies through the NCBO BioPortal. We implemented GoOD-TA and DiQuESh in Java and used the PeerSim simulator. We first show that GoOD-TA effectively reduces the semantic heterogeneity related to the system topology, handles the evolution of peers' descriptors, and is suitable for dynamic systems. Then, running GoOD-TA and DiQuESh simultaneously yields a significant increase in precision and recall, which identifies the indirect contribution of the heterogeneity reduction achieved by GoOD-TA to improved interoperability.
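The core of any distributed top-k retrieval step is merging per-peer ranked results into a global answer. The sketch below uses score summation as the aggregation function; that choice, and the function name, are our assumptions, since the abstract does not specify DiQuESh's exact scoring:

```python
import heapq

def topk_merge(peer_results, k):
    """Aggregate per-peer retrieval results into a global top-k list.

    peer_results: list of {doc_id: relevance_score} dicts, one per
    answering peer. Scores for the same document seen at several peers
    are summed (one plausible monotone aggregation function).
    Returns the k highest-scoring (doc_id, total_score) pairs.
    """
    totals = {}
    for scores in peer_results:
        for doc, s in scores.items():
            totals[doc] = totals.get(doc, 0.0) + s
    # heapq.nlargest avoids fully sorting the candidate set.
    return heapq.nlargest(k, totals.items(), key=lambda kv: kv[1])
```

In a real deployment each peer would score documents against the query under its own ontology, which is exactly where GoOD-TA's heterogeneity reduction pays off: semantically closer neighbors return scores that are more meaningful to aggregate.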

Proceedings ArticleDOI
22 Oct 2012
TL;DR: It is demonstrated that there is no universally optimal penalty for disconnection and that the effectiveness of this punishment is markedly dependent on the uptime and downtime session lengths; the authors propose to incorporate predictions based on the current activity of the agents into the trust bootstrapping process.
Abstract: Trust-based systems have been proposed as a means to fight against malicious agents in peer-to-peer networks. However, some issues have generally been overlooked in the literature. One of them is the question of whether punishing disconnecting agents is effective. In this paper, we investigate this question for the initial cases where prior direct and reputational evidence is unavailable, what is referred to in the literature as trust bootstrapping. First, we demonstrate that there is no universally optimal penalty for disconnection and that the effectiveness of this punishment is markedly dependent on the uptime and downtime session lengths. Second, to minimize the effects of an inadequate selection of the disconnection penalty, we propose to incorporate predictions into the trust bootstrapping process. These predictions, based on the current activity of the agents, enhance the selection of potentially long-lived trustees, shortening the trust bootstrapping time when direct and reputational information is lacking.

Proceedings ArticleDOI
22 Oct 2012
TL;DR: An extensive experimental evaluation of Optimistic Disconnect (OD), an ad hoc connection management mechanism widely employed in BitTorrent agents, finds that OD generally improves the overall performance of the swarm, while improving the robustness of its topology.
Abstract: The significance of BitTorrent has motivated various studies focused on modeling and evaluating the protocol's characteristics and its current implementations in the Internet. So far, however, no work has investigated Optimistic Disconnect (OD), an ad hoc connection management mechanism widely employed in BitTorrent agents. OD allows a peer to search for “better” neighbors in the swarm by disconnecting peers from its current neighborhood and connecting to others. This paper presents an extensive experimental evaluation to study and quantify potential benefits of OD, such as reduced average download time and improved topology robustness. We evaluate different scenarios and the impact of factors such as average peer reachability and arrival pattern. We found that OD generally improves the overall performance of the swarm (by up to 30% in the evaluated scenarios), while improving the robustness of its topology.
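The "search for better neighbors" behavior can be sketched as a periodic round that drops the slowest connections and refills the neighborhood from fresh candidates. The drop fraction and all names below are illustrative parameters of our own, not the specific policy of any BitTorrent agent measured in the paper:

```python
def optimistic_disconnect(neighbors, candidates, max_degree, drop_fraction=0.2):
    """One OD round: drop the slowest neighbors, reconnect to fresh peers.

    neighbors: {peer: observed download rate from that peer};
    candidates: iterable of known peers not currently connected;
    max_degree: cap on the neighborhood size.
    Returns the updated {peer: rate} neighborhood.
    """
    n_drop = max(1, int(len(neighbors) * drop_fraction)) if neighbors else 0
    ranked = sorted(neighbors, key=neighbors.get)  # slowest first
    kept = {p: neighbors[p] for p in ranked[n_drop:]}
    for peer in candidates:
        if len(kept) >= max_degree:
            break
        if peer not in kept:
            kept[peer] = 0.0  # rate unknown until data is exchanged
    return kept
```

The "optimistic" part is that newly admitted peers start with no history, so the mechanism keeps probing the swarm for faster partners instead of locking in the initial neighborhood, which is consistent with the topology-robustness gains the evaluation reports.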