
Showing papers presented at "International Conference on Peer-to-Peer Computing in 2015"


Proceedings ArticleDOI
16 Nov 2015
TL;DR: In AutoTune, the bitrate adaptation problem in ABR is formulated as a noncooperative Stackelberg game in which the VoD service provider and the users are players; at the Stackelberg equilibrium, cloud bandwidth consumption is minimized while users are satisfied with their selected video bitrates.
Abstract: Hybrid peer-to-peer assisted cloud-based video-on-demand (VoD) systems augment cloud-based VoD systems with P2P networks to improve scalability and save bandwidth costs in the cloud. In these systems, the VoD service provider (e.g., Netflix) relies on the cloud to deliver videos to users and pays for the cloud bandwidth consumption. Users can download videos from both the cloud and peers in the P2P network. It is important for the VoD service provider to i) minimize cloud bandwidth consumption and ii) guarantee users' satisfaction (i.e., quality of experience). Though previous adaptive bitrate streaming (ABR) methods improve video playback smoothness, they cannot achieve these two goals simultaneously. To tackle this challenge, we propose AutoTune, a game-based adaptive bitrate streaming method. In AutoTune, we formulate the bitrate adaptation problem in ABR as a noncooperative Stackelberg game in which the VoD service provider and the users are players. The VoD service provider acts as the leader and sets the VoD service price for users with the objective of minimizing cloud bandwidth consumption while ensuring users' participation. In response to the price, each user selects the video bitrate that maximizes its utility (defined as its satisfaction minus the associated VoD service fee). A Stackelberg equilibrium is then reached in which cloud bandwidth consumption is minimized while users are satisfied with their selected bitrates. Experimental results from the PeerSim simulator and the PlanetLab real-world testbed show that, compared to existing methods, AutoTune provides high user satisfaction while saving cloud bandwidth.

23 citations
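The leader-follower structure described in the AutoTune abstract can be sketched in a few lines. Everything below is a hypothetical toy model, not the paper's actual formulation: the bitrate ladder, the logarithmic satisfaction function, and the candidate price grid are all illustrative assumptions, and the P2P side is ignored entirely.

```python
import math

BITRATES = [0.5, 1.0, 2.0, 4.0]        # Mbps; illustrative ladder

def satisfaction(bitrate):
    # Concave satisfaction in bitrate (a common modeling choice).
    return math.log(1 + bitrate)

def best_response(price):
    # Follower: pick the bitrate maximizing utility = satisfaction - fee.
    return max(BITRATES, key=lambda b: satisfaction(b) - price * b)

def leader_choice(prices, n_users):
    # Leader: among candidate prices that keep users participating
    # (non-negative utility), minimize total bandwidth served.
    feasible = []
    for p in prices:
        b = best_response(p)
        if satisfaction(b) - p * b >= 0:       # participation constraint
            feasible.append((n_users * b, p, b))
    return min(feasible)                        # (bandwidth, price, bitrate)

total, price, bitrate = leader_choice([0.1, 0.3, 0.6, 1.0], n_users=100)
```

The sketch only shows the Stackelberg shape of the interaction: the leader commits to a price, followers best-respond, and the leader picks the price whose induced responses minimize its own cost.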


Proceedings ArticleDOI
01 Sep 2015
TL;DR: The authors present how WebRTC can be used to implement an installation-free, fully decentralized online social network, enabling the construction of PKI-secured p2p overlays with replicated, access-controlled, reliable storage.
Abstract: Online social networks have emerged as a main tool for communication on the Internet. While centralized solutions are prone to censorship, privacy violations and unwanted marketing of users' data, decentralized solutions, e.g. based on p2p technology, promise to overcome these limitations. One major shortcoming is the need to install additional software, which is increasingly rejected by users accustomed to web-based applications. In this paper, we present how WebRTC can be used to implement an installation-free, fully decentralized online social network. With WebRTC, standard browsers can communicate directly, which allows the construction of PKI-secured p2p overlays with replicated, access-controlled, reliable storage. Our evaluation shows that the approach is scalable in the number of users and meets the performance requirements of today's social networks.

13 citations


Proceedings ArticleDOI
16 Nov 2015
TL;DR: Wang et al. propose a Parallel Deadline Guaranteed (PDG) scheme, which schedules data reallocation (through load re-assignment and data replication) using a tree-based bottom-up parallel process.
Abstract: It is imperative for cloud storage systems to provide deadline-guaranteed services according to service level agreements (SLAs) for online services. However, no previous work provides deadline guarantees specifically for cloud storage systems. In this paper, we introduce a new form of SLA that enables each tenant to specify the percentage of its requests it wishes to have served within a specified deadline. We first identify the multiple objectives (i.e., traffic and latency minimization, resource utilization maximization) in developing schemes to satisfy the SLAs. To satisfy the SLAs while achieving these objectives, we propose a Parallel Deadline Guaranteed (PDG) scheme, which schedules data reallocation (through load re-assignment and data replication) using a tree-based bottom-up parallel process. We further enhance PDG with a prioritized data reallocation algorithm that enables highly overloaded servers to autonomously probe nearby servers and enables the load balancer to handle them immediately. Our trace-driven experiments on a simulator show the effectiveness of our schemes in guaranteeing the SLAs while achieving the multiple objectives.

9 citations


Proceedings ArticleDOI
16 Nov 2015
TL;DR: Cyclon.p2p is an implementation of the Cyclon peer sampling protocol using the WebRTC API, shown to be feasible for large-scale deployment in real-world web applications.
Abstract: The web application paradigm has become a dominant choice due to the ubiquity of web browsers across PCs and mobile devices, and the layer of abstraction they provide. However, browsers have traditionally communicated only with servers, never from one browser to another. The WebRTC API overcomes this limitation, providing further avenues for the web application paradigm to evolve. In this paper we present Cyclon.p2p, an implementation of the Cyclon peer sampling protocol using the WebRTC API. We provide a detailed overview of our system and conduct a thorough evaluation of the implementation in real-world and simulated experiments. Our results strongly suggest that our implementation is feasible for large-scale deployment in real-world web applications.

9 citations
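Cyclon's core operation, the periodic shuffle in which a node exchanges a small random subset of its neighbor cache with its oldest neighbor, can be sketched as follows. This is a minimal single-process sketch of the protocol's logic, not the paper's WebRTC implementation; the cache size, the `Node` class, and the direct method calls standing in for network messages are all illustrative.

```python
import random

CACHE_SIZE = 4      # illustrative; real deployments use larger caches
SHUFFLE_LEN = 2     # descriptors exchanged per shuffle

class Node:
    def __init__(self, nid):
        self.nid = nid
        self.cache = {}                     # neighbor id -> age

    def merge(self, received, sent):
        # Keep received descriptors (never our own id, never duplicates);
        # on overflow, evict entries we just shipped out, then random ones.
        for nid, age in received.items():
            if nid != self.nid and nid not in self.cache:
                self.cache[nid] = age
        evictable = [n for n in sent if n in self.cache]
        while len(self.cache) > CACHE_SIZE:
            victim = evictable.pop() if evictable else random.choice(list(self.cache))
            del self.cache[victim]

def shuffle(initiator, nodes):
    # One Cyclon exchange: age all entries, contact the oldest neighbor,
    # then swap small random cache subsets (direct calls stand in for
    # the network round-trip).
    for n in initiator.cache:
        initiator.cache[n] += 1
    target = nodes[max(initiator.cache, key=initiator.cache.get)]
    del initiator.cache[target.nid]
    outgoing = dict(random.sample(sorted(initiator.cache.items()),
                                  min(SHUFFLE_LEN - 1, len(initiator.cache))))
    outgoing[initiator.nid] = 0             # fresh descriptor of ourselves
    reply = dict(random.sample(sorted(target.cache.items()),
                               min(SHUFFLE_LEN, len(target.cache))))
    target.merge(outgoing, sent=list(reply))
    initiator.merge(reply, sent=list(outgoing))

# Tiny example: three nodes, one shuffle initiated by 'a'.
a, b, c = Node('a'), Node('b'), Node('c')
a.cache, b.cache, c.cache = {'b': 0, 'c': 1}, {'c': 0}, {'a': 0}
shuffle(a, {'a': a, 'b': b, 'c': c})
```

The interesting property, preserved even in this toy, is that the initiator always removes the contacted peer and pushes a fresh descriptor of itself, which keeps the overlay's in-degree balanced over time.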


Proceedings ArticleDOI
16 Nov 2015
TL;DR: QTrade, a topology-agnostic incentive scheme for adaptive P2P video streaming systems based on user-validated video quality metrics, is presented; the results show that QTrade utilizes bandwidth more efficiently by providing an incentive to distribute the parts of the video that yield high QoE.
Abstract: Video streaming constitutes the dominant portion of today's Internet traffic and will keep growing in the coming years. To enable low-cost distribution of bulky video content, Peer-to-Peer (P2P) approaches are a viable way to cut server bandwidth costs by using users' upstream bandwidth to redistribute data. However, users need an incentive to participate in such a system. Related work on incentive schemes has focused on bandwidth contribution as the measure of a peer's contribution to system performance, ignoring that user-perceived Quality of Experience (QoE) is not necessarily maximized by maximizing bandwidth, but by delivering the right data in the right order. Consequently, this work presents QTrade, a topology-agnostic incentive scheme for adaptive P2P video streaming systems based on user-validated video quality metrics. QTrade is evaluated on top of an existing adaptive streaming overlay. The results show that QTrade utilizes bandwidth more efficiently by providing an incentive to distribute the parts of the video that yield high QoE. Moreover, cooperative peers experience up to 70% fewer and shorter rebuffering events, while non-cooperative peers suffer 10 to 11 times more rebuffering.

9 citations


Proceedings ArticleDOI
16 Nov 2015
TL;DR: This work introduces a trust management system, called SocialLink, that utilizes social-network links and historical transaction links to manage file transactions via a novel weighted transaction network built from previous file transaction history.
Abstract: Current reputation systems for peer-to-peer (P2P) file sharing either fail to utilize the trust that already exists within social networks or suffer from attacks such as free-riding and collusion. To handle these problems, we introduce a trust management system, called SocialLink, that utilizes social-network links and historical transaction links. SocialLink manages file transactions through both the social network and a novel weighted transaction network built from previous file transaction history. First, SocialLink exploits the trust among friends in social networks by enabling two friends to share files directly. Second, the weighted transaction network is used to 1) infer the client's trust that a server will reliably provide the requested file and 2) check the fairness of the transaction. In this way, SocialLink prevents potentially misbehaving transactions (i.e., the provision of faulty files), encourages nodes to contribute file resources to non-friends, and avoids free-riding. Furthermore, the weighted transaction network helps SocialLink resist whitewashing, collusion and Sybil attacks. Extensive simulation demonstrates that SocialLink efficiently ensures trustworthy, fair P2P file sharing and resists the aforementioned attacks.

9 citations


Proceedings ArticleDOI
16 Nov 2015
TL;DR: The authors describe a decentralized approach that eliminates the need for a central collector by storing local views of network traffic patterns on the devices performing the capture; to analyze the captured data, queries formulated by analysts are distributed across all devices.
Abstract: The Internet has developed into the primary means of communication, and ensuring its availability and stability is an increasingly challenging task. Traffic monitoring enables network operators to understand the composition of traffic flowing through individual corporate and private networks, making it essential for planning, reporting and debugging purposes. Classical packet capture and aggregation concepts (e.g. NetFlow) typically rely on centralized collection of traffic metadata. With the proliferation of network-enabled devices and the resulting increase in data volume, such approaches suffer from scalability issues that often make the transfer of raw metadata impractical. This paper describes a decentralized approach that eliminates the need for a central collector by storing local views of network traffic patterns on the devices performing the capture. To allow analysis of the captured data, queries formulated by analysts are distributed across all devices and processed in parallel on the respective local data. Consequently, instead of continually transferring raw metadata, each device sends significantly smaller aggregate results to a central location, where they are combined into the requested final result. The proposed system is a lightweight and scalable monitoring solution that makes efficient use of the available system resources on the distributed devices, allowing high-performance, real-time traffic analysis on a global scale. The solution was implemented and deployed globally on hosts managed and maintained by a large managed network security services provider.

6 citations
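The query flow the abstract describes — local aggregation on each capture device, followed by combining the small partial results centrally — can be sketched as a map/combine pass. The record layout, the predicate, and the `Counter`-based aggregate below are illustrative assumptions, not the deployed system's interfaces.

```python
from collections import Counter

# Each flow record: (src, dst, nbytes). Illustrative layout; the real
# system captures full NetFlow-style metadata.
def local_aggregate(records, predicate):
    # Runs on each capture device: aggregate only the matching flows,
    # so only this small Counter ever leaves the device.
    agg = Counter()
    for src, dst, nbytes in records:
        if predicate(src, dst):
            agg[(src, dst)] += nbytes
    return agg

def combine(partials):
    # Runs at the central location: merge the per-device results.
    total = Counter()
    for p in partials:
        total.update(p)
    return total

device_a = [("10.0.0.1", "8.8.8.8", 1200), ("10.0.0.2", "1.1.1.1", 300)]
device_b = [("10.0.0.1", "8.8.8.8", 800)]
partials = [local_aggregate(recs, lambda s, d: d == "8.8.8.8")
            for recs in (device_a, device_b)]
result = combine(partials)
```

The design choice this illustrates is that only aggregates whose size depends on the query, not on the raw traffic volume, cross the network.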


Proceedings ArticleDOI
16 Nov 2015
TL;DR: This work designs a socially-aware distributed hash table (DHT) for efficient implementation of DOSNs and proposes a gossip-based algorithm that places users in the DHT while maximizing the social awareness among them.
Abstract: Many decentralized online social networks (DOSNs) have been proposed in response to growing awareness of privacy and scalability issues in centralized social networks. Such decentralized networks transfer processing and storage functionality from the service providers to the end users. DOSNs require their own implementations of services such as search, information dissemination, storage, and publish/subscribe. Many of these services mostly perform social queries, in which OSN users access information about their friends. In our work, we design a socially-aware distributed hash table (DHT) for efficient implementation of DOSNs. In particular, we propose a gossip-based algorithm that places users in a DHT while maximizing the social awareness among them. Through a set of experiments, we show that our approach reduces lookup latency by almost 30% and improves the reliability of communication by nearly 10% via trusted contacts.

6 citations
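One way to picture the gossip-based placement is as pairwise position swaps that are kept only when they bring users closer, on the DHT ring, to their friends. The greedy swap rule, ring size and cost function below are hypothetical simplifications, not the paper's actual objective or protocol.

```python
RING = 256                                  # illustrative identifier space

def ring_distance(a, b):
    # Shortest distance between two positions on the ring.
    d = abs(a - b) % RING
    return min(d, RING - d)

def social_cost(positions, friends, user):
    # Total ring distance from a user's DHT position to their friends'.
    return sum(ring_distance(positions[user], positions[f])
               for f in friends[user])

def gossip_swap(positions, friends, u, v):
    # Pairwise gossip step: swap the two users' DHT positions only if
    # the swap lowers their combined distance to friends.
    before = social_cost(positions, friends, u) + social_cost(positions, friends, v)
    positions[u], positions[v] = positions[v], positions[u]
    after = social_cost(positions, friends, u) + social_cost(positions, friends, v)
    if after >= before:
        positions[u], positions[v] = positions[v], positions[u]   # revert
        return False
    return True

# 'u' sits far from its friend 'w'; swapping with 'v' fixes that.
positions = {'u': 0, 'v': 100, 'w': 101}
friends = {'u': ['w'], 'v': [], 'w': ['u']}
swapped = gossip_swap(positions, friends, 'u', 'v')
```

Repeating such swaps across random pairs is the usual gossip pattern for descent on a global objective without any central coordinator.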


Proceedings ArticleDOI
16 Nov 2015
TL;DR: This paper presents DDLL, a novel decentralized algorithm for constructing distributed doubly linked lists that adopts a strategy based on conflict detection and sequence numbers.
Abstract: A distributed doubly linked list (or bidirectional ring) is a fundamental distributed data structure commonly used in structured peer-to-peer networks. This paper presents DDLL, a novel decentralized algorithm for constructing distributed doubly linked lists. In the absence of failure, DDLL maintains consistency with regard to lookups of nodes, even while multiple nodes are simultaneously being inserted or deleted. Unlike existing algorithms, DDLL adopts a novel strategy based on conflict detection and sequence numbers. A formal description and correctness proofs are given. Simulation results show that DDLL outperforms conventional algorithms in terms of both time and number of messages.

5 citations
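The sequence-number idea can be illustrated with a shared-memory sketch: an insertion between two nodes commits only if the left neighbor's right link is unchanged since it was read; otherwise a conflict is detected and the caller retries. DDLL itself is a message-passing protocol with formal correctness proofs; the `LNode` class and single-process "compare-and-set" below are only an analogy.

```python
class LNode:
    def __init__(self, key):
        self.key = key
        self.left = None
        self.right = None
        self.rseq = 0                 # sequence number guarding the right link

def try_insert(new, left):
    # Insert `new` between `left` and `left.right`. The insertion is
    # committed only if `left`'s sequence number still matches the value
    # read at the start; a concurrent insertion would have bumped it,
    # and the caller would retry against the updated list.
    right = left.right
    expected = left.rseq
    new.left, new.right = left, right
    if left.rseq != expected:         # a concurrent insert won the race
        return False
    left.right, left.rseq = new, left.rseq + 1
    right.left = new
    return True

# Two-node ring a <-> b; insert c between them.
a, b = LNode('a'), LNode('b')
a.right, a.left, b.right, b.left = b, b, a, a
c = LNode('c')
ok = try_insert(c, a)
```

In a single process the conflict branch never fires; it is there to show where, in the distributed protocol, a stale sequence number turns an insertion into a detected conflict rather than a corrupted list.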


Proceedings ArticleDOI
16 Nov 2015
TL;DR: This work shows how to decompose popular P2P live streaming systems, such as CoolStreaming and BitTorrent Live, into ingredients and how STREAMAID can help optimize and adapt these protocols.
Abstract: Peer-to-peer live streaming systems involve complex engineering and are difficult to test and deploy. To cut through the complexity, we advocate designing such systems by composing ingredients: a novel abstraction denoting the smallest interoperable units of code, each expressing a single design choice. We present a system, STREAMAID, that provides tools for designing protocols in terms of ingredients, systematically testing the impact of every design decision in a simulator, and deploying them in a wide-area testbed such as PlanetLab for evaluation. We show how to decompose popular P2P live streaming systems, such as CoolStreaming and BitTorrent Live, into ingredients and how STREAMAID can help optimize and adapt these protocols. By experimenting with the essential building blocks that make up P2P live streaming protocols, we gain a unique vantage point on their relative quality, their bottlenecks and their potential for future improvement.

4 citations


Proceedings ArticleDOI
16 Nov 2015
TL;DR: The authors propose two filters based on probabilistic models such that good files with negative feedback are not completely kept out of the rating system; the second filter uses the downloading peer's confidence and the difference between a file's positive and negative ratings to compute the probability of risking the download rather than rejecting the file.
Abstract: In recent years, P2P file sharing systems have adopted rating systems in the hope of stopping the propagation of bad files. In a rating system, users rate files after downloading, and a file with positive feedback is considered a good file. However, a dishonest rater can undermine the rating system by giving positive ratings to bad files and negative ratings to good files. In this paper, we design two filters based on probabilistic models such that good files with negative feedback are not completely kept out of the system. The first filter is based on the binomial distribution of a file's ratings; the second considers the confidence of the downloading peer and the difference between a file's positive and negative ratings to compute the probability of taking the risk of downloading the file rather than rejecting it. Our filters need only a file's ratings, which makes them suitable for popular torrent sharing websites that rank files using a binary rating system without any information about raters. In addition, they can be implemented entirely on the client side without any modification to the content sharing sites.
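The use of the binomial distribution of ratings can be sketched with a small Bayesian variant: compare how likely the observed vote split is under a "good file" model versus a "bad file" model, and use the posterior as the probability of risking the download. The vote probabilities `p_good`/`p_bad` and the uniform prior are illustrative assumptions, not the paper's actual filter parameters.

```python
from math import comb

def binom_pmf(k, n, p):
    # Probability of k positive votes out of n, if each is positive w.p. p.
    return comb(n, k) * p**k * (1 - p)**(n - k)

def download_probability(pos, neg, p_good=0.8, p_bad=0.2, prior_good=0.5):
    # Compare the observed vote split under a "good file" model vs a
    # "bad file" model; the posterior probability of "good" becomes the
    # chance of risking the download instead of rejecting outright.
    n = pos + neg
    like_good = binom_pmf(pos, n, p_good)
    like_bad = binom_pmf(pos, n, p_bad)
    return (prior_good * like_good /
            (prior_good * like_good + (1 - prior_good) * like_bad))

# A file with some negative votes is down-weighted, never banned outright.
p = download_probability(pos=6, neg=4)
```

This captures the paper's stated goal: a few negative votes reduce the download probability smoothly instead of excluding the file from the system entirely.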

Proceedings ArticleDOI
16 Nov 2015
TL;DR: It is found that in times of light congestion the decentralized algorithms perform as well as the centralized approach; during times of moderate or heavy congestion, the unstructured peer-to-peer algorithm performs better than the centralized algorithm, while the DHT-based algorithm performs worse.
Abstract: In this paper we present two new decentralized algorithms for autonomous intersection management and compare the performance of the algorithms with an established centralized solution. One of the algorithms addresses the problem through an unstructured peer-to-peer approach and the other uses a Distributed Hash Table to distribute knowledge of intersection usage among participating vehicles. We evaluate these algorithms through simulation and by comparing average delay to the performance of a centralized reservation-based algorithm. We find that in times of light congestion the decentralized algorithms perform as well as the centralized approach. During times of moderate or heavy congestion the unstructured peer-to-peer algorithm performs better than the centralized algorithm, and the DHT-based algorithm performs worse.

Proceedings ArticleDOI
16 Nov 2015
TL;DR: iASK achieves high answer quality with 24 percent higher accuracy, short response latency with 53 percent less delay, and effective cooperative incentives with 16 percent more answers compared to other social-based Q&A systems.
Abstract: Traditional web-based Question and Answer (Q&A) websites cannot easily solve non-factual questions to match askers' preferences. Recent research efforts have begun to study social-based Q&A systems that rely on an asker's social friends to provide answers. However, this method cannot find answerers for a question outside the asker's interests. To solve this problem, we propose a distributed Q&A system named iASK that incorporates both social community intelligence and global collective intelligence. iASK improves response latency and answer quality in both the social domain and the global domain. It uses a neural-network-based friend ranking method to identify answerer candidates by considering social closeness and Q&A activities. To efficiently identify answerers in the global user base, iASK builds a virtual server tree that embeds the hierarchical structure of interests, and maps users to the tree based on their interests. To accurately locate cooperative experts, iASK has a fine-grained reputation system that evaluates user reputation based on cooperativeness and expertise. Experimental results from large-scale trace-driven simulation and real-world daily usage of the iASK prototype show its superior performance: 24% higher answer accuracy, 53% less response delay and 16% more answers compared to other social-based Q&A systems.

Proceedings ArticleDOI
16 Nov 2015
TL;DR: A very simple yet very effective generalization of the PoW formula is proposed that decouples the penalty for spammers from that for legitimate users; at the optimum, the proposal halves the harm spammers can do while by design having no impact on legitimate users.
Abstract: The BitMessage protocol offers privacy to its anonymous users. It is a completely decentralized messaging system that enables users to exchange messages while preventing accidental eavesdropping, a nice feature in the post-Snowden Internet era. Not only are messages sent to every node on the network (making it impossible to identify the intended recipient), but their content is encrypted with the intended recipient's public key (so that only the recipient can decipher it). As these two properties combined might facilitate spamming, a proof-of-work (PoW) mechanism was designed to mitigate this threat: only messages exhibiting the required PoW properties are forwarded on the network. Since the PoW is based on computationally heavy cryptographic functions, it slows down the rate at which spammers can introduce unsolicited messages into the network on the one hand, but also makes it harder for regular users to send legitimate messages on the other. In this paper, we (i) analyze the current PoW mechanism, (ii) propose a very simple yet very effective generalization of the formula that decouples the penalty for spammers from that for legitimate users, and (iii) show that, at the optimum, our proposal halves the harm spammers can do while by design having no impact on legitimate users.
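The PoW mechanism being analyzed can be sketched as follows: the sender must find a nonce whose hash falls under a target that shrinks as the message grows, so larger messages cost more computation before they may be forwarded. The target formula and hash construction below are simplified illustrations in the spirit of BitMessage's PoW, not the protocol's exact specification, and the tiny parameters are chosen so the search terminates instantly.

```python
import hashlib

def pow_target(payload_len, trials_per_byte, extra_bytes):
    # Simplified target: longer payloads plus a per-message surcharge
    # shrink the target, so more hashing is required on average.
    return 2**64 // (trials_per_byte * (payload_len + extra_bytes))

def do_pow(payload, target):
    # Find a nonce such that the first 8 bytes of
    # sha512(nonce || sha512(payload)) fall under the target.
    initial = hashlib.sha512(payload).digest()
    nonce = 0
    while True:
        digest = hashlib.sha512(nonce.to_bytes(8, "big") + initial).digest()
        if int.from_bytes(digest[:8], "big") <= target:
            return nonce
        nonce += 1

payload = b"hello"
# Tiny parameters so the search finishes instantly; real deployments use
# much larger trials_per_byte / extra_bytes values.
target = pow_target(len(payload), trials_per_byte=1, extra_bytes=3)
nonce = do_pow(payload, target)
```

The generalization the paper proposes adjusts this cost formula so that the penalty borne by bulk senders and by occasional legitimate senders can be tuned independently rather than through the single shared target.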