
Showing papers presented at "International Workshop on Quality of Service in 2012"


Proceedings ArticleDOI
04 Jun 2012
TL;DR: This paper re-models the resource provisioning problem in Dropbox-like systems and presents an interference-aware solution that smartly allocates Dropbox tasks to different cloud instances, remarkably reducing the synchronization delay for this new generation of file hosting service.
Abstract: Powered by cloud computing, Dropbox not only provides reliable file storage but also enables effective file synchronization and user collaboration. This new generation of service, beyond conventional client/server or peer-to-peer file hosting with storage only, has attracted a vast number of Internet users. It is however known that the synchronization delay of Dropbox-like systems is increasing with their expansion, often beyond the accepted level for practical collaboration. In this paper, we present an initial measurement to understand the design and performance bottleneck of the proprietary Dropbox system. Our measurement identifies the cloud servers/instances utilized by Dropbox, revealing its hybrid design with both Amazon's S3 (for storage) and Amazon's EC2 (for computation). The mix of bandwidth-intensive tasks (such as content delivery) and computation-intensive tasks (such as comparing hash values for the contents) in Dropbox enables seamless collaboration and file synchronization among multiple users; yet their interference, revealed in our experiments, creates a severe bottleneck that prolongs the synchronization delay with virtual machines in the cloud, which was not seen with conventional physical machines. We thus re-model the resource provisioning problem in Dropbox-like systems and present an interference-aware solution that smartly allocates the Dropbox tasks to different cloud instances. Evaluation results show that our solution remarkably reduces the synchronization delay for this new generation of file hosting service.

71 citations
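
The abstract does not spell out the allocation algorithm, so the following is only a minimal sketch of the interference-aware idea: tasks are labeled bandwidth-intensive or computation-intensive, co-locating the two kinds is assumed to carry the largest penalty, and each task is placed greedily on the instance where it interferes least. The task labels, penalty matrix and function names are hypothetical, not the paper's method.

```python
# Hypothetical sketch of interference-aware task placement (not the paper's algorithm).
# Tasks are labeled "bw" (bandwidth-intensive, e.g. content delivery) or "cpu"
# (computation-intensive, e.g. hash comparison); co-locating a "bw" task with a
# "cpu" task is assumed to carry the largest interference penalty.

INTERFERENCE = {("bw", "bw"): 1.0, ("cpu", "cpu"): 1.0,
                ("bw", "cpu"): 3.0, ("cpu", "bw"): 3.0}

def placement_cost(instance_tasks, new_task):
    """Extra interference incurred by adding new_task to an instance."""
    return sum(INTERFERENCE[(t, new_task)] for t in instance_tasks)

def allocate(tasks, num_instances):
    """Greedily assign each task to the instance where it interferes least."""
    instances = [[] for _ in range(num_instances)]
    for task in tasks:
        best = min(instances, key=lambda inst: placement_cost(inst, task))
        best.append(task)
    return instances

if __name__ == "__main__":
    workload = ["bw", "cpu", "bw", "cpu", "bw", "cpu"]
    print(allocate(workload, 2))   # tends to keep bw and cpu tasks apart
```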


Proceedings ArticleDOI
04 Jun 2012
TL;DR: This paper presents an initial study on the performance of modern virtualization solutions under DoS attacks and implements a practical modification to the VirtIO drivers in the Linux KVM package, which effectively mitigates the overhead of a DoS attack by up to 40%.
Abstract: Virtualization, which allows multiple Virtual Machines (VMs) to reside on a single physical machine, has become an indispensable technology for today's IT infrastructure. It is known that the overhead for virtualization affects system performance; yet it remains largely unknown whether VMs are more vulnerable to networked Denial of Service (DoS) attacks than conventional physical machines. A clear understanding here is obviously critical to networked virtualization systems such as cloud computing platforms. In this paper, we present an initial study on the performance of modern virtualization solutions under DoS attacks. We experiment with the full spectrum of modern virtualization techniques, from paravirtualization and hardware virtualization to container virtualization, with a comprehensive set of benchmarks. Our results reveal severe vulnerability of modern virtualization: even with relatively light attacks, the file system and memory access performance of VMs degrades at a much higher rate than that of their non-virtualized counterparts, and this is particularly true for hypervisor-based solutions. We further examine the root causes, with the goal of enhancing the robustness and security of these virtualization systems. Inspired by the findings, we implement a practical modification to the VirtIO drivers in the Linux KVM package, which effectively mitigates the overhead of a DoS attack by up to 40%.

52 citations


Proceedings ArticleDOI
04 Jun 2012
TL;DR: This paper presents CloudGPS, a new server selection scheme for the cloud computing environment that achieves high scalability and ISP-friendliness, and significantly reduces network distance measurement cost.
Abstract: In order to minimize user-perceived latency while ensuring high data availability, cloud applications desire to select servers from one of multiple data centers (i.e., server clusters) in different geographical locations, which are able to provide the desired services with low latency and low cost. This paper presents CloudGPS, a new server selection scheme for the cloud computing environment that achieves high scalability and ISP-friendliness. CloudGPS proposes a configurable global performance function that allows Internet service providers (ISPs) and cloud service providers (CSPs) to balance the cost in terms of inter-domain transit traffic against the quality of service in terms of network latency. CloudGPS bounds the overall burden to be linear with the number of end users. Moreover, compared with traditional approaches, CloudGPS significantly reduces network distance measurement cost (i.e., from O(N) to O(1) for each end user in an application using N data centers). Furthermore, CloudGPS achieves ISP-friendliness by significantly decreasing inter-domain transit traffic.

35 citations
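
As a rough illustration of a configurable global performance function of the kind the abstract describes, the sketch below scores each candidate data center by an assumed weighted sum of network latency and inter-domain transit cost and picks the minimum; the weighting form, field names and numbers are all assumptions, not CloudGPS's actual function.

```python
# Minimal sketch of a configurable server-selection objective (assumed form, not
# CloudGPS's actual function): score each data center by a weighted sum of
# network latency and inter-domain transit cost, then pick the minimum.

def select_data_center(candidates, alpha=0.5):
    """candidates: list of dicts with assumed keys 'name', 'latency_ms',
    'transit_cost'; alpha in [0, 1] weights latency against transit cost."""
    def score(dc):
        return alpha * dc["latency_ms"] + (1 - alpha) * dc["transit_cost"]
    return min(candidates, key=score)

if __name__ == "__main__":
    dcs = [
        {"name": "dc-east", "latency_ms": 40, "transit_cost": 8},
        {"name": "dc-west", "latency_ms": 90, "transit_cost": 2},
    ]
    print(select_data_center(dcs, alpha=0.7)["name"])
```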


Proceedings ArticleDOI
04 Jun 2012
TL;DR: The performance evaluation in a WMN scenario demonstrates the high accuracy of HyQoE in estimating the Mean Opinion Score (MOS), and highlights the lack of performance of the well-known objective methods and the Pseudo-Subjective Quality Assessment (PSQA) approach.
Abstract: As Wireless Mesh Networks (WMNs) have been increasingly deployed, where users can share, create and access videos with different characteristics, the need for new quality estimator mechanisms has become important because operators want to control the quality of video delivery and optimize their network resources, while increasing the user satisfaction. However, the development of in-service Quality of Experience (QoE) estimation schemes for Internet videos (e.g., real-time streaming and gaming) with different complexities, motions, Group of Picture (GoP) sizes and contents remains a significant challenge and is crucial for the success of wireless multimedia systems. To address this challenge, we propose a real-time quality estimator approach, HyQoE, for real-time multimedia applications. The performance evaluation in a WMN scenario demonstrates the high accuracy of HyQoE in estimating the Mean Opinion Score (MOS). Moreover, the results highlight the lack of performance of the well-known objective methods and the Pseudo-Subjective Quality Assessment (PSQA) approach.

35 citations


Proceedings ArticleDOI
04 Jun 2012
TL;DR: This work presents a HAS QoE monitoring system, based on data collected in the network, without monitoring information from the client, and presents a working prototype for the reconstruction and monitoring of Microsoft Smooth Streaming HAS sessions that is capable of dealing with intermediate caching and user interactivity.
Abstract: HTTP Adaptive Streaming (HAS) is rapidly becoming a key video delivery technology for fixed and mobile networks. However, today there is no solution that allows network operators or CDN providers to perform network-based QoE monitoring for HAS sessions. We present a HAS QoE monitoring system, based on data collected in the network, without monitoring information from the client. To retrieve the major QoE parameters such as average quality, quality variation, rebuffering events and interactivity delay, we propose a technique called session reconstruction. We define a number of iterative steps and develop algorithms that can be used to perform HAS session reconstruction. Finally, we present the results of a working prototype for the reconstruction and monitoring of Microsoft Smooth Streaming HAS sessions that is capable of dealing with intermediate caching and user interactivity. We describe the main observations when using the platform to analyze more than a hundred HAS sessions.

34 citations
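
Assuming the session reconstruction step has already produced a per-segment log, the major QoE parameters named in the abstract (except interactivity delay) can be computed along the lines of the hedged sketch below; the log format and function are hypothetical, not the paper's implementation.

```python
# Illustrative computation of HAS QoE parameters from a reconstructed session
# (assumed log format, not the paper's reconstruction algorithm). Each entry is
# (request_time_s, bitrate_kbps, stall_duration_s) for one downloaded segment.

from statistics import mean, pstdev

def session_qoe(segments):
    bitrates = [b for _, b, _ in segments]
    stalls = [s for _, _, s in segments if s > 0]
    return {
        "average_quality_kbps": mean(bitrates),
        "quality_variation_kbps": pstdev(bitrates),   # spread of selected bitrates
        "rebuffering_events": len(stalls),
        "rebuffering_time_s": sum(stalls),
    }

if __name__ == "__main__":
    log = [(0, 1500, 0.0), (2, 1500, 0.0), (4, 700, 1.2), (6, 1500, 0.0)]
    print(session_qoe(log))
```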


Proceedings ArticleDOI
04 Jun 2012
TL;DR: This paper proposes a new soft real-time scheduling algorithm that employs flexible priority designations and automated scheduler class detection to provide a higher quality user experience and demonstrates that the overheads incurred from co-locating large numbers of virtual machines can be reduced from 66% with existing schedulers to under 2% in this system.
Abstract: Virtual Desktop Infrastructures (VDIs) are gaining popularity in cloud computing by allowing companies to deploy their office environments in a virtualized setting instead of relying on physical desktop machines. Consolidating many users into a VDI environment can significantly lower IT management expenses and enables new features such as "available-anywhere" desktops. However, barriers to broad adoption include the slow performance of virtualized I/O, CPU scheduling interference problems, and shared-cache contention. In this paper, we propose a new soft real-time scheduling algorithm that employs flexible priority designations (via utility functions) and automated scheduler class detection (via hypervisor monitoring of user behavior) to provide a higher quality user experience. We have implemented our scheduler within the Xen virtualization platform, and demonstrate that the overheads incurred from co-locating large numbers of virtual machines can be reduced from 66% with existing schedulers to under 2% in our system. We evaluate the benefits and overheads of using a smaller scheduling time quantum in a VDI setting, and show that the average overhead time per scheduler call is on the same order as the existing SEDF and Credit schedulers.

33 citations


Proceedings ArticleDOI
04 Jun 2012
TL;DR: The results show that Geo-fencing provides an effective framework for use with LBSs with a significant energy saving for mobile devices.
Abstract: Location-based services (LBSs) are often based on an area or place as opposed to an accurate determination of the precise location. However, current mobile software frameworks are geared towards using specific hardware devices (e.g., GPS, 3G or WiFi interfaces) to localize as precisely as possible with that device, often at the cost of a significant energy drain. Further, the location information is often not returned promptly enough. To address this problem, we design a framework for mobile devices, called Geo-fencing. The proposed framework is based on the observation that users move from one place to another and then stay at that place for a while. These places can be, for example, airports, shopping centers, homes, offices and so on. Geo-fencing defines such places as geographic areas bounded by polygons. It assumes people simply move from fence to fence and stay inside fences for a while. The framework coordinates the available communication chips and sensors based on their energy usage and the accuracy they provide. The essential goal is to determine when users check in or out of fences in an energy-efficient fashion so that the appropriate LBS can be triggered. Windows-based smartphones are used to prototype Geo-fencing. Validations are conducted with the resulting traces of outdoor and indoor activities of several users over several months. The results show that Geo-fencing provides an effective framework for use with LBSs, with significant energy savings for mobile devices.

31 citations
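
Since Geo-fencing defines places as geographic areas bounded by polygons, the basic fence check amounts to a point-in-polygon test. The sketch below uses the standard ray-casting method as a generic illustration of that check; it is not taken from the paper.

```python
# Generic ray-casting point-in-polygon test for deciding whether a position lies
# inside a geofence polygon (an illustration of the fence check, not the paper's code).

def inside_fence(point, polygon):
    """point: (x, y); polygon: list of (x, y) vertices in order."""
    x, y = point
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Does the horizontal ray from `point` cross edge (x1,y1)-(x2,y2)?
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

if __name__ == "__main__":
    office = [(0, 0), (4, 0), (4, 3), (0, 3)]
    print(inside_fence((1, 1), office), inside_fence((5, 1), office))  # True False
```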


Proceedings ArticleDOI
04 Jun 2012
TL;DR: By modifying the basic Galton-Watson stochastic branching process, a simple yet effective model is developed that can well capture the randomness of a video's popularity and the skewed video popularity distribution.
Abstract: Recent statistics suggest that online social network (OSN) users regularly share video contents from video sharing sites (VSSes), and a significant amount of the views of VSSes are indeed from OSN users nowadays. By crawling and comparing the statistics of the same videos shared in both RenRen (the largest Facebook-like OSN in China) and Youku (the largest Youtube-like VSS in China), we find that the huge and distinctive video requests from OSNs have substantially changed the workload of VSSes. In particular, OSNs amplify the skewness of video popularity so greatly that the most popular 0.31% of videos account for 80% of total views. Another interesting phenomenon is that many popular videos in VSSes may not receive many requests in OSNs. To further understand these findings, we track the propagation process of videos shared in RenRen since their introduction to this OSN, and analyze the effect of potential parameters on this process, including the number of initiators (users who bring the video to the OSN directly from a VSS), the branching factor (the number of users who watch a friend's shared video), and the share rate (the probability that the viewers of a video will further share it). Beyond our expectation, none of these factors determines a video's popularity in an OSN. Instead, the number of requests a video attracts once shared to an OSN shows great randomness. By modifying the basic Galton-Watson stochastic branching process, we develop a simple yet effective model to simulate the video propagation process in an OSN. Simulation results show that it can well capture the randomness of a video's popularity and the skewed video popularity distribution.

28 citations
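
The abstract names three propagation parameters (initiators, branching factor, share rate) but does not give the modified model itself, so the following is a plain Galton-Watson-style simulation under assumed distributions, meant only to show how those parameters drive a branching process and how widely the resulting view counts can spread.

```python
# Simple branching-process sketch of video propagation in an OSN, driven by the
# three quantities named in the abstract: number of initiators, branching factor
# (viewers per share), and share rate. This is a plain Galton-Watson-style
# simulation, not the paper's modified model.

import random

def simulate_views(initiators, branching_factor, share_rate, max_generations=20):
    sharers, total_views = initiators, 0
    for _ in range(max_generations):
        if sharers == 0:
            break
        # Each sharer's post is watched by roughly branching_factor friends.
        viewers = sum(random.randint(0, 2 * branching_factor) for _ in range(sharers))
        total_views += viewers
        # Each viewer re-shares the video with probability share_rate.
        sharers = sum(1 for _ in range(viewers) if random.random() < share_rate)
    return total_views

if __name__ == "__main__":
    samples = [simulate_views(initiators=3, branching_factor=4, share_rate=0.2)
               for _ in range(1000)]
    print(min(samples), max(samples))  # the wide spread illustrates the randomness
```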


Proceedings ArticleDOI
04 Jun 2012
TL;DR: Three solutions are proposed, one for each at the beginning, the middle and the end of a TCP connection, to ensure all the on-going TCP incast flows can maintain the self-clocking, thus eliminating the need to resort to retransmission timeout for recovery and preventing the throughput collapse.
Abstract: Incast applications have grown in popularity with the advancement of data center technology. It is found that TCP incast may suffer from the throughput collapse problem, as a consequence of TCP retransmission timeouts when the bottleneck buffer is overwhelmed and causes packet losses. This is critical to the Quality of Service of cloud computing applications. While some previous literature has proposed solutions, the problem is still not completely solved. In this paper, we investigate the three root causes for the poor performance of TCP incast flows and propose three solutions, one for each of the beginning, the middle and the end of a TCP connection. The three solutions are: admission control of TCP flows so that the flow population does not exceed the network's capacity; retransmission based on timestamps to detect the loss of retransmitted packets; and reiterated FIN packets to keep the TCP connection active until the termination of a session is acknowledged. The orchestration of these solutions prevents the throughput collapse. The main idea of these solutions is to ensure that all on-going TCP incast flows can maintain self-clocking, thus eliminating the need to resort to retransmission timeouts for recovery. We evaluate these solutions and find that they work well in preventing retransmission timeouts of TCP incast flows, hence also preventing the throughput collapse.

23 citations
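
For the admission-control part, a back-of-envelope feasibility check is sketched below: admit at most as many synchronized flows as can keep their minimum congestion windows in flight without overflowing the bottleneck buffer plus the bandwidth-delay product. The formula and the numbers are illustrative assumptions, not the paper's actual rule.

```python
# Back-of-envelope admission-control check for TCP incast (illustrative only):
# admit at most as many synchronized flows as can keep their minimum windows in
# flight without overflowing the bottleneck buffer plus the bandwidth-delay product.

def max_concurrent_flows(buffer_bytes, link_bps, rtt_s, mss_bytes=1460, min_cwnd_segments=2):
    bdp_bytes = link_bps / 8 * rtt_s
    per_flow_bytes = min_cwnd_segments * mss_bytes
    return int((buffer_bytes + bdp_bytes) // per_flow_bytes)

if __name__ == "__main__":
    # 1 Gbps link, 100 us RTT, 64 KB of switch buffer for the port.
    print(max_concurrent_flows(buffer_bytes=64 * 1024, link_bps=1e9, rtt_s=100e-6))
```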


Proceedings ArticleDOI
04 Jun 2012
TL;DR: Using real-world measurements, this paper identifies the key factors that affect the bandwidth multipliers of peer swarms and thus constructs a fine-grained performance model for addressing the optimal bandwidth allocation problem (OBAP), and develops a fast-convergent iterative algorithm to solve OBAP.
Abstract: Hybrid cloud-P2P content distribution (“CloudP2P”) provides a promising alternative to the conventional cloud-based or peer-to-peer (P2P)-based large-scale content distribution. It addresses the potential limitations of these two conventional approaches while inheriting their advantages. A key strength of CloudP2P lies in the so-called bandwidth multiplier effect: by appropriately allocating a small portion of cloud (server) bandwidth S i to a peer swarm i (consisting of users interested in the same content) to seed the content, the users in the peer swarm — with an aggregate download bandwidth D i — can then distribute the content among themselves; we refer to the ratio D i /S i as the bandwidth multiplier (for peer swarm i). A major problem in the design of a CloudP2P content distribution system is therefore how to allocate cloud (server) bandwidth to peer swarms so as to maximize the overall bandwidth multiplier effect of the system. In this paper, using real-world measurements, we identify the key factors that affect the bandwidth multipliers of peer swarms and thus construct a fine-grained performance model for addressing the optimal bandwidth allocation problem (OBAP). Then we develop a fast-convergent iterative algorithm to solve OBAP. Both trace-driven simulations and prototype implementation confirm the efficacy of our solution.

23 citations
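
The paper's fine-grained performance model is not given in the abstract, so the sketch below assumes a simple concave, diminishing-returns relation between the seeding bandwidth S_i and the aggregate download D_i and allocates server bandwidth iteratively to the swarm with the highest marginal gain; it shows the flavor of an iterative OBAP solver, not the paper's model or its fast-convergent algorithm.

```python
# Illustrative greedy/iterative allocation of cloud (server) bandwidth across peer
# swarms (assumed concave gain model; not the paper's fine-grained model or its
# fast-convergent algorithm). Bandwidth is handed out in small increments to the
# swarm whose aggregate download D_i currently gains the most from extra seeding.

import math

def aggregate_download(seed_bw, swarm_size, peer_upload):
    """Assumed diminishing-returns model: seeded content is re-shared by peers,
    but the gain is capped by the swarm's own aggregate upload capacity."""
    return min(swarm_size * peer_upload, seed_bw * (1 + math.log1p(swarm_size)))

def allocate(total_bw, swarms, step=1.0):
    alloc = [0.0] * len(swarms)
    remaining = total_bw
    while remaining >= step:
        gains = [
            aggregate_download(alloc[i] + step, n, u) - aggregate_download(alloc[i], n, u)
            for i, (n, u) in enumerate(swarms)
        ]
        best = max(range(len(swarms)), key=lambda i: gains[i])
        alloc[best] += step
        remaining -= step
    return alloc

if __name__ == "__main__":
    # Each swarm is (number of peers, per-peer upload in Mbps).
    print(allocate(100.0, [(50, 1.0), (500, 0.5), (10, 2.0)]))
```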


Proceedings ArticleDOI
04 Jun 2012
TL;DR: This paper considers the problem of optimizing the Physical Infrastructure Provider's (PIP) profit while minimizing the dissatisfaction of VN customers, and proposes to dynamically partition the PIP resources over VN requests belonging to different Quality of Service (QoS) classes using a periodic auction mechanism.
Abstract: In this paper, our focus is on the embedding problem, which consists in mapping virtual network (VN) resources onto the physical infrastructure network. More specifically, we consider the problem of optimizing the Physical Infrastructure Provider's (PIP) profit while minimizing the dissatisfaction of VN customers. We propose to dynamically partition the PIP resources over VN requests belonging to different Quality of Service (QoS) classes using a periodic auction mechanism. We formulate the dynamic embedding problem as an Integer Linear Program (ILP) that allows us to: (i) maximize the PIP profit, and (ii) calculate the optimal embedding scheme of VN requests without disrupting those previously accepted, in order to uphold QoS guarantees.

Proceedings ArticleDOI
04 Jun 2012
TL;DR: A service quality coordination model combining QoS and QoE is proposed and applied to a video-sharing service, with the coordination model derived from subjective experiments.
Abstract: Both Quality of Service (QoS) and Quality of Experience (QoE) are defined to specify the degree of service quality. Although they are dealt with in different layers of multi-layered models, coordinating the two is necessary to improve user satisfaction with telecommunication services. In this paper, after sorting out the concepts and specifications of QoS and QoE, a service quality coordination model combining them is proposed. The model is applied to a video-sharing service and its coordination model is derived based on subjective experiments. Structural equation modeling is used to compute user satisfaction from QoS and QoE.

Proceedings ArticleDOI
Tong Yang, Bo Yuan, Shenjiang Zhang, Ting Zhang, Ruian Duan, Yi Wang, Bin Liu
04 Jun 2012
TL;DR: Two suboptimal FIB compression algorithms are presented, EAR-fast and EAR-slow, based on the proposed Election and Representative (EAR) algorithm, which is an optimal FIB compression algorithm.
Abstract: With the fast development of the Internet, the size of routing tables in backbone routers has kept growing rapidly in recent years. An effective solution to control the memory occupation of the ever-growing huge routing table is Forwarding Information Base (FIB) compression. The existing optimal FIB compression algorithm, ORTC, suffers from high computational complexity and poor update performance, due to the loss of essential structure information during its compression process. To address this problem, we present two suboptimal FIB compression algorithms, EAR-fast and EAR-slow, based on our proposed Election and Representative (EAR) algorithm, which is an optimal FIB compression algorithm. The two suboptimal algorithms preserve the structure information, and support fast incremental updates while reducing computational complexity. Experiments on an 18-month real data set show that, compared with ORTC, the proposed EAR-fast algorithm requires only 9.8% of the compression time and 37.7% of the memory space, but supports faster updates while prolonging the recompression interval remarkably. All these performance advantages come at a cost of merely a 1.5% loss in compression ratio compared with the theoretical optimal ratio.
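
EAR itself is not described in the abstract; to make the idea of FIB compression concrete, the sketch below performs a much simpler redundancy-elimination pass on a binary prefix table, dropping any prefix whose next hop it would inherit from its longest surviving covering prefix anyway. It preserves forwarding behavior but is far weaker than ORTC or the EAR algorithms.

```python
# Minimal illustration of FIB compression on a binary prefix table: drop any prefix
# whose next hop equals the next hop it would inherit from its longest covering
# prefix anyway. This is a simple redundancy-elimination pass, far weaker than
# ORTC or the EAR algorithms, and is shown only to make the idea concrete.

def compress(fib):
    """fib: dict mapping bit-string prefixes (e.g. '10', '101') to next hops."""
    compressed = {}
    for prefix, nexthop in sorted(fib.items(), key=lambda kv: len(kv[0])):
        inherited = None
        # Longest proper covering prefix that survives compression.
        for l in range(len(prefix) - 1, -1, -1):
            if prefix[:l] in compressed:
                inherited = compressed[prefix[:l]]
                break
        if nexthop != inherited:
            compressed[prefix] = nexthop
    return compressed

if __name__ == "__main__":
    fib = {"": "A", "0": "A", "01": "B", "011": "B", "1": "C"}
    print(compress(fib))   # {'': 'A', '1': 'C', '01': 'B'}
```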

Proceedings ArticleDOI
04 Jun 2012
TL;DR: This work derives the effective service capacity for different scheduling strategies that the scheduler might apply based on a threshold error model and shows that for any QoS target and average link state there exists an optimal SNR margin improving the maximum sustainable rate.
Abstract: The concept of the effective service capacity is an analytical framework for evaluating QoS-constrained queuing performance of communication systems. Recently, it has been applied to the analysis of different wireless systems like point-to-point systems or multi-user systems. In contrast to previous work, we consider in this work slot-based systems where a scheduler determines a packet size to be transmitted at the beginning of the slot. For this, the scheduler can utilize outdated channel state information. Based on a threshold error model, we derive the effective service capacity for different scheduling strategies that the scheduler might apply. We show that even slightly outdated channel state information leads to a significant loss in capacity in comparison to an ideal system with perfect channel state information available at the transmitter. This loss depends on the ‘risk level’ the scheduler is willing to take, which is represented by an SNR margin. We show that for any QoS target and average link state there exists an optimal SNR margin improving the maximum sustainable rate. Typically, this SNR margin is around 3 dB but is sensitive to the QoS target and average link quality. Finally, we also show that adapting to the instantaneous channel state only pays off if the correlation between the channel estimate and the channel state is relatively high (with a coefficient above 0.9).
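
For reference, the usual definition of effective capacity on which this framework builds (after Wu and Negi) is shown below; S(t) is the cumulative service offered in t slots and θ is the QoS exponent tied to the target backlog/delay bound. This is the standard textbook form, not a formula quoted from the paper.

```latex
% Standard definition of the effective (service) capacity, after Wu and Negi;
% S(t) is the cumulative service offered by the channel over t slots and
% \theta > 0 is the QoS exponent tied to the target backlog/delay bound.
\[
  \alpha(\theta) \;=\; -\lim_{t \to \infty} \frac{1}{\theta\, t}\,
  \ln \mathbb{E}\!\left[ e^{-\theta S(t)} \right],
  \qquad
  \Pr\{Q > q\} \approx e^{-\theta q}.
\]
```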

Proceedings ArticleDOI
04 Jun 2012
TL;DR: This paper derives a time lower bound on cloned-tag identification and proposes a suite of time-efficient protocols toward approaching the timeLower bound, which may benefit RFID applications that distribute tagged objects across multiple places.
Abstract: Tag cloning attacks threaten a variety of Radio Frequency Identification (RFID) applications but are hard to prevent. To secure RFID applications that confine tagged objects to the same RFID system, this paper studies the cloned-tag identification problem. Although limited existing work has shed some light on the problem, designing fast cloned-tag identification protocols for applications in large-scale RFID systems has not yet been thoroughly investigated. To this end, we propose leveraging broadcast and collisions to identify cloned tags. This approach relieves us from resorting to complex cryptography techniques and time-consuming transmission of tag IDs. Based on this approach, we derive a time lower bound on cloned-tag identification and propose a suite of time-efficient protocols that approach the time lower bound. The execution time of our protocol is only 1.4 times the time lower bound, up to 91% less than that of the existing protocol. The proposed protocols may also benefit RFID applications that distribute tagged objects across multiple places.
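
The core observation behind leveraging broadcast and collisions is that a tag and its clone carry the same ID, so they select the same slot in a framed-ALOHA-style round and collide where the reader, knowing its inventory, expected a single reply. The toy simulation below illustrates only this observation, with a simplified slot function; it is not one of the paper's protocols.

```python
# Toy simulation of the broadcast-and-collision idea for cloned-tag detection:
# a tag and its clone share an ID, hence choose the same slot and collide in a
# slot where the reader (which knows the inventoried IDs) expected exactly one
# reply. This only illustrates the observation; it is not the paper's protocol.

def slot_of(tag_id, frame_size):
    # Stand-in for the pseudo-random slot selection a real tag would perform.
    return sum(tag_id.encode()) % frame_size

def detect_cloned_ids(known_ids, tags_in_field, frame_size=16):
    # Expected occupancy per slot, from the reader's inventory list.
    expected = {}
    for tid in known_ids:
        expected.setdefault(slot_of(tid, frame_size), []).append(tid)
    # Observed reply count per slot (each physical tag replies once).
    observed = {}
    for tid in tags_in_field:
        s = slot_of(tid, frame_size)
        observed[s] = observed.get(s, 0) + 1
    # A slot expected to hold one ID but showing a collision exposes a clone.
    return [ids[0] for s, ids in expected.items()
            if len(ids) == 1 and observed.get(s, 0) > 1]

if __name__ == "__main__":
    inventory = ["T1", "T2", "T3", "T4"]
    field = ["T1", "T2", "T3", "T4", "T2"]   # "T2" has been cloned
    print(detect_cloned_ids(inventory, field))   # ['T2']
```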

Proceedings ArticleDOI
04 Jun 2012
TL;DR: InSite is a light-weight and easy-to-deploy solution for managing the QoE of a set of video flows of a service provider, which are served from a data center, and manages the video flows that are transmitted over TCP.
Abstract: The Internet is witnessing a rapid increase in video traffic. Due to the scalability and the cost savings offered by cloud computing, Internet video service providers are increasingly delivering their content from multi-tenant cloud data centers. One of the major challenges faced by such a video service provider is the management of the Quality-of-Experience (QoE) of the end-users in the presence of Variable Bit Rate (VBR) video flows, time-varying network conditions in the Internet, and the bounded egress bandwidth provided by the data center. To this end, we present InSite, a light-weight and easy-to-deploy solution for managing the QoE of a set of video flows of a service provider, which are served from a data center. InSite is deployed at the egress of a data center, between the video servers and the clients, and manages the video flows that are transmitted over TCP. The solution uses a novel generalized binary search technique to concurrently search for the appropriate flow rates for a set of flows, with the goal of maximizing the QoE-fairness across the flows, as opposed to TCP-fairness. The search takes into account the total egress bandwidth allocated for the set of video flows at the data center, the unknown and possibly time-varying capacities of any remote bottleneck links, and the playout buffer sizes of the video flows. The solution is also designed to operate with minimal modifications to the video servers and the clients. In our evaluations using extensive ns-3 simulations and a testbed implementation for serving videos over TCP, we observe that deploying InSite achieves a severalfold reduction in playout stalls over a system without InSite.
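
The generalized binary search is not detailed in the abstract; the sketch below shows the general flavor under strong assumptions: each flow exposes a monotone mapping from a common QoE level to the bitrate it needs, and we binary-search the largest common level whose total rate fits the egress budget. Function names and rate curves are hypothetical.

```python
# Sketch of searching for QoE-fair rates under an egress bandwidth budget
# (illustrative; not InSite's actual generalized binary search). Each flow has an
# assumed monotone mapping from a QoE level in [0, 1] to the bitrate it needs;
# we binary-search the largest common QoE level whose total rate fits the budget.

def qoe_fair_rates(flows, egress_capacity, iterations=40):
    """flows: list of functions rate(q) returning the bitrate needed for QoE q."""
    lo, hi = 0.0, 1.0
    for _ in range(iterations):
        mid = (lo + hi) / 2
        if sum(f(mid) for f in flows) <= egress_capacity:
            lo = mid          # feasible: try a higher common QoE level
        else:
            hi = mid
    return lo, [f(lo) for f in flows]

if __name__ == "__main__":
    # Two VBR-ish flows with different rate requirements for the same QoE level (Mbps).
    flows = [lambda q: 1.0 + 4.0 * q, lambda q: 0.5 + 2.0 * q]
    print(qoe_fair_rates(flows, egress_capacity=5.0))
```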

Proceedings ArticleDOI
04 Jun 2012
TL;DR: A distributed solution called DECOR is presented that achieves global optimization based on local information, comes close to centralized approaches in terms of performance, and can easily scale up to large networks.
Abstract: Network resources are often limited, so how to use them efficiently is an issue that arises in many important scenarios. Many recent proposals rely on a central controller to carefully orchestrate resources across multiple network locations. The central controller gathers network information and relative levels of usage of different resources and calculates optimized task allocation arrangements to maximize some global benefit. Examples of architectures that use this framework include coordinated sampling (cSamp [1]) and redundancy elimination (SmartRE [2]). However, a centralized solution creates practical problems as it is susceptible to overload, and the controller is a single point of failure. In this paper, we present a distributed solution called DECOR that achieves global optimization based on local information and comes close to centralized approaches in terms of performance. In DECOR, the responsibility of resource monitoring and information gathering is spread among multiple nodes; thus, no single point is overloaded. Allocation of tasks is also done in a similar distributed fashion. DECOR can easily scale up to large networks, and partial network failures do not affect DECOR's functioning in other parts of the network. DECOR can be applied to most path-based applications. We describe in detail how to apply it to distributed SmartRE and implement it in the Click software router.

Proceedings ArticleDOI
04 Jun 2012
TL;DR: This paper finds that the representative AQM, random early detection (RED), fails to maintain a stable backlog under time-varying wireless losses, and devises the integral controller (IC) as an embodiment of the control-theoretic internal model principle to robustly track the backlog to a preset reference level.
Abstract: In order to maintain a small, stable backlog at the router buffer, active queue management (AQM) algorithms drop packets probabilistically at the onset of congestion, leading to backoffs by Transmission Control Protocol (TCP) flows. However, wireless losses may be misinterpreted as congestive losses and induce spurious backoffs. In this paper, we raise the basic question: can AQM maintain a stable, small backlog under wireless losses? We find that the representative AQM, random early detection (RED), fails to maintain a stable backlog under time-varying wireless losses. We find that the key to resolving the problem is to robustly track the backlog to a preset reference level, and we apply a control-theoretic vehicle, the internal model principle, to realize such tracking. We further devise the integral controller (IC) as an embodiment of the principle. Our simulation results show that IC is robust against time-varying wireless losses under various network scenarios.
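
A minimal form of the integral-controller idea can be written as the update below: the drop probability is nudged in proportion to the accumulated error between the instantaneous backlog and the reference level. The gain and the sample trajectory are illustrative, not the paper's tuned design.

```python
# Minimal integral-controller AQM update (illustrative gain, not the paper's
# tuned design): the drop probability is nudged in proportion to the error between
# the current backlog and the reference level, so the backlog is tracked to q_ref
# even when wireless losses perturb the TCP flows.

def make_integral_controller(q_ref_pkts, gain=0.0005):
    p = 0.0
    def update(queue_len_pkts):
        nonlocal p
        p += gain * (queue_len_pkts - q_ref_pkts)   # integrate the backlog error
        p = min(max(p, 0.0), 1.0)                   # keep it a valid probability
        return p
    return update

if __name__ == "__main__":
    controller = make_integral_controller(q_ref_pkts=50)
    for q in (200, 150, 100, 60, 50, 45, 50):       # a sample backlog trajectory
        print(round(controller(q), 4))
```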

Proceedings ArticleDOI
04 Jun 2012
TL;DR: A novel approach is proposed that adopts a flexible segmentation policy and generalizes both LRU and LFU when applied to segmented accesses, and is shown to significantly lower wireless backhaul traffic.
Abstract: Video objects are much larger in size than traditional web objects and tend not to be viewed in entirety. Hence, caching them partially is a promising approach. Also, the projected growth in video traffic over wireless cellular networks calls for resource-efficient caching mechanisms in the wireless edge to lower traffic over the cellular backhaul and peering links and their associated costs. An evaluation of traditional partial caching solutions proposed in the literature shows that known solutions are not robust to video viewing patterns, increasing object pool size, changing object popularity, or limitation in the resources available for caching at the wireless network elements. In this paper, to overcome the limitations, we propose a novel approach that adopts a flexible segmentation policy and generalizes both LRU and LFU when applied to segmented accesses, and in our simulations, is shown to significantly lower wireless backhaul traffic (by around 20-30% and in some cases even higher).
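
As one way to picture a policy that generalizes LRU and LFU over video segments, the sketch below scores cached segments with a weighted blend of normalized recency and frequency (weight 1 behaves LRU-like, weight 0 LFU-like). The scoring form and the fixed segmentation are assumptions; the paper's flexible segmentation policy may differ substantially.

```python
# Sketch of a segment cache whose eviction score blends recency and frequency
# (assumed scoring form: w=1 behaves like LRU, w=0 like LFU). Shown only to make
# the idea of caching video segments, rather than whole objects, concrete.

class SegmentCache:
    """Caches fixed-size video segments; eviction blends recency and frequency."""

    def __init__(self, capacity_segments, w=0.5):
        self.capacity, self.w = capacity_segments, w
        self.clock = 0
        self.meta = {}   # (video_id, seg_no) -> [last_access_time, hit_count]

    def access(self, video_id, seg_no):
        self.clock += 1
        key = (video_id, seg_no)
        hit = key in self.meta
        count = self.meta[key][1] if hit else 0
        self.meta[key] = [self.clock, count + 1]
        if len(self.meta) > self.capacity:
            self._evict()
        return hit

    def _evict(self):
        def score(item):
            _, (last, count) = item
            recency = last / self.clock        # in (0, 1], larger = fresher
            frequency = count / self.clock     # normalized access frequency
            return self.w * recency + (1 - self.w) * frequency
        victim = min(self.meta.items(), key=score)[0]
        del self.meta[victim]

if __name__ == "__main__":
    cache = SegmentCache(capacity_segments=100, w=0.3)
    hits = sum(cache.access("v1", s % 120) for s in range(1000))
    print("hit ratio:", hits / 1000)
```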

Proceedings ArticleDOI
04 Jun 2012
TL;DR: A novel replication algorithm that deploys an optimal number of O(√n) replicas across the sensor network and achieves a search success rate of 98% while reducing the search energy consumption by an order of magnitude compared with existing schemes.
Abstract: Audio represents one of the most appealing yet least exploited modalities in wireless sensor networks, due to the potentially extremely large data volumes and limited wireless capacity. Therefore, how to effectively collect audio sensing information remains a challenging problem. In this paper, we propose a new paradigm of audio information collection based on the concept of audio-on-demand. We consider a sink-free environment targeting disaster management, where audio chunks are stored inside the network for retrieval. The difficulty is to guarantee a high search success rate without infrastructure support. To solve the problem, we design a novel replication algorithm that deploys an optimal number of O(√n) replicas across the sensor network. We prove the optimality of the energy consumption of the algorithm, and use real test-bed experiments and extensive simulations to evaluate the performance and efficiency of our design. The experimental results show that our design can provide satisfactory quality of audio-on-demand service with short startup latency and slight playback jitter. Extensive simulation results show that this design achieves a search success rate of 98% while reducing the search energy consumption by an order of magnitude compared with existing schemes.
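
A toy Monte-Carlo experiment below illustrates why on the order of √n replicas suffice in a sink-free setting: if a chunk sits on about c·√n random nodes and a query probes about c·√n random nodes, the two sets intersect with high probability (a birthday-paradox argument). This illustrates the principle only; it is not the paper's replication algorithm or its optimality proof.

```python
# Toy Monte-Carlo illustration of the sqrt(n) replication principle behind
# sink-free search (not the paper's algorithm or proof): store a chunk on about
# c*sqrt(n) random nodes and let a query probe about c*sqrt(n) random nodes; the
# two random sets intersect with high probability (a birthday-paradox argument).

import math
import random

def search_success_rate(n_nodes, c=1.5, trials=2000):
    k = max(1, int(c * math.sqrt(n_nodes)))
    hits = 0
    nodes = range(n_nodes)
    for _ in range(trials):
        replicas = set(random.sample(nodes, k))
        probes = random.sample(nodes, k)
        hits += any(p in replicas for p in probes)
    return hits / trials

if __name__ == "__main__":
    for n in (100, 1000, 10000):
        print(n, round(search_success_rate(n), 3))   # stays high as n grows
```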

Proceedings ArticleDOI
04 Jun 2012
TL;DR: ADD is presented, which extracts dependency paths for each application by decomposing the application-layer connectivity graph inferred from passive network monitoring data, and is especially effective in the presence of overlapping and multi-hop applications and resilient to data loss and estimation errors.
Abstract: Driven by the large-scale growth of applications deployment in data centers and complicated interactions between service components, automated application dependency discovery becomes essential to daily system management and operation. In this paper, we present ADD, which extracts dependency paths for each application by decomposing the application-layer connectivity graph inferred from passive network monitoring data. ADD utilizes a series of statistical techniques and is based on the combination of global observation of application traffic matrix in the data center and local observation of traffic volumes at small time scales on each server. Compared to existing approaches, ADD is especially effective in the presence of overlapping and multi-hop applications and resilient to data loss and estimation errors.

Proceedings ArticleDOI
04 Jun 2012
TL;DR: It is shown that as the number of movies becomes large and there is some skewness in movie popularity, one cannot expect the P2P part of the system to reduce server load and provide availability to all movies at the same time.
Abstract: We consider a P2P-assisted content storage and delivery system to support a streaming Video-on-Demand (VoD) service. In this system, the peers are part of the service provider (e.g. set-top boxes) with limited storage space. Servers with ample storage and bandwidth are deployed to guarantee availability and quality, but it is desirable to minimize the server utilization to reduce costs. Based on experience implementing a deployed P2P VoD system, it was suggested in [1] that a movie's availability should be proportional to the movie's popularity. Based on further refinement, it was observed in [2] that performance can be further improved by providing more-than-proportional availability for cold movies in the P2P system. In this paper, we show that as the number of movies becomes large and there is some skewness in movie popularity, one cannot expect the P2P part of the system to both reduce server load and provide availability to all movies at the same time. It is a trade-off between coverage of movies and streaming throughput provided by the P2P system. If the goal is to minimize server load, under some reasonable conditions, we show that it is best to store and replicate only the hottest K* movies in the P2P part of the system. We also study the relationship between the skewness of the movie popularity distribution, P2P resources and the value of K*. Finally, we use simulation to validate our results.

Proceedings ArticleDOI
04 Jun 2012
TL;DR: A profitable business model is proposed to enable all involved parties to enlarge their benefits with the help of a novel QoS-based architecture integrated with caching techniques and shows that the tripartite game can result in a win-win-win outcome.
Abstract: Peer-to-peer (P2P) streaming applications have led to disharmony among the involved parties: Content Service Providers (CSPs), Internet Service Providers (ISPs) and P2P streaming End-Users (EUs). This disharmony is not only a technical problem at the network aspect, but also an economic problem at the business aspect. To handle this tussle, this paper proposes a profitable business model to enable all involved parties to enlarge their benefits with the help of a novel QoS-based architecture integrated with caching techniques. We model the interactions, including competition and innovation, among CSPs, ISPs and EUs as a tripartite game by introducing a pricing scheme, which captures both the network and business aspects of P2P streaming applications. We study the tripartite game in different market scenarios as more and more ISPs and CSPs enter the market. A three-stage Stackelberg game combined with a Cournot game is proposed to study the interdependent, interactive and competitive relationships among CSPs, ISPs and EUs. Moreover, we investigate how market competition motivates ISPs to upgrade the cache service infrastructure. Our theoretical analysis and empirical study both show that the tripartite game can result in a win-win-win outcome. Market competition plays an important role in curbing the pricing power of CSPs and ISPs, and this effect is more remarkable when the numbers of CSPs and ISPs become infinite. Interestingly, we find that in the tripartite game there exists a longstop at which ISPs may have no incentive to upgrade the cache service infrastructure. However, increasing the level of market competition can propel the innovation of ISPs.

Proceedings ArticleDOI
04 Jun 2012
TL;DR: This paper provides the first study on attacks toward MFNC systems, and proposes a decentralized trust and reputation approach, called NCShield, to counter such attacks, which is able to distinguish between legitimate distance variations and malicious distance alterations.
Abstract: While network coordinate (NC) systems provide scalable Internet distance estimation service and are useful for various Internet applications, decentralized, matrix factorization-based NC (MFNC) systems have received particular attention recently. They can serve large-scale distributed applications (as opposed to centralized NC systems) and do not need to assume triangle inequality (as opposed to Euclidean-based NC systems). However, because of their decentralized nature, MFNC systems are vulnerable to various malicious attacks. In this paper, we provide the first study on attacks toward MFNC systems, and propose a decentralized trust and reputation approach, called NCShield, to counter such attacks. Different from previous approaches, our approach is able to distinguish between legitimate distance variations and malicious distance alterations. Using four representative data sets from the Internet, we show that NCShield can defend against attacks with high accuracy. For example, when selecting node pairs with a shorter distance than a predefined threshold in an online game scenario, even if 30% nodes are malicious, NCShield can reduce the false positive rate from 45.5% to 3.7%.

Proceedings ArticleDOI
04 Jun 2012
TL;DR: Elite addresses the playback lag problem in peer-assisted live streaming systems by initializing users with layered proportional initial scheduling points, and achieves shorter average playback lag compared with synchronized strategies such as R2.
Abstract: Small playback lag in live streaming is important for time-critical and interactive applications such as live stock market updates, sports and remote education. In this paper, we present Elite, which addresses the playback lag problem in peer-assisted live streaming systems. Instead of deploying a large initial offset to all the users, Elite seeks the possibility of initializing users with layered proportional initial scheduling points, thus achieving a differentiated playback lag service for the system. To save server bandwidth and reduce lag time, Elite employs a novel strategy that arranges peers into a virtual tree structure and quantifies the playback lag of each layer, which finally converges to a constant value. This way, Elite can help users achieve much shorter average playback lag and prioritized service within the same channel. As illustrated in our design, analysis, and simulation studies, Elite is able to fully exploit a limited pool of server bandwidth to support peer-assisted live streaming with prioritized playback lag, and achieves shorter average playback lag compared with synchronized strategies such as R2.

Proceedings ArticleDOI
04 Jun 2012
TL;DR: Simulation provides evidence that theoretically the P2P VoD system can work well without extra replica space as long as the bandwidth of the peers is large enough, but the extra storage space can help improve the performance of the system in practical scenarios where the peers’ bandwidth is limited.
Abstract: The P2P-assisted video-on-demand (P2P VoD) service has achieved tremendous success among Internet users. There are three core strategies in a P2P VoD system: the piece selection policy, the peer selection policy, and the replica management policy. Different from existing research that considers only single-policy optimization, we study, for the first time, the existing P2P VoD policies using a simulation framework to understand the performance of different policy compositions. The simulation results indicate that when the bandwidth and storage resources are limited in the P2P VoD system, the composition of the sequential piece selection policy, the cascading peer selection policy and the proportional replica management policy has the best performance among all policy compositions. However, when the bandwidth and storage resources are sufficient in the P2P VoD system, there is little difference between the different choices.

Proceedings ArticleDOI
04 Jun 2012
TL;DR: This work addresses the applicability of a switched queuing network as the sole communication network in an aircraft cabin, and proposes an algorithm based on a mixed-integer program to determine hard bounds for the end-to-end delay.
Abstract: This work addresses the applicability of a switched queuing network as the sole communication network in an aircraft cabin, and proposes an algorithm based on a mixed-integer program to determine hard bounds for the end-to-end delay. These hard bounds guarantee mandatory performance bounds for safety-relevant functions in the aircraft cabin, such as audio announcements or smoke detection. Techniques from the field of deterministic Network Calculus are used to benchmark the results from our novel approach. The results show that our solution allows improved mapping of non-preemptive queuing networks, compared to relevant state-of-the-art approaches.

Proceedings ArticleDOI
04 Jun 2012
TL;DR: A Partial Periodic Pattern Mining algorithm to identify frequent spectrum occupancy patterns that are hidden in the spectrum usage of a channel is introduced, and a significant reduction in miss rate in channel state prediction is shown.
Abstract: Cognitive radio appears as a promising technology to allocate wireless spectrum between licensed and unlicensed users. Predictive methods for inferring the availability of spectrum holes can help to reduce collisions and improve spectrum extraction. This paper introduces a Partial Periodic Pattern Mining (PPPM) algorithm to identify frequent spectrum occupancy patterns that are hidden in the spectrum usage of a channel. The mined frequent patterns are then used to predict future channel states (i.e., busy or idle). PPPM outperforms traditional Frequent Pattern Mining (FPM) by considering real patterns that do not repeat perfectly. Using real-life network activities, we show a significant reduction in the miss rate of channel state prediction.
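
A heavily simplified sketch of partial periodic prediction is given below: for an assumed period, count how often each offset has been busy and predict busy whenever the historical busy fraction at that offset exceeds a support threshold, so the pattern does not have to repeat perfectly in every period. The real PPPM algorithm mines patterns far more generally; the period, threshold and data here are made up.

```python
# Much-simplified sketch of partial periodic occupancy prediction (not the PPPM
# algorithm itself): for an assumed period, count how often each offset within the
# period was busy, and predict "busy" for offsets whose historical busy fraction
# exceeds a support threshold. Partial periodicity means the pattern need not
# repeat perfectly in every period.

def mine_partial_pattern(history, period, min_support=0.6):
    busy_counts = [0] * period
    totals = [0] * period
    for t, state in enumerate(history):      # state: 1 = busy, 0 = idle
        busy_counts[t % period] += state
        totals[t % period] += 1
    return [1 if totals[o] and busy_counts[o] / totals[o] >= min_support else 0
            for o in range(period)]

def predict(history, period, horizon, min_support=0.6):
    pattern = mine_partial_pattern(history, period, min_support)
    start = len(history)
    return [pattern[(start + i) % period] for i in range(horizon)]

if __name__ == "__main__":
    # A channel that is mostly busy in the first half of each 8-slot period,
    # with occasional deviations (partial periodicity).
    hist = [1, 1, 1, 1, 0, 0, 0, 0] * 10
    hist[4] = 1; hist[17] = 0
    print(predict(hist, period=8, horizon=8))   # [1, 1, 1, 1, 0, 0, 0, 0]
```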

Proceedings ArticleDOI
04 Jun 2012
TL;DR: This work designs a topology independent resource allocation and optimization approach, NetDEO, based on a swarm intelligence optimization model that improves the scalability of the DCN by relocating virtual machines (VMs) and matching resource demand and availability.
Abstract: With the ever-increasing number and complexity of applications deployed in data centers, the underlying network infrastructure can no longer sustain such a trend and exhibits several problems, such as resource fragmentation and low bisection bandwidth. In pursuit of a real-world applicable data center network (DCN) optimization approach that continuously maintains balanced network performance with high cost effectiveness, we design a topology independent resource allocation and optimization approach, NetDEO. Based on a swarm intelligence optimization model, NetDEO improves the scalability of the DCN by relocating virtual machines (VMs) and matching resource demand and availability. NetDEO is capable of (1) incrementally optimizing an existing VM placement in a data center; (2) deriving optimal deployment plans for newly added VMs; and (3) providing hardware upgrade suggestions and allowing the DCN to evolve as the workload changes over time. We evaluate the performance of NetDEO using realistic workload traces and simulated large-scale DCN under various topologies.

Proceedings ArticleDOI
04 Jun 2012
TL;DR: An efficient method of simulating wireless networks that use CSMA/CA-based protocols in the MAC layer is presented, providing the first approach to coexistence of the stochastic and event-based models in wireless multi-hop network simulation.
Abstract: In this paper, we design an efficient method of simulating wireless networks that use CSMA/CA-based protocols in the MAC layer. In the method, a stochastic model to estimate the CSMA/CA frame transmission delay is naturally incorporated into the conventional fully event-based model. The stochastic model can simplify the interactions between a frame transmitter and its surrounding nodes, which alleviates the event scheduling overhead in simulation. The important feature is that the stochastic model can be applied on a "per-node" and "per-time" basis, i.e., we may simulate the behavior of some intended nodes precisely while the others are simplified by the stochastic model to save computational resources. To the best of our knowledge, this is the first approach to coexistence of the stochastic and event-based models in wireless multi-hop network simulation. We have implemented this scheme in a commercial network simulator and conducted several experiments. From the results, it is confirmed that the proposed method can perform simulation of frame transmission much faster than the fully event-based simulation while achieving the same accuracy as the conventional model.