
Showing papers presented at "International Workshop on Quality of Service in 2009"


Proceedings ArticleDOI
13 Jul 2009
TL;DR: This paper proposes an effective and flexible distributed scheme that, in contrast to its predecessors, utilizes the homomorphic token with distributed verification of erasure-coded data, achieving the integration of storage correctness insurance and data error localization, i.e., the identification of misbehaving server(s).
Abstract: Cloud Computing has been envisioned as the next-generation architecture of IT Enterprise. In contrast to traditional solutions, where the IT services are under proper physical, logical and personnel controls, Cloud Computing moves the application software and databases to the large data centers, where the management of the data and services may not be fully trustworthy. This unique attribute, however, poses many new security challenges which have not been well understood. In this article, we focus on cloud data storage security, which has always been an important aspect of quality of service. To ensure the correctness of users' data in the cloud, we propose an effective and flexible distributed scheme with two salient features, in contrast to its predecessors. By utilizing the homomorphic token with distributed verification of erasure-coded data, our scheme achieves the integration of storage correctness insurance and data error localization, i.e., the identification of misbehaving server(s). Unlike most prior works, the new scheme further supports secure and efficient dynamic operations on data blocks, including data update, delete, and append. Extensive security and performance analysis shows that the proposed scheme is highly efficient and resilient against Byzantine failure, malicious data modification attack, and even server colluding attacks.
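The token-based verification idea can be illustrated with a toy sketch (NOT the paper's actual construction, which operates over erasure-coded data with pseudorandom challenges): the verifier precomputes a short token as a random linear combination of data blocks, and a challenged server must reproduce it from what it stores. All names, values, and the modulus below are invented for illustration.

```python
import random

# Toy sketch of token-based storage verification: a server holding the
# correct blocks can reproduce the verifier's token; a server holding a
# tampered block almost surely cannot.
P = 2_147_483_647  # a large prime modulus (assumption for this sketch)

def make_token(blocks, coeffs):
    """Verifier-side token: sum of coeff_i * block_i (mod P)."""
    return sum(c * b for c, b in zip(coeffs, blocks)) % P

def server_response(stored_blocks, coeffs):
    """The challenged server aggregates its stored blocks the same way."""
    return sum(c * b for c, b in zip(coeffs, stored_blocks)) % P

blocks = [12, 7, 99, 3]                            # toy data blocks
coeffs = [random.randrange(1, P) for _ in blocks]  # challenge coefficients
token = make_token(blocks, coeffs)

honest = server_response(blocks, coeffs)             # correct storage
corrupted = server_response([12, 7, 98, 3], coeffs)  # block 2 tampered
```

Because the response differs from the token by a nonzero multiple of the challenge coefficient (mod P), a single tampered block is detected, which is the intuition behind localizing the misbehaving server.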

799 citations


Proceedings ArticleDOI
13 Jul 2009
TL;DR: Co-Con is proposed, a novel cluster-level control architecture that coordinates individual power and performance control loops for virtualized server clusters and configures the two control loops rigorously, based on feedback control theory, for theoretically guaranteed control accuracy and system stability.
Abstract: Today's data centers face two critical challenges. First, various customers need to be assured by meeting their required service-level agreements such as response time and throughput. Second, server power consumption must be controlled in order to avoid failures caused by power capacity overload or system overheating due to increasingly high server density. However, existing work controls power and application-level performance separately and thus cannot simultaneously provide explicit guarantees on both. This paper proposes Co-Con, a novel cluster-level control architecture that coordinates individual power and performance control loops for virtualized server clusters. To emulate the current practice in data centers, the power control loop changes hardware power states with no regard to the application-level performance. The performance control loop is then designed for each virtual machine to achieve the desired performance even when the system model varies significantly due to the impact of power control. Co-Con configures the two control loops rigorously, based on feedback control theory, for theoretically guaranteed control accuracy and system stability. Empirical results demonstrate that Co-Con can simultaneously provide effective control on both application-level performance and underlying power consumption.
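The paper designs its controllers rigorously from measured system models; as a loose illustration of what a feedback power-capping loop does, here is a minimal integral-control sketch in which the linear plant model, set point, and gain are all invented:

```python
# Minimal sketch of a feedback power-capping loop, loosely in the spirit
# of Co-Con's power control loop. The toy plant, set point, and integral
# gain are assumptions for illustration only.
POWER_CAP = 170.0   # target power budget in watts (assumption)
K_I = 0.4           # integral gain (assumption)

def plant_power(freq):
    """Toy plant: server power grows linearly with the CPU frequency
    setting, freq in [0, 1]."""
    return 80.0 + 120.0 * freq

freq = 1.0          # start at full speed
history = []
for _ in range(30):
    power = plant_power(freq)
    error = POWER_CAP - power          # negative -> over budget, slow down
    freq = min(1.0, max(0.0, freq + K_I * error / 120.0))
    history.append(power)
```

With a linear plant this loop converges geometrically to the cap; the point of the paper's control-theoretic design is to guarantee such stability even when the performance loop simultaneously perturbs the system model.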

58 citations


Proceedings ArticleDOI
Jun Li1, Shuang Yang1, Xin Wang1, Xiangyang Xue1, Baochun Li2 
13 Jul 2009
TL;DR: It is proved that a maximum spanning tree is an optimal regeneration tree and the results show the tree-structured scheme can reduce the regeneration time by 75%–82% and improve data availability by 73%–124%.
Abstract: Distributed storage systems, built on peer-to-peer networks, can provide large-scale data storage and high data reliability through redundancy schemes such as replication, erasure codes, and linear network coding. Redundant data may get lost due to the instability of distributed systems, such as permanent node departures, hardware failures, and accidental deletions. In order to maintain data availability, it is necessary to regenerate new redundant data in another node, referred to as a newcomer. Regeneration is expected to be finished as soon as possible, because the regeneration time can influence the data reliability and availability of distributed storage systems. It has been acknowledged that linear network coding can regenerate redundant data with less network traffic than replication and erasure codes. However, previous regeneration schemes are all star-structured regeneration schemes, in which data are transferred directly from existing storage nodes, referred to as providers, to the newcomer, so the regeneration time is always limited by the path with the narrowest bandwidth between newcomer and provider, due to bandwidth heterogeneity. In this paper, we exploit the bandwidth between providers and propose a tree-structured regeneration scheme using linear network coding. In our scheme, data can be transferred from providers to the newcomer through a regeneration tree, defined as a spanning tree covering the newcomer and all the providers. In a regeneration tree, a provider can receive data from other providers, then encode the received data with the data this provider stores, and finally send the encoded data to another provider or to the newcomer. We prove that a maximum spanning tree is an optimal regeneration tree and analyze its performance. In a trace-based simulation, the results show the tree-structured scheme can reduce the regeneration time by 75%–82% and improve data availability by 73%–124%.
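The optimality result lends itself to a direct implementation: a maximum spanning tree over the bandwidth graph is obtained by running Kruskal's algorithm with edges sorted in descending weight order, which maximizes the bottleneck bandwidth of the tree. A minimal sketch (node ids and bandwidth values invented):

```python
def max_spanning_tree(n, edges):
    """Kruskal's algorithm over descending edge weights with union-find.
    edges: list of (weight, u, v). Returns the tree's edge list.
    The maximum spanning tree maximizes the minimum edge weight
    (bottleneck bandwidth) among all spanning trees."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    tree = []
    for w, u, v in sorted(edges, reverse=True):
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            tree.append((w, u, v))
    return tree

# newcomer = node 0, providers = nodes 1..3; weights are the pairwise
# available bandwidths (invented numbers)
edges = [(10, 0, 1), (2, 0, 2), (3, 0, 3), (8, 1, 2), (1, 1, 3), (9, 2, 3)]
tree = max_spanning_tree(4, edges)
```

In this toy graph the tree keeps the 10, 9, and 8 links and avoids the narrow 1–3 Mbps links that a star rooted at the newcomer would be forced to use.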

37 citations


Proceedings ArticleDOI
13 Jul 2009
TL;DR: This paper presents an Adaptive Dynamic Channel Allocation protocol (ADCA), which considers optimization for both throughput and delay in the channel assignment, and proposes an Interference and Congestion Aware Routing protocol (ICAR) in the hybrid network with both static and dynamic links, which balances the channel usage in the network.
Abstract: Many efforts have been devoted to maximizing network throughput in a multi-channel multi-radio wireless mesh network. Current solutions are based on either pure static or pure dynamic channel allocation approaches. In this paper, we propose a hybrid multi-channel multi-radio wireless mesh networking architecture, where each mesh node has both static and dynamic interfaces. We first present an Adaptive Dynamic Channel Allocation protocol (ADCA), which considers optimization for both throughput and delay in the channel assignment. In addition, we also propose an Interference and Congestion Aware Routing protocol (ICAR) in the hybrid network with both static and dynamic links, which balances the channel usage in the network. Compared to previous work, our simulation results show that ADCA reduces the packet delay considerably without degrading the network throughput. Moreover, the hybrid architecture shows much better adaptivity to changing traffic than pure static architecture without dramatic increase in overhead.

33 citations


Proceedings ArticleDOI
13 Jul 2009
TL;DR: This paper revisits the multicast scheduling problem with a new perspective in the specific case of MBS in WiMAX, considering the use of multiple OFDMA channels, multiple hops, and multiple paths simultaneously.
Abstract: The Multicast and Broadcast Service (MBS) in WiMAX has emerged as the next-generation wireless infrastructure to broadcast data or digital video. Multicast scheduling protocols play a critical role in achieving efficient multicast transmissions in MBS. However, the current state-of-the-art protocols, based on the shared-channel single-hop transmission model, do not exploit any potential advantages provided by channel and cooperative diversity in multicast sessions, even though WiMAX OFDMA provides such convenience. The inefficient multicast transmission leads to the under-utilization of scarce wireless bandwidth. In this paper, we revisit the multicast scheduling problem with a new perspective in the specific case of MBS in WiMAX, considering the use of multiple OFDMA channels, multiple hops, and multiple paths simultaneously. Participating users in the multicast session are dynamically enabled as relays and concurrently communicate with others to supply more data. During the transmission, random network coding is adopted, which helps to significantly reduce the overhead. We design practical scheduling protocols by jointly studying the problems of channel and power allocation on relays, which are very critical for efficient cooperative communication. Protocols that are theoretically and practically feasible are provided to optimize multicast rates and to efficiently allocate resources in the network. Finally, with simulation studies, we evaluate our proposed protocols to highlight the effectiveness of cooperative communication and random network coding in multicast scheduling with respect to improving performance.

31 citations


Proceedings ArticleDOI
13 Jul 2009
TL;DR: The approach integrated with the model-independent self-tuning fuzzy controller can efficiently assure the average and the 90th-percentile end-to-end delay guarantees on multi-tier server clusters.
Abstract: Dynamic server provisioning is critical to quality-of-service assurance for multi-tier Internet applications. In this paper, we address three important and challenging problems. First, we propose an efficient server provisioning approach on multi-tier clusters based on an end-to-end resource allocation optimization model. The objective is to minimize the number of servers allocated to the system while satisfying the average end-to-end delay guarantee. Second, we design a model-independent fuzzy controller for bounding an important performance metric, the 90th-percentile delay of requests flowing through the multi-tier architecture. Third, to compensate for the latency due to the dynamic addition of servers, we design a self-tuning component that adaptively adjusts the output scaling factor of the fuzzy controller according to the transient behavior of the end-to-end delay. Extensive simulation results, using one representative customer behavior model in a typical three-tier web cluster, demonstrate that the provisioning approach is able to significantly reduce the server utilization compared to an existing representative approach. The approach integrated with the model-independent self-tuning fuzzy controller can efficiently assure the average and the 90th-percentile end-to-end delay guarantees on multi-tier server clusters.

30 citations


Proceedings ArticleDOI
13 Jul 2009
TL;DR: This paper develops a behavioral-distance-based anomaly detection mechanism capable of on-line traffic analysis, and validates its efficacy using network traffic traces collected at Abilene and MAWI high-speed links.
Abstract: While network-wide anomaly analysis has been well studied, the on-line detection of network traffic anomalies at a vantage point inside the Internet still poses quite a challenge to network administrators. In this paper, we develop a behavioral distance based anomaly detection mechanism with the capability of performing on-line traffic analysis. To construct accurate online traffic profiles, we introduce horizontal and vertical distance metrics between various traffic features (i.e., packet header fields) in the traffic data streams. The significant advantages of the proposed approach lie in four aspects: (1) it is efficient and simple enough to process on-line traffic data; (2) it facilitates protocol behavioral analysis without maintaining per-flow state; (3) it is scalable to high-speed traffic links because of aggregation; and (4) using various combinations of packet features and measuring distances between them, it is capable of accurate on-line anomaly detection. We validate the efficacy of our proposed detection system by using network traffic traces collected at Abilene and MAWI high-speed links.
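A distance between traffic windows over a header field can be sketched as an L1 distance between normalized histograms. This is a simplification of the paper's horizontal/vertical metrics; the field name and the toy traffic below are invented for illustration.

```python
from collections import Counter

def feature_histogram(packets, field):
    """Normalized frequency histogram of one header field in a window."""
    counts = Counter(p[field] for p in packets)
    total = sum(counts.values())
    return {k: v / total for k, v in counts.items()}

def l1_distance(h1, h2):
    """L1 distance between two normalized histograms; always in [0, 2]."""
    keys = set(h1) | set(h2)
    return sum(abs(h1.get(k, 0.0) - h2.get(k, 0.0)) for k in keys)

# Two toy windows of packets: a port-scan-like window spreads the
# destination-port histogram, which shows up as a large distance.
normal = [{"dst_port": 80}] * 90 + [{"dst_port": 443}] * 10
scan = [{"dst_port": p} for p in range(100)]  # 100 distinct ports

d_self = l1_distance(feature_histogram(normal, "dst_port"),
                     feature_histogram(normal, "dst_port"))
d_anom = l1_distance(feature_histogram(normal, "dst_port"),
                     feature_histogram(scan, "dst_port"))
```

A window compared against itself yields distance 0, while the scan window approaches the maximum distance of 2, so a simple threshold on this statistic flags the anomalous window without keeping per-flow state.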

27 citations


Proceedings ArticleDOI
13 Jul 2009
TL;DR: This work proposes Fast Resilient Jumbo frames (FRJ), which exploit the synergy between three important design spaces: frame size selection, partial packet recovery, and rate adaptation; the authors show that there are strong interactions between these spaces and that effectively leveraging them provides increased robustness and performance benefits in wireless LANs.
Abstract: With the phenomenal growth of wireless networks and applications, it is increasingly important to deliver content efficiently and reliably over wireless links. However, wireless performance is still far from satisfactory due to limited wireless spectrum, the inherently lossy wireless medium, and imperfect packet scheduling. While significant research has been done to improve wireless performance, much of the existing work focuses on an individual design space. We take a holistic approach to optimizing wireless performance and resilience. We propose Fast Resilient Jumbo frames (FRJ), which exploit the synergy between three important design spaces: (i) frame size selection, (ii) partial packet recovery, and (iii) rate adaptation. While these design spaces are seemingly unrelated, we show that there are strong interactions between them and that effectively leveraging these techniques can provide increased robustness and performance benefits in wireless LANs. FRJ uses jumbo frames to boost network throughput under good channel conditions and uses partial packet recovery to efficiently recover packet losses under bad channel conditions. FRJ also utilizes partial-recovery-aware rate adaptation to maximize throughput under partial recovery. Using real implementation and testbed experiments, we show that FRJ outperforms existing approaches in a wide range of scenarios.

27 citations


Proceedings ArticleDOI
13 Jul 2009
TL;DR: Evaluations of the proposed multi-path utility max-min fair allocation algorithms on a statistical traffic engineering application show that significantly higher minimum utility can be achieved when multi-path routing is considered simultaneously with bandwidth allocation under utility max-min fairness, and that this higher minimum utility corresponds to significant application performance improvements.
Abstract: An important goal of bandwidth allocation is to maximize the utilization of network resources while sharing the resources in a fair manner among network flows. To strike a balance between fairness and throughput, a widely studied criterion in the network community is the notion of max-min fairness. However, the majority of work on max-min fairness has been limited to the case where the routing of flows has already been defined and this routing is usually based on a single fixed routing path for each flow. In this paper, we consider the more general problem in which the routing of flows, possibly over multiple paths per flow, is an optimization parameter in the bandwidth allocation problem. Our goal is to determine a routing assignment for each flow so that the bandwidth allocation achieves optimal utility max-min fairness with respect to all feasible routings of flows. We present evaluations of our proposed multi-path utility max-min fair allocation algorithms on a statistical traffic engineering application to show that significantly higher minimum utility can be achieved when multi-path routing is considered simultaneously with bandwidth allocation under utility max-min fairness, and this higher minimum utility corresponds to significant application performance improvements.
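For the single-link, fixed-routing case, max-min fair allocation reduces to the classic progressive-filling procedure, which the paper's multi-path utility formulation generalizes. A minimal sketch (the capacity and demand figures are invented):

```python
def max_min_fair(capacity, demands):
    """Progressive filling on one shared link: raise all active rates
    together in equal shares; a flow that reaches its demand freezes,
    and the remaining capacity is redistributed among the rest."""
    n = len(demands)
    alloc = [0.0] * n
    active = set(range(n))
    remaining = capacity
    while active and remaining > 1e-12:
        share = remaining / len(active)
        frozen = set()
        for i in list(active):
            give = min(share, demands[i] - alloc[i])
            alloc[i] += give
            remaining -= give
            if alloc[i] >= demands[i] - 1e-12:
                frozen.add(i)   # this flow is satisfied
        if not frozen:
            break               # capacity exhausted evenly
        active -= frozen
    return alloc

# one 10 Mbps link shared by three flows with demands 2, 8, and 8 Mbps
alloc = max_min_fair(10.0, [2.0, 8.0, 8.0])
```

The small flow gets its full 2 Mbps and the two large flows split the remainder equally at 4 Mbps each, the defining property of a max-min fair point: no flow's rate can be raised without lowering that of a flow with an equal or smaller rate.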

24 citations


Proceedings ArticleDOI
13 Jul 2009
TL;DR: SigLM is scalable and efficient, imposing less than 1% overhead on the system and performing signature matching within tens of milliseconds, and it can improve resource provisioning performance by 30–80% compared to existing approaches.
Abstract: Cloud computing has emerged as a promising platform that grants users direct yet shared access to computing resources and services without worrying about the internal complex infrastructure. Unlike the traditional batch service model, the cloud service model adopts a pay-as-you-go form, which demands explicit and precise resource control. In this paper, we present SigLM, a novel Signature-driven Load Management system to achieve quality-aware service delivery in shared cloud computing infrastructures. SigLM dynamically captures fine-grained signatures of different application tasks and cloud nodes using time series patterns, and performs precise resource metering and allocation based on the extracted signatures. SigLM employs the dynamic time warping algorithm and multi-dimensional time series indexing to achieve efficient signature pattern matching. Our experiments using real load traces collected on PlanetLab show that SigLM can improve resource provisioning performance by 30–80% compared to existing approaches. SigLM is scalable and efficient, imposing less than 1% overhead on the system and performing signature matching within tens of milliseconds.
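The dynamic time warping step SigLM relies on is a standard algorithm; a minimal textbook implementation over two load time series (the series values are invented) looks like:

```python
def dtw_distance(a, b):
    """Classic O(len(a) * len(b)) dynamic time warping distance between
    two numeric time series, with |x - y| as the local cost. DTW aligns
    series that have the same shape but are shifted or stretched in time."""
    INF = float("inf")
    n, m = len(a), len(b)
    d = [[INF] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # extend the cheapest of: insertion, deletion, match
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]

load_a = [1, 2, 3, 4, 3, 2]        # a toy load signature
load_b = [1, 1, 2, 3, 4, 3, 2]     # same shape, slightly time-warped
load_c = [9, 9, 9, 9, 9, 9]        # a very different load pattern
```

Unlike Euclidean distance, DTW reports the warped-but-identical pair as a perfect match, which is what makes it suitable for matching load signatures that drift in time.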

18 citations


Proceedings ArticleDOI
13 Jul 2009
TL;DR: This paper tackles the DRL problem for general workloads and performance metrics and proposes an analytic framework for the design of stable DRL algorithms that are practical and easy to deploy with guaranteed convergence properties under a wide range of possible scenarios.
Abstract: The Distributed Rate Limiting (DRL) paradigm is a recently proposed mechanism for decentralized control of cloud-based services. DRL is a simple and efficient approach to resolve the issues of pricing and resource control/engineering of cloud based services. The existing DRL schemes focus on very specific performance metrics (such as loss rate and fair-share) and their design heavily depends on the assumption that the traffic is generated by elastic TCP sources. In this paper we tackle the DRL problem for general workloads and performance metrics and propose an analytic framework for the design of stable DRL algorithms. The closed-form nature of our results allows simple design rules which, together with extremely low communication overhead, makes the presented algorithms practical and easy to deploy with guaranteed convergence properties under a wide range of possible scenarios.

Proceedings ArticleDOI
13 Jul 2009
TL;DR: This paper formulates the admission control task as a linear programming problem and proposes a lexicographically max-min algorithm to solve it, introducing a potential metric to more accurately predict the total data size that can be transmitted to/from a vehicle.
Abstract: Roadside units can provide a variety of potential services for passing-by vehicles in future Intelligent Transportation Systems. Since each vehicle has a limited time period when passing by a roadside unit, it is important to predict whether a service can be finished in time. In this paper, we focus on admission control problems, which are important especially when a roadside unit is in or close to overloaded conditions. Traditional admission control schemes mainly concern long-term flows, such as VoIP and multimedia services. They are not applicable to highly mobile vehicular environments. Our analysis finds that it is not necessarily accurate to use the deadline to evaluate the risk that a flow cannot be finished in time. Instead, we introduce a potential metric to more accurately predict the total data size that can be transmitted to/from a vehicle. Based on this new concept, we formulate the admission control task as a linear programming problem and then propose a lexicographically max-min algorithm to solve the problem. Simulation results demonstrate that our scheme can efficiently make admission decisions for coming transmission requests and effectively avoid system overload.

Proceedings ArticleDOI
25 Aug 2009
TL;DR: This paper addresses the problem of determining the optimum capacity allocation for multiple Virtual Machines sharing the same hosting environment, with the overall goal of maximizing the Service Provider profits associated with multiple classes of Service Level Agreements.
Abstract: Service Oriented Architecture (SOA) and virtualization of physical resources are key emerging technologies which are driving the interest of research both from industry and academia. The combination of the two is leading to a new paradigm, the Service Oriented Infrastructure (SOI), whose goal is to provide a flexible solution for accessing component-based service applications on demand. SOI environments are characterized by high workload fluctuations which cannot be accommodated by separating the design-time and run-time points of view as traditionally done in Software Engineering practice. Hence, the design of SOA applications has to be complemented with issues related to run-time resource provisioning. In this paper the problem of determining the optimum capacity allocation for multiple Virtual Machines which share the same hosting environment is addressed. The overall goal is to maximize the Service Provider profits associated with multiple classes of Service Level Agreements. The capacity allocation problem is modeled as a non-linear problem which is optimally solved. The effectiveness of our solution is assessed by performing real experiments in a prototype environment.

Proceedings ArticleDOI
25 Aug 2009
TL;DR: This work proposes an automated approach that searches the design space by modifying architectural models, improving the architecture with respect to multiple quality criteria and finding optimal architectural models, to enable systematic engineering of high-quality software architectures.
Abstract: Quantitative prediction of quality criteria (i.e. extra-functional properties such as performance, reliability, and cost) of service-oriented architectures supports a systematic software engineering approach. However, various degrees of freedom in building a software architecture span a large, discontinuous design space. Currently, solutions with a good trade-off between multiple quality criteria have to be found manually. We propose an automated approach to search the design space by modifying the architectural models, to improve the architecture with respect to multiple quality criteria, and to find optimal architectural models. The found optimal architectural models can be used as an input for trade-off analyses and thus allow systematic engineering of high-quality software architectures. Using this approach, the design of a high-quality component-based software system is eased for the software architect and thus saves cost and effort. Our approach applies a multi-criteria genetic algorithm to software architectures modelled with the Palladio Component Model (PCM). Currently, the method supports quantitative performance and reliability prediction, but it can be extended to other quality properties such as cost as well.

Proceedings ArticleDOI
13 Jul 2009
TL;DR: This work develops a new performance evaluation framework particularly tailored for information-driven networks, based on the recent development of stochastic network calculus; the framework captures information processing and the QoS guarantee with respect to stochastic information delivery rates, which have never been formally modeled before.
Abstract: Information-driven networks include a large category of networking systems, where network nodes are aware of the information delivered and thus can not only forward data packets but may also perform information processing. In many situations, the quality of service (QoS) in information-driven networks is provisioned with the redundancy in information. Traditional performance models generally adopt evaluation measures suitable for packet-oriented service guarantees, such as packet delay, throughput, and packet loss rate. These performance measures, however, do not align well with the actual needs of information-driven networks. New performance measures and models for information-driven networks, despite their importance, have remained largely absent, mainly because information processing is clearly application dependent and cannot be easily captured within a generic framework. To fill this gap, we develop a new performance evaluation framework particularly tailored for information-driven networks, based on the recent development of stochastic network calculus. In particular, our model captures the information processing and the QoS guarantee with respect to stochastic information delivery rates, which have never been formally modeled before. This analytical model is very useful in deriving theoretical performance bounds for a large body of systems where QoS is stochastically guaranteed with a certain level of information delivery.

Proceedings ArticleDOI
13 Jul 2009
TL;DR: This work provides a taxonomy of the existing sketches and performs a thorough study of the strengths and weaknesses of each of them, as well as the interactions between the different components, using both real and synthetic Internet trace data.
Abstract: Security is becoming an increasingly important QoS parameter for which network providers should provision. We focus on monitoring and detecting one type of network event, which is important for a number of security applications such as DDoS attack mitigation and worm detection, called distributed global icebergs. While previous work has concentrated on measuring local heavy-hitters using “sketches” in the non-distributed streaming case or icebergs in the non-streaming distributed case, we focus on measuring icebergs from distributed streams. Since an iceberg may be “hidden” by being distributed across many different streams, we combine a sampling component with local sketches to catch such cases. We provide a taxonomy of the existing sketches and perform a thorough study of the strengths and weaknesses of each of them, as well as the interactions between the different components, using both real and synthetic Internet trace data. Our combination of sketching and sampling is simple yet efficient in detecting global icebergs.

Proceedings ArticleDOI
25 Aug 2009
TL;DR: This paper proposes an extension of the Palladio Component Model (PCM) that provides natural support for modeling event-based communication and shows how this extension can be exploited to model event-driven service-oriented systems with the aim of evaluating their performance and scalability.
Abstract: The use of event-based communication within a Service-Oriented Architecture promises several benefits including more loosely-coupled services and better scalability. However, the loose coupling of services makes it difficult for system developers to estimate the behavior and performance of systems composed of multiple services. Most existing performance prediction techniques for systems using event-based communication require specialized knowledge to build the necessary prediction models. Furthermore, general purpose design-oriented performance models for component-based systems provide limited support for modeling event-based communication. In this paper, we propose an extension of the Palladio Component Model (PCM) that provides natural support for modeling event-based communication. We show how this extension can be exploited to model event-driven service-oriented systems with the aim of evaluating their performance and scalability.

Proceedings ArticleDOI
13 Jul 2009
TL;DR: The results confirm the efficiency and near-optimality of the proposed algorithms, and show that higher-quality videos are delivered to peers if the algorithms are employed for allocating seed servers.
Abstract: We study streaming of scalable videos over peer-to-peer (P2P) networks. We focus on efficient management of seed server resources, which need to be deployed in the network to make up for the limited upload capacity of peers in order to deliver higher quality video streams. These servers have finite serving capacity and are often loaded with a volume of requests larger than their capacity. We formulate the problem of allocating this capacity for optimally serving scalable videos. We show that this problem is NP-complete, and propose two approximation algorithms to solve it. The first one allocates seeding resources for serving peers based on dynamic programming, and is more suitable for small seeding capacities (≤ 10 Mbps). The second algorithm follows a greedy approach and is more efficient for larger capacities. We evaluate the proposed algorithms analytically and in a simulated P2P streaming system. The results confirm the efficiency and near-optimality of the proposed algorithms, and show that higher-quality videos are delivered to peers if our algorithms are employed for allocating seed servers.
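The dynamic-programming flavor of the allocation can be conveyed with a simplified knapsack-style stand-in (the paper's DP operates over scalable-video layers and request volumes; the costs and utilities below are invented):

```python
def allocate_capacity(capacity, requests):
    """Knapsack-style DP sketch: choose a subset of streaming requests,
    each with an integer bandwidth cost and a utility, maximizing total
    utility within the seed server's serving capacity."""
    best = [0] * (capacity + 1)               # best utility per budget
    choice = [set() for _ in range(capacity + 1)]  # which requests chosen
    for idx, (cost, utility) in enumerate(requests):
        # iterate budgets downward so each request is taken at most once
        for c in range(capacity, cost - 1, -1):
            if best[c - cost] + utility > best[c]:
                best[c] = best[c - cost] + utility
                choice[c] = choice[c - cost] | {idx}
    return best[capacity], choice[capacity]

# (cost_mbps, utility) per request, against a 10 Mbps seed capacity
requests = [(4, 40), (3, 25), (5, 45), (2, 12)]
value, picked = allocate_capacity(10, requests)
```

Here the DP selects requests 0 and 2 (costs 4 + 5 Mbps) for a total utility of 85, beating any greedy order; the DP's pseudo-polynomial cost in the capacity is also why the paper reserves it for small seeding capacities and switches to a greedy algorithm for large ones.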

Proceedings ArticleDOI
13 Jul 2009
TL;DR: A simple strategy is proposed to make fair coexistence possible and to ensure that delay-based flows revert back to delay-based operation when loss-based flows are no longer present.
Abstract: Delay-based TCP variants continue to attract a large amount of attention in the networking community. Potentially, they offer the possibility to efficiently use network resources while at the same time achieving low queueing delay and virtually zero packet loss. One major impediment to the deployment of delay-based TCP variants is their inability to coexist fairly with standard loss-based TCP. In this paper we propose a simple strategy to make the fair coexistence possible and to ensure that delay-based flows will revert back to the delay-based operation when loss-based flows are no longer present. Analytical and ns-2 simulation results are presented to validate the proposed algorithm.

Proceedings ArticleDOI
25 Aug 2009
TL;DR: Benchmark experiments show that treating garbage collector overhead as a constant background factor can make service performance models miss performance effects of significant scale; the paper provides an initial inquiry into including a generic garbage collector overhead model as part of service performance models.
Abstract: Even though garbage collectors are incorporated in many service oriented systems, service performance models typically treat garbage collector overhead as a constant background factor. We use benchmark experiments to show that this treatment can make the service performance models miss performance effects of significant scale, and we provide an initial inquiry into the issues related to including a generic garbage collector overhead model as part of the service performance models.

Proceedings ArticleDOI
13 Jul 2009
TL;DR: It is shown that switch scheduling algorithms that were designed without taking into account these interactions can exhibit a completely different behavior when interacting with feedback-based Internet traffic, and can lead to extreme unfairness with temporary flow starvation and large rate oscillations.
Abstract: In this paper, we study the interactions of user-based congestion control algorithms and router-based switch scheduling algorithms. We show that switch scheduling algorithms that were designed without taking into account these interactions can exhibit a completely different behavior when interacting with feedback-based Internet traffic. Previous papers neglected or mitigated these interactions, and typically found that flow rates reach a fair equilibrium. On the contrary, we show that these interactions can lead to extreme unfairness with temporary flow starvation, as well as to large rate oscillations. For instance, we prove that this is the case for the MWM switch scheduling algorithm, even with a single router output and basic TCP flows. We also show that the iSLIP switch scheduling algorithm achieves fairness among ports, instead of fairness among flows. Finally, we fully characterize the network dynamics for both these switch scheduling algorithms.

Proceedings ArticleDOI
13 Jul 2009
TL;DR: This paper proposes and evaluates a simple but efficient method for fast rerouting of IP multicast traffic during link failures in managed IPTV networks, devising an algorithm for tuning IP link weights so that the multicast routing path and the unicast routing path between any two routers are failure disjoint, which allows the use of unicast IP encapsulation for undelivered multicast packets during link failures.
Abstract: Recent deployment of IP-based multimedia distribution, especially broadcast TV distribution, has increased the importance of simple and fast restoration during IP network failures for service providers. In this paper, we propose and evaluate a simple but efficient method for fast rerouting of IP multicast traffic during link failures in managed IPTV networks. More specifically, we devise an algorithm for tuning IP link weights so that the multicast routing path and the unicast routing path between any two routers are failure disjoint, allowing us to use unicast IP encapsulation for undelivered multicast packets during link failures. We demonstrate that our method can be realized with minor modifications to the current multicast routing protocol (PIM-SM). We run our prototype implementation in Emulab, which shows that our method yields good performance.
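The disjointness condition that the link-weight tuning aims for can be checked mechanically. A sketch with a hypothetical four-router topology whose weights have already been "tuned" so the unicast shortest path avoids the multicast tree branch:

```python
import heapq

def shortest_path(graph, src, dst):
    """Dijkstra over {node: {neighbor: weight}}; returns the node list."""
    dist, prev, pq = {src: 0}, {}, [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph[u].items():
            if d + w < dist.get(v, float("inf")):
                dist[v], prev[v] = d + w, u
                heapq.heappush(pq, (d + w, v))
    path = [dst]
    while path[-1] != src:
        path.append(prev[path[-1]])
    return path[::-1]

def failure_disjoint(p1, p2):
    """True if the two paths share no (undirected) link."""
    links = lambda p: {frozenset(e) for e in zip(p, p[1:])}
    return not (links(p1) & links(p2))

# Hypothetical tuned weights: the multicast tree branch A-B-C is fixed, and
# unicast weights steer A->C traffic through D instead.
g = {"A": {"B": 5, "D": 1}, "B": {"A": 5, "C": 5},
     "C": {"B": 5, "D": 1}, "D": {"A": 1, "C": 1}}
unicast = shortest_path(g, "A", "C")               # ['A', 'D', 'C']
print(failure_disjoint(unicast, ["A", "B", "C"]))  # True: encapsulated traffic survives a B-C failure
```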

Proceedings ArticleDOI
13 Jul 2009
TL;DR: An online scheme which leverages the spatial-temporal correlation of the events to balance the communication energy of the static sensor nodes is presented, and it is proved that the expected event coverage rate can be guaranteed in theory.
Abstract: There is a growing interest in using wireless sensor networks for security monitoring in underground coal mines. In such applications, the sensor nodes are deployed to detect events of interest, e.g., the density of a certain gas at some locations is higher than the predefined threshold. These events are then reported to the base station outside. Using conventional multi-hop routing for data reporting, however, will result in an imbalance of energy consumption among the sensors. Even worse, the unfriendly communication conditions underground make the multi-hop data transmission challenging, if not impossible. In this paper, we thus propose to leverage tramcars as mobile sinks to assist event collection and delivery. We further observe that the sensor readings have spatial and temporal correlation. More precisely, the same event may be observed by multiple neighboring sensor nodes and/or at different times. Obviously, it can be more energy-efficient if the data are selectively reported. As such, we first provide a general, yet realistic definition of the events. We then transform the event collection problem into a set coverage problem, and our objective is to maximize the system lifetime with the coverage rate of events guaranteed. We show that the problem is NP-hard even when all the events are known in advance. We present an online scheme which leverages the spatial-temporal correlation of the events to balance the communication energy of the static sensor nodes. We prove that the expected event coverage rate can be guaranteed in theory. Through extensive simulation, we demonstrate that our scheme can significantly extend system lifetime, as compared to a stochastic collection scheme.
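The set-coverage view of event collection admits the classic greedy heuristic. The sketch below shows only that formulation (sensor and event identifiers are hypothetical; the paper's online scheme additionally balances per-node communication energy):

```python
def greedy_event_cover(events, sensor_obs):
    """Greedy set-cover heuristic: repeatedly pick the sensor whose reported
    observations cover the most still-uncovered events. sensor_obs maps a
    sensor id to the set of event ids it observed."""
    uncovered = set(events)
    chosen = []
    while uncovered:
        best = max(sensor_obs, key=lambda s: len(sensor_obs[s] & uncovered))
        gain = sensor_obs[best] & uncovered
        if not gain:
            break            # leftover events are observed by no sensor
        chosen.append(best)
        uncovered -= gain
    return chosen, uncovered

# Hypothetical events and overlapping sensor observations:
chosen, missed = greedy_event_cover({1, 2, 3, 4},
                                    {"s1": {1, 2}, "s2": {2, 3}, "s3": {3, 4}})
print(chosen, missed)  # two reports suffice; nothing is missed
```

Because the observations overlap (event 2 and event 3 are each seen twice), reporting from only two of the three sensors already covers every event, which is exactly the redundancy the selective-reporting scheme exploits.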

Proceedings ArticleDOI
13 Jul 2009
TL;DR: Novel solutions to the scalable implementation of priority queues are proposed by decomposing the problem into two parts, a succinct priority index in SRAM that can efficiently maintain a real-time sorting of priorities, coupled with a DRAM-based implementation of large packet buffers.
Abstract: Priority queues are an essential building block for implementing advanced per-flow service disciplines at high-speed network links. In this paper, we propose novel solutions to the scalable implementation of priority queues by decomposing the problem into two parts, a succinct priority index in SRAM that can efficiently maintain a real-time sorting of priorities, coupled with a DRAM-based implementation of large packet buffers. In particular, we propose three related novel succinct priority index data structures for implementing high-speed priority indexes: a Priority-Index (PI), a Counting-Priority-Index (CPI), and a Pipelined Counting-Priority-Index (Pipelined CPI). We show that all three structures can be very compactly implemented in SRAM using only Θ(U) space, where U is the size of the universe required to implement the priority keys (timestamps). We also show that our proposed priority index structures can be implemented very efficiently as well by leveraging hardware-optimized instructions that are readily available in modern 64-bit microprocessors. The operations on the PI and CPI structures take Θ(log_W U) time, where W is the processor word-length (i.e., W = 64 bits). Alternatively, operations on the Pipelined CPI structure take constant time with only Θ(log_W U) pipeline stages. Finally, we show the application of our proposed priority index structures for scalable management of large packet buffers at line speeds.
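The word-level mechanics behind such indexes can be sketched with a two-level bitmap: a summary word records which leaf words are non-empty, and the hardware find-first-set instruction (emulated below with Python bit tricks) locates the minimum key in Θ(log_W U) word operations. This is a sketch of the general hierarchical-bitmap idea, not the paper's exact PI/CPI structures:

```python
W = 64  # processor word length assumed in the paper's analysis

class PriorityIndex:
    """Two-level bitmap priority index over a universe of U = W*W keys:
    a summary word marks which of the W leaf words is non-empty, so
    insert/extract-min cost log_W U = 2 word operations here."""
    def __init__(self):
        self.summary = 0
        self.leaf = [0] * W

    def insert(self, key):
        hi, lo = divmod(key, W)
        self.leaf[hi] |= 1 << lo
        self.summary |= 1 << hi

    def extract_min(self):
        if not self.summary:
            return None
        # (x & -x).bit_length() - 1 emulates a find-first-set instruction
        hi = (self.summary & -self.summary).bit_length() - 1
        word = self.leaf[hi]
        lo = (word & -word).bit_length() - 1
        self.leaf[hi] &= ~(1 << lo)
        if not self.leaf[hi]:
            self.summary &= ~(1 << hi)
        return hi * W + lo

pi = PriorityIndex()
for ts in (300, 7, 150):   # timestamps as priority keys
    pi.insert(ts)
print(pi.extract_min())    # 7: the smallest key, found via two word scans
```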

Proceedings ArticleDOI
13 Jul 2009
TL;DR: This work proposes and implements an adaptive admission control mechanism that adjusts the admitted load to compensate for changes in system capacity, and employs a control theory based feedback loop to dynamically determine the rate of admitted requests.
Abstract: The system capacity available to a multi-tier Web based application is often a dynamic quantity. Most static threshold-based overload control mechanisms are best suited to situations where the system's capacity is constant or the bottleneck resource is known. However, with varying capacity, the admission control mechanism needs to adapt dynamically. We propose and implement an adaptive admission control mechanism that adjusts the admitted load to compensate for changes in system capacity. The proposed solution is implemented as a proxy server between clients and front-end Web servers. The proxy monitors ‘black-box’ performance metrics: response time and the rate of successfully completed requests (goodput). With these measurements as indicators of system state, we employ a control theory based feedback loop to dynamically determine the rate of admitted requests. The objective is to balance changes in response time and changes in goodput, while preventing overloads due to reduction in available system capacity. We evaluate our mechanism with experiments on a test-bed and find that it is able to maintain higher productivity than a static admission control scheme.
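Such a feedback loop can be sketched as a small proportional-integral controller (the gains, target, and initial rate below are hypothetical; the paper's controller additionally balances goodput changes against response-time changes):

```python
# Sketch of a control-theoretic admission loop (gains and target are
# hypothetical, not the paper's tuning).

def admission_controller(target_rt, kp, ki):
    """Returns a step function: feed it the measured response time each
    control period and it returns the new admitted request rate (req/s)."""
    rate = 100.0        # initial admitted rate
    integral = 0.0      # accumulated error
    def step(measured_rt):
        nonlocal rate, integral
        error = target_rt - measured_rt   # positive -> headroom, admit more
        integral += error
        rate = max(1.0, rate + kp * error + ki * integral)
        return rate
    return step

ctrl = admission_controller(target_rt=0.25, kp=20.0, ki=5.0)
print(ctrl(0.50))  # response time too high: admitted rate drops below 100
print(ctrl(0.10))  # headroom: the rate is pushed back up
```

The integral term is what lets the controller track a capacity change rather than settle at a fixed offset, which is the behavior a static threshold cannot provide.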

Proceedings ArticleDOI
13 Jul 2009
TL;DR: A distributed, reliable and energy-efficient algorithm to construct a smoothed moving trajectory for a mobile robot that can tolerate the potential deviation from a planned path, mitigate the trajectory oscillation problem and save the precious energy of static sensors by configuring a large moving step size.
Abstract: This paper deals with the problem of guiding mobile sensors (or robots) to a phenomenon across a region covered by static sensors. We present a distributed, reliable and energy-efficient algorithm to construct a smoothed moving trajectory for a mobile robot. The reliable trajectory is realized by first constructing among static sensors a distributed hop count based artificial potential field (DH-APF) with only one local minimum near the phenomenon, and then navigating the robot to that minimum by an attractive force following the reversed gradient of the constructed field. Besides the attractive force towards the phenomenon, our algorithm adopts an additional repulsive force to push the robot away from obstacles, exploiting the fast sensing devices carried by the robot. Compared with previous navigation algorithms that guide the robot along a planned path, our algorithm can (1) tolerate the potential deviation from a planned path, since the DH-APF covers the entire deployment region; (2) mitigate the trajectory oscillation problem; (3) avoid the potential collision with obstacles; (4) save the precious energy of static sensors by configuring a large moving step size, which is not possible for algorithms neglecting the issue of navigation reliability. Our theoretical analysis of the above features considers practical sensor network issues including radio irregularity, packet loss and radio conflict. We implement the proposed algorithm over TinyOS and test its performance on the high-fidelity simulation platform provided by TOSSIM and Tython. Simulation results verify the reliability and energy efficiency of the proposed mobile sensor navigation algorithm.
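The attractive-plus-repulsive idea can be sketched as greedy descent on a hop-count grid (illustrative only: the paper's DH-APF is built by distributed static sensors over radio, not handed to the robot as an array, and all field values below are hypothetical):

```python
def navigate(hop_field, start, obstacles, max_steps=100):
    """Greedy descent on a hop-count field with a repulsive obstacle term.
    hop_field[y][x] is the hop distance to the phenomenon; obstacles is a
    set of grid cells the robot must avoid."""
    pos, path = start, [start]
    for _ in range(max_steps):
        x, y = pos
        if hop_field[y][x] == 0:                     # reached the phenomenon
            break
        scored = []
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nx, ny = x + dx, y + dy
            if not (0 <= ny < len(hop_field) and 0 <= nx < len(hop_field[0])):
                continue
            if (nx, ny) in obstacles:
                continue
            # attractive term: hop count; repulsive term: closeness to obstacles
            rep = sum(1.0 / max(abs(nx - ox) + abs(ny - oy), 0.5)
                      for ox, oy in obstacles)
            scored.append((hop_field[ny][nx] + rep, (nx, ny)))
        pos = min(scored)[1]
        path.append(pos)
    return path

field = [[3, 2, 1, 0],      # hop counts; phenomenon at (3, 0)
         [4, 3, 2, 1],
         [5, 4, 3, 2]]
print(navigate(field, start=(0, 2), obstacles={(1, 2)}))
```

Because the field covers every cell, the descent recovers from any starting deviation, which is the reliability property the paper contrasts with path-following navigators.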

Proceedings ArticleDOI
13 Jul 2009
TL;DR: This paper investigates a VoD distribution architecture that exploits the increasing uplink and local storage capacities of customer equipment in a peer-to-peer (P2P) manner in order to offload the central video servers and the core network segment and shows how the components of a P2P-VoD system should be changed to be feasible under these conditions.
Abstract: This paper investigates a VoD distribution architecture that exploits the increasing uplink and local storage capacities of customer equipment in a peer-to-peer (P2P) manner in order to offload the central video servers and the core network segment. We investigate an environment where (i) the peers' upload speeds vary in time and (ii) on the subscriber's downlink a strict bandwidth limit constrains the VoD delivery, and where (iii) this downlink limit is not significantly higher than the video's own bit rate while (iv) the subscribers' upload capacities are not cut down. In such an environment providing quality for a true VoD service requires carefully selected mechanisms. We show how the components (storage policy, uplink speed management) of a P2P-VoD system should be changed to be feasible under these conditions. The main component of the system determines the minimal required server speed as a function of the prebuffered content, the uploaders' behaviours, and a given playback fault probability. Additionally, using simulation, we investigate the optimal downlink bandwidth limit for a subscriber population with different average upload speeds.
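A much-simplified version of the required-server-speed component (ignoring prebuffered content and the playback-fault probability, both of which the paper's model includes) just covers the per-slot deficit between the playback rate and the peers' aggregate upload:

```python
def min_server_rate(video_rate, peer_uploads_per_slot):
    """Smallest constant server rate that fills the worst-case gap between
    the video's playback rate and what the peers jointly upload in each
    time slot. (Simplified sketch; units hypothetical, e.g. Mbit/s.)"""
    return max(max(0.0, video_rate - sum(slot))
               for slot in peer_uploads_per_slot)

# Three time slots, two peers whose upload speeds vary in time:
slots = [[1.0, 0.5], [0.8, 0.2], [1.5, 1.0]]
print(min_server_rate(2.0, slots))  # 1.0: the second slot leaves a 1.0 deficit
```

Prebuffering relaxes this worst-case bound, because content already downloaded can ride out slots where the peers' uploads dip; that is precisely why the paper's minimal server speed is a function of the prebuffered content.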

Proceedings ArticleDOI
13 Jul 2009
TL;DR: This work analyzes the expected information captured per unit of energy consumption (IPE) as a function of the event type, the event dynamics, and the speed of the mobile sensor using a realistic energy model of motion.
Abstract: A mobile sensor is used to cover a number of points of interest (PoIs) where dynamic events appear and disappear according to given random processes. It has been shown in [1] that for Step and Exponential utility functions, the quality of monitoring (QoM), i.e., the fraction of information captured about all events, increases as the speed of the sensor increases. This work, however, does not consider the energy of motion, which is an important constraint for mobile sensor coverage. In this paper, we analyze the expected information captured per unit of energy consumption (IPE) as a function of the event type, the event dynamics, and the speed of the mobile sensor. Our analysis uses a realistic energy model of motion, and it allows the sensor speed to be optimized for information capture. We present simulation results to verify and illustrate the analytical results.
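The speed trade-off can be conveyed with a toy IPE model (all functional forms and constants below are hypothetical, not the paper's analysis): faster patrolling captures more short-lived events, but motion power grows with speed, so information per unit energy peaks at an interior speed.

```python
import math

def ipe(v, tour_len=100.0, event_rate=0.2, mean_lifetime=5.0, e0=1.0, k=0.01):
    """Toy information-per-energy: a sensor patrols a closed tour of length
    tour_len at speed v; an exponential-lifetime event is captured if it
    survives (on average) half a revisit period, and motion power grows
    quadratically with v on top of a baseline e0."""
    revisit = tour_len / v
    capture_prob = math.exp(-revisit / (2.0 * mean_lifetime))
    return (event_rate * capture_prob) / (e0 + k * v * v)

speeds = [v / 10.0 for v in range(1, 301)]      # 0.1 .. 30.0
best = max(speeds, key=ipe)
print(best)  # an interior optimum: neither crawling nor racing maximizes IPE
```

Crawling captures almost nothing (events expire before the next visit) while racing wastes energy quadratically, so the toy model reproduces the qualitative conclusion that the sensor speed can be optimized for information capture.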

Proceedings ArticleDOI
13 Jul 2009
TL;DR: By modeling a web server under attack as a switched system and proving its Lyapunov and Lagrange stability, this paper shows that the admission rate can always be throttled to a bounded low value, and derives for the first time optimal attack patterns for both periodic and aperiodic low-rate Denial of Quality attacks.
Abstract: Low-rate Denial of Quality (DoQ) attacks, by sending intermittent bursts of requests, can severely degrade the quality of Internet services and evade detection. In this paper, we generalize the previous results by considering arbitrary attack intervals. We obtain two sets of new results for a web server with feedback-based admission control. First, we model the web server under the attack as a switched system. By proving the Lyapunov and Lagrange stability of the system, we show that the admission rate can always be throttled to a bounded low value. Second, we investigate the worst impacts of a DoQ attack by optimizing a utility function for the attacks. As a result, we obtain for the first time optimal attack patterns for both periodic and aperiodic attacks. Extensive simulation results also agree with the analytical results.
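A toy simulation of the throttling effect (the control law, constants, and attack magnitude below are hypothetical; the paper analyzes the actual feedback-based admission controller as a switched system and proves the bound formally):

```python
# Toy DoQ throttling simulation: a multiplicative-decrease / additive-
# recovery admission controller under an on-off request-burst attack.

def simulate_doq(burst_period, burst_len, steps=60):
    """Average admitted legitimate load (req/s): the attacker injects a
    burst of burst_len slots every burst_period slots."""
    rate, capacity, legit = 1.0, 100.0, 60.0
    admitted = []
    for t in range(steps):
        attack = 500.0 if t % burst_period < burst_len else 0.0
        offered = legit + attack
        if offered * rate > capacity:
            rate = max(0.05, rate / 2.0)    # controller throttles admission
        else:
            rate = min(1.0, rate + 0.05)    # slow additive recovery
        admitted.append(legit * rate)
    return sum(admitted) / steps

print(simulate_doq(burst_period=10, burst_len=1))  # well below the clean 60.0
print(simulate_doq(burst_period=10, burst_len=0))  # no attack: full 60.0
```

Even though the attacker is active in only one slot in ten, the slow recovery keeps the admission rate depressed for the whole period, which is the intermittent-burst degradation the abstract describes.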

Proceedings ArticleDOI
13 Jul 2009
TL;DR: A practical Congestion Location Detection (CLD) algorithm that effectively allows an end host to distributively detect whether congestion happens in the local access link or in more remote links is presented.
Abstract: We address the following question in this study: Can a network application detect not only the occurrence, but also the location of congestion? Answering this question will not only help diagnose network failures and monitor servers' QoS, but also help developers engineer transport protocols with more desirable congestion avoidance behavior. The paper answers this question through new analytic results on the two underlying technical difficulties: 1) synchronization effects of loss and delay in TCP, and 2) distributed hypothesis testing using only local loss and delay data. We present a practical Congestion Location Detection (CLD) algorithm that effectively allows an end host to distributively detect whether congestion happens in the local access link or in more remote links. We validate the effectiveness of the CLD algorithm with extensive experiments.
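A crude end-host classifier conveys the flavor of the problem (this is an illustrative sketch, not the paper's CLD algorithm, which rests on a formal hypothesis test over loss and delay synchronization): if loss events coincide with elevated access-link delay, blame the local link; otherwise suspect a remote one.

```python
def congestion_location(samples, delay_threshold):
    """samples: (loss_event, access_link_delay_ms) pairs measured locally.
    Classify congestion as 'local' when most losses co-occur with high
    access-link delay, 'remote' otherwise. Threshold is hypothetical."""
    losses = [d for loss, d in samples if loss]
    if not losses:
        return "no congestion"
    high = sum(1 for d in losses if d > delay_threshold)
    return "local" if high / len(losses) > 0.5 else "remote"

local_looking = [(True, 80), (False, 10), (True, 95), (True, 70)]
remote_looking = [(True, 12), (False, 11), (True, 9), (True, 14)]
print(congestion_location(local_looking, delay_threshold=50))   # local
print(congestion_location(remote_looking, delay_threshold=50))  # remote
```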