
Showing papers by "Rajeev Rastogi published in 2006"


Journal ArticleDOI
TL;DR: This paper develops failure-resilient techniques for monitoring link delays and faults in a Service Provider or Enterprise IP network and proposes greedy approximation algorithms that achieve a logarithmic approximation factor for the station selection problem and a constant factor for the probe assignment problem.
Abstract: In this paper, we develop failure-resilient techniques for monitoring link delays and faults in a Service Provider or Enterprise IP network. Our two-phased approach attempts to minimize both the monitoring infrastructure costs as well as the additional traffic due to probe messages. In the first phase, we compute the locations of a minimal set of monitoring stations such that all network links are covered, even in the presence of several link failures. Subsequently, in the second phase, we compute a minimal set of probe messages that are transmitted by the stations to measure link delays and isolate network faults. We show that both the station selection problem as well as the probe assignment problem are NP-hard. We then propose greedy approximation algorithms that achieve a logarithmic approximation factor for the station selection problem and a constant factor for the probe assignment problem. These approximation ratios are provably very close to the best possible bounds for any algorithm.
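The logarithmic approximation factor for station selection reflects the problem's close relationship to set cover, where the classical greedy rule of repeatedly adding the candidate that covers the most still-uncovered elements already gives a ln(n) guarantee. Below is a minimal, hypothetical sketch of that generic greedy heuristic; all names and the toy topology are illustrative, and it ignores the failure-resilience requirement and the probe-assignment phase handled in the paper.

```python
# Hypothetical sketch of the classical greedy set-cover heuristic, applied to
# station placement: 'coverage' maps each candidate station (router) to the set
# of links it could monitor. Names and data are illustrative, and the sketch
# ignores the failure-resilience and probe-assignment aspects of the paper.

def select_stations(coverage, all_links):
    """Greedily pick stations until every link is covered (ln-n approximation)."""
    uncovered = set(all_links)
    stations = []
    while uncovered:
        # Pick the candidate that covers the most still-uncovered links.
        best = max(coverage, key=lambda s: len(coverage[s] & uncovered))
        gained = coverage[best] & uncovered
        if not gained:
            raise ValueError("some links cannot be covered by any candidate")
        stations.append(best)
        uncovered -= gained
    return stations

# Toy topology: three candidate routers, five links.
coverage = {"r1": {"e1", "e2"}, "r2": {"e2", "e3", "e4"}, "r3": {"e4", "e5"}}
print(select_stations(coverage, {"e1", "e2", "e3", "e4", "e5"}))  # e.g. ['r2', 'r1', 'r3']
```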

71 citations


Proceedings ArticleDOI
26 Jun 2006
TL;DR: This paper presents a novel gossip-based scheme using which all the nodes in an n-node overlay network can compute the common aggregates of MIN, MAX, SUM, AVERAGE, and RANK of their values using O(n log log n) messages within O(log n log log n) rounds of communication.
Abstract: Recently, there has been a growing interest in gossip-based protocols that employ randomized communication to ensure robust information dissemination. In this paper, we present a novel gossip-based scheme using which all the nodes in an n-node overlay network can compute the common aggregates of MIN, MAX, SUM, AVERAGE, and RANK of their values using O(n log log n) messages within O(log n log log n) rounds of communication. To the best of our knowledge, ours is the first result that shows how to compute these aggregates with high probability using only O(n log log n) messages. In contrast, the best known gossip-based algorithm for computing these aggregates requires O(n log n) messages and O(log n) rounds. Thus, our algorithm allows system designers to trade off a small increase in round complexity with a significant reduction in message complexity. This can lead to dramatically lower network congestion and longer node lifetimes in wireless and sensor networks, where channel bandwidth and battery life are severely constrained.
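For context, the O(n log n)-message baseline mentioned in the abstract is push-sum-style gossip for SUM/AVERAGE. The sketch below is a synchronous simulation of that classical baseline, not the paper's improved O(n log log n) scheme; all names are illustrative.

```python
import random

# Synchronous simulation of push-sum gossip averaging -- the classical baseline
# (O(n log n) messages) the paper improves on, not its O(n log log n) scheme.
# Each node holds a (sum, weight) pair, keeps half, and pushes half to a random
# peer each round; sum/weight at every node converges to the global average.

def push_sum(values, rounds=30, seed=0):
    rng = random.Random(seed)
    n = len(values)
    sums = [float(v) for v in values]
    weights = [1.0] * n
    for _ in range(rounds):
        inbox = [[0.0, 0.0] for _ in range(n)]
        for i in range(n):
            half_s, half_w = sums[i] / 2.0, weights[i] / 2.0
            j = rng.randrange(n)                            # random gossip target
            inbox[i][0] += half_s; inbox[i][1] += half_w    # keep one half
            inbox[j][0] += half_s; inbox[j][1] += half_w    # push the other half
        sums = [s for s, _ in inbox]
        weights = [w for _, w in inbox]
    return [s / w for s, w in zip(sums, weights)]

print(push_sum([10, 20, 30, 40]))   # every estimate approaches the average 25.0
```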

68 citations


Proceedings Article
01 Jan 2006
TL;DR: This paper proposes communication-efficient schemes for the anomaly detection problem, which is modeled as one of detecting the violation of global constraints defined over distributed system variables, and proposes approximation algorithms for computing provably near-optimal (in terms of the number of messages) local constraints.
Abstract: In many distributed environments, the primary function of monitoring software is to detect anomalies, i.e., instances when system behavior deviates substantially from the norm. In this paper, we propose communication-efficient schemes for the anomaly detection problem, which we model as one of detecting the violation of global constraints defined over distributed system variables. Our approach eliminates the need to continuously track the global system state by decomposing global constraints into local constraints that can be checked efficiently at each site. Only in the occasional event that a local constraint is violated, do we resort to more expensive global constraint checking. We show that the problem of selecting the local constraints, based on frequency distribution of individual system variables, so as to minimize the communication cost is NP-hard. We propose approximation algorithms for computing provably near-optimal (in terms of the number of messages) local constraints. Experimental results with real-life network traffic data sets demonstrate that our technique can reduce message communication overhead by as much as 70% compared to existing data distribution-agnostic approaches.
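The decomposition idea can be illustrated with a global SUM constraint: split the global threshold T into per-site thresholds t_i with t_1 + ... + t_n ≤ T, so that as long as every site stays below its own t_i the global constraint provably holds and no messages are exchanged. The toy sketch below uses that assumption; names are hypothetical, and the paper's algorithms additionally choose the t_i from per-variable frequency distributions to minimize expected message cost.

```python
# Toy sketch of decomposing a global SUM constraint (sum of x_i <= T) into local
# thresholds t_i with t_1 + ... + t_n <= T. While every site observes x_i <= t_i,
# the global constraint cannot be violated and no messages are sent; only a local
# alarm triggers the expensive global check. Names are illustrative; the paper
# chooses the t_i from per-variable frequency distributions to minimize messages.

class Site:
    def __init__(self, name, local_threshold):
        self.name = name
        self.t = local_threshold

    def local_ok(self, x):
        return x <= self.t          # True -> stay silent, no communication

def monitor_step(sites, readings, T):
    alarms = [s.name for s, x in zip(sites, readings) if not s.local_ok(x)]
    if not alarms:
        return "silent: global constraint guaranteed by local checks"
    total = sum(readings)           # expensive global poll, done only on alarm
    return f"local alarms at {alarms}; global sum={total} -> " + \
           ("VIOLATED" if total > T else "ok (false alarm)")

sites = [Site("s1", 40), Site("s2", 30), Site("s3", 30)]   # t_i sum to T = 100
print(monitor_step(sites, [35, 20, 25], T=100))   # silent
print(monitor_step(sites, [45, 20, 45], T=100))   # local alarms, global violated
```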

52 citations


01 Jan 2006
TL;DR: This paper develops failure-resilient techniques for monitoring link delays and faults in a Service Provider or Enterprise IP network and proposes greedy approximation algorithms that achieve a logarithmic approximation factor for the station selection problem and a constant factor for the probe assignment problem.
Abstract: In this paper, we develop failure-resilient techniques for monitoring link delays and faults in a Service Provider or Enterprise IP network. Our two-phased approach attempts to minimize both the monitoring infrastructure costs as well as the additional traffic due to probe messages. In the first phase, we compute the locations of a minimal set of monitoring stations such that all network links are covered, even in the presence of several link failures. Subsequently, in the second phase, we compute a minimal set of probe messages that are transmitted by the stations to measure link delays and isolate network faults. We show that both the station selection problem as well as the probe assignment problem are NP-hard. We then propose greedy approximation algorithms that achieve a logarithmic approximation factor for the station selection problem and a constant factor for the probe assignment problem. These approximation ratios are provably very close to the best possible bounds for any algorithm.

40 citations


Proceedings ArticleDOI
01 Nov 2006
TL;DR: These tools can be used by service providers and enterprises to identify network impairments that cause service quality degradation and take corrective measures in real time, so that the degradation perceived by end-users is minimal.
Abstract: Service providers and enterprises all over the world are rapidly deploying Voice over IP (VoIP) networks because of reduced capital and operational expenditure, and easy creation of new services. Voice traffic has stringent quality-of-service requirements, such as strict delay and loss bounds, and 99.999% network availability. However, IP networks have not been designed to easily meet the above requirements. Thus, service providers need service quality management tools that can proactively detect and mitigate service quality degradation of VoIP traffic. In this paper, we present active and passive probes that enable service providers to detect service impairments. We use the probes to compute the network parameters (delay, loss and jitter) that can be used to compute the call quality as a Mean Opinion Score using a voice quality metric, the E-model. These tools can be used by service providers and enterprises to identify network impairments that cause service quality degradation and take corrective measures in real time, so that the degradation perceived by end-users is minimal.
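For reference, a commonly used simplified form of the ITU-T G.107 E-model maps one-way delay and packet loss to an R-factor and then to a MOS; jitter is typically folded into the effective delay by the de-jitter buffer and is omitted here. The sketch below uses that public simplification with assumed G.711 codec parameters; it is not necessarily the exact computation implemented by the probes described in the paper.

```python
# Simplified ITU-T G.107 E-model sketch: map measured one-way delay and packet
# loss to an R-factor and then to a MOS. These are the widely published
# simplified formulas, not necessarily the exact ones used by the probes in the
# paper; codec parameters (Ie=0, Bpl=25.1) assume G.711 with loss concealment.

def r_factor(one_way_delay_ms, loss_pct, ie=0.0, bpl=25.1):
    d = one_way_delay_ms
    # Delay impairment Id: grows sharply once delay exceeds ~177.3 ms.
    i_d = 0.024 * d + 0.11 * (d - 177.3) * (1.0 if d > 177.3 else 0.0)
    # Effective equipment impairment Ie-eff from random packet loss.
    i_e_eff = ie + (95.0 - ie) * loss_pct / (loss_pct + bpl)
    return 93.2 - i_d - i_e_eff

def mos(r):
    if r < 0:
        return 1.0
    if r > 100:
        return 4.5
    return 1.0 + 0.035 * r + 7e-6 * r * (r - 60.0) * (100.0 - r)

# 80 ms one-way delay and 0.5% loss -> MOS around 4.3 ("good" call quality).
print(round(mos(r_factor(80, 0.5)), 2))
```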

10 citations


Proceedings ArticleDOI
07 Aug 2006
TL;DR: These tools can be used by service providers and enterprises to identify network impairments that cause service quality degradation and take corrective measures in real time, so that the degradation perceived by end-users is minimal.
Abstract: Service providers and enterprises all over the world are rapidly deploying Voice over IP (VoIP) networks because of reduced capital and operational expenditure, and easy creation of new services. Voice traffic has stringent quality-of-service requirements, such as strict delay and loss bounds, and 99.999% network availability. However, IP networks have not been designed to easily meet the above requirements. Thus, service providers need service quality management tools that can proactively detect and mitigate service quality degradation of VoIP traffic. In this paper, we present active and passive probes that enable service providers to detect service impairments. We use the probes to compute the network parameters (delay, loss and jitter) that can be used to compute the call quality as a Mean Opinion Score using a voice quality metric, the E-model. These tools can be used by service providers and enterprises to identify network impairments that cause service quality degradation and take corrective measures in real time, so that the degradation perceived by end-users is minimal.

10 citations


Proceedings ArticleDOI
01 Nov 2006
TL;DR: This paper presents a novel gossip-based scheme using which all the nodes in an n-node overlay network can compute the common aggregates of MIN, MAX, SUM, AVERAGE, and RANK of their values using O(n log log n) messages within O(log n log log n) rounds of communication.
Abstract: Recently, there has been a growing interest in gossip-based protocols that employ randomized communication to ensure robust information dissemination. In this paper, we present a novel gossip-based scheme using which all the nodes in an n-node overlay network can compute the common aggregates of MIN, MAX, SUM, AVERAGE, and RANK of their values using O(n log log n) messages within O(log n log log n) rounds of communication. To the best of our knowledge, ours is the first result that shows how to compute these aggregates with high probability using only O(n log log n) messages. In contrast, the best known gossip-based algorithm for computing these aggregates requires O(n log n) messages and O(log n) rounds. Thus, our algorithm allows system designers to trade off a small increase in round complexity with a significant reduction in message complexity. This can lead to dramatically lower network congestion and longer node lifetimes in wireless and sensor networks, where channel bandwidth and battery life are severely constrained.

4 citations