
Showing papers by "Sonia Fahmy published in 2007"


Proceedings ArticleDOI
01 May 2007
TL;DR: An optimization problem is formulated that sets the capture probability threshold at each hop so that network lifetime is maximized while multi-hop delivery performance is guaranteed; the problem turns out to be non-convex and hard to solve exactly.
Abstract: We study sleep/wake scheduling for low duty cycle sensor networks. Our work is different from prior work in that we explicitly consider the effect of synchronization error in the design of the sleep/wake scheduling algorithm. In our previous work, we studied sleep/wake scheduling for single-hop communications, e.g., intra-cluster communications between a cluster head and cluster members. We showed that there is an inherent trade-off between energy consumption and message delivery performance (defined as the message capture probability). We proposed an optimal sleep/wake scheduling algorithm, which satisfies a message capture probability threshold (assumed to be given) with minimum energy consumption. In this work, we consider multi-hop communications. We remove the previous assumption that the capture probability threshold is already given, and study how to set the per-hop capture probability thresholds to meet the quality of service (QoS) requirements of the application. In many sensor network applications, the QoS is determined by the amount of data delivered to the base station(s), i.e., the multi-hop delivery performance. We formulate an optimization problem that aims to set the capture probability threshold at each hop such that the network lifetime is maximized while the multi-hop delivery performance is guaranteed. The problem turns out to be non-convex and hard to solve exactly. By investigating the unique structure of the problem and using approximation techniques, we obtain a solution that achieves at least 0.73 of the optimal performance.
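
The per-hop threshold problem can be made concrete with a toy model: if each hop i is assigned a capture probability threshold p_i, the end-to-end delivery probability is the product of the p_i, and raising any p_i costs the corresponding node more energy. The sketch below, under an assumed energy model e(p) = -ln(1 - p) that is not from the paper, brute-forces a tiny three-hop instance; the paper's contribution is an approximation algorithm for the general non-convex problem, where exhaustive search is infeasible.

```python
# A toy sketch (not the paper's algorithm) of the per-hop threshold
# problem: choose a capture-probability threshold p_i for each hop so
# that the end-to-end delivery probability prod(p_i) meets a QoS
# target while the bottleneck node's lifetime is maximized.  The
# energy model e(p) = -log(1 - p) is a stand-in assumption: higher
# capture probability needs a longer wake window, hence more energy.
import itertools
import math

HOPS = 3
QOS_TARGET = 0.9        # required end-to-end delivery probability
BATTERY = 100.0         # identical battery budget per node (toy units)

def energy_per_message(p):
    """Assumed monotone energy cost of guaranteeing capture prob. p."""
    return -math.log(1.0 - p)

def lifetime(thresholds):
    """Network lifetime = first node to exhaust its battery."""
    return min(BATTERY / energy_per_message(p) for p in thresholds)

grid = [0.90, 0.93, 0.96, 0.99]
best = max(
    (c for c in itertools.product(grid, repeat=HOPS)
     if math.prod(c) >= QOS_TARGET),
    key=lifetime,
)
print("thresholds:", best, "lifetime:", round(lifetime(best), 2))
```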

77 citations


Journal ArticleDOI
TL;DR: It is shown that the mean number of hops and mean per-hop delay between parent and child hosts in overlay trees generally decrease as the level of the host in the overlay tree increases, and this phenomenon yields overlay tree cost savings.
Abstract: Overlay networks among cooperating hosts have recently emerged as a viable solution to several challenging problems, including multicasting, routing, content distribution, and peer-to-peer services. Application-level overlays, however, incur a performance penalty over router-level solutions. This paper quantifies and explains this performance penalty for overlay multicast trees via: 1) Internet experimental data; 2) simulations; and 3) theoretical models. We compare a number of overlay multicast protocols with respect to overlay tree structure and underlying network characteristics. Experimental data and simulations illustrate that the mean number of hops and mean per-hop delay between parent and child hosts in overlay trees generally decrease as the level of the host in the overlay tree increases. Overlay multicast routing strategies, overlay host distribution, and Internet topology characteristics are identified as three primary causes of the observed phenomenon. We show that this phenomenon yields overlay tree cost savings: Our results reveal that the normalized cost L(n)/U(n) is proportional to n^0.9 for small n, where L(n) is the total number of hops in all overlay links, U(n) is the average number of hops on the source-to-receiver unicast paths, and n is the number of members in the overlay multicast session. This can be compared to an IP multicast cost proportional to n^0.6 to n^0.8.
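
The reported scaling law can be checked on experimental data with a simple log-log fit. The sketch below uses synthetic L(n)/U(n) ratios generated with a known exponent of 0.9, purely to illustrate the estimation step; it is not the paper's dataset.

```python
# A minimal sketch of how the normalized-cost exponent can be
# estimated: given measurements of L(n) (total hops over all overlay
# links) and U(n) (mean hops on source-to-receiver unicast paths) for
# several session sizes n, fit k in L(n)/U(n) ~ c * n^k by least
# squares in log-log space.  The data below is synthetic, generated
# with k = 0.9 plus noise, only to demonstrate the fit.
import math
import random

random.seed(1)
sizes = [8, 16, 32, 64, 128, 256]
ratios = [2.0 * n ** 0.9 * random.uniform(0.95, 1.05) for n in sizes]

xs = [math.log(n) for n in sizes]
ys = [math.log(r) for r in ratios]
mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
k = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
    sum((x - mx) ** 2 for x in xs)
print(f"estimated exponent k = {k:.2f}")   # close to 0.9
```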

60 citations


Proceedings ArticleDOI
13 Jun 2007
TL;DR: A series of DoS impact metrics that measure the QoS experienced by end users during an attack is proposed, and it is demonstrated that these metrics capture the DoS impact more precisely than the measures used in the past.
Abstract: To date, the measurement of user-perceived degradation of quality of service during denial of service (DoS) attacks has remained an elusive goal. Current approaches mostly rely on lower-level traffic measurements such as throughput, utilization, loss rate, and latency. They fail to monitor all traffic parameters that signal service degradation for diverse applications, and to map application quality-of-service (QoS) requirements into specific parameter thresholds. To objectively evaluate an attack's impact on network services, its severity, and the effectiveness of a potential defense, we need precise, quantitative, and comprehensive DoS impact metrics that are applicable to any test scenario. We propose a series of DoS impact metrics that measure the QoS experienced by end users during an attack. The proposed metrics consider QoS requirements for a range of applications and map them into measurable traffic parameters with acceptable thresholds. Service quality is derived by comparing measured parameter values with the corresponding thresholds, and aggregated into a series of appropriate DoS impact metrics. We illustrate the proposed metrics using extensive live experiments with a wide range of background traffic and attack variants. We successfully demonstrate that our metrics capture the DoS impact more precisely than the measures used in the past.
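
To make the threshold idea concrete, the hedged sketch below classifies each transaction against per-application parameter bounds and aggregates failures per application class. The threshold values and the two-parameter records are illustrative assumptions, not the paper's calibrated requirements, and the paper's "series of appropriate DoS impact metrics" suggests richer aggregations than this single failure fraction.

```python
# A hedged sketch of the threshold idea: each application class gets
# acceptable bounds on measured traffic parameters, a transaction
# fails if any bound is breached, and the aggregate impact is the
# fraction of failed transactions per class.  The thresholds below
# are illustrative assumptions, not the paper's calibrated values.
THRESHOLDS = {
    # app class:  (max one-way delay [s], max loss rate)
    "web":  (4.0, 0.05),
    "dns":  (4.0, 0.00),
    "voip": (0.15, 0.03),
}

def transaction_failed(app, delay, loss):
    max_delay, max_loss = THRESHOLDS[app]
    return delay > max_delay or loss > max_loss

def percent_failed(transactions):
    """transactions: iterable of (app, delay, loss) tuples."""
    per_app = {}
    for app, delay, loss in transactions:
        total, failed = per_app.get(app, (0, 0))
        per_app[app] = (total + 1,
                        failed + transaction_failed(app, delay, loss))
    return {app: failed / total
            for app, (total, failed) in per_app.items()}

sample = [("web", 1.2, 0.01), ("web", 6.0, 0.00), ("voip", 0.08, 0.10)]
print(percent_failed(sample))   # {'web': 0.5, 'voip': 1.0}
```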

52 citations


Proceedings ArticleDOI
10 Sep 2007
TL;DR: Recommendations on configuring TCP and MAC parameters are given, which in many cases contradict previous proposals (which had themselves contradicted each other).
Abstract: Although it is well-known that TCP throughput is suboptimal in multihop wireless networks, little performance data is available for TCP in realistic wireless environments. In this paper, we present the results of an extensive experimental study of TCP performance on a 32-node wireless mesh network testbed deployed on the Purdue University campus. Contrary to prior work, which considered a single topology with equal-length links and only 1-hop neighbors within transmission range of each other, our study considers more realistic heterogeneous topologies. We vary the maximum TCP window size jointly with two important MAC-layer parameters: the use of RTS/CTS and the MAC data rate. Based on our TCP throughput results, we give recommendations on configuring TCP and MAC parameters, which in many cases contradict previous proposals (which had themselves contradicted each other).
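
The experiment design amounts to a three-factor sweep. A minimal sketch of such a matrix follows; the factor levels and the run_trial stub are assumptions for illustration, since the paper's testbed harness is not described here.

```python
# A sketch of the experiment matrix implied by the study: sweep the
# maximum TCP window size jointly with two MAC-layer factors (RTS/CTS
# and the data rate) and record throughput for each combination.  The
# factor levels and the run_trial stub are assumptions, not the
# testbed's actual configuration.
import itertools

WINDOW_SIZES_KB = [4, 16, 64, 256]   # max TCP window
RTS_CTS = [False, True]              # MAC RTS/CTS handshake
DATA_RATES_MBPS = [1, 2, 5.5, 11]    # 802.11b rates

def run_trial(window_kb, rts_cts, rate_mbps):
    """Placeholder: on a real testbed this would configure the nodes
    (e.g., via sysctl and the wireless driver) and run a bulk TCP
    transfer, returning the measured throughput."""
    raise NotImplementedError

for window, rts, rate in itertools.product(
        WINDOW_SIZES_KB, RTS_CTS, DATA_RATES_MBPS):
    print(f"trial: window={window}KB rts_cts={rts} rate={rate}Mbps")
    # throughput = run_trial(window, rts, rate)
```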

26 citations


Proceedings ArticleDOI
21 May 2007
TL;DR: A set of sampled and comprehensive benchmark scenarios, and a workbench for experiments involving denial-of-service (DoS) attacks, are described, developed by sampling features of attacks, legitimate traffic and topologies from the real Internet.
Abstract: While the DETER testbed provides a safe environment and basic tools for security experimentation, researchers face a significant challenge in assembling the testbed pieces and tools into realistic and complete experimental scenarios. In this paper, we describe our work on developing a set of sampled and comprehensive benchmark scenarios, and a workbench for experiments involving denial-of-service (DoS) attacks. The benchmark scenarios are developed by sampling features of attacks, legitimate traffic and topologies from the real Internet. We have also developed a measure of DoS impact on network services to evaluate the severity of an attack and the effectiveness of a proposed defense. The benchmarks are integrated with the testbed via the experimenter's workbench - a collection of traffic generation tools, topology and defense library, experiment control scripts and a graphical user interface. Benchmark scenarios provide inputs to the workbench, bypassing the user's selection of topology and traffic settings, and leaving her only with the task of selecting a defense, its configuration and deployment points. Jointly, the benchmarks and the experimenter's workbench provide an easy, point-and-click environment for DoS experimentation and defense testing.
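
A benchmark scenario in this spirit bundles a sampled topology, a legitimate-traffic mix, and an attack specification. The sketch below shows one plausible shape for such a bundle; all field names are hypothetical and do not reflect the workbench's actual schema.

```python
# A hedged sketch of what a benchmark scenario bundle might look
# like: a topology, a legitimate-traffic mix, and an attack spec
# sampled from real Internet measurements.  All field names are
# hypothetical; the DETER workbench's schema is not shown here.
from dataclasses import dataclass

@dataclass
class AttackSpec:
    kind: str           # e.g. "UDP flood", "TCP SYN flood"
    rate_mbps: float
    num_sources: int

@dataclass
class Scenario:
    topology: str       # name of a sampled AS/edge-level topology
    traffic_mix: dict   # app class -> share of legitimate traffic
    attack: AttackSpec

baseline = Scenario(
    topology="sampled-edge-net-01",
    traffic_mix={"web": 0.6, "dns": 0.2, "voip": 0.2},
    attack=AttackSpec(kind="UDP flood", rate_mbps=100.0, num_sources=50),
)
print(baseline)
```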

24 citations


06 Aug 2007
TL;DR: The following automation tools were developed: the Experimenter's Workbench, which provides a graphical user interface, tools for topology, traffic, and monitoring setup, and tools for statistics collection, visualization, and processing; and a DDoS benchmark suite that contains a set of diverse and comprehensive attack scenarios.
Abstract: While the DETER testbed provides a safe environment and basic tools for security experimentation, researchers face a significant challenge in assembling the testbed pieces and tools into realistic and complete experimental scenarios. In this paper, we describe our work on automating experimentation for distributed denial-of-service attacks. We developed the following automation tools: (1) the Experimenter's Workbench that provides a graphical user interface, tools for topology, traffic and monitoring setup and tools for statistics collection, visualization and processing, (2) a DDoS benchmark suite that contains a set of diverse and comprehensive attack scenarios, (3) the Experiment Generator that combines chosen AS-level and edge-level topologies, legitimate traffic and a set of attacks into DETER-compatible scripts. Jointly, these tools facilitate easy experimentation even for novice users.
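
The Experiment Generator's role can be illustrated with a stub that folds a topology, a traffic mix, and an attack into a single script. The emitted command names below are placeholders; the text is a stand-in, not DETER's actual script format.

```python
# A minimal, self-contained sketch of the Experiment Generator idea:
# combine a chosen topology, traffic mix, and attack into one
# experiment script.  The emitted commands are invented placeholders,
# not DETER-compatible output.
def generate_experiment(topology, traffic_mix, attack):
    lines = [f"# experiment on topology {topology}"]
    for app, share in traffic_mix.items():
        lines.append(f"start-traffic {app} share={share}")
    lines.append(
        f"start-attack {attack['kind']} rate={attack['rate_mbps']}Mbps "
        f"sources={attack['num_sources']}"
    )
    return "\n".join(lines)

script = generate_experiment(
    topology="sampled-as-topo-03",
    traffic_mix={"web": 0.7, "dns": 0.3},
    attack={"kind": "SYN flood", "rate_mbps": 40.0, "num_sources": 20},
)
print(script)
```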

15 citations


Proceedings ArticleDOI
11 May 2007
TL;DR: The architecture of a black-box router profiling tool, which integrates the popular ns-2 simulator with the Click modular router and a modified network driver, is described; the preliminary results demonstrate that routers and other forwarding devices cannot be modeled as simple output port queues, even if correct rate limits are observed.
Abstract: Simulation, emulation, and wide-area testbeds exhibit different strengths and weaknesses with respect to fidelity, scalability, and manageability. Fidelity is a key concern, since simulation or emulation inaccuracies can have a dramatic and qualitative impact on the results. For example, high-bandwidth denial of service attack floods of the same rates have very different impacts on the different platforms, even if the experimental scenario is supposedly identical. This is because many popular simulation and emulation environments fail to account for realistic commercial router behaviors, and incorrect results have been reported based on experiments conducted in these environments. In this paper, we describe the architecture of a black-box router profiling tool which integrates the popular ns-2 simulator with the Click modular router and a modified network driver. We use this profiler to collect measurements on a Cisco router. Our preliminary results demonstrate that routers and other forwarding devices cannot be modeled as simple output port queues, even if correct rate limits are observed. We discuss our future work plans for using our data to create high-fidelity network simulation/emulation models that are not computationally prohibitive.
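
For reference, the naive model that the measurements call into question is easy to state in code: a single drop-tail FIFO per output port, served at the link rate. The sketch below simulates that model with illustrative parameters; the paper's finding is that real routers deviate from it even when the rate limit is configured correctly.

```python
# A sketch of the naive model the profiling results argue against: a
# router output port as a single FIFO drop-tail queue with a fixed
# service rate and buffer.  Parameters are illustrative.
RATE_BPS = 10_000_000    # 10 Mb/s output link
BUFFER_PKTS = 50         # drop-tail buffer size

def simulate_fifo(arrivals, pkt_bits=12_000):
    """arrivals: sorted packet arrival times (s). Returns (delays, drops)."""
    service = pkt_bits / RATE_BPS      # transmission time per packet
    departures, delays, drops = [], [], 0
    for t in arrivals:
        # keep only packets still in the system at time t
        departures = [d for d in departures if d > t]
        if len(departures) >= BUFFER_PKTS:
            drops += 1                 # buffer full: drop-tail
            continue
        start = departures[-1] if departures else t
        departures.append(max(start, t) + service)
        delays.append(departures[-1] - t)
    return delays, drops

arrivals = [i * 0.0005 for i in range(200)]    # 2,000 pkt/s burst
delays, drops = simulate_fifo(arrivals)
print(f"mean delay {sum(delays)/len(delays)*1000:.2f} ms, drops {drops}")
```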

8 citations


13 Jun 2007
TL;DR: A series of DoS impact metrics that measure the QoS experienced by end users during an attack is proposed; the metrics are easily reproducible, and the relevant traffic parameters are extracted from packet traces gathered at the source and destination networks during an experiment.
Abstract: The exclusive goal of a Denial of Service (DoS) attack is to significantly degrade a network's service quality by introducing large or variable delays, excessive losses, and service interruptions. Conversely, the aim of any DoS defense is to neutralize this effect, and to quickly and fully restore service quality to levels acceptable to the users. DoS attacks and defenses have typically been studied by researchers via network simulation and live experiments in isolated testbeds. To objectively evaluate an attack's impact on network services, its severity, and the effectiveness of a potential defense, we need precise, quantitative, and comprehensive DoS impact metrics that are applicable to any test scenario. Current evaluation approaches do not meet these goals. They commonly measure one or a few traffic parameters and determine an attack's impact by comparing parameter value distributions in different tests. These approaches are customized to a particular test scenario, and they fail to monitor all traffic parameters that signal service degradation for diverse applications. Further, they are imprecise because they fail to map application quality-of-service (QoS) requirements into specific parameter thresholds. We propose a series of DoS impact metrics that measure the QoS experienced by end users during an attack. Our measurements and metrics are ideal for testbed experimentation. They are easily reproducible, and the relevant traffic parameters are extracted from packet traces gathered at the source and destination networks during an experiment. The proposed metrics consider QoS requirements for a range of applications and map them into measurable traffic parameters. We then specify thresholds for each relevant parameter that, when breached, indicate poor service quality. Service quality is derived by comparing measured parameter values with the corresponding thresholds, and aggregated into a series of appropriate DoS impact metrics. We illustrate the proposed metrics using extensive live experiments with a wide range of background traffic and attack variants. We successfully demonstrate that our metrics capture the DoS impact more precisely than the measures used in the past.
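
The trace-driven measurement step can be sketched simply: pair packet records from the source and destination captures to recover per-packet one-way delay and the loss rate. The record format below (packet id to timestamp) is an assumption; real experiments would parse pcap traces.

```python
# A sketch of the trace-based measurement step: pair packet records
# captured at the source and destination networks to recover
# per-packet one-way delay and the loss rate.  Record formats here
# are assumed (packet id -> timestamp), not the paper's tooling.
def delay_and_loss(sent, received):
    """sent/received: dicts mapping packet id -> timestamp (s)."""
    delays = [received[p] - sent[p] for p in sent if p in received]
    loss_rate = 1.0 - len(delays) / len(sent)
    return delays, loss_rate

sent = {1: 0.000, 2: 0.010, 3: 0.020, 4: 0.030}
received = {1: 0.052, 2: 0.065, 4: 0.081}      # packet 3 was lost
delays, loss = delay_and_loss(sent, received)
print(f"mean delay {sum(delays)/len(delays):.3f}s, loss {loss:.0%}")
```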

6 citations


Proceedings ArticleDOI
12 Jun 2007
TL;DR: Several DoS impact metrics are proposed that measure the quality of service experienced by end users during an attack and compare these measurements to application-specific thresholds.
Abstract: The denial-of-service (DoS) research community lacks accurate metrics to evaluate an attack's impact on network services, its severity, and the effectiveness of a potential defense. We propose several DoS impact metrics that measure the quality of service experienced by end users during an attack, and compare these measurements to application-specific thresholds. Our metrics are ideal for testbed experimentation, since the necessary traffic parameters are extracted from packet traces gathered during an experiment.

4 citations