
Showing papers by "Sonia Fahmy published in 2008"


Proceedings ArticleDOI
13 Apr 2008
TL;DR: This work designs an algorithm that starts from an arbitrary tree and iteratively reduces the load on bottleneck nodes (nodes likely to deplete their energy soon due to high degree or low remaining energy), and shows that the algorithm terminates in polynomial time and is provably near optimal.
Abstract: Energy efficiency is critical for wireless sensor networks. The data gathering process must be carefully designed to conserve energy and extend the network lifetime. For applications where each sensor continuously monitors the environment and periodically reports to a base station, a tree-based topology is often used to collect data from sensor nodes. In this work, we study the construction of a data gathering tree to maximize the network lifetime, which is defined as the time until the first node depletes its energy. The problem is shown to be NP-complete. We design an algorithm which starts from an arbitrary tree and iteratively reduces the load on bottleneck nodes (nodes likely to soon deplete their energy due to high degree or low remaining energy). We show that the algorithm terminates in polynomial time and is provably near optimal.
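As a rough illustration of the bottleneck-reduction idea (a simplified sketch, not the paper's algorithm; the topology, energy values, and the single re-parenting step below are invented for the example), the lifetime of a gathering tree can be computed from per-node forwarding load:

```python
def subtree_size(children, v):
    return 1 + sum(subtree_size(children, c) for c in children.get(v, []))

def lifetime(parent, energy):
    """Lifetime = time until the first sensor depletes its energy, assuming
    each node forwards one packet per round for every node in its subtree."""
    children = {}
    for v, p in parent.items():
        if p is not None:
            children.setdefault(p, []).append(v)
    loads = {v: subtree_size(children, v) for v in parent}
    return min(energy[v] / loads[v] for v in parent if parent[v] is not None)

# Hypothetical 4-sensor tree rooted at base station 'bs'.
parent = {'bs': None, 'a': 'bs', 'b': 'bs', 'c': 'a', 'd': 'a'}
energy = {'a': 2.0, 'b': 5.0, 'c': 1.0, 'd': 1.0}
before = lifetime(parent, energy)  # 'a' is the bottleneck (load 3)
parent['d'] = 'b'                  # shift load off the bottleneck node
after = lifetime(parent, energy)   # lifetime improves
```

In this toy instance, re-parenting 'd' under 'b' raises the lifetime from 2/3 to 1; the paper's algorithm repeats such load-shifting steps on bottleneck nodes until it provably approaches the optimum.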

174 citations


Proceedings ArticleDOI
28 May 2008
TL;DR: This paper analyzes and compares the underlying distribution frameworks of three video sharing services - YouTube, Dailymotion and Metacafe - based on traces collected from measurements over a period of 23 days and investigates the variation in service delay with the user's geographical location and with video characteristics such as age and popularity.
Abstract: Serving multimedia content over the Internet with negligible delay remains a challenge. With the advent of Web 2.0, numerous video sharing sites using different storage and content delivery models have become popular. Yet, little is known about these models from a global perspective. Such an understanding is important for designing systems which can efficiently serve video content to users all over the world. In this paper, we analyze and compare the underlying distribution frameworks of three video sharing services - YouTube, Dailymotion and Metacafe - based on traces collected from measurements over a period of 23 days. We investigate the variation in service delay with the user's geographical location and with video characteristics such as age and popularity. We leverage multiple vantage points distributed around the globe to validate our observations. Our results represent some of the first measurements directed towards analyzing these recently popular services.

99 citations


Proceedings ArticleDOI
31 Oct 2008
TL;DR: This is the first study to compare streaming overlay architectures in real Internet settings, considering not only intuitive aspects such as scalability and performance under churn, but also less studied factors such as the bandwidth and latency heterogeneity of overlay participants; it indicates that mesh-based systems are superior for nodes with high bandwidth capabilities and low round trip times.
Abstract: We compare two representative streaming systems using mesh-based and multiple tree-based overlay routing through deployments on the PlanetLab wide-area experimentation platform. To the best of our knowledge, this is the first study to compare streaming overlay architectures in real Internet settings, considering not only intuitive aspects such as scalability and performance under churn, but also less studied factors such as bandwidth and latency heterogeneity of overlay participants. Overall, our study indicates that mesh-based systems are superior for nodes with high bandwidth capabilities and low round trip times, while multi-tree based systems currently cope better with stringent real time deadlines under heterogeneous conditions.

42 citations


Proceedings ArticleDOI
13 Apr 2008
TL;DR: A measurement-based model for routers and other forwarding devices is presented, which is used to simulate two different Cisco routers under varying traffic conditions and preliminary results indicate that the model can approximate the Cisco routers.
Abstract: Several popular simulation and emulation environments fail to account for realistic packet forwarding behaviors of commercial switches and routers. Such simulation or emulation inaccuracies can lead to dramatic and qualitative impacts on the results. In this paper, we present a measurement-based model for routers and other forwarding devices, which we use to simulate two different Cisco routers under varying traffic conditions. The structure of our model is device-independent, but requires device-specific parameters. We construct a profiling tool and use it to derive router parameter tables within a few hours. Our preliminary results indicate that our model can approximate the Cisco routers. The compactness of the parameter tables and simplicity of the model makes it possible to use it for high-fidelity simulations while preserving simulation scalability.
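A minimal sketch of what a table-driven forwarding model might look like (the bin structure and all timing values below are invented placeholders, not the paper's measured device parameters): per-packet service time is looked up from a size-indexed parameter table, and queueing delay follows from a single FIFO server.

```python
import bisect

SIZE_BINS = [64, 512, 1500]   # packet-size bin upper edges in bytes (hypothetical)
SERVICE_US = [1.2, 2.8, 6.5]  # per-packet service time in microseconds (hypothetical)

def service_time(pkt_bytes):
    # Look up the service time for the smallest bin that fits the packet.
    i = bisect.bisect_left(SIZE_BINS, pkt_bytes)
    return SERVICE_US[min(i, len(SERVICE_US) - 1)]

def fifo_delays(arrivals):
    """arrivals: list of (arrival_time_us, pkt_bytes) sorted by time.
    Returns per-packet queueing-plus-service delay under one FIFO server."""
    free_at, delays = 0.0, []
    for t, size in arrivals:
        start = max(t, free_at)          # wait if the server is busy
        free_at = start + service_time(size)
        delays.append(free_at - t)
    return delays
```

A real profile, as the abstract notes, is derived per device; the point of the sketch is that the model itself stays compact (a small table plus a queue discipline), which is what keeps simulations scalable.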

32 citations


Proceedings ArticleDOI
13 Apr 2008
TL;DR: This work proposes a network measurement service, with a focus on quantifying and bounding the impact of active measurements on the network resources being measured, and introduces methods to characterize the behavior ofactive measurements for use in admission control and scheduling decisions.
Abstract: Many network services such as voice, video, and collaborative applications require an informed view of network characteristics for effective operation. Sharing a network measurement service across multiple applications can significantly reduce measurement overhead, and remove the burden of performing network measurements from individual applications. To be effectively shared, a network measurement service must provide a variety of measurements to applications on-demand, including end-to-end available bandwidth, delay, and loss. We propose such a service, with a focus on quantifying and bounding the impact of active measurements on the network resources being measured. Resource bounds are necessary for wide-scale deployment, since not all users of the service can be trusted. The service informs applications of how service bounds affect their measurements. We introduce methods to characterize the behavior of active measurements for use in admission control and scheduling decisions. We evaluate our methods in experiments under realistic scenarios on the Emulab testbed.
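One common way to bound the load that active probes impose on a measured path is a token bucket; the sketch below is an illustration under that assumption (the class name, rate, and burst values are made up), not the service's actual admission controller.

```python
class ProbeBudget:
    """Token bucket limiting how many probe bits a path may carry.
    A measurement request is admitted only if it fits in the remaining budget."""

    def __init__(self, rate_bps, burst_bits):
        self.rate, self.cap = rate_bps, burst_bits
        self.tokens, self.last = burst_bits, 0.0

    def admit(self, now, probe_bits):
        # Refill tokens for the elapsed time, capped at the burst size.
        self.tokens = min(self.cap, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if probe_bits <= self.tokens:
            self.tokens -= probe_bits
            return True
        return False
```

Denied requests could then be queued for a scheduler rather than dropped, which matches the abstract's framing of using measurement characterizations in both admission control and scheduling decisions.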

11 citations


Proceedings ArticleDOI
28 Apr 2008
TL;DR: This paper investigates via simulations the applicability of packet-level downscaling approaches to DoS scenarios, selecting two representative methods, SHRiNK and TranSim, and proposes guidelines for researchers to select the most suitable downscaling approach for their own research.
Abstract: A major challenge that researchers face in studying denial of service (DoS) attacks is the size of the network to be investigated. A typical DoS attack usually takes place over a large portion of the Internet and involves a considerable number of hosts. This can be intractable for testbed experimentation, and even simulation. Therefore, it is important to simplify a network scenario with DoS attacks before applying it to a simulation/testbed platform. Several approaches have been proposed in the literature to downscale a network scenario, while preserving certain critical properties. In this paper, we investigate via simulations the applicability of packet-level downscaling approaches to DoS scenarios. We select two representative methods: SHRiNK and TranSim. Our experiments identify the operational range of the two downscaling approaches, and propose guidelines for researchers to select the most suitable downscaling approach for their own research.
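In SHRiNK-style sampling, a scenario is downscaled by keeping each flow with probability p and scaling link capacities by the same factor, so that (within the regimes the method supports) queueing behavior is statistically preserved. A toy sketch, where the flow records, link names, and the helper `shrink` are all illustrative:

```python
import random

def shrink(flows, capacities_bps, p, seed=0):
    """Downscale a scenario: sample flows with probability p and scale
    every link capacity by the same factor p."""
    rng = random.Random(seed)
    kept = [f for f in flows if rng.random() < p]
    scaled = {link: c * p for link, c in capacities_bps.items()}
    return kept, scaled
```

TranSim-style approaches instead transform time and packet granularity; the paper's experiments delimit where each kind of transformation remains faithful for DoS traffic.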

5 citations


01 Jan 2008
TL;DR: An extensive comparison between simulation and emulation environments for the same Denial of Service (DoS) attack experiment reveals drastic differences between emulated and simulated results, and between various emulation testbeds; the authors argue that measurement-based models for routers and other forwarding devices are crucial.
Abstract: Simulation, emulation, and wide-area testbeds exhibit different tradeoffs with respect to fidelity, scalability, and manageability. Network security and network planning/dimensioning experiments introduce additional requirements compared to traditional networking and distributed system experiments. For example, high capacity attack or multimedia flows can push packet forwarding devices to the limit and expose unexpected behaviors. Many popular simulation and emulation tools use high-level models of forwarding behavior in switches and routers, and give little guidance on setting model parameters such as buffer sizes. Thus, a myriad of papers report results that are highly sensitive to the forwarding model or buffer size used. In this work, we first motivate the need for better models by performing an extensive comparison between simulation and emulation environments for the same Denial of Service (DoS) attack experiment. Our results reveal that there are drastic differences between emulated and simulated results and between various emulation testbeds. We then argue that measurement-based models for routers and other forwarding devices are crucial. We devise such a model and validate it with measurements from three types of Cisco routers and one Juniper router, under varying traffic conditions. The structure of our model is device-independent, but requires device-specific parameters. The compactness of the parameter tables and simplicity of the model make it versatile for high-fidelity simulations that preserve simulation scalability. We construct a black box profiler to infer parameter tables within a few hours. Our results indicate that our model can approximate different types of routers. Additionally, the results indicate that queue characteristics vary dramatically among the devices we measure, and that backplane contention must be modeled.

3 citations


Journal IssueDOI
01 Aug 2008
TL;DR: This paper proposes a time synchronization framework for clustered, multi-hop sensor networks, assuming that relative node synchronization is sufficient (consensus on one time value is not required), and proves that Niter is O(1) for all protocols.
Abstract: Time synchronization is essential for several ad-hoc network protocols and applications, such as TDMA scheduling and data aggregation. In this paper, we propose a time synchronization framework for clustered, multi-hop sensor networks. We assume that relative node synchronization is sufficient, that is, consensus on one time value is not required. Our goal is to divide the network into connected synchronization regions (nodes within two-hops) and perform inter-regional synchronization in O(LLSync) × Niter time, where O(LLSync) denotes the complexity of the underlying low-level synchronization technique (used for single-hop synchronization), and Niter denotes the number of iterations where the low-level synchronization protocol is invoked. Thus, our main objective is rapid convergence. We propose novel fully distributed protocols, SYNC-IN and SYNC-NET, for regional and network synchronization, respectively, and prove that Niter is O(1) for all protocols. Our framework does not require any special node capabilities (e.g., being global positioning system (GPS)-enabled), or the presence of reference nodes in the network. Our framework is also independent of the particular clustering, inter-cluster routing, and low-level synchronization protocols. We formulate a density model for analyzing inter-regional synchronization, and evaluate our protocols via extensive simulations. Copyright © 2007 John Wiley & Sons, Ltd.
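As a simplified illustration of relative (rather than absolute) synchronization, and not the SYNC-IN/SYNC-NET protocols themselves: once a low-level technique provides pairwise clock offsets between neighboring nodes, each node's offset relative to an arbitrary anchor follows by propagating sums along any spanning path (the node names and offset values below are invented).

```python
from collections import deque

def relative_offsets(pair_offsets, anchor):
    """pair_offsets: {(u, v): offset} meaning clock_v - clock_u.
    Returns each reachable node's clock offset relative to the anchor,
    propagated breadth-first from the anchor."""
    adj = {}
    for (u, v), off in pair_offsets.items():
        adj.setdefault(u, []).append((v, off))
        adj.setdefault(v, []).append((u, -off))  # reverse edge negates offset
    rel, q = {anchor: 0.0}, deque([anchor])
    while q:
        u = q.popleft()
        for v, off in adj.get(u, []):
            if v not in rel:
                rel[v] = rel[u] + off
                q.append(v)
    return rel
```

Note this needs no GPS or reference node, matching the abstract's assumptions; consensus on a single absolute time is never computed.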

1 citation


Book
01 Jan 2008
TL;DR: This paper optimizes an existing implementation of a CPU-based channelizer and implements a novel GPU-based channelizer, delivering an overall improvement of 30% for the CPU optimization on an Intel Core i7-4790 @ 3.60 GHz, and a 3.2-fold improvement for the GPU implementation on an AMD R9 290, compared to the original CPU-based implementation.
Abstract: The essential process to analyze signals from multicarrier communication systems is to isolate independent communication channels using a channelizer. To implement a channelizer in software-defined radio systems, the Polyphase Filterbank (PFB) is commonly used. For real-time applications, the PFB has to process the digitized signal at a rate faster than or equal to its sampling rate. Depending on the underlying hardware, a PFB can run on a CPU, a Graphics Processing Unit (GPU), or even a Field-Programmable Gate Array (FPGA). CPUs and GPUs are more reconfigurable and scalable platforms than FPGAs. In this paper, we optimize an existing implementation of a CPU-based channelizer and implement a novel GPU-based channelizer. Our proposed solutions deliver an overall improvement of 30% for the CPU optimization on an Intel Core i7-4790 @ 3.60 GHz, and a 3.2-fold improvement for the GPU implementation on an AMD R9 290, when compared to the original CPU-based implementation.
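A minimal NumPy sketch of a critically sampled polyphase filterbank channelizer (the tap values, sizes, and branch-ordering convention here are illustrative, not taken from the paper's optimized CPU/GPU implementations): the prototype filter taps are split into M polyphase branches, each branch filters a decimated stream, and an M-point FFT separates the channels.

```python
import numpy as np

def pfb_channelize(x, taps, n_channels):
    """Critically sampled PFB: returns an (n_blocks, n_channels) array where
    column k is the decimated output stream of channel k."""
    taps_per_branch = len(taps) // n_channels
    # Split the prototype filter into n_channels polyphase branches.
    h = taps[: taps_per_branch * n_channels].reshape(taps_per_branch, n_channels)
    xr = x[: (len(x) // n_channels) * n_channels].reshape(-1, n_channels)
    n_blocks = xr.shape[0] - taps_per_branch + 1
    out = np.empty((n_blocks, n_channels), dtype=complex)
    for i in range(n_blocks):
        # Each branch filters its own decimated stream (weighted block sum),
        # then the FFT across branches separates the channels.
        windowed = (xr[i : i + taps_per_branch] * h).sum(axis=0)
        out[i] = np.fft.fft(windowed)
    return out
```

With a DC input, all energy lands in channel 0, which gives a quick sanity check; a production channelizer would additionally handle branch ordering, oversampling, and windowed prototype filter design.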

1 citation