
Showing papers on "Testbed published in 2009"


Journal ArticleDOI
TL;DR: The Data Assimilation Research Testbed (DART), as discussed by the authors, is an open-source community facility for data assimilation education, research, and development.
Abstract: The Data Assimilation Research Testbed (DART) is an open-source community facility for data assimilation education, research, and development. DART's ensemble data assimilation algorithms, careful ...

538 citations


01 Jan 2009
TL;DR: The testbed of noise-free functions is defined and motivated; participants run their favorite black-box real-parameter optimizer in a few dimensions a few hundred times and execute the provided post-processing script afterwards.
Abstract: Quantifying and comparing performance of optimization algorithms is one important aspect of research in search and optimization. However, this task turns out to be tedious and difficult to realize even in the single-objective case -- at least if one is willing to accomplish it in a scientifically decent and rigorous way. The BBOB 2009 workshop will furnish most of this tedious task for its participants: (1) choice and implementation of a well-motivated real-parameter benchmark function testbed, (2) design of an experimental set-up, (3) generation of data output for (4) post-processing and presentation of the results in graphs and tables. What remains to be done for the participants is to allocate CPU-time, run their favorite black-box real-parameter optimizer in a few dimensions a few hundreds of times and execute the provided post-processing script afterwards. In this report, the testbed of noise-free functions is defined and motivated.
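
For a concrete picture of the workflow the abstract describes, here is a minimal Python sketch of such a benchmarking campaign: several dimensions, many independent trials per function, and a per-trial log that a post-processing script could consume. The optimizer (`my_optimizer`, plain random search) and the `sphere` stand-in are placeholders, not the official BBOB/COCO harness or function suite.

```python
import random

def sphere(x):
    """Stand-in for a BBOB noise-free benchmark function (e.g. f1, sphere)."""
    return sum(xi * xi for xi in x)

def my_optimizer(f, dim, max_evals, target=1e-8):
    """Hypothetical black-box optimizer: pure random search, for illustration."""
    best = float("inf")
    for evals in range(1, max_evals + 1):
        x = [random.uniform(-5, 5) for _ in range(dim)]
        best = min(best, f(x))
        if best <= target:
            return evals, best          # evaluations needed to reach the target
    return max_evals, best

# A miniature campaign in the spirit of the BBOB set-up: a few dimensions,
# several independent trials, one result line per trial for post-processing.
if __name__ == "__main__":
    for dim in (2, 5, 10):
        for trial in range(15):
            evals, best = my_optimizer(sphere, dim, max_evals=1000 * dim)
            print(f"f=sphere dim={dim} trial={trial} evals={evals} best={best:.3e}")
```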

521 citations


Journal ArticleDOI
TL;DR: The algorithm is implemented in TinyOS and shown to be effective in adapting to local topology changes without incurring global overhead in the scheduling, and the effect of the time-varying nature of wireless links on the conflict-free property of DRAND-assigned time slots is evaluated.
Abstract: This paper presents a distributed implementation of RAND, a randomized time slot scheduling algorithm, called DRAND. DRAND runs in O(delta) time and message complexity, where delta is the maximum size of a two-hop neighborhood in a wireless network, assuming that message delays can be bounded by an unknown constant. DRAND is the first fully distributed version of RAND. The algorithm is suitable for a wireless network where most nodes do not move, such as wireless mesh networks and wireless sensor networks. We implement the algorithm in TinyOS and demonstrate its performance in a real testbed of Mica2 nodes. The algorithm does not require any time synchronization and is shown to be effective in adapting to local topology changes without incurring global overhead in the scheduling. Because of these features, it can also be used for other scheduling problems such as frequency or code scheduling (for FDMA or CDMA) or local identifier assignment for wireless networks where time synchronization is not enforced. We further evaluate the effect of the time-varying nature of wireless links on the conflict-free property of DRAND-assigned time slots. This experiment is conducted on a 55-node testbed consisting of the more recent MicaZ sensor nodes.
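
As an illustration of the underlying idea (the centralized RAND scheme that DRAND realizes in a distributed, message-passing fashion), the following Python sketch assigns each node the smallest slot not used within its two-hop neighborhood, visiting nodes in random order. It omits DRAND's lottery and negotiation messages and its timing analysis.

```python
import random

def two_hop_neighbors(node, adj):
    """All nodes within two hops of `node` (excluding the node itself)."""
    one_hop = set(adj[node])
    two_hop = set()
    for n in one_hop:
        two_hop |= set(adj[n])
    return (one_hop | two_hop) - {node}

def rand_slot_assignment(adj, seed=0):
    """Assign each node the smallest slot unused in its two-hop neighborhood,
    visiting nodes in random order (the idea DRAND realizes distributively)."""
    rng = random.Random(seed)
    order = list(adj)
    rng.shuffle(order)
    slot = {}
    for node in order:
        taken = {slot[n] for n in two_hop_neighbors(node, adj) if n in slot}
        s = 0
        while s in taken:
            s += 1
        slot[node] = s
    return slot

if __name__ == "__main__":
    # Small line topology: 0 - 1 - 2 - 3
    adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
    print(rand_slot_assignment(adj))   # conflict-free within two hops
```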

339 citations


Proceedings ArticleDOI
08 Jul 2009
TL;DR: This BI-population CMA-ES is benchmarked on the BBOB-2009 noiseless function testbed and could solve 23, 22 and 20 functions out of 24 in search space dimensions 10, 20 and 40, respectively, within a budget of less than 10^6 D function evaluations per trial.
Abstract: We benchmark the BI-population CMA-ES on the BBOB-2009 noisy functions testbed. BI-population refers to a multistart strategy with equal budgets for two interlaced restart strategies, one with an increasing population size and one with varying small population sizes. The latter is presumably of little use on a noisy testbed. The BI-population CMA-ES could solve 29, 27 and 26 out of 30 functions in search space dimensions 5, 10 and 20, respectively. The time to find the solution ranges between 100 D and 10^5 D^2 objective function evaluations, where D is the search space dimension.
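
The BI-population bookkeeping can be sketched as follows, assuming a placeholder `run_es` in place of an actual CMA-ES run: restarts alternate between a regime whose population doubles each time and a regime with small, varying populations, keeping the two budgets roughly equal. The cost model and budget numbers are illustrative only.

```python
import random

def run_es(popsize, max_evals):
    """Placeholder for one CMA-ES run; returns (best_f, evals_used)."""
    evals = min(max_evals, popsize * 100)          # pretend cost model
    return random.random() / popsize, evals        # pretend solution quality

def bipop_restarts(total_budget, base_popsize=10):
    used_large = used_small = 0
    large_pop = base_popsize
    best = float("inf")
    while used_large + used_small < total_budget:
        remaining = total_budget - used_large - used_small
        if used_large <= used_small:               # large-population regime
            large_pop *= 2
            f, ev = run_es(large_pop, remaining)
            used_large += ev
        else:                                      # small, varying populations
            pop = max(2, int(base_popsize * random.uniform(0.5, 2.0)))
            f, ev = run_es(pop, remaining)
            used_small += ev
        best = min(best, f)
    return best

if __name__ == "__main__":
    print(bipop_restarts(total_budget=200_000))
```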

301 citations


Proceedings ArticleDOI
18 Jun 2009
TL;DR: The development of a wireless sensor network to detect landslides is discussed, which includes the design, development and implementation of a WSN for real-time monitoring, the development of the algorithms needed to enable efficient data collection and data aggregation, and the network requirements of the deployed landslide detection system.
Abstract: Wireless sensor networks are one of the emerging areas which have equipped scientists with the capability of developing real-time monitoring systems. This paper discusses the development of a wireless sensor network (WSN) to detect landslides, which includes the design, development and implementation of a WSN for real-time monitoring, the development of the algorithms needed to enable efficient data collection and data aggregation, and the network requirements of the deployed landslide detection system. The actual deployment of the testbed is in the Idukki district of the southern state of Kerala, India, a region known for its heavy rainfall, steep slopes, and frequent landslides.

182 citations


Proceedings ArticleDOI
30 Nov 2009
TL;DR: An automated deployment algorithm that indicates when a mesh node needs to be deployed as the coverage area grows is developed and tested, and areas for further study and development in rapidly-deployable multihop networks are recommended.
Abstract: This paper describes a wireless mesh network testbed for research in rapid deployment and autoconfiguration of mesh nodes. Motivated by the needs of first responders and military personnel arriving at an incident area, we developed and tested an automated deployment algorithm that indicates when a mesh node needs to be deployed as the coverage area grows. Conventional radios can experience severe coverage limitations inside structures such as high-rise buildings, subterranean buildings, caves, and underground mines. The approach examined here is to deploy wireless relays that extend coverage through multihop communication using a deployment algorithm that employs physical-layer measurements. A flexible platform based on IEEE 802.11 radios has been implemented and tested in a subterranean laboratory complex where conventional public safety radios have no coverage. Applications tested include two-way voice, data, and location information. This paper describes the testbed, presents experimental results, and recommends areas for further study and development in rapidly-deployable multihop networks.
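
A deployment rule driven by physical-layer measurements can be as simple as the threshold check sketched below; the RSSI threshold and margin are assumptions for illustration, not the paper's calibrated algorithm.

```python
def should_deploy_relay(rssi_to_last_node_dbm, threshold_dbm=-82.0, margin_db=3.0):
    """Illustrative deployment rule: recommend dropping a new relay once the
    measured link to the most recently deployed node nears the usable threshold."""
    return rssi_to_last_node_dbm <= threshold_dbm + margin_db

if __name__ == "__main__":
    # A responder walking deeper into a structure, sampling RSSI along the way.
    walk = [-55, -63, -70, -76, -80, -84]
    for step, rssi in enumerate(walk):
        if should_deploy_relay(rssi):
            print(f"step {step}: RSSI {rssi} dBm -> deploy a relay here")
            break
```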

150 citations


Proceedings ArticleDOI
19 Aug 2009
TL;DR: It is shown through a real-life implementation on a state-of-the-art testbed of server machines that vGreen can improve both performance and system-level energy savings by 20% and 15%, respectively, across benchmarks with varying characteristics.
Abstract: In this paper, we present vGreen, a multi-tiered software system for energy-efficient computing in virtualized environments. It comprises novel hierarchical metrics that capture the power and performance characteristics of virtual and physical machines, and policies that use these metrics for energy-efficient virtual machine scheduling across the whole deployment. We show through a real-life implementation on a state-of-the-art testbed of server machines that vGreen can improve both performance and system-level energy savings by 20% and 15%, respectively, across benchmarks with varying characteristics.

132 citations


Proceedings ArticleDOI
12 Sep 2009
TL;DR: Empirical results on a physical testbed show that the SHIP control solution can provide precise power control, as well as power differentiations for optimized system performance, and extensive simulation results demonstrate the efficacy of the control solution in large-scale data centers composed of thousands of servers.
Abstract: In today's data centers, precisely controlling server power consumption is an essential way to avoid system failures caused by power capacity overload or overheating due to increasingly high server density. While various power control strategies have been recently proposed, existing solutions are not scalable to control the power consumption of an entire large-scale data center, because these solutions are designed only for a single server or a rack enclosure. In a modern data center, however, power control needs to be enforced at three levels: rack enclosure, power distribution unit, and the entire data center, due to the physical and contractual power limits at each level. This paper presents SHIP, a highly scalable hierarchical power control architecture for large-scale data centers. SHIP is designed based on well-established control theory for analytical assurance of control accuracy and system stability. Empirical results on a physical testbed show that our control solution can provide precise power control, as well as power differentiations for optimized system performance. In addition, our extensive simulation results based on a real trace file demonstrate the efficacy of our control solution in large-scale data centers composed of thousands of servers.
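
The hierarchical idea, splitting a power cap level by level, can be illustrated with a simple proportional allocator. SHIP itself uses feedback controllers designed with control theory; the sketch below only shows how budgets cascade from the data center to PDUs to racks.

```python
def split_budget(total_watts, demands):
    """Divide a power budget among children in proportion to their demands,
    never exceeding the parent budget (illustrative of the hierarchy only)."""
    total_demand = sum(demands.values())
    if total_demand <= total_watts:
        return dict(demands)                       # everyone fits under the cap
    scale = total_watts / total_demand
    return {name: d * scale for name, d in demands.items()}

if __name__ == "__main__":
    # Data center cap -> per-PDU caps -> per-rack caps (demands are measurements).
    pdu_demands = {"pdu1": 60_000.0, "pdu2": 80_000.0}
    pdu_caps = split_budget(120_000.0, pdu_demands)
    rack_demands = {"rack1": 25_000.0, "rack2": 30_000.0, "rack3": 20_000.0}
    rack_caps = split_budget(pdu_caps["pdu2"], rack_demands)
    print(pdu_caps)
    print(rack_caps)
```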

117 citations


Book ChapterDOI
23 Mar 2009
TL;DR: A secure version of the Modbus SCADA protocol that incorporates integrity, authentication, non-repudiation and anti-replay mechanisms is described and experimental results indicate that the augmented protocol provides good security functionality without significant overhead.
Abstract: The interconnectivity of modern and legacy supervisory control and data acquisition (SCADA) systems with corporate networks and the Internet has significantly increased the threats to critical infrastructure assets. Meanwhile, traditional IT security solutions such as firewalls, intrusion detection systems and antivirus software are relatively ineffective against attacks that specifically target vulnerabilities in SCADA protocols. This paper describes a secure version of the Modbus SCADA protocol that incorporates integrity, authentication, non-repudiation and anti-replay mechanisms. Experimental results using a power plant testbed indicate that the augmented protocol provides good security functionality without significant overhead.
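
For flavor, the sketch below wraps an application payload with a sequence number and an HMAC tag, which gives integrity, authentication and anti-replay under a pre-shared key (non-repudiation would additionally require signatures, as in the paper). It illustrates the general approach, not the augmented Modbus protocol itself.

```python
import hashlib
import hmac
import struct

KEY = b"shared-secret-key"   # assumption: pre-shared key between master and slave

def protect(seq, payload, key=KEY):
    """Prepend a sequence number and append an HMAC-SHA256 tag (illustrative)."""
    header = struct.pack(">I", seq)
    tag = hmac.new(key, header + payload, hashlib.sha256).digest()
    return header + payload + tag

def verify(frame, last_seq, key=KEY):
    """Check the tag and reject replayed or reordered frames."""
    header, payload, tag = frame[:4], frame[4:-32], frame[-32:]
    expected = hmac.new(key, header + payload, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("integrity check failed")
    (seq,) = struct.unpack(">I", header)
    if seq <= last_seq:
        raise ValueError("replayed frame")
    return seq, payload

if __name__ == "__main__":
    pdu = bytes([0x03, 0x00, 0x10, 0x00, 0x02])   # e.g. a read-request PDU
    frame = protect(seq=1, payload=pdu)
    print(verify(frame, last_seq=0))
```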

116 citations


Book ChapterDOI
25 Sep 2009
TL;DR: The overall architecture of WISEBED is presented, focusing on certain aspects of the software ecosystem surrounding the project, such as the Open Federation Alliance, which will enable a view of the whole testbed, or parts of it, as single entities, and the testbed’s tight integration with the Shawn network simulator.
Abstract: In this paper we present an overview of WISEBED, a large-scale wireless sensor network testbed, which is currently being built for research purposes. This project is led by a number of European Universities and Research Institutes, hoping to provide scientists, researchers and companies with an environment to conduct experiments with, in order to evaluate and validate their sensor network-related work. The initial planning of the project includes a large, heterogeneous testbed, consisting of at least 9 geographically disparate networks that include both sensor and actuator nodes, and scaling in the order of thousands (currently being in total 550 nodes). We present here the overall architecture of WISEBED, focusing on certain aspects of the software ecosystem surrounding the project, such as the Open Federation Alliance, which will enable a view of the whole testbed, or parts of it, as single entities, and the testbed’s tight integration with the Shawn network simulator. We also present examples of the actual hardware used currently in the testbed and outline the architecture of two of the testbed’s sites.

111 citations


Proceedings Article
15 Jun 2009
TL;DR: Open Cirrus is developed, a cloud computing testbed for the research community that federates heterogeneous distributed data centers and offers a cloud stack consisting of physical and virtual machines, and global services, such as sign-on, monitoring, storage, and job submission.
Abstract: There are a number of important and useful testbeds, such as PlanetLab, EmuLab, IBM/Google cluster, and Amazon EC2/S3, that enable researchers to study different aspects of distributed computing. However, no single testbed supports research spanning systems, applications, services, open-source development, and datacenters. Towards this end, we have developed Open Cirrus, a cloud computing testbed for the research community that federates heterogeneous distributed data centers. Open Cirrus offers a cloud stack consisting of physical and virtual machines, and global services, such as sign-on, monitoring, storage, and job submission. By developing the testbed and making it available to the research community, we hope to help spur innovation in cloud computing and catalyze the development of an open source stack for the cloud.

Proceedings ArticleDOI
21 Sep 2009
TL;DR: The deployment of the OpenRoads testbed, a testbed that allows multiple network experiments to be conducted concurrently in a production network, is described and discussed at Stanford University.
Abstract: We have built and deployed OpenRoads [11], a testbed that allows multiple network experiments to be conducted concurrently in a production network. For example, multiple routing protocols, mobility managers and network access controllers can run simultaneously in the same network. In this paper, we describe and discuss our deployment of the testbed at Stanford University. We focus on the challenges we faced deploying in a production network, and the tools we built to overcome these challenges. Our goal is to gain enough experience for other groups to deploy OpenRoads in their campus network.

Proceedings ArticleDOI
Carlos Queiroz, Abdun Naser Mahmood, Jiankun Hu, Zahir Tari, Xinghuo Yu
19 Oct 2009
TL;DR: Using Distributed Denial of Service (DDoS) scenarios, this work proposes the architecture of a modular SCADA testbed and describes a tool which mimics a SCADA network, monitors and controls real sensors and actuators using Modbus/TCP protocol.
Abstract: SCADA (Supervisory Control and Data Acquisition) systems control and monitor industrial and critical infrastructure functions, such as electricity, gas, water, waste, railway and traffic systems. Recent attacks on SCADA systems highlight the need for a SCADA security testbed, which can be used to model real SCADA systems and study the effects of attacks on them. We propose the architecture of a modular SCADA testbed and describe our tool, which mimics a SCADA network and monitors and controls real sensors and actuators using the Modbus/TCP protocol. Using Distributed Denial of Service (DDoS) scenarios, we show how attackers can disrupt the operation of a SCADA system.
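
A tool that speaks Modbus/TCP ultimately builds frames like the one below: an MBAP header (transaction id, protocol id 0, length, unit id) followed by the PDU. The sketch constructs a standard "Read Holding Registers" request; it is a generic example, not the authors' tool.

```python
import struct

def modbus_read_holding_registers(transaction_id, unit_id, start_addr, quantity):
    """Build a Modbus/TCP 'Read Holding Registers' (function 0x03) request.
    MBAP header: transaction id, protocol id (0), length, unit id; then the PDU."""
    pdu = struct.pack(">BHH", 0x03, start_addr, quantity)
    mbap = struct.pack(">HHHB", transaction_id, 0x0000, len(pdu) + 1, unit_id)
    return mbap + pdu

if __name__ == "__main__":
    frame = modbus_read_holding_registers(transaction_id=1, unit_id=1,
                                          start_addr=0x0000, quantity=4)
    print(frame.hex())   # -> 000100000006010300000004
```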

Proceedings ArticleDOI
19 Apr 2009
TL;DR: This work designs and implements Soft-TDMAC, a software Time Division Multiple Access (TDMA) based MAC protocol, running over commodity 802.11 hardware, and shows that it synchronizes multi-hop networks to within a few microsecond sized TDMA slots.
Abstract: We design and implement Soft-TDMAC, a software Time Division Multiple Access (TDMA) based MAC protocol running over commodity 802.11 hardware. Soft-TDMAC has a synchronization mechanism which synchronizes all pairs of network clocks to within microseconds of each other. Building on pairwise synchronization, Soft-TDMAC achieves network-wide synchronization. With out-of-band, network-wide synchronization, Soft-TDMAC can schedule arbitrary TDMA transmission patterns. We summarize hundreds of hours of testing Soft-TDMAC on a multi-hop testbed. Our experimental results show that Soft-TDMAC synchronizes multi-hop networks to within a few microsecond-sized TDMA slots. Soft-TDMAC can schedule transmissions that take end-to-end demands into account and in a way that decreases end-to-end delay. With no collisions, under good channel conditions, TCP achieves almost the full wireless channel bandwidth.
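
Pairwise synchronization of the kind the abstract describes is commonly built on a two-way timestamp exchange; the sketch below shows that offset estimate and how a node could compute the start of its next slot in the shared timebase. The formulas are the generic symmetric-delay ones, not necessarily Soft-TDMAC's exact mechanism.

```python
def estimate_offset(t1, t2, t3, t4):
    """Two-way timestamp exchange (all in microseconds):
    t1 = request sent (A's clock), t2 = request received (B's clock),
    t3 = reply sent (B's clock),  t4 = reply received (A's clock).
    Returns B's clock offset relative to A, assuming symmetric delay."""
    return ((t2 - t1) + (t3 - t4)) / 2.0

def next_slot_start(now_us, slot_len_us, my_slot, num_slots):
    """Start time of this node's next TDMA slot in the shared timebase."""
    frame_len = slot_len_us * num_slots
    frame_start = (now_us // frame_len) * frame_len
    start = frame_start + my_slot * slot_len_us
    return start if start > now_us else start + frame_len

if __name__ == "__main__":
    offset = estimate_offset(t1=1000, t2=1510, t3=1530, t4=1040)  # B ~500 us ahead
    print("estimated offset (us):", offset)
    print("next slot start (us):", next_slot_start(now_us=120_000 + offset,
                                                   slot_len_us=2000,
                                                   my_slot=3, num_slots=8))
```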

Proceedings ArticleDOI
26 May 2009
TL;DR: The implementation and analysis of the testbed considering the Link Quality Window Size (LQWS) parameter of Optimized Link State Routing (OLSR) and Better Approach To Mobile Ad-hoc Networking (B.A.T.M.A.N.) protocols found that the throughput of TCP was improved by reducing LQWS.
Abstract: In this paper, we present the implementation and analysis of our testbed considering the Link Quality Window Size (LQWS) parameter of Optimized Link State Routing (OLSR) and Better Approach To Mobile Ad-hoc Networking (B.A.T.M.A.N.) protocols. We investigate the effect of mobility on the throughput of a Mobile Ad-hoc Network (MANET). The mobile nodes move toward the destination at a regular speed. When the mobile nodes arrive at the corner, they stop for about three seconds. In our experiments, we consider two cases: only one node is moving (mobile node) and two nodes (intermediate nodes) are moving at the same time. We assess the performance of our testbed in terms of throughput, round trip time, jitter and packet loss. From our experiments, we found that the throughput of TCP was improved by reducing LQWS.

Proceedings Article
10 Aug 2009
TL;DR: Potential use cases in order to motivate the integration of VPST with other testbeds, identify requirements of interconnected test beds, and describe the design for integration with VPST are discussed.
Abstract: The Virtual Power System Testbed (VPST) at University of Illinois at Urbana-Champaign is part of the Trustworthy Cyber Infrastructure for the Power Grid (TCIP) and is maintained by members of the Information Trust Institute (ITI). VPST is designed to be integrated with other testbeds across the country to explore performance and security of Supervisory Control And Data Acquisition (SCADA) protocols and equipment. We discuss potential use cases in order to motivate the integration of VPST with other testbeds, identify requirements of interconnected testbeds, and describe our design for integration with VPST.

Proceedings ArticleDOI
22 Jun 2009
TL;DR: It is argued that a broad range of mobility experiments could be performed in a testbed which provides the properties of temporal, technological, and spatial diversity, and demonstrated through analysis of data collected from DOME over a period of four years.
Abstract: A series of complex dependencies conspire to make it difficult to model mobile networks, including mobility, channel and radio characteristics, and power consumption. To address these challenges, we have designed and built a testbed for large-scale mobile experimentation, called the Diverse Outdoor Mobile Environment (DOME). DOME consists of computer-equipped buses, battery-powered nomadic nodes, organic WiFi APs, and a municipal WiFi mesh network. While the construction of a testbed such as DOME presents a significant engineering challenge, this paper describes a concrete set of scientific results derived from this experience. We argue that a broad range of mobility experiments could be performed in a testbed which provides the properties of temporal, technological, and spatial diversity. We demonstrate these properties in our testbed through analysis of data collected from DOME over a period of four years. Finally, we use DOME to provide insight into several open problems in mobile systems research.

Book ChapterDOI
TL;DR: This demonstration will be the presentation of a new testbed for joint activity of heterogeneous teams, similar to the classic AI planning problem of Blocks World extended into what the authors are calling Blocks World for Teams (BW4T).
Abstract: This demonstration will be the presentation of a new testbed for joint activity. The domain for this demonstration will be similar to the classic AI planning problem of Blocks World (BW) extended into what we are calling Blocks World for Teams (BW4T). By teams, we mean at least two, but usually more members. Additionally, we do not restrict the membership to artificial agents, but include and in fact expect human team members. Study of joint activity of heterogeneous teams is the main function of the BW4T testbed.

Book ChapterDOI
22 Nov 2009
TL;DR: This paper presents the design and implementation of a working prototype built on a EUCALYPTUS-based heterogeneous compute cloud that actively monitors the response time of each virtual machine assigned to the farm and adaptively scales up the application to satisfy an SLA promising a specific average response time.
Abstract: Current service-level agreements (SLAs) offered by cloud providers make guarantees about quality attributes such as availability. However, although one of the most important quality attributes from the perspective of the users of a cloud-based Web application is its response time, current SLAs do not guarantee response time. Satisfying a maximum average response time guarantee for Web applications is difficult due to unpredictable traffic patterns, but in this paper we show how it can be accomplished through dynamic resource allocation in a virtual Web farm. We present the design and implementation of a working prototype built on a EUCALYPTUS-based heterogeneous compute cloud that actively monitors the response time of each virtual machine assigned to the farm and adaptively scales up the application to satisfy an SLA promising a specific average response time. We demonstrate the feasibility of the approach in an experimental evaluation with a testbed cloud and a synthetic workload. Adaptive resource management has the potential to increase the usability of Web applications while maximizing resource utilization.
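
The adaptive scaling loop can be pictured as follows; `start_vm`, `stop_vm` and `measure_avg_response_time` are placeholders rather than the EUCALYPTUS API, and the SLA target, headroom and hysteresis values are assumptions.

```python
import time

SLA_MS = 500.0           # promised average response time
HEADROOM = 0.8           # scale out when we pass 80% of the SLA target
MIN_VMS, MAX_VMS = 1, 10

def measure_avg_response_time(vms):
    """Placeholder: would query the monitor for the farm's average latency (ms)."""
    return 900.0 / max(1, len(vms))

def start_vm():
    """Placeholder for the cloud API call that launches a new instance."""
    return f"vm-{time.time():.6f}"

def stop_vm(vm_id):
    """Placeholder for the cloud API call that terminates an instance."""
    pass

def control_step(vms):
    avg = measure_avg_response_time(vms)
    if avg > SLA_MS * HEADROOM and len(vms) < MAX_VMS:
        vms.append(start_vm())                     # scale out
    elif avg < SLA_MS * 0.4 and len(vms) > MIN_VMS:
        stop_vm(vms.pop())                         # scale in
    return avg

if __name__ == "__main__":
    farm = [start_vm()]
    for _ in range(5):
        avg = control_step(farm)
        print(f"avg={avg:.0f} ms, farm size={len(farm)}")
```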

Proceedings ArticleDOI
20 Apr 2009
TL;DR: The MVDCT is being constructed to provide hardware validation for research associated with the development of medium voltage dc distribution systems for future naval warships and some initial test results are provided.
Abstract: Medium voltage dc distribution systems are currently of interest for future naval warships. In order to provide hardware validation for research associated with the development of these systems, a low power Medium Voltage DC Testbed (MVDCT) is being constructed. This paper documents the system being constructed and provides some initial test results.

Journal ArticleDOI
TL;DR: This article proposes a method where parameters are estimated online, and thus also adapts to the changing environment, and compares it to two other control strategies proposed in the literature, which are based on off-line estimation of certain parameters.
Abstract: Resource management in IT enterprises gains more and more attention due to high operation costs. For instance, web sites are subject to highly varying traffic loads over the year, over the day, or even over the minute. Online adaptation to the changing environment is one way to reduce losses in operation. Control systems based on feedback provide methods for such adaptation, but are by nature slow, since changes in the environment have to propagate through the system before being compensated for. Therefore, feed-forward systems can be introduced, which have been shown to improve the transient performance. However, earlier proposed feed-forward systems have been based on off-line estimation. In this article we show that off-line estimation can be problematic in online applications. Therefore, we propose a method where parameters are estimated online, and thus the controller also adapts to the changing environment. We compare our solution to two other control strategies proposed in the literature, which are based on off-line estimation of certain parameters. We evaluate the controllers with both discrete-event simulations and experiments in our testbed. The investigations show the strength of our proposed control system.
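
Recursive least squares is one standard way to estimate such parameters online and feed them into a feed-forward term; the sketch below is in that spirit and is not claimed to be the article's exact estimator.

```python
import numpy as np

class RecursiveLeastSquares:
    """Online estimate of theta in y = phi . theta (one standard choice for
    online parameter estimation; not necessarily the article's estimator)."""
    def __init__(self, n, forgetting=0.98):
        self.theta = np.zeros(n)
        self.P = np.eye(n) * 1000.0
        self.lam = forgetting

    def update(self, phi, y):
        phi = np.asarray(phi, dtype=float)
        k = self.P @ phi / (self.lam + phi @ self.P @ phi)
        self.theta = self.theta + k * (y - phi @ self.theta)
        self.P = (self.P - np.outer(k, phi @ self.P)) / self.lam
        return self.theta

if __name__ == "__main__":
    # Pretend service time depends linearly on arrival rate: y = 2.0 * rate + 0.5
    rls = RecursiveLeastSquares(n=2)
    for rate in np.linspace(10, 100, 50):
        theta = rls.update([rate, 1.0], 2.0 * rate + 0.5)
    gain, bias = theta
    arrival_forecast = 120.0
    feedforward = gain * arrival_forecast + bias   # act before feedback has to react
    print(f"gain={gain:.2f}, bias={bias:.2f}, feed-forward action={feedforward:.1f}")
```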

Journal ArticleDOI
TL;DR: Several DoS impact metrics that measure the quality of service experienced by users during an attack are proposed that map QoS requirements for several applications into measurable traffic parameters with acceptable, scientifically determined thresholds.
Abstract: Researchers in the denial-of-service (DoS) field lack accurate, quantitative, and versatile metrics to measure service denial in simulation and testbed experiments. Without such metrics, it is impossible to measure severity of various attacks, quantify success of proposed defenses, and compare their performance. Existing DoS metrics equate service denial with slow communication, low throughput, high resource utilization, and high loss rate. These metrics are not versatile because they fail to monitor all traffic parameters that signal service degradation. They are not quantitative because they fail to specify exact ranges of parameter values that correspond to good or poor service quality. Finally, they are not accurate since they were not proven to correspond to human perception of service denial. We propose several DoS impact metrics that measure the quality of service experienced by users during an attack. Our metrics are quantitative: they map QoS requirements for several applications into measurable traffic parameters with acceptable, scientifically determined thresholds. They are versatile: they apply to a wide range of attack scenarios, which we demonstrate via testbed experiments and simulations. We also prove metrics' accuracy through testing with human users.
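
The core idea, mapping per-application QoS requirements onto measurable traffic parameters and counting violations, can be sketched as below; the threshold numbers are placeholders, not the scientifically determined values from the paper.

```python
# Illustrative per-application QoS thresholds (placeholder values only).
THRESHOLDS = {
    "web":  {"max_delay_s": 4.0,  "max_loss": 0.05},
    "voip": {"max_delay_s": 0.15, "max_loss": 0.03},
    "dns":  {"max_delay_s": 4.0,  "max_loss": 0.0},
}

def transaction_failed(app, delay_s, loss):
    t = THRESHOLDS[app]
    return delay_s > t["max_delay_s"] or loss > t["max_loss"]

def percentage_of_failed_transactions(samples):
    """samples: list of (app, delay_s, loss) observed during the attack window."""
    failed = sum(transaction_failed(app, d, l) for app, d, l in samples)
    return 100.0 * failed / len(samples)

if __name__ == "__main__":
    attack_window = [("web", 6.2, 0.01), ("web", 1.1, 0.0),
                     ("voip", 0.30, 0.02), ("dns", 0.05, 0.0)]
    print(f"failed transactions: {percentage_of_failed_transactions(attack_window):.0f}%")
```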

Journal ArticleDOI
TL;DR: This work proposes an abstraction of "virtual collocation" and its realization by the software infrastructure of middleware, and describes the implementation as well as some experimental results over a traffic control testbed.
Abstract: We focus on the mechanism half of the policy-mechanism divide for networked control systems, and address the issue of what are the appropriate abstractions and architecture to facilitate their development and deployment. We propose an abstraction of "virtual collocation" and its realization by the software infrastructure of middleware. Control applications are to be developed as a collection of software components that communicate with each other through the middleware, called Etherware. The middleware handles the complexities of network operation, such as addressing, start-up, configuration and interfaces, by encapsulating application components in "Shells" which mediate component interactions with the rest of the system. The middleware also provides mechanisms to alleviate the effects of uncertain delays and packet losses over wireless channels, component failures, and distributed clocks. This is done through externalization of component state, with primitives to capture and reuse it for component restarts, upgrades, and migration, and through services such as clock synchronization. We further propose an accompanying use of local temporal autonomy for reliability, and describe the implementation as well as some experimental results over a traffic control testbed.

Journal ArticleDOI
TL;DR: The modular concept of the system provides the capability to test the antenna hardware, beamforming unit, and beamforming algorithm independently, thus allowing the smart antenna system to be developed and tested in parallel, hence reducing the design time.
Abstract: A new design of a smart antenna testbed developed at UKM for digital beamforming purposes is proposed. The UKM smart antenna testbed is developed based on a modular design employing two novel components: an L-probe fed inverted hybrid E-H (LIEH) array antenna and a software-reconfigurable digital beamforming system (DBS). The antenna uses the novel LIEH microstrip patch element design arranged into a 4 × 1 uniform linear array. The modular concept of the system provides the capability to test the antenna hardware, beamforming unit, and beamforming algorithm independently, thus allowing the smart antenna system to be developed and tested in parallel, hence reducing the design time. The DBS was developed using a high-performance TMS320C6711™ floating-point DSP board and a 4-channel RF front-end receiver developed in-house. An interface board is designed to interface the ADC board with the RF front-end receiver. A four-element receiving array testbed operating at 1.88-2.22 GHz is constructed, and digital beamforming on this testbed is successfully demonstrated.
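
For a 4 × 1 uniform linear array, digital beamforming reduces to applying complex weights derived from the array steering vector; the sketch below computes delay-and-sum weights and the resulting beam pattern, assuming half-wavelength element spacing (an assumption, not the LIEH array's actual geometry).

```python
import numpy as np

def steering_vector(num_elements, d_over_lambda, theta_deg):
    """Steering vector of a uniform linear array for a plane wave from theta
    (measured from broadside), with element spacing d given in wavelengths."""
    n = np.arange(num_elements)
    phase = 2j * np.pi * d_over_lambda * n * np.sin(np.deg2rad(theta_deg))
    return np.exp(phase)

def beam_pattern(weights, d_over_lambda, angles_deg):
    """Array response |w^H s(theta)| over a set of look angles."""
    return [abs(np.vdot(weights, steering_vector(len(weights), d_over_lambda, a)))
            for a in angles_deg]

if __name__ == "__main__":
    # 4 x 1 array, half-wavelength spacing, steered to 20 degrees.
    w = steering_vector(4, 0.5, 20.0) / 4.0    # delay-and-sum weights (w^H applied in vdot)
    angles = range(-90, 91, 10)
    for a, g in zip(angles, beam_pattern(w, 0.5, angles)):
        print(f"{a:4d} deg : {20 * np.log10(max(g, 1e-6)):6.1f} dB")
```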

Journal ArticleDOI
TL;DR: Efficient and accurate link-quality monitor (EAR) effectively identifies the existence of wireless link asymmetry by measuring the quality of each link in both directions of the link, thus improving the utilization of network capacity by up to 114%.
Abstract: This paper presents a highly efficient and accurate link-quality measurement framework, called efficient and accurate link-quality monitor (EAR), for multihop wireless mesh networks (WMNs) that has several salient features. First, it exploits three complementary measurement schemes: passive, cooperative, and active monitoring. By adopting one of these schemes dynamically and adaptively, EAR maximizes the measurement accuracy, and its opportunistic use of the unicast application traffic present in the network minimizes the measurement overhead. Second, EAR effectively identifies the existence of wireless link asymmetry by measuring the quality of each link in both directions of the link, thus improving the utilization of network capacity by up to 114%. Finally, its cross-layer architecture across both the network layer and the IEEE 802.11-based device driver makes EAR easily deployable in existing multihop wireless mesh networks without system recompilation or MAC firmware modification. EAR has been evaluated extensively via both ns-2-based simulation and experimentation on our Linux-based implementation in a real-life testbed. Both simulation and experimentation results have shown EAR to provide highly accurate link-quality measurements with minimum overhead.
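
Measuring each direction of a link separately and then combining the two delivery ratios (ETX-style) is one common way to summarize bidirectional quality and flag asymmetry; the sketch below follows that pattern and is not EAR's exact estimator. The asymmetry threshold is an assumption.

```python
def delivery_ratio(received, sent):
    return received / sent if sent else 0.0

def bidirectional_quality(fwd_recv, fwd_sent, rev_recv, rev_sent):
    """Combine per-direction delivery ratios; also flag asymmetric links."""
    df = delivery_ratio(fwd_recv, fwd_sent)
    dr = delivery_ratio(rev_recv, rev_sent)
    etx = 1.0 / (df * dr) if df * dr > 0 else float("inf")   # ETX-style cost
    asymmetric = abs(df - dr) > 0.3                          # threshold is an assumption
    return {"forward": df, "reverse": dr, "etx": etx, "asymmetric": asymmetric}

if __name__ == "__main__":
    # Forward direction measured passively from data traffic, reverse probed actively.
    print(bidirectional_quality(fwd_recv=95, fwd_sent=100, rev_recv=40, rev_sent=100))
```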

Proceedings ArticleDOI
14 Jun 2009
TL;DR: An application of the Cognitive Networking paradigm to the problem of dynamic channel selection in infrastructured wireless networks is presented, in which a Neural Network-based cognitive engine learns how environmental measurements and the status of the network affect the performance experienced on different channels, and can dynamically select the channel which is expected to yield the best performance for the mobile users.
Abstract: In this paper, we present an application of the Cognitive Networking paradigm to the problem of dynamic channel selection in infrastructured wireless networks. We first discuss some of the key challenges associated with the cognitive control of wireless networks. Then we introduce our solution, in which a Neural Network-based cognitive engine learns how environmental measurements and the status of the network affect the performance experienced on different channels, and can therefore dynamically select the channel which is expected to yield the best performance for the mobile users. We carry out performance evaluation of the proposed system by experimental measurements on a testbed implementation; the obtained results show that the proposed cognitive engine is effective in achieving performance enhancements with respect to state-of-the-art channel selection strategies.

Proceedings ArticleDOI
08 Jul 2009
TL;DR: The (1+1) Evolution Strategy with one-fifth success rule, one of the first and simplest adaptive search algorithms proposed for optimization, is benchmarked; each run is given a budget of 10^6 times the dimension of the search space in function evaluations.
Abstract: In this paper, we benchmark the (1+1) Evolution Strategy (ES) with one-fifth success rule, which is one of the first and simplest adaptive search algorithms proposed for optimization. The benchmarking is conducted on the noise-free BBOB-2009 testbed. We implement a restart version of the algorithm and conduct, for each run, 10^6 times the dimension of the search space function evaluations.
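
The (1+1)-ES with the one-fifth success rule is compact enough to state in full; the sketch below uses the common multiplicative step-size update (grow on success, shrink on failure) and a budget in the spirit of the report's setup, with the restart wrapper omitted.

```python
import random

def one_plus_one_es(f, x0, sigma0=1.0, max_evals=100_000, target=1e-8):
    """(1+1)-ES with the one-fifth success rule: the step size grows on success
    and shrinks on failure so that roughly one in five offspring is accepted."""
    x, fx, sigma = list(x0), f(x0), sigma0
    for evals in range(1, max_evals + 1):
        y = [xi + sigma * random.gauss(0.0, 1.0) for xi in x]
        fy = f(y)
        if fy <= fx:
            x, fx = y, fy
            sigma *= 1.5                 # success: expand the step size
        else:
            sigma *= 1.5 ** -0.25        # failure: contract (balances the 1/5 rule)
        if fx <= target:
            break
    return x, fx, evals

def sphere(x):
    return sum(xi * xi for xi in x)

if __name__ == "__main__":
    dim = 10
    budget = 10**6 * dim                 # budget in the spirit of the report's setup
    best_x, best_f, used = one_plus_one_es(f=sphere, x0=[5.0] * dim, max_evals=budget)
    print(f"best f = {best_f:.2e} after {used} evaluations")
```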

Book ChapterDOI
09 Sep 2009
TL;DR: The overall results seem to demonstrate that, while multicast support quality in different products is still varied and often requires additional configuration, it is possible to select a WiFi access point model and determine the best system parameters to ensure good video transfer conditions in terms of acceptable QoP/E (Quality of Perception/Excellence).
Abstract: The aim of this work is to analyse the capabilities and limitations of different IEEE 802.11 technologies (IEEE 802.11 b/g/n) utilized for both multicast and unicast video streaming transmissions directed to mobile devices. Our preliminary research showed that results obtained with currently popular simulation tools can be drastically different from those possible in a real-world environment, so, in order to correctly evaluate the performance of video streaming, a simple wireless test-bed infrastructure has been created. The results show a strong dependence of the quality of video streaming on the chosen transmission technology. At the same time there are significant differences in perception quality between multicast (1:n) and unicast (1:1) streams, and also between devices offered by different manufacturers. The overall results seem to demonstrate that, while multicast support quality in different products is still varied and often requires additional configuration, it is possible to select a WiFi access point model and determine the best system parameters to ensure good video transfer conditions in terms of acceptable QoP/E (Quality of Perception/Excellence).

Proceedings ArticleDOI
01 Jan 2009
TL;DR: The Boeing VSTL design and capabilities, including the indoor localization system, multi-vehicle command and control (C2) and operator interface, real-time virtual environment, and health-based adaptive behaviors, are discussed.
Abstract: Increased levels of vehicle collaboration and autonomy are seen as a means to reduce overall mission completion costs while expanding mission capabilities and increasing mission assurance for complex coupled systems of systems. Systems health management technologies have made rapid advances that enable systems to know their own condition and capabilities, thus creating the opportunity for unprecedented levels of adaptive control, real-time reconfiguration, and mission contingency management. Multi-agent task allocation and mission management systems must account for vehicle- and system-level health-related issues to ensure that these systems are reliable and cost effective to operate. Boeing's Vehicle Swarm Technology Lab (VSTL), established in 2004, includes a 100'x50'x20' testbed equipped with a vision-based motion capture indoor localization system. The testbed provides a cost-effective rapid prototyping capability for integrating health-based adaptive control of subsystems, vehicles, missions, and swarms to guarantee top-level system-of-systems performance metrics. The lab's fleet includes over 20 heterogeneous air vehicles, including VTOL and fixed wing, along with their ground stations and communication links, in addition to heterogeneous ground vehicles and wall-climbing robots. This paper discusses the Boeing VSTL design and capabilities, including the indoor localization system, multi-vehicle command and control (C2) and operator interface, real-time virtual environment, and health-based adaptive behaviors. The lab supports rapid prototyping and exploration of various multi-vehicle concepts of operations and missions, including persistent surveillance, area search and tracking, and high-density air traffic management. Additionally, the lab supports experimentation tasks for many other platform configurations and collaborative air, ground, space, and maritime autonomous system-of-systems concepts.

Proceedings ArticleDOI
06 Apr 2009
TL;DR: This work proposes a TCP performance evaluation testbed, called SVEET, on which real implementations of the TCP variants can be accurately evaluated under diverse network configurations and workloads in large-scale network settings.
Abstract: The ability to establish an objective comparison between high-performance TCP variants under diverse networking conditions and to obtain a quantitative assessment of their impact on the global network traffic is essential to a community-wide understanding of various design approaches. Small-scale experiments are insufficient for a comprehensive study of these TCP variants. We propose a TCP performance evaluation testbed, called SVEET, on which real implementations of the TCP variants can be accurately evaluated under diverse network configurations and workloads in large-scale network settings. This testbed combines real-time immersive simulation, emulation, machine and time virtualization techniques. We validate the testbed via extensive experiments and assess its capabilities through case studies involving real web services.