
Showing papers on "Testbed" published in 2002


Journal ArticleDOI
TL;DR: This work studied the topology and protocols of the public Gnutella network to evaluate costs and benefits of the peer-to-peer (P2P) approach and to investigate possible improvements that would allow better scaling and increased reliability in Gnutella and similar networks.
Abstract: We studied the topology and protocols of the public Gnutella network. Its substantial user base and open architecture make it a good large-scale, if uncontrolled, testbed. We captured the network's topology, generated traffic, and dynamic behavior to determine its connectivity structure and how well (if at all) Gnutella's overlay network topology maps to the physical Internet infrastructure. Our analysis of the network allowed us to evaluate costs and benefits of the peer-to-peer (P2P) approach and to investigate possible improvements that would allow better scaling and increased reliability in Gnutella and similar networks. A mismatch between Gnutella's overlay network topology and the Internet infrastructure has critical performance implications.

790 citations


Journal ArticleDOI
TL;DR: This paper uses feedback control theory to achieve overload protection, performance guarantees, and service differentiation in the presence of load unpredictability, and shows that control-theoretic techniques offer a sound way of achieving desired performance in performance-critical Internet applications.
Abstract: The Internet is undergoing substantial changes from a communication and browsing infrastructure to a medium for conducting business and marketing a myriad of services. The World Wide Web provides a uniform and widely accepted application interface used by these services to reach multitudes of clients. These changes place the Web server at the center of a gradually emerging e-service infrastructure with increasing requirements for service quality and reliability guarantees in an unpredictable and highly dynamic environment. This paper describes performance control of a Web server using classical feedback control theory. We use feedback control theory to achieve overload protection, performance guarantees, and service differentiation in the presence of load unpredictability. We show that feedback control theory offers a promising analytic foundation for providing service differentiation and performance guarantees. We demonstrate how a general Web server may be modeled for purposes of performance control, present the equivalents of sensors and actuators, formulate a simple feedback loop, describe how it can leverage real-time scheduling and feedback-control theories to achieve per-class response-time and throughput guarantees, and evaluate the efficacy of the scheme on an experimental testbed using the most popular Web server, Apache. Experimental results indicate that control-theoretic techniques offer a sound way of achieving desired performance in performance-critical Internet applications. Our QoS (Quality-of-Service) management solutions can be implemented either in middleware that is transparent to the server, or as a library called by server code.

625 citations
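The sensor/actuator loop described in the abstract above can be illustrated with a minimal proportional-integral (PI) admission controller. This is a hedged sketch, not the paper's implementation: the gains, the response-time target, and the choice of admitted concurrency as the actuator are illustrative assumptions.

```python
# Illustrative PI controller: the measured response time is the sensor,
# the number of admitted concurrent requests is the actuator.
class ResponseTimeController:
    def __init__(self, target_rt, kp=2.0, ki=0.5, min_clients=1, max_clients=256):
        self.target_rt = target_rt      # desired response time (seconds)
        self.kp, self.ki = kp, ki       # proportional and integral gains
        self.integral = 0.0
        self.min_clients, self.max_clients = min_clients, max_clients
        self.clients = max_clients      # actuator: admitted concurrency

    def update(self, measured_rt):
        # Error > 0 means the server is slower than the target.
        error = measured_rt - self.target_rt
        self.integral += error
        delta = self.kp * error + self.ki * self.integral
        # Too slow -> shrink admitted concurrency; too fast -> grow it.
        self.clients = int(min(self.max_clients,
                               max(self.min_clients, self.clients - delta)))
        return self.clients
```

Under sustained overload the controller throttles admissions toward the minimum; when load subsides, the integral term unwinds and concurrency recovers.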


Proceedings ArticleDOI
07 Aug 2002
TL;DR: This paper presents a design methodology to build a hierarchical large-scale ad hoc network using different types of radio capabilities at different layers and proposes a new stable clustering scheme to deploy the BN.
Abstract: A mobile ad hoc network (MANET) is usually assumed to be homogeneous, where each mobile node shares the same radio capacity. However, a homogeneous ad hoc network suffers from poor scalability. Recent research has demonstrated its performance bottleneck both theoretically and through simulation experiments and testbed measurement. Building a physically hierarchical ad hoc network is a very promising way to achieve good scalability. In this paper, we present a design methodology to build a hierarchical large-scale ad hoc network using different types of radio capabilities at different layers. In such a structure, nodes are first dynamically grouped into multihop clusters. Each group elects a cluster-head to be a backbone node (BN). Then higher-level links are established to connect the BNs into a backbone network. Following this method recursively, a multilevel hierarchical network can be established. Three critical issues are addressed in this paper. We first analyze the optimal number of BNs for a layer in theory. Then, we propose a new stable clustering scheme to deploy the BNs. Finally, LANMAR routing is extended to operate over the physical hierarchy efficiently. Simulation results using GloMoSim show that our proposed schemes achieve good performance.

240 citations
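The paper proposes its own stability-oriented clustering scheme; as a hedged illustration of the general cluster-head (BN) election idea only, here is the classic lowest-ID clustering rule: a node becomes a head if it has the lowest ID among its one-hop neighbors, otherwise it joins the lowest-ID neighboring head.

```python
# Classic lowest-ID clustering (an illustration, not the paper's scheme).
def lowest_id_clustering(adjacency):
    """adjacency: dict mapping node id -> set of neighbor ids."""
    heads = set()
    membership = {}
    for node in sorted(adjacency):            # process nodes in ID order
        neighbor_heads = [h for h in adjacency[node] if h in heads]
        if neighbor_heads:
            membership[node] = min(neighbor_heads)  # join lowest-ID head
        else:
            heads.add(node)                   # no head in range: become one
            membership[node] = node
    return heads, membership
```

The elected heads would then be linked by the higher-capability radios to form the backbone layer, and the procedure can be repeated recursively.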


Proceedings ArticleDOI
17 Mar 2002
TL;DR: A mobility metric called virtual mobility is introduced that is based on the measured signal quality instead of the geometric distance between nodes, hence it reflects how a routing protocol actually perceives the network's dynamics.
Abstract: We have built an ad hoc protocol evaluation testbed (APE) in order to perform large-scale, reproducible experiments. APE aims at assessing several different routing protocols in a real-world environment instead of by simulation. We present the APE testbed architecture and report on initial experiments with up to 37 physical nodes that show the reproducibility and scalability of our approach. Several scenario scripts have been written that include strict choreographic instructions to the testers who walk around with ORiNOCO equipped laptops. We introduce a mobility metric called virtual mobility that we use to compare different test runs. This metric is based on the measured signal quality instead of the geometric distance between nodes, hence it reflects how a routing protocol actually perceives the network's dynamics.

187 citations
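The virtual-mobility idea can be sketched numerically. APE's actual formula differs; as an assumption-laden illustration, this simply averages the absolute change in per-link signal quality between consecutive measurement snapshots, so nodes that are geometrically still but radio-unstable register as "mobile".

```python
# Hedged sketch of a signal-quality-based mobility metric.
def virtual_mobility(snapshots):
    """snapshots: list of dicts {(a, b): signal_quality} taken over time."""
    total, count = 0.0, 0
    for prev, cur in zip(snapshots, snapshots[1:]):
        for link in prev.keys() & cur.keys():   # links present in both snapshots
            total += abs(cur[link] - prev[link])
            count += 1
    return total / count if count else 0.0
```

Because it is computed from the same measurements a routing protocol sees, the metric makes different test runs comparable even when testers' walking paths differ slightly.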


Proceedings ArticleDOI
10 Dec 2002
TL;DR: A unique feature of the Caltech Multi-Vehicle Wireless Testbed is that the vehicles have second order dynamics, requiring real-time feedback algorithms to stabilize the system while performing cooperative tasks.
Abstract: We introduce the Caltech Multi-Vehicle Wireless Testbed (MVWT), a platform for testing decentralized control methodologies for multiple vehicle coordination and formation stabilization. The testbed consists of eight mobile vehicles, an overhead vision system that provides GPS-like state information and wireless Ethernet for communications. Each vehicle rests on omni-directional casters and is powered by two high-performance ducted fans. Thus, a unique feature of our testbed is that the vehicles have second order dynamics, requiring real-time feedback algorithms to stabilize the system while performing cooperative tasks. The testbed will be used by various research groups at Caltech and elsewhere to validate theoretical advances in multi-vehicle coordination and control, networked control systems, real-time networking and high confidence distributed computation.

125 citations


Proceedings ArticleDOI
09 Mar 2002
TL;DR: The Lincoln adaptable real-time information assurance testbed, LARIAT, is an extension of the testbed created for DARPA 1998 and 1999 intrusion detection (ID) evaluations and is undergoing continued development and refinement.
Abstract: The Lincoln adaptable real-time information assurance testbed, LARIAT, is an extension of the testbed created for the DARPA 1998 and 1999 intrusion detection (ID) evaluations. LARIAT supports real-time, automated and quantitative evaluations of ID systems and other information assurance (IA) technologies. Components of LARIAT generate realistic background user traffic and real network attacks, verify attack success or failure, score ID system performance, and provide a graphical user interface for control and monitoring. Emphasis was placed on making LARIAT easy to adapt, configure and run without requiring a detailed understanding of the underlying complexity. LARIAT is currently being exercised at four sites and is undergoing continued development and refinement.

113 citations


Journal ArticleDOI
TL;DR: A testbed for studying the decision behaviors of agents in multi-agent contracting, which generates sets of tasks with known statistical attributes, formulates and submits requests for quotations, generates bids with well-defined statistics, and evaluates bids according to several criteria.
Abstract: In multi-agent contracting, customer agents solicit the resources and capabilities of other agents, sometimes executing multistep tasks in which tasks are contracted out to different suppliers. The authors have developed a testbed for studying the decision behaviors of agents in this context. It generates sets of tasks with known statistical attributes, formulates and submits requests for quotations, generates bids with well-defined statistics, and evaluates bids according to several criteria. Each of these processes is supported by an abstract interface and a series of pluggable modules with numerous configuration parameters, and with data collection and analysis tools.

99 citations
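The bid-generation and bid-evaluation steps in the abstract above can be sketched as follows. All names, weights, and distributions here are illustrative assumptions, not the testbed's actual pluggable-module interface.

```python
import random

def make_bids(n, price_mu=100.0, price_sigma=10.0, seed=0):
    """Generate n bids with well-defined statistics (normally distributed prices)."""
    rng = random.Random(seed)
    return [{"supplier": i,
             "price": max(1.0, rng.gauss(price_mu, price_sigma)),
             "days": rng.randint(1, 10)} for i in range(n)]

def best_bid(bids, w_price=0.7, w_time=0.3):
    # Normalize each criterion to [0, 1] and minimize the weighted sum.
    pmax = max(b["price"] for b in bids)
    dmax = max(b["days"] for b in bids)
    def score(b):
        return w_price * b["price"] / pmax + w_time * b["days"] / dmax
    return min(bids, key=score)
```

In the testbed's terms, `make_bids` stands in for the bid generator with configurable statistics and `best_bid` for one pluggable evaluation module among several criteria.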


Journal ArticleDOI
TL;DR: Ad hoc wireless network traffic collected in an ad hoc network (AHN) testbed is shown to be self-similar, which validates that AHN traffic is forecastable because self-similar time-series can be forecasted; a fuzzy logic system applied to AHN traffic forecasting performs much better than an LMS adaptive filter.
Abstract: Much work has been carried out to study the self-similarity of Ethernet and World Wide Web traffic. In this letter, we study the ad hoc wireless network traffic collected in an ad hoc network (AHN) testbed and show that the ad hoc wireless network traffic is self-similar, which validates that AHN traffic is forecastable because self-similar time-series can be forecasted. We apply a fuzzy logic system to ad hoc wireless network traffic forecasting, and simulation results show that it performs much better than an LMS adaptive filter. All these studies are very important for evaluating network capacity and determining the battery power mode based on the forecasted traffic workload.

88 citations
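The LMS adaptive filter used as the baseline in this letter can be sketched as a one-step-ahead predictor: estimate the next traffic sample from the last p samples, then adapt the filter weights by the LMS rule. The filter order and step size here are assumptions.

```python
# One-step-ahead LMS (least mean squares) traffic predictor.
def lms_forecast(series, p=4, mu=0.01):
    """Return one-step-ahead predictions for series[p:]."""
    w = [0.0] * p
    preds = []
    for t in range(p, len(series)):
        x = series[t - p:t]                    # last p samples
        y_hat = sum(wi * xi for wi, xi in zip(w, x))
        preds.append(y_hat)
        e = series[t] - y_hat                  # prediction error
        w = [wi + mu * e * xi for wi, xi in zip(w, x)]  # LMS weight update
    return preds
```

On strongly self-similar (long-range-dependent) traffic, such a short linear filter captures only local correlation, which is one intuition for why the paper's fuzzy logic forecaster can outperform it.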


Proceedings ArticleDOI
28 Sep 2002
TL;DR: The nsclick simulation environment was constructed by embedding the Click Modular Router inside the popular ns network simulator; this paper describes the design, use, validation and performance of nsclick.
Abstract: Ad hoc network protocols are often developed, tested and evaluated using simulators. However, when the time comes to deploy those protocols for use or testing on real systems, the protocol must be reimplemented for the target platform. This usually results in two completely separate code bases that must be maintained. Bugs which are found and fixed under simulated conditions must also be fixed separately in the deployed implementation, and vice versa. There is ample opportunity for the two implementations to drift apart, possibly to the point where the deployed and simulated versions have little actual resemblance to each other. Testing the deployed version may also require construction of a testbed, a potentially time-consuming and expensive endeavor. Even if constructing an actual testbed is feasible, simulators are very useful for running large, repeatable scenarios for tasks such as protocol evaluation and regression testing. Furthermore, since the implementation may require modification of the kernel network stack, there's a good chance that a particular implementation may only run on specific versions of specific operating systems. To address these issues, we constructed the nsclick simulation environment by embedding the Click Modular Router inside the popular ns network simulator. Routing protocols may be implemented as Click graphs and easily moved between simulation and any operating system supported by Click. This paper describes the design, use, validation and performance of nsclick.

73 citations


Journal ArticleDOI
TL;DR: The design and prototype implementation of the VLAM-G platform is described, including several recent technologies such as the Globus toolkit, enhanced federated database systems, and visualization and simulation techniques.
Abstract: The Grid-based Virtual Laboratory AMsterdam (VLAM-G), provides a science portal for distributed analysis in applied scientific research. It offers scientists remote experiment control, data management facilities and access to distributed resources by providing cross-institutional integration of information and resources in a familiar environment. The main goal is to provide a unique integration of existing standards and software packages. This paper describes the design and prototype implementation of the VLAM-G platform. In this testbed we applied several recent technologies such as the Globus toolkit, enhanced federated database systems, and visualization and simulation techniques. Several domain specific case studies are described in some detail. Information management will be discussed separately in a forthcoming paper.

73 citations


Proceedings ArticleDOI
07 May 2002
TL;DR: A mobile streaming media CDN (Content Delivery Network) architecture in which content segmentation, request routing, pre-fetch scheduling, and session handoff are controlled by SMIL (Synchronized Multimedia Integration Language) modification is presented.
Abstract: In this paper, we present a mobile streaming media CDN (Content Delivery Network) architecture in which content segmentation, request routing, pre-fetch scheduling, and session handoff are controlled by SMIL (Synchronized Multimedia Integration Language) modification. In this architecture, mobile clients simply follow modified SMIL files downloaded from a streaming portal server; these modifications enable multimedia content to be delivered to the mobile clients from the best surrogates in the CDN. The key components of this architecture are 1) content segmentation with SMIL modification, 2) on-demand rewriting of URLs in SMIL, 3) pre-fetch scheduling based on timing information derived from SMIL, and 4) SMIL updates by SOAP (Simple Object Access Protocol) messaging for session handoffs due to client mobility. We also introduce QoS control with a network agent called an "RTP monitoring agent" to enable appropriate control of media quality based on both network congestion and radio link conditions. The current status of our prototyping on a mobile QoS testbed "MOBIQ" is reported in this paper. We are currently designing the SOAP-based APIs (Application Programming Interfaces) needed for the mobile streaming media CDN and building the CDN over the current testbed.

Proceedings ArticleDOI
15 Jul 2002
TL;DR: The MACE3J design criteria and the approach to a number of critical tradeoffs that, to the authors' knowledge, have not previously been treated explicitly in MAS literature or platforms are presented.
Abstract: Scientific study of multi-agent systems (MAS) requires infrastructure such as development testbeds and simulation tools for repeatable, controlled experiments with MAS structure and behavior. Testbeds and simulation tools are also critical for MAS education and development. A number of MAS testbeds currently exist, but to date none meets in a comprehensive way criteria laid out by many analysts for general, scientific, experimental study of MAS by a large community. Moreover, none really scales to very large MAS or exploits the power of modern distributed computing environments such as large multiprocessor clusters and computational grids. Because of this, and specifically to fulfill widespread need for tools supporting distributed collaborative scientific research in large-scale, large-grain MAS, we created the MACE3J system, a successor to the pioneering MACE testbed. MACE3J is a Java-based MAS simulation, integration, and development testbed, with a supporting library of components, examples, and documentation, distributed freely. MACE3J currently runs on single- and multiprocessor workstations, and in large multiprocessor cluster environments. The MACE3J design is multi-grain, but gives special attention to simulating very large communities of large-grain agents. It exhibits a significant degree of scalability, and has been effectively used in fast simulations of over 5,000 agents, 10,000 tasks, and 10M messages, and on multiprocessor configurations of up to 48 processors, with a future target of at least 1000 processors. This paper presents the MACE3J design criteria and our approach to a number of critical tradeoffs that, to our knowledge, have not previously been treated explicitly in MAS literature or platforms. We present the innovative features of the MACE3J architecture that contribute to its breadth, flexibility and scalability, and finally give results from the use of MACE3J in real experiments in realistic MAS domains, both simple and complex.

Proceedings ArticleDOI
07 Aug 2002
TL;DR: A hardware testbed is developed to make an empirical analysis of the time it takes to establish Bluetooth connections and the range at which those connections can be established, and ways in which to improve connection setup times.
Abstract: Bluetooth™ is a promising wireless technology designed for short-range ad hoc connections, which has many potentially useful applications. One such use is the transfer of data between two fast-moving vehicles such as automobiles. In this paper we explore the suitability of Bluetooth to make connections in highly mobile environments. In particular, we have developed a hardware testbed to make an empirical analysis of the time it takes to establish Bluetooth connections and the range at which those connections can be established. We also explore, by means of simulation, ways in which to improve connection setup times and the impact this will have on any potential data transfer.

Proceedings ArticleDOI
24 Jun 2002
TL;DR: The combination of two existing algorithm visualization systems implements pedagogical requirements that are not supported in most systems and thereby provides a rich testbed for future studies of effectiveness.
Abstract: Although algorithm visualizations have become numerous, they still have not been successfully adapted into mainstream computer science education. Algorithm visualization systems need to better address pedagogical requirements for effective educational use. We discuss the relevance of several such requirements that are not supported in most systems. The combination of two existing algorithm visualization systems implements these requirements and thereby provides a rich testbed for future studies of effectiveness.

Book ChapterDOI
01 Jan 2002
TL;DR: An overview of the current status and plans of the EDG project is given, a distributed testbed is described and the technology components essential for the implementation of a world-wide data and computational Grid on a scale not previously attempted are described.
Abstract: The objective of the European DataGrid (EDG) project is to assist the next generation of scientific exploration, which requires intensive computation and analysis of shared large-scale datasets, from hundreds of terabytes to petabytes, across widely distributed scientific communities. We see these requirements emerging in many scientific disciplines, including physics, biology, and earth sciences. Such sharing is made complicated by the distributed nature of the resources to be used, the distributed nature of the research communities, the size of the datasets and the limited network bandwidth available. To address these problems we are building on emerging computational Grid technologies to establish a research network that is developing the technology components essential for the implementation of a world-wide data and computational Grid on a scale not previously attempted. An essential part of this project is the phased development and deployment of a large-scale Grid testbed. The primary goals of the first phase of the EDG testbed were: 1) to demonstrate that the EDG software components could be integrated into a production-quality computational Grid; 2) to allow the middleware developers to evaluate the design and performance of their software; 3) to expose the technology to end-users to give them hands-on experience; and 4) to facilitate interaction and feedback between end-users and developers. This first testbed deployment was achieved towards the end of 2001 and assessed during the successful European Union review of the project on March 1, 2002. In this article we give an overview of the current status and plans of the EDG project and describe the distributed testbed.

Proceedings ArticleDOI
09 Mar 2002
TL;DR: The Advanced Space Computing and Autonomy Testbed on the ARGOS satellite provides the first direct, on-orbit comparison of a radiation-hardened 32-bit processor with a similar COTS processor; the COTS board's higher computational throughput offsets the performance overhead of the SIHFT techniques used on it while consuming less power.
Abstract: The Advanced Space Computing and Autonomy Testbed on the ARGOS satellite provides the first direct, on-orbit comparison of a modern radiation hardened 32 bit processor with a similar COTS processor. This investigation was motivated by the need for higher capability computers for space flight use than could be met with available radiation hardened components. The use of COTS devices for space applications has been suggested to accelerate the development cycle and produce cost effective systems. Software-implemented corrections of radiation-induced SEUs (SIHFT) can provide low-cost solutions for enhancing the reliability of these systems. We have flown two 32-bit single board computers (SBCs) onboard the ARGOS spacecraft. One is full COTS, while the other is RAD-hard. The COTS board has an order of magnitude higher computational throughput than the RAD-hard board, offsetting the performance overhead of the SIHFT techniques used on the COTS board while consuming less power.
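One SIHFT idea can be sketched as time redundancy with voting: run the computation three times and majority-vote the results, so a transient SEU that corrupts a single run is masked. Real SIHFT schemes (duplicated instructions, control-flow checking, and so on) are far more involved; this only illustrates the concept.

```python
# Triple time-redundancy with majority voting (conceptual SIHFT sketch).
def tmr(compute, *args):
    """Run the computation three times and return the majority result."""
    results = [compute(*args) for _ in range(3)]
    for r in results:
        if results.count(r) >= 2:              # majority vote
            return r
    raise RuntimeError("no majority: uncorrectable error")
```

The cost is roughly a 3x runtime overhead, which is the kind of penalty the COTS board's order-of-magnitude throughput advantage can absorb.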

Proceedings ArticleDOI
05 Aug 2002
TL;DR: A search-theoretic approach based on “rate of return” maps is used to develop cooperative search plans for UAVs that try to approximate the optimal non-implementable search plan.
Abstract: This paper explores the problem of cooperative search for stationary targets with multiple uninhabited air vehicles (UAVs). A search-theoretic approach based on “rate of return” maps is used to develop cooperative search plans for UAVs that try to approximate the optimal non-implementable search plan. The approach is illustrated by use of a simulation testbed for multiple searching UAVs and Monte Carlo simulation runs to evaluate our cooperative strategy relative to the optimal plan and relative to a noncooperative strategy.
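A "rate of return" map can be illustrated concretely. All parameters here are assumptions: each cell holds the expected detection rate of searching there, and a greedy (implementable but non-optimal) plan moves each UAV to the neighboring cell with the highest rate, then discounts that cell so teammates and revisits are steered elsewhere — a crude stand-in for cooperation, not the paper's strategy.

```python
# One greedy planning step over a rate-of-return grid (hedged sketch).
def greedy_search_step(rate_map, positions, discount=0.5):
    rows, cols = len(rate_map), len(rate_map[0])
    new_positions = []
    for (r, c) in positions:
        # Candidate moves: stay put or move to a 4-connected neighbor.
        moves = [(r, c), (r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]
        moves = [(i, j) for i, j in moves if 0 <= i < rows and 0 <= j < cols]
        best = max(moves, key=lambda m: rate_map[m[0]][m[1]])
        rate_map[best[0]][best[1]] *= discount   # searched: lower its return
        new_positions.append(best)
    return new_positions
```

Running such a planner inside a Monte Carlo loop gives the kind of comparison the paper makes between cooperative, noncooperative, and optimal (non-implementable) plans.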

Dissertation
01 Jan 2002
TL;DR: The SPHERES testbed as discussed by the authors provides a low-risk, representative dynamic environment for the interactive development and verification of formation flight control and autonomy algorithms, and properties relevant to control formulation such as thruster placement geometry and actuation non-linearity are discussed.
Abstract: To reduce mission cost and improve spacecraft performance, the National Aeronautics and Space Administration and the United States military are considering the use of distributed spacecraft architectures in several future missions. Precise relative control of separated spacecraft position and attitude is an enabling technology for many science and defense applications that require distributed measurements, such as long-baseline interferometric arrays. The SPHERES testbed provides a low-risk, representative dynamic environment for the interactive development and verification of formation flight control and autonomy algorithms. The testbed is described, and properties relevant to control formulation such as thruster placement geometry and actuation non-linearity are discussed. A hybrid state determination methodology utilizing a memoryless attitude update and a Kalman filter for position and velocity is presented. State updates are performed based on range measurements to each vehicle from known positions on the periphery of the test volume. A high-level, modular control interface facilitates rapid test development and the efficient reuse of old code, while maintaining freedom in the design of new algorithms. A simulation created to facilitate the development of new maneuvers, tests, and control algorithms is described.
Thesis Supervisor: David W. Miller (Associate Professor)
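The position/velocity Kalman filter mentioned in this abstract can be sketched in one dimension. This is a generic textbook predict/update cycle with assumed noise parameters, not the SPHERES filter: predict with a constant-velocity model, then update from a range-derived position measurement.

```python
# One 1-D Kalman predict/update cycle; state is (position x, velocity v),
# P is the 2x2 covariance as [[pxx, pxv], [pvx, pvv]].
def kalman_step(x, v, P, z, dt=0.1, q=1e-3, r=1e-2):
    # Predict: x' = x + v*dt (constant velocity), propagate covariance.
    x = x + v * dt
    pxx = P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1] + q
    pxv = P[0][1] + dt * P[1][1]
    pvv = P[1][1] + q
    # Update with position measurement z (measurement matrix H = [1, 0]).
    k0 = pxx / (pxx + r)          # Kalman gain for position
    k1 = pxv / (pxx + r)          # Kalman gain for velocity
    innov = z - x                 # measurement innovation
    x, v = x + k0 * innov, v + k1 * innov
    P = [[(1 - k0) * pxx, (1 - k0) * pxv],
         [pxv - k1 * pxx, pvv - k1 * pxv]]
    return x, v, P
```

Fed repeated measurements of a fixed position, the estimate converges toward that position while the gains settle as the covariance shrinks.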

01 Jan 2002
TL;DR: A simple testbed for experimenting with C4ISR architectures (based on a 'SCUD hunt' scenario), the FINC methodology for analysing C4 ISR architectures, and some experimental results are presented.
Abstract: In this paper we present a simple testbed for experimenting with C4ISR architectures (based on a 'SCUD hunt' scenario), the FINC methodology for analysing C4ISR architectures, and some experimental results. The testbed allows us to explore different organisational architectures under a range of conditions. The FINC (Force, Intelligence, Networking and C2) methodology allows the calculation of three metrics for every C4ISR architecture. Applying the FINC methodology to our testbed provides a partial validation of the methodology, as well as allowing us to derive four basic principles of C4ISR architectures.

01 Jan 2002
TL;DR: The use of simulation techniques for performance evaluation is proposed and the use of a Java-based discrete event simulation toolkit, called GridSim, is advocated, which provides facilities for modeling and simulating Grid resources and network connectivity with different capabilities and configurations.
Abstract: Numerous research groups in universities, research labs, and industries around the world are now working on Computational Grids or simply Grids that enable aggregation of distributed resources for solving large-scale data intensive problems in science, engineering, and commerce. Several institutions and universities have started research and teaching programs on Grid computing as part of their parallel and distributed computing curriculum. The researchers and students interested in resource management and scheduling on Grids need a testbed infrastructure for implementing, testing, and evaluating their ideas. Students often do not have access to a Grid testbed, and even if they have access, the testbed size is often small, which limits their ability to test ideas for scalable performance and large-scale evaluation. It is even harder to explore large-scale application and resource scenarios involving multiple users in a repeatable and comparable manner due to the dynamic nature of Grid environments. To address these limitations, we propose the use of simulation techniques for performance evaluation and advocate the use of a Java-based discrete event simulation toolkit, called GridSim. The toolkit provides facilities for modeling and simulating Grid resources (both time- and space-shared high-performance computers) and network connectivity with different capabilities and configurations. We have used the GridSim toolkit to simulate a Nimrod-G-like Grid resource broker that supports deadline- and budget-constrained cost and time minimization scheduling algorithms.
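GridSim itself is a Java toolkit; as a hedged, language-neutral sketch of the underlying discrete-event idea only, here is a tiny simulation of a space-shared resource: each job (given in MI, millions of instructions) runs to completion on the earliest-available processing element of a given MIPS rating.

```python
import heapq

def simulate_space_shared(job_lengths_mi, num_pes, mips):
    """Return each job's finish time on a space-shared resource (FCFS dispatch)."""
    # Heap of processor-available times, one entry per processing element (PE).
    free_at = [0.0] * num_pes
    heapq.heapify(free_at)
    finish = []
    for mi in job_lengths_mi:                  # jobs dispatched in arrival order
        start = heapq.heappop(free_at)         # earliest free processor
        end = start + mi / mips                # runtime = length / speed
        finish.append(end)
        heapq.heappush(free_at, end)
    return finish
```

A time-shared resource would instead divide the MIPS rating among concurrently running jobs; modeling both modes is exactly the kind of configuration the toolkit exposes.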

Proceedings ArticleDOI
09 Mar 2002
TL;DR: IAC in collaboration with the Air Force and the Army is developing a testbed to perform data collection and to develop fusion techniques for gas turbine engine health monitoring, and the testbed and examples of its operation are presented here.
Abstract: A key to producing reliable engine diagnostics and prognostics resides in the fusion of multisensor data. It is believed that faults will manifest effects in a variety of sensors. By 'integrating' (fusing) information across sensors, faults can be detected that are undetectable on any single sensor. Data to support development of prognostic techniques is very rare. The development requires continuous collection of significant amounts of data to capture not only "normal" data but also potential fault-event data well before the fault is detected by existing techniques, as well as data related to rare events. The collected data can be analyzed to develop processing tailored to new events and to continuously update algorithms so as to improve detection and classification performance and reduce false alarms. IAC, in collaboration with the Air Force and the Army, is developing a testbed to perform data collection and to develop fusion techniques for gas turbine engine health monitoring. The testbed and examples of its operation are presented here.

Journal ArticleDOI
TL;DR: The author’s conceptualization of an Information Commons (IC) is revisited and elaborated in reaction to Bailey and Tierney's article, and prospects for media-rich learning environments relate the IC to the implementation of Internet2.

Journal ArticleDOI
TL;DR: A testbed to support the rapid development of configuration management systems is developed by providing a generic model of a distributed repository and an associated programmatic interface that specific configuration management policies are programmed as unique extensions to the generic interface.
Abstract: Even though the number and variety of available configuration management systems has grown rapidly in the past few years, the need for new configuration management systems still remains. Driving this need are the emergence of situations requiring highly specialized solutions, the demand for management of artifacts other than traditional source code and the exploration of entirely new research questions in configuration management. Complicating the picture is the trend toward organizational structures that involve personnel working at physically separate sites. We have developed a testbed to support the rapid development of configuration management systems. The testbed separates configuration management repositories (i.e., the stores for versions of artifacts) from configuration management policies (i.e., the procedures, according to which the versions are manipulated) by providing a generic model of a distributed repository and an associated programmatic interface. Specific configuration management policies are programmed as unique extensions to the generic interface, while the underlying distributed repository is reused across different policies. The authors describe the repository model and its interface and present their experience in using a prototype of the testbed, called NUCM, to implement a variety of configuration management systems.
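The repository/policy separation described in this abstract can be sketched as follows. The interface names are invented for illustration, not NUCM's actual API: a generic versioned store is reused across policies, while each configuration management policy is programmed as an extension against it.

```python
class GenericRepository:
    """Generic store: artifact name -> list of immutable versions."""
    def __init__(self):
        self._versions = {}

    def versions(self, name):
        return list(self._versions.get(name, []))

    def add_version(self, name, content):
        self._versions.setdefault(name, []).append(content)
        return len(self._versions[name]) - 1   # new version number

class CheckOutCheckInPolicy:
    """One CM policy programmed against the generic repository interface."""
    def __init__(self, repo):
        self.repo, self._locks = repo, set()

    def checkout(self, name):
        if name in self._locks:
            raise RuntimeError("already checked out")
        self._locks.add(name)                  # pessimistic lock
        return self.repo.versions(name)[-1]    # latest version

    def checkin(self, name, content):
        self._locks.discard(name)
        return self.repo.add_version(name, content)
```

A different policy — optimistic merge-based versioning, say — would be another class over the same `GenericRepository`, which is the reuse the testbed is designed to enable.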

Journal ArticleDOI
TL;DR: In this article, the transport performance of an optically transparent regional-size ring network testbed with circumference of 280 km, based on metro-area optimized optical layer components and fiber, is demonstrated under dynamic traffic conditions.
Abstract: The transport performance of an optically transparent regional-size ring network testbed with circumference of 280 km, based on metro-area optimized optical layer components and fiber, is demonstrated under dynamic traffic conditions. For the longest transmission path, excellent transmission performance is achieved using cost-effective directly modulated signals. Network reconfigurability is achieved using add-drop modules that are commercially available as of this writing. We show that the dynamic nature of the network does not affect the system performance. In particular, we show that electronic gain control of erbium-doped amplifiers is capable of managing switching transients in amplified metro-scale networks.

Proceedings ArticleDOI
09 Mar 2002
TL;DR: The SPHERES (Synchronized Position Hold Engage Reorient Experimental Satellites) formation flight testbed as mentioned in this paper provides multiple investigators with a long term, replenishable, and upgradable testbed for the validation of high risk metrology, control, and autonomy technologies.
Abstract: The MIT Space Systems Laboratory (SSL) is developing the SPHERES (Synchronized Position Hold Engage Reorient Experimental Satellites) formation flight testbed to provide multiple investigators with a long term, replenishable, and upgradable testbed for the validation of high risk metrology, control, and autonomy technologies. These technologies are critical to the operation of distributed satellite and docking missions such as TechSat21, Starlight, Terrestrial Planet Finder, and Orbital Express. The development of SPHERES follows the guidelines set in a laboratory design philosophy created from lessons learned through the development and operation of prior microgravity testbeds by the MIT SSL. The philosophy ensures that the resulting laboratory provides a risk-tolerant and cost-effective environment that facilitates the design process and reduces the development costs of unproven technologies. The testbed consists of three free flyer units which can control their relative positions and orientations in six degrees of freedom. The testbed can operate in 2D on a laboratory platform and in 3D on NASA's KC-135 and inside the International Space Station. Flight tests aboard NASA's KC-135 and studies in the ground laboratory confirm the functionality of SPHERES.

Journal ArticleDOI
TL;DR: The implementation of basic portal features such as job submission, file transfer, and job monitoring are described and how the portal addresses security requirements of the deployment centers are discussed.
Abstract: In this paper we describe the basic services and architecture of Gateway, a commodity-based Web portal that provides secure remote access to unclassified Department of Defense computational resources. The portal consists of a dynamically generated, browser-based user interface supplemented by client applications and a distributed middle tier, WebFlow. WebFlow provides a coarse-grained approach to accessing both stand-alone and Grid-enabled back-end computing resources. We describe in detail the implementation of basic portal features such as job submission, file transfer, and job monitoring and discuss how the portal addresses security requirements of the deployment centers. Finally, we outline future plans, including integration of Gateway with Department of Defense testbed Grids. Copyright © 2002 John Wiley & Sons, Ltd.
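The coarse-grained front-end/middle-tier/back-end flow described above can be sketched as follows. This is a hedged illustration of the general pattern (browser-facing tier forwarding job submission and monitoring requests to a back-end resource), not WebFlow's actual API; the class name, shell back end, and status values are all assumptions.

```python
import subprocess
import uuid

class MiddleTier:
    """Hypothetical middle tier: accepts coarse-grained requests such as
    'submit this job' and 'what is its status', hiding the back-end
    resource from the browser-based front end."""
    def __init__(self):
        self.jobs = {}  # job_id -> running process handle

    def submit(self, command: str) -> str:
        """Launch a job on the back-end resource; return an opaque id."""
        job_id = str(uuid.uuid4())
        proc = subprocess.Popen(command, shell=True,
                                stdout=subprocess.PIPE,
                                stderr=subprocess.PIPE)
        self.jobs[job_id] = proc
        return job_id

    def status(self, job_id: str) -> str:
        """Job monitoring: poll the back-end process for completion."""
        proc = self.jobs[job_id]
        return "RUNNING" if proc.poll() is None else "DONE"
```

In a real portal of this kind the middle tier would authenticate the user and dispatch to a queueing system or Grid service rather than a local shell, but the coarse-grained request shape is the same.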

Proceedings ArticleDOI
08 May 2002
TL;DR: The design and implementation of logic controllers on a small-scale machining line testbed using modular finite state machines are described and algorithms are presented for design, reconfiguration, and error handling integration.
Abstract: This paper describes the design and implementation of logic controllers on a small-scale machining line testbed using modular finite state machines. The logic is verified to be internally correct before being implemented on the testbed. Reconfiguration of the controller for a new manufacturing scenario is demonstrated, as is the integration of error handling. The ease of use of this modular finite state machine design methodology is discussed, as is the complexity of the resulting designs. Algorithms are presented for design, reconfiguration, and error handling integration.
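A minimal sketch of the modular finite state machine style described above, assuming nothing about the paper's actual controller designs: each module is a small transition table, and error handling is integrated by adding fault and reset transitions to an otherwise normal-operation machine. States and events here are illustrative.

```python
class ModularFSM:
    """A finite state machine module defined by a transition table."""

    def __init__(self, name, initial, transitions):
        # transitions: {(state, event): next_state}
        self.name = name
        self.state = initial
        self.transitions = transitions

    def fire(self, event):
        """Take one transition; undefined events are rejected, which is
        what makes the logic checkable before deployment."""
        key = (self.state, event)
        if key not in self.transitions:
            raise ValueError(f"{self.name}: no transition for {event!r} "
                             f"in state {self.state!r}")
        self.state = self.transitions[key]
        return self.state

# One machining-station module; the fault/reset entries are the
# error-handling integration grafted onto the normal-operation logic.
station = ModularFSM("station", "idle", {
    ("idle", "part_arrived"): "machining",
    ("machining", "done"): "idle",
    ("machining", "fault"): "error",
    ("error", "reset"): "idle",
})
```

Because each module is just a finite transition table, internal correctness properties (e.g., every state is reachable, no undefined event in a reachable state) can be checked by enumeration before the controller touches the testbed, and reconfiguration amounts to swapping or rewiring modules.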

Journal ArticleDOI
K.H. Liu1, Changdong Liu1, J.L. Pastor, A. Roy, J.Y. Wei 
TL;DR: The simulation results show that the topologies designed by the reconfiguration algorithms outperform the fixed topology with throughput gain as well as average hop-distance reduction.
Abstract: With the widespread deployment of Internet protocol/wavelength division multiplexing (IP/WDM) networks, it becomes necessary to develop traffic engineering (TE) solutions that can effectively exploit WDM reconfigurability. More importantly, experimental work on reconfiguring lightpath topology over testbed IP/WDM networks is urgently needed to push the technology forward to operational networks. This paper presents a performance and testbed study of topology reconfiguration for IP/WDM networks. IP/WDM TE can be fulfilled in two fashions, overlay vs. integrated, which drives the network control software, e.g., routing and signaling protocols, and selects the corresponding network architecture model, e.g., overlay or peer-to-peer. We present a traffic management framework for IP over reconfigurable WDM networks. Three "one-hop traffic maximization"-oriented heuristic algorithms for lightpath topology design are introduced. A reconfiguration migration algorithm to minimize network impact is presented. To verify the performance of the topology design algorithms, we have conducted an extensive simulation study. The simulation results show that the topologies designed by the reconfiguration algorithms outperform the fixed topology with throughput gain as well as average hop-distance reduction. We describe the testbed network and software architecture developed in the Defense Advanced Research Projects Agency (DARPA) Next Generation Internet (NGI) SuperNet Network Control and Management project and report the TE experiments conducted over the testbed.
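The flavor of a "one-hop traffic maximization" heuristic can be sketched greedily: set up lightpaths between the node pairs exchanging the most traffic, so that as much traffic as possible travels a single hop, subject to a per-node transceiver (degree) limit. This is a hedged illustration of the general idea, not any of the paper's three algorithms; the constraint model is an assumption.

```python
def one_hop_greedy(traffic, degree_limit):
    """Greedy lightpath topology design sketch.

    traffic      -- n x n matrix; traffic[i][j] is demand from i to j
    degree_limit -- max lightpaths terminating at any one node
    Returns a set of directed lightpaths (i, j).
    """
    n = len(traffic)
    degree = [0] * n
    topology = set()
    # Consider node pairs in order of decreasing traffic demand.
    pairs = sorted(((traffic[i][j], i, j)
                    for i in range(n) for j in range(n) if i != j),
                   reverse=True)
    for load, i, j in pairs:
        if load <= 0:
            break  # remaining pairs carry no traffic
        if (degree[i] < degree_limit and degree[j] < degree_limit
                and (i, j) not in topology):
            topology.add((i, j))   # this demand now goes in one hop
            degree[i] += 1
            degree[j] += 1
    return topology
```

Traffic not captured by a direct lightpath would be routed multi-hop over the resulting topology, which is where the throughput and average hop-distance comparisons against a fixed topology come from.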

Journal ArticleDOI
TL;DR: In this paper, a non-linear analytical model of a two-stage electronic proportional valve is derived and simplified as a reduced-order linear model with the inherent system zeros illustrated.
Abstract: By examining the dynamics of a popular type of two-stage electronic proportional valve, this paper addresses its performance limitations, offering both cautions for control implementation and suggestions for valve design. While several benefits do exist, there are limitations to the closed-loop performance of the valve when it is included in a valve-controlled electro-hydraulic system. These limitations come from the structural feature that the pilot flow not only controls but also contributes to the total flow. Although this design gives a higher flow efficiency in steady state, for dynamic performance it results in zeros in the open-loop transfer function, which will limit the closed-loop bandwidth of a flow control system. A non-linear analytical model of this particular type of valve is derived first. It is then simplified to a reduced-order linear model with the inherent system zeros illustrated. Validation of the analysis is obtained by experimental results on a testbed.
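The mechanism by which a contributing pilot flow introduces a zero can be illustrated with a deliberately simplified first-order sketch (not the paper's actual model, whose dynamics are higher order): suppose the pilot path passes flow directly with gain $K_p$, while the main stage responds through a lag $1/(1+\tau_m s)$ with gain $K_m$. The total flow then has a parallel-path transfer function

```latex
\frac{Q(s)}{I(s)} = K_p + \frac{K_m}{1+\tau_m s}
                  = (K_p + K_m)\,\frac{1+\tau_z s}{1+\tau_m s},
\qquad \tau_z = \frac{K_p\,\tau_m}{K_p + K_m},
```

so the direct pilot contribution manufactures a zero that would be absent if the pilot stage only commanded the main stage. In the actual valve the analogous zeros appear in the open-loop transfer function and constrain the achievable closed-loop bandwidth.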

Journal ArticleDOI
TL;DR: An adaptive forward error correction protocol is described, which adjusts the level of redundancy in the data stream in response to packet loss conditions, and can quickly accommodate worsening channel characteristics in order to reduce delay and increase throughput for reliable multicast channels.
Abstract: This paper addresses the problem of reliably multicasting Web resources across wireless local area networks (WLANs) in support of collaborative computing applications. An adaptive forward error correction (FEC) protocol is described, which adjusts the level of redundancy in the data stream in response to packet loss conditions. The proposed protocol is intended for use on a proxy server that supports mobile users on a WLAN. The software architecture of the proxy service and the operation of the adaptive FEC protocol are described. The performance of the protocol is evaluated using both experimentation on a mobile computing testbed as well as simulation. The results of the performance study show that the protocol can quickly accommodate worsening channel characteristics in order to reduce delay and increase throughput for reliable multicast channels.
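The core adaptation loop of such a protocol can be sketched in a few lines: after each FEC block of k data packets, compare the observed loss rate against a target and raise or lower the number of parity packets accordingly. The thresholds, step sizes, and bounds below are illustrative assumptions, not the paper's actual parameters.

```python
def adjust_redundancy(parity, loss_rate, target_loss=0.01,
                      min_parity=1, max_parity=8):
    """Adaptive FEC sketch: return the parity-packet count to use for
    the next block, given the loss rate observed over the last block.

    parity    -- parity packets per block currently in use
    loss_rate -- observed fraction of packets lost (0.0 to 1.0)
    """
    if loss_rate > target_loss * 2:
        # Channel worsening: add redundancy so receivers can still
        # reconstruct the block without retransmission delay.
        parity = min(max_parity, parity + 1)
    elif loss_rate < target_loss / 2:
        # Channel improving: shed overhead to raise useful throughput.
        parity = max(min_parity, parity - 1)
    return parity
```

Run per block at the proxy, this trades a little bandwidth for fewer NAK-driven retransmissions when the WLAN degrades, which is the delay/throughput behavior the performance study reports.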