
Showing papers on "Testbed published in 2000"


Proceedings ArticleDOI
14 May 2000
TL;DR: The proposed Nimrod/G grid-enabled resource management and scheduling system builds on the earlier work on Nimrod and follows a modular and component-based architecture enabling extensibility, portability, ease of development, and interoperability of independently developed components.
Abstract: The availability of powerful microprocessors and high-speed networks as commodity components has enabled high-performance computing on distributed systems (wide-area cluster computing). In this environment, as the resources are usually distributed geographically at various levels (department, enterprise or worldwide), there is a great challenge in integrating, coordinating and presenting them as a single resource to the user, thus forming a computational grid. Another challenge comes from the distributed ownership of resources, with each resource having its own access policy, cost and mechanism. The proposed Nimrod/G grid-enabled resource management and scheduling system builds on our earlier work on Nimrod (D. Abramson et al., 1994, 1995, 1997, 2000) and follows a modular and component-based architecture enabling extensibility, portability, ease of development, and interoperability of independently developed components. It uses the Globus toolkit services and can be easily extended to operate with any other emerging grid middleware services. It focuses on the management and scheduling of computations over dynamic resources scattered geographically across the Internet at department, enterprise or global levels, with particular emphasis on developing scheduling schemes based on the concept of computational economy for a real testbed, namely the Globus testbed (GUSTO).
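The computational-economy scheduling idea in this abstract can be illustrated with a small sketch: choose the cheapest set of resources whose combined throughput still meets a deadline. The resource names, costs, and speeds below are made-up values, and the greedy strategy is only one plausible heuristic, not Nimrod/G's actual algorithm.

```python
# Illustrative deadline-and-budget scheduler in the spirit of a computational
# economy: pick the cheapest resources that still meet the deadline.
# Resource names, costs, and speeds are made-up values.

def select_resources(resources, jobs, deadline):
    """Greedily add the cheapest resources until `jobs` unit-length tasks
    can finish within `deadline` time units."""
    needed_rate = jobs / deadline
    chosen, capacity = [], 0.0
    for r in sorted(resources, key=lambda r: r["cost_per_job"]):
        if capacity >= needed_rate:
            break
        chosen.append(r["name"])
        capacity += r["jobs_per_hour"]
    return chosen if capacity >= needed_rate else None

resources = [
    {"name": "dept-cluster", "cost_per_job": 1, "jobs_per_hour": 10},
    {"name": "enterprise",   "cost_per_job": 3, "jobs_per_hour": 40},
    {"name": "global-grid",  "cost_per_job": 9, "jobs_per_hour": 200},
]
print(select_resources(resources, jobs=100, deadline=2))  # deadline in hours
```

A real economy-based scheduler would also re-plan as resource prices and availability change during the run; this sketch only captures the initial cost-versus-deadline trade-off.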

965 citations


Proceedings ArticleDOI
TL;DR: Nimrod/G as mentioned in this paper is a grid-enabled resource management and scheduling system that follows a modular and component-based architecture enabling extensibility, portability, ease of development, and interoperability of independently developed components.
Abstract: The availability of powerful microprocessors and high-speed networks as commodity components has enabled high performance computing on distributed systems (wide-area cluster computing). In this environment, as the resources are usually distributed geographically at various levels (department, enterprise, or worldwide) there is a great challenge in integrating, coordinating and presenting them as a single resource to the user; thus forming a computational grid. Another challenge comes from the distributed ownership of resources with each resource having its own access policy, cost, and mechanism. The proposed Nimrod/G grid-enabled resource management and scheduling system builds on our earlier work on Nimrod and follows a modular and component-based architecture enabling extensibility, portability, ease of development, and interoperability of independently developed components. It uses the Globus toolkit services and can be easily extended to operate with any other emerging grid middleware services. It focuses on the management and scheduling of computations over dynamic resources scattered geographically across the Internet at department, enterprise, or global level with particular emphasis on developing scheduling schemes based on the concept of computational economy for a real test bed, namely, the Globus testbed (GUSTO).

371 citations


Proceedings ArticleDOI
01 May 2000
TL;DR: In this paper, the role of parametric modeling as an application for the global computing grid is examined, and some heuristics which make it possible to specify soft real-time deadlines for large computational experiments are explored.
Abstract: This paper examines the role of parametric modeling as an application for the global computing grid, and explores some heuristics which make it possible to specify soft real-time deadlines for large computational experiments. We demonstrate the scheme with a case study utilizing the Globus toolkit running on the GUSTO testbed.

341 citations


Journal ArticleDOI
TL;DR: A testbed developed at the San Francisco, Berkeley, and Santa Barbara campuses of the University of California for research in understanding, assessing, and training surgical skills is described, including virtual environments for training perceptual motor skills, spatial skills, and critical steps of surgical procedures.
Abstract: With the introduction of minimally invasive techniques, surgeons must learn skills and procedures that are radically different from traditional open surgery. Traditional methods of surgical training that were adequate when techniques and instrumentation changed relatively slowly may not be as efficient or effective in training substantially new procedures. Virtual environments are a promising new medium for training. This paper describes a testbed developed at the San Francisco, Berkeley, and Santa Barbara campuses of the University of California for research in understanding, assessing, and training surgical skills. The testbed includes virtual environments for training perceptual motor skills, spatial skills, and critical steps of surgical procedures. Novel technical elements of the testbed include a four-DOF haptic interface, a fast collision detection algorithm for detecting contact between rigid and deformable objects, and parallel processing of physical modeling and rendering. The major technical challenge in surgical simulation to be investigated using the testbed is the development of accurate, real-time methods for modeling deformable tissue behavior. Several simulations have been implemented in the testbed, including environments for assessing performance of basic perceptual motor skills, training the use of an angled laparoscope, and teaching critical steps of the cholecystectomy, a common laparoscopic procedure. The major challenges of extending and integrating these tools for training are discussed.

198 citations


Proceedings Article
01 Jan 2000
TL;DR: Describes experiments examining interoperability, the efficacy of the S-BGP countermeasures in securing BGP control traffic, and their impact on BGP performance, thus evaluating the feasibility of deployment in the Internet.
Abstract: The Border Gateway Protocol (BGP), which is used to distribute routing information between autonomous systems, is an important component of the Internet's routing infrastructure. Secure BGP (S-BGP) addresses critical BGP vulnerabilities by providing a scalable means of verifying the authenticity and authorization of BGP control traffic. To facilitate widespread adoption, S-BGP must avoid introducing undue overhead (processing, bandwidth, storage) and must be incrementally deployable, i.e., interoperable with BGP. To provide a proof of concept demonstration, we developed a prototype implementation of S-BGP and deployed it in DARPA's CAIRN testbed. Real Internet BGP traffic was fed to the testbed routers via replay of a recorded BGP peering session with an ISP's BGP router. This document describes the results of these experiments: examining interoperability, the efficacy of the S-BGP countermeasures in securing BGP control traffic, and their impact on BGP performance, and thus evaluating the feasibility of deployment in the Internet.

164 citations


Journal ArticleDOI
TL;DR: In this paper, a comparison of control options in private offices in an Advanced Lighting Controls Testbed is presented, published in the Journal of the Illuminating Engineering Society, Vol. 29, No. 2, 2000.
Abstract: Comparison of Control Options in Private Offices in an Advanced Lighting Controls Testbed. Journal of the Illuminating Engineering Society, Vol. 29, No. 2 (2000), pp. 39-60.

118 citations


Proceedings ArticleDOI
12 Dec 2000
TL;DR: An Internet server is modeled and its performance controlled using classical feedback control theory; experimental results indicate that control-theoretical techniques offer a promising way of achieving desired performance in emerging critical Internet applications.
Abstract: The paper describes modeling and performance control of an Internet server using classical feedback control theory. We show that classical feedback control can leverage well-known real-time scheduling results to resolve one of the fundamental problems in Internet servers today; namely, achieving overload protection and performance guarantees in the presence of load unpredictability. The research is motivated by the increasing proliferation of a new category of Web-based services, such as online trading, banking, and business transactions, where performance guarantees are required in the face of unpredictable server load. Failure to meet desired performance levels may result in loss of customers, financial damage or liability violations. State-of-the-art Web servers are not designed to offer such performance guarantees. We show that control theory offers a robust solution to the server performance control problem. We demonstrate that a general Web server may be modeled as a linear time-varying system, describe the equivalents of sensors and actuators in that system, formulate a simple feedback loop, describe how it can leverage real-time scheduling theory to achieve timing guarantees, and evaluate the efficacy of the scheme on an experimental testbed using a real Web server (Apache), which is the most popular Internet server today. Experimental results indicate that control-theoretical techniques offer a promising way of achieving desired performance in emerging critical Internet applications.
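The sensor/actuator feedback loop this abstract describes can be sketched as a small proportional-integral controller driving a server's utilization toward a setpoint. The first-order server model, the gains, and the "admitted request rate" actuator below are illustrative assumptions, not the paper's actual model.

```python
# Illustrative PI control loop for server overload protection.
# Sensor: measured utilization. Actuator: fraction of requests admitted.
# The first-order plant model and gains are assumptions for this sketch.

def simulate(setpoint=0.8, steps=200, kp=0.4, ki=0.05):
    utilization = 0.0   # measured output (sensor)
    integral = 0.0      # accumulated error (integral term)
    history = []
    for _ in range(steps):
        error = setpoint - utilization
        integral += error
        # actuator: admitted request rate, clamped to [0, 1]
        admitted = min(1.0, max(0.0, kp * error + ki * integral))
        # crude first-order server model: utilization lags the admitted load
        utilization += 0.3 * (admitted - utilization)
        history.append(utilization)
    return history
```

With these gains the loop settles near the 0.8 setpoint; the integral term is what removes the steady-state error, mirroring why PI (rather than pure proportional) control is the usual choice for this kind of regulation problem.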

106 citations


01 Jan 2000
TL;DR: The SPHERES formation flying testbed for the International Space Station (ISS), developed by the MIT Space Systems Laboratory, consists of three 23-centimeter diameter, three-kilogram satellites, or "spheres," which can control their relative orientations.
Abstract: The MIT Space Systems Laboratory is developing the SPHERES formation flying testbed for operation on a 2-D laboratory platform, the KC-135, and the International Space Station. The hardware consists of three 23-centimeter diameter, three-kilogram satellites, or “spheres,” which can control their relative orientations. Each sphere consists of all the subsystems typical of a conventional satellite. The purpose of SPHERES is to provide the Air Force and NASA with a long term, replenishable, and upgradable testbed for validating high risk metrology, control, and autonomy technologies needed for operating distributed satellites as a part of sparse aperture missions such as TechSat21, ST3, TPF, etc. SPHERES draws upon the MODE family of dynamics and control laboratories (STS-40, 42, 48, 62, MIR) by providing a cost-effective laboratory with direct astronaut interaction which exploits the micro-gravity conditions of space. This paper will present the SPHERES objective, design, and hardware status as well as the status of the technologies to be validated using the testbed.

98 citations


Proceedings ArticleDOI
23 Sep 2000
TL;DR: Preliminary quantitative results from data collected during runs of the multi-hop wireless ad hoc network testbed show that the network successfully carried a composite workload including voice, bulk data, and real-time data.
Abstract: This paper presents preliminary quantitative results from data collected during runs of our multi-hop wireless ad hoc network testbed. The network successfully carried a composite workload including voice, bulk data, and real-time data. Careful analysis of recorded runs highlights radio propagation issues that network protocols will need to address in the future.

93 citations


Journal ArticleDOI
TL;DR: The Illinois Roadway Simulator (IRS) is a novel, mechatronic, scaled testbed used to study vehicle dynamics and controls and is used in a vehicle control case study to demonstrate the potential benefits of scaled investigations.
Abstract: The Illinois Roadway Simulator (IRS) is a novel, mechatronic, scaled testbed used to study vehicle dynamics and controls. An overview of this system is presented, and individual hardware issues are addressed. System modeling results on the vehicles and hardware are introduced, and comparisons of the resulting dynamics are made with full-sized vehicles. Comparisons are made between dynamic responses of full-scale and IRS-scale vehicles. The method of dynamic similitude is a key to gaining confidence in the scaled testbed as an accurate representation of actual vehicles to a first approximation. The IRS is then used in a vehicle control case study to demonstrate the potential benefits of scaled investigations. The idea of driver-assisted control is formulated as a yaw-rate model-following problem based on the representation of the driver as a known disturbance model. The controller is designed and implemented to show that the vehicle's dynamics can be changed to match a prescribed reference model.

64 citations


Proceedings ArticleDOI
16 Oct 2000
TL;DR: AODV (ad hoc on-demand distance vector) has been implemented as a part of the operating system protocol stack and the performance evaluation reveals that the performance is poor beyond two hops at moderate to high loads.
Abstract: We experimentally evaluate the performance of a wireless ad hoc network from the point of view of both the routing and transport layers. The experiments are done on a testbed with desktop PCs and laptops using wireless radio LAN interfaces. For these experiments an on-demand routing protocol called AODV (ad hoc on-demand distance vector) has been implemented as a part of the operating system protocol stack. We describe our design choices and the experimental setup. The performance evaluation reveals that the performance is poor beyond two hops at moderate to high loads.

Proceedings ArticleDOI
01 Nov 2000
TL;DR: This paper addresses the scalability problem through distribution of monitoring tasks, applicable to tools such as SIMONE (an SNMP-based monitoring prototype implemented by the authors); the solution is flexible and can be integrated into an SNMP tool without altering other system components.
Abstract: Traditional centralized monitoring systems do not scale to present-day large, complex, network-computing systems. Based on recent SNMP standards for distributed management, this paper addresses the scalability problem through distribution of monitoring tasks, applicable to tools such as SIMONE (an SNMP-based monitoring prototype implemented by the authors). Distribution is achieved by introducing one or more levels of a dual entity called the Intermediate Level Manager (ILM) between a manager and the agents. The ILM accepts monitoring tasks described in the form of scripts and delegated by the next higher entity. The solution is flexible and can be integrated into an SNMP tool without altering other system components. A testbed of up to 1024 monitoring elements is used to assess scalability. Noticeable improvements in the round trip delay (from seconds to less than one tenth of a second) were observed when more than 200 monitoring elements are present and as few as two ILMs are used.
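The ILM idea above can be sketched in a few lines: the top-level manager delegates a task to intermediate managers, each of which polls its own subset of agents, so the manager exchanges one message per ILM rather than one per agent. Class and method names are illustrative, not SIMONE's actual API.

```python
# Sketch of hierarchical monitoring with Intermediate Level Managers (ILMs).
# Names are illustrative; Agent.poll() stands in for an SNMP GET.

class Agent:
    def __init__(self, name, value):
        self.name, self.value = name, value
    def poll(self):
        return self.value

class ILM:
    """Accepts a delegated monitoring task and runs it over its agents."""
    def __init__(self, agents):
        self.agents = agents
    def run_task(self, aggregate):
        return aggregate([a.poll() for a in self.agents])

class Manager:
    """Top-level manager: delegates to ILMs instead of polling every agent."""
    def __init__(self, ilms):
        self.ilms = ilms
    def monitor(self, aggregate):
        # one exchange per ILM, not one per agent -> better scalability
        return [ilm.run_task(aggregate) for ilm in self.ilms]

agents = [Agent(f"a{i}", i) for i in range(8)]
ilms = [ILM(agents[:4]), ILM(agents[4:])]
print(Manager(ilms).monitor(max))  # -> [3, 7]
```

The delay improvement the abstract reports comes from exactly this fan-out: aggregation happens close to the agents, so the manager's round trip no longer grows with the agent count.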

Proceedings ArticleDOI
27 Nov 2000
TL;DR: Simulation results show that the proposed IP-MAC protocols are efficient and comparable to the authors' selected benchmark and will be implemented in the IP-HORNET testbed.
Abstract: As data traffic increases exponentially, IP over WDM transport will replace conventional SONET transport in metropolitan area networks. Such networks will require new media access control (MAC) protocols to efficiently share network bandwidth among multiple network nodes. This paper describes and evaluates novel and practical carrier sense multiple access with collision avoidance (CSMA/CA) MAC protocols for IP over WDM ring networks that handle variable-size IP packets without complex variable optical delays or centralized algorithms. Simulation results show that the proposed IP-MAC protocols are efficient and comparable to our selected benchmark, and will be implemented in the IP-HORNET testbed.

Proceedings ArticleDOI
23 Sep 2000
TL;DR: Describes the construction of, and experimental experience with, an ad hoc wireless testbed developed as part of the DARPA Global Mobile Information Systems (GloMo) sponsored MMWN and DAWN projects, whose goal is network-layer support for real-time QoS in large, dense, mobile ad hoc networks.
Abstract: We describe the construction of, and our experimental experience with, an ad hoc wireless testbed developed as part of the DARPA Global Mobile Information Systems (GloMo) sponsored MMWN and DAWN projects. The goal of the MMWN/DAWN system is network-layer support for real-time QoS in large, dense, mobile ad hoc networks. The testbed consists of MMWN/DAWN switches (a single box with an embedded processor and a commercial radio transceiver) and endpoints (laptops running voice and data applications). We have conducted a number of real-life experiments with this testbed, and report on our experiences and results.

Proceedings ArticleDOI
01 Nov 2000
TL;DR: This work describes a new message-passing architecture, MPICH-GQ, that uses quality of service (QoS) mechanisms to manage contention and hence improve performance of message passing interface (MPI) applications and demonstrates its ability to maintain application performance in the face of heavy network contention.
Abstract: Parallel programmers typically assume that all resources required for a program's execution are dedicated to that purpose. However, in local and wide area networks, contention for shared networks, CPUs, and I/O systems can result in significant variations in availability, with consequent adverse effects on overall performance. We describe a new message-passing architecture, MPICH-GQ, that uses quality of service (QoS) mechanisms to manage contention and hence improve performance of message passing interface (MPI) applications. MPICH-GQ combines new QoS specification, traffic shaping, QoS reservation, and QoS implementation techniques to deliver QoS capabilities to the high-bandwidth bursty flows, complex structures, and reliable protocols used in high-performance applications-characteristics very different from the low-bandwidth, constant bit-rate media flows and unreliable protocols for which QoS mechanisms were designed. Results obtained on a differentiated services testbed demonstrate our ability to maintain application performance in the face of heavy network contention.
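The traffic-shaping component of the QoS mechanisms this abstract mentions is commonly built on a token bucket, which admits a packet only when enough tokens have accumulated, bounding both sustained rate and burst size. The sketch below is a generic token bucket with made-up parameters, not MPICH-GQ's actual shaper.

```python
# Illustrative token-bucket traffic shaper: rate limits a flow while
# allowing bursts up to the bucket depth. Parameter values are assumptions.

class TokenBucket:
    def __init__(self, rate, depth):
        self.rate = rate        # tokens added per time unit
        self.depth = depth      # maximum burst size in tokens
        self.tokens = depth
        self.last = 0.0

    def allow(self, now, size):
        # refill based on elapsed time, capped at the bucket depth
        self.tokens = min(self.depth, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if size <= self.tokens:
            self.tokens -= size
            return True
        return False

tb = TokenBucket(rate=100.0, depth=300.0)
print(tb.allow(0.0, 300))  # burst up to the depth passes -> True
print(tb.allow(0.0, 1))    # bucket empty at the same instant -> False
print(tb.allow(2.0, 150))  # two time units refill 200 tokens -> True
```

The tension the abstract highlights is visible even here: a bursty high-bandwidth MPI flow needs a deep bucket to avoid throttling its bursts, whereas QoS mechanisms were tuned for shallow buckets and constant-bit-rate media flows.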

Journal ArticleDOI
TL;DR: This work focuses its attention on QoS monitoring, which is locally significant in network subdomains, and realizes a QoS management strategy in response to variations of user, customer, or application requirements and of the network state.
Abstract: Network programmability seems to be a promising solution to network management and quality of service (QoS) control. Software mobile-agent technology is boosting the evolution toward application-level control of network functionalities. Code may be deployed in the network dynamically and on demand for the benefit of applications or application classes. Agents support a dynamic distribution of control and management functions across networks, thus increasing flexibility and efficiency. We propose to use mobile-agent technology to overcome some of the problems inherent in current Internet technology. We focus our attention on QoS monitoring, which is locally significant in network subdomains, and realize a QoS management strategy in response to variations of user, customer, or application requirements, and of the network state. We describe our experience and the results obtained from our testbed, where software agents are instantiated, executed, migrated, and suspended in order to implement flexible QoS management in IP networks.

Proceedings ArticleDOI
05 Oct 2000
TL;DR: Rockwell Science Center has assembled a wearable testbed for AR applications, comprised only of commercial off-the-shelf (COTS) hardware components, and some of the AR applications developed on this testbed are described.
Abstract: Personal applications employing augmented reality (AR) technology for information systems require ease of use and wearability. Progress in hardware miniaturization is enabling the development of wearable testbeds for such applications, providing sufficient computing power for the demanding AR tasks. Rockwell Science Center has assembled a wearable testbed for AR applications, comprised only of commercial off-the-shelf (COTS) hardware components. The system is designed to be worn like a jacket, with all the hardware attached and affixed to a vest frame (Xybernaut) with concealed routing of cables under Velcro channels. Two possible configurations allow the system to be used either in a stand-alone mode (the Intelligent Tetherless Wearable Augmented Reality Navigation System, or "itWARNS") or to be linked to a larger-scale multi-modal user interface testbed (the Wearable Immersive Multi-Media Information System, or "WIMMIS"). Completely tetherless operation is made possible by wireless digital connections as well as analog video and 3D audio connections over radio frequencies (RF). This paper describes these two testbed configurations, as well as some of the AR applications developed on this testbed.

Proceedings ArticleDOI
16 Jul 2000
TL;DR: This work provides empirical evidence that representation change of the network data can result in a significant increase in the classification performance of the traffic models, and compares models of network traffic acquired by a system based on a distributed genetic algorithm with those acquired by one based on greedy heuristics.
Abstract: The detection of intrusions over computer networks (i.e., network access by non-authorized users) can be cast as the task of detecting anomalous patterns of network traffic. In this case, models of normal traffic have to be determined and compared against the current network traffic. Data mining systems based on genetic algorithms can contribute powerful search techniques for the acquisition of patterns of the network traffic from the large amount of data made available by audit tools. We compare models of network traffic acquired by a system based on a distributed genetic algorithm with the ones acquired by a system based on greedy heuristics. We also provide empirical evidence that representation change of the network data can result in a significant increase in the classification performance of the traffic models. Network data made available from the Information Exploration Shootout project and the 1998 DARPA Intrusion Detection Evaluation were chosen as the experimental testbed.

Dissertation
01 Jan 2000
TL;DR: The SPHERES Satellite Formation Flight Testbed as discussed by the authors provides a low-risk, low-cost environment to model, develop, debug, and optimize the control, metrology, and autonomy algorithms required for new space missions.
Abstract: New space missions, under development at NASA and the Air Force, utilize Formation Flight technologies to take advantage of the improved angular resolution of separated spacecraft interferometers and distributed arrays. The SPHERES Satellite Formation Flight Testbed provides these programs with a low-risk, low-cost environment to model, develop, debug, and optimize the control, metrology, and autonomy algorithms required for these missions. The SPHERES testbed consists of three independent re-programmable units that contain propulsion, communications, power, metrology, and software systems. A laptop computer works as the ground station to send high-level commands to the units and stores telemetry data from the units. Tests on one-g laboratory facilities and on NASA's KC-135 reduced gravity airplane have demonstrated the use of SPHERES to study and develop Formation Flight algorithms. Thesis Supervisor: Prof. David W. Miller Title: Associate Professor; Director, MIT Space Systems Laboratory

Proceedings ArticleDOI
07 Mar 2000
TL;DR: An optical label switching testbed has been demonstrated to realize an NGI network with ultra-low latency, simplified protocol stacks, and data transparency.
Abstract: An optical label switching testbed has been demonstrated to realize an NGI network with ultra-low latency, simplified protocol stacks, and data transparency. Packet applications have been successfully sent, with less than 1 msec switching node delay, over a label-switched network.

Proceedings ArticleDOI
16 Oct 2000
TL;DR: The unicast functionality of ODMRP is described and the protocol performance in a real ad hoc network testbed of seven laptop computers in an indoor environment is analyzed to indicate the direction for future research.
Abstract: The on-demand multicast routing protocol (ODMRP) is an effective and efficient routing protocol designed for mobile wireless ad hoc networks. One of the major strengths of ODMRP is its capability to operate both as a unicast and a multicast routing protocol. This versatility of ODMRP can increase network efficiency as the network can handle both unicast and multicast traffic with one protocol. We describe the unicast functionality of ODMRP and analyze the protocol performance in a real ad hoc network testbed of seven laptop computers in an indoor environment. Both static and dynamic networks are deployed. We generate various topological scenarios in our wireless testbed by applying mobility to network hosts and study their impacts on our protocol performance. We believe that the performance study in a testbed network can help us analyze the protocol in a realistic way and indicate the direction for future research.

Proceedings ArticleDOI
10 Dec 2000
TL;DR: The primary goal is to study the impact and the importance of partitioning in the PCS model while reducing significantly the number of rollbacks.
Abstract: In this paper, we present a simulation testbed for wireless and mobile telecommunication systems: a two-stage PCS parallel simulation testbed which makes use of a conservative scheme at Stage 1 and of time warp at Stage 2. While time warp is considered an effective synchronization mechanism in parallel and distributed discrete event simulation (PDES), it is also well known for its instability due to rollbacks and their devastating effects, i.e., series of cascading rollbacks, among other factors. Thus, our primary goal in this paper is to study the impact and the importance of partitioning in our PCS model while significantly reducing the number of rollbacks.

Journal ArticleDOI
TL;DR: This paper presents a novel method to generate input data sets that enable us to observe the normal behavior of a process in a secure environment and proposes various techniques to derive either fixed-length or variable-length patterns from the input data set.
Abstract: This paper addresses the problem of creating patterns that can be used to model the normal behavior of a given process. The models can be used for intrusion-detection purposes. First, we present a novel method to generate input data sets that enable us to observe the normal behavior of a process in a secure environment. Second, we propose various techniques to derive either fixed-length or variable-length patterns from the input data sets. We show the advantages and drawbacks of each technique, based on the results of the experiments we have run on our testbed.
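The fixed-length-pattern technique the abstract proposes can be illustrated with a sliding window over event traces: every length-k subsequence seen in the normal runs forms the model, and windows absent from the model are flagged as anomalous. The traces, event names, and window length below are made-up; this is a generic sketch of the approach, not the authors' exact method.

```python
# Sketch: model normal process behavior as the set of fixed-length
# subsequences (here length 3) observed in training traces; flag
# unseen windows in a new trace as anomalies. Data is illustrative.

def windows(trace, k=3):
    return [tuple(trace[i:i + k]) for i in range(len(trace) - k + 1)]

def train(traces, k=3):
    return {w for t in traces for w in windows(t, k)}

def anomalies(trace, model, k=3):
    return [w for w in windows(trace, k) if w not in model]

normal = [["open", "read", "write", "close"],
          ["open", "read", "read", "close"]]
model = train(normal)
print(anomalies(["open", "read", "exec", "close"], model))
```

Variable-length patterns, which the paper also derives, trade this method's simplicity for a more compact model; the fixed-length version shown here is the easier baseline to reason about.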

Proceedings ArticleDOI
18 Mar 2000
TL;DR: The methodology used to develop a detailed radiation fault model for the REE Testbed architecture is outlined, and the methodology by which this model will be used to derive application-level error effects sets is explained.
Abstract: The goal of the NASA HPCC Remote Exploration and Experimentation (REE) Project is to transfer commercial supercomputing technology into space. The project will use state of the art, low-power, non-radiation-hardened, COTS hardware chips and COTS software to the maximum extent possible, and will rely on software-implemented fault tolerance to provide the required levels of availability and reliability. We outline the methodology used to develop a detailed radiation fault model for the REE Testbed architecture. The model addresses the effects of energetic protons and heavy ions which cause single event upset and single event multiple upset events in digital logic devices and which are expected to be the primary fault generation mechanism. Unlike previous modeling efforts, this model will address fault rates and types in computer subsystems at a sufficiently fine level of granularity (i.e., the register level) that specific software and operational errors can be derived. We present the current state of the model, model verification activities and results to date, and plans for the future. Finally, we explain the methodology by which this model will be used to derive application-level error effects sets. These error effects sets will be used in conjunction with our Testbed fault injection capabilities and our applications' mission scenarios to replicate the predicted fault environment on our suite of onboard applications.
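A register-level single event upset of the kind this fault model targets is typically emulated in fault-injection campaigns by flipping one bit of a stored value. The helper below is an illustrative sketch of that primitive, not the REE Testbed's actual injection interface.

```python
# Sketch: emulate a single-event upset (SEU) by flipping one bit of a
# 32-bit register value, the granularity the REE fault model targets.
# The function name and usage are illustrative assumptions.

def inject_seu(register_value, bit):
    """Flip one bit (0..31) of a 32-bit register value."""
    assert 0 <= bit < 32
    return (register_value ^ (1 << bit)) & 0xFFFFFFFF

r = 0x0000000F
print(hex(inject_seu(r, 0)))   # -> 0xe (bit 0 cleared by the flip)
print(hex(inject_seu(r, 31)))  # -> 0x8000000f
```

A campaign built on such a primitive would sample the bit position and injection time from the radiation-derived rate model, which is exactly why the abstract stresses modeling fault rates at register-level granularity.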

Journal ArticleDOI
TL;DR: A testbed for building and evaluating decision support tools for Free Flight, a system that lets pilots modify their routes in real time and which requires new conflict detection, resolution, and visualization decision support tools.
Abstract: Free Flight lets pilots modify their routes in real time. It requires new conflict detection, resolution, and visualization decision support tools. We describe a testbed for building and evaluating such tools.

Proceedings ArticleDOI
22 Oct 2000
TL;DR: The approach is to expand the semantics of the reservation so that instead of being a single value indicating the level of service needed by an application, it becomes a range of service levels in which the application can operate, together with the current reserved value within that range.
Abstract: This paper looks at issues involved in providing quality-of-service (QoS) support in a dynamic environment. We focus on a resource reservation-based approach, which we believe is attractive for military applications but is especially difficult in a dynamic network environment. This is because resources reserved for a particular flow may contract after they have been "committed" to the flow, causing the reservation to be dropped. Our approach is to expand the semantics of the reservation so that instead of being a single value indicating the level of service needed by an application, it becomes a range of service levels in which the application can operate, together with the current reserved value within that range. This provides flexibility so that reservations can be maintained as network conditions change. Rather than being forced to make a binary "admit/fail" decision for each flow, the network provides feedback to applications on the current reservation level. Based on this feedback, applications can adapt their behavior to what the network can support. We have developed a prototype implementation of this concept by extending the Reservation Setup Protocol (RSVP) protocol. We are currently evaluating the implementation in a testbed network where we can vary the link bandwidth. The testbed also includes several adaptive applications (audio, video, data transfer) running over the User Datagram Protocol (UDP). The paper discusses our approach, testbed, experiences to date, and current plans.
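The expanded reservation semantics described above can be sketched as a reservation carrying a range of service levels plus the currently granted value: when capacity shrinks, the grant degrades within the range instead of the reservation being dropped. The class and field names are illustrative, not actual RSVP objects.

```python
# Sketch of a range reservation: a [minimum, maximum] service range with a
# current granted level, adapted as capacity changes. Names are illustrative.

class RangeReservation:
    def __init__(self, minimum, maximum):
        self.minimum, self.maximum = minimum, maximum
        self.granted = maximum          # start at the requested maximum

    def on_capacity_change(self, available):
        """Network feedback: re-fit the grant to the available capacity."""
        if available < self.minimum:
            return None                 # only now does the reservation fail
        self.granted = min(self.maximum, available)
        return self.granted

r = RangeReservation(minimum=64, maximum=512)  # e.g. kb/s for adaptive video
print(r.on_capacity_change(1000))  # -> 512 (capped at the maximum)
print(r.on_capacity_change(128))   # -> 128 (degrade but keep the reservation)
print(r.on_capacity_change(32))    # -> None (below minimum: reservation fails)
```

The return value models the feedback channel the paper describes: an adaptive application (audio, video, data transfer) reads the new granted level and adjusts its sending behavior accordingly.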

Proceedings ArticleDOI
TL;DR: In this article, a focus-diverse phase retrieval algorithm was developed to measure and correct wavefront errors in segmented telescopes, such as the Next Generation Space Telescope (NGST).
Abstract: We have developed a focus-diverse phase retrieval algorithm to measure and correct wavefront errors in segmented telescopes, such as the Next Generation Space Telescope. These algorithms incorporate new phase unwrapping techniques embedded in the phase retrieval algorithms to measure aberrations larger than one wave. Through control of a deformable mirror and other actuators, these aberrations are successfully removed to make the system diffraction limited. Results exceed requirements for the Wavefront Control Testbed. An overview of these techniques and performance results on the Wavefront Control Testbed are presented.

Proceedings ArticleDOI
03 Jul 2000
TL;DR: An efficient technique for checkpointing multithreaded applications using processes constructed around the ARMOR (Adaptive Reconfigurable Mobile Objects of Reliability) paradigm implemented in the Chameleon testbed is introduced.
Abstract: In this paper we introduce an efficient technique for checkpointing multithreaded applications. Our approach makes use of processes constructed around the ARMOR (Adaptive Reconfigurable Mobile Objects of Reliability) paradigm implemented in our Chameleon testbed. ARMOR processes are composed of disjoint elements (objects) with controlled manipulation of element state. These characteristics of ARMORs allow the process state to be collected efficiently during runtime and saved to disk when necessary. We call this approach micro-checkpointing. We demonstrate micro-checkpointing in the Chameleon testbed, an environment for developing reliable distributed applications. Our results show that the overhead ranges from 39% to 141% with an aggressive checkpointing policy, depending upon the degree to which the process conforms to our ARMOR paradigm.
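The idea that controlled, per-element state manipulation enables incremental runtime checkpointing can be sketched as follows. The class names and methods are hypothetical illustrations, not the Chameleon/ARMOR API: the point is that because each element's state is mutated only through a controlled interface, the runtime knows which elements changed and re-copies only those.

```python
import copy
import pickle

class Element:
    """One ARMOR-style element: disjoint state, mutated only through set(),
    so the runtime can tell when a fresh snapshot is needed. Illustrative."""
    def __init__(self, name):
        self.name = name
        self.state = {}
        self.dirty = False

    def set(self, key, value):
        self.state[key] = value
        self.dirty = True          # controlled mutation marks the element

class MicroCheckpointer:
    """Collect element state incrementally at runtime; serialize on demand."""
    def __init__(self, elements):
        self.elements = elements
        self.snapshots = {}

    def collect(self):
        for e in self.elements:
            if e.dirty:                                   # only re-copy modified elements
                self.snapshots[e.name] = copy.deepcopy(e.state)
                e.dirty = False

    def save(self):
        return pickle.dumps(self.snapshots)               # would be written to disk

e = Element("counter")
cp = MicroCheckpointer([e])
e.set("n", 1)
cp.collect()                      # snapshot reflects n == 1
e.set("n", 2)                     # snapshot is stale until the next collect()
```

Spreading the copy work across many small `collect()` calls, rather than freezing the whole multithreaded process at once, is what keeps the checkpoint overhead bounded.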

Journal ArticleDOI
TL;DR: The discussed architecture has been developed in the form of an integrated system which incorporates state-of-the-art software and hardware subsystems, and an OC-12c ATM adapter, and has proven very efficient and reliable, providing high-throughput and low-latency bulk data communications.
Abstract: This article presents the design and development of a networking system architecture targeted to support high-speed TCP/IP communication over ATM. The architecture has been developed as an integrated system incorporating state-of-the-art software and hardware subsystems and an OC-12c (622 Mb/s) ATM adapter. The design of this embedded system is based on the Chorus real-time operating system, which in turn hosts an accelerated TCP/IP protocol stack over ATM. The embedded system board has been developed according to the PCI specification so that it can easily be plugged into a host platform, and the OC-12c ATM adapter subsystem has been designed and developed to plug into the same host. The developed architecture has proven very efficient and reliable, providing high-throughput, low-latency bulk data communications. The measured performance on an OC-3c-based (155 Mb/s) testbed has shown that an optimally implemented TCP/IP stack, hosted by a real-time kernel and coupled with an ATM adapter, offers a robust desktop platform for high-speed end-to-end communications. The main feature of the accelerated TCP/IP protocol stack is the out-of-band processing of control and data information: the protocol accelerator embedded system processes the TCP/IP headers and accomplishes checksum computations, while data is transferred from the host's user memory space directly to the network. Finally, for validation purposes, the prototype system has been incorporated in an existing networking infrastructure targeted to support mass storage applications.
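The checksum work the protocol accelerator offloads is the standard Internet (ones-complement) checksum of RFC 1071, used by both the IP header and TCP. The sketch below shows the computation itself, not the paper's hardware implementation, which performs it in the adapter while payload data bypasses the host stack.

```python
def internet_checksum(data: bytes) -> int:
    """RFC 1071 ones-complement checksum over 16-bit big-endian words.
    This is the per-segment computation a TCP/IP offload engine removes
    from the host CPU's critical path."""
    if len(data) % 2:
        data += b"\x00"                           # pad to a 16-bit boundary
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)  # fold carries back in
    return ~total & 0xFFFF

# Worked example from RFC 1071: checksum of 00 01 f2 03 f4 f5 f6 f7.
msg = bytes([0x00, 0x01, 0xF2, 0x03, 0xF4, 0xF5, 0xF6, 0xF7])
print(hex(internet_checksum(msg)))  # 0x220d
```

A useful property for verification: appending the checksum to the data and recomputing yields zero, which is how receivers validate headers.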

Book ChapterDOI
31 May 2000
TL;DR: This work compares models of network traffic acquired by a system based on a distributed genetic algorithm with the ones acquired by one based on greedy heuristics, and discusses representation change of the network data and its impact over the performances of the traffic models.
Abstract: The detection of intrusions over computer networks (i.e., network access by non-authorized users) can be cast as the task of detecting anomalous patterns of network traffic. In this case, models of normal traffic have to be determined and compared against the current network traffic. Data mining systems based on genetic algorithms can contribute powerful search techniques for acquiring patterns of network traffic from the large amount of data made available by audit tools. We compare models of network traffic acquired by a system based on a distributed genetic algorithm with those acquired by a system based on greedy heuristics. We also discuss representation changes of the network data and their impact on the performance of the traffic models. Network data made available by the Information Exploration Shootout project and the 1998 DARPA Intrusion Detection Evaluation have been chosen as the experimental testbed.
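A genetic-algorithm search for traffic models of the kind described can be sketched on toy data. Everything here is illustrative: the bucketed records stand in for audit data such as the DARPA traces, and the rule encoding (one allowed-value set per field) and fitness are simple hypothetical choices, not the paper's distributed GA.

```python
import random
random.seed(0)

# Toy audit records: (duration, service, flag), each bucketed into 0..3.
# Hypothetical stand-ins for the network data used in the paper.
NORMAL  = [(0, 1, 0), (1, 1, 0), (0, 2, 0), (1, 2, 1)]
ANOMALY = [(3, 0, 3), (2, 3, 3)]

def fitness(rule):
    """A rule lists the allowed values per field; it models normal traffic
    well if it covers normal records while excluding anomalous ones."""
    match = lambda rec: all(rec[i] in rule[i] for i in range(3))
    return sum(map(match, NORMAL)) - 2 * sum(map(match, ANOMALY))

def mutate(rule):
    """Flip one value into or out of one field's allowed set."""
    i, v = random.randrange(3), random.randrange(4)
    s = set(rule[i]) ^ {v}
    return rule[:i] + (s or {v},) + rule[i + 1:]

# Simple (mu + lambda)-style loop: keep the best half, refill with mutants.
pop = [tuple({random.randrange(4)} for _ in range(3)) for _ in range(20)]
for _ in range(100):
    pop = sorted(pop, key=fitness, reverse=True)[:10]
    pop += [mutate(r) for r in pop]
best = max(pop, key=fitness)
```

The representation-change discussion in the paper corresponds to choices made silently here, such as how raw fields are bucketed into discrete values, which directly shapes what rules the search can express.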