
Showing papers on "Testbed published in 1991"


Proceedings ArticleDOI
13 May 1991
TL;DR: It is argued that multiuser distributed memory multiprocessors with dynamic mapping of the application onto the hardware structure are needed to make available the advantages of this type of architecture to a wider user community.
Abstract: It is argued that multiuser distributed memory multiprocessors with dynamic mapping of the application onto the hardware structure are needed to make available the advantages of this type of architecture to a wider user community. It is shown, based on an abstract model, that such architectures may be used efficiently. It is also shown that future developments in interconnection hardware will allow the fulfillment of the assumptions made in the model. Since a dynamic load balancing procedure will be one of the most important components in the systems software, the elements of its implementation are discussed and first results based on a testbed implementation are presented.
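The simplest form of the dynamic mapping the abstract alludes to can be sketched as a greedy "least-loaded node" assignment. This is a generic illustration, not the paper's procedure; the task costs and node count are made up:

```python
def assign_tasks(task_costs, n_nodes):
    """Greedy dynamic mapping: each arriving task is placed on the
    node with the smallest accumulated load so far."""
    loads = [0.0] * n_nodes
    mapping = []
    for cost in task_costs:
        node = min(range(n_nodes), key=lambda i: loads[i])  # least-loaded node
        loads[node] += cost
        mapping.append(node)
    return mapping, loads

# Illustrative workload: six tasks mapped onto three nodes.
mapping, loads = assign_tasks([5, 3, 8, 2, 4, 6], n_nodes=3)
```

A real multiuser system would also migrate work at run time as load estimates change; this sketch shows only the placement decision.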

77 citations


Proceedings ArticleDOI
02 Dec 1991
TL;DR: An iterative approximate technique is obtained, using a number of auxiliary models, each of much lower state complexity, on a substantial model which represents a parallel implementation of two layers of protocols for data communications.
Abstract: Performance analysis of Petri net models is limited by state explosion in the underlying Markovian model. To overcome this problem, an iterative approximate technique is obtained, using a number of auxiliary models, each of much lower state complexity. It is demonstrated on a substantial model which represents a parallel implementation of two layers of protocols for data communications. The model represents ten separate software tasks and their interactions via rendezvous, and is based on a testbed implementation in the laboratory. Submodels can be constructed in various ways, and this is illustrated with four different decompositions. Their state space complexity, solution time and solution accuracy are evaluated.
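The iteration the abstract describes can be pictured as a fixed-point computation: each auxiliary model is solved with the other's current estimate held fixed, until the exchanged quantities stop changing. The two submodel functions below are invented stand-ins, not the paper's Petri net submodels:

```python
def solve_coupled(solve_a, solve_b, x0, tol=1e-9, max_iter=1000):
    """Iterate between two auxiliary models until the quantity they
    exchange (e.g. a throughput or delay estimate) converges."""
    x = x0
    for _ in range(max_iter):
        y = solve_b(x)       # solve submodel B given A's current estimate
        x_new = solve_a(y)   # solve submodel A given B's result
        if abs(x_new - x) < tol:
            return x_new, y  # converged fixed point
        x = x_new
    raise RuntimeError("iteration did not converge")

# Hypothetical submodels: A's throughput falls as B's delay grows,
# and B's delay grows with A's throughput.
rate, delay = solve_coupled(lambda d: 1.0 / (1.0 + d),
                            lambda r: 0.5 + 0.3 * r, x0=1.0)
```

Each submodel here is a one-line formula, whereas in the paper each is itself a Markovian model, but the convergence structure is the same.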

75 citations


Journal ArticleDOI
TL;DR: This paper describes the implementation of a testbed for load balancing techniques, used to evaluate different static and dynamic strategies for balancing the workload of an iPSC/2 implementation of a simple simulation of population evolution.

74 citations


Proceedings ArticleDOI
TL;DR: In this paper, a 35 meter baseline orbiting optical interferometer is studied as a focus mission for a testbed for controlled structures research, which captures the essential architecture, physics and performance requirements of a full scale instrument.
Abstract: A class of proposed space-based astronomical missions requiring large baselines and precision alignment can benefit from the application of Controlled Structures Technology. One candidate mission, that of a 35 meter baseline orbiting optical interferometer, is studied as a focus mission for a testbed for controlled structures research. Interferometry science requirements are investigated and used to design a laboratory testbed which captures the essential architecture, physics and performance requirements of a full scale instrument. Testbed hardware used for identification and control is presented, including an on-board six-axis laser metrology system using state of the art cat's eye retroreflectors. The testbed and research program are discussed in terms of controlled structures design and in terms of the expected benefits to the optical engineering and science communities.

33 citations


Journal ArticleDOI
01 Apr 1991
TL;DR: It is found that two-way traffic has dynamics which can significantly decrease fairness among competing connections using congestion avoidance, and an effective modification to the congestion avoidance algorithms is developed to maintain fairness with two- way traffic.
Abstract: An extensive set of measurements was made in an OSI testbed to study the behavior of congestion control and avoidance. Testbed systems used the Connectionless Network Protocol (CLNP) and Transport Protocol Class 4 (TP4), which had been modified to perform the CE-bit [10] congestion avoidance and the "CUTE" [6] congestion recovery algorithms. We found that two-way traffic has dynamics which can significantly decrease fairness among competing connections using congestion avoidance. We present experiments that demonstrate this problem and our analysis of how two-way traffic results in reduced fairness. This analysis led us to develop an effective modification to the congestion avoidance algorithms to maintain fairness with two-way traffic. Our analysis of experimental results also points to undesirable interactions between two-way traffic dynamics and a sending strategy that times data transmissions by the receipt of acknowledgements. These interactions reinforce burstiness of transmissions, thereby increasing buffering requirements and delay in routers. They may also decrease throughput.
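The fairness behaviour at issue can be illustrated with a toy additive-increase/multiplicative-decrease model of two connections sharing a link. This is a generic sketch, not the paper's CE-bit or CUTE code, and the capacity and window values are made up:

```python
def aimd(w1, w2, capacity, rounds, alpha=1.0, beta=0.5):
    """Two windows grow additively each round; when their combined
    demand exceeds link capacity, both back off multiplicatively.
    The window difference is halved at every backoff, so the flows
    drift toward an equal share of the link."""
    for _ in range(rounds):
        w1 += alpha
        w2 += alpha
        if w1 + w2 > capacity:  # congestion signal
            w1 *= beta
            w2 *= beta
    return w1, w2

# Even from very unequal starting windows, the shares converge.
w1, w2 = aimd(1.0, 40.0, capacity=50.0, rounds=200)
```

In this idealized one-way model the flows converge to a fair share; the paper's finding is that two-way traffic (acknowledgements sharing the path with data) can disrupt exactly this convergence.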

30 citations


Book ChapterDOI
01 Jan 1991
TL;DR: This report has taken an integrated approach to developing algorithms for cpu scheduling, concurrency control, conflict resolution, transaction restart, transaction wakeup, deadlock, buffer man- agement, and disk I/O scheduling that directly address real-time constraints.
Abstract: In many application areas database management systems may have to operate under real-time constraints. We have taken an integrated approach to developing algorithms for cpu scheduling, concurrency control (based both on locking and on optimistic concurrency control), conflict resolution, transaction restart, transaction wakeup, deadlock, buffer management, and disk I/O scheduling. In all cases the algorithms directly address real-time constraints. We have developed new algorithms, implemented them on an experimental testbed called RT-CARAT, and evaluated their performance. We have paid particular attention to how the algorithms interact with each other and to actual implementation costs and their impact on performance. The experimental results are numerous and constitute the first such results on an actual real-time database testbed. The main algorithms and conclusions reached are presented in this report.

28 citations


01 May 1991
TL;DR: This dissertation studies some of the file system issues needed to get high performance from parallel disk systems, since parallel hardware alone cannot guarantee good performance, and shows that prefetching and caching improved the performance of parallel file systems, often dramatically.
Abstract: The increasing speed of the most powerful computers, especially multiprocessors, makes it difficult to provide sufficient I/O bandwidth to keep them running at full speed for the largest problems. Trends show that the difference in the speed of disk hardware and the speed of processors is increasing, with I/O severely limiting the performance of otherwise fast machines. This widening access-time gap is known as the "I/O bottleneck crisis." One solution to the crisis, suggested by many researchers, is to use many disks in parallel to increase the overall bandwidth. This dissertation studies some of the file system issues needed to get high performance from parallel disk systems, since parallel hardware alone cannot guarantee good performance. The target systems are large MIMD multiprocessors used for scientific applications, with large files spread over multiple disks attached in parallel. The focus is on automatic caching and prefetching techniques. We show that caching and prefetching can transparently provide the power of parallel disk hardware to both sequential and parallel applications using a conventional file system interface. We also propose a new file system interface (compatible with the conventional interface) that could make it easier to use parallel disks effectively. Our methodology is a mixture of implementation and simulation, using a software testbed that we built to run on a BBN GP1000 multiprocessor. The testbed simulates the disks and fully implements the caching and prefetching policies. Using a synthetic workload as input, we use the testbed in an extensive set of experiments. The results show that prefetching and caching improved the performance of parallel file systems, often dramatically.
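The benefit of prefetching on a sequential workload can be seen in a toy block-cache model with one-block read-ahead. This is a deliberately minimal illustration; the dissertation's actual caching and prefetching policies are more sophisticated:

```python
class ReadAheadCache:
    """Toy block cache with one-block read-ahead: when an access
    continues a sequential run, the next block is fetched early."""

    def __init__(self):
        self.cache = set()
        self.last = None       # previously accessed block number
        self.hits = 0
        self.fetches = 0       # disk accesses (demand + prefetch)

    def read(self, block):
        if block in self.cache:
            self.hits += 1     # prefetch already brought the block in
        else:
            self.fetches += 1  # demand fetch: the caller waits on disk
            self.cache.add(block)
        if self.last is not None and block == self.last + 1:
            # Sequential access detected: prefetch the next block.
            if block + 1 not in self.cache:
                self.fetches += 1
                self.cache.add(block + 1)
        self.last = block

# A purely sequential scan: after the first two blocks, every read
# is served from cache because the prefetcher stays one block ahead.
cache = ReadAheadCache()
for b in range(10):
    cache.read(b)
```

On a parallel disk system the prefetches would be issued concurrently across disks, which is where the bandwidth gain comes from; this single-threaded sketch only shows how read-ahead converts demand misses into hits.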

27 citations


Book ChapterDOI
01 Jan 1991
TL;DR: The Princeton University Behavioral Synthesis System (PUBSS) is being built as a testbed for high-level synthesis methods and development of optimization methods for partially-specified architectures.
Abstract: We are building the Princeton University Behavioral Synthesis System (PUBSS) as a testbed for high-level synthesis methods. Our research in high-level synthesis is guided by two principles: concentration on control-dominated machines and development of optimization methods for partially-specified architectures.

25 citations


Journal ArticleDOI
TL;DR: An experimental ATM switching node being developed by CSELT in the framework of CNR PFT is described, built around a self-routing buffered multistage switching network based on a single type of integrated component.
Abstract: An experimental ATM switching node being developed by CSELT in the framework of CNR PFT is described. The node is built around a self-routing buffered multistage switching network based on a single type of integrated component. Each port of the network is connected to a packet processor which is in charge of interfacing with external ATM flows at 150 Mbit/s and of dealing with the ATM layer protocols. A distributed control structure is implemented, together with a software architecture which permits a rapid and easy introduction and testing of new services by abstracting system resources and defining a formal interface to application software. The access network developed within the ATM testbed is based on an active star shaped B-NT which controls the access of the customer equipment to the local exchange and performs statistical multiplexing of the generated information flows. The terminals and interworking unit being provided in the ATM testbed have also been described, to give a picture of the services which will be considered in the experimentations.

19 citations


Journal ArticleDOI
TL;DR: In this paper, an optical holographic backplane interconnect system capable of high-speed information transmission between optoelectronic transmitter/receiver boards is described using conjugate pairs of transmission gratings in a folded reflection geometry.
Abstract: An optical holographic backplane interconnect system capable of high-speed information transmission between optoelectronic transmitter/receiver boards is described. Using conjugate pairs of transmission gratings in a folded reflection geometry, a practical method of insulating the interconnect system from wavelength variations due to temperature or power fluctuations can be achieved. The final demonstration unit was developed in a fully packaged form; it offers the potential for reconfigurable interconnects and may serve as a testbed for a variety of interconnect networks and hardware protocols.

18 citations


Proceedings ArticleDOI
01 Jan 1991
TL;DR: The malicious code testbed (MCT) is based upon both static and dynamic analysis tools developed at the University of California, Davis, which have been shown to be effective against certain types of malicious code.
Abstract: An environment for detecting many types of malicious code, including computer viruses, Trojan horses, and time/logic bombs, is proposed. The malicious code testbed (MCT) is based upon both static and dynamic analysis tools developed at the University of California, Davis, which have been shown to be effective against certain types of malicious code. The testbed extends the usefulness of these tools by using them in a complementary fashion to detect more general cases of malicious code. Perhaps more importantly, the MCT allows administrators and security analysts to check a program before installation, thereby avoiding any damage a malicious program might inflict.


Proceedings ArticleDOI
23 Jun 1991
TL;DR: The testbed is intended to affect services definition, signaling standards, and network software, and to offer insights and technology to broadband field trials.
Abstract: Bellcore is assembling an experimental research prototype for investigation of the end-to-end integration of broadband networks, multimedia terminals, and potential services and applications. The testbed includes an ATM (asynchronous transfer mode)-based exchange network prototype, adaptors to metropolitan and local area networks and terminals, a broadband ISDN (integrated services digital network) network termination for a multiterminal environment, and experimental terminals. The capabilities of the prototype will be stressed by services including multimedia, multipoint connections, connectionless data service, multimedia teleconferencing, adaptable-rate video, and video on demand. The testbed is intended to affect services definition, signaling standards, and network software, and to offer insights and technology to broadband field trials.

Book ChapterDOI
01 Jan 1991
TL;DR: The experiments particularly concern the relationship between 'situated' action and action resulting from 'predictive' or 'strategic' planning in multi-agent communities, and the impact of changing the content of agents' social models on their behaviour and collective effectiveness.
Abstract: This paper describes the MCS multiple agent software testbed which has been developed as a research tool in the University of Essex, Department of Computer Science. Recent enhancements to the testbed are noted, and experiments using it are briefly reported and discussed. The experiments particularly concern the relationship between 'situated' action and action resulting from 'predictive' or 'strategic' planning in multi-agent communities, and the impact of changing the content of agents' social models on their behaviour and collective effectiveness. The experiments are partly oriented to the study of human social systems in prehistory.


01 May 1991
TL;DR: The Terrestrial Navigator as discussed by the authors is a six-wheeled skid-steer machine utilizing compliant tires for suspension; a hybrid power system provides for different modes and environments during operation, and a rugged drive system and integral controller provide a complete package for doing robotics research.
Abstract: The Terrestrial Navigator, or Terregator, is a mobile robot device designed to provide a reliable and rugged testbed for both indoor and outdoor work in robotic navigation, guidance, sensor interpretation, and architectures. The design of mobile robots involves conflicting needs and a wide mix of disciplines. The Terregator design resolves many such needs through flexible and extensible mechanics, electronics, hardware and software. It is a six-wheeled skid-steer machine utilizing compliant tires for suspension; a hybrid power system provides for different modes and environments during operation, and a rugged drive system and integral controller provide a complete package for doing robotics research.

DOI
01 Jan 1991
TL;DR: This document describes how application models will be validated in the National PDES Testbed at the National Institute of Standards and Technology.
Abstract: This document is part of the National PDES Testbed Report Series and is intended to complement the other reports of the Validation Testing System (VTS) project. Specifically, other documents are available which fully describe the model validation methodology used in the Testbed, software requirements for the VTS, and details of the software which automates that methodology. These documents are referred to throughout this report. The problem of sharing data has many facets. The need for the capability to share data across multiple enterprises, different hardware platforms, different data storage paradigms and systems, and a variety of network architectures is growing. The emerging Standard for the Exchange of Product Model Data (STEP), a project of the International Organization for Standardization (ISO), addresses this need by providing information models which clearly and unambiguously describe data. These models are organized into application protocols. An application protocol addresses the data sharing needs for a particular application area. STEP integrates the information requirements from all the application protocols. The validity of these information models is essential for success in sharing data in a highly automated environment. This document describes how application models will be validated in the National PDES Testbed at the National Institute of Standards and Technology. (PDES, Product Data Exchange using STEP, is the U.S. effort in support of the international standard.) Application model development and testing is a complex process which involves synthesizing, analyzing, and manipulating large amounts of diverse information. Most of the process relies exclusively on human capabilities for analysis, judgment, and interaction; however, part of this process can and should be automated. A strategy for automation is based on an analysis of the information flow needed for the development and testing process and initial experiences with automation for validation testing at the National PDES Testbed.

Proceedings Article
28 Oct 1991
TL;DR: The initial work on the development of an X.500 simulation testbed to be used in investigating the behaviour of X. 500 directories in large distributed environments is described and its underlying assumptions are discussed.
Abstract: The X.500 Standard has been proposed as the basis for a directory service in distributed systems. There is some question as to whether it is suited to this use. This paper describes the initial work on the development of an X.500 simulation testbed to be used in investigating the behaviour of X.500 directories in large distributed environments. The initial version of the testbed has been developed using the Quipu prototype implementation of X.500 and the Nest network simulation tool. The long-term goals of this work are twofold: first, to evaluate the effectiveness of X.500 in providing directory services in a distributed system environment and, second, to evaluate and explore proposed changes to the Standard. This paper outlines the X.500 simulation model, discusses its underlying assumptions and describes initial experiments in using it to evaluate a network of X.500 directory and user agents. It also discusses the approach used, reports on experiences with the existing tools and identifies further enhancements to facilitate use of such tools for simulating other, similar distributed applications.

Journal ArticleDOI
TL;DR: The design of an architecture for a homogeneous microcomputer-based multiprocessor system that includes both hardware and software is presented and the present status of the architecture is described and the future research areas are discussed.

Proceedings ArticleDOI
01 Jan 1991
TL;DR: In this article, the authors describe the results of an active structural control experiment performed for the Advanced Control Evaluation for Structures (ACES) testbed at NASA-Marshall as part of the NASA Control-Structure Interaction Guest Investigator Program.
Abstract: This paper describes the results of an active structural control experiment performed for the Advanced Control Evaluation for Structures (ACES) testbed at NASA-Marshall as part of the NASA Control-Structure Interaction Guest Investigator Program. The experimental results successfully demonstrate the effectiveness of a 'dipole' concept for line-of-sight control of a pointing system mounted on a flexible structure. The simplicity and effectiveness of a classical 'single-loop-at-a-time' approach for the active structural control design for a complex structure, such as the ACES testbed, are demonstrated.

Proceedings ArticleDOI
02 Dec 1991
TL;DR: An overview is presented of the gigabit network testbed project, the motivations for the project, current plans for network and application research in the testbeds, and the possible impact of the testbeds on the architectures and applications of the stage-three US National Research and Education Network.
Abstract: An overview is presented of the gigabit network testbed project. A discussion is presented of the motivations for the project, current plans for network and application research in the testbeds, and the possible impact of the testbeds on the architectures and applications of the stage-three US National Research and Education Network. The five testbeds (AURORA, BLANCA, CASA, NECTAR and VISTAnet) are also discussed individually.

Proceedings ArticleDOI
01 Feb 1991
TL;DR: A multiarm robotic testbed for space servicing applications provides the flexibility for autonomous control with operator interaction at different levels of abstraction and facilitates efficient transfer of teleoperation control to all levels in the system hierarchy, enabling the study of the relationship between the human operator and the remote system.
Abstract: A multi-arm robotic testbed for space servicing applications is presented. The system provides the flexibility for autonomous control with operator interaction at different levels of abstraction. We have integrated key technologies from the areas of artificial intelligence, robotic control, computer vision, and human factors in an architecture which has proven useful for resolving issues related to space-based servicing tasks. A system-level breakdown of testbed components is presented, outlining the function and role of each technology area. A key feature of the architecture is that it facilitates efficient transfer of teleoperation control to all levels in the system hierarchy, enabling the study of the relationship between the human operator and the remote system. This includes the ability to perform autonomous situation assessment so that operator control activities at lower levels can be interpreted in terms of system model updates at higher levels.
1. INTRODUCTION: As plans for the next generation of space exploration unfold it is evident that highly flexible robotic systems will play a major role in reducing costs and extending mission capabilities. Planned orbital and planetary operations demand new approaches to human-controlled remote operations. Long-duration missions such as earth observation and planetary exploration require a new level of maintenance and assembly support in orbit. Programs such as the Space Station Freedom (SSF), Flight Telerobotic Servicer (FTS), Satellite Servicer System (SSS), Mars Sample Return Mission, and the Space Exploration Initiative (SEI) require an evolution in robotic system capabilities with an efficient integration of intelligent autonomous control with man-in-the-loop teleoperation. While it is generally accepted that the productivity of an on-site crewman is higher than that of a remotely controlled operation, there is significantly more time available for remote operation than direct on-site operation. For example, the SSF will allow astronauts to perform some maintenance and servicing tasks in low earth orbit via extra-vehicular activity. Even so, available crew time will probably not be sufficient to perform all required tasks. Remote ground-based servicing may prove essential for routine station operations, reducing the burden on the station crew. Maintenance and servicing operations not located at SSF will be directed from a terrestrial control center, with the actual tasks performed either in orbit or on another planetary surface. Human interaction will be required to observe and assist in handling unanticipated, potentially mission-threatening situations that are beyond the sensing and planning capabilities of the autonomous operations of the remote system. Combining teleoperation with autonomous control provides a capability that capitalizes on the strengths of each. Under teleoperation, human perception and reasoning capabilities enable man-in-the-loop control for delicate or ill-defined tasks, while autonomy allows better-defined tasks to be executed quickly and efficiently. These observations were the motivation for developing a testbed for studying tele-autonomous operations in space. The Tele-Autonomous Testbed provides a platform for identifying, developing and evaluating robotic system technologies for remote missions. Under the testbed, independently developed technologies, from a broad range of disciplines, are brought together and studied as an integrated entity. Technologies from artificial intelligence, robotic control, computer vision, and human factors are combined in a testbed architecture which allows the operator to efficiently transfer control between different levels of autonomy, enabling the study of the relationship between the human operator and task performance under different tele-autonomous scenarios.

Proceedings ArticleDOI
04 Nov 1991
TL;DR: The basic computer network testbed facility centralized core distributed inner core (CCDIC) simulation protocol characteristics are an emulation of real time, heavy load, simultaneous, synchronous and asynchronous parallel inputs in a deadlock and livelock free, contention resolved manner.
Abstract: The basic computer network testbed facility centralized core distributed inner core (CCDIC) simulation protocol characteristics are an emulation of real time, heavy load, simultaneous, synchronous and asynchronous parallel inputs in a deadlock and livelock free, contention resolved manner. The centralized distributed stars topology representation of the testbed configuration enables network growth with a topology amenable to congestion prevention and control at various levels of service. The CCDIC protocol, a congestion-controlled, buffer management protocol, prevents the throughput degradation which is allowed to occur in the discrete event simulation illustrated in the performance comparison. A neural network forecaster protocol, within the CCDIC simulation framework, enables intelligent decision processes. The geometric CCDIC protocol structure enforces optimal Ada CCDIC software implementation for execution of message operation on processors and connectivity between analyzer resources.

Proceedings ArticleDOI
09 Apr 1991
TL;DR: The SPDM ground testbed (SPDM-GT) was created to provide facilities for the proof-of-principle demonstrations of advanced technology areas for the SPDM.
Abstract: The mobile servicing system is being developed in order to assist with the construction and maintenance of the International Space Station Freedom. One important element of the mobile servicing system is the special-purpose dexterous manipulator (SPDM). It is designed to maintain both the mobile servicing system and the space station. The SPDM ground testbed (SPDM-GT) was created to provide facilities for the proof-of-principle demonstrations of advanced technology areas for the SPDM. The mechanical, computer, and software design of the SPDM-GT is discussed, and an overview is given of some of the control experiments that are currently underway.


Proceedings ArticleDOI
02 Dec 1991
TL;DR: The paper discusses the activities, successes, and challenges associated with the testbed activities at Military Airlift Command (MAC) at Scott Air Force Base, Illinois.
Abstract: The Joint MLS Technology Insertion Program was established by the Joint Staff J6 in January 1990. A key component of the Joint MLS program is the DoD testbed at Military Airlift Command (MAC), Scott Air Force Base (AFB), Illinois. The testbed is addressing critical secure system integration issues associated with expediting the deployment of MLS capabilities and components into operational command and control (C2) systems. The paper discusses the activities, successes, and challenges associated with the testbed activities at MAC.

01 Dec 1991
TL;DR: This manual is designed to assist users in defining and using command procedures to perform structural analysis; it complements the CSM Testbed User's Manual and the CSM Testbed Data Library Description.
Abstract: The purpose of this manual is to document the standard high level command language procedures of the Computational Structural Mechanics (CSM) Testbed software system. A description of each procedure including its function, commands, data interface, and use is presented. This manual is designed to assist users in defining and using command procedures to perform structural analysis; it complements the CSM Testbed User's Manual and the CSM Testbed Data Library Description.

ReportDOI
01 Jan 1991
TL;DR: This report is intended to function both as a user’s guide and as a general explanation of the system and its capabilities.
Abstract: As part of the National PDES Testbed Project, the Information Services Center has established a facility to electronically distribute documents. This facility is based on software known as an “archive server” or “mail server.” Electronic documents may be requested and in turn received via electronic mail (e-mail). This report is intended to function both as a user’s guide and as a general explanation of the system and its capabilities.

Journal ArticleDOI
TL;DR: The objectives and the early technical results of a project for the realization of an ATM local network testbed are described, to investigate the critical issues related to the introduction of ATM techniques in the network, including signalling, interworking and service support, in addition to the basic switching technology.
Abstract: The objectives and the early technical results of a project for the realization of an ATM local network testbed are described. The aim of the project is to realize a flexible experimentation environment to investigate the critical issues related to the introduction of ATM techniques in the network, including signalling, interworking and service support, in addition to the basic switching technology. After presenting the general structure and the solutions adopted for the testbed, the paper describes the main characteristics of its building blocks currently under development: the switching node with its ATM switching fabric and control units, the access network with the related interfaces and protocols, the customer equipment composed of ATM workstations, ATM terminal adaptors and LAN gateways, and the interworking unit connecting the testbed to the telephone network. Finally, the resource allocation and policing functions adopted are also discussed.

Proceedings ArticleDOI
30 Sep 1991
TL;DR: The multi-expert distributed architecture (MEDA) tests predefined strategies and also builds and evaluates strategies by the automatic learning and self-organization provided by the entities emancipation process (EEP).
Abstract: The multi-expert distributed architecture (MEDA) tests predefined strategies and also builds and evaluates strategies by the automatic learning and self-organization provided by the entities emancipation process (EEP). The testing of strategies, together with the knowledge standardization mechanism provided by the association to the knowledge of additional dimensions within specific universes which grant the expansion of expert knowledge towards a common representation, allows MEDA to constitute a testbed for distributed artificial intelligence (DAI) techniques. The MEDA tool for automatic acquisition and automatic improvement of distributed solving techniques is intended to expand research in DAI toward automatic knowledge discovery.