
Showing papers on "Testbed published in 1989"


Journal ArticleDOI
TL;DR: An overview of the ARTS Kernel and ARTS real-time toolset is given and a real-time object model and the integrated time-driven scheduling model are introduced.
Abstract: ARTS is a distributed real-time operating system designed for a real-time systems testbed being developed at Carnegie Mellon University. The objective of the testbed is to develop and verify advanced real-time computing technologies for a distributed environment. The testbed consists of a set of SUN3 workstations connected by a real-time network based on IEEE 802.5 Token Ring and Ethernet. The goal of the ARTS Kernel is not simply to produce a fast real-time executive, but rather to provide users with a predictable, analyzable, and reliable distributed real-time computing environment. In particular, we have developed a real-time object model which incorporates a time fence protocol. The time fence protocol is applied at every invocation on an object to detect the origin of timing errors. We also developed an integrated time-driven scheduling model and its scheduler based on the notion of policy/mechanism separation. Since each scheduling policy is implemented as a kernel object, a user can easily add policies or change the system's scheduling policy. A real-time toolset was also developed in order to predict the schedulability of real-time activities. In this paper, we give an overview of the ARTS Kernel and the ARTS real-time toolset. In particular, we introduce the real-time object model and the integrated time-driven scheduling model. We then describe the basic primitives and major components of the ARTS Kernel and the real-time toolset, which consists of the schedulability analyzer, Scheduler 1-2-3, and the real-time monitor/debugger, ARM.
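
The policy/mechanism separation and time-fence idea described above can be illustrated with a minimal sketch. This is not the ARTS kernel interface; it is a hypothetical Python rendering in which each scheduling policy is a swappable object and an invocation is rejected when its worst-case execution time would cross the deadline fence. All names and task parameters are invented for illustration.

```python
# Minimal sketch (not the ARTS implementation): policy/mechanism separation,
# where each scheduling policy is a pluggable object, plus a simple
# "time fence" check applied at invocation time. Names are illustrative.
import heapq
import time


class SchedulingPolicy:
    """Policy object: decides ordering; the mechanism below just runs tasks."""

    def key(self, task):
        raise NotImplementedError


class RateMonotonic(SchedulingPolicy):
    def key(self, task):
        return task["period"]          # shorter period -> higher priority


class EarliestDeadlineFirst(SchedulingPolicy):
    def key(self, task):
        return task["deadline"]        # earlier deadline -> higher priority


def invoke_with_time_fence(task, now):
    """Reject the invocation if its worst-case time would cross the fence."""
    if now + task["wcet"] > task["deadline"]:
        raise TimeoutError(f"time fence violated for {task['name']}")
    task["body"]()                     # run the operation


def run(tasks, policy):
    queue = [(policy.key(t), i, t) for i, t in enumerate(tasks)]
    heapq.heapify(queue)
    while queue:
        _, _, task = heapq.heappop(queue)
        try:
            invoke_with_time_fence(task, time.monotonic())
        except TimeoutError as err:
            print("timing error detected at invocation:", err)


if __name__ == "__main__":
    t0 = time.monotonic()
    tasks = [
        {"name": "sensor", "period": 0.01, "wcet": 0.001,
         "deadline": t0 + 0.05, "body": lambda: None},
        {"name": "logger", "period": 0.10, "wcet": 0.200,
         "deadline": t0 + 0.05, "body": lambda: None},  # trips the fence
    ]
    run(tasks, RateMonotonic())        # swap in EarliestDeadlineFirst() freely
```

Because the ordering decision lives entirely in the policy object, changing the system's scheduling discipline in this sketch is a one-line change, which is the point of the separation the abstract describes.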

193 citations


Proceedings ArticleDOI
01 Sep 1989
TL;DR: The paper focuses on the description of a first prototype of the Comandos architecture, which implements a subset of the architecture and is currently running on a set of personal computers and workstations at INESC.
Abstract: Comandos is a project within the European Strategic Programme for Research on Information Technology (ESPRIT), and it stems from the identified need to provide simpler and more integrated environments for application development in large distributed systems. The fundamental goal of the project is the definition of an integrated platform providing support for distributed and concurrent processing in a LAN environment, extensible and distributed data management, and tools for monitoring and administering the distributed environment. An object-oriented approach was used as the ground level for the integration of the multidisciplinary concepts addressed in the project. This paper starts by describing the basic model and architecture of Comandos, which results from a common effort by all the partners in the project. We then focus on the description of a first prototype of the system, which implements a subset of the architecture and is currently running on a set of personal computers and workstations at INESC. The prototype is a testbed for the architecture, providing dynamic linking, access to persistent objects, and transparent distribution. Special attention is given to the performance aspects of object invocation in virtual memory.

47 citations


Book
01 Apr 1989
TL;DR: The Distributed Vehicle Monitoring Testbed serves as an example of how a testbed can be engineered to permit the empirical exploration of design issues in knowledge-based AI systems.
Abstract: Cooperative distributed problem-solving networks are distributed networks of semi-autonomous processing nodes that work together to solve a single problem. The Distributed Vehicle Monitoring Testbed is a flexible and fully instrumented research tool for empirically evaluating alternative designs for these networks. The testbed simulates a class of distributed knowledge-based problem-solving systems operating on an abstracted version of a vehicle monitoring task. There are two important aspects to the testbed: (1) it implements a novel generic architecture for distributed problem-solving networks that exploits sophisticated local node control and meta-level control to improve global coherence in network problem solving; (2) it serves as an example of how a testbed can be engineered to permit the empirical exploration of design issues in knowledge-based AI systems. The testbed can simulate different degrees of sophistication in problem-solving knowledge and different focus-of-attention mechanisms, vary the distribution and characteristics of error in its (simulated) input data, and measure the progress of problem solving. Node configurations and communication channel characteristics can also be independently varied in the simulated network.

29 citations


Journal ArticleDOI
TL;DR: A retrospective view is presented of the Charlotte distributed operating system, a testbed for developing techniques and tools to solve computation-intensive problems with large-grain parallelism.
Abstract: A retrospective view is presented of the Charlotte distributed operating system, a testbed for developing techniques and tools to solve computation-intensive problems with large-grain parallelism. The final version of Charlotte runs on the Crystal multicomputer, a collection of VAX-11/750 computers connected by a local area network. The kernel/process interface is unique in its support for symmetric, bidirectional communication paths (called links) and synchronous nonblocking communication. Several lessons were learned in implementing Charlotte. Links have proven to be a useful abstraction, but the primitives do not seem to be at quite the right level of abstraction. The implementation uses finite-state machines and a multitask kernel, both of which work well. It also maintains absolute distributed information, which is more expensive than using hints. The development of high-level tools, particularly the Lynx distributed programming language, has simplified the use of kernel primitives and helped to manage concurrency at the process level.
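
The link abstraction mentioned above (a symmetric, bidirectional path between two processes with nonblocking sends) can be approximated in a few lines. The sketch below is an assumption-laden illustration, not Charlotte's actual kernel primitives: "nonblocking" is modeled simply as a send that returns immediately, succeeding or failing, and the class and method names are invented.

```python
# Illustrative sketch only (not Charlotte's kernel interface): a symmetric,
# bidirectional "link" joining exactly two processes, with a nonblocking send.
import queue


class Link:
    """Two ends; each end can both send and receive over the same link."""

    def __init__(self):
        self._queues = (queue.Queue(maxsize=1), queue.Queue(maxsize=1))

    def end(self, side):
        return LinkEnd(self, side)

    def _send(self, side, msg):
        try:
            self._queues[1 - side].put_nowait(msg)   # nonblocking: fail fast
            return True
        except queue.Full:
            return False                             # caller retries later

    def _receive(self, side, timeout=None):
        return self._queues[side].get(timeout=timeout)


class LinkEnd:
    def __init__(self, link, side):
        self._link, self._side = link, side

    def send(self, msg):
        return self._link._send(self._side, msg)

    def receive(self, timeout=None):
        return self._link._receive(self._side, timeout)


if __name__ == "__main__":
    link = Link()
    a, b = link.end(0), link.end(1)
    a.send("ping")           # returns immediately, whether or not b has read
    print(b.receive())       # -> "ping"
    b.send("pong")           # symmetric: either end may initiate
    print(a.receive())       # -> "pong"
```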

26 citations


Proceedings Article
01 Dec 1989

24 citations


Proceedings ArticleDOI
03 Jan 1989
TL;DR: Application of early versions of the prototyping tools in two case studies indicates that significant productivity gains can be realized from using testbed prototyping techniques to validate user interface requirements.
Abstract: A requirements engineering testbed being developed and used by Rome Air Development Center is described. The testbed is used to support the research, development, application, and evaluation of requirements engineering methods and tools. It is based on a novel process model for requirements engineering activities that provides a detailed description of the fundamental activities occurring during requirements engineering and their relationships to design activities. The testbed's implementation includes a requirements analysis method and two requirements validation techniques based on rapid prototyping and simulation. Each of these tools is described from the perspective of the user, and the status of the testbed is presented. Application of early versions of the prototyping tools in two case studies indicates that significant productivity gains can be realized from using testbed prototyping techniques to validate user interface requirements.

23 citations


01 Mar 1989
TL;DR: An overview of the CSM testbed methods development environment is presented and some numerical methods developed on a CRAY-2 are described.
Abstract: The Computational Structural Mechanics (CSM) activity is developing advanced structural analysis and computational methods that exploit high-performance computers. Methods are developed in the framework of the CSM testbed software system and applied to representative complex structural analysis problems from the aerospace industry. An overview of the CSM testbed methods development environment is presented and some numerical methods developed on a CRAY-2 are described. Selected application studies performed on the NAS CRAY-2 are also summarized.

21 citations


Proceedings ArticleDOI
15 Oct 1989
TL;DR: A functional description of the MRC architecture is presented and the distributed algorithms used for network monitoring and control are described, including a network simulator that can emulate the presence and operation of up to 20 pseudonodes.
Abstract: The Media Resource Controller (MRC) project was initiated to develop an advanced development model (ADM) testbed to be used in the evaluation of reconstitutable network control techniques using multiple-media point-to-point radio links. The planned operational application of this technology is to the 21st century tactical air control system. The ADM MRC was required to integrate three distinct point-to-point radios (two specific VHF and EHF radios and a generic X.25 interface for future definition), a vocoder port for voice capability, and an X.25 interface for host traffic. Also included in the testbed development were an operator interface for system control and evaluation, a host simulator to generate user datagram and virtual circuit traffic in the absence of an actual host connection, and a network simulator that can emulate the presence and operation of up to 20 pseudonodes. The authors present a functional description of the MRC architecture and describe the distributed algorithms used for network monitoring and control.

16 citations


Proceedings ArticleDOI
01 Aug 1989
TL;DR: The lessons learned through the design and deployment of a wide area ATM network on top of primary rate ISDN are described.
Abstract: Asynchronous Transfer Mode (ATM) has been advocated as the basis of multiservice telecommunications in the next century. However, the public telecommunications technology of the 1990s will be ISDN, which presents a circuit-switched subscriber interface. An obvious approach to building a wide area ATM network is to build it on top of primary rate ISDN. In this paper we describe the lessons learned through the design and deployment of such a network.

16 citations


01 Apr 1989
TL;DR: An overview of the CSM Testbed methods development environment is presented and some new numerical methods developed on a CRAY-2 are described.
Abstract: A research activity called Computational Structural Mechanics (CSM) conducted at the NASA Langley Research Center is described. This activity is developing advanced structural analysis and computational methods that exploit high-performance computers. Methods are developed in the framework of the CSM Testbed software system and applied to representative complex structural analysis problems from the aerospace industry. An overview of the CSM Testbed methods development environment is presented and some new numerical methods developed on a CRAY-2 are described. Selected application studies performed on the NAS CRAY-2 are also summarized.

16 citations


Proceedings ArticleDOI
01 Oct 1989
TL;DR: The SPECTRUM Testbed has been designed to support the empirical study of parallel simulation protocols and applications, with the expectation that experience with the testbed will provide insights into the efficacy of various protocols and their interplay with classes of applications.
Abstract: Currently there is not a significant body of comparative, experimental performance results for parallel simulation protocols, nor is there a body of significant analytic studies. The SPECTRUM Testbed [ReDi89] has been designed to support the empirical study of parallel simulation protocols and applications, with the expectation that experience with the testbed will provide insights into the efficacy of various protocols and their interplay with classes of applications. We discuss our experience with the SPECTRUM Testbed, focusing primarily on an observed, unexpected degree of dependency between protocols and applications, and on an unexpected, large set of application design options. The latter gives rise to the definition of a set of application design variables which describes a large design space. We discuss its impact on the testbed design, and we discuss a limited set of performance results that we have for selected sets of protocols and applications.

Journal ArticleDOI
TL;DR: A testbed-based approach to the evaluation of fault-tolerant distributed-computing schemes is discussed, based on experimental incorporation of system structuring and design techniques into real-time distributed computing testbeds centered around tightly coupled microcomputer networks.
Abstract: A testbed-based approach to the evaluation of fault-tolerant distributed-computing schemes is discussed. The approach is based on experimental incorporation of system structuring and design techniques into real-time distributed computing testbeds centered around tightly coupled microcomputer networks. The effectiveness of this approach has been experimentally confirmed. Primary advantages of the testbed-based approach include the relatively high accuracy of the data obtained on timing and logical complexity, as well as the relatively high degree of assurance that can be obtained on the practical effectiveness of the scheme evaluated. Various design issues encountered in the course of establishing the basic microcomputer network testbed facilities are discussed, along with their augmentation to support some experiments. The shortcomings of the testbeds that have been recognized are also discussed, together with the desired extensions of the testbeds. Some of the desired extensions are beyond the state of the art in microcomputer network implementation.

31 Jan 1989
TL;DR: The development currently in progress on the functional and implementation architectures of the NASA/OAST Testbed and capabilities planned for the coming years are presented.
Abstract: Developed in phases as a laboratory-based research testbed, the NASA/OAST Telerobot Testbed provides an environment for system test and demonstration of technology that will usefully complement, significantly enhance, or even replace manned space activities. By integrating advanced sensing, robotic manipulation and intelligent control under human-interactive supervision, the Testbed will ultimately demonstrate execution of a variety of generic tasks suggestive of space assembly, maintenance, repair, and telescience. The Testbed system features a hierarchical layered control structure compatible with the incorporation of evolving technologies as they become available. The Testbed system is physically implemented in a computing architecture which allows for ease of integration of these technologies while preserving the flexibility for test of a variety of man-machine modes. The development currently in progress on the functional and implementation architectures of the NASA/OAST Testbed and capabilities planned for the coming years are presented.

Proceedings ArticleDOI
06 Aug 1989
TL;DR: The integration of high-level control algorithms used for battery charge control into a real-time execution environment is discussed and the main emphasis is on the development of a sophisticated programming environment to control concurrent execution of multiple autonomous algorithms coupled with a continuous input/output data flow.
Abstract: The most recent developments in the Boeing Aerospace Autonomous Power System (APS) testbed are presented. The APS testbed is a 28 V DC system with 3 kW capability, assembled for use in developing improved control techniques for aerospace electrical power systems. The main emphasis is on the development of a sophisticated programming environment to properly control concurrent execution of multiple autonomous algorithms coupled with the continuous input/output (I/O) data flow. The integration of high-level control algorithms used for battery charge control into a real-time execution environment is discussed. This includes methods that allow several functions to respond to real-time input, affect/maintain expert system (shared) memory, and control the electrical power system configuration.

Proceedings ArticleDOI
06 Aug 1989
TL;DR: The Ada language software developed to control the NASA Lewis Research Center's Power Management and Distribution testbed is described, a reduced-scale prototype of the electric power system to be used on Space Station Freedom.
Abstract: The Ada language software developed to control the NASA Lewis Research Center's Power Management and Distribution testbed is described. The testbed is a reduced-scale prototype of the electric power system to be used on Space Station Freedom. It is designed to develop and test hardware and software for a 20 kHz power distribution system. The distributed, multiprocessor testbed control system has an easy-to-use operator interface with an understandable English-text format. A simple interface for algorithm writers that uses the same commands as the operator interface is provided, encouraging interactive exploration of the system.
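
The idea of operators and algorithm writers sharing one English-text command vocabulary can be sketched briefly. The toy dispatcher below is not the testbed's Ada software; the command verbs and switchgear names are invented examples of how a single parser can serve both a console operator and a control algorithm.

```python
# Hypothetical sketch: one command interpreter used by both the operator
# console and automated algorithms. Command names and device names invented.

def execute(command, switchgear):
    """Parse an English-text command and apply it to the switchgear table."""
    verb, *args = command.lower().split()
    if verb == "open" and args:
        switchgear[args[0]] = "open"
    elif verb == "close" and args:
        switchgear[args[0]] = "closed"
    elif verb == "status":
        return dict(switchgear)
    else:
        raise ValueError(f"unknown command: {command!r}")
    return dict(switchgear)


if __name__ == "__main__":
    switchgear = {"rbi_1": "closed", "rbi_2": "closed"}
    print(execute("OPEN RBI_1", switchgear))   # typed by an operator
    print(execute("status", switchgear))       # issued by an algorithm
```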

01 Jan 1989
TL;DR: A description is given of the SSM/PMAD power system automation testbed, which was developed using a systems engineering approach and has been successfully used in power system management and fault diagnosis.
Abstract: A description is given of the SSM/PMAD power system automation testbed, which was developed using a systems engineering approach. The architecture includes a knowledge-based system and has been successfully used in power system management and fault diagnosis. Architectural issues which affect overall system activities and performance are examined. The knowledge-based system is discussed along with its associated automation implications, and interfaces throughout the system are presented.

Proceedings ArticleDOI
25 Sep 1989
TL;DR: An important aspect of the system approach to space system power management described is that intelligent control is combined with fault management and activity and resource planning and scheduling to form the nucleus of a PMAD system manager.
Abstract: A description is given of a power system testbed, the space station module power management and distribution (SSM/PMAD) testbed. This testbed, designed and implemented for Space Station Freedom modules, uses intelligent control in automating PMAD as well as in providing power system fault diagnosis and recovery. The authors describe the supporting hardware and software of the breadboard in the context of its global goal of intelligent power automation. They then consider how intelligence is distributed among the various software components of the system. An important aspect of the system approach to space system power management described is that intelligent control is combined with fault management and activity and resource planning and scheduling to form the nucleus of a PMAD system manager. Also, deterministic processes which may, by definition, belong to knowledge-based operations have been removed and distributed to more fundamental, lower-level activities wherever possible.

31 Jan 1989
TL;DR: Using the Telerobot Testbed, researchers demonstrated several of the capabilities and technological advances in the control and integration of robotic systems which have been under development at JPL for several years.
Abstract: The Jet Propulsion Laboratory's (JPL) Telerobot Testbed is an integrated robotic testbed used to develop, implement, and evaluate the performance of advanced concepts in autonomous, tele-autonomous, and tele-operated control of robotic manipulators. Using the Telerobot Testbed, researchers demonstrated several of the capabilities and technological advances in the control and integration of robotic systems which have been under development at JPL for several years. In particular, the Telerobot Testbed was recently employed to perform a nearly completely automated, end-to-end satellite grapple and repair sequence. The task of integrating existing as well as new concepts in robot control into the Telerobot Testbed has been a very difficult and time-consuming one. Now that researchers have completed the first major milestone (i.e., the end-to-end demonstration), it is important to reflect back upon these experiences and to collect the knowledge that has been gained so that improvements can be made to the existing system. It is also believed that these experiences are of value to others in the robotics community. Therefore, the primary objective here will be to use the Telerobot Testbed as a case study to identify real problems and technological gaps which exist in the areas of robotics and, in particular, systems integration. Such problems have surely hindered the development of what could reasonably be called an intelligent robot. In addition to identifying such problems, the researchers briefly discuss what approaches have been taken to resolve them or, in several cases, to circumvent them until better approaches can be developed.

Proceedings ArticleDOI
01 May 1989
TL;DR: A digital cellular radio experimental system developed for the North American market and demonstrated for the CTIA in Los Angeles in December 1988 is described, and field-test results are presented.
Abstract: A digital cellular radio experimental system has been developed for the North American market. It was demonstrated for the CTIA in Los Angeles in December 1988. After describing the experimental TDMA (time-division multiple access) system, the authors present field-test results. From the demonstration it was concluded that the speech quality of the digital system was at least as good as that of the analog reference under all conditions. During the time of implementation of the testbed, work was done on an 8.7 kb/s speech codec, resulting in quality as good as that of the 13 kb/s codec. This means a threefold direct capacity improvement in a 3-split TDMA system.

31 Jan 1989
TL;DR: The robotics technology testbed is centered around the dual arm teleoperation of a pair of 7 degree-of-freedom manipulators, each with its own 6-DOF mini-master hand controller.
Abstract: Much of the technology planned for use in NASA's Flight Telerobotic Servicer (FTS) and the Demonstration Test Flight (DTF) is relatively new and untested. To provide the answers needed to design safe, reliable, and fully functional robotics for flight, NASA/GSFC is developing a robotics technology testbed for research of issues such as zero-g robot control, dual arm teleoperation, simulations, and hierarchical control using a high-level programming language. The testbed will be used to investigate these high-risk technologies required for the FTS and DTF projects. The robotics technology testbed is centered around the dual arm teleoperation of a pair of 7 degree-of-freedom (DOF) manipulators, each with its own 6-DOF mini-master hand controller. Several levels of safety are implemented using the control processor, a separate watchdog computer, and other low-level features. High-speed input/output ports allow the control processor to interface to a simulation workstation: all or part of the testbed hardware can be used in real-time dynamic simulation of the testbed operations, allowing a quick and safe means for testing new control strategies. The NASA/National Bureau of Standards Standard Reference Model for Telerobot Control System Architecture (NASREM) hierarchical control scheme is being used as the reference standard for system design. All software developed for the testbed, excluding some of the simulation workstation software, is being developed in Ada. The testbed is being developed in phases. The first phase, which is nearing completion, is described, and future developments are highlighted.

Proceedings ArticleDOI
01 Mar 1989
TL;DR: In the context of ongoing research into the coordination of various approaches to learning into an integrated facility called the Learning Testbed, the centrality of the simulation performance engine NETSIM to the development of the Learning Testbed is discussed.
Abstract: This paper discusses the role of simulation in machine learning studies and presents a view of simulation-based machine learning. Based on the concept of the intelligent agent, it is shown how each of a variety of learning subsystems interacts with a simulated performance engine and how they may interact with each other. In particular, in the context of ongoing research into the coordination of various approaches to learning into an integrated facility called the Learning Testbed, the centrality of the simulation performance engine NETSIM to the development of the Learning Testbed is discussed. NETSIM is a fine-grained simulation of the call placement process in a circuit-switched telecommunications network which allows observation of the effectiveness of various traffic control strategies on network performance when time-varying traffic patterns are encountered. The users of the NETSIM program are three learning programs, which embody three different approaches to how a specialized domain, such as network traffic control, might be learned.
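
The learner/performance-engine loop described above can be sketched at toy scale. The code below is not NETSIM or any of the paper's three learning programs; the "performance engine" is a stand-in that maps a control setting to a noisy blocking rate, and the learner is a simple hill climber, all names and numbers invented.

```python
# Hypothetical sketch of a learner interacting with a simulated performance
# engine: the engine scores a traffic-control setting, the learner adjusts it.
import random


def performance_engine(trunk_reservation, seed):
    """Toy stand-in for a network simulator: lower blocking near an optimum."""
    rng = random.Random(seed)
    optimum = 0.3
    noise = rng.uniform(-0.02, 0.02)
    return abs(trunk_reservation - optimum) + noise   # pretend blocking rate


def hill_climbing_learner(episodes=30, step=0.05):
    """Repeatedly query the engine and keep the best-scoring setting seen."""
    setting, best = 0.8, float("inf")
    for episode in range(episodes):
        for candidate in (setting - step, setting, setting + step):
            blocking = performance_engine(candidate, seed=episode)
            if blocking < best:
                best, setting = blocking, candidate
    return setting, best


if __name__ == "__main__":
    setting, blocking = hill_climbing_learner()
    print(f"learned control setting ~{setting:.2f}, blocking ~{blocking:.3f}")
```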

Proceedings ArticleDOI
22 May 1989
TL;DR: The structure and capabilities of the RTMOS, the intertask communications protocols and structures, and the application programmer interface are discussed, including the goal of hiding architecture specifics from the applications programmer.
Abstract: The Flight Control Division of the Air Force Flight Dynamics Laboratory has been researching the application of fault-tolerant multiprocessor architectures and software to flight control and vehicle management systems. The Advanced Multiprocessor Control Architecture Development (AMCAD) in-house project has developed a real-time multiprocessor operating system (RTMOS) targeted for the AMCAD laboratory testbed. The RTMOS is similar in structure to commercial real-time operating systems, but has been expanded to support coarse-grained multiprocessing, the reconfiguration of the workload in case of processor failure, and AMCAD's goal of hiding architecture specifics from the applications programmer. The authors discuss the structure and capabilities of the RTMOS, the intertask communications protocols and structures, and the application programmer interface.

01 Jan 1989
TL;DR: This is the fourth of a set of five volumes which describe the software architecture for the Computational Structural Mechanics Testbed, composed of the command language CLAMP, the command language interpreter CLIP, and the data manager GAL.
Abstract: This is the fourth of a set of five volumes which describe the software architecture for the Computational Structural Mechanics Testbed. Derived from NICE, an integrated software system developed at Lockheed Palo Alto Research Laboratory, the architecture is composed of the command language CLAMP, the command language interpreter CLIP, and the data manager GAL. Volumes 1, 2, and 3 (NASA CR's 178384, 178385, and 178386, respectively) describe CLAMP and CLIP and the CLIP-processor interface. Volumes 4 and 5 (NASA CR's 178387 and 178388, respectively) describe GAL and its low-level I/O. CLAMP, an acronym for Command Language for Applied Mechanics Processors, is designed to control the flow of execution of processors written for NICE. Volume 4 describes the nominal-record data management component of the NICE software. It is intended for all users.

Proceedings ArticleDOI
12 Jun 1989
TL;DR: The mission simulation testbed described in this paper provides an inexpensive means of evaluating the performance of AUV designs and configurations for various mission profiles and will emphasize the development of new control architectures for Navy missions.
Abstract: The Navy is investigating the use of autonomous underwater vehicles (AUVs) for diverse tactical missions. The mission simulation testbed described in this paper provides an inexpensive means of evaluating the performance of AUV designs and configurations for various mission profiles. AUV missions under investigation include surveillance, search, and hazard detection undertaken in a noisy and cluttered environment. An object-oriented software environment permits the rapid prototyping and evaluation of new control paradigms, sensor configurations, and signal processing algorithms for specific missions. Great flexibility is possible in creating scenarios and modifying objects such as the ambient environment model. Multiple independent processes are handled using hypothesis posting and queue processing in a blackboard-like architecture. Recent modifications to the simulator include the incorporation of a geodetic coordinate system, a Mercator-projection map display, an active sonar obstacle-avoidance model, and an energy-consumption model. Additional models for navigation and vehicle dynamics are proposed. Future work will emphasize the development of new control architectures for Navy missions.
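
"Hypothesis posting and queue processing in a blackboard-like architecture" can be shown with a minimal sketch. This is not the Navy simulator's code; the knowledge sources, hypothesis types, and thresholds below are invented purely to illustrate the control flow.

```python
# Hedged sketch of a blackboard: knowledge sources post hypotheses to shared
# state, and a queue drives which hypothesis is processed next.
from collections import deque


class Blackboard:
    def __init__(self):
        self.hypotheses = []          # shared, globally visible state
        self.queue = deque()          # pending hypotheses to be processed

    def post(self, hypothesis):
        self.hypotheses.append(hypothesis)
        self.queue.append(hypothesis)


def sonar_source(board, ping):
    """Knowledge source: turns a raw sonar return into a hypothesis."""
    if ping["strength"] > 0.5:
        board.post({"type": "obstacle", "bearing": ping["bearing"]})


def navigator_source(board, hypothesis):
    """Knowledge source: reacts to posted hypotheses taken from the queue."""
    if hypothesis["type"] == "obstacle":
        board.post({"type": "course_change",
                    "new_heading": (hypothesis["bearing"] + 90) % 360})


if __name__ == "__main__":
    board = Blackboard()
    sonar_source(board, {"strength": 0.8, "bearing": 45})
    while board.queue:                         # queue-processing loop
        hyp = board.queue.popleft()
        if hyp["type"] == "obstacle":
            navigator_source(board, hyp)
        else:
            print("acting on:", hyp)
```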

Proceedings ArticleDOI
04 Dec 1989
TL;DR: The Security Model Development Environment (SMDE) provides an iterative modeling approach that increases the productivity of model designers while making the model development process more accessible.
Abstract: The paper introduces the Security Model Development Environment (SMDE), a suite of prototype tools for the development of secure systems. The development of the SMDE was performed under contract for the Rome Air Development Center and the Strategic Defense Initiative. The SMDE is based on a methodology for the construction and analysis of security models, which supports the model developer via an iterative model design process. The methodology supports a concept of automatic rule-base generation, which required the development of the prototype tools and the Common Notation for the expression of security models. The prototype tools are the Model Translator Tool (MTT) and the Testbed. The MTT automatically generates a rule base from a security model, and the Testbed simulates the activity of a system using a model's rule base. The methodology, together with an extended model description, provides support for the automated tools, and the impact of the methodology on security model development is summarized. The SMDE provides an iterative modeling approach that increases the productivity of model designers while making the model development process more accessible.
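
The translate-then-simulate flow (MTT generates a rule base from a model, the Testbed replays system activity against it) can be illustrated with a toy. The sketch below does not use the SMDE's Common Notation; the lattice-style read-down/write-up rules are only an example of rules derivable from a model, and every name is hypothetical.

```python
# Illustrative sketch under stated assumptions: a tiny "model" is translated
# into a rule base, and a toy testbed replays access requests against it.

def translate_model(model):
    """Stand-in for a model translator: model levels -> access-check rules."""
    order = {level: rank for rank, level in enumerate(model["levels"])}

    def may_read(subject_level, object_level):
        # example rule: read down only
        return order[subject_level] >= order[object_level]

    def may_write(subject_level, object_level):
        # example rule: write up only
        return order[subject_level] <= order[object_level]

    return {"read": may_read, "write": may_write}


def testbed(rule_base, trace):
    """Simulate system activity against the generated rule base."""
    for subject, op, obj in trace:
        allowed = rule_base[op](subject, obj)
        print(f"{subject:12s} {op:5s} {obj:12s} -> "
              f"{'ALLOW' if allowed else 'DENY'}")


if __name__ == "__main__":
    model = {"levels": ["unclassified", "confidential", "secret"]}
    rules = translate_model(model)
    testbed(rules, [("secret", "read", "unclassified"),
                    ("confidential", "write", "secret"),
                    ("unclassified", "read", "secret")])
```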

01 Feb 1989
TL;DR: This is the second of a set of five volumes which describe the software architecture for the Computational Structural Mechanics Testbed, composed of the command language (CLAMP), the command language interpreter (CLIP), and the data manager (GAL).
Abstract: This is the second of a set of five volumes which describe the software architecture for the Computational Structural Mechanics Testbed. Derived from NICE, an integrated software system developed at Lockheed Palo Alto Research Laboratory, the architecture is composed of the command language (CLAMP), the command language interpreter (CLIP), and the data manager (GAL). Volumes 1, 2, and 3 (NASA CR's 178384, 178385, and 178386, respectively) describe CLAMP and CLIP and the CLIP-processor interface. Volumes 4 and 5 (NASA CR's 178387 and 178388, respectively) describe GAL and its low-level I/O. CLAMP, an acronym for Command Language for Applied Mechanics Processors, is designed to control the flow of execution of processors written for NICE. Volume 2 describes the CLIP directives in detail. It is intended for intermediate and advanced users.

Proceedings ArticleDOI
06 Aug 1989
TL;DR: A description is given of the SSM/PMAD power system automation testbed, which was developed using a systems engineering approach and has been successfully used in power system management and fault diagnosis.
Abstract: A description is given of the SSM/PMAD power system automation testbed, which was developed using a systems engineering approach. The architecture includes a knowledge-based system and has been successfully used in power system management and fault diagnosis. Architectural issues which affect overall system activities and performance are examined. The knowledge-based system is discussed along with its associated automation implications, and interfaces throughout the system are presented.

Proceedings ArticleDOI
A. Dupy, J. Schwartz, Y. Yemini, D. Bacon
01 Oct 1989
TL;DR: Nest is a graphical environment for distributed networked systems simulation and rapid prototyping in which users can develop and test distributed systems and protocols within simulated network scenarios; it offers the generality of language-based simulation techniques and the efficiencies of model-based techniques.
Abstract: This paper describes Nest, a graphical environment for distributed networked systems simulation and rapid prototyping. Nest users can develop and test distributed systems and protocols (from crude models to actual system code) within simulated network scenarios. Nest represents an environment-based approach to simulation: users view Nest as an extension of their standard Unix environment. Nest offers the generality of language-based simulation techniques and the efficiencies of model-based techniques. Users interact with Nest through standardized graphical interfaces. Nest permits users to modify and reconfigure the simulation during execution; thus, it is possible to study the dynamic response of a distributed system to failures or burst loads. Nest is organized as a simulation server, responsible for execution of complex simulation scenarios, and client monitors, responsible for simulation control. The client/server model permits distribution of Nest over a network environment. This permits migration of simulations to powerful remote computational servers as well as development of a shared multi-site simulation/integration testbed. Nest is portable and extensible. It has been ported to virtually all Unix variants and distributed since 1987 to over 150 sites worldwide. It has been used in scores of studies ranging from communication protocols to distributed databases and operating systems, as well as distributed manufacturing systems.
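
The server/monitor split and mid-run reconfiguration described above can be sketched in miniature. This is a minimal illustration assuming nothing about Nest's real API: a "simulation server" thread advances a scenario while a "client monitor" injects a node failure during execution; node names and commands are invented.

```python
# Hypothetical sketch: a simulation loop that accepts control commands
# (here, failing a node) while it is running.
import queue
import threading
import time


def simulation_server(commands, steps=5):
    """Advance the scenario, applying any pending control commands each step."""
    nodes = {"A": True, "B": True, "C": True}       # node -> up?
    for step in range(steps):
        try:
            cmd, node = commands.get_nowait()        # monitor-issued command
            if cmd == "fail":
                nodes[node] = False
        except queue.Empty:
            pass
        alive = [n for n, up in nodes.items() if up]
        print(f"step {step}: live nodes = {alive}")
        time.sleep(0.05)


if __name__ == "__main__":
    commands = queue.Queue()
    server = threading.Thread(target=simulation_server, args=(commands,))
    server.start()
    time.sleep(0.12)
    commands.put(("fail", "B"))                      # client-monitor action
    server.join()
```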

Proceedings ArticleDOI
06 Aug 1989
TL;DR: An intelligent filtering system for filtering notifications from sensors on the basis of the user's current focus and the relative importance of the notification has been developed to reduce the telemetry data from the Hubble Space Telescope (HST) Electrical Power System testbed.
Abstract: An intelligent filtering system for filtering notifications from sensors on the basis of the user's current focus and the relative importance of the notification has been developed to reduce the telemetry data from the Hubble Space Telescope (HST) Electrical Power System testbed. The motivation, design, and development of this system are discussed. A particular area of interest is the health and performance of the six nickel-cadmium batteries as they undergo the charge and load functions of the simulated orbit. The knowledge base in the present system is derived primarily from the warnings and alarms in NICBES, an expert system for health management and diagnosis of these batteries. Both systems get their input from a stream of 370 sensor readings emitted from the testbed every minute. Included in these streams are cell voltages and pressures, total battery voltages and currents, and bus voltages and currents.
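
The core filtering rule (pass a notification if it matches the user's current focus or is important enough on its own) is simple to illustrate. The sketch below is not the HST testbed or NICBES code; the focus set, importance levels, and sensor names are invented.

```python
# Hypothetical sketch of focus- and importance-based notification filtering.

CURRENT_FOCUS = {"battery_3"}          # subsystems the operator is watching
IMPORTANCE = {"info": 1, "warning": 2, "alarm": 3}


def should_notify(reading, focus=CURRENT_FOCUS, min_level="alarm"):
    """Alarms always pass; lesser notifications pass only when in focus."""
    if IMPORTANCE[reading["level"]] >= IMPORTANCE[min_level]:
        return True
    return reading["source"] in focus


if __name__ == "__main__":
    stream = [
        {"source": "battery_3", "level": "warning", "text": "cell pressure high"},
        {"source": "bus_voltage", "level": "info", "text": "nominal"},
        {"source": "battery_5", "level": "alarm", "text": "over-temperature"},
    ]
    for reading in stream:
        if should_notify(reading):
            print(f"{reading['source']}: {reading['text']}")
```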

Journal ArticleDOI
TL;DR: The notion of a distributed intelligent object (DIO) is introduced to incorporate and integrate the necessary concepts in the management of time, knowledge and system distribution in real-time applications.