Journal ArticleDOI

Integrated display for supervisory control of space operations.

01 Oct 1998 - Vol. 42, Iss. 5, pp. 481-485
TL;DR: An integrated overview display supporting supervisory monitoring and control tasks, providing a view of control information integrated over time, supporting a supervisor in following a thread of operational information extending from the past into the future.
Abstract: In recent years, intelligent software has been applied for monitoring and controlling space systems to reduce the workload of the crew and ground support personnel. The application of intelligent software to system control changes the human role in space operations. Instead of directly performing control tasks, humans now often supervise automated software performing control tasks. We have designed an integrated overview display supporting supervisory monitoring and control tasks. This design provides a view of control information integrated over time, supporting a supervisor in following a thread of operational information extending from the past into the future. It associates control activities with the resulting consequences of control on the environment and system configuration. And it provides a uniform way to access and display a heterogeneous set of archived information characterizing system control.
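The abstract describes a display built on control information integrated over time, associating each control activity with its consequences for the environment and system configuration. A minimal sketch of the kind of time-indexed record such a design might rest on (all class and field names here are illustrative assumptions, not the paper's actual implementation):

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import List

@dataclass
class ControlEvent:
    """One control activity linked to its observed consequences."""
    timestamp: datetime
    activity: str                      # e.g. "open valve V2"
    consequences: List[str] = field(default_factory=list)

class OperationsTimeline:
    """Uniform access to heterogeneous archived control information."""
    def __init__(self):
        self.events: List[ControlEvent] = []

    def record(self, event: ControlEvent) -> None:
        # Keep events in time order regardless of arrival order
        self.events.append(event)
        self.events.sort(key=lambda e: e.timestamp)

    def thread(self, start: datetime, end: datetime) -> List[ControlEvent]:
        # Follow the thread of operations across a window of time,
        # which may extend from the archived past into planned future
        return [e for e in self.events if start <= e.timestamp <= end]
```

The key design point the paper emphasizes is the association itself: a supervisor scanning the timeline sees not just what the automation did, but what effect each action had.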
Citations
01 Jan 1998
TL;DR: On-going research at the NASA Ames Research Center and the Johnson Space Center in developing human-centered autonomous systems that can be used for a manned Mars mission are discussed.
Abstract: We expect a variety of autonomous systems, from rovers to life-support systems, to play a critical role in the success of manned Mars missions. The crew and ground support personnel will want to control and be informed by these systems at varying levels of detail depending on the situation. Moreover, these systems will need to operate safely in the presence of people and cooperate with them effectively. We call such autonomous systems human-centered, in contrast with traditional "black-box" autonomous systems. Our goal is to design a framework for human-centered autonomous systems that enables users to interact with these systems at whatever level of control is most appropriate whenever they so choose, while minimizing the necessity for such interaction. This paper discusses on-going research at the NASA Ames Research Center and the Johnson Space Center in developing human-centered autonomous systems that can be used for a manned Mars mission.

153 citations

01 Jan 1999
TL;DR: The autonomous system must be able to move between various time scales and levels of abstraction, presenting the correct level of information to the user at the correct time.
Abstract: Abstraction is also critical to the success of planning and scheduling activities. In our scenarios, the crew will often have to deal with planning and scheduling at a very high level (e.g., what crops do I need to plant now so they can be harvested in six months) and at a detailed level (e.g., what is my next task). The autonomous system must be able to move between various time scales and levels of abstraction, presenting the correct level of information to the user at the correct time. Model-based diagnosis and recovery: When something goes wrong, a robust autonomous system should figure out what went wrong and recover as best it can. A model-based diagnosis and recovery system, such as Livingstone [Williams and Nayak, 96], does this. It is analogous to the autonomic and immune systems of a living creature. If the autonomous system has a model of the system it controls, it can use that model to determine the most likely cause of the observed symptoms, as well as how the system can recover given this diagnosis so its mission can continue. For example, if the pressure of a tank is low, it could be because the tank has a leak, the pump blew a fuse, or a valve is not open to fill the tank or not closed to keep the tank from draining. Alternatively, the tank pressure may not be low at all and the pressure sensor may be defective. By analyzing data from other sensors, the system may determine that the pressure is actually normal, or it may suggest closing a valve, resetting the pump circuit breaker, or asking a crew member to check the tank for a leak.
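The tank-pressure example above can be sketched as a toy consistency-based diagnosis in the spirit of Livingstone: each candidate fault predicts sensor readings, and a candidate survives only if its predictions are consistent with every observation. The fault model and sensor names below are illustrative assumptions, not the actual Livingstone system:

```python
# Each fault maps sensors to the reading it predicts;
# "any" means the fault makes no prediction for that sensor.
MODEL = {
    "tank leak":               {"tank_pressure": "low", "pump_current": "normal"},
    "pump fuse blown":         {"tank_pressure": "low", "pump_current": "zero"},
    "fill valve stuck closed": {"tank_pressure": "low", "pump_current": "normal"},
    "pressure sensor fault":   {"tank_pressure": "any", "pump_current": "normal"},
}

def diagnose(observations: dict) -> list:
    """Return every fault whose predictions are consistent with all observations."""
    candidates = []
    for fault, predictions in MODEL.items():
        consistent = all(
            predicted in ("any", observations.get(sensor))
            for sensor, predicted in predictions.items()
        )
        if consistent:
            candidates.append(fault)
    return candidates
```

A real engine like Livingstone reasons over component modes and transitions rather than a flat fault table, but the core move is the same: use the model to explain symptoms, then pick a recovery consistent with the diagnosis.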

96 citations


Cites background or methods from "Integrated display for supervisory ..."

  • ...A more complete description of this on-going work is available in [Schreckenghost et al., 1998]....


  • ...Additionally, autonomous systems, e.g., automated control of life support systems and robots, can reduce crew workload [Schreckenghost et al., 1998] at the remote site....


  • ...We developed an automated control system for the product gas transfer system using the 3T control architecture [Schreckenghost et al., 98a]....


Journal ArticleDOI
TL;DR: The article details the development approach, the successes and failures of the system, and the lessons learned, and concludes with a summary of spinoff benefits to the AI community and areas of AI research that can be useful for future ALS systems.
Abstract: This article discusses our experience building and running an intelligent control system during a three-year period for a National Aeronautics and Space Administration advanced life support (ALS) system. The system under test was known as the Integrated Water-Recovery System (IWRS). We used the 3T intelligent control architecture to produce software that operated autonomously, 24 hours a day, 7 days a week, for 16 months. The article details our development approach, the successes and failures of the system, and our lessons learned. We conclude with a summary of spinoff benefits to the AI community and areas of AI research that can be useful for future ALS systems.

54 citations


Cites background from "Integrated display for supervisory ..."

  • ...…values in the GUIs were helpful in gaining a quick overview of the system, it would have been more valuable to have information from the upper tiers of 3T, so that the GUI information could integrate high-level data with the observed device performance data (Schreckenghost and Thronesbery, 1998)....


Journal ArticleDOI
TL;DR: This work describes two cases of semiautonomous control software developed and fielded in test environments at the NASA Johnson Space Center and interacted closely with humans for months at a time.
Abstract: Future manned space operations will include a greater use of automation than we currently see. For example, semiautonomous robots and software agents will perform difficult tasks while operating unattended most of the time. As these automated agents become more prevalent, human contact with them will occur more often and become more routine, so designing these automated agents according to the principles of human-centered computing is important. We describe two cases of semiautonomous control software developed and fielded in test environments at the NASA Johnson Space Center. This software operated continuously at the JSC and interacted closely with humans for months at a time.

39 citations

Journal ArticleDOI
01 Sep 1999
TL;DR: This paper develops user interface software for an intelligent robotic control architecture and uses this software to interact with an automated robot during traded control tasks for experimentation and maintenance operations during space exploration.
Abstract: We have developed a user interface that assists a human and a robot in maintaining a shared understanding of situation while performing tasks jointly. This interface is based on the checklist artifact. Just as human teams use such artifacts to coordinate and remind themselves of ongoing activities and to document what was done for later inspection, we propose that humans and robots benefit from jointly manipulating a computer-based checklist during traded control. To evaluate this proposal, we developed user interface software for an intelligent robotic control architecture. We use this software to interact with an automated robot during traded control tasks for experimentation and maintenance operations during space exploration. The results of our evaluation are reported in this paper.
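The checklist artifact described above coordinates a human and a robot during traded control: each step records who performed it and whether the observed effect matched the intent, so mismatches can be highlighted. A minimal sketch under assumed names (the actual interface software is not published in this listing):

```python
class ChecklistStep:
    """One jointly-manipulated checklist step in a traded-control task."""

    def __init__(self, description: str, expected_effect: str):
        self.description = description
        self.expected_effect = expected_effect
        self.performed_by = None       # "human" or "robot", once completed
        self.observed_effect = None

    def complete(self, agent: str, observed_effect: str) -> None:
        # Document who did the step and what actually happened,
        # for later inspection by the team
        self.performed_by = agent
        self.observed_effect = observed_effect

    @property
    def inconsistent(self) -> bool:
        # Highlight a mismatch between the intent of an action
        # and its actual effect
        return (self.observed_effect is not None
                and self.observed_effect != self.expected_effect)
```

This mirrors how paper checklists already work for human teams: the artifact itself carries the shared record of intent, actor, and outcome.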

8 citations


Cites background from "Integrated display for supervisory ..."

  • ...By associating control actions with their environmental effects, the checklist summarizes important events during traded control and highlights inconsistencies between the intent of an action and its actual effects (Schreckenghost and Thronesbery, 1998)....


References
Journal ArticleDOI
TL;DR: This paper describes an implementation of the 3T robot architecture which has been under development for the last eight years and has been implemented on a variety of very different robot systems using different processors, operating systems, effectors and sensor suites.
Abstract: This paper describes an implementation of the 3T robot architecture which has been under development for the last eight years. The architecture uses three levels of abstraction and description languages which are compatible between levels. The makeup of the architecture helps to coordinate planful activities with real-time behaviours for dealing with dynamic environments. In recent years, other architectures have been created with similar attributes but two features distinguish the 3T architecture: (1) a variety of useful software tools have been created to help implement this architecture on multiple real robots; and (2) this architecture, or parts of it, have been implemented on a variety of very different robot systems using different processors, operating systems, effectors and sensor suites.

629 citations

Book ChapterDOI
19 Aug 1995
TL;DR: This paper briefly describes a robot architecture that has been under development for the last eight years and has been implemented on over half a dozen very different robot systems using a variety of processors, operating systems, effectors and sensor suites.
Abstract: This paper briefly describes a robot architecture that has been under development for the last eight years. This architecture uses several levels of abstraction and description languages that are compatible between levels. The makeup of the architecture helps to coordinate planful activities with real-time behaviors for dealing with dynamic environments. In recent years, many architectures have been created with similar attributes. The two features that distinguish this architecture from most of those are: 1) a variety of useful software tools have been created to help implement this architecture on multiple real robots; and 2) this architecture, or parts of it have been implemented on over half a dozen very different robot systems using a variety of processors, operating systems, effectors and sensor suites.

339 citations

01 Aug 1987
TL;DR: The Georgia Tech-Multisatellite Operations Control Center (GT-MSOCC) is a real-time interactive simulation of the operator interface to a NASA ground control system for unmanned Earth-orbiting satellites.
Abstract: Modeling and aiding operators in supervisory control environments are necessary prerequisites to the effective use of automation in complex dynamic systems. A research program is described that explores these issues within the context of the Georgia Tech-Multisatellite Operations Control Center (GT-MSOCC). GT-MSOCC is a real-time interactive simulation of the operator interface to a NASA ground control system for unmanned Earth-orbiting satellites. GT-MSOCC is a high fidelity domain in which a range of modeling, decision aiding, and workstation design issues addressing human-computer interaction may be explored. GT-MSOCC is described in detail. The use of high-fidelity domains as research tools is also discussed, and the validity and generalizability of such results to other domains are examined. In addition to a description of GT-MSOCC, several other related parts are included. A GT-MSOCC operator function model (OFM) is presented. The GT-MSOCC model illustrates an enhancement to the general operator function modeling methodology that extends the model's utility for design applications. The proposed methodology represents operator actions as the lowest level discrete control network nodes. Actions may be cognitive or manual; operator action nodes are linked to information needs or system reconfiguration commands. Thus augmented, the operator function model provides a formal representation of operator interaction with the controlled system and serves as the foundation for subsequent theoretical and empirical research. A brief overview of experimental and methodological studies using GT-MSOCC is also given.

132 citations

Journal ArticleDOI
01 May 1986
TL;DR: The analytic procedures required to build a discrete control model show promise as a basis of a design methodology for the definition of an information display system for supervisory control tasks.
Abstract: Recent advances in computer technology and the changing role of the human in complex systems require changes in design strategies for information displays. The use of discrete control models to represent the human operator's cognitive and decision-making activities is described. The analytic procedures required to build a discrete control model show promise as the basis of a design methodology for defining an information display system for supervisory control tasks. The discrete control modeling procedures and their application to a simulated system are demonstrated.

131 citations

Book ChapterDOI
01 Jan 1997
TL;DR: This chapter concerns how the characteristics of computer-based networks of displays shape the cognition and performance of practitioners and discusses ways in which designers can coordinate different kinds of display frames within a virtual workspace.
Abstract: Publisher Summary The three trends in information visualization attempting to address the problems caused by large networks of displays of raw data are information animation, integrated representations, and coordination of multiple views. Coordination of multiple views is used to create a virtual perceptual field, or workspace, in which practitioners carry out their work activity, and is the theme of this chapter. This chapter concerns how the characteristics of computer-based networks of displays shape the cognition and performance of practitioners. The chapter reviews common findings from field studies that have shown how large networks of displays, available through a limited keyhole, can place new mental burdens on users. It also provides an overview of some of the techniques that designers can use to break down the keyhole and help users focus on the relevant portion of the data field as activities unfold. The work in human-computer interaction concerned with this level of analysis of a computer-based information system often goes under labels such as navigation, browsing, hypertext, dialogue design, or window management. This chapter discusses ways in which designers can coordinate different kinds of display frames within a virtual workspace. The chapter refers to this level of analysis and design as workspace coordination: the integration of the set of displays and classes of views that can be seen together, in parallel or in series, as a function of context.

83 citations


"Integrated display for supervisory ..." refers background in this paper

  • ...The body of research on human interaction with complex systems (Woods and Watts, 1997; Roth, Malin, and Schreckenghost, 1997; Jones and Mitchell, 1995) has influenced our work on displays supporting human supervision of intelligent software....

    [...]
