Proceedings ArticleDOI

A robust compositional architecture for autonomous systems

TL;DR: In this article, the authors describe ongoing work aimed at increasing robustness and predictability of autonomous software, with the ultimate goal of building trust in such systems, and combine state-of-the-art technologies and capabilities in autonomous systems with advanced validation and synthesis techniques.
Abstract: Space exploration applications can benefit greatly from autonomous systems. Great distances, limited communications and high costs make direct operations impossible while mandating operations reliability and efficiency beyond what traditional commanding can provide. Autonomous systems can improve reliability and enhance spacecraft capability significantly. However, there is reluctance to utilize autonomous systems. In part, this is due to general hesitation about new technologies, but a more tangible concern is the reliability and predictability of autonomous software. In this paper, we describe ongoing work aimed at increasing robustness and predictability of autonomous software, with the ultimate goal of building trust in such systems. The work combines state-of-the-art technologies and capabilities in autonomous systems with advanced validation and synthesis techniques. The focus of this paper is on the autonomous system architecture that has been defined, and on how it enables the application of validation techniques for resulting autonomous systems.

Summary (1 min read)

1. INTRODUCTION

  • Space exploration applications offer a unique opportunity for the development and deployment of autonomous systems, due to limited communications, great distances, and high cost of direct operation.
  • The authors describe an ongoing effort to develop a new approach to defining, implementing and maintaining compositional autonomous systems.
  • One key element is a modular compositional autonomy architecture; the other is a testing and validation methodology that allows the certification of new adaptations to be limited to the components and relations that are modified.

2. AN ARCHITECTURE FOR AUTONOMY

  • Autonomous systems vary greatly in the representation and reasoning techniques they utilize.
  • The actions are implemented in the CLARAty framework (Coupled-Layer Architecture for Rover Autonomy), which also provides structured access to system states and sensory information.
  • The Extensible Universal Remote Operations Planning Architecture (EUROPA) is a model-based planning and scheduling architecture descended from the Remote Agent Planner [14].
  • The database provides mechanisms to efficiently query the plan state and modify the plan.
  • Finally, runtime models of devices are incorporated in the Functional Layer.

3. VALIDATION OF AUTONOMOUS SYSTEMS

  • In the autonomy architecture outlined here, an instantiation of an autonomous system consists of the following elements: (1) The core EUROPA planning and decision-making framework, which will largely be unchanged between applications and thus can be validated without reference to the specific instantiation.
  • The properties of the core components are known and validated, so the new components can be validated against these proven assumptions.
  • As outlined above, the authors rely heavily on these compositional verification techniques in their work, both to address computational cost issues, and to enable incremental validation of autonomy system instantiations.
  • In general, static program analyzers aim at checking all execution paths, sometimes at the cost of incompleteness (i.e., impossibility of determining the safety of all operations with exact precision).
  • AutoFilter can generate code with a range of algorithmic characteristics and for several target platforms.

4. CONCLUDING REMARKS

  • The architecture has been defined and an initial version has been implemented.
  • Initial efforts are underway to apply static analysis, compositional verification and automated synthesis to parts and aspects of the autonomy architecture.
  • Static analysis is being applied to modules of the EUROPA framework, providing initial analysis for the current implementation.
  • The results of this work are very promising, having demonstrated the ability to generate small assumptions that in turn enable very fast validation of system properties.
  • To summarize, the effort outlined here has only recently been started, but even at this early point in time, the results are promising.

A Robust Compositional Architecture
for Autonomous Systems
Guillaume Brat, Ewen Denney, Kimberley Farrell, Dimitra Giannakopoulou, Ari Jónsson
Research Institute for Advanced Computer Science
NASA Ames Research Center, Mailstop 269-2
Moffett Field, CA 94035
{brat, edenney, kfarrell, dimitra, jonsson}@email.arc.nasa.gov
Jeremy Frank
NASA Ames Research Center, Mailstop 269-2
Moffett Field, CA 94035
frank@email.arc.nasa.gov
Mark Boddy, Todd Carpenter
Adventium Enterprises
100 Mill Place
111 Third Avenue South
Minneapolis, MN 55401 USA
{mark.boddy, todd.carpenter}@adventiumenterprises.com
Tara Estlin, Mihail Pivtoraiko
Jet Propulsion Laboratory M/S 126-347
4800 Oak Grove Drive
Pasadena, CA 91109, USA
{tara.estlin, mihail.n.pivtoraiko}@jpl.nasa.gov
Abstract
Space exploration applications can benefit
greatly from autonomous systems. Great distances, limited
communications and high costs make direct operations
impossible while mandating operations reliability and
efficiency beyond what traditional commanding can
provide. Autonomous systems can improve reliability and
enhance spacecraft capability significantly. However, there
is reluctance to utilize autonomous systems. In part, this is
due to general hesitation about new technologies, but a
more tangible concern is the reliability and predictability of
autonomous software.
In this paper, we describe ongoing work aimed at increasing
robustness and predictability of autonomous software, with
the ultimate goal of building trust in such systems. The
work combines state-of-the-art technologies and capabilities
in autonomous systems with advanced validation and
synthesis techniques. The focus of this paper is on the
autonomous system architecture that has been defined, and
on how it enables the application of validation techniques
for resulting autonomous systems.
TABLE OF CONTENTS
1. INTRODUCTION
2. AN ARCHITECTURE FOR AUTONOMY
3. VALIDATION OF AUTONOMOUS SYSTEMS
4. CONCLUDING REMARKS
REFERENCES
BIOGRAPHY
1. INTRODUCTION
Space exploration applications offer a unique opportunity
for the development and deployment of autonomous
systems, due to limited communications, great distances,
and high cost of direct operation. At the same time, the
risk and cost of space missions lead to reluctance to take
on new, complex, and difficult-to-understand technology.
Consequently, there is a pressing need to design robust
architectures for autonomous systems and to demonstrate a
design process that can provide the trust
and reliability that is required for manned and unmanned
space applications.
In this paper, we describe an ongoing effort to develop a
new approach to defining, implementing and maintaining
compositional autonomous systems. There are two key
elements to the approach. One is a modular compositional
autonomy architecture where adaptation to different
applications is done in an incremental manner. The other is
a testing and validation methodology that allows the
certification of new adaptations to be limited to the
components and relations that are modified. Together, the
two elements will make future autonomy applications more
easily constructed and modified, while increasing reliability
and reducing cost of reconfiguration and maintenance.
The work will be grounded in a specific autonomy
architecture that integrates the EUROPA planning
framework and the functional layer of the CLARAty control
architecture. EUROPA supports incremental compositional
specification of the states, commands and associated
operations rules that define how it may control a given
system. CLARAty provides a compositional approach to
defining the functional control software that interfaces with
the underlying system.
Compositional verification techniques will be used to limit
the efforts required to validate and certify a new adaptation.
These methods use known properties of unchanged
modules to limit validation and certification efforts to
the changes made. The validation of core system and
individual component properties is done with both formal
and empirical analysis.
Our approach will enable the increased use of autonomy in
future space explorations, thus reducing operations costs
and increasing reliability. In addition, the methodology of
composable components and associated incremental testing
and verification will reduce the cost of system
development, maintenance, and reconfiguration.
2. AN ARCHITECTURE FOR AUTONOMY
Autonomous systems vary greatly in the representation and
reasoning techniques they utilize. Furthermore,
the interface between autonomous control and underlying
systems can be radically different between architectures.
Both of these aspects impact the application of validation
techniques to autonomous system instantiations.
Consequently, we define and use a general autonomous system
architecture built on specific representation and reasoning
approaches, combined with a structured, well-defined
interface to the underlying system. While the
architecture provides a basis for defining validation
processes and techniques, many of the general notions of
how to validate autonomous systems will be applicable to
other architectures.
Our architecture uses a constraint-based planning
framework called EUROPA (Extensible Universal Remote
Operations Planning Architecture) for the core
representation and reasoning. This provides the ability to
make decisions about what actions to take so as to achieve
mission goals, while ensuring that flight rules and
constraints are satisfied. The actions are implemented in the
CLARAty framework (Coupled-Layer Architecture for
Rover Autonomy), which also provides structured access to
system states and sensory information. The architecture is
shown in Figure 1.
The decision-making component, implemented in
EUROPA, uses the domain model to generate safe plans
and decisions that respect flight rules and other constraints
on operations. The domain model is a declarative
specification of actions and states implemented in the
functional layer, along with rules on how these actions can
be used. The domain model is compositional, meaning that
new actions and rules can be added without changing
existing content. The executive, implemented in an
execution framework called PLEXIL, executes the plans
and actions specified by the decision layer. It also monitors
the execution, ensuring that the constraints and assumptions
in the given plan are satisfied in the system during
execution. When deviations occur, the executive can either
recover or call on the decision layer to decide how to
proceed. The functional layer is part of the CLARAty
framework. It is a set of functional components, arranged in
a hierarchy where higher-level components utilize
capabilities and services of lower-level components. The
functional layer thus provides a compositional approach to
implementing interfaces to system functions.
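To make the division of labor concrete, here is a minimal Python sketch of the flow just described: the decision layer produces a plan whose steps carry assumptions, the executive runs the plan while monitoring those assumptions against system state, and the functional layer carries out commands. All class and method names are illustrative placeholders; they are not the actual EUROPA, PLEXIL, or CLARAty interfaces.

```python
# Schematic sketch only; names and behavior are hypothetical.
class FunctionalLayer:                          # CLARAty role: access to the system
    def state(self):
        return {"battery": 0.8}                 # structured access to system state

    def execute(self, command):
        print("executing", command)

class DecisionLayer:                            # EUROPA role: generate safe plans
    def plan(self, goals, state):
        # a "plan" here is a list of (command, assumption-predicate) pairs
        return [(goal, lambda s: s["battery"] > 0.2) for goal in goals]

class Executive:                                # PLEXIL role: execute and monitor
    def __init__(self, decision, functional):
        self.decision, self.functional = decision, functional

    def run(self, goals):
        plan = self.decision.plan(goals, self.functional.state())
        for command, assumption in plan:
            if not assumption(self.functional.state()):
                return self.run(goals)          # deviation: replan via the decision layer
            self.functional.execute(command)

Executive(DecisionLayer(), FunctionalLayer()).run(["deploy_mast", "drive_to_rock"])
```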
The verification and validation methods are applied to all
levels of the architecture. The details are described below
in the section on validation. The automated
synthesis techniques are then used to generate, from high-
level specifications, both CLARAty functional layer
components and the associated specifications in the domain
model. This approach, combined with the compositional
nature of the overall architecture, enables rapid adaptation
and reconfiguration of the autonomy system.

Figure 1: Autonomy architecture outline. The Decision Layer (EUROPA) and the Executive (PLEXIL) sit above the Functional Layer (CLARAty) and its system interfaces; a shared domain model feeds the decision and execution layers, V&V (static analysis, compositional verification, synthesis) is applied across the layers, and an interface to users/operations sits at the top.
EUROPA – constraint-based planning
The Extensible Universal Remote Operations Planning
Architecture (EUROPA) is a model-based planning and
scheduling architecture descended from the Remote Agent
Planner [14] . Users of EUROPA can specify the rules of
planning domains using a rich domain description language
that supports time, resources, disjunctive preconditions and
conditional effects. EUROPA makes extensive use of
constraint based representation and reasoning, which allows
for more concise representation of planning models, and
more efficient reasoning during planning [11] . EUROPA
provides support for “foreign function” calls implementing
complex constraints such as power consumption and
generation.
EUROPA consists of a hierarchy of highly configurable
components, supporting the building of many types of planners
and plan representations. The modeling language, NDDL,
contains a small number of elementary entity types,
providing ease in modeling. These types can be extended to
provide more specialized components, leading to a rich set
of modeling primitives. The plan database contains the
current plan and information about its state. The database
provides mechanisms to efficiently query the plan state and
modify the plan. Modification leads to inference, which is
performed by a rules engine module and a constraint
reasoning engine module. The rules engine determines
which rules in the domain description apply after each
modification of the plan, and updates the state accordingly.
The constraint reasoning module is further broken down
into specialized modules that efficiently handle particular
classes of constraints, such as temporal constraints. Finally,
EUROPA provides interfaces to specialized heuristics
modules that provide search control to planners.
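As a concrete illustration of the plan-database behavior described above, the following Python sketch keeps tokens on timelines, fires rule callbacks on every modification, and answers a simple consistency query. The classes, rule, and token names are invented for this sketch and do not reflect the EUROPA implementation or its NDDL models.

```python
# Toy plan database: tokens, a rules-engine hook, and a simple constraint check.
from dataclasses import dataclass, field

@dataclass
class Token:
    timeline: str
    predicate: str
    start: int
    end: int

@dataclass
class PlanDatabase:
    tokens: list = field(default_factory=list)
    rules: list = field(default_factory=list)        # callables fired on each change

    def add_token(self, token):
        self.tokens.append(token)
        for rule in self.rules:                       # inference after each modification
            rule(self, token)

    def query(self, timeline):
        return [t for t in self.tokens if t.timeline == timeline]

    def consistent(self):
        # toy constraint reasoning: tokens on one timeline must not overlap
        for timeline in {t.timeline for t in self.tokens}:
            ts = sorted(self.query(timeline), key=lambda t: t.start)
            if any(a.end > b.start for a, b in zip(ts, ts[1:])):
                return False
        return True

# Example rule: every 'drive' token requires a preceding 'warm_up' token.
def require_warm_up(db, token):
    if token.predicate == "drive":
        # appended directly so the rule does not re-trigger itself
        db.tokens.append(Token("rover", "warm_up", token.start - 2, token.start))

db = PlanDatabase(rules=[require_warm_up])
db.add_token(Token("rover", "drive", 10, 20))
print(db.query("rover"), db.consistent())
```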
EUROPA can also be customized to support both long-
range deliberative planners as well as short-horizon
continuous planners that may operate on the same model.
This approach partially resolves problems due to building
multiple models in different languages for the same
autonomy system (e.g. [12] ). For example, one planner may
have a time horizon limited to 5 minutes into the future, and
can delay subgoals. Another planner may only plan
activities for a hazard avoidance system, leaving other goals
to other planners. EUROPA supports customizations of this
form by limiting a planner's “view” to a subset of the
model. EUROPA also allows multiple planners to modify
the same plan concurrently, by providing authority
mechanisms that indicate what each planner may modify.
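Building on the toy PlanDatabase and Token classes from the sketch above, the fragment below illustrates, in the same hypothetical terms, what a restricted "view" with an authority check could look like; it is an analogy only, not EUROPA's actual mechanism.

```python
# Reuses PlanDatabase, Token, and db from the previous sketch.
class PlannerView:
    def __init__(self, db, visible_timelines, authority):
        self.db = db
        self.visible = set(visible_timelines)    # timelines this planner can see
        self.authority = set(authority)          # timelines it is allowed to modify

    def query(self, timeline):
        if timeline not in self.visible:
            return []                            # hidden from this planner
        return self.db.query(timeline)

    def add_token(self, token):
        if token.timeline not in self.authority:
            raise PermissionError(f"no authority over {token.timeline}")
        self.db.add_token(token)

# A short-horizon hazard-avoidance planner sees and controls only 'hazard',
# while a long-range planner retains authority over the 'rover' timeline.
hazard_view = PlannerView(db, visible_timelines={"hazard"}, authority={"hazard"})
print(hazard_view.query("rover"))                # [] - outside this planner's view
hazard_view.add_token(Token("hazard", "stop_and_assess", 12, 14))
```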
Automated planning technology such as EUROPA has been
utilized as part of on-board autonomy architectures for deep
space probes [14] , robotic rovers [16] and free-flying
robots [17] .
CLARAty – layered architecture for robotics
Most robotic control systems employ a variant of the Three-
Layer Architecture pioneered by Brooks in 1987.
CLARAty is an evolution of the three-layer architecture that
provides a wide range of robotic functionality and
simplifies the integration of new technologies on robotic
platforms. CLARAty is a joint project between the NASA
Jet Propulsion Laboratory, NASA Ames Research Center,
Carnegie Mellon University and a number of other
universities and has been designed specifically for space-
based robotic control applications. CLARAty features a
Functional Layer of robotic primitives, coupled with a
Decision Layer of planning and execution functionality;
each of these layers contains a hierarchy of components
ranging from the most elementary to the most “intelligent”.
The Functional Layer (FL) provides a set of standard,
generic robot capabilities that interface to system hardware.
These capabilities are organized as a software class
hierarchy of robotic components; for example, wheeled-
mobility is a subclass of mobility, and individual rover
wheel assemblies are child classes. As is natural in object-
oriented systems, the interface is separated from
implementation. Physical limitations of devices are
distinguished from algorithmic limitations. Finally, runtime
models of devices are incorporated in the Functional Layer.
Figure 2: CLARAty framework organization
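The class-hierarchy idea can be pictured with a short Python sketch: a generic capability as an abstract interface, a specialization built from lower-level components, and a physical device limit kept apart from the algorithmic choice. The classes below are hypothetical stand-ins, not CLARAty's actual C++ interfaces.

```python
# Illustrative hierarchy only; not the CLARAty Functional Layer API.
from abc import ABC, abstractmethod

class Mobility(ABC):                        # generic capability: interface only
    @abstractmethod
    def move_to(self, x, y): ...

class WheelAssembly:                        # low-level component with a physical limit
    MAX_WHEEL_SPEED = 0.05                  # m/s: a device property, not an algorithm

    def set_speed(self, speed):
        self.speed = min(speed, self.MAX_WHEEL_SPEED)

class WheeledMobility(Mobility):            # specialization built on lower components
    def __init__(self, wheels):
        self.wheels = wheels

    def move_to(self, x, y):
        for wheel in self.wheels:           # algorithmic choice: uniform wheel speeds
            wheel.set_speed(0.04)

rover_mobility = WheeledMobility([WheelAssembly() for _ in range(6)])
rover_mobility.move_to(3.0, 4.0)
```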
3. VALIDATION OF AUTONOMOUS SYSTEMS
In the autonomy architecture outlined here, an instantiation
of an autonomous system consists of the following
elements:
(1) The core EUROPA planning and decision-making
framework, which will largely be unchanged between
applications and thus can be validated without
reference to the specific instantiation.
(2) The CLARAty instantiation for the system in question,
which consists of a set of core CLARAty components
and the specific components used to operate the
system. The core CLARAty components can be
validated separately, while the specific components are
validated as part of the instantiation process. Some
components may be synthesized, which offers an
opportunity for easier validation of those components.
(3) The execution system that links CLARAty and
EUROPA and provides monitoring capabilities to
ensure that execution does not continue when the
assumptions supporting the plan no longer hold. The
core execution system is validated once, but online
validation and checking techniques can be used to
validate specific executable plans.
(4) The domain model describing possible actions and the
flight rules governing those actions and the related
system states. Model validation is a key element of
ensuring that the autonomy system instantiation is
robust and safe.
(5) The properties that should hold for the system and
various components. These define the criteria for
validation of the system.
The validation of an instantiation thus involves validating
core architecture systems, instantiation-specific CLARAty
components, the domain model used by the planner and
executive, and finally, the overall properties for the
instantiated system. To tackle this, we apply three kinds of
techniques. Model-based validation, using compositional
verification, can be applied to core software as well as
special-purpose components. In addition, compositional
techniques allow us to verify system-level properties from
component properties. Static analysis is a powerful
technique to directly analyze software code, without
requiring a formal modeling of the software components
and properties. Finally, automated synthesis techniques
allow us to generate instance-specific elements from high-
level specifications. In addition to simplifying the process
of implementing instantiations, synthesis offers an
additional level of validation by generating provably safe
code.
Compositional verification
Model-based verification techniques use exhaustive search
through possible execution trajectories to verify desired
system properties. While these techniques can provide the
formal validation desired for our autonomous systems, they
suffer from state-space explosion. As a consequence, they
are typically used to verify relatively small components of
an entire system, rather than the system itself. In addition,
they do not lend themselves to the incremental validation we
desire for instantiations of autonomous systems.
To address these issues, we turn to compositional
verification techniques. Compositional verification
decomposes the properties of a system into properties of its
components, so that if each component satisfies its
respective property, then so does the entire system.
Components are thus model checked separately. Assume-
guarantee reasoning is a promising compositional
verification approach; the basic idea is as
follows. Consider a system consisting of two components X
and Y. The desire is to prove that a property P is satisfied
by the overall system X|Y. In compositional verification,
this is done by identifying an intermediate property A,
called an assumption, and using that to split the validation
into two smaller problems. The first part is to prove that
given A, X satisfies P, and the second part is to prove that Y
satisfies (or guarantees) A.
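The rule can be illustrated with a deliberately simplified Python sketch in which components and properties are finite sets of traces over one shared alphabet, and composition is trace intersection; under that simplification the two premises immediately imply the conclusion. The traces, assumption, and property below are invented for the example; real assume-guarantee tools work on labeled transition systems and automata rather than explicit trace sets.

```python
# Toy assume-guarantee check over explicit finite trace sets (not real tooling).
def satisfies(traces, prop):
    """True if every trace satisfies the property predicate."""
    return all(prop(t) for t in traces)

def assume_guarantee_check(X, Y, A, P):
    premise1 = satisfies(X & A, P)     # under assumption A, X satisfies P
    premise2 = Y <= A                  # Y guarantees A
    return premise1 and premise2       # then X|Y (here: X & Y) satisfies P

# Property P: 'drive' never occurs before 'deploy'.
def P(trace):
    return 'deploy' in trace[:trace.index('drive')] if 'drive' in trace else True

# Hypothetical component behaviors as sets of traces (tuples of actions).
X = {('deploy', 'drive'), ('idle', 'drive')}
Y = {('deploy', 'drive'), ('deploy', 'idle')}
A = {t for t in X | Y if t and t[0] == 'deploy'}   # assumption: starts with 'deploy'

if assume_guarantee_check(X, Y, A, P):
    assert satisfies(X & Y, P)                     # the conclusion indeed holds
    print("X|Y satisfies P via assumption A")
```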
This notion can be utilized in different ways. If the X and
Y components are already validated, it is likely that the
assumption A is already known and the compositional
verification techniques can be applied directly. This is
likely to be the case in situations where new
CLARAty components are added on top of existing
ones. The properties of the core components are known and
validated, so the new components can be validated against
these proven assumptions.
A more interesting case is when the assumption A is not
known. To address that, we turn to techniques for
automatically generating such assumptions from the
components X and Y and the desired property P. Recently
developed techniques make this possible, and allow both the
automatic construction of a weakest valid assumption for a
given component [19] , and of an assumption generated in
an iterative fashion through the use of an automata learning
algorithm [20] . The assumptions generated by [20] do not
need to be the weakest. In fact, the iterative framework
converges to the weakest assumption, but may terminate
early if it finds an assumption that is sufficient to prove that
X|Y satisfies P. The framework also guarantees that the
assumptions that it generates have at most as many states as
the weakest assumption for X.
As outlined above, we rely heavily on these compositional
verification techniques in our work, both to address
computational cost issues, and to enable incremental
validation of autonomy system instantiations.
Static analysis
The goal of static program analysis is to assess properties of
a program without executing the program. Several
techniques can be used to perform static analysis. Theorem
proving, data flow analysis, constraint solving, and abstract
interpretation are among the most popular. Generally
speaking, a static program analyzer infers properties about
the execution of the program from its text (the source code)
and a formal specification of the semantics of the language
(which is typically built into the analyzer). Static program
analyzers are in general excellent for detecting runtime
errors.
Runtime errors are errors that cause exceptions at runtime.
Typically, in C, either they result in creating a core dump or
they cause data corruption that may cause crashes. The main
classes of runtime errors are accesses to uninitialized
variables, accesses to uninitialized pointers, out-of-bound
array accesses, arithmetic underflow/overflow, invalid
arithmetic operations, non-terminating loops, and non-
terminating calls.
In general, static program analyzers aim to check all
execution paths, sometimes at the cost of incompleteness
(i.e., the impossibility of determining the safety of all
operations with exact precision). In other words, the
analyzer can raise false alarms on some operations that are
actually safe. However, if the analyzer deems an operation
safe, then errors cannot occur on any execution path. The
program analyzer can also detect certain runtime errors
which occur every time the execution reaches some point in
the program.
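The following small Python sketch illustrates this behavior with a toy interval domain: each variable is over-approximated by a range, so some accesses are proven safe on every path, while the loss of relations between variables can produce a false alarm. The program being "analyzed" is hypothetical and hard-coded; this is not a model of any particular analyzer.

```python
# Toy interval-domain analysis: sound but imprecise.
class Interval:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
    def add(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)
    def neg(self):
        return Interval(-self.hi, -self.lo)

def check_index(idx, length):
    # "proven safe" means no execution path can go out of bounds
    return "proven safe" if 0 <= idx.lo and idx.hi < length else "potential error"

i = Interval(0, 3)                  # input: 0 <= i <= 3
j = i                               # j = i
k = Interval(3, 3).add(i.neg())     # k = 3 - i, abstractly 0..3
index = j.add(k)                    # concretely always 3, abstractly 0..6

print(check_index(i, 4))            # proven safe: arr[i] with an array of length 4
print(check_index(index, 4))        # potential error: a false alarm, since the
                                    # analysis forgot that j + k is always 3
```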
Traditionally, there are two complementary uses of a
program analyzer:
(1) as a debugger that detects runtime errors statically
without executing the program, and
(2) as a preprocessor that reduces the number of
potentially dangerous operations that have to be
checked by a traditional validation process (code
reviewing, test writing, and so on).
The first use is akin to traditional debugging; the developer
tries to flush as many bugs as possible from the code
before it gets to verification. The second use is called
certification; the goal is to prove the absence of errors of a
certain class, thus, alleviating the need for testing for this
class of errors. This requires that the static analyzer
achieve good selectivity (the percentage of operations
that it proves to be safe).
Indeed, if 50% of all operations in the program are marked
as potentially dangerous by the analyzer, there are no
benefits to using such techniques.
Automated synthesis
The overall aim of our work is to be able to reliably
reconfigure components in the functional layer of the
autonomy architecture. The previous sections have
described V&V techniques that are able to verify that
components are free of certain classes of bugs. Another approach is to
generate the components in an inherently trustworthy
manner. We are developing the use of automated code
generation (also known as program synthesis) for this.
Control software is particularly appropriate for code
generation since it can be modeled concisely at a high level,
while the code which implements it tends to be idiomatic.
A code generator takes as input a domain-specific high-
level description of a task (e.g., a set of differential
equations) and produces optimized and documented low-
level code (e.g., C or C++) that is based on algorithms
appropriate for the task (e.g., the extended Kalman filter).
This automation increases developer productivity and, in
principle, prevents the introduction of coding errors.
AutoFilter [18] is a domain-specific program synthesis
system that generates customized Kalman filters for state
estimation tasks specified in a high-level notation.
AutoFilter's specification language uses differential
equations for the process and measurement models and
statistical distributions to describe the noise characteristics.
It can generate code with a range of algorithmic
characteristics and for several target platforms. The tool has
been designed with reliability of the generated code in mind
and is able to automatically certify that the code it generates
is free from various error classes (most are programming
errors, while some address functional concerns) using
automated theorem proving. Since documentation is an
important part of software assurance, AutoFilter can also
automatically generate various human-readable documents,
containing both design and safety-related information.
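As a toy illustration of the idea (and only that), the Python sketch below turns a small declarative specification into documented C source for a one-dimensional estimator. The specification format, template, and generated function are invented for this example and bear no relation to AutoFilter's actual specification language or output.

```python
# Hypothetical specification-driven generator emitting documented C code.
SPEC = {
    "name": "bias_estimator",
    "state": ["bias"],
    "process_noise": 0.01,
    "measurement_noise": 0.5,
}

TEMPLATE = """\
/* {name}: generated from a high-level specification.
 * State: {state}; process noise q = {q}; measurement noise r = {r}.
 */
double {name}_update(double *variance, double estimate, double measurement) {{
    *variance += {q};                              /* predict */
    double gain = *variance / (*variance + {r});   /* Kalman gain */
    estimate += gain * (measurement - estimate);   /* correct the state */
    *variance *= (1.0 - gain);                     /* correct the covariance */
    return estimate;
}}
"""

def generate(spec):
    return TEMPLATE.format(name=spec["name"], state=", ".join(spec["state"]),
                           q=spec["process_noise"], r=spec["measurement_noise"])

print(generate(SPEC))
```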
4. CONCLUDING REMARKS
The work described in this paper is an ongoing effort. The
architecture has been defined and an initial version has been
implemented. The process for validating new instantiations
has been defined, and initial efforts are underway to apply
static analysis, compositional verification and automated
synthesis to parts and aspects of the autonomy architecture.

Citations
01 Aug 2006
TL;DR: In this paper, the authors present the findings of a NASA-led capabilities assessment of Uninhabited Aerial Vehicles (UAVs) for civil (defined as non-DoD) use in Earth observations.
Abstract: This three-volume document, based on the draft document located on the website given on page 6, presents the findings of a NASA-led capabilities assessment of Uninhabited Aerial Vehicles (UAVs) for civil (defined as non-DoD) use in Earth observations. Volume 1 is the report that presents the overall assessment and summarizes the data. The second volume contains the appendices and references to address the technologies and capabilities required for viable UAV missions. The third volume is the living portion of this effort and contains the outputs from each of the Technology Working Groups (TWGs) along with the reviews conducted by the Universities Space Research Association (USRA). The focus of this report, intended to complement the Office of the Secretary of Defense UAV Roadmap, is four-fold: 1) To determine and document desired future Earth observation missions for all UAVs based on user-defined needs; 2) To determine and document the technologies necessary to support those missions; 3) To discuss the present state of the art platform capabilities and required technologies, including identifying those in progress, those planned, and those for which no current plans exist; 4) Provide the foundations for development of a comprehensive civil UAV roadmap. It is expected that the content of this report will be updated periodically and used to assess the feasibility of future missions. In addition, this report will provide the foundation to help influence funding decisions to develop those technologies that are considered enabling or necessary but are not contained within approved funding plans. This document is written such that each section will be supported by an Appendix that will give the reader a more detailed discussion of that section's topical materials.

25 citations

Journal ArticleDOI
TL;DR: In this paper, assume-guarantee testing is proposed to establish key properties of a component-based system before component assembly, so that the cost of fixing errors is smaller.
Abstract: Integration issues of component-based systems tend to be targeted at the later phases of the software development, mostly after components have been assembled to form an executable system. However, errors discovered at these phases are typically hard to localise and expensive to fix. To address this problem, the authors introduce assume-guarantee testing, a technique that establishes key properties of a component-based system before component assembly, when the cost of fixing errors is smaller. Assume-guarantee testing is based on the (automated) decomposition of system-level requirements into local component requirements at design time. The local requirements are in the form of assumptions and guarantees that each component makes on, or provides to the system, respectively. Checking requirements is performed during testing of individual components (i.e. unit testing) and it may uncover system-level violations prior to system testing. Furthermore, assume-guarantee testing may detect such violations with a higher probability than traditional testing. The authors also discuss an alternative technique, namely predictive testing, that uses the local component assumptions and guarantees to test assembled systems: given a non-violating system run, this technique can predict violations by alternative system runs without constructing those runs.The authors demonstrate the proposed approach and its benefits by means of two NASA case studies: a safety-critical protocol for autonomous rendez-vous and docking and the executive subsystem of the planetary rover controller K9.

20 citations


Cites methods from "A robust compositional architecture..."

  • ...In the context of a NASA project on verification and validation for autonomous systems [16], we have created models of a protocol for ARD at different levels of abstraction....


Proceedings ArticleDOI
01 Mar 2008
TL;DR: In this article, the authors apply advanced formal verification techniques, such as model checking, to plans and procedures expressed in semantically well-defined languages such as PRL and PLEXIL.
Abstract: Procedures and plans are used across NASA missions. For example, astronaut activities on the International Space Station are regulated by procedures which are uploaded from the ground. It is critical that these procedures are verified and validated before being executed by astronauts. This paper describes how we are applying advanced formal verification techniques, such as model checking, to plans and procedures expressed in semantically well-defined languages such as PRL and PLEXIL.

17 citations


Cites methods from "A robust compositional architecture..."

  • ...This work stems from the work done on the verification of autonomous systems presented in 2006 [9,10]....


Proceedings ArticleDOI
07 Mar 2009
TL;DR: In this article, the authors propose to use active agents to direct their own movement, schedule, and operation in space systems, while we may still determine the larger mission goals and priorities, but the systems themselves will be better able to direct themselves.
Abstract: Autonomous capability in space systems is rapidly becoming a necessity for continued research and exploration. While these systems have traditionally behaved as passive observers, their remoteness and unique access to unexplored environments will likely result in future systems that behave more like active agents employed on our behalf. We may still determine the larger mission goals and priorities, but the systems themselves will be better able to direct their own movement, schedule, and operation.

15 citations


Cites background from "A robust compositional architecture..."

  • ...Almost all space systems require some level of autonomy as a result of these and other issues [4], [8], [13], [14]....


  • ...But the long round-trip communication times— 16 minutes at 1 AU or presently 29 hours for Voyager 1—are making that control increasingly difficult or even impossible [1], [8], [9]....


Proceedings ArticleDOI
27 Jun 2018
TL;DR: A receding horizon scheme in which the underlying optimal control problem is solved in a fast and robust manner through the use of a novel dynamic model and an unconstrained optimization problem formulation for generating automated initial guesses to yield the optimal spacecraft trajectory over the planning horizon.
Abstract: In this paper, we consider the problem of trajectory control of a spacecraft enabled with solar electric propulsion in the presence of uncertainties. We propose a receding horizon scheme in which the underlying optimal control problem (non-linear, multi-phase) is solved in a fast and robust manner through the use of a novel dynamic model and an unconstrained optimization problem formulation for generating automated initial guesses. The generated guess trajectory is utilized to rapidly solve a constrained parameter optimization problem and yield the optimal spacecraft trajectory over the planning horizon. We demonstrate the performance of the algorithm through numerical simulations for low-thrust maneuver about celestial bodies with strong and weak gravitational fields. The simulations incorporate uncertainties due to unmodeled dynamics, specifically the oblateness of the celestial body and the presence of third body perturbation.

3 citations


Cites background from "A robust compositional architecture..."

  • ...Since then, a number of NASA missions have incorporated autonomy: autonomous navigation onboard the Mars exploration rovers (MER) [2], autonomous software on Earth Observing One (EO-1) mission [3], and autonomous science software onboard the Mars Science Laboratory (MSL) [4]....


References
Journal ArticleDOI
04 Jun 1990
TL;DR: In this paper, a model-checking algorithm for mu-calculus formulas which uses R.E. Bryant's (1986) binary decision diagrams to represent relations and formulas symbolically is described.
Abstract: A general method that represents the state space symbolically instead of explicitly is described. The generality of the method comes from using a dialect of the mu-calculus as the primary specification language. A model-checking algorithm for mu-calculus formulas which uses R.E. Bryant's (1986) binary decision diagrams to represent relations and formulas symbolically is described. It is then shown how the novel mu-calculus model checking algorithm can be used to derive efficient decision procedures for CTL model checking, satisfiability of linear-time temporal logic formulas, strong and weak observational equivalence of finite transition systems, and language containment of finite omega -automata. This eliminates the need to describe complicated graph-traversal or nested fixed-point computations for each decision procedure. The authors illustrate the practicality of their approach to symbolic model checking by discussing how it can be used to verify a simple synchronous pipeline. >

2,698 citations

Journal ArticleDOI
TL;DR: The Remote Agent is described, a specific autonomous agent architecture based on the principles of model-based programming, on-board deduction and search, and goal-directed closed-loop commanding, that takes a significant step toward enabling this future of space exploration.

727 citations

Book ChapterDOI
07 Apr 2003
TL;DR: This paper presents a novel framework for performing assume-guarantee reasoning in an incremental and fully automated fashion and has implemented this approach in the LTSA tool and applied it to a NASA system.
Abstract: Compositional verification is a promising approach to addressing the state explosion problem associated with model checking. One compositional technique advocates proving properties of a system by checking properties of its components in an assume-guarantee style. However, the application of this technique is difficult because it involves non-trivial human input. This paper presents a novel framework for performing assume-guarantee reasoning in an incremental and fully automated fashion. To check a component against a property, our approach generates assumptions that the environment needs to satisfy for the property to hold. These assumptions are then discharged on the rest of the system. Assumptions are computed by a learning algorithm. They are initially approximate, but become gradually more precise by means of counterexamples obtained by model checking the component and its environment, alternately. This iterative process may at any stage conclude that the property is either true or false in the system. We have implemented our approach in the LTSA tool and applied it to a NASA system.

440 citations


"A robust compositional architecture..." refers methods in this paper

  • ...To address that, we turn to techniques for automatically generating such assumptions from the components X and Y and the desired property P. Recently developed techniques make this possible, and allow both the automatic construction of a weakest valid assumption for a given component [19] , and of an assumption generated in an iterative fashion through the use of an automata learning algorithm [20] . The assumptions generated by [ 20 ] do ......


  • ...To address that, we turn to techniques for automatically generating such assumptions from the components X and Y and the desired property P. Recently developed techniques make this possible, and allow both the automatic construction of a weakest valid assumption for a given component [19] , and of an assumption generated in an iterative fashion through the use of an automata learning algorithm [ 20 ] . The assumptions generated by [20] do ......


Proceedings Article
14 Apr 2000
TL;DR: This paper describes the RAX Planner/Scheduler (RAX-PS), both in terms of the underlying planning framework and the fielded planner, as a system capable of building concurrent plans with over a hundred tasks within the performance requirements of operational, mission-critical software.
Abstract: On May 17th 1999, NASA activated for the first time an AI-based planner/scheduler running on the flight processor of a spacecraft. This was part of the Remote Agent Experiment (RAX), a demonstration of closed-loop planning and execution, and model-based state inference and failure recovery. This paper describes the RAX Planner/Scheduler (RAX-PS), both in terms of the underlying planning framework and in terms of the fielded planner. RAX-PS plans are networks of constraints, built incrementally by consulting a model of the dynamics of the spacecraft. The RAX-PS planning procedure is formally well defined and can be proved to be complete. RAX-PS generates plans that are temporally flexible, allowing the execution system to adjust to actual plan execution conditions without breaking the plan. The practical aspect, developing a mission critical application, required paying attention to important engineering issues such as the design of methods for programmable search control, knowledge acquisition and planner validation. The result was a system capable of building concurrent plans with over a hundred tasks within the performance requirements of operational, mission-critical software.

324 citations


"A robust compositional architecture..." refers methods in this paper

  • ...EUROPA – constraint-based planning The Extensible Universal Remote Operations Planning Architecture (EUROPA) is a model-based planning and scheduling architecture descended from the Remote Agent Planner [14] ....


  • ...Automated planning technology such as EUROPA has been utilized as part of on-board autonomy architectures for deep space probes [14] , robotic rovers [16] and free-flying robots [17] ....


  • ...The Extensible Universal Remote Operations Planning Architecture (EUROPA) is a model-based planning and scheduling architecture descended from the Remote Agent Planner [14] ....


Journal ArticleDOI
TL;DR: This paper provides a theoretical foundation for the CAIP paradigm, a paradigm for representing and reasoning about plans, and shows how the plans are naturally expressed by networks of constraints, and that the process of planning maps directly to dynamic constraint reasoning.
Abstract: In this paper we describe Constraint-based Attribute and Interval Planning (CAIP), a paradigm for representing and reasoning about plans. The paradigm enables the description of planning domains with time, resources, concurrent activities, mutual exclusions among sets of activities, disjunctive preconditions and conditional effects. We provide a theoretical foundation for the paradigm, based on temporal intervals and attributes. We show how the plans are naturally expressed by networks of constraints, and show that the process of planning maps directly to dynamic constraint reasoning. We describe compatibilities, a compact mechanism for describing planning domains. We also demonstrate how this framework incorporates the use of constraint representation and reasoning technology to improve planning. Finally, we describe EUROPA, an implementation of the CAIP framework.

253 citations


Additional excerpts

  • ...EUROPA makes extensive use of constraint based representation and reasoning, which allows for more concise representation of planning models, and more efficient reasoning during planning [11] ....


Frequently Asked Questions (1)
Q1. What have the authors contributed in "A robust compositional architecture for autonomous systems" ?

In this paper, the authors describe ongoing work aimed at increasing robustness and predictability of autonomous software, with the ultimate goal of building trust in such systems. The focus of this paper is on the autonomous system architecture that has been defined, and on how it enables the application of validation techniques for resulting autonomous systems.