
Showing papers by "Brian C. Williams published in 2016"


Proceedings Article
12 Feb 2016
TL;DR: The future will see autonomous machines, such as self-driving cars, companion robots, and medical diagnosis support systems, acting in the same environment as humans in areas as diverse as driving, assistive technology, and health care.
Abstract: The future will see autonomous machines acting in the same environment as humans, in areas as diverse as driving, assistive technology, and health care. Think of self-driving cars, companion robots, and medical diagnosis support systems. We also believe that humans and machines will often need to work together and agree on common decisions. Thus hybrid collective decision making systems will be in great need. In this scenario, both machines and collective decision making systems should follow some form of moral values and ethical principles (appropriate to where they will act but always aligned to humans'), as well as safety constraints. In fact, humans would accept and trust more machines that behave as ethically as other humans in the same environment. Also, these principles would make it easier for machines to determine their actions and explain their behavior in terms understandable by humans. Moreover, often machines and humans will need to make decisions together, either through consensus or by reaching a compromise. This would be facilitated by shared moral values and ethical principles.

67 citations


Proceedings Article
12 Feb 2016
TL;DR: This work presents RAO*, a heuristic forward search algorithm producing optimal, deterministic, finite-horizon policies for CC-POMDPs, and demonstrates the usefulness of RAO* in two challenging domains of practical interest: power supply restoration and autonomous science agents.
Abstract: Autonomous agents operating in partially observable stochastic environments often face the problem of optimizing expected performance while bounding the risk of violating safety constraints. Such problems can be modeled as chance-constrained POMDPs (CC-POMDPs). Our first contribution is a systematic derivation of execution risk in POMDP domains, which improves upon how chance constraints are handled in the constrained POMDP literature. Second, we present RAO*, a heuristic forward search algorithm producing optimal, deterministic, finite-horizon policies for CC-POMDPs. In addition to the utility heuristic, RAO* leverages an admissible execution risk heuristic to quickly detect and prune overly-risky policy branches. Third, we demonstrate the usefulness of RAO* in two challenging domains of practical interest: power supply restoration and autonomous science agents.

65 citations
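The branch-pruning idea behind RAO* can be illustrated with a minimal sketch (this is not the paper's algorithm: the node structure, bounds, and all numbers below are invented). A best-first search expands the node with the highest utility upper bound and discards any branch whose admissible execution-risk lower bound already exceeds the chance constraint:

```python
import heapq

# Toy illustration, not RAO* itself: best-first search that prunes any
# branch whose admissible risk lower bound exceeds the chance bound delta.
def search(children, utility_ub, risk_lb, root, delta):
    """Return the best reachable leaf whose risk bound stays within delta."""
    best = None
    frontier = [(-utility_ub[root], root)]   # max-heap on utility bound
    while frontier:
        _, node = heapq.heappop(frontier)
        if risk_lb[node] > delta:            # overly-risky branch: prune
            continue
        kids = children.get(node, [])
        if not kids:                         # leaf policy: candidate answer
            if best is None or utility_ub[node] > utility_ub[best]:
                best = node
            continue
        for k in kids:
            heapq.heappush(frontier, (-utility_ub[k], k))
    return best
```

With a risk bound of 0.1, a high-utility but risky branch is pruned and the safer branch is returned; relaxing the bound restores the risky branch.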


Proceedings ArticleDOI
20 Jul 2016
TL;DR: The proposed hybrid method combines a multi-population genetic algorithm with a visibility graph by encoding all possible paths as individuals and solving a linear programming model to define the full path to be executed by the aircraft.
Abstract: This paper proposes a hybrid method to define a path planning for unmanned aerial vehicles in a non-convex environment with uncertainties. The environment becomes non-convex by the presence of no-fly zones such as mountains, cities and airports. Due to the uncertainties related to path planning in real situations, the risk of collision cannot be avoided. Therefore, the planner must take into account a level of risk lower than the one tolerated by the user. The proposed hybrid method combines a multi-population genetic algorithm with a visibility graph. This is done by encoding all possible paths as individuals and solving a linear programming model to define the full path to be executed by the aircraft. The hybrid method is evaluated on a set of 50 maps and compared against exact and heuristic approaches, with promising results.

37 citations
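The visibility-graph half of such a hybrid reduces to shortest-path search once the graph is built. Below is a minimal sketch using Dijkstra's algorithm over a hand-built graph (the waypoints, edges, and costs are invented; the genetic-algorithm and linear-programming components are not shown):

```python
import heapq
import math

# Illustration only: Dijkstra over a hand-built "visibility graph" whose
# nodes are waypoints that can see each other around a no-fly zone.
def dijkstra(graph, start, goal):
    """graph: node -> list of (neighbor, edge_length). Returns (path, cost)."""
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == goal:
            break
        if d > dist.get(u, math.inf):        # stale queue entry
            continue
        for v, w in graph[u]:
            nd = d + w
            if nd < dist.get(v, math.inf):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(pq, (nd, v))
    path, node = [goal], goal                # reconstruct by back-pointers
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1], dist[goal]
```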


Proceedings Article
12 Jun 2016
TL;DR: i-dual is presented, which, to the best of the authors' knowledge, is the first heuristic search algorithm for constrained SSPs; it operates in the space of dual variables describing the policy occupation measures, and retains the ability to use standard value function heuristics computed by well-known methods.
Abstract: We consider the problem of generating optimal stochastic policies for Constrained Stochastic Shortest Path problems, which are a natural model for planning under uncertainty for resource-bounded agents with multiple competing objectives. While unconstrained SSPs enjoy a multitude of efficient heuristic search solution methods with the ability to focus on promising areas reachable from the initial state, the state of the art for constrained SSPs revolves around linear and dynamic programming algorithms which explore the entire state space. In this paper, we present i-dual, which, to the best of our knowledge, is the first heuristic search algorithm for constrained SSPs. To concisely represent constraints and efficiently decide their violation, i-dual operates in the space of dual variables describing the policy occupation measures. It does so while retaining the ability to use standard value function heuristics computed by well-known methods. Our experiments on a suite of PPDDL problems augmented with constraints show that these features enable i-dual to achieve up to two orders of magnitude improvement in run-time and memory over linear programming algorithms.

35 citations
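What a constrained SSP asks for can be seen in a brute-force baseline (this is not i-dual, and it enumerates deterministic policies rather than operating on occupation measures; the two-state chain and its costs are invented): minimize expected primary cost subject to a bound on expected secondary cost.

```python
from itertools import product

# Toy deterministic chain s0 -> s1 -> goal, two actions per state.
# Each action has (primary_cost, secondary_cost); all numbers invented.
actions = {
    "s0": {"fast": (1.0, 3.0), "slow": (2.0, 1.0)},
    "s1": {"fast": (1.0, 3.0), "slow": (2.0, 1.0)},
}

def evaluate(policy):
    """Expected primary and secondary cost of a deterministic policy."""
    primary = sum(actions[s][policy[s]][0] for s in actions)
    secondary = sum(actions[s][policy[s]][1] for s in actions)
    return primary, secondary

def best_policy(bound):
    """Cheapest policy whose secondary cost stays within the bound."""
    best, best_cost = None, float("inf")
    for choice in product(*[actions[s] for s in actions]):
        policy = dict(zip(actions, choice))
        p, c = evaluate(policy)
        if c <= bound and p < best_cost:
            best, best_cost = policy, p
    return best, best_cost
```

The point of i-dual, per the abstract, is to avoid exactly this exhaustive exploration by focusing heuristic search on promising occupation measures.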


Proceedings Article
12 Jun 2016
TL;DR: The Probabilistic Simple Temporal Network with Uncertainty (PSTNU) is introduced, a temporal planning formalism that unifies the set-bounded and probabilistic temporal uncertainty models from the STNU and PSTN literature.
Abstract: Inspired by risk-sensitive, robust scheduling for planetary rovers under temporal uncertainty, this work introduces the Probabilistic Simple Temporal Network with Uncertainty (PSTNU), a temporal planning formalism that unifies the set-bounded and probabilistic temporal uncertainty models from the STNU and PSTN literature. By allowing any combination of these two types of uncertainty models, PSTNUs can more appropriately reflect the varying levels of knowledge that a mission operator might have regarding the stochastic duration models of different activities. We also introduce PARIS, a novel sound and provably polynomial-time algorithm for risk-sensitive strong scheduling of PSTNUs. Due to its fully linear problem encoding for typical temporal uncertainty models, PARIS is shown to outperform the current fastest algorithm for risk-sensitive strong PSTN scheduling by nearly four orders of magnitude in some instances of a popular probabilistic scheduling dataset, while results on a new PSTNU scheduling dataset indicate that PARIS is, indeed, amenable to deployment on resource-constrained hardware.

20 citations
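The risk-allocation intuition behind strong scheduling of probabilistic durations can be sketched as follows (this is not the PARIS algorithm: it handles a single normally distributed activity, and all parameters are invented). Each probabilistic duration is replaced by a set-bounded window covering at least 1 - risk of its probability mass, after which feasibility is an ordinary bound check:

```python
from statistics import NormalDist

# Sketch only: one normally distributed activity duration.
def risk_bounded_window(mean, sd, risk):
    """Symmetric interval covering 1 - risk of the duration's mass."""
    d = NormalDist(mean, sd)
    return d.inv_cdf(risk / 2), d.inv_cdf(1 - risk / 2)

def strongly_schedulable(mean, sd, risk, deadline):
    """Can the activity always finish by the deadline, up to the risk bound?"""
    _, upper = risk_bounded_window(mean, sd, risk)
    return upper <= deadline
```

For a duration with mean 10 and standard deviation 1, a 5% risk budget yields an upper bound near 11.96, so a deadline of 12 is safe while 11 is not.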


Proceedings ArticleDOI
13 Sep 2016
TL;DR: This paper discusses some feasible, useful RSE configurations and deployments for a Mars rover case and an autonomous underwater vehicle case, and discusses additional capabilities that the architecture requires to support needed resiliency, such as onboard analysis and learning.
Abstract: In this paper we discuss the latest results from the Resilient Space Systems project, a joint effort between Caltech, MIT, NASA Jet Propulsion Laboratory (JPL), and the Woods Hole Oceanographic Institution (WHOI). The goal of the project is to define a resilient, risk-aware software architecture for onboard, real-time autonomous operations that can robustly handle uncertainty in spacecraft behavior within hazardous and unconstrained environments, without unnecessarily increasing complexity. The architecture, called the Resilient Spacecraft Executive (RSE), has been designed to support three functions: (1) adapting to component failures to allow graceful degradation, (2) accommodating environments, science observations, and spacecraft capabilities that are not fully known in advance, and (3) making risk-aware decisions without waiting for slow ground-based reactions. In implementation, the bulk of the RSE effort has focused on the parts of the architecture used for goal-directed execution and control, including the deliberative, habitual, and reflexive modules. We specify the capabilities and constraints needed for each module, and discuss how we have extended the current state-of-the-art algorithms so that they can supply the required functionality, such as risk-aware planning in the deliberative module that conforms to mission operator-supplied priorities and constraints. Furthermore, the RSE architecture is modular to enable extension and reconfiguration, as long as the embedded algorithmic components exhibit the required risk-aware behavior in the deliberative module and risk-bounded behavior in the habitual module. To that end, we discuss some feasible, useful RSE configurations and deployments for a Mars rover case and an autonomous underwater vehicle case. We also discuss additional capabilities that the architecture requires to support needed resiliency, such as onboard analysis and learning.

11 citations


01 Jan 2016
TL;DR: In this article, a queue transmission model (QTM) is proposed that blends cell-based and link-based modeling to enable a nonhomogeneous-time MILP formulation of traffic signal control, optimized jointly over an entire traffic network and scalable to large numbers of intersections.
Abstract: Urban traffic congestion is on the increase worldwide; therefore, it is critical to maximize the capacity and throughput of the existing road infrastructure with optimized traffic signal control. For that purpose, this paper builds on the body of work in mixed integer linear programming (MILP) approaches that attempt to optimize traffic signal control jointly over an entire traffic network and specifically on improving the scalability of these methods for large numbers of intersections. The primary insight in this work stems from the fact that MILP-based approaches to traffic control used in a receding horizon control manner (that replan at fixed time intervals) need to compute high-fidelity control policies only for the early stages of the signal plan. Therefore, coarser time steps can be used to see over a long horizon to adapt preemptively to distant platoons and other predicted long-term changes in traffic flows. To that end, this paper contributes the queue transmission model (QTM), which blends elements of cell-based and link-based modeling approaches to enable a nonhomogeneous time MILP formulation of traffic signal control. Experimentation is then carried out with this novel QTM-based MILP control in a range of traffic networks, and it is demonstrated that the nonhomogeneous MILP formulation achieves (a) substantially lower delay solutions, (b) improved per vehicle delay distributions, and (c) more optimal travel times over a longer horizon in comparison with the homogeneous MILP formulation with the same number of binary and continuous variables.

6 citations
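The nonhomogeneous-horizon idea, independent of the MILP itself, can be sketched by a step generator that uses fine time steps early in the plan and geometrically coarser ones later (the step sizes and growth factor below are invented; the paper's QTM formulation is not shown):

```python
# Sketch only: fine steps while the control must be high-fidelity, then
# geometrically growing steps so a long horizon stays cheap to encode.
def nonhomogeneous_steps(horizon, fine_dt, fine_window, growth=2.0):
    """Return step durations covering [0, horizon]."""
    steps, t, dt = [], 0.0, fine_dt
    while t < horizon:
        if t >= fine_window:                 # past the high-fidelity window
            dt = min(dt * growth, horizon - t)
        dt = min(dt, horizon - t)            # never overshoot the horizon
        steps.append(dt)
        t += dt
    return steps
```

A 60-unit horizon with 1-unit steps for the first 10 units needs only 15 steps instead of 60, which is the kind of variable-count saving the nonhomogeneous formulation exploits.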



Posted Content
TL;DR: This work introduces the Time Resource Network (TRN), an encoding for resource-constrained scheduling problems, and proposes two algorithms for determining the consistency of a TRN, one based on Mixed Integer Programing and the other based on Constraint Programming.
Abstract: The problem of scheduling under resource constraints is widely applicable. One prominent example is power management, in which we have a limited continuous supply of power but must schedule a number of power-consuming tasks. Such problems feature tightly coupled continuous resource constraints and continuous temporal constraints. We address such problems by introducing the Time Resource Network (TRN), an encoding for resource-constrained scheduling problems. The definition allows temporal specifications using a general family of representations derived from the Simple Temporal Network, including the Simple Temporal Network with Uncertainty and the probabilistic Simple Temporal Network (Fang et al. (2014)). We propose two algorithms for determining the consistency of a TRN: one based on Mixed Integer Programming and the other based on Constraint Programming, which we evaluate on scheduling problems with Simple Temporal Constraints and Probabilistic Temporal Constraints.

4 citations
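The resource side of such coupled constraints can be illustrated with a sweep-line check that a fixed schedule of power-consuming tasks never exceeds a continuous supply limit (this is not the paper's MIP or CP encoding; the tasks and capacity are invented):

```python
# Sketch only: verify a fixed schedule against a continuous power limit.
def within_capacity(tasks, capacity):
    """tasks: list of (start, end, power). True if total draw <= capacity."""
    events = []
    for start, end, power in tasks:
        events.append((start, power))        # task switches on
        events.append((end, -power))         # task switches off
    load = 0.0
    # At equal times, process releases (negative deltas) first.
    for _, delta in sorted(events, key=lambda e: (e[0], e[1])):
        load += delta
        if load > capacity + 1e-9:
            return False
    return True
```

A TRN must decide the start times themselves; this check only shows what the resource constraint means once a schedule is fixed.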


Proceedings ArticleDOI
13 Sep 2016
TL;DR: The beginnings of an attempt to define and analyze the stability of an entire modular robotic system architecture - one which includes a three-tier (3T) layer breakdown of system architecture are discussed.
Abstract: In this paper we discuss the beginnings of an attempt to define and analyze the stability of an entire modular robotic system architecture - one which includes a three-tier (3T) layer breakdown of ...

3 citations


Journal ArticleDOI
TL;DR: This paper contributes the queue transmission model (QTM), which blends elements of cell-based and link-based modeling approaches to enable a nonhomogeneous time MILP formulation of traffic signal control, and demonstrates that it achieves substantially lower delay solutions, improved per vehicle delay distributions, and more optimal travel times over a longer horizon.

Proceedings ArticleDOI
01 Sep 2016
TL;DR: In this paper, the depth control algorithm used by the Dorado-class autonomous underwater vehicle (AUV) in conducting bathymetric surveys and other remote sensing tasks was improved by planning the depth profile that best follows the desired depth.
Abstract: We collaborated with the Monterey Bay Aquarium Research Institute (MBARI) to improve the depth control algorithm used by the Dorado-class autonomous underwater vehicle (AUV) in conducting bathymetric surveys and other remote sensing tasks. The algorithm enables better bottom following by planning the depth profile that best tracks the desired depth while pulling up safely over rising bathymetry. It allows the AUV to operate closer to the sea floor and in more variable submarine conditions. Deployment tests demonstrated improved AUV performance in a bathymetrically complex area three miles offshore in Monterey Bay and highlighted areas where further research and development can enhance AUV operation.
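The bottom-following idea can be sketched as follows (this is not MBARI's algorithm: the rate limit, safety margin, and bathymetry values are invented). Depth setpoints track the seafloor at a fixed standoff altitude, changes between waypoints are rate-limited, and a safety clamp pulls the vehicle up regardless of the rate limit when the seafloor shoals:

```python
# Sketch only: depths in metres, positive down; one setpoint per waypoint.
def plan_depth(seafloor, standoff, max_step, margin=2.0):
    """Track the floor at `standoff` altitude with rate-limited depth changes."""
    profile = [max(0.0, seafloor[0] - standoff)]
    for floor in seafloor[1:]:
        target = max(0.0, floor - standoff)
        prev = profile[-1]
        step = max(-max_step, min(max_step, target - prev))
        # Safety clamp: never within `margin` of the floor, even if that
        # means exceeding the rate limit to pull up over rising bathymetry.
        depth = min(prev + step, floor - margin)
        profile.append(max(0.0, depth))
    return profile
```

Over a floor that shoals from 50 m to 40 m, the planner is forced shallower than the rate limit alone would allow, which is the "pull up safely" behavior the abstract describes.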