Optimal control approximations for trainable manipulators
01 Dec 1977, Vol. 16, pp. 749-754
TL;DR: A theoretical procedure is developed based on the monotonicity between the changes of the Hamiltonian and the value functions proposed by Rekasius, and may provide a procedure for selecting effective controls for non-linear systems.
Abstract: A theoretical procedure is developed for testing the quality of an approximation to the optimal solution of a nonlinear optimal control problem. It is based on the monotonicity between the changes of the Hamiltonian and the value functions proposed by Rekasius, and may provide a procedure for selecting effective controls for nonlinear systems. The approach has been applied to the approximately optimal control of a trainable manipulator with seven degrees of freedom, where the controller is used for motion coordination and optimal execution of object-handling tasks.
Citations
07 Dec 1988
TL;DR: Work aimed at formulating analytically the theory of intelligent machines is summarized, and the three levels of the intelligent control, i.e. the organization, coordination, and execution levels, are described as originally conceived.
Abstract: Work aimed at formulating analytically the theory of intelligent machines is summarized. The functions of an intelligent machine are executed by intelligent controls. The principle of increasing precision with decreasing intelligence is used to form a hierarchical structure of the control systems. Distributed intelligence is compatible with such a structure when it is used for teams of intelligent machines or cooperating coordinators within the machine. The three levels of the intelligent control, i.e. the organization, coordination, and execution levels, are described as originally conceived. Designs such as neural nets for the organization level and Petri nets for the coordination level are also proposed. Applications to intelligent robots for space exploration are considered.
26 citations
TL;DR: This work treats infinite horizon optimal control problems by solving the associated stationary Hamilton-Jacobi-Bellman (HJB) equation numerically to compute the value function and an optimal feedback law, and uses low rank hierarchical tensor product approximation/tree-based tensor formats, in particular tensor trains (TT tensors), and multi-polynomials, together with high dimensional quadrature.
Abstract: We treat infinite horizon optimal control problems by solving the associated stationary Hamilton-Jacobi-Bellman (HJB) equation numerically to compute the value function and an optimal feedback law. The dynamical systems under consideration are spatial discretizations of nonlinear parabolic partial differential equations (PDEs), which means that the HJB equation suffers from the curse of dimensionality. To overcome numerical infeasibility we use low-rank hierarchical tensor product approximations, or tree-based tensor formats, in particular tensor trains (TT tensors) and multi-polynomials, since the resulting value function is expected to be smooth. To this end we reformulate the policy iteration algorithm as a linearization of HJB equations. The resulting linear hyperbolic PDE remains the computational bottleneck due to its high dimensionality. By the method of characteristics it can be reformulated via the Koopman operator in the spirit of dynamic programming. We use a low-rank tensor representation to approximate the value function. The resulting operator equation is solved using high-dimensional quadrature, e.g. variational Monte Carlo methods. From the knowledge of the value function at computable samples $x_i$ we infer the function $x \mapsto v(x)$. We investigate the convergence of this procedure. Numerical evidence is given by controlling destabilized versions of the viscous Burgers and Schloegl equations.
18 citations
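The policy iteration loop described in that abstract, evaluating a fixed feedback law and then updating it by minimizing the Hamiltonian, can be illustrated in the linear-quadratic special case, where each evaluation step reduces to a linear Lyapunov equation (Kleinman's iteration) and the iterates converge to the Riccati solution of the HJB equation. A minimal sketch, with illustrative system matrices and initial gain not taken from the paper:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, solve_continuous_are

# Unstable linear system x' = A x + B u with cost  integral of x'Qx + u'Ru dt
A = np.array([[0., 1.], [-1., 1.]])
B = np.array([[0.], [1.]])
Q = np.eye(2)
R = np.array([[1.]])

K = np.array([[0., 3.]])  # initial stabilizing feedback law u = -K x
for _ in range(20):
    Ak = A - B @ K
    # policy evaluation: the HJB linearizes to the Lyapunov equation
    #   Ak' P + P Ak = -(Q + K' R K)
    P = solve_continuous_lyapunov(Ak.T, -(Q + K.T @ R @ K))
    # policy improvement: minimize the Hamiltonian over u, giving K = R^{-1} B' P
    K = np.linalg.solve(R, B.T @ P)

P_are = solve_continuous_are(A, B, Q, R)
print(np.allclose(P, P_are))  # True: iterates reach the Riccati solution
```

In the high-dimensional PDE setting of the paper, the value function in the evaluation step is not a quadratic form and is instead represented in a low-rank tensor format; the linear-quadratic case only shows the structure of the iteration.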
04 Sep 1989
TL;DR: A summary of work aimed at reformulating analytically the theory of intelligent machines is presented; the three levels of intelligent control are described as originally conceived, and new designs, neural nets for the organization level and Petri nets for the coordination level, are proposed.
Abstract: A summary of the work aimed at reformulating analytically the theory of intelligent machines is presented. The functions of an intelligent machine are executed by intelligent controls. The principle of increasing precision with decreasing intelligence is used to form a hierarchical structure of the control systems. Distributed intelligence is compatible with such a structure when it is used for teams of intelligent machines or cooperating coordinators within the machine. The three levels of intelligent control, i.e. the organization, coordination, and execution levels, are described as originally conceived. New designs such as neural nets for the organization level and Petri nets for the coordination level are proposed. Applications to intelligent robots for space exploration are suggested.
18 citations
01 Mar 1985
TL;DR: An optimal control formulation for robotic manipulators is proposed and the Computed Torque Control, Resolved Acceleration Control, and a PID End Point Control are derived as special cases of this formulation.
Abstract: An optimal control formulation for robotic manipulators is proposed in this paper. The Computed Torque Control, Resolved Acceleration Control, and a PID End Point Control are derived as special cases of this formulation. Torque and acceleration feedback are adopted to eliminate the on-line calculation of feedforward components in the above control algorithms and to increase the robustness of the control system. This controller can be implemented approximately without torque sensors and accelerometers. The resulting control system is proved to be bounded-input bounded-state stable. Simulation results on both the Scheinman MIT and PUMA 600 arms are presented.
14 citations
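The computed-torque idea that this formulation subsumes, cancel the manipulator's nonlinear dynamics and impose linear error dynamics through PD gains, can be sketched on a single-link pendulum. The model parameters, gains, and reference trajectory below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

# Single-link pendulum (hypothetical parameters):
#   m*l^2 * th'' + b*th' + m*g*l*sin(th) = tau
m, l, b, g = 1.0, 0.5, 0.1, 9.81
kp, kd = 25.0, 10.0  # PD gains on the tracking error

def computed_torque(th, dth, th_d, dth_d, ddth_d):
    """Cancel the model dynamics so the error obeys e'' + kd*e' + kp*e = 0."""
    e, de = th_d - th, dth_d - dth
    return m*l*l*(ddth_d + kd*de + kp*e) + b*dth + m*g*l*np.sin(th)

# Track th_d(t) = sin(t), starting away from the reference (forward Euler)
dt, T = 1e-3, 10.0
th, dth = 0.5, 0.0
for k in range(int(T / dt)):
    t = k * dt
    tau = computed_torque(th, dth, np.sin(t), np.cos(t), -np.sin(t))
    ddth = (tau - b*dth - m*g*l*np.sin(th)) / (m*l*l)
    th, dth = th + dt*dth, dth + dt*ddth

print(abs(th - np.sin(T)))  # tracking error after 10 s is near zero
```

Because the torque law cancels the gravity and friction terms exactly (in the model), the closed-loop tracking error is governed by the chosen linear dynamics; the paper's point is that torque and acceleration feedback can replace this on-line feedforward computation.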
TL;DR: In this paper, the authors consider a finite horizon control system with associated Bellman equation and obtain a sequence of short time horizon problems, which they call local optimal control problems, and apply two different methods, one being the well-known policy iteration, where a fixed-point iteration is required for every time step.
Abstract: Controlling systems of ordinary differential equations (ODEs) is ubiquitous in science and engineering. For finding an optimal feedback controller, the value function and associated fundamental equations such as the Bellman equation and the Hamilton-Jacobi-Bellman (HJB) equation are essential. The numerical treatment of these equations poses formidable challenges due to their non-linearity and their (possibly) high-dimensionality.
In this paper we consider a finite horizon control system with associated Bellman equation. After a time discretization, we obtain a sequence of short time horizon problems which we call local optimal control problems. For solving the local optimal control problems we apply two different methods, one being the well-known policy iteration, where a fixed-point iteration is required for every time step. The other algorithm borrows ideas from Model Predictive Control (MPC): it solves the local optimal control problem via open-loop control methods on a short time horizon, allowing us to replace the fixed-point iteration with an adjoint method.
For high-dimensional systems we apply low-rank hierarchical tensor product approximations (tree-based tensor formats), in particular tensor trains (TT tensors) and multi-polynomials, together with high-dimensional quadrature, e.g. Monte Carlo methods.
We prove linear error propagation with respect to the time discretization and give numerical evidence by controlling a diffusion equation with an unstable reaction term and an Allen-Cahn equation.
3 citations
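The MPC-flavored variant described above, solve an open-loop problem on a short horizon, apply only the first control, then re-solve from the new state, can be sketched on a toy linear system. The discretized double integrator, cost weights, and horizon length below are illustrative assumptions, not taken from the paper (which treats high-dimensional PDE discretizations):

```python
import numpy as np
from scipy.optimize import minimize

# Receding-horizon control of a discretized double integrator
#   x_{k+1} = A x_k + B u_k   (illustrative system)
dt = 0.1
A = np.array([[1., dt], [0., 1.]])
B = np.array([[0.], [dt]])

def horizon_cost(u, x0, N):
    """Open-loop cost over a short horizon: sum of x'x + 0.1 u^2 plus terminal penalty."""
    x, J = x0, 0.0
    for k in range(N):
        J += x @ x + 0.1 * u[k]**2
        x = A @ x + B.flatten() * u[k]
    return J + 10.0 * (x @ x)

x = np.array([1.0, 0.0])
N = 10  # short local horizon
for _ in range(100):
    # local optimal control problem, solved open-loop from the current state
    res = minimize(horizon_cost, np.zeros(N), args=(x, N))
    x = A @ x + B.flatten() * res.x[0]  # apply only the first control, then re-solve

print(np.linalg.norm(x))  # state is driven toward the origin
```

In the paper the open-loop subproblems are solved with adjoint (gradient) methods rather than a generic optimizer, and the resulting value-function data feed the tensor-train approximation; this sketch only shows the receding-horizon control flow.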