
Showing papers on "Bellman equation published in 1971"


Journal ArticleDOI
TL;DR: The control problem discussed in this paper is a variant of one considered by Bellman in a seminar at the RAND Corporation; it is treated here in a discrete version, and the reader should have no difficulty developing its continuous analogue.
Abstract: The control problem discussed in this paper is a variant of one considered by Bellman in a seminar at the RAND Corporation. A solution was presented to the seminar by the author in October 1952 based on the idea of placing a “loose” string between end points and “pulling tight.” Recently, Arthur Veinott has greatly extended the class of problems which admit a “string” solution. It appeared of value that the author publish his original notes on Bellman's problem. The problem will be considered here in a discrete version. The reader should have no difficulties developing its continuous analogue.
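The paper does not spell out its equations in this abstract, so the sketch below is only a hedged illustration of the "loose string pulled tight" idea in a discrete setting: the taut string through a tube of bounds is the path minimizing the squared length of its increments, computed here by projected gradient descent (the tube bounds, step size, and iteration count are illustrative assumptions, not the author's construction).

```python
import numpy as np

def taut_string(lo, hi, n_iter=20000, step=0.1):
    # "Loose string pulled tight": find the path x with lo[i] <= x[i] <= hi[i]
    # minimizing the squared length sum_i (x[i+1] - x[i])^2, via projected
    # gradient descent (the projection onto the tube is simple clipping).
    # Fix the endpoints by setting lo[0] == hi[0] and lo[-1] == hi[-1].
    lo = np.asarray(lo, dtype=float)
    hi = np.asarray(hi, dtype=float)
    x = np.clip(np.linspace(lo[0], lo[-1], len(lo)), lo, hi)  # initial guess
    for _ in range(n_iter):
        g = np.zeros_like(x)
        g[1:-1] = 2.0 * (2.0 * x[1:-1] - x[:-2] - x[2:])  # gradient of length
        x = np.clip(x - step * g, lo, hi)                 # descend, then project
    return x
```

For example, with endpoints pinned at 0 and an obstacle forcing the middle point up to 2, the string bends linearly up to the obstacle and back down, touching the bound only where it must.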

18 citations



Journal ArticleDOI
Tohru Katayama
TL;DR: In this article, the problem of maximizing the expected first passage time of a state to the boundary of a certain safe region is considered, where the dynamics of the system is described by a linear stochastic differential equation.
Abstract: The stochastic bang-bang control problem of maximizing the expectation of the first passage time of the state to the boundary of a certain safe region is considered. It is assumed that the dynamics of the system are described by a linear stochastic differential equation. By use of dynamic programming, the problem is reduced to a boundary-value problem of Dirichlet type for the Bellman equation. A difference scheme is applied in order to obtain the numerical solution of the boundary-value problem. It is found that for first-order systems the difference scheme gives excellent numerical results. Some switching curves are also obtained for a second-order plant 1/s^2 with additive Gaussian white noise.
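The abstract does not give the paper's particular scheme, so the following is a hedged sketch of the general approach it describes for a first-order system: a Markov-chain (upwind finite-difference) approximation of the Dirichlet boundary-value problem for the Bellman equation of the expected exit time, with the bang-bang control taken as the sign of the value gradient (the plant dx = (a x + b u) dt + sigma dW, the safe region [-1, 1], and all parameter values are illustrative assumptions).

```python
import numpy as np

def expected_exit_time(a=0.0, b=0.5, sigma=1.0, n=101, tol=1e-9, max_iter=200000):
    # Markov-chain (finite-difference) approximation to the Bellman equation
    #   max_{|u|<=1} [(a x + b u) T'(x) + (sigma^2 / 2) T''(x)] + 1 = 0,
    #   T(-1) = T(1) = 0,
    # for the expected first-passage time T(x) of the controlled diffusion
    # dx = (a x + b u) dt + sigma dW to the boundary of the safe region [-1, 1].
    x = np.linspace(-1.0, 1.0, n)
    h = x[1] - x[0]
    diff = sigma ** 2 / 2.0
    T = np.zeros(n)
    for _ in range(max_iter):
        dT = np.zeros(n)
        dT[1:-1] = (T[2:] - T[:-2]) / (2.0 * h)   # centered slope estimate
        drift = a * x + b * np.sign(dT)           # bang-bang control u = sign(T')
        up = np.maximum(drift, 0.0)               # upwind splitting of the drift
        dn = np.maximum(-drift, 0.0)
        dt = h ** 2 / (2.0 * diff + h * (up + dn)).max()  # stable time step
        pu = dt * (diff / h ** 2 + up / h)        # move-right probability
        pd = dt * (diff / h ** 2 + dn / h)        # move-left probability
        Tn = T.copy()
        Tn[1:-1] = (dt + pu[1:-1] * T[2:] + pd[1:-1] * T[:-2]
                    + (1.0 - pu[1:-1] - pd[1:-1]) * T[1:-1])
        done = np.abs(Tn - T).max() < tol
        T = Tn
        if done:
            break
    return x, T
```

A sanity check on the uncontrolled case b = 0, sigma = 1: the Bellman equation reduces to T''(x)/2 = -1 with T(±1) = 0, whose solution is T(x) = 1 - x^2, and the scheme reproduces it closely; switching the control on (b > 0) pushes the state inward and lengthens the expected exit time.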

8 citations


Journal ArticleDOI
TL;DR: In this paper, the authors consider game problems in which the payoff is some function of the terminal state of a conflict-controlled system and show that optimal strategies exist if the corresponding Bellman equation has a solution.

4 citations


Journal ArticleDOI
TL;DR: In this paper, the sequential estimation of plants described by non-linear differential equations is treated as an optimal control problem with a least-squares criterion; the criterion function satisfies the Bellman equation at the terminal time, which is treated as a running variable.
Abstract: This paper considers the sequential estimation of plants described by non-linear differential equations. The statistical nature of the disturbance being unknown, the problem is treated as an optimal control problem with least-squares criterion. It is seen that the criterion function satisfies the Bellman equation at terminal time, which is considered as a running variable. The sequential estimator equations are directly obtained by using Pearson's approximation solution.
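The paper's estimator via Pearson's approximation is not reproduced in the abstract; as a hedged illustration of the sequential least-squares idea only, the sketch below runs a recursive least-squares state estimator for a scalar linear plant (the plant x[k+1] = a x[k], the measurement model, and the unit residual weighting are illustrative assumptions, not the paper's equations).

```python
def sequential_least_squares(ys, a=0.9, c=1.0, x0=0.0, p0=100.0):
    # Sequential (recursive) least-squares state estimation for the scalar
    # plant x[k+1] = a x[k] with measurements y[k] = c x[k] + disturbance:
    # each step re-minimizes the running sum of squared residuals.
    x, p = x0, p0
    for y in ys:
        x, p = a * x, a * a * p           # propagate estimate and uncertainty
        k = p * c / (c * c * p + 1.0)     # least-squares gain (unit weighting)
        x = x + k * (y - c * x)           # correct with the new residual
        p = (1.0 - k * c) * p             # shrink the uncertainty weight
    return x
```

Fed noiseless measurements of a decaying trajectory, the estimate converges to the true terminal state even from a poor initial guess, which is the sequential analogue of solving the batch least-squares problem at each new terminal time.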