
Showing papers on "Bellman equation published in 1972"


Journal ArticleDOI
TL;DR: In this article, the expected increase in value of an asset in a period is a random variable whose distribution is a function of either the value or the age of the asset at the start of the period.
Abstract: Growth period models, previously treated in the literature, have assumed that the pattern of value increase of the growth asset is deterministic. In this paper, this assumption is relaxed by considering models in which the increase in value of an asset in a period is a random variable whose distribution is a function of either the value or the age of the asset at the start of the period. The expected increase in value is a decreasing function of the value or age of the asset so that the value of additional maturation time decreases as the asset ages. Dynamic programming is used to compute optimal policies as to when stochastic growth assets should be harvested. The steady state value function is shown to be directly analogous to that obtained when deterministic growth is assumed. Procedures for quickly computing both steady state policies and value functions are developed.

10 citations
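The backward-induction computation described in this abstract can be sketched as a small finite-horizon dynamic program. All numerical choices below (horizon, discount factor, growth law, grids, Monte Carlo sample size) are illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

T = 30                                   # maximum age considered (assumed)
beta = 0.95                              # per-period discount factor (assumed)
values = np.linspace(1.0, 100.0, 200)    # grid of asset values

def growth_dist(age):
    """Mean/std of the one-period value increase; the mean decays with age,
    as the paper assumes."""
    return 10.0 * np.exp(-0.15 * age), 2.0

# Backward induction: V[t] = max(harvest now = v, wait = beta * E V[t+1])
V = np.tile(values, (T + 1, 1))          # at age T the asset must be harvested
policy = np.zeros((T, len(values)), dtype=bool)   # True = harvest
for t in range(T - 1, -1, -1):
    mean, std = growth_dist(t)
    draws = rng.normal(mean, std, size=500).clip(min=0.0)  # Monte Carlo growth
    next_vals = np.minimum(values[:, None] + draws[None, :], values[-1])
    cont = beta * np.interp(next_vals, values, V[t + 1]).mean(axis=1)
    policy[t] = values >= cont           # harvest once value beats waiting
    V[t] = np.maximum(values, cont)
```

Because the expected growth decays with age, the continuation value falls as the asset matures, so the computed policy harvests at lower values for older assets, consistent with the paper's observation that the value of additional maturation time decreases.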


Journal ArticleDOI
Tohru Katayama
TL;DR: In this paper, the optimal bang-bang control problem is considered as one of maximizing the probability that the state hits a target manifold before the outer boundary of a safe region in the control interval [0, T].
Abstract: The optimal bang-bang control problem is considered as one of maximizing the probability that the state hits a target manifold before the outer boundary of a safe region in the control interval [0, T]; hitting the outer boundary corresponds to a breakdown of the control system, and T may or may not be finite. It is assumed that the dynamics of the controlled system can be expressed by a linear stochastic differential equation, and that all the state variables are accessible for direct measurements. Dynamic programming formulation leads to an initial and boundary value problem for the Bellman equation. A discussion is given for a simple scalar system. The initial and boundary value problem for a second-order plant 1/s^2 is solved by use of the finite-difference method. Some optimal switching curves are also demonstrated for different target manifolds.

8 citations
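For the simple scalar case discussed in the abstract, the boundary value problem for the Bellman equation can be sketched with a Markov-chain (Kushner-type) finite-difference scheme: for dx = u dt + sigma dW with bang-bang u in {-1, +1}, maximize the probability of hitting the target x = 0 before the unsafe boundary x = 1. The diffusion strength, grid, and iteration budget below are assumptions for the sketch, not values from the paper:

```python
import numpy as np

sigma = 0.5
n = 101
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]

V = np.zeros(n)     # V(x) = max probability of reaching the target first
V[0] = 1.0          # hitting the target manifold: success
# V[-1] stays 0: hitting the outer boundary means breakdown

for _ in range(20000):
    V_new = V.copy()
    for u in (-1.0, 1.0):
        # Upwind transition probabilities of the approximating Markov chain
        Q = sigma**2 + h * abs(u)
        p_up = (sigma**2 / 2 + h * max(u, 0.0)) / Q
        p_dn = (sigma**2 / 2 + h * max(-u, 0.0)) / Q
        cand = p_up * V[2:] + p_dn * V[:-2]
        V_new[1:-1] = np.maximum(V_new[1:-1], cand)   # maximize over u
    if np.max(np.abs(V_new - V)) < 1e-10:
        V = V_new
        break
    V = V_new
```

Starting from zero in the interior, the iteration increases monotonically toward the hitting probability; the maximizing control here is u = -1 everywhere (drive toward the target), and V decreases monotonically from the target to the unsafe boundary.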


Journal ArticleDOI
TL;DR: The concept of optimality is extended to systems with uncertain parameters, and it is shown that as the uncertainty approaches zero, the extended optimality reduces to the conventionally accepted definition of optimality.

5 citations


Journal ArticleDOI
TL;DR: In this paper, an approximate method is proposed for synthesizing the optimal control for a dynamical system in the presence of external random perturbations and measurement errors, assuming that either the external perturbations acting on the system are sufficiently small or the measurement errors are large.

5 citations


Journal ArticleDOI
TL;DR: Kalaba as discussed by the authors converted the optimal control problem directly into an initial-value problem without utilizing the Euler equations, Pontryagin's maximum principle, or the principle of optimality.
Abstract: Numerical results are given comparing Kalaba's new approach with a classical method for evaluating a simple optimal control problem. Kalaba's new approach is to convert the optimal control problem directly into an initial-value problem without utilizing the Euler equations, Pontryagin's maximum principle, or the principle of optimality. The classical method utilizes the calculus of variations to obtain the Euler equations with the two-point boundary conditions. The results show that five-digit accuracy or better is obtained using Kalaba's approach, whereas the classical method gives large errors for large values of the terminal time.

4 citations
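The large-terminal-time breakdown of the classical method that the abstract reports can be illustrated on a standard scalar problem (an assumed example, not the one in the paper): minimizing the integral of x^2 + u^2 with dx/dt = u, x(0) = 1, x(T) = 0 gives the Euler equation x'' = x, a two-point boundary value problem solved by shooting on the initial slope s = x'(0). The exact slope is s = -cosh(T)/sinh(T), but forward integration amplifies any error in s by roughly e^T:

```python
import math

def shoot(s, T, n=10000):
    """Integrate x'' = x forward from x(0)=1, x'(0)=s with RK4; return x(T)."""
    h = T / n
    x, v = 1.0, s
    for _ in range(n):
        # RK4 for the first-order system x' = v, v' = x
        k1x, k1v = v, x
        k2x, k2v = v + h/2*k1v, x + h/2*k1x
        k3x, k3v = v + h/2*k2v, x + h/2*k2x
        k4x, k4v = v + h*k3v, x + h*k3x
        x = x + h/6*(k1x + 2*k2x + 2*k3x + k4x)
        v = v + h/6*(k1v + 2*k2v + 2*k3v + k4v)
    return x

for T in (1.0, 5.0, 20.0):
    s_exact = -math.cosh(T) / math.sinh(T)   # exact shooting slope
    err = abs(shoot(s_exact, T))             # should be 0; grows with T
    print(f"T = {T:5.1f}   |x(T)| with the exact slope = {err:.2e}")
```

Even with the analytically exact slope, rounding and truncation errors excite the growing mode e^t, so the terminal error grows rapidly with T, which is the qualitative failure mode reported for the classical two-point boundary value approach.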


Journal ArticleDOI
TL;DR: In this article, a class of quadratic minimization problems whose optimal control functions are partially singular is studied, and sufficient conditions for the nonexistence of optimal singular controls in quadratic minimization problems are discussed.

2 citations


Journal ArticleDOI
TL;DR: In this paper, a general method is described for approximating a specified curve with a continuous sequence of arcs in such a way as to minimize a suitable norm of the distance between the two over the approximation interval.
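As a loose illustration of the idea, with straight segments standing in for the paper's arcs and all choices below assumed: a curve can be approximated by a continuous piecewise-linear chain whose joint ordinates minimize the discrete L2 distance over the interval, via linear least squares on a hat-function basis.

```python
import numpy as np

# Sample the curve to approximate (an assumed example: sin on [0, pi])
t = np.linspace(0.0, np.pi, 400)
curve = np.sin(t)

knots = np.linspace(0.0, np.pi, 6)       # 5 segments joined continuously
# Hat (piecewise-linear) basis evaluated at the sample points; any linear
# combination of hats is a continuous chain of straight segments
spacing = knots[1] - knots[0]
B = np.maximum(0.0, 1.0 - np.abs(t[:, None] - knots[None, :]) / spacing)

# Least squares chooses the joint ordinates minimizing the discrete L2 norm
coef, *_ = np.linalg.lstsq(B, curve, rcond=None)
approx = B @ coef
rms = np.sqrt(np.mean((approx - curve) ** 2))
```

Replacing the hat basis with circular-arc segments joined with matching endpoints would move this sketch closer to the construction the abstract describes; the least-squares norm-minimization step is the common core.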