
Showing papers on "Bellman equation" published in 1983


Journal ArticleDOI
TL;DR: In this article, the authors consider general optimal stochastic control problems and the associated Hamilton-Jacobi-Bellman equations, and develop a general notion of weak solutions called viscosity solutions.
Abstract: We consider general optimal stochastic control problems and the associated Hamilton–Jacobi–Bellman equations. We develop a general notion of weak solutions – called viscosity solutions – of the …

424 citations




Journal ArticleDOI
TL;DR: In this paper, an approximation of the Hamilton-Jacobi-Bellman equation connected with the infinite horizon optimal control problem with discount is proposed, and the approximate solutions are shown to converge uniformly to the viscosity solution, in the sense of Crandall-Lions, of the original problem.
Abstract: An approximation of the Hamilton-Jacobi-Bellman equation connected with the infinite horizon optimal control problem with discount is proposed. The approximate solutions are shown to converge uniformly to the viscosity solution, in the sense of Crandall-Lions, of the original problem. Moreover, the approximate solutions are interpreted as value functions of some discrete time control problem. This makes it possible to construct, by dynamic programming, a minimizing sequence of piecewise constant controls.

178 citations
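To illustrate the flavor of such a scheme, here is a minimal value-iteration sketch for a discretized discounted problem. The 1-D dynamics f, running cost ell, discount rate lam, time step h, grid, and control set below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def value_iteration(f, ell, lam, h, xs, controls, tol=1e-8, max_iter=10_000):
    """Fixed point of the discrete-time Bellman operator
    v(x) = min_a { h*ell(x,a) + (1 - lam*h) * v(x + h*f(x,a)) },
    with linear interpolation of v on the grid xs."""
    v = np.zeros_like(xs)
    for _ in range(max_iter):
        q = np.stack([
            h * ell(xs, a) + (1.0 - lam * h) * np.interp(xs + h * f(xs, a), xs, v)
            for a in controls
        ])
        v_new = q.min(axis=0)           # Bellman operator (a contraction for lam*h < 1)
        done = np.max(np.abs(v_new - v)) < tol
        v = v_new
        if done:
            break
    return v

# Example: steer x' = a toward the origin with running cost x^2 + a^2.
xs = np.linspace(-2.0, 2.0, 401)
v = value_iteration(f=lambda x, a: a,
                    ell=lambda x, a: x**2 + a**2,
                    lam=1.0, h=0.05, xs=xs,
                    controls=np.linspace(-1.0, 1.0, 21))
```

As the abstract indicates, the same iteration read forward in time yields a piecewise constant control: at each step one simply records the minimizing control on the grid.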


Journal ArticleDOI
TL;DR: In this article, the authors consider general problems of optimal stochastic control and the associated Hamilton-Jacobi-Bellman equations and derive continuity results for the optimal cost function, characterizations of the optimal cost function as the maximum subsolution, regularity results, and uniqueness results.
Abstract: We consider general problems of optimal stochastic control and the associated Hamilton-Jacobi-Bellman equations. We recall first the usual derivation of the Hamilton-Jacobi-Bellman equations from the Dynamic Programming Principle. We then show and explain various results, including (i) continuity results for the optimal cost function, (ii) characterizations of the optimal cost function as the maximum subsolution, (iii) regularity results, and (iv) uniqueness results. We also develop the recent notion of viscosity solutions of Hamilton-Jacobi-Bellman equations.

66 citations
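For reference, in the standard discounted formulation the Hamilton-Jacobi-Bellman equation discussed in these papers reads as follows; the notation (drift b, diffusion σ, running cost f, discount λ) is generic, not taken from this particular paper.

```latex
% Discounted HJB equation for a controlled diffusion
% dX_t = b(X_t, a_t) dt + sigma(X_t, a_t) dW_t  (generic notation):
\[
\sup_{a \in A}\Big\{\,
\lambda u(x)
- b(x,a)\cdot Du(x)
- \tfrac{1}{2}\,\mathrm{tr}\!\big(\sigma\sigma^{\top}(x,a)\,D^{2}u(x)\big)
- f(x,a)
\Big\} = 0 .
\]
```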



Journal ArticleDOI
TL;DR: An estimate is obtained for the generalized subgradients of the optimal value function associated with a parameterized nonlinear programming problem, which yields estimates for 'marginal values' with respect to the parameters.

Abstract: Second-order necessary conditions in nonlinear programming are derived by a new method that does not require the usual sort of constraint qualification. In terms of the multiplier vectors appearing in such second-order conditions, an estimate is obtained for the generalized subgradients of the optimal value function associated with a parameterized nonlinear programming problem. This yields estimates for 'marginal values' with respect to the parameters. The main theoretical tools are the augmented Lagrangian and, despite the assumption of second-order smoothness of the objective and constraint functions, the subdifferential calculus that has recently been developed for nonsmooth, nonconvex functions.

16 citations
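The augmented Lagrangian mentioned here has, for an inequality-constrained problem min f_0(x) subject to f_i(x) ≤ 0, the standard quadratic-penalty form shown below; the notation is generic and not necessarily the paper's.

```latex
% Augmented Lagrangian with multipliers y_i >= 0 and penalty parameter r > 0:
\[
L_r(x, y) \;=\; f_0(x) \;+\; \frac{1}{2r}\sum_{i=1}^{m}
\Big( \max\{0,\; y_i + r f_i(x)\}^{2} \;-\; y_i^{2} \Big).
\]
```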


Journal ArticleDOI
TL;DR: A lower bound on the achievable minimum of the loss function is given and a suboptimal control strategy is derived by using a truncated Taylor series to approximate the expected future loss in the Bellman equation.
Abstract: Optimal control of linear discrete stochastic systems with linear input constraints is considered. A lower bound on the achievable minimum of the loss function is given. A suboptimal control strategy is derived by using a truncated Taylor series to approximate the expected future loss in the Bellman equation. The performance of the suboptimal controller is studied using Monte Carlo simulation and the obtained loss is compared with the lower bound.

15 citations
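A minimal sketch of the approximation idea: replace the expected future loss E[V(x_next)] in the Bellman recursion by a second-order Taylor expansion around the predicted mean, then minimize one step ahead over the constrained input set. The linear dynamics, value surrogate V, and grid of admissible inputs below are hypothetical stand-ins, not the paper's model.

```python
import numpy as np

def expected_future_loss(V, hess_V, x_mean, noise_cov):
    # Second-order Taylor approximation of E[V(x)] for x ~ (x_mean, noise_cov):
    # the first-order term vanishes at the mean, leaving the trace correction.
    return V(x_mean) + 0.5 * np.trace(hess_V(x_mean) @ noise_cov)

def suboptimal_control(x, A, B, noise_cov, V, hess_V, stage_loss, u_grid):
    # One-step Bellman minimization over the admissible (constrained) inputs.
    costs = [stage_loss(x, u)
             + expected_future_loss(V, hess_V, A @ x + B * u, noise_cov)
             for u in u_grid]
    return u_grid[int(np.argmin(costs))]

# Example with a made-up 2-state system and quadratic surrogate V(x) = x' P x.
P = np.diag([1.0, 0.5])
u = suboptimal_control(np.array([1.0, -0.5]),
                       A=np.array([[1.0, 0.1], [0.0, 1.0]]),
                       B=np.array([0.0, 0.1]),
                       noise_cov=0.01 * np.eye(2),
                       V=lambda z: z @ P @ z,
                       hess_V=lambda z: 2 * P,
                       stage_loss=lambda z, u: z @ z + u**2,
                       u_grid=np.linspace(-1.0, 1.0, 41))
```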


Journal ArticleDOI
TL;DR: In this paper, the existence of a maximum bounded Lipschitz-continuous solution to a system of first-order quasi-variational inequalities is proved, and the solution is interpreted as the value function of a deterministic optimal switching problem.

11 citations


Journal ArticleDOI
TL;DR: In this article, the authors prove existence and uniqueness results for the dynamic programming equation associated with controlled diffusion processes in Hilbert spaces.
Abstract: We prove the existence and uniqueness of solutions of the dynamic programming equation for controlled diffusion processes in Hilbert spaces.

8 citations


Journal ArticleDOI
TL;DR: In this paper, an extension of some theorems of Hajek (1976) is given using the notion of Clarke's generalized gradient, and a feedback law for optimal controls and Bellman's equation are obtained.

Journal ArticleDOI
TL;DR: In this article, the minimization of the mean-square deviation of a prescribed function from the class of monotone functions is considered, and two problems are considered: the first problem places no restriction on the initial value of the controls, while the second problem assumes that all the control functions must start at a fixed initial value.
Abstract: We consider the minimization of the mean-square deviation of a prescribed function from the class of monotone functions. Two problems are considered. The first problem places no restriction on the initial value of the controls, while the second problem assumes that all the control functions must start at a fixed initial value. Optimal controls are exhibited in both problems. Finally, we consider the situation with general payoff and dynamics and give the heuristic characterization of the value function for such problems.
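In a discrete setting, the least-squares projection onto monotone functions described above can be computed with the classical pool-adjacent-violators algorithm. The sketch below illustrates that discrete analogue; it is not the paper's construction, which treats the continuous control problem.

```python
def pav(y):
    """Least-squares projection of the sequence y onto nondecreasing
    sequences (pool adjacent violators)."""
    vals, weights = [], []
    for v in y:
        vals.append(float(v))
        weights.append(1)
        # Merge neighbouring blocks while monotonicity is violated;
        # each merged block is replaced by its weighted mean.
        while len(vals) > 1 and vals[-2] > vals[-1]:
            w = weights[-2] + weights[-1]
            m = (weights[-2] * vals[-2] + weights[-1] * vals[-1]) / w
            vals[-2:] = [m]
            weights[-2:] = [w]
    fit = []
    for v, w in zip(vals, weights):
        fit.extend([v] * w)
    return fit

# Example: pav([1, 3, 2, 2, 5]) -> [1.0, 2.33..., 2.33..., 2.33..., 5.0]
```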

Journal ArticleDOI
TL;DR: In this article, the optimal trajectories and the optimal feedback for control problems whose Hamiltonians are stratified functions are proved using sufficient optimality conditions expressed in terms of pseudo-solutions of the corresponding stratified Hamilton-Jacobi-Bellman equations.
Abstract: Several results concerning the optimal trajectories and the optimal feedback for control problems whose Hamiltonians are stratified functions are proved using sufficient optimality conditions expressed in terms of "pseudo-solutions" of the corresponding stratified Hamilton-Jacobi-Bellman equations.


Journal ArticleDOI
TL;DR: In this paper, the conditions under which the value function of the optimal control problem is homogeneous were examined, and a simple relation between the current Hamiltonian value and an aggregate value of the state variables was derived.

01 Oct 1983
TL;DR: This paper presents a general dynamic programming algorithm for the solution of optimal stochastic control problems concerning a class of discrete event systems.
Abstract: This paper presents a general dynamic programming algorithm for the solution of optimal stochastic control problems concerning a class of discrete event systems. The emphasis is put on the numerical technique used for the approximation of the solution of the dynamic programming equation. This approach can be efficiently used for the solution of optimal control problems concerning Markov renewal processes. This is illustrated on a group preventive replacement model generalizing an earlier work of the authors.
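As a rough illustration of the kind of dynamic programming equation involved, here is a generic value-iteration sketch for a discounted semi-Markov (Markov renewal) decision process. The cost vectors, embedded transition matrices, and sojourn-time discount factors are made-up placeholders, not the paper's replacement model.

```python
import numpy as np

def smdp_value_iteration(c, P, D, tol=1e-10):
    # c[a]  : expected one-transition cost of action a, per state
    # P[a]  : embedded transition probability matrix under action a
    # D[a]  : expected discount E[exp(-rho * sojourn)] per transition (i, j)
    v = np.zeros(P[0].shape[0])
    while True:
        # Bellman operator: v(i) = min_a { c_a(i) + sum_j P_a(i,j) D_a(i,j) v(j) }
        q = np.stack([c[a] + (P[a] * D[a]) @ v for a in range(len(P))])
        v_new, policy = q.min(axis=0), q.argmin(axis=0)
        if np.max(np.abs(v_new - v)) < tol:
            return v_new, policy
        v = v_new

# Tiny two-state, two-action example with made-up numbers:
P = [np.array([[0.9, 0.1], [0.2, 0.8]]), np.array([[0.5, 0.5], [0.6, 0.4]])]
D = [0.95 * np.ones((2, 2)), 0.90 * np.ones((2, 2))]
c = [np.array([1.0, 4.0]), np.array([2.0, 1.0])]
v, policy = smdp_value_iteration(c, P, D)
```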


Book ChapterDOI
01 Jan 1983
TL;DR: A two-stage combination of a nominal feedback control with a feedforward perturbation control is proposed as an alternative to methods that must operate backward from the end of the planning horizon.

Abstract: In economic decision making, the common approach to obtaining adaptive regulators is to combine Bellman's principle of optimality with a perturbation analysis that was originally motivated purely by technical considerations [1]; [4]. Because this methodology operates backward from the end of the planning horizon, the main difficulty in practical applications is the lack of information. This paper presents a somewhat different approach: a two-stage combination of a nominal feedback control with a feedforward perturbation control.

Journal ArticleDOI
TL;DR: In this paper, a one-dimensional Wiener plus independent Poisson control process has an integrated, discounted non-quadratic cost function with asymmetric bounds on the non-anticipative control, assumed to be a function of the current state.
Abstract: A one-dimensional Wiener plus independent Poisson control process has an integrated, discounted non-quadratic cost function with asymmetric bounds on the non-anticipative control, assumed to be a function of the current state. A Bellman equation and maximum principle for partial differential-difference equations may be used to obtain the optimal closed-loop control if some assumptions on the asymptotic behaviour of certain partial differential-difference equations are met. The finite and infinite integral cases are treated separately.
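In generic notation, a discounted Bellman equation of the differential-difference type referred to here would take the form below; the symbols (discount α, diffusion coefficient σ, Poisson rate ν, jump size J, running cost ℓ, control bounds) are my assumptions for a state of the form dx = u dt + σ dW + jumps, not the paper's exact model.

```latex
% Discounted Bellman equation for dx = u dt + sigma dW + Poisson jumps:
\[
\alpha V(x) \;=\; \min_{\underline{u}\,\le\, u\,\le\,\overline{u}}
\big\{\, \ell(x,u) + u\,V'(x) \,\big\}
\;+\; \frac{\sigma^{2}}{2}\,V''(x)
\;+\; \nu\,\big( \mathbb{E}\,V(x+J) - V(x) \big).
\]
```

The nonlocal term ν(E V(x+J) − V(x)) contributed by the Poisson component is what makes this a differential-difference rather than a purely differential equation.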

25 Feb 1983
TL;DR: In this paper, the optimal control of stochastic dynamic models with observable and unobservable coefficients is derived by means of the stochastic Bellman equation.

Abstract: The problem of optimization of stochastic dynamic systems with random coefficients is discussed. Systems with both Wiener processes and uncertain random-process disturbances are dealt with; these include certain bilinear stochastic systems. The purpose is to study the optimal control and, to some extent, the state estimation of such bilinear stochastic systems. By means of the stochastic Bellman equation, the optimal control of stochastic dynamic models with observable and unobservable coefficients is derived. The stochastic-system model considered is an observable system with random coefficients that are a function of the solution of a certain unobservable Markov process with information data. Under the assumptions that the solution of the stochastic differential equation for the dynamic model results in an admissible control and that the measurable information on all random parameters depends on the conditional-mean estimate of the unobservable stochastic process, the optimal control is a linear function of the observable states and a nonlinear function of the random parameters.

Journal ArticleDOI
TL;DR: In this article, a one-dimensional Wiener control problem with integral discounted quadratic cost function and asymmetric bounds on the control is considered, and the optimal control is explicitly found.
Abstract: A one-dimensional Wiener control problem with integral discounted quadratic cost function and asymmetric bounds on the control is considered, with infinite horizon. The optimal control is found explicitly. Bellman equations and Ito integrals are used to show optimality.

Journal ArticleDOI
TL;DR: In this paper, a one-dimensional Wiener plus independent Poisson noise control problem, with asymmetric control bounds and integral discounted quadratic cost over an infinite horizon, is considered, and the resultant Bellman equations are solved, allowing the optimal control to be expressed explicitly in closed-loop form.
Abstract: A one-dimensional Wiener plus independent Poisson noise control problem, with asymmetric control bounds and integral discounted quadratic cost over an infinite horizon, is considered. The resultant Bellman equations are solved, allowing the optimal control to be expressed explicitly in closed-loop form.