Topic
Bellman equation
About: Bellman equation is a research topic. Over the lifetime, 5884 publications have been published within this topic receiving 135589 citations.
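The Bellman optimality equation characterizes the optimal value function of a Markov decision process as the fixed point V(s) = max_a [R(s,a) + γ Σ_s' P(s'|s,a) V(s')]. A minimal sketch of solving it by value iteration on a hypothetical 2-state, 2-action MDP (all transition probabilities and rewards below are illustrative, not taken from any of the listed papers):

```python
import numpy as np

# Hypothetical 2-state, 2-action MDP (numbers are illustrative).
P = np.array([                  # P[a, s, s'] transition probabilities
    [[0.9, 0.1], [0.2, 0.8]],   # action 0
    [[0.5, 0.5], [0.0, 1.0]],   # action 1
])
R = np.array([[1.0, 0.0],       # R[a, s] expected one-step reward
              [2.0, -1.0]])
gamma = 0.9                     # discount factor

V = np.zeros(2)
for _ in range(1000):
    # Bellman optimality operator:
    # V(s) = max_a [ R(a,s) + gamma * sum_s' P(a,s,s') V(s') ]
    Q = R + gamma * (P @ V)     # Q[a, s]
    V_new = Q.max(axis=0)
    if np.max(np.abs(V_new - V)) < 1e-10:
        V = V_new
        break
    V = V_new

# At the fixed point, V satisfies the Bellman equation to numerical precision.
residual = np.max(np.abs(V - (R + gamma * (P @ V)).max(axis=0)))
```

Because the operator is a γ-contraction, the iterates converge geometrically from any starting point; the residual check confirms the fixed-point property.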
Papers published on a yearly basis
Papers
TL;DR: In this article, a penalty term is added to the objective function of each minimization to discourage the optimizer from finding solutions in regions of the state space where the local data density is too low.
68 citations
TL;DR: In this article, the authors prove optimality inequalities of dynamic programming for viscosity sub- and supersolutions of the associated Bellman-Isaacs equations for stochastic differential games.
68 citations
TL;DR: In this article, a suboptimal feedback control technique for nonlinear systems is proposed, based on a method for determining approximate solutions of the Hamilton-Jacobi-Bellman equation.
Abstract: A nonlinear-system suboptimal feedback control technique based on a method for determining approximate solutions of the Hamilton-Jacobi-Bellman equation.
68 citations
11 Dec 1996
TL;DR: In this article, a solution to the infinite-time linear quadratic optimal control (ITLQOC) problem with state and control constraints is presented, and it is shown that a single, finite-dimensional, convex program of known size can yield this solution.
Abstract: This work presents a solution to the infinite-time linear quadratic optimal control (ITLQOC) problem with state and control constraints. It is shown that a single, finite dimensional, convex program of known size can yield this solution. Properties of the resulting value function, with respect to initial conditions, are also established and are shown to be useful in determining the aforementioned problem size. An example illustrating the method is finally presented.
68 citations
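For reference, the unconstrained finite-horizon counterpart of the LQ problem above admits a closed-form solution via the backward Riccati recursion. A minimal numpy sketch (the matrices are illustrative, and the paper's state/control constraints and convex-programming formulation are not reproduced here):

```python
import numpy as np

# Hedged sketch: UNCONSTRAINED finite-horizon LQ control via the backward
# Riccati recursion. A, B, Q, R are illustrative only; the constrained
# infinite-time formulation from the paper is not handled here.
A = np.array([[1.0, 0.1], [0.0, 1.0]])   # double-integrator-like dynamics
B = np.array([[0.0], [0.1]])
Q = np.eye(2)                             # state cost
Rm = np.array([[0.1]])                    # control cost
N = 50                                    # horizon length

P = Q.copy()                              # terminal cost P_N = Q
gains = []
for _ in range(N):
    # K_k = (R + B' P B)^{-1} B' P A ;  P_{k} = Q + A' P (A - B K_k)
    K = np.linalg.solve(Rm + B.T @ P @ B, B.T @ P @ A)
    P = Q + A.T @ P @ (A - B @ K)
    gains.append(K)                       # gains[0] = K_{N-1}, ..., gains[-1] = K_0

# The value function at the initial state is quadratic: J*(x0) = x0' P_0 x0.
x0 = np.array([1.0, 0.0])
predicted = x0 @ P @ x0

# Simulate the closed loop forward and accumulate the actual cost.
x = x0.copy()
cost = 0.0
for K in reversed(gains):                 # apply K_0 first
    u = -K @ x
    cost += x @ Q @ x + u @ Rm @ u
    x = A @ x + B @ u
cost += x @ Q @ x                         # terminal cost
```

The simulated closed-loop cost matches x0' P_0 x0, which is the dynamic-programming (Bellman) characterization of the LQ value function.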
TL;DR: In this paper, the authors proposed a provably convergent approximate dynamic programming (ADP) algorithm called Monotone-ADP that exploits the monotonicity of the value functions to increase the rate of convergence.
Abstract: Many sequential decision problems can be formulated as Markov decision processes (MDPs) where the optimal value function (or cost-to-go function) can be shown to satisfy a monotone structure in some or all of its dimensions. When the state space becomes large, traditional techniques, such as the backward dynamic programming algorithm (i.e., backward induction or value iteration), may no longer be effective in finding a solution within a reasonable time frame, and thus we are forced to consider other approaches, such as approximate dynamic programming (ADP). We propose a provably convergent ADP algorithm called Monotone-ADP that exploits the monotonicity of the value functions to increase the rate of convergence. In this paper, we describe a general finite-horizon problem setting where the optimal value function is monotone, present a convergence proof for Monotone-ADP under various technical assumptions, and show numerical results for three application domains: optimal stopping, energy storage/allocation, and glycemic control for diabetes patients. The empirical results indicate that by taking advantage of monotonicity, we can attain high quality solutions within a relatively small number of iterations, using up to two orders of magnitude less computation than is needed to compute the optimal solution exactly.
68 citations
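The monotonicity projection at the heart of Monotone-ADP can be illustrated on a toy scalar problem: after each noisy value update, the estimates are projected back onto the set of nondecreasing functions, which also propagates information to states that have not yet been visited. A hedged sketch (the target values, noise model, and step sizes below are illustrative, not the paper's):

```python
import numpy as np

# Hedged sketch of the core idea in Monotone-ADP: stochastic value updates
# followed by a projection onto monotone (nondecreasing) value functions.
# The "true" values and noise are illustrative stand-ins for sampled
# Bellman backups.
rng = np.random.default_rng(0)

S = 20                            # states 0..S-1; true value increases in s
true_V = np.linspace(0.0, 1.0, S)

V = np.zeros(S)
for n in range(1, 5001):
    s = rng.integers(S)           # visit a random state
    v_hat = true_V[s] + rng.normal(scale=0.1)   # noisy value observation
    alpha = 1.0 / n ** 0.7        # diminishing step size
    V[s] = (1 - alpha) * V[s] + alpha * v_hat
    # Monotonicity projection: force V to stay nondecreasing in s by
    # raising later estimates and lowering earlier ones as needed.
    V[s:] = np.maximum(V[s:], V[s])
    V[:s + 1] = np.minimum(V[:s + 1], V[s])

monotone = bool(np.all(np.diff(V) >= -1e-12))
```

Because each projection preserves monotonicity, a single observation at state s immediately tightens the estimates at every other state, which is the mechanism the paper credits for the faster empirical convergence.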