scispace - formally typeset
Topic

Bellman equation

About: The Bellman equation is a research topic. Over its lifetime, 5,884 publications have been published on this topic, receiving 135,589 citations.
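The Bellman equation at the heart of this topic can be illustrated with a minimal value-iteration sketch on a toy two-state, two-action MDP. All numbers below are hypothetical, chosen only for illustration:

```python
import numpy as np

# Hypothetical MDP for illustration only.
# P[a][s, s'] = transition probability under action a; R[s, a] = immediate reward.
P = [np.array([[0.9, 0.1],
               [0.2, 0.8]]),
     np.array([[0.1, 0.9],
               [0.7, 0.3]])]
R = np.array([[1.0, 0.0],
              [0.0, 2.0]])
gamma = 0.9  # discount factor

# Value iteration: repeatedly apply the Bellman optimality operator
#   V(s) <- max_a [ R(s, a) + gamma * sum_s' P(s' | s, a) V(s') ]
V = np.zeros(2)
for _ in range(500):
    Q = np.stack([R[:, a] + gamma * P[a] @ V for a in range(2)], axis=1)
    V_new = Q.max(axis=1)
    if np.max(np.abs(V_new - V)) < 1e-10:
        V = V_new
        break
    V = V_new
```

Because the Bellman operator is a gamma-contraction, the iteration converges to the unique fixed point, which is the optimal value function.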


Papers
Proceedings Article
29 Nov 1993
TL;DR: This work uses second-order local trajectory optimization to generate locally optimal plans and local models of the value function and its derivatives, and maintains global consistency of the local models of the value function, guaranteeing that the locally optimal plans are actually globally optimal.
Abstract: Dynamic programming provides a methodology to develop planners and controllers for nonlinear systems. However, general dynamic programming is computationally intractable. We have developed procedures that allow more complex planning and control problems to be solved. We use second order local trajectory optimization to generate locally optimal plans and local models of the value function and its derivatives. We maintain global consistency of the local models of the value function, guaranteeing that our locally optimal plans are actually globally optimal, up to the resolution of our search procedures.
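The local quadratic value-function models this abstract describes can be sketched, in the exact quadratic (LQR) special case, as a backward Riccati difference equation on a scalar system. The dynamics and cost numbers below are illustrative assumptions, not taken from the paper:

```python
# Backward pass of discrete-time LQR on a scalar system: the value
# function at each step k is quadratic, V_k(x) = p[k] * x**2, and the
# recursion below is the Riccati difference equation.
a, b, q, r = 1.0, 0.5, 1.0, 0.1   # hypothetical scalar dynamics and cost weights
N = 20                             # horizon length
p = [0.0] * (N + 1)                # terminal value V_N = 0
gains = [0.0] * N                  # optimal feedback gains u_k = -gains[k] * x_k
for k in range(N - 1, -1, -1):
    s = r + b * p[k + 1] * b
    gains[k] = (b * p[k + 1] * a) / s
    p[k] = q + a * p[k + 1] * a - (a * p[k + 1] * b) ** 2 / s
```

For nonlinear systems, methods in the spirit of this paper build such quadratic models only locally along a trajectory, then stitch them together consistently.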

103 citations

Journal ArticleDOI
TL;DR: This example indicates that for some mechanical engineering optimization problems, using the multicriterion optimization approach, the authors can automatically obtain a solution that is optimal and acceptable to the designer.

102 citations

Journal ArticleDOI
TL;DR: It is shown that the proposed OPFB method is more powerful than the static OPFB as it is equivalent to a state-feedback control policy and is successfully used to solve a regulation and a tracking problem.
Abstract: A model-free off-policy reinforcement learning algorithm is developed to learn the optimal output-feedback (OPFB) solution for linear continuous-time systems. The proposed algorithm has the important feature of being applicable to the design of optimal OPFB controllers for both regulation and tracking problems. To provide a unified framework for both optimal regulation and tracking, a discounted performance function is employed and a discounted algebraic Riccati equation (ARE) is derived which gives the solution to the problem. Conditions on the existence of a solution to the discounted ARE are provided and an upper bound for the discount factor is found to assure the stability of the optimal control solution. To develop an optimal OPFB controller, it is first shown that the system state can be constructed using some limited observations of the system output over a period of the system's history. A Bellman equation is then developed to evaluate a control policy and find an improved policy simultaneously, using only these limited observations of the system output. Then, using this Bellman equation, a model-free off-policy RL-based OPFB controller is developed without requiring knowledge of the system state or the system dynamics. It is shown that the proposed OPFB method is more powerful than the static OPFB as it is equivalent to a state-feedback control policy. The proposed method is successfully used to solve a regulation and a tracking problem.
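As a rough sketch of the discounted ARE mentioned above: when the dynamics are known, the discounted continuous-time ARE reduces to a standard ARE for a state matrix shifted by half the discount factor. The matrices below are an assumed toy double integrator, not from the paper:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Hypothetical single-input double integrator; values are illustrative only.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])
gamma = 0.1  # discount factor (assumed small enough to preserve stability)

# The discounted ARE  A'P + PA - gamma*P - P B R^{-1} B' P + Q = 0
# is the standard ARE for the shifted matrix A - (gamma/2) I.
A_shift = A - 0.5 * gamma * np.eye(2)
P = solve_continuous_are(A_shift, B, Q, R)
K = np.linalg.solve(R, B.T @ P)  # optimal state-feedback gain, u = -K x
```

The paper's contribution is learning this solution model-free from output data; the sketch above only shows the model-based equation being solved.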

102 citations

Journal ArticleDOI
TL;DR: It is proved that any finite-horizon value function of the DSLQR problem is the pointwise minimum of a finite number of quadratic functions that can be obtained recursively using the so-called switched Riccati mapping.
Abstract: In this paper, we derive some important properties for the finite-horizon and the infinite-horizon value functions associated with the discrete-time switched LQR (DSLQR) problem. It is proved that any finite-horizon value function of the DSLQR problem is the pointwise minimum of a finite number of quadratic functions that can be obtained recursively using the so-called switched Riccati mapping. It is also shown that under some mild conditions, the family of the finite-horizon value functions is homogeneous (of degree 2), is uniformly bounded over the unit ball, and converges exponentially fast to the infinite-horizon value function. The exponential convergence rate of the value iterations is characterized analytically in terms of the subsystem matrices.
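The switched Riccati mapping described above propagates a *set* of quadratics backward in time, and the finite-horizon value function is the pointwise minimum over that set. A minimal sketch, with two assumed illustrative modes and a short horizon:

```python
import numpy as np

# Hypothetical two-mode switched linear system x_{k+1} = A_i x_k + B_i u_k;
# the matrices are illustrative, not from the paper.
modes = [
    (np.array([[1.0, 0.1], [0.0, 1.0]]), np.array([[0.0], [0.1]])),
    (np.array([[1.0, 0.0], [0.2, 1.0]]), np.array([[0.1], [0.0]])),
]
Qc, Rc = np.eye(2), np.array([[1.0]])

def riccati_map(P, A, B):
    # One-step discrete-time Riccati update for a single mode.
    S = Rc + B.T @ P @ B
    return Qc + A.T @ P @ A - A.T @ P @ B @ np.linalg.solve(S, B.T @ P @ A)

# Switched Riccati mapping: apply every mode's update to every matrix in
# the current set; V_N(x) = min_P x' P x over the resulting set.
H = [np.zeros((2, 2))]          # terminal cost V_0 = 0
for _ in range(3):              # horizon N = 3
    H = [riccati_map(P, A, B) for P in H for (A, B) in modes]

def value(x):
    return min(x @ P @ x for P in H)
```

Note that the set grows by a factor equal to the number of modes at each step, which is why the paper's finite representation and convergence results matter in practice.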

101 citations

Journal ArticleDOI
TL;DR: Convergence analysis is developed to show that the iterative value functions of heterogeneous multi-agent differential graphical games can converge to the Nash equilibrium.

101 citations


Network Information
Related Topics (5)
Optimal control: 68K papers, 1.2M citations (87% related)
Bounded function: 77.2K papers, 1.3M citations (85% related)
Markov chain: 51.9K papers, 1.3M citations (85% related)
Linear system: 59.5K papers, 1.4M citations (84% related)
Optimization problem: 96.4K papers, 2.1M citations (83% related)
Performance Metrics
No. of papers in the topic in previous years:

Year    Papers
2023    261
2022    537
2021    369
2020    411
2019    348
2018    353