The Linear Programming Approach to Approximate Dynamic Programming
TLDR
This article proposes an efficient method based on linear programming for approximating solutions to large-scale stochastic control problems, with error bounds that guide the selection of basis functions and state-relevance weights.
Abstract:
The curse of dimensionality gives rise to prohibitive computational requirements that render infeasible the exact solution of large-scale stochastic control problems. We study an efficient method based on linear programming for approximating solutions to such problems. The approach "fits" a linear combination of pre-selected basis functions to the dynamic programming cost-to-go function. We develop error bounds that offer performance guarantees and also guide the selection of both basis functions and "state-relevance weights" that influence quality of the approximation. Experimental results in the domain of queueing network control provide empirical support for the methodology.
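The approximate linear program described above can be sketched on a toy problem. In the following minimal example, the MDP instance, the basis functions, and the state-relevance weights are all illustrative choices, not taken from the paper: the decision variables are the basis-function weights r, the objective maximizes the state-relevance-weighted value c'(Φr), and one constraint per state-action pair enforces (Φr)(x) ≤ g(x,a) + α(P_aΦr)(x), so any feasible Φr lower-bounds the true cost-to-go J*.

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative small MDP: 4 states, 2 actions, discounted cost (alpha = 0.9).
rng = np.random.default_rng(0)
n, m, alpha = 4, 2, 0.9
P = rng.random((m, n, n))
P /= P.sum(axis=2, keepdims=True)   # row-stochastic transition matrix per action
g = rng.random((m, n))              # stage cost g(x, a), indexed [action, state]

# Pre-selected basis functions (an arbitrary choice for illustration):
# a constant function and a linear-in-state-index function.
Phi = np.column_stack([np.ones(n), np.arange(n, dtype=float)])  # n x K, K = 2
c = np.full(n, 1.0 / n)             # uniform state-relevance weights

# Approximate LP: maximize c'(Phi r)  s.t.  (Phi - alpha P_a Phi) r <= g_a  for all a.
A_ub = np.vstack([Phi - alpha * (P[a] @ Phi) for a in range(m)])
b_ub = np.concatenate([g[a] for a in range(m)])
res = linprog(-(Phi.T @ c), A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None)] * Phi.shape[1])
J_tilde = Phi @ res.x               # fitted cost-to-go approximation

# Compare against the exact cost-to-go J* obtained by value iteration:
# every ALP-feasible Phi r satisfies Phi r <= J* pointwise.
J = np.zeros(n)
for _ in range(1000):
    J = np.min(g + alpha * (P @ J), axis=0)
print(bool(np.all(J_tilde <= J + 1e-6)))  # True
```

Note that the LP has only K = 2 variables regardless of the state-space size; the number of constraints still scales with the number of state-action pairs, which is the issue the paper's constraint-sampling line of work addresses.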
Citations
Journal ArticleDOI
Reinforcement learning and adaptive dynamic programming for feedback control
Frank L. Lewis, Draguna Vrabie +1 more
TL;DR: This work describes mathematical formulations for reinforcement learning and a practical implementation method known as adaptive dynamic programming that give insight into the design of controllers for man-made engineered systems that both learn and exhibit optimal behavior.
Book
Algorithms for Reinforcement Learning
TL;DR: This book focuses on those reinforcement learning algorithms that build on the powerful theory of dynamic programming, gives a fairly comprehensive catalog of learning problems, and describes the core ideas, followed by a discussion of their theoretical properties and limitations.
Proceedings Article
Relative entropy policy search
TL;DR: The Relative Entropy Policy Search (REPS) method is suggested, which differs significantly from previous policy gradient approaches, yields an exact update step, and works well on typical reinforcement learning benchmark problems.
Dissertation
On the Sample Complexity of Reinforcement Learning
TL;DR: Novel algorithms with more restricted guarantees are suggested whose sample complexities are again independent of the size of the state space and depend linearly on the complexity of the policy class, but have only a polynomial dependence on the horizon time.
Journal ArticleDOI
Robust Dynamic Programming
TL;DR: It is proved that when the set of transition probability measures has a certain "rectangularity" property, all of the main results for finite- and infinite-horizon DP extend to natural robust counterparts.
References
Journal ArticleDOI
New linear program performance bounds for queueing networks
James R. Morrison, P. R. Kumar +1 more
TL;DR: In this paper, the transition probabilities of queueing networks are shown to be shift invariant on the relative interiors of faces, and the cost functions of interest are linear in the state.