Open Access Book
Markov Decision Processes: Discrete Stochastic Dynamic Programming
TLDR
Puterman provides a uniquely up-to-date, unified, and rigorous treatment of the theoretical, computational, and applied research on Markov decision process models, focusing primarily on infinite horizon discrete time models with discrete state spaces while also examining models with arbitrary state spaces, finite horizon models, and continuous-time discrete state models.
Abstract
From the Publisher:
The past decade has seen considerable theoretical and applied research on Markov decision processes, as well as the growing use of these models in ecology, economics, communications engineering, and other fields where outcomes are uncertain and sequential decision-making is needed. A timely response to this increased activity, Martin L. Puterman's new work provides a uniquely up-to-date, unified, and rigorous treatment of the theoretical, computational, and applied research on Markov decision process models. It discusses all major research directions in the field, highlights many significant applications of Markov decision process models, and explores numerous important topics that have previously been neglected or given cursory coverage in the literature.

Markov Decision Processes focuses primarily on infinite horizon discrete time models and models with discrete state spaces while also examining models with arbitrary state spaces, finite horizon models, and continuous-time discrete state models. The book is organized around optimality criteria, using a common framework centered on the optimality (Bellman) equation for presenting results. The results are presented in a "theorem-proof" format and elaborated on through both discussion and examples, including results that are not available in any other book. A two-state Markov decision process model, presented in Chapter 3, is analyzed repeatedly throughout the book and demonstrates many results and algorithms.

Markov Decision Processes covers recent research advances in such areas as countable state space models with average reward criterion, constrained models, and models with risk-sensitive optimality criteria. It also explores several topics that have received little or no attention in other books, including modified policy iteration, multichain models with average reward criterion, and sensitive optimality.
In addition, a Bibliographic Remarks section in each chapter comments on relevant historical literature.
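The abstract's central ideas, the Bellman optimality equation and the recurring two-state model, can be illustrated with a short value iteration sketch. This is not code from the book; the transition probabilities and rewards below are illustrative numbers chosen to resemble the kind of two-state example the abstract describes (two actions in one state, a single action in an absorbing-style second state), and the variable names are my own.

```python
import numpy as np

# Hypothetical two-state MDP: states {0, 1}.
# State 0 has two actions, state 1 has one; numbers are illustrative.
# P[s][a] is the transition distribution over next states,
# r[s][a] is the expected one-step reward.
P = {
    0: {0: np.array([0.5, 0.5]),    # action 0: stay or move with equal chance
        1: np.array([0.0, 1.0])},   # action 1: move to state 1 for sure
    1: {0: np.array([0.0, 1.0])},   # single action: remain in state 1
}
r = {0: {0: 5.0, 1: 10.0}, 1: {0: -1.0}}
gamma = 0.95  # discount factor

# Value iteration: repeatedly apply the Bellman optimality operator
#   v(s) <- max_a [ r(s,a) + gamma * sum_j p(j|s,a) v(j) ]
v = np.zeros(2)
for _ in range(10_000):
    v_new = np.array([
        max(r[s][a] + gamma * P[s][a] @ v for a in P[s])
        for s in (0, 1)
    ])
    if np.max(np.abs(v_new - v)) < 1e-8:
        v = v_new
        break
    v = v_new

# Greedy policy with respect to the converged values
policy = {s: max(P[s], key=lambda a: r[s][a] + gamma * P[s][a] @ v)
          for s in (0, 1)}
```

With these numbers the iteration converges to v(1) = -1/(1 - 0.95) = -20, and in state 0 the randomizing action is preferred despite its lower immediate reward, because committing to state 1 locks in the -1 stream; this is exactly the kind of trade-off the optimality equation resolves.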
Citations
Journal Article
Markov Decision Processes
Nicole Bäuerle, Ulrich Rieder, et al.
TL;DR: The theory of Markov decision processes is the theory of controlled Markov chains, which has found applications in areas such as computer science, engineering, operations research, biology, and economics.
Journal Article
Near-optimal Regret Bounds for Reinforcement Learning
TL;DR: For undiscounted reinforcement learning in Markov decision processes (MDPs), this paper presented a reinforcement learning algorithm with total regret O(DS√AT) after T steps for any unknown MDP with S states, A actions per state, and diameter D.
Proceedings Article
Generative Adversarial Imitation Learning
Jonathan Ho, Stefano Ermon, et al.
TL;DR: The authors propose a model-free imitation learning algorithm that obtains significant performance gains over existing model-free methods in imitating complex behaviors in large, high-dimensional environments, and show that a certain instantiation of their framework draws an analogy between imitation learning and generative adversarial networks.
Posted Content
Deep Reinforcement Learning: An Overview
TL;DR: This work discusses core RL elements, including value function, in particular, Deep Q-Network (DQN), policy, reward, model, planning, and exploration, and important mechanisms for RL, including attention and memory, unsupervised learning, transfer learning, multi-agent RL, hierarchical RL, and learning to learn.
Proceedings Article
SARSOP: Efficient Point-Based POMDP Planning by Approximating Optimally Reachable Belief Spaces
TL;DR: This work has developed a new point-based POMDP algorithm that exploits the notion of optimally reachable belief spaces to improve computational efficiency and substantially outperforms one of the fastest existing point-based algorithms.
References
Book
Dynamic Programming
TL;DR: The more the authors study the information processing aspects of the mind, the more perplexed and impressed they become, and it will be a very long time before they understand these processes sufficiently to reproduce them.
Journal Article
Finding Optimal (s, S) Policies Is About As Simple As Evaluating a Single Policy
Yu-Sheng Zheng, Awi Federgruen, et al.
TL;DR: A new algorithm for computing optimal (s, S) policies is derived based upon a number of new properties of the infinite horizon cost function c, as well as a new upper bound for optimal order-up-to levels S* and a new lower bound for ideal reorder levels s*.