Open Access Posted Content

Mean-field Markov decision processes with common noise and open-loop controls

TLDR
The correspondence between CMKV-MDP and a general lifted MDP on the space of probability measures is proved, and the dynamic programming Bellman fixed point equation satisfied by the value function is established.
Abstract
We develop an exhaustive study of Markov decision processes (MDP) under mean-field interaction, both on states and actions, in the presence of common noise, and when optimization is performed over open-loop controls on an infinite horizon. Such a model, called CMKV-MDP for conditional McKean-Vlasov MDP, arises and is obtained here rigorously, with a rate of convergence, as the asymptotic problem of N cooperative agents controlled by a social planner/influencer who observes the environment noises but not necessarily the individual states of the agents. We highlight the crucial role of relaxed controls and of the randomization hypothesis for this class of models with respect to classical MDP theory. We prove the correspondence between the CMKV-MDP and a general lifted MDP on the space of probability measures, and establish the dynamic programming Bellman fixed point equation satisfied by the value function, as well as the existence of ε-optimal randomized feedback controls. The arguments of the proof involve an original measurable optimal coupling for the Wasserstein distance. This provides a procedure for learning strategies in a large population of interacting collaborative agents. MSC Classification: 90C40, 49L20.
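As a schematic illustration of the lifted dynamic programming mentioned in the abstract (the notation below is a generic form used for mean-field control with common noise, not taken from the paper), the value function V, defined on the space of probability measures, satisfies a Bellman fixed point equation of the type

V(\mu) \,=\, \sup_{\pi} \Big\{ \hat{R}(\mu, \pi) \,+\, \beta \, \mathbb{E}\big[ V\big( \hat{F}(\mu, \pi, \varepsilon^0) \big) \big] \Big\},

where \mu stands for the conditional law of a representative agent's state given the common noise, \pi ranges over randomized feedback policies, \hat{R} and \hat{F} denote lifted reward and transition maps, \beta \in (0,1) is a discount factor, and \varepsilon^0 is the common noise; the hatted maps, \beta and \varepsilon^0 are placeholder symbols for this sketch only.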


Citations
Journal Article (DOI)

Old and New

R. Mateosian
01 Jul 2006
TL;DR: Richard Mateosian reviews old and new books, including Weinberg on Writing: The Fieldstone Method, The Art of Computer Programming, From Java to Ruby: Things Every Manager Should Know, and Introduction to DITA: A User Guide to the Darwin Information Typing Architecture.
Posted Content

An Overview of Multi-Agent Reinforcement Learning from Game Theoretical Perspective

TL;DR: This work provides a self-contained assessment of current state-of-the-art MARL techniques from a game-theoretic perspective, and is intended to serve as a stepping stone both for new researchers entering this fast-growing domain and for existing domain experts who want a panoramic view and new directions based on recent advances.
Posted Content

Model-Free Mean-Field Reinforcement Learning: Mean-Field MDP and Mean-Field Q-Learning

TL;DR: This work introduces generic model-free algorithms based on the state-action value function at the mean field level and proves convergence for a prototypical Q-learning method for mean field control problems.
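For a concrete feel of Q-learning at the mean-field level, here is a minimal, self-contained toy sketch in Python; the dynamics, reward, discretization, and all parameter choices below are assumptions made purely for illustration and are not the algorithm of the cited work.

```python
# Toy tabular Q-learning where the learner's "state" is a discretized population distribution.
# All modelling choices here (drift, reward, bins) are illustrative assumptions.
import numpy as np

n_bins = 11                     # bins for the proportion p of agents in state 1
actions = [0.0, 1.0]            # two aggregate actions pushing p down or up
gamma, alpha, eps = 0.9, 0.1, 0.1
rng = np.random.default_rng(0)

Q = np.zeros((n_bins, len(actions)))

def to_bin(p):
    """Map a proportion p in [0, 1] to a discrete bin index."""
    return min(int(p * (n_bins - 1) + 0.5), n_bins - 1)

def step(p, a, common_noise):
    """Toy mean-field transition: p drifts toward the action a,
    perturbed by a common noise shared by the whole population."""
    p_next = float(np.clip(0.8 * p + 0.2 * a + 0.05 * common_noise, 0.0, 1.0))
    reward = -abs(p_next - 0.5)  # toy objective: keep the population balanced
    return p_next, reward

p = 0.2
for t in range(20_000):
    s = to_bin(p)
    a = rng.integers(len(actions)) if rng.random() < eps else int(np.argmax(Q[s]))
    p_next, r = step(p, actions[a], rng.standard_normal())
    s_next = to_bin(p_next)
    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
    p = p_next

print(np.argmax(Q, axis=1))     # greedy aggregate action for each distribution bin
```

The point of the sketch is only the structure: the state seen by the learner is (a discretization of) the population distribution, which is the "mean field level" at which the state-action value function is defined.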

Numerical methods for mean field games and mean field type control

TL;DR: This work discusses numerical schemes for forward-backward systems of partial differential equations (PDEs), optimization techniques for variational problems driven by a Kolmogorov-Fokker-Planck PDE, an approach based on a monotone operator viewpoint, and stochastic methods relying on machine learning tools.
References
Book

Reinforcement Learning: An Introduction

TL;DR: This book provides a clear and simple account of the key ideas and algorithms of reinforcement learning, ranging from the history of the field's intellectual foundations to the most recent developments and applications.
Book

Dynamic Programming and Optimal Control

TL;DR: The leading and most up-to-date textbook on the far-ranging algorithmic methodology of Dynamic Programming, which can be used for optimal control, Markovian decision problems, planning and sequential decision making under uncertainty, and discrete/combinatorial optimization.
Book

Foundations of modern probability

TL;DR: In this book, the authors discuss Markov processes and their ergodic properties, together with their relation to PDEs and potential theory, with a main focus on the convergence of random processes, measures, and sets.
Book (DOI)

Mass transportation problems

TL;DR: In this book, modifications of the Monge-Kantorovich problem with relaxed or additional constraints are presented, and Kantorovich-type metrics are applied to various probabilistic limit theorems.
Book

Mean Field Games and Mean Field Type Control Theory

TL;DR: In this book, the authors give a general presentation of mean field games and of the mean field type control problem, both arising from games with a large number of players.