Journal ArticleDOI

Finitely additive stochastic games with Borel measurable payoffs

A. Maitra, +1 more
- 01 Jul 1998
- Vol. 27, Iss. 2, pp. 257-267
TLDR
It is proved that a two-person, zero-sum stochastic game with arbitrary state and action spaces, a finitely additive law of motion and a bounded Borel measurable payoff has a value.
Abstract
We prove that a two-person, zero-sum stochastic game with arbitrary state and action spaces, a finitely additive law of motion and a bounded Borel measurable payoff has a value.
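Here "has a value" carries its usual minimax meaning. Writing $\sigma$ and $\tau$ for the strategies of players I and II and $f$ for the bounded Borel measurable payoff, the theorem asserts that the lower and upper values of the game coincide:

```latex
\sup_{\sigma}\,\inf_{\tau}\;\mathbb{E}_{\sigma,\tau}[f]
\;=\;
\inf_{\tau}\,\sup_{\sigma}\;\mathbb{E}_{\sigma,\tau}[f].
```

This is a sketch of the standard definition, not the paper's notation; in the finitely additive setting, the expectation is understood with respect to the finitely additive measure on play histories induced by the pair of strategies together with the law of motion.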


Citations
Journal ArticleDOI

The determinacy of Blackwell games

TL;DR: In this paper, it was shown that Blackwell games with Borel measurable payoff functions are determined, by reducing them to perfect information games whose payoff functions are of approximately the same complexity.
Journal ArticleDOI

Tug-of-war with noise: A game-theoretic view of the $p$-Laplacian

TL;DR: The game ends when the position reaches some y ∈ ∂Ω, at which point player I receives the payoff F(y); for general bounded domains Ω and resolutive boundary functions F, this paper showed that, for sufficiently regular Ω, the value functions uε converge uniformly to the unique p-harmonic extension of F.
Book ChapterDOI

On Nash equilibria in stochastic games

TL;DR: It is shown that if each player has a reachability objective, that is, if the goal of each player i is to visit some subset of the states, then for every ε > 0 there exists an ε-Nash equilibrium in memoryless strategies; however, exact Nash equilibria need not exist.
Book ChapterDOI

Recursive Markov decision processes and recursive stochastic games

TL;DR: Recursive Markov Decision Processes (RMDPs) and Recursive Simple Stochastic Games (RSSGs) are introduced, and the decidability and complexity of algorithms for their analysis and verification are studied.

Recursive Concurrent Stochastic Games

TL;DR: It is shown that the upper bounds, even for qualitative (probability 1) termination, cannot be improved, even to NP, without a major breakthrough; in addition, the classic Hoffman-Karp strategy improvement approach is extended from the finite to an infinite state setting.