About: Zero-sum game is a research topic. Over its lifetime, 1251 publications have been published within this topic, receiving 19042 citations. The topic is also known as: zero-sum mentality.
01 Nov 2002 - Econometrica
Abstract: We establish global convergence results for stochastic fictitious play for four classes of games: games with an interior ESS, zero-sum games, potential games, and supermodular games. We do so by appealing to techniques from stochastic approximation theory, which relate the limit behavior of a stochastic process to the limit behavior of a differential equation defined by the expected motion of the process. The key result in our analysis of supermodular games is that the relevant differential equation defines a strongly monotone dynamical system. Our analyses of the other cases combine Lyapunov function arguments with a discrete choice theory result: that the choice probabilities generated by any additive random utility model can be derived from a deterministic model based on payoff perturbations that depend nonlinearly on the vector of choice probabilities.
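The zero-sum case above can be illustrated with a minimal simulation. The sketch below is not the paper's analysis; it is an assumed toy setup: matching pennies with a logit (softmax) smoothed best response, which is the choice rule an additive random-utility (logit) perturbation induces. The smoothing parameter eta and the round count are illustrative choices.

```python
import math
import random

def logit_choice(payoffs, eta=2.0):
    """Logit (softmax) choice probabilities -- the smoothed best response
    induced by an additive random-utility (logit) payoff perturbation."""
    exps = [math.exp(eta * u) for u in payoffs]
    total = sum(exps)
    return [e / total for e in exps]

def smoothed_fictitious_play(rounds=20000, seed=0):
    """Stochastic fictitious play in matching pennies (a zero-sum game).

    Each player tracks the opponent's empirical frequency of 'heads' and
    samples an action from a logit best response to that belief."""
    rng = random.Random(seed)
    row_h = col_h = 0  # counts of 'heads' played so far
    for t in range(rounds):
        # Beliefs: empirical frequency of the opponent playing heads.
        p_col = col_h / t if t else 0.5  # row's belief about column
        p_row = row_h / t if t else 0.5  # column's belief about row
        # Matching pennies: row wins on a match, column wins on a mismatch.
        pr_row_h = logit_choice([2 * p_col - 1, 1 - 2 * p_col])[0]
        pr_col_h = logit_choice([1 - 2 * p_row, 2 * p_row - 1])[0]
        if rng.random() < pr_row_h:
            row_h += 1
        if rng.random() < pr_col_h:
            col_h += 1
    return row_h / rounds, col_h / rounds
```

Consistent with the convergence result, the empirical frequencies settle near the (unique, symmetric) perturbed equilibrium at 1/2 for each player.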
01 Mar 2007
Abstract: In this paper, the optimal strategies for discrete-time linear quadratic zero-sum games related to the H-infinity optimal control problem are solved forward in time without knowing the system dynamical matrices. The idea is to solve for an action-dependent value function Q(x,u,w) of the zero-sum game instead of solving for the state-dependent value function V(x), which satisfies a corresponding game algebraic Riccati equation (GARE). Since the state and action spaces are continuous, two action networks and one critic network are used, adaptively tuned forward in time using adaptive critic methods. The result is a model-free Q-learning approximate dynamic programming approach that solves the zero-sum game forward in time. It is shown that the critic converges to the game value function and the action networks converge to the Nash equilibrium of the game. Proofs of convergence of the algorithm are given. It is proven that the algorithm amounts to a model-free iterative algorithm for solving the GARE of the linear quadratic discrete-time zero-sum game. The effectiveness of this method is shown by performing an H-infinity autopilot design for an F-16 aircraft.
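For intuition about the equation being solved, here is a model-based sketch of the GARE in the scalar case. This is not the paper's method: the paper's contribution is solving this equation model-free via Q-learning, whereas the fixed-point iteration below uses the (assumed, illustrative) plant parameters A, B, E directly.

```python
import numpy as np

# Scalar discrete-time plant x_{k+1} = A x + B u + E w, with control u
# (minimizer) and disturbance w (maximizer); parameters are illustrative.
A, B, E = 0.9, 1.0, 0.5
Q, R, gamma = 1.0, 1.0, 5.0  # state/control weights, H-infinity attenuation level

def gare_iteration(P=1.0, iters=200, tol=1e-10):
    """Fixed-point iteration on the scalar game algebraic Riccati equation:

        P = Q + A P A - v' M^{-1} v,

    where v = [B; E] P A and M is the saddle-point weighting block.  The
    optimal policies are [u; w] = -M^{-1} v x."""
    for _ in range(iters):
        M = np.array([[R + B * B * P, B * E * P],
                      [B * E * P, -gamma ** 2 + E * E * P]])
        v = np.array([B * P * A, E * P * A])
        P_new = Q + A * P * A - v @ np.linalg.solve(M, v)
        if abs(P_new - P) < tol:
            return P_new
        P = P_new
    return P
```

With gamma large enough, the w-block of M stays negative and the iteration converges to the game value kernel P > 0.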
Topics: Example of a game without a value (66%), Zero-sum game (61%), Algebraic Riccati equation (59%)
01 Jan 2011 - Automatica
Abstract: In this paper, a new iterative adaptive dynamic programming (ADP) method is proposed to solve a class of continuous-time nonlinear two-person zero-sum differential games. The idea is to use the ADP technique to obtain iteratively the optimal control pair that makes the performance index function reach the saddle point of the zero-sum differential game. If the saddle point does not exist, the mixed optimal control pair is obtained to make the performance index function reach the mixed optimum. Stability analysis of the nonlinear systems is presented, and the convergence of the performance index function is also proved. Two simulation examples are given to illustrate the performance of the proposed method.
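The saddle-point-versus-mixed-optimum dichotomy above is easiest to see in the finite matrix-game setting (the paper itself treats continuous-time differential games; this is only an assumed illustrative analogue). A saddle point exists exactly when the lower and upper values coincide; otherwise the game has only a mixed optimum.

```python
def game_values(A):
    """Lower and upper values of a zero-sum matrix game (row maximizes).

    A pure saddle point exists iff the two values coincide."""
    lower = max(min(row) for row in A)                                # maximin
    upper = min(max(row[j] for row in A) for j in range(len(A[0])))   # minimax
    return lower, upper

def mixed_value_2x2(A):
    """Closed-form mixed (randomized) value of a 2x2 zero-sum game.

    Valid when no pure saddle point exists, so the denominator is nonzero."""
    (a, b), (c, d) = A
    return (a * d - b * c) / (a + d - b - c)
```

For matching pennies [[1, -1], [-1, 1]] the lower and upper values are -1 and 1, so no saddle point exists and the mixed value is 0; for [[3, 1], [4, 2]] both values equal 2, a pure saddle point.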
01 Aug 2011 - Automatica
Abstract: In this paper we present an online adaptive control algorithm based on policy-iteration reinforcement learning techniques to solve the continuous-time (CT) multi-player non-zero-sum (NZS) game with infinite horizon for linear and nonlinear systems. NZS games allow players to have a cooperative team component and an individual selfish component of strategy. The adaptive algorithm learns online the solution of coupled Riccati equations and coupled Hamilton-Jacobi equations for linear and nonlinear systems, respectively. This adaptive control method finds, in real time, approximations of the optimal value and the NZS Nash equilibrium, while also guaranteeing closed-loop stability. The optimal adaptive algorithm is implemented as a separate actor/critic parametric network approximator structure for every player, and involves simultaneous continuous-time adaptation of the actor/critic networks. A persistence-of-excitation condition is shown to guarantee convergence of every critic to the actual optimal value function for that player. A detailed mathematical analysis is done for 2-player NZS games. Novel tuning algorithms are given for the actor/critic networks. Convergence to the Nash equilibrium is proven, and stability of the system is also guaranteed. This provides optimal adaptive control solutions for both non-zero-sum games and their special case, zero-sum games. Simulation examples show the effectiveness of the new algorithm.
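The coupled Riccati equations mentioned above can be written down explicitly in the scalar 2-player linear case. The sketch below is an offline, model-based illustration only (the paper's algorithm learns the solution online via actor/critic networks); the plant dx = (a x + b1 u1 + b2 u2) dt with cost weights q_i, r_i is assumed, and the symmetric parameters are chosen so the feedback Nash solution is known in closed form, P1 = P2 = 1.

```python
# Scalar continuous-time plant dx = (a x + b1 u1 + b2 u2) dt; player i
# minimizes the integral of q_i x^2 + r_i u_i^2 and plays u_i = -(b_i P_i / r_i) x.
a, b1, b2 = 1.0, 1.0, 1.0
q1, q2, r1, r2 = 1.0, 1.0, 1.0, 1.0

def coupled_riccati(P1=0.8, P2=0.8, iters=200, damping=0.5):
    """Damped fixed-point iteration on the coupled scalar AREs

        0 = q_i + b_i^2 P_i^2 / r_i + 2 P_i (a - b1^2 P1/r1 - b2^2 P2/r2).

    Damping keeps the iteration contractive near the Nash solution."""
    for _ in range(iters):
        a_cl = a - b1 * b1 * P1 / r1 - b2 * b2 * P2 / r2  # closed-loop dynamics
        P1_new = -(q1 + b1 * b1 * P1 * P1 / r1) / (2 * a_cl)
        P2_new = -(q2 + b2 * b2 * P2 * P2 / r2) / (2 * a_cl)
        P1 += damping * (P1_new - P1)
        P2 += damping * (P2_new - P2)
    return P1, P2
```

At the fixed point the closed-loop dynamics a_cl = a - P1 - P2 = -1 are stable, matching the closed-loop stability guarantee in the abstract.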
01 Jan 1994 - IEEE Transactions on Automatic Control
Abstract: The established theory of nonzero-sum games is used to solve a mixed H2/H-infinity control problem. Our idea is to use the two payoff functions associated with a two-player Nash game to represent the H2 and H-infinity criteria separately. We treat the state-feedback problem and find necessary and sufficient conditions for the existence of a solution. Both the finite- and infinite-time problems are considered. In the infinite-horizon case we present a full stability analysis. The resulting controller is a constant state-feedback law, characterized by the solution to a pair of cross-coupled Riccati equations, which may be solved using a standard numerical integration procedure. We begin our development by considering strategy sets containing only linear controllers. At the end of the paper we broaden the strategy sets to include a class of nonlinear controls. It turns out that this extension has no effect on the necessary and sufficient conditions for the existence of a solution or on the nature of the controllers.
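The "standard numerical integration procedure" for cross-coupled Riccati equations can be sketched in the scalar case. This is not the paper's specific mixed H2/H-infinity pair; it is an assumed generic two-player Nash pair, integrated backward in time (via time-to-go s = T - t) from the zero terminal condition with explicit Euler steps, with symmetric illustrative parameters whose steady state is P1 = P2 = 1.

```python
# Cross-coupled scalar Riccati ODEs for a two-player Nash game with plant
# dx = (a x + b1 u1 + b2 u2) dt and costs q_i x^2 + r_i u_i^2:
#   dP_i/ds = q_i + 2 a P_i - s_i P_i^2 - 2 s_j P_i P_j,   s_i = b_i^2 / r_i,
# integrated in time-to-go s = T - t from the terminal condition P_i(T) = 0.
a = 1.0
b1 = b2 = 1.0
q1 = q2 = 1.0
r1 = r2 = 1.0

def integrate_riccati(T=20.0, h=0.001):
    """Euler integration of the cross-coupled Riccati pair over horizon T."""
    s1, s2 = b1 * b1 / r1, b2 * b2 / r2
    P1 = P2 = 0.0  # terminal condition P_i(T) = 0
    for _ in range(int(T / h)):
        dP1 = q1 + 2 * a * P1 - s1 * P1 * P1 - 2 * s2 * P1 * P2
        dP2 = q2 + 2 * a * P2 - s2 * P2 * P2 - 2 * s1 * P2 * P1
        P1 += h * dP1
        P2 += h * dP2
    return P1, P2
```

For a long horizon the integrated solution approaches the constant steady-state pair, consistent with the constant state-feedback law in the infinite-horizon case.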