Book ChapterDOI

A Gradient Method for Approximating Saddle Points and Constrained Maxima

Kenneth J. Arrow, +1 more
- pp 45-60
TLDR
The authors define differentiable functions g(X) and f_j(X) with f_j(X) ≥ 0 for all X, form from them the function F(X, Y) given below, and study a gradient method for approximating its saddle points and thereby the corresponding constrained maxima.
Abstract
In the following, X and Y will be vectors with components X_i, Y_j. By X ≥ 0 will be meant X_i ≥ 0 for all i. Let g(X), f_j(X) (j = 1, ..., m) be functions with suitable differentiability properties, where f_j(X) ≥ 0 for all X, and define \( {\rm F}(X, Y)=g(X)+\sum_{j=1}^{m} Y_{j}\bigl\{1-[f_{j}(X)]^{1+\eta} \bigr\}\).
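As a rough illustration of the kind of procedure the title refers to, the following is a minimal numerical sketch of projected gradient ascent in X and descent in Y on F(X, Y), keeping both vectors non-negative. The particular objective g, constraint functions f_j, exponent η, step size, and iteration count are illustrative assumptions of this sketch, not taken from the chapter; the partial derivatives of F are approximated by central differences.

```python
# Minimal numerical sketch (illustrative assumptions, not the chapter's own
# procedure): projected gradient ascent in X and descent in Y on F(X, Y),
# with both vectors kept non-negative.
import numpy as np

eta = 0.5            # assumed exponent parameter in [f_j(X)]^(1 + eta)
step = 0.01          # assumed step size
n, m = 2, 2          # two decision variables, two constraint functions

def g(x):
    # assumed strictly concave objective (not from the chapter)
    return -((x[0] - 2.0) ** 2) - (x[1] - 2.0) ** 2

def f(x):
    # assumed constraint functions, non-negative on x >= 0
    return np.array([0.50 * x[0] + 0.50 * x[1],
                     0.25 * x[0] + 0.75 * x[1]])

def F(x, y):
    # the saddle function defined in the abstract
    fx = np.maximum(f(x), 0.0)           # guard the fractional power at the boundary
    return g(x) + np.dot(y, 1.0 - fx ** (1.0 + eta))

def num_grad(fun, z, eps=1e-6):
    # central-difference approximation of the gradient of fun at z
    grad = np.zeros_like(z)
    for i in range(len(z)):
        dz = np.zeros_like(z)
        dz[i] = eps
        grad[i] = (fun(z + dz) - fun(z - dz)) / (2.0 * eps)
    return grad

x = np.full(n, 0.1)                      # X >= 0
y = np.full(m, 1.0)                      # Y >= 0
for _ in range(5000):
    gx = num_grad(lambda v: F(v, y), x)  # ascend F in X
    gy = num_grad(lambda v: F(x, v), y)  # descend F in Y
    x = np.maximum(x + step * gx, 0.0)   # projection keeps X >= 0
    y = np.maximum(y - step * gy, 0.0)   # projection keeps Y >= 0

print("approximate saddle point  X:", x, " Y:", y, " F(X, Y):", F(x, y))
```

In this construction Y plays the role of the multiplier vector, and the region where each f_j(X)^{1+η} ≤ 1 is the feasible set whose constrained maximum is being approximated.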



Citations
Journal ArticleDOI

A Passivity-Based Approach to Nash Equilibrium Seeking Over Networks

TL;DR: This paper proposes augmented gradient-play dynamics with correction, in which players communicate locally only with their neighbors to compute estimates of the other players' actions; it exploits incremental passivity properties and shows that a synchronizing, distributed Laplacian feedback can be designed using relative estimates from the neighbors (a generic toy sketch of this idea follows).
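As a concrete, deliberately generic illustration of the idea summarized above: each player keeps an estimate of every player's action, corrects the estimate with a graph-Laplacian term driven by neighbors' estimates, and updates its own action by a gradient step on its own cost evaluated at its estimate. The three-player line graph, the quadratic costs, and the gains below are assumptions of this sketch, not the cited paper's algorithm or proof setup.

```python
# Toy sketch of consensus-augmented gradient play (assumed three-player line
# graph and quadratic costs; a generic illustration, not the cited paper's
# algorithm).
import numpy as np

N = 3
A = np.array([[0, 1, 0],                 # assumed adjacency of the line graph 1-2-3
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A           # graph Laplacian

def grad_i(i, x):
    # assumed costs J_i(x) = (x_i - 1)^2 + x_i * sum_{j != i} x_j;
    # returns dJ_i / dx_i at the action profile x
    return 2.0 * (x[i] - 1.0) + (x.sum() - x[i])

z = np.zeros((N, N))                     # row i = player i's estimate of all actions
alpha, gamma = 0.05, 0.5                 # assumed gradient and consensus gains
for _ in range(3000):
    z_new = z - gamma * (L @ z)          # Laplacian consensus on the estimates
    for i in range(N):
        # player i's own action: gradient step on its own cost, evaluated
        # at its local estimate of the action profile
        z_new[i, i] = z[i, i] - alpha * grad_i(i, z[i])
    z = z_new

# for these assumed costs the Nash equilibrium is x_i = 0.5 for every player
print("actions after consensus-augmented gradient play:", np.diag(z))
```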
Journal ArticleDOI

Distributed convergence to Nash equilibria in two-network zero-sum games

TL;DR: In this article, the authors consider a class of strategic scenarios in which two networks of agents have opposing objectives with regard to the optimization of a common objective function. They synthesize a distributed saddle-point strategy and establish its convergence to the Nash equilibrium for the class of strictly concave-convex and locally Lipschitz objective functions.
Journal ArticleDOI

A Passivity-Based Approach to Nash Equilibrium Seeking over Networks

TL;DR: In this paper, the authors consider the problem of distributed Nash equilibrium seeking over networks, a setting in which players do not have the full information that instantaneous all-to-all player communication would provide, and examine how to modify the gradient-play dynamics for this case of partial or networked information between players.
Book ChapterDOI

Reduction of Constrained Maxima to Saddle-point Problems

TL;DR: The usual applications of the method of Lagrangian multipliers, used in locating constrained extrema (say maxima), involve setting up the Lagrangian expression, where f(x) is being (say) maximized with respect to the (vector) variable x = (x_1, ..., x_N) subject to the constraint g(x) = 0, g(x) maps the points of the N-dimensional x-space into an M-dimensional space, and y = (y_1, ..., y_M) is the vector of Lagrangian multipliers.
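For reference, the standard saddle-point form of this reduction can be stated as follows (a textbook-style sketch, not quoted from the chapter): with the Lagrangian \( \varphi(x, y) = f(x) + y \cdot g(x) \), \( x \in \mathbb{R}^{N} \), \( y \in \mathbb{R}^{M} \), a constrained maximizer \( \bar{x} \) is, under suitable concavity and regularity conditions, the x-component of a saddle point \( (\bar{x}, \bar{y}) \) satisfying \( \varphi(x, \bar{y}) \le \varphi(\bar{x}, \bar{y}) \le \varphi(\bar{x}, y) \) for all admissible x and y, so that constrained maximization is reduced to a saddle-point problem.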
Journal ArticleDOI

Gradient Methods for Constrained Maxima

TL;DR: In this article, the authors consider a nonlinear game in which player 1 has the choice of a certain set of numbers x_1, x_2, ..., while a second set of numbers y_1, y_2, ... belongs to the opposing player, and they study gradient methods for approximating the resulting constrained maxima and saddle points.
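The gradient procedure at issue here is commonly written, in the notation of the abstract above (a sketch, not quoted from this article), as \( \dot{X}_i = \partial {\rm F}/\partial X_i \) and \( \dot{Y}_j = -\partial {\rm F}/\partial Y_j \), with X and Y constrained to remain non-negative: X climbs F while Y descends it, and the rest points of this system are the saddle points being sought. The Python sketch following the abstract is a discrete-time version of the same idea.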