Journal ArticleDOI

Inexact Proximal Point Methods for Variational Inequality Problems

01 Apr 2010 - SIAM Journal on Optimization (Society for Industrial and Applied Mathematics) - Vol. 20, Iss. 5, pp. 2653-2678
TL;DR: The algorithm uses a relative error tolerance criterion in solving the proximal subproblems, and the analysis does not rely on paramonotonicity, an assumption that is standard in proving convergence of Bregman-based proximal methods.
Abstract: We present a new family of proximal point methods for solving monotone variational inequalities. Our algorithm has a relative error tolerance criterion in solving the proximal subproblems. Our convergence analysis covers a wide family of regularization functions, including the double regularizations recently introduced by Silva, Eckstein, and Humes, Jr. [SIAM J. Optim., 12 (2001), pp. 238-261] and the Bregman distance induced by $h(x)=\sum_{i=1}^{n}x_{i}\log x_{i}$. Our analysis does not use the assumption of paramonotonicity, which is standard in proving convergence of Bregman-based proximal methods.
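To make the entropy kernel concrete, here is a minimal numpy sketch (not the paper's algorithm) of the Bregman distance induced by $h(x)=\sum_i x_i\log x_i$ and of the closed-form multiplicative step that a linearized entropic proximal iteration takes on the positive orthant; the function names, step size, and toy problem are illustrative assumptions.

```python
import numpy as np

def kl_bregman(x, y):
    """Bregman distance D_h(x, y) induced by h(x) = sum_i x_i log x_i:
    D_h(x, y) = h(x) - h(y) - <grad h(y), x - y>
              = sum_i [ x_i log(x_i / y_i) - x_i + y_i ]."""
    return np.sum(x * np.log(x / y) - x + y)

def entropic_prox_step(grad_f, x, c):
    """One linearized proximal (mirror-descent-style) step for minimizing f
    over the positive orthant: argmin_z <grad f(x), z> + D_h(z, x)/c has the
    closed form z_i = x_i * exp(-c * grad_f(x)_i)."""
    return x * np.exp(-c * grad_f(x))

# toy usage: minimize f(x) = 0.5*||x - a||^2 over x > 0, with a positive
a = np.array([1.0, 2.0, 0.5])
grad_f = lambda x: x - a
x = np.ones(3)
for _ in range(200):
    x = entropic_prox_step(grad_f, x, c=0.1)
print(x)  # approaches a while all iterates stay in the positive orthant
```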
Citations
Posted Content
24 Oct 2018
TL;DR: This work proposes an algorithmic framework motivated by the inexact proximal point method, which solves the weakly monotone variational inequality corresponding to the original min-max problem by approximately solving a sequence of strongly monotone variational inequalities constructed by adding a strongly monotone mapping to the original gradient mapping.
Abstract: In this paper, we consider first-order algorithms for solving a class of non-convex non-concave min-max saddle-point problems, whose objective function is weakly convex (resp. weakly concave) in terms of the variable of minimization (resp. maximization). Such problems have many important applications in machine learning, statistics, and operations research. One example that has attracted tremendous attention recently in machine learning is the training of Generative Adversarial Networks. We propose an algorithmic framework motivated by the inexact proximal point method, which solves the weakly monotone variational inequality corresponding to the original min-max problem by approximately solving a sequence of strongly monotone variational inequalities constructed by adding a strongly monotone mapping to the original gradient mapping. In this sequence, each strongly monotone variational inequality is defined with a proximal center that is updated using the approximate solution of the previous variational inequality. Our algorithm generates a sequence of solutions that provably converges to a nearly stationary solution of the original min-max problem. The proposed framework is flexible because various subroutines can be employed for solving the strongly monotone variational inequalities. The overall computational complexities of our methods are established when the employed subroutines are the subgradient method, the stochastic subgradient method, gradient descent, Nesterov's accelerated method, and variance reduction methods for a Lipschitz continuous operator. To the best of our knowledge, this is the first work that establishes non-asymptotic convergence to a nearly stationary point of a non-convex non-concave min-max problem.
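A minimal sketch of the framework the abstract describes, under illustrative assumptions: the regularization weight gamma, the inner subroutine (plain forward steps), and all step sizes are placeholder choices, not the paper's settings.

```python
import numpy as np

def inexact_ppm(F, z0, gamma=1.0, outer=30, inner=100, lr=0.2):
    """Each outer step approximately solves the strongly monotone VI with
    operator F(u) + gamma*(u - z_k) by a fixed number of forward steps,
    then moves the proximal center z_k to the approximate solution."""
    z = np.array(z0, dtype=float)
    for _ in range(outer):
        u = z.copy()
        for _ in range(inner):
            u -= lr * (F(u) + gamma * (u - z))  # inner subroutine
        z = u  # update the proximal center
    return z

# bilinear toy min-max  min_x max_y  x*y,  whose VI operator is F(x, y) = (y, -x);
# plain gradient descent-ascent cycles on this problem, but the PPM iterates shrink
F = lambda z: np.array([z[1], -z[0]])
print(inexact_ppm(F, np.array([1.0, 1.0])))  # ~ [0, 0]
```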

39 citations


Cites methods from "Inexact Proximal Point Methods for ..."

  • ...When the set-valued mapping F is monotone, many efficient algorithms with non-asymptotic convergence guarantee have been developed for a VI problem itself or under the setting of a min-max problem [43, 65, 41, 55, 8, 53, 51, 50, 9]....

    [...]

Journal ArticleDOI
TL;DR: In this article, a projection-type algorithm for solving the variational inequality problem for point-to-set operators is introduced, and its convergence properties are established under the assumption that the dual solution set is not empty.
Abstract: We introduce a projection-type algorithm for solving the variational inequality problem for point-to-set operators, and establish its convergence properties. Namely, we assume that the operator of the variational inequality is continuous in the point-to-set sense, i.e., inner- and outer-semicontinuous. Under the assumption that the dual solution set is not empty, we prove that our method converges to a solution of the variational inequality. Instead of the monotonicity assumption, we require the non-emptiness of the solution set of the dual formulation of the variational inequality. We provide numerical experiments illustrating the behaviour of our iterates. Moreover, we compare our new method with a recent similar one.
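For intuition, here is a classical projection-type (extragradient) iteration for a single-valued VI over a box; the paper's algorithm handles point-to-set operators under a dual-solution-set assumption and differs in its steps, so this is only a simplified sketch of the same family, with all constants illustrative.

```python
import numpy as np

def project_box(x, lo, hi):
    """Euclidean projection onto the box [lo, hi]^n (stand-in for P_C)."""
    return np.clip(x, lo, hi)

def extragradient(F, x0, lo, hi, step=0.1, iters=500):
    """Korpelevich-style extragradient: a predictor projection followed by a
    corrector projection, both onto the feasible set C."""
    x = x0.astype(float)
    for _ in range(iters):
        y = project_box(x - step * F(x), lo, hi)   # predictor step
        x = project_box(x - step * F(y), lo, hi)   # corrector step
    return x

# toy VI: F(x) = x - a over the box [0, 1]^2; the solution is the projection of a
a = np.array([0.3, 1.7])
print(extragradient(lambda x: x - a, np.zeros(2), 0.0, 1.0))  # ~ [0.3, 1.0]
```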

17 citations

Journal ArticleDOI
TL;DR: The main objective of the present paper is to provide a convergence analysis of the Bregman-function-based Proximal Point Algorithm for variational inequalities using only the weaker assumption of quasimonotonicity.
Abstract: The Bregman-function-based Proximal Point Algorithm for variational inequalities is studied. Classical papers on this method deal with the assumption that the operator of the variational inequality is monotone. Motivated by the fact that this assumption can be considered to be restrictive, e.g., in the discussion of Nash equilibrium problems, the main objective of the present paper is to provide a convergence analysis only using a weaker assumption called quasimonotonicity. To the best of our knowledge, this is the first algorithm established for this general and frequently studied class of problems.

14 citations


Cites background from "Inexact Proximal Point Methods for ..."

  • ...When K = R+ and h is a suitably chosen function (Kullback–Leibler entropy), then the cutting plane property is not of importance as well, at least if T is maximal monotone (see the analysis in [34])....

    [...]

Journal ArticleDOI
TL;DR: A convergence analysis proves the existence of relative error thresholds for the two inexact steps that ensure convergence, and the resulting inexact method is shown to be fast, reliable, memory-efficient, GPU-friendly, flexible with different elastic models, scalable to a large parameter space, and parallelizable across multiple data samples.
Abstract: Elastic parameter optimization has revealed its importance in 3D modeling, virtual reality, and additive manufacturing in recent years. Unfortunately, it is known to be computationally expensive, especially if there are many parameters and data samples. To address this challenge, we propose to introduce inexactness into descent methods, by iteratively solving a forward simulation step and a parameter update step in an inexact manner. The development of such inexact descent methods is centered on two questions: 1) how accurate/inaccurate can the two steps be; and 2) what is the optimal way to implement an inexact descent method? The answers to these questions are in our convergence analysis, which proves the existence of relative error thresholds for the two inexact steps that ensure convergence. This means we can simply solve each step by a fixed number of iterations, as long as the iterative solver is at least linearly convergent. While the inexact idea speeds up many descent methods, we specifically favor a GPU-based one powered by state-of-the-art simulation techniques. Based on this method, we study a variety of implementation issues, including backtracking line search, initialization, regularization, and multiple data samples. We demonstrate the use of our inexact method in elasticity measurement and design applications. Our experiments show the method is fast, reliable, memory-efficient, GPU-friendly, flexible with different elastic models, scalable to a large parameter space, and parallelizable across multiple data samples.
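A hedged sketch of the inexact-descent idea on a toy quadratic fitting problem: both the forward solve and the adjoint solve are truncated to a fixed number of iterations of a linearly convergent solver (Jacobi here). All names, the toy model, and the step size are illustrative stand-ins, not the paper's GPU elasticity pipeline.

```python
import numpy as np

def forward_solve_inexact(A, b, x, n_iters=10):
    """Approximate the equilibrium A x = b with a fixed number of Jacobi
    sweeps (linearly convergent for diagonally dominant A), warm-started at x."""
    D = np.diag(A)
    R = A - np.diag(D)
    for _ in range(n_iters):
        x = (b - R @ x) / D
    return x

def inexact_descent(A, x_target, p, lr=5.0, outer=100, inner=10):
    """Fit the load p so the inexactly computed state x(p) = A^{-1} p matches
    x_target, i.e. minimize 0.5*||x(p) - x_target||^2 by gradient descent with
    truncated forward and adjoint solves (lr is tuned to this toy problem)."""
    x = np.zeros_like(p)
    for _ in range(outer):
        x = forward_solve_inexact(A, p, x, inner)            # inexact forward step
        # inexact parameter update: the gradient is A^{-T}(x - x_target),
        # computed with the same truncated solver applied to A^T
        adj = forward_solve_inexact(A.T, x - x_target, np.zeros_like(p), inner)
        p -= lr * adj
    return p, x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
x_target = np.array([1.0, 2.0])
p, x = inexact_descent(A, x_target, p=np.zeros(2))
print(x)  # ~ x_target despite every inner solve being truncated
```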

11 citations

Posted Content
TL;DR: In this article, the convergence properties of a projection-type algorithm for solving the variational inequality problem for point-to-set operators are studied. No monotonicity assumption is used in the analysis.
Abstract: We introduce and study the convergence properties of a projection-type algorithm for solving the variational inequality problem for point-to-set operators. No monotonicity assumption is used in our analysis. The operator defining the problem is only assumed to be continuous in the point-to-set sense, i.e., inner- and outer-semicontinuous. Additionally, we assume non-emptiness of the so-called dual solution set. We prove that the whole sequence of iterates converges to a solution of the variational inequality. Moreover, we provide numerical experiments illustrating the behavior of our iterates. Through several examples, we provide a comparison with a recent similar algorithm.

9 citations

References
Journal ArticleDOI
TL;DR: In this paper, the proximal point algorithm in exact form is investigated in a more general form where the requirement for exact minimization at each iteration is weakened, and the subdifferential $\partial f$ is replaced by an arbitrary maximal monotone operator T.
Abstract: For the problem of minimizing a lower semicontinuous proper convex function $f$ on a Hilbert space, the proximal point algorithm in exact form generates a sequence $\{z^k\}$ by taking $z^{k+1}$ to be the minimizer of $f(z) + \frac{1}{2c_k}\|z - z^k\|^2$, where $c_k > 0$. This algorithm is of interest for several reasons, but especially because of its role in certain computational methods based on duality, such as the Hestenes-Powell method of multipliers in nonlinear programming. It is investigated here in a more general form where the requirement for exact minimization at each iteration is weakened, and the subdifferential $\partial f$ is replaced by an arbitrary maximal monotone operator $T$. Convergence is established under several criteria amenable to implementation. The rate of convergence is shown to be "typically" linear with an arbitrarily good modulus if $c_k$ stays large enough, in fact superlinear if $c_k \to \infty$. The case of $T = \partial f$ is treated in extra detail. Applicati...
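A small sketch of the exact proximal point iteration for a function whose proximal map is available in closed form; the choice $f = \|\cdot\|_1$ and the constant parameter $c$ are illustrative, not from the paper.

```python
import numpy as np

def prox_l1(z, t):
    """Proximal map of f = ||.||_1: argmin_x ||x||_1 + ||x - z||^2/(2t),
    i.e. the classic soft-thresholding formula."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def proximal_point(prox_f, z0, c=1.0, iters=20):
    """Exact proximal point iteration z^{k+1} = prox_{c_k f}(z^k) with a
    constant c_k = c; larger c gives faster progress on this example."""
    z = z0.copy()
    for _ in range(iters):
        z = prox_f(z, c)
    return z

print(proximal_point(prox_l1, np.array([3.0, -0.4])))  # reaches the minimizer 0 of ||.||_1
```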

3,238 citations

Journal ArticleDOI
TL;DR: An alternative convergence proof of a proximal-like minimization algorithm using Bregman functions, recently proposed by Censor and Zenios, is presented and allows the establishment of a global convergence rate of the algorithm expressed in terms of function values.
Abstract: An alternative convergence proof of a proximal-like minimization algorithm using Bregman functions, recently proposed by Censor and Zenios, is presented. The analysis allows the establishment of a global convergence rate of the algorithm expressed in terms of function values.

481 citations

Journal ArticleDOI
TL;DR: Applying this generalization of the proximal point algorithm to convex programming, one obtains the D-function proximal minimization algorithm of Censor and Zenios, and a wide variety of new multiplier methods.
Abstract: A Bregman function is a strictly convex, differentiable function that induces a well-behaved distance measure or D-function on Euclidean space. This paper shows that, for every Bregman function, there exists a "nonlinear" version of the proximal point algorithm, and presents an accompanying convergence theory. Applying this generalization of the proximal point algorithm to convex programming, one obtains the D-function proximal minimization algorithm of Censor and Zenios, and a wide variety of new multiplier methods. These multiplier methods are different from those studied by Kort and Bertsekas, and include nonquadratic variations on the proximal method of multipliers.
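A short sketch of the D-function construction the abstract refers to: given a Bregman function $h$, $D_h(x,y) = h(x) - h(y) - \langle \nabla h(y), x - y\rangle$. The two kernels below (energy and entropy) are standard examples; everything here is illustrative.

```python
import numpy as np

def make_D(h, grad_h):
    """Build the D-function induced by a Bregman function h:
    D_h(x, y) = h(x) - h(y) - <grad h(y), x - y>."""
    def D(x, y):
        return h(x) - h(y) - grad_h(y) @ (x - y)
    return D

# the energy kernel h = 0.5||.||^2 recovers the classical quadratic prox term;
# the entropy kernel on the positive orthant gives a KL-type distance
D_energy = make_D(lambda x: 0.5 * x @ x, lambda x: x)
D_entropy = make_D(lambda x: np.sum(x * np.log(x)), lambda x: np.log(x) + 1.0)

x, y = np.array([1.0, 2.0]), np.array([2.0, 1.0])
print(D_energy(x, y))   # = 0.5*||x - y||^2 = 1.0
print(D_entropy(x, y))  # >= 0, and 0 iff x == y
```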

340 citations

Journal ArticleDOI
TL;DR: A class of interior gradient algorithms is derived which exhibits an $O(k^{-2})$ global convergence rate estimate and is illustrated with many applications and examples, including some new explicit and simple algorithms for conic optimization problems.
Abstract: Interior gradient (subgradient) and proximal methods for convex constrained minimization have been much studied, in particular for optimization problems over the nonnegative octant. These methods are using non-Euclidean projections and proximal distance functions to exploit the geometry of the constraints. In this paper, we identify a simple mechanism that allows us to derive global convergence results of the produced iterates as well as improved global rates of convergence estimates for a wide class of such methods, and with more general convex constraints. Our results are illustrated with many applications and examples, including some new explicit and simple algorithms for conic optimization problems. In particular, we derive a class of interior gradient algorithms which exhibits an $O(k^{-2})$ global convergence rate estimate.
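A sketch of the accelerated interior-gradient pattern with the entropy kernel on the positive orthant; the parameter sequence theta_k = 2/(k+2) follows the standard Nesterov/Tseng template and is an illustrative assumption, not necessarily the paper's exact scheme or rate constants.

```python
import numpy as np

def accelerated_interior_gradient(grad_f, z0, L, iters=200):
    """Three-sequence accelerated scheme: extrapolate to y, take an entropic
    prox step at y, then average. The z-update solves
    argmin_u <grad_f(y), u> + theta*L*D_h(u, z) in closed form for
    h(u) = sum_i u_i log u_i, so all iterates stay interior."""
    x = np.array(z0, dtype=float)
    z = x.copy()
    for k in range(iters):
        theta = 2.0 / (k + 2)
        y = (1 - theta) * x + theta * z
        z = z * np.exp(-grad_f(y) / (theta * L))  # entropic prox step
        x = (1 - theta) * x + theta * z
    return x

# toy: minimize f(x) = 0.5*||x - a||^2 over x > 0 (gradient Lipschitz constant L = 1)
a = np.array([1.0, 2.0])
print(accelerated_interior_gradient(lambda x: x - a, np.ones(2), L=1.0))  # ~ a
```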

307 citations


"Inexact Proximal Point Methods for ..." refers background or methods in this paper

  • ...11) were studied, for instance, in [4, 5, 7, 2]....

    [...]

  • ..., [4, 2]), the Tε can be...

    [...]

  • ...Using the concept of proximal distances introduced in [2], we have presented here two families of inexact proximal point methods....

    [...]

  • ...Following the analysis given in [2], we associate with every d ∈ D(C) an induced proximal distance Hd which we define below....

    [...]

  • ...It is proved in [2] that the proximal distance induced by d is Hd := Dh, and hence the proximal methods generated by these distances are called in [2] self-proximal....

    [...]

Journal ArticleDOI
TL;DR: In this article, the authors consider methods for minimizing a convex function $f$ that generate a sequence $\{x^k\}$ by taking $x^{k+1}$ to be an approximate minimizer of $f(x)+D_h(x,x^k)/c_k$, where $c_k > 0$ and $D_h$ is the D-function of a Bregman function $h$.
Abstract: We consider methods for minimizing a convex function $f$ that generate a sequence $\{x^k\}$ by taking $x^{k+1}$ to be an approximate minimizer of $f(x)+D_h(x,x^k)/c_k$, where $c_k > 0$ and $D_h$ is the D-function of a Bregman function $h$. Extensions are made to B-functions that generalize Bregman functions and cover more applications. Convergence is established under criteria amenable to implementation. Applications are made to nonquadratic multiplier methods for nonlinear programs.
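A minimal sketch of the "approximate minimizer" idea with the quadratic kernel $h = \frac{1}{2}\|\cdot\|^2$ (the paper covers general Bregman and B-functions): each subproblem is solved only until an implementable inner tolerance holds, with tolerances tightened across outer iterations. All tolerances, step sizes, and the toy objective are illustrative assumptions.

```python
import numpy as np

def approx_bregman_prox(grad_f, x_k, c, tol, lr=0.1, max_inner=1000):
    """Approximately minimize f(x) + ||x - x_k||^2/(2c) by gradient steps,
    stopping once the subproblem gradient is below tol (inexact acceptance)."""
    x = x_k.copy()
    for _ in range(max_inner):
        g = grad_f(x) + (x - x_k) / c   # gradient of the proximal subproblem
        if np.linalg.norm(g) <= tol:
            break
        x -= lr * g
    return x

def inexact_bregman_ppm(grad_f, x0, c=1.0, outers=30):
    """Outer loop: each center update uses only an approximate subproblem
    solution, with a geometrically tightening tolerance."""
    x = x0.copy()
    for k in range(outers):
        x = approx_bregman_prox(grad_f, x, c, tol=0.5 ** k)
    return x

# toy: f(x) = 0.5*||x - a||^2
a = np.array([1.0, -2.0])
print(inexact_bregman_ppm(lambda x: x - a, np.zeros(2)))  # ~ a
```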

251 citations


"Inexact Proximal Point Methods for ..." refers background in this paper

  • ..., [1, 9, 14, 16, 22, 26]), φ-divergences (see [30, 7, 19, 20, 21, 31, 32]), log-quadratic distances (also known as second order homogeneous kernels) [4, 5], and double regularizations, which extend the latter ones, and were recently introduced in [17]....

    [...]