
Showing papers on "Monotone polygon published in 2019"


Journal ArticleDOI
TL;DR: Two simple proofs of the triangle inequality for the Jaccard distance, formulated in terms of nonnegative, monotone, submodular functions, are given and discussed in this paper.

147 citations


Proceedings Article
08 Dec 2019
TL;DR: A synthetic view of Extra-Gradient algorithms is developed, and it is shown that they retain an $\mathcal{O}(1/t)$ ergodic convergence rate in smooth, deterministic problems.
Abstract: Variational inequalities have recently attracted considerable interest in machine learning as a flexible paradigm for models that go beyond ordinary loss function minimization (such as generative adversarial networks and related deep learning systems). In this setting, the optimal O(1/t) convergence rate for solving smooth monotone variational inequalities is achieved by the Extra-Gradient (EG) algorithm and its variants. Aiming to alleviate the cost of an extra gradient step per iteration (which can become quite substantial in deep learning), several algorithms have been proposed as surrogates to Extra-Gradient with a single oracle call per iteration. In this paper, we develop a synthetic view of such algorithms, and we complement the existing literature by showing that they retain a $O(1/t)$ ergodic convergence rate in smooth, deterministic problems. Subsequently, beyond the monotone deterministic case, we also show that the last iterate of single-call, stochastic extra-gradient methods still enjoys a $O(1/t)$ local convergence rate to solutions of non-monotone variational inequalities that satisfy a second-order sufficient condition.
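To make the single-oracle-call idea concrete, here is a minimal sketch (not the authors' code) contrasting classical Extra-Gradient with a past-iterate single-call variant on a toy bilinear saddle point; the operator, dimensions, and step-size rule are illustrative assumptions.

```python
import numpy as np

# Monotone operator of the bilinear saddle point min_x max_y x^T A y;
# its unique solution is the origin.
rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n))
L = np.linalg.norm(A, 2)              # Lipschitz constant of F
step = 0.3 / L                        # safe for both methods below

def F(z):
    x, y = z[:n], z[n:]
    return np.concatenate([A @ y, -A.T @ x])

def extragradient(z0, iters=2000):
    """Classical EG: two operator evaluations per iteration."""
    z = z0.copy()
    for _ in range(iters):
        z_half = z - step * F(z)      # extrapolation step
        z = z - step * F(z_half)      # update at the extrapolated point
    return z

def past_extragradient(z0, iters=2000):
    """Single-call variant: reuse the stored operator value from the
    previous iteration, so each iteration makes one oracle call."""
    z, f_prev = z0.copy(), F(z0)
    for _ in range(iters):
        z_half = z - step * f_prev    # extrapolate with the *stored* value
        f_prev = F(z_half)            # the single oracle call this iteration
        z = z - step * f_prev
    return z

z0 = rng.standard_normal(2 * n)
print(np.linalg.norm(extragradient(z0)), np.linalg.norm(past_extragradient(z0)))
```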

124 citations


Journal ArticleDOI
TL;DR: In this article, a fully adaptive algorithm for monotone variational inequalities is presented, which uses two previous iterates for an approximation of the local Lipschitz constant without running a linesearch.
Abstract: The paper presents a fully adaptive algorithm for monotone variational inequalities. In each iteration the method uses two previous iterates for an approximation of the local Lipschitz constant without running a linesearch. Thus, every iteration of the method requires only one evaluation of a monotone operator F and a proximal mapping g. The operator F need not be Lipschitz continuous, which also makes the algorithm interesting in the area of composite minimization. The method exhibits an ergodic O(1 / k) convergence rate and R-linear rate under an error bound condition. We discuss possible applications of the method to fixed point problems as well as its different generalizations.
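As a hedged illustration of the two-iterate idea, a local step size can be estimated with no linesearch and no extra evaluations of F; the growth factor and safeguard constant below are placeholders, not the paper's exact coefficients.

```python
import numpy as np

def adaptive_step(x_new, x_old, Fx_new, Fx_old, lam_prev, growth=1.5):
    """Step-size estimate from the local Lipschitz constant of F, using
    only the two most recent iterates and their (already computed)
    operator values."""
    dx = np.linalg.norm(x_new - x_old)
    dF = np.linalg.norm(Fx_new - Fx_old)
    if dF == 0.0:                      # F unchanged locally: let the step grow
        return growth * lam_prev
    return min(growth * lam_prev, 0.5 * dx / dF)  # dx/dF ~ 1/L_local
```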

103 citations


Journal ArticleDOI
Jun Yang1, Hongwei Liu1
TL;DR: A strong convergence theorem for the algorithm for solving the classical variational inequality problem with a Lipschitz-continuous, monotone mapping in real Hilbert space is proved without any additional projections or knowledge of the Lipschitz constant of the mapping.
Abstract: In this paper, we study strong convergence of the algorithm for solving classical variational inequalities problem with Lipschitz-continuous and monotone mapping in real Hilbert space. The algorithm is inspired by Tseng’s extragradient method and the viscosity method with a simple step size. A strong convergence theorem for our algorithm is proved without any requirement of additional projections and the knowledge of the Lipschitz constant of the mapping. Finally, we provide some numerical experiments to show the efficiency and advantage of the proposed algorithm.
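A minimal sketch of the two ingredients, Tseng's forward-backward-forward step plus a viscosity correction; the contraction f(x) = x/2, the weights 1/(k+1), and the step update below are illustrative choices rather than the authors' exact rules.

```python
import numpy as np

def tseng_viscosity(F, proj, x0, lam=1.0, mu=0.5, iters=500):
    """Tseng's extragradient step combined with a vanishing viscosity
    term for strong convergence; the step size adapts from observed
    iterates, so no Lipschitz constant is required."""
    x = x0.astype(float)
    for k in range(1, iters + 1):
        Fx = F(x)
        y = proj(x - lam * Fx)            # forward-backward (projection) step
        Fy = F(y)
        z = y - lam * (Fy - Fx)           # Tseng's second forward step
        dx, dF = np.linalg.norm(x - y), np.linalg.norm(Fx - Fy)
        if dF > 0:
            lam = min(lam, mu * dx / dF)  # simple step update for the next pass
        alpha = 1.0 / (k + 1)             # vanishing viscosity weight
        x = alpha * (0.5 * x) + (1 - alpha) * z
    return x

# usage: strongly monotone affine VI on the box [-1, 1]^n
n = 4
M = np.eye(n) + np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1)
F = lambda v: M @ v + np.ones(n)
proj = lambda v: np.clip(v, -1.0, 1.0)
print(tseng_viscosity(F, proj, np.zeros(n)))
```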

102 citations


Journal ArticleDOI
TL;DR: In this paper, a projection-type approximation method is introduced for solving a variational inequality problem; the proposed method involves only one projection per iteration.
Abstract: In this paper, a projection-type approximation method is introduced for solving a variational inequality problem. The proposed method involves only one projection per iteration and the underlying op...

100 citations


Journal ArticleDOI
TL;DR: A monotonicity-based method for studying input-to-state stability (ISS) of nonlinear parabolic equations with boundary inputs is introduced, and the PDE backstepping controller that stabilizes linear reaction-diffusion equations from the boundary is shown to be robust with respect to additive actuator disturbances.
Abstract: We introduce a monotonicity-based method for studying input-to-state stability (ISS) of nonlinear parabolic equations with boundary inputs. We first show that a monotone control system is ISS if an...

81 citations


Journal ArticleDOI
TL;DR: Based on a projection strategy, a derivative-free iterative method for large-scale nonlinear monotone equations with convex constraints is proposed; it generates a sufficient descent direction at each iteration, and preliminary numerical comparisons show that it is efficient and promising.
Abstract: In this paper, based on the projection strategy, we propose a derivative-free iterative method for large-scale nonlinear monotone equations with convex constraints, which can generate a sufficient descent direction at each iteration. Due to its lower storage and derivative-free information, the proposed method can be used to solve large-scale non-smooth problems. The global convergence of the proposed method is proved under the Lipschitz continuity assumption. Moreover, if the local error bound condition holds, the proposed method is shown to be linearly convergent. Preliminary numerical comparison shows that the proposed method is efficient and promising.
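The projection strategy the abstract refers to can be sketched as follows (a Solodov-Svaiter-style hyperplane-projection loop; the direction d = -F(x) below is a simple stand-in for the paper's sufficient-descent direction):

```python
import numpy as np

def projected_df_method(F, proj, x0, sigma=1e-4, beta=0.5, iters=200, tol=1e-8):
    """Derivative-free hyperplane-projection method for constrained
    monotone equations F(x) = 0, x in C; proj is the projection onto C."""
    x = x0.astype(float)
    for _ in range(iters):
        Fx = F(x)
        if np.linalg.norm(Fx) < tol:
            break
        d = -Fx                                  # derivative-free direction
        t = 1.0
        while True:                              # backtracking line search
            z = x + t * d
            if -F(z) @ d >= sigma * t * (d @ d) or t < 1e-12:
                break
            t *= beta
        Fz = F(z)
        if Fz @ Fz == 0.0:                       # z solves F(z) = 0 exactly
            return z
        # project onto the hyperplane separating x from the solution set,
        # then back onto the feasible set C
        x = proj(x - (Fz @ (x - z)) / (Fz @ Fz) * Fz)
    return x

# usage: F is monotone with zero at the origin; C is the nonnegative orthant
F = lambda v: np.exp(v) - 1.0
proj = lambda v: np.maximum(v, 0.0)
print(projected_df_method(F, proj, np.full(3, 0.5)))
```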

80 citations


Journal ArticleDOI
TL;DR: This paper designs novel center-free distributed GNE seeking algorithms for equality and inequality affine coupling constraints, respectively, and proves their convergence by showing that the two algorithms can be seen as specific instances of preconditioned proximal point algorithms for finding zeros of monotone operators.
Abstract: In this paper, we investigate distributed generalized Nash equilibrium (GNE) computation of monotone games with affine coupling constraints. Each player can only utilize its local objective function, local feasible set, and a local block of the coupling constraint, and can only communicate with its neighbors. We assume the game has monotone pseudo-subdifferential without Lipschitz continuity restrictions. We design novel center-free distributed GNE seeking algorithms for equality and inequality affine coupling constraints, respectively. A proximal alternating direction method of multipliers is proposed for the equality case, while for the inequality case, a parallel splitting type algorithm is proposed. In both algorithms, the GNE seeking task is decomposed into a sequential Nash equilibrium (NE) computation of regularized subgames and distributed update of multipliers and auxiliary variables, based on local data and local communication. Our two double-layer GNE algorithms need not specify the inner loop NE seeking algorithm, and moreover, only require that the strongly monotone subgames are inexactly solved. We prove their convergence by showing that the two algorithms can be seen as specific instances of preconditioned proximal point algorithms for finding zeros of monotone operators. Applications and numerical simulations are given for illustration.

75 citations


Journal ArticleDOI
TL;DR: In this article, an extragradient-based stochastic approximation scheme was proposed and proved to converge to a solution of the original problem under either pseudomonotonicity requirements or a suitably defined acute angle condition.
Abstract: We consider the stochastic variational inequality problem in which the map is expectation-valued in a component-wise sense. Much of the available convergence theory and rate statements for stochastic approximation schemes are limited to monotone maps. However, non-monotone stochastic variational inequality problems are not uncommon and are seen to arise from product pricing, fractional optimization problems, and subclasses of economic equilibrium problems. Motivated by the need to address a broader class of maps, we make the following contributions: (1) we present an extragradient-based stochastic approximation scheme and prove that the iterates converge to a solution of the original problem under either pseudomonotonicity requirements or a suitably defined acute angle condition. Such statements are shown to be generalizable to the stochastic mirror-prox framework; (2) under strong pseudomonotonicity, we show that the mean-squared error in the solution iterates produced by the extragradient SA scheme converges at the optimal rate of $\mathcal{O}(1/K)$, statements that were hitherto unavailable in this regime. Notably, we optimize the initial steplength by obtaining an $\epsilon$-infimum of a discontinuous nonconvex function. Similar statements are derived for mirror-prox generalizations and can accommodate monotone SVIs under a weak-sharpness requirement. Finally, both the asymptotics and the empirical rates of the schemes are studied on a set of variational problems where it is seen that the theoretically specified initial steplength leads to significant performance benefits.

61 citations


Journal ArticleDOI
TL;DR: This study provides an algorithmic version of the convergence results obtained by Attouch–Cabot (J Differ Equ 264:7138–7182, 2018) in the case of continuous dynamical systems.
Abstract: In a Hilbert space $\mathcal{H}$, given $A:\mathcal{H}\rightarrow 2^{\mathcal{H}}$ a maximally monotone operator, we study the convergence properties of a general class of relaxed inertial proximal algorithms. This study aims to extend to the case of the general monotone inclusion $0 \in Ax$ the acceleration techniques initially introduced by Nesterov in the case of convex minimization. The relaxed form of the proximal algorithms plays a central role. It comes naturally with the regularization of the operator A by its Yosida approximation with a variable parameter, a technique recently introduced by Attouch–Peypouquet (Math Program Ser B, 2018. https://doi.org/10.1007/s10107-018-1252-x ) for a particular class of inertial proximal algorithms. Our study provides an algorithmic version of the convergence results obtained by Attouch–Cabot (J Differ Equ 264:7138–7182, 2018) in the case of continuous dynamical systems.
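Schematically, one relaxed inertial proximal step for the inclusion $0 \in Ax$ takes the form below; the precise coupling of the inertial, relaxation, and Yosida parameters $(\alpha_k, \rho_k, \lambda_k)$ is exactly what the paper analyzes.

$$
y_k = x_k + \alpha_k (x_k - x_{k-1}), \qquad
x_{k+1} = (1 - \rho_k)\, y_k + \rho_k\, J_{\lambda_k A}(y_k), \qquad
J_{\lambda A} := (I + \lambda A)^{-1}.
$$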

60 citations


Journal ArticleDOI
TL;DR: The progressive hedging algorithm is demonstrated here to be applicable also to solving multistage stochastic variational inequality problems under monotonicity, thus increasing the range of applications for progressive hedging.
Abstract: The concept of a stochastic variational inequality has recently been articulated in a new way that is able to cover, in particular, the optimality conditions for a multistage stochastic programming problem. One of the long-standing methods for solving such an optimization problem under convexity is the progressive hedging algorithm. That approach is demonstrated here to be applicable also to solving multistage stochastic variational inequality problems under monotonicity, thus increasing the range of applications for progressive hedging. Stochastic complementarity problems as a special case are explored numerically in a linear two-stage formulation.
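In outline (notation simplified; see the paper for the precise nonanticipativity setup), each progressive hedging iteration solves one proximal subproblem per scenario $\xi$ with penalty $r > 0$, restores nonanticipativity through a conditional-expectation projection $P_N$, and updates the multipliers:

$$
0 \in F_\xi\big(x_\xi^{k+1}\big) + w_\xi^{k} + r\big(x_\xi^{k+1} - \hat{x}_\xi^{k}\big) + N_{C_\xi}\big(x_\xi^{k+1}\big), \qquad
\hat{x}^{k+1} = P_N\big(x^{k+1}\big), \qquad
w^{k+1} = w^{k} + r\big(x^{k+1} - \hat{x}^{k+1}\big).
$$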

Posted Content
Donghwan Kim1
TL;DR: The proximal point method includes various well-known convex optimization methods, such as the proximal method of multipliers and the alternating direction method of multipliers, and thus the proposed acceleration has wide applications.
Abstract: This paper proposes an accelerated proximal point method for maximally monotone operators. The proof is computer-assisted via the performance estimation problem approach. The proximal point method includes various well-known convex optimization methods, such as the proximal method of multipliers and the alternating direction method of multipliers, and thus the proposed acceleration has wide applications. Numerical experiments are presented to demonstrate the accelerating behaviors.
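For reference, the baseline proximal point iteration for a maximally monotone operator A is the resolvent step below; the paper's acceleration, whose proof is computer-assisted via performance estimation, modifies this baseline and is not reproduced here.

$$
x_{k+1} = J_{\lambda A}(x_k) = (I + \lambda A)^{-1} x_k, \qquad \lambda > 0.
$$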

Journal ArticleDOI
TL;DR: In this article, the authors proposed a statistical approach to the seriation problem where the matrix of interest is observed with noise and study the corresponding minimax rate of estimation of the matrices.
Abstract: Given a matrix, the seriation problem consists in permuting its rows in such a way that all its columns have the same shape, for example, they are monotone increasing. We propose a statistical approach to this problem where the matrix of interest is observed with noise and study the corresponding minimax rate of estimation of the matrices. Specifically, when the columns are either unimodal or monotone, we show that the least squares estimator is optimal up to logarithmic factors and adapts to matrices with a certain natural structure. Finally, we propose a computationally efficient estimator in the monotonic case and study its performance both theoretically and experimentally. Our work is at the intersection of shape constrained estimation and recent work that involves permutation learning, such as graph denoising and ranking.
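A small illustration of the monotone-column model (this two-step heuristic, ordering rows by their means and then fitting each column by isotonic least squares, is only a stand-in for the paper's estimators):

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

rng = np.random.default_rng(1)
n_rows, n_cols = 50, 8
clean = np.sort(rng.random((n_rows, n_cols)), axis=0)   # monotone columns
perm = rng.permutation(n_rows)                          # unknown row order
noisy = clean[perm] + 0.1 * rng.standard_normal((n_rows, n_cols))

order = np.argsort(noisy.mean(axis=1))                  # crude seriation step
sorted_rows = noisy[order]
iso = IsotonicRegression(increasing=True)
idx = np.arange(n_rows, dtype=float)
denoised = np.column_stack([iso.fit_transform(idx, sorted_rows[:, j])
                            for j in range(n_cols)])    # shape-constrained LS
print("MSE:", np.mean((denoised - clean) ** 2))
```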

Journal ArticleDOI
TL;DR: In this paper, the trajectories of a second-order differential equation with vanishing damping were studied, governed by the Yosida regularization of a maximally monotone operator with time-varying index, along with a new Regularized Inertial Proximal Algorithm obtained by means of a convenient finite difference discretization.
Abstract: We study the behavior of the trajectories of a second-order differential equation with vanishing damping, governed by the Yosida regularization of a maximally monotone operator with time-varying index, along with a new Regularized Inertial Proximal Algorithm obtained by means of a convenient finite-difference discretization. These systems are the counterpart to accelerated forward–backward algorithms in the context of maximally monotone operators. A proper tuning of the parameters allows us to prove the weak convergence of the trajectories to zeroes of the operator. Moreover, it is possible to estimate the rate at which the speed and acceleration vanish. We also study the effect of perturbations or computational errors that leave the convergence properties unchanged. We also analyze a growth condition under which strong convergence can be guaranteed. A simple example shows the criticality of the assumptions on the Yosida approximation parameter, and allows us to illustrate the behavior of these systems compared with some of their close relatives.
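Recall that the Yosida regularization appearing here replaces the set-valued operator $A$ by a single-valued, $\tfrac{1}{\lambda}$-Lipschitz one, which is what makes an explicit (forward) evaluation possible:

$$
A_\lambda = \frac{1}{\lambda}\big(I - J_{\lambda A}\big), \qquad J_{\lambda A} = (I + \lambda A)^{-1}.
$$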

Journal ArticleDOI
TL;DR: The regularization method for exact as well as for inexact proximal point algorithms for finding the singularities of maximal monotone set-valued vector fields is considered and it is proved that the sequences generated by these algorithms converge to an element of the set of singularities.
Abstract: In this paper, we consider the regularization method for exact as well as for inexact proximal point algorithms for finding the singularities of maximal monotone set-valued vector fields. We prove that the sequences generated by these algorithms converge to an element of the set of singularities of a maximal monotone set-valued vector field. A numerical example is provided to illustrate the inexact proximal point algorithm with regularization. Applications of our results to minimization problems and saddle point problems are given in the setting of Hadamard manifolds.

Journal ArticleDOI
TL;DR: In this article, a monotone finite volume method for the time fractional Fokker-Planck equations is developed and its unconditional stability is theoretically proved; the convergence rate is of order 1 in space and can be improved to order 2 if the spatial grid becomes sufficiently fine.
Abstract: We develop a monotone finite volume method for the time fractional Fokker-Planck equations and theoretically prove its unconditional stability. We show that the convergence rate of this method is of order 1 in space and, if the space grid becomes sufficiently fine, the convergence rate can be improved to order 2. Numerical results are given to support our theoretical findings. One characteristic of our method is its monotonicity, which preserves the nonnegativity of physical variables such as density and concentration.
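For context, the standard L1 approximation of the Caputo derivative of order $\alpha \in (0,1)$ on a uniform grid $t_n = n\Delta t$ is a common time-discretization building block for such schemes (the paper's monotone finite-volume construction concerns the spatial discretization):

$$
\partial_t^{\alpha} u(t_n) \approx \frac{\Delta t^{-\alpha}}{\Gamma(2-\alpha)} \sum_{j=0}^{n-1} b_j \big(u^{\,n-j} - u^{\,n-j-1}\big), \qquad b_j = (j+1)^{1-\alpha} - j^{1-\alpha}.
$$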

Journal ArticleDOI
TL;DR: The hyperbolic tangent function, a bounded monotone differentiable function, is usually taken as a neuron activation function; its gain scaling parameter determines the activation gradient of the neuron.
Abstract: Hyperbolic tangent function, a bounded monotone differentiable function, is usually taken as a neuron activation function, whose activation gradient, i.e. gain scaling parameter, can reflect the re...

Journal ArticleDOI
TL;DR: In this article, the authors consider constructing separable Lyapunov functions for monotone systems that are also contractive, that is, the distance between any pair of trajectories exponentially decreases.

Proceedings Article
05 Feb 2019
TL;DR: This work considers variational inequalities coming from monotone operators, a setting that includes convex minimization and convex-concave saddle-point problems, and presents a universal algorithm based on the Mirror-Prox algorithm that achieves the optimal rates for the smooth/non-smooth, and noisy/noiseless settings.
Abstract: We consider variational inequalities coming from monotone operators, a setting that includes convex minimization and convex-concave saddle-point problems. We assume an access to potentially noisy unbiased values of the monotone operators and assess convergence through a compatible gap function which corresponds to the standard optimality criteria in the aforementioned subcases. We present a universal algorithm for these inequalities based on the Mirror-Prox algorithm. Concretely, our algorithm simultaneously achieves the optimal rates for the smooth/non-smooth, and noisy/noiseless settings. This is done without any prior knowledge of these properties, and in the general set-up of arbitrary norms and compatible Bregman divergences. For convex minimization and convex-concave saddle-point problems, this leads to new adaptive algorithms. Our method relies on a novel yet simple adaptive choice of the step-size, which can be seen as the appropriate extension of AdaGrad to handle constrained problems.
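A hedged Euclidean sketch of the adaptive idea (the constants and the AdaGrad-style denominator are illustrative; the paper works with general norms and Bregman divergences):

```python
import numpy as np

def universal_mirror_prox(F, proj, x0, D=1.0, iters=1000):
    """Mirror-Prox loop with an adaptive step: the accumulated squared
    operator differences play the role of AdaGrad's gradient sums, so no
    smoothness or noise parameters need to be known in advance."""
    x, acc = x0.astype(float), 1.0
    avg = np.zeros_like(x)
    for t in range(1, iters + 1):
        eta = D / np.sqrt(acc)
        Fx = F(x)
        z = proj(x - eta * Fx)                 # extrapolation step
        Fz = F(z)
        x = proj(x - eta * Fz)                 # update step
        acc += np.linalg.norm(Fz - Fx) ** 2    # adaptive accumulation
        avg += (z - avg) / t                   # ergodic average is returned
    return avg
```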

Journal ArticleDOI
TL;DR: In this paper, the authors present a generic framework to extend existing algorithms to solve more general nonlinear, possibly nonconvex, optimization problems by incorporating a local search step (gradient descent or Quasi-Newton iteration) into the uniformly optimal convex programming methods, and then enforcing a monotone decreasing property of the function values computed along the trajectory.
Abstract: Uniformly optimal convex programming algorithms have been designed to achieve the optimal complexity bounds for convex optimization problems regardless of the level of smoothness of the objective function. In this paper, we present a generic framework to extend such existing algorithms to solve more general nonlinear, possibly nonconvex, optimization problems. The basic idea is to incorporate a local search step (gradient descent or Quasi-Newton iteration) into the uniformly optimal convex programming methods, and then enforce a monotone decreasing property of the function values computed along the trajectory. While optimal methods for nonconvex programming are not generally known, algorithms of these types will achieve the best known complexity for nonconvex problems, and the optimal complexity for convex ones without requiring any problem parameters. As a consequence, we can have a unified treatment for a general class of nonlinear programming problems regardless of their convexity and smoothness level. In particular, we show that the accelerated gradient and level methods, both originally designed for solving convex optimization problems only, can be used for solving both convex and nonconvex problems uniformly. In a similar vein, we show that some well-studied techniques for nonlinear programming, e.g., Quasi-Newton iteration, can be embedded into optimal convex optimization algorithms to possibly further enhance their numerical performance. Our theoretical and algorithmic developments are complemented by some promising numerical results obtained for solving a few important nonconvex and nonlinear data analysis problems in the literature.
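A toy sketch of the framework's central device: take the better of an accelerated candidate and a plain gradient (local-search) candidate each iteration, so accepted objective values are monotone decreasing while acceleration can still kick in. The Nesterov-style momentum rule is an illustrative choice, not the paper's exact method.

```python
import numpy as np

def monotone_accelerated(f, grad, x0, L=1.0, iters=200):
    """Because the gradient candidate always decreases f when the step is
    1/L, accepting the pointwise-better candidate enforces monotonicity."""
    x = x_prev = np.asarray(x0, dtype=float)
    for k in range(1, iters + 1):
        y = x + (k - 1.0) / (k + 2.0) * (x - x_prev)  # accelerated extrapolation
        cand_acc = y - grad(y) / L                    # accelerated candidate
        cand_gd = x - grad(x) / L                     # local-search candidate
        x_prev, x = x, cand_acc if f(cand_acc) <= f(cand_gd) else cand_gd
    return x

# usage: ill-conditioned convex quadratic with L = 10
Q = np.diag([1.0, 10.0])
f = lambda v: 0.5 * v @ Q @ v
grad = lambda v: Q @ v
print(monotone_accelerated(f, grad, np.array([5.0, 5.0]), L=10.0))
```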

Proceedings ArticleDOI
01 Jan 2019
TL;DR: In this article, the authors considered the problem of maximizing a monotone submodular function subject to a knapsack constraint and proposed an algorithm that achieves a nearly-optimal $1 - 1/e - \epsilon$ approximation, using $(1/\epsilon)^{O(1/\epsilon^{4})}\, n \log^{2} n$ function evaluations and arithmetic operations.
Abstract: We consider the problem of maximizing a monotone submodular function subject to a knapsack constraint. Our main contribution is an algorithm that achieves a nearly-optimal, 1 - 1/e - epsilon approximation, using (1/epsilon)^{O(1/epsilon^4)} n log^2{n} function evaluations and arithmetic operations. Our algorithm is impractical but theoretically interesting, since it overcomes a fundamental running time bottleneck of the multilinear extension relaxation framework. This is the main approach for obtaining nearly-optimal approximation guarantees for important classes of constraints but it leads to Omega(n^2) running times, since evaluating the multilinear extension is expensive. Our algorithm maintains a fractional solution with only a constant number of entries that are strictly fractional, which allows us to overcome this obstacle.
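For orientation, the classical cost-benefit greedy baseline for this problem is easy to state (the paper's contribution is a far more involved algorithm that avoids the multilinear extension; this baseline is shown only as the standard comparison point):

```python
def greedy_knapsack(f, costs, budget):
    """Repeatedly add the affordable element with the best marginal-gain
    to cost ratio; f is a monotone submodular set function."""
    S, spent = set(), 0.0
    ground = set(range(len(costs)))
    while True:
        best, best_ratio = None, 0.0
        for e in ground - S:
            if spent + costs[e] > budget:
                continue
            gain = f(S | {e}) - f(S)
            if gain / costs[e] > best_ratio:
                best, best_ratio = e, gain / costs[e]
        if best is None:
            return S
        S.add(best)
        spent += costs[best]

# toy usage: coverage objective f(S) = |union of the sets indexed by S|
sets = [{1, 2, 3}, {3, 4}, {5}, {1, 5, 6}]
f = lambda S: len(set().union(*(sets[i] for i in S))) if S else 0
print(greedy_knapsack(f, costs=[2.0, 1.0, 1.0, 2.5], budget=4.0))
```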

Proceedings ArticleDOI
06 Jan 2019
TL;DR: In this article, the adaptivity of an algorithm is defined as the number of sequential rounds of queries it makes to the evaluation oracle of the function, where in every round the algorithm is allowed to make polynomially many parallel queries.
Abstract: In this paper, we study the tradeoff between the approximation guarantee and adaptivity for the problem of maximizing a monotone submodular function subject to a cardinality constraint. The adaptivity of an algorithm is the number of sequential rounds of queries it makes to the evaluation oracle of the function, where in every round the algorithm is allowed to make polynomially-many parallel queries. Adaptivity is an important consideration in settings where the objective function is estimated using samples and in applications where adaptivity is the main running time bottleneck. Previous algorithms achieving a nearly-optimal $1 - 1/e - \epsilon$ approximation require $\Omega(n)$ rounds of adaptivity. In this work, we give the first algorithm that achieves a $1 - 1/e - \epsilon$ approximation using $O(\ln n / \epsilon^{2})$ rounds of adaptivity. The number of function evaluations and additional running time of the algorithm are $O(n\, \mathrm{poly}(\log n, 1/\epsilon))$.

Journal ArticleDOI
TL;DR: This algorithm naturally arises from a non-standard discretization of a continuous dynamical system associated with the Douglas–Rachford splitting algorithm, obtained by performing an explicit, rather than implicit, discretization with respect to one of the operators involved.
Abstract: In this work, we propose a new algorithm for finding a zero of the sum of two monotone operators where one is assumed to be single-valued and Lipschitz continuous. This algorithm naturally arises from a non-standard discretization of a continuous dynamical system associated with the Douglas–Rachford splitting algorithm. More precisely, it is obtained by performing an explicit, rather than implicit, discretization with respect to one of the operators involved. Each iteration of the proposed algorithm requires the evaluation of one forward and one backward operator.
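For comparison, a well-known scheme with the same per-iteration cost profile, one forward evaluation of the Lipschitz operator $F$ and one backward (resolvent) step on $B$, is the forward-reflected-backward iteration shown below; the algorithm derived in this paper from the Douglas–Rachford dynamics has a different, related form.

$$
x_{k+1} = J_{\lambda B}\big(x_k - \lambda\,(2F(x_k) - F(x_{k-1}))\big), \qquad 0 < \lambda < \tfrac{1}{2L}.
$$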

Proceedings Article
24 May 2019
TL;DR: This paper presents an algorithm for maximizing $g - c$ under a $k$-cardinality constraint which produces a random feasible set $S$ with $\mathbb{E}[g(S) - c(S)] \geq (1 - e^{-\gamma} - \epsilon)\, g(OPT) - c(OPT)$, and describes an unconstrained variant with the same approximation guarantees and a faster $O(\frac{n}{\epsilon} \log\frac{1}{\epsilon})$ runtime.
Abstract: It is generally believed that submodular functions -- and the more general class of $\gamma$-weakly submodular functions -- may only be optimized under the non-negativity assumption $f(S) \geq 0$. In this paper, we show that once the function is expressed as the difference $f = g - c$, where $g$ is monotone, non-negative, and $\gamma$-weakly submodular and $c$ is non-negative modular, then strong approximation guarantees may be obtained. We present an algorithm for maximizing $g - c$ under a $k$-cardinality constraint which produces a random feasible set $S$ such that $\mathbb{E} \left[ g(S) - c(S) \right] \geq (1 - e^{-\gamma} - \epsilon) g(OPT) - c(OPT)$, whose running time is $O (\frac{n}{\epsilon} \log^2 \frac{1}{\epsilon})$, i.e., independent of $k$. We extend these results to the unconstrained setting by describing an algorithm with the same approximation guarantees and faster $O(\frac{n}{\epsilon} \log\frac{1}{\epsilon})$ runtime. The main techniques underlying our algorithms are two-fold: the use of a surrogate objective which varies the relative importance between $g$ and $c$ throughout the algorithm, and a geometric sweep over possible $\gamma$ values. Our algorithmic guarantees are complemented by a hardness result showing that no polynomial-time algorithm which accesses $g$ through a value oracle can do better. We empirically demonstrate the success of our algorithms by applying them to experimental design on the Boston Housing dataset and directed vertex cover on the Email EU dataset.
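A minimal sketch of the distorted-greedy idea for the special case $\gamma = 1$ (plain monotone submodular $g$); the paper's algorithms additionally handle general $\gamma$, add sampling for speed, and sweep over $\gamma$ values geometrically.

```python
def distorted_greedy(g, c, ground, k):
    """Maximize g - c under a k-cardinality constraint, with g monotone
    submodular and c modular (a dict of nonnegative costs). The distortion
    weight down-weights g early on, so low-gain, high-cost elements are
    never selected."""
    S = set()
    for i in range(k):
        w = (1.0 - 1.0 / k) ** (k - i - 1)           # distortion weight
        def score(e):
            return w * (g(S | {e}) - g(S)) - c[e]    # distorted marginal profit
        best = max(ground - S, key=score, default=None)
        if best is not None and score(best) > 0:     # add only profitable elements
            S.add(best)
    return S

# toy usage: coverage gain g minus additive cost c
sets = [{1, 2, 3}, {3, 4}, {5}, {1, 5, 6}]
g = lambda S: len(set().union(*(sets[i] for i in S))) if S else 0
c = {0: 0.5, 1: 2.5, 2: 0.2, 3: 1.0}
print(distorted_greedy(g, c, ground=set(range(4)), k=2))
```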

Journal ArticleDOI
TL;DR: In this article, two modified Tseng's extragradient methods (also known as Forward-Backward-Forward methods) for solving non-Lipschitzian and pseudo-monotone variational inequalities in real Hilb...
Abstract: We propose two modified Tseng's extragradient methods (also known as Forward–Backward–Forward methods) for solving non-Lipschitzian and pseudo-monotone variational inequalities in real Hilb...

Journal ArticleDOI
TL;DR: In this paper, a modified forward-backward splitting method using the viscosity method was proposed to solve the monotone inclusion problems in the framework of real Hilbert spaces, which does not require the co-coercivity of the single-valued operator.
Abstract: In this paper, our interest is in investigating the monotone inclusion problems in the framework of real Hilbert spaces. To solve this problem, we propose a new modified forward–backward splitting method using the viscosity method (Moudafi in J Math Anal Appl 241(527):46–55, 2000). Under some mild conditions, we establish the strong convergence of the iterative sequence generated by the proposed algorithm. The advantage of our algorithm is that it does not require the co-coercivity of the single-valued operator. Our result improves related results in the literature. Finally, the performances of our proposed method are presented through numerical experiments in signal recovery.

Journal ArticleDOI
TL;DR: Two new iterative algorithms for solving monotone variational inequality problems in real Hilbert spaces are introduced, based on the inertial subgradient extragradient algorithm, the viscosity approximation method and the Mann type method, and some strong convergence theorems are proved.
Abstract: In this paper, we introduce two new iterative algorithms for solving monotone variational inequality problems in real Hilbert spaces, which are based on the inertial subgradient extragradient algorithm, the viscosity approximation method and the Mann type method, and prove some strong convergence theorems for the proposed algorithms under suitable conditions. The main results in this paper improve and extend some recent works given by some authors. Finally, the performances and comparisons with some existing methods are presented through several preliminary numerical experiments.

Journal ArticleDOI
TL;DR: In this article, the authors investigate the problem of finding a common solution to a fixed point problem involving demi-contractive operator and a variational inequality with monotone and Lipschitz continuo...
Abstract: In this paper, we investigate the problem of finding a common solution to a fixed point problem involving demi-contractive operator and a variational inequality with monotone and Lipschitz continuo...

Journal ArticleDOI
TL;DR: In this article, a modified Hestenes-Stiefel (HS) spectral conjugate gradient (CG) method for monotone nonlinear equations with convex constraints is proposed, based on a projection technique.

Journal ArticleDOI
TL;DR: In this article, the authors unify and clarify the recent advances in the analysis of the fractional and generalized fractional Partial Differential Equations of Caputo and Riemann-Liouville type arising essentially from the probabilistic point of view.
Abstract: This paper aims at unifying and clarifying the recent advances in the analysis of the fractional and generalized fractional Partial Differential Equations of Caputo and Riemann-Liouville type arising essentially from the probabilistic point of view. This point of view leads to the path integral representation for the solutions of these equations, which is seen to be stable with respect to the initial data and key parameters and is directly amenable to numeric calculations (Monte-Carlo simulation). In many cases these solutions can be compactly presented via the wide class of operator-valued analytic functions of the Mittag-Leffler type, which are proved to be expressed as the Laplace transforms of the exit times of monotone Markov processes.
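For reference, the two-parameter Mittag-Leffler function underlying the operator-valued class mentioned above is

$$
E_{\alpha,\beta}(z) = \sum_{k=0}^{\infty} \frac{z^{k}}{\Gamma(\alpha k + \beta)}, \qquad \alpha > 0,
$$

with $E_\alpha := E_{\alpha,1}$; for $\alpha = \beta = 1$ it reduces to the exponential $e^{z}$.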