
Showing papers on "Strongly monotone published in 2022"


Journal ArticleDOI
01 Jan 2022
TL;DR: In this article, the problem of zeroing an error output of a nonlinear discrete-time system in the presence of constant exogenous disturbances, subject to hard convex constraints on the input signal, is formulated as a variational inequality, and a forward-backward splitting algorithm is adapted to act as an integral controller which ensures that the input constraints are met at each time step.
Abstract: We consider the problem of zeroing an error output of a nonlinear discrete-time system in the presence of constant exogenous disturbances, subject to hard convex constraints on the input signal. The design specification is formulated as a variational inequality, and we adapt a forward-backward splitting algorithm to act as an integral controller which ensures that the input constraints are met at each time step. We establish a low-gain stability result for the closed-loop system when the plant is exponentially stable, generalizing previously known results for integral control of discrete-time systems. Specifically, it is shown that if the composition of the plant equilibrium input-output map and the integral feedback gain is strongly monotone, then the closed-loop system is exponentially stable for all sufficiently small integral gains. The method is illustrated via application to a four-tank process.
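The control scheme above can be sketched in a scalar special case. The plant, gains, and constraint interval below are illustrative, not taken from the paper: the forward step is the integral update driven by the error output, and the backward step projects the input onto its constraint set, so the constraint is met at every time step.

```python
def proj_interval(u, lo, hi):
    """Backward step: Euclidean projection onto the input constraint set [lo, hi]."""
    return max(lo, min(hi, u))

def run_integral_control(a=0.5, b=1.0, d=0.3, r=1.0, gain=0.2, lo=0.0, hi=2.0, steps=200):
    """Forward-backward integral controller on a toy exponentially stable plant
    x_{k+1} = a*x_k + b*u_{k+1} + d (|a| < 1), with reference r and small gain."""
    x, u = 0.0, 0.0
    for _ in range(steps):
        e = x - r                                 # error output to be zeroed
        u = proj_interval(u - gain * e, lo, hi)   # forward step, then projection
        x = a * x + b * u + d                     # plant update with disturbance d
    return x, u
```

For these toy values the closed loop settles at x = 1 (so the error is zeroed) with equilibrium input u = (r*(1 - a) - d)/b = 0.2, inside the constraint set, consistent with the low-gain stability result described above.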

7 citations


Journal ArticleDOI
TL;DR: In this paper, two new iterative schemes for finding an element of the set of solutions of a pseudo-monotone, Lipschitz continuous variational inequality problem in real Hilbert spaces are presented.

7 citations


Journal ArticleDOI
TL;DR: An explicit proximal method is proposed which requires only one proximal step and one mapping evaluation per iteration, and which uses an adaptive step-size rule that avoids the need for prior knowledge of the Lipschitz constant of the involved mapping.

6 citations


Journal ArticleDOI
TL;DR: In this article, strongly convergent versions of the forward-reflected-backward splitting method of Malitsky and Tam for finding a zero of the sum of two monotone operators in a real Hilbert space are studied.
Abstract: In this paper, we propose and study several strongly convergent versions of the forward–reflected–backward splitting method of Malitsky and Tam for finding a zero of the sum of two monotone operators in a real Hilbert space. Our proposed methods only require one forward evaluation of the single-valued operator and one backward evaluation of the set-valued operator at each iteration; a feature that is absent in many other available strongly convergent splitting methods in the literature. We also develop inertial versions of our methods and strong convergence results are obtained for these methods when the set-valued operator is maximal monotone and the single-valued operator is Lipschitz continuous and monotone. Finally, we discuss some examples from image restorations and optimal control regarding the implementations of our methods in comparisons with known related methods in the literature.
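A minimal Euclidean sketch of the forward-reflected-backward update, assuming a toy one-dimensional problem where the single-valued operator is A(x) = x (monotone, 1-Lipschitz) and B is the normal cone of an interval, so its resolvent is a projection. The step size λ < 1/(2L) and all parameters are illustrative; note the method needs only one new evaluation of A per iteration.

```python
def proj(x, lo=1.0, hi=2.0):
    """Resolvent of B = normal cone of [lo, hi] is the projection onto [lo, hi]."""
    return max(lo, min(hi, x))

def A(x):
    """Single-valued monotone, 1-Lipschitz operator (illustrative)."""
    return x

def frb(x0=5.0, lam=0.4, iters=200):
    """Forward-reflected-backward: x+ = J_B(x - 2*lam*A(x) + lam*A(x_prev))."""
    x_prev, x = x0, x0
    for _ in range(iters):
        x_next = proj(x - 2 * lam * A(x) + lam * A(x_prev))
        x_prev, x = x, x_next
    return x
```

Here the zero of A + B is x* = 1 (the lower endpoint, since -A(1) lies in the normal cone there), and the iteration settles at that point.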

5 citations


Journal ArticleDOI
TL;DR: In this paper, an explicit proximal method is proposed for the generalized variational inequality problem in real Hilbert spaces; it requires only one proximal step and one mapping evaluation per iteration, and it uses an adaptive step-size rule that avoids the need for prior knowledge of the Lipschitz constant of the involved mapping.

5 citations



Journal ArticleDOI
TL;DR: In this article, a new subgradient extragradient method for approximating a common solution of pseudo-monotone and Lipschitz continuous variational inequalities and a fixed point problem in real Hilbert spaces is proposed.
Abstract: Using the concept of Bregman divergence, we propose a new subgradient extragradient method for approximating a common solution of pseudo-monotone and Lipschitz continuous variational inequalities and a fixed point problem in real Hilbert spaces. The algorithm uses a new self-adjustment rule for selecting the stepsize in each iteration, and we prove a strong convergence result for the sequence generated by the algorithm without prior knowledge of the Lipschitz constant. Finally, we provide some numerical examples to illustrate the performance and accuracy of our algorithm in finite and infinite dimensional spaces.
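In the Euclidean special case (Bregman divergence = squared distance), the subgradient extragradient step can be sketched in one dimension. The mapping F, the feasible set, and the fixed step size below are illustrative; the paper's self-adjusting stepsize rule is replaced here by a constant τ < 1/L for simplicity.

```python
def proj_C(x, lo=0.0, hi=5.0):
    """Projection onto the feasible set C = [lo, hi]."""
    return max(lo, min(hi, x))

def F(x):
    """Lipschitz, strongly monotone operator; the VI solution on C is x* = 2."""
    return x - 2.0

def subgrad_extragrad(x0=10.0, tau=0.5, iters=100):
    """Subgradient extragradient: only the first step projects onto C;
    the second projects onto a half-space T_k supporting C at y."""
    x = x0
    for _ in range(iters):
        y = proj_C(x - tau * F(x))          # first projection, onto C
        c = (x - tau * F(x)) - y            # normal of T_k = {w : c*(w - y) <= 0}
        z = x - tau * F(y)                  # extragradient step
        if c > 0:                           # projection onto the half-space (1-D)
            x = min(z, y)
        elif c < 0:
            x = max(z, y)
        else:
            x = z
    return x
```

Replacing the second projection onto C by a projection onto the half-space T_k is what makes the scheme cheap when C has no closed-form projection; in this 1-D toy both are trivial, but the branch structure mirrors the general method.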

3 citations


Journal ArticleDOI
TL;DR: In this paper, an algorithm for solving variational inequality problems with a Lipschitz continuous and pseudomonotone mapping in Banach space is introduced, and a numerical experiment is given in support of the results.
Abstract: In this paper, we introduce an algorithm for solving variational inequality problems with a Lipschitz continuous and pseudomonotone mapping in Banach space. We modify the subgradient extragradient method with a new and simple iterative step size, and the strong convergence to a common solution of the variational inequality and fixed point problems is established without knowledge of the Lipschitz constant. Finally, a numerical experiment is given in support of our results.

3 citations


Journal ArticleDOI
TL;DR: In this article , the authors proposed two simple methods for finding a zero of the sum of two monotone operators in real reflexive Banach spaces, which require only one evaluation of the single-valued operator at each iteration.
Abstract: We propose two very simple methods, the first one with constant step sizes and the second one with self-adaptive step sizes, for finding a zero of the sum of two monotone operators in real reflexive Banach spaces. Our methods require only one evaluation of the single-valued operator at each iteration. Weak convergence results are obtained when the set-valued operator is maximal monotone and the single-valued operator is Lipschitz continuous, and strong convergence results are obtained when either one of these two operators is required, in addition, to be strongly monotone. We also obtain the rate of convergence of our proposed methods in real reflexive Banach spaces. Finally, we apply our results to solving generalized Nash equilibrium problems for gas markets.

3 citations



Journal ArticleDOI
TL;DR: In this paper, an inertial proximal point method for variational inclusions involving the difference of two maximal monotone vector fields in Hadamard manifolds is proposed, and sufficient conditions for boundedness and full convergence of the sequence are presented.
Abstract: We propose an inertial proximal point method for variational inclusion involving difference of two maximal monotone vector fields in Hadamard manifolds. We prove that if the sequence generated by the method is bounded, then every cluster point is a solution of the non-monotone variational inclusion. Some sufficient conditions for boundedness and full convergence of the sequence are presented. The efficiency of the method is verified by numerical experiments comparing its performance with classical versions of the method for monotone and non-monotone problems.

Journal ArticleDOI
TL;DR: In this article, a new kind of split monotone variational inclusion problem involving the Cayley operator in the setting of infinite-dimensional Hilbert spaces is introduced, and a general iterative method is developed to approximate its solution.
Abstract: In this article, we introduce a new kind of split monotone variational inclusion problem involving the Cayley operator in the setting of infinite-dimensional Hilbert spaces. We develop a general iterative method to approximate the solution of the split monotone variational inclusion problem involving the Cayley operator. Under some suitable conditions, a convergence theorem for the sequences generated by the proposed iterative scheme is established, which also solves certain variational inequality problems related to strongly positive linear operators. Finally, a numerical example is presented to study the efficiency of the proposed algorithm through MATLAB programming.


Journal ArticleDOI
TL;DR: In this paper, the authors investigated and analyzed the strong convergence of the sequence generated by an inexact proximal point method with possibly unbounded errors for finding zeros of monotone operators in Hadamard spaces.
Abstract: In this paper, we investigate and analyse the strong convergence of the sequence generated by an inexact proximal point method with possibly unbounded errors for finding zeros of monotone operators in Hadamard spaces. We show that the boundedness of the generated sequence is equivalent to the zero set of the operator being nonempty. In this case, we prove the strong convergence of the generated sequence to a zero of the operator. We also provide some applications of our main results and give a numerical example to show the performance of the proposed algorithm.

Posted ContentDOI
25 Mar 2022
TL;DR: This paper shows that when the atom poset has 5 or more elements, there are infinitely many distinct monotone game values; it was already known that there are only finitely many such values when the poset of atoms is linearly ordered with 4 or fewer elements.
Abstract: A notion of combinatorial game over a partially ordered set of atomic outcomes was recently introduced by Selinger. These games are appropriate for describing the value of positions in Hex and other monotone set coloring games. It is already known that there are infinitely many distinct monotone game values when the poset of atoms is not linearly ordered, and that there are only finitely many such values when the poset of atoms is linearly ordered with 4 or fewer elements. In this short paper, we settle the remaining case: when the atom poset has 5 or more elements, there are infinitely many distinct monotone values.

Posted ContentDOI
17 May 2022
TL;DR: In this paper, it was shown that the O(1/N) last-iterate convergence of PEG for monotone variational inequalities can be obtained without any assumption on the Jacobian of the operator, and that it extends to the constrained case.
Abstract: The Past Extragradient (PEG) [Popov, 1980] method, also known as the Optimistic Gradient method, has known a recent gain in interest in the optimization community with the emergence of variational inequality formulations for machine learning. Recently, in the unconstrained case, Golowich et al. [2020] proved that a $O(1/N)$ last-iterate convergence rate in terms of the squared norm of the operator can be achieved for Lipschitz and monotone operators with a Lipschitz Jacobian. In this work, by introducing a novel analysis through potential functions, we show that (i) this $O(1/N)$ last-iterate convergence can be achieved without any assumption on the Jacobian of the operator, and (ii) it can be extended to the constrained case, which was not derived before even under Lipschitzness of the Jacobian. The proof is significantly different from the one known from Golowich et al. [2020], and its discovery was computer-aided. Those results close the open question of the last iterate convergence of PEG for monotone variational inequalities.
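A minimal sketch of the PEG / Optimistic Gradient update on the bilinear saddle point min_x max_y xy, whose associated operator F(x, y) = (y, -x) is monotone and 1-Lipschitz; the step size, start point, and iteration count below are illustrative. In this equivalent form, the update reuses the previous operator evaluation: z_{k+1} = z_k - η(2F(z_k) - F(z_{k-1})).

```python
def F(z):
    """Monotone, 1-Lipschitz operator of the bilinear saddle problem min_x max_y x*y;
    its unique zero is the origin."""
    x, y = z
    return (y, -x)

def peg(z0=(1.0, 1.0), eta=0.2, iters=800):
    """Past Extragradient / Optimistic Gradient with stored past evaluation."""
    z_prev, z = z0, z0
    for _ in range(iters):
        gx, gy = F(z)
        px, py = F(z_prev)
        z_prev, z = z, (z[0] - eta * (2 * gx - px), z[1] - eta * (2 * gy - py))
    return z
```

Plain gradient descent-ascent diverges on this bilinear problem, while PEG converges to the saddle point; that contrast is the standard motivation for the extrapolation via the past evaluation.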



Journal ArticleDOI
TL;DR: In this paper, a parallel iterative algorithm for approximating the solution of a split feasibility problem on the zero of monotone operators, a generalized mixed equilibrium problem and a fixed point problem is introduced.
Abstract: The main purpose of this paper is to introduce a parallel iterative algorithm for approximating the solution of a split feasibility problem on the zero of monotone operators, a generalized mixed equilibrium problem and a fixed point problem. Using our algorithm, we state and prove a strong convergence theorem for approximating a common element in the set of solutions of a problem of finding zeros of the sum of two monotone operators, a generalized mixed equilibrium problem and a fixed point problem for a finite family of $\eta$-demimetric mappings in the framework of reflexive, strictly convex and smooth Banach spaces. We also give a numerical experiment applying our main result. Our result improves, extends and unifies other results in this direction in the literature.

Journal ArticleDOI
TL;DR: In this paper, a new hybrid technique is provided for computing a common solution of the fixed point problem for two nonexpansive mappings and the variational inequality problem for an inverse strongly monotone mapping in a real Hilbert space.
Abstract: In this work, a new hybrid technique is provided for computing a common solution of the fixed point problem for two nonexpansive mappings and the variational inequality problem for an inverse strongly monotone mapping in a real Hilbert space. We also demonstrate the convergence of the hybrid approach using an example.

Journal ArticleDOI
TL;DR: In this article, two subgradient extragradient-type algorithms for solving variational inequality problems in the real Hilbert space are proposed. The first one can be applied when the mapping f is strongly pseudomonotone (not necessarily monotone) and Lipschitz continuous.
Abstract: In this paper, we introduce two subgradient extragradient-type algorithms for solving variational inequality problems in the real Hilbert space. The first one can be applied when the mapping f is strongly pseudomonotone (not necessarily monotone) and Lipschitz continuous. This algorithm needs only two projections, where the first projection is onto the closed convex set C and the second is onto a half-space $C_{k}$. The strong convergence theorem is also established. The second algorithm is relaxed and self-adaptive; that is, at each iteration it calculates two projections onto some half-spaces, and the step size can be selected in some adaptive way. Under the assumption that f is monotone and Lipschitz continuous, a weak convergence theorem is provided. Finally, we provide numerical experiments to show the efficiency and advantage of the proposed algorithms.

Journal ArticleDOI
TL;DR: In this article, the authors used the nearest point projection to force the strong convergence of a Mann-based iteration for nonexpansive and monotone operators in an infinite dimensional Hilbert space.
Abstract: Mann iteration is only weakly convergent in infinite dimensional spaces. In this paper, we use the nearest point projection to force the strong convergence of a Mann-based iteration for nonexpansive and monotone operators. A strong convergence theorem of common elements is obtained in an infinite dimensional Hilbert space. No compactness conditions are needed.



Posted ContentDOI
20 Feb 2022
TL;DR: In this paper, a stochastic version of the classical Tseng forward-backward-forward method with an inertial term was proposed for solving monotone inclusions given by the sum of a maximal monotone operator and a single-valued monotone operator in real Hilbert spaces.
Abstract: In this paper, we propose a stochastic version of the classical Tseng forward-backward-forward method with an inertial term for solving monotone inclusions given by the sum of a maximal monotone operator and a single-valued monotone operator in real Hilbert spaces. We obtain almost sure convergence for the general case and the rate $\mathcal{O}(1/n)$ in expectation for the strongly monotone case. Furthermore, we derive an $\mathcal{O}(1/n)$ convergence rate of the primal-dual gap for saddle point problems.
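A deterministic, one-dimensional sketch of Tseng's forward-backward-forward step with an inertial term (the stochastic estimation studied in the paper is omitted). The operators A and B and all parameters below are illustrative: A(x) = x is monotone and 1-Lipschitz, B is the normal cone of an interval so its resolvent is a projection, and λ < 1/L.

```python
def resolvent_B(x, lo=1.0, hi=2.0):
    """Resolvent of the maximal monotone B = normal cone of [lo, hi]: projection."""
    return max(lo, min(hi, x))

def A(x):
    """Single-valued monotone, 1-Lipschitz operator (illustrative)."""
    return x

def inertial_tseng_fbf(x0=4.0, lam=0.5, alpha=0.1, iters=300):
    """Inertial Tseng FBF: extrapolate, forward-backward step, then a second
    forward (correction) step using A evaluated at both points."""
    x_prev, x = x0, x0
    for _ in range(iters):
        w = x + alpha * (x - x_prev)             # inertial extrapolation
        y = resolvent_B(w - lam * A(w))          # forward-backward step
        x_prev, x = x, y - lam * (A(y) - A(w))   # forward correction step
    return x
```

The zero of A + B in this toy problem is x* = 1, and the iterates converge there; the correction step is what lets Tseng's method handle operators that are merely monotone rather than cocoercive.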

Journal ArticleDOI
TL;DR: In this paper, a framework of generalized proximal point algorithms associated with a maximally monotone operator is proposed and studied, and sufficient conditions on the regularization and relaxation parameters are given for the equivalence of the boundedness of the sequence of iterates generated by the algorithm and the nonemptiness of the zero set of the maximally monotone operator.
Abstract: In this work, we propose and study a framework of generalized proximal point algorithms associated with a maximally monotone operator. We indicate sufficient conditions on the regularization and relaxation parameters of generalized proximal point algorithms for the equivalence of the boundedness of the sequence of iterations generated by this algorithm and the non-emptiness of the zero set of the maximally monotone operator, and for the weak and strong convergence of the algorithm. Our results cover or improve many results on generalized proximal point algorithms in our references. Improvements of our results are illustrated by comparing our results with related known ones.
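A minimal sketch of a relaxed (generalized) proximal point iteration, assuming a toy maximally monotone operator T(x) = x - 1 whose resolvent is available in closed form; the regularization parameter c and the relaxation parameter ρ ∈ (0, 2) below are illustrative, not conditions from the paper.

```python
def resolvent(x, c=1.0):
    """Resolvent J_{cT} of T(x) = x - 1: solve y + c*(y - 1) = x for y."""
    return (x + c) / (1.0 + c)

def generalized_ppa(x0=10.0, c=1.0, rho=1.5, iters=60):
    """Relaxed proximal point iteration: x <- (1 - rho)*x + rho*J_{cT}(x)."""
    x = x0
    for _ in range(iters):
        x = (1 - rho) * x + rho * resolvent(x, c)
    return x
```

Here the zero set of T is {1}, the sequence is bounded and converges to that zero, illustrating (in the simplest possible setting) the equivalence discussed above between boundedness of the iterates and nonemptiness of the zero set.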

Posted ContentDOI
31 Mar 2022
TL;DR: In this article, a method for finding the zero of the sum of two monotone operators on real Hilbert spaces is described, and some illustrative examples are given at the end of the paper.
Abstract: This paper describes and presents a method for finding the zero of the sum of two monotone operators on real Hilbert spaces. In addition, some illustrative examples are given at the end of this paper.

Journal ArticleDOI
TL;DR: In this paper, the convergence of a stochastic algorithm for solving monotone inclusions that are the sum of a maximal monotone operator and a monotone, Lipschitzian operator is analyzed.
Abstract: We propose and analyze the convergence of a novel stochastic algorithm for solving monotone inclusions that are the sum of a maximal monotone operator and a monotone, Lipschitzian operator. The proposed algorithm requires only unbiased estimations of the Lipschitzian operator. We obtain the rate $\mathcal{O}(\log(n)/n)$ in expectation for the strongly monotone case, as well as almost sure convergence for the general case. Furthermore, in the context of application to convex-concave saddle point problems, we derive the rate of the primal-dual gap. In particular, we also obtain an $\mathcal{O}(1/n)$ convergence rate of the primal-dual gap in the deterministic setting.

Posted ContentDOI
11 Jul 2022
TL;DR: In this paper, it was shown that the inductive bias of having positive parameters can lead to a super-polynomial blow-up in the number of neurons when approximating monotone functions.
Abstract: Monotone functions and data sets arise in a variety of applications. We study the interpolation problem for monotone data sets: The input is a monotone data set with $n$ points, and the goal is to find a size and depth efficient monotone neural network, with non-negative parameters and threshold units, that interpolates the data set. We show that there are monotone data sets that cannot be interpolated by a monotone network of depth $2$. On the other hand, we prove that for every monotone data set with $n$ points in $\mathbb{R}^d$, there exists an interpolating monotone network of depth $4$ and size $O(nd)$. Our interpolation result implies that every monotone function over $[0,1]^d$ can be approximated arbitrarily well by a depth-4 monotone network, improving the previous best-known construction of depth $d+1$. Finally, building on results from Boolean circuit complexity, we show that the inductive bias of having positive parameters can lead to a super-polynomial blow-up in the number of neurons when approximating monotone functions.
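The monotonicity mechanism behind such networks can be illustrated with a tiny example: with non-negative weights and monotone threshold activations, every layer is coordinatewise monotone, hence so is their composition. The weights and thresholds below are illustrative, not the paper's construction.

```python
def monotone_net(x):
    """Toy two-input threshold network with non-negative parameters.
    Non-negative weights composed with the (monotone) threshold activation
    yield a monotone function: x <= y coordinatewise implies f(x) <= f(y)."""
    s = 1.0 * x[0] + 2.0 * x[1]                              # non-negative input weights
    h = [1.0 if s >= t else 0.0 for t in (0.5, 1.5, 2.5)]    # threshold units
    return 0.5 * h[0] + 1.0 * h[1] + 0.25 * h[2]             # non-negative output weights
```

Checking a chain of coordinatewise-ordered inputs confirms the outputs are non-decreasing along it; the paper's lower bound says that achieving *interpolation* of arbitrary monotone data with such networks can require depth greater than 2 and, for approximation, super-polynomially many neurons relative to unconstrained networks.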

Journal ArticleDOI
TL;DR: In this article, a method for finding the zero of the sum of a finite family of maximal monotone operators on real Hilbert spaces is presented; in the case of three operators, the problem is reformulated as a fixed point problem.
Abstract: In this paper, we present a method for finding the zero of the sum of a finite family of maximal monotone operators on real Hilbert spaces. In the case where the number of maximal monotone operators is three, we define a function whose fixed points are solutions of our problem. Some illustrative examples are given at the end of this paper.