Journal ArticleDOI

Inertial extragradient method via viscosity approximation approach for solving equilibrium problem in Hilbert space

01 Feb 2021-Optimization (Taylor & Francis)-Vol. 70, Iss: 2, pp 387-412
Abstract: In this paper, we propose a new viscosity-type inertial extragradient method with an Armijo line-search technique for approximating a common solution of equilibrium problems with pseudo-monotone bifunctions in Hilbert spaces.
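The abstract names three ingredients: an inertial extrapolation step, an Armijo-type backtracking line search, and a viscosity averaging step. The following is a minimal toy sketch of how those pieces typically fit together, not the paper's actual scheme: it applies them to the simple monotone operator F(x) = x - 1 on a box (the equilibrium bifunction f(u, v) = ⟨F(u), v - u⟩), with a hypothetical contraction g for the viscosity step; all constants are illustrative.

```python
import numpy as np

def project_box(x, lo, hi):
    """Euclidean projection onto the box [lo, hi]^n."""
    return np.clip(x, lo, hi)

def F(x):
    """Monotone (hence pseudo-monotone) operator F(x) = x - 1;
    the unique solution over [0, 2]^2 is x* = (1, 1)."""
    return x - 1.0

def g(x):
    """Contraction used in the viscosity step (hypothetical choice)."""
    return 0.5 * x

lo, hi = 0.0, 2.0
x_prev = np.array([2.0, 0.0])
x = np.array([1.8, 0.2])
theta, mu = 0.3, 0.9               # inertial weight, Armijo parameter

for n in range(1, 3001):
    w = project_box(x + theta * (x - x_prev), lo, hi)  # inertial step
    # Armijo-type backtracking: shrink lam until
    #   lam * ||F(w) - F(y)|| <= mu * ||w - y||
    lam = 1.0
    while True:
        y = project_box(w - lam * F(w), lo, hi)
        if lam * np.linalg.norm(F(w) - F(y)) <= mu * np.linalg.norm(w - y) + 1e-12:
            break
        lam *= 0.5
    z = project_box(w - lam * F(y), lo, hi)  # extragradient correction
    alpha = 1.0 / (n + 1)                    # viscosity weight, alpha_n -> 0
    x_prev, x = x, alpha * g(x) + (1 - alpha) * z

print(x)  # approaches the solution (1, 1)
```

The backtracking loop removes the need to know the Lipschitz constant of F in advance, which is the usual motivation for Armijo-type stepsizes in extragradient methods.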
Citations
Journal ArticleDOI
TL;DR: A Halpern-type algorithm with two self-adaptive stepsizes is proposed for obtaining a solution of the split common fixed point and monotone variational inclusion problem in uniformly convex and 2-uniformly smooth Banach spaces, and a strong convergence theorem for the algorithm is proved.
Abstract: In this paper, we study the split common fixed point and monotone variational inclusion problem in uniformly convex and 2-uniformly smooth Banach spaces. We propose a Halpern-type algorithm with two self-adaptive stepsizes for obtaining a solution of the problem and prove a strong convergence theorem for the algorithm. Many existing results in the literature are derived as corollaries to our main result. In addition, we apply our main result to the split common minimization problem and fixed point problem, and illustrate the efficiency and performance of our algorithm with a numerical example. The main result in this paper extends and generalizes many recent related results in the literature in this direction.

67 citations
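The self-adaptive stepsizes and the Banach-space setting of this paper are beyond a toy example, but the Halpern averaging skeleton itself is simple. The sketch below, under purely illustrative assumptions, iterates x_{n+1} = α_n u + (1 - α_n) T(x_n) in a Hilbert space with T a projection (a nonexpansive map); the iterates converge strongly to the fixed point of T nearest the anchor u.

```python
import numpy as np

def T(x):
    """Nonexpansive map: projection onto the diagonal {(t, t)} in R^2;
    its fixed-point set is that diagonal."""
    m = 0.5 * (x[0] + x[1])
    return np.array([m, m])

u = np.array([2.0, 0.0])       # Halpern anchor point
x = u.copy()
for n in range(1, 20001):
    alpha = 1.0 / (n + 1)      # alpha_n -> 0 with sum alpha_n = infinity
    x = alpha * u + (1 - alpha) * T(x)

print(x)  # converges strongly to the projection of u onto Fix T, i.e. (1, 1)
```

The anchoring term α_n u is what upgrades the weak convergence of plain fixed-point iteration to strong convergence, at the cost of a slower O(1/n) rate for the residual.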

Journal ArticleDOI
TL;DR: This paper presents two new methods with inertial steps for solving the split variational inequality problems in real Hilbert spaces without any product space formulation and proves that the sequence generated by these methods converges strongly to a minimum-norm solution of the problem when the operators are pseudomonotone and Lipschitz continuous.
Abstract: In solving the split variational inequality problems, very few methods have been considered in the literature and most of these few methods require the underlying operators to be co-coercive. This restrictive co-coercive assumption has been dispensed with in some methods, many of which require a product space formulation of the problem. However, it has been discovered that this product space formulation may cause some potential difficulties during implementation and its approach may not fully exploit the attractive splitting structure of the split variational inequality problem. In this paper, we present two new methods with inertial steps for solving the split variational inequality problems in real Hilbert spaces without any product space formulation. We prove that the sequence generated by these methods converges strongly to a minimum-norm solution of the problem when the operators are pseudomonotone and Lipschitz continuous. Also, we provide several numerical experiments of the proposed methods in comparison with other related methods in the literature.

49 citations

Journal ArticleDOI
TL;DR: In this paper, an inertial extrapolation method is proposed for solving generalized split feasibility problems over the solution set of monotone variational inclusion problems in real Hilbert spaces.
Abstract: In this paper, we propose a new inertial extrapolation method for solving the generalized split feasibility problem over the solution set of monotone variational inclusion problems in real Hilbert spaces.

47 citations

Posted Content
TL;DR: In this article, the authors incorporate inertial terms in the hybrid proximal-extragradient algorithm and investigate the convergence properties of the resulting iterative scheme designed for finding the zeros of a maximally monotone operator in real Hilbert spaces.
Abstract: We incorporate inertial terms in the hybrid proximal-extragradient algorithm and investigate the convergence properties of the resulting iterative scheme designed for finding the zeros of a maximally monotone operator in real Hilbert spaces. The convergence analysis relies on extended Fejer monotonicity techniques combined with the celebrated Opial Lemma. We also show that the classical hybrid proximal-extragradient algorithm and the inertial versions of the proximal point, the forward-backward and the forward-backward-forward algorithms can be embedded in the framework of the proposed iterative scheme.

46 citations
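As the abstract notes, the inertial proximal point algorithm is one of the schemes embedded in this framework. A minimal sketch of that special case, under illustrative assumptions (a one-dimensional maximally monotone operator A(x) = x - a, whose resolvent has closed form), looks as follows; the actual hybrid proximal-extragradient scheme additionally allows inexact proximal steps with a relative error criterion.

```python
def resolvent(x, lam, a=3.0):
    """Resolvent J_{lam A}(x) of the maximally monotone operator
    A(x) = x - a: solves y + lam*(y - a) = x for y."""
    return (x + lam * a) / (1.0 + lam)

x_prev, x = 0.0, 0.5
theta, lam = 0.3, 1.0              # inertial weight, proximal parameter
for _ in range(200):
    w = x + theta * (x - x_prev)   # inertial extrapolation
    x_prev, x = x, resolvent(w, lam)

print(x)  # approaches the zero of A, i.e. x = 3
```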

Journal ArticleDOI
TL;DR: In this paper, an iterative scheme is proposed which combines the inertial subgradient extragradient method with the viscosity technique and a self-adaptive stepsize.
Abstract: In this paper, we study a classical monotone and Lipschitz continuous variational inequality and fixed point problems defined on a level set of a convex function in the framework of Hilbert spaces. First, we introduce a new iterative scheme which combines the inertial subgradient extragradient method with viscosity technique and with self-adaptive stepsize. Unlike in many existing subgradient extragradient techniques in literature, the two projections of our proposed algorithm are made onto some half-spaces. Furthermore, we prove a strong convergence theorem for approximating a common solution of the variational inequality and fixed point of an infinite family of nonexpansive mappings under some mild conditions. The main advantages of our method are: the self-adaptive stepsize which avoids the need to know a priori the Lipschitz constant of the associated monotone operator, the two projections made onto some half-spaces, the strong convergence and the inertial technique employed which accelerates convergence rate of the algorithm. Second, we apply our theorem to solve generalised mixed equilibrium problem, zero point problems and convex minimization problem. Finally, we present some numerical examples to demonstrate the efficiency of our algorithm in comparison with other existing methods in literature. Our results improve and extend several existing works in the current literature in this direction.

42 citations
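The abstract's key computational point is that the second projection is made onto a half-space, which has a closed form, rather than onto the feasible set itself. The toy sketch below illustrates that structure, not the paper's full method (the infinite family of nonexpansive mappings and the self-adaptive stepsize are omitted): a Lipschitz monotone rotation operator on a box, an inertial step, the half-space projection, and a viscosity step with a hypothetical contraction g.

```python
import numpy as np

def F(x):
    """Lipschitz monotone rotation operator; the unique VI solution is x* = 0."""
    return np.array([x[1], -x[0]])

def P_C(x):
    """Projection onto the feasible box C = [-1, 1]^2."""
    return np.clip(x, -1.0, 1.0)

def P_halfspace(u, a, b):
    """Closed-form projection onto the half-space {z : <a, z> <= b} --
    the cheap second projection of subgradient extragradient methods."""
    viol = np.dot(a, u) - b
    return u if viol <= 0 else u - (viol / np.dot(a, a)) * a

g = lambda x: 0.5 * x              # viscosity contraction (hypothetical choice)
x_prev = np.array([1.0, 1.0])
x = np.array([0.9, 0.8])
theta, lam = 0.2, 0.1              # inertial weight, stepsize

for n in range(1, 4001):
    w = x + theta * (x - x_prev)   # inertial extrapolation
    y = P_C(w - lam * F(w))        # first projection, onto C
    # Half-space T_n containing C, onto which the second projection is taken
    a = (w - lam * F(w)) - y
    if np.dot(a, a) > 1e-16:
        z = P_halfspace(w - lam * F(y), a, np.dot(a, y))
    else:                          # w - lam*F(w) already lies in C
        z = w - lam * F(y)
    alpha = 1.0 / (n + 1)          # viscosity weight, alpha_n -> 0
    x_prev, x = x, alpha * g(x) + (1 - alpha) * z

print(np.linalg.norm(x))  # tends to 0, the norm of the unique solution
```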

References
Journal ArticleDOI
TL;DR: In this article, the authors propose viscosity approximation methods for selecting a particular fixed point of a given nonexpansive self-mapping.

765 citations

Journal ArticleDOI
TL;DR: It is proved that the proximal point algorithm (PPA) for minimizing a proper, lower semicontinuous function $f$ on a Hilbert space, run with positive parameters $\{ \lambda _k \} _{k = 1}^\infty $, converges in general if and only if $\sigma _n = \sum_{k = 1}^n {\lambda _k \to \infty } $.
Abstract: The proximal point algorithm (PPA) for the convex minimization problem $\min _{x \in H} f(x)$, where $f:H \to R \cup \{ \infty \} $ is a proper, lower semicontinuous (lsc) function in a Hilbert space H is considered. Under this minimal assumption on f, it is proved that the PPA, with positive parameters $\{ \lambda _k \} _{k = 1}^\infty $, converges in general if and only if $\sigma _n = \sum_{k = 1}^n {\lambda _k \to \infty } $. Global convergence rate estimates for the residual $f(x_n ) - f(u)$, where $x_n $ is the nth iterate of the PPA and $ u \in H $ is arbitrary, are given. An open question of Rockafellar is settled by giving an example of a PPA for which $x_n $ converges weakly but not strongly to a minimizer of f.

676 citations
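The divergence condition $\sum_k \lambda_k \to \infty$ is easy to see numerically on a toy instance (not from the paper): for $f(x) = |x|$ the proximal map is soft-thresholding, which moves the iterate toward the minimizer 0 by at most $\lambda_k$ per step. With a divergent parameter sequence the PPA reaches the minimizer; with a summable one the total travel is bounded and the iterates stall.

```python
import numpy as np

def prox_abs(x, lam):
    """prox_{lam*|.|}(x): soft-thresholding, the proximal map of f = |.|."""
    return np.sign(x) * max(abs(x) - lam, 0.0)

def ppa(lam_seq, x0=10.0):
    x = x0
    for lam in lam_seq:
        x = prox_abs(x, lam)
    return x

N = 30000
x_div = ppa((1.0 / k for k in range(1, N + 1)))     # sum lam_k diverges
x_sum = ppa((1.0 / k**2 for k in range(1, N + 1)))  # sum lam_k = pi^2/6 < 10

print(x_div, x_sum)  # x_div reaches the minimizer 0; x_sum stalls near 8.355
```

Here the summable run can never close the initial gap of 10 because its total stepsize budget is $\pi^2/6 \approx 1.645$, matching the paper's "if and only if" characterization.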

Journal ArticleDOI
TL;DR: In this paper, a regularized variant of projected subgradient method for nonsmooth, nonstrictly convex minimization in real Hilbert spaces is presented, where only one projection step is needed per iteration and involved stepsizes are controlled so that the algorithm is of practical interest.
Abstract: In this paper, we establish a strong convergence theorem regarding a regularized variant of the projected subgradient method for nonsmooth, nonstrictly convex minimization in real Hilbert spaces. Only one projection step is needed per iteration and the involved stepsizes are controlled so that the algorithm is of practical interest. To this aim, we develop new techniques of analysis which can be adapted to many other non-Fejerian methods.

591 citations

Journal ArticleDOI
TL;DR: Under a monotonicity hypothesis it is shown that equilibrium solutions can be found via iterative convex minimization.
Abstract: We compute constrained equilibria satisfying an optimality condition. Important examples include convex programming, saddle problems, noncooperative games, and variational inequalities. Under a monotonicity hypothesis we show that equilibrium solutions can be found via iterative convex minimization. In the main algorithm each stage of computation requires two proximal steps, possibly using Bregman functions. One step serves to predict the next point; the other helps to correct the new prediction. To enhance practical applicability we tolerate numerical errors.

452 citations
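The abstract describes the two-proximal-step structure precisely: one step predicts the next point, the other corrects the prediction. Under illustrative assumptions (Euclidean prox of a linearized bifunction, i.e. a projected step; the bilinear saddle problem min_x max_y xy on [-1,1]^2; no Bregman functions or numerical-error tolerance), the predict-then-correct scheme can be sketched as:

```python
import numpy as np

def F(z):
    """Saddle operator of L(x, y) = x*y: F(z) = (dL/dx, -dL/dy) = (y, -x).
    Monotone, so the predict-then-correct scheme applies."""
    return np.array([z[1], -z[0]])

def P(z):
    """Proximal (projection) step onto the strategy box [-1, 1]^2."""
    return np.clip(z, -1.0, 1.0)

z = np.array([1.0, 1.0])
lam = 0.1
for _ in range(2000):
    z_pred = P(z - lam * F(z))      # first proximal step: predict
    z = P(z - lam * F(z_pred))      # second proximal step: correct

print(z)  # tends to (0, 0), the equilibrium of the bilinear game
```

A single projected step per iteration (plain gradient descent-ascent) cycles around the equilibrium on this problem; the correction step using the predicted point is what makes the iteration contract.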

Journal ArticleDOI
TL;DR: A simple modification of iterative methods arising in numerical mathematics and optimization that makes them strongly convergent without additional assumptions is presented.
Abstract: We consider a wide class of iterative methods arising in numerical mathematics and optimization that are known to converge only weakly. Exploiting an idea originally proposed by Haugazeau, we present a simple modification of these methods that makes them strongly convergent without additional assumptions. Several applications are discussed.

428 citations