Journal ArticleDOI

A new method for solving split variational inequality problems without co-coerciveness

01 Dec 2020-Journal of Fixed Point Theory and Applications (Springer International Publishing)-Vol. 22, Iss: 4, pp 1-23
TL;DR: In this article, a method for solving the split variational inequality problem in two real Hilbert spaces is proposed that dispenses with the co-coerciveness assumption on the operators, and the method is shown to converge strongly to a solution.
Abstract: In solving the split variational inequality problems in real Hilbert spaces, the co-coercive assumption of the underlying operators is usually required and this may limit its usefulness in many applications. To have these operators freed from the usual and restrictive co-coercive assumption, we propose a method for solving the split variational inequality problem in two real Hilbert spaces without the co-coerciveness assumption on the operators. We prove that the proposed method converges strongly to a solution of the problem and give some numerical illustrations of it in comparison with other methods in the literature to support our strong convergence result.
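For context, the split variational inequality problem referred to above is usually stated as follows (the standard formulation in the sense of Censor, Gibali and Reich; the symbols below are generic and not necessarily the paper's own notation): given real Hilbert spaces H_1 and H_2, nonempty closed convex sets C ⊆ H_1 and Q ⊆ H_2, operators f: H_1 → H_1 and g: H_2 → H_2, and a bounded linear operator A: H_1 → H_2,

\[
\text{find } x^* \in C \text{ such that } \langle f(x^*),\, x - x^* \rangle \ge 0 \ \ \forall x \in C,
\quad \text{and } y^* = A x^* \in Q \text{ satisfies } \langle g(y^*),\, y - y^* \rangle \ge 0 \ \ \forall y \in Q.
\]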
Citations
Journal ArticleDOI
TL;DR: This paper presents two new methods with inertial steps for solving the split variational inequality problems in real Hilbert spaces without any product space formulation and proves that the sequences generated by these methods converge strongly to a minimum-norm solution of the problem when the operators are pseudomonotone and Lipschitz continuous.
Abstract: In solving the split variational inequality problems, very few methods have been considered in the literature and most of these few methods require the underlying operators to be co-coercive. This restrictive co-coercive assumption has been dispensed with in some methods, many of which require a product space formulation of the problem. However, it has been discovered that this product space formulation may cause some potential difficulties during implementation and its approach may not fully exploit the attractive splitting structure of the split variational inequality problem. In this paper, we present two new methods with inertial steps for solving the split variational inequality problems in real Hilbert spaces without any product space formulation. We prove that the sequence generated by these methods converges strongly to a minimum-norm solution of the problem when the operators are pseudomonotone and Lipschitz continuous. Also, we provide several numerical experiments of the proposed methods in comparison with other related methods in the literature.
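The inertial steps mentioned in this abstract typically refer to the standard extrapolation below (generic notation, not necessarily the notation used in the paper), in which the extrapolated point w_n replaces the current iterate in the subsequent projection or extragradient step:

\[
w_n = x_n + \theta_n\,(x_n - x_{n-1}), \qquad 0 \le \theta_n < 1.
\]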

49 citations

Journal ArticleDOI
TL;DR: In this paper, an iterative scheme is proposed which combines the inertial subgradient extragradient method with the viscosity technique and a self-adaptive stepsize.
Abstract: In this paper, we study a classical monotone and Lipschitz continuous variational inequality and fixed point problems defined on a level set of a convex function in the framework of Hilbert spaces. First, we introduce a new iterative scheme which combines the inertial subgradient extragradient method with the viscosity technique and a self-adaptive stepsize. Unlike in many existing subgradient extragradient techniques in the literature, the two projections of our proposed algorithm are made onto some half-spaces. Furthermore, we prove a strong convergence theorem for approximating a common solution of the variational inequality and fixed point of an infinite family of nonexpansive mappings under some mild conditions. The main advantages of our method are: the self-adaptive stepsize, which avoids the need to know a priori the Lipschitz constant of the associated monotone operator; the two projections made onto some half-spaces; the strong convergence; and the inertial technique employed, which accelerates the convergence rate of the algorithm. Second, we apply our theorem to solve generalised mixed equilibrium, zero point and convex minimization problems. Finally, we present some numerical examples to demonstrate the efficiency of our algorithm in comparison with other existing methods in the literature. Our results improve and extend several existing works in the current literature in this direction.
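As a rough illustration of two of the ingredients named above, the sketch below implements the plain subgradient extragradient step together with a self-adaptive stepsize rule for a monotone, Lipschitz continuous operator; the names F, proj_C, mu and tau0 are illustrative assumptions, and the viscosity, inertial and fixed-point parts of the paper's actual scheme are deliberately omitted.

```python
import numpy as np

def subgradient_extragradient(F, proj_C, x0, mu=0.5, tau0=1.0,
                              max_iter=1000, tol=1e-8):
    """Minimal sketch: subgradient extragradient step with adaptive stepsize.

    F      : monotone, Lipschitz continuous operator on R^n (illustrative)
    proj_C : Euclidean projection onto the closed convex set C (illustrative)
    The second projection is onto a half-space T_n containing C, so it has a
    closed form and no second projection onto C is needed.
    """
    x, tau = np.asarray(x0, dtype=float), tau0
    for _ in range(max_iter):
        Fx = F(x)
        y = proj_C(x - tau * Fx)
        if np.linalg.norm(x - y) < tol:
            return x                      # x approximately solves the VI
        Fy = F(y)
        # Half-space T_n = {w : <(x - tau*Fx) - y, w - y> <= 0} contains C
        a = x - tau * Fx - y
        z = x - tau * Fy
        # Self-adaptive stepsize: no prior Lipschitz constant required
        diff = np.linalg.norm(Fx - Fy)
        if diff > 0.0:
            tau = min(tau, mu * np.linalg.norm(x - y) / diff)
        # Projection of z onto the half-space T_n (closed form)
        viol = float(np.dot(a, z - y))
        denom_a = float(np.dot(a, a))
        x = z - max(viol, 0.0) / denom_a * a if denom_a > 0.0 else z
    return x
```

For a box constraint C = [l, u]^n, for example, proj_C can simply be lambda v: np.clip(v, l, u).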

42 citations

Journal ArticleDOI
TL;DR: In this article, a modified self-adaptive inertial subgradient extragradient algorithm is presented in which the two projections are made onto some half-spaces, and, under mild conditions, the sequence generated by the proposed algorithm converges strongly to a common solution of a variational inequality problem and a common fixed point of a finite family of demicontractive mappings in a real Hilbert space.
Abstract: In this paper, we present a new modified self-adaptive inertial subgradient extragradient algorithm in which the two projections are made onto some half-spaces. Moreover, under mild conditions, we obtain strong convergence of the sequence generated by our proposed algorithm for approximating a common solution of a variational inequality problem and a common fixed point of a finite family of demicontractive mappings in a real Hilbert space. The main advantages of our algorithm are: the strong convergence result obtained without prior knowledge of the Lipschitz constant of the related monotone operator, the two projections made onto some half-spaces, and the inertial technique which speeds up the rate of convergence. Finally, we present an application and a numerical example to illustrate the usefulness and applicability of our algorithm.
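For reference, the demicontractive mappings appearing here are usually defined as follows (standard definition, not quoted from the paper): a mapping T on a real Hilbert space H with Fix(T) ≠ ∅ is k-demicontractive for some k ∈ [0, 1) if

\[
\|Tx - p\|^2 \le \|x - p\|^2 + k\,\|x - Tx\|^2 \qquad \text{for all } x \in H \text{ and } p \in \operatorname{Fix}(T).
\]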

34 citations


Cites background from "A new method for solving split vari..."

  • ...The Problem (1) is a fundamental problem which has a wide range of applications in applied fields of mathematics such as network equilibrium problems, complementarity problems, optimization theory and systems of nonlinear equations (see [5, 13, 19, 24, 27, 28, 40, 48])....


Journal ArticleDOI
TL;DR: In this article, an inertial-type shrinking projection algorithm is proposed for solving two-set split common fixed point problems, a strong convergence theorem is proved, and the result is applied to split monotone inclusion problems.
Abstract: In this paper, motivated by the works of Kohsaka and Takahashi (SIAM J Optim 19:824–835, 2008) and Aoyama et al. (J Nonlinear Convex Anal 10:131–147, 2009) on the class of mappings of firmly nonexpansive type, we explore some properties of firmly nonexpansive-like mappings [or mappings of type (P)] in p-uniformly convex and uniformly smooth Banach spaces. We then study the split common fixed point problems for mappings of type (P) and Bregman weak relatively nonexpansive mappings in p-uniformly convex and uniformly smooth Banach spaces. We propose an inertial-type shrinking projection algorithm for solving the two-set split common fixed point problems and prove a strong convergence theorem. Also, we apply our result to the split monotone inclusion problems and illustrate the behaviour of our algorithm with several numerical examples. The implementation of the algorithm does not require prior knowledge of the operator norm. Our results complement many recent results in the literature in this direction. To the best of our knowledge, this seems to be the first work to use the inertial technique to solve the split common fixed point problems outside Hilbert spaces.
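In its basic two-set form (stated here in generic notation rather than in the Banach-space setting of the paper), the split common fixed point problem reads: given mappings T and S with nonempty fixed point sets and a bounded linear operator A between the underlying spaces,

\[
\text{find } x^* \in \operatorname{Fix}(T) \ \text{ such that } \ A x^* \in \operatorname{Fix}(S).
\]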

22 citations

References
Book
26 Apr 2011
TL;DR: This book provides a largely self-contained account of the main results of convex analysis and optimization in Hilbert space, and a concise exposition of related constructive fixed point theory that allows for a wide range of algorithms to construct solutions to problems in optimization, equilibrium theory, monotone inclusions, variational inequalities, and convex feasibility.
Abstract: This book provides a largely self-contained account of the main results of convex analysis and optimization in Hilbert space. A concise exposition of related constructive fixed point theory is presented that allows for a wide range of algorithms to construct solutions to problems in optimization, equilibrium theory, monotone inclusions, variational inequalities, best approximation theory, and convex feasibility. The book is accessible to a broad audience, and reaches out in particular to applied scientists and engineers, to whom these tools have become indispensable.

3,905 citations

Journal ArticleDOI
TL;DR: This article describes the Krasnoselskii–Mann (KM) approach to finding fixed points of nonexpansive operators on a Hilbert space and shows that a wide variety of iterative procedures used in signal processing, image reconstruction and elsewhere are special cases of the KM iterative procedure.
Abstract: Let T be a (possibly nonlinear) continuous operator on a Hilbert space H. If, for some starting vector x, the orbit sequence {T^k x, k = 0, 1, ...} converges, then the limit z is a fixed point of T; that is, Tz = z. An operator N on a Hilbert space H is nonexpansive (ne) if, for each x and y in H, ||Nx − Ny|| ≤ ||x − y||. Even when N has fixed points, the orbit sequence {N^k x} need not converge; consider the example N = −I, where I denotes the identity operator. However, for any α ∈ (0, 1) the iterative procedure defined by x^{k+1} = (1 − α)x^k + αNx^k converges (weakly) to a fixed point of N whenever such points exist. This is the Krasnoselskii–Mann (KM) approach to finding fixed points of ne operators. A wide variety of iterative procedures used in signal processing and image reconstruction and elsewhere are special cases of the KM iterative procedure, for particular choices of the ne operator N. These include the Gerchberg–Papoulis method for bandlimited extrapolation, the SART algorithm of Andersen and Kak, the Landweber and projected Landweber algorithms, simultaneous and sequential methods for solving the convex feasibility problem, the ART and Cimmino methods for solving linear systems of equations, the CQ algorithm for solving the split feasibility problem and Dolidze's procedure for the variational inequality problem for monotone operators.
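The KM iteration described in this abstract is straightforward to sketch in code; the following is a minimal illustration with a fixed relaxation parameter alpha (an assumption made for the example, since the general theory also allows variable parameters), using the abstract's own example N = −I.

```python
import numpy as np

def krasnoselskii_mann(N, x0, alpha=0.5, max_iter=1000, tol=1e-10):
    """KM iteration x_{k+1} = (1 - alpha) x_k + alpha * N(x_k) for a
    nonexpansive map N, with a fixed relaxation alpha in (0, 1)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        x_next = (1.0 - alpha) * x + alpha * N(x)
        if np.linalg.norm(x_next - x) < tol:
            return x_next
        x = x_next
    return x

# N = -I has the unique fixed point 0: the plain orbit N^k x oscillates,
# while the averaged KM iterates converge (here in one step for alpha = 0.5).
x_star = krasnoselskii_mann(lambda v: -v, np.array([1.0, -2.0]))
```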

1,100 citations

Journal ArticleDOI
TL;DR: Using an extension of Pierra's product space formalism, it is shown here that a multiprojection algorithm converges and is fully simultaneous, i.e., it uses in each iterative step all sets of the convex feasibility problem.
Abstract: Generalized distances give rise to generalized projections into convex sets. An important question is whether or not one can use within the same projection algorithm different types of such generalized projections. This question has practical consequences in the area of signal detection and image recovery in situations that can be formulated mathematically as a convex feasibility problem. Using an extension of Pierra's product space formalism, we show here that a multiprojection algorithm converges. Our algorithm is fully simultaneous, i.e., it uses in each iterative step all sets of the convex feasibility problem. Different multiprojection algorithms can be derived from our algorithmic scheme by a judicious choice of the Bregman functions which govern the process. As a by-product of our investigation we also obtain block-iterative schemes for certain kinds of linearly constrained optimization problems.
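The product space formalism referred to above is, in its basic (Pierra) form, the following reformulation; this is standard background rather than the Bregman-distance extension developed in the paper. To find a point in C = C_1 ∩ ⋯ ∩ C_m ⊆ H, one works in the product space H^m with

\[
\mathbf{C} = C_1 \times \cdots \times C_m, \qquad D = \{(x, x, \ldots, x) : x \in H\},
\]

so that x solves the convex feasibility problem if and only if (x, ..., x) ∈ \mathbf{C} ∩ D, and the projection onto \mathbf{C} decomposes into simultaneous projections onto the individual sets C_i.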

1,085 citations

Journal ArticleDOI
TL;DR: In this article, an iterative method for solving the split feasibility problem (SFP), called the CQ algorithm, is proposed; it involves only the orthogonal projections onto C and Q, which are assumed to be easily calculated, and requires no matrix inverses.
Abstract: Let C and Q be nonempty closed convex sets in R^N and R^M, respectively, and A an M by N real matrix. The split feasibility problem (SFP) is to find x ∈ C with Ax ∈ Q, if such x exist. An iterative method for solving the SFP, called the CQ algorithm, has the following iterative step: x^{k+1} = P_C(x^k + γ A^T(P_Q − I)Ax^k), where γ ∈ (0, 2/L) with L the largest eigenvalue of the matrix A^T A, and P_C and P_Q denote the orthogonal projections onto C and Q, respectively; that is, P_C x minimizes ||c − x|| over all c ∈ C. The CQ algorithm converges to a solution of the SFP, or, more generally, to a minimizer of ||P_Q Ac − Ac|| over c in C, whenever such exist. The CQ algorithm involves only the orthogonal projections onto C and Q, which we shall assume are easily calculated, and involves no matrix inverses. If A is normalized so that each row has length one, then L does not exceed the maximum number of nonzero entries in any column of A, which provides a helpful estimate of L for sparse matrices. Particular cases of the CQ algorithm are the Landweber and projected Landweber methods for obtaining exact or approximate solutions of the linear equations Ax = b; the algebraic reconstruction technique of Gordon, Bender and Herman is a particular case of a block-iterative version of the CQ algorithm. One application of the CQ algorithm that is the subject of ongoing work is dynamic emission tomographic image reconstruction, in which the vector x is the concatenation of several images corresponding to successive discrete times. The matrix A and the set Q can then be selected to impose constraints on the behaviour over time of the intensities at fixed voxels, as well as to require consistency (or near consistency) with measured data.
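Since the abstract states the CQ iteration explicitly, a minimal sketch is easy to give; the helper names proj_C and proj_Q below are illustrative assumptions (any cheaply computable projections will do), and the default step gamma = 1/L lies in the prescribed interval (0, 2/L).

```python
import numpy as np

def cq_algorithm(A, proj_C, proj_Q, x0, gamma=None, max_iter=500, tol=1e-8):
    """CQ iteration x^{k+1} = P_C(x^k + gamma * A^T (P_Q - I) A x^k).

    proj_C, proj_Q : Euclidean projections onto C and Q (assumed cheap)
    gamma          : step in (0, 2/L), L the largest eigenvalue of A^T A
    """
    A = np.asarray(A, dtype=float)
    if gamma is None:
        L = np.linalg.norm(A, 2) ** 2    # ||A||_2^2 = largest eigenvalue of A^T A
        gamma = 1.0 / L
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        Ax = A @ x
        x_next = proj_C(x + gamma * (A.T @ (proj_Q(Ax) - Ax)))
        if np.linalg.norm(x_next - x) < tol:
            return x_next
        x = x_next
    return x

# Toy usage: C and Q are boxes, so both projections are coordinatewise clips.
A = np.array([[1.0, 2.0], [0.0, 1.0]])
x_sfp = cq_algorithm(A,
                     proj_C=lambda v: np.clip(v, -1.0, 1.0),
                     proj_Q=lambda v: np.clip(v, 0.0, 2.0),
                     x0=np.array([0.5, -0.5]))
```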

884 citations