Journal Article•DOI•

Error bounds for monotone linear complementarity problems

01 Oct 1986-Mathematical Programming (Springer-Verlag)-Vol. 36, Iss: 1, pp 81-89
TL;DR: A bound on the distance between an arbitrary point and the solution set of a monotone linear complementarity problem is given in terms of a condition constant that depends on the problem data only and a residual function of the violations of the complementarity problem conditions by the point considered.
Abstract: We give a bound on the distance between an arbitrary point and the solution set of a monotone linear complementarity problem in terms of a condition constant that depends on the problem data only and a residual function of the violations of the complementarity problem conditions by the point considered. When the point satisfies the linear inequalities of the complementarity problem, the residual consists of the complementarity condition plus its square root. This latter term is essential, and without it the error bound cannot hold. We also show that another natural residual that has been employed to bound errors for strictly monotone linear complementarity problems fails to bound errors for the monotone case considered here.
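For concreteness, the following minimal sketch evaluates a residual of the kind the abstract describes, assuming the standard LCP formulation x >= 0, Mx + q >= 0, x'(Mx + q) = 0; the choice of norms, the example data, and the omission of the multiplicative condition constant are illustrative assumptions, not the paper's exact statement.

    import numpy as np

    def lcp_residual(M, q, x):
        # Residual of the kind the abstract describes for the LCP
        # x >= 0, Mx + q >= 0, x'(Mx + q) = 0 (norms chosen for illustration).
        w = M @ x + q
        infeas = np.linalg.norm(np.minimum(x, 0.0)) + np.linalg.norm(np.minimum(w, 0.0))
        comp = abs(x @ w)                     # complementarity violation
        return infeas + comp + np.sqrt(comp)  # the square-root term is the essential extra piece

    # Monotone example: M is positive semidefinite and x_star solves the LCP.
    M = np.array([[2.0, 1.0], [1.0, 2.0]])
    q = np.array([-1.0, -1.0])
    x_star = np.linalg.solve(M, -q)           # Mx* + q = 0 and x* >= 0, so x* is a solution
    print(lcp_residual(M, q, x_star))         # ~0 at a solution
    print(lcp_residual(M, q, x_star + 0.1))   # positive away from the solution set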

Citations
More filters
Journal Article•DOI•
Jong-Shi Pang1•
TL;DR: This paper gives a comprehensive, state-of-the-art survey of the extensive theory and rich applications of error bounds for inequality and optimization systems and solution sets of equilibrium problems.
Abstract: Originating from the practical implementation and numerical considerations of iterative methods for solving mathematical programs, the study of error bounds has grown and proliferated in many interesting areas within mathematical programming. This paper gives a comprehensive, state-of-the-art survey of the extensive theory and rich applications of error bounds for inequality and optimization systems and solution sets of equilibrium problems.

514 citations

Journal Article•DOI•
TL;DR: A general approach to analyzing the convergence and the rate of convergence of feasible descent methods that does not require any nondegeneracy assumption on the problem is surveyed and extended.
Abstract: We survey and extend a general approach to analyzing the convergence and the rate of convergence of feasible descent methods that does not require any nondegeneracy assumption on the problem. This approach is based on a certain error bound for estimating the distance to the solution set and is applicable to a broad class of methods.

477 citations


Cites background from "Error bounds for monotone linear co..."

  • ...Error bounds have been studied extensively but the focus has been on global bounds (i.e., bounds that hold everywhere) and on using the bounds to terminate iterative algorithms and to extract sensitivity/stability information near the solution set (see [20, 41, 44, 45, 56, 58])....

  • ...(See Theorem 2.1 for a summary of these results.) In addition, this error bound can be extended to certain linear complementarity problems and variational inequality problems (see [31, 43, 45, 49, 56])....

Book Chapter•DOI•
01 Jan 1998
TL;DR: In this article, the existence of a global error bound for convex inequality systems was studied using convex analysis, and a necessary and sufficient condition was established for a closed convex set defined by a closed proper convex function to possess a global error bound in terms of a natural residual.
Abstract: Using convex analysis, this paper gives a systematic and unified treatment of the existence of a global error bound for a convex inequality system. We establish a necessary and sufficient condition for a closed convex set defined by a closed proper convex function to possess a global error bound in terms of a natural residual. We derive many special cases of the main characterization, including the case where a Slater assumption is in place. Our results show clearly the essential conditions needed for convex inequality systems to satisfy global error bounds; they unify and extend a large number of existing results on global error bounds for such systems.
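As an illustration of the kind of statement being characterized, the sketch below checks a global error bound in terms of the natural residual for one concrete convex inequality, the unit ball f(x) = ||x||^2 - 1 <= 0; the function and the constant tau = 1/2 are illustrative assumptions, not taken from the paper.

    import numpy as np

    # Convex system C = {x : f(x) <= 0} with f(x) = ||x||^2 - 1 (the unit ball).
    # Here dist(x, C) = max(||x|| - 1, 0), and the global error bound
    # dist(x, C) <= tau * max(f(x), 0) holds with tau = 1/2, since
    # ||x||^2 - 1 = (||x|| - 1)(||x|| + 1) >= 2(||x|| - 1) whenever ||x|| >= 1.

    def natural_residual(x):
        return max(x @ x - 1.0, 0.0)

    def dist_to_C(x):
        return max(np.linalg.norm(x) - 1.0, 0.0)

    rng = np.random.default_rng(0)
    for _ in range(5):
        x = 3.0 * rng.normal(size=3)
        print(dist_to_C(x) <= 0.5 * natural_residual(x) + 1e-12)   # True for every sample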

220 citations


Cites background from "Error bounds for monotone linear co..."

  • ...The latter systems include polynomial systems [26], analytic systems [27] and their generalizations to "subanalytic sets" [29], convex quadratic inequalities without the Slater assumption [36], convex piecewise quadratic systems [24], and the solution system of a monotone linear complementarity problem [32]....

Journal Article•DOI•
Zhi-Quan Luo1, Paul Tseng1•
TL;DR: The linear convergence of both the gradient projection algorithm of Goldstein and of Levitin and Polyak, and a matrix splitting algorithm using regular splitting, is established; the analysis does not require that the cost function be strongly convex or that the optimal solution set be bounded.
Abstract: Consider the problem of minimizing, over a polyhedral set, the composition of an affine mapping with a strictly convex essentially smooth function. A general result on the linear convergence of descent methods for solving this problem is presented. By applying this result, the linear convergence of both the gradient projection algorithm of Goldstein and Levitin and Polyak, and a matrix splitting algorithm using regular splitting, is established. The results do not require that the cost function be strongly convex or that the optimal solution set be bounded. The key to the analysis lies in a new error bound for estimating the distance from a feasible point to the optimal solution set.
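A minimal sketch of a gradient projection iteration on a simple instance of this problem class follows: a strictly convex quadratic composed with an affine map, minimized over a box, which is a particularly simple polyhedral set. The data, step size, and stopping rule are illustrative assumptions, not the paper's.

    import numpy as np

    # min f(x) = 0.5*||E x - b||^2 + c.x over the box 0 <= x <= 1
    # (the projection onto a box is just a componentwise clip).
    rng = np.random.default_rng(1)
    E = rng.normal(size=(8, 5))
    b = rng.normal(size=8)
    c = 0.1 * rng.normal(size=5)

    def grad(x):
        return E.T @ (E @ x - b) + c

    step = 1.0 / np.linalg.norm(E, 2) ** 2    # 1/L, with L the Lipschitz constant of the gradient
    x = np.zeros(5)
    for k in range(1000):
        x_new = np.clip(x - step * grad(x), 0.0, 1.0)   # gradient step, then project onto the box
        if np.linalg.norm(x_new - x) < 1e-10:           # fixed point of the projected-gradient map
            break
        x = x_new
    print(k, x)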

220 citations

Journal Article•DOI•
TL;DR: Preliminary numerical tests on two small nonmonotone problems from the published literature converged to degenerate or nondegenerate solutions from all attempted starting points in 7 to 28 steps of a BFGS quasi-Newton method for unconstrained optimization.
Abstract: The nonlinear complementarity problem is cast as an unconstrained minimization problem that is obtained from an augmented Lagrangian formulation. The dimensionality of the unconstrained problem is the same as that of the original problem, and the penalty parameter need only be greater than one. Another feature of the unconstrained problem is that it has global minima of zero at precisely all the solution points of the complementarity problem without any monotonicity assumption. If the mapping of the complementarity problem is differentiable, then so is the objective of the unconstrained problem, and its gradient vanishes at all solution points of the complementarity problem. Under assumptions of nondegeneracy and linear independence of gradients of active constraints at a complementarity problem solution, the corresponding global unconstrained minimum point is locally unique. A Wolfe dual to a standard constrained optimization problem associated with the nonlinear complementarity problem is also formulated under a monotonicity and differentiability assumption. Most of the standard duality results are established even though the underlying constrained optimization problem may be nonconvex. Preliminary numerical tests on two small nonmonotone problems from the published literature converged to degenerate or nondegenerate solutions from all attempted starting points in 7 to 28 steps of a BFGS quasi-Newton method for unconstrained optimization.
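The abstract does not reproduce the formulation itself; the sketch below uses the implicit-Lagrangian merit function commonly associated with this line of work, with penalty parameter alpha > 1 and an illustrative affine mapping F, so details may differ from the paper's exact construction.

    import numpy as np
    from scipy.optimize import minimize

    def F(x):
        # Illustrative NCP mapping F(x) = Mx + q; x* = (1, 0) solves x >= 0, F(x) >= 0, x.F(x) = 0.
        M = np.array([[3.0, 1.0], [1.0, 2.0]])
        q = np.array([-3.0, 1.0])
        return M @ x + q

    def implicit_lagrangian(x, alpha=3.0):
        # One common form of the implicit Lagrangian merit function (alpha > 1);
        # it is nonnegative and vanishes exactly at solutions of the complementarity problem.
        f = F(x)
        plus = lambda v: np.maximum(v, 0.0)
        return (x @ f
                + (np.sum(plus(x - alpha * f) ** 2) - x @ x
                   + np.sum(plus(f - alpha * x) ** 2) - f @ f) / (2.0 * alpha))

    res = minimize(implicit_lagrangian, x0=np.ones(2), method="BFGS")
    print(res.x, implicit_lagrangian(res.x))   # merit value ~0 at the NCP solution
    print(np.minimum(res.x, F(res.x)))         # natural residual min(x, F(x)) ~ 0 as well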

191 citations


Cites background from "Error bounds for monotone linear co..."

  • ...It would be interesting to investigate the computational potential of this dual problem, as well as the potential of both the implicit Lagrangian and the dual problem in generating residual bounds for the nonlinear complementarity problem in the spirit of [23] [24] [22] [17] [18]....

References
More filters
Book•
01 Jan 1978
TL;DR: This report contains a description of the typical topics covered in a two-semester sequence in Numerical Analysis and discusses the accuracy, efficiency and robustness of the algorithms involved.
Abstract: Introduction. Mathematical approximations have been used since ancient times to estimate solutions, but with the rise of digital computing the field of numerical analysis has become a discipline in its own right. Numerical analysts develop and study algorithms that provide approximate solutions to various types of numerical problems, and they analyze the accuracy, efficiency and robustness of these algorithms. As technology becomes ever more essential for the study of mathematics, learning algorithms that provide approximate solutions to mathematical problems and understanding the accuracy of such approximations becomes increasingly important. This report contains a description of the typical topics covered in a two-semester sequence in Numerical Analysis.

7,315 citations

Book•
01 Jan 1964

1,573 citations


"Error bounds for monotone linear co..." refers background in this paper

  • ...(by Lemmas 2.6, 2.5 and monotonicity of the 2-norm [3])....

  • ...For a norm ‖x‖_α on R^n, ‖x‖_α^* will denote the dual norm [3, 7] on R^n, that is ‖x‖_α^* := max_{‖y‖_α = 1} xy, where xy denotes the scalar product Σ_{i=1}^n x_i y_i....

Journal Article•DOI•
TL;DR: Problems of finding w and z satisfying w = q + Mz, w ≥ 0, z ≥ 0, zw = 0 play a fundamental role in mathematical programming.

753 citations


"Error bounds for monotone linear co..." refers background in this paper

  • ...Proof. By [2], S̄ ≠ ∅ since S ≠ ∅. Let x̄ ∈ S̄. Then by Proposition 2.4 above, for each x in R^n there exists an x̄(x) in S̄ that is independent of the choice of x̄ such that...

  • ...Consider the monotone linear complementarity problem [2] of finding an x in the n-dimensional real space R^n such that...

  • ...It is well known [2] that the solution set S̄ is nonempty if and only if the feasible set S is nonempty, provided that M is positive semidefinite....

Journal Article•DOI•
TL;DR: In this paper, it was shown that solutions of linear inequalities, linear programs and certain linear complementarity problems are Lipschitz continuous with respect to changes in the right-hand side data of the problem.
Abstract: It is shown that solutions of linear inequalities, linear programs and certain linear complementarity problems (e.g. those with P-matrices or Z-matrices but not semidefinite matrices) are Lipschitz continuous with respect to changes in the right-hand side data of the problem. Solutions of linear programs are not Lipschitz continuous with respect to the coefficients of the objective function. The Lipschitz constant given here is a generalization of the role played by the norm of the inverse of a nonsingular matrix in bounding the perturbation of the solution of a system of equations in terms of a right-hand side perturbation.
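The analogy the abstract draws with the norm of the inverse of a nonsingular matrix can be checked in a few lines; the matrix and the perturbation below are arbitrary illustrative data.

    import numpy as np

    # For a nonsingular system A x = b, perturbing the right-hand side changes
    # the solution by at most ||A^{-1}|| times the size of the perturbation.
    A = np.array([[4.0, 1.0], [2.0, 3.0]])
    b = np.array([1.0, 2.0])
    db = np.array([0.05, -0.02])

    x = np.linalg.solve(A, b)
    x_pert = np.linalg.solve(A, b + db)

    lhs = np.linalg.norm(x_pert - x)
    rhs = np.linalg.norm(np.linalg.inv(A), 2) * np.linalg.norm(db)
    print(lhs, rhs, lhs <= rhs + 1e-15)   # ||x' - x|| <= ||A^{-1}|| * ||b' - b|| holds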

200 citations


"Error bounds for monotone linear co..." refers methods in this paper

  • ...By using the polyhedral characterization (2.4) and the condition number result for linear inequalities and equalities of either [4] or [6], we are able to obtain a preliminary bound on the distance between any point in R^n and the solution set of (M, q)....
