Journal ArticleDOI

On M-functions and their application to nonlinear Gauss-Seidel iterations and to network flows☆

01 Nov 1970-Journal of Mathematical Analysis and Applications (Academic Press)-Vol. 32, Iss: 2, pp 274-307
TL;DR: The convergence of the Gauss-Seidel and Jacobi iterations arising from nonlinear elliptic boundary value problems has been studied by various authors; this paper develops the theory of M-functions as a tool for such convergence results and applies it to nonlinear network flow problems.
About: This article was published in the Journal of Mathematical Analysis and Applications on 1970-11-01 and is currently open access. It has received 113 citations to date. The article focuses on the topics Gauss–Seidel method and positive-definite matrices.
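The paper's central algorithm is the nonlinear Gauss-Seidel iteration for a system F(x) = 0: each sweep solves the i-th equation for the i-th unknown while the remaining components are held at their latest values. The Python sketch below is a minimal illustration of that iteration, not code from the paper; the test system F, the root bracket, and the solver choice (SciPy's brentq) are assumptions made for the example.

```python
import numpy as np
from scipy.optimize import brentq

def nonlinear_gauss_seidel(F, x0, n_sweeps=50, bracket=(-1e3, 1e3), tol=1e-10):
    """Each sweep solves the i-th equation of F(x) = 0 for x_i, with the other
    components frozen at their most recent values, then moves on to the next i."""
    x = np.array(x0, dtype=float)
    for _ in range(n_sweeps):
        x_old = x.copy()
        for i in range(x.size):
            def g(xi, i=i):
                y = x.copy()
                y[i] = xi
                return F(y)[i]          # scalar residual in the single unknown x_i
            x[i] = brentq(g, *bracket)  # assumes the root lies inside the bracket
        if np.linalg.norm(x - x_old, np.inf) < tol:
            break
    return x

# Hypothetical test system with nonpositive off-diagonal coupling (M-function-like).
def F(x):
    return np.array([3.0 * x[0] + x[0] ** 3 - x[1] - 1.0,
                     -x[0] + 3.0 * x[1] + x[1] ** 3 - 2.0])

print(nonlinear_gauss_seidel(F, x0=[0.0, 0.0]))
```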
Citations
Journal ArticleDOI
TL;DR: This paper studies both the local and global convergence of various iterative methods for solving variational inequality and nonlinear complementarity problems; several convergence results are also obtained for some nonlinear approximation methods.
Abstract: In this paper, we study both the local and global convergence of various iterative methods for solving the variational inequality and the nonlinear complementarity problems. Included among such methods are the Newton and several successive overrelaxation algorithms. For the most part, the study is concerned with the family of linear approximation methods. These are iterative methods in which a sequence of vectors is generated by solving certain linearized subproblems. Convergence to a solution of the given variational or complementarity problem is established by using three different yet related approaches. The paper also studies a special class of variational inequality problems arising from such applications as computing traffic and economic spatial equilibria. Finally, several convergence results are obtained for some nonlinear approximation methods.

310 citations
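As a concrete illustration of the linear-approximation family described above, the sketch below applies the simplest such method, a projection method, to a nonlinear complementarity problem: the map F is replaced at each step by the linear model (1/ω)I, so each subproblem reduces to a projection onto the nonnegative orthant. The test data M, q and the step size ω are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def ncp_projection_method(F, x0, omega=0.2, max_iter=500, tol=1e-8):
    """Basic projection method for the nonlinear complementarity problem
    x >= 0,  F(x) >= 0,  x^T F(x) = 0: each step uses the linear model
    (1/omega) * I in place of F, which reduces to a projected step."""
    x = np.array(x0, dtype=float)
    for _ in range(max_iter):
        x_new = np.maximum(0.0, x - omega * F(x))   # projection onto x >= 0
        if np.linalg.norm(x_new - x, np.inf) < tol:
            return x_new
        x = x_new
    return x

# Hypothetical test problem: F(x) = M x + q with a positive definite M.
M = np.array([[4.0, -1.0], [-1.0, 3.0]])
q = np.array([-2.0, 1.0])
x_star = ncp_projection_method(lambda x: M @ x + q, x0=np.zeros(2))
print(x_star, M @ x_star + q)   # complementarity should hold approximately
```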


Cites background from "On M-functions and their applicatio..."

  • ...Let $f : \mathbb{R}^n \to \mathbb{R}^n$. Then $f$ is called a Z-function [43] if for each $x \in \mathbb{R}^n$, the scalar-valued function...

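The quoted definition is truncated above. For continuously differentiable maps, the Z-property is equivalent to all off-diagonal Jacobian entries being nonpositive; the sketch below is a finite-difference spot check of that characterization on a hypothetical sample map (the same illustrative system used earlier on this page).

```python
import numpy as np

def offdiag_jacobian_nonpositive(F, points, h=1e-6):
    """Sampled check of the differentiable characterization of a Z-function:
    every off-diagonal Jacobian entry dF_i/dx_j (i != j) is <= 0.
    This is only a numerical spot check, not a proof."""
    for x in points:
        x = np.asarray(x, dtype=float)
        n = x.size
        for j in range(n):
            e = np.zeros(n); e[j] = h
            col = (F(x + e) - F(x - e)) / (2 * h)   # j-th Jacobian column
            for i in range(n):
                if i != j and col[i] > 1e-8:
                    return False
    return True

F = lambda x: np.array([3 * x[0] + x[0] ** 3 - x[1] - 1,
                        -x[0] + 3 * x[1] + x[1] ** 3 - 2])
print(offdiag_jacobian_nonpositive(F, points=np.random.uniform(-2, 2, size=(20, 2))))
```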

Journal ArticleDOI
TL;DR: A constructive proof of the Brouwer fixed-point theorem is given, leading to an algorithm for finding the fixed point; some properties of the algorithm and some numerical results are also presented.
Abstract: A constructive proof of the Brouwer fixed-point theorem is given, which leads to an algorithm for finding the fixed point. Some properties of the algorithm and some numerical results are also presented.

197 citations
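The algorithm described in this abstract is not reproduced here. As a much simpler illustration of the constructive idea, the one-dimensional case of Brouwer's theorem reduces to the intermediate value theorem, and bisection on g(x) = f(x) - x locates a fixed point of a continuous self-map of an interval. The example map (cosine on [0, 1]) is chosen only for illustration.

```python
import math

def fixed_point_1d(f, a=0.0, b=1.0, tol=1e-12):
    """Constructively locate a fixed point of a continuous map f: [a, b] -> [a, b]
    by bisection on g(x) = f(x) - x (the one-dimensional case of Brouwer)."""
    g = lambda x: f(x) - x
    lo, hi = a, b
    # g(a) >= 0 and g(b) <= 0 because f maps [a, b] into itself.
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(mid) >= 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(fixed_point_1d(math.cos))   # the fixed point of cos, approximately 0.739085
```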

Journal ArticleDOI
TL;DR: In this paper, the authors give a number of equivalent conditions which reproduce, and in some cases strengthen, many consequences of recent generalizations of the property of diagonally dominant n x n complex matrices.

158 citations

Journal ArticleDOI
TL;DR: This work proves O(1/k) convergence rates for two variants of cyclic coordinate descent under an isotonicity assumption by comparing the objective values attained by the two variants with each other, as well as with the gradient descent algorithm.
Abstract: Cyclic coordinate descent is a classic optimization method that has witnessed a resurgence of interest in signal processing, statistics, and machine learning. Reasons for this renewed interest include the simplicity, speed, and stability of the method, as well as its competitive performance on $\ell_1$ regularized smooth optimization problems. Surprisingly, very little is known about its nonasymptotic convergence behavior on these problems. Most existing results either just prove convergence or provide asymptotic rates. We fill this gap in the literature by proving $O(1/k)$ convergence rates (where $k$ is the iteration count) for two variants of cyclic coordinate descent under an isotonicity assumption. Our analysis proceeds by comparing the objective values attained by the two variants with each other, as well as with the gradient descent algorithm. We show that the iterates generated by the cyclic coordinate descent methods remain better than those of gradient descent uniformly over time.

141 citations
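The two variants analyzed in the paper are not reproduced here; the sketch below shows the standard cyclic coordinate descent update for an $\ell_1$-regularized least-squares objective, the class of problems the abstract mentions, with each sweep minimizing exactly over one coordinate via soft thresholding. The data, random seed, and regularization level are illustrative assumptions.

```python
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def cyclic_cd_lasso(X, y, lam, n_sweeps=200):
    """Cyclic coordinate descent for 0.5 * ||y - X w||^2 + lam * ||w||_1:
    each sweep minimizes exactly over one coordinate, holding the others fixed."""
    n, p = X.shape
    w = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)   # per-coordinate curvature ||X_j||^2
    r = y - X @ w                   # running residual
    for _ in range(n_sweeps):
        for j in range(p):
            r += X[:, j] * w[j]     # remove coordinate j from the residual
            w[j] = soft_threshold(X[:, j] @ r, lam) / col_sq[j]
            r -= X[:, j] * w[j]     # add the updated coordinate back
    return w

# Hypothetical data: a sparse ground truth recovered from noisy measurements.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 20))
w_true = np.zeros(20); w_true[:3] = [2.0, -1.5, 1.0]
y = X @ w_true + 0.01 * rng.standard_normal(100)
print(np.round(cyclic_cd_lasso(X, y, lam=1.0), 2))
```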

Posted Content
TL;DR: In this article, a tensor complementarity problem with $Z$-tensors is considered, and it is shown that finding the sparsest solution is equivalent to solving a polynomial program with a linear objective function.
Abstract: Finding the sparsest solutions to a tensor complementarity problem is generally NP-hard due to the nonconvexity and discontinuity of the involved $\ell_0$ norm. In this paper, a special type of tensor complementarity problem with $Z$-tensors is considered. Under some mild conditions, we show that pursuing the sparsest solutions is equivalent to solving a polynomial program with a linear objective function. The involved conditions guarantee the desired exact relaxation and also allow a global optimal solution of the relaxed nonconvex polynomial program to be attained. In particular, in comparison to existing exact relaxation conditions, such as RIP-type ones, our proposed conditions are easy to verify.

126 citations
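For context, a tensor complementarity problem asks for x ≥ 0 with A x^{m-1} + q ≥ 0 and x^T (A x^{m-1} + q) = 0; on the nonnegative orthant the linear objective 1^T x coincides with the $\ell_1$ norm, the usual surrogate for the $\ell_0$ norm. The sketch below only evaluates the tensor map and verifies a candidate solution on a small hypothetical Z-tensor instance; it is not the paper's relaxation method.

```python
import numpy as np

def tensor_map(A, x):
    """Evaluate A x^{m-1}: for an order-m tensor A,
    (A x^{m-1})_i = sum over the remaining m-1 indices of a_{i j ... k} x_j ... x_k."""
    out = A
    for _ in range(A.ndim - 1):
        out = out @ x          # contract the trailing index with x each time
    return out

def is_tcp_solution(A, q, x, tol=1e-9):
    """Check x >= 0, A x^{m-1} + q >= 0, and x^T (A x^{m-1} + q) = 0."""
    w = tensor_map(A, x) + q
    return bool(np.all(x >= -tol) and np.all(w >= -tol) and abs(x @ w) < tol)

# Hypothetical order-3 Z-tensor (all off-diagonal entries are 0 <= 0) and vector q.
A = np.zeros((2, 2, 2))
A[0, 0, 0] = 1.0
A[1, 1, 1] = 1.0
q = np.array([-1.0, 2.0])

x = np.array([1.0, 0.0])                   # the sparsest solution of this instance
print(is_tcp_solution(A, q, x), x.sum())   # the linear objective 1^T x
```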

References
Book
30 Nov 1961
TL;DR: This book develops the theory of iterative methods for large linear systems, from nonnegative matrices, basic iterative methods, and successive overrelaxation through semi-iterative and alternating-direction implicit methods, to matrix methods for parabolic partial differential equations and the estimation of acceleration parameters.
Abstract: Matrix Properties and Concepts.- Nonnegative Matrices.- Basic Iterative Methods and Comparison Theorems.- Successive Overrelaxation Iterative Methods.- Semi-Iterative Methods.- Derivation and Solution of Elliptic Difference Equations.- Alternating-Direction Implicit Iterative Methods.- Matrix Methods for Parabolic Partial Differential Equations.- Estimation of Acceleration Parameters.

5,317 citations
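Successive overrelaxation is one of the central methods treated in this book; as a minimal sketch, each Gauss-Seidel update is relaxed by a factor ω ∈ (0, 2). The test matrix, right-hand side, and choice of ω below are illustrative assumptions.

```python
import numpy as np

def sor(A, b, omega=1.5, x0=None, max_iter=1000, tol=1e-10):
    """Successive overrelaxation for A x = b: each Gauss-Seidel update is
    overrelaxed by the factor omega."""
    n = len(b)
    x = np.zeros(n) if x0 is None else np.array(x0, dtype=float)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            x[i] = (1 - omega) * x[i] + omega * (b[i] - sigma) / A[i, i]
        if np.linalg.norm(x - x_old, np.inf) < tol:
            break
    return x

# Hypothetical test: a small symmetric positive definite tridiagonal system.
A = np.diag(4.0 * np.ones(5)) + np.diag(-np.ones(4), 1) + np.diag(-np.ones(4), -1)
b = np.ones(5)
print(np.allclose(sor(A, b), np.linalg.solve(A, b)))
```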

Journal ArticleDOI

150 citations