
Nonlinear conjugate gradient method

About: Nonlinear conjugate gradient method is a research topic. Over its lifetime, 2,997 publications on this topic have been published, receiving 78,725 citations.


Papers
01 Mar 1994
TL;DR: The Conjugate Gradient Method is the most prominent iterative method for solving sparse systems of linear equations, yet it is a composite of simple, elegant ideas that almost anyone can understand.
Abstract: The Conjugate Gradient Method is the most prominent iterative method for solving sparse systems of linear equations. Unfortunately, many textbook treatments of the topic are written so that even their own authors would be mystified, if they bothered to read their own writing. For this reason, an understanding of the method has been reserved for the elite brilliant few who have painstakingly decoded the mumblings of their forebears. Nevertheless, the Conjugate Gradient Method is a composite of simple, elegant ideas that almost anyone can understand. Of course, a reader as intelligent as yourself will learn them almost effortlessly. The idea of quadratic forms is introduced and used to derive the methods of Steepest Descent, Conjugate Directions, and Conjugate Gradients. Eigenvectors are explained and used to examine the convergence of the Jacobi Method, Steepest Descent, and Conjugate Gradients. Other topics include preconditioning and the nonlinear Conjugate Gradient Method. I have taken pains to make this article easy to read. Sixty-two illustrations are provided. Dense prose is avoided. Concepts are explained in several different ways. Most equations are coupled with an intuitive interpretation.

2,535 citations
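The derivation the abstract sketches (quadratic forms, steepest descent, conjugate directions) condenses into a very short algorithm. Below is a minimal Python sketch of the linear Conjugate Gradient iteration for A x = b with A symmetric positive definite; the function name and the small test system are illustrative choices, not code taken from the paper.

```python
import numpy as np

def conjugate_gradient(A, b, x0=None, tol=1e-10, max_iter=None):
    """Linear CG sketch for A x = b with A symmetric positive definite."""
    x = np.zeros_like(b, dtype=float) if x0 is None else np.asarray(x0, dtype=float)
    r = b - A @ x            # residual; minus the gradient of the quadratic form
    d = r.copy()             # first direction is steepest descent
    rs_old = r @ r
    for _ in range(max_iter or len(b)):
        Ad = A @ d
        alpha = rs_old / (d @ Ad)        # exact minimizer along d (line search)
        x = x + alpha * d
        r = r - alpha * Ad
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        d = r + (rs_new / rs_old) * d    # new direction, conjugate to the old ones
        rs_old = rs_new
    return x

# A small SPD test system; the exact solution is x = [2, -2].
A = np.array([[3.0, 2.0], [2.0, 6.0]])
b = np.array([2.0, -8.0])
print(conjugate_gradient(A, b))
```

In exact arithmetic CG terminates in at most n iterations for an n-by-n system, which is why the loop defaults to len(b) steps.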

Journal ArticleDOI
TL;DR: The main purpose of this paper is to suggest a method for finding the minimum of a function f(x) subject to the constraint g(x) = 0, which consists of replacing f by F = f + λg + (1/2)cg² and computing the appropriate value of the Lagrange multiplier.
Abstract: The main purpose of this paper is to suggest a method for finding the minimum of a function f(x) subject to the constraint g(x) = 0. The method consists of replacing f by F = f + λg + (1/2)cg², where c is a suitably large constant, and computing the appropriate value of the Lagrange multiplier. Only the simplest algorithm is presented. The remaining part of the paper is devoted to a survey of known methods for finding unconstrained minima, with special emphasis on the various gradient techniques that are available. This includes Newton's method and the method of conjugate gradients.

2,282 citations
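For concreteness, here is a hedged Python sketch of the multiplier idea in the abstract: minimize F = f + λg + (1/2)cg² for a fixed λ, then update λ from the remaining constraint violation. The update rule λ ← λ + c·g(x) is the classical method-of-multipliers step; the function names and the plain gradient-descent inner loop are illustrative simplifications, not the paper's algorithm (the paper itself surveys stronger unconstrained solvers such as Newton's method and conjugate gradients for this inner step).

```python
import numpy as np

def method_of_multipliers(f, grad_f, g, grad_g, x0, c=10.0,
                          outer_iters=20, inner_iters=100, lr=0.01):
    """Sketch: minimize F(x) = f(x) + lam*g(x) + (c/2)*g(x)**2 for fixed lam,
    then update the multiplier lam <- lam + c*g(x)."""
    x, lam = np.asarray(x0, dtype=float), 0.0
    for _ in range(outer_iters):
        for _ in range(inner_iters):            # approximately minimize F
            gx = g(x)
            grad_F = grad_f(x) + (lam + c * gx) * grad_g(x)
            x = x - lr * grad_F
        lam += c * g(x)                         # multiplier update
    return x, lam

# Minimize x1^2 + x2^2 subject to x1 + x2 - 1 = 0; solution is x = [0.5, 0.5], lam = -1.
f      = lambda x: x @ x
grad_f = lambda x: 2 * x
g      = lambda x: x[0] + x[1] - 1.0
grad_g = lambda x: np.array([1.0, 1.0])
print(method_of_multipliers(f, grad_f, g, grad_g, x0=[0.0, 0.0]))
```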

Journal ArticleDOI
TL;DR: Part I of this paper shows how the gradient projection method solves nonlinear programming problems with linear constraints; since a linear objective function is a special case of a nonlinear one, the method also solves linear programming problems.
Abstract: Nonlinear programming problems divide into two types, those with linear and those with nonlinear constraints or equations, with either a linear or nonlinear objective function. This distinction is made primarily on the basis of the difficulty of solving these two types of nonlinear problems. The first type is the less difficult of the two, and in this, Part I of the paper, it is shown how it is solved by the gradient projection method. It should be noted that since a linear objective function is a special case of a nonlinear objective function, the gradient projection method will also solve a linear programming problem. In Part II of the paper [16], the extension of the gradient projection method to the more difficult problem of nonlinear constraints and equations will be described. The basic paper on linear programming is the paper by Dantzig [5], in which the simplex method for solving the linear programming problem is presented. The nonlinear programming problem is formulated, and a necessary and sufficient condition for a constrained maximum is given in terms of an equivalent saddle value problem, in the paper by Kuhn and Tucker [10]. Further developments motivated by this paper, including a computational procedure, have been published recently [1]. The gradient projection method was originally presented to the American Mathematical Society

1,142 citations
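As a rough illustration of the projection idea for the linear-constraint case, the Python sketch below projects the gradient onto the null space of a constraint matrix A, so that a step along the projected direction keeps A x = b satisfied. The full method also handles inequality constraints by projecting onto the currently active set, which is omitted here; all names and the fixed step size are illustrative assumptions.

```python
import numpy as np

def projected_gradient_step(grad, A):
    """Project grad onto the null space of A: P = I - A^T (A A^T)^{-1} A.
    Stepping along -P @ grad preserves feasibility of A x = b."""
    P = np.eye(A.shape[1]) - A.T @ np.linalg.solve(A @ A.T, A)
    return P @ grad

# Minimize (x1 - 3)^2 + (x2 - 1)^2 subject to x1 + x2 = 2.
A = np.array([[1.0, 1.0]])
x = np.array([1.0, 1.0])            # feasible start: A x = 2
for _ in range(50):
    grad = 2 * (x - np.array([3.0, 1.0]))
    x = x - 0.1 * projected_gradient_step(grad, A)
print(x)                            # ~ [2., 0.], the constrained minimizer
```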

Journal ArticleDOI
TL;DR: This paper presents a new version of the conjugate gradient method, which converges globally, provided the line search satisfies the standard Wolfe conditions.
Abstract: Conjugate gradient methods are widely used for unconstrained optimization, especially large scale problems. The strong Wolfe conditions are usually used in the analyses and implementations of conjugate gradient methods. This paper presents a new version of the conjugate gradient method, which converges globally, provided the line search satisfies the standard Wolfe conditions. The conditions on the objective function are also weak, being similar to those required by the Zoutendijk condition.

1,065 citations
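The listing above does not reproduce the paper's update formula, so the Python sketch below uses the Dai-Yuan choice of β, the nonlinear CG update usually associated with global convergence under the standard Wolfe conditions, together with SciPy's Wolfe line search. Treat it as an illustration under those assumptions rather than the paper's exact method.

```python
import numpy as np
from scipy.optimize import line_search

def nonlinear_cg(f, grad, x0, max_iter=1000, tol=1e-8):
    """Nonlinear CG sketch with a Wolfe line search and the Dai-Yuan beta.
    The Dai-Yuan formula is an assumption here, not quoted from the paper."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g                                   # start from steepest descent
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        # SciPy's line_search enforces the (strong) Wolfe conditions,
        # which imply the standard Wolfe conditions.
        alpha = line_search(f, grad, x, d, gfk=g)[0]
        if alpha is None:                    # line search failed: restart
            d, alpha = -g, 1e-4
        x = x + alpha * d
        g_new = grad(x)
        beta = (g_new @ g_new) / (d @ (g_new - g))   # Dai-Yuan beta
        d = -g_new + beta * d
        g = g_new
    return x

# The Rosenbrock function; the minimizer is [1, 1].
f = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
grad = lambda x: np.array([-2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0]**2),
                           200 * (x[1] - x[0]**2)])
print(nonlinear_cg(f, grad, np.array([-1.2, 1.0])))
```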


Network Information
Related Topics (5)
Differential equation: 88K papers, 2M citations, 79% related
Partial differential equation: 70.8K papers, 1.6M citations, 79% related
Optimization problem: 96.4K papers, 2.1M citations, 78% related
Nonlinear system: 208.1K papers, 4M citations, 77% related
Boundary value problem: 145.3K papers, 2.7M citations, 77% related
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2023    67
2022    147
2021    35
2020    31
2019    28
2018    53