
Showing papers on "Conjugate gradient method" published in 1988


Book
30 Apr 1988
TL;DR: The book covers direct and iterative methods for linear equations, with appendices on the convergence of iterative methods, the conjugate gradient algorithm, and basic linear algebra.
Abstract: 1. Introduction.- 2. Direct Methods for Linear Equations.- 3. Iterative Methods for Linear Equations.- Appendix 2. Convergence of Iterative Methods.- Appendix 3. The Conjugate Gradient Algorithm.- Appendix 4. Basic Linear Algebra.
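As context for Appendix 3, this is what the basic conjugate gradient algorithm looks like in practice. A minimal sketch for a symmetric positive definite matrix, written here for illustration; it is not code from the book.

```python
# Minimal conjugate gradient for Ax = b with A symmetric positive definite.
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=None):
    n = b.shape[0]
    max_iter = max_iter or n
    x = np.zeros(n)
    r = b - A @ x              # initial residual
    p = r.copy()               # initial search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)  # exact step length along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p  # A-conjugate direction update
        rs = rs_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
print(conjugate_gradient(A, b))    # agrees with np.linalg.solve(A, b)
```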

583 citations


Journal ArticleDOI
TL;DR: This paper provides a preconditioned iterative technique for the solution of saddle point problems: the problem is reformulated as a symmetric positive definite system, which is then solved by conjugate gradient iteration.
Abstract: This paper provides a preconditioned iterative technique for the solution of saddle point problems. These problems typically arise in the numerical approximation of partial differential equations by Lagrange multiplier techniques and/or mixed methods. The saddle point problem is reformulated as a symmetric positive definite system, which is then solved by conjugate gradient iteration. Applications to the equations of elasticity and Stokes are discussed and the results of numerical experiments are given.
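To make the reformulation idea concrete, here is a hedged sketch of one standard way to extract a symmetric positive definite system from a saddle point problem, via the Schur complement, and hand it to a CG solver. This shows the general mechanics only and is not the paper's specific reformulation; the names and random test data are illustrative.

```python
# Saddle point system  [A  B^T; B  0][x; y] = [f; g]  with A SPD:
# the Schur complement S = B A^{-1} B^T is SPD, so CG applies to
# S y = B A^{-1} f - g.
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

rng = np.random.default_rng(0)
n, m = 20, 5
M0 = np.eye(n) + 0.1 * rng.standard_normal((n, n))
A = M0 @ M0.T                                    # SPD (1,1) block
B = rng.standard_normal((m, n))                  # constraint block, full row rank
f, g = rng.standard_normal(n), rng.standard_normal(m)

Ainv = np.linalg.inv(A)   # stands in for an inner solve with A
S = LinearOperator((m, m), matvec=lambda y: B @ (Ainv @ (B.T @ y)),
                   dtype=np.float64)

y, info = cg(S, B @ (Ainv @ f) - g)              # CG on the SPD Schur complement
x = Ainv @ (f - B.T @ y)                         # recover the primal unknown
```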

428 citations


Journal ArticleDOI
TL;DR: The preconditioned conjugate gradient method is used to solve the system of linear equations Ax = b, where A is a singular symmetric positive semi-definite matrix, and the theory is applied to the discretized semi-definite Neumann problem.

207 citations


Journal ArticleDOI
TL;DR: In this paper, a regularised inversion algorithm based on least-absolute deviation is proposed; this approach has a property known in the statistical literature as robustness, and the key computational technique turns out to be the preconditioned conjugate gradient method.
Abstract: Due to ill-conditioning of the linearised forward problem and the presence of noise in most data, inverse problems generally require some kind of 'regularisation' in order to generate physically plausible solutions. The most popular method of regularised inversion is damped least squares. Damping sometimes forces the solution to be smoother than it otherwise would be by raising all of the eigenvalues in an ad hoc fashion. An alternative is described, based upon the method of least-absolute deviation, which has a property known in the statistical literature as robustness. An account of robust inversion methods, their history and computational developments, is given. The key computational technique turns out to be the preconditioned conjugate gradient method, an algorithm which had as its genesis 'the method of orthogonal vectors' of Fox, Huskey and Wilkinson (1948). Applications are illustrated from seismic tomography and inverse scattering, two of the most computationally intensive tasks in inverse theory.

185 citations


Journal ArticleDOI
TL;DR: The iteratively reweighted least squares (IRLS) algorithm as mentioned in this paper provides a means of computing approximate lp solutions (1 ⩽ p), and it can be used to solve large, sparse, rectangular systems of linear algebraic equations very efficiently.
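A rough sketch of the IRLS mechanism, assuming an overdetermined dense system and a small damping constant eps (both choices made here for illustration); for the large sparse problems the paper targets, the inner weighted least-squares solve would itself be done iteratively, e.g. by conjugate gradients.

```python
# Iteratively reweighted least squares for an approximate l_p solution
# of A x ~ b: each pass solves a weighted least-squares problem with
# weights w_i = |r_i|^(p-2), damped by eps to avoid division by zero.
import numpy as np

def irls(A, b, p=1.0, n_iter=50, eps=1e-8):
    x = np.linalg.lstsq(A, b, rcond=None)[0]     # start from the l_2 solution
    for _ in range(n_iter):
        r = A @ x - b
        w = (np.abs(r) + eps) ** (p - 2)         # reweighting
        Aw = A * w[:, None]                      # rows of A scaled by w
        # weighted normal equations A^T W A x = A^T W b;
        # a sparse iterative solver fits here for large problems
        x = np.linalg.solve(A.T @ Aw, Aw.T @ b)
    return x
```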

151 citations


Journal ArticleDOI
TL;DR: An incomplete LQ factorization is proposed, some of its implementation details are described, and a number of experiments comparing the resulting methods are reported.

136 citations


Journal ArticleDOI
TL;DR: In this article, the authors address the problem of optimal selection of shape functions for p-type finite elements and discuss the effectiveness of the conjugate gradient and multilevel iteration methods for solving the corresponding linear system.
Abstract: The paper addresses the question of the optimal selection of the shape functions for p-type finite elements and discusses the effectiveness of the conjugate gradient and multilevel iteration methods for solving the corresponding linear system. The selection of the shape functions is of major importance for the performance of a solver based on iterative methods. Neither the theory nor the practice of the optimal selection of the shape functions is available yet. We have seen that the condensation approach, which has obvious advantages from the standpoint of parallel computation, is a very effective tool for keeping the condition number under control and is especially advantageous for the conjugate gradient method.

126 citations


Journal ArticleDOI
TL;DR: The Fourier acceleration method is explained for the Jacobi relaxation and conjugate gradient methods and is applied to two models: the random resistor network and the random central-force network.
Abstract: Technical details are given on how to use Fourier acceleration with iterative processes such as relaxation and conjugate gradient methods. These methods are often used to solve large linear systems of equations, but become hopelessly slow very rapidly as the size of the set of equations to be solved increases. Fourier acceleration is a method designed to alleviate these problems and result in a very fast algorithm. The method is explained for the Jacobi relaxation and conjugate gradient methods and is applied to two models: the random resistor network and the random central-force network. In the first model, acceleration works very well; in the second, little is gained. We discuss reasons for this. We also include a discussion of stopping criteria.
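A toy one-dimensional illustration of the Fourier acceleration idea: relaxation on a disordered "random resistor" style operator is preconditioned in Fourier space by the inverse symbol of the corresponding uniform operator. The operator, the damping factor, and all parameters are illustrative assumptions, not the paper's models.

```python
# Fourier-accelerated relaxation for A x = b, where A is a 1-D periodic
# operator with random bond conductances c_i plus a mass term m2.
import numpy as np

n, m2 = 256, 0.1
rng = np.random.default_rng(1)
c = 0.5 + rng.random(n)                     # random conductances, periodic chain

def apply_A(x):
    # (A x)_i = c_i (x_i - x_{i+1}) + c_{i-1} (x_i - x_{i-1}) + m2 x_i
    return (c * (x - np.roll(x, -1))
            + np.roll(c, 1) * (x - np.roll(x, 1)) + m2 * x)

# Fourier symbol of the uniform operator with the mean conductance:
k = np.arange(n)
symbol = 4.0 * c.mean() * np.sin(np.pi * k / n) ** 2 + m2

b = rng.standard_normal(n)
x, omega = np.zeros(n), 0.8                 # omega: relaxation damping
for it in range(200):
    r = b - apply_A(x)
    if np.linalg.norm(r) < 1e-8 * np.linalg.norm(b):
        break
    # each Fourier mode is relaxed with its own effective step 1/symbol_k
    x += omega * np.fft.ifft(np.fft.fft(r) / symbol).real
```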

116 citations


Journal ArticleDOI
TL;DR: An algorithm is described and proved correct, and the paper includes results of a numerical experiment which suggest that the preconditioned conjugate gradient method may be efficient in practical computations.
Abstract: Preconditioning of the conjugate gradient method by a conjugate projector is suggested. We describe an algorithm and prove its correctness. An estimate of the preconditioning effect is obtained in terms of the gap between the invariant subspace of smooth eigenvectors of the matrix of the original system and the complement of the range of the preconditioning projector. The paper includes results of a numerical experiment which suggest that the method may be efficient in practical computations.

97 citations


Journal ArticleDOI
TL;DR: It is suggested that the Chebyshev and Richardson methods, with reasonable parameter choices, may be more effective than the conjugate gradient method in the presence of inexactness.
Abstract: The Chebyshev and second-order Richardson methods are classical iterative schemes for solving linear systems. We consider the convergence analysis of these methods when each step of the iteration is carried out inexactly. This has many applications, since a preconditioned iteration requires, at each step, the solution of a linear system, which may be solved inexactly using an "inner" iteration. We derive an error bound which applies to the general nonsymmetric inexact Chebyshev iteration. We show how this simplifies slightly in the case of a symmetric or skew-symmetric iteration, and we consider both the cases of underestimating and overestimating the spectrum. We show that in the symmetric case, it is actually advantageous to underestimate the spectrum when the spectral radius and the degree of inexactness are both large. This is not true in the case of the skew-symmetric iteration. We show how similar results apply to the Richardson iteration. Finally, we describe numerical experiments which illustrate the results and suggest that the Chebyshev and Richardson methods, with reasonable parameter choices, may be more effective than the conjugate gradient method in the presence of inexactness.
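For reference, a hedged sketch of the exact-step Chebyshev iteration for a symmetric positive definite system with assumed spectrum bounds [lmin, lmax]; the paper's subject is what happens when each step is carried out inexactly and when the bounds under- or overestimate the spectrum, which can be mimicked here by perturbing lmin and lmax. The recurrence follows the classical scheme; variable names and the test matrix are illustrative.

```python
# Chebyshev iteration for A x = b, A SPD with spectrum in [lmin, lmax].
import numpy as np

def chebyshev(A, b, lmin, lmax, n_iter=100):
    theta = (lmax + lmin) / 2.0          # center of the spectrum interval
    delta = (lmax - lmin) / 2.0          # half-width
    sigma = theta / delta
    rho = 1.0 / sigma
    x = np.zeros_like(b)
    r = b - A @ x
    d = r / theta
    for _ in range(n_iter):
        x = x + d
        r = r - A @ d
        rho_new = 1.0 / (2.0 * sigma - rho)
        d = rho_new * rho * d + (2.0 * rho_new / delta) * r
        rho = rho_new
    return x

A = np.diag(np.linspace(1.0, 10.0, 50))
x = chebyshev(A, np.ones(50), lmin=1.0, lmax=10.0)
```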

95 citations


Journal ArticleDOI
TL;DR: In this paper, the backscatter cross section is calculated for thin material plates with finite electric permittivity, conductivity, and magnetic permeability illuminated by a plane wave, and integral equations are formed and solved by a combined conjugate gradient-fast Fourier transform (CG-FFT) method.
Abstract: The backscatter cross section is calculated for thin material plates with finite electric permittivity, conductivity, and magnetic permeability illuminated by a plane wave. The plates are assumed to be planar with an arbitrary perimeter. The integral equations are formed and solved by a combined conjugate gradient-fast Fourier transform (CG-FFT) method. The CG-FFT method was tested for several geometries and materials; measured and computed backscatter results are compared for a perfectly conducting equilateral triangle plate, a square dielectric and magnetic plate, and a circular dielectric plate. The agreement between measured and computed data is generally good except toward edge-on incidence, where several factors cause discrepancies. Accurate approximations to the geometry and far-field integrals become critical near edge-on incidence, and it is postulated that as the incidence angle approaches edge-on, the sampling interval and tolerance should be decreased.
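The computational core of CG-FFT is that the integral operator acts as a discrete convolution, so the matrix-vector product inside CG can be evaluated with zero-padded FFTs in O(N log N) time and O(N) memory instead of forming the matrix. A hedged 1-D sketch with a toy positive definite kernel chosen for illustration (the real problem is 2-D and electromagnetic):

```python
# CG with an FFT-based matvec for a symmetric Toeplitz (convolution) system.
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

N = 512
g = 1.0 / (1.0 + np.arange(N) ** 2)     # toy symmetric convolution kernel

def toeplitz_matvec(x):
    # embed the N x N symmetric Toeplitz matrix in a 2N circulant,
    # then multiply diagonally in Fourier space
    col = np.concatenate([g, [0.0], g[:0:-1]])
    y = np.fft.ifft(np.fft.fft(col) * np.fft.fft(x, 2 * N))
    return y[:N].real

A = LinearOperator((N, N), matvec=toeplitz_matvec, dtype=np.float64)
x, info = cg(A, np.ones(N))             # the N x N matrix is never stored
```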

Journal ArticleDOI
TL;DR: Numerical studies on four model problems show that the conjugate gradient method using these vectorized implementations of the modified incomplete factorizations and SSOR preconditioners achieves overall speeds approaching 100 megaflops on a Cray X-MP/24 vector computer.
Abstract: We consider the problem of vectorizing the recursive calculations found in modified incomplete factorizations and SSOR preconditioners for the conjugate gradient method. We examine matrix problems derived from partial differential equations which are discretized on regular 2-D and 3-D grids, where the grid nodes are ordered in the natural ordering. By performing data dependency analyses of the computations, we show that there is concurrency in both the factorization and the forward and backsolves. The computations may be performed with an average vector length of $O(n)$ on an $n^2 $ or $n^3 $ grid in two and three dimensions. Numerical studies on four model problems show that the conjugate gradient method using these vectorized implementations of the modified incomplete factorizations and SSOR preconditioners achieves overall speeds approaching 100 megaflops on a Cray X-MP/24 vector computer. Furthermore, these methods require considerably less overall execution time than diagonal scaling and no-fill red-black incomplete factorization preconditioners, both of which allow full vectorization but are not as convergent.

Journal ArticleDOI
TL;DR: In this paper, iterative algorithms based on the conjugate gradient method are developed for hypercubes designed for coarse-grained parallelism, and the communication requirements of different schemes for mapping finite-element meshes onto the processors of a hypercube are analyzed with respect to the effect of communication parameters of the architecture.
Abstract: Finite-element discretization produces linear equations in the form Ax=b, where A is large, sparse, and banded with proper ordering of the variables x. The solution of such equations on distributed-memory message-passing multiprocessors implementing the hypercube topology is addressed. Iterative algorithms based on the conjugate gradient method are developed for hypercubes designed for coarse-grained parallelism. The communication requirements of different schemes for mapping finite-element meshes onto the processors of a hypercube are analyzed with respect to the effect of communication parameters of the architecture. Experimental results for a 16-node Intel 80386-based iPSC/2 hypercube are presented and discussed.

Journal ArticleDOI
01 Feb 1988
TL;DR: This survey focuses on promising approaches for solving large, well-structured constrained problems: dualization of problems with separable objective and constraint functions, and decomposition of hierarchical problems with linking variables.
Abstract: This survey is concerned with variants of nonlinear optimization methods designed for implementation on parallel computers. First, we consider a variety of methods for unconstrained minimization. We consider a particular type of parallelism (simultaneous function and gradient evaluations), and we concentrate on the main sources of inspiration: conjugate directions, homogeneous functions, variable-metric updates, and multi-dimensional searches. The computational process for solving small and medium-size constrained optimization problems is usually based on unconstrained optimization. This provides a straightforward opportunity for the introduction of parallelism. In the present survey, however, we focus on promising approaches for solving large, well-structured constrained problems: dualization of problems with separable objective and constraint functions, and decomposition of hierarchical problems with linking variables (typical of Benders decomposition in the linear case). Finally, we outline the key issues in future computational studies of parallel nonlinear optimization algorithms.

Journal ArticleDOI
TL;DR: The principal conclusion is that no single conjugate gradient algorithm provides an efficient solution for all types of problems.

Journal ArticleDOI
TL;DR: In this article, a scheme for analyzing electrodynamic problems involving conducting plates of resonant size using the conjugate-gradient (CG) method and the fast Fourier transform (FFT) is presented in detail.
Abstract: A scheme for analyzing electrodynamic problems involving conducting plates of resonant size using the conjugate-gradient (CG) method and the fast Fourier transform (FFT) is presented in detail. The problems are analyzed by solving their corresponding electric-field integral equation. The procedure is made easy and systematic by using a sampling process with rooftop functions to represent the induced current and pulses to average the fields. These functions have been widely used in moment-method (MM) applications. The scheme is an efficient numerical tool, benefiting from the good convergence and low memory requirements of the CG method and the low CPU time consumed in performing convolutions with the FFT. In comparison with the MM, the scheme avoids the storage of large matrices and reduces the computer time by an order of magnitude. Several results are presented and compared with analytical, numerical, or measured values that appear in the literature.

Journal ArticleDOI
TL;DR: It is shown that four vector extrapolation methods (minimal polynomial extrapolation, reduced rank extrapolation, modified minimal polynomial extrapolation, and the topological epsilon algorithm), when applied to linearly generated vector sequences, are Krylov subspace methods and are equivalent to some well-known conjugate gradient type methods.

Journal ArticleDOI
01 Mar 1988
TL;DR: Two efficient parallel algorithms for computing the forward dynamics for real-time simulation were developed for implementation on a single-instruction multiple-data-stream (SIMD) computer with n processors, where n is the number of degrees of freedom of the manipulator.
Abstract: Two efficient parallel algorithms for computing the forward dynamics for real-time simulation were developed for implementation on a single-instruction multiple-data-stream (SIMD) computer with n processors, where n is the number of degrees of freedom of the manipulator. The first parallel algorithm, based on the composite rigid-body method, generates the inertia matrix using the parallel Newton-Euler algorithm, the parallel linear recurrence algorithm, and the modified row-sweep algorithm, and then inverts the inertia matrix to obtain the joint acceleration vector at time t. The time complexity of this parallel algorithm is of the order $O(n^2)$ with $O(n)$ processors. The second parallel algorithm, based on the conjugate gradient method, computes the joint acceleration with a time complexity of $O(n)$ for multiplication operations and $O(n \log_2 n)$ for addition operations. The interprocessor communication problem for the implementation of the proposed parallel algorithms on SIMD machines is also discussed and analyzed.

01 Jun 1988
TL;DR: This thesis examines the use of polynomial preconditioning in CG methods for both hermitian positive definite and indefinite matrices and solves a constrained minimax approximation problem about the solution of a linear system of equations.
Abstract: The solution of a linear system of equations, Ax = b, arises in many scientific applications. If A is large and sparse, an iterative method is required. When A is hermitian positive definite (hpd), the conjugate gradient method of Hestenes and Stiefel is popular. When A is hermitian indefinite (hid), the conjugate residual method may be used. If A is ill-conditioned, these methods may converge slowly, in which case a preconditioner is needed. In this thesis we examine the use of polynomial preconditioning in CG methods for both hermitian positive definite and indefinite matrices. Such preconditioners are easy to employ and well-suited to vector and/or parallel architectures. We first show that any CG method is characterized by three matrices: an hpd inner product matrix B, a preconditioning matrix C, and the hermitian matrix A. The resulting method, CG(B,C,A), minimizes the B-norm of the error over a Krylov subspace. We next exploit the versatility of polynomial preconditioners to design several new CG methods. To obtain an optimum preconditioner, we solve a constrained minimax approximation problem. The preconditioning polynomial, $C(\lambda)$, is optimum in that it minimizes a bound on the condition number of the preconditioned matrix, $p_m(A)$. An adaptive procedure for dynamically determining the optimum preconditioner from the CG iteration parameters is also discussed. Finally, in a variety of numerical experiments, conducted on a Cray X-MP/48, we demonstrate the effectiveness of polynomial preconditioning.
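To show the mechanics of polynomial preconditioning (with a simple Neumann-series polynomial chosen here for illustration, not the thesis's optimal minimax polynomial): the preconditioner p_m(A) ≈ A^{-1} is applied matrix-free by a short recurrence at every CG step.

```python
# Polynomial preconditioning:
# M^{-1} = p_m(A) = omega * sum_{j=0}^{m} (I - omega A)^j  (Neumann series).
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

n = 200
A = (np.diag(np.linspace(1.0, 4.0, n))
     + 0.1 * np.eye(n, k=1) + 0.1 * np.eye(n, k=-1))   # SPD test matrix
b = np.ones(n)
omega = 1.0 / 4.5            # 0 < omega < 2 / lmax so the series converges

def poly_prec(r, m=4):
    z, term = np.zeros_like(r), r.copy()
    for _ in range(m + 1):
        z += term
        term = term - omega * (A @ term)   # term <- (I - omega A) term
    return omega * z

M = LinearOperator((n, n), matvec=poly_prec, dtype=np.float64)
x, info = cg(A, b, M=M)      # preconditioner applied without forming p_m(A)
```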

Journal ArticleDOI
TL;DR: The aim of this paper is to review the properties of the various iteration methods for the numerical solution of very large, sparse linear systems of equations, in order to assist the user in making a deliberate selection.

Journal ArticleDOI
TL;DR: A new tridiagonalization process for unsymmetric matrices that is closely related to the Lanczos process is presented and used to derive the new algorithms USYMLQ and USYMQR in the same fashion as SYMMLQ and MINRES.
Abstract: We propose two new conjugate-gradient-type methods for the solution of sparse unsymmetric linear systems. We present a new tridiagonalization process for unsymmetric matrices that is closely related to the Lanczos process. We use orthogonal factorizations of the tridiagonal matrix to derive the new algorithms USYMLQ and USYMQR in the same fashion as SYMMLQ and MINRES [C. C. Paige and M. A. Saunders, “Solution of sparse indefinite systems of linear equations,” SIAM J. Numer. Anal., 12 (1975), pp. 617–629], for symmetric matrices. Some numerical results for the new methods and comparisons with other methods are presented.

Journal ArticleDOI
TL;DR: An analysis for single- and double-layered microstrip antennas is described, finding the planarity of these structures makes it possible to construct a general spectral representation for any number of layers and to derive the spectral Green's dyad in a compact fashion using a transmission-line analogy.
Abstract: An analysis for single- and double-layered microstrip antennas is described. The planarity of these structures makes it possible to construct a general spectral representation for any number of layers and to derive the spectral Green's dyad in a compact fashion using a transmission-line analogy. A formulation of the antenna problem in the spectral domain, incorporating this dyad, is coupled with the conjugate gradient algorithm, whose applicability as an efficient way of analyzing a number of microstrip antennas is studied. Results are quite accurate. Conclusions pertaining to the applicability of the method, including effects of problem parameters on convergence rates, are drawn.

Journal ArticleDOI
TL;DR: Shanno's conjugate gradient algorithm is applied to the problem of minimizing the potential energy function associated with molecular mechanical calculations and can improve the rate of convergence to a minimum by a factor of 5 relative to Fletcher‐Reeves or Polak‐Ribière minimizers when used within the molecular mechanics package AMBER.
Abstract: We apply Shanno's conjugate gradient algorithm to the problem of minimizing the potential energy function associated with molecular mechanical calculations. Shanno's algorithm is stable with respect to roundoff errors and inexact line searches and converges rapidly to a minimum. Equally important, this algorithm can improve the rate of convergence to a minimum by a factor of 5 relative to Fletcher-Reeves or Polak-Ribière minimizers when used within the molecular mechanics package AMBER. Comparable improvements are found for a limited number of simulations when the Polak-Ribière direction vector is incorporated into the Shanno algorithm.
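As a toy version of this kind of calculation: minimizing a two-atom Lennard-Jones "potential energy" with a nonlinear conjugate gradient routine. SciPy's 'CG' method is a Polak-Ribière-type minimizer, i.e. one of the baselines Shanno's algorithm is compared against here, and the test energy is an illustrative choice, not an AMBER force field.

```python
# Nonlinear CG minimization of a toy molecular potential.
import numpy as np
from scipy.optimize import minimize

def lj_energy(x):
    # x holds the 3-D coordinates of two atoms; pair energy 4 (r^-12 - r^-6)
    r = np.linalg.norm(x[:3] - x[3:])
    return 4.0 * (r ** -12 - r ** -6)

x0 = np.array([0.0, 0.0, 0.0, 1.5, 0.1, 0.0])
res = minimize(lj_energy, x0, method='CG')   # gradient via finite differences
print(res.fun)   # approaches the pair minimum of -1 at r = 2**(1/6)
```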

Journal ArticleDOI
TL;DR: In this paper, an elliptic differential equation is used to obtain mass-consistent wind fields satisfying the discrete formulation of the continuity equation, and a proper parameterization makes it possible to account for the influence of atmospheric stability on the resulting wind fields.

Journal ArticleDOI
TL;DR: A simple technique is presented for conditioning conjugate gradient or conjugate residual matrix inversion, as applied to the lattice gauge theory problem of computing the propagator of Wilson fermions.

Journal ArticleDOI
TL;DR: It is shown that communication startups are very important, and that even the small amount of global communication in these methods can significantly reduce the performance of many message-passing computer architectures.
Abstract: We discuss the parallel implementation of preconditioned conjugate gradient (PCG)-based domain decomposition techniques for self-adjoint elliptic partial differential equations in two dimensions on several architectures. The complexity of these methods is described on a variety of message-passing parallel computers as a function of the size of the problem, number of processors and relative communication speeds of the processors. We show that communication startups are very important, and that even the small amount of global communication in these methods can significantly reduce the performance of many message-passing computer architectures.

ReportDOI
01 Apr 1988
TL;DR: NSPCG (or Nonsymmetric Preconditioned Conjugate Gradient) is a computer package to solve the linear system Au = b by various iterative methods.
Abstract: NSPCG (or Nonsymmetric Preconditioned Conjugate Gradient) is a computer package for solving the linear system Au = b by various iterative methods. The coefficient matrix A is assumed to be large and sparse with real coefficients. A wide selection of preconditioners and accelerators is available for both symmetric and nonsymmetric coefficient matrices. Several sparse matrix data structures are available to represent matrices whose structures range from highly structured to completely unstructured. 36 refs., 8 tabs.

Journal ArticleDOI
TL;DR: In this paper, it is shown how the smallest "active" eigenvalue of the coefficient matrix of a set of linear equations can be cheaply approximated, and the usefulness of this approximation for a practical termination criterion for the conjugate gradient method is studied.
Abstract: The conjugate gradient method for the iterative solution of a set of linear equations Ax = b is essentially equivalent to the Lanczos method, which implies that approximations to certain eigenvalues of A can be obtained at low cost. In this paper it is shown how the smallest "active" eigenvalue of A can be cheaply approximated, and the usefulness of this approximation for a practical termination criterion for the conjugate gradient method is studied. It is proved that this termination criterion is reliable in many relevant situations.
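A hedged sketch of the CG-Lanczos connection the paper exploits: the CG scalars alpha_k and beta_k assemble the Lanczos tridiagonal matrix T_k, whose smallest Ritz value cheaply tracks the smallest "active" eigenvalue and can feed a termination test. The assembly formula is the standard one; the test matrix and iteration count are illustrative choices.

```python
# Run CG and recover a smallest-eigenvalue estimate from its coefficients.
import numpy as np

def cg_with_ritz(A, b, n_iter=30):
    x = np.zeros_like(b)
    r, p = b.copy(), b.copy()
    rs = r @ r
    alphas, betas = [], []
    for _ in range(n_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        beta = rs_new / rs
        alphas.append(alpha); betas.append(beta)
        p = r + beta * p
        rs = rs_new
    # Lanczos tridiagonal T_k built from the CG coefficients:
    k = len(alphas)
    T = np.zeros((k, k))
    for i in range(k):
        T[i, i] = 1.0 / alphas[i] + (betas[i - 1] / alphas[i - 1] if i else 0.0)
        if i + 1 < k:
            T[i, i + 1] = T[i + 1, i] = np.sqrt(betas[i]) / alphas[i]
    return x, np.linalg.eigvalsh(T)[0]       # smallest Ritz value

A = np.diag(np.linspace(0.5, 9.5, 100))
x, lam_min = cg_with_ritz(A, np.ones(100))
print(lam_min)   # tends toward the smallest eigenvalue CG actually "sees"
```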

Journal ArticleDOI
TL;DR: A preconditioned conjugate gradient method is applied to the Kuhn–Tucker equations associated with the LSE problem and it is shown that this method is well suited for structural optimization problems in reliability analysis and optimal design.
Abstract: We consider the linear equality-constrained least squares problem (LSE) of minimizing ${\|c - Gx\|}_2$, subject to the constraint $Ex = p$. A preconditioned conjugate gradient method is applied to the Kuhn–Tucker equations associated with the LSE problem. We show that our method is well suited for structural optimization problems in reliability analysis and optimal design. Numerical tests are performed on an Alliant FX/8 multiprocessor and a Cray X-MP using some practical structural analysis data.
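For orientation, a hedged sketch of the LSE problem itself, solved here by a null-space reduction plus CG on the reduced normal equations; this is a standard alternative route, not the paper's preconditioned CG on the Kuhn-Tucker equations, and the random test data are illustrative.

```python
# LSE: minimize ||c - G x||_2 subject to E x = p, via null-space reduction.
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

rng = np.random.default_rng(2)
m, n, q = 30, 10, 3
G, c = rng.standard_normal((m, n)), rng.standard_normal(m)
E, p = rng.standard_normal((q, n)), rng.standard_normal(q)

x_p = np.linalg.lstsq(E, p, rcond=None)[0]    # particular solution of E x = p
Z = np.linalg.svd(E)[2][q:].T                 # columns of Z span null(E)

# reduced SPD system  (Z^T G^T G Z) y = Z^T G^T (c - G x_p), solved by CG
H = LinearOperator((n - q, n - q),
                   matvec=lambda y: Z.T @ (G.T @ (G @ (Z @ y))),
                   dtype=np.float64)
y, info = cg(H, Z.T @ (G.T @ (c - G @ x_p)))
x = x_p + Z @ y                               # feasible: E x = p holds
```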

Journal ArticleDOI
TL;DR: In this paper, the convergence rate of the conjugate gradient method is dependent on the eigenvalues of the iteration matrix as well as on the number of eigenvectors needed to represent the right side of the equation.
Abstract: A theory that relates eigenvalues of a continuous operator to those of the moment-method matrix operator is discussed and confirmed by examples. This theory suggests reasons for ill conditioning when certain types of basis and testing functions are used. In addition, the effect of eigenvalue location on the convergence of the conjugate gradient (CG) method is studied. The convergence rate of the CG method is dependent on the eigenvalues of the iteration matrix as well as on the number of eigenvectors of the iteration matrix needed to represent the right side of the equation. These findings explain the previously reported convergence behavior of the CG method when applied to electromagnetic-scattering problems.
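The dependence on eigenvalue distribution is easy to demonstrate numerically. In this small experiment, constructed here for illustration, two matrices share the same condition number, but CG needs far fewer iterations on the one whose spectrum is clustered into a few distinct values:

```python
# CG iteration counts for a spread-out versus a clustered spectrum.
import numpy as np
from scipy.sparse.linalg import cg

n = 400
spread = np.diag(np.linspace(1.0, 100.0, n))                   # eigenvalues spread out
clustered = np.diag(np.r_[1.0, 100.0, np.full(n - 2, 50.0)])   # 3 values, same cond

def count_iters(A):
    count = [0]
    cg(A, np.ones(n), callback=lambda xk: count.__setitem__(0, count[0] + 1))
    return count[0]

print(count_iters(spread), count_iters(clustered))   # clustered converges far faster
```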