Journal ArticleDOI

Numerical Analysis: Stable numerical methods for obtaining the Chebyshev solution to an overdetermined system of equations

01 Jun 1968-Communications of The ACM (ACM)-Vol. 11, Iss: 6, pp 401-406
TL;DR: An implementation of Stiefel's exchange algorithm for determining a Chebyshev solution to an overdetermined system of linear equations is presented that uses Gaussian LU decomposition with row interchanges.
Abstract: An implementation of Stiefel's exchange algorithm for determining a Chebyshev solution to an overdetermined system of linear equations is presented that uses Gaussian LU decomposition with row interchanges. The implementation is computationally more stable than those usually given in the literature. A generalization of Stiefel's algorithm is developed which permits the occasional exchange of two equations simultaneously.
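The Chebyshev solution minimizes the largest absolute residual of an overdetermined system A x ≈ b. As an illustrative sketch only (a linear-programming reformulation solved with an off-the-shelf solver, not Stiefel's exchange algorithm), with made-up data for A and b:

```python
import numpy as np
from scipy.optimize import linprog

# Made-up overdetermined system A x ~ b (4 equations, 2 unknowns).
A = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0],
              [1.0, 4.0]])
b = np.array([1.1, 1.9, 3.2, 3.9])
m, n = A.shape

# Minimize t subject to -t <= (A x - b)_i <= t for every equation i:
# t bounds every residual, so minimizing it yields the Chebyshev solution.
c = np.concatenate([np.zeros(n), [1.0]])      # objective: minimize t
A_ub = np.block([[ A, -np.ones((m, 1))],      #  A x - t <=  b
                 [-A, -np.ones((m, 1))]])     # -A x - t <= -b
b_ub = np.concatenate([b, -b])
res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None)] * n + [(0, None)])
x_cheb, t = res.x[:n], res.x[n]
print("solution:", x_cheb, "max residual:", t)
```

The minimized t equals the infinity-norm of the residual at the solution; a least-squares solution of the same system has a maximum residual at least as large.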
Citations
Book
01 Jan 1987
TL;DR: Introduction to Numerical Continuation Methods, as discussed by the authors, is an introduction to numerical continuation methods for solving nonlinear systems of equations.
Abstract: From the Publisher: Introduction to Numerical Continuation Methods continues to be useful for researchers and graduate students in mathematics, sciences, engineering, economics, and business looking for an introduction to computational methods for solving a large variety of nonlinear systems of equations. A background in elementary analysis and linear algebra is adequate preparation for reading this book; some knowledge from a first course in numerical analysis may also be helpful.

889 citations

Journal ArticleDOI
TL;DR: A survey of computational methods in linear algebra can be found in this article, where the authors discuss the means and methods of estimating the quality of numerical solution of computational problems, the generalized inverse of a matrix, the solution of systems with rectangular and poorly conditioned matrices, and more traditional questions such as algebraic eigenvalue problems and systems with a square matrix.
Abstract: The authors' survey paper is devoted to the present state of computational methods in linear algebra. Questions discussed are the means and methods of estimating the quality of numerical solution of computational problems, the generalized inverse of a matrix, the solution of systems with rectangular and poorly conditioned matrices, the inverse eigenvalue problem, and more traditional questions such as algebraic eigenvalue problems and the solution of systems with a square matrix (by direct and iterative methods).

667 citations

Journal ArticleDOI
TL;DR: The applicability of PM6 to model transition states was investigated by simulating a hypothetical reaction step in the chymotrypsin-catalyzed hydrolysis of a peptide bond, and a proposed technique for generating accurate protein geometries, starting with X-ray structures, was examined.
Abstract: The applicability of the newly developed PM6 method for modeling proteins is investigated. In order to allow the geometries of such large systems to be optimized rapidly, three modifications were made to the conventional semiempirical procedure: the matrix algebra method for solving the self-consistent field (SCF) equations was replaced with a localized molecular orbital method (MOZYME), Baker’s Eigenfollowing technique for geometry optimization was replaced with the L-BFGS function minimizer, and some of the integrals used in the NDDO set of approximations were replaced with point-charge and polarization functions. The resulting method was used in the unconstrained geometry optimization of 45 proteins ranging in size from a simple nonapeptide of 244 atoms to an importin consisting of 14,566 atoms. For most systems, PM6 gave structures in good agreement with the reported X-ray structures. Some derived properties, such as pKa and bulk elastic modulus, were also calculated. The applicability of PM6 to model transition states was investigated by simulating a hypothetical reaction step in the chymotrypsin-catalyzed hydrolysis of a peptide bond. A proposed technique for generating accurate protein geometries, starting with X-ray structures, was examined.

281 citations

Journal ArticleDOI
R.K. Shah1
TL;DR: In this article, a least-squares matching technique is presented to analyze fully developed laminar fluid flow and heat transfer in ducts of arbitrary cross-section, where forced convection heat transfer is considered under constant axial heat-transfer rate with arbitrary peripheral thermal boundary conditions.

233 citations

Journal ArticleDOI
TL;DR: In this article, an implementation of Dantzig's simplex method is presented that is based on the LU decomposition, computed with row interchanges, of the basic matrix.
Abstract: Standard computer implementations of Dantzig's simplex method for linear programming are based upon forming the inverse of the basic matrix and updating the inverse after every step of the method. These implementations have bad round-off error properties. This paper gives the theoretical background for an implementation which is based upon the LU decomposition, computed with row interchanges, of the basic matrix. The implementation is slow, but has good round-off error behavior. The implementation appears as CACM Algorithm 350.
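The idea can be sketched loosely (using SciPy's LU routines on a made-up basis matrix B, not the paper's CACM Algorithm 350): factor the basis once with row interchanges, then reuse the factors for the solves a simplex step needs, never forming an explicit inverse.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

# Made-up nonsingular basis matrix B and right-hand side.
B = np.array([[4.0, 3.0, 0.0],
              [6.0, 3.0, 1.0],
              [0.0, 2.0, 5.0]])
rhs = np.array([1.0, 2.0, 3.0])

# One LU factorization with partial pivoting (row interchanges)...
lu, piv = lu_factor(B)

# ...reused for both kinds of solve; no explicit inverse is maintained.
x = lu_solve((lu, piv), rhs)            # solves B x = rhs
y = lu_solve((lu, piv), rhs, trans=1)   # solves B^T y = rhs
print(x, y)
```

Solving with the pivoted LU factors has much better round-off behavior than multiplying by an updated inverse, which is the paper's point.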

232 citations

References
Book
01 Jan 1966
TL;DR: A textbook treatment of approximation theory, covering the Tchebycheff solution of inconsistent linear equations, Tchebycheff approximation by polynomials and other linear families, least-squares approximation, and rational approximation.
Abstract (from the table of contents):
- Introduction: Examples and prospectus; Metric spaces; Normed linear spaces; Inner-product spaces; Convexity; Existence and unicity of best approximations; Convex functions
- The Tchebycheff Solution of Inconsistent Linear Equations: Introduction; Systems of equations with one unknown; Characterization of the solution; The special case; Polya's algorithm; The ascent algorithm; The descent algorithm; Convex programming
- Tchebycheff Approximation by Polynomials and Other Linear Families: Introduction; Interpolation; The Weierstrass theorem; General linear families; The unicity problem; Discretization errors: general theory; Discretization: algebraic polynomials, the inequalities of Markoff and Bernstein; Algorithms
- Least-squares Approximation and Related Topics: Introduction; Orthogonal systems of polynomials; Convergence of orthogonal expansions; Approximation by series of Tchebycheff polynomials; Discrete least-squares approximation; The Jackson theorems
- Rational Approximation: Introduction; Existence of best rational approximations; The characterization of best approximations; Unicity; Continuity of best-approximation operators; Algorithms; Pade approximation and its generalizations; Continued fractions
- Some Additional Topics: The Stone approximation theorem; The Muntz theorem; The converses of the Jackson theorems; Polygonal approximation and bases in $C[a, b]$; The Kharshiladze-Lozinski theorems; Approximation in the mean
- Notes; References; Index

1,854 citations

Book
01 Jan 2014

1,777 citations

Journal ArticleDOI
TL;DR: This paper determines a priori error bounds for the computed inverses produced by a number of the most effective direct methods of inverting a matrix.
Abstract: 1. In order to assess the relative effectiveness of methods of inverting a matrix it is useful to have a priori bounds for the errors in the computed inverses. In this paper we determine such error bounds for a number of the most effective direct methods. To illustrate fully the techniques we have used, some of the analysis has been done for floating-point computation and some for fixed-point. In all cases it has been assumed that the computation has been performed using a precision of t binary places, though it should be appreciated that on a computer which has both fixed and floating-point facilities the number of permissible digits in a fixed-point number is greater than the number of digits in the mantissa of a floating-point number. The techniques used for analyzing floating-point computation are essentially those of [8], and a familiarity with that paper is assumed. 2. The error bounds are most conveniently expressed in terms of vector and matrix norms, and throughout we have used the Euclidean vector norm and the spectral matrix norm except when explicit reference is made to the contrary. For convenience the main properties of these norms are given in Section 9. In a recent paper [7] we analyzed the effect of the rounding errors made in the solution of the equations

381 citations

Journal ArticleDOI
Cleve B. Moler1
TL;DR: Iterative refinement reduces the roundoff errors in the computed solution to a system of linear equations; if sufficiently high precision is used for the one step that requires it, the final result is very accurate.
Abstract: Iterative refinement reduces the roundoff errors in the computed solution to a system of linear equations. Only one step requires higher precision arithmetic. If sufficiently high precision is used, the final result is shown to be very accurate.
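A minimal sketch of the refinement loop, with a float32 working-precision solve and the residual accumulated in float64 standing in for the higher-precision step (the matrix and right-hand side are made up):

```python
import numpy as np

def iterative_refinement(A, b, sweeps=3):
    """Solve A x = b in float32, then refine using float64 residuals."""
    A32 = A.astype(np.float32)
    # Initial solve entirely in working (low) precision.
    x = np.linalg.solve(A32, b.astype(np.float32)).astype(np.float64)
    for _ in range(sweeps):
        r = b - A @ x                    # residual formed in higher precision
        # Correction solve reuses the low-precision matrix.
        dx = np.linalg.solve(A32, r.astype(np.float32))
        x += dx.astype(np.float64)
    return x

# Made-up, mildly ill-conditioned 2x2 system with exact solution (5, -7).
A = np.array([[10.0, 7.0],
              [7.0, 5.0]])
b = np.array([1.0, 0.0])
x = iterative_refinement(A, b)
print(x)
```

Each sweep reduces the error by roughly a factor of cond(A) times the working-precision unit roundoff, so a few sweeps recover nearly full float64 accuracy here.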

162 citations