Book Chapter

Numerical techniques in mathematical programming

TL;DR: In this article, the application of numerically stable matrix decompositions to minimization problems involving linear constraints is discussed and shown to be feasible without undue loss of efficiency; the singular value decomposition is applied to the nonlinear least squares problem, and related eigenvalue problems are discussed.
Abstract: The application of numerically stable matrix decompositions to minimization problems involving linear constraints is discussed and shown to be feasible without undue loss of efficiency. Part A describes computation and updating of the product-form of the LU decomposition of a matrix and shows it can be applied to solving linear systems at least as efficiently as standard techniques using the product-form of the inverse. Part B discusses orthogonalization via Householder transformations, with applications to least squares and quadratic programming algorithms based on the principal pivoting method of Cottle and Dantzig. Part C applies the singular value decomposition to the nonlinear least squares problem and discusses related eigenvalue problems.
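As a concrete illustration of the orthogonalization approach of Part B, the following minimal sketch (Python/NumPy; the function name and test setup are ours, not the chapter's) solves a dense least squares problem by Householder reflections rather than by forming the normal equations:

```python
import numpy as np

def householder_qr_solve(A, b):
    """Solve min ||Ax - b||_2 for tall A (m >= n) via Householder QR.

    Sketch of the Part B idea: reduce A to upper triangular form with
    Householder reflections, apply the same reflections to b, then
    back-substitute. Assumes A has full column rank.
    """
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    m, n = A.shape
    for k in range(n):
        x = A[k:, k]
        v = x.copy()
        v[0] += np.copysign(np.linalg.norm(x), x[0])    # Householder vector
        beta = 2.0 / (v @ v)
        A[k:, k:] -= beta * np.outer(v, v @ A[k:, k:])  # reflect trailing block
        b[k:] -= beta * v * (v @ b[k:])                 # reflect right-hand side
    return np.linalg.solve(np.triu(A[:n, :n]), b[:n])   # R x = (Q^T b)_{1..n}

A = np.random.rand(8, 3)
b = np.random.rand(8)
assert np.allclose(householder_qr_solve(A, b),
                   np.linalg.lstsq(A, b, rcond=None)[0])
```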
Citations
Journal Article
TL;DR: The history of these formulas is presented and various applications to statistics, networks, structural analysis, asymptotic analysis, optimization, and partial differential equations are discussed.
Abstract: The Sherman–Morrison–Woodbury formulas express the inverse of a matrix after a small-rank perturbation in terms of the inverse of the original matrix. The paper surveys the history of these formulas and examines applications to statistics, networks, structural analysis, asymptotic analysis, optimization, and partial differential equations.
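For concreteness, the rank-one (Sherman–Morrison) case states that $(A + uv^T)^{-1} = A^{-1} - A^{-1}uv^T A^{-1}/(1 + v^T A^{-1}u)$; a quick NumPy check (illustrative only, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n)) + n * np.eye(n)  # a well-conditioned test matrix
u, v = rng.standard_normal(n), rng.standard_normal(n)

Ainv = np.linalg.inv(A)
# Sherman-Morrison: inverse of A + u v^T from the inverse of A
updated = Ainv - np.outer(Ainv @ u, v @ Ainv) / (1.0 + v @ Ainv @ u)
assert np.allclose(updated, np.linalg.inv(A + np.outer(u, v)))
```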

1,026 citations


Cites methods from "Numerical techniques in mathematica..."

  • ...One method for updating a triangular factorization is presented by Bennett [7] while the series of papers by Bartels, Gill, Golub, Murray, and Saunders ([5], [20]-[22]) provides a comprehensive study of many different ways to update the standard factorizations after a rank change in the coefficient matrix....


Journal Article
TL;DR: An efficient and numerically stable dual algorithm for positive definite quadratic programming is described which takes advantage of the fact that the unconstrained minimum of the objective function can be used as a starting point.
Abstract: An efficient and numerically stable dual algorithm for positive definite quadratic programming is described which takes advantage of the fact that the unconstrained minimum of the objective function can be used as a starting point. Its implementation utilizes the Cholesky and QR factorizations and procedures for updating them. The performance of the dual algorithm is compared against that of primal algorithms when used to solve randomly generated test problems and quadratic programs generated in the course of solving nonlinear programming problems by a successive quadratic programming code (the principal motivation for the development of the algorithm). These computational results indicate that the dual algorithm is superior to primal algorithms when a primal feasible point is not readily available. The algorithm is also compared theoretically to the modified-simplex type dual methods of Lemke and Van de Panne and Whinston and it is illustrated by a numerical example.
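The starting point mentioned above is cheap to obtain: for $\min \frac{1}{2}x^T G x + c^T x$ with $G$ positive definite, the unconstrained minimizer solves $Gx = -c$. A minimal sketch (Python/SciPy; illustrative, not the paper's implementation) computes it from the Cholesky factorization such an algorithm maintains:

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def unconstrained_qp_minimizer(G, c):
    """Dual starting point: argmin 0.5 x^T G x + c^T x, i.e. the solution
    of G x = -c, via a Cholesky factorization of the (assumed symmetric
    positive definite) Hessian G."""
    return cho_solve(cho_factor(G), -c)

G = np.array([[4.0, 1.0], [1.0, 3.0]])
c = np.array([1.0, 2.0])
x0 = unconstrained_qp_minimizer(G, c)
assert np.allclose(G @ x0, -c)
```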

1,007 citations

Journal Article
TL;DR: In this paper, the problems of finding the stationary values of a quadratic form subject to linear constraints and of determining the eigenvalues of a matrix modified by a rank-one matrix are considered.
Abstract: We consider the numerical calculation of several matrix eigenvalue problems which require some manipulation before the standard algorithms may be used. This includes finding the stationary values of a quadratic form subject to linear constraints and determining the eigenvalues of a matrix which is modified by a matrix of rank one. We also consider several inverse eigenvalue problems. This includes the problem of determining the coefficients for the Gauss–Radau and Gauss–Lobatto quadrature rules. In addition, we study several eigenvalue problems which arise in least squares.
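For the first of these problems, a standard reduction (sketched here in Python/NumPy under our own illustrative setup, not the paper's notation) is that the stationary values of $x^T A x$ on the unit sphere subject to $C^T x = 0$ are the eigenvalues of $Z^T A Z$, where the columns of $Z$ form an orthonormal basis of the null space of $C^T$:

```python
import numpy as np
from scipy.linalg import null_space

rng = np.random.default_rng(1)
n, p = 6, 2
A = rng.standard_normal((n, n))
A = (A + A.T) / 2                 # symmetric matrix of the quadratic form
C = rng.standard_normal((n, p))   # constraint matrix, constraints C^T x = 0

Z = null_space(C.T)               # orthonormal basis of {x : C^T x = 0}
# Stationary values of x^T A x with C^T x = 0 and ||x||_2 = 1
print(np.linalg.eigvalsh(Z.T @ A @ Z))
```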

615 citations

Journal Article
TL;DR: Several methods are described for modifying Cholesky factors and a new algorithm is presented for modifying the complete orthogonal factorization of a general matrix, from which the conventional QR factors are obtained as a special case.
Abstract: In recent years several algorithms have appeared for modifying the factors of a matrix following a rank-one change. These methods have always been given in the context of specific applications and this has probably inhibited their use over a wider field. In this report several methods are described for modifying Cholesky factors. Some of these have been published previously while others appear for the first time. In addition, a new algorithm is presented for modifying the complete orthogonal factorization of a general matrix, from which the conventional QR factors are obtained as a special case. A uniform notation has been used and emphasis has been placed on illustrating the similarity between different methods.
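One classical scheme of this kind, sketched below (Python/NumPy; a textbook variant, not necessarily the exact pseudocode of the report), updates the Cholesky factor after a positive rank-one change using a sequence of plane rotations:

```python
import numpy as np

def chol_update(L, x):
    """Given lower-triangular L with A = L L^T, return the Cholesky
    factor of A + x x^T, zeroing x one component at a time with
    plane rotations."""
    L, x = L.copy(), x.astype(float).copy()
    n = x.size
    for k in range(n):
        r = np.hypot(L[k, k], x[k])
        c, s = r / L[k, k], x[k] / L[k, k]
        L[k, k] = r
        if k + 1 < n:
            L[k+1:, k] = (L[k+1:, k] + s * x[k+1:]) / c
            x[k+1:] = c * x[k+1:] - s * L[k+1:, k]
    return L

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 4))
A = A @ A.T + 4 * np.eye(4)       # symmetric positive definite test matrix
x = rng.standard_normal(4)
L = np.linalg.cholesky(A)
assert np.allclose(chol_update(L, x), np.linalg.cholesky(A + np.outer(x, x)))
```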

562 citations

01 Jan 2007
TL;DR: This work considers the numerical calculation of several matrix eigenvalue problems which require some manipulation before the standard algorithms may be used, and studies several eigenvalue problems which arise in least squares.
Abstract: We consider the numerical calculation of several matrix eigenvalue problems which require some manipulation before the standard algorithms may be used. This includes finding the stationary values of a quadratic form subject to linear constraints and determining the eigenvalues of a matrix which is modified by a matrix of rank one. We also consider several inverse eigenvalue problems. This includes the problem of determining the coefficients for the Gauss–Radau and Gauss–Lobatto quadrature rules. In addition, we study several eigenvalue problems which arise in least squares.
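For the rank-one modification problem mentioned here, a well-known property (checked below in a NumPy sketch with our own illustrative data) is interlacing: for $\rho > 0$, the eigenvalues of $D + \rho zz^T$ satisfy $d_i \leq \lambda_i \leq d_{i+1}$, with $\lambda_n \leq d_n + \rho z^T z$:

```python
import numpy as np

rng = np.random.default_rng(3)
d = np.sort(rng.standard_normal(5))   # eigenvalues of the diagonal matrix D
z = rng.standard_normal(5)
rho = 0.7

lam = np.linalg.eigvalsh(np.diag(d) + rho * np.outer(z, z))  # ascending
# Interlacing for a positive rank-one change (small tolerance for rounding)
assert np.all(lam >= d - 1e-12)
assert np.all(lam[:-1] <= d[1:] + 1e-12)
assert lam[-1] <= d[-1] + rho * (z @ z) + 1e-12
```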

435 citations


Cites background or methods from "Numerical techniques in mathematica..."

  • ...When x = [1, 0, 0]^T, then Ax = Bx and hence λ = 1....


  • ...Another problem which arises frequently is that of finding a least squares solution with a quadratic constraint; we have considered this problem previously in [1]....


  • ...An alternative method has been given in [1] and we shall describe that technique....


  • ...Finally, for x^T = [0, 0, 1], Ax = λBx for all values of λ....


References
Journal Article
TL;DR: In this article, least squares problems that lead to non-linear normal equations are solved by an extension of the standard method which ensures improvement of the initial solution; the process can also be considered an extension of Newton's method.
Abstract: The standard method for solving least squares problems which lead to non-linear normal equations depends upon a reduction of the residuals to linear form by first order Taylor approximations taken about an initial or trial solution for the parameters. If the usual least squares procedure, performed with these linear approximations, yields new values for the parameters which are not sufficiently close to the initial values, the neglect of second and higher order terms may invalidate the process, and may actually give rise to a larger value of the sum of the squares of the residuals than that corresponding to the initial solution. This failure of the standard method to improve the initial solution has received some notice in statistical applications of least squares and has been encountered rather frequently in connection with certain engineering applications involving the approximate representation of one function by another. The purpose of this article is to show how the problem may be solved by an extension of the standard method which insures improvement of the initial solution. The process can also be used for solving non-linear simultaneous equations, in which case it may be considered an extension of Newton's method. Let the function to be approximated be $h(x, y, z, \ldots)$, and let the approximating function be $H(x, y, z, \ldots; \alpha, \beta, \gamma, \ldots)$, where $\alpha, \beta, \gamma, \ldots$ are the unknown parameters. Then the residuals at the points $(x_i, y_i, z_i, \ldots)$, $i = 1, 2, \ldots, n$, are …
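The extension described (damping the linearized step so that the sum of squares is forced to decrease) is the idea now known as the Levenberg(–Marquardt) method. A minimal sketch (Python/NumPy; the model, data, and names are our illustrative assumptions, not the article's):

```python
import numpy as np

def damped_gauss_newton(r, J, x0, lam=1e-2, iters=50):
    """Minimize ||r(x)||^2: solve (J^T J + lam I) dx = -J^T r each step,
    raising the damping lam when a step would increase the sum of
    squares and relaxing it when a step is accepted."""
    x = np.asarray(x0, float)
    for _ in range(iters):
        res, Jx = r(x), J(x)
        dx = np.linalg.solve(Jx.T @ Jx + lam * np.eye(x.size), -Jx.T @ res)
        if np.sum(r(x + dx) ** 2) < np.sum(res ** 2):
            x, lam = x + dx, lam * 0.5   # accept step, relax damping
        else:
            lam *= 10.0                  # reject step, increase damping
    return x

# Fit y = a exp(b t) to noiseless synthetic data (illustrative example)
t = np.linspace(0.0, 1.0, 20)
y = 2.0 * np.exp(-1.5 * t)
r = lambda p: p[0] * np.exp(p[1] * t) - y
J = lambda p: np.column_stack([np.exp(p[1] * t), p[0] * t * np.exp(p[1] * t)])
p = damped_gauss_newton(r, J, np.array([1.0, 0.0]))
assert np.allclose(p, [2.0, -1.5], atol=1e-4)
```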

11,253 citations

Journal Article
01 Jul 1955
TL;DR: A generalization of the inverse of a non-singular matrix is described as the unique solution of a certain set of equations; it is used for solving linear matrix equations and for finding an expression for the principal idempotent elements of a matrix.
Abstract: This paper describes a generalization of the inverse of a non-singular matrix, as the unique solution of a certain set of equations. This generalized inverse exists for any (possibly rectangular) matrix whatsoever with complex elements. It is used here for solving linear matrix equations, and among other applications for finding an expression for the principal idempotent elements of a matrix. Also a new type of spectral decomposition is given.
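Penrose's defining equations are easy to verify numerically; in NumPy the generalized inverse is available as np.linalg.pinv (a quick illustrative check, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((4, 6))   # any rectangular matrix
X = np.linalg.pinv(A)             # Moore-Penrose generalized inverse

# X is the unique solution of Penrose's four equations:
assert np.allclose(A @ X @ A, A)        # A X A = A
assert np.allclose(X @ A @ X, X)        # X A X = X
assert np.allclose((A @ X).T, A @ X)    # A X symmetric (Hermitian in general)
assert np.allclose((X @ A).T, X @ A)    # X A symmetric
```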

3,769 citations

Journal Article
TL;DR: The decomposition of A is called the singular value decomposition (SVD); the diagonal elements of Σ are the non-negative square roots of the eigenvalues of A^T A and are called singular values.
Abstract: Let $A$ be a real $m \times n$ matrix with $m \geq n$. It is well known (cf. [4]) that $$A = U \Sigma V^T \quad (1)$$ where $$U^T U = V^T V = V V^T = I_n \quad \text{and} \quad \Sigma = \mathrm{diag}(\sigma_1, \ldots, \sigma_n).$$ The matrix $U$ consists of $n$ orthonormalized eigenvectors associated with the $n$ largest eigenvalues of $A A^T$, and the matrix $V$ consists of the orthonormalized eigenvectors of $A^T A$. The diagonal elements of $\Sigma$ are the non-negative square roots of the eigenvalues of $A^T A$; they are called singular values. We shall assume that $$\sigma_1 \geq \sigma_2 \geq \cdots \geq \sigma_n \geq 0.$$ Thus if $\mathrm{rank}(A) = r$, then $\sigma_{r+1} = \sigma_{r+2} = \cdots = \sigma_n = 0$. The decomposition (1) is called the singular value decomposition (SVD).
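In NumPy this factorization is np.linalg.svd; a short check of the stated properties (illustrative only):

```python
import numpy as np

rng = np.random.default_rng(5)
A = rng.standard_normal((6, 4))                   # real m x n with m >= n
U, s, Vt = np.linalg.svd(A, full_matrices=False)  # thin SVD: U is m x n

assert np.allclose(U @ np.diag(s) @ Vt, A)           # A = U Sigma V^T
assert np.allclose(U.T @ U, np.eye(4))               # U^T U = I_n
assert np.allclose(Vt @ Vt.T, np.eye(4))             # V^T V = I_n
assert np.all(s[:-1] >= s[1:]) and np.all(s >= 0)    # sigma_1 >= ... >= 0
# Singular values are square roots of the eigenvalues of A^T A
assert np.allclose(np.sort(s**2), np.sort(np.linalg.eigvalsh(A.T @ A)))
```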

3,036 citations

Book
01 Jan 1966

2,966 citations