scispace - formally typeset
Author

Peter A. Businger

Other affiliations: University of Texas at Austin
Bio: Peter A. Businger is an academic researcher from Bell Labs. The author has contributed to research in topics: Singular value decomposition & Singular value. The author has an h-index of 6, co-authored 7 publications receiving 640 citations. Previous affiliations of Peter A. Businger include University of Texas at Austin.

Papers
Journal ArticleDOI
TL;DR: In this paper, a vector $\hat x$ minimizing $\parallel b - A\hat x\parallel$ is determined; since the euclidean norm is unitarily invariant, $\parallel b - Ax\parallel = \parallel c - QAx\parallel$ with $c = Qb$, and $\hat x = \tilde R^{-1}\tilde c$, where $\tilde c$ denotes the first n components of c.
Abstract: Let A be a given m×n real matrix with m≧n and of rank n, and let b be a given vector. We wish to determine a vector $\hat x$ such that $$\parallel b - A\hat x\parallel = \min .$$ where ∥ … ∥ indicates the euclidean norm. Since the euclidean norm is unitarily invariant, $$\parallel b - Ax\parallel = \parallel c - QAx\parallel$$ where c=Q b and Q^T Q = I. We choose Q so that $$QA = R = \begin{pmatrix} \tilde R \\ 0 \end{pmatrix} \begin{matrix} \scriptstyle n \times n \\ \scriptstyle (m-n) \times n \end{matrix}$$ (1) where $\tilde R$ is an n×n upper triangular matrix. Clearly, $$\hat x = {\tilde R^{-1}}\tilde c$$ where $\tilde c$ denotes the first n components of c.
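A minimal sketch of this least-squares procedure, using NumPy's built-in QR factorization (the example data is made up for illustration; the paper's Q satisfies QA = R, which corresponds to the transpose of NumPy's Q):

```python
import numpy as np

# Factor A with an orthogonal Q so that QA is upper triangular, then
# back-substitute with the first n components of c = Qb.
rng = np.random.default_rng(0)
m, n = 6, 3
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)

Q, R = np.linalg.qr(A, mode="complete")    # Q is m x m orthogonal, R is m x n
c = Q.T @ b                                # c = Q^T b (the paper's Qb)
x_hat = np.linalg.solve(R[:n, :n], c[:n])  # solve the triangular system R~ x = c~
```

The residual of the dropped components, ‖c[n:]‖, equals the minimal value of ‖b − Ax‖.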

480 citations

Journal ArticleDOI
TL;DR: The main idea is to interleave compositions of x and n−x objects and resort to a lexicographic generation of compositions.
Abstract: procedure Ising (n, x, t, S); integer n, x, t; integer array S; comment Ising generates n-sequences $(S_1, \ldots, S_n)$ of zeros and ones where $x = \sum_{i=1}^{n} S_i$ and $t = \sum_{i=1}^{n-1} |S_{i+1} - S_i|$ are given. The main idea is to interleave compositions of x and n−x objects and resort to a lexicographic generation of compositions. We call these sequences Ising configurations since we believe they first appeared in the study of the so-called Ising problem (see Hill [1], Ising [2]). The number R(n, x, t) of distinct configurations with fixed n, x, t is well known [1, 2]:
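To make the objects concrete, here is a brute-force Python sketch of what the procedure enumerates (illustrative only; the published ALGOL routine uses lexicographic generation of compositions rather than exhaustive search):

```python
from itertools import product

def ising_configs(n, x, t):
    """All 0/1 sequences (S1, ..., Sn) with sum(S) == x and
    sum(|S[i+1] - S[i]|) == t, found by exhaustive search."""
    return [s for s in product((0, 1), repeat=n)
            if sum(s) == x
            and sum(abs(s[i + 1] - s[i]) for i in range(n - 1)) == t]

# e.g. n=4, x=2, t=2 admits exactly the configurations 0110 and 1001
configs = ising_configs(4, 2, 2)
```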

121 citations

Journal ArticleDOI
Peter A. Businger1
TL;DR: In this article, sufficient conditions are given for a matrix to be optimally scalable in the sense of minimizing its condition number, in particular in the case of simultaneous row- and column-scaling subordinate to the $l_1$- or $l_\infty$-norm.
Abstract: Sufficient conditions are given for a matrix to be optimally scalable in the sense of minimizing its condition number. In particular, in the case of simultaneous row- and column-scaling subordinate to the $l_1$- or $l_\infty$-norm, the minimal condition number is achieved for fully indecomposable matrices.
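A toy numerical illustration of why scaling matters (using NumPy's 2-norm condition number for convenience, whereas the paper's results are stated for subordinate norms such as $l_1$ and $l_\infty$; the matrix is made up):

```python
import numpy as np

# An ill-conditioned diagonal matrix
A = np.diag([1.0, 1.0e6])
before = np.linalg.cond(A)              # about 1e6

# Simultaneous row- and column-scaling D A D with D = diag(a_ii^(-1/2))
# maps A to the identity, the optimally scaled matrix in this toy case.
D = np.diag(1.0 / np.sqrt(np.diag(A)))
after = np.linalg.cond(D @ A @ D)       # 1, the smallest possible value
```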

28 citations

Journal ArticleDOI
Peter A. Businger1
TL;DR: An effective and inexpensive test is proposed for recognizing numerical difficulties when solving systems of linear algebraic equations by Gaussian elimination with partial pivoting, which requires less computational work than complete pivoting.
Abstract: Complete pivoting is known to be numerically preferable to partial pivoting for solving systems of linear algebraic equations by Gaussian elimination. However, partial pivoting requires less computational work. Hence we should like to use partial pivoting provided we can easily recognize numerical difficulties. We propose an effective and inexpensive test for this purpose.
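One cheap stability monitor of the kind the abstract motivates is the element-growth ratio; the following Python sketch tracks it during elimination with partial pivoting (illustrative only; the specific test proposed in the paper may differ, and the routine assumes a nonsingular square matrix):

```python
import numpy as np

def lu_partial_pivot_growth(A):
    """Gaussian elimination with partial pivoting, tracking the element
    growth ratio max|u_ij| / max|a_ij| as a cheap stability monitor."""
    U = np.array(A, dtype=float)
    n = U.shape[0]
    max0 = np.abs(U).max()
    growth = 1.0
    for k in range(n - 1):
        p = k + int(np.argmax(np.abs(U[k:, k])))  # largest pivot in column k
        U[[k, p]] = U[[p, k]]                     # row interchange
        for i in range(k + 1, n):
            U[i, k:] -= (U[i, k] / U[k, k]) * U[k, k:]
            U[i, k] = 0.0                         # exact zero below the pivot
        growth = max(growth, np.abs(U).max() / max0)
    return U, growth
```

A large growth ratio signals that partial pivoting may be losing accuracy and a more careful strategy is warranted.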

24 citations

Patent
Peter A. Businger1
15 Dec 1969
TL;DR: In this paper, a method of ensuring the numerical stability of the machine-implemented computational process of Gaussian elimination is proposed, in which the accuracy of complete pivoting is substantially obtained without sacrificing the economy of partial pivoting, except in those cases where it is essential to do so to preserve accuracy.
Abstract: A method of insuring the numerical stability of the machine-implemented computational process of Gaussian elimination. The accuracy of the method of complete pivoting is substantially obtained without sacrificing the economy of the method of partial pivoting, except in those cases where it is essential to do so to preserve accuracy.

7 citations


Cited by
Journal ArticleDOI
TL;DR: The decomposition of A is called the singular value decomposition (SVD); the diagonal elements of Σ are the non-negative square roots of the eigenvalues of A^T A and are called singular values.
Abstract: Let A be a real m×n matrix with m≧n. It is well known (cf. [4]) that $$A = U\Sigma V^T$$ (1) where $$U^T U = V^T V = V V^T = I_n \text{ and } \Sigma = \operatorname{diag}(\sigma_1, \ldots, \sigma_n).$$ The matrix U consists of n orthonormalized eigenvectors associated with the n largest eigenvalues of AA^T, and the matrix V consists of the orthonormalized eigenvectors of A^T A. The diagonal elements of Σ are the non-negative square roots of the eigenvalues of A^T A; they are called singular values. We shall assume that $$\sigma_1 \geqq \sigma_2 \geqq \cdots \geqq \sigma_n \geqq 0.$$ Thus if rank(A)=r, $\sigma_{r+1} = \sigma_{r+2} = \cdots = \sigma_n = 0$. The decomposition (1) is called the singular value decomposition (SVD).
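The stated relation between the singular values and the eigenvalues of $A^T A$ is easy to check numerically (the example matrix is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 3))

sigma = np.linalg.svd(A, compute_uv=False)        # sigma_1 >= ... >= sigma_n >= 0
eig = np.sort(np.linalg.eigvalsh(A.T @ A))[::-1]  # eigenvalues of A^T A, descending
# rounding can leave tiny negative eigenvalues, so clamp before the square root
ok = np.allclose(sigma, np.sqrt(np.maximum(eig, 0.0)))
```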

3,036 citations

Journal ArticleDOI
TL;DR: Experimental results indicate that MGS gives $\theta_k$ with equal precision and fewer arithmetic operations than HT; however, HT gives principal vectors that are orthogonal to working accuracy, which is not in general true for MGS.
Abstract: Assume that two subspaces F and G of unitary space are defined as the ranges (or nullspaces) of given rectangular matrices A and B. Accurate numerical methods are developed for computing the principal angles $\theta_k (F,G)$ and orthogonal sets of principal vectors $u_k\ \epsilon\ F$ and $v_k\ \epsilon\ G$, k = 1,2,..., q = dim(G) $\leq$ dim(F). An important application in statistics is computing the canonical correlations $\sigma_k = \cos \theta_k$ between two sets of variates. A perturbation analysis shows that the condition number for $\theta_k$ essentially is max($\kappa (A),\kappa (B)$), where $\kappa$ denotes the condition number of a matrix. The algorithms are based on a preliminary QR-factorization of A and B (or $A^H$ and $B^H$), for which either the method of Householder transformations (HT) or the modified Gram-Schmidt method (MGS) is used. Then cos $\theta_k$ and sin $\theta_k$ are computed as the singular values of certain related matrices. Experimental results are given which indicate that MGS gives $\theta_k$ with equal precision and fewer arithmetic operations than HT. However, HT gives principal vectors which are orthogonal to working accuracy, which is not in general true for MGS. Finally, the case when A and/or B are rank deficient is discussed.
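The overall scheme (preliminary QR-factorization, then singular values of $Q_A^T Q_B$) can be sketched compactly in Python; this follows the structure described in the abstract for real full-rank matrices, not the paper's exact algorithms:

```python
import numpy as np

def principal_angles(A, B):
    """Principal angles theta_k between range(A) and range(B):
    orthonormalize the columns with QR, then cos(theta_k) are the
    singular values of Q_A^T Q_B."""
    QA, _ = np.linalg.qr(A)
    QB, _ = np.linalg.qr(B)
    cos_theta = np.linalg.svd(QA.T @ QB, compute_uv=False)
    # clamp against rounding before taking arccos
    return np.arccos(np.clip(cos_theta, -1.0, 1.0))
```

For example, the planes spanned by (e1, e2) and (e1, e3) share the direction e1, so the principal angles are 0 and π/2.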

763 citations

Journal ArticleDOI
TL;DR: Two algorithms are presented for computing rank-revealing QR factorizations that are nearly as efficient as QR with column pivoting for most problems and take O(mn²) floating-point operations in the worst case.
Abstract: Given an m×n matrix M with m > n, it is shown that there exists a permutation Π and an integer k such that the QR factorization $$M\Pi = Q\begin{pmatrix} A_k & B_k \\ 0 & C_k \end{pmatrix}$$ reveals the numerical rank of M: the k×k upper-triangular matrix $A_k$ is well conditioned, $\|C_k\|_2$ is small, and $B_k$ is linearly dependent on $A_k$ with coefficients bounded by a low-degree polynomial in n. Existing rank-revealing QR (RRQR) algorithms are related to such factorizations, and two algorithms are presented for computing them. The new algorithms are nearly as efficient as QR with column pivoting for most problems and take O(mn²) floating-point operations in the worst case.
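For reference, the classical column-pivoting strategy the abstract compares against can be sketched as a numerical-rank estimator (an illustrative Householder-QR implementation, not one of the paper's RRQR algorithms):

```python
import numpy as np

def rank_by_pivoted_qr(A, tol=1e-10):
    """Numerical rank via Householder QR with column pivoting: at each
    step move the remaining column of largest norm to the front, then
    count the diagonal entries of R that exceed tol."""
    R = np.array(A, dtype=float)
    m, n = R.shape
    for k in range(min(m, n)):
        # pivot: remaining column with the largest norm
        j = k + int(np.argmax(np.linalg.norm(R[k:, k:], axis=0)))
        R[:, [k, j]] = R[:, [j, k]]
        # Householder reflection zeroing R[k+1:, k]
        v = R[k:, k].copy()
        alpha = np.linalg.norm(v)
        if alpha == 0.0:
            continue
        v[0] += alpha if v[0] >= 0 else -alpha
        v /= np.linalg.norm(v)
        R[k:, k:] -= 2.0 * np.outer(v, v @ R[k:, k:])
    return int(np.sum(np.abs(np.diag(R)) > tol))
```

This greedy pivoting usually reveals the rank, but it can fail on contrived examples, which is exactly the gap the RRQR factorizations above are designed to close.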

698 citations

Journal ArticleDOI
TL;DR: Developments in the theory of linear least-squares estimation in the last thirty years or so are outlined, with particular attention to early mathematical work in the field and to more modern developments showing some of the many connections between least-squares filtering and other fields.
Abstract: Developments in the theory of linear least-squares estimation in the last thirty years or so are outlined. Particular attention is paid to early mathematical work in the field and to more modern developments showing some of the many connections between least-squares filtering and other fields.

696 citations

Journal ArticleDOI
TL;DR: A survey of computational methods in linear algebra can be found in this article, where the authors discuss the means and methods of estimating the quality of numerical solution of computational problems, the generalized inverse of a matrix, the solution of systems with rectangular and poorly conditioned matrices, and more traditional questions such as algebraic eigenvalue problems and systems with a square matrix.
Abstract: The authors' survey paper is devoted to the present state of computational methods in linear algebra. Questions discussed are the means and methods of estimating the quality of numerical solution of computational problems, the generalized inverse of a matrix, the solution of systems with rectangular and poorly conditioned matrices, the inverse eigenvalue problem, and more traditional questions such as algebraic eigenvalue problems and the solution of systems with a square matrix (by direct and iterative methods).

667 citations