01 Jan 2003 · SIAM Review (Society for Industrial and Applied Mathematics), Vol. 45, Iss. 3, pp. 504-519
TL;DR: This work defines and explores the properties of the exchange operator, which maps J-orthogonal matrices to orthogonal matrices and vice versa, and shows how the exchange operator can be used to obtain a hyperbolic CS decomposition of a J-orthogonal matrix directly from the usual CS decomposition of an orthogonal matrix.
Abstract: A real, square matrix $Q$ is J-orthogonal if $Q^TJQ = J$, where the signature matrix $J = \mathrm{diag}(\pm 1)$. J-orthogonal matrices arise in the analysis and numerical solution of various matrix problems involving indefinite inner products, including, in particular, the downdating of Cholesky factorizations. We present techniques and tools useful in the analysis, application, and construction of these matrices, giving a self-contained treatment that provides new insights. First, we define and explore the properties of the exchange operator, which maps J-orthogonal matrices to orthogonal matrices and vice versa. Then we show how the exchange operator can be used to obtain a hyperbolic CS decomposition of a J-orthogonal matrix directly from the usual CS decomposition of an orthogonal matrix. We employ the decomposition to derive an algorithm for constructing random J-orthogonal matrices with specified norm and condition number. We also give a short proof of the fact that J-orthogonal matrices are optimally scaled und...
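As a concrete illustration of the defining property (a hypothetical example, not taken from the paper), a $2\times 2$ hyperbolic rotation is the simplest nontrivial J-orthogonal matrix; it also shows how, unlike an orthogonal matrix, its 2-norm can grow without bound:

```python
import numpy as np

# Hyperbolic rotation: the simplest nontrivial J-orthogonal matrix
# (illustrative example; the parameter t is arbitrary).
t = 0.7
Q = np.array([[np.cosh(t), np.sinh(t)],
              [np.sinh(t), np.cosh(t)]])
J = np.diag([1.0, -1.0])  # signature matrix diag(+1, -1)

# Defining property: Q^T J Q = J.
assert np.allclose(Q.T @ J @ Q, J)

# Unlike an orthogonal matrix, ||Q||_2 = e^t is unbounded as t grows
# (the singular values of Q are cosh(t) +/- sinh(t) = e^t, e^{-t}).
assert np.isclose(np.linalg.norm(Q, 2), np.exp(t))
```

Increasing $t$ makes $Q$ arbitrarily ill conditioned, which is why constructing random J-orthogonal matrices with a specified norm and condition number, as in the abstract above, is a nontrivial task.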
TL;DR: A key feature of this analysis is the identification of two particular classes of scalar products, termed unitary and orthosymmetric, which serve to unify assumptions for the existence of structured factorizations.
Abstract: Let $A$ belong to an automorphism group, Lie algebra, or Jordan algebra of a scalar product. When $A$ is factored, to what extent do the factors inherit structure from $A$? We answer this question for the principal matrix square root, the matrix sign decomposition, and the polar decomposition. For general $A$, we give a simple derivation and characterization of a particular generalized polar decomposition, and we relate it to other such decompositions in the literature. Finally, we study eigendecompositions and structured singular value decompositions, considering in particular the structure in eigenvalues, eigenvectors, and singular values that persists across a wide range of scalar products.
A key feature of our analysis is the identification of two particular classes of scalar products, termed unitary and orthosymmetric, which serve to unify assumptions for the existence of structured factorizations. A variety of different characterizations of these scalar product classes are given.
TL;DR: In this paper, a generalized tensor function based on the tensor singular value decomposition (T-SVD) is defined, from which the projection operators and Moore-Penrose inverse of tensors are obtained.
TL;DR: It is shown that group structure is preserved precisely when $f(A^{-1}) = f(A)^{-1}$ for bilinear forms and when $f(A^{-*}) = f(A)^{-*}$ for sesquilinear forms, where $\mathbb{G}$ is the matrix automorphism group associated with a bilinear or sesquilinear form, and meromorphic functions that satisfy each of these conditions are characterized.
Abstract: For which functions $f$ does $A\in\mathbb{G} \Rightarrow f(A)\in\mathbb{G}$ when $\mathbb{G}$ is the matrix automorphism group associated with a bilinear or sesquilinear form? For example, if $A$ is symplectic when is $f(A)$ symplectic? We show that group structure is preserved precisely when $f(A^{-1}) = f(A)^{-1}$ for bilinear forms and when $f(A^{-*}) = f(A)^{-*}$ for sesquilinear forms. Meromorphic functions that satisfy each of these conditions are characterized. Related to structure preservation is the condition $f(\overline{A}) = \overline{f(A)}$, and analytic functions and rational functions satisfying this condition are also characterized. These results enable us to characterize all meromorphic functions that map every $\mathbb{G}$ into itself as the ratio of a polynomial and its ``reversal,'' up to a monomial factor and conjugation.
The principal square root is an important example of a function that preserves every automorphism group $\mathbb{G}$. By exploiting the matrix sign function, a new family of coupled iterations for the matrix square root is derived. Some of these iterations preserve every $\mathbb{G}$; all of them are shown, via a novel Fréchet derivative-based analysis, to be numerically stable.
A rewritten form of Newton's method for the square root of $A\in\mathbb{G}$ is also derived. Unlike the original method, this new form has good numerical stability properties, and we argue that it is the iterative method of choice for computing $A^{1/2}$ when $A\in\mathbb{G}$. Our tools include a formula for the sign of a certain block $2\times 2$ matrix, the generalized polar decomposition along with a wide class of iterations for computing it, and a connection between the generalized polar decomposition of $I+A$ and the square root of $A\in\mathbb{G}$.
TL;DR: In this paper, an extensive and unified collection of structure-preserving transformations for non-degenerate bilinear or sesquilinear forms on $\mathbb{R}^n$ or $\mathbb{C}^n$ is presented.
Abstract: An extensive and unified collection of structure-preserving transformations is presented and organized for easy reference. The structures involved arise in the context of a non-degenerate bilinear or sesquilinear form on $\mathbb{R}^n$ or $\mathbb{C}^n$. A variety of transformations belonging to the automorphism groups of these forms, which imitate the action of Givens rotations, Householder reflectors, and Gauss transformations, are constructed. Transformations for performing structured scaling actions are also described. The matrix groups considered in this paper are the complex orthogonal, real, complex and conjugate symplectic, real perplectic, real and complex pseudo-orthogonal, and pseudo-unitary groups. In addition to deriving new transformations, this paper collects and unifies existing structure-preserving tools.
TL;DR: In this work, a model for damped linear systems and their simultaneous diagonalization (modal damping) is presented, along with a spectral decomposition of a general J-Hermitian matrix.
Abstract: 1 The model.- 2 Simultaneous diagonalisation (Modal damping).- 3 Phase space.- 4 The singular mass case.- 5 "Indefinite metric".- 6 Matrices and indefinite scalar products.- 7 Oblique projections.- 8 J-orthogonal projections.- 9 Spectral properties and reduction of J-Hermitian matrices.- 10 Definite spectra.- 11 General Hermitian matrix pairs.- 12 Spectral decomposition of a general J-Hermitian matrix.- 13 The matrix exponential.- 14 The quadratic eigenvalue problem.- 15 Simple eigenvalue inclusions.- 16 Spectral shift.- 17 Resonances and resolvents.- 18 Well-posedness.- 19 Modal approximation.- 20 Modal approximation and overdampedness.- 21 Passive control.- 22 Perturbing matrix exponential.- 23 Notes and remarks.
TL;DR: In this book, the authors present results of both classic and recent matrix analysis using canonical forms as a unifying theme, and demonstrate their importance in a variety of applications across mathematical and physical science.
Abstract: Linear algebra and matrix theory are fundamental tools in mathematical and physical science, as well as fertile fields for research. This new edition of the acclaimed text presents results of both classic and recent matrix analyses using canonical forms as a unifying theme, and demonstrates their importance in a variety of applications. The authors have thoroughly revised, updated, and expanded on the first edition. The book opens with an extended summary of useful concepts and facts and includes numerous new topics and features, such as: - New sections on the singular value and CS decompositions - New applications of the Jordan canonical form - A new section on the Weyr canonical form - Expanded treatments of inverse problems and of block matrices - A central role for the Von Neumann trace theorem - A new appendix with a modern list of canonical forms for a pair of Hermitian matrices and for a symmetric-skew symmetric pair - Expanded index with more than 3,500 entries for easy reference - More than 1,100 problems and exercises, many with hints, to reinforce understanding and develop auxiliary themes such as finite-dimensional quantum systems, the compound and adjugate matrices, and the Loewner ellipsoid - A new appendix provides a collection of problem-solving hints.
TL;DR: In this work, perturbation theory for linear systems and least squares problems, eigenvalues, invariant subspaces, and generalized eigenvalue problems is studied.
Abstract: Preliminaries. Norms and Metrics. Linear Systems and Least Squares Problems. The Perturbation of Eigenvalues. Invariant Subspaces. Generalized Eigenvalue Problems.
TL;DR: In this book, Higham gives a thorough, up-to-date treatment of the behavior of numerical algorithms in finite precision arithmetic, combining algorithmic derivations, perturbation theory, and rounding error analysis.
Abstract: From the Publisher:
What is the most accurate way to sum floating point numbers? What are the advantages of IEEE arithmetic? How accurate is Gaussian elimination and what were the key breakthroughs in the development of error analysis for the method? The answers to these and many related questions are included here.
This book gives a thorough, up-to-date treatment of the behavior of numerical algorithms in finite precision arithmetic. It combines algorithmic derivations, perturbation theory, and rounding error analysis. Software practicalities are emphasized throughout, with particular reference to LAPACK and MATLAB. The best available error bounds, some of them new, are presented in a unified format with a minimum of jargon. Because of its central role in revealing problem sensitivity and providing error bounds, perturbation theory is treated in detail.
Historical perspective and insight are given, with particular reference to the fundamental work of Wilkinson and Turing, and the many quotations provide further information in an accessible format.
The book is unique in that algorithmic developments and motivations are given succinctly and implementation details minimized, so that attention can be concentrated on accuracy and stability results. Here, in one place and in a unified notation, is error analysis for most of the standard algorithms in matrix computations. Not since Wilkinson's Rounding Errors in Algebraic Processes (1963) and The Algebraic Eigenvalue Problem (1965) has any volume treated this subject in such depth. A number of topics are treated that are not usually covered in numerical analysis textbooks, including floating point summation, block LU factorization, condition number estimation, the Sylvester equation, powers of matrices, finite precision behavior of stationary iterative methods, Vandermonde systems, and fast matrix multiplication.
Although not designed specifically as a textbook, this volume is a suitable reference for an advanced course, and could be used by instructors at all levels as a supplementary text from which to draw examples, historical perspective, statements of results, and exercises (many of which have never before appeared in textbooks). The book is designed to be a comprehensive reference and its bibliography contains more than 1100 references from the research literature.
Audience
Specialists in numerical analysis as well as computational scientists and engineers concerned about the accuracy of their results will benefit from this book. Much of the book can be understood with only a basic grounding in numerical analysis and linear algebra.
About the Author
Nicholas J. Higham is a Professor of Applied Mathematics at the University of Manchester, England. He is the author of more than 40 publications and is a member of the editorial boards of the SIAM Journal on Matrix Analysis and Applications and the IMA Journal of Numerical Analysis. His book Handbook of Writing for the Mathematical Sciences was published by SIAM in 1993.
TL;DR: This volume treats the numerical solution of dense and large-scale eigenvalue problems with an emphasis on algorithms and the theoretical background required to understand them.
Abstract: This book is the second volume in a projected five-volume survey of numerical linear algebra and matrix algorithms. This volume treats the numerical solution of dense and large-scale eigenvalue problems with an emphasis on algorithms and the theoretical background required to understand them. Stressing depth over breadth, Professor Stewart treats the derivation and implementation of the more important algorithms in detail. The notes and references sections contain pointers to other methods along with historical comments. The book is divided into two parts: dense eigenproblems and large eigenproblems. The first part gives a full treatment of the widely used QR algorithm, which is then applied to the solution of generalized eigenproblems and the computation of the singular value decomposition. The second part treats Krylov sequence methods such as the Lanczos and Arnoldi algorithms and presents a new treatment of the Jacobi-Davidson method. The volumes in this survey are not intended to be encyclopedic. By treating carefully selected topics in depth, each volume gives the reader the theoretical and practical background to read the research literature and implement or modify new algorithms. The algorithms treated are illustrated by pseudocode that has been tested in MATLAB implementations.
TL;DR: Applications of the polar decomposition to factor analysis, aerospace computations and optimisation are outlined; and a new method is derived for computing the square root of a symmetric positive definite matrix.
Abstract: A quadratically convergent Newton method for computing the polar decomposition of a full-rank matrix is presented and analysed. Acceleration parameters are introduced so as to enhance the initial rate of convergence, and it is shown how reliable estimates of the optimal parameters may be computed in practice. To add to the known best approximation property of the unitary polar factor, the Hermitian polar factor $H$ of a nonsingular Hermitian matrix $A$ is shown to be a good positive definite approximation to $A$, and $\frac{1}{2}(A + H)$ is shown to be a best Hermitian positive semidefinite approximation to $A$. Perturbation bounds for the polar factors are derived. Applications of the polar decomposition to factor analysis, aerospace computations and optimisation are outlined; and a new method is derived for computing the square root of a symmetric positive definite matrix.
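The Newton iteration described above, in its unaccelerated real form $X_{k+1} = \frac{1}{2}(X_k + X_k^{-T})$, can be sketched as follows. The function name, stopping test, and test matrix are our own, and the paper's acceleration parameters are omitted:

```python
import numpy as np

def polar_newton(A, tol=1e-12, maxiter=50):
    """Orthogonal polar factor of a nonsingular real matrix via the
    unaccelerated Newton iteration X <- (X + X^{-T})/2 (sketch only)."""
    X = A.copy()
    for _ in range(maxiter):
        X_new = 0.5 * (X + np.linalg.inv(X).T)
        if np.linalg.norm(X_new - X, 1) <= tol * np.linalg.norm(X_new, 1):
            return X_new
        X = X_new
    return X

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4)) + 4 * np.eye(4)  # comfortably nonsingular
U = polar_newton(A)
H = U.T @ A                    # symmetric positive definite polar factor

assert np.allclose(U.T @ U, np.eye(4))  # U is orthogonal
assert np.allclose(H, H.T)              # H is symmetric
assert np.allclose(U @ H, A)            # A = U H
```

The iteration converges globally and quadratically for any nonsingular starting matrix, which is why only a crude convergence test is needed here.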
446 citations
"J-Orthogonal Matrices: Properties a..." refers to background or methods in this paper:
..., [15], [23]), the latter iteration being produced on setting J = I....
...Analogously to the case of orthogonal matrices and the corresponding Newton iterations [15], [20], we show that these Newton iterations can be used to J-orthogonalize a matrix that is not too far from being J-orthogonal....
Q1. What are the contributions in "J-Orthogonal Matrices: Properties and Generation" (Higham, Nicholas J., 2003)?
The authors present techniques and tools useful in the analysis, application, and construction of these matrices, giving a self-contained treatment that provides new insights. They show how the exchange operator can be used to obtain a hyperbolic CS decomposition of a J-orthogonal matrix directly from the usual CS decomposition of an orthogonal matrix. They also introduce the indefinite polar decomposition and investigate two iterations for computing the J-orthogonal polar factor: a Newton iteration involving only matrix inversion and a Schulz iteration involving only matrix multiplication. They show that these iterations can be used to J-orthogonalize a matrix that is not too far from being J-orthogonal.
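Assuming the Newton iteration referred to here has the form $X_{k+1} = \frac{1}{2}(X_k + JX_k^{-T}J)$ (a reconstruction from the description; the helper name, tolerances, and perturbation are our own), J-orthogonalizing a mildly perturbed matrix might look like:

```python
import numpy as np

def j_orthogonalize(X, J, tol=1e-14, maxiter=50):
    # Newton iteration X <- (X + J X^{-T} J)/2; its fixed points satisfy
    # X^T J X = J, and it converges quadratically when X0 is not too far
    # from a J-orthogonal matrix.
    for _ in range(maxiter):
        Y = 0.5 * (X + J @ np.linalg.inv(X).T @ J)
        if np.linalg.norm(Y - X, "fro") <= tol * np.linalg.norm(Y, "fro"):
            return Y
        X = Y
    return X

J = np.diag([1.0, -1.0])
t = 0.3
Q = np.array([[np.cosh(t), np.sinh(t)],
              [np.sinh(t), np.cosh(t)]])             # exactly J-orthogonal
X0 = Q + 1e-3 * np.array([[0.4, -0.2], [0.1, 0.3]])  # J-orthogonality lost
X = j_orthogonalize(X0, J)

assert np.allclose(X.T @ J @ X, J)  # J-orthogonality restored
```

Note that setting $J = I$ recovers the usual Newton iteration for the orthogonal polar factor, matching the citation context above.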
Q2. What is the Newton iteration of a matrix?
Restoring lost orthogonality is a common requirement, for example in the numerical solution of matrix differential equations having an orthogonal solution [17], or for computed eigenvector matrices of symmetric matrices.
Q3. What is the simplest way to prove the convergence of $X_{k+1}$?
From standard analysis of this iteration (see, e.g., [23]) the authors know that $S_k$ converges quadratically to $\mathrm{sign}(S_0)$, which is the identity matrix since the spectrum of $S_0$ lies in the open right half-plane.
Q4. What is the simplest way to show that the inverse of the Newton iteration is?
Unlike for orthogonal matrices, for general J-orthogonal matrices $\|Q\|_2$ can be arbitrarily large, and this has implications for the attainable accuracy of the Newton and Schulz iterations in floating point arithmetic.
Q5. What is the way to get the inverse of the matrix?
Such an iteration can be obtained by adapting the Schulz iteration, which exists in variants for computing the matrix inverse [31], the orthogonal polar factor [20], the matrix sign function [22], and the matrix square root [18].
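For the matrix-inverse variant mentioned first, the Schulz iteration is $X_{k+1} = X_k(2I - AX_k)$, which uses only matrix multiplication and converges quadratically whenever $\|I - AX_0\| < 1$. A minimal sketch, using the classical starting guess $X_0 = A^T/(\|A\|_1\|A\|_\infty)$ as an assumption not taken from the text:

```python
import numpy as np

def schulz_inverse(A, steps=30):
    """Approximate inverse of A via the Schulz iteration
    X <- X (2I - A X), using only matrix multiplications."""
    n = A.shape[0]
    # Classical starting guess guaranteeing ||I - A X0|| < 1.
    X = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
    for _ in range(steps):
        X = X @ (2 * np.eye(n) - A @ X)
    return X

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
X = schulz_inverse(A)

assert np.allclose(A @ X, np.eye(2))
assert np.allclose(X @ A, np.eye(2))
```

The residual satisfies $I - AX_{k+1} = (I - AX_k)^2$, which makes the quadratic convergence and the condition $\|I - AX_0\| < 1$ immediate.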