J-Orthogonal Matrices: Properties and Generation
Citations
Structured Factorizations in Scalar Product Spaces
Generalized tensor function via the tensor singular value decomposition based on the T-product
Functions Preserving Matrix Groups and Iterations for the Matrix Square Root
Structured tools for structured matrices
Damped Oscillations of Linear Systems: A Mathematical Introduction, by Krešimir Veselić
References
Principal pivot transforms: properties and applications
A Jacobi eigenreduction algorithm for definite matrix pairs
Structured tools for structured matrices
Solving the Indefinite Least Squares Problem by Hyperbolic QR Factorization
Polar decompositions in finite dimensional indefinite scalar product spaces: General theory
Frequently Asked Questions (5)
Q2. What is the Newton iteration of a matrix?
Restoring lost orthogonality is a common requirement, for example in the numerical solution of matrix differential equations having an orthogonal solution [17], or for computed eigenvector matrices of symmetric matrices.
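As an illustrative sketch (not code from the paper), the standard Newton iteration X_{k+1} = (X_k + X_k^{-T})/2 restores orthogonality by converging quadratically to the orthogonal polar factor of its starting matrix; the function name and test matrix below are assumptions for demonstration:

```python
import numpy as np

def newton_orthogonalize(X, iters=20):
    # Newton iteration for the orthogonal polar factor:
    # X_{k+1} = (X_k + X_k^{-T}) / 2, quadratically convergent
    # for nonsingular X_0.
    for _ in range(iters):
        X = 0.5 * (X + np.linalg.inv(X).T)
    return X

rng = np.random.default_rng(0)
Q0, _ = np.linalg.qr(rng.standard_normal((4, 4)))
Q = Q0 + 1e-3 * rng.standard_normal((4, 4))   # orthogonality slightly lost
Q_fixed = newton_orthogonalize(Q)             # orthogonality restored
```

The iteration replaces the perturbed matrix with the nearest orthogonal matrix in the 2-norm, which is exactly the "restore lost orthogonality" task described above.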
Q3. What is the simplest way to prove the convergence of Xk+1?
From standard analysis of this iteration (see, e.g., [23]), it is known that Sk converges quadratically to sign(S0), which is the identity matrix since the spectrum of S0 lies in the open right half-plane.
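For illustration, the matrix sign Newton iteration S_{k+1} = (S_k + S_k^{-1})/2 can be sketched as follows; the example matrix is an assumption, chosen so that its spectrum lies in the open right half-plane and sign(S0) is therefore the identity:

```python
import numpy as np

def sign_newton(S, iters=30):
    # Newton iteration for the matrix sign function:
    # S_{k+1} = (S_k + S_k^{-1}) / 2, quadratically convergent
    # when S has no eigenvalues on the imaginary axis.
    for _ in range(iters):
        S = 0.5 * (S + np.linalg.inv(S))
    return S

# Spectrum {2, 3} lies in the open right half-plane, so sign(S0) = I.
S0 = np.array([[2.0, 1.0],
               [0.0, 3.0]])
S_inf = sign_newton(S0)
```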
Q4. Why can the 2-norm of a J-orthogonal matrix be large, and why does that matter?
Unlike for orthogonal matrices, for general J-orthogonal matrices ‖Q‖2 can be arbitrarily large and this has implications for the attainable accuracy of the Newton and Schulz iterations in floating point arithmetic.
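A small example (an assumption for illustration, not from the paper) of how ‖Q‖2 can grow: for J = diag(1, -1), the hyperbolic rotation below satisfies Q^T J Q = J, so it is J-orthogonal, yet its 2-norm is e^t, which is unbounded as t increases:

```python
import numpy as np

J = np.diag([1.0, -1.0])

def hyperbolic_rotation(t):
    # J-orthogonal for J = diag(1, -1), since
    # cosh(t)^2 - sinh(t)^2 = 1 gives Q.T @ J @ Q = J.
    c, s = np.cosh(t), np.sinh(t)
    return np.array([[c, s],
                     [s, c]])

Q = hyperbolic_rotation(5.0)
# Q is J-orthogonal to rounding error, but its singular values are
# e^t and e^{-t}, so ||Q||_2 = e^5, far larger than the value 1
# attained by every orthogonal matrix.
```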
Q5. How can an inversion-free iteration be obtained?
Such an iteration can be obtained by adapting the Schulz iteration, which exists in variants for computing the matrix inverse [31], the orthogonal polar factor [20], the matrix sign function [22], and the matrix square root [18].
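A minimal sketch of the Schulz iteration in its matrix-inverse variant, X_{k+1} = X_k(2I - A X_k), which needs only matrix multiplications; the starting value below is a standard choice that guarantees convergence for nonsingular A, and the function name is illustrative:

```python
import numpy as np

def schulz_inverse(A, iters=30):
    # Schulz iteration for the inverse: X_{k+1} = X_k (2I - A X_k).
    # The starting value X0 = A.T / (||A||_1 * ||A||_inf) makes the
    # spectral radius of I - A X0 less than 1 for nonsingular A,
    # after which the iteration converges quadratically.
    X = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
    I = np.eye(A.shape[0])
    for _ in range(iters):
        X = X @ (2 * I - A @ X)
    return X

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
A_inv = schulz_inverse(A)
```

Because each step uses only matrix products, the iteration avoids explicit inversion, which is what makes Schulz-type variants attractive as building blocks for the inverse-free iterations mentioned above.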