
Showing papers on "Square matrix published in 2005"


Book
12 Jan 2005
TL;DR: A review of elementary matrix algebra can be found in this article, with a focus on matrix factorizations and matrix norms, as well as generalized inverses.
Abstract: Preface. 1. A Review of Elementary Matrix Algebra. 2. Vector Spaces. 3. Eigenvalues and Eigenvectors. 4. Matrix Factorizations and Matrix Norms. 5. Generalized Inverses. 6. Systems of Linear Equations. 7. Partitioned Matrices. 8. Special Matrices and Matrix Operations. 9. Matrix Derivatives and Related Topics. 10. Some Special Topics Related to Quadratic Forms. References. Index.

790 citations


Journal ArticleDOI
TL;DR: A new backward error analysis of the method is given that employs sharp bounds for the truncation errors and leads to an implementation of essentially optimal efficiency, and a new rounding error analysis shows the computed Pade approximant of the scaled matrix to be highly accurate.
Abstract: The scaling and squaring method is the most widely used method for computing the matrix exponential, not least because it is the method implemented in MATLAB's {\tt expm} function. The method scales the matrix by a power of 2 to reduce the norm to order 1, computes a Pade approximant to the matrix exponential, and then repeatedly squares to undo the effect of the scaling. We give a new backward error analysis of the method (in exact arithmetic) that employs sharp bounds for the truncation errors and leads to an implementation of essentially optimal efficiency. We also give new rounding error analysis that shows the computed Pade approximant of the scaled matrix to be highly accurate. For IEEE double precision arithmetic the best choice of degree of Pade approximant turns out to be 13, rather than the 6 or 8 used by previous authors. Our implementation of the scaling and squaring method always requires at least two fewer matrix multiplications than {\tt expm} when the matrix norm exceeds 1, which can amount to a 37% saving in the number of multiplications, and it is typically more accurate, owing to the fewer required squarings. We also investigate a different scaling and squaring algorithm proposed by Najfeld and Havel that employs a Pade approximation to the function $x \coth(x)$. This method is found to be essentially a variation of the standard one with weaker supporting error analysis.
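The structure of the scaling and squaring method can be sketched in a few lines. This is only an illustration: a truncated Taylor series stands in for the degree-13 Pade approximant of the paper's algorithm, and no attention is paid to the optimal choice of scaling parameter.

```python
import numpy as np

def expm_ss(A, terms=25):
    # Scaling and squaring sketch: scale A by 2^-s so the norm is at most ~1,
    # approximate exp of the scaled matrix (here by a truncated Taylor series,
    # standing in for the Pade approximant), then square s times to undo the scaling.
    n = A.shape[0]
    norm = np.linalg.norm(A, 1)
    s = max(0, int(np.ceil(np.log2(norm)))) if norm > 1 else 0
    B = A / 2.0**s
    X = np.eye(n)
    term = np.eye(n)
    for k in range(1, terms):
        term = term @ B / k
        X = X + term
    for _ in range(s):
        X = X @ X
    return X
```

On a diagonal test matrix the result matches the scalar exponential of each diagonal entry.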

513 citations


Journal ArticleDOI
TL;DR: The Dirac operator in a matrix representation in a kinetically balanced basis is transformed to a quasirelativistic Hamiltonian matrix that has the same electronic eigenstates as the original Dirac matrix.
Abstract: The Dirac operator in a matrix representation in a kinetically balanced basis is transformed to a quasirelativistic Hamiltonian matrix that has the same electronic eigenstates as the original Dirac matrix. This transformation involves a matrix X, for which an exact identity is derived and which can be constructed either in a noniterative way or by various iteration schemes, without requiring an expansion parameter. The convergence behavior of five different iteration schemes is studied numerically, with very promising results.

376 citations


Journal ArticleDOI
TL;DR: Numerical examples demonstrate that the SOAR method exhibits better convergence behavior than the Krylov subspace-based Arnoldi method applied to the linearized QEP.
Abstract: We first introduce a second-order Krylov subspace $\mathcal{G}_n$(A,B;u) based on a pair of square matrices A and B and a vector u. The subspace is spanned by a sequence of vectors defined via a second-order linear homogeneous recurrence relation with coefficient matrices A and B and an initial vector u. It generalizes the well-known Krylov subspace $\mathcal{K}_n$(A;v), which is spanned by a sequence of vectors defined via a first-order linear homogeneous recurrence relation with a single coefficient matrix A and an initial vector v. Then we present a second-order Arnoldi (SOAR) procedure for generating an orthonormal basis of $\mathcal{G}_n$(A,B;u). By applying the standard Rayleigh--Ritz orthogonal projection technique, we derive an SOAR method for solving a large-scale quadratic eigenvalue problem (QEP). This method is applied to the QEP directly. Hence it preserves essential structures and properties of the QEP. Numerical examples demonstrate that the SOAR method outperforms convergence behaviors of the Krylov subspace--based Arnoldi method applied to the linearized QEP.
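The second-order Krylov subspace defined by the recurrence above can be sketched naively: generate the sequence r_0 = u, r_1 = Au, r_j = A r_{j-1} + B r_{j-2} and orthonormalize it afterwards with a plain QR factorization. The SOAR procedure of the paper builds the orthonormal basis far more stably than this; the sketch only shows what space is spanned.

```python
import numpy as np

def second_order_krylov_basis(A, B, u, n):
    # Naive sketch of G_n(A, B; u): build the second-order recurrence
    # r_0 = u, r_1 = A u, r_j = A r_{j-1} + B r_{j-2},
    # then orthonormalize the collected vectors by QR (not the SOAR procedure).
    vecs = [u, A @ u]
    while len(vecs) < n:
        vecs.append(A @ vecs[-1] + B @ vecs[-2])
    Q, _ = np.linalg.qr(np.column_stack(vecs[:n]))
    return Q
```

The returned columns are orthonormal and span the same subspace as the raw recurrence vectors, provided those are linearly independent.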

244 citations


Journal ArticleDOI
TL;DR: The problem of learning a symmetric positive definite matrix is addressed and the derivation and the analyses of the original EG update and AdaBoost generalize to the non-diagonal case, and the resulting matrix exponentiated gradient (MEG) update and DefiniteBoost are applied to the problem oflearning a kernel matrix from distance measurements.
Abstract: We address the problem of learning a symmetric positive definite matrix. The central issue is to design parameter updates that preserve positive definiteness. Our updates are motivated with the von Neumann divergence. Rather than treating the most general case, we focus on two key applications that exemplify our methods: on-line learning with a simple square loss, and finding a symmetric positive definite matrix subject to linear constraints. The updates generalize the exponentiated gradient (EG) update and AdaBoost, respectively: the parameter is now a symmetric positive definite matrix of trace one instead of a probability vector (which in this context is a diagonal positive definite matrix with trace one). The generalized updates use matrix logarithms and exponentials to preserve positive definiteness. Most importantly, we show how the derivation and the analyses of the original EG update and AdaBoost generalize to the non-diagonal case. We apply the resulting matrix exponentiated gradient (MEG) update and DefiniteBoost to the problem of learning a kernel matrix from distance measurements.
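The core of the matrix exponentiated gradient update can be sketched as follows: take a gradient step in the matrix-log domain, exponentiate back, and renormalize to trace one, which keeps the parameter symmetric positive definite. This is a minimal sketch of the update's shape, not the paper's full derivation (the step size `eta` and the symmetrization of the gradient are illustrative choices).

```python
import numpy as np

def _eig_apply(M, f):
    # Apply a scalar function to a symmetric matrix via its eigendecomposition.
    w, V = np.linalg.eigh(M)
    return (V * f(w)) @ V.T

def meg_update(W, grad, eta=0.1):
    # Matrix exponentiated gradient sketch: step in log-space, exponentiate,
    # and renormalize to trace one, preserving positive definiteness.
    L = _eig_apply(W, np.log) - eta * (grad + grad.T) / 2.0
    Wn = _eig_apply(L, np.exp)
    return Wn / np.trace(Wn)
```

Starting from the maximally mixed matrix I/n (the matrix analogue of the uniform probability vector), each update returns a symmetric positive definite matrix of trace one.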

199 citations


Journal ArticleDOI
TL;DR: In this paper, the authors study the spectra of Laplacian matrices and their relation to stochastic matrices. They prove that the normalized Laplacian matrix L̃ is semiconvergent, and that the multiplicities of 0 and 1 as eigenvalues of L̃ equal, respectively, the in-forest dimension of the corresponding digraph and one less than the in-forest dimension of the complementary digraph.

186 citations


Journal ArticleDOI
TL;DR: In this article, a formula for the determinant of the distance matrix of a weighted tree is obtained, a perturbation of D−1 is shown to be an entry-wise positive matrix, and the inertia of the distance matrix is investigated.

129 citations


Journal ArticleDOI
TL;DR: In this paper, two canonical forms for Leonard pairs are introduced: the TD-D canonical form, in which the matrix representing A is irreducible tridiagonal and the matrix representing B is diagonal, and the LB-UB canonical form, in which the matrix representing A is lower bidiagonal and the matrix representing B is upper bidiagonal.

109 citations


Journal ArticleDOI
TL;DR: Using the shifted number system the high-order lifting and integrality certification techniques of Storjohann 2003 for polynomial matrices are extended to the integer case.

85 citations


Journal ArticleDOI
TL;DR: In this paper, a newly developed generalized form of the Peano-Baker series is used to obtain a closed-form solution of the regressive time-varying linear dynamic system, together with a power series representation of the time scale matrix exponential.
Abstract: We give a closed form for the unique solution to the n × n regressive time varying linear dynamic system of the form via use of a newly developed generalized form of the Peano-Baker series. We develop a power series representation for the generalized time scale matrix exponential when the matrix A(t) ≡ A is a constant matrix. We also introduce a finite series representation of the matrix exponential using the Laplace transform for time scales, as well as a theorem which allows us to write the matrix exponential as a series of (n − 1) terms of scalar functions multiplied by powers of the system matrix A.

75 citations


Journal ArticleDOI
TL;DR: It is shown that group structure is preserved precisely when $f(A^{-1}) = f(A)^{-1}$ for bilinear forms and when $f(A^{-*}) = f(A)^{-*}$ for sesquilinear forms, where $\mathbb{G}$ is the matrix automorphism group associated with a bilinear or sesquilinear form, and the meromorphic functions that satisfy each of these conditions are characterized.
Abstract: For which functions $f$ does $A\in\mathbb{G} \Rightarrow f(A)\in\mathbb{G}$ when $\mathbb{G}$ is the matrix automorphism group associated with a bilinear or sesquilinear form? For example, if $A$ is symplectic when is $f(A)$ symplectic? We show that group structure is preserved precisely when $f(A^{-1}) = f(A)^{-1}$ for bilinear forms and when $f(A^{-*}) = f(A)^{-*}$ for sesquilinear forms. Meromorphic functions that satisfy each of these conditions are characterized. Related to structure preservation is the condition $f(\overline{A}) = \overline{f(A)}$, and analytic functions and rational functions satisfying this condition are also characterized. These results enable us to characterize all meromorphic functions that map every $\mathbb{G}$ into itself as the ratio of a polynomial and its ``reversal,'' up to a monomial factor and conjugation. The principal square root is an important example of a function that preserves every automorphism group $\mathbb{G}$. By exploiting the matrix sign function, a new family of coupled iterations for the matrix square root is derived. Some of these iterations preserve every $\mathbb{G}$; all of them are shown, via a novel Frechet derivative-based analysis, to be numerically stable. A rewritten form of Newton's method for the square root of $A\in\mathbb{G}$ is also derived. Unlike the original method, this new form has good numerical stability properties, and we argue that it is the iterative method of choice for computing $A^{1/2}$ when $A\in\mathbb{G}$. Our tools include a formula for the sign of a certain block $2\times 2$ matrix, the generalized polar decomposition along with a wide class of iterations for computing it, and a connection between the generalized polar decomposition of $I+A$ and the square root of $A\in\mathbb{G}$.
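A small numerical illustration of structure preservation: for the real orthogonal group (the automorphism group of the bilinear form with M = I), the principal square root satisfies $f(A^{-1}) = f(A)^{-1}$ and maps the group into itself. The eigendecomposition-based square root below is a sketch valid for diagonalizable matrices with no eigenvalues on the closed negative real axis; it is not one of the paper's iterations.

```python
import numpy as np

def principal_sqrt(A):
    # Principal square root via eigendecomposition (sketch; assumes A is
    # diagonalizable with no eigenvalues on the closed negative real axis).
    w, V = np.linalg.eig(A.astype(complex))
    S = V @ np.diag(np.sqrt(w)) @ np.linalg.inv(V)
    return S.real if np.allclose(S.imag, 0) else S

c, s = np.cos(0.8), np.sin(0.8)
Q = np.array([[c, -s], [s, c]])   # orthogonal: a rotation by 0.8 rad
S = principal_sqrt(Q)             # its principal square root (rotation by 0.4 rad)
```

The square root stays orthogonal, and the characterizing condition for bilinear forms holds.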

Journal ArticleDOI
TL;DR: Properties of GMRES solutions at breakdown are discussed and a modification of GMRES to overcome the breakdown is presented.
Abstract: GMRES is a popular iterative method for the solution of large linear systems of equations with a square nonsingular matrix. When the matrix is singular, GMRES may break down before an acceptable approximate solution has been determined. This paper discusses properties of GMRES solutions at breakdown and presents a modification of GMRES to overcome the breakdown.

Journal ArticleDOI
TL;DR: A substantial acceleration of randomized computation of scalar, univariate, and multivariate matrix determinants is presented, in terms of output-sensitive bit operation complexity bounds, including computation modulo a product of random primes from a fixed range.

Patent
15 Mar 2005
TL;DR: An improved and extended Reed-Solomon-like method for providing a redundancy of m≧3 is described in this article, where a general expression of the codes is described, as well as a systematic criterion for proving correctness and finding decoding algorithms for values of m ≥ 3.
Abstract: An improved and extended Reed-Solomon-like method for providing a redundancy of m≧3 is disclosed. A general expression of the codes is described, as well as a systematic criterion for proving correctness and finding decoding algorithms for values of m≧3. Examples of codes are given for m=3, 4, 5, based on primitive elements of a finite field of dimension N where N is 8, 16 or 32. A Horner's method and accumulator apparatus are described for XOR-efficient evaluation of polynomials with variable vector coefficients and constant sparse square matrix abscissa. A power balancing technique is described to further improve the XOR efficiency of the algorithms. XOR-efficient decoding methods are also described. A tower coordinate technique to efficiently carry out finite field multiplication or inversion for large dimension N forms a basis for one decoding method. Another decoding method uses a stored one-dimensional table of powers of α and Schur expressions to efficiently calculate the inverse of the square submatrices of the encoding matrix.

Journal ArticleDOI
TL;DR: In this paper, a simple recursive scheme for parametrization of n-by-n unitary matrices is presented, which is expressed as a product containing the (n−1)-by-(n−1) matrix and a unitary matrix that contains the additional parameters needed to go from n−1 to n.
Abstract: A simple recursive scheme for parametrization of n-by-n unitary matrices is presented. The n-by-n matrix is expressed as a product containing the (n−1)-by-(n−1) matrix and a unitary matrix that contains the additional parameters needed to go from n−1 to n. The procedure is repeated to obtain recursion formulas for n-by-n unitary matrices.
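The recursive idea can be sketched as follows: embed the (n−1)-by-(n−1) unitary in the top-left block of an n-by-n identity and multiply by an n-by-n unitary carrying the new parameters. Here a Householder-type reflector built from a unit vector plays that role; the paper's own factor differs, so this only illustrates the recursion, not the paper's parametrization.

```python
import numpy as np

def extend_unitary(U, v):
    # Sketch: embed the (n-1)x(n-1) unitary U and multiply by a unitary
    # built from the new parameter vector v (a Householder reflector here,
    # standing in for the paper's parameter-carrying factor).
    n = U.shape[0] + 1
    Ue = np.eye(n, dtype=complex)
    Ue[:-1, :-1] = U
    v = np.asarray(v, dtype=complex)
    v = v / np.linalg.norm(v)
    H = np.eye(n) - 2.0 * np.outer(v, v.conj())  # unitary reflector
    return Ue @ H
```

Repeating the step grows a 1-by-1 unitary into an n-by-n one, unitary at every stage.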

Journal ArticleDOI
TL;DR: It is shown how the matrix square root is related to the constant block coefficient of the inverse of a suitable matrix Laurent polynomial, which allows one to design an efficient algorithm for its computation.
Abstract: We give a new characterization of the matrix square root and a new algorithm for its computation. We show how the matrix square root is related to the constant block coefficient of the inverse of a suitable matrix Laurent polynomial. This fact, besides giving a new interpretation of the matrix square root, allows one to design an efficient algorithm for its computation. The algorithm, which is mathematically equivalent to Newton's method, is quadratically convergent and numerically insensitive to the ill-conditioning of the original matrix and works also in the special case where the original matrix is singular and has a square root.
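For comparison, the Denman-Beavers coupled iteration is a standard numerically stable rewriting of Newton's method for the matrix square root; the paper's Laurent-polynomial algorithm is mathematically equivalent to Newton's method but is not the iteration shown here.

```python
import numpy as np

def sqrtm_db(A, iters=20):
    # Denman-Beavers coupled iteration (a stable Newton variant, shown for
    # illustration; not the paper's Laurent-polynomial algorithm):
    #   Y_{k+1} = (Y_k + Z_k^{-1}) / 2,  Z_{k+1} = (Z_k + Y_k^{-1}) / 2,
    # with Y_k -> A^{1/2} and Z_k -> A^{-1/2}.
    Y = A.astype(float).copy()
    Z = np.eye(A.shape[0])
    for _ in range(iters):
        Yi, Zi = np.linalg.inv(Y), np.linalg.inv(Z)
        Y, Z = 0.5 * (Y + Zi), 0.5 * (Z + Yi)
    return Y
```

The iteration converges quadratically for matrices with no eigenvalues on the closed negative real axis.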

Journal ArticleDOI
TL;DR: New algorithms that can replace the diagonal entries of a Hermitian matrix by any set of diagonal entries that majorize the original set without altering the eigenvalues of the matrix are presented.
Abstract: In this paper, we present new algorithms that can replace the diagonal entries of a Hermitian matrix by any set of diagonal entries that majorize the original set without altering the eigenvalues of the matrix. They perform this feat by applying a sequence of (N-1) or fewer plane rotations, where N is the dimension of the matrix. Both the Bendel--Mickey and the Chan--Li algorithms are special cases of the proposed procedures. Using the fact that a positive semidefinite matrix can always be factored as $X^*X$, we also provide more efficient versions of the algorithms that can directly construct factors with specified singular values and column norms. We conclude with some open problems related to the construction of Hermitian matrices with joint diagonal and spectral properties.

Posted Content
TL;DR: In this article, it was shown that the characteristic polynomial of a symmetric matrix is interlaced by the characteristic polynomial of any principal submatrix, using only the linearity of the determinant and the fact that all eigenvalues of a symmetric matrix are real.
Abstract: Cauchy's interlace theorem states that the characteristic polynomial of a symmetric matrix is interlaced by the characteristic polynomial of any principal submatrix. We prove this in two sentences using only the linearity of the determinant, and the fact that all eigenvalues of a symmetric matrix are real.

Journal ArticleDOI
TL;DR: A constructive perturbation bound of the Drazin inverse of a square matrix is derived using a technique proposed by G. Stewart and based on perturbation theory for invariant subspaces.
Abstract: A constructive perturbation bound of the Drazin inverse of a square matrix is derived using a technique proposed by G. Stewart and based on perturbation theory for invariant subspaces. This is an improvement of the result published by the authors Wei and Li [Numer. Linear Algebra Appl., 10 (2003), pp. 563--575]. It is a totally new approach to developing perturbation bounds for the Drazin inverse of a matrix. A numerical example which indicates the sharpness of the perturbation bound is presented.

Journal ArticleDOI
TL;DR: A lower bound for the mean-square estimation error among the least-square ICI matrix estimators is derived using different training sequences and it is proved that the minimum mean- square error (MMSE) optimality is attained when the training sequences in different OFDM blocks are orthogonal to each other, regardless of the sequence length.
Abstract: The intercarrier interference (ICI) matrix for the orthogonal frequency division multiplexing (OFDM) systems usually has a fairly large dimension. The traditional least-square solution based on the pseudo-inverse operation, therefore, has its limitation. In addition, the provision of a sufficiently long training sequence to estimate the complete ICI matrix is not feasible, since it will result in severe throughput reduction. In this paper, we derive a lower bound for the mean-square estimation error among the least-square ICI matrix estimators using different training sequences and prove that the minimum mean-square error (MMSE) optimality is attained when the training sequences in different OFDM blocks are orthogonal to each other, regardless of the sequence length. We also prove that the asymptotic mean-square estimation error using the maximal-length shift-register sequences (m-sequences) as in the existing communication standards is 3 dB larger than that using the perfectly orthogonal sequences for ICI matrix estimation. Thus, we propose to employ the training sequences based on the Hadamard matrix to achieve a highly efficient and optimal ICI matrix estimator with minimum mean-square estimation error among all least-square ICI matrix estimators. Meanwhile, our new scheme involves only square computational complexity, while other existing least-square methods require complexity proportional to the cube of the ICI matrix size. Analytical and experimental comparisons between our new scheme using Hadamard sequences and the existing method using m-sequences (pseudo-random sequences) show the significant advantages of our new ICI matrix estimator. The proposed method is most suitable for OFDM systems with a large number of subcarriers, using high-order subcarrier modulation, and designed for high RF frequency bands, where accurate ICI estimation is crucial.
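The appeal of Hadamard-based training is that distinct rows of a Hadamard matrix are exactly orthogonal, which is the MMSE-optimality condition derived in the paper. A quick sketch using the standard Sylvester construction (the paper does not prescribe this particular construction):

```python
import numpy as np

def sylvester_hadamard(k):
    # Sylvester construction of a 2^k x 2^k Hadamard matrix:
    # H_{2n} = [[H_n, H_n], [H_n, -H_n]], all entries +1/-1.
    H = np.array([[1.0]])
    for _ in range(k):
        H = np.block([[H, H], [H, -H]])
    return H
```

Rows are mutually orthogonal, so H Hᵀ = n I for the n-by-n matrix.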

Patent
15 Nov 2005
TL;DR: In this article, multiple iterations of Jacobi rotation are performed on a first matrix of complex values with multiple Jacobi-rotation matrices to zero out the off-diagonal elements in the first matrix.
Abstract: Techniques for decomposing matrices using Jacobi rotation are described. Multiple iterations of Jacobi rotation are performed on a first matrix of complex values with multiple Jacobi rotation matrices of complex values to zero out the off-diagonal elements in the first matrix. For each iteration, a submatrix may be formed based on the first matrix and decomposed to obtain eigenvectors for the submatrix, and a Jacobi rotation matrix may be formed with the eigenvectors and used to update the first matrix. A second matrix of complex values, which contains orthogonal vectors, is derived based on the Jacobi rotation matrices. For eigenvalue decomposition, a third matrix of eigenvalues may be derived based on the Jacobi rotation matrices. For singular value decomposition, a fourth matrix with left singular vectors and a matrix of singular values may be derived based on the Jacobi rotation matrices.
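The mechanism of zeroing off-diagonal elements by repeated plane rotations can be illustrated with the classical cyclic Jacobi method for a real symmetric matrix. The patent treats the complex case with eigenvector-based Jacobi rotation matrices; this real-valued sketch only shows the underlying iteration.

```python
import numpy as np

def jacobi_eigh(A, sweeps=12):
    # Cyclic Jacobi sketch for a real symmetric matrix: plane rotations
    # J (acting on rows/columns p, q) progressively zero out off-diagonal
    # entries; the accumulated rotations V form the eigenvector matrix.
    A = A.astype(float).copy()
    n = A.shape[0]
    V = np.eye(n)
    for _ in range(sweeps):
        for p in range(n - 1):
            for q in range(p + 1, n):
                if abs(A[p, q]) < 1e-14:
                    continue
                # Angle that annihilates A[p, q]: tan(2*theta) = 2 A_pq / (A_qq - A_pp)
                theta = 0.5 * np.arctan2(2.0 * A[p, q], A[q, q] - A[p, p])
                c, s = np.cos(theta), np.sin(theta)
                J = np.eye(n)
                J[p, p] = J[q, q] = c
                J[p, q], J[q, p] = s, -s
                A = J.T @ A @ J
                V = V @ J
    return np.diag(A), V
```

After convergence the diagonal holds the eigenvalues and V diagonalizes the input.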

Journal ArticleDOI
TL;DR: The main purpose of this paper is to provide solutions of two problems on sensitivity of eigenvalues and eigendecompositions of matrices and to describe how to construct a nearest matrix having a multiple eigenvalue.

Patent
15 Mar 2005
TL;DR: In this paper, a matrix (I) of eigenvectors is derived using an iterative procedure, where an eigenmode matrix Vi is first initialized, e.g., to an identity matrix, and then updated based on a channel response matrix (II) for a MIMO channel to obtain an updated eigen mode matrix Vi+1.
Abstract: A matrix (I) of eigenvectors is derived using an iterative procedure. For the procedure, an eigenmode matrix Vi is first initialized, e.g., to an identity matrix. The eigenmode matrix Vi is then updated based on a channel response matrix (II) for a MIMO channel to obtain an updated eigenmode matrix Vi+1. The eigenmode matrix may be updated for a fixed or variable number of iterations. The columns of the updated eigenmode matrix may be orthogonalized periodically to improve performance and ensure stability of the iterative procedure. In one embodiment, after completion of all iterations, the updated eigenmode matrix for the last iteration is provided as the matrix (III).

Journal ArticleDOI
TL;DR: A new right-preconditioning process similar to the one presented by Neumaier in 1987, but in the more general context of inner and outer estimation of linear AE-solution sets, is presented in the form of two new auxiliary interval equations.
Abstract: A right-preconditioning process for linear interval systems has been presented by Neumaier in 1987. It allows the construction of an outer estimate of the united solution set of a square linear interval system in the form of a parallelepiped. The denomination "right-preconditioning" is used to describe the preconditioning processes which involve the matrix product AC, in contrast to the (usual) left-preconditioning processes, which involve the matrix product CA, where A and C are respectively the interval matrix of the studied linear interval system and the preconditioning matrix.

Journal ArticleDOI
TL;DR: This work presents a computationally-efficient matrix-vector expression for the solution of a matrix linear least squares problem that arises in multistatic antenna array processing and relates the vectorization-by-columns operator to the diagonal extraction operator.
Abstract: We present a computationally-efficient matrix-vector expression for the solution of a matrix linear least squares problem that arises in multistatic antenna array processing. Our derivation relies on an explicit new relation between Kronecker, Khatri-Rao and Schur-Hadamard matrix products, which involves a selection matrix (i.e., a subset of the columns of a permutation matrix). Moreover, we show that the same selection matrix also relates the vectorization-by-columns operator to the diagonal extraction operator, which plays a central role in our computationally-efficient solution.
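The three products mentioned above are linked by a classical identity (shown here for orientation; it is not the paper's new selection-matrix relation): for the Khatri-Rao product ⊙ (column-wise Kronecker) and the Schur-Hadamard product ∘, one has (A ⊙ B)ᵀ(A ⊙ B) = (AᵀA) ∘ (BᵀB).

```python
import numpy as np

def khatri_rao(A, B):
    # Column-wise Kronecker (Khatri-Rao) product: column j is kron(A[:, j], B[:, j]).
    return np.column_stack([np.kron(A[:, j], B[:, j]) for j in range(A.shape[1])])

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 3))
B = rng.standard_normal((5, 3))
KR = khatri_rao(A, B)
# Classical identity: (A ⊙ B)^T (A ⊙ B) = (A^T A) ∘ (B^T B)
G = (A.T @ A) * (B.T @ B)
```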

Patent
30 Sep 2005
TL;DR: An apparatus, system, and method to perform QR decomposition of an input complex matrix are described, using a triangular systolic array that loads the input complex matrix and an identity matrix, performs a unitary complex matrix transformation requiring three rotation angles, and produces a complex unitary matrix and an upper triangular matrix.
Abstract: An apparatus, system, and method to perform QR decomposition of an input complex matrix are described. The apparatus may include a triangular systolic array to load the input complex matrix and an identity matrix, to perform a unitary complex matrix transformation requiring three rotation angles, and to produce a complex unitary matrix and an upper triangular matrix. The upper triangular matrix may include real diagonal elements. Other embodiments are described and claimed.

Journal ArticleDOI
TL;DR: In this article, the necessary and sufficient conditions for the existence of and the expressions for the skew-symmetric orthogonal solutions of the matrix equation AX = B have been established and the explicit expression of the nearest matrix to a given matrix in the Frobenius norm has been provided.

Proceedings ArticleDOI
24 Jul 2005
TL;DR: In this article, the rank and a null-space basis of a univariate polynomial matrix are computed in O~(nmr^(ω-2)d) field operations using matrix Hensel high-order lifting and matrix minimal fraction reconstruction.
Abstract: We reduce the problem of computing the rank and a null-space basis of a univariate polynomial matrix to polynomial matrix multiplication. For an input n x n matrix of degree d over a field K we give a rank and nullspace algorithm using about the same number of operations as for multiplying two matrices of dimension n and degree d. If the latter multiplication is done in MM(n,d) = O~(n^ω d) operations, with ω the exponent of matrix multiplication over K, then the algorithm uses O~(MM(n,d)) operations in K. For m x n matrices of rank r and degree d, the cost expression is O~(nmr^(ω-2)d). The soft-O notation O~ indicates some missing logarithmic factors. The method is randomized with Las Vegas certification. We achieve our results in part through a combination of matrix Hensel high-order lifting and matrix minimal fraction reconstruction, and through the computation of minimal or small degree vectors in the nullspace seen as a K[x]-module.

Journal ArticleDOI
TL;DR: The eigenvalue spectral density of the correlation matrix of factor models of multivariate time series is studied using random matrix theory, and the effect of statistical uncertainty on the spectral density due to the finiteness of the sample is quantified.
Abstract: We studied the eigenvalue spectral density of the correlation matrix of factor models of multivariate time series. By making use of the random matrix theory, we analytically quantified the effect of statistical uncertainty on the spectral density due to the finiteness of the sample. We considered a broad range of models, ranging from one-factor models to hierarchical multifactor models.
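The random-matrix baseline behind such studies is easy to reproduce: for N uncorrelated series observed over T samples, the eigenvalues of the sample correlation matrix fall (up to edge fluctuations) in the Marchenko-Pastur band [(1−√q)², (1+√q)²] with q = N/T; eigenvalues of a factor-model correlation matrix escaping this band carry the factor structure. A sketch of the null case:

```python
import numpy as np

# Sample correlation matrix of N uncorrelated Gaussian series over T samples.
rng = np.random.default_rng(3)
N, T = 50, 2500
X = rng.standard_normal((T, N))
C = (X.T @ X) / T                     # sample correlation estimate
lam = np.linalg.eigvalsh(C)           # its eigenvalue spectrum
q = N / T
lo, hi = (1 - np.sqrt(q)) ** 2, (1 + np.sqrt(q)) ** 2  # Marchenko-Pastur edges
```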

Journal ArticleDOI
TL;DR: In this article, the Cayley-Hamilton theorem and the sequence of Horner polynomials associated with a polynomial w(z) were used to obtain explicit formulas for functions of the form f(tA), where f is defined by a convergent power series and A is a square matrix.