Topic
Square matrix
About: Square matrix is a research topic. Over the lifetime, 5000 publications have been published within this topic, receiving 92428 citations.
Papers published on a yearly basis
Papers
TL;DR: In this paper, a new computational method for finding P when A and Q are known is described. The method is simple and efficient, requiring matrix operations of the same order as A.
Abstract: The matrix equation $A^T P + PA = - Q$ is useful for studying the stability of a system whose dynamics are characterized by $\dot X = AX$. The matrix, or system, is stable if and only if the solution matrix P is positive definite for a positive definite matrix Q. This paper describes a new computational method for finding P when A and Q are known. The method is simple and efficient, requiring matrix operations of the same order as A.
30 citations
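The Lyapunov equation in the entry above can be solved directly by vectorization: applying vec to $A^T P + PA = -Q$ gives the linear system $(I \otimes A^T + A^T \otimes I)\,\mathrm{vec}(P) = -\mathrm{vec}(Q)$. A minimal sketch of this approach (our own illustration, not the paper's method, which avoids forming the $n^2 \times n^2$ system):

```python
import numpy as np

def solve_lyapunov(A, Q):
    """Solve A^T P + P A = -Q by vectorization with Kronecker products.

    Note: forming the n^2 x n^2 system is expensive; the paper's point is
    that cheaper methods with matrix operations of the same order as A exist.
    """
    n = A.shape[0]
    I = np.eye(n)
    # vec(A^T P) = (I kron A^T) vec(P);  vec(P A) = (A^T kron I) vec(P)
    M = np.kron(I, A.T) + np.kron(A.T, I)
    p = np.linalg.solve(M, -Q.flatten(order="F"))
    return p.reshape((n, n), order="F")

# Stable system: the eigenvalues of A are -1 and -2.
A = np.array([[-1.0, 1.0], [0.0, -2.0]])
Q = np.eye(2)
P = solve_lyapunov(A, Q)
print(np.allclose(A.T @ P + P @ A, -Q))              # residual check
print(np.all(np.linalg.eigvalsh((P + P.T) / 2) > 0))  # P positive definite => stable
```

Since A here is stable and Q is positive definite, the computed P comes out symmetric positive definite, consistent with the stability criterion in the abstract.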
TL;DR: In this article, algorithms are constructed for computing bounds on the moments $\mu_s$, where s is a positive integer greater than 2k or a negative integer, given an upper bound on the largest eigenvalue and a positive lower bound on the smallest eigenvalue.
Abstract: where $\alpha(\lambda) = 0$ for $\lambda < \lambda_1$, $\alpha(\lambda) = a_1^2 + \cdots + a_i^2$ for $\lambda_i \le \lambda < \lambda_{i+1}$, and $\alpha(\lambda) = a_1^2 + \cdots + a_n^2$ for $\lambda_n \le \lambda$. Thus the $\{\mu_m\}$ are a set of moments associated with the distribution function $\alpha(\lambda)$. In certain applications (cf. [1]) we are interested in determining bounds for $\mu_s$ where s is a positive integer greater than 2k or a negative integer. We shall construct algorithms for computing bounds on $\mu_s$ where we have an upper bound on the largest eigenvalue and a positive lower bound on the smallest eigenvalue, e.g.,
29 citations
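For a symmetric matrix, the moments of such a step distribution can be written as $\mu_m = \sum_i a_i^2 \lambda_i^m = b^T A^m b$, where the $a_i$ are the components of a vector $b$ in the eigenvector basis. A quick numerical check of this identity (illustrative only; the names A and b are ours, and the paper's bounding algorithms are not reproduced):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
B = rng.standard_normal((n, n))
A = B + B.T                      # symmetric test matrix
b = rng.standard_normal(n)

lam, V = np.linalg.eigh(A)       # A = V diag(lam) V^T
a = V.T @ b                      # coefficients of b in the eigenbasis

# mu_m = sum_i a_i^2 lam_i^m  ==  b^T A^m b
for m in range(1, 5):
    mu_spec = np.sum(a**2 * lam**m)
    mu_mat = b @ np.linalg.matrix_power(A, m) @ b
    assert np.isclose(mu_spec, mu_mat)
print("moment identity verified for m = 1..4")
```

The negative moments mentioned in the abstract, e.g. $\mu_{-1} = b^T A^{-1} b$, fit the same formula with negative powers, which is why bounds on them are useful for estimating errors in linear systems without forming $A^{-1}$.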
10 Jan 2003
TL;DR: In this article, the authors present a method for computing the inverse square root of a given positive-definite Hermitian matrix K; the technique may be implemented, for example, by the noise whitener of a MIMO receiver, where the covariance matrix K arises.
Abstract: Generally, a method and apparatus are provided for computing a matrix inverse square root of a given positive-definite Hermitian matrix, K. The disclosed technique for computing an inverse square root of a matrix may be implemented, for example, by the noise whitener of a MIMO receiver. Conventional noise whitening algorithms whiten a non-white vector, X, by applying a matrix, Q, to X, such that the resulting vector, Y, equal to Q·X, is a white vector. Thus, the noise whitening algorithms attempt to identify a matrix, Q, that, when multiplied by the non-white vector, will convert the vector to a white vector. The disclosed iterative algorithm determines the matrix, Q, given the covariance matrix, K. The disclosed matrix inverse square root determination process initially establishes an initial matrix, Q0, by multiplying an identity matrix by a scalar value, and then continues to iterate and compute another value of the matrix, Qn+1, until a convergence threshold is satisfied. The disclosed iterative algorithm only requires multiplication and addition operations and allows incremental updates when the covariance matrix, K, changes.
29 citations
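The patented iteration itself is not reproduced here, but a well-known scheme with the same properties (multiplications and additions only, starting from a scaled identity) is the Newton-Schulz iteration for the inverse square root. A sketch under the assumption that K is real symmetric positive definite:

```python
import numpy as np

def inv_sqrt_newton_schulz(K, tol=1e-12, max_iter=100):
    """Approximate K^{-1/2} for symmetric positive-definite K using only
    matrix multiplications and additions (Newton-Schulz iteration).

    This is a standard scheme of the same flavor as the patent's algorithm,
    not a reconstruction of the patented method itself.
    """
    n = K.shape[0]
    I = np.eye(n)
    c = np.linalg.norm(K, 2)      # scale so the eigenvalues lie in (0, 1]
    M = K / c
    X = I.copy()                  # initial iterate: identity times a scalar
    for _ in range(max_iter):
        X_next = 0.5 * X @ (3.0 * I - M @ X @ X)
        if np.linalg.norm(X_next - X, "fro") < tol:
            X = X_next
            break
        X = X_next
    return X / np.sqrt(c)         # undo the scaling: K^{-1/2} = M^{-1/2} / sqrt(c)

# Whitening check: with Q = K^{-1/2}, Q K Q should be the identity.
B = np.array([[2.0, 0.5], [0.5, 1.0]])
K = B @ B.T                       # a positive-definite covariance matrix
Q = inv_sqrt_newton_schulz(K)
print(np.allclose(Q @ K @ Q, np.eye(2)))
```

Because each step is a handful of matrix multiplies and adds, the iteration maps well onto hardware, and re-running it from the previous Q gives the incremental updates mentioned in the abstract when K changes slightly.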
TL;DR: All the commuting solutions of the quadratic matrix equation $AXA = XAX$ are found by taking advantage of the Jordan form structure of A, together with the help of a well-known theorem on the uniqueness of the solution to Sylvester's equation.
Abstract: Let A be a square matrix that is diagonalizable. We find all the commuting solutions of the quadratic matrix equation $AXA = XAX$ by taking advantage of the Jordan form structure of A, together with the help of a well-known theorem on the uniqueness of the solution to Sylvester's equation. Two special classes of the given matrix A are further investigated, including circular matrices and those that are equal to some of their powers. Moreover, all the non-commuting solutions are constructed when A is a Householder matrix, based on a spectral perturbation result.
29 citations
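The structure of the commuting solutions is easy to see when A is diagonalizable with distinct nonzero eigenvalues: any X commuting with A is then diagonal in the same basis, $X = V E V^{-1}$, and $AXA = XAX$ reduces entrywise to $d_i^2 e_i = d_i e_i^2$, so each $e_i \in \{0, d_i\}$. A sketch enumerating these $2^n$ commuting solutions (our own illustration of the distinct-eigenvalue case, not the paper's general Jordan-form analysis):

```python
import itertools
import numpy as np

# A = V D V^{-1} with distinct eigenvalues 1, 2, 3 and a fixed invertible V.
d = np.array([1.0, 2.0, 3.0])
V = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])          # det(V) = 2, so V is invertible
V_inv = np.linalg.inv(V)
A = V @ np.diag(d) @ V_inv

# Commuting solutions: X = V diag(e) V^{-1} with each e_i in {0, d_i}.
solutions = []
for picks in itertools.product([0, 1], repeat=3):
    e = d * np.array(picks)
    X = V @ np.diag(e) @ V_inv
    assert np.allclose(A @ X @ A, X @ A @ X)   # X solves AXA = XAX ...
    assert np.allclose(A @ X, X @ A)           # ... and commutes with A
    solutions.append(X)

print(len(solutions))  # 2^3 = 8 commuting solutions, including X = 0 and X = A
```

The extremes of the enumeration recover the two obvious solutions X = 0 and X = A; the interesting content of the paper is what happens for repeated eigenvalues and for the non-commuting solutions, which this sketch does not cover.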
TL;DR: This paper completely characterizes the worst-case GMRES-related quantities in the next-to-last iteration step and evaluates the standard bound, which is sharp for the GMRES residual norm, in terms of explicit polynomials involving the matrix eigenvalues.
Abstract: We study the convergence of GMRES for linear algebraic systems with normal matrices. In particular, we explore the standard bound based on a min-max approximation problem on the discrete set of the matrix eigenvalues. This bound is sharp, i.e., it is attainable by the GMRES residual norm. The question is how to evaluate or estimate the standard bound, and whether it is possible to characterize the GMRES-related quantities for which this bound is attained (worst-case GMRES). In this paper we completely characterize the worst-case GMRES-related quantities in the next-to-last iteration step and evaluate the standard bound in terms of explicit polynomials involving the matrix eigenvalues. For a general iteration step, we develop a computable lower and upper bound on the standard bound. Our bounds allow us to study the worst-case GMRES residual norm as a function of the eigenvalue distribution. For Hermitian matrices the lower bound is equal to the worst-case residual norm. In addition, numerical experiments show that the lower bound is generally very tight, and support our conjecture that it is to within a factor of 4/π of the actual worst-case residual norm. Since the worst-case residual norm in each step is to within a factor of the square root of the matrix size of what is considered an “average” residual norm, our results are of relevance beyond the worst case.
29 citations
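For a normal matrix the standard bound at step k is $\min_{p(0)=1,\ \deg p \le k} \max_i |p(\lambda_i)|$ over the eigenvalues $\lambda_i$. For real eigenvalues this discrete min-max problem is a small linear program; a sketch of evaluating the bound this way (our own illustration, not the paper's characterization):

```python
import numpy as np
from scipy.optimize import linprog

def minmax_bound(eigs, k):
    """min over p with p(0) = 1, deg p <= k of max_i |p(eig_i)|,
    for real eigenvalues, solved as a linear program.

    Variables: polynomial coefficients c_1..c_k (p(x) = 1 + sum_j c_j x^j)
    and the level t; minimize t subject to |p(eig_i)| <= t for every i.
    """
    eigs = np.asarray(eigs, dtype=float)
    # Columns x, x^2, ..., x^k evaluated at each eigenvalue.
    powers = np.vander(eigs, k + 1, increasing=True)[:, 1:]
    ones = np.ones(len(eigs))
    # |1 + sum_j c_j x^j| <= t  splits into two linear inequalities:
    #   sum_j c_j x^j - t <= -1   and   -sum_j c_j x^j - t <= 1
    A_ub = np.block([[powers, -ones[:, None]],
                     [-powers, -ones[:, None]]])
    b_ub = np.concatenate([-ones, ones])
    cost = np.zeros(k + 1)
    cost[-1] = 1.0                       # minimize t
    bounds = [(None, None)] * k + [(0, None)]
    res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    return res.x[-1]

# Eigenvalues {1, 2, 3}, one step: the optimal p(x) = 1 - x/2 gives bound 1/2.
print(minmax_bound([1.0, 2.0, 3.0], 1))
```

This gives the value of the sharp bound; finding the right-hand side and starting vector for which GMRES actually attains it (worst-case GMRES) is the harder problem the paper addresses.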