
Showing papers on "Square matrix published in 2003"


Journal ArticleDOI
TL;DR: This work defines and explores the properties of the exchange operator, which maps J-orthogonal matrices to orthogonal matrices and vice versa, and shows how the exchange operator can be used to obtain a hyperbolic CS decomposition of a J-orthogonal matrix directly from the usual CS decomposition of an orthogonal matrix.
Abstract: A real, square matrix Q is J-orthogonal if $Q^TJQ = J$, where the signature matrix $J = \mathrm{diag}(\pm 1)$. J-orthogonal matrices arise in the analysis and numerical solution of various matrix problems involving indefinite inner products, including, in particular, the downdating of Cholesky factorizations. We present techniques and tools useful in the analysis, application, and construction of these matrices, giving a self-contained treatment that provides new insights. First, we define and explore the properties of the exchange operator, which maps J-orthogonal matrices to orthogonal matrices and vice versa. Then we show how the exchange operator can be used to obtain a hyperbolic CS decomposition of a J-orthogonal matrix directly from the usual CS decomposition of an orthogonal matrix. We employ the decomposition to derive an algorithm for constructing random J-orthogonal matrices with specified norm and condition number. We also give a short proof of the fact that J-orthogonal matrices are optimally scaled und...
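
As a concrete illustration of the definition (not taken from the paper), the sketch below builds a 2×2 hyperbolic rotation, a standard example of a J-orthogonal matrix for J = diag(1, -1), and verifies Q^T J Q = J numerically; the parameter t is arbitrary.

```python
import numpy as np

# A minimal J-orthogonality check, assuming J = diag(1, -1) and the
# hyperbolic rotation Q = [[cosh t, sinh t], [sinh t, cosh t]].
t = 0.7                                # arbitrary parameter (assumption)
c, s = np.cosh(t), np.sinh(t)
Q = np.array([[c, s],
              [s, c]])
J = np.diag([1.0, -1.0])

# Q is J-orthogonal iff Q^T J Q = J.
assert np.allclose(Q.T @ J @ Q, J)
print("Q is J-orthogonal; condition number:", np.linalg.cond(Q))
```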

107 citations


Journal ArticleDOI
TL;DR: In this article, it is shown that the usual companion matrix of a polynomial of degree n can be factored into a product of n matrices: n − 1 of them are identity matrices in which a 2×2 identity submatrix in two consecutive rows (and columns) is replaced by an appropriate 2×2 matrix, and the remaining one is an identity matrix whose last diagonal entry is replaced by a possibly different entry.
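
The factorization can be checked numerically. The sketch below is an illustration rather than the paper's own statement: it assumes the monic convention p(x) = x^n + a[n-1]x^(n-1) + ... + a[0], the companion form with -a[n-1], ..., -a[0] in the first row, and one plausible ordering of the factors.

```python
import numpy as np

def factor(k, x, n):
    """Identity matrix with a 2x2 block [[x, 1], [1, 0]] in rows/cols k, k+1
    (for k < n-1), or with the last diagonal entry replaced by x (k = n-1)."""
    M = np.eye(n)
    if k < n - 1:
        M[k:k+2, k:k+2] = [[x, 1.0], [1.0, 0.0]]
    else:
        M[n-1, n-1] = x
    return M

rng = np.random.default_rng(0)
n = 5
a = rng.standard_normal(n)            # coefficients a[0], ..., a[n-1]

# Companion matrix: first row [-a[n-1], ..., -a[0]], ones on the subdiagonal.
C = np.zeros((n, n))
C[0, :] = -a[::-1]
C[1:, :-1] = np.eye(n - 1)

# Product of the n factors, applied to -a[n-1], ..., -a[0] in turn.
P = np.eye(n)
for k, coeff in enumerate(a[::-1]):
    P = P @ factor(k, -coeff, n)

assert np.allclose(P, C)
```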

106 citations


Journal ArticleDOI
TL;DR: In this article, the authors study the solutions of the complex matrix equations X − AXB = C and X − AX̄B = C, obtaining explicit solutions by the method of characteristic polynomials and by a method of real representation of a complex matrix, respectively.

105 citations


Journal ArticleDOI
TL;DR: In this paper, two new improvements to the Fermi operator expansion (FOE) method are introduced, and they make the FOE method competitive with the best existing alternatives.
Abstract: Linear scaling algorithms based on Fermi operator expansions (FOE) have been considered significantly slower than other alternative approaches in evaluating the density matrix in Kohn–Sham density functional theory, despite their attractive simplicity. In this work, two new improvements to the FOE method are introduced. First, novel fast summation methods are employed to evaluate a matrix polynomial or Chebyshev matrix polynomial with matrix multiplications totalling roughly twice the square root of the degree of the polynomial. Second, six different representations of the Fermi operators are compared to assess the smallest possible degree of polynomial expansion for a given target precision. The optimal choice appears to be the complementary error function. Together, these advances make the FOE method competitive with the best existing alternatives.
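
The fast-summation idea (roughly 2·sqrt(m) matrix multiplications for a degree-m matrix polynomial) is in the spirit of the Paterson-Stockmeyer scheme. The sketch below shows that scheme for an ordinary power-basis polynomial; it is an illustration under that assumption, not the paper's Chebyshev-based implementation.

```python
import numpy as np

def poly_matrix(coeffs, A):
    """Evaluate p(A) = sum_k coeffs[k] * A**k with ~2*sqrt(m) matrix
    multiplications (Paterson-Stockmeyer style).  Illustrative sketch only."""
    m = len(coeffs) - 1
    n = A.shape[0]
    s = max(1, int(np.ceil(np.sqrt(m + 1))))    # block length ~ sqrt(degree)

    # Precompute I, A, A^2, ..., A^s.
    powers = [np.eye(n), A]
    for _ in range(s - 1):
        powers.append(powers[-1] @ A)
    As = powers[s]

    # Split coefficients into blocks of length s; Horner's rule in A^s.
    blocks = [coeffs[j:j + s] for j in range(0, m + 1, s)]
    result = sum(c * powers[i] for i, c in enumerate(blocks[-1]))
    for blk in reversed(blocks[:-1]):
        B = sum(c * powers[i] for i, c in enumerate(blk))
        result = result @ As + B
    return result

# Quick check against a plain Horner evaluation.
rng = np.random.default_rng(1)
A = rng.standard_normal((6, 6)) / 6
coeffs = rng.standard_normal(17)                # degree-16 polynomial
ref = np.zeros_like(A)
for c in reversed(coeffs):
    ref = ref @ A + c * np.eye(6)
assert np.allclose(poly_matrix(coeffs, A), ref)
```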

97 citations


Journal ArticleDOI
TL;DR: An algorithm is presented for computing the ‘pseudospectral abscissa’, the largest real part of an eigenvalue of any complex matrix within a given distance of A, which measures the robust stability of A; global and local quadratic convergence are proved.
Abstract: The system ẋ = Ax is robustly stable when all eigenvalues of complex matrices within a given distance of the square matrix A lie in the left half-plane. The ‘pseudospectral abscissa’, which is the largest real part of such an eigenvalue, measures the robust stability of A. We present an algorithm for computing the pseudospectral abscissa, prove global and local quadratic convergence, and discuss numerical implementation. As with analogous methods for calculating H∞ norms, our algorithm depends on computing the eigenvalues of associated Hamiltonian matrices.
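
For intuition only: in the 2-norm, z lies in the ε-pseudospectrum of A exactly when σ_min(zI − A) ≤ ε, and the pseudospectral abscissa is the largest Re z over that set. The brute-force grid scan below uses this definition directly; it is not the Hamiltonian-eigenvalue-based algorithm of the paper, and the example matrix is made up.

```python
import numpy as np

def pseudospectral_abscissa_grid(A, eps, re_range, im_range, npts=200):
    """Crude estimate of the eps-pseudospectral abscissa of A by scanning a
    rectangular grid in the complex plane.  Sketch for intuition only."""
    n = A.shape[0]
    best = -np.inf
    for x in np.linspace(*re_range, npts):
        for y in np.linspace(*im_range, npts):
            z = x + 1j * y
            # z is in the eps-pseudospectrum iff sigma_min(zI - A) <= eps.
            if np.linalg.svd(z * np.eye(n) - A, compute_uv=False)[-1] <= eps:
                best = max(best, x)
    return best

A = np.array([[-0.5, 10.0],
              [ 0.0, -0.6]])          # stable but highly non-normal (example)
print(pseudospectral_abscissa_grid(A, eps=1e-2, re_range=(-2, 2), im_range=(-2, 2)))
```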

81 citations


Patent
12 Nov 2003
TL;DR: In this paper, the inner products between the columns of a base DFT sub-matrix W1 and the rows of an L×L matrix G are computed to obtain the entries of the intermediate vector B.
Abstract: Techniques to derive a channel estimate using substantially fewer complex multiplications than a brute-force method deriving the same channel estimate. In one method, an intermediate vector B (418) is initially derived based on K sub-vectors of a vector H for a channel frequency response estimate and at least two DFT sub-matrices for a DFT matrix W, where K > 1. An intermediate matrix A (420) for the DFT matrix W is also obtained. A least squares channel impulse response estimate is then derived based on the intermediate vector B and the intermediate matrix A (422). In one implementation, the intermediate vector B is obtained by first computing DFTs of a T×L matrix, which is formed based on the vector H, to provide an L×L matrix G. Inner products between the columns of a base DFT sub-matrix W1 and the rows of the matrix G are then computed to obtain the entries of the intermediate vector B.

79 citations


Posted Content
TL;DR: In this article, the authors introduce two canonical forms for Leonard pairs, the TD-D canonical form and the LB-UB canonical form, and give necessary and sufficient conditions for a pair of square matrices (tridiagonal and diagonal, or lower bidiagonal and upper bidiagonal) to represent a Leonard pair.
Abstract: Let $\mathbb{K}$ denote a field and let $V$ denote a vector space over $\mathbb{K}$ with finite positive dimension. We consider an ordered pair of linear transformations $A:V\to V$ and $B:V\to V$ which satisfy both (i), (ii) below. (i) There exists a basis for $V$ with respect to which the matrix representing $A$ is irreducible tridiagonal and the matrix representing $B$ is diagonal; (ii) There exists a basis for $V$ with respect to which the matrix representing $A$ is diagonal and the matrix representing $B$ is irreducible tridiagonal. We call such a pair a Leonard pair on $V$. We introduce two canonical forms for Leonard pairs. We call these the TD-D canonical form and the LB-UB canonical form. In the TD-D canonical form the Leonard pair is represented by an irreducible tridiagonal matrix and a diagonal matrix, subject to a certain normalization. In the LB-UB canonical form the Leonard pair is represented by a lower bidiagonal matrix and an upper bidiagonal matrix, subject to a certain normalization. We describe the two canonical forms in detail. As an application we obtain the following results. Given square matrices $A,B$ over $\mathbb{K}$, with $A$ tridiagonal and $B$ diagonal, we display a necessary and sufficient condition for $A,B$ to represent a Leonard pair. Given square matrices $A,B$ over $\mathbb{K}$, with $A$ lower bidiagonal and $B$ upper bidiagonal, we display a necessary and sufficient condition for $A,B$ to represent a Leonard pair. We briefly discuss how Leonard pairs correspond to the $q$-Racah polynomials and some related polynomials in the Askey scheme. We present some open problems concerning Leonard pairs.
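
A small utility in the spirit of condition (i): a matrix is irreducible tridiagonal when it is tridiagonal with every subdiagonal and superdiagonal entry nonzero. The helper below merely checks that definition and is not part of the paper's canonical-form machinery.

```python
import numpy as np

def is_irreducible_tridiagonal(A, tol=0.0):
    """True iff A is square, tridiagonal, and all sub-/superdiagonal entries
    are nonzero (up to tol)."""
    A = np.asarray(A)
    n = A.shape[0]
    if A.shape != (n, n):
        return False
    # No nonzero entries outside the three central diagonals.
    band = np.triu(np.tril(A, 1), -1)
    if np.any(np.abs(A - band) > tol):
        return False
    # Every sub- and superdiagonal entry must be nonzero.
    return bool(np.all(np.abs(np.diag(A, 1)) > tol) and
                np.all(np.abs(np.diag(A, -1)) > tol))

# Example: an irreducible tridiagonal matrix and a diagonal matrix.
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 2.0, 1.0],
              [0.0, 1.0, 2.0]])
B = np.diag([1.0, 2.0, 3.0])
print(is_irreducible_tridiagonal(A), is_irreducible_tridiagonal(B))  # True False
```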

75 citations


Journal ArticleDOI
TL;DR: The solvability of the modified matrix inequality ensures not only the stability but also the absence of overflow oscillation under the state saturation arithmetic, and this approach has the advantage of being free from auxiliary parameters.
Abstract: This note is concerned with the stability of discrete-time dynamical systems employing saturation arithmetic in the state space. A matrix measure is introduced that evaluates the proximity of a matrix to the set of diagonal matrices, and this measure is used to add a supplementary condition to the Lyapunov-Stein matrix inequality. The solvability of the modified matrix inequality ensures not only stability but also the absence of overflow oscillation under the state saturation arithmetic, and the approach has the advantage of being free from auxiliary parameters. As an application, the obtained result is applied to the stability analysis of two-dimensional dynamics. Numerical examples are given to illustrate the results.
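
For background, the unmodified Lyapunov-Stein condition for the linear part x(k+1) = A x(k) asks for a positive definite P with A^T P A - P negative definite; the sketch below checks it by solving the corresponding Stein equation with SciPy. The paper's additional diagonal-proximity condition and the saturation analysis are not reproduced, and the system matrix is made up.

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

# Background sketch (not the paper's modified inequality): check the ordinary
# Lyapunov-Stein condition A^T P A - P = -Q with Q > 0 for the linear part.
A = np.array([[ 0.4, 0.3],
              [-0.2, 0.5]])           # example system matrix (assumption)
Q = np.eye(2)

# solve_discrete_lyapunov(a, q) solves a X a^H - X + q = 0; passing a = A^T
# makes the returned P satisfy A^T P A - P = -Q.
P = solve_discrete_lyapunov(A.T, Q)
print("P positive definite:", np.all(np.linalg.eigvalsh(P) > 0))
```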

61 citations


Journal ArticleDOI
01 Sep 2003
TL;DR: An explicit formula is given for computing the resultant of any sparse unmixed bivariate system with a given support, and square matrices whose determinant is exactly the resultant, with no extraneous factors, are constructed.
Abstract: This paper gives an explicit formula for computing the resultant of any sparse unmixed bivariate system with a given support. We construct square matrices whose determinant is exactly the resultant, with no extraneous factors. This is the first time that such matrices have been given for unmixed bivariate systems with arbitrary support. The matrices constructed are of hybrid Sylvester and Bezout type. The results extend previous work by the author by giving a complete combinatorial description of the matrix. We make use of the exterior algebra techniques of Eisenbud, Floystad, and Schreyer.
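
For context, the classical univariate analogue is the Sylvester matrix, whose determinant is exactly the resultant of two polynomials. The sketch below builds it for a small example; it is background only, not the paper's hybrid Sylvester-Bezout construction for bivariate systems.

```python
import numpy as np

def sylvester(f, g):
    """Sylvester matrix of f and g given as coefficient lists in descending
    powers; its determinant equals Res(f, g).  Background sketch only."""
    m, n = len(f) - 1, len(g) - 1
    S = np.zeros((m + n, m + n))
    for i in range(n):                 # n shifted copies of f
        S[i, i:i + m + 1] = f
    for j in range(m):                 # m shifted copies of g
        S[n + j, j:j + n + 1] = g
    return S

# Example: f = x^2 - 1, g = x - 2; Res(f, g) = f(2) = 3.
S = sylvester([1.0, 0.0, -1.0], [1.0, -2.0])
print(np.linalg.det(S))                # ~3.0
```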

56 citations


Journal ArticleDOI
TL;DR: In this paper, the authors give complete invariants for the existence of equivalence for matrices over a principal ideal domain R. The invariants involve an associated diagram (the K-web) of R-module homomorphisms.
Abstract: Given square matrices B and B' with a poset-indexed block structure (for which an ij block is zero unless i ≤ j), when are there invertible matrices U and V with this required-zero-block structure such that UBV = B'? We give complete invariants for the existence of such an equivalence for matrices over a principal ideal domain R. As one application, when R is a field we classify such matrices up to similarity by matrices respecting the block structure. We also give complete invariants for equivalence under the additional requirement that the diagonal blocks of U and V have determinant 1. The invariants involve an associated diagram (the K-web) of R-module homomorphisms. The study is motivated by applications to symbolic dynamics and C*-algebras.

51 citations


Patent
16 Jun 2003
TL;DR: In this paper, a system and method for identifying dependency relationships between components in a group of software components is described; the direct dependencies are indicated in a square matrix in which each component in the group has a corresponding row and column.
Abstract: Described is a system and method for identifying dependency relationships between components in a group of software components. Given a group of software components, a set of direct dependencies between each of the components and any other component is identified. The direct dependencies are indicated in a square matrix where each component in the group of components has a corresponding row and column. A particular component has the same row number as column number in the matrix. Multiplying the matrix by itself identifies second-order dependencies. Higher-order dependencies are identified by repeating the multiplication of the resultant matrix by the first-order dependency matrix. In other words, multiplying the third-order matrix by the first-order matrix achieves the fourth-order matrix, and so on.
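
A minimal sketch of the idea (the four-component dependency graph is made up): represent the direct dependencies as a square 0-1 matrix D with D[i][j] = 1 when component i depends directly on component j; D·D then flags second-order dependencies, and repeated multiplication by D flags higher orders.

```python
import numpy as np

# Hypothetical 4-component example: 0 -> 1 -> 2 -> 3 (direct dependencies only).
D = np.array([[0, 1, 0, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 1],
              [0, 0, 0, 0]])

order2 = (D @ D > 0).astype(int)        # second-order dependencies (0 -> 2, 1 -> 3)
order3 = (order2 @ D > 0).astype(int)   # third-order dependencies (0 -> 3)

# Dependencies of any order: keep multiplying by the first-order matrix
# and accumulate the nonzero pattern.
reach, power = D.copy(), D.copy()
for _ in range(D.shape[0] - 1):
    power = (power @ D > 0).astype(int)
    reach = ((reach + power) > 0).astype(int)
print(reach)
```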

Posted Content
TL;DR: The local eigenvalue statistic, arising in a certain neighborhood of the edges of the support of the Density of States, is independent of the form of the potential determining the matrix model, as discussed by the authors.
Abstract: Based on our recent results on the $1/n$-expansion in unitary invariant random matrix ensembles, known as matrix models, we prove that the local eigenvalue statistic, arising in a certain neighborhood of the edges of the support of the Density of States, is independent of the form of the potential determining the matrix model. Our proof is applicable to the case of real analytic potentials and of supports consisting of one or two disjoint intervals.

Journal ArticleDOI
TL;DR: In this paper, a short proof of Hua's fundamental theorem of the geometry of square matrices is given. The proof is based on the automorphisms of the poset of idempotent matrices.

Journal ArticleDOI
TL;DR: In this article, a two-dimensional model of piezoelectric thin shells, developed in the first part of this work, is considered, and the approximation of the second formulation by a conforming finite element method is analyzed.


Journal ArticleDOI
TL;DR: The problem of generating a matrix A with a specified eigenpair, where A is a symmetric and anti-persymmetric matrix, is presented; an existence theorem is given and proved, and an expression is provided for the matrix in this set nearest to a given matrix.
Abstract: The problem of generating a matrix A with a specified eigenpair, where A is a symmetric and anti-persymmetric matrix, is presented. An existence theorem is given and proved, and a general expression for such a matrix is provided. We denote the set of such matrices by En. The optimal approximation problem associated with En is then discussed, that is: find the matrix A ∈ En nearest to a given matrix A*. The existence and uniqueness of the solution to the optimal approximation problem are proved, and an expression is provided for this nearest matrix. Copyright © 2002 John Wiley & Sons, Ltd.
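
To make the structure concrete (a structure-only illustration, not the paper's eigenpair construction): with F the flip matrix having ones on the anti-diagonal, a symmetric anti-persymmetric matrix satisfies A = A^T and F A^T F = -A, and the Frobenius-nearest matrix with this structure to an arbitrary M is obtained by symmetrizing and then anti-persymmetrizing.

```python
import numpy as np

def nearest_sym_antipersym(M):
    """Frobenius-nearest matrix to M that is symmetric (A = A^T) and
    anti-persymmetric (F A^T F = -A, with F the flip/exchange matrix).
    Structure illustration only; the paper additionally prescribes eigenpairs."""
    n = M.shape[0]
    F = np.fliplr(np.eye(n))                     # ones on the anti-diagonal
    return (M + M.T - F @ M @ F - F @ M.T @ F) / 4.0

rng = np.random.default_rng(2)
M = rng.standard_normal((4, 4))
A = nearest_sym_antipersym(M)
F = np.fliplr(np.eye(4))
assert np.allclose(A, A.T) and np.allclose(F @ A.T @ F, -A)
```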

Patent
10 Jan 2003
TL;DR: In this article, the authors present a method for computing the inverse square root of a given positive-definite Hermitian covariance matrix K, as used by the noise whitener of a MIMO receiver.
Abstract: Generally, a method and apparatus are provided for computing a matrix inverse square root of a given positive-definite Hermitian matrix, K. The disclosed technique for computing an inverse square root of a matrix may be implemented, for example, by the noise whitener of a MIMO receiver. Conventional noise whitening algorithms whiten a non-white vector, X, by applying a matrix, Q, to X, such that the resulting vector, Y, equal to Q·X, is a white vector. Thus, the noise whitening algorithms attempt to identify a matrix, Q, that, when multiplied by the non-white vector, will convert the vector to a white vector. The disclosed iterative algorithm determines the matrix, Q, given the covariance matrix, K. The disclosed matrix inverse square root determination process initially establishes an initial matrix, Q0, by multiplying an identity matrix by a scalar value and then continues to iterate and compute another value of the matrix, Qn+1, until a convergence threshold is satisfied. The disclosed iterative algorithm only requires multiplication and addition operations and allows incremental updates when the covariance matrix, K, changes.
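
A plausible iteration of this multiply-and-add type is the textbook Newton-Schulz scheme for K^(-1/2), sketched below; the patent's exact recursion and convergence test may differ. It starts from Q0 = alpha*I with alpha chosen so that the spectrum of alpha^2*K lies in (0, 2), and iterates Q_{n+1} = 0.5 * Q_n (3I - Q_n K Q_n).

```python
import numpy as np

def inv_sqrt_newton_schulz(K, tol=1e-12, max_iter=100):
    """Approximate K^(-1/2) for Hermitian positive-definite K using only matrix
    multiplications and additions (textbook Newton-Schulz scheme; the patent's
    recursion may differ).  Q0 = alpha*I with alpha = 1/sqrt(trace(K)), which
    keeps the spectrum of alpha^2*K inside (0, 2) for positive-definite K."""
    n = K.shape[0]
    I = np.eye(n, dtype=K.dtype)
    Q = I / np.sqrt(np.trace(K).real)            # scaled identity start
    for _ in range(max_iter):
        Q_next = 0.5 * Q @ (3.0 * I - Q @ K @ Q)
        if np.linalg.norm(Q_next - Q) <= tol * np.linalg.norm(Q):
            return Q_next
        Q = Q_next
    return Q

# Example with a random Hermitian positive-definite covariance matrix.
rng = np.random.default_rng(3)
X = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
K = X @ X.conj().T + 4 * np.eye(4)
Q = inv_sqrt_newton_schulz(K)
print(np.linalg.norm(Q @ K @ Q.conj().T - np.eye(4)))   # ~0, i.e. Q K Q^H = I
```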

Patent
10 Feb 2003
TL;DR: In this article, a computer system for multiplying two matrices with reduced rounding error is described: it obtains sets of dimension values for the first and second matrices, selects one of a plurality of multiplication permutations if the dimension values are greater than a crossover value, and multiplies the matrices using the selected permutation and a Strassen-Winograd method.
Abstract: A computer system for multiplying a first matrix and a second matrix that reduces rounding error, including a processor, a memory, a storage device, and software instructions stored in the memory for enabling the computer system, under the control of the processor, to perform obtaining a first set of dimension values for the first matrix and a second set of dimension values for the second matrix, selecting one of a plurality of multiplication permutations if the first set of dimension values and the second set of dimension values are greater than a crossover value, multiplying the first matrix by the second matrix using the multiplication permutation and a Strassen-Winograd method, recursively sub-dividing the first matrix and the second matrix producing a set of sub-matrix products and a recursion tree, and propagating the set of sub-matrix products up the recursion tree to produce a product matrix.
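
To make the recursion concrete, here is a plain Strassen recursion with a crossover to ordinary multiplication; it assumes power-of-two dimensions, and the Winograd reorganization of the additions, the permutation selection, and the rounding-error logic of the patent are not reproduced.

```python
import numpy as np

CROSSOVER = 64          # below this size, fall back to ordinary multiplication

def strassen(A, B):
    """Multiply square matrices of power-of-two size using Strassen's seven
    half-size products.  Sketch only, not the patent's Strassen-Winograd variant."""
    n = A.shape[0]
    if n <= CROSSOVER:
        return A @ B
    h = n // 2
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
    M1 = strassen(A11 + A22, B11 + B22)
    M2 = strassen(A21 + A22, B11)
    M3 = strassen(A11, B12 - B22)
    M4 = strassen(A22, B21 - B11)
    M5 = strassen(A11 + A12, B22)
    M6 = strassen(A21 - A11, B11 + B12)
    M7 = strassen(A12 - A22, B21 + B22)
    C = np.empty_like(A)
    C[:h, :h] = M1 + M4 - M5 + M7
    C[:h, h:] = M3 + M5
    C[h:, :h] = M2 + M4
    C[h:, h:] = M1 - M2 + M3 + M6
    return C

rng = np.random.default_rng(4)
A, B = rng.standard_normal((128, 128)), rng.standard_normal((128, 128))
assert np.allclose(strassen(A, B), A @ B)
```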

Journal ArticleDOI
TL;DR: In this article, a generalization of the Wielandt and Ky Fan theorems to Hermitian matrix pairs is given, and some new eigenvalue perturbation estimates are obtained.

Journal ArticleDOI
TL;DR: In this paper, the authors characterize the linear operators that strongly preserve matrix majorization, which is a generalization of multivariate majorization.

Journal ArticleDOI
TL;DR: This paper is concerned with an algorithm for computing the singular value decomposition (SVD) of time-varying square matrices, whose solutions asymptotically track the diagonalizing transformation.

Journal ArticleDOI
TL;DR: In this article, lower and upper bounds are derived for the distance between Jordan and Kronecker structures in a closure hierarchy of an orbit or bundle stratification; the upper bounds are based on staircase regularizing perturbations, while the lower bounds are obtained from a matrix representation of the tangent space of the orbit of a matrix or a matrix pencil.
Abstract: Computing the fine-canonical-structure elements of matrices and matrix pencils is an ill-posed problem. Therefore, besides knowing the canonical structure of a matrix or a matrix pencil, it is equally important to know what the nearby canonical structures are that explain the behavior under small perturbations. Qualitative strata information is provided by our StratiGraph tool. Here, we present lower and upper bounds for the distance between Jordan and Kronecker structures in a closure hierarchy of an orbit or bundle stratification. This quantitative information is of importance in applications, e.g., distance to more degenerate systems (uncontrollability). Our upper bounds are based on staircase regularizing perturbations. The lower bounds are of Eckart-Young type and are derived from a matrix representation of the tangent space of the orbit of a matrix or a matrix pencil. Computational results illustrate the use of the bounds. Bibliography: 42 titles.

Journal ArticleDOI
TL;DR: It is shown analytically that the 2-norm polynomial numerical hulls of degrees 1 through n−1 for an n by n Jordan block are disks about the eigenvalue with radii approaching 1 as n→∞, and a theorem characterizing these radii r_{k,n} is proved.

Journal ArticleDOI
TL;DR: All matrices B and C are described for the solution X=A^#, where A^# is the group inverse of A, and these results are extended to reflexive generalized inverses.

Journal ArticleDOI
TL;DR: For a class X of real matrices, a list of positions in an n×n matrix (a pattern) is said to have X-completion if every partial matrix that specifies exactly these positions can be completed to an X-matrix as discussed by the authors.

Journal ArticleDOI
TL;DR: The method developed in this paper for sensor selection can be applied to the dual problem of actuator selection, where, for a given matrix pair, a matrix B is to be determined such that the resulting matrix triple has the pre-specified structural properties.

Journal ArticleDOI
TL;DR: The problem of the strong regularity of a square matrix in a general max–min algebra is considered and a necessary and sufficient condition using the trapezoidal property is described.

Patent
05 Dec 2003
TL;DR: In this article, a conjugate gradient algorithm is used to determine the channel estimate based on the received signal y, the matrix Ŝ, and the matrix F such that the matrix f results from the forming of the matrix ǫ as a convolution matrix.
Abstract: In forming a channel estimate, a received signal y is decoded to form data s, a convolution matrix Ŝ is formed from the data s, a matrix F is formed from the data s such that the matrix F results from the forming of the matrix Ŝ as a convolution matrix, and a conjugate gradient algorithm is performed to determine the channel estimate. The conjugate gradient algorithm is based on the received signal y, the matrix Ŝ, and the matrix F.
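
For reference, below is a generic conjugate gradient solver of the kind the claim relies on, applied to a symmetric positive-definite system Ax = b; the construction of the matrices Ŝ and F from the decoded data and the exact system being solved are specific to the patent and are not reproduced.

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=None):
    """Solve A x = b for symmetric positive-definite A by the conjugate
    gradient method.  Generic reference implementation, not the patent's
    channel-estimation pipeline."""
    n = b.shape[0]
    max_iter = max_iter or n
    x = np.zeros_like(b)
    r = b - A @ x                      # residual
    p = r.copy()                       # search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) <= tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# Example on a small random symmetric positive-definite system.
rng = np.random.default_rng(5)
M = rng.standard_normal((8, 8))
A = M @ M.T + 8 * np.eye(8)
b = rng.standard_normal(8)
assert np.allclose(conjugate_gradient(A, b), np.linalg.solve(A, b))
```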

Book ChapterDOI
02 Jun 2003
TL;DR: Given a set of n real positive numbers, the problem of finding a real matrix A having them as G-singular values is discussed from an algorithmic point of view, by applying a Newton-type algorithm already considered for the usual inverse singular value problem.
Abstract: In this paper the solution of an inverse singular value problem is considered. First the decomposition of a real square matrix A = UΣV is introduced, where U and V are real square matrices orthogonal with respect to a particular inner product defined through a real diagonal matrix G of order n having all the elements equal to ±1, and Σ is a real diagonal matrix with nonnegative elements, called G-singular values. When G is the identity matrix this decomposition is the usual SVD and Σ is the diagonal matrix of singular values. Given a set {σ1, ..., σn} of n real positive numbers we consider the problem of finding a real matrix A having them as G-singular values. Neglecting theoretical aspects of the problem, we discuss only an algorithmic issue, trying to apply a Newton-type algorithm already considered for the usual inverse singular value problem.

Journal ArticleDOI
TL;DR: In this paper, the inverse eigenproblem of finding a Hermitian and generalized skew-Hamiltonian matrix with specified eigenpairs is studied, and the best approximation to a given matrix in the Frobenius norm is shown to be unique.
Abstract: In this paper, we first consider the inverse eigenvalue problem as follows: find a matrix A with specified eigenpairs, where A is a Hermitian and generalized skew-Hamiltonian matrix. The sufficient and necessary conditions are obtained, and a general representation of such a matrix is presented. We denote the set of such matrices by S. Then we discuss the best approximation problem for the inverse eigenproblem. That is, given an arbitrary A, find a matrix in S which is nearest to A in the Frobenius norm. We show that the best approximation is unique and provide an expression for this nearest matrix.