Topic
Matrix differential equation
About: Matrix differential equation is a research topic. Over the lifetime, 3219 publications have been published within this topic, receiving 61794 citations.
Papers published on a yearly basis
Papers
[...]
TL;DR: In this article, the authors survey the many ways in which the exponential of a matrix can in principle be computed, including methods based on approximation theory, differential equations, the matrix eigenvalues, and the matrix characteristic polynomial.
Abstract: In principle, the exponential of a matrix could be computed in many ways. Methods involving approximation theory, differential equations, the matrix eigenvalues, and the matrix characteristic polynomial...
1,688 citations
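As a concrete illustration of two of the approaches the survey compares, the sketch below (my example, not the paper's code, assuming NumPy and SciPy are available) computes exp(A) once from an eigendecomposition and once with SciPy's scaling-and-squaring expm, and checks that they agree on a small well-conditioned matrix.

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])        # small test matrix with eigenvalues -1 and -2

# Way 1: eigendecomposition, exp(A) = V diag(exp(lambda)) V^{-1}.
# Reliable only when the eigenvector matrix V is well conditioned.
lam, V = np.linalg.eig(A)
expA_eig = (V @ np.diag(np.exp(lam)) @ np.linalg.inv(V)).real

# Way 2: SciPy's expm (scaling and squaring with a Pade approximant).
expA_ss = expm(A)

print(np.allclose(expA_eig, expA_ss))   # expect True for this example
```

The eigendecomposition route is exactly the kind of method the survey warns about when V is ill conditioned, which is why general-purpose libraries prefer scaling and squaring.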
[...]
TL;DR: A collection of Fortran IV subroutines is presented for solving the matrix equation AX + XB = C (and the symmetric case AᵀX + XA = C) by reducing the coefficient matrices to real Schur form with orthogonal similarity transformations; the code is distributed by ACM as one file of BCD 80-character card images on seven-track tape.
Abstract: ...and Canada) or $18.00 (elsewhere). If the user sends a small tape (wt. less than 1 lb.) the algorithm will be copied on it and returned to him at a charge of $10.00 (U.S. only). All orders are to be prepaid with checks payable to ACM Algorithms. The algorithm is recorded as one file of BCD 80-character card images at 556 B.P.I., even parity, on seven-track tape. We will supply the algorithm at a density of 800 B.P.I. if requested. The cards for the algorithm are sequenced starting at 10 and incremented by 10. The sequence number is right justified in column 80. Although we will make every attempt to insure that the algorithm conforms to the description printed here, we cannot guarantee it, nor can we guarantee that the algorithm is correct. -L.D.F.

Description: The following programs are a collection of Fortran IV subroutines to solve the matrix equation AX + XB = C (1), where A, B, and C are real matrices of dimensions m x m, n x n, and m x n, respectively. Additional subroutines permit the efficient solution of the equation AᵀX + XA = C (2), where C is symmetric. Equation (1) has applications to the direct solution of discrete Poisson equations [2]. It is well known that (1) has a unique solution if and only if A and -B have no eigenvalue in common, that is, no eigenvalue of A is the negative of an eigenvalue of B. One proof of the result amounts to constructing the solution from complete systems of eigenvalues and eigenvectors of A and B, when they exist. This technique has been proposed as a computational method (e.g. see [1]); however, it is unstable when the eigensystem is ill conditioned. The method proposed here is based on the Schur reduction to triangular form by orthogonal similarity transformations. Equation (1) is solved as follows. The matrix A is reduced to lower real Schur form A' by an orthogonal similarity transformation U; that is, A is reduced to the real, block lower triangular form.
1,647 citations
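SciPy's solve_sylvester routine implements the same Schur-reduction (Bartels-Stewart) approach for AX + XB = C, so a minimal usage sketch looks like the following; the data here are made up for illustration, and a unique solution exists because, almost surely, no eigenvalue of the random A is the negative of an eigenvalue of B.

```python
import numpy as np
from scipy.linalg import solve_sylvester

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))   # m x m
B = rng.standard_normal((3, 3))   # n x n
C = rng.standard_normal((4, 3))   # m x n

# Solve A X + X B = C via Schur reduction (Bartels-Stewart).
X = solve_sylvester(A, B, C)

print(np.allclose(A @ X + X @ B, C))   # residual check, expect True
```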
[...]
TL;DR: Exact expressions for rates of change of eigenvalues and eigenvectors are presented to facilitate computerized design of complex structures.
Abstract: Exact expressions for rates of change of eigenvalues and eigenvectors to facilitate computerized design of complex structures.
1,034 citations
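The abstract above gives no formulas, but in the simplest setting (a symmetric matrix A(p) with a simple eigenvalue lambda and unit eigenvector v) the exact rate of change of the eigenvalue is d(lambda)/dp = v^T (dA/dp) v. The sketch below, with my own notation and a made-up parameterized matrix, checks that expression against a finite difference.

```python
import numpy as np

def A(p):
    # Hypothetical symmetric matrix depending on a design parameter p.
    return np.array([[2.0 + p, 1.0],
                     [1.0, 3.0 - p]])

dA_dp = np.array([[1.0, 0.0],        # dA/dp for the matrix above
                  [0.0, -1.0]])

p0, h = 0.3, 1e-6
w, V = np.linalg.eigh(A(p0))
v = V[:, 0]                          # unit eigenvector of the smallest eigenvalue

exact = v @ dA_dp @ v                # exact eigenvalue derivative
fd = (np.linalg.eigh(A(p0 + h))[0][0] - w[0]) / h   # finite-difference estimate

print(exact, fd)                     # the two values should agree closely
```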
[...]
TL;DR: In this paper, the authors used random matrix theory to analyze the cross-correlation matrix C of stock price changes of the largest 1000 US companies for the 2-year period 1994-1995.
Abstract: We use methods of random matrix theory to analyze the cross-correlation matrix C of stock price changes of the largest 1000 US companies for the 2-year period 1994-1995. We find that the statistics of most of the eigenvalues in the spectrum of C agree with the predictions of random matrix theory, but there are deviations for a few of the largest eigenvalues. We find that C has the universal properties of the Gaussian orthogonal ensemble of random matrices. Furthermore, we analyze the eigenvectors of C through their inverse participation ratio and find eigenvectors with large ratios at both edges of the eigenvalue spectrum, a situation reminiscent of localization theory results.
924 citations
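A toy version of that analysis on synthetic, uncorrelated data (not the paper's stock returns) is sketched below: it builds an empirical cross-correlation matrix, compares its eigenvalue range with the Marchenko-Pastur bounds that random matrix theory predicts for purely random correlations, and computes the inverse participation ratio of each eigenvector.

```python
import numpy as np

rng = np.random.default_rng(1)
N, T = 100, 500                       # number of series, observations per series

# Synthetic standardized "returns": each row has zero mean and unit variance.
R = rng.standard_normal((N, T))
R = (R - R.mean(axis=1, keepdims=True)) / R.std(axis=1, keepdims=True)

C = R @ R.T / T                       # empirical cross-correlation matrix
eigvals, eigvecs = np.linalg.eigh(C)

q = T / N
lam_min = (1 - 1 / np.sqrt(q)) ** 2   # Marchenko-Pastur lower edge
lam_max = (1 + 1 / np.sqrt(q)) ** 2   # Marchenko-Pastur upper edge
print(eigvals.min(), eigvals.max(), (lam_min, lam_max))

ipr = (eigvecs ** 4).sum(axis=0)      # inverse participation ratio per eigenvector
print(ipr.min(), ipr.max())
```

For genuinely random data the eigenvalues should fall roughly inside the predicted band; in the paper's stock data, the deviations of the largest eigenvalues from that band carry the information.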
[...]
TL;DR: A simplified procedure is presented for the determination of the derivatives of eigenvectors of nth order algebraic eigensystems, applicable to symmetric or nonsymmetric systems and requiring knowledge of only one eigenvalue and its associated right and left eigenvectors.
Abstract: A simplified procedure is presented for the determination of the derivatives of eigenvectors of nth order algebraic eigensystems. The method is applicable to symmetric or nonsymmetric systems, and requires knowledge of only one eigenvalue and its associated right and left eigenvectors. In the procedure, the matrix of the original eigensystem, of rank (n-1), is modified to convert it to a matrix of rank n, which then is solved directly for a vector which, together with the eigenvector, gives the eigenvector derivative to within an arbitrary constant. The norm of the eigenvector is used to determine this constant and complete the calculation. The method is simple, since the modified rank-n matrix is formed without matrix multiplication or extensive manipulation. Since the matrix has the same bandedness as the original eigensystem, it can be treated efficiently using the same banded equation solution algorithms that are used to find the eigenvectors.
814 citations
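The procedure described above (modify the rank-deficient matrix to full rank, solve for a particular vector, then use the norm of the eigenvector to fix the remaining constant) can be sketched for the symmetric standard eigenproblem as follows; the variable names and test matrices are mine, not the paper's.

```python
import numpy as np

def eigenpair_derivatives(A, dA, which=0):
    """Derivatives of a simple eigenvalue/eigenvector pair of symmetric A
    with respect to a parameter, given dA = dA/dp (a sketch, not library code)."""
    n = A.shape[0]
    lam_all, V = np.linalg.eigh(A)
    lam, v = lam_all[which], V[:, which]

    dlam = v @ dA @ v                        # eigenvalue derivative
    rhs = -(dA - dlam * np.eye(n)) @ v       # right-hand side for the eigenvector part

    # (A - lam I) is singular (rank n-1); pin the component of the unknown
    # corresponding to the largest entry of v to restore full rank.
    M = A - lam * np.eye(n)
    k = int(np.argmax(np.abs(v)))
    M[k, :] = 0.0
    M[:, k] = 0.0
    M[k, k] = 1.0
    rhs[k] = 0.0
    w = np.linalg.solve(M, rhs)              # particular solution with w[k] = 0

    dv = w - (v @ w) * v                     # enforce v^T dv = 0 (unit-norm choice)
    return dlam, dv

A0 = np.array([[2.0, 1.0, 0.0],
               [1.0, 3.0, 1.0],
               [0.0, 1.0, 4.0]])
dA0 = np.diag([1.0, 0.0, -1.0])              # hypothetical dA/dp
print(eigenpair_derivatives(A0, dA0))
```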