Journal Article•DOI•

Solution of the matrix equation AX + XB = C [F4]

01 Sep 1972-Communications of The ACM (ACM)-Vol. 15, Iss: 9, pp 820-826
TL;DR: A collection of Fortran IV subroutines solves the matrix equation AX + XB = C by Schur reduction to triangular form; the algorithm is supplied as one file of BCD 80-character card images at 556 B.P.I., even parity, on seven-track tape ($10.00 U.S. and Canada, $18.00 elsewhere).
Abstract: If the user sends a small tape (wt. less than 1 lb.) the algorithm will be copied onto it and returned at a charge of $10.00 (U.S. and Canada) or $18.00 (elsewhere). All orders are to be prepaid with checks payable to ACM Algorithms. The algorithm is recorded as one file of BCD 80-character card images at 556 B.P.I., even parity, on seven-track tape. We will supply the algorithm at a density of 800 B.P.I. if requested. The cards for the algorithm are sequenced starting at 10 and incremented by 10; the sequence number is right-justified in column 80. Although we will make every attempt to ensure that the algorithm conforms to the description printed here, we cannot guarantee it, nor can we guarantee that the algorithm is correct. — L.D.F.

Description: The following programs are a collection of Fortran IV subroutines to solve the matrix equation

    AX + XB = C    (1)

where A, B, and C are real matrices of dimensions m × m, n × n, and m × n, respectively. Additional subroutines permit the efficient solution of the equation

    A^T X + XA = C    (2)

where C is symmetric. Equation (1) has applications to the direct solution of discrete Poisson equations [2]. It is well known that (1) has a unique solution if and only if the eigenvalues α_i of A and β_j of B satisfy α_i + β_j ≠ 0 for all i and j. One proof of the result amounts to constructing the solution from complete systems of eigenvalues and eigenvectors of A and B, when they exist. This technique has been proposed as a computational method (e.g. see [1]); however, it is unstable when the eigensystem is ill conditioned. The method proposed here is based on the Schur reduction to triangular form by orthogonal similarity transformations. Equation (1) is solved as follows. The matrix A is reduced to lower real Schur form A' by an orthogonal similarity transformation U; that is, A is reduced to the real, block lower triangular form.
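The Schur-based procedure described above can be sketched in a few lines. This is an illustrative reconstruction, not the published Fortran IV code: it uses the complex Schur form (sidestepping the 2×2 blocks of the real Schur form the paper works with) and NumPy/SciPy in place of the original subroutines.

```python
import numpy as np
from scipy.linalg import schur, solve_triangular

def solve_sylvester_schur(A, B, C):
    """Solve AX + XB = C by Schur reduction (complex Schur form for
    simplicity; the published algorithm uses the real Schur form)."""
    TA, U = schur(A, output='complex')   # A = U TA U^H, TA upper triangular
    TB, V = schur(B, output='complex')   # B = V TB V^H
    F = U.conj().T @ C.astype(complex) @ V   # transformed right-hand side
    m, n = C.shape
    Y = np.zeros((m, n), dtype=complex)
    I = np.eye(m)
    # Back-substitute column by column: (TA + TB[k,k] I) y_k = f_k - Y[:, :k] TB[:k, k]
    for k in range(n):
        rhs = F[:, k] - Y[:, :k] @ TB[:k, k]
        Y[:, k] = solve_triangular(TA + TB[k, k] * I, rhs)
    X = U @ Y @ V.conj().T               # transform back
    return X.real if all(map(np.isrealobj, (A, B, C))) else X
```

The triangular structure is what makes each column solvable by back-substitution; the diagonal entries of TA + TB[k,k] I are exactly the sums α_i + β_k, so the unique-solution condition above is also the nonsingularity condition for every triangular solve.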
Citations
Journal Article•DOI•
B. Moore
TL;DR: In this paper, it is shown that principal component analysis (PCA) is a powerful tool for coping with structural instability in dynamic systems, and it is proposed that the first step in model reduction is to apply the mechanics of minimal realization using these working subspaces.
Abstract: Kalman's minimal realization theory involves geometric objects (controllable, unobservable subspaces) which are subject to structural instability. Specifically, arbitrarily small perturbations in a model may cause a change in the dimensions of the associated subspaces. This situation is manifested in computational difficulties which arise in attempts to apply textbook algorithms for computing a minimal realization. Structural instability associated with geometric theories is not unique to control; it arises in the theory of linear equations as well. In this setting, the computational problems have been studied for decades and excellent tools have been developed for coping with the situation. One of the main goals of this paper is to call attention to principal component analysis (Hotelling, 1933), and an algorithm (Golub and Reinsch, 1970) for computing the singular value decomposition of a matrix. Together they form a powerful tool for coping with structural instability in dynamic systems. As developed in this paper, principal component analysis is a technique for analyzing signals. (Singular value decomposition provides the computational machinery.) For this reason, Kalman's minimal realization theory is recast in terms of responses to injected signals. Application of the signal analysis to controllability and observability leads to a coordinate system in which the "internally balanced" model has special properties. For asymptotically stable systems, this yields working approximations of X_c and X_ō, the controllable and unobservable subspaces. It is proposed that a natural first step in model reduction is to apply the mechanics of minimal realization using these working subspaces.

5,134 citations
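The "internally balanced" construction rests on the controllability and observability gramians, each obtained from a Lyapunov equation of exactly the form A^T X + XA = C treated by the cited algorithm. A minimal sketch on a hypothetical 2-state system, with SciPy's Lyapunov solver standing in for the paper's machinery:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, cholesky, svd

# Hypothetical asymptotically stable, minimal 2-state system.
A = np.array([[-1.0, 1.0], [0.0, -2.0]])
B = np.array([[1.0], [1.0]])
C = np.array([[1.0, 0.0]])

# Gramians from the two Lyapunov equations:
#   A Wc + Wc A^T = -B B^T     and     A^T Wo + Wo A = -C^T C
Wc = solve_continuous_lyapunov(A, -B @ B.T)    # controllability gramian
Wo = solve_continuous_lyapunov(A.T, -C.T @ C)  # observability gramian

# Hankel singular values: square roots of the eigenvalues of Wc Wo,
# computed stably via a Cholesky factor of Wc and an SVD.
L = cholesky(Wc, lower=True)
hsv = svd(L.T @ Wo @ L, compute_uv=False) ** 0.5
```

States whose Hankel singular values are negligible are the candidates for truncation in the model-reduction step the abstract proposes.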

Journal Article•DOI•
01 Jan 1978
TL;DR: In this article, a new algorithm for solving algebraic Riccati equations (both continuous-time and discrete-time versions) is presented, which is a variant of the classical eigenvector approach and uses instead an appropriate set of Schur vectors.
Abstract: In this paper a new algorithm for solving algebraic Riccati equations (both continuous-time and discrete-time versions) is presented. The method studied is a variant of the classical eigenvector approach and uses instead an appropriate set of Schur vectors thereby gaining substantial numerical advantages. Complete proofs of the Schur approach are given as well as considerable discussion of numerical issues. The method is apparently quite numerically stable and performs reliably on systems with dense matrices up to order 100 or so, storage being the main limiting factor. The description given below is a considerably abridged version of a complete report given in [0].

1,002 citations
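A modern descendant of this Schur-vector approach is available as scipy.linalg.solve_continuous_are. A small usage sketch on a hypothetical double-integrator system (the example system and weights are assumptions, not from the cited paper):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Hypothetical double-integrator plant with identity weights.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.eye(1)

# Stabilizing solution P of  A^T P + P A - P B R^{-1} B^T P + Q = 0.
P = solve_continuous_are(A, B, Q, R)

# Residual of the continuous-time algebraic Riccati equation.
res = A.T @ P + P @ A - P @ B @ np.linalg.solve(R, B.T @ P) + Q
```

The residual check is the practical analogue of the paper's concern with numerical stability: a reliable solver should leave it near roundoff level.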

Journal Article•DOI•
Charles Van Loan
TL;DR: The Kronecker product has a rich and very pleasing algebra that supports a wide range of fast, elegant, and practical algorithms and several trends in scientific computing suggest that this important matrix operation will have an increasingly greater role in the future.

888 citations
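The Kronecker product connects directly to AX + XB = C through the vec identity: with column-stacking vec, vec(AX + XB) = (I_n ⊗ A + B^T ⊗ I_m) vec(X). A sketch of this formulation (forming the dense m n × m n system is exactly what the Schur-based algorithm avoids):

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 3, 2
A = rng.standard_normal((m, m))
B = rng.standard_normal((n, n))
C = rng.standard_normal((m, n))

# vec(AX + XB) = (I_n (x) A + B^T (x) I_m) vec(X), column-stacking vec,
# so the Sylvester equation becomes one mn x mn linear system.
K = np.kron(np.eye(n), A) + np.kron(B.T, np.eye(m))
x = np.linalg.solve(K, C.flatten(order='F'))   # order='F' = column-stacking
X = x.reshape((m, n), order='F')
# X now satisfies A @ X + X @ B = C (up to roundoff)
```

Solving the Kronecker system costs O((mn)^3) operations, versus roughly O(m^3 + n^3) for the Schur-based approach, which is why the identity is used for analysis far more often than for computation.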

Journal Article•DOI•
TL;DR: A new method is proposed which differs from the Bartels-Stewart algorithm in that A is only reduced to Hessenberg form, and the resulting algorithm is between 30 and 70 percent faster depending upon the dimensions of the matrices A and B.
Abstract: One of the most effective methods for solving the matrix equation AX + XB = C is the Bartels-Stewart algorithm. Key to this technique is the orthogonal reduction of A and B to triangular form using the QR algorithm for eigenvalues. A new method is proposed which differs from the Bartels-Stewart algorithm in that A is only reduced to Hessenberg form. The resulting algorithm is between 30 and 70 percent faster depending upon the dimensions of the matrices A and B. The stability of the new method is demonstrated through a roundoff error analysis and supported by numerical tests. Finally, it is shown how the techniques described can be applied and generalized to other matrix equation problems.

795 citations
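For comparison with both methods in this abstract, SciPy's solve_sylvester implements the Bartels-Stewart approach (Schur forms for both A and B); Hessenberg-Schur variants live in control-oriented libraries such as SLICOT. A usage sketch:

```python
import numpy as np
from scipy.linalg import solve_sylvester

rng = np.random.default_rng(2)
A = rng.standard_normal((5, 5))
B = rng.standard_normal((4, 4))
C = rng.standard_normal((5, 4))

# solve_sylvester solves A X + X B = C via Bartels-Stewart.
X = solve_sylvester(A, B, C)
```

The Hessenberg-Schur saving comes from replacing the full QR iteration on A (needed to reach Schur form) with a single finite Hessenberg reduction, at no cost in stability.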

Posted Content•
TL;DR: A new zero-shot learning dataset is proposed, the Animals with Attributes 2 (AWA2) dataset which is made publicly available both in terms of image features and the images themselves and compares and analyzes a significant number of the state-of-the-art methods in depth.
Abstract: Due to the importance of zero-shot learning, i.e. classifying images where there is a lack of labeled training data, the number of proposed approaches has recently increased steadily. We argue that it is time to take a step back and analyze the status quo of the area. The purpose of this paper is three-fold. First, since there is no agreed-upon zero-shot learning benchmark, we define a new benchmark by unifying both the evaluation protocols and the data splits of publicly available datasets used for this task. This is an important contribution, as published results are often not comparable and sometimes even flawed due to, e.g., pre-training on zero-shot test classes. Moreover, we propose a new zero-shot learning dataset, the Animals with Attributes 2 (AWA2) dataset, which we make publicly available both in terms of image features and the images themselves. Second, we compare and analyze a significant number of state-of-the-art methods in depth, both in the classic zero-shot setting and in the more realistic generalized zero-shot setting. Finally, we discuss in detail the limitations of the current status of the area, which can be taken as a basis for advancing it.

785 citations


Cites methods from "Solution of the matrix equation AX ..."

  • ...The optimization problem can be transformed such that Bartels-Stewart algorithm [68] is able to solve it efficiently....


References
Book•
01 Jan 1965
Abstract: Theoretical background Perturbation theory Error analysis Solution of linear algebraic equations Hermitian matrices Reduction of a general matrix to condensed form Eigenvalues of matrices of condensed forms The LR and QR algorithms Iterative methods Bibliography Index.

7,422 citations


Journal Article•DOI•
TL;DR: In this article, the authors survey direct methods for solving the finite difference equations arising from elliptic problems on rectangular domains, indicating for each method whether it is easily adaptable to more general regions and to more general elliptic partial differential equations.
Abstract: [The survey concerns the Poisson problem (1),] where G is a rectangle, Δu = ∂²u/∂x² + ∂²u/∂y², and v, w are known functions. For computational purposes, this partial differential equation is frequently replaced by a finite difference analogue. These discrete models for (1) consist of linear systems of equations of very large dimension, and it is widely recognized that the usual direct methods (e.g., Gaussian elimination) are unsatisfactory for such systems [18, §§21.2-21.3]. Theoretical investigation has, therefore, been primarily directed toward the development of effective iterative methods for the solution of these problems [64], [66]. In recent years, however, direct methods that take advantage of the special block structure of these linear equations have appeared. For the rectangular regions under consideration, these methods can be considerably faster than iterative methods. The purpose of this survey paper is to provide brief summaries and a list of references for methods which can be used to directly solve the finite difference equations. Some of these methods can be applied to problems in more general domains. However, the extensions generally include only simple rectilinear regions, such as L-shaped or T-shaped domains. This is basically due to the fact that the direct methods require a great degree of regularity in the block structure of the matrix equation. In our discussion, we will indicate whether the methods are easily adaptable to more general regions, and to more general elliptic partial differential equations.

218 citations

Journal Article•DOI•
TL;DR: The volume of work involved in a QR step is far less if the matrix is of Hessenberg form, and since there are several stable ways of reducing a general matrix to this form, the QR algorithm is invariably used after such a reduction.
Abstract: The QR algorithm of Francis [1] and Kublanovskaya [4] with shifts of origin is described by the relations

    Q_s(A_s - k_s I) = R_s,    A_{s+1} = R_s Q_s^T + k_s I,    giving    A_{s+1} = Q_s A_s Q_s^T,    (1)

where Q_s is orthogonal, R_s is upper triangular and k_s is the shift of origin. When the initial matrix A_1 is of upper Hessenberg form, it is easy to show that this is true of all A_s. The volume of work involved in a QR step is far less if the matrix is of Hessenberg form, and since there are several stable ways of reducing a general matrix to this form [3, 5, 8], the QR algorithm is invariably used after such a reduction.

72 citations
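One explicitly shifted QR step from the relations above can be sketched directly. Note a convention difference: NumPy's qr factors A - kI = QR, i.e. its Q is the transpose of the paper's Q_s, but the iterate RQ + kI is the same orthogonal similarity Q^T A Q.

```python
import numpy as np

def qr_step_with_shift(A, k):
    """One explicitly shifted QR step: factor A - k I = Q R, then form
    A_next = R Q + k I, which equals Q^T A Q (an orthogonal similarity,
    so the eigenvalues are preserved)."""
    n = A.shape[0]
    Q, R = np.linalg.qr(A - k * np.eye(n))
    return R @ Q + k * np.eye(n)
```

Repeated steps with a well-chosen shift (e.g. the trailing diagonal entry) drive the last row toward zero off the diagonal, deflating one eigenvalue at a time; for Hessenberg matrices each step costs O(n^2) instead of O(n^3).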

Journal Article•DOI•
TL;DR: Boundary problems arising in the approximate solution of linear partial differential equations are attacked by direct matrix methods; the examples were worked on desk machines, but the operations involved are capable of being handled efficiently and simply by modern high-speed digital computers.
Abstract: The investigations described in this paper were initiated in an attempt to replace by direct methods the successive approximation methods such as those of Southwell and Thom for the solution of systems of difference equations arising in the approximate solutions of linear partial differential equations. Boundary problems of this type form the subject of part III, which is the kernel of the paper. As the work progressed it was found that the methods evolved were applicable, and capable of extension, to step-by-step solutions also, and to ordinary as well as partial differential equations. These topics are presented in parts I, II and IV. Matrix methods naturally predominate. The methods are illustrated by small-scale examples worked on desk machines, but the operations involved are, we believe, capable of being handled efficiently and simply by modern high-speed digital computers.

48 citations