
Showing papers on "Square matrix" published in 1972


Journal ArticleDOI
TL;DR: Inequalities are obtained for the complex eigenvalues of an M-matrix or a P-matrix which depend only on the order of the matrix, as mentioned in this paper.
Abstract: Inequalities are obtained for the complex eigenvalues of an M-matrix or a P-matrix which depend only on the order of the matrix.

52 citations


Journal ArticleDOI
01 Jan 1972
TL;DR: In this article, necessary and sufficient conditions are given in order that a nonnegative matrix have a nonnegative Moore-Penrose generalized inverse.
Abstract: Necessary and sufficient conditions are given in order that a nonnegative matrix have a nonnegative Moore-Penrose generalized inverse.

45 citations
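A minimal numerical check in Python (assuming NumPy), separate from the paper's characterization: it simply tests whether the Moore-Penrose inverse of a given nonnegative matrix is itself nonnegative.

```python
import numpy as np

# Quick numerical test (not the paper's characterization): does a given
# nonnegative matrix have a nonnegative Moore-Penrose inverse?
def has_nonnegative_pinv(A, tol=1e-12):
    return bool(np.all(np.linalg.pinv(A) >= -tol))

A = np.array([[1.0, 0.0], [0.0, 2.0], [0.0, 0.0]])  # pseudoinverse is nonnegative
B = np.array([[1.0, 1.0], [0.0, 1.0]])              # inverse contains a negative entry

print(has_nonnegative_pinv(A))  # True
print(has_nonnegative_pinv(B))  # False
```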


Journal ArticleDOI
TL;DR: In this article, a numerical method for computing the Fisher information matrix about the five parameters of a mixture of two normal distributions is presented, using a simple transformation which reduces the number of parameters from five to three.
Abstract: This paper presents a numerical method for computation of the Fisher information matrix about the five parameters of a mixture of two normal distributions. It is shown, by using a simple transformation which reduces the number of parameters from five to three, that the computation of the whole information matrix leads to the numerical evaluation of a particular integral. The Hermite-Gauss quadrature formula, Romberg's algorithm, a power series, and Taylor's expansion are applied for the evaluation of this integral and the results are compared with each other. A short table has been provided from which the approximate information matrix can be obtained in practice.

44 citations
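The integral evaluation the abstract mentions lends itself to a short quadrature sketch (assuming NumPy); the Gauss-Hermite rule below is generic, and the integrand is a stand-in rather than the paper's mixture-information integral.

```python
import numpy as np

# Gauss-Hermite quadrature for a Gaussian-weighted integral:
#   E[g(Z)] = (1/sqrt(pi)) * sum_i w_i * g(sqrt(2) * x_i),  Z ~ N(0, 1)
def gauss_hermite_expectation(g, deg=40):
    x, w = np.polynomial.hermite.hermgauss(deg)  # nodes/weights for weight e^{-x^2}
    return np.sum(w * g(np.sqrt(2.0) * x)) / np.sqrt(np.pi)

print(gauss_hermite_expectation(lambda z: z**2))  # ~1.0, i.e. E[Z^2] for Z ~ N(0, 1)
```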


Patent
Hurd E., Stern D.
27 Sep 1972
TL;DR: In this article, a printer using a matrix of printing elements arranged in a square configuration with the printing elements being used to print alpha-numeric data in either a vertical or horizontal orientation was presented.
Abstract: A printer uses a matrix of printing elements arranged in a square configuration, the printing elements being used to print alpha-numeric data in either a vertical or horizontal orientation by electronically selecting a rectangular matrix from less than the full number of printing elements in the square matrix, so that the printed data can be selectively oriented without mechanically reorienting the print head. The print element drive circuitry also enables the printing matrix to print from either end of the selected rectangular print matrix in either the horizontal or vertical orientation, providing four possible orientations of the printed alpha-numeric data. A memory is used to store the input control signals for each of the rows of the rectangular printing matrix, while a control means is provided for reading out the control signals in either direction from the memory, in combination with a matrix selection control, to provide energization of a rectangular print matrix in either the horizontal or vertical configuration.

34 citations


Journal ArticleDOI
01 Feb 1972
TL;DR: In this paper, the authors form the matrix product $AXA^T = Y$, where A is the incidence matrix of a family of subsets, X is a diagonal matrix of independent indeterminates and $A^T$ denotes the transpose of A; the symmetric matrix Y of order m gives a complete description of the intersection patterns $S_i \cap S_j$.
Abstract: Let $S = \{x_1, x_2, \ldots, x_n\}$ be an n-set and let $S_1, S_2, \ldots, S_m$ be subsets of S. Let A of size m by n be the incidence matrix for these subsets of S. We now regard $x_1, x_2, \ldots, x_n$ as independent indeterminates and define $X = \mathrm{diag}[x_1, x_2, \ldots, x_n]$. We then form the matrix product $AXA^T = Y$, where $A^T$ denotes the transpose of the matrix A. The symmetric matrix Y has in its (i, j) position the sum of the indeterminates in $S_i \cap S_j$, and consequently Y gives us a complete description of the intersection patterns $S_i \cap S_j$. The specialization $x_1 = x_2 = \cdots = x_n = 1$ of this basic matrix equation has been used extensively in the study of block designs. We give some other interesting applications of the matrix equation that involve subsets with various restricted intersection patterns.

22 citations
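A small SymPy sketch of the identity for two concrete subsets; the subsets and the set size are arbitrary examples.

```python
import sympy as sp

# For S = {x1, x2, x3, x4}, S1 = {x1, x2, x3}, S2 = {x2, x3, x4}, the (i, j)
# entry of Y = A X A^T is the sum of the indeterminates in the intersection
# of S_i and S_j.
x1, x2, x3, x4 = sp.symbols('x1 x2 x3 x4')

A = sp.Matrix([[1, 1, 1, 0],    # incidence row of S1
               [0, 1, 1, 1]])   # incidence row of S2
X = sp.diag(x1, x2, x3, x4)

Y = A * X * A.T
print(Y)  # Matrix([[x1 + x2 + x3, x2 + x3], [x2 + x3, x2 + x3 + x4]])
```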


Journal ArticleDOI
TL;DR: In this paper, the problem of finding the canonical (i.e., the simplest) matrix $A^0$ in the set of all unitary congruence transforms of a given square matrix A is solved for matrices which satisfy $AA^+ = (A^+A)^*$, $A^+$ and $A^*$ being the adjoint and the complex conjugate of A.
Abstract: A unitary congruence transform of a given square matrix A is $UA\tilde U$, where U is a unitary matrix and $\tilde U$ is its transpose. The problem of finding the canonical (i.e., the simplest) matrix $A^0$ in the set of all unitary congruence transforms of A is solved for matrices which satisfy $AA^+ = (A^+A)^*$, $A^+$ and $A^*$ being the adjoint and the complex conjugate of A. Such matrices are called conjugate-normal. Symmetric, skew-symmetric and unitary matrices are special cases, but do not exhaust all conjugate-normal ones. The canonical form $A^0$ turns out to be the direct sum of a diagonal matrix and of several $2 \times 2$ matrices of the form $\begin{pmatrix} 0 & |a| \\ a & 0 \end{pmatrix}$. A particular matrix Z and the general form $Z'$ of U which transforms A into $A^0$ by congruence are derived.

21 citations
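A quick numerical check (assuming NumPy) of the defining identity for one of the special cases named in the abstract, a complex symmetric matrix; $A^+$ is the adjoint and $A^*$ the entrywise conjugate, as above.

```python
import numpy as np

# Check A A^+ = (A^+ A)^* for a complex symmetric matrix (A.T == A),
# one of the special cases of conjugate-normal matrices.
rng = np.random.default_rng(0)
B = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
A = B + B.T                         # complex symmetric

lhs = A @ A.conj().T                # A A^+
rhs = (A.conj().T @ A).conj()       # (A^+ A)^*
print(np.allclose(lhs, rhs))        # True
```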


Journal ArticleDOI
Jorma Rissanen
TL;DR: In this paper, an algorithm for computing Padé approximants to any sequence A0, A1, ... of s × t matrices is described, which gives a new way of finding the minimal polynomial of any square matrix A and the inverse of the characteristic matrix xI − A.
Abstract: An algorithm is described for calculating the existing Padé approximants to any sequence A0, A1, ... of s × t matrices. As an application the algorithm gives a new way of finding the minimal polynomial of any square matrix A and the inverse of the characteristic matrix xI − A.

18 citations
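Not the Padé-based algorithm of the paper: the SymPy sketch below finds the minimal polynomial by brute force, from the first linear dependence among the powers of A, and is included only as a point of comparison.

```python
import sympy as sp

def minimal_polynomial(A):
    """Monic minimal polynomial of A via the first linear dependence
    among I, A, A^2, ... (exact rational arithmetic)."""
    n = A.rows
    x = sp.symbols('x')
    powers = [sp.eye(n)]
    while True:
        powers.append(powers[-1] * A)
        # columns are the vectorised powers I, A, ..., A^k
        M = sp.Matrix.hstack(*[p.reshape(n * n, 1) for p in powers])
        null = M.nullspace()
        if null:
            c = null[0] / null[0][len(powers) - 1]   # make it monic
            return sp.Poly(sum(c[i] * x**i for i in range(len(powers))), x)

A = sp.Matrix([[2, 0, 0], [0, 2, 0], [0, 0, 3]])
print(minimal_polynomial(A))   # x**2 - 5*x + 6, i.e. (x - 2)(x - 3)
```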


Journal ArticleDOI
A. Rowe

16 citations


Journal ArticleDOI
01 Jan 1972
TL;DR: In this paper, the result that a nonnegative finite square matrix has a nonnegative inverse if and only if its entries are all zero except for a single positive entry in each row and column is generalized to column-finite infinite matrices.
Abstract: We generalize a result stating that a nonnegative finite square matrix has a nonnegative inverse if and only if it is the product of a permutation matrix by a diagonal matrix. We consider column-finite infinite matrices and give a simple proof using elementary ideas from the theory of partially ordered linear algebras. In [1] the authors show that a nonnegative square matrix has a nonnegative inverse if and only if its entries are all zero except for a single positive entry in each row and column. In this note we generalize this result and simplify the proof as well. Let A denote the real linear algebra of all column-finite infinite matrices with real entries, partially ordered entrywise.

16 citations
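A NumPy illustration of the finite-dimensional statement being generalized: a permutation matrix times a positive diagonal matrix is nonnegative and its inverse is again nonnegative.

```python
import numpy as np

P = np.array([[0, 1, 0],
              [0, 0, 1],
              [1, 0, 0]], dtype=float)   # permutation matrix
D = np.diag([2.0, 5.0, 0.5])             # positive diagonal matrix

A = P @ D                                # nonnegative, one positive entry per row/column
A_inv = np.diag(1.0 / np.diag(D)) @ P.T  # the claimed nonnegative inverse
print(np.allclose(A @ A_inv, np.eye(3))) # True
print(np.all(A_inv >= 0))                # True
```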


Journal ArticleDOI
TL;DR: In this paper, a procedure is given for inverting a symmetric band matrix whose diagonals, starting from the main diagonal, contain the elements k, k−1, k−2, ..., 2, 1, with zeros in the remaining diagonals.
Abstract: A procedure is given for inversion of a symmetric band matrix with all elements in a certain diagonal equal. Starting from the main diagonal we have the elements k, k−1, k−2, ..., 2, 1, with zeros in the remaining diagonals. Letting the second-order difference operator δ² operate on the rows of the matrix, we obtain a new matrix which can be inverted by a partition method after certain permutations of the elements.

14 citations
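A sketch (assuming NumPy/SciPy) that only builds the band matrix in question and inverts it with a dense solver as a reference point; the δ² reduction and the partition method of the paper are not reproduced.

```python
import numpy as np
from scipy.linalg import toeplitz

# Symmetric band matrix with k on the main diagonal and k-1, k-2, ..., 1 on
# successive off-diagonals, zeros beyond the band.
def banded_matrix(n, k):
    first_col = np.maximum(k - np.arange(n), 0).astype(float)
    return toeplitz(first_col)

T = banded_matrix(8, 3)
print(T[:4, :4])                          # [[3 2 1 0], [2 3 2 1], [1 2 3 2], [0 1 2 3]]
T_inv = np.linalg.inv(T)                  # dense reference inverse
print(np.allclose(T @ T_inv, np.eye(8)))  # True
```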



Journal ArticleDOI
TL;DR: In this article, a class of linear systems of Itô differential equations is examined and an algebraic criterion for exponential stability in the mean square is given: the spectrum of a certain square matrix, constructed from the parameters of the system, must lie in the open left half-plane.

Journal ArticleDOI
TL;DR: A unified continued-fraction theory, which can be considered a generalized feedback theory, is established, and a multiple-cycle model consisting of many feedback constant matrices and many feedforward integral matrices is constructed.

Journal ArticleDOI
TL;DR: A matrix inequality is proved which gives a necessary condition for this problem to have a solution.
Abstract: Let A be an Hermitian n×n matrix and s1, ..., sn real numbers; under what conditions does there exist a real diagonal n×n matrix M such that A+M has eigenvalues s1, ..., sn? In the present note we prove a matrix inequality which gives a necessary condition for this problem to have a solution.

Journal ArticleDOI
01 Feb 1972
TL;DR: The main purpose of this paper, as discussed by the authors, is to obtain solutions of matrix equations of the types AX−XB=C and XDX+AX+XB+C=O, where X is an unknown n by n matrix and A, B, C, D are n by n matrices having elements belonging to the field C of complex numbers.
Abstract: The main purpose of this paper is to obtain solutions of matrix equations of the following types, AX−XB=C, XDX+AX+XB+C=O, where X is an unknown n by n matrix and A, B, C, D are n by n matrices having elements belonging to the field C of complex numbers. Results obtained extend those of W. E. Roth, J. E. Potter and others concerning the existence and the representation of solutions X of the above equations.
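For the linear equation AX − XB = C, SciPy's Sylvester solver gives a quick numerical check (it solves AX + XB = Q, so B is negated); the matrices below are arbitrary examples, not taken from the paper, and a unique solution exists here because random A and B almost surely share no eigenvalue.

```python
import numpy as np
from scipy.linalg import solve_sylvester

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))
C = rng.standard_normal((4, 4))

X = solve_sylvester(A, -B, C)          # solves A X + X (-B) = C, i.e. A X - X B = C
print(np.allclose(A @ X - X @ B, C))   # True
```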

Journal ArticleDOI
01 May 1972
TL;DR: In this article, it was shown that if P is a real square matrix with all members of its nested set of principal minors non-zero, then there exists a real diagonal matrix, D, such that the characteristic roots of DP are all real, negative and distinct.
Abstract: Several years ago, Fisher and Fuller (1) proved that if P is a real square matrix with all members of its ‘nested set’ of principal minors non-zero, then there exists a real diagonal matrix, D, such that the characteristic roots of DP are all real, negative and distinct. This interesting and powerful result, used by the authors to derive further results concerning convergence of linear iterative processes, has since also proved of interest to economists studying the stability of economic general equilibrium.
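A numerical illustration (assuming NumPy), not the Fisher-Fuller construction itself: for a small P with nonzero leading principal minors, scan diagonal matrices D = diag(−t, −t², −t³) and report one that makes the eigenvalues of DP real, negative and distinct.

```python
import numpy as np

P = np.array([[2.0, 1.0, 0.5],
              [1.0, 3.0, 1.0],
              [0.5, 1.0, 2.0]])   # leading principal minors 2, 5, 8.25 -- all nonzero

def works(D, P, tol=1e-9):
    lam = np.linalg.eigvals(D @ P)
    gaps = np.abs(np.subtract.outer(lam, lam))[~np.eye(len(lam), dtype=bool)]
    return (np.all(np.abs(lam.imag) < tol)      # real
            and np.all(lam.real < -tol)         # negative
            and np.min(gaps) > tol)             # distinct

for t in [0.5, 0.2, 0.1, 0.05, 0.01]:
    D = np.diag([-t, -t**2, -t**3])
    if works(D, P):
        print("t =", t, "eigenvalues of DP:", np.sort(np.linalg.eigvals(D @ P).real))
        break
else:
    print("no suitable D found in this small scan")
```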

Journal ArticleDOI
TL;DR: In this article, the notion of weak spectral inverse of a square matrix is introduced, generalizing the definition of a spectral inverse given by Greville, and a representation for weak spectral inverses in terms of the Drazin inverse is developed, and it is shown that a formula given previously for the Cline inverse is a special case.
Abstract: The notion of a weak spectral inverse of a square matrix is introduced, generalizing the definition of a spectral inverse given by Greville. A representation for weak spectral inverses in terms of the Drazin inverse is developed, and it is shown that a formula given previously for the Cline inverse is a special case. Necessary and sufficient conditions are given for a weak spectral inverse to be a spectral inverse, first in terms of the Drazin inverse representation, and then by making use of the quasi-commuting inverse defined by Erdelyi.
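The Drazin inverse underlying the representation can be computed, for small matrices, from a generic pseudoinverse identity; the sketch below (assuming NumPy) uses A^D = A^k (A^(2k+1))^+ A^k with k the index of A, and is not the representation developed in the paper.

```python
import numpy as np

def drazin_inverse(A, tol=1e-12):
    """Drazin inverse via A^D = A^k (A^(2k+1))^+ A^k, k = index of A."""
    n = A.shape[0]
    k, Ak = 0, np.eye(n)
    while np.linalg.matrix_rank(Ak, tol=tol) != np.linalg.matrix_rank(Ak @ A, tol=tol):
        k, Ak = k + 1, Ak @ A
    return Ak @ np.linalg.pinv(np.linalg.matrix_power(A, 2 * k + 1)) @ Ak

A = np.array([[1.0, 1.0],
              [0.0, 0.0]])         # index 1; here A^D happens to equal A
AD = drazin_inverse(A)
print(np.allclose(A @ AD, AD @ A))   # True: A^D commutes with A
print(np.allclose(AD @ A @ AD, AD))  # True
print(np.allclose(A @ A @ AD, A))    # True: A^(k+1) A^D = A^k with k = 1
```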

Journal ArticleDOI
TL;DR: In this paper, a construction for the Jacobson chains of columns from the given elementary divisors of a square matrix A over a field F that is not necessarily algebraically closed is presented.
Abstract: In this note we present a construction for the Jacobson chains of columns from the given elementary divisors of a square matrix A over a field $\mathcal{F}$, that is not necessarily algebraically closed. It generalizes the algorithm used by Gantmacher for the closed field case and does not assume a knowledge of the second canonical form under similarity.


Journal ArticleDOI
TL;DR: A triangular partitioning (additive) scheme for inverting a non-singular matrix is proposed in this paper, which expresses the inverse matrix as a product of a lower triangular matrix and a set of upper triangular matrices.
Abstract: A triangular partitioning (additive) scheme for inverting a non-singular matrix is suggested. This scheme is direct and expresses the inverse matrix as a product of a lower triangular matrix and a set of upper triangular matrices.
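Not the additive partitioning scheme of the paper, but the familiar multiplicative analogue via an LU factorization (assuming NumPy/SciPy), included as a point of comparison: the inverse is expressed through triangular factors.

```python
import numpy as np
from scipy.linalg import lu

rng = np.random.default_rng(2)
A = rng.standard_normal((5, 5))      # generic non-singular matrix

P, L, U = lu(A)                      # A = P @ L @ U
A_inv = np.linalg.inv(U) @ np.linalg.inv(L) @ P.T   # product of triangular inverses
print(np.allclose(A_inv, np.linalg.inv(A)))         # True
```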

Journal ArticleDOI
R. Hoard, F. Huband
TL;DR: In this article, necessary and sufficient conditions for the nonsingularity of a particular mn × mn partitioned matrix C are given in terms of lower-order m × m matrices.
Abstract: Necessary and sufficient conditions for the nonsingularity of a particular mn × mn partitioned matrix C are given in terms of lower-order m × m matrices. The matrix C is formed by adding a block diagonal matrix to a partitioned matrix, all of whose submatrices are identical. Analytical formulas are given for both the determinant and the inverse of C.
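A sketch (assuming NumPy/SciPy) that merely builds a matrix of the stated form and checks nonsingularity numerically; the paper's m × m criterion and its closed-form determinant and inverse are not reproduced here.

```python
import numpy as np
from scipy.linalg import block_diag

rng = np.random.default_rng(3)
m, n = 3, 4
B = rng.standard_normal((m, m))                        # the repeated block
diag_blocks = [rng.standard_normal((m, m)) for _ in range(n)]

# Block diagonal part plus a partitioned matrix all of whose blocks equal B.
C = block_diag(*diag_blocks) + np.kron(np.ones((n, n)), B)
print(C.shape)                              # (12, 12), i.e. mn x mn
print(abs(np.linalg.det(C)) > 1e-10)        # True for generic random data
```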

Journal ArticleDOI
TL;DR: In this article, a modified form of Roth's transformation diagram for a linear graph is used to illustrate the solution of the electrical-network problem, and the significance of the orthogonal projections of the branch space into the branch voltage and current subspaces, which are defined by Kirchhoff's laws, is illustrated.
Abstract: A modified form of Roth's transformation diagram for a linear graph is used to illustrate the solution of the electrical-network problem. The diagram illustrates particularly the significance of the orthogonal projections of the branch space into the branch voltage and current subspaces, which are defined by Kirchhoff's laws, and also the existence of the constrained matrix inverse which forms a basis for the solution of the electrical-network problem. Properties of the generalised inverse matrix are also discussed in relation to the network problem.


Journal ArticleDOI
TL;DR: In this article, the Kronecker product of matrices $A = (a_{ij})$ and B is the block matrix whose (i, j)-th block is $a_{ij}B$, and is written $A \times B$.
Abstract: Throughout this paper any matrix is square and of order q unless otherwise specified. I and J will represent an identity matrix and a square matrix with every element +1 respectively; if necessary, the size will be indicated by a subscript. The Kronecker product of matrices $A = (a_{ij})$ and B is the block matrix whose (i, j)-th block is $a_{ij}B$, and is written $A \times B$.
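A minimal NumPy illustration of the definition: the (i, j)-th block of the Kronecker product is $a_{ij}B$.

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])
B = np.array([[0, 1],
              [1, 0]])

K = np.kron(A, B)                         # 4 x 4 block matrix
print(np.array_equal(K[:2, 2:], 2 * B))   # True: the (1, 2) block is a_12 * B = 2B
```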

01 Oct 1972
TL;DR: In this article, the approximation of a Toeplitz matrix by a circulant matrix is used to derive the dispersion matrix of the parameters of a dynamic model when the error process is known and when one has an approximation of the value of the parameter.
Abstract: The approximation of a Toeplitz matrix by a circulant matrix is used to derive the dispersion matrix of the parameters of a dynamic model when the error process is known and when one has an approximation of the value of the parameters.
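A generic sketch (assuming NumPy/SciPy) of replacing a symmetric Toeplitz matrix by a circulant one; the wrap-around averaging used for the first column is an assumption for illustration, not the report's construction, and the payoff is that the circulant's eigenvalues come from the FFT of its first column.

```python
import numpy as np
from scipy.linalg import toeplitz, circulant

n = 8
r = 0.6 ** np.arange(n)               # stand-in autocovariance sequence
T = toeplitz(r)                       # symmetric Toeplitz matrix

c = r.copy()
c[1:] = (r[1:] + r[:0:-1]) / 2        # c[k] = (r[k] + r[n-k]) / 2, so c[k] = c[n-k]
C = circulant(c)                      # circulant surrogate for T

lam = np.fft.fft(c)                   # circulant eigenvalues via the FFT
print(np.allclose(np.sort(lam.real), np.sort(np.linalg.eigvals(C).real)))  # True
print(np.abs(T - C).max())            # size of the entrywise approximation error
```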


Journal ArticleDOI
TL;DR: In this paper, the authors prove the existence of a transformation operator with a condition at infinity that sends a solution of the matrix equation $-y'' + My = \lambda^2 y$ (M a constant Hermitian matrix) into a solution of the matrix equation $-y'' + Q(x)y + My = \lambda^2 y$, where the matrix function Q(x) is continuously differentiable for 0 ≤ x < ∞ and Hermitian for each x in [0, ∞).
Abstract: We prove the existence of a transformation operator with a condition at infinity that sends a solution of the matrix equation $-y'' + My = \lambda^2 y$ (M is a constant Hermitian matrix) into a solution of the matrix equation $-y'' + Q(x)y + My = \lambda^2 y$ (the matrix function Q(x) is continuously differentiable for 0 ≤ x < ∞ and is Hermitian for each x belonging to [0, ∞)); we study some properties of the kernel of the transformation operator.

Journal ArticleDOI
01 Feb 1972
TL;DR: The class of matrix functions of bounded variation of order n on an interval [a, b] that are representable as the difference of two monotone matrix functions on that interval was introduced by Dobsch.
Abstract: The class of matrix functions of 'bounded variation' was introduced by 0. Dobsch in a paper published in 1937 [2]. The consideration of this class of functions immediately gives rise to the consideration of those matrix functions of order n on an interval [a, b] that are representable as the difference of two monotone matrix functions on that interval. Such a difference will have high regularity properties when n is large and is therefore much more than simply a function of bounded variation. The characterization of this class was sought in the paper of Dobsch [2]. The purpose of this paper is to give a complete description of a related class: the functions defined on (-1, 1) which have restrictions to any closed subinterval which are such differences.