Showing papers on "Square matrix published in 1974"
••
TL;DR: The fast matrix multiplication algorithm by Strassen was used to obtain the triangular factorization of a permutation of any nonsingular matrix of order n in at most (2.35)n^(log2 7) arithmetic operations, as discussed by the authors.
Abstract: The fast matrix multiplication algorithm by Strassen is used to obtain the triangular factorization of a permutation of any nonsingular matrix of order n in at most (2.35)n^(log2 7) arithmetic operations. Strassen uses block LDU factorization (Householder [2, p. 126]) recursively to compute the inverse of a matrix of order m2^k by m2^k divisions, at most (6/5)m^3 7^k - m2^k multiplications, and at most (6/5)(5 + m)m^2 7^k - 7(m2^k)^2 additions. The inverse of a matrix of order n could then be computed in at most (5.64)n^(log2 7) arithmetic operations.
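The operation counts above come from Strassen's scheme of seven recursive block products per level. The sketch below (plain Python, dimension assumed a power of two for simplicity; an illustration of the multiplication algorithm, not of the paper's factorization procedure) shows the seven products M1-M7 and the recombination:

```python
def mat_add(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def mat_sub(A, B):
    return [[a - b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def naive_mul(A, B):
    n, m, p = len(A), len(B), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

def strassen(A, B):
    n = len(A)
    if n <= 2:  # small base case: fall back to the classical algorithm
        return naive_mul(A, B)
    h = n // 2
    # split into 2x2 blocks (n assumed a power of two)
    A11 = [r[:h] for r in A[:h]]; A12 = [r[h:] for r in A[:h]]
    A21 = [r[:h] for r in A[h:]]; A22 = [r[h:] for r in A[h:]]
    B11 = [r[:h] for r in B[:h]]; B12 = [r[h:] for r in B[:h]]
    B21 = [r[:h] for r in B[h:]]; B22 = [r[h:] for r in B[h:]]
    # Strassen's seven recursive products
    M1 = strassen(mat_add(A11, A22), mat_add(B11, B22))
    M2 = strassen(mat_add(A21, A22), B11)
    M3 = strassen(A11, mat_sub(B12, B22))
    M4 = strassen(A22, mat_sub(B21, B11))
    M5 = strassen(mat_add(A11, A12), B22)
    M6 = strassen(mat_sub(A21, A11), mat_add(B11, B12))
    M7 = strassen(mat_sub(A12, A22), mat_add(B21, B22))
    C11 = mat_add(mat_sub(mat_add(M1, M4), M5), M7)
    C12 = mat_add(M3, M5)
    C21 = mat_add(M2, M4)
    C22 = mat_add(mat_sub(mat_add(M1, M3), M2), M6)
    return [C11[i] + C12[i] for i in range(h)] + \
           [C21[i] + C22[i] for i in range(h)]
```

Seven products instead of eight at each of the log2 n levels is what yields the n^(log2 7) growth quoted above.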
341 citations
••
01 Feb 1974
TL;DR: In this article, it was shown that if A is a nonnegative fully indecomposable matrix, i.e. A contains no s x (n - s) zero submatrix, then there exists a doubly stochastic matrix of the form D1AD2 where D1 and D2 are diagonal matrices with positive main diagonals.
Abstract: Let A be a nonnegative m x n matrix and let r = (r1, ..., rm) and c = (c1, ..., cn) be positive vectors such that r1 + ... + rm = c1 + ... + cn. It is well known that if there exists a nonnegative m x n matrix B with the same zero pattern as A having ith row sum ri and jth column sum cj, then there exist diagonal matrices D1 and D2 with positive main diagonals such that D1AD2 has ith row sum ri and jth column sum cj. However, the known proofs are at best cumbersome. It is shown here that this result can be obtained by considering the minimum of a certain real-valued function of n positive variables. It was shown originally by Sinkhorn and Knopp [8] and Brualdi, Parter, and Schneider [3] that if A is a nonnegative fully indecomposable matrix, i.e. A contains no s x (n - s) zero submatrix, then there exists a doubly stochastic matrix of the form D1AD2 where D1 and D2 are diagonal matrices with positive main diagonals. Later Djokovic [4] and, independently, London [5] proved the same theorem by considering the minimum of a suitable function.
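The Sinkhorn-Knopp theorem quoted above also suggests the obvious computation: alternately rescale rows and columns until both sum to 1. A minimal sketch (plain Python, assuming a strictly positive square A so that convergence is immediate; this is the classical iteration, not the variational proof the abstract describes):

```python
def sinkhorn(A, iters=500):
    """Scale a positive square matrix toward doubly stochastic form D1*A*D2
    by alternately normalizing row sums and column sums."""
    n = len(A)
    r = [1.0] * n  # diagonal of D1 (row scaling)
    c = [1.0] * n  # diagonal of D2 (column scaling)
    for _ in range(iters):
        r = [1.0 / sum(A[i][j] * c[j] for j in range(n)) for i in range(n)]
        c = [1.0 / sum(r[i] * A[i][j] for i in range(n)) for j in range(n)]
    return [[r[i] * A[i][j] * c[j] for j in range(n)] for i in range(n)]
```

For fully indecomposable (but not strictly positive) matrices the same iteration still converges, which is one standard way the existence result is made concrete.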
65 citations
••
TL;DR: The definition of a projector under a semi-least-squares inverse of a complex matrix is given in this article; the same concept can also be defined in terms of projectors under seminorms.
63 citations
••
TL;DR: A nonlinear generalization of square matrices with non-positive off-diagonal elements is presented, an algorithm to solve the corresponding complementarity problem is suggested, and a potential application in extending the well-known linear Leontief input-output systems is discussed.
Abstract: A nonlinear generalization of square matrices with non-positive off-diagonal elements is presented, and an algorithm to solve the corresponding complementarity problem is suggested. It is shown that the existence of a feasible solution implies the existence of a least solution which is also a complementary solution. A potential application of this nonlinear setup in extending the well-known linear Leontief input-output systems is discussed.
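For context, the linear Leontief system that this work generalizes determines gross output x from final demand d via (I - A)x = d, where A is the nonnegative input-coefficient matrix; I - A then has non-positive off-diagonal entries, which is exactly the matrix class being generalized. A small illustration (plain Python, hypothetical made-up coefficients; solved by the fixed-point iteration x <- d + Ax, valid here because the spectral radius of A is below 1):

```python
def leontief_output(A, d, iters=200):
    """Solve (I - A) x = d by the fixed-point iteration x <- d + A x,
    which converges when the spectral radius of A is below 1."""
    n = len(d)
    x = d[:]
    for _ in range(iters):
        x = [d[i] + sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
    return x

# Hypothetical two-sector economy: A[i][j] is the amount of good i
# consumed in producing one unit of good j.
A = [[0.2, 0.3],
     [0.1, 0.4]]
d = [10.0, 20.0]
x = leontief_output(A, d)  # gross output meeting final demand d
```

The least-solution result in the abstract is the nonlinear analogue of this system having a well-defined minimal nonnegative output vector.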
53 citations
••
29 Jul 1974
TL;DR: Five classes of composite matrix multiplication algorithms are considered and an optimal strategy is presented for each class; best- and worst-case cost coefficients for matrix multiplication are given.
Abstract: A set of basic procedures for constructing matrix multiplication algorithms is defined. Five classes of composite matrix multiplication algorithms are considered and an optimal strategy is presented for each class. Instances are given of improvements in arithmetic cost over Strassen's method for multiplying square matrices. Best- and worst-case cost coefficients for matrix multiplication are given.
39 citations
••
01 Feb 1974
TL;DR: In this article, necessary and sufficient conditions are obtained for the existence, given an m x m symmetric nonnegative matrix A and a positive vector R = (r1, ..., rm), of a diagonal matrix D with positive main diagonal such that DAD has row sum vector R.
Abstract: Given an m x m symmetric nonnegative matrix A and a positive vector R = (r1, ..., rm), necessary and sufficient conditions are obtained in order that there exist a diagonal matrix D with positive main diagonal such that DAD has row sum vector R. A nonnegative m x n matrix A is called completely decomposable if there exist partitions a1, a2 of {1, ..., m} and b1, b2 of {1, ..., n} into nonvacuous sets such that A[a1, b2] and A[a2, b1] are zero matrices. Here we use the notation that A[a, b] is the submatrix of A whose rows are indexed by a and whose columns are indexed by b, the rows and columns in A[a, b] appearing in the same order as in A. If m = n, the matrix A is called completely reducible if there exists a partition a1, a2 of {1, ..., m} into nonvacuous sets such that A[a1, a2] and A[a2, a1] are zero matrices. Generalizing theorems of Sinkhorn and Knopp [10] and Brualdi, Parter, and Schneider [1], Menon [7] proved the following theorem: Let A be an m x n nonnegative matrix and let R = (r1, ..., rm) and S = (s1, ..., sn) be positive vectors with r1 + ... + rm = s1 + ... + sn. Let M(R, S) denote the class of all m x n nonnegative matrices with row sum vector R and column sum vector S. Then there exist diagonal matrices D1 and D2 with positive main diagonals such that D1AD2 is in M(R, S) if and only if there is a matrix in M(R, S) which has the same zero pattern as A. (We say that a matrix B has the same zero pattern as A provided bij = 0 if and only if aij = 0.) If, in addition, A is not completely decomposable, the diagonal matrices D1, D2 are unique up to a positive scalar factor: if U1AU2 is in M(R, S) then there exists d > 0 such that U1 = dD1 and U2 = (1/d)D2.
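Menon's theorem also has a natural computational companion: alternately rescale rows to hit R and columns to hit S. A small sketch (plain Python, assuming a strictly positive A, for which a matrix in M(R, S) with the same zero pattern trivially exists; this is the standard iteration, not the paper's proof technique):

```python
def scale_to_sums(A, R, S, iters=1000):
    """Find diagonal scalings d1, d2 so that the matrix d1[i]*A[i][j]*d2[j]
    has row sum vector R and column sum vector S (A assumed positive,
    sum(R) == sum(S))."""
    m, n = len(A), len(A[0])
    d1 = [1.0] * m  # diagonal of D1
    d2 = [1.0] * n  # diagonal of D2
    for _ in range(iters):
        d1 = [R[i] / sum(A[i][j] * d2[j] for j in range(n)) for i in range(m)]
        d2 = [S[j] / sum(d1[i] * A[i][j] for i in range(m)) for j in range(n)]
    return [[d1[i] * A[i][j] * d2[j] for j in range(n)] for i in range(m)]
```

The uniqueness statement in the abstract corresponds to the fact that multiplying d1 by a constant and dividing d2 by the same constant leaves the scaled matrix unchanged.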
33 citations
••
TL;DR: In this article, algorithms are constructed for computing bounds on the moments of a distribution function, given an upper bound on the largest eigenvalue and a positive lower bound on the smallest eigenvalue; the cases treated are those where the moment index s is a positive integer greater than 2k or a negative integer.
Abstract: Here α(λ) is a nondecreasing step function whose jumps a_i^2 occur at the eigenvalues λ_n ≤ ... ≤ λ_1, so that {μ_m}, m = 1, ..., 2k, are a set of moments associated with the distribution function α(λ). In certain applications (cf. [1]) we are interested in determining bounds for μ_s where s is a positive integer greater than 2k or a negative integer. We shall construct algorithms for computing bounds on μ_s where we have an upper bound on the largest eigenvalue and a positive lower bound on the smallest eigenvalue.
29 citations
••
TL;DR: Some theorems concerning the application of the ε-algorithm to vectors satisfying a matrix difference equation are proved; they generalize results on the scalar ε-algorithm and some recent theorems on the vector ε-algorithm.
26 citations
••
TL;DR: Methods of decomposing a partitioned rectangular matrix A into a product of an orthogonal matrix Q and an upper triangular matrix R are presented; they can be applied to decompose a matrix stored in rectangular blocks on a random-access second-level storage.
25 citations
••
TL;DR: In this article, best two-sided scaling of a square matrix is characterized by the property that the first and last singular vectors of the scaled matrix have components of equal modulus; the problem of best scaling for rectangular matrices is then introduced and a conjecture regarding a possible best scaling is made.
Abstract: This paper is concerned with best two-sided scaling of a general square matrix, and in particular with a certain characterization of that best scaling: namely, that the first and last singular vectors (on left and right) of the scaled matrix have components of equal modulus. Necessity, sufficiency, and its relation with other characterizations are discussed. Then the problem of best scaling for rectangular matrices is introduced and a conjecture made regarding a possible best scaling. The conjecture is verified for some special cases.
22 citations
••
TL;DR: A necessary and sufficient condition for the existence of unitary matrices U and V such that UAV is a real diagonal matrix for every matrix A in some set Γ of rectangular complex matrices is given in this article.
••
TL;DR: In this article, a similarity transformation method is proposed to find the eigenvalues and eigenvectors of an arbitrary complex matrix in N-1 or fewer transformations, where each transformation matrix is a matrix function: the matrix sign function with a ± added to the main diagonal elements.
Abstract: A new method of finding the eigenvalues and eigenvectors of an arbitrary complex matrix is presented. The new method is a similarity transformation method which transforms an arbitrary N × N matrix to a Jordan canonical form in N-1 or fewer transformations. Each transformation matrix is a matrix function: the matrix sign function with a ± added to the main diagonal elements. Using this matrix function as a similarity transformation gives a block diagonal form which is a reduced form of the transformed matrix. As the Jordan canonical form is found, the eigenvectors are found simultaneously, since the product of the transformation matrices must be a matrix of eigenvectors. The theoretical development of the new method and a computational scheme with examples are given. In the examples, the computational scheme is applied successfully to matrices whose characteristics cause problems for most numerical techniques.
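The matrix sign function at the heart of this method can itself be computed without an eigendecomposition, for example by the classical Newton iteration X <- (X + X^(-1))/2, which converges for any matrix with no purely imaginary eigenvalues. A minimal sketch (plain Python with a small Gauss-Jordan inverse; this illustrates the sign function only, not the paper's full transformation scheme):

```python
def inverse(M):
    """Invert a small matrix by Gauss-Jordan elimination with partial pivoting."""
    n = len(M)
    aug = [row[:] + [float(i == j) for j in range(n)] for i, row in enumerate(M)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(aug[r][col]))
        aug[col], aug[piv] = aug[piv], aug[col]
        p = aug[col][col]
        aug[col] = [v / p for v in aug[col]]
        for r in range(n):
            if r != col and aug[r][col] != 0.0:
                f = aug[r][col]
                aug[r] = [v - f * w for v, w in zip(aug[r], aug[col])]
    return [row[n:] for row in aug]

def matrix_sign(A, iters=50):
    """Newton iteration X <- (X + X^-1)/2, converging to sign(A)
    when A has no purely imaginary eigenvalues."""
    X = [row[:] for row in A]
    for _ in range(iters):
        Xinv = inverse(X)
        X = [[(x + y) / 2.0 for x, y in zip(rx, ry)] for rx, ry in zip(X, Xinv)]
    return X
```

For an upper triangular A = [[2, 1], [0, -3]], the iteration drives the diagonal entries to ±1 (the signs of the eigenvalues), which is the block-splitting property the paper's method exploits.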
••
01 Jul 1974-Journal of Research of the National Bureau of Standards, Section B: Mathematical Sciences
••
TL;DR: For a given real square matrix A, the authors describe all nonsingular real symmetric (r.s.) matrices S such that A = S^(-1)T for some symmetric matrix T.
••
01 Feb 1974
Abstract: It is known that a square matrix A can be written as a commutator XY - YX if and only if Tr(A) = 0. In this note it is shown further that, for a fixed A, the spectrum of one of the factors may be taken to be arbitrary, while the spectrum of the other factor is arbitrary as long as its characteristic roots are distinct. The distinctness restriction on one of the factors may not in general be relaxed.
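The "only if" direction is the elementary identity Tr(XY) = Tr(YX): every commutator has trace zero. A quick numeric check (plain Python, arbitrary example matrices):

```python
def mat_mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def commutator(X, Y):
    """Return XY - YX."""
    XY, YX = mat_mul(X, Y), mat_mul(Y, X)
    return [[a - b for a, b in zip(ra, rb)] for ra, rb in zip(XY, YX)]

X = [[1, 2], [3, 4]]
Y = [[0, 5], [6, 7]]
C = commutator(X, Y)
trace = sum(C[i][i] for i in range(2))  # always 0, since Tr(XY) = Tr(YX)
```

The hard direction of the theorem, realizing any traceless A as such a commutator with prescribed spectra for the factors, is what the note establishes.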
••
TL;DR: In this paper, it was shown that the matrix obtained by applying a matrix bilinear transformation to a companion matrix can itself be transformed by a similarity transformation into a companion matrix, using a matrix T which is invariant for matrices of a particular order.
••
TL;DR: When (n, r) = 1, the Moore-Penrose pseudo-inverse of an n x n r-circulant matrix is a strong spectral inverse.
Abstract: When $(n,r) = 1$, the Moore–Penrose pseudo-inverse of an $n \times n$ $r$-circulant matrix is a strong spectral inverse.
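An r-circulant matrix is determined by its first row: each subsequent row is the cyclic right shift of the previous one, with the wrapped-around entry multiplied by r (r = 1 gives the ordinary circulant). A small constructor illustrating the structure (plain Python; the pseudo-inverse result itself depends on the hypothesis (n, r) = 1 and is not reproduced here):

```python
def r_circulant(first_row, r):
    """Build the n x n r-circulant matrix with the given first row:
    each row is the previous one shifted right, with the entry that
    wraps around multiplied by r."""
    n = len(first_row)
    rows = [list(first_row)]
    for _ in range(n - 1):
        prev = rows[-1]
        rows.append([r * prev[-1]] + prev[:-1])
    return rows
```

For example, r_circulant([1, 2, 3], 2) places 2·3 = 6 and 2·2 = 4 in the wrapped positions of the second and third rows.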
••
TL;DR: The solution of the Ising model is reduced to the problem of diagonalizing the matrix of the linear transformation W in the space of vectors composed of the correlation functions of the model.
••
01 Dec 1974
TL;DR: In this article, a method is presented for finding the modal matrix corresponding to any nonderogatory constant square matrix which can be transformed to its companion form by a similarity transformation.
Abstract: A method is presented for finding the modal matrix corresponding to any nonderogatory constant square matrix which can be transformed to its companion form by a similarity transformation.
••
TL;DR: This paper presents a complete algorithm which transforms a square matrix into the Jordan normal form.
Abstract: This paper presents a complete algorithm which transforms a square matrix into the Jordan normal form.
••
TL;DR: In this paper, a procedure is derived for calculating a symmetric matrix P, with minimal sum of the squares of its elements, which satisfies PB = A, where B and A are rectangular or square matrices.
Abstract: A procedure is derived for calculating a symmetric matrix, P, with minimal sum of the squares of its elements, which satisfies $PB = A$. B and A are rectangular or square matrices. It is necessary that $AB^+B = A$, which is not a trivial requirement if B has more columns than rows, or if B is not of full rank. Also, no solution is possible unless $B'A$ is symmetric. The procedure is similar to that for calculating a nonsymmetric minimal matrix, but with additional terms to give symmetry.
••
TL;DR: A technique is given for permuting the rows and columns of a general square matrix M into a fully reduced matrix C; its validity is proved and it is compared with Harary's algorithm.
Abstract: A technique is given for permuting the rows and columns of a general square matrix M into a fully reduced matrix C. The corresponding algorithm is described, its validity is proved, and it is compared with Harary's algorithm [5].
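A fully reduced (block triangular) form corresponds to grouping the indices by the strongly connected components of the directed graph with an edge i -> j whenever M[i][j] != 0, and ordering the groups topologically. A sketch of that standard route (plain Python, Kosaraju's algorithm; this is not necessarily the paper's own procedure, which should be consulted for its algorithm and the comparison with Harary's method):

```python
def fully_reduce(M):
    """Return a permutation p and the permuted matrix C = M[p][:, p],
    block upper triangular with one diagonal block per strongly
    connected component of the digraph induced by the nonzero pattern."""
    n = len(M)
    adj = [[j for j in range(n) if M[i][j] != 0 and j != i] for i in range(n)]
    # Kosaraju pass 1: record vertices in order of DFS completion
    order, seen = [], [False] * n
    def dfs1(u):
        seen[u] = True
        for v in adj[u]:
            if not seen[v]:
                dfs1(v)
        order.append(u)
    for u in range(n):
        if not seen[u]:
            dfs1(u)
    # Pass 2: DFS on the reversed graph in reverse completion order
    radj = [[] for _ in range(n)]
    for u in range(n):
        for v in adj[u]:
            radj[v].append(u)
    comp = [-1] * n
    def dfs2(u, c):
        comp[u] = c
        for v in radj[u]:
            if comp[v] == -1:
                dfs2(v, c)
    c = 0
    for u in reversed(order):
        if comp[u] == -1:
            dfs2(u, c)
            c += 1
    # Components are numbered in topological order; sort indices by component
    p = sorted(range(n), key=lambda i: comp[i])
    return p, [[M[i][j] for j in p] for i in p]
```

With singleton components this reduces to an ordinary topological sort, so all nonzeros of C land on or above the diagonal.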
••
TL;DR: In this paper, the rim-similarity relation is defined on the set of all sequences of square matrices of a given fixed dimension, and sufficient conditions are given which guarantee that the uniform stability of the null solution of a non-linear system carries over to the linear variational equation with respect to this solution.
Abstract: An equivalence relation, rim-similarity, is defined on the set of all sequences of square matrices of a given fixed dimension. For linear discrete-time systems, theorems are presented which show that certain stability properties are invariant under rim-similarity. General linear systems as well as those of variational type are considered. Also, sufficient conditions are given which guarantee that the uniform stability of the null solution of a non-linear system carries over to the linear variational equation with respect to this solution. All of these results are analogous to known ones for ordinary differential equations.