
Showing papers on "Square matrix published in 1973"


Journal ArticleDOI
TL;DR: A new method, called the QZ algorithm, is presented for the solution of the matrix eigenvalue problem $Ax = \lambda Bx$ with general square matrices A and B, with particular attention to the degeneracies which result when B is singular.
Abstract: A new method, called the $QZ$ algorithm, is presented for the solution of the matrix eigenvalue problem $Ax = \lambda Bx$ with general square matrices A and B. Particular attention is paid to the degeneracies which result when B is singular. No inversions of B or its submatrices are used. The algorithm is a generalization of the $QR$ algorithm, and reduces to it when $B = I$. Problems involving higher powers of $\lambda $ are also mentioned.
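The generalized eigenproblem the abstract describes can be explored with SciPy's implementation of the QZ decomposition. The pencil below is my own small example, not the paper's; its B is singular, so one generalized eigenvalue is infinite:

```python
import numpy as np
from scipy.linalg import qz

# Small pencil with singular B -- the degenerate case the paper stresses.
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
B = np.array([[1.0, 0.0],
              [0.0, 0.0]])   # rank-deficient, so no inverse of B exists

# QZ (generalized Schur) form: A = Q @ AA @ Z.T, B = Q @ BB @ Z.T,
# with AA quasi-triangular and BB triangular.
AA, BB, Q, Z = qz(A, B)

# Eigenvalue pairs (alpha_i, beta_i); beta_i == 0 flags an infinite
# eigenvalue instead of forcing a division by a zero pivot of B.
alpha, beta = np.diag(AA), np.diag(BB)
finite = np.abs(beta) > 1e-10
eigs = alpha[finite] / beta[finite]
```

Note how the (alpha, beta) pairing lets the singular-B case surface as beta = 0 rather than as a breakdown, which is the point of avoiding inversions of B.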

1,038 citations


Journal ArticleDOI
TL;DR: It is proved that a matrix satisfies its two-dimensional characteristic function, and this property is used to form a diagnostic matrix, which is used in a minimization technique.
Abstract: A model for two-dimensional linear iterative circuits is defined in the form of matrix equations. From the matrix equations, a two-dimensional characteristic function is defined. It is then proved that a matrix satisfies its two-dimensional characteristic function. This property is used to form a diagnostic matrix. Finally, the diagnostic matrix is used in a minimization technique.

121 citations


Journal ArticleDOI
TL;DR: In this paper, it was shown that an n × n real triangular matrix A is a ΔSTP matrix if all its nontrivial minors are strictly positive, and that such matrices arise in factorizations A = LU, where L is a lower triangular matrix and U is an upper triangular matrix.

82 citations


Journal ArticleDOI
C. B. Garcia
TL;DR: A class of matrices is introduced such that for anyM in this class a solution to the linear complementarity problem exists for all feasibleq, and such that Lemke's algorithm will yield a solution or demonstrate infeasibility.
Abstract: The linear complementarity problem is the problem of finding solutions w, z to w = q + Mz, w ≥ 0, z ≥ 0, and wᵀz = 0, where q is an n-dimensional constant column and M is a given square matrix of dimension n. In this paper, the author introduces a class of matrices such that for any M in this class a solution to the above problem exists for all feasible q, and such that Lemke's algorithm will yield a solution or demonstrate infeasibility. This class is a refinement of that introduced and characterized by Eaves. It is also shown that for some M in this class, there is an even number of solutions for all nondegenerate q, and that matrices for general quadratic programs and matrices for polymatrix games nicely relate to these matrices.
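For intuition about what a solution looks like, here is a brute-force check that enumerates complementary index sets (exponential, so illustrative only; Lemke's pivoting method is what scales). The matrix and vector are my own example, with M positive definite so a solution exists for every q:

```python
import itertools
import numpy as np

def lcp_enumerate(M, q, tol=1e-9):
    """Solve w = q + M z, w >= 0, z >= 0, w.z = 0 by trying every
    complementary index set (z_i basic vs. w_i basic)."""
    n = len(q)
    for basis in itertools.product([0, 1], repeat=n):   # 1 -> z_i is basic
        idx = [i for i in range(n) if basis[i]]
        z = np.zeros(n)
        if idx:
            try:
                # On the basic set, w_i = 0 forces q_i + (M z)_i = 0.
                z[idx] = np.linalg.solve(M[np.ix_(idx, idx)], -q[idx])
            except np.linalg.LinAlgError:
                continue
        w = q + M @ z
        if (z >= -tol).all() and (w >= -tol).all():
            return w, z
    return None

M = np.array([[2.0, 1.0], [1.0, 2.0]])   # positive definite example
q = np.array([-1.0, -1.0])
w, z = lcp_enumerate(M, q)
```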

77 citations


Journal ArticleDOI
TL;DR: Several different types of ordering matrices, each type having the capability of exhibiting different amounts of potential concurrency, can be calculated from the d and e vectors of the instructions of a task using "linear algebraic-like" operations.
Abstract: A formal technique for the representation of tasks such that the potential concurrency of the task is detectable, and hence exploitable, during the execution of the task is described. Instructions are represented as a pair of binary vectors d and e which completely describe the sources and sinks specified by the instruction. Tasks are represented as square matrices M called ordering matrices. The values of the elements of these matrices are used to dynamically indicate the necessary ordering of the execution of instructions. It is shown how several different types of ordering matrices, each type having the capability of exhibiting different amounts of potential concurrency, can be calculated from the d and e vectors of the instructions of a task using "linear algebraic-like" operations. For example, intercycle independencies can be detected with a ternary ordering matrix. This matrix can be extended to dynamically detect opportunities for reassigning the resources specified by certain instructions to increase the amount of potential concurrency. Experimental results are presented showing the relative capability of each of these matrix types for exhibiting potential concurrency. These techniques are shown to produce somewhat greater amounts of potential concurrency than other known dynamic techniques. However, the amounts of potential concurrency found are less than those reported for preprocessing detection techniques.
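The abstract does not spell out the paper's exact matrix operations, so the sketch below substitutes the standard Bernstein-style conflict tests on hypothetical d (source) and e (sink) bit vectors to build a binary ordering matrix; instruction pairs with a zero entry may execute concurrently:

```python
import numpy as np

# Hypothetical 4-register machine; each instruction has a source vector d
# and a sink vector e over the registers (1 = used).
d = np.array([[1, 1, 0, 0],    # I0 reads r0, r1
              [0, 0, 1, 0],    # I1 reads r2
              [0, 1, 0, 0]])   # I2 reads r1
e = np.array([[0, 0, 1, 0],    # I0 writes r2
              [0, 0, 0, 1],    # I1 writes r3
              [1, 0, 0, 0]])   # I2 writes r0

# Ordering matrix: M[i, j] = 1 iff later instruction j must wait for i,
# i.e. their accesses conflict (write-read, write-write, or read-write).
n = len(d)
M = np.zeros((n, n), dtype=int)
for i in range(n):
    for j in range(i + 1, n):
        conflict = (e[i] & (d[j] | e[j])).any() or (d[i] & e[j]).any()
        M[i, j] = int(conflict)
```

Here I1 must follow I0 (I0 writes r2, which I1 reads), but I1 and I2 do not conflict, so the zero entry exposes their potential concurrency.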

39 citations


Journal ArticleDOI
TL;DR: In this paper, the authors give a direct method for the construction of an operator norm with respect to which μ[A] can be made arbitrarily close to max Re(λi).

28 citations


Journal ArticleDOI
TL;DR: In this article, the authors showed that for a general first-order design matrix A and a given dispersion matrix Σ of the n unknown net coordinates, the unique minimum Euclidean norm solution for the weight matrix P is P = (A′)⁺Σ⁻¹A⁺, where A′ indicates the transpose and A⁺ the pseudoinverse of the rectangular matrix A.
Abstract: Let the configuration or first-order design matrix A of a geodetic net be given and let the weight or second-order design matrix P of m observations be unknown. For some predetermined choice of the dispersion matrix Σ of the n unknown net coordinates, the unique minimum Euclidean norm solution for the weight matrix P is P = (A′)⁺Σ⁻¹A⁺. A′ indicates the transpose and A⁺ the pseudoinverse of the (in general) rectangular matrix A. For a general first-order design matrix A and a given Σ, there does not exist a solution for a diagonal positive definite weight matrix P. Our solution for P belongs to the first category of an optimal design of C. R. Rao. The designs P = I and P = (A′)⁺Σ⁻¹A⁺ yield the same best linear unbiased estimator for the unknowns, but the variance-covariance matrix for these designs is different. Examples, especially the structure by G. I. Taylor and T. von Kármán for the homogeneous and isotropic geodetic net, illustrate theoretical results.
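The closed form is easy to verify numerically with a made-up full-rank design matrix: plugging P = (A′)⁺Σ⁻¹A⁺ into the least-squares dispersion (A′PA)⁻¹ recovers the prescribed Σ:

```python
import numpy as np

# Hypothetical 4-observation, 2-coordinate design matrix (full column rank).
A = np.array([[1.0,  0.0],
              [0.0,  1.0],
              [1.0,  1.0],
              [1.0, -1.0]])
Sigma = np.diag([0.5, 0.25])           # target dispersion of the coordinates

Ap = np.linalg.pinv(A)                 # A^+
P = Ap.T @ np.linalg.inv(Sigma) @ Ap   # P = (A')^+ Sigma^{-1} A^+

# With this weight matrix the least-squares dispersion reproduces Sigma.
D = np.linalg.inv(A.T @ P @ A)
```

Since A has full column rank, A⁺A = I, so A′PA collapses to Σ⁻¹ exactly; note also that P comes out symmetric, though in general not diagonal, matching the abstract's remark.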

28 citations


Journal ArticleDOI
TL;DR: The Galerkin procedure when applied to the equation for horizontal two-dimensional flow of groundwater in a nonhomogeneous isotropic aquifer generates approximating equations of the following form: Rc + G [dc/dt] + f = 0, where R and G are square matrices, c and f are column matrices and t is time as discussed by the authors.
Abstract: The Galerkin procedure, when applied to the equation for horizontal two-dimensional flow of groundwater in a nonhomogeneous isotropic aquifer, generates approximating equations of the following form: Rc + G [dc/dt] + f = 0, where R and G are square matrices, c and f are column matrices, and t is time. This matrix equation is decoupled and solved for the unknown column matrix c(t). In the case of a confined aquifer that approaches a steady state solution, R, G, and f are constant. An analytic solution to the matrix equation for c(t) is given for this case. In the case of a water table aquifer that approaches a steady state solution, R and f are explicitly dependent on c(t), and G is constant. For this case, c(t = ∞) is found in a simple iterative manner, and an iterative procedure is given to approximate c(t). These methods are compared with the approximate numerical Crank-Nicolson procedure by applying both to a particular problem for which the unknown column matrix c(t) has 49 elements. The Crank-Nicolson procedure is found usually to require less computation time to evaluate c(t) for the confined aquifer case but to give errors for drawdown averaging approximately 10%. The Crank-Nicolson procedure is found to take considerably more computation time to evaluate c(t = ∞) for both the confined and the water table cases but to take considerably less time to evaluate c(t) for the water table case.
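A minimal sketch of the Crank-Nicolson stepping used for comparison, on a toy constant-coefficient analogue of the confined-aquifer case (2×2 matrices of my own choosing, not the paper's 49-element system). Averaging R c between time levels gives a linear solve per step, and the iterates approach the steady state R c + f = 0:

```python
import numpy as np

# Toy constant-coefficient system R c + G dc/dt + f = 0.
R = np.array([[2.0, -1.0], [-1.0, 2.0]])
G = np.eye(2)
f = np.array([-1.0, -1.0])
dt, steps = 0.05, 400

# Crank-Nicolson: (G/dt + R/2) c_{k+1} = (G/dt - R/2) c_k - f
L = G / dt + R / 2
Rh = G / dt - R / 2
c = np.zeros(2)
for _ in range(steps):
    c = np.linalg.solve(L, Rh @ c - f)

c_inf = np.linalg.solve(R, -f)   # steady state: R c + f = 0
```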

23 citations


Journal ArticleDOI
TL;DR: Givone and Roesser state a "two-dimensional" analog of the Cayley-Hamilton theorem: every square matrix satisfies its characteristic equation.
Abstract: Givone and Roesser [1] state a "two-dimensional" analog of the Cayley-Hamilton theorem: every square matrix satisfies its characteristic equation. Readers may be interested in a proof different from the one given in [1].
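The one-dimensional theorem being generalized is easy to verify numerically: evaluating the characteristic polynomial at the matrix itself (here by Horner's rule, on a random seeded example) gives the zero matrix up to roundoff:

```python
import numpy as np

# Cayley-Hamilton check: every square matrix satisfies its own
# characteristic equation, p(A) = 0.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))

coeffs = np.poly(A)              # characteristic polynomial coefficients
n = A.shape[0]
P = np.zeros_like(A)
for c in coeffs:                 # Horner evaluation of p at the matrix A
    P = P @ A + c * np.eye(n)
```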

20 citations


Journal ArticleDOI
TL;DR: In this paper, it was shown that the logarithmic derivative of a square matrix A can be made equal to p for some operator norm on Cⁿ if and only if those eigenvalues of A with maximum real part are simple roots of the minimal polynomial of A.

19 citations


Journal ArticleDOI
TL;DR: In this article, the stability of the system is determined by reference to the dominant root of the polynomial equation (2) $\det \sum_{\tau=0}^{m} A_\tau \lambda^{m-\tau} = 0$.
Abstract: where y(t) represents the vector of endogenous variables, x(t) the vector of exogenous variables, u(t) the vector of stochastic disturbances, and t the tth period of observation. The matrices A_τ (τ = 0, 1, …, m) of the structural coefficients are square matrices of order G. It is assumed that the conditions justifying the theorems in [3, Ch. 10] are satisfied, and that there are no nonlinear restrictions on the elements of A_τ. The stability of the system is determined by reference to the dominant root of the polynomial equation (2) $\det \sum_{\tau=0}^{m} A_\tau \lambda^{m-\tau} = 0$.

Journal ArticleDOI
TL;DR: In this article, a time domain method is given for estimating the matrix of related parameters in linear systems with constant coefficients and real eigenvalues, which consists of a one-dimensional search for the local minima of a scalar function, which provide the eigen values of the system matrix and the matrix itself when observable.
Abstract: A time domain method is given for estimating the matrix of related parameters in linear systems with constant coefficients and real eigenvalues. The method consists of a one-dimensional search for the local minima of a scalar function μ(λ), which provide the eigenvalues of the system matrix and the matrix itself when observable. Applications are given to the determination of a transfer function and the estimation of the rate matrix of a monomolecular reaction system. Questions of accuracy, number, and type of measurements required are discussed.

Journal ArticleDOI
M. Healey
01 Aug 1973
TL;DR: The rational approximant to the power-series expansion and a block-diagonal transformation are shown to be the superior methods of computing the square matrices.
Abstract: Various methods of computing the square matrices $\Phi = e^{AT}$ and $\Theta = \int_0^T e^{A\tau}\,d\tau$ are studied. They are programmed in FORTRAN and compared for storage, speed and accuracy on a variety of test matrices. The methods studied are power-series expansion and related approximants; the eigenvalue methods of Sylvester's expansion and diagonal transformation; and numerical integration. The rational approximant to the power-series expansion and a block-diagonal transformation are shown to be the superior methods.
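One convenient modern way (not necessarily among the paper's methods) to obtain both matrices at once is Van Loan's block-matrix identity: exponentiating the block matrix [[A, I], [0, 0]] scaled by T yields e^(AT) in the top-left block and the integral of e^(Aτ) from 0 to T in the top-right:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # example system matrix
T = 0.1
n = A.shape[0]

# Van Loan block trick: expm([[A, I], [0, 0]] * T) = [[Phi, Theta], [0, I]]
blk = np.zeros((2 * n, 2 * n))
blk[:n, :n] = A
blk[:n, n:] = np.eye(n)
E = expm(blk * T)
Phi, Theta = E[:n, :n], E[:n, n:]
```

When A is nonsingular, Theta also equals A⁻¹(Φ − I), which gives an independent cross-check; the block form has the advantage of not requiring that inverse.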

Journal ArticleDOI
TL;DR: In the above correspondence, an algorithm has been described that allows the transposing of a 2^n × 2^n matrix by reading and writing n times at the most via direct access rows of the matrix and by permuting its data elements.
Abstract: In the above correspondence,¹ an algorithm has been described that allows the transposing of a 2^n × 2^n matrix by reading and writing n times at the most via direct access rows of the matrix and by permuting its data elements. The generalization to arbitrary square matrices has not been described, and the generalization to nonsquare matrices has been reported to be impossible.
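A sketch of the bit-exchange idea behind such n-pass transposition of a 2^n × 2^n array (an in-memory analogue; the correspondence works with direct-access rows): each pass swaps one index bit between the row and column positions, and after all n passes every element sits at its transposed location:

```python
import numpy as np

def transpose_2n(A):
    """In-place transpose of a 2^n x 2^n array in n bit-swap passes."""
    n = A.shape[0].bit_length() - 1
    for b in range(n):
        m = 1 << b
        for i in range(A.shape[0]):
            if i & m:                     # visit each row pair once
                continue
            for j in range(A.shape[0]):
                if not (j & m):
                    continue
                # exchange bit b between the row and column indices
                A[i, j], A[i | m, j & ~m] = A[i | m, j & ~m], A[i, j]
    return A

X = np.arange(64.0).reshape(8, 8)
out = transpose_2n(X.copy())
```

Each pass is an involution on disjoint element pairs, and the n passes compose to the map (i, j) → (j, i), which is why the row-access count is bounded by n.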

Journal ArticleDOI
TL;DR: In this article, the existence of a one-to-one correspondence between a certain class of square matrices of arbitrary order and a related extension field is shown. The elements of these matrices are obtained from certain basic linear recursive sequences by means of a generalization of the Euclidean algorithm.
Abstract: This paper shows the existence of a one-to-one correspondence between a certain class of square matrices of arbitrary order and a related extension field. The elements of these matrices are obtained from certain basic linear recursive sequences by means of a generalization of the Euclidean algorithm.

Journal ArticleDOI
TL;DR: The theory of monotone matrix functions was developed by K. Loewner, who first gave necessary and sufficient conditions for a function to be a monotone matrix function of order n and then, after further deep investigations including questions of interpolation, arrived at the following criterion: a real-valued function f(x) defined in (a, b) is monotone of arbitrarily high order n if and only if it is analytic in (a, b), can be analytically continued onto the entire upper half-plane, and has there a nonnegative imaginary part.
Abstract: The theory of monotone matrix functions has been developed by K. Loewner; he first gives some necessary and sufficient conditions for a function to be a monotone matrix function of order n, and then, as a result of further deep investigations including questions of interpolation, he arrives at the following criterion: A real-valued function f(x) defined in (a, b) is monotone of arbitrarily high order n if and only if it is analytic in (a, b), can be analytically continued onto the entire upper half-plane, and has there a nonnegative imaginary part. The problem of monotone operator functions of two real variables has recently been considered by A. Koranyi. He has generalized Loewner's theorem on monotone matrix functions of arbitrarily high order n to two variables. We seek a theory of monotone matrix functions of two variables analogous to that developed by Loewner and show that a complete analogue to Loewner's theory exists in two dimensions.
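Loewner's criterion can be spot-checked numerically: f(x) = √x maps the upper half-plane into itself, so the matrix square root should preserve the semidefinite order A ⪰ B. The matrices below are a random seeded example of mine:

```python
import numpy as np

def psd_sqrt(S):
    """Square root of a symmetric positive semidefinite matrix via eigh."""
    w, V = np.linalg.eigh(S)
    return V @ np.diag(np.sqrt(np.clip(w, 0.0, None))) @ V.T

rng = np.random.default_rng(1)
X = rng.standard_normal((4, 4))
B = X @ X.T + 0.1 * np.eye(4)      # positive definite B
Y = rng.standard_normal((4, 4))
A = B + Y @ Y.T                    # A >= B in the Loewner (semidefinite) order

# Monotonicity of sqrt: sqrt(A) - sqrt(B) should again be >= 0.
diff = psd_sqrt(A) - psd_sqrt(B)
min_eig = np.linalg.eigvalsh(diff).min()
```

By contrast, f(x) = x² fails this test for suitable A ⪰ B, consistent with the criterion: x² does not have a nonnegative imaginary part throughout the upper half-plane.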

Journal ArticleDOI
TL;DR: In this article, the properties of three different forms of error matrices in electron diffraction are investigated, assuming the presence of stationary, Gaussian, Markovian noise in the primary data.

Journal ArticleDOI
TL;DR: Inequalities concerning real square matrices A with positive definite symmetric component A + A∗ are derived from certain inertia relations which hold for any complex (not necessarily real) square matrix A with positive definite A + A∗, as mentioned in this paper.
Abstract: Inequalities concerning real square matrices A with positive definite symmetric component A + A∗ are derived from certain inertia relations which hold for any complex (not necessarily real) square matrices A with positive definite A + A∗.

Journal ArticleDOI
TL;DR: In this paper, an algebraic method for the second-order statistics of the response of multi-degree-of-freedom linear time-invariant dynamical systems to (zero mean) white noise or stationary filtered white noise excitation is described.

Journal ArticleDOI
TL;DR: In this article, the 2−(1.41 MeV β−)2+ β-decay of 122Sb and the 3−(2.31 MeV ε β−)-2+ ε-decays of 124Sb have been extracted using the formalism of Buhring, and the results can be understood on the basis of the simple shell model with configuration mixing.

Journal ArticleDOI
TL;DR: In this paper, it was shown that if a singular square matrix has both a commuting reciprocal and Moore-Penrose inverse, either of these may be factorized in terms of the other.
Abstract: If a singular square matrix has both a commuting reciprocal and Moore–Penrose inverse, it is proved that either of these may be factorized in terms of the other. Examples related to singular boundary value problems are given.

Journal ArticleDOI
M. C. Perz
TL;DR: In this paper, a matrix analysis of voltage and current propagations along a resistive conductor system above an imperfectly conducting ground is described, and the modal transformation is shown to be not power invariant.
Abstract: Matrix analysis is described of voltage and current propagations along a resistive conductor system above an imperfectly conducting ground. Square matrices and their functions are considered as operators in a vector space. Conductor voltages and currents are represented as vectors and resolved into modes or eigenvectors of the propagation matrix. It is shown that although the modal transformation is not power invariant, the modes are independent but not orthogonal. General and modal reflection-free line terminations are developed. A brief review of some simplified modal analyses is described. A numerical solution is demonstrated. Mathematical background is appended.

Journal ArticleDOI
TL;DR: In this paper, the problem of determining conditions on the pair of real square matrices (A,B) such that for every diagonal matrix D with positive elements on the main diagonal, the inverse of the matrix $AD + B$ has only nonnegative elements was considered.
Abstract: Several new results are presented concerning matrices having nonnegative inverses. We consider the problem of determining conditions on the pair of real square matrices $(A,B)$ such that for every diagonal matrix D with positive elements on the main diagonal, the inverse of the matrix $AD + B$ has only nonnegative elements. Of primary concern are the two special cases which occur frequently in the applications, (a) the case in which A is the identity matrix, and (b) the case in which both of the matrices $A,B$ have only nonpositive off-diagonal elements.
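Case (a) can be illustrated with a classic example: when B is an M-matrix (nonpositive off-diagonal entries with a nonnegative inverse), adding any positive diagonal D keeps the inverse of D + B entrywise nonnegative. The matrices below are my own, not the paper's:

```python
import numpy as np

# Tridiagonal M-matrix: nonpositive off-diagonal, diagonally dominant.
B = np.array([[ 2.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  2.0]])
D = np.diag([0.3, 1.0, 2.5])     # arbitrary positive diagonal (A = I case)

# D + B is again an M-matrix, so its inverse has only nonnegative entries.
W = np.linalg.inv(D + B)
```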

Journal ArticleDOI
TL;DR: In this paper, the norm of a vector and its subordinate operator norm of matrices, are fixed; this operator norm is associated, as in [2], with the logarithmic norm.
Abstract: In what follows the word matrix denotes a square matrix of order n with complex elements, and the word vector denotes an n-dimensional vector with complex components. The norm of a vector and its subordinate operator norm of matrices, are fixed; this operator norm of matrices is associated, as in [2], with the logarithmic norm. The symbols A, B, Σ denote matrices; x, y, ξ, η denote vectors. The symbols ‖A‖, ‖x‖ and μ(A) denote, respectively, the norm of A, the norm of x and the logarithmic norm of A.

Journal ArticleDOI
TL;DR: In this paper, the authors discussed the numerical evaluation of the elements of the transformation matrix of an orthogonal rectilinear triad moving with respect to a similar fixed system, when the former rotates with a known spin vector.
Abstract: This paper discusses the numerical evaluation of the elements of the transformation matrix of an orthogonal rectilinear triad moving with respect to a similar fixed system, when the former rotates with a known spin vector. Typical governing equations are formulated in terms of Euler angles, in terms of quaternions, and in terms of direction cosines and their derivatives. Emphasis is placed on the latter formulation for solution by digital computation. The governing equations are reformulated into a homogeneous and linear matricial state variable equation of the first order, with a skew-symmetric coefficient matrix called the rotation operator. The transition matrix of this state equation gives the required rotation matrix. The characteristics of the skew-symmetric matrices are used in the development of a suitable algorithm for the solution. The exact rotation matrix can be obtained for cases of constant spin vector and for special cases of variable spin vectors. For the general case of a variable spin
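For the constant-spin case the abstract mentions, the exact transition matrix is simply the matrix exponential of the skew-symmetric rotation operator scaled by time; a short check (hypothetical body rates) that the result is a proper orthogonal rotation matrix:

```python
import numpy as np
from scipy.linalg import expm

w = np.array([0.3, -0.2, 0.5])            # hypothetical spin vector (rad/s)
W = np.array([[0.0,  -w[2],  w[1]],
              [w[2],   0.0, -w[0]],
              [-w[1], w[0],   0.0]])      # skew-symmetric rotation operator
t = 2.0

# Constant spin: the transition matrix of dR/dt = W R is exactly exp(W t).
R = expm(W * t)
```

Skew symmetry of W is what guarantees orthogonality of R, which is why the algorithm in the paper is built around that structure.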

Journal ArticleDOI
TL;DR: In this article, the return difference matrix F(s) of a multivariable control system with respect to its gain elements is obtained directly from the inverse transfer matrix T−1(s), which is in turn derived (by elimination of undriven node groups) from the coefficient matrix of the set of independent differential equations describing the system.
Abstract: The return difference matrix F(s) of a multivariable control system with respect to its gain elements is obtainable directly from the inverse transfer matrix T−1(s), which is in turn derived (by elimination of undriven node groups) from the coefficient matrix of the set of independent differential equations describing the system. Thus, no knowledge of the system's topology is required in order to obtain F(s). Furthermore, the inverse of one of the submatrices of T−1(s) yields the idealized response, if all gain elements are assumed to exhibit infinite gain.

Journal ArticleDOI
TL;DR: In this paper, a new system matrix that can realize any second-order digital filter is presented, which exhibits minimum eigenvalue sensitivity when compared to other secondorder matrices over certain regions of root locations within the unit circle.
Abstract: A new system matrix that can realize any second-order digital filter is presented. This matrix exhibits minimum eigenvalue sensitivity when compared to other second-order matrices over certain regions of root locations within the unit circle.

Journal ArticleDOI
TL;DR: In this paper, it was shown that the transfer matrix of the symmetric 16-vertex model is identical to that of the asymmetric 8-vertex model in a different spin representation.

Journal ArticleDOI
TL;DR: This article showed that a separable t matrix can give a good approximation to the trinucleon energy, which is insensitive to values of the t matrix at very large momenta.
Abstract: Osborn used the property of compactness to show that there is an infinite mean square deviation between the t matrix for a local potential and any separable t matrix of finite rank. I present an alternate proof, using only the property of square integrability. The divergence of the mean square deviation arises from very large momenta. I argue that a separable t matrix can give a good approximation to the trinucleon energy, which is insensitive to values of the t matrix at very large momenta.

Journal ArticleDOI
TL;DR: In this paper, a generalized method for numerically calculating Löwdin orbitals is proposed, the method being an application of Frobenius' theorem in algebra, and it is proved that the eigenvalues of the overlap matrix S are always positive for any choice of linearly independent atomic orbitals and that S^(−1/2), the inverse square root of S, exists within the range of real numbers.
Abstract: A generalized method for numerically calculating Löwdin orbitals is proposed, the method being an application of Frobenius' theorem in algebra. The Löwdin transformation matrix T = S^(−1/2) (S is the overlap matrix) is calculated as T = U S₀^(−1/2) Uᵗ, where S₀ is a diagonal matrix whose diagonal elements are the eigenvalues of S, U is a matrix whose column vectors are the eigenvectors of S, and Uᵗ is the transpose of U. It is proved that the eigenvalues of the overlap matrix S are always positive for any choice of linearly independent atomic orbitals and that S^(−1/2), the inverse square root of S, exists within the range of real numbers. This method is useful for computer calculation by applying Jacobi's method.
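The construction can be sketched directly with NumPy's symmetric eigensolver; a generic symmetric positive definite matrix stands in for a real quantum-chemistry overlap matrix S:

```python
import numpy as np

# Stand-in overlap matrix: symmetric positive definite by construction.
C = np.random.default_rng(2).standard_normal((4, 4))
S = C @ C.T + 4 * np.eye(4)

# S = U S0 U^t with S0 diagonal, so T = S^(-1/2) = U S0^(-1/2) U^t.
w, U = np.linalg.eigh(S)          # eigenvalues (all positive), eigenvectors
T = U @ np.diag(w ** -0.5) @ U.T

# Lowdin-transformed orbitals are orthonormal: T^t S T = I.
I_check = T.T @ S @ T
```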