
Showing papers on "Matrix decomposition published in 1980"


Book
01 Jan 1980
TL;DR: This book develops the factorization of rational and continuous matrix functions relative to a contour, together with generalized factorization, local principles in the theory of factorization, and the stability of factorizations under perturbation.
Abstract: The Factorization of Rational Matrix Functions.- Decomposing Algebras of Matrix Functions.- Canonical Factorizations of Continuous Matrix Functions.- Factorization of Triangular Matrix Functions.- Factorization of Continuous Self-Adjoint Matrix Functions on the Unit Circle.- Miscellaneous Results on Factorization Relative to a Contour.- Generalized Factorization.- Further Results Concerning Generalized Factorization.- Local Principles in the Theory of Factorization.- Perturbations and Stability.

447 citations


Journal ArticleDOI
L. Marple1
TL;DR: A new recursive algorithm for autoregressive (AR) spectral estimation is introduced, based on the least squares solution for the AR parameters using forward and backward linear prediction, with computational complexity comparable to that of the popular Burg algorithm.
Abstract: A new recursive algorithm for autoregressive (AR) spectral estimation is introduced, based on the least squares solution for the AR parameters using forward and backward linear prediction. The algorithm has computational complexity proportional to the process order squared, comparable to that of the popular Burg algorithm. The computational efficiency is obtained by exploiting the structure of the least squares normal matrix equation, which may be decomposed into products of Toeplitz matrices. AR spectra generated by the new algorithm have improved performance over AR spectra generated by the Burg algorithm. These improvements include less bias in the frequency estimate of spectral components, reduced variance in frequency estimates over an ensemble of spectra, and absence of observed spectral line splitting.
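The combined forward-backward least-squares criterion behind this algorithm can be sketched directly. A toy illustration with numpy: the test signal, model order, and the direct `lstsq` solve are our assumptions, not Marple's fast order-recursive procedure, which instead exploits the Toeplitz structure of the normal equations.

```python
import numpy as np

def fb_ar_fit(x, p):
    """Least-squares AR(p) fit minimizing the combined forward and
    backward prediction errors. Solved directly with lstsq here; the
    paper's algorithm achieves O(p^2) by exploiting Toeplitz structure."""
    N = len(x)
    rows, rhs = [], []
    for n in range(p, N):                 # forward: x[n] ~ -sum_k a_k x[n-k]
        rows.append(x[n-1::-1][:p])
        rhs.append(x[n])
    for n in range(N - p):                # backward (real data): x[n] ~ -sum_k a_k x[n+k]
        rows.append(x[n+1:n+p+1])
        rhs.append(x[n])
    a, *_ = np.linalg.lstsq(np.asarray(rows), -np.asarray(rhs), rcond=None)
    return a                              # coefficients a_1..a_p

# recover a known AR(2) process: x[n] = 1.5 x[n-1] - 0.8 x[n-2] + e[n]
rng = np.random.default_rng(0)
x = np.zeros(2000)
for n in range(2, len(x)):
    x[n] = 1.5*x[n-1] - 0.8*x[n-2] + rng.normal()
a_hat = fb_ar_fit(x, 2)                   # expect roughly [-1.5, 0.8]
```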

434 citations


Journal ArticleDOI
TL;DR: It is shown that every symmetric matrix A, with entries from a finite field F, can be factored over F into $A = BB'$, where the number of columns of B is bounded from below by either the rank $\rho(A)$ of A, or by $1 + \rho(A)$, depending on A and on the characteristic of F.
Abstract: It is shown that every symmetric matrix A, with entries from a finite field F, can be factored over F into $A = BB'$, where the number of columns of B is bounded from below by either the rank $\rho(A)$ of A, or by $1 + \rho(A)$, depending on A and on the characteristic of F. This result is applied to show that every finite extension $\Phi$ of a finite field F has a trace-orthogonal basis over F. Necessary and sufficient conditions for the existence of a trace-orthonormal basis are also given. All proofs are constructive, and can be utilized to formulate procedures for minimal factorization and basis construction.
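A tiny brute-force check of this bound over GF(2) (our own toy verification by exhaustive search, not the paper's constructive procedure): a symmetric rank-2 matrix with zero diagonal needs $1 + \rho(A) = 3$ columns, while the identity needs only $\rho(A) = 2$.

```python
import itertools
import numpy as np

def min_cols_BBt(A, kmax=4):
    """Smallest k such that A = B B' over GF(2) with B of size n x k,
    found by exhaustive search (only feasible for tiny matrices)."""
    n = A.shape[0]
    for k in range(1, kmax + 1):
        for rows in itertools.product(itertools.product((0, 1), repeat=k), repeat=n):
            B = np.array(rows)
            if np.array_equal(B @ B.T % 2, A):
                return k
    return None

# A has rank 2 over GF(2) but zero diagonal, so no 2-column B works:
A = np.array([[0, 1],
              [1, 0]])
k = min_cols_BBt(A)     # 3 = 1 + rank(A), the second case of the theorem
```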

95 citations


Journal ArticleDOI
TL;DR: In this paper, the solution of a Poisson equation expressed in finite-difference form on a nonuniform multidimensional mesh of gridpoints in an orthogonal coordinate system is discussed.
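A one-dimensional toy version of such a solver, using the standard three-point stencil for unequal spacings (the grid, right-hand side, and dense solve are our illustrative assumptions; the paper treats multidimensional meshes in orthogonal coordinates):

```python
import numpy as np

# Solve u'' = f on a nonuniform grid with u(0) = u(1) = 0.
t = np.linspace(0.0, 1.0, 21)
x = 0.5*(1 - np.cos(np.pi*t))            # nonuniform (cosine-graded) gridpoints
n = len(x)
A = np.zeros((n, n)); b = np.zeros(n)
A[0, 0] = A[-1, -1] = 1.0                # Dirichlet boundary rows
for i in range(1, n - 1):
    hl, hr = x[i] - x[i-1], x[i+1] - x[i]
    A[i, i-1] = 2/(hl*(hl + hr))         # three-point stencil, unequal spacing
    A[i, i]   = -2/(hl*hr)
    A[i, i+1] = 2/(hr*(hl + hr))
    b[i] = -np.pi**2*np.sin(np.pi*x[i])  # f chosen so that u = sin(pi x) exactly
u = np.linalg.solve(A, b)
err = np.max(np.abs(u - np.sin(np.pi*x)))
```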

29 citations


Journal ArticleDOI
TL;DR: In this paper, simple expressions are given for the Wiener-Hopf factors of a certain matrix considered by Daniele.
Abstract: Simple expressions for the Wiener-Hopf factors of a certain matrix considered by Daniele are given.

18 citations


Journal ArticleDOI
P. Delsarte1, Y. Genin1, Y. Kamp1
TL;DR: In this paper, a triangular decomposition of the inverse of a given matrix is presented, which is applicable to any matrix all contiguous principal submatrices of which are nonsingular and is particularly efficient when the matrix has certain partial symmetries exhibited by the Toeplitz structure.

17 citations


Journal ArticleDOI
TL;DR: The definition of acceleration parameters for the convergence of a sparse LU factorization semi-direct method is shown to be based on lower and upper bounds of the extreme eigenvalues of the iteration matrix.
Abstract: The definition of acceleration parameters for the convergence of a sparse LU factorization semi-direct method is shown to be based on lower and upper bounds of the extreme eigenvalues of the iteration matrix. Optimum values of these parameters are established when the eigenvalues of the iteration matrix are either real or complex. Estimates for the computational work required to reduce the L2 norm of the error by a specified factor are also given.

13 citations


Journal ArticleDOI
TL;DR: In this article, Green's functions and boundary integral equation methods are used to derive a matrix set of equations for scattering from a multilayered homogeneous elastic body embedded in an infinite elastic material.

13 citations


Proceedings ArticleDOI
Mihalis Yannakakis1
13 Oct 1980
TL;DR: It is shown that the 0,1 Integer Programming Problem with an RTUM matrix of constraints has the same time complexity as the b-matching and the max flow problems.
Abstract: We examine the class of matrices that satisfy Commoner's sufficient condition for total unimodularity [C], which we call restricted totally unimodular (RTUM). We show that a matrix is RTUM if and only if it can be decomposed in a very simple way into the incidence matrices (or their transposes) of bipartite graphs or directed graphs, and give a linear time algorithm to perform this task. Based on this decomposition, we show that the 0,1 Integer Programming Problem with an RTUM matrix of constraints has the same time complexity as the b-matching and the max flow problems.

9 citations


Proceedings ArticleDOI
19 May 1980
TL;DR: MATLAB is an interactive computer program that serves as a convenient "laboratory" for computations involving matrices, and provides easy access to matrix software developed by the LINPACK and EISPACK projects.
Abstract: MATLAB is an interactive computer program that serves as a convenient "laboratory" for computations involving matrices. It provides easy access to matrix software developed by the LINPACK and EISPACK projects [1--3]. The capabilities range from standard tasks such as solving simultaneous linear equations and inverting matrices, through symmetric and nonsymmetric eigenvalue problems, to fairly sophisticated matrix tools such as the singular value decomposition.

8 citations


Proceedings ArticleDOI
01 Dec 1980
TL;DR: In this paper, the singular value decomposition of the augmented matrix [A - λI, B] is used to find its null space and then, subsequently, the subspace of possible closed loop eigenvectors and the necessary feedback matrix, K, for the assignment of the specified closed loop eigenvalues and eigenvectors.
Abstract: This short paper examines use of the singular value decomposition of the augmented matrix [A - λI, B] to find its null space and then, subsequently, the subspace of possible closed loop eigenvectors and the necessary feedback matrix, K, for the assignment of the specified closed loop eigenvalues and eigenvectors. This paper describes the very attractive computational alternative of using the singular value decomposition rather than the previously reported approach of elementary column operations. The assignment of complex eigenvalues and repeated eigenvalues using the same basic singular value decomposition of a real matrix is also discussed.
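The null-space step can be illustrated numerically (the 2-state, 1-input system and the desired eigenvalue below are made-up values; computing K from the resulting subspace is omitted):

```python
import numpy as np

def nullspace(M, tol=1e-10):
    """Orthonormal basis for the null space of M, via the SVD."""
    U, s, Vh = np.linalg.svd(M)
    rank = int(np.sum(s > tol))
    return Vh[rank:].conj().T

# toy 2-state, 1-input system (values made up for illustration)
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
B = np.array([[0.0],
              [1.0]])
lam = -5.0                                   # desired closed-loop eigenvalue
N = nullspace(np.hstack([A - lam*np.eye(2), B]))
# any null-space column [v; w] satisfies (A - lam*I)v + B w = 0, so choosing
# K with K v = w makes v a closed-loop eigenvector of A + B K at lam
v, w = N[:2, 0], N[2:, 0]
resid = (A - lam*np.eye(2)) @ v + B @ w      # should vanish
```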

Journal ArticleDOI
Tohru Katayama1
TL;DR: In this article, the estimation of two-dimensional images that may be modeled by a separable autoregressive process is considered, and an approximate feasible estimation algorithm is obtained by applying the Kalman filter.
Abstract: This note considers the estimation of two-dimensional images that may be modeled by a separable autoregressive process. We first derive a one-dimensional vector stochastic model with multiple delays for images; the one-dimensional vector model is further decomposed into a set of nearly independent equations using the matrix factorization theorem and the orthogonal sine transform [10]. Then, applying the Kalman filter, the approximate feasible estimation algorithm is obtained.
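The decoupling role of the sine transform can be seen on a small example: the orthonormal DST-I basis diagonalizes a symmetric tridiagonal Toeplitz matrix, the kind of nearest-neighbour coupling a separable model produces (the size and entries below are our toy choices, not the paper's image model):

```python
import numpy as np

N = 6
# symmetric tridiagonal Toeplitz matrix (nearest-neighbour coupling)
T = 2*np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)

# orthonormal sine-transform basis: S[j,k] = sqrt(2/(N+1)) sin(jk*pi/(N+1))
j = np.arange(1, N + 1)
S = np.sqrt(2/(N + 1)) * np.sin(np.outer(j, j)*np.pi/(N + 1))

D = S @ T @ S            # S is symmetric and orthogonal, so S^-1 = S
# D is diagonal: the coupled equations decouple in the sine basis
```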



Journal ArticleDOI
TL;DR: In this paper, the spectral decomposition of the spatial correlation matrix is interpreted as analysis of the actual correlation matrix into components due to a number of uncorrelated sources, with the appropriate complex delay factors and average powers displayed explicitly.
Abstract: Any Hermitian matrix, such as the spatial correlation matrix of measured data from an array of sensors at some frequency, can be represented as the sum of the dyads formed from its eigenvectors, weighted by the corresponding eigenvalues (the spectral decomposition of the matrix). Each term of this sum is of the form of the spatial correlation matrix due to a single source, received at the array with random amplitude, but fixed wave front (not necessarily planar). The spectral decomposition can thus be interpreted as analysis of the actual correlation matrix into components due to a number of (perhaps hypothetical) uncorrelated sources, with the appropriate complex delay factors and average powers displayed explicitly. In this note, we discuss this briefly, and illustrate with an example.
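A minimal numerical illustration of this expansion (the matrix here is a random Hermitian stand-in for a measured spatial correlation matrix):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(3, 3)) + 1j*rng.normal(size=(3, 3))
R = X @ X.conj().T                         # Hermitian, like a spatial correlation matrix

w, V = np.linalg.eigh(R)                   # eigenvalues w, orthonormal eigenvectors V
# spectral decomposition: R = sum_k w_k v_k v_k^H, one dyad per (perhaps
# hypothetical) source, with w_k playing the role of its average power
R_rebuilt = sum(w[k]*np.outer(V[:, k], V[:, k].conj()) for k in range(len(w)))
```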

01 Aug 1980
TL;DR: In this article, generalizations of Cochran's theorem including nonsymmetric matrices and r-potent matrices are proved by consistent use of projection matrices, and the decomposition of diagonalizable matrices into projections to eigenspaces is studied.
Abstract: Generalizations of Cochran's theorem including (i) nonsymmetric matrices and (ii) r-potent matrices are proved by consistent use of projection matrices. The decomposition of diagonalizable matrices into projections to eigenspaces (the spectral decomposition) and its relation to Cochran-type decomposition are studied.
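The decomposition of a diagonalizable matrix into eigenspace projections can be sketched for a small nonsymmetric matrix of our own choosing:

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [0.0, 2.0]])                 # diagonalizable, not symmetric
lam, V = np.linalg.eig(A)
Vinv = np.linalg.inv(V)

# projection onto the eigenspace of lam[i]: P_i = v_i (row i of V^-1)
P = [np.outer(V[:, i], Vinv[i, :]) for i in range(2)]

# spectral decomposition A = sum_i lam[i] P_i,
# with P_i idempotent and mutually annihilating
A_rebuilt = lam[0]*P[0] + lam[1]*P[1]
```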

Proceedings ArticleDOI
01 Sep 1980
TL;DR: For the analysis of large circuits, a decomposition technique is presented which requires less computer memory and is efficient and applicable for the analysis of linear and nonlinear circuits.
Abstract: For the analysis of large circuits, a decomposition technique is presented which requires less computer memory and is efficient. Decomposition is carried out at the physical structure level by removing a few interconnections. This method is applicable for the analysis of linear and nonlinear circuits.

01 Mar 1980
TL;DR: The mathematical theory for decoupling mth-order matrix differential equations and the role of eigenprojectors and latent projectors are discussed, and the mathematical relationships between eigenvalues, eigenvectors, latent roots, and latent vectors are developed.
Abstract: The mathematical theory for decoupling mth-order matrix differential equations is presented. It is shown that the decoupling procedure can be developed from the algebraic theory of matrix polynomials. The role of eigenprojectors and latent projectors in the decoupling process is discussed and the mathematical relationships between eigenvalues, eigenvectors, latent roots, and latent vectors are developed. It is shown that the eigenvectors of the companion form of a matrix contain the latent vectors as a subset. The spectral decomposition of a matrix and its application to differential equations is given.
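The companion-form relationship between eigenvectors and latent vectors can be checked numerically (the second-order matrix polynomial below uses toy coefficient values):

```python
import numpy as np

# matrix polynomial L(s) = I s^2 + A1 s + A0
A1 = np.array([[3.0, 0.0],
               [0.0, 5.0]])
A0 = np.array([[2.0, 0.0],
               [0.0, 6.0]])
I2, Z = np.eye(2), np.zeros((2, 2))

# companion (first-order) form of L
C = np.block([[Z, I2],
              [-A0, -A1]])
lam, W = np.linalg.eig(C)

# each companion eigenvector has the form [v; lam*v]; its top half v
# is a latent vector of L, i.e. L(lam) v = 0
v = W[:2, 0]
resid = (lam[0]**2*I2 + lam[0]*A1 + A0) @ v
```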

Journal ArticleDOI
TL;DR: A multivariable Foster-network synthesis suitable for application in integrated circuit design technology, using state-variable methods and the Markov parameters, is described.
Abstract: The multivariable Foster-network synthesis suitable for application in integrated circuit design technology using state-variable methods is described. The technique developed makes use of the Markov parameters. The procedure is computationally simple as no matrix factorization or transformation is required.