
Showing papers on "Matrix decomposition published in 1993"


Journal ArticleDOI
TL;DR: In this paper, the use of static voltage stability indices based on a singular value decomposition of the power flow Jacobian matrix and matrices derived from the Jacobian matrix is discussed.
Abstract: The use of static voltage stability indices based on a singular value decomposition of the power flow Jacobian matrix and matrices derived from the Jacobian matrix is discussed. It is shown that such indices, together with the singular vectors, contain substantial and important information about the proximity to voltage instability and also about critical buses and disturbances from a voltage instability point of view. This is done by a theoretical analysis of the linear power flow equations and an analysis from model power systems as well as realistic power systems (1033 nodes). It is argued that indices based on these matrices are useful for the system analyst in planning and operations planning.
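The smallest-singular-value index described above can be sketched in a few lines; the 4-bus Jacobian below is invented for illustration and has nothing to do with the paper's 1033-node test systems:

```python
import numpy as np

# Hypothetical 4-bus power flow Jacobian (values made up for illustration).
J = np.array([
    [10.5, -4.0, -3.0, -3.0],
    [-4.0,  9.2, -2.0, -3.0],
    [-3.0, -2.0,  8.3, -2.9],
    [-3.0, -3.0, -2.9,  9.0],
])

# Singular value decomposition of the Jacobian.
U, s, Vt = np.linalg.svd(J)

# The smallest singular value serves as a proximity index:
# values near zero indicate closeness to voltage instability.
sigma_min = s[-1]

# The right singular vector paired with sigma_min points to the
# critical buses (largest entries = most sensitive voltages).
critical_bus = int(np.argmax(np.abs(Vt[-1])))

print(f"smallest singular value: {sigma_min:.4f}")
print(f"most critical bus index: {critical_bus}")
```

Monitoring how `sigma_min` shrinks as loading increases is the kind of planning use the paper argues for.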

377 citations


Journal ArticleDOI
TL;DR: The Singular Value Decomposition of the equilibrium matrix makes it possible to answer any question of a static, kinematic, or static/kinematic nature for any structural assembly, within a unified computational framework as discussed by the authors.

361 citations


Journal ArticleDOI
TL;DR: The nonnegative rank of a nonnegative matrix is the smallest number of nonnegative rank-one matrices into which the matrix can be decomposed additively.

269 citations


Journal ArticleDOI
TL;DR: This paper presents a new approach to generating multidimensional Gaussian random fields over a regular sampling grid that is both exact and computationally very efficient, with a cost comparable to that of a spectral method implemented using the FFT.
Abstract: To generate multidimensional Gaussian random fields over a regular sampling grid, hydrogeologists can call upon essentially two approaches. The first approach covers methods that are exact but computationally expensive, e.g., matrix factorization. The second covers methods that are approximate but that have only modest computational requirements, e.g., the spectral and turning bands methods. In this paper, we present a new approach that is both exact and computationally very efficient. The approach is based on embedding the random field correlation matrix R in a matrix S that has a circulant/block circulant structure. We then construct products of the square root S1/2 with white noise random vectors. Appropriate sub vectors of this product have correlation matrix R, and so are realizations of the desired random field. The only conditions that must be satisfied for the method to be valid are that (1) the mesh of the sampling grid be rectangular, (2) the correlation function be invariant under translation, and (3) the embedding matrix S be nonnegative definite. These conditions are mild and turn out to be satisfied in most practical hydrogeological problems. Implementation of the method requires only knowledge of the desired correlation function. Furthermore, if the sampling grid is a d-dimensional rectangular mesh containing n points in total and the correlation between points on opposite sides of the rectangle is vanishingly small, the computational requirements are only those of a fast Fourier transform (FFT) of a vector of dimension 2dn per realization. Thus the cost of our approach is comparable with that of a spectral method also implemented using the FFT. In summary, the method is simple to understand, easy to implement, and is fast.
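The one-dimensional version of the circulant-embedding recipe (conditions 1-3 above) fits in a few lines of NumPy; the exponential correlation function, correlation length, and grid size below are illustrative choices, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)

# 1-D sketch of circulant embedding for a stationary Gaussian field.
n = 256                       # points on the regular sampling grid
corr_len = 10.0
lags = np.arange(n)
r = np.exp(-lags / corr_len)  # desired correlations r(0), ..., r(n-1)

# Embed r in the first row of a circulant matrix S of order 2(n-1).
c = np.concatenate([r, r[-2:0:-1]])
m = len(c)

# Eigenvalues of a circulant matrix are the DFT of its first row; the
# method is valid only if they are all (numerically) nonnegative.
lam = np.fft.fft(c).real
lam = np.clip(lam, 0.0, None)

# Multiply S^{1/2} with complex white noise: the real and imaginary
# parts of w are two independent realizations with covariance S, and
# the leading n-subvector has the target correlation matrix R.
z = rng.standard_normal(m) + 1j * rng.standard_normal(m)
w = np.fft.ifft(np.sqrt(lam) * np.fft.fft(z))
field = w.real[:n]
```

Per realization the cost is the two FFTs of length `m`, which is the efficiency claim of the abstract.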

243 citations


Journal ArticleDOI
TL;DR: In this article, the authors proved that a three-way array can be uniquely decomposed as the sum of F rank-1 arrays if the F vectors corresponding to two of the ways are linearly independent and the F vectors corresponding to the third way have the property that no two are collinear.
Abstract: An I-by-J-by-K array has rank 1 if the array is the outer product of an I-, a J-, and a K-vector. The authors prove that a three-way array can be uniquely decomposed as the sum of F rank-1 arrays if the F vectors corresponding to two of the ways are linearly independent and the F vectors corresponding to the third way have the property that no two are collinear. Several algorithms that implement the decomposition are described. The algorithms are applied to obtain initial values for nonlinear least-squares calculations. The performances of the decompositions and of the nonlinear least-squares solutions on real and on simulated data are compared. An extension to higher-way arrays is introduced, and the method is compared with those of other authors.
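The rank-1 building blocks are easy to picture numerically; this sketch (dimensions and random factors chosen arbitrarily) builds a three-way array as a sum of F outer products and checks the resulting slice structure:

```python
import numpy as np

rng = np.random.default_rng(1)
I, J, K, F = 4, 5, 3, 2  # a small I-by-J-by-K array built from F terms

# F linearly independent I-vectors and J-vectors; K-vectors with no
# two columns collinear -- the uniqueness conditions of the paper.
A = rng.standard_normal((I, F))
B = rng.standard_normal((J, F))
C = rng.standard_normal((K, F))

# Sum of F rank-1 arrays: T[i,j,k] = sum_f A[i,f] * B[j,f] * C[k,f].
T = np.einsum('if,jf,kf->ijk', A, B, C)

# Each frontal slice T[:, :, k] equals A @ diag(C[k]) @ B.T, so every
# slice has matrix rank at most F.
ranks = [np.linalg.matrix_rank(T[:, :, k]) for k in range(K)]
```

Recovering `A`, `B`, `C` (up to scaling and permutation) from `T` alone is exactly the decomposition problem the paper's algorithms solve.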

203 citations


Journal ArticleDOI
TL;DR: This paper presents algorithms for updating a rank-revealing ULV decomposition and can be implemented on a linear array of processors to run in O( n ) time.
Abstract: A ULV decomposition of a matrix A of order n is a decomposition of the form $A = ULV^H $, where U and V are orthogonal matrices and L is a lower triangular matrix. When A is approximately of rank k, the decomposition is rank revealing if the last $n - k$ rows of L are small. This paper presents algorithms for updating a rank-revealing ULV decomposition. The algorithms run in $O( n^2 )$ time, and can be implemented on a linear array of processors to run in $O( n )$ time.
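The rank-revealing property can be illustrated via the SVD, which is the special case of a ULV decomposition with a diagonal (hence trivially lower triangular) L; practical ULV algorithms keep L merely lower triangular, which is what makes the O(n^2) updating possible. The matrix below is a made-up example of numerical rank k:

```python
import numpy as np

rng = np.random.default_rng(2)
n, k = 6, 3

# A matrix of numerical rank k: an exact rank-k part plus tiny noise.
A = rng.standard_normal((n, k)) @ rng.standard_normal((k, n))
A = A + 1e-10 * rng.standard_normal((n, n))

# SVD as a ULV decomposition A = U L V^H with L = diag(s).
U, s, Vt = np.linalg.svd(A)
L = np.diag(s)

# Rank revealing: the last n - k rows of L are small.
trailing = np.linalg.norm(L[k:, :])
print(f"norm of last {n - k} rows of L: {trailing:.2e}")
```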

157 citations


Book ChapterDOI
TL;DR: The chapter discusses the problem of perfect or nearly perfect linear dependencies, the complex QR decomposition, the QR decomposition in regression, the essential properties of the QR decomposition, and the use of Householder transformations to compute the QR decomposition.
Abstract: Publisher Summary The QR decomposition is one of the most basic tools for statistical computation, yet it is also one of the most versatile and powerful. The chapter discusses the problem of perfect or nearly perfect linear dependencies, the complex QR decomposition, the QR decomposition in regression, the essential properties of the QR decomposition, and the use of Householder transformations to compute the QR decomposition. A classical alternative to the Householder QR algorithm is the Gram–Schmidt method. Least-squares regression estimates can also be found using either the Cholesky factorization or the singular value decomposition. The QR decomposition approach has been observed to offer excellent numerical properties at reasonable computational cost, while providing factors, Q and R , which are quite generally useful. Although the QR decomposition is a long established technique, it is not in stasis. The QR decomposition is a key to stable and efficient solutions to many least-squares problems. A diverse collection of examples in statistics is given in the chapter and it is certain that there are many more problems for which orthogonalization algorithms can be exploited. In addition, the QR decomposition provides ready insight into the statistical properties of regression estimates.
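The regression use of the QR decomposition described above is a one-liner in NumPy (whose `qr` is backed by LAPACK's Householder routine); the model and data here are synthetic:

```python
import numpy as np

rng = np.random.default_rng(3)

# Least-squares regression via QR: for y = X b + e, factor X = QR and
# solve the triangular system R b = Q^T y by back-substitution.
n, p = 50, 3
X = rng.standard_normal((n, p))
b_true = np.array([1.0, -2.0, 0.5])
y = X @ b_true + 0.01 * rng.standard_normal(n)

Q, R = np.linalg.qr(X)               # reduced QR: Q is n-by-p, R is p-by-p
b_hat = np.linalg.solve(R, Q.T @ y)  # triangular solve for the estimate
```

This avoids forming the normal equations X^T X, which is the numerical-stability advantage the chapter emphasizes over the Cholesky route.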

156 citations


Journal ArticleDOI
TL;DR: A new computationally efficient algorithm for recursive least squares filtering is derived, based upon an inverse QR decomposition, which solves directly for the time-recursive least squares filter vector while avoiding the highly serial backsubstitution step required in previously derived direct QR approaches.
Abstract: A new computationally efficient algorithm for recursive least squares filtering is derived, which is based upon an inverse QR decomposition. The method solves directly for the time-recursive least squares filter vector, while avoiding the highly serial backsubstitution step required in previously derived direct QR approaches. Furthermore, the method employs orthogonal rotation operations to recursively update the filter, and thus preserves the inherent stability properties of QR approaches to recursive least squares filtering. The results of simulations over extremely long data sets are also presented, which suggest stability of the new time-recursive algorithm. Finally, parallel implementation of the resulting method is briefly discussed, and computational wavefronts are displayed.

119 citations


Journal ArticleDOI
A.S. Morse1
TL;DR: In this paper, an identifier-based solution to a simplified multivariable adaptive stabilization problem solved previously using non-identifier-based methods is presented, which is accomplished by exploiting a new method of discrete-time parameter adjustment called pseudo-continuous tuning.

94 citations


Journal ArticleDOI
TL;DR: A new numerical method for computing the GSVD of two matrices A and B is presented, a variation on Paige's method, which differs from previous algorithms in guaranteeing both backward stability and convergence.
Abstract: We present a new numerical method for computing the GSVD [36, 27] of two matrices A and B. This method is a variation on Paige's method [30]. It differs from previous algorithms in guaranteeing both backward stability and convergence. There are two innovations. The first is a new preprocessing step which reduces A and B to upper triangular forms satisfying certain rank conditions. The second is a new 2 by 2 triangular GSVD algorithm, which constitutes the inner loop of Paige's method. We present proofs of stability and convergence of our method, and demonstrate examples on which all previous algorithms fail.

89 citations


Journal ArticleDOI
TL;DR: Two methods of matrix inversion are compared for use in an image reconstruction algorithm based on energy minimization using a Hopfield neural network and the inverse obtained using singular value decomposition.
Abstract: Two methods of matrix inversion are compared for use in an image reconstruction algorithm. The first is based on energy minimization using a Hopfield neural network. This is compared with the inverse obtained using singular value decomposition (SVD). It is shown for a practical example that the neural network provides a more useful and robust matrix inverse.

Journal ArticleDOI
TL;DR: In this paper, a characterization of nonsingular totally positive matrices by their $QR$ factorization is obtained, in which Neville elimination plays an essential role.
Abstract: A well-known characterization of nonsingular totally positive matrices is improved: Only the sign of minors with consecutive initial rows or consecutive initial columns has to be checked. On the other hand, a new characterization of such matrices by their $QR$ factorization is obtained. As in other recent papers of the authors, Neville elimination plays an essential role.

Proceedings ArticleDOI
16 Aug 1993
TL;DR: This paper analyzes the performance and scalability of a number of parallel formulations of the matrix multiplication algorithm and predicts the conditions under which each formulation is better than the others.
Abstract: A number of parallel formulations of the dense matrix multiplication algorithm have been developed. For an arbitrarily large number of processors, any of these algorithms or their variants can provide near-linear speedup for sufficiently large matrix sizes, and none of the algorithms can be clearly claimed to be superior to the others. In this paper we analyze the performance and scalability of a number of parallel formulations of the matrix multiplication algorithm and predict the conditions under which each formulation is better than the others.

Journal ArticleDOI
TL;DR: New implementation techniques for a modified Forrest-Tomlin LU update which reduce the time complexity of the update and the solution of the associated sparse linear systems of simplex-based linear programming software are presented.
Abstract: This paper discusses sparse matrix kernels of simplex-based linear programming software. State-of-the-art implementations of the simplex method maintain an LU factorization of the basis matrix which is updated at each iteration. The LU factorization is used to solve two sparse sets of linear equations at each iteration. We present new implementation techniques for a modified Forrest-Tomlin LU update which reduce the time complexity of the update and the solution of the associated sparse linear systems. We present numerical results on Netlib and other real-life LP models.
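The "two sparse sets of linear equations per iteration" are the forward and transposed basis systems; a dense SciPy sketch (toy data, not the paper's sparse Forrest-Tomlin machinery) shows how one factorization serves both solves:

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

rng = np.random.default_rng(6)

# Toy basis matrix B (diagonally shifted so it is safely nonsingular),
# a right-hand side b, and basic-variable costs c_B.
B = rng.standard_normal((5, 5)) + 5 * np.eye(5)
b = rng.standard_normal(5)
cB = rng.standard_normal(5)

# One LU factorization of the basis, reused for both per-iteration solves.
lu, piv = lu_factor(B)
x = lu_solve((lu, piv), b)            # forward system   B x   = b
y = lu_solve((lu, piv), cB, trans=1)  # transposed system B^T y = c_B
```

Real simplex codes keep this factorization sparse and update it cheaply (Forrest-Tomlin style) as one basis column changes per iteration, rather than refactorizing.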

Journal ArticleDOI
TL;DR: In this article, error bounds are derived for a first-order expansion of the LU factorization of a perturbation of the identity, and the results are applied to obtain perturbations expansions of LU, Cholesky, and QR factorizations.
Abstract: In this paper error bounds are derived for a first-order expansion of the LU factorization of a perturbation of the identity. The results are applied to obtain perturbation expansions of the LU, Cholesky, and QR factorizations.

Proceedings ArticleDOI
02 Jun 1993
TL;DR: In this paper, the authors present all controllers for the general H∞ control problem, with no assumptions on the plant matrices, and provide necessary and sufficient conditions for the existence of an H ∞ suboptimal controller of any order in terms of three Linear Matrix Inequalities.
Abstract: This paper presents all controllers for the general H∞ control problem (with no assumptions on the plant matrices). Necessary and sufficient conditions for the existence of an H∞ suboptimal controller of any order are given in terms of three Linear Matrix Inequalities (LMIs). Furthermore, we provide the set of all H∞ suboptimal controllers explicitly parametrized in the state space using the positive definite solutions to the LMIs. The inequality formulation converts the existence conditions to a convex feasibility problem, and also a free matrix parameter in the controller formula defines a finite dimensional design space, as opposed to the infinite dimensional space associated with the Q-parametrization.

Journal ArticleDOI
TL;DR: Various theoretical issues in multidimensional (m-D) multirate signal processing are formulated and solved, based on several key properties of integer matrices, including greatest common divisors and least common multiples.
Abstract: Various theoretical issues in multidimensional (m-D) multirate signal processing are formulated and solved. In the problems considered, the decimation matrix and the expansion matrix are nondiagonal, so that extensions of 1-D results are nontrivial. The m-D polyphase implementation technique for rational sampling rate alterations, the perfect reconstruction properties for the m-D delay-chain systems, and the periodicity matrices of decimated m-D signals (both deterministic and statistical) are treated. The discussions are based on several key properties of integer matrices, including greatest common divisors and least common multiples. These properties are reviewed.

Journal ArticleDOI
TL;DR: A fast algorithm is suggested to compute the inverse of the window function matrix, enabling discrete signals to be transformed into generalized nonorthogonal Gabor representations efficiently.
Abstract: Properties of the Gabor transformation used for image representation are discussed. The properties can be expressed in matrix notation, and the complete Gabor coefficients can be found by multiplying the inverse of the Gabor (1946) matrix and the signal vector. The Gabor matrix can be decomposed into the product of a sparse constant complex matrix and another sparse matrix that depends only on the window function. A fast algorithm is suggested to compute the inverse of the window function matrix, enabling discrete signals to be transformed into generalized nonorthogonal Gabor representations efficiently. A comparison is made between this method and the analytical method. The relation between the window function matrix and the biorthogonal functions is demonstrated. A numerical computation method for the biorthogonal functions is proposed.

Book ChapterDOI
01 Jan 1993
TL;DR: Some new methods will be presented for computing verified inclusions of the solution of large linear systems, with examples with up to 1,000,000 unknowns.
Abstract: Validated Solution of Large Linear Systems. Some new methods will be presented for computing verified inclusions of the solution of large linear systems. The matrix of the linear system is typically of sparse or band structure. There are no prerequisites for the matrix, such as being an M-matrix, symmetric, positive definite, or diagonally dominant. For general band matrices of lower and upper bandwidth p and q and dimension n, the computing time is n · (pq + p^2 + q^2). Examples with up to 1,000,000 unknowns will be presented.

Proceedings ArticleDOI
06 Oct 1993
TL;DR: This paper describes parallel matrix transpose algorithms on distributed memory concurrent processors that make use of non-blocking, point-to-point communication between processors, and results are presented for runs on the Intel Touchstone Delta computer.
Abstract: This paper describes parallel matrix transpose algorithms on distributed memory concurrent processors. We assume that the matrix is distributed over a P/spl times/Q processor template with a block scattered data distribution. P, Q, and the block size can be arbitrary, so the algorithms have wide applicability. The algorithms make use of non-blocking, point-to-point communication between processors. The use of nonblocking communication allows a processor to overlap the messages that it sends to different processors, thereby avoiding unnecessary synchronization. Combined with the matrix multiplication routine, C=A/spl middot/B, the algorithms are used to compute parallel multiplications of transposed matrices, C=A/sup T//spl middot/B/sup T/, in the PUMMA package. Details of the parallel implementation of the algorithms are given, and results are presented for runs on the Intel Touchstone Delta computer.


Proceedings ArticleDOI
02 Jun 1993
TL;DR: In this article, a new method for eigenvalue assignment in linear periodic discrete-time systems through the use of linear periodic state feedback is presented. But this method is not suitable for the case where the system equation is given in descriptor form.
Abstract: We present a new method for eigenvalue assignment in linear periodic discrete-time systems through the use of linear periodic state feedback. The proposed method uses reliable numerical techniques based on unitary transformations. In essence, it computes the Schur form of the open-loop monodromy matrix via a recent implicit eigen-decomposition algorithm, and shifts its eigenvalues sequentially. Given complete reachability of the open-loop system, we show that we can assign an arbitrary set of eigenvalues to the closed-loop monodromy matrix in this manner. Under the weaker assumption of complete control-lability, this method can be used to place all eigenvalues at the origin, thus solving the so-called deadbeat control problem. The algorithm readily extends to more general situations, such as when the system equation is given in descriptor form.

Journal ArticleDOI
TL;DR: An infeasible-interior-point algorithm for monotone linear complementarity problems that has polynomial complexity, global linear convergence, and local superlinear convergence with a Q-order of 2.
Abstract: We describe an infeasible-interior-point algorithm for monotone linear complementarity problems that has polynomial complexity, global linear convergence, and local superlinear convergence with a Q-order of 2. Only one matrix factorization is required per iteration, and the analysis assumes only that a strictly complementary solution exists.

Journal ArticleDOI
TL;DR: The time evolution operator for any quantum-mechanical computer is diagonalizable, but to obtain the diagonal decomposition of a program state of the computer is as hard as actually performing the computation corresponding to the program.
Abstract: The time evolution operator for any quantum-mechanical computer is diagonalizable, but to obtain the diagonal decomposition of a program state of the computer is as hard as actually performing the computation corresponding to the program. In particular, if a quantum-mechanical system is capable of universal computation, then the diagonal decomposition of program states is uncomputable. As a result, in a universe in which local variables support universal computation, a quantum-mechanical theory for that universe that supplies its spectrum cannot supply the spectral decomposition of the computational variables. A "theory of everything" can be simultaneously correct and fundamentally incomplete.

Journal ArticleDOI
TL;DR: This work considers a different, so-called rank-revealing two-sided orthogonal decomposition, which decomposes the matrix into a product of a unitary matrix, a triangular matrix, and another unitary matrix in such a way that the effective rank of the matrix is obvious and at the same time the noise subspace is exhibited explicitly.
Abstract: Solving Total Least Squares (TLS) problems AX ≈ B requires the computation of the noise subspace of the data matrix [A;B]. The widely used tool for doing this is the Singular Value Decomposition (SVD). However, the SVD has the drawback that it is computationally expensive. Therefore, we consider here a different, so-called rank-revealing two-sided orthogonal decomposition, which decomposes the matrix into a product of a unitary matrix, a triangular matrix, and another unitary matrix in such a way that the effective rank of the matrix is obvious and at the same time the noise subspace is exhibited explicitly. We show how this decomposition leads to an efficient and reliable TLS algorithm that can be parallelized in an efficient way.

Journal ArticleDOI
TL;DR: This paper gives polynomial algorithms to test whether a linear matrix is balanced or perfect, based on decomposition results previously obtained by the authors.
Abstract: A (0, 1) matrix is linear if it does not contain a 2×2 submatrix of all ones. In this paper we give polynomial algorithms to test whether a linear matrix is balanced or perfect. The algorithms are based on decomposition results previously obtained by the authors.

Journal ArticleDOI
TL;DR: An overview of matrix decomposition algorithms and how they can be expressed in terms of a unifying framework is provided, and particular emphasis is placed on algorithms formulated recently by the authors for solving the linear systems arising in orthogonal spline collocation, that is, spline collocation at Gauss points.

Journal ArticleDOI
TL;DR: In this article, a first-order componentwise perturbation analysis of the $QR$ decomposition is presented, and the bounds derived are invariant under column scaling of the underlying matrix.
Abstract: A first-order componentwise perturbation analysis of the $QR$ decomposition is presented. In contrast to the traditional normwise perturbation analysis, the bounds derived are invariant under column scaling of the underlying matrix. As an application, an assessment of the accuracy of the computed $QR$ decomposition using Householder transformations is given.
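The scaling invariance can be seen concretely: post-multiplying A by a positive diagonal D leaves the Householder Q factor essentially unchanged and scales the columns of R, so bounds phrased componentwise in terms of R can absorb the scaling. This is a numerical illustration of that invariance, not the paper's analysis:

```python
import numpy as np

rng = np.random.default_rng(5)

# Column scaling A -> A D with D a positive diagonal matrix. Each
# Householder reflector depends only on the direction of the column
# it reduces, so Q is (essentially) unchanged and R -> R D.
A = rng.standard_normal((5, 3))
D = np.diag([1.0, 1e-6, 1e6])  # extreme scaling across columns

Q1, R1 = np.linalg.qr(A)
Q2, R2 = np.linalg.qr(A @ D)
```

A normwise perturbation bound for `A @ D` would be dominated by the 1e6 column, while the componentwise picture is unaffected.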

Journal ArticleDOI
TL;DR: It is shown that regularization can be incorporated into the algorithm with virtually no extra work and it is possible to compute regularized solutions to ill-conditioned Toeplitz least squares problems using only $O(mn)$ operations.
Abstract: Fast orthogonalization schemes for $m \times n$ Toeplitz matrices T, introduced by Bojanczyk, Brent, and de Hoog (BBH) and Chun, Kailath, and Lev-Ari (CKL), are extended to compute directly an inverse $QR$ factorization of T using only $O(mn)$ operations. An inverse factorization allows for an efficient parallel implementation, and the algorithm is computationally less expensive for computing a solution to the Toeplitz least squares problem than previously studied inverse $QR$ methods. In addition, it is shown that regularization can be incorporated into the algorithm with virtually no extra work. Thus it is possible to compute regularized solutions to ill-conditioned Toeplitz least squares problems using only $O(mn)$ operations. An application to ill-conditioned problems occurring in signal restoration is provided, illustrating the effectiveness of this method.
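The regularized Toeplitz least squares problem the abstract refers to can be sketched with Tikhonov regularization; a dense `lstsq` solve is used here for clarity (the paper's inverse QR scheme achieves the same in O(mn) operations), and the Gaussian blur kernel and parameter values are illustrative:

```python
import numpy as np
from scipy.linalg import toeplitz

rng = np.random.default_rng(4)

# Ill-conditioned m-by-n Toeplitz matrix from a smooth blur kernel,
# as arises in signal restoration.
m, n = 60, 40
col = np.exp(-np.arange(m) ** 2 / 8.0)
row = np.exp(-np.arange(n) ** 2 / 8.0)
T = toeplitz(col, row)

x_true = np.sin(np.linspace(0.0, 3.0, n))
b = T @ x_true + 1e-3 * rng.standard_normal(m)

# Tikhonov regularization: min ||T x - b||^2 + lam^2 ||x||^2, posed
# as an augmented least squares problem.
lam = 1e-2  # regularization parameter (illustrative choice)
A_aug = np.vstack([T, lam * np.eye(n)])
b_aug = np.concatenate([b, np.zeros(n)])
x_reg, *_ = np.linalg.lstsq(A_aug, b_aug, rcond=None)
```

The augmented matrix is no longer Toeplitz, which is precisely why incorporating regularization into the fast factorization "with virtually no extra work" is a nontrivial claim.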

Proceedings ArticleDOI
06 Oct 1993
TL;DR: This paper compares two general library routines for performing parallel distributed matrix multiplication, the PUMMA algorithm utilities block scattered data layout, whereas BiMMeR utilizes virtual 2-D torus wrap.
Abstract: This paper compares two general library routines for performing parallel distributed matrix multiplication. The PUMMA algorithm utilities block scattered data layout, whereas BiMMeR utilizes virtual 2-D torus wrap. The algorithmic differences resulting from these different layouts are discussed us well as the general issues associated with different data layouts for library routines. Results on the Intel Delta for the two matrix multiplication algorithms are presented. >