Topic

Square matrix

About: Square matrix is a research topic. Over the lifetime, 5000 publications have been published within this topic receiving 92428 citations.


Papers
Journal ArticleDOI
TL;DR: An extension of the Schur method is presented which enables real arithmetic to be used throughout when computing a real square root of a real matrix.

208 citations
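SciPy's `scipy.linalg.sqrtm` implements a Schur-based square-root algorithm in this family. As a minimal illustration (the matrix below is chosen only for the example), a real matrix with positive real eigenvalues has a real principal square root that can be recovered entirely in real arithmetic:

```python
import numpy as np
from scipy.linalg import sqrtm

# A real matrix with no nonpositive real eigenvalues has a unique
# principal square root; sqrtm computes it via a blocked Schur method.
A = np.array([[4.0, 1.0],
              [0.0, 9.0]])
X = sqrtm(A)

assert np.allclose(X @ X, A)  # X is indeed a square root of A
```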

Journal ArticleDOI
TL;DR: It is shown that apparently innocuous algorithmic modifications to the Padé iteration can lead to instability, and a perturbation analysis is given to provide some explanation.
Abstract: Any matrix with no nonpositive real eigenvalues has a unique square root for which every eigenvalue lies in the open right half-plane. A link between the matrix sign function and this square root is exploited to derive both old and new iterations for the square root from iterations for the sign function. One new iteration is a quadratically convergent Schulz iteration based entirely on matrix multiplication; it converges only locally, but can be used to compute the square root of any nonsingular M-matrix. A new Padé iteration well suited to parallel implementation is also derived and its properties explained. Iterative methods for the matrix square root are notorious for suffering from numerical instability. It is shown that apparently innocuous algorithmic modifications to the Padé iteration can lead to instability, and a perturbation analysis is given to provide some explanation. Numerical experiments are included and advice is offered on the choice of iterative method for computing the matrix square root.

207 citations
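The paper's Schulz and Padé iterations are not reproduced here; as an illustration of iterative square-root computation of the same kind, below is a sketch of the well-known Denman–Beavers coupled iteration (a stabilized Newton variant, valid when A has no nonpositive real eigenvalues). The iteration count and tolerance are illustrative choices:

```python
import numpy as np

def db_sqrt(A, iters=50, tol=1e-12):
    """Denman-Beavers coupled iteration for the principal matrix square root:
    Y_{k+1} = (Y_k + inv(Z_k))/2,  Z_{k+1} = (Z_k + inv(Y_k))/2,
    with Y_k -> sqrt(A) and Z_k -> inv(sqrt(A))."""
    Y = A.astype(float)
    Z = np.eye(A.shape[0])
    for _ in range(iters):
        Y_next = 0.5 * (Y + np.linalg.inv(Z))
        Z = 0.5 * (Z + np.linalg.inv(Y))
        Y = Y_next
        if np.linalg.norm(Y @ Y - A) <= tol * np.linalg.norm(A):
            break
    return Y

A = np.array([[4.0, 1.0],
              [0.0, 9.0]])
X = db_sqrt(A)
assert np.allclose(X @ X, A)
```

As the abstract warns, seemingly harmless rearrangements of such iterations (e.g. the plain Newton form) can be numerically unstable; the coupled form above is one of the standard stable variants.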

Journal ArticleDOI
TL;DR: The authors show that a necessary and sufficient condition for a 3×3 matrix to be so decomposable is that one of its singular values is zero and the other two are equal.
Abstract: In the eight-point linear algorithm for determining 3D motion/structure from two perspective views using point correspondences, the E matrix plays a central role. The E matrix is defined as a skew-symmetric matrix (containing the translation components) postmultiplied by a rotation matrix. The authors show that a necessary and sufficient condition for a 3×3 matrix to be so decomposable is that one of its singular values is zero and the other two are equal. Several other forms of this property are presented. Some applications are briefly described.

204 citations
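The singular-value condition is easy to verify numerically. A sketch (with an arbitrary illustrative rotation and translation): build E as a skew-symmetric translation matrix postmultiplied by a rotation, then check that its singular values have the form (s, s, 0):

```python
import numpy as np

def skew(t):
    """Skew-symmetric cross-product matrix [t]_x."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

theta = 0.3  # illustrative rotation about the z-axis
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0, 0.0, 1.0]])
t = np.array([1.0, 2.0, 3.0])  # illustrative translation

E = skew(t) @ R                # E matrix: skew part postmultiplied by rotation
s = np.linalg.svd(E, compute_uv=False)

assert np.isclose(s[2], 0.0)   # one singular value is zero
assert np.isclose(s[0], s[1])  # the other two are equal (both = ||t||)
```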

Journal ArticleDOI
01 Feb 1987
TL;DR: In this paper, the authors discuss algorithms for matrix multiplication on a concurrent processor containing a two-dimensional mesh or richer topology, and present detailed performance measurements on hypercubes with 4, 16, and 64 nodes.
Abstract: We discuss algorithms for matrix multiplication on a concurrent processor containing a two-dimensional mesh or richer topology. We present detailed performance measurements on hypercubes with 4, 16, and 64 nodes, and analyze them in terms of communication overhead and load balancing. We show that the decomposition into square subblocks is optimal. C code implementing the algorithms is available.

200 citations
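The square-subblock decomposition can be sketched serially; in the paper's concurrent setting each block product would be assigned to a different mesh node, but the arithmetic is the same. A minimal sketch (block size and matrix sizes are illustrative, with the block size assumed to divide n):

```python
import numpy as np

def blocked_matmul(A, B, bs):
    """Multiply n x n matrices by square bs x bs subblocks.
    Serial sketch of the decomposition; on a 2D mesh each
    C[i,j] block would be accumulated on its own node."""
    n = A.shape[0]
    C = np.zeros((n, n))
    for i in range(0, n, bs):
        for j in range(0, n, bs):
            for k in range(0, n, bs):
                C[i:i+bs, j:j+bs] += A[i:i+bs, k:k+bs] @ B[k:k+bs, j:j+bs]
    return C

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 8))
B = rng.standard_normal((8, 8))
assert np.allclose(blocked_matmul(A, B, 4), A @ B)
```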

Journal ArticleDOI
TL;DR: The problem of learning a symmetric positive definite matrix is addressed; the derivation and analyses of the original EG update and AdaBoost generalize to the non-diagonal case, and the resulting matrix exponentiated gradient (MEG) update and DefiniteBoost are applied to the problem of learning a kernel matrix from distance measurements.
Abstract: We address the problem of learning a symmetric positive definite matrix. The central issue is to design parameter updates that preserve positive definiteness. Our updates are motivated with the von Neumann divergence. Rather than treating the most general case, we focus on two key applications that exemplify our methods: on-line learning with a simple square loss, and finding a symmetric positive definite matrix subject to linear constraints. The updates generalize the exponentiated gradient (EG) update and AdaBoost, respectively: the parameter is now a symmetric positive definite matrix of trace one instead of a probability vector (which in this context is a diagonal positive definite matrix with trace one). The generalized updates use matrix logarithms and exponentials to preserve positive definiteness. Most importantly, we show how the derivation and the analyses of the original EG update and AdaBoost generalize to the non-diagonal case. We apply the resulting matrix exponentiated gradient (MEG) update and DefiniteBoost to the problem of learning a kernel matrix from distance measurements.

199 citations
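The core mechanism described in the abstract, updating in matrix log-space and renormalizing to trace one so the parameter stays symmetric positive definite, can be sketched as follows. The learning rate, gradient, and starting matrix are illustrative assumptions, not the paper's experimental setup:

```python
import numpy as np
from scipy.linalg import expm, logm

def meg_update(W, grad, eta=0.1):
    """One matrix exponentiated gradient (MEG) step: move in the
    matrix-logarithm domain, exponentiate back, renormalize to trace one.
    The matrix exponential of a symmetric matrix is symmetric positive
    definite, so positive definiteness is preserved by construction."""
    G = 0.5 * (grad + grad.T)        # symmetrize the gradient
    M = expm(logm(W) - eta * G)
    return M / np.trace(M)

W = np.eye(3) / 3.0                  # trace-one SPD starting point
grad = np.array([[1.0, 0.2, 0.0],    # illustrative loss gradient
                 [0.2, 0.5, 0.1],
                 [0.0, 0.1, 2.0]])
W1 = meg_update(W, grad)

assert np.isclose(np.trace(W1), 1.0)          # trace-one constraint holds
assert np.all(np.linalg.eigvalsh(W1) > 0)     # still positive definite
```

This is the matrix analogue of the exponentiated gradient update on a probability vector, which corresponds to the diagonal case.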


Network Information
Related Topics (5)
- Matrix (mathematics): 105.5K papers, 1.9M citations (84% related)
- Polynomial: 52.6K papers, 853.1K citations (84% related)
- Eigenvalues and eigenvectors: 51.7K papers, 1.1M citations (81% related)
- Bounded function: 77.2K papers, 1.3M citations (80% related)
- Hilbert space: 29.7K papers, 637K citations (79% related)
Performance Metrics
No. of papers in the topic in previous years:

Year    Papers
2023    22
2022    44
2021    115
2020    149
2019    134
2018    145