scispace - formally typeset
Topic

Orthogonal matrix

About: Orthogonal matrix is a research topic. Over its lifetime, 1,541 publications have been published within this topic, receiving 28,246 citations.


Papers
Journal ArticleDOI
TL;DR: Methods involving approximation theory, differential equations, the matrix eigenvalues, and the matrix characteristic polynomial have been proposed; some of these methods are preferable to others, but none are completely satisfactory.
Abstract: In principle, the exponential of a matrix could be computed in many ways. Methods involving approximation theory, differential equations, the matrix eigenvalues, and the matrix characteristic polynomial…
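As a concrete illustration (a sketch of mine, not code from the paper), two of the classic approaches to computing exp(A) can be compared on a small skew-symmetric matrix, whose exponential is an orthogonal rotation matrix:

```python
import numpy as np

# Sketch only: two classic ways to compute the matrix exponential.
def expm_eig(A):
    # Eigendecomposition method: works well when A is normal and
    # well-conditioned, but can fail badly for defective matrices.
    w, V = np.linalg.eig(A)
    return (V * np.exp(w)) @ np.linalg.inv(V)

def expm_taylor(A, terms=30):
    # Truncated Taylor series: simple, but numerically fragile
    # when the norm of A is large.
    E = np.eye(A.shape[0])
    T = np.eye(A.shape[0])
    for k in range(1, terms):
        T = T @ A / k
        E = E + T
    return E

A = np.array([[0.0, 1.0], [-1.0, 0.0]])  # skew-symmetric
R = expm_eig(A).real
assert np.allclose(R, expm_taylor(A))
assert np.allclose(R.T @ R, np.eye(2))   # exp of skew-symmetric is orthogonal
```

Production code would instead use a scaling-and-squaring implementation such as `scipy.linalg.expm` rather than either of these sketches.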

2,196 citations

Journal ArticleDOI
TL;DR: In this paper, a closed-form solution to the least-squares problem for three or more points is presented; it requires the computation of the square root of a symmetric matrix, and the best scale is equal to the ratio of the root-mean-square deviations of the coordinates in the two systems from their respective centroids.
Abstract: Finding the relationship between two coordinate systems by using pairs of measurements of the coordinates of a number of points in both systems is a classic photogrammetric task. The solution has applications in stereophotogrammetry and in robotics. We present here a closed-form solution to the least-squares problem for three or more points. Currently, various empirical, graphical, and numerical iterative methods are in use. Derivation of a closed-form solution can be simplified by using unit quaternions to represent rotation, as was shown in an earlier paper [ J. Opt. Soc. Am. A4, 629 ( 1987)]. Since orthonormal matrices are used more widely to represent rotation, we now present a solution in which 3 × 3 matrices are used. Our method requires the computation of the square root of a symmetric matrix. We compare the new result with that obtained by an alternative method in which orthonormality is not directly enforced. In this other method a best-fit linear transformation is found, and then the nearest orthonormal matrix is chosen for the rotation. We note that the best translational offset is the difference between the centroid of the coordinates in one system and the rotated and scaled centroid of the coordinates in the other system. The best scale is equal to the ratio of the root-mean-square deviations of the coordinates in the two systems from their respective centroids. These exact results are to be preferred to approximate methods based on measurements of a few selected points.
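The recipe in this abstract can be sketched as follows (a rough illustration under my own naming, not the paper's code; the rotation is obtained here via an SVD rather than the paper's symmetric square root of MᵀM, and the reflection case det(R) = −1 is not handled):

```python
import numpy as np

# Illustrative sketch of a closed-form absolute-orientation recipe:
# rotation from the orthonormal factor of the cross-covariance matrix,
# scale from the ratio of RMS deviations, translation from the centroids.
def absolute_orientation(P, Q):
    """Find s, R, t with Q ~= s * R @ P + t for 3xN point sets P and Q."""
    cp, cq = P.mean(axis=1, keepdims=True), Q.mean(axis=1, keepdims=True)
    Pc, Qc = P - cp, Q - cq
    U, _, Vt = np.linalg.svd(Qc @ Pc.T)          # cross-covariance matrix
    R = U @ Vt                                   # nearest orthonormal matrix
    s = np.sqrt((Qc**2).sum() / (Pc**2).sum())   # ratio of RMS deviations
    t = cq - s * R @ cp                          # centroid offset after s, R
    return s, R, t
```

On noise-free data generated by a known similarity transform, this round-trips exactly; with noisy measurements it returns the least-squares fit described in the abstract.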

1,101 citations

Book
01 Jan 1991
TL;DR: This book covers Gaussian elimination and its variants, the sensitivity of linear systems, orthogonal matrices and the least-squares problem, eigenvalue problems, and the singular value decomposition.
Abstract: Contents: Gaussian Elimination and Its Variants; Sensitivity of Linear Systems; Effects of Roundoff Errors; Orthogonal Matrices and the Least Squares Problem; Eigenvalues, Eigenvectors, and Invariant Subspaces; Other Methods for the Symmetric Eigenvalue Problem; The Singular Value Decomposition; Appendices; Bibliography

1,077 citations

Journal ArticleDOI
TL;DR: The Homotopy method is applied to the underdetermined ℓ1-minimization problem min ‖x‖₁ subject to y = Ax and is shown to run much more rapidly than general-purpose LP solvers when sufficient sparsity is present, implying that Homotopy may be used to rapidly decode error-correcting codes in a stylized communication system with a computational budget constraint.
Abstract: The minimum ℓ1-norm solution to an underdetermined system of linear equations y = Ax is often, remarkably, also the sparsest solution to that system. This sparsity-seeking property is of interest in signal processing and information transmission. However, general-purpose optimizers are much too slow for ℓ1 minimization in many large-scale applications. In this paper, the Homotopy method, originally proposed by Osborne et al. and Efron et al., is applied to the underdetermined ℓ1-minimization problem min ‖x‖₁ subject to y = Ax. Homotopy is shown to run much more rapidly than general-purpose LP solvers when sufficient sparsity is present. Indeed, the method often has the following k-step solution property: if the underlying solution has only k nonzeros, the Homotopy method reaches that solution in only k iterative steps. This k-step solution property is demonstrated for several ensembles of matrices, including incoherent matrices, uniform spherical matrices, and partial orthogonal matrices. These results imply that Homotopy may be used to rapidly decode error-correcting codes in a stylized communication system with a computational budget constraint. The approach also sheds light on the evident parallelism in results on ℓ1 minimization and orthogonal matching pursuit (OMP), and aids in explaining the inherent relations between Homotopy, least angle regression (LARS), OMP, and polytope faces pursuit.
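To make the sparsity-seeking property concrete, here is a toy sketch (mine, not the paper's Homotopy algorithm) that solves min ‖x‖₁ subject to y = Ax as a linear program via the standard splitting x = u − v with u, v ≥ 0:

```python
import numpy as np
from scipy.optimize import linprog

def l1_min(A, y):
    """Solve min ||x||_1 subject to A x = y as a linear program."""
    m, n = A.shape
    c = np.ones(2 * n)                  # objective: sum(u) + sum(v)
    A_eq = np.hstack([A, -A])           # encodes A (u - v) = y
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None))
    u, v = res.x[:n], res.x[n:]
    return u - v

rng = np.random.default_rng(1)
A = rng.standard_normal((30, 60))       # underdetermined: 30 equations, 60 unknowns
x_true = np.zeros(60)
x_true[[3, 17, 41]] = [1.0, -2.0, 0.5]  # k = 3 nonzeros
x_hat = l1_min(A, A @ x_true)
# With this much sparsity, the l1 solution typically coincides with x_true.
```

This generic LP route is exactly what the paper's Homotopy method is designed to beat: when the solution has only k nonzeros, Homotopy can reach it in roughly k cheap steps rather than solving the full program.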

921 citations

Journal ArticleDOI
TL;DR: Optimization methods for deriving the maximum likelihood estimates, as well as the practical usefulness of these models, are discussed, and an application to stellar data dramatically illustrates the relevance of allowing clusters to have different volumes.

858 citations


Network Information
Related Topics (5)
Markov chain
51.9K papers, 1.3M citations
76% related
Upper and lower bounds
56.9K papers, 1.1M citations
73% related
Probability distribution
40.9K papers, 1.1M citations
73% related
Bounded function
77.2K papers, 1.3M citations
73% related
Optimization problem
96.4K papers, 2.1M citations
72% related
Performance
Metrics
No. of papers in the topic in previous years
Year  Papers
2023  15
2022  38
2021  56
2020  69
2019  67
2018  71