Book Chapter DOI

Comparing Two Matrices by Means of Isometric Projections

TL;DR: A number of optimization problems defined on a manifold are considered in order to compare two matrices, possibly of different order, and it is shown how these problems relate to various specific problems from the literature.
Abstract: In this paper, we go over a number of optimization problems defined on a manifold in order to compare two matrices, possibly of different order. We consider several variants and show how these problems relate to various specific problems from the literature.
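One variant can be read off from the citing snippets further down this page: given Hermitian matrices $N$ of order $n$ and $B$ of order $p$, possibly with $n \neq p$, both are projected to a common order $k$ by isometries and compared in the Frobenius norm,

$$\min_{Y^*Y = I_k,\ U^*U = I_k} \|Y^*NY - U^*BU\|_F^2,$$

with $Y \in \mathbb{C}^{n\times k}$ and $U \in \mathbb{C}^{p\times k}$. This display is a reconstruction from the snippet quoting the objective below; the paper itself treats several variants of the problem.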
Citations
Journal Article DOI
TL;DR: The convergence properties of an algorithm for computing a low-rank approximation of the similarity matrix S introduced by Blondel et al. in [1] are analyzed, and it is proved that accumulation points are stationary points of Φ(S).

20 citations


Cites background from "Comparing Two Matrices by Means of ..."

  • ...The Feasible Set of Problem 1 and its Stationary Points Problem 1 is defined on a feasible set that has a manifold structure [8]....

    [...]

Dissertation
11 Jun 2012
TL;DR: This thesis examines several algorithms for solving structured matrix nearness problems and proposes an augmented Lagrangian-based algorithm that uses tools from Riemannian geometry to optimize an arbitrary smooth function over $\mathcal{C}$.
Abstract: In many areas of science one often has a given matrix, representing for example a measured data set, and is required to find a matrix that is closest in a suitable norm to it and additionally possesses a structure inherited from the model used or coming from the application. We call these problems structured matrix nearness problems. We look at three different groups of these problems that come from real applications, analyze the properties of the corresponding matrix structure, and propose algorithms to solve them efficiently.

The first part of this thesis concerns the nearness problem of finding the nearest $k$-factor correlation matrix $C(X) = \diag(I_n - XX^T) + XX^T$ to a given symmetric matrix, subject to natural nonlinear constraints on the elements of the $n\times k$ matrix $X$, where distance is measured in the Frobenius norm. Such problems arise, for example, when one is investigating factor models of collateralized debt obligations (CDOs) or multivariate time series. We examine several algorithms for solving the nearness problem that differ in whether or not they can take account of the nonlinear constraints and in their convergence properties. Our numerical experiments show that the performance of the methods depends strongly on the problem, but that, among our tested methods, the spectral projected gradient method is the clear winner.

In the second part we look at two two-sided optimization problems where the matrix of unknowns $Y\in\R^{n\times p}$ lies in the Stiefel manifold. These two problems come from an application in atomic chemistry where one is looking for atomic orbitals with prescribed occupation numbers. We analyze these two problems, propose an analytic optimal solution of the first, and show that an optimal solution of the second problem can be found by solving a convex quadratic programming problem with box constraints and $p$ unknowns. We prove that the latter problem can be solved by the active-set method in at most $2p$ iterations. Subsequently, we analyze the set of optimal solutions $\mathcal{C}=\{Y\in\R^{n\times p}:Y^TY=I_p,\ Y^TNY=D\}$ of the first problem for $N$ symmetric and $D$ diagonal and find that a slight modification of it is a Riemannian manifold. We derive the geometric objects required to make an optimization over this manifold possible. We propose an augmented Lagrangian-based algorithm that uses these geometric tools and allows us to optimize an arbitrary smooth function over $\mathcal{C}$. This algorithm can be used to select a particular solution out of the set $\mathcal{C}$ by posing a new optimization problem. We compare it numerically with a similar algorithm that, however, does not apply these geometric tools, and find that our algorithm yields better performance.

The third part is devoted to low-rank nearness problems in the $Q$-norm, where the matrix of interest is additionally of linear structure, meaning it lies in the set spanned by $s$ predefined matrices $U_1,\ldots, U_s\in\{0,1\}^{n\times p}$. These problems are often associated with model reduction, for example in speech encoding, filter design, or latent semantic indexing. We investigate three approaches that support any linear structure and examine further the geometric reformulation by Schuermans et al. (2003). We improve their algorithm in terms of reliability by applying the augmented Lagrangian method and show in our numerical tests that the resulting algorithm yields better performance than other existing methods.
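As a small illustration of the first part, the following Python sketch (ours; plain NumPy, with hypothetical function names) evaluates the $k$-factor structure $C(X)$, the Frobenius-norm objective, and the projection enforcing the natural constraint that every row of $X$ has norm at most one, that is, the ingredients a projected gradient method iterates on:

    import numpy as np

    def C(X):
        # k-factor correlation structure diag(I_n - XX^T) + XX^T,
        # i.e. XX^T with its diagonal reset to 1.
        G = X @ X.T
        np.fill_diagonal(G, 1.0)
        return G

    def objective(X, A):
        # Squared Frobenius distance from C(X) to the target matrix A.
        return np.linalg.norm(C(X) - A, "fro") ** 2

    def gradient(X, A):
        # The diagonal of C(X) is constant, so only the off-diagonal
        # residual contributes to the gradient with respect to X.
        R = C(X) - A
        np.fill_diagonal(R, 0.0)
        return 4.0 * R @ X

    def project_rows(X):
        # Feasibility: row norms of X at most 1, so that diag(XX^T) <= 1.
        norms = np.linalg.norm(X, axis=1, keepdims=True)
        return X / np.maximum(norms, 1.0)

    def projected_gradient_step(X, A, step=1e-2):
        # One plain projected-gradient step; the spectral variant favored
        # in the thesis additionally chooses the step adaptively.
        return project_rows(X - step * gradient(X, A))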

16 citations


Cites background from "Comparing Two Matrices by Means of ..."

  • ...We briefly repeat the analysis in [28] and extend it by addressing the question of how to find the points at which the minimum value is attained....

    [...]

  • ...The minimum value of the first problem can be derived [28] by exploiting the structure of the stationary points....

    [...]

  • ...1) in [28], where the objective function is $\|Y^*NY - U^*BU\|_F^2$ for given Hermitian matrices $N$ and $B$ of possibly different dimensions....

    [...]

  • ...The analysis of this section is mainly from [28] where the stationary points of (4....

    [...]

Proceedings Article
01 Jan 2015
TL;DR: An algorithm is presented that, with the help of newly defined geometric objects, solves optimization problems on a matrix manifold $M \subseteq \mathbb{R}^{m \times n}$ with an additional rank inequality constraint.
Abstract: This paper presents an algorithm that solves optimization problems on a matrix manifold $M \subseteq \mathbb{R}^{m \times n}$ with an additional rank inequality constraint. New geometric objects are defined to facilitate efficiently finding a suitable rank. The convergence properties of the algorithm are given and a weighted low-rank approximation problem is used to illustrate the efficiency and effectiveness of the algorithm.
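The rank-inequality ingredient can be illustrated with a standard truncated SVD. This sketch is ours and only shows the projection onto matrices of rank at most $k$; the paper's algorithm additionally adapts the rank and works on a manifold $M$:

    import numpy as np

    def truncate_rank(X, k):
        # Best approximation of X with rank at most k in the Frobenius
        # norm (Eckart-Young), via a truncated SVD.
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        return (U[:, :k] * s[:k]) @ Vt[:k, :]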

13 citations

27 Aug 2012
TL;DR: An augmented Lagrangian-based algorithm is proposed that minimizes an arbitrary smooth function over $\mathcal{C}$ and guarantees global convergence to a stationary point; it is compared numerically with a similar algorithm that, to the authors' knowledge, is not guaranteed to converge.
Abstract: We investigate two two-sided optimization problems that have their application in atomic chemistry and whose matrix of unknowns $Y\in\R^{n\times p}$ ($n\ge p$) lies in the Stiefel manifold. We propose an analytic optimal solution of the first problem, and show that an optimal solution of the second problem can be found by solving a convex quadratic programming problem with box constraints and $p$ unknowns. We prove that the latter problem can be solved by the active-set method in at most $2p$ iterations. Subsequently, we analyze the set of optimal solutions of both problems, which is of the form $\mathcal{C}=\{Y\in\R^{n\times p}:Y^TY=I_p,\ Y^T\Lambda Y=\Delta\}$ for $\Lambda$ and $\Delta$ diagonal, and we address the problem of how an arbitrary smooth function over $\mathcal{C}$ can be minimized. We find that a slight modification of $\mathcal{C}$ is a Riemannian manifold for which the geometric objects required to make an optimization over this manifold possible can be derived. Using these geometric tools, we then propose an augmented Lagrangian-based algorithm that minimizes an arbitrary smooth function over $\mathcal{C}$ and guarantees global convergence to a stationary point. The latter is shown by investigating when the LICQ (Linear Independence Constraint Qualification) is satisfied. The algorithm can be used to select a particular solution out of the set $\mathcal{C}$ by posing a new optimization problem. Finally, we compare this algorithm numerically with a similar algorithm that, however, does not apply these geometric tools and that is, to our knowledge, not guaranteed to converge. Our results show that our algorithm yields a significantly better performance.
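A minimal sketch of the constraint set and the penalty strategy described in this abstract (plain NumPy; the names and the exact penalty form are our assumptions, not the authors' formulation):

    import numpy as np

    def constraint_violation(Y, Lam, Delta):
        # h(Y) = Y^T Lam Y - Delta, the second constraint defining
        # C = {Y : Y^T Y = I_p, Y^T Lam Y = Delta}.
        return Y.T @ Lam @ Y - Delta

    def augmented_lagrangian(f, Y, Lam, Delta, mult, rho):
        # Keep Y^T Y = I_p by optimizing over the Stiefel manifold and
        # move h(Y) = 0 into the objective with multipliers `mult` and a
        # quadratic penalty weighted by `rho`.
        h = constraint_violation(Y, Lam, Delta)
        return f(Y) + np.sum(mult * h) + 0.5 * rho * np.linalg.norm(h) ** 2

Each outer iteration would minimize this function over the Stiefel manifold for fixed multipliers, then update `mult` and possibly increase `rho`, which is the standard augmented Lagrangian loop.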

2 citations


Cites background from "Comparing Two Matrices by Means of ..."

  • ...As the minimum value can be derived [9] by exploiting the structure of the stationary points we only need to find a point on the Stiefel manifold that attains this value....

    [...]

  • ...1) [9] and find that the eigenvalues $\delta_1^*$, ....

    [...]

  • ...Absil for directing us to [9] and pointing out the connection of B(n, p) to the Grassmannian manifold....

    [...]

Proceedings Article
01 Jan 2009
TL;DR: An algorithm to compute a low-rank approximation of the similarity matrix S introduced by Blondel et al. in [1] is analyzed; accumulation points are proved to be stationary points of Φ(S), and the method is compared in speed and accuracy to the full-rank algorithm of [1].
Abstract: In this paper, we analyze an algorithm to compute a low-rank approximation of the similarity matrix S introduced by Blondel et al. in [1]. This problem can be reformulated as an optimization problem of a continuous function $\Phi(S) = \mathrm{tr}(S^T M^2(S))$ where S is constrained to have unit Frobenius norm, and $M^2$ is a non-negative linear map. We restrict the feasible set to the set of matrices of unit Frobenius norm with either k nonzero identical singular values or at most k nonzero (not necessarily identical) singular values. We first characterize the stationary points of the associated optimization problems and further consider iterative algorithms to find one of them. We analyze the convergence properties of our algorithm and prove that accumulation points are stationary points of $\Phi(S)$. We finally compare our method in terms of speed and accuracy to the full rank algorithm proposed in [1].
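For context, the similarity matrix of Blondel et al. [1] is the limit (along even iterates) of a normalized fixed-point iteration. The following NumPy sketch shows the full-rank iteration that this paper approximates with low-rank iterates; the form of the map $M$ is recalled from [1] and should be treated as an assumption here:

    import numpy as np

    def similarity_matrix(A, B, n_iter=100):
        # A, B: adjacency matrices of the two graphs being compared.
        # S is updated by the linear map M(S) = B S A^T + B^T S A and
        # renormalized to unit Frobenius norm. The even iterates
        # converge, so n_iter should be even.
        S = np.ones((B.shape[0], A.shape[0]))
        for _ in range(n_iter):
            M = B @ S @ A.T + B.T @ S @ A
            S = M / np.linalg.norm(M, "fro")
        return S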

1 citation

References
Book
Roger A. Horn
12 Jul 2010
TL;DR: Topics covered include the field of values, stable matrices and inertia, singular value inequalities, matrix equations and Kronecker products, Hadamard products, and matrices and functions.
Abstract: 1. The field of values 2. Stable matrices and inertia 3. Singular value inequalities 4. Matrix equations and Kronecker products 5. Hadamard products 6. Matrices and functions.

7,013 citations

Journal Article DOI
TL;DR: Parlett presents the mathematical knowledge needed to understand the art of computing eigenvalues of real symmetric matrices, either all of them or only a few.
Abstract: According to Parlett, 'Vibrations are everywhere, and so too are the eigenvalues associated with them. As mathematical models invade more and more disciplines, we can anticipate a demand for eigenvalue calculations in an ever richer variety of contexts.' Anyone who performs these calculations will welcome the reprinting of Parlett's book (originally published in 1980). In this unabridged, amended version, Parlett covers aspects of the problem that are not easily found elsewhere. The chapter titles convey the scope of the material succinctly. The aim of the book is to present mathematical knowledge that is needed in order to understand the art of computing eigenvalues of real symmetric matrices, either all of them or only a few. The author explains why the selected information really matters and he is not shy about making judgments. The commentary is lively but the proofs are terse.

3,115 citations

Book
23 Sep 2002
TL;DR: A graduate text on smooth manifolds covering smooth maps, vector fields, tensors, differential forms, integration on manifolds, and de Rham cohomology, with appendix reviews of topology, linear algebra, calculus, and differential equations.
Abstract: Preface.- 1 Smooth Manifolds.- 2 Smooth Maps.- 3 Tangent Vectors.- 4 Submersions, Immersions, and Embeddings.- 5 Submanifolds.- 6 Sard's Theorem.- 7 Lie Groups.- 8 Vector Fields.- 9 Integral Curves and Flows.- 10 Vector Bundles.- 11 The Cotangent Bundle.- 12 Tensors.- 13 Riemannian Metrics.- 14 Differential Forms.- 15 Orientations.- 16 Integration on Manifolds.- 17 De Rham Cohomology.- 18 The de Rham Theorem.- 19 Distributions and Foliations.- 20 The Exponential Map.- 21 Quotient Manifolds.- 22 Symplectic Manifolds.- Appendix A: Review of Topology.- Appendix B: Review of Linear Algebra.- Appendix C: Review of Calculus.- Appendix D: Review of Differential Equations.- References.- Notation Index.- Subject Index

3,051 citations


"Comparing Two Matrices by Means of ..." refers background in this paper

  • ...We refer the reader to [4,5] for details....

    [...]

Book
23 Dec 2007
TL;DR: Optimization Algorithms on Matrix Manifolds offers techniques with broad applications in linear algebra, signal processing, data mining, computer vision, and statistical analysis and will be of interest to applied mathematicians, engineers, and computer scientists.
Abstract: Many problems in the sciences and engineering can be rephrased as optimization problems on matrix search spaces endowed with a so-called manifold structure. This book shows how to exploit the special structure of such problems to develop efficient numerical algorithms. It places careful emphasis on both the numerical formulation of the algorithm and its differential geometric abstraction--illustrating how good algorithms draw equally from the insights of differential geometry, optimization, and numerical analysis. Two more theoretical chapters provide readers with the background in differential geometry necessary to algorithmic development. In the other chapters, several well-known optimization methods such as steepest descent and conjugate gradients are generalized to abstract manifolds. The book provides a generic development of each of these methods, building upon the material of the geometric chapters. It then guides readers through the calculations that turn these geometrically formulated methods into concrete numerical algorithms. The state-of-the-art algorithms given as examples are competitive with the best existing algorithms for a selection of eigenspace problems in numerical linear algebra. Optimization Algorithms on Matrix Manifolds offers techniques with broad applications in linear algebra, signal processing, data mining, computer vision, and statistical analysis. It can serve as a graduate-level textbook and will be of interest to applied mathematicians, engineers, and computer scientists.
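As a taste of the book's program, here is a minimal steepest-descent step on the Stiefel manifold (our sketch: tangent-space projection of the Euclidean gradient followed by a QR-based retraction; the book develops line searches, convergence theory, and alternative retractions):

    import numpy as np

    def stiefel_gradient_step(Y, egrad, step):
        # Project the Euclidean gradient onto the tangent space at Y:
        # P_Y(G) = G - Y sym(Y^T G).
        YtG = Y.T @ egrad
        rgrad = egrad - Y @ (YtG + YtG.T) / 2.0
        # Retract the update back onto the manifold with a QR
        # decomposition, fixing column signs for uniqueness.
        Q, R = np.linalg.qr(Y - step * rgrad)
        return Q * np.sign(np.sign(np.diag(R)) + 0.5)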

2,586 citations

Trending Questions (1)
What is the best way to compare two matrices?

The paper discusses optimization problems on a manifold to compare two matrices, showing how these problems relate to specific problems from the literature. It does not explicitly state the best way to compare two matrices.