scispace - formally typeset
Topic

Divide-and-conquer eigenvalue algorithm

About: Divide-and-conquer eigenvalue algorithm is a research topic. Over the lifetime of the topic, 2877 publications have appeared, receiving 81838 citations.
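For context, the core divide-and-conquer idea (Cuppen's method for symmetric tridiagonal matrices, the approach behind production codes such as LAPACK's dsyevd) can be sketched in a few dozen lines. This is an illustrative toy, not a production algorithm: it tears one off-diagonal coupling out as a rank-one update, recurses on the two halves, and glues the results back together by solving the secular equation. The deflation and stability safeguards that real implementations need (tiny z_i, clustered poles) are deliberately omitted.

```python
import numpy as np
from scipy.optimize import brentq

def secular_roots(d, z, rho):
    """Roots of f(lam) = 1 + rho * sum(z_i**2 / (d_i - lam)) for rho > 0 and
    d sorted ascending. f increases monotonically between its poles, so one
    root lies in each gap (d_i, d_{i+1}) and one to the right of the last pole.
    Toy version: no deflation handling for tiny z_i or (near-)equal d_i."""
    f = lambda lam: 1.0 + rho * np.sum(z ** 2 / (d - lam))
    n = len(d)
    span = rho * np.sum(z ** 2)              # upper bound on the last root
    eps = 1e-10 * (d[-1] - d[0] + span)      # keep brackets off the poles
    roots = np.empty(n)
    for i in range(n - 1):
        roots[i] = brentq(f, d[i] + eps, d[i + 1] - eps)
    roots[-1] = brentq(f, d[-1] + eps, d[-1] + span + eps)
    return roots

def dc_eigh_tridiag(diag, off):
    """Divide-and-conquer eigendecomposition of the symmetric tridiagonal
    matrix with diagonal `diag` and off-diagonal `off` (all entries nonzero).
    Returns (lam, Q) with T @ Q == Q @ np.diag(lam), lam ascending."""
    n = len(diag)
    if n == 1:
        return diag.copy(), np.ones((1, 1))
    # Divide: tear the middle coupling beta out as a rank-one update,
    # T = blockdiag(T1', T2') + |beta| * u @ u.T
    m = n // 2
    beta = off[m - 1]
    d1 = diag[:m].copy(); d1[-1] -= abs(beta)
    d2 = diag[m:].copy(); d2[0] -= abs(beta)
    lam1, Q1 = dc_eigh_tridiag(d1, off[:m - 1])
    lam2, Q2 = dc_eigh_tridiag(d2, off[m:])
    # Conquer: T = Qb (D + |beta| z z^T) Qb^T with Qb = blockdiag(Q1, Q2)
    d = np.concatenate([lam1, lam2])
    z = np.concatenate([Q1[-1, :], np.sign(beta) * Q2[0, :]])
    order = np.argsort(d)
    ds, zs = d[order], z[order]
    lam = secular_roots(ds, zs, abs(beta))
    # Eigenvectors of D + |beta| z z^T are proportional to z_i / (d_i - lam_k)
    W = zs[:, None] / (ds[:, None] - lam[None, :])
    W /= np.linalg.norm(W, axis=0)
    Wu = np.empty_like(W)
    Wu[order, :] = W                         # undo the sorting permutation
    return lam, np.vstack([Q1 @ Wu[:m, :], Q2 @ Wu[m:, :]])
```

The O(n) secular solves at each merge, rather than a full dense step, are what give the method its speed in practice; most of the cost sits in the matrix products that assemble the eigenvectors.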


Papers
Journal ArticleDOI
TL;DR: The final part tries to introduce the reader to the fascinating setting of differential forms and homological techniques with the description of the Hodge–Laplace eigenvalue problem and its mixed equivalent formulations.
Abstract: We discuss the finite element approximation of eigenvalue problems associated with compact operators. While the main emphasis is on symmetric problems, some comments are present for non-self-adjoint operators as well. The topics covered include standard Galerkin approximations, non-conforming approximations, and approximation of eigenvalue problems in mixed form. Some applications of the theory are presented and, in particular, the approximation of the Maxwell eigenvalue problem is discussed in detail. The final part tries to introduce the reader to the fascinating setting of differential forms and homological techniques with the description of the Hodge–Laplace eigenvalue problem and its mixed equivalent formulations. Several examples and numerical computations complete the paper, ranging from very basic exercises to more significant applications of the developed theory.

454 citations
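As a minimal illustration of the standard Galerkin approximation the paper above surveys, here is a hypothetical 1-D setup: P1 (piecewise-linear) finite elements for -u'' = lam*u on (0, 1) with homogeneous Dirichlet conditions, which yields the generalized symmetric eigenvalue problem K x = lam M x. The mesh size and element count are illustrative choices, not taken from the paper.

```python
import numpy as np
from scipy.linalg import eigh

# P1 finite elements for -u'' = lam * u on (0, 1) with u(0) = u(1) = 0,
# giving the generalized symmetric problem K x = lam * M x.
n = 100                                  # number of elements (illustrative)
h = 1.0 / n                              # n - 1 interior nodes
K = (np.diag(np.full(n - 1, 2.0)) - np.diag(np.ones(n - 2), 1)
     - np.diag(np.ones(n - 2), -1)) / h          # stiffness matrix
M = (np.diag(np.full(n - 1, 4.0)) + np.diag(np.ones(n - 2), 1)
     + np.diag(np.ones(n - 2), -1)) * h / 6.0    # consistent mass matrix
lam = eigh(K, M, eigvals_only=True)      # generalized symmetric eigensolve
# Exact eigenvalues are (k * pi)**2; the discrete ones converge at rate O(h^2).
print(lam[:3])
```

The smallest discrete eigenvalues approach pi^2, 4 pi^2, 9 pi^2 from above as the mesh is refined, the classic behavior for conforming approximations of a symmetric compact-resolvent problem.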

Book
01 Mar 1990
TL;DR: Chapter headings run from a starting point and formal problems in linear algebra, through the singular-value decomposition and its use to solve least-squares problems, to handling larger problems and comments on the formation of the cross-product matrix AᵀA.
Abstract (table of contents): A starting point; Formal problems in linear algebra; The singular-value decomposition and its use to solve least-squares problems; Handling larger problems; Some comments on the formation of the cross-product matrix AᵀA; Linear equations: a direct approach; The Choleski decomposition; The symmetric positive definite matrix again; The algebraic eigenvalue problem; Real symmetric matrices; The generalized symmetric matrix eigenvalue problem; Optimization and nonlinear equations; One-dimensional problems; Direct search methods; Descent to a minimum I: variable metric algorithms; Descent to a minimum II: conjugate gradients; Minimizing a nonlinear sum of squares; Leftovers; The conjugate gradients method applied to problems in linear algebra; Appendices; Bibliography; Index.

451 citations

Journal ArticleDOI
TL;DR: A new numerical algorithm for solving the symmetric eigenvalue problem is presented, which takes its inspiration from contour integration and the density-matrix representation in quantum mechanics.
Abstract: A fast and stable numerical algorithm for solving the symmetric eigenvalue problem is presented. The technique deviates fundamentally from the traditional Krylov subspace iteration based techniques (Arnoldi and Lanczos algorithms) or other Davidson-Jacobi techniques and takes its inspiration from the contour integration and density-matrix representation in quantum mechanics. It will be shown that this algorithm---named FEAST---exhibits high efficiency, robustness, accuracy, and scalability on parallel architectures. Examples from electronic structure calculations of carbon nanotubes are presented, and numerical performances and capabilities are discussed.

379 citations
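The contour-integration idea this abstract describes can be sketched as follows. This is not the FEAST library itself but an illustrative dense-NumPy toy under simplifying assumptions: approximate the spectral projector onto the eigenvalues of a symmetric matrix inside a circle by quadrature of the resolvent along that circle, then extract the eigenpairs with a Rayleigh-Ritz step. All names and parameters are hypothetical.

```python
import numpy as np

def contour_filter_eigs(A, center, radius, m, n_quad=16, seed=0):
    """Contour-integration eigensolver sketch in the spirit of FEAST:
    approximate the spectral projector onto the eigenvalues of symmetric A
    inside the circle |z - center| < radius by a trapezoid-rule quadrature
    of the resolvent, then do Rayleigh-Ritz on the filtered subspace.
    `m` must be at least the number of wanted eigenvalues."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    Y = rng.standard_normal((n, m))             # random probe subspace
    Q = np.zeros((n, m))
    for j in range(n_quad):                     # trapezoid rule on the circle
        theta = 2.0 * np.pi * (j + 0.5) / n_quad
        w = radius * np.exp(1j * theta)         # z - center at this node
        # each quadrature node costs one shifted (complex) linear solve
        Q += np.real((w / n_quad) *
                     np.linalg.solve((center + w) * np.eye(n) - A, Y))
    Q, _ = np.linalg.qr(Q)                      # orthonormal filtered basis
    evals, evecs = np.linalg.eigh(Q.T @ A @ Q)  # Rayleigh-Ritz reduction
    keep = np.abs(evals - center) < radius      # drop spurious Ritz values
    return evals[keep], Q @ evecs[:, keep]
```

Because each quadrature node is an independent linear solve, the loop parallelizes naturally, which is the scalability property the abstract emphasizes; the real FEAST adds subspace iteration and convergence control on top of this filter.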

Journal Article
TL;DR: A novel metric learning approach called DML-eig is introduced, which is shown to be equivalent to a well-known eigenvalue optimization problem: minimizing the maximal eigenvalue of a symmetric matrix.
Abstract: The main theme of this paper is to develop a novel eigenvalue optimization framework for learning a Mahalanobis metric. Within this context, we introduce a novel metric learning approach called DML-eig which is shown to be equivalent to a well-known eigenvalue optimization problem called minimizing the maximal eigenvalue of a symmetric matrix (Overton, 1988; Lewis and Overton, 1996). Moreover, we formulate LMNN (Weinberger et al., 2005), one of the state-of-the-art metric learning methods, as a similar eigenvalue optimization problem. This novel framework not only provides new insights into metric learning but also opens new avenues to the design of efficient metric learning algorithms. Indeed, first-order algorithms are developed for DML-eig and LMNN which only need the computation of the largest eigenvector of a matrix per iteration. Their convergence characteristics are rigorously established. Various experiments on benchmark data sets show the competitive performance of our new approaches. In addition, we report an encouraging result on a difficult and challenging face verification data set called Labeled Faces in the Wild (LFW).

348 citations
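The abstract notes that each iteration of these first-order methods needs only the largest eigenvector of a matrix. One minimal way to obtain it for a symmetric matrix is shifted power iteration; the sketch below is illustrative (function name and parameters are hypothetical), not the algorithm the paper itself uses for that step.

```python
import numpy as np

def largest_eigpair(A, n_iter=1000, tol=1e-12):
    """Shifted power iteration for the algebraically largest eigenpair of a
    symmetric matrix A. Shifting by ||A||_1 (which bounds the spectral
    radius) makes every eigenvalue of the shifted matrix positive, so its
    dominant eigenvalue is the algebraically largest eigenvalue of A and
    the iterates cannot oscillate in sign."""
    n = A.shape[0]
    As = A + np.linalg.norm(A, 1) * np.eye(n)   # |lam_i| <= ||A||_1
    v = np.ones(n) / np.sqrt(n)                 # fixed generic start vector
    for _ in range(n_iter):
        w = As @ v
        w /= np.linalg.norm(w)
        if np.linalg.norm(w - v) < tol:         # converged direction
            break
        v = w
    return v @ A @ v, v                         # Rayleigh quotient, vector
```

Each iteration costs one matrix-vector product, which is what makes "largest eigenvector per iteration" cheap enough to sit inside an outer optimization loop.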

Journal ArticleDOI
TL;DR: In this article, a method is proposed for finding the eigenvalues of a generalized eigenvalue problem that lie in a given domain of the complex plane; it projects the matrix pencil, via numerical integration, onto a subspace associated with the eigenvalues located in that domain.

344 citations


Network Information
Related Topics (5)
Partial differential equation: 70.8K papers, 1.6M citations, 84% related
Differential equation: 88K papers, 2M citations, 83% related
Numerical analysis: 52.2K papers, 1.2M citations, 81% related
Eigenvalues and eigenvectors: 51.7K papers, 1.1M citations, 80% related
Boundary value problem: 145.3K papers, 2.7M citations, 80% related
Performance
Metrics
No. of papers in the topic in previous years:
Year	Papers
2023	7
2022	29
2020	2
2018	10
2017	100
2016	97