
Showing papers on "Divide-and-conquer eigenvalue algorithm published in 2017"


Journal ArticleDOI
TL;DR: This article surveys nonlinear eigenvalue problems associated with matrix-valued functions which depend nonlinearly on a single scalar parameter, with a particular emphasis on their mathematical properties and available numerical solution techniques.
Abstract: Nonlinear eigenvalue problems arise in a variety of science and engineering applications and in the past ten years there have been numerous breakthroughs in the development of numerical methods. This article surveys nonlinear eigenvalue problems associated with matrix-valued functions which depend nonlinearly on a single scalar parameter, with a particular emphasis on their mathematical properties and available numerical solution techniques. Solvers based on Newton's method, contour integration, and sampling via rational interpolation are reviewed. Problems of selecting the appropriate parameters for each of the solver classes are discussed and illustrated with numerical examples. This survey also contains numerous MATLAB code snippets that can be used for interactive exploration of the discussed methods.

170 citations
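As a taste of the Newton-type solvers the survey reviews, here is a minimal Python/NumPy sketch (the problem data and function names are illustrative, not from the paper): Newton's method applied to $\det T(\lambda)$, using the trace identity $(\log\det T)' = \mathrm{tr}(T^{-1}T')$ so that each step needs only one linear solve.

```python
import numpy as np

def newton_trace(T, Tp, lam0, tol=1e-12, maxit=100):
    """Newton's method on f(lam) = det T(lam), using the identity
    f'(lam)/f(lam) = trace(T(lam)^{-1} T'(lam)); the determinant
    itself is never formed, only one linear solve per step."""
    lam = lam0
    for _ in range(maxit):
        step = 1.0 / np.trace(np.linalg.solve(T(lam), Tp(lam)))
        lam -= step
        if abs(step) < tol * max(1.0, abs(lam)):
            break
    return lam

# Illustrative quadratic NEP T(lam) = lam^2 M + lam C + K (made-up data)
M = np.eye(2)
C = np.array([[0.1, 0.0], [0.0, 0.2]])
K = np.array([[2.0, -1.0], [-1.0, 2.0]])
T = lambda lam: lam**2 * M + lam * C + K
Tp = lambda lam: 2 * lam * M + C

lam = newton_trace(T, Tp, lam0=1.0j)  # start from an imaginary guess
```

At a converged eigenvalue, $T(\lambda)$ is singular, which can be checked via its smallest singular value.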


Journal ArticleDOI
TL;DR: It is demonstrated that the resulting algorithm is a general-purpose TRS solver, effective for both dense and large sparse problems, including the so-called hard case, and capable of obtaining approximate solutions efficiently when high accuracy is unnecessary.
Abstract: The state-of-the-art algorithms for solving the trust-region subproblem (TRS) are based on an iterative process involving solutions of many linear systems, eigenvalue problems, subspace optimizations, or line search steps. A relatively underappreciated fact, due to Gander, Golub, and von Matt [Linear Algebra Appl., 114 (1989), pp. 815-839], is that TRSs can be solved by one generalized eigenvalue problem, with no outer iterations. In this paper we rediscover this fact and demonstrate its great practicality: the approach exhibits good performance in both accuracy and efficiency. Moreover, we generalize the approach in various directions, namely by allowing for an ellipsoidal constraint, dealing with the so-called hard case, and obtaining approximate solutions efficiently when high accuracy is unnecessary. We demonstrate that the resulting algorithm is a general-purpose TRS solver, effective for both dense and large sparse problems, including the so-called hard case. Our algorithm is easy to implement: its essence i...

77 citations
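The one-eigenproblem idea can be sketched as follows for the easy case with a spherical constraint (the function name and test data are ours; the paper's actual solver also handles ellipsoidal constraints and the hard case). The boundary KKT conditions $(A+\mu I)x = -g$, $\|x\| = \Delta$ imply $(A+\mu I)^2 y = (g g^T/\Delta^2)\, y$, a linear eigenproblem on a $2n \times 2n$ block matrix whose rightmost real eigenvalue is the optimal multiplier $\mu$.

```python
import numpy as np

def trs_via_eig(A, g, delta):
    """Easy-case sketch: the optimal TRS multiplier mu satisfies
    (A + mu I)^2 y = (g g^T / delta^2) y, encoded as the eigenproblem
    of the 2n x 2n block matrix below; mu is its rightmost real
    eigenvalue, and x = -(A + mu I)^{-1} g is the boundary solution."""
    n = g.size
    M = np.block([[-A, np.eye(n)],
                  [np.outer(g, g) / delta**2, -A]])
    ev = np.linalg.eigvals(M)
    mu = max(e.real for e in ev if abs(e.imag) < 1e-8 * (1 + abs(e)))
    x = np.linalg.solve(A + mu * np.eye(n), -g)
    return x, mu

# Tiny made-up instance: minimize 0.5*x'Ax + g'x subject to ||x|| <= 0.1
A = np.diag([2.0, 3.0])
g = np.array([1.0, 1.0])
x, mu = trs_via_eig(A, g, delta=0.1)
```

The returned pair satisfies the KKT conditions: $x$ lies on the trust-region boundary and $(A+\mu I)x = -g$ with $\mu > 0$.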


Journal ArticleDOI
TL;DR: In this article, the eigenvalue decomposition of the limited-memory quasi-Newton approximation of the Hessian matrix is used to find a nearly exact solution to the trust-region subproblem defined by the Euclidean norm, with insignificant computational overhead compared with the cost of computing the quasi-Newton direction in line-search limited-memory methods.
Abstract: Limited-memory quasi-Newton methods and trust-region methods represent two efficient approaches used for solving unconstrained optimization problems. A straightforward combination of them deteriorates the efficiency of the former approach, especially in the case of large-scale problems. For this reason, the limited-memory methods are usually combined with a line search. We show how to efficiently combine limited-memory and trust-region techniques. One of our approaches is based on the eigenvalue decomposition of the limited-memory quasi-Newton approximation of the Hessian matrix. The decomposition allows for finding a nearly-exact solution to the trust-region subproblem defined by the Euclidean norm with an insignificant computational overhead as compared with the cost of computing the quasi-Newton direction in line-search limited-memory methods. The other approach is based on two new eigenvalue-based norms. The advantage of the new norms is that the trust-region subproblem is separable and each of the smaller subproblems is easy to solve. We show that our eigenvalue-based limited-memory trust-region methods are globally convergent. Moreover, we propose improved versions of the existing limited-memory trust-region algorithms. The presented results of numerical experiments demonstrate the efficiency of our approach which is competitive with line-search versions of the L-BFGS method.

39 citations



Journal ArticleDOI
TL;DR: In contrast to existing work on analyzing multiple eigenvalues of delay systems, all theory is developed in a matrix framework, i.e., without reduction of the problem to the analysis of a scalar characteristic quasi-polynomial.
Abstract: We contribute to the perturbation theory of nonlinear eigenvalue problems in three ways. First, we extend the formula for the sensitivity of a simple eigenvalue with respect to a variation of a parameter to the case of multiple nonsemisimple eigenvalues, thereby providing an explicit expression for the leading coefficients of the Puiseux series of the emanating branches of eigenvalues. Second, for a broad class of delay eigenvalue problems, the connection between the finite-dimensional nonlinear eigenvalue problem and an associated infinite-dimensional linear eigenvalue problem is emphasized in the developed perturbation theory. Finally, in contrast to existing work on analyzing multiple eigenvalues of delay systems, we develop all theory in a matrix framework, i.e., without reduction of a problem to the analysis of a scalar characteristic quasi-polynomial.

31 citations
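The classical simple-eigenvalue sensitivity formula that this paper generalizes to multiple nonsemisimple eigenvalues can be illustrated in a few lines of Python/NumPy (the matrices below are arbitrary examples): for a simple eigenvalue of $A(p) = A + p\,\mathrm{d}A$, $\mathrm{d}\lambda/\mathrm{d}p = y^H \mathrm{d}A\, x / (y^H x)$ with $x$, $y$ the right and left eigenvectors.

```python
import numpy as np

def eig_sensitivities(A, dA):
    """First-order sensitivities d(lambda_k)/dp of the simple eigenvalues
    of A(p) = A + p*dA at p = 0, via the classical formula
    dlambda = y^H dA x / (y^H x) with right/left eigenvectors x, y."""
    lam, X = np.linalg.eig(A)
    Xi = np.linalg.inv(X)   # row k of X^{-1} is y_k^H, scaled so y_k^H x_k = 1
    dlam = np.array([Xi[k] @ dA @ X[:, k] for k in range(lam.size)])
    return lam, dlam

# Made-up matrices with well-separated (hence simple) eigenvalues
A0 = np.array([[1.0, 0.1, 0.0],
               [0.2, 2.0, 0.1],
               [0.0, 0.1, 4.0]])
dA = np.array([[0.0, 1.0, 0.0],
               [0.0, 0.0, 1.0],
               [1.0, 0.0, 0.0]])
lam, dlam = eig_sensitivities(A0, dA)
```

A finite-difference check on the perturbed spectrum confirms the formula to first order.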


Journal ArticleDOI
TL;DR: The tensor infinite Arnoldi method (TIAR) as discussed by the authors is applicable to a general class of nonlinear eigenvalue problems (NEPs) that are nonlinear in the eigenvalue.
Abstract: We present a new computational approach for a class of large-scale nonlinear eigenvalue problems (NEPs) that are nonlinear in the eigenvalue. The contribution of this paper is twofold. We derive a new iterative algorithm for NEPs, the tensor infinite Arnoldi method (TIAR), which is applicable to a general class of NEPs, and we show how to specialize the algorithm to a specific NEP: the waveguide eigenvalue problem. The waveguide eigenvalue problem arises from a finite-element discretization of a partial differential equation used in the study of waves propagating in a periodic medium. The algorithm is successfully applied to accurately solve benchmark problems as well as complicated waveguides. We study the complexity of the specialized algorithm with respect to the number of iterations $m$ and the size of the problem $n$, both from a theoretical perspective and in practice. For the waveguide eigenvalue problem, we establish that the computationally dominating part of the algorithm has complexity $\mathcal{O...

26 citations


Journal ArticleDOI
TL;DR: The proposed real structure-preserving Jacobi algorithm preserves the symmetry and JRS-symmetry of the real counterpart of a quaternion Hermitian matrix and is generally superior to the state-of-the-art algorithm.
Abstract: A new real structure-preserving Jacobi algorithm is proposed for solving the eigenvalue problem of a quaternion Hermitian matrix. By employing generalized JRS-symplectic Jacobi rotations, the new quaternion Jacobi algorithm preserves the symmetry and JRS-symmetry of the real counterpart of the quaternion Hermitian matrix. Moreover, the proposed algorithm involves only real operations, without dimension expansion, and is generally superior to the state-of-the-art algorithm. Numerical experiments are reported to indicate its efficiency and accuracy.

24 citations


Journal ArticleDOI
TL;DR: A universal limit theorem is proved for the halting time, or iteration count, of the power/inverse power methods and the QR eigenvalue algorithm; the universality theorem provides a complexity estimate for the algorithms which, in this random setting, holds with high probability.
Abstract: We prove a universal limit theorem for the halting time, or iteration count, of the power/inverse power methods and the QR eigenvalue algorithm. Specifically, we analyze the required number of iterations to compute extreme eigenvalues of random, positive definite sample covariance matrices to within a prescribed tolerance. The universality theorem provides a complexity estimate for the algorithms which, in this random setting, holds with high probability. The method of proof relies on recent results on the statistics of the eigenvalues and eigenvectors of random sample covariance matrices (i.e., delocalization, rigidity, and edge universality).

21 citations
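The experiment's setting can be sketched in Python/NumPy as follows, assuming the paper's sample covariance (Wishart) ensemble: power iteration on a random positive definite matrix, recording the halting time at a fixed tolerance. Dimensions and the stopping rule are illustrative choices, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)

def power_halting_time(A, tol=1e-8, maxit=10_000):
    """Power iteration on A; returns the leading-eigenvalue estimate and
    the iteration count ('halting time') at the given relative tolerance."""
    x = rng.standard_normal(A.shape[0])
    x /= np.linalg.norm(x)
    lam = x @ A @ x
    for k in range(1, maxit + 1):
        x = A @ x
        x /= np.linalg.norm(x)
        lam_new = x @ A @ x
        if abs(lam_new - lam) < tol * abs(lam_new):
            return lam_new, k
        lam = lam_new
    return lam, maxit

# Random positive definite sample covariance matrix, as in the paper's setting
n, m = 20, 400
X = rng.standard_normal((n, m))
S = X @ X.T / m
lam, iters = power_halting_time(S)
```

Averaging `iters` over many draws of `S` is what produces the halting-time statistics whose universal limit the paper establishes.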


Journal Article
TL;DR: In this paper, a new sufficient condition was proposed for the existence of symmetric doubly stochastic matrices with prescribed spectrum, i.e., for the solvability of the symmetric doubly stochastic inverse eigenvalue problem.
Abstract: The symmetric doubly stochastic inverse eigenvalue problem (hereafter SDIEP) is to determine the necessary and sufficient conditions for an $n$-tuple $\sigma=(1,\lambda_{2},\lambda_{3},\ldots,\lambda_{n})\in \mathbb{R}^{n}$ with $|\lambda_{i}|\leq 1$, $i=1,2,\ldots,n$, to be the spectrum of an $n\times n$ symmetric doubly stochastic matrix $A$. If there exists an $n\times n$ symmetric doubly stochastic matrix $A$ with $\sigma$ as its spectrum, then the list $\sigma$ is said to be s.d.s. realizable, or $A$ is said to s.d.s. realize $\sigma$. In this paper, we propose a new sufficient condition for the existence of symmetric doubly stochastic matrices with prescribed spectrum. Finally, some results about how to construct new s.d.s. realizable lists from known lists are presented.

19 citations


Journal ArticleDOI
TL;DR: Based on the work of Lin and Xie (2015), the authors build a multigrid method for transmission eigenvalue problems, which only needs to solve a series of primal and dual eigenvalue problems on a coarse mesh and the associated boundary value problems on finer and finer meshes.

19 citations


Journal ArticleDOI
TL;DR: In this article, a positive semi-definite eigenvalue problem for a second-order self-adjoint elliptic differential operator, defined on a bounded domain in the plane with smooth boundary and a Dirichlet boundary condition, is considered.
Abstract: A positive semi-definite eigenvalue problem for a second-order self-adjoint elliptic differential operator defined on a bounded domain in the plane with smooth boundary and Dirichlet boundary condition is considered. This problem has a nondecreasing sequence of positive eigenvalues of finite multiplicity with a limit point at infinity. To the sequence of eigenvalues there corresponds an orthonormal system of eigenfunctions. The original differential eigenvalue problem is approximated by the finite element method with numerical integration and Lagrange curved triangular finite elements of arbitrary order. Error estimates for approximate eigenvalues and eigenfunctions are established.


Journal ArticleDOI
TL;DR: Two efficient iterative algorithms are presented for solving the linear response eigenvalue problem arising from time-dependent density functional theory; they are more efficient than existing methods that try to approximate both components of the eigenvectors simultaneously.

Journal ArticleDOI
TL;DR: In this paper, a unified view of quadratically convergent algorithms for eigenvalue problems and inverse eigenvalue problems based on matrix equations is provided for the first time.

Posted Content
TL;DR: The exact value of the squared condition number for the polynomial eigenvalue problem is computed, when the input matrices have entries coming from the standard complex Gaussian distribution, showing that this problem is quite well conditioned.
Abstract: We compute the exact value of the squared condition number for the polynomial eigenvalue problem, when the input matrices have entries coming from the standard complex Gaussian distribution, showing that in general this problem is quite well conditioned.
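For context, the polynomial eigenvalue problem $P(\lambda)x = (\sum_i \lambda^i A_i)x = 0$ whose conditioning the paper studies can, in the quadratic case, be solved by companion linearization. A NumPy/SciPy sketch (the `quadeig` name is ours):

```python
import numpy as np
from scipy.linalg import eig

def quadeig(A0, A1, A2):
    """All 2n eigenvalues of the quadratic eigenvalue problem
    (A0 + lam*A1 + lam^2*A2) x = 0 via the first companion linearization:
    the pencil ([[0, I], [-A0, -A1]], [[I, 0], [0, A2]]) has the same
    eigenvalues as the quadratic matrix polynomial."""
    n = A0.shape[0]
    I, Z = np.eye(n), np.zeros((n, n))
    L0 = np.block([[Z, I], [-A0, -A1]])
    L1 = np.block([[I, Z], [Z, A2]])
    return eig(L0, L1, right=False)

# Sanity check: lam^2*I - diag(1, 4) has eigenvalues +-1, +-2
w = quadeig(-np.diag([1.0, 4.0]), np.zeros((2, 2)), np.eye(2))
```

With an eigenvector $[x; \lambda x]$, the second block row of the pencil reads $-A_0 x - \lambda A_1 x - \lambda^2 A_2 x = 0$, which is exactly $P(\lambda)x = 0$.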

Journal ArticleDOI
Seungwoo Lee1, Do Young Kwak1, Imbo Sim
TL;DR: This paper proves the stability and convergence of an immersed finite element method (IFEM) for eigenvalues using Crouzeix–Raviart $P_1$-nonconforming approximation and shows that spectral analysis for the classical eigenvalue problem can be easily applied to the model problem.

Journal ArticleDOI
24 Feb 2017 - Filomat
TL;DR: In this article, two kinds of symmetric tridiagonal plus paw form (hereafter TPPF) matrices are presented, and the corresponding inverse eigenvalue problems are to construct the matrices of their corresponding form, from the minimal and maximal eigenvalues of all their leading principal submatrices respectively.
Abstract: This paper presents two kinds of symmetric tridiagonal plus paw form (hereafter TPPF) matrices, which are the combination of tridiagonal matrices and bordered diagonal matrices. In particular, we exploit the interlacing properties of their eigenvalues. On this basis, the inverse eigenvalue problems for the two kinds of symmetric TPPF matrices are to construct the matrices of their corresponding form, from the minimal and the maximal eigenvalues of all their leading principal submatrices respectively. The necessary and sufficient conditions for the solvability of the problems are derived. Our results are constructive, and corresponding numerical algorithms and some examples are given.

Journal ArticleDOI
TL;DR: Hybrid algorithms are proposed for solving the partial generalized eigenvalue problem for symmetric positive definite sparse matrices of different structures on hybrid computers with graphics processors; efficiency coefficients of the algorithms are obtained, and the developed algorithms are validated on test and practical problems.
Abstract: Hybrid algorithms for solving the partial generalized eigenvalue problem for symmetric positive definite sparse matrices of different structures on hybrid computers with graphics processors are proposed, efficiency coefficients of the algorithms are obtained, and the developed algorithms are validated on test and practical problems.

Journal ArticleDOI
TL;DR: An even more succinct proof is given, using novel ideas based on Karush–Kuhn–Tucker theory and nonlinear programming, for the simplest preconditioned eigensolver with a fixed step size.
Abstract: Preconditioned iterative methods for numerical solution of large matrix eigenvalue problems are increasingly gaining importance in various application areas, ranging from material sciences to data mining. Some of them, e.g., those using multilevel preconditioning for elliptic differential operators or graph Laplacian eigenvalue problems, exhibit almost optimal complexity in practice; i.e., their computational costs to calculate a fixed number of eigenvalues and eigenvectors grow linearly with the matrix problem size. Theoretical justification of their optimality requires convergence rate bounds that do not deteriorate with the increase of the problem size. Such bounds were pioneered by E. D'yakonov over three decades ago, but to date only a handful have been derived, mostly for symmetric eigenvalue problems. Just a few of the known bounds are sharp. One of them is proved in doi:10.1016/S0024-3795(01)00461-X for the simplest preconditioned eigensolver with a fixed step size. The original proof has been greatly simplified and shortened in doi:10.1137/080727567 by using a gradient flow integration approach. In the present work, we give an even more succinct proof, using novel ideas based on Karush–Kuhn–Tucker theory and nonlinear programming.
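The "simplest preconditioned eigensolver with a fixed step size" analyzed in the abstract can be sketched as follows; the diagonal (Jacobi) preconditioner and the 1-D Laplacian test matrix are our illustrative choices, not the paper's.

```python
import numpy as np

def pinvit(A, P_inv, x, steps=500):
    """Simplest preconditioned eigensolver with a fixed step:
    x <- x - P^{-1} (A x - rho(x) x), then normalize, where rho is the
    Rayleigh quotient; the iterates converge to the smallest eigenpair."""
    x = x / np.linalg.norm(x)
    for _ in range(steps):
        rho = x @ A @ x
        x = x - P_inv * (A @ x - rho * x)   # diagonal preconditioner here
        x /= np.linalg.norm(x)
    return x @ A @ x, x

# 1-D Laplacian; its smallest eigenvalue is 4*sin(pi/(2*(n+1)))**2
n = 20
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
rho, x = pinvit(A, 1.0 / np.diag(A), np.linspace(1.0, 2.0, n))
```

The convergence rate bounds discussed in the abstract quantify how each such step contracts the error, uniformly in the problem size when the preconditioner is of multilevel quality.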

Journal ArticleDOI
TL;DR: This work presents a new restart technique for iterative projection methods for nonlinear eigenvalue problems admitting minmax characterization of their eigenvalues that makes use of the minmax induced local enumeration of the eigenvalues in the inner iteration.
Abstract: In this work we present a new restart technique for iterative projection methods for nonlinear eigenvalue problems admitting minmax characterization of their eigenvalues. Our technique makes use of the minmax induced local enumeration of the eigenvalues in the inner iteration. In contrast to global numbering, which requires including all the previously computed eigenvectors in the search subspace, the proposed local numbering only requires the presence of one eigenvector in the search subspace. This effectively eliminates the search subspace growth and therewith the super-linear increase of the computational costs if a large number of eigenvalues or eigenvalues in the interior of the spectrum are to be computed. The new restart technique is integrated into nonlinear iterative projection methods like the Nonlinear Arnoldi and Jacobi-Davidson methods. The efficiency of our new restart framework is demonstrated on a range of nonlinear eigenvalue problems: quadratic, rational and exponential, including an industrial real-life conservative gyroscopic eigenvalue problem modeling free vibrations of a rolling tire. We also present an extension of the method to problems without minmax property but with eigenvalues which have a dominant either real or imaginary part and test it on two quadratic eigenvalue problems.

Posted Content
TL;DR: In this article, the Kernel Polynomial Method and Gaussian quadrature by the Lanczos procedure are used to estimate the spectral density of a matrix pencil $(A, B)$ when both $A$ and $B$ are Hermitian and, in addition, $B$ is positive definite.
Abstract: The distribution of the eigenvalues of a Hermitian matrix (or of a Hermitian matrix pencil) reveals important features of the underlying problem, whether a Hamiltonian system in physics, or a social network in behavioral sciences. However, computing all the eigenvalues explicitly is prohibitively expensive for real-world applications. This paper presents two types of methods to efficiently estimate the spectral density of a matrix pencil $(A, B)$ when both $A$ and $B$ are Hermitian and, in addition, $B$ is positive definite. The first one is based on the Kernel Polynomial Method (KPM) and the second on Gaussian quadrature by the Lanczos procedure. By employing Chebyshev polynomial approximation techniques, we can avoid direct factorizations in both methods, making the resulting algorithms suitable for large matrices. Under some assumptions, we prove bounds that suggest that the Lanczos method converges twice as fast as the KPM method. Numerical examples further indicate that the Lanczos method can provide more accurate spectral densities when the eigenvalue distribution is highly non-uniform. As an application, we show how to use the computed spectral density to partition the spectrum into intervals that contain roughly the same number of eigenvalues. This procedure, which makes it possible to compute the spectrum by parts, is a key ingredient in the new breed of eigensolvers that exploit "spectrum slicing".
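The Lanczos/Gaussian-quadrature route can be sketched with stochastic Lanczos quadrature. This is a generic simplification of the paper's method (standard eigenproblem, i.e. $B = I$, and no polynomial acceleration), with made-up demo data.

```python
import numpy as np

def lanczos(Av, v0, m):
    """m-step Lanczos without reorthogonalization; returns the
    tridiagonal coefficients (diagonal alpha, off-diagonal beta)."""
    alpha, beta = np.zeros(m), np.zeros(m - 1)
    q_prev = np.zeros_like(v0)
    q = v0 / np.linalg.norm(v0)
    b = 0.0
    for j in range(m):
        w = Av(q) - b * q_prev
        alpha[j] = q @ w
        w -= alpha[j] * q
        if j < m - 1:
            b = np.linalg.norm(w)
            beta[j] = b
            q_prev, q = q, w / b
    return alpha, beta

def spectral_density(A, m=30, nvec=10, seed=0):
    """Stochastic Lanczos quadrature: the Gaussian-quadrature nodes and
    weights obtained from each random probe vector, averaged over probes,
    approximate the spectral density (point masses integrating to one)."""
    rng = np.random.default_rng(seed)
    nodes, weights = [], []
    for _ in range(nvec):
        v = rng.standard_normal(A.shape[0])
        a, b = lanczos(lambda x: A @ x, v, m)
        theta, U = np.linalg.eigh(np.diag(a) + np.diag(b, 1) + np.diag(b, -1))
        nodes.append(theta)
        weights.append(U[0, :] ** 2 / nvec)   # tau_k = (e_1^T u_k)^2
    return np.concatenate(nodes), np.concatenate(weights)

# Demo: eigenvalues spread uniformly on [0, 1]
A = np.diag(np.linspace(0.0, 1.0, 200))
nodes, weights = spectral_density(A)
```

Summing the weights below a cutoff gives exactly the kind of eigenvalue-counting estimate that "spectrum slicing" eigensolvers use to partition the spectrum into balanced intervals.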

Journal ArticleDOI
TL;DR: The results reveal that the eigenvalue asymptotic behavior can be characterized by solving a simple generalized eigenvalue problem, leading to numerically efficient stability conditions.
Abstract: In this technical note we present a stability analysis approach for polynomially-dependent one-parameter systems. The approach, which appears to be conceptually appealing and computationally efficient and is referred to as an eigenvalue perturbation approach , seeks to characterize the analytical and asymptotic properties of eigenvalues of matrix-valued functions or operators. The essential problem dwells on the asymptotic behavior of the critical eigenvalues on the imaginary axis, that is, on how the imaginary eigenvalues may vary with respect to the varying parameter. This behavior determines whether the imaginary eigenvalues cross from one half plane into another, and hence plays a critical role in determining the stability of such systems. Our results reveal that the eigenvalue asymptotic behavior can be characterized by solving a simple generalized eigenvalue problem, leading to numerically efficient stability conditions.

Journal ArticleDOI
TL;DR: This paper derives feasibility conditions in the form of constrained convex programming under which the regional eigenvalue assignment of positive observers is possible and proposes a new method for solving the regional eigenvalue problem of positive observers once the feasibility conditions are satisfied.

Journal ArticleDOI
TL;DR: In this paper, the reproducing kernel method (RKM) was used to approximate the eigenvalues of the Sturm-Liouville problem, and convergence of the approximate eigenfunctions produced by the RKM to the exact eigen functions was proven.
Abstract: This article is devoted to both theoretical and numerical studies of eigenvalues of the regular fractional $2\alpha$-order Sturm-Liouville problem where $\frac{1}{2}< \alpha \leq 1$. In this paper, we implement the reproducing kernel method (RKM) to approximate the eigenvalues. To find the eigenvalues, we force the approximate solution produced by the RKM to satisfy the boundary condition at $x=1$. The fractional derivative is described in the Caputo sense. Numerical results demonstrate the accuracy of the present algorithm. In addition, we prove the existence of the eigenfunctions of the proposed problem. Uniform convergence of the approximate eigenfunctions produced by the RKM to the exact eigenfunctions is proven.

Proceedings ArticleDOI
05 Mar 2017
TL;DR: A first order perturbation analysis for the JEVD algorithms based on the indirect LS criterion is performed and closed-form expressions for the eigenvector and eigenvalue matrices are presented.
Abstract: Joint EigenValue Decomposition (JEVD) algorithms are widely used in many application scenarios. These algorithms can be divided into different categories based on the cost function that needs to be minimized. Most of the frequently used algorithms in the literature use indirect least square (LS) criteria as a cost function. In this work, we perform a first order perturbation analysis for the JEVD algorithms based on the indirect LS criterion. We also present closed-form expressions for the eigenvector and eigenvalue matrices. The obtained expressions are asymptotic in the signal-to-noise ratio (SNR). Additionally, we use these results to obtain a statistical analysis, where we only assume that the noise has finite second order moments. The simulation results show that the proposed analytical expressions match well to the empirical results of JEVD algorithms which are based on the LS cost function.

Journal ArticleDOI
TL;DR: Motivated by the work of Barany and Solymosi, new inclusion sets for the eigenvalues of a real square matrix, called Geršgorin discs of the second type, are introduced.
Abstract: The research in this paper is motivated by a recent work of I. Barany and J. Solymosi [I. Barany and J. Solymosi. Gershgorin disks for multiple eigenvalues of non-negative matrices. Preprint arXiv no. 1609.07439, 2016.] about the location of eigenvalues of nonnegative matrices with geometric multiplicity higher than one. In particular, an answer to a question posed by Barany and Solymosi, about how the location of the eigenvalues can be improved in terms of their geometric multiplicities, is obtained. New inclusion sets for the eigenvalues of a real square matrix, called Geršgorin discs of the second type, are introduced. It is proved that under some conditions, an eigenvalue of a real matrix is in a Geršgorin disc of the second type. Some relationships between the geometric multiplicities of eigenvalues and these new inclusion sets are established. Some other related results, consequences, and examples are presented. The results presented here apply not only to nonnegative matrices, but extend to all real matrices, and some of them do not depend on the geometric multiplicity.
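For comparison, the classical (first-type) Geršgorin discs that these second-type discs refine are easy to compute; the example matrix below is ours.

```python
import numpy as np

def gershgorin_discs(A):
    """Classical Gershgorin discs: centers a_ii, radii equal to the
    off-diagonal absolute row sums. Every eigenvalue of A lies in the
    union of the discs."""
    centers = np.diag(A)
    radii = np.sum(np.abs(A), axis=1) - np.abs(centers)
    return centers, radii

A = np.array([[4.0, 1.0, 0.0],
              [0.5, -2.0, 0.5],
              [0.0, 0.2, 7.0]])
centers, radii = gershgorin_discs(A)
eigs = np.linalg.eigvals(A)
in_union = [np.any(np.abs(lam - centers) <= radii + 1e-12) for lam in eigs]
```

Here the three discs are disjoint, so each contains exactly one eigenvalue; the second-type discs of the paper sharpen this kind of localization using geometric multiplicities.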

Book ChapterDOI
TL;DR: In this article, the eigenvalue distribution of a large Jordan block subject to a small random Gaussian perturbation is studied; it is shown that as the dimension of the matrix gets large, with probability close to 1, most of the eigenvalues are close to a circle.
Abstract: We study the eigenvalue distribution of a large Jordan block subject to a small random Gaussian perturbation. A result by E. B. Davies and M. Hager shows that as the dimension of the matrix gets large, with probability close to 1, most of the eigenvalues are close to a circle.

Journal ArticleDOI
TL;DR: This work proposes a numerical approach based on quadratic support functions that overestimate the smallest eigenvalue function globally, establishes the local convergence of the algorithm under mild assumptions, and deduces a precise rate of convergence by viewing the algorithm as a fixed point iteration.
Abstract: Optimization of convex functions subject to eigenvalue constraints is intriguing because of peculiar analytical properties of eigenvalue functions and is of practical interest because of a wide range of applications in fields such as structural design and control theory. Here we focus on the optimization of a linear objective subject to a constraint on the smallest eigenvalue of an analytic and Hermitian matrix-valued function. We propose a numerical approach based on quadratic support functions that overestimate the smallest eigenvalue function globally. The quadratic support functions are derived by employing variational properties of the smallest eigenvalue function over a set of Hermitian matrices. We establish the local convergence of the algorithm under mild assumptions and deduce a precise rate of convergence result by viewing the algorithm as a fixed point iteration. The convergence analysis reveals that the algorithm is immune to the nonsmooth nature of the smallest eigenvalue. We illustrate the ...

Proceedings ArticleDOI
01 Jan 2017
TL;DR: Numerical simulations show that the proposed detection rule performs better than the traditional eigenvalue-based algorithm while also proving to be more robust.
Abstract: Spectrum sensing is an essential problem in cognitive radio. Blind detection techniques, such as the algorithm based on random matrix theory, which is shown to outperform energy detection especially in the case of noise uncertainty, sense the presence of a primary user's signal without prior knowledge of the signal characteristics, channel, and noise power. In this paper, we improve the maximum and minimum eigenvalue algorithm from two aspects. Using some recent random matrix theory results, a new threshold based on the distribution of the minimum eigenvalue is introduced first. Then the signals received by each cognitive user are decomposed into I and Q components to ensure maximum exploitation of the temporal, spatial, and phase correlation (between I and Q components) present in the received signals. Numerical simulations show that the proposed detection rule performs better than the traditional eigenvalue-based algorithm while also proving to be more robust.
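A bare-bones maximum-minimum eigenvalue (MME) detector can be sketched as below. The threshold is an arbitrary illustrative value, not the RMT-derived threshold proposed in the paper, and the I/Q decomposition step is omitted.

```python
import numpy as np

def mme_detect(Y, threshold):
    """Maximum-minimum eigenvalue detection: flag a primary user when
    lambda_max / lambda_min of the sample covariance of the snapshots
    Y (sensors x samples) exceeds the threshold."""
    R = Y @ Y.conj().T / Y.shape[1]
    w = np.linalg.eigvalsh(R)
    return w[-1] / w[0] > threshold

rng = np.random.default_rng(1)
m, N = 4, 2000
noise = rng.standard_normal((m, N))            # H0: noise only
s = rng.standard_normal(N)                     # common primary-user signal
received = np.outer(np.ones(m), s) + noise     # H1: signal + noise
```

Under H0 the sample covariance eigenvalues cluster (Marchenko-Pastur), so the ratio stays near 1; a rank-one signal inflates the top eigenvalue and triggers the detector.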

Journal ArticleDOI
TL;DR: For symmetric eigenvalue problems, a three-term recurrence polynomial filter is constructed by means of Chebyshev polynomials; this filtering strategy is applied to the Davidson method to yield the filtered-Davidson method.
Abstract: For symmetric eigenvalue problems, we construct a three-term recurrence polynomial filter by means of Chebyshev polynomials. The new filtering technique does not need to solve linear systems and only needs matrix-vector products. It is a memory-conserving filtering technique owing to its three-term recurrence relation. As an application, we apply this filtering strategy to the Davidson method and propose the filtered-Davidson method. Through choosing suitable shifts, this method can attain a cubic convergence rate locally. Theory and numerical experiments demonstrate the efficiency of the new filtering technique.
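A three-term Chebyshev filter of this kind can be sketched as follows. The interval endpoints, degree, and diagonal test matrix are illustrative choices, and the shift selection and Davidson integration described in the paper are omitted.

```python
import numpy as np

def cheb_filter(A, v, degree, a, b):
    """Three-term Chebyshev filter: applies p(A)v, where p is the degree-k
    Chebyshev polynomial mapped so that the unwanted interval [a, b] is
    damped (|p| <= 1 there) while eigencomponents outside [a, b] are
    amplified. Only matrix-vector products and two work vectors needed."""
    c, h = (a + b) / 2, (b - a) / 2          # center and half-width
    t0 = v
    t1 = (A @ v - c * v) / h
    for _ in range(2, degree + 1):
        t0, t1 = t1, 2 * (A @ t1 - c * t1) / h - t0
    return t1

# Demo: damp [0.1, 1] so the eigenvectors with eigenvalues below 0.1 dominate
A = np.diag(np.linspace(0.0, 1.0, 100))
rng = np.random.default_rng(2)
v = rng.standard_normal(100)
w = cheb_filter(A, v, degree=40, a=0.1, b=1.0)
w /= np.linalg.norm(w)
```

After filtering, the components on eigenvalues inside the damped interval are negligible, which is what accelerates a Davidson-type outer iteration.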