
Showing papers on "Divide-and-conquer eigenvalue algorithm published in 2015"


Journal ArticleDOI
TL;DR: The CORK family of rational Krylov methods exploits the structure of the linearization pencils by using a generalization of the compact Arnoldi decomposition, so that the extra memory and orthogonalization costs due to the linearizations of the original eigenvalue problem are negligible for large-scale problems.
Abstract: We propose a new uniform framework of compact rational Krylov (CORK) methods for solving large-scale nonlinear eigenvalue problems $A(\lambda) x = 0$. For many years, linearizations were used for solving polynomial and rational eigenvalue problems. On the other hand, for the general nonlinear case, $A(\lambda)$ can first be approximated by a (rational) matrix polynomial and then a convenient linearization is used. However, the major disadvantage of linearization-based methods is the growing memory and orthogonalization costs with the iteration count, i.e., in general they are proportional to the degree of the polynomial. Therefore, the CORK family of rational Krylov methods exploits the structure of the linearization pencils by using a generalization of the compact Arnoldi decomposition. In this way, the extra memory and orthogonalization costs due to the linearization of the original eigenvalue problem are negligible for large-scale problems. Furthermore, we prove that each CORK step breaks down into an ...

80 citations
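
The abstract's point that memory and orthogonalization costs grow with the polynomial degree can be seen from the size of a companion-type linearization: the pencil has dimension d·n, so every Krylov basis vector is of length d·n. The sketch below is a minimal NumPy/SciPy illustration of such a linearization (not the CORK algorithm itself); the degree, sizes, and coefficient matrices are made-up placeholders.

```python
# Minimal sketch (not the CORK method): companion-type linearization of a
# matrix polynomial P(lambda) = A0 + lambda*A1 + ... + lambda^d * Ad.
# The pencil has size d*n, so a Krylov basis of k vectors costs O(d*n*k)
# storage -- the growth that CORK's compact representation avoids.
import numpy as np
from scipy.linalg import eig

rng = np.random.default_rng(0)
n, d = 5, 3                                   # illustrative sizes
A = [rng.standard_normal((n, n)) for _ in range(d + 1)]   # A[0..d]

# First companion form: C(lambda) = lambda*X + Y with
#   X = diag(Ad, I, ..., I),
#   Y = [[A_{d-1} ... A_0], [-I 0 ... 0], ..., [0 ... -I 0]].
X = np.zeros((d * n, d * n)); Y = np.zeros((d * n, d * n))
X[:n, :n] = A[d]
for j in range(1, d):
    X[j * n:(j + 1) * n, j * n:(j + 1) * n] = np.eye(n)
    Y[j * n:(j + 1) * n, (j - 1) * n:j * n] = -np.eye(n)
for j in range(d):
    Y[:n, j * n:(j + 1) * n] = A[d - 1 - j]

# Generalized eigenproblem: -Y z = lambda X z  <=>  C(lambda) z = 0.
lam, Z = eig(-Y, X)

# Residual check ||P(lambda) x|| for one eigenvalue; x is the last block of z.
k = np.argmin(np.abs(lam))
x = Z[(d - 1) * n:, k]
P = sum(lam[k] ** i * A[i] for i in range(d + 1))
print("degree, pencil size:", d, d * n)
print("residual:", np.linalg.norm(P @ x) / np.linalg.norm(x))
```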


Journal ArticleDOI
Xuefeng Liu1
TL;DR: For eigenvalue problems of self-adjoint differential operators, a universal framework is proposed to give explicit lower and upper bounds for the eigenvalues by applying Crouzeix-Raviart finite elements and adopting interval arithmetic.

79 citations


Journal ArticleDOI
TL;DR: In this article, it is argued that in the presence of an eigenvalue cluster, the entire approximate eigenspace associated with the cluster should be considered as a whole, instead of each individual approximate eigenvector, and likewise for approximating clusters of eigenvalues.
Abstract: The Lanczos method is often used to solve a large scale symmetric matrix eigenvalue problem. It is well-known that the single-vector Lanczos method can only find one copy of any multiple eigenvalue (unless a certain deflating strategy is incorporated) and encounters slow convergence towards clustered eigenvalues. On the other hand, the block Lanczos method can compute all or some of the copies of a multiple eigenvalue and, with a suitable block size, also compute clustered eigenvalues much faster. The existing convergence theory due to Saad for the block Lanczos method, however, does not fully reflect this phenomenon since the theory was established to bound approximation errors in each individual approximate eigenpair. Here, it is argued that in the presence of an eigenvalue cluster, the entire approximate eigenspace associated with the cluster should be considered as a whole, instead of each individual approximate eigenvector, and likewise for approximating clusters of eigenvalues. In this paper, we obtain error bounds on approximating eigenspaces and eigenvalue clusters. Our bounds are much sharper than the existing ones and expose true rates of convergence of the block Lanczos method towards eigenvalue clusters. Furthermore, their sharpness is independent of the closeness of eigenvalues within a cluster. Numerical examples are presented to support our claims.

42 citations
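
As a rough illustration of why a block method captures a cluster as a whole, here is a minimal Rayleigh-Ritz sketch on a block Krylov subspace with explicit orthonormalization (equivalent, in exact arithmetic, to block Lanczos Ritz values, though not the three-term recurrence the paper analyzes). The matrix, cluster, block size, and subspace dimension are illustrative assumptions.

```python
# Minimal sketch: Rayleigh-Ritz on a block Krylov subspace span{X, AX, ..., A^(m-1)X}.
# A block size >= cluster size lets clustered/multiple eigenvalues be approximated together.
import numpy as np

rng = np.random.default_rng(1)
n, b, m = 400, 3, 15                      # size, block size, number of blocks (illustrative)

# Symmetric test matrix with a tight cluster {1.0, 1.0001, 1.0002} at the top.
d = np.concatenate(([1.0, 1.0001, 1.0002], rng.uniform(-1.0, 0.5, n - 3)))
Q0, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = Q0 @ np.diag(d) @ Q0.T

# Build and orthonormalize the block Krylov basis.
V = rng.standard_normal((n, b))
basis = [np.linalg.qr(V)[0]]
for _ in range(m - 1):
    W = A @ basis[-1]
    for Q in basis:                        # full reorthogonalization for stability
        W -= Q @ (Q.T @ W)
    basis.append(np.linalg.qr(W)[0])
Q = np.hstack(basis)

# Rayleigh-Ritz: project A and solve the small symmetric problem.
T = Q.T @ A @ Q
ritz = np.linalg.eigvalsh(T)
print("3 largest Ritz values      :", ritz[-3:])
print("3 largest exact eigenvalues:", np.sort(d)[-3:])
```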


Journal ArticleDOI
TL;DR: A special symmetric Lanczos algorithm and a kernel polynomial method (KPM) for approximating the absorption spectrum of molecules within the linear response time-dependent density functional theory (TDDFT) framework in the product form are presented.
Abstract: We present a special symmetric Lanczos algorithm and a kernel polynomial method (KPM) for approximating the absorption spectrum of molecules within the linear response time-dependent density functional theory (TDDFT) framework in the product form. In contrast to existing algorithms, the new algorithms are based on reformulating the original non-Hermitian eigenvalue problem as a product eigenvalue problem and the observation that the product eigenvalue problem is self-adjoint with respect to an appropriately chosen inner product. This allows a simple symmetric Lanczos algorithm to be used to compute the desired absorption spectrum. The use of a symmetric Lanczos algorithm only requires half of the memory compared with the nonsymmetric variant of the Lanczos algorithm. The symmetric Lanczos algorithm is also numerically more stable than the nonsymmetric version. The KPM algorithm is also presented as a low-memory alternative to the Lanczos approach, but the algorithm may require more matrix-vector multiplications in practice. We discuss the pros and cons of these methods in terms of their accuracy as well as their computational and storage cost. Applications to a set of small and medium-sized molecules are also presented.

40 citations
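
For orientation, the sketch below shows a generic kernel polynomial method estimate of a spectral density: Chebyshev moments from matrix-vector recurrences, stochastic trace estimation with random probe vectors, and Jackson damping. It is not the TDDFT product-form algorithm of the paper, and the Jackson coefficient formula, matrix, and parameters are assumptions made for illustration.

```python
# Minimal KPM sketch: approximate the spectral density of a symmetric matrix H
# (spectrum pre-scaled into (-1, 1)) from Chebyshev moments mu_k = tr(T_k(H))/n,
# estimated with random +/-1 probe vectors and smoothed with Jackson damping.
import numpy as np

rng = np.random.default_rng(2)
n, M, R = 300, 80, 10                      # matrix size, moments, probe vectors (illustrative)

H = rng.standard_normal((n, n)); H = (H + H.T) / np.sqrt(2 * n)   # random symmetric test matrix
H /= 1.05 * np.max(np.abs(np.linalg.eigvalsh(H)))                 # scale spectrum into (-1, 1)

mu = np.zeros(M)
for _ in range(R):
    v = rng.choice([-1.0, 1.0], size=n)    # Rademacher probe vector
    t_prev, t_curr = v, H @ v              # T_0(H) v, T_1(H) v
    mu[0] += v @ t_prev
    mu[1] += v @ t_curr
    for k in range(2, M):
        t_prev, t_curr = t_curr, 2 * H @ t_curr - t_prev   # Chebyshev recurrence
        mu[k] += v @ t_curr
mu /= R * n

# Jackson damping coefficients (assumed standard form) to suppress Gibbs oscillations.
k = np.arange(M)
g = ((M - k + 1) * np.cos(np.pi * k / (M + 1))
     + np.sin(np.pi * k / (M + 1)) / np.tan(np.pi / (M + 1))) / (M + 1)

x = np.linspace(-0.99, 0.99, 400)
Tk = np.cos(np.outer(np.arange(M), np.arccos(x)))          # T_k(x) = cos(k arccos x)
rho = g[0] * mu[0] + 2 * (g[1:, None] * mu[1:, None] * Tk[1:]).sum(axis=0)
rho /= np.pi * np.sqrt(1 - x ** 2)
print("estimated density at x = 0:", rho[np.argmin(np.abs(x))])
```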


Journal ArticleDOI
TL;DR: The eigenvalue distribution of the sum of two Hermitian matrices, when one of them is conjugated by a Haar distributed unitary matrix, is asymptotically given by the free convolution of their spectral distributions as discussed by the authors.
Abstract: The eigenvalue distribution of the sum of two large Hermitian matrices, when one of them is conjugated by a Haar distributed unitary matrix, is asymptotically given by the free convolution of their spectral distributions. We prove that this convergence also holds locally in the bulk of the spectrum, down to the optimal scales larger than the eigenvalue spacing. The corresponding eigenvectors are fully delocalized. Similar results hold for the sum of two real symmetric matrices, when one is conjugated by a Haar orthogonal matrix.

36 citations
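
A quick numerical illustration of the global statement (it does not probe the local, optimal-scale result proved in the paper): sample A + U B U^T with Haar orthogonal U and observe that the empirical spectrum is essentially the same across independent samples. The matrices A and B and the histogram binning are illustrative choices.

```python
# Numerical illustration: the spectrum of A + U B U^T, with U Haar orthogonal,
# concentrates around a deterministic limit (the free convolution of the spectra
# of A and B) already at moderate matrix size.
import numpy as np

rng = np.random.default_rng(3)
n = 1000

# Two fixed symmetric matrices: A with spectrum {+1, -1}, B with spectrum {+2, 0}.
A = np.diag(np.where(np.arange(n) < n // 2, 1.0, -1.0))
B = np.diag(np.where(np.arange(n) < n // 2, 2.0, 0.0))

def haar_orthogonal(n, rng):
    # QR of a Gaussian matrix with the usual sign fix gives a Haar sample.
    Z = rng.standard_normal((n, n))
    Q, R = np.linalg.qr(Z)
    return Q * np.sign(np.diag(R))

# Two independent samples give nearly the same eigenvalue histogram.
for trial in range(2):
    U = haar_orthogonal(n, rng)
    evals = np.linalg.eigvalsh(A + U @ B @ U.T)
    hist, _ = np.histogram(evals, bins=20, range=(-1.5, 3.5), density=True)
    print(f"trial {trial}: histogram {np.round(hist, 3)}")
```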


Journal ArticleDOI
TL;DR: A spectral analysis and a novel iterative algorithm are proposed for the computation of a few positive real eigenvalues and the corresponding eigenfunctions of the transmission eigenvalue problem, and numerical experiments show that the proposed method can find the desired smallest positive real transmission eigenvalues accurately, efficiently, and robustly.
Abstract: The transmission eigenvalue problem, besides its critical role in inverse scattering problems, deserves special interest of its own due to the fact that the corresponding differential operator is neither elliptic nor self-adjoint. In this paper, we provide a spectral analysis and propose a novel iterative algorithm for the computation of a few positive real eigenvalues and the corresponding eigenfunctions of the transmission eigenvalue problem. Based on approximation using continuous finite elements, we first derive an associated symmetric quadratic eigenvalue problem (QEP) for the transmission eigenvalue problem to eliminate the nonphysical zero eigenvalues while preserving all nonzero ones. In addition, the derived QEP enables us to consider a more refined discretization to overcome the limitation on the number of degrees of freedom. We then transform the QEP to a parameterized symmetric definite generalized eigenvalue problem (GEP) and develop a secant-type iteration for solving the resulting GEPs. Moreover, we carry out spectral analysis for various existence intervals of desired positive real eigenvalues, since a few lowest positive real transmission eigenvalues are of practical interest in the estimation and the reconstruction of the index of refraction. Numerical experiments show that the proposed method can find those desired smallest positive real transmission eigenvalues accurately, efficiently, and robustly.

31 citations


Journal ArticleDOI
TL;DR: For general self-adjoint eigenvalue problems, the shifted-inverse iteration based on the multigrid discretizations developed in recent years is an efficient computational method.
Abstract: The shifted-inverse iteration based on the multigrid discretizations developed in recent years is an efficient computation method for eigenvalue problems. In this paper, for general self-adjoint ei...

29 citations
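
For readers unfamiliar with the building block, here is a minimal single-mesh sketch of shifted-inverse iteration (without the multigrid discretization hierarchy the paper is about); the test matrix, shift, and iteration count are illustrative assumptions.

```python
# Minimal sketch: shifted-inverse iteration for a symmetric matrix.
# Repeatedly apply (A - sigma*I)^{-1}; the iterate converges to the eigenvector
# whose eigenvalue is closest to the shift sigma.
import numpy as np
from scipy.linalg import lu_factor, lu_solve

rng = np.random.default_rng(4)
n = 200
A = rng.standard_normal((n, n)); A = (A + A.T) / 2        # symmetric test matrix
sigma = 1.0                                               # illustrative shift

lu = lu_factor(A - sigma * np.eye(n))                     # factor once, reuse each step
x = rng.standard_normal(n)
for _ in range(30):
    x = lu_solve(lu, x)
    x /= np.linalg.norm(x)

lam = x @ A @ x                                           # Rayleigh quotient
exact = np.linalg.eigvalsh(A)
print("shifted-inverse estimate :", lam)
print("closest exact eigenvalue :", exact[np.argmin(np.abs(exact - sigma))])
```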


Journal ArticleDOI
TL;DR: A new algorithm for solving the eigenvalue problem for an n × n real symmetric arrowhead matrix with high relative accuracy in O(n^2) operations under certain circumstances is presented, based on a shift-and-invert approach.

29 citations
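
To make the structure concrete, the sketch below builds a small real symmetric arrowhead matrix and checks that its eigenvalues interlace the diagonal entries (a basic property that fast arrowhead eigensolvers exploit). It uses a dense eigensolver, not the paper's high-relative-accuracy shift-and-invert algorithm, and the data are random placeholders.

```python
# Illustration of a real symmetric arrowhead matrix
#   A = [[D, z], [z^T, alpha]]  with D diagonal,
# and the Cauchy interlacing of its eigenvalues with the entries of D.
import numpy as np

rng = np.random.default_rng(5)
n = 6
d = np.sort(rng.uniform(0.0, 10.0, n))      # diagonal of D (sorted, distinct w.h.p.)
z = rng.standard_normal(n)                  # "shaft" of the arrow
alpha = rng.standard_normal()

A = np.diag(np.append(d, alpha))
A[:n, n] = z
A[n, :n] = z

lam = np.linalg.eigvalsh(A)                 # ascending
# Interlacing: lam[0] <= d[0] <= lam[1] <= d[1] <= ... <= d[n-1] <= lam[n]
print("eigenvalues:", np.round(lam, 3))
print("diagonal   :", np.round(d, 3))
print("interlaced :", all(lam[i] <= d[i] <= lam[i + 1] for i in range(n)))
```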


Journal ArticleDOI
TL;DR: In this paper, error estimates of the finite element method with numerical integration for differential eigenvalue problems are presented, and the theoretical results are illustrated by numerical experiments for a model problem.

27 citations


25 Mar 2015
TL;DR: In this article, it is shown that good rational filter functions can be computed using (nonlinear least squares) optimization techniques as opposed to designing those functions based on a thorough understanding of complex analysis.
Abstract: Solving (nonlinear) eigenvalue problems by contour integration requires an effective discretization for the corresponding contour integrals. In this paper it is shown that good rational filter functions can be computed using (nonlinear least squares) optimization techniques as opposed to designing those functions based on a thorough understanding of complex analysis. The conditions that such an effective filter function should satisfy are derived and translated into a nonlinear least squares optimization problem solved by optimization algorithms from Tensorlab. Numerical experiments illustrate the validity of this approach.

25 citations
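
A simplified sketch of the filter-design idea follows: fix poles on a circle enclosing the target interval [-1, 1] and fit the (complex) weights by linear least squares so that the filter is close to 1 inside the interval and close to 0 outside. The paper goes further and optimizes the pole locations as well, which is what makes the problem nonlinear; the pole placement, sample points, and interval here are illustrative assumptions.

```python
# Simplified sketch: design a rational filter b(x) = sum_j 2*Re(w_j / (z_j - x))
# with fixed poles z_j in the upper half-plane (conjugate pairs implied), fitted
# by *linear* least squares to be ~1 on [-1, 1] and ~0 away from it.
import numpy as np

N = 8                                               # poles in the upper half-plane
theta = np.pi * (np.arange(N) + 0.5) / N
poles = np.cos(theta) + 1j * np.sin(theta)          # on the unit circle around [-1, 1]

# Sample points: inside the interval (target 1) and outside (target 0).
x_in = np.linspace(-0.99, 0.99, 200)
x_out = np.concatenate([np.linspace(1.1, 10.0, 200), np.linspace(-10.0, -1.1, 200)])
x = np.concatenate([x_in, x_out])
target = np.concatenate([np.ones_like(x_in), np.zeros_like(x_out)])

# b(x) is linear in Re(w_j), Im(w_j):  2*Re(w_j*g_j) = 2*Re(w_j)*Re(g_j) - 2*Im(w_j)*Im(g_j)
G = 1.0 / (poles[None, :] - x[:, None])             # g_j(x)
design = np.hstack([2 * G.real, -2 * G.imag])       # columns for Re(w) and Im(w)
coef, *_ = np.linalg.lstsq(design, target, rcond=None)
w = coef[:N] + 1j * coef[N:]

def filt(t):
    t = np.atleast_1d(t)
    return (2 * (w[None, :] / (poles[None, :] - t[:, None])).real).sum(axis=1)

print("filter at  0.0 (inside) :", filt(0.0))
print("filter at  0.9 (inside) :", filt(0.9))
print("filter at  2.0 (outside):", filt(2.0))
```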


Journal ArticleDOI
TL;DR: An efficient spectral-Galerkin method and a rigorous error analysis are developed for the generalized eigenvalue problems associated with a transmission eigenvalue problem, and an iterative scheme is presented, based on computation of the first transmission eigenvalue, to estimate the index of refraction of an inhomogeneous medium.
Abstract: We first develop an efficient spectral-Galerkin method and a rigorous error analysis for the generalized eigenvalue problems associated to a transmission eigenvalue problem. Then, we present an iterative scheme, based on computation of the first transmission eigenvalue, to estimate the index of refraction of an inhomogeneous medium. We present ample numerical results to demonstrate the effectiveness and accuracy of our approach.

Journal ArticleDOI
TL;DR: The algorithm presented in this paper neither needs to compute the largest eigenvalue of the related matrix nor requires any line search scheme.

Journal ArticleDOI
TL;DR: In various applications, for instance in the detection of a Hopf bifurcation or in solving separable boundary value problems using the two-parameter eigenvalue problem, one has to solve a generalized eigenvalue problem with 2 × 2 operator determinants of the form (B1 ⊗ A2 − A1 ⊗ B2)z = μ(B1 ⊗ C2 − C1 ⊗ B2)z.
Abstract: In various applications, for instance in the detection of a Hopf bifurcation or in solving separable boundary value problems using the two-parameter eigenvalue problem, one has to solve a generalized eigenvalue problem of the form (B1 ⊗ A2 − A1 ⊗ B2)z = μ(B1 ⊗ C2 − C1 ⊗ B2)z, where the matrices are 2 × 2 operator determinants. We present efficient methods that can be used to compute a small subset of the eigenvalues. For full matrices of moderate size we propose either the standard implicitly restarted Arnoldi or Krylov–Schur iteration with shift-and-invert transformation, performed efficiently by solving a Sylvester equation. For large problems, it is more efficient to use subspace iteration based on low-rank approximations of the solution of the Sylvester equation combined with a Krylov–Schur method for the projected problems.
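
A small dense illustration of this generalized eigenvalue problem, assembled with explicit Kronecker products, is sketched below. The abstract's point is that for large problems one should avoid forming these Kronecker products and instead use shift-and-invert via Sylvester-equation solves; the sizes and matrices here are random placeholders.

```python
# Small dense illustration of the operator-determinant eigenvalue problem
#   (B1 (x) A2 - A1 (x) B2) z = mu (B1 (x) C2 - C1 (x) B2) z,
# assembled with explicit Kronecker products (only feasible for small sizes).
import numpy as np
from scipy.linalg import eig

rng = np.random.default_rng(6)
n1, n2 = 4, 5                                             # illustrative sizes
A1, B1, C1 = (rng.standard_normal((n1, n1)) for _ in range(3))
A2, B2, C2 = (rng.standard_normal((n2, n2)) for _ in range(3))

Delta1 = np.kron(B1, A2) - np.kron(A1, B2)                # left-hand operator determinant
Delta0 = np.kron(B1, C2) - np.kron(C1, B2)                # right-hand operator determinant

mu, Z = eig(Delta1, Delta0)

# Residual check for one eigenpair.
k = np.argmin(np.abs(mu))
r = Delta1 @ Z[:, k] - mu[k] * (Delta0 @ Z[:, k])
print("one eigenvalue:", mu[k])
print("residual norm :", np.linalg.norm(r))
```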

Journal ArticleDOI
TL;DR: In this article, a modified parameter perturbation method is presented to predict the eigenvalue intervals of the uncertain structures with interval parameters, where interval variables are used to quantitatively describe all the uncertain parameters.
Abstract: To overcome the drawbacks of the traditional interval perturbation method due to the unpredictable effect of ignoring higher order terms, a modified parameter perturbation method is presented to predict the eigenvalue intervals of uncertain structures with interval parameters. In the proposed method, interval variables are used to quantitatively describe all the uncertain parameters. Different order perturbations in both eigenvalues and eigenvectors are fully considered. By retaining higher order terms, the original dynamic eigenvalue equations are transformed into interval linear equations based on the orthogonality and regularization conditions of eigenvectors. The eigenvalue ranges and corresponding eigenvectors can be approximately predicted by the parameter combinatorial approach. Two numerical examples, with the Monte Carlo method as the reference, are given to demonstrate the accuracy and efficiency of the proposed algorithm for both the real eigenvalue problem and the complex eigenvalue problem.

Journal ArticleDOI
TL;DR: In this paper, the authors conduct a systematic study on the hyperbolic quadratic eigenvalue problem (HQEP) both theoretically and numerically, and present a detailed convergence analysis for the steepest descent/ascent and nonlinear conjugate gradient type methods.
Abstract: The hyperbolic quadratic eigenvalue problem (HQEP) was shown to admit Courant-Fischer type min-max principles in 1955 by Duffin and Cauchy type interlacing inequalities in 2010 by Veselić. It can be regarded as the closest analog (among all kinds of quadratic eigenvalue problem) to the standard Hermitian eigenvalue problem (among all kinds of standard eigenvalue problem). In this paper, we conduct a systematic study on the HQEP both theoretically and numerically. On the theoretical front, we generalize Wielandt-Lidskii type min-max principles and, as a special case, Fan type trace min/max principles and establish Weyl type and Wielandt-Lidskii-Mirsky type perturbation results when an HQEP is perturbed to another HQEP. On the numerical front, we justify the natural generalization of the Rayleigh-Ritz procedure with existing principles and our new optimization principles, and, as consequences of these principles, we extend various current optimization approaches—steepest descent/ascent and nonlinear conjugate gradient type methods for the Hermitian eigenvalue problem—to calculate a few extreme eigenvalues (of both positive and negative type). A detailed convergence analysis is given for the steepest descent/ascent methods. The analysis reveals the intrinsic quantities that control convergence rates and consequently yields ways of constructing effective preconditioners. Numerical examples are presented to demonstrate the proposed theory and algorithms.
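
As a minimal numerical check of the defining property, the sketch below builds an overdamped (hence hyperbolic) quadratic eigenvalue problem via the sufficient condition lambda_min(B)^2 > 4 lambda_max(A) lambda_max(C) and verifies that all 2n eigenvalues are real. It solves the problem through a companion linearization, not through the variational methods developed in the paper, and all matrices are illustrative.

```python
# Minimal check: a hyperbolic QEP  Q(lambda) = lambda^2*A + lambda*B + C
# (A > 0 and (x'Bx)^2 > 4 (x'Ax)(x'Cx) for all x != 0) has only real eigenvalues.
import numpy as np
from scipy.linalg import eig

rng = np.random.default_rng(7)
n = 20

def rand_spd(n):
    M = rng.standard_normal((n, n))
    return M @ M.T + n * np.eye(n)

A = np.eye(n)                                            # mass-like, lambda_max(A) = 1
C = rand_spd(n); C /= np.linalg.eigvalsh(C)[-1]          # scale so lambda_max(C) = 1
B = 10.0 * np.eye(n) + 0.1 * rand_spd(n)                 # lambda_min(B) >= 10 > 2 = sqrt(4*1*1)

# Companion linearization: [[B, C], [-I, 0]] + lambda*[[A, 0], [0, I]].
Y = np.block([[B, C], [-np.eye(n), np.zeros((n, n))]])
X = np.block([[A, np.zeros((n, n))], [np.zeros((n, n)), np.eye(n)]])
lam, _ = eig(-Y, X)

print("max |Im(lambda)|:", np.max(np.abs(lam.imag)))     # ~0: all 2n eigenvalues are real
```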

Journal ArticleDOI
TL;DR: A robust and efficient eigensolver for computing a few smallest positive eigenvalues of the three-dimensional Maxwell's transmission eigenvalue problem is studied and a novel method to solve the linear systems in each iteration of LOBPCG is proposed.
Abstract: We study a robust and efficient eigensolver for computing a few smallest positive eigenvalues of the three-dimensional Maxwell's transmission eigenvalue problem. The discretized governing equations by the Nedelec edge element result in a large-scale quadratic eigenvalue problem (QEP) for which the spectrum contains many zero eigenvalues and the coefficient matrices consist of patterns in the matrix form $XY^{-1}Z$, both of which prevent existing eigenvalue solvers from being efficient. To remedy these difficulties, we rewrite the QEP as a particular nonlinear eigenvalue problem and develop a secant-type iteration, together with an indefinite locally optimal block preconditioned conjugate gradient (LOBPCG) method, to sequentially compute the desired positive eigenvalues. Furthermore, we propose a novel method to solve the linear systems in each iteration of LOBPCG. Intensive numerical experiments show that our proposed method is robust, although the desired real eigenvalues are surrounded by complex eigenv...
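
As a point of reference for the LOBPCG component mentioned above, here is a standard SciPy LOBPCG call computing a few smallest eigenvalues of a sparse symmetric positive definite model matrix (a 2-D Laplacian). The paper uses an indefinite LOBPCG variant with a specialized inner linear solver on the Maxwell transmission problem; the matrix, preconditioner, and block size below are illustrative assumptions.

```python
# Reference sketch: standard LOBPCG from SciPy on a sparse SPD model problem
# (2-D finite-difference Laplacian), computing a few smallest eigenvalues.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import lobpcg, LinearOperator, spilu

m = 40                                                   # grid size (illustrative)
T = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(m, m))
A = (sp.kron(sp.eye(m), T) + sp.kron(T, sp.eye(m))).tocsc()   # 2-D Laplacian, SPD

# Incomplete-LU preconditioner wrapped as a LinearOperator.
ilu = spilu(A, drop_tol=1e-4)
M = LinearOperator(A.shape, matvec=ilu.solve)

rng = np.random.default_rng(8)
X = rng.standard_normal((A.shape[0], 4))                 # block of 4 starting vectors
vals, vecs = lobpcg(A, X, M=M, largest=False, tol=1e-8, maxiter=200)
print("4 smallest eigenvalues:", np.sort(vals))
```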

Journal ArticleDOI
TL;DR: A successive quadratic approximations method is proposed, which reduces the nonlinear eigenvalue problem to a sequence of quadratic problems, and the convergence of the new method is investigated.

Journal ArticleDOI
TL;DR: In this article, a new method of eigenvector-sensitivity analysis for real symmetric systems with repeated eigenvalues and eigenvalue derivatives is proposed, and the derivation is completed by using information from the second and third derivatives of the eigenproblem.
Abstract: This paper proposes a new method of eigenvector-sensitivity analysis for real symmetric systems with repeated eigenvalues and eigenvalue derivatives. The derivation is completed by using information from the second and third derivatives of the eigenproblem, and is applicable to the case of repeated eigenvalue derivatives. The extended systems with a nonsingular coefficient matrix are constructed to calculate the particular solutions. An efficient approach is developed to compute the homogeneous solutions associated with the repeated eigenvalue derivatives. Different constraints for calculating the eigenvector sensitivities are derived and discussed. Four numerical examples are presented to demonstrate the validity of the proposed method.

Journal ArticleDOI
TL;DR: As with direct eigenvalue solving by the nonconforming finite element method, this multi-level correction method can also produce lower-bound approximations of the eigenvalues.
Abstract: In this paper, a type of multi-level correction scheme is proposed to solve eigenvalue problems by nonconforming finite element methods. With this new scheme, the accuracy of eigenpair approximations can be improved after each correction step, which only needs to solve a source problem on a finer finite element space and an eigenvalue problem on a coarse finite element space. This correction scheme can improve the efficiency of solving eigenvalue problems by the nonconforming finite element method. Furthermore, as with direct eigenvalue solving by the nonconforming finite element method, this multi-level correction method can also produce lower-bound approximations of the eigenvalues.

Journal ArticleDOI
TL;DR: An algorithm for computing the real stability radius of a real and stable matrix, which utilizes a recently developed technique for detecting the loss of stability in a large dynamical system.
Abstract: We present two new algorithms for investigating the stability of large and sparse matrices subject to real perturbations. The first algorithm computes the real structured pseudospectral abscissa and is based on the algorithm for computing the pseudospectral abscissa proposed by Guglielmi and Overton [SIAM J. Matrix Anal. Appl., 32 (2011), pp. 1166--1192]. It entails finding the rightmost eigenvalues for a sequence of large matrices, and we demonstrate that these eigenvalue problems can be solved in a robust manner by an unconventional eigenvalue solver. We also develop an algorithm for computing the real stability radius of a real and stable matrix, which utilizes a recently developed technique for detecting the loss of stability in a large dynamical system. Both algorithms are tested on large and sparse matrices.

Journal ArticleDOI
TL;DR: In this article, a shift-and-invert approach was proposed to solve the eigenvalue problem for a real symmetric matrix that is a rank-one modification of a diagonal matrix.
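
The classical tool for this problem (and the core step of divide-and-conquer eigensolvers) is the secular equation. The sketch below finds the eigenvalues of D + rho*z*z^T by root-finding on each interlacing interval and checks against a dense solver; it illustrates the structure only, not the specific shift-and-invert algorithm of the paper, and the data are random placeholders.

```python
# Sketch: eigenvalues of a diagonal-plus-rank-one matrix D + rho*z*z^T (rho > 0)
# via the secular equation  f(lam) = 1 + rho * sum_i z_i^2 / (d_i - lam) = 0.
# For distinct d_i the roots interlace: one in each (d_i, d_{i+1}) and one beyond d_n.
import numpy as np
from scipy.optimize import brentq

rng = np.random.default_rng(9)
n = 8
d = np.sort(rng.uniform(0.0, 10.0, n))        # distinct diagonal entries (w.h.p.)
z = rng.standard_normal(n)
rho = 0.7

def secular(lam):
    return 1.0 + rho * np.sum(z ** 2 / (d - lam))

eps = 1e-9
roots = []
for i in range(n - 1):
    # f -> -inf as lam -> d_i^+ and f -> +inf as lam -> d_{i+1}^- : a root in between.
    roots.append(brentq(secular, d[i] + eps, d[i + 1] - eps))
# Last root lies in (d_n, d_n + rho*||z||^2].
roots.append(brentq(secular, d[-1] + eps, d[-1] + rho * (z @ z) + 1.0))

exact = np.linalg.eigvalsh(np.diag(d) + rho * np.outer(z, z))
print("secular-equation roots:", np.round(np.sort(roots), 6))
print("dense eigvalsh        :", np.round(exact, 6))
```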

Book ChapterDOI
01 Jan 2015
TL;DR: In this paper, the authors established the existence of two nontrivial weak solutions of a one parameter non-local eigenvalue problem under homogeneous Dirichlet boundary conditions in bounded domains.
Abstract: In this paper we establish the existence of two nontrivial weak solutions of a one parameter non-local eigenvalue problem under homogeneous Dirichlet boundary conditions in bounded domains, involving a general non-local elliptic p-Laplacian operator.

Journal ArticleDOI
18 Jun 2015
TL;DR: A Newton algorithm on a Riemannian manifold is proposed to calculate the largest eigenvalue and the corresponding eigenvector, and extensive simulation tests are performed to demonstrate the effectiveness and efficiency of the algorithm.
Abstract: The attitude determination problem is formulated by Wahba as an optimization problem. This problem is reduced by Davenport to an eigenvalue and eigenvector problem of the K-matrix. Several popular solu...
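
To make Davenport's reduction concrete, the sketch below assembles the K-matrix from weighted vector observations and takes the eigenvector of its largest eigenvalue as the optimal attitude quaternion. It uses a dense eigensolver rather than the Riemannian Newton iteration of the paper, assumes a vector-first, scalar-last quaternion convention, and the observation data are synthetic.

```python
# Sketch: Wahba's problem via Davenport's K-matrix.
# Given unit reference vectors r_i, body-frame observations b_i ~ R_true r_i and
# weights w_i, the optimal quaternion is the eigenvector of K for its largest
# eigenvalue (quaternion stored here as [vector; scalar]).
import numpy as np

rng = np.random.default_rng(10)

def rot_x(a):                                  # simple test rotation about the x-axis
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

R_true = rot_x(0.3)
r = rng.standard_normal((5, 3)); r /= np.linalg.norm(r, axis=1, keepdims=True)
b = (R_true @ r.T).T + 1e-3 * rng.standard_normal(r.shape)     # noisy observations
w = np.full(len(r), 1.0 / len(r))

B = sum(wi * np.outer(bi, ri) for wi, bi, ri in zip(w, b, r))  # attitude profile matrix
sigma = np.trace(B)
z = sum(wi * np.cross(bi, ri) for wi, bi, ri in zip(w, b, r))

K = np.zeros((4, 4))
K[:3, :3] = B + B.T - sigma * np.eye(3)
K[:3, 3] = z
K[3, :3] = z
K[3, 3] = sigma

vals, vecs = np.linalg.eigh(K)
q_opt = vecs[:, -1]                            # eigenvector of the largest eigenvalue
print("largest eigenvalue of K:", vals[-1])    # close to the weight sum (= 1) for small noise
print("optimal quaternion [vx, vy, vz, s]:", np.round(q_opt, 4))
```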

Journal ArticleDOI
TL;DR: In this article, an eigenvalue problem involving a homogeneous Neumann boundary condition was analyzed in a smooth bounded domain, and it was shown that the set of eigenvalues of the problem possesses a continuous family of eigenvalues plus exactly one more eigenvalue which is isolated.
Abstract: We analyze an eigenvalue problem, involving a homogeneous Neumann boundary condition, in a smooth bounded domain. We show that the set of eigenvalues of the problem possesses a continuous family of eigenvalues plus exactly one more eigenvalue which is isolated. Our results complement those obtained in Mihailescu (2011).

Journal ArticleDOI
TL;DR: In this paper, the eigenvalues of discrete linear second-order Neumann eigenvalue problems with sign-changing weight were studied, and it was shown that the number of positive eigenvalues is equal to the number of positive elements in the weight function.

Journal ArticleDOI
TL;DR: Jacobi-Davidson type methods for polynomial two-parameter eigenvalue problems (PMEP) are proposed, which may for instance be used for computing zeros of a system of scalar bivariate polynomials close to a given target.

Journal ArticleDOI
07 Jan 2015
TL;DR: A multigrid method is proposed to solve eigenvalue problems by means of the finite element method based on the shifted-inverse power iteration technique, improving the overall efficiency of eigenvalue problem solving.
Abstract: A multigrid method is proposed to solve eigenvalue problems by means of the finite element method based on the shifted-inverse power iteration technique. With this scheme, solving an eigenvalue problem is transformed into solving a series of nonsingular boundary value problems on multilevel meshes. By replacing the difficult eigenvalue solve with easier boundary value problem solves, this multigrid approach can improve the overall efficiency of eigenvalue computation. Some numerical experiments are presented to validate the efficiency of this new method.
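
A minimal one-dimensional sketch of the two-grid idea follows: solve the eigenproblem only on a coarse mesh, perform a single shifted boundary-value solve on the fine mesh, and re-extract the eigenvalue with a Rayleigh quotient. Finite differences stand in for the finite elements used in the paper, and the model problem and mesh sizes are illustrative assumptions.

```python
# Two-grid sketch for -u'' = lambda*u on (0,1), u(0)=u(1)=0 (exact lambda_1 = pi^2):
# 1) solve the eigenproblem on a coarse grid,
# 2) on the fine grid solve ONE shifted linear system (A_f - sigma*I) u = I_h u_c,
# 3) recover the eigenvalue with a Rayleigh quotient on the fine grid.
import numpy as np

def laplacian_1d(m):
    h = 1.0 / (m + 1)
    A = (np.diag(2.0 * np.ones(m)) - np.diag(np.ones(m - 1), 1)
         - np.diag(np.ones(m - 1), -1)) / h ** 2
    x = np.linspace(h, 1.0 - h, m)
    return A, x

# Coarse eigensolve (cheap: small matrix).
Ac, xc = laplacian_1d(15)
vals_c, vecs_c = np.linalg.eigh(Ac)
lam_c, u_c = vals_c[0], vecs_c[:, 0]

# Fine grid: interpolate the coarse eigenvector, one shifted solve, Rayleigh quotient.
Af, xf = laplacian_1d(255)
u0 = np.interp(xf, xc, u_c)
u = np.linalg.solve(Af - lam_c * np.eye(len(xf)), u0)
u /= np.linalg.norm(u)
lam_f = u @ Af @ u

print("coarse eigenvalue   :", lam_c)
print("two-grid eigenvalue :", lam_f)
print("exact pi^2          :", np.pi ** 2)
```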

Journal ArticleDOI
TL;DR: The objective of this article is to carry out the necessary extensions for polynomials expressed in the Lagrange basis by constructing one-sided factorizations that give simple expressions relating the eigenvectors of the linearization to the eigenvectors of the polynomial eigenvalue problem.
Abstract: This article considers the backward error of the solution of polynomial eigenvalue problems expressed as Lagrange interpolants. One of the most common strategies to solve polynomial eigenvalue problems is to linearize, which is to say that the polynomial eigenvalue problem is transformed into an equivalent larger linear eigenvalue problem, and solved using any appropriate eigensolver. Much of the existing literature on the backward error of polynomial eigenvalue problems focuses on polynomials expressed in the classical monomial basis. Hence, the objective of this article is to carry out the necessary extensions for polynomials expressed in the Lagrange basis. We construct one-sided factorizations that give simple expressions relating the eigenvectors of the linearization to the eigenvectors of the polynomial eigenvalue problem. Using these relations, we are able to bound the backward error of an approximate eigenpair of the polynomial eigenvalue problem relative to the backward error of an approximate ei...

Journal ArticleDOI
TL;DR: The problem of finding J-Hamiltonian normal solutions for the inverse eigenvalue problem for a complex square matrix A is solved.

Journal ArticleDOI
TL;DR: In this article, the authors introduced a type of full multigrid method for the nonlinear eigenvalue problem, which transforms the solution of the nonlinear eigenvalue problem into a series of solutions of the corresponding linear boundary value problems on the sequence of finite element spaces.
Abstract: This paper introduces a type of full multigrid method for the nonlinear eigenvalue problem. The main idea is to transform the solution of the nonlinear eigenvalue problem into a series of solutions of the corresponding linear boundary value problems on the sequence of finite element spaces and nonlinear eigenvalue problems on the coarsest finite element space. The linearized boundary value problems are solved by some multigrid iterations. Besides the multigrid iteration, all other efficient iteration methods for solving boundary value problems can serve as the linear problem solver. We will prove that the computational work of this new scheme is truly optimal, the same as solving the corresponding linear boundary value problem. In this case, this type of iteration scheme certainly improves the overall efficiency of solving nonlinear eigenvalue problems. Some numerical experiments are presented to validate the efficiency of the new method.