A block Newton method for nonlinear eigenvalue problems
Citations
Function theory in several complex variables
An integral method for solving nonlinear eigenvalue problems
The nonlinear eigenvalue problem
NLEIGS: A Class of Fully Rational Krylov Methods for Nonlinear Eigenvalue Problems
Chebyshev interpolation for nonlinear eigenvalue problems
References
Matrix perturbation theory
Numerical solution of saddle point problems
Functions of Matrices: Theory and Computation
Templates for the Solution of Algebraic Eigenvalue Problems: A Practical Guide
Function theory of several complex variables
Related Papers (5)
Nonlinear eigenvalue problems: a challenge for modern eigenvalue methods
Frequently Asked Questions (17)
Q2. What are the future works in "A block Newton method for nonlinear eigenvalue problems"?
A logical next step of future research is to employ invariant pairs in single-vector methods for safely locking and purging converged eigenpairs, similar to the work by Meerbergen [19] on the quadratic eigenvalue problem.
Q4. What is the step size of the first 2 iterations?
During the first two iterations the step size is at the allowed minimum 2^{−3}, before it successively increases to 1 at the sixth step, after which quadratic convergence sets in.
Q5. Why is the stability analysis of the corresponding DDE of interest?
For the stability analysis of the corresponding DDE ẋ(t) = A0 x(t) + A1 x(t − τ), it is of interest to compute eigenvalues with large real part.
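As a rough sketch of this setting (the 2×2 matrices A0, A1 and the delay τ below are invented toy data, and a scalar Newton iteration on det T(λ) stands in for a proper nonlinear eigensolver):

```python
import numpy as np

# Toy illustration (assumed data): the DDE x'(t) = A0 x(t) + A1 x(t - tau)
# leads to the delay eigenvalue problem T(lam) = lam*I - A0 - A1*exp(-lam*tau).
A0 = np.array([[-1.0, 1.0], [0.0, -2.0]])
A1 = np.array([[0.5, 0.0], [0.0, 0.3]])
tau = 1.0

def T(lam):
    """Evaluate the holomorphic matrix function T(lambda)."""
    return lam * np.eye(2) - A0 - A1 * np.exp(-lam * tau)

def det_T(lam):
    return np.linalg.det(T(lam))

# Scalar Newton iteration on det(T(lam)) = 0 to locate one characteristic root.
lam = -0.5 + 0.0j  # initial guess
for _ in range(50):
    h = 1e-7
    d = (det_T(lam + h) - det_T(lam - h)) / (2 * h)  # numerical derivative
    lam = lam - det_T(lam) / d

# At a characteristic root the smallest singular value of T(lam) (nearly) vanishes.
sigma_min = np.linalg.svd(T(lam), compute_uv=False)[-1]
print(lam, sigma_min)
```

The smallest singular value of T(λ) is a convenient residual measure: it is (numerically) zero exactly when λ is an eigenvalue.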
Q6. What is the minimality index of a pair (X, S)?
If a pair (X, S) ∈ C^{n×k} × C^{k×k} is minimal, then its minimality index cannot exceed k. Proof: Since (X, S) is minimal, there is l ∈ N such that rank(Vl(X, S)) = k.
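The minimality index can be probed numerically: it is the smallest l for which the stacked matrix Vl(X, S) = [X; XS; …; XS^{l−1}] has full column rank k. A small sketch with invented toy data (X, S below are assumptions, not from the paper):

```python
import numpy as np

# V_l(X, S) stacks X, X S, ..., X S^(l-1); (X, S) is minimal when some V_l has rank k.
def V(X, S, l):
    blocks = [X]
    for _ in range(l - 1):
        blocks.append(blocks[-1] @ S)  # next block X S^i
    return np.vstack(blocks)

k = 2
X = np.array([[1.0, 1.0]])                   # 1 x 2: a single row has rank 1 only
S = np.array([[2.0, 0.0], [0.0, 3.0]])

l = 1
while np.linalg.matrix_rank(V(X, S, l)) < k:
    l += 1
print(l)  # minimality index of this pair
```

Here l = 2 ≤ k = 2, consistent with the stated bound that the minimality index of a minimal pair cannot exceed k.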
Q7. How can system (20) be solved when the matrices A_j are sparse?
Moreover, if the matrices A_j are sparse, then (20) is a bordered sparse system, and a sparse direct solver, possibly adapted to such bordered matrices [3], could be used.
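A minimal sketch of assembling and solving such a bordered sparse system with SciPy (the tridiagonal A and the border vectors below are invented stand-ins, not the actual system (20)):

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Assumed toy data: a sparse matrix A bordered by one dense column b and row c^T,
# giving the saddle-point-like structure [[A, b], [c^T, 0]].
n = 100
A = sp.diags([2.0 * np.ones(n), -np.ones(n - 1), -np.ones(n - 1)],
             [0, -1, 1], format="csr")
b = np.ones((n, 1))
c = np.ones((n, 1))

# Assemble the bordered matrix in sparse format and apply a sparse direct solver.
M = sp.bmat([[A, sp.csr_matrix(b)], [sp.csr_matrix(c.T), None]], format="csc")
rhs = np.zeros(n + 1)
rhs[-1] = 1.0
sol = spla.spsolve(M, rhs)
residual = np.linalg.norm(M @ sol - rhs)
print(residual)
```

Keeping the border as sparse blocks inside `bmat` lets the direct solver exploit the sparsity of A; specialized bordered solvers [3] would additionally exploit the low-rank border structure.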
Q8. How do the authors obtain initial eigenvalue guesses?
To obtain an initial guess, the authors approximate T(λ) by a polynomial, T(λ) ≈ P(λ) := λI − A0 − A1 ∑_i (−λτ)^i / i! (26), where the sum is a truncated Taylor expansion of e^{−λτ}, and compute the k eigenvalues λ1, …, λk of P that have largest real part.
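To make this concrete, here is a sketch of the lowest-order truncation (degree 1) with invented toy matrices: e^{−λτ} ≈ I − λτ turns P(λ) into the linear pencil λ(I + τA1) − (A0 + A1), whose eigenvalues can serve as initial guesses.

```python
import numpy as np
import scipy.linalg as la

# Assumed toy data (not from the paper).
A0 = np.array([[-1.0, 1.0], [0.0, -2.0]])
A1 = np.array([[0.5, 0.0], [0.0, 0.3]])
tau = 1.0

# Truncating exp(-lam*tau) ~ 1 - lam*tau gives the generalized eigenproblem
#   (A0 + A1) x = lam (I + tau*A1) x.
eigvals = la.eig(A0 + A1, np.eye(2) + tau * A1, right=False)
order = np.argsort(-eigvals.real)      # sort by descending real part
guesses = eigvals[order]
print(guesses)
```

Higher truncation degrees would give a genuine polynomial eigenvalue problem, which is typically solved via linearization; the degree-1 case reduces directly to a generalized eigenvalue problem.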
Q9. When is (X, S) a simple invariant pair?
In turn, (X, S) is a simple invariant pair if and only if the linear matrix operator L̃ : C^{n×k} × C^{k×k} → C^{n×k} × C^{k×k}, (ΔX, ΔS) ↦ (DP(ΔX, ΔS), DV(ΔX, ΔS)), is invertible, where DP : (ΔX, ΔS) ↦ T(ΔX, S) + ∑_{j=1}^{m} A_j X [Dp_j(S)](ΔS). Using (12), the authors obtain from the results in [18] that [f_j(S), [Df_j(S)](ΔS); 0, f_j(S)] = f_j([S, ΔS; 0, S]) = p_j([S, ΔS; 0, S]) = [p_j(S), [Dp_j(S)](ΔS); 0, p_j(S)] for j = 1, …, m, where [·, ·; ·, ·] denotes a 2×2 block matrix.
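The 2×2 block identity above is easy to verify numerically: applying a polynomial to the block-triangular matrix [S, ΔS; 0, S] exposes the Fréchet derivative in the (1,2) block. A small check with random data and the sample polynomial p(z) = z³ (my choice, not the paper's):

```python
import numpy as np

# For any polynomial p, p([[S, E], [0, S]]) = [[p(S), Dp(S)[E]], [0, p(S)]],
# so the (1,2) block of the big matrix equals the Frechet derivative Dp(S)[E].
rng = np.random.default_rng(0)
S = rng.standard_normal((3, 3))
E = rng.standard_normal((3, 3))

p = lambda M: M @ M @ M                      # p(z) = z^3
block = np.block([[S, E], [np.zeros((3, 3)), S]])
frechet = p(block)[:3, 3:]                   # read off Dp(S)[E]

# Explicit derivative of S -> S^3: E S^2 + S E S + S^2 E.
explicit = E @ S @ S + S @ E @ S + S @ S @ E
err = np.linalg.norm(frechet - explicit)
print(err)
```

This is the same block-triangular device used in the paper to evaluate Fréchet derivatives of the interpolating polynomials p_j.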
Q10. What nonlinear eigenvalue problem do the authors consider?
Given T : Ω → C^{n×n} holomorphic on an open set Ω ⊆ C, the authors consider the nonlinear eigenvalue problem of finding pairs (x, λ) ∈ C^n × Ω with x ≠ 0 such that T(λ)x = 0.
Q11. What is the way to represent eigenvalues?
When little is known about a nonlinear eigenvalue problem at hand, the concept of invariant pairs proposed in this paper offers a robust way of representing several eigenvalues and eigenvectors simultaneously.
Q12. What is the way to avoid converged eigenvalues?
In methods that determine several eigenvalues successively, such as Krylov subspace or Jacobi–Davidson methods [2], repeated convergence towards an eigenvalue is usually avoided by reorthogonalization against converged eigenvectors.
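The reorthogonalization step itself is simple; a minimal sketch (with random placeholder data) of projecting a new search direction out of the span of already-converged eigenvectors:

```python
import numpy as np

# Deflation by reorthogonalization: new search directions are kept orthogonal
# to already-converged eigenvectors Q, so the iteration cannot reconverge to
# the same eigenpair.
rng = np.random.default_rng(1)
Q, _ = np.linalg.qr(rng.standard_normal((6, 2)))  # orthonormal "converged" basis
v = rng.standard_normal(6)                        # candidate new direction

v = v - Q @ (Q.T @ v)                             # project out converged space
v /= np.linalg.norm(v)
overlap = np.linalg.norm(Q.T @ v)                 # ~0: v is orthogonal to Q
print(overlap)
```

As the surrounding answer notes, this works for distinct eigenvectors but is the mechanism that invariant pairs are designed to make more robust.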
Q13. What method do the authors use for computing invariant pairs?
Algorithm 1 (Newton method for computing invariant pairs). Input: an initial pair (X0, S0) ∈ C^{n×k} × C^{k×k} such that Vl(X0, S0)^H Vl(X0, S0) = I.
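As a hedged illustration of the idea (not the paper's Algorithm 1): for k = 1 and the linear problem T(λ) = λI − A, an invariant pair reduces to an eigenpair (x, s) solving A x − x s = 0 under a normalization, and a generic Newton-type solver can find it. The matrix A and initial guess below are invented toy data.

```python
import numpy as np
from scipy.optimize import fsolve

# Toy stand-in for the block Newton method, k = 1, real symmetric case:
# solve A x - s x = 0 together with the normalization x^T x = 1.
A = np.diag([1.0, 2.0, 5.0])

def F(z):
    x, s = z[:3], z[3]
    return np.concatenate([A @ x - s * x, [x @ x - 1.0]])

z0 = np.concatenate([np.array([0.9, 0.1, 0.1]), [1.2]])  # rough initial pair
z = fsolve(F, z0)                                        # Newton-type solver
x, s = z[:3], z[3]
print(s, np.linalg.norm(A @ x - s * x))
```

The actual algorithm works with block pairs (X, S) ∈ C^{n×k} × C^{k×k} and the normalization Vl(X, S)^H Vl(X, S) = I, but the structure — a residual equation plus a normalization condition, solved by Newton's method — is the same.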
Q14. What is a suitable extension of eigenpairs for the nonlinear eigenvalue problem?
A pair (X, S) ∈ C^{n×k} × C^{k×k} is called an invariant pair of the nonlinear eigenvalue problem (3) if A1 X f1(S) + A2 X f2(S) + · · · + Am X fm(S) = 0. (4) Note that the matrix functions f1(S), …, fm(S) are well defined under the given assumptions [16].
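A quick numerical sanity check of definition (4) in the linear special case (my instantiation, not from the paper): with f1(λ) = 1, f2(λ) = λ, A1 = −A, A2 = I, condition (4) reads −A X + X S = 0, i.e. X spans an invariant subspace of A and S is the restriction of A to it.

```python
import numpy as np

# Assumed toy data: build an invariant pair of a random matrix A from two of
# its eigenpairs, then verify the invariant-pair condition -A X + X S = 0.
rng = np.random.default_rng(2)
A = rng.standard_normal((5, 5))
w, Vmat = np.linalg.eig(A)
X = Vmat[:, :2]                 # two eigenvectors
S = np.diag(w[:2])              # restriction of A to span(X)

residual = np.linalg.norm(-A @ X + X @ S)
print(residual)
```

For genuinely nonlinear f_j the same condition couples X and S through matrix functions f_j(S), which is what makes invariant pairs a faithful generalization of eigenpairs.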
Q15. Why did the authors develop a block Newton method?
To compute such invariant pairs, the authors have developed a block Newton method and described some algorithmic details, mainly to maintain a reasonable computational cost.
Q16. How do the authors improve global convergence?
In Section 3.3 (Improving global convergence), in an attempt to improve the global convergence of Algorithm 1, the authors have implemented a simple Armijo rule based on the residual norm ‖T(X, S)‖F = ‖A1 X f1(S) + · · · + Am X fm(S)‖F.
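A minimal sketch of such an Armijo-style backtracking rule on a residual norm, using the classic scalar example arctan(x) = 0 (my choice) where undamped Newton diverges; the minimum step size 2^{−3} matches the one mentioned in the answer to Q4:

```python
import numpy as np

# Armijo-style backtracking on the residual norm: halve the Newton step,
# down to the minimum step size 2^-3, until the residual decreases.
f = lambda x: np.arctan(x)

x = 2.0                                   # plain Newton diverges from here
for _ in range(20):
    J = 1.0 / (1.0 + x * x)               # derivative of arctan
    d = -f(x) / J                         # Newton direction
    alpha = 1.0
    while abs(f(x + alpha * d)) >= abs(f(x)) and alpha > 2**-3:
        alpha /= 2                        # backtrack, respecting the minimum step
    x = x + alpha * d
print(abs(f(x)))
```

Once the iterates are close to the solution, the full step α = 1 is always accepted and the quadratic convergence of Newton's method takes over, mirroring the behavior described in Q4.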
Q17. What is the Fréchet derivative of f_j?
[Df_j(S)] denotes the Fréchet derivative of f_j at S. Note that the Fréchet derivative DS^j of the map S ↦ S^j can be written as DS^j : ΔS ↦ ∑_{i=0}^{j−1} S^i ΔS S^{j−i−1}.
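The product-rule formula for D(S^j) can be checked against a finite-difference approximation; a short sketch with random data and j = 4 (my choices):

```python
import numpy as np

# Verify D(S^j)[dS] = sum_{i=0}^{j-1} S^i dS S^{j-1-i} against a central
# finite difference of t -> (S + t*dS)^j at t = 0.
rng = np.random.default_rng(3)
S = rng.standard_normal((3, 3))
dS = rng.standard_normal((3, 3))
j = 4

formula = sum(np.linalg.matrix_power(S, i) @ dS @ np.linalg.matrix_power(S, j - 1 - i)
              for i in range(j))
h = 1e-6
fd = (np.linalg.matrix_power(S + h * dS, j)
      - np.linalg.matrix_power(S - h * dS, j)) / (2 * h)
err = np.linalg.norm(formula - fd)
print(err)
```

Note the sum runs only to j − 1: each of the j factors of S^j contributes one term in which it is replaced by ΔS.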