The Quadratic Eigenvalue Problem
Citations
Energy transport in saturated porous media
Displacement modes of a thin-walled beam model with deformable cross sections
A model reduction method for fast finite element analysis of continuously symmetric waveguides
An Arnoldi reduction strategy applied to the semi-analytical finite element method to model railway track vibrations
Waveguide Propagation Modes and Quadratic Eigenvalue Problems
References
The Theory of Matrices
The algebraic eigenvalue problem
Frequently Asked Questions (17)
Q2. What is the projection method for L(λ)?
The projection method approximates an eigenvector x of L(λ) by a vector x̃ = Vξ ∈ Kk with corresponding approximate eigenvalue λ̃. Since W∗L(λ̃)x̃ = W∗L(λ̃)Vξ = Lk(λ̃)ξ = 0, the projection method forces the residual r = L(λ̃)x̃ to be orthogonal to the subspace Lk spanned by the columns of W.
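As a small numerical sketch of this projection step (random illustrative data; the projected QEP is solved through a companion linearization, and the Galerkin choice W = V is assumed), one can check that the residual of a Ritz pair is orthogonal to the columns of W:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 8, 4
M, C, K = rng.standard_normal((3, n, n))          # illustrative random QEP data
L = lambda lam: lam**2 * M + lam * C + K          # L(lam) = lam^2 M + lam C + K

V, _ = np.linalg.qr(rng.standard_normal((n, k)))  # orthonormal basis of the search space
W = V                                             # Galerkin choice W = V
Mk, Ck, Kk = (W.conj().T @ X @ V for X in (M, C, K))

# Solve the small projected QEP Lk(lam) = lam^2 Mk + lam Ck + Kk = 0
# via its companion linearization to get a Ritz pair (lam_t, xi).
comp = np.block([[np.zeros((k, k)), np.eye(k)],
                 [-np.linalg.solve(Mk, Kk), -np.linalg.solve(Mk, Ck)]])
evals, evecs = np.linalg.eig(comp)
j = np.argmin(abs(evals))                         # smallest Ritz value, for a well-scaled check
lam_t, xi = evals[j], evecs[:k, j]                # eigenvector has the form [xi; lam*xi]

# The residual r = L(lam_t) x_t is orthogonal to the columns of W by construction.
x_t = V @ xi
r = L(lam_t) @ x_t
print(np.linalg.norm(W.conj().T @ r) < 1e-8)      # True
```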
Q3. What is the eigenvalue of the skew-Hamiltonian matrix?
The reduction to a Hamiltonian eigenproblem uses the fact that when the skew-Hamiltonian matrix B is nonsingular, it can be written in factored form as

    B = B1B2 = [ I  ½C ] [ M  ½C ]
               [ 0  M  ] [ 0  I  ],   with B2ᵀJ = JB1.   (5.2)

Then H = B1⁻¹AB2⁻¹ is Hamiltonian.
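This factored form (5.2) can be checked numerically. The sketch below assumes illustrative random data: M symmetric and nonsingular, C skew-symmetric, and a randomly generated Hamiltonian A (i.e. JA symmetric):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
M = rng.standard_normal((n, n)); M = M + M.T + 3 * n * np.eye(n)  # symmetric, nonsingular
C = rng.standard_normal((n, n)); C = C - C.T                      # skew-symmetric
I, Z = np.eye(n), np.zeros((n, n))
J = np.block([[Z, I], [-I, Z]])

# Factored form (5.2): B = B1 B2, with B2^T J = J B1.
B1 = np.block([[I, C / 2], [Z, M]])
B2 = np.block([[M, C / 2], [Z, I]])
B = B1 @ B2

print(np.allclose((B @ J).T, -(B @ J)))   # B is skew-Hamiltonian: (BJ)^T = -BJ -> True
print(np.allclose(B2.T @ J, J @ B1))      # the coupling condition in (5.2)    -> True

# For any Hamiltonian A (JA symmetric), H = B1^{-1} A B2^{-1} is Hamiltonian.
S = rng.standard_normal((2 * n, 2 * n)); S = S + S.T
A = -J @ S                                # then J A = S is symmetric
H = np.linalg.solve(B1, A) @ np.linalg.inv(B2)
print(np.allclose((J @ H).T, J @ H))      # (JH)^T = JH -> True
```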
Q4. What is the simplest way to construct a Krylov subspace?
The non-Hermitian Lanczos process produces a non-Hermitian tridiagonal matrix Tk and a pair of matrices Vk and Wk such that Wk∗Vk = I and whose columns form bases for the Krylov subspaces Kk(S, v) and Kk(S∗, w), where v and w are starting vectors such that w∗v = 1.
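A minimal sketch of the two-sided (non-Hermitian) Lanczos process for real data follows; it omits the breakdown handling (w̃∗ṽ = 0) that a robust implementation would need:

```python
import numpy as np

def two_sided_lanczos(S, v, w, k):
    """Two-sided Lanczos sketch: build V, W with W^T V = I and a
    (generally nonsymmetric) tridiagonal T = W^T S V.  Real data,
    no breakdown handling."""
    n = len(v)
    v = v / (w @ v)                           # enforce w^T v = 1
    V, W, T = np.zeros((n, k)), np.zeros((n, k)), np.zeros((k, k))
    V[:, 0], W[:, 0] = v, w
    for j in range(k):
        T[j, j] = W[:, j] @ S @ V[:, j]
        vh = S @ V[:, j] - T[j, j] * V[:, j]
        wh = S.T @ W[:, j] - T[j, j] * W[:, j]
        if j > 0:                             # three-term recurrences
            vh -= T[j - 1, j] * V[:, j - 1]
            wh -= T[j, j - 1] * W[:, j - 1]
        if j + 1 < k:
            d = wh @ vh                       # d == 0 would be a breakdown
            T[j + 1, j] = np.sqrt(abs(d))
            T[j, j + 1] = d / T[j + 1, j]
            V[:, j + 1] = vh / T[j + 1, j]
            W[:, j + 1] = wh / T[j, j + 1]
    return V, W, T

rng = np.random.default_rng(2)
S = rng.standard_normal((12, 12))
v, w = rng.standard_normal(12), rng.standard_normal(12)
V, W, T = two_sided_lanczos(S, v, w, k=5)
print(np.allclose(W.T @ V, np.eye(5), atol=1e-8))   # biorthogonality: True
print(np.allclose(W.T @ S @ V, T, atol=1e-8))       # tridiagonal projection: True
```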
Q5. What is the simplest way to measure the robustness of a system?
To measure the robustness of the system, one can take as a global measure ν² = ∑_{k=1}^{2n} ωk² κ(λk)², where the ωk are positive weights and κ(λk) is the condition number of the eigenvalue λk.
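For a standard eigenproblem Ax = λx (the same recipe applies to the eigenvalues of a linearized QEP), κ(λk) = ‖yk‖‖xk‖/|yk∗xk| can be computed from left and right eigenvectors, and ν² accumulated from them. In this sketch the data is random and the weights ωk are taken uniform for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5
A = rng.standard_normal((n, n))

lam, X = np.linalg.eig(A)            # columns of X: right eigenvectors x_k
Yh = np.linalg.inv(X)                # row k of X^{-1} is y_k^*, normalized so y_k^* x_k = 1

# kappa(lam_k) = ||y_k|| ||x_k|| / |y_k^* x_k|; here |y_k^* x_k| = 1 by construction.
kappa = np.array([np.linalg.norm(Yh[k, :]) * np.linalg.norm(X[:, k])
                  for k in range(n)])

omega = np.ones(n)                   # positive weights (uniform for this sketch)
nu2 = np.sum(omega**2 * kappa**2)
print(nu2 >= n)                      # each kappa >= 1 (Cauchy-Schwarz), so True
```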
Q6. What is the advantage of working with a linearization of Q(λ) and a Krylov subspace method?
One important advantage of working with a linearization of Q(λ) and a Krylov subspace method is that one can get at the same time the partial Schur decomposition of the single matrix S that is used to define the Krylov subspaces.
Q7. What is the spectral transformation used to approximate eigenvalues?
The shift-and-invert spectral transformation f(λ) = 1/(λ − σ) and the Cayley spectral transformation f(λ) = (λ − β)/(λ − σ) (for β ≠ σ), used to approximate eigenvalues λ closest to the shift σ, are other possible spectral transformations that are discussed in [7], for example.
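The effect of both transformations can be verified numerically: the eigenvalues of (A − σI)⁻¹ are 1/(λ − σ), so eigenvalues of A closest to σ become the largest in magnitude, and the eigenvalues of (A − σI)⁻¹(A − βI) are (λ − β)/(λ − σ). The matrix A, the shift σ, and the pole β below are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 6
A = rng.standard_normal((n, n))
sigma, beta = 0.5, -1.0

lam = np.linalg.eigvals(A)
mu = np.linalg.eigvals(np.linalg.inv(A - sigma * np.eye(n)))          # shift-and-invert
mu_c = np.linalg.eigvals(np.linalg.solve(A - sigma * np.eye(n),
                                         A - beta * np.eye(n)))       # Cayley

# Each transformed eigenvalue matches f(lam) for some eigenvalue lam of A.
dist_si = np.min(np.abs(mu[:, None] - 1 / (lam[None, :] - sigma)), axis=1)
dist_c = np.min(np.abs(mu_c[:, None]
                       - (lam[None, :] - beta) / (lam[None, :] - sigma)), axis=1)
print(np.max(dist_si) < 1e-8, np.max(dist_c) < 1e-8)   # True True

# The eigenvalue closest to sigma maps to the largest |mu|.
print(np.isclose(np.max(np.abs(mu)), 1 / np.min(np.abs(lam - sigma))))  # True
```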
Q8. What is the way to deflate a Ritz eigenpair?
If (λ̃, x̃) is a converged Ritz eigenpair that belongs to the set of desired eigenvalues, one may want to lock it and then continue to compute the remaining eigenvalues without altering (λ̃, x̃).
Q9. What is the pseudo-Lanczos algorithm for symmetric pencils?
Parlett and Chen [114] introduced a pseudo-Lanczos algorithm for symmetric pencils that uses an indefinite inner product and respects the symmetry of the problem.
Q10. What is the distribution of the eigenvalues of G(λ) in the complex plane?
Because G(λ)∗ = G(−λ̄) (3.19), the distribution of the eigenvalues of G(λ) in the complex plane is symmetric with respect to the imaginary axis.
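This symmetry can be observed numerically for a gyroscopic-type quadratic λ²M + λC + K with M, K symmetric and C skew-symmetric, which satisfies the property (3.19); the data below is random and illustrative, using a companion linearization:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 4
M = rng.standard_normal((n, n)); M = M + M.T + 3 * n * np.eye(n)   # symmetric, nonsingular
K = rng.standard_normal((n, n)); K = K + K.T                       # symmetric
C = rng.standard_normal((n, n)); C = C - C.T                       # skew-symmetric

# Companion linearization of G(lam) = lam^2 M + lam C + K.
I, Z = np.eye(n), np.zeros((n, n))
L = np.block([[Z, I],
              [-np.linalg.solve(M, K), -np.linalg.solve(M, C)]])
lam = np.linalg.eigvals(L)

# For every eigenvalue lam, -conj(lam) is also an eigenvalue:
# the spectrum is symmetric about the imaginary axis.
dist = np.min(np.abs(lam[:, None] + np.conj(lam)[None, :]), axis=1)
print(np.max(dist) < 1e-8)   # True
```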
Q11. What is the common method for finding eigenpairs of large QEPs?
Most algorithms for large QEPs proceed by generating a sequence of subspaces {Kk}k≥0 that contain increasingly accurate approximations to the desired eigenvectors.
Q12. What is the way to solve the eigenvalue problem?
One approach, adopted by MATLAB 6’s polyeig function for solving the polynomial eigenvalue problem and illustrated in Algorithm 5.1 below, is to use whichever part of ξ̃ yields the smallest backward error (4.7).
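A sketch of that choice follows, on random illustrative data; `backward_error` is a hypothetical helper implementing a (4.7)-style normwise backward error, and the eigenvector of the companion linearization has the form ξ̃ = [x; λx], so both halves are candidate eigenvectors:

```python
import numpy as np

rng = np.random.default_rng(6)
n = 5
M, C, K = rng.standard_normal((3, n, n))          # illustrative random QEP data
Q = lambda lam: lam**2 * M + lam * C + K

# Companion linearization; eigenvectors have the form xi = [x; lam*x].
I, Z = np.eye(n), np.zeros((n, n))
L = np.block([[Z, I],
              [-np.linalg.solve(M, K), -np.linalg.solve(M, C)]])
evals, evecs = np.linalg.eig(L)

def backward_error(lam, x):
    """(4.7)-style normwise backward error of an approximate eigenpair (2-norms)."""
    denom = (abs(lam)**2 * np.linalg.norm(M, 2) + abs(lam) * np.linalg.norm(C, 2)
             + np.linalg.norm(K, 2)) * np.linalg.norm(x)
    return np.linalg.norm(Q(lam) @ x) / denom

# polyeig-style choice: use whichever half of xi gives the smaller backward error.
lam, xi = evals[0], evecs[:, 0]
candidates = [xi[:n], xi[n:]]
errs = [backward_error(lam, x) for x in candidates]
x_best = candidates[int(np.argmin(errs))]
print(min(errs) < 1e-8)   # True: the chosen half is a good eigenvector
```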
Q13. What can be done to solve the symmetric indefinite GEP?
If the pencil is indefinite, the HR [21], [23], LR [124], and Falk–Langemeyer [45] algorithms can be employed in order to take advantage of the symmetry of the GEP, but all these methods can be numerically unstable and can even break down completely.
Q14. What is the disadvantage of the Arnoldi and Lanczos methods?
(Note that the matrix Hk is the Galerkin projection of S and not of A − λB.) A major disadvantage of the shift-and-invert Arnoldi and Lanczos methods is that a change of shift σ requires building a new Krylov subspace: all information built with the old σ is lost.
Q15. What is the way to deflate a converged eigenvalue?
If the converged (λ̃, x̃) does not belong to the set of wanted eigenvalues, one may want to remove it from the current subspace Kk.
Q16. What is the oblique projection of L(λ) onto Kk?
When W = V, Lk is the orthogonal (Galerkin) projection of L(λ) onto Kk; when W ≠ V, Lk is an oblique projection of L(λ) onto Kk along the subspace spanned by the columns of W.
Q17. Why is reorthogonalization of the columns of Vk recommended?
The Gram–Schmidt orthogonalization used to build Vk does not guarantee orthogonality of the columns of Vk in floating point arithmetic, so reorthogonalization is recommended to improve the numerical stability of the method [32], [135].
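The loss of orthogonality, and its cure, can be seen in a small Arnoldi sketch with modified Gram–Schmidt and an optional second orthogonalization pass (the classical "twice is enough" remedy); the ill-conditioned diagonal test matrix is an illustrative choice:

```python
import numpy as np

def arnoldi(A, v, k, reorth=False):
    """Arnoldi with modified Gram-Schmidt; optionally a second
    orthogonalization pass per step ("twice is enough")."""
    n = len(v)
    V = np.zeros((n, k + 1))
    H = np.zeros((k + 1, k))
    V[:, 0] = v / np.linalg.norm(v)
    for j in range(k):
        w = A @ V[:, j]
        for _ in range(2 if reorth else 1):
            for i in range(j + 1):
                h = V[:, i] @ w
                H[i, j] += h
                w -= h * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        V[:, j + 1] = w / H[j + 1, j]
    return V, H

# Severely graded spectrum: orthogonality of V degrades without reorthogonalization.
n, k = 60, 30
A = np.diag(np.logspace(0, 12, n))
v = np.ones(n)
V1, _ = arnoldi(A, v, k, reorth=False)
V2, _ = arnoldi(A, v, k, reorth=True)
err1 = np.linalg.norm(V1.T @ V1 - np.eye(k + 1))   # typically far from 0
err2 = np.linalg.norm(V2.T @ V2 - np.eye(k + 1))   # near machine precision
print(err2 < 1e-10)   # True: the reorthogonalized basis stays orthonormal
```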