
Showing papers on "Divide-and-conquer eigenvalue algorithm published in 1994"



Book ChapterDOI
01 Jan 1994
TL;DR: The Rational Krylov algorithm for the nonsymmetric matrix pencil eigenvalue problem is described: a generalization of the shifted and inverted Lanczos (or Arnoldi) algorithm in which several shifts are used in one run.
Abstract: The Rational Krylov algorithm for the nonsymmetric matrix pencil eigenvalue problem is described. It is a generalization of the shifted and inverted Lanczos (or Arnoldi) algorithm, in which several shifts are used in one run. It computes an orthogonal basis and a small Hessenberg pencil. The eigensolution of the Hessenberg pencil gives Ritz approximations to the solution of the original pencil.

98 citations
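
As a rough illustration of the shift-and-invert idea described in the abstract above (not Ruhe's actual Hessenberg-pencil recurrences), the sketch below builds an orthonormal basis from shift-and-invert steps at several shifts and then extracts Ritz values from the projected pencil. All names (`rational_krylov_sketch`, the shift list) are illustrative, not from the paper.

```python
import numpy as np

def rational_krylov_sketch(A, B, shifts, steps_per_shift=5, v0=None):
    """Simplified rational-Krylov-style sketch: orthonormal basis from
    shift-and-invert steps, Ritz values from the projected pencil."""
    n = A.shape[0]
    v = np.random.default_rng(0).standard_normal(n) if v0 is None else v0
    V = [v / np.linalg.norm(v)]
    for sigma in shifts:                       # several shifts in one run
        M = A - sigma * B                      # in practice, factor once per shift
        for _ in range(steps_per_shift):
            w = np.linalg.solve(M, B @ V[-1])  # shift-and-invert step
            for q in V:                        # Gram-Schmidt re-orthogonalization
                w -= (q @ w) * q
            nrm = np.linalg.norm(w)
            if nrm < 1e-12:
                break
            V.append(w / nrm)
    Q = np.column_stack(V)
    # Ritz approximations from the small projected pencil (Q^T A Q, Q^T B Q)
    return np.linalg.eigvals(np.linalg.solve(Q.T @ B @ Q, Q.T @ A @ Q))
```

In a full implementation each shifted matrix would be factored once and the small Hessenberg pencil would be accumulated during the recurrence rather than formed by explicit projection.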


Journal ArticleDOI
TL;DR: In this article, an analysis of the eigenvalue problem for block matrices and the derivation of two shifted eigenvalue problems that are more suited to numerical solution by iterative algorithms like simultaneous iteration and Arnoldi's method are discussed.
Abstract: Block matrices with a special structure arise from mixed finite element discretizations of incompressible flow problems. This paper is concerned with an analysis of the eigenvalue problem for such matrices and the derivation of two shifted eigenvalue problems that are more suited to numerical solution by iterative algorithms like simultaneous iteration and Arnoldi's method. The application of the shifted eigenvalue problems to the determination of the eigenvalue of smallest real part is discussed and a numerical example arising from a stability analysis of double-diffusive convection is described.

77 citations


Journal ArticleDOI
TL;DR: The integrated structural and control optimization problem is formulated by including constraints on displacements, stresses, and closed-loop eigenvalues and the corresponding damping factors; parallel algorithms are then presented for integrated optimization of structures on shared-memory multiprocessors such as the CRAY YMP 8/864 supercomputer.
Abstract: Optimization of combined structural and control systems is a complex problem requiring an inordinate amount of computer-processing time, especially the solution of the eigenvalue problem of a general unsymmetric square real matrix with complex eigenvalues and eigenvectors, which is frequently encountered in such problems. The few algorithms presented in the literature thus far have been applied to small structures with a few members and controllers only. Parallel processing on new-generation multiprocessor computers provides an opportunity to solve large-scale problems. In this paper, the integrated structural and control optimization problem is formulated by including constraints on displacements, stresses, and closed-loop eigenvalues and the corresponding damping factors. Then, parallel algorithms are presented for integrated optimization of structures on shared-memory multiprocessors such as the CRAY YMP 8/864 supercomputer. In particular, parallel algorithms are presented for the solution of complex eigenvalue problems encountered in structural control problems using the method of matrix iteration for dominant eigenvalue(s). The solution is divided into two parts. The first part is the iteration for dominant eigenvalue(s) and the corresponding eigenvector(s), and the second part is the reduction of the matrix to obtain the smaller eigenvalue(s) and the corresponding eigenvector(s).

68 citations
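
A rough serial illustration of the two-part matrix-iteration scheme described in the entry above (not the authors' parallel implementation): the power method for the dominant eigenpair, followed by Wielandt deflation to expose the next eigenvalue. The sketch assumes the dominant eigenvalue is real and simple; the function names are illustrative.

```python
import numpy as np

def dominant_eigenpair(A, tol=1e-10, max_iter=5000):
    """Matrix (power) iteration for the dominant eigenvalue and eigenvector,
    assuming the dominant eigenvalue is real and simple."""
    x = np.random.default_rng(1).standard_normal(A.shape[0])
    lam = 0.0
    for _ in range(max_iter):
        y = A @ x
        lam_new = x @ y / (x @ x)          # Rayleigh-quotient estimate
        x = y / np.linalg.norm(y)
        if abs(lam_new - lam) < tol * max(1.0, abs(lam_new)):
            return lam_new, x
        lam = lam_new
    return lam, x

def deflate(A, lam, x):
    """Wielandt deflation: shift the converged eigenvalue to zero while the
    remaining eigenvalues of A are preserved."""
    x = x / np.linalg.norm(x)
    return A - lam * np.outer(x, x)

# lam1, v1 = dominant_eigenpair(A)
# lam2, v2 = dominant_eigenpair(deflate(A, lam1, v1))
```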


Journal ArticleDOI
TL;DR: By using coupling methods, lower bounds are obtained for the first Neumann eigenvalue on Riemannian manifolds.
Abstract: By using coupling methods, some lower bounds are obtained for the first Neumann eigenvalue on Riemannian manifolds. This method is new and the results improve some known estimates. An example shows that our estimates can be sharp.

58 citations


Journal ArticleDOI
TL;DR: An error analysis is presented of the Lanczos algorithm in finite-precision arithmetic for solving the standard nonsymmetric eigenvalue problem, assuming no breakdown occurs.
Abstract: This paper presents an error analysis of the Lanczos algorithm in finite-precision arithmetic for solving the standard nonsymmetric eigenvalue problem, if no breakdown occurs. An analog of Paige's theory on the relationship between the loss of orthogonality among the Lanczos vectors and the convergence of Ritz values in the symmetric Lanczos algorithm is discussed. The theory developed illustrates that in the nonsymmetric Lanczos scheme, if Ritz values are well conditioned, then the loss of biorthogonality among the computed Lanczos vectors implies the convergence of a group of Ritz triplets in terms of small residuals. Numerical experimental results confirm this observation.

50 citations


Journal ArticleDOI
TL;DR: In this paper, different formulations of the generalized rank annihilation method (GRAM) are compared and a slightly different eigenvalue problem is derived, which facilitates a comparison with other PCA-based methods for curve resolution and calibration.
Abstract: SUMMARY Rank annihilation factor analysis (RAFA) is a method for multicomponent calibration using two data matrices simultaneously, one for the unknown and one for the calibration sample. In its most general form, the generalized rank annihilation method (GRAM), an eigenvalue problem has to be solved. In this first paper different formulations of GRAM are compared and a slightly different eigenvalue problem will be derived. The eigenvectors of this specific eigenvalue problem constitute the transformation matrix that rotates the abstract factors from principal component analysis (PCA) into their physical counterparts. This reformulation of GRAM facilitates a comparison with other PCA-based methods for curve resolution and calibration. Furthermore, we will discuss two characteristics common to all formulations of GRAM, i.e. the distinct possibility of a complex and degenerate solution. It will be shown that a complex solution, contrary to degeneracy, should not arise for components present in both samples for model data.

50 citations


Journal ArticleDOI
TL;DR: The authors propose two algorithms, based on a double Lie-bracket equation recently studied by Brockett, that appear to be suitable for implementation in parallel processing environments and achieve the eigenvalue decomposition of a symmetric matrix and the singular value decomposition of an arbitrary matrix.
Abstract: Recent work has shown that the algebraic question of determining the eigenvalues, or singular values, of a matrix can be answered by solving certain continuous-time gradient flows on matrix manifolds. To obtain computational methods based on this theory, it is reasonable to develop algorithms that iteratively approximate the continuous-time flows. In this paper the authors propose two algorithms, based on a double Lie-bracket equation recently studied by Brockett, that appear to be suitable for implementation in parallel processing environments. The algorithms presented achieve, respectively, the eigenvalue decomposition of a symmetric matrix and the singular value decomposition of an arbitrary matrix. The algorithms have the same equilibria as the continuous-time flows on which they are based and inherit the exponential convergence of the continuous-time solutions.

48 citations
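
A minimal serial sketch of a discrete double Lie-bracket iteration of the kind studied in the entry above is given below. It uses a fixed small step size rather than the step-size selection analysed in the paper, and the function name and defaults are illustrative; for symmetric input the iterates stay isospectral and drive the matrix toward diagonal form.

```python
import numpy as np
from scipy.linalg import expm

def double_bracket_eig(H0, alpha=0.05, iters=500):
    """Discrete double-bracket iteration H <- exp(-a[H,N]) H exp(a[H,N]) for a
    symmetric matrix H0; alpha must be small enough for convergence."""
    n = H0.shape[0]
    N = np.diag(np.arange(1.0, n + 1.0))   # fixed diagonal "ordering" matrix
    H = H0.copy()
    for _ in range(iters):
        K = H @ N - N @ H                  # Lie bracket [H, N], skew-symmetric
        G = expm(alpha * K)                # orthogonal, so the spectrum is preserved
        H = G.T @ H @ G                    # one discrete double-bracket step
    return np.diag(H)                      # approximate eigenvalues on the diagonal

# rng = np.random.default_rng(0)
# S = rng.standard_normal((5, 5)); H0 = (S + S.T) / 2
# print(np.sort(double_bracket_eig(H0)), np.sort(np.linalg.eigvalsh(H0)))
```

Each step is an orthogonal similarity transformation, which is why the iteration is attractive for parallel implementation: the work is dense matrix multiplication.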



Journal ArticleDOI
TL;DR: It is proved that the generic bulge-chasing algorithm implicitly performs iterations of the generic $GZ$ algorithm, which means that the convergence theorems proved for the generic $GZ$ algorithm hold for the generic bulge-chasing algorithm as well.
Abstract: A generic $GZ$ algorithm for the generalized eigenvalue problem $Ax=\lambda Bx$ is presented. This is actually a large class of algorithms that includes multiple-step $QZ$ and $LZ$ algorithms, as well as $QZ$-$LZ$ hybrids, as special cases. First the convergence properties of the $GZ$ algorithm are discussed, then a study of implementations is undertaken. The notion of an elimination rule is introduced as a device for studying the $QZ$, $LZ$ and other algorithms simultaneously. To each elimination rule there corresponds an explicit $GZ$ algorithm. Through a careful study of the steps involved in executing the explicit algorithm, it is discovered how to implement the algorithm implicitly by bulge chasing. The approach taken here was introduced by Miminis and Paige in the context of the $QR$ algorithm for the ordinary eigenvalue problem. It is more involved than the standard approach, but it yields a much clearer picture of the relationship between the implicit and explicit versions of the algorithm. Furthermore, it is more general than the standard approach, as it does not require the use of a theorem of "Implicit-$Q$" type. Finally a generalization of the implicit $GZ$ algorithm, the generic bulge-chasing algorithm, is introduced. It is proved that the generic bulge-chasing algorithm implicitly performs iterations of the generic $GZ$ algorithm. Thus the convergence theorems that are proved for the generic $GZ$ algorithm hold for the generic bulge-chasing algorithm as well.

34 citations


Journal ArticleDOI
TL;DR: Adaptive Chebyshev iterative methods are presented in which eigenvalue estimates are computed from modified moments determined during the iterations; this requires less computer storage than computing the estimates by a power method and yields faster convergence for many problems.
Abstract: Large, sparse nonsymmetric systems of linear equations with a matrix whose eigenvalues lie in the right half plane may be solved by an iterative method based on Chebyshev polynomials for an interval in the complex plane. Knowledge of the convex hull of the spectrum of the matrix is required in order to choose parameters upon which the iteration depends. Adaptive Chebyshev algorithms, in which these parameters are determined by using eigenvalue estimates computed by the power method or modifications thereof, have been described by Manteuffel [1978]. This paper presents adaptive Chebyshev iterative methods, in which eigenvalue estimates are computed from modified moments determined during the iterations. The computation of eigenvalue estimates from modified moments requires less computer storage than when eigenvalue estimates are computed by a power method and yields faster convergence for many problems.
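
For reference, the non-adaptive core of such a method is the classical Chebyshev iteration for fixed ellipse parameters; the sketch below implements that three-term recurrence, while the paper's contribution, estimating the parameters from modified moments during the run, is not reproduced. The centre d and focal distance c of the ellipse enclosing the spectrum are assumed given (they may be complex).

```python
import numpy as np

def chebyshev_iteration(A, b, d, c, x0=None, iters=50):
    """Three-term Chebyshev iteration for Ax = b, given an ellipse with centre d
    and focal distance c (c != 0) enclosing the spectrum of A."""
    x_prev = np.zeros(len(b), dtype=complex) if x0 is None else np.asarray(x0, dtype=complex)
    r = b - A @ x_prev
    x = x_prev + r / d                       # first step uses the centre only
    s_prev, s = 1.0, d / c                   # scaled Chebyshev values T_k(d/c)
    for _ in range(iters):
        r = b - A @ x
        s_next = (2.0 * d / c) * s - s_prev  # Chebyshev recurrence for T_k(d/c)
        x_next = (2.0 * s / (c * s_next)) * r \
                 + (2.0 * d * s / (c * s_next)) * x \
                 - (s_prev / s_next) * x_prev
        x_prev, x = x, x_next
        s_prev, s = s, s_next
    return x
```

The residual after k steps is the scaled Chebyshev polynomial T_k((d - λ)/c)/T_k(d/c) applied to the initial residual, which is why good estimates of the spectral ellipse matter so much in practice.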

Journal ArticleDOI
TL;DR: In this paper, it is shown that the sampling expansion can be written as a Lagrange interpolation series, provided the kernel satisfies suitable conditions, and two concrete fourth-order eigenvalue problems are examined: a regular one as an application of the general result and a singular one.

Journal ArticleDOI
TL;DR: This algorithm combines the advantages of existing algorithms such as QR, bisection/multisection, and Cuppen’s divide-and-conquer method and is fully parallel and competitive in speed with the most efficient QR algorithm in serial mode.
Abstract: This paper presents an algorithm for the eigenvalue problem of symmetric tridiagonal matrices. The algorithm employs the determinant evaluation, split-and-merge strategy, and the Laguerre iteration. The method directly evaluates eigenvalues and uses inverse iteration as an option when eigenvectors are needed. This algorithm combines the advantages of existing algorithms such as QR, bisection/multisection, and Cuppen’s divide-and-conquer method. It is fully parallel and competitive in speed with the most efficient QR algorithm in serial mode. On the other hand, the algorithm is as accurate as any standard algorithm for the symmetric tridiagonal eigenproblem and enjoys the flexibility in evaluating partial spectrum.
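
The building blocks named in the abstract, determinant (characteristic-polynomial) evaluation and Laguerre's iteration, can be sketched as follows for a small symmetric tridiagonal matrix. The split-and-merge partitioning and the parallel organization of the actual algorithm are not shown, the function names are illustrative, and the plain three-term recurrence used here can over- or underflow for large matrices.

```python
import numpy as np

def char_poly_derivs(a, b, lam):
    """p(lam), p'(lam), p''(lam) for det(T - lam*I), where T is symmetric
    tridiagonal with diagonal a and off-diagonal b."""
    p_mm, p_m = 1.0, a[0] - lam            # p_{i-2}, p_{i-1}
    d_mm, d_m = 0.0, -1.0                  # first derivatives
    s_mm, s_m = 0.0, 0.0                   # second derivatives
    for i in range(1, len(a)):
        beta2 = b[i - 1] ** 2
        p = (a[i] - lam) * p_m - beta2 * p_mm
        d = -p_m + (a[i] - lam) * d_m - beta2 * d_mm
        s = -2.0 * d_m + (a[i] - lam) * s_m - beta2 * s_mm
        p_mm, p_m, d_mm, d_m, s_mm, s_m = p_m, p, d_m, d, s_m, s
    return p_m, d_m, s_m

def laguerre_eig(a, b, x0, tol=1e-12, max_iter=100):
    """Laguerre's iteration for one eigenvalue of the tridiagonal matrix, started at x0."""
    n, x = len(a), float(x0)
    for _ in range(max_iter):
        p, dp, ddp = char_poly_derivs(a, b, x)
        if p == 0.0:
            return x
        G = dp / p
        H = G * G - ddp / p
        root = np.sqrt(max((n - 1) * (n * H - G * G), 0.0))  # real, since all roots are real
        denom = G + root if abs(G + root) > abs(G - root) else G - root
        step = n / denom
        x -= step
        if abs(step) < tol * max(1.0, abs(x)):
            return x
    return x
```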

Journal ArticleDOI
N. Kamiya1, S.T. Wu
TL;DR: A new‐type eigenvalue formulation of the two‐dimensional Helmholtz equation is presented and can reduce the users’ task in preprocessing and initial rough estimation when compared with the existing domain‐type solvers.
Abstract: A new‐type eigenvalue formulation of the two‐dimensional Helmholtz equation is presented in this paper. A boundary integral equation is derived using the T‐complete functions relevant to the Trefftz method, which is further transformed to the generalized eigenvalue problem. Boundary discretization and a standard eigenvalue computation routine, offered as a black box, are sufficient for the determination of the eigenvalues. The proposed method can reduce the users’ task in preprocessing and initial rough estimation when compared with the existing domain‐type solvers.

Journal ArticleDOI
TL;DR: These are the first codes capable of solving numerically such general eigenvalue problems for Hamiltonian systems of ordinary differential equations, and one of them implements a new method of solving a differential equation whose solution is a unitary matrix.
Abstract: This paper discusses the numerical solution of eigenvalue problems for Hamiltonian systems of ordinary differential equations. Two new codes are presented which incorporate the algorithms described here; to the best of the author’s knowledge, these are the first codes capable of solving numerically such general eigenvalue problems. One of these implements a new method of solving a differential equation whose solution is a unitary matrix. Both codes are fully documented, are written in Pfort-verified Fortran 77, and will be available in netlib/aicm/sl11f and netlib/aicm/sl12f.

Journal ArticleDOI
TL;DR: The bisection method is the most efficient in finding the set of eigenvalues of the algebraic eigenvalue problem $Ax = \lambda Bx$, where $A$ and $B$ are symmetric band matrices of order $N$.
Abstract: In the paper, solvability of an algebraic eigenvalue problem of the type $A(\lambda)x = B(\lambda)x$ is studied. Here the matrices $A(\mu)$ and $B(\mu)$ are symmetric and depend on the numerical parameter $\mu$ in a special way. To calculate the eigenvalues a bisecting algorithm is proposed. The algorithm is based on the triangular factorization of the matrix $A(\mu) - \mu B(\mu)$ and the Sylvester theorem on inertia. The method of inverse iterations with shift for determining eigenvectors corresponding to the eigenvalues obtained is studied. The bisection method is the most efficient in finding the set of eigenvalues of the algebraic eigenvalue problem $Ax = \lambda Bx$, where $A$ and $B$ are symmetric band matrices of order $N$. The method is based on the well-known property of the Sturm system $p_0(\lambda), p_1(\lambda), \ldots, p_N(\lambda)$ for the characteristic polynomial $p_N(\lambda) = \det T(\lambda)$, $T(\lambda) = A - \lambda B$, where $p_0(\lambda) = 1$ and $p_i(\lambda)$, $i = 1, \ldots, N$, is the $i$-th principal minor of the matrix $T(\lambda)$. The property is that the number of coincidences (inversions) of signs of the neighbouring polynomials $p_i(\mu)$, $i = 0, 1, \ldots, N$, of the Sturm system is equal to the number of eigenvalues of the problem $Ax = \lambda Bx$ that are larger (smaller) than $\mu$, if $p_i(\mu) \ne 0$, $i = 1, \ldots, N$ [8]. At first the bisection method was used for localizing eigenvalues of tridiagonal matrices, for which the Sturm system is constructed by the well-known three-term recurrence relations (see, for example, [13]). Later such an approach was extended to generalized eigenvalue problems with filled matrices. In this case, checking the signs of the Sturm system polynomials at a point $\mu$ amounts to investigating the signs of the elements of the diagonal matrix $D(\mu)$ in the triangular factorization $A - \mu B = L(\mu) D(\mu) L^{T}(\mu)$ of the Gauss method, where $L(\mu)$ is the lower triangular matrix with unit diagonal elements and $L^{T}(\mu)$ is its transpose. It is easy to check that $D(\mu) = \operatorname{diag}(p_1(\mu)/p_0(\mu), p_2(\mu)/p_1(\mu), \ldots, p_N(\mu)/p_{N-1}(\mu))$, and therefore the number of positive (negative) elements of the matrix $D(\mu)$ coincides with the number of eigenvalues that are larger (smaller) than $\mu$, if $p_i(\mu) \ne 0$, $i = 1, \ldots, N$. This fact allows one to abandon the construction of the Sturm polynomial system and justify the bisecting algorithm by the Sylvester theorem on inertia (see, for example, [7]). Later the algorithm for dividing the spectrum was extended to eigenvalue problems with parameters entering in a nonlinear way. Kuznetsov and Matsokin [5] apply the bisection method to solve the eigenvalue problem $T(\lambda)x = 0$ with a square symmetric matrix $T(\lambda) = A - (1/\lambda)B - \lambda C$ of order $N$, where $A$ is a symmetric positive definite tridiagonal matrix, while $B$ and $C$ are diagonal matrices with positive diagonal elements. Assuming that there exists a point $\lambda^* > 0$ at which the matrix $T(\lambda^*)$ is positive definite, they proved that there exist real eigenvalues $0 < \lambda_{-N} \le \ldots \le \lambda_{-1} < \lambda^* < \lambda_1 \le \ldots \le \lambda_N$ of the problem in hand. They also showed that the number of sign inversions of the neighbouring elements of the Sturm sequence $p_0(\mu), \ldots, p_N(\mu)$ for the polynomial $p_N(\mu) = \det T(\mu)$ at $\mu < \lambda^*$ ($\mu > \lambda^*$) coincides with the number of eigenvalues of the problem $T(\lambda)x = 0$ in the interval $(\mu, \lambda^*)$ [in the interval $(\lambda^*, \mu)$]. Besides, a considerably weakened condition on the matrix $T(\lambda^*)$ was proposed. Sapagovene [9] considers the problem $T(\lambda)x = 0$, where $T(\lambda) = \lambda^2 A + \lambda B - C$, while $A$, $B$, and $C$ are square symmetric matrices of order $N$, with $A$ and $C$ being positive definite. To localize the eigenvalues the triangular factorization $T(\mu) = L(\mu)L^{*}(\mu)$ is used. The factorization is obtained by the square-root method. Here $L(\mu)$ is the lower triangular matrix with elements $l_{ij}(\mu)$, $l_{ij}(\mu) = 0$ for $i < j$, $i, j = 1, \ldots, N$, and $L^{*}(\mu)$ is the conjugate matrix to $L(\mu)$. In this case, the Sturm polynomials are given by the relations
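
A minimal sketch of the spectrum-slicing idea behind such bisection methods is given below for the generalized problem $Ax = \lambda Bx$ with symmetric $A$ and symmetric positive definite $B$: by Sylvester's law of inertia, the number of negative pivots in a triangular factorization of $A - \mu B$ equals the number of eigenvalues below $\mu$. Unpivoted dense elimination is used for brevity (the paper works with band matrices and parameter-dependent $A(\mu)$, $B(\mu)$), and the function names are illustrative.

```python
import numpy as np

def count_eigs_below(A, B, mu):
    """Number of eigenvalues of A x = lambda B x smaller than mu (B positive definite)."""
    M = (A - mu * B).astype(float)
    n = M.shape[0]
    negatives = 0
    for k in range(n):
        pivot = M[k, k]
        if pivot == 0.0:
            raise ZeroDivisionError("zero pivot: shift mu slightly and retry")
        if pivot < 0.0:
            negatives += 1                    # sign of the pivot = sign of D(mu) entry
        # Schur-complement update (unpivoted Gaussian elimination)
        M[k+1:, k+1:] -= np.outer(M[k+1:, k], M[k, k+1:]) / pivot
    return negatives

def kth_eigenvalue(A, B, k, lo, hi, tol=1e-10):
    """Bisection for the k-th smallest eigenvalue (k = 1, 2, ...) inside [lo, hi]."""
    while hi - lo > tol * max(1.0, abs(lo) + abs(hi)):
        mid = 0.5 * (lo + hi)
        if count_eigs_below(A, B, mid) >= k:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```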

Journal ArticleDOI
L. Kaufman1
TL;DR: The implicit QR algorithm as mentioned in this paper is a serial iterative algorithm for determining all the eigenvalues of an n × n symmetric tridiagonal matrix A. In contrast to the original algorithm, which cannot take advantage of the architectures of parallel or vector machines, each iteration of the new algorithm mainly involves synchronous, lock-step operations which can effectively use vector and concurrency capabilities of SIMD machines.

Journal ArticleDOI
TL;DR: In this paper, the class C of integral domains A having the property that each totally real integral element over A is an eigenvalue of a symmetric matrix over A is investigated.

Journal ArticleDOI
TL;DR: Superlinear convergence of the schemes is proved right from the start, which allows the complexity bounds of [3] to be improved, and the effectiveness of the algorithms is confirmed by numerical results.
Abstract: We propose globally convergent iteration schemes for updating the eigenvalues of a symmetric matrix after a rank-1 modification. Such calculations are the core of the divide-and-conquer technique for the symmetric tridiagonal eigenvalue problem. We prove the superlinear convergence right from the start of our schemes which allows us to improve the complexity bounds of [3]. The effectiveness of our algorithms is confirmed by numerical results which are reported and discussed.
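
The core computation, updating the eigenvalues of $D + \rho z z^{T}$ by solving the secular equation $f(\lambda) = 1 + \rho \sum_j z_j^2/(d_j - \lambda) = 0$ on each interval between the old eigenvalues, can be sketched as below. This uses a plain bisection-safeguarded Newton iteration, not the globally convergent schemes of the paper, and assumes $\rho > 0$, distinct sorted $d_j$ and nonzero $z_j$; production codes use more careful interior iterates and stopping tests.

```python
import numpy as np

def secular_roots(d, z, rho, iters=100):
    """Eigenvalues of diag(d) + rho * z z^T from the secular equation
    f(lam) = 1 + rho * sum(z_j^2 / (d_j - lam)) = 0 (rho > 0, d sorted, z_j != 0)."""
    d = np.asarray(d, float)
    z = np.asarray(z, float)
    n = len(d)
    upper = np.append(d[1:], d[-1] + rho * (z @ z))   # root i lies in (d_i, upper_i)
    lams = np.empty(n)
    for i in range(n):
        lo, hi = d[i], upper[i]
        lam = 0.5 * (lo + hi)
        for _ in range(iters):
            diff = d - lam
            f = 1.0 + rho * np.sum(z**2 / diff)
            fp = rho * np.sum(z**2 / diff**2)          # f is increasing between the poles
            if f > 0.0:                                # root lies to the left of lam
                hi = lam
            else:                                      # root lies to the right of lam
                lo = lam
            step = lam - f / fp                        # Newton proposal
            lam = step if lo < step < hi else 0.5 * (lo + hi)  # bisection safeguard
        lams[i] = lam
    return lams
```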

Journal Article
TL;DR: The recently proposed Jacobi-Davidson method for calculating extreme eigenvalues of large matrices is applied to a generalized eigenproblem, leading to an algorithm that computes the extreme eigensolutions of a matrix pencil (A, B), where A and B are general matrices.

Journal ArticleDOI
TL;DR: An algorithm, parallel in nature, for finding eigenvalues of a symmetric definite tridiagonal matrix pencil using the determinant evaluation, split-and-merge strategy and Laguerre's iteration is presented.
Abstract: In this paper we present an algorithm, parallel in nature, for finding eigenvalues of a symmetric definite tridiagonal matrix pencil. Our algorithm employs the determinant evaluation, split-and-merge strategy and Laguerre's iteration. Numerical results on both single and multiprocessor computers are presented which show that our algorithm is reliable, efficient and accurate. It also enjoys flexibility in evaluating a partial spectrum.

Journal ArticleDOI
TL;DR: In this article, the Rayleigh functional of the nonlinear problem is used to improve the eigenvalue approximations considerably, and two examples are presented to demonstrate the efficiency of the method.


Journal ArticleDOI
TL;DR: It is demonstrated that, in each of two problems which involve physical symmetry, the appropriate eigenvalue equation factorizes into two equations, one of which corresponds to solutions which are physical symmetry solutions.
Abstract: It is demonstrated that, in each of two problems which involve physical symmetry, the appropriate eigenvalue equation factorizes into two equations, one of which corresponds to solutions which are ...

Journal ArticleDOI
TL;DR: In this article, the authors considered numerical methods of solving the following eigenvalue problems for singular two-parameter polynomial matrices: separating the continuous and discrete spectra, calculating the points of the discrete spectrum, constructing the minimal null-space basis of polynomials.
Abstract: The paper considers numerical methods of solving the following eigenvalue problems for singular two-parameter polynomial matrices: separating the continuous and discrete spectra, calculating the points of the discrete spectrum, and constructing the minimal null-space basis of polynomial solutions. The methods suggested need no prior linearization of the problem. They are based on the rank factorization of the matrix operator. The proposed methods and algorithms are not justified here. An extended version of the paper that includes other algebraic problems as well as eigenvalue problems is being prepared for the press under the same title. 1. FORMULATION OF THE PROBLEM. Let $P(\lambda,\mu) = C_s(\mu)\lambda^s + \cdots + C_0(\mu)$ (1.1) be an $m \times n$ two-parameter polynomial matrix of rank $p < \min(m, n)$, where $C_k(\mu)$, $k = 0, 1, \ldots, s$, are matrices of size $m \times n$ that are polynomial with respect to $\mu$. Two methods are suggested in order to solve eigenvalue problems for $P(\lambda,\mu)$. The first one is based on prior linearization of the problem in one of the parameters; the method consists in passing to the companion pencil $A(\mu) - \lambda B(\mu)$ of larger polynomial matrices. The second method is based on the rank factorization of $P(\lambda,\mu)$ of the form $P(\lambda,\mu) W(\lambda,\mu) = \{\Delta(\lambda,\mu), 0\}$ (1.2), which is further referred to as the $\Delta W$-2 factorization of $P(\lambda,\mu)$. Here $0$ is the $m \times (n - p)$ zero matrix, and $\Delta(\lambda,\mu)$ is an $m \times p$ two-parameter polynomial matrix of full column rank. The matrix $W(\lambda,\mu)$ is unimodular with respect to $\lambda$ and regular with respect to $\mu$, and hence $\det W(\lambda,\mu) \equiv \varphi(\mu)$ is a scalar polynomial with respect to $\mu$. Such a matrix will be called a $\mu$-modular one. We pose the problem of finding nontrivial solutions to the equations $P(\lambda,\mu)x = 0$ and $y P(\lambda,\mu) = 0$, which are the right- and left-hand eigenvectors that correspond to points of the matrix spectrum, and the right- and left-hand polynomial solutions that belong to the right and left null-spaces of the matrix $P(\lambda,\mu)$, respectively. The spectrum of the matrix $P(\lambda,\mu)$ is the locus of points $(\lambda_0, \mu_0)$ at which its rank drops below $p$; the spectrum point coordinates are the solutions to a system of nonlinear algebraic equations.


01 Jun 1994
TL;DR: In this paper, a method for eigenvalue and eigenvector approximate analysis for the case of repeated eigenvalues with distinct first derivatives is presented, which involves a reparameterization of the multivariable structural eigenvalue problem in terms of a single positive-valued parameter.
Abstract: A method for eigenvalue and eigenvector approximate analysis for the case of repeated eigenvalues with distinct first derivatives is presented. The approximate analysis method developed involves a reparameterization of the multivariable structural eigenvalue problem in terms of a single positive-valued parameter. The resulting equations yield first-order approximations to changes in the eigenvalues and the eigenvectors associated with the repeated eigenvalue problem. This work also presents a numerical technique that facilitates the definition of an eigenvector derivative for the case of repeated eigenvalues with repeated eigenvalue derivatives (of all orders). Examples are given which demonstrate the application of such equations for sensitivity and approximate analysis. Emphasis is placed on the application of sensitivity analysis to large-scale structural and controls-structures optimization problems.

Journal ArticleDOI
TL;DR: The results obtained complete the stability picture, augmenting the energy stability results; here a real eigenvalue of a Hermitian eigenvalue problem had to be determined.
Abstract: Some of the most challenging eigenvalue problems arise in the stability analysis of solutions to parameter-dependent nonlinear partial differential equations. Linearized stability analysis requires the computation of a certain purely imaginary eigenvalue pair of a very large, sparse complex matrix pencil. A computational strategy, the core of which is a method of inverse iteration type with preconditioned conjugate gradients, is used to solve this problem for the stability of thermocapillary convection. This convection arises in the float-zone model of crystal growth governed by the Boussinesq equations. The results obtained complete the stability picture augmenting the energy stability results [Mittelmann, et al., SIAM J. Sci. Statist. Comput., 13 (1992), pp. 411–424] and recent experimental results. Here a real eigenvalue of a Hermitian eigenvalue problem had to be determined.

Posted Content
TL;DR: In this article, a recursive method is derived to calculate all eigenvalue correlation functions of a random Hermitian matrix in the large size limit, and after smoothing of the short scale oscillations.
Abstract: A recursive method is derived to calculate all eigenvalue correlation functions of a random Hermitian matrix in the large size limit, and after smoothing of the short scale oscillations. The property that the two-point function is universal is recovered, and the three- and four-point functions are given explicitly. One observes that higher order correlation functions are linear combinations of universal functions with coefficients depending on an increasing number of parameters of the matrix distribution.

Journal ArticleDOI
H.C. Chen1
TL;DR: The method uses a special set of trial vectors to reduce a large eigenvalue problem to a much smaller one, takes full advantage of the sparseness and symmetry of the system matrices, and requires no complex arithmetic, making it very economical for solving large-sized problems.