
Showing papers on "Divide-and-conquer eigenvalue algorithm published in 1995"


Journal ArticleDOI
TL;DR: A computationally efficient implementation of rigorous coupled-wave analysis is presented in this article, where the eigenvalue problem for a one-dimensional grating in a conical mounting is reduced to two eigenvalue problems in the corresponding nonconical mounting, yielding two n × n matrices to solve for eigenvalues and eigenvectors.
Abstract: A computationally efficient implementation of rigorous coupled-wave analysis is presented. The eigenvalue problem for a one-dimensional grating in a conical mounting is reduced to two eigenvalue problems in the corresponding nonconical mounting. This reduction yields two n × n matrices to solve for eigenvalues and eigenvectors, where n is the number of orders retained in the computation. For a two-dimensional grating, the size of the matrix in the eigenvalue problem is reduced to 2n × 2n. These simplifications reduce the computation time for the eigenvalue problem by 8–32 times compared with the original computation time. In addition, we show that with rigorous coupled-wave analysis one analytically satisfies reciprocity by retaining the appropriate choice of spatial harmonics in the analysis. Numerical examples are given for metallic lamellar gratings, pulse-width-modulated gratings, deep continuous surface-relief gratings, and two-dimensional gratings.

197 citations


Journal ArticleDOI
TL;DR: The main idea is to minimize the maximum eigenvalue subject to a constraint that this eigenvalue has a certain multiplicity, and the manifold $\Omega$ of matrices with such multiple eigenvalues is parameterized using a matrix exponential representation, leading to the definition of an appropriate Lagrangian function.
Abstract: Let $A$ denote an $n \times n$ real symmetric matrix-valued function depending on a vector of real parameters, $x \in \Re^{m}$. Assume that $A$ is a twice continuously differentiable function of $x$, with the second derivative satisfying a Lipschitz condition. Consider the following optimization problem: minimize the largest eigenvalue of $A(x)$. Let $x^*$ denote a minimum. Typically, the maximum eigenvalue of $A(x^*)$ is multiple, so the objective function is not differentiable at $x^*$, and straightforward application of Newton's method is not possible. Nonetheless, the formulation of a method with local quadratic convergence is possible. The main idea is to minimize the maximum eigenvalue subject to a constraint that this eigenvalue has a certain multiplicity. The manifold $\Omega$ of matrices with such multiple eigenvalues is parameterized using a matrix exponential representation, leading to the definition of an appropriate Lagrangian function. Consideration of the Hessian of this Lagrangian function leads to the second derivative matrix used by Newton's method. The convergence proof is nonstandard because the parameterization of $\Omega$ is explicitly known only in the limit. In the special case of multiplicity one, the maximum eigenvalue is a smooth function and the method reduces to a standard Newton iteration.

95 citations
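In the smooth, multiplicity-one case mentioned at the end of the abstract, the largest eigenvalue of an affine family A(x) = A_0 + Σ x_i A_i is differentiable with gradient ∂λ_max/∂x_i = vᵀA_i v, where v is the eigenvector of λ_max. A minimal sketch of minimizing λ_max with that gradient and an off-the-shelf quasi-Newton solver (random stand-in matrices; this is not the paper's constrained Newton method):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n, m = 5, 2
sym = lambda M: (M + M.T) / 2
A0 = sym(rng.standard_normal((n, n)))          # illustrative data
As = [sym(rng.standard_normal((n, n))) for _ in range(m)]

def f_and_grad(x):
    w, V = np.linalg.eigh(A0 + sum(xi * Ai for xi, Ai in zip(x, As)))
    v = V[:, -1]                               # eigenvector of lambda_max
    # gradient formula valid only while lambda_max stays simple
    return w[-1], np.array([v @ Ai @ v for Ai in As])

res = minimize(f_and_grad, np.zeros(m), jac=True, method="BFGS")
```

If the minimizer has a multiple largest eigenvalue, this gradient-only iteration stalls or converges slowly, which is exactly the motivation for the paper's multiplicity-constrained Newton method.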



Journal ArticleDOI
TL;DR: In this paper, the authors derive linear inequalities which relate the spectrum of a set of Hermitian matrices A1, ..., Ar ∈ C^{n×n} with the spectrum of the sum A1 + ··· + Ar.
Abstract: Using techniques from algebraic topology, we derive linear inequalities which relate the spectrum of a set of Hermitian matrices A1, ..., Ar ∈ C^{n×n} with the spectrum of the sum A1 + ··· + Ar. These extend eigenvalue inequalities due to Freede–Thompson and Horn for sums of eigenvalues of two Hermitian matrices.

52 citations
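The simplest inequalities of this type are Weyl's: with eigenvalues sorted in decreasing order, λ_{i+j−1}(A+B) ≤ λ_i(A) + λ_j(B) for i+j−1 ≤ n. A quick numerical check on random Hermitian matrices (illustrative data only):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
herm = lambda M: (M + M.conj().T) / 2
A = herm(rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))
B = herm(rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))

desc = lambda M: np.sort(np.linalg.eigvalsh(M))[::-1]  # decreasing order
la, lb, lab = desc(A), desc(B), desc(A + B)

# Weyl: lam_{i+j-1}(A+B) <= lam_i(A) + lam_j(B), 1-based, i+j-1 <= n
ok = all(lab[i + j] <= la[i] + lb[j] + 1e-10
         for i in range(n) for j in range(n - i))
```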


Journal ArticleDOI
TL;DR: The purpose of this paper is to assess the quality of eigenvalue preconditioning and to propose strategies to improve robustness; the robustness of the generalized Davidson method is confirmed.

52 citations



Journal ArticleDOI
TL;DR: In this paper, the problem of diffusions on finitely ramified fractals is translated into the theory of Dirichlet forms, and a different fixed-point approach, Hilbert's projective metric on cones, is applied.

41 citations


01 Jan 1995
TL;DR: This work illustrates how monotonicity can fail, and incorrect eigenvalues may be computed, because of roundoff or as a result of using networks of heterogeneous parallel processors, and shows several ways to fix it.
Abstract: Bisection is a parallelizable method for finding the eigenvalues of real symmetric tridiagonal matrices, or more generally symmetric acyclic matrices. Ideally, one would like an implementation that was simultaneously parallel, load balanced, devoid of communication, capable of running on networks of heterogeneous workstations, and of course correct. But this is surprisingly difficult to achieve. The reason is that bisection requires a function Count(x) which counts the number of eigenvalues less than x. In exact arithmetic Count(x) is a monotonic increasing function of x, and the logic of the algorithm depends on this. However, monotonicity can fail, and incorrect eigenvalues may be computed, because of roundoff or as a result of using networks of heterogeneous parallel processors. We illustrate this problem, which even arises in some serial algorithms like the EISPACK routine bisect, and show several ways to fix it. One of these ways has been incorporated into the ScaLAPACK library.

36 citations
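For a symmetric tridiagonal matrix, Count(x) is a few lines: a Sturm-sequence (LDLᵀ pivot) sweep. A serial toy sketch of Count and the bisection built on it (not the ScaLAPACK code; the paper's point is precisely that a floating-point Count like this need not be monotonic in x, especially across heterogeneous processors):

```python
import numpy as np

def count_less_than(d, e, x):
    """Sturm count: number of eigenvalues of the symmetric tridiagonal
    matrix (diagonal d, off-diagonal e) lying strictly below x."""
    count, t = 0, 1.0
    for k in range(len(d)):
        off2 = e[k - 1] ** 2 if k > 0 else 0.0
        t = d[k] - x - off2 / t
        if t == 0.0:                 # guard a zero pivot
            t = -np.finfo(float).tiny
        if t < 0.0:
            count += 1
    return count

def kth_eigenvalue(d, e, k, tol=1e-12):
    """k-th smallest eigenvalue (0-based) via bisection on Count."""
    # Gershgorin interval containing all eigenvalues
    r = np.zeros(len(d))
    r[:-1] += np.abs(e)
    r[1:] += np.abs(e)
    lo, hi = float(np.min(d - r)), float(np.max(d + r))
    while hi - lo > tol * max(1.0, abs(lo), abs(hi)):
        mid = 0.5 * (lo + hi)
        if count_less_than(d, e, mid) <= k:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# demo on a random tridiagonal matrix
rng = np.random.default_rng(0)
d, e = rng.standard_normal(8), rng.standard_normal(7)
T = np.diag(d) + np.diag(e, 1) + np.diag(e, -1)
ref = np.linalg.eigvalsh(T)          # ascending reference eigenvalues
lam3 = kth_eigenvalue(d, e, 3)
```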


Journal ArticleDOI
TL;DR: In this article, the nonlinear eigenvalue problem for the p-Laplacian is considered, and the existence and C^{1,α}-regularity of the weak solution are proved.
Abstract: The nonlinear eigenvalue problem for the p-Laplacian is considered. We assume that 1 < p < N and that the function f is of subcritical growth with respect to the variable u. The existence and C^{1,α}-regularity of the weak solution are proved.

34 citations


Journal ArticleDOI
TL;DR: The numerical solution of the nonlinear eigenvalue problem A(λ)x = 0, where the matrix A(λ) depends nonlinearly on the eigenvalue parameter λ, is considered.
Abstract: We consider the numerical solution of the nonlinear eigenvalue problem A(λ)x = 0, where the matrix A(λ) depends on the eigenvalue parameter λ nonlinearly. Some new methods (the BDS methods) are presented, together with an analysis of the condition of the methods. Numerical examples comparing the methods are given.

31 citations
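The flavor of such problems can be shown with a toy A(λ) and scalar root-finding on det A(λ) — an illustration only, not the paper's BDS methods, and the matrix data is made up. λ is an eigenvalue exactly when A(λ) is singular; the corresponding eigenvector is a null vector of A(λ):

```python
import numpy as np
from scipy.optimize import brentq

# toy nonlinear dependence: A(lam) = K - lam*I + 0.1*sin(lam)*I
K = np.array([[2.0, -1.0], [-1.0, 2.0]])
A = lambda lam: K - lam * np.eye(2) + 0.1 * np.sin(lam) * np.eye(2)

# det A changes sign on [0, 2], so a real eigenvalue lies inside
lam_star = brentq(lambda lam: np.linalg.det(A(lam)), 0.0, 2.0)

# null vector of A(lam_star) = eigenvector (last right singular vector)
x = np.linalg.svd(A(lam_star))[2][-1]
```

Determinant-based scanning only works for tiny dense matrices; for realistic sizes one needs structured methods such as those analyzed in the paper.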


Journal ArticleDOI
TL;DR: In this paper, an efficient numerical algorithm was proposed to compute a few of the smallest positive eigenvalues of the quadratic eigenvalue problem and their associated eigenvectors.


Journal ArticleDOI
TL;DR: In this article, the first non-trivial eigenvalue of second-order self-adjoint elliptic operators in R^d was studied, and lower bounds were obtained by using a probabilistic approach and some geometric considerations.

Journal ArticleDOI
TL;DR: Two primal-dual interior point algorithms are presented for the problem of maximizing the smallest eigenvalue of a symmetric matrix over diagonal perturbations; they prove to be simple, robust, and efficient.
Abstract: Two primal-dual interior point algorithms are presented for the problem of maximizing the smallest eigenvalue of a symmetric matrix over diagonal perturbations. These algorithms prove to be simple, robust, and efficient. Both algorithms are based on transforming the problem to one with constraints over the cone of positive semidefinite matrices, i.e., Löwner order constraints. One of the algorithms does this transformation through an intermediate transformation to a trust region subproblem. This allows the removal of a dense row
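For comparison only (not the paper's interior point method), the same objective can be attacked with a general-purpose NLP solver. The zero-sum normalization on the perturbation d is an assumption here to make the problem bounded, since λ_min(A + diag(d)) grows without limit as d → +∞·1; the data is random:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
n = 4
A = rng.standard_normal((n, n))
A = (A + A.T) / 2                    # symmetric test matrix

lam_min = lambda d: np.linalg.eigvalsh(A + np.diag(d))[0]

# maximize the smallest eigenvalue over zero-sum diagonal perturbations
res = minimize(lambda d: -lam_min(d), np.zeros(n), method="SLSQP",
               constraints=[{"type": "eq", "fun": lambda d: d.sum()}])
```

λ_min is concave in d, so this is a concave maximization; the nonsmoothness at multiple eigenvalues is what makes specialized methods like the paper's attractive.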

Journal ArticleDOI
TL;DR: A robust and efficient, adaptive MG eigenvalue algorithm is described, using the multigrid projection (MGP) coupled with backrotations and robustness tests to overcome major computational difficulties related to equal and closely clustered eigenvalues.
Abstract: Multigrid (MG) algorithms for large-scale eigenvalue problems (EP), obtained from discretizations of partial differential EP, have often been shown to be more efficient than single level eigenvalue algorithms. This paper describes a robust and efficient, adaptive MG eigenvalue algorithm. The robustness of the present approach is a result of a combination of MG techniques introduced here, i.e., the completion of clusters; the adaptive treatment of clusters; the simultaneous treatment of solutions in each cluster; the multigrid projection (MGP) coupled with backrotations; and robustness tests. Due to the MGP, the algorithm achieves a better computational complexity and better convergence rates than previous MG eigenvalue algorithms that use only fine level projections. These techniques overcome major computational difficulties related to equal and closely clustered eigenvalues. Some of these difficulties were not treated in previous MG algorithms. Computational examples for the Schrödinger eigenvalue problem in two and three dimensions are demonstrated for cases of special computational difficulties, which are due to equal and closely clustered eigenvalues. For these cases, the algorithm requires O(qN) operations for the calculation of q eigenvectors of size N, using a second order approximation. The total computational cost is equivalent to only a few Gauss-Seidel relaxations per eigenvector.

Journal ArticleDOI
TL;DR: An optimality condition is given which ensures that the largest eigenvalue is within an ϵ error bound of the solution; a new line search rule is proposed and shown to have good descent properties.


Journal ArticleDOI
TL;DR: The algorithm overcomes the above-mentioned difficulties by combining the following techniques: a MG simultaneous treatment of the eigenvectors and nonlinearity, and of the global constraints; MG stable subspace continuation techniques for the treatment of nonlinearity; and a MG projection coupled with backrotations for separation of solutions.
Abstract: Algorithms for nonlinear eigenvalue problems (EP's) often require solving self-consistently a large number of EP's. Convergence difficulties may occur if the solution is not sought in an appropriate region, if global constraints have to be satisfied, or if close or equal eigenvalues are present. Multigrid (MG) algorithms for nonlinear problems and for EP's obtained from discretizations of partial differential EP have often been shown to be more efficient than single level algorithms. This paper presents MG techniques and a MG algorithm for nonlinear Schrödinger–Poisson EP's. The algorithm overcomes the above-mentioned difficulties by combining the following techniques: a MG simultaneous treatment of the eigenvectors and nonlinearity, and of the global constraints; MG stable subspace continuation techniques for the treatment of nonlinearity; and a MG projection coupled with backrotations for separation of solutions. These techniques keep the solutions in an appropriate region, where the algorithm converges fast, and reduce the large number of self-consistent iterations to only a few or one MG simultaneous iteration. The MG projection makes it possible to efficiently overcome difficulties related to clusters of close and equal eigenvalues. Computational examples for the nonlinear Schrödinger–Poisson EP in two and three dimensions, presenting special computational difficulties that are due to the nonlinearity and to the equal and closely clustered eigenvalues, are demonstrated. For these cases, the algorithm requires O(qN) operations for the calculation of q eigenvectors of size N and for the corresponding eigenvalues. One MG simultaneous cycle per fine level was performed. The total computational cost is equivalent to only a few Gauss-Seidel relaxations per eigenvector. An asymptotic convergence rate of 0.15 per MG cycle is attained.

Journal ArticleDOI
TL;DR: In this article, a random-force method is proposed for nonlinear eigenvalue problems, in which a random disturbance is applied, and the response to this gives the eigenvector.
Abstract: In nonlinear eigenvalue problems, the standard method for calculating eigenvectors is to first calculate the eigenvalue. The nonlinear governing matrix is then formed using the calculated eigenvalue, a random disturbance is applied, and the response to this gives the eigenvector. In stiffness analyses this is known as the random-force method. It is well established that this approach gives eigenvectors with accuracy of the same order as the eigenvalue, provided the eigenvector is “well represented” by the parameters used in the problem description—the “freedoms.” However, in nonlinear formulations some modes may be poorly represented, or completely unrepresented, by freedom movements—the latter are referred to as “u = 0” modes. The eigenvalues for these modes are found in the normal course of the analysis, but the application of random forces will give modes of lower accuracy, or in the case of the “u = 0” modes, no accuracy at all. A complement to the commonly used random-force method is shown to give eigenmode accuracy that is similar to the eigenvalue accuracy, whether the mode is well represented, poorly represented, or not represented by the freedom movements.
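With a linear stand-in for the nonlinear governing matrix (K − λI instead of a true nonlinear formulation; random data), the random-force idea amounts to one step of inverse iteration: near a computed eigenvalue the matrix is nearly singular, so the response to a random force is dominated by the corresponding eigenvector.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 6
K = rng.standard_normal((n, n))
K = (K + K.T) / 2                       # symmetric "stiffness" matrix
w, V = np.linalg.eigh(K)

lam = w[0] + 1e-8                       # "computed" eigenvalue, slightly off
f = rng.standard_normal(n)              # random force
x = np.linalg.solve(K - lam * np.eye(n), f)   # response to the random force
x /= np.linalg.norm(x)
alignment = abs(x @ V[:, 0])            # overlap with the true eigenvector
```

The method fails exactly when the random force has no component along the desired mode in the chosen freedoms, which is the "u = 0" failure case the paper addresses.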

Journal ArticleDOI
TL;DR: The results presented here improve and unify some previous results and generalize the well-known inequality that the spectral radius is bounded by the trace for symmetric positive semidefinite matrices to block form.
Abstract: Eigenvalue estimates of block incomplete preconditioners are considered. We investigate how the block diagonal entries and off-block diagonal entries influence the bounds of all eigenvalues. The results presented here improve and unify some previous results. We generalize the well-known inequality that the spectral radius is bounded by the trace for symmetric positive semidefinite matrices to block form. Some of the methods can also be useful to estimate lower bounds of block incomplete preconditioners.
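The scalar inequality being generalized — for a symmetric positive semidefinite M, the spectral radius ρ(M) = λ_max(M) is bounded by tr(M), since all eigenvalues are nonnegative and the trace is their sum — can be checked directly on random data:

```python
import numpy as np

rng = np.random.default_rng(4)
B = rng.standard_normal((5, 5))
M = B @ B.T                              # symmetric positive semidefinite
rho = np.max(np.abs(np.linalg.eigvalsh(M)))   # spectral radius
tr = np.trace(M)
```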

Journal ArticleDOI
TL;DR: The state of the art of the algorithmic techniques and the software scene for the nonsymmetric eigenvalue problem is examined.
Abstract: With the growing demands from disciplinary and interdisciplinary fields of science and engineering for the numerical solution of the nonsymmetric eigenvalue problem, competitive new techniques have been developed for solving the problem. In this paper we examine the state of the art of the algorithmic techniques and the software scene for the problem. Some current developments are also outlined.

BookDOI
01 Jan 1995
TL;DR: The Workshop on Operator Theory and Boundary Eigenvalue Problems, the seventh in the IWOTA (International Workshops on Operator Theory and Applications) series, was held at the Technical University of Vienna, Austria, from 27 to 30 July 1993.
Abstract: The Workshop on Operator Theory and Boundary Eigenvalue Problems was held at the Technical University, Vienna, Austria, July 27 to 30, 1993. It was the seventh workshop in the series of IWOTA (International Workshops on Operator Theory and Applications). The main topics at the workshop were interpolation problems and analytic matrix functions, operator theory in spaces with indefinite scalar products, boundary value problems for differential...

Proceedings ArticleDOI
22 Sep 1995
TL;DR: Computationally efficient and stable implementations of rigorous coupled-wave analysis for 1D and 2D surface-relief gratings are presented; the required computer memory is decreased, so that complicated grating diffraction problems can be solved efficiently.
Abstract: Computationally efficient and stable implementations of rigorous coupled-wave analysis for 1D and 2D surface-relief gratings are presented. The eigenvalue problem for a 1D grating in a conical mounting is reduced to two eigenvalue problems in the corresponding nonconical mounting. The matrix in the eigenvalue problem of 2D gratings is reduced in size by a factor of two. These simplifications reduce the computation time for the eigenvalue problem by 8 to 32 times compared to the original computation time. The required computer memory is also decreased, so that complicated grating diffraction problems can be solved efficiently.

Journal ArticleDOI
TL;DR: In this paper, the authors compare two strategies to compute singular points based on different eigenvalue problems and show a simple algorithm to calculate critical load factors used in engineering buckling analysis from the eigenvalues of the standard eigenvalue problem (K_T − ωI)φ = 0.
Abstract: The investigation of the non-linear response of shell-like structures requires insight into stability behaviour. In the paper we compare two strategies to compute singular points based on different eigenvalue problems. We show a simple algorithm to calculate critical load factors Λ used in engineering buckling analysis from the eigenvalues of the standard eigenvalue problem (K_T − ωI)φ = 0. Some numerical examples illustrate the derived results and algorithms.
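A toy version of this buckling computation, with made-up 2×2 stand-ins for the elastic stiffness K0 and geometric stiffness Kg: the critical load factor Λ is where the smallest eigenvalue ω of the tangent stiffness K_T(Λ) = K0 − Λ·Kg crosses zero.

```python
import numpy as np
from scipy.optimize import brentq

K0 = np.array([[4.0, -1.0], [-1.0, 3.0]])  # elastic stiffness (toy data)
Kg = np.array([[1.0, 0.2], [0.2, 0.5]])    # geometric stiffness (toy data)

# smallest eigenvalue omega of the tangent stiffness K_T = K0 - L*Kg
omega_min = lambda L: np.linalg.eigvalsh(K0 - L * Kg)[0]

# critical load factor: omega crosses zero and stability is lost
L_crit = brentq(omega_min, 0.0, 20.0)
```

Equivalently, L_crit is the smallest positive eigenvalue of the generalized problem K0 φ = Λ Kg φ; tracking ω along the load path, as sketched here, is the strategy the abstract describes.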

Journal ArticleDOI
Ji-guang Sun1
TL;DR: In this article, the optimal Hermitian backward perturbations for a Hermitian eigenvalue problem were studied, and it was shown that not all the optimal perturbations are small when the computed eigenvectors have a small residual and are close to orthonormal.
Abstract: Through different orthogonal decompositions of computed eigenvectors we can define different Hermitian backward perturbations for a Hermitian eigenvalue problem. Certain optimal Hermitian backward perturbations are studied. The results show that not all the optimal Hermitian backward perturbations are small when the computed eigenvectors have a small residual and are close to orthonormal.

01 Jan 1995
TL;DR: In this article, sufficient conditions for the solvability of algebraic inverse eigenvalue problems are given, which are better than those of the paper [4] in some cases.
Abstract: Applying constructed homotopy and its properties, we get some sufficient conditions for the solvability of algebraic inverse eigenvalue problems, which are better than those of the paper [4] in some cases. Keywords: inverse eigenvalue problems, solvability, sufficient conditions.

01 Jan 1995
TL;DR: This work has developed a parallel adaptive eigenvalue solver and applied it to a model problem in theoretical materials science, which reduces computation time and memory consumption by more than two orders of magnitude.
Abstract: We have developed a parallel adaptive eigenvalue solver and applied it to a model problem in theoretical materials science. Our method combines adaptive mesh refinement techniques with a novel multigrid eigenvalue algorithm. By exploiting adaptivity, we have reduced computation time and memory consumption by more than two orders of magnitude. We have implemented our solver using the LPARX parallel programming system, which considerably simplified the programming and enabled us to run the same code on a diversity of high performance parallel architectures.

Journal ArticleDOI
TL;DR: Wielandt proved an eigenvalue inequality for partitioned symmetric matrices which has turned out to be very useful in statistical applications; in this paper a simple proof yielding sharp bounds is given.