Author

Bahram Nour-Omid

Bio: Bahram Nour-Omid is an academic researcher from the University of California, Berkeley. The author has contributed to research in topics: Lanczos algorithm & Lanczos resampling. The author has an h-index of 13 and has co-authored 16 publications receiving 694 citations.

Papers
Journal ArticleDOI
TL;DR: In this article, the Lanczos vectors are obtained by an inverse iteration procedure in which orthogonality is imposed between the vectors resulting from successive iteration cycles; transforming the equations of motion to tridiagonal form with these vectors provides a very efficient time-stepping solution.
Abstract: A procedure for deriving the Lanczos vectors is explained and their use in structural dynamics analysis as an alternative to modal co-ordinates is discussed. The vectors are obtained by an inverse iteration procedure in which orthogonality is imposed between the vectors resulting from successive iteration cycles. Using these Lanczos vectors the equations of motion are transformed to tridiagonal form, which provides for a very efficient time-stepping solution. The effectiveness of the method is demonstrated by a numerical example.
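The recurrence described above can be sketched as follows. This is an illustrative reconstruction, not the paper's exact implementation: the dense `np.linalg.solve` stands in for a factorization of the stiffness matrix K, and the function and variable names are assumptions.

```python
import numpy as np

def lanczos_vectors(K, M, r0, m):
    """Sketch: m M-orthonormal Lanczos vectors for the pair (K, M).

    Each step applies one inverse iteration (solve K x = M q) and
    M-orthogonalizes the result against the previous vectors, so that
    Q.T @ M @ (K^{-1} M) @ Q is the tridiagonal matrix T.
    """
    n = K.shape[0]
    Q = np.zeros((n, m))
    alpha = np.zeros(m)
    beta = np.zeros(m - 1)
    q = r0 / np.sqrt(r0 @ (M @ r0))          # M-normalize the start vector
    q_prev = np.zeros(n)
    b = 0.0
    for j in range(m):
        Q[:, j] = q
        x = np.linalg.solve(K, M @ q)        # one inverse-iteration step
        alpha[j] = x @ (M @ q)
        x = x - alpha[j] * q - b * q_prev    # three-term recurrence
        # full reorthogonalization against all previous vectors
        x = x - Q[:, :j + 1] @ (Q[:, :j + 1].T @ (M @ x))
        b = np.sqrt(x @ (M @ x))
        if j < m - 1:
            beta[j] = b
        q_prev, q = q, x / b
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    return Q, T
```

Projecting the equations of motion onto the columns of Q then replaces the full system with the small tridiagonal T, which is what makes the time stepping cheap.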

156 citations

Journal ArticleDOI
TL;DR: In this article, a recent version of the Lanczos method and the subspace method are used to solve the eigenproblem in structural analysis, and the advantages of these methods are discussed.
Abstract: In this paper we consider solution of the eigenproblem in structural analysis using a recent version of the Lanczos method and the subspace method. The two methods are applied to examples and we conclude that the Lanczos method has advantages which are too good to overlook.
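For reference, the subspace method the abstract compares against can be sketched in a few lines; this is a generic textbook subspace iteration with Rayleigh-Ritz, not the paper's specific variant, and the dense solves stand in for factorizations.

```python
import numpy as np
from scipy.linalg import eigh, solve

def subspace_iteration(K, M, p, iters=60):
    """Sketch: classical subspace iteration for the p smallest eigenpairs of (K, M)."""
    rng = np.random.default_rng(0)
    X = rng.standard_normal((K.shape[0], p))
    for _ in range(iters):
        X = solve(K, M @ X)                   # inverse-iterate the whole block
        Kp, Mp = X.T @ K @ X, X.T @ M @ X     # Rayleigh-Ritz projection
        w, V = eigh(Kp, Mp)                   # small dense eigenproblem
        X = X @ V                             # rotate to the Ritz vectors
    return w, X
```

Each sweep requires a block solve and a small dense eigenproblem; the Lanczos method reaches comparable accuracy with one solve per new vector, which is the efficiency advantage the paper argues for.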

86 citations

Journal ArticleDOI
TL;DR: In this paper, the generalized linear eigenvalue equation (H − λM)z = 0, where H and M are real symmetric matrices with M positive semidefinite, is transformed so that the Lanczos algorithm can be used to compute eigenpairs (λ, z).
Abstract: The general linear eigenvalue equation (H − λM)z = 0, where H and M are real symmetric matrices with M positive semidefinite, must be transformed if the Lanczos algorithm is to be used to compute eigenpairs (λ, z). When the matrices are large and sparse (but not diagonal) some factorization must be performed as part of the transformation step. If we are interested in only a few eigenvalues λ near a specified shift, then the spectral transformation of Ericsson and Ruhe [1] proved itself much superior to traditional methods of reduction. The purpose of this note is to show that a small variant of the spectral transformation is preferable in all respects. Perhaps the lack of symmetry in our formulation deterred previous investigators from choosing it. It arises in the use of inverse iteration. A second goal is to introduce a systematic modification of the computed Ritz vectors, which improves the accuracy when M is ill-conditioned or singular. We confine our attention to the simple Lanczos algorithm, although the first two sections apply directly to the block algorithms as well.

1. Overview. This contribution is an addendum to the paper by Ericsson and Ruhe [1] and also [7]. The value of the spectral transformation is reiterated in a later section. Here we outline our implementation of this transformation. The equation to be solved, for an eigenvalue λ and eigenvector z, is

(1) (H − λM)z = 0,

where H and M are real symmetric n × n matrices, and M is positive semidefinite. A practical instance of (1) occurs in dynamic analysis of structures, where H and M are the stiffness and mass matrices, respectively. We assume that a linear combination of H and M is positive definite. It then follows that all eigenvalues λ are real. In addition, one has a real scalar σ, distinct from any eigenvalue, and we seek a few eigenvalues λ close to σ, together with their eigenvectors z.

Ericsson and Ruhe replace (1) by a standard eigenvalue equation

(2) [C(H − σM)⁻¹Cᵀ − νI]y = 0,

where C is the Choleski factor of M; M = CᵀC and y = Cz, with ν = 1/(λ − σ). If M is singular then so is C, but fortunately the eigenvector z can be recovered from y via z = (H − σM)⁻¹Cᵀy. Of course, there is no intention to invert (H − σM) explicitly.

Received May 14, 1984; revised December 20, 1985. © 1987 American Mathematical Society.
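The practical effect of the spectral transformation is that eigenvalues λ near the shift σ map to large, well-separated values ν = 1/(λ − σ), which Lanczos finds quickly. SciPy's ARPACK wrapper applies the same shift-invert idea; the matrices below (a 1-D Laplacian for H, the identity for M) are illustrative stand-ins.

```python
import numpy as np
from scipy.sparse import diags, identity
from scipy.sparse.linalg import eigsh

n = 200
# Illustrative H and M: a sparse 1-D Laplacian and the identity mass matrix.
H = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n)).tocsc()
M = identity(n, format="csc")

# Shift-invert: internally works with nu = 1/(lambda - sigma), so the
# eigenvalues nearest sigma = 0.5 converge first.
lam, Z = eigsh(H, k=4, M=M, sigma=0.5)
```

The factorization of (H − σM) that the abstract alludes to happens inside the library; only solves with that factor, never an explicit inverse, are used.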

74 citations

Journal ArticleDOI
TL;DR: A new class of algorithms for transient finite element analysis that is amenable to efficient implementation on parallel computers is proposed; the algorithms are shown to be unconditionally stable over a certain range of the algorithmic parameters.
Abstract: A new class of algorithms for transient finite element analysis which is amenable to an efficient implementation on parallel computers is proposed. The suitability of the method for parallel computation stems from the fact that, given an arbitrary partition of the finite element mesh, each subdomain in the partition can be processed over a time step independently and simultaneously with the rest. Both element-by-element and coarse partitions of the mesh are discussed. For the former, the proposed algorithms are shown to have the structure of an explicit scheme. In particular, no global equation solving effort is involved in the update procedure. However, in contrast to explicit schemes, the proposed algorithms are shown to be unconditionally stable over a certain range of the algorithmic parameters. In structural dynamics problems, good accuracy is obtained with constant-time-step integration. For heat conduction problems, accuracy limitations suggest the use of a step-changing technique. When this is done, numerical tests indicate the good behavior of the method. The case in which the mesh is partitioned into a small number of subdomains, typically as many as there are processors in the computer, is also explored in detail. Good accuracy is obtained over a wide range of time steps. Finally, extensions to second- and higher-order accuracy methods are discussed.
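The idea of advancing each subdomain independently within a time step can be illustrated with a toy scheme; this is not the paper's algorithm, just a minimal sketch in the same spirit, for the 1-D heat equation with two partitions and zero end values, where each subdomain performs only a local implicit solve using the neighbour's previous-step interface value.

```python
import numpy as np

def staggered_step(u, r, split):
    """One step of a toy partitioned scheme for the 1-D heat equation.

    Each subdomain does its own local backward-Euler solve, taking the
    neighbour's previous-step interface value as boundary data, so the two
    solves are independent and could run on separate processors.
    r = dt / h**2; the outer ends are held at zero.
    """
    n = len(u)
    new = u.copy()
    for lo, hi in [(0, split), (split, n)]:       # the two subdomains
        m = hi - lo
        A = (1 + 2*r)*np.eye(m) - r*(np.eye(m, k=1) + np.eye(m, k=-1))
        b = u[lo:hi].copy()
        if lo > 0:
            b[0] += r * u[lo - 1]     # frozen interface value from neighbour
        if hi < n:
            b[-1] += r * u[hi]        # frozen interface value from neighbour
        new[lo:hi] = np.linalg.solve(A, b)        # purely local solve
    return new
```

No global system is ever assembled; the splitting error introduced by freezing the interface values is what the paper's parameter choices and step-changing technique are designed to control.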

57 citations

Journal ArticleDOI
TL;DR: One way to precondition without forming any large matrices is shown and the trade-off between time and storage is examined for the 1-D model problem and for the analysis of several realistic structures.
Abstract: The finite element method is a good way to turn elliptic boundary value problems into large symmetric systems of equations. These large matrices, A, are usually assembled from small ones. It is simple to omit the assembly process and use the code to accumulate the product Av for any v. Consequently the conjugate gradient algorithm (CG) can be used to solve Ax = b without ever forming A. It is well known, however, that CG performs best when applied to preconditioned systems. In this paper we show one way to precondition without forming any large matrices. The trade-off between time and storage is examined for the 1-D model problem and for the analysis of several realistic structures.
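The matrix-free idea is easy to demonstrate with SciPy's LinearOperator: supply only a routine that accumulates Av element by element, and hand it to CG. The element routine below (a 1-D chain of two-node elements with one end pinned by a spring) is an illustrative stand-in for a finite element code, not the paper's preconditioner.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

n = 50
ke = np.array([[1.0, -1.0], [-1.0, 1.0]])        # element "stiffness" matrix

def matvec(v):
    """Accumulate A @ v element by element; A itself is never assembled."""
    out = np.zeros(n)
    for e in range(n - 1):
        out[e:e + 2] += ke @ v[e:e + 2]
    out[0] += v[0]                               # boundary spring keeps A SPD
    return out

A = LinearOperator((n, n), matvec=matvec, dtype=float)

# A diagonal (Jacobi) preconditioner, also accumulated element by element,
# so preconditioning too avoids forming any large matrix.
d = np.zeros(n)
for e in range(n - 1):
    d[e:e + 2] += np.diag(ke)
d[0] += 1.0
Minv = LinearOperator((n, n), matvec=lambda v: v / d, dtype=float)

b = np.ones(n)
x, info = cg(A, b, M=Minv)                       # info == 0 on convergence
```

Only a handful of vectors are stored at any time, which is exactly the time-versus-storage trade-off the abstract refers to.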

45 citations


Cited by
Journal ArticleDOI
TL;DR: A novel domain decomposition approach for the parallel finite element solution of equilibrium equations is presented, which exhibits a degree of parallelism that is not limited by the bandwidth of the finite element system of equations.
Abstract: A novel domain decomposition approach for the parallel finite element solution of equilibrium equations is presented. The spatial domain is partitioned into a set of totally disconnected subdomains, each assigned to an individual processor. Lagrange multipliers are introduced to enforce compatibility at the interface nodes. In the static case, each floating subdomain induces a local singularity that is resolved in two phases. First, the rigid body modes are eliminated in parallel from each local problem and a direct scheme is applied concurrently to all subdomains in order to recover each partial local solution. Next, the contributions of these modes are related to the Lagrange multipliers through an orthogonality condition. A parallel conjugate projected gradient algorithm is developed for the solution of the coupled system of local rigid modes components and Lagrange multipliers, which completes the solution of the problem. When implemented on local memory multiprocessors, this proposed method of tearing and interconnecting requires less interprocessor communications than the classical method of substructuring. It is also suitable for parallel/vector computers with shared memory. Moreover, unlike parallel direct solvers, it exhibits a degree of parallelism that is not limited by the bandwidth of the finite element system of equations.
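A toy "tear and interconnect" example makes the mechanics concrete; this is a deliberately simplified sketch, not the full method: a fixed-fixed bar of N unit elements is torn at node s by duplicating that node, and a single Lagrange multiplier enforces compatibility between the two copies. Both subdomains are pinned at an outer end, so neither floats and the rigid-body-mode treatment described above is not needed here.

```python
import numpy as np

N, s = 12, 5                                  # N elements; tear at node s

def sub_stiffness(ndof, torn_end):
    """Assembled stiffness of one pinned subdomain of unit elements."""
    K = 2.0*np.eye(ndof) - np.eye(ndof, k=1) - np.eye(ndof, k=-1)
    K[torn_end, torn_end] = 1.0               # torn node touches one element
    return K

K1 = sub_stiffness(s, s - 1)                  # dofs at nodes 1..s
K2 = sub_stiffness(N - s, 0)                  # dofs at nodes s..N-1
n1, n2 = s, N - s
f1, f2 = np.ones(n1), np.ones(n2)
f1[-1] = f2[0] = 0.5                          # split the duplicated node's load

B = np.zeros((1, n1 + n2))
B[0, n1 - 1], B[0, n1] = 1.0, -1.0            # constraint: u_left - u_right = 0

S = np.block([[K1,                 np.zeros((n1, n2)), B[:, :n1].T],
              [np.zeros((n2, n1)), K2,                 B[:, n1:].T],
              [B[:, :n1],          B[:, n1:],          np.zeros((1, 1))]])
sol = np.linalg.solve(S, np.concatenate([f1, f2, [0.0]]))
u, lam = sol[:n1 + n2], sol[-1]               # displacements and interface force
```

In the actual method the saddle-point system is never solved monolithically like this: the subdomain blocks are handled concurrently and only the (much smaller) multiplier system couples them, which is where the conjugate projected gradient algorithm comes in.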

1,302 citations

Dissertation
01 May 1997
TL;DR: The cornerstone of this dissertation is a collection of theory relating Krylov projection to rational interpolation, based on which three algorithms for model reduction are proposed, one of which avoids orthogonalization and is suited for parallel or approximate computations.
Abstract: This dissertation focuses on efficiently forming reduced-order models for large, linear dynamic systems. Projections onto unions of Krylov subspaces lead to a class of reduced-order models known as rational interpolants. The cornerstone of this dissertation is a collection of theory relating Krylov projection to rational interpolation. Based on this theoretical framework, three algorithms for model reduction are proposed. The first algorithm, dual rational Arnoldi, is a numerically reliable approach involving orthogonal projection matrices. The second, rational Lanczos, is an efficient generalization of existing Lanczos-based methods. The third, rational power Krylov, avoids orthogonalization and is suited for parallel or approximate computations. The performance of the three algorithms is compared via a combination of theory and examples. Independent of the precise algorithm, a host of supporting tools are also developed to form a complete model-reduction package. Techniques for choosing the matching frequencies, estimating the modeling error, ensuring the model's stability, treating multiple-input multiple-output systems, implementing parallelism, and avoiding a need for exact factors of large matrix pencils are all examined to various degrees.
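The Krylov-projection-to-rational-interpolation connection can be sketched in a few lines: an orthonormal basis of the shifted Krylov subspace built from (s0·I − A)⁻¹b gives a reduced model whose transfer function H(s) = cᵀ(sI − A)⁻¹b matches q moments at the expansion point s0. The system (A, b, c) below is a random illustrative example, and the dense inverse stands in for a factorization.

```python
import numpy as np

rng = np.random.default_rng(1)
n, q, s0 = 60, 6, 1.0
A = -2.0*np.eye(n) + 0.1*rng.standard_normal((n, n))   # illustrative system
b, c = rng.standard_normal(n), rng.standard_normal(n)

F = np.linalg.inv(s0*np.eye(n) - A)          # dense stand-in for a factorization
V = np.zeros((n, q))
w = F @ b                                     # first shifted Krylov direction
V[:, 0] = w / np.linalg.norm(w)
for j in range(1, q):
    w = F @ V[:, j - 1]
    w -= V[:, :j] @ (V[:, :j].T @ w)          # Arnoldi orthogonalization
    V[:, j] = w / np.linalg.norm(w)

# One-sided (Galerkin) projection gives the reduced-order model.
Ar, br, cr = V.T @ A @ V, V.T @ b, V.T @ c

def H(s, A_, b_, c_):
    """Transfer function of the (possibly reduced) system."""
    return c_ @ np.linalg.solve(s*np.eye(len(b_)) - A_, b_)
```

By the interpolation theory the dissertation develops, the reduced transfer function agrees with the full one at s0 (and in its first q − 1 derivatives there), even though the reduced system has only q states.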

817 citations

Journal ArticleDOI
TL;DR: A deflation procedure is introduced that is designed to improve the convergence of an implicitly restarted Arnoldi iteration for computing a few eigenvalues of a large matrix and implicitly deflates the converged approximations from the iteration.
Abstract: A deflation procedure is introduced that is designed to improve the convergence of an implicitly restarted Arnoldi iteration for computing a few eigenvalues of a large matrix. As the iteration progresses, the Ritz value approximations of the eigenvalues converge at different rates. A numerically stable scheme is introduced that implicitly deflates the converged approximations from the iteration. We present two forms of implicit deflation. The first, a locking operation, decouples converged Ritz values and associated vectors from the active part of the iteration. The second, a purging operation, removes unwanted but converged Ritz pairs. Convergence of the iteration is improved and a reduction in computational effort is also achieved. The deflation strategies make it possible to compute multiple or clustered eigenvalues with a single vector restart method. A block method is not required. These schemes are analyzed with respect to numerical stability, and computational results are presented.
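From a user's point of view, this machinery is what runs inside ARPACK, which SciPy wraps: the implicitly restarted iteration with locking and purging of converged Ritz pairs happens inside the library. A minimal usage sketch on a symmetric matrix with known eigenvalues 1..n:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import eigsh

n = 300
A = diags(np.arange(1.0, n + 1))   # diagonal test matrix, eigenvalues 1..n

# ARPACK's implicitly restarted iteration; converged Ritz pairs are
# deflated (locked or purged) internally as described above.
vals, vecs = eigsh(A, k=5)         # five largest eigenvalues (default which="LM")
```

Because deflation lets a single-vector restart method resolve clustered eigenvalues, no block variant is needed even when the wanted eigenvalues are close together.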

654 citations

Book
05 Oct 2017
TL;DR: In this article, the authors present the theory, methods and applications of matrix analysis in a new theoretical framework, allowing readers to understand second-order and higher-order matrix analysis.
Abstract: This balanced and comprehensive study presents the theory, methods and applications of matrix analysis in a new theoretical framework, allowing readers to understand second-order and higher-order matrix analysis in a completely new light. Alongside the core subjects in matrix analysis, such as singular value analysis, the solution of matrix equations and eigenanalysis, the author introduces new applications and perspectives that are unique to this book. The very topical subjects of gradient analysis and optimization play a central role here. Also included are subspace analysis, projection analysis and tensor analysis, subjects which are often neglected in other books. Having provided a solid foundation to the subject, the author goes on to place particular emphasis on the many applications matrix analysis has in science and engineering, making this book suitable for scientists, engineers and graduate students alike.

613 citations

Journal ArticleDOI
TL;DR: This paper gives an overview of the recent progress in other Krylov subspace techniques for a variety of dynamical systems, including second-order and nonlinear systems, and case studies arising from circuit simulation, structural dynamics and microelectromechanical systems are presented.

606 citations