
Showing papers on "Coefficient matrix published in 2002"


Journal ArticleDOI
TL;DR: In this paper, a relocatable system for generalized inverse (GI) modeling of barotropic ocean tides is described, where the GI penalty functional is minimized using a representer method, which requires repeated solution of the forward and adjoint linearized shallow water equations.
Abstract: A computationally efficient relocatable system for generalized inverse (GI) modeling of barotropic ocean tides is described. The GI penalty functional is minimized using a representer method, which requires repeated solution of the forward and adjoint linearized shallow water equations (SWEs). To make representer computations efficient, the SWEs are solved in the frequency domain by factoring the coefficient matrix for a finite-difference discretization of the second-order wave equation in elevation. Once this matrix is factored, representers can be calculated rapidly. By retaining the first-order SWE system (defined in terms of both elevations and currents) in the definition of the discretized GI penalty functional, complete generality in the choice of dynamical error covariances is retained. This allows rational assumptions about errors in the SWE, with soft momentum balance constraints (e.g., to account for inaccurate parameterization of dissipation), while mass conservation is retained as a hard constraint. ...
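The efficiency hinges on factoring the frequency-domain coefficient matrix once and reusing the factorization for every representer solve. A minimal sketch of that factor-once, solve-many pattern with scipy's sparse LU; the test matrix and right-hand sides are placeholders, not the discretized wave equation from the paper:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Illustrative sparse, nonsymmetric system standing in for the discretized
# second-order wave equation in elevation (frequency domain).
n = 2000
A = sp.diags([-1.0, 2.5, -1.2], [-1, 0, 1], shape=(n, n), format="csc")

lu = spla.splu(A)          # factor the coefficient matrix once

# Each "representer" calculation amounts to a solve with a new right-hand side;
# reusing the factorization makes the repeated solves cheap.
rhs_list = [np.random.default_rng(k).standard_normal(n) for k in range(5)]
representers = [lu.solve(b) for b in rhs_list]

print(max(np.linalg.norm(A @ x - b) for x, b in zip(representers, rhs_list)))
```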

3,133 citations


Journal ArticleDOI
TL;DR: In this paper, the authors studied efficient iterative methods for the large sparse non-Hermitian positive definite system of linear equations based on the Hermitian and skew-hermitian splitting of the coefficient matrix.
Abstract: We study efficient iterative methods for the large sparse non-Hermitian positive definite system of linear equations based on the Hermitian and skew-Hermitian splitting of the coefficient matrix. These methods include a Hermitian/skew-Hermitian splitting (HSS) iteration and its inexact variant, the inexact Hermitian/skew-Hermitian splitting (IHSS) iteration, which employs some Krylov subspace methods as its inner iteration processes at each step of the outer HSS iteration. Theoretical analyses show that the HSS method converges unconditionally to the unique solution of the system of linear equations. Moreover, we derive an upper bound of the contraction factor of the HSS iteration which is dependent solely on the spectrum of the Hermitian part and is independent of the eigenvectors of the matrices involved. Numerical examples are presented to illustrate the effectiveness of both HSS and IHSS iterations. In addition, a model problem of a three-dimensional convection-diffusion equation is used to illustrate the advantages of our methods.
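The plain HSS iteration described in the abstract can be sketched in a few lines of numpy; the shift parameter alpha, the random test matrix, and the stopping rule below are arbitrary illustrative choices, not those of the paper:

```python
import numpy as np

def hss(A, b, alpha=1.0, tol=1e-10, maxit=500):
    """Hermitian/skew-Hermitian splitting (HSS) iteration for A x = b."""
    n = A.shape[0]
    H = 0.5 * (A + A.conj().T)          # Hermitian part
    S = 0.5 * (A - A.conj().T)          # skew-Hermitian part
    I = np.eye(n)
    x = np.zeros(n, dtype=A.dtype)
    for k in range(maxit):
        # first half-step:  (alpha I + H) x_half = (alpha I - S) x + b
        x_half = np.linalg.solve(alpha * I + H, (alpha * I - S) @ x + b)
        # second half-step: (alpha I + S) x_new  = (alpha I - H) x_half + b
        x = np.linalg.solve(alpha * I + S, (alpha * I - H) @ x_half + b)
        if np.linalg.norm(b - A @ x) <= tol * np.linalg.norm(b):
            return x, k + 1
    return x, maxit

rng = np.random.default_rng(0)
n = 50
A = n * np.eye(n) + rng.standard_normal((n, n))   # non-Hermitian, Hermitian part positive definite
b = rng.standard_normal(n)

# Shift chosen to minimize the known upper bound on the contraction factor,
# which depends only on the extreme eigenvalues of the Hermitian part.
evH = np.linalg.eigvalsh(0.5 * (A + A.T))
x, its = hss(A, b, alpha=np.sqrt(evH[0] * evH[-1]))
print(its, np.linalg.norm(A @ x - b))
```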

860 citations


Journal ArticleDOI
TL;DR: A tutorial survey of numerical algorithms for the practical treatment of discretized deconvolution problems, with emphasis on methods that take the special Toeplitz structure of the coefficient matrix into account, including the discretization and solution of 2-D deconvolution problems whose variables separate.
Abstract: By deconvolution we mean the solution of a linear first-kind integral equation with a convolution-type kernel, i.e., a kernel that depends only on the difference between the two independent variables. Deconvolution problems are special cases of linear first-kind Fredholm integral equations, whose treatment requires the use of regularization methods. The corresponding computational problem takes the form of structured matrix problem with a Toeplitz or block Toeplitz coefficient matrix. The aim of this paper is to present a tutorial survey of numerical algorithms for the practical treatment of these discretized deconvolution problems, with emphasis on methods that take the special structure of the matrix into account. Wherever possible, analogies to classical DFT-based deconvolution problems are drawn. Among other things, we present direct methods for regularization with Toeplitz matrices, and we show how Toeplitz matrix–vector products are computed by means of FFT, being useful in iterative methods. We also introduce the Kronecker product and show how it is used in the discretization and solution of 2-D deconvolution problems whose variables separate.
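One building block mentioned above, the FFT-based Toeplitz matrix–vector product, is obtained by embedding the Toeplitz matrix in a circulant matrix. A self-contained sketch (the example first column and row are made up):

```python
import numpy as np
from scipy.linalg import toeplitz

def toeplitz_matvec(c, r, x):
    """Multiply the Toeplitz matrix T = toeplitz(c, r) by x using FFTs.

    c: first column, r: first row (r[0] must equal c[0]).  T is embedded in a
    circulant of size 2n-1 whose eigenvalues are the FFT of its first column.
    """
    n = len(c)
    col = np.concatenate([c, r[1:][::-1]])       # first column of the circulant
    y = np.fft.ifft(np.fft.fft(col) * np.fft.fft(np.r_[x, np.zeros(n - 1)]))
    return y[:n].real if np.isrealobj(c) and np.isrealobj(x) else y[:n]

n = 6
c = np.array([4.0, 1.0, 0.5, 0.25, 0.1, 0.05])   # first column
r = np.array([4.0, 2.0, 1.0, 0.5, 0.2, 0.1])     # first row
x = np.arange(1.0, n + 1)
print(toeplitz_matvec(c, r, x))
print(toeplitz(c, r) @ x)                        # dense reference result
```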

173 citations


Patent
03 Sep 2002
TL;DR: In this article, a method for finding a maximum likelihood solution for vector data is proposed, comprising: providing a sample vector (100), iteratively match-filtering said sample vector with a coefficient matrix to find a gradient (154), using the gradient to search for a maximum likelihood solution (160), and deciding if a found solution of vector data is good enough (114-116).
Abstract: A method (Fig. 1B) of finding a maximum likelihood solution for vector data, comprising: providing a sample vector (100); iteratively match-filtering said sample vector with a coefficient matrix to find a gradient (154); using the gradient to search for a maximum likelihood solution (160); and deciding if a found solution of vector data is good enough (114-116).

169 citations


Journal ArticleDOI
TL;DR: In this paper, an alternative solution procedure based on the singular value decomposition of the coefficient matrix is suggested and it is shown that the numerical results are extremely accurate (often within machine precision) and relatively independent of the location of the source points.
Abstract: The method of fundamental solutions (also known as the singularity or the source method) is a useful technique for solving linear partial differential equations such as the Laplace or the Helmholtz equation. The procedure involves only boundary collocation or boundary fitting and hence is a very fast procedure for the solution of these classes of problems. The resulting coefficient matrix, is however ill-conditioned and hence the solution accuracy is sensitive to the location of the source points. In this paper, an alternative solution procedure based on the singular value decomposition of the coefficient matrix is suggested and it is shown that the numerical results are extremely accurate (often within machine precision) and relatively independent of the location of the source points. Copyright © 2002 John Wiley & Sons, Ltd.
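A hedged sketch of the idea for the 2-D Laplace equation on the unit disk: the ill-conditioned MFS collocation system is solved with an SVD-based least-squares routine rather than a direct solve. The source radius, point counts, and harmonic test solution are arbitrary choices, not the paper's:

```python
import numpy as np

# Boundary collocation points on the unit circle, source points on a larger circle.
m, R = 60, 2.5
tb = 2 * np.pi * np.arange(m) / m
ts = 2 * np.pi * (np.arange(m) + 0.5) / m
xb = np.c_[np.cos(tb), np.sin(tb)]          # collocation points on the boundary
xs = R * np.c_[np.cos(ts), np.sin(ts)]      # source points outside the domain

def exact(p):
    return p[..., 0] ** 2 - p[..., 1] ** 2  # harmonic test solution x^2 - y^2

# Fundamental solution of the Laplacian: G(x, y) = -log|x - y| / (2 pi).
dist = np.linalg.norm(xb[:, None, :] - xs[None, :, :], axis=2)
A = -np.log(dist) / (2 * np.pi)             # severely ill-conditioned coefficient matrix

# SVD-based least squares (numpy's lstsq) instead of a direct solve.
coef, *_ = np.linalg.lstsq(A, exact(xb), rcond=None)

# Evaluate the MFS approximation at an interior point and compare with the exact value.
p = np.array([0.3, 0.4])
u = -np.log(np.linalg.norm(p - xs, axis=1)) / (2 * np.pi) @ coef
print(u, exact(p))
```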

144 citations


Journal ArticleDOI
TL;DR: In this article, the authors propose a method for locating the corner of the L-curve, which determines the regularizing iteration number; this is useful when either the right-hand side or the coefficient matrix is given with errors.

129 citations


Journal ArticleDOI
TL;DR: To compute the smallest eigenvalues and associated eigenvectors of a real symmetric matrix, the Jacobi–Davidson method with inner preconditioned conjugate gradient iterations for the arising linear systems is considered, and it is shown that the coefficient matrix of these systems is indeed positive definite with the smallest eigenvalue bounded away from zero.
Abstract: To compute the smallest eigenvalues and associated eigenvectors of a real symmetric matrix, we consider the Jacobi–Davidson method with inner preconditioned conjugate gradient iterations for the arising linear systems. We show that the coefficient matrix of these systems is indeed positive definite with the smallest eigenvalue bounded away from zero. We also establish a relation between the residual norm reduction in these inner linear systems and the convergence of the outer process towards the desired eigenpair. From a theoretical point of view, this allows to prove the optimality of the method, in the sense that solving the eigenproblem implies only a moderate overhead compared with solving a linear system. From a practical point of view, this allows to set up a stopping strategy for the inner iterations that minimizes this overhead by exiting precisely at the moment where further progress would be useless with respect to the convergence of the outer process. These results are numerically illustrated on some model example. Direct comparison with some other eigensolvers is also provided. Copyright © 2001 John Wiley & Sons, Ltd.

108 citations


Journal ArticleDOI
TL;DR: The condition for an ellipsoid to be an invariant set of a linear system under a saturated linear feedback is presented and can be easily used for optimization-based analysis and design.
Abstract: We present a necessary and sufficient condition for an ellipsoid to be an invariant set of a linear system under a saturated linear feedback. The condition is given in terms of linear matrix inequalities (LMIs) and can be easily used for optimization-based analysis and design.
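Checking the saturated-feedback LMI condition requires an LMI solver, but the underlying invariant-ellipsoid idea can be illustrated for the unsaturated linear case with a Lyapunov equation: if A_cl is stable and A_cl^T P + P A_cl = -Q with Q positive definite, every ellipsoid {x : x^T P x <= c} is invariant. A sketch under those assumptions (the matrices are made up, and this is not the paper's saturated condition):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Closed-loop matrix A_cl = A + B K for an illustrative stabilized double integrator.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
K = np.array([[-2.0, -3.0]])
Acl = A + B @ K

# Solve A_cl^T P + P A_cl = -Q for P, with Q = I.
Q = np.eye(2)
P = solve_continuous_lyapunov(Acl.T, -Q)

# Invariance check: V(x) = x^T P x decreases along trajectories, since
# d/dt V = x^T (A_cl^T P + P A_cl) x = -x^T Q x < 0 for x != 0.
x = np.array([1.0, -0.5])
print(np.linalg.eigvalsh(P))                 # P is positive definite
print(x @ (Acl.T @ P + P @ Acl) @ x)         # negative: the ellipsoids of P are invariant
```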

107 citations


Journal ArticleDOI
TL;DR: In this paper, the null space of a linear system of homogeneous equations is constructed from the cofactors of an augmented coefficient matrix; the augmenting vector is chosen linearly independent of the row space, and the resulting null-space vector of cofactors is shown to be invariant.
Abstract: This paper develops an approach for constructing the null space N(A) of a linear system of homogeneous equations using the cofactors of an augmented coefficient matrix A. The relationship between the row space R(A^T) and the null space is exploited by introducing an augmenting vector which is linearly independent of the row space and dependent on the null space. The resultant null space is shown to be a vector of cofactors of the augmenting row of the coefficient matrix and is invariant. This provides a straightforward solution to a linear system of homogeneous equations without going through Gauss–Seidel elimination. The approach is derived from a one-dimensional null space and is extended to a multidimensional one by partitioning the coefficient matrix and consequently constructing a set of (n − m) null-space vectors based on cofactors. Examples are given and accuracy is compared with Gauss–Seidel elimination. The approach is further used in a screw–algebra context with a simple procedure to obtain a system of reciprocal screws representing a set of constraint wrenches from a set of twists of freedom, in the form of a linear system of homogeneous equations in R^6. The paper provides rigorous proofs and applications in both linear algebra and advanced kinematics.
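The one-dimensional case of the cofactor construction is easy to reproduce: for an (n − 1) × n coefficient matrix of full row rank, the i-th component of a null vector is the signed determinant of the matrix with column i deleted, i.e. the cofactor of the i-th entry of an augmenting row. A sketch with an arbitrary test matrix:

```python
import numpy as np

def null_vector_by_cofactors(A):
    """Null-space vector of an (n-1) x n full-row-rank matrix via cofactors.

    Component i is (-1)**i times the determinant of A with column i removed,
    i.e. the cofactor of the i-th entry of an augmenting row."""
    m, n = A.shape
    assert m == n - 1
    return np.array([(-1) ** i * np.linalg.det(np.delete(A, i, axis=1))
                     for i in range(n)])

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 7.0]])
v = null_vector_by_cofactors(A)
print(v, A @ v)          # A @ v is (numerically) the zero vector
```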

106 citations


Journal ArticleDOI
TL;DR: This paper describes some new algorithms derived by developing efficient methods for the Schur complement systems arising from the Navier-Stokes equations, and demonstrates their effectiveness for solving both steady-state and evolutionary problems.

103 citations


Journal ArticleDOI
TL;DR: Results demonstrate that the SSOR preconditioning strategy is especially effective for the CG iterative method when an edge-based FEM is applied to solve large-scale time-harmonic electromagnetic-field problems.
Abstract: The symmetric successive overrelaxation (SSOR) preconditioning scheme is applied to the conjugate-gradient (CG) method for solving a large system of linear equations resulting from the use of the edge-based finite-element method (FEM). For this scheme, no additional computing time is required to construct the preconditioning matrix, and it contains more global information of the coefficient matrix than the banded-matrix preconditioning scheme. The efficient implementation of this preconditioned CG (PCG) algorithm is described in detail for the complex coefficient matrix. With SSOR as the preconditioner and its efficient implementation in the CG algorithm, this PCG approach reaches convergence in CPU time about five times shorter than that of CG for several typical structures. By comparison with other preconditioning techniques, these results demonstrate that the SSOR preconditioning strategy is especially effective for the CG iterative method when an edge-based FEM is applied to solve large-scale time-harmonic electromagnetic-field problems.
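A sketch of an SSOR-preconditioned CG solve for a real symmetric positive definite sparse system, with the preconditioner applied through two triangular solves. The test matrix and relaxation factor are placeholders, and the paper's systems are complex-valued edge-FEM matrices rather than this simple Laplacian:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Illustrative SPD test matrix: 2-D Laplacian on a 40 x 40 grid.
n = 40
T = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n))
A = sp.kronsum(T, T).tocsr()
b = np.ones(A.shape[0])

omega = 1.2
D = sp.diags(A.diagonal())
lower = (D / omega + sp.tril(A, k=-1)).tocsr()   # D/omega + L (lower triangular)
upper = (D / omega + sp.triu(A, k=1)).tocsr()    # D/omega + U (upper triangular)

def ssor_apply(r):
    """Apply M^{-1} r for M = (D/w + L)(D/w)^{-1}(D/w + U); the usual scalar
    normalization is omitted since a constant scaling does not change CG."""
    y = spla.spsolve_triangular(lower, r, lower=True)
    return spla.spsolve_triangular(upper, (D / omega) @ y, lower=False)

M = spla.LinearOperator(A.shape, matvec=ssor_apply, dtype=A.dtype)
x, info = spla.cg(A, b, M=M)
print(info, np.linalg.norm(A @ x - b))
```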


Journal ArticleDOI
TL;DR: The authors present a new algorithm for computing the capacitance of three-dimensional electrical conductors of complex structure; it is significantly faster, uses much less memory than the previous best algorithms, and is kernel independent.
Abstract: The authors present a new algorithm for computing the capacitance of three-dimensional electrical conductors of complex structures. The new algorithm is significantly faster and uses much less memory than previous best algorithms and is kernel independent. The new algorithm is based on a hierarchical algorithm for the n-body problem and is an acceleration of the boundary element method (BEM) for solving the integral equation associated with the capacitance extraction problem. The algorithm first adaptively subdivides the conductor surfaces into panels according to an estimation of the potential coefficients and a user-supplied error bound. The algorithm stores the potential coefficient matrix in a hierarchical data structure of size O(n), although the matrix is of size n^2 if expanded explicitly, where n is the number of panels. The hierarchical data structure allows the multiplication of the coefficient matrix with any vector in O(n) time. Finally, a generalized minimal residual algorithm is used to solve m linear systems each of size n × n in O(mn) time, where m is the number of conductors. The new algorithm is implemented and the performance is compared with previous best algorithms for the k × k bus example. The new algorithm is 60 times faster than FastCap and uses 1/80 of the memory used by FastCap. The results computed by the new algorithm are within 2.5% of those computed by FastCap. The new algorithm is 5 to 150 times faster than the commercial software QuickCap with the same accuracy.
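The essential pattern is that the n × n coefficient matrix is never formed explicitly; the Krylov solver only needs a routine that applies it to a vector. A much-simplified, hedged sketch of that pattern with a scipy LinearOperator; the "fast" matvec below is just a dense point-charge placeholder, not a hierarchical O(n) evaluation, and the self-term value is arbitrary:

```python
import numpy as np
import scipy.sparse.linalg as spla

n = 500
rng = np.random.default_rng(1)
panels = rng.random((n, 2))                      # stand-in panel centres

def potential_matvec(q):
    """Apply a potential-coefficient matrix (entries ~ 1/distance) to a charge
    vector q without storing it; a hierarchical code would do this in O(n)."""
    d = np.linalg.norm(panels[:, None, :] - panels[None, :, :], axis=2)
    np.fill_diagonal(d, 5e-4)                    # crude self-coefficient stand-in
    return (q / d).sum(axis=1)

P = spla.LinearOperator((n, n), matvec=potential_matvec, dtype=float)
v = np.ones(n)                                   # prescribed conductor potentials
q, info = spla.gmres(P, v)                       # solve P q = v matrix-free
print(info, np.linalg.norm(potential_matvec(q) - v))
```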

Journal ArticleDOI
TL;DR: A truncation strategy is proposed for the coefficient matrix of the discrete system arising from fast collocation methods for integral equations of the second kind with weakly singular kernels; this strategy forms the basis for the fast algorithms.
Abstract: In this paper we develop fast collocation methods for integral equations of the second kind with weakly singular kernels. For this purpose, we construct multiscale interpolating functions and collocation functionals having vanishing moments. Moreover, we propose a truncation strategy for the coefficient matrix of the corresponding discrete system which forms a basis for fast algorithms. An optimal order of convergence of the approximate solutions obtained from the fast algorithms is proved and the computational complexity of the algorithms is estimated. The stability of the numerical method and the condition number of the truncated coefficient matrix are analyzed.
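The effect of truncating a coefficient matrix can be illustrated with a naive absolute-threshold rule: entries below a tolerance are dropped and the truncated sparse system is solved in place of the dense one. This is only a toy version of the idea; the paper's truncation strategy is block-structured and tied to the multiscale basis:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 400
i, j = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
# Dense test matrix with rapidly decaying off-diagonal entries, a stand-in for
# the multiscale collocation matrix of a weakly singular integral equation.
A = np.eye(n) + 1.0 / (1.0 + np.abs(i - j)) ** 3
b = np.ones(n)

x_full = np.linalg.solve(A, b)

eps = 1e-4
A_trunc = sp.csc_matrix(np.where(np.abs(A) >= eps, A, 0.0))   # drop small entries
x_trunc = spla.spsolve(A_trunc, b)

kept = A_trunc.nnz / A.size
err = np.linalg.norm(x_trunc - x_full) / np.linalg.norm(x_full)
print(f"kept {kept:.1%} of the entries, relative solution error {err:.2e}")
```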

Journal ArticleDOI
TL;DR: Compactly supported radial basis functions (CSRBFs) are applied to the solution of a system of shallow water hydrodynamics equations, and the resulting banded coefficient matrix shows improvements in both conditioning and computational efficiency.
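Why compact support helps is easy to see: each basis function interacts only with nodes inside its support, so the collocation coefficient matrix is sparse (and banded after a suitable ordering). A sketch with a Wendland C^2 function on scattered points; the node set and support radius are arbitrary, and the shallow-water discretization itself is not reproduced:

```python
import numpy as np
import scipy.sparse as sp

def wendland(r):
    """Wendland C^2 compactly supported RBF: (1 - r)^4 (4 r + 1) for r < 1, else 0."""
    return np.where(r < 1.0, (1.0 - r) ** 4 * (4.0 * r + 1.0), 0.0)

rng = np.random.default_rng(0)
pts = rng.random((300, 2))                 # scattered nodes in the unit square
support = 0.15                             # support radius of the basis functions

d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
A = sp.csr_matrix(wendland(d / support))   # coefficient matrix is sparse
print(f"nonzeros: {A.nnz} of {A.shape[0] * A.shape[1]} "
      f"({A.nnz / (A.shape[0] * A.shape[1]):.1%})")
```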

Book
30 Jun 2002
TL;DR: This book discusses the mathematics of model reduction in linear systems, which involves regularization of ill-conditioned systems and the construction of formal orthogonal polynomials for linear algebraic equations.
Abstract: Introduction. 1. Control of linear systems. 2. Formal orthogonal polynomials. 3. Pade approximations. 4. Transform inversion. 5. Linear algebra issues. 6. Lanczos tridiagonalization process. 7. Systems of linear algebraic equations. 8. Regularization of ill-conditioned systems. 9. Sylvester and riccati equations. 10. Topics on nonlinear differential equations. 11. Appendix: the mathematics of model reduction. Index.

Journal ArticleDOI
TL;DR: It is illustrated here how the algebraic techniques of Multipolynomial resultant and Groebner basis explicitly solve the nonlinear GPS pseudo-ranging four-point equations once they have been converted into algebraic (polynomial) form and reduced to linear equations.
Abstract: Several procedures for solving, in a closed form, the GPS pseudo-ranging four-point problem P4P in matrix form already exist. We present here alternative algebraic procedures using Multipolynomial resultant and Groebner basis to solve the same problem. The advantage is that these algebraic algorithms have already been implemented in algebraic software such as “Mathematica” and “Maple”. The procedures are straightforward and simple to apply. We illustrate here how the algebraic techniques of Multipolynomial resultant and Groebner basis explicitly solve the nonlinear GPS pseudo-ranging four-point equations once they have been converted into algebraic (polynomial) form and reduced to linear equations. In particular, the algebraic tools of Multipolynomial resultant and Groebner basis provide symbolic solutions to the GPS four-point pseudo-ranging problem. The various forward and backward substitution steps inherent in the classical closed form solutions of the problem are avoided. Similar to the Gauss elimination techniques in linear systems of equations, the Multipolynomial resultant and Groebner basis approaches eliminate several variables in a multivariate system of nonlinear equations in such a manner that the end product normally consists of univariate polynomial equations (in this case quadratic equations for the range bias expressed algebraically using the given quantities) whose roots can be determined by existing programs (e.g., the roots command in MATLAB). © 2002 Wiley Periodicals, Inc.
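The elimination step, reducing a multivariate polynomial system to a univariate polynomial, can be reproduced with a computer algebra system. A toy sympy sketch on a made-up two-circle system (not the GPS pseudo-ranging equations):

```python
from sympy import symbols, groebner, solve

x, y = symbols("x y")

# Toy polynomial system standing in for the (larger) pseudo-ranging equations.
eqs = [x**2 + y**2 - 25, (x - 6)**2 + y**2 - 13]

# A lexicographic Groebner basis with x > y eliminates x, leaving a
# univariate polynomial in y whose roots can be found separately.
G = groebner(eqs, x, y, order="lex")
univariate = [g for g in G.exprs if g.free_symbols == {y}][0]
print(G)                                  # e.g. GroebnerBasis([x - 4, y**2 - 9], ...)
print(univariate, solve(univariate, y))   # roots of the univariate polynomial
```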

Journal ArticleDOI
TL;DR: In this article, a robust generalized Jacobi (GJ) preconditioner was proposed for solving very large-scale Biot's finite element equations using the symmetric quasi-minimal residual method.
Abstract: Finite element simulations of very large-scale soil–structure interaction problems (e.g. excavations, tunnelling, pile-rafts, etc.) typically involve the solution of a very large, ill-conditioned, and indefinite Biot system of equations. The traditional preconditioned conjugate gradient solver coupled with the standard Jacobi (SJ) preconditioner can be very inefficient for this class of problems. This paper presents a robust generalized Jacobi (GJ) preconditioner that is extremely effective for solving very large-scale Biot's finite element equations using the symmetric quasi-minimal residual method. The GJ preconditioner can be formed, inverted, and implemented within an ‘element-by-element’ framework as readily as the SJ preconditioner. It was derived as a diagonal approximation to a theoretical form, which can be proven mathematically to possess an attractive eigenvalue clustering property. The effectiveness of the GJ preconditioner over a wide range of soil stiffness and permeability was demonstrated numerically using a simple three-dimensional footing problem. This paper casts a new perspective on the potentialities of the simple diagonal preconditioner, which has been commonly perceived as being useful only in situations where it can serve as an approximate inverse to a diagonally dominant coefficient matrix. Copyright © 2002 John Wiley & Sons, Ltd.

Journal ArticleDOI
TL;DR: The paper shows that certain forms of approximate inverse techniques amount to approximately inverting the triangular factors obtained from some variants of ILU factorization of the original matrix.
Abstract: This paper discusses some relationships between ILU factorization techniques and factored sparse approximate inverse techniques. While ILU factorizations compute approximate LU factors of the coefficient matrix A, approximate inverse techniques aim at building triangular matrices Z and W such that $W^\top AZ$ is approximately diagonal. The paper shows that certain forms of approximate inverse techniques amount to approximately inverting the triangular factors obtained from some variants of ILU factorization of the original matrix. A few useful applications of these relationships will be discussed.
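As a concrete reference point for the ILU side of the comparison, here is the standard pattern of using scipy's incomplete LU factorization as a GMRES preconditioner. The test matrix is an arbitrary nonsymmetric tridiagonal stand-in, and spilu is a threshold-based ILU, only one of the variants the paper discusses:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Nonsymmetric sparse test matrix (1-D convection-diffusion-like stencil).
n = 1000
A = sp.diags([-1.3, 2.0, -0.7], [-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

ilu = spla.spilu(A, drop_tol=1e-4, fill_factor=10)    # approximate LU factors of A
M = spla.LinearOperator(A.shape, matvec=ilu.solve)    # preconditioner applies M^{-1} v

x, info = spla.gmres(A, b, M=M)
print(info, np.linalg.norm(A @ x - b))
```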

Journal ArticleDOI
TL;DR: In this paper, the left and right sides of each attaching point and each end of the beam are regarded as nodes, and the coefficient matrix of the simultaneous equations is derived for the ν-th attaching point.

Journal ArticleDOI
TL;DR: In this article, two universal relations for both compressible and incompressible, isotropic elastic materials reinforced by a single field of inextensible fibers are derived as components of an axial vector condition.
Abstract: Two general universal relations for both compressible and incompressible, isotropic elastic materials reinforced by a single field of inextensible fibers are derived as components of an axial vector condition. The constitutive equations comprise a system of six scalar equations linear in four constraint and material response functions. It is known from the manifold method applied to this linear system that, in general, at least two universal relations exist. Hence, depending on the rank of a coefficient matrix, the two axial vector component equations comprise the complete set of universal relations for the fiber-reinforced material. These equations are valid for all deformations, and they hold independently of the balance equations and boundary conditions. The results are illustrated for a homogeneous simple shear with triaxial stretch of a material having various single fiber arrangements. For the homogeneous shear and two kinds of nonhomogeneous deformations, the balance equations and/or boundary condi...

Journal ArticleDOI
TL;DR: In this paper, a Fourier analysis based on a finite-volume discretization of a vector potential formulation of time-harmonic Maxwell's equations on a staggered grid in three dimensions is presented.
Abstract: We consider the rapid simulation of three-dimensional electromagnetic problems in geophysical parameter regimes, where the conductivity may vary significantly and the range of frequencies is moderate. Toward developing a multigrid preconditioner, we present a Fourier analysis based on a finite-volume discretization of a vector potential formulation of time-harmonic Maxwell's equations on a staggered grid in three dimensions. We prove grid-independent bounds on the eigenvalue and singular value ranges of the system obtained using a preconditioner based on exact inversion of the dominant diagonal blocks of the non-Hermitian coefficient matrix. This result implies that a preconditioner that uses single multigrid cycles to effect inversion of the diagonal blocks also yields a preconditioned system with an $\ell_2$-condition number bounded independent of the grid size. We then present numerical examples for more realistic situations involving large variations in conductivity (i.e., jump discontinuities). Block-preconditioning with one multigrid cycle using Dendy's BOXMG solver is found to yield convergence in very few iterations, apparently independent of the grid size. The experiments show that the somewhat restrictive assumptions of the Fourier analysis do not prohibit it from describing the essential local behavior of the preconditioned operator under consideration. A very efficient, practical solver is obtained.

Journal ArticleDOI
TL;DR: In this paper, the orthogonal set (dual cone) of a linear space (cone) generated by any subset of a given set of vectors (including sign selection) is obtained, and the representation of the resulting linear spaces and cones to their minimal representations.

Journal ArticleDOI
TL;DR: It is shown that after finitely many iterations, the working set becomes independent of the iterates and is essentially the same as the active set of the KKT point, and under some additional conditions, the convergence rate is two-step superlinear or even Q-superlinear.
Abstract: In this paper, by means of the concept of the working set, which is an estimate of the active set, we propose a feasible sequential linear equation algorithm for solving inequality constrained optimization problems. At each iteration of the proposed algorithm, we first solve one system of linear equations with a coefficient matrix of size m × m (where m is the number of constraints) to compute the working set; we then solve a subproblem which consists of four reduced systems of linear equations with a common coefficient matrix. Unlike existing QP-free algorithms, the subproblem is concerned with only the constraints corresponding to the working set. The constraints not in the working set are neglected. Consequently, the dimension of each subproblem is not of full dimension. Without assuming the isolatedness of the stationary points, we prove that every accumulation point of the sequence generated by the proposed algorithm is a KKT point of the problem. Moreover, after finitely many iterations, the working set becomes independent of the iterates and is essentially the same as the active set of the KKT point. In other words, after finitely many steps, only those constraints which are active at the solution will be involved in the subproblem. Under some additional conditions, we show that the convergence rate is two-step superlinear or even Q-superlinear. We also report some preliminary numerical experiments to show that the proposed algorithm is practicable and effective for the test problems.

01 Jan 2002
TL;DR: A class of regularized conjugate gradient methods is presented for solving large sparse systems of linear equations whose coefficient matrix is an ill-conditioned symmetric positive definite matrix; the new methods are shown to be more efficient and robust than both classical relaxation methods and classical conjugate direction methods.
Abstract: A class of regularized conjugate gradient methods is presented for solving the large sparse system of linear equations of which the coefficient matrix is an ill-conditioned symmetric positive definite matrix. The convergence properties of these methods are discussed in depth, and the best possible choices of the parameters involved in the new methods are investigated in detail. Numerical computations show that the new methods are more efficient and robust than both classical relaxation methods and classical conjugate direction methods.

Journal ArticleDOI
TL;DR: A new Newton-like method is proposed which defines new iterates using a linear system with the same coefficient matrix in each iterate while the correction is performed on the right-hand-side vector of the Newton system.
Abstract: This paper proposes a new Newton-like method which defines new iterates using a linear system with the same coefficient matrix in each iterate, while the correction is performed on the right-hand-side vector of the Newton system. In this way a method is obtained which is less costly than the Newton method and faster than the fixed Newton method. Local convergence is proved for nonsingular systems. The influence of the relaxation parameter is analyzed and explicit formulae for the selection of an optimal parameter are presented. Relevant numerical examples are used to demonstrate the advantages of the proposed method.
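For context, the "fixed Newton" baseline that the paper improves on reuses one factorization of the coefficient matrix for all iterations. A sketch of that baseline (not the paper's right-hand-side-corrected method) on a made-up two-equation system:

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def F(x):
    """Small nonlinear test system."""
    return np.array([x[0] ** 2 + x[1] ** 2 - 4.0,
                     np.exp(x[0]) + x[1] - 1.0])

def J(x):
    """Its Jacobian."""
    return np.array([[2.0 * x[0], 2.0 * x[1]],
                     [np.exp(x[0]), 1.0]])

x = np.array([1.0, -1.5])              # starting point
lu, piv = lu_factor(J(x))              # factor the coefficient matrix once

for k in range(50):                    # fixed-Newton (chord) iteration
    dx = lu_solve((lu, piv), -F(x))    # reuse the same LU factors every step
    x = x + dx
    if np.linalg.norm(F(x)) < 1e-10:
        break

print(k + 1, x, F(x))
```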

Journal ArticleDOI
TL;DR: A stabilization of the primal-dual interior-point approach that ensures rapid local convergence under these conditions without enforcing the usual centrality condition associated with path-following methods is described.
Abstract: In recent work, the local convergence behavior of path-following interior-point methods and sequential quadratic programming methods for nonlinear programming has been investigated for the case in which the assumption of linear independence of the active constraint gradients at the solution is replaced by the weaker Mangasarian–Fromovitz constraint qualification. In this paper, we describe a stabilization of the primal-dual interior-point approach that ensures rapid local convergence under these conditions without enforcing the usual centrality condition associated with path-following methods. The stabilization takes the form of perturbations to the coefficient matrix in the step equations that vanish as the iterates converge to the solution.

Journal ArticleDOI
TL;DR: In this article, a linear quadratic regulator (LQR) for mechanical vibration systems is studied based on second-order matrix equations, and the performance index is a functional depending on second derivatives.

Journal ArticleDOI
TL;DR: In this article, convergence analysis on some classical stationary iterative methods for solving the two-dimensional variable coefficient convection-diffusion equation discretized by a fourth-order compact difference scheme is conducted.
Abstract: We conduct convergence analysis on some classical stationary iterative methods for solving the two-dimensional variable coefficient convection-diffusion equation discretized by a fourth-order compact difference scheme. Several conditions are formulated under which the coefficient matrix is guaranteed to be an M-matrix. We further investigate the effect of different orderings of the grid points on the performance of some stationary iterative methods, multigrid method, and preconditioned GMRES. Three sets of numerical experiments are conducted to study the convergence behaviors of these iterative methods under the influence of the flow directions, the orderings of the grid points, and the magnitude of the convection coefficients.
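The M-matrix property referred to above can be checked directly from its definition: nonpositive off-diagonal entries and an entrywise nonnegative inverse. A small checker applied to a 1-D central-difference convection-diffusion matrix, standing in for the paper's 2-D fourth-order compact matrix, which this is not:

```python
import numpy as np

def is_m_matrix(A, tol=1e-12):
    """Check the defining properties of a nonsingular M-matrix:
    off-diagonal entries <= 0 and all entries of A^{-1} >= 0."""
    off_diag = A - np.diag(np.diag(A))
    if (off_diag > tol).any():
        return False
    try:
        return bool((np.linalg.inv(A) >= -tol).all())
    except np.linalg.LinAlgError:
        return False

# Central-difference discretization of -u'' + p u' on a uniform grid (illustrative).
n, h, p = 50, 1.0 / 51, 1.0
A = (np.diag(np.full(n, 2.0 / h ** 2))
     + np.diag(np.full(n - 1, -1.0 / h ** 2 + p / (2 * h)), 1)
     + np.diag(np.full(n - 1, -1.0 / h ** 2 - p / (2 * h)), -1))
print(is_m_matrix(A))        # True while the cell Peclet number p*h/2 is below 1
```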

01 Mar 2002
TL;DR: An algorithm for transforming skew-polynomial matrices over an Ore domain into row-reduced form is described, and it is shown that this algorithm can be used to perform the standard calculations of linear algebra on such matrices.
Abstract: We describe an algorithm for transforming skew-polynomial matrices over an Ore domain in row-reduced form, and show that this algorithm can be used to perform the standard calculations of linear algebra on such matrices (ranks, kernels, linear dependences, inhomogeneous solving). The main application of our algorithm is to desingularize recurrences and to compute the rational solutions of a large class of linear functional systems. It also turns out to be efficient when applied to ordinary commutative matrix polynomials.