
Showing papers on "Coefficient matrix published in 1996"


Journal ArticleDOI
TL;DR: The basic objects of pseudo-linear algebra are introduced (pseudo-derivations, skew polynomials, and pseudo-linear operators) and several recent algorithms on them are described, which yield algorithms for uncoupling and solving systems of linear differential and difference equations in closed form.

220 citations


Book
30 Dec 1996
TL;DR: In this article, the authors introduce the concept of linear transformations in the context of linear systems and the notion of inner product spaces.
Abstract: 1. Linear Equations 1.1 Introduction to Linear Systems 1.2 Matrices, Vectors, and Gauss-Jordan Elimination 1.3 On the Solutions of Linear Systems; Matrix Algebra 2. Linear Transformations 2.1 Introduction to Linear Transformations and Their Inverses 2.2 Linear Transformations in Geometry 2.3 Matrix Products 2.4 The Inverse of a Linear Transformation 3. Subspaces of R^n and Their Dimensions 3.1 Image and Kernel of a Linear Transformation 3.2 Subspaces of R^n; Bases and Linear Independence 3.3 The Dimension of a Subspace of R^n 3.4 Coordinates 4. Linear Spaces 4.1 Introduction to Linear Spaces 4.2 Linear Transformations and Isomorphisms 4.3 The Matrix of a Linear Transformation 5. Orthogonality and Least Squares 5.1 Orthogonal Projections and Orthonormal Bases 5.2 Gram-Schmidt Process and QR Factorization 5.3 Orthogonal Transformations and Orthogonal Matrices 5.4 Least Squares and Data Fitting 5.5 Inner Product Spaces 6. Determinants 6.1 Introduction to Determinants 6.2 Properties of the Determinant 6.3 Geometrical Interpretations of the Determinant; Cramer's Rule 7. Eigenvalues and Eigenvectors 7.1 Dynamical Systems and Eigenvectors: An Introductory Example 7.2 Finding the Eigenvalues of a Matrix 7.3 Finding the Eigenvectors of a Matrix 7.4 Diagonalization 7.5 Complex Eigenvalues 7.6 Stability 8. Symmetric Matrices and Quadratic Forms 8.1 Symmetric Matrices 8.2 Quadratic Forms 8.3 Singular Values 9. Linear Differential Equations 9.1 An Introduction to Continuous Dynamical Systems 9.2 The Complex Case: Euler's Formula 9.3 Linear Differential Operators and Linear Differential Equations Appendix A. Vectors Answers to Odd-numbered Exercises Subject Index Name Index

177 citations


Journal ArticleDOI
TL;DR: If the LIP algorithm is applied to integer data, one obtains as another corollary a new proof of a well-known theorem by Tardos that linear programming can be solved in strongly polynomial time provided that A contains small-integer entries.
Abstract: We propose a primal-dual "layered-step" interior point (LIP) algorithm for linear programming with data given by real numbers. This algorithm follows the central path, either with short steps or with a new type of step called a "layered least squares" (LLS) step. The algorithm returns an exact optimum after a finite number of steps; in particular, after O(n^{3.5} c(A)) iterations, where c(A) is a function of the coefficient matrix. The LLS steps can be thought of as accelerating a classical path-following interior point method. One consequence of the new method is a new characterization of the central path: we show that it is composed of at most n^2 alternating straight and curved segments. If the LIP algorithm is applied to integer data, we get as another corollary a new proof of a well-known theorem by Tardos that linear programming can be solved in strongly polynomial time provided that A contains small-integer entries.
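
For readers who want to see what "following the central path" amounts to in code, here is a minimal sketch of a standard primal-dual path-following iteration; it is not the paper's layered least squares (LLS) step, and the problem data, step damping, and centering parameter are illustrative assumptions.

```python
import numpy as np

def pdip_lp(A, b, c, tol=1e-8, max_iter=100, sigma=0.1):
    """Plain primal-dual path-following solver for min c'x s.t. Ax = b, x >= 0.
    This is the classical centering/Newton iteration that the paper accelerates;
    the layered least-squares (LLS) step itself is not reproduced here."""
    m, n = A.shape
    x = np.ones(n); s = np.ones(n); y = np.zeros(m)
    for _ in range(max_iter):
        r_d = c - A.T @ y - s            # dual residual
        r_p = b - A @ x                  # primal residual
        mu = x @ s / n                   # duality measure
        if max(np.linalg.norm(r_p), np.linalg.norm(r_d), mu) < tol:
            break
        # Newton step on the perturbed KKT conditions (target sigma * mu)
        KKT = np.block([
            [np.zeros((n, n)), A.T,              np.eye(n)],
            [A,                np.zeros((m, m)), np.zeros((m, n))],
            [np.diag(s),       np.zeros((n, m)), np.diag(x)],
        ])
        rhs = np.concatenate([r_d, r_p, sigma * mu - x * s])
        d = np.linalg.solve(KKT, rhs)
        dx, dy, ds = d[:n], d[n:n + m], d[n + m:]
        # damped step keeping x and s strictly positive
        alpha = 1.0
        for v, dv in ((x, dx), (s, ds)):
            neg = dv < 0
            if neg.any():
                alpha = min(alpha, 0.9 * np.min(-v[neg] / dv[neg]))
        x, y, s = x + alpha * dx, y + alpha * dy, s + alpha * ds
    return x, y, s

# Toy usage: min x1 + 2*x2 subject to x1 + x2 = 1, x >= 0  ->  optimum near (1, 0)
A = np.array([[1.0, 1.0]]); b = np.array([1.0]); c = np.array([1.0, 2.0])
x, y, s = pdip_lp(A, b, c)
print(np.round(x, 6))
```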

163 citations


Journal ArticleDOI
TL;DR: In this note, a method is given that uses the students' knowledge of homogeneous linear differential equations with constant coefficients and the Cayley--Hamilton theorem to calculate the matrix exponential.
Abstract: There are many different methods to calculate the exponential of a matrix: series methods, differential equations methods, polynomial methods, matrix decomposition methods, and splitting methods, none of which is entirely satisfactory from either a theoretical or a computational point of view. How then should the matrix exponential be introduced in an elementary differential equations course, for engineering students for example, with a minimum of mathematical prerequisites? In this note, a method is given that uses the students' knowledge of homogeneous linear differential equations with constant coefficients and the Cayley--Hamilton theorem. The method is not new; what is new is the approach.
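
The approach can be made concrete in a few lines: by Cayley-Hamilton, every entry of e^{At} satisfies the scalar constant-coefficient ODE given by the characteristic polynomial of A, so e^{At} = sum_k alpha_k(t) A^k, where the alpha_k are the fundamental solutions of that ODE with alpha_k^(j)(0) = delta_{jk}. The sketch below is only an illustration of this calculation (a numerical ODE solver stands in for the closed-form solution a student would write by hand).

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.linalg import expm

def expm_cayley_hamilton(A, t):
    """Approximate exp(A*t) as sum_k alpha_k(t) * A^k, where the alpha_k solve the
    scalar constant-coefficient ODE given by the characteristic polynomial of A
    (Cayley-Hamilton), with initial conditions alpha_k^(j)(0) = delta_{jk}."""
    n = A.shape[0]
    c = np.poly(A)                        # monic characteristic polynomial coefficients
    # companion system for y^(n) = -c[1] y^(n-1) - ... - c[n] y
    C = np.zeros((n, n))
    C[:-1, 1:] = np.eye(n - 1)
    C[-1, :] = -c[:0:-1]                  # row of -c[n], ..., -c[1]
    alphas = []
    for k in range(n):
        y0 = np.zeros(n)
        y0[k] = 1.0                       # alpha_k^(j)(0) = delta_{jk}
        sol = solve_ivp(lambda s_, y: C @ y, (0.0, t), y0, rtol=1e-10, atol=1e-12)
        alphas.append(sol.y[0, -1])
    return sum(a * np.linalg.matrix_power(A, k) for k, a in enumerate(alphas))

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
print(np.allclose(expm_cayley_hamilton(A, 1.0), expm(A)))   # True
```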

126 citations


Journal ArticleDOI
TL;DR: A switching surface determined by the control coefficient matrix, together with the associated Lyapunov function, is able to ensure asymptotic stability for the system in sliding mode; the proposed method may also be used for systems with nonlinear dynamics and for linear systems with delays.

100 citations


Journal ArticleDOI
TL;DR: A matrix method, which is called the Chebyshev-matrix method, for the approximate solution of linear differential equations in terms of Chebyshev polynomials is presented.
Abstract: A matrix method, which is called the Chebyshev-matrix method, for the approximate solution of linear differential equations in terms of Chebyshev polynomials is presented. The method is based on first taking the truncated Chebyshev series of the functions in the equation and then substituting their matrix forms into the given equation. Thereby the equation reduces to a matrix equation, which corresponds to a system of linear algebraic equations with unknown Chebyshev coefficients. To illustrate the method, it is applied to certain linear differential equations under the given conditions and the results are compared.
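
As a hedged illustration of this coefficient-space reduction (not the authors' code), the toy example below applies the same idea to y' + y = 0 on [-1, 1] with y(-1) = 1: the differentiation matrix in Chebyshev-coefficient space is assembled column by column, the boundary condition closes the system, and the unknown Chebyshev coefficients come from one linear solve. The truncation order and test problem are assumptions.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

N = 16                                    # truncation order (assumed)

# Differentiation matrix in Chebyshev-coefficient space: column k holds the
# coefficients of T_k'(x) expanded back in T_0, ..., T_N.
D = np.zeros((N + 1, N + 1))
for k in range(N + 1):
    e = np.zeros(N + 1); e[k] = 1.0
    dk = C.chebder(e)
    D[:len(dk), k] = dk

A = D + np.eye(N + 1)                     # coefficient matrix of y' + y = 0
b = np.zeros(N + 1)

# Tau-style closure: replace the last row by the boundary condition y(-1) = 1,
# i.e. sum_k a_k T_k(-1) = sum_k (-1)^k a_k = 1.
A[-1, :] = [(-1) ** k for k in range(N + 1)]
b[-1] = 1.0

a = np.linalg.solve(A, b)                 # unknown Chebyshev coefficients
x = np.linspace(-1, 1, 5)
print(np.allclose(C.chebval(x, a), np.exp(-(x + 1)), atol=1e-8))   # True
```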

94 citations


Journal ArticleDOI
TL;DR: The authors present some relations that allow the efficient approximate inversion of linear differential operators with rational function coefficients, using expansions in terms of a large class of orthogonal polynomial families, including all the classical orthogonal polynomials.
Abstract: We present some relations that allow the efficient approximate inversion of linear differential operators with rational function coefficients. We employ expansions in terms of a large class of orthogonal polynomial families, including all the classical orthogonal polynomials. These families obey a simple three-term recurrence relation for differentiation, which implies that on an appropriately restricted domain the differentiation operator has a unique banded inverse. The inverse is an integration operator for the family, and it is simply the tridiagonal coefficient matrix for the recurrence. Since in these families convolution operators (i.e. matrix representations of multiplication by a function) are banded for polynomials, we are able to obtain a banded representation for linear differential operators with rational coefficients. This leads to a method of solution of initial or boundary value problems that, besides having an operation count that scales linearly with the order of truncation N, is computationally well conditioned. Among the applications considered is the use of rational maps for the resolution of sharp interior layers.
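
The key structural fact, that integration (the inverse of differentiation on a suitably restricted domain) is a banded operator in these polynomial bases, can be checked numerically. The sketch below is only an illustration for the Chebyshev family with an assumed truncation order, not the paper's implementation.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Build the indefinite-integration operator in Chebyshev-coefficient space,
# column by column, and verify that (apart from the integration-constant row)
# only a narrow band is nonzero, reflecting the three-term recurrence
#   integral of T_k = T_{k+1}/(2(k+1)) - T_{k-1}/(2(k-1)),  k >= 2.
N = 12                                    # truncation order (assumed)
B = np.zeros((N + 2, N + 1))
for k in range(N + 1):
    e = np.zeros(N + 1); e[k] = 1.0
    ik = C.chebint(e, lbnd=0)             # integral of T_k, fixed to vanish at x = 0
    B[:len(ik), k] = ik

rows = np.arange(N + 2)[:, None]
cols = np.arange(N + 1)[None, :]
off_band = np.abs(rows - cols) > 1
# Row 0 carries the integration constants; below it the operator is banded.
print(np.allclose(B[1:, :][off_band[1:, :]], 0.0))   # True
```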

92 citations


Journal ArticleDOI
TL;DR: Thanks to the specific structure and the properties of Markovian generators, it is established that the solution of the system can be approximated “as close as possible” by a series expansion in terms of the small parameter $\varepsilon > 0$.
Abstract: A class of singularly perturbed time-varying systems with a small parameter $\varepsilon > 0$ is considered in this paper. The importance of the study stems from the fact that many problems arising in various applications involve a rapidly fluctuating Markov chain. To investigate the limit behavior of such systems, it is necessary to consider the corresponding singular-perturbation problems. Existing results in singular perturbation of ordinary differential equations cannot be applied since the coefficient matrix of the equation is a generator of a finite-state Markov chain, and as a result it is singular. Asymptotic properties of the aforementioned systems are developed via matched asymptotic expansion in this paper. Thanks to the specific structure and the properties of Markovian generators, it is established that the solution of the system can be approximated "as close as possible" by a series expansion in terms of the small parameter $\varepsilon > 0$.

81 citations


Journal ArticleDOI
TL;DR: An alternative scheme is presented, which iterates to the exact solution of the fully coupled numerical model, and is limited to solute transport problems for which LU factorisation is a practical solution method.

54 citations


Journal ArticleDOI
TL;DR: In this paper, the authors considered the problem of solving an elliptic boundary value problem in the case that the coefficients vary by many orders of magnitude over the domain and proposed a linear finite element method to solve the problem.
Abstract: We consider solving an elliptic boundary value problem in the case that the coefficients vary by many orders of magnitude over the domain. A linear finite element method is used. It is shown that the standard method for solving the resulting linear equations in finite-precision arithmetic can give an arbitrarily inaccurate answer because of ill-conditioning in the stiffness matrix. A new method for solving the linear equations is proposed. This method is based on a "mixed formulation" and gives a numerically accurate answer independent of the variation in the coefficients. The numerical error in the solution of the linear system for the new method is shown to depend on the aspect ratio of the triangulation.
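
A small experiment makes the conditioning issue visible. The 1-D sketch below (mesh size and coefficient jump are assumed values, and it does not implement the paper's mixed-formulation remedy) assembles the standard piecewise-linear stiffness matrix for a coefficient that jumps by ten orders of magnitude and reports its condition number.

```python
import numpy as np

n = 50                                    # interior nodes (assumed)
h = 1.0 / (n + 1)
# element coefficient k(x): 1 on the left half of the domain, 1e10 on the right
k = np.where(np.arange(n + 1) < (n + 1) // 2, 1.0, 1e10)

# Assemble the P1 stiffness matrix for -(k u')' = f with homogeneous Dirichlet BCs.
K = np.zeros((n, n))
for e in range(n + 1):                    # element e joins nodes e and e+1
    ke = k[e] / h
    if e >= 1:
        K[e - 1, e - 1] += ke
    if e <= n - 1:
        K[e, e] += ke
    if 1 <= e <= n - 1:
        K[e - 1, e] -= ke
        K[e, e - 1] -= ke

print(f"condition number ~ {np.linalg.cond(K):.2e}")   # grows with the jump in k
```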

50 citations


Journal ArticleDOI
TL;DR: The asymptotic stability of a class of linear difference equations is studied in this paper.
Abstract: (1996). On the Asymptotic Stability of a Class of Linear Difference Equations. Mathematics Magazine: Vol. 69, No. 1, pp. 34-43.

Journal ArticleDOI
TL;DR: QMR was found to be the best iterative method in this discrete-dipole approximation application; it converged in only a few more iterations than the full generalized minimal residual (GMRES) method.
Abstract: The discrete-dipole approximation (DDA) is a method for calculating the scattering of light by an irregular particle. The DDA has been used, for example, in calculations of optical properties of cosmic dust. In this method the particle is approximated by interacting electromagnetic dipoles. Computationally the DDA method includes the solution of large dense systems of linear equations where the coefficient matrix is complex symmetric. In this work, the linear systems of equations are solved by various iterative methods. QMR was found to be the best iterative method in this application. It converged in only a few more iterations than the full generalized minimal residual (GMRES) method. When the discretization of the particle was refined, the number of iterations remained constant even without preconditioning. The matrix–vector product in the iterative methods can be computed with the fast Fourier transform or the fast multipole algorithm. These algorithms make it feasible to solve dense linear systems of ...
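
To illustrate the kind of linear algebra involved, the sketch below solves a toy complex symmetric system iteratively, supplying only a matrix-vector product in the way the FFT or fast-multipole kernels would; the matrix, its size, and the use of GMRES rather than QMR are assumptions made for the example.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

rng = np.random.default_rng(0)
n = 200
M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
# Stand-in for a DDA-like system: identity plus a small complex *symmetric*
# (not Hermitian) interaction block, so A == A.T but A != A.conj().T.
A = np.eye(n) + 0.005 * (M + M.T)
b = rng.standard_normal(n) + 1j * rng.standard_normal(n)

# Only a matvec is exposed to the solver, as an FFT/FMM kernel would be.
op = LinearOperator((n, n), matvec=lambda v: A @ v, dtype=complex)
x, info = gmres(op, b)
print(info, np.linalg.norm(A @ x - b) / np.linalg.norm(b))
```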

Journal ArticleDOI
TL;DR: This work shows the limitations of hybrid methods that rely directly on Arnoldi eigenvalue estimates and proposes an alternative, also based on the Arnoldi process, which approximates the field of values of the coefficient matrix and of its inverse in the Krylov subspace.
Abstract: Hybrid methods for the solution of systems of linear equations consist of a first phase where some information about the associated coefficient matrix is acquired, and a second phase in which a polynomial iteration designed with respect to this information is used. Most of the hybrid algorithms proposed recently for the solution of nonsymmetric systems rely on the direct use of eigenvalue estimates constructed by the Arnoldi process in Phase I. We will show the limitations of this approach and propose an alternative, also based on the Arnoldi process, which approximates the field of values of the coefficient matrix and of its inverse in the Krylov subspace. We also report on numerical experiments comparing the resulting new method with other hybrid algorithms.
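
For reference, a plain Arnoldi process is sketched below: the Hessenberg matrix it produces is the object from which Phase-I spectral information (Ritz values, or field-of-values estimates) is extracted. The test matrix, subspace dimension, and printed quantities are illustrative assumptions, not the authors' hybrid code.

```python
import numpy as np

def arnoldi(A, v0, m):
    """Plain Arnoldi process: returns an orthonormal Krylov basis V and the
    (m+1) x m Hessenberg matrix H with A V[:, :m] = V H."""
    n = len(v0)
    V = np.zeros((n, m + 1)); H = np.zeros((m + 1, m))
    V[:, 0] = v0 / np.linalg.norm(v0)
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):            # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-14:           # lucky breakdown: exact invariant subspace
            return V[:, :j + 1], H[:j + 2, :j + 1]
        V[:, j + 1] = w / H[j + 1, j]
    return V, H

rng = np.random.default_rng(1)
A = rng.standard_normal((100, 100))
V, H = arnoldi(A, rng.standard_normal(100), 20)
# Ritz values: eigenvalue estimates extracted from the Krylov subspace.
print(np.sort_complex(np.linalg.eigvals(H[:-1, :])))
```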



Book ChapterDOI
18 Aug 1996
TL;DR: A new version of QMR is proposed with the following properties: Firstly, the Lanczos process is based on coupled two-term recurrences; secondly, both sequences of Lanczos vectors are scalable; and finally, there is only a single global synchronization point per iteration.
Abstract: For the solution of linear systems of equations with unsymmetric coefficient matrix, Freund and Nachtigal (SIAM J. Sci. Comput. 15 (1994), 313–337) proposed a Krylov subspace method called the Quasi-Minimal Residual method (QMR). The two main ingredients of QMR are the unsymmetric Lanczos algorithm and the quasi-minimal residual approach that minimizes a factor of the residual vector rather than the residual itself. The Lanczos algorithm spans a Krylov subspace by generating two sequences of biorthogonal vectors called Lanczos vectors. Due to the orthogonalization and scaling of the Lanczos vectors, algorithms that make use of the Lanczos process contain inner products leading to global communication and synchronization on parallel processors. For massively parallel computers, these effects cause delays preventing scalability of the implementation. Consequently, parallel algorithms should avoid global synchronization as far as possible. We propose a new version of QMR with the following properties: Firstly, the Lanczos process is based on coupled two-term recurrences; secondly, both sequences of Lanczos vectors are scalable; and finally, there is only a single global synchronization point per iteration. The efficiency of this algorithm is demonstrated by numerical experiments on a PARAGON system using up to 121 processors.

Proceedings ArticleDOI
David Gesbert1, Pierre Duhamel
24 Jun 1996
TL;DR: A new joint data/channel estimation method that relies on the minimization of a bilinear MSE cost function, where the variables to be adjusted are the channel coefficient matrix and a linear equalizer, leading to globally convergent identification/equalization schemes.
Abstract: In the context of digital radio communications, the signals are transmitted through propagation channels which introduce intersymbol interference (ISI). The channels can be represented as FIR filters which have to be identified and/or equalized for the transmitted symbols to be recovered. The problem of identifying/equalizing a digital communication channel based on its temporally or spatially oversampled output has gained much attention (single-input/multiple-output-SIMO-deconvolution). In this context, we propose a new joint data/channel estimation method. Our technique relies on the minimization of a bilinear MSE cost function, where the variables to be adjusted are the channel coefficient matrix and a linear equalizer. We show that this a priori choice of a linear equalization structure allows the derivation of a second-order unimodal criterion, leading to globally convergent identification/equalization schemes. The proposed method is completely blind in that (1) no assumption is required upon the transmitted sequence statistics or alphabet, and (2) it shows some robustness with respect to the channel order estimation problem (thus improving on most previous related works). It also allows the free choice of a delay in the equalizer so that output noise amplification can be optimized.

Journal ArticleDOI
TL;DR: This work considers a modification of a path-following infeasible-interior-point algorithm that has similar theoretical global convergence properties to those of the earlier algorithm while its asymptotic convergence rate can be made superquadratic by an appropriate parameter choice.
Abstract: We consider a modification of a path-following infeasible-interior-point algorithm described by Wright. In the new algorithm, we attempt to improve each major iterate by reusing the coefficient matrix factors from the latest step. We show that the modified algorithm has similar theoretical global convergence properties to those of the earlier algorithm while its asymptotic convergence rate can be made superquadratic by an appropriate parameter choice.
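
The factor-reuse idea can be illustrated with a generic example: factor the step matrix once and reuse the LU factors for additional right-hand sides. The sketch below uses assumed data and is not the algorithm analyzed in the paper.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

rng = np.random.default_rng(0)
n = 500
M = rng.standard_normal((n, n)) + n * np.eye(n)   # stand-in for the step matrix
lu, piv = lu_factor(M)                            # O(n^3) factorization, done once

rhs_predictor = rng.standard_normal(n)
rhs_corrector = rng.standard_normal(n)
dx_pred = lu_solve((lu, piv), rhs_predictor)      # O(n^2) per reuse of the factors
dx_corr = lu_solve((lu, piv), rhs_corrector)
print(np.linalg.norm(M @ dx_pred - rhs_predictor),
      np.linalg.norm(M @ dx_corr - rhs_corrector))
```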

Journal ArticleDOI
TL;DR: A numerical algorithm for computing a few extreme generalized singular values and corresponding vectors of a sparse or structured matrix pair based on the CS decomposition and the Lanczos bidiagonalization process is presented.
Abstract: We present a numerical algorithm for computing a few extreme generalized singular values and corresponding vectors of a sparse or structured matrix pair \(\{A,B\}\). The algorithm is based on the CS decomposition and the Lanczos bidiagonalization process. At each iteration step of the Lanczos process, the solution to a linear least squares problem with \((A^{\rm T},B^{\rm T})^{\rm T}\) as the coefficient matrix is approximately computed, and this constitutes the only interface of the algorithm with the matrix pair \(\{A,B\}\). Numerical results are also given to demonstrate the feasibility and efficiency of the algorithm.

Journal ArticleDOI
TL;DR: In this paper, the operational matrix of differentiation associated with the shifted Chebyshev polynomials of the first kind is derived and the solution of a system of differential equations can be found by solving a set of linear algebraic equations without constructing the equivalent integral equations.
Abstract: Chebyshev polynomials are utilized to obtain solutions of a set of pth order linear differential equations with periodic coefficients. For this purpose, the operational matrix of differentiation associated with the shifted Chebyshev polynomials of the first kind is derived. Utilizing the properties of this matrix, the solution of a system of differential equations can be found by solving a set of linear algebraic equations without constructing the equivalent integral equations. The Floquet Transition Matrix (FTM) can then be computed and its eigenvalues (Floquet multipliers) subsequently analyzed for stability. Two straightforward methods, the ‘differential state space formulation’ and the ‘differential direct formulation’, are presented and the results are compared with those obtained from other available techniques. The well-known Mathieu equation and a higher order system are used as illustrative examples.
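
As a point of comparison (not the Chebyshev operational-matrix route of the paper), the sketch below computes the Floquet Transition Matrix of a Mathieu-type equation by direct numerical integration over one period and reads stability off the Floquet multipliers; the equation form and parameter values are assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Mathieu-type equation y'' + (a + b*cos(2t)) y = 0 with assumed parameters.
a, b = 2.0, 0.2
T = np.pi                                 # period of the coefficient cos(2t)

def rhs(t, y):
    return [y[1], -(a + b * np.cos(2 * t)) * y[0]]

# Columns of the FTM are the states at t = T of solutions started from e1, e2.
FTM = np.column_stack([
    solve_ivp(rhs, (0.0, T), col, rtol=1e-10, atol=1e-12).y[:, -1]
    for col in np.eye(2)
])
multipliers = np.linalg.eigvals(FTM)      # Floquet multipliers
print(multipliers,
      "stable" if np.all(np.abs(multipliers) <= 1 + 1e-8) else "unstable")
```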

01 Jan 1996
TL;DR: In this paper, local and global invariants of linear differential-algebraic equations with variable coefficients and their relation are studied, and the connection between different approaches to the analysis of such equations and the associated indices, which are the differentiation index and the strangeness index, is discussed.
Abstract: We study local and global invariants of linear differential-algebraic equations with variable coefficients and their relation. In particular, we discuss the connection between different approaches to the analysis of such equations and the associated indices, which are the differentiation index and the strangeness index. This leads to a new proof of an existence and uniqueness theorem as well as to an adequate numerical algorithm for the solution of linear differential-algebraic equations.

Journal ArticleDOI
TL;DR: Two domain decomposition formulations are presented in conjunction with the preconditioned conjugate gradient method (PCG) for the solution of large-scale problems in solid and structural mechanics.

Proceedings ArticleDOI
01 Oct 1996
TL;DR: Let an Ore polynomial ring k[X; σ, δ] and a nonzero pseudo-linear map θ: K → K, where K is a σ, δ-compatible extension of the field k, be given; it is assumed that if a first-order equation Fy = 0, F ∈ k[θ], has a nonzero solution in a σ, δ-compatible extension of the field k, then the equation has a nonzero solution in K.
Abstract: Let an Ore polynomial ring k[X; σ, δ] and a nonzero pseudo-linear map θ: K → K, where K is a σ, δ-compatible extension of the field k, be given. Then we have the ring k[θ] of operators K → K. It is assumed that if a first-order equation Fy = 0, F ∈ k[θ], has a nonzero solution in a σ, δ-compatible extension of the field k, then the equation has a nonzero solution in K. These solutions form the set H_k ⊂ K of hyperexponential elements. An equation Py = 0, P ∈ k[θ], is called completely factorable if P can be decomposed into a product of first-order operators over k. Solutions of all completely factorable equations form the linear space A_k ⊂ K of d'Alembertian elements. The order of the minimal operator over k which annihilates a ∈ A_k is called the height of a. It is easy to see that H_k ⊂ A_k and the height of any a ∈ H_k is equal to 1. It is known ([12, 4]) that if L ∈ k[θ] and h ∈ H_k then all the hyperexponential solutions of the equation

Journal ArticleDOI
TL;DR: In this article, the authors considered the general linear regression model (y, Xβ, V | R_2 β_2 = r), where the block-partitioned regressor matrix X = (X_1 X_2) may be deficient in column rank, the dispersion matrix V is possibly singular, β^T = (β_1^T, β_2^T) is the vector of unknown regression coefficients, and β_2 is possibly subject to consistent linear constraints R_2 β_2 = r.

Patent
Mitsutoshi Nakamura1
06 Sep 1996
TL;DR: In this article, a simulation method and a simulator determine a profile of particles, by determining whether or not each reaction formula, which describes a reaction of particles to generate reactants in semiconductor solids or gases, is in equilibrium.
Abstract: A simulation method and a simulator determine a profile of particles, by determining whether or not each reaction formula, which describes a reaction of particles to generate reactants in semiconductor solids or gases, is in equilibrium, determining unknown variables excluding variables related to the reactants of each equilibrium reaction, forming continuity equations containing a plurality of time differential terms as functions of the unknown variables, linearizing and discretizing the time differential terms into the coefficient matrix and constant vector of simultaneous linear equations, and solving the simultaneous linear equations.



Journal ArticleDOI
TL;DR: In this paper, the authors present a description of all systems of linear differential equations which do not admit any rational first integral (RFI).

Journal ArticleDOI
TL;DR: A new algorithm to determine if the coefficient matrix A of a linear system Ax = b is a generalized diagonally dominant matrix is proposed and compared with that of Yi-ming Gao.
Abstract: We propose a new algorithm to determine if the coefficient matrix A of a linear system Ax = b is a generalized diagonally dominant matrix. We give some numerical examples and compare our method with that of Yi-ming Gao.
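
For context, one direct way to test the property (again, not the authors' algorithm, nor Gao's) uses the standard characterization via the comparison matrix: a matrix A with nonzero diagonal is generalized diagonally dominant, i.e. some positive diagonal scaling makes it strictly diagonally dominant, exactly when the spectral radius of |diag(A)|^{-1} |offdiag(A)| is below 1. A small sketch:

```python
import numpy as np

def is_generalized_diag_dominant(A, tol=1e-12):
    """Check generalized diagonal dominance via the spectral radius of
    |diag(A)|^{-1} * |offdiag(A)| (requires a nonzero diagonal)."""
    D = np.abs(np.diag(A))
    B = np.abs(A - np.diag(np.diag(A)))
    rho = np.max(np.abs(np.linalg.eigvals(B / D[:, None])))
    return rho < 1 - tol

A1 = np.array([[2.0, 4.0], [1.0, 3.0]])   # not row-dominant, yet generalized dominant
A2 = np.array([[1.0, 2.0], [2.0, 1.0]])   # not generalized diagonally dominant
print(is_generalized_diag_dominant(A1), is_generalized_diag_dominant(A2))   # True False
```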

Journal ArticleDOI
TL;DR: In this article, the problem of classifying linear time-varying finite dimensional systems of difference equations under kinematic similarity was studied, i.e., under a uniformly bounded change of variables of which the inverse is also uniformly bounded.
Abstract: This paper concerns the problem of classifying linear time-varying finite-dimensional systems of difference equations under kinematic similarity, i.e., under a uniformly bounded time-varying change of variables of which the inverse is also uniformly bounded. Also the problem of reducing difference equations by using such similarity transformations is studied. Both problems are solved for a number of subclasses, including equations with scalar coefficients, time-invariant equations, finitely supported equations, and equations with one jump. For the general case an open problem is formulated.