
Showing papers in "SIAM Journal on Numerical Analysis in 1973"


Journal ArticleDOI
TL;DR: This paper presents an unusual numbering of the mesh (unknowns) and shows that if the authors avoid operating on zeros, the $LDL^T $ factorization of A can be computed using the same standard algorithm in $O(n^3 )$ arithmetic operations.
Abstract: Let M be a mesh consisting of $n^2 $ squares called elements, formed by subdividing the unit square $(0,1) \times (0,1)$ into $n^2 $ small squares of side $1/n$, and having a node at each of the $(n + 1)^2 $ grid points. With M we associate the $N \times N$ symmetric positive definite system $Ax = b$, where $N = (n + 1)^2 $, each $x_i $ is associated with a node of M, and $A_{ij} \ne 0$ if and only if $x_i $ and $x_j $ are associated with nodes of the same element. If we solve the equations via the standard symmetric factorization of A, then $O(n^4 )$ arithmetic operations are required if the usual row by row (banded) numbering scheme is used, and the storage required is $O(n^3 )$. In this paper we present an unusual numbering of the mesh (unknowns) and show that if we avoid operating on zeros, the $LDL^T $ factorization of A can be computed using the same standard algorithm in $O(n^3 )$ arithmetic operations. Furthermore, the storage required is only $O(n^2 \log _2 n)$. Finally, we prove that all ord...

1,043 citations
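
As a rough illustration of the setup in the abstract above (not of the paper's nested numbering itself), the following sketch builds the sparsity pattern $A_{ij} \ne 0$ iff nodes i and j share an element, for a small $n \times n$ mesh. The mesh size, the row-by-row node numbering, and the use of scipy.sparse are my own choices for the example; the paper's point is that a different numbering reduces the factorization cost from $O(n^4)$ to $O(n^3)$.

```python
# Minimal sketch (not the paper's ordering algorithm): build the sparsity
# pattern A_ij != 0  iff  nodes i and j share an element, for an n x n mesh
# of square elements with (n+1)^2 nodes, as described in the abstract.
import numpy as np
import scipy.sparse as sp

def element_connectivity(n):
    N = (n + 1) ** 2                      # number of nodes
    node = lambda i, j: i * (n + 1) + j   # row-by-row node numbering
    rows, cols = [], []
    for i in range(n):                    # loop over elements
        for j in range(n):
            corners = [node(i, j), node(i, j + 1),
                       node(i + 1, j), node(i + 1, j + 1)]
            for a in corners:             # all node pairs within one element couple
                for b in corners:
                    rows.append(a)
                    cols.append(b)
    data = np.ones(len(rows))
    return sp.coo_matrix((data, (rows, cols)), shape=(N, N)).tocsr()

A = element_connectivity(8)
print(A.shape, A.nnz)   # 81 x 81 pattern with at most 9 nonzeros per row
```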


Journal ArticleDOI
TL;DR: A new method, called the QZ algorithm, is presented for the solution of the matrix eigenvalue problem $Ax = \lambda Bx$ with general square matrices A and B, with particular attention paid to the degeneracies which result when B is singular.
Abstract: A new method, called the $QZ$ algorithm, is presented for the solution of the matrix eigenvalue problem $Ax = \lambda Bx$ with general square matrices A and B. Particular attention is paid to the degeneracies which result when B is singular. No inversions of B or its submatrices are used. The algorithm is a generalization of the $QR$ algorithm, and reduces to it when $B = I$. Problems involving higher powers of $\lambda $ are also mentioned.

1,038 citations
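
In practice the problem $Ax = \lambda Bx$ is solved by QZ-type routines in LAPACK, exposed through SciPy. The sketch below, with arbitrary example matrices of my own choosing (B deliberately singular), shows how the eigenvalues emerge as ratios $\alpha_i / \beta_i$, with $\beta_i = 0$ flagging the degenerate (infinite) eigenvalues mentioned in the abstract.

```python
# Sketch: solving A x = lambda B x via the QZ decomposition as exposed in
# SciPy (LAPACK).  The matrices below are arbitrary illustrations; B is
# made singular on purpose so one "eigenvalue" is infinite (beta = 0).
import numpy as np
from scipy.linalg import qz, eig

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 4.0]])
B = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0]])     # singular B

AA, BB, Q, Z = qz(A, B, output='complex')   # Q^H A Z = AA, Q^H B Z = BB (triangular)
alpha, beta = np.diag(AA), np.diag(BB)
with np.errstate(divide='ignore', invalid='ignore'):
    print("eigenvalues (alpha/beta):", alpha / beta)   # beta ~ 0 -> infinite eigenvalue

# Cross-check with the driver routine, which uses the same factorization:
print("scipy.linalg.eig:", eig(A, B, right=False))
```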


Journal ArticleDOI
TL;DR: In this paper, $L^2$ error estimates for the continuous time and several discrete time Galerkin approximations to solutions of some second order nonlinear parabolic partial differential equations are derived.
Abstract: $L^2$ error estimates for the continuous time and several discrete time Galerkin approximations to solutions of some second order nonlinear parabolic partial differential equations are derived. Both Neumann and Dirichlet boundary conditions are considered. The estimates obtained are the best possible in an $L^2$ sense. These error estimates are derived by relating the error for the nonlinear parabolic problem to known $L^2$ error estimates for a linear elliptic problem. With additional restrictions on basis functions ...

608 citations


Journal ArticleDOI
TL;DR: By modifying the simplex method of linear programming, this work presents an algorithm for $l_1 $-approximation which appears to be superior computationally to any other known algorithm for this problem.
Abstract: Work sponsored by the United States Army under Contract No.: DA-31-124-ARO-D-462. Department of Mathematics, University of Victoria, Victoria, B.C., Canada.

571 citations
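
As a generic illustration of the discrete $l_1$ (least absolute deviations) approximation problem the TL;DR refers to, and emphatically not the authors' specialized simplex variant, the same fit can be posed as a standard linear program and handed to scipy.optimize.linprog; the data below are synthetic.

```python
# Generic LP formulation of the discrete l_1 (least absolute deviations)
# fitting problem  min_x ||Ax - b||_1 , solved with scipy.optimize.linprog.
# This only illustrates the problem class; it is NOT the paper's modified
# simplex method, which exploits the structure for efficiency.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
m, p = 40, 3
A = rng.normal(size=(m, p))
b = A @ np.array([1.0, -2.0, 0.5]) + rng.laplace(scale=0.1, size=m)

# Variables z = [x (p entries), t (m entries)]; minimize sum(t)
# subject to  A x - t <= b  and  -A x - t <= -b   (i.e. |Ax - b| <= t).
c = np.concatenate([np.zeros(p), np.ones(m)])
A_ub = np.block([[ A, -np.eye(m)],
                 [-A, -np.eye(m)]])
b_ub = np.concatenate([b, -b])
bounds = [(None, None)] * p + [(0, None)] * m

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
print("l_1 fit:", res.x[:p])
```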


Journal ArticleDOI
TL;DR: In this article, approximations to an isolated solution of an mth order nonlinear ordinary differential equation with m linear side conditions are determined by collocation, i.e., by the requirement that they satisfy the differential equation at k points in each subinterval, together with the m side conditions.
Abstract: Approximations to an isolated solution of an mth order nonlinear ordinary differential equation with m linear side conditions are determined. These approximations are piecewise polynomial functions of order $m + k$ (degree less than $m + k$) possessing $m - 1$ continuous derivatives. They are determined by collocation, i.e., by the requirement that they satisfy the differential equation at k points in each subinterval, together with the m side conditions. If the solution of the sufficiently smooth differential equation problem has $m + 2k$ continuous derivatives and if the collocation points are the zeroes of the kth Legendre polynomial relative to each subinterval, then the global error in these approximations is $O(| \Delta |^{m + k} )$ with $| \Delta |$ the maximum subinterval length. Moreover, at the ends of each subinterval, the approximation and its first $m - 1$ derivatives are $O(| \Delta |^{2k} )$ accurate. The solution of the nonlinear collocation equations may itself be approximated by solving ...

568 citations


Journal ArticleDOI
TL;DR: A family of fourth order iterative methods for finding simple zeros of nonlinear functions is presented in this paper; the methods require evaluation of the function and its derivative at the starting point of each step.
Abstract: A family of fourth order iterative methods for finding simple zeros of nonlinear functions is displayed. The methods require evaluation of the function and its derivative at the starting point of e...

342 citations
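
The abstract is truncated, so the exact family is not recoverable here. As a hedged illustration of the general type of scheme (one derivative and two function evaluations per step), the sketch below implements Ostrowski's classical fourth order two-step method, which may or may not coincide with a member of the paper's family.

```python
# Sketch of one classical fourth order two-step iteration (Ostrowski's
# method) for a simple zero of f.  Whether this coincides with the family
# in the paper is an assumption -- the abstract here is truncated -- but it
# has the same flavour: one derivative and two function values per step.
def ostrowski(f, fprime, x0, tol=1e-14, maxit=50):
    x = x0
    for _ in range(maxit):
        fx = f(x)
        if abs(fx) < tol:
            return x
        dfx = fprime(x)
        y = x - fx / dfx                              # Newton predictor
        fy = f(y)
        x = y - fy / dfx * fx / (fx - 2.0 * fy)       # fourth order corrector
    return x

# Example: cube root of 2 as the root of x**3 - 2
root = ostrowski(lambda x: x**3 - 2.0, lambda x: 3.0 * x**2, x0=1.0)
print(root, root**3)
```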


Journal ArticleDOI
TL;DR: In this paper, the authors studied the nonlinear eigenvalue problem and proposed a global strategy to find a complete basis of eigenvectors in the cases where it is proved that such a basis exists.
Abstract: The following nonlinear eigenvalue problem is studied: Let $T(\lambda )$ be an $n \times n$ matrix, whose elements are analytical functions of the complex number $\lambda $. We seek $\lambda $ and vectors x and y, such that $T(\lambda )x = 0$, and $y^H T(\lambda ) = 0$. Several algorithms for the numerical solution of this problem are studied. These algorithms are extensions of algorithms for the linear eigenvalue problem such as inverse iteration and the $QR$ algorithm, and algorithms that reduce the nonlinear problem into a sequence of linear problems. It is found that this latter method can be extended into a global strategy, finding a complete basis of eigenvectors in the cases where it is proved that such a basis exists. Numerical tests, performed in order to compare the different algorithms, are reported, and a few numerical examples illustrating their behavior are given.

292 citations
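
One of the "sequence of linear problems" ideas mentioned in the abstract can be sketched as follows: linearize $T(\lambda)$ at the current iterate and solve a linear generalized eigenproblem for the smallest correction. This is a generic successive-linear-problems iteration on an example of my own (a quadratic eigenvalue problem), and is not claimed to be the precise algorithm analyzed in the paper.

```python
# Sketch of a "successive linear problems" iteration for T(lambda) x = 0:
# linearize T(l) ~ T(l_k) + (l - l_k) T'(l_k) and solve the linear
# generalized eigenproblem T(l_k) x = -mu T'(l_k) x for the correction mu
# of smallest modulus.  A generic illustration of the idea in the abstract,
# not necessarily the precise algorithm studied in the paper.
import numpy as np
from scipy.linalg import eig

def successive_linear_problems(T, dT, lam0, tol=1e-12, maxit=50):
    lam = lam0
    for _ in range(maxit):
        mus, vecs = eig(T(lam), -dT(lam))
        k = np.argmin(np.abs(mus))       # smallest correction in modulus
        lam = lam + mus[k]
        if abs(mus[k]) < tol:
            break
    return lam, vecs[:, k]

# Example: a quadratic eigenvalue problem T(l) = l**2 * M + l * C + K.
M = np.eye(2)
C = np.array([[0.1, 0.0], [0.0, 0.2]])
K = np.array([[2.0, -1.0], [-1.0, 2.0]])
T  = lambda l: l**2 * M + l * C + K
dT = lambda l: 2 * l * M + C
lam, x = successive_linear_problems(T, dT, lam0=1.0j)
print(lam, np.linalg.norm(T(lam) @ x))   # residual should be tiny
```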


Journal ArticleDOI
TL;DR: A priori error estimates are established in the $L^2 $-norm for Galerkin approximations to the solution of a generalized wave equation.
Abstract: A priori error estimates are established in the $L^2 $-norm for Galerkin approximations to the solution of a generalized wave equation. Optimal rates of convergence are established for several boundary conditions using both continuous and discrete Galerkin procedures.

256 citations


Journal ArticleDOI
TL;DR: In this article, an approximation theorem is proved and error bounds are derived for the Dirichlet problem for a ${\mathop W\limits^{\circ}} _2^{(1)} $-elliptic equation, considered as a model problem.
Abstract: Curved elements, introduced by the author in [13] and [14], which are suitable for solving boundary value problems of the second order in plane domains with an arbitrary boundary are discussed. An approximation theorem is proved, the Dirichlet problem for a ${\mathop W\limits^{\circ}} _2^{(1)} $-elliptic equation is considered as a model problem and error bounds are derived.

251 citations


Journal ArticleDOI
TL;DR: A penalty method approach is used for achieving convergence of a finite element method using nonconforming elements and error estimates are given.
Abstract: A penalty method approach is used for achieving convergence of a finite element method using nonconforming elements. Error estimates are given.

248 citations


Journal ArticleDOI
TL;DR: A class of secant methods and a class of methods related to Brown’s methods, but using orthogonal rather than stabilized elementary transformations, are introduced; the idea of these methods is to avoid finding a new approximation to the Jacobian matrix of the system at each step, and thus increase the efficiency.
Abstract: We compare the Ostrowski efficiency of some methods for solving systems of nonlinear equations without explicitly using derivatives. The methods considered include the discrete Newton method, Shamanskii’s method, the two-point secant method, and Brown’s methods. We introduce a class of secant methods and a class of methods related to Brown’s methods, but using orthogonal rather than stabilized elementary transformations. The idea of these methods is to avoid finding a new approximation to the Jacobian matrix of the system at each step, and thus increase the efficiency. Local convergence theorems are proved, and the efficiencies of the methods are calculated. Numerical results are given, and some possible extensions are mentioned.
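
For orientation, the simplest of the methods compared in the abstract, the discrete Newton method (Newton's iteration with a forward-difference Jacobian), can be sketched as below. The step h, the test system, and the stopping rule are illustrative choices of mine; the paper's new secant-type classes are precisely designed to avoid rebuilding this Jacobian at every step.

```python
# Sketch of the discrete Newton method: Newton's iteration with a
# forward-difference approximation to the Jacobian.  This is only the
# simplest of the methods compared in the paper, shown for orientation.
import numpy as np

def discrete_newton(F, x0, h=1e-7, tol=1e-10, maxit=50):
    x = np.asarray(x0, dtype=float)
    n = x.size
    for _ in range(maxit):
        Fx = F(x)
        if np.linalg.norm(Fx) < tol:
            return x
        J = np.empty((n, n))
        for j in range(n):               # one extra F-evaluation per column
            e = np.zeros(n)
            e[j] = h
            J[:, j] = (F(x + e) - Fx) / h
        x = x - np.linalg.solve(J, Fx)
    return x

# Example: intersect the circle x^2 + y^2 = 4 with the curve y = exp(x) - 1.
F = lambda v: np.array([v[0]**2 + v[1]**2 - 4.0, np.exp(v[0]) - 1.0 - v[1]])
print(discrete_newton(F, [1.0, 1.0]))
```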

Journal ArticleDOI
TL;DR: In this paper, the solution of ill-conditioned linear systems using the singular value decomposition (SVD) is considered, and it is shown how this can improve the accuracy of the computed solution for certain kinds of right-hand sides.
Abstract: We consider the solution of ill-conditioned linear systems using the singular value decomposition, and show how this can improve the accuracy of the computed solution for certain kinds of right-hand sides. Then we indicate how this technique is especially appropriate for some classical ill-posed problems of mathematical physics.
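
A minimal sketch of the underlying technique, assuming a plain truncated-SVD solve (the paper's contribution is the analysis of when, and for which right-hand sides, this helps): expand b in the left singular vectors and drop the components belonging to negligible singular values. The tolerance rtol and the Hilbert-like test matrix are my own choices.

```python
# Sketch of a truncated-SVD solution of an ill-conditioned system A x = b:
# expand b in the left singular vectors and drop the components belonging
# to negligible singular values.  A generic illustration of the technique.
import numpy as np

def tsvd_solve(A, b, rtol=1e-8):
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    keep = s > rtol * s[0]                  # discard tiny singular values
    coeff = (U.T @ b)[keep] / s[keep]
    return Vt[keep].T @ coeff

# A discretized ill-posed-style example: a Hilbert matrix.
n = 12
A = 1.0 / (np.arange(n)[:, None] + np.arange(n)[None, :] + 1.0)
x_true = np.ones(n)
b = A @ x_true
print(np.linalg.cond(A))                         # severely ill-conditioned
print(np.linalg.norm(tsvd_solve(A, b) - x_true)) # yet the solution is usable
```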

Journal ArticleDOI
TL;DR: In this paper, an implicit finite difference method for the multidimensional Stefan problem is discussed, where the classical problem with discontinuous enthalpy is replaced by an approximate Stefan problem with continuous piecewise linear enthalpy.
Abstract: An implicit finite difference method for the multidimensional Stefan problem is discussed. The classical problem with discontinuous enthalpy is replaced by an approximate Stefan problem with continuous piecewise linear enthalpy. An implicit time approximation reduces this formulation to a sequence of monotone elliptic problems which are solved by finite difference techniques. It is shown that the resulting nonlinear algebraic equations are solvable with a Gauss-Seidel method and that the discretized solution converges to the unique weak solution of the Stefan problem as the time and space mesh size approaches zero.

Journal ArticleDOI
TL;DR: A general formulation of the additive correction methods of Poussin [4] and Watts [7] is presented in this paper and applied to the solution of finite difference equations resulting from elliptic and parabolic partial differential equations.
Abstract: A general formulation of the additive correction methods of Poussin [4] and Watts [7] is presented. The methods are applied to the solution of finite difference equations resulting from elliptic and parabolic partial differential equations. A new method is developed for anisotropic and heterogeneous problems. For such difficult problems the method presented here is comparable with Stone’s strongly implicit method, while all other methods tested require much greater computational effort. The correction approach discussed here can be easily applied with any iterative method.

Journal ArticleDOI
TL;DR: Sharp lower bounds are obtained for multiplications and storage in the sparse system arising from the application of finite difference or finite element techniques to linear boundary value problems on plane regions yielding regular $n \times n$ grids.
Abstract: Sharp lower bounds are obtained for multiplications and storage in the sparse system arising from the application of finite difference or finite element techniques to linear boundary value problems on plane regions yielding regular $n \times n$ grids. Graph-theoretic techniques are used to take advantage of the simplicity of the underlying combinatorial structure of the problem.

Journal ArticleDOI
TL;DR: In this paper, a modified form of the gradient type approach is applied to a perturbation of the penalty function, which enables the first order derivatives to be everywhere defined.
Abstract: Consider the problem of finding the local constrained minimum $x_0 $ of the function f on the set \[ F = \left\{ {x \in R^n |\phi _i (x) \geqq 0,\quad \psi _{j + k} (x) = 0;\quad i = 1,2, \cdots ,k;\quad j = 1,2, \cdots ,l} \right\}.\] One method of solution is to minimize the associated penalty function \[ p_0 (x) = \mu f(x) - \sum _{i = 1}^k {\min } \left( {0,\phi _i (x)} \right) + \sum _{j = 1}^l {\left| {\psi _{j + k} (x)} \right|} \quad {\text{for }} x \in R^n ,\quad \mu \geqq 0. \] Let $x(\mu )$ be the minimum of this penalty function. It is known that, provided $\mu $ is sufficiently small, $x(\mu ) = x_0 $. However, until recently, a serious drawback to this particular penalty function was that its first order derivatives are not everywhere defined. Thus, well-known gradient type methods usually applied to unconstrained optimization problems were necessarily excluded. This paper presents a method that enables a modified form of the gradient type approaches to be applied to a perturbation of the penalt...
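
The penalty function $p_0$ above can be written down directly; the sketch below does so for a small example problem of my own (not taken from the paper) and minimizes it with a derivative-free simplex search, which sidesteps exactly the nondifferentiability that the paper's perturbation is designed to remove.

```python
# The exact (nondifferentiable) penalty function p_0 from the abstract,
# written out for a small assumed example: minimize f(x) = (x1-1)^2 + (x2-2)^2
# subject to phi(x) = x1 >= 0 and psi(x) = x1 + x2 - 2 = 0.  Because p_0 has
# kinks, a derivative-free simplex search is used here; the paper's point is
# to perturb p_0 so that gradient-type methods become applicable instead.
import numpy as np
from scipy.optimize import minimize

f   = lambda x: (x[0] - 1.0) ** 2 + (x[1] - 2.0) ** 2
phi = [lambda x: x[0]]                       # inequality constraints phi_i >= 0
psi = [lambda x: x[0] + x[1] - 2.0]          # equality constraints psi_j = 0

def p0(x, mu=0.1):
    return (mu * f(x)
            - sum(min(0.0, g(x)) for g in phi)
            + sum(abs(h(x)) for h in psi))

res = minimize(p0, x0=np.array([0.0, 0.0]), method="Nelder-Mead",
               options={"xatol": 1e-10, "fatol": 1e-10})
print(res.x)        # should be close to the constrained minimum (0.5, 1.5)
```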

Journal ArticleDOI
TL;DR: In this article, a method for computing accurate approximations to the eigenvalues and eigenfunctions of regular Sturm-Liouville differential equations was proposed, which consists of replacing the coefficient functions of the given problem by piecewise polynomial functions and then solving the resulting simplified problem.
Abstract: This paper is concerned with computing accurate approximations to the eigenvalues and eigenfunctions of regular Sturm–Liouville differential equations. The method consists of replacing the coefficient functions of the given problem by piecewise polynomial functions and then solving the resulting simplified problem. Error estimates in terms of the approximate solutions are established and numerical results are displayed. Since the asymptotic properties for Sturm–Liouville systems are preserved by the approximation, the relative error in the higher eigenvalues is much more uniform than is the case for finite difference or Rayleigh–Ritz methods.

Journal ArticleDOI
TL;DR: In this paper, it was shown that the first four iterations produce exactly the same sequence of subspaces as do direct and inverse iteration started from appropriate subspaces, and that Hessenberg matrices are associated with ideal starting spaces.
Abstract: We are concerned with the task of computing the invariant subspaces of a given matrix. For this purpose the $LU$, $QR$, treppen and bi-iterations have been presented, used, and studied more or less independently of the old-fashioned power method. Each of these methods generates implicitly a sequence of subspaces which determines the convergence properties of the method. The iterations differ in the way in which a basis is constructed to represent each subspace. This aspect largely determines the usefulness of the method.We show that the first four iterations produce exactly the same sequence of subspaces as do direct and inverse iteration started from appropriate subspaces. Their convergence properties are therefore the same and we present a complete geometric convergence theory in terms of the power method. Most previous studies have been algebraic in character. We show that Hessenberg matrices are associated with ideal starting spaces.The theory rests naturally in the setting of an n-dimensional space $...
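
The subspace point of view in the abstract is essentially that of simultaneous (orthogonal) iteration, the block form of the power method: multiply a basis by A and re-orthonormalize. The sketch below illustrates that textbook scheme on a small random example of my own; it is not the $LU$, $QR$, treppen or bi-iteration variants themselves.

```python
# Sketch of simultaneous (orthogonal) iteration, the subspace form of the
# power method underlying the abstract's geometric point of view:
# Q_{k+1} R_{k+1} = A Q_k.  The span of Q_k converges to the dominant
# invariant subspace under the usual gap assumptions.
import numpy as np

def orthogonal_iteration(A, p, iters=200, seed=0):
    rng = np.random.default_rng(seed)
    Q, _ = np.linalg.qr(rng.normal(size=(A.shape[0], p)))
    for _ in range(iters):
        Q, _ = np.linalg.qr(A @ Q)
    return Q                        # orthonormal basis of a (near-)invariant subspace

A = np.diag([5.0, 4.0, 1.0, 0.5]) + 0.01 * np.random.default_rng(1).normal(size=(4, 4))
Q = orthogonal_iteration(A, p=2)
# Residual A Q - Q (Q^T A Q) is small when span(Q) is (nearly) invariant:
print(np.linalg.norm(A @ Q - Q @ (Q.T @ A @ Q)))
```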

Journal ArticleDOI
TL;DR: In this article, the authors consider two-stage iterative processes for solving the linear system $Af = b$: an outer iteration $Mf^{k + 1} = Nf^k + b$ with M nonsingular and $M - N = A$, in which each outer step is computed approximately by an inner iteration, and they show that the procedure converges provided the inner iteration is convergent and the outer process would converge if each step were determined exactly.
Abstract: This paper considers two-stage iterative processes for solving the linear system $Af = b$. The outer iteration is defined by $Mf^{k + 1} = Nf^k + b$, where M is a nonsingular matrix such that $M - N = A$. At each stage $f^{k + 1} $ is computed approximately using an inner iteration process to solve $Mv = Nf^k + b$ for v. At the kth outer iteration, $p_k $ inner iterations are performed. It is shown that this procedure converges if $p_k \geqq P$ for some P provided that the inner iteration is convergent and that the outer process would converge if $f^{k + 1} $ were determined exactly at every step. Convergence is also proved under more specialized conditions, and for the procedure where $p_k = p$ for all k, an estimate for p is obtained which optimizes the convergence rate. Examples are given for systems arising from the numerical solution of elliptic partial differential equations and numerical results are presented.
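
A hedged sketch of the two-stage process described above: an outer splitting $A = M - N$, with each outer system $Mv = Nf^k + b$ solved only approximately by a fixed number p of inner sweeps. The concrete choices below (Gauss–Seidel-type M, Jacobi as the inner iteration, a 1-D model matrix) are illustrative and not taken from the paper.

```python
# Sketch of the two-stage (inner-outer) iteration of the abstract:
# outer splitting A = M - N, with each outer system M v = N f^k + b solved
# only approximately by p inner Jacobi sweeps on M.  The concrete choice of
# M, N and the inner method is illustrative, not taken from the paper.
import numpy as np

def two_stage(A, b, M, N, p=5, outer_iters=100):
    D = np.diag(np.diag(M))                 # inner iteration: Jacobi on M
    R = M - D
    f = np.zeros_like(b)
    for _ in range(outer_iters):
        rhs = N @ f + b
        v = f.copy()
        for _ in range(p):                  # p inner sweeps per outer step
            v = np.linalg.solve(D, rhs - R @ v)
        f = v
    return f

# Diagonally dominant tridiagonal test: A = tridiag(-1, 4, -1);
# take M = tril(A) (Gauss-Seidel-type outer splitting), N = M - A.
n = 30
A = 4 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
M = np.tril(A)
N = M - A
f = two_stage(A, b, M, N)
print(np.linalg.norm(A @ f - b))            # small residual: the scheme converged
```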

Journal ArticleDOI
TL;DR: For the continuous-time Galerkin method with periodic solution, this paper shows that the $L^2$ error is bounded by $ch^4$ for sufficiently smooth solutions when $C^2$ piecewise cubic polynomials are used on a mesh of size h.
Abstract: The continuous-time Galerkin method is studied for the equation $u_t + u_x = 0$ with periodic solution. If the space of possible approximate solutions is taken to be $C^1 $ piecewise cubic polynomials on mesh of size h, then the $L^2 $-norm of the error is in general no better than $ch^3 $; if the class of possible approximate solutions is taken to be $C^2 $ piecewise cubic polynomials on this mesh, the error is bounded by $ch^4 $ for sufficiently smooth solutions.

Journal ArticleDOI
TL;DR: Approximately ten different ways for changing the step size used by multistep methods are enumerated, and their good and bad features are compared.
Abstract: Approximately ten different ways for changing the step size used by multistep methods are enumerated, and their good and bad features are compared. More efficient algorithms are given for the difference formulations of a frequently used halving and doubling process, and a cure for the instability inherent in this halving process is proposed.

Journal ArticleDOI
TL;DR: In this paper, it is shown how smoothing splines can be represented in terms of a local basis, and that the coefficients can be obtained by solution of a banded linear system.
Abstract: It is shown how smoothing splines can be represented in terms of a local basis, and that the coefficients can be obtained by solution of a banded linear system. Recursion relations are developed which permit rapid and accurate calculation of the necessary basis elements.
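
For illustration only: standard smoothing-spline routines use exactly this kind of local (B-spline) basis representation, with the banded linear algebra handled internally. The sketch below fits a smoothing spline with SciPy's FITPACK wrapper and inspects the returned knots and B-spline coefficients; the test data and smoothing parameter are my own choices.

```python
# Illustration only: a smoothing spline represented in a local B-spline
# basis, via SciPy's FITPACK wrapper.  The knots t and coefficients c
# returned below are the kind of local-basis representation the abstract
# refers to; the banded linear algebra is handled internally.
import numpy as np
from scipy.interpolate import splrep, splev

rng = np.random.default_rng(0)
x = np.linspace(0.0, 2.0 * np.pi, 100)
y = np.sin(x) + 0.1 * rng.normal(size=x.size)

t, c, k = splrep(x, y, s=1.0)          # s controls the smoothing tradeoff
print(len(t), "knots, degree", k)
print(np.max(np.abs(splev(x, (t, c, k)) - np.sin(x))))   # close to the noiseless curve
```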

Journal ArticleDOI
TL;DR: In this paper, a simple canonical form is presented for splines with respect to non-degenerate (if a set of hyperplanes has nonempty intersection then the corresponding set of normal vectors is linearly independent) partitions.
Abstract: Any set of hyperplanes partitions $E^n $ into a set of polyhedra. A multivariate spline of degree n is a polynomial of total degree n on each polyhedron with all partial derivatives of order $n - 1$ being continuous everywhere. An especially simple canonical form is presented for splines with respect to nondegenerate (if a set of hyperplanes has nonempty intersection then the corresponding set of normal vectors is linearly independent) partitions. Use of the canonical form, for fitting data, involves linear regression for fixed partitions and nonlinear regression for varying partitions. The canonical form gives rise to an ill-conditioned linear regression problem. However, some preliminary numerical experience in low dimensions indicates that the ill-conditioning is overcome with the use of singular value decomposition.

Journal ArticleDOI
TL;DR: In this paper, a completely continuous nonlinear operator on a domain D contained in a Banach space is given, and a common way of approximating this operator leads to a sequence of nonlinear operators.
Abstract: Given a completely continuous nonlinear operator $\mathcal{K}$ on a domain D contained in a Banach space $\mathcal{X}$, one common way of approximating $\mathcal{K}$ leads to a sequence $\{ \mathca...

Journal ArticleDOI
TL;DR: In this paper, the authors review several connections between the theory of convergence of iterative processes and the Lyapunov stability of ordinary difference equations, including local convergence and asymptotic stability.
Abstract: In this report, we review several connections between the theory of convergence of iterative processes and the theory of Lyapunov stability of ordinary difference equations. Among the topics discussed are local convergence and asymptotic stability, global convergence and global asymptotic stability, Lyapunov functions, domain of attraction, total stability and rounding errors, nonautonomous equations and variable operator iterations.

Journal ArticleDOI
TL;DR: In this paper, cyclic iterative methods for systems of linear equations are investigated with reference to necessary and sufficient conditions for convergence, and convergence conditions, not previously established, are obtained for stationary relaxation processes of Gauss–Seidel and Jacobi type.
Abstract: Cyclic iterative methods of solving systems of linear equations are investigated with reference to necessary and sufficient conditions for convergence. A new and concise method of proof is given for the convergence of point Gauss–Seidel and Jacobi iterations subject to strict and irreducible weak diagonal dominance. The method is developed to obtain convergence conditions, not previously established, for stationary relaxation processes of Gauss–Seidel and Jacobi type. Bounds are obtained on the spectral radius of the iteration matrix, hence a range of values of the relaxation parameter is derived sufficient to guarantee convergence.
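
As a small numerical illustration of the convergence criterion the paper's bounds concern (not of its proof technique), the sketch below forms the Jacobi and Gauss–Seidel iteration matrices for a strictly diagonally dominant matrix of my own choosing and checks that their spectral radii are below one.

```python
# Sketch: for a strictly diagonally dominant matrix, form the Jacobi and
# Gauss-Seidel iteration matrices and verify that their spectral radii are
# below 1 (the convergence criterion the paper's bounds are about).
import numpy as np

A = np.array([[ 4.0, -1.0,  0.0],
              [-1.0,  4.0, -1.0],
              [ 0.0, -1.0,  4.0]])

D = np.diag(np.diag(A))
L = np.tril(A, k=-1)
U = np.triu(A, k=1)

G_jacobi = np.linalg.solve(D, -(L + U))          # Jacobi iteration matrix -D^{-1}(L+U)
G_gs     = np.linalg.solve(D + L, -U)            # Gauss-Seidel matrix -(D+L)^{-1}U

rho = lambda G: np.max(np.abs(np.linalg.eigvals(G)))
print("rho(Jacobi)       =", rho(G_jacobi))      # < 1, iteration converges
print("rho(Gauss-Seidel) =", rho(G_gs))          # < 1 and smaller still
```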

Journal ArticleDOI
TL;DR: The method is a generalization of Rutishauser’s $LR$-method for the standard eigenvalue problem and closely resembles the $QZ$-algorithm given by Moler and Stewart for the generalized problem.
Abstract: In this paper, we will present and analyze an algorithm for finding ${\bf x}$ and $\lambda$ such that \[ A{\bf x} = \lambda B{\bf x},\] where A and B are $n \times n$ matrices. The algorithm does not require matrix inversion, and may be used when either or both matrices are singular. Our method is a generalization of Rutishauser’s $LR$-method [20] for the standard eigenvalue problem $A{\bf x} = \lambda {\bf x}$ and closely resembles the $QZ$-algorithm given by Moler and Stewart [13] for the generalized problem given above. Unlike the $QZ$-algorithm, which uses orthogonal transformations, our method, the $LZ$-algorithm, uses elementary transformations. When either A or B is complex, our method should be more efficient.

Journal ArticleDOI
TL;DR: In this article, it was shown that any minimax rational approximation problem defines a strictly quasi-convex function with the property that a best approximation (if one exists) is a minimum of that function.
Abstract: If a continuous function is strictly quasi-convex on a convex set $\Gamma $, then every local minimum of the function must be a global minimum. Furthermore, every local maximum of the function on the interior of $\Gamma $ must also be a global minimum. Here, we prove that any minimax rational approximation problem defines a strictly quasi-convex function with the property that a best approximation (if one exists) is a minimum of that function. The same result is not true in general for best rational approximation in other norms.

Journal ArticleDOI
TL;DR: In this article, the authors study the rate of convergence of four methods of feasible directions: the Zoutendijk procedures 1 and 2 and two modifications of these procedures due to the authors.
Abstract: This paper deals with the rate of convergence of four methods of feasible directions: the Zoutendijk procedures 1 and 2 and two modifications of these procedures due to the authors. It is shown that of these methods, the two due to the authors converge linearly under convexity assumptions, that the Zoutendijk procedure 2 converges sublinearly under these assumptions, and that the Zoutendijk procedure 1 converges linearly provided the solution of the problem is a vertex of the constraint set.

Journal ArticleDOI
TL;DR: A priori error estimates for continuous piecewise polynomial Galerkin approximations to the solutions of one-dimensional second order parabolic and hyperbolic equations were derived in this article.
Abstract: A priori $L_\infty $ error estimates are derived for continuous piecewise polynomial Galerkin approximations to the solutions of one-dimensional second order parabolic and hyperbolic equations. Optimal rates of convergence are established for both continuous and discrete time Galerkin procedures.