
Showing papers in "SIAM Journal on Numerical Analysis in 1978"


Journal ArticleDOI
TL;DR: The main theorem gives an error estimate in terms of localized quantities which can be computed approximately, and the estimate is optimal in the sense that, up to multiplicative constants which are independent of the mesh and solution, the upper and lower error bounds are the same.
Abstract: A mathematical theory is developed for a class of a-posteriori error estimates of finite element solutions. It is based on a general formulation of the finite element method in terms of certain bilinear forms on suitable Hilbert spaces. The main theorem gives an error estimate in terms of localized quantities which can be computed approximately. The estimate is optimal in the sense that, up to multiplicative constants which are independent of the mesh and solution, the upper and lower error bounds are the same. The theoretical results also lead to a heuristic characterization of optimal meshes, which in turn suggests a strategy for adaptive mesh refinement. Some numerical examples show the approach to be very effective.

1,431 citations
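The localized, computable error indicators described above can be illustrated in one dimension. The sketch below is illustrative only, not the paper's estimator: for the model problem -u'' = f with piecewise-linear elements, u_h'' vanishes inside each element, so a standard residual indicator is eta_K = h_K * ||f||_{L2(K)}, and elements carrying the largest indicators are bisected. The function names, the sharp source term, and the marking rule are all invented for this example.

```python
import numpy as np

def solve_fem_1d(x, f):
    """Piecewise-linear FEM for -u'' = f, u(0) = u(1) = 0, on nodes x."""
    n = len(x) - 1
    h = np.diff(x)
    A = np.zeros((n + 1, n + 1))
    b = np.zeros(n + 1)
    for k in range(n):  # element-by-element assembly
        A[k, k] += 1.0 / h[k]; A[k + 1, k + 1] += 1.0 / h[k]
        A[k, k + 1] -= 1.0 / h[k]; A[k + 1, k] -= 1.0 / h[k]
        fm = f(0.5 * (x[k] + x[k + 1]))   # midpoint quadrature
        b[k] += 0.5 * h[k] * fm; b[k + 1] += 0.5 * h[k] * fm
    u = np.zeros(n + 1)                   # homogeneous Dirichlet conditions
    u[1:-1] = np.linalg.solve(A[1:-1, 1:-1], b[1:-1])
    return u

def error_indicators(x, f):
    """eta_K = h_K * ||f||_{L2(K)}: the interior residual of -u'' = f,
    since u_h'' = 0 inside each element for piecewise-linear u_h."""
    h = np.diff(x)
    mid = 0.5 * (x[:-1] + x[1:])
    return h * np.abs(f(mid)) * np.sqrt(h)

def refine(x, eta, frac=0.3):
    """Bisect the elements carrying the largest indicators (maximum marking)."""
    marked = eta >= (1 - frac) * eta.max()
    new = [x[0]]
    for k in range(len(x) - 1):
        if marked[k]:
            new.append(0.5 * (x[k] + x[k + 1]))
        new.append(x[k + 1])
    return np.array(new)

f = lambda t: 1.0 / (0.01 + (t - 0.5) ** 2)   # sharp source near t = 0.5
x = np.linspace(0.0, 1.0, 11)
for _ in range(4):
    x = refine(x, error_indicators(x, f))
u = solve_fem_1d(x, f)
```

Running the loop concentrates nodes around the sharp source at t = 0.5, which is the adaptive-refinement strategy the abstract alludes to, in miniature.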


Journal ArticleDOI
TL;DR: In this article, a discontinuous collocation-finite element method with interior penalties was proposed and analyzed for elliptic equations, motivated by the interior penalty $L^2$-Galerkin procedure of Douglas and Dupont.
Abstract: A discontinuous collocation-finite element method with interior penalties is proposed and analyzed for elliptic equations. The integral orthogonalities are motivated by the interior penalty $L^2$-Galerkin procedure of Douglas and Dupont.

787 citations


Journal ArticleDOI
TL;DR: The new method seeks to avoid the deficiencies in the Gauss–Newton method by improving, when necessary, the Hessian approximation by specifically including or approximating some of the neglected terms.
Abstract: This paper describes a modification to the Gauss–Newton method for the solution of nonlinear least-squares problems. The new method seeks to avoid the deficiencies in the Gauss–Newton method by improving, when necessary, the Hessian approximation by specifically including or approximating some of the neglected terms. The method seeks to compute the search direction without the need to form explicitly either the Hessian approximation or a factorization of this matrix. The benefits of this are similar to those of avoiding the formation of the normal equations in the Gauss–Newton method. Three algorithms based on this method are described; one which assumes that second derivative information is available and two which only assume that first derivatives can be computed.

544 citations
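For context, a minimal plain Gauss–Newton iteration is sketched below; the paper's contribution is precisely what this sketch omits, namely augmenting the Hessian approximation with neglected second-order terms when needed. The QR-based least-squares solve (np.linalg.lstsq) avoids forming the normal equations explicitly, in the spirit the abstract describes. The fitting example and all names are invented.

```python
import numpy as np

def gauss_newton(r, J, x0, tol=1e-10, max_iter=50):
    """Minimal Gauss-Newton: at each step solve the linear least-squares
    problem min ||J(x) dx + r(x)|| via np.linalg.lstsq (QR-based), which
    avoids explicitly forming the normal-equations matrix J^T J."""
    x = np.asarray(x0, float)
    for _ in range(max_iter):
        dx = np.linalg.lstsq(J(x), -r(x), rcond=None)[0]
        x = x + dx
        if np.linalg.norm(dx) < tol:
            break
    return x

# Example: fit y = a * exp(b * t) to data generated with a = 2, b = -1.
t = np.linspace(0.0, 2.0, 20)
y = 2.0 * np.exp(-1.0 * t)
r = lambda p: p[0] * np.exp(p[1] * t) - y                 # residual vector
J = lambda p: np.column_stack([np.exp(p[1] * t),          # d r / d a
                               p[0] * t * np.exp(p[1] * t)])  # d r / d b
p = gauss_newton(r, J, [1.0, 0.0])
```

On this zero-residual problem plain Gauss–Newton already converges rapidly; the paper's modification matters when residuals at the solution are large.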


Journal ArticleDOI
TL;DR: In this paper, the B-spline representation for splines was used to approximate free knots to data by splines, and the approximation problem was reduced to nonlinear least squares in the variable knots.
Abstract: Approximations to data by splines improve greatly if the knots are free variables. Using the B-spline representation for splines, and separating the linear and nonlinear aspects, the approximation problem reduces to nonlinear least squares in the variable knots. We describe the problems encountered in this formulation caused by the “lethargy” theorem, and how a logarithmic transformation of the knots can lead to an effective method for computing free knot spline approximations.

250 citations
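The linear/nonlinear separation the abstract describes can be seen in miniature with a single free knot and a broken-line "spline": for a fixed knot, the coefficients solve a linear least-squares problem; the knot itself is the nonlinear variable (here found by a brute scan, whereas the paper uses B-splines, many knots, and a logarithmic knot transformation). Everything below, including the data, is an invented toy example.

```python
import numpy as np

def fit_with_knot(x, y, xi):
    """Inner (linear) problem: least-squares fit of a broken line with a
    single knot at xi, via the truncated-power basis [1, x, (x - xi)_+]."""
    B = np.column_stack([np.ones_like(x), x, np.maximum(x - xi, 0.0)])
    c, *_ = np.linalg.lstsq(B, y, rcond=None)
    res = B @ c - y
    return c, float(res @ res)

# Data: a broken line with its kink (true knot) at 0.6, no noise.
x = np.linspace(0.0, 1.0, 101)
y = np.where(x < 0.6, x, 0.6 + 3.0 * (x - 0.6))

# Outer (nonlinear) problem: scan candidate knot locations.  With several
# knots one would optimize them jointly, e.g. over transformed spacings.
candidates = np.linspace(0.05, 0.95, 181)
errors = [fit_with_knot(x, y, xi)[1] for xi in candidates]
best = candidates[int(np.argmin(errors))]
c_best, err_best = fit_with_knot(x, y, best)
```

The scan recovers the knot at 0.6, where the residual drops to zero; with free knots fixed in the wrong place, no choice of coefficients can reproduce the kink.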


Journal ArticleDOI
TL;DR: Efficient algorithms for the approximate solution of ordinary differential equations rely on controlling estimates of the error through adjustment of the stepsize (and possibly, of the order).
Abstract: Efficient algorithms for the approximate solution of ordinary differential equations rely on controlling estimates of the error through adjustment of stepsize (and possibly, of order). For explicit...

224 citations
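As a concrete (and generic, not this paper's) instance of error control through stepsize adjustment, the sketch below uses classical step doubling with RK4: one step of size h is compared with two steps of size h/2, the difference gives a Richardson error estimate, and the step is accepted or rejected and rescaled accordingly. The safety factor and rescale limits are conventional choices, not taken from the paper.

```python
def rk4_step(f, t, y, h):
    """One classical fourth-order Runge-Kutta step."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def integrate_adaptive(f, t0, y0, t_end, tol=1e-8, h=0.1):
    """Step-doubling error control: compare one step of size h with two of
    size h/2, accept or reject, and rescale h from the error estimate."""
    t, y = t0, y0
    while t < t_end:
        h = min(h, t_end - t)
        y_big = rk4_step(f, t, y, h)
        y_half = rk4_step(f, t + h / 2, rk4_step(f, t, y, h / 2), h / 2)
        err = abs(y_half - y_big) / 15.0   # Richardson estimate, order 4
        if err <= tol:
            t, y = t + h, y_half           # accept the more accurate value
        # rescale h toward the error target (0.9 is a safety factor)
        h = h * min(5.0, max(0.2, 0.9 * (tol / max(err, 1e-16)) ** 0.2))
    return y

# Test problem y' = -y, y(0) = 1, integrated to t = 1 (exact value e^{-1}).
y_end = integrate_adaptive(lambda t, y: -y, 0.0, 1.0, 1.0)
```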


Journal ArticleDOI
TL;DR: In this article, various adaptive mesh selection strategies for solving two-point boundary value problems are brought together and a limited comparison is made, and the mesh strategies are applied using collocation met...
Abstract: Various adaptive mesh selection strategies for solving two-point boundary value problems are brought together and a limited comparison is made. The mesh strategies are applied using collocation met...

204 citations


Journal ArticleDOI
TL;DR: A new algorithm for the nonlinear minimax problem is presented; it incorporates several simple features, and numerical results to date suggest that the resulting algorithm is very efficient.
Abstract: Over the past few years circuit and system designers have shown great interest in minimax algorithms. The purpose of this paper is to present a new algorithm to solve the nonlinear minimax problem, which can be stated as: minimize $M_f(x)$, where $M_f(x) = \max_i f_i(x)$, $1 \leq i \leq m$, and $x = (x_1, x_2, \cdots, x_n)^T$. The above objective function has discontinuous first partial derivatives at points where two or more of the functions $f_i$ are equal to $M_f$, even if the $f_i(x)$, $1 \leq i \leq m$, are...

174 citations
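When the functions f_i are affine, the minimax problem reduces to a linear program via the standard epigraph trick (minimize t subject to f_i(x) <= t), which gives a useful baseline even though the paper addresses the general nonlinear case with a different algorithm. The sketch below computes the discrete Chebyshev (minimax) line fit to x^2, whose exact answer a = 1, b = -1/8, t = 1/8 follows from equioscillation; the linprog formulation here is an assumed illustration, not the paper's method.

```python
import numpy as np
from scipy.optimize import linprog

# Discrete linear minimax (Chebyshev) fit: minimize max_i |a*x_i + b - y_i|.
# Epigraph form: minimize t subject to -t <= a*x_i + b - y_i <= t,
# a linear program in the variables z = (a, b, t).
x = np.linspace(0.0, 1.0, 21)
y = x ** 2                      # approximate x^2 on [0, 1] by a line

n = len(x)
c = np.array([0.0, 0.0, 1.0])   # objective: minimize t
#  a*x_i + b - t <= y_i   and   -a*x_i - b - t <= -y_i
A_ub = np.vstack([np.column_stack([x, np.ones(n), -np.ones(n)]),
                  np.column_stack([-x, -np.ones(n), -np.ones(n)])])
b_ub = np.concatenate([y, -y])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * 3)
a, b, t = res.x
```

At the solution the error equioscillates at x = 0, 0.5, 1, which is exactly the situation the abstract highlights: two or more residuals tie with the maximum, so the objective is nondifferentiable there.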


Journal ArticleDOI
TL;DR: In this article, an iterative Lanczos method for solving large sparse systems arising from elliptic problems is presented, which requires no a priori information on the spectrum of the operators.
Abstract: Let L be a real linear operator with a positive definite symmetric part M. In certain applications a number of problems of the form $Mv = g$ can be solved with less human or computational effort than the original equation $Lu = f$. An iterative Lanczos method, which requires no a priori information on the spectrum of the operators, is derived for such problems. The convergence of the method is established assuming only that $M^{ - 1} L$ is bounded. If $M^{ - 1} L$ differs from the identity mapping by a compact operator the convergence is shown to be superlinear. The method is particularly well suited for large sparse systems arising from elliptic problems. Results from a series of numerical experiments are presented. They indicate that the method is numerically stable and that the number of iterations can be accurately predicted by our error estimate.

172 citations


Journal ArticleDOI
TL;DR: In this article, some of the previous algorithms are generalized to higher orders of accuracy, and a more detailed consideration of the second order algorithm is given which leads naturally to methods of orders three and four.
Abstract: In Part I of this paper (SIAM J. Numer. Anal., 15 (1978), pp. 1212–1224), novel algorithms were introduced for solving parabolic differential equations in which high-frequency components occurred in the solution and for which $A_0$-stable methods, exemplified by the classical Crank–Nicolson method, were less than satisfactory. The algorithms presented were based on a simple extrapolation of the Backward Euler method which produced $L_0$-stability. In all cases the algorithms presented were second order accurate in time. In the present paper some of the previous algorithms are generalized to higher orders of accuracy. In particular, a more detailed consideration of the second order algorithm is given which leads naturally to methods of orders three and four. The novel algorithms are tested on a heat equation with constant coefficients in which a discontinuity between the initial values and boundary values exists.

154 citations


Journal ArticleDOI
TL;DR: A formal definition of a nested dissection ordering of the graph of a general sparse symmetric matrix A is given, and some preliminary results concerning these orderings are introduced.
Abstract: A formal definition of a nested dissection ordering of the graph of a general sparse symmetric matrix A is given. After introducing some preliminary results which provide a direct relationship betw...

151 citations


Journal ArticleDOI
TL;DR: The function $\phi $ is directly minimized in a finite number of steps using techniques borrowed from Conn’s approach toward minimizing piecewise differentiable functions.
Abstract: A new algorithm is presented for computing a vector x which satisfies a given m by $n(m > n \geqq 2)$ linear system in the sense that the $l_1 $ norm is minimized. That is, if A is a matrix having m columns $a_1 , \cdots ,a_m $ each of length n, and b is a vector with components $\beta _1 , \cdots ,\beta _m $, then x is selected so that \[\phi (x) = ||A^T x - b||_1 = \sum _{i = 1}^m {\left|a_i^T x - \beta _i \right|} \] is as small as possible. Such solutions are of interest for the “robust” fitting of a linear model to data. The function $\phi $ is directly minimized in a finite number of steps using techniques borrowed from Conn’s approach toward minimizing piecewise differentiable functions. In these techniques if x is any point and $A_\mathcal {Z} $ stands for the submatrix consisting of those columns $a_j$ from A for which the corresponding residuals $a_j^T x - \beta _j $ are zero, then the discontinuities in the gradient of $\phi $ at x are handled by making use of the projector onto the null space o...
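A common alternative route to the same $l_1$ problem, sketched below for comparison, is iteratively reweighted least squares; it is not Conn's descent technique, and it uses the conventional $Ax - b$ orientation rather than the abstract's $A^T x - b$. The robust-fitting motivation is easy to demonstrate: a gross outlier barely moves the $l_1$ fit.

```python
import numpy as np

def l1_fit_irls(A, b, iters=100, eps=1e-8):
    """Iteratively reweighted least squares for min_x ||A x - b||_1.
    Each |r_i| is replaced by r_i^2 / |r_i| at the current iterate,
    giving a weighted least-squares problem (eps guards division by 0)."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]
    for _ in range(iters):
        w = 1.0 / np.maximum(np.abs(A @ x - b), eps)
        sw = np.sqrt(w)
        x = np.linalg.lstsq(sw[:, None] * A, sw * b, rcond=None)[0]
    return x

# Robust line fit: 9 points lie exactly on y = 3 + 2t, one is a gross
# outlier; the l_1 solution interpolates the 9 clean points.
t = np.arange(10.0)
y = 3.0 + 2.0 * t
y[4] += 100.0                     # outlier
A = np.column_stack([np.ones_like(t), t])
x_l1 = l1_fit_irls(A, y)
```

An ordinary least-squares fit of the same data is pulled far off the line by the outlier; the $l_1$ criterion discounts it, which is the "robust fitting" property the abstract mentions.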

Journal ArticleDOI
TL;DR: For the n-simplex, integration formulas of arbitrary odd degree were derived and monomial representations of the orthogonal polynomials corresponding to $T_n$ were given as discussed by the authors.
Abstract: For the n-simplex $T_n$, integration formulas of arbitrary odd degree are derived and the monomial representations of the orthogonal polynomials corresponding to $T_n$ are given.

Journal ArticleDOI
TL;DR: It is shown that under loose step length criteria similar to but slightly different from those of Lenard, the method converges to the minimizer of a convex function with a strictly bounded Hessian.
Abstract: This paper studies the convergence of a conjugate gradient algorithm proposed in a recent paper by Shanno. It is shown that under loose step length criteria similar to but slightly different from those of Lenard, the method converges to the minimizer of a convex function with a strictly bounded Hessian. Further, it is shown that for general functions that are bounded from below with bounded level sets and bounded second partial derivatives, false convergence in the sense that the sequence of approximations to the minimum converges to a point at which the gradient is bounded away from zero is impossible.

Journal ArticleDOI
TL;DR: In this article, a Crank-Nicolson-Galerkin approximation with extrapolated coefficients is presented along with a conjugate gradient iterative procedure which can be used efficiently to solve the different linear systems of algebraic equations arising at each step from the Galerkin method.
Abstract: Three cases for the nonlinear Sobolev equation $c(x,u)(\partial u/\partial t) - \nabla \cdot (a(x,u)\nabla u + b(x,u,\nabla u)\nabla (\partial u/\partial t)) = f(x,t,u,\nabla u)$ are studied. In case I, the coefficients a and b have uniform positive lower bounds in a neighborhood of the solution; in case II, $b = b(x,u)$ is allowed to take zero values and possibly cause the Sobolev equation to degenerate to a parabolic equation; in case III we only require a bound of the form $|a(x,u)| \leq$ ...

Journal ArticleDOI
TL;DR: In this paper, a quasi-envelope of the solution of highly oscillatory differential equations is defined, which can be integrated using much larger steps than are possible for the original problem.
Abstract: A “quasi-envelope” of the solution of highly oscillatory differential equations is defined. For many problems this is a smooth function which can be integrated using much larger steps than are possible for the original problem. Since the definition of the quasi-envelope is a differential equation involving an integral of the original oscillatory problem, it is necessary to integrate the original problem over a cycle of the oscillation (to average the effects of a full cycle). This information can then be extrapolated over a long (giant!) time step. Unless the period is known a priori, it is also necessary to estimate it either early in the integration (if it is fixed) or periodically (if it is slowly varying). Error propagation properties of this technique are investigated, and an automatic program is presented. Numerical results indicate that this technique is much more efficient than conventional ODE methods, for many oscillating problems.

Journal ArticleDOI
TL;DR: In this article, the local rates of convergence of Newton-iterative methods for the solution of systems of nonlinear equations were investigated. But the convergence rate was not shown to be linear in the inner, linear part of the system.
Abstract: In this paper we consider the local rates of convergence of Newton-iterative methods for the solution of systems of nonlinear equations. We show that under certain conditions on the inner, linear i...

Journal ArticleDOI
TL;DR: In this article, the common zeros of a set of polynomials are constructed by a number of techniques based on polynomial ideals using the minimum number of nodes.
Abstract: Cubature formulae of fixed degree using the minimum number of nodes, the common zeros of a set of polynomials, are constructed by a number of techniques based on the theory of polynomial ideals. Examples demonstrate that known lower bounds to the number of nodes can be attained though usually these bounds are too severe. One example is shown to give rise to an interlacing family of rules, a two dimensional analogue of the Clenshaw–Curtis quadrature. An effective numerical procedure is also given for finding all the common zeros of a set of polynomials.

Journal ArticleDOI
TL;DR: In this article, sufficient conditions are given for Newton's method to converge when the derivative is singular at the root of the root, and sufficient conditions for convergence when the root is singular.
Abstract: Sufficient conditions are given for Newton’s method to converge when the derivative is singular at the root.
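The phenomenon this paper addresses is easy to observe on the textbook example f(x) = x^2, where the derivative vanishes at the root 0: Newton's update becomes x_{k+1} = x_k - x_k^2/(2 x_k) = x_k/2, so convergence is only linear with ratio 1/2 instead of the usual quadratic rate at a simple root. This is a generic illustration, not the paper's conditions or proof.

```python
def newton(f, fprime, x0, iters):
    """Plain Newton iteration, returning the whole iterate history."""
    xs = [x0]
    for _ in range(iters):
        x = xs[-1]
        xs.append(x - f(x) / fprime(x))
    return xs

# f(x) = x^2 has a double root at 0, where f'(0) = 0.
xs = newton(lambda x: x * x, lambda x: 2 * x, 1.0, 20)
# Successive ratios x_{k+1} / x_k: exactly 1/2 each step (linear rate).
ratios = [xs[k + 1] / xs[k] for k in range(len(xs) - 1)]
```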

Journal ArticleDOI
TL;DR: In this paper, the convexity of the field of values $F(A)$ of a complex matrix A is exploited to determine certain boundary points and tangents of $F(A)$. The result is a convergent computation scheme and an error measure for each approximation.
Abstract: For an $n \times n$ complex matrix A, the convexity of $F(A) \equiv \{ x^ * Ax:x^ * x = 1,x \in C^n \} $ and some simple observations are exploited to determine certain boundary points and tangents of $F(A)$. The result is a convergent computation scheme and an error measure for each approximation. Given the curvature of its boundary, the computational effort to determine $F(A)$ to a prespecified level of accuracy is $O(n^3 )$.
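A standard way to realize such a boundary computation, in the spirit of (though not necessarily identical to) the paper's scheme, uses the supporting-line idea: for each rotation angle theta, the eigenvector of the largest eigenvalue of the Hermitian part of $e^{i\theta}A$ yields a boundary point of $F(A)$. The diagonal test matrix below is an invented check case, since $F(A)$ of a normal matrix is the convex hull of its eigenvalues.

```python
import numpy as np

def field_of_values_boundary(A, n_angles=64):
    """Boundary points of F(A) = {x* A x : ||x|| = 1}: for each angle theta,
    the top eigenvector of the Hermitian part of e^{i theta} A gives the
    point of F(A) extremal in the rotated real direction."""
    pts = []
    for theta in np.linspace(0.0, 2 * np.pi, n_angles, endpoint=False):
        R = np.exp(1j * theta) * A
        H = (R + R.conj().T) / 2.0
        w, V = np.linalg.eigh(H)          # ascending eigenvalues
        v = V[:, -1]                      # eigenvector of largest eigenvalue
        pts.append(v.conj() @ A @ v)      # boundary point of F(A)
    return np.array(pts)

# Normal (here diagonal) matrix: F(A) is the triangle with vertices 0, 1, i.
A = np.diag([0.0, 1.0, 1j])
pts = field_of_values_boundary(A)
```

Increasing `n_angles` refines the boundary polygon; the gap between the inner polygon of computed points and the outer polygon of supporting lines gives exactly the kind of computable error measure the abstract mentions.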

Journal ArticleDOI
TL;DR: In this paper, for an equation $H(y,t) = 0$, a primary solution containing a simple bifurcation point $p^* = p(t^*)$ is considered.
Abstract: For an equation $H(y,t) = 0$, where $H:D \subset R^{n + 1} \to R^n $, let $p:J \subset R^1 \to R^n $ be a primary solution on which a simple bifurcation point $p^ * = p(t^ * )$ with rank $H_y = (p^...

Journal ArticleDOI
TL;DR: In this paper, the authors examined the stability restrictions for second order schemes using linear stability analysis, and illustrate their behaviour on Burgers' equation, and showed that they are easy to use and apply readily to nonlinear equations.
Abstract: In this paper we are concerned with second order schemes which are easy to use, and apply readily to nonlinear equations. We examine the stability restrictions for such schemes using linear stability analysis, and illustrate their behaviour on Burgers' equation.

Journal ArticleDOI
TL;DR: In this article, it was proved that the relative maxima of the Lebesgue function are strictly decreasing from the outside towards the middle of the interval, leading to the conclusion that the deviation between any two local maxima doesn't exceed 1 / 2.
Abstract: Properties of the Lebesgue function associated with interpolation at the Chebyshev nodes $\{ \cos [(2k - 1)\pi /(2n)],\ k = 1,2, \cdots ,n\} $ are studied. It is proved that the relative maxima of the Lebesgue function are strictly decreasing from the outside towards the middle of the interval. An exact estimate for the smallest maximum is obtained. This estimate together with Rivlin's estimate for the largest maximum leads to the conclusion that the deviation between any two local maxima doesn't exceed ${1 / 2}$. It is shown that for the extended Chebyshev nodes this deviation is less than 0.201. Analogous results are obtained for the set of nodes based on the roots of the Chebyshev polynomials of the second kind.
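Both claims in the abstract (maxima strictly decreasing toward the middle, deviation at most 1/2) can be checked numerically for a small n. The sketch below evaluates the Lebesgue function on a fine grid for n = 9 Chebyshev nodes; the grid resolution and the choice n = 9 are arbitrary for the demonstration.

```python
import numpy as np

def lebesgue_function(nodes, xs):
    """Lebesgue function lambda(x) = sum_k |l_k(x)| over the Lagrange basis."""
    L = np.zeros_like(xs)
    for k, xk in enumerate(nodes):
        others = np.delete(nodes, k)
        lk = np.prod((xs[:, None] - others) / (xk - others), axis=1)
        L += np.abs(lk)
    return L

n = 9
k = np.arange(1, n + 1)
nodes = np.cos((2 * k - 1) * np.pi / (2 * n))   # Chebyshev nodes
xs = np.linspace(-1.0, 1.0, 4001)
lam = lebesgue_function(nodes, xs)

# Relative maxima: one per gap between nodes, plus the two endpoint values
# (the Lebesgue function for these nodes peaks at the interval ends).
interior = (lam[1:-1] > lam[:-2]) & (lam[1:-1] > lam[2:])
maxima = np.concatenate([[lam[0]], lam[1:-1][interior], [lam[-1]]])
```

By symmetry the list of maxima is palindromic, and reading it from either end toward the middle reproduces the strictly decreasing pattern the paper proves; the spread between the largest and smallest maximum stays below 1/2.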

Journal ArticleDOI
TL;DR: The numerical results reported here, combined with the fact that in the absence of constraints the present algorithm reduces to the earlier unconstrained $l_1$ algorithm, indicate that this algorithm is very efficient.
Abstract: We describe an algorithm, based on the simplex method of linear programming, for solving the discrete $l_1$ approximation problem with any type of linear constraints. The numerical results reported here, combined with the fact that in the absence of constraints the present algorithm reduces to our earlier unconstrained $l_1$ algorithm, indicate that this algorithm is very efficient.

Journal ArticleDOI
TL;DR: In this paper, a family of locally second-order accurate finite difference schemes for the initial-boundary value problem for the general reaction-diffusion system was constructed, and a time-independent error bound was derived.
Abstract: We consider the initial-boundary value problem for the general reaction-diffusion system (*) $U_t = D\nabla ^2 U + \Sigma _j M_j U_{x_j } + F$, where U is a vector, $D,M_j $, and F depend on $(x,t,U)$, and D and $M_j$ are diagonal matrices with $D \geqq 0$. It is known that if there is a box S in U-space such that F does not point out of S, then S is invariant for (*). This invariance gives rise to an a priori estimate which in turn yields the existence of a smooth solution. In this paper we construct a family of locally second-order accurate finite difference schemes for (*). We prove that, under certain conditions on the mesh, S is also invariant for the difference equations. This enables us to derive an error bound which is $O(h)$ for t fixed, and which tends to diam S as $t \to \infty $. Next, we obtain a time-independent error bound by imposing a monotonicity condition on F, and we examine the implications of this condition for the solution of (*). Finally we derive a time-independent error bound for a ...

Journal ArticleDOI
TL;DR: For a polynomial $P(z)$ of degree N with p distinct zeros, two iterative algorithms are given: one for the refinement of $k < p$ zeros located around a point a, and the other for t...
Abstract: Let $P(z)$ be a polynomial of degree N with p distinct zeros. In this paper, two iterative algorithms are given, one for the refinement of $k < p$ zeros located around a point a and the other for t...

Journal ArticleDOI
TL;DR: In this article, the application of collocation methods based on piecewise polynomials to the numerical solution of boundary value problems for systems of ordinary differential equations with a singularity of the first kind is examined.
Abstract: The application of collocation methods based on piecewise polynomials to the numerical solution of boundary value problems for systems of ordinary differential equations with a singularity of the first kind is examined. The schemes are shown to be stable and convergent. Enhanced accuracy at the nodes for suitably chosen collocation points (superconvergence) is established for a class of important problems.

Journal ArticleDOI
TL;DR: In this article, the authors considered linear finite element approximations of quasilinear boundary value problems and obtained the almost optimal rate of convergence with respect to the L ∞ norm.
Abstract: The authors consider linear finite element approximations of quasilinear boundary value problems and obtain the almost optimal rate of convergence $O(h^2 |\ln h|^q ),q = q(n)$, with respect to the $L^\infty $-norm.

Journal ArticleDOI
TL;DR: In this article, "optimal" order of convergence estimates are derived for a new mixed element approximation for the biharmonic problem.
Abstract: “Optimal” order of convergence estimates are derived for a new mixed element approximation for the biharmonic problem.

Journal ArticleDOI
Abstract: Consider the initial value problem for a first order system of stiff ordinary differential equations. The smoothness properties of its solutions are investigated and a general theory for difference...

Journal ArticleDOI
TL;DR: In this paper, a definition of stability for step-by-step numerical methods is presented, and its application to determining regions of stability for simple test equations is discussed; this gives new insight into the suitability of numerical methods in practice.
Abstract: “Step-by-step” methods (in which there is the possibility of unstable error propagation) occur in the approximate solution of first and second kind integral equations of Volterra type. We discuss a definition of stability, and its application to determine regions of stability for such methods applied to simple test equations. This gives new insight into the suitability of numerical methods in practice.