
Showing papers in "SIAM Journal on Numerical Analysis in 1980"


Journal ArticleDOI
TL;DR: In this article, a monotone piecewise bicubic interpolation algorithm is proposed for data on a rectangular mesh; the interpolant is determined by the first partial derivatives and the first mixed partial derivatives at the mesh points.
Abstract: In a 1980 paper [SIAM J. Numer. Anal., 17 (1980), pp. 238–246] the authors developed a univariate piecewise cubic interpolation algorithm which produces a monotone interpolant to monotone data. This paper is an extension of those results to monotone $\mathcal{C}^1 $ piecewise bicubic interpolation to data on a rectangular mesh. Such an interpolant is determined by the first partial derivatives and first mixed partial derivatives (twists) at the mesh points. Necessary and sufficient conditions on these derivatives are derived such that the resulting bicubic polynomial is monotone on a single rectangular element. These conditions are then simplified to a set of sufficient conditions for monotonicity. The latter are translated to a system of linear inequalities, which form the basis for a monotone piecewise bicubic interpolation algorithm.

2,174 citations
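The univariate algorithm cited in the abstract is what SciPy's PchipInterpolator implements; the bicubic extension is not in SciPy, so this minimal sketch illustrates only the univariate monotone interpolant on monotone data.

```python
import numpy as np
from scipy.interpolate import PchipInterpolator

# Monotone piecewise cubic interpolation (Fritsch-Carlson): the derivative
# values at the data points are chosen so that the C^1 cubic interpolant
# preserves the monotonicity of the data.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([0.0, 0.1, 0.1, 0.5, 1.0])   # monotone (nondecreasing) data

p = PchipInterpolator(x, y)
xs = np.linspace(0.0, 4.0, 401)
ys = p(xs)

# No overshoot between data points: the interpolant is nondecreasing.
assert np.all(np.diff(ys) >= -1e-12)
```

An unconstrained cubic spline through the same data would overshoot near the flat segment; the monotone interpolant does not.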


Journal ArticleDOI
TL;DR: In this article, a singular value decomposition analysis of the TLS problem is presented, which provides a measure of the underlying problem's sensitivity and its relationship to ordinary least squares regression.
Abstract: Total least squares (TLS) is a method of fitting that is appropriate when there are errors in both the observation vector $b$ ($m \times 1$) and in the data matrix $A$ ($m \times n$). The technique has been discussed by several authors and amounts to fitting a "best" subspace to the points $(a^{T}_{i},b_{i}), i=1,\ldots,m$, where $a^{T}_{i}$ is the $i$-th row of $A$. In this paper a singular value decomposition analysis of the TLS problem is presented. The sensitivity of the TLS problem as well as its relationship to ordinary least squares regression is explored. An algorithm for solving the TLS problem is proposed that utilizes the singular value decomposition and which provides a measure of the underlying problem's sensitivity.

1,587 citations
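The SVD-based solution can be sketched in a few lines: in the generic case, the TLS solution is read off the right singular vector of the augmented matrix $[A\ b]$ belonging to its smallest singular value. A minimal illustration (the function name `tls` and the test data are ours):

```python
import numpy as np

def tls(A, b):
    """Total least squares fit via the SVD of the augmented matrix [A | b]."""
    m, n = A.shape
    C = np.column_stack([A, b])
    _, _, Vt = np.linalg.svd(C)
    v = Vt[-1]                      # right singular vector of smallest sigma
    if abs(v[n]) < 1e-14:
        raise np.linalg.LinAlgError("nongeneric TLS problem")
    return -v[:n] / v[n]

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 3))
x_true = np.array([1.0, -2.0, 0.5])
b = A @ x_true                      # consistent system: TLS recovers x_true
x = tls(A, b)
```

For a consistent system the smallest singular value of $[A\ b]$ is zero and TLS reproduces the exact solution; with noise in both A and b it minimizes the orthogonal distances to the fitted subspace.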


Journal ArticleDOI
TL;DR: A method is presented for generating pseudo-random orthogonal matrices from the Haar distribution over the group of orthogonal matrices; the technique is used in an empirical study of two methods for estimating the condition number of a matrix.
Abstract: This paper presents a method for generating pseudo-random orthogonal matrices from the Haar distribution for the group of orthogonal matrices. The random matrices are expressed as products of $n - 1$ Householder transformations, which can be computed in $O(n^2 )$ time. The technique is used in an empirical study of two methods for estimating the condition number of a matrix.

375 citations
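For illustration, a Haar-distributed orthogonal matrix can also be sampled by QR factorization of a Gaussian matrix with a sign normalization; this yields the same distribution as the product of $n-1$ Householder transformations described in the abstract, though not that $O(n^2)$ construction itself.

```python
import numpy as np

def haar_orthogonal(n, rng):
    """Haar-distributed random orthogonal matrix.

    Sketch via QR of a Gaussian matrix; the column sign fix makes the
    factorization unique, which is what guarantees Haar distribution.
    """
    Z = rng.standard_normal((n, n))
    Q, R = np.linalg.qr(Z)
    return Q * np.sign(np.diag(R))   # scale column j by sign(R[j, j])

rng = np.random.default_rng(1)
Q = haar_orthogonal(5, rng)
assert np.allclose(Q @ Q.T, np.eye(5), atol=1e-12)
```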


Journal ArticleDOI
TL;DR: In this article, an enlarged system is introduced for which the turning point is a nonsingular solution and thus standard methods can be used to compute it, and numerical results for discretizations of a differential equation and an integral equation are given.
Abstract: This paper is concerned with the determination of a type of singular point, called a “turning” or “limit” point, of nonlinear equations depending on a parameter. An enlarged system is introduced for which the turning point is a nonsingular solution and thus standard methods can be used to compute it. An efficient implementation of Newton’s method in the finite-dimensional case is presented and numerical results for discretizations of a differential equation and an integral equation are given.

291 citations
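A toy instance of the enlarged-system idea, assuming a Moore–Spence-type bordering (our choice of augmentation, not necessarily the paper's): for $f(x,\lambda)=x^2-\lambda$ the fold at $(0,0)$ is a singular root of f alone, but a nonsingular root of the enlarged system $G=(f,\ f_x v,\ v-1)$, so plain Newton iteration converges.

```python
import numpy as np

# Enlarged system for the fold of f(x, lam) = x^2 - lam = 0 at (0, 0):
#   G(x, lam, v) = ( f, f_x * v, v - 1 ) = 0,
# whose Jacobian is nonsingular at the turning point.
def G(z):
    x, lam, v = z
    return np.array([x**2 - lam, 2.0 * x * v, v - 1.0])

def JG(z):
    x, lam, v = z
    return np.array([[2.0 * x, -1.0, 0.0],
                     [2.0 * v,  0.0, 2.0 * x],
                     [0.0,      0.0, 1.0]])

z = np.array([0.5, 0.3, 1.0])          # rough initial guess
for _ in range(20):
    z = z - np.linalg.solve(JG(z), G(z))
```

Newton applied to f alone would stall near the fold because $f_x$ vanishes there; on the enlarged system it converges quadratically to $(x,\lambda,v)=(0,0,1)$.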


Journal ArticleDOI
TL;DR: Theoretical error bounds are established, improving those given by S. Kaniel, and similar inequalities are found for the eigenvectors by using bounds on the acute angle between the exact eigenvectors.
Abstract: Theoretical error bounds are established, improving those given by S. Kaniel. Similar inequalities are found for the eigenvectors by using bounds on the acute angle between the exact eigenvectors a...

248 citations


Journal ArticleDOI
TL;DR: In this article, the authors considered the computational analysis of specified parts of the solution field of equations of the form $Fx = b$, where $F:R^m \to R^n $ is a given mapping and $m > n$.
Abstract: Continuation methods are considered here in the broad sense as algorithms for the computational analysis of specified parts of the solution field of equations of the form $Fx = b$, where $F:R^m \to R^n $ is a given mapping and $m > n$. Such problems arise, for instance, in structural mechanics, and then usually $m - n$ of the variables $x_i $ are designated as parameters. For the case $m = n + 1$ an existence theory for the regular curves of the solution field is developed here. Then approximate solutions are considered and shown to be solutions of certain perturbed problems. These results are used to prove that for the continuation methods with Euler-predictor and Newton-corrector a particular steplength algorithm is guaranteed to trace any regular solution of the field. Some numerical aspects of the procedure are discussed and a numerical example is included to illustrate the effectiveness of the approach.

149 citations
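The Euler-predictor/Newton-corrector loop can be sketched on the simplest regular solution field, the unit circle $x^2+\lambda^2=1$. The fixed steplength below is a simplification: the paper's contribution is precisely an adaptive steplength algorithm with a tracing guarantee.

```python
import numpy as np

def F(z):
    return z[0]**2 + z[1]**2 - 1.0

def dF(z):
    return np.array([2.0 * z[0], 2.0 * z[1]])   # 1 x 2 Jacobian

def tangent(z, t_prev):
    g = dF(z)
    t = np.array([-g[1], g[0]])                  # null vector of dF
    t /= np.linalg.norm(t)
    return t if t @ t_prev > 0 else -t           # keep a consistent direction

z = np.array([1.0, 0.0])
t = np.array([0.0, 1.0])
h = 0.1                                          # fixed steplength (sketch)
points = [z.copy()]
for _ in range(40):
    t = tangent(z, t)
    y = z + h * t                                # Euler predictor
    for _ in range(10):                          # Newton corrector
        # bordered system: F(y) = 0, correction orthogonal to the tangent
        J = np.vstack([dF(y), t])
        y = y - np.linalg.solve(J, np.array([F(y), 0.0]))
    z = y
    points.append(z.copy())
```

Every corrected point lies on the solution curve to machine precision, and the tangent-orientation test prevents the tracer from reversing direction at any step.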


Journal ArticleDOI
TL;DR: In this article, a theory is devised for correctly replacing boundary value problems posed on semi-infinite intervals by problems on finite intervals; the method consists in determining appropriate asymptotic boundary conditions (ABCs) to use at a finite point, for both linear and nonlinear problems.
Abstract: To solve boundary value problems posed on semi-infinite intervals the problem is frequently replaced by one on a finite interval. We devise a theory for doing this correctly for both linear and nonlinear problems. In brief the method consists in determining appropriate asymptotic boundary conditions (ABC) to use at a finite point. In the linear case these are "projections" into the subspace of bounded solutions. In the nonlinear case, nonlinear boundary conditions result. They involve possibly unknown projections for the problem linearized about the solution at infinity. A linear eigenvalue problem for the Schrödinger equation and a nonlinear elasticity problem are solved to show the power of the new methods.

138 citations


Journal ArticleDOI
TL;DR: In this paper, it was shown that for almost every system of n polynomial equations in n complex variables, the number of solutions equals $q = \prod_{i=1}^n q_i$, where $q_i$ is the degree of equation i; the proof is constructive, so that all q solutions can be explicitly calculated for almost all such systems.
Abstract: It is shown that for almost every system of n polynomial equations in n complex variables, the number of solutions is equal to $q \equiv \Pi _{i = 1}^n q_i $, where $q_i $ is the degree of equation i. The proof of this result is done in such a way that all q solutions can be explicitly calculated for almost all such systems. It is further shown that if the polynomial system obtained by retaining only the terms of degree $q_i $ in each equation i has only the trivial solution, then the number of solutions is equal to q.

128 citations


Journal ArticleDOI
TL;DR: A generalization from quadratic to conic approximations, defined as ratios of quadratics whose denominators are squares, can better match the values and gradients of typical objective functions, and hence give better estimates for their minimizers.
Abstract: Many optimization algorithms update quadratic approximations to their objective functions. This paper suggests a generalization from quadratic to conic approximations, defined as ratios of quadratics whose denominators are squares, $(\alpha + a^T x)^2 $. These can better match the values and gradients of typical objective functions, and hence give better estimates for their minimizers. Equivalently, affine scalings, $S(w) = x_0 + Jw$, of the domain of objective functions f are generalized to collinear scalings, $S(w) = x_0 + Jw/(1 + h^T w)$, to make the Hessian of the composition $fS$ more nearly constant as well as better conditioned. Certain general features of optimization algorithms using conic approximations and collinear scalings are presented. These are not only invariant under affine scalings, along with Newton–Raphson and variable metric algorithms, but they are also invariant under the larger group of invertible collinear scalings.

113 citations


Journal ArticleDOI
TL;DR: A perturbation theory for the linear least squares problem with linear equality constraints (problem LSE) is presented in this paper, which is based on the concept of the weighted pseudoinverse.
Abstract: A perturbation theory for the linear least squares problem with linear equality constraints (problem LSE) is presented. The development of the theory is based on the concept of the weighted pseudoinverse. A general formula for the solution of problem LSE is given. Condition numbers are defined and a perturbation theorem is proved.

112 citations
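For concreteness, problem LSE can be solved numerically by the standard nullspace method (a textbook approach, used here only to illustrate the problem the perturbation theory concerns; the paper's weighted-pseudoinverse formulation is the analytical counterpart).

```python
import numpy as np

def lse(A, b, B, d):
    """Minimize ||A x - b||_2 subject to B x = d (nullspace method).

    Assumes B (p x n) has full row rank. Write x = x_p + Z y with
    B x_p = d and Z an orthonormal basis of null(B); then solve an
    unconstrained least squares problem for y.
    """
    p, n = B.shape
    x_p = np.linalg.lstsq(B, d, rcond=None)[0]       # particular solution
    _, _, Vt = np.linalg.svd(B)                      # full SVD of B
    Z = Vt[p:].T                                     # n x (n - p) nullspace basis
    y = np.linalg.lstsq(A @ Z, b - A @ x_p, rcond=None)[0]
    return x_p + Z @ y

rng = np.random.default_rng(2)
A = rng.standard_normal((10, 4))
b = rng.standard_normal(10)
B = rng.standard_normal((2, 4))
d = rng.standard_normal(2)
x = lse(A, b, B, d)
assert np.allclose(B @ x, d)                          # constraint satisfied
```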


Journal ArticleDOI
TL;DR: A theorem due to Reddien giving sufficient conditions for convergence of Newton iterates for singular problems is extended.
Abstract: A theorem due to Reddien giving sufficient conditions for convergence of Newton iterates for singular problems is extended.

Journal ArticleDOI
TL;DR: In this paper, the conjugate gradient method is applied to a linear equation, with a uniformly positive definite coefficient operator, on a real, separable Hilbert space, and it is shown that if the coefficient operator is a compact perturbation of the identity, then the method is R-superlinearly convergent.
Abstract: The conjugate gradient method is applied to a linear equation, with a uniformly positive definite coefficient operator, on a real, separable Hilbert space. We observe that, if the coefficient operator is a compact perturbation of the identity, then the method is R-superlinearly convergent. Furthermore, if the perturbation belongs to a certain trace class, then a rate of superconvergence can be given. The results can be considered as infinite-dimensional analogs of the fact that the method has finite termination property on a finite-dimensional space.
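The phenomenon can be observed numerically with a finite-dimensional stand-in: take $M = I + K$ with K symmetric positive semidefinite and geometrically decaying eigenvalues (a proxy for a trace-class perturbation of the identity) and watch the conjugate gradient residuals fall off rapidly.

```python
import numpy as np

def cg(M, b, tol=1e-12, maxit=200):
    """Plain conjugate gradient iteration; returns solution and residual norms."""
    x = np.zeros_like(b)
    r = b.copy()
    p = r.copy()
    res = [np.linalg.norm(r)]
    for _ in range(maxit):
        Mp = M @ p
        alpha = (r @ r) / (p @ Mp)
        x += alpha * p
        r_new = r - alpha * Mp
        res.append(np.linalg.norm(r_new))
        if res[-1] < tol:
            break
        p = r_new + ((r_new @ r_new) / (r @ r)) * p
        r = r_new
    return x, res

n = 200
rng = np.random.default_rng(3)
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
K = Q @ np.diag(0.9 ** np.arange(n)) @ Q.T      # eigenvalues decay geometrically
M = np.eye(n) + 0.5 * (K + K.T)                 # I + compact-like perturbation
b = rng.standard_normal(n)
x, res = cg(M, b)
```

The clustered spectrum of M makes CG converge in far fewer than n iterations, the finite-dimensional shadow of the superlinear rate discussed in the abstract.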

Journal ArticleDOI
TL;DR: Direct and inverse estimates for multivariate spline approximation are given; the direct estimates rest on new results for local polynomial approximation which generalize the work of Brudnyi and Bramble-Hilbert.
Abstract: We give direct and inverse estimates for multivariate spline approximation. The direct estimates rest on new results for local polynomial approximation which generalize the work of Brudnyi and Bramble–Hilbert. The inverse estimates are multivariate extensions of one variable ideas.

Journal ArticleDOI
TL;DR: In this article, collocation at Gaussian quadrature points is studied as a means of determining a $C^1$ finite element approximation to the solution of a linear elliptic boundary value problem on a square; optimal order error estimates are established for approximation in a function space consisting of tensor products of $C^1$ piecewise polynomials of degree not greater than r.
Abstract: Collocation at Gaussian quadrature points as a means of determining a $C^1 $ finite element approximation to the solution of a linear elliptic boundary value problem on a square is studied. Optimal order $L^2 $ and $H^1 $ error estimates are established for approximation in a function space consisting of tensor products of $C^1 $ piecewise polynomials of degree not greater than r, where $r \geqq 3$.

Journal ArticleDOI
TL;DR: In this article, two Newton-like methods for solving the systems which arise from nonlinear partial differential equations and nonlinear networks are discussed, both of which provide global and quadratic convergence provided parameters controlling the iteration are chosen correctly.
Abstract: We discuss two Newton-like methods for solving the systems which arise from nonlinear partial differential equations and nonlinear networks. Under appropriate conditions both methods provide global and quadratic convergence, provided parameters controlling the iteration are chosen correctly.

Journal ArticleDOI
TL;DR: In this paper it is shown how the classical Markowitz pivoting strategy can be generalized, in terms of parameters including the stability factor $u$ and a quantity $p(s)$.
Abstract: Pivotal interchanges are commonly used in the solution of large and sparse systems of linear algebraic equations by Gaussian elimination (in order to preserve the sparsity of the matrix and to prevent the appearance of large roundoff errors during the computations). The Markowitz strategy (see [H. M. Markowitz, The elimination form of inverse and its applications to linear programming, Management Sci., 3 (1957), pp. 255–269]) is often used to determine the pivotal sequence. An efficient implementation of this strategy is given by Curtis and Reid (see [A. R. Curtis and J. K. Reid, Fortran subroutines for the solution of sparse sets of linear equations, A.E.R.E., Report R.6844, HMSO, London, 1971]) and improved by Duff (see [I. S. Duff, MA28—a set of Fortran subroutines for sparse unsymmetric matrices, A.E.R.E., Report R.8730, HMSO, London, 1977]). In this paper it is shown how the classical Markowitz idea can be generalized. Consider the following parameters: u —the stability factor and $p(s)$—the number o...
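One step of threshold Markowitz pivot selection can be sketched as follows. Dense storage is used purely for clarity; the parameter u plays the role of the stability factor mentioned above, and $(r_i-1)(c_j-1)$ is the classical Markowitz count.

```python
import numpy as np

def markowitz_pivot(A, u=0.25):
    """Select one pivot: among entries a_ij with |a_ij| >= u * max_k |a_ik|
    (threshold stability test within row i), minimize the Markowitz count
    (r_i - 1)(c_j - 1), where r_i, c_j are nonzero counts of row i / col j.
    """
    nz = A != 0.0
    r = nz.sum(axis=1)                  # nonzeros per row
    c = nz.sum(axis=0)                  # nonzeros per column
    best, best_cost = None, None
    for i in range(A.shape[0]):
        row_max = np.max(np.abs(A[i]))
        for j in range(A.shape[1]):
            if nz[i, j] and abs(A[i, j]) >= u * row_max:
                cost = (r[i] - 1) * (c[j] - 1)
                if best_cost is None or cost < best_cost:
                    best, best_cost = (i, j), cost
    return best

A = np.array([[4.0, 0.0, 1.0],
              [0.0, 3.0, 0.0],
              [2.0, 0.0, 5.0]])
# entry (1, 1) is a singleton in both its row and column: Markowitz count 0
assert markowitz_pivot(A) == (1, 1)
```

A pivot with count 0 causes no fill-in at all in the elimination step, which is why the strategy preserves sparsity.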

Journal ArticleDOI
TL;DR: In this article, a multivariate B-spline is constructed in terms of certain multivariate "fundamental solutions" which are analogous to the usual univariate truncated powers.
Abstract: In this paper multivariate B-splines are constructed in terms of certain multivariate “fundamental solutions” which are analogous to the usual univariate truncated powers. Furthermore, this approach is shown to provide the recurrence relations for the construction of higher order B-splines from lower order ones which were recently found by C. A. Micchelli, A constructive approach to Kergin interpolation in $R^k $: Multivariate B-splines and Lagrange interpolation, MRC Technical Summary Report 1978. It also allows one to relate the smoothness of a B-spline to its knot configuration.

Journal ArticleDOI
TL;DR: A particular member of this algorithm class is shown to have a Q-superlinear rate of convergence under standard assumptions on the objective function.
Abstract: A new class of algorithms for unconstrained optimization has recently been proposed by Davidon [Conic Approximations and Collinear Scalings for Optimizers, SIAM J. Num. Anal., to appear.]. This new method called “optimization by collinear scaling” is derived here as a natural extension of existing quasi-Newton methods. The derivation is based upon constructing a collinear scaling of the variables so that a local quadratic model can interpolate both function and gradient values of the transformed objective function at the latest two iterates. Deviation of the function values from quadratic behavior as well as gradient information influences the updating process. A particular member of this algorithm class is shown to have a Q-superlinear rate of convergence under standard assumptions on the objective function. The amount of computation required per update is essentially the same as for existing quasi-Newton methods.

Journal ArticleDOI
TL;DR: In this paper, it was shown that there exists a one-parameter family of two-step, second-order one-leg methods which are stable for any dissipative nonlinear system and for any test problem of the form $\dot x = \lambda (t)x$, $\operatorname{Re} \lambda (t) \leq 0$, using arbitrary step sequences.
Abstract: Two of the most commonly used methods, the trapezoidal rule and the two-step backward differentiation method, both have drawbacks when applied to difficult stiff problems. The trapezoidal rule does not sufficiently damp the stiff components and the backward differentiation method is unstable for certain stable variable-coefficient problems with variable-steps. In this paper we show that there exists a one-parameter family of two-step, second-order one-leg methods which are stable for any dissipative nonlinear system and for any test problem of the form $\dot x = \lambda (t)x$, $\operatorname{Re} \lambda (t) \leq 0$, using arbitrary step sequences.
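The trapezoidal rule's drawback noted above is visible directly from amplification factors: as $z = h\lambda \to -\infty$ the trapezoidal factor $R(z) = (1 + z/2)/(1 - z/2)$ tends to $-1$, so very stiff modes oscillate with almost undiminished amplitude, whereas both BDF2 characteristic roots tend to 0. A quick numerical check:

```python
import numpy as np

def trapezoidal_R(z):
    """Amplification factor of the trapezoidal rule for x' = lambda*x."""
    return (1.0 + z / 2.0) / (1.0 - z / 2.0)

def bdf2_roots(z):
    """Roots of the BDF2 characteristic polynomial for x' = lambda*x:
    (3/2 - z) r^2 - 2 r + 1/2 = 0."""
    return np.roots([1.5 - z, -2.0, 0.5])

z = -1.0e6                                          # a very stiff mode
assert abs(trapezoidal_R(z) + 1.0) < 1e-5           # R(z) ~ -1: no damping
assert max(abs(r) for r in bdf2_roots(z)) < 1e-3    # BDF2 damps it out
```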

Journal ArticleDOI
TL;DR: Under mild conditions on the given cell-size and error-indicator functions a local Pareto-type optimality property is introduced for the meshes to prove some general rate-of-convergence and global optimality properties.
Abstract: A general theory of mesh-refinement processes is developed. The fundamental structure is a locally finite, rooted tree with nodes representing the subdivision cells. The possible meshes then constitute a distributive lattice. Under mild conditions on the given cell-size and error-indicator functions a local Pareto-type optimality property is introduced for the meshes. This in turn is used to prove some general rate-of-convergence and global optimality properties which contain various known results of this type for specific problems.

Journal ArticleDOI
TL;DR: In this paper, affine invariant interpolants are defined based on operators which minimize a certain pseudonorm subject to interpolation on the boundary of an arbitrary triangular domain, and discrete interpolants which result from these blending methods are also given.
Abstract: Some methods for interpolation in triangles are described. These affine invariant interpolants are based upon operators which minimize a certain pseudonorm subject to interpolation on the boundary of an arbitrary triangular domain. Discrete interpolants, which result from these blending methods, are also given.

Journal ArticleDOI
TL;DR: In this article, the authors compare the performance of the Kantorovich and Moore theorems on the basis of sensitivity (ability to detect a solution close to the original solution), precision (the ability to give sharp error bounds), and computational complexity.
Abstract: In order to be useful, an approximate solution y of a nonlinear system of equations $f(x) = 0$ in $R^n $ must be close to a solution $x^ * $ of the system. Two theorems which can be used computationally to establish the existence of $x^ * $ and obtain bounds for the error vector $y - x^ * $ are the 1948 result of L. V. Kantorovich and the 1977 interval analytic theorem due to R. E. Moore. The two theorems are compared on the basis of sensitivity (ability to detect a solution $x^ * $ close to y), precision (ability to give sharp error bounds), and computational complexity (cost). A theoretical comparison shows that the Kantorovich theorem has at best only a slight edge in sensitivity and precision, while Moore’s theorem requires far less computation to apply, and thus provides the method of choice. This conclusion is supported by a numerical example, for which available UNIVAC 1108/1110 software is used to check the hypotheses of both theorems automatically, given y and f.

Journal ArticleDOI
TL;DR: In this article, a simple computational test is given for the accuracy of each component of an approximate solution to a nonlinear (or linear) system of equations.
Abstract: A simple computational test is given for the accuracy of each component of an approximate solution to a nonlinear (or linear) system of equations.

Journal ArticleDOI
TL;DR: In this article, it was shown that there does not exist a better approximation that includes the exact range if one admits all expressions of the form $(c_0 + c_1 H + \cdots + c_n H^n )/(d_0 + d_1 H + \cdots + d_n H^n )$ as approximations, where $H = X - c$ is symmetric, c is the midpoint of X, and the real coefficients $c_k, d_k$ depend on the same data from f as those needed to construct the centered form.
Abstract: R. E. Moore has introduced the centered form for approximating the range $\bar f(X)$ of a rational function f over X, where X is a real interval. In this paper it is shown that there does not exist a better approximation that includes the exact range if we admit all expressions of the form $(c_0 + c_1 H + \cdots + c_n H^n )/(d_0 + d_1 H + \cdots + d_n H^n )$ as approximations, where $H = X - c$ is symmetric, c is the midpoint of X, and the real coefficients $c_k, d_k $ depend on the same data from f as those needed to construct the centered form.
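A minimal sketch of the centered form for a polynomial (rational functions and the optimality argument are beyond this illustration): enclose $p(X)$ by $p(c) + \sum_{k\ge 1} (p^{(k)}(c)/k!)\,H^k$ with $H = X - c = [-r, r]$, evaluating the symmetric interval powers exactly. Intervals are plain (lo, hi) tuples.

```python
import math
import numpy as np

def scale(t, iv):
    """Multiply the interval iv by the scalar t."""
    lo, hi = t * iv[0], t * iv[1]
    return (min(lo, hi), max(lo, hi))

def ipow_sym(r, k):
    """Interval power H^k of the symmetric interval H = [-r, r]."""
    return (0.0, r ** k) if k % 2 == 0 else (-(r ** k), r ** k)

def centered_form(coeffs, c, r):
    """Centered-form enclosure of a polynomial over [c - r, c + r].

    coeffs: polynomial coefficients, lowest degree first.
    """
    p = np.polynomial.Polynomial(coeffs)
    lo = hi = p(c)
    for k in range(1, len(coeffs)):
        tk = p.deriv(k)(c) / math.factorial(k)
        term = scale(tk, ipow_sym(r, k))
        lo, hi = lo + term[0], hi + term[1]
    return (lo, hi)

# p(x) = x^2 - x over X = [0, 1]: the exact range is [-1/4, 0], and here the
# centered form about the midpoint c = 1/2 reproduces it exactly.
enc = centered_form([0.0, -1.0, 1.0], 0.5, 0.5)
```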

Journal ArticleDOI
TL;DR: In this paper, theoretical properties of product integration rules of the form $\int_{-1}^1 k(x)f(x)\,dx \approx \sum_{i=1}^n w_{ni} f(x_{ni})$ are discussed, where f is continuous, k is absolutely integrable, and the nodes are the roots of the Jacobi polynomial.
Abstract: The paper discusses theoretical properties of product integration rules of the form \[\int_{ - 1}^1 {k(x)f(x)dx \approx \sum _{i = 1}^n {w_{ni} f(x_{ni} )} ,} \] where f is continuous, k is absolutely integrable, the nodes $\{ x_{ni} \} $ are the roots of the Jacobi polynomial $P_n^{(\alpha ,\beta )} (x)$, $(\alpha ,\beta > - 1)$, and the weights $\{ w_{ni} \} $ are chosen to make the rule exact if f is any polynomial of degree $< n$. It is shown that if $k \in L_p [ - 1,1]$ for some $p > 1$, then the rule converges to the exact value of the integral as $n \to \infty $ if f is any continuous function, and the sum of the absolute values of the weights $\Sigma _{i = 1}^n |w_{ni} |$ converges to a limit, namely $\int _{ - 1}^1 |k(x)|dx$. A limiting expression for the individual weights is also obtained under suitable conditions.

Journal ArticleDOI
TL;DR: Several finite element methods for singular two-point boundary value problems are analyzed and error bounds of optimal order are proved and upper and lower bounds on the extent to which the mesh must be graded are obtained.
Abstract: In this paper we analyze several finite element methods for singular two-point boundary value problems\[ - (x^\sigma u')' + qu = f,\quad 0 < x < 1.\] Here $\sigma \in [0,1)$, and appropriate boundary conditions are imposed. The solution can be approximated by splines on a nonuniform (“$\beta $ -graded”) mesh. Error bounds of optimal order are proved, and upper and lower bounds on the extent to which the mesh must be graded are obtained. We also consider approximating the solution by functions of the form $x^{ - \sigma } s(x)$, with $s(x)$ a spline. Error bounds and numerical results for these “weighted splines” indicate that they are very efficient. For a third subspace, known error bounds are improved by using a mildly graded mesh.

Journal ArticleDOI
TL;DR: The deviation between the local maxima of the Lebesgue function is studied; it is shown that for the Chebyshev nodes T this deviation is less than $({2 / \pi })\log 2 = 0.441 \cdots $, whereas for $\hat T$ it asymptotically does not exceed $({2 / \pi })\log 2 - {4 / {(3\pi )}} = 0.016 \cdots $.
Abstract: Asymptotic expressions of the form $({2 / \pi })\log n + c + r_n $ are investigated for the Lebesgue constants associated with interpolation at the Chebyshev nodes T and the “expanded Chebyshev nodes” $\hat T$. Estimates of the error $r_n $ are given. Similar asymptotic expressions can be obtained for interpolation at the Chebyshev extrema U and trigonometric interpolation at equidistant nodes. The deviation between the local maxima of the Lebesgue function is studied. It is shown that for the Chebyshev nodes T this deviation is less than $({2 / \pi })\log 2 = 0.441 \cdots $, whereas for $\hat T$ it asymptotically does not exceed $({2 / \pi })\log 2 - {4 / {(3\pi )}} = 0.016 \cdots $.
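The asymptotic expansion can be checked numerically: evaluate the Lebesgue function for the Chebyshev nodes on a fine grid and compare its maximum with $(2/\pi)\log n + c$, where the constant c is about 0.9625 for these nodes.

```python
import numpy as np

def lebesgue_constant(nodes, n_eval=20001):
    """Max over [-1, 1] of the Lebesgue function sum_k |l_k(x)| for the
    Lagrange basis polynomials l_k at the given nodes (grid evaluation)."""
    x = np.linspace(-1.0, 1.0, n_eval)
    L = np.zeros_like(x)
    for k in range(len(nodes)):
        ell = np.ones_like(x)
        for j in range(len(nodes)):
            if j != k:
                ell *= (x - nodes[j]) / (nodes[k] - nodes[j])
        L += np.abs(ell)
    return L.max()

n = 20
# Chebyshev nodes x_k = cos((2k - 1) pi / (2n)), k = 1..n
nodes = np.cos((2 * np.arange(1, n + 1) - 1) * np.pi / (2 * n))
lam = lebesgue_constant(nodes)
# lam should be close to (2/pi) * log(n) + 0.9625
```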

Journal ArticleDOI
TL;DR: In this paper, it was shown that under certain conditions these acceleration procedures are equivalent to similar procedures applied to the double method corresponding to two applications of the original basic iterative method.
Abstract: This paper is concerned with the acceleration, by Chebyshev acceleration or conjugate gradient acceleration, of basic iterative methods for solving systems of linear algebraic equations. It is shown that under certain conditions these acceleration procedures are equivalent to similar procedures applied to the “double method” corresponding to two applications of the original basic iterative method. This result is applied to show the equivalence of certain acceleration procedures applied to the Jacobi methods for “red/black” systems, and similar procedures applied to the “reduced system,” which is obtained from the original system by eliminating some of the unknowns. The result is also used to study the behavior of the generalized conjugate gradient procedure of Concus and Golub and of Widlund, for solving linear systems where the matrices are positive real rather than symmetric and positive definite.

Journal ArticleDOI
TL;DR: There has been much recent interest in the use of incomplete factorizations of matrices in conjunction with applications of the generalized conjugate gradient method, for approximating solutions of large sparse systems of linear equations.
Abstract: There has been much recent interest in the use of incomplete factorizations of matrices, in conjunction with applications of the generalized conjugate gradient method, for approximating solutions of large sparse systems of linear equations. Underlying many of these recent developments is the theory of H-matrices, introduced by A. M. Ostrowski. In this note, further connections of the theory of incomplete factorizations of matrices with the theory of H-matrices are derived.

Journal ArticleDOI
TL;DR: In this note it is proved that, when the Gauss-Chebyshev numerical integration rule is used, the direct method and the Fredholm-reduction method are equivalent in the sense that they provide the same numerical results for the same number of abscissas.
Abstract: Cauchy type singular integral equations can be solved numerically either directly (through the use of an appropriate numerical integration rule and reduction to a system of linear equations) or after a previous reduction to an equivalent Fredholm integral equation of the second kind and application of the numerical technique to this equation. In this note it is proved in the special case when the Gauss-Chebyshev numerical integration rule is used that both methods are equivalent in the sense that they provide the same numerical results for the same number of abscissas used in numerical integrations.
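The Gauss-Chebyshev rule referred to above is $\int_{-1}^1 f(x)/\sqrt{1-x^2}\,dx \approx (\pi/n)\sum_{k=1}^n f(x_k)$ with $x_k = \cos((2k-1)\pi/(2n))$; it is exact for polynomials of degree up to $2n-1$. A quick sketch:

```python
import numpy as np

def gauss_chebyshev(f, n):
    """n-point Gauss-Chebyshev rule for integral of f(x)/sqrt(1 - x^2)
    over [-1, 1]; all weights equal pi/n."""
    k = np.arange(1, n + 1)
    x = np.cos((2 * k - 1) * np.pi / (2 * n))
    return (np.pi / n) * np.sum(f(x))

# exact values: integral of 1/sqrt(1-x^2) is pi, of x^2/sqrt(1-x^2) is pi/2
assert abs(gauss_chebyshev(lambda x: np.ones_like(x), 4) - np.pi) < 1e-12
assert abs(gauss_chebyshev(lambda x: x**2, 4) - np.pi / 2) < 1e-12
```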