
Showing papers on "Piecewise published in 1972"


Journal ArticleDOI
TL;DR: Collocation with piecewise polynomial functions is developed as a method for solving two-point boundary value problems in this paper, and convergence is shown for a general class of linear problems and a rather broad class of nonlinear problems.
Abstract: Collocation with piecewise polynomial functions is developed as a method for solving two-point boundary value problems. Convergence is shown for a general class of linear problems and a rather broad class of nonlinear problems. Some computational examples are presented to illustrate the wide applicability and efficiency of the procedure.

309 citations
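Collocation with piecewise polynomials is the idea behind modern BVP solvers; as a hedged illustration (not the paper's own code), SciPy's `solve_bvp`, which uses collocation with C1 piecewise cubic polynomials, can solve a simple two-point boundary value problem in this spirit:

```python
# Illustrative only: scipy's solve_bvp uses collocation with C1 piecewise
# cubic polynomials, in the same spirit as the method described above.
import numpy as np
from scipy.integrate import solve_bvp

# Two-point BVP: u'' = -u, u(0) = 0, u(1) = 1; exact solution sin(x)/sin(1).
def ode(x, y):
    return np.vstack([y[1], -y[0]])  # y = (u, u')

def bc(ya, yb):
    return np.array([ya[0], yb[0] - 1.0])

x = np.linspace(0.0, 1.0, 11)                    # initial mesh
sol = solve_bvp(ode, bc, x, np.zeros((2, x.size)))

err = np.max(np.abs(sol.sol(x)[0] - np.sin(x) / np.sin(1.0)))
print(err)  # small maximum error at the mesh points
```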


Journal ArticleDOI
TL;DR: A new approach to the numerical solution of systems of first-order ordinary differential equations is given by finding local Galerkin approximations on each subinterval of a given mesh of size h, using an n-point Gauss-Legendre quadrature formula to evaluate certain inner products in the Galerkin equations.
Abstract: A new approach to the numerical solution of systems of first-order ordinary differential equations is given by finding local Galerkin approximations on each subinterval of a given mesh of size h. One step at a time, a piecewise polynomial, of degree n and class C^0, is constructed, which yields an approximation of order O(h^{2n}) at the mesh points and O(h^{n+1}) between mesh points. In addition, the jth derivatives of the approximation on each subinterval have errors of order O(h^{n-j+1}), 1 ≤ j ≤ n. For n ≥ 1, a method is defined (Section 2) which uses an n-point Gauss-Legendre quadrature formula to evaluate certain inner products in the Galerkin equations. For sufficiently small step size h, a unique numerical solution exists and may be found by successive substitution (Section 3). After showing that these Galerkin methods are also collocation methods (Section 4) and implicit Runge-Kutta methods (Section 5), we show that the mesh point errors are of the order O(h^{2n}), and the global errors are of the order O(h^{n+1}) for the approximate solution and O(h^{n-j+1}), 1 ≤ j ≤ n, for its jth derivatives (Section 6). A proof of the A-stability of the methods is given in Section 7, and numerical results are presented in Section 8. Discrete one-step methods based on quadrature, other than the classical Runge-Kutta methods, have been studied by several authors, including the explicit schemes

146 citations


Journal ArticleDOI
TL;DR: In this paper, it was shown that if the trial space is complete through polynomials of degree k−1, then it contains a function v_h such that |u − v_h|_s ≤ c h^{k−s} |u|_k.
Abstract: The rate of convergence of the finite element method depends on the order to which the solution u can be approximated by the trial space of piecewise polynomials. We attempt to unify the many published estimates, by proving that if the trial space is complete through polynomials of degree k−1, then it contains a function v_h such that |u − v_h|_s ≤ c h^{k−s} |u|_k. The derivatives of orders s and k are measured either in the maximum norm or in the mean-square norm, and the estimate can be made local: the error in a given element depends on the diameter h_i of that element. The proof applies to domains Ω in any number of dimensions, and employs a uniformity assumption which avoids degenerate element shapes.

138 citations
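The estimate |u − v_h|_s ≤ c h^{k−s} |u|_k can be checked numerically in its simplest case: piecewise linear interpolation (complete through degree 1, so k = 2) of a smooth function should converge at rate h² in the maximum norm. A minimal sketch, with the test function our own choice:

```python
# Numerical check of |u - v_h|_0 <= c h^{k-s} |u|_k with k = 2, s = 0:
# piecewise linear interpolation of a smooth u converges like h^2.
import numpy as np

u = np.sin  # test function; |u''| <= 1
errors = []
for n in (10, 20, 40, 80):
    xg = np.linspace(0.0, np.pi, n + 1)    # mesh of width h = pi/n
    xf = np.linspace(0.0, np.pi, 2001)     # fine grid approximating the sup
    vh = np.interp(xf, xg, u(xg))          # piecewise linear interpolant
    errors.append(np.max(np.abs(u(xf) - vh)))

rates = [np.log2(errors[i] / errors[i + 1]) for i in range(3)]
print(rates)  # each close to 2, the predicted exponent k - s
```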


Journal ArticleDOI
TL;DR: In this article, the authors studied strong nonlinear heat transport across a porous layer using Howard's (1963) variational method and provided the theoretical bounding heat-transport curve, which is consistent with the result of the well-known dimensional argument leading to the one-third power law for regular convection.
Abstract: Strongly nonlinear heat transport across a porous layer is studied using Howard's (1963) variational method. The analysis explores a bifurcation property of Busse's (1969) multi-α solution of this variational problem and complements the 1972 study of Busse & Joseph by further restricting the fields which are allowed to compete for the maximum heat transported at a given temperature difference. The restriction arises, as in the case of infinite Prandtl number convection studied by Chan (1971), from letting a parameter tend to infinity from the outset; here, however, the parameter which is assumed infinitely large (the Prandtl-Darcy number) is actually seldom smaller than O(10^7). The theoretical bounding heat-transport curve is computed numerically. The maximizing Nusselt number (Nu) curve is given at first by a functional of the single-α solution; then this solution bifurcates and the Nusselt number functional is maximized for an interval of Rayleigh numbers (R) by the two-α solution. The agreement between the numerical analysis and recent experiments is striking. The theoretical heat-transport curve is found to be continuously differentiable but has piecewise discontinuous second derivatives. The results of an asymptotic (R → ∞) analysis following Chan (1971) are in qualitative agreement with the results of numerical analysis and give the asymptotic law Nu = 0.016R. This law is consistent with the result of the porous version of the well-known dimensional argument leading to the one-third power law for regular convection. The asymptotic results, however, do not appear to be in good quantitative agreement with the numerical results.

95 citations



Journal ArticleDOI
TL;DR: In this paper, the authors presented a generalization of von Neumann's method of generating random samples from the exponential distribution by comparisons of uniform random numbers on (0, 1).
Abstract: The author presents a generalization he worked out in 1950 of von Neumann's method of generating random samples from the exponential distribution by comparisons of uniform random numbers on (0, 1). It is shown how to generate samples from any distribution whose probability density function is piecewise both absolutely continuous and monotonic on (−∞, ∞). A special case delivers normal deviates at an average cost of only 4.036 uniform deviates each. This seems more efficient than the Center-Tail method of Dieter and Ahrens, which uses a related, but different, method of generalizing the von Neumann idea to the normal distribution.

62 citations
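The starting point being generalized, von Neumann's comparison method for exponential deviates, is short enough to sketch. The version below is our own rendering of the classical algorithm (function name is ours), which accepts when the decreasing run of uniforms has odd length:

```python
# A sketch of von Neumann's comparison method for Exp(1) deviates,
# the classical algorithm that the paper generalizes.
import random

def vn_exponential(rng=random):
    """Sample from Exp(1) using only comparisons of uniforms on (0, 1)."""
    n = 0
    while True:
        u1 = rng.random()
        u, k = u1, 1
        while True:                 # count the decreasing run u1 > u2 > ...
            v = rng.random()
            if v > u:               # run stops at the k-th comparison
                break
            u, k = v, k + 1
        if k % 2 == 1:              # odd run length: accept
            return n + u1
        n += 1                      # even run length: reject, shift by one

random.seed(12345)
sample = [vn_exponential() for _ in range(20000)]
mean = sum(sample) / len(sample)
print(mean)  # close to 1, the mean of Exp(1)
```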



Journal ArticleDOI
TL;DR: In this paper, a technique for the optimal design of a large class of structures arising in civil and mechanical engineering is presented, where constraints on stress level, magnitudes of elastic deformation, natural frequency and buckling load are admitted.
Abstract: This paper presents a technique for the optimal design of a large class of structures arising in civil and mechanical engineering. Constraint conditions on stress level, magnitudes of elastic deformation, natural frequency and buckling load are admitted. A computational method is developed, which takes advantage of special features of the structural optimization problem. Numerical examples are solved to illustrate the applicability of the method. These problems are, in themselves, of engineering interest.

41 citations


Journal ArticleDOI
TL;DR: In this paper, it was shown that a locally unknotted embedding of a manifold M in Qxl, where Q is a manifold with dimension not less than that of M, can be ambiently isotoped to become a critical level embedding.
Abstract: In this paper the idea of collapsing, and the associated idea of handle cancellation, in a piecewise linear manifold are used to produce a version of Morse theory for piecewise linear embeddings. As an application of this it is shown that, if n > 2, there exist triangulations of the n-ball that are not simplicially collapsible. Working entirely within the P.L. category, we shall develop a proof of the fact that a locally unknotted embedding of a manifold M in Q × I, where Q is a manifold with dimension not less than that of M, can be ambiently isotoped to become a critical level embedding. This is an embedding which, regarding M as M = handle + collar + handle + collar ..., embeds each handle in a level of Q × I and each collar productwise along the I direction. This is a folklore theorem which is fundamental to several important theories (e.g. [10], [4], [12]). As mentioned by Rourke [10], the result could probably be deduced from an amplification of the methods employed by Kuiper [8] and Kosiński [7], in proving that a P.L. function f: M → R can be approximated by a function with only nondegenerate critical points (i.e., a P.L. analogue of Morse theory). That result can be deduced at once from the theory given here, by considering the graph of f as an embedding of M in M × I. Our method of proof is essentially very simple, and uses only basic P.L. theory. We consider M as a subcomplex of Q × I, and a simplicial collapse of Q × I to Q × 0; we drag a collar of Q × 0 in Q × I up over the whole of Q × I, following the collapsing sequence in the second derived subdivision of the triangulation, thus obtaining a new parametrisation of Q × I. When the collar is dragged over a p-simplex of M, a level p-handle is introduced in this new parametrisation. Otherwise the proof consists of attention to detail. In Theorem 3, M is replaced by M × I, in order to consider concordances M × I → Q × I.
This theorem sets the stage for applying Rourke's proof of "concordance implies isotopy in codimension 3", for the handles obtained for the critical level embedding of M × I actually cancel, as a handle decomposition of M × I on M × 0, without any handle 'moves'. In conjunction with Rourke's proof, this theory also gives an alternative proof of unknotting spheres in codimension > 3. Received by the editors May 14, 1971. AMS 1970 subject classifications. Primary 57C35, 57C45; Secondary 57D70. Copyright © 1972, American Mathematical Society

32 citations


Journal ArticleDOI
TL;DR: In this article, an optimization framework which extends the familiar Tinbergen-Theil model in two ways is presented, namely, a piecewise quadratic objective function and a time horizon that is endogenous to the optimization process itself.
Abstract: This paper outlines an optimization framework which extends the familiar Tinbergen-Theil model in two ways. First, a "piecewise quadratic" replaces the standard quadratic objective function. Second, the time horizon of the optimization becomes, within the context of economic stabilization problems, endogenous to the optimization process itself. The purpose of both extensions is to escape the conceptual restrictiveness of the Tinbergen-Theil structure while preserving the practical convenience of that model for applied policy work. The paper also describes a solution algorithm incorporating these two extensions, and it presents the results of a sample computational application based on the 1957-58 recession.

31 citations


Journal ArticleDOI
TL;DR: In this article, a method for obtaining exact solutions to the time-dependent coupled Hartree-Fock perturbation equations is presented, relying on the Aitken δ2 transformation to insure and accelerate convergence.
Abstract: A new method is presented for obtaining exact solutions to the time‐dependent coupled Hartree‐Fock perturbation equations. We choose an iterative approach, relying on the Aitken δ2 transformation to insure and accelerate convergence. At each iteration the resulting uncoupled inhomogeneous differential equations are solved using the technique previously presented by Alexander and Gordon, based on piecewise polynomial approximation of both the potential and the inhomogeneity. As a numerical application we calculate the frequency‐dependent dipole polarizability of the helium atom within the coupled Hartree‐Fock approximation. Comparison is made with the results of previous variational and numerical calculations. The method can be extended to the solution of integro‐differential equations arising in other areas of chemical physics.
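The Aitken δ² transformation relied on above is a general-purpose accelerator for linearly convergent sequences. A minimal sketch on a fixed-point iteration of our own choosing (not the paper's Hartree-Fock iteration):

```python
# A minimal sketch of the Aitken delta-squared transformation used to
# accelerate a linearly convergent fixed-point iteration.
import math

def aitken(seq):
    """Apply Aitken's delta^2 transform to a list of iterates."""
    out = []
    for a, b, c in zip(seq, seq[1:], seq[2:]):
        denom = c - 2 * b + a
        out.append(c - (c - b) ** 2 / denom if denom != 0 else c)
    return out

# Fixed-point iteration x = cos(x), converging linearly to 0.7390851...
x, xs = 1.0, []
for _ in range(10):
    xs.append(x)
    x = math.cos(x)

acc = aitken(xs)
fixed_point = 0.7390851332151607
print(abs(xs[-1] - fixed_point), abs(acc[-1] - fixed_point))
# the accelerated sequence is markedly closer to the fixed point
```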

Journal Article
Abstract: © Scuola Normale Superiore, Pisa, 1972, all rights reserved. Access to the archives of the journal "Annali della Scuola Normale Superiore di Pisa, Classe di Scienze" (http://www.sns.it/it/edizioni/riviste/annaliscienze/) implies agreement with the general conditions of use (http://www.numdam.org/legal.php). Any commercial use or systematic printing constitutes a criminal offence. Any copy or printing of this file must include this copyright notice.

Journal ArticleDOI
TL;DR: In this article, the utility of using a higher-order representation of the scalar potential than piecewise linear, and the effects of smoothing the electric field as opposed to simply calculating it from the gradient of the potential at every point were investigated.

Journal ArticleDOI
TL;DR: In this paper, two methods, the sliding-parabola and parabolic-splines methods, are developed for least-squares computer fitting of parabolic segments to short subranges of experimental data; the sliding parabola is slightly better for smooth data, while the parabolic splines are distinctly superior for scattered data.
Abstract: The determination of experimental curves and rates of change for nonlinear data is approached and solved without assuming an artificially restrictive mathematical form for the complete range of the data. This relatively form-free result is achieved by least-squares computer fitting of parabolic segments to short subranges of the experimental data. Two ways of doing this, referred to as the sliding-parabola and the parabolic-splines methods, are developed. These are tested on both smooth and scattered data generated basically from the function y = x¹/², without and with random error, respectively. For smooth data the sliding-parabola method is slightly better than the parabolic splines, but in general both are subject to only very small errors, and are also in good agreement with a previously presented graphical prism method. For scattered data, wherein the inherent errors of function and slope evaluation are much increased, the parabolic-splines method is distinctly superior to the sliding-parabola method. Both methods require only relatively short computing times, on the order of 1 sec for 40 data points, and are of utility for determining nonconstant experimental rates of change and for least-squares curve fitting without specification of a complete-range mathematical curve.
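The sliding-parabola idea is easy to sketch: least-squares fit a parabola to a short window of points and read off the smoothed value and slope at the window center. Window size and test function below are our choices, not the paper's:

```python
# A rough sketch of the sliding-parabola method: a local least-squares
# parabola on each short window gives a smoothed value and slope.
import numpy as np

def sliding_parabola(x, y, half=3):
    """Return smoothed values and slopes at interior points."""
    vals, slopes = [], []
    for i in range(half, len(x) - half):
        xi = x[i - half:i + half + 1]
        yi = y[i - half:i + half + 1]
        c2, c1, c0 = np.polyfit(xi - x[i], yi, 2)  # local parabola
        vals.append(c0)       # value at the window center
        slopes.append(c1)     # first derivative at the window center
    return np.array(vals), np.array(slopes)

x = np.linspace(1.0, 4.0, 40)
y = np.sqrt(x)                       # the paper's test curve y = x^(1/2)
vals, slopes = sliding_parabola(x, y)
true_slope = 0.5 / np.sqrt(x[3:-3])
print(np.max(np.abs(slopes - true_slope)))  # small on smooth data
```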


Journal ArticleDOI
TL;DR: In this paper, two methods for piecewise Hermite interpolation of a sufficiently smooth function are presented; on each elementary rectangle into which the given region is divided, the interpolation function is determined by all the derivatives of the function under consideration up to a predetermined order.
Abstract: The paper presents two methods for piecewise Hermite interpolation of a sufficiently smooth function. On each elementary rectangle into which the given region is divided, the interpolation function is determined by all the derivatives of the function under consideration up to a certain predetermined order. The results obtained are utilized in the solution of a general quasi-linear equation and in the solution of a non-linear integral equation.

Journal ArticleDOI
TL;DR: A parallel mechanism is described, based on an array of circularly nutating photodetectors, that computes an approximation of the density of slopes of the boundary of any piecewise regular silhouette that is invariant with respect to the size, translation, and orientation of the given silhouette.
Abstract: We describe a parallel mechanism, based on an array of circularly nutating photodetectors, that computes an approximation of the density of slopes of the boundary of any piecewise regular silhouette. This density, when properly normalized, is invariant with respect to the size, translation, and orientation of the given silhouette.

Journal ArticleDOI
TL;DR: In this paper, the authors investigated the possibility of developing "finite element" variational methods having higher-order accuracy which could advantageously replace the 5-point difference approximations to the multigroup diffusion equations in plane sections.

Journal ArticleDOI
TL;DR: This paper presents the impedance and admittance forms of diakoptic solution of the load-flow problem on the basis of graph- theoretic concepts that does not assume a fixed slack bus voltage but takes into account the equation for total transmission line losses as an integral part of the scheme.
Abstract: This paper presents the impedance and admittance forms of diakoptic solution of the load-flow problem on the basis of graph-theoretic concepts. The formulation does not assume a fixed slack bus voltage but instead takes into account the equation for total transmission line losses as an integral part of the scheme. Finally, test data based on this alternative formulation are presented for purposes of comparison with existing methods.


Proceedings ArticleDOI
01 Aug 1972
TL;DR: In this paper, the authors propose a suboptimal solution to the problem of choosing the boundaries of the regions in such a way as to minimize the amount of storage required for the approximate description of f(x,y).
Abstract: The domain of a function f(x,y) is subdivided into regions D1, D2, ..., DM such that on each of them f(x,y) can be approximated by a low-order polynomial within a given tolerance. It is desirable to choose the boundaries of the regions in such a way as to minimize the amount of storage required for the approximate description of f(x,y). A suboptimal solution to this problem is presented. It is based on a two-step procedure. First the optimal segmentation is obtained for profiles of f(x,y) along certain lines of its domain. The regions so obtained are then grouped together to form the final subdivisions. Examples of application of the method to the compression of topographical data are presented. Compression ratios of over 20:1 are obtained at an RMS error of 2%.
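The first step, segmenting a one-dimensional profile into runs that a low-order polynomial fits within tolerance, can be sketched with a greedy rule (our simplification of the paper's optimal segmentation; parameters are ours):

```python
# A sketch of profile segmentation: greedily extend each segment as long
# as a low-order polynomial fits the data within tolerance.
import numpy as np

def segment_profile(x, y, deg=1, tol=0.05):
    """Split (x, y) into maximal runs fit by a degree-`deg` polynomial."""
    bounds, start = [], 0
    while start < len(x) - 1:
        end = start + deg + 1                      # smallest fittable run
        while end < len(x):
            c = np.polyfit(x[start:end + 1], y[start:end + 1], deg)
            resid = np.abs(np.polyval(c, x[start:end + 1]) - y[start:end + 1])
            if np.max(resid) > tol:                # tolerance exceeded: stop
                break
            end += 1
        bounds.append((start, min(end, len(x) - 1)))
        start = min(end, len(x) - 1)
    return bounds

x = np.linspace(0.0, 2.0, 81)
y = np.where(x < 1.0, x, 2.0 - x)   # a roof profile: two linear pieces
segs = segment_profile(x, y)
print(segs)                          # two segments, split near x = 1
```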

Book ChapterDOI
01 Jan 1972
TL;DR: In this article, a finite element procedure using basis functions consisting of piecewise bicubic Hermite polynomials defined on a mesh which is refined in a well-defined manner in a neighborhood of each corner is discussed.
Abstract: Publisher Summary This chapter focuses on the computational aspects of the finite element method. The finite element procedure discussed in the chapter uses basis functions consisting of piecewise bicubic Hermite polynomials defined on a mesh which is refined in a well-defined manner in a neighborhood of each corner. The coefficients and right-hand side of the resulting linear algebraic system of equations involve integrals over two-dimensional rectangular elements that are approximated by the local nine-point product Gaussian quadrature scheme —the tensor product of the one-dimensional three-point Gaussian quadrature schemes. Finally, the approximate linear algebraic system of equations is symmetric and positive definite and is solved by either the band Cholesky or profile Cholesky decomposition procedure. The chapter presents theoretical justifications for the procedure used in the chapter. It is shown that asymptotically the procedure used in the chapter is far more efficient than the combination of the five-point central difference approximation and successive overrelaxation (SOR).
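The nine-point product Gaussian quadrature mentioned above is just the tensor product of two one-dimensional three-point Gauss-Legendre rules; a short sketch (the rectangle and integrand are our own test case):

```python
# The nine-point product Gauss rule: tensor product of 1-D three-point
# Gauss-Legendre rules, exact for degree <= 5 in each variable.
import numpy as np

nodes, weights = np.polynomial.legendre.leggauss(3)  # 3-point rule on [-1, 1]

def gauss9(f, ax, bx, ay, by):
    """Integrate f over the rectangle [ax, bx] x [ay, by]."""
    sx, sy = (bx - ax) / 2, (by - ay) / 2            # affine map from [-1, 1]
    total = 0.0
    for xi, wi in zip(nodes, weights):
        for yj, wj in zip(nodes, weights):
            total += wi * wj * f(ax + sx * (xi + 1), ay + sy * (yj + 1))
    return total * sx * sy

# Exact for x^5 * y^4 over [0, 1]^2: integral = (1/6) * (1/5) = 1/30.
val = gauss9(lambda x, y: x**5 * y**4, 0.0, 1.0, 0.0, 1.0)
print(val)  # 1/30 up to rounding
```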

Journal ArticleDOI
TL;DR: In this paper, the authors present a new method for piecewise solution of large and integrated electrical networks and power system load flow problems applying principle of superposition, without involving diakoptics.
Abstract: This paper presents a new method for piecewise solution of large, integrated electrical networks and power system load-flow problems, applying the principle of superposition without involving diakoptics. This permits the average engineer to solve large-scale practical problems in pieces without knowledge of the terminology and topology of diakoptics. The method does not require development of the intersubdivision matrix (model) required in the diakoptical method; instead, it calculates cut-branch currents in a simple way. Necessary mathematical models are developed which represent the performance characteristics of the system. Subdivision solution models are developed in the form of nodal admittance matrices and solved in conjunction with optimally ordered triangular factorization to yield nodal voltages. The method provides full freedom in choosing the line of cut and the reference node.

Journal ArticleDOI
TL;DR: In this paper, it was shown that any upper semicontinuous decomposition of E^n which is generated by a trivial defining sequence of cubes with handles determines a factor of E^{n+1}.
Abstract: In this paper we prove that any upper semicontinuous decomposition of E^n which is generated by a trivial defining sequence of cubes with handles determines a factor of E^{n+1}. An important corollary to this result is that every 0-dimensional point-like decomposition of E^3 determines a factor of E^4. In our approach we have simplified the construction of the sequence of shrinking homeomorphisms by eliminating the necessity of shrinking sets piecewise in a collection of n-cells, the technique employed by R. H. Bing in the original result of this type.

Journal ArticleDOI
TL;DR: The basis functions are displayed in closed form for piecewise polynomial approximation of degree n over a triangulation of the plane, expressed simply in terms of the pyramid functions for linear approximation.
Abstract: In many applications of the finite element method, the explicit form of the basis functions is not known. A well-known exception is that of piecewise linear approximation over a triangulation of the plane, where the basis functions are pyramid functions. In the present paper, the basis functions are displayed in closed form for piecewise polynomial approximation of degree n over a triangulation of the plane. These basis functions are expressed simply in terms of the pyramid functions for linear approximation.
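On a single triangle, the three pyramid functions are exactly the barycentric coordinates: each equals 1 at its own vertex, 0 at the other two, and the three sum to 1 everywhere. A minimal sketch (the triangle's vertices are our choice):

```python
# The three linear basis ("pyramid") functions on a single triangle,
# expressed through barycentric coordinates; higher-degree bases are
# built from products of these.
import numpy as np

tri = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])  # vertices

def barycentric(p, tri):
    """Barycentric coordinates of p; these ARE the pyramid functions."""
    T = np.column_stack([tri[1] - tri[0], tri[2] - tri[0]])
    l1, l2 = np.linalg.solve(T, np.asarray(p) - tri[0])
    return np.array([1.0 - l1 - l2, l1, l2])

b0 = barycentric([0.0, 0.0], tri)    # at vertex 0: first function is 1
b = barycentric([0.25, 0.25], tri)   # interior point: partition of unity
print(b0, b)
```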

Journal ArticleDOI
TL;DR: In this paper, the authors use linear programming codes to find the optimal solution to problems with economies of scale in water resources planning problems, where the problem is solved by adjusting the unit construction cost on a single facility to iteratively work toward the true optimal solution.
Abstract: Both because of its effectiveness and ease in use, linear programming has become progressively popular in water resources planning problems. Yet, the assumptions of linear construction costs can be misleading. Diseconomies of scale in construction can be handled by successive approximations to the cost function but problems with economies of scale yield paradoxical results when piecewise approximations are used. If significant economies of scale exist in only one facility, the solution to problems of this nature can be found using normal linear programming codes by successively adjusting the unit construction cost on that single facility to iteratively work toward the true optimal solution.

01 Jan 1972
TL;DR: In this paper, a method for eliminating quantifiers in the theory of addition on the real numbers (EAR) has been described, which has been used to solve linear programming problems.
Abstract: Using formal logic, many problems from the general area of linear inequalities can be expressed in the elementary theory of addition on the real numbers (EAR). We describe a method for eliminating quantifiers in EAR which has been programmed, and demonstrate its usefulness in solving some problems related to linear programming. In the area of mechanical mathematics this kind of approach has been neglected in favor of more generalized methods based on Herbrand expansion. However, in a restricted area, such as linear inequalities, the use of these specialized methods can increase efficiency by several orders of magnitude over an axiomatic Herbrand approach, and make practical problems accessible. As is common in artificial intelligence, the work reported here is of an interdisciplinary nature. It involves mathematical logic, linear inequalities, and symbolic mathematics on a computer. For the sake of argument, let us distinguish two kinds of workers in the area of linear inequalities. There is the theoretician, who is developing new methods and discovering new theorems. Then there is the user, who is faced with a practical problem which can be expressed in some way at least piecewise linearly. As a simple-minded distinction between the theoretician and the user we can say that the latter is interested in questions involving a fixed number of variables, while the former is concerned with questions involving an arbitrary number of variables. Using terminology from logic to be made more precise below, this means that the user is generally working within the elementary theory of addition
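The core operation in quantifier elimination for linear inequalities over the reals is eliminating one existentially quantified variable, classically done by Fourier-Motzkin elimination. A minimal sketch of that step (our own code, not the paper's program):

```python
# Eliminating one variable from a system of linear inequalities by
# Fourier-Motzkin elimination: pair every lower bound with every upper
# bound on the variable. Each inequality is (coeffs, b): coeffs . x <= b.
def eliminate(ineqs, j):
    """Eliminate variable j from a list of (coeffs, bound) inequalities."""
    lower, upper, rest = [], [], []
    for c, b in ineqs:
        if c[j] > 0:
            upper.append((c, b))
        elif c[j] < 0:
            lower.append((c, b))
        else:
            rest.append((c, b))
    for cl, bl in lower:          # combine every lower with every upper
        for cu, bu in upper:
            s, t = -cl[j], cu[j]  # positive scale factors
            c = [t * a + s * d for a, d in zip(cl, cu)]  # j-th entry is 0
            rest.append((c, t * bl + s * bu))
    return rest

# Does there exist y with x <= y and y <= 5?  Eliminate y (index 1):
system = [([1, -1], 0), ([0, 1], 5)]   # x - y <= 0,  y <= 5
reduced = eliminate(system, 1)
print(reduced)                          # leaves the condition x <= 5
```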


Journal ArticleDOI
TL;DR: In this paper, the error estimates for the Rayleigh-Ritz method with piecewise linear trial functions under weak regularity assumptions on the boundary of R and the triangulation of the plane of R are obtained.
Abstract: COURANT has suggested in [1] a finite difference method which is applicable to the Dirichlet problem for second order self-adjoint elliptic equations in a bounded open two-dimensional region R. The finite difference equations are obtained by means of the Rayleigh-Ritz method with trial functions that are piecewise linear over a triangulation of the plane. This method has been investigated recently by many authors (see [2]-[9]). In the present paper we shall obtain error estimates for the Rayleigh-Ritz method with piecewise linear trial functions under weak regularity assumptions on the boundary of R and the triangulation of the plane of R.
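A one-dimensional analogue shows the Rayleigh-Ritz method with piecewise linear (hat function) trial functions in a few lines; the model problem below is our own illustration, not the paper's two-dimensional setting:

```python
# Rayleigh-Ritz with piecewise linear trial functions in 1-D:
# solve -u'' = 1 on (0, 1) with u(0) = u(1) = 0. With hat functions on a
# uniform mesh the stiffness matrix is tridiagonal, and for this problem
# the Ritz solution is exact at the nodes.
import numpy as np

n = 8                                   # number of subintervals
h = 1.0 / n
# Stiffness matrix: 2/h on the diagonal, -1/h on the off-diagonals.
A = (np.diag(2.0 * np.ones(n - 1)) - np.diag(np.ones(n - 2), 1)
     - np.diag(np.ones(n - 2), -1)) / h
f = h * np.ones(n - 1)                  # load vector for f(x) = 1
u = np.linalg.solve(A, f)               # interior nodal values

x = np.linspace(0.0, 1.0, n + 1)[1:-1]
exact = x * (1.0 - x) / 2.0             # exact solution of the BVP
err = np.max(np.abs(u - exact))
print(err)                              # essentially zero at the nodes
```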

Journal ArticleDOI
TL;DR: The relation ℰ_n = o[ω_f(e^{−√n})] is valid for a function f that admits a bounded analytic continuation onto the disk K = {z : |z − 1| < 1}, as discussed by the authors.
Abstract: For the best piecewise polynomial approximation ℰ_n = ℰ_n(f; [0, 1]) of a function f, which is continuous on the interval [0, 1] and admits a bounded analytic continuation onto the disk K = {z : |z − 1| < 1}, the relation ℰ_n = o[ω_f(e^{−√n})] is valid.