
Showing papers on "Computation" published in 1979


Book ChapterDOI
TL;DR: In this article, it is shown that direct simulation is not an alternative for practical computation and that the various sophisticated closures suffer from essentially the same problems as the direct simulations and therefore, are limited to homogeneous situations.
Abstract: This chapter discusses how, in many situations of practical importance, the "second-order modeling" technique makes possible computations that often agree with the available data. Inevitably, the technique is also applied in many situations for which data do not exist; this must be regarded as a dangerous practice, since the limitations of the technique are not known with any precision. It is primarily the possibility of practical computation that has been responsible for the great interest in this method. Even in its most stripped-down form, it generally requires the simultaneous solution of four partial differential equations in the domain of interest; more elaborate models in a three-dimensional situation might require the simultaneous solution of as many as 36 partial differential equations to obtain the mechanical field alone. This is within the capabilities of present computers at a reasonable price, which cannot be said of any other technique. The chapter argues that direct simulation is not an alternative for practical computation, and that the various sophisticated closures suffer from essentially the same problems as direct simulations and are therefore limited to homogeneous situations. Thus, second-order modeling is the only possibility for practical computation.

1,069 citations



Book ChapterDOI
16 Jul 1979
TL;DR: By providing more sophisticated well-founded sets, the corresponding termination functions can be simplified.
Abstract: A common tool for proving the termination of programs is the well-founded set, a set ordered in such a way as to admit no infinite descending sequences. The basic approach is to find a termination function that maps the values of the program variables into some well-founded set, such that the value of the termination function is continually reduced throughout the computation. All too often, the termination functions required are difficult to find and are of a complexity out of proportion to the program under consideration. However, by providing more sophisticated well-founded sets, the corresponding termination functions can be simplified.
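
As a hedged illustration of this idea (an invented example, not one from the chapter), the sketch below takes the well-founded set to be N x N under lexicographic order, so the pair (i, j) itself serves as the termination function: every iteration strictly decreases it, even though j may jump upward whenever i decreases.

```python
# Illustrative sketch (not from the chapter): the pair (i, j) maps program
# state into the well-founded set N x N under lexicographic order, and every
# iteration strictly decreases it, hence the loop terminates.

def lex_less(a, b):
    """Strict lexicographic order on pairs of naturals."""
    return a[0] < b[0] or (a[0] == b[0] and a[1] < b[1])

def program(i, j):
    steps = 0
    while i > 0 or j > 0:
        prev = (i, j)
        if j > 0:
            j -= 1                 # second component decreases, first unchanged
        else:
            i, j = i - 1, 2 * i    # first component decreases; j may grow
        assert lex_less((i, j), prev)  # the termination measure strictly drops
        steps += 1
    return steps

print(program(3, 2))  # terminates despite j repeatedly growing
```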

353 citations


Journal ArticleDOI
Russell H. Taylor
TL;DR: Two methods for achieving straight line motions in manipulator control languages are described; the first interpolates intermediate points along the Cartesian straight line path at regular intervals during the motion and solves the manipulator's kinematic equations to produce the corresponding intermediate joint parameter values.
Abstract: Recently developed manipulator control languages typically specify motions as sequences of points through which a tool affixed to the end of the manipulator is to pass. The effectiveness of such motion specification formalisms is greatly increased if the tool moves in a straight line between the user-specified points. This paper describes two methods for achieving such straight line motions. The first method is a refinement of one developed in 1974 by R. Paul. Intermediate points are interpolated along the Cartesian straight line path at regular intervals during the motion, and the manipulator's kinematic equations are solved to produce the corresponding intermediate joint parameter values. The path interpolation functions developed here offer several advantages, including less computational cost and improved motion characteristics. The second method uses a motion planning phase to precompute enough intermediate points so that the manipulator may be driven by interpolation of joint parameter values while keeping the tool on an approximately straight line path. This technique allows a substantial reduction in real time computation and permits problems arising from degenerate joint alignments to be handled more easily. The planning is done by an efficient recursive algorithm which generates only enough intermediate points to guarantee that the tool's deviation from a straight line path stays within prespecified error bounds.
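
A minimal sketch of the first method, using a planar two-link arm as a stand-in for a full manipulator (the link lengths, the elbow-down solution, and the omission of tool orientation are all simplifying assumptions, not the paper's formulation):

```python
import math

L1, L2 = 1.0, 1.0  # assumed link lengths

def inverse_kinematics(x, y):
    """Closed-form IK for a two-link planar arm (elbow-down solution)."""
    c2 = (x * x + y * y - L1 * L1 - L2 * L2) / (2 * L1 * L2)
    q2 = math.acos(max(-1.0, min(1.0, c2)))
    q1 = math.atan2(y, x) - math.atan2(L2 * math.sin(q2), L1 + L2 * math.cos(q2))
    return q1, q2

def straight_line_joint_path(p0, p1, steps):
    """Interpolate in Cartesian space, then solve IK at each intermediate point."""
    path = []
    for k in range(steps + 1):
        t = k / steps
        x = p0[0] + t * (p1[0] - p0[0])
        y = p0[1] + t * (p1[1] - p0[1])
        path.append(inverse_kinematics(x, y))
    return path

for q1, q2 in straight_line_joint_path((1.5, 0.0), (0.5, 1.0), 5):
    print(f"q1={q1:.3f} rad, q2={q2:.3f} rad")
```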

305 citations


Journal ArticleDOI
TL;DR: A scheme is presented in which alerters may be placed on a complex query involving a relational database, and a method is demonstrated for reducing the amount of computation involved in checking whether an alerter should be triggered.
Abstract: An alerter is a program which monitors a database and reports to some user or program when a specified condition occurs. It may be that the condition is a complicated expression involving several entities in the database; in this case the evaluation of the expression may be computationally expensive. A scheme is presented in which alerters may be placed on a complex query involving a relational database, and a method is demonstrated for reducing the amount of computation involved in checking whether an alerter should be triggered.
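
The paper's reduction scheme operates on relational queries; the sketch below only illustrates the general flavor, with a hypothetical Alerter class (an assumption, not the paper's design) that skips re-evaluating a potentially expensive condition unless an update touches an attribute the condition reads.

```python
# Minimal sketch: each alerter records the attributes its condition depends
# on, so an update triggers re-evaluation only when a watched attribute changes.

class Alerter:
    def __init__(self, name, attributes, condition):
        self.name = name
        self.attributes = set(attributes)  # attributes the condition reads
        self.condition = condition

class MonitoredTable:
    def __init__(self):
        self.rows = []
        self.alerters = []

    def update(self, row_id, changes):
        self.rows[row_id].update(changes)
        touched = set(changes)
        for a in self.alerters:
            # Cheap relevance pre-test before the expensive condition check.
            if touched & a.attributes and a.condition(self.rows[row_id]):
                print(f"ALERT {a.name}: row {row_id} = {self.rows[row_id]}")

table = MonitoredTable()
table.rows.append({"part": "valve", "qty": 40, "price": 9.0})
table.alerters.append(Alerter("low-stock", ["qty"], lambda r: r["qty"] < 10))
table.update(0, {"price": 9.5})   # irrelevant attribute: condition not evaluated
table.update(0, {"qty": 4})       # relevant attribute: alerter fires
```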

193 citations


Journal ArticleDOI
TL;DR: In this article, the numerical computation of branch points in systems of nonlinear equations is considered, and a direct method is presented which requires the solution of one equation only, and the branch points are indicated by suitable test functions.
Abstract: The numerical computation of branch points in systems of nonlinear equations is considered. A direct method is presented which requires the solution of one equation only. The branch points are indicated by suitable test functions. Numerical results for three examples are given.
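
A hedged sketch of the test-function idea, using a toy pitchfork bifurcation rather than one of the paper's examples: along a solution branch of F(x, lam) = 0, a sign change of det F_x flags a candidate branch point.

```python
import numpy as np

def F(x, lam):
    # Assumed toy system: the pitchfork x^3 - lam*x = 0.
    return np.array([x[0] ** 3 - lam * x[0]])

def Fx(x, lam):
    # Jacobian of F with respect to x.
    return np.array([[3 * x[0] ** 2 - lam]])

prev_sign = None
for lam in np.linspace(-1.0, 1.0, 20):
    x = np.array([0.0])                 # the trivial solution branch x = 0
    assert abs(F(x, lam)[0]) < 1e-12    # confirm we are on the branch
    tau = np.linalg.det(Fx(x, lam))     # test function tau = det(F_x)
    sign = np.sign(tau)
    if prev_sign is not None and sign != prev_sign:
        print(f"test function changes sign near lam = {lam:.2f}: branch point")
    prev_sign = sign
```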

186 citations


Journal ArticleDOI
TL;DR: In this article, the computer code Duvorol, dealing with the computation of three-dimensional rolling contact with dry friction, is described, which is based on the variational principle of Duvaut and Lions for dry friction.
Abstract: In this paper the computer code Duvorol, dealing with the computation of three-dimensional rolling contact with dry friction, is described. It is based on the variational principle of Duvaut and Lions for dry friction, which leads to an incremental theory. The relevant properties of Duvorol are:
1. Generality: all half-space steady-state rolling contact problems with Hertzian normal contact can be treated.
2. Reliability: the total tangential force is always found with reasonable accuracy by a standard discretization.
3. Speed: on an IBM 370/158 the calculation of a case takes only several seconds.

144 citations


Journal ArticleDOI
TL;DR: A general method is described for designing networks of locally interconnected simple processors; the designs include the network of interconnections among the participating processors as well as the iterative computation to be performed by each processor.

124 citations


Book ChapterDOI
Charles A. Micchelli
01 Jan 1979
TL;DR: In this article, an algorithm for the computation of smooth piecewise polynomials (multivariate B-splines) is given, along with the results of numerical calculations for twelve typical B-splines.
Abstract: In this paper an algorithm for the computation of smooth piecewise polynomials (multivariate B-splines) is given. The results of numerical calculations for twelve typical B-splines are also reported.
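
The paper's algorithm targets multivariate B-splines; as a simpler, hedged stand-in for illustration, the sketch below evaluates a univariate B-spline basis function by the classical Cox-de Boor recursion (not the paper's method).

```python
def bspline_basis(i, k, t, knots):
    """Value at t of the i-th B-spline basis function of order k (degree k-1)."""
    if k == 1:
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    left = right = 0.0
    d1 = knots[i + k - 1] - knots[i]
    if d1 > 0:
        left = (t - knots[i]) / d1 * bspline_basis(i, k - 1, t, knots)
    d2 = knots[i + k] - knots[i + 1]
    if d2 > 0:
        right = (knots[i + k] - t) / d2 * bspline_basis(i + 1, k - 1, t, knots)
    return left + right

knots = [0, 1, 2, 3, 4]
print(bspline_basis(0, 4, 2.0, knots))  # cubic B-spline at its peak: 2/3
```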

112 citations


Journal ArticleDOI
TL;DR: In this paper, two scaling transformations are discussed which make such computations feasible by reducing the computation time to acceptable levels, and which are shown to be valid for low-Mach-number flows.

108 citations


Journal ArticleDOI
TL;DR: It is proved that only a small amount of computational work is needed for the approximation of one eigenvalue and the corresponding eigenfunctions.
Abstract: The eigenvalues and eigenfunctions can be approximated by finite element methods. Then the original problem is replaced by a finite-dimensional problem. In this paper we propose a multi-grid method for solving these finite-dimensional problems. It is proved that only a small amount of computational work is needed for the approximation of one eigenvalue and the corresponding eigenfunctions.
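
The paper's multi-grid scheme is not reproduced here; the hedged sketch below shows only the surrounding structure, inverse iteration on a discretized 1-D eigenproblem, where a multi-grid method would replace the exact linear solve with a few cheap coarse/fine-grid cycles (which is what keeps the work per eigenpair small).

```python
import numpy as np

n = 99
h = 1.0 / (n + 1)
# Finite-difference/finite-element discretization of -u'' on (0, 1).
A = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2

v = np.random.default_rng(0).standard_normal(n)
for _ in range(20):
    v = np.linalg.solve(A, v)   # the step a multi-grid cycle would approximate
    v /= np.linalg.norm(v)

lam = v @ A @ v                 # Rayleigh quotient of the converged vector
print(lam, np.pi**2)            # smallest eigenvalue of -u'' on (0,1) is pi^2
```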


Journal ArticleDOI
01 Feb 1979-Infor
TL;DR: New labeling techniques are provided for accelerating the basis exchange step of specialized linear programming methods for network problems; computational results show that these techniques substantially reduce the amount of computation involved in updating operations.
Abstract: New labeling techniques are provided for accelerating the basis exchange step of specialized linear programming methods for network problems. Computational results are presented which show that these techniques substantially reduce the amount of computation involved in updating operations.

Journal ArticleDOI
TL;DR: This paper demonstrates a hypothesis test procedure which permits the objective and unambiguous evaluation of comparative dielectric tests on two different sets of data.
Abstract: The results of accelerated aging tests on solid electrical insulation are difficult to evaluate objectively, primarily due to the inherently large variability of the test data. This variability is often represented by the Weibull or other extreme-value probability distributions. This paper demonstrates a hypothesis test procedure which permits the objective and unambiguous evaluation of comparative dielectric tests on two different sets of data. The computation techniques are facilitated through the use of a Fortran computer program. A significant difference must be established at low probabilities of failure. Analysis of typical aging tests from the literature indicates that many experiments performed to date may not be statistically significant at utilization levels. The number of tests required to achieve unambiguous significance at low probability levels may render meaningful accelerated aging tests uneconomic.
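
The paper's Fortran procedure is not shown in the abstract; as a hedged illustration (synthetic data, assumed parameters, and scipy in place of the paper's test), one can fit Weibull distributions to two sets of breakdown times and compare an estimated low-probability failure level, which is where the paper argues significance must be established.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Assumed synthetic breakdown times for two insulation samples.
times_a = stats.weibull_min.rvs(2.0, scale=100.0, size=30, random_state=rng)
times_b = stats.weibull_min.rvs(2.0, scale=130.0, size=30, random_state=rng)

for name, data in [("A", times_a), ("B", times_b)]:
    shape, loc, scale = stats.weibull_min.fit(data, floc=0)
    t01 = stats.weibull_min.ppf(0.01, shape, loc=loc, scale=scale)
    print(f"insulation {name}: shape={shape:.2f}, scale={scale:.1f}, "
          f"1% failure time ~ {t01:.1f}")
```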

Journal ArticleDOI
TL;DR: In this paper, three long-wave approximations (the Born, quasistatic, and extended quasistatic) to the scattering of elastic waves from a flaw embedded in an isotropic medium are discussed.
Abstract: We discuss three long‐wave approximations (the Born, quasistatic, and extended quasistatic) to the scattering of elastic waves from a flaw embedded in an isotropic medium. First, we derive the Born and quasistatic approximations by the technique of infinite‐order perturbation theory. For these approximations this derivation clearly reveals the precise nature of the approximations and the generality of the quasistatic approximation in particular. Next, we give the complete details on how to calculate within the three approximations the long‐wave scattering from ellipsoidal voids and inclusions. Then, we calibrate the approximations by comparison with exact results for the scattering from a sphere and also present computations based on the extended quasistatic approximation for the scattering from various ellipsoidal voids.

Journal ArticleDOI
TL;DR: This article is the second in a series of papers reporting the results of a study of tides, setup and bottom friction in the Bight of Abaco, Bahamas; extensive field data reported in Part I of the series (Filloux and Snyder, 1979) are compared with tidal computations using a modified elliptic model first developed by Sidjabat (1970).
Abstract: This is the second in a series of papers reporting the results of a study of tides, setup and bottom friction in the Bight of Abaco, Bahamas. The extensive field data reported in Part I of the series (Filloux and Snyder, 1979) are compared with tidal computations using a modified elliptic model first developed by Sidjabat (1970). This model, a multiconstituent generalization of the “harmonic method” of Dronkers (1964), is based on a polynomial representation for the magnitude of the current which provides a tractable resolution of bottom friction, resulting in a coupled set of time-independent equations governing the individual constituents. This resolution naturally splits the bottom friction into a part which can be absorbed on the left-hand (operator) side of the constituent equations and a part which can be treated as multilinear source terms on the right-hand side. The contribution to the left-hand side is large enough that the resulting coupled set may be solved iteratively, converging rapidly...


Journal ArticleDOI
TL;DR: Four algorithms for the numerical computation of the standard deviation of (unweighted) sampled data are analyzed and it is concluded that all four algorithms will provide accurate answers for many problems, but two of the algorithms are substantially more accurate on difficult problems than are the other two.
Abstract: Four algorithms for the numerical computation of the standard deviation of (unweighted) sampled data are analyzed. Two of the algorithms are well-known in the statistical and computational literature; the other two are new algorithms specifically intended for automatic computation. Our discussion is expository, with emphasis on reaching a suitable definition of “accuracy”. Each of the four algorithms is analyzed for the conditions under which it will be accurate. We conclude that all four algorithms will provide accurate answers for many problems, but two of the algorithms, one new, one old, are substantially more accurate on difficult problems than are the other two.
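
The abstract does not name the four algorithms; as a hedged illustration of the accuracy issue, compare the classical textbook one-pass formula, which can lose all significance through cancellation, with Welford's updating method, a standard one-pass algorithm that stays accurate on such data.

```python
import math

def stdev_textbook(xs):
    """One-pass sum-of-squares formula: fast but numerically fragile."""
    n = len(xs)
    s, ss = sum(xs), sum(x * x for x in xs)
    var = (ss - s * s / n) / (n - 1)
    return math.sqrt(max(var, 0.0))  # cancellation can even drive var below 0

def stdev_welford(xs):
    """Welford's one-pass updating algorithm: accurate on difficult data."""
    mean = m2 = 0.0
    for k, x in enumerate(xs, start=1):
        delta = x - mean
        mean += delta / k
        m2 += delta * (x - mean)     # running sum of squared deviations
    return math.sqrt(m2 / (len(xs) - 1))

data = [1e9 + x for x in (4.0, 7.0, 13.0, 16.0)]  # huge mean, small spread
print(stdev_textbook(data))   # may be wildly wrong due to cancellation
print(stdev_welford(data))    # accurate: ~5.477
```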

Journal ArticleDOI
TL;DR: In this article, spline functions are used to solve two mathematical problems: the determination of derivatives (velocities and accelerations) from displacement data and the computation of Fourier coefficients.
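
A hedged sketch of the first application (derivatives from sampled displacement data) using scipy's smoothing splines; the test signal, noise level, and smoothing factor are assumptions for illustration, not the paper's choices.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

t = np.linspace(0, 2 * np.pi, 100)
rng = np.random.default_rng(0)
displacement = np.sin(t) + 0.01 * rng.standard_normal(t.size)  # noisy samples

spline = UnivariateSpline(t, displacement, k=5, s=0.01)  # quintic smoothing spline
velocity = spline.derivative(1)(t)       # should track cos(t)
acceleration = spline.derivative(2)(t)   # should track -sin(t)
print(np.max(np.abs(velocity - np.cos(t))))  # error of the recovered velocity
```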

Journal ArticleDOI
TL;DR: A formal framework for structuring and embedding the heuristic information is proposed in order to allow an algorithmic computation of the evaluation function ĥ(n) of the classical Hart-Nilsson-Raphael algorithm.

Journal ArticleDOI
01 Sep 1979
TL;DR: In this paper, it was shown that the error associated with the Burg frequency estimator for a truncated real sinusoid in noise can be reduced by using a simple window function in the computation of the coefficient $a_{mm}$ in the $m$th-order prediction-error filter $(1, a_{m1}, \ldots, a_{mm})$.
Abstract: It is indicated that the error associated with the Burg frequency estimator for a truncated real sinusoid in noise can be reduced by using a simple window function in the computation of the coefficient $a_{mm}$ in the $m$th-order prediction-error filter $(1, a_{m1}, \ldots, a_{mm})$.
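
As a hedged sketch of the idea, Burg's recursion can weight the sums that produce each reflection coefficient $a_{mm}$; the raised-cosine window used below is an assumed choice, not necessarily the paper's.

```python
import numpy as np

def burg(x, order, window=None):
    f = np.asarray(x, dtype=float).copy()   # forward prediction errors
    b = f.copy()                            # backward prediction errors
    a = np.array([1.0])                     # filter (1, a_m1, ..., a_mm)
    for m in range(1, order + 1):
        fs, bs = f[m:], b[m - 1:-1]
        w = np.ones(fs.size) if window is None else window(fs.size)
        # Windowed Burg reflection coefficient (equals a_mm at stage m).
        k = -2.0 * np.sum(w * fs * bs) / np.sum(w * (fs * fs + bs * bs))
        f[m:], b[m:] = fs + k * bs, bs + k * fs
        a = np.append(a, 0.0)
        a = a + k * a[::-1]                 # Levinson-style filter update
    return a

fs_hz = 1000.0
t = np.arange(64) / fs_hz
x = np.sin(2 * np.pi * 123.0 * t)           # truncated real sinusoid

a = burg(x, 2, window=lambda n: np.hanning(n) + 0.01)
root = np.roots(a)[0]
print(abs(np.angle(root)) * fs_hz / (2 * np.pi))  # estimated frequency in Hz
```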

Journal ArticleDOI
TL;DR: In this paper, a modification of the theory of neighboring extremals is presented which leads to a new formulation of a linear boundary value problem for the perturbation of the state and adjoint variables around a reference trajectory.
Abstract: A modification of the theory of neighboring extremals is presented which leads to a new formulation of a linear boundary value problem for the perturbation of the state and adjoint variables around a reference trajectory. On the basis of the multiple shooting algorithm, a numerical method for stable and efficient computation of perturbation feedback schemes is developed. This method is then applied to guidance problems in astronautics. Using as much stored a priori information about the precalculated flight path as possible, the only computational work to be done on the on-board computer for the computation of a regenerated optimal control program is a single integration of the state differential equations and the solution of a few small systems of linear equations. The amount of computation is small enough to be carried out in real time on modern on-board computers. Nevertheless, the controllability region is large enough to compensate realistic flight disturbances, so that optimality is preserved. In this paper a numerical method for the computation of neighboring optimum feedback schemes is presented. The method is then applied to guidance problems in astronautics. Suppose that during the flight of a spacecraft, deviations from the precalculated path occur. A new optimal control program must then be computed in real time on the on-board computer to compensate these perturbations and to preserve optimality. The necessary amount of computing time can be minimized only if as much a priori information about the nominal path as possible is invested and stored. The extensive numerical experiments...

Journal ArticleDOI
TL;DR: Computational algorithms are presented for the finite element dynamic analysis of structures on the CDC STAR-100 computer and the spatial behavior is described using higher-order finite elements.

Journal ArticleDOI
TL;DR: It is concluded that the fields of symbolic and numerical computation can advance most fruitfully in harmony rather than in competition.

Journal ArticleDOI
TL;DR: It is argued that several computation procedures commonly used in spectrum estimation are contrary to the basic idioms of statistical analysis.
Abstract: It is argued that several computation procedures commonly used in spectrum estimation are contrary to the basic idioms of statistical analysis.

01 Jan 1979
TL;DR: In this paper, a relationship between the vehicle scheduling problem and the dynamic lot size problem is considered, where the goal is to minimize the combined set-up and inventory holding costs.
Abstract: In this paper a relationship between the vehicle scheduling problem and the dynamic lot size problem is considered. For the latter problem we assume that order quantities for different products can be determined separately. Demand is known over our n-period production planning horizon. For a certain product our task is to decide for each period whether it should be produced or not and, if it is produced, what its economic lot size is. Our aim here is to minimize the combined set-up and inventory holding costs. The optimal solution of this problem is given by the well-known Wagner-Whitin dynamic lot size algorithm. Many heuristics for solving this problem have also been presented. In this article we point out the analogy of the dynamic lot size problem to a certain vehicle scheduling problem. For solving vehicle scheduling problems the heuristic algorithm developed by Clarke and Wright is very often used. Applying this algorithm to the equivalent vehicle scheduling problem, we obtain by analogy a simple heuristic algorithm for the dynamic lot size problem. Numerical results indicate that computation time is reduced by about 50% compared to the Wagner-Whitin algorithm. The average cost appears to be approximately 0.8% higher than optimum.
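
For reference, here is a minimal sketch (with assumed demand and cost figures) of the Wagner-Whitin dynamic program that the heuristic is benchmarked against; the paper's Clarke-and-Wright-style heuristic itself is not reproduced. The classic implementation is more economical, but the straightforward version below shows the recursion.

```python
def wagner_whitin(demand, setup_cost, holding_cost):
    """best[j] = minimal cost of covering demand for periods 1..j."""
    n = len(demand)
    best = [0.0] + [float("inf")] * n
    for j in range(1, n + 1):
        for i in range(1, j + 1):        # last production run is in period i
            cost = best[i - 1] + setup_cost
            for t in range(i, j + 1):    # hold period t's demand from i to t
                cost += holding_cost * (t - i) * demand[t - 1]
            best[j] = min(best[j], cost)
    return best[n]

# Assumed example data: 5-period demand, set-up cost 100, unit holding cost 1.
print(wagner_whitin([20, 50, 10, 50, 50], setup_cost=100.0, holding_cost=1.0))
```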

Proceedings ArticleDOI
01 Dec 1979
TL;DR: The von Neumann model has proved to be a viable and powerful approach to computation, but other models of computation are explored to determine if they offer advantages in ease of programming, exploitation of concurrency and performance.
Abstract: In 1946 John von Neumann outlined an organization for computers that has dominated the languages and architecture of machines to this day—the familiar sequential, one-word-at-a-time instruction stream which modifies the contents of a memory. Although the von Neumann model has proved to be a viable and powerful approach to computation, we have chosen to explore other models of computation to determine if they offer advantages in ease of programming, exploitation of concurrency and performance. A primary motivation is new technology such as large scale integration (LSI) which has greatly expanded the range of choice in computer design.

Journal ArticleDOI
J. M. Owen
TL;DR: In this paper, an approximate theoretical model is developed to estimate the effects of errors in temperature measurement on the computation of heat-transfer coefficients, h, from the numerical solution of Fourier's equation.
Abstract: An approximate theoretical model is developed to estimate the effects of errors in temperature measurement on the computation of heat-transfer coefficients, h, from the numerical solution of Fourier's equation. The model predicts that, depending on the Biot number and the algorithm used, a small random error on temperature will produce an amplified random error on the calculated instantaneous value of h and a bias in the average value, h̄. ‘Experimental results’, simulated using Monte Carlo methods, are in reasonable agreement with the model, and it is shown that improved estimates of the heat-transfer coefficient can be obtained by using smoothing curves to minimize the effects of noise on measured temperatures. If the temperature measurements have a small positive bias, the theoretical model and ‘experimental results’ show that there can be a large negative bias in the calculated value of h. It is also shown that ‘missing thermocouples’ can be partially compensated for by using interpolation polynomials...
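
A hedged illustration of the amplification effect, using a lumped (low-Biot-number) cooling body in place of the paper's full numerical solution of Fourier's equation; all parameter values are assumptions for illustration.

```python
import numpy as np

mc_over_A = 500.0                  # (rho*c*V)/A in J/(m^2 K), assumed
h_true, T_inf, T0 = 50.0, 20.0, 100.0
t = np.linspace(0.0, 20.0, 101)
T_exact = T_inf + (T0 - T_inf) * np.exp(-h_true / mc_over_A * t)

rng = np.random.default_rng(0)
avg_h = []
for _ in range(1000):              # Monte Carlo over the measurement error
    T = T_exact + rng.normal(0.0, 0.1, t.size)        # 0.1 K random error
    h = -mc_over_A * np.gradient(T, t) / (T - T_inf)  # instantaneous estimate
    avg_h.append(h.mean())

print(np.mean(avg_h) - h_true, np.std(avg_h))  # bias and scatter of average h
```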

Journal ArticleDOI
TL;DR: In this article, the authors used a well-type Ge(Li) detector for instrumental neutron activation analysis, and found that the intensity ratios between high energy peaks and the sum peaks derived from them are influenced by selfabsorption of the low energy photons in the sample.

Journal ArticleDOI
TL;DR: In this paper, the authors make repeated use of the Routh test to avoid the computation of zeros, providing a useful method for computing the abscissa of stability of a polynomial.
Abstract: The numerical computation of the abscissa of stability of a polynomial, defined as the largest of the real parts of the zeros of the polynomial, is a problem that occurs repeatedly in computer-aided design of dynamical and control systems. The approach developed in this paper, by making repeated use of the Routh test, avoids the computation of zeros and provides a useful method.
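
A hedged sketch of the overall approach (a Routh test combined with bisection on a shift sigma; the bracketing interval and tolerance are assumptions, and the paper's own iteration may differ): p(s + sigma) passes the Routh test exactly when sigma exceeds the abscissa of stability, so the abscissa is found without ever computing a zero.

```python
import numpy as np

def shift(coeffs, sigma):
    """Coefficients (highest degree first) of p(s + sigma), via Horner."""
    q = np.array([coeffs[0]], dtype=float)
    for c in coeffs[1:]:
        q = np.polyadd(np.polymul(q, [1.0, sigma]), [c])
    return q

def routh_first_column(coeffs):
    """First column of the Routh array (simple version, no degenerate handling)."""
    c = [x / coeffs[0] for x in coeffs]
    r0, r1 = c[0::2], c[1::2]
    col = [r0[0]]
    while r1:
        col.append(r1[0])
        if r1[0] == 0.0:
            break                           # degenerate row: stop early
        nxt = []
        for i in range(len(r1)):
            a = r0[i + 1] if i + 1 < len(r0) else 0.0
            b = r1[i + 1] if i + 1 < len(r1) else 0.0
            nxt.append((r1[0] * a - r0[0] * b) / r1[0])
        while nxt and nxt[-1] == 0.0:
            nxt.pop()                       # drop trailing zeros
        r0, r1 = r1, nxt
    return col

def routh_stable(coeffs):
    """True iff every zero of the polynomial satisfies Re(s) < 0."""
    col = routh_first_column(coeffs)
    # Stable iff the array completes and the first column has no sign change.
    return len(col) == len(coeffs) and all(x * col[0] > 0 for x in col)

p = [1.0, 4.0, 1.0, -6.0]          # (s - 1)(s + 2)(s + 3): abscissa is 1
lo, hi = -10.0, 10.0               # assumed bracket for the abscissa
while hi - lo > 1e-9:
    mid = 0.5 * (lo + hi)
    if routh_stable(shift(p, mid)):
        hi = mid                   # p(s + mid) stable, so abscissa < mid
    else:
        lo = mid
print(0.5 * (lo + hi))             # ~1.0, found without computing any zeros
```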