
Showing papers on "Computation published in 1977"


Journal ArticleDOI
TL;DR: A method in which subspace iteration is utilized in conjunction with a frequency-dependent mass and stiffness formulation is described and applied to framed structures.

113 citations



Journal ArticleDOI
TL;DR: A parallel computational method is described that provides a simple and fast algorithm for evaluating polynomials, certain rational functions, and arithmetic expressions, for solving a class of systems of linear equations, and for performing the basic arithmetic operations in a fixed-point number representation system.
Abstract: A parallel computational method, amenable to efficient hardware-level implementation, is described. It provides a simple and fast algorithm for evaluating polynomials, certain rational functions, and arithmetic expressions, for solving a class of systems of linear equations, and for performing the basic arithmetic operations in a fixed-point number representation system. The time required to perform the computation is of the order of m carry-free addition operations, m being the number of digits in the solution. In particular, the method is suitable for fast evaluation of mathematical functions in hardware.
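The "carry-free addition" central to the claimed running time is characteristic of redundant number arithmetic. As a loose illustration only, here is a minimal Python sketch of carry-save addition, one standard carry-free scheme (not necessarily the paper's representation): each step is digit-parallel with carries saved rather than propagated, so its cost is independent of word length.

```python
def carry_save_add(s, c, x, bits=32):
    """One carry-save step: (s, c) + x -> (s', c') with no carry propagation.
    Each bit position is computed independently, so the step is carry-free."""
    mask = (1 << bits) - 1
    s_new = (s ^ c ^ x) & mask                            # bitwise sum
    c_new = (((s & c) | (s & x) | (c & x)) << 1) & mask   # carries, deferred
    return s_new, c_new

def csa_sum(operands, bits=32):
    """Summing m operands costs m carry-free steps plus one final real add."""
    s, c = 0, 0
    for x in operands:
        s, c = carry_save_add(s, c, x, bits)
    return (s + c) & ((1 << bits) - 1)   # single conventional addition

assert csa_sum([3, 5, 7, 11]) == 26
```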

97 citations


Book
01 Jan 1977
TL;DR: 43. Mathematical Functions and Their Approximations by Yudell L. Luke, 1976-06 44. Algorithms for the Computation of Mathematical Functions by Yudell L. Luke, 1977 45. Integrals of Bessel Functions by Yudell L. Luke, 1962 46.
Abstract:
43. Mathematical Functions and Their Approximations by Yudell L. Luke, 1976-06
44. Algorithms for the Computation of Mathematical Functions by Yudell L. Luke, 1977
45. Integrals of Bessel Functions by Yudell L. Luke, 1962
46. The Special Functions and Their Approximations, Two Volumes by Yudell L. Luke, 1969
47. The Special Functions and Their Approximations, Vol. 2 by Yudell L. Luke, 1969
48. The Special Functions and Their Approximations (Mathematics in Science and Engineering) by Yudell L. Luke, 1969
49. Cumulative Index to Mathematics of Computation by Yudell L. Luke, Jet Wimp, et al., 1972
50. Algorithms for the Computation of Mathematical Functions by Yudell L. Luke, 1977
51. Special Functions & Their Approximations, 2 Volumes by Yudell L. Luke, 1969
52. On the Approximate Inversion of Some Laplace Transforms by Yudell L. Luke, 1961
53. Cumulative Index to Mathematics of Computation, Vols. 1-23, 1943-1969 by Yudell L. Luke, Jet Wimp, and Wyman Fair, 1972
54. On Generating Bessel Functions by Use of the Backward Recurrence Formula by Yudell L. Luke, 1972
55. Cumulative Index to Mathematics of Computation 1943-1969 by Yudell L. Luke, 1972-12
56.

85 citations


Journal ArticleDOI
TL;DR: To tackle process design problems of this complexity, several chemical process and petroleum companies, consulting firms, and academic institutions have developed general integrated digital computer programs for steady-state and transient simulation, frequently provided with process-oriented input description, automatic sequencing of computation, a built-in unified physical data bank, provisions for convergence, and standard unit-operation subroutines.

67 citations


Journal ArticleDOI
TL;DR: The authors prove upper bounds on the speed-ups achievable by parallel computers such as C.mmp and ILLIAC IV for a particular problem, the solution of first-order linear recurrences.
Abstract: The concept of computers such as C.mmp and ILLIAC IV is to achieve computational speed-up by performing several operations simultaneously with parallel processors. This type of computer organization is referred to as a parallel computer. In this paper, the authors prove upper bounds on speed-ups achievable by parallel computers for a particular problem, the solution of first-order linear recurrences. The authors consider this problem because it is important in practice and also because it is simply stated, so that one might obtain some insight into the nature of parallel computation by studying it.
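For concreteness, the problem is the recurrence x[i] = a[i]*x[i-1] + b[i]. A minimal Python sketch (mine, not the paper's) shows the structure a parallel machine can exploit: each step is an affine map, and composing affine maps is associative, so the prefix compositions can in principle be combined in O(log n) parallel steps; the scan below is written serially for clarity.

```python
def compose(f, g):
    """Apply affine map f = (a, b), then g: x -> g.a*(f.a*x + f.b) + g.b.
    This operation is associative, which is what a parallel scan exploits."""
    return (g[0] * f[0], g[0] * f[1] + g[1])

def solve_recurrence(a, b, x0):
    """All x[i] for x[i] = a[i]*x[i-1] + b[i], via prefix compositions.
    Written as a serial scan; a parallel machine can combine the prefix
    maps in O(log n) steps because `compose` is associative."""
    out, acc = [], (1.0, 0.0)            # identity affine map
    for step in zip(a, b):
        acc = compose(acc, step)
        out.append(acc[0] * x0 + acc[1])
    return out

print(solve_recurrence([2.0, 2.0, 2.0], [1.0, 1.0, 1.0], 0.0))  # [1.0, 3.0, 7.0]
```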

57 citations


Journal ArticleDOI
TL;DR: Two geometric approaches to solving sequencing problems are described and tested, and the largest angle method can be used to generate tours without any computation, giving the practitioner an effective "back-of-the-envelope method" of finding solutions.
Abstract: Two geometric approaches to solving sequencing problems are described and tested. Both methods have yielded optimal or near-optimal solutions in problems where the optimal is known. Further, these methods have the advantage of being programmable, with execution in relatively short computation times, even for large problems. (The largest tested was composed of 318 cities.) One of these methods (the largest angle method) can be used to generate tours without any computation, if the number of cities is less than 25 or so, giving the practitioner an effective “back-of-the-envelope method” of finding solutions. The results include applications to problems previously reported in the literature as well as several original large problems. The tours, their costs and computation times are presented.
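The abstract gives only the name and the geometric flavor of the largest angle method; the sketch below is a reconstruction under common assumptions (start from the convex hull, then repeatedly insert the city subtending the largest angle at some current tour edge), so its details may well differ from the authors' exact procedure.

```python
import math

def convex_hull(pts):
    """Monotone-chain convex hull; points are (x, y) tuples."""
    pts = sorted(set(pts))
    if len(pts) <= 2:
        return list(pts)
    def cross(o, a, b):
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def angle_at(p, a, b):
    """Angle at p subtended by segment a-b (points assumed distinct)."""
    v1 = (a[0]-p[0], a[1]-p[1])
    v2 = (b[0]-p[0], b[1]-p[1])
    d = math.hypot(*v1) * math.hypot(*v2)
    return math.acos(max(-1.0, min(1.0, (v1[0]*v2[0] + v1[1]*v2[1]) / d)))

def largest_angle_tour(cities):
    """Start from the convex hull, then repeatedly insert the city that
    subtends the largest angle at some current tour edge."""
    tour = convex_hull(cities)
    rest = [c for c in cities if c not in tour]
    while rest:
        best = None
        for c in rest:
            for i in range(len(tour)):
                ang = angle_at(c, tour[i], tour[(i + 1) % len(tour)])
                if best is None or ang > best[0]:
                    best = (ang, c, i)
        _, c, i = best
        tour.insert(i + 1, c)
        rest.remove(c)
    return tour

print(largest_angle_tour([(0, 0), (3, 0), (3, 2), (0, 2), (1, 1)]))
```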

47 citations


Journal ArticleDOI
01 Sep 1977-Talanta
TL;DR: It was found that the choice of algorithm determines the efficiency of program execution, and in all cases studied the problems were most rapidly solved by a modified Newton-Raphson approach that employs Choleski factoring.
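The TL;DR names the ingredients but not the code; a Newton-Raphson iteration whose linear solve uses Cholesky (Choleski) factoring, applicable when the Jacobian/Hessian is symmetric positive definite, might look like the following minimal sketch. The function names and toy problem are assumptions, not from the paper.

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def newton_cholesky(grad, hess, x0, tol=1e-10, max_iter=50):
    """Newton-Raphson where each linear solve uses a Cholesky factorization;
    requires the Jacobian/Hessian to be symmetric positive definite."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        factor = cho_factor(hess(x))   # fails loudly if the matrix is not SPD
        x = x - cho_solve(factor, g)   # Newton step via two triangular solves
    return x

# Toy problem (an assumption): solve A @ x = b posed as grad(0.5*x@A@x - b@x) = 0
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
print(newton_cholesky(lambda x: A @ x - b, lambda x: A, np.zeros(2)))
```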

41 citations


Journal ArticleDOI
TL;DR: In this article, the computation of translational velocity and position relative to the Earth by the processor of a strapdown inertial navigation system is divided into three rate levels, with three variants presented, each based on a different level of simplifying assumptions.
Abstract: The problem of computing the translational velocity and position relative to the Earth, which has to be solved by the processor of a strapdown inertial navigation system, is discussed. Several approaches are briefly examined, with consideration given to the form in which the sensor data are generated and to the computational burden involved in each approach. A computational scheme is finally selected in which the computation is divided into three rate levels. The differential equations of this scheme are developed and the assumptions on which the development is founded are stated. Three variants of the basic scheme are presented, each based on a different level of simplifying assumptions. The main purpose of this work is to develop the differential equations to be solved at each stage of the computation, rather than the numerical implementation of the solution. This work supplies the theoretical background for some of the numerical methods which are now being used.

38 citations


Journal ArticleDOI
01 Aug 1977
TL;DR: The use of continuation methods in the computer-aided analysis of electronic circuits is surveyed and applications of the concept to the location of multiple solutions to nonlinear equations, the computation of input-output characteristics for nonlinear networks, large-change sensitivity analysis, and the computation of multivariable Nyquist plots are reviewed.
Abstract: The use of continuation methods in the computer-aided analysis of electronic circuits is surveyed. Such methods are especially suitable when one desires to compute the solutions to a family of circuit analysis problems as a function of a continuous parameter. Applications of the concept to the location of multiple solutions to nonlinear equations, the computation of input-output characteristics for nonlinear networks, large-change sensitivity analysis, and the computation of multivariable Nyquist plots are reviewed.
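As a generic illustration of the idea (not any particular surveyed circuit program), natural-parameter continuation solves f(x, lam) = 0 for a sweep of lam values, using each converged solution as the Newton starting point for the next; the names and the scalar example below are assumptions.

```python
import numpy as np

def continuation(f, jac, x, lambdas, newton_iters=20, tol=1e-12):
    """Natural-parameter continuation: march lam through `lambdas`, using the
    previous converged solution as the Newton starting guess at each step."""
    branch = []
    for lam in lambdas:
        for _ in range(newton_iters):          # Newton corrector at fixed lam
            r = f(x, lam)
            if np.linalg.norm(r) < tol:
                break
            x = x - np.linalg.solve(jac(x, lam), r)
        branch.append((lam, x.copy()))
    return branch

# Example (an assumption): follow a root of x^3 - x - lam = 0 from lam = 0
f = lambda x, lam: np.array([x[0]**3 - x[0] - lam])
jac = lambda x, lam: np.array([[3.0 * x[0]**2 - 1.0]])
path = continuation(f, jac, np.array([1.2]), np.linspace(0.0, 2.0, 21))
print(path[-1])   # the branch solution at lam = 2.0
```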

37 citations


Book ChapterDOI
01 Jan 1977
TL;DR: In this article, the systematic formulation of the equations of motion for three-dimensional mechanical systems (mechanisms) is considered in view of sparsity requirements for large simulation problems.
Abstract: In this paper, the systematic formulation of the equations of motion for three-dimensional mechanical systems (mechanisms) is considered in view of sparsity requirements for large simulation problems. Topological approaches of previous authors are extended to include the practical case of joint constraints, and the resultant formulation is also related to Lagrange multiplier methods. Finally, a simulation example of a three-dimensional landing-system model, several times larger than previous state-of-the-art problems, is studied, and the growth rate of computation time with problem size is noted. (Author)

Journal ArticleDOI
TL;DR: In this paper, an iterative algorithm for solving nonlinear inverse problems in remote sensing of density profiles of a simple ocean model by using acoustic pulses is developed, where the adiabatic sound velocity is assumed to be proportional to the inverse square root of the density.

Journal ArticleDOI
TL;DR: In order to use the method of (least squares) collocation to compute an approximation to the anomalous potential of the Earth (T), a reproducing kernel Hilbert space must be specified whose dual contains the (linear) functionals associated with the observations.
Abstract: In order to use the method of (least squares) collocation for the computation of an approximation to the anomalous potential of the Earth (T), it is necessary to specify a reproducing kernel Hilbert space the dual of which contains the (linear) functionals associated with the observations.
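To make the setting concrete: once a covariance kernel K is chosen so that the observation functionals are admissible, least-squares collocation predicts the signal at new points by s_hat = K(new, obs) (K(obs, obs) + sigma^2 I)^{-1} y. A minimal numerical sketch follows; the Gaussian kernel and 1-D points are purely illustrative assumptions, since the paper's concern is the choice of the Hilbert space itself.

```python
import numpy as np

def collocation_predict(K, obs_pts, new_pts, y, noise_var=0.0):
    """Least-squares collocation under an assumed covariance kernel K:
    s_hat = K(new, obs) @ (K(obs, obs) + noise_var*I)^{-1} @ y."""
    C = K(obs_pts, obs_pts) + noise_var * np.eye(len(obs_pts))
    return K(new_pts, obs_pts) @ np.linalg.solve(C, y)

# Illustrative Gaussian kernel and 1-D points (assumptions, not from the paper)
kern = lambda A, B: np.exp(-(A[:, None] - B[None, :])**2)
xs = np.array([0.0, 1.0, 2.0])
ys = np.sin(xs)
print(collocation_predict(kern, xs, np.array([0.5, 1.5]), ys, noise_var=1e-6))
```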

Journal ArticleDOI
TL;DR: A two level costate prediction algorithm is developed for the optimisation of non-linear discrete dynamical systems and is proved to converge under fairly mild conditions.


Journal ArticleDOI
TL;DR: The object of this study is the computation, by Monte Carlo techniques, of microdosimetric functions for sites which are too small to permit experimental determination of the distributions by Rossi counters.
Abstract: The object of this study is the computation of microdosimetric functions for sites which are too small to permit experimental determination of the distributions by Rossi counters. The calculations are performed on simulated tracks generated by Monte Carlo techniques. The first part of the article deals with the computational procedure. The second part presents numerical results for protons of energies 0.5, 5, and 20 MeV and for site diameters of 5, 10, and 100 nm.

Book ChapterDOI
W.S. Brown1
01 Jan 1977
TL;DR: A new model of floating-point computation, intended as a basis for efficient portable mathematical software, is presented, which supports conventional error analysis with only minor modifications.
Abstract: This paper presents a new model of floating-point computation, intended as a basis for efficient portable mathematical software. The model involves only simple familiar concepts, expressed in a small set of environment parameters. Using these, both a program and its documentation can tailor themselves to the host computer. Our main focus is on stating and proving machine-independent properties of numerical programs. With this in mind, we present fundamental axioms and a few theorems for arithmetic operations and arithmetic comparisons. Our main conclusion is that the model supports conventional error analysis with only minor modifications. To motivate the formal axioms and theorems, and describe their use, the paper includes numerous examples of bizarre phenomena that are inherent in floating-point computation or arise from the anomalies of real computers. While the use of the model imposes a fairly strict programming discipline for the sake of portability, its rules are easy to remember, and for the most part independently meritorious.
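Brown's model characterizes the host arithmetic through a small set of environment parameters (base, precision, exponent range). As a rough modern analogue only (Python's runtime parameters, not the model itself), the snippet below prints similar quantities and demonstrates one of the "bizarre phenomena" the paper motivates: floating-point addition is not associative.

```python
import sys

# Environment parameters, in the spirit of the model's small parameter set
print(sys.float_info.radix)     # base of the floating-point system (2)
print(sys.float_info.mant_dig)  # precision: significand digits in that base
print(sys.float_info.epsilon)   # gap between 1.0 and the next larger float
print(sys.float_info.max, sys.float_info.min)  # overflow/underflow thresholds

# One "bizarre phenomenon": floating-point addition is not associative
a = 1.0
b = c = sys.float_info.epsilon / 2
print((a + b) + c == a + (b + c))  # False: the rounding order matters
```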

Proceedings ArticleDOI
01 May 1977
TL;DR: In this paper, a new method for the computation of the partial correlation coefficients from the autocorrelation sequence is introduced, which involves the calculation of the crosscorrelations between the inputs and the outputs of the successive models which are formed.
Abstract: This paper introduces a new method for the computation of the partial correlation coefficients from the autocorrelation sequence. Derived from Levinson's algorithm, it involves the calculation of the crosscorrelations between the inputs and the outputs of the successive models which are formed. Using these, a fixed-point implementation is found for real-time speech analysis on a 16-bit microprocessor. The method also gives an approximation of the impulse response of the system, which may be useful for identification with moving-average models.
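For reference, the standard Levinson-Durbin recursion below computes the reflection (partial correlation, PARCOR) coefficients from an autocorrelation sequence. This is the baseline, not the paper's new method, whose contribution is a reorganization via cross-correlations between successive model inputs and outputs suited to fixed-point hardware.

```python
def parcor_from_autocorr(r):
    """Standard Levinson-Durbin recursion: reflection (PARCOR) coefficients
    k[1..p] from autocorrelation values r[0..p]."""
    p = len(r) - 1
    a = [0.0] * (p + 1)      # prediction-error filter coefficients
    e = r[0]                 # prediction error energy
    ks = []
    for m in range(1, p + 1):
        acc = r[m] + sum(a[i] * r[m - i] for i in range(1, m))
        k = -acc / e
        ks.append(k)
        a_new = a[:]
        a_new[m] = k
        for i in range(1, m):
            a_new[i] = a[i] + k * a[m - i]
        a = a_new
        e *= 1.0 - k * k     # error shrinks as the model order grows
    return ks

print(parcor_from_autocorr([1.0, 0.5, 0.25, 0.125]))  # AR(1) data: [-0.5, 0, 0]
```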

01 Jan 1977
TL;DR: The purpose of this article is to examine the research developments in software for numerical computation, and to attempt to separate software research from numerical computation research, which is not easy as the two are intimately intertwined.
Abstract: The purpose of this article is to examine the research developments in software for numerical computation. Research and development of numerical methods is not intended to be discussed, for two reasons. First, a reasonable survey of the research in numerical methods would require a book. The COSERS report [Rice et al, 1977] on Numerical Computation does such a survey in about 100 printed pages, and even so the discussion of many important fields (never mind topics) is limited to a few paragraphs. Second, the present book is focused on software, and thus it is natural to attempt to separate software research from numerical computation research. This, of course, is not easy, as the two are intimately intertwined.

We want to define numerical computation rather precisely so as to distinguish it from business data processing, symbolic processing (such as compilers) and general utilities (such as file manipulation systems or job schedulers). We have the following definition: Numerical computation involves real numbers with procedures at a mathematical level of trigonometry, college algebra, linear algebra or higher. Some people use a somewhat narrower definition which restricts the term to computation in the physical sciences, and a few people even think of numerical computation as research and development computation (as opposed to production) in science.

There are two principal sources of the problems in numerical computation: mathematical models of the physical world and the optimization of models of the organizational world. The scope and range of the sources and the associated software is illustrated by the following list:

1. Simulation of the effects of multiple explosions. The software is a very complex program of perhaps 20,000 Fortran statements. It is specially tailored to this problem and may have taken several years to implement. The program requires all the memory and many hours of time on the largest and fastest computers.

2. Optimization of feed mixtures for a chicken farmer. This is standard software of modest length (500-2000 statements) even with an interface for a naive user. It might take substantial time to execute on a small computer.

3. Analysis of the structural vibration of a vehicle. The software is similar to that of example 1. One might also use NASTRAN (see II.G.3) with only a few months for an implementation. More computer time and memory would be used by this approach.

4. Simple linear regression on demographic data (e.g. age or income). This is …

Journal ArticleDOI
TL;DR: A matrix containing all information sufficient for the computation of MINQUE variance component estimates (Rao 1973) in random and mixed models is exhibited and leads to an efficient computer algorithm which saves both computer time and memory usage.
Abstract: A matrix containing all information sufficient for the computation of MINQUE variance component estimates (Rao 1973) in random and mixed models is exhibited. The use of this matrix leads to an efficient computer algorithm which saves both computer time and memory usage.

Book ChapterDOI
01 Jan 1977
TL;DR: In this paper, it is argued that a direct assault on the computation of turbulent flows via the time-dependent Navier-Stokes equations is impractical, because turbulence introduces motions on scales far smaller than any practical grid spacing and the quantities of engineering interest are time or ensemble averages of randomly fluctuating fields.
Abstract: Problems in which turbulent flow fields dominate form a major portion of engineering fluid mechanics and heat transfer work. In principle, the calculation of these flows involves the solution of the time-dependent Navier-Stokes equations. But these equations cannot be solved without recourse to numerical methods, which must divide the flow field into a finite number of calculation points. The fundamental problem in the computation of turbulent flows then becomes the fact that turbulence introduces motions on a scale far smaller than the distances between the calculation points on the smallest practical numerical solution grid. Indeed, even if it were possible to compute the velocity field in a turbulent flow down to the smallest scale of motion of interest, another problem would be encountered. Because the velocity field in a turbulent flow fluctuates randomly, the variables of engineering interest in the flow are in general time or ensemble averages of the fluctuating quantities. In order to predict these averages, it would be necessary to repeat a detailed computation a great number of times, each with a slightly different initial condition, and ensemble-average the results. For these reasons, a direct assault on the problem of the computation of turbulent flows is impractical.

Journal ArticleDOI
M. Tasto1
TL;DR: The ‘PEAC’ structure and its application to various image-processing methods such as point operations, neighbourhood operations, guided boundary detection using prior knowledge, and object reconstruction from projections are discussed.
Abstract: A major drawback of digital computer image processing is the large computation time required. On the other hand, its flexibility, programmability and computational accuracy make digital processing desirable. Advances in LSI circuit technology have now made it possible to greatly increase the computational power of image processing systems by combining many ‘micro computers’ or processing elements into array processors. We discuss several concepts for connecting such small computers and integrating them into a system, and then concentrate on the ‘PEAC’ structure which was closely investigated at PFH. Its application to various image-processing methods such as point operations, neighbourhood operations, guided boundary detection using prior knowledge, and object reconstruction from projections is discussed. In almost all applications a speed-up ratio of k can be achieved, where k is the number of processing elements.

Journal ArticleDOI
TL;DR: The procedure presented here is a transform domain approach that is distinct, to the knowledge of the author, from known identification techniques in which a best fit is made to an assumed mathematical model of the system.
Abstract: Algorithms for system identification and the computation of its mathematical model through a “fast” Z transformation of its sampled response in the presence of noise are introduced. It is shown that by iteratively applying constant-damping and constant-frequency contour finite Z transforms, a system's mathematical model in the presence of noise can be efficiently evaluated. On-line tracking of the poles and zeros of relatively rapidly time-variant systems such as a space shuttle or a jet aircraft are possible applications. An organization for a high-speed machine including a fast Fourier transform processor for on-line identification of relatively rapidly time-variant systems is suggested. Applications of the described algorithms include enhancement of poles in spectral analysis of signals, representation of signals by poles and zeros for signal classification, coding and recognition, filter synthesis, adaptive filtering, and identification of parameters in curve fitting problems, in addition to system identification in the presence of noise. The procedure presented here is a transform domain approach that is distinct, to the knowledge of the author, from known identification techniques in which a best fit is made to an assumed mathematical model of the system. In addition to the smoothing obtained here through the computation of spectra in the Z plane of a time series including redundancy, no a priori knowledge of the order of the system needs to be assumed.
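A constant-damping contour evaluation reduces to an FFT after exponential weighting: on the contour z = r*exp(j*omega), X(z) = sum x[n] r^(-n) exp(-j*omega*n), i.e. the DFT of the damped sequence. A minimal sketch of that single building block (not the full identification procedure, whose details the abstract does not give):

```python
import numpy as np

def contour_z_transform(x, r):
    """Finite Z transform of x[n] evaluated on the constant-damping contour
    z = r * exp(j*2*pi*k/N): damp the sequence by r**(-n), then FFT."""
    n = np.arange(len(x))
    return np.fft.fft(np.asarray(x, dtype=float) * float(r) ** -n)

# Example: damped oscillation whose poles sit at radius 0.9
n = np.arange(256)
x = 0.9 ** n * np.cos(0.3 * n)
X = contour_z_transform(x, 0.9)   # a contour at the pole radius sharpens peaks
```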

Journal ArticleDOI
TL;DR: A measurement procedure was simulated to determine the effects of sample rate, signal noise levels, and numerical filters on the digital calculations and indicated that a trapezoidal rule integration at a sample rate of 30 Hz with no numerical filter can provide satisfactory data.
Abstract: Digital computers are often used in work physiology to find pulmonary gas transfer on a breath-by-breath basis. A measurement procedure was simulated to determine the effects of sample rate, signal noise levels, and numerical filters on the digital calculations. The results indicated that a trapezoidal rule integration at a sample rate of 30 Hz with no numerical filter can provide satisfactory data. The flow rate and concentration signals should not be out of phase by more than 25 ms. The simple computational procedures give accurate results while minimizing the memory requirements and computation time.
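The computational core is as plain as the abstract suggests; a sketch of breath-by-breath transfer volume by trapezoidal-rule integration of flow times gas fraction at 30 Hz follows, with hypothetical signal names and a simulated breath standing in for real data.

```python
import numpy as np

def breath_gas_transfer(flow, fraction, fs=30.0):
    """Gas volume transferred in one breath: trapezoidal-rule integral of
    flow * gas fraction sampled at fs Hz (hypothetical signal names; the two
    signals are assumed aligned to within the 25 ms tolerance noted above)."""
    return np.trapz(flow * fraction, dx=1.0 / fs)

# One simulated 2-second breath at 30 Hz
t = np.arange(0.0, 2.0, 1.0 / 30.0)
flow = np.maximum(np.sin(np.pi * t / 2.0), 0.0)   # inspiratory flow, L/s
o2_fraction = np.full_like(t, 0.21)               # inspired O2 fraction
print(breath_gas_transfer(flow, o2_fraction))     # litres of O2 inhaled
```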

Journal ArticleDOI
TL;DR: In this paper, a matrix power series-based algorithm for tracking sensitivity is proposed, which requires less computation than previous methods, but the allowable range of the global parameter is restricted.
Abstract: A new algorithm for tracking sensitivity, based on a matrix power series, is given. It requires less computation than previous methods, but the allowable range of the global parameter is restricted.
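The abstract gives no formulas, so the following is only a generic illustration of why a matrix-power-series algorithm is cheap but range-limited: for x(p) solving (I - pA)x = b, the Neumann series x(p) = sum_k p^k A^k b costs one matrix-vector product per term but converges only while the spectral radius of pA is below 1, mirroring the restricted parameter range noted above.

```python
import numpy as np

def power_series_solution(A, b, p, terms=30):
    """x(p) = sum_{k>=0} p^k A^k b, the Neumann series for (I - p*A) x = b.
    One matrix-vector product per term; valid only while rho(p*A) < 1."""
    x = np.zeros_like(b, dtype=float)
    term = b.astype(float).copy()
    for _ in range(terms):
        x += term
        term = p * (A @ term)    # next term of the power series
    return x

A = np.array([[0.2, 0.1], [0.0, 0.3]])
b = np.array([1.0, 1.0])
x = power_series_solution(A, b, 0.5)
# Check against a direct solve of (I - 0.5*A) x = b
assert np.allclose(x, np.linalg.solve(np.eye(2) - 0.5 * A, b))
```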

Journal ArticleDOI
TL;DR: The technique is to test for surface proximity in a well defined manner, ‘well spaced’ surfaces requiring a simple measure of distance to determine priority, and ‘closely spaced’ surfaces being modified until they are ‘well spaced’.
Abstract: Most hidden surface algorithms require a considerable amount of computation for all but the simplest images. This prevents their use in real time systems where new frames may be calculated at a rate of 25 per second. The paper presents an approach suitable for fixed models, such as those used in flight simulators, where most of the time consuming computation may be performed when the model is first created. The technique is to test for surface proximity in a well defined manner, ‘well spaced’ surfaces requiring a simple measure of distance to determine priority, and ‘closely spaced’ surfaces being modified until they are ‘well spaced’. This modification is only in the representation of the surface and does not affect its final appearance in the picture. The work to be described is part of a project financed by the Science Research Council, to which grateful acknowledgement is made.

Journal ArticleDOI
TL;DR: In this paper, a lumping technique has been applied to a dynamic distributed parameter model, in which the radial differential operators have been approximated and effectively replaced by simple algebraic expressions by redefining the dependent variable.
Abstract: The solution of dynamic distributed parameter models describing complex highly exothermic reactions in fixed bed reactors usually requires excessive computation times which are too demanding for detailed explorations. A lumping technique has been applied to a dynamic distributed parameter model, in which the radial differential operators have been approximated and effectively replaced by simple algebraic expressions by redefining the dependent variable. The resulting reduced model retains the accuracy of description and the necessary structure which is implicit in the full set of equations. This has been achieved by relating a suitably defined set of pseudoparameters to the chemical and physical processes taking place. The direct benefit of this is that the computation time and storage requirements have been reduced to a more realistic level for design purposes.