
Showing papers on "Computation published in 1978"


Book
01 Jan 1978

488 citations


01 Nov 1978
TL;DR: In this article, a set of subroutines uses rational approximations to compute Bessel functions of integral order, and empirical formulae have been developed to express the limiting boundaries of the modes of computation.
Abstract: Documentation is given for some subroutines which compute potentials and other functions. A set of subroutines uses rational approximations to compute Bessel functions of integral order. One subroutine uses the Debye approximation for the efficient computation of Bessel functions of complex argument and complex order. Empirical formulae have been developed to express the limiting boundaries of the modes of computation.

155 citations


Journal ArticleDOI
TL;DR: In this article, a simple and efficient computation for the bivariate normal integral, based on direct computation of the double integral by the Gauss quadrature method, is presented.
Abstract: This paper presents a simple and efficient computation for the bivariate normal integral based on direct computation of the double integral by the Gauss quadrature method.
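A minimal sketch of the direct double-integral approach described above, using Gauss-Legendre nodes and a finite lower truncation point; the node count and truncation are illustrative choices, not necessarily those of the paper:

```python
import numpy as np

def bvn_cdf(h, k, rho, n=40, lo=-8.0):
    """P(X <= h, Y <= k) for a standard bivariate normal with correlation rho,
    by Gauss-Legendre quadrature of the density over [lo, h] x [lo, k]."""
    x, w = np.polynomial.legendre.leggauss(n)
    xh, wh = 0.5 * (h - lo) * x + 0.5 * (h + lo), 0.5 * (h - lo) * w
    xk, wk = 0.5 * (k - lo) * x + 0.5 * (k + lo), 0.5 * (k - lo) * w
    X, Y = np.meshgrid(xh, xk, indexing="ij")
    dens = np.exp(-(X**2 - 2*rho*X*Y + Y**2) / (2*(1 - rho**2))) \
           / (2*np.pi*np.sqrt(1 - rho**2))
    return float(wh @ dens @ wk)

print(bvn_cdf(0.0, 0.0, 0.0))   # independent case: should be close to 0.25
```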

104 citations


Journal ArticleDOI
01 May 1978
TL;DR: In this paper, a technique for deriving reconstruction algorithms for arbitrary ray-sampling schemes is presented; the contribution of a ray sum to a point in the reconstruction depends essentially on the negative inverse square of the perpendicular distance from the point to the ray, weighted by the ray-sampling density.
Abstract: Methods for calculating the distribution of absorption densities in a cross section through an object from density integrals along rays in the plane of the cross section are well-known, but are restricted to particular geometries of data collection. So-called convolutional backprojection-summation methods, now used for parallel-ray data, have recently been extended to special cases of the fan-beam reconstruction problem by the addition of pre- and post-multiplication steps. In this paper, a technique for deriving reconstruction algorithms for arbitrary ray-sampling schemes is presented; the resulting algorithms entail the use of a general linear operator, but require little more computation than the convolutional methods, which represent special cases. The key to the derivation is the observation that the contribution of a particular ray sum to a particular point in the reconstruction essentially depends on the negative inverse square of the perpendicular distance from the point to the ray, and that this contribution has to be weighted by the ray-sampling density. The remaining task is the efficient arrangement of this computation, so that the contribution of each ray sum to each point in the reconstruction does not have to be calculated explicitly. The exposition of the new method is informal in order to facilitate the application of this technique to various scanning geometries. The frequency domain is not used, since it is inappropriate for the space-variant operators encountered in the general case. The technique is illustrated by the derivation of an algorithm for parallel-ray sampling with uneven spacing between rays and uneven spacing between projection angles. A reconstruction is shown which attains high spatial resolution in the central region of an object by sampling central rays more finely than those passing through outer portions of the object.

101 citations


Journal ArticleDOI
TL;DR: In this article, a class of solutions to the linearized shallow water equations is presented, which involve two spatial dimensions plus time, and special emphasis is placed on the dynamic steady state.
Abstract: A class of solutions to the linearized shallow water equations is presented. The solutions involve two spatial dimensions plus time, and special emphasis is placed on the dynamic steady state. The results are intended for use in testing and verification of numerical models. The class of problems treated allows an independent assessment of the effects of wind stress, variable bathymetry, frictional dissipation, and nonrectangular boundaries on model performance.

90 citations


Book
01 Jan 1978

80 citations


Journal ArticleDOI
TL;DR: In this article, a general computational procedure is presented that is based upon a variational approach involving the assumption of constant source strength over each surface element, followed by an analysis of the discretization error for a spherical body that is then used to develop a hierarchy of computational schemes.
Abstract: Computational techniques for the treatment of fluid-structure interaction effects by discrete boundary integral methods are examined. Attention is focused on the computation of the added mass matrix by finite element methods for a structure submerged in an infinite, inviscid, incompressible fluid. A general computational procedure is presented that is based upon a variational approach involving the assumption of constant source strength over each surface element. This is followed by an analysis of the discretization error for a spherical body that is then used to develop a hierarchy of computational schemes. These schemes are then evaluated numerically in terms of ‘fluid boundary modes’ for a submerged spherical surface. One scheme has been found to be surprisingly accurate in relation to its computational demands.

63 citations


Journal ArticleDOI
TL;DR: It is shown that Strassen's algorithm for the computation of the product of 2 × 2-matrices is essentially unique, and the question to what extent elements of the trivial algorithm for 2 ×2-matrix multiplication can be used in an optimal one is answered.
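For reference, the 7-multiplication scheme whose essential uniqueness the paper establishes can be written out directly (a standard statement of Strassen's 2x2 algorithm, not code from the paper):

```python
import numpy as np

def strassen_2x2(A, B):
    """Product of two 2x2 matrices using Strassen's 7 multiplications,
    instead of the 8 multiplications of the trivial algorithm."""
    a, b, c, d = A[0, 0], A[0, 1], A[1, 0], A[1, 1]
    e, f, g, h = B[0, 0], B[0, 1], B[1, 0], B[1, 1]
    m1 = (a + d) * (e + h)
    m2 = (c + d) * e
    m3 = a * (f - h)
    m4 = d * (g - e)
    m5 = (a + b) * h
    m6 = (c - a) * (e + f)
    m7 = (b - d) * (g + h)
    return np.array([[m1 + m4 - m5 + m7, m3 + m5],
                     [m2 + m4, m1 - m2 + m3 + m6]])

A, B = np.random.rand(2, 2), np.random.rand(2, 2)
assert np.allclose(strassen_2x2(A, B), A @ B)
```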

57 citations


Proceedings ArticleDOI
16 Oct 1978
TL;DR: A lower bound on the interprocessor information transfer required for computing a function in a distributed network configuration is derived in terms of the function's derivatives, and it is used to exhibit functions whose computation requires a great deal of interprocess communication.
Abstract: We derive a lower bound on the interprocessor information transfer required for computing a function in a distributed network configuration. The bound is expressed in terms of the function's derivatives, and we use it to exhibit functions whose computation requires a great deal of interprocess communication. As a sample application, we give lower bounds on information transfer in the distributed computation of some typical matrix operations. Traditional measures of computational complexity, such as the number of primitive operations or memory cells required to compute functions, do not form an adequate framework for assessing the complexity of computations carried out in distributed networks. Even in the relatively straightforward situation of memoryless processors arranged in highly structured configurations, Gentleman [4] has demonstrated that data movement, rather than arithmetic operations, can often be the significant factor in the performance of parallel computations. And for the more general kinds of distributed processing, involving arbitrary network configurations and distributed data bases, the situation is correspondingly more complex. This paper addresses the problem of measuring computational complexity in terms of the interprocess communication required when a computation is distributed among a number of processors. More precisely, we model the distributed computation of functions which depend upon large amounts of data by assuming that the data is partitioned into disjoint subsets, and that a processor is assigned to each subset. Each processor (which we can think of as a node in a computational network) computes some values based on its own "local" data, and transmits these values to other processors, which are able to use them in subsequent local computations. This "compute locally and share information" procedure is repeated over and over until finally some (predetermined) processor outputs the value of the desired function. In measuring the complexity of such computations we will be concerned, not with the individual local computations, but rather with the total information transfer, i.e., the total number of values which must be transmitted between processors. We derive a lower bound on the total information transfer required for computing a function in a distributed network. The bound is expressed in terms of the function's derivatives, and we use it to exhibit functions whose computation requires a great deal of interprocess communication. As a sample application, we give lower bounds on information transfer in the distributed computation of some typical matrix operations.
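The bound itself is analytical, but the "compute locally and share information" model, with cost measured as the number of transmitted values rather than local arithmetic, is easy to mimic. The sketch below only illustrates that cost measure on a trivial global-sum task; the processor class and counter are hypothetical constructs, not from the paper:

```python
import numpy as np

class Processor:
    """A node holding one disjoint block of the data."""
    def __init__(self, local_data):
        self.local = np.asarray(local_data, dtype=float)
        self.inbox = []

values_transferred = 0          # the complexity measure: values moved between nodes

def send(value, dest):
    """Transmitting one value to another processor costs one unit of transfer."""
    global values_transferred
    values_transferred += 1
    dest.inbox.append(value)

rng = np.random.default_rng(1)
data = rng.standard_normal(1000)
procs = [Processor(chunk) for chunk in np.array_split(data, 8)]

root = procs[0]
for p in procs[1:]:
    send(p.local.sum(), root)    # each node ships one locally computed partial sum
total = root.local.sum() + sum(root.inbox)

print(np.isclose(total, data.sum()), values_transferred)   # True, 7 values moved
```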

43 citations


Journal ArticleDOI
TL;DR: A high-precision, unconditionally stable algorithm for the computation of linear dynamic structural systems is presented; its amplification matrix preserves the banded form that results from the discretization in space, so less computer storage and fewer operations are needed.

41 citations


Journal ArticleDOI
TL;DR: In this article, the coexistence densities and pressure of the two phases of the inverse-twelfth-power soft-sphere model have been computed by direct simulation using a 1920 particle model with three-dimensional periodicity.

Journal ArticleDOI
TL;DR: In this paper, relativistic oscillator strengths for transitions in the principal, sharp and diffuse series of Cu(I), Ag(I) and Au(I) spectra were calculated by employing a semi-empirical method which includes exchange and core-polarization effects.
Abstract: Relativistic oscillator strengths have been calculated for transitions in the principal, sharp and diffuse series of Cu(I), Ag(I) and Au(I) spectra. The computations have been performed by employing a semiempirical method which includes exchange and core-polarization effects. A comparison is presented for the calculated f_ik values with experimental and other theoretical data. The influence of core-polarization effects on oscillator strengths is discussed.

Journal ArticleDOI
TL;DR: Two algorithms for computation of optimal feedback gains for output constrained regulators are considered: one consists of an iterative solution of Lyapunov equations, and the other includes the solution of a nonlinear equation in each iteration.
Abstract: Two algorithms for computation of optimal feedback gains for output constrained regulators are considered. One algorithm consists of an iterative solution of Lyapunov equations. It is shown that it does not always converge even for good initial values. The other algorithm includes a solution of a nonlinear equation in each iteration. This algorithm is superior to the first one for the examples considered.
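The abstract does not name the algorithms, so the sketch below shows a standard iterative Lyapunov-equation scheme of the kind described (a Levine-Athans-type iteration, which is an assumption on my part, not an identification of the paper's method). It needs a stabilising starting gain and, as the paper notes for this class of methods, convergence is not guaranteed:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, inv

def output_feedback_gain(A, B, C, Q, R, X0, iters=200, tol=1e-9):
    """Iterate two Lyapunov equations to find an output-feedback gain K for
    u = -K*C*x, minimising the quadratic cost averaged over initial states
    with covariance X0. Starts from K = 0, so A itself must be stable."""
    K = np.zeros((B.shape[1], C.shape[0]))
    for _ in range(iters):
        Ac = A - B @ K @ C
        # closed-loop cost matrix:       Ac' P + P Ac + Q + C'K'RKC = 0
        P = solve_continuous_lyapunov(Ac.T, -(Q + C.T @ K.T @ R @ K @ C))
        # closed-loop state covariance:  Ac S + S Ac' + X0 = 0
        S = solve_continuous_lyapunov(Ac, -X0)
        K_new = inv(R) @ B.T @ P @ S @ C.T @ inv(C @ S @ C.T)
        if np.linalg.norm(K_new - K) < tol:
            break
        K = K_new
    return K

# hypothetical second-order example system
A = np.array([[0.0, 1.0], [-2.0, -1.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
print(output_feedback_gain(A, B, C, Q=np.eye(2), R=np.eye(1), X0=np.eye(2)))
```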

Journal ArticleDOI
TL;DR: Although no general conclusion can be made on the effectiveness of the method, it appears that the method is at least comparable to that described in a recent paper.
Abstract: In this paper a new approach to the placement problem is introduced. The main idea is to take advantage of what one can do in linear placement in tackling the two-dimensional placement problem. The method consists of three distinct phases, namely: decomposition, linear placement, and iterative improvement. Each is clearly spelled out. Both constructive and iterative algorithms are developed. The complexity of computation is analyzed and the method has been tried with practical examples. Although no general conclusion can be made on the effectiveness of the method, it appears that the method is at least comparable to that described in a recent paper [1].
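As a loose illustration of the linear-placement and iterative-improvement phases (a generic pairwise-interchange sketch, not the constructive or iterative algorithms developed in the paper), a one-dimensional placement can be improved by greedy swaps that reduce the total net span:

```python
from itertools import combinations

def wirelength(order, nets):
    """Total wirelength of a linear placement: for each net, the span between
    the leftmost and rightmost slots occupied by its modules."""
    slot = {m: s for s, m in enumerate(order)}
    return sum(max(slot[m] for m in net) - min(slot[m] for m in net) for net in nets)

def improve(order, nets, passes=10):
    """Greedy pairwise-interchange improvement of a linear placement."""
    order = list(order)
    best = wirelength(order, nets)
    for _ in range(passes):
        improved = False
        for i, j in combinations(range(len(order)), 2):
            order[i], order[j] = order[j], order[i]
            cost = wirelength(order, nets)
            if cost < best:
                best, improved = cost, True
            else:
                order[i], order[j] = order[j], order[i]   # undo a non-improving swap
        if not improved:
            break
    return order, best

nets = [{0, 3}, {1, 2, 4}, {2, 5}, {0, 5}]
print(improve([0, 1, 2, 3, 4, 5], nets))
```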

Journal ArticleDOI
TL;DR: In this article, a numerical code comprising 36000 cards and 343 subroutines is built up to investigate the interconnected fields of velocity, temperature, pressure and isotope concentration in a gas centrifuge.

Journal ArticleDOI
TL;DR: A new scheme using direct decimation is proposed which computes the narrow-band spectrum with good resolution while requiring only modest computation and storage.
Abstract: The calculation of the spectrum of a narrow-band signal which is embedded in a broad-band sequence usually requires substantial computation and storage if executed by performing an FFT or DFT's directly on the broad-band sequence. In this paper a new scheme using direct decimation is proposed which computes the narrow-band spectrum with good resolution while requiring only modest computation and storage. The performance of the proposed scheme is analyzed. Examples are presented which demonstrate the efficiency of this scheme when compared with the FFT, DFT, zoom transform, and complex modulation scheme.
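The specific direct-decimation scheme is not reproduced here; the sketch below only illustrates the general point that, when the band of interest lies at baseband, filtering and decimating by a factor D before the FFT preserves the absolute frequency resolution while shrinking the transform and storage by roughly D. The sample rate and signal are made up for the example:

```python
import numpy as np
from scipy import signal

fs = 100_000.0                                   # broad-band sample rate (hypothetical)
t = np.arange(2**18) / fs
rng = np.random.default_rng(0)
x = np.sin(2 * np.pi * 440.0 * t) + 0.5 * rng.standard_normal(t.size)

D = 32                                           # decimation factor
y = signal.decimate(x, D, ftype="fir")           # anti-alias lowpass + downsample

spec = np.abs(np.fft.rfft(y * np.hanning(y.size))) ** 2
freqs = np.fft.rfftfreq(y.size, d=D / fs)        # bin spacing fs/N, unchanged
print(freqs[np.argmax(spec)])                    # peak near 440 Hz from a small FFT
```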

Journal ArticleDOI
TL;DR: In this paper, a method for computing the optimum recovery at fixed length (Cp*) in two-dimensional diffusers with incompressible flow and turbulent inlet boundary layers is presented.
Abstract: A method is presented for computation of optimum recovery at fixed length (Cp*) in two-dimensional diffusers with incompressible flow and turbulent inlet boundary layers. Since Cp* lies in the zone of transitory stall, the method involves computation of not only attached but also detaching and detached turbulent boundary layers. The results agree with available data to the level of the uncertainty in the data. The model is zonal in character. Results suggest that the most important feature in computing detaching flows is the treatment of the interaction between the outer (inviscid) flow and the boundary layer; the use of velocity-profile forms that represent average back-flows adequately is also important.


Journal ArticleDOI
TL;DR: In this paper, exact expressions for the delta-star transformation are derived to simplify complex reliability block diagrams consisting of 2-state or 3-state devices, together with the conditions under which the transformation applies.
Abstract: This paper presents new exact expressions for delta-star transformation to simplify complex reliability block diagrams consisting of 2-state or 3-state devices. The conditions are given under which the transformation applies. The expressions are interrelated and require less computation time for finding the equivalent star configuration. Expressions can also be derived for star-delta transformation in the same way.

Dissertation
01 Jan 1978
TL;DR: With the actual physical size of components being very small and the high circuit density, there is little scope for improving computation speed significantly by such means as even denser circuitry or still faster electronic components.
Abstract: The present state of electronic technology is such that factors affecting computation speed have almost been minimised; switching, for instance, is almost instantaneous. Electronic components are so good, in fact, that the time taken for a logic signal to travel between two points is now a significant factor of instruction times. Clearly, with the actual physical size of components being very small and the high circuit density, there is little scope for improving computation speed significantly by such means as even denser circuitry or still faster electronic components. Thus, development of faster computers will require a new approach that depends on the imaginative use of existing knowledge. One such approach is to increase computation speed through parallelism. Obviously, a parallel computer with p identical processors is potentially p times as fast as a single computer, although this limit can rarely be achieved.
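A standard way to quantify why the p-fold limit is rarely reached (Amdahl's argument, which the abstract alludes to but does not state) is to assume only a fraction f of the work can be done in parallel:

```python
def speedup(f, p):
    """Amdahl-style bound: fraction f of the work parallelises over p processors."""
    return 1.0 / ((1.0 - f) + f / p)

for p in (2, 8, 64, 1024):
    print(p, round(speedup(0.95, p), 2))   # saturates near 1/(1-f) = 20, far below p
```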

Journal ArticleDOI
TL;DR: In this paper, an algorithm is presented for the evaluation of the stiffness matrices of higher-order elements on the CDC STAR-100 computer, where the organization of the computation and the mode of storage of different arrays to take advantage of the STAR pipeline (streaming) capability are discussed.
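No abstract is given for this entry. As a loose illustration of organising a stiffness computation so that it streams over all elements at once (a numpy analogue of pipelined execution, with made-up element data, not the paper's STAR-100 code), compare a per-element loop with a single broadcast expression:

```python
import numpy as np

n_el = 10_000
EA = np.full(n_el, 2.0e7)                                  # axial rigidity (hypothetical)
L = np.random.default_rng(0).uniform(0.5, 1.5, n_el)      # element lengths (hypothetical)
base = np.array([[1.0, -1.0], [-1.0, 1.0]])                # 1D bar element pattern

def stiffness_loop():
    """One small 2x2 matrix at a time (scalar-style organisation)."""
    ks = np.empty((n_el, 2, 2))
    for e in range(n_el):
        ks[e] = (EA[e] / L[e]) * base
    return ks

def stiffness_streamed():
    """One broadcast expression over the whole element array (vector-style)."""
    return (EA / L)[:, None, None] * base

assert np.allclose(stiffness_loop(), stiffness_streamed())
```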

Book ChapterDOI
01 Jan 1978
TL;DR: Correlation methods proposed during the 1960s to go beyond approximate treatments such as the Hartree-Fock method require considerable computation; some were never extended beyond test cases, while others are still being exploited in the accurate calculation of atomic properties.
Abstract: During the 1960's, many theories were proposed for treating properly the correlation of the electrons of an atom. Up till then, most of the methods used in atomic structure calculations, notably the Hartree-Fock method, had treated the interaction between the electrons in an approximate way. Although such approximate methods frequently led to energy values that were reasonably accurate, the same could not be said of the calculation of energy splittings or of other atomic properties. The new, more accurate theories attempted to remedy this situation. Though they were conceptually straightforward, the details of these theories were often complex and the amount of computation required was considerable. The feasibility of calculations based on them was thus closely linked to the availability of good computing facilities. Some of the methods were never extended beyond test cases. But others have been and indeed still are being exploited in the accurate calculation of atomic properties.

Journal ArticleDOI
TL;DR: In this article, a method to approximate an integral over the (unit) sphere by a linear combination of the values of the integrand at given points is discussed, where the main concept is the observation that on the sphere for each sufficiently smooth function the integral can be expressed by a summation formula.
Abstract: This paper discusses a method to approximate an integral over the (unit) sphere by a linear combination of the values of the integrand at given points. The main concept of this method is the observation that, on the sphere, the integral of each sufficiently smooth function can be expressed by a summation formula. A method is given for optimizing the accuracy of the computation for a given set of sample points for the function being integrated.
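The paper's optimisation is not reproduced, but the underlying idea of choosing weights for fixed sample points so that the summation formula is as accurate as possible can be sketched by requiring exactness on low-degree polynomials and solving the resulting system in a least-squares sense; the monomial basis and degree cutoff below are my choices, not the paper's:

```python
import numpy as np
from itertools import product

def sphere_monomial_integral(a, b, c):
    """Exact integral of x^a y^b z^c over the unit sphere (surface measure)."""
    if a % 2 or b % 2 or c % 2:
        return 0.0
    def dfact(n):                      # double factorial, with (-1)!! = 1
        return 1.0 if n <= 0 else n * dfact(n - 2)
    return 4.0 * np.pi * dfact(a - 1) * dfact(b - 1) * dfact(c - 1) / dfact(a + b + c + 1)

def optimise_weights(points, degree=3):
    """Least-squares weights so that sum_i w_i p(x_i) matches the exact sphere
    integral for every monomial of total degree <= `degree`."""
    rows, rhs = [], []
    for a, b, c in product(range(degree + 1), repeat=3):
        if a + b + c > degree:
            continue
        rows.append(points[:, 0]**a * points[:, 1]**b * points[:, 2]**c)
        rhs.append(sphere_monomial_integral(a, b, c))
    w, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return w

# example: weights for the 6 octahedron vertices come out equal, summing to 4*pi
pts = np.array([[1, 0, 0], [-1, 0, 0], [0, 1, 0], [0, -1, 0], [0, 0, 1], [0, 0, -1]], float)
w = optimise_weights(pts, degree=3)
print(w, w.sum() / (4 * np.pi))
```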

Journal ArticleDOI
TL;DR: In this paper, the authors consider a simple problem in the optimal control of Brownian motion and show that there exists an optimal policy involving just two critical numbers, and formulas are given for computation of the critical numbers.


Book
01 Apr 1978
TL;DR: The mu calculus, a simple syntactic formalism for representing message-passing computations, is presented and augmented to serve as the semantic basis for programs running on the network; the network implementation supports object references by using a new concept, the reference tree.
Abstract: The goal of this thesis is to develop a methodology for building networks of small computers capable of the same tasks now performed by single larger computers. Such networks promise to be both easier to scale and more economical in many instances. The mu calculus, a simple syntactic formalism for representing message-passing computations, is presented and augmented to serve as the semantic basis for programs running on the network. The augmented version includes cells, tokens, and semaphores, as well as primitives for side-effect-free computation. Tokens, a novel construct, allow certain simple communication and synchronization tasks without involving fully general side effects. The network implementation presented supports object references, keeping track of them by using a new concept, the reference tree. A reference tree is a group of neighboring processors in the network that share knowledge of a common object. Also discussed are mechanisms for handling side effects on objects and strategy issues involved in allocating computations to processors.

Proceedings ArticleDOI
01 Oct 1978
TL;DR: The main idea in this paper is the development of a mathematical notation for expressing the important features and properties of iterative computation networks; the notation can be used both for analyzing and for designing computational networks.
Abstract: This paper deals with design principles for iterative computation networks. Such computation networks are used for performing repetitive computations which typically are not data-dependent. Most of the signal processing algorithms, like FFT and filtering, belong to this class. The main idea in this paper is the development of mathematical notation for expressing such designs. This notation captures the important features and properties of these computation networks, and can be used both for analyzing and for designing computational networks.

Journal ArticleDOI
TL;DR: The methods for near-field computation of resonant dipole and broadside and endfire linear arrays are reviewed, some recent developments are described, and extensive experimental verification is reported.
Abstract: Techniques for the computation of near fields of thin-wire antennas have been reported in the past. In this paper, the methods for such computation are reviewed, some recent developments are described, and extensive experimental verification is reported. The computed near fields of a resonant dipole are compared with experiment. The near fields of broadside and endfire linear arrays are reported. The broadside near-field patterns narrow, whereas the endfire patterns broaden, as distance from the array center increases. Finally, the relevance of near-field computation to near-field radiation hazards and to near-field measurements is discussed.

Journal ArticleDOI
TL;DR: In this paper, a horizontally layered non-absorptive system of homogeneous layers may be specified by giving the reflection coefficients at each interface, provided the layers have equal vertical travel time and a perfect reflector as a free surface.
Abstract: A horizontally layered non-absorptive system of homogeneous layers may be specified by giving the reflection coefficients at each interface. Provided the layers have equal vertical travel time and a perfect reflector as a free surface, the reflection coefficients are generally reconstructed from the reflected pulses by way of solving simultaneous equations of the Toeplitz matrix form with the Levinson recursion method. There exists an alternative approach to solving this problem which, by simple reasoning, immediately yields the (Levinson) recursion scheme. The method is based on formulas that relate to solving the forward problem. It resembles Kunetz's (1962) original inverse solution inasmuch as the computation of the reflection coefficients is based on the idea of separating the contribution of a primary from the sum of all multiples.
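The paper's alternative derivation is not reproduced here, but the Levinson recursion it refers to, which turns a Toeplitz system built from the correlations of the recorded pulses into a sequence of reflection coefficients, can be stated compactly (a textbook Levinson-Durbin sketch, not the authors' code):

```python
import numpy as np

def levinson_durbin(r):
    """Levinson-Durbin recursion: from autocorrelations r[0..m] return the
    prediction-error filter a, the reflection coefficients k[1..m] and the
    final prediction-error power e."""
    m = len(r) - 1
    a = np.zeros(m + 1)
    a[0] = 1.0
    k = np.zeros(m)
    e = r[0]
    for i in range(1, m + 1):
        acc = r[i] + np.dot(a[1:i], r[i - 1:0:-1])
        ki = -acc / e
        k[i - 1] = ki
        a_prev = a.copy()
        for j in range(1, i + 1):
            a[j] = a_prev[j] + ki * a_prev[i - j]   # order-update of the filter
        e *= (1.0 - ki * ki)                        # prediction-error power update
    return a, k, e

# single-lag example: r1/r0 = 0.5 gives one reflection coefficient of -0.5
print(levinson_durbin(np.array([1.0, 0.5])))
```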