
Showing papers on "Computation" published in 1986




Journal ArticleDOI
TL;DR: For electron repulsion integrals and their derivatives, the present scheme is found superior to other currently available methods for molecular integral computation, with a significant saving of computer time.
Abstract: Recurrence expressions are derived for various types of molecular integrals over Cartesian Gaussian functions by the use of the recurrence formula for three‐center overlap integrals. A number of characteristics inherent in the recursive formalism allow an efficient scheme to be developed for molecular integral computations. With respect to electron repulsion integrals and their derivatives, the present scheme with a significant saving of computer time is found superior to other currently available methods. A long innermost loop incorporated in the present scheme facilitates a fast computation on a vector processing computer.

609 citations
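The recursive formalism is easy to exhibit on its simplest member. Below is a minimal sketch, assuming an Obara-Saika-type upward recursion for one-dimensional two-center overlap integrals over Cartesian Gaussians (function names are illustrative, not the authors' code); the repulsion-integral recurrences in the paper share this structure with more terms.

```python
import math

def overlap_1d(i, j, a, b, Ax, Bx):
    """<x^i exp(-a(x-Ax)^2) | x^j exp(-b(x-Bx)^2)> by upward recursion."""
    p = a + b
    Px = (a * Ax + b * Bx) / p
    # Base case: overlap of two s-type Gaussians.
    S = {(0, 0): math.sqrt(math.pi / p) * math.exp(-a * b / p * (Ax - Bx) ** 2)}

    def rec(i, j):
        if i < 0 or j < 0:
            return 0.0
        if (i, j) not in S:
            if i > 0:   # raise the angular momentum index on the bra side
                S[(i, j)] = ((Px - Ax) * rec(i - 1, j)
                             + ((i - 1) * rec(i - 2, j) + j * rec(i - 1, j - 1)) / (2 * p))
            else:       # i == 0, j > 0: raise it on the ket side
                S[(i, j)] = ((Px - Bx) * rec(i, j - 1)
                             + ((j - 1) * rec(i, j - 2)) / (2 * p))
        return S[(i, j)]

    return rec(i, j)

# Sanity check: <s|s> of two unit-exponent Gaussians at the origin is sqrt(pi/2).
print(overlap_1d(0, 0, 1.0, 1.0, 0.0, 0.0), math.sqrt(math.pi / 2))
```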


Journal ArticleDOI
TL;DR: The PISO algorithm, as described in this paper, is a non-iterative method for solving the implicitly discretised, time-dependent fluid flow equations; it is applied in conjunction with a finite-volume technique employing a backward temporal difference scheme to the computation of compressible and incompressible flow cases.

500 citations


Journal ArticleDOI
TL;DR: In this article, a spin glass transition was found in the system, and the low temperature phase space has an ultrametric structure, which sheds light on the nature of hard computation problems.
Abstract: Recently developed techniques of the statistical mechanics of random systems are applied to the graph partitioning problem. The averaged cost function is calculated and agrees well with numerical results. The problem bears close resemblance to that of spin glasses. The authors find a spin glass transition in the system, and the low temperature phase space has an ultrametric structure. This sheds light on the nature of hard computation problems.

269 citations
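The mapping to spins is concrete enough to state in code. The following is a numerical illustration, not the paper's analytic replica calculation: the bipartition cost is written in the spin language, with s_i = ±1 labelling the two halves, and minimized by simulated annealing on a random graph.

```python
import math, random
random.seed(0)

def cut_size(edges, s):
    """Cost function: number of edges whose endpoints carry opposite spins."""
    return sum(1 for i, j in edges if s[i] != s[j])

def anneal(n, edges, steps=20000, T0=2.0):
    s = [1] * (n // 2) + [-1] * (n - n // 2)   # balanced start
    random.shuffle(s)
    for t in range(steps):
        T = T0 * (1 - t / steps) + 1e-3        # slow cooling schedule
        i, j = random.randrange(n), random.randrange(n)
        if s[i] == s[j]:
            continue                           # swapping keeps the halves balanced
        old = cut_size(edges, s)
        s[i], s[j] = s[j], s[i]
        d = cut_size(edges, s) - old
        if d > 0 and random.random() >= math.exp(-d / T):
            s[i], s[j] = s[j], s[i]            # reject the uphill move
    return s

n, p = 40, 0.2                                 # random graph G(n, p)
edges = [(i, j) for i in range(n) for j in range(i + 1, n) if random.random() < p]
s = anneal(n, edges)
print("cut size:", cut_size(edges, s))
```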


Journal ArticleDOI
TL;DR: The model accurately simulates macroscopic phenomena of traffic flow while at the same time reproducing the main mechanisms of microscopic models, making it possible to simulate large traffic networks on personal computers.

223 citations


Journal ArticleDOI
01 Jun 1986
TL;DR: An object-oriented computation model is presented which is designed for modelling and describing a wide variety of concurrent systems and an overview of a programming language called ABCL/1, whose semantics faithfully reflects this computation model, is presented.
Abstract: An object-oriented computation model is presented which is designed for modelling and describing a wide variety of concurrent systems. In this model, three types of message passing are incorporated. An overview of a programming language called ABCL/1, whose semantics faithfully reflects this computation model, is also presented. Using ABCL/1, a simple scheme of distributed problem solving is illustrated. Furthermore, we discuss the reply destination mechanism and its applications. A distributed “same fringe” algorithm is presented as an illustration of both the reply destination mechanism and the future type message passing which is one of the three message passing types in our computation model.

220 citations
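The three message-passing types can be mimicked with ordinary threads. The sketch below is a loose Python analogue under that assumption (class and method names are inventions for illustration, not ABCL/1 syntax): a past-type send returns immediately, a now-type send waits for the reply, and a future-type send returns a handle whose value is claimed later, with the reply destination made explicit as a queue.

```python
import threading, queue

class Actor:
    """A minimal active object: one thread serving one message queue."""
    def __init__(self):
        self.mbox = queue.Queue()
        threading.Thread(target=self._run, daemon=True).start()

    def _run(self):
        while True:
            msg, reply_to = self.mbox.get()
            result = self.receive(msg)
            if reply_to is not None:
                reply_to.put(result)       # deliver to the reply destination

    def past(self, msg):                   # past type: send, don't wait
        self.mbox.put((msg, None))

    def now(self, msg):                    # now type: send and wait for reply
        r = queue.Queue()
        self.mbox.put((msg, r))
        return r.get()

    def future(self, msg):                 # future type: claim the reply later
        r = queue.Queue()
        self.mbox.put((msg, r))
        return r

class Square(Actor):
    def receive(self, msg):
        return msg * msg

sq = Square()
f = sq.future(7)       # sender proceeds while the object computes
print(sq.now(3))       # 9: synchronous request
print(f.get())         # 49: value claimed from the future
```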


Journal ArticleDOI
TL;DR: In this article, a numerically efficient global matrix approach to the solution of the wave equation in horizontally stratified environments is presented, where the field in each layer is expressed as a superposition of the field produced by the sources within the layer and an unknown field satisfying the homogeneous wave equations, both expressed as integral representations in the horizontal wavenumber.
Abstract: Summary. A numerically efficient global matrix approach to the solution of the wave equation in horizontally stratified environments is presented. The field in each layer is expressed as a superposition of the field produced by the sources within the layer and an unknown field satisfying the homogeneous wave equations, both expressed as integral representations in the horizontal wavenumber. The boundary conditions to be satisfied at each interface then yield a linear system of equations in the unknown wavefield amplitudes, to be satisfied at each horizontal wavenumber. As an alternative to the traditional propagator matrix approaches, the solution technique presented here yields both improved efficiency and versatility. Its global nature makes it well suited to problems involving many receivers in range as well as depth and to calculations of both stresses and particle velocities. The global solution technique is developed in close analogy to the finite element method, thereby reducing the number of arithmetic operations to a minimum and making the resulting computer code very efficient in terms of computation time. These features are illustrated by a number of numerical examples from both crustal and exploration seismology.

210 citations


Journal ArticleDOI
TL;DR: It is suggested that any reasonable analog computer can be simulated efficiently (in polynomial time) by a digital computer, and consequences for the operation of physical devices used for computation are drawn from the assumption that P ≠ NP.

188 citations


Journal ArticleDOI
TL;DR: Computer arithmetic is extended so that the arithmetic operations in the linear spaces and their interval correspondents which are most commonly used in computation can be performed with maximum accuracy on digital computers.
Abstract: A new approach to the arithmetic of the digital computer is surveyed. The methodology for defining and implementing floating-point arithmetic is described. Shortcomings of elementary floating-point arithmetic are revealed through sample problems. The development of automatic computation with emphasis on the user control of errors is reviewed. The limitations of conventional rule-of-thumb procedures for error control in scientific computation are demonstrated by means of examples. Computer arithmetic is extended so that the arithmetic operations in the linear spaces and their interval correspondents which are most commonly used in computation can be performed with maximum accuracy on digital computers. A new fundamental computer operation, the scalar product, is introduced to develop this advanced computer arithmetic.A process of automatic error control called validation which delivers high accuracy with guarantees for scientific computations is described. Validation of computations for a large class of nu...

143 citations
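The failure mode motivating a maximally accurate scalar product is easy to reproduce. In the sketch below, the ordinary floating-point dot product loses every significant digit to cancellation, while an exactly rounded summation recovers the true value (math.fsum stands in for the advanced scalar-product operation; the individual products are exact for this data).

```python
import math

x = [1e16,  1.0, -1e16]
y = [1.0,   1.0,  1.0]

naive = sum(xi * yi for xi, yi in zip(x, y))
exact = math.fsum(xi * yi for xi, yi in zip(x, y))

print(naive)   # 0.0 -- rounding after each addition wipes out the true result
print(exact)   # 1.0 -- exactly rounded sum of the (here exact) products
```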


Journal ArticleDOI
31 Aug 1986
TL;DR: A new formulation of Phong shading is described that reduces the amount of computation per pixel to only 2 additions for simple Lambertian reflection and 5 additions and 1 memory reference for Phong's complete reflection model.
Abstract: Computer image generation systems often represent curved surfaces as a mesh of planar polygons that are shaded to restore a smooth appearance. Phong shading is a well known algorithm for producing a realistic shading but it has not been used by real-time systems because of the 3 additions, 1 division, and 1 square-root required per pixel for its evaluation. We describe a new formulation for Phong shading that reduces the amount of computation per pixel to only 2 additions for simple Lambertian reflection and 5 additions and 1 memory reference for Phong's complete reflection model. We also show how to extend our method to compute the specular component with the eye at a finite distance from the scene rather than at infinity as is usually assumed. The method can be implemented in hardware for real-time applications or in software to speed image generation for almost any system.

132 citations
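The arithmetic behind the two-additions-per-pixel figure is plain second-order forward differencing. The sketch below uses an arbitrary stand-in quadratic, not the paper's actual Taylor expansion of the intensity function, to show how a quadratic can be evaluated along a scan line with two additions per sample.

```python
def scanline(a, b, c, n):
    """Evaluate f(x) = a*x^2 + b*x + c at x = 0..n-1 with 2 additions per step."""
    f = c               # f(0)
    d1 = a + b          # first difference f(1) - f(0)
    d2 = 2 * a          # constant second difference
    out = []
    for _ in range(n):
        out.append(f)
        f += d1         # addition 1
        d1 += d2        # addition 2
    return out

print(scanline(1, 2, 3, 5))                    # [3, 6, 11, 18, 27]
print([x * x + 2 * x + 3 for x in range(5)])   # same values, computed directly
```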


Journal ArticleDOI
TL;DR: In this article, a quadratic eigenvalue problem involving tridiagonal matrices is solved for wave propagation in an anisotropic layered medium, for which the eigenvalues can be found with great speed and accuracy.
Abstract: The determination of the natural modes of wave propagation in an anisotropic layered medium requires the solution of a transcendental eigenvalue problem that is usually approached numerically with the aid of search techniques. Such computations require great effort. The method presented in this paper provides an alternate solution to this problem in terms of a quadratic eigenvalue problem involving tridiagonal matrices, for which the eigenvalues can be found with great speed and accuracy. The technique is then illustrated by means of an example involving a cross-anisotropic Gibson solid.
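The reduction from a transcendental search to an algebraic eigenvalue problem can be illustrated generically. The sketch below uses a standard companion linearization, not necessarily the paper's formulation: a quadratic eigenvalue problem (λ²M + λC + K)x = 0 with tridiagonal matrices is converted to a generalized eigenproblem of twice the size and solved directly.

```python
import numpy as np
from scipy.linalg import eig

rng = np.random.default_rng(0)
n = 5

def tridiag():
    d, e = rng.normal(size=n), rng.normal(size=n - 1)
    return np.diag(d) + np.diag(e, 1) + np.diag(e, -1)

M, C, K = tridiag() + 5 * np.eye(n), tridiag(), tridiag()

# First companion linearization: A z = lambda * B z with z = [x, lambda*x].
Z, I = np.zeros((n, n)), np.eye(n)
A = np.block([[Z, I], [-K, -C]])
B = np.block([[I, Z], [Z, M]])
lam, _ = eig(A, B)

# Residual check for one computed eigenvalue.
lam0 = lam[np.isfinite(lam)][0]
Q = lam0**2 * M + lam0 * C + K
x = np.linalg.svd(Q)[2][-1].conj()       # approximate null vector of Q
print(np.linalg.norm(Q @ x))             # ~0: lam0 solves the quadratic problem
```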

Journal ArticleDOI
TL;DR: This work presents a brief description of the modified signed-digit number system and suggests one optical architecture for implementing MSD fixed-point addition, subtraction, and multiplication.
Abstract: Improving the precision of optically performed computations is a critical aspect of photonic computing. One possible method for improving precision is through the use of modified signed-digit (MSD) arithmetic. Optical implementation of MSD arithmetic offers several important advantages over other optical techniques such as the digital multiplication by analog convolution (DMAC) algorithm or the use of residue arithmetic. These advantages include the parallel pipeline flow of digits due to carry-free addition and subtraction, fixed-point as well as floating-point capability, and the potential for performing divisions. We present a brief description of the modified signed-digit number system and suggest one optical architecture for implementing MSD fixed-point addition, subtraction, and multiplication.
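The number system itself fits in a few lines, even though the optical architecture is the paper's real subject. The sketch below shows MSD digits drawn from {-1, 0, 1}, the redundancy of the representation, and the digitwise negation that makes subtraction as cheap as addition; the carry-free addition rules discussed in the paper are not reproduced here.

```python
def to_int(digits):
    """MSD digits (most significant first, each in {-1, 0, 1}) to an integer."""
    v = 0
    for d in digits:
        v = 2 * v + d
    return v

def negate(digits):
    """Digitwise negation: no borrow propagates, unlike two's complement."""
    return [-d for d in digits]

five_a = [1, 0, 1]          # 4 + 1
five_b = [1, -1, 0, 1]      # 8 - 4 + 1: the representation is redundant
print(to_int(five_a), to_int(five_b))   # 5 5
print(to_int(negate(five_a)))           # -5
```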

Journal ArticleDOI
TL;DR: This article deals with a standardized class of parallel machines; among the most attractive examples of array units discussed in the literature are the chip structure and, most notably, the class of systolic arrays.
Abstract: In this article, the authors deal with the standardized class of parallel machines. Fast, highly parallel, dedicated array units are well suited to VLSI or even WSI implementation because of the extreme regularity of their architecture and their interconnection locality. Given these attributes, it is reasonable, as H.T. Kung suggests, to look for algorithms inherently suited to such arrays (signal-processing algorithms, for instance, fall within this class). Among the most attractive examples of array units discussed in the literature are the chip structure and, most notably, the class of systolic arrays. On such architectures it is possible to activate a wavefront computation mode, in which computation propagates along one direction only for the various interconnection axes. The systems considered here are, then, regular interconnections of processing elements (cells), with information flowing in one direction only along all interconnection lines. The authors require that no memory devices be present in the array, with the possible exception of local "service" memories (for example, registers in serial arithmetic units). This limited use of memory elements is acceptable for attached processors that generally communicate by means of I/O lines with the main memories.

Journal ArticleDOI
TL;DR: The network model is too simplified to serve as a model of human performance, but it does demonstrate that one global property of outlines can be computed through local interactions in a parallel network.
Abstract: The differentiation of figure from ground plays an important role in the perceptual organization of visual stimuli. The rapidity with which we can discriminate the inside from the outside of a figure suggests that at least this step in the process may be performed in visual cortex by a large number of neurons in several different areas working together in parallel. We have attempted to simulate this collective computation by designing a network of simple processing units that receives two types of information: bottom-up input from the image containing the outlines of a figure, which may be incomplete, and a top-down attentional input that biases one part of the image to be the inside of the figure. No presegmentation of the image was assumed. Two methods for performing the computation were explored: gradient descent, which seeks locally optimal states, and simulated annealing, which attempts to find globally optimal states by introducing noise into the computation. For complete outlines, gradient descent was faster, but the range of input parameters leading to successful performance was very narrow. In contrast, simulated annealing was more robust: it worked over a wider range of attention parameters and a wider range of outlines, including incomplete ones. Our network model is too simplified to serve as a model of human performance, but it does demonstrate that one global property of outlines can be computed through local interactions in a parallel network. Some features of the model, such as the role of noise in escaping from nonglobal optima, may generalize to more realistic models.

Journal ArticleDOI
TL;DR: In this paper, a computer program is developed to compute turbulent flows over three-dimensional rectangular surface-mounted bluff bodies, with application to wind flows over buildings; the program solves the steady-state Reynolds equations using a k-ε model of turbulence.

Book ChapterDOI
31 Dec 1986
TL;DR: A new algorithm for the detection of the termination of a distributed computation is presented, together with a demonstration of how the algorithm can be derived in a number of steps.
Abstract: The purpose of this paper is twofold, viz. to present a new algorithm for the detection of the termination of a distributed computation and to demonstrate how the algorithm can be derived in a number of steps.
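The kind of algorithm meant can be exercised in a toy simulation; the rules below are those of the classic Dijkstra-Feijen-van Gasteren token scheme, assumed here since the listing does not name the algorithm. Message delivery is instantaneous, matching a synchronous communication model: a machine that sends a message turns black, a black machine blackens the token it forwards and then whitens itself, and machine 0 declares termination only after a fully white round.

```python
import random
random.seed(1)

N = 5
active = [True] + [False] * (N - 1)     # machine 0 starts the computation
colour = ["white"] * N
token_at, token_colour = None, "white"  # no probe in flight yet

step, detected = 0, False
while not detected:
    step += 1
    # Basic computation: an active machine sends a message or goes passive.
    workers = [i for i in range(N) if active[i]]
    if workers:
        i = random.choice(workers)
        if step < 40 and random.random() < 0.3:
            j = random.randrange(N)
            active[j] = True            # instantaneous (synchronous) delivery
            colour[i] = "black"         # a machine sending a message turns black
        else:
            active[i] = False
    # Detection layer: the token is handled only by a passive machine.
    if token_at is None:
        if not active[0]:
            colour[0] = "white"         # machine 0 launches a white token
            token_colour = "white"
            token_at = N - 1
    elif not active[token_at]:
        if token_at == 0:
            if colour[0] == "white" and token_colour == "white":
                detected = True         # a fully white round: termination
            else:
                token_at = None         # unsuccessful probe; machine 0 retries
        else:
            if colour[token_at] == "black":
                token_colour = "black"  # a black machine blackens the token
            colour[token_at] = "white"  # ... and whitens after forwarding it
            token_at -= 1

assert not any(active)                  # detection never fires early
print("termination detected at step", step)
```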

Journal ArticleDOI
TL;DR: In this paper, the reliability or unreliability of a k-out-of-n system involving non-identical components is evaluated using a symmetric switching function, and the roundoff errors introduced in the computations are analyzed.
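A standard way to organize this computation, offered here as a hedged alternative to the paper's symmetric-switching-function formulation, is a dynamic-programming recurrence over components, which evaluates the reliability of a k-out-of-n system with non-identical components exactly.

```python
def k_out_of_n(k, p):
    """P(at least k of n independent components work); p[i] = P(unit i works)."""
    n = len(p)
    dp = [1.0] + [0.0] * n   # dp[j] = P(exactly j of the units seen so far work)
    for pi in p:
        for j in range(n, 0, -1):           # descend so dp[j-1] is still "old"
            dp[j] = dp[j] * (1 - pi) + dp[j - 1] * pi
        dp[0] *= (1 - pi)
    return sum(dp[k:])

print(k_out_of_n(2, [0.9, 0.8, 0.7]))   # 0.902
```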

Journal ArticleDOI
01 May 1986
TL;DR: From the standpoint that performance of a single processor of a data flow computer must be comparable to that of a Von Neumann computer, comparison of both computers is discussed and improvement of the SIGMA-1 instruction set is proposed.
Abstract: A processing element and a structure element of the data flow computer SIGMA-1 for scientific computations are now operational. The elements are evaluated for several benchmark programs. For efficient execution of loop constructs, the sticky token mechanism which holds loop invariants is evaluated and exhibits a remarkable effect. From the standpoint that performance of a single processor of a data flow computer must be comparable to that of a Von Neumann computer, comparison of both computers is discussed and improvement of the SIGMA-1 instruction set is proposed.


Journal ArticleDOI
J. S. Denker
TL;DR: The workings of a standard model are reviewed, with particular emphasis on various schemes for learning and adaptation; such models can be used as associative memories, or as analog computers to solve optimization problems.

Journal ArticleDOI
TL;DR: In this article, a modification of an algorithm recently suggested by the same authors in this journal is presented; the speed of convergence is improved for the same complexity of computation.
Abstract: We present a modification of an algorithm recently suggested by the same authors in this journal (Ref. 1). The speed of convergence is improved for the same complexity of computation.

Journal ArticleDOI
TL;DR: In this article, a model is formulated and solution strategies are investigated for the simulation of a staged batch distillation unit with chemical reactions in the liquid phase, which involves three computation phases: calculation of column profiles at total reflux, calculation of initial derivatives of the algebraic variables and integration of the dynamic model itself.

Journal ArticleDOI
TL;DR: In this article, a branch and bound algorithm was developed for selecting the optimal set of wavelengths for spectroscopic quantitative analysis of mixture samples. The method is based on the criterion of the minimum mean square error between concentrations of the mixture components and their estimates, and the lower bound of the mean square error for the combinations in a given subset is derived as a recurrence inequality.
Abstract: A new computer algorithm has been developed for selecting the optimal set of wavelengths for spectroscopic quantitative analysis of mixture samples. The method is based on the criterion of the minimum mean square error between concentrations of the mixture components and their estimates. The branch and bound algorithm finds the optimal set from all possible combinations of wavelengths. This algorithm saves computation time significantly, compared with the enumerative method. The mathematical formulation of the lower bound of the mean square errors for the combinations in a given subset is derived as a recurrence inequality. Experimental results of wavelength selection for infrared absorption spectra of xylene-isomer mixtures are shown to demonstrate the effectiveness of the algorithm in terms of computation complexity and accuracy in quantitative analysis for the fixed measurement time.
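The branch-and-bound idea can be sketched with a generic bound in place of the authors' recurrence inequality: the least-squares error covariance only shrinks as wavelengths are added, so the MSE attainable using all remaining wavelengths lower-bounds every completion of a partial selection. The criterion below, trace((A_S^T A_S)^-1), is an assumption standing in for the paper's mean-square-error expression.

```python
import numpy as np

def mse(A, rows):
    """Summed variance of the least-squares estimate from the selected rows."""
    AS = A[list(rows)]
    return np.trace(np.linalg.inv(AS.T @ AS))

def branch_and_bound(A, k):
    m = A.shape[0]
    best = [np.inf, None]

    def search(chosen, next_row):
        rest = list(range(next_row, m))
        if len(chosen) + len(rest) < k:
            return                            # too few wavelengths left
        if len(chosen) == k:
            v = mse(A, chosen)
            if v < best[0]:
                best[:] = [v, tuple(chosen)]
            return
        # Bound: adding rows never increases the trace, so the MSE with all
        # remaining wavelengths included bounds every completion from below.
        if mse(A, chosen + rest) >= best[0]:
            return                            # prune the whole branch
        for r in rest:
            search(chosen + [r], r + 1)

    search([], 0)
    return best

rng = np.random.default_rng(0)
A = rng.uniform(0.1, 1.0, size=(12, 3))       # 12 wavelengths, 3 components
print(branch_and_bound(A, 4))                 # best MSE and wavelength indices
```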

Proceedings ArticleDOI
04 Jan 1986
TL;DR: A new family of algorithms for adaptive weight computation for sensor arrays with arbitrary geometries, which includes existing algorithms of the steepest-descent and matrix-factorization types, plus a range of new algorithms that are intermediate between these two classes, in terms of performance and computational complexity.
Abstract: This paper describes a new family of algorithms for adaptive weight computation for sensor arrays with arbitrary geometries. This family includes existing algorithms of the steepest-descent and matrix-factorization types, plus a range of new algorithms that are intermediate between these two classes, in terms of performance and computational complexity. This approach is particularly valuable for applications where steepest descent is too slow, but the standard matrix methods require too much computation.
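The two ends of the family can be contrasted on the underlying weight equation. In the sketch below, which is generic adaptive-array practice rather than the paper's specific algorithms, the optimal weights solve R w = p; a direct matrix solve and a steepest-descent iteration reach the same answer with very different amounts of computation per update.

```python
import numpy as np

rng = np.random.default_rng(0)
n, snaps = 4, 2000
# Complex array snapshots and a desired reference signal.
X = rng.normal(size=(snaps, n)) + 1j * rng.normal(size=(snaps, n))
d = X @ np.array([1.0, -0.5, 0.25j, 0.0]) + 0.1 * rng.normal(size=snaps)

R = X.conj().T @ X / snaps           # sample covariance matrix
p = X.conj().T @ d / snaps           # cross-correlation vector

# Matrix-factorization end of the family: solve R w = p directly.
w_direct = np.linalg.solve(R, p)

# Steepest-descent end of the family: iterate toward the same solution.
w = np.zeros(n, dtype=complex)
mu = 0.5 / np.linalg.eigvalsh(R).max()
for _ in range(500):
    w = w + mu * (p - R @ w)

print(np.linalg.norm(w - w_direct))  # ~0: both give the Wiener weights
```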

Journal ArticleDOI
TL;DR: In this article, the effect of 1287 earthquakes on the Chandler wobble's excitation function was examined using the centroid-moment tensor solution technique, and it was observed that the earthquakes' static deformation fields had little influence on the Chandler wobble during 1977-1983.
Abstract: Variations in the Chandler wobble's excitation function are examined in order to study the effect of 1287 earthquakes on the Chandler wobble. The computation of the moment tensor data using the centroid-moment tensor solution technique is described. An excitation function is calculated from the moment tensor data and compared to an observed excitation function derived from the polar motion observations of Gross and Chao (1985). It is observed, based on the power spectrum of the earthquake excitation function, that the earthquakes' static deformation fields have little influence on the Chandler wobble during 1977-1983.


Book ChapterDOI
01 Oct 1986
TL;DR: The suitability of hypercube computers for image processing will be put forth, showing that they can be viewed as competitors to, and collaborators with, mesh and pyramid computers, architectures which are often promoted as being ideal for image processing.
Abstract: Hypercube computers have recently become popular parallel computers for a variety of engineering and scientific computations. However, despite the fact that the characteristics which make them useful scientific processors also makes them efficient image processors, they have not yet been extensively used as image processing machines. This is partially due to the hardware characteristics of current hypercube computers, partially to the particular history of the groups which first built hypercubes, and partially to the fact that the image processing community did not initially realize some of the advantages of hypercubes. In this paper, their suitability for image processing will be put forth, showing that they can be viewed as competitors to, and collaborators with, mesh and pyramid computers, architectures which are often promoted as being ideal for image processing.
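One concrete sense in which hypercubes subsume mesh architectures is the classical Gray-code embedding, a textbook construction not taken from this paper: a 2^r x 2^c mesh maps onto an (r+c)-dimensional hypercube so that mesh neighbours remain hypercube neighbours.

```python
def gray(i):
    """Binary-reflected Gray code: consecutive codes differ in one bit."""
    return i ^ (i >> 1)

def node(row, col, cbits):
    """Hypercube node number for mesh position (row, col)."""
    return (gray(row) << cbits) | gray(col)

rows, cols, cbits = 4, 8, 3          # 4 x 8 mesh, 8 = 2**cbits columns
for r in range(rows):
    for c in range(cols):
        for dr, dc in ((0, 1), (1, 0)):           # east and south neighbours
            r2, c2 = r + dr, c + dc
            if r2 < rows and c2 < cols:
                diff = node(r, c, cbits) ^ node(r2, c2, cbits)
                assert bin(diff).count("1") == 1  # exactly one bit differs
print("4x8 mesh embedded in a 5-cube with all adjacencies preserved")
```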

Journal ArticleDOI
TL;DR: New methods for handling arrays are introduced, and the notion of array is done away with entirely at the execution level in order to take advantage of the data-flow semantics at their best logical level of performance.
Abstract: Data-flow languages have been hailed as the solution to the programmability of general-purpose multiprocessors. However, data-flow semantics introduce constructs that lead to much overhead at compilation, allocation, and execution time. Indeed, due to its functionality, the data-flow model of computation does not handle repetitive program constructs very efficiently. This is due to the fact that the cornerstone of data flow, namely the concept of single assignment, is opposed to the idea of reexecution of a portion of program as in a loop. A corollary of this problem is the effective representation, storage, and processing of data structures, as these will most often be used in loops. In this paper, various aspects of this issue are explained in detail. Several solutions that have been put forward in the current literature are then surveyed and analyzed. In order to offset some of the disadvantages presented by these, we introduce new methods for handling arrays. In the first one, we raise the level of computation to that of arrays for more efficient operation. In the two others, the opposite approach is taken, and the notion of array is done away with entirely at the execution level in order to take advantage of the data-flow semantics at their best logical level of performance.

Journal ArticleDOI
TL;DR: In this paper, the boundary element computation in elastostatics is carried out for an assumed shape of the unknown flaw in question; the calculated results are compared with the reference data, and the assumed flaw shape is modified accordingly.

Journal ArticleDOI
TL;DR: A detailed analysis of the relative advantages of both direct and associative truth table processing is presented, and the respective merits from the point of view of the number of computations per second and the energy required per computation are outlined.
Abstract: The operating characteristics of digital optical computers utilizing spatial light modulator based shadow casting are reviewed. The geometric and physical limitations of this method are examined. Because of the highly parallel nature of this approach, such systems are capable of truth-table processing. A detailed analysis of the relative advantages of both direct and associative truth table processing is presented. For each case, the use of binary or of multiple-valued logic is considered. The respective merits from the point of view of the number of computations per second and the energy required per computation are outlined. Switching energy considerations based on the use of multiple wavelengths in these systems are also discussed. The characteristic features of this type of optical logic are compared to those of electronic logic.
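The two processing styles can be contrasted in conventional code, with the optical shadow-casting hardware, the paper's actual subject, abstracted away: direct truth-table processing indexes a precomputed output table, while associative processing matches the query pattern against every stored entry.

```python
from itertools import product

# Truth table of a one-bit full adder: (a, b, cin) -> (sum, cout).
table = {}
for a, b, cin in product((0, 1), repeat=3):
    s = a ^ b ^ cin
    cout = (a & b) | (cin & (a ^ b))
    table[(a, b, cin)] = (s, cout)

def direct(a, b, cin):
    return table[(a, b, cin)]            # one indexed lookup

def associative(a, b, cin):
    # Compare the query against every stored pattern, "in parallel" in spirit.
    return next(out for key, out in table.items() if key == (a, b, cin))

print(direct(1, 1, 0))        # (0, 1)
print(associative(1, 0, 1))   # (0, 1)
```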