
Showing papers on "Computation published in 1990"


Journal ArticleDOI
TL;DR: These six volumes as mentioned in this paper compile the mathematical knowledge required by researchers in mechanics, physics, engineering, chemistry and other branches of application of mathematics for the theoretical and numerical resolution of physical models on computers.
Abstract: These six volumes - the result of a ten year collaboration between the authors, two of France's leading scientists and both distinguished international figures - compile the mathematical knowledge required by researchers in mechanics, physics, engineering, chemistry and other branches of application of mathematics for the theoretical and numerical resolution of physical models on computers. Since the publication in 1924 of the Methoden der mathematischen Physik by Courant and Hilbert, there has been no other comprehensive and up-to-date publication presenting the mathematical tools needed in applications of mathematics in directly implementable form. The advent of large computers has in the meantime revolutionised methods of computation and made this gap in the literature intolerable: the objective of the present work is to fill just this gap. Many phenomena in mathematical physics may be modeled by a system of partial differential equations in distributed systems: a model here means a set of equations, which together with given boundary data and, if the phenomenon is evolving in time, initial data, defines the system. The advent of high-speed computers has made it possible for the first time to calculate values from models accurately and rapidly. Researchers and engineers thus have a crucial means of using numerical results to modify and adapt arguments and experiments along the way. Every facet of technical and industrial activity has been affected by these developments. Modeling by distributed systems now also supports work in many areas of physics (plasmas, new materials, astrophysics, geophysics), chemistry and mechanics and is finding increasing use in the life sciences. Volumes 5 and 6 cover problems of Transport and Evolution.

2,137 citations


Journal ArticleDOI
TL;DR: The resulting technique is predominantly linear, efficient, and suitable for parallel processing, and is local in space-time, robust with respect to noise, and permits multiple estimates within a single neighborhood.
Abstract: We present a technique for the computation of 2D component velocity from image sequences. Initially, the image sequence is represented by a family of spatiotemporal velocity-tuned linear filters. Component velocity, computed from spatiotemporal responses of identically tuned filters, is expressed in terms of the local first-order behavior of surfaces of constant phase. Justification for this definition is discussed from the perspectives of both 2D image translation and deviations from translation that are typical in perspective projections of 3D scenes. The resulting technique is predominantly linear, efficient, and suitable for parallel processing. Moreover, it is local in space-time, robust with respect to noise, and permits multiple estimates within a single neighborhood. Promising quantitative results are reported from experiments with realistic image sequences, including cases with sizeable perspective deformation.
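The phase-based definition of component velocity admits a compact numerical illustration. The sketch below is a minimal 1-D stand-in, assuming a single translating sinusoid and using an analytic signal (via scipy.signal.hilbert) in place of the paper's family of velocity-tuned filters; velocity then follows from the temporal and spatial phase derivatives as v = -phi_t / phi_x. It is not the authors' full 2D method.

```python
# Minimal sketch: component velocity from phase derivatives (1-D stand-in,
# not the paper's spatiotemporal filter bank).
import numpy as np
from scipy.signal import hilbert

nx, nt, v_true, k = 256, 16, 1.5, 2 * np.pi * 0.05
x = np.arange(nx)
frames = np.stack([np.cos(k * (x - v_true * t)) for t in range(nt)])

resp = hilbert(frames, axis=1)                              # complex "filter" response per frame
dphi_dt = np.angle(resp[1:, :] * np.conj(resp[:-1, :]))     # temporal phase difference
dphi_dx = np.angle(resp[:, 1:] * np.conj(resp[:, :-1]))     # spatial phase difference

v_est = -dphi_dt[:, :-1] / dphi_dx[:-1, :]                  # v = -phi_t / phi_x
print(round(float(np.median(v_est)), 2))                    # approx. 1.5, the imposed speed
```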

1,113 citations


Journal ArticleDOI
TL;DR: There is a fundamental connection between computation and phase transitions, especially second-order or “critical” transitions, and some of the implications for the understanding of nature if such a connection is borne out are discussed.

1,082 citations


Journal ArticleDOI
TL;DR: A solution algorithm to the network reconfiguration problem, which is a constrained, multiobjective, nondifferentiable optimization problem, that allows the designer to obtain a desirable, global noninferior point in a reasonable computation time.
Abstract: Using a two-stage solution methodology and a modified simulated annealing technique, the authors develop a solution algorithm to the network reconfiguration problem, which is a constrained, multiobjective, nondifferentiable optimization problem. This solution algorithm allows the designer to obtain a desirable, global noninferior point in a reasonable computation time. Also, given a desired number of switch-on/switch-off operations involved in the network configuration, the solution algorithm can identify the most effective operations. In order to reduce the computation time required, the idea of approximate calculations is explored and incorporated into the solution algorithm, where two efficient load-flow methods are employed; one for high temperature and the other for low temperature. The solution algorithm has been implemented in a software package and tested on a 69-bus system with very promising results.
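For readers unfamiliar with the underlying metaheuristic, the fragment below is a generic simulated-annealing loop (accept worse solutions with probability exp(-delta/T), then cool). It is a schematic of the acceptance and cooling mechanism only, assuming placeholder cost and neighbor functions; it is not the paper's two-stage, multiobjective reconfiguration algorithm or its approximate load-flow calculations.

```python
# Generic simulated-annealing skeleton. The cost function, neighborhood move,
# and constraints of a real reconfiguration problem are application-specific
# and stand in here as placeholders.
import math
import random

def simulated_annealing(initial, cost, neighbor, t0=1.0, t_min=1e-3, alpha=0.95, moves_per_t=50):
    current, current_cost = initial, cost(initial)
    best, best_cost = current, current_cost
    t = t0
    while t > t_min:
        for _ in range(moves_per_t):
            candidate = neighbor(current)
            delta = cost(candidate) - current_cost
            if delta < 0 or random.random() < math.exp(-delta / t):
                current, current_cost = candidate, current_cost + delta
                if current_cost < best_cost:
                    best, best_cost = current, current_cost
        t *= alpha                      # geometric cooling schedule
    return best, best_cost

# Toy usage: minimize x^2 over the integers with +/-1 moves.
best, best_cost = simulated_annealing(
    initial=17,
    cost=lambda s: s * s,
    neighbor=lambda s: s + random.choice([-1, 1]))
print(best, best_cost)
```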

379 citations


Proceedings ArticleDOI
01 Apr 1990
TL;DR: This paper shows how to do an on-line simulation of an arbitrary RAM program by a probabilistic RAM whose memory access pattern is independent of the program which is being executed, and with a poly-logarithmic slowdown in the running time.
Abstract: A machine is oblivious if the sequence in which it accesses memory locations is equivalent for any two programs with the same running time. For example, an oblivious Turing Machine is one for which the movement of the heads on the tapes is identical for each computation. (Thus, it is independent of the actual input.) What is the slowdown in the running time of any machine, if it is required to be oblivious? In 1979 Pippenger and Fischer [PF] showed how a two-tape oblivious Turing Machine can simulate, on-line, a one-tape Turing Machine, with a logarithmic slowdown in the running time. We show a similar result for the random-access machine (RAM) model of computation, solving an open problem posed by Goldreich [G]. In particular, we show how to do an on-line simulation of an arbitrary RAM program by a probabilistic RAM whose memory access pattern is independent of the program which is being executed, and with a poly-logarithmic slowdown in the running time. Our proof yields a technique of efficiently hiding (through randomization) the access pattern into any composite data structure. As one of the applications, we exhibit a simple and efficient software protection scheme for a generic one-processor RAM model of computation.
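As a point of contrast with the polylogarithmic construction in the paper, the toy sketch below shows the trivially oblivious strategy: every logical read touches every memory cell in the same fixed order, so the access pattern reveals nothing about the program, at the cost of a linear slowdown per access. This is only a baseline illustration, not the randomized simulation described above.

```python
# Trivially oblivious read: scan all of memory for every access, so the
# physical access pattern is the same regardless of which index is wanted.
# Linear overhead per access; the paper achieves polylogarithmic overhead.
def oblivious_read(memory, index):
    value = None
    for i, cell in enumerate(memory):   # identical scan for every query
        if i == index:
            value = cell
    return value

ram = [10, 20, 30, 40]
print(oblivious_read(ram, 2))           # 30, with the same access pattern as any other read
```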

296 citations


Book
01 Jan 1990
TL;DR: This study of techniques for formal theorem-proving focuses on the applications of Cambridge LCF (Logic for Computable Functions), a computer program for reasoning about computation.
Abstract: From the Publisher: This study of techniques for formal theorem-proving focuses on the applications of Cambridge LCF (Logic for Computable Functions), a computer program for reasoning about computation.

260 citations


Book ChapterDOI
01 Jan 1990
TL;DR: This analysis extends the work from De Jong's thesis, which dealt with disruption of n-point crossover on 2nd order hyperplanes, to present various extensions to this theory, including an analysis of the disruption of n-point crossover on kth order hyperplanes.
Abstract: In this paper we present some theoretical results on two forms of multi-point crossover: n-point crossover and uniform crossover. This analysis extends the work from De Jong's thesis, which dealt with disruption of n-point crossover on 2nd order hyperplanes. We present various extensions to this theory, including 1) an analysis of the disruption of n-point crossover on kth order hyperplanes; 2) the computation of tighter bounds on the disruption caused by n-point crossover, by handling cases where parents share critical allele values; and 3) an analysis of the disruption caused by uniform crossover on kth order hyperplanes. The implications of these results on implementation issues and performance are discussed, and several directions for further research are suggested.
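For concreteness, the two operators analyzed in the paper can be sketched as follows; the bit-string representation and the per-position swap probability of 0.5 for uniform crossover are conventional assumptions rather than details taken from the text.

```python
# Sketch of n-point and uniform crossover on bit-string parents.
import random

def n_point_crossover(p1, p2, n):
    points = sorted(random.sample(range(1, len(p1)), n))   # n distinct cut points
    child1, child2, swap, prev = [], [], False, 0
    for cut in points + [len(p1)]:
        seg1, seg2 = p1[prev:cut], p2[prev:cut]
        child1 += seg2 if swap else seg1
        child2 += seg1 if swap else seg2
        swap, prev = not swap, cut
    return child1, child2

def uniform_crossover(p1, p2, p_swap=0.5):
    child1, child2 = [], []
    for a, b in zip(p1, p2):
        if random.random() < p_swap:                        # swap this position
            a, b = b, a
        child1.append(a)
        child2.append(b)
    return child1, child2

a, b = [0] * 10, [1] * 10
print(n_point_crossover(a, b, 2))
print(uniform_crossover(a, b))
```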

254 citations


Journal ArticleDOI
TL;DR: This paper describes an efficient implementation of a nested decomposition algorithm for the multistage stochastic linear programming problem and results compare the performance of the algorithm to MINOS 5.0.
Abstract: This paper describes an efficient implementation of a nested decomposition algorithm for the multistage stochastic linear programming problem. Many of the computational tricks developed for deterministic staircase problems are adapted to the stochastic setting and their effect on computation times is investigated. The computer code supports an arbitrary number of time periods and various types of random structures for the input data. Numerical results compare the performance of the algorithm to MINOS 5.0.

232 citations


Journal ArticleDOI
TL;DR: This paper discusses how general principles for optimally mapping computations onto parallel computers have been developed and how these principles may help illuminate the relationship between maps and computations in the nervous system.

215 citations


Journal ArticleDOI
TL;DR: The purpose is to review the current status and to provide an overall perspective of parallel algorithms for solving dense, banded, or block-structured problems arising in the major areas of direct solution of linear systems, least squares computations, eigenvalue and singular value computation, and rapid elliptic solvers.
Abstract: Scientific and engineering research is becoming increasingly dependent upon the development and implementation of efficient parallel algorithms on modern high-performance computers. Numerical linear algebra is an indispensable tool in such research and this paper attempts to collect and describe a selection of some of its more important parallel algorithms. The purpose is to review the current status and to provide an overall perspective of parallel algorithms for solving dense, banded, or block-structured problems arising in the major areas of direct solution of linear systems, least squares computations, eigenvalue and singular value computations, and rapid elliptic solvers. A major emphasis is given here to certain computational primitives whose efficient execution on parallel and vector computers is essential in order to obtain high performance algorithms.

203 citations


Journal ArticleDOI
TL;DR: Working under constraints suggested by the brain may make traditional computation more difficult, but it may lead to solutions to AI problems that would otherwise be overlooked.
Abstract: In our quest to build intelligent machines, we have but one naturally occurring model: the human brain. It follows that one natural idea for artificial intelligence (AI) is to simulate the functioning of the brain directly on a computer. Indeed, the idea of building an intelligent machine out of artificial neurons has been around for quite some time. Some early results on brain-like mechanisms were achieved by [18], and other researchers pursued this notion through the next two decades, e.g., [1, 4, 19, 21, 24]. Research in neural networks came to a virtual halt in the 1970s, however, when the networks under study were shown to be very weak computationally. Recently, there has been a resurgence of interest in neural networks. There are several reasons for this, including the appearance of faster digital computers on which to simulate larger networks, interest in building massively parallel computers, and most importantly, the discovery of powerful network learning algorithms. The new neural network architectures have been dubbed connectionist architectures. For the most part, these architectures are not meant to duplicate the operation of the human brain, but rather receive inspiration from known facts about how the brain works. They are characterized by: large numbers of very simple neuron-like processing elements; large numbers of weighted connections between the elements, where the weights on the connections encode the knowledge of a network; highly parallel, distributed control; and an emphasis on learning internal representations automatically. Connectionist researchers conjecture that thinking about computation in terms of the brain metaphor rather than the digital computer metaphor will lead to insights into the nature of intelligent behavior. Computers are capable of amazing feats. They can effortlessly store vast quantities of information. Their circuits operate in nanoseconds. They can perform extensive arithmetic calculations without error. Humans cannot approach these capabilities. On the other hand, humans routinely perform simple tasks such as walking, talking, and commonsense reasoning. Current AI systems cannot do any of these things better than humans. Why not? Perhaps the structure of the brain is somehow suited to these tasks, and not suited to tasks like high-speed arithmetic calculation. Working under constraints suggested by the brain may make traditional computation more difficult, but it may lead to solutions to AI problems that would otherwise be overlooked. What constraints, then, does the brain offer us? First of all, individual neurons are extremely slow devices when compared to their counterparts in digital computers. Neurons operate in the millisecond range, an eternity to a VLSI designer. Yet, humans can perform extremely complex tasks, like interpreting a visual scene or understanding a sentence, in just a tenth of a second. In other words, we do in about a hundred steps what current computers cannot do in ten million steps. How can this be possible? Unlike a conventional computer, the brain contains a huge number of processing elements that act in parallel. This suggests that in our search for solutions, we look for massively parallel algorithms that require no more than 100 processing steps [9]. Also, neurons are failure-prone devices. They are constantly dying (you have certainly lost a few since you began reading this article), and their firing patterns are irregular. Components in digital computers, on the other hand, must operate perfectly. Why? Such components store bits of information that are available nowhere else in the computer: the failure of one component means a loss of information. Suppose that we built AI programs that were not sensitive to the failure of a few components, perhaps by using redundancy and distributing information across a wide range of components? This would open the possibility of very large-scale implementations. With current technology, it is far easier to build a billion-component integrated circuit in which 95 percent of the components work correctly than it is to build a perfectly functioning million-component machine [8]. Another thing people seem to be able to do better than computers is handle fuzzy situations. We have very large memories of visual, auditory, and problem-solving episodes, and one key operation in solving new problems is finding closest matches to old situations. Inexact matching is something brain-style models seem to be good at, because of the diffuse and fluid way in which knowledge is represented. The idea behind connectionism, then, is that we may see significant advances in AI if we approach problems from the point of view of brain-style computation rather than rule-based symbol manipulation. At the end of this article, we will look more closely at the relationship between connectionist and symbolic AI.

Book ChapterDOI
TL;DR: O'Keefe and Nadel as mentioned in this paper proposed a cognitive map theory that identifies two allocentric parameters, the centroid and the eccentricity, which can be calculated from the array of cues in an environment and which serve as the bases for an allocentric polar co-ordinate system.
Abstract: Evidence from single unit and lesion studies suggests that the hippocampal formation acts as a spatial or cognitive map (O'Keefe and Nadel, 1978). In this chapter, I summarise some of the unit recording data and then outline the most recent computational version of the cognitive map theory. The novel aspects of the present version of the theory are that it identifies two allocentric parameters, the centroid and the eccentricity, which can be calculated from the array of cues in an environment and which can serve as the bases for an allocentric polar co-ordinate system. Computations within this framework enable the animal to identify its location within an environment, to predict the location which will be reached as a result of any specific movement from that location, and conversely, to calculate the spatial transformation necessary to go from the current location to a desired location. Aspects of the model are identified with the information provided by cells in the hippocampus and dorsal presubiculum. The hippocampal place cells are involved in the calculation of the centroid and the presubicular direction cells in the calculation of the eccentricity.

Journal ArticleDOI
TL;DR: A new theory for the calculation of proper elements is presented in this article, which defines an explicit algorithm applicable to any chosen set of orbits and accounts for the effect of shallow resonances on secular frequencies.
Abstract: A new theory for the calculation of proper elements is presented. This theory defines an explicit algorithm applicable to any chosen set of orbits and accounts for the effect of shallow resonances on secular frequencies. The proper elements are computed with an iterative algorithm and the behavior of the iteration can be used to define a quality code.

Journal ArticleDOI
TL;DR: In this paper, it is shown that the projected gradient of the objective function on the manifold of constraints usually can be formulated explicitly, which gives rise to the construction of a descent flow that can be followed numerically.
Abstract: The problems of computing least squares approximations for various types of real and symmetric matrices subject to spectral constraints share a common structure. This paper describes a general procedure in using the projected gradient method. It is shown that the projected gradient of the objective function on the manifold of constraints usually can be formulated explicitly. This gives rise to the construction of a descent flow that can be followed numerically. The explicit form also facilitates the computation of the second-order optimality conditions. Examples of applications are discussed. With slight modifications, the procedure can be extended to solve least squares problems for general matrices subject to singular-value constraints.
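The general structure of the projected gradient method, stripped of the matrix manifolds treated in the paper, is the iteration x <- P(x - alpha * grad f(x)). The sketch below illustrates it on a deliberately simple constraint set (the unit sphere), where the projection is just normalization; the paper's contribution is the explicit form of the projected gradient on manifolds defined by spectral or singular-value constraints, which this sketch does not attempt.

```python
# Generic projected-gradient iteration, illustrated on min ||Ax - b||^2
# subject to ||x|| = 1 (projection = normalization). Schematic only.
import numpy as np

def projected_gradient(A, b, steps=500, alpha=1e-2):
    x = np.ones(A.shape[1])
    x /= np.linalg.norm(x)
    for _ in range(steps):
        grad = 2 * A.T @ (A @ x - b)        # gradient of the least-squares objective
        x = x - alpha * grad                # unconstrained descent step
        x /= np.linalg.norm(x)              # project back onto the constraint manifold
    return x

rng = np.random.default_rng(0)
A, b = rng.normal(size=(6, 3)), rng.normal(size=6)
x = projected_gradient(A, b)
print(x, np.linalg.norm(x))                 # unit-norm stationary point
```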

Journal ArticleDOI
TL;DR: In this article, the Bird-Meertens formalism is used to express computations in a compact way and it is shown that it is universal over all four architecture classes and that nontrivial restrictions of functional programming languages exist that can be efficiently executed on disparate architectures.
Abstract: The major parallel architecture classes are considered: single-instruction multiple-data (SIMD) computers, tightly coupled multiple-instruction multiple-data (MIMD) computers, hypercuboid computers and constant-valence MIMD computers. An argument that the PRAM model is universal over tightly coupled and hypercube systems, but not over constant-valence-topology, loosely coupled systems, is reviewed, showing precisely how the PRAM model is too powerful to permit broad universality. Ways in which a model of computation can be restricted to become universal over less powerful architectures are discussed. The Bird-Meertens formalism (R.S. Bird, 1989) is introduced and it is shown how it is used to express computations in a compact way. It is also shown that the Bird-Meertens formalism is universal over all four architecture classes and that nontrivial restrictions of functional programming languages exist that can be efficiently executed on disparate architectures. The use of the Bird-Meertens formalism as the basis for a programming language is discussed, and it is shown that it is expressive enough to be used for general programming. Other models and programming languages with architecture-independent properties are reviewed.
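A small example of the style the Bird-Meertens formalism encourages: a computation written as a composition of a map and a reduction with an associative operator, which is exactly the shape that admits efficient data-parallel and tree-structured implementations on the architecture classes above. The example is a generic illustration of the idiom, not drawn from the paper.

```python
# Sum of squares expressed as reduce(+) . map(square): the map is fully
# data-parallel and the reduction (associative operator) can be evaluated
# as a balanced tree, so the same expression maps onto SIMD, shared-memory
# MIMD, hypercube, or constant-valence machines without being rewritten.
from functools import reduce
from operator import add

xs = list(range(1, 11))
sum_of_squares = reduce(add, map(lambda x: x * x, xs))
print(sum_of_squares)   # 385
```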

01 Dec 1990
TL;DR: An adaptive version of the algorithm exists that allows one to reduce in a significant way the number of degrees of freedom required for a good computation of the solution of the Burgers equation.
Abstract: The Burgers equation with a small viscosity term, initial conditions, and periodic boundary conditions is solved using a spatial approximation constructed from an orthonormal basis of wavelets. The algorithm is directly derived from the notions of multiresolution analysis and tree algorithms. Before the numerical algorithm is described, these notions are first recalled. The method uses extensively the localization properties of the wavelets in the physical and Fourier spaces. Moreover, the authors take advantage of the fact that the involved linear operators have constant coefficients. Finally, the algorithm can be considered as a time marching version of the tree algorithm. The most important point is that an adaptive version of the algorithm exists: it allows one to reduce in a significant way the number of degrees of freedom required for a good computation of the solution. Numerical results and a description of the different elements of the algorithm are provided, in combination with different mathematical comments on the method and some comparison with more classical numerical algorithms.
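The adaptivity idea (retain only the wavelet coefficients needed to represent the solution well) can be illustrated with the PyWavelets package as below. This is only a sketch of coefficient thresholding on a sampled profile, assuming a Daubechies basis and a fixed threshold; it is not the authors' wavelet-Galerkin time-marching scheme for the Burgers equation.

```python
# Sketch: reducing degrees of freedom by discarding small wavelet
# coefficients (PyWavelets assumed available as `pywt`).
import numpy as np
import pywt

x = np.linspace(0.0, 1.0, 1024)
u = np.sin(2 * np.pi * x) + 0.2 * np.sign(np.sin(8 * np.pi * x))   # solution-like profile

coeffs = pywt.wavedec(u, 'db4', level=6)                # multiresolution decomposition
kept = [pywt.threshold(c, value=0.05, mode='hard') for c in coeffs]
active = sum(int(np.count_nonzero(c)) for c in kept)    # retained degrees of freedom

u_rec = pywt.waverec(kept, 'db4')
print(active, len(u), float(np.max(np.abs(u_rec[:len(u)] - u))))
```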

Proceedings ArticleDOI
01 May 1990
TL;DR: The notion of a discriminating predicate, based on hash functions, that partitions the computation between the processors in order to achieve parallelism is introduced and the trade-offs between redundancy and interprocessor-communication are demonstrated.
Abstract: This paper presents several complementary methods for the parallel, bottom-up evaluation of Datalog queries. We introduce the notion of a discriminating predicate, based on hash functions, that partitions the computation between the processors in order to achieve parallelism. A parallelization scheme with the property of non-redundant computation (no duplication of computation by processors) is then studied in detail. The mapping of Datalog programs onto a network of processors, such that the result is a non-redundant computation, is also studied. The methods reported in this paper clearly demonstrate the trade-offs between redundancy and interprocessor-communication for this class of problems.
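The discriminating-predicate idea can be illustrated with an ordinary hash partition: each tuple is routed to exactly one of P processors according to a hash of its join attribute, so the processors evaluate disjoint portions of the join and no work is duplicated. The sketch below simulates the partitioning on one machine and illustrates only the hashing scheme, not the full bottom-up Datalog evaluation studied in the paper.

```python
# Hash-partitioned evaluation of a single join step: tuples are assigned to
# "processors" by a hash of the join attribute, so each join result is
# produced by exactly one processor (non-redundant computation).
from collections import defaultdict

P = 4                                            # number of simulated processors
edges = [(1, 2), (2, 3), (3, 4), (2, 5), (5, 6)]

# Route edge (a, b) to processor hash(b) % P on the left side and to
# processor hash(a) % P on the right side, so pairs joining on the shared
# node meet at the same processor.
left, right = defaultdict(list), defaultdict(list)
for a, b in edges:
    left[hash(b) % P].append((a, b))
    right[hash(a) % P].append((a, b))

paths = set()
for p in range(P):
    for (a, b) in left[p]:
        for (b2, c) in right[p]:
            if b == b2:
                paths.add((a, c))                # two-step path found at processor p
print(sorted(paths))                             # [(1, 3), (1, 5), (2, 4), (2, 6)]
```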

Journal ArticleDOI
Mark A. Walton
TL;DR: A proof is given for a simple algorithm for the computation of fusion rules in Wess-Zumino-Witten (WZW) models.

Journal ArticleDOI
TL;DR: In this paper, a method of computing synthetic seismograms for stratified, azimuthally anisotropic, viscoelastic earth models is presented, which is an extended form of the Kennett algorithm that is efficient for multioffset vertical seismic profiling.
Abstract: We outline a method of computing synthetic seismograms for stratified, azimuthally anisotropic, viscoelastic earth models. This method is an extended form of the Kennett algorithm that is efficient for multioffset vertical seismic profiling. The model consists of a stack of homogeneous plane layers, and the response is computed iteratively by successive inclusion of deeper layers. In each layer, the 6×6 system matrix A is diagonalized numerically; this permits treatment of triclinic materials, i.e., those with the lowest possible symmetry. Jacobi iteration is an efficient way to diagonalize A because the entries of A change little from one wavenumber to the next. When the material properties are frequency dependent, the wavenumber loops are inside the frequency loop, and the computation is slow even on a supercomputer. When the material parameters are frequency independent, it is better to make frequency the deepest loop, with diagonalization of A outside the loop, in which case vectorization gives a relatively rapid computation. Temporal wraparound is avoided by making use of complex frequencies, and spatial aliasing is avoided by using a generalized Filon's method to evaluate both wavenumber integrals. Various methods of generating anisotropic elastic constants from microlayers, cracks, and fractures and joints are discussed. Example computations are given for azimuthally isotropic and azimuthally anisotropic (AA) earth models. Comparison of computations using single and double wavenumber integrations for a realistic AA model shows that single wavenumber integration often gives incorrect answers, especially at near offsets. Errors due to use of a single wavenumber integration are explained heuristically by use of wave front diagrams for point and line sources.

Journal ArticleDOI
M. Gerndt
TL;DR: This paper describes special aspects of MIMD parallelization in SUPERB, an interactive SIMD/MIMD parallelizing system for the SUPRENUM machine, focusing on the updating of distributed variables in parallelized applications.
Abstract: This paper describes special aspects of MIMD parallelization in SUPERB. SUPERB is an interactive SIMD/MIMD parallelizing system for the SUPRENUM machine. The main topic of this paper is the updating of distributed variables in parallelized applications. The intended applications perform local computations on a large data domain.
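A common realization of this kind of update, shown in the sketch below under the assumption of a block-distributed one-dimensional array with overlap (ghost) cells, is to copy the neighbouring process's boundary elements into local overlap storage before each local stencil computation. The sketch simulates two processes within one program and is a generic illustration of overlap updating, not SUPERB's actual code generation.

```python
# Sketch: updating overlap (ghost) cells of a block-distributed 1-D array
# before a local 3-point stencil computation. Two "processes" are simulated
# as slices of one array; a real system would exchange these values by
# message passing.
import numpy as np

n = 16
global_a = np.arange(n, dtype=float)
halves = [global_a[: n // 2].copy(), global_a[n // 2:].copy()]

# Local arrays with one overlap cell on each side.
local = [np.zeros(n // 2 + 2) for _ in halves]
for p, block in enumerate(halves):
    local[p][1:-1] = block

# Overlap update: copy boundary elements from the neighbouring block.
local[0][-1] = halves[1][0]          # right ghost cell of process 0
local[1][0] = halves[0][-1]          # left ghost cell of process 1
# (outermost ghosts stay 0 to mimic a fixed boundary condition)

# Local computation: 3-point average over owned elements only.
result = [0.25 * (loc[:-2] + 2 * loc[1:-1] + loc[2:]) for loc in local]
print(np.concatenate(result))
```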

Journal ArticleDOI
TL;DR: An approximate numerical reparameterization technique that improves on a previous algorithm by using a different numerical integration procedure that recursively subdivides the curve and creates a table of the subdivision points is presented.
Abstract: Specifying constraints on motion is simpler if the curve is parameterized by arc length, but many parametric curves of practical interest cannot be parameterized by arc length. An approximate numerical reparameterization technique that improves on a previous algorithm by using a different numerical integration procedure that recursively subdivides the curve and creates a table of the subdivision points is presented. The use of the table greatly reduces the computation required for subsequent arc length calculations. After table construction, the algorithm takes nearly constant time for each arc length calculation. A linear increase in the number of control points can result in a more than linear increase in computation. Examples of this type of behavior are shown.
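The table-driven idea can be sketched as follows: recursively subdivide the parameter interval until the chord length of each piece approximates its arc length, record cumulative lengths at the subdivision points, and answer later arc-length queries by a binary search plus linear interpolation in that table. The curve, tolerance, and interpolation details below are illustrative choices, not the specific integration rule of the paper.

```python
# Sketch: arc-length table built by recursive subdivision, then (nearly)
# constant-time arc-length queries by binary search + interpolation.
import bisect
import math

def curve(t):                                # any parametric curve; an example here
    return (math.cos(t), math.sin(2 * t))

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def build_table(f, t0, t1, tol=1e-4):
    params, lengths = [t0], [0.0]
    def subdivide(a, b):
        m = 0.5 * (a + b)
        chord = dist(f(a), f(b))
        two_chords = dist(f(a), f(m)) + dist(f(m), f(b))
        if two_chords - chord > tol:         # not flat enough: recurse
            subdivide(a, m)
            subdivide(m, b)
        else:
            params.append(b)
            lengths.append(lengths[-1] + two_chords)
    subdivide(t0, t1)
    return params, lengths

def arc_length(params, lengths, t):          # cumulative length up to parameter t
    i = bisect.bisect_right(params, t) - 1
    if i >= len(params) - 1:
        return lengths[-1]
    frac = (t - params[i]) / (params[i + 1] - params[i])
    return lengths[i] + frac * (lengths[i + 1] - lengths[i])

params, lengths = build_table(curve, 0.0, math.pi)
print(arc_length(params, lengths, math.pi / 2), lengths[-1])
```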

Journal ArticleDOI
TL;DR: Parallel algorithms on SIMD (single-instruction stream multiple-data stream) machines for hierarchical clustering and cluster validity computation are proposed; the machine model uses a parallel memory system and an alignment network to facilitate parallel access to both the pattern matrix and the proximity matrix.
Abstract: Parallel algorithms on SIMD (single-instruction stream multiple-data stream) machines for hierarchical clustering and cluster validity computation are proposed. The machine model uses a parallel memory system and an alignment network to facilitate parallel access to both pattern matrix and proximity matrix. For a problem with N patterns, the number of memory accesses is reduced from O(N^3) on a sequential machine to O(N^2) on an SIMD machine with N PEs.

Proceedings ArticleDOI
01 Aug 1990
TL;DR: In this paper a weight discretization paradigm is presented for back(ward error) propagation neural networks which can work with a very limited number of discretized levels.
Abstract: Neural networks are a primary candidate architecture for optical computing. One of the major problems in using neural networks for optical computers is that the information holders, the interconnection strengths (or weights), are normally real-valued (continuous), whereas optics (light) is only capable of representing a few distinguishable intensity levels (discrete). In this paper a weight discretization paradigm is presented for back(ward error) propagation neural networks which can work with a very limited number of discretization levels. The number of interconnections in a (fully connected) neural network grows quadratically with the number of neurons of the network. Optics can handle a large number of interconnections because light beams do not interfere with each other. A vast number of light beams can therefore be used per unit of area. However, the number of different values one can represent in a light beam is very limited. A flexible, portable (machine-independent) neural network software package which is capable of weight discretization is presented. The development of the software and some experiments have been done on personal computers. The major part of the testing, which requires a lot of computation, has been done using a CRAY X-MP/24 supercomputer.
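At the core of weight discretization is a quantizer that maps real-valued weights onto a small, fixed set of levels, as in the hedged sketch below; the symmetric, uniformly spaced levels and the clipping range are illustrative assumptions, and the paper's contribution lies in making backpropagation train well under such a restriction rather than in the quantizer itself.

```python
# Sketch: quantize a real-valued weight matrix onto L evenly spaced levels
# in [-w_max, w_max], the kind of restriction an optical implementation
# imposes on interconnection strengths.
import numpy as np

def discretize(weights, levels=7, w_max=1.0):
    step = 2.0 * w_max / (levels - 1)                    # spacing between levels
    clipped = np.clip(weights, -w_max, w_max)
    return np.round(clipped / step) * step

rng = np.random.default_rng(1)
w = rng.normal(scale=0.5, size=(4, 4))
print(discretize(w, levels=7))                           # only 7 distinct values appear
```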


Proceedings ArticleDOI
01 Feb 1990
TL;DR: A novel implementation of the progressive refinement radiosity algorithm is described using the capabilities of a multiprocessor graphics workstation and speedups of a factor of 40 or more over the equivalent software implementation are observed.
Abstract: This paper describes a novel implementation of the progressive refinement radiosity algorithm. Algorithm performance is greatly enhanced using the capabilities of a multiprocessor graphics workstation. Hemi-cube item buffers are produced using the graphics hardware while the remaining computations are performed in parallel on the multiple host processors. Speedups of a factor of 40 or more over the equivalent software implementation are observed. Load balancing issues are discussed and a system performance model is developed based on actual results. Additionally, a new user interface scheme is presented where the radiosity calculations and walk-through tasks are separated. At each new iteration, the radiosity algorithm automatically updates colors used by the viewing program via shared memory while simultaneously obtaining hints on where to further refine the solution.
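The host-side part of the computation follows the standard progressive refinement ("shooting") loop sketched below: repeatedly select the patch with the most unshot energy and distribute it to all other patches through form factors. The form-factor matrix here is simply given as input; in the paper it is produced per iteration by rendering hemi-cube item buffers on the graphics hardware, which the sketch does not model.

```python
# Sketch of the progressive refinement radiosity loop (scalar radiosities,
# precomputed form factors F[i][j] and reflectances; schematic only).
import numpy as np

def progressive_refinement(emission, reflectance, F, areas, iterations=100):
    radiosity = emission.copy()
    unshot = emission.copy()
    for _ in range(iterations):
        i = int(np.argmax(unshot * areas))               # patch with most unshot energy
        shoot = unshot[i]
        unshot[i] = 0.0
        # Distribute patch i's unshot radiosity to every other patch j,
        # using reciprocity F_ji = F_ij * A_i / A_j.
        delta = reflectance * F[i] * shoot * areas[i] / areas
        radiosity += delta
        unshot += delta
    return radiosity

# Tiny 3-patch example with made-up geometry.
F = np.array([[0.0, 0.3, 0.2],
              [0.3, 0.0, 0.4],
              [0.2, 0.4, 0.0]])
print(progressive_refinement(np.array([1.0, 0.0, 0.0]),
                             np.array([0.5, 0.7, 0.6]),
                             F, np.ones(3)))
```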

Journal ArticleDOI
TL;DR: In this article, a computational method has been developed to treat the unsteady aerodynamic interaction between a helicopter rotor, wake, and fuselage, where two existing codes, a lifting line-prescribed wake rotor analysis and a source panel fuselage analysis, were modified and coupled to allow prediction of unsteady fuselage pressures and airloads.
Abstract: A computational method has been developed to treat the unsteady aerodynamic interaction between a helicopter rotor, wake, and fuselage. Two existing codes, a lifting line-prescribed wake rotor analysis and a source panel fuselage analysis, were modified and coupled to allow prediction of unsteady fuselage pressures and airloads. A prescribed displacement technique was developed to position the rotor wake about the fuselage. Also coupled into the method were optional blade dynamics or rigid blade performance analyses to set the rotor operating conditions. Sensitivity studies were performed to determine the influence of the wake and fuselage geometry on the computational results. Solutions were computed for an ellipsoidal fuselage and a four bladed rotor at several advance ratios, using both the classical helix and the generalized distorted wake model. Results are presented that describe the induced velocities, pressures, and airloads on the fuselage and the induced velocities and bound circulation at the rotor. The ability to treat arbitrary geometries was demonstrated using a simulated helicopter fuselage. Initial computations were made to simulate the geometry of an experimental rotor-fuselage interaction study performed at the Georgia Institute of Technology.


Journal ArticleDOI
TL;DR: A fast computational method, based on a well-known embedding technique, to compute the average costs under a given (n, N)-policy is presented, and a heuristic based on this computational method is presented by which the optimal values of n and N can be determined.

Journal ArticleDOI
TL;DR: In this paper, the stability robustness of polynomials with coefficients which are affine functions of the parameter perturbations is investigated and a simple and numerically effective procedure, which is based on the Hahn-Banach theorem of convex analysis and which is applicable for any arbitrary norm, is obtained.

Journal ArticleDOI
TL;DR: The FlowFront as mentioned in this paper is a C program that simulates the advance of a non-Newtonian fluid, such as lava, over a digital elevation model (DEM) and produces a map of flow thicknesses, normally as a raster image for display.