Showing papers on "Computation published in 1984"


Journal ArticleDOI
TL;DR: In this article, a convolution-backprojection formula is deduced for direct reconstruction of a three-dimensional density function from a set of two-dimensional projections; the formula is approximate but has useful properties, including errors that are relatively small in many practical instances and a form that leads to convenient computation.
Abstract: A convolution-backprojection formula is deduced for direct reconstruction of a three-dimensional density function from a set of two-dimensional projections. The formula is approximate but has useful properties, including errors that are relatively small in many practical instances and a form that leads to convenient computation. It reduces to the standard fan-beam formula in the plane that is perpendicular to the axis of rotation and contains the point source. The algorithm is applied to a mathematical phantom as an example of its performance.

5,329 citations
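The convolution-backprojection idea is easiest to see in the simpler two-dimensional parallel-beam case: each projection is convolved with a ramp filter and then smeared back (backprojected) across the image along its projection direction. Below is a minimal NumPy sketch of that 2D case, assuming a `sinogram[angle, detector]` array and the function name `filtered_backprojection`; it illustrates the filtered-backprojection principle only, not the paper's cone-beam formula (which, per the abstract, reduces to the standard fan-beam formula in the mid-plane).

```python
import numpy as np

def filtered_backprojection(sinogram, angles):
    """Simplified 2D parallel-beam convolution-backprojection.

    sinogram: array of shape (n_angles, n_det), one row per projection angle.
    angles:   projection angles in radians, same length as the sinogram.
    Returns an (n_det, n_det) reconstruction on a centered grid.
    """
    n_angles, n_det = sinogram.shape

    # Ramp (|frequency|) filter applied to each projection in Fourier space.
    ramp = np.abs(np.fft.fftfreq(n_det))
    filtered = np.real(np.fft.ifft(np.fft.fft(sinogram, axis=1) * ramp, axis=1))

    # Backprojection: every pixel accumulates the filtered value at the
    # detector position it projects onto for each view.
    xs = np.arange(n_det) - n_det / 2.0
    X, Y = np.meshgrid(xs, xs)
    recon = np.zeros((n_det, n_det))
    for proj, theta in zip(filtered, angles):
        t = X * np.cos(theta) + Y * np.sin(theta) + n_det / 2.0
        idx = np.clip(np.round(t).astype(int), 0, n_det - 1)
        recon += proj[idx]
    return recon * np.pi / (2 * n_angles)
```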


Journal ArticleDOI
TL;DR: In this article, the problem of motion measurement is formulated as the computation of an instantaneous two-dimensional velocity field, and a smoothness constraint on the velocity field is introduced, based on the physical assumption that surfaces are generally smooth.

349 citations


Journal ArticleDOI
TL;DR: In this paper, the authors developed a theory for an excess electron in simple fluids, based upon the path integral formulation of quantum theory, which maps the behavior of the electron onto that of a classical isomorphic polymer.
Abstract: In this paper we develop a theory for an excess electron in simple fluids. It is based upon the path integral formulation of quantum theory, which maps the behavior of the electron onto that of a classical isomorphic polymer. The influence functional, i.e., the solvent-induced potential between different sites on the polymer, is estimated from the RISM integral equation. The functional is not pair-decomposable and the resulting polymer problem is not trivial to solve. The evaluation of the electron partition function and correlation functions is pursued with two approximations: (i) a mean-field approximation which neglects the role of polymer fluctuations on the solvent-induced interactions; (ii) a polaron approximation which is a linear approximation in the sense that it neglects large fluctuations in the polymer structure itself. The theory brings to light a new approach to the computation of multiparticle correlation functions in fluids, and it provides what appears to be a practical scheme for attacking a number of other problems, including the analysis of polymer conformations in liquid environments. While this paper focuses on the role of packing forces in these systems, the theory can be generalized to include polarization and charge interactions as well.

217 citations
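In the path-integral isomorphism, the quantum electron at inverse temperature β is represented by a classical ring polymer of P beads joined by harmonic springs, with the external potential shared equally among the beads. The sketch below shows that isomorphic-polymer energy in the primitive discretization; the simple per-bead potential stands in for the non-pair-decomposable, solvent-induced influence functional the paper derives from the RISM equation, and the function and variable names are illustrative only.

```python
import numpy as np

HBAR = 1.0   # reduced units

def ring_polymer_energy(beads, beta, mass, potential):
    """Potential energy of the classical ring polymer isomorphic to one quantum
    particle at inverse temperature beta (primitive path-integral discretization).

    beads: array of shape (P, dim), positions of the P polymer beads.
    The springs have stiffness mass*P/(beta*hbar)**2, and the external potential
    enters as the average of V over the beads."""
    P = len(beads)
    spring_k = mass * P / (beta * HBAR) ** 2
    bond_vectors = beads - np.roll(beads, -1, axis=0)          # r_i - r_{i+1}, cyclic
    spring_energy = 0.5 * spring_k * np.sum(bond_vectors ** 2)
    external_energy = np.mean([potential(r) for r in beads])   # (1/P) * sum_i V(r_i)
    return spring_energy + external_energy

# Example: 32 beads for a particle in a 3D harmonic trap V(r) = 0.5*|r|^2.
beads = np.random.default_rng(0).normal(size=(32, 3))
print(ring_polymer_energy(beads, beta=8.0, mass=1.0, potential=lambda r: 0.5 * r @ r))
```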


Journal ArticleDOI
TL;DR: This paper presents a computational solution to an important optimization problem arising in optimal sensitivity theory; the multivariable problem is treated exactly as the scalar problem, with stability constraints handled via interpolation.
Abstract: This paper presents a computational solution to an important optimization problem arising in optimal sensitivity theory. The approach is to treat the multivariable problem exactly as the scalar problem in that stability constraints are handled via interpolation. The resulting computations are easily implemented using existing methods.

179 citations


Journal ArticleDOI
TL;DR: A linear-time algorithm and its short computer program in BASIC for k-out-of-n:G system reliability computation are presented.
Abstract: A linear-time algorithm and its short computer program in BASIC for k-out-of-n:G system reliability computation are presented.

164 citations
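A k-out-of-n:G system is good when at least k of its n components work. As a generic illustration (not the paper's BASIC program, and O(n·k) rather than linear time), the reliability can be computed by a simple recursion over the components:

```python
def k_out_of_n_reliability(k, p):
    """Reliability of a k-out-of-n:G system with independent components.

    p[i] is the reliability of component i; the system works when at least k
    components work. r[j] tracks P(at least j of the components seen so far work).
    """
    r = [1.0] + [0.0] * k          # before any component: "at least 0" is certain
    for pi in p:
        # Update from high j to low j so each component is counted exactly once:
        # P(>= j | with comp) = pi * P(>= j-1 | without) + (1 - pi) * P(>= j | without)
        for j in range(k, 0, -1):
            r[j] = pi * r[j - 1] + (1.0 - pi) * r[j]
    return r[k]

# Example: 2-out-of-3:G system with component reliabilities 0.9, 0.8, 0.7.
print(k_out_of_n_reliability(2, [0.9, 0.8, 0.7]))   # 0.902
```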


Proceedings ArticleDOI
01 Jan 1984
TL;DR: An algorithm for computing ray-traced pictures is presented that adaptively subdivides scenes into S subregions, each with roughly uniform load; this can yield speedups of O(S^(2/3)) over the standard algorithm.
Abstract: An algorithm for computing ray-traced pictures is presented, which adaptively subdivides scenes into S subregions, each with roughly uniform load. It can yield speedups of O(S^(2/3)) over the standard algorithm. This algorithm can be mapped onto a parallel architecture consisting of a three-dimensional array of computers which operate autonomously. The algorithm and architecture are well matched, so that communication overhead is small with respect to the computation, for sufficiently complex scenes. This allows close to linear improvements in performance, even with thousands of computers, in addition to the improvement due to subdivision. The algorithm and architecture provide mechanisms to gracefully degrade in response to excessive load. The architecture also tolerates failures of computers without errors in the computation.

164 citations
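One simple way to obtain subregions of roughly uniform load is to split the set of scene objects recursively at the median along the widest axis. The sketch below is a hypothetical illustration of that idea, using object count as a proxy for load; the paper's own adaptive subdivision criterion may differ.

```python
import numpy as np

def subdivide(points, depth):
    """Recursively split object centroids into 2**depth subregions, each holding
    roughly the same number of objects (a simple stand-in for 'uniform load').
    A k-d-style median split chosen here only for illustration."""
    if depth == 0 or len(points) <= 1:
        return [points]
    axis = np.argmax(points.max(axis=0) - points.min(axis=0))   # widest axis
    order = np.argsort(points[:, axis])
    mid = len(points) // 2
    left, right = points[order[:mid]], points[order[mid:]]
    return subdivide(left, depth - 1) + subdivide(right, depth - 1)

# Example: 1000 random object centroids split into 8 subregions.
regions = subdivide(np.random.rand(1000, 3), depth=3)
print([len(r) for r in regions])   # eight subregions with equal object counts
```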


Journal ArticleDOI
TL;DR: In this article, the authors present a detailed study of the inclusion concept in dynamic systems, which is a suitable mathematical framework for comparing systems with different dimensions and offers immediate results in reduced-order modeling and the overlapping decentralized control of complex systems.
Abstract: The purpose of this paper is to present a detailed study of the inclusion concept in dynamic systems, which is a suitable mathematical framework for comparing systems with different dimensions. The framework offers immediate results in reduced-order modeling and the overlapping decentralized control of complex systems. The presentation, which is limited to linear constant systems, relies on both the matrix algebra (computations) and the geometric elements (structure) to provide a balanced view of the issues involved in the concept of inclusion. The framework is quite broad, and has been used to consider nonlinear and time-varying systems, as well as systems with hereditary and stochastic effects.

161 citations


Journal ArticleDOI
TL;DR: Motion measurement is formulated as the computation of an instantaneous two-dimensional velocity field from the changing image; an additional smoothness constraint on the velocity field, based on the physical assumption that surfaces are generally smooth, allows a unique field to be computed, and empirical studies show the predictions of this computation to be consistent with human motion perception.
Abstract: The organization of movement in the changing retinal image provides a valuable source of information for analysing the environment in terms of objects, their motion in space, and their three-dimensional structure. A description of this movement is not provided to our visual system directly, however; it must be inferred from the pattern of changing intensity that reaches the eye. This paper examines the problem of motion measurement, which we formulate as the computation of an instantaneous two-dimensional velocity field from the changing image. Initial measurements of motion take place at the location of significant intensity changes. These measurements provide only one component of local velocity, and must be integrated to compute the two-dimensional velocity field. A fundamental problem for this integration stage is that the velocity field is not determined uniquely from information available in the changing image. We formulate an additional constraint of smoothness of the velocity field, based on the physical assumption that surfaces are generally smooth, which allows the computation of a unique velocity field. A theoretical analysis of the conditions under which this computation yields the correct velocity field suggests that the solution is physically plausible. Empirical studies show the predictions of this computation to be consistent with human motion perception.

160 citations
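The best-known way of turning a smoothness assumption into an algorithm is the Horn-Schunck formulation, which iteratively trades off the brightness-constancy equation against smoothness of the velocity field (u, v). Below is a minimal sketch of that variant with illustrative names and parameters; note that the paper integrates measurements made at significant intensity changes, whereas this sketch imposes smoothness over the whole image.

```python
import numpy as np
from scipy.ndimage import convolve

def horn_schunck(im1, im2, alpha=10.0, n_iter=100):
    """Estimate a dense velocity field (u, v) between two frames by minimizing
    the brightness-constancy error plus alpha**2 times a smoothness term."""
    im1 = im1.astype(float)
    im2 = im2.astype(float)
    # Spatial and temporal intensity derivatives (simple 2x2 stencils).
    kx = np.array([[-1, 1], [-1, 1]]) * 0.25
    ky = np.array([[-1, -1], [1, 1]]) * 0.25
    Ix = convolve(im1, kx) + convolve(im2, kx)
    Iy = convolve(im1, ky) + convolve(im2, ky)
    It = convolve(im2 - im1, np.ones((2, 2)) * 0.25)
    # Kernel giving the local average of the flow (used by the smoothness term).
    avg = np.array([[1/12, 1/6, 1/12], [1/6, 0, 1/6], [1/12, 1/6, 1/12]])
    u = np.zeros_like(im1)
    v = np.zeros_like(im1)
    for _ in range(n_iter):
        u_bar = convolve(u, avg)
        v_bar = convolve(v, avg)
        common = (Ix * u_bar + Iy * v_bar + It) / (alpha**2 + Ix**2 + Iy**2)
        u = u_bar - Ix * common
        v = v_bar - Iy * common
    return u, v
```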


Book
09 Aug 1984
TL;DR: In this article, the authors introduce error-free computation for digital computers and provide an introduction to the theory of error-free computation with respect to single- and multiple-modulus residue number systems.
Abstract: This book is written as an introduction to the theory of error-free computation. In addition, we include several chapters that illustrate how error-free computation can be applied in practice. The book is intended for seniors and first-year graduate students in fields of study involving scientific computation using digital computers, and for researchers (in those same fields) who wish to obtain an introduction to the subject. We are motivated by the fact that there are large classes of ill-conditioned problems, and there are numerically unstable algorithms, and in either or both of these situations we cannot tolerate rounding errors during the numerical computations involved in obtaining solutions to the problems. Thus, it is important to study finite number systems for digital computers which have the property that computation can be performed free of rounding errors. In Chapter I we discuss single-modulus and multiple-modulus residue number systems and arithmetic in these systems, where the operands may be either integers or rational numbers. In Chapter II we discuss finite-segment p-adic number systems and their relationship to the p-adic numbers of Hensel [1908]. Each rational number in a certain finite set is assigned a unique Hensel code, and arithmetic operations using Hensel codes as operands are mathematically equivalent to those same arithmetic operations using the corresponding rational numbers as operands. Finite-segment p-adic arithmetic shares with residue arithmetic the property that it is free of rounding errors.

124 citations
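In a multiple-modulus residue number system, an integer is represented by its residues with respect to several pairwise coprime moduli; addition and multiplication act component-wise without rounding error, and the result is recovered with the Chinese Remainder Theorem. A minimal sketch follows, with the moduli chosen only for illustration:

```python
from math import prod

MODULI = (251, 253, 255, 256)          # pairwise coprime; representable range is their product
M = prod(MODULI)

def encode(x):
    """Represent integer x by its residues modulo each modulus."""
    return tuple(x % m for m in MODULI)

def add(a, b):
    return tuple((ai + bi) % m for ai, bi, m in zip(a, b, MODULI))

def mul(a, b):
    return tuple((ai * bi) % m for ai, bi, m in zip(a, b, MODULI))

def decode(r):
    """Chinese Remainder Theorem: recover the integer (mod M) from its residues."""
    x = 0
    for ri, m in zip(r, MODULI):
        Mi = M // m
        x += ri * Mi * pow(Mi, -1, m)   # pow(Mi, -1, m) is the inverse of Mi mod m
    return x % M

# Exact, rounding-error-free arithmetic: 1234 * 5678 + 91 computed in residues.
print(decode(add(mul(encode(1234), encode(5678)), encode(91))))   # 7006743
```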


Proceedings ArticleDOI
01 Jun 1984
TL;DR: This paper describes a method for annotating each statement s in a program with a set MOD(s) containing those variables whose values can be changed as a result of executing s.
Abstract: To understand when it is safe to apply a given optimization, a compiler must have explicit knowledge about the impact of executing individual statements on the values of variables accessible to the statements. The impact of a statement is termed its side effect. This paper describes a method for annotating each statement s in a program with a set MOD(s) containing those variables whose values can be changed as a result of executing s. For statements which contain no procedure calls, the side effects can be determined by simple examination of the statement with some knowledge of the semantics of the source language.
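For call-free statements the MOD set falls out of the statement's assignment targets; the difficult part, which the paper addresses, is propagating side effects through procedure calls. The toy sketch below uses a hypothetical statement representation (the class and helper names are mine, not the paper's notation):

```python
from dataclasses import dataclass

@dataclass
class Assign:
    target: str      # variable assigned to
    rhs: str         # expression text (irrelevant for MOD)

@dataclass
class Call:
    procedure: str
    args: list

def mod(stmt, callee_mod):
    """MOD(s): variables whose values may change when s executes.

    For a call-free assignment this is just the assignment target; for a call it
    must include whatever the callee may modify, supplied here as a precomputed
    summary 'callee_mod' -- the interprocedural part the paper is about."""
    if isinstance(stmt, Assign):
        return {stmt.target}
    if isinstance(stmt, Call):
        return set(callee_mod.get(stmt.procedure, set()))
    return set()

print(mod(Assign("x", "y + z"), {}))                        # {'x'}
print(mod(Call("update", ["a"]), {"update": {"a", "g"}}))   # {'a', 'g'}
```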


Journal ArticleDOI
TL;DR: The work on calculating optic flow from the motion of edge features in an image sequence is reviewed; it is based on a spatiotemporal extension of the Marr-Hildreth edge detection scheme that smooths the data over time as well as over the spatial (image) coordinates.
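A spatiotemporal extension of the Marr-Hildreth operator smooths the image sequence with a Gaussian over time as well as space before taking the Laplacian whose zero-crossings mark edges. Below is a minimal sketch of that smoothing-plus-Laplacian step; the function name and parameter values are illustrative only.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, laplace

def spatiotemporal_zero_crossing_map(sequence, sigma_t=1.0, sigma_s=2.0):
    """Spatiotemporal Marr-Hildreth-style edge operator.

    sequence: array of shape (n_frames, height, width).
    Smooth with a Gaussian over (t, y, x), apply the spatial Laplacian frame by
    frame, and mark sign changes along x as a simple zero-crossing indicator."""
    smoothed = gaussian_filter(np.asarray(sequence, dtype=float),
                               sigma=(sigma_t, sigma_s, sigma_s))
    log = np.stack([laplace(frame) for frame in smoothed])
    return np.signbit(log[:, :, :-1]) != np.signbit(log[:, :, 1:])

# Example: a bright square moving one pixel per frame over a noisy background.
frames = np.random.default_rng(0).normal(0.0, 0.05, size=(8, 64, 64))
for t in range(8):
    frames[t, 20:40, 10 + t:30 + t] += 1.0
edges = spatiotemporal_zero_crossing_map(frames)
print(edges.shape, int(edges[4].sum()))
```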


Journal ArticleDOI
TL;DR: In this paper, the evolution of the classical "billiard ball computer" is analyzed and shown to result in a one-bit increase of entropy per step of computation; quantum spin computers, by contrast, are not only microscopically but also operationally reversible, and readout of the output of a quantum computation is shown not to interfere with this reversibility.
Abstract: Classical and quantum models of dynamically reversible computers are considered. Instabilities in the evolution of the classical 'billiard ball computer' are analyzed and shown to result in a one-bit increase of entropy per step of computation. 'Quantum spin computers', on the other hand, are not only microscopically, but also operationally reversible. Readout of the output of quantum computation is shown not to interfere with this reversibility. Dissipation, while avoidable in principle, can be used in practice along with redundancy to prevent errors.
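The billiard-ball model is the classic mechanical realization of reversible logic such as the Fredkin (controlled-swap) gate, which maps input bits to output bits bijectively, so no information is erased during computation. A small sketch of that reversibility follows; the choice of gate is illustrative and not taken from the paper.

```python
def fredkin(c, a, b):
    """Fredkin (controlled-swap) gate: if the control bit c is 1, swap a and b.
    The map (c, a, b) -> outputs is its own inverse, hence reversible."""
    return (c, b, a) if c else (c, a, b)

# Every input triple is recovered by applying the gate twice: no information lost.
for c in (0, 1):
    for a in (0, 1):
        for b in (0, 1):
            assert fredkin(*fredkin(c, a, b)) == (c, a, b)

# With a constant 0 on the last input, the third output equals c AND a,
# which is one way reversible gates realize ordinary Boolean logic.
print(fredkin(1, 1, 0)[2], fredkin(1, 0, 0)[2], fredkin(0, 1, 0)[2])   # 1 0 0
```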

Book ChapterDOI
TL;DR: A self-contained account of the relationship between the Gaussian arithmetic-geometric mean iteration and the fast computation of elementary functions; a particularly pleasant algorithm for π is one of the by-products.
Abstract: We produce a self-contained account of the relationship between the Gaussian arithmetic-geometric mean iteration and the fast computation of elementary functions. A particularly pleasant algorithm for π is one of the by-products.
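AGM-based algorithms for π are quadratically convergent: each iteration roughly doubles the number of correct digits. Below is a sketch of the closely related Gauss-Legendre (Brent-Salamin) iteration in ordinary double precision; in practice an arbitrary-precision arithmetic package would be used to exploit the convergence rate fully.

```python
from math import sqrt

def agm_pi(iterations=3):
    """Gauss-Legendre / Brent-Salamin AGM iteration for pi.
    Each step roughly doubles the number of correct digits."""
    a, b, t, p = 1.0, 1.0 / sqrt(2.0), 0.25, 1.0
    for _ in range(iterations):
        a_next = (a + b) / 2.0          # arithmetic mean
        b = sqrt(a * b)                 # geometric mean
        t -= p * (a - a_next) ** 2
        a, p = a_next, 2.0 * p
    return (a + b) ** 2 / (4.0 * t)

print(agm_pi())   # 3.141592653589793 to double precision after three iterations
```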

Journal ArticleDOI
TL;DR: A well established algorithm for stress computation is reviewed in detail, illustrating a number of computational hazards and proposing simple solutions.
Abstract: Stress computation in finite element materially non‐linear analysis is an important problem that has perhaps been receiving less attention than it deserves. Not only does it consume a significant share of total computer time, but also inaccuracies and ‘savings’ thereupon may well jeopardize the gains aimed at by sophisticating elsewhere the numerical strategy. A well established algorithm for stress computation is reviewed in detail, illustrating a number of computational hazards and proposing simple solutions.
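One widely used stress-computation algorithm in materially non-linear finite element analysis is the elastic-predictor / plastic-corrector (return-mapping) update. The one-dimensional sketch below, with linear isotropic hardening and illustrative material constants, is intended only to show the kind of computation involved, not the specific algorithm reviewed in the paper.

```python
def return_map_1d(strain_inc, state, E=200e3, H=10e3, sigma_y=250.0):
    """One strain increment of 1D elastoplasticity with linear isotropic hardening.

    state = (stress, plastic_strain, alpha) from the previous converged step.
    E: Young's modulus, H: hardening modulus, sigma_y: initial yield stress (MPa)."""
    stress, eps_p, alpha = state
    # Elastic predictor: assume the whole increment is elastic.
    trial = stress + E * strain_inc
    f = abs(trial) - (sigma_y + H * alpha)      # trial yield function
    if f <= 0.0:
        return trial, eps_p, alpha              # step is purely elastic
    # Plastic corrector: return the trial stress to the updated yield surface.
    dgamma = f / (E + H)
    sign = 1.0 if trial > 0.0 else -1.0
    stress = trial - E * dgamma * sign
    return stress, eps_p + dgamma * sign, alpha + dgamma

# Drive the material through a few strain increments.
state = (0.0, 0.0, 0.0)
for d_eps in [0.001, 0.001, 0.001]:
    state = return_map_1d(d_eps, state)
    print(state)
```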

Journal ArticleDOI
TL;DR: In this article, the question of the energy dissipation in the computational process is considered, and it is found that dissipation is an integral part of computation, and a complementarity is suggested between systems that are describable in thermodynamic terms and systems that can be used for computation.
Abstract: The question of the energy dissipation in the computational process is considered. Contrary to previous studies, dissipation is found to be an integral part of computation. A complementarity is suggested between systems that are describable in thermodynamic terms and systems that can be used for computation.

Journal ArticleDOI
TL;DR: In this paper, a new Givens ordering is shown, empirically and by an approximate theoretical analysis, to take appreciably fewer stages than the standard Givens ordering.
Abstract: A new Givens ordering is shown, empirically and by an approximate theoretical analysis, to take appreciably fewer stages than the standard scheme. Sharper error bounds than Gentleman's ensue, and the scheme is better suited to parallel computation. Other schemes, less efficient but more easily analysed, are discussed. The effect of a possible limit in practice on the number of simultaneous computations is considered.
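Each stage of such a scheme applies Givens rotations that each zero one subdiagonal entry; rotations acting on disjoint row pairs can be applied simultaneously, which is why the ordering governs the number of parallel stages. The sketch below constructs and applies one rotation (the new ordering itself is not reproduced here, and the helper names are illustrative):

```python
import numpy as np

def givens(a, b):
    """Return (c, s) such that [[c, s], [-s, c]] @ [a, b] = [r, 0]."""
    r = np.hypot(a, b)
    if r == 0.0:
        return 1.0, 0.0
    return a / r, b / r

def apply_givens(A, i, j, k):
    """Rotate rows i and j of A so that A[j, k] becomes zero."""
    c, s = givens(A[i, k], A[j, k])
    Ai, Aj = A[i].copy(), A[j].copy()
    A[i] = c * Ai + s * Aj
    A[j] = -s * Ai + c * Aj

# Zero the (1, 0) entry of a small matrix; repeating this over all subdiagonal
# entries (in some ordering) yields the R factor of a QR factorization.
A = np.array([[3.0, 1.0], [4.0, 2.0]])
apply_givens(A, 0, 1, 0)
print(A)   # [[5.0, 2.2], [0.0, 0.4]]
```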

Journal ArticleDOI
TL;DR: A subclass of so-called analyser programs has been chosen for which all partial computation that becomes possible during mixed computation is defined over a finite domain of nonsuspended variables; this provides termination of mixed computation and also allows a control structure encoded in the data to be embodied in the residual program.
Abstract: A polyvariant mixed computation algorithm for low-level non-structured programs is presented. A subclass of so-called analyser programs has been chosen for which all partial computation that becomes possible during mixed computation is defined over a finite domain of nonsuspended variables. This not only provides termination of mixed computation but also allows a control structure encoded in the data to be embodied in the residual program.
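Mixed computation (partial evaluation) carries out at specialization time whatever computation depends only on known, nonsuspended data and emits residual code for the rest. Below is a classic toy illustration, not taken from the paper: specializing integer exponentiation to a known exponent, where the loop is a control structure driven entirely by known data and therefore disappears from the residual program.

```python
def specialize_power(n):
    """Mixed computation of power(x, n) with the exponent n known and x suspended.

    The loop over n runs now, at specialization time; only operations involving
    the suspended variable x are emitted into the residual program (represented
    here as Python source text)."""
    expr = "1"
    for _ in range(n):          # control structure executed during mixed computation
        expr = f"({expr} * x)"  # operation on the suspended variable: emitted
    return f"def power_{n}(x):\n    return {expr}\n"

residual = specialize_power(3)
print(residual)
# def power_3(x):
#     return (((1 * x) * x) * x)

namespace = {}
exec(residual, namespace)       # the residual program is an ordinary program
print(namespace["power_3"](5))  # 125
```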

Journal ArticleDOI
TL;DR: The converging squares algorithm is a method for locating peaks in sampled data of two dimensions or higher that is robust with respect to noise and data type, and computationally efficient.
Abstract: The converging squares algorithm is a method for locating peaks in sampled data of two dimensions or higher. There are two primary advantages of this algorithm over conventional methods. First, it is robust with respect to noise and data type. There are no empirical parameters to permit adjustment of the process, so results are completely objective. Second, the method is computationally efficient. The inherent structure of the algorithm is that of a resolution pyramid. This enhances computational efficiency as well as contributing to the method's noise immunity. The algorithm is detailed for two-dimensional data, and is described for three-dimensional data. Quantitative comparisons of computation are made with two conventional peak-picking methods. Applications to biomedical image analysis and to industrial inspection tasks are discussed.
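A simple greedy square-shrinking peak search in the spirit of the method is sketched below, assuming one common description (repeatedly keep whichever sub-square of side one less has the largest sum); the paper's exact rule and its resolution-pyramid implementation may differ, and the names here are illustrative.

```python
import numpy as np

def converging_squares_peak(image):
    """Locate a peak by repeatedly shrinking a square region of interest.

    At each step the current k x k square is replaced by whichever of its four
    (k-1) x (k-1) sub-squares has the largest sum, until a single pixel remains.
    A greedy sketch of the square-shrinking idea, not the paper's exact scheme."""
    img = np.asarray(image, dtype=float)
    r0, c0 = 0, 0
    k = min(img.shape)
    while k > 1:
        candidates = [(r0, c0), (r0 + 1, c0), (r0, c0 + 1), (r0 + 1, c0 + 1)]
        sums = [img[r:r + k - 1, c:c + k - 1].sum() for r, c in candidates]
        r0, c0 = candidates[int(np.argmax(sums))]
        k -= 1
    return r0, c0   # row, column of the located peak

# Example: a smooth blob peaking at (12, 20) in a 32 x 32 image.
y, x = np.mgrid[0:32, 0:32]
blob = np.exp(-((y - 12) ** 2 + (x - 20) ** 2) / 30.0)
print(converging_squares_peak(blob))   # (12, 20), the blob's maximum
```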

Journal ArticleDOI
TL;DR: (No abstract available; the indexed record corresponds to report LRP 233, CRPP-REPORT-1984-017.)

Journal ArticleDOI
TL;DR: A sparsity-based technique is developed for the identification of coherent areas in large power systems, based on the slow-coherency approach, which introduces small machines at the load buses to retain the system sparseness.
Abstract: A sparsity-based technique is developed for the identification of coherent areas in large power systems. The technique, based on the slow-coherency approach, is novel in that it introduces small machines at the load buses to retain the system sparseness. Then the computation of the slow eigenbasis for the identification of slow-coherent groups of machines is performed by the Lanczos algorithm, which is an efficient eigenfunction computation method for large, sparse, symmetric but unstructured matrices. The technique also groups the load buses into coherent areas, information that is useful for network reduction. Two large-scale models of portions of the US power system are used as illustrations. The computation time required is of the order of magnitude of that required for a few load flow solutions.
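The slow eigenbasis consists of the eigenvectors associated with the few smallest-magnitude eigenvalues of a large, sparse, symmetric matrix, which is exactly what Lanczos-type methods compute cheaply. Below is a generic sketch using SciPy's Lanczos-based sparse eigensolver; the matrix is a stand-in, not a power-system model.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

# Stand-in for a large sparse symmetric matrix of an electromechanical model.
n = 2000
diagonals = [np.full(n - 1, 1.0), np.full(n, -2.0), np.full(n - 1, 1.0)]
A = sp.diags(diagonals, offsets=[-1, 0, 1], format="csr")

# Lanczos iteration (via ARPACK, in shift-invert mode about zero) for the r
# eigenpairs of smallest magnitude; their eigenvectors span the slow eigenbasis
# used to group coherent machines.
r = 4
eigenvalues, slow_basis = eigsh(A, k=r, sigma=0.0, which="LM")
print(eigenvalues)          # the r slowest modes
print(slow_basis.shape)     # (n, r) slow eigenbasis
```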

Journal ArticleDOI
TL;DR: This paper gives an implementation of the force method which is numerically stable and preserves sparsity; each of the two main phases is carried out using orthogonal factorization techniques recently developed for linear least squares problems.
Abstract: Historically there are two principal methods of matrix structural analysis, the displacement (or stiffness) method and the force (or flexibility) method. In recent times the force method has been used relatively little because the displacement method has been deemed easier to implement on digital computers, especially for large sparse systems. The force method has theoretical advantages, however, for multiple redesign problems or nonlinear elastic analysis because it allows the solution of modified problems without restarting the computation from the beginning. In this paper we give an implementation of the force method which is numerically stable and preserves sparsity. Although it is motivated by earlier elimination schemes, in our approach each of the two main phases of the force method is carried out using orthogonal factorization techniques recently developed for linear least squares problems.
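The orthogonal-factorization building block used in such schemes is a QR-based least-squares solve, which avoids forming normal equations and so preserves numerical stability. A generic sketch of that building block follows (it illustrates the factorization step only, not the force-method phases themselves):

```python
import numpy as np

# Overdetermined system A x ~ b solved via the QR factorization A = Q R:
# solve R x = Q^T b by back-substitution, avoiding the normal equations.
rng = np.random.default_rng(1)
A = rng.standard_normal((8, 3))
b = rng.standard_normal(8)

Q, R = np.linalg.qr(A)              # reduced QR: Q is 8x3, R is 3x3 upper triangular
x = np.linalg.solve(R, Q.T @ b)

# Agrees with the reference least-squares solver.
print(np.allclose(x, np.linalg.lstsq(A, b, rcond=None)[0]))   # True
```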

Book
01 Jan 1984
TL;DR: Entry for the textbook "An Introduction to Numerical Computation"; no substantive abstract is available for this record.

Journal ArticleDOI
TL;DR: An algorithm is presented which allows the computation of the transfer function of a singular system from its state-space description, without inverting a polynomial matrix, and it reduces the computational cost.
Abstract: An algorithm is presented which allows the computation of the transfer function of a singular system from its state-space description, without inverting a polynomial matrix. This algorithm is an extension of the Leverrier algorithm for the more general case of singular systems and it reduces the computational cost.
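For a regular state-space system, the classical Leverrier (Faddeev-Leverrier) recursion computes the characteristic polynomial of A together with the matrix coefficients of adj(sI - A), giving the transfer function C(sI - A)^-1 B + D without inverting a polynomial matrix. The sketch below shows that classical recursion, which the paper extends to singular (descriptor) systems:

```python
import numpy as np

def leverrier(A):
    """Faddeev-Leverrier recursion.

    Returns (coeffs, N) with
      det(s*I - A) = s**n + coeffs[0]*s**(n-1) + ... + coeffs[n-1]
      adj(s*I - A) = N[0]*s**(n-1) + N[1]*s**(n-2) + ... + N[n-1]
    so that (s*I - A)^-1 = adj(s*I - A) / det(s*I - A)."""
    n = A.shape[0]
    N = [np.eye(n)]
    coeffs = []
    for k in range(1, n + 1):
        a_k = -np.trace(A @ N[-1]) / k
        coeffs.append(a_k)
        if k < n:
            N.append(A @ N[-1] + a_k * np.eye(n))
    return coeffs, N

# Example: transfer function of (A, B, C) without polynomial-matrix inversion.
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
coeffs, N = leverrier(A)
print("denominator:", [1.0] + coeffs)                        # s^2 + 3s + 2
print("numerator:  ", [(C @ Nk @ B).item() for Nk in N])     # 0*s + 1, so G(s) = 1/(s^2+3s+2)
```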

Journal ArticleDOI
TL;DR: This work presents the structure of the weather code as expressed in VAL, a functional programming language designed by the Computation Structures Group, and develops the corresponding machine-level program structures for efficient execution on a data flow supercomputer.
Abstract: Data flow computers promise efficient parallel computation limited in speed only by data dependencies in the calculation being performed. At the Massachusetts Institute of Technology Laboratory for Computer Science, the Computation Structures Group is working to design practical data flow computers that can outperform conventional supercomputers. Since data flow computers differ radically in structure from conventional (sequential) computers, the projection of their performance must be done through analysis of specific computations. The performance improvement that data flow computers offer is shown for a NASA benchmark program that implements a global weather model. We present the structure of the weather code as expressed in VAL, a functional programming language designed by the Computation Structures Group, and develop the corresponding machine-level program structures for efficient execution on a data flow supercomputer. On the basis of this analysis, we specify the capacities of hardware units and the number of each type of unit required to achieve a twenty-fold improvement in performance for the weather simulation application.

BookDOI
01 Oct 1984
TL;DR: Correlation of Algorithms, Software and Hardware of Parallel Computers.
Abstract (table of contents): 1. Synthesis of Parallel Numerical Algorithms. 2. Complexity of Parallel Algorithms. 3. Automatic Construction of Parallel Programs. 4. Formal Models of Parallel Computations. 5. On Parallel Languages. 6. Proving Correctness and Automatic Synthesis of Parallel Programs. 7. Operating Systems for Modular Partially Reconfigurable Multiprocessor-Systems. 8. Algorithms for Scheduling Homogeneous Multiprocessor Computers. 9. Algorithms for Scheduling Inhomogeneous Multiprocessor Computers. 10. Parallel Processors and Multicomputer Systems. 11. Data Flow Computer Architecture. 12. Correlation of Algorithms, Software and Hardware of Parallel Computers.

Journal ArticleDOI
01 Jan 1984
TL;DR: This work states the acoustic wave problem in mathematical form, presents the algorithm used for its solution, and discusses the most important technical aspects of implementing the algorithm in vector form on the Cyber system.
Abstract: There are now several laboratories worldwide which routinely perform three-dimensional (3D) physical modeling of acoustic phenomena of interest to seismologists. Corresponding computerized mathematical models have been limited to two dimensions on conventional computers due to the immense computational requirements of accurate full-size problems. The introduction of high-performance vector processors in the 100+ MFLOP range has at last made 3D computerized models a practical reality. Several major oil companies and large seismic contractors have installed such processors and are contributing to research and development in this area as an improved tool in the search for fossil energy sources. We now have such models and we have some limited experience in using them; however, much more work is required before these mathematical models can routinely supplant the physical ones. Here we describe one such model which was developed for the Cyber 205. A two-dimensional (2D) model for the VAX-11 with FPS-100 Array Processors was modified to three dimensions and vectorized suitably for the larger system. We state the acoustic wave problem in mathematical form and present the algorithm used for its solution. We then review the theoretical justification for selecting this particular algorithm. We also discuss the most important technical aspects of implementing the algorithm in vector form on the Cyber system. We present typical output and finally we give timing results for several model sizes, both 2D and 3D, on the Cyber 205 and 2D on the VAX-11 with FPS-100 Array Processors.
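The computational core of such models is a time-stepped solution of the acoustic wave equation p_tt = c^2 * laplacian(p), which vectorizes naturally because every grid point is updated by the same stencil. Below is a generic second-order finite-difference sketch in 2D with illustrative parameters; the paper's actual 3D algorithm and boundary treatment may differ.

```python
import numpy as np

def acoustic_2d(nx=200, nz=200, nt=500, dx=10.0, dt=0.001, c=2000.0):
    """Second-order finite differences for the 2D acoustic wave equation
    p_tt = c^2 (p_xx + p_zz), with simple reflecting (fixed) boundaries.
    Stability requires the CFL condition c*dt/dx <= 1/sqrt(2)."""
    p_old = np.zeros((nz, nx))
    p = np.zeros((nz, nx))
    r2 = (c * dt / dx) ** 2
    for it in range(nt):
        lap = np.zeros_like(p)
        lap[1:-1, 1:-1] = (p[2:, 1:-1] + p[:-2, 1:-1] +
                           p[1:-1, 2:] + p[1:-1, :-2] - 4.0 * p[1:-1, 1:-1])
        p_new = 2.0 * p - p_old + r2 * lap
        # Ricker wavelet source (25 Hz, centred at t = 0.04 s) at the grid centre.
        t = it * dt
        arg = (np.pi * 25.0 * (t - 0.04)) ** 2
        p_new[nz // 2, nx // 2] += (1.0 - 2.0 * arg) * np.exp(-arg)
        p_old, p = p, p_new
    return p

wavefield = acoustic_2d()
print(wavefield.shape, float(np.abs(wavefield).max()))
```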

Journal ArticleDOI
TL;DR: Experiments on parallel computing arrays show that this mechanism leads naturally to rapid self-repair, adaptation to the environment, recognition and discrimination of fuzzy inputs, and conditional learning, properties that are commonly associated with biological computation.
Abstract: We experimentally examine the consequences of the hypothesis that the brain operates reliably, even though individual components may intermittently fail, by computing with dynamical attractors. Specifically, such a mechanism exploits dynamic collective behavior of a system with attractive fixed points in its phase space. In contrast to the usual methods of reliable computation involving a large number of redundant elements, this technique of self-repair only requires collective computation with a few units, and it is amenable to quantitative investigation. Experiments on parallel computing arrays show that this mechanism leads naturally to rapid self-repair, adaptation to the environment, recognition and discrimination of fuzzy inputs, and conditional learning, properties that are commonly associated with biological computation.
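A standard concrete example of computing with attractive fixed points is a Hopfield-style associative memory: stored patterns become fixed points of the dynamics, and a corrupted input relaxes to the nearest stored pattern, which is the kind of self-repair and fuzzy recognition described above. The minimal sketch below illustrates that idea; it is a software analogue, not the authors' parallel computing arrays.

```python
import numpy as np

rng = np.random.default_rng(0)

# Store a few random +/-1 patterns with the Hebbian outer-product rule.
n, n_patterns = 100, 3
patterns = rng.choice([-1, 1], size=(n_patterns, n))
W = sum(np.outer(p, p) for p in patterns) / n
np.fill_diagonal(W, 0.0)

def relax(state, steps=20):
    """Iterate the network dynamics; stored patterns act as attractive fixed points."""
    for _ in range(steps):
        state = np.sign(W @ state)
        state[state == 0] = 1
    return state

# Flip 15% of the bits of a stored pattern and let the dynamics repair it.
noisy = patterns[0].copy()
flip = rng.choice(n, size=15, replace=False)
noisy[flip] *= -1
recovered = relax(noisy)
print(np.array_equal(recovered, patterns[0]))   # the attractor restores the stored pattern
```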