
Showing papers on "Computation published in 1992"


Journal ArticleDOI
TL;DR: A class of problems is described which can be solved more efficiently by quantum computation than by any classical or stochastic method.
Abstract: A class of problems is described which can be solved more efficiently by quantum computation than by any classical or stochastic method. The quantum computation solves the problem with certainty in exponentially less time than any classical deterministic computation.

2,509 citations


Book
01 Jan 1992
TL;DR: This book discusses molecular manufacturing systems, nanoscale structural components, and nanomechanical computational systems, as well as some of the techniques used in macromolecular engineering and its applications.
Abstract: PHYSICAL PRINCIPLES. Classical Magnitudes and Scaling Laws. Potential Energy Surfaces. Molecular Dynamics. Positional Uncertainty. Transitions, Errors, and Damage. Energy Dissipation. Mechanosynthesis. COMPONENTS AND SYSTEMS. Nanoscale Structural Components. Mobile Interfaces and Moving Parts. Intermediate Subsystems. Nanomechanical Computational Systems. Molecular Sorting, Processing, and Assembly. Molecular Manufacturing Systems. IMPLEMENTATION STRATEGIES. Macromolecular Engineering. Paths to Molecular Manufacturing. Appendices. Afterword. Symbols, Units, and Constants. Glossary. References. Index.

1,214 citations


Journal ArticleDOI
TL;DR: In this article, the authors present a greedy algorithm for the active contour model which has performance comparable to the dynamic programming and variational calculus approaches but is more than an order of magnitude faster than the dynamic programming approach, being O(nm).
Abstract: A model for representing image contours in a form that allows interaction with higher level processes has been proposed by Kass et al. (in Proceedings of First International Conference on Computer Vision, London, 1987, pp. 259–269). This active contour model is defined by an energy functional, and a solution is found using techniques of variational calculus. Amini et al. (in Proceedings, Second International Conference on Computer Vision, 1988, pp. 95–99) have pointed out some of the problems with this approach, including numerical instability and a tendency for points to bunch up on strong portions of an edge contour. They proposed an algorithm for the active contour model using dynamic programming. This approach is more stable and allows the inclusion of hard constraints in addition to the soft constraints inherent in the formulation of the functional; however, it is slow, having complexity O(nm3), where n is the number of points in the contour and m is the size of the neighborhood in which a point can move during a single iteration. In this paper we summarize the strengths and weaknesses of the previous approaches and present a greedy algorithm which has performance comparable to the dynamic programming and variational calculus approaches. It retains the improvements of stability, flexibility, and inclusion of hard constraints introduced by dynamic programming but is more than an order of magnitude faster than that approach, being O(nm). A different formulation is used for the continuity term than that of the previous authors so that points in the contour are more evenly spaced. The even spacing also makes the estimation of curvature more accurate. Because the concept of curvature is basic to the formulation of the contour functional, several curvature approximation methods for discrete curves are presented and evaluated as to efficiency of computation, accuracy of the estimation, and presence of anomalies.

1,111 citations
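
As a concrete illustration of the greedy update described in this abstract, the sketch below moves each contour point to the lowest-energy position in its neighborhood, balancing continuity, curvature, and image (edge-strength) terms. This is a minimal sketch, not the authors' code: the default weights are arbitrary and the per-neighborhood normalization of the energy terms used in the paper is omitted.

```python
import numpy as np

def greedy_snake_step(points, grad_mag, alpha=1.0, beta=1.0, gamma=1.2, m=3):
    """One pass of a greedy active-contour update (illustrative sketch).

    points   : (n, 2) array of integer (row, col) contour coordinates
    grad_mag : 2D array of image gradient magnitude (higher = stronger edge)
    m        : neighborhood size; each point may move within an m x m window
    """
    n = len(points)
    # mean spacing between consecutive contour points (with wrap-around)
    d_bar = np.mean(np.linalg.norm(np.diff(points, axis=0, append=points[:1]), axis=1))
    new_pts = points.copy()
    half = m // 2
    for i in range(n):
        prev_pt, next_pt = new_pts[i - 1], points[(i + 1) % n]
        best, best_e = points[i], np.inf
        for dr in range(-half, half + 1):
            for dc in range(-half, half + 1):
                cand = points[i] + np.array([dr, dc])
                if not (0 <= cand[0] < grad_mag.shape[0] and 0 <= cand[1] < grad_mag.shape[1]):
                    continue
                # continuity: prefer spacing close to the current mean spacing
                e_cont = abs(d_bar - np.linalg.norm(cand - prev_pt))
                # curvature: finite-difference second derivative along the contour
                e_curv = np.sum((prev_pt - 2 * cand + next_pt) ** 2)
                # image term: strong gradient -> low energy
                e_img = -grad_mag[cand[0], cand[1]]
                e = alpha * e_cont + beta * e_curv + gamma * e_img
                if e < best_e:
                    best_e, best = e, cand
        new_pts[i] = best
    return new_pts
```

In practice the step is repeated until few points move, which is what keeps the overall cost at O(nm) per iteration.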


Journal ArticleDOI
TL;DR: This article describes a transformation that simplifies the problem and places it into a form that allows efficient calculation using standard numerical multiple integration algorithms.
Abstract: The numerical computation of a multivariate normal probability is often a difficult problem. This article describes a transformation that simplifies the problem and places it into a form that allows efficient calculation using standard numerical multiple integration algorithms. Test results are presented that compare implementations of two algorithms that use the transformation with currently available software.

1,012 citations
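
The transformation referred to in this abstract maps the integration region to the unit hypercube by sequential conditioning on a Cholesky factor. The sketch below is a minimal Monte Carlo illustration of that idea; the article pairs the transformation with proper numerical integration rules rather than plain uniform sampling, and the function and parameter names here are my own.

```python
import numpy as np
from scipy.stats import norm

def mvn_prob(lower, upper, cov, n_samples=20000, seed=0):
    """Estimate P(lower < X < upper) for X ~ N(0, cov) (sequential-conditioning sketch)."""
    rng = np.random.default_rng(seed)
    C = np.linalg.cholesky(cov)
    k = len(lower)
    total = 0.0
    for _ in range(n_samples):
        w = rng.uniform(size=k)
        y = np.zeros(k)
        p = 1.0
        for i in range(k):
            s = C[i, :i] @ y[:i]                 # contribution of already-fixed variables
            d = norm.cdf((lower[i] - s) / C[i, i])
            e = norm.cdf((upper[i] - s) / C[i, i])
            p *= (e - d)                          # conditional probability of this slab
            u = d + w[i] * (e - d)
            y[i] = norm.ppf(min(max(u, 1e-15), 1 - 1e-15))
        total += p
    return total / n_samples

# Example: P(X1 > 0, X2 > 0) for a standard bivariate normal with correlation 0.5
# has the closed form 1/4 + arcsin(0.5)/(2*pi) = 1/3.
cov = np.array([[1.0, 0.5], [0.5, 1.0]])
print(mvn_prob([0.0, 0.0], [10.0, 10.0], cov))
```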


Proceedings ArticleDOI
15 Jun 1992
TL;DR: The performance of six optical flow techniques is compared, emphasizing measurement accuracy, and it is found that some form of confidence measure/threshold is crucial for all techniques in order to separate the inaccurate from the accurate.
Abstract: The performance of six optical flow techniques is compared, emphasizing measurement accuracy. The most accurate methods are found to be the local differential approaches, where the velocity v is computed explicitly in terms of a locally constant or linear model. Techniques using global smoothness constraints appear to produce visually attractive flow fields, but in general seem to be accurate enough for qualitative use only and insufficient as precursors to the computations of egomotion and 3D structures. It is found that some form of confidence measure/threshold is crucial for all techniques in order to separate the inaccurate from the accurate. Drawbacks of the six techniques are discussed.

697 citations
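
To make the "locally constant model plus confidence threshold" idea concrete, here is a generic least-squares (Lucas-Kanade style) flow sketch with the smaller eigenvalue of the 2x2 normal matrix used as a confidence measure. It is not one of the six implementations evaluated in the paper, and the threshold value is arbitrary and image-dependent.

```python
import numpy as np

def local_flow(I1, I2, win=7, tau=0.01):
    """Locally constant optical flow with an eigenvalue-based confidence threshold.

    In each win x win window the least-squares system
        [sum Ix^2   sum IxIy] [u]   [-sum IxIt]
        [sum IxIy   sum Iy^2] [v] = [-sum IyIt]
    is solved; estimates with min-eigenvalue below tau are left undefined (NaN).
    """
    I1 = I1.astype(float); I2 = I2.astype(float)
    Iy, Ix = np.gradient(I1)          # derivatives along rows and columns
    It = I2 - I1                      # temporal derivative (two-frame approximation)
    h, w = I1.shape
    flow = np.full((h, w, 2), np.nan)
    r = win // 2
    for y in range(r, h - r):
        for x in range(r, w - r):
            ix = Ix[y - r:y + r + 1, x - r:x + r + 1].ravel()
            iy = Iy[y - r:y + r + 1, x - r:x + r + 1].ravel()
            it = It[y - r:y + r + 1, x - r:x + r + 1].ravel()
            A = np.array([[ix @ ix, ix @ iy], [ix @ iy, iy @ iy]])
            if np.linalg.eigvalsh(A).min() < tau:
                continue              # low confidence: reject this estimate
            flow[y, x] = np.linalg.solve(A, -np.array([ix @ it, iy @ it]))
    return flow
```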


Journal ArticleDOI
TL;DR: In this article, a fast method to calculate the minimum singular value and the corresponding (left and right) singular vectors is presented, which only requires information available from an ordinary program for power flow calculations.
Abstract: The minimum singular value of the power flow Jacobian matrix has been used as a static voltage stability index, indicating the distance between the studied operating point and the steady-state voltage stability limit. A fast method to calculate the minimum singular value and the corresponding (left and right) singular vectors is presented. The main advantages of the algorithm are the small amount of computation time needed, and that it only requires information available from an ordinary program for power flow calculations. The proposed method fully utilizes the sparsity of the power flow Jacobian matrix and the memory requirements for the computation are low. These advantages are preserved when applied to various submatrices of the Jacobian matrix. The algorithm was applied to small test systems and to a large system with over 1000 nodes, with satisfactory results.

446 citations
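
A common way to exploit sparsity for this kind of computation is inverse iteration on (J^T J)^{-1}, reusing a single sparse LU factorization of the Jacobian. The sketch below shows that generic idea for a square Jacobian; it is an assumption-laden stand-in, not the authors' algorithm.

```python
import numpy as np
from scipy.sparse import csc_matrix
from scipy.sparse.linalg import splu

def min_singular(J, iters=20, seed=0):
    """Estimate the minimum singular value and singular vectors of a square sparse Jacobian
    by power iteration on (J^T J)^{-1}, reusing one sparse LU factorization of J."""
    lu = splu(csc_matrix(J))
    rng = np.random.default_rng(seed)
    v = rng.normal(size=J.shape[0])
    v /= np.linalg.norm(v)
    for _ in range(iters):
        w = lu.solve(v, trans='T')      # solve J^T w = v
        x = lu.solve(w)                 # solve J x = w, so x = (J^T J)^{-1} v
        nrm = np.linalg.norm(x)
        v = x / nrm
    sigma_min = 1.0 / np.sqrt(nrm)      # largest eigenvalue of (J^T J)^{-1} is 1/sigma_min^2
    right = v                           # approximate right singular vector
    left = (J @ v) / sigma_min          # approximate left singular vector
    return sigma_min, left, right
```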



Journal ArticleDOI
TL;DR: By incorporating the principles of the stochastic approach into the KLA, a deterministic VQ design algorithm, the soft competition scheme (SCS), is introduced and experimental results are presented where the SCS consistently provided better codebooks than the generalized Lloyd algorithm (GLA), even when the same computation time was used for both algorithms.
Abstract: The authors provide a convergence analysis for the Kohonen learning algorithm (KLA) with respect to vector quantizer (VQ) optimality criteria and introduce a stochastic relaxation technique which produces the global minimum but is computationally expensive. By incorporating the principles of the stochastic approach into the KLA, a deterministic VQ design algorithm, the soft competition scheme (SCS), is introduced. Experimental results are presented where the SCS consistently provided better codebooks than the generalized Lloyd algorithm (GLA), even when the same computation time was used for both algorithms. The SCS may therefore prove to be a valuable alternative to the GLA for VQ design.

213 citations
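
The core idea of soft competition is that every training vector updates every codeword, weighted by a Gibbs/softmax function of distance, with the "temperature" annealed so that the update approaches hard nearest-neighbor competition. The sketch below illustrates that idea; the annealing schedule and batch formulation are my simplifications, not the paper's exact SCS.

```python
import numpy as np

def soft_competition_vq(data, k, epochs=30, t0=1.0, t_final=0.01, seed=0):
    """Soft-competition-style VQ codebook design (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    codebook = data[rng.choice(len(data), size=k, replace=False)].astype(float)
    for epoch in range(epochs):
        # geometric annealing from t0 down to t_final
        T = t0 * (t_final / t0) ** (epoch / max(epochs - 1, 1))
        d2 = ((data[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)   # (N, k)
        w = np.exp(-(d2 - d2.min(axis=1, keepdims=True)) / T)               # soft assignments
        w /= w.sum(axis=1, keepdims=True)
        codebook = (w.T @ data) / (w.sum(axis=0)[:, None] + 1e-12)          # weighted centroids
    return codebook
```

As T becomes small the soft assignments collapse to nearest-neighbor partitions, so the final iterations behave like GLA steps.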


Journal ArticleDOI
TL;DR: In this paper, the ab initio SCF computation of second-order properties of large molecules (with 50 atoms or more) on workstation computers is demonstrated for static dipole polarizabilities and nuclear magnetic shieldings.
Abstract: The ab initio SCF computation of second-order properties of large molecules (with 50 atoms or more) on workstation computers is demonstrated for static dipole polarizabilities and nuclear magnetic shieldings. The magnetic shieldings are calculated on the basis of gauge-including atomic orbitals (GIAO). Algorithmic advances (semi-direct algorithms with efficient integral pre-screening, and use of a quadratically convergent functional for the polarizabilities) are presented together with an illustrative application to the fullerenes C60 and C70.

197 citations


Journal ArticleDOI
TL;DR: The concept of animate vision was introduced in this article, which is a framework for sequential decision-making, gaze control, and visual learning in human vision, using cooperative sensorimotor behaviors to reduce the need for explicit representation.
Abstract: Vision theories can be categorized in terms of the amount of explicit representation postulated in the perceiver. Gibson's precomputational theory eschewed any explicit representation. In contrast, Marr used layers of explicit representation, hoping to simplify vision computations. Current technological advances in robotic hardware and computer architectures have allowed the building of anthropomorphic devices that capture important technical features of human vision. Experience with these devices suggests that cooperative sensorimotor behaviors can reduce the need for explicit representation. This view is captured in the notion of “animate vision,” which is a framework for sequential decision-making, gaze control, and visual learning.

181 citations



Journal ArticleDOI
TL;DR: An iterative algorithm which can be used to find array weights that produce array patterns with a given look direction and an arbitrary sidelobe specification is presented and experimental evidence suggests that the procedure terminates in remarkably few iterations, even for arrays with significant numbers of elements.
Abstract: A simple iterative algorithm which can be used to find array weights that produce array patterns with a given look direction and an arbitrary sidelobe specification is presented. The method can be applied to nonuniform array geometries in which the individual elements have arbitrary (and differing) radiation patterns. The method is iterative and uses sequential updating to ensure that peak sidelobe levels in the array meet the specification. Computation of each successive pattern is based on the solution of a linearly constrained least-squares problem. The constraints ensure that the magnitude of the sidelobes at the locations of the previous peaks takes on the prespecified values. Phase values for the sidelobes do not change during this process, and problems associated with choosing a specific phase value are therefore avoided. Experimental evidence suggests that the procedure terminates in remarkably few iterations, even for arrays with significant numbers of elements.

Journal Article
TL;DR: In this article, three different numerical methods for the computation of flows with moving immersed elastic boundaries are described, and their results at various values of the time-step size are compared in order to explore the numerical stability of the computation.
Abstract: This paper describes three different numerical methods for the computation of flows with moving immersed elastic boundaries. A two-dimensional incompressible fluid and a boundary in the form of a simple closed curve are considered. The inertia is assumed to be negligible and the Stokes equations are solved. The three methods are explicit, approximate-implicit, and implicit. The first two have been used before, but the implicit method is new in the context of flows with moving immersed boundaries. They differ only with respect to the computation of the boundary force. The results of the above methods at various values of the time-step size are compared in order to explore the numerical stability of the computation.

Journal ArticleDOI
TL;DR: The paper presents an implementation for computer-aided design with dimensional parameters based on the use of an expert system to uncouple constraint equations, and to find a possible sequence for the computation of the geometric elements for given dimension values.
Abstract: The paper presents an implementation for computer-aided design with dimensional parameters. The approach is based on the use of an expert system to uncouple constraint equations, and to find a possible sequence for the computation of the geometric elements for given dimension values. A set of rules for the expert system is described that solves the problem for 2D designs. The method is illustrated with an example design.

Journal ArticleDOI
TL;DR: A new multiresolution coarse-to-fine search algorithm for efficient computation of the Hough transform (MHT) using a simple peak detection algorithm that can be generalized for patterns with any number of parameters.
Abstract: A new multiresolution coarse-to-fine search algorithm for efficient computation of the Hough transform is proposed. The algorithm uses multiresolution images and parameter arrays. Logarithmic range reduction is proposed to achieve faster convergence. Discretization errors are taken into consideration when accumulating the parameter array. This permits the use of a very simple peak detection algorithm. Comparative results using three peak detection methods are presented. Tests on synthetic and real-world images show that the parameters converge rapidly toward the true value. The errors in rho and theta, as well as the computation time, are much lower than those obtained by other methods. Since the multiresolution Hough transform (MHT) uses a simple peak detection algorithm, the computation time will be significantly lower than other algorithms if the time for peak detection is also taken into account. The algorithm can be generalized for patterns with any number of parameters.
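
A coarse-to-fine Hough search can be sketched for straight lines in (rho, theta) space: accumulate votes on a coarse grid, locate the peak, shrink the parameter range around it, and repeat. The sketch below works on edge coordinates only (no multiresolution images) and uses a fixed shrink factor in place of the paper's logarithmic range reduction, so it is an illustration of the idea rather than the MHT itself.

```python
import numpy as np

def coarse_to_fine_hough(points, img_diag, levels=4, bins=32, shrink=4.0):
    """Coarse-to-fine estimate of the dominant line's (rho, theta) parameters.

    points   : (N, 2) array of (x, y) edge coordinates
    img_diag : image diagonal length, bounding |rho|
    """
    t_lo, t_hi = 0.0, np.pi
    r_lo, r_hi = -img_diag, img_diag
    for _ in range(levels):
        thetas = np.linspace(t_lo, t_hi, bins)
        r_edges = np.linspace(r_lo, r_hi, bins + 1)
        acc = np.zeros((bins, bins), dtype=int)           # [rho bin, theta bin]
        for x, y in points:
            rho = x * np.cos(thetas) + y * np.sin(thetas)
            idx = np.clip(np.searchsorted(r_edges, rho) - 1, 0, bins - 1)
            inside = (rho >= r_lo) & (rho <= r_hi)
            acc[idx[inside], np.arange(bins)[inside]] += 1
        ri, ti = np.unravel_index(acc.argmax(), acc.shape)
        theta_hat = thetas[ti]
        rho_hat = 0.5 * (r_edges[ri] + r_edges[ri + 1])
        # narrow the search window around the current peak and refine
        t_span, r_span = (t_hi - t_lo) / shrink, (r_hi - r_lo) / shrink
        t_lo, t_hi = theta_hat - t_span / 2, theta_hat + t_span / 2
        r_lo, r_hi = rho_hat - r_span / 2, rho_hat + r_span / 2
    return rho_hat, theta_hat
```

The total work grows with levels * bins^2 rather than with one fine grid, which is where the speedup over a single-resolution accumulator comes from.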

Journal ArticleDOI
TL;DR: A computational environment suitable for optimum design of structures in the general class of plane frames is described and the use of the environment is illustrated in a study of a cable‐stayed bridge structure.
Abstract: A computational environment suitable for optimum design of structures in the general class of plane frames is described. Design optimization is based on the use of a genetic algorithm in which a population of individual designs is changed generation by generation applying principles of natural selection and survival of the fittest. The fitness of a design is assessed using an objective function in which violations of design constraints are penalized. Facilities are provided for automatic data editing and reanalysis of the structure. The environment is particularly useful when parametric studies are required. The use of the environment is illustrated in a study of a cable‐stayed bridge structure.
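
The genetic-algorithm-with-penalty idea summarized above can be illustrated with a tiny real-coded GA; the structural analysis, data editing, and reanalysis facilities of the described environment are of course not represented here, and the objective, penalty, and bounds below are placeholders.

```python
import numpy as np

def genetic_optimize(objective, penalty, bounds, pop=40, gens=100, p_mut=0.1, seed=0):
    """Minimal real-coded genetic algorithm with a penalized fitness (illustrative sketch).

    objective(x) : cost to minimize (e.g., structural weight)
    penalty(x)   : non-negative penalty for violated design constraints
    bounds       : (lo, hi) arrays for the design variables
    """
    rng = np.random.default_rng(seed)
    lo, hi = map(np.asarray, bounds)
    X = rng.uniform(lo, hi, size=(pop, len(lo)))
    for _ in range(gens):
        fit = -np.array([objective(x) + penalty(x) for x in X])   # higher is fitter
        # binary tournament selection
        idx = np.array([max(rng.choice(pop, 2), key=lambda i: fit[i]) for _ in range(pop)])
        parents = X[idx]
        # uniform crossover between consecutive parents
        mask = rng.random(X.shape) < 0.5
        children = np.where(mask, parents, np.roll(parents, 1, axis=0))
        # mutation: occasionally resample a gene within its bounds
        mut = rng.random(X.shape) < p_mut
        X = np.where(mut, rng.uniform(lo, hi, size=X.shape), children)
    return min(X, key=lambda x: objective(x) + penalty(x))

# Toy usage: minimize a quadratic subject to x0 + x1 >= 1 (violations penalized)
best = genetic_optimize(lambda x: float(x @ x),
                        lambda x: 100.0 * max(0.0, 1.0 - (x[0] + x[1])) ** 2,
                        bounds=(np.zeros(2), np.ones(2)))
print(best)
```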

Proceedings ArticleDOI
01 Feb 1992
TL;DR: The concept of deformation distance between manifolds is presented, a distance which measures the `difference in shape' of two manifolds and the link between deformation distances and size functions is pointed out.
Abstract: We define the concept of size functions. They are functions from the real plane to the natural numbers which describe the `shape of the objects' (seen as submanifolds of a Euclidean space). We give two different techniques of computation of size functions and some actual examples of computation. Moreover, we present the concept of deformation distance between manifolds (i.e., curves, surfaces, etc.). It is a distance which measures the `difference in shape' of two manifolds. Finally we point out the link between deformation distances and size functions.

Journal ArticleDOI
TL;DR: The efficiency of the fast Fourier transform makes it almost always the faster method for any large-size system, while the multipole algorithm remains effective for more complex geometries and systems with highly irregular or nonuniform particle distributions.
Abstract: Evaluation of the long-range magnetostatic field is the most time-consuming part in a micromagnetic simulation. In a magnetic system with N particles, the traditional direct pairwise summation method yields O(N^2) asymptotic computation time. An adaptive fast algorithm fully implementing the multipole and local expansions of the field integral is shown to yield O(N) computation time. Fast Fourier transform techniques are generalized to handle finite size magnetic systems with nonperiodic boundary conditions, yielding O(N log_2 N) computation time. Examples are given for calculating domain wall structures in Permalloy thin films. The efficiency of the fast Fourier transform makes it almost always the faster method for any large-size system, while the multipole algorithm remains effective for more complex geometries and systems with highly irregular or nonuniform particle distributions.
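
The FFT route works because the long-range field on a regular grid is a discrete convolution, and zero padding turns the circular FFT convolution into the aperiodic one needed for finite, nonperiodic samples. The sketch below shows that mechanism for a hypothetical scalar 1/r-like kernel on a 2D grid; it is a stand-in for the vector demagnetizing-tensor convolution actually used in micromagnetics.

```python
import numpy as np
from scipy.signal import fftconvolve

def long_range_field(m, kernel):
    """field[i] = sum_j kernel[i - j] * m[j] on a grid, via zero-padded FFT convolution.

    fftconvolve zero-pads internally, so the finite (nonperiodic) sample is handled
    correctly; the cost is O(N log N) versus O(N^2) for direct pairwise summation.
    """
    return fftconvolve(m, kernel, mode="same")

# Toy 64x64 grid of "magnetizations" and a softened 1/r interaction kernel
n = 64
off = np.arange(-(n - 1), n)
dy, dx = np.meshgrid(off, off, indexing="ij")
kernel = 1.0 / np.sqrt(dx**2 + dy**2 + 0.5)        # hypothetical scalar kernel
m = np.random.default_rng(0).normal(size=(n, n))
field = long_range_field(m, kernel)
```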

Proceedings ArticleDOI
24 Jun 1992
TL;DR: The algorithm's performance is encouraging, both in terms of the accuracy of the resulting bounds and the growth rate in required computation with problem size; it appears that one can handle medium size problems (less than 100 perturbations) with reasonable computational requirements.
Abstract: Upper and lower bounds for the mixed μ problem have recently been developed, and this paper examines the computational aspects of these bounds. In particular a practical algorithm is developed to compute the bounds. This has been implemented as a Matlab function (m-file), and will be available shortly in a test version in conjunction with the μ-Tools toolbox. The algorithm performance is very encouraging, both in terms of accuracy of the resulting bounds, and growth rate in required computation with problem size. In particular it appears that one can handle medium size problems (less than 100 perturbations) with reasonable computational requirements.

Proceedings ArticleDOI
30 Aug 1992
TL;DR: An implementation is presented of a new characterization of 3D simple points that does not need the calculation of the genus or the number of holes and only needs the computation of two connected-component numbers in the neighborhood of the considered point.
Abstract: A new characterization of 3D simple points has recently been proposed by the authors (2nd European Conf. Computer Vision (1992)). Unlike previous characterizations, it does not need the calculation of the genus or the number of holes. It only needs the computation of two connected-component numbers in the neighborhood of the considered point. An implementation of this characterization is presented.

Journal ArticleDOI
TL;DR: An iterative algorithm is developed to solve the nonlinear inverse scattering problem for two-dimensional lossless dielectric inhomogeneities using time-domain scattering data and the ability of this method to invert arbitrarily shaped permittivity profiles using few transmitters and receivers is demonstrated.
Abstract: An iterative algorithm is developed to solve the nonlinear inverse scattering problem for two-dimensional lossless dielectric inhomogeneities using time-domain scattering data. The method is based on performing Born-type iterations on a volume integral equation and, hence, successively calculating higher-order approximations to the unknown object profile. Both the full-angle and the limited-angle problems are considered. Solutions are obtained for cases where the first-order Born approximation is severely violated. Wideband time-domain scattered field measurements make it possible to use sparse data sets and thus reduce experimental complexity and computation time. Several examples are given to show the ability of this method to invert arbitrarily shaped permittivity profiles using few transmitters and receivers. The high-resolution capability of the algorithm is also demonstrated.

01 Jun 1992
TL;DR: In this paper, the polylogarithm function, Li_p(z), is defined, and a number of algorithms are derived for its computation, valid in different ranges of its real parameter p and complex argument z.
Abstract: The polylogarithm function, Li_p(z), is defined, and a number of algorithms are derived for its computation, valid in different ranges of its real parameter p and complex argument z. These are sufficient to evaluate it numerically, with reasonable efficiency, in all cases.
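
In the simplest regime the defining series can be summed directly; the sketch below covers only that case (|z| < 1) and does not reproduce the note's other algorithms for arguments near or beyond the unit circle.

```python
def polylog(p, z, tol=1e-15, max_terms=100000):
    """Li_p(z) = sum_{k>=1} z^k / k^p  -- direct series, valid for |z| < 1."""
    if abs(z) >= 1:
        raise ValueError("direct series only converges for |z| < 1")
    total, zk = 0.0 + 0.0j, 1.0 + 0.0j
    for k in range(1, max_terms + 1):
        zk *= z
        term = zk / k**p
        total += term
        if abs(term) < tol * max(abs(total), 1.0):
            break
    return total

# Example: Li_2(0.5) = pi^2/12 - ln(2)^2/2 ~ 0.5822405
print(polylog(2, 0.5).real)
```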

Proceedings ArticleDOI
01 Aug 1992
TL;DR: Bases computation using syzygies.
Abstract: Bases Computation Using Syzygies (Möller, Mora).

Journal ArticleDOI
TL;DR: In this paper, a straightforward method of calculation based on direct Fourier summation of the simulated lattice is used instead of fast Fourier transform (FFT) techniques, which are not in general suited to this type of calculation.
Abstract: An investigation is carried out on the feasibility of calculating the diffuse scattering from computer simulations of crystals containing substitutional and displacement disorder which have hitherto been used in conjunction with optical transform methods to aid in the interpretation of observed X-ray diffraction patterns. A straightforward method of calculation based on direct Fourier summation of the simulated lattice is used instead of fast Fourier transform (FFT) techniques, which are not in general suited to this type of calculation. This computational method provides a number of advantages over the optical method. It allows calculation in three dimensions, more flexibility in the assignment of atomic positions and scattering power of the individual atoms involved and the computation can be made in absolute units allowing for direct comparison with data scaled to electron units. Comparison of the two techniques is presented using, as an example, a simulation of planar disorder in a synthetic mullite. It is found that calculated patterns of comparable quality to ones obtained optically are feasible using the current generation of computers. Nevertheless, the transforms can still consume considerable computational resources particularly when the extension to three dimensions is required.
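
Direct Fourier summation of a model crystal amounts to evaluating the structure factor at arbitrary reciprocal-space points, which is what frees the calculation from the fixed grid an FFT would impose. A toy sketch, with illustrative names and a trivially small disordered model, follows.

```python
import numpy as np

def diffraction_intensity(positions, scattering_factors, q_points):
    """I(q) = |sum_j f_j exp(2*pi*i q . r_j)|^2 by direct summation over the model atoms."""
    phases = np.exp(2j * np.pi * (q_points @ positions.T))   # (Nq, Natoms) phase factors
    F = phases @ scattering_factors                          # structure factors at each q
    return np.abs(F) ** 2

# Toy example: 1000 atoms on a 10x10x10 lattice with small random displacements
rng = np.random.default_rng(1)
lattice = np.stack(np.meshgrid(*(np.arange(10),) * 3), axis=-1).reshape(-1, 3).astype(float)
positions = lattice + 0.05 * rng.normal(size=lattice.shape)  # displacement disorder
f = np.ones(len(positions))                                  # equal scattering power
q = rng.uniform(0, 2, size=(5, 3))                           # a few reciprocal-space points
print(diffraction_intensity(positions, f, q))
```

The cost is proportional to (number of q points) x (number of atoms), which is why the paper notes that three-dimensional calculations can still consume considerable resources.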

Journal ArticleDOI
TL;DR: Computations of three-dimensional compressible flows using unstructured meshes having close to one million elements, such as a complete airplane, demonstrate that the Connection Machine systems are suitable for these applications.
Abstract: A finite element method for computational fluid dynamics has been implemented on the Connection Machine systems CM-2 and CM-200. An implicit iterative solution strategy, based on the preconditioned matrix-free GMRES algorithm, is employed. Parallel data structures built on both nodal and elemental sets are used to achieve maximum parallelization. Communication primitives provided through the Connection Machine Scientific Software Library substantially improved the overall performance of the program. Computations of three-dimensional compressible flows using unstructured meshes having close to one million elements, such as a complete airplane, demonstrate that the Connection Machine systems are suitable for these applications. Performance comparisons are also carried out with the vector computers Cray Y-MP and Convex C-1.
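
The "matrix-free" aspect means the Krylov solver only ever needs a routine that applies the operator to a vector. The sketch below shows that pattern with SciPy's GMRES on a toy diagonally dominant tridiagonal operator; in the paper the matrix-vector product would instead come from element-level computations on the unstructured mesh (and would be preconditioned and parallelized).

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

n = 1000

def matvec(v):
    """Apply a diagonally dominant tridiagonal operator (4 on the diagonal, -1 off it)
    without ever assembling the matrix."""
    out = 4.0 * v
    out[:-1] -= v[1:]
    out[1:] -= v[:-1]
    return out

A = LinearOperator((n, n), matvec=matvec)
b = np.ones(n)
x, info = gmres(A, b, restart=30)
print(info, np.linalg.norm(matvec(x) - b))   # info == 0 indicates convergence
```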

Journal ArticleDOI
TL;DR: State derivative values are calculated in a highly efficient manner, with the number of computational operations being a linear function of n, allowing the production of a simulation code that can be used on conventional sequential computers but is also well suited to parallel computers with distributed architectures.

Proceedings Article
23 Aug 1992
TL;DR: This paper presents a general method for change computation, based on the use of transition and internal events rules, which generalizes and extends previous work on change computation methods, and in some cases computes changes in a more efficient way.
Abstract: Change computation is an essential component in several capabilities of a deductive database, such as integrity constraints checking, materialized view maintenance and condition monitoring. In this paper, we present a general method for change computation, which is based on the use of transition and internal events rules. These rules explicitly define the insertions, deletions and modifications induced by a database update. Standard SLDNF resolution can be used to compute the induced changes, but other procedures could be used as well. Our method generalizes and extends previous work on change computation methods, and in some cases computes changes in a more efficient way.
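
To fix ideas about what "induced changes" are, the sketch below computes the insertions and deletions induced on a joined view by a base update, simply by comparing the view before and after. This naive recomputation is exactly what the paper's transition and internal events rules are designed to avoid; the relation and tuple names are illustrative.

```python
def view(r, s):
    """Derived predicate V(x, z) <- R(x, y), S(y, z), materialized as a set of tuples."""
    return {(x, z) for (x, y1) in r for (y2, z) in s if y1 == y2}

def induced_changes(r, s, ins_r, del_r, ins_s, del_s):
    """Insertions and deletions induced on the view by a base update (naive sketch)."""
    old = view(r, s)
    new = view((r - del_r) | ins_r, (s - del_s) | ins_s)
    return new - old, old - new    # (induced insertions, induced deletions)

# Example: R = works_in(emp, dept), S = located(dept, city)
R = {("ann", "d1"), ("bob", "d2")}
S = {("d1", "paris"), ("d2", "rome")}
ins, dels = induced_changes(R, S, ins_r={("eve", "d2")}, del_r=set(),
                            ins_s=set(), del_s={("d1", "paris")})
print(ins)   # {('eve', 'rome')}
print(dels)  # {('ann', 'paris')}
```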

Journal ArticleDOI
TL;DR: In this paper, a new scheme for wave propagation simulation in 3D elastic-anisotropic media is presented based on the rapid expansion method (REM) as a time integration algorithm, and the Fourier pseudospectral method for computation of the spatial derivatives.
Abstract: This work presents a new scheme for wave propagation simulation in three‐dimensional (3-D) elastic-anisotropic media. The algorithm is based on the rapid expansion method (REM) as a time integration algorithm, and the Fourier pseudospectral method for computation of the spatial derivatives. The REM expands the evolution operator of the second‐order wave equation in terms of Chebychev polynomials, constituting an optimal series expansion with exponential convergence. The modeling allows arbitrary elastic coefficients and density in lateral and vertical directions. Numerical methods which are based on finite‐difference techniques (in time and space) are not efficient when applied to realistic 3-D models, since they require considerable computer memory and time to obtain accurate results. On the other hand, the Fourier method permits a significant reduction of the working space, and the REM algorithm gives machine accuracy with half the computational effort of the usual second-order temporal differencing schemes.
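
The Fourier pseudospectral ingredient is easy to isolate: spatial differentiation becomes multiplication by i*k in wavenumber space, which is where the "machine accuracy" for band-limited data comes from. The sketch below shows only this spatial-derivative step, not the Chebychev/REM time integration.

```python
import numpy as np

def fourier_derivative(f, length):
    """Spectral derivative of a periodic sample f on [0, length): multiply by i*k in k-space."""
    n = f.size
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=length / n)   # angular wavenumbers
    return np.fft.ifft(1j * k * np.fft.fft(f)).real

# Check against an analytic derivative: d/dx sin(3x) = 3 cos(3x) on [0, 2*pi)
x = np.linspace(0.0, 2.0 * np.pi, 128, endpoint=False)
err = np.max(np.abs(fourier_derivative(np.sin(3 * x), 2.0 * np.pi) - 3 * np.cos(3 * x)))
print(err)   # ~1e-13 for band-limited data
```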

Journal ArticleDOI
TL;DR: A new algorithm based on the double-integral formulation is presented that can be used to calculate moments from either the run-length codes or the chain codes of a shape.