
Showing papers on "Computation published in 1997"


Journal ArticleDOI
01 Mar 1997
TL;DR: A comparative analysis of four popular and efficient algorithms, each of which computes the translational and rotational components of the transform in closed form, as the solution to a least squares formulation of the problem, indicates that under “ideal” data conditions certain distinctions in accuracy and stability can be seen.
Abstract: A common need in machine vision is to compute the 3-D rigid body transformation that aligns two sets of points for which correspondence is known. A comparative analysis is presented here of four popular and efficient algorithms, each of which computes the translational and rotational components of the transform in closed form, as the solution to a least squares formulation of the problem. They differ in terms of the transformation representation used and the mathematical derivation of the solution, using respectively singular value decomposition or eigensystem computation based on the standard $[ \vec{R}, \vec{T} ]$ representation, and the eigensystem analysis of matrices derived from unit and dual quaternion forms of the transform. This comparison presents both qualitative and quantitative results of several experiments designed to determine (1) the accuracy and robustness of each algorithm in the presence of different levels of noise, (2) the stability with respect to degenerate data sets, and (3) relative computation time of each approach under different conditions. The results indicate that under “ideal” data conditions (no noise) certain distinctions in accuracy and stability can be seen. But for “typical, real-world” noise levels, there is no difference in the robustness of the final solutions (contrary to certain previously published results). Efficiency, in terms of execution time, is found to be highly dependent on the computer system setup.
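
A minimal sketch of the SVD-based closed-form solution (one of the four algorithms compared above), assuming NumPy; the function name and test setup are illustrative, not the paper's code.

```python
import numpy as np

def rigid_transform_svd(P, Q):
    """R, t minimizing sum ||R @ P[i] + t - Q[i]||^2 for corresponding
    3-D point sets P, Q of shape (N, 3)."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)            # centroids
    H = (P - cP).T @ (Q - cQ)                          # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    t = cQ - R @ cP
    return R, t

# Usage: recover a known transform from noiseless ("ideal") data.
rng = np.random.default_rng(0)
P = rng.standard_normal((100, 3))
R_true, _ = np.linalg.qr(rng.standard_normal((3, 3)))
R_true *= np.sign(np.linalg.det(R_true))               # make it a proper rotation
Q = P @ R_true.T + np.array([1.0, -2.0, 0.5])
R_est, t_est = rigid_transform_svd(P, Q)
assert np.allclose(R_est, R_true)
```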

857 citations


Journal ArticleDOI
TL;DR: In this paper, the authors give an explicit way to experimentally determine the evolution operators which completely describe the dynamics of a quantum-mechanical black box: an arbitrary open quantum system.
Abstract: We give an explicit way to experimentally determine the evolution operators which completely describe the dynamics of a quantum-mechanical black box: an arbitrary open quantum system. We show necessary and sufficient conditions for this to be possible and illustrate the general theory by considering specifically one- and two-quantum-bit systems. These procedures may be useful in the comparative evaluation of experimental quantum measurement, communication and computation systems.
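
The core idea, stripped of experimental detail, is linear inversion: probe the unknown channel with a basis of inputs and reassemble the linear map. The sketch below simulates this for a single qubit; the stand-in channel, function names, and matrix-unit inputs are assumptions for illustration (in a real experiment the inputs are physical states and the outputs come from state tomography).

```python
import numpy as np

def unknown_channel(rho, p=0.2):
    """Stand-in 'black box': a single-qubit bit-flip channel (assumed here)."""
    X = np.array([[0, 1], [1, 0]], dtype=complex)
    return (1 - p) * rho + p * X @ rho @ X

def reconstruct_superoperator(channel, dim=2):
    """Column j of S is vec(channel(E_j)), where E_j runs over matrix units."""
    S = np.zeros((dim * dim, dim * dim), dtype=complex)
    for j in range(dim * dim):
        E = np.zeros((dim, dim), dtype=complex)
        E[j // dim, j % dim] = 1.0                 # matrix-unit basis element
        S[:, j] = channel(E).reshape(-1)           # vectorize the output
    return S

S = reconstruct_superoperator(unknown_channel)
# Check: applying S to vec(rho) agrees with applying the channel directly.
rho = np.array([[0.7, 0.2 + 0.1j], [0.2 - 0.1j, 0.3]])
assert np.allclose((S @ rho.reshape(-1)).reshape(2, 2), unknown_channel(rho))
```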

834 citations


Proceedings ArticleDOI
01 Jun 1997
TL;DR: The experimental results on a Cray T3D parallel computer show that the Hybrid Distribution algorithm scales linearly, exploits the aggregate memory better, and can generate more association rules with a single scan of the database per pass.

Abstract: One of the important problems in data mining is discovering association rules from databases of transactions, where each transaction consists of a set of items. The most time-consuming operation in this discovery process is the computation of the frequency of occurrence of interesting subsets of items (called candidates) in the database of transactions. To prune the exponentially large space of candidates, most existing algorithms consider only those candidates that have a user-defined minimum support. Even with the pruning, the task of finding all association rules requires a lot of computation power and time. Parallel computers offer a potential solution to the computation requirements of this task, provided efficient and scalable parallel algorithms can be designed. In this paper, we present two new parallel algorithms for mining association rules. The Intelligent Data Distribution algorithm efficiently uses the aggregate memory of the parallel computer by employing an intelligent candidate partitioning scheme and uses an efficient communication mechanism to move data among the processors. The Hybrid Distribution algorithm further improves upon the Intelligent Data Distribution algorithm by dynamically partitioning the candidate set to maintain good load balance. The experimental results on a Cray T3D parallel computer show that the Hybrid Distribution algorithm scales linearly, exploits the aggregate memory better, and can generate more association rules with a single scan of the database per pass.
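
For orientation, the sketch below shows the serial core operation these algorithms parallelize and partition: counting candidate-itemset support in one scan of a transaction database. The transactions and threshold are made up, and nothing of the parallel partitioning or communication scheme is reproduced.

```python
from itertools import combinations

transactions = [
    {"bread", "milk"},
    {"bread", "diapers", "beer", "eggs"},
    {"milk", "diapers", "beer", "cola"},
    {"bread", "milk", "diapers", "beer"},
    {"bread", "milk", "diapers", "cola"},
]
min_support = 3  # user-defined absolute support threshold

# Frequent 1-itemsets from a first pass over the database.
item_counts = {}
for t in transactions:
    for item in t:
        item_counts[item] = item_counts.get(item, 0) + 1
frequent_items = sorted(i for i, c in item_counts.items() if c >= min_support)

# Generate 2-item candidates, then count their support in a single scan.
candidates = list(combinations(frequent_items, 2))
support = {c: 0 for c in candidates}
for t in transactions:
    for c in candidates:
        if set(c) <= t:
            support[c] += 1

frequent_pairs = {c: s for c, s in support.items() if s >= min_support}
print(frequent_pairs)
```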

410 citations


Journal ArticleDOI
TL;DR: In this article, a scheme for reliable transfer of quantum information between two atoms via an optical fiber in the presence of decoherence is proposed, which is based on performing an adiabatic passage through two cavities which remain in their respective vacuum states during the whole operation.
Abstract: A scheme is proposed which allows for reliable transfer of quantum information between two atoms via an optical fibre in the presence of decoherence. The scheme is based on performing an adiabatic passage through two cavities which remain in their respective vacuum states during the whole operation. The scheme may be useful for networking several ion-trap quantum computers, thereby increasing the number of quantum bits involved in a computation.

343 citations


Book
01 Jan 1997
TL;DR: This book draws upon the very latest research and uses executable software simulations to help explain the material and allow the reader to experiment with the ideas behind quantum computers.
Abstract: By the year 2020, the basic memory components of a computer will be the size of individual atoms. At such scales, the current theory of computation will become invalid. A new field called "quantum computing" is emerging that is reinventing the foundations of computer science and information theory in a way that is consistent with quantum physics - the most accurate model of reality that is currently known. Remarkably, this new theory predicts that quantum computers can perform certain tasks breathtakingly faster than classical computers, and, better yet, can accomplish mind-boggling feats such as teleporting information, breaking supposedly "unbreakable" codes, generating true random numbers, and communicating with messages that betray the presence of eavesdropping. "Explorations in Quantum Computing" explains these burgeoning developments in simple terms, and describes the key technological hurdles that must be overcome in order to make quantum computers a reality. This book draws upon the very latest research and uses executable software simulations to help explain the material and allow the reader to experiment with the ideas behind quantum computers. This is the ideal text for anyone wishing to learn more about the next, perhaps "ultimate," computer revolution.

325 citations


Journal ArticleDOI
TL;DR: This paper argues that implementors should make robustness a non-issue by computing exactly, and that the traditional “BigNumber” package that forms the work-horse of exact computation must be reinvented to take advantage of many features found in geometric algorithms.
Abstract: Exact computation is assumed in most algorithms in computational geometry. In practice, implementors perform computation in some fixed-precision model, usually the machine floating-point arithmetic. Such implementations have many well-known problems, here informally called “robustness issues”. To reconcile theory and practice, authors have suggested that theoretical algorithms ought to be redesigned to become robust under fixed-precision arithmetic. We suggest that in many cases, implementors should make robustness a non-issue by computing exactly. The advantages of exact computation are too many to ignore. Many of the presumed difficulties of exact computation are partly surmountable and partly inherent with the robustness goal. This paper formulates the theoretical framework for exact computation based on algebraic numbers. We then examine the practical support needed to make the exact approach a viable alternative. It turns out that the exact computation paradigm encompasses a rich set of computational tactics. Our fundamental premise is that the traditional “BigNumber” package that forms the work-horse for exact computation must be reinvented to take advantage of many features found in geometric algorithms. Beyond this, we postulate several other packages to be built on top of the BigNumber package.
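
A toy illustration of the point, not taken from the paper: the 2-D orientation predicate evaluated in machine floating point versus exact arithmetic, with Python's arbitrary-precision integers standing in for a "BigNumber"-style package (fractions.Fraction would handle rational input).

```python
def orient2d(ax, ay, bx, by, cx, cy):
    """Sign of the cross product (b - a) x (c - a):
    +1 left turn, -1 right turn, 0 collinear."""
    det = (bx - ax) * (cy - ay) - (by - ay) * (cx - ax)
    return (det > 0) - (det < 0)

# a, b, c form a strict left turn, but b's x-coordinate is not representable
# in double precision, so the floating-point predicate reports "collinear".
a, b, c = (0, 0), (2**53 + 1, 1), (2**53, 1)

exact_sign = orient2d(*a, *b, *c)                                  # +1
float_sign = orient2d(*(float(v) for p in (a, b, c) for v in p))   # 0 (wrong)
print(exact_sign, float_sign)
```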

219 citations


Book
01 Jan 1997
TL;DR: The classical integral transform method and the generalized integral transform technique (GITT), implemented in Mathematica, have been used extensively in the literature, as discussed in this book.
Abstract: Improved Formulations. Computational Solutions. Special Topics and Applications. Symbolic-Numerical Computation with Mathematica. Lumped-Differential Formulations with Mathematica. Classical Integral Transform Method with Mathematica. Generalized Integral Transform Technique with Mathematica. References. Appendices.

219 citations


Journal ArticleDOI
TL;DR: In this paper, a hierarchical multipole method was developed for fast computation of the Coulomb matrix, and a linear scaling algorithm for calculation of the Fock matrix was demonstrated for a sequence of water clusters at the restricted Hartree-Fock/3-21G level of theory.
Abstract: Computation of the Fock matrix is currently the limiting factor in the application of Hartree-Fock and hybrid Hartree-Fock/density functional theories to larger systems. Computation of the Fock matrix is dominated by calculation of the Coulomb and exchange matrices. With conventional Gaussian-based methods, computation of the Fock matrix typically scales as $\sim N^{2.7}$, where N is the number of basis functions. A hierarchical multipole method is developed for fast computation of the Coulomb matrix. This method, together with a recently described approach to computing the Hartree-Fock exchange matrix of insulators [J. Chem. Phys. 105, 2726 (1996)], leads to a linear scaling algorithm for calculation of the Fock matrix. Linear scaling computation of the Fock matrix is demonstrated for a sequence of water clusters at the restricted Hartree-Fock/3-21G level of theory, and the corresponding accuracies in converged total energies are shown to be comparable with those obtained from standard quantum chemistry programs.

213 citations


Proceedings ArticleDOI
01 Jan 1997

204 citations


BookDOI
01 Jan 1997

203 citations


Book ChapterDOI
01 Jan 1997

Journal ArticleDOI
TL;DR: In this paper, the kinematic boundary condition and the space conservation law are used to determine the position and shape of the free-surface interface, which can be easily implemented in any existing finite-volume method using either structured or unstructured grids.
Abstract: This article outlines the development and application of an interface-tracking algorithm for computation of free-surface flows using the finite-volume method and moving grids. The kinematic boundary condition and the space conservation law are used to determine the position and shape of the free-surface interface. Several test cases are selected to demonstrate the method's accuracy and applicability for calculation of two- and three-dimensional free-surface flows. The approach can easily be implemented in any existing finite-volume method using either structured or unstructured grids.

Journal ArticleDOI
TL;DR: In this article, the authors estimate the regions in which each mode of evaluation is preferable according to computing efficiency and accuracy considerations, and a fast numerical algorithm is introduced for each region.
Abstract: One of the most basic optical ‘components’ is free-space propagation. A common approximation used when calculating the resultant field distribution after propagation is the Fresnel integral. This integral can be evaluated in two ways: directly or by using the angular spectrum. In this paper, we estimate the regions in which each mode of evaluation is preferable according to computing efficiency and accuracy considerations. A fast numerical algorithm is introduced for each region. The result is relevant also for the evaluation of the Rayleigh-Sommerfeld diffraction formula.
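
A minimal sketch of the angular-spectrum route, assuming a uniformly sampled field and FFT-based evaluation; the grid size, wavelength, and propagation distance are arbitrary illustrative values, and the paper's regime analysis is not reproduced.

```python
import numpy as np

def fresnel_angular_spectrum(u0, dx, wavelength, z):
    """Propagate a sampled complex field u0 (N x N, pixel pitch dx) by distance z."""
    n = u0.shape[0]
    fx = np.fft.fftfreq(n, d=dx)                   # spatial frequencies
    FX, FY = np.meshgrid(fx, fx)
    # Fresnel transfer function applied in the frequency domain.
    H = np.exp(1j * 2 * np.pi / wavelength * z) \
        * np.exp(-1j * np.pi * wavelength * z * (FX**2 + FY**2))
    return np.fft.ifft2(np.fft.fft2(u0) * H)

# Usage: propagate a square aperture illuminated by a unit plane wave.
n, dx, wl, z = 512, 10e-6, 633e-9, 0.05            # 10 um pixels, HeNe, 5 cm
x = (np.arange(n) - n // 2) * dx
X, Y = np.meshgrid(x, x)
aperture = ((np.abs(X) < 0.5e-3) & (np.abs(Y) < 0.5e-3)).astype(complex)
u1 = fresnel_angular_spectrum(aperture, dx, wl, z)
intensity = np.abs(u1)**2
```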

Proceedings ArticleDOI
01 Aug 1997
TL;DR: Two novel tools for the study of distributed computing are introduced, and it is shown that the topological structure that corresponds to an iterated model has a nice recursive structure, and that the iterated version of the atomic snapshot memory solves any task solvable by the non-iterated model.

Abstract: This paper introduces two novel tools for the study of distributed computing and shows their utility by using them to exhibit a simple derivation of the Herlihy and Shavit characterization of wait-free shared-memory computation. The first tool is the notion of the iterated version of a given model. We show that the topological structure that corresponds to an iterated model has a nice recursive structure, and that the iterated version of the atomic snapshot memory solves any task solvable by the non-iterated model. The second tool is an iterated explicit simple convergence algorithm. In the Ph.D. thesis of the first author, these tools were used to characterize models more complex than read-write shared memory.


Journal ArticleDOI
TL;DR: This paper describes a method for estimating the distance between a robot and its surrounding environment using best ellipsoid fit, and presents an incremental version of the distance computation, which takes place along a continuous trajectory taken by the robot.
Abstract: This paper describes a method for estimating the distance between a robot and its surrounding environment using best ellipsoid fit. The method consists of the following two stages. First we approximate the detailed geometry of the robot and its environment by minimum-volume enclosing ellipsoids. The computation of these ellipsoids is a convex optimization problem, for which efficient algorithms are known. Then we compute a conservative distance estimate using an important but little-known formula for the distance of a point from an n-dimensional ellipse. The computation of the distance estimate (and its gradient vector) is shown to be an eigenvalue problem, whose solution can be rapidly found using standard techniques. We also present an incremental version of the distance computation, which takes place along a continuous trajectory taken by the robot. We have implemented the proposed approach and present some preliminary results.
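
For context, distance from an exterior point to an axis-aligned ellipsoid can also be obtained by root-finding on the Lagrange multiplier, as in the generic sketch below. This is not the paper's eigenvalue formulation, only a simple illustration of the kind of query involved; the ellipsoid semi-axes and query point are made up.

```python
import numpy as np

def distance_point_to_ellipsoid(p, a, tol=1e-12):
    """Distance from exterior point p to the ellipsoid sum((x_i/a_i)**2) = 1."""
    p, a = np.asarray(p, float), np.asarray(a, float)
    f = lambda t: np.sum((a * p / (a**2 + t))**2) - 1.0   # decreasing for t >= 0
    lo, hi = 0.0, 1.0
    while f(hi) > 0:                                      # bracket the root
        hi *= 2.0
    for _ in range(200):                                  # bisection
        mid = 0.5 * (lo + hi)
        if f(mid) > 0:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    t = 0.5 * (lo + hi)
    x = a**2 * p / (a**2 + t)                             # closest boundary point
    return np.linalg.norm(x - p), x

d, x_closest = distance_point_to_ellipsoid(p=[4.0, 3.0, 0.0], a=[2.0, 1.0, 1.0])
print(d, x_closest)
```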

Proceedings ArticleDOI
13 Apr 1997
TL;DR: An algorithm for evolutionary computation with DNA, which takes advantage of errors to produce change and variation in the population, is sketched, together with its application to a search for good DNA encodings.

Abstract: Computation based on manipulation of DNA molecules has the potential to solve problems with massive parallelism. DNA computation, however, is implemented with chemical reactions between the nucleotide bases, and therefore the results can be error-prone. Application of DNA-based computation to traditional computing paradigms requires error-free computation, which the DNA chemistry is unable to support. Careful encoding of the nucleotide sequences can alleviate the production of errors, but these good encodings are difficult to find. In this paper, an algorithm for evolutionary computation with DNA is sketched. Evolutionary computation does not require error-free DNA chemistry, and in fact takes advantage of errors to produce change and variation in the population. An application of the DNA-based evolution program to a search for good DNA encodings is sketched.
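
A toy sketch of the idea: evolve sets of DNA codewords with mutation and selection, scoring an individual by the minimum pairwise Hamming distance between its codewords. Real encoding criteria also consider reverse complements, melting temperature, and secondary structure; all parameters here are illustrative and this is not the paper's program.

```python
import random

BASES = "ACGT"
WORD_LEN, WORDS_PER_SET, POP_SIZE, GENERATIONS = 8, 6, 40, 200
rng = random.Random(1)

def random_individual():
    return [[rng.choice(BASES) for _ in range(WORD_LEN)] for _ in range(WORDS_PER_SET)]

def fitness(ind):
    """Minimum Hamming distance over all codeword pairs (higher is better)."""
    return min(sum(a != b for a, b in zip(w1, w2))
               for i, w1 in enumerate(ind) for w2 in ind[i + 1:])

def mutate(ind, rate=0.05):
    return [[rng.choice(BASES) if rng.random() < rate else base for base in word]
            for word in ind]

population = [random_individual() for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    survivors = population[: POP_SIZE // 2]              # truncation selection
    population = survivors + [mutate(rng.choice(survivors))
                              for _ in range(POP_SIZE - len(survivors))]

best = max(population, key=fitness)
print(fitness(best), ["".join(w) for w in best])
```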

Proceedings ArticleDOI
01 Jul 1997
TL;DR: A new non-modular algorithm for the computation of (sub)resultants is proposed, which combines the half-gcd technique with the structure of subresultants and leads to a running time of $O(n^{2+\epsilon} s^{1+\epsilon})$ in the case of univariate polynomials of degree n with coefficients of size s.

Abstract: A new non-modular algorithm for the computation of (sub)resultants is proposed. It combines the technique of half-gcd with the structure of subresultants. This leads to a running time of $O(n^{2+\epsilon} s^{1+\epsilon})$ in the case of univariate polynomials of degree n with coefficients of size s. Brown's and Collins' non-modular algorithm needs time $O(n^{3+\epsilon} s^{1+\epsilon})$ for this task, if fast multiplication methods are used. An analogous speed-up is obtained in the multivariate case.
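
For reference, the sketch below shows what is being computed rather than how the fast algorithm computes it: the resultant as the determinant of the Sylvester matrix, evaluated in exact rational arithmetic. This naive route costs on the order of n^3 arithmetic operations in the degree (ignoring coefficient growth), in contrast to the paper's near-quadratic bound.

```python
from fractions import Fraction

def sylvester_resultant(f, g):
    """f, g: coefficient lists, highest degree first, e.g. x^2 - 1 -> [1, 0, -1]."""
    m, n = len(f) - 1, len(g) - 1                 # degrees
    size = m + n
    M = [[Fraction(0)] * size for _ in range(size)]
    for i in range(n):                            # n shifted copies of f
        for j, c in enumerate(f):
            M[i][i + j] = Fraction(c)
    for i in range(m):                            # m shifted copies of g
        for j, c in enumerate(g):
            M[n + i][i + j] = Fraction(c)
    det = Fraction(1)                             # exact Gaussian elimination
    for col in range(size):
        pivot = next((r for r in range(col, size) if M[r][col] != 0), None)
        if pivot is None:
            return Fraction(0)
        if pivot != col:
            M[col], M[pivot] = M[pivot], M[col]
            det = -det
        det *= M[col][col]
        for r in range(col + 1, size):
            factor = M[r][col] / M[col][col]
            for k in range(col, size):
                M[r][k] -= factor * M[col][k]
    return det

print(sylvester_resultant([1, 0, -1], [1, -1]))   # 0: x^2 - 1 and x - 1 share a root
print(sylvester_resultant([1, 0, -1], [1, -2]))   # 3: no common root
```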

Book
01 Nov 1997
TL;DR: A Mathematical Framework for Combinational/Structural Analysis of Linear Dynamical Systems by Means of Matroids and Symbolic Methods for the Simulation of Planar Mechanical Systems in Design.

Abstract: Contributors. Preface. Introduction. Polynomial Continuation and its Relationship to the Symbolic Reduction of Polynomial Systems. Elimination Methods: an Introduction. On the Solutions of a Set of Polynomial Equations. Quantifier Elimination for Conjunctions of Linear Constraints via a Convex Hull Algorithm. Elimination Theory and Computer Vision: Recognition and Positioning of Curved 3D Objects from Range, Intensity, or Contours. 2D and 3D Object Recognition and Positioning with Algebraic Invariants and Covariants. Applications of Invariant Theory in Computer Vision. Distance Metrics for Comparing Shapes in the Plane. A Mathematical Framework for Combinational/Structural Analysis of Linear Dynamical Systems by Means of Matroids. Symbolic Methods for the Simulation of Planar Mechanical Systems in Design. Basic Requirements for the Automatic Generation of FORTRAN Code. Symbolic and Parallel Adaptive Methods for Partial Differential Equations. An Interactive Symbolic-Numeric Interface to Parallel ELLPACK for Building General PDE Solvers. Symbolic/Numeric Techniques in Modeling and Simulation. Symbolic and Numeric Computation: the Example of IRENA. Author Index. Index.

01 Jan 1997
TL;DR: An algorithm for fast computation of Richards's smooth molecular surface is described, and it is shown that this algorithm is easily parallelizable and scales linearly with the number of atoms in the molecule.

Abstract: An algorithm for fast computation of Richards's smooth molecular surface is described. Our algorithm is easily parallelizable and scales linearly with the number of atoms in the molecule.

Proceedings ArticleDOI
10 Sep 1997
TL;DR: This work presents a powerful mechanically based cloth simulation system based on an optimized way to compute elastic forces between vertices of an irregular triangle mesh, which combines the precision of elasticity modelling with the speed of a simple spring-mass particle system.

Abstract: In this contribution towards creating interactive environments for garment design and simulation, we present a powerful mechanically based cloth simulation system. It is based on an optimized way to compute elastic forces between vertices of an irregular triangle mesh, which combines the precision of elasticity modelling with the speed of a simple spring-mass particle system. Efficient numerical integration error management keeps computation speed high by allowing large computation timesteps, and also maintains very good stability, suitable for interactive applications. Constraints, such as collisions or "elastics", are integrated in a unified way that preserves robustness and computation speed. We illustrate the potential of our new system through examples showing its efficiency and interactivity.
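
A bare-bones spring-mass sketch over the edges of a triangle mesh, integrated with semi-implicit Euler, just to show the per-edge force loop being optimized; the precise elasticity model, integration error management, and unified constraint handling described above are not reproduced, and all parameters are made up.

```python
import numpy as np

def step(x, v, edges, rest, mass, pinned, dt=1e-3, k=500.0, damping=0.02):
    """One semi-implicit Euler step of a toy spring-mass cloth."""
    f = np.zeros_like(x)
    f[:, 1] -= 9.81 * mass                                  # gravity
    d = x[edges[:, 1]] - x[edges[:, 0]]                     # per-edge vectors
    length = np.linalg.norm(d, axis=1, keepdims=True)
    spring = k * (length - rest[:, None]) * d / np.maximum(length, 1e-9)
    np.add.at(f, edges[:, 0], spring)                       # equal and opposite forces
    np.add.at(f, edges[:, 1], -spring)
    f -= damping * v                                        # crude velocity damping
    v = v + dt * f / mass[:, None]
    v[pinned] = 0.0                                         # pinned vertices stay put
    return x + dt * v, v

# Usage: a 3x3 cloth patch, pinned at two corners, falling under gravity.
n = 3
x = np.array([[i * 0.1, 0.0, j * 0.1] for j in range(n) for i in range(n)])
tris = [(j*n+i, j*n+i+1, (j+1)*n+i) for j in range(n-1) for i in range(n-1)] + \
       [(j*n+i+1, (j+1)*n+i+1, (j+1)*n+i) for j in range(n-1) for i in range(n-1)]
edges = np.array(sorted({tuple(sorted(e)) for t in tris
                         for e in ((t[0], t[1]), (t[1], t[2]), (t[2], t[0]))}))
rest = np.linalg.norm(x[edges[:, 1]] - x[edges[:, 0]], axis=1)
mass, v = np.full(len(x), 0.01), np.zeros_like(x)
pinned = [n * (n - 1), n * n - 1]
for _ in range(2000):
    x, v = step(x, v, edges, rest, mass, pinned)
```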

Journal ArticleDOI
TL;DR: An algorithm that can achieve exact self-calibration for high-precision two-dimensional (2-D) metrology stages by employing the orthogonal Fourier series to expand the stage error map, which allows fast numerical computation.
Abstract: We describe an algorithm that can achieve exact self-calibration for high-precision two-dimensional (2-D) metrology stages. Previous attempts to solve this problem have often given nonexact or impractical solutions. Self-calibration is the procedure of calibrating a metrology stage by an artifact plate whose mark positions are not precisely known. By assuming rigidness of the artifact plate, this algorithm extracts the stage error map from comparison of three different measurement views of the plate. The algorithm employs the orthogonal Fourier series to expand the stage error map, which allows fast numerical computation. When there is no random measurement noise, this algorithm exactly calibrates the stage error at those sites sampled by the mark array. In the presence of random measurement noise, the algorithm introduces a calibration error of about the same size as the random measurement noise itself, which is the limit to be achieved by any self-calibration algorithm. The algorithm has been verified by computer simulation with and without random measurement noise. Other possible applications of this algorithm are also discussed.

Journal ArticleDOI
TL;DR: A novel formulation of the range recovery problem based on computation of the differential variation in image intensities with respect to changes in camera position and a variant based on changes in aperture size is described.
Abstract: We describe a novel formulation of the range recovery problem based on computation of the differential variation in image intensities with respect to changes in camera position (or aperture size). This method uses a single stationary camera and a pair of calibrated optical masks to directly measure this differential quantity. The subsequent computation of the range image involves simple arithmetic combinations, and is suitable for real-time implementation. Both the theoretical and practical implications of this formulation are addressed.

Book ChapterDOI
07 Jul 1997
TL;DR: A general semantic universe of call-by-value computation based on elements of game semantics is presented, and its appropriateness as a semantic universe is validated by the full abstraction result for call- by-value PCF, a generic typed programming language with call-By-value evaluation.
Abstract: We present a general semantic universe of call-by-value computation based on elements of game semantics, and validate its appropriateness as a semantic universe by the full abstraction result for call-by-value PCF, a generic typed programming language with call-by-value evaluation. The key idea is to consider the distinction between call-by-name and call-by-value as that of the structure of information flow, which determines the basic form of games. In this way call-by-name computation and call-by-value computation arise as two independent instances of sequential functional computation with distinct algebraic structures. We elucidate the type structures of the universe following the standard categorical framework developed in the context of domain theory. Mutual relationship between the presented category of games and the corresponding call-by-name universe is also clarified.

Journal ArticleDOI
TL;DR: This paper provides a closed-form approximation to a sum of infinite series based on an optimal fitting to the weights of the Legendre polynomials to compute a single scalp potential in response to an arbitrary current dipole located within a four-shell spherical volume conductor model.
Abstract: Computationally localizing electrical current sources of the electroencephalographic signal requires a volume conductor model which relates theoretical scalp potentials to the dipolar source located within the modeled brain. The commonly used multishell spherical model provides this source-potential relationship using a sum of infinite series whose computation is difficult. This paper provides a closed-form approximation to this sum based on an optimal fitting to the weights of the Legendre polynomials. The second-order (third-order) approximation algorithm, implemented by a provided C-routine, requires only 100 (140) floating point operations to compute a single scalp potential in response to an arbitrary current dipole located within a four-shell spherical volume conductor model. This cost of computation represents only 6.3% (8.9%) of that required by the direct method. The relative mean square error, measured by using 20,000 random dipoles distributed within the modeled brain, is only 0.29% (0.066%).

Reference BookDOI
01 Jan 1997

Book ChapterDOI
01 Jan 1997
TL;DR: This paper surveys the existing models and results in analog, continuous-time computation, and points to some of the open research questions.

Abstract: Motivated partly by the resurgence of neural computation research, and partly by advances in device technology, there has been a recent increase of interest in analog, continuous-time computation. However, while special-case algorithms and devices are being developed, relatively little work exists on the general theory of continuous-time models of computation. In this paper, we survey the existing models and results in this area, and point to some of the open research questions.

Proceedings ArticleDOI
06 Mar 1997
TL;DR: A simple geometric interpretation allows an efficient implementation of the basic arithmetic operations, and an index calculus for logarithmic-like arithmetic with considerable hardware reductions in look-up table size is introduced.
Abstract: Presents a rigorous theoretical analysis of the main properties of a double-base number system, using bases 2 and 3. In particular, we emphasize the sparseness of the representation. A simple geometric interpretation allows an efficient implementation of the basic arithmetic operations, and we introduce an index calculus for logarithmic-like arithmetic with considerable hardware reductions in look-up table size. Two potential areas of applications are discussed: applications in digital signal processing for computation of inner products and in cryptography for computation of modular exponentiations.
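
As a small illustration of the sparseness claim, the greedy sketch below writes an integer as a sum of terms of the form 2^a * 3^b; the paper's index calculus and hardware mapping are not reproduced, and greedy is only one simple way to obtain such a representation.

```python
def largest_2_3_term(n):
    """Largest value 2^a * 3^b <= n."""
    best, p3 = 1, 1
    while p3 <= n:
        p2 = p3
        while p2 * 2 <= n:
            p2 *= 2
        best = max(best, p2)
        p3 *= 3
    return best

def double_base_terms(n):
    """Greedy double-base (2, 3) decomposition of a positive integer."""
    terms = []
    while n > 0:
        t = largest_2_3_term(n)
        terms.append(t)
        n -= t
    return terms

# 127 = 108 + 18 + 1 = 2^2*3^3 + 2*3^2 + 1: three terms, versus seven
# nonzero bits in its binary expansion.
print(double_base_terms(127))
```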

01 Jan 1997
TL;DR: This paper proposes a step-wise self-assembly method to increase the likelihood of successful assembly, decrease the number of tiles required, and provide additional control of the assembly process.

Abstract: Biomolecular computation (BMC) is computation at the molecular scale, using biotechnology engineering techniques. Most proposed methods for BMC use distributed (molecular) parallelism (DP), where operations are executed in parallel on large numbers of distinct molecules. BMC done exclusively by DP requires that the computation execute sequentially within any given molecule (though done in parallel for multiple molecules). In contrast, local parallelism (LP) allows operations to be executed in parallel on each given molecule. Winfree et al. [W96, WYS96] proposed an innovative method for LP-BMC, that of computation by unmediated self-assembly of 2D arrays of DNA molecules, applying known domino tiling techniques (see Buchi [B62], Berger [B66], Robinson [R71], and Lewis and Papadimitriou [LP81]) in combination with the DNA self-assembly techniques of Seeman et al. [SZC94]. We develop improved techniques to more fully exploit the potential power of LP-BMC. We propose a refined step-wise assembly method, which provides control of the assembly in distinct steps. Step-wise assembly may increase the likelihood of success of assembly, decrease the number of tiles required, and provide additional control of the assembly process. The assembly depth is the number of stages of assembly required and the assembly size is the number of tiles required. We also introduce the assembly frame, a rigid nanostructure which binds the input DNA strands in place on its boundaries and constrains the shape of the assembly. Our main results are LP-BMC algorithms for some fundamental problems that form the basis of many parallel computations. For these problems we decrease the assembly size to linear in the input size and significantly decrease the assembly depth. We give LP-BMC algorithms with linear assembly size and logarithmic assembly depth for the parallel prefix computation problems, which include integer addition, subtraction, multiplication by a constant number, finite state automata simulation, and ...
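
For readers unfamiliar with the term, the parallel prefix (scan) pattern mentioned above is shown below in ordinary Python: each of the log2(n) rounds could be performed in parallel, which is the shape mirrored by the logarithmic assembly depth. The molecular implementation itself is not modeled here.

```python
from operator import add

def inclusive_scan(values, op=add):
    """Hillis-Steele inclusive scan: log2(n) rounds, each fully parallelizable."""
    x = list(values)
    shift = 1
    while shift < len(x):
        # In each round, element i combines with element i - shift, for all i at once.
        x = [op(x[i - shift], x[i]) if i >= shift else x[i] for i in range(len(x))]
        shift *= 2
    return x

print(inclusive_scan([3, 1, 4, 1, 5, 9, 2, 6]))   # [3, 4, 8, 9, 14, 23, 25, 31]
```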

Journal ArticleDOI
TL;DR: A parallel algorithm for adaptive mesh refinement that is suitable for implementation on distributed-memory parallel computers is presented and it is shown that the algorithm has a fast expected running time under the parallel random access machine (PRAM) computation model.
Abstract: Computational methods based on the use of adaptively constructed nonuniform meshes reduce the amount of computation and storage necessary to perform many scientific calculations. The adaptive construction of such nonuniform meshes is an important part of these methods. In this paper, we present a parallel algorithm for adaptive mesh refinement that is suitable for implementation on distributed-memory parallel computers. Experimental results obtained on the Intel DELTA are presented to demonstrate that for scientific computations involving the finite element method, the algorithm exhibits scalable performance and has a small run time in comparison with other aspects of the scientific computations examined. It is also shown that the algorithm has a fast expected running time under the parallel random access machine (PRAM) computation model.
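
A serial one-dimensional caricature of the refinement loop being parallelized (split every element whose error indicator exceeds a tolerance); the distributed-memory data structures and PRAM analysis are not reproduced, and the test function and tolerance are made up.

```python
import math

def error_indicator(a, b, f):
    """Deviation of f from linearity on the element [a, b]."""
    return abs(f(0.5 * (a + b)) - 0.5 * (f(a) + f(b)))

def refine(nodes, f, tol=1e-3, max_passes=20):
    for _ in range(max_passes):
        new_nodes, refined = [nodes[0]], False
        for a, b in zip(nodes, nodes[1:]):
            if error_indicator(a, b, f) > tol:
                new_nodes.append(0.5 * (a + b))     # split the element
                refined = True
            new_nodes.append(b)
        nodes = new_nodes
        if not refined:
            break
    return nodes

mesh = refine([0.0, 0.25, 0.5, 0.75, 1.0],
              f=lambda x: math.exp(-50 * (x - 0.5) ** 2))
print(len(mesh), mesh[:5])   # elements cluster where the bump varies fastest
```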