
Showing papers on "Computation" published in 2004


Journal ArticleDOI
TL;DR: A simplex computation for an arc-chain formulation of the maximal multi-commodity network flow problem is proposed, which treats non-basic variables implicitly by replacing the usual method of determining a vector to enter the basis with several applications of a combinatorial algorithm for finding a shortest chain joining a pair of points in a network.
Abstract: (This article originally appeared in Management Science, October 1958, Volume 5, Number 1, pp. 97-101, published by The Institute of Management Sciences.) A simplex computation for an arc-chain formulation of the maximal multi-commodity network flow problem is proposed. Since the number of variables in this formulation is too large to be dealt with explicitly, the computation treats non-basic variables implicitly by replacing the usual method of determining a vector to enter the basis with several applications of a combinatorial algorithm for finding a shortest chain joining a pair of points in a network.
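For orientation, here is a minimal sketch of the pricing idea behind such an arc-chain (column-generation) scheme: under the current arc prices, find the cheapest chain between a commodity's endpoints and treat it as the candidate column to enter the basis. The toy graph, the price values, and the use of Dijkstra's algorithm are illustrative assumptions, not details taken from the paper.

```python
import heapq

def shortest_chain(arcs, prices, source, sink):
    """Dijkstra's algorithm on arc prices; returns (cost, chain as a list of nodes)."""
    adj = {}
    for (u, v) in arcs:
        adj.setdefault(u, []).append((v, prices[(u, v)]))
    dist, prev = {source: 0.0}, {}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj.get(u, []):
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                prev[v] = u
                heapq.heappush(heap, (d + w, v))
    if sink not in dist:
        return float("inf"), []
    chain, node = [sink], sink
    while node != source:
        node = prev[node]
        chain.append(node)
    return dist[sink], list(reversed(chain))

# Toy network: the "prices" stand in for the simplex multipliers on the arc capacities.
arcs = [("s", "a"), ("a", "t"), ("s", "b"), ("b", "t")]
prices = {("s", "a"): 1.0, ("a", "t"): 1.0, ("s", "b"): 0.2, ("b", "t"): 0.2}
cost, chain = shortest_chain(arcs, prices, "s", "t")
print(cost, chain)  # the cheapest chain is the candidate column to enter the basis
```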

392 citations


Posted Content
TL;DR: This paper settles the question and shows that the 2-LOCAL HAMILTONIAN problem is QMA-complete, and demonstrates that adiabatic computation with two-local interactions on qubits is equivalent to standard quantum computation.
Abstract: The k-local Hamiltonian problem is a natural complete problem for the complexity class QMA, the quantum analog of NP. It is similar in spirit to MAX-k-SAT, which is NP-complete for k<=2. It was known that the problem is QMA-complete for any k <= 3. On the other hand 1-local Hamiltonian is in P, and hence not believed to be QMA-complete. The complexity of the 2-local Hamiltonian problem has long been outstanding. Here we settle the question and show that it is QMA-complete. We provide two independent proofs; our first proof uses only elementary linear algebra. Our second proof uses a powerful technique for analyzing the sum of two Hamiltonians; this technique is based on perturbation theory and we believe that it might prove useful elsewhere. Using our techniques we also show that adiabatic computation with two-local interactions on qubits is equivalent to standard quantum computation.

362 citations


Posted Content
TL;DR: The model of adiabatic quantum computation has recently attracted attention in the physics and computer science communities, but its exact computational power had been unknown; this work shows that the adiabatic computation model and the standard quantum circuit model are polynomially equivalent.
Abstract: Adiabatic quantum computation has recently attracted attention in the physics and computer science communities, but its computational power was unknown. We describe an efficient adiabatic simulation of any given quantum algorithm, which implies that the adiabatic computation model and the conventional quantum computation model are polynomially equivalent. Our result can be extended to the physically realistic setting of particles arranged on a two-dimensional grid with nearest neighbor interactions. The equivalence between the models provides a new vantage point from which to tackle the central issues in quantum computation, namely designing new quantum algorithms and constructing fault tolerant quantum computers. In particular, by translating the main open questions in the area of quantum algorithms to the language of spectral gaps of sparse matrices, the result makes these questions accessible to a wider scientific audience, acquainted with mathematical physics, expander theory and rapidly mixing Markov chains.
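A rough numerical illustration (not from the paper) of the adiabatic picture this equivalence rests on: interpolate H(s) = (1-s)·H0 + s·H1 between a start Hamiltonian and a problem Hamiltonian and inspect the spectral gap along the path, since the gap governs how slowly the interpolation must be run. The particular H0, H1, and grid of s values below are illustrative.

```python
import numpy as np

def spectral_gap(H):
    """Difference between the two lowest eigenvalues of a Hermitian matrix."""
    evals = np.linalg.eigvalsh(H)
    return evals[1] - evals[0]

# H0: transverse-field-like start Hamiltonian; H1: diagonal "problem" Hamiltonian (both illustrative).
X = np.array([[0.0, 1.0], [1.0, 0.0]])
I = np.eye(2)
H0 = -(np.kron(X, I) + np.kron(I, X))
H1 = np.diag([0.0, 1.0, 1.0, 2.0])

for s in np.linspace(0.0, 1.0, 5):
    H = (1.0 - s) * H0 + s * H1       # the interpolating Hamiltonian H(s)
    print(f"s={s:.2f}  gap={spectral_gap(H):.3f}")
```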

349 citations


Journal ArticleDOI
TL;DR: An efficient algorithm based on interval analysis is presented that allows us to solve the forward kinematics, i.e., to determine all the possible poses of the platform for given joint coordinates; it is competitive in terms of computation time with a real-time algorithm such as the Newton scheme, while being safer.
Abstract: We consider in this paper a Gough-type parallel robot and we present an efficient algorithm based on interval analysis that allows us to solve the forward kinematics, i.e., to determine all the possible poses of the platform for given joint coordinates. This algorithm is numerically robust as numerical round-off errors are taken into account; the provided solutions are either exact in the sense that it will be possible to refine them up to an arbitrary accuracy, or they are flagged only as a “possible” solution, as either the numerical accuracy of the computation does not allow us to guarantee them or the robot is in a singular configuration. It allows us to take into account physical and technological constraints on the robot (for example, limited motion of the passive joints). Another advantage is that, assuming realistic constraints on the velocity of the robot, it is competitive in terms of computation time with a real-time algorithm such as the Newton scheme, while being safer.
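As a drastically simplified, hedged illustration of the interval branch-and-prune idea such an algorithm builds on (the real solver handles the full Gough-platform kinematic equations, round-off bounds, and physical constraints), the sketch below prunes and bisects interval boxes for a single one-dimensional constraint.

```python
def f_interval(lo, hi):
    """Crude interval extension of f(x) = x*x - 2 on [lo, hi]."""
    endpoints = [lo * lo, hi * hi]
    lo_sq = 0.0 if lo <= 0.0 <= hi else min(endpoints)
    return lo_sq - 2.0, max(endpoints) - 2.0

def branch_and_prune(lo, hi, tol=1e-6):
    """Keep bisecting boxes that might contain a zero of f; prune the rest."""
    boxes, solutions = [(lo, hi)], []
    while boxes:
        a, b = boxes.pop()
        f_lo, f_hi = f_interval(a, b)
        if f_lo > 0.0 or f_hi < 0.0:      # zero certainly excluded: prune the box
            continue
        if b - a < tol:                   # small enough: report as a "possible" solution box
            solutions.append((a, b))
            continue
        m = 0.5 * (a + b)                 # otherwise bisect and keep both halves
        boxes.extend([(a, m), (m, b)])
    return solutions

print(branch_and_prune(-3.0, 3.0))        # tiny boxes around +sqrt(2) and -sqrt(2)
```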

314 citations


Journal ArticleDOI
Jie Zhou1, Jinwei Gu1
TL;DR: A model-based method for the computation of the fingerprint orientation field is proposed that has robust performance on different fingerprint images; experiments show that the performance of a whole fingerprint recognition system can be improved by applying this algorithm instead of previous orientation estimation methods.
Abstract: As a global feature of fingerprints, the orientation field is very important for automatic fingerprint recognition. Many algorithms have been proposed for orientation field estimation, but their results are unsatisfactory, especially for poor quality fingerprint images. In this paper, a model-based method for the computation of the orientation field is proposed. First, a combination model is established for the representation of the orientation field by considering its smoothness except for several singular points, in which a polynomial model is used to describe the orientation field globally and a point-charge model is taken to improve the accuracy locally at each singular point. After the coarse field is computed by using the gradient-based algorithm, a further result can be gained by using the model for a weighted approximation. Due to the global approximation, this model-based orientation field estimation algorithm has a robust performance on different fingerprint images. A further experiment shows that the performance of a whole fingerprint recognition system can be improved by applying this algorithm instead of previous orientation estimation methods.
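For context, a hedged sketch of the standard gradient-based coarse orientation estimate that the proposed model then refines by weighted approximation: doubled-angle averaging of gradients within each block, so that opposite gradient directions reinforce instead of cancelling. The random image below merely stands in for a fingerprint.

```python
import numpy as np

def block_orientation(block):
    """Doubled-angle average of gradient directions within one block."""
    gy, gx = np.gradient(block.astype(float))
    num = 2.0 * np.sum(gx * gy)
    den = np.sum(gx * gx - gy * gy)
    theta_grad = 0.5 * np.arctan2(num, den)   # dominant gradient direction
    return theta_grad + np.pi / 2.0           # ridge orientation is orthogonal to it

rng = np.random.default_rng(0)
image = rng.random((64, 64))                  # stand-in for a fingerprint image
for i in range(0, 64, 16):
    row = [block_orientation(image[i:i + 16, j:j + 16]) for j in range(0, 64, 16)]
    print(["%.2f" % angle for angle in row])
```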

190 citations


Book
14 Jun 2004
TL;DR: In this paper, the authors expose the link between the topology of the electromagnetic boundary value problem and a modern approach to algorithms, and propose a framework for linking data structures, algorithms and computation to topological aspects of the problem.
Abstract: Although topology was recognized by Gauss and Maxwell to play a pivotal role in the formulation of electromagnetic boundary value problems, it is a largely unexploited tool for field computation. The development of algebraic topology since Maxwell provides a framework for linking data structures, algorithms, and computation to topological aspects of three-dimensional electromagnetic boundary value problems. This book attempts to expose the link between Maxwell and a modern approach to algorithms.

185 citations


Journal ArticleDOI
TL;DR: A combined interior-point and active-set method is proposed for computing the minimum-volume n-dimensional ellipsoid that must contain m given points a1, …, am, a convex constrained problem arising in data mining and robust statistics.
Abstract: We present a practical algorithm for computing the minimum-volume n-dimensional ellipsoid that must contain m given points a1, …, am ∈ ℝ^n. This convex constrained problem arises in a variety of applied computational settings, particularly in data mining and robust statistics. Its structure makes it particularly amenable to solution by interior-point methods, and it has been the subject of much theoretical complexity analysis. Here we focus on computation. We present a combined interior-point and active-set method for solving this problem. Our computational results demonstrate that our method solves very large problem instances (m = 30,000 and n = 30) to a high degree of accuracy in under 30 seconds on a personal computer.
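For illustration only, the sketch below computes the same object with Khachiyan's simple first-order barycentric-coordinate update rather than the paper's interior-point/active-set method; the random 2-D point set and tolerances are arbitrary choices.

```python
import numpy as np

def mvee(points, tol=1e-7, max_iter=10000):
    """Khachiyan-style iteration for the ellipsoid {x : (x-c)^T A (x-c) <= 1} covering all points."""
    m, n = points.shape
    Q = np.vstack([points.T, np.ones(m)])            # lift the points to homogeneous coordinates
    u = np.full(m, 1.0 / m)                          # barycentric weights over the points
    for _ in range(max_iter):
        X = Q @ np.diag(u) @ Q.T
        M = np.einsum("ij,jk,ki->i", Q.T, np.linalg.inv(X), Q)
        j = int(np.argmax(M))
        step = (M[j] - n - 1.0) / ((n + 1.0) * (M[j] - 1.0))
        new_u = (1.0 - step) * u
        new_u[j] += step
        if np.linalg.norm(new_u - u) < tol:
            u = new_u
            break
        u = new_u
    c = points.T @ u
    A = np.linalg.inv(points.T @ np.diag(u) @ points - np.outer(c, c)) / n
    return c, A

pts = np.random.default_rng(1).random((200, 2))
c, A = mvee(pts)
print("center:", c)
print("max (x-c)^T A (x-c):", max((p - c) @ A @ (p - c) for p in pts))  # close to 1 at support points
```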

183 citations


Proceedings ArticleDOI
13 Jun 2004
TL;DR: This work presents a simple decentralized algorithm for computing the top k eigenvectors of a symmetric weighted adjacency matrix, and a proof that it converges essentially in O(τ_mix log² n) rounds of communication and computation, where τ_mix is the mixing time of a random walk on the network.
Abstract: In many large network settings, such as computer networks, social networks, or hyperlinked text documents, much information can be obtained from the network's spectral properties. However, traditional centralized approaches for computing eigenvectors struggle with at least two obstacles: the data may be difficult to obtain (both due to technical reasons and because of privacy concerns), and the sheer size of the networks makes the computation expensive. A decentralized, distributed algorithm addresses both of these obstacles: it utilizes the computational power of all nodes in the network and their ability to communicate, thus speeding up the computation with the network size. And as each node knows its incident edges, the data collection problem is avoided as well. Our main result is a simple decentralized algorithm for computing the top k eigenvectors of a symmetric weighted adjacency matrix, and a proof that it converges essentially in O(τ_mix log² n) rounds of communication and computation, where τ_mix is the mixing time of a random walk on the network. An additional contribution of our work is a decentralized way of actually detecting convergence, and diagnosing the current error. Our protocol scales well, in that the amount of computation performed at any node in any one round, and the sizes of messages sent, depend polynomially on k, but not on the (typically much larger) number n of nodes.
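A hedged, centralized sketch of the orthogonal iteration that such a decentralized scheme emulates: in the distributed setting, each multiplication by the adjacency matrix becomes one round of neighbour-to-neighbour messages, and the orthonormalization is likewise carried out approximately in a decentralized way. The small weighted graph and iteration count below are illustrative.

```python
import numpy as np

def top_k_eigenvectors(W, k, iters=200):
    """Orthogonal iteration: repeated multiply-then-orthonormalize."""
    rng = np.random.default_rng(0)
    V = rng.standard_normal((W.shape[0], k))
    for _ in range(iters):
        V = W @ V                 # decentralized version: one round of neighbour sums per node
        V, _ = np.linalg.qr(V)    # orthonormalization (approximated in a distributed way in the paper)
    return V

W = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)   # small symmetric weighted adjacency matrix (0/1 weights here)
V = top_k_eigenvectors(W, k=2)
print(V.T @ W @ V)  # close to diagonal, with the two largest-magnitude eigenvalues on the diagonal
```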

182 citations


Journal ArticleDOI
01 Sep 2004
TL;DR: An algorithm for fast computation of discretized 3D distance fields of large models composed of tens of thousands of primitives on high-resolution grids using graphics hardware is presented, achieving an order of magnitude improvement in the running time.
Abstract: We present an algorithm for fast computation of discretized 3D distance fields using graphics hardware. Given a set of primitives and a distance metric, our algorithm computes the distance field for each slice of a uniform spatial grid by rasterizing the distance functions of the primitives. We compute bounds on the spatial extent of the Voronoi region of each primitive. These bounds are used to cull and clamp the distance functions rendered for each slice. Our algorithm is applicable to all geometric models and does not make any assumptions about connectivity or a manifold representation. We have used our algorithm to compute distance fields of large models composed of tens of thousands of primitives on high resolution grids. Moreover, we demonstrate its application to medial axis evaluation and proximity computations. As compared to earlier approaches, we are able to achieve an order of magnitude improvement in the running time.
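To make the computed quantity concrete, here is a hedged brute-force CPU sketch of a discretized distance field over a uniform grid; the paper's contribution is computing this quickly on graphics hardware with Voronoi-region culling, which the sketch does not attempt. Point primitives, grid resolution, and the Euclidean metric are illustrative choices.

```python
import numpy as np

def distance_field(primitives, res=16):
    """Distance from every voxel centre of a res^3 grid in [0,1]^3 to the nearest point primitive."""
    axis = np.linspace(0.0, 1.0, res)
    gx, gy, gz = np.meshgrid(axis, axis, axis, indexing="ij")
    voxels = np.stack([gx, gy, gz], axis=-1).reshape(-1, 3)
    d = np.min(np.linalg.norm(voxels[:, None, :] - primitives[None, :, :], axis=-1), axis=1)
    return d.reshape(res, res, res)

prims = np.random.default_rng(2).random((50, 3))     # 50 random point primitives
field = distance_field(prims)
print(field.shape, float(field.min()), float(field.max()))
```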

171 citations


Book ChapterDOI
Fridtjof Stein1
30 Aug 2004
TL;DR: In this paper, an approach for the estimation of visual motion over an image sequence in real time is presented; the method uses the Census Transform as the representation of small image patches and matches these primitives using a table-based indexing scheme.
Abstract: This paper presents an approach for the estimation of visual motion over an image sequence in real time. A new algorithm is proposed which solves the correspondence problem between two images in a very efficient way. The method uses the Census Transform as the representation of small image patches. These primitives are matched using a table-based indexing scheme. We demonstrate the robustness of this technique on real-world image sequences of a road scenario captured from a vehicle-based on-board camera. We focus on the computation of the optical flow. Our method runs in real time on general purpose platforms and handles large displacements.
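A small hedged sketch of the Census Transform signature itself (the table-based matching and the flow estimation are not shown): each pixel is encoded by a bit string recording which neighbours are darker than the centre. The 3x3 neighbourhood and the random test image are illustrative assumptions.

```python
import numpy as np

def census_transform(img):
    """8-bit Census signature for every interior pixel of a grayscale image."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2), dtype=np.uint8)
    centre = img[1:-1, 1:-1]
    bit = 0
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue
            neighbour = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
            out |= (neighbour < centre).astype(np.uint8) << np.uint8(bit)
            bit += 1
    return out

img = np.random.default_rng(3).integers(0, 256, (6, 8))
print(census_transform(img))
```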

170 citations


Proceedings ArticleDOI
07 Oct 2004
TL;DR: This work demonstrates that such architectures can be built by automatic compilation of C programs; that distributed computation is in some respects fundamentally different from monolithic superscalar processors; and that ASIC implementations of ASH use three orders of magnitude less energy compared to high-end superscalar processors.
Abstract: This paper describes a computer architecture, Spatial Computation (SC), which is based on the translation of high-level language programs directly into hardware structures. SC program implementations are completely distributed, with no centralized control. SC circuits are optimized for wires at the expense of computation units. In this paper we investigate a particular implementation of SC: ASH (Application-Specific Hardware). Under the assumption that computation is cheaper than communication, ASH replicates computation units to simplify interconnect, building a system which uses very simple, completely dedicated communication channels. As a consequence, communication on the datapath never requires arbitration; the only arbitration required is for accessing memory. ASH relies on very simple hardware primitives, using no associative structures, no multiported register files, no scheduling logic, no broadcast, and no clocks. As a consequence, ASH hardware is fast and extremely power efficient. In this work we demonstrate three features of ASH: (1) that such architectures can be built by automatic compilation of C programs; (2) that distributed computation is in some respects fundamentally different from monolithic superscalar processors; and (3) that ASIC implementations of ASH use three orders of magnitude less energy compared to high-end superscalar processors, while being on average only 33% slower in performance (3.5x worst-case).

T. Tamura1
01 Jan 2004
TL;DR: In this paper, the authors present a brief review of the six types with which the engineer is likely to come into contact: thermocouples, resistance temperature devices (RTD's and thermistors), infrared radiators, bimetallic devices, liquid expansion devices, and change-of-state devices.
Abstract: Temperature can be measured via a diverse array of sensors. All of them infer temperature by sensing some change in a physical characteristic. Six types with which the engineer is likely to come into contact are: thermocouples, resistance temperature devices (RTD’s and thermistors), infrared radiators, bimetallic devices, liquid expansion devices, and change-of-state devices. It is well to begin with a brief review of each.

Journal ArticleDOI
TL;DR: This work describes, implements, and analyses in detail a high-order fully discrete spectral algorithm for solving the Helmholtz equation exterior to a bounded (sound-soft, sound-hard or absorbing) obstacle in three space dimensions, with Dirichlet, Neumann or Robin boundary conditions.

Book
01 Jan 2004
TL;DR: This book discusses Newton Methods for Nonlinear Optimization, Iterative Methods, and Applications of the Chebyshev Polynomials, and also deals with the Effects of Finite Precision Arithmetic.
Abstract: 1. Nonlinear Equations. Bisection and Inverse Linear Interpolation. Newton's Method. The Fixed Point Theorem. Quadratic Convergence of Newton's Method. Variants of Newton's Method. Brent's Method. Effects of Finite Precision Arithmetic. Newton's Method for Systems. Broyden's Method. 2. Linear Systems. Gaussian Elimination with Partial Pivoting. The LU Decomposition. The LU Decomposition with Pivoting. The Cholesky Decomposition. Condition Numbers. The QR Decomposition. Householder Triangularization and the QR Decomposition. Gram-Schmidt Orthogonalization and the QR Decomposition. The Singular Value Decomposition. 3. Iterative Methods. Jacobi and Gauss-Seidel Iteration. Sparsity. Iterative Refinement. Preconditioning. Krylov Space Methods. Numerical Eigenproblems. 4. Polynomial Interpolation. Lagrange Interpolating Polynomials. Piecewise Linear Interpolation. Cubic Splines. Computation of the Cubic Spline Coefficients. 5. Numerical Integration. Closed Newton-Cotes Formulas. Open Newton-Cotes Formulas and Undetermined Coefficients. Gaussian Quadrature. Gauss-Chebyshev Quadrature. Radau and Lobatto Quadrature. Adaptivity and Automatic Integration. Romberg Integration. 6. Differential Equations. Numerical Differentiation. Euler's Method. Improved Euler's Method. Analysis of Explicit One-Step Methods. Taylor and Runge-Kutta Methods. Adaptivity and Stiffness. Multi-Step Methods. 7. Nonlinear Optimization. One-Dimensional Searches. The Method of Steepest Descent. Newton Methods for Nonlinear Optimization. Multiple Random Start Methods. Direct Search Methods. The Nelder-Mead Method. Conjugate Direction Methods. 8. Approximation Methods. Linear and Nonlinear Least Squares. The Best Approximation Problem. Best Uniform Approximation. Applications of the Chebyshev Polynomials. Afterword. Bibliography. Answers. Index.
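As a flavour of the root-finding material listed above, the hedged sketch below applies two of the methods, bisection and Newton's method, to the same simple equation; the example function is an arbitrary choice, not one from the book.

```python
def bisection(f, a, b, tol=1e-10):
    """Bisection: repeatedly halve an interval with a sign change."""
    while b - a > tol:
        m = 0.5 * (a + b)
        if f(a) * f(m) <= 0:
            b = m
        else:
            a = m
    return 0.5 * (a + b)

def newton(f, df, x0, tol=1e-12, max_iter=50):
    """Newton's method: follow the tangent line to the next iterate."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            break
    return x

f = lambda x: x**3 - 2.0
df = lambda x: 3.0 * x**2
print(bisection(f, 1.0, 2.0), newton(f, df, 1.0))   # both approximate 2**(1/3)
```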

Proceedings ArticleDOI
04 Jul 2004
TL;DR: An efficient algorithm for determining a suitable projection order for performing cylindrical algebraic decomposition is introduced, motivated by a statistical analysis of comprehensive test set computations.
Abstract: We introduce an efficient algorithm for determining a suitable projection order for performing cylindrical algebraic decomposition. Our algorithm is motivated by a statistical analysis of comprehensive test set computations. This analysis introduces several measures on both the projection sets and the entire computation, which turn out to be highly correlated. The statistical data also shows that the orders generated by our algorithm are significantly close to optimal.

Proceedings ArticleDOI
26 Apr 2004
TL;DR: The distributed algorithms developed are linear dynamical systems that generate sequences of approximations to the desired computation; they are locally constructed at each node by exploiting only locally available and macroscopic information about the network topology.
Abstract: In this paper we develop algorithms for distributed computation of a broad range of estimation and detection tasks over networks with arbitrary but fixed connectivity. The distributed algorithms we develop are linear dynamical systems that generate sequences of approximations to the desired computation. The algorithms are locally constructed at each node by exploiting only locally available and macroscopic information about the network topology. We present methods for designing these distributed algorithms so as to optimize the convergence rates to the desired computation and demonstrate their performance characteristics in the context of a problem of signal estimation from multi-node signal observations in Gaussian noise.
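A minimal sketch of this kind of linear iteration, assuming simple Metropolis weights on a small path graph (the paper instead designs the weights to optimize the convergence rate): each node repeatedly averages its value with its neighbours' values, and all nodes converge to the mean of the noisy measurements.

```python
import numpy as np

adjacency = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}   # a small path graph, chosen for illustration
degrees = {i: len(nbrs) for i, nbrs in adjacency.items()}

def metropolis_weights(n):
    """Doubly stochastic weight matrix built only from local degree information."""
    W = np.zeros((n, n))
    for i, nbrs in adjacency.items():
        for j in nbrs:
            W[i, j] = 1.0 / (1.0 + max(degrees[i], degrees[j]))
        W[i, i] = 1.0 - W[i].sum()
    return W

rng = np.random.default_rng(4)
measurements = 5.0 + rng.normal(0.0, 1.0, size=4)    # noisy observations of the value 5
W = metropolis_weights(4)
x = measurements.copy()
for _ in range(200):
    x = W @ x                                        # one round of purely local exchanges
print(measurements.mean(), x)                        # every node approaches the sample mean
```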

01 Jan 2004
TL;DR: The COMFAC (COMplex parallel FACtor analysis) algorithm is based on combining and improving a number of auxiliary subroutines with the purpose of providing the shortest possible computation time.
Abstract: In this paper an algorithm called COMFAC (COMplex parallel FACtor analysis) is developed for fitting the trilinear parallel factor analysis model to data arising from e.g., DS-CDMA signals or joint azimuth-elevation estimation. The algorithm is based on combining and improving a number of auxiliary subroutines with the purpose of providing the shortest possible computation time. The different steps in the overall COMFAC algorithm are described and the algorithm is applied to different relevant problems.

Journal ArticleDOI
TL;DR: An algorithm to compute the internal Voronoi diagram and medial axis of a 3-D polyhedron is presented; it uses exact arithmetic and exact representations for accurate computation of the medial axis.

Journal ArticleDOI
TL;DR: A novel competitive EM algorithm for finite mixture models is presented to overcome the two main drawbacks of the EM algorithm: often getting trapped at local maxima and sometimes converging to the boundary of the parameter space.

Journal ArticleDOI
TL;DR: This work exploits the structural properties of the graph describing the discrete part of a switching system to develop an efficient procedure for the computation of the safe set, and proposes to compute inner approximations that are controlled invariant and for which a procedure that terminates in a finite number of steps can be obtained.
Abstract: The problem of determining maximal safe sets and hybrid controllers is computationally intractable because of the mathematical generality of hybrid system models. Given the practical and theoretical relevance of the problem, finding implementable procedures that could at least approximate the maximal safe set is important. To this end, we begin by restricting our attention to a special class of hybrid systems: switching systems. We exploit the structural properties of the graph describing the discrete part of a switching system to develop an efficient procedure for the computation of the safe set. This procedure requires the computation of a maximal controlled invariant set. We then restrict our attention to linear discrete-time systems for which there is a wealth of results available in the literature for the determination of maximal controlled invariant sets. However, even for this class of systems, the computation may not converge in a finite number of steps. We then propose to compute inner approximations that are controlled invariant and for which a procedure that terminates in a finite number of steps can be obtained. A tight bound on the error can be given by comparing the inner approximation with the classical outer approximation of the maximal controlled invariant set. Our procedure is applied to the idle-speed regulation problem in engine control to demonstrate its efficiency.

Journal ArticleDOI
TL;DR: A sequential implementation of the algorithm, with a control unit which allows the independent computation of logarithm and exponential, is proposed and the execution times and hardware requirements are estimated for single and double-precision floating-point computations.
Abstract: An architecture for the computation of logarithm, exponential, and powering operations is presented in this paper, based on a high-radix composite algorithm for the computation of the powering function (X^Y). The algorithm consists of a sequence of overlapped operations: 1) digit-recurrence logarithm, 2) left-to-right carry-free (LRCF) multiplication, and 3) online exponential. A redundant number system is used and the selection in 1) and 3) is done by rounding, except for the first iteration, when selection by table look-up is necessary to guarantee the convergence of the recurrences. A sequential implementation of the algorithm, with a control unit which allows the independent computation of logarithm and exponential, is proposed, and the execution times and hardware requirements are estimated for single and double-precision floating-point computations. These estimates are obtained for radices from r=8 to r=1,024, according to an approximate model for the delay and area of the main logic blocks, and help determine the radix values which lead to the most efficient implementations: r=32 and r=128.
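At a purely functional level, the composite identity being pipelined is X^Y = exp(Y · ln X); the hedged sketch below evaluates the three stages sequentially in ordinary floating point, whereas the paper overlaps them using redundant, digit-recurrence arithmetic in hardware.

```python
import math

def powering(x, y):
    """X**Y via the log-multiply-exponential composite (software stand-in for the hardware pipeline)."""
    log_x = math.log(x)        # stage 1: logarithm (digit-recurrence unit in the paper)
    product = y * log_x        # stage 2: multiplication (LRCF multiplier in the paper)
    return math.exp(product)   # stage 3: exponential (online unit in the paper)

print(powering(2.0, 10.0), 2.0 ** 10.0)   # both close to 1024
```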

Posted Content
TL;DR: Methods for implementing postselected quantum computation with noisy gates based on error-detecting codes are proposed, and it is possible to apply the proposed methods to the problem of preparing arbitrary stabilizer states in large error-correcting codes with local residual errors.
Abstract: Postselected quantum computation is distinguished from regular quantum computation by accepting the output only if measurement outcomes satisfy predetermined conditions. The output must be accepted with nonzero probability. Methods for implementing postselected quantum computation with noisy gates are proposed. These methods are based on error-detecting codes. Conditionally on detecting no errors, it is expected that the encoded computation can be made to be arbitrarily accurate. Although the probability of success of the encoded computation decreases dramatically with accuracy, it is possible to apply the proposed methods to the problem of preparing arbitrary stabilizer states in large error-correcting codes with local residual errors. Together with teleported error-correction, this may improve the error tolerance of non-postselected quantum computation.

Journal ArticleDOI
TL;DR: A novel, highly noise-tolerant computer architecture based on the work of von Neumann is presented that may enable the construction of reliable nanocomputers comprised of noisy gates, together with a thermodynamic theory of noisy computation that might set fundamental physical limits on scaling classical computation to the nanoscale.
Abstract: Nanoelectronic devices are anticipated to become exceedingly noisy as they are scaled towards thermodynamic limits. Hence the development of nanoscale classical information systems will require optimal schemes for reliable information processing in the presence of noise. We present a novel, highly noise-tolerant computer architecture based on the work of von Neumann that may enable the construction of reliable nanocomputers comprised of noisy gates. The fundamental principles of this technique of parallel restitution are parallel processing by redundant logic gates, parallelism in the interconnects between gate resources and intermittent signal restitution performed in parallel. The results of our mathematical model, verified by Monte Carlo simulations, show that nanoprocessors consisting of gates incorporating this technique can be made 90% reliable over 10 years of continuous operation with a gate error probability per actuation of and a redundancy of . This compares very favourably with corresponding results utilizing modular redundant architectures of with , and with no noise tolerance. Arbitrary reliability is possible within a noise limit of , with massive redundancy. We show parallel restitution to be a general paradigm applicable to different kinds of information processing, including neural communication. Significantly, we show how our treatment of para-restituted computation as a statistical ensemble coupled to a heat bath allows consideration of the computation entropy of logic gates, and tentatively sketch a thermodynamic theory of noisy computation that might set fundamental physical limits on scaling classical computation to the nanoscale. Our preliminary work indicates that classical computation may be confined to the macroscale by noise, quantum computation possibly being the only information processing possible at the extreme nanoscale.

Journal ArticleDOI
TL;DR: The presuppositions and context of the TM model are reviewed and it is shown that the model is unsuited to natural computation, so an expanded definition of computation is considered that includes alternative (especially analog) models as well as the TM.

Journal ArticleDOI
TL;DR: This work presents a generalized, scalable field programmable gate array (FPGA)-based architecture for fast computation of neural models and focuses on the steps involved in implementing a single-compartment and a two-compartment neuron model.
Abstract: The constant requirement for greater performance in neural model simulation has created the need for high-speed simulation platforms. We present a generalized, scalable field programmable gate array (FPGA)-based architecture for fast computation of neural models and focus on the steps involved in implementing a single-compartment and a two-compartment neuron model. Based on timing tests, it is shown that FPGAs can outperform traditional desktop computers in simulating these fairly simple models and would most likely provide even larger performance gains over computers in simulating more complex models. The potential of this method for improving neural modeling and dynamic clamping is discussed. In particular, it is believed that this approach could greatly speed up simulations of both highly complex single neuron models and networks of neurons. Additionally, our design is particularly well suited to automated parameter searches for tuning model behavior and to real-time simulation.
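As a software stand-in for the per-time-step update such a hardware pipeline evaluates, here is a hedged leaky integrate-and-fire sketch with fixed-step Euler integration; both the simplified dynamics and the parameter values are illustrative and are not the compartment models used in the paper.

```python
import numpy as np

def simulate_lif(i_input, dt=0.1, tau=10.0, v_rest=-65.0, v_thresh=-50.0, v_reset=-70.0):
    """Leaky integrate-and-fire membrane, advanced one Euler step per input sample."""
    v = v_rest
    trace = []
    for i_t in i_input:
        dv = (-(v - v_rest) + i_t) / tau           # membrane equation
        v += dt * dv                               # Euler step (one pipeline pass per time step)
        if v >= v_thresh:                          # threshold crossing: spike and reset
            v = v_reset
        trace.append(v)
    return np.array(trace)

current = np.full(1000, 20.0)                      # constant injected current (illustrative)
trace = simulate_lif(current)
print("spikes:", int(np.sum(np.diff(trace) < -10)))  # count resets as spikes
```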


Journal ArticleDOI
TL;DR: Empirical power-law relations of the period and transient iterations with the computation precision and the size of the coupled systems are obtained; these are useful for possible applications of chaos, e.g., chaotic cryptography in secure communication.
Abstract: The fundamental problems of periodicity and of the transient process to periodicity of chaotic trajectories in computer realizations with finite computation precision are investigated by taking single and coupled Logistic maps as examples. Empirical power-law relations of the period and transient iterations with the computation precision and the size of the coupled systems are obtained. For each computation we always find, by randomly choosing initial conditions, a single dominant periodic trajectory which is realized with the major portion of the probability. These findings are useful for possible applications of chaos, e.g., chaotic cryptography in secure communication.
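A hedged sketch of the core measurement in such an experiment: iterate the Logistic map with values rounded to a fixed number of decimal digits and record when the trajectory first revisits a state, which yields the period and the transient length. The initial condition, the decimal rounding scheme, and the precisions below are illustrative assumptions.

```python
def period_and_transient(x0, digits, r=4.0, max_iter=10_000_000):
    """Iterate x -> r*x*(1-x) with fixed decimal precision until a state repeats."""
    seen = {}
    x, step = round(x0, digits), 0
    while x not in seen and step < max_iter:
        seen[x] = step
        x = round(r * x * (1.0 - x), digits)
        step += 1
    return step - seen[x], seen[x]          # (period, transient length)

for digits in (4, 5, 6):
    period, transient = period_and_transient(0.3, digits)
    print(f"precision 10^-{digits}: period={period}, transient={transient}")
```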

Journal ArticleDOI
TL;DR: A very precise boundary element numerical solution of the exact formulation of the hydrodynamic resistance problem with stick boundary conditions is presented, together with a complete analysis of the sources of error in the numerical work and techniques to eliminate these errors.
Abstract: A very precise boundary element numerical solution of the exact formulation of the hydrodynamic resistance problem with stick boundary conditions is presented. BEST, the Fortran 77 program developed for this purpose, computes the full transport tensors in the center of resistance or the center of diffusion for an arbitrarily shaped rigid body, including rotation-translation coupling. The input for this program is a triangulation of the solvent-defined surface of the molecule of interest, given by Connolly's MSROLL or other suitable triangulator. The triangulation is prepared for BEST by COALESCE, a program that allows user control over the quality and number of triangles to describe the surface. High numerical precision is assured by effectively exact integration of the Oseen tensor over triangular surface elements, and by scaling the hydrodynamic computation to the precise surface area of the molecule. Efficiency of computation is achieved by the use of public domain LAPACK routines that call BLAS Level 3 hardware-optimized subroutines available for most processors. A protein computation can be done in less than 10 min of CPU time in a modern Pentium IV processor. The present work includes a complete analysis of the sources of error in the numerical work and techniques to eliminate these errors. The operation of BEST is illustrated with applications to ellipsoids of revolution, and Lysozyme, a small protein. The typical numerical accuracy achieved is 0.05% compared to analytical theory. The numerical precision for a protein is better than 1%, much better than experimental errors in these quantities, and more than 10 times better than traditional bead-based methods.

Journal ArticleDOI
TL;DR: In this paper, the authors extended the approach developed by Andrews [1980] and compared it with existing analytical formulations, and found that the stress-change results are accurate to about 1-2% of the maximum absolute stress change.
Abstract: Computing the distribution of static stress changes on the fault plane of an earthquake, given the distribution of static displacements, is of great importance in earthquake dynamics. This study extends the approach developed by Andrews [1980], and compares it against existing analytical formulations. We present calculations for slip maps of past earthquakes and find that the stress-change results are accurate to about 1–2% of the maximum absolute stress change, while the computation time is greatly reduced. Our method therefore provides a reliable and fast alternative to other methods. In particular, its speed will make computation of large suites of models feasible, thus facilitating the construction of physically consistent source characterizations for strong motion simulations.

Book ChapterDOI
TL;DR: In this paper, the authors present a comprehensive review of past research into adiabatic quantum computation and then propose a scalable architecture for an adiabatic quantum computer that can treat NP-Hard problems without requiring local coherent operations.
Abstract: We present a comprehensive review of past research into adiabatic quantum computation and then propose a scalable architecture for an adiabatic quantum computer that can treat NP-Hard problems without requiring local coherent operations. Instead, computation can be performed entirely by adiabatically varying a magnetic field applied to all the qubits simultaneously. Local (incoherent) operations are needed only for: (1) switching on or off certain pairwise, nearest-neighbor inductive couplings in order to set the problem to be solved; and (2) measuring some subset of the qubits in order to obtain the answer to the problem.