
Showing papers on "Computation published in 1998"


Journal ArticleDOI
TL;DR: This provides the first complete experimental demonstration of loading an initial state into a quantum computer, performing a computation requiring fewer steps than on a classical computer, and then reading out the final state.
Abstract: Using nuclear magnetic resonance techniques with a solution of chloroform molecules, we implement Grover's search algorithm for a system with four states. By performing a tomographic reconstruction of the density matrix during the computation, good agreement is seen between theory and experiment. This provides the first complete experimental demonstration of loading an initial state into a quantum computer, performing a computation requiring fewer steps than on a classical computer, and then reading out the final state.

594 citations
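
Grover's algorithm for N = 4 is small enough to verify in a few lines of code: a single oracle call followed by one inversion-about-the-mean step moves all amplitude onto the marked item, which is why the two-qubit demonstration needs fewer steps than a classical search. Below is a minimal numpy sketch of the abstract circuit (the marked index is an arbitrary choice for illustration; this models the algorithm, not the NMR pulse sequence):

```python
import numpy as np

N = 4                      # four-state search space (two qubits)
marked = 2                 # arbitrary marked item for illustration

# Uniform superposition over all N states.
psi = np.full(N, 1 / np.sqrt(N))

# Oracle: flip the sign of the marked amplitude.
oracle = np.eye(N)
oracle[marked, marked] = -1

# Diffusion operator: inversion about the mean, 2|s><s| - I.
s = np.full((N, 1), 1 / np.sqrt(N))
diffusion = 2 * (s @ s.T) - np.eye(N)

# For N = 4 a single Grover iteration suffices.
psi = diffusion @ (oracle @ psi)
print(np.abs(psi) ** 2)    # probability 1.0 on the marked state
```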


Journal ArticleDOI
16 Jan 1998-Science
TL;DR: It is shown that arbitrarily accurate quantum computation is possible provided that the error per operation is below a threshold value, so that decoherence and operational errors need not prevent arbitrarily long quantum computations.
Abstract: Practical realization of quantum computers will require overcoming decoherence and operational errors, which lead to problems that are more severe than in classical computation. It is shown that arbitrarily accurate quantum computation is possible provided that the error per operation is below a threshold value.

496 citations
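
The standard concatenation argument behind such threshold results can be summarized in one formula. If a code correcting a single error is concatenated $k$ times, a per-operation error rate $p$ below the threshold $p_{\mathrm{th}}$ is driven down doubly exponentially (a schematic form of the usual bound, not this paper's exact constants):

$$ p_k \;=\; p_{\mathrm{th}}\left(\frac{p}{p_{\mathrm{th}}}\right)^{2^{k}}, $$

so the overhead needed to reach a target accuracy $\epsilon$ grows only polylogarithmically in $1/\epsilon$, which is why arbitrarily long computations become possible below threshold.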


Journal ArticleDOI
Rahul Sarpeshkar
TL;DR: The results suggest that it is likely that the brain computes in a hybrid fashion and that an underappreciated and important reason for the efficiency of the human brain, which consumes only 12 W, is the hybrid and distributed nature of its architecture.
Abstract: We review the pros and cons of analog and digital computation. We propose that computation that is most efficient in its use of resources is neither analog computation nor digital computation but, rather, a mixture of the two forms. For maximum efficiency, the information and information-processing resources of the hybrid form must be distributed over many wires, with an optimal signal-to-noise ratio per wire. Our results suggest that it is likely that the brain computes in a hybrid fashion and that an underappreciated and important reason for the efficiency of the human brain, which consumes only 12 W, is the hybrid and distributed nature of its architecture.

495 citations


Posted Content
TL;DR: In this paper, a simple and general simulation technique that transforms any black-box quantum algorithm (a la Grover's database search algorithm) to a quantum communication protocol for a related problem, in a way that fully exploits the quantum parallelism, is presented.

Abstract: We present a simple and general simulation technique that transforms any black-box quantum algorithm (a la Grover's database search algorithm) to a quantum communication protocol for a related problem, in a way that fully exploits the quantum parallelism. This allows us to obtain new positive and negative results. The positive results are novel quantum communication protocols that are built from nontrivial quantum algorithms via this simulation. These protocols, combined with (old and new) classical lower bounds, are shown to provide the first asymptotic separation results between the quantum and classical (probabilistic) two-party communication complexity models. In particular, we obtain a quadratic separation for the bounded-error model, and an exponential separation for the zero-error model. The negative results transform known quantum communication lower bounds to computational lower bounds in the black-box model. In particular, we show that the quadratic speed-up achieved by Grover for the OR function is impossible for the PARITY function or the MAJORITY function in the bounded-error model, nor is it possible for the OR function itself in the exact case. This dichotomy naturally suggests a study of bounded-depth predicates (i.e., those in the polynomial hierarchy) between OR and MAJORITY. We present black-box algorithms that achieve near-quadratic speed-up for all such predicates.

391 citations


Book
01 Jan 1998
TL;DR: In Models of Computation, John Savage re-examines theoretical computer science, offering a fresh approach that gives priority to resource tradeoffs and complexity classifications over the structure of machines and their relationships to languages.
Abstract: From the Publisher: "Your book fills the gap which all of us felt existed too long. Congratulations on this excellent contribution to our field." --Jan van Leeuwen, Utrecht University. "This is an impressive book. The subject has been thoroughly researched and carefully presented. All the machine models central to the modern theory of computation are covered in depth; many for the first time in textbook form. Readers will learn a great deal from the wealth of interesting material presented." --Andrew C. Yao, Professor of Computer Science, Princeton University. "Models of Computation is an excellent new book that thoroughly covers the theory of computation, including significant recent material, and presents it all with insightful new approaches. This long-awaited book will serve as a milestone for the theory community." --Akira Maruoka, Professor of Information Sciences, Tohoku University. "This is computer science." --Elliot Winard, Student, Brown University. In Models of Computation: Exploring the Power of Computing, John Savage re-examines theoretical computer science, offering a fresh approach that gives priority to resource tradeoffs and complexity classifications over the structure of machines and their relationships to languages. This viewpoint reflects a pedagogy motivated by the growing importance of computational models that are more realistic than the abstract ones studied in the 1950s, '60s, and early '70s. Assuming only some background in computer organization, Models of Computation uses circuits to simulate machines with memory, thereby making possible an early discussion of P-complete and NP-complete problems. Circuits are also used to demonstrate that tradeoffs between parameters of computation, such as space and time, regulate all computations by machines with memory. Full coverage of formal languages and automata is included, along with a substantive treatment of computability. Topics such as space-time tradeoffs, memory hierarchies, parallel computation, and circuit complexity are integrated throughout the text, with an emphasis on finite problems and concrete computational models. FEATURES: Includes introductory material for a first course on theoretical computer science. Builds on computer organization to provide an early introduction to P-complete and NP-complete problems. Includes a concise, modern presentation of regular, context-free, and phrase-structure grammars, parsing, finite automata, pushdown automata, and computability. Includes an extensive, modern coverage of complexity classes. Provides an introduction to the advanced topics of space-time tradeoffs, memory hierarchies, parallel computation, the VLSI model, and circuit complexity, with parallelism integrated throughout. Contains over 200 figures and over 400 exercises, along with an extensive bibliography.

311 citations


Journal ArticleDOI
TL;DR: In this article, the stickers model is introduced and a random access memory is used to store the information of a DNA strand in order to solve a wide class of search problems in the context of a microprocessor-controlled robotic workstation.
Abstract: We introduce a new model of molecular computation that we call the sticker model. Like many previous proposals it makes use of DNA strands as the physical substrate in which information is represented and of separation by hybridization as a central mechanism. However, unlike previous models, the stickers model has a random access memory that requires no strand extension and uses no enzymes; also (at least in theory), its materials are reusable. The paper describes computation under the stickers model and discusses possible means for physically implementing each operation. Finally, we go on to propose a specific machine architecture for implementing the stickers model as a microprocessor-controlled parallel robotic workstation. In the course of this development a number of previous general concerns about molecular computation (Smith, 1996; Hartmanis, 1995; Linial et al., 1995) are addressed. First, it is clear that general-purpose algorithms can be implemented by DNA-based computers, potentially solving a wide class of search problems. Second, we find that there are challenging problems for which only modest volumes of DNA should suffice. Third, we demonstrate that the formation and breaking of covalent bonds is not intrinsic to DNA-based computation. Fourth, we show that a single essential biotechnology, sequence-specific separation, suffices for constructing a general-purpose molecular computer. Concerns about errors in this separation operation and means to reduce them are addressed elsewhere (Karp et al., 1995; Roweis and Winfree, 1999). Despite these encouraging theoretical advances, we emphasize that substantial engineering challenges remain at almost all stages and that the ultimate success or failure of DNA computing will certainly depend on whether these challenges can be met in laboratory investigations.

214 citations
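
A toy simulation helps make the operations concrete. In the stickers model a memory strand holds K bit regions, a sticker annealed to region i represents bit i = 1, and tubes of complexes are processed with combine, separate (by bit), set, and clear operations. The sketch below is our own illustrative encoding (names and example are not from the paper), representing complexes as Python bit tuples:

```python
from itertools import product

K = 3  # number of bit regions per memory strand

def initial_tube(library_bits):
    """Tube with one complex per value of the first `library_bits` bits."""
    return {bits + (0,) * (K - library_bits)
            for bits in product((0, 1), repeat=library_bits)}

def separate(tube, i):
    """Split a tube by bit i (hybridization to region i): (on, off)."""
    on = {c for c in tube if c[i] == 1}
    return on, tube - on

def set_bit(tube, i):
    """Anneal a sticker to region i of every complex."""
    return {c[:i] + (1,) + c[i + 1:] for c in tube}

def combine(t1, t2):
    """Pour two tubes together."""
    return t1 | t2

# Example: keep only complexes with bit 0 AND bit 1 set.
tube = initial_tube(2)            # all 4 combinations of bits 0 and 1
on0, _ = separate(tube, 0)
on01, _ = separate(on0, 1)
print(sorted(on01))               # [(1, 1, 0)]
```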


Book
01 Jan 1998
TL;DR: This book covers all major areas of unconventional computation, especially quantum computing, computing using organic molecules (DNA), and various proposals for computations that go beyond the Turing model.
Abstract: From the Publisher: This book contains invited lectures and refereed talks presented at the First International Conference on Unconventional Models of Computation, Auckland, New Zealand, January 1998, organised by the Centre for Discrete Mathematics and Theoretical Computer Science, New Zealand, and the Santa Fe Institute, USA. It covers all major areas of unconventional computation, especially quantum computing, computing using organic molecules (DNA), and various proposals for computations that go beyond the Turing model.

203 citations


01 Jan 1998
TL;DR: In this paper, an acceleration algorithm for more rapidly computing interactions between widely separated points in the forward-backward method is proposed, resulting in an O(N) algorithm with increasing surface size.
Abstract: The forward-backward method has been shown to be an effective iterative technique for the computation of scattering from one-dimensional rough surfaces, often converging rapidly even for very large surface heights. However, previous studies with this method have computed interactions between widely separated points on the surface exactly, resulting in an O(N^2) computational algorithm that becomes intractable for large rough surface sizes, as are required when low grazing incidence angles are approached. An acceleration algorithm for more rapidly computing interactions between widely separated points in the forward-backward method is proposed in this paper and results in an O(N) algorithm with increasing surface size. The approach is based on a spectral domain representation of source currents and the Green's function and is developed for both perfectly conducting and impedance boundary surfaces. The method is applied in a Monte Carlo study of low grazing incidence backscattering from very rough (up to 10 m/s wind speed) ocean-like surfaces at 14 GHz and is found to require only a small fraction of the CPU time required by other competing methods, such as the banded matrix iterative approach/canonical grid and fast multipole methods.

172 citations
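
The basic (unaccelerated) forward-backward iteration is easy to state for a discretized integral equation Z J = V: the current is split as J = J_f + J_b, a forward sweep accumulates contributions from points "behind" each point, and a backward sweep the rest. A rough dense-matrix sketch of this O(N^2)-per-iteration baseline follows; the paper's contribution, the O(N) spectral acceleration of the far interactions, is not reproduced here:

```python
import numpy as np

def forward_backward(Z, V, iters=30):
    """Basic forward-backward iteration for Z @ J = V.

    Z: (N, N) impedance matrix, V: (N,) excitation.
    This is the plain O(N^2)-per-iteration scheme; the 1998 paper
    accelerates the off-diagonal sums with a spectral representation.
    """
    N = len(V)
    Jf = np.zeros(N, dtype=complex)
    Jb = np.zeros(N, dtype=complex)
    for _ in range(iters):
        # Forward sweep: contributions from points j < i.
        for i in range(N):
            Jf[i] = (V[i] - Z[i, :i] @ (Jf[:i] + Jb[:i])) / Z[i, i]
        # Backward sweep: contributions from points j > i.
        for i in reversed(range(N)):
            Jb[i] = -(Z[i, i + 1:] @ (Jf[i + 1:] + Jb[i + 1:])) / Z[i, i]
    return Jf + Jb

# Quick self-check on a random diagonally dominant system.
rng = np.random.default_rng(0)
N = 50
Z = rng.normal(size=(N, N)) + 2 * N * np.eye(N)
V = rng.normal(size=N)
J = forward_backward(Z, V)
print(np.allclose(Z @ J, V))   # True: the fixed point solves Z J = V
```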


Book
01 Dec 1998
TL;DR: A book on the modeling and computation of boundary layer flows.

Abstract: Not available.

171 citations


Patent
20 Apr 1998
TL;DR: In this patent, the authors describe a system that allows multiple partitioned computations to be queued for distribution to any number of client computers when the clients indicate their availability (e.g., after a predetermined time without any keyboard or mouse input).
Abstract: The present invention utilizes the otherwise unproductive minutes and hours when a networked client computer is not in use by a local human operator. The method and system described herein allow multiple partitioned computations to be queued for distribution to any number of client computers when the clients indicate their availability. Availability may be determined by the same criteria used to activate screen-saver programs, i.e., a predetermined time without any keyboard or mouse input. Application programs are designed to accept a common calling sequence. An application-independent master control program coordinates the distribution of computation segments, the combination of partial results, and the formatting of the final result. An application-independent client control program reports availability of client computers, downloads application program files, invokes the application to compute partial results for a range of computation segments, and uploads the partial results to the master computer. One class of distributed computation supported is finding the minimum or maximum value of a calculated target cell in a spreadsheet, based on a number of input cells taking values within a specified range.
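
The claimed protocol decomposes naturally into a work queue, idle-triggered clients, and a combiner. Below is a much-simplified sketch of that decomposition (threads stand in for networked clients; all names and the target function are our own illustrative choices, not the patent's):

```python
import queue
import threading

def f(x):
    # Stand-in for the spreadsheet target-cell calculation.
    return (x - 37.0) ** 2

# Master: queue up partitioned computation segments (input ranges).
segments = queue.Queue()
for lo in range(0, 100, 10):
    segments.put(range(lo, lo + 10))

partial_results = []
lock = threading.Lock()

def client():
    # Client control program: pull segments while "idle", return partials.
    while True:
        try:
            seg = segments.get_nowait()
        except queue.Empty:
            return
        best = min((f(x), x) for x in seg)   # partial minimum for this segment
        with lock:
            partial_results.append(best)

workers = [threading.Thread(target=client) for _ in range(4)]
for w in workers:
    w.start()
for w in workers:
    w.join()

# Master combines partial results into the final answer.
print(min(partial_results))   # (0.0, 37)
```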

Proceedings ArticleDOI
TL;DR: This work proposes a new analytic technique that uses an extremely efficient three-dimensional multiphase streamline simulator as a forward model; the inverse method is analogous to seismic waveform inversion and thus allows efficient methods from geophysical imaging to be utilized.
Abstract: One of the outstanding challenges in reservoir characterization is to build high resolution reservoir models that satisfy static as well as dynamic data. However, integration of dynamic data typically requires the solution of an inverse problem that can be computationally intensive and becomes practically infeasible for fine-scale reservoir models. A critical issue here is computation of sensitivity coefficients, the derivatives of dynamic production history with respect to model parameters such as permeability and porosity. We propose a new analytic technique that has several advantages over existing approaches. First, the method utilizes an extremely efficient three-dimensional multiphase streamline simulator as a forward model. Second, the parameter sensitivities are formulated in terms of one-dimensional integrals of analytic functions along the streamlines. Thus, the computation of sensitivities for all model parameters requires only a single simulation run to construct the velocity field and generate the streamlines. The integration of dynamic data is then performed using a two-step iterative inversion that involves (i) 'lining-up' the breakthrough times at the producing wells and then (ii) matching the production history. Our approach follows from an analogy between streamlines and ray tracing in seismology. The inverse method is analogous to seismic waveform inversion and thus allows us to utilize efficient methods from geophysical imaging. The feasibility of our proposed approach for large-scale field applications has been demonstrated by integrating production response directly into three-dimensional reservoir models consisting of 31500 grid blocks in less than 3 hours on a Silicon Graphics workstation, without any artificial reduction of parameter space, for example, through the use of 'pilot points'. Use of 'pilot points' will allow us to substantially increase the model size without any significant increase in computation time.
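
The ray-tracing analogy can be made concrete with one formula. Along a streamline $\Psi$, the time of flight is a one-dimensional integral of the fluid slowness (a schematic statement of the analogy, not the paper's exact sensitivity expressions):

$$ \tau(\Psi) \;=\; \int_{\Psi} \frac{\phi(\mathbf{x})}{|\mathbf{u}(\mathbf{x})|}\, dl, $$

where $\phi$ is porosity and $\mathbf{u}$ the Darcy velocity. Exactly as in seismic travel-time tomography, the sensitivity of $\tau$ to a cell's properties receives contributions only from the streamline segments crossing that cell, which is why a single forward simulation yields sensitivities for all model parameters.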

Journal ArticleDOI
TL;DR: It is proved that universal fault-tolerant computation is possible with any higher-dimensional stabilizer code for prime d, and the theory of fault-tolerant computation using such codes is discussed.
Abstract: Instead of a quantum computer where the fundamental units are 2-dimensional qubits, we can consider a quantum computer made up of d-dimensional systems. There is a straightforward generalization of the class of stabilizer codes to d-dimensional systems, and I will discuss the theory of fault-tolerant computation using such codes. I prove that universal fault-tolerant computation is possible with any higher-dimensional stabilizer code for prime d.
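
The algebra underneath is worth recalling. With $\omega = e^{2\pi i/d}$, the qudit generalizations of the Pauli operators are the shift and clock operators

$$ X|j\rangle = |j+1 \bmod d\rangle, \qquad Z|j\rangle = \omega^{j}|j\rangle, \qquad ZX = \omega XZ, $$

and a stabilizer code is a joint eigenspace of commuting products of such operators. Primality of $d$ matters because the integers modulo $d$ then form a field, on which the fault-tolerant constructions rely.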

Book ChapterDOI
01 Jan 1998
TL;DR: A unifying approach to discrete Gabor analysis, based on unitary matrix factorization, is presented; it is shown that different algorithms for the computation of the dual window correspond to different factorizations of the frame operator.
Abstract: We present a unifying approach to discrete Gabor analysis, based on unitary matrix factorization. The factorization point of view is notably useful for the design of efficient numerical algorithms. This presentation is the first systematic account of its kind. In particular, it is shown that different algorithms for the computation of the dual window correspond to different factorizations of the frame operator. Simple number theoretic conditions on the time-frequency lattice parameters imply additional structural properties of the frame operator, which do not appear in an infinite-dimensional setting. Further the computation of adaptive dual windows is discussed. We point out why the conjugate gradient method is particularly useful in connection with Gabor expansions and discuss approximate solutions and preconditioners.
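
For orientation, the central object can be stated in two lines. With window $g$, lattice parameters $a, b$, and time-frequency shifts $g_{k,l}(t) = e^{2\pi i l b t}\, g(t - ka)$, the frame operator and the canonical dual window are

$$ Sf \;=\; \sum_{k,l} \langle f,\, g_{k,l}\rangle\, g_{k,l}, \qquad \gamma \;=\; S^{-1} g, $$

so every algorithm for computing a dual window is, in effect, a way of applying $S^{-1}$, and different factorizations of the structured, positive definite matrix of $S$ yield the different algorithms the chapter compares.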

Journal ArticleDOI
TL;DR: The V2F model as mentioned in this paper makes use of the standard k-ε model, but extends it by incorporating near-wall turbulence anisotropy and non-local pressure-strain effects, while retaining a linear eddy viscosity assumption.

Abstract: The V2F model makes use of the standard k-ε model, but extends it by incorporating near-wall turbulence anisotropy and non-local pressure-strain effects, while retaining a linear eddy viscosity assumption. It has the attraction of fewer equations and more numerical robustness than Reynolds stress models. The model is presented in a form that is completely independent of distance to the wall. This formalism is well suited to complex, 3-D, multi-zone configurations. It has been applied to the computation of two complex 3-D turbulent flows: the infinitely swept bump and the appendage-body junction; some preliminary results for the flow in a U-bend are also presented. Despite the use of a linear eddy viscosity formula, the V2F model is shown to provide excellent predictions of mean flow quantities. The appendage-body test case involves very complex features, such as a 3-D separation and a horseshoe vortex. The V2F simulations have been shown to successfully reproduce these features, both qualitatively and quantitatively. The calculation of the complex flow structure inside and downstream of the U-bend also shows very promising results.
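
For readers new to the model, the defining choice is the velocity scale in the eddy viscosity. Instead of $\nu_t = C_\mu k^2/\varepsilon$, the V2F model builds the eddy viscosity from the wall-normal stress $\overline{v^2}$ and a turbulent time scale $T$ (a sketch of the standard formulation):

$$ \nu_t \;=\; C_\mu\, \overline{v^2}\, T, $$

where $\overline{v^2}$ obeys its own transport equation whose source involves an elliptically relaxed function $f$; it is this elliptic relaxation equation that carries the non-local wall-blocking effect without requiring any wall-distance input.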

Journal ArticleDOI
TL;DR: In this article, a method for computing the number of stationary points of any nature (minima, saddles, etc.) of the Thouless-Anderson-Palmer free energy was proposed.
Abstract: In the context of the $p$-spin spherical model, we introduce a method for the computation of the number of stationary points of any nature (minima, saddles, etc.) of the Thouless-Anderson-Palmer free energy. In doing this we clarify the ambiguities related to the approximations usually adopted in the standard calculations of the number of states in mean-field spin-glass models.
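
The computation is of Kac-Rice type: writing $F_{\mathrm{TAP}}(m)$ for the free energy as a function of the local magnetizations, the expected number of stationary points is, schematically,

$$ \langle \mathcal{N} \rangle \;=\; \int \prod_i dm_i \;\Big\langle\, \delta\big(\nabla F_{\mathrm{TAP}}(m)\big)\, \big|\det \nabla^2 F_{\mathrm{TAP}}(m)\big| \,\Big\rangle, $$

and the ambiguities mentioned in the abstract concern how the modulus of the determinant is handled in the standard annealed and replica evaluations of this integral.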

Journal ArticleDOI
TL;DR: Application of the proposed method to electromagnetic field computation and verification of the obtained results against a theoretically known solution are presented, and the mathematical background for the moving least square approximation employed in the method is given.

Abstract: Although numerically very efficient, the finite element method exhibits difficulties whenever the remeshing of the analysis domain must be performed. For such problems, meshless computation methods are very promising. In this paper, a meshless method called the element-free Galerkin method is introduced for electromagnetic field computation. The mathematical background for the moving least square approximation employed in the method is given, and the numerical implementation is briefly discussed. Application of the proposed method to electromagnetic field computation, with verification of the obtained results against a theoretically known solution, is also presented.
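
The moving least squares construction at the heart of the method fits in one display. With polynomial basis $\mathbf{p}(x)$ and a compactly supported weight $w$, the approximation at a point $x$ is

$$ u^h(x) = \mathbf{p}^{T}(x)\,\mathbf{a}(x), \qquad \mathbf{a}(x) = \arg\min_{\mathbf{a}} \sum_{I} w(x - x_I)\,\big[\mathbf{p}^{T}(x_I)\,\mathbf{a} - u_I\big]^2, $$

so the coefficients are refit at every evaluation point from whichever nodes have nonzero weight there. This is what eliminates the mesh: connectivity is implied by the overlapping weight supports rather than by elements.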

Journal ArticleDOI
TL;DR: In this paper, a simple version of the scalar hysteresis model is presented for different computation tasks where the magnetic field is computed or where magnetic state tracing of a core is needed.
Abstract: In the computation of magnetic fields for rotating electric machines, transformers, and reactors, the hysteresis nonlinearity sometimes has to be taken into account. Many scalar hysteresis models are known for one-directional fields; the main problem is that these models are complicated and require considerable computation time. A new, simple version of the scalar hysteresis model is presented for computation tasks in which the magnetic field is computed or the magnetic state of a core must be traced. The major and minor hysteresis loops can be simulated, and computations can start from the field strength as well as from the flux density. The model is based on the limiting hysteresis cycle, and this measured cycle is used as the parametric data for the identification of the ferromagnetic material.

Journal ArticleDOI
01 Jun 1998
TL;DR: SANDROS is a dynamic-graph search algorithm, and can be described as a hierarchical, nonuniform-multiresolution, best-first search to find a heuristically short motion in the configuration space.
Abstract: We present a general search strategy called SANDROS for motion planning, and its applications to motion planning for three types of robots: (1) manipulator; (2) rigid object; and (3) multiple rigid objects. SANDROS is a dynamic-graph search algorithm, and can be described as a hierarchical, nonuniform-multiresolution, best-first search to find a heuristically short motion in the configuration space. The SANDROS planner is resolution complete, and its computation time is commensurate with the problem difficulty measured empirically by the solution-path complexity. For many realistic problems involving a manipulator or a rigid object with six degrees of freedom, its computation times are under 1 minute for easy problems involving wide free space, and several minutes for relatively hard problems.
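
SANDROS itself is a hierarchical dynamic-graph search, but its outer loop is a standard best-first search, which the following generic sketch illustrates (a plain heapq skeleton of our own, not the SANDROS planner; node refinement and the configuration-space details are abstracted away):

```python
import heapq

def best_first_search(start, is_goal, successors, heuristic):
    """Generic best-first search: always expand the most promising node.

    successors(n) yields (cost, child); heuristic(n) estimates remaining cost.
    """
    frontier = [(heuristic(start), 0.0, start, [start])]
    seen = set()
    while frontier:
        _, g, node, path = heapq.heappop(frontier)
        if is_goal(node):
            return g, path
        if node in seen:
            continue
        seen.add(node)
        for step_cost, child in successors(node):
            if child not in seen:
                heapq.heappush(frontier,
                               (g + step_cost + heuristic(child),
                                g + step_cost, child, path + [child]))
    return None

# Toy 1-D "motion planning": move from configuration 0 to 9 in unit steps.
print(best_first_search(0, lambda n: n == 9,
                        lambda n: [(1.0, n - 1), (1.0, n + 1)],
                        lambda n: abs(9 - n)))
```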

Journal ArticleDOI
TL;DR: With parallel computation of the test problems presented here, it is demonstrated that the EDICT can be used very effectively to increase the accuracy of the base finite element formulations.

Journal ArticleDOI
TL;DR: A model of analog computer is presented which can recognize various languages in real time by composing iterated maps; it can be seen as a real-time, constant-space, off-line version of Blum, Shub, and Smale's real-valued machines.

Proceedings Article
24 Jul 1998
TL;DR: In this article, the authors compare three architectures (Lauritzen-Spiegelhalter, Hugin and Shenoy-Shafer) from the perspective of graphical structure for message propagation, message-passing scheme, computational efficiency, and storage efficiency.
Abstract: In the last decade, several architectures have been proposed for exact computation of marginals using local computation. In this paper, we compare three architectures--Lauritzen-Spiegelhalter, Hugin, and Shenoy-Shafer--from the perspective of graphical structure for message propagation, message-passing scheme, computational efficiency, and storage efficiency.
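
All three architectures compute marginals by the same two local operations: combination (pointwise product of potentials) and marginalization (summing out variables). A minimal numpy illustration of a single message pass on a two-clique tree {A,B} - {B,C} (toy factors of our own choosing, not an example from the paper):

```python
import numpy as np

# Toy potentials: phi1(A, B) on clique {A,B}, phi2(B, C) on clique {B,C}.
phi1 = np.array([[0.3, 0.7],
                 [0.6, 0.4]])        # indexed [A, B]
phi2 = np.array([[0.9, 0.1],
                 [0.2, 0.8]])        # indexed [B, C]

# Message from {A,B} to {B,C}: marginalize A out of phi1.
msg = phi1.sum(axis=0)               # a function of B

# Absorb the message and marginalize to get p(C) up to normalization.
joint_bc = phi2 * msg[:, None]       # combine: pointwise product over B
p_c = joint_bc.sum(axis=0)           # marginalize: sum out B
print(p_c / p_c.sum())
```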

Journal ArticleDOI
TL;DR: Methods based on the Cayley-Hamilton theorem for the computation of A^n for nonsingular A are presented.
Abstract: Methods, which are based on the Cayley-Hamilton theorem, for the computation of A^n for nonsingular A are presented.
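
The idea is implementable in a few lines: by Cayley-Hamilton, A satisfies its own characteristic polynomial, so x^n can be reduced modulo that degree-m polynomial and A^n evaluated from at most m matrix powers. A numpy sketch of this generic reduction follows (one natural reading of the approach, not the paper's exact algorithms):

```python
import numpy as np

def matrix_power_ch(A, n):
    """Compute A**n by reducing x**n modulo the characteristic polynomial."""
    m = A.shape[0]
    charpoly = np.poly(A)              # coefficients, highest degree first

    def mulmod(p, q):
        # Polynomial product reduced modulo the characteristic polynomial.
        return np.polydiv(np.polymul(p, q), charpoly)[1]

    # Square-and-multiply computation of x**n mod charpoly.
    result, base = np.array([1.0]), np.array([1.0, 0.0])   # 1 and x
    while n:
        if n & 1:
            result = mulmod(result, base)
        base = mulmod(base, base)
        n >>= 1

    # Evaluate the reduced polynomial at A (Horner's rule with matrices).
    X = np.zeros_like(A, dtype=float)
    for c in result:
        X = X @ A + c * np.eye(m)
    return X

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
print(np.allclose(matrix_power_ch(A, 10), np.linalg.matrix_power(A, 10)))
```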

Posted Content
TL;DR: In this paper, the authors define the model of quantum circuits with density matrices, where non-unitary gates are allowed, and give a natural definition of using general subroutines, and analyze their computational power.
Abstract: We define the model of quantum circuits with density matrices, where non-unitary gates are allowed. Measurements in the middle of the computation, noise, and decoherence are implemented in a natural way in this model, which is shown to be equivalent in computational power to standard quantum circuits. The main result in this paper is a solution for the subroutine problem: the general function that a quantum circuit outputs is a probabilistic function, but in pure-state language such a function cannot be used as a black box in other computations. We give a natural definition of using general subroutines and analyze their computational power. We suggest convenient metrics for quantum computing with mixed states. For density matrices we analyze the so-called "trace metric", and using this metric, we define and discuss the "diamond metric" on superoperators. These metrics enable a formal discussion of errors in the computation. Using a "causality" lemma for density matrices, we also prove a simple lower bound for probabilistic functions.
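
Both metrics can be stated compactly. For states, the trace metric is $D(\rho,\sigma) = \tfrac{1}{2}\|\rho-\sigma\|_1$; for superoperators, the diamond metric stabilizes the induced trace norm against entangled inputs by tensoring with an ancilla:

$$ \|T\|_{\diamond} \;=\; \sup_{k}\, \big\|\, T \otimes \mathrm{id}_k \,\big\|_{1}, $$

where $\|\cdot\|_1$ on superoperators denotes the norm induced by the trace norm on inputs. The point of the diamond construction is that the distance between two circuits remains meaningful even when they act on part of a larger entangled system.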

Journal ArticleDOI
TL;DR: A parallel pseudospectral code calculates the 3-D wavefield by concurrent use of a number of processors, based on a partition of the computational domain: the field quantities are distributed over the processors and the calculation is done concurrently in each subdomain with interprocessor communications.
Abstract: Three-dimensional pseudospectral modeling for a realistic scale problem is still computationally very intensive, even when using current powerful computers. To overcome this, we have developed a parallel pseudospectral code for calculating the 3-D wavefield by concurrent use of a number of processors. The parallel algorithm is based on a partition of the computational domain, where the field quantities are distributed over a number of processors and the calculation is concurrently done in each subdomain with interprocessor communications. Experimental performance tests using three different styles of parallel computers achieved a fairly good speed-up compared with conventional computation on a single processor: maximum speed-up rate of 26 using 32 processors of a Thinking Machine CM-5 parallel computer, 1.6 using a Digital Equipment DEC-Alpha two-CPU workstation, and 4.6 using a cluster of eight Sun Microsystems SPARC-Station 10 (SPARC-10) workstations connected by an Ethernet. The result of this test agrees well with the performance theoretically predicted for each system. To demonstrate the feasibility of our parallel algorithm, we show three examples: 3-D acoustic and elastic modeling of fault-zone trapped waves and the calculation of elastic wave propagation in a 3-D syncline model.

Journal ArticleDOI
TL;DR: By using a reasonable number of cards, many useful functions can be computed in such a way that each input stays private, provided the function to be computed is simple enough.

Journal ArticleDOI
TL;DR: In this paper, the formula for the computation of the gravity field of a polyhedral body whose density is linearly dependent on some coordinate is derived and transformed into the optimum form for numerical calculation.
Abstract: The formula for the computation of the gravity field of a polyhedral body whose density is linearly dependent on some coordinate is derived and transformed into the optimum form for numerical calculation.
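
The starting point is the Newton volume integral with the linear density law substituted in. With $\rho(\mathbf{r}') = \rho_0 + \boldsymbol{\alpha}\cdot\mathbf{r}'$ (our schematic notation for a density linear in one coordinate), the attraction at a point $P$ is

$$ \mathbf{g}(P) \;=\; G \int_{V} \big(\rho_0 + \boldsymbol{\alpha}\cdot\mathbf{r}'\big)\, \frac{\mathbf{r}' - \mathbf{P}}{\lvert \mathbf{r}' - \mathbf{P} \rvert^{3}}\; dV', $$

which for a polyhedral body can be reduced to closed-form sums over the faces and edges, the form best suited to numerical evaluation.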

Journal ArticleDOI
TL;DR: This article intertwines the subdivision process with the computation of invariant measures and proposes an adaptive scheme for box refinement based on the combination of these methods.
Abstract: Recently subdivision techniques have been introduced in the numerical investigation of the temporal behavior of dynamical systems. In this article we intertwine the subdivision process with the computation of invariant measures and propose an adaptive scheme for the box refinement which is based on the combination of these methods. Using this new algorithm the numerical effort for the computation of box coverings is in general significantly reduced, and we illustrate this fact by several numerical examples.
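
The subdivision step itself is simple to sketch: keep a collection of boxes, bisect each, and retain only the boxes hit by the image of the collection. Below is a rough Monte Carlo illustration of this covering step for the Henon map (our own minimal version; the paper's contribution is driving the refinement adaptively with the computed invariant measure, which is not reproduced here):

```python
import numpy as np

def henon(pts, a=1.4, b=0.3):
    """Henon map applied to an array of points of shape (n, 2)."""
    x, y = pts[:, 0], pts[:, 1]
    return np.column_stack([1 - a * x * x + y, b * x])

def subdivide(centers, radii):
    """Bisect every box (center, radius) along its widest axis."""
    idx = np.arange(len(centers))
    axis = radii.argmax(axis=1)
    r2 = radii.copy()
    r2[idx, axis] /= 2
    shift = np.zeros_like(centers)
    shift[idx, axis] = r2[idx, axis]
    return np.vstack([centers - shift, centers + shift]), np.vstack([r2, r2])

def select(centers, radii, samples=30, seed=0):
    """Keep boxes hit by sampled image points of the current collection."""
    rng = np.random.default_rng(seed)
    n = len(centers)
    pts = centers[:, None, :] + (rng.random((n, samples, 2)) * 2 - 1) * radii[:, None, :]
    images = henon(pts.reshape(-1, 2))
    keep = np.array([np.any(np.all(np.abs(images - c) <= r, axis=1))
                     for c, r in zip(centers, radii)])
    return centers[keep], radii[keep]

# Start from one large box and alternate subdivision and selection.
centers = np.array([[0.0, 0.0]])
radii = np.array([[2.0, 2.0]])
for _ in range(10):
    centers, radii = subdivide(centers, radii)
    centers, radii = select(centers, radii)
print(len(centers), "boxes in the covering")
```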

Journal ArticleDOI
TL;DR: In this paper, the application of controllability techniques to the computation of time-periodic solutions of evolution equations is discussed in a fairly general context; the time-discretization aspect is also addressed.