
Showing papers on "Computation" published in 2009


Book
27 Mar 2009
TL;DR: The approach focuses on large random instances, adopting a common probabilistic formulation in terms of graphical models, and presents message passing algorithms like belief propagation and survey propagation, and their use in decoding and constraint satisfaction solving.
Abstract: This book presents a unified approach to a rich and rapidly evolving research domain at the interface between statistical physics, theoretical computer science/discrete mathematics, and coding/information theory. It is accessible to graduate students and researchers without specific training in any of these fields. The selected topics include spin glasses, error-correcting codes, and satisfiability, and are central to each field. The approach focuses on large random instances, adopting a common probabilistic formulation in terms of graphical models. It presents message passing algorithms like belief propagation and survey propagation, and their use in decoding and constraint satisfaction solving. It also explains analysis techniques like density evolution and the cavity method, and uses them to study phase transitions.

1,099 citations
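As an illustration of the message passing algorithms the book covers, the sketch below runs sum-product belief propagation on a tiny three-variable chain and recovers exact marginals. The potentials and variable names are invented for the example; this is not code from the book.

```python
# Minimal sum-product belief propagation on a 3-variable chain x0 - x1 - x2
# (binary variables, pairwise potentials). Illustrative only; the potentials
# below are arbitrary example numbers.
import numpy as np

psi = np.array([[1.0, 0.5],
                [0.5, 2.0]])        # same pairwise potential on both edges
phi = [np.array([1.0, 1.0]),        # unary potentials for x0, x1, x2
       np.array([0.7, 1.3]),
       np.array([1.0, 1.0])]

m_fwd = [np.ones(2) for _ in range(3)]   # m_fwd[i]: message into x_i from the left
m_bwd = [np.ones(2) for _ in range(3)]   # m_bwd[i]: message into x_i from the right
for i in range(1, 3):
    m_fwd[i] = psi.T @ (phi[i - 1] * m_fwd[i - 1])
for i in range(1, 3):
    j = 2 - i
    m_bwd[j] = psi @ (phi[j + 1] * m_bwd[j + 1])

# Beliefs (unnormalized marginals) are the product of incoming messages.
for i in range(3):
    b = phi[i] * m_fwd[i] * m_bwd[i]
    print(f"P(x{i}) ~", b / b.sum())
```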


Proceedings ArticleDOI
Paulius Micikevicius1
08 Mar 2009
TL;DR: In this article, a GPU parallelization of the 3D finite difference computation using CUDA is described, which achieves a throughput of 2,400 to over 3,000 million output points per second on a single Tesla 10-series GPU.
Abstract: In this paper we describe a GPU parallelization of the 3D finite difference computation using CUDA. Data access redundancy is used as the metric to determine the optimal implementation for both the stencil-only computation and the discretization of the wave equation, which is currently of great interest in seismic computing. For the larger stencils, the described approach achieves a throughput of 2,400 to over 3,000 million output points per second on a single Tesla 10-series GPU. This is roughly an order of magnitude higher than a 4-core Harpertown CPU running similar code from the seismic industry. Multi-GPU parallelization is also described, achieving linear scaling with the number of GPUs by overlapping inter-GPU communication with computation.

582 citations
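For readers who want a concrete picture of the stencil-only computation being parallelized, here is a plain NumPy reference sketch of a symmetric, axis-aligned 3D stencil of radius 4. The coefficients and grid size are placeholders, and this CPU code is only a baseline illustration, not the paper's CUDA kernel.

```python
# CPU/NumPy reference sketch of a radius-4 (nominally 8th-order) 3D stencil of
# the kind the paper maps onto CUDA. The weights below are example numbers,
# not the true finite-difference coefficients.
import numpy as np

def stencil_3d(u, coef):
    """Apply a symmetric axis-aligned stencil of radius len(coef)-1 to u."""
    r = len(coef) - 1
    out = np.zeros_like(u)
    inner = (slice(r, -r),) * 3
    out[inner] = coef[0] * u[inner]
    for axis in range(3):
        for k in range(1, r + 1):
            plus = [slice(r, -r)] * 3
            minus = [slice(r, -r)] * 3
            plus[axis] = slice(r + k, u.shape[axis] - r + k)
            minus[axis] = slice(r - k, u.shape[axis] - r - k)
            out[inner] += coef[k] * (u[tuple(plus)] + u[tuple(minus)])
    return out

coef = np.array([-2.847, 1.6, -0.2, 0.025, -0.0018])   # placeholder weights
u = np.random.rand(64, 64, 64)
v = stencil_3d(u, coef)
print(v.shape)
```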


Journal ArticleDOI
TL;DR: In this paper, a nonlinear normal mode (NNM) computation is shown to be possible with limited implementation effort, which paves the way to a practical method for determining the NNMs of nonlinear mechanical systems.

471 citations


Journal ArticleDOI
TL;DR: In this paper, the authors presented an effective first step in the mathematical reformulation of physics-based lithium-ion battery models to improve computational efficiency, using an isothermal pseudo-two-dimensional model with volume-averaged equations for the solid phase and incorporating concentrated solution theory, porous electrode theory, and with due consideration to the variations in electronic/ionic conductivities and diffusivities.
Abstract: This paper presents an effective first step in the mathematical reformulation of physics-based lithium-ion battery models to improve computational efficiency. While the additional steps listed elsewhere [Electrochem. Solid-State Lett., 10, A225 (2007)] can be carried out to expedite the computation, the method described here is an effective first step toward efficient reformulation of lithium-ion battery models. The battery model used for the simulation is derived from first principles as an isothermal pseudo-two-dimensional model with volume-averaged equations for the solid phase, incorporating concentrated solution theory and porous electrode theory, with due consideration to the variations in electronic/ionic conductivities and diffusivities. The nature of the model and the structure of the governing equations are exploited to facilitate model reformulation, yielding efficient and accurate numerical computations.

270 citations


Journal ArticleDOI
TL;DR: In this article, the basic aspects of quantum error correction and fault-tolerant quantum computation are summarized, not as a detailed guide but rather as a basic introduction.
Abstract: Quantum error correction (QEC) and fault-tolerant quantum computation represent one of the most vital theoretical aspects of quantum information processing. It was well known from the early developments of this exciting field that the fragility of coherent quantum systems would be a catastrophic obstacle to the development of large-scale quantum computers. The introduction of quantum error correction in 1995 showed that active techniques could be employed to mitigate this fatal problem. However, quantum error correction and fault-tolerant computation is now a much larger field, and many new codes, techniques, and methodologies have been developed to implement error correction for large-scale quantum algorithms. In response, we have attempted to summarize the basic aspects of quantum error correction and fault tolerance, not as a detailed guide, but rather as a basic introduction. Development in this area has been so pronounced that many in the field of quantum information, specifically researchers who are new to quantum information or people focused on the many other important issues in quantum computation, have found it difficult to keep up with the general formalisms and methodologies employed in this area. Rather than introducing these concepts from a rigorous mathematical and computer science framework, we instead examine error correction and fault tolerance largely through detailed examples, which are more relevant to experimentalists today and in the near future.

233 citations
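In the same example-driven spirit as the review, the following sketch simulates the textbook three-qubit bit-flip repetition code with plain state vectors: encode, inject an X error, read the Z0Z1 and Z1Z2 syndrome, and correct. It is a standard illustration, not code from the review.

```python
# Toy simulation of the 3-qubit bit-flip code: a standard textbook example of
# active error correction, not code from the review above.
import numpy as np

X = np.array([[0.0, 1.0], [1.0, 0.0]])
I2 = np.eye(2)

def op(single, pos):
    """Single-qubit operator `single` acting on qubit `pos` of a 3-qubit register."""
    mats = [single if q == pos else I2 for q in range(3)]
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

# Encode |psi> = a|0> + b|1> as a|000> + b|111>.
a, b = 0.6, 0.8
logical = np.zeros(8)
logical[0], logical[7] = a, b

corrupted = op(X, 1) @ logical                 # bit flip on the middle qubit

def parity(state, q0, q1):
    """Z_q0 Z_q1 parity, read off from any basis state the vector has support on."""
    idx = int(np.flatnonzero(np.abs(state) > 1e-12)[0])
    bits = [(idx >> (2 - q)) & 1 for q in (q0, q1)]
    return (-1) ** (bits[0] ^ bits[1])

s1, s2 = parity(corrupted, 0, 1), parity(corrupted, 1, 2)
flip = {(1, 1): None, (-1, 1): 0, (-1, -1): 1, (1, -1): 2}[(s1, s2)]
recovered = corrupted if flip is None else op(X, flip) @ corrupted
print("error corrected:", np.allclose(recovered, logical))
```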


Book
30 Jun 2009
TL;DR: This thesis develops the idea of game as computation to a greater degree than has been done previously, and presents a general family of games, called Constraint Logic, which is both mathematically simple and ideally suited for reductions to many actual board games.
Abstract: There is a fundamental connection between the notions of game and of computation. At its most basic level, this is implied by any game complexity result, but the connection is deeper than this. One example is the concept of alternating nondeterminism, which is intimately connected with two-player games. In the first half of this thesis, I develop the idea of game as computation to a greater degree than has been done previously. I present a general family of games, called Constraint Logic, which is both mathematically simple and ideally suited for reductions to many actual board games. A deterministic version of Constraint Logic corresponds to a novel kind of logic circuit which is monotone and reversible. At the other end of the spectrum, I show that a multiplayer version of Constraint Logic is undecidable. That there are undecidable games using finite physical resources is philosophically important, and raises issues related to the Church-Turing thesis. In the second half of this thesis, I apply the Constraint Logic formalism to many actual games and puzzles, providing new hardness proofs. These applications include sliding-block puzzles, sliding-coin puzzles, plank puzzles, hinged polygon dissections, Amazons, Konane, Cross Purposes, TipOver, and others. Some of these have been well-known open problems for some time. For other games, including Minesweeper, the Warehouseman's Problem, Sokoban, and Rush Hour, I either strengthen existing results, or provide new, simpler hardness proofs than the original proofs.

205 citations


Proceedings Article
07 Dec 2009
TL;DR: A clustering algorithm is provided that approximately optimizes the k-means objective in the one-pass streaming setting; it is applicable to unsupervised learning on massive data sets or resource-constrained devices.
Abstract: We provide a clustering algorithm that approximately optimizes the k-means objective in the one-pass streaming setting. We make no assumptions about the data, and our algorithm is very lightweight in terms of memory and computation. This setting is applicable to unsupervised learning on massive data sets, or resource-constrained devices. The two main ingredients of our theoretical work are: a derivation of an extremely simple pseudo-approximation batch algorithm for k-means (based on the recent k-means++), in which the algorithm is allowed to output more than k centers, and a streaming clustering algorithm in which batch clustering algorithms are performed on small inputs (fitting in memory) and combined in a hierarchical manner. Empirical evaluations on real and simulated data reveal the practical utility of our method.

194 citations
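A minimal sketch of k-means++ seeding, the batch ingredient the abstract mentions, is shown below; it performs D² sampling to pick initial centers. It is not the authors' full one-pass streaming algorithm, and the data set here is synthetic.

```python
# Sketch of k-means++ seeding (D^2 sampling), the batch building block the
# paper builds on; not the authors' full streaming algorithm.
import numpy as np

def kmeans_pp_seed(X, k, rng=np.random.default_rng(0)):
    """Choose k initial centers, each new one picked proportionally to its
    squared distance from the centers chosen so far."""
    centers = [X[rng.integers(len(X))]]
    d2 = np.sum((X - centers[0]) ** 2, axis=1)
    for _ in range(k - 1):
        idx = rng.choice(len(X), p=d2 / d2.sum())
        centers.append(X[idx])
        d2 = np.minimum(d2, np.sum((X - X[idx]) ** 2, axis=1))
    return np.array(centers)

X = np.random.default_rng(1).normal(size=(1000, 2))
print(kmeans_pp_seed(X, k=5))
```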


Journal ArticleDOI
TL;DR: A black-box-type algorithm is presented for the variational computation of energy levels and wave functions using a (ro)vibrational Hamiltonian expressed in an arbitrarily chosen body-fixed frame and in any set of internal coordinates of full or reduced vibrational dimensionality.
Abstract: A black-box-type algorithm is presented for the variational computation of energy levels and wave functions using a (ro)vibrational Hamiltonian expressed in an arbitrarily chosen body-fixed frame and in any set of internal coordinates of full or reduced vibrational dimensionality. To make the required numerical work feasible, a matrix representation of the operators is constructed using a discrete variable representation (DVR). The favorable properties of the DVR are exploited in the straightforward and numerically exact inclusion of any representation of the potential and the kinetic energy, including the G matrix and the extrapotential term. In this algorithm there is no need for an a priori analytic derivation of the kinetic energy operator, as all of its matrix elements at each grid point are computed numerically, in either a full- or a reduced-dimensional model. Due to the simple and straightforward definition of reduced-dimensional models within this approach, a fully anharmonic variational treatment of large, otherwise intractable molecular systems becomes available. In the computer code based on the above algorithm, there is no inherent limitation on the maximum number of coupled vibrational degrees of freedom. However, in practice current personal computers allow the treatment of about nine fully coupled vibrational dimensions. Computations of vibrational band origins in full and reduced dimensions, showing the advantages and limitations of the algorithm and the related computer code, are presented for the water, ammonia, and methane molecules.

188 citations
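To make the discrete variable representation concrete, here is a toy one-dimensional sinc-DVR (Colbert-Miller) calculation for a harmonic oscillator, which reproduces the energies 0.5, 1.5, 2.5, ... in reduced units. It is a standard illustration of the DVR idea, not the authors' (ro)vibrational code.

```python
# One-dimensional sinc-DVR (Colbert-Miller) for a harmonic oscillator; a toy
# example of the DVR technique, not the (ro)vibrational algorithm of the paper.
import numpy as np

n, dx, mass = 101, 0.1, 1.0
x = dx * (np.arange(n) - n // 2)            # grid points, symmetric about 0

# Kinetic energy matrix of the sinc-DVR (hbar = 1).
i, j = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
diff = i - j
T = np.zeros((n, n))
mask = diff != 0
T[mask] = 2.0 * (-1.0) ** diff[mask] / diff[mask] ** 2
np.fill_diagonal(T, np.pi ** 2 / 3.0)
T /= 2.0 * mass * dx ** 2

V = np.diag(0.5 * x ** 2)                   # harmonic potential is diagonal in the DVR
energies = np.linalg.eigvalsh(T + V)
print(energies[:4])                         # ~ 0.5, 1.5, 2.5, 3.5
```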


Book ChapterDOI
TL;DR: The concept of a closed symmetric monoidal category is used to make precise an extensive network of analogies between physics, topology, logic, and computation, in which a linear operator behaves very much like a "cobordism".
Abstract: In physics, Feynman diagrams are used to reason about quantum processes. In the 1980s, it became clear that underlying these diagrams is a powerful analogy between quantum physics and topology: namely, a linear operator behaves very much like a "cobordism". Similar diagrams can be used to reason about logic, where they represent proofs, and computation, where they represent programs. With the rise of interest in quantum cryptography and quantum computation, it became clear that there is an extensive network of analogies between physics, topology, logic and computation. In this expository paper, we make some of these analogies precise using the concept of a "closed symmetric monoidal category". We assume no prior knowledge of category theory, proof theory or computer science.

179 citations


Journal ArticleDOI
01 Mar 2009
TL;DR: A new GPU algorithm is presented and tested for the long-range part of the potentials that computes a cutoff pair potential between lattice points, essentially convolving a fixed 3D lattice of "weights" over all sub-cubes of a much larger lattice.
Abstract: Physical and engineering practicalities involved in microprocessor design have resulted in flat performance growth for traditional single-core microprocessors. The urgent need for continuing increases in the performance of scientific applications requires the use of many-core processors and accelerators such as graphics processing units (GPUs). This paper discusses GPU acceleration of the multilevel summation method for computing electrostatic potentials and forces for a system of charged atoms, which is a problem of paramount importance in biomolecular modeling applications. We present and test a new GPU algorithm for the long-range part of the potentials that computes a cutoff pair potential between lattice points, essentially convolving a fixed 3D lattice of ''weights'' over all sub-cubes of a much larger lattice. The implementation exploits the different memory subsystems provided on the GPU to stream optimally sized data sets through the multiprocessors. We demonstrate for the full multilevel summation calculation speedups of up to 26 using a single GPU and 46 using multiple GPUs, enabling the computation of a high-resolution map of the electrostatic potential for a system of 1.5 million atoms in under 12s.

141 citations
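The long-range kernel described above amounts to convolving a fixed block of cutoff pair-potential "weights" over a charge lattice. The CPU sketch below does this with an FFT-based convolution; the lattice sizes and the softened 1/r weights are illustrative, and this is not the paper's GPU implementation.

```python
# CPU sketch of the long-range lattice step: a fixed 3D block of cutoff
# pair-potential "weights" convolved over a charge lattice. Sizes and the
# softened kernel are placeholders, not the paper's multilevel summation code.
import numpy as np
from scipy.signal import fftconvolve

h, cutoff = 1.0, 6.0                         # lattice spacing and cutoff radius
r = int(cutoff / h)
g = h * np.mgrid[-r:r + 1, -r:r + 1, -r:r + 1]
dist = np.sqrt((g ** 2).sum(axis=0))
weights = np.where(dist <= cutoff, 1.0 / np.maximum(dist, h), 0.0)   # softened 1/r

charges = np.random.default_rng(0).normal(size=(32, 32, 32))
potential = fftconvolve(charges, weights, mode="same")   # cutoff pair potential on the lattice
print(potential.shape)
```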


Journal ArticleDOI
TL;DR: In this article, new ray tracing software developed at the German Aerospace Center is described for the flux density simulation of heliostat fields with very high accuracy in a small amount of computation time.
Abstract: Completely new ray tracing software has been developed at the German Aerospace Center. The main purpose of this software is the flux density simulation of heliostat fields with very high accuracy in a small amount of computation time. The software is primarily designed to process real sun shape distributions and real, highly resolved heliostat geometry data, that is, a data set of normal vectors of the entire reflecting surface of each heliostat in the field. Specific receiver and secondary concentrator models, as well as models of objects that shadow the heliostat field, can be implemented by the user and subsequently linked to the simulation software. The specific architecture of the software enables the provision of other powerful simulation environments with precise flux density simulation data for the purpose of entire plant simulations. The software was validated through a rigorous comparison with measured flux density distributions. The simulation results show very good agreement with the measured results.

Journal ArticleDOI
TL;DR: This paper introduces a novel algorithm running in O(N^2 log N) time, i.e., with near-optimal computational complexity, and whose overall structure follows that of the butterfly algorithm.
Abstract: This paper is concerned with the fast computation of Fourier integral operators of the general form ∫_{R^d} e^{2πiΦ(x,k)} f(k) dk, where k is a frequency variable, Φ(x, k) is a phase function obeying a standard homogeneity condition, and f is a given input. This is of interest, for such fundamental computations are connected with the problem of finding numerical solutions to wave equations and also frequently arise in many applications including reflection seismology, curvilinear tomography, and others. In two dimensions, when the input and output are sampled on N × N Cartesian grids, a direct evaluation requires O(N^4) operations, which is often prohibitively expensive. This paper introduces a novel algorithm running in O(N^2 log N) time, i.e., with near-optimal computational complexity, whose overall structure follows that of the butterfly algorithm. Underlying this algorithm is a mathematical insight concerning the restriction of the kernel e^{2πiΦ(x,k)} to subsets of the time and frequency domains. Whenever these subsets obey a simple geometric condition, the restricted kernel is approximately low-rank; we propose constructing such low-rank approximations using a special interpolation scheme, which prefactors the oscillatory component, interpolates the remaining nonoscillatory part, and finally remodulates the outcome. A byproduct of this scheme is that the whole algorithm is highly efficient in terms of memory requirements. Numerical results demonstrate the performance and illustrate the empirical properties of this algorithm.
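For scale, the brute-force evaluation that the butterfly-style algorithm avoids looks like the following: a dense N² × N² oscillatory matrix applied to the input, i.e., O(N^4) work on N × N grids. The phase function used here is an arbitrary homogeneous example, not one from the paper.

```python
# Direct (brute-force) evaluation of a 2D Fourier integral operator
# u(x) = sum_k exp(2*pi*i*Phi(x,k)) f(k) on N x N grids -- the O(N^4) baseline
# the butterfly algorithm reduces to O(N^2 log N). Phi(x,k) = x.k + |x||k|/4
# is just an illustrative degree-1 homogeneous phase.
import numpy as np

N = 16                                      # keep N small: cost grows like N**4
grid = np.arange(N) / N
x = np.stack(np.meshgrid(grid, grid, indexing="ij"), -1).reshape(-1, 2)   # targets
k = np.stack(np.meshgrid(np.arange(N) - N / 2, np.arange(N) - N / 2,
                         indexing="ij"), -1).reshape(-1, 2)               # frequencies
f = np.random.default_rng(0).normal(size=len(k)) + 0j

phase = x @ k.T + 0.25 * np.linalg.norm(x, axis=1)[:, None] * np.linalg.norm(k, axis=1)[None, :]
u = np.exp(2j * np.pi * phase) @ f          # dense N^2 x N^2 apply
print(u.shape)
```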

Proceedings Article
01 Jan 2009
TL;DR: It is shown how the problems of maximum coverage and set cover in the set-streaming model can be utilized to give efficient online solutions to a multi-topic blog-watch application, and the effectiveness of the methods is verified on both synthetic and real weblog data.
Abstract: We generalize the graph streaming model to hypergraphs. In this streaming model, hyperedges arrive online and any computation has to be done on-the-fly using a small amount of space. Each hyperedge can be viewed as a set of elements (nodes), so we refer to our proposed model as the “set-streaming” model of computation. We consider the problem of “maximum coverage”, in which k sets have to be selected that maximize the total weight of the covered elements. In the set-streaming model of computation, we show that our algorithm for maximum coverage achieves an approximation factor of 1/4. When multiple passes are allowed, we also provide a Θ(log n) approximation algorithm for set cover. We next consider a multi-topic blog-watch application, an extension of blog-alert-like applications for handling simultaneous multiple-topic requests. We show how the problems of maximum coverage and set cover in the set-streaming model can be utilized to give efficient online solutions to this problem. We verify the effectiveness of our methods on both synthetic and real weblog data.
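A toy one-pass heuristic in the spirit of the set-streaming model is sketched below: an arriving hyperedge is kept only if its marginal covered weight clears a threshold tied to the weight already covered. The threshold rule and data are invented for illustration; this is not the authors' algorithm or its 1/4-approximation analysis.

```python
# One-pass heuristic for maximum coverage over a stream of hyperedges.
# Illustrative only: the threshold rule below is made up for the example and
# carries no approximation guarantee.
def stream_max_coverage(stream, k, weight):
    """Keep an arriving set only if its marginal gain clears a simple threshold."""
    chosen, covered = [], set()
    for s in stream:                                   # each hyperedge seen once
        gain = sum(weight[e] for e in s if e not in covered)
        covered_weight = sum(weight[e] for e in covered) or 1
        if len(chosen) < k and gain >= covered_weight / (2 * k):
            chosen.append(set(s))
            covered |= set(s)
    return chosen, covered

weights = {e: 1 for e in range(10)}
stream = [{0, 1, 2}, {2, 3}, {4, 5, 6, 7}, {7, 8, 9}, {0, 9}]
print(stream_max_coverage(stream, k=2, weight=weights))
```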

Proceedings ArticleDOI
13 Apr 2009
TL;DR: A general class of problems in sensor networks where the event-triggered algorithm can be used is presented, and it is shown that the proposed algorithm reduces the number of message exchanges by two orders of magnitude compared to commonly used dual decomposition algorithms.
Abstract: Many problems in sensor networks can be formulated as optimization problems. Existing distributed optimization algorithms typically rely on choosing a step size to ensure convergence. In this case, communication between sensor nodes occurs each time the computations are carried out. Since in sensor networks the energy required for communication can be significantly greater than the energy required to perform computation, it would be beneficial if we could somehow separate communication and computation. This paper presents such a distributed algorithm, called the event-triggered algorithm. Under event triggering, each agent broadcasts to its neighbors when a local “error” signal exceeds a state-dependent threshold. We give a general class of problems in sensor networks where the event-triggered algorithm can be used. In particular, this paper uses the data gathering problem as an example. We propose an event-triggered distributed algorithm and prove its convergence. Simulation results show that the proposed algorithm reduces the number of message exchanges by two orders of magnitude compared to commonly used dual decomposition algorithms. It also enjoys better scalability with respect to the depth of the tree and the maximum branch number of the tree.
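The event-triggering idea can be illustrated with a toy consensus loop in which each agent computes continuously but broadcasts only when its state drifts from the last broadcast value by more than a threshold. The update rule, topology, and threshold below are invented for the example and are not the paper's data-gathering algorithm.

```python
# Toy event-triggered communication: agents update locally every step but
# broadcast only when the deviation from their last broadcast value exceeds a
# threshold. A sketch of the general idea, not the paper's algorithm.
import numpy as np

rng = np.random.default_rng(0)
n_agents, steps, eps, alpha = 5, 200, 0.05, 0.2
x = rng.normal(size=n_agents)                 # local states
x_hat = x.copy()                              # last values each agent broadcast
broadcasts = 0

for _ in range(steps):
    # Local computation uses only neighbours' *broadcast* values (ring topology).
    x = x + alpha * (np.roll(x_hat, 1) + np.roll(x_hat, -1) - 2 * x)
    # Event trigger: broadcast only when the local error exceeds the threshold.
    trigger = np.abs(x - x_hat) > eps
    x_hat[trigger] = x[trigger]
    broadcasts += int(trigger.sum())

print("state spread:", x.max() - x.min())
print("broadcasts:", broadcasts, "out of a possible", n_agents * steps)
```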

Proceedings ArticleDOI
29 Aug 2009
TL;DR: A survey is given of various human computation systems, which are categorized into initiatory human computation, distributed human computation, and social game-based human computation with volunteers, paid engineers, and online players.
Abstract: Human computation is a technique that makes use of human abilities to solve problems that computers are not good at solving but that are trivial for humans. In this paper, we give a survey of various human computation systems, which we categorize into initiatory human computation, distributed human computation, and social game-based human computation with volunteers, paid engineers, and online players. Although previous works have defined various types of social games, recently developed social games cannot be categorized based on those definitions. In this paper, we therefore define categories and characteristics of social games that are suitable for all existing ones. We also present a survey of the performance aspects of human computation systems. This paper thus gives a better understanding of human computation systems.

Journal ArticleDOI
TL;DR: This paper introduces a feasible architectural design for large scale quantum computation in optical systems by combining the recent developments in topological cluster state computation with the photonic module, a simple chip-based device that can be used as a fundamental building block for a large-scale computer.
Abstract: The development of a large scale quantum computer is a highly sought after goal of fundamental research and consequently a highly non-trivial problem. Scalability in quantum information processing is not just a problem of qubit manufacturing and control but it crucially depends on the ability to adapt advanced techniques in quantum information theory, such as error correction, to the experimental restrictions of assembling qubit arrays into the millions. In this paper, we introduce a feasible architectural design for large scale quantum computation in optical systems. We combine the recent developments in topological cluster state computation with the photonic module, a simple chip-based device that can be used as a fundamental building block for a large-scale computer. The integration of the topological cluster model with this comparatively simple operational element addresses many significant issues in scalable computing and leads to a promising modular architecture with complete integration of active error correction, exhibiting high fault-tolerant thresholds.

Journal ArticleDOI
TL;DR: In this article, the authors propose to complement the usual scenario in fault-tolerant quantum computation with code deformation, in which a given code is progressively changed in such a way that encoded qubits can be created, manipulated, and non-destructively measured.
Abstract: The usual scenario in fault-tolerant quantum computation involves a certain number of qubits encoded in each code block, transversal operations between them, and destructive measurements of ancillary code blocks. We propose to complement these techniques with code deformation, in which a given code is progressively changed in such a way that encoded qubits can be created, manipulated and non-destructively measured. We apply this approach to surface codes, where the computation is performed in a single code layer which is deformed using 'cut and paste' operations. All the interactions between qubits remain purely local in a two-dimensional setting.

Journal ArticleDOI
01 Sep 2009
TL;DR: This correspondence paper evaluates the control performance and the computation time reduction of the sequential decentralized and fully decentralized methods in comparison with the centralized method and shows that the fully decentralized method can be made effective against short-term communication failure.
Abstract: This correspondence paper presents the validation of a formation flight control technique with obstacle avoidance capability based on nonlinear model predictive algorithms. Control architectures for multi-agent systems employed in this correspondence paper can be categorized as centralized, sequential decentralized, and fully decentralized methods. Centralized methods generally have better performance than decentralized methods. However, it is well known that the performance of centralized methods for formation flight degrades when there is communication failure among the vehicles, and they require more computation time than decentralized methods. This correspondence paper evaluates the control performance and the computation time reduction of the sequential decentralized and fully decentralized methods in comparison with the centralized method and shows that the fully decentralized method can be made effective against short-term communication failure. The control inputs for formation flight are computed by nonlinear model predictive control (NMPC). The control input saturation and state constraints are incorporated as inequality constraints using the Karush-Kuhn-Tucker conditions in the NMPC framework, and collision avoidance can be considered in real time. The proposed schemes are validated by numerical simulations, which include process and measurement noise for more realistic situations.

Journal ArticleDOI
TL;DR: The power of closed timelike curves and other nonlinear extensions of quantum mechanics for distinguishing nonorthogonal states and speeding up hard computations is studied and it is shown that if a CTC-assisted computer is presented with a labeled mixture of states to be distinguished, the CTC is of no use.
Abstract: We study the power of closed timelike curves (CTCs) and other nonlinear extensions of quantum mechanics for distinguishing nonorthogonal states and speeding up hard computations. If a CTC-assisted computer is presented with a labeled mixture of states to be distinguished---the most natural formulation---we show that the CTC is of no use. The apparent contradiction with recent claims that CTC-assisted computers can perfectly distinguish nonorthogonal states is resolved by noting that CTC-assisted evolution is nonlinear, so the output of such a computer on a mixture of inputs is not a convex combination of its output on the mixture's pure components. Similarly, it is not clear that CTC assistance or nonlinear evolution help solve hard problems if computation is defined as we recommend, as correctly evaluating a function on a labeled mixture of orthogonal inputs.

Journal ArticleDOI
TL;DR: A directional multiscale algorithm is introduced for the N-body problem of the two dimensional Helmholtz kernel that is accurate and has the optimal $O(N \log N)$ complexity for problems from two dimensional scattering applications.
Abstract: This paper is concerned with the fast solution of high frequency acoustic scattering problems in two dimensions. We introduce a directional multiscale algorithm for the $N$-body problem of the two dimensional Helmholtz kernel. The algorithm follows the approach developed in Engquist and Ying, SIAM J. Sci. Comput., 29 (4), 2007, where the three dimensional case was studied. The main observation is that, for two regions that follow a directional parabolic geometric configuration, the interaction between these two regions through the 2D Helmholtz kernel is approximately low rank. We propose an improved randomized procedure for generating the low-rank separated representation for the interaction between these regions. Based on this representation, the computation of the far field interaction is organized in a multidirectional and multiscale way to achieve maximum efficiency. The proposed algorithm is accurate and has the optimal $O(N \log N)$ complexity for problems from two dimensional scattering applications. Finally, we combine this fast directional algorithm with standard boundary integral formulations to solve acoustic scattering problems that are thousands of wavelengths in size.

Journal ArticleDOI
TL;DR: In this paper, the decoherence properties of adiabatic quantum computation (AQC) in the presence of in general non-Markovian, e.g., low-frequency, noise were studied.
Abstract: We have studied the decoherence properties of adiabatic quantum computation (AQC) in the presence of in general non-Markovian, e.g., low-frequency, noise. The developed description of the incoherent Landau-Zener transitions shows that the global AQC maintains its properties even for decoherence larger than the minimum gap at the anticrossing of the two lowest-energy levels. The more efficient local AQC, however, does not improve scaling of the computation time with the number of qubits $n$ as in the decoherence-free case. The scaling improvement requires phase coherence throughout the computation, limiting the computation time and the problem size $n$.

Journal ArticleDOI
TL;DR: In this paper, a multimodal nested sampling (MULTINEST) algorithm is proposed to evaluate the Bayesian evidence and return posterior probability densities for likelihood surfaces containing multiple secondary modes.
Abstract: We describe an application of the MULTINEST algorithm to gravitational wave data analysis. MULTINEST is a multimodal nested sampling algorithm designed to efficiently evaluate the Bayesian evidence and return posterior probability densities for likelihood surfaces containing multiple secondary modes. The algorithm employs a set of 'live' points which are updated by partitioning the set into multiple overlapping ellipsoids and sampling uniformly from within them. This set of 'live' points climbs up the likelihood surface through nested iso-likelihood contours and the evidence and posterior distributions can be recovered from the point set evolution. The algorithm is model independent in the sense that the specific problem being tackled enters only through the likelihood computation, and does not change how the 'live' point set is updated. In this paper, we consider the use of the algorithm for gravitational wave data analysis by searching a simulated LISA data set containing two non-spinning supermassive black hole binary signals. The algorithm is able to rapidly identify all the modes of the solution and recover the true parameters of the sources to high precision.
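A bare-bones nested sampling loop conveys how the 'live' points climb nested iso-likelihood contours while the evidence accumulates. The toy Gaussian likelihood, uniform prior, and crude rejection step below stand in for MULTINEST's ellipsoidal sampling; none of it is the MULTINEST implementation.

```python
# Bare-bones nested sampling on a toy 2D Gaussian likelihood with a uniform
# prior on [-5, 5]^2. Everything here is illustrative; MULTINEST replaces the
# crude rejection step with sampling from bounding ellipsoids.
import numpy as np

rng = np.random.default_rng(0)

def loglike(theta):                           # unit-variance 2D Gaussian at the origin
    return -0.5 * np.sum(theta ** 2, axis=-1) - np.log(2 * np.pi)

n_live, n_iter = 100, 600
live = rng.uniform(-5, 5, size=(n_live, 2))
live_logl = loglike(live)
logZ, logX = -np.inf, 0.0                     # running evidence and prior volume

for i in range(n_iter):
    worst = np.argmin(live_logl)
    logX_new = -(i + 1) / n_live              # E[log X] shrinks by 1/n_live per step
    logw = live_logl[worst] + np.log(np.exp(logX) - np.exp(logX_new))
    logZ = np.logaddexp(logZ, logw)
    logX = logX_new
    # Replace the worst live point by a fresh prior draw above the likelihood threshold.
    while True:
        cand = rng.uniform(-5, 5, size=2)
        if loglike(cand) > live_logl[worst]:
            live[worst], live_logl[worst] = cand, loglike(cand)
            break

# Add the contribution of the remaining live points.
logZ = np.logaddexp(logZ, np.logaddexp.reduce(live_logl) + logX - np.log(n_live))
print("log-evidence ~", logZ, " analytic ~", -np.log(100.0))
```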

Journal ArticleDOI
TL;DR: This article realizes a duality between dynamic dependence graphs and memoization, and combines them to give a change-propagation algorithm that can dramatically increase computation reuse; the authors refer to this approach as self-adjusting computation.
Abstract: Recent work on adaptive functional programming (AFP) developed techniques for writing programs that can respond to modifications to their data by performing change propagation. To achieve this, executions of programs are represented with dynamic dependence graphs (DDGs) that record data dependences and control dependences in a way that a change-propagation algorithm can update the computation as if the program were run from scratch, by re-executing only the parts of the computation affected by the changes. Since change propagation only re-executes parts of the computation, it can respond to certain incremental modifications asymptotically faster than recomputing from scratch, potentially offering significant speedups. Such asymptotic speedups, however, are rare: for many computations and modifications, change propagation is no faster than recomputing from scratch. In this article, we realize a duality between dynamic dependence graphs and memoization, and combine them to give a change-propagation algorithm that can dramatically increase computation reuse. The key idea is to use DDGs to identify and re-execute the parts of the computation that are affected by modifications, while using memoization to identify the parts of the computation that remain unaffected by the changes. We refer to this approach as self-adjusting computation. Since DDGs are imperative, but (traditional) memoization requires purely functional computation, reusing computation correctly via memoization becomes a challenge. We overcome this challenge with a technique for remembering and reusing not just the results of function calls (as in conventional memoization), but their executions represented with DDGs. We show that the proposed approach is realistic by describing a library for self-adjusting computation, presenting efficient algorithms for realizing the library, and describing and evaluating an implementation. Our experimental evaluation with a variety of applications, ranging from simple list primitives to more sophisticated computational geometry algorithms, shows that the approach is effective in practice: compared to recomputing from scratch, self-adjusting programs respond to small modifications to their data orders of magnitude faster.
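For reference, conventional memoization, the ingredient the article combines with dynamic dependence graphs, can be sketched in a few lines: results of pure calls are cached by argument so unchanged sub-computations are reused. The divide-and-conquer example below is invented; it shows only the classical technique, not the self-adjusting computation library.

```python
# Conventional memoization: cache results of pure function calls keyed by their
# arguments so repeated sub-computations are reused. This is the classical
# technique the article extends, not its DDG-based change propagation.
from functools import lru_cache

@lru_cache(maxsize=None)
def list_max(xs: tuple) -> int:
    """Max of a tuple via divide and conquer; repeated sub-tuples hit the cache."""
    if len(xs) == 1:
        return xs[0]
    mid = len(xs) // 2
    return max(list_max(xs[:mid]), list_max(xs[mid:]))

data = tuple(range(1000))
print(list_max(data))              # first run computes everything
print(list_max(data[:500]))        # reuses the cached left half
print(list_max.cache_info())
```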

Journal ArticleDOI
TL;DR: This construction indicates that the 3D divergence-free C^0-P_k elements have the full order of approximation for any degree k >= 8, which mainly serves the purposes of understanding and ensuring the approximation properties of C^1 finite element spaces on tetrahedral grids.

Journal ArticleDOI
TL;DR: A method to measure correlations is presented that can be shown to be identical to the original ‘order-n algorithm’ from Frenkel and Smit (Understanding Molecular Simulation, Academic Press, 2002) and is significantly easier to implement than standard methods.
Abstract: A method to measure correlations is presented that can be shown to be identical to the original ‘order-n algorithm’ from Frenkel and Smit (Understanding Molecular Simulation, Academic Press, 2002). In contrast to their work, we present the algorithm without the use of ‘block sums of velocities’. We show that the algorithm gives identical results compared to standard correlation methods for the time points at which the correlation is computed. We apply the algorithm to compute diffusion of methane and benzene in the metal-organic framework IRMOF-1 and focus on the computation of the mean-squared displacement, the velocity autocorrelation function (VACF), and the angular VACF. Other correlation functions can readily be computed using the same algorithm. The savings in computer time and memory result from a reduction of the number of time points, as they can be chosen non-uniformly. In addition, the algorithm is significantly easier to implement than standard methods. Source code for the algorithm is given.
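The standard (direct) correlation computation that the order-n algorithm is compared against can be sketched as follows, with logarithmically spaced lag times showing where the savings from non-uniform time points come from. The random-velocity trajectory is synthetic; this is not the order-n block algorithm itself.

```python
# Direct computation of the mean-squared displacement (MSD) and velocity
# autocorrelation function (VACF) at logarithmically spaced lag times.
# Synthetic trajectory; this is the standard approach the paper compares
# against, not the order-n block algorithm.
import numpy as np

rng = np.random.default_rng(0)
n_steps, n_part, dt = 10_000, 50, 1.0
vel = rng.normal(size=(n_steps, n_part, 3))       # toy trajectory: random velocities
pos = np.cumsum(vel * dt, axis=0)

lags = np.unique(np.logspace(0, np.log10(n_steps - 1), 40).astype(int))
msd = [np.mean(np.sum((pos[l:] - pos[:-l]) ** 2, axis=-1)) for l in lags]
vacf = [np.mean(np.sum(vel[l:] * vel[:-l], axis=-1)) for l in lags]

for l, m, c in zip(lags, msd, vacf):
    print(f"t = {l * dt:8.1f}   MSD = {m:10.2f}   VACF = {c:8.3f}")
```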

Posted Content
TL;DR: In this paper, the boundary between classical and quantum computational power is investigated: new classical simulation algorithms centered on sampling methods are developed, yielding classes of classically simulatable quantum circuits for which standard techniques relying on the exact computation of measurement probabilities fail to provide efficient simulations.
Abstract: We investigate the boundary between classical and quantum computational power. This work consists of two parts. First we develop new classical simulation algorithms that are centered on sampling methods. Using these techniques we generate new classes of classically simulatable quantum circuits where standard techniques relying on the exact computation of measurement probabilities fail to provide efficient simulations. For example, we show how various concatenations of matchgate, Toffoli, Clifford, bounded-depth, Fourier transform and other circuits are classically simulatable. We also prove that sparse quantum circuits as well as circuits composed of CNOT and exp[iaX] gates can be simulated classically. In a second part, we apply our results to the simulation of quantum algorithms. It is shown that a recent quantum algorithm, concerned with the estimation of Potts model partition functions, can be simulated efficiently classically. Finally, we show that the exponential speed-ups of Simon's and Shor's algorithms crucially depend on the very last stage in these algorithms, dealing with the classical postprocessing of the measurement outcomes. Specifically, we prove that both algorithms would be classically simulatable if the function classically computed in this step had a sufficiently peaked Fourier spectrum.

Journal ArticleDOI
TL;DR: A quantum algorithm is provided for the numerical evaluation of molecular properties, whose time cost is a constant multiple of the time needed to compute the molecular energy, regardless of the size of the system.
Abstract: Quantum computers, if available, could substantially accelerate quantum simulations. We extend this result to show that the computation of molecular properties (energy derivatives) could also be sped up using quantum computers. We provide a quantum algorithm for the numerical evaluation of molecular properties, whose time cost is a constant multiple of the time needed to compute the molecular energy, regardless of the size of the system. Molecular properties computed with the proposed approach could also be used for the optimization of molecular geometries or other properties. For that purpose, we discuss the benefits of quantum techniques for Newton’s method and Householder methods. Finally, global minima for the proposed optimizations can be found using the quantum basin hopper algorithm, which offers an additional quadratic reduction in cost over classical multi-start techniques.
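As a reminder of the classical optimization step the paper targets, here is a plain Newton iteration that minimizes a toy energy surface from its gradient and Hessian, the kind of derivative information whose evaluation the quantum algorithm is meant to accelerate. The energy function is an arbitrary example, not a molecular model, and this is not the quantum algorithm.

```python
# Classical Newton iteration on a toy "energy surface" using its gradient and
# Hessian. Illustrative only; not the paper's quantum algorithm.
import numpy as np

A = np.array([[2.0, 0.3], [0.3, 1.0]])           # toy quadratic part of the energy

def energy(x):
    return 0.5 * x @ A @ x + 0.1 * np.sum(x ** 4)

def gradient(x):
    return A @ x + 0.4 * x ** 3

def hessian(x):
    return A + np.diag(1.2 * x ** 2)

x = np.array([1.5, -2.0])                         # starting "geometry"
for it in range(8):
    x = x - np.linalg.solve(hessian(x), gradient(x))   # Newton step
    print(it, x, energy(x))
```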

Journal ArticleDOI
TL;DR: In this article, a transition prediction method for general 3D configurations is presented, which consists of a coupled program system including a 3D Navier-Stokes solver, a transition module, a boundary layer code and a stability code.

Journal ArticleDOI
TL;DR: This paper proposes to use the Total Lagrangian formulation of the Finite Element method together with Dynamic Relaxation for computing intra-operative organ deformations, and proposes a termination criterion that can be used to obtain fast results with prescribed accuracy.

Journal ArticleDOI
01 Jul 2009-Proteins
TL;DR: The alpha shape of a molecule is a geometrical representation that provides a unique surface decomposition and a means to filter atomic contacts; it is used to revisit and unify the definition and computation of surface residues, contiguous patches, and curvature.
Abstract: The alpha shape of a molecule is a geometrical representation that provides a unique surface decomposition and a means to filter atomic contacts. We used it to revisit and unify the definition and computation of surface residues, contiguous patches, and curvature. These descriptors are evaluated and compared with former approaches on 85 proteins for which both bound and unbound forms are available. Based on the local density of interactions, the detection of surface residues shows a sensitivity of 98% while preserving a well-formed protein core. A novel conception of a surface patch is defined by traveling along the surface from a central residue or atom. By construction, all surface patches are contiguous, which makes it possible to cope with common problems of wrong selection and non-selection of neighbors. In the case of protein-binding site prediction, this new definition has improved the signal-to-noise ratio by 2.6 times compared with a widely used approach. With most common approaches, the computation of surface curvature can be locally biased by the presence of subsurface cavities and local variations of atomic densities. A novel notion of surface curvature is specifically developed to avoid such bias and is parametrizable to emphasize either local or global features. It defines a molecular landscape composed on average of 38% knobs and 62% clefts, where interacting residues (IR) are 30% more frequent in knobs. A statistical analysis shows that residues in knobs are more charged, less hydrophobic, and less aromatic than residues in clefts. IR in knobs are, however, much more hydrophobic and aromatic and less charged than non-interacting residues (non-IR) in knobs. Furthermore, IR are shown to be more accessible than non-IR in both clefts and knobs. The use of the alpha shape as a unifying framework allows for formal definitions, and fast and robust computations desirable in large-scale projects. This swiftness is not achieved to the detriment of quality, as proven by valid improvements compared with former approaches. In addition, our approach is general enough to be applied to nucleic acids and any other biomolecules.