Journal ArticleDOI

GROMACS: High performance molecular simulations through multi-level parallelism from laptops to supercomputers

TL;DR: GROMACS is one of the most widely used open-source and free software codes in chemistry, used primarily for dynamical simulations of biomolecules, and provides a rich set of calculation types.
About: This article is published in SoftwareX. The article was published on 2015-09-01 and is currently open access. It has received 12,985 citations to date.
Citations
01 May 1993
TL;DR: Comparing the results to the fastest reported vectorized Cray Y-MP and C90 algorithm shows that the current generation of parallel machines is competitive with conventional vector supercomputers even for small problems.
Abstract: Three parallel algorithms for classical molecular dynamics are presented. The first assigns each processor a fixed subset of atoms; the second assigns each a fixed subset of inter-atomic forces to compute; the third assigns each a fixed spatial region. The algorithms are suitable for molecular dynamics models which can be difficult to parallelize efficiently: those with short-range forces where the neighbors of each atom change rapidly. They can be implemented on any distributed-memory parallel machine which allows for message-passing of data between independently executing processors. The algorithms are tested on a standard Lennard-Jones benchmark problem for system sizes ranging from 500 to 100,000,000 atoms on several parallel supercomputers: the nCUBE 2, Intel iPSC/860 and Paragon, and Cray T3D. Comparing the results to the fastest reported vectorized Cray Y-MP and C90 algorithm shows that the current generation of parallel machines is competitive with conventional vector supercomputers even for small problems. For large problems, the spatial algorithm achieves parallel efficiencies of 90% and a 1840-node Intel Paragon performs up to 165 times faster than a single Cray C90 processor. Trade-offs between the three algorithms and guidelines for adapting them to more complex molecular dynamics simulations are also discussed.
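The atom-decomposition strategy described above can be sketched in a few lines: each of P "processors" owns a fixed subset of atoms and computes the total force on just those atoms. This is a minimal 1-D illustration, not the paper's implementation; the function names and the reduced-unit Lennard-Jones force are illustrative assumptions.

```python
# Sketch of atom decomposition: processor p owns atoms p, p+P, p+2P, ...
# and computes forces only for its own atoms (1-D, reduced units).

def lj_force(r):
    """Lennard-Jones force magnitude, -dU/dr with epsilon = sigma = 1."""
    return 24.0 * (2.0 / r**13 - 1.0 / r**7)

def forces_atom_decomposition(x, n_proc):
    n = len(x)
    forces = [0.0] * n
    for p in range(n_proc):                 # each "processor"
        for i in range(p, n, n_proc):       # its fixed subset of atoms
            for j in range(n):
                if j != i:
                    r = x[i] - x[j]
                    forces[i] += lj_force(abs(r)) * (1.0 if r > 0 else -1.0)
    return forces

x = [0.0, 1.1, 2.3, 3.6]
serial = forces_atom_decomposition(x, 1)
parallel = forces_atom_decomposition(x, 4)
assert all(abs(a - b) < 1e-12 for a, b in zip(serial, parallel))
```

The partitioning changes only who computes each force, not the result, which is why the serial and four-processor partitions agree exactly; the force- and spatial-decomposition variants differ in which loop is split, trading communication volume against load balance.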

29,323 citations

Journal ArticleDOI
TL;DR: The optimal simulation protocol for each program has been implemented in CHARMM-GUI and is expected to be applicable to the remainder of the additive C36 FF including the proteins, nucleic acids, carbohydrates, and small molecules.
Abstract: Proper treatment of nonbonded interactions is essential for the accuracy of molecular dynamics (MD) simulations, especially in studies of lipid bilayers. The use of the CHARMM36 force field (C36 FF) in different MD simulation programs can result in disagreements with published simulations performed with CHARMM due to differences in the protocols used to treat the long-range and 1-4 nonbonded interactions. In this study, we systematically test the use of the C36 lipid FF in NAMD, GROMACS, AMBER, OpenMM, and CHARMM/OpenMM. A wide range of Lennard-Jones (LJ) cutoff schemes and integrator algorithms were tested to find the optimal simulation protocol to best match bilayer properties of six lipids with varying acyl chain saturation and head groups. MD simulations of a 1,2-dipalmitoyl-sn-phosphatidylcholine (DPPC) bilayer were used to obtain the optimal protocol for each program. MD simulations with all programs were found to reasonably match the DPPC bilayer properties (surface area per lipid, chain order parameters, ...
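The LJ cutoff schemes compared in that study differ in how the potential is taken to zero at the cutoff. A minimal sketch of one common family, potential switching, is below; the smootherstep polynomial used here is an illustrative choice, not CHARMM's exact switching function.

```python
# Potential-switch sketch: multiply the LJ potential by a smooth factor
# falling from 1 at r_switch to 0 at r_cutoff, so energy and force vanish
# continuously at the cutoff instead of jumping (reduced units).

def lj(r, eps=1.0, sigma=1.0):
    sr6 = (sigma / r) ** 6
    return 4.0 * eps * (sr6 * sr6 - sr6)

def switch(r, r_switch, r_cutoff):
    if r <= r_switch:
        return 1.0
    if r >= r_cutoff:
        return 0.0
    t = (r - r_switch) / (r_cutoff - r_switch)
    return 1.0 - t**3 * (10.0 - 15.0 * t + 6.0 * t**2)  # smootherstep complement

def lj_switched(r, r_switch=1.0, r_cutoff=1.2):
    if r >= r_cutoff:
        return 0.0
    return lj(r) * switch(r, r_switch, r_cutoff)

assert lj_switched(1.3) == 0.0               # beyond cutoff: exactly zero
assert lj_switched(0.9) == lj(0.9)           # inside switch radius: unmodified
assert abs(lj_switched(1.1)) < abs(lj(1.1))  # tapered in between
```

Force switching, hard truncation, and long-range dispersion corrections are alternative treatments of the same tail region, and the paper's point is that such seemingly small choices measurably shift bilayer properties like surface area per lipid.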

2,182 citations

Journal ArticleDOI
TL;DR: Several of the fundamental algorithms used in LAMMPS are described along with the design strategies which have made it flexible for both users and developers, and some capabilities recently added to the code which were enabled by this flexibility are highlighted.

1,956 citations

Journal ArticleDOI
TL;DR: The Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS) is a simulator for particle-based modeling of materials at length scales ranging from atomic to mesoscale to continuum.

1,517 citations

Journal ArticleDOI
TL;DR: OpenMM is a molecular dynamics simulation toolkit with a unique focus on extensibility, which makes it an ideal tool for researchers developing new simulation methods, and also allows those new methods to be immediately available to the larger community.
Abstract: OpenMM is a molecular dynamics simulation toolkit with a unique focus on extensibility. It allows users to easily add new features, including forces with novel functional forms, new integration algorithms, and new simulation protocols. Those features automatically work on all supported hardware types (including both CPUs and GPUs) and perform well on all of them. In many cases they require minimal coding, just a mathematical description of the desired function. They also require no modification to OpenMM itself and can be distributed independently of OpenMM. This makes it an ideal tool for researchers developing new simulation methods, and also allows those new methods to be immediately available to the larger community.
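The idea that a new force can be added from "just a mathematical description" can be mimicked in plain Python: the user supplies an energy expression in the bond length r, and the force is derived automatically. This sketch imitates the concept behind OpenMM's CustomBondForce without using its actual API; OpenMM differentiates the expression analytically, whereas this toy uses central finite differences.

```python
# Toy "custom force from an expression" (concept sketch, not OpenMM):
# the energy is a user-supplied string in r; the force F = -dE/dr is
# obtained numerically, so no per-force code has to be written.
import math

class ToyCustomBondForce:
    def __init__(self, energy_expr, **params):
        self.expr = energy_expr
        self.params = params

    def energy(self, r):
        return eval(self.expr, {"math": math, "r": r, **self.params})

    def force(self, r, h=1e-6):
        # Central difference for F = -dE/dr
        return -(self.energy(r + h) - self.energy(r - h)) / (2.0 * h)

# A harmonic bond, defined purely by its energy expression:
bond = ToyCustomBondForce("0.5 * k * (r - r0)**2", k=100.0, r0=1.0)
assert abs(bond.energy(1.0)) < 1e-12
assert abs(bond.force(1.2) - (-20.0)) < 1e-3   # F = -k*(r - r0)
```

Swapping the string for a Morse or FENE expression changes the physics without touching the engine, which is the extensibility property the abstract emphasizes.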

1,364 citations


Cites methods from "GROMACS: High performance molecular..."

  • ...For Gromacs and CHARMM, we compared only explicit solvent....

  • ...These are some of the functions provided by OpenMM that may be used by Python scripts: • Reading input files, including standard formats like PDB or PDBx/mmCIF, as well as the proprietary formats used by applications such as Amber, CHARMM, Gromacs, and Desmond....

  • ...OpenMM does include modelling tools, but it also can directly read the file formats used by Amber [16], CHARMM [17], Gromacs [18], and Desmond [19]....

  • ...Comparisons were made to Amber 16, Gromacs 4.6.5, and CHARMM-LITE c40b1....

  • ...Results are shown for Amber in Tables 6 and 7, for Gromacs in Tables 8 and 9, and for CHARMM in Tables 10 and 11....

References
Journal ArticleDOI
TL;DR: VMD is a molecular graphics program designed for the display and analysis of molecular assemblies, in particular biopolymers such as proteins and nucleic acids, which can simultaneously display any number of structures using a wide variety of rendering styles and coloring methods.

46,130 citations

Journal ArticleDOI
TL;DR: In this article, three parallel algorithms for classical molecular dynamics are presented, which can be implemented on any distributed-memory parallel machine which allows for message-passing of data between independently executing processors.

32,670 citations

01 May 1993
TL;DR: Comparing the results to the fastest reported vectorized Cray Y-MP and C90 algorithm shows that the current generation of parallel machines is competitive with conventional vector supercomputers even for small problems.
Abstract: Three parallel algorithms for classical molecular dynamics are presented. The first assigns each processor a fixed subset of atoms; the second assigns each a fixed subset of inter-atomic forces to compute; the third assigns each a fixed spatial region. The algorithms are suitable for molecular dynamics models which can be difficult to parallelize efficiently: those with short-range forces where the neighbors of each atom change rapidly. They can be implemented on any distributed-memory parallel machine which allows for message-passing of data between independently executing processors. The algorithms are tested on a standard Lennard-Jones benchmark problem for system sizes ranging from 500 to 100,000,000 atoms on several parallel supercomputers: the nCUBE 2, Intel iPSC/860 and Paragon, and Cray T3D. Comparing the results to the fastest reported vectorized Cray Y-MP and C90 algorithm shows that the current generation of parallel machines is competitive with conventional vector supercomputers even for small problems. For large problems, the spatial algorithm achieves parallel efficiencies of 90% and a 1840-node Intel Paragon performs up to 165 times faster than a single Cray C90 processor. Trade-offs between the three algorithms and guidelines for adapting them to more complex molecular dynamics simulations are also discussed.

29,323 citations

Journal ArticleDOI
TL;DR: An N⋅log(N) method for evaluating electrostatic energies and forces of large periodic systems is presented based on interpolation of the reciprocal space Ewald sums and evaluation of the resulting convolutions using fast Fourier transforms.
Abstract: An N⋅log(N) method for evaluating electrostatic energies and forces of large periodic systems is presented. The method is based on interpolation of the reciprocal space Ewald sums and evaluation of the resulting convolutions using fast Fourier transforms. Timings and accuracies are presented for three large crystalline ionic systems.
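The N·log(N) scaling above rests on the convolution theorem: a periodic convolution that costs O(N²) directly costs O(N log N) via the FFT. This stdlib-only sketch shows just that machinery on a small 1-D "charge grid"; real PME spreads charges onto a 3-D mesh and convolves with the reciprocal-space Ewald Green's function, neither of which is attempted here.

```python
# Circular convolution via a radix-2 Cooley-Tukey FFT (length must be a
# power of two), checked against the direct O(N^2) periodic sum.
import cmath

def fft(a, inverse=False):
    n = len(a)
    if n == 1:
        return list(a)
    sign = 1 if inverse else -1
    even = fft(a[0::2], inverse)
    odd = fft(a[1::2], inverse)
    out = [0j] * n
    for k in range(n // 2):
        w = cmath.exp(sign * 2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + w
        out[k + n // 2] = even[k] - w
    return out

def circular_convolve(x, y):
    # Convolution theorem: conv(x, y) = IFFT(FFT(x) * FFT(y))
    n = len(x)
    prod = [a * b for a, b in zip(fft(x), fft(y))]
    return [v.real / n for v in fft(prod, inverse=True)]

charges = [1.0, 0.0, -1.0, 0.0, 2.0, 0.0, 0.0, -2.0]
kernel = [0.5, 0.25, 0.0, 0.0, 0.0, 0.0, 0.0, 0.25]
direct = [sum(charges[j] * kernel[(i - j) % 8] for j in range(8))
          for i in range(8)]
fast = circular_convolve(charges, kernel)
assert all(abs(a - b) < 1e-9 for a, b in zip(direct, fast))
```

Interpolating point charges onto the mesh (B-splines in the smooth-PME variant) is what makes the reciprocal-space sum expressible as exactly this kind of mesh convolution.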

24,332 citations

Journal ArticleDOI
TL;DR: In this paper, a numerical algorithm integrating the 3N Cartesian equations of motion of a system of N points subject to holonomic constraints is formulated, and the relations of constraint remain perfectly fulfilled at each step of the trajectory despite the approximate character of numerical integration.
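The constraint iteration described there (the SHAKE scheme) can be sketched for a single bond: after an unconstrained position update, both atoms are displaced along the pre-update bond direction until |r1 - r2| = d holds to tolerance. This is an illustrative 2-D toy assuming equal masses; real SHAKE cycles over all constraints in each iteration and weights corrections by inverse mass.

```python
# SHAKE-style correction for one holonomic bond constraint |r1 - r2| = d.
import math

def shake_bond(p_old, p_new, d, tol=1e-10, max_iter=100):
    (x1, y1), (x2, y2) = p_new      # positions after unconstrained step
    (ox1, oy1), (ox2, oy2) = p_old  # positions before the step
    sx, sy = ox1 - ox2, oy1 - oy2   # constraint direction: old bond vector
    for _ in range(max_iter):
        rx, ry = x1 - x2, y1 - y2
        diff = (rx * rx + ry * ry) - d * d   # constraint violation
        if abs(diff) < tol:
            break
        # Linearized Lagrange-multiplier step, split between both atoms
        g = diff / (4.0 * (rx * sx + ry * sy))
        x1, y1 = x1 - g * sx, y1 - g * sy
        x2, y2 = x2 + g * sx, y2 + g * sy
    return (x1, y1), (x2, y2)

old = ((0.0, 0.0), (1.0, 0.0))   # bond of length d = 1, satisfied
new = ((0.0, 0.1), (1.3, 0.0))   # after an unconstrained update
a, b = shake_bond(old, new, 1.0)
assert abs(math.dist(a, b) - 1.0) < 1e-6
```

Because each correction is only a linearization of the quadratic constraint, the loop must iterate; the abstract's point is that at convergence the constraints hold essentially exactly even though the time integration itself is approximate.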

18,394 citations