
Scalable Molecular Dynamics with NAMD.

TL;DR: NAMD is a parallel molecular dynamics code designed for high-performance simulation of large biomolecular systems; it scales to hundreds of processors on high-end parallel platforms and to tens of processors on low-cost commodity clusters, and it also runs on individual desktop and laptop computers.
Abstract: NAMD is a parallel molecular dynamics code designed for high‐performance simulation of large biomolecular systems. NAMD scales to hundreds of processors on high‐end parallel platforms, as well as tens of processors on low‐cost commodity clusters, and also runs on individual desktop and laptop computers. NAMD works with AMBER and CHARMM potential functions, parameters, and file formats. This article, directed to novices as well as experts, first introduces concepts and methods used in the NAMD program, describing the classical molecular dynamics force field, equations of motion, and integration methods along with the efficient electrostatics evaluation algorithms employed and temperature and pressure controls used. Features for steering the simulation across barriers and for calculating both alchemical and conformational free energy differences are presented. The motivations for and a roadmap to the internal design of NAMD, implemented in C++ and based on Charm++ parallel objects, are outlined. The factors affecting the serial and parallel performance of a simulation are discussed. Finally, typical NAMD use is illustrated with representative applications to a small, a medium, and a large biomolecular system, highlighting particular features of NAMD, for example, the Tcl scripting language. The article also provides a list of the key features of NAMD and discusses the benefits of combining NAMD with the molecular graphics/sequence analysis software VMD and the grid computing/collaboratory software BioCoRE. NAMD is distributed free of charge with source code at www.ks.uiuc.edu. © 2005 Wiley Periodicals, Inc. J Comput Chem 26: 1781–1802, 2005
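The equations of motion and integration methods mentioned in the abstract can be illustrated with a toy example. The sketch below is a minimal, self-contained velocity Verlet integrator for a small Lennard-Jones cluster in Python; it is not NAMD code, and the particle count, parameters, and time step are arbitrary placeholders chosen only so the loop runs.

    # Minimal velocity Verlet sketch for a Lennard-Jones cluster (illustrative only;
    # NAMD itself evaluates full biomolecular force fields with PME electrostatics).
    import numpy as np

    def lj_forces(pos, epsilon=1.0, sigma=1.0):
        """Pairwise Lennard-Jones forces and potential energy (no cutoff, no PBC)."""
        n = len(pos)
        forces = np.zeros_like(pos)
        energy = 0.0
        for i in range(n):
            for j in range(i + 1, n):
                rij = pos[i] - pos[j]
                r2 = np.dot(rij, rij)
                inv_r6 = (sigma**2 / r2) ** 3
                energy += 4.0 * epsilon * (inv_r6**2 - inv_r6)
                # -dU/dr projected onto the pair vector gives the force on i (and -f on j)
                f = 24.0 * epsilon * (2.0 * inv_r6**2 - inv_r6) / r2 * rij
                forces[i] += f
                forces[j] -= f
        return forces, energy

    def velocity_verlet(pos, vel, mass, dt, n_steps):
        """Integrate Newton's equations of motion with the velocity Verlet scheme."""
        forces, _ = lj_forces(pos)
        for _ in range(n_steps):
            vel += 0.5 * dt * forces / mass       # half-kick
            pos += dt * vel                       # drift
            forces, _ = lj_forces(pos)            # recompute forces at new positions
            vel += 0.5 * dt * forces / mass       # second half-kick
        return pos, vel

    # Placeholder system: 8 particles on a jittered cubic lattice, unit masses.
    rng = np.random.default_rng(0)
    positions = np.array([[x, y, z] for x in (0, 1.1) for y in (0, 1.1) for z in (0, 1.1)],
                         dtype=float) + 0.01 * rng.standard_normal((8, 3))
    velocities = np.zeros((8, 3))
    positions, velocities = velocity_verlet(positions, velocities, mass=1.0, dt=0.002, n_steps=100)

In a production code such as NAMD the same half-kick/drift/half-kick structure is combined with constraint algorithms, multiple time stepping, and thermostat/barostat controls, none of which are shown here.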
Citations
01 May 1993
TL;DR: Comparing the results to the fastest reported vectorized Cray Y-MP and C90 algorithm shows that the current generation of parallel machines is competitive with conventional vector supercomputers even for small problems.
Abstract: Three parallel algorithms for classical molecular dynamics are presented. The first assigns each processor a fixed subset of atoms; the second assigns each a fixed subset of inter-atomic forces to compute; the third assigns each a fixed spatial region. The algorithms are suitable for molecular dynamics models which can be difficult to parallelize efficiently—those with short-range forces where the neighbors of each atom change rapidly. They can be implemented on any distributed-memory parallel machine which allows for message-passing of data between independently executing processors. The algorithms are tested on a standard Lennard-Jones benchmark problem for system sizes ranging from 500 to 100,000,000 atoms on several parallel supercomputers: the nCUBE 2, Intel iPSC/860 and Paragon, and Cray T3D. Comparing the results to the fastest reported vectorized Cray Y-MP and C90 algorithm shows that the current generation of parallel machines is competitive with conventional vector supercomputers even for small problems. For large problems, the spatial algorithm achieves parallel efficiencies of 90%, and a 1840-node Intel Paragon performs up to 165 times faster than a single Cray C90 processor. Trade-offs between the three algorithms and guidelines for adapting them to more complex molecular dynamics simulations are also discussed.
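As a rough illustration of the third (spatial-decomposition) strategy, the sketch below partitions a periodic box into a grid of cells and assigns each atom to the cell that owns its region of space; each cell owner then only needs its own atoms plus those in neighboring cells. This is a serial Python toy, not the message-passing implementation benchmarked in the paper, and the box size, cell count, and atom count are placeholder values.

    # Toy spatial decomposition: bin atoms into a 3-D grid of cells so that each
    # "processor" (cell owner) holds only the atoms in its region of space.
    import numpy as np

    def assign_to_cells(positions, box_length, cells_per_side):
        """Return a dict mapping cell index (ix, iy, iz) -> list of atom indices."""
        cell_size = box_length / cells_per_side
        owners = {}
        for atom_id, pos in enumerate(positions):
            # Wrap into the periodic box, then find the owning cell along each axis.
            wrapped = pos % box_length
            cell = tuple((wrapped // cell_size).astype(int) % cells_per_side)
            owners.setdefault(cell, []).append(atom_id)
        return owners

    def neighbor_cells(cell, cells_per_side):
        """The 27 cells (including itself) whose atoms can lie within one cell size."""
        ix, iy, iz = cell
        return [((ix + dx) % cells_per_side, (iy + dy) % cells_per_side, (iz + dz) % cells_per_side)
                for dx in (-1, 0, 1) for dy in (-1, 0, 1) for dz in (-1, 0, 1)]

    # Placeholder system: 1000 random atoms in a cubic box split into 4x4x4 cells.
    rng = np.random.default_rng(1)
    positions = rng.uniform(0.0, 20.0, size=(1000, 3))
    cells = assign_to_cells(positions, box_length=20.0, cells_per_side=4)

    # For short-range forces, a cell owner only communicates with its 26 neighbors,
    # which is what keeps the message-passing local as the system grows.
    some_cell = (0, 0, 0)
    halo = [a for c in neighbor_cells(some_cell, 4) for a in cells.get(c, [])]
    print(len(cells.get(some_cell, [])), "owned atoms;", len(halo), "atoms incl. neighbor cells")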

29,323 citations

Journal Article
TL;DR: A range of new simulation algorithms and features developed during the past 4 years are presented, leading up to the GROMACS 4.5 software package, which provides extremely high performance and cost efficiency for high-throughput as well as massively parallel simulations.
Abstract: Motivation: Molecular simulation has historically been a low-throughput technique, but faster computers and increasing amounts of genomic and structural data are changing this by enabling large-scale automated simulation of, for instance, many conformers or mutants of biomolecules with or without a range of ligands. At the same time, advances in performance and scaling now make it possible to model complex biomolecular interaction and function in a manner directly testable by experiment. These applications share a need for fast and efficient software that can be deployed at massive scale on clusters, web servers, distributed computing or cloud resources. Results: Here, we present a range of new simulation algorithms and features developed during the past 4 years, leading up to the GROMACS 4.5 software package. The software now automatically handles wide classes of biomolecules, such as proteins, nucleic acids and lipids, and comes with all commonly used force fields for these molecules built-in. GROMACS supports several implicit solvent models, as well as new free-energy algorithms, and the software now uses multithreading for efficient parallelization even on low-end systems, including Windows-based workstations. Together with hand-tuned assembly kernels and state-of-the-art parallelization, this provides extremely high performance and cost efficiency for high-throughput as well as massively parallel simulations. Availability: GROMACS is open source and free software available from http://www.gromacs.org. Contact: erik.lindahl@scilifelab.se. Supplementary information: Supplementary data are available at Bioinformatics online.
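The abstract mentions free-energy algorithms without detailing them; as a generic illustration of the simplest such estimator, the sketch below computes an alchemical free-energy difference by thermodynamic integration, integrating averaged dU/dλ values over the coupling parameter λ. The λ grid and the per-window averages are synthetic placeholders, not output from GROMACS or any simulation.

    # Generic thermodynamic-integration (TI) sketch: estimate a free-energy
    # difference by integrating <dU/dlambda> over the coupling parameter lambda.
    # The inputs below are synthetic placeholders, not simulation results.
    import numpy as np

    def ti_free_energy(lambdas, dudl_means):
        """Trapezoidal integration of <dU/dlambda> over lambda (returns delta G)."""
        return np.trapz(dudl_means, lambdas)

    # Placeholder lambda windows and pretend per-window averages of dU/dlambda.
    lambdas = np.linspace(0.0, 1.0, 11)
    rng = np.random.default_rng(2)
    dudl_means = -10.0 * (1.0 - lambdas) + rng.normal(0.0, 0.1, size=lambdas.size)

    delta_g = ti_free_energy(lambdas, dudl_means)
    print(f"Estimated free-energy difference: {delta_g:.2f} (energy units)")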

6,029 citations

Journal Article
30 May 2013 - Nature
TL;DR: The complete atomic HIV-1 capsid model provides a platform for further studies of capsid function and for targeted pharmacological intervention, and the cryo-electron-microscopy structures enable modelling by large-scale molecular dynamics simulation, resulting in all-atom models for the hexamer-of-hexamer and pentamer-of-hexamer elements.
Abstract: Retroviral capsid proteins are conserved structurally but assemble into different morphologies. The mature human immunodeficiency virus-1 (HIV-1) capsid is best described by a 'fullerene cone' model, in which hexamers of the capsid protein are linked to form a hexagonal surface lattice that is closed by incorporating 12 capsid-protein pentamers. HIV-1 capsid protein contains an amino-terminal domain (NTD) comprising seven α-helices and a β-hairpin, a carboxy-terminal domain (CTD) comprising four α-helices, and a flexible linker with a 3₁₀-helix connecting the two structural domains. Structures of the capsid-protein assembly units have been determined by X-ray crystallography; however, structural information regarding the assembled capsid and the contacts between the assembly units is incomplete. Here we report the cryo-electron microscopy structure of a tubular HIV-1 capsid-protein assembly at 8 Å resolution and the three-dimensional structure of a native HIV-1 core by cryo-electron tomography. The structure of the tubular assembly shows, at the three-fold interface, a three-helix bundle with critical hydrophobic interactions. Mutagenesis studies confirm that hydrophobic residues in the centre of the three-helix bundle are crucial for capsid assembly and stability, and for viral infectivity. The cryo-electron-microscopy structures enable modelling by large-scale molecular dynamics simulation, resulting in all-atom models for the hexamer-of-hexamer and pentamer-of-hexamer elements as well as for the entire capsid. Incorporation of pentamers results in closer trimer contacts and induces acute surface curvature. The complete atomic HIV-1 capsid model provides a platform for further studies of capsid function and for targeted pharmacological intervention.

734 citations

Journal Article
TL;DR: OpenMM is a software toolkit for performing molecular simulations on a range of high performance computing architectures that was designed to be extensible, so new hardware architectures can be accommodated and new functionality can be easily added.
Abstract: OpenMM is a software toolkit for performing molecular simulations on a range of high performance computing architectures. It is based on a layered architecture: the lower layers function as a reusable library that can be invoked by any application, while the upper layers form a complete environment for running molecular simulations. The library API hides all hardware-specific dependencies and optimizations from the users and developers of simulation programs: they can be run without modification on any hardware on which the API has been implemented. The current implementations of OpenMM include support for graphics processing units using the OpenCL and CUDA frameworks. In addition, OpenMM was designed to be extensible, so new hardware architectures can be accommodated and new functionality (e.g., energy terms and integrators) can be easily added.
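The layered design described above is most visible from the application layer. The sketch below is a minimal run script using the publicly documented OpenMM Python API; 'input.pdb' is a placeholder file name, and older OpenMM releases expose the same application layer under the simtk.openmm import namespace, so import paths may need adjusting for a given version.

    # Minimal OpenMM run script using the Python application layer. The System is
    # hardware-agnostic; OpenMM selects a Platform (CUDA, OpenCL, or CPU) when the
    # Simulation is created unless one is requested explicitly.
    from openmm.app import PDBFile, ForceField, Simulation, PME, HBonds, StateDataReporter
    from openmm import LangevinMiddleIntegrator
    from openmm.unit import kelvin, picosecond, picoseconds, nanometer
    import sys

    pdb = PDBFile('input.pdb')                    # placeholder structure file
    forcefield = ForceField('amber14-all.xml', 'amber14/tip3pfb.xml')

    system = forcefield.createSystem(pdb.topology,
                                     nonbondedMethod=PME,
                                     nonbondedCutoff=1.0 * nanometer,
                                     constraints=HBonds)
    integrator = LangevinMiddleIntegrator(300 * kelvin, 1.0 / picosecond, 0.002 * picoseconds)

    simulation = Simulation(pdb.topology, system, integrator)
    simulation.context.setPositions(pdb.positions)
    simulation.minimizeEnergy()
    simulation.reporters.append(StateDataReporter(sys.stdout, 1000, step=True,
                                                  potentialEnergy=True, temperature=True))
    simulation.step(10000)                        # run 10,000 MD steps

Because the hardware-specific optimizations live below the library API, the same script runs unmodified on any platform for which OpenMM has an implementation, which is the portability point the abstract makes.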

599 citations