Author

Andreas W. Götz

Bio: Andreas W. Götz is an academic researcher from the University of California, San Diego. The author has contributed to research in the topics of density functional theory and QM/MM. The author has an h-index of 26, has co-authored 74 publications, and has received 6,068 citations. Previous affiliations of Andreas W. Götz include the San Diego Supercomputer Center and the University of Erlangen-Nuremberg.


Papers
Journal ArticleDOI
TL;DR: An implementation of explicit solvent all-atom classical molecular dynamics (MD) within the AMBER program package that runs entirely on CUDA-enabled GPUs, providing results that are statistically indistinguishable from the traditional CPU version of the software and performance that exceeds that achievable by the CPU version running on conventional CPU-based clusters and supercomputers.
Abstract: We present an implementation of explicit solvent all atom classical molecular dynamics (MD) within the AMBER program package that runs entirely on CUDA-enabled GPUs. First released publicly in April 2010 as part of version 11 of the AMBER MD package and further improved and optimized over the last two years, this implementation supports the three most widely used statistical mechanical ensembles (NVE, NVT, and NPT), uses particle mesh Ewald (PME) for the long-range electrostatics, and runs entirely on CUDA-enabled NVIDIA graphics processing units (GPUs), providing results that are statistically indistinguishable from the traditional CPU version of the software and with performance that exceeds that achievable by the CPU version of AMBER software running on all conventional CPU-based clusters and supercomputers. We briefly discuss three different precision models developed specifically for this work (SPDP, SPFP, and DPDP) and highlight the technical details of the approach as it extends beyond previously reported work [Gotz et al., J. Chem. Theory Comput. 2012, DOI: 10.1021/ct200909j; Le Grand et al., Comp. Phys. Comm. 2013, DOI: 10.1016/j.cpc.2012.09.022]. We highlight the substantial improvements in performance that are seen over traditional CPU-only machines and provide validation of our implementation and precision models. We also provide evidence supporting our decision to deprecate the previously described fully single precision (SPSP) model from the latest release of the AMBER software package.

2,418 citations
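
The abstract above mentions particle mesh Ewald (PME) for the long-range electrostatics. The following is a minimal sketch of the Ewald splitting idea that PME builds on, not AMBER's implementation; the damping parameter beta and the function name are illustrative assumptions. The 1/r Coulomb interaction is split into a short-range erfc-damped part, summed directly within a cutoff, and a smooth erf part, which PME evaluates in reciprocal space via FFTs on a charge mesh.

```python
# Sketch of the Ewald splitting behind PME (illustrative, not AMBER code).
import numpy as np
from scipy.special import erf, erfc

def coulomb_split(q_i, q_j, r, beta=0.35):
    """Split q_i*q_j/r into short- and long-range parts that sum exactly."""
    short = q_i * q_j * erfc(beta * r) / r   # decays fast; direct pair sum
    smooth = q_i * q_j * erf(beta * r) / r   # slowly varying; mesh/FFT part
    return short, smooth

s, l = coulomb_split(1.0, -1.0, 5.0)
assert np.isclose(s + l, -1.0 / 5.0)         # erf(x) + erfc(x) = 1
```

Because erf(x) + erfc(x) = 1, the two pieces sum exactly to the full Coulomb term; the payoff is that only the rapidly decaying piece needs an explicit pair sum.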

Journal ArticleDOI
TL;DR: An implementation of generalized Born implicit solvent all-atom classical molecular dynamics within the AMBER program package that runs entirely on CUDA-enabled NVIDIA graphics processing units (GPUs) and shows performance that is on par with, and in some cases exceeds, that of traditional supercomputers.
Abstract: We present an implementation of generalized Born implicit solvent all-atom classical molecular dynamics (MD) within the AMBER program package that runs entirely on CUDA-enabled NVIDIA graphics processing units (GPUs). We discuss the algorithms that are used to exploit the processing power of the GPUs and show the performance that can be achieved in comparison to simulations on conventional CPU clusters. The implementation supports three different precision models in which the contributions to the forces are calculated in single precision floating point arithmetic but accumulated in double precision (SPDP), or everything is computed in single precision (SPSP) or double precision (DPDP). In addition to performance, we have focused on understanding the implications of the different precision models on the outcome of implicit solvent MD simulations. We show results for a range of tests including the accuracy of single point force evaluations and energy conservation as well as structural properties pertaining to protein dynamics. The numerical noise due to rounding errors within the SPSP precision model is sufficiently large to lead to an accumulation of errors which can result in unphysical trajectories for long time scale simulations. We recommend the use of the mixed-precision SPDP model since the numerical results obtained are comparable with those of the full double precision DPDP model and the reference double precision CPU implementation but at significantly reduced computational cost. Our implementation provides performance for GB simulations on a single desktop that is on par with, and in some cases exceeds, that of traditional supercomputers.

1,645 citations
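
The SPSP-versus-SPDP distinction discussed above can be illustrated numerically. In this sketch (magnitudes are illustrative, not data from the paper), the same small float32 force contributions are accumulated once in a float32 accumulator and once in a float64 accumulator, and both totals are compared against a float64 reference sum:

```python
# Illustrative sketch of single- vs double-precision force accumulation.
import numpy as np

rng = np.random.default_rng(0)
contribs = rng.normal(scale=1e-4, size=200_000).astype(np.float32)

acc_sp = np.float32(0.0)    # SPSP-like: accumulate in single precision
acc_dp = np.float64(0.0)    # SPDP-like: accumulate in double precision
for c in contribs:
    acc_sp += c             # rounding error accumulates with every add
    acc_dp += np.float64(c)

reference = contribs.astype(np.float64).sum()
print("float32 accumulator error:", abs(float(acc_sp) - reference))
print("float64 accumulator error:", abs(float(acc_dp) - reference))
```

The float32 accumulator drifts measurably from the reference while the float64 accumulator stays close, mirroring the paper's recommendation of SPDP over SPSP.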

Journal ArticleDOI
TL;DR: This precision model (SPFP) replaces the double precision arithmetic of a previously introduced mixed single/double precision model with fixed point integer arithmetic for the accumulation of force components, which significantly boosts performance on modern GPU hardware without sacrificing numerical accuracy.

851 citations
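
Below is a minimal NumPy sketch of the fixed-point accumulation idea, not AMBER's CUDA code; the scale factor is an assumed illustrative choice. Each single-precision contribution is scaled and rounded to a 64-bit integer, accumulated with exact integer addition (which is order-independent, one reason atomic integer adds work well on GPUs), and converted back to floating point once at the end.

```python
# Illustrative sketch of fixed-point (SPFP-style) force accumulation.
import numpy as np

SCALE = float(1 << 40)  # assumed fixed-point scale, for illustration only

def to_fixed(x):
    """Round a float contribution to a 64-bit fixed-point integer."""
    return np.int64(round(float(x) * SCALE))

rng = np.random.default_rng(1)
contribs = rng.normal(scale=1e-4, size=100_000).astype(np.float32)

acc = np.int64(0)
for c in contribs:
    acc += to_fixed(c)      # exact: no rounding error during accumulation

total = float(acc) / SCALE  # single conversion back to floating point
print(total, contribs.astype(np.float64).sum())
```

The only rounding happens once per contribution at the scaling step; the accumulation itself is exact and independent of summation order.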

Journal ArticleDOI
Edoardo Aprà1, Eric J. Bylaska1, W. A. de Jong2, Niranjan Govind1, Karol Kowalski1, T. P. Straatsma3, Marat Valiev1, H. J. J. van Dam4, Yuri Alexeev5, J. Anchell6, V. Anisimov5, Fredy W. Aquino, Raymond Atta-Fynn7, Jochen Autschbach8, Nicholas P. Bauman1, Jeffrey C. Becca9, David E. Bernholdt10, K. Bhaskaran-Nair11, Stuart Bogatko12, Piotr Borowski13, Jeffery S. Boschen14, Jiří Brabec15, Adam Bruner16, Emilie Cauet17, Y. Chen18, Gennady N. Chuev19, Christopher J. Cramer20, Jeff Daily1, M. J. O. Deegan, Thom H. Dunning21, Michel Dupuis8, Kenneth G. Dyall, George I. Fann10, Sean A. Fischer22, Alexandr Fonari23, Herbert A. Früchtl24, Laura Gagliardi20, Jorge Garza25, Nitin A. Gawande1, Soumen Ghosh20, Kurt R. Glaesemann1, Andreas W. Götz26, Jeff R. Hammond6, Volkhard Helms27, Eric D. Hermes28, Kimihiko Hirao, So Hirata29, Mathias Jacquelin2, Lasse Jensen9, Benny G. Johnson, Hannes Jónsson30, Ricky A. Kendall10, Michael Klemm6, Rika Kobayashi31, V. Konkov32, Sriram Krishnamoorthy1, M. Krishnan18, Zijing Lin33, Roberto D. Lins34, Rik J. Littlefield, Andrew J. Logsdail35, Kenneth Lopata36, Wan Yong Ma37, Aleksandr V. Marenich20, J. Martin del Campo38, Daniel Mejía-Rodríguez39, Justin E. Moore6, Jonathan M. Mullin, Takahito Nakajima, Daniel R. Nascimento1, Jeffrey A. Nichols10, P. J. Nichols40, J. Nieplocha1, Alberto Otero-de-la-Roza41, Bruce J. Palmer1, Ajay Panyala1, T. Pirojsirikul42, Bo Peng1, Roberto Peverati32, Jiri Pittner15, L. Pollack, Ryan M. Richard43, P. Sadayappan44, George C. Schatz45, William A. Shelton36, Daniel W. Silverstein46, D. M. A. Smith6, Thereza A. Soares47, Duo Song1, Marcel Swart, H. L. Taylor48, G. S. Thomas1, Vinod Tipparaju49, Donald G. Truhlar20, Kiril Tsemekhman, T. Van Voorhis50, Álvaro Vázquez-Mayagoitia5, Prakash Verma, Oreste Villa51, Abhinav Vishnu1, Konstantinos D. Vogiatzis52, Dunyou Wang53, John H. Weare26, Mark J. Williamson54, Theresa L. Windus14, Krzysztof Wolinski13, A. T. Wong, Qin Wu4, Chan-Shan Yang2, Q. Yu55, Martin Zacharias56, Zhiyong Zhang57, Yan Zhao58, Robert W. Harrison59 
Affiliations (indices match the author superscripts above): Pacific Northwest National Laboratory1, Lawrence Berkeley National Laboratory2, National Center for Computational Sciences3, Brookhaven National Laboratory4, Argonne National Laboratory5, Intel6, University of Texas at Arlington7, State University of New York System8, Pennsylvania State University9, Oak Ridge National Laboratory10, Washington University in St. Louis11, Wellesley College12, Maria Curie-Skłodowska University13, Iowa State University14, Academy of Sciences of the Czech Republic15, University of Tennessee at Martin16, Université libre de Bruxelles17, Facebook18, Russian Academy of Sciences19, University of Minnesota20, University of Washington21, United States Naval Research Laboratory22, Georgia Institute of Technology23, University of St Andrews24, Universidad Autónoma Metropolitana25, University of California, San Diego26, Saarland University27, Sandia National Laboratories28, University of Illinois at Urbana–Champaign29, University of Iceland30, Australian National University31, Florida Institute of Technology32, University of Science and Technology of China33, Oswaldo Cruz Foundation34, Cardiff University35, Louisiana State University36, Chinese Academy of Sciences37, National Autonomous University of Mexico38, University of Florida39, Los Alamos National Laboratory40, University of Oviedo41, Prince of Songkla University42, Ames Laboratory43, University of Utah44, Northwestern University45, Universal Display Corporation46, Federal University of Pernambuco47, CD-adapco48, Cray49, Massachusetts Institute of Technology50, Nvidia51, University of Tennessee52, Shandong Normal University53, University of Cambridge54, Advanced Micro Devices55, Technische Universität München56, Stanford University57, Wuhan University of Technology58, Stony Brook University59
TL;DR: The NWChem computational chemistry suite is reviewed, including its history, design principles, parallel tools, current capabilities, outreach, and outlook.
Abstract: Specialized computational chemistry packages have permanently reshaped the landscape of chemical and materials science by providing tools to support and guide experimental efforts and for the prediction of atomistic and electronic properties. In this regard, electronic structure packages have played a special role by using first-principle-driven methodologies to model complex chemical and materials processes. Over the past few decades, the rapid development of computing technologies and the tremendous increase in computational power have offered a unique chance to study complex transformations using sophisticated and predictive many-body techniques that describe correlated behavior of electrons in molecular and condensed phase systems at different levels of theory. In enabling these simulations, novel parallel algorithms have been able to take advantage of computational resources to address the polynomial scaling of electronic structure methods. In this paper, we briefly review the NWChem computational chemistry suite, including its history, design principles, parallel tools, current capabilities, outreach, and outlook.

342 citations

Journal ArticleDOI
Edoardo Aprà, Eric J. Bylaska, W. A. de Jong, Niranjan Govind, Karol Kowalski, T. P. Straatsma, Marat Valiev, H. J. J. van Dam, Yuri Alexeev, James L. Anchell, Victor M. Anisimov, Fredy W. Aquino, Raymond Atta-Fynn, Jochen Autschbach, Nicholas P. Bauman, Jeffrey C. Becca, David E. Bernholdt, Kiran Bhaskaran-Nair, Stuart Bogatko, Piotr Borowski, Jeffrey Scott Boschen, Jiří Brabec, Adam Bruner, Emilie Cauet, Y. Chen, Gennady N. Chuev, Christopher J. Cramer, Jeff Daily, M. J. O. Deegan, Thomas Dunning, Michel Dupuis, Kenneth G. Dyall, George I. Fann, Sean A. Fischer, Alexandr Fonari, H. Früchtl, Laura Gagliardi, Jorge Garza, Nitin A. Gawande, Sayan Ghosh, Kurt R. Glaesemann, Andreas W. Götz, Jeff R. Hammond, Volkhard Helms, Eric D. Hermes, Kimihiko Hirao, So Hirata, Mathias Jacquelin, Lasse Jensen, Benny G. Johnson, Hannes Jónsson, Ricky A. Kendall, Michael Klemm, Rika Kobayashi, V. Konkov, Sriram Krishnamoorthy, Manojkumar Krishnan, Zijing Lin, Roberto D. Lins, Rik J. Littlefield, Andrew J. Logsdail, Kenneth Lopata, Wan Yong Ma, Aleksandr V. Marenich, J. Martin del Campo, Daniel Mejía-Rodríguez, Justin E. Moore, Jonathan M. Mullin, Takahito Nakajima, Daniel R. Nascimento, Jeffrey A. Nichols, Patrick Nichols, J. Nieplocha, A. Otero de la Roza, Bruce J. Palmer, Ajay Panyala, T. Pirojsirikul, Bo Peng, Roberto Peverati, Jiri Pittner, L. Pollack, Ryan M. Richard, P. Sadayappan, George C. Schatz, William A. Shelton, Daniel W. Silverstein, Dayle M. A. Smith, Thereza A. Soares, Duo Song, Marcel Swart, H. L. Taylor, G. S. Thomas, Vinod Tipparaju, Donald G. Truhlar, Kiril Tsemekhman, T. Van Voorhis, Álvaro Vázquez-Mayagoitia, Prakash Verma, Oreste Villa, Abhinav Vishnu, Konstantinos D. Vogiatzis, Dunyou Wang, John H. Weare, Mark J. Williamson, T. L. Windus, Krzysztof Wolinski, A. T. Wong, Qin Wu, Chan-Shan Yang, Q. Yu, Martin Zacharias, Zhiyong Zhang, Yan Zhao, Robert W. Harrison
TL;DR: The NWChem computational chemistry suite, as discussed by the authors, provides tools to support and guide experimental efforts and to predict atomistic and electronic properties, using first-principle-driven methodologies to model complex chemical and materials processes.
Abstract: Specialized computational chemistry packages have permanently reshaped the landscape of chemical and materials science by providing tools to support and guide experimental efforts and for the prediction of atomistic and electronic properties. In this regard, electronic structure packages have played a special role by using first-principle-driven methodologies to model complex chemical and materials processes. Over the last few decades, the rapid development of computing technologies and the tremendous increase in computational power have offered a unique chance to study complex transformations using sophisticated and predictive many-body techniques that describe correlated behavior of electrons in molecular and condensed phase systems at different levels of theory. In enabling these simulations, novel parallel algorithms have been able to take advantage of computational resources to address the polynomial scaling of electronic structure methods. In this paper, we briefly review the NWChem computational chemistry suite, including its history, design principles, parallel tools, current capabilities, outreach, and outlook.

314 citations


Cited by
01 May 1993
TL;DR: Comparing the results to the fastest reported vectorized Cray Y-MP and C90 algorithm shows that the current generation of parallel machines is competitive with conventional vector supercomputers even for small problems.
Abstract: Three parallel algorithms for classical molecular dynamics are presented. The first assigns each processor a fixed subset of atoms; the second assigns each a fixed subset of inter-atomic forces to compute; the third assigns each a fixed spatial region. The algorithms are suitable for molecular dynamics models that can be difficult to parallelize efficiently: those with short-range forces where the neighbors of each atom change rapidly. They can be implemented on any distributed-memory parallel machine which allows for message-passing of data between independently executing processors. The algorithms are tested on a standard Lennard-Jones benchmark problem for system sizes ranging from 500 to 100,000,000 atoms on several parallel supercomputers: the nCUBE 2, Intel iPSC/860 and Paragon, and Cray T3D. Comparing the results to the fastest reported vectorized Cray Y-MP and C90 algorithm shows that the current generation of parallel machines is competitive with conventional vector supercomputers even for small problems. For large problems, the spatial algorithm achieves parallel efficiencies of 90% and a 1840-node Intel Paragon performs up to 165 times faster than a single Cray C90 processor. Trade-offs between the three algorithms and guidelines for adapting them to more complex molecular dynamics simulations are also discussed.

29,323 citations
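
Of the three strategies described above, the spatial decomposition assigns each processor a region of space. The serial Python sketch below (function and variable names are illustrative, and it omits the message passing of the parallel version) shows the cell-list idea underlying it: cells at least as wide as the interaction cutoff mean each atom only needs to examine its own cell and the 26 neighboring cells.

```python
# Serial sketch of a cell list for short-range pair search (illustrative).
import numpy as np
from collections import defaultdict
from itertools import product

def candidate_pairs(pos, box, cutoff):
    """Return candidate interacting pairs in a cubic periodic box."""
    ncell = max(1, int(box // cutoff))   # cells per dimension, >= cutoff wide
    size = box / ncell
    cells = defaultdict(list)
    for i, p in enumerate(pos):
        cells[tuple(np.floor(p / size).astype(int) % ncell)].append(i)
    pairs = set()
    for c, members in cells.items():
        for d in product((-1, 0, 1), repeat=3):        # own + 26 neighbors
            nb = tuple((c[k] + d[k]) % ncell for k in range(3))
            for i in members:
                for j in cells.get(nb, ()):
                    if i < j:
                        pairs.add((i, j))
    return pairs

pos = np.random.default_rng(2).uniform(0.0, 10.0, size=(200, 3))
print("candidate pairs:", len(candidate_pairs(pos, box=10.0, cutoff=2.5)))
```

In the parallel spatial algorithm each processor owns one such region and only exchanges atoms near the region boundaries with its neighbors, which is what keeps communication local as the system grows.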

Journal ArticleDOI
TL;DR: The implementation of Open Babel is detailed, key advances in the 2.3 release are described, and a variety of uses are outlined both in terms of software products and scientific research, including applications far beyond simple format interconversion.
Abstract: A frequent problem in computational modeling is the interconversion of chemical structures between different formats. While standard interchange formats exist (for example, Chemical Markup Language) and de facto standards have arisen (for example, SMILES format), the need to interconvert formats is a continuing problem due to the multitude of different application areas for chemistry data, differences in the data stored by different formats (0D versus 3D, for example), and competition between software along with a lack of vendor-neutral formats. We discuss, for the first time, Open Babel, an open-source chemical toolbox that speaks the many languages of chemical data. Open Babel version 2.3 interconverts over 110 formats. The need to represent such a wide variety of chemical and molecular data requires a library that implements a wide range of cheminformatics algorithms, from partial charge assignment and aromaticity detection, to bond order perception and canonicalization. We detail the implementation of Open Babel, describe key advances in the 2.3 release, and outline a variety of uses both in terms of software products and scientific research, including applications far beyond simple format interconversion. Open Babel presents a solution to the proliferation of multiple chemical file formats. In addition, it provides a variety of useful utilities from conformer searching and 2D depiction, to filtering, batch conversion, and substructure and similarity searching. For developers, it can be used as a programming library to handle chemical data in areas such as organic chemistry, drug design, materials science, and computational chemistry. It is freely available under an open-source license from http://openbabel.org.

6,040 citations
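
A short usage sketch of the format interconversion the paper describes, via Open Babel's Python bindings. The `from openbabel import pybel` module layout is that of Open Babel 3.x; the 2.3 release discussed in the paper used a top-level `import pybel`.

```python
# Convert a SMILES string to an MDL molfile block with Open Babel's pybel.
from openbabel import pybel

mol = pybel.readstring("smi", "c1ccccc1O")  # read phenol from SMILES
mol.make3D()                                # generate 3D coordinates
print(mol.write("mol"))                     # emit an MDL molfile block
```

The same `readstring`/`write` pair handles any of the input and output formats the library registers, which is what makes it usable as a general interconversion engine.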

Journal ArticleDOI
TL;DR: An implementation of explicit solvent all-atom classical molecular dynamics (MD) within the AMBER program package that runs entirely on CUDA-enabled GPUs, providing results that are statistically indistinguishable from the traditional CPU version of the software and performance that exceeds that achievable by the CPU version running on conventional CPU-based clusters and supercomputers.
Abstract: We present an implementation of explicit solvent all atom classical molecular dynamics (MD) within the AMBER program package that runs entirely on CUDA-enabled GPUs. First released publicly in April 2010 as part of version 11 of the AMBER MD package and further improved and optimized over the last two years, this implementation supports the three most widely used statistical mechanical ensembles (NVE, NVT, and NPT), uses particle mesh Ewald (PME) for the long-range electrostatics, and runs entirely on CUDA-enabled NVIDIA graphics processing units (GPUs), providing results that are statistically indistinguishable from the traditional CPU version of the software and with performance that exceeds that achievable by the CPU version of AMBER software running on all conventional CPU-based clusters and supercomputers. We briefly discuss three different precision models developed specifically for this work (SPDP, SPFP, and DPDP) and highlight the technical details of the approach as it extends beyond previously reported work [Gotz et al., J. Chem. Theory Comput. 2012, DOI: 10.1021/ct200909j; Le Grand et al., Comp. Phys. Comm. 2013, DOI: 10.1016/j.cpc.2012.09.022]. We highlight the substantial improvements in performance that are seen over traditional CPU-only machines and provide validation of our implementation and precision models. We also provide evidence supporting our decision to deprecate the previously described fully single precision (SPSP) model from the latest release of the AMBER software package.

2,418 citations

Journal ArticleDOI
TL;DR: The most recent developments of the Amber and AmberTools MD software packages (referred to here simply as the Amber package) since version 9 was released in April 2006 are outlined.
Abstract: Molecular dynamics (MD) allows the study of biological and chemical systems at the atomistic level on timescales from femtoseconds to milliseconds. It complements experiment while also offering a way to follow processes difficult to discern with experimental techniques. Numerous software packages exist for conducting MD simulations, one of the most widely used of which is Amber. Here, we outline the most recent developments, since version 9 was released in April 2006, of the Amber and AmberTools MD software packages, referred to here as simply the Amber package. The latest release represents six years of continued development, since version 9, by multiple research groups and the culmination of over 33 years of work beginning with the first version in 1979. The latest release of the Amber package, version 12 released in April 2012, includes a substantial number of important developments in both the scientific and computer science arenas. We present here a condensed vision of what Amber currently supports and where things are likely to head over the coming years. Figure 1 shows the performance in ns/day of the Amber package version 12 on a single-core AMD FX-8120 8-Core 3.6 GHz CPU, the Cray XT5 system, and a single GTX680 GPU. © 2012 John Wiley & Sons, Ltd.

1,734 citations

01 Feb 1995
TL;DR: In this paper, the unpolarized absorption and circular dichroism spectra of the fundamental vibrational transitions of the chiral molecule, 4-methyl-2-oxetanone, are calculated ab initio using DFT, MP2, and SCF methodologies and a 5S4P2D/3S2P (TZ2P) basis set.
Abstract: The unpolarized absorption and circular dichroism spectra of the fundamental vibrational transitions of the chiral molecule, 4-methyl-2-oxetanone, are calculated ab initio. Harmonic force fields are obtained using Density Functional Theory (DFT), MP2, and SCF methodologies and a 5S4P2D/3S2P (TZ2P) basis set. DFT calculations use the Local Spin Density Approximation (LSDA), BLYP, and Becke3LYP (B3LYP) density functionals. Mid-IR spectra predicted using LSDA, BLYP, and B3LYP force fields are of significantly different quality, the B3LYP force field yielding spectra in clearly superior, and overall excellent, agreement with experiment. The MP2 force field yields spectra in slightly worse agreement with experiment than the B3LYP force field. The SCF force field yields spectra in poor agreement with experiment. The basis set dependence of B3LYP force fields is also explored: the 6-31G* and TZ2P basis sets give very similar results while the 3-21G basis set yields spectra in substantially worse agreement with experiment.

1,652 citations
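
As a hedged, modern-day sketch of the kind of calculation this paper describes (using PySCF, which is not the software of the original 1995 study, and water in place of 4-methyl-2-oxetanone to keep the example fast), one can compute a B3LYP/6-31G* harmonic force field and the resulting vibrational frequencies:

```python
# B3LYP/6-31G* harmonic frequencies with PySCF (illustrative sketch; the
# geometry below is not optimized, so the frequencies are only indicative).
from pyscf import gto, dft
from pyscf.hessian import thermo

mol = gto.M(atom="O 0 0 0; H 0 0.757 0.587; H 0 -0.757 0.587",
            basis="6-31g*")
mf = dft.RKS(mol)
mf.xc = "b3lyp"
mf.kernel()                       # converge the SCF energy

hess = mf.Hessian().kernel()      # analytic nuclear Hessian (force field)
result = thermo.harmonic_analysis(mol, hess)
print(result["freq_wavenumber"])  # harmonic frequencies in cm^-1
```

Diagonalizing the mass-weighted Hessian is exactly the "harmonic force field" step the abstract refers to; the paper's comparison of LSDA, BLYP, B3LYP, MP2, and SCF amounts to repeating this with different electronic structure methods.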