SciSpace (formerly Typeset)

Simon Batzner

Researcher at Harvard University

Publications: 17
Citations: 874

Simon Batzner is an academic researcher at Harvard University who has contributed to research in computer science and artificial neural networks. The author has an h-index of 6 and has co-authored 9 publications receiving 180 citations. Previous affiliations of Simon Batzner include the Massachusetts Institute of Technology.

Papers
Posted Content

SE(3)-Equivariant Graph Neural Networks for Data-Efficient and Accurate Interatomic Potentials

TL;DR: The NequIP method achieves state-of-the-art accuracy on a challenging set of diverse molecules and materials while exhibiting remarkable data efficiency, challenging the widely held belief that deep neural networks require massive training sets.
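The data efficiency of such models rests in part on building physical symmetries directly into the architecture. As a minimal illustration (not the NequIP architecture itself, which uses learned equivariant features), the sketch below uses a toy pairwise energy in place of a trained network to show why forces obtained as the negative gradient of a rotation-invariant energy are automatically rotation-equivariant:

```python
import math

def pair_energy(r, a=1.0, r0=1.5):
    # toy Morse-like pair term; a real model would learn this function
    return (1.0 - math.exp(-a * (r - r0))) ** 2

def d_pair_energy(r, a=1.0, r0=1.5):
    # analytic derivative dE/dr of the pair term above
    e = math.exp(-a * (r - r0))
    return 2.0 * (1.0 - e) * a * e

def forces(pos):
    # F_i = -dE/dx_i for a total energy that is a sum over pair distances
    n = len(pos)
    F = [[0.0, 0.0, 0.0] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            d = [pos[i][k] - pos[j][k] for k in range(3)]
            r = math.sqrt(sum(x * x for x in d))
            g = d_pair_energy(r) / r
            for k in range(3):
                F[i][k] -= g * d[k]
                F[j][k] += g * d[k]
    return F

def rotate(R, v):
    return [sum(R[a][b] * v[b] for b in range(3)) for a in range(3)]

# rotation about z by 30 degrees
c, s = math.cos(math.pi / 6), math.sin(math.pi / 6)
R = [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

pos = [[0.0, 0.0, 0.0], [1.2, 0.3, -0.4], [-0.5, 1.1, 0.8]]
F = forces(pos)
F_rot = forces([rotate(R, p) for p in pos])
# equivariance: rotating the inputs rotates the forces, F(Rx) = R F(x)
for i in range(3):
    Ri = rotate(R, F[i])
    assert all(abs(Ri[k] - F_rot[i][k]) < 1e-9 for k in range(3))
```

Because the energy depends only on interatomic distances, which rotations leave unchanged, the gradient transforms covariantly by construction; equivariant architectures extend this guarantee to the internal features of the network as well.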
Journal Article

On-the-fly active learning of interpretable Bayesian force fields for atomistic rare events

TL;DR: In this paper, an adaptive Bayesian inference method for automating the training of interpretable, low-dimensional, and multi-element interatomic force fields using structures drawn on the fly from molecular dynamics simulations is presented.
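The core loop of such on-the-fly training can be sketched as follows. This is a hypothetical skeleton, not the authors' implementation: the `uncertainty` and `expensive_label` functions are toy stand-ins for the Bayesian predictive variance and the ab initio (e.g. DFT) calculation, respectively:

```python
import math
import random

def uncertainty(x, data):
    # toy surrogate: distance to the nearest training point stands in
    # for the Bayesian predictive uncertainty of the actual method
    if not data:
        return float("inf")
    return min(abs(x - xd) for xd, _ in data)

def expensive_label(x):
    # stand-in for a first-principles calculation
    return math.sin(x)

def run_on_the_fly(steps=200, threshold=0.3, seed=0):
    rng = random.Random(seed)
    data = []   # training set, grown only when the model is uncertain
    calls = 0   # number of expensive labeling calls made
    x = 0.0
    for _ in range(steps):
        x += rng.uniform(-0.2, 0.2)  # toy stand-in for an MD step
        if uncertainty(x, data) > threshold:
            data.append((x, expensive_label(x)))
            calls += 1
    return calls, steps

calls, steps = run_on_the_fly()
```

The point of the loop is that the expensive calculation is invoked only for configurations the surrogate model cannot yet predict confidently, so `calls` stays far below the number of simulation steps.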
Journal Article

E(3)-equivariant graph neural networks for data-efficient and accurate interatomic potentials

TL;DR: NequIP is an E(3)-equivariant neural network approach for learning interatomic potentials from ab initio calculations for molecular dynamics simulations; it achieves state-of-the-art accuracy on a challenging and diverse set of molecules and materials while exhibiting remarkable data efficiency.
Journal Article

Learning local equivariant representations for large-scale atomistic dynamics

TL;DR: Allegro represents a many-body potential using iterated tensor products of learned equivariant representations, without atom-centered message passing, and achieves state-of-the-art performance on QM9 and revMD17.
Journal Article

A fast neural network approach for direct covariant forces prediction in complex multi-element extended systems

TL;DR: A staggered NNFF (neural network force field) architecture is presented that exploits both rotation-invariant and rotation-covariant features to directly predict atomic force vectors without using spatial derivatives; it can also directly predict complex covariant vector outputs from local environments in domains beyond computational materials science.
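The construction that makes direct vector prediction possible, combining rotation-invariant weights with rotation-covariant directions, can be sketched in a few lines. This toy model uses an assumed exponential weight function rather than a trained network; each predicted vector is a sum of unit bond vectors scaled by invariant weights, so the output rotates with the input without ever differentiating an energy:

```python
import math

def weight(r):
    # invariant scalar function of distance; a real NNFF would learn this
    return math.exp(-r)

def direct_forces(pos):
    # predict each atomic force vector directly: invariant weights times
    # covariant unit bond vectors -- no spatial derivative of an energy
    n = len(pos)
    F = []
    for i in range(n):
        f = [0.0, 0.0, 0.0]
        for j in range(n):
            if j == i:
                continue
            d = [pos[j][k] - pos[i][k] for k in range(3)]
            r = math.sqrt(sum(x * x for x in d))
            w = weight(r)
            for k in range(3):
                f[k] += w * d[k] / r
        F.append(f)
    return F
```

Since the weights depend only on distances and the unit vectors rotate with the atoms, the predicted vectors satisfy F(Rx) = R F(x) by construction, which is the covariance property the paper's architecture is designed to preserve.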