José Miguel Hernández-Lobato
Researcher at University of Cambridge
Publications - 189
Citations - 11905
José Miguel Hernández-Lobato is an academic researcher at the University of Cambridge. He has contributed to research on topics including Bayesian probability and Bayesian inference. He has an h-index of 43 and has co-authored 178 publications receiving 8,930 citations. His previous affiliations include Microsoft and the Autonomous University of Madrid.
Papers
Journal ArticleDOI
Automatic Chemical Design Using a Data-Driven Continuous Representation of Molecules
Rafael Gómez-Bombarelli, Jennifer N. Wei, David Duvenaud, José Miguel Hernández-Lobato, Benjamin Sanchez-Lengeling, Dennis Sheberla, Jorge Aguilera-Iparraguirre, Timothy D. Hirzel, Ryan P. Adams, Alán Aspuru-Guzik +10 more
TL;DR: A deep neural network trained on hundreds of thousands of existing chemical structures constructs three coupled functions: an encoder, a decoder, and a predictor. Together these allow new molecules to be generated for efficient exploration and optimization through open-ended spaces of chemical compounds.
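The encoder/decoder/predictor structure described above can be sketched as follows. This is a minimal illustration, not the paper's trained model: the dimensions are hypothetical, and untrained linear maps stand in for the three neural networks.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions for illustration only: a one-hot encoded
# string of length 20 over a 10-symbol alphabet, mapped into a
# 2-dimensional continuous latent space.
seq_len, alphabet, latent_dim = 20, 10, 2
x = np.eye(alphabet)[rng.integers(0, alphabet, size=seq_len)]  # fake molecule

# Untrained linear maps stand in for the three trained networks.
W_enc = rng.normal(size=(seq_len * alphabet, latent_dim))
W_dec = rng.normal(size=(latent_dim, seq_len * alphabet))
w_pred = rng.normal(size=latent_dim)

def encode(x):
    """Encoder: discrete molecule -> continuous latent point."""
    return x.ravel() @ W_enc

def decode(z):
    """Decoder: latent point -> per-position symbol probabilities."""
    logits = (z @ W_dec).reshape(seq_len, alphabet)
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def predict(z):
    """Predictor: latent point -> scalar property estimate."""
    return float(z @ w_pred)

z = encode(x)
# The continuous latent space is what enables optimization: nearby
# points decode back to candidate molecules and can be scored directly.
probs = decode(z + 0.1 * rng.normal(size=latent_dim))
```

The key design point is that all three functions share the same continuous latent space, so gradient-based search over that space simultaneously moves toward better predicted properties and stays decodable.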
Posted Content
Probabilistic Backpropagation for Scalable Learning of Bayesian Neural Networks
TL;DR: Probabilistic Backpropagation (PBP) performs a forward propagation of probabilities through the network, followed by a backward computation of gradients, to estimate the posterior variance over the network weights.
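The "forward propagation of probabilities" can be sketched for a single linear layer. This is a heavily simplified illustration with made-up sizes, not the paper's full algorithm: it only shows how the mean and variance of a pre-activation follow from a factorized Gaussian posterior over the weights.

```python
import numpy as np

rng = np.random.default_rng(0)

# A factorized Gaussian posterior N(m, v) over each weight of one
# linear layer (sizes are made up for illustration).
d_in, d_out = 4, 3
m = rng.normal(size=(d_in, d_out))   # posterior means of the weights
v = np.full((d_in, d_out), 0.1)      # posterior variances of the weights

x = rng.normal(size=d_in)            # a deterministic input

# Forward pass of probabilities: with independent Gaussian weights,
# the pre-activation at each output unit is Gaussian with
#   mean = x @ m   and   variance = (x**2) @ v.
out_mean = x @ m
out_var = (x ** 2) @ v

# PBP then backpropagates gradients of the log marginal likelihood
# through these moments to update (m, v); that step is omitted here.
```

Propagating moments instead of point values is what lets the network report predictive uncertainty without Monte Carlo sampling at every step.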
Journal ArticleDOI
Minerva: enabling low-power, highly-accurate deep neural network accelerators
Brandon Reagen, Paul N. Whatmough, Robert Adolf, Saketh Rama, Hyunkwang Lee, Sae Kyu Lee, José Miguel Hernández-Lobato, Gu-Yeon Wei, David Brooks +8 more
TL;DR: Minerva proposes a co-design approach across the algorithm, architecture, and circuit levels to optimize DNN hardware accelerators, showing that fine-grained, heterogeneous data-type optimization reduces power by 1.5× and that aggressive, inline predication and pruning of small activity values reduces power further.
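Two of the optimizations named above can be illustrated in software. This toy sketch is my own, not Minerva's actual pipeline: it shows uniform fixed-point quantization (reduced-precision data types) and predication of small activity values, whose multiply-accumulates a hardware accelerator could then skip. The bit width, range, and threshold are arbitrary choices for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
acts = rng.normal(size=1000)  # stand-in activation values

def quantize(x, bits, scale=4.0):
    """Uniform fixed-point quantization to `bits` bits on [-scale, scale]."""
    step = 2 * scale / (2 ** bits - 1)
    return np.clip(np.round(x / step) * step, -scale, scale)

q = quantize(acts, bits=8)

# Predication: treat activity values below a threshold as zero, so the
# corresponding multiply-accumulate operations can be skipped in hardware.
pruned = np.where(np.abs(q) < 0.1, 0.0, q)
skippable = float((pruned == 0.0).mean())  # fraction of MACs avoidable
```

The accuracy/power trade-off in the paper comes from tuning exactly these knobs (precision per data type, pruning threshold) against the resulting model error.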
Posted Content
Predictive Entropy Search for Efficient Global Optimization of Black-box Functions
TL;DR: Predictive Entropy Search (PES) selects, at each iteration, the evaluation point that maximizes the expected information gained about the location of the global maximum.
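The information-gain idea behind PES can be sketched on a discretized toy problem. This is a heavily simplified Monte Carlo illustration of the acquisition criterion, not the paper's method (PES uses expectation propagation approximations, not histograms): we draw GP prior samples of the objective, record each sample's maximizer to approximate p(x*), and score each candidate point by the estimated mutual information between its function value and x*.

```python
import numpy as np

rng = np.random.default_rng(0)

# Discretize the search space and draw GP prior samples of f.
grid = np.linspace(0.0, 1.0, 25)

def rbf(a, b, ls=0.2):
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls ** 2)

K = rbf(grid, grid) + 1e-6 * np.eye(len(grid))
samples = np.linalg.cholesky(K) @ rng.normal(size=(len(grid), 2000))

# Each sample has a global maximizer; together they approximate p(x*).
x_star = samples.argmax(axis=0)

def info_gain(i, bins=8):
    """Monte Carlo estimate of I(x*; y at grid point i) via a joint histogram."""
    y = samples[i]
    edges = np.quantile(y, np.linspace(0, 1, bins + 1))
    yb = np.clip(np.searchsorted(edges, y, side="right") - 1, 0, bins - 1)
    joint = np.zeros((len(grid), bins))
    np.add.at(joint, (x_star, yb), 1.0)
    joint /= joint.sum()
    px = joint.sum(axis=1, keepdims=True)
    py = joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    return float(np.sum(joint[nz] * np.log(joint[nz] / (px @ py)[nz])))

# PES would evaluate f next wherever this information gain is largest.
scores = [info_gain(i) for i in range(len(grid))]
best = int(np.argmax(scores))
```

The mutual information is a KL divergence between the joint and the product of marginals, so every score is non-negative; points whose value tells us nothing about x* score near zero.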