
Pieter-Jan Kindermans

Researcher at Google

Publications - 66
Citations - 7845

Pieter-Jan Kindermans is an academic researcher at Google. He has contributed to research on deep learning and artificial neural networks, has an h-index of 31, and has co-authored 62 publications receiving 5458 citations. Previous affiliations of Pieter-Jan Kindermans include Ghent University and the Technical University of Berlin.

Papers
Journal Article

SchNet - A deep learning architecture for molecules and materials.

TL;DR: SchNet is a deep learning architecture specifically designed to model atomistic systems using continuous-filter convolutional layers; the model learns chemically plausible embeddings of atom types across the periodic table.
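
A minimal NumPy sketch may help make the continuous-filter convolution concrete: pairwise distances are expanded in a radial basis, a small filter-generating network maps them to per-pair filters, and each atom aggregates its neighbours' features weighted by those filters. The Gaussian basis, tanh activation, and layer sizes are illustrative assumptions, not the paper's exact design.

    import numpy as np

    def gaussian_rbf(distances, centers, gamma=10.0):
        # Expand pairwise distances (n_atoms, n_atoms) into radial basis features.
        return np.exp(-gamma * (distances[..., None] - centers) ** 2)

    def cfconv(features, distances, w1, w2, centers):
        # features: (n_atoms, n_feat) atom embeddings
        # distances: (n_atoms, n_atoms) pairwise distances
        rbf = gaussian_rbf(distances, centers)            # (n_atoms, n_atoms, n_rbf)
        filters = np.tanh(rbf @ w1) @ w2                  # filter-generating network -> (n_atoms, n_atoms, n_feat)
        # Each atom aggregates neighbour features weighted by the learned filters.
        return np.einsum('jf,ijf->if', features, filters)

    rng = np.random.default_rng(0)
    n_atoms, n_feat, n_rbf = 5, 8, 16
    positions = rng.normal(size=(n_atoms, 3))
    distances = np.linalg.norm(positions[:, None] - positions[None, :], axis=-1)
    features = rng.normal(size=(n_atoms, n_feat))
    w1 = rng.normal(size=(n_rbf, n_feat)) * 0.1
    w2 = rng.normal(size=(n_feat, n_feat)) * 0.1
    centers = np.linspace(0.0, 5.0, n_rbf)
    print(cfconv(features, distances, w1, w2, centers).shape)  # (5, 8)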
Proceedings Article

Understanding and Simplifying One-Shot Architecture Search

TL;DR: Careful experimental analysis shows that promising architectures can be efficiently identified from a complex search space without either hypernetworks or reinforcement learning controllers.
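
The weight-sharing idea behind one-shot search can be sketched in a few lines: a single one-shot model is (assumed already) trained containing all candidate operations, and sub-architectures, represented here as binary masks over those operations, are ranked by validation accuracy using the shared weights alone, with no extra training per candidate. The toy data, operations, and scoring below are made-up stand-ins rather than the paper's setup.

    import itertools
    import numpy as np

    rng = np.random.default_rng(0)
    x_val = rng.normal(size=(64, 4))
    y_val = (x_val.sum(axis=1) > 0).astype(float)

    # Pretend these are outputs of three candidate ops in an already-trained one-shot model.
    shared_op_outputs = [np.tanh(x_val @ rng.normal(size=(4, 1))) for _ in range(3)]

    def score(mask):
        # Evaluate a sub-architecture: keep only the ops selected by the mask.
        kept = [out for out, keep in zip(shared_op_outputs, mask) if keep]
        if not kept:
            return 0.0
        pred = (np.mean(kept, axis=0).ravel() > 0).astype(float)
        return float((pred == y_val).mean())  # validation accuracy as the ranking signal

    # Enumerate (or sample) masks and rank them without any further training.
    candidates = [m for m in itertools.product([0, 1], repeat=3) if any(m)]
    best = max(candidates, key=score)
    print(best, score(best))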
Journal Article

SchNet - a deep learning architecture for molecules and materials

TL;DR: SchNet is a deep learning architecture specifically designed to model atomistic systems using continuous-filter convolutional layers; it accurately predicts a range of properties across chemical space for molecules and materials.
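
For the property-prediction side mentioned above, a hedged sketch of an atom-wise readout: a small network maps each atom's learned representation to a scalar contribution, and the contributions are summed to give an extensive molecular property such as an energy. The layer sizes and the two-layer ReLU readout are illustrative assumptions, not the published architecture.

    import numpy as np

    def atomwise_property(atom_features, w1, b1, w2, b2):
        hidden = np.maximum(atom_features @ w1 + b1, 0.0)  # per-atom hidden layer
        per_atom = hidden @ w2 + b2                        # per-atom scalar contribution
        return per_atom.sum()                              # extensive property = sum over atoms

    rng = np.random.default_rng(1)
    atom_features = rng.normal(size=(5, 8))                # e.g. output of stacked interaction blocks
    w1, b1 = rng.normal(size=(8, 16)) * 0.1, np.zeros(16)
    w2, b2 = rng.normal(size=(16, 1)) * 0.1, np.zeros(1)
    print(atomwise_property(atom_features, w1, b1, w2, b2))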
Book Chapter

The (Un)reliability of saliency methods

TL;DR: This work uses a simple and common pre-processing step, adding a constant shift to the input data, to show that a transformation with no effect on the model can cause numerous saliency methods to produce incorrect attributions.
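
The constant-shift argument can be reproduced on a toy linear model: two models that make identical predictions (the second absorbs the input shift into its bias) nevertheless yield different gradient-times-input attributions. The weights and shift below are arbitrary; the paper's experiments use trained networks rather than this hand-built example.

    import numpy as np

    w = np.array([1.0, -2.0, 0.5])
    b = 0.3
    shift = np.array([2.0, 2.0, 2.0])

    def model_a(x):            # trained on original inputs
        return w @ x + b

    def model_b(x_shifted):    # sees x + shift; the bias compensates, so outputs match model_a
        return w @ x_shifted + (b - w @ shift)

    x = np.array([0.5, 1.0, -1.5])
    assert np.isclose(model_a(x), model_b(x + shift))  # identical predictions

    # The gradient w.r.t. the input is w for both models, so gradient*input differs
    # only because the input representation differs: the attribution is not invariant
    # to a transformation that leaves the model's behaviour unchanged.
    print("model A attribution:", w * x)
    print("model B attribution:", w * (x + shift))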
Posted Content

Don't Decay the Learning Rate, Increase the Batch Size

TL;DR: Increasing the batch size during training instead of decaying the learning rate works for stochastic gradient descent, SGD with momentum, Nesterov momentum, and Adam; it reaches equivalent test accuracies after the same number of training epochs, but with fewer parameter updates, leading to greater parallelism and shorter training times.
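
A hedged sketch of the schedule the abstract describes: rather than dividing the learning rate by a factor at fixed epochs, multiply the batch size by the same factor while holding the learning rate constant. The epoch boundaries, growth factor, and batch-size cap below are illustrative assumptions, not values from the paper.

    def batch_size_schedule(epoch, base_batch=256, factor=5, boundaries=(30, 60, 80), max_batch=65536):
        # Grow the batch size by `factor` at each boundary epoch, in place of a learning rate decay.
        batch = base_batch
        for boundary in boundaries:
            if epoch >= boundary:
                batch = min(batch * factor, max_batch)
        return batch

    for epoch in (0, 30, 60, 80):
        print(epoch, batch_size_schedule(epoch))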