Thomas Unterthiner

Researcher at Google

Publications: 51
Citations: 31,915

Thomas Unterthiner is an academic researcher at Google. His research focuses on convolutional neural networks and deep learning. He has an h-index of 26 and has co-authored 47 publications receiving 15,696 citations. His previous affiliations include the Johannes Kepler University Linz and the University of Göttingen.

Papers
Posted Content

Rectified Factor Networks

TL;DR: On gene expression data from two pharmaceutical drug discovery studies, RFNs detected small and rare gene modules that revealed highly relevant new biological insights which were so far missed by other unsupervised methods.
Proceedings Article

First Order Generative Adversarial Networks

TL;DR: This paper proposes a novel divergence that approximates the Wasserstein distance while regularizing the critic's first-order information; the divergence fulfills the requirements for unbiased steepest-descent updates.
Posted Content

Fréchet ChemNet Distance: A metric for generative models for molecules in drug discovery

TL;DR: The Fréchet ChemNet Distance (FCD) is a distance measure between two sets of molecules that can be used as an evaluation metric for generative models; it detects whether generated molecules are diverse and share the chemical and biological properties of real molecules.
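The idea behind the metric can be made concrete: like the FID for images, the FCD is the Fréchet distance between two Gaussians fitted to network activations (here, embeddings of molecules from a trained "ChemNet"). A minimal NumPy sketch of that distance, assuming the activations are already given as `(n_samples, n_features)` arrays:

```python
import numpy as np

def frechet_distance(act1, act2):
    """Fréchet distance between Gaussians fitted to two activation sets.

    act1, act2: arrays of shape (n_samples, n_features), e.g. network
    embeddings of real and generated molecules.
    """
    mu1, mu2 = act1.mean(axis=0), act2.mean(axis=0)
    sigma1 = np.cov(act1, rowvar=False)
    sigma2 = np.cov(act2, rowvar=False)

    # Tr((sigma1 @ sigma2)^{1/2}) via the eigenvalues of the product;
    # they are real and non-negative for PSD covariances, so we clip
    # tiny negative/imaginary values caused by floating-point noise.
    eigvals = np.linalg.eigvals(sigma1 @ sigma2)
    tr_sqrt = np.sum(np.sqrt(np.maximum(eigvals.real, 0.0)))

    return float(np.sum((mu1 - mu2) ** 2)
                 + np.trace(sigma1) + np.trace(sigma2)
                 - 2.0 * tr_sqrt)
```

The distance is zero when both sets share the same mean and covariance, and grows as the fitted Gaussians drift apart in either statistic.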
Proceedings Article

Understanding Robustness of Transformers for Image Classification

TL;DR: This paper investigates several measures of robustness of Vision Transformer (ViT) models and compares them to ResNet baselines, showing that when pre-trained with a sufficient amount of data, ViT models are at least as robust as their ResNet counterparts across a broad range of perturbations.
Proceedings Article

MLP-Mixer: An all-MLP Architecture for Vision

TL;DR: MLP-Mixer is an architecture based exclusively on multi-layer perceptrons (MLPs). It contains two types of layers: one with MLPs applied independently to image patches (mixing the per-location features), and one with MLPs applied across patches (mixing spatial information). It achieves competitive scores on image classification benchmarks, with pre-training and inference costs comparable to state-of-the-art models.
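The two layer types described above can be sketched as a single Mixer block in plain NumPy. This is an illustrative sketch under assumed shapes and parameter names, not the authors' implementation (which also includes patch embedding, multiple stacked blocks, and a classification head):

```python
import numpy as np

def gelu(x):
    # tanh approximation of GELU
    return 0.5 * x * (1.0 + np.tanh(np.sqrt(2.0 / np.pi)
                                    * (x + 0.044715 * x ** 3)))

def layernorm(x, eps=1e-6):
    mu = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

def mlp(x, w1, w2):
    # two-layer perceptron with GELU nonlinearity
    return gelu(x @ w1) @ w2

def mixer_block(x, params):
    """One Mixer block. x has shape (patches, channels)."""
    # Token-mixing MLP: applied across patches (mixes spatial information).
    y = x + mlp(layernorm(x).T, params["tok_w1"], params["tok_w2"]).T
    # Channel-mixing MLP: applied independently to each patch
    # (mixes per-location features).
    return y + mlp(layernorm(y), params["ch_w1"], params["ch_w2"])
```

The transpose around the token-mixing MLP is the key trick: the same weight matrices act along the patch dimension instead of the channel dimension, so spatial mixing needs no convolutions or attention.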