Lukas Schott
Researcher at University of Tübingen
Publications - 16
Citations - 828
Lukas Schott is an academic researcher at the University of Tübingen. His research focuses on robustness (computer science) and the MNIST database. He has an h-index of 11 and has co-authored 15 publications receiving 624 citations. Previous affiliations of Lukas Schott include Bosch.
Papers
Posted Content
Comparative Study of Deep Learning Software Frameworks
TL;DR: A comparative study of five deep learning frameworks (Caffe, Neon, TensorFlow, Theano, and Torch) on three aspects: extensibility, hardware utilization, and speed. It finds that Theano and Torch are the most easily extensible frameworks.
Proceedings Article
Towards the First Adversarially Robust Neural Network Model on MNIST
TL;DR: In this article, a novel robust classification model is proposed that performs analysis by synthesis using learned class-conditional data distributions, yielding state-of-the-art robustness on MNIST against L0, L2, and L-infinity perturbations.
Posted Content
Comparative Study of Caffe, Neon, Theano, and Torch for Deep Learning
TL;DR: A comparative study of four deep learning frameworks (Caffe, Neon, Theano, and Torch) on three aspects: extensibility, hardware utilization, and speed. It finds that Theano and Torch are the most easily extensible frameworks.
Posted Content
A simple way to make neural networks robust against diverse image corruptions
Evgenia Rusak, Lukas Schott, Roland S. Zimmermann, Julian Bitterwolf, Oliver Bringmann, Matthias Bethge, Wieland Brendel
TL;DR: It is demonstrated that simple but properly tuned training with additive Gaussian and speckle noise generalizes surprisingly well to unseen corruptions, easily reaching the previous state of the art on the corruption benchmarks ImageNet-C (with a ResNet50) and MNIST-C.
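The two noise models named in the summary can be sketched as a simple data-augmentation step. This is a minimal illustration, not the paper's tuned training pipeline: the function name and the noise scales are assumptions for demonstration only.

```python
import numpy as np

def augment_with_noise(images, gauss_std=0.1, speckle_std=0.1, rng=None):
    """Illustrative additive Gaussian and speckle noise augmentation.

    `images` is a float array with values in [0, 1]. The noise scales are
    placeholder hyperparameters, not the values tuned in the paper.
    Additive Gaussian noise perturbs each pixel independently; speckle
    noise is multiplicative, scaling the noise by the pixel intensity.
    """
    rng = np.random.default_rng() if rng is None else rng
    gaussian = images + rng.normal(0.0, gauss_std, size=images.shape)
    speckle = images + images * rng.normal(0.0, speckle_std, size=images.shape)
    # Clip back to the valid image range after perturbation.
    return np.clip(gaussian, 0.0, 1.0), np.clip(speckle, 0.0, 1.0)
```

In a training loop, one would apply such a function to each mini-batch so the classifier sees noisy variants of the clean images.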
Posted Content
Towards Nonlinear Disentanglement in Natural Data with Temporal Sparse Coding
David A. Klindt, Lukas Schott, Yash Sharma, Ivan Ustyuzhaninov, Wieland Brendel, Matthias Bethge, Dylan M. Paiton
TL;DR: Provides evidence that objects in segmented natural movies undergo transitions that are typically small in magnitude with occasional large jumps, which is characteristic of a temporally sparse distribution. Presents SlowVAE, a model for unsupervised representation learning that uses a sparse prior on temporally adjacent observations to disentangle generative factors without any assumptions on the number of changing factors.
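The idea of a sparse prior on temporal transitions can be illustrated with a small sketch: a Laplace prior on the change between adjacent latent codes corresponds to an L1 penalty on their difference, which favors updates where only a few latent dimensions change per time step. The function name and the `rate` parameter are illustrative assumptions, not SlowVAE's actual loss or API.

```python
import numpy as np

def sparse_transition_penalty(z_t, z_t1, rate=1.0):
    """Illustrative L1 penalty from a Laplace prior on latent transitions.

    A Laplace prior p(z_{t+1} | z_t) proportional to
    exp(-rate * |z_{t+1} - z_t|) has a negative log-likelihood (up to a
    constant) equal to an L1 penalty on the transition. Minimizing it
    encourages temporally sparse changes: most latent dimensions stay
    fixed, with occasional larger jumps.
    """
    return rate * np.abs(np.asarray(z_t1) - np.asarray(z_t)).sum(axis=-1)
```

Under such a penalty, a transition that changes one latent dimension by a given total amount is cheaper than spreading the same total change across many dimensions only in terms of sparsity pressure; the L1 form is what makes few-dimension changes the preferred solution when combined with a reconstruction objective.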