Matthias Minderer
Researcher at Google
Publications - 26
Citations - 14338
Matthias Minderer is an academic researcher at Google. He has contributed to research in topics including computer science and data pre-processing, has an h-index of 12, and has co-authored 17 publications receiving 2,143 citations. His previous affiliations include the University of Zurich and Cold Spring Harbor Laboratory.
Papers
Posted Content
An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale
Alexey Dosovitskiy,Lucas Beyer,Alexander Kolesnikov,Dirk Weissenborn,Xiaohua Zhai,Thomas Unterthiner,Mostafa Dehghani,Matthias Minderer,Georg Heigold,Sylvain Gelly,Jakob Uszkoreit,Neil Houlsby +11 more
TL;DR: Vision Transformer (ViT) attains excellent results compared to state-of-the-art convolutional networks while requiring substantially fewer computational resources to train.
Proceedings Article
An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale
Alexey Dosovitskiy,Lucas Beyer,Alexander Kolesnikov,Dirk Weissenborn,Xiaohua Zhai,Thomas Unterthiner,Mostafa Dehghani,Matthias Minderer,Georg Heigold,Sylvain Gelly,Jakob Uszkoreit,Neil Houlsby +11 more
TL;DR: The Vision Transformer (ViT) applies a pure transformer directly to sequences of image patches and performs very well on image classification, achieving state-of-the-art results on ImageNet, CIFAR-100, VTAB, and other benchmarks.
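The two entries above describe feeding an image to a transformer as a sequence of flattened 16x16 patches. A minimal sketch of that patch-extraction step (illustrative only, not the authors' code; `image_to_patches` is a hypothetical helper name):

```python
# Illustrative sketch: turning an image into the sequence of flattened
# 16x16 patch "tokens" that a Vision Transformer consumes.
import numpy as np

def image_to_patches(image, patch_size=16):
    """Split an (H, W, C) image into flattened non-overlapping patches."""
    h, w, c = image.shape
    assert h % patch_size == 0 and w % patch_size == 0
    # Carve the image into a grid of patches, then flatten each patch.
    patches = image.reshape(h // patch_size, patch_size,
                            w // patch_size, patch_size, c)
    patches = patches.transpose(0, 2, 1, 3, 4)  # (rows, cols, ph, pw, c)
    return patches.reshape(-1, patch_size * patch_size * c)

# A 224x224 RGB image becomes 196 tokens of dimension 768 -- hence
# "an image is worth 16x16 words".
tokens = image_to_patches(np.zeros((224, 224, 3)))
print(tokens.shape)  # (196, 768)
```

In the full model, each flattened patch is then linearly projected to the transformer's embedding dimension and combined with a position embedding before entering the encoder.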
Journal ArticleDOI
Dynamic Reorganization of Neuronal Activity Patterns in Parietal Cortex
TL;DR: Recordings from mice learning additional associations suggest that dynamically reorganizing neuronal activity patterns could balance plasticity for learning with stability for memory.
Proceedings Article
Unsupervised learning of object structure and dynamics from videos
TL;DR: Adopting a keypoint-based image representation and learning a stochastic dynamics model of the keypoints outperforms unstructured representations on motion-related tasks such as object tracking, action recognition, and reward prediction.
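The keypoint representation mentioned above reduces each frame to a small set of 2D coordinates. A common way to do this in keypoint-based representation learning is a spatial soft-argmax over detector heatmaps; the sketch below assumes that style of bottleneck for illustration (it is not taken from the paper's code):

```python
# Illustrative sketch (assumption: a spatial soft-argmax keypoint bottleneck,
# as commonly used in keypoint-based representation learning).
import numpy as np

def soft_argmax(heatmap):
    """Reduce an (H, W) detector heatmap to expected (x, y) coordinates."""
    h, w = heatmap.shape
    # Softmax over all spatial locations, done stably.
    probs = np.exp(heatmap - heatmap.max())
    probs /= probs.sum()
    ys, xs = np.mgrid[0:h, 0:w]
    # Expected coordinates under the softmax distribution.
    return (probs * xs).sum(), (probs * ys).sum()

heatmap = np.zeros((8, 8))
heatmap[3, 5] = 50.0  # strong detector response at row 3, column 5
x, y = soft_argmax(heatmap)
print(round(x), round(y))  # 5 3
```

Because the expectation is differentiable, the keypoint detector can be trained end-to-end with the dynamics model; a separate model then predicts how the keypoints move over time.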
Posted Content
On Robustness and Transferability of Convolutional Neural Networks
Josip Djolonga,Jessica Yung,Michael Tschannen,Rob Romijnders,Lucas Beyer,Alexander Kolesnikov,Joan Puigcerver,Matthias Minderer,Alexander D'Amour,Dan Moldovan,Sylvain Gelly,Neil Houlsby,Xiaohua Zhai,Mario Lucic +13 more
TL;DR: Increasing both the training-set and model sizes significantly improves robustness to distributional shift, and, perhaps surprisingly, simple changes in preprocessing can significantly mitigate robustness issues in some cases.