Aaron Sarna
Researcher at Google
Publications - 19
Citations - 3515
Aaron Sarna is an academic researcher at Google. The author has contributed to research in topics: Implicit function & Computer science. The author has an h-index of 13 and has co-authored 16 publications receiving 1365 citations.
Papers
Posted Content
Supervised Contrastive Learning.
Prannay Khosla, Piotr Teterwak, Chen Wang, Aaron Sarna, Yonglong Tian, Phillip Isola, Aaron Maschinot, Ce Liu, Dilip Krishnan +8 more
TL;DR: In this paper, the authors extend the self-supervised batch contrastive approach to the fully supervised setting, allowing them to effectively leverage label information and achieve state-of-the-art performance in supervised training of deep image models.
Proceedings Article
Supervised Contrastive Learning
Prannay Khosla, Piotr Teterwak, Chen Wang, Aaron Sarna, Yonglong Tian, Phillip Isola, Aaron Maschinot, Ce Liu, Dilip Krishnan +8 more
TL;DR: A novel training methodology is proposed that consistently outperforms cross entropy on supervised learning tasks across different architectures and data augmentations; it modifies the batch contrastive loss, which has recently been shown to be very effective at learning powerful representations in the self-supervised setting.
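The supervised contrastive loss summarized above pulls together embeddings that share a label and pushes apart the rest. A minimal NumPy sketch of the per-anchor averaged formulation follows; the function name, temperature value, and toy inputs are illustrative assumptions, not the paper's reference implementation.

```python
import numpy as np

def supcon_loss(features, labels, temperature=0.1):
    """Supervised contrastive loss: for each anchor, average the
    log-probability of its same-label positives against all other samples."""
    # L2-normalize embeddings so dot products are cosine similarities
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = f @ f.T / temperature                      # (N, N) similarity logits
    n = len(labels)
    self_mask = np.eye(n, dtype=bool)
    # positives: other samples sharing the anchor's label
    pos_mask = (labels[:, None] == labels[None, :]) & ~self_mask
    # denominator sums over all samples except the anchor itself
    exp_sim = np.exp(sim)
    exp_sim[self_mask] = 0.0
    log_prob = sim - np.log(exp_sim.sum(axis=1, keepdims=True))
    # mean log-probability over each anchor's positives, then over anchors
    per_anchor = (log_prob * pos_mask).sum(axis=1) / np.maximum(pos_mask.sum(axis=1), 1)
    return -per_anchor.mean()
```

As a sanity check, a batch whose same-label embeddings are tightly clustered yields a lower loss than one where positives are orthogonal and negatives coincide.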
Proceedings ArticleDOI
Local Deep Implicit Functions for 3D Shape
TL;DR: Local Deep Implicit Functions (LDIF), a 3D shape representation that decomposes space into a structured set of learned implicit functions that provides higher surface reconstruction accuracy than the state-of-the-art (OccNet), while requiring fewer than 1% of the network parameters.
Proceedings ArticleDOI
Unsupervised Training for 3D Morphable Model Regression
TL;DR: In this paper, a method for training a regression network from image pixels to 3D morphable model coordinates using only unlabeled photographs is presented. The training loss is based on features from a facial recognition network, computed on-the-fly by rendering the predicted faces with a differentiable renderer.
Proceedings ArticleDOI
Learning Shape Templates With Structured Implicit Functions
TL;DR: It is shown that structured implicit functions are suitable for learning, allowing a network to smoothly and simultaneously fit multiple classes of shapes with a single general shape template learned from data.
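A structured implicit function of the kind described above represents a shape as a sum of a fixed number of simple analytic elements, with the surface taken as a level set of the summed field. The sketch below uses a simplified isotropic Gaussian element; the paper's template uses scaled axis-aligned Gaussians, so the parameterization here is an illustrative assumption.

```python
import numpy as np

def structured_implicit(x, constants, centers, radii):
    """Evaluate F(x) = sum_i c_i * exp(-||x - p_i||^2 / (2 * r_i^2)).

    x: (M, 3) query points; constants: (K,) per-element scales;
    centers: (K, 3) element positions; radii: (K,) element widths.
    The shape surface is a level set of F."""
    # squared distances from every query point to every element center: (M, K)
    d2 = ((x[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    # weighted sum of Gaussian elements per query point: (M,)
    return (constants[None, :] * np.exp(-d2 / (2.0 * radii[None, :] ** 2))).sum(axis=-1)
```

Because each element is a smooth analytic function of a handful of parameters, a network can regress the whole template (constants, centers, radii) and fit many shape classes with the same representation.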