
Aditya Grover

Researcher at Stanford University

Publications: 85
Citations: 12,305

Aditya Grover is an academic researcher from Stanford University. The author has contributed to research in the topics of Computer science and Inference. The author has an h-index of 22 and has co-authored 62 publications receiving 6,774 citations. Previous affiliations of Aditya Grover include the Indian Institute of Technology Delhi and the University of California, Berkeley.

Papers
Posted Content

Pretrained Transformers as Universal Computation Engines.

TL;DR: The authors investigate the capability of a transformer pretrained on natural language to generalize to other modalities with minimal finetuning, in particular without finetuning the self-attention and feedforward layers of the residual blocks.
Proceedings Article

Fair Generative Modeling via Weak Supervision

TL;DR: A weakly supervised algorithm for overcoming dataset bias for deep generative models, which reduces bias w.r.t. latent factors by an average of up to 34.6% over baselines for comparable image generation using generative adversarial networks.
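The reweighting idea behind this kind of weak supervision can be sketched concretely. A minimal, illustrative example (the function name and setup below are assumptions, not the paper's exact recipe): a binary classifier is trained to distinguish a small unbiased reference set from the large biased training set, and its output probability is converted into a density-ratio importance weight that down-weights over-represented examples.

```python
import numpy as np

def importance_weights(clf_probs, eps=1e-6):
    """Turn a classifier's probability that x comes from the small
    unbiased reference set (rather than the biased training set)
    into a density-ratio weight w(x) ~ p_ref(x) / p_bias(x)."""
    p = np.clip(clf_probs, eps, 1 - eps)  # avoid division by zero
    return p / (1 - p)

# Examples the classifier sees as over-represented in the biased data
# (low probability of being "reference") get down-weighted.
probs = np.array([0.5, 0.8, 0.2])
w = importance_weights(probs)
# w ≈ [1.0, 4.0, 0.25]
```

The weights can then rescale each example's loss when training the generative model, so the effective training distribution better matches the reference set.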
Proceedings Article (DOI)

Transformer Neural Processes: Uncertainty-Aware Meta Learning Via Sequence Modeling

Tung Nguyen, +1 more
TL;DR: This work proposes Transformer Neural Processes (TNPs), a new member of the NP family that casts uncertainty-aware meta learning as a sequence modeling problem and achieves state-of-the-art performance on various benchmark problems, outperforming all previous NP variants.
Posted Content

Uncertainty Autoencoders: Learning Compressed Representations via Variational Information Maximization

TL;DR: This work proposes Uncertainty Autoencoders, a learning framework for unsupervised representation learning inspired by compressed sensing that provides a unified treatment to several lines of research in dimensionality reduction, compressed sensing, and generative modeling.
Proceedings Article

Uncertainty Autoencoders: Learning Compressed Representations via Variational Information Maximization

TL;DR: In this article, uncertainty autoencoders are used for unsupervised representation learning inspired by compressed sensing, and the learning objective optimizes for a tractable variational lower bound to the mutual information between the data points and the latent representations.
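For context, a "tractable variational lower bound to the mutual information" of the kind mentioned above typically takes the following standard form (a sketch using the usual Barber-Agakov bound; the symbols are illustrative, not copied from the paper):

```latex
I(X; Y) \;=\; H(X) - H(X \mid Y)
\;\ge\; H(X) + \mathbb{E}_{p(x,y)}\!\left[\log q_\phi(x \mid y)\right]
```

Here $Y$ would be the latent (compressed) representation and $q_\phi(x \mid y)$ a variational decoder. Since the data entropy $H(X)$ does not depend on the model parameters, maximizing the expected log-likelihood term tightens the bound on the mutual information.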