Aditya Grover
Researcher at Stanford University
Publications - 85
Citations - 12305
Aditya Grover is an academic researcher from Stanford University. The author has contributed to research in topics: Computer science & Inference. The author has an h-index of 22 and has co-authored 62 publications receiving 6774 citations. Previous affiliations of Aditya Grover include Indian Institute of Technology Delhi & University of California, Berkeley.
Papers
Posted Content
Anytime Sampling for Autoregressive Models via Ordered Autoencoding
TL;DR: In this article, the authors propose a new family of autoregressive models that enable anytime sampling: dimensions are ordered by their importance with respect to reconstruction, so sample quality can be traded off against computational efficiency by truncating the generation process.
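A minimal, self-contained sketch of the idea (illustrative stand-in models, not the paper's implementation): earlier code dimensions are assumed to carry the most reconstruction-relevant information, so stopping the autoregressive loop after a smaller budget yields a cheaper but coarser sample.

# Anytime sampling over an ordered latent code; all names and models here are toy stand-ins.
import torch
import torch.nn as nn

CODE_DIM = 8

class ToyARPrior(nn.Module):
    """Predicts a Gaussian over code dimension i from the (zero-padded) prefix z[:i]."""
    def __init__(self, code_dim):
        super().__init__()
        self.net = nn.Linear(code_dim, 2)        # outputs mean and log-std

    def forward(self, prefix, i):
        padded = torch.zeros(prefix.shape[0], CODE_DIM)
        padded[:, :i] = prefix
        mean, log_std = self.net(padded).chunk(2, dim=-1)
        return torch.distributions.Normal(mean, log_std.exp())

toy_decoder = nn.Linear(CODE_DIM, 32)            # stand-in decoder back to data space

def anytime_sample(ar_prior, decoder, budget):
    z = torch.zeros(1, CODE_DIM)
    for i in range(budget):                      # truncate generation after `budget` steps
        dist = ar_prior(z[:, :i], i)
        z[:, i] = dist.sample().squeeze(-1)
    return decoder(z)                            # smaller budgets give cheaper, coarser samples

coarse = anytime_sample(ToyARPrior(CODE_DIM), toy_decoder, budget=3)
full = anytime_sample(ToyARPrior(CODE_DIM), toy_decoder, budget=CODE_DIM)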
CleanCLIP: Mitigating Data Poisoning Attacks in Multimodal Contrastive Learning
Aditya Grover, Kai-Wei Chang +1 more
TL;DR: CleanCLIP as discussed by the authors is a finetuning framework that weakens the spurious associations introduced by backdoor attacks by independently re-aligning the representations for individual modalities, which can significantly reduce the attacks' impact.
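A hedged sketch of that objective as described in the summary: the usual image-text contrastive loss plus independent augmented-view contrastive losses within each modality. The helper names, weighting, and random embeddings below are illustrative, not the authors' code.

# CleanCLIP-style finetuning loss: multimodal alignment plus per-modality self-supervision.
import torch
import torch.nn.functional as F

def info_nce(a, b, temperature=0.07):
    """Symmetric InfoNCE between two batches of matching embeddings."""
    a = F.normalize(a, dim=-1)
    b = F.normalize(b, dim=-1)
    logits = a @ b.t() / temperature
    targets = torch.arange(a.shape[0])
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

def cleanclip_style_loss(img_emb, txt_emb, img_emb_aug, txt_emb_aug, lam=1.0):
    multimodal = info_nce(img_emb, txt_emb)              # standard image-text alignment
    unimodal = info_nce(img_emb, img_emb_aug) + \
               info_nce(txt_emb, txt_emb_aug)            # independent per-modality re-alignment
    return multimodal + lam * unimodal

# toy usage with random embeddings
B, D = 4, 16
loss = cleanclip_style_loss(torch.randn(B, D), torch.randn(B, D),
                            torch.randn(B, D), torch.randn(B, D))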
Pareto-Efficient Decision Agents for Offline Multi-Objective Reinforcement Learning
TL;DR: In this article, a data-driven setup for offline multi-objective reinforcement learning (MORL) is proposed, where a preference-agnostic policy agent is learned using only a finite dataset of offline demonstrations of other agents and their preferences.
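A generic illustration of the multi-objective setup mentioned above (not the paper's algorithm): each trajectory carries a vector-valued return, and a preference vector scalarizes it, which is what pairing offline demonstrations with the demonstrators' preferences makes learnable.

# Vector-valued returns and preference-weighted scalarization in a two-objective toy example.
import numpy as np

rewards = np.array([[1.0, 0.2],     # per-step rewards for two objectives
                    [0.5, 0.9],
                    [0.8, 0.4]])
preference = np.array([0.7, 0.3])   # convex weights over the objectives

vector_return = rewards.sum(axis=0)              # multi-objective return of the trajectory
scalarized = preference @ vector_return          # preference-weighted scalar objective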
Journal Article
ClimateLearn: Benchmarking Machine Learning for Weather and Climate Modeling
TL;DR: The ClimateLearn project as mentioned in this paper is an open-source PyTorch library that simplifies the training and evaluation of machine learning models for data-driven climate science, including weather forecasting and climate downscaling.
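A generic illustration of the downscaling task mentioned above (not ClimateLearn's API): map a coarse-resolution climate field to a finer grid, framed as supervised regression on gridded tensors. The shapes and model below are arbitrary placeholders.

# Climate downscaling as super-resolution regression on gridded fields (toy tensors).
import torch
import torch.nn as nn

coarse = torch.randn(8, 1, 32, 64)    # batch of low-resolution fields (e.g. temperature)
fine = torch.randn(8, 1, 64, 128)     # matching high-resolution targets

upsampler = nn.Sequential(
    nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
    nn.Conv2d(1, 1, kernel_size=3, padding=1),
)

loss = nn.functional.mse_loss(upsampler(coarse), fine)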
Posted Content
Moser Flow: Divergence-based Generative Modeling on Manifolds
TL;DR: In this article, the authors introduce Moser Flow (MF), a new class of generative models within the family of continuous normalizing flows (CNF). Like other CNF methods, MF produces a CNF via a solution to the change-of-variables formula; unlike them, its model (learned) density is parameterized as the source (prior) density minus the divergence of a neural network (NN).
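A minimal sketch of that parameterization (illustrative, not the authors' implementation, which additionally handles positivity and normalization through its training objective): the model density at a point is the prior density minus the divergence of a learned vector field, computed here with autograd.

# Model density nu(x) - div u_theta(x) for a batch of points, with the divergence from autograd.
import torch
import torch.nn as nn

dim = 2
vector_field = nn.Sequential(nn.Linear(dim, 64), nn.Tanh(), nn.Linear(64, dim))
prior = torch.distributions.MultivariateNormal(torch.zeros(dim), torch.eye(dim))

def model_density(x):
    """Evaluate nu(x) - div u_theta(x) for x of shape (batch, dim)."""
    x = x.requires_grad_(True)
    u = vector_field(x)
    div = torch.zeros(x.shape[0])
    for i in range(dim):                                   # divergence = sum_i d u_i / d x_i
        grad_i = torch.autograd.grad(u[:, i].sum(), x, create_graph=True)[0][:, i]
        div = div + grad_i
    return prior.log_prob(x).exp() - div

density = model_density(torch.randn(5, dim))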