
Aditya Grover

Researcher at Stanford University

Publications: 85
Citations: 12,305

Aditya Grover is an academic researcher from Stanford University. The author has contributed to research in topics including computer science and inference, has an h-index of 22, and has co-authored 62 publications receiving 6,774 citations. Previous affiliations of Aditya Grover include the Indian Institute of Technology Delhi and the University of California, Berkeley.

Papers
Journal Article (DOI)

Streamlining variational inference for constraint satisfaction problems

TL;DR: It is shown that streamlined solvers consistently outperform decimation-based solvers on random k-SAT instances for several problem sizes, shrinking the gap between empirical performance and theoretical limits of satisfiability by 16.3% on average for k=3,4,5,6.
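To give a feel for the decimation loop that such variational solvers drive, here is a toy Python sketch: it estimates per-variable marginals with a crude mean-field-style update, then repeatedly fixes the most polarized variable and simplifies the formula. This is not the paper's streamlined survey-propagation solver; the clause encoding, belief update, and softmax are my own illustrative assumptions.

```python
import math
import random

def random_ksat(n, m, k=3, seed=0):
    """Random k-SAT: each clause is a list of (var_index, is_positive)."""
    rng = random.Random(seed)
    return [[(v, rng.random() < 0.5) for v in rng.sample(range(n), k)]
            for _ in range(m)]

def marginals(clauses, n, iters=30):
    """Crude mean-field-style beliefs: p[i] approximates P(x_i = True)."""
    p = [0.5] * n
    for _ in range(iters):
        for i in range(n):
            s_true, s_false = 0.0, 0.0
            for clause in clauses:
                for v, pos in clause:
                    if v != i:
                        continue
                    # Probability that every *other* literal in this clause
                    # fails, i.e. how much the clause relies on x_i.
                    rest = 1.0
                    for u, upos in clause:
                        if u != i:
                            rest *= (1.0 - p[u]) if upos else p[u]
                    if pos:
                        s_true += rest
                    else:
                        s_false += rest
            p[i] = math.exp(s_true) / (math.exp(s_true) + math.exp(s_false))
    return p

def decimate(clauses, n):
    """Fix the most polarized variable, simplify the formula, repeat."""
    assignment = {}
    while clauses:
        p = marginals(clauses, n)
        free = [i for i in range(n) if i not in assignment]
        i = max(free, key=lambda j: abs(p[j] - 0.5))
        val = p[i] > 0.5
        assignment[i] = val
        simplified = []
        for clause in clauses:
            if any(u == i and upos == val for u, upos in clause):
                continue  # clause satisfied; drop it
            reduced = [(u, upos) for u, upos in clause if u != i]
            if not reduced:
                return None  # empty clause: the toy heuristic got stuck
            simplified.append(reduced)
        clauses = simplified
    return assignment  # variables left unassigned are unconstrained

print(decimate(random_ksat(n=40, m=120), n=40))
```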

Amortized Variational Compressive Sensing

TL;DR: Presents a novel autoencoder-based algorithmic framework that jointly learns the acquisition and recovery of signals while implicitly modeling domain structure, maximizing a variational lower bound on the mutual information between the signal and the measurements.
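A minimal PyTorch sketch of the autoencoder framing: a learned linear measurement matrix (acquisition) trained jointly with a neural decoder (recovery). With a fixed-variance Gaussian decoder, the variational lower bound on the mutual information reduces, up to constants, to negative reconstruction error, which is what this toy optimizes. Dimensions, architecture, and the synthetic sparse data are illustrative assumptions, not the paper's setup.

```python
import torch
import torch.nn as nn

signal_dim, num_measurements = 64, 8

class AmortizedCS(nn.Module):
    def __init__(self):
        super().__init__()
        # Acquisition: a learned linear map x -> y = Phi x (no bias).
        self.phi = nn.Linear(signal_dim, num_measurements, bias=False)
        # Recovery: an amortized decoder mapping measurements back to signals.
        self.decoder = nn.Sequential(
            nn.Linear(num_measurements, 128), nn.ReLU(),
            nn.Linear(128, signal_dim))

    def forward(self, x):
        return self.decoder(self.phi(x))

def sparse_batch(batch=256, k=4):
    """Synthetic k-sparse signals: random supports, Gaussian amplitudes."""
    x = torch.zeros(batch, signal_dim)
    idx = torch.randint(signal_dim, (batch, k))
    x.scatter_(1, idx, torch.randn(batch, k))
    return x

model = AmortizedCS()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(2000):
    x = sparse_batch()
    loss = ((model(x) - x) ** 2).mean()  # proxy for the variational bound
    opt.zero_grad(); loss.backward(); opt.step()
```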
Proceedings Article

Reset-Free Lifelong Learning with Skill-Space Planning

TL;DR: LiSP as mentioned in this paper is an algorithmic framework for lifelong RL based on planning in an abstract space of higher-order skills; it learns the skills in an unsupervised manner using intrinsic rewards and plans over the learned skills using a learned dynamics model.
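A hedged sketch of the "plan over skills with a learned dynamics model" idea: cross-entropy-method (CEM) search over a horizon of skill latents, scored by imagined rollouts. The skill dimensionality, the CEM planner, and the untrained stand-in dynamics and reward networks are all illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

state_dim, skill_dim, horizon = 16, 4, 5

# Stand-ins for networks LiSP would train (dynamics from unsupervised skill
# rollouts, a reward/value model for the downstream task).
dynamics = nn.Sequential(nn.Linear(state_dim + skill_dim, 64), nn.ReLU(),
                         nn.Linear(64, state_dim))
reward = nn.Sequential(nn.Linear(state_dim, 32), nn.ReLU(), nn.Linear(32, 1))

@torch.no_grad()
def plan_skills(state, pop=128, elites=16, cem_iters=5):
    """CEM over a horizon of skill latents; returns the first skill."""
    mean = torch.zeros(horizon, skill_dim)
    std = torch.ones(horizon, skill_dim)
    for _ in range(cem_iters):
        cand = mean + std * torch.randn(pop, horizon, skill_dim)
        s = state.expand(pop, state_dim)
        ret = torch.zeros(pop)
        for t in range(horizon):
            s = dynamics(torch.cat([s, cand[:, t]], dim=-1))  # imagined step
            ret += reward(s).squeeze(-1)
        top = ret.topk(elites).indices  # refit the sampler to the elites
        mean, std = cand[top].mean(0), cand[top].std(0) + 1e-6
    return mean[0]  # execute the first skill, then replan (MPC-style)

first_skill = plan_skills(torch.zeros(state_dim))
```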
Journal Article (DOI)

Imitating, Fast and Slow: Robust learning from demonstrations via decision-time planning

Carl Qi, +2 more
07 Apr 2022
TL;DR: Proposes Imitation with Planning at Test-time (IMPLANT), a new meta-algorithm for imitation learning that utilizes decision-time planning to correct for the compounding errors of any base imitation policy.
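As a rough illustration of decision-time planning around a base imitation policy, here is a random-shooting sketch: perturb the base policy's proposals, roll them out in a dynamics model, and keep the sequence whose imagined states score as most demonstration-like. The dynamics model, the score function, and the Gaussian perturbation scheme are illustrative assumptions, not IMPLANT's actual components.

```python
import torch

def planned_action(state, base_policy, dynamics, score,
                   horizon=4, pop=64, noise=0.1):
    """Random-shooting planner seeded by the base imitation policy."""
    best_ret, best_first = -float("inf"), None
    with torch.no_grad():
        for _ in range(pop):
            s, ret, first = state, 0.0, None
            for _ in range(horizon):
                a = base_policy(s)
                a = a + noise * torch.randn_like(a)  # perturb the proposal
                if first is None:
                    first = a
                s = dynamics(s, a)       # imagined transition
                ret += float(score(s))   # how demonstration-like is s?
            if ret > best_ret:
                best_ret, best_first = ret, first
    return best_first  # execute, observe the real state, then replan

# Toy usage with stand-in models (all shapes and updates are made up):
state = torch.zeros(8)
action = planned_action(
    state,
    base_policy=lambda s: torch.tanh(s[:2]),
    dynamics=lambda s, a: s + 0.1 * torch.cat([a, a, a, a]),
    score=lambda s: -s.pow(2).sum(),
)
print(action)
```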
Journal Article (DOI)

CleanCLIP: Mitigating Data Poisoning Attacks in Multimodal Contrastive Learning

TL;DR: CleanCLIP as discussed by the authors is a finetuning framework that weakens the spurious associations introduced by backdoor attacks by independently re-aligning the representations for individual modalities, significantly reducing the impact of the backdoor attack.
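A hedged sketch of a CleanCLIP-style finetuning objective: the usual CLIP image-text contrastive loss plus independent self-supervised contrastive terms within each modality (here SimCLR-style, contrasting two augmented views). Encoder and augmentation details and the loss weight `lam` are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def info_nce(a, b, temp=0.07):
    """Symmetric InfoNCE between matched rows of embeddings a and b."""
    a, b = F.normalize(a, dim=-1), F.normalize(b, dim=-1)
    logits = a @ b.t() / temp
    targets = torch.arange(a.size(0))
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.t(), targets)) / 2

def cleanclip_loss(img_z, txt_z, img_z_aug, txt_z_aug, lam=1.0):
    multimodal = info_nce(img_z, txt_z)  # standard CLIP alignment
    # Re-align each modality with itself so a poisoned image-text pair
    # cannot anchor a spurious cross-modal shortcut on its own.
    unimodal = info_nce(img_z, img_z_aug) + info_nce(txt_z, txt_z_aug)
    return multimodal + lam * unimodal

# Toy usage with random "embeddings" for a batch of 8 pairs:
z = lambda: torch.randn(8, 64)
print(cleanclip_loss(z(), z(), z(), z()))
```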