Anvith Thudi
Researcher at University of Toronto
Publications - 11
Citations - 68
Anvith Thudi is an academic researcher from the University of Toronto. The author has contributed to research in topics: Computer science & Stochastic gradient descent. The author has an h-index of 1, having co-authored 5 publications receiving 7 citations.
Papers
Posted Content
Proof-of-Learning: Definitions and Practice.
Hengrui Jia, Mohammad Yaghini, Christopher A. Choquette-Choo, Natalie Dullerud, Anvith Thudi, Varun Chandrasekaran, Nicolas Papernot +6 more
TL;DR: In this article, the authors introduce the concept of proof-of-learning in machine learning and demonstrate how a seminal training algorithm accumulates secret information due to its stochasticity.
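The idea behind proof-of-learning can be illustrated with a toy sketch: the trainer logs the weights and data batch at every step, and a verifier replays the logged updates to confirm the final model really resulted from that training run. The following is a minimal, hypothetical one-dimensional illustration (the toy loss, function names, and checkpointing scheme are assumptions for exposition, not the paper's actual protocol):

```python
import random

def train_step(w, batch, lr=0.1):
    # One SGD step on the toy loss f(w) = (w - mean(batch))^2.
    grad = 2 * (w - sum(batch) / len(batch))
    return w - lr * grad

def train_with_proof(data, steps=10, seed=0):
    """Train while logging (weights, batch) at every step -- the 'proof'."""
    rng = random.Random(seed)
    w, proof = 0.0, []
    for _ in range(steps):
        batch = rng.sample(data, 3)
        proof.append((w, batch))  # checkpoint taken before the update
        w = train_step(w, batch)
    return w, proof

def verify(final_w, proof, tol=1e-9):
    """Replay every logged update and check it reproduces the final weights."""
    w = proof[0][0]
    for w_logged, batch in proof:
        if abs(w - w_logged) > tol:
            return False
        w = train_step(w, batch)
    return abs(w - final_w) <= tol
```

Because each update depends on the sampled batch, an adversary who did not perform the training cannot cheaply produce a consistent log, which is what makes the accumulated stochasticity usable as a proof.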
Posted Content
On the Necessity of Auditable Algorithmic Definitions for Machine Unlearning.
TL;DR: In this paper, the authors show that even for a given training trajectory one cannot formally prove the absence of certain data points used during training, since one can obtain the same model using different datasets.
Posted Content
SoK: Machine Learning Governance.
Varun Chandrasekaran, Hengrui Jia, Anvith Thudi, Adelin Travers, Mohammad Yaghini, Nicolas Papernot +5 more
TL;DR: In this article, the authors developed the concept of ML governance to balance the benefits and risks of machine learning in computer systems, with the aim of achieving responsible applications of ML systems.
Journal Article
Training Private Models That Know What They Don't Know
Stephan Rabanser, Anvith Thudi, Abhradeep Guha Thakurta, Krishnamurthy Dvijotham, Nicolas Papernot +4 more
TL;DR: In this paper, the authors conduct a thorough empirical investigation of selective classifiers under a differential privacy constraint, finding that several popular selective prediction approaches are ineffective in a differentially private setting because they increase the risk of privacy leakage.
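A selective classifier, as studied above, predicts only when it is sufficiently confident and abstains otherwise. A minimal sketch of confidence-threshold abstention (the function name and threshold value are illustrative assumptions, not the paper's specific method):

```python
def selective_predict(probs, threshold=0.9):
    """Return the top class index if its probability clears the
    confidence threshold; otherwise abstain (return None)."""
    top = max(range(len(probs)), key=lambda i: probs[i])
    return top if probs[top] >= threshold else None
```

The paper's finding is that confidence signals like this can themselves leak information about the training data, so combining them naively with differential privacy degrades the privacy guarantee in practice.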
Journal Article
Gradients Look Alike: Sensitivity is Often Overestimated in DP-SGD
TL;DR: Differentially private stochastic gradient descent (DP-SGD) is the canonical algorithm for private deep learning. Its privacy analysis is known to be tight in the worst case, but several empirical results suggest that the resulting models leak significantly less privacy for many data points.
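For context, DP-SGD modifies ordinary SGD by clipping each per-example gradient to a norm bound C and adding Gaussian noise to the aggregate, which is exactly the per-example sensitivity the paper argues is often overestimated. A minimal one-dimensional sketch of a single DP-SGD update (function names and hyperparameter values are illustrative assumptions):

```python
import random

def clip(g, C):
    # Rescale a per-example gradient so its magnitude is at most C.
    norm = abs(g)
    return g * min(1.0, C / norm) if norm > 0 else g

def dpsgd_step(w, per_example_grads, lr=0.1, C=1.0, sigma=1.0, rng=None):
    """One DP-SGD update: clip each per-example gradient to norm C,
    add Gaussian noise scaled by sigma * C, then average and step."""
    rng = rng or random.Random(0)
    clipped = [clip(g, C) for g in per_example_grads]
    noise = rng.gauss(0.0, sigma * C)
    avg = (sum(clipped) + noise) / len(per_example_grads)
    return w - lr * avg
```

The worst-case analysis assumes every example's gradient saturates the clipping bound C; the paper's observation is that for many data points the gradients look alike and fall well inside that bound, so the effective sensitivity, and hence the leakage, is smaller.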