Adelin Travers
Researcher at University of Toronto
Publications - 5
Citations - 196
Adelin Travers is an academic researcher at the University of Toronto. The author has contributed to research on topics including transfer of learning and hedge funds, has an h-index of 2, and has co-authored 5 publications receiving 37 citations.
Papers
Proceedings ArticleDOI
Machine Unlearning
Lucas Bourtoule,Varun Chandrasekaran,Christopher A. Choquette-Choo,Hengrui Jia,Adelin Travers,Baiwu Zhang,David Lie,Nicolas Papernot +7 more
TL;DR: SISA training, as described in this paper, is a framework that expedites the unlearning process by strategically limiting the influence of a data point on the training procedure; it is designed to achieve the largest improvements for stateful algorithms such as stochastic gradient descent for deep neural networks.
Posted Content
Machine Unlearning
Lucas Bourtoule,Varun Chandrasekaran,Christopher A. Choquette-Choo,Hengrui Jia,Adelin Travers,Baiwu Zhang,David Lie,Nicolas Papernot +7 more
TL;DR: This work introduces SISA training, a framework that decreases the number of model parameters affected by an unlearning request and caches intermediate outputs of the training algorithm to limit the number of model updates that need to be recomputed to unlearn a point.
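The sharding half of SISA can be sketched in a few lines: the training set is partitioned into disjoint shards, one constituent model is trained per shard, and an unlearning request only retrains the single shard that contained the point. This is a hypothetical minimal sketch; the per-shard "model" (a label mean) and all function names are illustrative stand-ins, and the paper's second ingredient, slicing with cached intermediate checkpoints, is omitted.

```python
def train(shard):
    # Toy stand-in for a learner: the mean of the shard's labels.
    return sum(y for _, y in shard) / len(shard)

def sisa_fit(data, n_shards):
    # Partition the data into disjoint shards and train one model each.
    shards = [data[i::n_shards] for i in range(n_shards)]
    models = [train(s) for s in shards]
    return shards, models

def unlearn(shards, models, point):
    # Only the shard containing the point is retrained; all other
    # constituent models are provably unaffected by the removed point.
    for i, shard in enumerate(shards):
        if point in shard:
            shards[i] = [p for p in shard if p != point]
            models[i] = train(shards[i])
            break
    return shards, models

def predict(models):
    # Aggregate the constituent models (here: simple averaging).
    return sum(models) / len(models)
```

Because each point influences exactly one constituent model, the retraining cost of an unlearning request scales with the shard size rather than the full dataset.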
Posted Content
On the Exploitability of Audio Machine Learning Pipelines to Surreptitious Adversarial Examples
Adelin Travers,Lorna Licollari,Guanghan Wang,Varun Chandrasekaran,Adam Dziedzic,David Lie,Nicolas Papernot +6 more
TL;DR: In this paper, the authors introduce surreptitious adversarial examples, a new class of attacks that evade both human and pipeline controls, and instantiate this class with a joint, multi-stage optimization attack.
Posted Content
SoK: Machine Learning Governance.
Varun Chandrasekaran,Hengrui Jia,Anvith Thudi,Adelin Travers,Mohammad Yaghini,Nicolas Papernot +5 more
TL;DR: In this article, the authors developed the concept of ML governance to balance the benefits and risks of machine learning in computer systems, with the aim of achieving responsible applications of ML systems.
Posted Content
Interpretability in Safety-Critical Financial Trading Systems.
TL;DR: In this article, a gradient-based approach is proposed to precisely stress-test how a trading model's forecasts can be manipulated, and to assess the effects of such manipulations on downstream tasks at the trading execution level.
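The core idea of gradient-based stress-testing can be illustrated as follows: given a forecast model, ascend the gradient of the forecast with respect to the inputs to find a small, bounded perturbation that maximally shifts the model's output. This is a hedged sketch only; the linear toy model, the finite-difference gradient, and all names and step sizes are illustrative assumptions, not the paper's method or code.

```python
def forecast(x, w):
    # Toy linear forecast model over input features x with weights w.
    return sum(wi * xi for wi, xi in zip(w, x))

def gradient(f, x, eps=1e-6):
    # Finite-difference gradient of f with respect to the input x.
    g = []
    for i in range(len(x)):
        xp = list(x); xp[i] += eps
        xm = list(x); xm[i] -= eps
        g.append((f(xp) - f(xm)) / (2 * eps))
    return g

def stress_test(f, x, budget, steps=10):
    # Take small gradient-ascent steps on the input, spending a total
    # perturbation budget, to surface an adversarial stress scenario
    # that maximally inflates the model's forecast.
    step = budget / steps
    x_adv = list(x)
    for _ in range(steps):
        g = gradient(f, x_adv)
        norm = max(abs(gi) for gi in g) or 1.0
        x_adv = [xi + step * gi / norm for xi, gi in zip(x_adv, g)]
    return x_adv
```

The resulting perturbed input shows how sensitive the forecast is within the given budget, which is the kind of quantity a downstream trading-execution analysis would consume.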