Open Access Proceedings ArticleDOI

A Unified PAC-Bayesian Framework for Machine Unlearning via Information Risk Minimization

TLDR
This paper develops a unified PAC-Bayesian framework for machine unlearning that recovers two recent design principles, variational unlearning and the forgetting Lagrangian, as information risk minimization problems.
Abstract
Machine unlearning refers to mechanisms that can remove the influence of a subset of the training data from a trained model upon request, without incurring the cost of retraining from scratch. This paper develops a unified PAC-Bayesian framework for machine unlearning that recovers two recent design principles, variational unlearning [1] and the forgetting Lagrangian [2], as information risk minimization problems [3]. Accordingly, both criteria can be interpreted as PAC-Bayesian upper bounds on the test loss of the unlearned model that take the form of free energy metrics.
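For context, the information risk minimization objectives referenced in the abstract generally take a free energy form; in standard PAC-Bayes notation (assumed here, not quoted from the paper), with prior P, posterior Q, empirical loss \hat{L}, and inverse temperature \lambda:

\min_{Q} \; \mathbb{E}_{w \sim Q}\big[\hat{L}(w)\big] + \frac{1}{\lambda}\,\mathrm{KL}(Q \,\|\, P)

The first term rewards fit to the retained data, while the KL term penalizes deviation from a reference distribution, which is what gives both unlearning criteria their free energy interpretation.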


Citations
Proceedings ArticleDOI

Forget-SVGD: Particle-Based Bayesian Federated Unlearning

TL;DR: Forget-Stein Variational Gradient Descent (Forget-SVGD) leverages the flexibility of non-parametric Bayesian approximate inference to develop a novel Bayesian federated unlearning method.
References
Proceedings Article

The Secret Sharer: Evaluating and Testing Unintended Memorization in Neural Networks

TL;DR: This paper describes a testing methodology for quantitatively assessing the risk that rare or unique training-data sequences are unintentionally memorized by generative sequence models, a common type of machine learning model, and presents new, efficient procedures that can extract unique, secret sequences such as credit card numbers.
Proceedings ArticleDOI

PAC-Bayesian model averaging

TL;DR: The method constructs an optimized weighted mixture of concepts analogous to a Bayesian posterior distribution; the main result is stated for bounded loss, and a preliminary analysis for unbounded loss is also given.
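As a reference point, the bounded-loss guarantees in this line of work are typified by the McAllester-style PAC-Bayes bound; in one standard form (notation assumed, not taken from this entry), for losses in [0, 1], with probability at least 1 - \delta over an i.i.d. sample of size n, simultaneously for all posteriors Q:

\mathbb{E}_{w \sim Q}\big[L(w)\big] \;\le\; \mathbb{E}_{w \sim Q}\big[\hat{L}(w)\big] + \sqrt{\frac{\mathrm{KL}(Q \,\|\, P) + \ln(2\sqrt{n}/\delta)}{2n}}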
Proceedings ArticleDOI

Towards Making Systems Forget with Machine Unlearning

TL;DR: This paper presents a general, efficient unlearning approach that transforms the learning algorithms used by a system into a summation form; the approach applies to all stages of machine learning, including feature selection and modeling.
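To make the summation form concrete, here is a minimal, hypothetical sketch (not the paper's code) using least-squares regression, whose sufficient statistics are per-sample sums, so forgetting a sample reduces to subtracting its contribution:

import numpy as np

class SummationFormLeastSquares:
    """Linear regression kept in summation form: the model depends on the
    data only through the running sums X^T X and X^T y."""

    def __init__(self, dim):
        self.xtx = np.zeros((dim, dim))  # sum of outer products x x^T
        self.xty = np.zeros(dim)         # sum of y * x

    def learn(self, x, y):
        self.xtx += np.outer(x, x)
        self.xty += y * x

    def unlearn(self, x, y):
        # Forgetting a sample subtracts its summed statistics in O(dim^2),
        # with no retraining pass over the remaining data.
        self.xtx -= np.outer(x, x)
        self.xty -= y * x

    def weights(self, ridge=1e-6):
        # Solve (X^T X + ridge * I) w = X^T y from the current sums.
        d = self.xtx.shape[0]
        return np.linalg.solve(self.xtx + ridge * np.eye(d), self.xty)

After unlearn(x, y) is called with a previously learned pair, weights() matches (up to floating-point error) a model retrained from scratch on the remaining data.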
Journal ArticleDOI

PAC-Bayesian Stochastic Model Selection

TL;DR: This paper gives a PAC-Bayesian performance guarantee for stochastic model selection that is superior to analogous guarantees for deterministic model selection, and shows that the posterior optimizing the guarantee is a Gibbs distribution.
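For context, the Gibbs distribution mentioned in this entry is the closed-form minimizer of the free energy objective sketched above; in standard notation (assumed), the optimal posterior reweights the prior exponentially by empirical loss:

Q^{*}(w) \;\propto\; P(w)\, \exp\big(-\lambda\, \hat{L}(w)\big)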
Proceedings ArticleDOI

PAC-Bayesian learning of linear classifiers

TL;DR: A general PAC-Bayes theorem is presented from which all known PAC-Bayes risk bounds are obtained as particular cases, and learning algorithms are proposed for finding linear classifiers that minimize these bounds.