Open Access · Posted Content
SoK: Machine Learning Governance.
Varun Chandrasekaran, Hengrui Jia, Anvith Thudi, Adelin Travers, Mohammad Yaghini, Nicolas Papernot +5 more
TL;DR: In this article, the authors develop the concept of ML governance to balance the benefits and risks of machine learning in computer systems, with the aim of achieving responsible applications of ML systems.
Abstract:
The application of machine learning (ML) in computer systems introduces not only many benefits but also risks to society. In this paper, we develop the concept of ML governance to balance such benefits and risks, with the aim of achieving responsible applications of ML. Our approach first systematizes research towards ascertaining ownership of data and models, thus fostering a notion of identity specific to ML systems. Building on this foundation, we use identities to hold principals accountable for failures of ML systems through both attribution and auditing. To increase trust in ML systems, we then survey techniques for developing assurance, i.e., confidence that the system meets its security requirements and does not exhibit certain known failures. This leads us to highlight the need for techniques that allow a model owner to manage the life cycle of their system, e.g., to patch or retire their ML system. Altogether, our systematization of knowledge standardizes the interactions between principals involved in the deployment of ML throughout its life cycle. We highlight opportunities for future work, e.g., to formalize the resulting game between ML principals.
Citations
Posted Content
On the Necessity of Auditable Algorithmic Definitions for Machine Unlearning.
TL;DR: In this paper, the authors show that even for a given training trajectory one cannot formally prove the absence of certain data points used during training, since one can obtain the same model using different datasets.
References
Journal Article · DOI
Falcon: Honest-Majority Maliciously Secure Framework for Private Deep Learning
TL;DR: The experiments in the WAN setting show that over large networks and datasets, compute operations, rather than communication, dominate the overall latency of MPC.
Proceedings Article · DOI
Graviton: trusted execution environments on GPUs
TL;DR: Graviton enables applications to offload security- and performance-sensitive kernels and data to a GPU, and execute kernels in isolation from other code running on the GPU and all software on the host, including the device driver, the operating system, and the hypervisor.
Proceedings Article · DOI
Neural Network Inversion in Adversarial Setting via Background Knowledge Alignment
TL;DR: This work investigates the model inversion problem under adversarial settings, and designs a truncation-based technique to align the inversion model, enabling effective inversion of the target model from the partial predictions that the adversary obtains on a victim user's data.
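The truncation idea summarized above can be sketched as follows. This is a hypothetical illustration, not the paper's implementation: the `truncate_prediction` helper and its parameters are assumptions, showing only the alignment step in which a full prediction vector is reduced to the top-k probabilities the adversary can actually observe, then renormalized.

```python
import numpy as np

def truncate_prediction(probs, k=3):
    """Keep only the top-k class probabilities and renormalize.

    Hypothetical sketch: the adversary trains its inversion model on
    predictions truncated the same way as the partial outputs it can
    observe from the target model, aligning the two distributions.
    """
    probs = np.asarray(probs, dtype=float)
    truncated = np.zeros_like(probs)
    top_k = np.argsort(probs)[-k:]          # indices of the k largest entries
    truncated[top_k] = probs[top_k]
    return truncated / truncated.sum()      # renormalize to a distribution

full = np.array([0.05, 0.10, 0.50, 0.30, 0.05])
print(truncate_prediction(full, k=2))       # mass concentrated on classes 2 and 3
```

With `k=2`, only the two largest entries (0.50 and 0.30) survive and are rescaled to sum to one, mimicking a deployment that reveals only its top-2 confidence scores.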
Proceedings Article · DOI
Statistical change detection for multi-dimensional data
TL;DR: This paper defines a statistical test for deciding whether observed data points are sampled from the underlying distribution that produced the baseline data set, using a test statistic that is strictly distribution-free under the null hypothesis.
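The flavor of such a distribution-free test can be illustrated with a one-dimensional analogue. This sketch does not reproduce the paper's multi-dimensional statistic; it substitutes the classic two-sample Kolmogorov-Smirnov test, which is likewise distribution-free under the null hypothesis, and the `change_detected` helper and `alpha` threshold are illustrative assumptions.

```python
import numpy as np
from scipy.stats import ks_2samp

def change_detected(baseline, observed, alpha=0.01):
    """Flag a change when the two-sample KS p-value falls below alpha.

    Under the null hypothesis (both samples drawn from the same
    continuous distribution), the KS statistic's distribution does not
    depend on that distribution, so the threshold alpha needs no
    knowledge of the baseline's shape.
    """
    _, p_value = ks_2samp(baseline, observed)
    return bool(p_value < alpha)

rng = np.random.default_rng(0)
baseline = rng.normal(loc=0.0, scale=1.0, size=500)   # reference window
shifted = rng.normal(loc=0.7, scale=1.0, size=500)    # drifted window

print(change_detected(baseline, baseline))  # identical samples: no change flagged
print(change_detected(baseline, shifted))   # mean shift: change flagged
```

A multi-dimensional test, as in the paper, must additionally cope with the fact that empirical CDF comparisons like KS do not extend directly beyond one dimension.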
Book Chapter · DOI
SemanticAdv: Generating Adversarial Examples via Attribute-conditional Image Editing
TL;DR: An algorithm is proposed which leverages disentangled semantic factors to generate adversarial perturbation by altering controlled semantic attributes to fool the learner towards various "adversarial" targets.