Open Access Posted Content

SoK: Machine Learning Governance.

TL;DR
In this article, the authors developed the concept of ML governance to balance the benefits and risks of machine learning in computer systems, with the aim of achieving responsible applications of ML systems.
Abstract
The application of machine learning (ML) in computer systems introduces not only many benefits but also risks to society. In this paper, we develop the concept of ML governance to balance such benefits and risks, with the aim of achieving responsible applications of ML. Our approach first systematizes research towards ascertaining ownership of data and models, thus fostering a notion of identity specific to ML systems. Building on this foundation, we use identities to hold principals accountable for failures of ML systems through both attribution and auditing. To increase trust in ML systems, we then survey techniques for developing assurance, i.e., confidence that the system meets its security requirements and does not exhibit certain known failures. This leads us to highlight the need for techniques that allow a model owner to manage the life cycle of their system, e.g., to patch or retire their ML system. Taken together, our systematization of knowledge standardizes the interactions between principals involved in the deployment of ML throughout its life cycle. We highlight opportunities for future work, e.g., to formalize the resulting game between ML principals.


Citations
Posted Content

On the Necessity of Auditable Algorithmic Definitions for Machine Unlearning.

TL;DR: In this paper, the authors show that, even for a given training trajectory, one cannot formally prove the absence of certain data points used during training, since one can obtain the same model using different datasets.
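The core observation (that the same model can arise from different training sets, so the final model alone cannot witness which points were used) can be illustrated with a toy fit. This is a hedged sketch using ordinary least squares as a stand-in for "training"; it is not the paper's construction, and all names here are illustrative.

```python
import numpy as np

def fit_ols(X, y):
    """Ordinary least squares; stands in for 'training' a model."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Two disjoint datasets drawn from the same noiseless rule
# y = 2*x1 - x2 yield numerically identical models, so the model
# cannot prove which data points were (or were not) used.
w_true = np.array([2.0, -1.0])
X1 = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
X2 = np.array([[2.0, 1.0], [1.0, 3.0], [0.0, 2.0]])
w1 = fit_ols(X1, X1 @ w_true)
w2 = fit_ols(X2, X2 @ w_true)
print(np.allclose(w1, w2))  # True: same model, different training data
```

The deep-learning version of this argument is subtler (stochastic training trajectories rather than closed-form solutions), but the ambiguity it exposes is the same.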
References
Journal Article

Learning under Concept Drift: A Review

TL;DR: This review surveys current research developments and trends in the concept drift field and establishes a framework for learning under concept drift with three main components: concept drift detection, concept drift understanding, and concept drift adaptation.
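Of the three components, drift detection is the easiest to make concrete. Below is a minimal sketch of window-based detection: compare the classifier's error rate in a recent window against a reference window and flag drift when the gap exceeds a threshold. The function name, window size, and threshold are all illustrative assumptions, not from the review.

```python
import numpy as np

def detect_drift(errors, window=50, threshold=0.15):
    """Flag concept drift when the recent error rate exceeds the
    reference (earliest window) error rate by more than `threshold`.

    `errors` is a binary stream: 1 = misclassified, 0 = correct.
    """
    errors = np.asarray(errors, dtype=float)
    if len(errors) < 2 * window:
        return False  # not enough data to compare two windows
    reference = errors[:window].mean()   # error rate early in the stream
    recent = errors[-window:].mean()     # error rate right now
    return bool(recent - reference > threshold)

# A stream whose error rate jumps from ~5% to ~60% mid-stream:
rng = np.random.default_rng(0)
stream = np.concatenate([rng.random(100) < 0.05,
                         rng.random(100) < 0.6]).astype(int)
print(detect_drift(stream))  # drift is flagged
```

Real detectors (e.g., DDM or ADWIN, both covered in the review) replace the fixed threshold with statistical tests and adaptive windows, but the comparison they perform is of this shape.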
Journal Article

An abstract domain for certifying neural networks

TL;DR: This work proposes a new abstract domain that combines floating-point polyhedra with intervals and is equipped with abstract transformers tailored to neural networks, introducing new transformers for affine transforms, the rectified linear unit, sigmoid, tanh, and maxpool functions.
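To see what an abstract transformer does, here is a sketch of the much simpler interval domain (the paper's domain is strictly more precise): propagate a box of possible inputs through an affine layer and a ReLU, producing sound bounds on the outputs. Function names and the example network are illustrative assumptions.

```python
import numpy as np

def affine_interval(lo, hi, W, b):
    # Sound interval propagation through W @ x + b, using the
    # midpoint/radius form of interval arithmetic.
    center, radius = (lo + hi) / 2.0, (hi - lo) / 2.0
    c = W @ center + b
    r = np.abs(W) @ radius
    return c - r, c + r

def relu_interval(lo, hi):
    # Abstract transformer for ReLU: clamp both bounds at zero.
    return np.maximum(lo, 0.0), np.maximum(hi, 0.0)

# Bound a one-layer net's output over the input box [-1, 1]^2:
lo, hi = np.array([-1.0, -1.0]), np.array([1.0, 1.0])
W, b = np.array([[1.0, 1.0]]), np.array([0.5])
lo, hi = relu_interval(*affine_interval(lo, hi, W, b))
print(lo, hi)  # [0.] [2.5]
```

If the certified output bounds for the true class stay above those of every other class over the whole input box, the network is verified robust on that box; the paper's polyhedra-with-intervals domain tightens these bounds considerably.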
Posted Content

Federated Optimization: Distributed Optimization Beyond the Datacenter

TL;DR: This work introduces a new and increasingly relevant setting for distributed optimization in machine learning, where the data defining the optimization are distributed over an extremely large number of nodes, but the goal remains to train a high-quality centralized model.
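The setting described above is commonly realized by federated-averaging-style training: each node updates the model on its local data, and a server averages the results weighted by local dataset size. The sketch below uses least-squares SGD as the local learner purely for illustration; the paper treats general federated objectives, and all names here are assumptions.

```python
import numpy as np

def local_sgd(w, X, y, lr=0.1, epochs=5):
    """One client's local update: full-batch SGD on squared loss."""
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

def federated_round(w_global, clients):
    """One round: every node trains locally from the global model,
    then the server averages the models by local dataset size."""
    sizes = np.array([len(y) for _, y in clients], dtype=float)
    updates = [local_sgd(w_global.copy(), X, y) for X, y in clients]
    return np.average(updates, axis=0, weights=sizes)

# Two clients whose data follow the same linear rule y = 2*x1 - x2:
rng = np.random.default_rng(0)
w_true = np.array([2.0, -1.0])
clients = []
for n in (30, 70):
    X = rng.normal(size=(n, 2))
    clients.append((X, X @ w_true))

w = np.zeros(2)
for _ in range(20):
    w = federated_round(w, clients)
print(np.round(w, 2))  # recovers the true weights [2, -1]
```

The key property is that raw data never leaves a node; only model updates travel, which is what makes the setting relevant when data is spread over an extremely large number of nodes.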
Posted Content

HotFlip: White-Box Adversarial Examples for Text Classification

TL;DR: An efficient method is proposed for generating white-box adversarial examples that trick a character-level neural classifier. It relies on an atomic flip operation, which swaps one token for another, guided by the gradients of the one-hot input vectors.
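The flip operation admits a compact first-order approximation: swapping the token at position i from a to b changes the loss by roughly grad[i, b] - grad[i, a], where grad is the loss gradient with respect to the one-hot inputs, so the best single flip is an argmax over that matrix. The sketch below is an illustrative re-creation of that scoring rule, not the authors' code.

```python
import numpy as np

def best_flip(onehot, grad):
    """Pick the single token swap that most increases the loss.

    onehot: (positions, vocab) one-hot input matrix.
    grad:   dLoss/d(one-hot inputs), same shape.
    Returns (position, old_token, new_token).
    """
    current = onehot.argmax(axis=1)             # token id at each position
    rows = np.arange(len(current))
    # First-order gain of swapping current token a for each b:
    gain = grad - grad[rows, current][:, None]
    gain[rows, current] = -np.inf               # exclude no-op "swaps"
    pos, new_tok = np.unravel_index(gain.argmax(), gain.shape)
    return int(pos), int(current[pos]), int(new_tok)

onehot = np.eye(4)[[0, 2, 1]]   # 3-token sequence over a 4-word vocab
grad = np.array([[0.1, 0.2, 0.0, 0.0],
                 [0.0, 0.9, 0.1, 0.0],
                 [0.3, 0.0, 0.0, 0.2]])
print(best_flip(onehot, grad))  # (1, 2, 1): flip position 1 from token 2 to 1
```

Because every candidate flip is scored from a single backward pass, the attack needs far fewer forward evaluations than exhaustively trying each swap.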
Proceedings Article

Interpretability Beyond Feature Attribution: Quantitative Testing with Concept Activation Vectors (TCAV)

TL;DR: In this paper, the authors introduce Concept Activation Vectors (CAVs), which interpret a neural network's internal state in terms of human-friendly concepts. CAVs underpin a technique, Testing with CAVs (TCAV), that uses directional derivatives to quantify how important a user-defined concept is to a classification result.
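The two moving parts can be sketched in a few lines. Note the hedges: the paper derives a CAV by training a linear classifier to separate concept activations from random activations; the difference-of-means stand-in below, and all names, are simplifying assumptions for illustration.

```python
import numpy as np

def concept_activation_vector(concept_acts, random_acts):
    """A crude CAV: unit vector from random-example activations toward
    concept-example activations at a chosen layer (the paper instead
    uses the normal of a trained linear separator)."""
    v = concept_acts.mean(axis=0) - random_acts.mean(axis=0)
    return v / np.linalg.norm(v)

def tcav_score(grads, cav):
    """Fraction of examples whose directional derivative along the CAV
    is positive, i.e., whose class score rises when the layer's
    activations move toward the concept."""
    return float(np.mean(grads @ cav > 0))

concept = np.array([[1.0, 0.0], [0.9, 0.1]])   # activations w/ concept
random_ = np.array([[0.0, 0.0], [0.1, -0.1]])  # activations w/o concept
cav = concept_activation_vector(concept, random_)
grads = np.array([[0.5, 0.0], [0.2, 0.1], [-0.3, 0.0]])
print(tcav_score(grads, cav))  # 2 of 3 gradients align with the concept
```

A score near 1 says the concept consistently pushes the class score up across examples; the paper additionally tests significance against CAVs built from random splits.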