Open Access · Journal Article · DOI

FactSheets: Increasing trust in AI services through supplier's declarations of conformity

TLDR
This paper envisions an SDoC for AI services containing purpose, performance, safety, security, and provenance information, to be completed and voluntarily released by AI service providers for examination by consumers.
Abstract
Accuracy is an important concern for suppliers of artificial intelligence (AI) services, but considerations beyond accuracy, such as safety (which includes fairness and explainability), security, and provenance, are also critical elements to engender consumers’ trust in a service. Many industries use transparent, standardized, but often not legally required documents called supplier's declarations of conformity (SDoCs) to describe the lineage of a product along with the safety and performance testing it has undergone. SDoCs may be considered multidimensional fact sheets that capture and quantify various aspects of the product and its development to make it worthy of consumers’ trust. In this article, inspired by this practice, we propose FactSheets to help increase trust in AI services. We envision such documents to contain purpose, performance, safety, security, and provenance information to be completed by AI service providers for examination by consumers. We suggest a comprehensive set of declaration items tailored to AI in the Appendix of this article.


Citations
Proceedings Article · DOI

Researching AI Legibility through Design

TL;DR: This paper explores prior research to critically unpack the AI legibility problem space and responds with design proposals aimed at making systems that use AI more legible to their users.
Journal Article · DOI

Survey of Explainable AI Techniques in Healthcare

TL;DR: This paper surveys explainable AI techniques used in healthcare and related medical imaging applications, and provides guidelines for developing better interpretations of deep learning models using XAI concepts in medical image and text analysis.
Proceedings Article · DOI

Identifying Insufficient Data Coverage for Ordinal Continuous-Valued Attributes

TL;DR: The authors study the notion of coverage for ordinal and continuous-valued attributes, formalizing the intuition that a learned model can accurately predict only at data points for which there are "enough" similar points in the training data set.
Proceedings Article · DOI

Symphony: Composing Interactive Interfaces for Machine Learning

TL;DR: Symphony is a framework for composing interactive ML interfaces from task-specific, data-driven components that can be used across platforms such as computational notebooks and web dashboards.
Proceedings Article · DOI

Facilitating Knowledge Sharing from Domain Experts to Data Scientists for Building NLP Models

TL;DR: Ziva is a framework that guides domain experts in sharing essential domain knowledge with data scientists for building NLP models; experts distill and share their knowledge using domain concept extractors and five types of label justification over a representative data sample.