FactSheets: Increasing trust in AI services through supplier's declarations of conformity
Matthew Arnold, Rachel K. E. Bellamy, Michael Hind, Stephanie Houde, Sameep Mehta, Aleksandra Mojsilovic, Ravi Nair, K. Natesan Ramamurthy, Alexandra Olteanu, David Piorkowski, Darrell C. Reimer, John T. Richards, Jason Tsay, Kush R. Varshney +13 more
TLDR
This paper envisions an SDoC for AI services to contain purpose, performance, safety, security, and provenance information to be completed and voluntarily released by AI service providers for examination by consumers.

Abstract
Accuracy is an important concern for suppliers of artificial intelligence (AI) services, but considerations beyond accuracy, such as safety (which includes fairness and explainability), security, and provenance, are also critical elements to engender consumers’ trust in a service. Many industries use transparent, standardized, but often not legally required documents called supplier's declarations of conformity (SDoCs) to describe the lineage of a product along with the safety and performance testing it has undergone. SDoCs may be considered multidimensional fact sheets that capture and quantify various aspects of the product and its development to make it worthy of consumers’ trust. In this article, inspired by this practice, we propose FactSheets to help increase trust in AI services. We envision such documents to contain purpose, performance, safety, security, and provenance information to be completed by AI service providers for examination by consumers. We suggest a comprehensive set of declaration items tailored to AI in the Appendix of this article.
Citations
Special Issue on Responsible AI and Human-AI Interaction
James R. Foulds, Nora McDonald, Aaron K. Massey, Foad Hamidi, Alex Okeson, Rich Caruana, Nick Craswell, Kori Inkpen, Scott M. Lundberg, Harsha Nori, Hanna Wallach, Jennifer Wortman Vaughan, Patrick Gage Kelley, Yongwei Yang, Courtney Heldreth, Christopher Moessner, Aaron Sedley, Allison Woodruff, John Richards, Stephanie Houde, Aleksandra Mojsilovic +22 more
TL;DR: Reflects on why and how AATs need to be designed in collaboration with intersectional AAT users to ensure that the benefits of AI do not sacrifice privacy for the most vulnerable.
Posted Content
An Empirical Study of Accuracy, Fairness, Explainability, Distributional Robustness, and Adversarial Robustness
TL;DR: The authors evaluate multiple model types on metrics along these five dimensions across several datasets, show that no single model type performs well on all dimensions, and demonstrate the kinds of trade-offs involved in selecting models evaluated along multiple dimensions.
Posted Content
The Sanction of Authority: Promoting Public Trust in AI
Bran Knowles, John T. Richards +1 more
TL;DR: In this article, the authors argue that public distrust of AI originates from the under-development of a regulatory ecosystem that would guarantee the trustworthiness of the AIs that pervade society.
Proceedings ArticleDOI
Towards a Science of Human-AI Decision Making: An Overview of Design Space in Empirical Human-Subject Studies
TL;DR: In this article, the authors survey recent literature of empirical human-subject studies on human-AI decision making, and summarize the study design choices made in over 100 papers in three important aspects: (1) decision tasks, (2) AI assistance elements, and (3) evaluation metrics.
Journal ArticleDOI
Using Knowledge Graphs to Unlock Practical Collection, Integration, and Audit of AI Accountability Information
TL;DR: An approach utilizing semantic Knowledge Graphs to aid in the tasks of modelling, recording, viewing, and auditing accountability information related to the design stage of AI system development is presented, and the RAInS ontology is extended to satisfy the identified requirements.