Stephanie Houde

Researcher at IBM

Publications: 5
Citations: 647

Stephanie Houde is an academic researcher at IBM. The author has contributed to research on topics including Service provider and Declaration. The author has an h-index of 4 and has co-authored 5 publications receiving 247 citations.

Papers
Journal ArticleDOI

AI Fairness 360: An extensible toolkit for detecting and mitigating algorithmic bias

TL;DR: A new open-source Python toolkit for algorithmic fairness, AI Fairness 360 (AIF360), released under an Apache v2.0 license, to help facilitate the transition of fairness research algorithms for use in an industrial setting and to provide a common framework for fairness researchers to share and evaluate algorithms.
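The AIF360 toolkit packages group-fairness metrics for detecting algorithmic bias. As a minimal sketch of one such metric — disparate impact, the ratio of favorable-outcome rates between an unprivileged and a privileged group — computed in plain Python rather than with the toolkit itself (function and variable names here are illustrative, not AIF360's API):

```python
from collections import Counter

def disparate_impact(labels, groups, favorable=1, privileged="A"):
    """Ratio of favorable-outcome rates: unprivileged / privileged.

    A value near 1.0 indicates parity; the common "80% rule" flags
    values below 0.8 as potentially discriminatory.
    """
    counts = Counter(groups)
    fav = Counter(g for g, y in zip(groups, labels) if y == favorable)
    rates = {g: fav[g] / counts[g] for g in counts}
    unprivileged = next(g for g in counts if g != privileged)
    return rates[unprivileged] / rates[privileged]

# Toy data: group "A" is privileged, group "B" is unprivileged.
labels = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(disparate_impact(labels, groups))  # 0.25 / 0.75 -> 0.3333...
```

AIF360 itself wraps such metrics (and mitigation algorithms like reweighing) behind dataset and metric classes so that researchers can share and compare implementations in a common framework.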
Journal ArticleDOI

FactSheets: Increasing trust in AI services through supplier's declarations of conformity

TL;DR: This paper envisions an SDoC (supplier's declaration of conformity) for AI services containing purpose, performance, safety, security, and provenance information, to be completed and voluntarily released by AI service providers for examination by consumers.
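The paper enumerates the information an SDoC for an AI service should carry: purpose, performance, safety, security, and provenance. A minimal sketch of such a declaration as a data structure — the class name, field values, and layout below are illustrative assumptions, not a format the paper specifies:

```python
from dataclasses import dataclass, field, asdict

@dataclass
class FactSheet:
    """Illustrative SDoC record; fields follow the paper's enumeration."""
    purpose: str          # what the AI service is intended to do
    performance: dict     # measured results on a named evaluation set
    safety: str           # known limits and unsafe uses
    security: str         # how the model and data are protected
    provenance: list = field(default_factory=list)  # training-data lineage

sheet = FactSheet(
    purpose="Rank loan applications for human review",
    performance={"accuracy": 0.91, "eval_set": "holdout-2019"},
    safety="Not intended for fully automated denial decisions",
    security="Model weights encrypted at rest",
    provenance=["census-income v2", "internal-loans-2018"],
)
print(asdict(sheet))
```

The point of the voluntary declaration is that a consumer can examine exactly these fields before adopting the service.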
Journal ArticleDOI

Think Your Artificial Intelligence Software Is Fair? Think Again

TL;DR: While fair model-assisted decision making involves more than the application of unbiased models (consideration of application context, specifics of the decisions being made, resolution of conflicting stakeholder viewpoints, and so forth), mitigating bias in machine-learning software is important and possible, but difficult and too often ignored.
Proceedings ArticleDOI

AI explainability 360: hands-on tutorial

TL;DR: This tutorial will teach participants to use and contribute to a new open-source Python package named AI Explainability 360 (AIX360) (https://aix360.mybluemix.net), a comprehensive and extensible toolkit that supports interpretability and explainability of data and machine learning models.
Proceedings ArticleDOI

AI Explainability 360 Toolkit

TL;DR: An open-source software toolkit featuring eight diverse state-of-the-art explainability methods, two evaluation metrics, and an extensible software architecture that organizes these methods according to their use in the AI modeling pipeline. The toolkit aims to improve the transparency of machine learning models and provides a platform for integrating new explainability techniques as they are developed.
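The extensibility described above — organizing explainability methods by their place in the modeling pipeline and letting new techniques plug in — can be sketched with a small registry and abstract base class. This is a hand-rolled illustration under assumed names, not AIX360's actual architecture; the toy "zero-out" explainer exists only to exercise the interface:

```python
from abc import ABC, abstractmethod

# Registry keyed by pipeline stage (e.g. data, local post-hoc, global post-hoc).
EXPLAINERS = {}

def register(stage):
    """Class decorator adding an explainer to the stage's registry."""
    def wrap(cls):
        EXPLAINERS.setdefault(stage, []).append(cls)
        return cls
    return wrap

class Explainer(ABC):
    @abstractmethod
    def explain(self, model, instance):
        """Return a per-feature attribution for one prediction."""

@register("local_posthoc")
class ZeroOutExplainer(Explainer):
    """Toy local method: score change when each feature is zeroed out."""
    def explain(self, model, instance):
        base = model(instance)
        return {
            i: base - model([0 if j == i else v for j, v in enumerate(instance)])
            for i in range(len(instance))
        }

model = lambda x: 2 * x[0] + x[1]  # stand-in linear "model"
attributions = EXPLAINERS["local_posthoc"][0]().explain(model, [1.0, 3.0])
print(attributions)  # {0: 2.0, 1: 3.0}
```

A new technique is added by writing one class and one decorator line, which is the design property the toolkit's architecture is built around.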