Stephanie Houde
Researcher at IBM
Publications - 5
Citations - 647
Stephanie Houde is an academic researcher at IBM. The author has contributed to research in topics including Service provider and Declaration, has an h-index of 4, and has co-authored 5 publications receiving 247 citations.
Papers
Journal ArticleDOI
AI Fairness 360: An extensible toolkit for detecting and mitigating algorithmic bias
Rachel K. E. Bellamy, Kuntal Dey, Michael Hind, Samuel C. Hoffman, Stephanie Houde, Kalapriya Kannan, Pranay Lohia, Jacquelyn A. Martino, Shalin Mehta, Aleksandra Mojsilovic, Seema Nagar, K. Natesan Ramamurthy, John T. Richards, Debanjan Saha, Prasanna Sattigeri, Moninder Singh, Kush R. Varshney, Yunfeng Zhang +17 more
TL;DR: A new open-source Python toolkit for algorithmic fairness, AI Fairness 360 (AIF360), released under an Apache v2.0 license, to help facilitate the transition of fairness research algorithms for use in an industrial setting and to provide a common framework for fairness researchers to share and evaluate algorithms.
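AIF360 packages many group-fairness metrics behind a common API. As a minimal, self-contained illustration of the kind of statistic the toolkit computes (not the toolkit's own API), the sketch below implements disparate impact, the ratio of favorable-outcome rates between an unprivileged and a privileged group; the loan-decision data and group labels are invented for illustration.

```python
# Disparate impact: ratio of favorable-outcome rates between the
# unprivileged and privileged groups. A value of 1.0 indicates parity;
# the common "80% rule" flags values below 0.8 as potentially biased.
def disparate_impact(outcomes, groups, favorable=1, privileged="A"):
    priv = [o for o, g in zip(outcomes, groups) if g == privileged]
    unpriv = [o for o, g in zip(outcomes, groups) if g != privileged]
    rate_priv = sum(o == favorable for o in priv) / len(priv)
    rate_unpriv = sum(o == favorable for o in unpriv) / len(unpriv)
    return rate_unpriv / rate_priv

# Hypothetical loan decisions: 1 = approved, 0 = denied.
outcomes = [1, 1, 0, 1, 0, 0, 1, 0]
groups   = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(disparate_impact(outcomes, groups))  # 1/4 vs 3/4 approval -> ~0.33
```

AIF360 exposes this and related metrics (statistical parity difference, equal opportunity difference, and others) through dataset and metric classes, along with bias-mitigation algorithms that adjust data, models, or predictions.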
Journal ArticleDOI
FactSheets: Increasing trust in AI services through supplier's declarations of conformity
Matthew Arnold, Rachel K. E. Bellamy, Michael Hind, Stephanie Houde, Sameep Mehta, Aleksandra Mojsilovic, Ravi Nair, K. Natesan Ramamurthy, Alexandra Olteanu, David Piorkowski, Darrell C. Reimer, John T. Richards, Jason Tsay, Kush R. Varshney +13 more
TL;DR: This paper envisions an SDoC for AI services containing purpose, performance, safety, security, and provenance information, to be completed and voluntarily released by AI service providers for examination by consumers.
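The supplier's declaration of conformity (later called a FactSheet) is essentially a structured record covering the five categories the paper names: purpose, performance, safety, security, and provenance. As a hedged sketch of such a record, the field structure below follows those five categories; every concrete value is hypothetical, not drawn from the paper.

```python
from dataclasses import dataclass, asdict

# A minimal FactSheet-style record covering the five categories the
# paper names. All concrete values here are hypothetical examples.
@dataclass
class FactSheet:
    purpose: str
    performance: dict
    safety: str
    security: str
    provenance: str

sheet = FactSheet(
    purpose="Score loan applications for credit risk",
    performance={"accuracy": 0.91, "test_set": "holdout-2019"},
    safety="Not for automated final decisions without human review",
    security="Model artifacts signed and access-controlled",
    provenance="Trained on vendor dataset v3, April 2019",
)
print(asdict(sheet)["purpose"])
```

Representing the declaration as structured data rather than free text is what lets consumers compare services field by field, which is the paper's stated goal of increasing trust through standardized disclosure.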
Journal ArticleDOI
Think Your Artificial Intelligence Software Is Fair? Think Again
Rachel K. E. Bellamy, Kuntal Dey, Michael Hind, Samuel C. Hoffman, Stephanie Houde, Kalapriya Kannan, Pranay Lohia, Sameep Mehta, Aleksandra Mojsilovic, Seema Nagar, Karthikeyan Natesan Ramamurthy, John T. Richards, Diptikalyan Saha, Prasanna Sattigeri, Moninder Singh, Kush R. Varshney, Yunfeng Zhang +16 more
TL;DR: While fair model-assisted decision making involves more than the application of unbiased models (consideration of application context, specifics of the decisions being made, resolution of conflicting stakeholder viewpoints, and so forth), mitigating bias from machine-learning software is important and possible, but difficult and too often ignored.
Proceedings ArticleDOI
AI explainability 360: hands-on tutorial
Vijay Arya, Rachel K. E. Bellamy, Pin-Yu Chen, Amit Dhurandhar, Michael Hind, Samuel C. Hoffman, Stephanie Houde, Q. Vera Liao, Ronny Luss, Aleksandra Mojsilovic, Sami Mourad, Pablo Pedemonte, Ramya Raghavendra, John T. Richards, Prasanna Sattigeri, Karthikeyan Shanmugam, Moninder Singh, Kush R. Varshney, Dennis Wei, Yunfeng Zhang +19 more
TL;DR: This tutorial will teach participants to use and contribute to a new open-source Python package named AI Explainability 360 (AIX360) (https://aix360.mybluemix.net), a comprehensive and extensible toolkit that supports interpretability and explainability of data and machine learning models.
Proceedings ArticleDOI
AI Explainability 360 Toolkit
Vijay Arya, Rachel K. E. Bellamy, Pin-Yu Chen, Amit Dhurandhar, Michael Hind, Samuel C. Hoffman, Stephanie Houde, Q. Vera Liao, Ronny Luss, Aleksandra Mojsilovic, Sami Mourad, Pablo Pedemonte, Ramya Raghavendra, John T. Richards, Prasanna Sattigeri, Karthikeyan Shanmugam, Moninder Singh, Kush R. Varshney, Dennis Wei, Yunfeng Zhang +19 more
TL;DR: An open-source software toolkit featuring eight diverse state-of-the-art explainability methods, two evaluation metrics, and an extensible software architecture that organizes these methods according to their use in the AI modeling pipeline. The toolkit aims to improve the transparency of machine learning models and provides a platform for integrating new explainability techniques as they are developed.
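AIX360 organizes its explainers by where they sit in the modeling pipeline (data versus model, global versus local, directly interpretable versus post hoc). As a minimal, self-contained illustration of one such category, a global explanation of a directly interpretable linear scorer can be read straight from its weights; the model and feature names below are invented and do not come from the toolkit.

```python
# Global explanation of a directly interpretable model: for a linear
# scorer, ranking features by the magnitude of their learned weight
# shows which inputs drive the score most. All values are hypothetical.
weights = {"income": 0.8, "debt_ratio": -1.5, "age": 0.1}

def global_importance(weights):
    # Most influential feature first, by absolute weight.
    return sorted(weights, key=lambda f: abs(weights[f]), reverse=True)

print(global_importance(weights))  # ['debt_ratio', 'income', 'age']
```

Post-hoc methods in toolkits like AIX360 tackle the harder case where the model is not directly interpretable, producing local or global explanations of an opaque model's behavior instead.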