Open Access Journal Article (DOI)

On the Safety of Machine Learning: Cyber-Physical Systems, Decision Sciences, and Data Products

Kush R. Varshney, +1 more
Vol. 5, Iss. 3, pp. 246-255
Abstract
Machine learning algorithms increasingly influence our decisions and interact with us in all parts of our daily lives. Therefore, just as we consider the safety of power plants, highways, ...


Citations
Journal Article (DOI)

Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead

TL;DR: This Perspective clarifies the chasm between explaining black boxes and using inherently interpretable models, outlines several key reasons why explainable black boxes should be avoided in high-stakes decisions, identifies challenges to interpretable machine learning, and provides several example applications where interpretable models could potentially replace black box models in criminal justice, healthcare, and computer vision.
Posted Content

Towards A Rigorous Science of Interpretable Machine Learning

TL;DR: This position paper defines interpretability and describes when interpretability is needed (and when it is not), and suggests a taxonomy for rigorous evaluation and exposes open questions towards a more rigorous science of interpretable machine learning.
Journal Article (DOI)

Machine Learning Interpretability: A Survey on Methods and Metrics

TL;DR: A review of the current state of the research field on machine learning interpretability while focusing on the societal impact and on the developed methods and metrics is provided.
Posted Content

Stop Explaining Black Box Machine Learning Models for High Stakes Decisions and Use Interpretable Models Instead

TL;DR: In this article, the chasm between explaining black box models and using inherently interpretable models is identified, and several key reasons are given why explained black box models should be avoided in high-stakes decisions.
References
Journal Article (DOI)

Improving predictive inference under covariate shift by weighting the log-likelihood function

TL;DR: A class of predictive densities is derived by weighting the observed samples when maximizing the log-likelihood function; the approach is effective in cases such as sample surveys or designed experiments, where the observed covariates follow a different distribution than in the whole population.
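The weighting scheme summarized above can be illustrated with a minimal sketch, not taken from the paper itself: each training sample is weighted by the density ratio between the test and training covariate distributions before maximizing the log-likelihood. The Gaussian train/test densities and the closed-form weighted estimator below are illustrative assumptions.

```python
import math
import random

def weighted_mle_gaussian_mean(samples, weights):
    """Maximize the importance-weighted Gaussian log-likelihood for the mean.

    For a Gaussian with known variance, the weighted MLE of the mean
    reduces to the weighted average of the samples."""
    total_w = sum(weights)
    return sum(w * x for w, x in zip(weights, samples)) / total_w

def density_ratio(x, mu_train=0.0, mu_test=1.0, sigma=1.0):
    # w(x) = p_test(x) / p_train(x) for two Gaussians with equal variance
    log_w = ((x - mu_train) ** 2 - (x - mu_test) ** 2) / (2 * sigma ** 2)
    return math.exp(log_w)

random.seed(0)
# Training covariates drawn from N(0, 1); the test population follows
# N(1, 1), so samples near 1 receive larger weights.
samples = [random.gauss(0.0, 1.0) for _ in range(5000)]
weights = [density_ratio(x) for x in samples]

unweighted = sum(samples) / len(samples)                 # tracks training mean (~0)
weighted = weighted_mle_gaussian_mean(samples, weights)  # shifts toward test mean (~1)
```

The unweighted estimate stays near the training mean, while the weighted one moves toward the test-population mean, which is the point of weighting the log-likelihood under covariate shift.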
Journal Article (DOI)

Big Data's Disparate Impact

TL;DR: In the absence of a demonstrable intent to discriminate, the best doctrinal hope for data mining's victims lies in disparate impact doctrine, which holds that a practice can be justified as a business necessity when its outcomes are predictive of future employment outcomes; data mining is specifically designed to find such statistical correlations.
Journal Article (DOI)

European Union Regulations on Algorithmic Decision-Making and a “Right to Explanation”

TL;DR: It is argued that while this law will pose large challenges for industry, it highlights opportunities for computer scientists to take the lead in designing algorithms and evaluation frameworks which avoid discrimination and enable explanation.
Journal Article (DOI)

Mining with rarity: a unifying framework

TL;DR: It is demonstrated that rare classes and rare cases are very similar phenomena: both forms of rarity are shown to cause similar problems during data mining and to benefit from the same remediation methods.
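One of the standard remediation methods for rare classes is random oversampling of the minority class. The sketch below is a minimal, hypothetical illustration of that idea (not code from the survey): rare-class examples are duplicated with replacement until the class counts balance.

```python
import random

def random_oversample(examples, labels, target_label, seed=0):
    """Duplicate rare-class examples (sampled with replacement) until the
    target class matches the largest class count.

    Assumes target_label is a minority class; a production resampler
    would also handle multi-class balancing and stratification."""
    rng = random.Random(seed)
    majority = max(labels.count(lab) for lab in set(labels))
    rare = [x for x, y in zip(examples, labels) if y == target_label]
    needed = max(majority - len(rare), 0)
    extra = [rng.choice(rare) for _ in range(needed)]
    return examples + extra, labels + [target_label] * needed

X = list(range(10))
y = [0] * 9 + [1]          # class 1 is rare: 1 example out of 10
X2, y2 = random_oversample(X, y, target_label=1)
# the two classes are now equally represented (9 each)
```

Oversampling does not add new information, but it keeps a learner from treating the rare class as noise, which is the failure mode the survey analyzes.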
Proceedings Article (DOI)

Intelligible Models for HealthCare: Predicting Pneumonia Risk and Hospital 30-day Readmission

TL;DR: This work presents two case studies where high-performance generalized additive models with pairwise interactions (GA2Ms) are applied to real healthcare problems yielding intelligible models with state-of-the-art accuracy.
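What makes a GA2M intelligible is its structure: a sum of one-dimensional shape functions plus a small set of pairwise interaction terms, each of which can be plotted and inspected on its own. The sketch below shows only that prediction structure; the shape functions, feature names, and thresholds are entirely hypothetical and are not the fitted models from the paper.

```python
def ga2m_predict(x, main_effects, pair_effects, intercept=0.0):
    """Score a GA2M-shaped model.

    x: dict mapping feature name -> value.
    main_effects: dict mapping feature name -> callable f_i(x_i).
    pair_effects: dict mapping (name_a, name_b) -> callable f_ij(x_a, x_b).
    The prediction is intercept + sum of main effects + sum of pair effects."""
    score = intercept
    for name, f in main_effects.items():
        score += f(x[name])
    for (a, b), f in pair_effects.items():
        score += f(x[a], x[b])
    return score

# Hypothetical shape functions for a toy risk score:
main = {
    "age": lambda v: 0.03 * max(v - 50, 0),        # risk rises past age 50
    "resp_rate": lambda v: 0.05 * max(v - 20, 0),  # risk rises past 20 breaths/min
}
pairs = {
    # one pairwise term: extra risk only when both features are elevated
    ("age", "resp_rate"): lambda a, r: 0.01 * (a > 65) * (r > 24),
}
risk = ga2m_predict({"age": 70, "resp_rate": 26}, main, pairs, intercept=-1.0)
```

Because each term depends on at most two features, every contribution to `risk` can be read off directly, which is the sense in which such models are state-of-the-art in accuracy yet still intelligible.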