Open Access · Journal Article · DOI

Explainable Artificial Intelligence: Objectives, Stakeholders, and Future Research Opportunities

TL;DR: This research note describes exemplary risks of black-box AI, the consequent need for explainability, and previous research on Explainable AI (XAI) in information systems research.
Abstract
Artificial Intelligence (AI) has diffused into many areas of our private and professional life. In this research note, we describe exemplary risks of black-box AI, the consequent need for explainability, and previous research on Explainable AI (XAI) in information systems research.


Citations
Journal Article · DOI

Explainable artificial intelligence: a comprehensive review

TL;DR: This article reviews and analyzes a broad range of explainable artificial intelligence (XAI) methods, grouping them into (i) pre-modeling explainability, (ii) interpretable models, and (iii) post-modeling explainability.
Posted Content

User Acceptance of Knowledge-Based System Recommendations: Explanations, Arguments, and Fit

TL;DR: This study examines how the fit between KBS explanations and users' internal explanations influences acceptance of KBS recommendations, comparing the predictions of cognitive fit theory (CFT) to those of the person-environment fit (PEF) paradigm. CFT is supported in the sense that people are influenced more by cognitively fitting explanations, whereas PEF is supported in the sense that people take more time to evaluate the explanation.
Journal Article · DOI

Human-in-the-loop machine learning: a state of the art

TL;DR: This state-of-the-art survey covers human-in-the-loop machine learning (HILML), a form of interaction between humans and machine learning algorithms in which humans can be involved in the learning process itself rather than only supplying data.
Journal Article · DOI

How to explain AI systems to end users: a systematic literature review and research agenda

TL;DR: Through a systematic literature review, the authors investigate how AI systems and their decisions ought to be explained to end users, and they provide a design framework for doing so.
Posted Content

Deceptive AI Explanations: Creation and Detection

TL;DR: The study confirms that deceptive explanations can indeed fool humans, while machine learning methods can detect even seemingly minor attempts at deception with accuracy exceeding 80%, given sufficient domain knowledge in the form of training data.
References
Proceedings Article · DOI

"Why Should I Trust You?": Explaining the Predictions of Any Classifier

TL;DR: In this article, the authors propose LIME, a method to explain models by presenting representative individual predictions and their explanations in a non-redundant way, framing the task as a submodular optimization problem.
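For orientation, here is a minimal sketch of how LIME is typically applied through the open-source `lime` Python package; the dataset, model, and parameter choices are illustrative assumptions rather than details from the paper.

```python
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Fit an arbitrary (black-box) classifier on a toy dataset.
data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0
)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

# LIME perturbs an instance, queries the model, and fits a sparse
# local linear surrogate whose weights serve as the explanation.
explainer = LimeTabularExplainer(
    X_train,
    feature_names=list(data.feature_names),
    class_names=list(data.target_names),
    mode="classification",
)
exp = explainer.explain_instance(X_test[0], clf.predict_proba, num_features=5)
print(exp.as_list())  # [(feature condition, local weight), ...]
```

The paper's submodular-pick step, which selects a small, non-redundant set of such explanations to summarize the model as a whole, is available in the same package but omitted here for brevity.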
Journal Article · DOI

Mastering the game of Go without human knowledge

TL;DR: An algorithm based solely on reinforcement learning is introduced, without human data, guidance or domain knowledge beyond game rules, that achieves superhuman performance, winning 100–0 against the previously published, champion-defeating AlphaGo.
Journal Article · DOI

Minds, brains, and programs

TL;DR: Argues that only a machine could think, and only very special kinds of machines, namely brains and machines with internal causal powers equivalent to those of brains; no program by itself is sufficient for thinking.
Book

Minds, Brains, and Programs

TL;DR: The main argument of the paper is directed at establishing the claim that no program by itself is sufficient for thinking; the form of the argument is to show how a human agent could instantiate the program and still not have the relevant intentionality.
Journal Article · DOI

Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead

TL;DR: This Perspective clarifies the chasm between explaining black boxes and using inherently interpretable models, outlines several key reasons why explainable black boxes should be avoided in high-stakes decisions, identifies challenges to interpretable machine learning, and provides several example applications where interpretable models could potentially replace black-box models in criminal justice, healthcare, and computer vision.
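As a concrete illustration of the alternative the Perspective advocates, the sketch below (an assumption built on scikit-learn, not code from the paper) trains an inherently interpretable model whose entire decision logic can be printed and audited, so no post-hoc explainer is needed.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()

# A shallow decision tree is interpretable by construction: every
# prediction can be traced along an explicit path of feature thresholds.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(data.data, data.target)
print(export_text(tree, feature_names=list(data.feature_names)))
```

Whether such a model matches black-box accuracy is domain-dependent; the Perspective's point is that in high-stakes settings this trade-off should be tested rather than assumed.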