Book Chapter

Explaining Recommendations: Design and Evaluation

TLDR
This chapter describes how explanations are affected by the way recommendations are presented and by the user's interaction with the recommender system, introduces a number of explanation styles, and relates them to the underlying algorithms.
Abstract
This chapter gives an overview of the area of explanations in recommender systems. We approach the literature from the angle of evaluation: that is, we are interested in what makes an explanation “good”. The chapter starts by describing how explanations can be affected by how recommendations are presented, and the role the interaction with the recommender system plays w.r.t. explanations. Next, we introduce a number of explanation styles, and how they are related to the underlying algorithms. We identify seven benefits that explanations may contribute to a recommender system, and relate them to criteria used in evaluations of explanations in existing recommender systems. We conclude the chapter with outstanding research questions and future work, including current recommender systems topics such as social recommendations and serendipity. Examples of explanations in existing systems are mentioned throughout.


Citations
Journal Article

A Survey of Methods for Explaining Black Box Models

TL;DR: In this paper, the authors provide a classification of the main problems addressed in the literature with respect to the notion of explanation and the type of black-box decision support system. Given a problem definition, a black-box type, and a desired explanation, this survey should help researchers find the proposals most useful for their own work.
Posted Content

Manipulating and Measuring Model Interpretability

TL;DR: A sequence of pre-registered experiments showed participants functionally identical models that varied only in two factors commonly thought to make machine learning models more or less interpretable: the number of features, and the transparency of the model (i.e., whether the model's internals are clear or black-box).
Posted Content

How do Humans Understand Explanations from Machine Learning Systems? An Evaluation of the Human-Interpretability of Explanation.

TL;DR: The authors identify which kinds of increases in complexity have the greatest effect on the time it takes humans to verify a rationale, and which have relatively little effect, in the specific context of verification.
Proceedings Article

'It's Reducing a Human Being to a Percentage': Perceptions of Justice in Algorithmic Decisions

TL;DR: In this paper, the authors examine people's perceptions of justice in algorithmic decision-making under different scenarios and explanation styles, and find that explanation styles primarily matter to justice perceptions only when subjects are exposed to multiple different styles.
References
Proceedings Article

Item-based collaborative filtering recommendation algorithms

TL;DR: This paper analyzes item-based collaborative filtering techniques and suggests that item-based algorithms provide dramatically better performance than user-based algorithms, while at the same time providing better quality than the best available user-based algorithms.
Journal Article

Hybrid Recommender Systems: Survey and Experiments

TL;DR: This paper surveys the landscape of actual and possible hybrid recommenders, and introduces a novel hybrid, EntreeC, a system that combines knowledge-based recommendation and collaborative filtering to recommend restaurants; it shows that semantic ratings obtained from the knowledge-based part of the system enhance the effectiveness of collaborative filtering.
Proceedings Article

Heuristic evaluation of user interfaces

TL;DR: Four experiments showed that individual evaluators were mostly quite bad at doing heuristic evaluations, finding only between 20% and 51% of the usability problems in the interfaces they evaluated.
Journal Article

Construction and Validation of a Scale to Measure Celebrity Endorsers' Perceived Expertise, Trustworthiness, and Attractiveness

TL;DR: The authors developed a 15-item semantic differential scale to measure perceived expertise, trustworthiness, and attractiveness of celebrity endorsers, which was validated using respondents' self-reported measures of intention to purchase and perception of quality for the products being tested.
Proceedings Article

Explaining collaborative filtering recommendations

TL;DR: This paper presents experimental evidence showing that providing explanations can improve the acceptance of automated collaborative filtering (ACF) systems, and presents a model for explanations based on the user's conceptual model of the recommendation process.