Brian Y. Lim

Researcher at National University of Singapore

Publications -  62
Citations -  2859

Brian Y. Lim is an academic researcher at the National University of Singapore. He has contributed to research on topics including intelligibility (communication) and computer science. He has an h-index of 18 and has co-authored 56 publications receiving 1956 citations. His previous affiliations include the Institute for Infocomm Research Singapore and Carnegie Mellon University.

Papers
Proceedings ArticleDOI

Trends and Trajectories for Explainable, Accountable and Intelligible Systems: An HCI Research Agenda

TL;DR: This work investigates how HCI researchers can help develop accountable systems through a literature analysis of 289 core papers on explanations and explainable systems, as well as 12,412 citing papers.
Proceedings ArticleDOI

Designing Theory-Driven User-Centric Explainable AI

TL;DR: This paper proposes a conceptual framework for building human-centered, decision-theory-driven XAI based on an extensive review across philosophy and psychology, identifying pathways along which human cognitive patterns drive the need for XAI and showing how XAI can mitigate common cognitive biases.
Proceedings ArticleDOI

Why and why not explanations improve the intelligibility of context-aware intelligent systems

TL;DR: It is shown that automatically providing explanations describing why a context-aware system behaved a certain way resulted in better user understanding and stronger feelings of trust.
Proceedings ArticleDOI

Assessing demand for intelligibility in context-aware applications

TL;DR: This work discusses why users demand certain types of information and provides design implications for supplying different intelligibility types to make context-aware applications intelligible and acceptable to users.
Proceedings ArticleDOI

Toolkit to support intelligibility in context-aware applications

TL;DR: The Intelligibility Toolkit makes it easy for application developers to obtain eight types of explanations from the most popular decision models used in context-aware applications; its extensible architecture and explanation generation algorithms are described.