Author

Jo Vermeulen

Bio: Jo Vermeulen is an academic researcher from Aarhus University. The author has contributed to research in topics: User interface & Situated. The author has an h-index of 19 and has co-authored 71 publications receiving 1,763 citations. Previous affiliations of Jo Vermeulen include University of Calgary & University of Birmingham.


Papers
Proceedings ArticleDOI
21 Apr 2018
TL;DR: This work investigates how HCI researchers can help to develop accountable systems by performing a literature analysis of 289 core papers on explanations and explainable systems, as well as 12,412 citing papers.
Abstract: Advances in artificial intelligence, sensors and big data management have far-reaching societal impacts. As these systems augment our everyday lives, it becomes increasingly important for people to understand them and remain in control. We investigate how HCI researchers can help to develop accountable systems by performing a literature analysis of 289 core papers on explanations and explainable systems, as well as 12,412 citing papers. Using topic modeling, co-occurrence and network analysis, we mapped the research space from diverse domains, such as algorithmic accountability, interpretable machine learning, context-awareness, cognitive psychology, and software learnability. We reveal fading and burgeoning trends in explainable systems, and identify domains that are closely connected or mostly isolated. The time is ripe for the HCI community to ensure that the powerful new autonomous systems have intelligible interfaces built-in. From our results, we propose several implications and directions for future research towards this goal.
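The abstract names topic modeling and co-occurrence analysis as the mapping method. The sketch below is a minimal, hypothetical illustration of that kind of pipeline, not the paper's actual code: it fits a small LDA model over an invented stand-in corpus of abstracts and counts term co-occurrences, assuming scikit-learn is installed.

# Minimal illustrative sketch (not the paper's pipeline): topic modeling and term
# co-occurrence over a tiny, hypothetical stand-in corpus of paper abstracts.
from collections import Counter
from itertools import combinations

from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

# Invented stand-ins for the analyzed abstracts (for illustration only).
abstracts = [
    "interpretable machine learning models explain their predictions to users",
    "context aware systems expose intelligibility and let users stay in control",
    "software learnability research studies how help systems guide novice users",
]

# Bag-of-words matrix over the corpus.
vectorizer = CountVectorizer(stop_words="english")
doc_term = vectorizer.fit_transform(abstracts)

# Fit a small LDA model and print the top terms of each topic.
lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(doc_term)
terms = vectorizer.get_feature_names_out()
for topic_id, weights in enumerate(lda.components_):
    top_terms = [terms[i] for i in weights.argsort()[::-1][:5]]
    print(f"topic {topic_id}: {', '.join(top_terms)}")

# Count term pairs that occur in the same abstract; such counts are the raw
# material for a co-occurrence network.
pair_counts = Counter()
for text in abstracts:
    tokens = sorted(set(text.split()))
    pair_counts.update(combinations(tokens, 2))
print(pair_counts.most_common(3))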

539 citations

Proceedings ArticleDOI
07 May 2016
TL;DR: The study finds that even a notification containing important or useful content can cause disruption, and observes the substantial role of individuals' psychological traits in the response time and the disruption perceived from a notification.
Abstract: Notifications are extremely beneficial to users, but they often demand users' attention at inappropriate moments. In this paper we present an in-situ study of mobile interruptibility focusing on the effect of cognitive and physical factors on the response time and the disruption perceived from a notification. Through a mixed method of automated smartphone logging and experience sampling we collected 10,372 in-the-wild notifications and 474 questionnaire responses on notification perception from 20 users. We found that the response time and the perceived disruption from a notification can be influenced by its presentation, alert type, sender-recipient relationship as well as the type, completion level and complexity of the task in which the user is engaged. We found that even a notification that contains important or useful content can cause disruption. Finally, we observe the substantial role of the psychological traits of the individuals on the response time and the disruption perceived from a notification.

227 citations

Proceedings ArticleDOI
13 Sep 2014
TL;DR: This work provides a discussion of ongoing and emerging challenges, namely challenges for meaningful technologies, complex domestic spaces, and human-home collaboration, and identifies promising directions for the field.
Abstract: A considerable amount of research has been carried out towards making long-standing smart home visions technically feasible. The technologically augmented homes made possible by this work are starting to become reality, but thus far living in and interacting with such homes has introduced significant complexity while offering limited benefit. As these technologies are increasingly adopted, the knowledge we gain from their use suggests a need to revisit the opportunities and challenges they pose. Synthesizing a broad body of research on smart homes with observations of industry and experiences from our own empirical work, we provide a discussion of ongoing and emerging challenges, namely challenges for meaningful technologies, complex domestic spaces, and human-home collaboration. Within each of these three challenges we discuss our visions for future smart homes and identify promising directions for the field.

176 citations

Proceedings ArticleDOI
19 Apr 2018
TL;DR: An analysis of 68 published toolkit papers provides an overview of, reflection on, and discussion of evaluation methods for toolkit contributions, and identifies and discusses the value of four toolkit evaluation strategies, including the associated techniques that each employs.
Abstract: Toolkit research plays an important role in the field of HCI, as it can heavily influence both the design and implementation of interactive systems. For publication, the HCI community typically expects toolkit research to include an evaluation component. The problem is that toolkit evaluation is challenging, as it is often unclear what 'evaluating' a toolkit means and what methods are appropriate. To address this problem, we analyzed 68 published toolkit papers. From our analysis, we provide an overview of, reflection on, and discussion of evaluation methods for toolkit contributions. We identify and discuss the value of four toolkit evaluation strategies, including the associated techniques that each employs. We offer a categorization of evaluation strategies for toolkit researchers, along with a discussion of the value, potential limitations, and trade-offs associated with each strategy.

176 citations

Proceedings ArticleDOI
21 Jun 2014
TL;DR: This work offers a critical perspective on proxemic interactions in the form of dark patterns: ways proxemic interactions can be misused. It identifies several root problems that underlie these patterns and discusses potential solutions that could lower their harmfulness.
Abstract: Proxemics theory explains people's use of interpersonal distances to mediate their social interactions with others. Within Ubicomp, proxemic interaction researchers argue that people have a similar social understanding of their spatial relations with nearby digital devices, which can be exploited to better facilitate seamless and natural interactions. To do so, both people and devices are tracked to determine their spatial relationships. While interest in proxemic interactions has increased over the last few years, it also has a dark side: knowledge of proxemics may (and likely will) be easily exploited to the detriment of the user. In this paper, we offer a critical perspective on proxemic interactions in the form of dark patterns: ways proxemic interactions can be misused. We discuss a series of these patterns and describe how they apply to these types of interactions. In addition, we identify several root problems that underlie these patterns and discuss potential solutions that could lower their harmfulness.
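As a purely illustrative aside, the abstract's premise that people and devices are tracked to determine spatial relationships can be made concrete with a small sketch. It assumes rough proxemic-zone boundaries inspired by Hall's theory and hypothetical 2D positions; it is not code or data from the paper.

# Minimal, hypothetical sketch: map the tracked distance between a person and a
# device onto approximate proxemic zones (boundaries in metres are rough values,
# not taken from the paper).
import math

ZONES = [
    (0.45, "intimate"),
    (1.2, "personal"),
    (3.6, "social"),
    (float("inf"), "public"),
]

def proxemic_zone(person_xy, device_xy):
    """Classify the distance between two tracked 2D positions into a zone."""
    distance = math.dist(person_xy, device_xy)
    for boundary, zone in ZONES:
        if distance <= boundary:
            return zone, distance

# Example: a person standing 0.8 m from a public display.
zone, d = proxemic_zone((0.0, 0.0), (0.8, 0.0))
print(f"{d:.2f} m -> {zone} zone")  # 0.80 m -> personal zone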

124 citations


Cited by
Journal Article
TL;DR: Thaler and Sunstein's book provides a general explanation of and advocacy for libertarian paternalism, a term coined by the authors in earlier publications, as a general approach to how leaders, systems, organizations, and governments can nudge people to do the things the nudgers want and need done for the betterment of the nudgees, or of society.
Abstract: NUDGE: IMPROVING DECISIONS ABOUT HEALTH, WEALTH, AND HAPPINESS by Richard H. Thaler and Cass R. Sunstein. Penguin Books, 2009, 312 pp, ISBN 978-0-14-311526-7. This book is best described formally as a general explanation of and advocacy for libertarian paternalism, a term coined by the authors in earlier publications. Informally, it is about how leaders, systems, organizations, and governments can nudge people to do the things the nudgers want and need done for the betterment of the nudgees, or of society. It is paternalism in the sense that "it is legitimate for choice architects to try to influence people's behavior in order to make their lives longer, healthier, and better" (p. 5). It is libertarian in that "people should be free to do what they like - and to opt out of undesirable arrangements if they want to do so" (p. 5). The built-in possibility of opting out or making a different choice preserves freedom of choice even though people's behavior has been influenced by the nature of the presentation of the information or by the structure of the decision-making system. I had never heard of libertarian paternalism before reading this book, and I now find it fascinating. Written for a general audience, this book contains mostly social and behavioral science theory and models, but there is considerable discussion of structure and process that has roots in mathematical and quantitative modeling. One of the main applications of this social system is economic choice in investing, selecting and purchasing products and services, systems of taxes, banking (mortgages, borrowing, savings), and retirement systems. Other quantitative social choice systems discussed include environmental effects, health care plans, gambling, and organ donations. Softer issues that are also subject to a nudge-based approach are marriage, education, eating, drinking, smoking, influence, spread of information, and politics. There is something in this book for everyone. The basis for this libertarian paternalism concept is in the social theory called "science of choice", the study of the design and implementation of influence systems on various kinds of people. The terms Econs and Humans are used to refer to people with either considerable or little rational decision-making talent, respectively. The various libertarian paternalism concepts and systems presented are tested and compared in light of these two types of people. Two foundational issues that this book has in common with another book, Network of Echoes: Imitation, Innovation and Invisible Leaders, that was also reviewed for this issue of the Journal are that 1) there are two modes of thinking (or components of the brain) - an automatic (intuitive) process and a reflective (rational) process and 2) the need for conformity and the desire for imitation are powerful forces in human behavior. …

3,435 citations

Journal ArticleDOI
Amina Adadi, Mohammed Berrada
TL;DR: This survey provides an entry point for interested researchers and practitioners to learn key aspects of the young and rapidly growing body of research related to XAI, and review the existing approaches regarding the topic, discuss trends surrounding its sphere, and present major research trajectories.
Abstract: At the dawn of the fourth industrial revolution, we are witnessing a fast and widespread adoption of artificial intelligence (AI) in our daily life, which contributes to accelerating the shift towards a more algorithmic society. However, even with such unprecedented advancements, a key impediment to the use of AI-based systems is that they often lack transparency. Indeed, the black-box nature of these systems allows powerful predictions, but it cannot be directly explained. This issue has triggered a new debate on explainable AI (XAI), a research field that holds substantial promise for improving the trust and transparency of AI-based systems. It is recognized as the sine qua non for AI to continue making steady progress without disruption. This survey provides an entry point for interested researchers and practitioners to learn key aspects of the young and rapidly growing body of research related to XAI. Through the lens of the literature, we review the existing approaches regarding the topic, discuss trends surrounding its sphere, and present major research trajectories.

2,258 citations

01 Jan 2014
TL;DR: Designing the Using Language section so that one's teaching model avoids convention while genuinely embodying the teaching philosophy advocated by the new curriculum standards is the question teachers have been striving to explore.
Abstract: In the People's Education Press (PEP) senior high school English curriculum textbooks, Using Language is an essential part of every unit; it provides comprehensive listening, speaking, reading, and writing exercises around the unit's central topic, and continues and elevates that topic. How to design the teaching of the Using Language section so that one's teaching model both avoids convention and genuinely embodies the teaching philosophy advocated by the new curriculum standards is a question that frontline English teachers have been striving to explore.

2,071 citations

Proceedings ArticleDOI
01 Oct 2018
TL;DR: In an effort to create best practices and identify open challenges, the authors describe foundational concepts of explainability and show how they can be used to classify existing literature, and discuss why current approaches to explanatory methods, especially for deep neural networks, are insufficient.
Abstract: There has recently been a surge of work in explanatory artificial intelligence (XAI). This research area tackles the important problem that complex machines and algorithms often cannot provide insights into their behavior and thought processes. XAI allows users and parts of the internal system to be more transparent, providing explanations of their decisions in some level of detail. These explanations are important to ensure algorithmic fairness, identify potential bias/problems in the training data, and to ensure that the algorithms perform as expected. However, explanations produced by these systems are neither standardized nor systematically assessed. In an effort to create best practices and identify open challenges, we describe foundational concepts of explainability and show how they can be used to classify existing literature. We discuss why current approaches to explanatory methods, especially for deep neural networks, are insufficient. Finally, based on our survey, we conclude with suggested future research directions for explanatory artificial intelligence.

967 citations

01 Feb 2009

911 citations