Proceedings ArticleDOI

Evaluating and Informing the Design of Chatbots

TL;DR: A study with 16 first-time chatbot users interacting with eight chatbots over multiple sessions on the Facebook Messenger platform revealed that users preferred chatbots that provided either a 'human-like' natural language conversation ability, or an engaging experience that exploited the benefits of the familiar turn-based messaging interface.
Abstract: Text messaging-based conversational agents (CAs), popularly called chatbots, received significant attention in the last two years. However, chatbots are still in their nascent stage: They have a low penetration rate as 84% of the Internet users have not used a chatbot yet. Hence, understanding the usage patterns of first-time users can potentially inform and guide the design of future chatbots. In this paper, we report the findings of a study with 16 first-time chatbot users interacting with eight chatbots over multiple sessions on the Facebook Messenger platform. Analysis of chat logs and user interviews revealed that users preferred chatbots that provided either a 'human-like' natural language conversation ability, or an engaging experience that exploited the benefits of the familiar turn-based messaging interface. We conclude with implications to evolve the design of chatbots, such as: clarify chatbot capabilities, sustain conversation context, handle dialog failures, and end conversations gracefully.
Citations
Journal ArticleDOI
TL;DR: It is argued that chatbots should be enriched with social characteristics that cohere with users’ expectations, ultimately avoiding frustration and dissatisfaction.
Abstract: Chatbots’ growing popularity has brought new challenges to HCI, having changed the patterns of human interactions with computers. The increasing need to approximate conversational interaction styles raises expectations for chatbots to present social behaviors that are habitual in human-human communication.

157 citations


Cites background, methods, or results from "Evaluating and Informing the Design..."

  • ...The surveyed literature reports two main benefits of communicability for chatbots: [B1] to unveil functionalities: while interacting with chatbots, users may not know that a desired functionality is available or how to use it (Jain et al., 2018; Valério et al., 2017)....

    [...]

  • ...In Jain et al. (2018), two participants also expected chatbots to retain context from previous interactions to improve recommendations....

    [...]

  • ...Jain et al. (2018) highlighted that participants reported negative experience when finding “mismatch[es] between [a] chatbot’s real context and their assumptions of the chatbot context.”...

    [...]

  • ...Jain et al. (2018) reported that users highlighted this functionality as useful for the reviewed chatbots....

    [...]

  • ...Some of them are also emphasized in other studies, as follows: [S1] to clarify the purpose of the chatbot: First-time users in Jain et al. (2018) highlighted that a clarification about the chatbots’ purpose should be placed in the introductory message....

    [...]

Proceedings ArticleDOI
02 May 2019
TL;DR: The conversational agent space, difficulties in meeting user expectations, potential new design approaches, uses of human-bot hybrids, and implications for the ultimate goal of creating software with general intelligence are described.
Abstract: What began as a quest for artificial general intelligence branched into several pursuits, including intelligent assistants developed by tech companies and task-oriented chatbots that deliver more information or services in specific domains. Progress quickened with the spread of low-latency networking, then accelerated dramatically a few years ago. In 2016, task-focused chatbots became a centerpiece of machine intelligence, promising interfaces that are more engaging than robotic answering systems and that can accommodate our increasingly phone-based information needs. Hundreds of thousands were built. Creating successful non-trivial chatbots proved more difficult than anticipated. Some developers now design for human-chatbot (humbot) teams, with people handling difficult queries. This paper describes the conversational agent space, difficulties in meeting user expectations, potential new design approaches, uses of human-bot hybrids, and implications for the ultimate goal of creating software with general intelligence.

144 citations

Proceedings ArticleDOI
02 May 2019
TL;DR: Providing options and explanations were generally favored, as they manifest initiative from the chatbot and are actionable for recovering from breakdowns; the study also provides a nuanced understanding of the strengths and weaknesses of each repair strategy.
Abstract: Text-based conversational systems, also referred to as chatbots, have grown widely popular. Current natural language understanding technologies are not yet ready to tackle the complexities in conversational interactions. Breakdowns are common, leading to negative user experiences. Guided by communication theories, we explore user preferences for eight repair strategies, including ones that are common in commercially-deployed chatbots (e.g., confirmation, providing options), as well as novel strategies that explain characteristics of the underlying machine learning algorithms. We conducted a scenario-based study to compare repair strategies with Mechanical Turk workers (N=203). We found that providing options and explanations were generally favored, as they manifest initiative from the chatbot and are actionable to recover from breakdowns. Through detailed analysis of participants' responses, we provide a nuanced understanding on the strengths and weaknesses of each repair strategy.

137 citations


Cites background from "Evaluating and Informing the Design..."

  • ...Both breakdowns and current recovery processes decrease people’s satisfaction, trust, and willingness to continue using a chatbot [19, 20, 28]....

    [...]

Journal ArticleDOI
TL;DR: In this paper, the authors argue that chatbots should be enriched with social characteristics that cohere with users' expectations, ultimately avoiding frustration and dissatisfaction, and bring together the literature on text-based chatbots to derive a conceptual model of social characteristics for chatbots.
Abstract: Chatbots' growing popularity has brought new challenges to HCI, having changed the patterns of human interactions with computers. The increasing need to approximate conversational interaction styles raises expectations for chatbots to present social behaviors that are habitual in human-human communication. In this survey, we argue that chatbots should be enriched with social characteristics that cohere with users' expectations, ultimately avoiding frustration and dissatisfaction. We bring together the literature on disembodied, text-based chatbots to derive a conceptual model of social characteristics for chatbots. We analyzed 56 papers from various domains to understand how social characteristics can benefit human-chatbot interactions and identify the challenges and strategies to designing them. Additionally, we discussed how characteristics may influence one another. Our results provide relevant opportunities to both researchers and designers to advance human-chatbot interactions.

114 citations

Journal ArticleDOI
TL;DR: In this article, the authors present a systematic literature review of text-based chatbots, focusing on how users interact with text-Based Chatbots, and map the relevant themes that are recurrent in the last ten years of research.
Abstract: Over the last ten years there has been a growing interest around text-based chatbots, software applications interacting with humans using natural written language. However, despite the enthusiastic market predictions, ‘conversing’ with this kind of agents seems to raise issues that go beyond their current technological limitations, directly involving the human side of interaction. By adopting a Human-Computer Interaction (HCI) lens, in this article we present a systematic literature review of 83 papers that focus on how users interact with text-based chatbots. We map the relevant themes that are recurrent in the last ten years of research, describing how people experience the chatbot in terms of satisfaction, engagement, and trust, whether and why they accept and use this technology, how they are emotionally involved, what kinds of downsides can be observed in human-chatbot conversations, and how the chatbot is perceived in terms of its humanness. On the basis of these findings, we highlight open issues in current research and propose a number of research opportunities that could be tackled in future years.

104 citations

References
Book ChapterDOI
TL;DR: In this article, the results of a multi-year research program to identify the factors associated with variations in subjective workload within and between different types of tasks are reviewed, including task-, behavior-, and subject-related correlates of subjective workload experiences.
Abstract: The results of a multi-year research program to identify the factors associated with variations in subjective workload within and between different types of tasks are reviewed. Subjective evaluations of 10 workload-related factors were obtained from 16 different experiments. The experimental tasks included simple cognitive and manual control tasks, complex laboratory and supervisory control tasks, and aircraft simulation. Task-, behavior-, and subject-related correlates of subjective workload experiences varied as a function of difficulty manipulations within experiments, different sources of workload between experiments, and individual differences in workload definition. A multi-dimensional rating scale is proposed in which information about the magnitude and sources of six workload-related factors are combined to derive a sensitive and reliable estimate of workload.

11,418 citations
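
The abstract describes combining magnitude ratings for six workload-related factors into a single weighted estimate of workload, the scheme later known as NASA-TLX. The Python fragment below is a minimal sketch of that combination only: the factor names and the weighting-by-pairwise-comparison tallies follow the standard TLX procedure, and all numeric values are invented for illustration.

```python
# Six workload factors rated by a participant on a 0-100 scale.
FACTORS = ["mental_demand", "physical_demand", "temporal_demand",
           "performance", "effort", "frustration"]


def weighted_workload(ratings: dict, tally: dict) -> float:
    """Combine per-factor ratings into one overall workload score.

    `ratings` gives the magnitude of each factor (0-100); `tally` gives how many
    of the 15 pairwise comparisons each factor won, serving as its weight.
    """
    total_weight = sum(tally.values())  # 15 when the full comparison set is used
    return sum(ratings[f] * tally[f] for f in FACTORS) / total_weight


# Hypothetical ratings and comparison tallies for one participant.
ratings = {"mental_demand": 70, "physical_demand": 10, "temporal_demand": 55,
           "performance": 40, "effort": 60, "frustration": 80}
tally = {"mental_demand": 4, "physical_demand": 0, "temporal_demand": 3,
         "performance": 2, "effort": 2, "frustration": 4}

print(round(weighted_workload(ratings, tally), 1))  # overall workload on a 0-100 scale
```

The excerpt below shows how the chatbot study drew on ratings of this style, asking participants to rate the eight chatbots on metrics such as learning curve, frustration level, and fun to use.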


"Evaluating and Informing the Design..." refers background in this paper

  • ...At the start of the interview, participants were asked to rank the chatbots and rate them with respect to different metrics, including learning curve, frustration level, and fun to use [24]....

    [...]

01 Jan 2007
TL;DR: In this book, the authors reveal how smart design is the new competitive frontier and why some products satisfy customers while others only frustrate them.
Abstract: Revealing how smart design is the new competitive frontier, this innovative book is a powerful primer on how--and why--some products satisfy customers while others only frustrate them.

7,238 citations

Book
01 Jan 1988
TL;DR: Revealing how smart design is the new competitive frontier, this innovative book is a powerful primer on how--and why--some products satisfy customers while others only frustrate them.
Abstract: Revealing how smart design is the new competitive frontier, this innovative book is a powerful primer on how--and why--some products satisfy customers while others only frustrate them.

6,027 citations


Additional excerpts

  • ...This is consistent with Norman’s theory of “human error” [34]....

    [...]

Journal ArticleDOI
TL;DR: A discussion of some psychological issues relevant to the ELIZA approach as well as of future developments concludes the paper.
Abstract: ELIZA is a program operating within the MAC time-sharing system of MIT which makes certain kinds of natural language conversation between man and computer possible. Input sentences are analyzed on the basis of decomposition rules which are triggered by key words appearing in the input text. Responses are generated by reassembly rules associated with selected decomposition rules. The fundamental technical problems with which ELIZA is concerned are: (1) the identification of key words, (2) the discovery of minimal context, (3) the choice of appropriate transformations, (4) generation of responses in the absence of key words, and (5) the provision of an editing capability for ELIZA “scripts”. A discussion of some psychological issues relevant to the ELIZA approach as well as of future developments concludes the paper.

2,873 citations
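
The abstract above spells out ELIZA's pipeline: spot a keyword, apply the decomposition rule it triggers, and produce a reply from an associated reassembly rule, falling back to stock responses when no keyword matches. The following Python fragment is a minimal illustrative sketch of that idea only; the rule table, function names, and example input are invented here and are not taken from the original paper's scripts.

```python
import re

# Each keyword maps to a decomposition pattern and a list of reassembly templates.
# These rules are toy examples, not ELIZA's actual DOCTOR script.
RULES = {
    "mother": (re.compile(r".*\bmy mother (.*)", re.IGNORECASE),
               ["Tell me more about your mother {0}.",
                "Why do you mention that your mother {0}?"]),
    "i am": (re.compile(r".*\bi am (.*)", re.IGNORECASE),
             ["How long have you been {0}?",
              "Why do you think you are {0}?"]),
}

# Fallback replies for inputs with no recognized keyword (point 4 in the abstract).
NO_KEYWORD = ["Please go on.", "I see.", "Can you elaborate on that?"]


def respond(sentence: str, turn: int = 0) -> str:
    """Return an ELIZA-style reply: keyword -> decomposition -> reassembly."""
    for keyword, (pattern, templates) in RULES.items():
        if keyword in sentence.lower():
            match = pattern.match(sentence)
            if match:
                fragment = match.group(1).rstrip(".!?")
                # Cycle through reassembly templates so repeated inputs vary.
                return templates[turn % len(templates)].format(fragment)
    return NO_KEYWORD[turn % len(NO_KEYWORD)]


print(respond("I am unhappy with my new chatbot"))
# -> "How long have you been unhappy with my new chatbot?"
# (A full ELIZA script would also transform pronouns, e.g. "my" -> "your".)
```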

Book
01 Jan 2000
TL;DR: Embodied conversational agents as mentioned in this paper are computer-generated cartoonlike characters that demonstrate many of the same properties as humans in face-to-face conversation, including the ability to produce and respond to verbal and nonverbal communication.
Abstract: Embodied conversational agents are computer-generated cartoonlike characters that demonstrate many of the same properties as humans in face-to-face conversation, including the ability to produce and respond to verbal and nonverbal communication. They constitute a type of (a) multimodal interface where the modalities are those natural to human conversation: speech, facial displays, hand gestures, and body stance; (b) software agent, insofar as they represent the computer in an interaction with a human or represent their human users in a computational environment (as avatars, for example); and (c) dialogue system where both verbal and nonverbal devices advance and regulate the dialogue between the user and the computer. With an embodied conversational agent, the visual dimension of interacting with an animated character on a screen plays an intrinsic role. Not just pretty pictures, the graphics display visual features of conversation in the same way that the face and hands do in face-to-face conversation among humans.This book describes research in all aspects of the design, implementation, and evaluation of embodied conversational agents as well as details of specific working systems. Many of the chapters are written by multidisciplinary teams of psychologists, linguists, computer scientists, artists, and researchers in interface design. The authors include Elisabeth Andre, Norm Badler, Gene Ball, Justine Cassell, Elizabeth Churchill, James Lester, Dominic Massaro, Cliff Nass, Sharon Oviatt, Isabella Poggi, Jeff Rickel, and Greg Sanders.

1,559 citations