
Showing papers on "Chatbot published in 2019"


Journal Article
TL;DR: The authors introduce the task of Visual Dialog, which requires an AI agent to hold a meaningful dialog with humans in natural, conversational language about visual content: given an image, a dialog history, and a question about the image, the agent has to ground the question in the image, infer context from history, and answer the question accurately.
Abstract: We introduce the task of Visual Dialog, which requires an AI agent to hold a meaningful dialog with humans in natural, conversational language about visual content. Specifically, given an image, a dialog history, and a question about the image, the agent has to ground the question in the image, infer context from history, and answer the question accurately. Visual Dialog is disentangled enough from a specific downstream task so as to serve as a general test of machine intelligence, while being sufficiently grounded in vision to allow objective evaluation of individual responses and benchmark progress. We develop a novel two-person real-time chat data-collection protocol to curate a large-scale Visual Dialog dataset (VisDial). VisDial v0.9 has been released and consists of ~1.2M dialog question-answer pairs from 10-round, human-human dialogs grounded in ~120k images from the COCO dataset. We introduce a family of neural encoder-decoder models for Visual Dialog with 3 encoders—Late Fusion, Hierarchical Recurrent Encoder, and Memory Network (optionally with attention over image features)—and 2 decoders (generative and discriminative), which outperform a number of sophisticated baselines. We propose a retrieval-based evaluation protocol for Visual Dialog where the AI agent is asked to sort a set of candidate answers and is evaluated on metrics such as mean reciprocal rank and recall@k of the human response. We quantify the gap between machine and human performance on the Visual Dialog task via human studies. Putting it all together, we demonstrate the first ‘visual chatbot’! Our dataset, code, pretrained models, and visual chatbot are available at https://visualdialog.org.
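The retrieval-based evaluation protocol above asks the agent to rank a set of candidate answers and scores the rank assigned to the human response. A minimal sketch of the two metrics it names, mean reciprocal rank and recall@k (function names and the example ranks are illustrative, not from the VisDial code):

```python
def mrr(ranks):
    """Mean reciprocal rank: average of 1/rank of the human response."""
    return sum(1.0 / r for r in ranks) / len(ranks)

def recall_at_k(ranks, k):
    """Fraction of rounds where the human response lands in the top k."""
    return sum(1 for r in ranks if r <= k) / len(ranks)

# Hypothetical ranks of the human answer among the candidates, one per dialog round.
ranks = [1, 3, 12, 2, 40]
print(round(mrr(ranks), 3))   # 0.388
print(recall_at_k(ranks, 5))  # 0.6
```

Both metrics reward placing the human response near the top of the candidate list; recall@k only checks a threshold, while MRR is sensitive to the exact rank.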

484 citations


Journal ArticleDOI
TL;DR: Chatbot identity disclosure negatively affects customer purchases because customers perceive the disclosed bot as less knowledgeable and less empathetic.

311 citations


Journal ArticleDOI
TL;DR: Understanding the user’s side may be crucial for designing better chatbots in the future and, thus, can contribute to advancing the field of human–computer interaction.

283 citations


Journal ArticleDOI
21 Aug 2019
TL;DR: Intervention designers focusing on AI-led health chatbots need to employ user-centred and theory-based approaches addressing patients’ concerns and optimising user experience in order to achieve the best uptake and utilisation.
Abstract: BackgroundArtificial intelligence (AI) is increasingly being used in healthcare. Here, AI-based chatbot systems can act as automated conversational agents, capable of promoting health, providing ed...

256 citations


Journal ArticleDOI
TL;DR: An examination of why chatbots are not yet a substantial instrument for language-learning engagement/practice indicated that prior interest in human conversation partners was the best single predictor of future interest in chatbot conversations, providing direction for future practice and chatbot development.

149 citations


Proceedings ArticleDOI
02 May 2019
TL;DR: The conversational agent space, difficulties in meeting user expectations, potential new design approaches, uses of human-bot hybrids, and implications for the ultimate goal of creating software with general intelligence are described.
Abstract: What began as a quest for artificial general intelligence branched into several pursuits, including intelligent assistants developed by tech companies and task-oriented chatbots that deliver more information or services in specific domains. Progress quickened with the spread of low-latency networking, then accelerated dramatically a few years ago. In 2016, task-focused chatbots became a centerpiece of machine intelligence, promising interfaces that are more engaging than robotic answering systems and that can accommodate our increasingly phone-based information needs. Hundreds of thousands were built. Creating successful non-trivial chatbots proved more difficult than anticipated. Some developers now design for human-chatbot (humbot) teams, with people handling difficult queries. This paper describes the conversational agent space, difficulties in meeting user expectations, potential new design approaches, uses of human-bot hybrids, and implications for the ultimate goal of creating software with general intelligence.

144 citations


Proceedings ArticleDOI
02 May 2019
TL;DR: Providing options and explanations were generally favored, as they manifest initiative from the chatbot and are actionable for recovering from breakdowns; the findings provide a nuanced understanding of the strengths and weaknesses of each repair strategy.
Abstract: Text-based conversational systems, also referred to as chatbots, have grown widely popular. Current natural language understanding technologies are not yet ready to tackle the complexities in conversational interactions. Breakdowns are common, leading to negative user experiences. Guided by communication theories, we explore user preferences for eight repair strategies, including ones that are common in commercially-deployed chatbots (e.g., confirmation, providing options), as well as novel strategies that explain characteristics of the underlying machine learning algorithms. We conducted a scenario-based study to compare repair strategies with Mechanical Turk workers (N=203). We found that providing options and explanations were generally favored, as they manifest initiative from the chatbot and are actionable to recover from breakdowns. Through detailed analysis of participants' responses, we provide a nuanced understanding of the strengths and weaknesses of each repair strategy.

137 citations


Proceedings ArticleDOI
16 Jan 2019
TL;DR: On the PersonaChat chit-chat dataset with over 131k training examples, it is found that learning from dialogue with a self-feeding chatbot significantly improves performance, regardless of the amount of traditional supervision.
Abstract: The majority of conversations a dialogue agent sees over its lifetime occur after it has already been trained and deployed, leaving a vast store of potential training signal untapped. In this work, we propose the self-feeding chatbot, a dialogue agent with the ability to extract new training examples from the conversations it participates in. As our agent engages in conversation, it also estimates user satisfaction in its responses. When the conversation appears to be going well, the user’s responses become new training examples to imitate. When the agent believes it has made a mistake, it asks for feedback; learning to predict the feedback that will be given improves the chatbot’s dialogue abilities further. On the PersonaChat chit-chat dataset with over 131k training examples, we find that learning from dialogue with a self-feeding chatbot significantly improves performance, regardless of the amount of traditional supervision.
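The self-feeding loop described above can be sketched as a simple harvesting step: high-satisfaction exchanges become new imitation examples, and low-satisfaction ones are flagged so the agent can ask for feedback. All names, the toy satisfaction scorer, and the 0.5 threshold are illustrative assumptions, not the paper's actual implementation.

```python
def harvest_examples(turns, satisfaction_fn, threshold=0.5):
    """Split deployed conversations into new training signal.

    turns: list of (context, bot_reply, user_response) triples.
    satisfaction_fn: scores the user's response in [0, 1].
    """
    imitation, feedback_requests = [], []
    for context, bot_reply, user_response in turns:
        if satisfaction_fn(user_response) >= threshold:
            # Conversation going well: imitate the user's next utterance.
            imitation.append((context + [bot_reply], user_response))
        else:
            # Likely mistake: flag this context to request feedback from the user.
            feedback_requests.append(context + [bot_reply])
    return imitation, feedback_requests

turns = [
    (["hi"], "hello! how are you?", "great, thanks!"),
    (["do you like jazz?"], "I am a chatbot.", "what? that makes no sense"),
]
sat = lambda r: 0.0 if "sense" in r else 0.9  # toy stand-in for a learned scorer
imitation, asks = harvest_examples(turns, sat)
print(len(imitation), len(asks))  # 1 1
```

In the paper, the satisfaction estimator and the feedback predictor are learned models trained alongside the dialogue agent; here they are reduced to a stub to show the data flow only.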

136 citations


Journal ArticleDOI
TL;DR: A novel method of analyzing the content of messages produced in human-chatbot interactions is proposed, using the Condor Tribefinder system, a text-mining tool the authors developed based on a machine-learning classification engine.

131 citations


Journal ArticleDOI
TL;DR: This study, the first to investigate chatbot advertising, examined the relation between perceived intrusiveness and patronage intentions and may hold important managerial implications.

116 citations


Proceedings ArticleDOI
02 May 2019
TL;DR: It was found that the participants in the chatbot survey, as compared to those in the web survey, were more likely to produce differentiated responses and were less likely to satisfice; the chatbot survey thus resulted in higher-quality data.
Abstract: This study aims to explore the feasibility of a text-based virtual agent as a new survey method to overcome the web survey's common response quality problems, which are caused by respondents' inattention. To this end, we conducted a 2 (platform: web vs. chatbot) × 2 (conversational style: formal vs. casual) experiment. We used satisficing theory to compare the responses' data quality. We found that the participants in the chatbot survey, as compared to those in the web survey, were more likely to produce differentiated responses and were less likely to satisfice; the chatbot survey thus resulted in higher-quality data. Moreover, when a casual conversational style is used, the participants were less likely to satisfice-although such effects were only found in the chatbot condition. These results imply that conversational interactivity occurs when a chat interface is accompanied by messages with effective tone. Based on an analysis of the qualitative responses, we also showed that a chatbot could perform part of a human interviewer's role by applying effective communication strategies.

Proceedings ArticleDOI
Haoyu Song1, Wei-Nan Zhang1, Yiming Cui1, Dong Wang, Ting Liu1 
01 Aug 2019
TL;DR: Both automatic and human evaluations show that the proposed memory-augmented architecture, which exploits persona information from context and incorporates a conditional variational autoencoder, generates diverse and sustainable conversations.
Abstract: In human conversations, because people keep their personalities in mind, they can easily carry out and maintain conversations. Given conversational context with persona information, how a chatbot can exploit that information to generate diverse and sustainable conversations is still a non-trivial task. Previous work on persona-based conversational models successfully makes use of predefined persona information and has shown great promise in delivering more realistic responses. However, such models all learn under the assumption that, given a source input, there is only one target response; in human conversations, there are many appropriate responses to a given input message. In this paper, we propose a memory-augmented architecture to exploit persona information from context and incorporate a conditional variational autoencoder model to generate diverse and sustainable conversations. We evaluate the proposed model on a benchmark persona-chat dataset. Both automatic and human evaluations show that our model can deliver more diverse and more engaging persona-based responses than baseline approaches.

Journal ArticleDOI
TL;DR: In this paper, the authors argue that chatbots should be enriched with social characteristics that cohere with users' expectations, ultimately avoiding frustration and dissatisfaction, and bring together the literature on text-based chatbots to derive a conceptual model of social characteristics for chatbots.
Abstract: Chatbots' growing popularity has brought new challenges to HCI, having changed the patterns of human interactions with computers. The increasing need to approximate conversational interaction styles raises expectations for chatbots to present social behaviors that are habitual in human-human communication. In this survey, we argue that chatbots should be enriched with social characteristics that cohere with users' expectations, ultimately avoiding frustration and dissatisfaction. We bring together the literature on disembodied, text-based chatbots to derive a conceptual model of social characteristics for chatbots. We analyzed 56 papers from various domains to understand how social characteristics can benefit human-chatbot interactions and identify the challenges and strategies to designing them. Additionally, we discussed how characteristics may influence one another. Our results provide relevant opportunities to both researchers and designers to advance human-chatbot interactions.

Posted Content
TL;DR: A novel method which pretrains the response selection model on large general-domain conversational corpora and fine-tunes the pretrained model for the target dialogue domain, relying only on the small in-domain dataset to capture the nuances of the given dialogue domain is proposed.
Abstract: Despite their popularity in the chatbot literature, retrieval-based models have had modest impact on task-oriented dialogue systems, with the main obstacle to their application being the low-data regime of most task-oriented dialogue tasks. Inspired by the recent success of pretraining in language modelling, we propose an effective method for deploying response selection in task-oriented dialogue. To train response selection models for task-oriented dialogue tasks, we propose a novel method which: 1) pretrains the response selection model on large general-domain conversational corpora; and then 2) fine-tunes the pretrained model for the target dialogue domain, relying only on the small in-domain dataset to capture the nuances of the given dialogue domain. Our evaluation on six diverse application domains, ranging from e-commerce to banking, demonstrates the effectiveness of the proposed training method.
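The two-stage recipe above (pretrain a response-selection scorer on general-domain conversations, then fine-tune it on a small in-domain set) can be shown schematically. The bag-of-words scorer and the word-weight "training" below are toy stand-ins for the paper's neural encoders; every name here is illustrative.

```python
def encode(text, weights):
    """Toy bag-of-words encoding, weighting each word by a learned score."""
    return {w: weights.get(w, 1.0) for w in text.lower().split()}

def score(context, response, weights):
    """Dot-product-style overlap between context and response encodings."""
    c, r = encode(context, weights), encode(response, weights)
    return sum(c[w] * r[w] for w in c if w in r)

def train(pairs, weights, lr=0.5):
    """Upweight words shared by true (context, response) pairs."""
    for context, response in pairs:
        for w in set(context.lower().split()) & set(response.lower().split()):
            weights[w] = weights.get(w, 1.0) + lr
    return weights

weights = {}
# Stage 1: pretrain on a large general-domain pair (a single pair stands in here).
weights = train([("how do I pay", "you can pay online")], weights)
# Stage 2: fine-tune on the small in-domain (banking) set.
weights = train([("reset my banking pin", "your pin can be reset")], weights)
# The fine-tuned scorer now ranks the in-domain response above the generic one.
print(score("reset my pin", "your pin can be reset", weights) >
      score("reset my pin", "you can pay online", weights))  # True
```

The point of the sketch is only the data flow: the same scoring model is trained twice, first on plentiful out-of-domain pairs and then on the scarce target-domain pairs, so that the final ranking reflects both.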

Journal ArticleDOI
TL;DR: A chatbot-based healthcare service with a knowledge base for cloud computing is proposed; the framework enables smooth human–robot interaction that supports efficient implementation of the chatbot healthcare service.
Abstract: With the recent increase in the interest of individuals in health, lifecare, and disease, hospital medical services have been shifting from a treatment focus to prevention and health management. The medical industry is creating additional services for health- and life-promotion programs. This change represents a medical-service paradigm shift due to the prolonged life expectancy, aging, lifestyle changes, and income increases, and consequently, the concept of the smart health service has emerged as a major issue. Due to smart health, the existing health-promotion medical services that typically have been operated by large hospitals have been developing into remote medical-treatment services where personal health records are used in small hospitals; moreover, a further expansion has been occurring in the direction of u-Healthcare, in which health conditions are continuously monitored in the everyday lives of the users. However, as the amount of data is increasing and the medical-data complexity is intensifying, the limitations of the previous approaches are increasingly problematic; furthermore, since even the same disease can show different symptoms depending on the personal health conditions, lifestyle, and genome information, universal healthcare is not effective for some patients, and it can even generate severe side effects. Thus, research on AI-based healthcare in the form of mining-based smart health, a convergence technology of the 4IR, is actively being carried out. Particularly, the introduction of various smart medical equipment in which healthcare big data and machine learning have been combined, and the expansion of the distribution of smartphone wearable devices, have led to innovations such as personalized diagnostic and treatment services and chronic-disease management and prevention services.
In addition, various already launched applications allow users to check their own health conditions and receive the corresponding feedback in real time. Based on these innovations, the preparation of a way to determine a user’s current health conditions, and to respond properly through contextual feedback in the case of unsound health conditions, is underway. However, since the previously made healthcare-related applications need to be linked to a wearable device, and they provide medical feedback to users based solely on specific biometric data, inaccurate information can be provided. In addition, the user interfaces of some healthcare applications are very complicated, causing user inconvenience regarding the attainment of desired information. Therefore, we propose a chatbot-based healthcare service with a knowledge base for cloud computing. The proposed method is a mobile health service in the form of a chatbot for the provision of fast treatment in response to accidents that may occur in everyday life, and also in response to changes of the conditions of patients with chronic diseases. A chatbot is an intelligent conversation platform that interacts with users via a chatting interface, and since its use can be facilitated by linkages with the major social network service messengers, general users can easily access and receive various health services. The proposed framework enables a smooth human–robot interaction that supports the efficient implementation of the chatbot healthcare service. The design of the framework comprises the following four levels: data level, information level, knowledge level, and service level.

Proceedings ArticleDOI
02 May 2019
TL;DR: QuizBot is a dialogue-based agent that helps students learn factual knowledge in science, safety, and English vocabulary; the results suggest that educational chatbot systems may have beneficial use, particularly for learning outside of traditional settings.
Abstract: Advances in conversational AI have the potential to enable more engaging and effective ways to teach factual knowledge. To investigate this hypothesis, we created QuizBot, a dialogue-based agent that helps students learn factual knowledge in science, safety, and English vocabulary. We evaluated QuizBot with 76 students through two within-subject studies against a flashcard app, the traditional medium for learning factual knowledge. Though both systems used the same algorithm for sequencing materials, QuizBot led to students recognizing (and recalling) over 20% more correct answers than when students used the flashcard app. Using a conversational agent is more time consuming to practice with, but in a second study, of their own volition, students spent 2.6x more time learning with QuizBot than with flashcards and reported preferring it strongly for casual learning. Our results in this second study showed QuizBot yielded improved learning gains over flashcards on recall. These results suggest that educational chatbot systems may have beneficial use, particularly for learning outside of traditional settings.

Book ChapterDOI
27 Mar 2019
TL;DR: This paper discusses chatbot classification, the design techniques used in earlier and modern chatbots, and how the two main categories of chatbots handle conversation context.
Abstract: A chatbot can be defined as a computer program designed to interact with users using natural language or text in such a way that the user thinks they are having a dialogue with a human. Most chatbots utilise artificial intelligence (AI) algorithms to generate the required response. Earlier chatbots merely created an illusion of intelligence by employing much simpler pattern-matching and string-processing design techniques for their interaction with users, using rule-based and generative-based models. However, with the emergence of new technologies, more intelligent systems using complex knowledge-based models have emerged. This paper discusses chatbot classification, the design techniques used in earlier and modern chatbots, and how the two main categories of chatbots handle conversation context.

Journal ArticleDOI
TL;DR: The consent chatbot presents an engaging alternative to deliver content challenging to comprehend in traditional paper or in-person consent, and the cascade and follow-up chatbots may be acceptable, user-friendly, scalable approaches to manage ancillary genetic counseling tasks.
Abstract: A barrier to incorporating genomics more broadly is limited access to providers with genomics expertise. Chatbots are a technology-based simulated conversation used in scaling communications. Geisinger and Clear Genetics, Inc. have developed chatbots to facilitate communication with participants receiving clinically actionable genetic variants from the MyCode® Community Health Initiative (MyCode®). The consent chatbot walks patients through the consent, allowing them to opt to receive more or less detail on key topics (goals, benefits, risks, etc.). The follow-up chatbot reminds participants of suggested actions following result receipt, and the cascade chatbot can be sent to at-risk relatives by participants to share their genetic test results and facilitate cascade testing. To explore the acceptability, usability, and understanding of the study consent, post-result follow-up, and cascade testing chatbots, we conducted six focus groups with MyCode® participants. Sixty-two individuals participated in a focus group (n = 33 consent chatbot, n = 29 follow-up and cascade chatbot). Participants were mostly female (n = 42, 68%), Caucasian (n = 58, 94%), college-educated (n = 33, 53%), retirees (n = 38, 61%), and of age 56 years or older (n = 52, 84%). Few participants reported that they knew what a chatbot was (n = 10, 16%), and a small number reported that they had used a chatbot (n = 5, 8%). Qualitative analysis of transcripts and notes from focus groups revealed four main themes: (a) overall impressions, (b) suggested improvements, (c) concerns and limitations, and (d) implementation. Participants supported using chatbots to consent for genomics research and to interact with healthcare providers for care coordination following receipt of genomic results. Most expressed willingness to use a chatbot to share genetic information with relatives.
The consent chatbot presents an engaging alternative to deliver content challenging to comprehend in traditional paper or in-person consent. The cascade and follow-up chatbots may be acceptable, user-friendly, scalable approaches to manage ancillary genetic counseling tasks.

Proceedings ArticleDOI
10 Sep 2019
TL;DR: Usability test outcomes confirm what is already known about chatbots - that they are highly usable but conventional methods for assessing usability and user experience may not be as accurate when applied to chatbots.
Abstract: Chatbots are becoming increasingly popular as a human-computer interface. The traditional best practices normally applied to User Experience (UX) design cannot easily be applied to chatbots, nor can conventional usability testing techniques guarantee accuracy. WeightMentor is a bespoke self-help motivational tool for weight loss maintenance. This study addresses the following research questions: How usable is the WeightMentor chatbot, according to conventional usability methods? To what extent do different conventional usability questionnaires correlate when evaluating chatbot usability, and how do they correlate with a tailored chatbot usability survey score? What is the optimum number of users required to identify chatbot usability issues? How many task repetitions are required for first-time chatbot users to reach optimum task performance (i.e. efficiency based on task completion times)? This paper describes the procedure for testing the WeightMentor chatbot, assesses correlation between typical usability testing metrics, and suggests that conventional wisdom on participant numbers for identifying usability issues may not apply to chatbots. The study design was a usability study. WeightMentor was tested using a pre-determined usability testing protocol, evaluating ease of task completion, unique usability errors, and participant opinions on the chatbot (collected using usability questionnaires). WeightMentor usability scores were generally high, and correlation between questionnaires was strong. The optimum number of users for identifying chatbot usability errors was 26, which challenges previous research. Chatbot users reached optimum proficiency in tasks after just one repetition.
Usability test outcomes confirm what is already known about chatbots - that they are highly usable (due to their simple interface and conversation-driven functionality) but conventional methods for assessing usability and user experience may not be as accurate when applied to chatbots.

Journal ArticleDOI
TL;DR: This study designed a conversational sequence for a brief motivational interview delivered by a Web-based text messaging application (chatbot) and investigated the conversational experience of graduate students using it to cope with stress.
Abstract: Background: In addition to addiction and substance abuse, motivational interviewing (MI) is increasingly being integrated in treating other clinical issues such as mental health problems. Most of the many technological adaptations of MI, however, have focused on delivering the action-oriented treatment, leaving its relational component unexplored or vaguely described. This study intended to design a conversational sequence that considers both technical and relational components of MI for a mental health concern. Objective: This case study aimed to design a conversational sequence for a brief motivational interview to be delivered by a Web-based text messaging application (chatbot) and to investigate its conversational experience with graduate students in their coping with stress. Methods: A brief conversational sequence was designed with varied combinations of MI skills to follow the 4 processes of MI. A Web-based text messaging application, Bonobot, was built as a research prototype to deliver the sequence in a conversation. A total of 30 full-time graduate students who self-reported stress with regard to their school life were recruited for a survey of demographic information and perceived stress and a semistructured interview. Interviews were transcribed verbatim and analyzed by Braun and Clarke’s thematic method. The themes that reflect the process of, impact of, and needs for the conversational experience are reported. Results: Participants had a high level of perceived stress (mean 22.5 [SD 5.0]). Our findings included the following themes: Evocative Questions and Cliched Feedback; Self-Reflection and Potential Consolation; and Need for Information and Contextualized Feedback. Participants particularly favored the relay of evocative questions but were less satisfied with the agent-generated reflective and affirming feedback that filled in-between. 
Participants found discussing the idea of change a good means of reflecting on themselves, and some of Bonobot’s encouragements related to graduate school life were appreciated. Participants suggested that the conversation provide informational support, as well as more contextualized feedback. Conclusions: A conversational sequence for a brief motivational interview was presented in this case study. Participant feedback suggests that sequencing questions and MI-adherent statements can facilitate a conversation for stress management and may encourage self-reflection. More diversified sequences, along with more contextualized feedback, should follow to offer a better conversational experience and to confirm any empirical effect.

Journal ArticleDOI
John Powell1
TL;DR: It is argued that many medical decisions require value judgements and the doctor-patient relationship requires empathy and understanding to arrive at a shared decision, often handling large areas of uncertainty and balancing competing risks.
Abstract: Over the next decade, one issue which will dominate sociotechnical studies in health informatics is the extent to which the promise of artificial intelligence in health care will be realized, along with the social and ethical issues which accompany it. A useful thought experiment is the application of the Turing test to user-facing artificial intelligence systems in health care (such as chatbots or conversational agents). In this paper I argue that many medical decisions require value judgements and the doctor-patient relationship requires empathy and understanding to arrive at a shared decision, often handling large areas of uncertainty and balancing competing risks. Arguably, medicine requires wisdom more than intelligence, artificial or otherwise. Artificial intelligence therefore needs to supplement rather than replace medical professionals, and identifying the complementary positioning of artificial intelligence in medical consultation is a key challenge for the future. In health care, artificial intelligence needs to pass the implementation game, not the imitation game.

Journal ArticleDOI
30 Sep 2019
TL;DR: The addition of a supportive chatbot to a popular smoking cessation app more than doubled user engagement, and there is low-quality evidence that the addition also increased self-reported smoking cessation.
Abstract: ObjectiveThe objective of this study was to assess whether a version of the Smoke Free app with a supportive chatbot powered by artificial intelligence (versus a version without the chatbot) led to...

Proceedings ArticleDOI
08 Jan 2019
TL;DR: A classification system for SPAs is introduced based on a systematic literature review; a cluster analysis reveals five SPA archetypes: Adaptive Voice (Vision) Assistants, Chatbot Assistants, Embodied Virtual Assistants, Passive Pervasive Assistants, and Natural Conversation Assistants.
Abstract: The digital age has yielded systems that increasingly reduce the complexity of our everyday lives. As such, smart personal assistants such as Amazon’s Alexa or Apple’s Siri combine the comfort of intuitive natural language interaction with the utility of personalized and situation-dependent information and service provision. However, research on SPAs is becoming increasingly complex and opaque. To reduce complexity, this paper introduces a classification system for SPAs. Based on a systematic literature review, a cluster analysis reveals five SPA archetypes: Adaptive Voice (Vision) Assistants, Chatbot Assistants, Embodied Virtual Assistants, Passive Pervasive Assistants, and Natural Conversation Assistants.

Proceedings ArticleDOI
23 Apr 2019
TL;DR: This research applies the concepts of natural language processing and machine learning to create a chatbot application that can help people conduct daily check-ups, make them aware of their health status, and encourage them to take proper measures to remain healthy.
Abstract: Hospitals are the most widely used means by which a sick person gets medical check-ups, disease diagnosis, and treatment recommendations, and most people around the world consider them the most reliable way to check their health status. The proposed system is an alternative to this conventional method of visiting a hospital and making an appointment with a doctor to get a diagnosis. This research applies the concepts of natural language processing and machine learning to create a chatbot application. People can interact with the chatbot just as they do with another human; through a series of queries, the chatbot identifies the user's symptoms and thereby predicts the disease and recommends treatment. Such a system can be of great use in conducting daily check-ups, makes people aware of their health status, and encourages them to take proper measures to remain healthy. According to this research, such systems are not yet widely used, and people are largely unaware of them. Executing the proposed framework can help people avoid the time-consuming process of visiting hospitals by using this free-of-cost application wherever they are.

Proceedings ArticleDOI
01 Jun 2019
TL;DR: A unified framework for human evaluation of chatbots is introduced that augments existing tools and provides a web-based hub for researchers to share and compare their dialog systems; open-source baseline models and evaluation datasets are also introduced.
Abstract: Open-domain dialog systems (i.e. chatbots) are difficult to evaluate. The current best practice for analyzing and comparing these dialog systems is the use of human judgments. However, the lack of standardization in evaluation procedures, and the fact that model parameters and code are rarely published hinder systematic human evaluation experiments. We introduce a unified framework for human evaluation of chatbots that augments existing tools and provides a web-based hub for researchers to share and compare their dialog systems. Researchers can submit their trained models to the ChatEval web interface and obtain comparisons with baselines and prior work. The evaluation code is open-source to ensure standardization and transparency. In addition, we introduce open-source baseline models and evaluation datasets. ChatEval can be found at https://chateval.org.
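At their core, the pairwise human-judgment comparisons that a framework like ChatEval standardizes reduce to aggregating annotator votes into per-system win rates. A minimal sketch of that aggregation, with illustrative judgment data and function names (not ChatEval's actual API):

```python
# Each judgment: (system_1, system_2, winner) for one prompt, as chosen
# by a human annotator. Toy data; real evaluations span many prompts.
judgments = [
    ("model_a", "model_b", "model_a"),
    ("model_a", "model_b", "model_b"),
    ("model_a", "model_b", "model_a"),
    ("model_a", "model_b", "model_a"),
]

def win_rate(judgments, system):
    """Fraction of pairwise comparisons involving `system` that it won."""
    involved = [j for j in judgments if system in j[:2]]
    wins = sum(1 for j in involved if j[2] == system)
    return wins / len(involved)

print(win_rate(judgments, "model_a"))  # 0.75
```

Standardizing this aggregation (and the annotator instructions behind it) is precisely what makes results comparable across submitted systems.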

Journal ArticleDOI
07 Nov 2019
TL;DR: This work prototyped a chatbot as an interactive question-answering application and analyzed users’ interaction patterns, perceptions, and contexts of use to understand the potential of chatbots for breastfeeding education, via a Wizard-of-Oz experiment.
Abstract: The use of chatbots in different spheres of life has been continuously increasing in recent years. We attempt to understand the potential of chatbots for breastfeeding education by conducting a Wizard-of-Oz experiment with 22 participants. Our participants included breastfeeding mothers and community health workers from the slum areas of Delhi, India. We prototyped our chatbot as an interactive question-answering application and analyzed users' interaction patterns, perceptions, and contexts of use. The chatbot use cases emerged primarily as a first line of support. The participants, especially the mothers, were enthusiastic about the opportunity to ask questions and get reliable answers. We also observed the influencing role of female relatives, e.g. mothers-in-law, in breastfeeding practices. Our analysis of user information-seeking suggests that a majority of questions (88%) are of a nature that can be answered by a chatbot application. We further observe that the queries are embedded deeply in myths and existing belief systems, requiring designers to attend to subtle aspects of information provision, such as positive reinforcement and contextual sensitivity. Finally, we discuss the societal and ethical issues associated with chatbot usage for a public health topic such as breastfeeding education.

Journal ArticleDOI
TL;DR: A novel ensemble-based approach applied to value-based DRL chatbots, which use finite action sets as a form of meaning representation; the results show that near human-like dialogue policies can be induced and that generalisation to unseen data remains a difficult problem.

Journal Article
TL;DR: A survey with professionals familiar with interacting with chatbots in a work environment shows a significant effect of anthropomorphic design features on perceived usefulness, with a strength four times the size of the effect of functional chatbot features.
Abstract: Information technology is rapidly changing the way how people collaborate in enterprises. Chatbots integrated into enterprise collaboration systems can strengthen collaboration culture and help reduce work overload. In light of a growing usage of chatbots in enterprise collaboration systems, we examine the influence of anthropomorphic and functional chatbot design features on user acceptance. We conducted a survey with professionals familiar with interacting with chatbots in a work environment. The results show a significant effect of anthropomorphic design features on perceived usefulness, with a strength four times the size of the effect of functional chatbot features. We suggest that researchers and practitioners alike dedicate priorities to anthropomorphic design features with the same magnitude as common for functional design features in chatbot design and research.

Proceedings ArticleDOI
27 May 2019
TL;DR: This paper presents the experience of implementing a chatbot for expert recommendation tasks for Pharo, and reports on a preliminary evaluation in which the recommendation system was welcomed, though the conversational behavior was not; users expected a fully conversational chatbot, capable of following the conversation flow set by the user.
Abstract: This paper presents our experience in implementing a chatbot for expert recommendation tasks. The chatbot was developed for the Pharo software ecosystem, and is integrated with the Discord chat service, which is used by the Pharo community. We also report on a preliminary evaluation, in which the recommendation system was welcomed, though the conversational behavior was not: users expected a fully conversational chatbot, capable of following the conversation flow set by the user. We discuss why such expectations might be hard to meet because of the uncanny valley effect.

Journal ArticleDOI
TL;DR: Chatbots are artificial intelligence–driven programs that interact with people; they can be used for screening, treatment adherence, and follow-up, and can be deployed as text-based services on websites or in mobile applications.