Author

Claudia D'Adamo

Bio: Claudia D'Adamo is an academic researcher from Wheaton College (Massachusetts). The author has contributed to research in topics: Peer tutor & Intelligent tutoring system. The author has an h-index of 1 and has co-authored 1 publication receiving 46 citations.

Papers
Proceedings ArticleDOI
05 May 2012
TL;DR: Treating her as a partner, primarily by aligning oneself with Stacy using pronouns like you or we rather than she or it, significantly correlates with student learning, as do playful face-threatening comments such as teasing, while elaborate explanations of Stacy's behavior in the third person and formal tutoring statements reduce learning gains.
Abstract: Understanding how children perceive and interact with teachable agents (systems where children learn through teaching a synthetic character embedded in an intelligent tutoring system) can provide insight into the effects of social interaction on learning with intelligent tutoring systems. We describe results from a think-aloud study where children were instructed to narrate their experience teaching Stacy, an agent who can learn to solve linear equations with the student's help. We found treating her as a partner, primarily through aligning oneself with Stacy using pronouns like you or we rather than she or it significantly correlates with student learning, as do playful face-threatening comments such as teasing, while elaborate explanations of Stacy's behavior in the third-person and formal tutoring statements reduce learning gains. Additionally, we found that the agent's mistakes were a significant predictor for students shifting away from alignment with the agent.
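As a rough illustration of the kind of transcript analysis this abstract describes (not the authors' actual coding scheme or statistics, which are not given on this page), the sketch below scores each student's think-aloud utterances for partner-aligned versus third-person pronoun use and correlates that ratio with pre/post learning gain. The pronoun lists, column names, and choice of a Pearson correlation are all illustrative assumptions.

```python
# Illustrative sketch only: column names (student_id, utterance, pretest,
# posttest) and the pronoun lists are hypothetical, not the paper's coding scheme.
import re

import pandas as pd
from scipy.stats import pearsonr

ALIGNED = re.compile(r"\b(you|we|our|let's)\b", re.IGNORECASE)   # partner-like pronouns
DISTANCED = re.compile(r"\b(she|her|it)\b", re.IGNORECASE)       # third-person references


def alignment_ratio(utterances):
    """Fraction of pronoun-bearing utterances that use partner-aligned pronouns."""
    aligned = sum(bool(ALIGNED.search(u)) for u in utterances)
    distanced = sum(bool(DISTANCED.search(u)) for u in utterances)
    total = aligned + distanced
    return aligned / total if total else 0.0


def correlate_alignment_with_learning(transcripts: pd.DataFrame, scores: pd.DataFrame):
    """Correlate each student's alignment ratio with their pre/post learning gain."""
    ratios = transcripts.groupby("student_id")["utterance"].apply(
        lambda utts: alignment_ratio(list(utts))
    )
    gains = (scores.set_index("student_id")["posttest"]
             - scores.set_index("student_id")["pretest"])
    joined = pd.concat([ratios.rename("alignment"), gains.rename("gain")], axis=1).dropna()
    r, p = pearsonr(joined["alignment"], joined["gain"])
    return r, p
```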

56 citations


Cited by
Proceedings ArticleDOI
08 Jun 2018
TL;DR: A study with 16 first-time chatbot users interacting with eight chatbots over multiple sessions on the Facebook Messenger platform revealed that users preferred chatbots that provided either a 'human-like' natural language conversation ability, or an engaging experience that exploited the benefits of the familiar turn-based messaging interface.
Abstract: Text messaging-based conversational agents (CAs), popularly called chatbots, received significant attention in the last two years. However, chatbots are still in their nascent stage: They have a low penetration rate as 84% of the Internet users have not used a chatbot yet. Hence, understanding the usage patterns of first-time users can potentially inform and guide the design of future chatbots. In this paper, we report the findings of a study with 16 first-time chatbot users interacting with eight chatbots over multiple sessions on the Facebook Messenger platform. Analysis of chat logs and user interviews revealed that users preferred chatbots that provided either a 'human-like' natural language conversation ability, or an engaging experience that exploited the benefits of the familiar turn-based messaging interface. We conclude with implications to evolve the design of chatbots, such as: clarify chatbot capabilities, sustain conversation context, handle dialog failures, and end conversations gracefully.

213 citations

01 Jan 2001
TL;DR: In this paper, the authors developed computer programs called PALs (Personal Assistants for Learning) in which computers and students alternately coach each other.
Abstract: Our attempts to improve physics instruction have led us to analyze thought processes needed to apply scientific principles to problems—and to recognize that reliable performance requires the basic cognitive functions of deciding, implementing, and assessing. Using a reciprocal-teaching strategy to teach such thought processes explicitly, we have developed computer programs called PALs (Personal Assistants for Learning) in which computers and students alternately coach each other. These computer-implemented tutorials make it practically feasible to provide students with individual guidance and feedback ordinarily unavailable in most courses. We constructed PALs specifically designed to teach the application of Newton's laws. In a comparative experimental study these computer tutorials were found to be nearly as effective as individual tutoring by expert teachers—and considerably more effective than the instruction provided in a well-taught physics class. Furthermore, almost all of the students using the PALs perceived them as very helpful to their learning. These results suggest that the proposed instructional approach could fruitfully be extended to improve instruction in various practically realistic contexts.

140 citations

Journal ArticleDOI
TL;DR: The overlap between HCI and sense of agency for computer input modalities and system feedback, computer assistance, and joint actions between humans and computers is explored.
Abstract: The sense of agency is the experience of controlling both one's body and the external environment. Although the sense of agency has been studied extensively, there is a paucity of studies in applied "real-life" situations. One applied domain that seems highly relevant is human-computer-interaction (HCI), as an increasing number of our everyday agentive interactions involve technology. Indeed, HCI has long recognized the feeling of control as a key factor in how people experience interactions with technology. The aim of this review is to summarize and examine the possible links between sense of agency and understanding control in HCI. We explore the overlap between HCI and sense of agency for computer input modalities and system feedback, computer assistance, and joint actions between humans and computers. An overarching consideration is how agency research can inform HCI and vice versa. Finally, we discuss the potential ethical implications of personal responsibility in an ever-increasing society of technology users and intelligent machine interfaces.

134 citations

Proceedings ArticleDOI
19 Apr 2018
TL;DR: A field deployment of a Human Resource chatbot is studied to report on users' interest areas in conversational interactions and inform the development of CAs, and rich signals in conversational interactions are highlighted for inferring user satisfaction with both instrumental usage of and playful interactions with the agent.
Abstract: Many conversational agents (CAs) are developed to answer users' questions in a specialized domain. In everyday use of CAs, user experience may extend beyond satisfying information needs to the enjoyment of conversations with CAs, some of which represent playful interactions. By studying a field deployment of a Human Resource chatbot, we report on users' interest areas in conversational interactions to inform the development of CAs. Through the lens of statistical modeling, we also highlight rich signals in conversational interactions for inferring user satisfaction with the instrumental usage and playful interactions with the agent. These signals can be utilized to develop agents that adapt functionality and interaction styles. By contrasting these signals, we shed light on the varying functions of conversational interactions. We discuss design implications for CAs, and directions for developing adaptive agents based on users' conversational behaviors.
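As a hedged sketch of how "signals in conversational interactions" might feed the kind of statistical modeling this abstract mentions (the paper's actual features and model are not specified on this page), the example below fits a simple logistic regression on hypothetical chat-log features to infer satisfaction. The feature names and the satisfaction label are illustrative assumptions, not the authors' instrumentation.

```python
# Hedged sketch: feature columns (num_playful_turns, task_completed,
# avg_response_len) and the "satisfied" label are hypothetical stand-ins
# for signals extracted from chat logs and post-session surveys.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score


def fit_satisfaction_model(logs: pd.DataFrame):
    """Fit a simple classifier that infers user satisfaction from chat-log signals."""
    features = logs[["num_playful_turns", "task_completed", "avg_response_len"]]
    labels = logs["satisfied"]  # e.g. a binary rating from a post-session survey
    model = LogisticRegression(max_iter=1000)
    accuracy = cross_val_score(model, features, labels, cv=5).mean()  # rough estimate
    model.fit(features, labels)
    return model, accuracy
```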

73 citations

Proceedings ArticleDOI
04 Jun 2016
TL;DR: A 17-day field study of a prototype of a personal AI agent that helps employees find work-related information is conducted and it is found that user differences in social-agent orientation and aversion to agent proactive interactions can be inferred from behavioral signals.
Abstract: Personal agent software is now in daily use in personal devices and in some organizational settings. While many advocate an agent sociality design paradigm that incorporates human-like features and social dialogues, it is unclear whether this is a good match for professionals who seek productivity instead of leisurely use. We conducted a 17-day field study of a prototype of a personal AI agent that helps employees find work-related information. Using log data, surveys, and interviews, we found individual differences in the preference for humanized social interactions (social-agent orientation), which led to different user needs and requirements for agent design. We also explored the effect of agent proactive interactions and found that they carried the risk of interruption, especially for users who were generally averse to interruptions at work. Further, we found that user differences in social-agent orientation and aversion to agent proactive interactions can be inferred from behavioral signals. Our results inform research into social agent design, proactive agent interaction, and personalization of AI agents.

66 citations