Author

Cristen Torrey

Other affiliations: Adobe Systems
Bio: Cristen Torrey is an academic researcher from Carnegie Mellon University. The author has contributed to research in topics: Social robot & Personal robot. The author has an h-index of 12 and has co-authored 23 publications receiving 1,070 citations. Previous affiliations of Cristen Torrey include Adobe Systems.

Papers
Journal ArticleDOI
TL;DR: The authors studied whether an embodied human-like robot would elicit stronger anthropomorphic interactions than would a software agent, and whether physical presence moderated this effect, finding that participants were more engaged, disclosed less undesirable behavior, and forgot more with the robot versus the agent, while they ate less and anthropomorphized most with the collocated robot.
Abstract: People's physical embodiment and presence increase their salience and importance. We predicted people would anthropomorphize an embodied humanoid robot more than a robot-like agent, and a collocated robot more than a remote one. A robot or robot-like agent interviewed participants about their health. Participants were either present with the robot/agent, or interacted remotely with the robot/agent projected life-size on a screen. Participants were more engaged, disclosed less undesirable behavior, and forgot more with the robot versus the agent. They ate less and anthropomorphized most with the collocated robot. Participants interacted socially and attempted conversational grounding with the robot/agent even though they were aware it was a machine. Basic questions remain about how people resolve the ambiguity of interacting with a humanlike nonhuman. By virtue of our shared global fate and similar DNA, we humans increasingly appreciate our similarity to nature's living things. At the same time, we want machines, animals, and plants to meet our needs. Both impulses perhaps motivate the increasing development of humanlike robots and software agents. In this article, we examine social context moderation of anthropomorphic interactions between people and humanlike machines. We studied whether an embodied humanlike robot would elicit stronger anthropomorphic interactions than would a software agent, and whether physical presence moderated this effect. At the outset, robots and agents differ from ordinary computer programs in that they have autonomy, interact with the environment, and initiate tasks (Franklin & Graesser, 1996). The marriage of artificial intelligence and computer science has made possible robots and agents with humanlike capabilities, such as lifelike gestures and speech. Typically, "robot" refers to a physically embodied system whereas "agent" refers to a software system. Examples of humanlike robots are NASA's Robonaut, a humanoid that can hand tools to an astronaut (robonaut.jsc.nasa.gov/robonaut.html); Honda's Asimo; and Hiroshi Ishiguro's

304 citations

Proceedings ArticleDOI
10 Mar 2007
TL;DR: Tradeoffs for HRI research of using collocated robots, remote robots, and computer agents as proxies for robots are discussed; the experiment found a few behavioral differences and large attitude differences across these conditions.
Abstract: HRI researchers interested in social robots have made large investments in humanoid robots. There is still sparse evidence that people's responses to robots differ from their responses to computer agents, suggesting that agent studies might serve to test HRI hypotheses. To help us understand the difference between people's social interactions with an agent and a robot, we experimentally compared people's responses in a health interview with (a) a computer agent projected either on a computer monitor or life-size on a screen, (b) a remote robot projected life-size on a screen, or (c) a collocated robot in the same room. We found a few behavioral and large attitude differences across these conditions. Participants forgot more and disclosed least with the collocated robot, next with the projected remote robot, and then with the agent. They spent more time with the collocated robot and their attitudes were most positive toward that robot. We discuss tradeoffs for HRI research of using collocated robots, remote robots, and computer agents as proxies of robots.

283 citations

Proceedings ArticleDOI
04 Apr 2009
TL;DR: The description of people learning how allows us to elaborate existing understandings of information-seeking behavior by considering how search originates and is evaluated in knowledge domains involving physical objects and physical processes.
Abstract: Communicating the subtleties of a craft technique, like putting a zipper into a garment or throwing a clay pot, can be challenging even when working side by side. Yet How-To content - including text, images, animations, and videos - is available online for a wide variety of crafts. We interviewed people engaged in various crafts to investigate how online resources contributed to their craft practice. We found that participants sought creative inspiration as well as technical clarification online. In this domain, keyword search can be difficult, so supplemental strategies are used. Participants sought information iteratively, because they often needed to enact their knowledge in order to evaluate it. Our description of people learning how allows us to elaborate existing understandings of information-seeking behavior by considering how search originates and is evaluated in knowledge domains involving physical objects and physical processes.

123 citations

Proceedings ArticleDOI
03 Mar 2013
TL;DR: It is found that when robot and human helpers used hedges or discourse markers, they seemed more considerate and likeable, and less controlling; this suggests that communication strategies derived from the speech people use when helping each other in natural settings can be effective for planning the help dialogues of robotic assistants.
Abstract: With advances in robotics, robots can give advice and help using natural language. The field of HRI, however, has not yet developed a communication strategy for giving advice effectively. Drawing on literature on politeness and informal speech, we propose options for a robot's help-giving speech: using hedges or discourse markers, both of which can mitigate the commanding tone implied in direct statements of advice. To test these options, we experimentally compared two help-giving strategies depicted in videos of human and robot helpers. We found that when robot and human helpers used a hedge or discourse markers, they seemed more considerate and likeable, and less controlling. The robot that used discourse markers had even more impact than the human helper. The findings suggest that communication strategies derived from speech used when people help each other in natural settings can be effective for planning the help dialogues of robotic assistants.
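As a rough illustration of how the hedge and discourse-marker strategies described in this abstract could be templated in a robot's dialogue planner, the following minimal Python sketch contrasts direct, hedged, and discourse-marker phrasings of the same piece of advice. The word lists and the phrase_advice helper are hypothetical examples for illustration only, not the materials or implementation used in the study.

# Minimal sketch of hedge / discourse-marker mitigation for robot advice.
# The wording lists and function below are illustrative, not from the paper.

HEDGES = ["maybe", "perhaps", "I think"]
DISCOURSE_MARKERS = ["so", "well", "just"]

def phrase_advice(advice: str, strategy: str = "direct") -> str:
    """Wrap a direct advice statement with a softening strategy.

    strategy: "direct" keeps the commanding tone,
              "hedge" prefixes an uncertainty marker,
              "discourse" prefixes an informal discourse marker.
    """
    if strategy == "hedge":
        return f"{HEDGES[0].capitalize()} {advice[0].lower()}{advice[1:]}"
    if strategy == "discourse":
        return f"{DISCOURSE_MARKERS[0].capitalize()}, {advice[0].lower()}{advice[1:]}"
    return advice  # direct statement, unmitigated

if __name__ == "__main__":
    tip = "Use the red measuring cup for the flour."
    for s in ("direct", "hedge", "discourse"):
        print(f"{s:>9}: {phrase_advice(tip, s)}")

Running the sketch prints the same advice three ways (e.g., "Use the red measuring cup..." versus "Maybe use the red measuring cup..."), which is the kind of contrast the videos in the experiment presented to participants.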

90 citations

Proceedings ArticleDOI
02 Mar 2006
TL;DR: This work suggests adaptation in human-robot interaction has consequences for both task performance and social cohesion, and suggests that people may be more sensitive to social relations with robots when under task or time pressure.
Abstract: Human-robot interaction could be improved by designing robots that engage in adaptive dialogue with users. An adaptive robot could estimate the information needs of individuals and change its dialogue to suit these needs. We test the value of adaptive robot dialogue by experimentally comparing the effects of adaptation versus no adaptation on information exchange and social relations. In Experiment 1, a robot chef adapted to novices by providing detailed explanations of cooking tools; doing so improved information exchange for novice participants but did not influence experts. Experiment 2 added incentives for speed and accuracy and replicated the results from Experiment 1 with respect to information exchange. When the robot's dialogue was adapted for expert knowledge (names of tools rather than explanations), expert participants found the robot to be more effective, more authoritative, and less patronizing. This work suggests adaptation in human-robot interaction has consequences for both task performance and social cohesion. It also suggests that people may be more sensitive to social relations with robots when under task or time pressure.
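To make the adaptation described above concrete, here is a minimal Python sketch of how a dialogue planner could swap a terse tool name for a detailed explanation based on an estimated expertise level. The tool descriptions, dictionary, and robot_utterance function are hypothetical illustrations under that assumption, not the dialogue system used with the robot chef in these experiments.

# Sketch of expertise-adaptive dialogue selection.
# Tool descriptions and the expertise flag are illustrative examples only.

TOOL_DIALOGUE = {
    "whisk": {
        "expert": "Grab the whisk.",
        "novice": "Grab the whisk, the handle with thin wire loops, "
                  "used to beat air into eggs or cream.",
    },
    "zester": {
        "expert": "Use the zester.",
        "novice": "Use the zester, the small tool with tiny holes that "
                  "scrapes thin strips of peel from citrus fruit.",
    },
}

def robot_utterance(tool: str, user_is_expert: bool) -> str:
    """Return the robot's instruction, adapted to the user's estimated expertise."""
    level = "expert" if user_is_expert else "novice"
    return TOOL_DIALOGUE[tool][level]

if __name__ == "__main__":
    print(robot_utterance("zester", user_is_expert=False))  # detailed explanation for novices
    print(robot_utterance("zester", user_is_expert=True))   # tool name only for experts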

81 citations


Cited by
Journal ArticleDOI
TL;DR: Factors related to the robot itself, specifically, its performance, had the greatest current association with trust, and environmental factors were moderately associated; there was little evidence for effects of human-related factors.
Abstract: Objective: We evaluate and quantify the effects of human, robot, and environmental factors on perceived trust in human-robot interaction (HRI). Background: To date, reviews of trust in HRI have been

1,255 citations

Journal ArticleDOI
TL;DR: It was found that the effect size of human tutoring was much lower than previously thought, and that intelligent tutoring systems were nearly as effective as human tutors.
Abstract: This article is a review of experiments comparing the effectiveness of human tutoring, computer tutoring, and no tutoring. "No tutoring" refers to instruction that teaches the same content without tutoring. The computer tutoring systems were divided by the granularity of their user-interface interaction into answer-based, step-based, and substep-based tutoring systems. Most intelligent tutoring systems have step-based or substep-based granularities of interaction, whereas most other tutoring systems (often called CAI, CBT, or CAL systems) have answer-based user interfaces. It is widely believed that as the granularity of tutoring decreases, the effectiveness increases. In particular, when compared to no tutoring, the effect sizes of answer-based tutoring systems, intelligent tutoring systems, and adult human tutors are believed to be d = 0.3, 1.0, and 2.0, respectively. This review did not confirm these beliefs. Instead, it found that the effect size of human tutoring was much lower: d = 0.79. Moreover, the eff...
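For reference, the effect sizes d quoted here are standardized mean differences between a tutoring condition and the no-tutoring control; assuming the usual Cohen's d convention, they are computed roughly as

d = \frac{\bar{x}_{\text{tutoring}} - \bar{x}_{\text{no tutoring}}}{s_{\text{pooled}}},
\qquad
s_{\text{pooled}} = \sqrt{\frac{(n_1 - 1)\,s_1^2 + (n_2 - 1)\,s_2^2}{n_1 + n_2 - 2}}

so d = 0.79 means the average tutored student scored about 0.79 pooled standard deviations above the average untutored student.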

1,018 citations

Journal ArticleDOI
15 Aug 2018
TL;DR: The potential of social robots in education is reviewed, the technical challenges are discussed, and the ways in which the robot's appearance and behavior affect learning outcomes are considered.
Abstract: Social robots can be used in education as tutors or peer learners. They have been shown to be effective at increasing cognitive and affective outcomes and have achieved outcomes similar to those of human tutoring on restricted tasks. This is largely because of their physical presence, which traditional learning technologies lack. We review the potential of social robots in education, discuss the technical challenges, and consider how the robot's appearance and behavior affect learning outcomes.

747 citations

Journal ArticleDOI
TL;DR: In this paper, the authors highlight some of the challenges to hazards and disaster poli..., pointing to escalating disaster losses coupled with the increasing frequency of billion-dollar disaster events, such as the recent Hurricane Sandy.
Abstract: Escalating disaster losses coupled with the increasing frequency of billion-dollar disaster events, such as the recent Hurricane Sandy, highlight some of the challenges to hazards and disaster poli...

708 citations

Journal ArticleDOI
29 Mar 2009
TL;DR: In this article, the Reader-to-Leader Framework is proposed to understand what motivates technology-mediated social participation and improve user interface design and social support for companies, government agencies, and non-governmental organizations.
Abstract: Billions of people participate in online social activities. Most users participate as readers of discussion boards, searchers of blog posts, or viewers of photos. A fraction of users become contributors of user-generated content by writing consumer product reviews, uploading travel photos, or expressing political opinions. Some users move beyond such individual efforts to become collaborators, forming tightly connected groups with lively discussions whose outcome might be a Wikipedia article or a carefully edited YouTube video. A small fraction of users becomes leaders, who participate in governance by setting and upholding policies, repairing vandalized materials, or mentoring novices. We analyze these activities and offer the Reader-to-Leader Framework with the goal of helping researchers, designers, and managers understand what motivates technology-mediated social participation. This will enable them to improve interface design and social support for their companies, government agencies, and non-governmental organizations. These improvements could reduce the number of failed projects, while accelerating the application of social media for national priorities such as healthcare, energy sustainability, emergency response, economic development, education, and more.

672 citations