
Showing papers on "Social robot published in 2015"


Journal ArticleDOI
TL;DR: The state of the art in continuum robot manipulators and systems intended for application to interventional medicine are described, and relevant research in design, modeling, control, and sensing for continuum manipulators are discussed.
Abstract: In this paper, we describe the state of the art in continuum robot manipulators and systems intended for application to interventional medicine. Inspired by biological trunks, tentacles, and snakes, continuum robot designs can traverse confined spaces, manipulate objects in complex environments, and conform to curvilinear paths in space. In addition, many designs offer inherent structural compliance and ease of miniaturization. After decades of pioneering research, a host of designs have now been investigated and have demonstrated capabilities beyond the scope of conventional rigid-link robots. Recently, we have seen increasing efforts aimed at leveraging these qualities to improve the frontiers of minimally invasive surgical interventions. Several concepts have now been commercialized, which are inspiring and enabling a current paradigm shift in surgical approaches toward flexible access routes, e.g., through natural orifices such as the nose. In this paper, we provide an overview of the current state of this field from the perspectives of both robotics science and medical applications. We discuss relevant research in design, modeling, control, and sensing for continuum manipulators, and we highlight how this work is being used to build robotic systems for specific surgical procedures. We provide perspective for the future by discussing current limitations, open questions, and challenges.

986 citations


Journal ArticleDOI
Jamy Li1
TL;DR: Qualitative assessment of the direction of quantitative effects demonstrated that robots were more persuasive and perceived more positively when physically present in a user's environment than when digitally displayed on a screen, either as a video feed of the same robot or as a virtual character analog.
Abstract: The effects of physical embodiment and physical presence were explored through a survey of 33 experimental works comparing how people interacted with physical robots and virtual agents. A qualitative assessment of the direction of quantitative effects demonstrated that robots were more persuasive and perceived more positively when physically present in a user's environment than when digitally displayed on a screen, either as a video feed of the same robot or as a virtual character analog; robots also led to better user performance when they were collocated as opposed to shown via video on a screen. However, participants did not respond differently to physical robots and virtual agents when both were displayed digitally on a screen, suggesting that physical presence, rather than physical embodiment, characterizes people's responses to social robots. Implications for understanding psychological response to physical and virtual agents and for methodological design are discussed. The survey identified 33 works exploring user responses to physical robots and virtual agents. Robot agents had greater influence when physically present than telepresent. No differences were found between physical robots displayed on a screen and virtual agents that looked similar. Physical presence, but not physical embodiment alone, resulted in more favorable responses from participants.

389 citations


Journal ArticleDOI
TL;DR: A review from sociological concepts to social robotics and human-aware navigation, and recent robotic experiments focusing on the way social conventions and robotics must be linked are presented.
Abstract: In the context of a growing interest in modelling human behavior to increase the robots' social abilities, this article presents a survey related to socially-aware robot navigation. It presents a review from sociological concepts to social robotics and human-aware navigation. Social cues, signals and proxemics are discussed. Socially aware navigation behavior is also addressed. Finally, recent robotic experiments focusing on the way social conventions and robotics must be linked are presented.

287 citations


Journal ArticleDOI
TL;DR: It is concluded that a cooperation model is critical for safe and efficient robot navigation in dense human crowds, and that the non-cooperative and reactive baseline planners capture the salient characteristics of nearly any dynamic navigation algorithm.
Abstract: We consider the problem of navigating a mobile robot through dense human crowds. We begin by exploring a fundamental impediment to classical motion planning algorithms called the “freezing robot problem”: once the environment surpasses a certain level of dynamic complexity, the planner decides that all forward paths are unsafe, and the robot freezes in place or performs unnecessary maneuvers to avoid collisions. We argue that this problem can be avoided if the robot anticipates human cooperation, and accordingly we develop interacting Gaussian processes, a prediction density that captures cooperative collision avoidance, and a “multiple goal” extension that models the goal-driven nature of human decision making. We validate this model with an empirical study of robot navigation in dense human crowds (488 runs), specifically testing how cooperation models affect navigation performance. The multiple goal interacting Gaussian processes algorithm performs comparably with human teleoperators in crowd densities nearing 0.8 humans/m2, while a state-of-the-art non-cooperative planner exhibits unsafe behavior more than three times as often as the multiple goal extension, and twice as often as the basic interacting Gaussian process approach. Furthermore, a reactive planner based on the widely used dynamic window approach proves insufficient for crowd densities above 0.55 people/m2. We also show that our non-cooperative planner or our reactive planner captures the salient characteristics of nearly any dynamic navigation algorithm. Based on these experimental results and theoretical observations, we conclude that a cooperation model is critical for safe and efficient robot navigation in dense human crowds.

258 citations
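To make the interacting Gaussian process (IGP) idea above concrete, here is a minimal sketch, not the authors' implementation: one GP is fitted per agent from its observed track, joint future samples are drawn, and the samples are re-weighted by an interaction potential that penalises close approaches. The kernel choice, function names, and toy crossing scenario are illustrative assumptions.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def fit_track_gp(times, positions):
    """Fit independent GPs over time for the x and y coordinates of one agent."""
    kernel = RBF(length_scale=1.0) + WhiteKernel(noise_level=0.01)
    return [GaussianProcessRegressor(kernel=kernel, normalize_y=True)
            .fit(times.reshape(-1, 1), positions[:, dim]) for dim in range(2)]

def sample_future(gps, future_times, n_samples, rng):
    """Draw sample trajectories of shape (n_samples, T, 2) from a per-agent GP."""
    t = future_times.reshape(-1, 1)
    xs = gps[0].sample_y(t, n_samples, random_state=int(rng.integers(1 << 31)))
    ys = gps[1].sample_y(t, n_samples, random_state=int(rng.integers(1 << 31)))
    return np.stack([xs.T, ys.T], axis=-1)

def interaction_weight(traj_a, traj_b, safety_radius=0.5):
    """Down-weight joint samples in which the two agents come too close."""
    dists = np.linalg.norm(traj_a - traj_b, axis=-1)
    return np.prod(1.0 - np.exp(-(dists / safety_radius) ** 2), axis=-1)

# Toy scenario: robot moving along +x, one pedestrian crossing its path.
rng = np.random.default_rng(0)
obs_t = np.linspace(0.0, 2.0, 5)
robot_obs = np.stack([obs_t, 0.05 * np.sin(obs_t)], axis=1)
person_obs = np.stack([2.0 - obs_t, obs_t - 1.0], axis=1)
future_t = np.linspace(2.2, 4.0, 10)

robot_samples = sample_future(fit_track_gp(obs_t, robot_obs), future_t, 50, rng)
person_samples = sample_future(fit_track_gp(obs_t, person_obs), future_t, 50, rng)

# Pair sample i of the robot with sample i of the pedestrian as one joint sample,
# then pick the joint sample with the highest cooperation weight: the robot follows
# a mutually feasible path instead of "freezing" behind an independent prediction.
weights = interaction_weight(robot_samples, person_samples)
best = int(np.argmax(weights))
print("chosen robot path:\n", robot_samples[best].round(2))
```

In the paper this density is combined with a "multiple goal" extension and used for navigation; the sketch only shows the re-weighting step that encodes cooperative collision avoidance.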


Book ChapterDOI
24 Jun 2015
TL;DR: How the SPENCER project advances the fields of detection and tracking of individuals and groups, recognition of human social relations and activities, normative human behavior learning, socially-aware task and motion planning, learning socially annotated maps, and conducting empirical experiments to assess socio-psychological effects of normative robot behaviors is described.
Abstract: We present an ample description of a socially compliant mobile robotic platform, which is developed in the EU-funded project SPENCER. The purpose of this robot is to assist, inform and guide passengers in large and busy airports. One particular aim is to bring travellers of connecting flights conveniently and efficiently from their arrival gate to the passport control. The uniqueness of the project stems from the strong demand of service robots for this application with a large potential impact for the aviation industry on one side, and on the other side from the scientific advancements in social robotics, brought forward and achieved in SPENCER. The main contributions of SPENCER are novel methods to perceive, learn, and model human social behavior and to use this knowledge to plan appropriate actions in real-time for mobile platforms. In this paper, we describe how the project advances the fields of detection and tracking of individuals and groups, recognition of human social relations and activities, normative human behavior learning, socially-aware task and motion planning, learning socially annotated maps, and conducting empirical experiments to assess socio-psychological effects of normative robot behaviors.

240 citations


Proceedings ArticleDOI
02 Mar 2015
TL;DR: It was found that children interacting with a robot using social and adaptive behaviours in addition to the teaching strategy did not learn a significant amount, indicating that while the presence of a physical robot leads to improved learning, caution is required when applying social behaviour to a robot in a tutoring context.
Abstract: Social robots are finding increasing application in the domain of education, particularly for children, to support and augment learning opportunities. With an implicit assumption that social and adaptive behaviour is desirable, it is therefore of interest to determine precisely how these aspects of behaviour may be exploited in robots to support children in their learning. In this paper, we explore this issue by evaluating the effect of a social robot tutoring strategy with children learning about prime numbers. It is shown that the tutoring strategy itself leads to improvement, but that the presence of a robot employing this strategy amplifies this effect, resulting in significant learning. However, it was also found that children interacting with a robot using social and adaptive behaviours in addition to the teaching strategy did not learn a significant amount. These results indicate that while the presence of a physical robot leads to improved learning, caution is required when applying social behaviour to a robot in a tutoring context.

214 citations


Journal ArticleDOI
TL;DR: A long-term explorative study has been conducted by installing a social robot for health promotion in elderly people's own homes, providing an in-depth understanding of the factors that influence the acceptance of and relationship-building with social robots in domestic environments.

206 citations


Proceedings ArticleDOI
02 Mar 2015
TL;DR: The first comparison of people's moral judgments about human and robot agents is reported, finding that robots, compared with human agents, were more strongly expected to take an action that sacrifices one person for the good of many (a “utilitarian” choice), and they were blamed more than their human counterparts when they did not make that choice.
Abstract: Moral norms play an essential role in regulating human interaction. With the growing sophistication and proliferation of robots, it is important to understand how ordinary people apply moral norms to robot agents and make moral judgments about their behavior. We report the first comparison of people's moral judgments (of permissibility, wrongness, and blame) about human and robot agents. Two online experiments (total N = 316) found that robots, compared with human agents, were more strongly expected to take an action that sacrifices one person for the good of many (a “utilitarian” choice), and they were blamed more than their human counterparts when they did not make that choice. Though the utilitarian sacrifice was generally seen as permissible for human agents, they were blamed more for choosing this option than for doing nothing. These results provide a first step toward a new field of Moral HRI, which is well placed to help guide the design of social robots.

201 citations


Journal ArticleDOI
TL;DR: This paper introduces a set of metrics useful in direct, face-to-face scenarios, based on the behavior analysis of the human partners, and shows how such metrics are useful to assess how the robot is perceived by humans and how this perception changes according to the behaviors shown by the social robot.
Abstract: To interact and cooperate with humans in their daily-life activities, robots should exhibit human-like "intelligence". This skill will substantially emerge from the interconnection of all the algorithms used to ensure cognitive and interaction capabilities. While new robotics technologies allow us to extend such abilities, their evaluation for social interaction is still challenging. The quality of a human-robot interaction cannot be reduced to the evaluation of the employed algorithms: we should integrate the engagement information that naturally arises during interaction in response to the robot's behaviors. In this paper we want to show a practical approach to evaluate the engagement aroused during interactions between humans and social robots. We will introduce a set of metrics useful in direct, face-to-face scenarios, based on the behavior analysis of the human partners. We will show how such metrics are useful to assess how the robot is perceived by humans and how this perception changes according to the behaviors shown by the social robot. We discuss experimental results obtained in two human-interaction studies, with the robots Nao and iCub respectively.

172 citations


Proceedings ArticleDOI
02 Mar 2015
TL;DR: A novel robotic partner to which children can teach handwriting is presented, which relies on the learning by teaching paradigm to build an interaction, so as to stimulate meta-cognition, empathy and increased self-esteem in the child user.
Abstract: This article presents a novel robotic partner to which children can teach handwriting. The system relies on the learning by teaching paradigm to build an interaction, so as to stimulate meta-cognition, empathy and increased self-esteem in the child user. We hypothesise that use of a humanoid robot in such a system could not just engage an unmotivated student, but could also present the opportunity for children to experience physically-induced benefits encountered during human-led handwriting interventions, such as motor mimicry. By leveraging simulated handwriting on a synchronised tablet display, a NAO humanoid robot with limited fine motor capabilities has been configured as a suitably embodied handwriting partner. Statistical shape models derived from principal component analysis of a dataset of adult-written letter trajectories allow the robot to draw purposefully deformed letters. By incorporating feedback from user demonstrations, the system is then able to learn the optimal parameters for the appropriate shape models. Preliminary in situ studies have been conducted with primary school classes to obtain insight into children's use of the novel system. Children aged 6-8 successfully engaged with the robot and improved its writing to a level with which they were satisfied. The validation of the interaction represents a significant step towards an innovative use for robotics which addresses a widespread and socially meaningful challenge in education.

163 citations
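As a rough illustration of the statistical shape model mentioned above, the sketch below builds a PCA-style model from letter trajectories and synthesises deliberately deformed letters by perturbing the low-dimensional shape parameters. It is a sketch under assumptions, not the project's code; the synthetic "letter" data and parameter values are hypothetical.

```python
import numpy as np

def build_shape_model(trajectories):
    """trajectories: (n_examples, n_points, 2) letter paths resampled to a fixed length."""
    X = trajectories.reshape(len(trajectories), -1)        # flatten to (n, 2 * n_points)
    mean = X.mean(axis=0)
    U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
    stds = S / np.sqrt(max(len(X) - 1, 1))                 # per-mode standard deviation
    return mean, Vt, stds

def synthesise_letter(mean, modes, stds, params):
    """Generate a letter path from a few shape parameters (one weight per mode)."""
    k = len(params)
    flat = mean + (np.asarray(params) * stds[:k]) @ modes[:k]
    return flat.reshape(-1, 2)

# Toy usage: noisy circles stand in for a dataset of adult-written trajectories.
rng = np.random.default_rng(1)
angles = np.linspace(0, 2 * np.pi, 50)
examples = np.stack([
    np.stack([np.cos(angles), np.sin(angles)], axis=1) * (1 + 0.1 * rng.standard_normal())
    + 0.05 * rng.standard_normal((50, 2))
    for _ in range(20)
])

mean, modes, stds = build_shape_model(examples)
deformed = synthesise_letter(mean, modes, stds, params=[2.0, -1.0])
print(deformed[:5].round(3))
```

A learning loop in the spirit of the paper would then nudge the shape parameters toward the child's demonstrations, so the robot's writing appears to improve as the child teaches it.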


Journal ArticleDOI
07 Jun 2015
TL;DR: The ROSPLAN framework is described, an architecture for embedding task planning into ROS systems and a case study in autonomous robotics is provided, involving autonomous underwater vehicles in scenarios that demonstrate the flexibility and robustness of the approach.
Abstract: The Robot Operating System (ROS) is a set of software libraries and tools used to build robotic systems. ROS is known for a distributed and modular design. Given a model of the environment, task planning is concerned with the assembly of actions into a structure that is predicted to achieve goals. This can be done in a way that minimises costs, such as time or energy. Task planning is vital in directing the actions of a robotic agent in domains where a causal chain could lock the agent into a dead-end state. Moreover, planning can be used in less constrained domains to provide more intelligent behaviour. This paper describes the ROSPLAN framework, an architecture for embedding task planning into ROS systems. We provide a description of the architecture and a case study in autonomous robotics. Our case study involves autonomous underwater vehicles in scenarios that demonstrate the flexibility and robustness of our approach.
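The plan-generate-dispatch cycle that such a framework embeds in ROS can be sketched schematically as follows. This is not the ROSPLAN interface; every function name below is a hypothetical placeholder used only to illustrate the pattern of replanning when execution fails or the world model changes.

```python
from typing import Callable, List

def planning_cycle(build_problem: Callable[[], str],
                   call_planner: Callable[[str], List[str]],
                   execute: Callable[[str], bool],
                   max_replans: int = 3) -> bool:
    """Generate a planning problem from the current world model, plan, and dispatch
    actions one by one, replanning whenever an action fails (e.g. an AUV pushed off
    course by a current)."""
    for _ in range(max_replans):
        problem = build_problem()         # e.g. PDDL text built from a knowledge base
        plan = call_planner(problem)      # e.g. ["(navigate wp0 wp1)", "(inspect pillar1)"]
        if all(execute(action) for action in plan):
            return True                   # mission goals achieved
        # Otherwise fall through: sensing has updated the world model,
        # so the next iteration replans from the new state.
    return False

# Toy usage with stub components standing in for ROS nodes:
ok = planning_cycle(
    build_problem=lambda: "(define (problem p) ...)",
    call_planner=lambda prob: ["(navigate wp0 wp1)", "(take-sample wp1)"],
    execute=lambda action: True,
)
print(ok)
```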

Journal ArticleDOI
TL;DR: A role adaptation method for human-robot shared control is proposed such that the robot is able to adjust its own role according to the human's intention to lead or follow, which is inferred through the measured interaction force.
Abstract: In this paper, we propose a role adaptation method for human–robot shared control. Game theory is employed for fundamental analysis of this two-agent system. An adaptation law is developed such that the robot is able to adjust its own role according to the human's intention to lead or follow, which is inferred through the measured interaction force. In the absence of human interaction forces, the adaptive scheme allows the robot to take the lead and complete the task by itself. On the other hand, when the human persistently exerts strong forces that signal an unambiguous intent to lead, the robot yields and becomes the follower. Additionally, the full spectrum of mixed roles between these extreme scenarios is afforded by continuous online update of the control that is shared between both agents. Theoretical analysis shows that the resulting shared control is optimal with respect to a two-agent coordination game. Experimental results illustrate better overall performance, in terms of both error and effort, compared with fixed-role interactions.
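A minimal sketch of a force-based role adaptation law in the spirit of the abstract above is shown below; the gains, force scale, and blending rule are illustrative assumptions, not the paper's game-theoretic controller.

```python
import numpy as np

def update_role(alpha, interaction_force, dt, force_scale=10.0, rate=1.5):
    """alpha = 1: robot leads fully; alpha = 0: robot yields and follows the human."""
    target = np.exp(-abs(interaction_force) / force_scale)   # strong force -> small target
    alpha += rate * (target - alpha) * dt                    # first-order adaptation
    return float(np.clip(alpha, 0.0, 1.0))

def shared_command(alpha, robot_plan_cmd, human_force, admittance=0.05):
    """Blend the robot's own plan with an admittance-style response to the human."""
    return alpha * robot_plan_cmd + (1.0 - alpha) * admittance * human_force

# Toy run: the human pushes hard between t = 2 s and t = 4 s, so the robot
# temporarily hands over the lead and then takes it back once the force vanishes.
alpha, dt = 1.0, 0.01
for step in range(600):
    t = step * dt
    force = 25.0 if 2.0 < t < 4.0 else 0.0
    alpha = update_role(alpha, force, dt)
    cmd = shared_command(alpha, robot_plan_cmd=0.2, human_force=force)
    if step % 100 == 0:
        print(f"t={t:4.1f}s  alpha={alpha:4.2f}  command={cmd:5.2f}")
```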

Proceedings ArticleDOI
02 Mar 2015
TL;DR: A statistical model of the occurrence of children's abuse is developed that enables the robot to predict the possibility of an abuse situation and escape before it happens, and it is demonstrated that with this model the robot successfully lowered the occurrence of abuse in a real shopping mall.
Abstract: Social robots working in public space often stimulate children's curiosity. However, sometimes children also show abusive behavior toward robots. In our case studies, we observed in many cases that children persistently obstruct the robot's activity. Some actually abused the robot by saying bad things, and at times even kicking or punching the robot. We developed a statistical model of occurrence of children's abuse. Using this model together with a simulator of pedestrian behavior, we enabled the robot to predict the possibility of an abuse situation and escape before it happens. We demonstrated that with the model the robot successfully lowered the occurrence of abuse in a real shopping mall.
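As an illustration of the escape-before-abuse idea, the toy model below estimates an abuse probability from simple crowd features and switches to an escape behaviour above a threshold. The features, coefficients, and threshold are hypothetical and are not the paper's fitted statistical model.

```python
import math

def abuse_probability(n_children, adult_present, interaction_seconds):
    """Toy logistic model: more unaccompanied children and longer dwell time -> higher risk."""
    z = (-3.0
         + 0.9 * n_children
         - 1.5 * (1 if adult_present else 0)
         + 0.01 * interaction_seconds)
    return 1.0 / (1.0 + math.exp(-z))

def plan_next_action(n_children, adult_present, interaction_seconds, threshold=0.4):
    p = abuse_probability(n_children, adult_present, interaction_seconds)
    if p > threshold:
        return "escape_toward_adults", p     # pre-emptively move to a lower-risk area
    return "continue_interaction", p

print(plan_next_action(n_children=1, adult_present=True, interaction_seconds=30))
print(plan_next_action(n_children=4, adult_present=False, interaction_seconds=120))
```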

Proceedings ArticleDOI
28 Dec 2015
TL;DR: This paper is a case report explaining the development process of an application that contains three educational programs that children can select when interacting with Pepper, a personal robot developed by SoftBank Robotics Corp. and Aldebaran Robotics SAS.
Abstract: An educational use of Pepper, a personal robot that was developed by SoftBank Robotics Corp. and Aldebaran Robotics SAS, is described. Applying the two concepts of care-receiving robot (CRR) and total physical response (TPR) to the design of an educational application using Pepper, we offer a scenario in which children learn together with Pepper in their home environments from a human teacher who gives a lesson from a remote classroom. This paper is a case report that explains the development process of the application, which contains three educational programs that children can select when interacting with Pepper. Feedback and knowledge obtained from test trials are also described.

Journal ArticleDOI
TL;DR: This review article provides an overview of the efforts made to tackle this demanding task and discusses how the findings can be synthesized in computer graphics and utilized in the domains of Human-Robot Interaction and Human-Computer Interaction to allow humans to interact with virtual agents and other artificial entities.
Abstract: A person's emotions and state of mind are apparent in their face and eyes. As a Latin proverb states: 'The face is the portrait of the mind; the eyes, its informers'. This presents a significant challenge for Computer Graphics researchers who generate artificial entities that aim to replicate the movement and appearance of the human eye, which is so important in human-human interactions. This review article provides an overview of the efforts made on tackling this demanding task. As with many topics in computer graphics, a cross-disciplinary approach is required to fully understand the workings of the eye in the transmission of information to the user. We begin with a discussion of the movement of the eyeballs, eyelids and the head from a physiological perspective and how these movements can be modelled, rendered and animated in computer graphics applications. Furthermore, we present recent research from psychology and sociology that seeks to understand higher level behaviours, such as attention and eye gaze, during the expression of emotion or during conversation. We discuss how these findings are synthesized in computer graphics and can be utilized in the domains of Human-Robot Interaction and Human-Computer Interaction for allowing humans to interact with virtual agents and other artificial entities. We conclude with a summary of guidelines for animating the eye and head from the perspective of a character animator.

Proceedings ArticleDOI
02 Mar 2015
TL;DR: It is shown that some of the child's curiosity measures are significantly higher after interacting with a curious robot, compared to a non-curious one, while others are not.
Abstract: Curiosity is key to learning, yet school children show wide variability in their eagerness to acquire information. Recent research suggests that other people have a strong influence on children's exploratory behavior. Would a curious robot elicit children's exploration and the desire to find out new things? In order to answer this question we designed a novel experimental paradigm in which a child plays an educational tablet app with an autonomous social robot, which is portrayed as a younger peer. We manipulated the robot's behavior to be either curiosity-driven or not and measured the child's curiosity after the interaction. We show that some of the child's curiosity measures are significantly higher after interacting with a curious robot, compared to a non-curious one, while others are not. These results suggest that interacting with an autonomous social curious robot can selectively guide and promote children's curiosity.

Journal ArticleDOI
TL;DR: This literature review attempts to provide an engaged but sober (non-speculative) insight into the societal issues raised by the new robotics: which robot technologies are coming; what are they capable of; and which ethical and regulatory questions will they consequently raise.
Abstract: This article investigates the social significance of robotics for the years to come in Europe and the US by studying robotics developments in five different areas: the home, health care, traffic, the police force, and the army. Our society accepts the use of robots to perform dull, dangerous, and dirty industrial jobs. But now that robotics is moving out of the factory, the relevant question is how far do we want to go with the automation of care for children and the elderly, of killing terrorists, or of making love? This literature review attempts to provide an engaged but sober (non-speculative) insight into the societal issues raised by the new robotics: which robot technologies are coming; what are they capable of; and which ethical and regulatory questions will they consequently raise?

Proceedings ArticleDOI
01 Jun 2015
TL;DR: A survey suggests that the use of robots in the classroom has indeed moved from a purely technological focus to an educational one, encompassing new didactic fields, and proposes an educational framework merging the tangibility of robots with the advanced visibility of augmented reality.
Abstract: Can robots in the classroom reshape K-12 STEM education and foster new ways of learning? To sketch an answer, this article reviews, side by side, existing literature on robot-based learning activities featuring mathematics and physics (purposefully putting aside the well-studied field of “robots to teach robotics”) and existing robot platforms and toolkits suited for the classroom environment (in terms of cost, ease of use, orchestration load for the teacher, etc.). Our survey suggests that the use of robots in the classroom has indeed moved from a purely technological focus to an educational one, encompassing new didactic fields. We however identified several shortcomings, in terms of robotic platforms and teaching environments, that contribute to the limited presence of robotics in existing curricula, with the lack of specific teacher training likely being pivotal. Finally, we propose an educational framework merging the tangibility of robots with the advanced visibility of augmented reality.

Journal ArticleDOI
TL;DR: The results demonstrate that children overcome strong incorrect biases in the material to be learned, but with no significant differences between embodiment conditions, and suggest that the use of real robots carries an advantage in terms of social presence that could provide educational benefits.
Abstract: The application of social robots to the domain of education is becoming more prevalent. However, there remain a wide range of open issues, such as the effectiveness of robots as tutors on student learning outcomes, the role of social behaviour in teaching interactions, and how the embodiment of a robot influences the interaction. In this paper, we seek to explore children’s behaviour towards a robot tutor for children in a novel guided discovery learning interaction. Since the necessity of real robots (as opposed to virtual agents) in education has not been definitively established in the literature, the effect of robot embodiment is assessed. The results demonstrate that children overcome strong incorrect biases in the material to be learned, but with no significant differences between embodiment conditions. However, the data do suggest that the use of real robots carries an advantage in terms of social presence that could provide educational benefits.

Journal ArticleDOI
TL;DR: This study investigates whether the presence of a social robot and interaction with it raises children’s interest in science and shows that even though Robovie did not influence the science curiosity of the entire class, there were individual increases in the children who asked Robovie science questions.
Abstract: This study investigates whether the presence of a social robot and interaction with it raises children’s interest in science. We placed Robovie, our social robot, in an elementary school science class where children could freely interact with it during their breaks. Robovie was tele-operated and its behaviors were designed to answer any questions related to science. It encouraged the children to ask about science by initiating conversations about class topics. Our result shows that even though Robovie did not influence the science curiosity of the entire class, there were individual increases in the children who asked Robovie science questions.

Journal ArticleDOI
TL;DR: The results suggest that German respondents have neutral attitudes toward education robots, and the data support the notion of relative reluctance to engage in learning processes that include robots.
Abstract: Previous research on attitudes toward robots has emphasized the aspect of cultural differences regarding the acceptance of social robots in everyday life. Existing work has also focused on the importance of various other factors (e.g., demographic variables, interest in science and technology, prior robot experience) that predict robot acceptance. Specific robot types like service or healthcare robots have also been investigated. Nevertheless, more research is needed to substantiate the empirical evidence on the role of culture, robot type, and other predictors when researching attitudes toward robots. We did so by conducting a survey on attitudes toward education robots in the German context. In addition, in the present research we investigated predictors of attitudes toward education robots. Contrary to previous findings, our results suggest that German respondents have neutral attitudes toward education robots. However, our data support the notion of relative reluctance to engage in learning processes that include robots. Regarding demographic variables and personality dispositions, our results show that gender, age, need for cognition, and technology commitment significantly predicted people's attitudes. Concerning potential areas of application, respondents could picture using education robots in domains related to science, technology, engineering, and mathematics and rejected education robots in fields of arts and social sciences.

Proceedings ArticleDOI
26 May 2015
TL;DR: This work presents a semi supervised learning approach, where the robot learns its traversability capabilities from a human operating it, and infers a model for the traversability analysis, thereby requiring very little manual effort for the human.
Abstract: The ability to safely navigate is a crucial prerequisite for truly autonomous systems. A robot has to distinguish obstacles from traversable ground. Failing at this task can cause great damage or restrict the robot's movement unnecessarily. Due to the safety relevance of this problem, great effort is typically spent to design models for individual robots and sensors, and the complexity of such models is correlated to the complexity of the environment and the capabilities of the robot. We present a semi-supervised learning approach, in which the robot learns its traversability capabilities from a human operating it. From this partial, positive-only labeled training data, our approach infers a model for traversability analysis, thereby requiring very little manual effort from the human. In practical experiments we show that our method can be used for robots that need to reliably navigate on dirt roads as well as for robots that have very restricted traversability capabilities.
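One way to realise positive-only traversability learning is sketched below: terrain patches driven over by the human operator are treated as positive examples, and a one-class model flags dissimilar patches as obstacles. This is a hedged sketch, not the paper's method, and the per-patch features are assumed.

```python
import numpy as np
from sklearn.svm import OneClassSVM
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(2)

# Hypothetical per-patch features: [roughness, slope_deg, mean_height_step].
traversed = np.column_stack([
    rng.normal(0.05, 0.02, 300),   # smooth dirt road, as driven by the operator
    rng.normal(3.0, 1.5, 300),
    rng.normal(0.02, 0.01, 300),
])

model = make_pipeline(StandardScaler(), OneClassSVM(nu=0.05, gamma="scale"))
model.fit(traversed)               # only positive (traversed) examples are needed

candidates = np.array([
    [0.06, 2.5, 0.02],             # another dirt-road patch
    [0.40, 25.0, 0.30],            # rubble / steep step
])
labels = model.predict(candidates) # +1 = looks traversable, -1 = treat as obstacle
print(dict(zip(["dirt_road", "rubble"], labels)))
```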

Patent
04 Mar 2015
TL;DR: In this patent, the authors present a system for creating an immersive virtual environment using a virtual reality system that receives parameters corresponding to a real-world robot; feedback such as the current state of the virtual robot or the real-world robot may be provided to the user.
Abstract: System and methods to create an immersive virtual environment using a virtual reality system that receives parameters corresponding to a real-world robot. The real-world robot may be simulated to create a virtual robot based on the received parameters. The immersive virtual environment may be transmitted to a user. The user may supply input and interact with the virtual robot. Feedback such as the current state of the virtual robot or the real-world robot may be provided to the user. The user may train the virtual robot. The real-world robot may be programmed based on the virtual robot training.

Proceedings ArticleDOI
02 Mar 2015
TL;DR: The results suggest that people may empathize more with a physical robot than a simulated one, a finding that has important implications on the generalizability and applicability of simulated HRI work.
Abstract: In designing and evaluating human-robot interactions and interfaces, researchers often use a simulated robot due to the high cost of robots and time required to program them. However, it is important to consider how interaction with a simulated robot differs from a real robot; that is, do simulated robots provide authentic interaction? We contribute to a growing body of work that explores this question and maps out simulated-versus-real differences, by explicitly investigating empathy: how people empathize with a physical or simulated robot when something bad happens to it. Our results suggest that people may empathize more with a physical robot than a simulated one, a finding that has important implications on the generalizability and applicability of simulated HRI work. Empathy is particularly relevant to social HRI and is integral to, for example, companion and care robots. Our contribution additionally includes an original and reproducible HRI experimental design to induce empathy toward robots in laboratory settings, and an experimentally validated empathy-measuring instrument from psychology for use with HRI.

Journal ArticleDOI
TL;DR: This research studied the combined and individual contributions of these two persuasive strategies (gestures and gazing) to the persuasiveness of a storytelling robot and presents evidence that a robot's persuasiveness is increased when gazing is used.
Abstract: Earlier theorizing suggested that an (artificial) agent that combines persuasive strategies will be more persuasive. Therefore, the current research investigated whether a robot that uses two persuasive strategies is more persuasive than a robot that uses only one. Two crucial persuasive strategies that humans use in face-to-face persuasion are gazing and gestures, and therefore we studied the combined and individual contributions of these two persuasive strategies (gestures and gazing) to the persuasiveness of a storytelling robot. A robot told a classical persuasive story about the consequences of lying to forty-eight participants, and was programmed to use (persuasive) gestures (or not) and gazing (or not). Next, we asked participants to evaluate the character in the story, thereby assessing the robot's persuasiveness. The results provide evidence that a robot's persuasiveness is increased when gazing is used. When the robot used gestures, its persuasiveness only increased when it also used gazing. When the robot did not use gazing, using gestures diminished the robot's persuasiveness. We discuss the implications for the theory and design of robots that are more persuasive.

Proceedings ArticleDOI
02 Mar 2015
TL;DR: It is demonstrated preliminarily that children are more eager to emotionally connect with and be physically activated by a robot than a virtual character, illustrating the potential of social robots to provide socio-emotional support during inpatient pediatric care.
Abstract: Children and their parents may undergo challenging experiences when admitted for inpatient care at pediatric hospitals. While most hospitals make efforts to provide socio-emotional support for patients and their families during care, gaps still exist between human resource supply and demand. The Huggable project aims to close this gap by creating a social robot able to mitigate stress, anxiety, and pain in pediatric patients by engaging them in playful interactive activities. In this paper, we introduce a larger experimental design to compare the effects of the Huggable robot to a virtual character on a screen and a plush teddy bear, and provide initial qualitative analyses of patients' and parents' behaviors during intervention sessions collected thus far. We demonstrate preliminarily that children are more eager to emotionally connect with and be physically activated by a robot than a virtual character, illustrating the potential of social robots to provide socio-emotional support during inpatient pediatric care.

Journal ArticleDOI
TL;DR: Results show that users tend to maintain a personal distance when interacting with an embodied robot and that embodiment engages users in maintaining longer interactions.
Abstract: This paper provides the results of various trial experiments in a hotel environment carried out using Sacarino, an interactive bellboy robot. We analysed which aspects of the robot design and behaviour are relevant in terms of user engagement and comfort when interacting with our social robot. The experiments carried out focused on the influence over the proxemics, duration and effectiveness of the interaction, taking into account three dichotomous factors related to the robot design and behaviour: robot embodiment (with/without robotic body), status of the robot (awake/asleep) and who starts communication (robot/user). Results show that users tend to maintain a personal distance when interacting with an embodied robot and that embodiment engages users in maintaining longer interactions. On the other hand, including a greeting model in a robot is useful in terms of engaging users to maintain longer interactions, and an active-looking robot is more attractive to the participants, producing longer interactions than in the case of a passive-looking robot. A bellboy social robot interacting with guests in a hotel is presented. Social robot design should take into account the target user's age, which influences the distance of use. A robotic body remarkably encourages HCI (with respect to common computers). A two-step salutation can attract user attention while avoiding intimidation. Multimodal systems are highly recommended in real, noisy environments.

Journal ArticleDOI
TL;DR: This work presents an artificial emotional intelligence system for robots, with both a generative and a perceptual aspect, and explores the expressive capabilities of an abstract, faceless, creature-like robot, with very few degrees of freedom.
Abstract: For social robots to respond to humans in an appropriate manner, they need to use apt affect displays, revealing underlying emotional intelligence. We present an artificial emotional intelligence system for robots, with both a generative and a perceptual aspect. On the generative side, we explore the expressive capabilities of an abstract, faceless, creature-like robot, with very few degrees of freedom, lacking both facial expressions and the complex humanoid design found often in emotionally expressive robots. We validate our system in a series of experiments: in one study, we find an advantage in classification for animated vs static affect expressions and advantages in valence and arousal estimation and personal preference ratings for both animated vs static and physical vs on-screen expressions. In a second experiment, we show that our parametrically generated expression variables correlate with the intended user affect perception. Combining the generative system with a perceptual component of natural language sentiment analysis, we show in a third experiment that our automatically generated affect responses cause participants to show signs of increased engagement and enjoyment compared with arbitrarily chosen comparable motion parameters.
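A parametric affect-to-motion mapping of the kind described above might look like the following sketch; the parameter names, ranges, and the linear mapping are illustrative assumptions, not the authors' generative system.

```python
from dataclasses import dataclass

@dataclass
class MotionParams:
    speed: float       # cycles per second of the body oscillation
    amplitude: float   # radians of lean / bend
    posture: float     # 0 = slumped, 1 = upright
    jerkiness: float   # 0 = smooth easing, 1 = abrupt onsets

def affect_to_motion(valence: float, arousal: float) -> MotionParams:
    """valence, arousal in [-1, 1]; arousal drives energy, valence drives openness."""
    energy = (arousal + 1.0) / 2.0
    openness = (valence + 1.0) / 2.0
    return MotionParams(
        speed=0.2 + 1.8 * energy,
        amplitude=0.1 + 0.5 * energy,
        posture=0.2 + 0.8 * openness,
        jerkiness=max(0.0, energy - openness),  # high arousal + low valence -> agitated
    )

# A perceptual front end (e.g. sentiment analysis of the user's utterance) could
# feed estimated valence/arousal directly into this generator.
print(affect_to_motion(valence=-0.8, arousal=0.9))   # agitated, tense display
print(affect_to_motion(valence=0.7, arousal=-0.5))   # calm, content display
```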

Journal ArticleDOI
TL;DR: The video-assisted ethnographic study of persons with dementia shows that, on the one hand, PARO is deployed performatively as an occasion for communication and as an interlocutor and, on the other, it is applied as an observation instrument.
Abstract: Much has been written—not least in this journal—about the potential, the benefits, and the risks of social robotics. Our paper is based on the social constructivist perspective that what a technology actually is can be decided only when it is applied. Using as an exemplar the robot baby seal PARO, which is deployed in Germany mainly as activation therapy for elderly people with dementia, we begin by briefly explaining why it is by no means clear at the beginning of the development phase what a technology is actually going to be. Rather, this is established in the light of, and in coordination with, the context of application. We then present some preliminary results from our ongoing study of the way in which this social robot is applied by professional care workers in a nursing home for the elderly. The underlying theoretical assumption on which our study is based is that the appearance and the performative deployment of a technical artifact are interdependent. Only in combination with experiences—the experiences of others, imparted in diverse forms as knowledge, and first-hand experience of using the technology—are the design and the technical functionality of the device of relevance to its appearance, that is, to what it is regarded as being. Our video-assisted ethnographic study of persons with dementia shows that, on the one hand, PARO is deployed performatively as an occasion for communication and as an interlocutor, and, on the other, it is applied as an observation instrument.

Proceedings ArticleDOI
17 Dec 2015
TL;DR: It is proposed to teach a robot cooperative behaviors from demonstrations, which are probabilistically encoded by a task-parametrized formulation of a Gaussian mixture model, later used for specifying both the desired state of the robot and an optimal feedback control law that exploits the variability in position, velocity and force spaces observed during the demonstrations.
Abstract: Human-robot collaboration seeks to have humans and robots closely interacting in everyday situations. For some tasks, physical contact between the user and the robot may occur, originating significant challenges at safety, cognition, perception and control levels, among others. This paper focuses on robot motion adaptation to parameters of a collaborative task, extraction of the desired robot behavior, and variable impedance control for human-safe interaction. We propose to teach a robot cooperative behaviors from demonstrations, which are probabilistically encoded by a task-parametrized formulation of a Gaussian mixture model. Such encoding is later used for specifying both the desired state of the robot, and an optimal feedback control law that exploits the variability in position, velocity and force spaces observed during the demonstrations. The whole framework allows the robot to modify its movements as a function of parameters of the task, while showing different impedance behaviors. Tests were successfully carried out in a scenario where a 7 DOF backdrivable manipulator learns to cooperate with a human to transport an object.
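To illustrate the encoding-and-retrieval step, here is a simplified sketch: a plain GMM over time and position with Gaussian mixture regression, not the paper's task-parametrized formulation, used to retrieve a desired trajectory from demonstrations and to derive a variable stiffness from the demonstrated variability (stiff where demonstrations agree, compliant where they vary).

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Synthetic demonstrations: a reaching motion whose variability grows over time.
rng = np.random.default_rng(3)
T = np.linspace(0, 1, 100)
demos = [np.sin(np.pi * T) + 0.05 * rng.standard_normal(100) * (1 + 3 * T) for _ in range(6)]
data = np.column_stack([np.tile(T, len(demos)), np.concatenate(demos)])   # columns: [t, x]

gmm = GaussianMixture(n_components=5, covariance_type="full", random_state=0).fit(data)

def gmr(gmm, t):
    """Condition the joint GMM p(t, x) on time t (simplified Gaussian mixture regression)."""
    means, covs, w = gmm.means_, gmm.covariances_, gmm.weights_
    h = np.array([w[k] * np.exp(-0.5 * (t - means[k, 0]) ** 2 / covs[k, 0, 0])
                  / np.sqrt(covs[k, 0, 0]) for k in range(len(w))])
    h /= h.sum()
    mu = sum(h[k] * (means[k, 1] + covs[k, 1, 0] / covs[k, 0, 0] * (t - means[k, 0]))
             for k in range(len(w)))
    var = sum(h[k] * (covs[k, 1, 1] - covs[k, 1, 0] ** 2 / covs[k, 0, 0])
              for k in range(len(w)))
    return mu, var

# Variable impedance: scale a (scalar) stiffness inversely with the retrieved variance.
for t in (0.1, 0.5, 0.9):
    mu, var = gmr(gmm, t)
    stiffness = float(np.clip(1.0 / (var + 1e-3), 0.0, 500.0))
    print(f"t={t:.1f}  desired x={mu:+.2f}  stiffness={stiffness:6.1f}")
```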