
Showing papers presented at "International Conference on Social Robotics in 2017"


Book ChapterDOI
22 Nov 2017
TL;DR: There is a correlation between the magnitude of an error performed by the robot and the corresponding loss of trust of the human in the robot.
Abstract: Trust is a key factor in human users' acceptance of robots in a home or human-oriented environment. Humans should be able to trust that they can safely interact with their robot. Robots will sometimes make errors, due to mechanical or functional failures. It is therefore important that a domestic robot should have acceptable interactive behaviours when exhibiting and recovering from an error situation. In order to define these behaviours, it is firstly necessary to consider that errors can have different degrees of consequences. We hypothesise that the severity of the consequences and the timing of a robot's different types of erroneous behaviours during an interaction may have different impacts on users' attitudes towards a domestic robot. In this study we used an interactive storyboard presenting ten different scenarios in which a robot performed different tasks under five different conditions. Each condition included the ten different tasks performed by the robot, either correctly, or with small or big errors. The conditions with errors were complemented with four correct behaviours. At the end of each experimental condition, participants were presented with an emergency scenario to evaluate their current trust in the robot. We conclude that there is a correlation between the magnitude of an error performed by the robot and the corresponding loss of trust of the human in the robot.

48 citations


Book ChapterDOI
22 Nov 2017
TL;DR: A case study aiming at evaluating if the users’ personality and activities they are currently performing affect the perception of comfortable distances of a robot approaching them and determining key factors to model the user and to adapt the robot behavior accordingly.
Abstract: A robot system that is designed to coexist with humans has to adapt its behavior and social interaction parameters (for example, the interaction distances and the speed of movements) not only with respect to the task it is supposed to accomplish, but also to the human users’ habits, actions, and personality. This is particularly relevant in the domain of assistive robotics and when working with vulnerable people. In this work, we are interested in determining key factors to model the user and to adapt the robot behavior accordingly. We provide the first step towards this direction with a case study aiming at evaluating if the users’ personality and activities they are currently performing affect the perception of comfortable distances of a robot approaching them.

46 citations


Book ChapterDOI
22 Nov 2017
TL;DR: It is demonstrated that robot body shape activates stereotypes toward robots, which in turn deeply impact people's attitudes and trust toward robots and thereby people's motivation to engage in HRI.
Abstract: Previous research has shown that gender-related stereotypes are even applied to robots. In HRI, a robot's appearance, for instance visual facial gender cues such as the hairstyle of a robot, has successfully been used to elicit gender-stereotypical judgments about male and female prototypes, respectively. To complement the set of features that visually indicate a robot's gender, we explored the impact of waist-to-hip ratio (WHR) and shoulder width (SW) in robot prototypes. Specifically, we investigated the effect of male vs. female appearance on perceived robot gender, the attribution of gender-stereotypical traits, the robots' suitability for stereotypical tasks, and participants' trust toward the robots. Our results demonstrate that the manipulation of WHR and SW correctly elicited gendered perceptions of the two prototypes. However, the perception of male robot gender did not affect the attribution of agentic traits and cognitive trust. Nevertheless, participants tended to rate the male robot as more suitable for stereotypically male tasks. In line with our predictions, participants preferred to use the female robot shape for stereotypically female tasks. They tended to attribute more communal traits and showed more affective trust toward the robot that was designed with a female torso versus a male robot torso. These results demonstrate that robot body shape activates stereotypes toward robots. These, in turn, deeply impact people's attitudes and trust toward robots, which determine people's motivation to engage in HRI.

40 citations


Book ChapterDOI
22 Nov 2017
TL;DR: The role and benefits of robotic autonomy for both children and therapists are discussed, along with the progress made on the Kaspar robot's autonomy towards achieving semi-autonomous child-robot interaction in a real-world setting.
Abstract: This paper gives an overview of the design and development of the humanoid robot Kaspar. Since the first Kaspar robot was developed in 2005, the robotic platform has undergone continuous development driven by the needs of users and technological advancements enabling the integration of new features. We discuss in detail the iterative development of Kaspar's design and clearly explain the rationale for each development, which has been based on the user requirements as well as our years of experience in robot-assisted therapy for children with autism, particularly focusing on how the developments benefit the children we work with. Further to this, we discuss the role and benefits of robotic autonomy for both children and therapists, along with the progress that we have made on the Kaspar robot's autonomy towards achieving semi-autonomous child-robot interaction in a real-world setting.

32 citations


Book ChapterDOI
22 Nov 2017
TL;DR: It is claimed that emotional behaviour in the domain of robotic storytelling should be carefully designed to support the story rather than prevent good transportation.
Abstract: Social robots bear great potential to tell stories to human listeners. Through their embodiment and ability to display non-verbal and emotional behaviour, they offer additional modalities to transport the user into the story compared to traditional media such as books or audio books. Based on theoretical knowledge about the design of social robot storytellers and the analysis of human storytellers, we designed the behaviour of an emotional social robot storyteller and compared it to a neutral version of the robotic storyteller and to the voice of a human storyteller represented by an audio book. Results suggest that the emotional robot is able to transport the participants as well as the traditional audio book, while the neutral robot performed worse. We therefore claim that emotional behaviour in the domain of robotic storytelling should be carefully designed to support the story rather than prevent good transportation.

30 citations


Book ChapterDOI
22 Nov 2017
TL;DR: The effects of being hugged by a robot to encourage self-disclosure showed that those who were hugged by the robot significantly offered more self- Disclosure thanThose who were not hugged by it.
Abstract: This paper presents the effects of being hugged by a robot on encouraging self-disclosure. Physical interactions, which are known to be essential for communication with others, have also been shown to elicit self-disclosure from the people with whom one is interacting and to contribute to the construction of social relationships. Previous research demonstrated that people who touched a robot experienced positive impressions of it, without clarifying whether being hugged by a robot elicits self-disclosure from people. We developed a huge, teddy-bear-like robot that can give reciprocal hugs to people and experimentally investigated its effects on self-disclosure. Our experimental results with 32 participants showed that those who were hugged by the robot offered significantly more self-disclosure than those who were not hugged by it. Moreover, people who were hugged by the robot interacted with it longer than those who were not hugged by it. On the other hand, the perceived feelings about the robot were not significantly different between the conditions.

29 citations


Book ChapterDOI
22 Nov 2017
TL;DR: This study examined the influence of online eye contact of a humanoid robot on humans' reception of the robot, and suggested that people are sensitive to the mutual gaze of an artificial agent, that they feel more engaged with the robot when mutual gaze is established, and that eye contact supports attributing human-like characteristics to the robot.
Abstract: Mutual gaze is a key element of human development, and constitutes an important factor in human interactions. In this study, we examined – through analysis of subjective reports – the influence of online eye contact of a humanoid robot on humans' reception of the robot. To this end, we manipulated the robot's gaze, i.e., mutual (social) gaze and neutral (non-social) gaze, throughout an experiment involving letter identification. Our results suggest that people are sensitive to the mutual gaze of an artificial agent, that they feel more engaged with the robot when mutual gaze is established, and that eye contact supports attributing human-like characteristics to the robot. These findings are relevant both to human-robot interaction (HRI) research (enhancing the social behavior of robots) and to cognitive neuroscience (studying mechanisms of social cognition in relatively realistic social interactive scenarios).

27 citations


Book ChapterDOI
22 Nov 2017
TL;DR: The results of this study showed that children's motivation increased as a result of interacting with the robot and that their anxiety was reduced because of the friendly atmosphere the robot created.
Abstract: In this paper we aimed to explore young Iranian EFL learners' attitudes towards Robot Assisted Language Learning (RALL) in an English language classroom. To this end, 19 preschool children ranging from 3 to 6 years old were randomly assigned to a RALL group which had a robot as an assistant to the teacher. Their attitude towards the robot was video-recorded for one month, during ten sessions of their English classroom course. Their overall attitude was examined focusing on the three factors of anxiety, motivation and interaction based on human-robot interaction (HRI) theory. The results of this study showed that children's motivation increased as a result of interacting with the robot and that their anxiety was reduced because of the friendly atmosphere the robot created. The results provide some important insights into using robots in classroom settings for young EFL (English as a Foreign Language) students, which may also be helpful for future investigators.

25 citations


Book ChapterDOI
22 Nov 2017
TL;DR: The results show that the interactive mutual gaze model implemented in an embodied agent, the social robot head Furhat, improves social connectedness between robots and users.
Abstract: Mutual gaze is a powerful cue for communicating social attention and intention. A plethora of studies have demonstrated the fundamental roles of mutual gaze in establishing communicative links between humans, and in enabling non-verbal communication of social attention and intention. The amount of mutual gaze between two partners regulates human-human interaction and is a sign of social engagement. This paper investigates whether implementing mutual gaze in robotic systems can achieve social effects and thus improve human-robot interaction. Based on insights from existing human face-to-face interaction studies, we implemented an interactive mutual gaze model in an embodied agent, the social robot head Furhat. We evaluated the mutual gaze prototype with 24 participants in three applications. Our results show that our mutual gaze model improves social connectedness between robots and users.

23 citations


Book ChapterDOI
22 Nov 2017
TL;DR: A Context-aware Proxemics Planner is presented which aims to improve a robot's social behaviour by adapting its distances and orientation to the user in terms of interpersonal space, based on contextual information regarding the task, the user and the robot.
Abstract: Home companion robots need to be able to support users in their daily living activities and to be socially adaptive. They should take account of users' individual preferences, environments and social situations in order to behave in a socially acceptable manner and to gain acceptance into the household. They will need to be context-aware, taking account of any relevant contextual information, and improve on delivering services by adapting to users' requirements. We present the design, implementation and technical evaluation of a Context-aware Proxemics Planner which aims to improve a robot's social behaviour by adapting its distances and orientation to the user in terms of interpersonal space, based on contextual information regarding the task, the user and the robot.

22 citations
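The context-aware adaptation the planner performs can be illustrated with a minimal rule-based sketch. All names, zone distances, and contextual factors below are illustrative assumptions, not taken from the paper:

```python
# Hypothetical sketch of a context-aware proxemics rule: pick an approach
# distance from a base interpersonal zone, then adjust it with contextual cues.
HALL_DISTANCES = {"personal": 0.76, "social": 1.2}  # metres (approximate zone boundaries)

def approach_distance(task: str, user_seated: bool, user_prefers_distance: bool) -> float:
    """Select a socially acceptable approach distance from contextual information."""
    # Tasks requiring physical exchange (e.g. a handover) need a closer zone.
    base = HALL_DISTANCES["personal"] if task == "handover" else HALL_DISTANCES["social"]
    if user_seated:
        base += 0.2   # assumed: seated users prefer more space
    if user_prefers_distance:
        base += 0.3   # assumed: learned individual preference
    return round(base, 2)

print(approach_distance("handover", user_seated=False, user_prefers_distance=False))  # 0.76
```

A real planner would replace the hand-set offsets with values learned from user feedback, but the shape of the decision — base zone plus contextual adjustments — is the same.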


Book ChapterDOI
22 Nov 2017
TL;DR: This study investigated human users' perceptions of the severity of various categories of potential errors that are likely to be exhibited by a domestic robot, via a questionnaire-based study in which participants rated 20 different scenarios where a domestic robot made an error.
Abstract: As robots increasingly take part in daily living activities, humans will have to interact with them in domestic and other human-oriented environments. We can expect that domestic robots will exhibit occasional mechanical, programming or functional errors, as occur with other electrical consumer devices. For example, these errors could include software errors, dropping objects due to gripper malfunctions, picking up the wrong object, or showing faulty navigational skills due to unclear camera images or noisy laser scanner data. It is therefore important for a domestic robot to have acceptable interactive behaviour when exhibiting and recovering from an error situation. As a first step, the current study investigated human users' perceptions of the severity of various categories of potential errors that are likely to be exhibited by a domestic robot. We conducted a questionnaire-based study, where participants rated 20 different scenarios in which a domestic robot made an error. The potential errors were rated by participants by severity. Our findings indicate that people's perceptions of the magnitude of the errors presented in the questionnaire were consistent. We did not find any significant differences in users' ratings due to age and gender. We clearly identified scenarios that were rated by participants as having limited consequences ("small" errors) and those that were rated as having severe consequences ("big" errors). Future work will use these two sets of consistently rated robot error scenarios as baseline scenarios to perform studies with repeated interactions investigating human perceptions of robot tasks and error severity.

Book ChapterDOI
22 Nov 2017
TL;DR: Design strategies for Human Robot Interaction for school-aged autistic children with limited receptive language supported development of a new activity for facial expression imitation, whereby the robot imitates the child's face to encourage the child to notice facial expressions in a play-based game.
Abstract: We present design strategies for Human Robot Interaction for school-aged autistic children with limited receptive language. Applying these strategies to the DE-ENIGMA project (a large EU project addressing emotion recognition in autistic children) supported development of a new activity for facial expression imitation, whereby the robot imitates the child's face to encourage the child to notice facial expressions in a play-based game. A usability case study with 15 typically-developing children aged 4–6 at an English-language school in the Netherlands was performed to observe the feasibility of the setup and make design revisions before exposing the robot to autistic children.

Book ChapterDOI
22 Nov 2017
TL;DR: This work presents a scheme and an implemented system which allow the robot to elaborate and execute shared plans that are flexible enough to be achieved in collaboration with a human in a smooth and non-intrusive manner.
Abstract: It has been shown that, when a human and a robot have to perform a joint activity together, they need to structure their activity based on a so-called “shared plan”. In this work, we present a scheme and an implemented system which allow the robot to elaborate and execute shared plans that are flexible enough to be achieved in collaboration with a human in a smooth and non-intrusive manner. We identify and analyze the decisions that should preferably be taken at planning time and those that should be better postponed. We also show in which conditions the robot can determine when it has to take the decision by itself or leave it to its human partner. As a consequence, the robot avoids useless communication by smoothly adapting its behavior to the human.

Book ChapterDOI
22 Nov 2017
TL;DR: This paper presents a study in which users with different levels of experience were asked to help the robot learn new objects, and evaluates the impact of previous experience with robots on handover interactions.
Abstract: Service robots are expected to closely interact with humans in the near future. Their tasks often include delivering and taking objects. Thus, handover scenarios play an important role in human-robot interaction. A lot of work in this field of research focuses on the speed, accuracy and predictability of the robot's movement during object handover. Such robots need to closely interact with naive users, not only experts. In order to evaluate handover interaction performance between human and robot, a force-measurement-based approach was implemented on the humanoid robot Floka. Different gestures with the second arm were added to analyze their influence on synchronization, predictability, and human acceptance. In this paper we present a study where users with different levels of experience were asked to help the robot learn new objects. We evaluated the impact of previous experience with robots on handover interactions. Disparities in timing, distance, and applied force during handover could be observed. We present an automated annotation pipeline for human-robot interaction that will be used in future studies. While the commonly used force-measurement-based approach proved to be a valid starting point, our results show that naive user interaction could benefit from better anticipation.

Book ChapterDOI
22 Nov 2017
TL;DR: A personality survey task is used to assess how humans initially perceive rapport between themselves and a robot; participants preferred it when the robot provided verbal acknowledgments or was more engaging, such as when the robot supplemented its speech with iconic gestures.
Abstract: Enhancing the rapport between the human and the robot will be an essential element in successfully developing robotic assistants, especially ones working in our homes. We use a personality survey task to assess how humans initially perceive rapport between themselves and a robot. Robots administering the survey presented relational behaviors designed to improve human-robot rapport. Participants preferred it when the robot provided verbal acknowledgments or was more engaging, such as when the robot supplemented its speech with iconic gestures.

Book ChapterDOI
22 Nov 2017
TL;DR: The results confirm the potential of telepresence robots in assisted living to increase the presence of family members to the resident and vice versa, and provide insight into how the increased presence of family members may affect the care work.
Abstract: Elderly people moving into assisted living facilities often face profound changes in their daily routines and social relationships, which may lead to feelings of social isolation and even to depression. Telepresence robots can alleviate this by enabling easily accessible virtual presence of family members and other close ones at the ward. Telepresence robots have been tested in different care environments with often positive responses, but there are still challenges, both technical and non-technical, that hinder the wider adoption of the robots in residential care settings. We seek more understanding of the non-technical challenges by studying the use of the telepresence robot Double in a residential care facility. In a 12-week field trial, we installed a telepresence robot in the room of a long-term care home resident for communicating with her family members. The qualitative interview data included the perspectives of the resident, her family members and care workers at the ward. The results confirm the potential of telepresence robots in assisted living to increase the presence of family members to the resident and vice versa; the study also provides insight into how the increased presence of family members may affect the care work.

Book ChapterDOI
22 Nov 2017
TL;DR: The experiment shows that, regardless of the children's age, they engaged easily with the robot while it was talking and moving; however, children of different ages have a different perception of the robot when it is idle.
Abstract: This paper presents the results of a singular experiment that was conducted in a kindergarten in Japan. Four groups of ten children aged 3 to 5 years old interacted freely with the robot Pepper for about 20 min. In the first part of the experiment, the robot introduced itself to the children, explaining a few basics. The children were then invited to touch the robot, to dance with it and finally to play with it freely while it was idle. Our experiment shows that, regardless of the children's age, they engaged easily with the robot while it was talking and moving; however, children of different ages had a different perception of the robot when it was idle. Younger children considered it more as a toy, while older children were more likely to attribute a meaning to its idleness.

Book ChapterDOI
22 Nov 2017
TL;DR: A Bayesian probabilistic model that can automatically model and estimate the probability of objects existing in each place using a multimodal spatial concept based on the co-occurrence of objects is proposed.
Abstract: Human support robots need to learn the relationships between objects and places to provide services such as cleaning rooms and locating objects through linguistic communications. In this paper, we propose a Bayesian probabilistic model that can automatically model and estimate the probability of objects existing in each place using a multimodal spatial concept based on the co-occurrence of objects. In our experiments, we evaluated the estimation results for objects by using a word to express their places. Furthermore, we showed that the robot could perform tasks involving cleaning up objects, as an example of the usage of the method. We showed that the robot correctly learned the relationships between objects and places.
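The object-place co-occurrence idea can be illustrated with a minimal count-based sketch. This is not the paper's multimodal Bayesian model; the object-place observations and the choice of additive (Laplace) smoothing below are hypothetical:

```python
from collections import Counter

# Hypothetical co-occurrence observations of (object, place) pairs.
observations = [
    ("cup", "kitchen"), ("cup", "kitchen"), ("cup", "living_room"),
    ("book", "living_room"), ("book", "bedroom"),
]
places = ["kitchen", "living_room", "bedroom"]

def place_posterior(obj: str, alpha: float = 1.0) -> dict:
    """Estimate P(place | object) from co-occurrence counts with Laplace smoothing."""
    counts = Counter(p for o, p in observations if o == obj)
    total = sum(counts.values()) + alpha * len(places)
    return {p: (counts[p] + alpha) / total for p in places}

post = place_posterior("cup")
best = max(post, key=post.get)
print(best)  # kitchen
```

A robot asked to "put the cup away" could then search places in decreasing posterior order; the paper's model additionally fuses multimodal spatial concepts rather than raw counts.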

Book ChapterDOI
22 Nov 2017
TL;DR: The results indicated that service robots are needed as co-workers for decreasing the mental workload of workers and for activating the patients.
Abstract: We explored the need for service robots in hospitals and housing services. The methods consisted of a literature review and a cross-sectional survey among health care professionals (n = 224). The survey data was analyzed with a logistic regression model and a factor analysis. The literature review showed that there are only a few papers that discuss service robotics in nursing. The results indicated that service robots are needed as co-workers for decreasing the mental workload of workers and for activating the patients. Physical workload and age of respondents were non-significant factors in assessing the need for service robots.

Book ChapterDOI
22 Nov 2017
TL;DR: An emotional gesture set was created to be performed on a pair of humanoid robot platforms, the NAO and the Mini Darwin, to teach five of the six universal emotions, and a pilot study with able-bodied adults validated that the gesture set was easily recognizable.
Abstract: Autism Spectrum Disorder (ASD) is a neurodevelopmental disorder that is characterized by impaired social interactions and restricted, repetitive behaviors. Children who are considered to be within the spectrum tend to lack the social interaction, facial processing, and emotion recognition and implementation skills that typically developing children possess. These impairments inhibit positive social interactions, relationship building, and effective communication, which could potentially lead to distress and frustration for the child. This study focuses on developing a system to teach five of the six universal emotions. Therefore, we created an emotional gesture set to be performed on a pair of humanoid robot platforms, the NAO and the Mini Darwin. As a step towards the goal of teaching and assessing children with ASD, we conducted a pilot study with able-bodied adults to validate that the gesture set created was easily recognizable. In this pilot study, we asked 137 able-bodied adult participants to watch the system perform gestures associated with emotions. Then, we asked them to identify the emotion that the system was attempting to portray. Gestures achieved recognition rates ranging from 57% for happiness to 96% for sadness.

Book ChapterDOI
22 Nov 2017
TL;DR: An exploratory study with a novel privacy measure is conducted to understand changes to users' privacy considerations when interacting with an embodied robotic system versus a disembodied system, and the idea that embodiment may increase users' risk tolerance and reduce their privacy concerns is discussed.
Abstract: As social robots move from the laboratory into public settings, the possibility of unwanted intrusion into a user's personal privacy is magnified. The actual social interaction between human and robot may involve anthropomorphising of the robot by the user, and this may prompt the user to disclose private or sensitive information. To comprehend possible impacts, we conducted an exploratory study with a novel privacy measure to understand changes to users' privacy considerations when interacting with an embodied robotic system versus a disembodied system. In this paper we measure the difference in personal information provided to such systems, and discuss the idea that embodiment may increase users' risk tolerance and reduce their privacy concerns.

Book ChapterDOI
22 Nov 2017
TL;DR: Results of an interview study carried out with eight retailers and other service providers in a shopping mall, and with three shopping mall managers, provide insight into their views about potential application roles of a social service robot in the mall.
Abstract: Social service robots are gradually entering shopping malls to provide guidance and information services to consumer customers. Earlier literature has reported that consumers usually respond positively to these robots. Less is known about how retailers and other business actors in the shopping mall perceive the robots. We present the results of an interview study carried out with eight retailers and other service providers in a shopping mall, and three shopping mall managers. The results provide insight into their views about potential application roles of a social service robot in the mall: what kind of services the robot could provide to customers besides guiding and providing information, and what kind of opportunities, requirements and constraints a shopping mall sets as a business environment for robot developers and service providers. The capability of the robot to emotionally engage with customers was seen as having high potential in the shopping business, but balancing entertainment with utility functions is crucial. A Pepper robot was used as a demonstrative platform in the study.

Book ChapterDOI
22 Nov 2017
TL;DR: A motion generation system for humanoid robots performs interactions using human motion prediction; to learn human motion, a Long Short-Term Memory network is trained using a public dataset.
Abstract: An interaction between a robot and a human can be difficult with only reactive mechanisms, especially in a social interaction, because the robot usually needs time to plan its movement. This paper discusses a motion generation system for humanoid robots to perform interactions with human motion prediction. To learn human motion, a Long Short-Term Memory (LSTM) network is trained using a public dataset. The effectiveness of the proposed technique is demonstrated by performing a handshake with a humanoid robot. Instead of following the human palm, the robot learns to predict the hand-meeting point. Using three metrics, namely the smoothness, timeliness, and efficiency of the robot's movements, the experimental results of various motion plans are compared. The predictive method shows a balanced trade-off point across all the metrics.
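The paper's predictor is a trained LSTM; as a dependency-free illustration of the underlying idea — anticipating where the hands will meet instead of chasing the current palm position — the sketch below uses simple constant-velocity extrapolation of an observed hand trajectory. All coordinates and the horizon are hypothetical:

```python
# Sketch of anticipatory hand-meeting-point prediction (not the paper's LSTM):
# extrapolate the hand's last observed velocity a few steps into the future,
# so the robot can move toward where the hand will be, not where it is.

def predict_meeting_point(trajectory, horizon: int):
    """Extrapolate the last observed 2D velocity `horizon` steps ahead."""
    (x0, y0), (x1, y1) = trajectory[-2], trajectory[-1]
    vx, vy = x1 - x0, y1 - y0          # per-step velocity from last two samples
    return (x1 + horizon * vx, y1 + horizon * vy)

# Hand moving toward the robot along x at 0.1 m per step:
observed = [(0.0, 0.5), (0.1, 0.5), (0.2, 0.5)]
print(predict_meeting_point(observed, horizon=3))  # (0.5, 0.5)
```

An LSTM replaces the constant-velocity assumption with a learned, nonlinear motion model, which is what allows the paper's system to balance smoothness, timeliness, and efficiency.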

Book ChapterDOI
22 Nov 2017
TL;DR: This work presents a methodology for assessing performance, identifying problems, and diagnosing the root causes and influences of different types of failures on the overall performance of a situated interaction system functioning in the wild, applied to a dataset of interactions collected with a robot deployed in a public space inside an office building.
Abstract: Effective situated interaction hinges on the well-coordinated operation of a set of competencies, including computer vision, speech recognition, and natural language, as well as higher-level inferences about turn taking and engagement. Systems often rely on a set of hand-coded and machine-learned components organized into several sensing and decision-making pipelines. Given their complexity and inter-dependencies, developing and debugging such systems can be challenging. “In-the-wild” deployments outside of controlled lab conditions bring further challenges due to unanticipated phenomena, including unexpected interactions such as playful engagements. We present a methodology for assessing performance, identifying problems, and diagnosing the root causes and influences of different types of failures on the overall performance of a situated interaction system functioning in the wild. We apply the methodology to a dataset of interactions collected with a robot deployed in a public space inside an office building. The analyses identify and characterize multiple types of failures, their causes, and their relationship to overall performance. We employ models that predict overall interaction quality from various combinations of failures. Finally, we discuss lessons learned with such a diagnostic methodology for improving situated systems deployed in the wild.

Book ChapterDOI
22 Nov 2017
TL;DR: The study described in this article is part of the Horizon 2020 Babyrobot project, in which play scenarios were created for children with autism to playfully explore elements that are important in developing Visual Perspective Taking skills.
Abstract: The study described in this article is part of our contribution to the Horizon 2020 Babyrobot project, where we have created different play scenarios for children with autism to playfully explore elements that are important in developing Visual Perspective Taking (VPT) skills. Individuals with autism often have difficulty with Theory of Mind (TOM) and the understanding that other individuals have their own thoughts, beliefs, plans and perspectives. Visual Perspective Taking is the ability to view the world from another individual’s perspective, e.g. understanding that other individuals have a different line of sight to oneself, and also understanding that two or more people viewing the same object from different points in space might see different things. It is believed that TOM and VPT may share common cognitive processes.

Book ChapterDOI
22 Nov 2017
TL;DR: A study in which the perceived intentionality and appearance of a robot (Robovie R2 vs. Geminoid HI-2) were manipulated and their effects on the anthropomorphization of the robots along two dimensions of humanness were measured; the results do not support the proposed two-dimensional model of anthropomorphism.
Abstract: Anthropomorphism plays an important role in human interaction with robots. However, our understanding of this phenomenon is still limited. In previous research, we proposed looking at the work on dehumanization in order to understand what factors can affect a robot’s anthropomorphism. Moreover, considering that there are two distinct dimensions of humanness, a two-dimensional model of anthropomorphism was proposed. In this paper we present a study in which we manipulated the perceived intentionality and appearance of a robot (Robovie R2 vs. Geminoid HI-2), and measured how they affected the anthropomorphization of the robots along two dimensions of humanness. We did not find statistically significant differences in the attribution of human traits and mind along the two dimensions of humanness. However, after dividing the traits based on their valence, we found that Geminoid HI-2 was attributed significantly more negative human traits than Robovie R2. These results do not support the proposed two-dimensional model of anthropomorphism.

Book ChapterDOI
22 Nov 2017
TL;DR: This work investigates whether a low-cost 3D printed prosthetic hand can perform basic grasping tasks and whether its fingertip forces when grasping various objects are comparable to the grasping forces applied by the hands of 5 research participants.
Abstract: The advancement in 3D printing technologies appears to be the key toward affordable and functional artificial limbs. The loss of an amputee’s capability to do functional tasks like grasping objects has an obvious effect on that individual’s psychosocial behavior. In this paper, we investigate whether a low-cost 3D printed prosthetic hand can perform basic grasping tasks. We determine whether the fingertip forces used in grasping various objects are comparable to the grasping forces applied by the hands of 5 research participants. We considered 5 different grasps, namely, lateral pinch, spherical, disk, medium wrap, and thumb-index finger grasps for both the prosthetic and human hands. For each grasp, 25 readings for each finger were considered in the analysis. Results show that there were significant differences between the grasping contact forces recorded on the fingers of the prosthetic hand and those of the human hands. Since this prosthetic hand and similar 3D printed hands may not be able to reach the grasping forces of human hands, the results of this work motivate addressing other requirements of articulated artificial hands for social interactions and gestures.

Book ChapterDOI
22 Nov 2017
TL;DR: A simple but general recipe for applying an acceleration-based pedestrian model (the “Social Force Model”) to mobile robots, with a specific example showing how to replicate the behaviour of social pedestrian groups in a group of robots.
Abstract: Mobile social robots and (semi-)autonomous small size vehicles such as robotic wheelchairs need to understand and replicate pedestrian behaviour, in order to move safely in the crowd and to interact with, move along with and transport humans. A large amount of research about pedestrian behaviour has been undertaken by the crowd simulation community, but such results cannot be trivially adapted to robot applications. We discuss a simple but general recipe to apply an acceleration based pedestrian model (“Social Force Model”) to mobile robots, and, as a specific example, we show how to replicate in a group of robots the behaviour of social pedestrian groups.
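The abstract does not reproduce the model’s equations. As a rough illustration only, here is a minimal 2D sketch of the classic acceleration-based Social Force Model the paper builds on: a driving term that relaxes the agent’s velocity toward a desired velocity pointing at its goal, plus exponential repulsion from nearby agents. All parameter values (`v0`, `tau`, `A`, `B`) and function names are illustrative assumptions, not taken from the paper.

```python
import math

def social_force(pos, vel, goal, others, v0=1.3, tau=0.5, A=2.0, B=0.3):
    """Acceleration on one agent: goal attraction plus exponential
    repulsion from other agents (basic Social Force Model sketch)."""
    # Driving term: relax toward the desired velocity (speed v0 toward goal).
    gx, gy = goal[0] - pos[0], goal[1] - pos[1]
    dist = math.hypot(gx, gy) or 1e-9
    ax = (v0 * gx / dist - vel[0]) / tau
    ay = (v0 * gy / dist - vel[1]) / tau
    # Repulsive term: each neighbour pushes along the separation direction,
    # with a magnitude that decays exponentially with distance.
    for ox, oy in others:
        dx, dy = pos[0] - ox, pos[1] - oy
        d = math.hypot(dx, dy) or 1e-9
        mag = A * math.exp(-d / B)
        ax += mag * dx / d
        ay += mag * dy / d
    return ax, ay

def step(pos, vel, goal, others, dt=0.1):
    """One Euler integration step of the model."""
    ax, ay = social_force(pos, vel, goal, others)
    vel = (vel[0] + ax * dt, vel[1] + ay * dt)
    pos = (pos[0] + vel[0] * dt, pos[1] + vel[1] * dt)
    return pos, vel
```

Iterating `step` moves an agent toward its goal while steering clear of others; the paper’s contribution lies in adapting this kind of pedestrian model to robot motion and to replicating social-group behaviour, not in the base model itself.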

Book ChapterDOI
22 Nov 2017
TL;DR: A novel approach to starting a conversation smoothly by using the cooperative behavior of two robots that tries to fill the blank time until the person is ready to listen by showing an interaction between the robots to attract the person’s attention.
Abstract: In a human-robot conversation, it is difficult for the robot to start the conversation just when the user is ready to listen to the robot, due to recognition technology issues. This paper proposes a novel approach to starting a conversation smoothly by using the cooperative behavior of two robots. In this approach, the two robots try to fill the blank time until the person is ready to listen by showing an interaction between the robots to attract the person’s attention. To evaluate the effectiveness of the approach, we conducted an experiment, which compared the following three methods of starting a conversation: early timing, late timing by one robot, and the proposed method. The results showed that with the proposed method participants were almost ready to listen and did not feel awkward when interacting with the two robots, compared to one robot with early or late timing.

Book ChapterDOI
22 Nov 2017
TL;DR: The results suggest that the experiment was generally perceived as slightly stressful, but that the appearance and behavior of the robot have no effect on the subjective stress level.
Abstract: Hybrid collaboration between human and machine antagonists is currently discussed as the most likely scenario of future manufacturing within the next 10 years, because it considers technological developments and preserves human workplaces at the same time. However, not only technical feasibility plays a role in the design of these future collaborations; the psychological and social effects must also be considered. This paper analyzes the subjective stress level of humans as a function of the characteristics of robots (2 × 2 design with either an industrial or a humanoid robot behaving either reliably or faultily). A virtual experiment was conducted to simulate a collaborative hybrid task, including pre- and post-surveys. The results do not show any effect of condition, but significant effects of time. They suggest that the experiment was generally perceived as slightly stressful, but that the appearance and behavior of the robot have no effect on the subjective stress level.