
Showing papers presented at "International Conference on Social Robotics in 2019"


Book ChapterDOI
26 Nov 2019
TL;DR: An interactive robot learning framework using multimodal data from thermal facial images and human gait for online emotion recognition is proposed, along with a new decision-level fusion method for multimodal classification using a Random Forest model.
Abstract: Interaction plays a critical role in learning the skills needed for natural communication. In human-robot interaction (HRI), robots can use feedback gathered during the interaction to improve their social abilities. In this context, we propose an interactive robot learning framework using multimodal data from thermal facial images and human gait data for online emotion recognition. We also propose a new decision-level fusion method for the multimodal classification using a Random Forest (RF) model. Our hybrid online emotion recognition model focuses on the detection of four human emotions (neutral, happiness, anger, and sadness). After conducting offline training and testing with the hybrid model, the accuracy of the online emotion recognition system is more than 10% lower than that of the offline one. To improve the system, human verbal feedback is incorporated into the robot's interactive learning. With the new online emotion recognition system, a 12.5% accuracy increase is obtained compared with the online system without interactive robot learning.
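The decision-level fusion idea described above can be sketched minimally: each modality's classifier emits class probabilities, and a fused decision combines them. The paper's Random-Forest-based fusion is not reproduced here; instead, a simpler weighted-average fusion illustrates the decision-level concept. The class list, probabilities, and weights below are hypothetical.

```python
# Illustrative decision-level fusion for multimodal emotion recognition.
# Each modality (e.g. thermal face, gait) produces a probability vector
# over the same emotion classes; fusion is a weighted average.

EMOTIONS = ["neutral", "happiness", "anger", "sadness"]

def fuse_decisions(modality_probs, weights):
    """Weighted average of per-modality class-probability vectors."""
    fused = [0.0] * len(EMOTIONS)
    for probs, w in zip(modality_probs, weights):
        for i, p in enumerate(probs):
            fused[i] += w * p
    total = sum(weights)
    return [f / total for f in fused]

def predict(modality_probs, weights):
    """Return the emotion with the highest fused probability."""
    fused = fuse_decisions(modality_probs, weights)
    return EMOTIONS[max(range(len(fused)), key=fused.__getitem__)]

# Thermal-face classifier leans "happiness", gait leans "neutral";
# with a higher weight on the thermal modality, "happiness" wins.
thermal = [0.1, 0.6, 0.2, 0.1]
gait = [0.5, 0.2, 0.2, 0.1]
print(predict([thermal, gait], weights=[0.7, 0.3]))  # → happiness
```

In the paper's actual method, an RF model plays the role of the combiner; the weighted average above merely shows where the fusion step sits in the pipeline.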

29 citations


Book ChapterDOI
26 Nov 2019
TL;DR: Results showed that in joint trials, the balloon exploded less often than in individual trials, which suggests that robots can influence human behavior, although this influence is modulated by the attitude toward the robot.
Abstract: Humans are influenced by the presence of other social agents, sometimes performing better and sometimes worse than when alone. Humans are also affected by how they perceive the social agent. The present study investigated whether individual differences in the attitude toward robots can predict human behavior in human-robot interaction. To this end, adult participants played a game with the Cozmo robot (Anki Inc., San Francisco), in which their task was to stop a balloon from exploding. In individual trials, only the participants could stop the balloon inflating, while in joint trials Cozmo could also stop it. Results showed that in joint trials, the balloon exploded less often than in individual trials. However, participants stopped the balloon earlier in joint than in individual trials, even though this was less beneficial for them. This effect of Cozmo joining the game was nevertheless modulated by participants' negative attitude toward robots: the more negative they were, the less their behavior was influenced by the presence of the robot. This suggests that robots can influence human behavior, although this influence is modulated by the attitude toward the robot.

23 citations


Book ChapterDOI
26 Nov 2019
TL;DR: Evidence is provided that SARs are a favorable alternative to VAs as rehabilitation tools because participants’ performance on the exercise was higher with a SAR than with a VA, which was especially clear under conditions of decreased perceptual information.
Abstract: Long-term motor deficits affect approximately two-thirds of stroke survivors, reducing their quality of life. Effective rehabilitation therapy requires intense and repetitive training, which is resource demanding. Virtual Agents (VAs) and Socially Assistive Robots (SARs) offer high-intensity, repetitive, and reproducible therapy and are thus both promising as rehabilitation tools. In this paper, we compare a SAR and a VA during a rehabilitation task in terms of users’ engagement and movement performance, while leveraging neuroscientific methods to investigate potential differences at the neural level. Results show that our participants’ performance on the exercise was higher with the SAR than with the VA, an effect that was especially clear under conditions of decreased perceptual information. Our participants also reported higher levels of engagement with the SAR. Taken together, we provide evidence that SARs are a favorable alternative to VAs as rehabilitation tools.

17 citations


Book ChapterDOI
26 Nov 2019
TL;DR: A four-phase qualitative study to explore what kind of guidance customers need in a shopping mall, which characteristics make human guidance intuitive and effective there, and what aspects of the guidance should be applied to a social robot.
Abstract: Providing guidance to customers in a shopping mall is a suitable task for a social service robot. To be useful to customers, the guidance needs to be intuitive and effective. We conducted a four-phase qualitative study to explore what kind of guidance customers need in a shopping mall, which characteristics make human guidance intuitive and effective there, and which aspects of that guidance should be applied to a social robot. We first interviewed staff working at the information booth of a shopping mall and videotaped demonstrated guidance situations. In a human-human guidance study, ten students each carried out seven way-finding tasks, asking a human guide for directions. We then replicated the study setup with a social service robot, eight students, and four tasks; the robot was controlled using a Wizard-of-Oz technique. The characteristics that make human guidance intuitive and effective, such as estimating the distance to the destination and the appropriate use of landmarks and pointing gestures, appear to have the same impact when a humanoid robot gives the guidance. Based on the results, we identified nine design implications for a social guidance robot in a shopping mall.

17 citations


Book ChapterDOI
26 Nov 2019
TL;DR: The methodological and design details of the proposed intervention protocol for testing the effectiveness of robot (NAO)-based treatment of ASD children compared to conventional human (therapist)-based treatment are reported.
Abstract: The effectiveness of social robots in education is typically demonstrated circumstantially, with small samples of students [1]. Our interest here is in special education in Greece regarding Autism Spectrum Disorder (ASD), involving large samples of children. Following a review of recent work, this paper reports the specifications of a protocol for testing the effectiveness of robot (NAO)-based treatment of ASD children compared to conventional human (therapist)-based treatment. The proposed protocol has been developed through collaboration between a clinical scientific team and a technical scientific team. Its modular structure allows for parametrically implementing a number of tools and/or theories, such as the theory-of-mind account from psychology; moreover, the use of the innovative Lattice Computing (LC) information-processing paradigm is considered here toward making the robot more autonomous. This paper focuses on the methodological and design details of the proposed intervention protocol, which is underway; the corresponding results will be reported in a future publication.

16 citations


Book ChapterDOI
26 Nov 2019
TL;DR: An exploratory study consisting of a series of Robot-Mediated Therapy sessions utilizing a humanoid NAO robot with five children with a severe form of autism for two weeks, finding no significant progress in their social skills.
Abstract: This paper presents an exploratory study consisting of a series of Robot-Mediated Therapy (RMT) sessions utilizing a humanoid NAO robot with five children with a severe form of autism over two weeks. The focus on RMT for children with low-functioning autism (LFA) was motivated by the relative neglect, in Robot-Assisted Therapy efforts, of the additional challenges facing individuals with LFA, such as impairments of language and intellectual ability. Children aged 4–8 years attended six 15-min sessions that included different types of applications programmed on the robot. The cumulative results obtained from the observations and interviews of the participants’ parents did not demonstrate significant progress in the children's social skills. This paper also explains the challenges encountered and provides recommendations for further improvements of patient-centered interaction design in the area of RMT tailored to children with a severe form of autism.

11 citations


Book ChapterDOI
26 Nov 2019
TL;DR: It is found that mutual gaze is a better predictor than confirmatory request, gaze away, and goal reference and should be combined with other indicators, such as verbal cues or facial expressions to sufficiently represent assistance needed in the interaction and provide timely assistance.
Abstract: With the current growth in social robotics comes a need for well-developed and fine-tuned agents that respond to the user in a seamless and intuitive manner. Socially assistive robots in particular have become popular in care for older adults, for medication adherence and socializing. Since eye-gaze cues are important mediators in human-human interactions, we hypothesize that gaze patterns can be applied to human-robot interactions to identify when the user may need assistance. We reviewed videos (N = 16) of robot-supported collaborative work to explore how recognizing gaze patterns can help an assistive robot, in the context of a medication management task, predict when a user needs assistance. We found that mutual gaze is a better predictor than confirmatory request, gaze away, and goal reference. While eye gaze serves as an important indicator of the need for assistance, it should be combined with other indicators, such as verbal cues or facial expressions, to sufficiently represent the assistance needed in the interaction and to provide timely assistance.

11 citations


Book ChapterDOI
26 Nov 2019
TL;DR: The use of social robots as a means to improve socialization between individuals rather than aiming to replace the human contact is explored, showing a positive attitude towards the robot and the interaction from both age groups.
Abstract: The main objective of this research was to gain insight into the attitudes that groups of elderly people and young students have towards social robots. A total of 52 participants (24 elderly vs. 28 students) took part in a short-term interaction with a humanoid social robot. In small groups of two to four people, they engaged in a conversation with a Nao robot. Their attitude was measured before and after the interaction using the Unified Theory of Acceptance and Use of Technology (UTAUT) questionnaire. Furthermore, the role of the robot as a facilitator of conversation was assessed by observing the interaction between individuals after the robot was removed. This research explored the use of social robots as a means to improve socialization between individuals rather than aiming to replace human contact. Results from the questionnaire and an additional observational analysis showed a positive attitude towards the robot and the interaction in both age groups. After the interaction, the elderly participants perceived the robot as significantly more useful than the students did, which could be attributed to a difference in the needs and expectations they had of it. Furthermore, anxiety towards the robot decreased after the interaction in both groups. Future research can investigate the effect of long-term interaction with a similar robot; in the long term, social robots could possibly be deployed to decrease loneliness, a common issue among the elderly.

11 citations


Book ChapterDOI
26 Nov 2019
TL;DR: The present study aims at creating a questionnaire that measures expectations regarding the capabilities of the robot and testing whether these priors modulate the adoption of the intentional stance toward artificial agents, and finds that individual expectations might influence the adopted of mentalistic explanations.
Abstract: Humans predict others’ behavior based on mental-state inferences and on expectations created during previous interactions. On the brink of the introduction of artificial agents into our social environment, the question of whether humans use similar cognitive mechanisms to interact with these agents gains relevance. Recent research showed that people can indeed explain the behavior of a robot in mentalistic terms. However, there is scarce evidence regarding how expectations modulate the adoption of these mentalistic explanations. The present study aims at creating a questionnaire that measures expectations regarding the capabilities of the robot, and at testing whether these priors modulate the adoption of the intentional stance toward artificial agents. We found that individual expectations might influence the adoption of mentalistic explanations. After a short period of observation, participants with higher expectations tended to explain iCub’s behavior in mentalistic terms, whereas participants with lower expectations maintained their mechanistic explanations of its behavior. Our findings suggest that expectations about the capabilities and purpose of the robot might modulate the adoption of the intentional stance toward artificial agents.

11 citations


Book ChapterDOI
26 Nov 2019
TL;DR: A pilot study aimed to evaluate with children the valence of emotional behaviours enhanced with non-verbal sounds, and shows that children aged 3–8 years perceive the robot’s behaviours and the related selected emotional semantic free sounds in terms of different degrees of arousal, valence and dominance.
Abstract: Socially Assistive Robots are starting to be widely used in paediatric healthcare environments. In this domain, the development of effective strategies to keep children engaged during interaction with a social robot is still an open research area. On this subject, some approaches are investigating the combination of distraction strategies, as used in human-human interaction, with the display of emotional behaviours. In this study, we present the results of a pilot study aimed at evaluating with children the valence of emotional behaviours enhanced with non-verbal sounds. The objective is to endow the NAO robot with emotion-like sounds, selected from a set of para-linguistic behaviours validated by valence. Results show that children aged 3–8 years perceive the robot’s behaviours and the related semantic-free emotional sounds in terms of different degrees of arousal, valence, and dominance: while valence and dominance are clearly perceived by the children, arousal is more difficult to distinguish.

10 citations


Book ChapterDOI
26 Nov 2019
TL;DR: An extensive human-robot collaboration user study involving 50 participants in which the robot purposefully executed erroneous behaviours and annotated the occurrences and the duration of multimodal social signals from the participants during both error-free situations and error situations using an automatic video annotation method based on OpenFace.
Abstract: The capability of differentiating error situations from error-free situations in human-robot collaboration is a mandatory skill for collaborative robots. One of the variables that robots can analyse to differentiate the two situations is the social signals of the human interaction partner. We performed an extensive human-robot collaboration user study involving 50 participants in which the robot purposefully executed erroneous behaviours. We annotated the occurrences and durations of multimodal social signals from the participants during both error-free situations and error situations, using an automatic video annotation method based on OpenFace. An analysis of the annotations shows that the participants express more facial expressions, head gestures, and gaze shifts during erroneous situations than in error-free situations. The durations of the facial expressions and gaze shifts are also longer during error situations. Our results additionally show that in error situations, compared with error-free situations, people look at the robot and the table for longer and at the objects for shorter durations. The results of this research are essential for the development of automatic error recognition and error handling in human-robot collaboration.

Book ChapterDOI
26 Nov 2019
TL;DR: Explores whether the children successfully got acquainted with the robot and to what extent they bonded with it, reporting the results of a user study evaluating the proposed interaction design patterns and robot behaviors.
Abstract: We are developing a social robot that should autonomously interact long-term with pediatric oncology patients. The child and the robot need to get acquainted with one another before a long-term interaction can take place. We designed five interaction design patterns and two sets of robot behaviors to structure a getting acquainted interaction. We discuss the results of a user study (N = 75, 8–11 y.o.) evaluating these patterns and robot behaviors. Specifically, we are exploring whether the children successfully got acquainted with the robot and to what extent the children bonded with the robot.

Book ChapterDOI
26 Nov 2019
TL;DR: A new approach is presented which integrates uninstructed persons as helpers to open doors and to call and operate elevators and the current implementation status of these two abilities into a robotic application developed for real-world scenarios is presented.
Abstract: The ability to handle closed doors and elevators would extend the applicability of Socially Assistive Robots (SARs) enormously. In this paper, we present a new approach which integrates uninstructed persons as helpers to open doors and to call and operate elevators. The current implementation status of these two abilities in a robotic application developed for real-world scenarios is presented, together with first experimental results.

Book ChapterDOI
26 Nov 2019
TL;DR: The paper introduces the notion of a Boundary-Crossing Robot which refers to the use of AI research and novel technology in symbiotic interaction with human users, especially in the meaning creation processes that make the world sensible and interpretable in the course of everyday activities.
Abstract: The paper introduces the notion of a Boundary-Crossing Robot which refers to the use of AI research and novel technology in symbiotic interaction with human users, especially in the meaning creation processes that make the world sensible and interpretable in the course of everyday activities. Co-evolution of collaboration is considered from the point of view of social robots with dual characteristics as agents and elaborated computers, and the focus is on the robot’s interaction capability. The paper emphasizes important questions related to trust in social encounters with boundary-crossing agents.

Book ChapterDOI
26 Nov 2019
TL;DR: This research studied the effect of intention prediction during a human-robot game scenario using the authors' humanoid robotic platform, RASA, and found a significant difference between the random playing strategy and the strategy that predicted players’ intentions during the game.
Abstract: Interaction quality in a social robotic platform can be improved through intention detection/prediction of the user. In this research, we studied the effect of intention prediction during a human-robot game scenario, using our humanoid robotic platform, RASA. Rock-Paper-Scissors was chosen as the game scenario. In the first step, a Leap Motion sensor and a Multilayer Perceptron neural network are used to detect the hand gesture of the human player. Next, in order to study the effect of intention prediction on our human-robot gaming platform, we implemented two different playing strategies for RASA: one plays randomly, while the other uses a Markov Chain model to predict the next move. Then 32 players aged 20 to 35 were asked to play Rock-Paper-Scissors with RASA for 20 rounds in each strategy mode. Participants were not informed of the difference in the robot’s decision-making strategy between modes. The perceived intelligence of each strategy mode, as well as the acceptance/attractiveness of the robotic gaming platform, was assessed quantitatively through a questionnaire. Finally, paired t-tests indicated a significant difference between the random playing strategy and the strategy predicting players’ intentions during the game.
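A Markov-chain move predictor of the kind the abstract describes can be sketched compactly: count the human's observed move transitions, predict the most frequent successor of their last move, and play the counter. The default opening move and the exact counting scheme below are assumptions, not details from the paper.

```python
from collections import defaultdict

# First-order Markov chain over the human's Rock-Paper-Scissors moves.
MOVES = ["rock", "paper", "scissors"]
COUNTER = {"rock": "paper", "paper": "scissors", "scissors": "rock"}

class MarkovPredictor:
    """Predicts the human's next move from observed move transitions."""
    def __init__(self):
        self.transitions = defaultdict(lambda: defaultdict(int))
        self.last = None

    def observe(self, move):
        """Record a transition from the previous human move to this one."""
        if self.last is not None:
            self.transitions[self.last][move] += 1
        self.last = move

    def robot_move(self):
        """Play the counter to the most likely next human move."""
        counts = self.transitions[self.last]
        if not counts:                      # no history yet: default move
            return "rock"
        predicted = max(counts, key=counts.get)
        return COUNTER[predicted]

p = MarkovPredictor()
for m in ["rock", "paper", "rock", "paper", "rock"]:
    p.observe(m)
# After "rock" this human has always played "paper", so the robot
# plays "scissors" to beat the predicted "paper".
print(p.robot_move())  # → scissors
```

A first-order chain only conditions on the immediately preceding move; higher-order variants condition on longer histories at the cost of needing more rounds to learn.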

Book ChapterDOI
26 Nov 2019
TL;DR: In this article, the authors developed a natural talking gesture generation behavior for a humanoid robot by feeding a Generative Adversarial Network (GAN) with human talking gestures recorded by a Kinect.
Abstract: The goal of the system presented in this paper is to develop a natural talking gesture generation behavior for a humanoid robot, by feeding a Generative Adversarial Network (GAN) with human talking gestures recorded by a Kinect. A direct kinematic approach is used to translate from human poses to robot joint positions. The provided videos show that the robot is able to use a wide variety of gestures, offering a non-dreary, natural expression level.

Book ChapterDOI
26 Nov 2019
TL;DR: This paper presents the functional advantages that adaptive Theory of Mind systems would support in robotics and contextualize them in practical applications and suggests directing future research towards the modern cross-talk between the fields of robotics and developmental psychology.
Abstract: Despite the recent advancement in the social robotic field, important limitations restrain its progress and delay the application of robots in everyday scenarios. In the present paper, we propose to develop computational models inspired by our knowledge of human infants’ social adaptive abilities. We believe this may provide solutions at an architectural level to overcome the limits of current systems. Specifically, we present the functional advantages that adaptive Theory of Mind (ToM) systems would support in robotics (i.e., mentalizing for belief understanding, proactivity and preparation, active perception and learning) and contextualize them in practical applications. We review current computational models mainly based on the simulation and teleological theories, and robotic implementations to identify the limitations of ToM functions in current robotic architectures and suggest a possible future developmental pathway. Finally, we propose future studies to create innovative computational models integrating the properties of the simulation and teleological approaches for an improved adaptive ToM ability in robots with the aim of enhancing human-robot interactions and permitting the application of robots in unexplored environments, such as disasters and construction sites. To achieve this goal, we suggest directing future research towards the modern cross-talk between the fields of robotics and developmental psychology.

Book ChapterDOI
26 Nov 2019
TL;DR: The exploratory design and study of a robot math tutor that can provide feedback on specific errors made by children solving basic addition and subtraction problems up to 100 is reported on.
Abstract: We report on the exploratory design and study of a robot math tutor that can provide feedback on specific errors made by children solving basic addition and subtraction problems up to 100. We discuss two interaction design patterns, one for speech recognition of answers when children think aloud, and one for providing error-specific feedback. We evaluate our design patterns, and whether our feedback mechanism motivates children and improves their performance, at primary schools with children (\(N=41\)) aged 7–9. We did not find any motivational or learning effects of our feedback mechanism, but lessons learned include that the robot can execute our interaction design patterns autonomously and that more advanced algorithms for error classification and adaptation to children’s performance levels are needed in our feedback mechanism.

Book ChapterDOI
26 Nov 2019
TL;DR: This study addressed more than 150 clinics and nursing service providers throughout Germany with respect to the benefit of different robot application scenarios, drivers and barriers for the introduction of service robots in healthcare settings as well as estimated time savings.
Abstract: Assistance robots have large potential to support patients and staff in outpatient and inpatient settings. Despite the need and large potential, the diffusion of robotic applications in the German healthcare sector is only slowly picking up pace. The objective of this study is to shed some light on the reasons and to identify measures that support the involved stakeholders in closing this gap in the upcoming years. Using an online survey, we addressed more than 150 clinics and nursing service providers throughout Germany with respect to the benefit of different robot application scenarios, the drivers of and barriers to the introduction of service robots in healthcare settings, as well as the estimated time savings. Concerning possible application areas, disinfection and cleaning robots are currently perceived to have the highest benefit, whereas the value of robots to support personal hygiene is considered rather low. The greatest drivers for using robot assistants in healthcare settings are their potential to save time for the staff, to increase employer attractiveness, and to raise process efficiency. The most frequently cited barriers are financing, data protection, legal obstacles, and the importance of human contact. For three selected scenarios (assistance robots as guides, lifting robots, and activation and communication robots), we further asked about the expected time savings. The results show differences between clinics as well as between inpatient and outpatient nursing services. In order to accelerate the diffusion of robot assistants in Germany, several implications have to be considered. Acceptance and experience are positively correlated, so from a policy standpoint, research programs are needed to support the joint development of robot assistants by research, industry, and end users, and legal and financial barriers should be reduced.
For manufacturers, creating testing possibilities and close interaction with potential users, in order to identify adequate scenarios and clarify legal questions, could prove beneficial in terms of higher acceptance in the market.

Book ChapterDOI
26 Nov 2019
TL;DR: This paper presents an active intention recognition paradigm to perceive, even under sensory constraints, not only the target's position but also the first responder's movements, which can provide information on his/her intentions.
Abstract: Proactively perceiving others’ intentions is a crucial skill to effectively interact in unstructured, dynamic and novel environments. This work proposes a first step towards embedding this skill in support robots for search and rescue missions. Predicting the responders’ intentions, indeed, will enable exploration approaches which will identify and prioritise areas that are more relevant for the responder and, thus, for the task, leading to the development of safer, more robust and efficient joint exploration strategies. More specifically, this paper presents an active intention recognition paradigm to perceive, even under sensory constraints, not only the target’s position but also the first responder’s movements, which can provide information on his/her intentions (e.g. reaching the position where he/she expects the target to be). This mechanism is implemented by employing an extension of Monte-Carlo-based planning techniques for partially observable environments, where the reward function is augmented with an entropy reduction bonus. We test in simulation several configurations of reward augmentation, both information theoretic and not, as well as belief state approximations and obtain substantial improvements over the basic approach.
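The entropy-reduction bonus mentioned in the abstract can be illustrated in isolation from the Monte-Carlo planner: the task reward is augmented with a term proportional to how much an observation reduces the entropy of the belief state. The discrete belief, bonus weight, and reward values below are hypothetical, not taken from the paper.

```python
import math

def entropy(belief):
    """Shannon entropy (in bits) of a discrete belief state."""
    return -sum(p * math.log2(p) for p in belief if p > 0)

def augmented_reward(task_reward, belief_before, belief_after, bonus_weight=1.0):
    """Task reward plus a bonus proportional to the entropy reduction
    achieved by the observation that updated the belief."""
    info_gain = entropy(belief_before) - entropy(belief_after)
    return task_reward + bonus_weight * info_gain

# A uniform belief over four candidate target locations (2 bits) that
# collapses to a near-certain one yields a positive information bonus,
# steering the planner toward informative sensing actions.
before = [0.25, 0.25, 0.25, 0.25]
after = [0.97, 0.01, 0.01, 0.01]
print(round(augmented_reward(0.0, before, after), 3))
```

In the paper's setting, this augmented reward is evaluated inside a Monte-Carlo planner for partially observable environments; here only the reward shaping itself is shown.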

Book ChapterDOI
26 Nov 2019
TL;DR: The results provide some insights into shared-control designs that accommodate the preferences of older adult users as they interact with robotic aids such as the table setting robot used in this study.
Abstract: This study presents user studies aimed at exploring the factors influencing the interaction between older adults and a robotic table setting assistant. The influence of the level of automation (LOA) and level of transparency (LOT) on the quality of the interaction was considered. Results revealed that the interaction effect of LOA and LOT significantly influenced the interaction. A low LOA, which required the user to control some of the actions of the robot, led the older adults to participate more in the interaction when the LOT was high (more information), compared with situations of low LOT (less information) and high LOA (more robot autonomy). Even though the higher LOA produced more fluency in the interaction, the lower LOA encouraged a more collaborative form of interaction, which is a priority in the design of robotic aids for older adult users. The results provide some insights into shared-control designs that accommodate the preferences of older adult users as they interact with robotic aids such as the table setting robot used in this study.

Book ChapterDOI
26 Nov 2019
TL;DR: This paper focuses on empowering the caregiver to easily teach new board exercises to the robot by providing positive examples, and builds upon the existing framework, in which a robot is employed to provide encouragement and hints while a patient is physically playing a cognitive exercise.
Abstract: Social Assistive Robots are a powerful tool to be used in patients’ cognitive training. The purpose of this study is to evaluate a new methodology to enable caregivers to teach cognitive exercises to the robot in an easy and natural way. We build upon our existing framework, in which a robot is employed to provide encouragement and hints while a patient is physically playing a cognitive exercise. In this paper, we focus on empowering the caregiver to easily teach new board exercises to the robot by providing positive examples.

Book ChapterDOI
26 Nov 2019
TL;DR: A novel framework for teaching sign language to RASA, a humanoid teaching assistant social robot, where the user would wear a motion capture suit and perform a sign multiple times to train a set of parallel Hidden Markov Models to encode each sign.
Abstract: This paper proposes a novel framework for teaching sign language to RASA, a humanoid teaching-assistant social robot. The ultimate goal was to design a user-friendly process by which the RASA robot could learn new signs from non-expert users. In the proposed method, the user wears a motion capture suit and performs a sign multiple times to train a set of parallel Hidden Markov Models that encode the sign. Then, collision avoidance and the sign’s comprehensibility are ensured by a special mapping from the user’s workspace to the robot’s joint space. Lastly, the system’s performance was assessed by having a teacher teach 10 Persian Sign Language (PSL) signs to the robot and involving participants familiar with PSL to investigate how distinguishable the performed signs were for them. We observed quite noticeable recognition rates of approximately 80% and 100% on their first and second guesses for the selected signs, respectively. Moreover, alongside those subjects, a group of participants unfamiliar with PSL was also asked to fill in a questionnaire in order to gather all of the participants’ viewpoints regarding the use of social robots for teaching sign languages. With a mean score of 4.1 (out of 5), the subjects familiar with PSL indicated that the signs performed by the robot were very close to natural.
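The parallel-HMM scheme described above, one model per sign with classification by likelihood, can be sketched with the standard forward algorithm. The toy two-state models and binary observation alphabet are hypothetical; the paper's models are trained on motion-capture features.

```python
import math

# One discrete HMM per sign; a new observation sequence is assigned to
# the sign whose model yields the highest log-likelihood.

def forward_loglik(obs, start, trans, emit):
    """Forward algorithm: log-likelihood of obs under a discrete HMM."""
    n = len(start)
    alpha = [start[s] * emit[s][obs[0]] for s in range(n)]
    for o in obs[1:]:
        alpha = [sum(alpha[i] * trans[i][j] for i in range(n)) * emit[j][o]
                 for j in range(n)]
    return math.log(sum(alpha))

def classify(obs, models):
    """Pick the sign whose HMM assigns the sequence the highest likelihood."""
    return max(models, key=lambda name: forward_loglik(obs, *models[name]))

# Two toy two-state models over a binary observation alphabet {0, 1}:
# "wave" mostly emits 1, "point" mostly emits 0 (hypothetical signs).
MODELS = {
    "wave":  ([1.0, 0.0], [[0.9, 0.1], [0.1, 0.9]], [[0.2, 0.8], [0.2, 0.8]]),
    "point": ([1.0, 0.0], [[0.9, 0.1], [0.1, 0.9]], [[0.8, 0.2], [0.8, 0.2]]),
}

print(classify([1, 1, 1, 0, 1], MODELS))  # → wave
```

Training the per-sign models from repeated demonstrations (e.g. via Baum-Welch) is omitted; only the classification step is shown.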

Book ChapterDOI
26 Nov 2019
TL;DR: The social presence of the robot leads to a better evaluation of self-generated actions and, at the same time, to a reduction of SoA in human-robot interaction (HRI).
Abstract: In the near future, robots will become a fundamental part of our daily life; therefore, it appears crucial to investigate how they can successfully interact with humans. Since several studies have already pointed out that a robotic agent can influence human cognitive mechanisms such as decision-making and joint attention, we focus on the Sense of Agency (SoA). To this aim, we employed the Intentional Binding (IB) task to implicitly assess SoA in human-robot interaction (HRI). Participants were asked to perform an IB task alone (Individual condition) or with the Cozmo robot (Social condition). In the Social condition, participants were free to decide whether to let Cozmo press. Results showed that participants performed the action significantly more often than Cozmo. Moreover, participants were more precise in reporting the occurrence of a self-made action when Cozmo was also in charge of performing the task. However, this improvement in evaluating self-performance corresponded to a reduction in SoA. In conclusion, the present study highlights the double effect of robots as social companions: the social presence of the robot leads to a better evaluation of self-generated actions and, at the same time, to a reduction of SoA.

Book ChapterDOI
26 Nov 2019
TL;DR: It was found that the robot was perceived as less warm, competent, agentic, and able to experience than the human, a result attributable primarily to the inherent human-likeness difference between the agents.
Abstract: Cheating is a universally salient and disliked behavior. Previous research has shown that a cheating robot dramatically increases perceptions of its agency. However, that original research did not directly compare human cheating to robot cheating. We examined whether the human and the robot were evaluated differently in terms of reactionary behaviors as well as attribution of mental states and perceptions of competence, warmth, agency, and the capability to experience. This study partially recreated the previous study's findings [10], showing that participants were highly socially engaged with the cheating robot and reacted with hostility to its cheating. In contrast, these reactions were not observed in the human condition. Additionally, play interactions with the robot were rated as more discomforting than the experience with the human player. Finally, the robot was perceived as less warm, competent, agentic, and able to experience than the human. This result could be attributed primarily to the inherent human-likeness difference between the agents. Several implications of this study are discussed with respect to the design of robot behavior and human social norms.

Book ChapterDOI
26 Nov 2019
TL;DR: This paper proposes a novel framework to detect human-object interactions in RGB-D video streams based on spatio-temporal and pose information and shows that the system can be used for online human motion prediction in robotic applications.
Abstract: The detection of human-object interactions is a key component in many applications; examples include activity recognition, human intention understanding, and the prediction of human movements. In this paper, we propose a novel framework to detect such interactions in RGB-D video streams based on spatio-temporal and pose information. Our system first detects possible human-object interactions using position and pose data of humans and objects. To counter false positive and false negative detections, we calculate the likelihood that such an interaction really occurs by tracking it over subsequent frames. Previous work mainly focused on the detection of specific activities with interacted objects in short prerecorded video clips. In contrast, our framework is able to find arbitrary interactions with 510 different objects by exploiting the detection capabilities of R-CNNs as well as the Open Images dataset, and can be used on online video streams. Our experimental evaluation demonstrates the robustness of the approach on various published videos recorded in indoor environments. The system achieves precision and recall rates of 0.82 on this dataset. Furthermore, we also show that our system can be used for online human motion prediction in robotic applications.
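The abstract's idea of countering one-frame false positives and negatives by tracking an interaction over subsequent frames can be illustrated with a small temporal-smoothing sketch. The exponential-moving-average form, the `alpha` smoothing factor, and the decision threshold here are illustrative assumptions, not the authors' exact likelihood formulation.

```python
from dataclasses import dataclass

@dataclass
class InteractionTrack:
    """Tracks one candidate human-object interaction across frames, keeping
    a smoothed confidence so isolated detection flickers are damped."""
    confidence: float = 0.0

    def update(self, detected_this_frame: bool, alpha: float = 0.3) -> float:
        # exponential moving average over the per-frame binary detections
        obs = 1.0 if detected_this_frame else 0.0
        self.confidence = (1 - alpha) * self.confidence + alpha * obs
        return self.confidence

    def is_interaction(self, threshold: float = 0.6) -> bool:
        # declare the interaction only once evidence has accumulated
        return self.confidence >= threshold
```

A single spurious detection never crosses the threshold, while a sustained run of detections does, which is the intended false-positive/false-negative trade-off.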

Book ChapterDOI
26 Nov 2019
TL;DR: This work follows a purely data-driven approach based on deep learning architectures which, by requiring no knowledge of either the nature of the masking noise or the structure and acoustics of the operating environment, can act reliably in previously unexplored acoustic scenes.
Abstract: This paper is about speaker verification and horizontal localisation in the presence of conspicuous noise. Specifically, we are interested in enabling a mobile robot to robustly and accurately spot the presence of a target speaker and estimate his/her position in challenging acoustic scenarios. While several solutions to both tasks have been proposed in the literature, little attention has been devoted to the development of systems able to function in harsh noisy conditions. To address these shortcomings, in this work we follow a purely data-driven approach based on deep learning architectures which, by requiring no knowledge of either the nature of the masking noise or the structure and acoustics of the operating environment, are able to act reliably in previously unexplored acoustic scenes. Our experimental evaluation, relying on data collected in real environments with a robotic platform, demonstrates that our framework achieves high performance in both the verification and localisation tasks, despite the presence of copious noise.
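A common decision stage for deep speaker verification, which the abstract leaves unspecified, scores a test utterance's fixed-size embedding against an enrolled one. This sketch assumes embeddings from some deep speaker encoder and an illustrative threshold; it is not the paper's architecture.

```python
import numpy as np

def verify_speaker(enroll_emb: np.ndarray, test_emb: np.ndarray,
                   threshold: float = 0.7) -> tuple[bool, float]:
    """Accept the test utterance as the target speaker when the cosine
    similarity between the two embeddings exceeds the decision threshold.
    Returns (accepted, score)."""
    a = enroll_emb / np.linalg.norm(enroll_emb)
    b = test_emb / np.linalg.norm(test_emb)
    score = float(a @ b)
    return score >= threshold, score
```

In practice the threshold would be tuned on a development set to balance false acceptances against false rejections under the expected noise conditions.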

Proceedings Article
01 Jan 2019
TL;DR: In this paper, ongoing work on understandable teams of robots collaborating to solve a common task while communicating their current, suggested or planned actions in natural language to each other is described.
Abstract: In this paper, we describe ongoing work on understandable teams of robots collaborating to solve a common task while communicating their current, suggested or planned actions in natural language to ...

Book ChapterDOI
26 Nov 2019
TL;DR: This paper examines the effects of three different visual conditions (filters) on the remote operator’s ability to discern details while completing a navigation task and finds that a depth image filter was the most effective privacy protector.
Abstract: As robotics technology improves, remotely-operated telepresence robots will become more prevalent in homes and businesses, allowing guests, business partners, and contractors to visit and accomplish tasks without being physically present. These devices raise new privacy concerns: a telepresence robot may be used by a remote operator to spy on the local area, or recorded video may be viewed by a third party. Video filtering is one method of reducing spying ability while still allowing the remote operator to perform their task. In this paper, we examine the effects of three different visual conditions (filters) on the remote operator’s ability to discern details while completing a navigation task. We found that applying such filters protected privacy without significantly affecting the operator’s ability to perform the task, and that a depth image filter was the most effective privacy protector. We also found that the cognitive load of driving the robot has a slight privacy-protecting effect.
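The depth-image filter the study found most effective can be sketched as rendering only scene geometry: RGB texture (faces, documents, screens) is discarded, while depth structure sufficient for navigation is kept. The clipping range and grayscale mapping below are illustrative assumptions, not the study's exact filter.

```python
import numpy as np

def depth_privacy_filter(depth_m: np.ndarray,
                         near: float = 0.3, far: float = 5.0) -> np.ndarray:
    """Render a metric depth frame as an 8-bit grayscale image. No color
    channel is transmitted, so fine visual detail is withheld while the
    geometry needed to drive the robot remains visible."""
    d = np.clip(depth_m, near, far)
    # nearer surfaces brighter, farther surfaces darker
    norm = 1.0 - (d - near) / (far - near)
    return (norm * 255).astype(np.uint8)
```

Values beyond the far plane collapse to black, which itself hides distant detail the operator does not need for the navigation task.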

Book ChapterDOI
26 Nov 2019
TL;DR: A novel method for telepresence via VR is described, aimed at improving comfort, by accounting for discrepancies between robot and user head pose, through a “decoupled” image projection technique, whereby the user is able to look across captured imagery rendered to a virtual display plane.
Abstract: Telepresence technologies enable users to exhibit a presence in a remote location, through the use of sensors, networks and robotics. State-of-the-art telepresence research swaps conventional desktop monitors for Virtual Reality (VR) headsets, in order to increase the user’s immersion in the remote environment, though often at the cost of increased nausea and oculomotor discomfort. We describe a novel method for telepresence via VR, aimed at improving comfort, by accounting for discrepancies between robot and user head pose. This is achieved through a “decoupled” image projection technique, whereby the user is able to look across captured imagery rendered to a virtual display plane. Evaluated against conventional projection techniques, in a controlled study involving 19 participants, decoupled image projection significantly reduced mean perceived nausea and oculomotor discomfort while also improving immersiveness and the perceived sensation of presence.
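The "decoupled" projection described above anchors the captured imagery to the robot's (possibly lagging) head pose and lets the user's head rotate relative to it, panning across the virtual display plane instead of waiting for the robot to catch up. A rough numpy sketch of that relative-rotation idea follows; the yaw-pitch parameterisation and function names are assumptions, not the authors' implementation.

```python
import numpy as np

def yaw_pitch_matrix(yaw: float, pitch: float) -> np.ndarray:
    """Rotation matrix for a yaw about +y followed by a pitch about +x,
    both in radians."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    ry = np.array([[cy, 0.0, sy], [0.0, 1.0, 0.0], [-sy, 0.0, cy]])
    rx = np.array([[1.0, 0.0, 0.0], [0.0, cp, -sp], [0.0, sp, cp]])
    return ry @ rx

def decoupled_view(user_head: tuple, robot_head: tuple) -> np.ndarray:
    """Rotation applied to the virtual display plane: the relative rotation
    between the user's current head pose and the robot's last reported pose.
    When the two agree it is the identity and the view is undistorted."""
    return yaw_pitch_matrix(*user_head) @ yaw_pitch_matrix(*robot_head).T
```

Because the rendered plane follows this relative rotation immediately, the visual scene stays consistent with the user's vestibular sense during the robot's pose lag, which is the plausible mechanism behind the reported reduction in nausea.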