
Showing papers presented at "Robot and Human Interactive Communication" in 2021


Proceedings ArticleDOI
08 Aug 2021
TL;DR: In this article, the authors conducted a 2 (high vs. low anthropomorphism) x 4 (trust repair strategies) between-subjects experiment and identified which repair strategies are effective relative to ability, integrity and benevolence, and to the robot's anthropomorphism.
Abstract: Trust is vital to promoting human and robot collaboration, but like human teammates, robots make mistakes that undermine trust. As a result, a human's perception of his or her robot teammate's trustworthiness can dramatically decrease [1], [2], [3], [4]. Trustworthiness consists of three distinct dimensions: ability (i.e. competency), benevolence (i.e. concern for the trustor) and integrity (i.e. honesty) [5], [6]. Taken together, decreases in trustworthiness decrease trust in the robot [7]. To address this, we conducted a 2 (high vs. low anthropomorphism) x 4 (trust repair strategies) between-subjects experiment. Preliminary results of the first 164 participants (between 19 and 24 per cell) highlight which repair strategies are effective relative to ability, integrity and benevolence and the robot's anthropomorphism. Overall, this paper contributes to the HRI trust repair literature.

27 citations


Proceedings ArticleDOI
08 Aug 2021
TL;DR: In this paper, the authors provide guidelines for how other researchers can adopt long-term co-design, informed by a 12-month co-design process with older adults designing a social robot, which leveraged human-centered, tactile and experiential design activities, including participatory design.
Abstract: Users can provide valuable insights for designing new technologies like social robots, with the right tools and methodologies. Challenges in inviting users as co-designers of social robots stem from the lack of guidelines or methodologies to (1) organize co-design processes and/or (2) engage with people long-term to develop technologies together. The main contribution of this work is a set of guidelines for how other researchers can adopt long-term co-design, informed by a 12-month co-design process with older adults designing a social robot. We leveraged human-centered, tactile and experiential design activities, including participatory design, based upon the following design principles: scenario-specific exploration, long-term lived experiences, supporting multiple design activities, cultivating relationships, and employing divergent and convergent processes. We present seven different sessions across three stages as examples of this methodology that build on each other to engage users as co-designers, successfully deployed in a co-design project of home social robots with 28 older adults. Lastly, we detail 10 long-term divergent-convergent co-design guidelines for designing social robots. We demonstrate the value of leveraging people's lived technology experiences and co-design activities to generate actionable social robot design guidelines, and advocate for more applications of the methodology in broader contexts.

23 citations


Proceedings ArticleDOI
08 Aug 2021
TL;DR: In this paper, a novel dynamic method based on Behavior Trees (BTs) is proposed to integrate planning and allocation of tasks in mixed human-robot teams, suitable for manufacturing environments.
Abstract: This paper proposes a novel dynamic method based on Behavior Trees (BTs) that integrates planning and allocation of tasks in mixed human-robot teams, suitable for manufacturing environments. The Behavior Tree formulation allows encoding a single job as a compound of different tasks with temporal and logic constraints. In this way, instead of formulating an offline centralized optimization problem, the role allocation problem is solved with multiple simplified online optimization sub-problems, without complex and cross-schedule task dependencies. These sub-problems are defined as Mixed-Integer Linear Programs (MILPs) that, according to the worker-action costs and the workers' availability, allocate the yet-to-execute tasks among the available workers. To characterize the behavior of the developed method, we performed different simulation experiments evaluating the action-worker allocation results and the computational complexity. Thanks to the nature of the algorithm and the possibility of simulating the agents' behavior, the obtained results also adequately illustrate how the algorithm would perform in real experiments.

15 citations
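
The per-tick allocation sub-problem described above can be sketched as a small binary program. Below is a minimal, hypothetical example using PuLP; the tasks, workers, and costs are illustrative stand-ins, not the paper's actual model.

```python
# Minimal sketch of one online role-allocation sub-problem: assign the
# yet-to-execute tasks to available workers so the summed worker-action
# cost is minimized. Names and costs are hypothetical.
import pulp

tasks = ["pick", "screw", "inspect"]
workers = ["human", "robot"]
cost = {  # cost[worker][task]: e.g. expected execution time in seconds
    "human": {"pick": 4.0, "screw": 6.0, "inspect": 3.0},
    "robot": {"pick": 5.0, "screw": 2.0, "inspect": 7.0},
}

prob = pulp.LpProblem("role_allocation", pulp.LpMinimize)
x = pulp.LpVariable.dicts("assign", (workers, tasks), cat="Binary")

# Objective: total cost of the chosen worker-task assignments.
prob += pulp.lpSum(cost[w][t] * x[w][t] for w in workers for t in tasks)

# Each task is executed by exactly one worker.
for t in tasks:
    prob += pulp.lpSum(x[w][t] for w in workers) == 1

prob.solve(pulp.PULP_CBC_CMD(msg=False))
for w in workers:
    print(w, "->", [t for t in tasks if x[w][t].value() == 1])
```

In the dynamic setting the paper describes, a program of this shape would be re-solved whenever the BT exposes new ready tasks or a worker's availability changes.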


Proceedings ArticleDOI
08 Aug 2021
TL;DR: In this paper, a general Mixed-Integer Linear Programming (MILP) problem is formulated to minimize the overall execution time while optimizing the quality of the executed tasks as well as human and robotic workload.
Abstract: In this work, we address a task allocation problem for human multi-robot settings. Given a set of tasks to perform, we formulate a general Mixed-Integer Linear Programming (MILP) problem aiming at minimizing the overall execution time while optimizing the quality of the executed tasks as well as the human and robotic workload. Different skills of the agents, both human and robotic, are taken into account, and human operators are enabled to either directly execute tasks or play supervisory roles; moreover, multiple manipulators can tightly collaborate if required to carry out a task. Finally, as is realistic in human contexts, human parameters are assumed to vary over time, e.g., due to an increasing level of fatigue. Therefore, online monitoring is required and re-allocation is performed if needed. Simulations in a realistic scenario with two manipulators and a human operator performing an assembly task validate the effectiveness of the approach.

14 citations
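
A multi-objective MILP of this kind typically weighs makespan against quality and workload terms. The following is one plausible way to write such an objective, not the paper's exact formulation; here x_{a,t} is a binary assignment variable, q_{a,t} a quality score, and l_{a,t} a workload cost for agent a on task t.

```latex
\min_{x,\,C_{\max}} \quad
  \alpha\, C_{\max}
  + \beta \sum_{a}\sum_{t} \bigl(1 - q_{a,t}\bigr)\, x_{a,t}
  + \gamma \sum_{a}\sum_{t} l_{a,t}\, x_{a,t}
\qquad \text{s.t.} \quad
  \sum_{a} x_{a,t} = 1 \;\; \forall t,
  \qquad x_{a,t} \in \{0,1\}
```

with C_max constrained to upper-bound every agent's completion time; online re-allocation then amounts to re-solving this program with updated (e.g., fatigue-adjusted) parameters.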


Proceedings ArticleDOI
08 Aug 2021
TL;DR: In this paper, the authors investigated the feasibility of integrating wearable sensors and machine learning techniques to detect the occurrence of challenging behaviours in real-time, and found that physiological signals in addition to typical kinetic measures led to more accurate predictions.
Abstract: Autism spectrum disorder is a neurodevelopmental disorder characterized by patterns of behaviour and difficulties with social communication and interaction. Children on the spectrum exhibit atypical, restricted, repetitive, and challenging behaviours. In this study, we investigate the feasibility of integrating wearable sensors and machine learning techniques to detect the occurrence of challenging behaviours in real time. A session of a child with autism interacting with different stimulus groups that included social robots was annotated with observed challenging behaviours. The child wore a wearable device that captured different motion and physiological signals. Different features and machine learning configurations were investigated to identify the most effective combination. Our results showed that physiological signals in addition to typical kinetic measures led to more accurate predictions. The best combination of features and learning model achieved an accuracy of 97%. The findings of this work motivate research toward methods of early detection of challenging behaviours, which may enable timely intervention by caregivers and possibly by social robots.

12 citations
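
As a sketch of the sensing-and-classification pipeline such a study investigates, the snippet below computes windowed statistics over two wearable channels and trains an off-the-shelf classifier. The channel choice, window length, and model are assumptions for illustration, and the data here are synthetic.

```python
# Hedged sketch: windowed statistics over motion and physiological channels
# feed a standard classifier. All names and parameters are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def window_features(signal, fs=32, win_s=4):
    """Split a 1-D signal into fixed windows and compute simple statistics."""
    win = fs * win_s
    n = len(signal) // win
    chunks = signal[: n * win].reshape(n, win)
    return np.column_stack([chunks.mean(1), chunks.std(1),
                            chunks.min(1), chunks.max(1)])

# Synthetic stand-ins for accelerometer magnitude and electrodermal activity.
rng = np.random.default_rng(0)
acc = rng.normal(size=32 * 4 * 100)
eda = rng.normal(size=32 * 4 * 100)
X = np.hstack([window_features(acc), window_features(eda)])
y = rng.integers(0, 2, size=len(X))  # 1 = challenging behaviour observed

clf = RandomForestClassifier(n_estimators=100).fit(X, y)
print("train accuracy:", clf.score(X, y))
```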


Proceedings ArticleDOI
08 Aug 2021
TL;DR: In this article, a reinforcement learning-based Tic-tac-toe scenario is presented, where each playing component is represented by an individual drone that has its own mobility and swarm intelligence to win against a human player.
Abstract: Reinforcement learning (RL) methods have been actively applied in the field of robotics, allowing the system itself to find a solution for a task that would otherwise require a complex decision-making algorithm. In this paper, we present a novel RL-based Tic-tac-toe scenario, i.e. SwarmPlay, where each playing component is represented by an individual drone that has its own mobility and swarm intelligence to win against a human player. Thus, the combination of a challenging swarm strategy and human-drone collaboration aims to make games with machines tangible and interactive. Although some research on AI for board games already exists, e.g., for chess, the SwarmPlay technology has the potential to offer much more engagement and interaction with the user, as it proposes a multi-agent swarm instead of a single interactive robot. We explore users' evaluation of the RL-based swarm behavior in comparison with game theory-based behavior. The preliminary user study revealed that participants were highly engaged in the game with drones (70% put a maximum score on the Likert scale) and found it less artificial compared to regular computer-based systems (80%). The effect of the game's outcome on the users' perception of the game was analyzed and discussed. The user study revealed that SwarmPlay has the potential to be implemented in a wider range of games, significantly improving human-drone interactivity.

12 citations
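
The game-playing policy itself can be learned with standard tabular methods. The sketch below shows a minimal Q-learning update for tic-tac-toe states; it illustrates the general technique, not the SwarmPlay implementation, and the drone-assignment layer (which drone flies to which cell) is deliberately omitted.

```python
# Minimal tabular Q-learning for tic-tac-toe. Hyperparameters are arbitrary.
import random
from collections import defaultdict

Q = defaultdict(float)          # Q[(state, action)] -> value
alpha, gamma, eps = 0.5, 0.9, 0.1

def legal_moves(board):         # board: 9-char string of 'X', 'O', '.'
    return [i for i, c in enumerate(board) if c == "."]

def choose(board):
    """Epsilon-greedy action selection over the legal cells."""
    moves = legal_moves(board)
    if random.random() < eps:
        return random.choice(moves)
    return max(moves, key=lambda a: Q[(board, a)])

def update(board, action, reward, next_board):
    """One-step Q-learning backup; next_board is None at game end."""
    future = 0.0
    if next_board is not None and legal_moves(next_board):
        future = max(Q[(next_board, a)] for a in legal_moves(next_board))
    Q[(board, action)] += alpha * (reward + gamma * future - Q[(board, action)])

b = "." * 9
a = choose(b)
update(b, a, 0.0, b[:a] + "X" + b[a + 1:])
```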


Proceedings ArticleDOI
08 Aug 2021
TL;DR: In this paper, the authors propose a new psychology-inspired task, gathering perspective-taking, planning, knowledge representation with theory of mind, manipulation, and communication for human-robot interaction.
Abstract: Assessing a robotic architecture for Human-Robot Interaction can be challenging due to the number of features a robot has to be endowed with to perform an acceptable interaction. While everyday-inspired tasks are interesting as they reflect a realistic use of such robots, they often contain many unknown and uncontrolled conditions, and specific robot behaviors can be hard to test. In this paper, we propose a new psychology-inspired task, gathering perspective-taking, planning, knowledge representation with theory of mind, manipulation, and communication. Along with a precise description of the task allowing its replication, we present a cognitive robot architecture able to perform it in its nominal cases. We finally suggest some challenges and evaluations for the Human-Robot Interaction research community, all derived from this easy-to-replicate task.

11 citations


Proceedings ArticleDOI
08 Aug 2021
TL;DR: In this paper, the authors developed a teleoperation framework that enables an experienced human coach to conduct mindfulness training sessions virtually, by replicating their upper-body and head movements onto the Pepper robot, in real-time.
Abstract: Social robots are becoming incorporated into daily human lives, assisting in the promotion of the physical and mental wellbeing of individuals. To investigate the design and use of social robots for delivering mindfulness training, we develop a teleoperation framework that enables an experienced Human Coach (HC) to conduct mindfulness training sessions virtually, by replicating their upper-body and head movements onto the Pepper robot in real-time. Pepper's vision is mapped onto a Head-Mounted Display (HMD) worn by the HC, and a bidirectional audio pipeline is set up, enabling the HC to communicate with the participants through the robot. To evaluate the participants' perceptions of the teleoperated Robot Coach (RC), we study the interactions between a group of participants and the RC over 5 weeks and compare these with another group of participants interacting directly with the HC. Growth modelling analysis of this longitudinal data shows that the HC ratings are consistently greater than 4 (on a scale of 1–5) for all aspects, while an increase is witnessed in the RC ratings over the weeks for the Robot Motion and Conversation dimensions. Mindfulness training delivered by both types of coaching evokes positive responses from the participants across all the sessions, with the HC rated significantly higher than the RC on Animacy, Likeability and Perceived Intelligence. Participants' personality traits such as Conscientiousness and Neuroticism are found to influence their perception of the RC. These findings enable an understanding of the differences between the perceptions of HC and RC delivering mindfulness training, and provide insights towards the development of robot coaches for improving the psychological wellbeing of individuals.

11 citations


Proceedings ArticleDOI
08 Aug 2021
TL;DR: In this paper, a probabilistic generative model is proposed to synthesize and reconstruct long horizon motion sequences conditioned on past information and control signals, such as the path along which an individual is moving.
Abstract: Data-driven approaches for modeling human skeletal motion have found various applications in interactive media and social robotics. Challenges remain in these fields for generating high-fidelity samples and robustly reconstructing motion from imperfect input data, due to, e.g., missed marker detection. In this paper, we propose a probabilistic generative model to synthesize and reconstruct long-horizon motion sequences conditioned on past information and control signals, such as the path along which an individual is moving. Our method adapts the existing work MoGlow by introducing a new graph-based model. The model leverages the spatial-temporal graph convolutional network (ST-GCN) to effectively capture the spatial structure and temporal correlation of skeletal motion data at multiple scales. We evaluate the model on a mixture of motion capture datasets of human locomotion, with foot-step and bone-length analysis. The results demonstrate the advantages of our model in reconstructing missing markers and achieving comparable results on generating realistic future poses. When the inputs are imperfect, our model shows improved robustness of generation.

11 citations
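
The core spatial operation in an ST-GCN-style layer can be sketched in a few lines: joint features are mixed through a normalized skeleton adjacency matrix and a learned projection. The PyTorch module below is an illustrative sketch with a hypothetical 3-joint chain; the temporal convolutions and the normalizing-flow (MoGlow) components are omitted, and this is not the authors' network.

```python
# Sketch of the spatial graph convolution at the heart of an ST-GCN layer.
import torch
import torch.nn as nn

class GraphConv(nn.Module):
    def __init__(self, in_dim, out_dim, adj):
        super().__init__()
        deg = adj.sum(1)
        # Row-normalized adjacency: each joint averages over its neighbours.
        self.register_buffer("A", adj / deg.clamp(min=1).unsqueeze(1))
        self.proj = nn.Linear(in_dim, out_dim)

    def forward(self, x):                  # x: (batch, joints, in_dim)
        return torch.relu(self.proj(self.A @ x))

# Hypothetical 3-joint chain (hip - knee - ankle) with self-loops.
adj = torch.tensor([[1., 1., 0.],
                    [1., 1., 1.],
                    [0., 1., 1.]])
layer = GraphConv(3, 16, adj)
out = layer(torch.randn(8, 3, 3))          # -> (8, 3, 16)
```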


Proceedings ArticleDOI
08 Aug 2021
TL;DR: In this paper, an interpretable modular neural framework for modeling the intentions of other observed entities is proposed, and the authors demonstrate the efficacy of their approach with experiments on data from human participants on a search and rescue task in Minecraft.
Abstract: When developing AI systems that interact with humans, it is essential to design both a system that can understand humans, and a system that humans can understand. Most deep network based agent-modeling approaches are 1) not interpretable and 2) only model external behavior, ignoring internal mental states, which potentially limits their capability for assistance, interventions, discovering false beliefs, etc. To this end, we develop an interpretable modular neural framework for modeling the intentions of other observed entities. We demonstrate the efficacy of our approach with experiments on data from human participants on a search and rescue task in Minecraft, and show that incorporating interpretability can significantly increase predictive performance under the right conditions.

10 citations


Proceedings ArticleDOI
08 Aug 2021
TL;DR: In this article, the authors explored the use of monoscopic and stereoscopic views and display types (immersive and non-immersive VR) for operating vehicles remotely and conducted two user studies to explore their feasibility and advantages.
Abstract: Virtual reality (VR) head-mounted displays (HMD) have recently been used to provide an immersive, first-person view in real-time for manipulating remotely-controlled unmanned ground vehicles (UGV). The teleoperation of a UGV can be challenging for operators when it is done in real time. One big challenge is for operators to quickly perceive the distances of objects around the UGV while it is moving. In this research, we explore the use of monoscopic and stereoscopic views and display types (immersive and non-immersive VR) for operating vehicles remotely. We conducted two user studies to explore their feasibility and advantages. Results show a significantly better performance when using an immersive display with a stereoscopic view for dynamic, real-time navigation tasks that require avoiding both moving and static obstacles. The use of a stereoscopic view in an immersive display in particular improved user performance and led to better usability.

Proceedings ArticleDOI
08 Aug 2021
TL;DR: In this paper, the authors present the results of research whose aim was to determine the type and amount of personal information individuals might disclose to robots designed with different visual appearances, such as two humanoid appearing robots and a female android.
Abstract: This paper presents the results of research whose aim was to determine the type and amount of personal information individuals might disclose to robots designed with different visual appearances. The set of images viewed by participants consisted of two humanoid-appearing robots and a female android. Further, a human image was used as a control for comparison purposes. For an individual to decide to disclose personal and potentially embarrassing information to a robot serving as a counselor, they must trust that the robot will safeguard their disclosures and be an empathetic listener. In this research, 110 participants viewed four images and completed an online survey assessing their attitudes and decision on whether to self-disclose personal information to a robot counselor. Compared to the robot images, the results indicated a strong preference to disclose personal information to a human counselor regardless of the type of information. However, given the type of self-disclosure, the data also showed that participants would, to some extent, disclose to a friendly-appearing robot and female android, and more so than to a robot judged to lack affect.

Proceedings ArticleDOI
08 Aug 2021
TL;DR: In this paper, the authors present the user requirements and challenges that are relevant to older adults who prefer to age healthily at home, and how socially assistive robots (SAR) can be used to help them.
Abstract: With the world population ageing and the number of healthcare users needing assistance and support increasing, healthcare is becoming more costly; as such, the need to optimise and support independent living for older people is of paramount importance. This paper reviews the user requirements and challenges that are relevant to older adults who prefer to age healthily at home, and how socially assistive robots (SAR) can be used to help them. The main focus is placed on the social robotic application developed for the H2020 SHAPES project to promote Smart Living Environments for healthy ageing. The solution is based on ARI, PAL Robotics' newest robot, a high-performance social robot and companion designed for a wide range of multi-modal expressive gestures, gaze and personalised behaviour, which is integrated with several Digital Solutions developed within the SHAPES project to improve human-robot interaction and user acceptability for independent living support tasks. The validation process will take place over the coming months at Clinica Humana (Mallorca, Spain), a private clinic that provides hospital care to retirement homes, communities and home-bound patients. A description of the scenario definition is presented in the paper, together with the validation plan that will be executed during the pilot assessments and the measures that will be taken to improve user engagement.

Proceedings ArticleDOI
08 Aug 2021
TL;DR: In this article, the authors explore the utility, usability, and accessibility of a tele-operated Stretch robot in the home and present findings from a study (N=18) in which participants used the interface to remotely control the robot.
Abstract: New mobile manipulator platforms, like the Hello Robot Stretch, have made the idea of long-term in-home robotic assistance feasible. However, existing autonomous capabilities for such robots in unstructured, highly-varied environments are still not available. Instead, using robots with human tele-operation can have huge immediate impact. For these robots to serve the populations that need them the most, their interfaces need to be accessible to people with mobility limitations. In this paper we explore the utility, usability, and accessibility of a tele-operated Stretch robot in the home. We first describe a browser-based interface for controlling the Stretch robot designed with accessibility in mind. We then present findings from a study (N=18) in which participants used the interface to remotely control the robot to perform realistic tasks in a kitchen, demonstrating the feasibility of tele-operated assistance and revealing challenges and opportunities. Next, we present a study with individuals with mobility limitations (N=3) identifying additional accessibility requirements for the interface. Participants in both studies agreed on the utility of the robot despite its current limitations.

Proceedings ArticleDOI
08 Aug 2021
TL;DR: In this article, the authors used graph division to represent spatial knowledge and a Transfer Learning Diffusion Convolutional Recurrent Neural Network (TL-DCRNN) to predict navigation strategy.
Abstract: To build an agent providing assistance to human rescuers in an urban search and rescue task, it is crucial to understand not only human actions but also the human beliefs that may influence the decision to take these actions. Developing data-driven models to predict a rescuer's strategies for navigating the environment and triaging victims requires costly data collection and training for each new environment of interest. Transfer learning approaches can be used to mitigate this challenge, allowing a model trained on a source environment/task to generalize to a previously unseen target environment/task with few training examples. In this paper, we investigate transfer learning (a) from a source environment with a smaller number of victim injury classes to one with a larger number of victim injury classes, and (b) from a smaller and simpler environment to a larger and more complex one, for navigation strategy. Inspired by the hierarchical organization of human spatial cognition, we used graph division to represent spatial knowledge, and a Transfer Learning Diffusion Convolutional Recurrent Neural Network (TL-DCRNN), a spatial and temporal graph-based recurrent neural network suitable for transfer learning, to predict navigation. To abstract the rescue strategy from a rescuer's field-of-view stream, we used attention-based LSTM networks. We experimented with various transfer learning scenarios and evaluated the performance using mean average error. Results indicated our assistant agent can improve predictive accuracy and learn target tasks faster when equipped with transfer learning methods.
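
The attention-based LSTM abstraction mentioned above can be sketched compactly: per-frame field-of-view features are encoded by an LSTM and pooled with learned attention weights into a single strategy embedding. The module below is an illustrative PyTorch sketch with assumed dimensions, not the authors' network.

```python
# Sketch of attention pooling over LSTM outputs for strategy abstraction.
import torch
import torch.nn as nn

class AttnLSTMEncoder(nn.Module):
    def __init__(self, feat_dim=64, hidden=128):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.score = nn.Linear(hidden, 1)

    def forward(self, x):                         # x: (batch, time, feat_dim)
        h, _ = self.lstm(x)                       # (batch, time, hidden)
        w = torch.softmax(self.score(h), dim=1)   # attention weights over time
        return (w * h).sum(dim=1)                 # (batch, hidden) embedding

enc = AttnLSTMEncoder()
embedding = enc(torch.randn(4, 30, 64))           # 30 frames of FOV features
```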

Proceedings ArticleDOI
08 Aug 2021
TL;DR: In this article, the authors present an approach for visualizing mobile robots through an Augmented Reality headset when there is no line-of-sight visibility between the robot and the human.
Abstract: We present an approach for visualizing mobile robots through an Augmented Reality headset when there is no line-of-sight visibility between the robot and the human. Three elements are visualized in Augmented Reality: 1) The robot's 3D model, to indicate its position; 2) An arrow emanating from the robot, to indicate its planned movement direction; and 3) A 2D grid, to represent the ground plane. We conduct a user study with 18 participants, in which each participant was asked to retrieve objects, one at a time, from stations at the two sides of a T-junction at the end of a hallway where a mobile robot is roaming. The results show that visualizations improved the perceived safety and efficiency of the task and led to participants being more comfortable with the robot within their personal spaces. Furthermore, visualizing the motion intent in addition to the robot model was found to be more effective than visualizing the robot model alone. The proposed system can improve the safety of automated warehouses by increasing the visibility and predictability of robots.

Proceedings ArticleDOI
08 Aug 2021
TL;DR: In this article, the authors explore implicit communication between humans and robots through movement in multi-party (or multi-user) interactions, by considering that legibility depends on all human users involved in the interaction and should take into consideration how each of them perceives the robot's movements from their respective points-of-view.
Abstract: In this work we explore implicit communication between humans and robots, through movement, in multi-party (or multi-user) interactions. In particular, we investigate how a robot can move to better convey its intentions using legible movements in multi-party interactions. Current research on the application of legible movements has focused on single-user interactions, leaving a vacuum of knowledge regarding the impact of such movements in multi-party interactions. We propose a novel approach that extends the notion of legible motion to multi-party settings, by considering that legibility depends on all human users involved in the interaction and should take into consideration how each of them perceives the robot's movements from their respective points of view. We show, through simulation and a user study, that our proposed model of multi-user legibility leads to movements that, on average, optimize the legibility of the motion as perceived by the group of users. Our model creates movements that allow each human to more quickly and confidently understand what the robot's intentions are, thus creating safer, clearer and more efficient interactions and collaborations.
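
One plausible way to formalize this extension (the paper's exact definitions may differ) starts from Dragan-style legibility, which scores a trajectory xi by how quickly an observer's goal inference concentrates on the true goal G, and then averages that score over each user's viewpoint:

```latex
\mathrm{Leg}_i(\xi) =
  \frac{\int P\!\left(G \mid \phi_{v_i}(\xi_{0 \to t})\right) f(t)\, dt}
       {\int f(t)\, dt},
\qquad
\mathrm{Leg}_{\mathrm{multi}}(\xi) = \frac{1}{N} \sum_{i=1}^{N} \mathrm{Leg}_i(\xi)
```

where phi_{v_i} projects the motion into user i's point of view and f(t) is a decreasing weight that rewards early disambiguation; maximizing Leg_multi then favors motions that are legible to the group on average.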

Proceedings ArticleDOI
08 Aug 2021
TL;DR: In this paper, the authors present qualitative analysis of discussions with prospective users and experienced coaches regarding the design of robot well-being coaches, and present this data together with tabulated quotes from the participants and coaches, to pave the way towards designing robot coaches that can provide supportive interventions to improve the mental health and wellbeing of their users.
Abstract: Recent research is emerging in the field of Social Robotics where robots have the potential to serve as tools to improve human well-being. However, research exploring the expectations and perceptions of prospective users of such robots, and of the professionals who currently deliver these interventions, is limited. In this paper, we present a qualitative analysis of discussions with prospective users and experienced coaches regarding the design of robot well-being coaches. We invited participants interested in well-being practices to take part in a Participatory Design (PD) study, consisting of individual interviews and a focus group discussion (N_P = 8). Discussions focused on ideating how a robot could function as a mental well-being coach, based on their experiences with well-being practices. Data triangulation was employed by interviewing three professional coaches as additional sources of information. This resulted in a rich set of data, which we transcribed and analysed using Thematic Analysis (TA). The developed themes regarding robot features, form, behaviours, robot-led well-being practices, and the advantages and disadvantages these could provide, were compiled and are discussed in detail. We present this data together with tabulated quotes from the participants and coaches, to pave the way towards designing robot coaches that can provide supportive interventions to improve the mental health and well-being of their users.

Proceedings ArticleDOI
08 Aug 2021
TL;DR: In this article, the authors report a user study on how people perceive different kinds of robot encounters; the results can contribute to improving a future robot's control to better suit users from different generations.
Abstract: Introducing robots in healthcare facilities and homes may reduce the workload of healthcare personnel while providing users with better and more available services. It may also contribute to interactions that are engaging and safe against transmitting contagious diseases for senior adults. A major challenge in this regard is to design and adapt the robot's behavior based on the requirements and preferences of different users. In this paper, we report a user study on how people perceive different kinds of robot encounters. We had two groups of target users: one with senior residents at a care center and another with young students at a university, who would be representative of the visitors and care volunteers in the facility. Several common scenarios were created to evaluate the participants' perception of the robot's behavior. Two sets of questionnaires were used to collect feedback on the behavior and the users' general perception of the robot's different styles of behavior. An exploratory analysis of the effect of age shows that the age of the targeted user group should be considered one of the main criteria when designing the social parameters of a care robot, as seniors preferred slower speed and closer distance to the robot. The results can contribute to improving a future robot's control to better suit users from different generations.

Proceedings ArticleDOI
08 Aug 2021
TL;DR: In this paper, the authors present a theory-driven approach to study the situated trust in human-robot interaction (HRI) by focusing on the experience of vulnerability, which is useful for guiding empirical investigations.
Abstract: Ensuring trust in human-robot interaction (HRI) is considered essential for widespread use of robots in society and everyday life. While the majority of studies use game-based and high-risk scenarios with low familiarity to gain a deeper understanding of human trust in robots, scenarios with more subtle trust violations that could happen in everyday life situations are less often considered. In this paper, we present a theory-driven approach to studying the situated trust in HRI by focusing on the experience of vulnerability. Focusing on vulnerability not only challenges previous work on trust in HRI from a theoretical perspective, but is also useful for guiding empirical investigations. As a first proof-of-concept study, we conducted an interactive online survey that demonstrates that it is possible to measure human experience of vulnerability in the ordinary, mundane, and familiar situation of clothes shopping. We conclude that the inclusion of subtle trust violation scenarios occurring in the everyday life situation of clothes shopping enables a better understanding of situated trust in HRI, which is of special importance when considering more near-future applications of robots.

Proceedings ArticleDOI
08 Aug 2021
TL;DR: In this paper, the authors introduce the control architecture of a platform aimed at promoting good mental health for workers interacting with collaborative robots (cobots) in order to improve the operator's quality of experience and level of engagement and to minimize his/her psychological strain.
Abstract: This paper introduces the control architecture of a platform aimed at promoting good mental health for workers interacting with collaborative robots (cobots). The platform's aim is to render industrial production cells capable of automatically adapting their behavior in order to improve the operator's quality of experience and level of engagement and to minimize his/her psychological strain. In order to achieve such a goal, an extremely rich and complex framework is required. Starting from the identification of the parameters that could influence the collaboration experience, the envisioned human-driven control structure is presented together with a detailed description of the components required to implement such an automated system. Future work will include proper tuning of the control parameters in dedicated experimental sessions, together with the definition of organizational and technical guidelines for the design of a mental-health-friendly cobot-based manufacturing workplace.

Proceedings ArticleDOI
08 Aug 2021
TL;DR: In this paper, two interfaces are proposed: a 2D display-and-mouse interface where points are placed by clicking on an image of the cloth, and a 3D Augmented Reality interface where the chosen points are placed by hand gestures.
Abstract: An appropriate user interface to collect human demonstration data for deformable object manipulation has been mostly overlooked in the literature. We present an interaction design for demonstrating cloth folding to robots. Users choose pick and place points on the cloth and can preview a visualization of a simulated cloth before real-robot execution. Two interfaces are proposed: a 2D display-and-mouse interface where points are placed by clicking on an image of the cloth, and a 3D Augmented Reality interface where the chosen points are placed by hand gestures. We conducted a user study with 18 participants, in which each user completed two sequential folds to achieve a cloth goal shape. Results show that while both interfaces were acceptable, the 3D interface was more suitable for understanding the task, and the 2D interface was suitable for repetition. The results also show that fold previews improve three key metrics: task efficiency, the ability to predict the final shape of the cloth, and overall user satisfaction.

Proceedings ArticleDOI
08 Aug 2021
TL;DR: This article reports a systematic quantitative meta-analysis of 13 studies whose results support the assertion that matching personalities between humans and robots promotes robot acceptance.
Abstract: Collaborative work between humans and robots holds great potential, but such potential is diminished should humans fail to accept robots as collaborators. One solution is to design robots to have a similar personality to their human collaborators. Typically, this is done by matching the human's and robot's personality using one or more of the Big Five Personality (BFI) traits. The results of this matching, however, have been mixed. This makes it difficult to know whether personality similarity promotes robot acceptance. To address this shortcoming, we conducted a systematic quantitative meta-analysis of 13 studies. Overall, the results support the assertion that matching personalities between humans and robots promotes robot acceptance.
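
The pooling step at the heart of such a meta-analysis is usually an inverse-variance weighted average with a between-study variance term. The sketch below implements the standard DerSimonian–Laird random-effects estimator on hypothetical effect sizes; the numbers are illustrative, not the paper's data.

```python
# DerSimonian-Laird random-effects pooling on hypothetical effect sizes.
import numpy as np

effects = np.array([0.30, 0.12, 0.45, 0.05, 0.28])    # per-study effects
variances = np.array([0.02, 0.03, 0.01, 0.04, 0.02])  # per-study variances

w = 1.0 / variances                          # fixed-effect weights
fixed = np.sum(w * effects) / np.sum(w)
q = np.sum(w * (effects - fixed) ** 2)       # heterogeneity statistic Q
df = len(effects) - 1
c = np.sum(w) - np.sum(w**2) / np.sum(w)
tau2 = max(0.0, (q - df) / c)                # between-study variance

w_re = 1.0 / (variances + tau2)              # random-effects weights
pooled = np.sum(w_re * effects) / np.sum(w_re)
se = np.sqrt(1.0 / np.sum(w_re))
print(f"pooled effect = {pooled:.3f} ± {1.96 * se:.3f}")
```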

Proceedings ArticleDOI
08 Aug 2021
TL;DR: In this article, a human motion prediction model for handover operations is proposed, which takes into account the position of the robot's End Effector (REE) and the phase in the handover operation to predict future human poses.
Abstract: This work proposes a human motion prediction model for handover operations. We use the different phases of the handover operation to improve the human motion predictions. Our attention-based deep learning model takes into account the position of the robot's End Effector (REE) and the phase of the handover operation to predict future human poses. Our model outputs a distribution of possible positions rather than one deterministic position, a key feature for allowing robots to collaborate with humans. We provide results for the human upper body and the human right hand, also referred to as the Human End Effector (HEE). The model has been trained and evaluated with a dataset created using human volunteers and an anthropomorphic robot, simulating handover operations where the robot is the giver and the human the receiver. For each operation, the human skeleton is obtained with an Intel RealSense D435i camera attached inside the robot's head. The results show a clear improvement in the prediction of the human's right hand and 3D body compared with other methods.
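
A distribution-valued output of this kind is commonly realized with a Gaussian head trained by negative log-likelihood. The snippet below is a generic sketch of that idea with assumed dimensions; it does not reproduce the paper's actual architecture.

```python
# Sketch of a Gaussian output head: the network predicts a mean and variance
# over future 3-D hand positions instead of a single point estimate.
import torch
import torch.nn as nn

class GaussianHead(nn.Module):
    def __init__(self, hidden=128, out_dim=3):    # 3-D hand position
        super().__init__()
        self.mu = nn.Linear(hidden, out_dim)
        self.log_var = nn.Linear(hidden, out_dim)

    def forward(self, h):
        return self.mu(h), self.log_var(h).exp()

def nll(mu, var, target):
    """Gaussian negative log-likelihood used to train the head."""
    return 0.5 * (torch.log(var) + (target - mu) ** 2 / var).sum(-1).mean()

head = GaussianHead()
mu, var = head(torch.randn(16, 128))              # encoder features assumed
loss = nll(mu, var, torch.randn(16, 3))
```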

Proceedings ArticleDOI
08 Aug 2021
TL;DR: In this article, a trust-based affective computational account of scaffolding while performing a sequential visual recall task is presented, where a humanoid robot is used as a caregiver robot to guide a perceptually limited infant robot that performs the same task.
Abstract: Forming trust in a biological or artificial interaction partner that provides reliable strategies, and employing the learned strategies to scaffold another agent, are critical problems that are often addressed separately in human-robot and robot-robot interaction studies. In this paper, we provide a unified approach to address these issues in robot-robot interaction settings. To be concrete, we present a trust-based affective computational account of scaffolding while performing a sequential visual recall task. To that end, we endow the Pepper humanoid robot with cognitive modules of auto-associative memory and internal reward generation to implement the trust model. The former module is an instance of a cognitive function with an associated neural cost determining the cognitive load of performing visual memory recall. The latter module uses this cost to generate an internal reward signal to facilitate neural cost-based reinforcement learning (RL) in an interactive scenario involving online instructors with different guiding strategies: reliable, less-reliable, and random. These cognitive modules allow the Pepper robot to assess the instructors based on the average cumulative reward it can collect and to choose the instructor that helps reduce its cognitive load most as the trustworthy one. After determining the trustworthy instructor, the Pepper robot is recruited to be a caregiver robot to guide a perceptually limited infant robot (i.e., the Nao robot) that performs the same task. In this setting, we equip the Pepper robot with a simple theory of mind module that learns the state-action-reward associations by observing the infant robot's behavior and guides the learning of the infant robot, similar to what it experienced during the online agent-robot interactions. The experiment results on this robot-robot interaction scenario indicate that the Pepper robot as a caregiver leverages the decision-making policies, obtained by interacting with the trustworthy instructor, to guide the infant robot to perform the same task efficiently. Overall, this study suggests how robot trust can be grounded in human-robot or robot-robot interactions based on cognitive load, and be used as a mechanism to choose the right scaffolding agent for effective knowledge transfer.
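
The instructor-selection idea can be sketched very simply: each instructor's guidance induces a cognitive load, an internal reward is derived from that load, and the robot trusts whichever instructor yields the highest average cumulative reward. The load values and reward mapping below are hypothetical stand-ins, not the paper's cognitive model.

```python
# Toy sketch of trust formation via internal, cognitive-load-based reward.
import numpy as np

rng = np.random.default_rng(1)
# Assumed mean cognitive load (e.g., neural cost of memory recall) that each
# instructor's guidance induces per trial: reliable < less-reliable < random.
mean_load = {"reliable": 0.2, "less_reliable": 0.5, "random": 0.8}

def run_trials(mean, n=50):
    load = np.clip(rng.normal(mean, 0.1, size=n), 0.0, 1.0)
    return 1.0 - load            # internal reward: low load -> high reward

avg_reward = {name: run_trials(m).mean() for name, m in mean_load.items()}
trusted = max(avg_reward, key=avg_reward.get)
print(avg_reward, "-> trusted instructor:", trusted)
```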

Proceedings ArticleDOI
08 Aug 2021
TL;DR: In this paper, a variable admittance control scheme is proposed, where the damping is adjusted based on the power transmitted from the human to the robot, with the aim of minimizing the energy injected by the human while also allowing her/him to have control over the task.
Abstract: In this work, the problem of cooperative human-robot manipulation of an object with large inertia is addressed, considering the availability of a kinematically controlled industrial robot. In particular, a variable admittance control scheme is proposed, where the damping is adjusted based on the power transmitted from the human to the robot, with the aim of minimizing the energy injected by the human while also allowing her/him to have control over the task. The proposed approach is evaluated via a human-in-the-loop setup and compared to a generic variable damping state-of-the-art method. The proposed approach is shown to achieve significant reduction of the human’s effort and minimization of unintended overshoots and oscillations, which may deteriorate the user’s feeling of control over the task.
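
The damping law described above can be sketched as a one-dimensional admittance loop in which the transmitted power p = f_h * v lowers the damping while the human actively drives the object. The gains, bounds, and adaptation rule below are hypothetical stand-ins, not the paper's tuned scheme.

```python
# Sketch of power-based variable admittance: render M*dv/dt + d(t)*v = f_h,
# dropping damping d(t) when the human injects power so the load feels light,
# and restoring it when the human releases or brakes.
import numpy as np

M = 10.0                      # virtual mass [kg]
d_min, d_max = 5.0, 60.0      # damping bounds [N*s/m]
k = 40.0                      # damping drop per watt of transmitted power
dt = 0.002                    # control period [s]

def admittance_step(v, f_h):
    """One tick: adapt damping from transmitted power, integrate the model."""
    p = max(f_h * v, 0.0)                        # power human -> robot
    d = np.clip(d_max - k * p, d_min, d_max)     # more power -> less damping
    a = (f_h - d * v) / M
    return v + a * dt, d                         # new velocity ref, damping

v = 0.0
for f_h in [20.0] * 500 + [0.0] * 500:           # push for 1 s, then release
    v, d = admittance_step(v, f_h)
print(f"velocity after release: {v:.3f} m/s, damping: {d:.1f} N*s/m")
```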

Proceedings ArticleDOI
08 Aug 2021
TL;DR: In this paper, the authors investigated the impact of different levels of error severity on trust and found that small errors significantly affected trust and big errors had an even more adverse impact, while the teachers may ignore small mistakes made by the robot, if it shows significant improvements while practising the task.
Abstract: It is anticipated that intelligent robots will gain the ability to learn from humans how to perform tasks, and will assist them in many contexts such as with household chores in the near future; therefore, people should have the confidence to trust these robots after teaching them how to do a task. Like most machines, robots may sometimes behave in an erroneous manner and such errors can easily undermine trust in the robots, depending on their severity. Nevertheless, when a robot has been taught a task by humans, we hypothesize that the teachers may ignore small mistakes made by the robot, if it shows significant improvements while practising the task. We first conducted a study with 173 participants in which the perceived severity of different robot errors in a household chore (preparing food) was investigated. We then used the results to create scenarios of different levels of severity and conducted a second study with 138 participants to investigate the impact of error severity on trust. Participants remotely taught their preferences in food preparation tasks to robots. Over several practice rounds, robots’ behaviour improved, but the robots made either (a) no errors, (b) a small, or (c) a big error at the end, depending on the experimental condition. Small errors significantly affected trust and big errors had an even more adverse impact. Trust in the robot was found to be correlated with personality traits of the participants as well as with their disposition to trust other people.

Proceedings ArticleDOI
08 Aug 2021
TL;DR: In this paper, mobile phones are used as a reliable low-cost communication interface, as opposed to the use of specific gadgets or speech and gesture recognition techniques that are highly prone to failure in the presence of noise or occlusions.
Abstract: Long-distance human-robot collaborative tasks require robust forms of knowledge-sharing among agents in order to optimize the performance of the task. In this paper, we propose to take advantage of the proliferation of mobile phones and use them as a reliable low-cost communication interface, as opposed to the use of specific gadgets or speech and gesture recognition techniques that are highly prone to failure in the presence of noise or occlusions. Our interface is focused on search tasks, and it allows the user to share with other agents real-time information such as their position, their intention, or even what they would like the other agents to do. To test its acceptability, a user study was conducted with 20 volunteers in a human-human scenario. A second round of experiments with another 30 volunteers was conducted to test different ways to encourage user interaction with our interface. Finally, real-life experiments were conducted with a robot to apply the learned knowledge to the desired scenario. We found a statistically significant improvement in the amount of information exchanged between agents.

Proceedings ArticleDOI
08 Aug 2021
TL;DR: In this article, the authors present two Human-Robot Interfaces (HRI) and conduct user studies on an SRM in object handling tasks and assess the performance of teleoperators using spatial input devices and augmented reality-based input devices.
Abstract: Soft robotic manipulators (SRM) are biologically inspired by animal appendages such as elephant trunks and octopus arms. In contrast to traditional rigid robotic manipulators, SRMs are made of flexible material and generate motion through structural deformation. Thus, they more easily adapt to unstructured environments. Teleoperated SRMs can be used in spaces harmful or impractical for humans (e.g., nuclear radiation zones or minimally invasive surgical sites). Limited research is available on the factors that affect human performance during the teleoperation of SRMs. We present two Human-Robot Interfaces (HRI), conduct user studies on an SRM in object handling tasks, and assess the performance of teleoperators using spatial input devices and augmented reality-based input devices. The user interaction is quantified for two types of controllers (Direct and Indirect) in an immersive pick-and-place operation. A System Usability Scale (SUS) questionnaire was administered to assess the usability of each HRI. Results suggest that users perform more effectively and make fewer errors using the Indirect Control HRI, and participants rated the Indirect Control HRI as more usable regardless of the hardware device.

Proceedings ArticleDOI
08 Aug 2021
TL;DR: In this article, an interactive robotic platform for teleoperated grasping as an educational tool is presented, which uses the Leap Motion optical gesture tracker to simultaneously control each of the four degrees of freedom (DOF) of a robotic hand and the six-DOF tool pose of a serial manipulator.
Abstract: We present an interactive robotic platform for teleoperated grasping as an educational tool. With this open-source robot application, we engage children and young adults with robotics and make computer science education more vivid. Our teleoperation method uses the Leap Motion optical gesture tracker to simultaneously control each of the four degrees-of-freedom (DOF) of a robotic hand and the six-DOF tool pose of a serial manipulator. A control algorithm is developed to relate the operator's palm pose to the manipulator's tool pose. The operator commands the robotic hand with relative finger movements of the thumb, index, and middle finger. We present preliminary results from a pick-and-place demonstration showcased at a public science fair held at Imperial College London.
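
Palm-to-tool mappings of this kind are often implemented as a relative (clutched) pose transform: latch both poses at engagement, then replay scaled palm displacements on the tool. The sketch below illustrates this generic scheme with scipy rotations; the frame conventions and scaling are assumptions, not the authors' control algorithm.

```python
# Generic relative pose mapping from an operator's palm to a robot tool.
import numpy as np
from scipy.spatial.transform import Rotation as R

class RelativePoseMapper:
    def __init__(self, palm_pos0, palm_rot0, tool_pos0, tool_rot0, scale=1.0):
        self.p0, self.r0 = palm_pos0, palm_rot0      # latched palm pose
        self.tp0, self.tr0 = tool_pos0, tool_rot0    # latched tool pose
        self.scale = scale

    def tool_target(self, palm_pos, palm_rot):
        dp = self.scale * (palm_pos - self.p0)       # translational offset
        dr = palm_rot * self.r0.inv()                # rotational offset
        return self.tp0 + dp, dr * self.tr0

mapper = RelativePoseMapper(np.zeros(3), R.identity(),
                            np.array([0.4, 0.0, 0.3]), R.identity())
pos, rot = mapper.tool_target(np.array([0.05, 0.0, 0.02]),
                              R.from_euler("z", 15, degrees=True))
print(pos, rot.as_euler("xyz", degrees=True))
```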