
Showing papers presented at "Robot and Human Interactive Communication" in 2017


Proceedings ArticleDOI
01 Aug 2017
TL;DR: NICO (Neuro-Inspired COmpanion), a humanoid developmental robot, is developed and introduced; it fills a gap between necessary sensing and interaction capabilities and flexible design, making it a novel neuro-cognitive research platform for embodied sensorimotor computational and cognitive models in the context of multimodal interaction.
Abstract: Interdisciplinary research, drawing from robotics, artificial intelligence, neuroscience, psychology, and cognitive science, is a cornerstone to advance the state-of-the-art in multimodal human-robot interaction and neuro-cognitive modeling. Research on neuro-cognitive models benefits from the embodiment of these models into physical, humanoid agents that possess complex, human-like sensorimotor capabilities for multimodal interaction with the real world. For this purpose, we develop and introduce NICO (Neuro-Inspired COmpanion), a humanoid developmental robot that fills a gap between necessary sensing and interaction capabilities and flexible design. This combination makes it a novel neuro-cognitive research platform for embodied sensorimotor computational and cognitive models in the context of multimodal interaction as shown in our results.

66 citations


Proceedings ArticleDOI
30 Aug 2017
TL;DR: It is claimed that personal assistive robots, like healthcare providers, should be culturally competent: aware of general cultural characteristics and of the different forms they take in different individuals, and sensitive to cultural differences while perceiving, reasoning, and acting.
Abstract: Cultural competence is a well known requirement for an effective healthcare, widely investigated in the nursing literature. We claim that personal assistive robots should likewise be culturally competent, aware of general cultural characteristics and of the different forms they take in different individuals, and sensitive to cultural differences while perceiving, reasoning, and acting. Drawing inspiration from existing guidelines for culturally competent healthcare and the state-of-the-art in culturally competent robotics, we identify the key robot capabilities which enable culturally competent behaviours and discuss methodologies for their development and evaluation.

55 citations


Proceedings ArticleDOI
01 Aug 2017
TL;DR: An approach based on Reinforcement Learning is presented, which gets its reward directly from social signals in real-time during the interaction, to quickly learn about and dynamically address individual human preferences.
Abstract: When looking at Socially Interactive Robots, adaptation to the user's preferences plays an important role in today's Human-Robot Interaction to keep interaction interesting and engaging over a long period of time. Findings indicate an increase in user engagement for robots with adaptive behavior and personality, but also that it depends on the task context whether a similar or opposing robot personality is preferred. We present an approach based on Reinforcement Learning, which gets its reward directly from social signals in real time during the interaction, to quickly learn about and dynamically address individual human preferences. Our scenario involves a Reeti robot in the role of a storyteller talking about the main characters in the novel “Alice's Adventures in Wonderland” by generating descriptions with varying degrees of introversion/extraversion. After initial simulation results, an interactive prototype is presented which allows exploration of the learning process as it adapts to the human interaction partner's engagement.
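
As a concrete sketch of the learning loop described above (not the authors' implementation), the following epsilon-greedy bandit treats the degree of extraversion as the action and a real-time engagement signal as the reward; the action set, learning rate, and the engagement_reward() stub are all illustrative assumptions:

```python
import random

# Sketch: epsilon-greedy adaptation of the robot's extraversion level,
# rewarded by a real-time engagement signal. All values are illustrative.
ACTIONS = [0.0, 0.25, 0.5, 0.75, 1.0]   # introversion (0) .. extraversion (1)
EPSILON, ALPHA = 0.2, 0.1               # exploration rate, learning rate
q_values = {a: 0.0 for a in ACTIONS}

def engagement_reward():
    """Stand-in for a social-signal sensor (gaze, posture, etc.)."""
    return random.uniform(-1.0, 1.0)

def select_action():
    if random.random() < EPSILON:
        return random.choice(ACTIONS)        # explore
    return max(q_values, key=q_values.get)   # exploit current best level

for turn in range(100):
    a = select_action()
    # ...the robot delivers a description with extraversion level `a`...
    r = engagement_reward()                   # reward read during interaction
    q_values[a] += ALPHA * (r - q_values[a])  # incremental value update
```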

52 citations


Proceedings ArticleDOI
08 Dec 2017
TL;DR: Adults are found to have a significantly higher tendency than children to prefer goal-based action explanations; this work is a necessary step in addressing the challenge of providing personalised explanations in human-robot and human-agent interaction.
Abstract: A good explanation takes the user who is receiving the explanation into account. We aim to get a better understanding of user preferences and the differences between children and adults who receive explanations from a robot. We implemented a Nao-robot as a belief-desire-intention (BDI)-based agent and explained its actions using two different explanation styles. Both are based on how humans explain and justify their actions to each other. One explanation style communicates the beliefs that give context information on why the agent performed the action. The other explanation style communicates the goals that inform the user of the agent's desired state when performing the action. We conducted a user study (19 children, 19 adults) in which a Nao-robot performed actions to support type 1 diabetes mellitus management. We investigated the preference of children and adults for goalversus belief-based action explanations. From this, we learned that adults have a significantly higher tendency to prefer goal-based action explanations. This work is a necessary step in addressing the challenge of providing personalised explanations in human-robot and human-agent interaction.
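
To make the two explanation styles concrete, here is a hypothetical sketch of how belief-based and goal-based explanations could be generated from the same BDI action record; the Action fields, templates, and the diabetes example values are illustrative, not the study's implementation:

```python
from dataclasses import dataclass

@dataclass
class Action:
    name: str        # the action the robot performed
    beliefs: list    # context that triggered the action
    goal: str        # desired state the action serves

def belief_explanation(action: Action) -> str:
    return f"I did '{action.name}' because {' and '.join(action.beliefs)}."

def goal_explanation(action: Action) -> str:
    return f"I did '{action.name}' because I want {action.goal}."

suggest = Action(
    name="suggest a glucose check",
    beliefs=["it is after lunch", "no measurement was logged today"],
    goal="your blood sugar to stay in a safe range",
)
print(belief_explanation(suggest))  # belief-based style
print(goal_explanation(suggest))    # goal-based style (preferred by adults)
```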

51 citations


Proceedings ArticleDOI
01 Aug 2017
TL;DR: Reports on the current version of the ontology for autonomous robotics and on its first implementation, successfully validated in a human-robot interaction scenario, demonstrating the developed ontology's strengths, which include semantic interoperability and the capability to relate ontologies from different fields for knowledge sharing and interactions.
Abstract: Creating a standard for knowledge representation and reasoning in autonomous robotics is an urgent task given recent advances in robotics as well as predictions about the insertion of robots into human daily life. This will impact the way information is exchanged between multiple robots or between robots and humans, and how they can all understand it without ambiguity. Indeed, Human-Robot Interaction (HRI) represents the interaction of at least two cognition models (Human and Robot). Such interaction informs task composition, task assignment, communication, cooperation, and coordination in a dynamic environment, requiring a flexible representation. Hence, this paper presents the IEEE RAS Autonomous Robotics (AuR) Study Group, a spin-off of the IEEE Ontologies for Robotics and Automation (ORA) Working Group, and its ongoing work to develop the first IEEE-RAS ontology standard for autonomous robotics. In particular, this paper reports on the current version of the ontology for autonomous robotics as well as on its first implementation, successfully validated in a human-robot interaction scenario, demonstrating the developed ontology's strengths, which include semantic interoperability and the capability to relate ontologies from different fields for knowledge sharing and interactions.

51 citations


Proceedings ArticleDOI
01 Aug 2017
TL;DR: This study puts participants into two competing teams, each consisting of two humans and two robots, to examine how people behave toward others depending on Group (ingroup, outgroup) and Agent (human, robot) variables.
Abstract: When it's between a robot on your team and a human member of a competing team, who will you favor? Past research indicates that people favor and behave more morally toward ingroup than outgroup members. Conversely, people typically indicate that they have more moral responsibilities toward humans than nonhumans. This study puts participants into two competing teams, each consisting of two humans and two robots, to examine how people behave toward others depending on Group (ingroup, outgroup) and Agent (human, robot) variables. Measures of behavioral aggression used in previous studies (i.e., noise blasts) and reported liking and anthropomorphism evaluations of humans and robots indicated that participants favored the ingroup over the outgroup, and humans over robots. Group had a greater effect than Agent, so participants preferred ingroup robots to outgroup humans.

47 citations


Proceedings ArticleDOI
01 Aug 2017
TL;DR: Sheds light on the relevance of the uncanny valley “in the wild” and on the unabashed sexualization of female-gendered robots, helping situate both with respect to other design challenges for HRI.
Abstract: Towards understanding the public's perception of humanlike robots, we examined commentary on 24 YouTube videos depicting social robots ranging in human similarity — from Honda's Asimo to Hiroshi Ishiguro's Geminoids. In particular, we investigated how people have responded to the emergence of highly humanlike robots (e.g., Bina48) in contrast to those with more prototypically-“robotic” appearances (e.g., Asimo), coding the frequency at which the uncanny valley versus fears of replacement and/or a “technology takeover” arise in online discourse based on the robot's appearance. Here we found that, consistent with Masahiro Mori's theory of the uncanny valley, people's commentary reflected an aversion to highly humanlike robots. Correspondingly, the frequency of uncanny valley-related commentary was significantly higher in response to highly humanlike robots relative to those of more prototypical appearances. Independent of the robots' human similarity, we further observed a moderate correlation to exist between people's explicit fears of a “technology takeover” and their emotional responding towards robots. Finally, through the course of our investigation, we encountered a third and rather disturbing trend — namely, the unabashed sexualization of female-gendered robots. In exploring the frequency at which this sexualization manifests in the online commentary, we found it to exceed that of both the uncanny valley and fears of robot sentience/replacement combined. In sum, these findings help to shed light on the relevance of the uncanny valley “in the wild” and further, they help situate it with respect to other design challenges for HRI.

45 citations


Proceedings ArticleDOI
01 Aug 2017
TL;DR: A huge, teddy-bear-like robot that can give reciprocal hugs to people is developed, and its effects on behavior are experimentally investigated, showing that those who were hugged by the robot donated more money than those who only hugged the robot, i.e., without a reciprocated hug.
Abstract: This paper presents the effects of being hugged by a robot to encourage prosocial behaviors. In human-human interaction, touches including hugs are essential for communication with others. Touches also show interesting effects, including the “Midas touch,” which encourages prosocial behaviors from the people who have been touched. Previous research demonstrated that people who touched a robot experienced positive impressions of it without clarifying whether being hugged by a robot causes the Midas touch effect, i.e., positively influences engagement in prosocial behaviors. We developed a huge, teddy-bear-like robot that can give reciprocal hugs to people and experimentally investigated its effects on their behaviors. In the experiment, a robot first asked participants to give a hug and then asked them to make charitable donations in two conditions: with or without a reciprocated hug. Our experiment results with 38 participants showed that those who were hugged by a robot donated more money than those who only hugged the robot, i.e., without a reciprocated hug.

43 citations


Proceedings ArticleDOI
01 Aug 2017
TL;DR: It is demonstrated that providing even a simple, abstracted real-time visualisation of a robot's AI can radically improve the transparency of machine cognition.
Abstract: Deciphering the behaviour of intelligent others is a fundamental characteristic of our own intelligence. As we interact with complex intelligent artefacts, humans inevitably construct mental models to understand and predict their behaviour. If these models are incorrect or inadequate, we run the risk of self-deception or even harm. Here we demonstrate that providing even a simple, abstracted real-time visualisation of a robot's AI can radically improve the transparency of machine cognition. Findings from both an online experiment using a video recording of a robot, and from direct observation of a robot, show substantial improvements in observers' understanding of the robot's behaviour. Unexpectedly, this improved understanding was correlated in one condition with an increased perception that the robot was ‘thinking’, but in no conditions was the robot's assessed intelligence impacted. In addition to our results, we describe our approach, tools used, implications, and potential future research directions.

41 citations


Proceedings ArticleDOI
01 Aug 2017
TL;DR: Two ways of solving the fast and reliable collision-detection problem are proposed: a classical analytical approach and a learning approach implemented with a neural network.
Abstract: The high dynamic capabilities of industrial robots make them dangerous for humans and the environment. To reduce this danger and advance collaboration between human and manipulator, a fast and reliable collision detection algorithm is required. To this end, we present an approach that detects a collision, localizes the action point, and classifies the collision's nature. Internal joint torque and encoder measurements were used to determine potential collisions with the robot links. This work proposes two ways of solving the problem: a classical analytical approach and a learning approach implemented with a neural network. The suggested algorithms were evaluated on the industrial robotic arm KUKA LBR iiwa 14 R820; ground-truth information on the contact nature and its location was obtained with a 3D LIDAR and a camera.
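
A minimal sketch of the classical analytical route, assuming a torque-residual test: measured joint torques are compared against torques predicted by the robot's dynamic model, and a collision is flagged when the residual exceeds a per-joint threshold. The model stub, thresholds, and localization rule are illustrative, not tuned for the KUKA arm:

```python
import numpy as np

N_JOINTS = 7
THRESHOLDS = np.full(N_JOINTS, 2.0)   # Nm; per-joint detection thresholds

def predicted_torque(q, dq, ddq):
    """Stand-in for the manipulator's inverse dynamics model."""
    return np.zeros(N_JOINTS)

def detect_collision(q, dq, ddq, tau_measured):
    residual = tau_measured - predicted_torque(q, dq, ddq)
    hit = np.abs(residual) > THRESHOLDS
    if hit.any():
        # The most distal joint with a large residual bounds the
        # contact location along the kinematic chain.
        return True, int(np.flatnonzero(hit).max())
    return False, None

q = dq = ddq = np.zeros(N_JOINTS)
tau = np.array([0.1, 0.3, 5.2, 0.2, 0.1, 0.0, 0.0])  # spike at joint index 2
print(detect_collision(q, dq, ddq, tau))              # (True, 2)
```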

40 citations


Proceedings ArticleDOI
01 Aug 2017
TL;DR: An algorithm enabling a robot to act as the moderator in a group interaction centered around a tablet-based assembly game; the “performance equalizing” objective improved task performance but reduced group cohesion, while the “performance reinforcing” objective improved group cohesion but reduced task performance.
Abstract: This paper presents an algorithm for enabling a robot to act as the moderator in a group interaction centered around a tablet-based assembly game. The algorithm uses one of two different objective functions: one intended to be “performance equalizing”, wherein the robot attempts to equalize scoring among users, and another intended to be “performance reinforcing”, wherein the robot attempts to help the group score as many points as possible. In an evaluation study with ten groups of three participants, we found that the “performance equalizing” algorithm improved task performance and reduced group cohesion, while the “performance reinforcing” algorithm improved group cohesion and reduced task performance.
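
The two objective functions reduce to a simple decision of whom the robot should assist next; a hypothetical sketch (the scoring model and names are illustrative):

```python
def equalizing_target(scores: dict) -> str:
    """'Performance equalizing': assist the lowest scorer to level the group."""
    return min(scores, key=scores.get)

def reinforcing_target(scores: dict) -> str:
    """'Performance reinforcing': assist the strongest scorer to maximize
    the group's total points."""
    return max(scores, key=scores.get)

scores = {"anna": 12, "ben": 7, "chris": 15}
print(equalizing_target(scores))   # ben
print(reinforcing_target(scores))  # chris
```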

Proceedings ArticleDOI
01 Aug 2017
TL;DR: A survey at the world's first robot hotel recently opened in Japan, which already uses robots for most of the work, discovered that human labor is divided into small tasks, and that robot actions affect human emotional control.
Abstract: Due to the rise of artificial intelligence (AI) technology, discussions are progressing on how robots could replace human labor. Conventional surveys have suggested that human labor is expected to gradually be replaced as tasks become automated. We conducted a survey at the world's first robot hotel, recently opened in Japan — called the Henn-na hotel (“strange/change hotel”) in Japanese — which already uses robots for most of the work. We discovered that human labor is divided into small tasks, and that robot actions affect human emotional control. However, the hotel not only divides human work but also reconstructs it from tasks. Moreover, the purpose of reconstruction is not simply the replacement of work. Such task modification is often observed in human-system interactions. It is an extremely creative process of labor emerging in this area.

Proceedings ArticleDOI
01 Aug 2017
TL;DR: The design of a wearable robotic forearm that provides the user with an assistive third hand is presented, along with a study of interaction scenarios for the design, and three usability studies are described.
Abstract: This paper presents the design of a wearable robotic forearm that provides the user with an assistive third hand, along with a study of interaction scenarios for the design. Technical advances in sensors, actuators, and materials have made wearable robots feasible for personal use, but the interaction with such robots has not been sufficiently studied. We describe the development of a working prototype along with three usability studies. In an online survey we find that respondents presented with images and descriptions of the device see its use mainly as a functional tool in professional and military contexts. A subsequent contextual inquiry among building construction workers reveals three themes for user needs: extending a worker's reach, enhancing their safety and comfort through bracing and stabilization, and reducing their cognitive load in repetitive tasks. A subsequent laboratory study in which participants wear a working prototype of the robot finds that they prioritize lowered weight and enhanced dexterity, seek adjustable autonomy and transparency of the robot's intent, and prefer a robot that looks distinct from a human arm. These studies inform design implications for further development of wearable robotic arms.

Proceedings ArticleDOI
01 Jan 2017
TL;DR: Results suggest that efficiency is not the most important aspect of performance for users; a personable, expressive robot was found to be preferable over a more efficient one, despite a considerable trade off in time taken to perform the task.
Abstract: Strategies are necessary to mitigate the impact of unexpected behavior in collaborative robotics, and research to develop solutions is lacking. Our aim here was to explore the benefits of an affective interaction, as opposed to a more efficient, less error-prone but non-communicative one. The experiment took the form of an omelet-making task, with a wide range of participants interacting directly with BERT2, a humanoid robot assistant. Having significant implications for design, results suggest that efficiency is not the most important aspect of performance for users; a personable, expressive robot was found to be preferable over a more efficient one, despite a considerable trade-off in time taken to perform the task. Our findings also suggest that a robot exhibiting human-like characteristics may make users reluctant to ‘hurt its feelings’; they may even lie in order to avoid this.

Proceedings ArticleDOI
01 Sep 2017
TL;DR: Results show that the proposed framework can correctly recognise human facial expressions with potential to be used in human-robot interaction scenarios.
Abstract: Affective facial expression is a key feature of nonverbal behaviour and is considered a symptom of an internal emotional state. Emotion recognition plays an important role in social communication: human-to-human and also human-to-robot. Taking this as inspiration, this work aims at the development of a framework able to recognise human emotions through facial expression for human-robot interaction. Features based on facial landmark distances and angles are extracted to feed a dynamic probabilistic classification framework. The public online dataset Karolinska Directed Emotional Faces (KDEF) [1] is used to learn seven different emotions (i.e., angry, fearful, disgusted, happy, sad, surprised, and neutral) performed by seventy subjects. A new dataset was created in order to record stimulated affect, in which participants watched video sessions to awaken their emotions, unlike the KDEF dataset, where participants are actors (i.e., performing expressions when asked to). Offline and on-the-fly tests were carried out: leave-one-out cross-validation tests on the datasets and on-the-fly tests with human-robot interactions. Results show that the proposed framework can correctly recognise human facial expressions, with potential to be used in human-robot interaction scenarios.
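
A rough sketch of the geometric features described, assuming 2D landmark coordinates: pairwise distances and angles computed from a handful of facial landmarks, to be fed to the probabilistic classifier. The landmark choices and feature pairs are illustrative:

```python
import math

def distance(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def angle(p, q, r):
    """Interior angle at q (degrees) between segments q->p and q->r."""
    a = (math.atan2(p[1] - q[1], p[0] - q[0])
         - math.atan2(r[1] - q[1], r[0] - q[0]))
    a = abs(math.degrees(a)) % 360
    return min(a, 360 - a)

def features(landmarks):
    mouth_l, mouth_r, brow, eye, nose = landmarks
    return [
        distance(mouth_l, mouth_r),     # mouth width (smile cue)
        distance(brow, eye),            # brow-to-eye gap (surprise/fear cue)
        angle(mouth_l, nose, mouth_r),  # mouth opening angle at the nose
    ]

lm = [(30, 80), (70, 80), (40, 20), (40, 35), (50, 60)]
print(features(lm))  # feature vector for the probabilistic classifier
```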

Proceedings ArticleDOI
01 Aug 2017
TL;DR: Field sessions were conducted in which four designers operated a telepresence robot in a real K-12 classroom; key research challenges are identified and design insights presented, meant to inform the HRI community in particular and robot designers in general.
Abstract: Telepresence robots have the potential to improve access to K-12 education for students who are unable to attend school for a variety of reasons. Since previous telepresence research has largely focused on the needs of adult users in workplace settings, it is unknown what challenges must be addressed for these robots to be effective tools in classrooms. In this paper, we seek to better understand how a telepresence robot should function in the classroom when operated by a remote student. Toward this goal, we conducted field sessions in which four designers operated a telepresence robot in a real K-12 classroom. Using the results, we identify key research challenges and present design insights meant to inform the HRI community in particular and robot designers in general.

Proceedings ArticleDOI
01 Aug 2017
TL;DR: Interviews with users provide insight into users' priorities for in-home deployment of socially assistive robots, as well as preferences about the activities, appearance, and behavior of the robot.
Abstract: We present a pilot study of a socially assistive robot interacting with intergenerational groups. The system is designed to improve the social well-being of older adults by supporting interactions within families. Six intergenerational family groups interacted with the robot in four tablet-based games. Users' behavior during the sessions was used to compare the games and understand how members of different generations and different families interact with the robot. Interviews with users provide insight into users' priorities for in-home deployment of socially assistive robots, as well as preferences about the activities, appearance, and behavior of the robot.

Proceedings ArticleDOI
01 Aug 2017
TL;DR: It is found that the presence and quality of sound shape subjective perception of the KUKA arm, and implications for sound design of interactive systems are discussed.
Abstract: How does a robot's sound shape our perception of it? We overlaid sound from high-end and low-end robot arms on videos of the high-end KUKA youBot desktop robotic arm moving a small block in functional (working in isolation) and social (interacting with a human) contexts. The low-end audio was sourced from an inexpensive OWI arm. Crowdsourced participants watched one video each and rated the robot along dimensions of competence, trust, aesthetic, and human-likeness. We found that the presence and quality of sound shape subjective perception of the KUKA arm. The presence of any sound reduced human-likeness and aesthetic ratings; however, the high-end sound was rated higher than no sound on competence in the social context. Overall, the social context increased the perceived competence, trust, aesthetic, and human-likeness of the robot. Based on motor sound's significant, mixed impact on visual perception of robots, we discuss implications for sound design of interactive systems.

Proceedings ArticleDOI
01 Aug 2017
TL;DR: This work presents and evaluates a fully autonomous robotic system using a novel combination of task-based and chat-style dialogue in order to enhance the user experience with human-robot dialogue systems and employs Reinforcement Learning (RL) to create a scalable and extensible approach to combining chat and task- based dialogue for multimodal systems.
Abstract: Most of today's task-based spoken dialogue systems perform poorly if the user goal is not within the system's task domain. On the other hand, chatbots cannot perform tasks involving robot actions but are able to deal with unforeseen user input. To overcome the limitations of each of these separate approaches and be able to exploit their strengths, we present and evaluate a fully autonomous robotic system using a novel combination of task-based and chat-style dialogue in order to enhance the user experience with human-robot dialogue systems. We employ Reinforcement Learning (RL) to create a scalable and extensible approach to combining chat and task-based dialogue for multimodal systems. In an evaluation with real users, the combined system was rated as significantly more “pleasant” and better met the users' expectations in a hybrid task+chat condition, compared to the task-only condition, without suffering any significant loss in task completion.
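
As a rough illustration of the combination strategy (the paper's actual system uses RL over a richer multimodal state), a tabular Q-learning sketch that routes each user turn to either the task handler or the chatbot; the state abstraction, reward, and all names are assumptions:

```python
import random

ACTIONS = ["task", "chat"]        # route the turn to task handler or chatbot
EPS, ALPHA, GAMMA = 0.1, 0.2, 0.9
q_table = {}                      # (state, action) -> value

def state_of(utterance: str) -> str:
    """Crude state abstraction: is the turn within the task domain?"""
    return "in_domain" if "fetch" in utterance else "out_of_domain"

def choose(state: str) -> str:
    if random.random() < EPS:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q_table.get((state, a), 0.0))

def update(s, a, r, s_next):
    best_next = max(q_table.get((s_next, b), 0.0) for b in ACTIONS)
    old = q_table.get((s, a), 0.0)
    q_table[(s, a)] = old + ALPHA * (r + GAMMA * best_next - old)

s = state_of("can you fetch my mug?")
a = choose(s)                      # e.g. "task"
update(s, a, 1.0, "in_domain")     # user satisfaction as the reward
```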

Proceedings ArticleDOI
01 Aug 2017
TL;DR: This paper explores how virtual reality mediates human-robot interactions through two user studies and suggests that VR displays can offer users unique perceptual benefits in simulated robotics applications.
Abstract: Interactions with simulated robots are typically presented on screens. Virtual reality (VR) offers an attractive alternative as it provides visual cues that are more similar to the real world. In this paper, we explore how virtual reality mediates human-robot interactions through two user studies. The first study shows that in situations where perception of the robot is challenging, a VR display provides significantly improved performance on a collaborative task. The second study shows that this improved performance is primarily due to stereo cues. Together, the findings of these studies suggest that VR displays can offer users unique perceptual benefits in simulated robotics applications.

Proceedings ArticleDOI
01 Aug 2017
TL;DR: The results demonstrate that the Forward-Back gesture was the clearest way to communicate the robot's intent, however, they also give evidence that there is a communicative trade-off between clarity and politeness, particularly when direct communication has an association with aggression.
Abstract: How could a rearranging chair convince you to let it by? This paper explores how robotic chairs might negotiate passage in shared spaces with people, using motion as an expressive cue. The user study evaluates the efficacy of three gestures at convincing a busy participant to let it by. This within-participants study consisted of three subsequent trials, in which a person is completing a puzzle on a standing desk and a robotic chair approaches to squeeze by. The measure was whether participants moved out of the robot's way or not. People deferred to the robot in slightly less than half the trials, as they were engaged in the activity. The main finding, however, is that over-communication cues more blocking behaviors, perhaps because it is annoying or because people want chairs to know their place (socially speaking). The Forward-Back gesture that was most effective at negotiating passage in the first trial was least effective in the second and third trials. The more subtle Pause gesture and the slightly loud but less aggressive Side-to-Side gesture were much more likely to be deferred to in later trials, but not a single participant deferred to them in the first trial. The results demonstrate that the Forward-Back gesture was the clearest way to communicate the robot's intent; however, they also give evidence that there is a communicative trade-off between clarity and politeness, particularly when direct communication has an association with aggression. The takeaway for robot design is: be informative initially, but avoid over-communicating later.

Proceedings ArticleDOI
01 Aug 2017
TL;DR: It is suggested that children are more physically and verbally engaged over time when interacting with the physically co-present social robot than with the other two interventions.
Abstract: Children and their parents may undergo challenging experiences when admitted for in-patient care at pediatric hospitals. While most pediatric hospitals make an effort to provide socio-emotional support for patients and their families during care, such as with child life services, gaps still exist between professional resource supply and patient demand. There is an opportunity to apply interactive companion-like technologies as a way to augment and extend professional care teams. To explore the opportunity of social robots to augment child life services, we performed a randomized clinical trial at a local pediatric hospital to investigate how three different companion-like interventions (a plush toy, a virtual character on a screen, and a social robot) affected child-patients' physical activity and social engagement — both linked to positive patient outcomes. We recorded video of patients, families, and a certified child life specialist with each intervention to gather behavioral data. Our results suggest that children are more physically and verbally engaged over time when interacting with the physically co-present social robot than with the other two interventions. A post-study interview with child life specialists reveals their perspective on potential opportunities for social robots (and other companion-like interventions) to assist them with providing education, diversion, and companionship in the pediatric inpatient care context.

Proceedings ArticleDOI
01 Aug 2017
TL;DR: The paper summarizes the results of questionnaire surveys conducted by the author's research group and discusses the future direction of research on cultural differences in the social acceptance of robots.
Abstract: The paper summarizes the results of the questionnaire surveys conducted by the author's research group, covering 1) attitudes toward robots, 2) assumptions and images about robots, 3) anxiety and expectation toward humanoid robots based on the concept of the “Frankenstein Syndrome”, and 4) ethical problems related to robots. The paper then discusses the future direction of research on cultural differences in the social acceptance of robots.

Proceedings ArticleDOI
01 Aug 2017
TL;DR: An autonomous human-robot interaction system is designed and implemented to engage children in performing several physical exercise motions by providing real-time feedback and guidance.
Abstract: The main contribution of this study is the design and implementation of an autonomous human robot interaction system to engage children in performing several physical exercise motions by providing real-time feedback and guidance. The system is designed after several preliminary experiments with children and exercise coaches. In order to test the feasibility and the effectiveness of the exercise system across a variety of performance and evaluation measures, an experimental study was conducted with 19 healthy children. The results of the study validate the effectiveness of the system in motivating and helping children to complete physical exercises. The children engaged in physical exercise throughout the interaction sessions and rated the interaction highly in terms of enjoyableness, and rated the robot exercise coach highly in terms of social attraction, social presence, and companionship via a questionnaire answered after each session.

Proceedings ArticleDOI
28 Aug 2017
TL;DR: The results suggest that the criteria used by the human-robot collaborative planner (safety, time-to-collision, directional costs) are good candidate measures for designing acceptable human-aware navigation planners.
Abstract: This paper focuses on requirements for effective human-robot collaboration in interactive navigation scenarios. We designed several use-cases where humans and a robot had to move in the same environment, resembling canonical path-crossing situations. These use-cases include open as well as constrained spaces. Three different state-of-the-art human-aware navigation planners were used for planning the robot paths in all selected use-cases. We compare the results of simulation experiments with these human-aware planners in terms of the quality of the generated trajectories, together with a discussion of the capabilities and limitations of the planners. The results show that the human-robot collaborative planner [1] performs better in everyday path-crossing configurations. This suggests that the criteria used by the human-robot collaborative planner (safety, time-to-collision, directional costs) are good candidate measures for designing acceptable human-aware navigation planners. Consequently, we analyze the effects of these social criteria and draw perspectives on the future evolution of human-aware navigation planning methods.
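
A hypothetical sketch of how the three named criteria (safety, time-to-collision, directional costs) could be combined into a single social cost for a navigation planner; the weights and geometry model are illustrative, not those of the compared planners:

```python
import math

W_SAFETY, W_TTC, W_DIR = 1.0, 0.5, 0.3   # illustrative criterion weights

def social_cost(robot_pos, robot_vel, human_pos, human_vel):
    rel = (human_pos[0] - robot_pos[0], human_pos[1] - robot_pos[1])
    dist = math.hypot(*rel)
    safety = 1.0 / max(dist, 0.1)                      # penalize proximity

    # Time-to-collision: distance over closing speed (if approaching).
    closing = ((robot_vel[0] - human_vel[0]) * rel[0] +
               (robot_vel[1] - human_vel[1]) * rel[1]) / max(dist, 0.1)
    ttc_cost = closing / max(dist, 0.1) if closing > 0 else 0.0

    # Directional cost: crossing in front of the human costs more.
    to_robot = math.atan2(-rel[1], -rel[0])            # human -> robot bearing
    human_heading = math.atan2(human_vel[1], human_vel[0])
    dir_cost = max(math.cos(to_robot - human_heading), 0.0)

    return W_SAFETY * safety + W_TTC * ttc_cost + W_DIR * dir_cost

print(social_cost((0, 0), (0.5, 0), (1.5, 0), (-0.5, 0)))  # head-on approach
```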

Proceedings ArticleDOI
01 Aug 2017
TL;DR: A deeper look is taken into the security issue in human-robot shared environments by surveying existing work and analyzing security issues in the widely used Robot Operating System (ROS), discussing the different layers of security in a robotic network architecture, and proposing several hierarchical security mechanisms.
Abstract: With the growing proliferation of robots in our society comes the natural concern of security. However, this is an often overlooked issue in robotic systems, as the focus is commonly placed on robot functionality and innovation. Unauthorized access to a robot, or a multi-robot network, may seriously compromise the system, potentially leading to unacceptable consequences, such as endangering humans who share the environment with the robot(s). In this paper, a deeper look is taken into the security issue in human-robot shared environments by surveying existing work and analyzing security issues in the widely used Robot Operating System (ROS), discussing the different layers of security in a robotic network architecture, and proposing several hierarchical security mechanisms, using the STOP project case study in surveillance robotics.
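
As one plausible instance of an application-layer mechanism in such a hierarchy (an assumption for illustration, not the paper's or the STOP project's design), messages on a ROS-style topic can be authenticated with an HMAC so they cannot be spoofed by an unauthorized node:

```python
import hashlib
import hmac
import json

# Illustrative: a shared key provisioned offline; real deployments need
# proper key management and per-node credentials.
SECRET_KEY = b"shared-key-provisioned-offline"

def sign(message: dict) -> dict:
    payload = json.dumps(message, sort_keys=True).encode()
    tag = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return {"payload": message, "tag": tag}

def verify(envelope: dict) -> bool:
    payload = json.dumps(envelope["payload"], sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, envelope["tag"])

msg = sign({"topic": "/cmd_vel", "linear": 0.2, "angular": 0.0})
print(verify(msg))  # True; any tampering with the payload fails verification
```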

Proceedings ArticleDOI
01 Aug 2017
TL;DR: A deep learning approach detects the activities of daily living in a home environment starting from the skeleton data of an RGB-D camera; the CNN-LSTM model outperforms the state of the art on the CAD-60 dataset.
Abstract: In this work, we propose a deep learning approach for the detection of the activities of daily living (ADL) in a home environment starting from the skeleton data of an RGB-D camera. In this context, the combination of ad hoc feature extraction/selection algorithms with supervised classification approaches has reached excellent classification performance in the literature. Since recurrent neural networks (RNNs) can learn temporal dependencies from instances with a periodic pattern, we propose two deep learning architectures based on Long Short-Term Memory (LSTM) networks. The first (MT-LSTM) combines three LSTMs deployed to learn different time-scale dependencies from pre-processed skeleton data. The second (CNN-LSTM) exploits a Convolutional Neural Network (CNN) to automatically extract features from the correlation of the limbs in a skeleton 3D-grid representation. These models are tested on the CAD-60 dataset. Results show that the CNN-LSTM model outperforms the state of the art with 95.4% precision and 94.4% recall.
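
A minimal PyTorch sketch in the spirit of the CNN-LSTM architecture: a small CNN extracts per-frame features from a skeleton grid image, and an LSTM models the temporal dependencies across frames. The layer sizes, grid resolution, and classifier head are assumptions; the paper's exact architecture differs:

```python
import torch
import torch.nn as nn

class CNNLSTM(nn.Module):
    def __init__(self, n_classes=12, hidden=128):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        # 32x32 grid pooled twice -> 8x8 maps with 32 channels
        self.lstm = nn.LSTM(input_size=32 * 8 * 8, hidden_size=hidden,
                            batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):                 # x: (batch, time, 1, 32, 32)
        b, t = x.shape[:2]
        f = self.cnn(x.flatten(0, 1))     # per-frame CNN features
        f = f.flatten(1).view(b, t, -1)   # back to (batch, time, features)
        out, _ = self.lstm(f)             # temporal dependencies across frames
        return self.head(out[:, -1])      # classify from the last time step

clips = torch.randn(4, 20, 1, 32, 32)     # 4 clips of 20 skeleton-grid frames
print(CNNLSTM()(clips).shape)             # torch.Size([4, 12])
```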

Proceedings ArticleDOI
01 Aug 2017
TL;DR: Trust is formulated as a Markov decision process whose state space includes physical parameters of the swarm, and an inverse reinforcement learning algorithm is employed to learn the operator's behaviors from a single demonstration.
Abstract: In this paper, we study the model of human trust where an operator controls a robotic swarm remotely for a search mission. Existing trust models in human-in-the-loop systems are based on task performance of robots. However, we find that humans tend to make their decisions based on physical characteristics of the swarm rather than its performance since task performance of swarms is not clearly perceivable by humans. We formulate trust as a Markov decision process whose state space includes physical parameters of the swarm. We employ an inverse reinforcement learning algorithm to learn behaviors of the operator from a single demonstration. The learned behaviors are used to predict the trust level of the operator based on the features of the swarm.
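
A speculative sketch of the modelling idea: the trust-MDP state carries physical swarm features an operator can actually perceive, and a linear reward over those features (whose weights IRL would recover from the demonstration) scores the swarm's apparent trustworthiness. The feature set and weights are illustrative assumptions, not the paper's:

```python
import numpy as np

def swarm_features(positions, velocities):
    """Physical characteristics perceivable by an operator."""
    centroid = positions.mean(axis=0)
    spread = np.linalg.norm(positions - centroid, axis=1).mean()
    speed = np.linalg.norm(velocities, axis=1).mean()
    heading_var = np.var(np.arctan2(velocities[:, 1], velocities[:, 0]))
    return np.array([spread, speed, heading_var])

# Weights as IRL might estimate them from a demonstration; here a tight,
# moving, aligned swarm is assumed to look trustworthy.
w = np.array([-0.8, 0.3, -0.5])

def trust_score(positions, velocities):
    return float(w @ swarm_features(positions, velocities))

pos = np.random.rand(10, 2)               # 10 robots in the plane
vel = np.tile([0.1, 0.0], (10, 1))        # all moving in the same direction
print(trust_score(pos, vel))
```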

Proceedings ArticleDOI
08 Dec 2017
TL;DR: This work compared the engagement of persons with dementia involved in two playful activities, a game-based cognitive stimulation and a robot-based free play, using observational rating scales and electrodermal activity (EDA).
Abstract: The study of engagement is central to improve the quality of care and provide people with dementia with meaningful activities. Current assessment techniques of engagement for people with dementia rely exclusively on behavior observation. However, novel unobtrusive sensing technologies, capable of tracking psychological states during activities, can provide us with a deeper layer of knowledge about engagement. We compared the engagement of persons with dementia involved in two playful activities, a game-based cognitive stimulation and a robot-based free play, using observational rating scales and electrodermal activity (EDA). Results highlight significant differences in observational rating scales and EDA between the two activities and several significant correlations between the items of observational rating scales of engagement and affect, and EDA features.

Proceedings ArticleDOI
01 Aug 2017
TL;DR: A method of applying the GQ(λ) reinforcement learning algorithm to a leader-follower formation control scenario on the e-puck robot platform is proposed, showing how the formation control problem is modeled as a Markov decision process.
Abstract: Formation control is an important subtask for autonomous robots. From flying drones to swarm robotics, many applications need their agents to control their group behavior. Especially when moving autonomously in human-robot teams, motion and formation control of a group of agents is a critical and challenging task. In this work, we propose a method of applying the GQ(λ) reinforcement learning algorithm to a leader-follower formation control scenario on the e-puck robot platform. In order to allow control via classical reinforcement learning, we present how we modeled the formation control problem as a Markov decision process. This allows us to use the Greedy-GQ(λ) algorithm for learning a leader-follower control law. The applicability and performance of this control approach are investigated in simulation as well as on real robots. In both experiments, the followers are able to move behind the leader. Additionally, the algorithm improves the smoothness of the follower's path online, which is beneficial in the context of human-robot interaction.
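
To illustrate the MDP encoding, here is a simplified sketch in which the follower's state is its discretized bearing and distance to the leader; plain Q-learning stands in for Greedy-GQ(λ), and the discretization, action set, and reward are assumptions:

```python
import random

ACTIONS = ["forward", "turn_left", "turn_right", "stop"]
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1
Q = {}  # (state, action) -> value

def discretize(bearing_deg, distance_m):
    """State: leader bearing in 45-degree sectors, distance in 10 cm bins."""
    return (int(bearing_deg % 360 // 45), min(int(distance_m / 0.1), 9))

def reward(distance_m, target_m=0.3):
    return -abs(distance_m - target_m)   # keep ~30 cm behind the leader

def act(state):
    if random.random() < EPS:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q.get((state, a), 0.0))

def learn(s, a, r, s_next):
    best = max(Q.get((s_next, b), 0.0) for b in ACTIONS)
    old = Q.get((s, a), 0.0)
    Q[(s, a)] = old + ALPHA * (r + GAMMA * best - old)

s = discretize(bearing_deg=10.0, distance_m=0.8)
a = act(s)
learn(s, a, reward(0.7), discretize(5.0, 0.7))
```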