
Showing papers presented at "Robot and Human Interactive Communication" in 2014


Proceedings ArticleDOI
20 Oct 2014
TL;DR: A typical processing pipeline was implemented on a mobile robot, and a detailed comparison of methods for face segmentation was conducted; face segmentation is the key factor for robust pulse rate extraction even if the subject is moving.
Abstract: Non-contact image photoplethysmography has gained a lot of attention during the last 5 years. Starting with the work of Verkruysse et al. [1], various methods for estimating the human pulse rate from video sequences of the face under ambient illumination have been presented. On a mobile service robot intended to motivate elderly users to do physical exercises, the pulse rate can be valuable information for adapting to the user's condition. For this paper, a typical processing pipeline was implemented on a mobile robot, and a detailed comparison of methods for face segmentation was conducted, as this is the key factor for robust pulse rate extraction even if the subject is moving. A benchmark data set is introduced, focusing on the amount of head motion during the measurement.

169 citations
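
The abstract above outlines the usual non-contact pipeline: segment the face, spatially average a colour channel, band-pass filter the signal, and take the dominant spectral peak as the pulse rate. The following is only a rough, hedged illustration of that generic pipeline, not the paper's implementation; the function name, band limits, and filter order are assumptions.

```python
# Minimal sketch of a non-contact pulse-rate pipeline: average the green channel
# over a (pre-segmented) face region per frame, band-pass filter the signal, and
# take the dominant spectral peak. Names and parameters are illustrative only.
import numpy as np
from scipy.signal import butter, filtfilt

def pulse_rate_bpm(face_rois, fps, low_hz=0.75, high_hz=3.0):
    """face_rois: list of HxWx3 uint8 arrays, one segmented face region per frame."""
    # 1. Spatially average the green channel of each face region.
    signal = np.array([roi[..., 1].mean() for roi in face_rois], dtype=float)
    signal -= signal.mean()

    # 2. Band-pass filter to the plausible pulse band (45-180 bpm).
    b, a = butter(3, [low_hz / (fps / 2), high_hz / (fps / 2)], btype="band")
    filtered = filtfilt(b, a, signal)

    # 3. Dominant frequency via FFT -> pulse rate in beats per minute.
    spectrum = np.abs(np.fft.rfft(filtered))
    freqs = np.fft.rfftfreq(len(filtered), d=1.0 / fps)
    band = (freqs >= low_hz) & (freqs <= high_hz)
    peak_hz = freqs[band][np.argmax(spectrum[band])]
    return 60.0 * peak_hz
```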


Proceedings ArticleDOI
20 Oct 2014
TL;DR: It is expected that children will learn more from a robot that adapts to maintain an equal or greater ability than the children, and that they will copy its stories and narration style more than they would with a robot that does not adapt.
Abstract: Children's oral language skills in preschool can predict their academic success later in life. As such, increasing children's skills early on could improve their success in middle and high school. To this end, we propose that a robotic learning companion could supplement children's early language education. The robot targets both the social nature of language learning and the adaptation necessary to help individual children. The robot is designed as a social character that interacts with children as a peer, not as a tutor or teacher. It will play a storytelling game, during which it will introduce new vocabulary words, and model good story narration skills, such as including a beginning, middle, and end; varying sentence structure; and keeping cohesion across the story. We will evaluate whether adapting the robot's level of language to the child's - so that, as children improve their storytelling skills, so does the robot - influences (i) whether children learn new words from the robot, (ii) the complexity and style of stories children tell, (iii) the similarity of children's stories to the robot's stories. We expect children will learn more from a robot that adapts to maintain an equal or greater ability than the children, and that they will copy its stories and narration style more than they would with a robot that does not adapt (a robot of lesser ability). However, we also expect that playing with a robot of lesser ability could prompt teaching or mentoring behavior from children, which could also be beneficial to language learning.

130 citations


Proceedings ArticleDOI
01 Aug 2014
TL;DR: It is found that children demonstrate a high level of enjoyment when interacting with the robot, and a statistically significant increase in engagement with the system over the duration of the interaction.
Abstract: This paper describes an extended (6-session) interaction between an ethnically and geographically diverse group of 26 first-grade children and the DragonBot robot in the context of learning about healthy food choices. We find that children demonstrate a high level of enjoyment when interacting with the robot, and a statistically significant increase in engagement with the system over the duration of the interaction. We also find evidence of relationship-building between the child and robot, and encouraging trends towards child learning. These results are promising for the use of socially assistive robotic technologies for long-term one-on-one educational interventions for younger children.

99 citations


Proceedings ArticleDOI
20 Oct 2014
TL;DR: An approach that relies on depth cues from RGB-D images, where features related to human body motion are used with multiple learning classifiers to recognize human activities on a benchmark dataset, outperforms state-of-the-art methods in terms of precision, recall and overall accuracy.
Abstract: In this work, we propose an approach that relies on depth cues from RGB-D images, where features related to human body motion (3D skeleton features) are used with multiple learning classifiers in order to recognize human activities on a benchmark dataset. A Dynamic Bayesian Mixture Model (DBMM) is designed to combine multiple classifier likelihoods into a single form, assigning weights (by an uncertainty measure) to counterbalance the likelihoods as a posterior probability. Temporal information is incorporated in the DBMM by means of prior probabilities, taking into consideration previous probabilistic inference to reinforce current-frame classification. The publicly available Cornell Activity Dataset [1] with 12 different human activities was used to evaluate the proposed approach. Reported results on the testing dataset show that our approach outperforms state-of-the-art methods in terms of precision, recall and overall accuracy. The developed work allows activity classification to be used in applications where human behaviour recognition is important, such as human-robot interaction and assisted living for elderly care, among others.

88 citations
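
The DBMM described above fuses several classifiers' likelihoods with uncertainty-derived weights and a temporal prior. Below is a minimal sketch of that style of fusion, assuming an entropy-based weighting and a simple weighted average; it is not the paper's exact formulation.

```python
# Minimal sketch of fusing several classifiers' class likelihoods with weights and
# a temporal prior, in the spirit of the DBMM described above. The weighting scheme
# and normalization are illustrative assumptions, not the paper's exact model.
import numpy as np

def entropy_weights(class_probs):
    """Lower-entropy (more confident) classifiers receive larger weights."""
    entropies = np.array([-(p * np.log(p + 1e-12)).sum() for p in class_probs])
    w = 1.0 / (entropies + 1e-6)
    return w / w.sum()

def dbmm_step(prior, likelihoods, weights):
    """prior: (n_classes,) previous posterior; likelihoods: (n_classifiers, n_classes)."""
    fused = np.average(likelihoods, axis=0, weights=weights)  # weighted mixture of classifiers
    posterior = prior * fused                                 # temporal reinforcement from the previous frame
    return posterior / posterior.sum()                        # normalize to a proper distribution

# Toy usage: 3 classifiers, 4 activity classes, one frame of observations.
prior = np.full(4, 0.25)
frame = np.array([[0.6, 0.2, 0.1, 0.1],
                  [0.5, 0.3, 0.1, 0.1],
                  [0.4, 0.4, 0.1, 0.1]])
posterior = dbmm_step(prior, frame, entropy_weights(frame))
print(posterior)
```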


Proceedings ArticleDOI
20 Oct 2014
TL;DR: This work presents a principled set of motion features based on the Laban Effort system, a widespread and extensively tested acting ontology for the dynamics of “how” motion is enacted; the features perform well for analyzing expression in low-degree-of-freedom systems and could be used to help design more effectively expressive mobile robots.
Abstract: There is a saying that 95% of communication is body language, but few robot systems today make effective use of that ubiquitous channel. Motion is an essential area of social communication that will enable robots and people to collaborate naturally, develop rapport, and seamlessly share environments. The proposed work presents a principled set of motion features based on the Laban Effort system, a widespread and extensively tested acting ontology for the dynamics of “how” we enact motion. The features allow us to analyze and, in future work, generate expressive motion using position (x, y) and orientation (theta). We formulate representative features for each Effort and parameterize them on expressive motion sample trajectories collected from experts in robotics and theater. We then produce classifiers for different “manners” of moving and assess the quality of results by comparing them to the humans labeling the same set of paths on Amazon Mechanical Turk. Results indicate that the machine analysis (41.7% match between intended and classified manner) achieves similar accuracy overall compared to a human benchmark (41.2% match). We conclude that these motion features perform well for analyzing expression in low degree of freedom systems and could be used to help design more effectively expressive mobile robots.

87 citations
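
The paper above defines its Laban Effort features over position (x, y) and orientation (theta) trajectories; their exact formulation is not reproduced here. As a loose illustration only, the sketch below computes simple kinematic statistics of the kind that could feed such a "manner of motion" classifier; every feature choice is an assumption, not the paper's feature set.

```python
# Loose illustration: simple kinematic statistics over an (x, y, theta) trajectory
# sampled at fixed dt. These are NOT the paper's Laban Effort features.
import numpy as np

def trajectory_features(xyt, dt):
    """xyt: (T, 3) array of x, y, theta samples; returns a small feature vector."""
    xy = xyt[:, :2]
    theta = xyt[:, 2]
    vel = np.diff(xy, axis=0) / dt                        # linear velocity per step
    speed = np.linalg.norm(vel, axis=1)
    accel = np.diff(speed) / dt                           # tangential acceleration
    turn_rate = np.diff(np.unwrap(theta)) / dt            # angular velocity
    return np.array([
        speed.mean(), speed.std(), speed.max(),           # how fast / how varied
        np.abs(accel).mean(), np.abs(accel).max(),        # suddenness of speed changes
        np.abs(turn_rate).mean(), np.abs(turn_rate).std() # directness of heading
    ])
```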


Proceedings ArticleDOI
20 Oct 2014
TL;DR: Using graded cueing-style feedback resulted in a nondecreasing trend in imitative accuracy when compared to a non-adaptive condition, where participants always received the same, most descriptive feedback whenever they made a mistake.
Abstract: We performed a study that examined the effects of a humanoid robot giving the minimum required feedback - graded cueing - during a one-on-one imitation game played with children with autism spectrum disorders (ASD). 12 high-functioning participants with ASD, ages 7 to 10, each played “Copy-Cat” with a Nao robot 5 times over the span of 2.5 weeks. While the graded cueing model was not exercised to its fullest, using graded cueing-style feedback resulted in a nondecreasing trend in imitative accuracy when compared to a non-adaptive condition, in which participants always received the same, most descriptive feedback whenever they made a mistake. These trends show promise for future work with robots encouraging autonomy in special needs populations.

77 citations


Proceedings ArticleDOI
20 Oct 2014
TL;DR: A situation assessment reasoner that generates relevant symbolic information from the geometry of the environment with respect to relations between objects and human capabilities and describes here the way the system manages the hypotheses to be able to handle such knowledge in a flexible manner.
Abstract: In daily human interactions, spatial reasoning occupies an important place. In this paper we present a situation assessment reasoner that generates relevant symbolic information from the geometry of the environment with respect to relations between objects and human capabilities. The role of the SPARK (SPAtial Reasoning and Knowledge) component is to permanently maintain a state of the world in order to provide a basis for the robot to plan, to act, to react and to interact. More precisely, we describe here the way the system manages hypotheses in order to handle such knowledge in a flexible manner. Equipped with such capabilities, a robot that will interact with humans should be able to extract, compute or infer these relations and capabilities in order to communicate and interact efficiently in a natural way. To illustrate our work, we explain how the robot is able to manage and update agents' beliefs and pass the Sally-Anne test. This work is part of a broader effort to develop a complete decisional framework for human-robot interactive task achievement.

68 citations


Proceedings ArticleDOI
20 Oct 2014
TL;DR: The model and control of an additional finger, the Sixth-Finger, is presented as a case study of this type of robotic limbs and an object-based mapping algorithm is proposed to control the robotic extra-finger by interpreting the whole hand motion in grasping action.
Abstract: Robotic prostheses are usually intended as artificial device extensions replacing a missing part of a human body. A new approach regarding robotic limbs is presented here. A modular robot is used not only for replacing a missing part of the body but also as an extra-limb in order to enhance manipulation dexterity and enlarge the workspace of human beings. In this work, the model and control of an additional finger, the Sixth-Finger, are presented as a case study of this type of robotic limb. The robotic finger has been placed on the wrist, opposite to the hand palm. This solution allows the hand workspace to be enlarged, increasing the grasp capability of the user. An object-based mapping algorithm is proposed to control the robotic extra-finger by interpreting the whole hand motion in grasping actions. A four-DoF modular prototype is presented along with numerical simulations and real experiments. The proposed Sixth-Finger can lead to a wide range of applications in the direction of augmenting human capabilities through wearable robotics.

68 citations


Proceedings ArticleDOI
20 Oct 2014
TL;DR: Teachers saw a role for the tutor in acting as an engaging tool for all, preferably in groups, and gathering information about students' learning progress without taking over the teachers' responsibility for the actual assessment.
Abstract: In this paper, we describe the results of an interview study conducted across several European countries on teachers' views on the use of empathic robotic tutors in the classroom. The main goals of the study were to elicit teachers' thoughts on the integration of the robotic tutors in daily school practice, understand the main roles that these robots could play, and gather teachers' main concerns about this type of technology. Teachers' concerns mostly related to fairness of access to the technology, the robustness of the robot in students' hands, and disruption of other classroom activities. They saw a role for the tutor in acting as an engaging tool for all, preferably in groups, and in gathering information about students' learning progress without taking over the teachers' responsibility for the actual assessment. The implications of these results are discussed in relation to teacher acceptance of ubiquitous technologies in general and robots in particular.

60 citations


Proceedings ArticleDOI
15 Oct 2014
TL;DR: The state of the art in safety assurance, existing as well as emerging standards in this area, and the need for new approaches to safety assurance in the context of learning machines are discussed.
Abstract: The success of the human-robot co-worker team in a flexible manufacturing environment where robots learn from demonstration heavily relies on the correct and safe operation of the robot. How this can be achieved is a challenge that requires addressing both technical as well as human-centric research questions. In this paper we discuss the state of the art in safety assurance, existing as well as emerging standards in this area, and the need for new approaches to safety assurance in the context of learning machines. We then focus on robotic learning from demonstration, the challenges these techniques pose to safety assurance and outline opportunities to integrate safety considerations into algorithms “by design”. Finally, from a human-centric perspective, we stipulate that, to achieve high levels of safety and ultimately trust, the robotic co-worker must meet the innate expectations of the humans it works with. It is our aim to stimulate a discussion focused on the safety aspects of human-in-the-loop robotics, and to foster multidisciplinary collaboration to address the research challenges identified.

52 citations


Proceedings ArticleDOI
20 Oct 2014
TL;DR: A mathematical model is proposed that enables robots to generate pointing configurations that make the goal object as clear as possible - pointing configurations that are legible.
Abstract: Good communication is critical to seamless human-robot interaction. Among numerous communication channels, here we focus on gestures, and in particular on spatial deixis: pointing at objects in the environment in order to reference them. We propose a mathematical model that enables robots to generate pointing configurations that make the goal object as clear as possible - pointing configurations that are legible. We study the implications of legibility on pointing, e.g. that the robot will sometimes need to trade off efficiency for the sake of clarity. Finally, we test how well our model works in practice in a series of user studies, showing that the resulting pointing configurations make the goal object easier to infer for novice users.
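
The abstract describes selecting pointing configurations that make the goal object easiest to infer. The sketch below shows one plausible way to operationalize that idea, scoring candidate pointing rays by the softmax probability an observer would assign to the goal object based on angular alignment; this is an illustrative stand-in, not the paper's formulation.

```python
# Illustrative observer model for "legible" pointing: score each candidate pointing
# ray by the probability a viewer would assign to the goal object when objects are
# weighted by how well the ray aligns with them. Not the paper's exact model.
import numpy as np

def inference_prob(origin, direction, objects, goal_idx, sharpness=10.0):
    """origin, direction: 3D pointing ray; objects: (N, 3) object positions."""
    d = direction / np.linalg.norm(direction)
    to_obj = objects - origin
    to_obj /= np.linalg.norm(to_obj, axis=1, keepdims=True)
    alignment = to_obj @ d                     # cosine of angle between ray and each object
    scores = np.exp(sharpness * alignment)     # softmax over candidate objects
    probs = scores / scores.sum()
    return probs[goal_idx]

def most_legible(candidates, objects, goal_idx):
    """candidates: list of (origin, direction) pointing configurations."""
    return max(candidates, key=lambda c: inference_prob(c[0], c[1], objects, goal_idx))
```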

Proceedings ArticleDOI
20 Oct 2014
TL;DR: This study tries to understand how blame attribution after an error impacts user trust and confirms that blame attribution impacts human trust in robots.
Abstract: Trust in automation is a crucial ingredient for successful human-robot interaction. Both human-related and robot-related factors influence the user's trust in the robot, and it is challenging to characterize each of these factors and study how they affect human trust. In this study we try to understand how blame attribution after an error impacts user trust. Three different robot personalities were implemented, each assigning blame to the user, the robot itself, or the human-robot team. Our study results confirm that blame attribution impacts human trust in robots.

Proceedings ArticleDOI
25 Aug 2014
TL;DR: Virtual reality may be a good tool to assess the acceptability of human-robot collaboration and draw preliminary results through questionnaires, but physical experiments are still necessary for a complete study, especially when dealing with physiological measures.
Abstract: This paper focuses on the acceptability of human-robot collaboration in industrial environments. A use case was designed in which an operator and a robot had to work side-by-side on automotive assembly lines, with different levels of co-presence. This use case was implemented both in a physical and in a virtual situation using virtual reality. A user study was conducted with operators from the automotive industry. The operators were asked to assess the acceptability of working side-by-side with the robot through questionnaires, and physiological measures (heart rate and skin conductance) were taken during the user study. The results showed that working close to the robot imposed more constraints on the operators and required them to adapt to the robot. Moreover, an increase in skin conductance level was observed after working close to the robot. Although no significant difference was found in the questionnaire results between the physical and virtual situations, the increase in physiological measures was significant only in the physical situation. This suggests that virtual reality may be a good tool to assess the acceptability of human-robot collaboration and draw preliminary results through questionnaires, but that physical experiments are still necessary for a complete study, especially when dealing with physiological measures.

Proceedings ArticleDOI
20 Oct 2014
TL;DR: A taxonomy of ADL is proposed allowing for their categorization with respect to the most suitable monitoring approach, and a freely available dataset of acceleration data, coming from a wrist-worn wearable device, targeting the recognition of 14 different human activities is presented.
Abstract: The automatic monitoring of specific Activities of Daily Living (ADL) can be a useful tool for Human-Robot Interaction in smart environments and Assistive Robotics applications. The qualitative definition that is given for most ADL and the lack of well-defined benchmarks, however, are obstacles toward the identification of the most effective monitoring approaches for different tasks. The contribution of the article is two-fold: (i) we propose a taxonomy of ADL allowing for their categorization with respect to the most suitable monitoring approach; (ii) we present a freely available dataset of acceleration data, coming from a wrist-worn wearable device, targeting the recognition of 14 different human activities.
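
As a minimal, hedged illustration of how such wrist-worn acceleration data might be consumed for activity recognition, the sketch below extracts simple sliding-window features from a tri-axial signal; the window length, step, and feature set are assumptions rather than properties of the published dataset.

```python
# Minimal sketch of sliding-window features over tri-axial wrist acceleration data
# for ADL recognition. Window size, features, and classifier choice are assumptions.
import numpy as np

def windows(acc, window=96, step=48):
    """acc: (T, 3) accelerometer samples; yields overlapping (window, 3) segments."""
    for start in range(0, len(acc) - window + 1, step):
        yield acc[start:start + window]

def window_features(seg):
    mag = np.linalg.norm(seg, axis=1)                   # acceleration magnitude
    return np.concatenate([
        seg.mean(axis=0), seg.std(axis=0),              # per-axis statistics
        [mag.mean(), mag.std(), mag.max() - mag.min()]  # overall intensity / range
    ])

# Typical usage with any off-the-shelf classifier (e.g. scikit-learn):
#   X = np.array([window_features(w) for w in windows(acc)])
#   clf.fit(X, labels)
```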

Proceedings ArticleDOI
Lanbo She, Yu Cheng, Joyce Y. Chai, Yunyi Jia, Shaohua Yang, Ning Xi
01 Aug 2014
TL;DR: An approach which allows human partners to teach a robot new high-level actions through natural language instructions, using a representation that consists only of the desired goal states rather than step-by-step operations (although these operations may be specified by the human in their instructions).
Abstract: Robots often have limited knowledge and need to continuously acquire new knowledge and skills in order to collaborate with their human partners. To address this issue, this paper describes an approach which allows human partners to teach a robot (i.e., a robotic arm) new high-level actions through natural language instructions. In particular, built upon the traditional planning framework, we propose a representation of high-level actions that consists only of the desired goal states rather than step-by-step operations (although these operations may be specified by the human in their instructions). Our empirical results have shown that, given this representation, the robot can rely on automated planning and immediately apply the newly learned action knowledge to perform actions in novel situations.
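
A minimal sketch of the kind of goal-state action representation the abstract describes is shown below: the taught action stores only the predicates that must hold afterwards, leaving a planner to find suitable operations. The predicate names and data structures are hypothetical, not the paper's.

```python
# Minimal sketch of a goal-state representation for a taught high-level action:
# the action stores only the predicates that must hold afterwards, and a planner
# (not shown) searches for primitive operations that achieve them.
# Predicate names and the interface are hypothetical.
from dataclasses import dataclass, field

@dataclass
class HighLevelAction:
    name: str
    parameters: tuple                                    # e.g. ("?obj",)
    goal: frozenset = field(default_factory=frozenset)   # desired goal predicates

# "Fill the cup" taught via instruction: only the end state is stored,
# not the step-by-step operations the instructor happened to use.
fill = HighLevelAction(
    name="fill",
    parameters=("?obj",),
    goal=frozenset({("grasped", "?obj", False), ("filled", "?obj", True)}),
)

def achieved(goal, state):
    """True if every goal predicate (with variables already bound) holds in the state."""
    return goal <= state
```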

Proceedings ArticleDOI
20 Oct 2014
TL;DR: This work proposes an interactive, action-based motivation model for HRI that has been implemented in an autonomous robot system and tested during a long-term HRI study and demonstrated that the transfer of interaction patterns from HHI to HRI was successful with participants benefitting from the interaction experience.
Abstract: The topic of motivation is a crucial issue for various human-robot interaction (HRI) scenarios. Interactional aspects of motivation can be studied in human-human interaction (HHI) and build the basis for modeling a robot's interactional conduct. Using an ethnographic approach, we explored the factors relevant to the formation of motivation-relevant processes in an indoor-cycling activity. We propose an interactive, action-based motivation model for HRI that has been implemented in an autonomous robot system and tested during a long-term HRI study. The model is based on micro-analyses of human indoor-cycling courses and resulted in an adaptation of specific dialog patterns for HRI. A qualitative evaluation - accompanied by a quantitative analysis - demonstrated that the transfer of interaction patterns from HHI to HRI was successful, with participants benefiting from the interaction experience (e.g., performance, subjective feeling of being motivated).

Proceedings ArticleDOI
20 Oct 2014
TL;DR: This paper addresses the problem of teaching a robot collaborative behaviors from human demonstrations by presenting an approach that combines probabilistic learning and dynamical systems to encode the robot's motion along the task.
Abstract: Physical interaction between humans and robots raises a large set of challenging problems involving hardware, safety, control and cognitive aspects, among others. In this context, the cooperative (two or more people/robots) transportation of bulky loads in manufacturing plants is a practical example where these challenges are evident. In this paper, we address the problem of teaching a robot collaborative behaviors from human demonstrations. Specifically, we present an approach that combines probabilistic learning and dynamical systems to encode the robot's motion along the task. Our method allows us to learn not only a desired path to take the object through, but also the force the robot needs to apply to the load during the interaction. Moreover, the robot is able to learn and reproduce the task with varying initial and final locations of the object. The proposed approach can be used in scenarios where not only the path to be followed by the transported object matters, but also the force applied to it. Tests were successfully carried out in a scenario where a 7-DOF backdrivable manipulator learns to cooperate with a human to transport an object while satisfying the position and force constraints of the task.

Proceedings ArticleDOI
20 Oct 2014
TL;DR: The essential contribution of this paper is a set of generic distance-invariant range scan features for people detection in 2D laser range data and for distinguishing their walking aids.
Abstract: People detection in 2D laser range data is a popular cue for person tracking in mobile robotics. Many approaches are designed to detect pairs of legs. These approaches perform well in many public environments. However, we are working on an assistance robot for stroke patients in a rehabilitation center, where most of the people need walking aids. These tools occlude or touch the legs of the patients. As a result, approaches based on pure leg detection fail. The essential contribution of this paper is a set of generic distance-invariant range scan features for people detection in 2D laser range data and for distinguishing their walking aids. With these features we trained classifiers for detecting people without walking aids (or with crutches), people with walkers, and people in wheelchairs. Using this approach for people detection, we achieve an F1 score of 0.99 for people with and without walking aids, and 86% of detections are classified correctly regarding their walking aid. For comparison, using the state-of-the-art features of Arras et al. on the same data results in an F1 score of 0.86 and 57% correct discrimination of walking aids. The proposed detection algorithm takes around 2.5% of the resources of a 2.8 GHz CPU core to process 270° laser range data at an update rate of 10 Hz.
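
The paper's distance-invariant features are not reproduced here; the sketch below only illustrates the general pipeline of segmenting a 2D range scan at range jumps, computing per-segment geometric features, and handing them to a trained classifier. The feature choices and thresholds are assumptions.

```python
# Rough illustration of a 2D laser people-detection pipeline: split the scan into
# segments at range jumps, compute simple geometric features per segment, and feed
# them to a trained classifier. These generic features are NOT the paper's
# distance-invariant feature set.
import numpy as np

def scan_to_points(ranges, angle_min, angle_increment):
    angles = angle_min + angle_increment * np.arange(len(ranges))
    return np.column_stack([ranges * np.cos(angles), ranges * np.sin(angles)])

def segment(points, ranges, jump=0.2):
    """Split consecutive scan points into segments wherever the range jumps."""
    breaks = np.where(np.abs(np.diff(ranges)) > jump)[0] + 1
    return np.split(points, breaks)

def segment_features(seg):
    width = np.linalg.norm(seg[-1] - seg[0])      # extent of the segment
    centroid = seg.mean(axis=0)
    spread = np.linalg.norm(seg - centroid, axis=1)
    return np.array([len(seg), width, spread.mean(), spread.std()])

# Typical usage with a pre-trained classifier (e.g. scikit-learn):
#   for seg in segment(scan_to_points(r, a0, da), r):
#       if len(seg) >= 3:
#           label = clf.predict([segment_features(seg)])[0]
```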

Proceedings ArticleDOI
20 Oct 2014
TL;DR: This paper investigates the specific design considerations and the impressions of long-term care residents, healthcare professionals, and family members on a socially assistive robot designed to autonomously facilitate cognitively and socially stimulating leisure activities.
Abstract: As older adults age, they are more likely to reside in long-term care facilities due to the decline in cognitive and/or physical abilities that prevent them from living independently. With a rapidly aging population there is an increasing demand on long-term care facilities to care for older adults. Such facilities need to provide medical services, assistance in activities of daily living, and scheduled leisure activities to improve health and quality of life. However, as the need for long-term care is increasing, the care workforce is faced with decreasing numbers of healthcare staff and high turnover rates. Our research focuses on the design of socially assistive robots to plan, schedule, and facilitate social and cognitive interventions for residents in long-term care facilities. In this paper, we investigate the specific design considerations and the impressions of long-term care residents, healthcare professionals, and family members on a socially assistive robot designed to autonomously facilitate cognitively and socially stimulating leisure activities. Thematic analysis of focus group sessions conducted at a long-term care facility with the aforementioned individuals revealed important design considerations for the development and integration of a socially assistive robot in long-term care facilities.

Proceedings ArticleDOI
01 Aug 2014
TL;DR: The results confirmed the hypothesis that the use of such motivational cues significantly improves the persuasiveness of the robot and highlighted a higher impact of the verbal cues, which is in contrast with previous studies.
Abstract: This study contributes toward the creation of social robots as personal and public assistants. The ability of the robot to persuade and motivate people to follow a given behavior is of particular relevance in several cases, especially those related to people's health recovery and maintenance (e.g., personal trainer, diet coach, etc.). In this paper, we evaluated the effect of a humanoid robot's use of verbal and bodily features and behaviors in motivating 80 children (age: 8–9 years old) to follow healthier lifestyles (namely, eating more fruit and vegetables). The results confirmed the hypothesis that the use of such motivational cues significantly improves the persuasiveness of the robot. Moreover, the results highlighted a higher impact of the verbal cues, which is in contrast with previous studies.

Proceedings ArticleDOI
20 Oct 2014
TL;DR: This paper characterises this potential support in greater detail through a structured collection of perspectives from all stakeholders, namely the diabetic children, their siblings and parents, and the healthcare professionals involved in their diabetes education and care.
Abstract: Being a child with diabetes is challenging: apart from the emotional difficulties of dealing with the disease, there are multiple physical aspects that need to be dealt with on a daily basis. Furthermore, as the children grow older, it becomes necessary to self-manage their condition without the explicit supervision of parents or carers. This process requires that the children overcome a steep learning curve. Previous work hypothesized that a robot could provide a supporting role in this process. In this paper, we characterise this potential support in greater detail through a structured collection of perspectives from all stakeholders, namely the diabetic children, their siblings and parents, and the healthcare professionals involved in their diabetes education and care. A series of brain-storming sessions were conducted with 22 families with a diabetic child (32 children and 38 adults in total) to explore areas in which they expected that a robot could provide support and/or assistance. These perspectives were then reviewed, validated and extended by healthcare professionals to provide a medical grounding. The results of these analyses suggested a number of specific functions that a companion robot could fulfil to support diabetic children in their daily lives.

Proceedings ArticleDOI
20 Oct 2014
TL;DR: A user study putting the users in control of the mechanoid was conducted in a laboratory hallway-like setting and findings align with previously reported personal space zones in human-robot interaction research.
Abstract: Within the scope of the current research, the goal was to develop an autonomous transport assistant for hospitals. As a kind of social robot, such an assistant needs to fulfill two main requirements with respect to its interactive behavior with humans: (1) a high level of safety and (2) behavior that is perceived as socially proper. One important element is the characteristics of movement. However, state-of-the-art hospital robots focus on safe rather than smart maneuvering. Vital motion parameters in everyday human environments are personal space and velocity. The relevance of these parameters has also been reported in existing human-robot interaction research. However, to date, no minimal accepted frontal and lateral distances for human-mechanoid proxemics have been explored. The present work attempts to gain insights into a potential threshold of comfort and, additionally, aims to explore a potential interaction between this threshold and the mechanoid's velocity. Therefore, a user study putting the users in control of the mechanoid was conducted in a laboratory hallway-like setting. The findings align with previously reported personal space zones in human-robot interaction research. Minimal accepted frontal and lateral distances were obtained. Furthermore, insights into a potential categorization of the lateral personal space area around a human are discussed for human-robot interaction.

Proceedings ArticleDOI
20 Oct 2014
TL;DR: A machine learning approach is proposed that makes use of Dynamic Mode Decomposition (DMD), which can extract the dynamics of a nonlinear system and is therefore well suited to separating noise from regular oscillations in sensor readings during cyclic robot movements under different behavior configurations.
Abstract: In many settings, e.g. physical human-robot interaction, robotic behavior must be made robust against more or less spontaneous application of external forces. Typically, this problem is tackled by means of special purpose force sensors which are, however, not available on many robotic platforms. In contrast, we propose a machine learning approach suitable for more common, although often noisy sensors. This machine learning approach makes use of Dynamic Mode Decomposition (DMD) which is able to extract the dynamics of a nonlinear system. It is therefore well suited to separate noise from regular oscillations in sensor readings during cyclic robot movements under different behavior configurations. We demonstrate the feasibility of our approach with an example where physical forces are exerted on a humanoid robot during walking. In a training phase, a snapshot based DMD model for behavior specific parameter configurations is learned. During task execution the robot must detect and estimate the external forces exerted by a human interaction partner. We compare the DMD-based approach to other interpolation schemes and show that the former outperforms the latter particularly in the presence of sensor noise. We conclude that DMD which has so far been mostly used in other fields of science, particularly fluid mechanics, is also a highly promising method for robotics.
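
Exact, snapshot-based DMD itself is a standard procedure; a minimal SVD-based sketch is given below. How the paper ties the resulting modes to behavior-specific configurations and external-force estimation is not reproduced here.

```python
# Minimal snapshot-based (exact) DMD: given a sequence of state snapshots, fit a
# low-rank linear operator whose eigenvalues/modes capture the dominant dynamics.
import numpy as np

def dmd(snapshots, rank):
    """snapshots: (n_states, n_time) matrix of sensor readings over time."""
    X, Y = snapshots[:, :-1], snapshots[:, 1:]          # snapshot pairs x_k -> x_{k+1}
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    U, s, V = U[:, :rank], s[:rank], Vh[:rank].conj().T # rank-truncated SVD
    A_tilde = U.conj().T @ Y @ V @ np.diag(1.0 / s)     # reduced linear operator
    eigvals, W = np.linalg.eig(A_tilde)
    modes = Y @ V @ np.diag(1.0 / s) @ W                # DMD modes in full state space
    return eigvals, modes

# Eigenvalues near the unit circle correspond to sustained oscillations (e.g. the
# cyclic walking motion); deviations of new sensor data from the fitted modes can
# then be attributed to external disturbances such as forces applied by a human.
```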

Proceedings ArticleDOI
20 Oct 2014
TL;DR: The robot designed uses a simple spherical morphology together with a collection of autonomous behaviors and controllable modalities and provides a wide range of movement, visual, sound and touch interaction capabilities to encourage the child to learn and play.
Abstract: It is known that children with autism can benefit from interacting with robotic devices. The Centers for Disease Control and Prevention (CDC) in the USA identifies around 1 in 68 American children as being on the autism spectrum. Other countries also have similar prevalence rates. Therefore, providing therapeutic devices is becoming of increasing importance. Here we address the problem of effectively designing and building a robotic device fit for this purpose. The robot we have designed uses a simple spherical morphology together with a collection of autonomous behaviors and controllable modalities. The platform is robust, simple, safe, and provides a wide range of movement, visual, sound and touch interaction capabilities to encourage the child to learn and play.

Proceedings ArticleDOI
20 Oct 2014
TL;DR: The results show that social framing, in contrast to other methods for getting a person's continued attention, is effective and increases how friendly the robot appears, however, it has little influence on people's willingness to assist the robot.
Abstract: Robots often need to ask humans for help, for instance to complete a human component in a larger task or to recover from an unforeseen error. In this paper, we explore how robots can initiate interactions with people in order to ask for help. We discuss a study in which a robot initiated interaction with a participant by producing either an acoustic signal or a verbal greeting. Thereafter, the robot produced a gesture in order to request help in performing a task. We investigate the effect that social framing by means of a verbal greeting may have on people's attention to the robot, on their recognition of the robot's actions and intention, and on their willingness to help. The results show that social framing, in contrast to other methods for getting a person's continued attention, is effective and increases how friendly the robot appears. However, it has little influence on people's willingness to assist the robot, which rather depends on the activities people are engaged in, and on the readability of the robot's request.

Proceedings ArticleDOI
20 Oct 2014
TL;DR: This paper focuses on tasks that can be represented as a sequence of manipulated objects and performed actions, whose recognition and prediction are of importance for safe and profitable human-robot cooperation.
Abstract: Task recognition and future human activity prediction are of importance for a safe and profitable human-robot cooperation. In real scenarios, the robot has to extract this information by merging knowledge of the task with contextual information from the sensors, minimizing possible misunderstandings. In this paper, we focus on tasks that can be represented as a sequence of manipulated objects and performed actions. The task is modelled with a Dynamic Bayesian Network (DBN), which takes manipulated objects and performed actions as input. Objects and actions are separately classified starting from RGB-D raw data. The DBN is responsible for estimating the current task, predicting the most probable future action-object pairs and correcting possible misclassifications. The effectiveness of the proposed approach is validated on a case study consisting of three typical tasks in a kitchen scenario.
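
A minimal sketch of the kind of recursive inference such a model performs is given below: a belief over tasks is propagated through a simple transition model and updated with the likelihood of each observed (action, object) pair. The probability tables are toy values, not the paper's DBN.

```python
# Minimal sketch of recursive Bayesian task inference from observed (action, object)
# pairs, in the spirit of the DBN described above. The tables are toy illustrations.
import numpy as np

tasks = ["make_tea", "make_sandwich", "clean_table"]

# P(observation | task): likelihood of each (action, object) pair under each task.
obs_lik = {
    ("grasp", "kettle"): np.array([0.7, 0.1, 0.2]),
    ("grasp", "bread"):  np.array([0.1, 0.8, 0.1]),
    ("wipe",  "sponge"): np.array([0.1, 0.1, 0.8]),
}

def update(belief, action, obj, stickiness=0.9):
    """One filtering step: 'stay in the same task' transition, then the observation."""
    n = len(belief)
    transition = stickiness * np.eye(n) + (1 - stickiness) / n
    predicted = transition @ belief
    posterior = predicted * obs_lik[(action, obj)]
    return posterior / posterior.sum()

belief = np.full(len(tasks), 1.0 / len(tasks))
for step in [("grasp", "kettle"), ("grasp", "kettle")]:
    belief = update(belief, *step)
print(dict(zip(tasks, belief.round(2))))   # belief shifts toward "make_tea"
```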

Proceedings ArticleDOI
20 Oct 2014
TL;DR: Inverse reinforcement learning is analyzed as a tool to transfer typical human navigation behavior to the robot's local navigation planner, using observations of real human motion interactions found in a publicly available dataset.
Abstract: Robot navigation in human environments is an active research area that poses serious challenges. Among them, social navigation and human-awareness have gained a lot of attention in recent years due to their important role in human safety and robot acceptance. Learning has been proposed as a more principled way of capturing the structure of human social interactions. In this paper, inverse reinforcement learning is analyzed as a tool to transfer typical human navigation behavior to the robot's local navigation planner. Observations of real human motion interactions found in a publicly available dataset are employed to learn a cost function, which is then used to determine a navigation controller. The paper presents an analysis of the controller's behavior in two different scenarios involving interaction with persons, and a comparison of this approach with a Proxemics-based method.
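
The IRL procedure itself is not reproduced here; the sketch below only illustrates its end product, a learned linear cost over social features used to score candidate local motions. The feature choices and weights are hypothetical placeholders, not learned values from the paper.

```python
# Illustrative use of a learned linear cost over social features to score candidate
# local motions, the end product of an IRL procedure like the one described above.
# Features and weights are hypothetical placeholders.
import numpy as np

def social_features(candidate_pos, goal, persons):
    """candidate_pos, goal: (2,) positions; persons: (N, 2) detected person positions."""
    d_people = np.linalg.norm(persons - candidate_pos, axis=1) if len(persons) else np.array([np.inf])
    return np.array([
        np.linalg.norm(goal - candidate_pos),   # progress-to-goal term
        np.exp(-d_people.min()),                # sharp penalty for being very close to a person
        1.0 / (1.0 + d_people.min()),           # smoother proximity penalty
    ])

def best_candidate(candidates, goal, persons, weights):
    """Pick the candidate position with the lowest learned cost w^T f(x)."""
    costs = [weights @ social_features(c, goal, persons) for c in candidates]
    return candidates[int(np.argmin(costs))]

# The weight vector would come from inverse reinforcement learning on human
# trajectories; here it is a placeholder:
#   best = best_candidate(candidates, goal, persons, weights=np.array([1.0, 3.0, 1.5]))
```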

Proceedings ArticleDOI
20 Oct 2014
TL;DR: It is concluded that dynamic signs are important for information conveyance when the robot is in close proximity to the human but multi-arm gestures are necessary when information must be conveyed across a greater distance.
Abstract: Motivated by the desire to mitigate human casualties in emergency situations, this paper explores various guidance modalities provided by a robotic platform for instructing humans to safely evacuate during an emergency. We focus on physical modifications of the robot, which enables visual guidance instructions, since auditory guidance instructions pose potential problems in a noisy emergency environment. Robotic platforms can convey visual guidance instructions through motion, static signs, dynamic signs, and gestures using single or multiple arms. In this paper, we discuss the different guidance modalities instantiated by different physical platform constructs and assess the abilities of the platforms to convey information related to evacuation. Human-robot interaction studies with 192 participants show that participants were able to understand the information conveyed by the various robotic constructs in 75.8% of cases when using dynamic signs with multi-arm gestures, as opposed to 18.0% when using static signs for visual guidance. Of interest to note is that dynamic signs had equivalent performance to single-arm gestures overall but drastically different performances at the two distance levels tested. Based on these studies, we conclude that dynamic signs are important for information conveyance when the robot is in close proximity to the human but multi-arm gestures are necessary when information must be conveyed across a greater distance.

Proceedings ArticleDOI
20 Oct 2014
TL;DR: A Kinect-based calling gesture recognition scenario for the order-taking service of an elderly care robot, designed mainly to help non-expert users such as the elderly call the service robot with their service requests.
Abstract: This paper proposes a Kinect-based calling gesture recognition scenario for the order-taking service of an elderly care robot. The proposed scenarios are designed mainly to help non-expert users such as the elderly call the service robot with their service requests. In order to facilitate elderly care, natural calling gestures are designed to interact with the robot. Our challenge here is how to make natural calling gesture recognition work among cluttered and randomly moving objects. In this approach, there are two modes of calling gesture recognition: skeleton-based gesture recognition and Octree-based gesture recognition. Individual people are segmented out from the 3D point cloud acquired by a Microsoft Kinect, a skeleton is generated for each segment, face detection is applied to identify whether the segment is human or not, and specific natural calling gestures are designed based on skeleton joints. For the case that the user is sitting on a chair or sofa and a correct skeleton cannot be generated, the Octree-based gesture recognition procedure is used to recognize the gesture, in which human segments with head and hand are identified by face detection as well as specific geometrical constraints and skin color evidence. The proposed method has been implemented and tested on “HomeMate”, a service robot developed for elderly care. The performance and results are given.

Proceedings ArticleDOI
20 Oct 2014
TL;DR: The robot's ontology is extended with concepts for representing human-robot interactions as well as the experiences of the robot; these experiences are extracted, stored in memory, and used as input for learning methods.
Abstract: Intelligent service robots should be able to improve their knowledge from accumulated experiences through continuous interaction with the environment, and in particular with humans. A human user may guide the process of experience acquisition, teaching new concepts, or correcting insufficient or erroneous concepts through interaction. This paper reports on work towards interactive learning of objects and robot activities in an incremental and open-ended way. In particular, this paper addresses human-robot interaction and experience gathering. The robot's ontology is extended with concepts for representing human-robot interactions as well as the experiences of the robot. The human-robot interaction ontology includes not only instructor teaching activities but also robot activities to support appropriate feedback from the robot. Two simplified interfaces are implemented for the different types of instructions, including the teach instruction, which triggers the robot to extract experiences. These experiences, both in the robot activity domain and in the perceptual domain, are extracted and stored in memory, and they are used as input for learning methods. The functionalities described above are completely integrated into a robot architecture and demonstrated on a PR2 robot.