
Showing papers presented at "Robot and Human Interactive Communication" in 2010


Proceedings ArticleDOI
11 Oct 2010
TL;DR: The experiment reported in this paper investigated the effect of varying a robot's head position on the interpretation, Valence, Arousal and Stance of emotional key poses and found that participants were better than chance level in interpreting the key poses.
Abstract: In order for robots to be socially accepted and to generate empathy, it is necessary that they display rich emotions. For robots such as Nao, body language is the best medium available, given their inability to convey facial expressions. Displaying emotional body language that can be interpreted whilst interacting with the robot should significantly improve its sociability. This research investigates the creation of an Affect Space for the generation of emotional body language to be displayed by robots. To create an Affect Space for body language, one has to establish the contribution of the different positions of the joints to the emotional expression. The experiment reported in this paper investigated the effect of varying a robot's head position on the interpretation, Valence, Arousal and Stance of emotional key poses. It was found that participants were better than chance level in interpreting the key poses. This finding confirms that body language is an appropriate medium for robots to express emotions. Moreover, the results of this study support the conclusion that Head Position is an important body posture variable. Head Position up increased correct identification for some emotion displays (pride, happiness, and excitement), whereas Head Position down increased correct identification for other displays (anger, sadness). Fear, however, was identified well regardless of Head Position. Head up was always evaluated as more highly Aroused than Head straight or down. Evaluations of Valence (degree of negativity to positivity) and Stance (degree from aversive to approaching), however, depended on both Head Position and the emotion displayed. The effects of varying this single body posture variable were complex.
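
As a toy illustration of how such an Affect Space finding might be operationalized on a Nao-style head joint, the Python sketch below maps each studied emotion to an illustrative head-pitch setting; the pitch values and the pose-to-emotion mapping are assumptions for illustration, not the paper's parameters.

```python
# Hypothetical sketch of an "Affect Space" head-pitch lookup.
# Head pitch in radians: negative = head up, positive = head down.
HEAD_PITCH = {"up": -0.4, "straight": 0.0, "down": 0.4}

# Emotions whose identification the study found to improve with head up/down.
EMOTION_HEAD = {
    "pride": "up", "happiness": "up", "excitement": "up",
    "anger": "down", "sadness": "down",
    "fear": "straight",  # identified well regardless of head position
}

def head_pitch_for(emotion: str) -> float:
    """Return an illustrative head pitch for an emotional key pose."""
    return HEAD_PITCH[EMOTION_HEAD[emotion]]

if __name__ == "__main__":
    for emotion in EMOTION_HEAD:
        print(f"{emotion:>10}: pitch {head_pitch_for(emotion):+.2f} rad")
```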

143 citations


Proceedings ArticleDOI
03 Dec 2010
TL;DR: It is the authors' opinion that, if not tackled appropriately, abuse towards robots may become a serious hindrance to their future deployment and safety; hence the necessity to tackle this issue with dedicated solutions during the early phases of design.

Abstract: This paper describes and discusses the preliminary results of a behavioural study on robot social acceptability, which was carried out during a public demonstration in South Korea. Data was collected by means of direct observation of people's behaviour during interaction with robots. The most interesting result to emerge concerns young people: they tended to react to the robots' presence with extreme curiosity and, quite often, to treat them aggressively. In this paper, the word bullying is used to describe any kind of improper and violent behaviour intended to cause damage or to impede the robots' operation. It is the authors' opinion that, if not tackled appropriately, abuse towards robots may become a serious hindrance to their future deployment and safety; hence the necessity to tackle this issue with dedicated solutions during the early phases of design.

99 citations


Proceedings ArticleDOI
11 Oct 2010
TL;DR: In this paper, effort sharing policies are systematically derived from the geometric and dynamic task properties, resulting in unilateral and balanced effort distributions that are evaluated within a novel hierarchical motion generation and control framework.
Abstract: Physical cooperation with humans greatly enhances the capabilities of robotic systems when leaving standardized industrial settings. In particular, manipulation of bulky objects in narrow environments requires cooperating partners. Actuation redundancies arising in joint manipulation impose the question of load sharing among the interacting partners. In this paper, effort sharing policies are systematically derived from the geometric and dynamic task properties. Three policies are intuitively identified, resulting in unilateral and balanced effort distributions. These policies are evaluated within a novel hierarchical motion generation and control framework. The synthesized system is successfully validated in a three-degrees-of-freedom planar tracking experiment. This evaluation shows an interdependency of the load sharing strategy and the resulting task performance.
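
The three policies lend themselves to a compact illustration: the sketch below splits a required task wrench between robot and human with a single sharing scalar, yielding the two unilateral distributions and the balanced one. The variable names and the 0/0.5/1 values are assumptions standing in for the paper's systematically derived policies.

```python
# A minimal sketch of unilateral vs. balanced effort sharing.
import numpy as np

def share_effort(total_wrench: np.ndarray, policy: str):
    """Split a required task wrench between robot and human partner."""
    alpha = {"robot_only": 1.0, "human_only": 0.0, "balanced": 0.5}[policy]
    robot_share = alpha * total_wrench
    human_share = (1.0 - alpha) * total_wrench
    return robot_share, human_share

total = np.array([10.0, 0.0, 2.5])   # planar task: Fx, Fy, torque
for policy in ("robot_only", "human_only", "balanced"):
    r, h = share_effort(total, policy)
    print(policy, "-> robot:", r, "human:", h)
```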

84 citations


Proceedings ArticleDOI
11 Oct 2010
TL;DR: A robot that provided emotional feedback during the interaction was perceived to be superior to a robot that responded neutrally, highlighting the importance of the interplay of form and function in the attribution of humanness to robots.
Abstract: We examined the effects of a robot's nonverbal response on evaluations of anthropomorphism and other dimensions (e.g., liking, closeness, pleasantness of human-robot interaction) in a case study. Our work both conceptually replicates and extends previous research: On the one hand, we replicated previous findings and generalized them to a different robot type, the iCat. On the other hand, our work extends existing research in that it includes a wider range of dependent variables, with a particular focus on perceptions of anthropomorphism. Taken together, most of our results support the experimental hypotheses for the dependent measures: That is, a robot that provided emotional feedback during the interaction was perceived to be superior to a robot that responded neutrally. Thus, our findings highlight the importance of the interplay of form and function in the attribution of humanness to robots.

82 citations


Proceedings ArticleDOI
11 Oct 2010
TL;DR: The design and implementation of a socially assistive robot that monitors the performance of a user during a seated arm exercise scenario, with the purpose of providing motivation to the user to complete the task and to improve performance are described.
Abstract: We describe the design and implementation of a socially assistive robot that monitors the performance of a user during a seated arm exercise scenario, with the purpose of providing motivation to the user to complete the task and to improve performance. The visual arm pose recognition procedure used by the robot in tracking user performance, the three exercise games, and the methodology behind the human-robot interaction dialogue are presented. A two-condition experimental study was conducted with elderly participants to test the feasibility and effectiveness of the robot exercise system, the results of which demonstrate the viability and usefulness of the system in motivating exercise among elderly users.
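
As a rough idea of what visual arm-pose tracking for such an exercise game can involve, the sketch below computes an elbow angle from tracked joint positions and checks it against a target pose; the joint coordinates, target angle, and tolerance are illustrative assumptions, not the system's actual recognition procedure.

```python
# A minimal sketch of pose checking from tracked arm joints.
import numpy as np

def joint_angle(a, b, c):
    """Angle at joint b (degrees) formed by points a-b-c."""
    v1, v2 = np.asarray(a) - b, np.asarray(c) - b
    cosang = v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

shoulder, elbow, wrist = (0.0, 1.4), (0.3, 1.1), (0.6, 1.4)
angle = joint_angle(shoulder, np.array(elbow), wrist)
target, tolerance = 90.0, 15.0
print(f"elbow angle {angle:.0f} deg ->",
      "pose reached" if abs(angle - target) < tolerance else "keep going")
```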

78 citations


Proceedings ArticleDOI
11 Oct 2010
TL;DR: A manual that shows effective methods for robot therapy is developed and its effectiveness is evaluated in a preliminary study with one caregiver and two patients.
Abstract: Robot therapy is expected to have psychological, physiological and social effects similar to animal therapy. The use of a therapeutic seal robot, Paro, in various facilities for the elderly is spreading around the world. However, caregivers use Paro freely, and the ways in which they use it differ among them; the effects are therefore influenced by their individual skills, and a manual showing effective ways to use Paro is needed. In this paper, such a manual of effective methods for robot therapy is developed and its effectiveness is evaluated in a preliminary study with one caregiver and two patients.

76 citations


Proceedings ArticleDOI
11 Oct 2010
TL;DR: Both pre-interaction emotions and attitudes towards robots, as well as experience with the robot, are important areas to monitor and address in influencing acceptance of healthcare robots in retirement village residents and staff.
Abstract: This study investigated whether attitudes and emotions towards robots predicted acceptance of a healthcare robot in a retirement village population. Residents (n = 32) and staff (n = 21) at a retirement village interacted with a robot for approximately 30 minutes. Prior to meeting the robot, participants had their heart rate and blood pressure measured. The robot greeted the participants, assisted them in taking their vital signs, performed a hydration reminder, told a joke, played a music video, and asked some questions about falls and medication management. Participants were given two questionnaires, one before and one after interacting with the robot. Measures included in both questionnaires were the Robot Attitude Scale (RAS) and the Positive and Negative Affect Schedule (PANAS). After using the robot, participants rated the overall quality of the robot interaction. Both residents and staff reported more favourable attitudes (p < .05) and decreases in negative affect (p < .05) towards the robot after meeting it, compared with before meeting it. Pre-interaction emotions and robot attitudes, combined with post-interaction changes in emotions and robot attitudes, were highly predictive of participants' robot evaluations (R = .88, p < .05). The results suggest that both pre-interaction emotions and attitudes towards robots, as well as experience with the robot, are important areas to monitor and address in influencing acceptance of healthcare robots among retirement village residents and staff. The results support an active cognition model that incorporates a feedback loop based on re-evaluation after experience.
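
To make the reported regression concrete, the sketch below fits the same style of model on synthetic data: pre-interaction attitude and affect scores plus their post-interaction changes predicting a robot evaluation. The variable names, coefficients, and data are assumptions; only the modelling structure mirrors the study, which reports R = .88 on its own sample.

```python
# A toy regression in the spirit of the study's predictive model.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 53                                        # 32 residents + 21 staff
pre_ras, pre_neg = rng.normal(size=(2, n))    # pre-interaction RAS, negative affect
d_ras, d_neg = rng.normal(size=(2, n))        # post - pre changes
X = np.column_stack([pre_ras, pre_neg, d_ras, d_neg])
evaluation = X @ [0.5, -0.3, 0.6, -0.4] + rng.normal(0, 0.3, n)

model = LinearRegression().fit(X, evaluation)
R = np.corrcoef(model.predict(X), evaluation)[0, 1]
print(f"multiple correlation on toy data: R = {R:.2f}")
```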

72 citations


Proceedings ArticleDOI
11 Oct 2010
TL;DR: This work investigated and implemented an assembly task in which the robot acts as an assistant for the human and uses a Bayesian estimation framework to predict assembly duration in order to deliver the next part just in time.
Abstract: When we have to physically interact with a robot, the benchmark for natural and efficient performance is our experience of daily interactions with other humans. This goal is still far off despite significant advances in human-robot interaction. While considerable progress has been made in various areas, ranging from improving the hardware over safety measures to better sensor systems, research on the basic mechanisms of interaction and its technical implementation is still in its infancy. In the following, we give an overview of our own work aiming at improving human-robot interaction and joint action. When humans collaborate to achieve a common goal, the actions of each partner need to be properly coordinated to assure a smooth and efficient workflow. This includes the timing of the actions but also, in the case of physical interaction, their spatial coordination. We thus first investigated how a simple physical interaction, a hand-over task between two humans without verbal communication, is achieved. Our results with a human as receiver, and both humans and robots as delivering agents, show that both the position and the kinematics of the partner's movement are used to increase the confidence in predicting the hand-over in time and space. These results underline that for successful joint action the robot must act predictably for the human partner. However, in a more realistic scenario, robot and human constitute a dynamic system, with each agent predicting and reacting to the actions and intentions of the other. We therefore investigated and implemented an assembly task in which the robot acts as an assistant for the human. Using a Bayesian estimation framework, the robot predicts assembly duration in order to deliver the next part just in time.
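
One plausible reading of the just-in-time delivery step is a recursive Gaussian update of the expected assembly duration; the sketch below illustrates that idea with assumed priors, noise levels, and delivery lead time, not the authors' actual estimator.

```python
# A minimal Bayesian-update sketch for just-in-time part delivery.

def update_duration(mean, var, obs, obs_var=4.0):
    """Conjugate Gaussian update of the assembly-duration belief (seconds)."""
    k = var / (var + obs_var)            # Kalman-style gain
    return mean + k * (obs - mean), (1 - k) * var

mean, var = 30.0, 25.0                   # prior: ~30 s per assembly step
for observed in (28.0, 26.5, 27.2):      # durations of completed steps
    mean, var = update_duration(mean, var, observed)

delivery_lead = 5.0                      # robot needs ~5 s to fetch a part
start_fetch_at = mean - delivery_lead    # schedule so the part arrives on time
print(f"predicted duration {mean:.1f} s -> start fetching at t={start_fetch_at:.1f} s")
```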

70 citations


Proceedings ArticleDOI
11 Oct 2010
TL;DR: This work proposes a set of strategies that allow a robot to identify the referent when the human partner refers to an object giving incomplete information, i.e. an ambiguous description, and proposes the use of an ontology to store and reason on the robot's knowledge to ease clarification, and therefore, improve interaction.
Abstract: In human-robot interaction, a robot must be prepared to handle possible ambiguities generated by a human partner. In this work we propose a set of strategies that allow a robot to identify the referent when the human partner refers to an object giving incomplete information, i.e. an ambiguous description. Moreover, we propose the use of an ontology to store and reason on the robot's knowledge to ease clarification, and therefore, improve interaction. We validate our work through both simulation and two real robotic platforms performing two tasks: a daily-life situation and a game.
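
A toy version of such referent resolution, with a flat dictionary standing in for the ontology, might look like the sketch below; the object attributes and the ask-about-a-discriminating-attribute clarification strategy are illustrative assumptions.

```python
# A minimal sketch of resolving an ambiguous object description.
objects = [
    {"id": "mug1", "type": "mug", "color": "red",  "location": "table"},
    {"id": "mug2", "type": "mug", "color": "blue", "location": "table"},
    {"id": "box1", "type": "box", "color": "red",  "location": "shelf"},
]

def resolve(description: dict):
    """Return the referent, or a clarification question if ambiguous."""
    matches = [o for o in objects
               if all(o.get(k) == v for k, v in description.items())]
    if len(matches) == 1:
        return matches[0]
    # Ambiguous: find an attribute that discriminates among the candidates.
    for attr in ("color", "location"):
        if attr not in description and len({o[attr] for o in matches}) > 1:
            return f"Which {attr}? Options: {sorted({o[attr] for o in matches})}"
    return matches

print(resolve({"type": "mug", "color": "blue"}))  # -> mug2
print(resolve({"type": "mug"}))                   # -> asks about color
```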

69 citations


Proceedings ArticleDOI
11 Oct 2010
TL;DR: The social robot ‘Flobi’ appears as a cartoon-like character and has a ‘hole-free’ design without any visible conjunctions, and its structural design implements exchangeable modular parts.
Abstract: This paper introduces the industrial design of the social robot ‘Flobi’. In total, three key concepts influenced the industrial design: First, the robot head of Flobi appears as a cartoon-like character and has a ‘hole-free’ design without any visible conjunctions. Second, Flobi has dynamic features to display not only primary emotions, but also shame, a typical secondary emotion. Third, the structural design implements exchangeable modular parts. Through modular design, the underlying hardware is quickly accessible and the visual features of the robot (e.g., hairstyle, facial features) can be altered easily. A first study demonstrated the successful implementation of Flobi's dynamic features, and a second study demonstrated that the exchangeable hair modules influence gender-schematic perceptions of the robot.

67 citations


Proceedings ArticleDOI
11 Oct 2010
TL;DR: A motion rendering system is proposed that modifies arbitrary basic movements of a real HFR to add a target emotion at an intended strength; the results of experiments suggest that the method succeeds in adding a target emotion to arbitrary movements.
Abstract: A method for adding a target emotion to arbitrary body movements of a human form robot (HFR) is developed. The added emotion is pleasure, anger, sadness or relaxation. This paper proposes a motion rendering system that modifies arbitrary basic movements of a real HFR to add the target emotion at an intended strength. The system is developed on the assumption that movements can be made emotive by processing them on the basis of the correlations between movement features and expressed emotions. Movement features based on Laban movement analysis (LMA) are adopted. An experiment using a real HFR is conducted to test how well the system adds a target emotion to arbitrary movements at an intended strength. The experimental results suggest that the method succeeds in adding a target emotion to arbitrary movements.
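
A minimal way to picture feature-based emotion rendering is to scale a base trajectory's amplitude and playback speed by emotion-specific gains; in the sketch below, the gains, the two-feature set, and the blending rule are assumptions, far simpler than the LMA-based features the paper uses.

```python
# A toy sketch of rendering emotion onto a base movement.
import numpy as np

# Illustrative (speed_gain, amplitude_gain) per target emotion.
EMOTION_GAINS = {
    "pleasure":   (1.2, 1.3),
    "anger":      (1.5, 1.1),
    "sadness":    (0.6, 0.7),
    "relaxation": (0.8, 1.0),
}

def render(trajectory: np.ndarray, emotion: str, strength: float) -> np.ndarray:
    """Blend a joint-angle trajectory toward an emotional variant.

    strength in [0, 1] sets the intended intensity of the added emotion.
    """
    speed_g, amp_g = EMOTION_GAINS[emotion]
    amp = 1.0 + strength * (amp_g - 1.0)
    center = trajectory.mean(axis=0)
    modified = center + amp * (trajectory - center)
    # Speed is changed by resampling the time axis (fewer samples = faster).
    n = max(2, int(len(trajectory) / (1.0 + strength * (speed_g - 1.0))))
    idx = np.linspace(0, len(trajectory) - 1, n)
    return np.array([modified[int(round(i))] for i in idx])

base = np.sin(np.linspace(0, np.pi, 50))[:, None]  # toy one-joint gesture
print(render(base, "anger", 0.8).shape)            # shorter (faster), wider
```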

Proceedings ArticleDOI
11 Oct 2010
TL;DR: This work presents an investigation of probabilistic modeling for inferring grasp stability based on learning from examples, with the objective of classifying a grasp as stable or unstable before further actions, e.g. lifting, are applied.
Abstract: In this paper, the problem of learning grasp stability in robotic object grasping based on tactile measurements is studied. Although grasp stability modeling and estimation have been studied for a long time, few robots today are capable of demonstrating extensive grasping skills. The main contribution of the work presented here is an investigation of probabilistic modeling for inferring grasp stability based on learning from examples. The main objective is the classification of a grasp as stable or unstable before further actions are applied to it, e.g. lifting. The problem cannot be solved by visual sensing, which is typically used to execute an initial positioning of the robot hand with respect to the object. The output of the classification system can trigger a regrasping step if an unstable grasp is identified. An off-line learning process is implemented and used for reasoning about grasp stability for a three-fingered robotic hand using hidden Markov models. To evaluate the proposed method, experiments are performed both in simulation and on a real robot system.
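
The one-model-per-class scheme the abstract implies can be sketched with the third-party hmmlearn package: fit one Gaussian HMM on tactile sequences from stable grasps and one from unstable grasps, then label a new sequence by log-likelihood. The toy data and model sizes below are assumptions standing in for real tactile measurements.

```python
# A minimal HMM-based grasp-stability classifier sketch (requires hmmlearn).
import numpy as np
from hmmlearn import hmm

rng = np.random.default_rng(0)
# Toy tactile sequences: 20 timesteps x 3 sensor channels per grasp.
stable   = [rng.normal(1.0, 0.1, (20, 3)) for _ in range(10)]
unstable = [rng.normal(0.3, 0.4, (20, 3)) for _ in range(10)]

def fit(seqs):
    model = hmm.GaussianHMM(n_components=2, covariance_type="diag",
                            random_state=0)
    model.fit(np.vstack(seqs), lengths=[len(s) for s in seqs])
    return model

models = {"stable": fit(stable), "unstable": fit(unstable)}

def classify(seq):
    """Pick the class whose HMM assigns the higher log-likelihood."""
    return max(models, key=lambda label: models[label].score(seq))

print(classify(rng.normal(1.0, 0.1, (20, 3))))   # expected: stable
```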

Proceedings ArticleDOI
12 Sep 2010
TL;DR: This paper introduces a robot navigation approach that takes into account human-centered requirements and the collaborative nature of the interaction between the human and the robot.
Abstract: Robot path planning has traditionally concentrated on collision-free paths. For robots that collaborate closely with humans, however, the situation is different in two respects: 1) the humans in the robot's environment are not randomly moving objects but cognitive beings who can deliberately make way for a robot to pass, and 2) the quality of a navigation plan depends less on quantitative efficiency criteria than on its acceptance by humans. In this paper, we introduce a robot navigation approach that takes into account human-centered requirements and the collaborative nature of the interaction between the human and the robot.
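
One common way to encode such human-centered requirements is to add a discomfort cost around each detected person to the planner's cost map, so the planner prefers socially acceptable detours; the Gaussian cost field and weights below are assumptions for illustration, not the authors' method.

```python
# A minimal human-aware cost-map sketch for grid-based planning.
import numpy as np

def comfort_cost(grid_shape, humans, sigma=2.0, weight=5.0):
    """Gaussian discomfort field around each human position."""
    ys, xs = np.mgrid[0:grid_shape[0], 0:grid_shape[1]]
    cost = np.zeros(grid_shape)
    for hy, hx in humans:
        cost += weight * np.exp(-((ys - hy) ** 2 + (xs - hx) ** 2)
                                / (2 * sigma ** 2))
    return cost

total_cost = 1.0 + comfort_cost((10, 10), humans=[(5, 5)])
print(total_cost.round(1))  # feed into any grid planner, e.g. A*
```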

Proceedings ArticleDOI
11 Oct 2010
TL;DR: Design challenges in augmenting a humanoid robot with tactile sensors specifically for interaction with children with autism are presented.
Abstract: The work presented in this paper is part of our investigation in the ROBOSKIN project. The project aims to develop and demonstrate a range of new robot capabilities based on the tactile feedback provided by a robotic skin. One of the project's objectives is to improve human-robot interaction capabilities in the application domain of robot-assisted play. This paper presents design challenges in augmenting a humanoid robot with tactile sensors specifically for interaction with children with autism. It reports on a preliminary study that includes requirements analysis based on a case study evaluation of interactions of children with autism with the child-sized, minimally expressive robot KASPAR. This is followed by the implementation of initial sensory capabilities on the robot that were then used in experimental investigations of tactile interaction with children with autism.

Proceedings ArticleDOI
11 Oct 2010
TL;DR: The results of two tests conducted to understand some of the optimal features that should characterize the robot's voice-based and graphical user interfaces are described and analyzed.
Abstract: Human-robot interaction (HRI) takes place especially through interfaces. The design of such interfaces is a very delicate and crucial phase because it influences the robot's accessibility and usability. In this paper, we describe and analyze the results of two tests conducted to understand some of the optimal features that should characterize the robot's voice-based and graphical user interfaces. Our test platform is an assistive robot developed for the elderly with mild cognitive impairments. The user interfaces must therefore be clear and simple, and ambiguities must be eliminated so as to facilitate the use of the robot and not discourage the elderly population from using new technologies.

Proceedings ArticleDOI
11 Oct 2010
TL;DR: The development of an adaptive therapeutic platform is described which integrates information from wearable sensors carried by a patient or subject as well as sensors placed in the therapeutic environment.
Abstract: People with autism are known to have deficits in processing emotional states, both their own and those of others. A humanoid robot, FACE (Facial Automation for Conveying Emotions), capable of expressing and conveying emotions and empathy, has been constructed to enable autistic children and adults to better deal with emotional and expressive information. We describe the development of an adaptive therapeutic platform which integrates information from wearable sensors carried by a patient or subject as well as sensors placed in the therapeutic environment. Through custom-developed control and data-processing algorithms, the expressions and movements of FACE are tuned and modulated to harmonize with the feelings of the subject as inferred from their physiological and behavioral correlates. Preliminary results demonstrating the potential of adaptive therapy are presented.

Proceedings ArticleDOI
11 Oct 2010
TL;DR: A controlled multimodal training system is presented for transferring the motor and cognitive skills involved in industrial maintenance and assembly, including fine motor control and bi-manual coordination skills.
Abstract: Industrial maintenance and assembly is a very complex task involving both cognitive skills (procedural skills) and motor skills (fine motor control and bi-manual coordination skills). This paper presents a controlled multimodal training system for transferring the motor and cognitive skills involved in these tasks. The new platform provides different multimodal aids and learning strategies that help and guide users during their training process. One of the main features of this system is its flexibility to adapt itself to the task demands and to the users' preferences and needs, supporting different configurations. To address bi-manual operations the platform offers different alternatives; one of them is a set-up composed of a haptic device that tracks the motion of the operator's dominant hand and simulates the physical interaction within the virtual environment, together with a marker-less motion capture system that tracks the motion of the other hand in real time.

Proceedings ArticleDOI
11 Oct 2010
TL;DR: An ergonomic vibrotactile feedback device for the human arm is presented that can be used for a large spectrum of applications and a wide range of arm diameters, since its vibration segments are self-aligning to their intended positions.
Abstract: This paper presents an ergonomic vibrotactile feedback device for the human arm. Thanks to the developed concept, the device can be used for a large spectrum of applications and a wide range of arm diameters, since its vibration segments are self-aligning to their intended positions. Furthermore, the device improves user convenience and freedom of movement, as it is battery powered and controlled through a wireless communication interface. Vibrotactile stimuli are used to give collision feedback or guidance information to the human arm when interacting with a Virtual Reality scenario. The usefulness of this device has been shown in a Virtual Reality automotive assembly verification and in a telerobotic system.
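
A minimal mapping from virtual collision distance to vibration amplitude might look like the following; the distance threshold and 8-bit amplitude range are assumptions about how such a device could be driven, not the device's actual control law.

```python
# A toy distance-to-vibration mapping for collision feedback.

def vibration_level(distance_m: float, max_dist: float = 0.2) -> int:
    """Map distance-to-collision to a 0..255 vibration amplitude."""
    if distance_m >= max_dist:
        return 0                                   # no feedback when far away
    closeness = 1.0 - distance_m / max_dist
    return int(round(255 * closeness))

for d in (0.25, 0.15, 0.05, 0.0):
    print(f"distance {d:.2f} m -> amplitude {vibration_level(d)}")
```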

Proceedings ArticleDOI
11 Oct 2010
TL;DR: This paper presents results from a video human-robot interaction study in which participants viewed a video in which an appearance-constrained Pioneer robot used dog-inspired affective cues to communicate affinity and relationship with its owner and a guest through proxemics, body movement and orientation, and camera orientation.
Abstract: This paper presents results from a video human-robot interaction (VHRI) study in which participants viewed a video in which an appearance-constrained Pioneer robot used dog-inspired affective cues to communicate affinity and relationship with its owner and a guest through proxemics, body movement and orientation, and camera orientation. The findings suggest that even with the limited modalities for non-verbal expression offered by a Pioneer robot, which does not have a dog-like appearance, these cues were effective for non-verbal affective communication.

Proceedings ArticleDOI
11 Oct 2010
TL;DR: This field study shows the tendency of users to ascribe an identity of its own to the teleoperated android robot Geminoid HI-1, independent of the identity of the controlling person, and reveals tendencies to treat the android robot as a social agent.
Abstract: In the near future, artificial social agents, embodied as virtual agents or as robots with humanoid appearance, will be placed in public settings and used as interaction tools. Considering the uncanny valley effect and images of robots as a threat to humanity, a study of the acceptance and handling of such an interaction tool by the broad public is of great interest. The following study is based on qualitative methods of interaction analysis, focusing on tendencies in people's ways of controlling or perceiving a teleoperated android robot in an open public space. This field study shows the tendency of users to ascribe an identity of its own to the teleoperated android robot Geminoid HI-1, independent of the identity of the controlling person. Both sides of the interaction unit were analyzed for 1) verbal cues about identity presentation on the side of the teleoperator controlling the robot and 2) verbal cues about identity perception of Geminoid HI-1 on the side of the interlocutor talking to the robot. The study reveals identity-creation, identity-switching, identity-mediation and identity-imitation of the teleoperators' own identity cues, and the interlocutors' use of metaphorical language showing forms of anthropomorphizing and mentalizing the android robot during interaction. Both sides of the interaction unit thus confer an ‘incorporated identity’ on the android robot Geminoid HI-1 and reveal tendencies to treat the android robot as a social agent.

Proceedings ArticleDOI
01 Sep 2010
TL;DR: A SAR architecture is developed that facilitates multiple task-oriented interactions between a user and a robot agent and accommodates a variety of inputs, tasks, and interaction modalities that are used to provide relevant, real-time feedback to the participant.
Abstract: New approaches to rehabilitation and health care have developed due to advances in technology and human robot interaction (HRI). Socially assistive robotics (SAR) is a subcategory of HRI that focuses on providing assistance through hands-off interactions. We have developed a SAR architecture that facilitates multiple task-oriented interactions between a user and a robot agent. The architecture accommodates a variety of inputs, tasks, and interaction modalities that are used to provide relevant, real-time feedback to the participant. We have implemented the architecture and validated its technological feasibility in a small pilot study in which a SAR agent led three post-stroke individuals through an exercise scenario. In the following, we present our architecture design, and the results of the feasibility study.

Proceedings ArticleDOI
11 Oct 2010
TL;DR: This paper presents some preliminary results on RemoTouch, a system that enables experiences of remote touch, consisting of an avatar equipped with an instrumented glove and a user wearing tactile displays that allow the remote tactile interaction to be felt.
Abstract: This paper presents some preliminary results on RemoTouch, a system that enables experiences of remote touch. The system consists of an avatar equipped with an instrumented glove and a user wearing tactile displays that allow the remote tactile interaction to be felt. The main features of RemoTouch are that it is a wearable system and that a human avatar is used to collect remote tactile interaction data. New paradigms of tactile communication can be designed around the RemoTouch system. Two simple experiences are reported to show the potential of the proposed remote touch architecture.

Proceedings ArticleDOI
11 Oct 2010
TL;DR: This paper explores the relationship between capabilities of robots portrayed in popular science fiction films and students' expectations about a real robot, and how an empirical evaluation of cultural artifacts can inform the study of human-robot interaction.
Abstract: Because interacting with a robot is a novel experience for most adults, expectations about a robot's capabilities must come from sources other than past experiences. This paper explores the relationship between capabilities of robots portrayed in popular science fiction films and students' expectations about a real robot. A content analysis of 12 American science fiction films showed that fictional robots reliably display cognitive capabilities, but do not consistently exhibit many humanlike social behaviors. Survey data collected from students follow the same basic patterns: people expect robots to have humanlike cognitive capabilities, but not social capabilities. The results are discussed in terms of how an empirical evaluation of cultural artifacts can inform the study of human-robot interaction.

Proceedings ArticleDOI
03 Dec 2010
TL;DR: The overall system can provide both cutaneous and kinesthetic feedback and improve the fidelity of the haptic interaction, and the performance of the cutaneous device in a contour-following task has been evaluated.
Abstract: A new prototype of a portable device for haptic interaction with virtual environments is presented. It is a lightweight interface for the fingertips, designed to provide cutaneous feedback and to display the contact/non-contact transition in highly immersive virtual environments. The second version of the interface features a force sensor for controlling the force on the fingertip during contact, assuring better haptic feedback. In this paper the kinematics, the mechanical design and the improved control system are described. The device has been mounted on a kinesthetic haptic interface which tracks its position: in this configuration, the overall system can provide both cutaneous and kinesthetic feedback and improve the fidelity of the haptic interaction. Finally, the performance of the cutaneous device in a contour-following task has been evaluated.
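
The role of the added force sensor can be illustrated with a simple closed loop: a proportional controller drives the measured fingertip force toward a reference during contact. The gain, toy plant model, and command convention below are assumptions, not the device's actual controller.

```python
# A minimal proportional fingertip-force control sketch.

def force_control_step(f_measured: float, f_reference: float,
                       command: float, kp: float = 0.05) -> float:
    """One control cycle: adjust the actuator command from the force error."""
    error = f_reference - f_measured
    return command + kp * error     # larger command presses harder on the fingertip

command, f_meas = 0.0, 0.0
for _ in range(5):                  # a few control cycles toward 1 N of contact
    command = force_control_step(f_meas, f_reference=1.0, command=command)
    f_meas = 8.0 * command          # toy plant: force proportional to command
    print(f"command {command:.3f} -> force {f_meas:.2f} N")
```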

Proceedings ArticleDOI
11 Oct 2010
TL;DR: This paper investigated the suitability of Support Vector Machine (SVM) classifiers for identification of locomotion intentions from surface electromyography (sEMG) data, and a phase-dependent approach was employed in order to contextualize muscle activation signals.
Abstract: The next generation of tools for rehabilitation robotics requires advanced human-robot interfaces able to activate the device as soon as the patient's motion intention arises. This paper investigates the suitability of Support Vector Machine (SVM) classifiers for the identification of locomotion intentions from surface electromyography (sEMG) data. A phase-dependent approach, based on foot-contact and foot push-off events, is employed in order to contextualize muscle activation signals. Good accuracy is demonstrated on experimental data from three healthy subjects. Classification has also been tested for different subsets of EMG features and muscles, aiming to identify the minimal setup required for the control of an EMG-based exoskeleton for rehabilitation purposes.
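
A phase-dependent SVM pipeline of the kind described can be sketched with scikit-learn: train one classifier per gait phase on sEMG feature vectors. The synthetic features, labels, and two-phase split below are assumptions standing in for real recordings.

```python
# A minimal phase-dependent SVM sketch for sEMG intention classification.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n, n_features = 200, 8                          # e.g. RMS per muscle channel
X = rng.normal(size=(n, n_features))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # toy intention labels
phase = rng.integers(0, 2, size=n)              # 0 = foot contact, 1 = push-off

# One classifier per gait phase, as in the phase-dependent approach.
classifiers = {}
for p in (0, 1):
    Xtr, Xte, ytr, yte = train_test_split(X[phase == p], y[phase == p],
                                          random_state=0)
    classifiers[p] = SVC(kernel="rbf").fit(Xtr, ytr)
    print(f"phase {p}: test accuracy {classifiers[p].score(Xte, yte):.2f}")
```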

Proceedings ArticleDOI
11 Oct 2010
TL;DR: This paper characterizes the Hokuyo UBG-04LX-F01 and analyzes the effect of the target properties, such as incidence angle, color, brightness, and material, by measuring the distance to a target.
Abstract: This paper presents a characterization of the Hokuyo UBG-04LX-F01 laser rangefinder (LRF). The Hokuyo LRFs are suitable for small mobile robots due to their small size and light weight. In particular, the scan frequency of the Hokuyo UBG-04LX-F01 is higher than those of previous LRFs in its price range. However, there has been no research characterizing this LRF for practical use. Therefore, this paper characterizes the Hokuyo UBG-04LX-F01 and analyzes the effect of target properties, such as incidence angle, color, brightness, and material, on the measured distance to a target. The experimental results show that the measurement error is strongly influenced by the incidence angle and the brightness of the target surface. From the experimental results, a calibration model is also proposed to accurately measure the distance to a target.
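
One simple form such a calibration model could take is a regression of range error on incidence angle and brightness, subtracted from raw readings; the quadratic form and the toy measurements below are assumptions, not the paper's fitted model.

```python
# A toy range-error calibration sketch for a laser rangefinder.
import numpy as np

# Inputs: incidence angle (deg), surface brightness (0..1); target: error (mm).
angle      = np.array([0, 20, 40, 60, 0, 20, 40, 60], dtype=float)
brightness = np.array([0.9, 0.9, 0.9, 0.9, 0.2, 0.2, 0.2, 0.2])
error_mm   = np.array([1, 3, 8, 18, 4, 7, 14, 27], dtype=float)

A = np.column_stack([np.ones_like(angle), angle, angle**2, brightness])
coef, *_ = np.linalg.lstsq(A, error_mm, rcond=None)

def corrected_range(raw_mm, angle_deg, bright):
    """Subtract the predicted systematic error from a raw range reading."""
    predicted_error = coef @ [1.0, angle_deg, angle_deg**2, bright]
    return raw_mm - predicted_error

print(f"corrected: {corrected_range(1000.0, 40.0, 0.2):.1f} mm")
```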

Proceedings ArticleDOI
11 Oct 2010
TL;DR: This work proposes a robot control architecture building upon the Articulated Communicator Engine (ACE) that was developed to allow virtual agents to flexibly realize planned multi-modal behavior representations on the spot.
Abstract: The generation of communicative, speech-accompanying robot gesture is still largely unexplored. We present an approach to enable the humanoid robot ASIMO to flexibly produce speech and co-verbal gestures at run-time, while not being limited to a pre-defined repertoire of motor actions. Since much research has already been dedicated to this challenge within the domain of virtual conversational agents, we build upon the experience gained from the development of a speech and gesture production model used for the virtual human Max. We propose a robot control architecture building upon the Articulated Communicator Engine (ACE) that was developed to allow virtual agents to flexibly realize planned multi-modal behavior representations on the spot. Our approach tightly couples ACE with ASIMO's perceptuo-motor system, combining conceptual representation and planning with motor control primitives for speech and arm movements of a physical robot body. First results of both gesture production and speech synthesis using ACE and the MARY text-to-speech system are presented and discussed.

Proceedings ArticleDOI
11 Oct 2010
TL;DR: A layered system, inspired by tactile sensing in humans, is proposed for building artificial somatosensory maps in robots, and experiments in simulation are used to validate the approach.
Abstract: In this paper a framework for representing tactile information in robots is discussed. Control models exploiting tactile sensing are fundamental in social human-robot interaction tasks. The difficulties arising in rendering the sense of touch in robots lie at different levels: both representation and computational issues must be considered. A layered system, inspired by tactile sensing in humans, is proposed for building artificial somatosensory maps in robots. Experiments in simulation are used to validate the approach.

Proceedings ArticleDOI
11 Oct 2010
TL;DR: This paper proposes a model of episodic memory, integrates it with a decision-making module based on a Hierarchical Task Network (HTN) planner, and presents a prototype implementation demonstrating the preliminary results.
Abstract: In this paper, we address the question of how to create episodic-memory-based long-term affective interactions with a human-like robot. A key challenge for long-term interaction is the recall of past important events during conversation. We suggest that episodic memory is a core concept for realizing this intelligence. In this paper, we propose a model of episodic memory and integrate it with a decision-making module based on a Hierarchical Task Network (HTN) planner. Plans generated by the HTN planner are executed by a finite-state-machine (FSM) based dialogue system in order to produce appropriate responses. Finally, we present a prototype implementation demonstrating the preliminary results we obtained.
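
A bare-bones episodic store that such a dialogue planner could query might look like the sketch below; the event schema and the importance-based recall are illustrative assumptions, not the proposed model.

```python
# A minimal episodic-memory data structure sketch.
from dataclasses import dataclass, field

@dataclass
class Episode:
    time: float                 # when the event happened
    people: set                 # who was involved
    topic: str                  # what it was about
    importance: float           # salience for later recall

@dataclass
class EpisodicMemory:
    episodes: list = field(default_factory=list)

    def store(self, episode: Episode):
        self.episodes.append(episode)

    def recall(self, person: str, k: int = 1):
        """Retrieve the most important past episodes involving a person."""
        hits = [e for e in self.episodes if person in e.people]
        return sorted(hits, key=lambda e: e.importance, reverse=True)[:k]

memory = EpisodicMemory()
memory.store(Episode(1.0, {"anna"}, "birthday", importance=0.9))
memory.store(Episode(2.0, {"anna"}, "weather",  importance=0.2))
print(memory.recall("anna"))    # -> the birthday episode first
```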

Proceedings ArticleDOI
Hiroki Kawasaki, Hiroyuki Iizuka, Shin Okamoto, Hideyuki Ando, Taro Maeda
11 Oct 2010
TL;DR: This paper proposes an approach for sharing first-person perspectives to establish collaboration and to transmit a skill from one person to another, and presents a view-sharing system developed to realize such interaction.
Abstract: When two distant persons establish cooperative interaction, how they share their motions and sensations and how they adjust or revise their motions are important for cooperation. In this paper, we propose an approach for sharing first-person perspectives to establish collaboration and to transmit a skill from one person to another. We developed a view-sharing system to realize such interaction. To investigate the fundamental behavioral properties of humans using our system, we examined a simple collaborative behavior and found that a situation in which subjects see only their partner's view improves velocity following. By exploiting this property, we transmitted the basic skill required for the Diabolo trick to non-skilled persons. Through our system, the performances of non-skilled persons assisted by a skilled person surpassed their individual performances.