
Showing papers by "Keisuke Nakamura published in 2022"



Proceedings ArticleDOI
29 Aug 2022
TL;DR: The experimental results show that with the proposed method, Haru can achieve performance similar to learning from explicit feedback, eliminating the need for human users to familiarize themselves with a training interface in advance and resulting in an unobtrusive learning process.
Abstract: Social robots that are able to express emotions can potentially improve humans' well-being. Whether and how they can learn naturally from their interactions with human beings will be key to their success and acceptance by ordinary people. In this paper, we propose shaping the affective behaviors of the social robot Haru with continuous rewards predicted from implicit facial feedback, via human-centered reinforcement learning. The implicit facial feedback is estimated as valence and arousal using Russell's circumplex model, which provides a more accurate estimate of the human user's subtle psychological changes, resulting in more effective robot behavior learning. The whole experiment was conducted on the desktop robot Haru, which is primarily used to study emotional interactions with humans in different scenarios. Our experimental results show that with the proposed method, Haru can achieve performance similar to learning from explicit feedback, eliminating the need for human users to familiarize themselves with a training interface in advance and resulting in an unobtrusive learning process.
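The valence-arousal-to-reward mapping described in this abstract could be sketched as follows. The weights, the reward formula, and the bandit-style update rule are illustrative assumptions, not the paper's actual implementation; only the idea of turning a point on Russell's circumplex model into a continuous reward comes from the source.

```python
def facial_reward(valence, arousal, w_valence=1.0, w_arousal=0.5):
    """Map a face reading on Russell's circumplex model to a scalar reward.

    `valence` and `arousal` are assumed normalized to [-1, 1]; the
    weights are hypothetical, chosen only for illustration.
    """
    # Positive valence (pleasure) drives the reward; arousal scales
    # how intensely that valence counts.
    return w_valence * valence + w_arousal * valence * abs(arousal)

def update_preference(q, behavior, valence, arousal, lr=0.1):
    """Shape a behavior's estimated value with the continuous reward."""
    r = facial_reward(valence, arousal)
    q[behavior] = q[behavior] + lr * (r - q[behavior])
    return q
```

A smile with high arousal yields a strongly positive reward, a frown a negative one, so the robot's behavior values drift toward expressions the user implicitly approves of.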

Book ChapterDOI
TL;DR: In this article, a people perception software architecture and its implementation are presented, focused on the information of interest from the point of view of a social robot. The key modules used to extract the different people features, such as body-part locations, face and hand information, and speech, from a set of possible devices and configurations are described.
Abstract: This article presents a people perception software architecture and its implementation, focused on the information of interest from the point of view of a social robot. The key modules employed to obtain the different people features, such as body-part locations, face and hand information, and speech, from a set of possible devices and configurations are described. The association and combination of these features using a temporal and geometric fusion system are explained in detail. A high-level interface for human-robot interaction using the resulting information about the fused people is proposed. The paper presents experimental results evaluating the relevant aspects of the system.
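The geometric side of the fusion stage described above can be illustrated with a toy association routine. This is a hypothetical stand-in, not the chapter's system: detections are assumed to be (x, y) points in a shared frame, and `max_dist` is an assumed gating threshold.

```python
def associate(bodies, faces, max_dist=0.5):
    """Greedily pair body and face detections by 2-D Euclidean distance.

    Returns a list of (body_index, face_index) pairs; faces farther
    than `max_dist` from every body remain unassociated.
    """
    pairs, used = [], set()
    for bi, b in enumerate(bodies):
        best, best_d = None, max_dist
        for fi, f in enumerate(faces):
            if fi in used:
                continue
            d = ((b[0] - f[0]) ** 2 + (b[1] - f[1]) ** 2) ** 0.5
            if d < best_d:
                best, best_d = fi, d
        if best is not None:
            pairs.append((bi, best))
            used.add(best)
    return pairs
```

A real fusion system would add a temporal filter (e.g., track IDs across frames) and an optimal assignment rather than this greedy pass, but the gating-by-distance idea is the same.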

Book ChapterDOI
TL;DR: In this article, the authors propose an empathic and adaptive framework for robot storytelling that enables the social robot Haru to learn from human teachers, making human teachers integral to the design of the system and providing a personalized storytelling experience.
Abstract: In previous studies applying storytelling to robotics, the emotions and actions of robots are usually pre-determined, resulting in a homogeneous storytelling style. In this paper, we propose an empathic and adaptive framework for robot storytelling that enables the social robot Haru to learn from human teachers. In this framework, Haru performs empathic storytelling based on the human teacher's voice, and then changes its narrative style (e.g., pitch, emotion, action) to capture the listener's attention. The whole experiment was conducted on the social robot Haru, whose communicative modalities include face and body movements, voice, and non-verbal sounds, which have great potential for storytelling. The affective storytelling robot was compared to a neutral one and to human teachers. Preliminary results show that the storytelling robot can make human teachers integral to the design of the system and provide a personalized storytelling experience. Moreover, participants had more positive attitudes toward storytelling by the affective robot than by the neutral one.
Keywords: Storytelling, Social robot, Empathy, Human-robot interaction
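The adaptive loop the chapter describes, measuring listener attention and adjusting narrative style in response, could be sketched like this. The style knobs, target, and step size are all assumptions made for illustration; the chapter does not specify this update rule.

```python
def adapt_style(style, attention, target=0.7, step=0.1):
    """Nudge narrative-style parameters when listener attention drops.

    `style` holds normalized knobs (e.g., pitch variation, gesture
    rate) in [0, 1]; `attention` in [0, 1] is an assumed engagement
    estimate from the listener's behavior.
    """
    if attention < target:
        # Low attention: make the delivery more expressive.
        style = {k: min(1.0, v + step) for k, v in style.items()}
    return style
```

Run once per story segment, this keeps an engaged listener's settings stable and ratchets up expressiveness when attention falls.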

Proceedings ArticleDOI
23 Oct 2022
TL;DR: In this paper, a human-in-the-loop reinforcement learning mechanism is proposed to help robots learn emotional behavior from the implicit feedback of facial features, with which the robot can quickly understand and dynamically adapt to individual preferences and achieve performance similar to learning from explicit feedback.
Abstract: We propose a human-in-the-loop reinforcement learning mechanism to help robots learn emotional behavior. Unlike previous methods of providing explicit feedback via keyboard presses or mouse clicks, we provide a more natural way, facial expressions, for ordinary people to train social robots to perform social tasks according to their preferences. The whole experiment is carried out on the desktop robot Haru, which is mainly used for research on emotion and empathy. Our experimental results show that by learning from the implicit feedback of facial features, Haru can quickly understand and dynamically adapt to individual preferences, and achieve performance similar to learning from explicit feedback. In addition, we observe that recognition errors in the human feedback cause a "temporary regress" in the robot's learning performance, which is most pronounced at the beginning of the training process. This phenomenon is shown to be correlated with the accuracy of recognizing negative implicit feedback.
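The recognition-error effect the paper reports can be modeled with a simple noise channel: with some probability the robot misreads the facial expression and receives the opposite feedback, which mis-rewards the behavior and can produce the "temporary regress" early in training. This is an illustrative model, not the paper's experimental setup.

```python
import random

def noisy_feedback(true_reward, error_rate, rng=random):
    """Simulate facial-feedback recognition error.

    With probability `error_rate`, the sign of the human's intended
    feedback is flipped before the learner sees it. `error_rate` is a
    hypothetical parameter; the paper measures recognition accuracy
    empirically rather than setting it.
    """
    if rng.random() < error_rate:
        return -true_reward
    return true_reward
```

At `error_rate = 0` the learner sees the true signal; as the rate grows, early value estimates are pushed the wrong way until enough correct feedback accumulates, matching the reported correlation with negative-feedback recognition accuracy.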


Proceedings ArticleDOI
23 May 2022
TL;DR: A 3-stage signalling framework, inspired by a biological model, to trigger a social robot's bottom-up reactive behavior; it relies primarily on a knowledge ontology that defines the characteristics of the social robot and a querying mechanism that correlates the perceived stimuli with the ontology to trigger the reactive behavior.
Abstract: This paper describes the development of a 3-stage signalling framework, inspired by a biological model, to trigger a social robot's bottom-up reactive behavior. In the first stage, low-level firing of stimuli from external sources is constructed through perception grounding. This is followed by a saliency classifier, which fires up high-level salient signals that require attention and are used to trigger the robot's reactive behavior. The whole framework relies primarily on the knowledge ontology that defines the characteristics of the social robot and the querying mechanism that correlates the perceived stimuli with the ontology to trigger the reactive behavior. We evaluated the performance of our system with timing metrics and achieved good results for our application.
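The three stages, perception grounding, saliency classification, and ontology querying, could be sketched as a minimal pipeline. The stimulus names, threshold, and ontology entries here are hypothetical placeholders, not the paper's actual knowledge base.

```python
# Stage 3's knowledge base: a toy ontology mapping salient stimuli to
# reactive behaviors that suit the robot's defined characteristics.
ONTOLOGY = {
    "loud_sound": "startle",
    "face_appears": "greet",
}

def react(stimuli, saliency_threshold=0.6):
    """Run (name, intensity) stimuli through the 3-stage pipeline.

    Stage 1 is assumed done upstream (perception grounding produced the
    stimuli); stage 2 is a saliency gate; stage 3 queries the ontology.
    """
    # Stage 2: keep only signals intense enough to require attention.
    salient = [name for name, level in stimuli if level >= saliency_threshold]
    # Stage 3: correlate salient stimuli with the ontology.
    return [ONTOLOGY[name] for name in salient if name in ONTOLOGY]
```

A sub-threshold face detection produces no reaction, while a loud sound above the gate triggers the ontology-defined startle behavior.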