
Showing papers on "Humanoid robot published in 2008"


Journal ArticleDOI
TL;DR: This paper examines learning of complex motor skills with human-like limbs, and combines the idea of modular motor control by means of motor primitives as a suitable way to generate parameterized control policies for reinforcement learning with the theory of stochastic policy gradient learning.

921 citations


Proceedings ArticleDOI
14 Oct 2008
TL;DR: The appearance design, the mechanisms, the electrical systems, specifications, and features upgraded from its prototype are introduced; HRP-3 succeeds HRP-2, the humanoid robotics platform developed in phase two of HRP.
Abstract: In this paper, the development of humanoid robot HRP-3 is presented. HRP-3, which stands for Humanoid Robotics Platform-3, is a human-size humanoid robot developed as the succeeding model of HRP-2. One of the features of HRP-3 is that its main mechanical and structural components are designed to prevent the penetration of dust or spray. Another is that its wrist and hand are newly designed to improve manipulation. Software for a humanoid robot in a real environment is also improved. We also include information on the mechanical features of HRP-3 together with the newly developed hand, as well as the technologies implemented in the HRP-3 prototype. Electrical features and some experimental results using HRP-3 are also presented.

716 citations


Proceedings ArticleDOI
19 Aug 2008
TL;DR: The iCub is a humanoid robot for research in embodied cognition that will be able to crawl on all fours and sit up to manipulate objects; its hands have been designed to support sophisticated manipulation skills.
Abstract: We report on the iCub, a humanoid robot for research in embodied cognition. At 104 cm tall, the iCub has the size of a three-and-a-half-year-old child. It will be able to crawl on all fours and sit up to manipulate objects. Its hands have been designed to support sophisticated manipulation skills. The iCub is distributed as Open Source following the GPL/FDL licenses. The entire design is available for download from the project homepage and repository (http://www.robotcub.org). In the following, we will concentrate on the description of the hardware and software systems. The scientific objectives of the project and its philosophical underpinning are described extensively elsewhere [1].

573 citations


Journal ArticleDOI
TL;DR: The proposed network model, coordinating the physical body of a humanoid robot through high-dimensional sensori-motor control, also successfully situated itself within a physical environment and suggests that it is not only the spatial connections between neurons but also the timescales of neural activity that act as important mechanisms leading to functional hierarchy in neural systems.
Abstract: It is generally thought that skilled behavior in human beings results from a functional hierarchy of the motor control system, within which reusable motor primitives are flexibly integrated into various sensori-motor sequence patterns. The underlying neural mechanisms governing the way in which continuous sensori-motor flows are segmented into primitives and the way in which series of primitives are integrated into various behavior sequences have, however, not yet been clarified. In earlier studies, this functional hierarchy has been realized through the use of explicit hierarchical structure, with local modules representing motor primitives in the lower level and a higher module representing sequences of primitives switched via additional mechanisms such as gate-selecting. When sequences contain similarities and overlap, however, a conflict arises in such earlier models between generalization and segmentation, induced by this separated modular structure. To address this issue, we propose a different type of neural network model. The current model neither makes use of separate local modules to represent primitives nor introduces explicit hierarchical structure. Rather than forcing architectural hierarchy onto the system, functional hierarchy emerges through a form of self-organization that is based on two distinct types of neurons, each with different time properties ("multiple timescales"). Through the introduction of multiple timescales, continuous sequences of behavior are segmented into reusable primitives, and the primitives, in turn, are flexibly integrated into novel sequences. In experiments, the proposed network model, coordinating the physical body of a humanoid robot through high-dimensional sensori-motor control, also successfully situated itself within a physical environment. Our results suggest that it is not only the spatial connections between neurons but also the timescales of neural activity that act as important mechanisms leading to functional hierarchy in neural systems.
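
The multiple-timescales idea can be illustrated with a minimal leaky-integrator network sketch. This is a toy illustration with made-up sizes, weights, and time constants, not the authors' actual model: units with a small time constant respond quickly to input, while units with a large time constant integrate slowly, which is the property the paper exploits to separate fast primitives from slowly switching sequences.

```python
import numpy as np

def mtrnn_step(u, y, x, W, tau):
    """One leaky-integrator update: u <- (1 - 1/tau)*u + (1/tau)*(W y + x).
    Units with small tau track their input quickly; units with large tau
    change slowly, giving the network two intrinsic timescales."""
    u = (1.0 - 1.0 / tau) * u + (1.0 / tau) * (W @ y + x)
    return u, np.tanh(u)

rng = np.random.default_rng(0)
n_fast, n_slow = 8, 4
n = n_fast + n_slow
tau = np.concatenate([np.full(n_fast, 2.0), np.full(n_slow, 50.0)])  # two timescales
W = rng.normal(scale=0.5, size=(n, n))

u = np.zeros(n)
y = np.tanh(u)
x = np.zeros(n)
x[:n_fast] = 1.0          # external drive reaches the fast units only

traj = []
for _ in range(100):
    u, y = mtrnn_step(u, y, x, W, tau)
    traj.append(y.copy())
traj = np.asarray(traj)

# After a single step the fast units have already responded to the drive,
# while the slow units have barely moved; the slow units then integrate
# the fast units' activity gradually over many steps.
```
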

481 citations


Book ChapterDOI
08 Dec 2008
TL;DR: This paper extends previous work on policy learning from the immediate reward case to episodic reinforcement learning, resulting in a general, common framework also connected to policy gradient methods and yielding a novel algorithm for policy learning that is particularly well-suited for dynamic motor primitives.
Abstract: Many motor skills in humanoid robotics can be learned using parametrized motor primitives as done in imitation learning. However, most interesting motor learning problems are high-dimensional reinforcement learning problems often beyond the reach of current methods. In this paper, we extend previous work on policy learning from the immediate reward case to episodic reinforcement learning. We show that this results in a general, common framework also connected to policy gradient methods and yielding a novel algorithm for policy learning that is particularly well-suited for dynamic motor primitives. The resulting algorithm is an EM-inspired algorithm applicable to complex motor learning tasks. We compare this algorithm to several well-known parametrized policy search methods and show that it outperforms them. We apply it in the context of motor learning and show that it can learn a complex Ball-in-a-Cup task using a real Barrett WAM™ robot arm.
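
The reward-weighted, EM-style update can be sketched on a toy problem. This is a hedged illustration of the general idea only, not the paper's algorithm, task, or motor-primitive parameterization: sample parameter perturbations, evaluate their return, and re-estimate the policy parameters as the reward-weighted average of the samples.

```python
import numpy as np

def reward_weighted_update(thetas, rewards):
    """EM-style step: new parameters = reward-weighted average of the
    sampled (perturbed) parameters. Rewards must be non-negative."""
    w = rewards / rewards.sum()
    return w @ thetas

rng = np.random.default_rng(1)
target = np.array([0.7, -0.3])                           # unknown optimum of the toy task
reward = lambda th: np.exp(-np.sum((th - target) ** 2))  # strictly positive return

theta = np.zeros(2)
for _ in range(50):
    thetas = theta + rng.normal(scale=0.3, size=(30, 2))  # exploration noise
    rewards = np.array([reward(t) for t in thetas])
    theta = reward_weighted_update(thetas, rewards)
```

Because the weights are positive and normalized, each update stays inside the convex hull of the sampled parameters, which is one reason this family of updates is attractive for hardware experiments such as the Ball-in-a-Cup task.
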

411 citations


Journal ArticleDOI
TL;DR: It is shown that by leveraging advances in robotics, an interface based on EEG can be used to command a partially autonomous humanoid robot to perform complex tasks such as walking to specific locations and picking up desired objects.
Abstract: We describe a brain-computer interface for controlling a humanoid robot directly using brain signals obtained non-invasively from the scalp through electroencephalography (EEG). EEG has previously been used for tasks such as controlling a cursor and spelling a word, but it has been regarded as an unlikely candidate for more complex forms of control owing to its low signal-to-noise ratio. Here we show that by leveraging advances in robotics, an interface based on EEG can be used to command a partially autonomous humanoid robot to perform complex tasks such as walking to specific locations and picking up desired objects. Visual feedback from the robot's cameras allows the user to select arbitrary objects in the environment for pick-up and transport to chosen locations. Results from a study involving nine users indicate that a command for the robot can be selected from four possible choices in 5 s with 95% accuracy. Our results demonstrate that an EEG-based brain-computer interface can be used for sophisticated robotic interaction with the environment, involving not only navigation as in previous applications but also manipulation and transport of objects.

388 citations


Journal ArticleDOI
TL;DR: This work presents a general model for learning object affordances using Bayesian networks integrated within a general developmental architecture for social robots and demonstrates successful learning in the real world by having a humanoid robot interact with objects.
Abstract: Affordances encode relationships between actions, objects, and effects. They play an important role in basic cognitive capabilities such as prediction and planning. We address the problem of learning affordances through the interaction of a robot with the environment, a key step toward understanding the world's properties and developing social skills. We present a general model for learning object affordances using Bayesian networks integrated within a general developmental architecture for social robots. Since learning is based on a probabilistic model, the approach is able to deal with uncertainty, redundancy, and irrelevant information. We demonstrate successful learning in the real world by having a humanoid robot interact with objects. We illustrate the benefits of the acquired knowledge in imitation games.

385 citations


Journal ArticleDOI
TL;DR: The authors studied whether an embodied human-like robot would elicit stronger anthropomorphic interactions than would a software agent, and whether physical presence moderated this effect, finding that participants were more engaged, disclosed less undesirable behavior, and forgot more with the robot versus the agent, while they ate less and anthropomorphized most with the collocated robot.
Abstract: People's physical embodiment and presence increase their salience and importance. We predicted people would anthropomorphize an embodied humanoid robot more than a robot-like agent, and a collocated more than a remote robot. A robot or robot-like agent interviewed participants about their health. Participants were either present with the robot/agent, or interacted remotely with the robot/agent projected life-size on a screen. Participants were more engaged, disclosed less undesirable behavior, and forgot more with the robot versus the agent. They ate less and anthropomorphized most with the collocated robot. Participants interacted socially and attempted conversational grounding with the robot/agent though aware it was a machine. Basic questions remain about how people resolve the ambiguity of interacting with a humanlike nonhuman. By virtue of our shared global fate and similar DNA, we humans increasingly appreciate our similarity to nature's living things. At the same time, we want machines, animals, and plants to meet our needs. Both impulses perhaps motivate the increasing development of humanlike robots and software agents. In this article, we examine social context moderation of anthropomorphic interactions between people and humanlike machines. We studied whether an embodied humanlike robot would elicit stronger anthropomorphic interactions than would a software agent, and whether physical presence moderated this effect. At the outset, robots and agents differ from ordinary computer programs in that they have autonomy, interact with the environment, and initiate tasks (Franklin & Graesser, 1996). The marriage of artificial intelligence and computer science has made possible robots and agents with humanlike capabilities, such as lifelike gestures and speech. Typically, "robot" refers to a physically embodied system whereas "agent" refers to a software system.
Examples of humanlike robots are NASA's Robonaut—a humanoid that can hand tools to an astronaut (robonaut.jsc.nasa.gov/robonaut.html), Honda's Asimo, and Hiroshi Ishiguro's

304 citations


BookDOI
28 Apr 2008
TL;DR: A Unified Framework for Whole-Body Humanoid Robot Control with Multiple Constraints and Contacts, and a Visual Top-Down Attention Framework for Robots in Dynamic Environments.
Abstract: Adaptive Multiple Resources Consumption Control for an Autonomous Rover.- Adaptive Snake Robot Locomotion: A Benchmarking Facility for Experiments.- Architecture for Neuronal Cell Control of a Mobile Robot.- The Ares Robot: Case Study of an Affordable Service Robot.- Balancing the Information Gain Against the Movement Cost for Multi-robot Frontier Exploration.- Compiling POMDP Models for a Multimodal Service Robot from Background Knowledge.- Constraint Based Object State Modeling.- A COTS-Based Mini Unmanned Aerial Vehicle (SR-H3) for Security, Environmental Monitoring and Surveillance Operations: Design and Test.- Eyes-Neck Coordination Using Chaos.- Formation Graphs and Decentralized Formation Control of Multi Vehicles with Kinematics Constraints.- Global Urban Localization of an Outdoor Mobile Robot with Genetic Algorithms.- Grip Force Control Using Vision-Based Tactile Sensor for Dexterous Handling.- HNG: A Robust Architecture for Mobile Robots Systems.- Information Relative Map Going Toward Constant Time SLAM.- Measuring Motion Expressiveness in Wheeled Mobile Robots.- Modeling, Simulation and Control of Pneumatic Jumping Robot.- Multilayer Perceptron Adaptive Dynamic Control of Mobile Robots: Experimental Validation.- Path Planning and Tracking Control for an Automatic Parking Assist System.- Performance Evaluation of Ultrasonic Arc Map Processing Techniques by Active Snake Contours.- Planning Robust Landmarks for Sensor Based Motion.- Postural Control on a Quadruped Robot Using Lateral Tilt: A Dynamical System Approach.- Propose of a Benchmark for Pole Climbing Robots.- Rat's Life: A Cognitive Robotics Benchmark.- Reactive Trajectory Deformation to Navigate Dynamic Environments.- Recovery in Autonomous Robot Swarms.- Robot Force/Position Tracking on a Surface of Unknown Orientation.- Scalable Operators for Feature Extraction on 3-D Data.- Semi-autonomous Learning of an RFID Sensor Model for Mobile Robot Self-localization.- A Simple Visual Navigation System with Convergence Property.- Stability of On-Line and On-Board Evolving of Adaptive Collective Behavior.- A Unified Framework for Whole-Body Humanoid Robot Control with Multiple Constraints and Contacts.- Visual Approaches for Handle Recognition.- Visual Top-Down Attention Framework for Robots in Dynamic Environments.- Visual Topological Mapping.- 3D Mapping and Localization Using Leveled Map Accelerated ICP.

301 citations


Journal ArticleDOI
TL;DR: Experimental results indicated that there is a relationship between negative attitudes and emotions, and communication avoidance behavior, which have important implications for robotics design.
Abstract: When people interact with communication robots in daily life, their attitudes and emotions toward the robots affect their behavior. From the perspective of robotics design, we need to investigate the influences of these attitudes and emotions on human-robot interaction. This paper reports our empirical study on the relationships between people's attitudes and emotions, and their behavior toward a robot. In particular, we focused on negative attitudes, anxiety, and communication avoidance behavior, which have important implications for robotics design. For this purpose, we used two psychological scales that we had developed: negative attitudes toward robots scale (NARS) and robot anxiety scale (RAS). In the experiment, subjects and a humanoid robot are engaged in simple interactions including scenes of meeting, greeting, self-disclosure, and physical contact. Experimental results indicated that there is a relationship between negative attitudes and emotions, and communication avoidance behavior. A gender effect was also suggested.

283 citations


Proceedings ArticleDOI
10 Oct 2008
TL;DR: A novel artificial skin for covering the whole body of a humanoid robot that provides pressure measurements and shape information about the contact surfaces between the robot and the environment and can adaptively reduce its spatial resolution, improving the response time.
Abstract: A novel artificial skin for covering the whole body of a humanoid robot is presented. It provides pressure measurements and shape information about the contact surfaces between the robot and the environment. The system is based on a mesh of sensors interconnected in order to form a networked structure. Each sensor has 12 capacitive taxels, has a triangular shape, and is supported by a flexible substrate in order to conform to smooth curved surfaces. Three communication ports placed along the sides of each sensor allow communication with adjacent sensors. The tactile measurements are sent to embedded microcontroller boards using serial bus communication links. The system can adaptively reduce its spatial resolution, improving the response time. This feature is very useful for detecting the first contact very rapidly, at a lower spatial resolution, and then increasing the spatial resolution in the region of contact for accurate reconstruction of the contact pressure distribution.
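
The coarse-to-fine readout strategy described above can be sketched as follows. The function and array names here are hypothetical and the real system works over a serial bus with triangular 12-taxel modules; this toy version only shows the principle: scan at reduced resolution first, then read the full-resolution taxels where contact was detected.

```python
import numpy as np

def scan_coarse(taxels, group=4):
    """Low-resolution pass: average each group of taxels so the whole
    array can be scanned quickly."""
    return taxels.reshape(-1, group).mean(axis=1)

def contact_patch(taxels, group=4, thresh=0.1):
    """Coarse-to-fine readout: locate contact at low resolution, then
    return the full-resolution pressures of the triggered group only."""
    coarse = scan_coarse(taxels, group)
    idx = int(np.argmax(coarse))
    if coarse[idx] < thresh:
        return None               # no contact detected anywhere
    return taxels[idx * group:(idx + 1) * group]

# Synthetic readings: 12 taxels, with contact pressing on taxels 4-7.
readings = np.zeros(12)
readings[4:8] = [0.2, 0.9, 0.8, 0.3]
patch = contact_patch(readings)
```
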

Proceedings ArticleDOI
15 Aug 2008
TL;DR: This study explores how a robot's physical or virtual presence affects unconscious human perception of the robot as a social partner; subjects collaborated on simple book-moving tasks with either a physically present humanoid robot or a video-displayed robot.
Abstract: This study explores how a robot's physical or virtual presence affects unconscious human perception of the robot as a social partner. Subjects collaborated on simple book-moving tasks with either a physically present humanoid robot or a video-displayed robot. Each task examined a single aspect of interaction: greetings, cooperation, trust, and personal space. Subjects readily greeted and cooperated with the robot in both conditions. However, subjects were more likely to fulfill an unusual instruction and to afford greater personal space to the robot in the physical condition than in the video-displayed condition. The same tendencies occurred when the virtual robot was supplemented by disambiguating 3-D information.

Journal ArticleDOI
TL;DR: This system allows a robot to learn a simple goal-directed gesture and correctly reproduce it despite changes in the initial conditions and perturbations in the environment and provides a solution to the inverse kinematics problem when dealing with a redundant manipulator.
Abstract: We present a system for robust robot skill acquisition from kinesthetic demonstrations. This system allows a robot to learn a simple goal-directed gesture and correctly reproduce it despite changes in the initial conditions and perturbations in the environment. It combines a dynamical system control approach with tools of statistical learning theory and provides a solution to the inverse kinematics problem when dealing with a redundant manipulator. The system is validated on two experiments involving a humanoid robot: putting an object into a box and reaching for and grasping an object.
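
The paper resolves redundancy through its dynamical-system formulation; for comparison, the standard textbook alternative for a redundant (more joints than task dimensions) arm is damped least-squares inverse kinematics. The sketch below is that generic scheme on a hypothetical 3-link planar arm, not the authors' method.

```python
import numpy as np

def fk(q, lengths):
    """Forward kinematics of a planar serial arm: end-effector position."""
    angles = np.cumsum(q)                      # absolute link angles
    return np.array([(lengths * np.cos(angles)).sum(),
                     (lengths * np.sin(angles)).sum()])

def jacobian(q, lengths):
    """Task Jacobian: column i is d(position)/d(joint i)."""
    angles = np.cumsum(q)
    J = np.zeros((2, len(q)))
    for i in range(len(q)):
        J[0, i] = -(lengths[i:] * np.sin(angles[i:])).sum()
        J[1, i] = (lengths[i:] * np.cos(angles[i:])).sum()
    return J

lengths = np.array([0.4, 0.3, 0.2])
q = np.array([0.3, 0.2, 0.1])                  # initial joint configuration
target = np.array([0.5, 0.4])                  # reachable Cartesian goal
for _ in range(200):
    err = target - fk(q, lengths)
    J = jacobian(q, lengths)
    # Damped least-squares step: well-behaved near singularities and
    # picks one solution out of the redundant (3-joint, 2-task) family.
    q = q + 0.5 * (J.T @ np.linalg.solve(J @ J.T + 1e-4 * np.eye(2), err))
```
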

Journal ArticleDOI
TL;DR: It is demonstrated that an appropriate feedback controller can be acquired within a few thousand trials by numerical simulations and the controller obtained in numerical simulation achieves stable walking with a physical robot in the real world.
Abstract: In this paper we describe a learning framework for a central pattern generator (CPG)-based biped locomotion controller using a policy gradient method. Our goals in this study are to achieve CPG-based biped walking with a 3D hardware humanoid and to develop an efficient learning algorithm with CPG by reducing the dimensionality of the state space used for learning. We demonstrate that an appropriate feedback controller can be acquired within a few thousand trials by numerical simulations and the controller obtained in numerical simulation achieves stable walking with a physical robot in the real world. Numerical simulations and hardware experiments evaluate the walking velocity and stability. The results suggest that the learning algorithm is capable of adapting to environmental changes. Furthermore, we present an online learning scheme with an initial policy for a hardware robot to improve the controller within 200 iterations.
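
The oscillator-coupling principle behind a CPG controller can be illustrated with two phase oscillators locked in antiphase. This is a minimal sketch of the idea only (made-up gains, not the paper's neural-oscillator controller or its learned feedback): regardless of the initial phases, the coupling drives the two legs toward alternation.

```python
import math

def cpg_step(phases, omega, coupling, dt):
    """Two phase oscillators with antiphase coupling, a minimal CPG for
    left/right leg coordination. Each phase advances at omega and is
    pulled toward a pi offset from its partner."""
    p0, p1 = phases
    dp0 = omega + coupling * math.sin(p1 - p0 - math.pi)
    dp1 = omega + coupling * math.sin(p0 - p1 - math.pi)
    return (p0 + dp0 * dt, p1 + dp1 * dt)

phases = (0.0, 0.4)               # deliberately NOT in antiphase
for _ in range(5000):
    phases = cpg_step(phases, omega=2 * math.pi, coupling=4.0, dt=0.001)

# The phase difference converges to pi (antiphase), so joint commands
# derived from the phases, e.g. hip_angle = A * sin(p0), alternate.
diff = (phases[1] - phases[0]) % (2 * math.pi)
```
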

Journal ArticleDOI
TL;DR: The YARP robot software architecture helps organize communication between sensors, processors, and actuators so that loose coupling is encouraged, making gradual system evolution much easier; it is designed to play well with other architectures.

Proceedings ArticleDOI
15 Aug 2008
TL;DR: A simple case of physical human-robot interaction, a hand-over task, is studied by comparing human-human hand-overs with the same task done by a robot and a human, providing the background for implementing effective joint-action strategies in humanoid robot systems.
Abstract: In many future joint-action scenarios, humans and robots will have to interact physically in order to successfully cooperate. Ideally, seamless human-robot interaction should not require training for the human, but should be intuitively simple. Nonetheless, seamless interaction and cooperation involve some degree of learning and adaptation. Here, we report on a simple case of physical human-robot interaction, a hand-over task. Even such a basic task as manually handing over an object from one agent to another requires that both partners agree upon certain basic prerequisites and boundary conditions. While some of them are negotiated explicitly, e.g. by verbal communication, others are determined indirectly and adaptively in the course of the cooperation. In the present study, we compared human-human hand-over interaction with the same task done by a robot and a human. To evaluate the importance of biological motion, the robot-human interaction was tested with two different velocity profiles: a conventional trapezoidal velocity profile in joint coordinates and a minimum-jerk profile of the end-effector. Our results show a significantly shorter reaction time for minimum-jerk profiles, which decreased over the first three hand-overs. The results of our comparison provide the background for implementing effective joint-action strategies in humanoid robot systems.

Proceedings ArticleDOI
14 Oct 2008
TL;DR: It is shown that it is possible to allow on top of that a continuous adaptation of the positions of the foot steps, allowing the generation of stable walking gaits even in the presence of strong perturbations.
Abstract: Building on previous propositions to generate walking gaits online through the use of linear model predictive control, the goal of this paper is to show that a continuous adaptation of the positions of the foot steps can be allowed on top of that, enabling the generation of stable walking gaits even in the presence of strong perturbations. This additional adaptation requires only a minimal modification of the previous schemes, in particular maintaining the same linear model predictive form. Simulation results on the HRP-2 humanoid robot are then presented, showing a significant improvement over the previous schemes.

Proceedings ArticleDOI
19 Aug 2008
TL;DR: The prototype of a new computer simulator for the humanoid robot iCub is presented, developed as part of a joint effort with the European project "ITALK" on the integration and transfer of action and language knowledge in cognitive robots.
Abstract: This paper presents the prototype of a new computer simulator for the humanoid robot iCub. The iCub is a new open-source humanoid robot developed as a result of the "RobotCub" project, a collaborative European project aiming at developing a new open-source cognitive robotics platform. The iCub simulator has been developed as part of a joint effort with the European project "ITALK" on the integration and transfer of action and language knowledge in cognitive robots. This is available open-source to all researchers interested in cognitive robotics experiments with the iCub humanoid platform.

Proceedings ArticleDOI
19 May 2008
TL;DR: The design and experimental validation of an anthropomorphic underactuated robotic hand with 15 degrees of freedom and a single actuator is presented; the results demonstrate the feasibility of a humanoid hand with many degrees of freedom and a single degree of actuation.
Abstract: This paper presents the design and experimental validation of an anthropomorphic underactuated robotic hand with 15 degrees of freedom and a single actuator. First, the force transmission design of underactuated fingers is revisited. An optimal geometry of the tendon-driven fingers is then obtained. Then, underactuation between the fingers is addressed using differential mechanisms. Tendon routings are proposed and verified experimentally. Finally, a prototype of a 15-degree-of-freedom hand is built and tested. The results demonstrate the feasibility of a humanoid hand with many degrees of freedom and one single degree of actuation.

Proceedings ArticleDOI
19 May 2008
TL;DR: The influence of robot mass and velocity during blunt unconstrained impacts with humans at typical robot velocities is shown, and this injury mechanism, which is more probable in robotics, is evaluated in detail.
Abstract: Accidents occurring with classical industrial robots often lead to fatal injuries. Presumably, this is to a great extent caused by the possibility of clamping the human in the confined workspace of the robot. Before generally allowing physical cooperation of humans and robots in future applications it is therefore absolutely crucial to analyze this extremely dangerous situation. In this paper we will investigate many aspects relevant to this sort of injury mechanisms and discuss the importance to domestic environments or production assistants. Since clamped impacts are intrinsically more dangerous than free ones it is fundamental to discuss and evaluate metrics to ensure safe interaction if clamping is possible. We compare various robots with respect to their injury potential leading to a main safety requirement of robot design: Reduce the intrinsic injury potential of a robot by reducing its weight.

Proceedings ArticleDOI
14 Oct 2008
TL;DR: The centroidal momentum of a humanoid robot is the sum of the individual link momenta, after projecting each to the robot's Center of Mass (CoM).
Abstract: The centroidal momentum of a humanoid robot is the sum of the individual link momenta, after projecting each to the robot's Center of Mass (CoM). Centroidal momentum is a linear function of the robot's generalized velocities and the centroidal momentum matrix is the matrix form of this function. This matrix has been called both a Jacobian matrix and an inertia matrix by others. We show that it is actually a product of a Jacobian and an inertia matrix.
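
The statement above can be made concrete with a toy planar computation (point masses standing in for robot links; the values are illustrative only): the centroidal momentum is the sum of the link momenta referred to the CoM, and it scales linearly with the velocities, which is why it admits a matrix form in the generalized velocities.

```python
import numpy as np

# Toy planar "robot": three point-mass links with given positions and velocities.
masses = np.array([2.0, 1.0, 0.5])
pos = np.array([[0.0, 0.0], [0.3, 0.4], [0.6, 0.1]])
vel = np.array([[0.1, 0.0], [0.2, 0.3], [-0.1, 0.4]])

def centroidal_momentum(masses, pos, vel):
    """Sum the individual link momenta after referring them to the CoM:
    linear part = total linear momentum; angular part = angular momentum
    about the CoM (a scalar in the planar case)."""
    com = (masses[:, None] * pos).sum(axis=0) / masses.sum()
    lin = (masses[:, None] * vel).sum(axis=0)
    r = pos - com
    ang = (masses * (r[:, 0] * vel[:, 1] - r[:, 1] * vel[:, 0])).sum()
    return lin, ang

lin1, ang1 = centroidal_momentum(masses, pos, vel)
lin2, ang2 = centroidal_momentum(masses, pos, 2 * vel)  # doubled velocities
# lin2/ang2 are exactly twice lin1/ang1: the map from velocities to
# centroidal momentum is linear, hence the matrix form of the paper.
```
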

Proceedings ArticleDOI
19 May 2008
TL;DR: A modular and distributed software architecture which is capable of fusing visual and acoustic saliency maps into one egocentric frame of reference endows the iCub with an emergent exploratory behavior reacting to combined visual and auditory saliency.
Abstract: This work presents a multimodal bottom-up attention system for the humanoid robot iCub, where the robot's decisions to move eyes and neck are based on visual and acoustic saliency maps. We introduce a modular and distributed software architecture which is capable of fusing visual and acoustic saliency maps into one egocentric frame of reference. This system endows the iCub with an emergent exploratory behavior reacting to combined visual and auditory saliency. The developed software modules provide a flexible foundation for the open iCub platform and for further experiments and developments, including higher levels of attention and representation of the peripersonal space.

Book ChapterDOI
20 Oct 2008
TL;DR: A method of computing efficient and natural-looking motions for humanoid robots walking on varied terrain is presented, using a small set of high-quality motion primitives that have been generated offline to derive a sampling strategy for a probabilistic, sample-based planner.
Abstract: This paper presents a method of computing efficient and natural-looking motions for humanoid robots walking on varied terrain. It uses a small set of high-quality motion primitives (such as a fixed gait on flat ground) that have been generated offline. But rather than restrict motion to these primitives, it uses them to derive a sampling strategy for a probabilistic, sample-based planner. Results in simulation on several different terrains demonstrate a reduction in planning time and a marked increase in motion quality.

Posted Content
TL;DR: The autonomous humanoid robot called NAO that is built by the French company Aldebaran-Robotics is an open and easy-to-handle platform where the user can change all the embedded system software or just add some applications to make the robot adopt specific behaviours.
Abstract: This article presents the design of the autonomous humanoid robot called NAO that is built by the French company Aldebaran-Robotics. With its height of 0.57 m and its weight of about 4.5 kg, this innovative robot is lightweight and compact. It distinguishes itself from its existing Japanese, American, and other counterparts thanks to its pelvis kinematics design, its proprietary actuation system based on brush DC motors, and its electronic, computer, and distributed software architectures. This robot has been designed to be affordable without sacrificing quality and performance. It is an open and easy-to-handle platform where the user can change all the embedded system software or just add some applications to make the robot adopt specific behaviours. The robot's head and forearms are modular and can be changed to promote further evolution. The comprehensive and functional design is one of the reasons that helped select NAO to replace the AIBO quadrupeds in the 2008 RoboCup standard league.

Proceedings ArticleDOI
14 Oct 2008
TL;DR: The design and experimental results of the latest walking robot 'Flame' and the design of the next robot in line, 'TUlip', are presented, focusing on the mechanical implementation of series elastic actuation, which is ideal for Limit Cycle Walkers since it offers high controllability without the actuator dominating the system dynamics.
Abstract: The concept of 'Limit Cycle Walking' in bipedal robots removes the constraint of dynamic balance at every instant during gait. We hypothesize that this is crucial for the development of increasingly versatile and energy-effective humanoid robots. It allows the application of a wide range of gaits and it allows a robot to utilize its natural dynamics in order to reduce energy use. This paper presents the design and experimental results of our latest walking robot 'Flame' and the design of our next robot in line, 'TUlip'. The focus is on the mechanical implementation of series elastic actuation, which is ideal for Limit Cycle Walkers since it offers high controllability without the actuator dominating the system dynamics. Walking experiments show the potential of our robots, demonstrating good walking performance despite using simple control.

Journal ArticleDOI
TL;DR: It is demonstrated how antagonistic pneumatic actuators can be utilized to achieve three dynamic locomotion modes (walking, jumping, and running) in a biped robot, and it is concluded that antagonistic pneumatic actuators are strong candidates for constructing a human-like dynamic locomotor.

Proceedings ArticleDOI
19 May 2008
TL;DR: Simulation results using the OpenHRP platform, a dynamical simulator for humanoid robot motions, show that the imitated motions preserve the salient characteristics of the original human captured motion.
Abstract: In this paper, the imitation of human captured motions by a humanoid robot is considered. The main objective is to reproduce an imitated motion which should be as close as possible to the original human captured motion. To achieve this goal, the imitation problem is formulated as an optimization problem, and the physical limits of the humanoid robot are considered as constraints. The optimization problem is then solved recursively by using an efficient dynamics algorithm, which allows the calculation of the gradient function with respect to the control parameters analytically. The simulation results using the OpenHRP platform, a dynamical simulator for humanoid robot motions, show that the imitated motions preserve the salient characteristics of the original human captured motion. Moreover, the optimization procedure converges well thanks to the analytical calculation of the gradient function.

Journal ArticleDOI
TL;DR: Different appearances of real humanoid robots did not affect participant verbal behaviors, but they did affect such nonverbal behaviors as distance and delay of response, and these differences are explained by impressions and attributions.
Abstract: Identifying the extent to which the appearance of a humanoid robot affects human behavior toward it is important. We compared participant impressions of and behaviors toward two real humanoid robots in simple human-robot interactions. These two robots, which have different appearances but are controlled to perform the same recorded utterances and motions, are adjusted by a motion-capturing system. We conducted an experiment with 48 human participants who individually interacted with the two robots and also with a human for reference. The results revealed that different appearances did not affect participant verbal behaviors, but they did affect such nonverbal behaviors as distance and delay of response. These differences are explained by two factors: impressions and attributions.

Journal ArticleDOI
TL;DR: A new humanoid robot currently being developed for applications in human-centred environments is presented; its system consists of a motion planner for the generation of collision-free paths, a vision system for the recognition and localization of a subset of household objects, and a grasp analysis component which provides the most feasible grasp configurations for each object.

Journal ArticleDOI
TL;DR: An adaptation mechanism based on reinforcement learning that reads subconscious body signals from a human partner, and uses this information to adjust interaction distances, gaze meeting, and motion speed and timing in human-robot interactions is proposed.
Abstract: Human beings subconsciously adapt their behaviors to a communication partner in order to make interactions run smoothly. In human-robot interactions, not only the human but also the robot is expected to adapt to its partner. Thus, to facilitate human-robot interactions, a robot should be able to read subconscious comfort and discomfort signals from humans and adjust its behavior accordingly, just like a human would. However, most previous research works expected the human to consciously give feedback, which might interfere with the aim of interaction. We propose an adaptation mechanism based on reinforcement learning that reads subconscious body signals from a human partner, and uses this information to adjust interaction distances, gaze meeting, and motion speed and timing in human-robot interactions. The mechanism uses gazing at the robot's face and human movement distance as subconscious body signals that indicate a human's comfort and discomfort. A pilot study with a humanoid robot that has ten interaction behaviors has been conducted. The study result of 12 subjects suggests that the proposed mechanism enables autonomous adaptation to individual preferences. Also, detailed discussion and conclusions are presented.