
Showing papers on "Humanoid robot published in 2007"


Journal ArticleDOI
TL;DR: The first successful application of the SLAM methodology from mobile robotics to the "pure vision" domain of a single uncontrolled camera, achieving real time but drift-free performance inaccessible to structure from motion approaches is presented.
Abstract: We present a real-time algorithm which can recover the 3D trajectory of a monocular camera, moving rapidly through a previously unknown scene. Our system, which we dub MonoSLAM, is the first successful application of the SLAM methodology from mobile robotics to the "pure vision" domain of a single uncontrolled camera, achieving real time but drift-free performance inaccessible to structure from motion approaches. The core of the approach is the online creation of a sparse but persistent map of natural landmarks within a probabilistic framework. Our key novel contributions include an active approach to mapping and measurement, the use of a general motion model for smooth camera movement, and solutions for monocular feature initialization and feature orientation estimation. Together, these add up to an extremely efficient and robust algorithm which runs at 30 Hz with standard PC and camera hardware. This work extends the range of robotic systems in which SLAM can be usefully applied, but also opens up new areas. We present applications of MonoSLAM to real-time 3D localization and mapping for a high-performance full-size humanoid robot and live augmented reality with a hand-held camera.
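The probabilistic core described above is an extended Kalman filter over the camera state and a sparse landmark map. As a rough illustration only (the paper's actual filter also tracks orientation and measures landmarks in the image plane), here is a minimal numpy sketch of the constant-velocity predict step and a single-landmark update; all dimensions and noise values are invented for the example:

```python
import numpy as np

# Illustrative EKF in the spirit of MonoSLAM: state = camera position and
# velocity plus one 3D landmark; constant-velocity ("smooth motion") model;
# the landmark is observed as a noisy offset from the camera.
dt = 1.0 / 30.0                      # 30 Hz frame rate, as in the paper
x = np.zeros(9)                      # [cam pos (3), cam vel (3), landmark (3)]
P = np.eye(9) * 0.1                  # state covariance

F = np.eye(9)
F[0:3, 3:6] = dt * np.eye(3)         # position += velocity * dt
Q = np.zeros((9, 9))
Q[3:6, 3:6] = 1e-3 * np.eye(3)       # process noise: smooth-acceleration prior

H = np.zeros((3, 9))
H[:, 6:9] = np.eye(3)                # measurement = landmark position ...
H[:, 0:3] = -np.eye(3)               # ... relative to the camera
R = 1e-2 * np.eye(3)                 # measurement noise

def ekf_step(x, P, z):
    x = F @ x                        # predict with the motion model
    P = F @ P @ F.T + Q
    y = z - H @ x                    # innovation from the landmark observation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)   # Kalman gain
    return x + K @ y, (np.eye(9) - K @ H) @ P

x, P = ekf_step(x, P, z=np.array([1.0, 0.2, 4.0]))
```

Maintaining the full joint covariance between camera and landmarks is what keeps the map drift-free at the cost of quadratic growth in map size, hence the paper's emphasis on a sparse set of persistent landmarks.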

3,772 citations


Journal ArticleDOI
01 Apr 2007
TL;DR: A programming-by-demonstration framework for generically extracting the relevant features of a given task and for addressing the problem of generalizing the acquired knowledge to different contexts is presented.
Abstract: We present a programming-by-demonstration framework for generically extracting the relevant features of a given task and for addressing the problem of generalizing the acquired knowledge to different contexts. We validate the architecture through a series of experiments, in which a human demonstrator teaches a humanoid robot simple manipulatory tasks. A probability-based estimation of the relevance is suggested by first projecting the motion data onto a generic latent space using principal component analysis. The resulting signals are encoded using a mixture of Gaussian/Bernoulli distributions (Gaussian mixture model/Bernoulli mixture model). This provides a measure of the spatio-temporal correlations across the different modalities collected from the robot, which can be used to determine a metric of the imitation performance. The trajectories are then generalized using Gaussian mixture regression. Finally, we analytically compute the trajectory which optimizes the imitation metric and use this to generalize the skill to different contexts.
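Gaussian mixture regression (GMR), the generalization step named above, conditions each Gaussian component on the input variable (typically time) and blends the per-component conditional means by their responsibilities. A compact numpy sketch, assuming a GMM has already been fit on joint input-output data; the shapes and toy numbers are illustrative, not taken from the paper:

```python
import numpy as np

def gmr(x, priors, means, covs):
    """Predict E[y | x] from a GMM fit on joint (x, y) data.

    priors: (K,) mixing weights; means: (K, 1+D); covs: (K, 1+D, 1+D),
    where index 0 of each component is the scalar input (e.g. time) and
    the remaining D dimensions are the output (e.g. joint angles).
    """
    K, D = means.shape[0], means.shape[1] - 1
    h = np.empty(K)
    y = np.empty((K, D))
    for k in range(K):
        mx, my = means[k, 0], means[k, 1:]
        sxx, syx = covs[k, 0, 0], covs[k, 1:, 0]
        # Responsibility of component k for this input value.
        h[k] = priors[k] * np.exp(-0.5 * (x - mx) ** 2 / sxx) / np.sqrt(sxx)
        # Conditional mean of y given x under component k.
        y[k] = my + syx / sxx * (x - mx)
    h /= h.sum()
    return h @ y                       # responsibility-weighted blend

# Toy usage with two components and a 1D output:
priors = np.array([0.5, 0.5])
means = np.array([[0.0, 0.0], [1.0, 1.0]])
covs = np.tile(np.eye(2) * 0.1, (2, 1, 1))
print(gmr(0.5, priors, means, covs))   # ~[0.5], halfway between components
```

The same conditioning also yields a conditional covariance, which is what lets the framework weight reproduction errors by the extracted relevance of each variable.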

1,089 citations


Proceedings ArticleDOI
10 Mar 2007
TL;DR: It is shown that the essential characteristics of a gesture can be efficiently transferred by interacting socially with the robot, using active teaching methods that put the human teacher "in the loop" of the robot's learning.
Abstract: We present an approach to teach incrementally human gestures to a humanoid robot. By using active teaching methods that put the human teacher "in the loop" of the robot's learning, we show that the essential characteristics of a gesture can be efficiently transferred by interacting socially with the robot. In a first phase, the robot observes the user demonstrating the skill while wearing motion sensors. The motions of his/her two arms and head are recorded by the robot, projected in a latent space of motion and encoded probabilistically in a Gaussian Mixture Model (GMM). In a second phase, the user helps the robot refine its gesture by kinesthetic teaching, i.e. by grabbing and moving its arms throughout the movement to provide the appropriate scaffolds. To update the model of the gesture, we compare the performance of two incremental training procedures against a batch training procedure. We present experiments to show that different modalities can be combined efficiently to teach incrementally basketball officials' signals to a HOAP-3 humanoid robot.
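One simple way to realize an incremental update of the gesture model (shown purely as an illustrative sketch; the paper defines and benchmarks its own two incremental procedures against batch retraining) is to blend the existing component statistics toward weighted statistics of the newly acquired kinesthetic demonstration:

```python
import numpy as np

def incremental_gmm_update(means, covs, resp, X, lam=0.2):
    """Blend an existing GMM toward new demonstration data X.

    means: (K, D) component means; covs: (K, D, D) covariances;
    resp: (N, K) responsibilities of the old model for the new samples
    (keeping the E-step fixed is a simplification); lam: learning rate.
    """
    for k in range(means.shape[0]):
        w = resp[:, k] / resp[:, k].sum()
        mu_new = w @ X                           # weighted mean of new data
        diff = X - mu_new
        cov_new = (w[:, None] * diff).T @ diff   # weighted covariance
        means[k] = (1 - lam) * means[k] + lam * mu_new
        covs[k] = (1 - lam) * covs[k] + lam * cov_new
    return means, covs
```

A small learning rate keeps the refined gesture anchored to the original observation while letting the kinesthetic corrections reshape it.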

360 citations


Journal ArticleDOI
Masato Hirose1, Kenichi Ogawa
TL;DR: The continuous transition from walking in a straight line to making a turn has been achieved with the latest humanoid robot ASIMO, Honda's most advanced robot so far in both mechanism and control system.
Abstract: Honda has been doing research on robotics since 1986 with a focus upon bipedal walking technology. The research started with straight and static walking of the first prototype two-legged robot. Now, the continuous transition from walking in a straight line to making a turn has been achieved with the latest humanoid robot ASIMO. ASIMO is Honda's most advanced robot so far in both mechanism and control system. ASIMO's configuration allows it to operate freely in the human living space. It could be of practical help to humans with its five-fingered arms as well as its walking function. The target of further development of ASIMO is to develop a robot to improve life in human society. Much development work will be continued both mechanically and electronically, staying true to Honda's 'challenging spirit'.

348 citations


Journal ArticleDOI
TL;DR: The importance of replicating human-like capabilities and responses during human-robot interaction is described, and experiments including compliant balancing under unknown external forces demonstrate the effectiveness of the proposed method.
Abstract: This paper proposes an effective framework of human-humanoid robot physical interaction. Its key component is a new control technique for full-body balancing in the presence of external forces, which is presented and then validated empirically. We have adopted an integrated system approach to develop humanoid robots. Herein, we describe the importance of replicating human-like capabilities and responses during human-robot interaction in this context. Our balancing controller provides gravity compensation, making the robot passive and thereby facilitating safe physical interactions. The method operates by setting appropriate ground reaction forces and transforming them into full-body joint torques. It handles an arbitrary number of force interaction points on the robot. It does not require force measurement at the contact points of interest. It requires neither inverse kinematics nor inverse dynamics. It can adapt to uneven ground surfaces. It operates as a force control process, and can therefore accommodate simultaneous control processes using force-, velocity-, or position-based control. Forces are distributed over supporting contact points in an optimal manner. Joint redundancy is resolved by damping injection in the context of passivity. We present various force interaction experiments using our full-sized bipedal humanoid platform, including compliant balancing even when affected by unknown external forces, which demonstrates the effectiveness of the method.
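The central mapping, desired ground reaction forces at the contact points converted into whole-body joint torques, has the generic Jacobian-transpose form sketched below. The contact Jacobians and the optimal force distribution are assumed given; this is the textbook shape of such controllers, not the paper's exact implementation:

```python
import numpy as np

def contact_force_torques(contact_jacobians, desired_forces):
    """Map desired contact forces to joint torques: tau = sum_i J_i^T f_i.

    contact_jacobians: list of (3, n) Jacobians, one per contact point;
    desired_forces: list of (3,) reaction forces already distributed over
    the contacts (e.g. so that together they support the robot's weight,
    which is what makes the controller gravity-compensating).
    """
    tau = np.zeros(contact_jacobians[0].shape[1])
    for J, f in zip(contact_jacobians, desired_forces):
        tau += J.T @ f                 # each contact contributes J^T f
    return tau

# Toy usage: two contacts on a 3-joint chain, each carrying half the weight.
J1, J2 = np.random.rand(3, 3), np.random.rand(3, 3)
f_half = np.array([0.0, 0.0, 300.0])
print(contact_force_torques([J1, J2], [f_half, f_half]))
```

Because only Jacobian transposes appear, no inverse kinematics or inverse dynamics is needed, matching the abstract's claim.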

327 citations


Journal ArticleDOI
TL;DR: In this article, a biomechatronic approach is proposed to harmonize the mechanical design of an anthropomorphic artificial hand with the design of the hand control system, and a proper hand control scheme is designed and implemented for the study and optimization of hand motor performance in order to achieve a human-like motor behavior.
Abstract: This paper proposes a biomechatronic approach to the design of an anthropomorphic artificial hand able to mimic the natural motion of the human fingers. The hand is conceived to be applied to prosthetics as well as to humanoid and personal robotics; hence, anthropomorphism is a fundamental requirement to be addressed both in the physical aspect and in the functional behavior. In this paper, a biomechatronic approach is addressed to harmonize the mechanical design of the anthropomorphic artificial hand with the design of the hand control system. More in detail, this paper focuses on the control system of the hand and on the optimization of the hand design in order to obtain human-like kinematics and dynamics. By evaluating the simulated hand performance, the mechanical design is iteratively refined. The mechanical structure and the ratio between number of actuators and number of degrees of freedom (DOFs) have been optimized in order to cope with the strict size and weight constraints that are typical of applications of artificial hands to prosthetics and humanoid robotics. The proposed hand has a kinematic structure similar to the natural hand, featuring three articulated fingers (thumb, index, and middle finger with 3 DOF for each finger and 1 DOF for the abduction/adduction of the thumb) driven by four dc motors. A special underactuated transmission has been designed that allows keeping the number of motors as low as possible while achieving a self-adaptive grasp, as a result of the passive compliance of the distal DOF of the fingers. A proper hand control scheme has been designed and implemented for the study and optimization of hand motor performance in order to achieve a human-like motor behavior. To this aim, available data on motion of the human fingers are collected from the neuroscience literature in order to derive a reference input for the control. Simulation trials and computer-aided design (CAD) mechanical tools are used to obtain a finger model including its dynamics. The closed-loop control system is also simulated in order to study the effect of iterative mechanical redesign and to define the final set of mechanical parameters for the hand optimization. Results of the experimental tests carried out to validate the model of the robotic finger, and details of the process of integrated refinement and optimization of the mechanical structure and of the hand motor control scheme, are extensively reported in the paper.

324 citations


Journal ArticleDOI
TL;DR: A mechanism for two social communication abilities: forming long-term relationships and estimating friendly relationships among people is proposed and the results demonstrate the potential of current interactive robots to establish social relationships with humans in the authors' daily lives.
Abstract: Interactive robots participating in our daily lives should have the fundamental ability to socially communicate with humans. In this paper, we propose a mechanism for two social communication abilities: forming long-term relationships and estimating friendly relationships among people. The mechanism for long-term relationships is based on three principles of behavior design. The robot we developed, Robovie, is able to interact with children in the same way as children do. Moreover, the mechanism is designed for long-term interaction along the following three design principles: (1) it calls children by name using radio frequency identification tags; (2) it adapts its interactive behaviors for each child based on a pseudo development mechanism; and (3) it confides its personal matters to the children who have interacted with the robot for an extended period of time. Regarding the estimation of friendly relationships, the robot assumes that people who spontaneously behave as a group together are friends. Then, by identifying each person in the interacting group around the robot, it estimates the relationships between them. We conducted a two-month field trial at an elementary school. An interactive humanoid robot, Robovie, was placed in a classroom at the school. The results of the field trial revealed that the robot successfully continued interacting with many children for two months, and seemed to have established friendly relationships with them. In addition, it demonstrated reasonable performance in identifying friendships among children. We believe that these results demonstrate the potential of current interactive robots to establish social relationships with humans in our daily lives.
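The friendship-estimation rule above, that people who repeatedly show up in the same spontaneously formed group around the robot are likely friends, reduces to accumulating pairwise co-presence. A toy sketch with a hypothetical RFID group log; the log format and threshold are invented for illustration:

```python
from collections import Counter
from itertools import combinations

# Hypothetical log: each entry is the set of child IDs (from RFID tags)
# identified in the group interacting with the robot at one moment.
group_log = [{"a", "b"}, {"a", "b", "c"}, {"b", "c"}, {"a", "b"}]

pair_counts = Counter()
for group in group_log:
    for pair in combinations(sorted(group), 2):
        pair_counts[pair] += 1         # co-presence in an interacting group

THRESHOLD = 2                          # illustrative cutoff
friends = [p for p, n in pair_counts.items() if n >= THRESHOLD]
print(friends)                         # [('a', 'b'), ('b', 'c')]
```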

317 citations


Proceedings ArticleDOI
10 Mar 2007
TL;DR: Tradeoffs for HRI research of using collocated robots, remote robots, and computer agents as proxies of robots are discussed; a few behavioral differences and large attitude differences are found across these conditions.
Abstract: HRI researchers interested in social robots have made large investments in humanoid robots. There is still sparse evidence that people's responses to robots differ from their responses to computer agents, suggesting that agent studies might serve to test HRI hypotheses. To help us understand the difference between people's social interactions with an agent and a robot, we experimentally compared people's responses in a health interview with (a) a computer agent projected either on a computer monitor or life-size on a screen, (b) a remote robot projected life-size on a screen, or (c) a collocated robot in the same room. We found a few behavioral differences and large attitude differences across these conditions. Participants forgot more and disclosed least with the collocated robot, next with the projected remote robot, and then with the agent. They spent more time with the collocated robot and their attitudes were most positive toward that robot. We discuss tradeoffs for HRI research of using collocated robots, remote robots, and computer agents as proxies of robots.

283 citations


Proceedings ArticleDOI
01 Nov 2007
TL;DR: Using these simple models, the authors develop analytic decision surfaces, expressed as functions of reference points such as the center of mass and center of pressure, that predict whether or not a fall is inevitable.
Abstract: We extend simple models previously developed for humanoids to large push recovery. Using these simple models, we develop analytic decision surfaces that are functions of reference points, such as the center of mass and center of pressure, that predict whether or not a fall is inevitable. We explore three strategies for recovery: 1) using ankle torques, 2) moving internal joints, and 3) taking a step. These models can be used in robot controllers or in analysis of human balance and locomotion.
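For the linear inverted pendulum model underlying such analyses, the boundary between recoverable states and unavoidable falls has a closed form. The standard capture-point expression below is consistent with this line of work, though the paper's decision surfaces additionally cover the ankle-torque and internal-joint strategies:

```latex
% Linear inverted pendulum with constant CoM height z_0: the ground point
% where the robot must place its center of pressure (or step) to come to
% rest, given CoM position x and velocity \dot{x}, is
\[
  x_{\mathrm{capture}} \;=\; x \;+\; \dot{x}\,\sqrt{z_0 / g}.
\]
% A fall is inevitable once x_capture lies outside the region the robot can
% cover with ankle torque, internal-joint motion, or a reachable step.
```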

277 citations


Proceedings ArticleDOI
10 Apr 2007
TL;DR: This work presents a visual localization and map-learning system that relies on vision only and that is able to incrementally learn to recognize the different rooms of an apartment from any robot position.
Abstract: Localization for low cost humanoid or animal-like personal robots has to rely on cheap sensors and has to be robust to user manipulations of the robot. We present a visual localization and map-learning system that relies on vision only and that is able to incrementally learn to recognize the different rooms of an apartment from any robot position. This system is inspired by the visual categorization algorithms called bag-of-words methods, which we modified to be fully incremental and to allow user-interactive training. Our system is able to reliably recognize the room in which the robot is located after a short training time and is stable for long-term use. Empirical validation on a real robot and on an image database acquired in real environments is presented.
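The bag-of-words pipeline named above quantizes local image features against a visual vocabulary and classifies the resulting histogram per room. A minimal sketch using a nearest-centroid vote; the vocabulary, the stored room models, and the L1 vote are illustrative stand-ins for the paper's incremental, user-interactive method:

```python
import numpy as np

def quantize(descriptors, vocabulary):
    """Assign each local feature descriptor to its nearest visual word."""
    d2 = ((descriptors[:, None, :] - vocabulary[None, :, :]) ** 2).sum(-1)
    return d2.argmin(axis=1)

def bow_histogram(descriptors, vocabulary):
    words = quantize(descriptors, vocabulary)
    h = np.bincount(words, minlength=len(vocabulary)).astype(float)
    return h / h.sum()                   # normalized bag-of-words histogram

def recognize_room(descriptors, vocabulary, room_models):
    """room_models: dict room_name -> mean training histogram."""
    h = bow_histogram(descriptors, vocabulary)
    return min(room_models, key=lambda r: np.abs(room_models[r] - h).sum())
```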

263 citations


Journal ArticleDOI
01 Aug 2007
TL;DR: Results suggest that robot actions, even those without objects, may activate the human mirror neuron system, and both volitional and nonvolitional human actions also appear to activate the mirror neuron system to relatively the same degree.
Abstract: The current study investigated the properties of stimuli that lead to the activation of the human mirror neuron system, with an emphasis on those that are critically relevant for the perception of humanoid robots. Results suggest that robot actions, even those without objects, may activate the human mirror neuron system. Additionally, both volitional and nonvolitional human actions also appear to activate the mirror neuron system to relatively the same degree. Results from the current studies leave open the opportunity to use mirror neuron activation as a 'Turing test' for the development of truly humanoid robots.

Journal ArticleDOI
Jung-Yup Kim1, Ill-Woo Park1, Jun-Ho Oh1
TL;DR: An online control algorithm is proposed that considers local and global inclinations of the floor, by which a biped humanoid robot can adapt to the floor conditions.
Abstract: This paper describes a walking control algorithm for the stable walking of a biped humanoid robot on an uneven and inclined floor. Many walking control techniques have been developed based on the assumption that the walking surface is perfectly flat with no inclination. Accordingly, most biped humanoid robots have performed dynamic walking on well designed flat floors. In reality, however, a typical room floor that appears to be flat has local and global inclinations of about 2°. It is important to note that even slight unevenness of a floor can cause serious instability in biped walking robots. In this paper, the authors propose an online control algorithm that considers local and global inclinations of the floor, by which a biped humanoid robot can adapt to the floor conditions. For walking motions, a suitable walking pattern was designed first. Online controllers were then developed and activated in suitable periods during a walking cycle. The walking control algorithm was successfully tested and proved through walking experiments on an uneven and inclined floor using KHR-2 (KAIST Humanoid robot-2), a test robot platform of our biped humanoid robot, HUBO.
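The abstract's key mechanism, online controllers gated to specific phases of the walking cycle that compensate the measured inclination, can be caricatured as a phase-gated feedback law. The gains, phase window, and sensor variables below are invented for illustration and are not KHR-2's actual controllers:

```python
def ankle_compensation(phase, torso_pitch, torso_pitch_ref=0.0,
                       pitch_rate=0.0, k_p=0.5, k_d=0.05):
    """Ankle-pitch correction, active only in a single-support window.

    phase: normalized walking-cycle phase in [0, 1); outside the gated
    window the correction is switched off and the nominal pattern runs.
    """
    if not (0.1 <= phase < 0.5):          # illustrative single-support window
        return 0.0
    err = torso_pitch - torso_pitch_ref   # inclination error from the IMU
    return -k_p * err - k_d * pitch_rate  # PD correction added to the pattern
```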

Book ChapterDOI
01 Jun 2007
TL;DR: Recently, researchers' interests in robotics have been shifting from traditional studies on navigation and manipulation to human-robot interaction, yet scant attention has been paid to robot appearances.
Abstract: Why are people attracted to humanoid robots and androids? The answer is simple: because human beings are attuned to understand or interpret human expressions and behaviors, especially those that exist in their surroundings. As they grow, infants, who are supposedly born with the ability to discriminate various types of stimuli, gradually adapt and fine-tune their interpretations of detailed social clues from other voices, languages, facial expressions, or behaviors (Pascalis et al., 2002). Perhaps due to this functionality of nature and nurture, people have a strong tendency to anthropomorphize nearly everything they encounter. This is also true for computers or robots. In other words, when we see PCs or robots, some automatic process starts running inside us that tries to interpret them as human. The media equation theory (Reeves & Nass, 1996) first explicitly articulated this tendency within us. Since then, researchers have been pursuing the key element to make people feel more comfortable with computers or to create an easier and more intuitive interface to various information devices. This pursuit has also begun spreading in the field of robotics. Recently, researchers' interests in robotics have been shifting from traditional studies on navigation and manipulation to human-robot interaction. A number of studies have investigated how people respond to robot behaviors and how robots should behave so that people can easily understand them (Fong et al., 2003; Breazeal, 2004; Kanda et al., 2004). Many insights from developmental or cognitive psychology have been implemented and examined to see how they affect the human response or whether they help robots produce smooth and natural communication with humans. However, human-robot interaction studies have been neglecting one issue: the "appearance versus behavior problem." We empirically know that appearance, one of the most significant elements in communication, is a crucial factor in the evaluation of interaction (see Figure 1). The interactive robots developed so far have had very mechanical appearances that make them look like "robots." Researchers tried to make such interactive robots "humanoid" by equipping them with heads, eyes, or hands so that their appearance more closely resembled human beings and to enable them to make such analogous human movements or gestures as staring, pointing, and so on. Functionality was considered the primary concern in improving communication with humans. In this manner, many studies have compared robots with different behaviors. Thus far, scant attention has been paid to robot appearances.

Proceedings ArticleDOI
01 Aug 2007
TL;DR: It is demonstrated that robots and people can effectively and intuitively work together by directly handing objects to one another, and a robotic application that relies on this form of human-robot interaction is presented.
Abstract: For manipulation tasks, the transfer of objects between humans and robots is a fundamental way to coordinate activity and cooperatively perform useful work. Within this paper we demonstrate that robots and people can effectively and intuitively work together by directly handing objects to one another. First, we present experimental results that demonstrate that subjects without explicit instructions or robotics expertise can successfully hand objects to a robot and take objects from a robot in response to reaching gestures. Moreover, when handing an object to the robot, subjects control the object's position and orientation to match the configuration of the robot's hand, thereby simplifying robotic grasping and offering opportunities to simplify the manipulation task. Second, we present a robotic application that relies on this form of human-robot interaction. This application enables a humanoid robot to help a user place objects on a shelf, perform bimanual insertion tasks, and hold a box within which the user can place objects. By handing appropriate objects to the robot, the human directly and intuitively controls the robot. Through this interaction, the human and robot complement one another's abilities and work together to achieve results.

Journal ArticleDOI
TL;DR: The Robota project constructs a series of multiple-degrees-of-freedom, doll-shaped humanoid robots whose physical features resemble those of a human baby; as discussed by the authors, these robots have been applied as assistive technologies in behavioral studies with low-functioning children with autism.
Abstract: The Robota project constructs a series of multiple-degrees-of-freedom, doll-shaped humanoid robots, whose physical features resemble those of a human baby. The Robota robots have been applied as assistive technologies in behavioral studies with low-functioning children with autism. These studies investigate the potential of using an imitator robot to assess children's imitation ability and to teach children simple coordinated behaviors. In this article, the authors review the recent technological developments that have made the Robota robots suitable for use with children with autism. They critically appraise the main outcomes of two sets of behavioral studies conducted with Robota and discuss how these results inform future development of the Robota robots and robots in general for the rehabilitation of children with complex developmental disabilities.

Journal ArticleDOI
TL;DR: Humanoid robots and ubiquitous sensors are integrated in an autonomous system to assist visitors at an Osaka Science Museum exhibit, showing how difficult even simple recognition functions such as identifying an individual become in open environments.
Abstract: One objective of the Intelligent Robotics and Communication Laboratories is to develop an intelligent communication robot that supports people in an open everyday environment by interacting with them. A humanoid robot can help achieve this objective because its physical structure lets it interact through human-like body movements such as shaking hands, greeting, and pointing. Both adults and children are more likely to understand such interactions than interactions with an electronic interface such as a touch panel or buttons. To behave intelligently during an interaction, a robot requires many types of information about its environment and the people with whom it interacts. However, in open everyday environments, simple recognition functions such as identifying an individual are difficult because the presence and movement of a large number of people as well as unfavorable illumination and background conditions affect the robot's sensing ability. We integrated humanoid robots and ubiquitous sensors in an autonomous system to assist visitors at an Osaka Science Museum exhibit.

Book ChapterDOI
01 Jun 2007
TL;DR: This chapter introduces the paradigm 'Limit Cycle Walking', a new stability paradigm with fewer artificial constraints and thus more freedom for finding more efficient, natural, fast and robust walking motions.
Abstract: This chapter introduces the paradigm 'Limit Cycle Walking'. This paradigm for the design and control of two-legged walking robots can lead to unprecedented performance in terms of speed, efficiency, disturbance rejection and versatility. This is possible because this paradigm imposes fewer artificial constraints on the robot's walking motion compared to other existing paradigms. The application of artificial constraints is a commonly adopted and successful approach to bipedal robotic gait synthesis. The approach is similar to the successful development of factory robots, which depend on their constrained, structured environment. For robotic walking, the artificial constraints are useful to alleviate the difficult problem of stabilizing the complex dynamic walking motion. Using artificial stability constraints enables the creation of robotic gait, but at the same time inherently limits the performance of the gait that can be obtained. The more restrictive the constraints are, the less freedom is left for optimizing performance. The oldest and most constrained paradigm for robot walking is that of 'static stability', used in the first successful creation of bipedal robots in the early 70's. Static stability means that the vertical projection of the Center of Mass stays within the support polygon formed by the feet. It is straightforward to ensure walking stability this way, but it drastically limits the speed of the walking motions that can be obtained. Therefore, currently most humanoid robots use the more advanced 'Zero Moment Point' (ZMP) paradigm (Vukobratovic et al., 1970). The stability is ensured with the ZMP criterion, which constrains the stance foot to remain in flat contact with the floor at all times. This constraint is less restrictive than static walking because the Center of Mass may travel beyond the support polygon. Nevertheless, these robots are still under-achieving in terms of efficiency, disturbance handling, and natural appearance compared to human walking (Collins et al., 2005). The solution to increase the performance is to release the constraints even more, which will require a new way of measuring and ensuring stability. This is the core of 'Limit Cycle Walking': a new stability paradigm with fewer artificial constraints and thus more freedom for finding more efficient, natural, fast and robust walking motions. Although this is the first time we propose and define the term 'Limit Cycle Walking', the method has been in use for a while now. The core of the method is to analyze the walking motion as a limit cycle, as first proposed by Hurmuzlu (Hurmuzlu and Moskowitz, 1986). Most of the research on 'Passive Dynamic Walking' initiated by McGeer (McGeer, 1990a) follows this stability method. But also various actuated bipedal robots that have been built around the world fall in the category of 'Limit Cycle Walkers'.
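Limit-cycle stability is assessed through the stride-to-stride (Poincaré) map: fix an event such as heel strike, simulate one stride, and check that all eigenvalues of the linearized return map lie inside the unit circle. A generic numerical sketch, assuming a user-supplied stride_map function obtained from a walker simulation:

```python
import numpy as np

def floquet_multipliers(stride_map, x_star, eps=1e-6):
    """Eigenvalues of the linearized stride-to-stride (Poincare) map.

    stride_map: maps the state at one heel strike to the state at the
    next (one simulated stride); x_star: a fixed point of that map,
    i.e. the periodic gait whose stability is being tested.
    """
    x_star = np.asarray(x_star, dtype=float)
    n = x_star.size
    A = np.empty((n, n))
    for i in range(n):
        dx = np.zeros(n)
        dx[i] = eps
        # Central-difference column i of the stride-map Jacobian.
        A[:, i] = (stride_map(x_star + dx) - stride_map(x_star - dx)) / (2 * eps)
    return np.linalg.eigvals(A)

# The gait is a stable limit cycle iff all multipliers satisfy |lam| < 1;
# the walk need not be stable at any single instant within the stride.
```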

Proceedings ArticleDOI
01 Nov 2007
TL;DR: The paper outlines the latest results on the development of the new musculoskeletal humanoid Kojiro, the successor to the authors' previous Kotaro.
Abstract: We have been promoting a project of musculoskeletal humanoids. The project aims at the long-term goal of human-symbiotic robots as well as the mid-term goal of necessary design and control concepts for musculoskeletal robots. This paper presents the concepts and aim of the project and also shows the outline of our latest results about the development of the new musculoskeletal humanoid Kojiro, which is the succeeding version of our previous Kotaro.

Journal ArticleDOI
TL;DR: The systems for spontaneous speech recognition, multimodal dialogue processing, and visual perception of a user, which includes localization, tracking, and identification of the user, recognition of pointing gestures, as well as the recognition of a person's head orientation are presented.
Abstract: In this paper, we present our work in building technologies for natural multimodal human-robot interaction. We present our systems for spontaneous speech recognition, multimodal dialogue processing, and visual perception of a user, which includes localization, tracking, and identification of the user, recognition of pointing gestures, as well as the recognition of a person's head orientation. Each of the components is described in the paper and experimental results are presented. We also present several experiments on multimodal human-robot interaction, such as interaction using speech and gestures, the automatic determination of the addressee during human-human-robot interaction, as well as on interactive learning of dialogue strategies. The work and the components presented here constitute the core building blocks for audiovisual perception of humans and multimodal human-robot interaction used for the humanoid robot developed within the German research project (Sonderforschungsbereich) on humanoid cooperative robots.

Journal ArticleDOI
TL;DR: A framework that achieves the Learning from Observation paradigm for learning dance motions is proposed, which enables a humanoid robot to imitate dance motions captured from human demonstrations.
Abstract: This paper proposes a framework that achieves the Learning from Observation paradigm for learning dance motions. The framework enables a humanoid robot to imitate dance motions captured from human demonstrations. This study especially focuses on leg motions to achieve a novel attempt in which a biped-type robot imitates not only upper body motions but also leg motions including steps. Body differences between the robot and the original dancer make the problem difficult because the differences prevent the robot from straightforwardly following the original motions and they also change dynamic body balance. We propose leg task models, which play a key role in solving the problem. Low-level tasks in leg motion are modelled so that they clearly provide essential information required for keeping dynamic stability and important motion characteristics. The models divide the problem of adapting motions into the problem of recognizing a sequence of the tasks and the problem of executing the task sequence. We have developed a method for recognizing the tasks from captured motion data and a method for generating the motions of the tasks that can be executed by existing robots including HRP-2. HRP-2 successfully performed the generated motions, which imitated a traditional folk dance performed by human dancers.

Journal ArticleDOI
TL;DR: HUBO has greater mechanical stiffness and a more detailed frame design than KHR-2: the stiffness of the frame was increased, and the detailed design around the joints and link frames was either modified or fully redesigned.
Abstract: The Korea Advanced Institute of Science and Technology (KAIST) humanoid robot-1 (KHR-1) was developed for the purpose of researching the walking action of bipeds. KHR-1, which has no hands or head, has 21 d.o.f.: 12 d.o.f. in the legs, 1 d.o.f. in the torso and 8 d.o.f. in the arms. The second version of this humanoid robot, KHR-2 (which has 41 d.o.f.) can walk on a living-room floor; it also moves and looks like a human. The third version, KHR-3 (HUBO), has more human-like features, a greater variety of movements and a more human-friendly character. We present the mechanical design of HUBO, including the design concept, the lower-body design, the upper-body design and the actuator selection of joints. Previously we developed and published details of KHR-1 and KHR-2. The HUBO platform, which is based on KHR-2, has 41 d.o.f., stands 125 cm tall and weighs 55 kg. From a mechanical point of view, HUBO has greater mechanical stiffness and a more detailed frame design than KHR-2: the stiffness of the frame was increased, and the detailed design around the joints and link frames was either modified or fully redesigned.

Proceedings ArticleDOI
10 Dec 2007
TL;DR: A robust model-based three-dimensional tracking system is accelerated with programmable graphics hardware to operate online at frame rate during locomotion of a humanoid robot, recovering the full 6-degree-of-freedom pose of viewable objects relative to the robot.
Abstract: For humanoid robots to fully realize their biped potential in a three-dimensional world and step over, around or onto obstacles such as stairs, appropriate and efficient approaches to execution, planning and perception are required. To this end, we have accelerated a robust model-based three-dimensional tracking system with programmable graphics hardware to operate online at frame rate during locomotion of a humanoid robot. The tracker recovers the full 6-degree-of-freedom pose of viewable objects relative to the robot. Leveraging the computational resources of the GPU for perception has enabled us to increase our tracker's robustness to the significant camera displacement and camera shake typically encountered during humanoid navigation. We have combined our approach with a footstep planner and a controller capable of adaptively adjusting the height of swing leg trajectories. The resulting integrated perception-planning-action system has allowed an HRP-2 humanoid robot to successfully and rapidly localize, approach and climb stairs, as well as to avoid obstacles during walking.

Proceedings ArticleDOI
10 Mar 2007
TL;DR: This paper proposes using robots as a passive-social medium, in which multiple robots converse with each other, and looks for a way to attract people's interest in the information that robots convey.
Abstract: This paper reports a method that uses humanoid robots as a communication medium. There are many interactive robots under development, but due to their limited perception, their interactivity is still far poorer than that of humans. Our approach in this paper is to limit robots' purpose to a non-interactive medium and to look for a way to attract people's interest in the information that robots convey. We propose using robots as a passive-social medium, in which multiple robots converse with each other. We conducted a field experiment at a train station for eight days to investigate the effects of a passive-social medium.

Book ChapterDOI
01 Jan 2007
TL;DR: The enactive approach to cognition is outlined, drawing out the implications for phylogenetic configuration, the necessity for ontogenetic development, and the importance of humanoid embodiment; the iCub's mechanical and electronic specifications, its software architecture, and its cognitive architecture are then described.
Abstract: This paper describes a multi-disciplinary initiative to promote collaborative research in enactive artificial cognitive systems by developing the iCub: an open-systems, 53-degree-of-freedom cognitive humanoid robot. At 94 cm tall, the iCub is the same size as a three-year-old child. It will be able to crawl on all fours and sit up, its hands will allow dexterous manipulation, and its head and eyes are fully articulated. It has visual, vestibular, auditory, and haptic sensory capabilities. As an open system, the design and documentation of all hardware and software are licensed under the Free Software Foundation GNU licences so that the system can be freely replicated and customized. We begin this paper by outlining the enactive approach to cognition, drawing out the implications for phylogenetic configuration, the necessity for ontogenetic development, and the importance of humanoid embodiment. This is followed by a short discussion of our motivation for adopting an open-systems approach. We proceed to describe the iCub's mechanical and electronic specifications, its software architecture, and its cognitive architecture. We conclude by discussing the iCub phylogeny, i.e. the robot's intended innate abilities, and a scenario for ontogenesis based on human neonatal development.

Journal ArticleDOI
01 Jan 2007
TL;DR: An interactive, multimodal RPD framework using active teaching methods that places the human teacher in the robot's learning loop is presented, and an incremental teaching scenario is proposed based on insights from various fields addressing developmental, psychological, and social issues related to teaching mechanisms in humans.
Abstract: Robot programming by demonstration (RPD) covers methods by which a robot learns new skills through human guidance. We present an interactive, multimodal RPD framework using active teaching methods that places the human teacher in the robot's learning loop. Two experiments are presented in which observational learning is first used to demonstrate a manipulation skill to a HOAP-3 humanoid robot by using motion sensors attached to the teacher's body. Then, putting the robot through the motion, the teacher incrementally refines the robot's skill by moving its arms manually, providing the appropriate scaffolds to reproduce the action. An incremental teaching scenario is proposed based on insights from various fields addressing developmental, psychological, and social issues related to teaching mechanisms in humans. Based on this analysis, different benchmarks are suggested to evaluate the setup further.

Proceedings ArticleDOI
10 Mar 2007
TL;DR: In this paper, the authors present a framework for interactive task training of a mobile robot where the robot learns how to do various tasks while observing a human, and the robot listens to the human's speech and interprets the speech as behaviors that are required to be executed.
Abstract: Effective human/robot interfaces which mimic how humans interact with one another could ultimately lead to robots being accepted in a wider domain of applications. We present a framework for interactive task training of a mobile robot where the robot learns how to do various tasks while observing a human. In addition to observation, the robot listens to the human's speech and interprets the speech as behaviors that are required to be executed. This is especially important where individual steps of a given task may have contingencies that have to be dealt with depending on the situation. Finally, the context of the location where the task takes place and the people present factor heavily into the robot's interpretation of how to execute the task. In this paper, we describe the task training framework, describe how environmental context and communicative dialog with the human help the robot learn the task, and illustrate the utility of this approach with several experimental case studies.

Proceedings ArticleDOI
10 Dec 2007
TL;DR: An imitation learning algorithm for a humanoid robot is built on top of a general world model provided by learned object affordances, which is used to recognize the demonstration by another agent and to infer the task to be learned.
Abstract: In this paper we build an imitation learning algorithm for a humanoid robot on top of a general world model provided by learned object affordances. We consider that the robot has previously learned a task independent affordance-based model of its interaction with the world. This model is used to recognize the demonstration by another agent (a human) and infer the task to be learned. We discuss several important problems that arise in this combined framework, such as the influence of an inaccurate model in the recognition of the demonstration. We illustrate the ideas in the paper with some experimental results obtained with a real robot.
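Recognition of the demonstration then reduces to Bayesian inference in the learned affordance model: given the object and the observed effect, pick the action that best explains them. A toy discrete sketch; the probability table is a made-up placeholder, not a learned model:

```python
# Toy affordance model: P(effect | action, object). Values are invented.
p_effect = {
    ("grasp", "ball"): {"moved": 0.2, "held": 0.8},
    ("tap",   "ball"): {"moved": 0.9, "held": 0.1},
    ("grasp", "box"):  {"moved": 0.1, "held": 0.9},
    ("tap",   "box"):  {"moved": 0.6, "held": 0.4},
}

def infer_action(obj, observed_effect, actions=("grasp", "tap")):
    """Recognize the demonstrated action as the one most likely to have
    produced the observed effect on this object (uniform action prior)."""
    return max(actions, key=lambda a: p_effect[(a, obj)][observed_effect])

print(infer_action("ball", "moved"))   # -> "tap"
```

An inaccurate affordance model skews exactly this argmax, which is the failure mode the paper discusses.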

Journal ArticleDOI
TL;DR: This paper proposes a walking pattern generation method, a kinematic resolution method of center of mass (CoM) Jacobian with embedded motions, and a design method for a posture/walking controller for humanoid robots that achieves disturbance input-to-state stability for the simplified bipedal walking robot model.
Abstract: This paper proposes the walking pattern generation method, the kinematic resolution method of center of mass (CoM) Jacobian with embedded motions, and the design method of the posture/walking controller for humanoid robots. First, the walking pattern is generated using the simplified model for the bipedal robot. Second, the kinematic resolution of the CoM Jacobian with embedded motions keeps a humanoid robot balanced automatically during movement of all other limbs. In effect, it gives the humanoid robot the ability of whole-body coordination. Third, the posture/walking controller is completed by adding the CoM controller minus the zero moment point controller to the suggested kinematic resolution method. We prove that the proposed posture/walking controller brings disturbance input-to-state stability for the simplified bipedal walking robot model. Finally, the effectiveness of the suggested posture/walking control method is shown through experiments on the arm dancing and walking of a humanoid robot.
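The embedding idea admits a familiar resolved-rate form: track the desired CoM velocity through the CoM Jacobian while limb motions are projected so they cannot disturb the CoM. The sketch below shows that generic form (Jacobians assumed given); the paper's own formulation embeds the limb motions directly in the CoM Jacobian resolution:

```python
import numpy as np

def com_resolved_rates(J_com, xdot_com_des, qdot_limbs):
    """qdot = J^+ xdot_des + (I - J^+ J) qdot_limbs  (schematic form).

    J_com: (3, n) CoM Jacobian; xdot_com_des: (3,) CoM velocity from the
    walking pattern; qdot_limbs: (n,) desired limb motion (e.g. arm
    dancing), projected into the null space so the CoM is undisturbed.
    """
    J_pinv = np.linalg.pinv(J_com)
    N = np.eye(J_com.shape[1]) - J_pinv @ J_com   # null-space projector
    return J_pinv @ xdot_com_des + N @ qdot_limbs
```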

Proceedings ArticleDOI
10 Apr 2007
TL;DR: The reaction mass pendulum (RMP) model is introduced, a 3D generalization of the better-known reaction wheel pendulum, which provides additional analytical insights into legged robot dynamics, especially for motions involving dominant rotation, and leads to a simpler class of control laws.
Abstract: A number of conceptually simple but behavior-rich "inverted pendulum" humanoid models have greatly enhanced the understanding and analytical insight of humanoid dynamics. However, these models do not incorporate the robot's angular momentum properties, a critical component of its dynamics. We introduce the reaction mass pendulum (RMP) model, a 3D generalization of the better-known reaction wheel pendulum. The RMP model augments the existing models by compactly capturing the robot's centroidal momenta through its composite rigid body (CRB) inertia. This model provides additional analytical insights into legged robot dynamics, especially for motions involving dominant rotation, and leads to a simpler class of control laws. In this paper we show how a humanoid robot of general geometry and dynamics can be mapped into its equivalent RMP model. A movement is subsequently mapped to the time evolution of the RMP. We also show how an "inertia shaping" control law can be designed based on the RMP.
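In standard centroidal-dynamics notation (not necessarily the paper's exact symbols), the mapping rests on the composite rigid body inertia about the CoM and its principal decomposition:

```latex
% CRB inertia about the CoM, summed over links i (R_i: link orientation,
% \bar{I}_i: link inertia about its own CoM, c_i: offset of link i's CoM
% from the robot CoM, m_i: link mass):
\[
  I_C(q) \;=\; \sum_i \left( R_i \bar{I}_i R_i^{\top}
      + m_i\, [c_i]_{\times} [c_i]_{\times}^{\top} \right),
  \qquad
  I_C \;=\; Q\, \mathrm{diag}(\lambda_1, \lambda_2, \lambda_3)\, Q^{\top}.
\]
% The equivalent RMP places the robot's total mass at the CoM and sizes the
% reaction-mass ellipsoid by the principal centroidal inertias \lambda_j;
% "inertia shaping" then servos I_C(q) toward a desired value.
```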

Proceedings ArticleDOI
10 Dec 2007
TL;DR: A balance controller that allows a humanoid to recover from large disturbances and still maintain an upright posture is presented, making it possible to control complex robots using this simple control.
Abstract: This paper presents a balance controller that allows a humanoid to recover from large disturbances and still maintain an upright posture. Balance is achieved by integral control, which decouples the dynamics and produces smooth torque signals. Simulation shows the controller performs better than other simple balance controllers. Because the controller is inspired by human balance strategies, we compare human motion capture and force plate data to simulation. A model tracking controller is also presented, making it possible to control complex robots using this simple control.