
Showing papers on "Humanoid robot published in 2004"


Proceedings ArticleDOI
27 Sep 2004
TL;DR: The development of the humanoid robot HRP-2 is presented, including improved biped locomotion that copes with uneven surfaces and whole-body motion that lets the robot get up on its own after tipping over.
Abstract: The development of the humanoid robot HRP-2 is presented in this paper. HRP-2 is a humanoid robotics platform, which we developed in phase two of HRP. HRP was a humanoid robotics project run by the Ministry of Economy, Trade and Industry (METI) of Japan for five years, from FY1998 to FY2002. The biped locomotion ability of HRP-2 is improved so that it can cope with uneven surfaces, walk at two thirds of human speed, and walk on a narrow path. Its whole-body motion ability is also improved so that it can get up by itself if it tips over safely. The appearance design, mechanisms, electrical systems, specifications, and features upgraded from the prototype are also introduced.

897 citations


Journal ArticleDOI
TL;DR: The results suggest that humanoid robots may be appropriate for settings in which people have to delegate responsibility to these robots or when the task is too demanding for people to do, and when complacency is not a major concern.
Abstract: The use of autonomous, mobile professional service robots in diverse workplaces is expected to grow substantially over the next decade. These robots often will work side by side with people, collaborating with employees on tasks. Some roboticists have argued that, in these cases, people will collaborate more naturally and easily with humanoid robots as compared with machine-like robots. It is also speculated that people will rely on and share responsibility more readily with robots that are in a position of authority. This study sought to clarify the effects of robot appearance and relative status on human-robot collaboration by investigating the extent to which people relied on and ceded responsibility to a robot coworker. In this study, a 3 × 3 experiment was conducted with human likeness (human, human-like robot, and machine-like robot) and status (subordinate, peer, and supervisor) as dimensions. As far as we know, this study is one of the first experiments examining how people respond to robotic coworkers. As such, this study attempts to design a robust and transferable sorting and assembly task that capitalizes on the types of tasks robots are expected to do and is embedded in a realistic scenario in which the participant and confederate are interdependent. The results show that participants retained more responsibility for the successful completion of the task when working with a machine-like as compared with a humanoid robot, especially when the machine-like robot was subordinate. These findings suggest that humanoid robots may be appropriate for settings in which people have to delegate responsibility to these robots or when the task is too demanding for people to do, and when complacency is not a major concern. Machine-like robots, however, may be more appropriate when robots are expected to be unreliable, are less well-equipped for the task than people are, or in other situations in which personal responsibility should be emphasized.

379 citations


Journal ArticleDOI
TL;DR: An approach for teaching a humanoid robot is presented that will enable the robot to learn typical tasks required in everyday household environments; the main focus is on knowledge representation, in order to abstract problem-solving strategies and transfer them onto the robot system.

315 citations


Journal ArticleDOI
25 Oct 2004
TL;DR: An interactive humanoid robot that autonomously communicates with humans by speaking and gesturing, designed to participate in human society as a partner, is developed, and a new analytical approach to human-robot interaction is suggested.
Abstract: We report the development and evaluation of a new interactive humanoid robot that communicates with humans and is designed to participate in human society as a partner. A human-like body will provide an abundance of nonverbal information and enable us to smoothly communicate with the robot. To achieve this, we developed a humanoid robot that autonomously interacts with humans by speaking and gesturing. Interaction is achieved through a large number of interactive behaviors, which are developed by using a visualizing tool for understanding the developed complex system. Each interactive behavior is designed by using knowledge obtained through cognitive experiments and implemented by using situated recognition. The robot is used as a testbed for studying embodied communication. Our strategy is to analyze human-robot interaction in terms of body movements using a motion-capturing system that allows us to measure the body movements in detail. We performed experiments to compare the body movements with subjective evaluation based on a psychological method. The results reveal the importance of well-coordinated behaviors as well as the performance of the developed interactive behaviors and suggest a new analytical approach to human-robot interaction.

260 citations


Book ChapterDOI
TL;DR: OpenHRP is expected to initiate the exploration of humanoid robotics on open architecture software and hardware, thanks to the unification of the controllers and the examined consistency between the simulator and a real humanoid robot.
Abstract: This paper introduces an open architecture humanoid robotics platform (OpenHRP for short) on which various building blocks of humanoid robotics can be investigated. OpenHRP is a virtual humanoid robot platform with a compatible humanoid robot, and consists of a simulator of humanoid robots and a motion control library for them which can also be applied to a compatible humanoid robot as is. OpenHRP also has a view simulator of humanoid robots on which humanoid robot vision can be studied. The consistency between the simulator and the robot is enhanced by introducing a new algorithm to simulate repulsive force and torque between contacting objects. OpenHRP is expected to initiate the exploration of humanoid robotics on open architecture software and hardware, thanks to the unification of the controllers and the examined consistency between the simulator and a real humanoid robot.

258 citations


Proceedings ArticleDOI
04 Jul 2004
TL;DR: ST-Isomap augments the existing Isomap framework to consider temporal relationships in local neighborhoods that can be propagated globally via a shortest-path mechanism, extending nonlinear dimension reduction to data with both spatial and temporal relationships.
Abstract: We present an extension of Isomap nonlinear dimension reduction (Tenenbaum et al., 2000) for data with both spatial and temporal relationships. Our method, ST-Isomap, augments the existing Isomap framework to consider temporal relationships in local neighborhoods that can be propagated globally via a shortest-path mechanism. Two instantiations of ST-Isomap are presented for sequentially continuous and segmented data. Results from applying ST-Isomap to real-world data collected from human motion performance and humanoid robot teleoperation are also presented.
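The core idea, a spatial k-nearest-neighbour graph augmented with discounted edges between temporally adjacent frames, then geodesic distances and classical MDS, can be sketched as follows. This is a toy sketch with our own parameter names, not the authors' implementation, whose neighbourhood rules are richer:

```python
import numpy as np

def st_isomap(X, k=3, temporal_discount=0.5, n_components=2):
    """Toy ST-Isomap sketch: spatial k-NN edges plus discounted temporal
    edges between successive frames, Floyd-Warshall geodesics, then
    classical MDS. Illustrative only."""
    n = len(X)
    D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
    G = np.full((n, n), np.inf)
    for i in range(n):                        # spatial neighbours
        for j in np.argsort(D[i])[1:k + 1]:
            G[i, j] = G[j, i] = D[i, j]
    for i in range(n - 1):                    # temporal neighbours, discounted
        d = temporal_discount * D[i, i + 1]
        G[i, i + 1] = G[i + 1, i] = min(G[i, i + 1], d)
    geo = G.copy()
    np.fill_diagonal(geo, 0.0)
    for m in range(n):                        # Floyd-Warshall shortest paths
        geo = np.minimum(geo, geo[:, [m]] + geo[[m], :])
    J = np.eye(n) - np.ones((n, n)) / n      # classical MDS on geodesics
    B = -0.5 * J @ (geo ** 2) @ J
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:n_components]
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))
```

The temporal chain guarantees a connected graph, which plain spatial kNN does not; the discount pulls temporally adjacent samples closer in the embedding.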

237 citations


Proceedings ArticleDOI
06 Jul 2004
TL;DR: The integrated motion control method to make a bipedal humanoid walk, jump and run is proposed based on the concept of the dynamics filter, which assures that the force and the moment generated by the robot can equilibrate with that caused by the environment.
Abstract: This paper proposes the integrated motion control method to make a bipedal humanoid walk, jump and run. This method generates dynamically consistent motion patterns in real-time based on the concept of the dynamics filter, which assures that the force and the moment generated by the robot can equilibrate with that caused by the environment. The validity of the algorithm is verified by the dynamic simulation. The proposed method is applied to the real humanoid "QRIO" under the adaptive controls, and stable walking, jumping and running including the transitions between them are realized.

225 citations


Proceedings ArticleDOI
06 Jul 2004
TL;DR: Methods for path planning and obstacle avoidance for the humanoid robot QRIO are presented, allowing the robot to autonomously walk around in a home environment, based on plane extraction from data captured by a stereo-vision system that has been developed specifically for QRIO.
Abstract: This work presents methods for path planning and obstacle avoidance for the humanoid robot QRIO, allowing the robot to autonomously walk around in a home environment. For an autonomous robot, obstacle detection and localization as well as representing them in a map are crucial tasks for the success of the robot. Our approach is based on plane extraction from data captured by a stereo-vision system that has been developed specifically for QRIO. We briefly overview the general software architecture composed of perception, short and long term memory, behavior control, and motion control, and emphasize on our methods for obstacle detection by plane extraction, occupancy grid mapping, and path planning. Experimental results complete the description of our system.
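The mapping-and-planning pipeline can be illustrated with a generic grid planner. The following is a plain A* search over a binary occupancy grid, a standard technique rather than QRIO's actual planner:

```python
import heapq

def astar(grid, start, goal):
    """A* over a 4-connected binary occupancy grid (1 = occupied).
    Generic illustration of grid path planning, not QRIO's planner."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan
    frontier = [(h(start), 0, start)]
    came_from = {start: None}
    cost = {start: 0}
    while frontier:
        _, g, cur = heapq.heappop(frontier)
        if cur == goal:                       # reconstruct path
            path = [cur]
            while came_from[cur] is not None:
                cur = came_from[cur]
                path.append(cur)
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                    and grid[nxt[0]][nxt[1]] == 0):
                ng = g + 1
                if ng < cost.get(nxt, float('inf')):
                    cost[nxt] = ng
                    came_from[nxt] = cur
                    heapq.heappush(frontier, (ng + h(nxt), ng, nxt))
    return None                               # goal unreachable
```

In the paper's setting, the grid cells would be marked occupied from the plane-extraction step before planning.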

204 citations


Proceedings ArticleDOI
28 Sep 2004
TL;DR: The authors' systems for spontaneous speech recognition, multimodal dialogue processing and visual perception of a user, which includes the recognition of pointing gestures as well as the recognition of a person's head orientation, are presented.
Abstract: In this paper we present our ongoing work in building technologies for natural multimodal human-robot interaction. We present our systems for spontaneous speech recognition, multimodal dialogue processing and visual perception of a user, which includes the recognition of pointing gestures as well as the recognition of a person's head orientation. Each of the components is described in the paper and experimental results are presented. In order to demonstrate and measure the usefulness of such technologies for human-robot interaction, all components have been integrated on a mobile robot platform and have been used for real-time human-robot interaction in a kitchen scenario.

201 citations


Journal ArticleDOI
TL;DR: A general policy for learning the relevant features of an imitation task is developed, along with a general metric that optimizes the reproduction of the task once the strategy has been determined.

195 citations


Book ChapterDOI
01 Jan 2004
TL;DR: In this paper, a longitudinal study with four children with autism was presented, where the children were repeatedly exposed to the humanoid robot over a period of several months, and different behavioural criteria (including eye gaze, touch, and imitation) were evaluated based on the video data of the interactions.
Abstract: This work is part of the Aurora project which investigates the possible use of robots in therapy and education of children with autism (Aurora, 2003), based on findings that people with autism enjoy interacting with computers, e.g. (Powell, 1996). In most of our trials we have been using mobile robots, e.g. (Dautenhahn and Werry, 2002). More recently we tested the use of a humanoid robotic doll. In (Dautenhahn and Billard, 2002) we reported on a first set of trials with 14 autistic subjects interacting with this doll. In this chapter we discuss lessons learnt from our previous study, and introduce a new approach, heavily inspired by therapeutic issues. A longitudinal study with four children with autism is presented. The children were repeatedly exposed to the humanoid robot over a period of several months. Our aim was to encourage imitation and social interaction skills. Different behavioural criteria (including eye gaze, touch, and imitation) were evaluated based on the video data of the interactions. The chapter presents results that clearly demonstrate the crucial need for long-term studies in order to reveal the full potential of robots in the therapy and education of children with autism.

Journal ArticleDOI
TL;DR: This paper exploits the similarity between human motion and humanoid robot motion to generate joint trajectories for humanoids and proposes an automatic approach to relate humanoid robot kinematics parameters to the kinematic parameters of a human performer.

Proceedings ArticleDOI
20 Sep 2004
TL;DR: This paper describes the work towards building a dynamic collaborative framework enabling human-robot collaboration of this nature, and presents a goal-driven hierarchical task representation, and a resulting collaborative turn-taking system, implementing many of the above-mentioned requirements of a robotic teammate.
Abstract: Many new applications for robots require them to work alongside people as capable members of human-robot teams. These include—in the long term—robots for homes, hospitals, and offices, but already exist in more advanced settings, such as space exploration. The work reported in this paper is part of an ongoing collaboration with NASA JSC to develop Robonaut, a humanoid robot envisioned to work with human astronauts on maintenance operations for space missions. To date, work with Robonaut has mainly investigated performing a joint task with a human in which the robot is being teleoperated. However, perceptive disorientation, sensory noise, and control delays make teleoperation cognitively exhausting even for a highly skilled operator. Control delays in long range teleoperation also make shoulder-to-shoulder teamwork difficult. These issues motivate our work to make robots collaborating with people more autonomous. Our work focuses on a scenario of a human and an autonomous humanoid robot working together shoulder-to-shoulder, sharing the workspace and the objects required to complete a task. A robotic member of such a team must be able to work towards a shared goal, and be in agreement with the human as to the sequence of actions that will be required to reach that goal, as well as dynamically adjust its plan according to the human's actions. Human-robot collaboration of this nature is an important yet relatively unexplored kind of human-robot interaction. This paper describes our work towards building a dynamic collaborative framework enabling such an interaction. We discuss our architecture and its implementation for controlling a humanoid robot, working on a task with a human partner. Our approach stems from Joint Intention Theory, which shows that for joint action to emerge, teammates must communicate to maintain a set of shared beliefs and to coordinate their actions towards the shared plan. In addition, they must demonstrate commitment to doing their own part, to the others doing theirs, to providing mutual support, and finally—to a mutual belief as to the state of the task. We argue that to this end, the concept of task and action goals is central. We therefore present a goal-driven hierarchical task representation, and a resulting collaborative turn-taking system, implementing many of the above-mentioned requirements of a robotic teammate. Additionally, we show the implementation of relevant social skills supporting our collaborative framework. Finally, we present a demonstration of our system for collaborative execution of a hierarchical object manipulation task by a robot-human team. Our humanoid robot is able to divide the task between the participants while taking into consideration the collaborator's actions when deciding what to do next. It is capable of asking for mutual support in the cases where it is unable to perform a certain action. To facilitate this interaction, the robot actively maintains a clear and intuitive channel of communication to synchronize goals, task states, and actions, resulting in a fluid, efficient collaboration.

BookDOI
17 May 2004
TL;DR: An android robot that has an appearance similar to that of a human and several actuators generating micro behaviors is developed, and a new research direction based on the android robot is proposed.
Abstract: Behavior or appearance? This is a fundamental problem in robot development: not only the behavior but also the appearance of a robot influences human-robot interaction. There is, however, no research approach to tackling this problem. In order to state the problem, we have developed an android robot that has an appearance similar to that of a human and several actuators generating micro behaviors. This paper proposes a new research direction based on the android robot.

Journal ArticleDOI
07 Jun 2004
TL;DR: A simple and effective gait-generation method, which imitates the energy behavior in every walking cycle considering the zero-moment point condition and other factors of the active walker, is proposed.
Abstract: This paper proposes novel energy-based gait generation and control methods for biped robots based on an analysis of passive dynamic walking. First, we discuss the essence of dynamic walking using a passive walker on a gentle slope from the mechanical energy point of view. Second, we propose a simple and effective gait-generation method, which imitates the energy behavior in every walking cycle considering the zero-moment point condition and other factors of the active walker. The control strategy is formed by taking into account the features of mechanical energy dissipation and restoration. Following the proposed method, the robot can exhibit a natural and reasonable walk on a level ground without any gait planning and design in advance. The effectiveness of the method is examined through numerical simulations and experiments.
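The energy bookkeeping behind passive walking can be made concrete with the rimless wheel, the standard toy model of passive dynamic walking (not the paper's own robot model): gravity injects energy each step, the heel-strike impact dissipates some of it, and the steady gait is the balance point of the two:

```python
import math

def rimless_wheel_map(omega, alpha, gamma, g=9.81, L=1.0):
    """One step of the classic rimless-wheel return map (a toy model of
    passive walking, not the paper's model). omega: post-impact angular
    velocity; alpha: half the inter-spoke angle; gamma: slope (radians).
    Between impacts the wheel gains (4g/L)*sin(alpha)*sin(gamma) in
    omega^2 from the slope; the impact scales omega by cos(2*alpha)."""
    gain = (4 * g / L) * math.sin(alpha) * math.sin(gamma)
    omega_pre = math.sqrt(omega**2 + gain)     # just before heel strike
    return math.cos(2 * alpha) * omega_pre     # impact loss
```

Iterating the map from almost any initial speed converges to the fixed point omega* = cos(2*alpha) * sqrt(gain / (1 - cos^2(2*alpha))), the steady gait at which per-step energy restoration exactly cancels impact dissipation, which is the balance the paper's gait generator imitates on level ground with actuation replacing the slope.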

Proceedings ArticleDOI
28 Sep 2004
TL;DR: A learning mechanism is presented, implemented on a humanoid robot, to demonstrate that a collaborative dialog framework allows a robot to efficiently learn a task from a human, generalize this ability to a new task configuration, and show commitment to the overall goal of the learned task.
Abstract: We view the problem of machine learning as a collaboration between the human and the machine. Inspired by human-style tutelage, we situate the learning problem within a dialog in which social interaction structures the learning experience, providing instruction, directing attention, and controlling the complexity of the task. We present a learning mechanism, implemented on a humanoid robot, to demonstrate that a collaborative dialog framework allows a robot to efficiently learn a task from a human, generalize this ability to a new task configuration, and show commitment to the overall goal of the learned task. We also compare this approach to traditional machine learning approaches.

Journal ArticleDOI
TL;DR: An overview of the work towards building socially intelligent, cooperative humanoid robots that can work and learn in partnership with people and a theoretical framework that is a novel combination of Joint Intention Theory and Situated Learning Theory is presented.
Abstract: This paper presents an overview of our work towards building socially intelligent, cooperative humanoid robots that can work and learn in partnership with people. People understand each other in social terms, allowing them to engage others in a variety of complex social interactions including communication, social learning, and cooperation. We present our theoretical framework that is a novel combination of Joint Intention Theory and Situated Learning Theory and demonstrate how this framework can be applied to develop our sociable humanoid robot, Leonardo. We demonstrate the robot's ability to learn quickly and effectively from natural human instruction using gesture and dialog, and then cooperate to perform a learned task jointly with a person. Such issues must be addressed to enable many new and exciting applications for robots that require them to play a long-term role in people's daily lives.

Proceedings ArticleDOI
07 Jun 2004
TL;DR: This paper focuses on analyzing the tradeoffs between maintaining dynamic roadmaps and applying an on-line bidirectional rapidly-exploring random tree (RRT) planner alone, which requires no preprocessing or maintenance.
Abstract: We evaluate the use of dynamic roadmaps for online motion planning in changing environments. When changes are detected in the workspace, the validity state of affected edges and nodes of a precompiled roadmap is updated accordingly. We concentrate in this paper on analyzing the tradeoffs between maintaining dynamic roadmaps and applying an on-line bidirectional rapidly-exploring random tree (RRT) planner alone, which requires no preprocessing or maintenance. We ground the analysis in several benchmarks in virtual environments with randomly moving obstacles. Different robot structures are used, including a 17 degrees-of-freedom model of NASA's Robonaut humanoid. Our results show that dynamic roadmaps can be both faster and more capable for planning difficult motions than using on-line planning alone. In particular, we investigate the method's scalability to 3D workspaces and higher dimensional configuration spaces, as our main interest is the application of the method to interactive domains involving humanoids.
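For contrast with the precomputed roadmap, the maintenance-free baseline looks like the following minimal single-tree RRT in a 2-D unit square (the paper uses a bidirectional variant in much higher-dimensional configuration spaces; `is_free` stands in for a collision checker and only samples the new node, not the connecting edge):

```python
import random, math

def rrt(start, goal, is_free, step=0.1, iters=3000, goal_tol=0.15, seed=0):
    """Minimal single-tree RRT in the unit square. Sketch only: the
    bidirectional planner discussed in the paper grows two trees and
    checks edges, not just sampled points."""
    rng = random.Random(seed)
    nodes = [start]
    parent = {0: None}
    for _ in range(iters):
        q = (rng.random(), rng.random())      # random sample
        i = min(range(len(nodes)), key=lambda j: math.dist(nodes[j], q))
        near = nodes[i]                       # nearest tree node
        d = math.dist(near, q)
        if d == 0:
            continue
        new = (near[0] + step * (q[0] - near[0]) / d,
               near[1] + step * (q[1] - near[1]) / d)
        if not is_free(new):
            continue
        parent[len(nodes)] = i
        nodes.append(new)
        if math.dist(new, goal) < goal_tol:   # close enough: extract path
            path, k = [], len(nodes) - 1
            while k is not None:
                path.append(nodes[k])
                k = parent[k]
            return path[::-1]
    return None
```

The dynamic-roadmap alternative amortizes the sampling across queries, which is exactly the tradeoff the benchmarks measure.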

Proceedings ArticleDOI
19 Jul 2004
TL;DR: This work uses collaborative discourse with accompanying gestures and social cues to teach a humanoid robot a structurally complex task and dynamically mesh its plans with those of its partner, according to the relative capabilities of the human and the robot.
Abstract: New applications for autonomous robots bring them into the human environment where they are to serve as helpful assistants to untrained users in the home or office, or work as capable members of human-robot teams for security, military, and space efforts. These applications require robots to be able to quickly learn how to perform new tasks from natural human instruction, and to perform tasks collaboratively with human teammates. Using joint intention theory as our theoretical framework, our approach integrates learning and collaboration through a goal based task structure. Specifically, we use collaborative discourse with accompanying gestures and social cues to teach a humanoid robot a structurally complex task. Having learned the representation for the task, the robot then performs it shoulder-to-shoulder with a human partner, using social communication acts to dynamically mesh its plans with those of its partner, according to the relative capabilities of the human and the robot.

Proceedings ArticleDOI
10 Nov 2004
TL;DR: It is shown that the stable change of gait can be realized by using the quasi-real-time method even if the change of the step position is significant.
Abstract: This paper studies real-time gait planning for a humanoid robot. By simultaneously planning the trajectories of the COG (center of gravity) and the ZMP (zero moment point), a fast and smooth change of gait can be realized. The change of gait is also realized by connecting the newly calculated trajectories to the current ones. We propose two methods for connecting two trajectories, i.e. a real-time method and a quasi-real-time one, and show that a stable change of gait can be realized by using the quasi-real-time method even if the change of the step position is significant. The effectiveness of the proposed methods is confirmed by simulation and experiment.
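The coupling that any such COG/ZMP trajectory pair must satisfy is, in the widely used cart-table (linear inverted pendulum) approximation, p = x - (z_c / g) * x_ddot, with p the ZMP, x the horizontal COG position and z_c the constant COG height. A numerical sketch of this relation (illustrative constants, not the paper's):

```python
import numpy as np

def zmp_from_cog(x, z_c=0.6, g=9.81, dt=0.01):
    """ZMP of the cart-table model: p = x - (z_c / g) * x_ddot,
    with x_ddot estimated by finite differences. The standard linear
    inverted-pendulum relation, shown for illustration."""
    x = np.asarray(x, dtype=float)
    x_ddot = np.gradient(np.gradient(x, dt), dt)  # numerical acceleration
    return x - (z_c / g) * x_ddot
```

Planning COG and ZMP "simultaneously" means choosing x(t) so that this p(t) stays inside the support polygon while both trajectories join smoothly onto the currently executing ones.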

Proceedings ArticleDOI
28 Sep 2004
TL;DR: A classification for dual-arm manipulations is introduced, enabling a segmentation of tasks into adequate subtasks, and it is shown how the generated programs are mapped on and executed by a humanoid robot.
Abstract: This paper deals with easy programming methods for dual-arm manipulation tasks for humanoid robots. To this end, a programming by demonstration system is used in order to observe, learn and generalize tasks performed by humans. A classification for dual-arm manipulations is introduced, enabling a segmentation of tasks into adequate subtasks. Furthermore, it is shown how the generated programs are mapped onto and executed by a humanoid robot.

Journal ArticleDOI
TL;DR: A humanoid robotics platform that consists of a humanoid robot and an open architecture software platform developed in METI’s Humanoid Robotics Project (HRP) is presented.

Proceedings ArticleDOI
01 Dec 2004
TL;DR: The mechanical features of WE-4RII, the emotion expression humanoid robot developed by integrating the new humanoid robot hands RCH-1 (RoboCasa Hand No.1) into the emotion expression humanoid robot WE-4R, are described.
Abstract: The authors have been developing humanoid robots in order to develop new mechanisms and functions for a humanoid robot that has the ability to communicate naturally with a human by expressing human-like emotion. We considered that human hands play an important role in communication because human hands have grasping, sensing and emotional expression abilities. Then, we developed the emotion expression humanoid robot WE-4RII (Waseda Eye No.4 Refined II) by integrating the new humanoid robot hands RCH-1 (RoboCasa Hand No.1) into the emotion expression humanoid robot WE-4R. Furthermore, we confirmed that RCH-1 and WE-4RII had effective emotional expression ability because the correct recognition rate of WE-4RII's emotional expressions was higher than the WE-4R's one. In this paper, we describe the mechanical features of WE-4RII.

Proceedings ArticleDOI
20 Sep 2004
TL;DR: In this paper, the authors examined children's perceptions of robots in terms of physical attributes, personality and emotion traits, and found that children clearly distinguish between emotions and behaviour when judging robots.
Abstract: Our study considers children's perceptions of robots in terms of physical attributes, personality and emotion traits. To examine children's attitudes towards robots, a questionnaire approach was taken with a large sample of children, followed by a detailed statistical framework to analyse the data. Results show that children clearly distinguish between emotions and behaviour when judging robots. The distinguishing robotic physical characteristics for positive and negative emotions and behaviour are highlighted. Children judge human-like robots as aggressive, but human-machine robots as friendly providing support for the uncanny valley. The paper concludes with discussing the results in light of design implications for children's robots.

Journal ArticleDOI
TL;DR: It is proposed that the dynamics of coherence and incoherence between the robot's and the user’s movements could enhance close interactions between them, and that they could also explain the essential psychological mechanism of joint attention.
Abstract: This study presents experiments on the imitative interactions between a small humanoid robot and a user. A dynamic neural network model of a mirror system was implemented in a humanoid robot, based...

Proceedings ArticleDOI
28 Sep 2004
TL;DR: In this method, audio information and video information are fused by a Bayesian network to enable the detection of speech events and the information of detected speech events is utilized in sound separation using adaptive beam forming.
Abstract: For cooperative work of robots and humans in the real world, a communicative function based on speech is indispensable for robots. To realize such a function in a noisy real environment, it is essential that robots be able to extract target speech spoken by humans from a mixture of sounds by their own resources. We have developed a method of detecting and extracting speech events based on the fusion of audio and video information. In this method, audio information (sound localization using a microphone array) and video information (human tracking using a camera) are fused by a Bayesian network to enable the detection of speech events. The information of detected speech events is then utilized in sound separation using adaptive beam forming. In this paper, some basic investigations for applying the above system to the humanoid robot HRP-2 are reported. Input devices, namely a microphone array and a camera, were mounted on the head of HRP-2, and acoustic characteristics for sound localization/separation performance were investigated. Also, the human tracking system was improved so that it can be used in a dynamic situation. Finally, overall performance of the system was tested via off-line experiments.
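The separation stage uses adaptive beamforming; its simplest fixed relative, the delay-and-sum beamformer, already shows the alignment idea. The sketch below assumes integer-sample steering delays and uses a circular shift for brevity, which a real array front end would not:

```python
import numpy as np

def delay_and_sum(signals, delays):
    """Fixed delay-and-sum beamformer: undo each microphone's steering
    delay (in whole samples) and average, so the target adds coherently
    while off-axis sound adds incoherently. Far simpler than the
    adaptive beamformer in the paper; illustration only."""
    out = np.zeros_like(np.asarray(signals[0], dtype=float))
    for sig, d in zip(signals, delays):
        out += np.roll(np.asarray(sig, dtype=float), -d)
    return out / len(signals)
```

In the paper's system, the steering delays would come from the audio-visual localization (microphone-array sound localization fused with camera-based tracking via the Bayesian network).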

Proceedings ArticleDOI
06 Jul 2004
TL;DR: Biped robot HRP-2LR and its hopping with both legs are introduced as the authors' first attempt towards running, and a steady hopping motion with a 0.06 [s] flight phase and a 0.5 [s] support phase is realized.
Abstract: Aiming for a humanoid robot of the next generation, we have been developing a biped which can jump and run. This paper introduces biped robot HRP-2LR and its hopping with both legs as our first attempt towards running. Using a dynamic model of HRP-2LR, hopping patterns are pre-calculated so that they follow the desired profiles of the total linear and angular momentum. For this purpose we used resolved momentum control. Adding small modifications to negotiate the difference between the model and the real hardware, we successfully realized a steady hopping motion with a 0.06 [s] flight phase and a 0.5 [s] support phase. Hopping with a forward velocity of 15 [mm/s] was also realized. Finally, a running pattern of 0.06 [s] flight and 0.3 [s] support phase was tested. HRP-2LR could successfully run with an average speed of 0.16 [m/s].

Proceedings ArticleDOI
28 Sep 2004
TL;DR: This work explores the problem of recognizing, generalizing, and reproducing tasks in a unified mathematical framework, and presents an implementation of this framework to the determination of the optimal strategy to reproduce arbitrary gestures.
Abstract: Robot programming by demonstration (PbD) aims at developing adaptive and robust controllers to enable the robot to learn new skills by observing and imitating a human demonstration. While the vast majority of PbD works has focused on systems that learn a specific subset of tasks, our work explores the problem of recognizing, generalizing, and reproducing tasks in a unified mathematical framework. The approach makes abstraction of the task and dataset at hand to tackle the general issue of learning which of the features are the relevant ones to imitate. In this paper, we present an implementation of this framework to the determination of the optimal strategy to reproduce arbitrary gestures. The model is tested and validated on a humanoid robot, using recordings of the kinematics of the demonstrator's arm motion. The hand path and joint angle trajectories are encoded in hidden Markov models. The system uses the optimal prediction of the models to generate the reproduction of the motion.
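Recognition with such an HMM encoding rests on the forward algorithm, which scores how well a model explains an observed sequence. A minimal discrete-observation version is shown below; the paper encodes continuous hand paths and joint angles, which requires Gaussian emissions instead of an emission matrix:

```python
import numpy as np

def forward_loglik(A, B, pi, obs):
    """Scaled forward algorithm: log-likelihood of a discrete
    observation sequence under an HMM with transition matrix A,
    emission matrix B (states x symbols) and initial distribution pi."""
    alpha = pi * B[:, obs[0]]        # joint prob. of first obs and state
    logp = np.log(alpha.sum())
    alpha /= alpha.sum()             # rescale to avoid underflow
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        logp += np.log(alpha.sum())
        alpha /= alpha.sum()
    return logp
```

Comparing this log-likelihood across gesture models gives the recognition decision; generation then uses the best model's most likely state sequence.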

Proceedings ArticleDOI
06 Jul 2004
TL;DR: A new neural oscillator arrangement applied to a compass-like biped robot that resembles human-like locomotion is proposed and initial results suggesting optimal amplitude for dealing with perturbation are presented.
Abstract: Humanoid research has made remarkable progress during the past 10 years. However, currently most humanoids use the target ZMP (zero moment point) control algorithm for bipedal locomotion, which requires precise modeling and actuation with high control gains. On the contrary, humans do not rely on such precise modeling and actuation. Our aim is to examine biologically related algorithms for bipedal locomotion that resemble human-like locomotion. This paper describes an empirical study of a neural oscillator for the control of biped locomotion. We propose a new neural oscillator arrangement applied to a compass-like biped robot. Dynamic simulations and experiments with a real biped robot were carried out, and the controller performs steady walking for over 50 steps. Gait variations resulting in improved energy efficiency were made possible through the adjustment of only a single neural activity parameter. Aspects of adaptability and robustness of our approach are shown by allowing the robot to walk over terrains with surfaces of different frictional properties. Initial results suggesting an optimal amplitude for dealing with perturbation are also presented.
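A "neural oscillator" in this line of work usually means the Matsuoka half-centre model: two mutually inhibiting neurons with fatigue, whose rectified outputs alternate like flexor and extensor activity. A minimal Euler simulation with typical textbook parameter values (not the paper's) is:

```python
import numpy as np

def matsuoka(T=10.0, dt=0.001, tau=0.25, tau_p=0.5, beta=2.5, w=2.5, c=1.0):
    """Euler simulation of a two-neuron Matsuoka oscillator: mutual
    inhibition w, adaptation (fatigue) beta, tonic input c. Parameter
    values are common textbook choices, not the paper's. Returns the
    oscillator output y1 - y2 over time."""
    u1, u2, v1, v2 = 0.1, 0.0, 0.0, 0.0   # asymmetric start breaks symmetry
    ys = []
    for _ in range(int(T / dt)):
        y1, y2 = max(u1, 0.0), max(u2, 0.0)      # rectified firing rates
        du1 = (-u1 - beta * v1 - w * y2 + c) / tau
        du2 = (-u2 - beta * v2 - w * y1 + c) / tau
        dv1 = (-v1 + y1) / tau_p                 # fatigue dynamics
        dv2 = (-v2 + y2) / tau_p
        u1 += dt * du1; u2 += dt * du2
        v1 += dt * dv1; v2 += dt * dv2
        ys.append(max(u1, 0.0) - max(u2, 0.0))
    return np.array(ys)
```

Scaling the tonic input c scales the output amplitude, which is the kind of single-parameter adjustment the abstract exploits for gait variation.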

Proceedings ArticleDOI
10 Nov 2004
TL;DR: In this paper, the design of a modular system for untethered real-time kinematic motion capture using sensors with inertial measurement units (IMUs) is described.
Abstract: We describe the design of a modular system for untethered real-time kinematic motion capture using sensors with inertial measurement units (IMUs). Our system is comprised of a set of small and lightweight sensors. Each sensor provides its own global orientation (3 degrees of freedom) and is physically and computationally independent, requiring only external communication. Orientation information from sensors is communicated via wireless to a host computer for processing. We present results of the real-time usage of our untethered motion capture system for teleoperating the NASA Robonaut. We also discuss potential applications for untethered motion capture with respect to humanoid robotics.
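At the heart of a sensor that "provides its own global orientation" is the integration of gyro rates into an orientation estimate. The quaternion kinematics step is sketched below; it is gyro-only, so in practice it would drift without the accelerometer/magnetometer corrections that real IMU orientation filters add:

```python
import numpy as np

def integrate_gyro(q, omega, dt):
    """One Euler step of quaternion attitude integration from body-frame
    angular rate omega (rad/s), i.e. q_dot = 0.5 * Omega(omega) * q.
    Quaternion order is (w, x, y, z). Gyro-only sketch: drifts without
    the sensor-fusion corrections a real IMU filter applies."""
    wx, wy, wz = omega
    Omega = 0.5 * np.array([
        [0.0, -wx, -wy, -wz],
        [wx,  0.0,  wz, -wy],
        [wy, -wz,  0.0,  wx],
        [wz,  wy, -wx,  0.0],
    ])
    q = q + dt * (Omega @ q)
    return q / np.linalg.norm(q)   # renormalize to a unit quaternion
```

Each sensor module runs an update like this locally, then only the resulting orientations need to cross the wireless link, which is what makes the per-sensor independence in the abstract possible.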