
Showing papers in "International Journal of Humanoid Robotics in 2004"


Journal ArticleDOI
TL;DR: The paper gives an in-depth discussion of source results concerning ZMP, paying particular attention to some delicate issues that may lead to confusion if this method is applied in a mechanistic manner to irregular cases of artificial gait, i.e. in the case of loss of dynamic balance of a humanoid robot.
Abstract: This paper is devoted to the permanence of the concept of Zero-Moment Point, widely known by the acronym ZMP. Thirty-five years have elapsed since its implicit presentation (actually before being named ZMP) to the scientific community and thirty-three years since it was explicitly introduced and clearly elaborated, initially in the leading journals published in English. Its first practical demonstration took place in Japan in 1984, at Waseda University, in the Laboratory of Ichiro Kato, in the first dynamically balanced robot, WL-10RD, of the robotic family WABOT. The paper gives an in-depth discussion of source results concerning ZMP, paying particular attention to some delicate issues that may lead to confusion if this method is applied in a mechanistic manner to irregular cases of artificial gait, i.e. in the case of loss of dynamic balance of a humanoid robot. After a short survey of the history of the origin of ZMP, a very detailed elaboration of the ZMP notion is given, with a special review concerning “boundary cases,” when the ZMP is close to the edge of the support polygon, and “fictitious cases,” when the ZMP should be outside the support polygon. In addition, the difference between the ZMP and the center of pressure is pointed out. Finally, some unresolved or insufficiently treated phenomena that may yield a significant improvement in robot performance are considered.
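The balance condition the abstract discusses can be illustrated with a minimal sketch. This is not from the paper: it uses the common simplified cart-table (point-mass) model, in which the ZMP x-coordinate follows from the center-of-mass position, height, and acceleration, and balance requires the ZMP to stay inside the support polygon. All names and numbers below are illustrative assumptions.

```python
# Minimal sketch (illustrative, not from the paper): ZMP of a simplified
# cart-table model and a check against the support polygon edge.

G = 9.81  # gravitational acceleration, m/s^2

def zmp_x(x_com, z_com, x_com_ddot):
    """ZMP x-coordinate for a point-mass (cart-table) model:
    x_zmp = x_com - (z_com / g) * x_com_ddot."""
    return x_com - (z_com / G) * x_com_ddot

def in_support(x_zmp, x_min, x_max):
    """True if the candidate ZMP lies inside the support interval.
    A candidate outside corresponds to the 'fictitious' case discussed
    in the paper: the actual center of pressure then sits on the edge."""
    return x_min <= x_zmp <= x_max

# Stationary CoM above the foot: ZMP coincides with the CoM projection.
assert zmp_x(0.0, 0.8, 0.0) == 0.0
# A strong forward CoM acceleration pushes the candidate ZMP backwards,
# possibly outside the foot, i.e. loss of dynamic balance:
print(in_support(zmp_x(0.0, 0.8, 3.0), -0.1, 0.15))
```

The distinction between the candidate ZMP computed this way and the physically measurable center of pressure (which can never leave the support polygon) is exactly the "boundary" versus "fictitious" case discussion in the abstract.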

2,011 citations


Journal ArticleDOI
TL;DR: This paper establishes models of the dynamic behavior of secondary task objectives within the posture space and presents a whole-body control framework that decouples the interaction between the task and postural objectives and compensates for the dynamics in their respective spaces.
Abstract: With the increasing complexity of humanoid mechanisms and their desired capabilities, there is a pressing need for a generalized framework where a desired whole-body motion behavior can be easily specified and controlled. Our hypothesis is that human motion results from simultaneously performing multiple objectives in a hierarchical manner, and we have analogously developed a prioritized, multiple-task control framework. The operational space formulation [10] provides dynamic models at the task level and structures for decoupled task and posture control [13]. This formulation allows for posture objectives to be controlled without dynamically interfering with the operational task. Achieving higher performance of posture objectives requires precise models of their dynamic behaviors. In this paper we complete the picture of task descriptions and whole-body dynamic control by establishing models of the dynamic behavior of secondary task objectives within the posture space. Using these models, we present a whole-body control framework that decouples the interaction between the task and postural objectives and compensates for the dynamics in their respective spaces.
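The decoupling described here can be sketched numerically. This is an illustration of the standard operational-space null-space projection, not the paper's full framework: posture torques are filtered through the dynamically consistent null-space projector so they produce no task-space acceleration. The matrices are arbitrary stand-ins.

```python
import numpy as np

# Hedged sketch of task/posture decoupling: project a posture torque into
# the dynamically consistent null space of the task Jacobian, then verify
# it causes no task acceleration. Numbers are arbitrary, not from the paper.

np.random.seed(0)
n = 4                                    # number of joints
M = np.eye(n) + 0.1 * np.ones((n, n))    # symmetric positive-definite inertia
J = np.random.randn(2, n)                # 2D task Jacobian

Minv = np.linalg.inv(M)
Lambda = np.linalg.inv(J @ Minv @ J.T)   # task-space inertia
Jbar = Minv @ J.T @ Lambda               # dynamically consistent inverse
N = np.eye(n) - Jbar @ J                 # null-space projector

tau_posture = np.random.randn(n)
tau = N.T @ tau_posture                  # projected posture torque

# Task acceleration caused by the projected torque is (numerically) zero:
task_acc = J @ Minv @ tau
print(np.allclose(task_acc, 0.0))
```

The key property is that `J @ Minv @ N.T` vanishes identically, so any posture objective sent through `N.T` cannot disturb the operational task.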

334 citations


Journal ArticleDOI
TL;DR: A preliminary exploration of several aspects of the Japanese culture and a survey of the most important myths and novels involving artificial beings in Western literature try to shed light on particular cultural features that may account for contemporary differences in the authors' behavior towards humanoids.
Abstract: Are robots perceived in the same manner in the West and in Japan? This article presents a preliminary exploration of several aspects of the Japanese culture and a survey of the most important myths and novels involving artificial beings in Western literature. Through this analysis, the article tries to shed light on particular cultural features that may account for contemporary differences in our behavior towards humanoids.

256 citations


Journal ArticleDOI
TL;DR: A new Self-Aware Self-Effecting (SASE) agent concept is proposed, based on the authors' SAIL and Dav developmental robots, and some experimental results for developmental robotics are presented.
Abstract: A hand-designed internal representation of the world cannot deal with unknown or uncontrolled environments. Motivated by human cognitive and behavioral development, this paper presents a theory, an architecture, and some experimental results for developmental robotics. By a developmental robot, we mean that the robot generates its “brain” (or “central nervous system,” including the information processor and controller) through online, real-time interactions with its environment (including humans). A new Self-Aware Self-Effecting (SASE) agent concept is proposed, based on our SAIL and Dav developmental robots. The manual and autonomous development paradigms are formulated along with a theory of representation suited for autonomous development. Unlike traditional robot learning, the tasks that a developmental robot ends up learning are unknown during the programming time so that the task-specific representation must be generated and updated through real-time “living” experiences. Experimental results with SAIL and Dav developmental robots are presented, including visual attention selection, autonomous navigation, developmental speech learning, range-based obstacle avoidance, and scaffolding through transfer and chaining.

161 citations


Journal ArticleDOI
TL;DR: An overview of the work towards building socially intelligent, cooperative humanoid robots that can work and learn in partnership with people and a theoretical framework that is a novel combination of Joint Intention Theory and Situated Learning Theory is presented.
Abstract: This paper presents an overview of our work towards building socially intelligent, cooperative humanoid robots that can work and learn in partnership with people. People understand each other in social terms, allowing them to engage others in a variety of complex social interactions including communication, social learning, and cooperation. We present our theoretical framework that is a novel combination of Joint Intention Theory and Situated Learning Theory and demonstrate how this framework can be applied to develop our sociable humanoid robot, Leonardo. We demonstrate the robot's ability to learn quickly and effectively from natural human instruction using gesture and dialog, and then cooperate to perform a learned task jointly with a person. Such issues must be addressed to enable many new and exciting applications for robots that require them to play a long-term role in people's daily lives.

157 citations


Journal ArticleDOI
TL;DR: This work reports on a dynamically balancing robot with a dexterous arm designed to operate in built-for-human environments and its initial target task was for the robot to navigate, identify doors, open them, and proceed through them.
Abstract: We report on a dynamically balancing robot with a dexterous arm designed to operate in built-for-human environments. Our initial target task was for the robot to navigate, identify doors, open them, and proceed through them.

104 citations


Journal ArticleDOI
TL;DR: This work proposes a representation for a skill-level interface as a "behavior vocabulary," a repertoire of modular exemplar-based memory models expressing kinematic motion; each module encodes a flow field (or gradient field) in joint angle space that describes the "flow" of kinematic motion for a particular skill-level behavior, enabling prediction from a given kinematic configuration.
Abstract: Control for and interaction with humanoid robots is often restrictive due to limitations of the robot platform and the high dimensionality of controlling systems with many degrees of freedom. We focus on the problem of providing a "skill-level interface" for a humanoid robot. Such an interface serves as (i) a modular foundation for structuring task-oriented control, (ii) a parsimonious abstraction of motor-level control (e.g. PD-servo control), and (iii) a means for grounding interactions between humans and robots through common skill vocabularies. Our approach to constructing skill-level interfaces is two-fold. First, we propose a representation for a skill-level interface as a "behavior vocabulary," a repertoire of modular exemplar-based memory models expressing kinematic motion. A module in such a vocabulary encodes a flow field (or gradient field) in joint angle space that describes the "flow" of kinematic motion for a particular skill-level behavior, enabling prediction from a given kinematic configuration. Second, we propose a data-driven method for deriving behavior vocabularies from time-series data of human motion using spatio-temporal dimension reduction and clustering. Results from evaluating an implementation of our methodology are presented along with the application of derived behavior vocabularies as predictors towards on-line humanoid trajectory formation and off-line motion synthesis.
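The flow-field idea above can be sketched in a few lines. This is an assumption-laden toy, not the paper's method: each behavior module is reduced to stored (configuration, displacement) exemplars in joint-angle space, and prediction moves a query configuration along the displacement of its nearest exemplar. The data and the `predict` helper are invented for illustration.

```python
import math

# Hedged sketch: an exemplar-based flow field in joint-angle space.
# exemplars: list of (configuration, displacement) pairs; prediction
# follows the displacement of the nearest stored configuration.

def predict(q, exemplars):
    """One prediction step: nearest-neighbor lookup in the flow field."""
    config, delta = min(exemplars, key=lambda e: math.dist(q, e[0]))
    return tuple(qi + di for qi, di in zip(q, delta))

# Toy 2-DOF "reach" flow: both joints drift toward (1.0, 0.5).
flow = [((0.0, 0.0), (0.1, 0.05)),
        ((0.5, 0.25), (0.1, 0.05)),
        ((1.0, 0.5), (0.0, 0.0))]

q = (0.1, 0.0)
for _ in range(3):
    q = predict(q, flow)
print(q)
```

A real module would encode a smooth gradient field learned from motion data rather than a raw nearest-neighbor table, but the prediction interface (configuration in, next configuration out) is the same.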

103 citations


Journal ArticleDOI
TL;DR: This work presents a vision-based grasping system able to deal with previously unknown objects in real time and in an intelligent manner; two prediction/classification strategies are defined which allow the robot to predict the outcome of a grasp by analyzing only its visual features.
Abstract: Manipulation skills are a key issue for a humanoid robot. Here, we are interested in a vision-based grasping system able to deal with previously unknown objects in real time and in an intelligent manner. Starting from a number of feasible candidate grasps, we focus on the problem of predicting their reliability using the knowledge acquired in previous grasping experiences. A set of visual features which take into account physical properties that can affect the stability and reliability of a grasp are defined. A humanoid robot obtains its grasping experience by repeating a large number of grasping actions on different objects. An experimental protocol is established in order to classify grasps according to their reliability. Two prediction/classification strategies are defined which allow the robot to predict the outcome of a grasp by analyzing only its visual features. The results indicate that these strategies are adequate to predict the reliability of a grasp and to generalize to different objects.
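The prediction step can be illustrated with a simple experience-based classifier. This is a hedged stand-in, not the paper's strategies: past grasps are stored as (visual feature vector, succeeded?) pairs, and a new candidate grasp is judged by a k-nearest-neighbor vote. Feature values and labels are invented.

```python
import math

# Hedged sketch: predict grasp reliability from visual features by a
# k-nearest-neighbor vote over past grasping experiences (toy data).

def predict_reliable(features, experience, k=3):
    """Majority vote of the k most similar past grasps (True = succeeded)."""
    nearest = sorted(experience, key=lambda e: math.dist(features, e[0]))[:k]
    votes = sum(1 for _, succeeded in nearest if succeeded)
    return votes * 2 > k

# Past experience: (feature vector, grasp succeeded?)
experience = [((0.9, 0.1), True), ((0.8, 0.2), True),
              ((0.85, 0.15), True), ((0.2, 0.9), False),
              ((0.1, 0.8), False)]

print(predict_reliable((0.88, 0.12), experience))  # near successful grasps
print(predict_reliable((0.15, 0.85), experience))  # near failed grasps
```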

79 citations


Journal ArticleDOI
TL;DR: In this paper, a learning-based approach for the modeling of complex movement sequences is presented, based on the method of Spatio-Temporal Morphable Models (STMMs), which can be applied for modeling and synthesis of complex sequences of human movements that contain movement elements with a variable style.
Abstract: In this paper we present a learning-based approach for the modeling of complex movement sequences. Based on the method of Spatio-Temporal Morphable Models (STMMs) we derive a hierarchical algorithm that, in a first step, identifies automatically movement elements in movement sequences based on a coarse spatio-temporal description, and in a second step models these movement primitives by approximation through linear combinations of learned example movement trajectories. We describe the different steps of the algorithm and show how it can be applied for modeling and synthesis of complex sequences of human movements that contain movement elements with a variable style. The proposed method is demonstrated on different applications of movement representation relevant for imitation learning of movement styles in humanoid robotics.
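The synthesis step of such morphable models can be sketched briefly. This assumes the spatio-temporal alignment (time warping) has already been done, which is a large part of the actual STMM method; what remains is the linear combination of aligned example trajectories, shown here on toy 1-DOF data.

```python
# Hedged sketch of STMM-style synthesis: a new movement is a weighted
# combination of time-aligned example trajectories. Alignment is assumed
# done; trajectories and styles below are invented toy data.

def morph(trajectories, weights):
    """Pointwise weighted combination of equally long, aligned trajectories."""
    assert abs(sum(weights) - 1.0) < 1e-9
    length = len(trajectories[0])
    return [sum(w * traj[t] for w, traj in zip(weights, trajectories))
            for t in range(length)]

slow = [0.0, 0.2, 0.4, 0.6]   # one joint angle over time, "slow" style
fast = [0.0, 0.4, 0.8, 1.2]   # "energetic" style

# Interpolate halfway between the two styles:
print(morph([slow, fast], [0.5, 0.5]))
```

Varying the weights continuously is what lets the method represent movement elements "with a variable style," as the abstract puts it.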

60 citations


Journal ArticleDOI
TL;DR: A method for humanoid robots to quickly learn new dynamic tasks from observing others and from practice and to break learning problems up into as many simple learning problems as possible is presented.
Abstract: We present a method for humanoid robots to quickly learn new dynamic tasks from observing others and from practice. Ways in which the robot can adapt to initial and also changing conditions are described. Agents are given domain knowledge in the form of task primitives. A key element of our approach is to break learning problems up into as many simple learning problems as possible. We present a case study of a humanoid robot learning to play air hockey.

53 citations


Journal ArticleDOI
TL;DR: An overview on current and forthcoming research activities of the Collaborative Research Center 588 "Humanoid Robots — Learning and Cooperating Multimodal Robots" is given, and the application scenario used to test the robot system is introduced.
Abstract: This paper gives an overview of current and forthcoming research activities of the Collaborative Research Center 588 "Humanoid Robots — Learning and Cooperating Multimodal Robots," which is located in Karlsruhe, Germany. Its research activities can be divided into the following areas: mechatronic robot system components, such as lightweight 7-DOF arms, 5-fingered dexterous hands, an active sensor head, and a spine-type central body, together with the skills of the humanoid robot system; multimodal man-machine interfaces; augmented reality for modeling and simulation of robots, environment and user; and finally, cognitive abilities. Some of the research activities are described in this paper, and we introduce the application scenario used to test the robot system. In particular, we present a robot teaching center and the "household" application scenario.

Journal ArticleDOI
TL;DR: ISAC is described in terms of its software components and with respect to the design philosophy that has evolved over the course of its development; central to the control system is a parallel, distributed software architecture comprising a set of independent software objects, or agents, that execute as needed on standard PCs linked via Ethernet.
Abstract: During the last decade, researchers at Vanderbilt have been developing a humanoid robot called the Intelligent Soft Arm Control (ISAC). This paper describes ISAC in terms of its software components and with respect to the design philosophy that has evolved over the course of its development. Central to the control system is a parallel, distributed software architecture comprising a set of independent software objects, or agents, that execute as needed on standard PCs linked via Ethernet. Fundamental to the design philosophy is the direct physical interaction of the robot with people. Initially, this philosophy guided application development. Yet over time it became apparent that such interaction may be necessary for the acquisition of intelligent behaviors by an agent in a human-centered environment. Concurrent to that evolution was a shift from a programmer's high-level specification of action toward the robot's own acquisition of primitive behaviors through sensory-motor coordination (SMC) and task learning through cognitive control and working memory. Described are the parallel, distributed cognitive control architecture and the advantages and limitations that have guided its development. Primary structures for sensing, memory, and cognition are described. Motion learning through teleoperation and fault diagnosis through system health monitoring are also described. The generality of the control system is discussed in terms of its applicability to physically heterogeneous robots and multi-robot systems.

Journal ArticleDOI
TL;DR: The biped robot "Johnnie" is designed to achieve a dynamically stable gait pattern, allowing for high walking velocities; in particular, the designs of the 3D orientation sensor and the 6-axis force-torque sensor are presented.
Abstract: The biped robot "Johnnie" is designed to achieve a dynamically stable gait pattern, allowing for high walking velocities. Very accurate and fast sensors were developed for the machine. In particular, the designs of the 3D orientation sensor and the 6-axis force-torque sensor are presented. The control scheme is based on the information from these sensors to deal with unstructured terrain and disturbances. Two different implementations are investigated: a computed-torque approach and a trajectory control with adaptive trajectories. Walking speeds of 2.2 km/h have been achieved in experiments.

Journal ArticleDOI
TL;DR: To evaluate the dependability of this extremely complex machine, and its ability to interact with strangers, HERMES was exhibited in a museum, far away from its home laboratory, for more than six months, and several qualitative results are given.
Abstract: A large number of functionalities have been integrated into a single fully autonomous humanoid robot, HERMES. To evaluate the dependability of this extremely complex machine, and its ability to interact with strangers, HERMES was exhibited in a museum, far away from its home laboratory, for more than six months. During this period the robot and its skills were regularly demonstrated to the public by non-expert presenters up to 12 hours per day. Also, HERMES interacted with the visitors, chatted with them in English, French and German, answered questions and performed services as requested by them. Only three major failures occurred during the 6-month period, all of them caused by failures of commercially available modules that could easily be replaced. The key to this success was the dependability that had been originally designed into HERMES. During the design process certain design principles were followed in both hardware and software. These principles are introduced, and some long- and short-term experiments carried out with the real robot in real environments are presented. In fact, by demonstrating HERMES in the museum, at trade fairs and in TV studios — besides our institute environment — we have learned valuable lessons, especially regarding the interaction of a complex robotic assistant with unknown humans. Although we did not quantitatively evaluate the robot’s performance or acceptance by the non-expert users, several qualitative results are given in this paper, and many videos highlighting these results can be downloaded from the HERMES homepage.

Journal ArticleDOI
TL;DR: This paper proposes a method of mutual telexistence using projection technology with retro-reflective objects, and describes experimental hardware constructed to demonstrate the feasibility of the proposed method.
Abstract: Telexistence is fundamentally a concept named for the technology that enables a human being to have a real-time sensation of being at a place other than where he or she actually is, and to interact with the remote environment, which may be real, virtual, or a combination of both. It also refers to an advanced type of teleoperation system that enables an operator at the controls to perform remote tasks dexterously with the feeling of existing in a surrogate robot. Although conventional telexistence systems provide an operator the real-time sensation of being in a remote environment, persons in the remote environment have only the sensation that a surrogate robot is present, not the operator. Mutual telexistence aims to solve this problem so that the existence of the operator is apparent to persons in the remote environment by providing mutual sensations of presence. This paper proposes a method of mutual telexistence using projection technology with retro-reflective objects, and describes experimental hardware constructed to demonstrate the feasibility of the proposed method.

Journal ArticleDOI
TL;DR: An overview of the current and forthcoming research projects of the Collaborative Research Center 588 "Humanoid Robots — Learning and Cooperating Multimodal Robots" is given.
Abstract: This paper gives an overview of the current and forthcoming research projects of the Collaborative Research Center 588 "Humanoid Robots — Learning and Cooperating Multimodal Robots." The activities can be divided into several areas: development of mechatronic components and construction of a demonstrator system, perception of user and environment, modeling and simulation of robots, environment and user, and finally cooperation and learning. The research activities in each of these areas are described in detail. Finally, we give an insight into the application scenario of our robot system, i.e. the training setup and the experimental setup "household."

Journal ArticleDOI
TL;DR: This article presents a set of sensors and analyzes the actuator properties of an anthropomorphic robot hand driven by flexible fluidic actuators; the resulting actuator model incorporates the viscoelastic material behavior and describes the relations of joint angle, actuator pressure, and actuator torque.
Abstract: The successful control of a robot hand with multiple degrees of freedom not only requires sensors to determine the state of the hand but also a thorough understanding of the actuator system and its properties. This article presents a set of sensors and analyzes the actuator properties of an anthropomorphic robot hand driven by flexible fluidic actuators. These flexible and compact actuators are integrated directly into the finger joints; they can be driven either pneumatically or hydraulically. The sensors for the measurement of joint angles, contact forces, and fluid pressure are described; the designs utilize mostly commodity components. Hall sensors and customized half-ring rare-earth magnets are used to integrate the joint angle sensors directly into the actuated joints. A force sensor setup allowing soft finger surfaces is evaluated. Fluid pressure sensors are needed for the model-based computation of joint torques and to limit the actuator pressure. Static and dynamic actuator characteristics are determined in a theoretical process analysis, and suitable parameters are identified in several experiments. The resulting actuator model incorporates the viscoelastic material behavior and describes the relations of joint angle, actuator pressure, and actuator torque. It is used in simulations and for the design of a joint position controller.
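The kind of joint-angle/pressure/torque relation the model captures can be illustrated with a much-simplified static sketch. This is not the paper's identified model: it assumes torque equals pressure times an angle-dependent effective area minus an elastic restoring term from the flexible actuator material, with invented coefficients, and it omits the viscoelastic (rate-dependent) effects the paper models.

```python
# Hedged, simplified static actuator sketch (coefficients are illustrative;
# the paper identifies its parameters experimentally and includes
# viscoelastic effects that this static relation ignores).

def joint_torque(pressure_bar, angle_rad,
                 area0=0.5, area_slope=-0.1, stiffness=0.3):
    """tau = A_eff(theta) * p - k * theta  (static approximation)."""
    effective_area = area0 + area_slope * angle_rad
    return effective_area * pressure_bar - stiffness * angle_rad

# Higher pressure -> more torque at the same joint angle:
assert joint_torque(4.0, 0.5) > joint_torque(2.0, 0.5)
# At fixed pressure, torque drops as the joint closes (smaller effective
# area, larger elastic counter-torque):
assert joint_torque(3.0, 1.0) < joint_torque(3.0, 0.2)
print(round(joint_torque(3.0, 0.5), 4))
```

A model of this shape, once its parameters are identified, is what allows joint torques to be computed from the measured fluid pressure, as the abstract describes.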

Journal ArticleDOI
TL;DR: This work proposes and investigates a fully dynamic whole-body task including underactuated motion whose state trajectory is insoluble, and unpredictable perturbations due to complex contacts with the ground, and suggests a non-uniform control strategy which focuses on sparse critical points in the global phase space, and allows deviations and trade-offs at other parts.
Abstract: Whole-body dynamic actions under various contacts with the environment will be very important for future humanoid robots to support human tasks in unstructured environments. Such skills are very difficult to realize using the standard motion control methodology based on asymptotic convergence to the successive desired states. An alternative approach would be to exploit the passive dynamics of the body under constrained motion, and to navigate through multiple dynamics by imposing the least control in order to robustly reach the goal state. As a first example of such a strategy, we propose and investigate a "Roll-and-Rise" motion. This is a fully dynamic whole-body task including underactuated motion whose state trajectory is insoluble, and unpredictable perturbations due to complex contacts with the ground. First, we analyze the global structure of Roll-and-Rise motion. Then the critical points are analyzed using simplified models and simulations. The results suggest a non-uniform control strategy which focuses on sparse critical points in the global phase space, and allows deviations and trade-offs at other parts. Finally, experiments with a real adult-size humanoid robot are successfully carried out. The robot rose from a flat-lying posture to a crouching posture within 2 seconds.

Journal ArticleDOI
TL;DR: Experimental results show that the approach enables the marionette to perform motions that are qualitatively similar to the original human motion capture data.
Abstract: In this paper, we present a method for controlling a motorized, string-driven marionette using motion capture data from human actors and from a traditional marionette operated by a professional puppeteer. We are interested in using motion capture data of a human actor to control the motorized marionette as a way of easily creating new performances. We use data from the hand-operated marionette both as a way of assessing the performance of the motorized marionette and to explore whether this technology could be used to preserve marionette performances. The human motion data must be extensively adapted for the marionette because its kinematic and dynamic properties differ from those of the human actor in degrees of freedom, limb length, workspace, mass distribution, sensors, and actuators. The motion from the hand-operated marionette requires less adaptation because the controls and dynamics are a closer match. Both data sets are adapted using an inverse kinematics algorithm that takes into account marker positions, joint motion ranges, string constraints, and potential energy. We also apply a feedforward controller to prevent extraneous swings of the hands. Experimental results show that our approach enables the marionette to perform motions that are qualitatively similar to the original human motion capture data.

Journal ArticleDOI
TL;DR: The five applications that the authors think are suitable for humanoid robots, and that they expect to open a new industry, are described.
Abstract: The Ministry of Economy, Trade and Industry (METI) of Japan ran an R&D project on humanoid robotics, called HRP. In the project, a humanoid robotics platform was developed in the first phase, and contributors of the project followed up with research on the applications of humanoid robots to various industries. In this paper, we describe the five applications that we think are suitable for humanoid robots and expect to open a new industry.

Journal ArticleDOI
TL;DR: The design, implementation and application of a humanoid interaction robot (H10), developed as a case study to operate at points of sale, information desks and demonstrations, are presented.
Abstract: This paper presents the design, implementation and application of a humanoid interaction robot (H10). In interdisciplinary cooperation, H10 was developed as a case study to operate at points of sale, information desks and demonstrations. If the speech input given by the user matches an entry of the adaptive database, H10 reacts with a suitable answer. Synchronously with the speech generation, face animation and pre-defined gestures of the hands and arms are triggered by the core of the system. The principles of the speech, gesture and physical interaction interface, as well as some fundamental mechanical and electronic details, are described.

Journal ArticleDOI
TL;DR: A learning-synthesis-analysis framework is proposed which aims to enable a robot, or computer, to understand and convey meaning through texts or images, and to lay out a sound foundation on which interdisciplinary research could effectively progress toward the development of machines or robots that understand meaning through texts or images.
Abstract: In the past fifty years, efforts in classical AI have focused on computerizing human intelligence. Naturally, computerized human intelligence is not a proof of machine or robot intelligence because the programs underlying computerized human intelligence are still made by humans. So far, there is no computer or robot which is creative enough to master its own language and to compose a text expressing its intentions. Thus, it is time to shift our research focus from computerizing human intelligence to developing machine intelligence. A first and necessary step towards this goal is to make machines or robots learn, manipulate, understand and create both elementary and composite meanings encoded in a natural language such as English. Elementary and composite meanings could be acquired through both sample texts and images. Hence, we propose a learning-synthesis-analysis framework which aims to enable a robot, or computer, to understand and convey meaning through texts or images. The main contribution of this paper is to lay out a sound foundation on which interdisciplinary research could effectively progress toward the development of machines or robots that understand meaning through texts or images.

Journal ArticleDOI
TL;DR: Comparison with neural networks optimized by a back-propagation algorithm and decision trees generated by C4.5 shows that the recognition accuracy of the proposed method is superior to both.
Abstract: This paper presents a recognition method for human actions in daily life. The system deals with actions related to regular human activity such as walking or lying down. The main features of the proposed method are: (i) simultaneous recognition, (ii) expressing lack of clarity in human recognition, (iii) defining similarities between two motions by utilizing kernel functions derived from expressions of actions based on human knowledge, (iv) robust learning capability based on support vector machines. Comparison with neural networks optimized by a back-propagation algorithm and decision trees generated by C4.5 shows that the recognition accuracy of the proposed method is superior to both. Recognizing actions in daily life robustly is expected to ensure smooth communication between humans and robots and to enhance support functionality in intelligent systems.
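How a motion kernel drives recognition can be sketched with a simpler stand-in. This is not the paper's SVM with knowledge-derived kernels: it uses a plain RBF kernel and assigns a new motion to the action class with the highest mean similarity to its training examples. Feature values, class names, and the nearest-mean rule are all illustrative assumptions.

```python
import math

# Hedged sketch: kernel-based action classification by mean similarity
# to each class's examples (the paper uses SVMs with custom kernels).

def rbf_kernel(a, b, gamma=1.0):
    return math.exp(-gamma * sum((x - y) ** 2 for x, y in zip(a, b)))

def classify(motion, classes, kernel=rbf_kernel):
    """classes: {action_name: [feature vectors]} -> best-matching action."""
    return max(classes,
               key=lambda c: sum(kernel(motion, ex) for ex in classes[c])
                             / len(classes[c]))

classes = {
    "walking":    [(1.0, 0.1), (0.9, 0.2)],   # (speed, torso tilt), toy units
    "lying_down": [(0.0, 1.5), (0.1, 1.4)],
}
print(classify((0.95, 0.15), classes))
```

Swapping `rbf_kernel` for a kernel encoding human knowledge about actions, and the nearest-mean rule for an SVM, yields the structure the abstract describes.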

Journal ArticleDOI
TL;DR: This paper proposes a real-time self-collision avoidance system, for robots that cooperate with humans, in which the robot body is represented by elastic elements; experiments using a mobile robot with dual manipulators illustrate the validity of the proposed system.
Abstract: In this paper, we propose a real-time self-collision avoidance system for robots which cooperate with a human/humans. First, the robot is represented by elastic elements. The representation method is referred to as RoBE (Representation of Body by Elastic elements). Elastic balls and cylinders are used as the elements to simplify collision detection, although elements of any shape could be used for RoBE. When two elements collide with each other, a reaction force is generated between them, and self-collision avoidance motion is generated by the reaction force. Experiments using the mobile robot with dual manipulators, referred to as MR Helper, illustrate the validity of the proposed system.
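The core mechanism can be sketched for the ball element case. This is a hedged illustration of the RoBE idea, not the paper's implementation: two elastic balls that overlap generate a repulsive force proportional to the penetration depth, directed along the line between their centers. Geometry and stiffness values are invented.

```python
import math

# Hedged sketch of elastic-element self-collision avoidance: a repulsive
# force appears only when two covering balls overlap (toy parameters).

def repulsion(center_a, radius_a, center_b, radius_b, stiffness=100.0):
    """Force on ball A from ball B: zero unless the balls overlap."""
    d = math.dist(center_a, center_b)
    penetration = (radius_a + radius_b) - d
    if penetration <= 0.0 or d == 0.0:
        return (0.0, 0.0, 0.0)
    scale = stiffness * penetration / d          # along the center line
    return tuple(scale * (a - b) for a, b in zip(center_a, center_b))

# Separated balls: no force.
assert repulsion((0, 0, 0), 0.1, (1, 0, 0), 0.1) == (0.0, 0.0, 0.0)
# Overlapping balls: the force pushes A away from B along the x-axis.
fx, fy, fz = repulsion((0, 0, 0), 0.1, (0.15, 0, 0), 0.1)
print(fx < 0.0 and fy == 0.0 and fz == 0.0)
```

Spheres and cylinders keep this distance test cheap, which is what makes the avoidance motion computable in real time.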

Journal ArticleDOI
TL;DR: The human combination of an articulated waist and neck will be shown to enable the use of smaller arms, achieving greater regions of workspace dexterity than the larger limbs of gorillas and other Hominoidea.
Abstract: The primate order of animals is investigated for clues in the design of Humanoid Robots. The pursuit is directed with a theory that kinematics, musculature, perception, and cognition can be optimized for specific tasks by varying the proportions of limbs, and in particular, the points of branching in kinematic trees such as the primate skeleton. Called the Bifurcated Chain Hypothesis, the theory is that the branching proportions found in humans may be superior to other animals and primates for the tasks of dexterous manipulation and other human specialties. The primate taxa are defined, contemporary primate evolution hypotheses are critiqued, and variations within the order are noted. The kinematic branching points of the torso, limbs and fingers are studied for differences in proportions across the order, and associated with family and genus capabilities and behaviors. The human configuration of a long waist, long neck, and short arms is graded using a kinematic workspace analysis and a set of design axioms for mobile manipulation robots. It scores well. The re-emergence of the human waist, seen in early Prosimians and Monkeys for arboreal balance, but lost in the terrestrial Pongidae, is postulated as benefiting human dexterity. The human combination of an articulated waist and neck will be shown to enable the use of smaller arms, achieving greater regions of workspace dexterity than the larger limbs of Gorillas and other Hominoidea.

Journal ArticleDOI
TL;DR: The current progress of a project in the Intelligent Robotics Research Center at Monash University that has the aim of developing a synergistic set of sensory systems for a humanoid robot is described.
Abstract: Sensing is a key element for any intelligent robotic system. This paper describes the current progress of a project in the Intelligent Robotics Research Center at Monash University that has the aim of developing a synergistic set of sensory systems for a humanoid robot. Currently, sensing modes for colour vision, stereo vision, active range, smell and airflow are being developed in a size and form that is compatible with the humanoid appearance. Essential considerations are sensor calibration and the processing of sensor data to give reliable information about properties of the robot's environment. In order to demonstrate the synergistic use of all of the available sensory modes, a high level supervisory control scheme is being developed for the robot. All time-stamped sensor data together with derived information about the robot's environment are organized in a blackboard system. Control action sequences are then derived from the blackboard data based on a task description. The paper presents details of each of the robot's sensory systems, sensor calibration, and supervisory control. Results are also presented of a demonstration project that involves identifying and selecting mugs containing household chemicals. Proposals for future development of the humanoid robot are also presented.

Journal ArticleDOI
TL;DR: The interest of this method is that the intrinsic dynamics of the system are exploited by using a succession of active and passive phases and the control strategy is very simple to implement on-line.
Abstract: In this paper, we propose a control strategy that performs velocity transitions within the range [0 m/s; 1 m/s] for the dynamic walking of a virtual under-actuated robot (RABBIT) without reference trajectories. This control strategy enables us to carry out the transition from standstill to walking and the reverse process. The interest of this method is that, on the one hand, the intrinsic dynamics of the system are exploited by using a succession of active and passive phases and, on the other hand, the control strategy is very simple to implement on-line. Moreover, we apply this method while taking into account the technological limitations relevant to experimentation on the real robot, such as dry and viscous friction, maximum torque, and maximum power.

Journal ArticleDOI
TL;DR: A natural behavior generation method for humanoid robots that is a hybrid generation between voluntary and involuntary motions and develops a wheeled inverted pendulum type of humanoid robot, named "Robovie-III," in order to generate involuntary motions like oscillation.
Abstract: Human behaviors consist of both voluntary and involuntary motions. Almost all behaviors of task-oriented robots, however, consist solely of voluntary motions. Involuntary motions are important for generating natural motions like those of humans. Thus, we propose a natural behavior generation method for humanoid robots that is a hybrid generation between voluntary and involuntary motions. The key idea of our method is to control robots with a hybrid controller that combines the functions of a communication behavior controller and body balancing controllers. We also develop a wheeled inverted pendulum type of humanoid robot, named "Robovie-III," in order to generate involuntary motions like oscillation. This paper focuses on the system architecture of this robot. By applying our method to this robot and conducting preliminary experiments, we verify its validity. Experimental results show that the robot generates both voluntary and involuntary motions.

Journal ArticleDOI
TL;DR: A communicative robot — BUGNOID — which integrates various sensory data and behavior modules and can create an environmental map and recognize its environment taking human behavior into account with the aim of co-existing with humans.
Abstract: A communicative robot — BUGNOID — which integrates various sensory data and behavior modules is introduced with some experimental results. To achieve flexible communication with humans, the robot has a multi-modal interface with diverse channels of communication. Moreover, the robot can create an environmental map and recognize its environment taking human behavior into account with the aim of co-existing with humans.

Journal ArticleDOI
TL;DR: There is a growing enthusiasm for the usefulness of humanoid research because the recent quantum leaps in enabling technologies let us assess more clearly what the potential benefits might be, i.e. direct applications but also spin-offs, made possible through these technological advances.
Abstract: For a long time, many inventors have strived to create a perfect machine counterpart of humans — to copy the innards of creatures, but also with the objective to build “mechanical slaves” that to some extent relieve us from our burdens in everyday life. Humanoid robots — as an attempt to construct an approximation to human shape and behavior — can be considered as today’s answers to these age-old dreams. In fact, they may be potentially very useful, not only as a research tool spanning many disciplines, but as everybody’s personal “servant.” Not yet fully realized but clearly conceivable are truly autonomous robots that one can instruct to help at home or that can even do parts of our household chores unsupervised. A little while ago this seemed to be pure science fiction, but recent developments justify a much more optimistic view. The first designs appeared in the early seventies, when the state of computing technology (but also sensors and vision, energy supply, etc.) was still far from what is needed for even a basic notion of “autonomy.” Even though at that time one could not even dream of implementing higher-level cognitive abilities (vision, speech recognition, problem solving, planning, etc.) as integral functions of these bodies, there were impressive achievements in the emulation of human motor skills (walking, grasping, and even piano-playing). Today, we observe a growing enthusiasm for the usefulness of humanoid research because the recent quantum leaps in enabling technologies let us assess more clearly what the potential benefits might be, i.e. direct applications but also spin-offs, made possible through these technological advances. Here are some of the possible application areas, which we list regardless of the state of research today, as “controlled fiction”: