
Showing papers on "Humanoid robot published in 1997"


Book ChapterDOI
07 Sep 1997
TL;DR: In this paper, the authors describe how they developed a biped robot capable of walking like a human and discuss the leg's structure, dynamics and ways of maintaining stability, and areas on which future development efforts will be focussed.
Abstract: Noting that legged locomotion allows greater mobility than motion on wheels, the authors describe how they developed a biped robot capable of walking like a human. They discuss the leg's structure, dynamics etc. Ways of maintaining stability are considered. Areas on which future development efforts will be focussed are outlined.

188 citations


Proceedings ArticleDOI
12 Oct 1997
TL;DR: This work developed the real time machine recognition of facial expressions by using a layered neural network and achieved a high correct recognition ratio of 85% with respect to 6 typical facial expressions of 15 subjects in 55 ms.
Abstract: We study the realization of a realistic human-like response of an animated 3D face robot in communicative interaction with human beings. The face robot can produce human-like facial expressions and recognize human facial expressions using facial image data obtained by a CCD camera mounted inside the left eyeball. We developed real-time machine recognition of facial expressions using a layered neural network and achieved a high correct-recognition ratio of 85% on 6 typical facial expressions of 15 subjects in 55 ms. We also developed a new small-size actuator for displaying facial expressions on the face robot, achieving the same speed in dynamic facial expressions as in humans, even for a high-speed expression such as "surprise". For facial interactive communication between the face robot and human beings, we integrated these two technologies to produce a facial expression in response to the recognized human facial expression in real time. This implies a high technological potential for the animated face robot to engage in interactive communication with humans once artificial emotion is implemented.
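
The layered neural network used for expression recognition maps a facial-image feature vector to one of the six expression classes. A minimal untrained sketch of such a feedforward classifier (layer sizes, weights, and the feature dimension are illustrative assumptions, not values from the paper):

```python
import numpy as np

EXPRESSIONS = ["happiness", "sadness", "anger", "fear", "surprise", "disgust"]

rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.1, size=(16, 32))  # feature vector -> hidden layer
W2 = rng.normal(scale=0.1, size=(32, 6))   # hidden layer -> 6 expression scores

def classify(features):
    hidden = np.tanh(features @ W1)                 # hidden-layer activation
    scores = hidden @ W2
    probs = np.exp(scores) / np.exp(scores).sum()   # softmax over classes
    return EXPRESSIONS[int(np.argmax(probs))]

label = classify(rng.normal(size=16))
```

In the paper the network was trained on labelled facial images of 15 subjects; the training step is omitted from this sketch.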

130 citations


Proceedings ArticleDOI
20 Apr 1997
TL;DR: The authors designed and built an anthropomorphic biped walking robot with antagonistically driven joints and introduce its design method; quasi-dynamic biped walking using the antagonistically driven joints was realized.
Abstract: The authors are engaged in studies of biped walking robots from two viewpoints: one is human science; the other is the development of humanoid robots. In current research on biped walking robots, there is no example of a life-size biped walking robot whose lower limbs imitate the human musculo-skeletal system with antagonistically driven joints. Human walking is considered to be both efficient and capable of flexibly coping with contact with the outside environment, yet biped walking robots developed so far cannot reproduce it. A human joint is driven by two or more antagonistic muscle groups, and humans can vary joint stiffness by using the nonlinear spring characteristics of the muscles themselves. This function is indispensable for a humanoid. The authors therefore designed and built an anthropomorphic biped walking robot with antagonistically driven joints. This paper introduces the robot's design method. The authors performed walking experiments with the robot and, as a result, realized quasi-dynamic biped walking using the antagonistically driven joints. The walking speed was 7.68 s/step with a 0.1 m step length.

67 citations


Journal ArticleDOI
TL;DR: A humanoid robot at the MIT Artificial Intelligence Laboratory is meant to be able to emulate human functionality; the authors are particularly interested in using it as a vehicle to understand how humans work, by trying to combine many theories from artificial intelligence, cognitive science, physiology and neuroscience.
Abstract: We are building a humanoid robot at the MIT Artificial Intelligence Laboratory. It is a legless, human-sized robot and is meant to be able to emulate human functionality. We are particularly interested in using it as a vehicle to understand how humans work, by trying to combine many theories from artificial intelligence, cognitive science, physiology and neuroscience. While undergoing continual revisions in its hardware and software, the robot has been operational in one form or another for over three years. We are working on systems that emulate both human development and human social interactions.

44 citations


Proceedings ArticleDOI
07 Sep 1997
TL;DR: The concept of action-oriented control is investigated with a simulation example, and a brief description of the ongoing design of the ETL-humanoid, which conforms to the derived constraints, is presented.
Abstract: This paper considers humanoid research as an approach to understanding and realizing complex real-world interactions among robot, environment, and human. As a first step towards extracting a common principle over these three-term interactions, the concept of action-oriented control is investigated with a simulation example. The complex-interaction view imposes unique constraints on the design of a humanoid, such as a whole body, a smooth shape, and a non-functional-modular design. A brief description of the ongoing design of the ETL-humanoid, which conforms to the above constraints, is presented.

37 citations


Proceedings ArticleDOI
07 Sep 1997
TL;DR: The features of Saika are: (a) Saika is modularized to reduce the development cost and to make maintenance easy, (b) the total weight of the head, the neck, the two upper arms and the torso is only eight kilograms and (c) most of the motors are installed inside the arms and the torso.
Abstract: This article addresses the development of a light-weight, human-size and low-cost humanoid robot named Saika. Saika has a 2-DOF neck, two 5-DOF upper arms, a torso and a head. Several types of hands and forearms have been developed; they are chosen depending upon the task to be performed. The features of Saika are: (a) Saika is modularized to reduce the development cost and to make maintenance easy, (b) the total weight of the head, the neck, the two upper arms and the torso is only eight kilograms and (c) most of the motors are installed inside the arms and the torso.

37 citations


Proceedings ArticleDOI
01 Dec 1997
TL;DR: This paper describes a humanoid robot "Hadaly" that was developed as a basic model for the final version of Humanoid, and describes an attempt to find the configurations and functions that are required for a humanoid.
Abstract: This paper describes a humanoid robot "Hadaly" that was developed as a basic model for the final version of Humanoid. In this study, the authors attempt to find the configurations and functions that are required for a humanoid. Hadaly consists of four systems: an audio-visual system, a head system, a speech system and an arm system. The configurations and functions required for a humanoid robot are clarified based on the results of the information-assistance experiment with Hadaly.

36 citations


Proceedings ArticleDOI
07 Sep 1997
TL;DR: The design method of a biped walking robot and a bipedal humanoid with antagonistically driven joints is introduced; quasi-dynamic biped walking, dynamic walking, dynamic dancing to music, and carrying during dynamic walking were realized using the antagonistically driven joints.
Abstract: The authors are engaged in studies of biped walking robots from two viewpoints: one is human science; the other is the development of humanoid robots. In current research on biped walking robots, there is no example of a life-size biped walking robot whose lower limbs imitate the human musculo-skeletal system with antagonistically driven joints. Human walking is considered to be both efficient and capable of flexibly coping with contact with the outside environment, yet biped walking robots developed so far cannot reproduce it. Humans can vary joint stiffness by using the nonlinear spring characteristics of the muscles themselves. This function is indispensable for a bipedal humanoid. The authors therefore designed and built a biped walking robot and a bipedal humanoid with antagonistically driven joints. This paper introduces the design method of the biped walking robot. The authors performed walking experiments with the biped walking robot and the bipedal humanoid. As a result, quasi-dynamic biped walking, dynamic walking, dynamic dancing to music, and carrying during dynamic walking were realized using the antagonistically driven joints.

35 citations


Journal ArticleDOI
TL;DR: A robotic hand that is part of a humanoid being developed at the MIT Artificial Intelligence Laboratory has been constructed: a human-scale cable-driven hand containing four actuators, thirty-six sensors, and computing tools on board.
Abstract: We have constructed and tested a robotic hand that is part of a humanoid being developed at the MIT Artificial Intelligence Laboratory. It is a human-scale cable-driven hand containing four actuators, thirty-six sensors, and computing tools on board. The combination of its conciseness and sensing capability allows integration with other anthropomorphically scaled systems. This paper presents a detailed description of the mechanical design and its implementation, including the structure of the physical hand, the tendon cabling strategy, actuators, sensors and computing tools.

35 citations


Proceedings ArticleDOI
29 Sep 1997
TL;DR: I contemplate what I believe to be eight of the most challenging long-term problems in human-robot communication, including communication among the designers of robot systems with respect to the high-level design, programming and use of robots for humane purposes.
Abstract: I contemplate what I believe to be eight of the most challenging long-term problems in human-robot communication. I interpret this phrase in a broad sense, including communication among the designers of robot systems with respect to the high-level design, programming and use of robots for humane purposes. Since the earliest robots of the 1940s, human communication with them has been a serious consideration, and progress over the ensuing 50 years has been great. Control of prosthetic arms has changed from crude cables to myoelectric signals and speech. Control of telemanipulators for hazardous environments such as nuclear plants, space and undersea has changed from simple on-off control of individual joints to high-level supervisory languages of various kinds, combining both analog and symbolic communication. The same is true for industrial robots, which can now make movements very much faster and more precisely than can humans. It is not my intent to review that progress, though it is clear that as robots themselves become more intelligent, the nature of communication between the human operator and the robot is becoming less and less like that of using a passive hand tool and more and more like the relationship between two human beings.

35 citations


Proceedings ArticleDOI
29 Sep 1997
TL;DR: This paper addresses ball-catching behavior by the humanoid robot Saika, aiming to implement human skills on the humanoid in the same manner as humans perform them, and demonstrates the validity of the catching strategies.
Abstract: This paper addresses ball-catching behavior by the humanoid robot Saika. The aim of this study is to implement human skills on the humanoid in the same manner as humans perform them. The behavior of catching a falling ball and a thrown ball is chosen as an example of dynamic, skilful manipulation. Considering the human behavior, we realize ball-catching behavior in three steps: (a) localization of the ball by the vision system, (b) prediction of the ball's path to determine the catching point, and (c) reaching the hand out to the catching point using a neural-network inverse kinematics model. Experimental results demonstrate the validity of the catching strategies.
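
Step (b), predicting the ball's path to determine the catching point, amounts to fitting a ballistic model to a few vision samples. A sketch under the assumption of drag-free flight (the function name and interface are hypothetical, not from the paper):

```python
import numpy as np

G = 9.81  # gravitational acceleration, m/s^2

def predict_catch_point(t_obs, positions, catch_height):
    """Fit a ballistic model to observed ball positions (from vision) and
    predict where the ball crosses the catching height.

    t_obs: observation times; positions: (N, 3) array of [x, y, z] samples.
    """
    t = np.asarray(t_obs, dtype=float)
    p = np.asarray(positions, dtype=float)
    # x(t) and y(t) are linear in t; z(t) is quadratic with known gravity,
    # so z + g*t^2/2 is linear in t as well.
    ax, bx = np.polyfit(t, p[:, 0], 1)
    ay, by = np.polyfit(t, p[:, 1], 1)
    vz, z0 = np.polyfit(t, p[:, 2] + 0.5 * G * t**2, 1)
    # Solve z0 + vz*t - g*t^2/2 = catch_height for the positive root.
    roots = np.roots([-0.5 * G, vz, z0 - catch_height])
    t_catch = max(r.real for r in roots if abs(r.imag) < 1e-9 and r.real > 0)
    catch = np.array([ax * t_catch + bx, ay * t_catch + by, catch_height])
    return catch, t_catch
```

The hand is then driven to the returned point before `t_catch` elapses; in the paper this reaching step uses a neural-network inverse kinematics model.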


Journal ArticleDOI
TL;DR: An anthropomorphic dynamic biped walking robot adapting to the humans' living floor, with an adaptable deviation range from -16 to +16 mm/step in the vertical direction and from -3 to +3° in the tilt angle.
Abstract: The authors are engaged in studies of biped walking robots from two viewpoints: one is human science; the other is the development of humanoid robots. In this paper, the authors introduce an anthropomorphic dynamic biped walking robot adapting to the humans' living floor. The robot has two remarkable systems: (1) a special foot system to obtain the position relative to the landing surface and the gradient of the surface during dynamic walking; (2) an adaptive walking control system to adapt to path surfaces with unknown shapes by utilizing the information about the landing surface obtained by the foot system. Two units of the foot system WAF-3 were produced, a biped walking robot WL-12RVII with the foot system and the adaptive walking control system installed was developed, and a walking experiment with WL-12RVII was performed. As a result, dynamic biped walking adapting to humans' floors with unknown shapes was realized. The maximum walking speed was 1.28 s/step with a 0.3 m step length, and the adaptable deviation range was from -16 to +16 mm/step in the vertical direction and from -3 to +3° in the tilt angle.

Proceedings ArticleDOI
20 Apr 1997
TL;DR: A humanoid that integrates stereo vision, full-body tactile sensor, sound sensors and wrist force sensors into a sensor image that processes a series of sensor images and controls its body using them is described.
Abstract: A robot body is not only an active unit but also a sensing unit. This paper describes a humanoid that integrates stereo vision, full-body tactile sensor, sound sensors and wrist force sensors into a sensor image. The sensor image reflects the current state of the environment and is regarded as all the input to the robot brain in this system. The brain program processes a series of sensor images and controls its body using them. The robot is remote-brained and designed to be a testbed to do research on sensor based behaviors of a full-body humanoid. This paper describes the design concept and the details of the system.

Proceedings ArticleDOI
07 Jul 1997
TL;DR: It is demonstrated that rapid processing of the visual input of an active camera system, together with closed-loop control of a humanoid robot by distributed dynamical systems, is a promising way to perform robust and fast man-machine interaction.
Abstract: In the field of advanced service robotics it is of major importance to design man-machine interfaces that are fast and robust enough to cope with fuzzy and rapid human gestures. This paper describes our implementation of a complex interactive behavior on our multi-degree-of-freedom robot Arnold. The desired behavior is to position Arnold in front of a person and reach for the human hand, making use of all degrees of freedom the robot possesses. It is easy to see that this is a basic component in tasks where man-machine interaction is required, e.g. passing objects or learning by showing. The complex overall behavior can be decomposed into simple behaviors that are related to the basic robot devices, i.e. platform, arm and head. These processes act in combination without explicit exchange of information, coupled only through their effect on the sensor input, following the dynamic approach described by G. Schoner et al. (1996). Fast perception of behaviorally relevant information is achieved by a combination of color- and stereo-vision algorithms in an active vision process. Closing the feedback loop by observing the environment demonstrates good coordination of active vision and positioning of hand and platform in real time. We demonstrate that rapid processing of the visual input of an active camera system and closed-loop control of a humanoid robot by distributed dynamical systems is a promising way to perform robust and fast man-machine interaction.
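
The cited dynamic approach treats each elementary behavior as an attractor of a differential equation over a behavioral variable. A minimal sketch for one such behavior, relaxing the platform's heading toward a target direction (the gain and time step are illustrative, not the paper's values):

```python
import math

def step_heading(phi, psi, lam=2.0, dt=0.05):
    # Attractor dynamics: d(phi)/dt = -lambda * sin(phi - psi).
    # phi = psi is a stable fixed point, so the heading relaxes toward the
    # target direction; several such equations can run concurrently, coupled
    # only through the sensed state, as in the paper's architecture.
    return phi + dt * (-lam * math.sin(phi - psi))

phi, psi = 2.0, 0.5   # current heading and target direction, radians
for _ in range(200):
    phi = step_heading(phi, psi)
# phi has now converged to the target direction psi
```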

Proceedings ArticleDOI
15 Sep 1997
TL;DR: In this article, the authors compared some kinematic properties of the human arm with standard approaches in robot design and found that the human arm possesses an extremely effective kinematic structure, taking into account the mechanism's kinematic redundancy, the workspace properties, and the velocity-torque capabilities.
Abstract: The paper studies possible advantages of humanoid robot manipulators by comparing some kinematic properties of the human arm with standard approaches in robot design. The human arm possesses an extremely effective kinematic structure if we take into account the mechanism's kinematic redundancy, the workspace properties, and the velocity-torque capabilities. The human arm utilises kinematic singularities to compensate for weak actuation, while in robotic practice kinematic singularities are still avoided. Important issues also include dual-arm cooperation, as well as teaching by learning and visual teaching by showing accompanied by vocal instructions.

Book ChapterDOI
15 Jun 1997
TL;DR: In this paper, the authors investigate the problem of artificial perception related to manipulation tasks in robotics.
Abstract: This paper investigates the problem of artificial perception related to manipulation tasks in robotics.

Journal ArticleDOI
TL;DR: This paper describes the design concept and the details of the system, and introduces an experiment in which the robot reaches and grasps an object on a table through coordinating legs and arms.
Abstract: We present a 35 d.o.f. humanoid which can perform a reach-and-grasp motion through coordinating legs and arms. The humanoid robot is designed to be a research testbed to integrate research tools. Each leg and arm has 6 d.o.f. The neck has 3 d.o.f. Each hand has 4 d.o.f. The key idea of the system architecture is a remote-brained approach. In this paper we describe the design concept and the details of the system, and introduce an experiment in which the robot reaches and grasps an object on a table through coordinating legs and arms.

Proceedings ArticleDOI
07 Aug 1997
TL;DR: This paper presents an algorithm for planning the motions of a sensor and a manipulator that are kinematically connected that allows occlusion-free viewing of a target point and placement of the hand at a target hand location.
Abstract: This paper presents an algorithm for planning the motions of a sensor and a manipulator that are kinematically connected. Humanoid robots or other service robots have a vision system and one or two arms, and they are usually mounted on a single body. Given a task to be performed, the robot needs to plan a short, collision-free motion for the arm while placing the camera for optimal viewing conditions so the task progress can be monitored. Since they are kinematically connected, motion of the camera affects the motion of the arm, or the arm may block the view of the camera. Our algorithm plans the body posture, camera motion and the manipulator motion that allows occlusion-free viewing of a target point and placement of the hand at a target hand location. It uses the robot's kinematic redundancy to avoid collisions and occlusion of the view point. Since the ensemble of the camera actuator and the manipulator has many joints, the search is performed in the world space rather than in the high-dimensional joint space. The global nature of our search algorithm enables finding a solution in most practical cases.

Proceedings ArticleDOI
07 Sep 1997
TL;DR: The system is designed by dividing the brain into a low-level real-time layer and a high-level action decision layer; the real-time layer runs on a transputer network.
Abstract: In this paper, we describe the design and implementation of the real-time layer for a remote-brained robot. The "remote-brained approach" is a paradigm for software research that separates the robot brain from the robot body. The interface from the body to the brain's top-level software is also described. Furthermore, an application to reactive motion of a humanoid-type robot is described. The real-time facility is fundamentally important for a robot that behaves in the real world. To accomplish both large-scale intelligent software and real-time functionality, we design the system by dividing the brain into a low-level real-time layer and a high-level action decision layer. The action decision layer runs on a workstation, and the real-time layer on a transputer network.


Book ChapterDOI
15 Jun 1997
TL;DR: Three kinds of manipulations performed by Saika are presented: hitting a bounding ball, grasping unknown objects by groping and catching a thrown ball, chosen to study behavior-based movement control and intelligence.
Abstract: This article addresses the development of a humanoid robot named Saika and skillful manipulations performed by Saika. The developed humanoid robot Saika has a two-DOF neck, dual five-DOF upper arms, a torso and a head which consists of two eyes and two ears. Saika has human-size dimensions and weighs eight kilograms. This article also presents three kinds of manipulations performed by Saika: (1) hitting a bounding ball, (2) grasping unknown objects by groping and (3) catching a thrown ball. Those tasks are chosen to study behavior-based movement control and intelligence.


Proceedings ArticleDOI
12 Oct 1997
TL;DR: An algorithm that computes arm motion and hand grasp configuration to reach and grasp an object by a humanoid robot and will be an important module for humanoid robots and avatars in virtual reality systems.
Abstract: This paper presents an algorithm that computes arm motion and hand grasp configuration to reach and grasp an object by a humanoid robot. Although grasping an object is a relatively easy task for humans, this task needs to take into account many constraints including arm joint limits, stability of grasp, and the possibility of collisions between the robot and objects in the environment. The presented algorithm finds the optimal arm and hand configuration to grasp an object without enumerating all possible configurations by employing heuristics to guide the search. Efficiency is gained by evaluating different constraints in increasing order of complexity so as to eliminate infeasible grasp configurations with minimal computation. Computed grasp configurations are such that arm joints are far from their limits, and they are close to grasps used by humans. Our algorithm will be an important module for humanoid robots and avatars in virtual reality systems.
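
The efficiency idea above, evaluating constraints in increasing order of cost so cheap tests prune candidates before expensive ones run, can be sketched as a short-circuiting filter. The constraint functions and the candidate-grasp representation below are illustrative stand-ins, not the paper's actual tests:

```python
def within_joint_limits(grasp):
    # Cheap test: every joint angle inside an assumed +/- 2.8 rad range.
    return all(-2.8 <= q <= 2.8 for q in grasp["joints"])

def grasp_is_stable(grasp):
    # Moderate test: a precomputed stability score above a threshold.
    return grasp["stability"] >= 0.5

def collision_free(grasp):
    # Expensive test: a collision query, stubbed here as a stored flag.
    return not grasp["collides"]

# Cheapest first; all() short-circuits, so an infeasible grasp is rejected
# by the first failing (and therefore cheapest possible) constraint.
CONSTRAINTS = [within_joint_limits, grasp_is_stable, collision_free]

def feasible_grasps(candidates):
    return [g for g in candidates
            if all(check(g) for check in CONSTRAINTS)]
```

In the paper the surviving candidates are further ranked by heuristics (distance from joint limits, similarity to human grasps); that scoring step is omitted here.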

Proceedings ArticleDOI
20 Jun 1997
TL;DR: A walking pattern generation method for a humanoid biped walking robot, which has two knee joints, four hip joints and two arm joints, is presented and it is shown by computer simulations that the proposed networks can successfully generate the desirable walking patterns.
Abstract: Summary form only given. It has been demonstrated by physiological experiments that most autonomic oscillatory activities of living organisms are generated by rhythmic activities of the corresponding neural systems, and several models of neural oscillatory networks have been suggested. This paper concentrates on constructing suitable neural oscillatory networks which can generate desirable walking patterns. It presents a walking pattern generation method for a humanoid biped walking robot, which has two knee joints, four hip joints and two arm joints. We propose several neural oscillator networks, each with six or eight neurons, constructed in accordance with the mechanism of human walking, to generate the walking patterns of the six leg joints in a 3D plane, the six body joints in a sagittal plane, and the eight body joints in a 3D plane, respectively. It is shown by computer simulations that the proposed networks can successfully generate the desirable walking patterns.
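
A common concrete choice for the rhythmic neural unit such networks are built from (assumed here; the paper's exact neuron model may differ) is a Matsuoka-type pair of mutually inhibiting neurons with adaptation. A sketch with illustrative parameters:

```python
import numpy as np

def matsuoka_oscillator(steps=3000, dt=0.01, tau=0.25, tau_a=0.5,
                        beta=2.5, w=2.5, c=1.0):
    """Two mutually inhibiting neurons with self-adaptation; the difference
    of their firing rates is a rhythmic drive signal for one joint."""
    x = np.array([0.1, 0.0])  # membrane states (asymmetric start breaks symmetry)
    v = np.zeros(2)           # adaptation (fatigue) states
    out = np.empty(steps)
    for i in range(steps):
        y = np.maximum(x, 0.0)                   # firing rates
        # Each neuron receives tonic input c, inhibition w from the other
        # neuron's output, and self-inhibition beta from its adaptation state.
        dx = (-x - beta * v - w * y[::-1] + c) / tau
        dv = (-v + y) / tau_a
        x = x + dx * dt
        v = v + dv * dt
        out[i] = y[0] - y[1]                     # extensor minus flexor
    return out

signal = matsuoka_oscillator()
```

Coupling several such units, one per joint, with phase-setting cross-connections yields the six- and eight-neuron pattern-generating networks the paper describes.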

Proceedings ArticleDOI
09 Dec 1997
TL;DR: This paper describes a research testbed, a dual-arm humanoid robot and human user, and the use of this testbed for a human directed sorting task, and some proposed experiments for evaluating the integration of the human into the robot system.
Abstract: This paper discusses the problem of integrating human intelligence and skills into an intelligent manufacturing system. Our center has joined the Holonic Manufacturing Systems (HMS) Project, an international consortium dedicated to developing holonic systems technologies. One of our contributions to this effort is in Work Package 6: flexible human integration. This paper focuses on one activity, namely, human integration into motion guidance and coordination. Much research on intelligent systems focuses on creating totally autonomous agents. At the Center for Intelligent Systems (CIS), we design robots that interact directly with a human user. We focus on using the natural intelligence of the user to simplify the design of a robotic system. The problem is finding ways for the user to interact with the robot that are efficient and comfortable for the user. Manufacturing applications impose the additional constraint that the manufacturing process should not be disturbed; that is, frequent interaction with the user could degrade real-time performance. Our research in human-robot interaction is based on a concept called human directed local autonomy (HuDL). Under this paradigm, the intelligent agent selects and executes a behavior or skill, based upon directions from a human user. The user interacts with the robot via speech, gestures, or other media. Our control software is based on the intelligent machine architecture (IMA), an object-oriented architecture which facilitates cooperation and communication among intelligent agents. In this paper we describe our research testbed, a dual-arm humanoid robot and human user, and the use of this testbed for a human directed sorting task. We also discuss some proposed experiments for evaluating the integration of the human into the robot system. At the time of this writing, the experiments have not been completed. © (1997) COPYRIGHT SPIE--The International Society for Optical Engineering.

Proceedings ArticleDOI
01 Jul 1997
TL;DR: A three-layered control scheme for the incorporation of linguistically formulated control strategies into a robot controller that consists of parallel working layers with possibly conflicting behaviors encoded as a fuzzy algorithm is discussed.
Abstract: In this paper a three-layered control scheme for the incorporation of linguistically formulated control strategies into a robot controller is discussed. Based upon the behavioral decomposition of activities, the controller consists of parallel working layers with possibly conflicting behaviors. Each layer is encoded as a fuzzy algorithm. The first layer of the controller contains a recently introduced fuzzy algorithm for the solution of the inverse kinematics for redundant robots. This algorithm is demonstrated on a three-link planar manipulator. The kinematic redundancy of the system is used for local collision avoidance and joint limit avoidance. Instead of optimizing each layer independently, we are interested in a good overall behavior of the controller.
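
The layered fuzzy blending described above can be sketched as a weighted average of the proposals of concurrently active behaviors; the membership function and the layer definitions below are hypothetical, not the paper's rule base:

```python
def tri(x, a, b, c):
    """Triangular fuzzy membership function peaking at b, zero outside [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def blend(layers, state):
    """Each layer pairs a fuzzy activation with a proposed command (e.g. a
    joint velocity); the controller outputs the activation-weighted average
    (centroid-style defuzzification), so conflicting layers are blended
    rather than optimized independently."""
    num = den = 0.0
    for activation, proposal in layers:
        w = activation(state)
        num += w * proposal(state)
        den += w
    return num / den if den > 0 else 0.0
```

With e.g. a collision-avoidance layer and a joint-limit-avoidance layer registered, the blended output trades the two off smoothly as their activations overlap.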

Proceedings ArticleDOI
07 Jul 1997
TL;DR: In this article, a survey is given on how this type of machine vision can be applied to improve the performance of attitude determination systems used in space research and the orientation capability of space rovers, and how it could extend the usefulness of radial profilometry and the study of crystal growth in microgravity.
Abstract: Centric minded imaging (CMI), in contrast to the everyday practice of see-through-window (STW) imaging, is applied to develop machine vision systems which, similarly to human vision, have peripheral and foveal fields of vision simultaneously. A survey is given on how this type of machine vision can be applied to improve the performance of attitude determination systems used in space research and the orientation capability of space rovers, and how it could extend the usefulness of radial profilometry and the study of crystal growth in microgravity.
