
Showing papers presented at "Robot and Human Interactive Communication" in 1996


Proceedings ArticleDOI
11 Nov 1996
TL;DR: Experimental results obtained demonstrate that personified interfaces help users engage in a task, and are well suited for an entertainment domain, and that there is a dichotomy between user groups which have opposite opinions about personification.
Abstract: It is still an open question whether software agents should be personified in the interface. In order to study the effects of faces and facial expressions in the interface, a series of experiments was conducted to compare subjects' responses to and evaluation of different faces and facial expressions. The experimental results demonstrate that: (1) personified interfaces help users engage in a task, and are well suited for an entertainment domain; (2) people's impressions of a face in a task are different from those of the face in isolation, and the perceived intelligence of a face is determined not by the agent's appearance but by its competence; (3) there is a dichotomy between user groups with opposite opinions about personification. Thus, agent-based interfaces should be flexible to support the diversity of users' preferences and the nature of tasks.

231 citations


Proceedings ArticleDOI
11 Nov 1996
TL;DR: This study deals with an exercise for restoration of function, one of the important rehabilitation tasks, which requires an exercise robot with multiple degrees of freedom to generate more realistic motion patterns.
Abstract: The application of robots to rehabilitation has become a matter of great concern. This study deals with an exercise for restoration of function, one of the important rehabilitation tasks. Single-joint exercises have already been achieved with automatically controlled machines. Multijoint exercise is now desirable, which requires an exercise robot with multiple degrees of freedom to generate more realistic motion patterns. Such a robot has to be absolutely safe for humans. A pneumatic actuator is well suited to such a robot because of the flexibility arising from air compressibility, so a pneumatically driven rubber artificial muscle manipulator is applied to construct an exercise robot with two degrees of freedom. An impedance control strategy is employed to realize various exercise motion modes. Further, an identification method for the recovery condition is proposed to execute effective rehabilitation. Experiments demonstrate the effectiveness of the proposed rehabilitation robot system.
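The impedance control strategy mentioned above can be illustrated with a minimal sketch: the robot behaves like a virtual spring-damper around a target posture, so the force it exerts on the limb stays gentle. The gains, mass, and target below are illustrative, not the paper's values.

```python
# A minimal sketch of an impedance controller for an exercise mode: the robot
# acts as a virtual spring-damper pulling toward a target position. Gains K, B
# and the 1-DOF plant are illustrative assumptions, not the paper's parameters.

def impedance_force(x, v, x_target, v_target=0.0, K=50.0, B=10.0):
    """Force command of a virtual spring-damper (impedance) controller."""
    return K * (x_target - x) + B * (v_target - v)

# Simulate a 1-DOF limb segment (mass m) being guided toward x_target.
m, dt = 1.0, 0.01
x, v = 0.0, 0.0
for _ in range(2000):
    f = impedance_force(x, v, x_target=0.3)
    v += (f / m) * dt
    x += v * dt

print(round(x, 3))  # → 0.3: the limb settles gently at the target
```

Varying K and B changes how stiff or compliant the exercise feels, which is how different "exercise motion modes" could be realized under this scheme.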

165 citations


Proceedings ArticleDOI
11 Nov 1996
TL;DR: The artificial emotional creature project is introduced, which aims to explore a new area in robotics, with an emphasis on human-robot interaction, and describes an algorithm implementing a focus of attention through the integration of sensors.
Abstract: Recent advances in robotics have been applied to automation in industrial manufacturing, with the primary purpose of optimizing practical systems in terms of such objective measures as accuracy, speed, and cost. This paper introduces the artificial emotional creature project, which seeks to explore a different direction that is not so rigidly dependent on such objective measures. The goal of this project is to explore a new area in robotics, with an emphasis on human-robot interaction. There is a large body of evidence showing the importance of the interaction between humans and animals such as pets. We have been building a pet robot as an implementation of an artificial emotional creature, with the subjective appearance of "behaviors" that are dependent on internal states, or "emotions", as well as external stimuli from both the physical environment and human beings. Human-robot interaction plays a large role, with mutual benefits. The pet robot has visual, audio, and tactile sensors; olfactory sensors will also be available. The paper describes an algorithm implementing a focus of attention through the integration of those sensors. In particular, simple sound localization will be developed by the robot through the integration of vision and audition, using the interaction of a human being with the robot as the training reference.

125 citations


Proceedings ArticleDOI
11 Nov 1996
TL;DR: A new concept of visual/haptic display called a WYSIWYF (What You See Is What You Feel) display is proposed, which ensures the correct visual/haptic registration that is important for effective hand-eye coordination training.
Abstract: We investigate the possibility of skill mapping from human to human via a visual/haptic display system. Our future goal is to develop a training system for motor skills such as surgical operations. We have proposed a new concept of visual/haptic display called a WYSIWYF (What You See Is What You Feel) display. The proposed concept ensures correct visual/haptic registration, which is important for effective hand-eye coordination training. Using the prototype WYSIWYF display, we conducted a preliminary skill-training experiment. Our idea of skill transfer is very simple; basically it is a "record-and-replay" strategy. The key questions are "What is the essential data to record for transferring the skill?" and "What is the best way to present the data to the trainee?". Several methods were tried but no remarkable result was obtained, presumably because the chosen task was too simple.
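The "record-and-replay" strategy can be sketched as storing an expert's trajectory and feeding it back as a guiding reference for the trainee through the haptic display. The function names, 1-D data, and spring gain below are hypothetical illustrations, not the paper's implementation.

```python
# Sketch of a record-and-replay skill-transfer loop, under the assumption that
# the recorded quantity is a timed position trajectory and that guidance is a
# simple spring pulling the trainee toward the recorded path.

def record(sensor_readings):
    """Capture the expert's timed trajectory."""
    return list(sensor_readings)

def replay(recording, trainee_positions, k=40.0):
    """Guide the trainee toward the recorded path with a spring-like force."""
    return [k * (ref - pos) for ref, pos in zip(recording, trainee_positions)]

expert = record([0.0, 0.1, 0.25, 0.4, 0.5])
trainee = [0.0, 0.05, 0.2, 0.42, 0.5]
forces = replay(expert, trainee)
print([round(f, 1) for f in forces])  # → [0.0, 2.0, 2.0, -0.8, 0.0]
```

The open questions in the abstract map directly onto this sketch: what to put in `record` (position only, or force as well) and how `replay` should present it (guiding force, visual overlay, or both).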

97 citations


Proceedings ArticleDOI
11 Nov 1996
TL;DR: A more efficient human/robot system is proposed that reduces the robot's velocity upon incipient contact detection at the surface, giving the human an interval margin for the reflexive withdrawal motion to avoid a more severe interaction.
Abstract: We discuss a way to achieve a fail-safe human/robot contact system. Most of our discussion is based on human pain tolerance, evaluated for the purpose of establishing the human safety space. First we review our previous work on the human-oriented design of a safe robot and the procedure of covering a robot with a viscoelastic material to achieve both impact force attenuation and contact sensitivity while keeping within the human pain tolerance limit. The safe robot design is verified through a demonstration that the robot exerts a contact force much less than the human pain tolerance and causes no pain to humans. Next, we propose a more efficient human/robot system that reduces the robot's velocity upon incipient contact detection at the surface, giving the human an interval margin for the reflexive withdrawal motion to avoid a more severe interaction. The experimental results show the effectiveness of the velocity reduction of the robot in a fail-safe manner.

63 citations


Proceedings ArticleDOI
11 Nov 1996
TL;DR: A human interface system for multirobot teleoperation using the WWW system is described and the operator can carry out inspection tasks from a distant place by teleoperating actual mobile robots.
Abstract: The concept of distributed autonomous robotic systems (DARS) has attracted many researchers' interest as a possible way to realize flexible, robust and intelligent robotic systems. However, robots cannot carry out all high-level tasks by themselves; a human operator should operate the robotic system according to the requirements of the tasks. We have developed a human interface system for DARS. In this paper, a framework of human interface systems for teleoperation is examined to clarify the requirements for systems in which a single operator operates multiple robots from a distant place. A prototype of the teleoperation system using the Internet as a medium for information transfer is developed and implemented on an actual testing platform consisting of multiple omnidirectional mobile robots with cameras.

54 citations


Proceedings ArticleDOI
11 Nov 1996
TL;DR: In this paper, the authors used hidden Markov models (HMM) with continuous output probabilities to extract a temporal pattern of facial motion and proposed a new feature obtained from wavelet transform coefficients.
Abstract: Facial expression recognition is an important technology fundamental to realizing intelligent image coding systems and advanced man-machine interfaces in visual communication systems. In the computer vision field, many techniques have been developed to recognize facial expressions. However, most of these techniques are based on static features extracted from one or two still images; they are not robust against noise and cannot recognize subtle changes in facial expressions. In this paper we use hidden Markov models (HMM) with continuous output probabilities to extract a temporal pattern of facial motion. To improve the recognition performance, we propose a new feature obtained from wavelet transform coefficients. For the evaluation, we use 180 image sequences taken from three male subjects. Using these image sequences, the recognition rate in user-trained mode reached 98%, compared with 84% using our previous method. The recognition rate in user-independent mode reached 84% when the expressions were restricted to four.
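The classification scheme described above can be sketched in miniature: one HMM per expression class, with the sequence assigned to the class whose model gives the highest forward likelihood. For brevity this sketch assumes discrete observation symbols (the paper uses continuous output probabilities), and all model parameters are toy values.

```python
# Minimal HMM sequence classification sketch: per-class models scored with the
# forward algorithm. Discrete emissions and 2-state models are simplifying
# assumptions; the class names "smile"/"frown" are illustrative.

def forward_likelihood(obs, pi, A, B):
    """Forward algorithm: P(obs | model) for a discrete HMM."""
    n = len(pi)
    alpha = [pi[s] * B[s][obs[0]] for s in range(n)]
    for o in obs[1:]:
        alpha = [sum(alpha[sp] * A[sp][s] for sp in range(n)) * B[s][o]
                 for s in range(n)]
    return sum(alpha)

pi = [0.5, 0.5]
A = [[0.9, 0.1], [0.1, 0.9]]            # sticky state transitions
B_smile = [[0.8, 0.2], [0.7, 0.3]]      # toy "smile" model favors symbol 0
B_frown = [[0.2, 0.8], [0.3, 0.7]]      # toy "frown" model favors symbol 1

seq = [0, 0, 1, 0, 0]                   # a quantized facial-motion sequence
scores = {"smile": forward_likelihood(seq, pi, A, B_smile),
          "frown": forward_likelihood(seq, pi, A, B_frown)}
print(max(scores, key=scores.get))  # → smile
```

Extending this to the paper's setting would mean replacing the discrete emission table `B` with continuous output densities over the wavelet-based feature vectors.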

39 citations


Proceedings ArticleDOI
11 Nov 1996
TL;DR: The NN recognition of facial expressions and the face robot's performance in generating facial expressions are almost at the same level as those of humans, implying a high technological potential for the animate face robot to undertake interactive communication with humans once an artificial emotion is implemented.
Abstract: We attempt to introduce a 3D realistic human-like animate face robot to the human-robot communication modality. The face robot can recognize human facial expressions as well as produce realistic facial expressions in real time. For the animate face robot to communicate interactively, we propose a new concept of "active human interface", and we investigate the performance of real-time recognition of facial expressions by a neural network (NN) and the expression ability of facial messages on the face robot. We found that the NN recognition of facial expressions and the face robot's performance in generating facial expressions are almost at the same level as those of humans. We integrate these two component technologies so that the face robot produces a facial expression in reaction to the recognized human facial expression in real time. This implies a high technological potential for the animate face robot to undertake interactive communication with humans once an artificial emotion is implemented.

33 citations


Proceedings ArticleDOI
G.C. Burdea1
11 Nov 1996
TL;DR: Several key aspects of medical VR including organ modeling, tissue compliance and cutting, and the Teleos Toolkit are surveyed, followed by a review of medical robotics from the kinematics and safety points of view, including special-purpose manipulators and force feedback masters.
Abstract: Virtual reality and robotics are teaming to revolutionize the art of medicine, from student training, to diagnosis, anesthesia, surgery and rehabilitation. This paper surveys several key aspects of medical VR including organ modeling, tissue compliance and cutting, and the Teleos Toolkit. This is followed by a review of medical robotics from the kinematics and safety points of view, including special-purpose manipulators and force feedback masters. Finally, we present applications in the areas of tumor palpation, epidural anesthesia, laparoscopic surgery, as well as open and telesurgery.

32 citations


Proceedings ArticleDOI
11 Nov 1996
TL;DR: Using this system, the operation force distribution, joint angles of hands and fingers, wrist position and direction in the operation space and the view of the operation can be simultaneously measured and recorded in real time.
Abstract: A measuring system for grasping function is developed. The purpose of this system is to measure and analyze the grasping function of human hands. In this report, the structure of the system is introduced. The sensor glove, the main device of the system, developed by the authors to measure the operation force distribution in hands, is also explained in detail. Using this system, the operation force distribution, the joint angles of the hand and fingers, the wrist position and direction in the operation space, and the view of the operation can be simultaneously measured and recorded in real time. The system consists of 5 subsystems: 1) the sensor glove for operation force distribution; 2) the Cyber Glove (by Virtual Technologies) for 18 joint angles of the hand; 3) a magnetic sensor to detect the wrist position and direction; 4) video equipment for observation of the operation view; and 5) a personal computer to control the system.

31 citations


Proceedings ArticleDOI
11 Nov 1996
TL;DR: The basic models and methodologies for the analysis and control of mobile manipulation systems are presented, and a new decentralized control structure for cooperative tasks is proposed.
Abstract: Mobile manipulation capabilities are key to many new applications of robotics in space, underwater, construction, and service environments. This article discusses the ongoing effort at Stanford University for the development of multiple mobile manipulation systems and presents the basic models and methodologies for their analysis and control. We present the extension of these methodologies to mobile manipulation systems and propose a new decentralized control structure for cooperative tasks. The article also discusses experimental results obtained with two holonomic mobile manipulation platforms we have designed and constructed at Stanford University.

Proceedings ArticleDOI
11 Nov 1996
TL;DR: An example of an evolutional system that determines actions of virtual creatures in a virtual environment and its process of the evolution is reported.
Abstract: This paper proposes a decision making system and its evolutionary process. By evolving, this system comes to decide its actions as if it had an emotion. According to the URGE theory asserted by Masanao Toda (1993), emotion is the most optimized and fundamental decision making system. If such a system can be generated artificially, it may be applied in various engineering fields. Conventional decision making systems have no such emotional nature. Thus, we started research on a decision making system with emotion, which may have many possibilities. As a first step, we are trying to make an emotional feature that is included in the decision making system. This paper reports an example of an evolutional system that determines the actions of virtual creatures in a virtual environment.

Proceedings ArticleDOI
11 Nov 1996
TL;DR: The modeling scheme of emotions appearing in speech production is described using a neural network, together with the technique for synthesizing emotional speech from neutral speech and the subjective evaluation of speech synthesized from the emotion space.
Abstract: This paper describes a scheme for modeling the emotions appearing in speech production by using a neural network, and a technique for synthesizing emotional speech from neutral speech. To model emotion conditions in speech production, an emotion space is introduced. The emotion space represents the emotion condition appearing in speech production in a two-dimensional space and realizes both mapping and inverse mapping between the emotion condition and the speech production. We developed an emotional speech synthesizer that synthesizes emotional speech by modifying neutral speech in its timing, pitch and intensity. This paper also describes the subjective evaluation of speech synthesized from the emotion space.
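The synthesis step (modifying neutral speech in timing, pitch and intensity according to a point in a two-dimensional emotion space) can be sketched as simple parameter scaling. The mapping coefficients, the (arousal, valence) axes, and all numbers below are illustrative assumptions, not the paper's learned mapping.

```python
# Sketch of emotion-space-driven prosody modification: per-frame pitch and
# intensity plus total duration are scaled by a 2-D emotion point. The linear
# coefficients are hypothetical stand-ins for the paper's neural-network map.

def emotional_modification(frames, duration, emotion):
    """Scale neutral prosody by an (arousal, valence) emotion-space point."""
    arousal, valence = emotion
    pitch_scale = 1.0 + 0.2 * arousal + 0.1 * valence
    intensity_scale = 1.0 + 0.3 * arousal
    duration_scale = 1.0 - 0.15 * arousal          # aroused speech is faster
    new_frames = [(p * pitch_scale, i * intensity_scale) for p, i in frames]
    return new_frames, duration * duration_scale

neutral = [(120.0, 0.5), (125.0, 0.6), (118.0, 0.55)]  # (pitch Hz, rel. intensity)
joy = (1.0, 1.0)                                       # high arousal, positive valence
frames, dur = emotional_modification(neutral, duration=1.2, emotion=joy)
print(round(frames[0][0], 1), round(dur, 2))  # → 156.0 1.02
```

In the paper's system the forward map (emotion point to prosody change) and its inverse (observed prosody back to an emotion point) are both realized, which is what makes the 2-D emotion space useful for analysis as well as synthesis.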

Proceedings ArticleDOI
Woong-Jang Cho1, Dong-Soo Kwon
11 Nov 1996
TL;DR: A new approach based on an artificial potential function is proposed for the obstacle avoidance of redundant manipulators; it searches the path in real time using local distance information and is implemented for the collision avoidance of a redundant robot in simulation.
Abstract: A new approach based on an artificial potential function is proposed for the obstacle avoidance of redundant manipulators. Unlike the so-called "global" path planning method, which requires expensive computations for the path search before the manipulator starts to move, this new approach, called "local" path planning, searches the path in real time using local distance information. Previous artificial potential functions have exhibited local minima in some complex environments. This paper proposes a potential function that has no local minima even in a cluttered environment. The proposed potential function has been implemented for the collision avoidance of a redundant robot in simulation. A simulation demonstrates an algorithm that prevents collisions with obstacles by calculating the repulsive potential exerted on the links, based on the shortest distance to an object.
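The local potential-field idea can be sketched with the classic formulation: an attractive pull toward the goal plus a repulsive push away from the nearest obstacle, computed from local distance only. Note this is the standard potential (which can exhibit local minima), not the paper's minima-free function; all gains and geometry are illustrative.

```python
# Sketch of local path planning by potential fields: each step descends the
# combined attractive + repulsive potential using only the current distance to
# the obstacle. Gains ka, kr, range d0 and step size lr are assumptions.

import math

def step(pos, goal, obstacle, d0=1.0, ka=1.0, kr=0.5, lr=0.05):
    fx = ka * (goal[0] - pos[0])                   # attractive force
    fy = ka * (goal[1] - pos[1])
    ox, oy = pos[0] - obstacle[0], pos[1] - obstacle[1]
    d = math.hypot(ox, oy)
    if d < d0:                                     # repulsion only inside range d0
        mag = kr * (1.0 / d - 1.0 / d0) / (d * d)
        fx += mag * ox
        fy += mag * oy
    return (pos[0] + lr * fx, pos[1] + lr * fy)

pos, goal, obs = (0.0, 0.0), (2.0, 0.0), (1.0, 0.4)
for _ in range(400):
    pos = step(pos, goal, obs)

d_goal = math.hypot(pos[0] - goal[0], pos[1] - goal[1])
print(d_goal < 0.01)  # → True: the point skirts the obstacle and reaches the goal
```

The paper's contribution is precisely a repulsive construction for which descent like this cannot get stuck before reaching the goal, even in cluttered environments.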

Proceedings ArticleDOI
11 Nov 1996
TL;DR: In this article, a Virtual Kabuki Theater (VKT) is presented, in which people at different locations can be Kabuki actors in a Kabuki scene, and facial expressions of a person are detected in real time in face images from the small camera fixed to the helmet worn by the person.
Abstract: This paper describes the Virtual Kabuki Theater the authors have recently developed. In the Virtual Kabuki Theater, people at different locations can be Kabuki actors in a Kabuki scene. In our system, Kabuki actors' 3D models are created in advance. Facial expressions of a person are detected in real time in the face images from the small camera fixed to the helmet worn by the person. Body movements of the person are estimated in real time from the thermal images acquired by the infrared camera that observes the person. The detected expressions and body movements are reproduced in the Kabuki actor's model. Our implementation shows good performance. The Virtual Kabuki Theater is a first step towards human metamorphosis systems, in which anyone can change (metamorphose) his/her form into other characters.

Proceedings ArticleDOI
11 Nov 1996
TL;DR: The paper describes a philosophical approach and the consequent technical solution to these problems proposed in the framework of a European joint research project: the TIDE-MOVAID project (mobility and activity assistance systems for the disabled).
Abstract: This paper provides a general approach to the problem of designing proper "interfaces" between humans and technologies at home. These interfaces should be designed for "all users", i.e. as friendly and simple to operate as possible, whatever the real technical background and/or motor ability of the user. The paper describes a philosophical approach and the consequent technical solution to these problems proposed in the framework of a European joint research project: the TIDE-MOVAID project (mobility and activity assistance systems for the disabled). The MOVAID approach is based on the assumption that all users are disabled when facing new technologies. MOVAID aims to provide solutions for interfacing all users, including the moderately and severely disabled, with their home environment. The general needs for simplicity and functionality of able-bodied and moderately disabled users are catered for by novel friendly interfaces for standard appliances, while the needs of independence and autonomy of operation of the severely disabled users are catered for by a mobile manipulator robotic interface (the MOVAID unit). In the paper, the MOVAID approach and the prototypes of a novel friendly interface for a microwave oven and of the MOVAID mobile unit are presented in detail. Furthermore, the results of validation tests with moderately disabled users of the appliance interface are reported and discussed.

Proceedings ArticleDOI
11 Nov 1996
TL;DR: A new control approach, virtual impedance with position error correction (VIPEC), based on the compliant motion technique, is proposed to help two operators physically interact with each other.
Abstract: As a new application of teleoperation, this paper presents human-human interaction through the Internet. By integrating the concept of teleoperation with the Internet, we have developed the Tele-Handshaking System (THS) which allows two persons in two different locations to physically communicate with each other by shaking hands through the system and receive tactile feedback. Data between both locations is transmitted through the Internet by using TCP/IP protocol. A design to reduce the effect of variable time delay in data transmission through the Internet is shown. In order to physically couple two operators with each other, some required ideal responses for THS with time delay are defined. To achieve these ideal responses, we propose a new control approach, virtual impedance with position error correction (VIPEC), which is based on the compliant motion technique. Achievement of the ideal responses on THS can help two operators to interact with each other.

Proceedings ArticleDOI
11 Nov 1996
TL;DR: A muscle model is proposed to create a super realistic human face and this work tries to choose and modify muscles which are good for mouth shape generation to realize a natural conversation scene.
Abstract: Human image synthesis by computer graphics is essential to a virtual agent in human interfaces and entertainment visual systems. In this paper, a muscle model is proposed to create a super realistic human face. There have been several studies on synthesizing human expressions; however, research on mouth shape control in conversation has been limited to our group. In particular, we choose and modify muscles suitable for mouth shape generation to realize a natural conversation scene. Basic mouth shapes are defined by measuring real images captured by a camera. We also make animations using standard phoneme durations to realize lip-sync.

Proceedings ArticleDOI
11 Nov 1996
TL;DR: The reconstruction method has the feature that no task motion error is induced by the hardware limitations while a possible null motion error is minimized, under the assumed recoverability.
Abstract: Various physical limitations which exist in the manipulator inverse kinematic system, for example joint travel and velocity limits, induce inevitable motion errors. This paper deals with the problem of how to reconstruct such an inverse kinematic solution using redundancy, in order not to entail any task motion error. By analyzing the error due to hardware limitations with respect to the kinematically decoupled coordinates, we show that the recoverability limitation reduces to the solvability of a reconstruction equation under the feasibility condition. Next it is shown that the reconstruction equation is solvable if the configuration is not a joint-limit singularity. The reconstruction method is proposed based on the geometrical analysis of the recoverability of hardware limitations. The method has the feature that no task motion error is induced by the hardware limitations while a possible null motion error is minimized, under the assumed recoverability.
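The redundancy mechanism behind this result can be sketched with the standard pseudoinverse-plus-null-space decomposition: a secondary joint motion (for example, steering joints away from their travel limits) added through the null-space projector changes the joint velocities but contributes zero task motion. The 1x3 Jacobian row and the vector z below are illustrative, and this is the textbook decomposition rather than the paper's full reconstruction procedure.

```python
# Sketch of null-space redundancy resolution for a 1-DOF task on a 3-DOF arm:
# qdot = J+ xdot + (I - J+ J) z. The Jacobian row J and limit-avoidance
# direction z are hypothetical numbers for illustration.

def solve(J, xdot, z):
    """Pseudoinverse solution plus null-space motion for a 1xN Jacobian row."""
    jj = sum(ji * ji for ji in J)            # J J^T (a scalar here)
    jplus = [ji / jj for ji in J]            # pseudoinverse J+ = J^T / (J J^T)
    jz = sum(ji * zi for ji, zi in zip(J, z))
    # J+ xdot  +  z - J+ (J z)  ==  J+ xdot + (I - J+ J) z
    return [jp * xdot + zi - jp * jz for jp, zi in zip(jplus, z)]

J = [0.8, 0.5, 0.2]        # task Jacobian: 1 task DOF, 3 joints (redundant)
xdot = 0.1                 # desired task velocity
z = [0.3, -0.2, 0.5]       # secondary motion, e.g. toward joint mid-range

qdot = solve(J, xdot, z)
task_vel = sum(ji * qi for ji, qi in zip(J, qdot))
print(round(task_vel, 10))  # → 0.1: z reshapes the joint motion, not the task
```

The paper's reconstruction goes further, choosing the null-space component so that the motion stays within joint travel and velocity limits while the task error remains exactly zero.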

Proceedings ArticleDOI
11 Nov 1996
TL;DR: Information exchange requirements between a human operator and a semi-autonomous service robot operating in a remote environment are discussed, and the resulting man-robot interface (MRI) allows specification of various types of robot commands by use of an advanced system for natural spoken user-independent speech understanding and flexible command generation.
Abstract: Information exchange requirements between a human operator and a semi-autonomous service robot operating in a remote environment are discussed in this paper. The resulting man-robot interface (MRI) allows specification of various types of robot commands by use of an advanced system for natural spoken user-independent speech understanding and flexible command generation. Visual screen-based monitoring and support of complex operations are achieved by means of an animated 3D environmental model augmented by the image of an onboard CCD camera. Typical features of the MRI are demonstrated through experiments performed with the service robot ROMAN.

Proceedings ArticleDOI
11 Nov 1996
TL;DR: A method for detecting potential collisions between three-dimensional moving objects is described, and the performance of the algorithm is found to remain linear with respect to the complexity of the colliding objects.
Abstract: A method for detecting potential collisions between three-dimensional moving objects is described in this paper. An object-centered, spherical octree representation is defined and implemented for the localisation of potentially colliding features between polyhedral objects. These features are subsequently tested for intersection in order to calculate precisely the actual collision points. Application of the algorithm for the direct manipulation of objects in a virtual scene is considered, to investigate its real-time behaviour. The performance of the algorithm is found to remain linear with respect to the complexity of the colliding objects.
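The two-phase structure described above can be sketched in miniature: a cheap broad phase using bounding spheres localizes potentially colliding features, and only the surviving pairs would go on to exact intersection tests. The per-feature sphere decomposition below is an illustrative stand-in for the paper's object-centered spherical octree.

```python
# Sketch of broad-phase collision localization: feature pairs whose bounding
# spheres overlap are candidates for the (expensive) exact test. The sphere
# decomposition here is an assumption, not the paper's spherical octree.

import math

def spheres_overlap(c1, r1, c2, r2):
    """True if two bounding spheres intersect."""
    return math.dist(c1, c2) <= r1 + r2

def candidate_pairs(feats_a, feats_b):
    """Feature index pairs whose bounding spheres overlap (broad phase)."""
    return [(i, j)
            for i, (ca, ra) in enumerate(feats_a)
            for j, (cb, rb) in enumerate(feats_b)
            if spheres_overlap(ca, ra, cb, rb)]

# Two objects, each represented as (center, radius) feature spheres.
obj_a = [((0.0, 0.0, 0.0), 0.5), ((1.0, 0.0, 0.0), 0.5)]
obj_b = [((1.6, 0.0, 0.0), 0.5), ((5.0, 5.0, 5.0), 0.5)]

print(candidate_pairs(obj_a, obj_b))  # → [(1, 0)]: only one pair needs an exact test
```

A hierarchical structure such as the paper's spherical octree replaces the flat pairwise loop with a tree descent, which is what keeps the observed cost roughly linear in object complexity.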

Proceedings ArticleDOI
11 Nov 1996
TL;DR: The target of human avoidance motion in this study is passing motions, and it was found that when the subjects walked at an almost constant velocity and passed each other, the locus could be approximated by a catenary.
Abstract: The possibility of applying human avoidance motion to robots is considered. Our target of human avoidance motion in this study is passing motions. Two experiments, passing motion experiments on the road and in the laboratory, were conducted to construct a robot avoidance motion algorithm based on human avoidance motion. The characteristics of the avoidance algorithm were obtained by analyzing the results of the experiment on the road. From the results of the experiment in the laboratory, it was found that when the subjects walked at an almost constant velocity and passed each other, the locus could be approximated by a catenary. The avoidance algorithm was constructed from the data obtained in all the experiments, and the area of the human's space was calculated. In addition, the avoidance motion of a robot using the obtained avoidance algorithm has been simulated.
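The catenary finding can be written down directly: modeling the lateral deviation of the avoidance locus as a catenary arc that is zero where the detour begins and ends, and maximal at the passing point. The parameters a and L below are illustrative, not values fitted from the paper's data.

```python
# Sketch of a catenary-shaped avoidance locus: lateral deviation
# y(x) = a*(cosh(L/a) - cosh(x/a)), zero at x = ±L and peaking at the passing
# point x = 0. Parameters a (catenary scale) and L (detour half-length) are
# assumed values for illustration.

import math

def avoidance_locus(x, a=2.0, L=3.0):
    """Lateral deviation of the walker at longitudinal position x."""
    return a * (math.cosh(L / a) - math.cosh(x / a))

ys = [avoidance_locus(x) for x in (-3.0, -1.5, 0.0, 1.5, 3.0)]
print([round(y, 3) for y in ys])  # symmetric arc, largest deviation at x = 0
```

Once a and L are fitted to observed passing trajectories, the same closed form can drive the robot's avoidance motion, which is essentially what the constructed algorithm does with the experimental data.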

Proceedings ArticleDOI
11 Nov 1996
TL;DR: A new system is described which works hierarchically, from detecting the positions of human faces and their features to extracting their contours and feature points; it is confirmed to be very effective and robust when dealing with images of faces with complex backgrounds.
Abstract: This paper presents a method for automatic processing of human faces from color images. We describe a new system which works hierarchically, from detecting the positions of human faces and their features (such as eyes, nose, mouth, etc.) to extracting their contours and feature points. The positions of human faces and their parts are detected in the image by applying the integral projection method, which uses both color information (skin and hair color) and edge information (intensity and sign). A multiple active contour model is used to extract the contour lines of facial features; to do this, we use color information in the energy terms. Facial feature points are decided based on the optimized contours. A 3D facial model constructed from these points can be used to generate facial expressions or change the view. The proposed system is confirmed to be very effective and robust when dealing with images of faces with complex backgrounds.
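The integral projection step can be sketched concretely: summing a binary skin-color mask along rows and along columns gives two 1-D profiles, and thresholding their peaks localizes the face band in each axis. The tiny 6x8 mask and the threshold below are toy stand-ins for a real skin-color map.

```python
# Sketch of the integral projection method: row and column sums of a binary
# skin-color mask localize the face region. The mask and threshold are toy
# assumptions standing in for a real color-segmented image.

mask = [
    [0, 0, 0, 0, 0, 0, 0, 0],
    [0, 0, 1, 1, 1, 0, 0, 0],
    [0, 0, 1, 1, 1, 0, 0, 0],
    [0, 0, 1, 1, 1, 0, 0, 0],
    [0, 0, 0, 1, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 0, 0, 0],
]

rows = [sum(r) for r in mask]                       # horizontal projection
cols = [sum(r[c] for r in mask) for c in range(8)]  # vertical projection

# The face band is where each projection exceeds a threshold.
face_rows = [i for i, v in enumerate(rows) if v >= 2]
face_cols = [j for j, v in enumerate(cols) if v >= 2]
print(face_rows, face_cols)  # → [1, 2, 3] [2, 3, 4]
```

In the paper's system the same projection idea, combined with edge information, seeds the active contour models that then refine the facial-feature outlines.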

Proceedings ArticleDOI
11 Nov 1996
TL;DR: A new human interface system for controlling a mobile robot is described, in which a human operator can indicate the vehicle path on the image of the vehicle's front view by touching a panel on the display monitor.
Abstract: We propose a new human interface system for controlling a mobile robot. In this interface, a human operator can indicate the vehicle path on the image of the vehicle's front view by touching a panel on the display monitor. This method is expected to make operation of the mobile robot or vehicle easier. We are implementing this interface on our mobile robot to evaluate it. In this paper, we report its design and implementation.

Proceedings ArticleDOI
11 Nov 1996
TL;DR: The problem of induced random delays caused by the communication network is highlighted and sufficient conditions are derived to ensure the closed loop stability of the mobile robot control system.
Abstract: In this paper we deal with the stability analysis of the real-time control of mobile robots. In particular, the problem of random delays induced by the communication network is highlighted. Sufficient conditions are then derived to ensure the closed-loop stability of the mobile robot control system.
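The phenomenon being analyzed can be illustrated with a toy simulation: a feedback loop in which the control input reaches the plant after a random network delay. The scalar plant, gain, and delay distribution below are illustrative assumptions; the paper's contribution is analytical sufficient conditions, not simulation.

```python
# Toy sketch of networked control with random delay: the controller's output
# arrives 0-2 steps late, chosen at random each step. Plant, gain, and delay
# range are assumed numbers; a modest gain keeps the delayed loop stable.

import random

def simulate(gain, steps=400, max_delay=2, seed=1):
    random.seed(seed)
    a = 1.05                                  # slightly unstable open-loop plant
    x = 1.0
    controls = [0.0] * (max_delay + 1)        # buffer of past control inputs
    for _ in range(steps):
        controls.append(-gain * x)            # controller uses the current state
        d = random.randint(0, max_delay)      # random network-induced delay
        x = a * x + controls[-1 - d]          # plant receives a delayed control
    return abs(x)

print(simulate(gain=0.2) < 1e-2)  # → True: the closed loop contracts despite delay
```

A sufficient-condition analysis like the paper's would guarantee such contraction for a whole class of delay distributions, rather than checking one random realization.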

Proceedings ArticleDOI
11 Nov 1996
TL;DR: The DPD as a graspable user interface with emotional, non-verbal feedback is a promising candidate for the next generation of dialog techniques.
Abstract: To compare the advantages and disadvantages of a "natural user interface", a field study was carried out at the largest computer fair in Switzerland. Four different computer stations were presented to the public: 1) with a command language; 2) with a mouse; 3) with a touch screen; and 4) a digital playing desk (DPD) interface. With the DPD the user has to play a board game by moving a real chip on a virtual playing field against a virtual player. The task was to win the computer game. The reactions of the virtual player were simulated by "emoticons": colored comic strip pictures with a corresponding sound pattern. We investigated the effects of these four different interaction techniques. Results of the inquiry show that the touch screen station was rated the easiest interaction technique to use, followed by the mouse, the DPD interface and the command language interface. From the results of the field test we conclude that the DPD, as a graspable user interface with emotional, non-verbal feedback, is a promising candidate for the next generation of dialog techniques.

Proceedings ArticleDOI
11 Nov 1996
TL;DR: A unique Sensor Glove MKIII is described, which has been developed to measure the grasping force and its distribution, and it was found that the classification of grasping is possible using this sensor and the "contact web" method.
Abstract: It is important to analyze human grasping motions. Existing grasping classifications depend largely on personal definitions, and no unified view has been reached at present. The measured quantities in grasping include the posture of the hand and the grasping force and its distribution. Little has been reported on classifications based on the grasping force and its distribution. First, the paper describes a unique Sensor Glove MKIII, which has been developed to measure the grasping force and its distribution. Next, the Sensor Glove MKIII was used to classify grasping modes based on the distribution of grasping force. As a result, in the "prehensile grasp" classified by Cutkosky, the difference in grasping patterns due to the object shapes used in the experiment can be measured as a difference in pressure distribution patterns. In addition, it was found that the classification of grasping is possible using this sensor and the "contact web" method. These results show that the Sensor Glove MKIII can be useful for the classification of grasping patterns.

Proceedings ArticleDOI
11 Nov 1996
TL;DR: The human characteristics of grasping behavior when disturbance forces, inertial forces or moments of rotation act on a grasped object were investigated; it was found that the human controls the grasping force so that the grasped object does not rotate, and the moment acting on the object is successfully cancelled.
Abstract: We investigate the human characteristics of grasping behavior when disturbance forces, inertial forces or moments of rotation act on a grasped object. First we examine the grasping behavior when an inertial force of the object is generated. A subject grasps the experimental object with the thumb, index and ring fingers, and moves it up and down repeatedly in the vertical direction. Experimental results show that the human controls the grasping force to compensate for the inertial force exerted by the object and uses a force just greater than the minimum force required for grasping the object. Next we investigate the grasping characteristics when a moment of rotation acts on the object. The experimental object has a weight on one side so that its center of gravity is biased. As a result, it is found that the human controls the grasping force so that the grasped object does not rotate; the moment acting on the object is successfully cancelled.
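The "just greater than the minimum" finding can be sketched with Coulomb friction: for a pinch grasp with two opposing contact surfaces, friction 2·mu·F must support the load m·(g + a), so the slip-limit grip force is m·(g + a)/(2·mu). The mass, friction coefficient, and safety margin below are illustrative assumptions, not measured values from the paper.

```python
# Sketch of the minimum-grip-force model behind the observed behavior: the
# grip scales with weight plus inertial load, applied with a small margin
# above the slip limit. mu, margin, and the 0.3 kg object are assumptions.

def min_grip_force(mass, accel, mu=0.6, g=9.81):
    """Minimum normal force so friction (two surfaces) holds m*(g + a)."""
    return mass * (g + accel) / (2.0 * mu)

def human_like_grip(mass, accel, margin=1.2):
    """Humans apply a force just above the slip minimum (small safety margin)."""
    return margin * min_grip_force(mass, accel)

# Moving a 0.3 kg object up and down: grip force rises with upward acceleration.
for a in (0.0, 2.0, 5.0):
    print(round(human_like_grip(0.3, a), 2))  # 2.94, then 3.54, then 4.44 (N)
```

This matches the abstract's observation: the grip force tracks the inertial load during vertical motion rather than staying at a fixed, overly conservative level.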

Proceedings ArticleDOI
11 Nov 1996
TL;DR: A new approach is described for detecting faces whose size, position and pose are unknown in an image with a complex background and for estimating their poses, both using color information.
Abstract: Detecting human faces in images and estimating the pose of the faces are very important problems in human-computer interaction studies. This paper describes a new approach for detecting faces whose size, position and pose are unknown in an image with a complex background, and for estimating their poses, both using color information. We use a perceptually uniform chromatic system for representing the color information in order to extract the skin and hair color regions robustly. The system first detects "face-like" regions in input images using a fuzzy pattern matching method. Then, it estimates the pose of the detected faces and moves the camera according to the estimated pose to obtain images containing the faces in frontal pose. Finally, we verify the face candidates by checking the facial features in them.

Proceedings ArticleDOI
11 Nov 1996
TL;DR: In this article, the physiological signal entrainment in face-to-face interaction was analyzed using heart rate and its variability as indices, based on the previously known fact that alteration of cardiac rhythm is an indicator of the emotional state.
Abstract: The entrainment between talkers in face-to-face interaction plays an important role in the smooth exchange of information. In this paper, the physiological signal entrainment in face-to-face interaction was analyzed using heart rate and its variability as indices, based on the previously known fact that alteration of cardiac rhythm has been used as an indicator of the emotional state. The subjects were two pairs of mothers and healthy infants aged 5 and 9 months, representing a primitive form of communication, and one pair of male students. The existence of physiological signal entrainment was demonstrated by examples of synchronized time changes of heart rate variability in both mother-infant interaction and adult conversation. This finding suggests that entrainment is biologically essential to human communication and could be applied to improve human-robot interaction.