
Showing papers presented at "Robot and Human Interactive Communication in 2003"


Proceedings ArticleDOI
19 Dec 2003
TL;DR: It is hypothesized that an appropriate match between a robot's social cues and its task improves people's acceptance of and cooperation with the robot.
Abstract: A robot's appearance and behavior provide cues to its abilities and propensities. We hypothesize that an appropriate match between a robot's social cues and its task improves people's acceptance of and cooperation with the robot. In an experiment, people systematically preferred robots for jobs when the robot's humanlikeness matched the sociability required in those jobs. In two other experiments, people complied more with a robot whose demeanor matched the seriousness of the task.

692 citations


Proceedings ArticleDOI
19 Dec 2003
TL;DR: A recognition method and a control method are described that realize power assist reflecting the operator's intention, by grasping the interaction between the operator's intention and motion information.
Abstract: This paper describes a recognition method and a control method that realize power assist reflecting the operator's intention, by grasping the interaction between the operator's intention and motion information. The basic control method for HAL had been performed using myoelectricity, which reflects the operator's intention. As an application of the basic method, we considered a control method for power assist based on other motion information, by considering the relation between myoelectricity and that information, together with a recognition method for this control method. We adopted phase sequence control, which generates a series of assist motions through transitions among fundamental motions called phases. Experimental results showed effective power assist reflecting the operator's intention when using this control method.

167 citations


Proceedings ArticleDOI
19 Dec 2003
TL;DR: The role of form is discussed in constructing meaningful relationships through The Hug, a conceptual design exploration of form for a robotic product that facilitates intimate communication across distance.
Abstract: As advances in robotics create robust technology capable of being deployed in the home, design serves an important role shaping how robots will be experienced in accessible, appropriate, and compelling manners. The designer's task of shaping technology is fundamentally concerned with the creation of form. Form is the total expression of a product, including physical shape, materials, and behavioral qualities. In creating form, design balances the needs of people, the capabilities of technology, and the context of use to support an activity or action. In this paper we present The Hug, a conceptual design exploration of form for a robotic product that facilitates intimate communication across distance. We discuss the role of form in constructing meaningful relationships through The Hug and other robotic products.

150 citations


Proceedings ArticleDOI
31 Oct 2003
TL;DR: A motion control algorithm referred to as adaptive caster action is proposed to utilize the Walking Helper effectively in environments such as homes, offices, and hospitals.
Abstract: In this paper, we develop a prototype of an intelligent walking support system referred to as Walking Helper and propose a motion control algorithm for it. Walking Helper consists of an omni-directional mobile base, a body force sensor, a support frame, and a cover around the mobile base. By using the omni-directional mobile base and the body force sensor, good maneuverability and high safety are realized. In addition, we propose a motion control algorithm referred to as adaptive caster action to utilize Walking Helper effectively in environments such as homes, offices, and hospitals. The proposed control algorithm is experimentally applied to the developed Walking Helper, and its validity is illustrated by the experimental results.

54 citations


Proceedings ArticleDOI
19 Dec 2003
TL;DR: The steady-hand teleoperation method implements a type of admittance control law on an impedance-type master but requires no force sensor, resulting in a slave device that is precisely constrained to preferred paths.
Abstract: We present a method for implementing "steady-hand control" on teleoperators where the master device is of the impedance type. Typical steady-hand systems are admittance-controlled cooperative robots that can implement very high damping. Such systems are ideal for implementing guidance virtual fixtures, which are constraints in software that assist a user in moving a tool along preferred paths. Our steady-hand teleoperation method implements a type of admittance control law on an impedance-type master but requires no force sensor. Combined with guidance virtual fixtures, the system results in a slave device that is precisely constrained to preferred paths. Experimental results demonstrate the desirable behavior of the system. This research is applicable to impedance-type telemanipulation systems, particularly those used in robot-assisted minimally invasive surgery.
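The steady-hand behavior described above can be sketched as a simple admittance law shaped by a guidance virtual fixture. This is an illustrative reconstruction, not the paper's actual controller (which notably avoids a force sensor); the function name, damping value, and compliance ratio are all assumptions:

```python
import numpy as np

def steady_hand_velocity(force, damping=50.0, path_dir=None, compliance_ratio=0.1):
    """Classic admittance law v = f / b, optionally shaped by a guidance
    virtual fixture that favors motion along a preferred path direction.

    force: applied force vector (N)
    damping: admittance damping b (N*s/m); high damping -> steady-hand feel
    path_dir: unit vector of the preferred path (guidance fixture), or None
    compliance_ratio: fraction of off-path motion allowed (0 = hard fixture)
    """
    force = np.asarray(force, dtype=float)
    v = force / damping                        # pure admittance: velocity follows force
    if path_dir is None:
        return v
    d = np.asarray(path_dir, dtype=float)
    d = d / np.linalg.norm(d)                  # normalize preferred direction
    v_along = np.dot(v, d) * d                 # component along the path
    v_off = v - v_along                        # component pushing off the path
    return v_along + compliance_ratio * v_off  # attenuate off-path motion
```

With `compliance_ratio` set to zero the fixture becomes hard: the tool can only move along the preferred path, which matches the "precisely constrained to preferred paths" behavior the abstract reports.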

52 citations


Proceedings ArticleDOI
19 Dec 2003
TL;DR: This paper proposes a conceptual definition of "robot anxiety", the anxiety that prevents humans from interacting with communication robots in daily life, by taking into account computer anxiety and communication apprehension, and discusses the construction of a psychological scale for measuring it.
Abstract: This paper proposes a conceptual definition of the anxiety that prevents humans from interacting with communication robots in daily life, termed "robot anxiety", by taking into account computer anxiety and communication apprehension. It then discusses the construction of a psychological scale for measuring robot anxiety and reports the current status of our research on it.

43 citations


Proceedings ArticleDOI
19 Dec 2003
TL;DR: A dual-point haptic system that exploits a novel kinematics to achieve high isotropy, improving force rendering, together with an investigation of the real degree of interaction that can be achieved with this type of system.
Abstract: The present paper deals with a novel type of dual-point haptic system. The system exploits a novel kind of kinematics to achieve high isotropy, which improves the force rendering. Other features of the system are its high stiffness and high peak forces, together with a zero-backlash cable-based transmission. The device can be used, by means of a set of sizeable thimbles, with any pair of the user's fingertips. The device has been developed within the EU GRAB project for testing a novel set of applications with blind users. The project investigates to what extent a purely haptic environment can be employed by users with visual impairments, in terms of the effective capability of the device to exchange high-level information with users, the real degree of interaction that can be achieved with this type of system, and the effective opportunity for the user to profit from the system using just this type of interaction. The basic system concepts, the application environment, and the system performance are described below.

38 citations


Proceedings ArticleDOI
19 Dec 2003
TL;DR: Observation of interaction with human babies/children showed that robots with the attention-coupling capability facilitated social behavior in the babies/children, including showing, giving, and verbal interactions such as asking questions.
Abstract: This paper proposes "attention coupling", that is, spatio-temporal coordination of each other's attention, as a prerequisite for human-robot social interaction in which the human interactant attributes mental states to the robot, and possibly vice versa. As a realization of attention coupling, we implemented on our robots the capability of eye contact (mutually looking into each other's eyes) and joint attention (looking at a shared target together). Observation of interaction with human babies/children showed that the robots with the attention-coupling capability facilitated social behavior in the babies/children, including showing, giving, and verbal interactions such as asking questions.

37 citations


Proceedings ArticleDOI
19 Dec 2003
TL;DR: This paper introduces a novel face recognition method using support vector machines with robust features extracted by kernel principal component analysis (KPCA), which is robust to facial variations.
Abstract: Face recognition is challenging because face images can vary considerably in terms of facial expression, lighting conditions, and so on. This paper introduces a novel face recognition method using support vector machines with robust features extracted by kernel principal component analysis (KPCA). The method first derives an augmented Gabor-face vector based on the Gabor wavelet transformation of face images, using local features at different orientations and scales, which is robust to changes in facial expression and pose. KPCA is then used to extract features from the augmented Gabor-face vector, so that the principal components are computed within the space spanned by high-order correlations of the input vector, yielding good performance. Finally, a support vector machine (SVM), which has high generalization capability and performs well with small sample sizes in pattern recognition tasks, is used to classify the features. Comparative experiments on the ORL face database show that this algorithm is more effective than previous methods.
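As a rough sketch of the KPCA step described above (the Gabor and SVM stages are omitted), the standard RBF-kernel formulation looks like this; the kernel choice, `gamma`, and component count are illustrative assumptions, not values taken from the paper:

```python
import numpy as np

def kernel_pca(X, n_components=2, gamma=1e-3):
    """Kernel PCA with an RBF kernel (standard textbook formulation).

    X: (n_samples, n_features) data matrix (e.g. Gabor-face vectors)
    Returns the projection of the n samples onto the top principal
    components computed in the kernel-induced feature space.
    """
    n = X.shape[0]
    # Pairwise squared Euclidean distances -> RBF (Gaussian) kernel matrix
    sq = np.sum(X**2, axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))
    # Center the kernel matrix in feature space
    one = np.full((n, n), 1.0 / n)
    Kc = K - one @ K - K @ one + one @ K @ one
    # Eigendecomposition of the symmetric centered kernel; keep leading components
    vals, vecs = np.linalg.eigh(Kc)
    idx = np.argsort(vals)[::-1][:n_components]
    vals, vecs = vals[idx], vecs[:, idx]
    # Scale eigenvectors by 1/sqrt(lambda) so projections are properly normalized
    alphas = vecs / np.sqrt(np.maximum(vals, 1e-12))
    return Kc @ alphas  # nonlinear principal components of each sample
```

The "high-order correlation" phrasing in the abstract corresponds to the kernel trick here: the eigenproblem is solved on the kernel matrix rather than on an explicit high-dimensional feature expansion.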

35 citations


Proceedings ArticleDOI
19 Dec 2003
TL;DR: A PbD system able to deal with assembly operations in a 3D block world; the main objective of the research is to investigate the benefits of a virtual demonstration environment.
Abstract: Service robots require simple programming techniques allowing users with little or no technical expertise to integrate new tasks into a robotic platform. A promising solution for automatic acquisition of robot behaviours is the programming by demonstration (PbD) paradigm. Its aim is to let robot systems learn new behaviours from a human operator's demonstration. This paper describes a PbD system able to deal with assembly operations in a 3D block world. The main objective of the research is to investigate the benefits of a virtual demonstration environment. By overcoming some difficulties of real-world demonstrations, a virtual environment can improve the effectiveness of the instruction phase. Moreover, the user can also supervise and validate the learned task by means of a simulation module, thereby reducing errors in the generation process. Experiments involving the whole set of system components demonstrate the viability and effectiveness of the approach.

35 citations


Proceedings ArticleDOI
19 Dec 2003
TL;DR: Because gesture-based control is easy to use and can reduce the preparation needed for rapid system reaction, it is a suitable user interface for handheld devices primarily used in mobile environments.
Abstract: This paper describes how to process accelerometer signals to recognize user gestures once small accelerometers are applied to handheld devices, and how to recognize those gestures precisely. For handheld devices to recognize gestures, the overhead of the recognition process should be small and gestures should be recognized effectively in real operational environments. Therefore, signals detected from the accelerometers were processed after classifying them into acceleration and dynamic acceleration, and the accelerometer signal patterns of simple gestures were analyzed. In addition, a device control module was created and its operation was compared to that of a conventional control device to evaluate the usability of gesture recognition. The result was that, because gesture-based control is easy to use and can reduce the preparation needed for rapid system reaction, it is a suitable user interface for handheld devices primarily used in mobile environments.
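The separation of raw accelerometer samples into a slow (gravity) component and a dynamic component is commonly done with an exponential low-pass filter. This is a generic sketch of that first step, not the paper's specific processing; the filter form and `alpha` value are assumptions:

```python
def split_acceleration(samples, alpha=0.9):
    """Separate raw accelerometer samples into a slowly varying gravity
    estimate (low-pass) and the residual dynamic acceleration, a common
    first step before gesture pattern analysis.

    samples: list of (ax, ay, az) tuples in m/s^2
    alpha: low-pass smoothing factor in (0, 1); higher = slower tracking
    """
    gravity, dynamic = [], []
    g = samples[0]  # initialize gravity estimate with the first sample
    for a in samples:
        # exponential low-pass: g tracks the slow (gravity) component
        g = tuple(alpha * gi + (1 - alpha) * ai for gi, ai in zip(g, a))
        gravity.append(g)
        # residual after removing gravity = dynamic (gesture) acceleration
        dynamic.append(tuple(ai - gi for ai, gi in zip(a, g)))
    return gravity, dynamic
```

A device held still yields near-zero dynamic acceleration, while a sudden gesture produces a large residual that downstream pattern matching can detect.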

Proceedings ArticleDOI
19 Dec 2003
TL;DR: Research on mental commit robots that takes a different direction from industrial robots and is not so rigidly dependent on objective measures such as accuracy and speed.
Abstract: This paper describes research on mental commit robots, which takes a different direction from industrial robots and is not so rigidly dependent on objective measures such as accuracy and speed. The main goal of this research is to explore a new area in robotics, with an emphasis on human-robot interaction. In previous research, we categorized robots into four categories in terms of appearance. We then introduced a cat robot and a seal robot, and evaluated them by interviewing many people. The results showed that physical interaction improved subjective evaluation. Moreover, a subject's a priori knowledge has much influence on the subjective interpretation and evaluation of a mental commit robot. In this paper, 133 subjects evaluated the seal robot, Paro, by questionnaire at an exhibition at the National Museum of Science and Technology in Stockholm, Sweden. This paper reports the results of statistical analysis of the evaluation data.

Proceedings ArticleDOI
19 Dec 2003
TL;DR: The small biped entertainment robot SDR-4X II expands its capabilities for adaptability in the home environment with enhanced core technologies, including a motion creating system that allows attractive motion performances to be created for SDR.
Abstract: The small biped entertainment robot SDR-4X II expands its capabilities for adaptability in the home environment with enhanced core technologies. Conspicuous enhancements are safety-oriented designs and functions, such as integrated adaptive fall-over motion control, pinch avoidance motion control, and lift-up motion control for safe interaction with humans. We have also developed and enhanced new robot actuators (ISA) and a real-time integrated adaptive motion control system as comprehensive motion control for SDR, to realize dynamic and smooth, elegant motion performances. One of the significant motion-control-related technologies is the motion creating system, which allows us to create SDR's attractive motion performances. In addition, we have explored and developed entertainment applications using these technologies. A singing dance performance and a fast-paced dance performance are introduced as attractive applications.

Proceedings ArticleDOI
19 Dec 2003
TL;DR: Experimental results show that the system can provide stable simulation of probing and cutting operations, and a force filtering approach is proposed to eliminate vibration of the haptic device.
Abstract: This paper discusses the development of a dental training system with haptic display capability. The system architecture is first proposed for two typical operations in dental surgery: probing and cutting. A triangle mesh model is used for the tooth to reduce computation time. Real-time collision detection is realized between the tooth and a spherical tool. The operation force is determined from the penetration between the tool and the tooth. Material removal from the tooth is realized using a vertex deformation method. A force filtering approach is proposed to eliminate vibration of the haptic device. Experimental results show that the system can provide stable simulation of probing and cutting operations.
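Determining the operation force from tool-tooth penetration is typically done with a penalty-based spring model. The sketch below illustrates that general scheme for a spherical tool against a locally planar surface patch; the single-contact-point simplification and the stiffness value are assumptions, not details from the paper:

```python
import numpy as np

def penalty_force(tool_center, tool_radius, surface_point, surface_normal, k=500.0):
    """Penalty-based contact force: proportional to how far the spherical
    tool penetrates the surface, directed along the surface normal.

    tool_center: sphere center position (m)
    tool_radius: sphere radius (m)
    surface_point, surface_normal: a point on the tooth surface patch and
        its outward normal (e.g. from the nearest triangle of the mesh)
    k: contact stiffness (N/m), illustrative value
    """
    n = np.asarray(surface_normal, dtype=float)
    n = n / np.linalg.norm(n)
    # signed distance from the surface to the tool center along the normal
    d = np.dot(np.asarray(tool_center, float) - np.asarray(surface_point, float), n)
    penetration = tool_radius - d      # > 0 when the sphere intrudes
    if penetration <= 0.0:
        return np.zeros(3)             # no contact, no force
    return k * penetration * n         # spring-like repulsive force
```

In a full system this force would then pass through the kind of filtering stage the abstract proposes, since raw penalty forces can excite vibration in the haptic device.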

Proceedings ArticleDOI
19 Dec 2003
TL;DR: It is shown that the inclusion of posture attributes in the form of spatial relationships between the right hand and other parts of the human body improves the recognition rate in a significant way.
Abstract: Visual recognition of gestures is an important field of study in human-robot interaction research. Although several approaches to gesture recognition exist, on-line learning of visual gestures has not received the same attention. For teaching a new gesture, a recognition model that can be trained with just a few examples is required. In this paper we propose an extension of naive Bayesian classifiers for gesture recognition that we call dynamic naive Bayesian classifiers. The observation variables in these classifiers combine motion and posture information of the user's right hand. We tested the model with a set of gestures for commanding a mobile robot, and compared it with hidden Markov models. When the number of training samples is high, the recognition rate is similar for both types of models; but when the number of training samples is low, dynamic naive classifiers perform better. We also show that including posture attributes in the form of spatial relationships between the right hand and other parts of the human body improves the recognition rate significantly.
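The core idea of combining motion and posture attributes under a naive Bayes assumption can be sketched as below. This is a deliberately simplified illustration: the paper's dynamic classifiers also carry a hidden state with transition probabilities (as in an HMM), which is omitted here, and all tables and names are invented for the example:

```python
import math

def classify_sequence(seq, class_models, priors):
    """Score a gesture as a sequence of discrete attribute vectors under a
    naive Bayes observation model: attributes (e.g. motion direction,
    posture relation) are assumed independent given the class, and
    per-frame log-likelihoods are summed over the sequence.

    seq: list of frames, each a tuple of discrete attribute values
    class_models[c][i][v] = P(attribute i takes value v | class c)
    priors[c] = prior probability of class c
    """
    best, best_score = None, -math.inf
    for c, attr_tables in class_models.items():
        score = math.log(priors[c])
        for frame in seq:
            for i, v in enumerate(frame):
                # naive-Bayes factorization across attributes; small
                # floor avoids log(0) for unseen values
                score += math.log(attr_tables[i].get(v, 1e-6))
        if score > best_score:
            best, best_score = c, score
    return best
```

Because each attribute needs only a small per-value table, a model like this can be estimated from very few training examples, which matches the abstract's motivation for preferring it over HMMs in the low-data regime.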

Proceedings ArticleDOI
19 Dec 2003
TL;DR: This study evaluated skill transfer when haptic feedback is used to improve the learning process based on some of the RR system features, and indicated that the user's performance is enhanced considerably when both visual and haptic information are provided.
Abstract: Up to now, the skill transfer capabilities of haptics have not only been at an initial stage of development, but their evolution has also been under-investigated in terms of user impact and achievable results. The present paper is concerned with the concept of a reactive robot (RR) system. An RR is a bi-directional system capable of understanding the meaning of motions and transferring skills among users and interfaces according to the interpreted motions. In this paper, reactive robot control was used for replicating Japanese characters, and a recognition system based on hidden Markov models was used for stochastically evaluating the user's performance. This system may help us to better understand learning processes. Using such a system, an application for learning Japanese handwriting has been developed and tested. We evaluated the skill transfer capabilities when haptic feedback is used to improve the learning process based on some of the RR system features. Findings from this study indicate that the user's performance is enhanced considerably when both visual and haptic information are provided.

Proceedings ArticleDOI
19 Dec 2003
TL;DR: A model-based control algorithm for the device that does not use biological signals is proposed: the supporting knee joint moment is calculated from the antigravity term of the necessary knee joint moment, which is estimated based on a human model.
Abstract: In this paper, a wearable walking support system for people who have difficulty walking because of weakened lower extremities is proposed. We propose a wearable walking support device, referred to as the wearable walking helper, for supporting the antigravity muscles of the lower extremities, and a model-based control algorithm for the device that does not use biological signals: the supporting knee joint moment is calculated from the antigravity term of the necessary knee joint moment, which is estimated based on a human model. The control algorithm is implemented in the wearable walking helper, and experimental results illustrate the potential of the proposed system.

Proceedings ArticleDOI
19 Dec 2003
TL;DR: This research develops ICS (interactive classifier system), a behavior learning method using interactive evolutionary computation that considers an operator's teaching cost, investigates the teacher's physical and mental load, and proposes a teaching method based on the timing of instruction.
Abstract: We have proposed a fast learning method that enables a mobile robot to acquire autonomous behaviors from interaction between human and robot. In this research we develop a behavior learning method, ICS (interactive classifier system), using interactive evolutionary computation that considers an operator's teaching cost. As a result, a mobile robot is able to quickly learn rules through direct teaching by an operator. ICS is a novel evolutionary robotics approach using a classifier system. In this paper, we investigate the teacher's physical and mental load and propose a teaching method based on the timing of instruction using ICS.

Proceedings ArticleDOI
19 Dec 2003
TL;DR: The analysis by synthesis confirms that varying the pauses and lags of utterance relative to communicative actions produces different communicative effects: a lag of about 0.3 s is desirable for familiar greetings, and a longer lag for polite greetings.
Abstract: The timing of generating communicative action and utterance in face-to-face greeting interaction is analyzed by synthesis for application to robot-human interaction support. The analysis by synthesis is performed using an embodied robot system and confirms that varying the pauses and lags of utterance relative to communicative actions produces different communicative effects: a lag of about 0.3 s is desirable for familiar greetings, and a longer lag for polite greetings. This result demonstrates the importance of timing in robot-human embodied interaction and its applicability to advanced human-robot communication.

Proceedings ArticleDOI
19 Dec 2003
TL;DR: This work proposes an extension of the heuristic evaluation method for improving usability in direct multimodal human-robot interactions without the use of intermediate interfaces like GUIs.
Abstract: Traditionally, usability inspection methods have been applied for improving the performance of man-machine interactions. In particular, one of the most popular techniques for usability inspection is the heuristic evaluation method. This method has been applied in the past in robotic interfaces for analyzing graphical user interfaces (GUIs) that manipulate robots. In this work we propose an extension of this method for improving usability in direct multimodal human-robot interactions without the use of intermediate interfaces like GUIs.

Proceedings ArticleDOI
19 Dec 2003
TL;DR: This paper discusses the mechanical system, which is important and original for a small biped entertainment robot used in the home environment, and proposes ideas against falling over that make the robot as safe as possible.
Abstract: SDR-4X II is the latest prototype model of a small humanoid robot. We reported the outline of the robot SDR-4X last year; SDR-4X II is the improved model. In this paper we discuss the mechanical system, which is important and original for a small biped entertainment robot used in the home environment. One technology is the actuator technology we originally developed, named the Intelligent Servo Actuator (ISA). We explain its specification and the important technical points. Another technology is the design of the actuator alignment in the body, which enables dynamic motion performance. A third technology is the sensor system, which supports the high performance of the robot, especially the detection of outside objects, stable walking motion, and safe interaction with humans. Since the robot is used in a normal home environment, we must strongly consider the robot falling over. We propose ideas against falling over that make the robot as safe as possible.

Proceedings ArticleDOI
19 Dec 2003
TL;DR: The ultrasonic sensor disk is proposed as one of the sensor disks embedded in the sensor suit, which is made of soft and elastic fabrics embedded with arrays of MEMS sensors such as strain gauges, ultrasonic sensors, and optical fiber sensors to measure different kinds of human muscle conditions.
Abstract: Many researchers are studying and developing various kinds of man-machine systems. In particular, the wearable robot, such as an exoskeleton power suit, is one of the most remarkable fields. In this field, a more accurate and reliable sensing system for detecting human motion intention is strongly required. In most conventional man-machine systems, torque sensors, tactile pressure sensors, and EMG sensors are utilized in the man-machine interface to detect human motion intention. These sensors, however, have some limitations. For example, it is hard to install and secure torque sensors on the joints of a human body. It is not easy to correlate the data from a tactile pressure sensor with human motion intention. Although the EMG sensor can detect human motion intention, the sensor system is complex and expensive, and suffers from electrical noise. We have been developing an innovative sensor suit which, just like a wet suit, can be conveniently put on by an operator to detect his or her motion intention by non-invasively monitoring muscle conditions such as shape, stiffness, and density. This sensor suit is made of soft and elastic fabrics embedded with arrays of MEMS sensors, such as strain gauges, ultrasonic sensors, and optical fiber sensors, to measure different kinds of human muscle conditions. In a previous paper, a muscle stiffness sensor for detecting muscular force was developed based on the fact that a muscle gains stiffness as it is activated; its superior performance was demonstrated through experiments in which the sensor was applied to an assisting device for the disabled. In this paper, the ultrasonic sensor disk is proposed as one of the sensor disks embedded in the sensor suit. This sensor is based on an original principle and non-invasively detects the activity of specific muscles. It is known that the square of the ultrasonic transmission speed is proportional to the elasticity of the object and inversely proportional to its density. It is estimated that the elasticity and density of a muscle increase or decrease as the muscle is energized; hence, it is expected that muscular activity can be measured by the ultrasonic sensor. In this study, the feasibility of an ultrasonic sensor for detecting muscular force is shown through experiments.
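The relation stated in the abstract (speed squared proportional to elasticity, inversely proportional to density) is the standard longitudinal wave-speed formula, c = sqrt(E / rho). A minimal encoding of it, with purely illustrative numbers:

```python
import math

def ultrasonic_speed(elasticity, density):
    """Longitudinal wave speed c = sqrt(E / rho), the relation the
    abstract states: c^2 is proportional to the elastic modulus E (Pa)
    and inversely proportional to the density rho (kg/m^3).
    """
    return math.sqrt(elasticity / density)
```

As the muscle activates, a rise in effective elasticity raises the measured transmission speed, which is the quantity the proposed sensor disk uses as a proxy for muscular activity.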

Proceedings ArticleDOI
19 Dec 2003
TL;DR: This paper presents strategies to acquire competencies based on human-robot interactions, enabling robot learning of new objects and their functionality, or the acquisition of new competencies as an actor.
Abstract: For visual manipulation of objects by a robot, the robot has to learn about object properties as well as actions that it may apply to objects. This paper presents strategies to acquire such competencies based on human-robot interactions. Perception is driven by manipulation by an actor, either human or robotic. Interaction with human teachers facilitates robot learning of new objects and their functionality, or the acquisition of new competencies as an actor. Self-exploration of the world extends the robot's knowledge of object properties and consolidates the execution of learned tasks.

Proceedings ArticleDOI
19 Dec 2003
TL;DR: Early results suggest that training in the developed system is a promising way to teach bone drilling to medical students.
Abstract: Bone drilling procedures require a high level of surgical skill. The required core skills are recognizing the drilling end-point and the ability to apply a constant, sufficient, but non-excessive feeding velocity and thrust force. Although several simulators and training systems have been developed for different surgeries, a bone drilling medical training system does not yet exist. In this paper, a bone drilling medical training system is proposed and a novel control algorithm for the problem is presented. A graphical user interface is developed to complete the medical training system structure. Experimental results for controller performance are satisfactory. Additional experiments were performed to check whether the developed system improves the skill of trainees. Early results suggest that training in the developed system is a promising way to teach bone drilling to medical students.

Proceedings ArticleDOI
19 Dec 2003
TL;DR: The results indicate that the willingness of bystanders to help a robot is not only a consequence of the robot-initiated interaction, but equally depends on the situation and state of occupation people are in when requested to interact with and assist the robot.
Abstract: This paper reports an experimental study in which people who had never encountered our service robot before were requested to assist it with a task. We call these visiting users "bystanders" to differentiate them from people who belong to the social setting and group in which the robot operates and who are thus familiar with it. In our study, 32 subjects were exposed to our robot and requested by it to provide a cup of coffee as part of a delivery mission. We anticipated that people in general would help the robot, depending on whether they were busy or had received a demonstration of the robot as an introduction. Our results indicate that the willingness of bystanders to help a robot is not only a consequence of the robot-initiated interaction, but equally depends on the situation and state of occupation people are in when requested to interact with and assist the robot.

Proceedings ArticleDOI
19 Dec 2003
TL;DR: This work investigates skill transfer from assisted to unassisted modes for a Fitts' type targeting task with an underactuated dynamic system and results indicate that transfer of skill is slight but significant for the assisted training modes.
Abstract: Machine-mediated teaching of dynamic task completion is typically implemented with passive intervention via virtual fixtures or active assist by means of record and replay strategies. During interaction with a real dynamic system however, the user relies on both visual and haptic feedback in order to elicit desired motions. This work investigates skill transfer from assisted to unassisted modes for a Fitts' type targeting task with an underactuated dynamic system. Performance, in terms of between target tap times, is measured during an unassisted baseline session and during various types of assisted training sessions. It is hypothesized that passive and active assist modes that are implemented during training of a dynamic task could improve skill transfer to a real environment or unassisted simulation of the task. Results indicate that transfer of skill is slight but significant for the assisted training modes.
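The "Fitts' type" targeting task mentioned above is conventionally modeled by Fitts' law, which predicts movement (tap) time from target distance and width. A minimal sketch of the standard formulation; the coefficients here are illustrative placeholders that would normally be fit per subject from the measured between-target tap times:

```python
import math

def fitts_movement_time(distance, width, a=0.1, b=0.15):
    """Fitts' law: predicted movement time MT = a + b * log2(2D / W).

    distance: center-to-center distance D between targets (m)
    width: target width W (m)
    a, b: empirical coefficients (s and s/bit), illustrative values
    """
    index_of_difficulty = math.log2(2 * distance / width)  # in bits
    return a + b * index_of_difficulty
```

Harder targets (farther away or narrower) have a higher index of difficulty and thus a longer predicted tap time, which is why between-target tap time is a natural performance measure for this task.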

Proceedings ArticleDOI
19 Dec 2003
TL;DR: A method of multi-modal language processing that reflects experiences shared by the user and the robot, which can interpret even fragmentary and ambiguous utterances, and can act and generate utterances appropriate for a given situation.
Abstract: This paper describes a method of multi-modal language processing that reflects experiences shared by the user and the robot. Through incremental online optimization in the process of interaction, the robot's system of beliefs, represented by a stochastic model, is formed in coupling with that of the user. The robot's belief system consists of belief modules, a confidence that each belief is shared by the user (local confidence), and a confidence that all the belief modules and the local confidences are identical to those of the user (global confidence). Based on this system of beliefs, the robot can interpret even fragmentary and ambiguous utterances, and can act and generate utterances appropriate for a given situation.

Proceedings ArticleDOI
19 Dec 2003
TL;DR: Experimental results indicate the validity of speed control assistance based on the estimation of the user's attention, using the duration of gaze, in a robotic wheelchair system that acts as a guide robot.
Abstract: In this paper, we describe a robotic wheelchair system that acts as a guide robot. The system detects the head pose and gaze direction of the user, and recognizes its own position and the surrounding environment using a range sensor and a map. Since the system can detect where the user is looking from these measurements, it can estimate the attention of the user on the wheelchair from the duration of gaze. Experimental results indicate the validity of speed control assistance based on the estimation of the user's attention.

Proceedings ArticleDOI
19 Dec 2003
TL;DR: A method to estimate a sound source position by fusing auditory and visual information with a Bayesian network in human-robot interaction is proposed, and experimental results are presented to show the effectiveness of the proposed method.
Abstract: This paper proposes a method for estimating a sound source position by fusing auditory and visual information with a Bayesian network in human-robot interaction. We first integrate multi-channel audio signals and a depth image of the environment to generate a likelihood map for sound source localization. However, this integration, denoted "MICs", does not always localize a sound source correctly. To correct localization failures, we integrate the likelihood values generated from "MICs" with the skin-color distribution in an image, according to the result of classifying the audio signal into speech/non-speech categories. The audio classifier is based on a support vector machine (SVM), and the skin-color distribution is modeled with a GMM. With the evidence given by the MICs, SVM, and GMM, we infer whether pixels in images correspond to the sound source according to the trained Bayesian network. Finally, experimental results are presented to show the effectiveness of the proposed method.
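The gist of the fusion step can be illustrated with a simple product rule over per-pixel likelihood maps. This is not the paper's trained Bayesian network, only a hand-written stand-in for intuition: the visual (skin-color) cue is weighted by how confidently the audio was classified as speech, since skin color is informative about a speaker but not about a non-speech sound:

```python
import numpy as np

def fuse_likelihoods(audio_map, skin_map, speech_prob):
    """Fuse an audio sound-source likelihood map with a skin-color
    likelihood map, gating the visual cue by the speech probability.

    audio_map: per-pixel likelihood of the sound source (from 'MICs')
    skin_map: per-pixel skin-color likelihood (from the GMM)
    speech_prob: probability the sound is speech (from the SVM), in [0, 1]
    """
    audio = np.asarray(audio_map, dtype=float)
    skin = np.asarray(skin_map, dtype=float)
    # for non-speech sounds, fall back to an uninformative visual term
    visual = speech_prob * skin + (1 - speech_prob) * np.ones_like(skin)
    fused = audio * visual                 # independent-cue product rule
    return fused / fused.sum()             # normalize to a posterior map
```

When `speech_prob` is high, a pixel must score well on both the audio and skin-color maps to dominate the posterior, which is the failure-correction behavior the abstract describes.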

Proceedings ArticleDOI
19 Dec 2003
TL;DR: A model is obtained which estimates a desired length of movement according to expressions of degree in the same manner as a person does, and can lead a robot to a desired goal position guided by several instructions.
Abstract: Ambiguous expressions of degree are frequently used when instructing another person in a task involving movement (e.g., "move it a little", "lift it more", and so on). To make a friendlier robot, these ambiguous expressions should be quantified adequately by a robot control system that offers effective support. In this paper, we aim at constructing an effective controller that copes with such ambiguous instructions and makes a robot provide useful support for humans. We discuss how to generate appropriate robot arm movement, in terms of our sense of distance, based on particular expressions of degree. First, we analyzed human arm movement guided by voice instructions including several kinds of expressions of degree. From this analysis, we obtained a model which estimates a desired length of movement according to expressions of degree in the same manner as a person does. Finally, we executed experiments to show that our model can lead a robot to a desired goal position guided by several instructions.