scispace - formally typeset

Showing papers on "Social robot published in 1995"


Proceedings Article
12 May 1995
TL;DR: The generic framework of the social potential fields method, a new approach for distributed autonomous control of VLSR systems, is described, and it is shown with computer simulations that the method can yield interesting and useful behaviors among robots.
Abstract: A Very Large Scale Robotic (VLSR) system may consist of from hundreds to perhaps tens of thousands or more autonomous robots. The costs of robots are going down, and the robots are getting more compact, more capable, and more flexible. Hence, in the near future, we expect to see many industrial and military applications of VLSR systems in tasks such as assembling, transporting, hazardous inspection, patrolling, guarding and attacking. In this paper, we propose a new approach for distributed autonomous control of VLSR systems. We define simple artificial force laws between pairs of robots or robot groups. The force laws are inverse-power force laws, incorporating both attraction and repulsion. The force laws can be distinct and to some degree they reflect the ‘social relations’ among robots. Therefore we call our method social potential fields. An individual robot's motion is controlled by the resultant artificial force imposed by other robots and other components of the system. The approach is distributed in that the force calculations and motion control can be done in an asynchronous and distributed manner. We also extend the social potential fields model to use spring laws as force laws. This paper presents a first, preliminary study on applying potential fields to distributed autonomous multi-robot control. We describe the generic framework of our social potential fields method. We show with computer simulations that the method can yield interesting and useful behaviors among robots, and we give examples of possible industrial and military applications. We also identify theoretical problems for future studies.
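The inverse-power force laws described above can be sketched in a few lines. The constants and exponents below (c_att, c_rep, sig_att, sig_rep) are illustrative assumptions, not values from the paper:

```python
import math

def social_force(p_i, p_j, c_att=1.0, c_rep=1.0, sig_att=1.0, sig_rep=2.0):
    """Artificial force exerted on robot i by robot j: an attractive
    inverse-power term minus a repulsive one (illustrative constants)."""
    dx, dy = p_j[0] - p_i[0], p_j[1] - p_i[1]
    r = math.hypot(dx, dy)
    if r == 0.0:
        return (0.0, 0.0)
    # positive magnitude = net attraction toward robot j
    mag = c_att / r**sig_att - c_rep / r**sig_rep
    return (mag * dx / r, mag * dy / r)

def resultant_force(p_i, others):
    """A robot moves under the resultant of the pairwise forces."""
    forces = [social_force(p_i, p) for p in others]
    return (sum(f[0] for f in forces), sum(f[1] for f in forces))
```

With sig_rep > sig_att, repulsion dominates at close range and attraction at long range, so a pair of robots settles at an equilibrium spacing (r = 1 for the defaults above).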

528 citations


Proceedings ArticleDOI
05 Aug 1995
TL;DR: This paper presents and experimentally demonstrates an approach that utilizes cooperation at three levels: sensing, action, and control, and takes advantage of a simple communication protocol to compensate for the robots' noisy and uncertain sensing.
Abstract: This paper deals with communication in task-sharing between two autonomous six-legged robots equipped with object and goal sensing, and a repertoire of contact and light-following behaviors. Pushing an elongated box towards a goal region is difficult for a single robot; performance improves significantly when the task is performed cooperatively, but this requires careful coordination between the robots. We present and experimentally demonstrate an approach that utilizes cooperation at three levels: sensing, action, and control, and takes advantage of a simple communication protocol to compensate for the robots' noisy and uncertain sensing.

372 citations


Journal ArticleDOI
TL;DR: By evolving neural controllers for a Khepera robot in computer simulations and then transferring the agents obtained to the real environment, it is shown that an accurate model of a particular robot-environment dynamics can be built by sampling the real world through the sensors and the actuators of the robot.
Abstract: The problem of the validity of simulation is particularly relevant for methodologies that use machine learning techniques to develop control systems for autonomous robots, as, for instance, the artificial life approach known as evolutionary robotics. In fact, although it has been demonstrated that training or evolving robots in real environments is possible, the number of trials needed to test the system discourages the use of physical robots during the training period. By evolving neural controllers for a Khepera robot in computer simulations and then transferring the agents obtained to the real environment, we show that (a) an accurate model of a particular robot-environment dynamics can be built by sampling the real world through the sensors and the actuators of the robot; (b) the performance gap between the obtained behaviors in simulated and real environments may be significantly reduced by introducing a "conservative" form of noise; (c) if a decrease in performance is observed when the system is transferred to a real environment, successful and robust results can be obtained by continuing the evolutionary process in the real environment for a few generations.
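The "conservative" noise idea, perturbing sampled sensor readings by noise deliberately larger than that measured on the hardware so that evolved controllers cannot exploit simulator regularities, can be sketched as follows (the lookup table and noise magnitude are hypothetical, not the paper's):

```python
import random

def simulated_reading(sampled_table, sensor, angle, rng, conservative=0.2):
    """Return a sensor value from a lookup table sampled on the real
    robot, perturbed by uniform noise whose amplitude is intentionally
    larger than the measured hardware noise (illustrative magnitude)."""
    base = sampled_table[(sensor, angle)]  # reading recorded from the real robot
    return base * (1.0 + rng.uniform(-conservative, conservative))
```

A controller evaluated only on readings that vary this much cannot rely on exact simulated values, which narrows the gap when it is transferred to the physical robot.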

365 citations


Proceedings ArticleDOI
21 May 1995
TL;DR: A variable impedance control method for a robot cooperating with a human is proposed, and it is shown that the impedance parameters obtained in an experiment performed by two humans give the robot the best characteristics for cooperation with the human.
Abstract: Robots are expected to be human-friendly and to execute tasks in cooperation with humans. Control systems for such robots should be designed to adapt to human characteristics. In this paper, a variable impedance control method for a robot cooperating with a human is proposed. First, the human characteristics in a cooperative task between two humans are analyzed. It is confirmed that these characteristics can be expressed by a variable impedance model. Then, we have a robot and a human execute a cooperative task. It is shown that the impedance parameters obtained in the experiment performed by two humans give the robot the best characteristics for cooperation with the human.
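A one-degree-of-freedom version of the underlying impedance model, m·ẍ + d·ẋ + k·x = f, can be simulated with a simple Euler step. The parameter values below are illustrative only; varying d and k over the course of the task is what makes the impedance "variable":

```python
def impedance_step(x, v, f_human, m=1.0, d=5.0, k=0.0, dt=0.01):
    """One Euler step of the 1-DOF impedance model m*a + d*v + k*x = f,
    where f_human is the force applied by the human partner
    (illustrative parameters, not those identified in the paper)."""
    a = (f_human - d * v - k * x) / m
    v = v + a * dt
    x = x + v * dt
    return x, v
```

Lowering d makes the robot more compliant, so it yields more readily to the human's force; the paper's point is that d (and k) should track the impedance a human partner would exhibit at that stage of the task.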

319 citations


Journal ArticleDOI
TL;DR: As a starting point for studying experimentally the development of robots' ‘social relationships’, an investigation of the collection and use of body images by means of imitation is proposed; imitation, it is suggested, might be a general principle in the evolution of intelligence.

254 citations


Proceedings Article
12 May 1995
TL;DR: This paper addresses the motion planning problem for a robot in the presence of movable objects, giving an overview of a general approach that consists of building a manipulation graph whose connected components characterize the existence of solutions.
Abstract: This paper addresses the motion planning problem for a robot in the presence of movable objects. Motion planning in this context appears as a constrained instance of the coordinated motion planning problem for multiple movable bodies. Indeed, a solution path in the configuration space of the robot and all movable objects is a sequence of transit paths, where the robot moves alone, and transfer paths, where a movable object follows the robot. A major problem is to find the set of configurations where the robot has to grasp or release objects. The paper gives an overview of a general approach that consists of building a manipulation graph whose connected components characterize the existence of solutions. Two planners developed at LAAS-CNRS illustrate how the general formulation can be instantiated in specific cases.

175 citations


Patent
15 Mar 1995
TL;DR: An autonomous navigation system for a mobile robot or a manipulator guides the robot through the workspace to a predetermined target point, despite incomplete information, without colliding with known or unknown obstacles.
Abstract: An autonomous navigation system for a mobile robot or a manipulator is intended to guide the robot through the workspace to a predetermined target point, despite incomplete information, without colliding with known or unknown obstacles. All operations are performed on the local navigation level in the robot coordinate system. In the course of this, occupied and unoccupied areas of the workspace are appropriately marked and detected obstacles are covered by safety zones. An intermediate target point is defined in an unoccupied area of the workspace and a virtual harmonic potential field is calculated, whose gradient is followed by the robot. Mobile robots with such an autonomous navigation system can be used as automated transport, cleaning and service systems.
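A virtual harmonic potential can be computed by relaxing Laplace's equation over an occupancy grid, with obstacles held at a high potential and the target at a low one; harmonic potentials have no spurious local minima, so gradient descent cannot get trapped. This is a generic textbook sketch under those assumptions, not the patented implementation:

```python
def harmonic_potential(grid, goal, iters=500):
    """Jacobi relaxation of Laplace's equation on an occupancy grid.
    grid[r][c] is True for occupied cells (held at potential 1.0);
    the goal cell is held at 0.0; border cells keep their initial value
    and act as the boundary. Free cells converge to a harmonic function."""
    R, C = len(grid), len(grid[0])
    u = [[1.0 if grid[r][c] else 0.5 for c in range(C)] for r in range(R)]
    u[goal[0]][goal[1]] = 0.0
    for _ in range(iters):
        nu = [row[:] for row in u]
        for r in range(1, R - 1):
            for c in range(1, C - 1):
                if not grid[r][c] and (r, c) != goal:
                    nu[r][c] = 0.25 * (u[r-1][c] + u[r+1][c] + u[r][c-1] + u[r][c+1])
        u = nu
    return u

def descend(u, start):
    """Follow the steepest-descent neighbour until no neighbour is lower
    (for a converged harmonic field this happens only at the goal)."""
    path, cur = [start], start
    for _ in range(100):
        r, c = cur
        nxt = min(((r-1, c), (r+1, c), (r, c-1), (r, c+1)),
                  key=lambda p: u[p[0]][p[1]])
        if u[nxt[0]][nxt[1]] >= u[r][c]:
            break
        path.append(nxt)
        cur = nxt
    return path
```

Occupied cells sit at the maximum potential, so the descent never steps into an obstacle's safety zone.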

124 citations


Journal ArticleDOI
01 Jan 1995-Robotica
TL;DR: It is shown that the introduction of a quasi-natural potential in Lagrange's formulation of robot dynamics gives rise to the design of hyperstable PID servo-loops, which establish global asymptotic stability of set-point control.
Abstract: After the enthusiasm for creating “intelligent robots” in the early 1980's, progress of robotics research in the past decade has not fulfilled our expectations but revealed various difficulties in understanding motor control by man and implementing intelligent functions in robotic machines. To regain the initiative in the development of intelligent machines, this paper first presents a critical review of the state of the art of robot control and points out the necessity for improving robot servo-loops in order to facilitate skilled and dexterous motions in robotic manipulators and mechanical hands. It is then shown that the introduction of a quasi-natural potential in Lagrange's formulation of robot dynamics gives rise to the design of hyperstable PID servo-loops, which establish global asymptotic stability of set-point control. The hyperstability theoretical framework is then applied to the design of control commands in various control problems, such as hybrid (position/force) control, impedance control, model-based adaptive control, and learning control. In all cases, the passivity concept of residual robot dynamics plays a vital role in conjunction with the concept of feedback connections of two hyperstable nonlinear or linear blocks.

115 citations


01 Jan 1995
TL;DR: The concept of general perception together with the fuzzy controller were tested on a real robot performing wall following and obstacle avoidance missions and some of the ensuing experimental results are presented.
Abstract: This paper presents a new approach to the wall following problem of a mobile robot. Local path planning is based on a so-called concept of general perception, which means that the robot is guided by a representation of its perception only. No map of the environment is used and walls and obstacles are not modelled either. A fuzzy controller then uses the information provided by the concept of general perception to guide the robot along walls of arbitrary shape and around obstacles which are treated as part of a wall, unless the distance between obstacle and wall allows a safe passage. This paper first introduces the concept of general perception and then explains the fuzzy controller in detail. All membership functions and the complete rule base are provided. The concept of general perception together with the fuzzy controller were tested on a real robot performing wall following and obstacle avoidance missions and some of the ensuing experimental results are presented at the end of the paper.
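A minimal flavour of such a fuzzy controller, with triangular membership functions and three rules over a single wall-distance input. The membership ranges and rule consequents below are invented for illustration; the paper itself supplies the full rule base and membership functions:

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a to b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_steer(dist):
    """Tiny Mamdani-style rule base (hypothetical, not the paper's):
    IF near THEN steer away; IF ok THEN go straight; IF far THEN steer
    toward the wall. Defuzzified by a weighted average of consequents."""
    near = tri(dist, -0.5, 0.0, 0.5)
    ok   = tri(dist, 0.25, 0.5, 0.75)
    far  = tri(dist, 0.5, 1.0, 1.5)
    w = near + ok + far
    if w == 0.0:
        return 0.0
    # consequents: away = -0.5 rad, straight = 0.0, toward = +0.5 rad
    return (near * -0.5 + ok * 0.0 + far * 0.5) / w
```

Because neighbouring memberships overlap, the steering command varies smoothly with distance instead of switching abruptly at thresholds, which is the practical appeal of the fuzzy formulation.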

101 citations


Proceedings ArticleDOI
21 May 1995
TL;DR: The problem of sensor-based robot motion planning in unknown environments is addressed and the proposed solution approach prescribes the repeated sequence of two fundamental processes: perception and navigation.
Abstract: The problem of sensor-based robot motion planning in unknown environments is addressed. The proposed solution approach prescribes the repeated sequence of two fundamental processes: perception and navigation. In the former, the robot collects data from its sensors, builds local maps and integrates them with the global maps so far reconstructed, using fuzzy logic operators. During the navigation process, a planner based on the A* algorithm proposes a path from the current position to the goal. The robot moves along this path until one of two termination conditions is verified, namely (i) an unexpected obstructing obstacle is detected, or (ii) the robot is leaving the area in which reliable information has been gathered. Experimental results are presented for a Nomad 200 mobile robot.
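The navigation step can be illustrated with a generic A* planner over a 4-connected occupancy grid using a Manhattan-distance heuristic. This is a textbook sketch, not the authors' implementation, and the fuzzy map-fusion step is omitted:

```python
import heapq

def astar(grid, start, goal):
    """Textbook A* over a 4-connected occupancy grid (True = occupied),
    with a Manhattan-distance heuristic. Returns a list of cells or None."""
    def h(p):
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    openq = [(h(start), 0, start)]
    parent = {start: None}
    cost = {start: 0}
    while openq:
        f, g, cur = heapq.heappop(openq)
        if cur == goal:
            path = []
            while cur is not None:
                path.append(cur)
                cur = parent[cur]
            return path[::-1]
        if g > cost[cur]:
            continue  # stale queue entry
        r, c = cur
        for nb in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nb[0] < len(grid) and 0 <= nb[1] < len(grid[0])
                    and not grid[nb[0]][nb[1]]
                    and g + 1 < cost.get(nb, float("inf"))):
                cost[nb] = g + 1
                parent[nb] = cur
                heapq.heappush(openq, (g + 1 + h(nb), g + 1, nb))
    return None  # goal unreachable
```

In the paper's loop the plan is recomputed whenever a termination condition fires, so the planner above would be invoked once per perception-navigation cycle on the freshly fused map.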

94 citations


Journal ArticleDOI
TL;DR: This paper describes an approach to learning an indoor robot navigation task through trial-and-error using the explanation-based neural network learning algorithm EBNN, which allows the robot to learn control using dynamic programming.

Journal ArticleDOI
Raja Chatila1
TL;DR: This paper discusses issues related to the design of the control architectures for an autonomous mobile robot capable of performing tasks efficiently and intelligently, i.e. in a manner adapted to its environment, to its own state and to the execution status of its task.

Proceedings ArticleDOI
10 Jul 1995
TL;DR: In this paper, an energy saving motion control strategy suitable for autonomous mobile robots (AMRs) working in environments cluttered with unpredictable obstacles like civil buildings has been proposed and tested on the PARIDE mobile robot developed at the Robotics Laboratory of the Department of Computer Science, University of Pavia (Italy).
Abstract: The paper proposes an energy saving motion control strategy suitable for autonomous mobile robots (AMRs) working in environments cluttered with unpredictable obstacles like civil buildings. The strategy has been tested on the PARIDE mobile robot developed at the Robotics Laboratory of the Department of Computer Science, University of Pavia (Italy).

Book ChapterDOI
04 Dec 1995
TL;DR: The control system for a mobile robot is found to decompose naturally into a set of layered control loops, where the layers are defined by the level of abstraction of the data and the cycle time of the feedback control.
Abstract: This paper concerns the application of techniques from estimation theory to the problem of navigation and perception for a mobile robot. After a brief introduction, a hierarchical architecture is presented for the design of a mobile robot navigation system. The control system for a mobile robot is found to decompose naturally into a set of layered control loops, where the layers are defined by the level of abstraction of the data and the cycle time of the feedback control. The levels that occur naturally are identified as those of signal, device, behaviour, and task.

Proceedings ArticleDOI
21 May 1995
TL;DR: This paper introduces the proximity space method as a means for performing real-time, behavior-based control of visual gaze and shows how this method is integrated with robot motion using an intelligent control architecture that can automatically reconfigure the robot's behaviors in response to environmental changes.
Abstract: To interact effectively with humans, mobile robots will need certain skills. One particularly important skill is the ability to pursue moving agents. To do this, the robot needs a robust visual tracking algorithm and an effective obstacle avoidance algorithm, plus a means of integrating these two behaviors in a seamless manner. In this paper, we introduce the proximity space method as a means for performing real-time, behavior-based control of visual gaze. We then show how this method is integrated with robot motion using an intelligent control architecture that can automatically reconfigure the robot's behaviors in response to environmental changes. The resulting implementation pursues people and other robots around our laboratory for extended periods of time.

Proceedings ArticleDOI
05 Aug 1995
TL;DR: Research on a two-armed bipedal robot, an apelike robot that can perform biped walking, rolling over and standing up; its control system is designed with the remote-brained approach, in which the robot does not carry its brain within its body but communicates with it over radio links.
Abstract: Focusing attention on flexibility and intelligent reactivity in the real world, it is more important to build, not a robot that won't fall down, but a robot that can get up if it does fall down. This paper presents research on a two-armed bipedal robot, an apelike robot, which can perform biped walking, rolling over and standing up. The robot consists of a head, two arms, and two legs. The control system of the biped robot is designed with the remote-brained approach, in which the robot does not carry its brain within its body but communicates with it over radio links. This remote-brained approach enables a robot to have both a heavy brain with powerful computation and a lightweight body with multiple joints. The robot can keep its balance while standing using tracking vision, detect whether it has fallen down using a set of vertical sensors, and perform a getting-up motion by coordinating its two arms and two legs. The developed system and experimental results are described with illustrated real examples.

Book ChapterDOI
11 Oct 1995
TL;DR: It is shown how control systems that perform a non-trivial sequence of behaviors can be obtained with this methodology by carefully designing the conditions in which the evolutionary process operates.
Abstract: Recently, a new approach that involves a form of simulated evolution has been proposed for the building of autonomous robots. However, it is still not clear if this approach may be adequate to face real life problems. In this paper we will show how control systems that perform a non-trivial sequence of behaviors can be obtained with this methodology by carefully designing the conditions in which the evolutionary process operates. In the experiment described in the paper, a mobile robot is trained to locate, recognize, and grasp a target object. The controller of the robot has been evolved in simulation and then downloaded and tested on the real robot.

Proceedings ArticleDOI
05 Aug 1995
TL;DR: This paper proposes how to implement, on an autonomous mobile robot equipped with a manipulator, the door-pushing sub-task of a behavior for opening a door and passing through the doorway, and shows experimental results.
Abstract: The aim of this research is to realize the behavior of opening a door and passing through the doorway using an autonomous mobile robot equipped with a manipulator. This behavior is realized by integrating several sub-tasks, such as finding the door knob and pushing the door open by cooperation between the manipulator and the locomotion system. First, we focus on one important sub-task: pushing the door. To realize this sub-task, the mobile robot needs to cooperate with the equipped manipulator. In this paper, we propose how to implement this sub-task on the robot and show experimental results.

Proceedings ArticleDOI
05 Aug 1995
TL;DR: This paper presents concrete implementations of different strategies for convoy-like behaviour in a multi-autonomous robot system based around two RWI B12 mobile robots and uses only passive visual sensing for inter-robot communication.
Abstract: This paper deals with coordinating behaviour in a multi-autonomous robot system. When two or more autonomous robots must interact in order to accomplish some common goal, communication between the robots is essential. Different inter-robot communications strategies give rise to different overall system performance and reliability. After a brief consideration of some theoretical approaches to multiple robot collections, we present concrete implementations of different strategies for convoy-like behaviour. The convoy system is based around two RWI B12 mobile robots and uses only passive visual sensing for inter-robot communication. The issues related to different communication strategies are considered.

Proceedings ArticleDOI
21 May 1995
TL;DR: The proposed wall-following algorithm enables a robot to follow walls of various shapes, such as square and circular walls.
Abstract: Presents a robust method for an autonomous mobile robot with a sonar-ring to follow walls. The sonar-ring consists of multiple ultrasonic range sensors. The proposed wall-following algorithm enables a robot to follow walls of various shapes, such as square and circular walls. The autonomous mobile robot "Yamabico" is used for experiments after being equipped with a 12-directional sonar-ring. The on-board controller of the robot decides its motion based on sonar-ring range data every 3 centimeters of forward travel. The authors carried out many experiments with this autonomous mobile robot, and investigated the validity and the limits of this method.

Journal ArticleDOI
01 Sep 1995-Robotica
TL;DR: This paper presents a navigation algorithm containing different action modules, some of which use Fuzzy Logic, and shows that the method is well adapted to this type of problem.
Abstract: This paper treats, in a general way, the problem of mobile robot navigation in a totally unknown environment. The different aspects of this problem are dealt with one by one. We begin by introducing a simple method for perceiving and analyzing the robot's local environment based on a limited amount of distance information. Using this analysis as our base, we present a navigation algorithm containing different action modules; some of these actions use Fuzzy Logic. The results presented, whether experimental or from simulation, show that our method is well adapted to this type of problem.

Proceedings ArticleDOI
21 Nov 1995
TL;DR: This work presents a promising approach to visual (and more general sensory) robot control, that does not require modeling of robot transfer functions or the use of absolute world coordinate systems, and thus is suitable for use in unstructured environments.
Abstract: Robot manipulators, some thirty years after their commercial introduction, have found widespread application in structured industrial environments, performing, for instance, repetitive tasks in an assembly line. Successful application in unstructured environments however has proven much harder. Yet there are many such tasks where robots would be useful. We present a promising approach to visual (and more general sensory) robot control, that does not require modeling of robot transfer functions or the use of absolute world coordinate systems, and thus is suitable for use in unstructured environments. Our approach codes actions and tasks in terms of desired general perceptions rather than motor sequences. We argue that our vision space approach is particularly suited for easy teaching/programming of a robot. For instance a task can be taught by supplying an image sequence illustrating it. The resulting robot behavior is robust to changes in the environment, dynamically adjusting the motor control rules in response to environmental variation.

Proceedings ArticleDOI
05 Jul 1995
TL;DR: A surprisingly efficient algorithm for piecemeal learning of an unknown undirected graph G = (V, E), in which the robot explores every vertex and edge in the graph by traversing at most O(E + V^{1+o(1)}) edges, is presented.
Abstract: We study how a mobile robot can piecemeal learn an unknown environment. The robot's goal is to learn a complete map of its environment, while satisfying the constraint that it must return every so often to its starting position (for refueling, say). The environment is modelled as an arbitrary, undirected graph, which is initially unknown to the robot. We assume that the robot can distinguish vertices and edges that it has already explored. We present a surprisingly efficient algorithm for piecemeal learning an unknown undirected graph G = (V, E) in which the robot explores every vertex and edge in the graph by traversing at most O(E + V^{1+o(1)}) edges. This nearly linear algorithm improves on the best previous algorithm, in which the robot traverses at most O(E + V^2) edges. We also give an application of piecemeal learning to the problem of searching a graph for a "treasure".

Proceedings ArticleDOI
05 Aug 1995
TL;DR: An application of distributed perception for inferring a user's intentions by observing tactile gestures based on the expressiveness of nonverbal communication is presented.
Abstract: Gesture-based programming is a new paradigm to ease the burden of programming robots. By tapping into the user's wealth of experience with contact transitions, compliance, uncertainty and operations sequencing, we hope to provide a more intuitive programming environment for complex, real-world tasks based on the expressiveness of nonverbal communication. A requirement for this to be accomplished is the ability to interpret gestures to infer the intentions behind them. As a first step toward this goal, this paper presents an application of distributed perception for inferring a user's intentions by observing tactile gestures. These gestures consist of sparse, inexact, physical "nudges" applied to the robot's end effector for the purpose of modifying its trajectory in free space. A set of independent agents, each with its own local, fuzzified, heuristic model of a particular trajectory parameter, observes data from a wrist force/torque sensor to evaluate the gestures. The agents then independently determine the confidence of their respective findings, and distributed arbitration resolves the interpretation through voting.

Proceedings ArticleDOI
05 Nov 1995
TL;DR: A robot system that finds people, approaches them and then recognizes them is described, which uses a variety of techniques: color vision is used to find people; vision and sonar sensors are used to approach them; and a template-based pattern recognition algorithm is usedto isolate the face.
Abstract: In order for mobile robots to interact effectively with people they will have to recognize faces. We describe a robot system that finds people, approaches them and then recognizes them. The system uses a variety of techniques: color vision is used to find people; vision and sonar sensors are used to approach them; a template-based pattern recognition algorithm is used to isolate the face; and a neural network is used to recognize the face. All of these processes are controlled using an intelligent robot architecture that sequences and monitors the robot's actions. We present the results of many experimental runs using an actual mobile robot finding and recognizing up to six different people.

01 Apr 1995
TL;DR: It is shown how W-learning may be used to define spaces of agent-collections whose action selection is learnt rather than hand-designed, which is the kind of solution-space that may be searched with a genetic algorithm.
Abstract: W-learning is a self-organising action-selection scheme for systems with multiple parallel goals, such as autonomous mobile robots. It uses ideas drawn from the subsumption architecture for mobile robots (Brooks), implementing them with the Q-learning algorithm from reinforcement learning (Watkins). Brooks explores the idea of multiple sensing-and-acting agents within a single robot, more than one of which is capable of controlling the robot on its own if allowed. I introduce a model where the agents are not only autonomous, but are in fact engaged in direct competition with each other for control of the robot. Interesting robots are ones where no agent achieves total victory, but rather the state-space is fragmented among different agents. Having the agents operate by Q-learning proves to be a way to implement this, leading to a local, incremental algorithm (W-learning) to resolve competition. I present a sketch proof that this algorithm converges when the world is a discrete, finite Markov decision process. For each state, competition is resolved with the most likely winner of the state being the agent that is most likely to suffer the most if it does not win. In this way, W-learning can be viewed as "fair" resolution of competition. In the empirical section, I show how W-learning may be used to define spaces of agent-collections whose action selection is learnt rather than hand-designed. This is the kind of solution-space that may be searched with a genetic algorithm.
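The competition step can be sketched statically (the real W-learning scheme learns its W values online through Q-learning; here they are computed directly from fixed, hypothetical Q-tables): each agent nominates its preferred action, and the state is won by the agent with the most to lose, measured as the worst-case drop in its own Q-value if a rival's nomination were executed instead:

```python
def w_arbitrate(q_tables, state, actions):
    """q_tables maps agent -> {(state, action): Q}. Each agent nominates
    its highest-Q action; the winner is the agent whose own Q-value
    would drop the most (worst case) under a rival's nomination.
    A static sketch of W-learning's 'most to lose wins' principle."""
    noms = {i: max(actions, key=lambda a: q[(state, a)])
            for i, q in q_tables.items()}
    def w(i):
        best = q_tables[i][(state, noms[i])]
        worst_rival = min(q_tables[i][(state, noms[j])] for j in noms if j != i)
        return best - worst_rival
    winner = max(noms, key=w)
    return winner, noms[winner]
```

An agent that is nearly indifferent between the candidate actions has a small W and so cedes the state, which is how the state-space ends up fragmented among agents rather than owned by one.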

Proceedings ArticleDOI
20 Jun 1995
TL;DR: This paper describes the approach of teaching the robot by demonstrating in front of it, with special emphasis on the observation system, and illustrates how complementary sensory data can be used for this purpose.
Abstract: To alleviate the problem of overwhelming complexity in grasp synthesis and path planning associated with robot task planning, we adopt the approach of teaching the robot by demonstrating in front of it. The system has four components: the observation system, the grasping task recognition module, the task translator and the robot system. The observation system comprises an active multibaseline stereo system and a dataglove. The data stream recorded is then used to track object motion; this paper illustrates how complementary sensory data can be used for this purpose. The data stream is also interpreted by the grasping task recognition module, which produces higher levels of abstraction to describe both the motion and actions taken in the task. The resulting information is provided to the task translator, which creates commands for the robot system to replicate the observed task. In this paper we describe how these components work, with special emphasis on the observation system. The robot system that we use to perform the grasping tasks comprises the PUMA 560 arm and the Utah/MIT hand.

Book ChapterDOI
21 Aug 1995
TL;DR: A methodology for designing the representation and the reinforcement functions that takes advantage of implicit domain knowledge in order to accelerate learning in dynamic, situated multiagent domains characterized by multiple goals, noisy perception and action, and inconsistent reinforcement is proposed.
Abstract: This paper discusses why traditional reinforcement learning methods often result in poor performance in dynamic, situated multiagent domains characterized by multiple goals, noisy perception and action, and inconsistent reinforcement. We propose a methodology for designing the representation and the reinforcement functions that takes advantage of implicit domain knowledge in order to accelerate learning in such domains, and demonstrate it experimentally in two different mobile robot domains.

Book ChapterDOI
04 Dec 1995
TL;DR: The effectiveness of robot manipulators is determined to a great extent by the speed with which they can make the movements needed to carry out a task.
Abstract: The effectiveness of robot manipulators is determined to a great extent by the speed with which they can make the movements needed to carry out a task. Accordingly, the problem of optimal robot control is often divided into two subproblems that are solved separately. In the autonomous regime, trajectory planning is performed so that the robot's movement time is close to the minimum.

Proceedings ArticleDOI
27 Aug 1995
TL;DR: A new approach to modelling robot systems as a large number of cooperating agents solving a robot task, forming a multi-layered, distributed and reactive (MDR) robot task-level transformer, is described.
Abstract: This paper describes a new approach to modelling robot systems as a large number of cooperating agents solving a robot task. On the one hand, the agent model draws on results from distributed artificial intelligence (DAI); on the other hand, typical aspects of robot systems must be integrated. Therefore, an agent model is presented that serves the special requirements of robot systems and offers many advantages, such as robustness, high efficiency, and the chance of integrating reactive and deliberative behavior. The agent model also enables the use of different programming paradigms, ranging from complex through hierarchical to object-oriented agents. Finally, a typical set of agents is presented, forming a multi-layered, distributed and reactive (MDR) robot task-level transformer.