
Showing papers on "Mobile robot published in 1987"


01 Jan 1987
TL;DR: The method presented can be generalized to six degrees of freedom and provides a practical means of estimating the relationships among objects, as well as the uncertainty associated with those relationships.

1,421 citations


Journal ArticleDOI
01 Jun 1987
TL;DR: In this article, a sonar-based mapping and navigation system for an autonomous mobile robot operating in unknown and unstructured environments is described, where range measurements from multiple points of view are integrated into a sensor-level sonar map, using a robust method that combines the sensor information in such a way as to cope with uncertainties and errors in the data.
Abstract: A sonar-based mapping and navigation system developed for an autonomous mobile robot operating in unknown and unstructured environments is described. The system uses sonar range data to build a multileveled description of the robot's surroundings. Sonar readings are interpreted using probability profiles to determine empty and occupied areas. Range measurements from multiple points of view are integrated into a sensor-level sonar map, using a robust method that combines the sensor information in such a way as to cope with uncertainties and errors in the data. The resulting two-dimensional maps are used for path planning and navigation. From these sonar maps, multiple representations are developed for various kinds of problem-solving activities. Several dimensions of representation are defined: the abstraction axis, the geographical axis, and the resolution axis. The sonar mapping procedures have been implemented as part of an autonomous mobile robot navigation system called Dolphin. The major modules of this system are described and related to the various mapping representations used. Results from actual runs are presented, and further research is mentioned. The system is also situated within the wider context of developing an advanced software architecture for autonomous mobile robots.

1,313 citations


Proceedings Article
13 Jul 1987
TL;DR: The reasoning system that controls the robot is designed to exhibit the kind of behavior expected of a rational agent, and is endowed with the psychological attitudes of belief, desire, and intention, resulting in complex goal-directed and reflective behaviors.
Abstract: In this paper, the reasoning and planning capabilities of an autonomous mobile robot are described. The reasoning system that controls the robot is designed to exhibit the kind of behavior expected of a rational agent, and is endowed with the psychological attitudes of belief, desire, and intention. Because these attitudes are explicitly represented, they can be manipulated and reasoned about, resulting in complex goal-directed and reflective behaviors. Unlike most planning systems, the plans or intentions formed by the robot need only be partly elaborated before it decides to act. This allows the robot to avoid overly strong expectations about the environment, overly constrained plans of action, and other forms of overcommitment common to previous planners. In addition, the robot is continuously reactive and has the ability to change its goals and intentions as situations warrant. The system has been tested with SRI's autonomous robot (Flakey) in a space station scenario involving navigation and the performance of emergency tasks.

1,029 citations


Book ChapterDOI
01 Jun 1987
TL;DR: In this article, a 3D Gaussian distribution is used to model triangulation error in stereo vision for a mobile robot that estimates its position by tracking landmarks with on-board cameras.
Abstract: In stereo navigation, a mobile robot estimates its position by tracking landmarks with on-board cameras. Previous systems for stereo navigation have suffered from poor accuracy, in part because they relied on scalar models of measurement error in triangulation. Using three-dimensional (3D) Gaussian distributions to model triangulation error is shown to lead to much better performance. How to compute the error model from image correspondences, estimate robot motion between frames, and update the global positions of the robot and the landmarks over time are discussed. Simulations show that, compared to scalar error models, the 3D Gaussian reduces the variance in robot position estimates and better distinguishes rotational from translational motion. A short indoor run with real images supported these conclusions and computed the final robot position to within two percent of distance and one degree of orientation. These results illustrate the importance of error modeling in stereo vision for this and other applications.
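As a rough illustration of the idea, first-order (Jacobian) propagation of pixel noise through the stereo triangulation equations yields a 3D Gaussian whose depth variance dominates the lateral variances. This Python sketch assumes a rectified pinhole pair with baseline `b`, focal length `f`, and isotropic pixel noise; the paper's exact measurement model may differ:

```python
import numpy as np

def triangulation_covariance(xl, xr, y, f, b, pixel_sigma):
    """Propagate pixel measurement noise into a 3D Gaussian on the
    triangulated point (first-order / Jacobian approximation)."""
    d = xl - xr                       # disparity
    X = b * xl / d
    Y = b * y / d
    Z = b * f / d
    # Jacobian of (X, Y, Z) with respect to the measurements (xl, xr, y)
    J = np.array([
        [ b * (-xr) / d**2,  b * xl / d**2, 0.0],
        [-b * y / d**2,      b * y / d**2,  b / d],
        [-b * f / d**2,      b * f / d**2,  0.0],
    ])
    R = (pixel_sigma**2) * np.eye(3)  # isotropic pixel noise
    cov = J @ R @ J.T
    return np.array([X, Y, Z]), cov
```

For any plausible geometry the resulting Z-variance greatly exceeds the X- and Y-variances, which is exactly the anisotropy that a scalar error model cannot represent.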

469 citations


Journal ArticleDOI
TL;DR: The kinematic equations of motion of Uranus, a wheeled mobile robot being constructed in the CMU Mobile Robot Laboratory, are formulated, and the physical conditions that guarantee the existence of the sensed forward and actuated inverse solutions are interpreted.
Abstract: We formulate the kinematic equations of motion of wheeled mobile robots incorporating conventional, omnidirectional, and ball wheels.1 We extend the kinematic modeling of stationary manipulators to accommodate such special characteristics of wheeled mobile robots as multiple closed-link chains, higher-pair contact points between a wheel and a surface, and unactuated and unsensed wheel degrees of freedom. We apply the Sheth-Uicker convention to assign coordinate axes and develop a matrix coordinate transformation algebra to derive the equations of motion. We introduce a wheel Jacobian matrix to relate the motions of each wheel to the motions of the robot. We then combine the individual wheel equations to obtain the composite robot equation of motion. We interpret the properties of the composite robot equation to characterize the mobility of a wheeled mobile robot according to a mobility characterization tree. Similarly, we apply actuation and sensing characterization trees to delineate the robot motions producible by the wheel actuators and discernible by the wheel sensors, respectively. We calculate the sensed forward and actuated inverse solutions and interpret the physical conditions which guarantee their existence. To illustrate the development, we formulate and interpret the kinematic equations of motion of Uranus, a wheeled mobile robot being constructed in the CMU Mobile Robot Laboratory.

464 citations


Proceedings ArticleDOI
01 Mar 1987
TL;DR: Motor schemas are proposed as a basic unit of behavior specification for the navigation of a mobile robot and a variant of the potential field method is used to produce the appropriate velocity and steering commands for the robot.
Abstract: Motor schemas are proposed as a basic unit of behavior specification for the navigation of a mobile robot. These are multiple concurrent processes which operate in conjunction with associated perceptual schemas and contribute independently to the overall concerted action of the vehicle. The motivation behind the use of schemas for this domain is drawn from neuroscientific, psychological and robotic sources. A variant of the potential field method is used to produce the appropriate velocity and steering commands for the robot. An implementation strategy based on available tools at UMASS is described. Simulation results show the feasibility of this approach.
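The schema idea can be illustrated with a toy potential-field sketch in Python: each schema computes its own velocity vector independently, and the vector sum becomes the command. The gains and influence radius here are invented for illustration, not taken from the paper:

```python
import numpy as np

def avoid_obstacle(robot, obstacle, gain=1.0, radius=2.0):
    """Repulsive schema: push away from an obstacle inside its sphere of influence."""
    d = robot - obstacle
    dist = np.linalg.norm(d)
    if dist >= radius or dist == 0.0:
        return np.zeros(2)
    return gain * (radius - dist) / radius * d / dist

def move_to_goal(robot, goal, gain=0.5):
    """Attractive schema: constant-magnitude pull toward the goal."""
    d = goal - robot
    n = np.linalg.norm(d)
    return gain * d / n if n > 0 else np.zeros(2)

def combined_velocity(robot, goal, obstacles):
    """Each schema contributes independently; the vector sum is the command."""
    v = move_to_goal(robot, goal)
    for obs in obstacles:
        v += avoid_obstacle(robot, obs)
    return v
```

Because each schema is a separate function of the current state, they map naturally onto the multiple concurrent processes the abstract describes.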

418 citations


Journal ArticleDOI
TL;DR: A method is described by which range data from a sonar rangefinder can be used to determine the two-dimensional position and orientation of a mobile robot inside a room; the approach is extremely tolerant of noise and clutter.
Abstract: This correspondence describes a method by which range data from a sonar rangefinder can be used to determine the two-dimensional position and orientation of a mobile robot inside a room. The plan of the room is modeled as a list of segments indicating the positions of walls. The algorithm works by correlating straight segments in the range data against the room model, then eliminating implausible configurations using the sonar barrier test, which exploits physical constraints on sonar data. The approach is extremely tolerant of noise and clutter. Transient objects such as furniture and people need not be included in the room model, and very noisy, low-resolution sensors can be used. The algorithm's performance is demonstrated using a Polaroid Ultrasonic Rangefinder.

310 citations


Book ChapterDOI
01 Mar 1987
TL;DR: In this paper, a methodology for the kinematic modeling of wheeled mobile robots is introduced, which is applied to Uranus, a wheeled omnidirectional mobile robot developed at Carnegie Mellon University.
Abstract: We have introduced a methodology for the kinematic modeling of wheeled mobile robots. In this paper, we apply our methodology to Uranus, an omnidirectional wheeled mobile robot which is being developed in the Robotics Institute of Carnegie Mellon University. We assign coordinate systems to specify the transformation matrices and write the kinematic equations-of-motion. We illustrate the actuated inverse and sensed forward solutions; i.e., the calculation of actuator velocities from robot velocities and robot velocities from sensed wheel velocities. We apply the actuated inverse and sensed forward solutions to the kinematic control of Uranus by: calculating in real-time the robot position from shaft encoder readings (i.e., dead reckoning); formulating an algorithm to detect wheel slippage; and developing an algorithm for feedback control.
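Uranus's actuated inverse and sensed forward solutions are derived in the paper with the full transformation-matrix algebra; as a loose illustration only, here is the textbook kinematics of a four-Mecanum-wheel omnidirectional base in Python. The wheel radius and body dimensions are made-up placeholders, not Uranus's actual parameters:

```python
def mecanum_inverse(vx, vy, omega, r=0.05, lx=0.2, ly=0.15):
    """Actuated inverse solution: wheel angular velocities from the robot
    velocity (vx, vy, omega). r = wheel radius; lx, ly = half body dims."""
    L = lx + ly
    w_fl = (vx - vy - L * omega) / r   # front-left
    w_fr = (vx + vy + L * omega) / r   # front-right
    w_rl = (vx + vy - L * omega) / r   # rear-left
    w_rr = (vx - vy + L * omega) / r   # rear-right
    return w_fl, w_fr, w_rl, w_rr

def mecanum_forward(w_fl, w_fr, w_rl, w_rr, r=0.05, lx=0.2, ly=0.15):
    """Sensed forward solution: robot velocity from sensed wheel speeds,
    the basis for dead reckoning from shaft encoders."""
    L = lx + ly
    vx = r * (w_fl + w_fr + w_rl + w_rr) / 4
    vy = r * (-w_fl + w_fr + w_rl - w_rr) / 4
    omega = r * (-w_fl + w_fr - w_rl + w_rr) / (4 * L)
    return vx, vy, omega
```

Comparing the forward solution against the commanded velocities is also the basic idea behind a wheel-slippage check: a large residual between the two suggests a slipping wheel.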

187 citations


Proceedings ArticleDOI
01 Mar 1987
TL;DR: A robot system is described that can locate a part in an unstructured pile of objects, choose a grasp on the part, plan a motion to reach the part safely, and plan a motion to place the part at a commanded position.
Abstract: We describe a robot system capable of locating a part in an unstructured pile of objects, choosing a grasp on the part, planning a motion to reach the part safely, and planning a motion to place the part at a commanded position. The system requires as input a polyhedral world model including models of the part to be manipulated, the robot arm, and any other fixed objects in the environment. In addition, the system builds a depth map, using structured light, of the area where the part is to be found initially. Any other objects present in that area do not have to be modeled.

185 citations


Proceedings ArticleDOI
25 Feb 1987
TL;DR: The subsumption architecture, a design methodology for multiprocessor control systems based on a new decomposition of problems into task-achieving behaviors, is described and demonstrated by showing how two different control systems can be built on top of the same set of core behaviors.
Abstract: In this paper we describe the subsumption architecture, a design methodology for multiprocessor control systems. The architecture is based on a new decomposition of problems into task-achieving behaviors. This decomposition allows us to incrementally construct more competent robots by adding new behaviors. We demonstrate this by showing how two different control systems can be built on top of the same set of core behaviors. We have implemented both systems and have run them on a real mobile robot. The details and performance of each will be discussed.

152 citations


Journal ArticleDOI
01 Dec 1987
TL;DR: An algorithm is presented to navigate a robot in an unexplored terrain that is arbitrarily populated with disjoint convex polygonal obstacles in the plane and it is proven to yield a convergent solution to each path of traversal.
Abstract: The problem of navigating an autonomous mobile robot through unexplored terrain of obstacles is discussed. The case when the obstacles are "known" has been extensively studied in literature. Completely unexplored obstacle terrain is considered. In this case, the process of navigation involves both learning the information about the obstacle terrain and path planning. An algorithm is presented to navigate a robot in an unexplored terrain that is arbitrarily populated with disjoint convex polygonal obstacles in the plane. The navigation process is constituted by a number of traversals; each traversal is from an arbitrary source point to an arbitrary destination point. The proposed algorithm is proven to yield a convergent solution to each path of traversal. Initially, the terrain is explored using a rather primitive sensor, and the paths of traversal made may be suboptimal. The visibility graph that models the obstacle terrain is incrementally constructed by integrating the information about the paths traversed so far. At any stage of learning, the partially learned terrain model is represented as a learned visibility graph, and it is updated after each traversal. It is proven that the learned visibility graph converges to the visibility graph with probability one when the source and destination points are chosen randomly. Ultimately, the availability of the complete visibility graph enables the robot to plan globally optimal paths and also obviates the further usage of sensors.
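Once the visibility graph (or the partially learned version of it) is available, planning a traversal is a standard shortest-path search. A minimal Dijkstra sketch over an adjacency-dictionary graph, offered as a generic illustration rather than the paper's specific algorithm:

```python
import heapq
import math

def shortest_path(graph, src, dst):
    """Dijkstra over a (partially) learned visibility graph.
    `graph` maps a vertex to a dict {neighbor: Euclidean edge length}."""
    dist = {src: 0.0}
    prev = {}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, math.inf):
            continue                      # stale queue entry
        for v, w in graph.get(u, {}).items():
            nd = d + w
            if nd < dist.get(v, math.inf):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(pq, (nd, v))
    if dst not in dist:
        return None, math.inf             # not yet reachable in learned graph
    path = [dst]
    while path[-1] != src:
        path.append(prev[path[-1]])
    return path[::-1], dist[dst]
```

On the partially learned graph the result is only as good as the edges discovered so far, which matches the paper's point that paths are suboptimal until the learned graph converges to the full visibility graph.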

Proceedings Article
23 Aug 1987
TL;DR: The original problem of finding collision-free smooth trajectories, i.e., trajectories that never back up, for a circular mobile robot whose turning radius is lower bounded is studied.
Abstract: Most mobile robots are subject to kinematic constraints (non-holonomic joints), i.e., the number of degrees of freedom is less than the number of configuration parameters. Such robots can navigate in very constrained spaces, but at the expense of backing-up maneuvers [Laumond 86]. In this paper we study the original problem of finding collision-free smooth trajectories, i.e., trajectories that never back up, for a circular mobile robot whose turning radius is lower bounded.

Journal ArticleDOI
01 Dec 1987
TL;DR: The critical geometric dimensions of a standard pattern are used here to locate the relative position of the mobile robot with respect to the pattern; by doing so, the method does not depend on values of any intrinsic camera parameters, except the focal length.
Abstract: As mobile robots take on more and more of the tasks normally delegated to humans, they need to acquire higher degrees of autonomous operation, which calls for accurate and efficient position determination and/or verification. The critical geometric dimensions of a standard pattern are used here to locate the relative position of the mobile robot with respect to the pattern; by doing so, the method does not depend on the values of any intrinsic camera parameters, except the focal length. In addition, this method has the advantages of simplicity and flexibility. The standard pattern is also provided with a unique identification code, using bar codes, that enables the system to find the absolute location of the pattern. These bar codes also assist the scanning algorithms in locating the pattern in the environment. A thorough error analysis and experimental results obtained through software simulation are presented, as well as the current direction of our work.
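The claim that only the focal length is needed follows from the pinhole model: the range to a pattern of known physical size can be read directly off its apparent size in the image. A minimal sketch; the function name and parameters are illustrative, not from the paper:

```python
def range_from_pattern(focal_len_px, pattern_height_m, image_height_px):
    """Pinhole similar-triangles range estimate: a pattern of known height
    imaged at `image_height_px` pixels lies at z = f * H / h."""
    return focal_len_px * pattern_height_m / image_height_px
```

For example, a 0.3 m pattern imaged 50 px tall by a 500 px focal-length camera lies 3 m away; the paper's full method additionally recovers orientation from the pattern's other geometric dimensions.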

Journal ArticleDOI
TL;DR: In this paper, a computer-controlled vehicle that is part of a mobile nursing robot system is described; it applies a motion control strategy that attempts to avoid slippage and minimize position errors.
Abstract: A computer-controlled vehicle that is part of a mobile nursing robot system is described. The vehicle applies a motion control strategy that attempts to avoid slippage and minimize position errors. A cross-coupling control algorithm that guarantees a zero steady-state orientation error (assuming no slippage) is proposed, and a stability analysis of the control system is presented. Results of experiments performed on a prototype vehicle verify the theoretical analysis.
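The cross-coupling idea is that each wheel is corrected not only by its own tracking error but also by the mismatch between the two wheels' errors, since that mismatch is what produces orientation drift. A proportional-only Python sketch with invented gains; the paper's controller and stability analysis are more elaborate:

```python
def cross_coupled_update(v_ref_l, v_ref_r, v_meas_l, v_meas_r, kp=1.0, kc=2.0):
    """Cross-coupling control sketch (illustrative gains).
    e_c is the coupling error: the difference between the wheels' tracking
    errors, which corresponds to an orientation-rate error."""
    e_l = v_ref_l - v_meas_l
    e_r = v_ref_r - v_meas_r
    e_c = e_l - e_r
    # Speed up the wheel that lags more and hold back the other,
    # so the two errors converge together instead of independently.
    u_l = kp * e_l + kc * e_c
    u_r = kp * e_r - kc * e_c
    return u_l, u_r
```

If only the left wheel lags, a purely independent controller would correct it while the robot temporarily veers; the coupling term also slows the right wheel so the heading error stays bounded.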

Proceedings ArticleDOI
19 Oct 1987
TL;DR: The robotic system newly proposed in this paper can be dynamically reconfigured for a given task, so its flexibility and adaptability are much higher than those of conventional robots.
Abstract: A new concept of robotic systems, the "Dynamically Reconfigurable Robotic System" (DRRS), is presented in this paper. Each cell (robotic module) in a DRRS can detach itself and recombine with others autonomously depending on the task, forming, for example, manipulators or mobile robots, so the system can reorganize into an optimal overall shape. Robots developed so far cannot reorganize automatically by changing the linkage of arms, replacing some links with others, or reforming their shapes to adapt to changes in working environments and demands. The proposed system can be reconfigured dynamically for a given task, so its flexibility and adaptability are much higher than those of conventional robots. DRRS has many unique advantages, such as optimal shaping under the circumstances, fault tolerance, self-repair, and others. Some demonstrations are shown experimentally, and a decision method for such cell-structured manipulator configurations is also proposed.

Proceedings ArticleDOI
01 Mar 1987
TL;DR: The MIT mobile robot project has adopted a layered approach to building robust control systems for autonomous mobile robots; the architecture is operational on three completely different types of hardware, with a fourth in progress by way of a special-purpose silicon compiler.
Abstract: The MIT mobile robot project has adopted a layered approach to building robust control systems for autonomous mobile robots. We build a complete control system for a very simple task-achieving behavior for our robots, then lay additional complete control systems on top of it to achieve higher-level tasks. We have this architecture operational on three completely different types of hardware (a general purpose computer, a special purpose multi-processor, and a processor-less logic network) and are now working on a fourth, by way of a special purpose silicon compiler. Exactly the same high level language specification files are used on three of these four architectures. The control architect is thus freed from even the most global considerations of the target architecture.

Proceedings ArticleDOI
01 Jan 1987
TL;DR: In this article, the development of the keyboard-playing robot WABOT-2 (WAseda roBOT-2) is described, with a focus on the arm-and-hand mechanisms, which have 21 degrees of freedom in total, their hierarchically structured control computer system, the information processing method at the high-level computer, and the finger-arm coordination control that realizes the autonomous movement of WABOT-2.
Abstract: Advanced robots will have to have not only 'hard' functions but also 'soft' functions. Therefore, the purpose of this study is to realize 'soft' functions of robots, such as dexterity, speed, and intelligence, through the development of an anthropomorphic intelligent robot that plays a keyboard instrument. This paper describes the development of the keyboard-playing robot WABOT-2 (WAseda roBOT-2), with a focus on the arm-and-hand mechanisms, which have 21 degrees of freedom in total, their hierarchically structured control computer system, the information processing method at the high-level computer, and the finger-arm coordination control that realizes the autonomous movement of WABOT-2.

Proceedings ArticleDOI
01 Mar 1987
TL;DR: The perception, planning, and control components of the system are described, along with the CODGER software system that integrates them into a single system, synchronizing the data flow between them in order to maximize parallelism.
Abstract: This paper describes the current status of the Autonomous Land Vehicle research at Carnegie-Mellon University's Robotics Institute, focusing primarily on the system architecture. We begin with a discussion of the issues concerning outdoor navigation, then describe the various perception, planning, and control components of our system that address these issues. We describe the CODGER software system for integrating these components into a single system, synchronizing the data flow between them in order to maximize parallelism. Our system is able to drive a robot vehicle continuously with two sensors, a color camera and a laser rangefinder, on a network of sidewalks, up a bicycle slope, and along a curved road through an area populated with trees. Finally, we discuss the results of our experiments, as well as problems uncovered in the process and our plans for addressing them.

Proceedings ArticleDOI
01 Dec 1987
TL;DR: This paper uses occupancy grids to combine range information from sonar and one-dimensional stereo into a two-dimensional map of the vicinity of a robot.
Abstract: Multiple range sensors are essential in mobile robot navigation systems. This introduces the problem of integrating noisy range data from multiple sensors and multiple robot positions into a common description of the environment. We propose a cellular representation called the occupancy grid as a solution to this problem. In this paper, we use occupancy grids to combine range information from sonar and one-dimensional stereo into a two-dimensional map of the vicinity of a robot. Each cell in the map contains a probabilistic estimate of whether it is empty or occupied by an object in the environment. These estimates are obtained from sensor models that describe the uncertainty in the range data. A Bayesian estimation scheme is used to update the existing map with successive range profiles from each sensor. This representation is simple to manipulate, treats different sensors uniformly, and models uncertainty in the sensor data and in the robot position. It also provides a basis for motion planning and creation of higher-level object descriptions.
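A common way to realize this kind of Bayesian cell update is in log-odds form, where fusing an independent range reading becomes a simple addition per cell. This Python sketch is a generic illustration of the technique, not the paper's exact scheme:

```python
import numpy as np

def logodds_update(grid, cell_updates):
    """Bayesian occupancy-grid update in log-odds form.
    grid: 2D array of per-cell log-odds (0.0 means P(occupied) = 0.5).
    cell_updates: {(i, j): p} where p is P(occupied) given the new reading."""
    for (i, j), p in cell_updates.items():
        grid[i, j] += np.log(p / (1.0 - p))   # fuse independent evidence
    return grid

def occupancy_prob(grid):
    """Recover P(occupied) from the log-odds grid."""
    return 1.0 / (1.0 + np.exp(-grid))
```

Repeated consistent readings push a cell's probability toward 0 or 1, while untouched cells stay at the uninformed 0.5, which is how the map distinguishes empty, occupied, and unknown areas.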

Book ChapterDOI
01 Mar 1987
TL;DR: The architecture of Stanford's autonomous mobile robot is described, including its distributed computing system, locomotion, and sensing, and some of the issues in the representation of a world model are explored.
Abstract: A mobile robot architecture must include sensing, planning, and locomotion, which are tied together by a model or map of the world based on sensor information, a priori knowledge, and generic models. The architecture of Stanford's autonomous mobile robot is described, including its distributed computing system, locomotion, and sensing. Additionally, some of the issues in the representation of a world model are explored. Sensor models are used to update the world model in a uniform manner, and uncertainty reduction is discussed.

Journal ArticleDOI
TL;DR: The current status of autonomous land vehicle (ALV) research at Carnegie Mellon University's Robotics Institute is described, with an autonomous mobile robot system capable of operating in outdoor environments and a navigation system working at two test sites and on two experimental vehicles.
Abstract: Focusing primarily on system architecture, this article describes the current status of autonomous land vehicle (ALV) research at Carnegie Mellon University's Robotics Institute. We will (1) discuss issues concerning outdoor navigation; (2) describe our system's perception, planning, and control components that address these issues; (3) examine Codger, the software system that integrates these components into a single system, synchronizing the dataflow between them (thereby maximizing parallelism); and (4) present the results of our experiments, problems uncovered in the process, and plans for addressing those problems. Carnegie Mellon's ALV group has created an autonomous mobile robot system capable of operating in outdoor environments. Using two sensors, a color camera and a laser range finder, our system can drive a robot vehicle continuously on a network of sidewalks, up a bicycle slope, and over a curved road through an area populated with trees. The complexity of real-world domains and the requirements for continuous and real-time motion require that such robot systems provide architectural support for multiple sensors and parallel processing, capabilities not found in simpler robot systems. At CMU, we are studying mobile robot system architecture and have developed a navigation system working at two test sites and on two experimental vehicles.

Proceedings ArticleDOI
01 Mar 1987
TL;DR: A control scheme is presented to improve the flexibility of redundant robots through proper utilization of redundancy; the feasibility and effectiveness of this control scheme are demonstrated through simulation.
Abstract: The joint velocities required to move the robot end-effector with a desired speed depend on the direction of motion. A robot's mobility, i.e., its ability to move, is better in the directions requiring lower joint velocities. When the robot is near a singular configuration, the joint velocities required to attain the end-effector velocity in certain directions are extremely high. Thus arbitrary directional changes become more difficult. A robot's flexibility, defined as its ability to change the direction of the end-effector motion, is low in the vicinity of singular configurations. The addition of redundant joints can greatly enhance this flexibility. However, this requires proper utilization of redundancy. A control scheme is presented to improve the flexibility of redundant robots. The feasibility and effectiveness of this control scheme are demonstrated through simulation.

01 Jun 1987
TL;DR: In this article, the authors proposed a new concept of a gnat-sized autonomous robot with on-board sensors, brains, actuators and power supplies, all fabricated on a single piece of silicon.
Abstract: A new concept in mobile robots is proposed, namely that of a gnat-sized autonomous robot with on-board sensors, brains, actuators and power supplies, all fabricated on a single piece of silicon. Recent breakthroughs in computer architectures for intelligent robots, sensor integration algorithms and micromachining techniques for building on-chip micromotors, combined with the ever decreasing size of integrated logic, sensors and power circuitry, have led to the possibility of a new generation of mobile robots which will vastly change the way we think about robotics. Forget about today's first generation robots: costly, bulky machines with parts acquired from many different vendors. What will appear will be cheap, mass produced, slimmed down, integrated robots that need no maintenance, no spare parts and no special care. The cost advantages of these robots will create new worlds of applications. Gnat robots will offer a new approach in using automation technology. We will begin to think in terms of massive parallelism: using millions of simple, cheap gnat robots in place of one large complicated robot. Furthermore, disposable robots will even become realistic. This paper outlines how to build gnat robots. It discusses the technology thrusts that will be required for developing such machines and sets forth some strategies for design. A close look is taken at the tradeoffs involved in choosing components of the system: locomotion options, power sources, types of sensors and architectures for intelligence. A.I. Laboratory Working Papers are produced for internal circulation, and may contain information that is, for example, too preliminary or too detailed for formal publication. It is not intended that they should be considered papers to which reference can be made in the literature.

1 Where Did All The Robots Go?
If you've been keeping up with your reading of Time magazine or watching of the Saturday morning cartoons, you were probably disappointed last Christmas when you didn't get a robot that did the dishes, washed the windows and swept the floors. Where did all the robots go? We've become conditioned to believe that soon robot-helpers would be permeating our society, but it hasn't turned out that way. Why don't we see more robots in everyday life? The main reason is money. Robot technology is very expensive for the level of intelligence attainable. Many hard problems need to be solved in sensory perception and intelligent control before robots will achieve higher levels of competence. No market exists today for such costly machines of limited capabilities. Therefore I propose that we work on building very cheap robots with the capabilities we can produce now and then see what happens later, much the same way as when microprocessors were first introduced (as video games, etc.). What makes robots expensive? Mobile robots today contain mostly motors and batteries while all the sensors and computers come in a very tiny package. The battery-motor system has a certain runaway characteristic. Big motors tend to need big batteries which weigh down the chassis, so larger motors are called for, which require heftier batteries ... and on and on it goes. Meanwhile, all the intelligence and sensing mechanisms fit onto a few square inches of silicon. Mobile robots that are used as sensor platforms, exploration robots or sentries, as opposed to heavy lift arm-type robots, pay especially heavy penalties for carrying around large loads of motors and batteries. If mobile robots are half motors and batteries, what takes up the other half of the physical space? The answer is connectors: power connectors, signal connectors, bus interfaces whatever it takes to hook up one vendor's computer to another vendor's motor to another vendor's sensor to yet another vendor's battery. 
All these interfaces between parts from various suppliers mean added cost and complexity and the assurance of the necessity of planning for spare parts and maintenance during the lifetime of the robot. Due to mass production and integrated circuit technology, processors and many types of sensors have declined in both price and size over the past few years while motors and batteries have enjoyed no such benefits and remain the most costly and bulky components of a robot system. In order to minimize the size and cost of a robot, we propose to use ever smaller and lighter motors and batteries until we find a limit for building the smallest robots possible. Recent advances in silicon micromachining technology have brought about the appearance of micromechanical motors. These motors are on the order of a few hundreds of microns in diameter and are actually etched on-chip [2,30]. One might question the usefulness of such tiny motors, but if all we wanted to do was to locomote the chip on which they were fabricated, then we would have a system in which the motors were of the same scale as the sensors and processors. Putting an entire robot system on-chip would allow for mass production using IC fabrication technology and costs would

Book ChapterDOI
Takero Hongo1, Hideo Arakawa1, Gunji Sugimoto1, Koichi Tange1, Yuzo Yamamoto1 
TL;DR: In this article, an automatically guided vehicle, traveling without fixed guide ways, has been developed and the construction of the vehicle, the control algorithm, and its general performance are described.
Abstract: An automatically guided vehicle, traveling without fixed guide ways, has been developed. In this paper, the construction of the vehicle, the control algorithm, and its general performance are described.

Journal ArticleDOI
01 Apr 1987
TL;DR: An algorithm based on the Quine-McCluskey method of finding prime implicants in a logical expression is used to isolate all the largest rectangular free convex areas in a specified environment.
Abstract: An automated path planning algorithm for a mobile robot in a structured environment is presented. An algorithm based on the Quine-McCluskey method of finding prime implicants in a logical expression is used to isolate all the largest rectangular free convex areas in a specified environment. The free convex areas are represented as nodes in a graph, and a graph traversal strategy that dynamically allocates costs to graph paths is used. Complexity of the algorithm and a strategy to trade optimality for smaller computation time are discussed.

01 Jul 1987
TL;DR: The reasoning system that controls the robot is designed to exhibit the kind of behavior expected of a rational agent, and is endowed with the psychological attitudes of belief, desire, and intention, resulting in complex goal-directed and reflective behaviors.
Abstract: In this paper, the reasoning and planning capabilities of an autonomous mobile robot are described. The reasoning system that controls the robot is designed to exhibit the kind of behavior expected of a rational agent, and is endowed with the psychological attitudes of belief, desire, and intention. Because these attitudes are explicitly represented, they can be manipulated and reasoned about, resulting in complex goal-directed and reflective behaviors. Unlike most planning systems, the plans or intentions formed by the robot need only be partly elaborated before it decides to act. This allows the robot to avoid overly strong expectations about the environment, overly constrained plans of action, and other forms of overcommitment common to previous planners. In addition, the robot is continuously reactive and has the ability to change its goals and intentions as situations warrant. Thus, while the system architecture allows for reasoning about the means and ends in much the same way as traditional planners, it also possesses the reactivity required for survival in highly dynamic and uncertain worlds. The system has been tested with SRI's autonomous robot (Flakey) in a space station scenario involving navigation and the performance of emergency tasks.
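The belief-desire-intention cycle described above can be sketched as a simplified interpreter. This is illustrative only, in the spirit of such architectures rather than SRI's actual implementation; all names below are hypothetical:

```python
def bdi_step(beliefs, desires, intentions, perceive, options, act):
    """One cycle of a simplified BDI interpreter: revise beliefs from
    perception, drop intentions invalidated by the new beliefs, adopt
    a (possibly only partly elaborated) plan if no intention is
    active, then execute one step of the current intention."""
    beliefs.update(perceive())
    # reconsider: keep only intentions still viable under current beliefs
    intentions[:] = [i for i in intentions if i["context"](beliefs)]
    if not intentions:
        for desire in desires:
            plan = options(desire, beliefs)  # partial plan, elaborated later
            if plan is not None:
                intentions.append(plan)
                break
    if intentions:
        act(intentions[0])
    return beliefs, intentions
```

The reconsideration step is what gives the agent its reactivity: an intention whose context condition fails under the new beliefs is abandoned rather than blindly executed.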

Proceedings ArticleDOI
01 Mar 1987
TL;DR: A decentralized adaptive control scheme is presented for multilink robot arms that allows local high gain feedback control to stabilize the overall system, and cause the arm to track reference trajectories to any accuracy desired.
Abstract: A decentralized adaptive control scheme is presented for multilink robot arms. The structure of dynamic and force interactions in the robot arm allows local high-gain feedback control to stabilize the overall system and cause the arm to track reference trajectories to any desired accuracy. The adaptive law is locally stable, with an attractive set that is adjustable a priori using minimal knowledge of the robot parameters. Decentralized control of robot manipulators is highly desirable in that it facilitates real-time implementation.
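The claim that local high-gain feedback suppresses interaction torques can be illustrated numerically. The sketch below is not the paper's adaptive law; the plant parameters are invented. It simulates two unit-inertia joints with a small velocity coupling, each under purely local PD control:

```python
def simulate_decentralized_pd(kp, kd, steps=2000, dt=0.001):
    """Two joints, each a unit-inertia double integrator with a small
    coupling torque from the other joint's velocity, each driven by
    its own local PD loop toward a constant reference. Returns the
    worst final tracking error; higher local gains shrink it."""
    q = [0.0, 0.0]
    qd = [0.0, 0.0]
    ref = [1.0, -0.5]
    for _ in range(steps):
        for j in (0, 1):
            coupling = 0.2 * qd[1 - j]               # bounded interaction
            tau = kp * (ref[j] - q[j]) - kd * qd[j]  # local feedback only
            qdd = tau + coupling                     # unit inertia
            qd[j] += qdd * dt                        # semi-implicit Euler
            q[j] += qd[j] * dt
    return max(abs(ref[j] - q[j]) for j in (0, 1))
```

Raising the gains tightens tracking without either loop ever needing the other joint's state, which is the practical appeal of the decentralized approach.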

Proceedings ArticleDOI
01 Jan 1987
TL;DR: In this paper, the authors present the experimental results and analyses of the imaging characteristics of the omnivision system including the design of robot-oriented experiments and the calibration of raw results.
Abstract: Navigation and visual guidance are key topics in the design of a mobile robot. Omnidirectional vision using a very wide angle or fisheye lens provides a hemispherical view at a single instant that permits target location without mechanical scanning. The inherent image distortion with this view and the numerical errors accumulated from vision components can be corrected to provide accurate position determination for navigation and path control. The purpose of this paper is to present the experimental results and analyses of the imaging characteristics of the omnivision system, including the design of robot-oriented experiments and the calibration of raw results. Errors of less than one picture element on each axis were observed by testing the accuracy and repeatability of the experimental setup and the alignment between the robot and the sensor. Similar results were obtained for four different locations using corrected results of the linearity test between zenith angle and image location. Angular error of less than one degree and radial error of less than one picture element were observed at moderate relative speed. The significance of this work is that the experimental information and the test of coordinated operation of the equipment provide a greater understanding of the dynamic omnivision system characteristics, as well as insight into the evaluation and improvement of the prototype sensor for a mobile robot. Also, the calibration of the sensor is important, since the results provide a cornerstone for future developments. This sensor system is currently being developed for a robot lawn mower.
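The "linearity between zenith angle and image location" being tested corresponds to the equidistant fisheye model, in which image radius grows linearly with zenith angle. A minimal sketch, with hypothetical calibration constants `f`, `cx`, `cy` (the paper's actual lens model is not given in the abstract):

```python
import math

def fisheye_project(zenith, azimuth, f, cx, cy):
    """Equidistant fisheye model: image radius r = f * zenith, so a
    target's zenith angle maps linearly to distance from the image
    center (cx, cy). Angles in radians."""
    r = f * zenith
    return cx + r * math.cos(azimuth), cy + r * math.sin(azimuth)

def fisheye_unproject(u, v, f, cx, cy):
    """Invert the model: recover zenith and azimuth from a pixel."""
    dx, dy = u - cx, v - cy
    return math.hypot(dx, dy) / f, math.atan2(dy, dx)
```

Calibration then amounts to estimating `f`, `cx`, `cy` (and any residual distortion) from targets at known zenith angles, which is what the linearity test measures.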

Proceedings ArticleDOI
S. Walter
01 Mar 1987
TL;DR: A method for quickly obtaining multiple sonar range readings from ultrasonic pulse-echo devices is presented, aimed at obstacle detection in mobile robots and commercial automatic guided vehicles (AGVs).
Abstract: Ultrasonic pulse-echo devices have shown utility for obstacle detection in numerous mobile robot research efforts, as well as in commercial automatic guided vehicles (AGVs). A method for quickly obtaining multiple sonar range readings is presented. Thirty ultrasonic transducers are arranged in a ring and controlled by a computer. The system is predominantly composed of commercially available parts. Hardware specifications and operation details of this system are outlined.
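The paper's timing hardware is not described in the abstract. One common way to read many transducers quickly without crosstalk is to fire widely separated transducers simultaneously, cycling through offset groups; the group spacing below is an assumption, not the paper's value:

```python
def firing_schedule(n_transducers=30, group_spacing=6):
    """Build a round-robin schedule that fires every group_spacing-th
    transducer in a ring simultaneously, so no unit listens while a
    near neighbour transmits (a common crosstalk mitigation).
    Returns a list of firing groups covering every transducer once."""
    return [list(range(start, n_transducers, group_spacing))
            for start in range(group_spacing)]
```

With 30 transducers and a spacing of 6, six firing rounds of five transducers each cover the full ring, a six-fold speedup over firing one at a time.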

01 Jul 1987
TL;DR: Researchers propose to build a software framework running on processors onboard the new Uranus mobile robot that will maintain a probabilistic, geometric map of the robot's surroundings as it moves, to allow this map to be incrementally updated in a uniform way from various sources including sonar, stereo vision, proximity and contact sensors.
Abstract: A numerical representation of uncertain and incomplete sensor knowledge called Certainty Grids has been used successfully in several mobile robot control programs, and has proven itself to be a powerful and efficient unifying solution for sensor fusion, motion planning, landmark identification, and many other central problems. Researchers propose to build a software framework running on processors onboard the new Uranus mobile robot that will maintain a probabilistic, geometric map of the robot's surroundings as it moves. The certainty grid representation will allow this map to be incrementally updated in a uniform way from various sources including sonar, stereo vision, proximity and contact sensors. The approach can correctly model the fuzziness of each reading, while at the same time combining multiple measurements to produce sharper map features, and it can deal correctly with uncertainties in the robot's motion. The map will be used by planning programs to choose clear paths, identify locations (by correlating maps), identify well-known and insufficiently sensed terrain, and perhaps identify objects by shape. The certainty grid representation can be extended in the time dimension and used to detect and track moving objects.
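The incremental update that lets multiple fuzzy readings sharpen map features can be illustrated with a per-cell log-odds update. This is the standard Bayesian formulation of occupancy updating, not necessarily the paper's exact arithmetic:

```python
import math

def update_cell(log_odds, p_occ_given_reading):
    """Bayesian log-odds update of one certainty-grid cell from one
    sensor reading (uniform 0.5 prior). Readings for the same cell
    from different sensors or viewpoints simply add in log-odds
    space, which is how repeated fuzzy measurements accumulate into
    sharp map features."""
    return log_odds + math.log(p_occ_given_reading /
                               (1.0 - p_occ_given_reading))

def probability(log_odds):
    """Convert a cell's accumulated log-odds back to an occupancy
    probability in [0, 1]."""
    return 1.0 - 1.0 / (1.0 + math.exp(log_odds))
```

A single weak reading (say, 0.7 probability of occupancy) leaves the cell uncertain, but three independent such readings push it past 0.9, which is the sharpening effect the abstract describes.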