
Showing papers on "Mobile robot navigation published in 2000"


Journal ArticleDOI
TL;DR: This paper uses a sample-based version of Markov localization, capable of localizing mobile robots in an any-time fashion, to demonstrate drastic improvements in localization speed and accuracy when compared to conventional single-robot localization.
Abstract: This paper presents a statistical algorithm for collaborative mobile robot localization. Our approach uses a sample-based version of Markov localization, capable of localizing mobile robots in an any-time fashion. When teams of robots localize themselves in the same environment, probabilistic methods are employed to synchronize each robot's belief whenever one robot detects another. As a result, the robots localize themselves faster, maintain higher accuracy, and high-cost sensors are amortized across multiple robot platforms. The technique has been implemented and tested using two mobile robots equipped with cameras and laser range-finders for detecting other robots. The results, obtained with the real robots and in series of simulation runs, illustrate drastic improvements in localization speed and accuracy when compared to conventional single-robot localization. A further experiment demonstrates that under certain conditions, successful localization is only possible if teams of heterogeneous robots collaborate during localization.
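As an illustrative sketch only (not the authors' implementation), the snippet below shows the flavour of sample-based localization with a detection-driven update: a robot's belief is a set of position samples, and when a well-localized teammate detects it, the samples are re-weighted by the teammate's position estimate. The 2-D point belief, the Gaussian detection model and all numbers are assumptions made for the example.

import numpy as np

rng = np.random.default_rng(0)

def motion_update(particles, dx, dy, noise=0.05):
    """Propagate each (x, y) sample by the odometry increment plus noise."""
    return particles + np.array([dx, dy]) + rng.normal(0.0, noise, particles.shape)

def reweight(particles, weights, meas, sigma):
    """Multiply weights by a Gaussian likelihood centred on a position estimate."""
    d2 = np.sum((particles - meas) ** 2, axis=1)
    w = weights * np.exp(-0.5 * d2 / sigma ** 2)
    return w / w.sum()

def resample(particles, weights):
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

# Robot B's belief: 500 position samples spread over a 10 m x 10 m area.
particles = rng.uniform(0.0, 10.0, (500, 2))
weights = np.full(500, 1.0 / 500)

# Robot A (well localized) detects B roughly 2 m east of its own pose (4, 5):
# B fuses the implied position estimate (6, 5) into its own sample set.
particles = motion_update(particles, 0.2, 0.0)
weights = reweight(particles, weights, meas=np.array([6.0, 5.0]), sigma=0.5)
particles, weights = resample(particles, weights)
print("belief mean after detection:", particles.mean(axis=0))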

789 citations


Journal ArticleDOI
TL;DR: Inspired by the insect navigation system, this paper develops mechanisms for path integration and visual piloting that were successfully employed on the mobile robot Sahabot 2.

514 citations


Journal ArticleDOI
TL;DR: A method is presented for reducing stereo vision disparity images to two-dimensional map information; errors are reduced by segmenting disparity images into continuous disparity surfaces and rejecting “spikes” caused by stereo mismatches.
Abstract: This paper describes a working vision-based mobile robot that navigates and autonomously explores its environment while building occupancy grid maps of the environment. We present a method for reducing stereo vision disparity images to two-dimensional map information. Stereo vision has several attributes that set it apart from other sensors more commonly used for occupancy grid mapping. We discuss these attributes, the errors that some of them create, and how to overcome them. We reduce errors by segmenting disparity images based on continuous disparity surfaces to reject “spikes” caused by stereo mismatches. Stereo vision processing and map updates are done at 5 Hz and the robot moves at speeds of 300 cm/s.
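A minimal sketch of the reduction step, under assumed camera parameters and a simplified map: each disparity-image column is collapsed to its nearest reliable obstacle range, and isolated disparity "spikes" are rejected by requiring a minimum number of supporting pixels. The paper's actual surface segmentation and full 2-D occupancy grid update are not reproduced here.

import numpy as np

def disparity_column_to_range(col, focal_px=400.0, baseline_m=0.1,
                              support=5, tol=1.0):
    """Return the nearest obstacle range seen in one disparity-image column.

    A disparity value is only accepted if at least `support` pixels in the
    column agree with it to within `tol`, which discards isolated "spikes"
    caused by stereo mismatches.
    """
    valid = col[col > 0]
    for d in np.sort(valid)[::-1]:            # largest disparity = nearest
        if np.sum(np.abs(valid - d) < tol) >= support:
            return focal_px * baseline_m / d  # depth = f * B / d
    return None

def disparity_to_map(disp, cell=0.1, size=60):
    """Collapse a disparity image into a coarse 1-D range/occupancy profile."""
    grid = np.zeros(size, dtype=int)
    for col in disp.T:
        r = disparity_column_to_range(col)
        if r is not None and r / cell < size:
            grid[int(r / cell)] += 1          # vote for an occupied cell
    return grid

# Toy disparity image: a wall at ~8 px disparity plus a spurious spike.
disp = np.full((120, 160), 8.0)
disp[10, 40] = 60.0                           # mismatch spike, rejected
print(disparity_to_map(disp).nonzero())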

465 citations


Proceedings ArticleDOI
24 Apr 2000
TL;DR: A new class of potential functions for multiple robots is presented that enables homogeneous large-scale robot teams to arrange themselves in geometric formations while navigating to a goal location through an obstacle field.
Abstract: Potential function approaches to robot navigation provide an elegant paradigm for expressing multiple constraints and goals in mobile robot navigation problems. As an example, a simple reactive navigation strategy can be generated by combining repulsion from obstacles with attraction to a goal. Advantages of this approach can also be extended to multirobot teams. In this paper we present a new class of potential functions for multiple robots that enables homogeneous large-scale robot teams to arrange themselves in geometric formations while navigating to a goal location through an obstacle field. The approach is inspired by the way molecules "snap" into place as they form crystals; the robots are drawn to particular "attachment sites" positioned with respect to other robots. We refer to these potential functions as "social potentials" because they are constructed with respect to other agents. Initial results, generated in simulation, illustrate the viability of the approach.
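The following sketch illustrates the general idea of combining goal attraction, obstacle repulsion and a "social" attraction toward attachment sites defined relative to neighbouring robots. The force terms, gains and the diamond-shaped attachment offsets are illustrative assumptions, not the paper's specific potential functions.

import numpy as np

def unit(v):
    n = np.linalg.norm(v)
    return v / n if n > 1e-9 else np.zeros_like(v)

def social_force(pos, goal, obstacles, neighbours, offsets,
                 k_goal=1.0, k_obs=2.0, k_att=1.5, obs_range=1.5):
    """Sum of goal attraction, obstacle repulsion and a 'social' attraction
    toward the nearest attachment site defined relative to a neighbour."""
    f = k_goal * unit(goal - pos)                        # pull toward goal
    for o in obstacles:                                  # short-range push away
        d = np.linalg.norm(pos - o)
        if d < obs_range:
            f += k_obs * (1.0 / d - 1.0 / obs_range) * unit(pos - o)
    # Attachment sites: fixed offsets around each neighbour (e.g. a diamond).
    sites = [n + off for n in neighbours for off in offsets]
    if sites:
        nearest = min(sites, key=lambda s: np.linalg.norm(s - pos))
        f += k_att * unit(nearest - pos)                 # snap into formation
    return f

pos = np.array([0.0, 0.0])
goal = np.array([10.0, 0.0])
obstacles = [np.array([1.0, 0.2])]
neighbours = [np.array([0.5, 1.0])]
offsets = [np.array([1.0, 0.0]), np.array([-1.0, 0.0]),
           np.array([0.0, 1.0]), np.array([0.0, -1.0])]
print(social_force(pos, goal, obstacles, neighbours, offsets))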

353 citations


Journal ArticleDOI
TL;DR: A gesture interface for controlling a mobile robot equipped with a manipulator uses a camera to track a person and recognize gestures involving arm motion; both a template-based and a neural-network recognizer are combined with the Viterbi algorithm to recognize gestures defined through arm motion.
Abstract: Service robotics is currently a highly active research area in robotics, with enormous societal potential. Since service robots directly interact with people, finding “natural” and easy-to-use user interfaces is of fundamental importance. While past work has predominately focussed on issues such as navigation and manipulation, relatively few robotic systems are equipped with flexible user interfaces that permit controlling the robot by “natural” means. This paper describes a gesture interface for the control of a mobile robot equipped with a manipulator. The interface uses a camera to track a person and recognize gestures involving arm motion. A fast, adaptive tracking algorithm enables the robot to track and follow a person reliably through office environments with changing lighting conditions. Two alternative methods for gesture recognition are compared: a template based approach and a neural network approach. Both are combined with the Viterbi algorithm for the recognition of gestures defined through arm motion (in addition to static arm poses). Results are reported in the context of an interactive clean-up task, where a person guides the robot to specific locations that need to be cleaned and instructs the robot to pick up trash.
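Since the abstract names the Viterbi algorithm as the temporal-recognition component, here is a generic Viterbi decoder over a toy gesture model (hidden arm poses, quantised observed arm angles). The states, observation alphabet and probabilities are invented for illustration; the paper's template- and neural-network-based observation models are not shown.

import numpy as np

def viterbi(obs, start, trans, emit):
    """Most likely hidden state sequence for a discrete HMM (log domain)."""
    n_states = trans.shape[0]
    T = len(obs)
    logp = np.full((T, n_states), -np.inf)
    back = np.zeros((T, n_states), dtype=int)
    logp[0] = np.log(start) + np.log(emit[:, obs[0]])
    for t in range(1, T):
        for s in range(n_states):
            scores = logp[t - 1] + np.log(trans[:, s])
            back[t, s] = np.argmax(scores)
            logp[t, s] = scores[back[t, s]] + np.log(emit[s, obs[t]])
    path = [int(np.argmax(logp[-1]))]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# Toy model: hidden states are arm poses {down, raised}; observations are
# quantised arm angles {low, mid, high} produced by the person tracker.
start = np.array([0.8, 0.2])
trans = np.array([[0.7, 0.3], [0.3, 0.7]])
emit = np.array([[0.7, 0.2, 0.1],     # "down" mostly emits low angles
                 [0.1, 0.3, 0.6]])    # "raised" mostly emits high angles
print(viterbi([0, 1, 2, 2, 1], start, trans, emit))  # most likely pose sequence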

347 citations


Journal ArticleDOI
01 Dec 2000
TL;DR: A method is proposed for vision-based navigation of a mobile robot in indoor environments using a single omnidirectional (catadioptric) camera; a bird's eye view of the ground plane significantly simplifies navigation problems by eliminating perspective effects.
Abstract: Proposes a method for the visual-based navigation of a mobile robot in indoor environments, using a single omnidirectional (catadioptric) camera. The geometry of the catadioptric sensor and the method used to obtain a bird's eye (orthographic) view of the ground plane are presented. This representation significantly simplifies the solution to navigation problems, by eliminating any perspective effects. The nature of each navigation task is taken into account when designing the required navigation skills and environmental representations. We propose two main navigation modalities: topological navigation and visual path following. Topological navigation is used for traveling long distances and does not require knowledge of the exact position of the robot but rather, a qualitative position on the topological map. The navigation process combines appearance based methods and visual servoing upon some environmental features. Visual path following is required for local, very precise navigation, e.g., door traversal, docking. The robot is controlled to follow a prespecified path accurately, by tracking visual landmarks in bird's eye views of the ground plane. By clearly separating the nature of these navigation tasks, a simple and yet powerful navigation system is obtained.
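The bird's eye view amounts to remapping image pixels onto a metric ground-plane grid. The sketch below does this for a generic pre-calibrated ground-plane homography (an assumption made for the example); the paper instead derives the mapping from the catadioptric sensor geometry, which in practice also reduces to a precomputed lookup table.

import numpy as np

def birds_eye(image, H_inv, out_size=(200, 200), scale=0.02):
    """Warp an image onto a metric ground-plane grid ("bird's eye" view).

    H_inv is assumed to be a pre-calibrated 3x3 homography that maps ground
    coordinates (in metres, homogeneous) to pixel coordinates; each output
    cell of `scale` metres is filled by nearest-neighbour lookup.
    """
    out = np.zeros(out_size, dtype=image.dtype)
    h, w = image.shape
    for i in range(out_size[0]):
        for j in range(out_size[1]):
            ground = np.array([j * scale, i * scale, 1.0])
            u, v, s = H_inv @ ground
            u, v = int(round(u / s)), int(round(v / s))
            if 0 <= v < h and 0 <= u < w:
                out[i, j] = image[v, u]
    return out

# Illustrative homography just to show the call; a real H_inv comes from
# calibrating the camera (or catadioptric sensor) against the ground plane.
H_inv = np.array([[50.0, 0.0, 10.0],
                  [0.0, 50.0, 10.0],
                  [0.0, 0.0, 1.0]])
print(birds_eye(np.random.rand(120, 160), H_inv).shape)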

343 citations


Journal ArticleDOI
TL;DR: The review shows that biomimetic systems make significant contributions to two fields of research: first, they provide a real world test of models of biological navigation behaviour; second, they make new navigation mechanisms available for technical applications, most notably in the field of indoor robot navigation.

342 citations


Patent
22 Nov 2000
TL;DR: In this paper, an autonomous mobile robot system allocates mapping, localization, planning and control functions to at least one navigator robot and allocates task performance functions to one or more functional robots.
Abstract: An autonomous mobile robot system allocates mapping, localization, planning and control functions to at least one navigator robot and allocates task performance functions to one or more functional robots. The at least one navigator robot maps the work environment, localizes itself and the functional robots within the map, plans the tasks to be performed by the at least one functional robot, and controls and tracks the at least one functional robot during task performance. The at least one navigator robot performs substantially all calculations for mapping, localization, planning and control for both itself and the functional robots. In one implementation, the at least one navigator robot remains stationary while controlling and moving the at least one functional robot in order to simplify localization calculations. In one embodiment, the at least one navigator robot is equipped with sensors and sensor processing hardware required for these tasks, while the at least one functional robot is not equipped with sensors or hardware employed for these purposes.

317 citations



Proceedings ArticleDOI
24 Apr 2000
TL;DR: This paper reports on the extensions to previously developed systems that were necessary to achieve autonomous navigation in this domain; the algorithms have been tested on the outdoor prototype rover, Bullwinkle, which has recently driven 100 m at a speed of 15 cm/sec.
Abstract: Autonomous planetary rovers operating in vast unknown environments must operate efficiently because of size, power and computing limitations. Recently, we have developed a rover capable of efficient obstacle avoidance and path planning. The rover uses binocular stereo vision to sense potentially cluttered outdoor environments. Navigation is performed by a combination of several modules that each "vote" for the next best action for the robot to execute. The key distinction of our system is that it produces globally intelligent behavior with a small computational resource - all processing and decision making are done on a single processor. These algorithms have been tested on our outdoor prototype rover, Bullwinkle, and have recently driven the rover 100 m at a speed of 15 cm/sec. In this paper we report on the extensions to the systems we have previously developed that were necessary to achieve autonomous navigation in this domain.
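A minimal sketch of vote-based command arbitration in the spirit described: each module scores a shared set of candidate steering commands and a weighted sum picks the winner. The two behaviours, the candidate set and the weights are illustrative assumptions, not the rover's actual modules.

# Candidate steering commands (radians); each module votes over all of them.
CANDIDATES = [-0.6, -0.3, 0.0, 0.3, 0.6]

def goal_votes(heading_error):
    """Prefer steering that reduces the heading error to the goal."""
    return {c: -abs(heading_error - c) for c in CANDIDATES}

def obstacle_votes(obstacle_bearing):
    """Penalise steering toward a detected obstacle, reward steering away."""
    return {c: -1.0 / (0.2 + abs(c - obstacle_bearing)) for c in CANDIDATES}

def arbitrate(vote_sets, weights):
    """Pick the candidate with the highest weighted sum of votes."""
    totals = {c: sum(w * v[c] for v, w in zip(vote_sets, weights))
              for c in CANDIDATES}
    return max(totals, key=totals.get)

# Goal slightly to the left, obstacle slightly to the right of straight ahead.
steer = arbitrate([goal_votes(0.3), obstacle_votes(-0.2)], weights=[1.0, 0.8])
print("chosen steering command:", steer)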

281 citations


Book ChapterDOI
01 Jan 2000
TL;DR: The current state of the art in distributed mobile robot systems is surveyed, focusing principally on research that has been demonstrated in physical robot implementations, and some key open issues in multi-robot team research are identified.
Abstract: As research progresses in distributed robotic systems, more and more aspects of multi-robot systems are being explored. This article surveys the current state of the art in distributed mobile robot systems. Our focus is principally on research that has been demonstrated in physical robot implementations. We have identified eight primary research topics within multi-robot systems — biological inspirations, communication, architectures, localization/mapping/exploration, object transport and manipulation, motion coordination, reconfigurable robots, and learning — and discuss the current state of research in these areas. As we describe each research area, we identify some key open issues in multi-robot team research. We conclude by identifying several additional open research issues in distributed mobile robotic systems.


Journal ArticleDOI
TL;DR: In this paper, the authors combine experimental findings on ants and bees, and build on earlier models, to give an account of how these insects navigate using path integration, and how path integration interacts with other modes of navigation.
Abstract: We combine experimental findings on ants and bees, and build on earlier models, to give an account of how these insects navigate using path integration, and how path integration interacts with other modes of navigation. At the core of path integration is an accumulator. This is set to an initial state at the nest and is updated as the insect moves so that it always reports the insect's current position relative to the nest. Navigation that uses path integration requires, in addition, a way of storing states of the accumulator at significant places for subsequent recall as goals, and a means of computing the direction to such goals. We discuss three models of how path integration might be used for this process, which we call vector navigation. Vector navigation is the principal means of navigating over unfamiliar terrain, or when landmarks are unavailable. Under other conditions, insects often navigate by landmarks, and ignore the output of the vector navigation system. Landmark navigation does not interfere with the updating of the accumulator. There is an interesting symmetry in the use of landmarks and path integration. In the short term, vector navigation can be independent of landmarks, and landmark navigation needs no assistance from path integration. In the longer term, visual landmarks help keep path vector navigation calibrated, and the learning of visual landmarks is guided by path integration.
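The accumulator itself is a small computation; the sketch below integrates heading and distance into a nest-relative position and reads out a home vector, which is the quantity vector navigation would store and recall at significant places. It is a generic Cartesian formulation, not any one of the three models discussed.

import math

class PathIntegrator:
    """Accumulator that tracks position relative to the nest (the origin)."""
    def __init__(self):
        self.x = 0.0
        self.y = 0.0

    def step(self, heading_rad, distance):
        """Update the accumulator for one movement of `distance` along `heading`."""
        self.x += distance * math.cos(heading_rad)
        self.y += distance * math.sin(heading_rad)

    def home_vector(self):
        """Direction and distance back to the nest (used for vector navigation)."""
        return math.atan2(-self.y, -self.x), math.hypot(self.x, self.y)

pi_state = PathIntegrator()
for heading, dist in [(0.0, 3.0), (math.pi / 2, 4.0)]:   # 3 m east, then 4 m north
    pi_state.step(heading, dist)
print(pi_state.home_vector())   # bearing back to the nest (radians) and distance 5.0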

Proceedings ArticleDOI
16 Jun 2000
TL;DR: Omni-directional images provide adequate representations to support both accurate and qualitative navigation, since landmarks remain visible in all images, as opposed to a standard camera with a small field of view.
Abstract: We describe a method for visual based robot navigation with a single omni-directional (catadioptric) camera. We show how omni-directional images can be used to generate the representations needed for two main navigation modalities: Topological Navigation and Visual Path Following. Topological Navigation relies on the robot's qualitative global position, estimated from a set of omni-directional images obtained during a training stage (compressed using PCA). To deal with illumination changes, an eigenspace approximation to the Hausdorff measure is exploited. We present a method to transform omni-directional images to Bird's Eye Views that correspond to scaled orthographic views of the ground plane. These images are used to locally control the orientation of the robot, through visual servoing. Visual Path Following is used to accurately control the robot along a prescribed trajectory, by using bird's eye views to track landmarks on the ground plane. Due to the simplified geometry of these images, the robot's pose can be estimated easily and used for accurate trajectory following. Omni-directional images facilitate landmark based navigation, since landmarks remain visible in all images, as opposed to a standard camera with a small field of view. Also, omni-directional images provide adequate representations to support both accurate and qualitative navigation. Results are described in the paper.
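A sketch of the appearance-based localisation step under simplifying assumptions: training views are compressed with PCA, and the current view is assigned to the nearest training view in the low-dimensional eigenspace. The eigenspace approximation to the Hausdorff measure used for illumination robustness is not reproduced; plain Euclidean distance stands in for it.

import numpy as np

def train_eigenspace(views, k=4):
    """Compress a set of training views (rows) with PCA; return mean + basis."""
    mean = views.mean(axis=0)
    # SVD of the centred data; the top-k right singular vectors span the space.
    _, _, vt = np.linalg.svd(views - mean, full_matrices=False)
    basis = vt[:k]
    coords = (views - mean) @ basis.T          # each training view in eigenspace
    return mean, basis, coords

def localise(current, mean, basis, coords):
    """Return the index of the training view closest to `current` in eigenspace."""
    q = (current - mean) @ basis.T
    return int(np.argmin(np.linalg.norm(coords - q, axis=1)))

# Toy data: 20 "omnidirectional views" of 32x32 pixels, flattened to rows.
rng = np.random.default_rng(1)
views = rng.random((20, 32 * 32))
query = views[7] + 0.01 * rng.random(32 * 32)   # a slightly perturbed view 7
mean, basis, coords = train_eigenspace(views)
print("qualitative position (best-matching training view):",
      localise(query, mean, basis, coords))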

Proceedings ArticleDOI
31 Oct 2000
TL;DR: A solution to realtime motion control that can competently maneuver a robot at optimal speed even as it explores a new region or encounters new obstacles is presented.
Abstract: Despite many decades of research into mobile robot control, reliable, high-speed motion in complicated, uncertain environments remains an unachieved goal. In this paper we present a solution to realtime motion control that can competently maneuver a robot at optimal speed even as it explores a new region or encounters new obstacles. The method uses a navigation function to generate a gradient field that represents the optimal (lowest-cost) path to the goal at every point in the workspace. Additionally, we present an integrated sensor fusion system that allows incremental construction of an unknown or uncertain environment. Under modest assumptions, the robot is guaranteed to get to the goal in an arbitrary static unexplored environment, as long as such a path exists. We present preliminary experiments to show that the gradient method is better than expert human controllers in both known and unknown environments.
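The navigation function described assigns every free cell the cost of the best path to the goal, and the robot simply descends its gradient. The sketch below builds such a field with a uniform-cost wavefront (Dijkstra) on a small grid; the paper's incremental updates and sensor-fusion machinery are omitted, and the grid and costs are illustrative.

import heapq

def navigation_function(grid, goal):
    """Cost-to-goal for every free cell of an occupancy grid (4-connected)."""
    rows, cols = len(grid), len(grid[0])
    cost = {goal: 0.0}
    frontier = [(0.0, goal)]
    while frontier:
        c, (r, k) = heapq.heappop(frontier)
        if c > cost.get((r, k), float("inf")):
            continue
        for dr, dk in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nk = r + dr, k + dk
            if 0 <= nr < rows and 0 <= nk < cols and grid[nr][nk] == 0:
                if c + 1.0 < cost.get((nr, nk), float("inf")):
                    cost[(nr, nk)] = c + 1.0
                    heapq.heappush(frontier, (c + 1.0, (nr, nk)))
    return cost

def descend(cost, start):
    """Follow the steepest descent of the cost field from start to the goal."""
    path, cell = [start], start
    while cost.get(cell, float("inf")) > 0.0:
        nbrs = [(cell[0] + dr, cell[1] + dk)
                for dr, dk in ((1, 0), (-1, 0), (0, 1), (0, -1))]
        cell = min(nbrs, key=lambda c: cost.get(c, float("inf")))
        path.append(cell)
    return path

grid = [[0, 0, 0, 0],
        [0, 1, 1, 0],     # 1 = obstacle
        [0, 0, 0, 0]]
cost = navigation_function(grid, goal=(2, 3))
print(descend(cost, start=(0, 0)))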

Book ChapterDOI
01 Jan 2000
TL;DR: A new approach to the cooperative localization problem, namely distributed multi-robot localization, is presented, and the resulting improvement in localization accuracy is demonstrated on a group of three robots.
Abstract: This paper presents a new approach to the cooperative localization problem, namely distributed multi-robot localization. A group of M robots is viewed as a single system composed of robots that carry, in general, different sensors and have different positioning capabilities. A single Kalman filter is formulated to estimate the position and orientation of all the members of the group. This centralized scheme is capable of fusing information provided by the sensors distributed on the individual robots while accommodating independencies and interdependencies among the collected data. In order to allow for distributed processing, the equations of the centralized Kalman filter are treated so that this filter can be decomposed into M modified Kalman filters each running on a separate robot. The distributed localization algorithm is applied to a group of 3 robots and the improvement in localization accuracy is presented.
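A toy, position-only version of the centralized filter, showing how a single relative observation between two robots improves the poorly localized one: the stacked state of both robots is updated with one standard Kalman measurement step. The covariances, measurement and noise values are assumptions; the paper's decomposition into M communicating filters is not shown.

import numpy as np

# Stacked state: [x1, y1, x2, y2] for two robots, with independent priors.
x = np.array([0.0, 0.0, 5.0, 5.0])
P = np.diag([4.0, 4.0, 0.25, 0.25])     # robot 1 poorly localized, robot 2 well

# Robot 2 measures robot 1's position relative to itself: z = (x1 - x2, y1 - y2).
H = np.array([[1.0, 0.0, -1.0, 0.0],
              [0.0, 1.0, 0.0, -1.0]])
R = 0.1 * np.eye(2)                      # relative-sensing noise
z = np.array([-4.6, -4.9])               # observed relative offset

# Standard Kalman measurement update on the joint state.
S = H @ P @ H.T + R
K = P @ H.T @ np.linalg.inv(S)
x = x + K @ (z - H @ x)
P = (np.eye(4) - K @ H) @ P

print("updated robot 1 estimate:", x[:2])
print("robot 1 position variance:", np.diag(P)[:2])   # much smaller than 4.0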

Journal ArticleDOI
01 May 2000-Robotica
TL;DR: This paper deals with an autonomous climbing robot that uses the “caterpillar” concept to climb complex 3D metallic-based structures, generating path and grasp plans that ensure stable self-support, avoid obstacles, and optimise the robot's energy consumption during inspection.
Abstract: Often inspection and maintenance work involve a large number of highly dangerous manual operations, especially within industrial fields such as shipbuilding and construction. This paper deals with an autonomous climbing robot that uses the “caterpillar” concept to climb complex 3D metallic-based structures. During its motion the robot generates path and grasp plans in real time in order to ensure stable self-support, to avoid obstacles in the environment, and to optimise the robot's energy consumption during the inspection. The control and monitoring of the robot is achieved through an advanced Graphical User Interface to allow an effective and user-friendly operation of the robot. The experiments confirm its advantages in executing the inspection operations.

Proceedings ArticleDOI
24 Apr 2000
TL;DR: This paper presents a graph theoretic method that is applicable to data association problems where the features are observed via a batch process, and describes it in the context of two possible navigation applications: metric map building with simultaneous localisation, and topological map based localisation.
Abstract: Data association is the process of relating features observed in the environment to features viewed previously or to features in a map. This paper presents a graph theoretic method that is applicable to data association problems where the features are observed via a batch process. Batch observations detect a set of features simultaneously or with sufficiently small temporal difference that, with motion compensation, the features can be represented with precise relative coordinates. This data association method is described in the context of two possible navigation applications: metric map building with simultaneous localisation, and topological map based localisation. Experimental results are presented using an indoor mobile robot with a 2D scanning laser sensor. Given two scans from different unknown locations, the features common to both scans are mapped to each other and the relative change in pose (position and orientation) of the vehicle between the two scans is obtained.
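To convey the graph-theoretic flavour with a toy example: candidate pairings between the features of two scans are kept only if they are mutually consistent (they preserve inter-feature distances), and a large consistent set yields the association. The greedy search below is a stand-in for the paper's actual method, and the scans are invented.

from itertools import combinations
from math import hypot

def consistent(p, q, tol=0.2):
    """Two candidate pairings are consistent if they preserve the distance
    between the involved features in both scans (a rigid-body invariant)."""
    (a1, b1), (a2, b2) = p, q
    da = hypot(a1[0] - a2[0], a1[1] - a2[1])
    db = hypot(b1[0] - b2[0], b1[1] - b2[1])
    return abs(da - db) < tol

def associate(scan_a, scan_b, tol=0.2):
    """Greedy search for a large set of mutually consistent pairings."""
    candidates = [(a, b) for a in scan_a for b in scan_b]
    best = []
    for seed in candidates:
        chosen = [seed]
        for c in candidates:
            if c is not seed and all(consistent(c, k, tol) for k in chosen):
                chosen.append(c)
        if len(chosen) > len(best):
            best = chosen
    return best

# Scan B is scan A translated by (1, 0.5), plus one spurious feature.
scan_a = [(0.0, 0.0), (2.0, 0.0), (2.0, 3.0)]
scan_b = [(1.0, 0.5), (3.0, 0.5), (3.0, 3.5), (9.0, 9.0)]
for a, b in associate(scan_a, scan_b):
    print(a, "->", b)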

Patent
15 Sep 2000
TL;DR: In this article, a navigation system for tracking the position of an object includes a GPS receiver (28) responsive to GPS signals for periodically providing navigation state measurement updates (162) to a navigator update unit (29).
Abstract: A navigation system for tracking the position of an object includes a GPS receiver (28) responsive to GPS signals for periodically providing navigation state measurement updates (162) to a navigator update unit (29). The system also includes a dead-reckoning sensor (98) responsive to movement of the object for providing movement measurements (164) to a sensor update unit (61). The sensor update unit (61) receives movement measurements (164) provided by the dead-reckoning sensor (98) and the navigation measurements (16) from the navigation update unit (29). The position change measurements (165) provided by the sensor update unit (61) and the dead-reckoning measurements of the navigation update unit (29) are utilized by a navigation propagation unit (110) to calculate a new or modified dead-reckoning measurement.

Book ChapterDOI
01 Jan 2000
TL;DR: A simple, abstract formalism is presented to express the key concepts of route-based navigation in a common scientific language and develops the notion of a route graph, which can serve as the basis for complex navigational knowledge.
Abstract: Navigation has always been an interdisciplinary topic of research, because mobile agents of different types are inevitably faced with similar navigational problems. Therefore, human navigation can readily be compared to navigation in other biological organisms or in artificial mobile agents like autonomous robots. One such navigational strategy, route-based navigation, in which an agent moves from one location to another by following a particular route, is the focus of this paper. Drawing on the research from cognitive psychology and linguistics, biology, and robotics, we present a simple, abstract formalism to express the key concepts of route-based navigation in a common scientific language. Starting with the distinction of places and route segments, we develop the notion of a route graph, which can serve as the basis for complex navigational knowledge. Implications and constraints of the model are discussed along the way, together with examples of different instantiations of parts of the model in different mobile agents. By providing this common conceptual framework, we hope to advance the interdisciplinary discussion of spatial navigation.
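A minimal route-graph sketch: places are nodes, route segments are directed edges annotated with the action that takes the agent along them, and a route is a path through the graph. The class, attribute names and example environment are illustrative, not the formalism's exact notation.

from collections import defaultdict, deque

class RouteGraph:
    """Places connected by directed route segments, queried for routes."""
    def __init__(self):
        self.segments = defaultdict(list)   # place -> [(next_place, action), ...]

    def add_segment(self, origin, target, action):
        self.segments[origin].append((target, action))

    def route(self, start, goal):
        """Breadth-first search returning the segment actions along one route."""
        queue = deque([(start, [])])
        visited = {start}
        while queue:
            place, actions = queue.popleft()
            if place == goal:
                return actions
            for nxt, action in self.segments[place]:
                if nxt not in visited:
                    visited.add(nxt)
                    queue.append((nxt, actions + [action]))
        return None

g = RouteGraph()
g.add_segment("entrance", "corridor", "go straight to the first junction")
g.add_segment("corridor", "lab", "turn left at the junction")
g.add_segment("corridor", "office", "turn right at the junction")
print(g.route("entrance", "office"))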

Patent
07 Apr 2000
TL;DR: An enhanced global positioning system and map navigation process, which incorporates the GPS position data and geospatial map data, is presented; it includes GPS-based efficient geospatial database access and query and a time-space filtering method to fully fuse the GPS position data and the geospatial map data to obtain enhanced navigation performance.
Abstract: An enhanced global positioning system and map navigation process, which incorporates the GPS position data and geospatial map data. This enhanced navigation process includes GPS-based efficient geospatial database access and query and a time-space filtering method to fully fuse the GPS position data and the geospatial map data to obtain enhanced navigation performance. The system features both portability and ease-of-use. It also accommodates mission specific database creation and value-adding techniques. The system includes personal navigation, land vehicle navigation, marine navigation, etc.

Proceedings ArticleDOI
03 Sep 2000
TL;DR: A segmentation method for line extraction in 2D range images uses a prototype-based fuzzy clustering algorithm in a split-and-merge framework and is aimed at mobile robot navigation systems for dynamic map building.
Abstract: This paper presents a segmentation method for line extraction in 2D range images. It uses a prototype-based fuzzy clustering algorithm in a split-and-merge framework. The split-and-merge structure allows one to use the fuzzy clustering algorithm without any previous knowledge of the number of prototypes. This algorithm is intended for use in mobile robot navigation systems for dynamic map building. Simulation results show its good performance compared to some classical approaches.
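A sketch of the split-and-merge skeleton that a prototype-based clustering step would plug into: an ordered scan is split recursively wherever the farthest point is too far from the line through the segment's endpoints. The fuzzy prototype fitting and the merge phase are omitted; the threshold and the toy scan are assumptions.

import numpy as np

def point_line_dist(points, a, b):
    """Perpendicular distance of each point to the line through a and b."""
    d = b - a
    n = np.array([-d[1], d[0]]) / np.linalg.norm(d)
    return np.abs((points - a) @ n)

def split_and_merge(points, threshold=0.05):
    """Recursively split an ordered range scan into straight segments."""
    a, b = points[0], points[-1]
    dist = point_line_dist(points, a, b)
    i = int(np.argmax(dist))
    if dist[i] > threshold and 2 < i < len(points) - 3:
        return split_and_merge(points[: i + 1], threshold) + \
               split_and_merge(points[i:], threshold)
    return [(a, b)]   # one segment; a fuzzy prototype fit could refine it here

# Toy "range image" in Cartesian form: two walls meeting in a corner.
xs = np.linspace(0.0, 1.0, 50)
wall1 = np.stack([xs, np.zeros(50)], axis=1)
wall2 = np.stack([np.ones(50), xs], axis=1)
scan = np.vstack([wall1, wall2])
for seg in split_and_merge(scan):
    print(np.round(seg[0], 2), "->", np.round(seg[1], 2))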

Proceedings ArticleDOI
31 Oct 2000
TL;DR: The required characteristics of the view for the view sequence are discussed, the former method of generating views is evaluated, and the disparity view sequence is applied to outdoor navigation.
Abstract: Recently, view-based or appearance-based approaches have been attracting the interest of computer vision researchers. Based on a similar idea, we have proposed a view-based navigation method using a model of the route called the "view sequence." It contains a sequence of frontal views along a route memorized in the teaching run, and the recognition of the environment is realized based on the matching of the current view against the memorized view sequence. In this paper, we discuss the required characteristics of the view for the view sequence, and evaluate our former method of generating views. Then we confirm through an experiment that the stereo disparity satisfies the requirements of the view sequence, and the disparity view sequence is applied to outdoor navigation. The experimental results indicate that views other than normal camera images can be utilized for our view-based navigation method.
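A sketch of the matching step assumed by view-sequence navigation: the current view is compared against a small window of memorized views ahead of the last matched position, and the best match localizes the robot along the route. Sum-of-squared-differences similarity and the toy views are illustrative; the original matching and steering computation are not reproduced.

import numpy as np

def match_view(current, view_sequence, last_index, window=2):
    """Find the memorized view most similar to the current view, searching
    only a small window ahead of the last matched position along the route."""
    lo = last_index
    hi = min(len(view_sequence), last_index + window + 1)
    errors = [np.sum((current - view_sequence[i]) ** 2) for i in range(lo, hi)]
    return lo + int(np.argmin(errors))

# Toy route memory: 10 memorized "views" (e.g. disparity images), 16x16 each.
rng = np.random.default_rng(2)
view_sequence = rng.random((10, 16, 16))
current = view_sequence[4] + 0.02 * rng.random((16, 16))   # we are near view 4
print("matched view index:", match_view(current, view_sequence, last_index=3))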

Patent
22 Nov 2000
TL;DR: In this article, a navigation feature identifies one or more places of the specified type that are convenient for both users to travel to, and provides the users with instructions for traveling to a selected place.
Abstract: A feature provided by a navigation system or a navigation services provider provides navigation-related services for plural users who have related needs. In one embodiment, a first user specifies a type of place at which to meet a second user who is at a location some distance away from the first user. The navigation feature identifies one or more places of the specified type that are convenient for both users to travel to. The navigation feature may also provide the users with instructions for traveling to a selected place.

Journal ArticleDOI
01 Mar 2000
TL;DR: The well-formulated and well-known laws of electrostatic fields are used to prove that the proposed approach generates an approximately optimal path (based on cell resolution) in a real-time frame.
Abstract: Proposes a solution to the two-dimensional (2-D) collision-free path planning problem for an autonomous mobile robot utilizing an electrostatic potential field (EPF) developed through a resistor network, derived to represent the environment. No assumptions are made about the amount of information contained in the a priori environment map (it may be completely empty) or the shape of the obstacles. The well-formulated and well-known laws of electrostatic fields are used to prove that the proposed approach generates an approximately optimal path (based on cell resolution) in a real-time frame. It is also proven through the classical laws of electrostatics that the derived potential function is a global navigation function (as defined by Rimon and Koditschek, 1992), that the field is free of all local minima and that all paths necessarily lead to the goal position. The complexity of the EPF-generated path is shown to be O(m n_M), where m is the total number of polygons in the environment and n_M is the maximum number of sides of a polygonal object. The method is tested both by simulation and experimentally on a Nomad200 mobile robot platform equipped with a ring of sixteen sonar sensors.
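The resistor network effectively solves a discrete Laplace equation over free space. The sketch below relaxes the same kind of harmonic potential by Jacobi iteration on a small grid (obstacles and borders clamped high, the goal clamped to zero) and then descends it; the grid, boundary values and iteration count are illustrative, and the paper's resistor-network formulation and complexity analysis are not reproduced.

import numpy as np

def harmonic_potential(occ, goal, iters=2000):
    """Relax a harmonic potential: obstacles/borders at 1, the goal at 0."""
    phi = np.ones_like(occ, dtype=float)
    free = occ == 0
    for _ in range(iters):
        # Average of the four neighbours (Jacobi update), applied to free cells.
        avg = 0.25 * (np.roll(phi, 1, 0) + np.roll(phi, -1, 0) +
                      np.roll(phi, 1, 1) + np.roll(phi, -1, 1))
        phi = np.where(free, avg, 1.0)
        phi[goal] = 0.0
    return phi

def descend(phi, start, occ, max_steps=200):
    """Steepest-descent path on the potential; harmonic fields have no local minima."""
    path, cell = [start], start
    for _ in range(max_steps):
        r, c = cell
        nbrs = [(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)]
        nbrs = [n for n in nbrs if 0 <= n[0] < phi.shape[0]
                and 0 <= n[1] < phi.shape[1] and occ[n] == 0]
        cell = min(nbrs, key=lambda n: phi[n])
        path.append(cell)
        if phi[cell] == 0.0:
            break
    return path

occ = np.zeros((10, 10), dtype=int)
occ[0, :] = occ[-1, :] = occ[:, 0] = occ[:, -1] = 1   # border walls
occ[4, 2:8] = 1                                       # internal wall
phi = harmonic_potential(occ, goal=(8, 5))
print(descend(phi, start=(1, 5), occ=occ))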

Proceedings ArticleDOI
10 Jul 2000
TL;DR: The paper presents a data fusion system for mobile robot navigation in which odometry and sonar signals are fused using an Extended Kalman Filter and an Adaptive Fuzzy Logic System; the fused signal is more accurate than either of the original signals considered separately.
Abstract: Autonomous robots and vehicles need accurate positioning and localization for their guidance, navigation and control. Often, two or more different sensors are used to obtain reliable data useful for control systems. The paper presents the data fusion system for mobile robot navigation. Odometry and sonar signals are fused using an Extended Kalman Filter (EKF) and Adaptive Fuzzy Logic System (AFLS). The signals used during navigation cannot be always considered as white noise signals. On the other hand, colored signals will cause the EKF to diverge. The AFLS was used to adapt the gain and therefore prevent Kalman filter divergence. The fused signal is more accurate than any of the original signals considered separately. The enhanced more accurate signal is used to guide and navigate the robot.
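A scalar toy illustrating the adaptation idea: when the normalised innovation is persistently large (as with coloured or biased sonar), the assumed measurement noise is inflated so the gain shrinks and the filter is kept from diverging. The one-line adaptation rule below merely stands in for the paper's adaptive fuzzy logic system; all models and numbers are assumptions.

import numpy as np

def adaptive_kf(odometry, sonar, q=0.01, r0=0.25):
    """1-D position filter fusing odometry increments with sonar fixes.

    A crude fuzzy-style adaptation inflates the measurement noise R when the
    normalised innovation grows, which keeps the gain small for suspect
    (e.g. coloured or biased) sonar readings and helps prevent divergence.
    """
    x, p, r = 0.0, 1.0, r0
    estimates = []
    for u, z in zip(odometry, sonar):
        x, p = x + u, p + q                     # prediction with odometry
        nu = z - x                              # innovation
        ratio = nu ** 2 / (p + r)               # normalised innovation squared
        # "Fuzzy" adaptation: small ratio -> trust sonar, large ratio -> distrust.
        r = r0 * (1.0 + min(ratio, 10.0))
        k = p / (p + r)                         # Kalman gain
        x, p = x + k * nu, (1.0 - k) * p
        estimates.append(x)
    return np.array(estimates)

# Robot moves 0.1 m per step; sonar is unbiased at first, then drifts (coloured).
truth = np.cumsum(np.full(40, 0.1))
odometry = np.full(40, 0.1) + np.random.default_rng(3).normal(0, 0.02, 40)
sonar = truth + np.concatenate([np.zeros(20), np.linspace(0, 1.0, 20)])
print(adaptive_kf(odometry, sonar)[-5:])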

Proceedings ArticleDOI
24 Apr 2000
TL;DR: A navigation algorithm, which integrates a virtual obstacle concept with a potential-field-based method to manoeuvre cylindrical mobile robots in unknown or unstructured environments, is presented.
Abstract: Presents a navigation algorithm, which integrates a virtual obstacle concept with a potential-field-based method, to manoeuvre cylindrical mobile robots in unknown or unstructured environments. This study focuses on the real-time feature of the navigation algorithm for fast-moving mobile robots. We mainly consider the potential-field method in conjunction with the virtual obstacle concept as the basis of our navigation algorithm. Simulations and experiments with our algorithm show good performance and the ability to overcome the local-minimum problem associated with potential field methods.

Journal ArticleDOI
01 May 2000
TL;DR: A hybrid interception scheme, which combines a navigation-based interception technique with a conventional trajectory-tracking method, is proposed for intercepting fast-maneuvering objects.
Abstract: Presents an approach to online, robot-motion planning for moving-object interception. The proposed approach utilizes a navigation-guidance-based technique, that is robust and computationally efficient for the interception of fast-maneuvering objects. Navigation-based techniques were originally developed for the control of missiles tracking free-flying targets. Unlike a missile, however, the end-effector of a robotic arm is connected to the ground, via a number of links and joints, subject to kinematic and dynamic constraints. Also, unlike a missile, the velocity of the robot and the moving object must be matched for a smooth grasp, thus, a hybrid interception scheme, which combines a navigation-based interception technique with a conventional trajectory tracking method is proposed herein for intercepting fast-maneuvering objects. The implementation of the proposed technique is illustrated via numerous simulation examples.
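Navigation-guidance laws steer so that the line of sight to the target stops rotating. The sketch below implements plain proportional navigation for a point pursuer chasing a constant-velocity target, ignoring the manipulator's kinematic and dynamic constraints and the trajectory-tracking phase used for velocity matching near the grasp; the gain and the scenario are illustrative.

import numpy as np

def pro_nav_step(p_r, v_r, p_t, v_t, dt=0.05, gain=3.0):
    """One proportional-navigation step: turn the pursuer's velocity at a rate
    proportional to the rotation rate of the line of sight to the target."""
    los = p_t - p_r
    rel_v = v_t - v_r
    # Line-of-sight rotation rate (2-D scalar cross product / range squared).
    los_rate = (los[0] * rel_v[1] - los[1] * rel_v[0]) / (los @ los)
    turn = gain * los_rate * dt
    c, s = np.cos(turn), np.sin(turn)
    v_r = np.array([[c, -s], [s, c]]) @ v_r     # rotate pursuer velocity
    return p_r + v_r * dt, v_r

p_r, v_r = np.array([0.0, 0.0]), np.array([0.6, 0.0])    # pursuer (end-effector)
p_t, v_t = np.array([2.0, 1.0]), np.array([0.0, -0.3])   # moving object
for _ in range(200):
    p_r, v_r = pro_nav_step(p_r, v_r, p_t, v_t)
    p_t = p_t + v_t * 0.05
    if np.linalg.norm(p_t - p_r) < 0.05:
        break
print("miss distance:", round(float(np.linalg.norm(p_t - p_r)), 3))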

Journal ArticleDOI
TL;DR: Describes the autonomous robot system, the web-based interfaces, and how they communicate with the robot, and includes recommendations for putting future mobile robots on the web.
Abstract: We have been running an experiment in web-based interaction with an autonomous indoor mobile robot. The robot, called Xavier, can accept commands to travel to different offices in our building, broadcasting camera images as it travels. The experiment, which was originally designed to test a new navigation algorithm, has proven very successful. This article describes the autonomous robot system, the web-based interfaces, and how they communicate with the robot. It highlights lessons learned during this experiment in web-based robotics and includes recommendations for putting future mobile robots on the web.

Book ChapterDOI
11 Dec 2000
TL;DR: Describes some functionalities currently running on board the Marsokhod model robot Lama at LAAS/CNRS, and focuses on the necessity of integrating various instances of the perception and decision functionalities.
Abstract: Autonomous long range navigation in partially known planetary-like terrain is an open challenge for robotics. Navigating several hundreds of meters without any human intervention requires the robot to be able to build various representations of its environment, to plan and execute trajectories according to the kind of terrain traversed, to localize itself as it moves, and to schedule, start, control and interrupt these various activities. In this paper, we briefly describe some functionalities that are currently running on board the Marsokhod model robot Lama at LAAS/CNRS. We then focus on the necessity to integrate various instances of the perception and decision functionalities, and on the difficulties raised by this integration.