
Showing papers on "Mobile robot navigation" published in 2005


Book
17 Jan 2005
TL;DR: After the introduction of fast-moving vehicles, and later when defensive or hostile weapons came into use, it was no longer sufficient to know where the platform was located; it became vital to be aware of its momentary alignment in three-dimensional space.
Abstract: photographing - not to mention walking in the city - plus those of us engaged with defense activities can state it is more convenient to get lost if one knows where this happens. Perhaps this is one of the key reasons why methods and technologies for navigation have been an area of continuing efforts and interest. After the introduction of fast-moving vehicles, and later when defensive or hostile weapons came into use, it was not sufficient to know where the platform was located; it was vital to be aware of its momentary alignment, of course, in a three-dimensional space. New challenges were put on the shoulders of the navigator. When time, equipment, and location allow, navigation relying on external references such as radio beacons on the ground or up in space orbits is often preferred. However, such cooperative systems may not be available, or their performance may be inadequate for the short time constants of platform motion. We are thus forced to use autonomous navigation modes. It is here that inertial navigation systems have their place.

657 citations


Journal ArticleDOI
TL;DR: A technique is proposed for learning collections of trajectories that characterize typical motion patterns of persons, together with a method for incorporating the probabilistic belief about the potential trajectories of persons into the path planning process of a mobile robot.
Abstract: Whenever people move through their environments they do not move randomly. Instead, they usually follow specific trajectories or motion patterns corresponding to their intentions. Knowledge about such patterns enables a mobile robot to robustly keep track of persons in its environment and to improve its behavior. In this paper we propose a technique for learning collections of trajectories that characterize typical motion patterns of persons. Data recorded with laser-range finders are clustered using the expectation maximization algorithm. Based on the result of the clustering process, we derive a hidden Markov model that is applied to estimate the current and future positions of persons based on sensory input. We also describe how to incorporate the probabilistic belief about the potential trajectories of persons into the path planning process of a mobile robot. We present several experiments carried out in different environments with a mobile robot equipped with a laser-range scanner and a camera system. The results demonstrate that our approach can reliably learn motion patterns of persons, can robustly estimate and predict positions of persons, and can be used to improve the navigation behavior of a mobile robot.
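To make the estimation step concrete, here is a minimal sketch of the kind of HMM-style predict/update cycle the abstract describes, assuming positions have been discretized along a learned motion pattern. It is not the authors' implementation; the transition matrix and observation likelihoods below are illustrative placeholders.

```python
# Minimal sketch (not the authors' implementation): propagate a belief over a
# few discretized positions along a learned motion pattern with an HMM-style
# predict/update step. Transition and likelihood values are toy data.
import numpy as np

def hmm_forward_step(belief, transition, likelihood):
    """One predict/update cycle of a discrete Bayes (HMM forward) filter."""
    predicted = transition.T @ belief          # motion prediction along the pattern
    posterior = predicted * likelihood         # weight by sensor likelihood
    return posterior / posterior.sum()         # normalize

# Toy example: 4 positions along a corridor pattern, person tends to move forward.
belief = np.array([1.0, 0.0, 0.0, 0.0])
transition = np.array([[0.2, 0.8, 0.0, 0.0],
                       [0.0, 0.2, 0.8, 0.0],
                       [0.0, 0.0, 0.2, 0.8],
                       [0.0, 0.0, 0.0, 1.0]])
likelihood = np.array([0.1, 0.6, 0.25, 0.05])  # e.g., from a laser-based person detector
print(hmm_forward_step(belief, transition, likelihood))
```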

430 citations


Journal ArticleDOI
TL;DR: An incremental SLAM algorithm derived from multigrid methods for solving partial differential equations is introduced; it has an update time that is linear in the number of estimated features for typical indoor environments, even when closing very large loops.
Abstract: This paper addresses the problem of simultaneous localization and mapping (SLAM) by a mobile robot. An incremental SLAM algorithm is introduced that is derived from multigrid methods used for solving partial differential equations. The approach improves on the performance of previous relaxation methods for robot mapping, because it optimizes the map at multiple levels of resolution. The resulting algorithm has an update time that is linear in the number of estimated features for typical indoor environments, even when closing very large loops, and offers advantages in handling nonlinearities compared with other SLAM algorithms. Experimental comparisons with alternative algorithms using two well-known data sets and mapping results on a real robot are also presented.
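For intuition, the sketch below shows single-level relaxation on a tiny one-dimensional pose graph, the kind of iteration whose convergence the paper's multigrid formulation accelerates by also solving coarser versions of the problem. It is an illustration only, with toy constraint values, not the paper's multilevel algorithm.

```python
# Illustrative sketch only: Gauss-Seidel-style relaxation on a 1-D pose graph.
# Each node is repeatedly set to the average of the positions predicted by the
# constraints that touch it; pose 0 is held fixed as the reference frame.
import numpy as np

def relax(poses, constraints, iterations=50):
    """constraints: list of (i, j, measured_offset) meaning x_j - x_i ~ measured_offset."""
    poses = poses.copy()
    for _ in range(iterations):
        for k in range(1, len(poses)):
            estimates = []
            for i, j, d in constraints:
                if j == k:
                    estimates.append(poses[i] + d)
                elif i == k:
                    estimates.append(poses[j] - d)
            if estimates:
                poses[k] = np.mean(estimates)
    return poses

poses = np.zeros(4)                                                  # initial guess
constraints = [(0, 1, 1.0), (1, 2, 1.1), (2, 3, 0.9), (0, 3, 3.1)]   # odometry + loop closure
print(relax(poses, constraints))
```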

406 citations


Proceedings ArticleDOI
18 Apr 2005
TL;DR: A feature detection system for real-time identification of lines, circles and people legs from laser range data is developed and a new method suitable for arc/circle detection is proposed: the Inscribed Angle Variance (IAV).
Abstract: A feature detection system has been developed for real-time identification of lines, circles and people legs from laser range data. A new method suitable for arc/circle detection is proposed: the Inscribed Angle Variance (IAV). Lines are detected using a recursive line fitting method. The people leg detection is based on geometrical relations. The system was implemented as a plugin driver in Player, a mobile robot server. Real results are presented to verify the effectiveness of the proposed algorithms in indoor environment with moving objects.
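The core of the IAV test can be sketched in a few lines: by the inscribed angle theorem, the angle subtended at each interior point of a segment by the two segment endpoints is constant when the points lie on a circular arc, so a low variance of those angles indicates an arc or circle. The sketch below illustrates the idea on synthetic points; thresholds and data are illustrative, not the authors' implementation.

```python
# Minimal sketch of the Inscribed Angle Variance (IAV) idea on synthetic data.
import numpy as np

def inscribed_angle_variance(points):
    """points: (n, 2) array of consecutive laser points in Cartesian coordinates."""
    p_first, p_last = points[0], points[-1]
    angles = []
    for p in points[1:-1]:
        v1, v2 = p_first - p, p_last - p
        cos_a = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
        angles.append(np.arccos(np.clip(cos_a, -1.0, 1.0)))
    return np.mean(angles), np.var(angles)

# Points sampled from a circle should give a near-zero variance.
theta = np.linspace(0.3, 2.5, 15)
circle_points = np.column_stack([np.cos(theta), np.sin(theta)])
mean_angle, variance = inscribed_angle_variance(circle_points)
print(f"mean inscribed angle {mean_angle:.3f} rad, variance {variance:.6f}")
```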

205 citations


01 Jan 2005
TL;DR: A novel system for autonomous mobile robot navigation with only an omnidirectional camera as sensor is presented, able to build automatically and robustly accurate topologically organised environment maps of a complex, natural environment.
Abstract: In this work we present a novel system for autonomous mobile robot navigation. With only an omnidirectional camera as sensor, this system is able to build automatically and robustly accurate topologically organised environment maps of a complex, natural environment. It can localise itself using such a map at each moment, including both at startup (kidnapped robot) or using knowledge of former localisations. The topological nature of the map is similar to the intuitive maps humans use, is memory-efficient and enables fast and simple path planning towards a specified goal. We developed a real-time visual servoing technique to steer the system along the computed path. A key technology making this all possible is the novel fast wide baseline feature matching, which yields an efficient description of the scene, with a focus on man-made environments.

198 citations


Proceedings ArticleDOI
18 Apr 2005
TL;DR: The equations for the estimator are derived for the most general relative observation between two robots; three special cases of relative observations are then considered, and the structure of the filter is presented for each case.
Abstract: In this paper we consider the problem of simultaneously localizing all members of a team of robots. Each robot is equipped with proprioceptive sensors and exteroceptive sensors. The latter provide relative observations between the robots. Proprioceptive and exteroceptive data are fused with an Extended Kalman Filter. We derive the equations for this estimator for the most general relative observation between two robots. Then we consider three special cases of relative observations and we present the structure of the filter for each case. Finally, we study the performance of the approach through many accurate simulations.
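As a hedged illustration (not the paper's derivation), the sketch below fuses a relative observation between two robots in a joint Kalman filter, assuming the simplest special case where the measurement is the position of robot 2 relative to robot 1 expressed in the global frame, so the observation Jacobian is constant. State layout and numbers are invented for the example.

```python
# Hedged sketch: joint filter update for two robots with state [x1, y1, x2, y2]
# and a relative observation z = p2 - p1 expressed in the global frame.
import numpy as np

def relative_update(x, P, z, R):
    H = np.array([[-1.0, 0.0, 1.0, 0.0],
                  [0.0, -1.0, 0.0, 1.0]])       # d(p2 - p1)/dx
    y = z - H @ x                                # innovation
    S = H @ P @ H.T + R                          # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)               # Kalman gain
    return x + K @ y, (np.eye(len(x)) - K @ H) @ P

x = np.array([0.0, 0.0, 5.2, 1.1])               # prior estimates of both robots
P = np.diag([0.5, 0.5, 2.0, 2.0])                # robot 2 is poorly localized
z = np.array([5.0, 1.0])                         # measured relative position
R = 0.05 * np.eye(2)
x, P = relative_update(x, P, z, R)
print(x)                                         # robot 2's estimate is pulled toward robot 1's
```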

181 citations


Proceedings ArticleDOI
18 Apr 2005
TL;DR: A navigation function through which a group of mobile agents can be coordinated to achieve a particular formation, both in terms of shape and orientation, while avoiding collisions between themselves and with obstacles in the environment is presented.
Abstract: We present a navigation function through which a group of mobile agents can be coordinated to achieve a particular formation, both in terms of shape and orientation, while avoiding collisions between themselves and with obstacles in the environment. Convergence is global and complete, subject to the constraints of the navigation function methodology. Algebraic graph theoretic properties associated with the interconnection graph are shown to affect the shape of the navigation function. The approach is centralized but the potential function is constructed in a way that facilitates complete decentralization. The strategy presented will also serve as a point of reference and comparison in quantifying the cost of decentralization in terms of performance.
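The following is a simple potential-field sketch of the general idea, not the paper's navigation function: agents descend the numerical gradient of a cost that combines desired inter-agent distances (formation shape) with repulsion from an obstacle. Gains, distances, and the obstacle are invented for illustration.

```python
# Illustrative potential-field sketch for formation keeping with obstacle avoidance.
import numpy as np

desired = {(0, 1): 2.0, (1, 2): 2.0, (0, 2): 2.0}      # target pairwise distances (triangle)
obstacle, safe_radius = np.array([3.0, 0.0]), 1.0

def potential(positions):
    cost = 0.0
    for (i, j), d in desired.items():
        cost += (np.linalg.norm(positions[i] - positions[j]) - d) ** 2
    for p in positions:
        r = np.linalg.norm(p - obstacle)
        cost += 1.0 / max(r - safe_radius, 1e-3)        # repulsion near the obstacle
    return cost

def step(positions, eps=1e-4, gain=0.05):
    grad = np.zeros_like(positions)
    for k in np.ndindex(positions.shape):               # numerical gradient
        d = np.zeros_like(positions)
        d[k] = eps
        grad[k] = (potential(positions + d) - potential(positions - d)) / (2 * eps)
    return positions - gain * grad                      # gradient descent on the potential

agents = np.array([[0.0, 0.0], [0.5, 1.5], [1.0, -1.0]])
for _ in range(200):
    agents = step(agents)
print(agents)
```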

158 citations


Proceedings ArticleDOI
05 Dec 2005
TL;DR: This paper presents the first ever amphibious transition from walking to swimming, and provides an overview of some of the basic capabilities of the vehicle and its associated sensors.
Abstract: We describe recent results obtained with AQUA, a mobile robot capable of swimming, walking and amphibious operation. Designed to rely primarily on visual sensors, the AQUA robot uses vision to navigate underwater using servo-based guidance, and also to obtain high-resolution range scans of its local environment. This paper describes some of the pragmatic and logistic obstacles encountered, and provides an overview of some of the basic capabilities of the vehicle and its associated sensors. Moreover, this paper presents the first ever amphibious transition from walking to swimming.

153 citations


Journal Article
TL;DR: Extensive experiments with a user population show that the added haptic feedback significantly improves operator performance in several ways (reduced collisions, increased minimum distance between the robot and obstacles) without a significant increase in navigation time.
Abstract: We address the problem of teleoperating a mobile robot using shared autonomy: an on-board controller performs obstacle avoidance while the operator uses the manipulandum of a haptic probe to designate the desired speed and rate of turn. Sensors on the robot are used to measure obstacle range information. We describe a strategy to convert such range information into forces, which are reflected to the operator's hand, via the haptic probe. This haptic information provides feedback to the operator in addition to imagery from a front-facing camera mounted on the mobile robot. Extensive experiments with a user population show that the added haptic feedback significantly improves operator performance in several ways (reduced collisions, increased minimum distance between the robot and obstacles) without a significant increase in navigation time.
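A minimal sketch of the general idea (not the authors' exact mapping) is given below: obstacle ranges from a laser scan are converted into a repulsive force vector that could be reflected onto the operator's hand through a haptic device. The gain and distance threshold are illustrative.

```python
# Convert laser ranges into a 2-D repulsive force for haptic feedback (sketch).
import numpy as np

def range_to_force(ranges, angles, d_max=2.0, gain=1.5):
    """Sum of repulsive contributions from beams closer than d_max (metres)."""
    force = np.zeros(2)
    for r, a in zip(ranges, angles):
        if r < d_max:
            magnitude = gain * (1.0 / r - 1.0 / d_max)              # grows as obstacles get close
            force -= magnitude * np.array([np.cos(a), np.sin(a)])   # push away from the beam
    return force

angles = np.linspace(-np.pi / 2, np.pi / 2, 181)   # 180-degree scan
ranges = np.full_like(angles, 5.0)
ranges[80:100] = 0.6                               # obstacle straight ahead
print(range_to_force(ranges, angles))              # force mostly pushes backwards
```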

144 citations


Proceedings ArticleDOI
05 Dec 2005
TL;DR: A multiple target tracking approach for following and passing persons in the context of human-robot interaction is presented, together with a description of how navigation and person following are subsumed under human augmented mapping.
Abstract: This paper presents a multiple target tracking approach for following and passing persons in the context of human-robot interaction. The general purpose for the approach is the use in human augmented mapping. This concept is presented and it is described how navigation and person following are subsumed under it. Results from experiments under test conditions and from data collected during a user study are also provided.

136 citations


Patent
03 May 2005
TL;DR: In this patent, a filtering process is based on the output side of a multimedia decoder, where a navigator, located at a server system, monitors the current play position of multimedia content at a consumer system and compares that position with navigation objects.
Abstract: In accordance with the present invention a filtering process is based on the output side of a multimedia decoder. A navigator, located at a server system, monitors the current play position of multimedia content at a consumer system and compares that position with navigation objects. Each navigation object defines a start position, a stop position, and a filtering action to perform on the portion of the multimedia content that begins at the start position and ends at the stop position. When the current play position falls within the portion of multimedia content defined by a particular navigation object, the navigator sends the filtering action to the consumer system for processing. Filtering actions include skipping, muting, reframing, etc., the portion of multimedia content defined by a navigation object. Alternatively, the navigator may be located at a consumer system and the server system may provide access to the navigation objects (e.g. download) so that the consumer system monitors and filters the multimedia content based on the received navigation objects. A variety of systems may be used to implement the present invention, such as computer systems (consumer and server), television systems, and audio systems.
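The navigation-object structure the patent describes can be sketched as a simple record plus a position check; the class and field names below are illustrative and not taken from any actual implementation.

```python
# Sketch of the patent's navigation-object idea: each object defines a start
# position, a stop position, and a filtering action; the navigator checks the
# current play position against them.
from dataclasses import dataclass

@dataclass
class NavigationObject:
    start: float          # start position (e.g., seconds into the content)
    stop: float           # stop position
    action: str           # e.g., "skip", "mute", "reframe"

def actions_at(play_position, navigation_objects):
    """Return the filtering actions that apply at the current play position."""
    return [obj.action for obj in navigation_objects
            if obj.start <= play_position < obj.stop]

objects = [NavigationObject(120.0, 135.0, "skip"),
           NavigationObject(300.0, 302.5, "mute")]
print(actions_at(301.0, objects))   # -> ['mute']
```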

Proceedings ArticleDOI
03 Oct 2005
TL;DR: It is suggested that through simple changes in a robot's persona, the robot can elicit different levels of information from users - less if the robot's goal is efficient speech, more if the robot's goal is redundancy, description, explanation, and elaboration.
Abstract: A conversational robot can take on different personas that have more or less common ground with users. With more common ground, communication is more efficient. We studied this process experimentally. A "male" or "female" robot queried users about romantic dating norms. We expected users to assume a female robot knows more about dating norms than a male robot. If so, users should describe dating norms efficiently to a female robot but elaborate on these norms to a male robot. Users, especially women discussing norms for women, used more words explaining dating norms to the male robot than to a female robot. We suggest that through simple changes in a robot's persona, we can elicit different levels of information from users - less if the robot's goal is efficient speech, more if the robot's goal is redundancy, description, explanation, and elaboration.

Journal ArticleDOI
01 Dec 2005
TL;DR: A layered goal-oriented motion planning strategy using fuzzy logic is developed for a mobile robot navigating in an unknown environment and is implemented on a real mobile robot, Koala, and tested in various environments.
Abstract: Most conventional motion planning algorithms that are based on the model of the environment cannot perform well when dealing with the navigation problem for real-world mobile robots where the environment is unknown and can change dynamically. In this paper, a layered goal-oriented motion planning strategy using fuzzy logic is developed for a mobile robot navigating in an unknown environment. The information about the global goal and the long-range sensory data are used by the first layer of the planner to produce an intermediate goal, referred to as the way-point, that gives a favorable direction in terms of seeking the goal within the detected area. The second layer of the planner takes this way-point as a subgoal and, using short-range sensory data, guides the robot to reach the subgoal while avoiding collisions. The resulting path, connecting an initial point to a goal position, is similar to the path produced by the visibility graph motion planning method, but in this approach there is no assumption about the environment. Due to its simplicity and capability for real-time implementation, fuzzy logic has been used for the proposed motion planning strategy. The resulting navigation system is implemented on a real mobile robot, Koala, and tested in various environments. Experimental results are presented which demonstrate the effectiveness of the proposed fuzzy navigation system.
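For a flavour of the fuzzy-logic machinery involved, here is a toy steering controller that combines "obstacle near" and "goal to the left/right" memberships with a weighted-average defuzzification. The membership shapes, rules, and gains are invented and do not reproduce the paper's layered planner.

```python
# Toy fuzzy-logic steering sketch (not the paper's rule base).
import numpy as np

def near(d):            # obstacle-near membership: 1 at 0 m, 0 beyond 1.5 m
    return float(np.clip(1.0 - d / 1.5, 0.0, 1.0))

def left(bearing):      # goal-left membership (bearing in radians, positive = left)
    return float(np.clip(bearing / (np.pi / 2), 0.0, 1.0))

def right(bearing):
    return float(np.clip(-bearing / (np.pi / 2), 0.0, 1.0))

def fuzzy_steering(front_distance, goal_bearing):
    # Rules: IF goal left THEN turn left; IF goal right THEN turn right;
    #        IF obstacle near THEN turn hard away (here: hard right).
    rules = [(left(goal_bearing),   +0.5),
             (right(goal_bearing),  -0.5),
             (near(front_distance), -1.0)]
    weights = sum(w for w, _ in rules)
    return sum(w * out for w, out in rules) / weights if weights > 0 else 0.0

print(fuzzy_steering(front_distance=0.8, goal_bearing=0.6))   # avoid while seeking the goal
```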

Proceedings ArticleDOI
05 Dec 2005
TL;DR: A complete system for outdoor robot navigation that uses only monocular vision is presented; a three-dimensional map of the trajectory and the environment is built from a video sequence recorded during a human-guided learning step.
Abstract: In this paper, a complete system for outdoor robot navigation is presented. It uses only monocular vision. The robot is first guided on a path by a human. During this learning step, the robot records a video sequence. From this sequence, a three dimensional map of the trajectory and the environment is built. When this map has been computed, the robot is able to follow the same trajectory by itself. Experimental results carried out with an urban electric vehicle are shown and compared to the ground truth.

Proceedings ArticleDOI
05 Dec 2005
TL;DR: This paper presents a new method for incremental mapping using fingerprints of places that permits a reliable, compact, and distinctive environment-modeling and makes navigation and localization easier for the robot.
Abstract: Even today, robot mapping is one of the biggest challenges in mobile robotics. Geometric or topological maps can be used by a robot to navigate in the environment. Automatic creation of such maps is still problematic if the robot tries to map large environments. This paper presents a new method for incremental mapping using fingerprints of places. This type of representation permits a reliable, compact, and distinctive environment-modeling and makes navigation and localization easier for the robot. Experimental results for incremental mapping using a mobile robot equipped with a multi-sensor system composed of two 180° laser range finders and an omni-directional camera are also reported.

01 Jan 2005
TL;DR: In this paper, a hierarchical formulation of POMDPs for autonomous robot navigation is proposed, which can be solved in real-time, and is memory efficient, and can effectively model large environments at a fine resolution.
Abstract: This paper proposes a new hierarchical formulation of POMDPs for autonomous robot navigation that can be solved in real-time, and is memory efficient. It will be referred to in this paper as the Robot Navigation-Hierarchical POMDP (RN-HPOMDP). The RN-HPOMDP is utilized as a unified framework for autonomous robot navigation in dynamic environments. As such, it is used for localization, planning and local obstacle avoidance. Hence, the RN-HPOMDP decides at each time step the actions the robot should execute, without the intervention of any other external module for obstacle avoidance or localization. Our approach employs state space and action space hierarchy, and can effectively model large environments at a fine resolution. Finally, the notion of the reference POMDP is introduced. The latter holds all the information regarding motion and sensor uncertainty, which makes the proposed hierarchical structure memory efficient and enables fast learning. The RN-HPOMDP has been experimentally validated in real dynamic environments.
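As background, the sketch below shows the basic flat-POMDP belief update whose cost the paper's hierarchical structure is designed to reduce; it is not the RN-HPOMDP itself, and the transition and observation models are illustrative toy values.

```python
# Minimal flat-POMDP (discrete Bayes filter) belief update.
import numpy as np

def belief_update(belief, T, Z, action, observation):
    """belief: (n,) prior over states; T[a]: (n, n) transition; Z[o]: (n,) observation likelihood."""
    predicted = T[action].T @ belief           # prediction step
    posterior = Z[observation] * predicted     # correction step
    return posterior / posterior.sum()

# 3 states (corridor cells), 1 action ("forward"), 2 observations ("door", "wall").
T = {"forward": np.array([[0.1, 0.9, 0.0],
                          [0.0, 0.1, 0.9],
                          [0.0, 0.0, 1.0]])}
Z = {"door": np.array([0.7, 0.1, 0.7]), "wall": np.array([0.3, 0.9, 0.3])}
belief = np.array([1 / 3, 1 / 3, 1 / 3])
belief = belief_update(belief, T, Z, "forward", "door")
print(belief)
```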

01 Jan 2005
TL;DR: Preliminary data are provided on the impact of vehicle navigation system use on the formation of drivers' cognitive maps.
Abstract: Vehicle navigation systems aim to support drivers in strategic (e.g., route choice) and tactical components of the overall driving task and, as such, they provide a relatively novel means by which individuals acquire and use spatial information. There has been considerable interest from researchers and practitioners in the design and evaluation of user interfaces for vehicle navigation systems. This emphasis is to be expected given that this technology is arguably the most sophisticated with which drivers have had to interact in vehicles. A mediating factor critical to these issues concerns the extent to which drivers develop a cognitive map when using a vehicle navigation system. The level of such internal knowledge will inevitably affect dependency on an external source of navigation information. Although there have been numerous mentions of the importance of this issue, remarkably few empirical studies have been undertaken. Furthermore, existing studies have been limited in three key aspects: requiring drivers to passively watch videos of interconnected routes while listening to navigation decisions, rather than actively partaking in navigation tasks; artificially motivating participants to learn the area to which they are traveling, by continually testing cognitive map development; and utilizing indirect rather than direct measures of cognitive map development. This paper aims to provide some preliminary data related to the impact of vehicle navigation system use on the formation of drivers' cognitive maps.

Proceedings ArticleDOI
09 May 2005
TL;DR: This talk will provide an overview of the approach to multi-robot exploration and mapping developed within the CentiBots project; results from a rigorous outside evaluation demonstrate that the system is highly robust and that the maps generated by the robots are more accurate than maps generated by a human.
Abstract: Efficient exploration of unknown environments is a fundamental problem in mobile robotics. As autonomous exploration and map building becomes increasingly robust on single robots, the next challenge is to extend these techniques to large teams of robots. This talk will provide an overview of our approach to multi-robot exploration and mapping, which we developed within the CentiBots project. This project aimed at fielding 100 robots in an indoor exploration and surveillance task. A general solution to distributed exploration must consider some difficult issues, including limited communication between robots, no assumptions about relative start locations of the robots, and dynamic assignments of processing tasks. The focus of this talk will be on our current solutions to the problems of robot localization, map building, and coordinated exploration. As part of the CentiBots project, our system was evaluated rigorously by an outside team. We present results from this evaluation that demonstrate that the system is highly robust and that the maps generated by our robots are more accurate than maps generated by a human.

Book ChapterDOI
01 Jan 2005
TL;DR: This paper introduces the application of a sensor network to navigate a flying robot and uses this system in a large-scale outdoor experiment with Mote sensors to guide an autonomous helicopter along a path encoded in the network.
Abstract: This paper introduces the application of a sensor network to navigate a flying robot. We have developed distributed algorithms and efficient geographic routing techniques to incrementally guide one or more robots to points of interest based on sensor gradient fields, or along paths defined in terms of Cartesian coordinates. The robot itself is an integral part of the localization process which establishes the positions of sensors which are not known a priori. We use this system in a large-scale outdoor experiment with Mote sensors to guide an autonomous helicopter along a path encoded in the network.
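The gradient-field guidance idea can be sketched very compactly: each sensor node stores its hop count to a goal node (computable by local message flooding), and the robot repeatedly moves toward the neighbouring node with the smallest value. The network topology and node names below are invented for illustration; this is not the paper's routing implementation.

```python
# Sketch of gradient-field guidance over a sensor network.
from collections import deque

edges = {"a": ["b"], "b": ["a", "c", "d"], "c": ["b", "goal"],
         "d": ["b", "goal"], "goal": ["c", "d"]}

def hop_count_field(goal):
    """Breadth-first flood from the goal, as nodes could compute by local messaging."""
    field, frontier = {goal: 0}, deque([goal])
    while frontier:
        node = frontier.popleft()
        for nbr in edges[node]:
            if nbr not in field:
                field[nbr] = field[node] + 1
                frontier.append(nbr)
    return field

def follow_gradient(start, goal):
    field = hop_count_field(goal)
    node, path = start, [start]
    while node != goal:
        node = min(edges[node], key=field.get)   # move to the neighbour with the lowest value
        path.append(node)
    return path

print(follow_gradient("a", "goal"))   # e.g. -> ['a', 'b', 'c', 'goal']
```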

Proceedings ArticleDOI
18 Apr 2005
TL;DR: A new replanning algorithm is presented that generates paths equivalent to Focussed Dynamic A* while requiring about half its computation time; like D*, it incrementally repairs previous paths and focusses these repairs towards the current robot position.
Abstract: Mobile robots are often required to navigate environments for which prior maps are incomplete or inaccurate. In such cases, initial paths generated for the robots may need to be amended as new information is received that is in conflict with the original maps. The most widely used algorithm for performing this path replanning is Focussed Dynamic A* (D*), which is a generalization of A* for dynamic environments. D* has been shown to be up to two orders of magnitude faster than planning from scratch. In this paper, we present a new replanning algorithm that generates equivalent paths to D* while requiring about half its computation time. Like D*, our algorithm incrementally repairs previous paths and focusses these repairs towards the current robot position. However, it performs these repairs in a novel way that leads to improved efficiency.

Proceedings ArticleDOI
02 Dec 2005
TL;DR: The aim is to build a planner that explicitly takes the human partner into account by reasoning about his accessibility, his vision field and potential shared motions, and to develop an algorithmic framework able to integrate knowledge acquired through user trials.
Abstract: Robot navigation in the presence of humans raises new issues for motion planning and control, since the humans' safety and comfort must be taken explicitly into account. We claim that a human-aware motion planner must not only elaborate safe robot paths, but also plan good, socially acceptable and legible paths. Our aim is to build a planner that takes explicitly into account the human partner by reasoning about his accessibility, his vision field and potential shared motions. This paper focuses on a navigation planner that takes into account the human's existence explicitly. This planner is part of a human-aware motion and manipulation planning and control system that we aim to develop in order to achieve motion and manipulation tasks in a collaborative way with the human. We are conducting research in a multidisciplinary perspective, (1) running user studies and (2) developing an algorithmic framework able to integrate knowledge acquired through the trials. We illustrate here a first step by implementing a human-friendly approach motion by the robot.
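As a toy illustration of two of the criteria mentioned above (safety and the human's vision field), the sketch below builds a cost grid that penalizes cells close to the human and cells behind the human's facing direction. It is not the authors' planner; the weights and grid parameters are invented.

```python
# Toy human-aware cost map: safety term (proximity) plus "visibility" term
# (being behind the human's facing direction).
import numpy as np

def human_aware_costmap(shape, human_xy, human_heading,
                        w_safety=3.0, w_visibility=1.0, safety_sigma=1.5):
    ys, xs = np.mgrid[0:shape[0], 0:shape[1]]
    dx, dy = xs - human_xy[0], ys - human_xy[1]
    dist = np.hypot(dx, dy)
    safety = np.exp(-dist**2 / (2 * safety_sigma**2))          # high cost near the human
    facing = np.array([np.cos(human_heading), np.sin(human_heading)])
    cos_angle = (dx * facing[0] + dy * facing[1]) / np.maximum(dist, 1e-6)
    visibility = np.clip(-cos_angle, 0.0, 1.0)                  # high cost behind the human
    return w_safety * safety + w_visibility * visibility

costs = human_aware_costmap((20, 20), human_xy=(10, 10), human_heading=0.0)
print(costs.round(2))
```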

Proceedings Article
09 Jul 2005
TL;DR: This paper presents a novel approach to estimating typical configurations of dynamic areas in the environment of a mobile robot by clustering local grid maps to identify the possible configurations.
Abstract: Whenever mobile robots act in the real world, they need to be able to deal with non-static objects. In the context of mapping, a common technique to deal with dynamic objects is to filter out the spurious measurements corresponding to such objects. In this paper, we present a novel approach to estimate typical configurations of dynamic areas in the environment of a mobile robot. Our approach clusters local grid maps to identify the possible configurations. We furthermore describe how these clusters can be utilized within a Rao-Blackwellized particle filter to localize a mobile robot in a non-static environment. In practical experiments carried out with a mobile robot in a typical office environment, we demonstrate the advantages of our approach compared to alternative techniques for mapping and localization in dynamic environments.

Proceedings ArticleDOI
29 Jul 2005
TL;DR: This paper proposes an edge-based text region extraction algorithm, which is robust with respect to font sizes, styles, color/intensity, orientations, effects of illumination, reflections, shadows, perspective distortion, and the complexity of image backgrounds.
Abstract: Scene text is an important feature to be extracted, especially in vision-based mobile robot navigation as many potential landmarks such as nameplates and information signs contain text. This paper proposes an edge-based text region extraction algorithm, which is robust with respect to font sizes, styles, color/intensity, orientations, effects of illumination, reflections, shadows, perspective distortion, and the complexity of image backgrounds. Performance of the proposed algorithm is compared against a number of widely used text localization algorithms and the results show that this method can quickly and effectively localize and extract text regions from real scenes and can be used in mobile robot navigation under an indoor environment to detect text based landmarks.
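A generic edge-based pipeline in the same spirit (not the paper's algorithm) can be sketched with OpenCV: detect edges, dilate them so character strokes merge into blobs, and keep bounding boxes with text-like aspect ratios. The sketch assumes OpenCV 4; all thresholds and the input filename are illustrative.

```python
# Generic edge-based text-region candidate extraction (sketch, OpenCV 4 assumed).
import cv2
import numpy as np

def candidate_text_regions(image_bgr):
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 80, 200)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (9, 3))   # wider than tall
    merged = cv2.dilate(edges, kernel, iterations=2)             # merge character strokes
    contours, _ = cv2.findContours(merged, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    boxes = []
    for cnt in contours:
        x, y, w, h = cv2.boundingRect(cnt)
        if w > 20 and h > 8 and 1.2 < w / float(h) < 15.0:       # keep text-like shapes
            boxes.append((x, y, w, h))
    return boxes

image = cv2.imread("nameplate.jpg")          # hypothetical input image
if image is not None:
    for x, y, w, h in candidate_text_regions(image):
        cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imwrite("text_regions.jpg", image)
```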

Patent
27 Sep 2005
TL;DR: In this paper, a movable-body navigation information display unit is presented, where a driver can intuitively recognize a relation between navigation information and a real picture or a real landscape.
Abstract: A movable-body navigation information display unit is provided. In the movable-body navigation information display unit, a driver can intuitively and accurately recognize a relation between navigation information and a real picture or a real landscape. In addition, it is possible to avoid a state that visibility of a caution-needed picture such as a pedestrian in the real picture and a real picture of a road construction site is inhibited by an image of the navigation information. An image data creating section (405) matches road shape data with a road shape model to estimate posture data. In addition, the image data creating section creates picture (image) data for accurately compositing and displaying the image of the navigation information in an appropriate position in a real picture (or in a real landscape) of a road ahead of a movable body, and displays the navigation information as a three-dimensional icon or the like. A picture display section (5) performs display based on the picture data.

Proceedings ArticleDOI
18 Apr 2005
TL;DR: The method uses a novel combination of a 3D occupancy grid for robust sensor data interpretation and a 2.5D height map for fine-resolution floor values, enabling the humanoid robot QRIO to generate detailed maps for autonomous navigation.
Abstract: With the development of biped robots, systems became able to navigate in a 3 dimensional world, walking up and down stairs, or climbing over small obstacles. We present a method for obtaining a labeled 2.5D grid map of the robot's surroundings. Each cell is marked either as floor or obstacle and contains a value telling the height of the floor or obstacle. Such height maps are useful for path planning and collision avoidance. The method uses a novel combination of a 3D occupancy grid for robust sensor data interpretation and a 2.5D height map for fine resolution floor values. We evaluate our approach using stereo vision on the humanoid robot QRIO and show the advantages over previous methods. Experimental results from navigation runs on an obstacle course demonstrate the ability of the method to generate detailed maps for autonomous navigation.
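A simplified sketch of the 2.5D labeling step (not the QRIO implementation) is shown below: a 3D point cloud is collapsed into a grid storing the maximum height per cell, and each cell is labeled floor or obstacle by a height threshold. Resolution, threshold, and the synthetic data are invented.

```python
# Collapse a point cloud into a labeled 2.5-D height map (sketch).
import numpy as np

def height_map(points, cell_size=0.05, floor_threshold=0.03, grid_dim=40):
    """points: (n, 3) array of (x, y, z) in metres, x/y assumed in [0, grid_dim*cell_size)."""
    heights = np.full((grid_dim, grid_dim), np.nan)
    for x, y, z in points:
        i, j = int(x / cell_size), int(y / cell_size)
        if 0 <= i < grid_dim and 0 <= j < grid_dim:
            heights[i, j] = z if np.isnan(heights[i, j]) else max(heights[i, j], z)
    labels = np.where(np.isnan(heights), "unknown",
                      np.where(heights > floor_threshold, "obstacle", "floor"))
    return heights, labels

rng = np.random.default_rng(0)
floor = np.column_stack([rng.uniform(0, 2, 500), rng.uniform(0, 2, 500), rng.normal(0, 0.005, 500)])
box = np.column_stack([rng.uniform(0.8, 1.0, 100), rng.uniform(0.8, 1.0, 100), rng.uniform(0.2, 0.3, 100)])
heights, labels = height_map(np.vstack([floor, box]))
print((labels == "obstacle").sum(), "obstacle cells")
```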

Proceedings ArticleDOI
05 Dec 2005
TL;DR: A hierarchical map is constructed automatically from a large collection of omnidirectional images taken at many locations in a building, using a metric based on visual landmarks (SIFT features) and geometrical constraints.
Abstract: This paper addresses the problem of automatic construction of a hierarchical map from images. Our approach departs from a large collection of omnidirectional images taken at many locations in a building. First, a low-level map is built that consists of a graph in which relations between images are represented. For this, we use a metric based on visual landmarks (SIFT features) and geometrical constraints. Then, we use a graph partitioning method to cluster nodes and in this way construct the high-level map. Experiments on real data show that meaningful higher and lower level maps are obtained, which can be used for accurate localization and planning.

Proceedings ArticleDOI
18 Apr 2005
TL;DR: These algorithms combine brightness information (in the form of edgels) with 3-D data from a commercial stereo system to detect and precisely localize curbs and stairways for autonomous navigation.
Abstract: We present algorithms to detect and precisely localize curbs and stairways for autonomous navigation. These algorithms combine brightness information (in the form of edgels) with 3-D data from a commercial stereo system. The overall system (including stereo computation) runs at about 4 Hz on a 1 GHz laptop. We show experimental results and discuss advantages and shortcomings of our approach.

Proceedings ArticleDOI
12 Oct 2005
TL;DR: This paper intends to develop and experiment with various task planners and interaction schemes that will allow the robot to select and perform its tasks while explicitly taking into account the constraints imposed by the presence of humans, their needs and preferences.
Abstract: Human-robot interaction requires explicit reasoning on the human environment and on the robot capacities to achieve its tasks in a collaborative way with a human partner. This paper focuses on the organization of the robot decisional abilities and, more particularly, on the management of human interaction as an integral part of the robot control architecture. Such an architecture should be the framework that will allow the robot to accomplish its tasks but also produce behaviors that support its engagement vis-a-vis its human partner and interpret similar behaviors from him. Together and in coherence with this framework, we intend to develop and experiment with various task planners and interaction schemes that will allow the robot to select and perform its tasks while taking into account explicitly the constraints imposed by the presence of humans, their needs and preferences. We have considered a scheme where the robot plans for itself and for the human in order not only (1) to assess the feasibility of the task (at a certain level) before performing it, but also (2) to share the load between the robot and the human and (3) to explain/illustrate a possible course of action.

Proceedings ArticleDOI
18 Apr 2005
TL;DR: This paper presents a navigation framework for wheeled mobile robots in indoor environments where the robot is controlled by a visual servoing law adapted to its nonholonomic constraint, and real experiment results illustrate the validity of the presented framework.
Abstract: When navigating in an unknown environment for the first time, a natural behavior consists in memorizing some key views along the performed path, in order to use these references as checkpoints for a future navigation mission taking a similar path. This assumption is used in this paper as the basis of a navigation framework for wheeled mobile robots in indoor environments. During a human-guided teleoperated learning step, the robot performs paths which are sampled and stored as a set of ordered key images, acquired by a standard embedded camera. The set of these obtained visual paths is topologically organized and provides a visual memory of the environment. Given an image of one of the visual paths as a target, the robot navigation mission is defined as a concatenation of visual path subsets, called visual route. When running autonomously, the robot is controlled by a visual servoing law adapted to its nonholonomic constraint. Based on the regulation of successive homographies, this control guides the robot along the reference visual route without explicitly planning any trajectory. Real experiment results illustrate the validity of the presented framework.
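A heavily simplified sketch of the image-based steering step is given below; it is not the paper's homography-based control law. It matches ORB features (a stand-in for the paper's features) between the current view and the next key image, estimates a homography with RANSAC, and uses the homography's horizontal translation term as a crude proportional steering signal. OpenCV 4 is assumed; the gain and filenames are invented.

```python
# Simplified key-image steering sketch (not the paper's control law).
import cv2
import numpy as np

def steering_towards_key_image(current_gray, key_gray, gain=0.002):
    orb = cv2.ORB_create(1000)
    kp1, des1 = orb.detectAndCompute(current_gray, None)
    kp2, des2 = orb.detectAndCompute(key_gray, None)
    if des1 is None or des2 is None:
        return 0.0
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
    if len(matches) < 8:
        return 0.0
    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    if H is None:
        return 0.0
    return float(gain * H[0, 2])   # steer to reduce the horizontal image offset

current = cv2.imread("current_view.png", cv2.IMREAD_GRAYSCALE)   # hypothetical images
key = cv2.imread("key_image_07.png", cv2.IMREAD_GRAYSCALE)
if current is not None and key is not None:
    print("angular velocity command:", steering_towards_key_image(current, key))
```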

Journal ArticleDOI
TL;DR: The paper presents the results of the tests to demonstrate that the system enables multiple robots to roam freely searching for and successfully finding targets in an unknown environment containing obstacles without hitting the obstacles or one another.
Abstract: This paper describes a mobile robot navigation control system based on fuzzy logic. Fuzzy rules embedded in the controller of a mobile robot enable it to avoid obstacles in a cluttered environment that includes other mobile robots. So that the robots do not collide against one another, each robot also incorporates a set of collision prevention rules implemented as a Petri Net model within its controller. The navigation control system has been tested in simulation and on actual mobile robots. The paper presents the results of the tests to demonstrate that the system enables multiple robots to roam freely searching for and successfully finding targets in an unknown environment containing obstacles without hitting the obstacles or one another.
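To illustrate how a Petri net can encode collision prevention rules (the paper's actual net is not given in the abstract), the minimal sketch below uses a shared "area_free" place holding one token as a mutex, so the "enter" transition can fire for only one robot at a time. Place and transition names are invented.

```python
# Minimal Petri-net sketch of a mutual-exclusion collision prevention rule.
marking = {"r1_waiting": 1, "r2_waiting": 1, "area_free": 1,
           "r1_inside": 0, "r2_inside": 0}

transitions = {
    "r1_enter": ({"r1_waiting": 1, "area_free": 1}, {"r1_inside": 1}),
    "r1_leave": ({"r1_inside": 1}, {"area_free": 1}),
    "r2_enter": ({"r2_waiting": 1, "area_free": 1}, {"r2_inside": 1}),
    "r2_leave": ({"r2_inside": 1}, {"area_free": 1}),
}

def enabled(name):
    pre, _ = transitions[name]
    return all(marking[p] >= n for p, n in pre.items())

def fire(name):
    pre, post = transitions[name]
    if not enabled(name):
        return False
    for p, n in pre.items():
        marking[p] -= n
    for p, n in post.items():
        marking[p] += n
    return True

fire("r1_enter")
print("r2 may enter while r1 is inside:", enabled("r2_enter"))   # False -> no collision
fire("r1_leave")
print("r2 may enter after r1 leaves:", enabled("r2_enter"))      # True
```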