Topic
Mobile robot navigation
About: Mobile robot navigation is a research topic. Over its lifetime, 14,713 publications have been published on this topic, receiving 263,092 citations.
Papers published on a yearly basis
Papers
21 May 1995 · TL;DR: The paradigm, called the plan-merging paradigm, is presented and illustrated through its application to the planning, execution and control of a large fleet of autonomous mobile robots for load transport tasks in a structured environment.
Abstract: This paper presents an approach we have recently developed for multi-robot cooperation. It is based on a paradigm in which robots incrementally merge their plans into a set of already coordinated plans. This is done through the exchange of information about their current state and their future actions. This leads to a generic framework which can be applied to a variety of tasks and applications. The paradigm, called the plan-merging paradigm, is presented and illustrated through its application to the planning, execution and control of a large fleet of autonomous mobile robots for load transport tasks in a structured environment.
115 citations
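The incremental plan-merging loop can be sketched as follows. This is a toy illustration, not the paper's actual representation: plans are reduced to timed cell visits, and the reservation set, function names, and delay strategy are all assumptions made for the sketch.

```python
# Toy sketch of incremental plan merging: a plan is a list of (timestep, cell)
# visits, and a robot's plan joins the coordinated set only once it conflicts
# with no already-coordinated plan, delaying its start otherwise.

def conflicts(plan, reserved):
    """True if any (timestep, cell) of the plan is already reserved."""
    return any(step in reserved for step in plan)

def delayed(plan, steps):
    """The same plan shifted `steps` timesteps later."""
    return [(t + steps, cell) for t, cell in plan]

def merge_plan(plan, reserved, max_delay=20):
    """Merge `plan` into the coordinated set, delaying it if necessary."""
    for d in range(max_delay + 1):
        candidate = delayed(plan, d)
        if not conflicts(candidate, reserved):
            reserved.update(candidate)   # the plan is now coordinated
            return candidate
    return None                          # give up for now; re-plan later

reserved = set()
p1 = merge_plan([(0, 'A'), (1, 'B'), (2, 'C')], reserved)  # merges as-is
p2 = merge_plan([(1, 'B'), (2, 'C')], reserved)            # delayed to avoid 'B'
```

Here the second robot's plan would occupy cell 'B' at the same timestep as the first robot's, so it is merged one step later; the exchange of "current state and future actions" in the paper plays the role of the shared reservation set.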
13 Jan 2004 · TL;DR: This paper describes the concepts behind the BPN (BMW Personal Navigator), a fully implemented system that combines a desktop event and route planner, a car navigation system, and a multi-modal indoor and outdoor pedestrian navigation system for a PDA.
Abstract: Navigation services can be found in different situations and contexts: while connected to the web through a desktop PC, in cars, and more recently on PDAs while on foot. These services are usually well designed for their specific purpose, but fail to work in other situations. In this paper we present an approach that connects a variety of specialized user interfaces to achieve a personal navigation service spanning different situations. We describe the concepts behind the BPN (BMW Personal Navigator), a fully implemented system that combines a desktop event and route planner, a car navigation system, and a multi-modal indoor and outdoor pedestrian navigation system for a PDA. Rather than designing for one unified UI, we focus on connecting specialized UIs for desktop, in-car and on-foot use.
115 citations
01 Mar 2000 · TL;DR: The well-formulated and well-known laws of electrostatic fields are used to prove that the proposed approach generates an approximately optimal path (based on cell resolution) in a real-time frame.
Abstract: Proposes a solution to the two-dimensional (2-D) collision-free path planning problem for an autonomous mobile robot utilizing an electrostatic potential field (EPF) developed through a resistor network, derived to represent the environment. No assumptions are made about the amount of information contained in the a priori environment map (it may be completely empty) or the shape of the obstacles. The well-formulated and well-known laws of electrostatic fields are used to prove that the proposed approach generates an approximately optimal path (based on cell resolution) in a real-time frame. It is also proven through the classical laws of electrostatics that the derived potential function is a global navigation function (as defined by Rimon and Koditschek, 1992), that the field is free of all local minima and that all paths necessarily lead to the goal position. The complexity of the EPF-generated path is shown to be O(m·n_M), where m is the total number of polygons in the environment and n_M is the maximum number of sides of a polygonal object. The method is tested both by simulation and experimentally on a Nomad200 mobile robot platform equipped with a ring of sixteen sonar sensors.
115 citations
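A resistor network with the goal grounded computes, in the continuum limit, a harmonic potential (a solution of Laplace's equation), which is why the field has no local minima. A minimal grid sketch of the same idea, using Jacobi relaxation in place of an actual resistor network; the grid size, boundary values, and iteration count are illustrative assumptions, not the paper's setup:

```python
# Sketch of a potential-field planner: relax toward a harmonic potential with
# the goal pinned at 0 and obstacles at 1 (the discrete analogue of the
# resistor-network node law), then follow steepest descent to the goal.
import numpy as np

def potential_field(grid, goal, iters=2000):
    """grid: 2-D array, 1 = obstacle, 0 = free; goal: (row, col)."""
    u = np.ones_like(grid, dtype=float)
    free = grid == 0
    for _ in range(iters):
        # each free cell becomes the average of its 4 neighbours
        avg = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
               np.roll(u, 1, 1) + np.roll(u, -1, 1)) / 4.0
        u = np.where(free, avg, 1.0)   # obstacles stay at high potential
        u[goal] = 0.0                  # goal stays grounded
    return u

def descend(u, start, grid, max_steps=100):
    """Greedy steepest-descent path from start; stops at the potential minimum."""
    path, pos = [start], start
    for _ in range(max_steps):
        r, c = pos
        nbrs = [(r + dr, c + dc) for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
                if 0 <= r + dr < u.shape[0] and 0 <= c + dc < u.shape[1]
                and grid[r + dr, c + dc] == 0]
        nxt = min(nbrs, key=lambda p: u[p])
        if u[nxt] >= u[pos]:           # no lower neighbour: at the goal
            break
        path.append(nxt)
        pos = nxt
    return path

g = np.ones((10, 10))
g[1:-1, 1:-1] = 0                      # free interior inside an obstacle border
g[3:7, 5] = 1                          # a wall segment to route around
u = potential_field(g, goal=(8, 8))
path = descend(u, (1, 1), g)
```

Because a harmonic function attains its minimum only on the boundary, every free cell has a strictly lower neighbour until the goal is reached, which is the discrete version of the paper's "all paths necessarily lead to the goal" claim.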
21 May 2001 · TL;DR: This paper shows that a system based on only camera ego-motion estimates will accumulate errors with super-linear growth in the distance travelled, owing to increasing orientation errors, and describes a methodology for long-distance rover navigation that meets these goals using robust estimation.
Abstract: Robust navigation for mobile robots over long distances requires an accurate method for tracking the robot position in the environment. Techniques for position estimation by determining the camera ego-motion from monocular or stereo sequences have been previously described. However, long-distance navigation requires a very high level of robustness and a very low rate of error growth. In this paper, we describe a methodology for long-distance rover navigation that meets these goals using robust estimation. We show that a system based on only camera ego-motion estimates will accumulate errors with super-linear growth in the distance travelled, owing to increasing orientation errors. When an absolute orientation sensor is incorporated, the error growth can be reduced to a linear function of the distance travelled. We tested these techniques using both extensive simulation and hundreds of real rover images and achieved a low, linear rate of error growth.
114 citations
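The error-growth argument can be illustrated with a toy dead-reckoning simulation. The noise model, step sizes, and magnitudes below are invented for illustration (the paper's results come from simulation and real rover imagery); the sketch isolates the heading-error effect only.

```python
# Toy illustration of error growth in dead reckoning: when heading is obtained
# by integrating noisy ego-motion, its error random-walks and position error
# grows super-linearly with distance; an absolute orientation sensor keeps the
# per-step heading error bounded, so position error grows far more slowly.
import math
import random

def final_error(steps, sigma, absolute_heading=False, seed=1):
    """Position error after `steps` unit moves along a straight ground truth."""
    rng = random.Random(seed)
    ex = ey = 0.0                                  # estimated position
    heading_err = 0.0
    for _ in range(steps):
        if absolute_heading:
            heading_err = rng.gauss(0.0, sigma)    # bounded: fresh reading each step
        else:
            heading_err += rng.gauss(0.0, sigma)   # integrated: error accumulates
        ex += math.cos(heading_err)
        ey += math.sin(heading_err)
    # ground truth after `steps` unit moves along the x-axis is (steps, 0)
    return math.hypot(ex - steps, ey)

drift = final_error(1000, sigma=0.01)                          # ego-motion only
fixed = final_error(1000, sigma=0.01, absolute_heading=True)   # orientation sensor
```

With integrated heading the lateral error is a sum over a random walk and scales roughly like distance^(3/2), while with the absolute sensor each step contributes an independent bounded error, matching the paper's super-linear versus linear contrast.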
TL;DR: An approach is described that combines the P300 potential, the steady-state visually evoked potential (SSVEP), and event-related desynchronization (ERD) to solve a complicated multi-task problem consisting of humanoid robot navigation and control along with object recognition using a low-cost BCI system.
Abstract: This paper describes a hybrid brain-computer interface (BCI) technique that combines the P300 potential, the steady-state visually evoked potential (SSVEP), and event-related desynchronization (ERD) to solve a complicated multi-task problem consisting of humanoid robot navigation and control along with object recognition using a low-cost BCI system. Our approach enables subjects to control the navigation and exploration of a humanoid robot and to recognize a desired object among candidates. This study aims to demonstrate the possibility of a hybrid BCI based on a low-cost system for a realistic and complex task. It also shows that the use of a simple image processing technique, combined with BCI, can further aid in making these complex tasks simpler. An experimental scenario is proposed in which a subject remotely controls a humanoid robot in a properly sized maze. The subject sees what the surrogate robot sees through visual feedback and can navigate the surrogate robot. While navigating, the robot encounters objects located in the maze. It then recognizes whether the encountered object is of interest to the subject. The subject communicates with the robot through SSVEP- and ERD-based BCIs to navigate and explore with the robot, and through a P300-based BCI to let the surrogate robot recognize their favorite objects. Using several evaluation metrics, the performance of five subjects navigating the robot was quite comparable to manual keyboard control. During object recognition mode, favorite objects were successfully selected from two to four choices. Subjects conducted the humanoid navigation and recognition tasks as if they embodied the robot. Analysis of the data supports the potential usefulness of the proposed hybrid BCI system for extended applications. An important implication for future work is that hybridizing simple BCI protocols provides extended controllability for carrying out complicated tasks even with a low-cost system.
114 citations
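The mode-switching logic this abstract describes (SSVEP/ERD events for navigation, a P300 event for object selection) amounts to a small state machine. A toy sketch, in which all event names, modes, and the command vocabulary are invented for illustration; the actual system decodes these events from EEG signals:

```python
# Toy state machine for the hybrid-BCI control flow: SSVEP/ERD events steer
# the robot in NAVIGATE mode; a detected object switches to RECOGNIZE mode,
# where a P300 event selects or ignores it before navigation resumes.
NAVIGATE, RECOGNIZE = "navigate", "recognize"

def controller(events):
    """Map a stream of decoded BCI events to robot commands."""
    mode, commands = NAVIGATE, []
    for kind, value in events:
        if mode == NAVIGATE:
            if kind == "ssvep":                 # frequency-coded direction choice
                commands.append(("move", value))
            elif kind == "erd":                 # motor-imagery turn
                commands.append(("turn", value))
            elif kind == "object_detected":     # vision triggers the mode switch
                mode = RECOGNIZE
        elif mode == RECOGNIZE:
            if kind == "p300":                  # attended target elicits a P300
                commands.append(("select" if value else "ignore", None))
                mode = NAVIGATE                 # resume navigation either way
    return commands

events = [("ssvep", "forward"), ("erd", "left"),
          ("object_detected", None), ("p300", True), ("ssvep", "forward")]
cmds = controller(events)
```

The point of the hybridization is visible even in the toy: each BCI protocol handles the interaction it is best suited for, and the state machine arbitrates which decoder's output is currently allowed to drive the robot.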