
Showing papers on "Mobile robot navigation published in 2010"


Proceedings ArticleDOI
03 May 2010
TL;DR: This paper describes a navigation system that enabled a robot to complete 26.2 miles of autonomous navigation in a real office environment; key components include an efficient voxel-based 3D mapping algorithm that explicitly models unknown space.
Abstract: This paper describes a navigation system that allowed a robot to complete 26.2 miles of autonomous navigation in a real office environment. We present the methods required to achieve this level of robustness, including an efficient Voxel-based 3D mapping algorithm that explicitly models unknown space. We also provide an open-source implementation of the algorithms used, as well as simulated environments in which our results can be verified.
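The paper's own open-source release is the definitive reference; purely as a hedged illustration of what explicitly modelling unknown space can look like, the Python sketch below keeps three voxel states (unknown, free, occupied) and updates them along a sensor ray. The grid size, the 0.1 m resolution, and all names are assumptions for the example, not values from the paper.

```python
import numpy as np

UNKNOWN, FREE, OCCUPIED = 0, 1, 2

class VoxelMap:
    """Minimal 3D occupancy grid that distinguishes unknown from free space."""

    def __init__(self, size=(100, 100, 20), resolution=0.1):
        self.grid = np.full(size, UNKNOWN, dtype=np.uint8)
        self.res = resolution

    def _to_index(self, point):
        return tuple(int(c / self.res) for c in point)

    def integrate_ray(self, origin, hit):
        """Mark voxels between origin and hit as FREE and the hit voxel as OCCUPIED."""
        o, h = np.asarray(origin, float), np.asarray(hit, float)
        n_steps = max(1, int(np.linalg.norm(h - o) / self.res))
        for t in np.linspace(0.0, 1.0, n_steps, endpoint=False):
            idx = self._to_index(o + t * (h - o))
            if all(0 <= i < s for i, s in zip(idx, self.grid.shape)):
                self.grid[idx] = FREE
        idx = self._to_index(h)
        if all(0 <= i < s for i, s in zip(idx, self.grid.shape)):
            self.grid[idx] = OCCUPIED

    def is_traversable(self, point):
        """A conservative planner treats UNKNOWN cells as blocked, not just OCCUPIED."""
        return self.grid[self._to_index(point)] == FREE

m = VoxelMap()
m.integrate_ray(origin=(0.5, 0.5, 0.5), hit=(3.0, 0.5, 0.5))
print(int((m.grid == FREE).sum()), int((m.grid == OCCUPIED).sum()))
```

A planner built on such a map can refuse to drive through voxels that are merely unknown, which is the behaviour the abstract highlights.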

536 citations


Journal ArticleDOI
TL;DR: This work investigates the persistent navigation and mapping problem in the context of an autonomous robot that performs mock deliveries in a working office environment over a two-week period; the solution is based on the biologically inspired visual SLAM system RatSLAM.
Abstract: The challenge of persistent navigation and mapping is to develop an autonomous robot system that can simultaneously localize, map and navigate over the lifetime of the robot with little or no human intervention. Most solutions to the simultaneous localization and mapping (SLAM) problem aim to produce highly accurate maps of areas that are assumed to be static. In contrast, solutions for persistent navigation and mapping must produce reliable goal-directed navigation outcomes in an environment that is assumed to be in constant flux. We investigate the persistent navigation and mapping problem in the context of an autonomous robot that performs mock deliveries in a working office environment over a two-week period. The solution was based on the biologically inspired visual SLAM system, RatSLAM. RatSLAM performed SLAM continuously while interacting with global and local navigation systems, and a task selection module that selected between exploration, delivery, and recharging modes. The robot performed 1,143 delivery tasks to 11 different locations with only one delivery failure (from which it recovered), traveled a total distance of more than 40 km over 37 hours of active operation, and recharged autonomously a total of 23 times.
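The RatSLAM system itself is described in the paper; purely as a hedged sketch of the kind of task-selection module the abstract mentions (switching between exploration, delivery, and recharging), one might write something like the following. The thresholds and field names are illustrative assumptions, not values from the study.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class RobotState:
    battery_level: float                               # 0.0 .. 1.0
    pending_deliveries: List[str] = field(default_factory=list)
    map_coverage: float = 0.0                          # fraction of environment explored

def select_task(state: RobotState,
                low_battery: float = 0.2,
                coverage_goal: float = 0.95) -> str:
    """Pick the active behaviour; thresholds here are illustrative only."""
    if state.battery_level < low_battery:
        return "recharge"
    if state.pending_deliveries:
        return "deliver:" + state.pending_deliveries[0]
    if state.map_coverage < coverage_goal:
        return "explore"
    return "idle"

print(select_task(RobotState(battery_level=0.15, pending_deliveries=["room 11"])))
print(select_task(RobotState(battery_level=0.8, pending_deliveries=["room 3"])))
print(select_task(RobotState(battery_level=0.8, map_coverage=0.5)))
```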

302 citations


Proceedings ArticleDOI
03 May 2010
TL;DR: This paper equips BigDog with a laser scanner, stereo vision system, and perception and navigation algorithms, and uses these sensors and algorithms to perform autonomous navigation to goal positions in unstructured forest environments.
Abstract: BigDog is a four legged robot with exceptional rough-terrain mobility. In this paper, we equip BigDog with a laser scanner, stereo vision system, and perception and navigation algorithms. Using these sensors and algorithms, BigDog performs autonomous navigation to goal positions in unstructured forest environments. The robot perceives obstacles, such as trees, boulders, and ground features, and steers to avoid them on its way to the goal. We describe the hardware and software implementation of the navigation system and summarize performance. During field tests in unstructured wooded terrain, BigDog reached its goal position 23 of 26 runs and traveled over 130 meters at a time without operator involvement.

242 citations


Journal ArticleDOI
TL;DR: This paper presents a novel solution to the problem of combined positioning and map matching with integrity provision at the lane level, by means of a multiple-hypothesis particle-filter-based algorithm.
Abstract: Lane-level positioning and map matching are some of the biggest challenges for navigation systems. Additionally, in safety applications or in those with critical performance requirements (such as satellite-based electronic fee collection), integrity becomes a key word for the navigation community. In this scenario, it is clear that a navigation system that can operate at the lane level while providing integrity parameters that are capable of monitoring the quality of the solution can bring important benefits to these applications. This paper presents a pioneering novel solution to the problem of combined positioning and map matching with integrity provision at the lane level. The system under consideration hybridizes measurements from a global navigation satellite system (GNSS) receiver, an odometer, and a gyroscope, along with the road information stored in enhanced digital maps, by means of a multiple-hypothesis particle-filter-based algorithm. A set of experiments in real environments in France and Germany shows the very good results obtained in terms of positioning, map matching, and integrity consistency, proving the feasibility of our proposal.
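As a hedged, heavily simplified sketch of how a multiple-hypothesis particle filter can fuse odometer/gyro predictions with GNSS updates and then accumulate probability mass per lane (a crude stand-in for the paper's lane-level map matching and integrity monitoring), consider the toy one-dimensional example below. The lane width, noise levels, and all function names are assumptions for illustration only.

```python
import math
import random

# Toy road model: three parallel lanes, each 3.5 m wide (assumed values).
LANE_WIDTH, NUM_LANES = 3.5, 3

def predict(particles, distance, heading, noise=0.15):
    """Propagate each cross-track hypothesis with odometer distance and gyro heading."""
    return [x + distance * math.sin(heading) + random.gauss(0.0, noise)
            for x in particles]

def update(particles, gnss_x, gnss_sigma=2.0):
    """Weight hypotheses by the (coarse) GNSS cross-track measurement likelihood."""
    w = [math.exp(-0.5 * ((x - gnss_x) / gnss_sigma) ** 2) for x in particles]
    total = sum(w) or 1.0
    return [wi / total for wi in w]

def lane_probabilities(particles, weights):
    """Map-match by accumulating weight per lane; a flat distribution signals
    that the lane-level solution should not be trusted (crude integrity check)."""
    mass = [0.0] * NUM_LANES
    for x, w in zip(particles, weights):
        lane = min(NUM_LANES - 1, max(0, int(x // LANE_WIDTH)))
        mass[lane] += w
    return [round(m, 3) for m in mass]

random.seed(0)
particles = [random.uniform(0.0, NUM_LANES * LANE_WIDTH) for _ in range(1000)]
particles = predict(particles, distance=1.0, heading=0.05)   # nearly straight ahead
weights = update(particles, gnss_x=5.2)                      # GNSS favours the middle lane
print(lane_probabilities(particles, weights))
```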

188 citations


Proceedings ArticleDOI
10 May 2010
TL;DR: This work introduces a visitor's companion robot agent as a natural task for symbiotic interaction between robot agents and humans, in which humans help overcome the robot's limitations while the robot, in turn, helps the humans.
Abstract: Several researchers, present authors included, envision personal mobile robot agents that can assist humans in their daily tasks. Despite many advances in robotics, such mobile robot agents still face many limitations in their perception, cognition, and action capabilities. In this work, we propose a symbiotic interaction between robot agents and humans to overcome the robot limitations while allowing robots to also help humans. We introduce a visitor's companion robot agent, as a natural task for such symbiotic interaction. The visitor lacks knowledge of the environment but can easily open a door or read a door label, while the mobile robot with no arms cannot open a door and may be confused about its exact location, but can plan paths well through the building and can provide useful relevant information to the visitor. We present this visitor companion task in detail with an enumeration and formalization of the actions of the robot agent in its interaction with the human. We briefly describe the wifi-based robot localization algorithm and show results of the different levels of human help to the robot during its navigation. We then test the value of robot help to the visitor during the task to understand the relationship tradeoffs. Our work has been fully implemented in a mobile robot agent, CoBot, which has successfully navigated for several hours and continues to navigate in our indoor environment.

167 citations


Journal ArticleDOI
TL;DR: The application of Learning from Demonstration to this task is explored for the Crusher autonomous navigation platform: using expert examples of desired navigation behavior, mappings from both online and offline perceptual data to planning costs are learned.
Abstract: Rough terrain autonomous navigation continues to pose a challenge to the robotics community. Robust navigation by a mobile robot depends not only on the individual performance of perception and planning systems, but on how well these systems are coupled. When traversing complex unstructured terrain, this coupling (in the form of a cost function) has a large impact on robot behavior and performance, necessitating a robust design. This paper explores the application of Learning from Demonstration to this task for the Crusher autonomous navigation platform. Using expert examples of desired navigation behavior, mappings from both online and offline perceptual data to planning costs are learned. Challenges in adapting existing techniques to complex online planning systems and imperfect demonstration are addressed, along with additional practical considerations. The benefits to autonomous performance of this approach are examined, as well as the decrease in necessary designer effort. Experimental results are presented from autonomous traverses through complex natural environments.
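The paper's actual method involves far more machinery (and re-plans with the current cost function at every iteration, which is omitted here); the following hedged sketch only conveys the core intuition of this style of Learning from Demonstration: raise the cost of terrain the current planner prefers and lower the cost of terrain the expert actually drove over. The grid setup and all values are assumptions.

```python
import numpy as np

def update_costmap(costmap, expert_path, planner_path, lr=0.1):
    """One illustrative cost-update step: make the expert's path look cheaper
    than the path the current cost function prefers (paths are grid indices)."""
    new_cost = costmap.copy()
    for cell in planner_path:          # the planner preferred these cells: raise their cost
        new_cost[cell] += lr
    for cell in expert_path:           # the expert drove here: lower the cost
        new_cost[cell] -= lr
    return np.clip(new_cost, 0.1, None)      # keep costs strictly positive

costmap = np.ones((5, 5))
expert_path = [(0, 0), (1, 0), (2, 0), (3, 0), (4, 0)]     # hugs the left edge
planner_path = [(0, 0), (1, 1), (2, 2), (3, 3), (4, 4)]    # cuts diagonally
for _ in range(10):
    costmap = update_costmap(costmap, expert_path, planner_path)
print(costmap.round(2))
```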

164 citations


Proceedings ArticleDOI
04 Jul 2010
TL;DR: A robotic wheelchair navigation system which is specially designed for confined spaces is proposed and uses the Monte Carlo technique to find a minimum path within the confined environment and takes into account the variance propagation in the predicted path for ensuring the safe driving of the robot.
Abstract: In the present work, a robotic wheelchair navigation system specially designed for confined spaces is proposed. In confined spaces, the movements of wheelchairs are more restricted by the environment than those of other unicycle-type vehicles; for example, if the wheelchair is too close to a wall, it cannot rotate freely because its front or back may collide with the wall. The navigation system is composed of a path planning module and a control module; both use the environment and robot information provided by a SLAM algorithm to attain their objectives. The planning strategy uses the Monte Carlo technique to find a minimum-length path within the confined environment and takes into account the variance propagation along the predicted path to ensure the safe driving of the robot. The objective of the navigation system is to drive the robotic wheelchair within the confined environment in order to reach a desired orientation or posture.
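As a hedged toy version of the planning idea described above (not the authors' implementation), one can sample random candidate paths through a narrow corridor, propagate a growing lateral uncertainty along each, reject any path whose uncertainty bound would touch a wall, and keep the shortest survivor. The corridor width, uncertainty growth rate, and all names are assumed for illustration.

```python
import random
import math

# Toy corridor: the wheelchair must stay within |y| < HALF_WIDTH (assumed value).
HALF_WIDTH = 0.8
SIGMA_GROWTH = 0.02     # assumed growth of lateral uncertainty per metre travelled

def sample_path(start, goal, n_waypoints=3):
    """Random intermediate waypoints between start and goal (Monte Carlo sampling)."""
    pts = [start]
    for i in range(1, n_waypoints + 1):
        x = start[0] + (goal[0] - start[0]) * i / (n_waypoints + 1)
        pts.append((x, random.uniform(-HALF_WIDTH, HALF_WIDTH)))
    pts.append(goal)
    return pts

def path_length(path):
    return sum(math.dist(a, b) for a, b in zip(path, path[1:]))

def is_safe(path, k_sigma=2.0):
    """Reject the path if the 2-sigma lateral uncertainty touches a wall."""
    travelled = 0.0
    for a, b in zip(path, path[1:]):
        travelled += math.dist(a, b)
        sigma = 0.05 + SIGMA_GROWTH * travelled      # crude variance propagation
        if abs(b[1]) + k_sigma * sigma > HALF_WIDTH:
            return False
    return True

start, goal = (0.0, 0.0), (6.0, 0.0)
candidates = [sample_path(start, goal) for _ in range(200)]
safe = [p for p in candidates if is_safe(p)]
best = min(safe, key=path_length) if safe else None
print(len(safe), "safe paths; best length:",
      round(path_length(best), 2) if best else None)
```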

156 citations


Proceedings ArticleDOI
03 May 2010
TL;DR: A novel and robust method for place recognition based on range images that matches a given 3D scan against a database using point features and scores potential transformations by comparing significant points in the scans.
Abstract: The problem of place recognition appears in different mobile robot navigation problems including localization, SLAM, or change detection in dynamic environments. Whereas this problem has been studied intensively in the context of robot vision, relatively few approaches are available for three-dimensional range data. In this paper, we present a novel and robust method for place recognition based on range images. Our algorithm matches a given 3D scan against a database using point features and scores potential transformations by comparing significant points in the scans. A further advantage of our approach is that the features allow for a computation of the relative transformations between scans which is relevant for registration processes. Our approach has been implemented and tested on different 3D data sets obtained outdoors. In several experiments we demonstrate the advantages of our approach also in comparison to existing techniques.
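A minimal, hedged sketch of the database-matching step (ignoring the transformation scoring the paper also performs) is shown below: each place is represented by a set of feature descriptors extracted from its range image, and a query scan is scored by how many of its descriptors find a close match. The descriptor dimensionality and thresholds are illustrative assumptions.

```python
import numpy as np

def match_score(query_desc, place_desc, max_dist=0.5):
    """Fraction of query features that have a close descriptor in the candidate place."""
    matches = 0
    for q in query_desc:
        d = np.linalg.norm(place_desc - q, axis=1)      # distances to all stored features
        if d.min() < max_dist:
            matches += 1
    return matches / max(1, len(query_desc))

rng = np.random.default_rng(0)
database = {f"place_{i}": rng.normal(size=(40, 16)) for i in range(5)}   # 16-D descriptors
query = database["place_3"] + rng.normal(scale=0.05, size=(40, 16))      # noisy revisit

scores = {name: match_score(query, desc) for name, desc in database.items()}
print(max(scores, key=scores.get))   # expected to be "place_3"
```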

140 citations


Book ChapterDOI
Martin Pielot1, Susanne Boll1
17 May 2010
TL;DR: It is found that the Tactile Wayfinder freed the participants' attention but could not keep up with the navigation system in terms of navigation performance, and no significant difference was found in the acquisition of spatial knowledge.
Abstract: In this paper we report on a field study comparing a commercial pedestrian navigation system to a tactile navigation system called Tactile Wayfinder. Similar to previous approaches the Tactile Wayfinder uses a tactile torso display to present the directions of a route's waypoints to the user. It advances those approaches by conveying the location of the next two waypoints rather than the next one only, so the user already knows how the route continues when reaching a waypoint. Using a within-subjects design, fourteen participants navigated along two routes in a busy city centre with the Tactile Wayfinder and a commercial pedestrian navigation system. We measured the acquisition of spatial knowledge, the level of attention the participants had to devote to the navigation task, and the navigation performance. We found that the Tactile Wayfinder freed the participants' attention but could not keep up with the navigation system in terms of navigation performance. No significant difference was found in the acquisition of spatial knowledge. Instead, a good general sense of direction was highly correlated with good spatial knowledge acquisition and a good navigation performance.

134 citations


Journal ArticleDOI
TL;DR: In this article, the authors describe the navigation system of a flexible AGV intended for operation in partially structured warehouses and with frequent changes in the floor plant layout, which is achieved by incorporating a high degree of on-board autonomy and by decreasing the amount of manual work required by the operator when establishing the a priori knowledge of the environment.
Abstract: The research presented in this paper approaches the issue of navigation using an automated guided vehicle (AGV) in industrial environments. The work describes the navigation system of a flexible AGV intended for operation in partially structured warehouses and with frequent changes in the floor plant layout. This is achieved by incorporating a high degree of on-board autonomy and by decreasing the amount of manual work required by the operator when establishing the a priori knowledge of the environment. The AGV's autonomy consists of the set of automatic tasks, such as planner, perception, path planning and path tracking, that the industrial vehicle must perform to accomplish the task required by the operator. The integration of these techniques has been tested in a real AGV working on an industrial warehouse environment.

124 citations


Proceedings ArticleDOI
17 Jun 2010
TL;DR: A low-cost indoor navigation system is developed, based on mobile terminals that support Near Field Communication (NFC) technology and Java program access to Radio Frequency Identification (RFID) tags.
Abstract: One of the most important limitations for people with visual impairment is the inability to navigate and orient themselves unassisted in unfamiliar buildings. A low-cost indoor navigation system is developed, based on mobile terminals that support Near Field Communication (NFC) technology and Java program access to Radio Frequency Identification (RFID) tags. The proposed navigation system enables users to form a mental map of the rooms (dimensions, relative positions of points of interest). This information is stored in RFID tags in WAP Binary eXtensible Markup Language (WBXML) format. The system also allows audio messages to be left, recorded in RFID tags in Adaptive Multi Rate (AMR) format. Voice-enabled navigation, which is familiar to users with visual disabilities, is used.

Journal ArticleDOI
TL;DR: This paper presents a novel semi-autonomous navigation strategy designed for low throughput interfaces that is successfully tested both in simulation and with a real robot, and a feasibility study for the use of a BCI confirms the potential of such an interface.

Patent
18 Jun 2010
TL;DR: In this paper, a robotic system that includes a robot and a remote station is described, where the remote station can generate control commands that are transmitted to the robot through a broadband network.
Abstract: A robotic system that includes a robot and a remote station. The remote station can generate control commands that are transmitted to the robot through a broadband network. The control commands can be interpreted by the robot to induce action such as robot movement or focusing a robot camera. The robot can generate reporting commands that are transmitted to the remote station through the broadband network. The reporting commands can provide positional feedback or system reports on the robot.

Proceedings ArticleDOI
07 Sep 2010
TL;DR: A novel approach to pedestrian navigation using bearing-based haptic feedback, where people are guided in the general direction of their destination via vibration, but additional exploratory navigation is stimulated by varying feedback based on the potential for taking alternative routes.
Abstract: In this article we describe a novel approach to pedestrian navigation using bearing-based haptic feedback. People are guided in the general direction of their destination via vibration, but additional exploratory navigation is stimulated by varying feedback based on the potential for taking alternative routes. We describe two mobile prototypes that were created to examine the possible benefits of the approach. The successful use of this exploratory navigation method is demonstrated in a realistic field trial, and we discuss the results and interesting participant behaviours that were recorded.
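A hedged sketch of one way such bearing-based haptic feedback could be computed is given below: vibration grows with the user's bearing error and is deliberately softened when alternative routes are nearby, to leave room for exploration. The specific scaling rules are assumptions, not the prototypes' actual mappings.

```python
import math

def bearing_error(heading_deg, target_bearing_deg):
    """Signed smallest angle between the user's heading and the goal bearing."""
    return (target_bearing_deg - heading_deg + 180) % 360 - 180

def vibration_level(heading_deg, target_bearing_deg, n_alternative_routes=0):
    """Stronger vibration the further off-bearing the user walks; feedback is
    softened when nearby alternatives exist, to invite exploration (assumed rule)."""
    err = abs(bearing_error(heading_deg, target_bearing_deg))
    base = min(1.0, err / 90.0)                              # 0 on course, 1 at >= 90 deg off
    softening = 1.0 / (1.0 + 0.5 * n_alternative_routes)
    return round(base * softening, 2)

print(vibration_level(10, 15))                               # nearly on course -> weak feedback
print(vibration_level(100, 15))                              # badly off course -> strong feedback
print(vibration_level(100, 15, n_alternative_routes=2))      # softened near a junction
```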

Book ChapterDOI
01 Jan 2010
TL;DR: A technique for mobile robot model predictive control is described that utilizes the structure of a regional motion plan to effectively search the local continuum for an improved solution, addressing the problem of path following and obstacle avoidance through geometric singularities and discontinuities.
Abstract: As mobile robots venture into more difficult environments, more complex state-space paths are required to move safely and efficiently. The difference between mission success and failure can be determined by a mobile robot's capacity to effectively navigate such paths in the presence of disturbances. This paper describes a technique for mobile robot model predictive control that utilizes the structure of a regional motion plan to effectively search the local continuum for an improved solution. The contribution, a receding horizon model-predictive control (RHMPC) technique, specifically addresses the problem of path following and obstacle avoidance through geometric singularities and discontinuities such as cusps, turn-in-place, and multi-point turn maneuvers in environments where terrain shape and vehicle mobility effects are non-negligible. The technique is formulated as an optimal controller that utilizes a model-predictive trajectory generator to relax parameterized control inputs initialized from a regional motion planner to navigate safely through the environment. Experimental results are presented for a six-wheeled skid-steered field robot in natural terrain.
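Purely as a hedged illustration of the receding-horizon idea (not the RHMPC formulation in the paper, which uses a model-predictive trajectory generator and parameterized controls), the sketch below samples perturbations of a regional planner's nominal control sequence for a unicycle model, scores each rollout by path-following error and obstacle clearance, and executes the first control of the best rollout. The models and constants are assumptions.

```python
import math
import random

def rollout(state, controls, dt=0.2):
    """Forward-simulate a unicycle model for one candidate control sequence."""
    x, y, th = state
    traj = []
    for v, w in controls:
        x += v * math.cos(th) * dt
        y += v * math.sin(th) * dt
        th += w * dt
        traj.append((x, y))
    return traj

def cost(traj, reference, obstacles, clearance=0.5):
    """Path-following error plus a hard penalty for getting too close to obstacles."""
    c = sum(min(math.dist(p, r) for r in reference) for p in traj)
    for p in traj:
        if any(math.dist(p, o) < clearance for o in obstacles):
            c += 1e3
    return c

def rhmpc_step(state, nominal_controls, reference, obstacles, n_samples=50):
    """Relax the planner's nominal controls by sampling local perturbations and
    return the first control of the best rollout (values are illustrative)."""
    best_u, best_c = nominal_controls[0], float("inf")
    for _ in range(n_samples):
        perturbed = [(v + random.gauss(0, 0.1), w + random.gauss(0, 0.2))
                     for v, w in nominal_controls]
        c = cost(rollout(state, perturbed), reference, obstacles)
        if c < best_c:
            best_c, best_u = c, perturbed[0]
    return best_u

reference = [(i * 0.2, 0.0) for i in range(20)]     # straight reference path
obstacles = [(2.0, 0.0)]                            # one obstacle sitting on the path
nominal = [(1.0, 0.0)] * 10                         # regional planner says: drive straight
print(rhmpc_step((0.0, 0.0, 0.0), nominal, reference, obstacles))
```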

Proceedings ArticleDOI
10 Apr 2010
TL;DR: This paper considers navigation as a social activity among drivers and navigators to improve design of in-vehicle navigation systems and identifies overarching practices that differ greatly from the literature on individual navigation.
Abstract: The design of in-vehicle navigation systems fails to take into account the social nature of driving and automobile navigation. In this paper, we consider navigation as a social activity among drivers and navigators to improve design of such systems. We explore the implications of moving from a map-centered, individually-focused design paradigm to one based upon collaborative human interaction during the navigation task. We conducted a qualitative interaction design study of navigation among three types of teams: parents and their teenage children, couples, and unacquainted individuals. We found that collaboration varied among these different teams, and was influenced by social role, as well as the task role of driver or navigator. We also found that patterns of prompts, maneuvers, and confirmations varied among the three teams. We identify overarching practices that differ greatly from the literature on individual navigation. From these discoveries, we present design implications that can be used to inform future navigation systems.

Proceedings ArticleDOI
03 Dec 2010
TL;DR: A vision-based navigation and localization system is presented that uses two biologically inspired scene-understanding models, derived from studies of human visual capabilities, to accurately localize the robot and to perform visual feedback control that directs its heading toward a user-provided goal location.
Abstract: We present a vision-based navigation and localization system using two biologically-inspired scene understanding models which are studied from human visual capabilities: (1) Gist model which captures the holistic characteristics and layout of an image and (2) Saliency model which emulates the visual attention of primates to identify conspicuous regions in the image. Here the localization system utilizes the gist features and salient regions to accurately localize the robot, while the navigation system uses the salient regions to perform visual feedback control to direct its heading and go to a user-provided goal location. We tested the system on our robot, Beobot2.0, in an indoor and outdoor environment with a route length of 36.67m (10,890 video frames) and 138.27m (28,971 frames), respectively. On average, the robot is able to drive within 3.68cm and 8.78cm (respectively) of the center of the lane.

Journal ArticleDOI
TL;DR: The predictive navigation paradigm is proposed where probabilistic planning is integrated with obstacle avoidance along with future motion prediction of humans and/or other obstacles to solve the problem of autonomous robot navigation in dynamic and congested environments.
Abstract: This paper considers the problem of autonomous robot navigation in dynamic and congested environments. The predictive navigation paradigm is proposed, where probabilistic planning is integrated with obstacle avoidance along with future motion prediction of humans and/or other obstacles. Predictive navigation is performed in a global manner with the use of a hierarchical Partially Observable Markov Decision Process (POMDP) that can be solved on-line at each time step and provides the actual actions the robot performs. Obstacle avoidance is performed within the predictive navigation model with a novel approach: using future motion prediction, paths to the goal position are chosen that are not obstructed by other moving objects' movement, and the robot may increase or decrease its speed of movement or perform detours. The robot is able to decide which obstacle avoidance behavior is optimal in each case within the unified navigation model employed.
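The paper solves a hierarchical POMDP online; as a much simpler, hedged stand-in for the decision it makes at each step, the sketch below evaluates a handful of (speed, heading-change) options against a constant-velocity prediction of a moving obstacle, so the robot may slow down, stop, or detour. The action set and penalty values are assumptions.

```python
import math

def predict_obstacle(obstacle_pos, obstacle_vel, t):
    """Constant-velocity prediction of a moving obstacle's future position."""
    return (obstacle_pos[0] + obstacle_vel[0] * t,
            obstacle_pos[1] + obstacle_vel[1] * t)

def choose_action(robot, goal, obstacle_pos, obstacle_vel,
                  speeds=(0.0, 0.5, 1.0), headings=(-0.5, 0.0, 0.5), dt=1.0):
    """Pick a (speed, heading-change) pair that makes progress to the goal while
    keeping clear of where the obstacle is predicted to be (a one-step stand-in
    for the paper's hierarchical POMDP)."""
    best, best_cost = None, float("inf")
    for v in speeds:                       # slowing down or stopping is allowed
        for dth in headings:               # small detours are allowed
            nx = robot[0] + v * math.cos(robot[2] + dth) * dt
            ny = robot[1] + v * math.sin(robot[2] + dth) * dt
            obs = predict_obstacle(obstacle_pos, obstacle_vel, dt)
            clearance = math.dist((nx, ny), obs)
            cost = math.dist((nx, ny), goal) + (5.0 if clearance < 1.0 else 0.0)
            if cost < best_cost:
                best_cost, best = cost, (v, dth)
    return best

# Robot at the origin heading along +x; a person crosses its path from the left.
print(choose_action(robot=(0.0, 0.0, 0.0), goal=(5.0, 0.0),
                    obstacle_pos=(1.0, 1.0), obstacle_vel=(0.0, -1.0)))
```

On this toy scene the cheapest action is to stop and let the predicted obstacle pass, which mirrors the speed-adaptation behaviour described in the abstract.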

Journal ArticleDOI
TL;DR: An obstacle collision avoidance technique is presented for a wagon-truck-pulling robot that uses an omni-directional wheel system as a safe-movement technology, together with a method for reaching the goal along a globally planned path without colliding with static and dynamic obstacles.

Journal ArticleDOI
TL;DR: This work considers the problems of wheeled mobile robot navigation and guidance towards an unknown stationary or maneuvering target using range-only measurements, and proposes and studies several methods termed Equiangular Navigation Guidance (ENG) laws.

Journal Article
TL;DR: The technology of intelligent mobile robot path planning is one of the most important robot research areas; the methods are classified into four classes: template-based, artificial potential field-based, map building-based, and artificial intelligence-based approaches.
Abstract: The technology of intelligent mobile robot path planning is one of the most important robot research areas. In this paper the methods of path planning are classified into four classes: template-based, artificial potential field-based, map building-based, and artificial intelligence-based approaches. First, the basic theories of the path planning methods are introduced briefly. Then, the advantages and limitations of the methods are pointed out. Finally, the technology development trends of intelligent mobile robot path planning are given.

Journal ArticleDOI
01 Sep 2010-Robotica
TL;DR: The biologically inspired navigation algorithm is the equiangular navigation guidance (ENG) law combined with a local obstacle avoidance technique that uses a system of active sensors to provide the necessary information about obstacles in the vicinity of the robot.
Abstract: The problem of wheeled mobile robot (WMR) navigation toward an unknown target in a cluttered environment is considered. The biologically inspired navigation algorithm is the equiangular navigation guidance (ENG) law combined with a local obstacle avoidance technique. The collision avoidance technique uses a system of active sensors which provides the necessary information about obstacles in the vicinity of the robot. In order for the robot to avoid collision and bypass the en-route obstacles, the angle between the instantaneous moving direction of the robot and a reference point on the surface of the obstacle is kept constant. The performance of the navigation strategy is confirmed with computer simulations and experiments with an ActivMedia Pioneer 3-DX wheeled robot.
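A hedged, two-dimensional caricature of the combined behaviour (not the exact ENG law or the parameters used with the Pioneer 3-DX) is sketched below: far from obstacles the heading keeps a constant angle to the line of sight to the target, and inside the sensor range it instead keeps a constant angle to the bearing of the nearest obstacle point, producing a detour around it. The angles, speed, and switching range are assumptions.

```python
import math

def eng_heading(robot_xy, target_xy, alpha_deg=30.0):
    """Equiangular guidance: head so that the velocity keeps a constant angle
    alpha with the line of sight to the target (an equiangular spiral approach)."""
    los = math.atan2(target_xy[1] - robot_xy[1], target_xy[0] - robot_xy[0])
    return los + math.radians(alpha_deg)

def avoidance_heading(robot_xy, obstacle_xy, avoid_angle_deg=90.0):
    """While bypassing an obstacle, keep a constant angle between the moving
    direction and the bearing to the nearest obstacle point (boundary following)."""
    bearing = math.atan2(obstacle_xy[1] - robot_xy[1], obstacle_xy[0] - robot_xy[0])
    return bearing + math.radians(avoid_angle_deg)

def step(robot_xy, target_xy, obstacle_xy, v=0.3, safe_range=1.0):
    """Switch to obstacle bypassing when an obstacle comes within sensor range."""
    if math.dist(robot_xy, obstacle_xy) < safe_range:
        heading = avoidance_heading(robot_xy, obstacle_xy)
    else:
        heading = eng_heading(robot_xy, target_xy)
    return (robot_xy[0] + v * math.cos(heading),
            robot_xy[1] + v * math.sin(heading))

pos, target, obstacle = (0.0, 0.0), (5.0, 0.0), (2.0, 0.3)
traj = []
for _ in range(8):
    pos = step(pos, target, obstacle)
    traj.append((round(pos[0], 2), round(pos[1], 2)))
print(traj)   # the path bends upward around the obstacle instead of heading straight at it
```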

Journal ArticleDOI
TL;DR: A vision-based position and orientation estimation method for aircraft navigation and control that accounts for a limited camera FOV by releasing tracked features that are about to leave the FOV and tracking new features.
Abstract: While a Global Positioning System (GPS) is the most widely used sensor modality for aircraft navigation, researchers have been motivated to investigate other navigational sensor modalities because of the desire to operate in GPS denied environments. Due to advances in computer vision and control theory, monocular camera systems have received growing interest as an alternative/collaborative sensor to GPS systems. Cameras can act as navigational sensors by detecting and tracking feature points in an image. Current methods have a limited ability to relate feature points as they enter and leave the camera field of view (FOV). A vision-based position and orientation estimation method for aircraft navigation and control is described. This estimation method accounts for a limited camera FOV by releasing tracked features that are about to leave the FOV and tracking new features. At each time instant that new features are selected for tracking, the previous pose estimate is updated. The vision-based estimation scheme can provide input directly to the vehicle guidance system and autopilot. Simulations are performed wherein the vision-based pose estimation is integrated with a nonlinear flight model of an aircraft. Experimental verification of the pose estimation is performed using the modelled aircraft.
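A hedged sketch of the feature bookkeeping described above (releasing features that approach the image border and topping the set back up, with the pose estimate anchored at each handoff) might look as follows; the image size, margins, and the placeholder detector are assumptions, and the real pose chaining is considerably more involved.

```python
import random

FOV_X, FOV_Y = 640, 480
MARGIN = 40                      # pixels from the border at which a feature is released
MIN_FEATURES = 8

def near_border(pt):
    x, y = pt
    return (x < MARGIN or x > FOV_X - MARGIN or
            y < MARGIN or y > FOV_Y - MARGIN)

def detect_new_features(n):
    """Stand-in for a real detector (e.g. corner detection) returning pixel points."""
    return [(random.uniform(MARGIN, FOV_X - MARGIN),
             random.uniform(MARGIN, FOV_Y - MARGIN)) for _ in range(n)]

def manage_features(tracked, pose_estimate):
    """Release features about to leave the FOV and top the set back up with new ones;
    a real system chains the pose estimate through each handoff."""
    kept = [f for f in tracked if not near_border(f)]
    if len(kept) < MIN_FEATURES:
        # Handoff point: anchor the current pose estimate before tracking new features.
        pose_estimate = dict(pose_estimate, anchored=True)
        kept += detect_new_features(MIN_FEATURES - len(kept))
    return kept, pose_estimate

tracked = [(600, 50), (630, 400), (320, 240), (100, 100)]
tracked, pose = manage_features(tracked, {"x": 0.0, "y": 0.0, "yaw": 0.0})
print(len(tracked), pose)
```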

Journal ArticleDOI
TL;DR: This paper presents software simulation of navigation problems of a mobile robot avoiding obstacles in a static environment using both classical and fuzzy based algorithms.

Proceedings ArticleDOI
10 Dec 2010
TL;DR: This paper overviews the main concepts related to TBN and presents an exhaustive survey of the works reported in the literature, including a table comparing the motion and the measurement models, as well as the probabilistic framework used for the estimation.
Abstract: Terrain Based Navigation (TBN) is a method rooted in the early cruise missile navigation systems, when GPS was not yet available. For decades, TBN has been applied as a complementary system to INS navigation for Unmanned Aerial Vehicles (UAVs). In the field of Autonomous Underwater Vehicles (AUVs), it has the potential to bound the drift inherent to dead reckoning navigation based on INS and/or Doppler Velocity Log (DVL) sensors, as well as to make navigation beyond the coverage areas of acoustic transponder networks a reality. This paper overviews the main concepts related to TBN and presents an exhaustive survey of the works reported in the literature. As a main contribution, a table comparing the motion and measurement models, as well as the probabilistic frameworks used for estimation, is reported. An effort has been made to unify the diverse nomenclature used across the surveyed works. We aim for this paper to become a starting point for researchers interested in this technology, with pointers to the most relevant works in the area.
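As a hedged common denominator of the measurement models compared in the survey, the toy one-dimensional particle-filter update below weights position hypotheses by how well a mapped terrain (here, depth) value matches the onboard measurement, which is how TBN bounds dead-reckoning drift. The terrain profile and noise level are made up for the example.

```python
import math
import random

def terrain_depth(x):
    """Assumed bathymetric profile along the vehicle's track (made-up values)."""
    return 50.0 + 0.3 * x + 2.0 * math.sin(x / 7.0)

def tbn_update(particles, measured_depth, sigma=0.5):
    """Weight each position hypothesis by how well the mapped depth matches the sonar;
    this is the core of the measurement models most of the surveyed filters share."""
    weights = [math.exp(-0.5 * ((terrain_depth(x) - measured_depth) / sigma) ** 2)
               for x in particles]
    total = sum(weights) or 1.0
    return [w / total for w in weights]

random.seed(1)
particles = [random.uniform(0.0, 200.0) for _ in range(2000)]   # large dead-reckoning drift
weights = tbn_update(particles, measured_depth=terrain_depth(120.0))
estimate = sum(x * w for x, w in zip(particles, weights))
print(round(estimate, 1))   # the weighted mean should land close to the true position (120)
```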

Proceedings ArticleDOI
07 Sep 2010
TL;DR: This work presents a pedestrian navigation system for indoor environments based on the dead reckoning positioning method, 2D barcodes, and data from accelerometers and magnetometers, and the sensing and computing technologies are available in common smart phones.
Abstract: In this work we present a pedestrian navigation system for indoor environments based on the dead reckoning positioning method, 2D barcodes, and data from accelerometers and magnetometers. All the sensing and computing technologies of our solution are available in common smart phones. The need to create indoor navigation systems arises from the inaccessibility of the classic navigation systems, such as GPS, in indoor environments.
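A hedged, minimal sketch of the pipeline described above is shown below: steps are counted from accelerometer-magnitude threshold crossings, advanced along the magnetometer heading with an assumed stride length, and the accumulated drift is cancelled whenever a 2D barcode with a known position is scanned. All constants and the landmark map are assumptions.

```python
import math

STEP_LENGTH = 0.7        # metres per detected step (assumed average stride)
STEP_THRESHOLD = 11.0    # accel magnitude (m/s^2) above which a step is counted

def count_steps(accel_magnitudes):
    """Very crude step detector: count upward threshold crossings."""
    steps, above = 0, False
    for a in accel_magnitudes:
        if a > STEP_THRESHOLD and not above:
            steps += 1
        above = a > STEP_THRESHOLD
    return steps

def dead_reckon(position, heading_deg, accel_magnitudes):
    """Advance the position by the detected steps along the magnetometer heading."""
    steps = count_steps(accel_magnitudes)
    dx = steps * STEP_LENGTH * math.cos(math.radians(heading_deg))
    dy = steps * STEP_LENGTH * math.sin(math.radians(heading_deg))
    return (position[0] + dx, position[1] + dy)

def on_barcode_scan(barcode_to_position, barcode_id):
    """A scanned 2D barcode gives an absolute fix that cancels accumulated drift."""
    return barcode_to_position[barcode_id]

landmarks = {"corridor-2-door-5": (12.0, 3.5)}          # assumed building map
pos = (0.0, 0.0)
pos = dead_reckon(pos, heading_deg=90, accel_magnitudes=[9.8, 12.1, 9.6, 12.3, 9.7])
print(pos)                                              # roughly (0.0, 1.4): two steps north
pos = on_barcode_scan(landmarks, "corridor-2-door-5")   # drift reset at a known tag
print(pos)
```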

Proceedings ArticleDOI
03 May 2010
TL;DR: A novel approach is contributed for incorporating multipath errors into the conventional GPS sensor model by analyzing environmental structures from online-generated point clouds; results show that positioning accuracy can be significantly improved within urban domains.
Abstract: Autonomous robot navigation in outdoor scenarios gains increasing importance in various growing application areas. Whereas in non-urban domains such as deserts the problem of successful GPS-based navigation appears to be almost solved, navigation in urban domains, particularly in the close vicinity of buildings, is still a challenging problem. In such situations GPS accuracy drops significantly due to multiple signal reflections off larger objects, causing the so-called multipath error. In this paper we contribute a novel approach for incorporating multipath errors into the conventional GPS sensor model by analyzing environmental structures from online-generated point clouds. The approach has been validated by experimental results conducted with an all-terrain robot operating in scenarios requiring close-to-building navigation. The presented results show that positioning accuracy can be significantly improved within urban domains.
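The paper's sensor model is built from online point clouds; as a hedged, simplified illustration of the same idea, the sketch below estimates how much of the surrounding sky is blocked by nearby structure and inflates the GPS standard deviation handed to the localization filter accordingly. The sector size, elevation cut-off, and inflation factor are assumptions.

```python
import math

def sky_blockage(robot_xy, building_points, radius=30.0, min_elevation_deg=15.0):
    """Fraction of azimuth sectors in which a nearby point-cloud return rises above
    a cut-off elevation, taken as a crude proxy for multipath-prone sky blockage."""
    blocked = [False] * 36                          # 10-degree azimuth sectors
    for x, y, z in building_points:
        dx, dy = x - robot_xy[0], y - robot_xy[1]
        dist = math.hypot(dx, dy)
        if dist > radius or dist < 1e-6:
            continue
        elevation = math.degrees(math.atan2(z, dist))
        if elevation > min_elevation_deg:
            sector = int(math.degrees(math.atan2(dy, dx)) % 360 // 10)
            blocked[sector] = True
    return sum(blocked) / len(blocked)

def gps_measurement_sigma(base_sigma, blockage, k=10.0):
    """Inflate the GPS standard deviation used by the localization filter
    when the surrounding structure makes multipath likely (k is illustrative)."""
    return base_sigma * (1.0 + k * blockage)

# A wall of points 5 m east of the robot, 10 m tall
wall = [(5.0, y, z) for y in range(-10, 11) for z in range(1, 11)]
b = sky_blockage((0.0, 0.0), wall)
print(round(b, 2), round(gps_measurement_sigma(2.0, b), 2))
```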

Patent
06 Dec 2010
TL;DR: In this article, the authors present a method for displaying in real-time a video feed of the path ahead while superimposing transparent cartographic information with navigation instructions, aiming to improve the user's navigation experience by making it easier to relate to the real world with 3D maps and representative navigation instructions.
Abstract: A method with which navigation instructions are displayed on a screen. Preferably, using an augmented-reality approach whereby the path to the destination and 3D mapping objects such as buildings and landmarks are highlighted on a video feed of the surrounding environment ahead of the user. The invention is designed to run within embodiments the likes of Personal Digital Assistants (PDAs), smartphones or in-dash vehicle infotainment systems, displaying in real-time a video feed of the path ahead while superimposing transparent cartographic information with navigation instructions. The aim is to improve the user's navigation experience by making it easier to relate to the real world with 3D maps and representative navigation instructions. This method makes it safer to view the navigation screen and the user can locate landmarks, narrow streets and the final destination more easily.

Patent
17 Jun 2010
TL;DR: Personalized navigation as discussed by the authors is a technique for providing personalized navigation through a virtual 3D environment by inputting a representation of the environment that is to be navigated, along with user specified navigation preferences, and outputting a customized navigation experience.
Abstract: Personalized navigation technique embodiments are presented that generally involve providing personalized navigation through a virtual three-dimensional (3D) environment. In one general embodiment this is accomplished by inputting a representation of a virtual 3D environment that is to be navigated, along with a number of user specified navigation preferences, and outputting a customized navigation experience. This navigation experience is produced by first establishing a path through the virtual 3D environment, and then optionally controlling the behavior of a virtual camera which reveals the virtual 3D environment to the user as the user traverses the path. Both the path and the virtual camera behavior are personalized to a user based on the aforementioned navigation preferences.

Patent
26 Jan 2010
TL;DR: When a component of the robot is inside the cooperative task area, its maximum movement speed is limited to a lower value than when it is outside that area, and the robot's motion is restricted so that it does not enter a robot entry-prohibited area.
Abstract: A production system in which a human and a robot may simultaneously perform a cooperative task in the same area while ensuring the human's safety. A robot is positioned at one side of a working table, and an operator is positioned at the other side of the working table. The reachable area of the operator is limited by the working table. The area of the working table is divided into an area where only the operator may perform a task, an area where only the robot may perform a task, and an area that both the operator and the robot may enter. In the cooperation mode, the maximum movement speed of a robot component is limited to a lower value than when the component is outside the cooperative task area, and the motion of the robot is limited so that it does not enter a robot entry-prohibited area.
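As a hedged sketch of the kind of zone-based rule the patent abstract describes (not the patent's actual control logic), a supervisory layer could look up a speed limit per workspace region and refuse motion into the entry-prohibited area; the rectangle layout and speed values below are assumptions.

```python
def max_speed(component_position, in_cooperation_mode,
              cooperative_area, prohibited_area,
              normal_limit=1.0, cooperative_limit=0.25):
    """Return the speed limit (m/s) for a robot component; values are illustrative.
    Raises an error if the commanded position lies in the entry-prohibited area."""
    x, y = component_position
    if (prohibited_area[0] <= x <= prohibited_area[2] and
            prohibited_area[1] <= y <= prohibited_area[3]):
        raise ValueError("motion command rejected: target inside entry-prohibited area")
    in_coop_area = (cooperative_area[0] <= x <= cooperative_area[2] and
                    cooperative_area[1] <= y <= cooperative_area[3])
    if in_cooperation_mode and in_coop_area:
        return cooperative_limit
    return normal_limit

# Working-table layout as axis-aligned rectangles (xmin, ymin, xmax, ymax), assumed.
coop_area = (0.4, 0.0, 0.8, 0.6)        # both operator and robot may enter
prohibited = (0.8, 0.0, 1.2, 0.6)       # operator-only area: the robot must not enter
print(max_speed((0.5, 0.3), True, coop_area, prohibited))   # 0.25 inside the shared area
print(max_speed((0.2, 0.3), True, coop_area, prohibited))   # 1.0 in the robot-only area
```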