
Showing papers on "Mobile robot navigation" published in 2009


Journal ArticleDOI
01 Jun 2009
TL;DR: A novel proposal to solve the problem of path planning for mobile robots based on Simple Ant Colony Optimization Meta-Heuristic (SACO-MH), named SACOdm, where d stands for distance and m for memory.
Abstract: In the Motion Planning research field, heuristic methods have been shown to outperform classical approaches, gaining popularity over the last 35 years. Several ideas have been proposed to overcome the complex nature of this NP-Complete problem. Ant Colony Optimization algorithms are heuristic methods that have been successfully used to deal with this kind of problem. This paper presents a novel proposal to solve the problem of path planning for mobile robots based on the Simple Ant Colony Optimization Meta-Heuristic (SACO-MH). The new method was named SACOdm, where d stands for distance and m for memory. In SACOdm, the decision making process is influenced by the existing distance between the source and target nodes; moreover, the ants can remember the visited nodes. The newly added features give a speedup of around 10 in many cases. The selection of the optimal path relies on the criterion of a Fuzzy Inference System, which is adjusted using a Simple Tuning Algorithm. The path planner application has two operating modes: one is for virtual environments, and the second one works with a real mobile robot using wireless communication. Both operating modes are global planners for plain terrain and support static and dynamic obstacle avoidance.
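As an editorial illustration of the kind of distance-biased transition rule this abstract describes, here is a minimal Python sketch (hypothetical names, a simple 2-D node graph, and an ant memory implemented as a visited set); it is not the authors' SACOdm implementation.

```python
import math
import random

def next_node(current, target, candidates, pheromone, coords, visited,
              alpha=1.0, beta=2.0):
    """Pick the ant's next node.

    The transition weight combines the pheromone on the edge with a heuristic
    that favors nodes closer to the target (the distance bias described in the
    abstract); nodes remembered as already visited are excluded.
    """
    def dist(a, b):
        (ax, ay), (bx, by) = coords[a], coords[b]
        return math.hypot(ax - bx, ay - by)

    options = [n for n in candidates if n not in visited]
    if not options:
        return None  # dead end: no unvisited neighbor
    weights = []
    for n in options:
        tau = pheromone.get((current, n), 1e-6)          # pheromone level
        eta = 1.0 / (dist(n, target) + 1e-6)             # distance-to-target heuristic
        weights.append((tau ** alpha) * (eta ** beta))
    r, acc = random.uniform(0.0, sum(weights)), 0.0      # roulette-wheel selection
    for n, w in zip(options, weights):
        acc += w
        if r <= acc:
            return n
    return options[-1]

# Tiny demo on a 4-node graph laid out on a line.
coords = {0: (0, 0), 1: (1, 0), 2: (2, 0), 3: (3, 0)}
print(next_node(current=1, target=3, candidates=[0, 2], pheromone={},
                coords=coords, visited={0, 1}))   # 2 (node 0 is remembered)
```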

366 citations


Journal ArticleDOI
TL;DR: This low-cost indoor navigation system runs on off-the-shelf camera phones and uses built-in cameras to determine user location in real time by detecting unobtrusive fiduciary markers, enabling quick deployment in new environments.
Abstract: This low-cost indoor navigation system runs on off-the-shelf camera phones. More than 2,000 users at four different large-scale events have already used it. The system uses built-in cameras to determine user location in real time by detecting unobtrusive fiduciary markers. The required infrastructure is limited to paper markers and static digital maps, and common devices are used, facilitating quick deployment in new environments. The authors have studied the application quantitatively in a controlled environment and qualitatively during deployment at four large international events. According to test users, marker-based navigation is easier to use than conventional mobile digital maps. Moreover, the users' location awareness in navigation tasks improved. Experiences drawn from questionnaires, usage log data, and user interviews further highlight the benefits of this approach.

258 citations


Journal ArticleDOI
TL;DR: A Bayesian optimization method that dynamically trades off exploration and exploitation for optimal sensing with a mobile robot and is applicable to other closely-related domains, including active vision, sequential experimental design, dynamic sensing and calibration with mobile sensors.
Abstract: We address the problem of online path planning for optimal sensing with a mobile robot. The objective of the robot is to learn the most about its pose and the environment given time constraints. We use a POMDP with a utility function that depends on the belief state to model the finite horizon planning problem. We replan as the robot progresses throughout the environment. The POMDP is high-dimensional, continuous, non-differentiable, nonlinear, non-Gaussian and must be solved in real-time. Most existing techniques for stochastic planning and reinforcement learning are therefore inapplicable. To solve this extremely complex problem, we propose a Bayesian optimization method that dynamically trades off exploration (minimizing uncertainty in unknown parts of the policy space) and exploitation (capitalizing on the current best solution). We demonstrate our approach with a visually guided mobile robot. The solution proposed here is also applicable to other closely-related domains, including active vision, sequential experimental design, dynamic sensing and calibration with mobile sensors.
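A minimal sketch of the exploration/exploitation trade-off at the heart of Bayesian optimization, using an upper-confidence-bound acquisition over a toy 1-D parameter grid; the surrogate values, names, and acquisition choice are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

def ucb_acquisition(mean, std, kappa=2.0):
    """Upper-confidence-bound score: exploitation (predicted utility) plus
    an exploration bonus proportional to the predictive uncertainty."""
    return mean + kappa * std

# Toy example: pick the next policy parameter to evaluate from a candidate set.
candidates = np.linspace(0.0, 1.0, 50)          # 1-D policy parameter grid
mean = np.sin(3 * candidates)                   # surrogate's predicted utility
std = 0.5 * np.abs(candidates - 0.5) + 0.05     # surrogate's uncertainty
best = candidates[np.argmax(ucb_acquisition(mean, std))]
print(f"next parameter to evaluate: {best:.3f}")
```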

249 citations


Journal ArticleDOI
TL;DR: The proposed algorithm that uses only passive RFID is able to estimate the robot's location and orientation more precisely by using trigonometric functions and the IC tags' Cartesian coordinates in a regular gridlike pattern.
Abstract: This paper proposes an efficient method for localization and pose estimation for mobile robot navigation using passive radio-frequency identification (RFID). We assume that the robot is able to identify IC tags and measure its pose based on the relation between the previous and current locations of the detected IC tags. However, there arises the problem of uncertainty of location due to the nature of the antenna and IC tags. In other words, an error is always present which is relative to the sensing area of the antenna. Many studies have used external sensors to reduce the location errors, and few purely RFID-driven systems have been presented. Our proposed algorithm, which uses only passive RFID, is able to estimate the robot's location and orientation more precisely by using trigonometric functions and the IC tags' Cartesian coordinates in a regular gridlike pattern. The experimental results show that the proposed method effectively estimates both the location and the pose of a mobile robot during navigation.
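A toy sketch of how a pose could be recovered from two successively detected grid tags using trigonometric functions; the grid layout, tag coordinates, and the assumption that exactly two consecutive readings are used are hypothetical, not the paper's algorithm.

```python
import math

def pose_from_two_tags(prev_tag, curr_tag):
    """Estimate position and heading from two successively detected IC tags
    laid out at known grid coordinates (x, y) in meters.

    The position is taken as the current tag's coordinate; the heading is the
    angle of the displacement vector between the two tag readings.
    """
    (x0, y0), (x1, y1) = prev_tag, curr_tag
    heading = math.atan2(y1 - y0, x1 - x0)   # radians, world frame
    return (x1, y1), heading

# Tags on a 0.5 m grid: the robot moved from tag (1.0, 0.5) to tag (1.5, 1.0).
position, theta = pose_from_two_tags((1.0, 0.5), (1.5, 1.0))
print(position, math.degrees(theta))   # (1.5, 1.0) 45.0
```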

241 citations


Proceedings ArticleDOI
04 Apr 2009
TL;DR: This work proposes a novel concept of an in-vehicle navigation display system that displays navigation information directly onto the vehicle's windshield, superimposing it on the driver's view of the actual road.
Abstract: A common effect of aging is decline in spatial cognition. This is an issue for all elders, but particularly for elder drivers. To address this driving issue, we propose a novel concept of an in-vehicle navigation display system that displays navigation information directly onto the vehicle's windshield, superimposing it on the driver's view of the actual road. An evaluation of our simulated version of this display shows that it results in a significant reduction in navigation errors and distraction-related measures compared to a typical in-car navigation display for elder drivers. These results help us understand how context-sensitive information and a simulated augmented reality representation can be combined to minimize the cognitive load in translating between virtual/information spaces and the real world.

237 citations


Proceedings ArticleDOI
10 Oct 2009
TL;DR: This work generalizes the concept of velocity obstacles, which has been used for navigation among dynamic obstacles, and takes into account the constraints of a car-like robot to find controls that will allow collision free navigation in dynamic environments.
Abstract: We address the problem of real-time navigation in dynamic environments for car-like robots. We present an approach to identify controls that will lead to a collision with a moving obstacle at some point in the future. Our approach generalizes the concept of velocity obstacles, which have been used for navigation among dynamic obstacles, and takes into account the constraints of a car-like robot. We use this formulation to find controls that will allow collision free navigation in dynamic environments. Finally, we demonstrate the performance of our algorithm on a simulated car-like robot among moving obstacles.
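For context, the classical point-mass velocity-obstacle test that the paper generalizes can be sketched as follows; the car-like kinematic constraints that are the paper's actual contribution are not modeled here.

```python
import math

def in_velocity_obstacle(p_rob, v_rob, p_obs, v_obs, combined_radius):
    """Return True if the robot's current velocity leads to a collision with a
    moving obstacle at some future time (classical velocity-obstacle test).

    Works on the relative velocity: a collision occurs iff the ray from the
    robot along (v_rob - v_obs) passes within combined_radius of the obstacle.
    """
    rx, ry = p_obs[0] - p_rob[0], p_obs[1] - p_rob[1]    # relative position
    vx, vy = v_rob[0] - v_obs[0], v_rob[1] - v_obs[1]    # relative velocity
    v2 = vx * vx + vy * vy
    if v2 == 0.0:
        return math.hypot(rx, ry) <= combined_radius
    t = (rx * vx + ry * vy) / v2         # time of closest approach
    if t < 0.0:
        return False                      # moving apart: no future collision
    cx, cy = rx - t * vx, ry - t * vy     # closest-approach offset
    return math.hypot(cx, cy) <= combined_radius

# Robot heading straight toward a stationary obstacle 5 m ahead.
print(in_velocity_obstacle((0, 0), (1.0, 0.0), (5.0, 0.0), (0.0, 0.0), 0.5))  # True
```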

229 citations


Book ChapterDOI
14 Jul 2009
TL;DR: The development of a mobile robot to assist people in their home is a long term goal of Fraunhofer IPA and current developments aim at applying the robot in an eldercare facility in order to support the personnel in their daily tasks.
Abstract: The development of a mobile robot to assist people in their home is a long term goal of Fraunhofer IPA. As a vision of a future household product, the latest prototype, Care-O-bot® 3, is equipped with the latest industrial state-of-the-art hardware components and offers all modern multimedia and interaction equipment as well as the most advanced sensors and control. Care-O-bot® 3 has been presented to the public on several occasions, where it distributed drinks to the visitors of trade fairs and events. Current developments aim at applying the robot in an eldercare facility in order to support the personnel in their daily tasks.

221 citations


Proceedings ArticleDOI
10 Nov 2009
TL;DR: The COMPANION framework is introduced, which can express an arbitrary number of social conventions and explicitly accounts for these conventions in the planning phase, and it is verified that the method produces human-like behavior in a mobile robot.
Abstract: This paper introduces the COMPANION framework: a Constraint-Optimizing Method for Person-Acceptable NavigatION. In this framework, human social conventions, such as personal space and tending to one side of hallways, are represented as constraints on the robot's navigation. These constraints are accounted for at the global planning level. In this paper, we present the rationale for, and implementation of, this framework, and we describe the experiments we have run in simulation to verify that the method produces human-like behavior in a mobile robot. Our approach is novel in that it can express an arbitrary number of social conventions and explicitly accounts for these conventions in the planning phase.
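A toy illustration of how social conventions might be expressed as additive costs at the global planning level; the terms, weights, and the keep-right convention below are illustrative assumptions, not the COMPANION constraint set.

```python
import math

def social_cost(cell, people, hallway_center_y, w_person=5.0, w_side=1.0):
    """Cost of occupying a grid cell, combining simple social terms.

    - personal space: penalty grows as the cell gets closer to any person
    - hallway side: penalty for being left of the hallway centerline
      (a keep-to-the-right convention)
    """
    x, y = cell
    cost = 0.0
    for px, py in people:
        d = math.hypot(x - px, y - py)
        cost += w_person * math.exp(-d)          # soft personal-space bubble
    if y > hallway_center_y:                     # left side of the hallway
        cost += w_side * (y - hallway_center_y)
    return cost

# A global planner (A*, etc.) would add this to its distance cost per expanded cell.
print(round(social_cost((2.0, 1.2), people=[(2.5, 1.0)], hallway_center_y=1.0), 2))
```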

181 citations


Book
04 Dec 2009
TL;DR: The major contributions of this thesis arise from the formulation of a new approach to the mapping of terrain features that provides improved computational efficiency in the SLAM algorithm.
Abstract: Stefan Bernard Williams, Doctor of Philosophy, The University of Sydney, September 2001. Efficient Solutions to Autonomous Mapping and Navigation Problems. This thesis deals with the Simultaneous Localisation and Mapping algorithm as it pertains to the deployment of mobile systems in unknown environments. Simultaneous Localisation and Mapping (SLAM) as defined in this thesis is the process of concurrently building up a map of the environment and using this map to obtain improved estimates of the location of the vehicle. In essence, the vehicle relies on its ability to extract useful navigation information from the data returned by its sensors. The vehicle typically starts at an unknown location with no a priori knowledge of landmark locations. From relative observations of landmarks, it simultaneously computes an estimate of vehicle location and an estimate of landmark locations. While continuing in motion, the vehicle builds a complete map of landmarks and uses these to provide continuous estimates of the vehicle location. The potential for this type of navigation system for autonomous systems operating in unknown environments is enormous. One significant obstacle on the road to the implementation and deployment of large scale SLAM algorithms is the computational effort required to maintain the correlation information between features in the map and between the features and the vehicle. Performing the update of the covariance matrix is O(n^3) for a straightforward implementation of the Kalman Filter. In the case of the SLAM algorithm, this complexity can be reduced to O(n^2) given the sparse nature of typical observations. Even so, this implies that the computational effort will grow with the square of the number of features maintained in the map. For maps containing more than a few tens of features, this computational burden will quickly make the update intractable especially if the observation rates are high. An effective map-management technique is therefore required in order to help manage this complexity. The major contributions of this thesis arise from the formulation of a new approach to the mapping of terrain features that provides improved computational efficiency in the SLAM algorithm. Rather than incorporating every observation directly into the global map of the environment, the Constrained Local Submap Filter (CLSF) relies on creating an independent, local submap of the features in the immediate vicinity of the vehicle. This local submap is then periodically fused into the global map of the environment. This representation is shown to reduce the computational complexity of maintaining the global map estimates as well as improving the data association process by allowing the association decisions to be deferred until an improved local picture of the environment is available. This approach also lends itself well to three natural extensions to the representation that are also outlined in the thesis. These include the prospect of deploying multi-vehicle SLAM, the
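To make the complexity argument concrete, here is a generic EKF observation update over a joint vehicle-plus-landmark state; every update manipulates the full covariance matrix, which is why a naive SLAM update scales with the square of the number of mapped features. This is a textbook sketch, not the thesis' CLSF implementation.

```python
import numpy as np

def ekf_update(x, P, z, h, H, R):
    """One EKF observation update over the joint state (vehicle + landmarks).

    The gain and covariance computations involve the full covariance P of
    size (3 + 2n) x (3 + 2n), so each update costs O(n^2) even with a sparse
    observation Jacobian H.
    """
    S = H @ P @ H.T + R                 # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain: touches all of P
    x = x + K @ (z - h)                 # state correction
    P = (np.eye(len(x)) - K @ H) @ P    # covariance correction: O(n^2) work
    return x, P

# Toy joint state: vehicle pose (x, y, theta) plus 2 landmarks -> 7 states.
n_states = 7
x = np.zeros(n_states)
P = np.eye(n_states)
H = np.zeros((2, n_states)); H[0, 3] = H[1, 4] = 1.0   # observe landmark 1 directly
x, P = ekf_update(x, P, z=np.array([2.0, 1.0]), h=x[3:5], H=H, R=0.1 * np.eye(2))
print(P.shape)   # (7, 7): every update manipulates the full joint covariance
```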

176 citations


Journal ArticleDOI
TL;DR: A vision-based navigation architecture is proposed which combines inertial sensors, visual odometry, and registration of the on-board video to a geo-referenced aerial image, and is capable of providing high-rate and drift-free state estimation for UAV autonomous navigation without GPS.
Abstract: This paper investigates the possibility of augmenting an Unmanned Aerial Vehicle (UAV) navigation system with a passive video camera in order to cope with long-term GPS outages. The paper proposes a vision-based navigation architecture which combines inertial sensors, visual odometry, and registration of the on-board video to a geo-referenced aerial image. The vision-aided navigation system developed is capable of providing high-rate and drift-free state estimation for UAV autonomous navigation without the GPS system. Due to the use of image-to-map registration for absolute position calculation, drift-free position performance depends on the structural characteristics of the terrain. Experimental evaluation of the approach based on offline flight data is provided. In addition the architecture proposed has been implemented on-board an experimental UAV helicopter platform and tested during vision-based autonomous flights.

169 citations


Journal ArticleDOI
Loulin Huang
TL;DR: The potential field method is applied to both path and speed planning (i.e., velocity planning) for a mobile robot in a dynamic environment where the target and the obstacles are moving.
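A minimal static potential-field step (attraction to the target, repulsion from nearby obstacles) for orientation; the paper's treatment of moving targets and obstacles and of speed planning is not reproduced here, and all gains are illustrative.

```python
import math

def potential_step(robot, target, obstacles, k_att=1.0, k_rep=2.0, influence=2.0):
    """Return a small motion step along the negative gradient of the potential:
    attraction toward the target plus repulsion from obstacles within range."""
    fx = k_att * (target[0] - robot[0])
    fy = k_att * (target[1] - robot[1])
    for ox, oy in obstacles:
        dx, dy = robot[0] - ox, robot[1] - oy
        d = math.hypot(dx, dy)
        if 0.0 < d < influence:                       # only nearby obstacles repel
            mag = k_rep * (1.0 / d - 1.0 / influence) / (d * d)
            fx += mag * dx
            fy += mag * dy
    norm = math.hypot(fx, fy) or 1.0
    return fx / norm * 0.1, fy / norm * 0.1           # 0.1 m step along the force

# Target 5 m ahead, one obstacle slightly off the direct line.
print(potential_step((0.0, 0.0), (5.0, 0.0), obstacles=[(1.0, 0.3)]))
```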

Proceedings ArticleDOI
04 Apr 2009
TL;DR: This article visually augments two traditional navigation methods and develops two special-purpose techniques that exploit the connection information provided by the network to help navigate these large spaces.
Abstract: Applications supporting navigation in large networks are used every day by millions of people. They include road map navigators, flight route visualization systems, and network visualization systems using node-link diagrams. These applications currently provide generic interaction methods for navigation: pan-and-zoom and sometimes bird's eye views. This article explores the idea of exploiting the connection information provided by the network to help navigate these large spaces. We visually augment two traditional navigation methods, and develop two special-purpose techniques. The first new technique, called "Link Sliding", provides guided panning when continuously dragging along a visible link. The second technique, called "Bring & Go", brings adjacent nodes nearby when pointing to a node. We compare the performance of these techniques in both an adjacency exploration task and a node revisiting task. This comparison illustrates the various advantages of content-aware network navigation techniques. A significant speed advantage is found for the Bring & Go technique over other methods.

Journal ArticleDOI
01 Jan 2009
TL;DR: Fuzzy logic controllers with different membership functions are developed and used to navigate mobile robots in a totally unknown environment; the robots are shown to avoid obstacles, negotiate dead ends, and reach their targets efficiently.
Abstract: In this paper, fuzzy-logic-based navigation techniques for large numbers of mobile robots (up to one thousand) are investigated in a totally unknown environment. Fuzzy logic controllers (FLCs) using different membership functions are developed and used to navigate the mobile robots. First, a fuzzy controller is used with four types of input members and two types of output members, with three parameters each. Next, two types of fuzzy controllers are developed having the same input and output members, with five parameters each. Each robot has an array of ultrasonic sensors for measuring the distances of obstacles around it and an infrared sensor for detecting the bearing of the target. These techniques have been demonstrated in various exercises, which show that the robots are able to avoid obstacles, negotiate dead ends, and reach their targets efficiently. Amongst the techniques developed, the FLC with Gaussian membership functions is found to be the most efficient for mobile robot navigation.
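A tiny fuzzy-inference fragment with Gaussian membership functions for a single input (front obstacle distance) and weighted-average defuzzification; the actual controllers use several sensor inputs and larger rule bases, so the rule set and constants here are purely illustrative.

```python
import math

def gauss(x, center, sigma):
    """Gaussian membership function."""
    return math.exp(-((x - center) ** 2) / (2.0 * sigma ** 2))

def steering_from_front_distance(d):
    """One-input fuzzy rule base:
       near -> turn hard (1.0 rad/s), medium -> turn gently (0.4), far -> straight (0.0).
    The output is the firing-strength-weighted average of the rule outputs."""
    mu_near = gauss(d, center=0.3, sigma=0.2)
    mu_med  = gauss(d, center=1.0, sigma=0.3)
    mu_far  = gauss(d, center=2.5, sigma=0.8)
    num = mu_near * 1.0 + mu_med * 0.4 + mu_far * 0.0
    den = mu_near + mu_med + mu_far
    return num / den

for d in (0.3, 1.0, 3.0):                       # obstacle distance in meters
    print(d, round(steering_from_front_distance(d), 2))
```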

Journal ArticleDOI
TL;DR: The principles and system components for navigation in urban environments, information retrieval through natural human-robot interaction, the construction of a suitable semantic representation as well as results from the field experiment are described.
Abstract: The Autonomous City Explorer (ACE) project combines research from autonomous outdoor navigation and human-robot interaction. The ACE robot is capable of navigating unknown urban environments without the use of GPS data or prior map knowledge. It finds its way by interacting with pedestrians in a natural and intuitive way and building a topological representation of its surroundings. In a recent experiment the robot managed to successfully travel a 1.5 km distance from the campus of the Technische Universität München to Marienplatz, the central square of Munich. This article describes the principles and system components for navigation in urban environments, information retrieval through natural human-robot interaction, the construction of a suitable semantic representation as well as results from the field experiment.

Journal ArticleDOI
TL;DR: This research provides a possible seamless pedestrian navigation solution which can be applied to a wide range of areas where the global navigation satellite system (GNSS) signal remains vulnerable.
Abstract: This paper addresses an approach which integrates activity classification and dead reckoning techniques in step-based pedestrian navigation. In the proposed method, the pedestrian is equipped with a prototype wearable sensor module to record accelerations and determine the headings while walking. To improve the step detection accuracy, different types of activities are classified according to extracted features by means of a probabilistic neural network (PNN). The vertical acceleration data, which reflect the periodic vibration during the gait cycle, are filtered through a wavelet transform before being used to count the steps and assess the step length from which the distance traveled is estimated. By coupling the distance with the azimuth, navigation through pedestrian dead reckoning is implemented. This research provides a possible seamless pedestrian navigation solution which can be applied to a wide range of areas where the global navigation satellite system (GNSS) signal remains vulnerable. Results of two experiments in this paper reveal that the proposed approach is effective in reducing navigation errors and improving accuracy.
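The core dead-reckoning update, once a step has been detected and its length estimated, reduces to advancing the position along the current heading; the sketch below assumes a north-referenced azimuth and omits the PNN activity classification and wavelet-based step detection.

```python
import math

def pdr_update(position, step_length, heading_deg):
    """Advance the pedestrian's 2-D position by one detected step.

    heading_deg is the azimuth from the wearable sensor (0 deg = north,
    clockwise positive), so east is the x axis and north is the y axis.
    """
    heading = math.radians(heading_deg)
    x, y = position
    return (x + step_length * math.sin(heading),
            y + step_length * math.cos(heading))

pos = (0.0, 0.0)
for step_len, azimuth in [(0.7, 0.0), (0.7, 0.0), (0.7, 90.0)]:   # two steps north, one east
    pos = pdr_update(pos, step_len, azimuth)
print(tuple(round(v, 2) for v in pos))   # (0.7, 1.4)
```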

Journal ArticleDOI
TL;DR: This paper reduces the position errors in navigation of the inverted pendulum robot by occasionally resetting the balance position while traveling, using simple methods that require neither an external observer nor additional sensors.
Abstract: Our goal is to configure an automatic baggage-transportation system using an inverted pendulum robot and realize a navigation function in a real environment. The system consists of two cooperative subsystems: a balancing-and-traveling control subsystem and a navigation subsystem. Position errors of the inverted pendulum robot are often caused by drift in the gyro sensor and by a shift in the center of gravity due to loaded baggage when the linear state feedback control method is applied for balancing and traveling. We have reduced the position errors for navigation by occasionally resetting the balance position while traveling, using simple methods that require neither an external observer nor additional sensors. In this paper, we state the method and show the experimental results of navigation in a real environment by the implemented robot system.

Proceedings ArticleDOI
10 Oct 2009
TL;DR: A navigation algorithm for mobile robots in unknown rough terrain has been developed that is solely based on stereo images and suitable for wheeled and legged robots.
Abstract: A navigation algorithm for mobile robots in unknown rough terrain has been developed. The algorithm is solely based on stereo images and suitable for wheeled and legged robots. The navigation system is able to guide the robot along a short and safe path to a goal specified by the operator and given in coordinates relative to the starting point of the robot. The algorithm uses visual odometry for localization. The terrain is modeled from stereo images and its traversability is estimated. A D* Lite planner is used for efficiently planning a short and safe path by incorporating terrain traversability in the planning process. The robot actively explores its environment as it follows the path to the goal. The algorithm has been tested on a wheel driven mobile robot and on a six-legged walking robot on rough terrain.
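One plausible way to fold a traversability estimate into the edge costs a grid planner such as D* Lite consumes is sketched below; the threshold and inverse-traversability weighting are assumptions, not the paper's exact cost shaping.

```python
import math

def edge_cost(a, b, traversability):
    """Cost of moving between adjacent grid cells a and b.

    traversability maps a cell to a value in (0, 1]: 1 = easy ground, values
    near 0 = nearly untraversable. Cells below a threshold are treated as
    obstacles (infinite cost); otherwise the Euclidean step length is
    inflated by the inverse traversability of the target cell.
    """
    t = traversability.get(b, 0.0)
    if t < 0.2:
        return math.inf
    step = math.hypot(b[0] - a[0], b[1] - a[1])
    return step / t

trav = {(1, 0): 0.9, (1, 1): 0.4, (2, 1): 0.1}
print(edge_cost((0, 0), (1, 0), trav))   # ~1.11 (easy ground)
print(edge_cost((1, 0), (1, 1), trav))   # 2.5  (rough terrain)
print(edge_cost((1, 1), (2, 1), trav))   # inf  (treated as an obstacle)
```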

01 Jan 2009
TL;DR: A stereo-based person detection and tracking method for a mobile robot that can robustly follow a specific person while recognizing the target and other persons with occasional occlusions is described.
Abstract: This paper describes a stereo-based person detection and tracking method for a mobile robot that can follow a specific person in dynamic environments. Many previous works on person detection use laser range finders which can provide very accurate range measurements. Stereo-based systems have also been popular, but most of them have not been used for controlling a real robot. We propose a detection method using depth templates of person shape applied to a dense depth image. We also develop an SVM-based verifier for eliminating false positives. For person tracking by a mobile platform, we formulate the tracking problem using the Extended Kalman filter. The robot continuously estimates the position and the velocity of persons in the robot local coordinates, which are then used for appropriately controlling the robot motion. Although our approach is relatively simple, our robot can robustly follow a specific person while recognizing the target and other persons with occasional occlusions. Index Terms—Person detection and tracking, mobile robot, stereo.
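A constant-velocity Kalman filter over a person's position and velocity in the robot frame, updated with position-only detections, illustrates the tracking formulation; the paper uses an extended (nonlinear) variant and couples it with the depth-template detector and SVM verifier, which are omitted here.

```python
import numpy as np

dt = 0.1                                   # tracker update period [s]
F = np.array([[1, 0, dt, 0],               # constant-velocity motion model
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], dtype=float)
H = np.array([[1, 0, 0, 0],                # only the person's (x, y) is measured
              [0, 1, 0, 0]], dtype=float)
Q = 0.01 * np.eye(4)                       # process noise
R = 0.05 * np.eye(2)                       # stereo measurement noise

def track_step(x, P, z):
    """One predict/update cycle: x = [px, py, vx, vy], z = detected (px, py)."""
    x, P = F @ x, F @ P @ F.T + Q                        # predict
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)                       # Kalman gain
    x = x + K @ (z - H @ x)                              # update with the detection
    P = (np.eye(4) - K @ H) @ P
    return x, P

x, P = np.array([2.0, 0.0, 0.0, 0.0]), np.eye(4)
for z in ([2.1, 0.0], [2.2, 0.05], [2.3, 0.1]):          # person walking away
    x, P = track_step(x, P, np.array(z))
print(np.round(x, 2))    # estimated position and velocity in the robot frame
```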

Journal ArticleDOI
TL;DR: A simple approach for vision-based path following for a mobile robot is presented, based upon a novel concept called the funnel lane, in which the coordinates of feature points during the replay phase are compared with those obtained during the teaching phase in order to determine the turning direction.
Abstract: We present a simple approach for vision-based path following for a mobile robot. Based upon a novel concept called the funnel lane, the coordinates of feature points during the replay phase are compared with those obtained during the teaching phase in order to determine the turning direction. Increased robustness is achieved by coupling the feature coordinates with odometry information. The system requires a single off-the-shelf, forward-looking camera with no calibration (either external or internal, including lens distortion). Implicit calibration of the system is needed only in the form of a single controller gain. The algorithm is qualitative in nature, requiring no map of the environment, no image Jacobian, no homography, no fundamental matrix, and no assumption about a flat ground plane. Experimental results demonstrate the capability of real-time autonomous navigation in both indoor and outdoor environments and on flat, slanted, and rough terrain with dynamic occluding objects for distances of hundreds of meters. We also demonstrate that the same approach works with wide-angle and omnidirectional cameras with only slight modification.
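The qualitative flavor of the decision can be sketched as a vote over matched feature coordinates: if features sit, on average, to one side of where they were during teaching, steer toward that side to re-center them. The threshold and sign convention below are illustrative, not the funnel-lane geometry itself.

```python
def turning_direction(replay_u, teach_u, deadband=2.0):
    """Qualitative steering decision from matched feature coordinates.

    replay_u / teach_u: horizontal pixel coordinates of the same features in
    the current (replay) image and in the stored (teaching) image. If features
    appear, on average, to the right of where they were during teaching, the
    robot turns right to re-center them, and vice versa.
    """
    offsets = [r - t for r, t in zip(replay_u, teach_u)]
    mean_offset = sum(offsets) / len(offsets)
    if mean_offset > deadband:
        return "turn_right"
    if mean_offset < -deadband:
        return "turn_left"
    return "straight"

print(turning_direction(replay_u=[310, 412, 150], teach_u=[300, 405, 140]))  # turn_right
```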

Journal IssueDOI
TL;DR: This paper describes the design, implementation, and experimental results of a navigation system for planetary rovers called Terrain Adaptive Navigation (TANav), designed to enable greater access to and more robust operations within terrains of widely varying slippage.
Abstract: This paper describes the design, implementation, and experimental results of a navigation system for planetary rovers called Terrain Adaptive Navigation (TANav). This system was designed to enable greater access to and more robust operations within terrains of widely varying slippage. The system achieves this goal by using onboard stereo cameras to remotely classify surrounding terrain, predict the slippage of that terrain, and use this information in the planning of a path to the goal. This navigation system consists of several integrated techniques: goodness map generation, terrain triage, terrain classification, remote slip prediction, path planning, High-Fidelity Traversability Analysis, and slip-compensated path following. Results from experiments with an end-to-end onboard implementation of the TANav system in a Mars analog environment are shown and compared to results from experiments with a more traditional navigation system that does not account for terrain properties. © 2009 Wiley Periodicals, Inc.

Proceedings ArticleDOI
10 Oct 2009
TL;DR: A novel approach for detecting low, grass-like vegetation using laser remission values is proposed, and the laser remission is modeled as a function of distance, incidence angle, and material.
Abstract: This paper addresses the problem of vegetation detection from laser measurements. The ability to detect vegetation is important for robots operating outdoors, since it enables a robot to navigate more efficiently and safely in such environments. In this paper, we propose a novel approach for detecting low, grass-like vegetation using laser remission values. In our algorithm, the laser remission is modeled as a function of distance, incidence angle, and material. We classify surface terrain based on 3D scans of the surroundings of the robot. The model is learned in a self-supervised way using vibration-based terrain classification. In all real world experiments we carried out, our approach yields a classification accuracy of over 99%. We furthermore illustrate how the learned classifier can improve the autonomous navigation capabilities of mobile robots.

Journal ArticleDOI
TL;DR: The direction correction algorithm is proposed to triangulate the location of the transponder with the most recent three DOA estimates and theoretical simulation results verify the reliability of the proposed algorithm that quantifies the potential error in the DOA estimation.
Abstract: A self-contained direction sensing radio frequency identification (RFID) reader is developed employing a dual-directional antenna for automated target acquisition and docking of a mobile robot in indoor environments. The dual-directional antenna estimates the direction of arrival (DOA) of signals from a transponder by using the ratio of the received signal strengths between two adjacent antennas. This enables the robot to continuously monitor the changes in transponder directions and ensures reliable docking guidance to the target transponder. One of the technical challenges associated with this RFID direction finding is to sustain the accuracy of the estimated DOA that varies according to environmental conditions. It is often the case that the robot loses its way to the target in a cluttered environment. To cope with this problem, the direction correction algorithm is proposed to triangulate the location of the transponder with the most recent three DOA estimates. Theoretical simulation results verify the reliability of the proposed algorithm that quantifies the potential error in the DOA estimation. Using the algorithm, we validate mobile robot docking to an RFID transponder in an office environment occupied by obstacles.
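Two toy pieces sketch the idea: a bearing interpolated from the signal-strength ratio of two adjacent directional antennas, and a least-squares triangulation of the transponder from bearings taken at known robot poses. The interpolation model and all values are assumptions, not the paper's direction correction algorithm.

```python
import math
import numpy as np

def doa_from_rss(rss_a, rss_b, angle_a_deg, angle_b_deg):
    """Interpolate a direction of arrival between two adjacent antenna axes,
    weighting each axis by its received signal strength (a simple stand-in
    for the ratio-based estimate described in the abstract)."""
    w = rss_a / (rss_a + rss_b)
    return w * angle_a_deg + (1.0 - w) * angle_b_deg

def triangulate(poses, bearings_deg):
    """Least-squares intersection of bearing lines taken from known (x, y) poses."""
    A, b = [], []
    for (x, y), ang in zip(poses, bearings_deg):
        t = math.radians(ang)
        # Line through (x, y) with direction (cos t, sin t):
        #   sin(t)*X - cos(t)*Y = sin(t)*x - cos(t)*y
        A.append([math.sin(t), -math.cos(t)])
        b.append(math.sin(t) * x - math.cos(t) * y)
    sol, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return tuple(sol)

print(round(doa_from_rss(0.8, 0.2, 0.0, 90.0), 1))   # 18.0 deg, toward the stronger antenna

# Three DOA estimates taken as the robot moves; true transponder at (2, 2).
poses = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
bearings = [45.0, 63.43, 90.0]
print(tuple(round(v, 2) for v in triangulate(poses, bearings)))   # ~(2.0, 2.0)
```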

Journal ArticleDOI
TL;DR: A new fuzzy logic algorithm is developed for mobile robot navigation in local environments that resolves the problem of limit cycles in any type of dead-ends encountered on the way to the target.

Journal ArticleDOI
TL;DR: It is experimentally verified that a robot safely navigates in a dynamic indoor environment by adopting the proposed scheme, which clearly indicates the structural procedure for modeling and exploiting the risk of navigation.
Abstract: We present one approach to achieve safe navigation in an indoor dynamic environment. So far, there have been various useful collision avoidance algorithms and path planning schemes. However, those algorithms possess a fundamental limitation in that the robot can avoid only "visible" obstacles among those surrounding it. In a real environment, it is not possible to detect all the dynamic obstacles around the robot. There are many occluded regions due to the limited field of view. In order to avoid collisions, it is desirable to exploit visibility information. This paper proposes a safe navigation scheme to reduce collision risk considering occluded dynamic obstacles. The robot's motion is controlled by the hybrid control scheme. The possibility of collision is reflected in both path planning and speed control. The proposed scheme clearly indicates the structural procedure on how to model and to exploit the risk of navigation. The proposed scheme is experimentally tested in a real office building. The experimental results show that the robot moves along the safe path to obtain sufficient field of view. In addition, safe speed constraints are applied in motion control. It is experimentally verified that a robot safely navigates in a dynamic indoor environment by adopting the proposed scheme.

Journal ArticleDOI
TL;DR: A navigation algorithm that enables mobile robots to retrace routes previously taught under the control of human operators in outdoor environments and requires only odometry and a monocular omnidirectional vision sensor is presented.
Abstract: In this paper we present a navigation algorithm that enables mobile robots to retrace routes previously taught under the control of human operators in outdoor environments. Possible applications include robot couriers, autonomous vehicles, tour guides and robotic patrols. The appearance-based approach presented in the paper is provably convergent, computationally inexpensive compared with map-based approaches and requires only odometry and a monocular omnidirectional vision sensor. A sequence of reference images is recorded during the human-guided route-teaching phase. Before starting the autonomous phase, the robot needs to be positioned at the beginning of the route. During the autonomous phase, the measurement image is compared with reference images using image cross-correlation performed in the Fourier domain to recover the difference in relative orientation. Route following is achieved by compensating for this orientation difference. Over 18 km of experiments performed under varying conditions demonstrate the algorithm's robustness to lighting variations and partial occlusion. Obstacle avoidance is not included in the current system.
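The orientation-recovery step can be illustrated with a 1-D circular cross-correlation computed in the Fourier domain; representing each omnidirectional image as a single azimuth profile is a simplification of the paper's image cross-correlation.

```python
import numpy as np

def orientation_offset(reference, current):
    """Relative orientation (in columns) of 'current' with respect to 'reference'.

    The circular cross-correlation of the two 1-D azimuth profiles is computed
    in the Fourier domain; the location of its peak is the rotation offset.
    """
    corr = np.fft.ifft(np.fft.fft(current) * np.conj(np.fft.fft(reference))).real
    return int(np.argmax(corr))

rng = np.random.default_rng(0)
ref = rng.standard_normal(360)       # stand-in for a reference panorama's column profile
cur = np.roll(ref, 25)               # current view: robot rotated by 25 columns
print(orientation_offset(ref, cur))  # 25
```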

Proceedings ArticleDOI
04 Apr 2009
TL;DR: The Rotating Compass is designed, implemented and evaluated - a novel public display for pedestrian navigation that provides clear evidence of the advantages of the new interaction technique when considering task completion time, context switches, disorientation events, usability satisfaction, workload and multi-user support.
Abstract: Important drawbacks of map-based navigation applications for mobile phones are their small screen size and that users have to associate the information provided by the mobile phone with the real world. Therefore, we designed, implemented and evaluated the Rotating Compass - a novel public display for pedestrian navigation. Here, a floor display continuously shows different directions (in a clockwise order) and the mobile phone informs the user when their desired direction is indicated. To inform the user, the mobile phone vibrates in synchronization with the indicated direction. We report an outdoor study that compares a conventional paper map, a navigation application running on a mobile device, navigation information provided by a public display, and the Rotating Compass. The results provide clear evidence of the advantages of the new interaction technique when considering task completion time, context switches, disorientation events, usability satisfaction, workload and multi-user support.

Journal ArticleDOI
TL;DR: An algorithm for lane-level road vehicle navigation that integrates GNSS, dead-reckoning (odometry and gyro), and map data in the fusion process is presented, achieving better results than some state-of-the-art methods in the literature.
Abstract: Nowadays, it is common for road vehicle navigation systems to employ maps to represent the vehicle positions in a local reference. The most usual process to do that consists in the estimation of the vehicle positioning by fusing the Global Navigation Satellite System (GNSS) and some other aiding sensors data, and the subsequent projection of these values on the map by applying map-matching techniques. However, it is possible to benefit from map information also during the process of fusing data for positioning. This paper presents an algorithm for lane-level road vehicle navigation that integrates GNSS, dead-reckoning (odometry and gyro), and map data in the fusion process. Additionally, the proposed method brings some benefits for map-matching at lane level because, on the one hand, it allows the tracking of multiple hypotheses and, on the other hand, it provides probability values of lane occupancy for each candidate segment. To do this, a new paradigm that describes lanes as piece-wise sets of clothoids was applied in the elaboration of an enhanced map (Emap). Experimental results in real complex scenarios with multiple lanes show the suitability of the proposed algorithm for the problem under consideration, presenting better results than some state-of-the-art methods in the literature.

Journal IssueDOI
TL;DR: The proposed approach proved to be robust for outdoor navigation in cluttered and crowded walkways, first on campus paths and then running the challenge course multiple times between trials and the challenge final.
Abstract: This paper describes an implementation of a mobile robot system for autonomous navigation in outdoor crowded walkways. The task was to navigate through unmodified pedestrian paths with people and bicycles passing by. The robot has multiple redundant sensors, which include wheel encoders, an inertial measurement unit, a differential global positioning system, and four laser scanner sensors. All the computation was done on a single laptop computer. A previously constructed map containing waypoints and landmarks for position correction is given to the robot. The robot system's perception, road extraction, and motion planning are detailed. The system was used and tested in a 1-km autonomous robot navigation challenge held in the City of Tsukuba, Japan, named “Tsukuba Challenge 2007.” The proposed approach proved to be robust for outdoor navigation in cluttered and crowded walkways, first on campus paths and then running the challenge course multiple times between trials and the challenge final. The paper reports experimental results and overall performance of the system. Finally the lessons learned are discussed. The main contribution of this work is the report of a system integration approach for autonomous outdoor navigation and its evaluation. © 2009 Wiley Periodicals, Inc.

Journal ArticleDOI
TL;DR: A complete framework for autonomous vehicle navigation using a single camera and natural landmarks is presented, designed for a generic class of cameras (including conventional, catadioptric, and fisheye cameras).
Abstract: In this paper, we present a complete framework for autonomous vehicle navigation using a single camera and natural landmarks. When navigating in an unknown environment for the first time, usual behavior consists of memorizing some key views along the performed path to use these references as checkpoints for future navigation missions. The navigation framework for the wheeled vehicles presented in this paper is based on this assumption. During a human-guided learning step, the vehicle performs paths that are sampled and stored as a set of ordered key images, as acquired by an embedded camera. The visual paths are topologically organized, providing a visual memory of the environment. Given an image of the visual memory as a target, the vehicle navigation mission is defined as a concatenation of visual path subsets called visual routes. When autonomously running, the control guides the vehicle along the reference visual route without explicitly planning any trajectory. The control consists of a vision-based control law that is adapted to the nonholonomic constraint. Our navigation framework has been designed for a generic class of cameras (including conventional, catadioptric, and fisheye cameras). Experiments with an urban electric vehicle navigating in an outdoor environment have been carried out with a fisheye camera along a 750-m-long trajectory. Results validate our approach.

Book ChapterDOI
11 May 2009
TL;DR: A system to use high-level reasoning to influence the selection of landmarks along a navigation path, and lower-level reasoning to select appropriate images of those landmarks to produce a more natural navigation plan and more understandable images in a fully automatic way is developed.
Abstract: Computer vision techniques can enhance landmark-based navigation by better utilizing online photo collections. We use spatial reasoning to compute camera poses, which are then registered to the world using GPS information extracted from the image tags. Computed camera pose is used to augment the images with navigational arrows that fit the environment. We develop a system to use high-level reasoning to influence the selection of landmarks along a navigation path, and lower-level reasoning to select appropriate images of those landmarks. We also utilize an image matching pipeline based on robust local descriptors to give users of the system the ability to capture an image and receive navigational instructions overlaid on their current context. These enhancements to our previous navigation system produce a more natural navigation plan and more understandable images in a fully automatic way.