
Showing papers on "Mobile robot navigation published in 2007"


Journal ArticleDOI
TL;DR: A new real-time localization system for a mobile robot is presented, showing that autonomous navigation is possible outdoors using only a single camera and natural landmarks, via a three-step approach.
Abstract: This paper presents a new real-time localization system for a mobile robot. We show that autonomous navigation is possible in outdoor situations with the use of a single camera and natural landmarks. To do that, we use a three-step approach. In a learning step, the robot is manually guided on a path and a video sequence is recorded with a front-looking camera. Then a structure-from-motion algorithm is used to build a 3D map from this learning sequence. Finally, in the navigation step, the robot uses this map to compute its localization in real time and follows the learning path or a slightly different path if desired. The vision algorithms used for map building and localization are first detailed. Then a large part of the paper is dedicated to the experimental evaluation of the accuracy and robustness of our algorithms, based on experimental data collected over two years in various environments.

361 citations


Journal ArticleDOI
Torgny Brogardh1
TL;DR: The main conclusion of the presentation is that industrial robot development is far from its limits, and that considerable research and development is still needed before robot automation is more widely used in industry.

337 citations


01 Jan 2007
TL;DR: A solution to the complete-coverage path planning problem, in which a robot must sweep all free space rather than simply travel from a start to a goal while minimising parameters such as path length, energy consumption, or journey time, is presented based upon an extension to the distance transform path planning methodology.
Abstract: Much of the focus of the research effort in path planning for mobile robots has centred on the problem of finding a path from a start location to a goal location, while minimising one or more parameters such as length of path, energy consumption or journey time. A path of complete coverage is a planned path in which a robot sweeps all areas of free space in an environment in a systematic and efficient manner. Possible applications for paths of complete coverage include autonomous vacuum cleaners, lawn mowers, security robots, land mine detectors etc. This paper will present a solution to this problem based upon an extension to the distance transform path planning methodology. The solution has been implemented on the self-contained autonomous mobile robot called the Yamabico.

304 citations
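The distance-transform idea underlying the coverage method above can be sketched as follows. This is an illustrative reconstruction, not the Yamabico implementation: the grid encoding and the greedy "step to the farthest unvisited neighbour" sweep rule are assumptions, and the published method includes backtracking that this minimal version omits.

```python
from collections import deque

def distance_transform(grid, goal):
    """BFS distance transform: each free cell holds its step count to the
    goal around obstacles. grid: 2D list, 0 = free, 1 = obstacle."""
    rows, cols = len(grid), len(grid[0])
    dist = [[None] * cols for _ in range(rows)]
    gr, gc = goal
    dist[gr][gc] = 0
    queue = deque([goal])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and dist[nr][nc] is None:
                dist[nr][nc] = dist[r][c] + 1
                queue.append((nr, nc))
    return dist

def coverage_path(grid, start, goal):
    """Greedy complete-coverage sweep: repeatedly step to the unvisited
    free neighbour with the LARGEST distance value, so the sweep tends to
    end near the goal."""
    dist = distance_transform(grid, goal)
    path, visited = [start], {start}
    r, c = start
    while True:
        candidates = []
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) \
                    and grid[nr][nc] == 0 and (nr, nc) not in visited \
                    and dist[nr][nc] is not None:
                candidates.append((dist[nr][nc], (nr, nc)))
        if not candidates:
            break
        _, (r, c) = max(candidates)
        visited.add((r, c))
        path.append((r, c))
    return path
```

On small maps the greedy sweep visits every free cell; on more complex maps a backtracking step would be needed when the sweep runs into a dead end.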


Proceedings ArticleDOI
29 Apr 2007
TL;DR: Increased physical navigation on larger displays correlates with reduced virtual navigation and improved user performance; design factors that afford and promote physical navigation in the user interface are identified.
Abstract: In navigating large information spaces, previous work indicates potential advantages of physical navigation (moving eyes, head, body) over virtual navigation (zooming, panning, flying). However, there is also indication of users preferring or settling into the less efficient virtual navigation. We present a study that examines these issues in the context of large, high resolution displays. The study identifies specific relationships between display size, amount of physical and virtual navigation, and user task performance. Increased physical navigation on larger displays correlates with reduced virtual navigation and improved user performance. Analyzing the differences between this study and previous results helps to identify design factors that afford and promote the use of physical navigation in the user interface.

278 citations


Journal ArticleDOI
TL;DR: A new triangular pattern of arranging the RFID tags on the floor has been proposed to reduce the estimation error of the conventional square pattern, and the motion-continuity property of the differential-driving mobile robot has been utilized to improve the localization accuracy of the mobile robot.
Abstract: This paper presents an efficient localization scheme for an indoor mobile robot using Radio-Frequency IDentification (RFID) systems. The mobile robot carries an RFID reader at the bottom of the chassis, which reads the RFID tags on the floor to localize the mobile robot. Each of the RFID tags stores its own absolute position, which is used to calculate the position, orientation, and velocity of the mobile robot. However, a localization system based on RFID technology inevitably suffers from an estimation error. In this paper, a new triangular pattern of arranging the RFID tags on the floor has been proposed to reduce the estimation error of the conventional square pattern. In addition, the motion-continuity property of the differential-driving mobile robot has been utilized to improve the localization accuracy of the mobile robot. In the conventional approach, two readers are necessary to identify the orientation of the mobile robot; the new approach, based on the motion-continuity property of the differential-driving mobile robot, instead provides a cheap and fast estimate of the orientation. The proposed algorithms used to raise the accuracy of the robot localization are successfully verified through experiments.

258 citations
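A quick way to see why a triangular tag layout reduces the worst-case estimation error is to compare the maximum distance from any floor point to its nearest tag under the two layouts. This is an illustrative sketch, not the paper's analysis: the spacing, the "report the nearest tag's stored coordinate" estimator, and the sampled evaluation region are all assumptions (note also that at equal spacing the triangular layout packs rows slightly more densely).

```python
import math

def square_tags(spacing, n):
    """n x n tags on a square grid."""
    return [(i * spacing, j * spacing) for i in range(n) for j in range(n)]

def triangular_tags(spacing, n):
    """Alternate rows shifted by half a spacing; row height spacing*sqrt(3)/2."""
    h = spacing * math.sqrt(3) / 2
    return [(i * spacing + (spacing / 2 if j % 2 else 0.0), j * h)
            for j in range(n) for i in range(n)]

def worst_case_error(tags, lo, hi, step=0.02):
    """Max distance from any sampled floor point in [lo, hi]^2 to its
    nearest tag: the estimation error if the robot simply reports the
    coordinate stored on the nearest tag it reads."""
    worst = 0.0
    steps = int((hi - lo) / step) + 1
    for i in range(steps):
        for j in range(steps):
            x, y = lo + i * step, lo + j * step
            d = min(math.hypot(x - tx, y - ty) for tx, ty in tags)
            worst = max(worst, d)
    return worst
```

For unit spacing the square layout's worst case approaches 1/sqrt(2) of the spacing (the cell-centre diagonal), while the triangular layout's approaches 1/sqrt(3) (the circumradius of the unit equilateral triangle).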


Journal ArticleDOI
TL;DR: An ecological interface paradigm is presented that combines video, map, and robot-pose information into a 3-D mixed-reality display; it is validated in planar worlds by comparing it against the standard interface paradigm in a series of simulated and real-world user studies.
Abstract: Navigation is an essential element of many remote robot operations including search and rescue, reconnaissance, and space exploration. Previous reports on using remote mobile robots suggest that navigation is difficult due to poor situation awareness. It has been recommended by experts in human-robot interaction that interfaces between humans and robots provide more spatial information and better situational context in order to improve an operator's situation awareness. This paper presents an ecological interface paradigm that combines video, map, and robot-pose information into a 3-D mixed-reality display. The ecological paradigm is validated in planar worlds by comparing it against the standard interface paradigm in a series of simulated and real-world user studies. Based on the experiment results, observations in the literature, and working hypotheses, we present a series of principles for presenting information to an operator of a remote robot.

253 citations


Journal ArticleDOI
TL;DR: In this paper, the authors present a system for autonomous mobile robot navigation that uses only an omnidirectional camera as sensor and automatically and robustly builds accurate, topologically organised environment maps of a complex, natural environment.
Abstract: In this work we present a novel system for autonomous mobile robot navigation. With only an omnidirectional camera as sensor, this system is able to build automatically and robustly accurate, topologically organised environment maps of a complex, natural environment. It can localise itself using such a map at any moment, both at startup (the kidnapped-robot problem) and by using knowledge of former localisations. The topological nature of the map is similar to the intuitive maps humans use, is memory-efficient, and enables fast and simple path planning towards a specified goal. We developed a real-time visual servoing technique to steer the system along the computed path. A key technology making this all possible is the novel fast wide-baseline feature matching, which yields an efficient description of the scene, with a focus on man-made environments.

189 citations


Journal ArticleDOI
TL;DR: An integrated human-robot interaction strategy that ensures the safety of the human participant through a coordinated suite of safety strategies that are selected and implemented to anticipate and respond to varying time horizons for potential hazards and varying expected levels of interaction with the user is presented.
Abstract: Safe planning and control is essential to bringing human-robot interaction into common experience. This paper presents an integrated human-robot interaction strategy that ensures the safety of the human participant through a coordinated suite of safety strategies that are selected and implemented to anticipate and respond to varying time horizons for potential hazards and varying expected levels of interaction with the user. The proposed planning and control strategies are based on explicit measures of danger during interaction. The level of danger is estimated based on factors influencing the impact force during a human-robot collision, such as the effective robot inertia, the relative velocity and the distance between the robot and the human. A second key requirement for improving safety is the ability of the robot to perceive its environment, and more specifically, human behavior and reaction to robot movements. This paper also proposes and demonstrates the use of human monitoring information based on vision and physiological sensors to further improve the safety of the human-robot interaction. A methodology for integrating sensor-based information about the user's position and physiological reaction to the robot into medium and short-term safety strategies is presented. This methodology is verified through a series of experimental test cases where a human and an articulated robot respond to each other based on the human's physical and physiological behavior.

188 citations


Proceedings ArticleDOI
10 Apr 2007
TL;DR: A system is described that uses an appearance-based topological map for navigation; it is made robust by using epipolar geometry and a planar floor constraint to compute the heading information needed to drive reliably in a large environment.
Abstract: Vision systems are used more and more in 'personal' robots interacting with humans, since semantic information about objects and places can be derived from the rich sensory information. Visual information is also used for building appearance based topological maps, which can be used for localization. In this paper we describe a system capable of using this appearance based topological map for navigation. The system is made robust by using the epipolar geometry and a planar floor constraint in computing the necessary heading information. Using this method the robot is able to drive robustly in a large environment. We tested the method on real data under varying environment conditions and compared performance with a human-controlled robot.

168 citations


Proceedings ArticleDOI
10 Apr 2007
TL;DR: The goal of this work is to enable the task of navigating multiple autonomous underwater vehicles (AUVs) over length scales of O(100 km), while maintaining error tolerances commensurate with conventional long-baseline transponder-based navigation systems (i.e., O(1 m)), but without the requisite need for deploying, calibrating, and recovering seafloor anchored acoustic transponders.
Abstract: This paper reports recent experimental results in the development and deployment of a synchronous-clock acoustic navigation system suitable for the simultaneous navigation of multiple underwater vehicles. The goal of this work is to enable the task of navigating multiple autonomous underwater vehicles (AUVs) over length scales of O(100 km), while maintaining error tolerances commensurate with conventional long-baseline transponder-based navigation systems (i.e., O(1 m)), but without the requisite need for deploying, calibrating, and recovering seafloor anchored acoustic transponders. Our navigation system is comprised of an acoustic modem-based communication/navigation system that allows for onboard navigational data to be broadcast as a data packet by a source node, and for all passively receiving nodes to be able to decode the data packet to obtain a one-way travel time pseudo-range measurement and ephemeris data. We present results for two different field experiments using a two-node configuration consisting of a global positioning system (GPS) equipped surface ship acting as a global navigation aid to a Doppler-aided AUV. In each experiment, vehicle position was independently corroborated by other standard navigation means. Initial results for a maximum-likelihood sensor fusion framework are reported.

149 citations
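The one-way travel time (OWTT) idea above can be sketched numerically. This is a simplified planar illustration, not the paper's estimator: the nominal sound speed, the source positions, and the two-circle position fix are assumptions for the example, and the real system fuses pseudo-ranges with Doppler dead reckoning in a maximum-likelihood framework.

```python
import math

SOUND_SPEED = 1500.0  # assumed nominal speed of sound in seawater, m/s

def owtt_pseudorange(t_transmit, t_receive, sound_speed=SOUND_SPEED):
    """With synchronised clocks, one-way travel time yields a range
    directly; no two-way transponder interrogation is needed."""
    return (t_receive - t_transmit) * sound_speed

def fix_from_two_ranges(p1, r1, p2, r2, dr_estimate):
    """Intersect two range circles (sources at p1, p2) and return the
    intersection closest to the dead-reckoned estimate. Planar sketch;
    assumes the circles intersect."""
    (x1, y1), (x2, y2) = p1, p2
    d = math.hypot(x2 - x1, y2 - y1)
    a = (r1**2 - r2**2 + d**2) / (2 * d)     # along-baseline offset
    h = math.sqrt(max(r1**2 - a**2, 0.0))    # perpendicular offset
    mx = x1 + a * (x2 - x1) / d
    my = y1 + a * (y2 - y1) / d
    ux, uy = -(y2 - y1) / d, (x2 - x1) / d   # unit normal to the baseline
    cands = [(mx + h * ux, my + h * uy), (mx - h * ux, my - h * uy)]
    return min(cands, key=lambda p: math.hypot(p[0] - dr_estimate[0],
                                               p[1] - dr_estimate[1]))
```

The dead-reckoning estimate disambiguates the two circle intersections, loosely mirroring how the Doppler-aided AUV constrains the acoustic fix.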


Journal ArticleDOI
TL;DR: An algorithm for visual obstacle avoidance for an autonomous mobile robot is developed, balancing the amount of left- and right-side optical flow to avoid obstacles; this technique allows robot navigation without collisions.
Abstract: In this paper we develop an algorithm for visual obstacle avoidance for an autonomous mobile robot. The input of the algorithm is an image sequence grabbed by a camera embedded on the B21r robot in motion. The optical flow information is then extracted from the image sequence for use in the navigation algorithm. The optical flow provides very important information about the robot environment, such as the disposition of obstacles, the robot heading, the time to collision and the depth. The strategy consists in balancing the amount of left- and right-side flow to avoid obstacles; this technique allows robot navigation without any collision with obstacles. The robustness of the algorithm is shown through experiments.
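The left/right flow-balancing strategy can be sketched as below. A hedged illustration, not the B21r implementation: the sign convention, the normalised-difference control law, and the flow-sample format are assumptions.

```python
import math

def side_flow_means(flow, width):
    """flow: iterable of (x, u, v) optical-flow samples; returns the mean
    flow magnitude on the left and right halves of the image."""
    left = [math.hypot(u, v) for x, u, v in flow if x < width / 2]
    right = [math.hypot(u, v) for x, u, v in flow if x >= width / 2]
    mean = lambda vals: sum(vals) / len(vals) if vals else 0.0
    return mean(left), mean(right)

def balance_steering(left_flow, right_flow, gain=1.0):
    """Steer away from the side with larger flow, since nearer obstacles
    induce larger image motion. Positive output = turn left (assumed
    convention), i.e. away from a looming right-side obstacle."""
    total = left_flow + right_flow
    if total == 0.0:
        return 0.0
    return gain * (right_flow - left_flow) / total
```

Normalising by the total flow makes the command insensitive to overall robot speed, which scales both sides equally.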

Patent
20 Jun 2007
TL;DR: A method for improved navigation using the global positioning system (GPS) is described, in which a navigation server generates navigation information based on position information and a destination transmitted from a GPS-enabled device.
Abstract: Embodiments of the present invention include systems and methods for improved navigation using the global positioning system (GPS). A method of improved navigation includes transmitting a destination to a navigation server through a wireless communication channel. The method further includes transmitting position information from a GPS-enabled device to the navigation server through the wireless communication channel automatically at a time interval. The method further includes generating navigation information by the navigation server. The navigation information is based on the position information and the destination. The method further includes receiving navigation information on the GPS-enabled device from the navigation server through the wireless communication channel.

Journal ArticleDOI
01 Jul 2007
TL;DR: A neurofuzzy-based approach is proposed which coordinates sensor information and robot motion, allowing the robot to adequately sense its surroundings, autonomously avoid static and moving obstacles, and generate reasonable trajectories toward the target in various situations without suffering from "dead cycle" problems.
Abstract: In this paper, a neurofuzzy-based approach is proposed, which coordinates the sensor information and robot motion together. A fuzzy logic system is designed with two basic behaviors, target seeking and obstacle avoidance. A learning algorithm based on neural network techniques is developed to tune the parameters of membership functions, which smooths the trajectory generated by the fuzzy logic system. Another learning algorithm is developed to suppress redundant rules in the designed rule base. A state memory strategy is proposed for resolving the "dead cycle" problem. Under the control of the proposed model, a mobile robot can adequately sense the environment around, autonomously avoid static and moving obstacles, and generate reasonable trajectories toward the target in various situations without suffering from the "dead cycle" problems. The effectiveness and efficiency of the proposed approach are demonstrated by simulation studies.

Proceedings ArticleDOI
12 Nov 2007
TL;DR: It is shown that by properly addressing the various issues, a localization error of less than 25 cm can be achieved at all points within a realistic indoor localization space.
Abstract: For robots to become more popular for domestic applications, the shortcomings of current indoor navigation technologies have to be overcome. In this paper, we propose the use of UWB-IR for indoor robot navigation. Various parts of an actual implementation of a UWB-IR based robot navigation system such as system architecture, RF sub-system design, antennas and localization algorithms are discussed. It is shown that by properly addressing the various issues, a localization error of less than 25 cm can be achieved at all points within a realistic indoor localization space.

Patent
01 Aug 2007
TL;DR: In this paper, a modular robot development kit includes an extensible mobile robot platform and a programmable development module that connects to the mobile robot platforms, including a controller that executes robot behaviors concurrently and performs robot actions in accordance with robot control signals received from the development module, as modified by the concurrently running robot behaviors, as a safeguard against performing potentially damaging robot actions.
Abstract: A modular robot development kit includes an extensible mobile robot platform and a programmable development module that connects to the mobile robot platform. The mobile robot platform includes a controller that executes robot behaviors concurrently and performs robot actions in accordance with robot control signals received from the development module, as modified by the concurrently running robot behaviors, as a safeguard against performing potentially damaging robot actions. Also, the user can develop software that is executed on the development module and which transmits the robot control signals to the mobile robot platform over the data communication link using a robot interface protocol. The robot interface protocol encapsulates potentially harmful user-developed software routines from the controller instructions executed by the controller of the mobile robot platform, while nonetheless enabling the user to effectively control the mobile robot platform using the robot control signals of the robot interface protocol.

Journal ArticleDOI
TL;DR: In this paper, a vision-based self-calibration method for a serial robot manipulator, which only requires a ground-truth scale in the reference frame, is proposed.
Abstract: Unlike the traditional robot calibration methods, which need external expensive calibration apparatus and elaborate setups to measure the 3D feature points in the reference frame, a vision-based self-calibration method for a serial robot manipulator, which only requires a ground-truth scale in the reference frame, is proposed in this paper. The proposed algorithm assumes that the camera is rigidly attached to the robot end-effector, which makes it possible to obtain the pose of the manipulator with the pose of the camera. By designing a manipulator movement trajectory, the camera poses can be estimated up to a scale factor at each configuration with the factorization method, where a nonlinear least-square algorithm is applied to improve its robustness. An efficient approach is proposed to estimate this scale factor. The great advantage of this self-calibration method is that only image sequences of a calibration object and a ground-truth length are needed, which makes the robot calibration procedure more autonomous in a dynamic manufacturing environment. Simulations and experimental studies on a PUMA 560 robot reveal the convenience and effectiveness of the proposed robot self-calibration approach.

Journal ArticleDOI
TL;DR: Due to the limited sensor given to the robot, globally optimal navigation is impossible; however, the approach achieves locally optimal navigation, which is the best that is theoretically possible under this robot model.
Abstract: This paper considers what can be accomplished using a mobile robot that has limited sensing. For navigation and mapping, the robot has only one sensor, which tracks the directions of depth discontinuities. There are no coordinates, and the robot is given a motion primitive that allows it to move toward discontinuities. The robot is incapable of performing localization or measuring any distances or angles. Nevertheless, when dropped into an unknown planar environment, the robot builds a data structure, called the gap navigation tree, which enables it to navigate optimally in terms of Euclidean distance traveled. In a sense, the robot is able to learn the critical information contained in the classical shortest-path roadmap, although surprisingly it is unable to extract metric information. We prove these results for the case of a point robot placed into a simply connected, piecewise-analytic planar environment. The case of multiply connected environments is also addressed, in which it is shown that further sensing assumptions are needed. Due to the limited sensor given to the robot, globally optimal navigation is impossible; however, our approach achieves locally optimal (within a homotopy class) navigation, which is the best that is theoretically possible under this robot model.

Journal ArticleDOI
TL;DR: Initial insight into autonomous navigation for mobile robots is provided, along with a description of the sensors used to detect obstacles and of the genetic algorithms used for path planning.
Abstract: Engineers and scientists use instrumentation and measurement equipment to obtain information for specific environments, such as temperature and pressure. This task can be performed manually using portable gauges. However, there are many instances in which this approach may be impractical; when gathering data from remote sites or from potentially hostile environments. In these applications, autonomous navigation methods allow a mobile robot to explore an environment independent of human presence or intervention. The mobile robot contains the measurement device and records the data then either transmits it or brings it back to the operator. Sensors are required for the robot to detect obstacles in the navigation environment, and machine intelligence is required for the robot to plan a path around these obstacles. The use of genetic algorithms is an example of machine intelligence applications to modern robot navigation. Genetic algorithms are heuristic optimization methods, which have mechanisms analogous to biological evolution. This article provides initial insight of autonomous navigation for mobile robots, a description of the sensors used to detect obstacles and a description of the genetic algorithms used for path planning.
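A minimal genetic-algorithm path planner in the spirit described above might look like the following sketch. All specifics, the fixed-length waypoint encoding, penalty-based fitness, truncation selection, and Gaussian mutation, are illustrative assumptions rather than the article's method.

```python
import math, random

def path_cost(path, obstacles, penalty=100.0):
    """Fitness to minimise: total path length, plus a penalty for every
    sampled point along a segment that falls inside an obstacle (cx, cy, r)."""
    cost = 0.0
    for (x1, y1), (x2, y2) in zip(path, path[1:]):
        cost += math.hypot(x2 - x1, y2 - y1)
        for t in (0.0, 0.25, 0.5, 0.75, 1.0):
            px, py = x1 + t * (x2 - x1), y1 + t * (y2 - y1)
            if any(math.hypot(px - cx, py - cy) < r for cx, cy, r in obstacles):
                cost += penalty
    return cost

def evolve_path(start, goal, obstacles, n_way=3, pop=40, gens=120, seed=1):
    """Evolve a fixed-length list of intermediate waypoints between start
    and goal, minimising path_cost."""
    rng = random.Random(seed)

    def random_path():
        return [start] + [(rng.uniform(0.0, 10.0), rng.uniform(0.0, 10.0))
                          for _ in range(n_way)] + [goal]

    def mutate(path):
        child = list(path)
        i = rng.randrange(1, len(child) - 1)          # endpoints stay fixed
        x, y = child[i]
        child[i] = (x + rng.gauss(0.0, 0.5), y + rng.gauss(0.0, 0.5))
        return child

    population = [random_path() for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=lambda p: path_cost(p, obstacles))
        survivors = population[:pop // 2]             # truncation selection
        population = survivors + [mutate(rng.choice(survivors))
                                  for _ in range(pop - len(survivors))]
    return min(population, key=lambda p: path_cost(p, obstacles))
```

A fuller GA would add crossover between parent paths and variable-length encodings; this sketch keeps only the selection-and-mutation core.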

Patent
Joshua V. Graessley1
26 Sep 2007
TL;DR: The system uses touch input to determine whether a driver or a passenger is operating the navigation system; if it determines that the driver is operating the system, an action is initiated (e.g., the user interface is locked down, or a warning is provided).
Abstract: A navigation system includes a user interface for detecting touch input. The system uses touch input to determine if a driver or passenger is operating the navigation system. If the system determines that the driver is operating the system, then an action is initiated (e.g., the user interface is locked down, a warning is provided). The navigation system allows a passenger in the vehicle to operate the navigation system while the vehicle is in motion. In an aspect, additional or other sensors (e.g., seat sensor, seat belt sensor, infrared sensor) can be used to detect whether a driver or passenger is operating the navigation system while the vehicle is in motion.

01 Jan 2007
TL;DR: An almost self-deployable solution based on radio-frequency identification tags and inertial micro-electro-mechanical sensors is presented; its benefits are evaluated and compared with a pure inertial positioning system.
Abstract: Existing indoor navigation solutions usually rely on pre-installed sensor networks, whereas emergency agents are interested in fully auto-deployable systems. In this paper, an almost self-deployable solution based on Radio-frequency identification tags and inertial Micro Electro Mechanical Sensors is presented. The benefits of the solution are evaluated and compared with the pure inertial positioning system.

Proceedings ArticleDOI
01 Aug 2007
TL;DR: A probabilistic framework for RbD is presented which allows the essential characteristics of a task described at a trajectory level to be extracted incrementally; to demonstrate the feasibility of this approach, two experiments are presented.
Abstract: Robot programming by demonstration (RbD) covers methods by which a robot learns new skills through human guidance. In this work, we take the perspective that the role of the teacher is more important than just being a model of successful behaviour, and present a probabilistic framework for RbD which allows the essential characteristics of a task, described at a trajectory level, to be extracted incrementally. To demonstrate the feasibility of our approach, we present two experiments where manipulation skills are transferred to a humanoid robot by means of active teaching methods that put the human teacher in the loop of the robot's learning. The robot first observes the task performed by the user (through motion sensors) and the robot's skill is then refined progressively by embodying the robot and putting it through the motion (kinesthetic teaching).

Patent
13 Jul 2007
TL;DR: The present disclosure includes, among other things, systems, methods and program products for user interface navigation.
Abstract: The present disclosure includes, among other things, systems, methods and program products for user interface navigation.

Journal ArticleDOI
01 Mar 2007-Robotica
TL;DR: One of the possible strategies for the integration of spatial and semantic knowledge in a service robot scenario is demonstrated, in which a simultaneous localization and mapping (SLAM) system and an object detection and recognition system work in synergy to provide a richer representation of the environment than would be possible with either of the methods alone.
Abstract: The problem studied in this paper is a mobile robot that autonomously navigates in a domestic environment, builds a map as it moves along and localizes its position in it. In addition, the robot detects predefined objects, estimates their position in the environment and integrates this with the localization module to automatically put the objects in the generated map. Thus, we demonstrate one of the possible strategies for the integration of spatial and semantic knowledge in a service robot scenario where a simultaneous localization and mapping (SLAM) system and an object detection and recognition system work in synergy to provide a richer representation of the environment than would be possible with either of the methods alone. Most SLAM systems build maps that are only used for localizing the robot. Such maps are typically based on grids or different types of features such as points and lines. The novelty is the augmentation of this process with an object-recognition system that detects objects in the environment and puts them in the map generated by the SLAM system. The metric map is also split into topological entities corresponding to rooms. In this way, the user can command the robot to retrieve a certain object from a certain room. We present the results of map building and an extensive evaluation of the object detection algorithm performed in an indoor setting.

Journal ArticleDOI
TL;DR: A humanoid robot that expresses its listening attitude and understanding to humans by effectively using its body properties in a route guidance situation is reported, with the results revealing that a robot displaying cooperative behavior received the highest subjective evaluation.
Abstract: This paper reports the findings for a humanoid robot that expresses its listening attitude and understanding to humans by effectively using its body properties in a route guidance situation. A human teaches a route to the robot, and the developed robot behaves similarly to a human listener by utilizing both temporal and spatial cooperative behaviors to demonstrate that it is indeed listening to its human counterpart. The robot's software consists of many communicative units and rules for selecting appropriate communicative units. A communicative unit realizes a particular cooperative behavior such as eye-contact and nodding, found through previous research in HRI. The rules for selecting communicative units were retrieved through our preliminary experiments with a WOZ method. An experiment was conducted to verify the effectiveness of the robot, with the results revealing that a robot displaying cooperative behavior received the highest subjective evaluation, which is rather similar to a human listener. A detailed analysis showed that this evaluation was mainly due to body movements as well as utterances. On the other hand, subjects' utterances to the robot were encouraged by the robot's utterances but not by its body movements.

Journal ArticleDOI
TL;DR: Improvements are demonstrated by augmenting an existing self-supervised image segmentation procedure with an additional supervisory input, reverse optical flow, which provides representations of a region of interest at multiple scales and allows the robot to better determine where more examples of that region's class appear in the image.
Abstract: Autonomous mobile robot navigation, either off-road or on ill-structured roads, presents unique challenges for machine perception. A successful terrain or roadway classifier must be able to learn in a self-supervised manner and adapt to inter- and intra-run changes in the local environment. This paper demonstrates the improvements achieved by augmenting an existing self-supervised image segmentation procedure with an additional supervisory input. Obstacles and roads may differ in appearance at distance because of illumination and texture frequency properties. Reverse optical flow is added as an input to the image segmentation technique to find examples of a region of interest at previous times in the past. This provides representations of this region at multiple scales and allows the robot to better determine where more examples of this class appear in the image.

Journal ArticleDOI
TL;DR: A navigation and planning model for mobile robots is presented, based on a model of the hippocampal and prefrontal interactions, that relies on the definition of a new cell type “transition cells” that encompasses traditional “place cells”.
Abstract: After a short review of biologically inspired navigation architectures, mainly relying on modeling the hippocampal anatomy, or at least some of its functions, we present a navigation and planning model for mobile robots. This architecture is based on a model of the hippocampal and prefrontal interactions. In particular, the system relies on the definition of a new cell type “transition cells” that encompasses traditional “place cells”.

Patent
11 Jun 2007
TL;DR: In this paper, the authors present a system for high-speed navigation of terrain by an unmanned robot using map-based data fusion in which sensor information is incorporated into a cost map, a specific map type that represents the traversability of a particular environmental area using a numeric value.
Abstract: Systems, methods, and apparatuses for high-speed navigation. The present invention preferably encompasses systems, methods, and apparatuses that provide for autonomous high-speed navigation of terrain by an unmanned robot. By preferably employing a pre-planned route, path, and speed; extensive sensor-based information collection about the local environment; and information about vehicle pose, the robots of the present invention evaluate the relative cost of various potential paths and thus arrive at a path to traverse the environment. The information collection about the local environment allows the robot to evaluate terrain and to identify any obstacles that may be encountered. The robots of the present invention thus employ map-based data fusion in which sensor information is incorporated into a cost map, which is preferably a rectilinear grid aligned with the world coordinate system and is centered on the vehicle. The cost map is a specific map type that represents the traversability of a particular environmental area using a numeric value. The planned path and route provide information that further allows the robot to orient sensors to preferentially scan the areas of the environment where the robot will likely travel, thereby reducing the computational load placed onto the system. The computational ability of the system is further improved by using map-based syntax between various data processing modules of the present invention. By using a common set of carefully defined data types as syntax for communication, it is possible to identify new features for either path or map processing quickly and efficiently.
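The cost-map representation described above can be sketched as a small class. The grid size, resolution, cost range, and max-fusion rule are all illustrative assumptions, not details taken from the disclosure.

```python
class CostMap:
    """Vehicle-centred, world-aligned rectilinear cost grid.
    Cost 0 = freely traversable; 255 = lethal obstacle (assumed scale)."""

    def __init__(self, size=101, resolution=0.5):
        self.size = size                  # cells per side (odd: vehicle at centre)
        self.res = resolution             # metres per cell
        self.origin = (0.0, 0.0)          # vehicle position in world coordinates
        self.cells = [[0] * size for _ in range(size)]

    def _index(self, wx, wy):
        """World coordinates -> (row, col), or None if off the map."""
        ox, oy = self.origin
        col = int(round((wx - ox) / self.res)) + self.size // 2
        row = int(round((wy - oy) / self.res)) + self.size // 2
        if 0 <= row < self.size and 0 <= col < self.size:
            return row, col
        return None

    def fuse(self, wx, wy, cost):
        """Fold a sensor-derived cost into the map, keeping the max so an
        obstacle report is never erased by a later free-space reading."""
        idx = self._index(wx, wy)
        if idx:
            r, c = idx
            self.cells[r][c] = max(self.cells[r][c], cost)

    def traversable(self, wx, wy, lethal=254):
        idx = self._index(wx, wy)
        return idx is not None and self.cells[idx[0]][idx[1]] < lethal
```

A path evaluator would then sum cell costs along each candidate path, which is how the relative cost of potential paths can be compared.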

Proceedings ArticleDOI
10 Apr 2007
TL;DR: This paper demonstrates through extensive experiments that this chlorophyll-detection feature has properties complementary to the color and shape descriptors traditionally used for point cloud analysis, and shows significant improvement in classification performance for tasks relevant to outdoor navigation.
Abstract: A key challenge for autonomous navigation in cluttered outdoor environments is the reliable discrimination between obstacles that must be avoided at all costs, and lesser obstacles which the robot can drive over if necessary. Chlorophyll-rich vegetation in particular is often not an obstacle to a capable off-road vehicle, and it has long been recognized in the satellite imaging community that a simple comparison of the red and near-infrared (NIR) reflectance of a material provides a reliable technique for measuring chlorophyll content in natural scenes. This paper evaluates the effectiveness of using this chlorophyll-detection technique to improve autonomous navigation in natural, off-road environments. We demonstrate through extensive experiments that this feature has properties complementary to the color and shape descriptors traditionally used for point cloud analysis, and show significant improvement in classification performance for tasks relevant to outdoor navigation. Results are shown from field testing onboard a robot operating in off-road terrain.

01 Jan 2007
TL;DR: Inspired by vision-based self-localization approaches, this method utilizes RFID snapshots for the estimation of the robot pose and requires fewer iterations of the underlying particle filter in order to converge to the approximate robot pose.
Abstract: In recent years, radio frequency identification (RFID) has found its way into the field of mobile robot navigation. On the one hand, the technology promises to contribute solutions to common problems in self-localization and mapping such as the data association problem. On the other hand, questions like how to cope with poor or even missing range and bearing information remain open. In this paper, we present a novel method which tackles these challenges: Inspired by vision-based self-localization approaches, it utilizes RFID snapshots for the estimation of the robot pose. Our experiments show that the new technique enables a robot to successfully localize itself in an indoor environment. The accuracy is comparable to the one of a previous approach using an explicit model of detection probabilities. Our method, however, requires fewer iterations of the underlying particle filter in order to converge to the approximate robot pose.
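One way to read the "RFID snapshot" idea is as a particle-filter measurement update that needs no range or bearing: each particle is weighted by how well the set of tag IDs currently detected matches the tags expected near that particle's pose. The tag map, detection range, and Jaccard similarity below are assumptions for illustration, not the paper's exact sensor model.

```python
# Illustrative particle reweighting from an RFID snapshot (a set of tag IDs),
# with no range or bearing information used.
import math

# Hypothetical map of tag positions and a nominal detection range.
TAG_MAP = {"tag1": (0.0, 0.0), "tag2": (4.0, 0.0), "tag3": (8.0, 0.0)}
DETECTION_RANGE = 3.0

def expected_snapshot(x, y):
    """Tag IDs a reader at (x, y) would plausibly detect."""
    return {t for t, (tx, ty) in TAG_MAP.items()
            if math.hypot(tx - x, ty - y) <= DETECTION_RANGE}

def jaccard(a, b):
    """Set similarity between observed and expected snapshots."""
    return len(a & b) / len(a | b) if a | b else 1.0

def reweight(particles, observed):
    """One measurement update: weight each particle by snapshot similarity."""
    weights = [jaccard(observed, expected_snapshot(x, y)) + 1e-6
               for x, y in particles]
    total = sum(weights)
    return [w / total for w in weights]

particles = [(0.0, 0.0), (4.0, 0.0), (8.0, 0.0)]
w = reweight(particles, observed={"tag2"})
# the particle near tag2 receives by far the highest weight
```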

Proceedings Article
01 Jan 2007
TL;DR: The hardware and software integration frameworks used to facilitate the development of these components and to bring them together for the demonstration of the STAIR 1 robot responding to a verbal command to fetch an item are described.
Abstract: The STanford Artificial Intelligence Robot (STAIR) project is a long-term group effort aimed at producing a viable home and office assistant robot. As a small concrete step towards this goal, we showed a demonstration video at the 2007 AAAI Mobile Robot Exhibition of the STAIR 1 robot responding to a verbal command to fetch an item. Carrying out this task involved the integration of multiple components, including spoken dialog, navigation, visual object detection, and robotic grasping. This paper describes the hardware and software integration frameworks used to facilitate the development of these components and to bring them together for the demonstration.