
Showing papers on "Mobile robot navigation published in 2012"


Proceedings ArticleDOI
14 May 2012
TL;DR: SeqSLAM calculates the best candidate matching location within every local navigation sequence; localization is then achieved by recognizing coherent sequences of these "local best matches", which removes the need for global matching performance from the vision front-end.
Abstract: Learning and then recognizing a route, whether travelled during the day or at night, in clear or inclement weather, and in summer or winter is a challenging task for state of the art algorithms in computer vision and robotics. In this paper, we present a new approach to visual navigation under changing conditions dubbed SeqSLAM. Instead of calculating the single location most likely given a current image, our approach calculates the best candidate matching location within every local navigation sequence. Localization is then achieved by recognizing coherent sequences of these “local best matches”. This approach removes the need for global matching performance by the vision front-end - instead it must only pick the best match within any short sequence of images. The approach is applicable over environment changes that render traditional feature-based techniques ineffective. Using two car-mounted camera datasets we demonstrate the effectiveness of the algorithm and compare it to one of the most successful feature-based SLAM algorithms, FAB-MAP. The perceptual change in the datasets is extreme; repeated traverses through environments during the day and then in the middle of the night, at times separated by months or years and in opposite seasons, and in clear weather and extremely heavy rain. While the feature-based method fails, the sequence-based algorithm is able to match trajectory segments at 100% precision with recall rates of up to 60%.
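The core of the sequence-matching idea can be sketched as follows. This is an illustrative simplification, not the authors' implementation: `seq_score` sums locally contrast-enhanced image differences along constant-velocity trajectories through a precomputed difference matrix, and the function and parameter names are ours.

```python
import numpy as np

def seq_score(D, ds=10):
    """Score candidate matches by summing differences along straight
    trajectories through an image-difference matrix D (database x query),
    in the spirit of sequence-based matching. Lower score = better match."""
    M, N = D.shape
    # Contrast-enhance each query column so local best matches stand out,
    # rather than requiring globally good matches from the vision front-end.
    Dn = (D - D.mean(axis=0)) / (D.std(axis=0) + 1e-9)
    best = np.full(N - ds, np.inf)
    idx = np.zeros(N - ds, dtype=int)
    for q in range(N - ds):          # start of each query sub-sequence
        for m in range(M - ds):      # candidate database start position
            s = Dn[m + np.arange(ds), q + np.arange(ds)].sum()
            if s < best[q]:
                best[q], idx[q] = s, m
    return idx, best
```

A coherent sequence then shows up as a run of consecutive database indices in `idx`.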

756 citations


Journal ArticleDOI
TL;DR: The science and technology for positioning and navigation have experienced a dramatic evolution: the observation of celestial bodies for navigation purposes has today been replaced by the use of electromagnetic waveforms emitted from reference sources.
Abstract: Accurately determining one's position has been a recurrent problem in history [1]. It even precedes the first deep-sea navigation attempts of ancient civilizations and reaches the present time with the issue of legal mandates for the location identification of emergency calls in cellular networks and the emergence of location-based services. The science and technology for positioning and navigation has experienced a dramatic evolution [2]. The observation of celestial bodies for navigation purposes has been replaced today by the use of electromagnetic waveforms emitted from reference sources [3].

155 citations


Journal ArticleDOI
01 Jun 2012
TL;DR: A new motion planning approach is proposed, which uses springs to interconnect two robot modules and allows the modules to cooperatively navigate through difficult segments of the pipes.
Abstract: This paper deals with a design and motion planning algorithm of a caterpillar-based pipeline robot that can be used for inspection of 80-100-mm pipelines in an indoor pipeline environment. The robot system uses a differential drive to steer the robot and spring loaded four-bar mechanisms to assure that the robot expands to grip the pipe walls. Unique features of this robot are the caterpillar wheels, the analysis of the four-bar mechanism supporting the treads, a closed-form kinematic approach, and an intuitive user interface. In addition, a new motion planning approach is proposed, which uses springs to interconnect two robot modules and allows the modules to cooperatively navigate through difficult segments of the pipes. Furthermore, an analysis method of selecting optimal compliance to assure functionality and cooperation is suggested. Simulation and experimental results are used throughout the paper to highlight algorithms and approaches.

138 citations


Journal ArticleDOI
01 Jun 2012
TL;DR: The overall result was that all participants were able to complete the designed tasks, reporting no failures, which shows the robustness of the system and its feasibility to solve tasks in real settings where joint navigation and visual exploration were needed.
Abstract: This paper reports an electroencephalogram-based brain-actuated telepresence system to provide a user with presence in remote environments through a mobile robot, with access to the Internet. This system relies on a P300-based brain-computer interface (BCI) and a mobile robot with autonomous navigation and camera orientation capabilities. The shared-control strategy is built by the BCI decoding of task-related orders (selection of visible target destinations or exploration areas), which can be autonomously executed by the robot. The system was evaluated using five healthy participants in two consecutive steps: 1) screening and training of participants and 2) preestablished navigation and visual exploration telepresence tasks. On the basis of the results, the following evaluation studies are reported: 1) technical evaluation of the device and its main functionalities and 2) the users' behavior study. The overall result was that all participants were able to complete the designed tasks, reporting no failures, which shows the robustness of the system and its feasibility to solve tasks in real settings where joint navigation and visual exploration were needed. Furthermore, the participants showed great adaptation to the telepresence system.

121 citations


Journal ArticleDOI
TL;DR: This paper introduces a methodology for indoor localization using a commercial smart-phone combining dead reckoning and Wifi signal strength fingerprinting, and outlines an automated procedure for collecting Wifi calibration data that uses a robot equipped with a laser rangefinder and fiber optic gyroscope.

116 citations


Proceedings ArticleDOI
20 May 2012
TL;DR: This paper presents the development of a perception system for indoor environments that allows autonomous navigation for surveillance mobile robots, using an artificial neural network to recognize different configurations of the environment.
Abstract: This paper presents the development of a perception system for indoor environments to allow autonomous navigation for surveillance mobile robots. The system is composed of two parts. The first part is a reactive navigation system in which a mobile robot moves avoiding obstacles in the environment, using the Kinect distance sensor. The second part of the system uses an artificial neural network (ANN) to recognize different configurations of the environment, for example, path ahead, left path, right path, and intersections. The ANN is trained using data captured by the Kinect sensor in indoor environments. This way, the robot becomes able to perform topological navigation, combining internal reactive behavior to avoid obstacles with the ANN to locate the robot in the environment, in a deliberative behavior. The topological map is represented by a graph that captures the configuration of the environment, where hallways (path ahead) are the edges and locations (left path and intersection, for example) are the vertices. The system also works in the dark, which is a great advantage for surveillance systems. The experiments were performed with a Pioneer P3-AT robot equipped with a Kinect sensor in order to validate and evaluate this approach. The proposed method proved to be a promising approach to autonomous mobile robot navigation.

112 citations


Proceedings ArticleDOI
01 Nov 2012
TL;DR: This paper presents an integrated approach for robot localization, obstacle mapping, and path planning in 3D environments based on data of an onboard consumer-level depth camera based on state-of-the-art techniques for environment modeling and localization extended for depth camera data.
Abstract: In this paper, we present an integrated approach for robot localization, obstacle mapping, and path planning in 3D environments based on data of an onboard consumer-level depth camera. We rely on state-of-the-art techniques for environment modeling and localization, which we extend for depth camera data. We thoroughly evaluated our system with a Nao humanoid equipped with an Asus Xtion Pro Live depth camera on top of the humanoid's head and present navigation experiments in a multi-level environment containing static and non-static obstacles. Our approach performs in real-time, maintains a 3D environment representation, and estimates the robot's pose in 6D. As our results demonstrate, the depth camera is well-suited for robust localization and reliable obstacle avoidance in complex indoor environments.

108 citations


Journal ArticleDOI
Woojin Chung1, Hoyeon Kim1, Yoonkyu Yoo1, Chang-bae Moon1, Jooyoung Park1 
TL;DR: This paper proposes detection and tracking schemes for human legs by the use of a single laser range finder, and establishes an efficient leg-tracking scheme by exploiting a human walking model to achieve robust tracking under occlusions.
Abstract: The human-friendly navigation of mobile robots is a significant social and technological issue. There are many potential applications of human-following technology. Good examples are human-following shopping carts, porter robots at airports, and museum guide robots. In this paper, we propose detection and tracking schemes for human legs by the use of a single laser range finder. The leg detection algorithm takes an inductive approach by the application of a support vector data description scheme and simple attributes. We establish an efficient leg-tracking scheme by exploiting a human walking model to achieve robust tracking under occlusions. The proposed schemes are successfully verified through experiments.
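A much-simplified geometric stand-in for the paper's leg detector (which actually uses a support vector data description classifier) illustrates the basic pipeline: segment a laser scan at range discontinuities, then keep clusters whose chord width is leg-like. All names and thresholds here are illustrative:

```python
import math

def detect_legs(ranges, angle_inc, jump=0.1, min_w=0.05, max_w=0.25):
    """Segment a 2D laser scan into clusters at range discontinuities,
    then keep clusters whose approximate chord width is leg-like.
    Returns a (centre_range, centre_angle) tuple per candidate leg."""
    clusters, cur = [], [0]
    for i in range(1, len(ranges)):
        if abs(ranges[i] - ranges[i - 1]) > jump:   # range discontinuity
            clusters.append(cur)
            cur = [i]
        else:
            cur.append(i)
    clusters.append(cur)
    legs = []
    for c in clusters:
        r0, r1 = ranges[c[0]], ranges[c[-1]]
        span = (c[-1] - c[0]) * angle_inc           # angular extent
        width = math.hypot(r0 - r1, (r0 + r1) / 2 * span)  # approx chord
        if min_w <= width <= max_w:
            mid = c[len(c) // 2]
            legs.append((ranges[mid], mid * angle_inc))
    return legs
```

A tracker such as the paper's would then associate these detections over time using a human walking model.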

108 citations


Journal ArticleDOI
01 Jan 2012
TL;DR: The proposed fuzzy-based potential field motion planning approach provided the robot with a collision-free path to softly land on the moving target and solved the local minimum problem within any stationary or dynamic environment, in contrast to other potential field-based approaches.
Abstract: A new fuzzy-based potential field method is presented in this paper for autonomous mobile robot motion planning in dynamic environments including static or moving targets and obstacles. Two fuzzy Mamdani and TSK models have been used to develop the total attractive and repulsive forces acting on the mobile robot. The attractive and repulsive forces were estimated using four inputs representing the relative position and velocity between the target and the robot in the x and y directions, on the one hand, and between the obstacle and the robot, on the other. The proposed fuzzy potential field motion planning was investigated in several MATLAB simulation scenarios for robot motion planning within realistic dynamic environments. These simulations showed that the proposed approach was able to provide the robot with a collision-free path to softly land on the moving target and to solve the local minimum problem within any stationary or dynamic environment, in contrast to other potential field-based approaches.
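For context, the crisp potential-field baseline that the fuzzy Mamdani/TSK models replace can be sketched as follows; the gains and the influence distance `d0` are illustrative values, not the paper's:

```python
import numpy as np

def potential_field_step(robot, goal, obstacles, k_att=1.0, k_rep=0.5, d0=2.0):
    """One step of the classical potential-field method: an attractive
    force pulls the robot to the goal, and repulsive forces push it away
    from obstacles inside the influence distance d0. Returns the
    resultant force vector acting on the robot."""
    robot, goal = np.asarray(robot, float), np.asarray(goal, float)
    f = k_att * (goal - robot)                     # attractive component
    for obs in obstacles:
        diff = robot - np.asarray(obs, float)
        d = np.linalg.norm(diff)
        if 0 < d < d0:                             # repulsion only inside d0
            f += k_rep * (1.0 / d - 1.0 / d0) / d**3 * diff
    return f
```

The well-known local-minimum problem of this baseline arises where the attractive and repulsive terms cancel, which is what the fuzzy variant in the paper aims to resolve.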

100 citations


Journal ArticleDOI
TL;DR: A hybrid approach (Roaming Trails), which integrates a priori knowledge of the environment with local perceptions in order to carry out the assigned tasks efficiently and safely, is described, by guaranteeing that the robot can never be trapped in deadlocks even when operating within a partially unknown dynamic environment.

99 citations


Journal ArticleDOI
TL;DR: This work describes a display in which each pixel is a mobile robot of controllable color, and their positioning and motion are used to produce a novel experience.
Abstract: In this article we present a novel display that is created using a group of mobile robots. In contrast to traditional displays that are based on a fixed grid of pixels, such as a screen or a projection, this work describes a display in which each pixel is a mobile robot of controllable color. Pixels become mobile entities, and their positioning and motion are used to produce a novel experience. The system input is a single image or an animation created by an artist. The first stage is to generate physical goal configurations and robot colors to optimally represent the input imagery with the available number of robots. The run-time system includes goal assignment, path planning and local reciprocal collision avoidance, to guarantee smooth, fast and oscillation-free motion between images. The algorithms scale to very large robot swarms and extend to a wide range of robot kinematics. Experimental evaluation is done for two different physical swarms of size 14 and 50 differentially driven robots, and for simulations with 1,000 robot pixels.
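The goal-assignment stage can be illustrated with a simple greedy nearest-neighbour sketch. The paper uses an optimal assignment; this stand-in is only an illustration and assumes at least as many goals as robots:

```python
import math

def assign_goals(robots, goals):
    """Greedy nearest-neighbour assignment of robot pixels to goal
    positions: each robot in turn claims the closest still-free goal.
    Returns the assigned goal index per robot (assumes len(goals) >=
    len(robots))."""
    free = set(range(len(goals)))
    assignment = []
    for rx, ry in robots:
        j = min(free, key=lambda k: math.hypot(goals[k][0] - rx,
                                               goals[k][1] - ry))
        assignment.append(j)
        free.remove(j)
    return assignment
```

A path planner with reciprocal collision avoidance, as in the paper, would then drive each robot to its assigned goal.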

Proceedings ArticleDOI
14 May 2012
TL;DR: This work presents a fast, integrated approach to solve path planning in 3D using a combination of an efficient octree-based representation of the 3D world and an anytime search-based motion planner to improve planning speed.
Abstract: Collision-free navigation in cluttered environments is essential for any mobile manipulation system. Traditional navigation systems have relied on a 2D grid map projected from a 3D representation for efficiency. This approach, however, prevents navigation close to objects in situations where projected 3D configurations are in collision within the 2D grid map even if actually no collision occurs in the 3D environment. Accordingly, when using such a 2D representation for planning paths of a mobile manipulation robot, the number of planning problems which can be solved is limited and suboptimal robot paths may result. We present a fast, integrated approach to solve path planning in 3D using a combination of an efficient octree-based representation of the 3D world and an anytime search-based motion planner. Our approach utilizes a combination of multi-layered 2D and 3D representations to improve planning speed, allowing the generation of almost real-time plans with bounded sub-optimality. We present extensive experimental results with the two-armed mobile manipulation robot PR2 carrying large objects in a highly cluttered environment. Using our approach, the robot is able to efficiently plan and execute trajectories while transporting objects, thereby often moving through demanding, narrow passageways.

Proceedings ArticleDOI
21 May 2012
TL;DR: This work examines a human-aware global navigation planner in a path-crossing situation, assesses the legibility of the resulting navigation behavior, and provides a new way of calculating context-dependent social costs without increasing the search space.
Abstract: Our objective is to improve the legibility of robot navigation behavior in the presence of moving humans. We examine a human-aware global navigation planner in a path-crossing situation and assess the legibility of the resulting navigation behavior. We observe that planning based on fixed social costs and static search spaces performs badly in situations where robot and human move towards the same point. To find an improved cost model, we experimentally examine how humans deal with path crossing. Based on the results, we provide a new way of calculating context-dependent social costs without increasing the search space. Our evaluation shows that a simulated robot using our new cost model moves more similarly to humans. This shows how comparison of human and robot behavior can help with assessing and improving legibility.

Journal ArticleDOI
TL;DR: A novel decision-making algorithm is proposed for autonomous mobile robot navigation in an urban area where global positioning system (GPS) measurements are unreliable and an interacting multiple model method is proposed to determine the existence of a curb based on a probability threshold and to accurately estimate the roadside curb position.
Abstract: In this paper, a novel decision-making method is proposed for autonomous mobile robot navigation in an urban area where global positioning system (GPS) measurements are unreliable. The proposed method uses lidar measurements of the road's surface to detect road boundaries. An interacting multiple model method is proposed to determine the existence of a curb based on a probability threshold and to accurately estimate the roadside curb position. The decision outcome is used to determine the source of references suitable for reliable and seamless navigation. The performance of the decision-making algorithm is verified through extensive experiments with a mobile robot autonomously navigating through campus roads with several intersections and unreliable GPS measurements. Our experimental results demonstrate the reliability and good tracking performance of the proposed algorithm for autonomous urban navigation.

Patent
04 May 2012
TL;DR: Shared robot knowledge bases for use with cloud computing systems are presented, in which information is updated based on robot experiences so that any particular robot may benefit from the prior experiences of other robots.
Abstract: The present application discloses shared robot knowledge bases for use with cloud computing systems. In one embodiment, the cloud computing system collects data from a robot about an object the robot has encountered in its environment, and stores the received data in the shared robot knowledge base. In another embodiment, the cloud computing system sends instructions for interacting with an object to a robot, receives feedback from the robot based on its interaction with the object, and updates data in the shared robot knowledge base based on the feedback. In yet another embodiment, the cloud computing system sends instructions to a robot for executing an application based on information stored in the shared robot knowledge base. In the disclosed embodiments, information in the shared robot knowledge bases is updated based on robot experiences so that any particular robot may benefit from prior experiences of other robots.

Proceedings ArticleDOI
12 Nov 2012
TL;DR: This paper investigates the motion planning of handovers while accounting for human mobility, treating the human motion as part of the planning problem and thus enabling a broader range of handover strategies to be found.
Abstract: For a versatile human-assisting mobile-manipulating robot such as the PR2, handing over objects to humans in possibly cluttered workspaces is a key capability. In this paper we investigate the motion planning of handovers while accounting for human mobility. We treat the human motion as part of the planning problem, thus enabling a broader range of handover strategies to be found. We formalize the problem and propose an algorithmic solution taking into account the HRI constraints induced by the presence of the human receiver. Simulation results with the PR2 robot illustrate the efficacy of the approach.

Journal ArticleDOI
TL;DR: The present paper describes the integration of laser-based perception, footstep planning, and walking control of a humanoid robot for navigation over previously unknown rough terrain and the accuracy of the terrain shape measurement.
Abstract: The present paper describes the integration of laser-based perception, footstep planning, and walking control of a humanoid robot for navigation over previously unknown rough terrain. A perception system that obtains the shape of the surrounding environment to an accuracy of a few centimeters is realized based on input obtained using a scanning laser range sensor. A footstep planner decides the sequence of stepping positions using the obtained terrain shape. A walking controller that can cope with a few centimeters of error in terrain shape measurement is achieved by combining the generation of a 40-ms cycle online walking pattern and a ground reaction force controller with sensor feedback. An operational interface was developed to send commands to the robot. A mixed-reality display was adopted to realize an intuitive interface. The navigation system was implemented on the HRP-2, a full-size humanoid robot. The performance of the proposed system for navigation over unknown rough terrain and the accuracy of the terrain shape measurement were investigated through several experiments.

Proceedings ArticleDOI
12 Nov 2012
TL;DR: Results show that legibility as defined here increases perceived safety of both navigation methods while the level of perceived safety differs between them.
Abstract: In the future, robots will enter our daily life more and more. If we want to increase their acceptance, it is necessary that people feel safe in the presence of robots. As a prerequisite, we think that the robot's behavior has to be legible in order to achieve such a feeling of perceived safety. With our present experiment, we assess the perceived safety participants feel when an autonomous robot is crossing their path. Participants are presented with a video-based scenario in a first-person perspective. The robot moves with two different navigation algorithms, which allows us to test whether legibility has an influence on perceived safety and whether the two navigation algorithms differ in their resulting legibility and thus perceived safety. Results show that legibility as defined here increases the perceived safety of both navigation methods, while the level of perceived safety differs between them.

Proceedings ArticleDOI
24 Dec 2012
TL;DR: A mobile manipulation platform operated by a motor-impaired person using input from a head-tracker, single-button mouse is presented, and how the use of autonomous sub-modules improves performance in complex, cluttered environments is shown.
Abstract: We present a mobile manipulation platform operated by a motor-impaired person using input from a head-tracker, single-button mouse. The platform is used to perform varied and unscripted manipulation tasks in a real home, combining navigation, perception and manipulation. The operator can make use of a wide range of interaction methods and tools, from direct tele-operation of the gripper or mobile base to autonomous sub-modules performing collision-free base navigation or arm motion planning. We describe the complete set of tools that enable the execution of complex tasks, and share the lessons learned from testing them in a real user's home. In the context of grasping, we show how the use of autonomous sub-modules improves performance in complex, cluttered environments, and compare the results to those obtained by novice, able-bodied users operating the same system.

Journal ArticleDOI
TL;DR: A novel pattern-based genetic algorithm is proposed that is designed to handle routing and partitioning concurrently for sensor-based multi-robot coverage path planning problem.
Abstract: Sensor-based multi-robot coverage path planning problem is one of the challenging problems in managing flexible, computer-integrated, intelligent manufacturing systems. A novel pattern-based genetic algorithm is proposed for this problem. The area subject to coverage is modeled with disks representing the range of sensing devices. Then the problem is defined as finding a sequence of the disks for each robot to minimize the coverage completion time determined by the maximum time traveled by a robot in a mobile robot group. So the environment needs to be partitioned among robots considering their travel times. Robot turns cause the robot to slow down, turn and accelerate inevitably. Therefore, the actual travel time of a mobile robot is calculated based on the traveled distance and the number of turns. The algorithm is designed to handle routing and partitioning concurrently. Experiments are conducted using P3-DX mobile robots in the laboratory and simulation environment to validate the results.
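The turn-aware travel-time model described above can be sketched as follows; `speed` and `turn_penalty` are illustrative parameters, not values from the paper:

```python
import math

def travel_time(path, speed=1.0, turn_penalty=0.5):
    """Travel time of a polyline path as distance/speed plus a fixed
    penalty per heading change, reflecting that each turn forces the
    robot to decelerate, rotate, and re-accelerate."""
    dist, turns = 0.0, 0
    prev_heading = None
    for (x0, y0), (x1, y1) in zip(path, path[1:]):
        dist += math.hypot(x1 - x0, y1 - y0)
        heading = math.atan2(y1 - y0, x1 - x0)
        if prev_heading is not None and abs(heading - prev_heading) > 1e-9:
            turns += 1
        prev_heading = heading
    return dist / speed + turn_penalty * turns
```

A genetic algorithm like the paper's would evaluate candidate disk sequences per robot with a cost of this kind, taking the maximum over the robot group as the coverage completion time.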

Proceedings ArticleDOI
10 Jun 2012
TL;DR: smartPATH is found to outperform both classical ACO (CACO) and GA algorithms (as defined in the literature, without modification) and the Bellman-Ford shortest-path method for solving the path planning problem.
Abstract: Path planning is a critical combinatorial problem essential for the navigation of a mobile robot. Several research initiatives, aiming at providing optimized solutions to this problem, have emerged. Ant Colony Optimization (ACO) and Genetic Algorithms (GA) are the two most widely used heuristics that have shown their effectiveness in solving such a problem. This paper presents smartPATH, a new hybrid ACO-GA algorithm to solve the global robot path planning problem. The algorithm consists of a combination of an improved ACO algorithm (IACO) for efficient and fast path selection, and a modified crossover operator for avoiding falling into a local minimum. Our system model incorporates a Wireless Sensor Network (WSN) infrastructure to support the robot navigation, where sensor nodes are used as signposts that help locate the mobile robot and guide it towards the target location. We found that smartPATH outperforms both classical ACO (CACO) and GA algorithms (as defined in the literature, without modification) and the Bellman-Ford shortest-path method for solving the path planning problem. We also demonstrate that smartPATH reduces the execution time by up to 64.9% in comparison with the Bellman-Ford exact method and improves the solution quality by up to 48.3% in comparison with CACO.

Proceedings ArticleDOI
24 Dec 2012
TL;DR: This work proposes a strategy for robot navigation in a structured, dynamic indoor environment, where the robot reasons about the near future and makes a locally optimal decision at each time step.
Abstract: An autonomous vehicle intended to carry passengers must be able to generate trajectories on-line that are safe, smooth and comfortable. Here, we propose a strategy for robot navigation in a structured, dynamic indoor environment, where the robot reasons about the near future and makes a locally optimal decision at each time step.

Patent
30 Sep 2012
TL;DR: In this paper, a method of displaying navigational instructions when a navigation application is running in a background mode of an electronic device is provided, which displays a non-navigation application in the foreground on a display screen of the electronic device.
Abstract: A method of displaying navigational instructions when a navigation application is running in a background mode of an electronic device is provided. The method displays a non-navigation application in the foreground on a display screen of the electronic device. The method displays a navigation bar without a navigation instruction when the device is not near a navigation point. The method displays the navigation bar with a navigation instruction when the device is near a navigation point. In some embodiments, the method receives a command to switch from running the navigation application in the foreground to running another screen view in the foreground. The method then runs the other screen view in the foreground while displaying a navigation status display on an electronic display of the device.

Proceedings ArticleDOI
12 Nov 2012
TL;DR: A method for robot to human object hand-over is presented that takes into account user comfort, aimed at contributing to the development of socially aware robots.
Abstract: A method for robot to human object hand-over is presented that takes into account user comfort. Comfort is addressed by serving the object to facilitate user's convenience. The object is delivered so that the most appropriate part is oriented towards the person interacting with the robot. This approach, aimed at contributing to the development of socially aware robots, has not been considered in previous works. The robot system also supports sensory-motor skills like object and people detection, robot grasping and motion planning. The experimental setup consists of a six degrees of freedom robot arm with both an eye-in-hand laser scanner and a fixed range sensor. The user interacting with the robot can assume an arbitrary position in front of the robot. Experiments are reported from a user study.

Proceedings ArticleDOI
05 May 2012
TL;DR: A large-scale in-situ study of tactile feedback for pedestrian navigation systems is reported and data collected through anonymous monitoring suggests that tactile feedback is successfully adopted in one third of all trips and has positive effects on the user's level of distraction.
Abstract: In this paper, we report on a large-scale in-situ study of tactile feedback for pedestrian navigation systems. Recent advances in smartphone technology have enabled a number of interaction techniques for smartphones that use tactile feedback to deliver navigation information. The aim is to enable eyes-free usage and avoid distracting the user from the environment. Field studies in which participants had to fulfill given navigation tasks have found these techniques to be efficient and beneficial in terms of distraction. But it is not yet clear whether these findings replicate in in-situ usage. We therefore developed a Google Maps-like navigation application that incorporates interaction techniques proposed in previous work. The application was published for free on the Android Market, so people were able to use it as a navigation system in their everyday life. The data collected through anonymous monitoring suggests that tactile feedback is successfully adopted in one third of all trips and has positive effects on the user's level of distraction.

Patent
30 Jul 2012
TL;DR: In this paper, a navigation device is arranged to dynamically generate multi-dimensional (multidimensional) video signals based on location and directional information of the navigation device by processing at least one source video signal.
Abstract: The present invention relates to a navigation device. The navigation device is arranged to dynamically generate multi-dimensional (multidimensional) video signals based on location and directional information of the navigation device by processing at least one source video signal. The navigation device is further arranged to superimpose navigation directions and/or environment information about surrounding objects onto the generated multidimensional video feed.

Proceedings ArticleDOI
04 Dec 2012
TL;DR: This work presents a combined interface of Virtual Reality (VR) and Augmented Reality (AR) elements with indicators that help to communicate and ensure localization accuracy, finding that AR was preferred in the case of reliable localization, but that with VR, navigation instructions were perceived as more accurate in the case of localization and orientation errors.
Abstract: Vision-based approaches for mobile indoor localization do not rely on the infrastructure and are therefore scalable and cheap. The particular requirements of a navigation user interface for a vision-based system, however, have not been investigated so far. Such mobile interfaces should adapt to localization accuracy, which strongly relies on distinctive reference images, and to other factors, such as the phone's pose. If necessary, the system should motivate the user to point the smartphone at distinctive regions to improve localization quality. We present a combined interface of Virtual Reality (VR) and Augmented Reality (AR) elements with indicators that help to communicate and ensure localization accuracy. In an evaluation with 81 participants, we found that AR was preferred in the case of reliable localization, but with VR, navigation instructions were perceived as more accurate in the case of localization and orientation errors. The additional indicators showed a potential for making users choose distinctive reference images for reliable localization.

Proceedings Article
21 May 2012
TL;DR: This paper presents the application of the Hybrid A* algorithm to a nonholonomic mobile outdoor robot in order to plan near optimal paths in mostly unknown and potentially intricate environments.
Abstract: Efficient path planning is one of the main prerequisites for robust navigation of autonomous robots. Especially driving in complex environments containing both streets and unstructured regions is a challenging problem. In this paper we present the application of the Hybrid A* algorithm to a nonholonomic mobile outdoor robot in order to plan near optimal paths in mostly unknown and potentially intricate environments. The implemented algorithm is capable of generating paths with a rate of at least 10 Hz to guarantee real-time behavior.
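For reference, plain grid-based A* — the discrete core that Hybrid A* extends with continuous headings and kinematic motion primitives — can be sketched as:

```python
import heapq

def astar(grid, start, goal):
    """4-connected grid A* with a Manhattan-distance heuristic.
    grid[r][c] == 1 marks an obstacle; returns a list of cells from
    start to goal, or None if no path exists."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_set = [(h(start), 0, 0, start, None)]  # (f, g, tie-break, node, parent)
    tie, came, g = 1, {}, {start: 0}
    while open_set:
        _, cost, _, cur, parent = heapq.heappop(open_set)
        if cur in came:                 # already expanded via a better path
            continue
        came[cur] = parent
        if cur == goal:                 # reconstruct by walking parents back
            path = []
            while cur is not None:
                path.append(cur)
                cur = came[cur]
            return path[::-1]
        r, c = cur
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = cost + 1
                if ng < g.get(nxt, float('inf')):
                    g[nxt] = ng
                    heapq.heappush(open_set, (ng + h(nxt), ng, tie, nxt, cur))
                    tie += 1
    return None
```

Hybrid A* replaces the discrete cell successors with arcs feasible for a nonholonomic vehicle, which is what makes the resulting paths drivable.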

Patent
15 Mar 2012
TL;DR: In this article, a method for training a robot to execute a robotic task in a work environment includes moving the robot across its configuration space through multiple states of the task and recording motor schema describing a sequence of behavior of the robot.
Abstract: A method for training a robot to execute a robotic task in a work environment includes moving the robot across its configuration space through multiple states of the task and recording motor schema describing a sequence of behavior of the robot. Sensory data describing performance and state values of the robot is recorded while moving the robot. The method includes detecting perceptual features of objects located in the environment, assigning virtual deictic markers to the detected perceptual features, and using the assigned markers and the recorded motor schema to subsequently control the robot in an automated execution of another robotic task. Markers may be combined to produce a generalized marker. A system includes the robot, a sensor array for detecting the performance and state values, a perceptual sensor for imaging objects in the environment, and an electronic control unit that executes the present method.

Journal ArticleDOI
TL;DR: This work proposes a novel UWB navigation system that permits accurate mobile robot (MR) navigation in indoor environments, reaching an accuracy that outperforms traditional sensor technologies used in robot navigation, such as odometers and sonar.
Abstract: Typical indoor environments contain multiple walls and obstacles consisting of different materials. As a result, current narrowband radio frequency (RF) indoor navigation systems cannot satisfy the challenging demands of most indoor applications. The RF ultra-wideband (UWB) system is a promising technology for indoor localisation owing to its high bandwidth, which permits mitigation of the multipath identification problem. This work proposes a novel UWB navigation system that permits accurate mobile robot (MR) navigation in indoor environments. The navigation system is composed of two sub-systems: the localisation system and the MR control system. The main contributions of this work are the estimation algorithm for localisation, the digital implementation of the transmitter and receiver, and the integration of both sub-systems to enable autonomous robot navigation. For performance evaluation of the sub-systems, static and dynamic experiments were carried out, which demonstrated that the proposed system reaches an accuracy that outperforms traditional sensor technologies used in robot navigation, such as odometers and sonar.
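The kind of position fix such a UWB localisation system computes can be illustrated with a linear least-squares trilateration sketch (an illustrative baseline, not the paper's estimation algorithm):

```python
import numpy as np

def trilaterate(anchors, dists):
    """Linear least-squares 2D position fix from ranges to known
    anchors (n >= 3). Subtracting the first range equation from the
    others linearises ||p - a_i||^2 = d_i^2 into A p = b."""
    anchors = np.asarray(anchors, float)
    d = np.asarray(dists, float)
    A = 2.0 * (anchors[1:] - anchors[0])
    b = (d[0]**2 - d[1:]**2
         + (anchors[1:]**2).sum(axis=1) - (anchors[0]**2).sum())
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos
```

With noisy UWB time-of-flight ranges, the least-squares solution averages out part of the error; a full system would typically filter successive fixes over time.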