scispace - formally typeset

Showing papers on "Mobile robot navigation published in 2017"


Proceedings ArticleDOI
21 Jul 2017
TL;DR: A neural architecture for navigation in novel environments that learns to map from first-person views and plans a sequence of actions towards goals in the environment, and can also achieve semantically specified goals, such as go to a chair.
Abstract: We introduce a neural architecture for navigation in novel environments. Our proposed architecture learns to map from first-person views and plans a sequence of actions towards goals in the environment. The Cognitive Mapper and Planner (CMP) is based on two key ideas: a) a unified joint architecture for mapping and planning, such that the mapping is driven by the needs of the planner, and b) a spatial memory with the ability to plan given an incomplete set of observations about the world. CMP constructs a top-down belief map of the world and applies a differentiable neural net planner to produce the next action at each time step. The accumulated belief of the world enables the agent to track visited regions of the environment. Our experiments demonstrate that CMP outperforms both reactive strategies and standard memory-based architectures and performs well in novel environments. Furthermore, we show that CMP can also achieve semantically specified goals, such as go to a chair.

521 citations
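The coupling of a belief map with a differentiable planner can be approximated, for intuition, by plain value iteration on a top-down grid of traversability beliefs. The sketch below is an illustrative stand-in, not the CMP architecture: the discount factor, 4-connected moves, and greedy action selection are assumptions.

```python
import numpy as np

def value_iteration_plan(belief_free, goal, iters=50):
    """Plan on a top-down belief map by value iteration.

    belief_free: 2D array in [0, 1], probability a cell is traversable.
    goal: (row, col) goal cell.
    Returns a value map; the next action greedily ascends it.
    """
    h, w = belief_free.shape
    v = np.zeros((h, w))
    v[goal] = 1.0
    for _ in range(iters):
        # Propagate value from the 4-neighbourhood, discounted,
        # and gated by the belief that a cell is free.
        shifted = np.stack([
            np.pad(v, ((1, 0), (0, 0)))[:-1, :],  # from the cell above
            np.pad(v, ((0, 1), (0, 0)))[1:, :],   # from below
            np.pad(v, ((0, 0), (1, 0)))[:, :-1],  # from the left
            np.pad(v, ((0, 0), (0, 1)))[:, 1:],   # from the right
        ])
        v = np.maximum(v, 0.95 * belief_free * shifted.max(axis=0))
        v[goal] = 1.0
    return v

def next_action(v, pos):
    """Greedy action: move to the in-bounds neighbour with highest value."""
    r, c = pos
    moves = {"up": (r - 1, c), "down": (r + 1, c),
             "left": (r, c - 1), "right": (r, c + 1)}
    valid = {a: v[p] for a, p in moves.items()
             if 0 <= p[0] < v.shape[0] and 0 <= p[1] < v.shape[1]}
    return max(valid, key=valid.get)
```

With a fully-free belief map, the value decays as 0.95 per step of Manhattan distance from the goal, so the greedy rule walks straight toward it.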


Proceedings ArticleDOI
01 Sep 2017
TL;DR: Using deep reinforcement learning, this work develops a time-efficient navigation policy that respects common social norms and is shown to enable fully autonomous navigation of a robotic vehicle moving at human walking speed in an environment with many pedestrians.
Abstract: For robotic vehicles to navigate safely and efficiently in pedestrian-rich environments, it is important to model subtle human behaviors and navigation rules (e.g., passing on the right). However, while instinctive to humans, socially compliant navigation is still difficult to quantify due to the stochasticity in people's behaviors. Existing works are mostly focused on using feature-matching techniques to describe and imitate human paths, but often do not generalize well since the feature values can vary from person to person, and even run to run. This work notes that while it is challenging to directly specify the details of what to do (precise mechanisms of human navigation), it is straightforward to specify what not to do (violations of social norms). Specifically, using deep reinforcement learning, this work develops a time-efficient navigation policy that respects common social norms. The proposed method is shown to enable fully autonomous navigation of a robotic vehicle moving at human walking speed in an environment with many pedestrians.

515 citations


Journal ArticleDOI
TL;DR: This paper provides a comprehensive survey of recent developments in human-centered intelligent robots and reviews existing works on human-centered robots.
Abstract: Intelligent techniques foster the dissemination of new discoveries and novel technologies that advance the ability of robots to assist and support humans. The human-centered intelligent robot has become an important research field that spans all of the robot capabilities including navigation, intelligent control, pattern recognition and human-robot interaction. This paper focuses on the recent achievements and presents a survey of existing works on human-centered robots. Furthermore, we provide a comprehensive survey of the recent development of the human-centered intelligent robot and discuss the issues and challenges in the field.

231 citations


Journal ArticleDOI
23 May 2017
TL;DR: The present article focuses on the study of the intelligent navigation techniques, which are capable of navigating a mobile robot autonomously in static as well as dynamic environments.
Abstract: A mobile robot is an autonomous agent capable of navigating intelligently anywhere using sensor-actuator control techniques. The applications of autonomous mobile robots in fields such as industry, space, defence, transportation, and other social sectors are growing day by day. Mobile robots perform many tasks, such as rescue operations, patrolling, disaster relief, planetary exploration, and material handling. Therefore, an intelligent mobile robot is required that can travel autonomously in various static and dynamic environments. Several techniques have been applied by various researchers for mobile robot navigation and obstacle avoidance. The present article focuses on intelligent navigation techniques that are capable of navigating a mobile robot autonomously in static as well as dynamic environments.

175 citations


Journal ArticleDOI
TL;DR: In this approach, the robot makes use of depth information delivered by the vision system to accurately model its surrounding environment through image processing techniques and generates a collision-free optimal path linking an initial configuration of the mobile robot to a final configuration (Target).

169 citations


Journal ArticleDOI
TL;DR: This paper presents a comprehensive survey of recent developments in Terrain-Based Navigation methods proposed for AUVs, including a brief introduction to the original Terrain-Based Navigation formulations, a description of the algorithms, and a list of the different implementation alternatives found in the literature.

159 citations


Journal ArticleDOI
TL;DR: The paper proves the SLI with real and experimental results for the statement “any robotics randomized environment transforms into the array,” and presents a new variant of the genetic algorithm that uses binary codes in matrix form for mobile robot navigation (MRN) in static and dynamic environments.

133 citations


Journal ArticleDOI
TL;DR: A set of experiments that tasked individuals with navigating a virtual maze using different methods to simulate an evacuation concluded that a mistake made by a robot will cause a person to have a significantly lower level of trust in it in later interactions.
Abstract: Robots have the potential to save lives in high-risk situations, such as emergency evacuations. To realize this potential, we must understand how factors such as the robot's performance, the riskiness of the situation, and the evacuee's motivation influence his or her decision to follow a robot. In this paper, we developed a set of experiments that tasked individuals with navigating a virtual maze using different methods to simulate an evacuation. Participants chose whether or not to use the robot for guidance in each of two separate navigation rounds. The robot performed poorly in two of the three conditions. The participant's decision to use the robot and self-reported trust in the robot served as dependent measures. A 53% drop in self-reported trust was found when the robot performed poorly. Self-reports of trust were strongly correlated with the decision to use the robot for guidance (φ(90) = +0.745). We conclude that a mistake made by a robot will cause a person to have a significantly lower level of trust in it in later interactions.

121 citations
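The reported correlation is a phi coefficient between two binary variables, here self-reported trust and the decision to follow the robot. For a 2×2 contingency table it can be computed directly; the row and column labels below are hypothetical, chosen only to mirror the study's variables.

```python
import math

def phi_coefficient(a, b, c, d):
    """Phi (mean-square contingency) coefficient for a 2x2 table:

                    trusted   not trusted
    followed           a           b
    not followed       c           d
    """
    denom = math.sqrt((a + b) * (c + d) * (a + c) * (b + d))
    return (a * d - b * c) / denom if denom else 0.0
```

Perfect agreement between the two variables gives +1, perfect disagreement −1, and independence 0.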


Journal ArticleDOI
TL;DR: The results show that the developed socially aware navigation framework allows a mobile robot to navigate safely, socially, and proactively while guaranteeing human safety and comfort in crowded and dynamic environments.
Abstract: Safe and social navigation is the key to deploying a mobile service robot in a human-centered environment. Widespread acceptability of mobile service robots in daily life is hindered by robot’s inability to navigate in crowded and dynamic human environments in a socially acceptable way that would guarantee human safety and comfort. In this paper, we propose an effective proactive social motion model (PSMM) that enables a mobile service robot to navigate safely and socially in crowded and dynamic environments. The proposed method considers not only human states (position, orientation, motion, field of view, and hand poses) relative to the robot but also social interactive information about human–object and human group interactions. This allows development of the PSMM that consists of elements of an extended social force model and a hybrid reciprocal velocity obstacle technique. The PSMM is then combined with a path planning technique to generate a motion planning system that drives a mobile robot in a socially acceptable manner and produces respectful and polite behaviors akin to human movements. Note to Practitioners —In this paper, we validated the effectiveness and feasibility of the proposed proactive social motion model (PSMM) through both simulation and real-world experiments under the newly proposed human comfortable safety indices. To do that, we first implemented the entire navigation system using the open-source robot operating system. We then installed it in a simulated robot model and conducted experiments in a simulated shopping mall-like environment to verify its effectiveness. We also installed the proposed algorithm on our mobile robot platform and conducted experiments in our office-like laboratory environment. Our results show that the developed socially aware navigation framework allows a mobile robot to navigate safely, socially, and proactively while guaranteeing human safety and comfort in crowded and dynamic environments. 
In this paper, we examined the proposed PSMM with a set of predefined parameters selected based on our empirical experience with the robot mechanism and the selected social environment. In practice, however, a mobile robot might need to adapt to various contextual and cultural situations in different social environments. Thus, it should be equipped with an online adaptive interactive learning mechanism allowing it to auto-adjust its parameters to the environment in which it is embedded. Using machine learning techniques, e.g., inverse reinforcement learning [1], to optimize the parameter set of the PSMM could be a promising research direction to improve the adaptability of mobile service robots in different social environments. In the future, we will evaluate the proposed framework in a wider variety of scenarios, particularly those with different social interaction situations and dynamic environments. Furthermore, the various kinds of social cues and signals introduced in [2] and [3] will be applied to extend the proposed framework to more complicated social situations and contexts. Last but not least, we will investigate different machine learning techniques and incorporate them in the PSMM in order to allow the robot to automatically adapt to diverse social environments.

120 citations


Journal ArticleDOI
TL;DR: The proposed adaptive controller only requires the image information from an uncalibrated perspective camera mounted at any position and orientation (attitude) on the follower robot and does not depend on the relative position measurement and communication between the leader and follower.
Abstract: This paper focuses on the problem of vision-based leader–follower formation control of mobile robots. The proposed adaptive controller only requires the image information from an uncalibrated perspective camera mounted at any position and orientation (attitude) on the follower robot. Furthermore, the approach does not depend on the relative position measurement and communication between the leader and follower. First, a new real-time observer is developed to estimate the unknown intrinsic and extrinsic camera parameters as well as the unknown coefficients of the plane where the feature point moves relative to the camera frame. Second, the Lyapunov method is employed to prove the stability of the closed-loop system, where it is shown that convergence of the image error is guaranteed. Finally, the performance of the approach is demonstrated through physical experiments and experimental results.

114 citations


Journal ArticleDOI
TL;DR: A singleton type-1 fuzzy logic system (T1-SFLS) controller and Fuzzy-WDO hybrid for the autonomous mobile robot navigation and collision avoidance in an unknown static and dynamic environment is introduced.

Journal ArticleDOI
TL;DR: Travi-Navi is a vision-guided navigation system that enables a self-motivated user to easily bootstrap and deploy indoor navigation services, without comprehensive indoor localization systems or even the availability of floor maps.
Abstract: We present Travi-Navi—a vision-guided navigation system that enables a self-motivated user to easily bootstrap and deploy indoor navigation services, without comprehensive indoor localization systems or even the availability of floor maps. Travi-Navi records high-quality images during the course of a guider’s walk on the navigation paths, collects a rich set of sensor readings, and packs them into a navigation trace. The followers track the navigation trace, get prompt visual instructions and image tips, and receive alerts when they deviate from the correct paths. Travi-Navi also finds shortcuts whenever possible. In this paper, we describe the key techniques to solve several practical challenges, including robust tracking, shortcut identification, and high-quality image capture while walking. We implement Travi-Navi and conduct extensive experiments. The evaluation results show that Travi-Navi can track and navigate users with timely instructions, typically within a four-step offset, and detect deviation events within nine steps. We also characterize the power consumption of Travi-Navi on various mobile phones.

Journal ArticleDOI
12 Jun 2017-Sensors
TL;DR: The spherical camera for scene capturing is introduced, which enables 360° fisheye panorama as training samples and generation of sufficient positive and negative heading directions in the “navigation via classification” task, and experimental results demonstrate that the proposed framework outperforms competing ones in realistic applications.
Abstract: Vision-based mobile robot navigation is a vibrant area of research with numerous algorithms having been developed, the vast majority of which either belong to the scene-oriented simultaneous localization and mapping (SLAM) or fall into the category of robot-oriented lane-detection/trajectory tracking. These methods suffer from high computational cost and require stringent labelling and calibration efforts. To address these challenges, this paper proposes a lightweight robot navigation framework based purely on uncalibrated spherical images. To simplify the orientation estimation, path prediction and improve computational efficiency, the navigation problem is decomposed into a series of classification tasks. To mitigate the adverse effects of insufficient negative samples in the “navigation via classification” task, we introduce the spherical camera for scene capturing, which enables 360° fisheye panorama as training samples and generation of sufficient positive and negative heading directions. The classification is implemented as an end-to-end Convolutional Neural Network (CNN), trained on our proposed Spherical-Navi image dataset, whose category labels can be efficiently collected. This CNN is capable of predicting potential path directions with high confidence levels based on a single, uncalibrated spherical image. Experimental results demonstrate that the proposed framework outperforms competing ones in realistic applications.

Journal ArticleDOI
10 Sep 2017-Sensors
TL;DR: Experimental results show that the omni-directional mobile robot can move stably and autonomously in an indoor environment and in industrial fields.
Abstract: In order to transport materials flexibly and smoothly in a tight plant environment, an omni-directional mobile robot based on four Mecanum wheels was designed. The mechanical system of the mobile robot is made up of three separable layers so as to simplify its combination and reorganization. Each modularized wheel was installed on a vertical suspension mechanism, which ensures the moving stability and keeps the distances of four wheels invariable. The control system consists of two-level controllers that implement motion control and multi-sensor data processing, respectively. In order to make the mobile robot navigate in an unknown semi-structured indoor environment, the data from a Kinect visual sensor and four wheel encoders were fused to localize the mobile robot using an extended Kalman filter with specific processing. Finally, the mobile robot was integrated in an intelligent manufacturing system for material conveying. Experimental results show that the omni-directional mobile robot can move stably and autonomously in an indoor environment and in industrial fields.
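The fusion step described above can be sketched as a standard extended Kalman filter on the planar pose (x, y, θ): wheel odometry drives the prediction, and an absolute pose estimate (such as one derived from the Kinect data) drives the correction. The full-pose measurement model and the noise covariances below are illustrative assumptions, not the paper's specific processing.

```python
import numpy as np

def ekf_predict(x, P, v, w, dt, Q):
    """Predict pose (x, y, theta) from wheel-odometry velocities v, w."""
    th = x[2]
    x_new = x + np.array([v * np.cos(th) * dt, v * np.sin(th) * dt, w * dt])
    # Jacobian of the motion model with respect to the state.
    F = np.array([[1, 0, -v * np.sin(th) * dt],
                  [0, 1,  v * np.cos(th) * dt],
                  [0, 0, 1]])
    return x_new, F @ P @ F.T + Q

def ekf_update(x, P, z, R):
    """Correct with an absolute pose measurement z (e.g. from vision)."""
    H = np.eye(3)                      # measurement is the full pose
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
    x_new = x + K @ (z - H @ x)
    P_new = (np.eye(3) - K @ H) @ P
    return x_new, P_new
```

The update both pulls the state toward the measurement and shrinks the covariance, which is the behaviour the encoder/Kinect fusion relies on.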

Journal ArticleDOI
TL;DR: This paper designs and realizes a peer-to-peer navigation system (ppNav), on smartphones, which enables the fast-to -deploy navigation services, avoiding the requirements of pre-deployed location services and detailed floorplans.
Abstract: Most existing indoor navigation systems work in a client/server manner, which requires deploying comprehensive localization services together with precise indoor maps a priori. In this paper, we design and realize a peer-to-peer navigation system (ppNav), on smartphones, which enables fast-to-deploy navigation services, avoiding the requirements of pre-deployed location services and detailed floorplans. ppNav navigates a user to the destination by tracking user mobility, prompting timely walking tips, and alerting potential deviations, according to a previous traveller's trace experience. Specifically, we utilize the ubiquitous WiFi fingerprints in a novel diagrammed form and extract both radio and visual features of the diagram to track relative locations, and we exploit the fingerprint similarity trend for deviation detection. We further devise techniques to lock a user onto the nearest reference path in case he/she arrives at an uncharted place. Consolidating these techniques, we implement ppNav on commercial mobile devices and validate its performance in real environments. Our results show that ppNav achieves delightful performance, with an average relative error of 0.9 m in trace tracking and a maximum delay of nine samples (about 4.5 s) in deviation detection.
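The fingerprint-similarity idea can be sketched simply: compare the current WiFi scan against the reference trace's fingerprints, and flag a deviation when the similarity trend keeps decreasing. The RSSI floor value, window length, and threshold below are illustrative assumptions, not ppNav's actual parameters.

```python
def fingerprint_similarity(obs, ref):
    """Cosine similarity between two WiFi fingerprints (dicts AP -> RSSI
    in dBm). APs missing from one scan are filled with a weak floor value,
    then readings are shifted so the floor maps to zero."""
    floor = -100.0
    aps = set(obs) | set(ref)
    a = [obs.get(k, floor) - floor for k in aps]
    b = [ref.get(k, floor) - floor for k in aps]
    num = sum(x * y for x, y in zip(a, b))
    den = (sum(x * x for x in a) ** 0.5) * (sum(y * y for y in b) ** 0.5)
    return num / den if den else 0.0

def deviated(similarities, window=5, threshold=0.0):
    """Flag a deviation when the similarity trend over the last `window`
    steps is decreasing on average (negative mean difference)."""
    if len(similarities) < window + 1:
        return False
    recent = similarities[-(window + 1):]
    diffs = [b - a for a, b in zip(recent, recent[1:])]
    return sum(diffs) / len(diffs) < threshold
```

A steadily falling similarity sequence trips the detector; a rising one does not.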

Journal ArticleDOI
TL;DR: A quantitative metric based on people's personal spaces and comfortableness criteria is introduced in order to evaluate quantitatively the performance of the robot's task.
Abstract: We present a novel robot social-aware navigation framework to walk side-by-side with people in crowded urban areas in a safe and natural way. The new system includes the following key contributions: a new robot social-aware navigation model to accompany a person; an extension of the Social Force Model, the "Extended Social-Force Model", to consider the person's and robot's interactions; a human predictor to estimate the destination of the person the robot is walking with; and interactive learning of the parameters of the social-aware navigation model using multimodal human feedback. Finally, a quantitative metric based on people's personal spaces and comfortableness criteria is introduced in order to evaluate quantitatively the performance of the robot's task. The validation of the model is accomplished through an extensive set of simulations and real-life experiments. In addition, a volunteers' survey is used to measure the acceptability of our robot companion's behavior.

Journal ArticleDOI
TL;DR: A neural dynamics approach is proposed for complete area coverage navigation by multiple robots using a bioinspired neural network to model the workspace and guide a swarm of robots for the coverage mission.
Abstract: Multiple robots collaboratively achieve a common coverage goal efficiently, which can improve work capacity, share coverage tasks, and reduce completion time. In this paper, a neural dynamics (ND) approach is proposed for complete area coverage navigation by multiple robots. A bioinspired neural network (NN) is designed to model the workspace and guide a swarm of robots for the coverage mission. The dynamics of each neuron in the topologically organized NN is characterized by an ND equation. Each mobile robot regards other robots as moving obstacles. Each robot path is autonomously generated from the neural activity landscape of the NN and the previous robot position. The proposed model algorithm is computationally efficient. The feasibility is validated by simulation, comparison studies, and experiments.
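The neural-dynamics model can be sketched with the shunting equation on a grid of topologically organized neurons: uncovered cells receive positive external input, obstacles receive negative input, and lateral excitation spreads activity so that each robot can follow its local activity gradient. The gains, lateral weight, and Euler step below are illustrative choices, not the paper's exact parameters.

```python
import numpy as np

def step_activity(a, external, A=10.0, B=1.0, D=1.0, w=0.5, dt=0.05):
    """One Euler step of a shunting model on a grid of neurons:

        da/dt = -A*a + (B - a)*excitation - (D + a)*inhibition

    Excitation is the positive part of the external input (uncovered
    cells) plus weighted positive activity of the 4-neighbourhood;
    inhibition is the negative part of the external input (obstacles).
    Activity stays bounded in [-D, B]."""
    exc_in = np.maximum(external, 0.0)
    inh_in = np.maximum(-external, 0.0)
    pos = np.maximum(a, 0.0)
    lateral = np.zeros_like(pos)
    lateral[1:, :] += pos[:-1, :]   # activity arriving from the cell above
    lateral[:-1, :] += pos[1:, :]   # from below
    lateral[:, 1:] += pos[:, :-1]   # from the left
    lateral[:, :-1] += pos[:, 1:]   # from the right
    da = -A * a + (B - a) * (exc_in + w * lateral) - (D + a) * inh_in
    return np.clip(a + dt * da, -D, B)
```

Iterating this to equilibrium yields a landscape where uncovered cells are activity peaks, obstacles are valleys, and a robot moving uphill performs coverage while avoiding collisions.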

Journal ArticleDOI
01 Jan 2017
TL;DR: This article presents and evaluates a system, which allows a mobile robot to autonomously detect, model, and re-recognize objects in everyday environments without human interaction, in normal indoor scenes.
Abstract: In this article, we present and evaluate a system, which allows a mobile robot to autonomously detect, model, and re-recognize objects in everyday environments. While other systems have demonstrated one of these elements, to our knowledge, we present the first system, which is capable of doing all of these things, all without human interaction, in normal indoor scenes. Our system detects objects to learn by modeling the static part of the environment and extracting dynamic elements. It then creates and executes a view plan around a dynamic element to gather additional views for learning. Finally, these views are fused to create an object model. The performance of the system is evaluated on publicly available datasets as well as on data collected by the robot in both controlled and uncontrolled scenarios.

Patent
Mark Schnittman1
29 Mar 2017
TL;DR: In this paper, the authors provide an autonomous mobile robot that includes a drive configured to maneuver the robot over a ground surface within an operating environment; a camera mounted on the robot having a field of view including the floor adjacent the mobile robot in the drive direction of the robot; a frame buffer that stores image frames obtained by the camera while the robot is driving; and memory device configured to store a learned data set of a plurality of descriptors corresponding to pixel patches in image frames corresponding to portions of the operating environment and determined by mobile robot sensor events.
Abstract: The present teachings provide an autonomous mobile robot that includes a drive configured to maneuver the robot over a ground surface within an operating environment; a camera mounted on the robot having a field of view including the floor adjacent the mobile robot in the drive direction of the mobile robot; a frame buffer that stores image frames obtained by the camera while the mobile robot is driving; and a memory device configured to store a learned data set of a plurality of descriptors corresponding to pixel patches in image frames corresponding to portions of the operating environment and determined by mobile robot sensor events.


Posted Content
TL;DR: This work presents a method for learning to navigate, to a fixed goal and in a known environment, on a mobile robot that leverages an interactive world model built from a single traversal of the environment, a pre-trained visual feature encoder, and stochastic environmental augmentation.
Abstract: Recently, model-free reinforcement learning algorithms have been shown to solve challenging problems by learning from extensive interaction with the environment. A significant issue with transferring this success to the robotics domain is that interaction with the real world is costly, but training on limited experience is prone to overfitting. We present a method for learning to navigate, to a fixed goal and in a known environment, on a mobile robot. The robot leverages an interactive world model built from a single traversal of the environment, a pre-trained visual feature encoder, and stochastic environmental augmentation, to demonstrate successful zero-shot transfer under real-world environmental variations without fine-tuning.

Proceedings ArticleDOI
17 Mar 2017
TL;DR: A comparison of the four most recent ROS-based monocular SLAM-related methods: ORB-SLAM, REMODE, LSD-SLAM, and DPPTAM is presented, and their feasibility for a mobile robot application in indoor environments is analyzed.
Abstract: This paper presents a comparison of the four most recent ROS-based monocular SLAM-related methods: ORB-SLAM, REMODE, LSD-SLAM, and DPPTAM, and analyzes their feasibility for a mobile robot application in an indoor environment. We tested these methods using video data that was recorded from a conventional wide-angle full HD webcam with a rolling shutter. The camera was mounted on a human-operated prototype of an unmanned ground vehicle, which followed a closed-loop trajectory. Both feature-based methods (ORB-SLAM, REMODE) and direct SLAM-related algorithms (LSD-SLAM, DPPTAM) demonstrated reasonably good results in detection of volumetric objects, corners, obstacles, and other local features. However, we encountered difficulties in recovering the homogeneously colored walls typical of offices, since all of these methods created empty spaces in the reconstructed sparse 3D scene. This may cause collisions of an autonomously guided robot with unfeatured walls and thus limits the applicability of maps obtained by the considered monocular SLAM-related methods for indoor robot navigation.

Journal ArticleDOI
TL;DR: This work introduces a novel approach to the solution of the navigation problem by mapping an obstacle-cluttered environment to a trivial domain called the point world, where the navigation task is reduced to connecting the images of the initial and destination configurations by a straight line.
Abstract: This work introduces a novel approach to the solution of the navigation problem by mapping an obstacle-cluttered environment to a trivial domain called the point world , where the navigation task is reduced to connecting the images of the initial and destination configurations by a straight line. Due to this effect, the underlying transformation is termed the “ navigation transformation .” The properties of the navigation transformation are studied in this work as well as its capability to provide—through the proposed feedback controller designs—solutions to the motion- and path-planning problems. Notably, the proposed approach enables the construction of temporal stabilization controllers as detailed herein, which provide a time abstraction to the navigation problem. The proposed solutions are correct by construction and, given a diffeomorphism from the workspace to a sphere world, tuning free. A candidate construction for the navigation transformation on sphere worlds is proposed. The provided theoretical results are backed by analytical proofs. The efficiency, robustness, and applicability of the proposed solutions are supported by a series of experimental case studies.
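Schematically, the navigation transformation is a diffeomorphism Φ from the obstacle-cluttered workspace to the point world, where steering to the goal reduces to following a straight line; the workspace controller is obtained by pulling this trivial control back through the Jacobian of Φ. A hedged sketch of that structure (not the paper's exact controller design):

```latex
% Let \Phi : \mathcal{W} \to \mathcal{P} map the workspace to the point world.
% In the point world, steer along the straight line to the transformed goal:
\dot{p} = -k\,\bigl(\Phi(q) - \Phi(q_d)\bigr), \qquad p = \Phi(q), \; k > 0.
% Pulling the control back through the Jacobian J_\Phi(q) gives the
% workspace velocity command:
\dot{q} = -k\, J_\Phi(q)^{-1}\bigl(\Phi(q) - \Phi(q_d)\bigr).
```

Because Φ is a diffeomorphism, the straight-line convergence in the point world implies convergence of q to the destination configuration in the workspace.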

Proceedings ArticleDOI
19 Oct 2017
TL;DR: Compared to audio navigation, participants navigate significantly faster with a free-floating quadcopter and make fewer navigation errors using the quadcopter navigation methods.
Abstract: Although a large number of navigation support systems for visually impaired people have been proposed in the past, navigating through unknown environments is still a major challenge for visually impaired travelers. Existing systems provide navigation information through headphones, speakers, or tactile actuators. In this paper, we propose to use small lightweight quadcopters instead to provide navigation information for people with visual impairments. Using a leashed or free-floating quadcopter, the user is navigated by the distinct sound that the quadcopter emits and a haptic stimulus provided by the leash. In a user study with 14 visually impaired participants, we compared leashed quadcopter navigation, free-floating quadcopter navigation, and traditional audio navigation. The results show that compared to audio navigation, participants navigate significantly faster with a free-floating quadcopter and make fewer navigation errors using the quadcopter navigation methods.

Proceedings ArticleDOI
25 Jul 2017
TL;DR: The detailed analysis of the triggers and effects of these bugs shows that most of them can be revealed in low-fidelity simulation, and provides insights into interesting navigation scenarios to test as well as into how to address the test oracle problem.
Abstract: The ability to navigate in diverse and previously unknown environments is a critical service of autonomous robots. The validation of the navigation software typically involves test campaigns in the field, which are costly and potentially risky for the robot itself or its environment. An alternative approach is to perform simulation-based testing, by immersing the software in virtual worlds. A question is then whether the bugs revealed in real worlds can also be found in simulation. The paper reports on an exploratory study of bugs in an academic software for outdoor robots navigation. The detailed analysis of the triggers and effects of these bugs shows that most of them can be revealed in low-fidelity simulation. It also provides insights into interesting navigation scenarios to test as well as into how to address the test oracle problem.

Proceedings ArticleDOI
01 Sep 2017
TL;DR: In this paper, the authors present an integrated software and hardware system for autonomous mobile robot navigation in uneven and unstructured indoor environments, which incorporates capabilities of perception and navigation and employs a variable-step-size Rapidly-exploring Random Tree planner that adjusts the step size automatically, eliminating the need to tune step sizes for each environment.
Abstract: Robots are increasingly operating in indoor environments designed for and shared with people. However, robots working safely and autonomously in uneven and unstructured environments still face great challenges. Many modern indoor environments are designed with wheelchair accessibility in mind. This presents an opportunity for wheeled robots to navigate through sloped areas while avoiding staircases. In this paper, we present an integrated software and hardware system for autonomous mobile robot navigation in uneven and unstructured indoor environments. This modular and reusable software framework incorporates capabilities of perception and navigation. Our robot first builds a 3D OctoMap representation of the uneven environment with 3D mapping, using wheel odometry, 2D laser, and RGB-D data. Then we project multilayer 2D occupancy maps from the OctoMap to generate a traversable map based on layer differences. The safe traversable map serves as the input for efficient autonomous navigation. Furthermore, we employ a variable-step-size Rapidly-exploring Random Tree planner that adjusts the step size automatically, eliminating the need to tune step sizes for each environment. We conduct extensive experiments in simulation and the real world, demonstrating the efficacy and efficiency of our system. (Supplemental video: https://youtu.be/6XJWcsH1fk0).
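A variable extension step for RRT can be sketched by scaling the step length with the local clearance: extend boldly in open space and cautiously near obstacles. The adaptation rule below (step proportional to clearance, clamped to a range) is an illustrative assumption, not the paper's policy.

```python
import math, random

def rrt_plan(start, goal, is_free, clearance, bounds,
             max_iters=2000, goal_bias=0.1, goal_tol=0.5, seed=0):
    """RRT with a variable extension step: extend further where the local
    clearance (approximate distance to the nearest obstacle) is large and
    take shorter, cautious steps near obstacles."""
    rng = random.Random(seed)
    nodes = [start]
    parent = {0: None}
    for _ in range(max_iters):
        # Goal bias: occasionally sample the goal itself.
        q = goal if rng.random() < goal_bias else (
            rng.uniform(*bounds[0]), rng.uniform(*bounds[1]))
        i = min(range(len(nodes)), key=lambda k: math.dist(nodes[k], q))
        near = nodes[i]
        d = math.dist(near, q)
        if d < 1e-9:
            continue
        # Variable step: proportional to clearance, clamped to [0.1, 1.0].
        step = max(0.1, min(1.0, 0.5 * clearance(near)))
        t = min(1.0, step / d)
        new = (near[0] + t * (q[0] - near[0]),
               near[1] + t * (q[1] - near[1]))
        if not is_free(new):
            continue
        nodes.append(new)
        parent[len(nodes) - 1] = i
        if math.dist(new, goal) < goal_tol:
            path, k = [], len(nodes) - 1
            while k is not None:          # walk back to the root
                path.append(nodes[k])
                k = parent[k]
            return path[::-1]
    return None
```

The `is_free` and `clearance` callables stand in for whatever map representation is available (e.g. queries against a traversable occupancy map).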

Patent
29 Jun 2017
TL;DR: In this article, a signal tag device is used for detecting and storing information of trees and crops and positioning information, and assisting positioning; a robot (1) comprising a central processing device (10) for storing and analyzing data information of each part of the robot, and a positioning and navigating device (11) for positioning the robot and providing obstacle-avoiding navigation for the robot according to an electronic map.
Abstract: A pruning robot system, which comprises: a signal tag device (2) for detecting and storing information of trees and crops and positioning information, and assisting positioning; a robot (1) comprising a central processing device (10) for storing and analyzing data information of each part of the robot (1) and issuing action instructions to each part of the robot (1), and a positioning and navigating device (11) for positioning and navigating the robot (1), and for planning a path and providing obstacle-avoiding navigation for the robot (1) according to an electronic map; a cloud platform terminal (3), which is in connection and communication with the central processing device (10) of the robot (1) and is used for storing data of trees and crops as well as detection data of the robot (1), and for planning a path for the robot (1) through computing and experimenting according to the information data; a map building device (4) for building a three-dimensional electronic map corresponding to the plantation through field-detection by the robot (1). The pruning robot system realizes positioning in the plantation, robot (1) path planning, pruning information collection and automatic pruning.

Proceedings ArticleDOI
02 Apr 2017
TL;DR: In this article, a graph-based localization method using Pedestrian Dead Reckoning (PDR) and particle filter is proposed to reduce the instrumentation costs while maintaining a high accuracy.
Abstract: Methods that provide accurate navigation assistance to people with visual impairments often rely on instrumenting the environment with specialized hardware infrastructure. In particular, approaches that use sensor networks of Bluetooth Low Energy (BLE) beacons have been shown to achieve precise localization and accurate guidance while keeping structural modifications to the environment to a minimum. To install navigation infrastructure, however, a number of complex and time-critical activities must be performed. The BLE beacons need to be positioned correctly and samples of the Bluetooth signal need to be collected across the whole environment. These tasks are performed by trained personnel and entail costs proportional to the size of the environment that needs to be instrumented. To reduce the instrumentation costs while maintaining high accuracy, we improve over a traditional regression-based localization approach by introducing a novel, graph-based localization method using Pedestrian Dead Reckoning (PDR) and a particle filter. We then study how the number and density of beacons and Bluetooth samples impact the balance between localization accuracy and set-up cost of the navigation environment. Studies with users show the impact that the increased accuracy has on the usability of our navigation application for the visually impaired.
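The graph-based localization described above fuses PDR motion with beacon observations. One propagate/reweight/resample cycle of such a particle filter can be sketched as follows; the Gaussian motion noise, the range-likelihood model, and all parameter values are illustrative assumptions, not the paper's exact formulation:

```python
import math
import random

def pf_update(particles, step_len, heading, beacon_xy, meas_dist,
              motion_sigma=0.1, meas_sigma=1.0, rng=random):
    """One PDR-propagate / BLE-reweight / resample cycle (illustrative)."""
    # propagate each particle by a noisy PDR step
    moved = []
    for (x, y) in particles:
        a = heading + rng.gauss(0.0, motion_sigma)
        l = step_len + rng.gauss(0.0, motion_sigma)
        moved.append((x + l * math.cos(a), y + l * math.sin(a)))
    # weight by agreement with the beacon-derived range measurement
    w = [math.exp(-0.5 * ((math.hypot(x - beacon_xy[0], y - beacon_xy[1])
                           - meas_dist) / meas_sigma) ** 2)
         for (x, y) in moved]
    total = sum(w) or 1.0
    w = [wi / total for wi in w]
    # systematic resampling keeps the particle count constant
    n = len(moved)
    u = rng.uniform(0.0, 1.0 / n)
    out, c, i = [], w[0], 0
    for k in range(n):
        while u + k / n > c and i < n - 1:
            i += 1
            c += w[i]
        out.append(moved[i])
    return out
```

In a full system the range `meas_dist` would come from an RSSI-to-distance model per beacon, and the PDR step length and heading from accelerometer and compass data; here they are passed in directly to keep the cycle self-contained.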

Proceedings ArticleDOI
02 Jul 2017
TL;DR: The presented advantages and disadvantages of the two approaches show that it is important to select the proper algorithm for path planning suitable for a particular application.
Abstract: Mobile robots have been employed extensively in various environments that involve automation and remote monitoring. In order to perform their tasks successfully, navigation from one point to another must be done while avoiding obstacles present in the area. The aim of this study is to demonstrate the efficacy of two approaches in path planning, specifically, the probabilistic roadmap (PRM) and the genetic algorithm (GA). Two maps, one simple and one complex, were used to compare their performances. In PRM, a map was initially loaded, followed by identifying the number of nodes. Then, initial and final positions were defined. The algorithm then generated a network of possible connections of nodes between the initial and final positions. Finally, the algorithm searched this network of connected nodes to return a collision-free path. In GA, a map was also initially loaded, followed by selecting the GA parameters. These parameters were explored to determine which set of values best fits the problem. Then, initial and final positions were also defined. The associated cost included the distance, i.e., the sum of segment lengths, for each generated path. Penalties were introduced whenever a generated path intersected an obstacle. Results show that both approaches produced a collision-free path from the set initial position to the final position within the given environment or map. However, each method has observed advantages and disadvantages. GA produces smoother paths, which eases navigation of the mobile robots, but consumes more processing time, making it difficult to implement in real-time navigation. On the other hand, PRM produces a possible path in a much shorter amount of time, making it applicable to more reactive situations, but sacrifices smoothness of navigation. These advantages and disadvantages of the two approaches show that it is important to select the path planning algorithm suitable for a particular application.
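The GA cost function described in the abstract (sum of segment lengths plus a penalty for paths that intersect an obstacle) might look roughly like this, assuming circular obstacles and a fixed penalty value purely for illustration:

```python
import math

def seg_circle_hit(a, b, c, r):
    """Does segment a-b pass within radius r of center c?"""
    ax, ay = a
    bx, by = b
    cx, cy = c
    dx, dy = bx - ax, by - ay
    L2 = dx * dx + dy * dy
    # parameter of the closest point on the segment to c, clamped to [0, 1]
    t = 0.0 if L2 == 0 else max(0.0, min(1.0, ((cx - ax) * dx +
                                               (cy - ay) * dy) / L2))
    px, py = ax + t * dx, ay + t * dy
    return math.hypot(px - cx, py - cy) <= r

def ga_path_cost(path, obstacles, penalty=100.0):
    """GA fitness: total path length plus a penalty per obstacle hit."""
    cost = sum(math.dist(a, b) for a, b in zip(path, path[1:]))
    for (cx, cy, r) in obstacles:  # circular obstacles, (cx, cy, r)
        for a, b in zip(path, path[1:]):
            if seg_circle_hit(a, b, (cx, cy), r):
                cost += penalty
    return cost
```

A GA would minimize this cost over candidate waypoint lists: a short path through an obstacle is penalized heavily, so evolution favors slightly longer detours that stay collision-free.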

Journal ArticleDOI
TL;DR: A new method based on the combination of a modified APF algorithm with fuzzy logic (FAPF) is proposed to overcome the problems of the classical APF, especially local minima, and to enhance navigation in complex environments.
Abstract: The objective of this paper is to develop a path planning algorithm that is able to plan the trajectory of mobile robots from a start point to a target point in static and dynamic unknown environments. The classical artificial potential field (APF) method is not sufficient for that purpose since it suffers from the problem of local minima. To enhance the performance of the classical APF algorithm and to produce more efficient and effective path planning for mobile robots, a new method based on the combination of a modified APF algorithm with fuzzy logic (FAPF) is proposed. The proposed algorithm is designed to overcome the problems of the classical APF, especially local minima, and enhances navigation in complex environments. A fuzzy logic controller (FLC) is also used for motion control of the mobile robot. The membership functions of the FLC are optimized with the particle swarm optimization (PSO) algorithm for optimality. Simulation models for the proposed path planning and motion control methods are built in MATLAB. Simulation results show that the robot with FAPF navigates along a smoother path, reacts much faster in static and dynamic environments, and avoids obstacles efficiently. The work is compared with other implementations that used conventional PID controllers. The whole system is then implemented practically to validate the proposed algorithms and is tested in a complex, unknown environment.
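The classical APF that FAPF builds on combines an attractive force toward the goal with repulsive forces from nearby obstacles. A standard textbook sketch of the resultant force follows; the gains and the influence distance `d0` are illustrative, and the FAPF modifications from the paper are not reproduced here:

```python
import math

def apf_force(q, goal, obstacles, k_att=1.0, k_rep=1.0, d0=2.0):
    """Resultant classical-APF force at position q.
    Attractive: F = k_att * (goal - q).
    Repulsive (per obstacle within d0): k_rep * (1/d - 1/d0) / d^2,
    directed away from the obstacle."""
    fx = k_att * (goal[0] - q[0])
    fy = k_att * (goal[1] - q[1])
    for (ox, oy) in obstacles:
        d = math.hypot(q[0] - ox, q[1] - oy)
        if 0 < d <= d0:  # obstacle inside the influence region
            mag = k_rep * (1.0 / d - 1.0 / d0) / (d * d)
            fx += mag * (q[0] - ox) / d
            fy += mag * (q[1] - oy) / d
    return fx, fy
```

The local-minimum problem the paper addresses is visible directly in this model: wherever the attractive and repulsive terms cancel before the goal is reached, the resultant force is zero and a gradient-following robot stalls, which is what the fuzzy layer in FAPF is designed to escape.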