Proceedings ArticleDOI

Geometric and visual terrain classification for autonomous mobile navigation

TLDR
This paper computes geometric features from lidar point clouds and extracts pixel-wise semantic labels from a fully convolutional network trained on a dataset with a strong focus on urban navigation, combining the semantic and geometric features into a generalized terrain representation.
Abstract
In this paper, we present a multi-sensory terrain classification algorithm with a generalized terrain representation using semantic and geometric features. We compute geometric features from lidar point clouds and extract pixel-wise semantic labels from a fully convolutional network that is trained using a dataset with a strong focus on urban navigation. We use data augmentation to overcome the biases of the original dataset and apply transfer learning to adapt the model to new semantic labels in off-road environments. Finally, we fuse the visual and geometric features using a random forest to classify the terrain traversability into three classes: safe, risky, and obstacle. We implement the algorithm on our four-wheeled robot and test it in novel environments, including both urban and off-road scenes that are distinct from the training environments, under summer and winter conditions. We provide experimental results to show that our algorithm can perform accurate and fast prediction of terrain traversability in a mixture of environments with a small set of training data.
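The abstract describes the fusion step concretely enough to sketch. The snippet below is not the authors' implementation; it assumes hypothetical per-cell geometric features (slope, roughness, height variance) and a semantic-label histogram from the FCN, and feeds their concatenation to a scikit-learn random forest with the three traversability classes named above.

```python
# Minimal sketch (not the authors' code): fusing hypothetical geometric and
# semantic features with a random forest for 3-class traversability.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

CLASSES = ["safe", "risky", "obstacle"]
rng = np.random.default_rng(0)

# Placeholder training data: per-cell geometric features (slope, roughness,
# height variance) and a 4-way semantic histogram from the FCN labels.
# In practice, labels would come from annotated traversals, not random stand-ins.
n_cells = 500
geometric = rng.random((n_cells, 3))
semantic = rng.dirichlet(np.ones(4), size=n_cells)
X = np.hstack([geometric, semantic])
y = rng.integers(0, len(CLASSES), size=n_cells)   # stand-in labels

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X, y)

# Predict traversability for a new cell.
cell = np.hstack([[0.1, 0.05, 0.02], [0.8, 0.1, 0.05, 0.05]])
print(CLASSES[clf.predict(cell.reshape(1, -1))[0]])
```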

Citations
Journal ArticleDOI

A Sim-to-Real Pipeline for Deep Reinforcement Learning for Autonomous Robot Navigation in Cluttered Rough Terrain

TL;DR: This article presents a sim-to-real pipeline for a mobile robot learning to navigate real-world 3D rough-terrain environments, using a deep reinforcement learning architecture to learn a navigation policy from simulated training data together with a combination of strategies that directly address the reality gap in such environments.
Journal ArticleDOI

Learning-Based Methods of Perception and Navigation for Ground Vehicles in Unstructured Environments: A Review.

TL;DR: This article reviews recent contributions in the robotics literature that adopt learning-based methods for environment perception and interpretation, with the final aim of autonomous, context-aware navigation of ground vehicles in unstructured environments.
Journal ArticleDOI

Robustifying semantic cognition of traversability across wearable RGB-depth cameras.

TL;DR: A cluster of efficient deep architectures is proposed, built with spatial factorizations, hierarchical dilations, and pyramidal representations, and shown to provide robust parsing of semantically traversable areas under varying environmental conditions in diverse RGB-D observations and sensorial factors such as illumination, imaging quality, field of view, and detectable depth range.
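The architectural ingredients named here, spatial factorizations and dilations, can be illustrated generically. The PyTorch block below is only a sketch of a 3x3 convolution factorized into 3x1 and 1x3 convolutions with a dilation rate; it is not the cited paper's architecture, and the channel counts are arbitrary.

```python
# Illustrative sketch only: a factorized, dilated convolution block in the
# spirit of the summary above, not the cited paper's actual architecture.
import torch
import torch.nn as nn

class FactorizedDilatedBlock(nn.Module):
    def __init__(self, channels: int, dilation: int = 2):
        super().__init__()
        # A 3x3 convolution factorized into 3x1 followed by 1x3,
        # both dilated to enlarge the receptive field at the same cost.
        self.conv = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=(3, 1),
                      padding=(dilation, 0), dilation=(dilation, 1)),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=(1, 3),
                      padding=(0, dilation), dilation=(1, dilation)),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.conv(x) + x  # residual connection keeps the spatial size

x = torch.randn(1, 32, 64, 64)
print(FactorizedDilatedBlock(32)(x).shape)  # torch.Size([1, 32, 64, 64])
```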

Bayesian Generalized Kernel Inference for Terrain Traversability Mapping

TL;DR: A new approach for traversability mapping with sparse lidar scans collected by ground vehicles, which leverages probabilistic inference to build descriptive terrain maps and explores the capabilities of the approach over a variety of data and terrain.
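As a rough illustration of kernel-based traversability inference, the sketch below accumulates kernel-weighted pseudo-counts of traversable and non-traversable observations around a query cell under a Beta-Bernoulli model; the kernel and update rule are simplified stand-ins, not the cited paper's exact formulation.

```python
# Generic sketch of kernel-weighted traversability inference; the kernel and
# update rule are simplified stand-ins for the cited paper's formulation.
import numpy as np

def sparse_kernel(d, length_scale=1.0):
    """Compactly supported kernel: 1 at d = 0, 0 at and beyond the length scale."""
    r = np.clip(d / length_scale, 0.0, 1.0)
    return ((2 + np.cos(2 * np.pi * r)) / 3) * (1 - r) + np.sin(2 * np.pi * r) / (2 * np.pi)

def infer_cell(query_xy, obs_xy, obs_traversable, length_scale=1.0,
               alpha0=1.0, beta0=1.0):
    """Beta-Bernoulli update with kernel-weighted pseudo-counts."""
    d = np.linalg.norm(obs_xy - query_xy, axis=1)
    k = sparse_kernel(d, length_scale)
    alpha = alpha0 + np.sum(k * obs_traversable)
    beta = beta0 + np.sum(k * (1 - obs_traversable))
    mean = alpha / (alpha + beta)              # expected traversability
    evidence = alpha + beta - alpha0 - beta0   # how much data supports it
    return mean, evidence

obs_xy = np.array([[0.0, 0.0], [0.5, 0.2], [2.5, 2.5]])
obs_traversable = np.array([1.0, 1.0, 0.0])
print(infer_cell(np.array([0.3, 0.1]), obs_xy, obs_traversable))
```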
Journal ArticleDOI

Path Planning for UGVs Based on Traversability Hybrid A*

TL;DR: In this paper, the authors propose a path planning method based on the Hybrid A* algorithm and use estimated terrain traversability to find the path that optimizes both traversability and distance for the UGV.
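The idea of optimizing both traversability and distance can be illustrated with a plain grid A* whose edge cost blends step length with a traversability penalty. This is a hedged sketch only; the cited Hybrid A* variant plans over continuous vehicle states with kinematic constraints.

```python
# Hedged sketch: grid A* whose edge cost blends distance with a traversability
# penalty; a simplified illustration of the weighted-cost idea, not Hybrid A*.
import heapq
import numpy as np

def plan(traversability, start, goal, w=5.0):
    """traversability: HxW array in [0, 1] (1 = safe). Returns a cell path."""
    h, wdt = traversability.shape
    def heuristic(c):
        return abs(c[0] - goal[0]) + abs(c[1] - goal[1])
    open_set = [(heuristic(start), start)]
    came_from, g_cost, closed = {start: None}, {start: 0.0}, set()
    while open_set:
        _, cur = heapq.heappop(open_set)
        if cur in closed:
            continue
        closed.add(cur)
        if cur == goal:
            break
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if not (0 <= nxt[0] < h and 0 <= nxt[1] < wdt):
                continue
            # Unit step length plus a penalty for poorly traversable cells.
            step = 1.0 + w * (1.0 - traversability[nxt])
            ng = g_cost[cur] + step
            if ng < g_cost.get(nxt, float("inf")):
                g_cost[nxt] = ng
                came_from[nxt] = cur
                heapq.heappush(open_set, (ng + heuristic(nxt), nxt))
    path, node = [], goal
    while node is not None:
        path.append(node)
        node = came_from.get(node)
    return path[::-1]

grid = np.ones((5, 5)); grid[1:4, 2] = 0.1   # a risky strip to route around
print(plan(grid, (0, 0), (4, 4)))
```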
References
Proceedings Article

ImageNet Classification with Deep Convolutional Neural Networks

TL;DR: A deep convolutional neural network, consisting of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax, achieves state-of-the-art performance on large-scale image classification, as discussed by the authors.
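The architecture summarized here, five convolutional layers with interleaved max pooling and three fully connected layers ending in a 1000-way softmax, can be written down compactly. The PyTorch sketch below follows the widely used torchvision-style channel sizes and is illustrative rather than a faithful reproduction of the original two-GPU model.

```python
# Sketch of an AlexNet-style network matching the summary: five convolutional
# layers (some followed by max pooling) and three fully connected layers with
# a 1000-way softmax. Channel sizes follow the common torchvision variant.
import torch
import torch.nn as nn

features = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=11, stride=4, padding=2), nn.ReLU(inplace=True),
    nn.MaxPool2d(kernel_size=3, stride=2),
    nn.Conv2d(64, 192, kernel_size=5, padding=2), nn.ReLU(inplace=True),
    nn.MaxPool2d(kernel_size=3, stride=2),
    nn.Conv2d(192, 384, kernel_size=3, padding=1), nn.ReLU(inplace=True),
    nn.Conv2d(384, 256, kernel_size=3, padding=1), nn.ReLU(inplace=True),
    nn.Conv2d(256, 256, kernel_size=3, padding=1), nn.ReLU(inplace=True),
    nn.MaxPool2d(kernel_size=3, stride=2),
)
classifier = nn.Sequential(
    nn.Flatten(),
    nn.Linear(256 * 6 * 6, 4096), nn.ReLU(inplace=True),
    nn.Linear(4096, 4096), nn.ReLU(inplace=True),
    nn.Linear(4096, 1000),                  # logits for 1000 ImageNet classes
)
model = nn.Sequential(features, classifier)

x = torch.randn(1, 3, 224, 224)
probs = torch.softmax(model(x), dim=1)      # 1000-way class probabilities
print(probs.shape)                          # torch.Size([1, 1000])
```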
Proceedings Article

Very Deep Convolutional Networks for Large-Scale Image Recognition

TL;DR: This work investigates the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting using an architecture with very small convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers.
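The "very small convolution filters" are 3x3 kernels stacked in blocks; the sketch below shows one such VGG-style block (illustrative only, not the paper's full 16-19 layer configurations, which repeat blocks like this with growing channel counts).

```python
# Illustrative VGG-style block: stacked 3x3 convolutions followed by 2x2 max
# pooling. A hedged sketch, not the paper's exact model definition.
import torch
import torch.nn as nn

def vgg_block(in_ch: int, out_ch: int, n_convs: int) -> nn.Sequential:
    layers = []
    for i in range(n_convs):
        layers += [nn.Conv2d(in_ch if i == 0 else out_ch, out_ch,
                             kernel_size=3, padding=1),
                   nn.ReLU(inplace=True)]
    layers.append(nn.MaxPool2d(kernel_size=2, stride=2))
    return nn.Sequential(*layers)

# Two stacked 3x3 convolutions cover the same receptive field as one 5x5,
# with fewer parameters and an extra non-linearity.
block = vgg_block(3, 64, n_convs=2)
print(block(torch.randn(1, 3, 224, 224)).shape)   # torch.Size([1, 64, 112, 112])
```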
Journal Article

Dropout: a simple way to prevent neural networks from overfitting

TL;DR: It is shown that dropout improves the performance of neural networks on supervised learning tasks in vision, speech recognition, document classification and computational biology, obtaining state-of-the-art results on many benchmark data sets.
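Dropout as summarized here randomly zeroes hidden units during training and is disabled at test time; below is a minimal generic sketch of how it is typically inserted between fully connected layers (not the paper's experimental setup).

```python
# Minimal generic sketch: dropout layers between fully connected layers,
# active during training and disabled at evaluation time.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(784, 256), nn.ReLU(),
    nn.Dropout(p=0.5),          # randomly zero 50% of activations in training
    nn.Linear(256, 256), nn.ReLU(),
    nn.Dropout(p=0.5),
    nn.Linear(256, 10),
)

x = torch.randn(8, 784)
model.train()                    # dropout active: stochastic outputs
y_train = model(x)
model.eval()                     # dropout disabled: deterministic outputs
y_eval = model(x)
print(y_train.shape, y_eval.shape)
```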
Journal ArticleDOI

ImageNet Large Scale Visual Recognition Challenge

TL;DR: The ImageNet Large Scale Visual Recognition Challenge (ILSVRC) as mentioned in this paper is a benchmark in object category classification and detection on hundreds of object categories and millions of images, which has been run annually from 2010 to present, attracting participation from more than fifty institutions.