Proceedings ArticleDOI

AdapNet: Adaptive semantic segmentation in adverse environmental conditions

TLDR
This paper proposes a novel semantic segmentation architecture and the convoluted mixture of deep experts (CMoDE) fusion technique that enables a multi-stream deep neural network to learn features from complementary modalities and spectra, each of which is specialized in a subset of the input space.
Abstract
Robust scene understanding of outdoor environments using passive optical sensors is an onerous and essential task for autonomous navigation. The problem is heavily characterized by changing environmental conditions throughout the day and across seasons. Robots should be equipped with models that are impervious to these factors in order to be operable, and more importantly, to ensure safety in the real world. In this paper, we propose a novel semantic segmentation architecture and the convoluted mixture of deep experts (CMoDE) fusion technique that enables a multi-stream deep neural network to learn features from complementary modalities and spectra, each of which is specialized in a subset of the input space. Our model adaptively weighs class-specific features of expert networks based on the scene condition and further learns fused representations to yield robust segmentation. We present results from experiments on three publicly available datasets that contain diverse conditions including rain, summer, winter, dusk, fall, night and sunset, and show that our approach exceeds the state of the art. In addition, we evaluate the performance of autonomously traversing several kilometres of a forested environment using only the segmentation for perception.
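A minimal sketch of the adaptive expert-fusion idea described above, assuming PyTorch; the gating layout, channel sizes, and the AdaptiveFusion/expert-feature names are illustrative stand-ins rather than the paper's exact CMoDE formulation:

```python
import torch
import torch.nn as nn

class AdaptiveFusion(nn.Module):
    """Illustrative mixture-of-experts fusion: a small gating network predicts
    per-expert weights from the concatenated expert features, then the weighted
    features are combined and refined. Sketch only; not the exact CMoDE module."""

    def __init__(self, channels: int, num_experts: int = 2):
        super().__init__()
        self.num_experts = num_experts
        # Gating network: global context of all experts -> one weight per expert.
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(channels * num_experts, num_experts),
            nn.Softmax(dim=1),
        )
        # Fusion head that learns a joint representation of the weighted sum.
        self.fuse = nn.Conv2d(channels, channels, kernel_size=3, padding=1)

    def forward(self, expert_feats):
        # expert_feats: list of [B, C, H, W] tensors, one per expert stream.
        stacked = torch.stack(expert_feats, dim=1)           # [B, E, C, H, W]
        weights = self.gate(torch.cat(expert_feats, dim=1))  # [B, E]
        weights = weights.view(-1, self.num_experts, 1, 1, 1)
        fused = (weights * stacked).sum(dim=1)               # [B, C, H, W]
        return self.fuse(fused)

# Example: fuse hypothetical RGB and depth expert features of shape [B, 256, H, W].
rgb_feat = torch.randn(2, 256, 48, 64)
depth_feat = torch.randn(2, 256, 48, 64)
fusion = AdaptiveFusion(channels=256, num_experts=2)
out = fusion([rgb_feat, depth_feat])  # -> [2, 256, 48, 64]
```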


Citations
Journal ArticleDOI

Deep Multi-Modal Object Detection and Semantic Segmentation for Autonomous Driving: Datasets, Methods, and Challenges

TL;DR: In this article, the authors systematically summarize methodologies and discuss challenges for deep multi-modal object detection and semantic segmentation in autonomous driving and provide an overview of on-board sensors on test vehicles, open datasets, and background information for object detection.
Journal ArticleDOI

A survey of deep learning techniques for autonomous driving

TL;DR: In this article, the authors survey the current state-of-the-art on deep learning technologies used in autonomous driving, including convolutional and recurrent neural networks, as well as the deep reinforcement learning paradigm.
Journal ArticleDOI

Survey on semantic segmentation using deep learning techniques

TL;DR: This paper surveys semantic segmentation methods by categorizing them into ten classes according to the common concepts underlying their architectures and provides an overview of the publicly available datasets on which they have been assessed.
Proceedings ArticleDOI

Sparse and Dense Data with CNNs: Depth Completion and Semantic Segmentation

TL;DR: This proposal efficiently learns sparse features without the need for an additional validity mask and works with densities as low as 0.8% (8-layer lidar).
References
Proceedings ArticleDOI

Deep Residual Learning for Image Recognition

TL;DR: In this article, the authors proposed a residual learning framework to ease the training of networks that are substantially deeper than those used previously, which won the 1st place on the ILSVRC 2015 classification task.
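For reference, a minimal residual block in PyTorch illustrating the shortcut idea the TL;DR describes; the layer sizes are illustrative and downsampling projections are omitted:

```python
import torch
import torch.nn as nn

class BasicResidualBlock(nn.Module):
    """Two 3x3 convolutions with an identity shortcut: the block learns a
    residual F(x) and outputs F(x) + x, which eases optimization of very
    deep networks. Illustrative sketch only."""

    def __init__(self, channels: int):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        identity = x
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + identity)  # residual (shortcut) connection

x = torch.randn(1, 64, 56, 56)
print(BasicResidualBlock(64)(x).shape)  # torch.Size([1, 64, 56, 56])
```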
Proceedings Article

Very Deep Convolutional Networks for Large-Scale Image Recognition

TL;DR: This work investigates the effect of convolutional network depth on accuracy in the large-scale image recognition setting using an architecture with very small (3x3) convolution filters, showing that a significant improvement over prior-art configurations can be achieved by pushing the depth to 16-19 weight layers.
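A hedged sketch, assuming PyTorch, of the stacked-small-filter idea: several 3x3 convolutions per stage give a large receptive field through depth rather than filter size. Channel counts follow the familiar VGG pattern, but this is not the full 16/19-layer configuration:

```python
import torch.nn as nn

def vgg_stage(in_ch, out_ch, num_convs):
    # A stack of 3x3 convolutions followed by 2x2 max pooling; two stacked
    # 3x3 convs cover a 5x5 receptive field with fewer parameters than one 5x5.
    layers = []
    for i in range(num_convs):
        layers += [nn.Conv2d(in_ch if i == 0 else out_ch, out_ch, 3, padding=1),
                   nn.ReLU(inplace=True)]
    layers.append(nn.MaxPool2d(2))
    return nn.Sequential(*layers)

# First three stages of a VGG-style feature extractor (channel counts as in VGG-16).
features = nn.Sequential(
    vgg_stage(3, 64, 2),
    vgg_stage(64, 128, 2),
    vgg_stage(128, 256, 3),
)
```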
Book ChapterDOI

U-Net: Convolutional Networks for Biomedical Image Segmentation

TL;DR: Ronneberger et al. propose a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently; the network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopy stacks.
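A compact, illustrative encoder-decoder with one skip connection, in PyTorch, to show the structural idea behind U-Net; the original network has more levels and relies heavily on data augmentation:

```python
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # Two 3x3 convolutions with ReLU, as used at each U-Net level.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    """One-level encoder-decoder with a skip connection: features from the
    contracting path are concatenated into the expanding path so the decoder
    can recover fine spatial detail. Illustrative sketch only."""

    def __init__(self, in_ch=3, num_classes=2):
        super().__init__()
        self.enc = conv_block(in_ch, 64)
        self.down = nn.MaxPool2d(2)
        self.bottleneck = conv_block(64, 128)
        self.up = nn.ConvTranspose2d(128, 64, kernel_size=2, stride=2)
        self.dec = conv_block(128, 64)   # 64 skip channels + 64 upsampled channels
        self.head = nn.Conv2d(64, num_classes, 1)

    def forward(self, x):
        skip = self.enc(x)
        x = self.bottleneck(self.down(skip))
        x = self.up(x)
        x = self.dec(torch.cat([skip, x], dim=1))  # skip connection
        return self.head(x)

logits = TinyUNet()(torch.randn(1, 3, 64, 64))  # -> [1, 2, 64, 64]
```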
Journal ArticleDOI

ImageNet Large Scale Visual Recognition Challenge

TL;DR: The ImageNet Large Scale Visual Recognition Challenge (ILSVRC) as mentioned in this paper is a benchmark in object category classification and detection on hundreds of object categories and millions of images, which has been run annually from 2010 to present, attracting participation from more than fifty institutions.