Open Access · Posted Content

PointAtrousGraph: Deep Hierarchical Encoder-Decoder with Point Atrous Convolution for Unorganized 3D Points

TLDR
Experimental results show that the PointAtrousGraph (PAG) outperforms previous state-of-the-art methods on various 3D semantic perception applications.
Abstract
Motivated by the success of encoding multi-scale contextual information for image analysis, we propose PointAtrousGraph (PAG), a deep permutation-invariant hierarchical encoder-decoder for efficiently exploiting multi-scale edge features in point clouds. Our PAG is built from several novel modules: Point Atrous Convolution (PAC), Edge-preserved Pooling (EP) and Edge-preserved Unpooling (EU). Similar to atrous convolution, PAC effectively enlarges the receptive fields of filters and thus densely learns multi-scale point features. Following the idea of non-overlapping max-pooling operations, EP preserves critical edge features during subsampling. Correspondingly, the EU modules gradually recover spatial information for edge features. In addition, we introduce chained skip subsampling/upsampling modules that directly propagate edge features to the final stage. In particular, our proposed auxiliary loss functions further improve performance. Experimental results show that PAG outperforms previous state-of-the-art methods on various 3D semantic perception applications.
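The paper itself is the authoritative description of these modules; purely as a rough illustration, the sketch below shows one plausible reading of the dilated-neighbourhood idea behind Point Atrous Convolution: neighbours are gathered with a sampling rate (dilation), so the receptive field grows without increasing the number of neighbours, and edge features are aggregated with a shared linear map and a max, PointNet/DGCNN-style. The function names, the edge-feature definition and the aggregation details are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def dilated_knn(points, k=16, dilation=2):
    """For each point, pick k neighbours with sampling rate `dilation`:
    from the (k * dilation) nearest points, keep every `dilation`-th one.
    A rough analogue of atrous convolution's enlarged receptive field,
    applied to an unordered point set."""
    # pairwise squared distances, shape (N, N)
    d2 = np.sum((points[:, None, :] - points[None, :, :]) ** 2, axis=-1)
    order = np.argsort(d2, axis=1)             # nearest first (self at index 0)
    candidates = order[:, 1:1 + k * dilation]  # skip the point itself
    return candidates[:, ::dilation]           # keep every `dilation`-th neighbour

def point_atrous_conv(features, points, weight, k=16, dilation=2):
    """Aggregate edge features over dilated neighbourhoods.
    Edge feature = concat(centre feature, neighbour - centre), followed by
    a shared linear map, ReLU, and a max over the neighbourhood."""
    idx = dilated_knn(points, k, dilation)                     # (N, k)
    centre = np.repeat(features[:, None, :], k, axis=1)        # (N, k, C)
    neigh = features[idx]                                      # (N, k, C)
    edge = np.concatenate([centre, neigh - centre], axis=-1)   # (N, k, 2C)
    out = np.maximum(edge @ weight, 0.0)                       # shared MLP + ReLU
    return out.max(axis=1)                                     # max over neighbours

# Toy usage: 128 points with 8-dim features, 16-dim output
pts = np.random.rand(128, 3)
feats = np.random.rand(128, 8)
W = np.random.randn(16, 16) * 0.1              # (2C, C_out) = (16, 16)
print(point_atrous_conv(feats, pts, W).shape)  # (128, 16)
```

Increasing `dilation` while keeping `k` fixed widens the neighbourhood that each point sees, which is the sense in which the operation learns multi-scale features densely.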


Citations
Journal Article

Deep Learning for 3D Point Clouds: A Survey

TL;DR: This paper presents a comprehensive review of recent progress in deep learning methods for point clouds, covering three major tasks: 3D shape classification, 3D object detection and tracking, and 3D point cloud segmentation.
Journal Article

Scanning Technologies to Building Information Modelling: A Review

TL;DR: In this article, the authors review the Scan-to-BIM (S2BIM) methodology and the point-cloud processing methods involved, such as sampling, registration and semantic segmentation.
Journal Article

DCG-Net: Dynamic Capsule Graph Convolutional Network for Point Clouds

TL;DR: DCG-Net (Dynamic Capsule Graph Convolutional Network) is introduced to analyze point clouds for classification and segmentation, applying the dynamic routing mechanism of capsule networks at each layer of the convolutional network.
Journal Article

PU-GACNet: Graph Attention Convolution Network for Point Cloud Upsampling

TL;DR: Zhang et al. design a Graph Attention Convolution (GAC) module as a feature extractor that assigns different attentional weights to dynamically combine spatial positions and feature attributes.
References
Proceedings Article

Deep Residual Learning for Image Recognition

TL;DR: The authors propose a residual learning framework that eases the training of networks substantially deeper than those used previously; their residual networks won 1st place on the ILSVRC 2015 classification task.
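As a brief, hedged aside, the core residual-learning idea can be sketched in a few lines: each block adds its input back to a learned transformation, so the block only has to model the residual and identity mappings stay easy to represent in very deep stacks. The weights and layer shapes below are illustrative placeholders, not the paper's exact block design.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, w1, w2):
    """y = relu(F(x) + x): the block only learns the residual F(x),
    and the identity shortcut keeps gradients flowing in deep stacks."""
    out = relu(x @ w1)      # first transformation
    out = out @ w2          # second transformation (no activation yet)
    return relu(out + x)    # identity shortcut, then nonlinearity

x = np.random.randn(4, 64)
w1 = np.random.randn(64, 64) * 0.05
w2 = np.random.randn(64, 64) * 0.05
print(residual_block(x, w1, w2).shape)  # (4, 64)
```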
Proceedings Article

ImageNet Classification with Deep Convolutional Neural Networks

TL;DR: The authors train a large, deep convolutional neural network consisting of five convolutional layers, some followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax, achieving state-of-the-art ImageNet classification performance.
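For readers who want to see that layout concretely, here is a minimal PyTorch-style sketch of an AlexNet-like stack matching the description above: five convolutional layers, some followed by max-pooling, then three fully-connected layers ending in a 1000-way output. The channel sizes follow the widely used single-GPU torchvision variant rather than the original two-GPU split, so treat it as an approximation.

```python
import torch.nn as nn

# AlexNet-like stack for 3x224x224 inputs: five conv layers (some followed
# by max-pooling) and three fully-connected layers ending in 1000 logits,
# to which a softmax is applied for classification.
alexnet_like = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=11, stride=4, padding=2), nn.ReLU(),
    nn.MaxPool2d(kernel_size=3, stride=2),
    nn.Conv2d(64, 192, kernel_size=5, padding=2), nn.ReLU(),
    nn.MaxPool2d(kernel_size=3, stride=2),
    nn.Conv2d(192, 384, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(384, 256, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(256, 256, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(kernel_size=3, stride=2),
    nn.Flatten(),
    nn.Linear(256 * 6 * 6, 4096), nn.ReLU(),
    nn.Linear(4096, 4096), nn.ReLU(),
    nn.Linear(4096, 1000),  # 1000-way output; softmax applied at inference
)
```

With a 3x224x224 input, the final pooling stage yields 256x6x6 feature maps, which is why the first fully-connected layer takes 256 * 6 * 6 inputs.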
Proceedings Article

Very Deep Convolutional Networks for Large-Scale Image Recognition

TL;DR: This work investigates the effect of convolutional network depth on accuracy in the large-scale image recognition setting, using an architecture with very small convolution filters, and shows that a significant improvement over prior-art configurations can be achieved by pushing the depth to 16-19 weight layers.
Book Chapter

U-Net: Convolutional Networks for Biomedical Image Segmentation

TL;DR: Ronneberger et al. propose a network and training strategy that relies on strong data augmentation to use the available annotated samples more efficiently; the network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopy stacks.