Open Access · Journal Article · DOI

DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs

TLDR
This work addresses the task of semantic image segmentation with Deep Learning, proposes atrous spatial pyramid pooling (ASPP) to robustly segment objects at multiple scales, and improves the localization of object boundaries by combining methods from DCNNs and probabilistic graphical models.
Abstract
In this work we address the task of semantic image segmentation with Deep Learning and make three main contributions that are experimentally shown to have substantial practical merit. First, we highlight convolution with upsampled filters, or ‘atrous convolution’, as a powerful tool in dense prediction tasks. Atrous convolution allows us to explicitly control the resolution at which feature responses are computed within Deep Convolutional Neural Networks. It also allows us to effectively enlarge the field of view of filters to incorporate larger context without increasing the number of parameters or the amount of computation. Second, we propose atrous spatial pyramid pooling (ASPP) to robustly segment objects at multiple scales. ASPP probes an incoming convolutional feature layer with filters at multiple sampling rates and effective fields of view, thus capturing objects as well as image context at multiple scales. Third, we improve the localization of object boundaries by combining methods from DCNNs and probabilistic graphical models. The commonly deployed combination of max-pooling and downsampling in DCNNs achieves invariance but takes a toll on localization accuracy. We overcome this by combining the responses at the final DCNN layer with a fully connected Conditional Random Field (CRF), which is shown both qualitatively and quantitatively to improve localization performance. Our proposed “DeepLab” system sets the new state of the art on the PASCAL VOC-2012 semantic image segmentation task, reaching 79.7% mIOU on the test set, and advances the results on three other datasets: PASCAL-Context, PASCAL-Person-Part, and Cityscapes. All of our code is made publicly available online.
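For illustration, below is a minimal PyTorch sketch of atrous convolution and ASPP as described above. The dilation rates follow the ASPP-L setting reported in the paper, but the module layout (a single 3x3 branch per rate, fused by summation) and the channel sizes are simplifications, not the authors' exact implementation.

import torch
import torch.nn as nn

class ASPP(nn.Module):
    # Parallel 3x3 atrous (dilated) convolutions probe the same feature map at
    # several sampling rates, enlarging the effective field of view without
    # adding parameters per branch; branch responses are fused by summation.
    def __init__(self, in_ch, out_ch, rates=(6, 12, 18, 24)):  # ASPP-L rates
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=r, dilation=r)
            for r in rates
        ])

    def forward(self, x):
        return torch.stack([branch(x) for branch in self.branches]).sum(dim=0)

# Illustrative usage: ResNet-101 features (2048 channels) to 21 PASCAL VOC class scores.
scores = ASPP(2048, 21)(torch.randn(1, 2048, 33, 33))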


Citations
Proceedings Article · DOI

Pyramid Scene Parsing Network

TL;DR: This paper exploits global context information via different-region-based context aggregation with a pyramid pooling module, and the proposed pyramid scene parsing network (PSPNet) produces good-quality results on the scene parsing task.
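A hedged PyTorch sketch of the pyramid pooling idea: the bin sizes (1, 2, 3, 6) match those reported for PSPNet, while the channel reduction and fusion details are simplifying assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class PyramidPooling(nn.Module):
    # Average-pool the feature map to several grid sizes, reduce each pooled map
    # with a 1x1 convolution, upsample back to the input resolution, and
    # concatenate with the input so global and sub-region context is mixed in.
    def __init__(self, in_ch, bins=(1, 2, 3, 6)):
        super().__init__()
        reduced = in_ch // len(bins)
        self.stages = nn.ModuleList([
            nn.Sequential(nn.AdaptiveAvgPool2d(b),
                          nn.Conv2d(in_ch, reduced, kernel_size=1))
            for b in bins
        ])

    def forward(self, x):
        h, w = x.shape[-2:]
        feats = [x]
        for stage in self.stages:
            feats.append(F.interpolate(stage(x), size=(h, w),
                                       mode='bilinear', align_corners=False))
        return torch.cat(feats, dim=1)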
Book Chapter · DOI

Encoder-Decoder with Atrous Separable Convolution for Semantic Image Segmentation

TL;DR: This work extends DeepLabv3 by adding a simple yet effective decoder module to refine the segmentation results, especially along object boundaries, and applies depthwise separable convolution to both the Atrous Spatial Pyramid Pooling and decoder modules, resulting in a faster and stronger encoder-decoder network.
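A minimal sketch of the atrous separable convolution this TL;DR refers to: a depthwise dilated 3x3 convolution followed by a pointwise 1x1 convolution. The dilation rate and channel sizes here are illustrative, not the DeepLabv3+ configuration.

import torch.nn as nn

class AtrousSeparableConv(nn.Module):
    # Depthwise dilated 3x3 convolution (one filter per input channel) followed
    # by a pointwise 1x1 convolution; much cheaper than a standard dilated
    # convolution with the same field of view.
    def __init__(self, in_ch, out_ch, dilation=2):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=3, padding=dilation,
                                   dilation=dilation, groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))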
Posted Content

YOLOv4: Optimal Speed and Accuracy of Object Detection

TL;DR: This work uses new features: WRC, CSP, CmBN, SAT, Mish activation, Mosaic data augmentation, DropBlock regularization, and CIoU loss, and combines some of them to achieve state-of-the-art results: 43.5% AP on the MS COCO dataset at a real-time speed of ~65 FPS on a Tesla V100.
Proceedings Article · DOI

Dual Attention Network for Scene Segmentation

TL;DR: New state-of-the-art segmentation performance is achieved on three challenging scene segmentation datasets, i.e., Cityscapes, PASCAL Context, and COCO Stuff, without using coarse data.
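For context, a sketch of a position-attention block in the spirit of the dual attention design: self-attention over spatial locations, added back to the input through a learned residual scale. The channel-reduction factor is an assumption for illustration, and the companion channel-attention branch is omitted.

import torch
import torch.nn as nn

class PositionAttention(nn.Module):
    # Every spatial position aggregates features from all other positions,
    # weighted by feature similarity (a full HW x HW attention map).
    def __init__(self, channels):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // 8, kernel_size=1)
        self.key = nn.Conv2d(channels, channels // 8, kernel_size=1)
        self.value = nn.Conv2d(channels, channels, kernel_size=1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learned residual scale

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).permute(0, 2, 1)   # B x HW x C'
        k = self.key(x).flatten(2)                      # B x C' x HW
        attn = torch.softmax(q @ k, dim=-1)             # B x HW x HW
        v = self.value(x).flatten(2)                    # B x C x HW
        out = (v @ attn.permute(0, 2, 1)).view(b, c, h, w)
        return self.gamma * out + x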
Proceedings Article · DOI

Deformable Convolutional Networks

TL;DR: Deformable convolutional networks augment the spatial sampling locations in their modules with additional offsets and learn the offsets from the target tasks without extra supervision; the deformable modules can readily replace their plain counterparts in existing CNNs and are easily trained end-to-end with standard backpropagation.
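A hedged usage sketch, assuming a torchvision version that provides torchvision.ops.DeformConv2d: a small convolution predicts per-location 2D offsets for each kernel tap, and the deformable convolution samples its input at the shifted locations. The channel sizes are arbitrary, not the authors' configuration.

import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d

offset_pred = nn.Conv2d(64, 2 * 3 * 3, kernel_size=3, padding=1)  # (dy, dx) per 3x3 tap
deform = DeformConv2d(64, 128, kernel_size=3, padding=1)

x = torch.randn(1, 64, 32, 32)
offsets = offset_pred(x)   # learned from the task loss, no extra supervision
y = deform(x, offsets)     # output: 1 x 128 x 32 x 32, sampled at shifted locations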
References
Posted Content

Laplacian Reconstruction and Refinement for Semantic Segmentation.

TL;DR: This work describes a multi-resolution reconstruction architecture, akin to a Laplacian pyramid, that uses skip connections from higher-resolution feature maps to successively refine segment boundaries reconstructed from lower-resolution maps.
Posted Content

Weakly Supervised Semantic Segmentation with Convolutional Networks.

TL;DR: A Convolutional Neural Network-based model is proposed that is constrained during training to put more weight on pixels that are important for classifying the image, and that beats the state-of-the-art results on the weakly supervised object segmentation task by a large margin.
Posted Content

Zoom Better to See Clearer: Human Part Segmentation with Auto Zoom Net.

TL;DR: The "Auto-Zoom Net" (AZN) for human part parsing is proposed, which is a unified fully convolutional neural network structure that parses each human instance into detailed parts and predicts the locations and scales of human instances and their corresponding parts.
Posted Content

Combining the Best of Convolutional Layers and Recurrent Layers: A Hybrid Network for Semantic Segmentation

TL;DR: This work advocates the use of spatially recurrent layers (i.e., ReNet layers), which directly capture global context and lead to improved feature representations, and develops a novel hybrid deep ReNet (H-ReNet) that achieves competitive performance on the Stanford Background dataset.
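A sketch of a spatially recurrent (ReNet-style) layer: a bidirectional GRU sweeps each row, then a second bidirectional GRU sweeps each column of the result, so every output location can aggregate global context. The use of GRUs and the hidden sizes are illustrative assumptions, not the H-ReNet configuration.

import torch
import torch.nn as nn

class ReNetLayer(nn.Module):
    def __init__(self, in_ch, hidden):
        super().__init__()
        self.row_rnn = nn.GRU(in_ch, hidden, batch_first=True, bidirectional=True)
        self.col_rnn = nn.GRU(2 * hidden, hidden, batch_first=True, bidirectional=True)

    def forward(self, x):                                      # x: B x C x H x W
        b, c, h, w = x.shape
        rows = x.permute(0, 2, 3, 1).reshape(b * h, w, c)      # one sequence per row
        rows, _ = self.row_rnn(rows)
        rows = rows.reshape(b, h, w, -1)
        cols = rows.permute(0, 2, 1, 3).reshape(b * w, h, -1)  # one sequence per column
        cols, _ = self.col_rnn(cols)
        return cols.reshape(b, w, h, -1).permute(0, 3, 2, 1)   # B x 2*hidden x H x W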
Posted Content

Higher Order Potentials in End-to-End Trainable Conditional Random Fields

TL;DR: Two types of higher-order potentials can be included in a Conditional Random Field model embedded within a deep network while still allowing inference with the efficient and differentiable mean-field algorithm, making it possible to implement the CRF model as a stack of layers in a deep network.
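To make the "CRF as a stack of layers" idea concrete, here is a heavily simplified, differentiable mean-field update for a fully connected CRF with Potts compatibility, related to the CRF refinement described in the abstract above. The pairwise term is approximated by a fixed Gaussian blur; the bilateral, image-dependent kernels and the higher-order potentials of the cited work are omitted, so this is a sketch of the mechanism rather than the authors' model.

import torch
import torch.nn as nn
import torch.nn.functional as F

class MeanFieldStep(nn.Module):
    # One mean-field update as a layer: spatially smooth the current label
    # marginals, apply a Potts compatibility transform, and combine the result
    # with the unary scores produced by the segmentation DCNN.
    def __init__(self, n_classes, compat=1.0, kernel_size=7, sigma=2.0):
        super().__init__()
        coords = torch.arange(kernel_size, dtype=torch.float32) - kernel_size // 2
        g = torch.exp(-coords ** 2 / (2 * sigma ** 2))
        kernel = g[:, None] * g[None, :]
        kernel = kernel / kernel.sum()
        # One identical Gaussian filter per class channel (grouped convolution).
        self.register_buffer('kernel',
                             kernel.view(1, 1, kernel_size, kernel_size)
                                   .repeat(n_classes, 1, 1, 1))
        self.compat = compat
        self.n_classes = n_classes
        self.pad = kernel_size // 2

    def forward(self, unary_logits, q=None):
        if q is None:                      # initialise marginals from the unaries
            q = torch.softmax(unary_logits, dim=1)
        msg = F.conv2d(q, self.kernel, padding=self.pad, groups=self.n_classes)
        # Potts compatibility: penalise each label by the smoothed mass of the other labels.
        pairwise = msg.sum(dim=1, keepdim=True) - msg
        return torch.softmax(unary_logits - self.compat * pairwise, dim=1)

Several such steps can be stacked and trained jointly with the segmentation network, which is the general recipe the cited work builds on.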