Open Access Journal Article (DOI)

DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs

TL;DR
This work addresses the task of semantic image segmentation with deep learning, proposes atrous spatial pyramid pooling (ASPP) to robustly segment objects at multiple scales, and improves the localization of object boundaries by combining methods from DCNNs and probabilistic graphical models.
Abstract
In this work we address the task of semantic image segmentation with Deep Learning and make three main contributions that are experimentally shown to have substantial practical merit. First, we highlight convolution with upsampled filters, or ‘atrous convolution’, as a powerful tool in dense prediction tasks. Atrous convolution allows us to explicitly control the resolution at which feature responses are computed within Deep Convolutional Neural Networks. It also allows us to effectively enlarge the field of view of filters to incorporate larger context without increasing the number of parameters or the amount of computation. Second, we propose atrous spatial pyramid pooling (ASPP) to robustly segment objects at multiple scales. ASPP probes an incoming convolutional feature layer with filters at multiple sampling rates and effective fields of view, thus capturing objects as well as image context at multiple scales. Third, we improve the localization of object boundaries by combining methods from DCNNs and probabilistic graphical models. The commonly deployed combination of max-pooling and downsampling in DCNNs achieves invariance but takes a toll on localization accuracy. We overcome this by combining the responses at the final DCNN layer with a fully connected Conditional Random Field (CRF), which is shown both qualitatively and quantitatively to improve localization performance. Our proposed “DeepLab” system sets the new state of the art on the PASCAL VOC-2012 semantic image segmentation task, reaching 79.7 percent mIOU on the test set, and advances the results on three other datasets: PASCAL-Context, PASCAL-Person-Part, and Cityscapes. All of our code is made publicly available online.
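As a rough sketch of the first two contributions, the PyTorch snippet below builds an ASPP-style head out of parallel atrous (dilated) 3x3 convolutions; the sampling rates (6, 12, 18, 24) follow the ASPP-L setting described in the paper, while the 256-channel branches and the summation fusion are illustrative assumptions rather than the authors' released implementation.

```python
import torch
import torch.nn as nn

class ASPPHead(nn.Module):
    """Parallel 3x3 atrous branches at several sampling rates; branch outputs are summed."""
    def __init__(self, in_channels: int, num_classes: int, rates=(6, 12, 18, 24)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                # dilation=r enlarges the field of view; padding=r keeps the spatial size.
                nn.Conv2d(in_channels, 256, kernel_size=3, padding=r, dilation=r),
                nn.ReLU(inplace=True),
                nn.Conv2d(256, num_classes, kernel_size=1),
            )
            for r in rates
        ])

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Each branch sees the same features with a different effective field of view.
        return sum(branch(x) for branch in self.branches)

features = torch.randn(1, 2048, 33, 33)            # e.g. backbone output at reduced resolution
logits = ASPPHead(2048, num_classes=21)(features)
print(logits.shape)                                # torch.Size([1, 21, 33, 33])
```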


Citations
Proceedings Article (DOI)

Pyramid Scene Parsing Network

TL;DR: This paper exploits global context information via different-region-based context aggregation through a pyramid pooling module, which, together with the proposed pyramid scene parsing network (PSPNet), produces good-quality results on the scene parsing task.
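As a loose illustration of such a pyramid pooling module, the sketch below pools the feature map to several grid sizes, reduces channels, upsamples, and concatenates the result with the input; the bin sizes (1, 2, 3, 6) match the paper's description, while the 512-channel reduction is an assumption made for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PyramidPooling(nn.Module):
    def __init__(self, in_channels: int, bins=(1, 2, 3, 6)):
        super().__init__()
        self.stages = nn.ModuleList([
            nn.Sequential(
                nn.AdaptiveAvgPool2d(b),                     # pool features to a b x b grid
                nn.Conv2d(in_channels, 512, kernel_size=1),  # reduce the channel dimension
                nn.ReLU(inplace=True),
            )
            for b in bins
        ])

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h, w = x.shape[-2:]
        pooled = [
            F.interpolate(stage(x), size=(h, w), mode="bilinear", align_corners=False)
            for stage in self.stages
        ]
        # Concatenate original features with the upsampled multi-scale context.
        return torch.cat([x, *pooled], dim=1)

out = PyramidPooling(2048)(torch.randn(1, 2048, 60, 60))
print(out.shape)  # torch.Size([1, 4096, 60, 60])
```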
Book Chapter (DOI)

Encoder-Decoder with Atrous Separable Convolution for Semantic Image Segmentation

TL;DR: This work extends DeepLabv3 by adding a simple yet effective decoder module to refine the segmentation results especially along object boundaries and applies the depthwise separable convolution to both Atrous Spatial Pyramid Pooling and decoder modules, resulting in a faster and stronger encoder-decoder network.
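The building block referred to here, an atrous depthwise separable convolution, can be sketched as follows; the channel sizes and the omission of batch normalization are simplifications, not DeepLabv3+'s exact layer.

```python
import torch
import torch.nn as nn

class AtrousSeparableConv(nn.Module):
    def __init__(self, in_channels: int, out_channels: int, dilation: int):
        super().__init__()
        # Depthwise: one 3x3 atrous filter per input channel (groups=in_channels).
        self.depthwise = nn.Conv2d(in_channels, in_channels, kernel_size=3,
                                   padding=dilation, dilation=dilation,
                                   groups=in_channels, bias=False)
        # Pointwise: 1x1 convolution mixes information across channels.
        self.pointwise = nn.Conv2d(in_channels, out_channels, kernel_size=1, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.pointwise(self.depthwise(x))

y = AtrousSeparableConv(256, 256, dilation=12)(torch.randn(1, 256, 65, 65))
print(y.shape)  # torch.Size([1, 256, 65, 65])
```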
Posted Content

YOLOv4: Optimal Speed and Accuracy of Object Detection

TL;DR: This work uses new features: WRC, CSP, CmBN, SAT, Mish activation, Mosaic data augmentation, DropBlock regularization, and CIoU loss, and combines some of them to achieve state-of-the-art results: 43.5% AP on the MS COCO dataset at a real-time speed of ~65 FPS on a Tesla V100.
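Of the components listed, the Mish activation is simple enough to show directly: mish(x) = x · tanh(softplus(x)). This is a generic sketch, not YOLOv4's own code.

```python
import torch
import torch.nn.functional as F

def mish(x: torch.Tensor) -> torch.Tensor:
    # Smooth, non-monotonic activation: x * tanh(softplus(x)).
    return x * torch.tanh(F.softplus(x))

print(mish(torch.tensor([-2.0, 0.0, 2.0])))
```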
Proceedings Article (DOI)

Dual Attention Network for Scene Segmentation

TL;DR: New state-of-the-art segmentation performance is achieved on three challenging scene segmentation datasets, i.e., Cityscapes, PASCAL Context, and COCO Stuff, without using coarse data.
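A position-attention block in the spirit of the dual attention idea can be sketched as below, with every spatial location attending to every other; the channel reduction by 8 and the learnable residual scale gamma follow common re-implementations and are assumptions here, not the paper's exact module.

```python
import torch
import torch.nn as nn

class PositionAttention(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // 8, kernel_size=1)
        self.key = nn.Conv2d(channels, channels // 8, kernel_size=1)
        self.value = nn.Conv2d(channels, channels, kernel_size=1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learnable residual weight

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)        # (b, hw, c/8)
        k = self.key(x).flatten(2)                          # (b, c/8, hw)
        v = self.value(x).flatten(2)                        # (b, c, hw)
        attn = torch.softmax(q @ k, dim=-1)                 # (b, hw, hw) pairwise affinities
        out = (v @ attn.transpose(1, 2)).view(b, c, h, w)   # aggregate values per position
        return self.gamma * out + x                         # residual connection

y = PositionAttention(64)(torch.randn(1, 64, 32, 32))
print(y.shape)  # torch.Size([1, 64, 32, 32])
```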
Proceedings Article (DOI)

Deformable Convolutional Networks

TL;DR: Deformable convolutional networks, as discussed by the authors, augment the spatial sampling locations in their modules with additional offsets learned from the target task without extra supervision; the resulting modules can readily replace their plain counterparts in existing CNNs and are easily trained end-to-end by standard backpropagation.
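A hedged sketch of the idea using torchvision's deformable convolution op: a small regular convolution predicts 2D offsets (two values per kernel tap), which shift the sampling locations of a 3x3 deformable convolution; initializing the offset predictor to zero starts from a plain convolution. The layer sizes are placeholders, not the paper's networks.

```python
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d

class DeformBlock(nn.Module):
    def __init__(self, in_channels: int, out_channels: int):
        super().__init__()
        # 3x3 kernel -> 9 taps -> 18 offset channels (dx, dy per tap).
        self.offset_pred = nn.Conv2d(in_channels, 18, kernel_size=3, padding=1)
        nn.init.zeros_(self.offset_pred.weight)
        nn.init.zeros_(self.offset_pred.bias)
        self.deform = DeformConv2d(in_channels, out_channels, kernel_size=3, padding=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Offsets are predicted from the input features themselves, without extra supervision.
        return self.deform(x, self.offset_pred(x))

y = DeformBlock(64, 64)(torch.randn(1, 64, 32, 32))
print(y.shape)  # torch.Size([1, 64, 32, 32])
```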
References
Proceedings Article

Decoupled deep neural network for semi-supervised semantic segmentation

TL;DR: In this article, a decoupled architecture for semi-supervised semantic segmentation using heterogeneous annotations is proposed: the labels associated with an image are identified by a classification network, and binary segmentation is subsequently performed for each identified label by a segmentation network.
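A very rough sketch of that decoupled inference scheme with placeholder networks: a classifier first decides which labels are present, then a binary segmentation network is run once per identified label. The simple one-hot label conditioning used here stands in for the paper's bridging layers and is purely illustrative.

```python
import torch
import torch.nn as nn

# Illustrative stand-ins, not the paper's models.
classifier = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(3, 20))
binary_segmenter = nn.Conv2d(3 + 20, 1, kernel_size=1)  # image channels + label conditioning

def segment(image: torch.Tensor, threshold: float = 0.5) -> dict:
    """Return a {label: binary mask} dict for every label the classifier detects."""
    scores = torch.sigmoid(classifier(image))                      # multi-label presence scores
    present = (scores[0] > threshold).nonzero().flatten().tolist()
    masks = {}
    for label in present:
        onehot = torch.zeros(1, 20, *image.shape[-2:])
        onehot[:, label] = 1.0                                     # condition on this label
        logits = binary_segmenter(torch.cat([image, onehot], dim=1))
        masks[label] = torch.sigmoid(logits)[0, 0] > 0.5           # binary mask for this label
    return masks

print(len(segment(torch.randn(1, 3, 64, 64))))
```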
Proceedings Article (DOI)

Semantic Object Parsing with Local-Global Long Short-Term Memory

TL;DR: LG-LSTM, as discussed by the authors, incorporates short-distance and long-distance spatial dependencies into feature learning over all pixel positions; local guidance from neighboring positions and global guidance from the whole image are imposed on each position to better exploit complex local and global contextual information.
Book Chapter (DOI)

Pixel-level encoding and depth layering for instance-level semantic labeling

TL;DR: In this paper, a fully convolutional network (FCN) is used to predict semantic labels, depth and an instance-based encoding using each pixel's direction towards its corresponding instance center.
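The output structure this encoding implies can be sketched as a shared backbone with three per-pixel heads: semantic class scores, a depth (layer) value, and a 2D unit vector pointing towards the instance center. The tiny backbone and head sizes below are placeholders, not the paper's FCN.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiHeadFCN(nn.Module):
    def __init__(self, num_classes: int = 19):
        super().__init__()
        self.backbone = nn.Sequential(nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(inplace=True))
        self.semantic = nn.Conv2d(64, num_classes, 1)   # per-pixel class logits
        self.depth = nn.Conv2d(64, 1, 1)                # per-pixel depth / depth-layer value
        self.direction = nn.Conv2d(64, 2, 1)            # per-pixel vector towards instance center

    def forward(self, x: torch.Tensor):
        f = self.backbone(x)
        direction = F.normalize(self.direction(f), dim=1)  # unit-length direction field
        return self.semantic(f), self.depth(f), direction

sem, depth, direc = MultiHeadFCN()(torch.randn(1, 3, 128, 128))
print(sem.shape, depth.shape, direc.shape)
```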
Posted Content

High-performance Semantic Segmentation Using Very Deep Fully Convolutional Networks

TL;DR: A method for high-performance semantic image segmentation based on very deep residual networks that achieves state-of-the-art performance and demonstrates that online bootstrapping is critically important for achieving good accuracy.
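Online bootstrapping here amounts to mining hard pixels during training; a minimal sketch is to compute unreduced per-pixel cross-entropy and average only the largest losses. The keep fraction below is an assumed hyperparameter, and the paper's exact selection rule (e.g. a loss threshold with a minimum pixel count) may differ.

```python
import torch
import torch.nn.functional as F

def bootstrapped_ce(logits: torch.Tensor, target: torch.Tensor, keep_fraction: float = 0.25):
    """logits: (B, C, H, W); target: (B, H, W) with class indices."""
    pixel_loss = F.cross_entropy(logits, target, reduction="none").flatten()
    k = max(1, int(keep_fraction * pixel_loss.numel()))
    hardest, _ = torch.topk(pixel_loss, k)   # keep only the k largest per-pixel losses
    return hardest.mean()

loss = bootstrapped_ce(torch.randn(2, 21, 64, 64), torch.randint(0, 21, (2, 64, 64)))
print(loss.item())
```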
Proceedings Article (DOI)

Optical Flow with Semantic Segmentation and Localized Layers

TL;DR: This work exploits recent advances in static semantic scene segmentation to segment the image into objects of different types and poses the flow estimation problem using a novel formulation of localized layers, which addresses limitations of traditional layered models for dealing with complex scene motion.