Amulet: Aggregating Multi-level Convolutional Features for Salient Object Detection
Pingping Zhang, Dong Wang, Huchuan Lu, Hongyu Wang, Xiang Ruan, and 4 more
pp. 202-211
TLDR
Amulet is presented: a generic framework that aggregates multi-level convolutional features for salient object detection, provides accurate salient object labeling, and performs favorably against state-of-the-art approaches on nearly all compared evaluation metrics.

Abstract
Fully convolutional neural networks (FCNs) have shown outstanding performance in many dense labeling problems. One key pillar of these successes is mining relevant information from features in convolutional layers. However, how to better aggregate multi-level convolutional feature maps for salient object detection is underexplored. In this work, we present Amulet, a generic aggregating multi-level convolutional feature framework for salient object detection. Our framework first integrates multi-level feature maps into multiple resolutions, which simultaneously incorporate coarse semantics and fine details. Then it adaptively learns to combine these feature maps at each resolution and predict saliency maps with the combined features. Finally, the predicted results are efficiently fused to generate the final saliency map. In addition, to achieve accurate boundary inference and semantic enhancement, edge-aware feature maps in low-level layers and the predicted results of low-resolution features are recursively embedded into the learning framework. By aggregating multi-level convolutional features in this efficient and flexible manner, the proposed saliency model provides accurate salient object labeling. Comprehensive experiments demonstrate that our method performs favorably against state-of-the-art approaches on nearly all compared evaluation metrics.
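The pipeline the abstract describes (resize multi-level features to a common resolution, combine them, predict a saliency map) can be sketched in a few lines. This is a minimal NumPy illustration under stated assumptions, not the authors' implementation: `resize_nearest`, `aggregate_levels`, and `predict_saliency` are hypothetical helper names, and a fixed channel average plus sigmoid stands in for the learned combination and prediction layers.

```python
import numpy as np

def resize_nearest(fmap, out_h, out_w):
    """Nearest-neighbor resize of a (C, H, W) feature map."""
    c, h, w = fmap.shape
    rows = np.arange(out_h) * h // out_h
    cols = np.arange(out_w) * w // out_w
    return fmap[:, rows][:, :, cols]

def aggregate_levels(features, target_hw):
    """Bring every level to the target resolution, then stack channels."""
    th, tw = target_hw
    return np.concatenate([resize_nearest(f, th, tw) for f in features], axis=0)

def predict_saliency(features, target_hw):
    """Stand-in for the learned prediction head: average the aggregated
    channels and squash to [0, 1] with a sigmoid."""
    agg = aggregate_levels(features, target_hw)
    logits = agg.mean(axis=0)
    return 1.0 / (1.0 + np.exp(-logits))

# Features from three backbone stages at decreasing resolution.
f1 = np.random.rand(16, 64, 64)
f2 = np.random.rand(32, 32, 32)
f3 = np.random.rand(64, 16, 16)
saliency = predict_saliency([f1, f2, f3], (64, 64))
```

In the actual model, each resolution has its own learned combination weights and the per-resolution predictions are themselves fused; the sketch only shows the shape bookkeeping of multi-level aggregation.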
Citations
Journal Article
Focal Boundary Guided Salient Object Detection
TL;DR: This work proposes a novel deep model, the Focal Boundary Guided (Focal-BG) network, designed to jointly learn to segment salient object masks and detect salient object boundaries, and demonstrates that jointly modeling the salient object boundary and mask helps to better capture shape details, especially in the vicinity of object boundaries.
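The coupling between a mask and its boundary that this line of work exploits can be made concrete with a toy example. The sketch below derives a 1-pixel boundary map from a binary mask via a simple morphological gradient; `boundary_from_mask` is a hypothetical helper standing in for the network's learned boundary branch, not code from the paper.

```python
import numpy as np

def boundary_from_mask(mask):
    """Boundary of a binary mask: a pixel is on the boundary if it is
    foreground and at least one 4-connected neighbour is background
    (i.e. the mask minus its erosion)."""
    padded = np.pad(mask, 1)
    eroded = (padded[1:-1, 1:-1]      # centre
              & padded[:-2, 1:-1]     # up
              & padded[2:, 1:-1]      # down
              & padded[1:-1, :-2]     # left
              & padded[1:-1, 2:])     # right
    return mask & ~eroded

# A 4x4 square of foreground inside a 6x6 image.
m = np.zeros((6, 6), dtype=bool)
m[1:5, 1:5] = True
b = boundary_from_mask(m)  # the 12-pixel ring of the square
```

Learning the boundary explicitly, rather than deriving it from the mask afterwards as here, gives the network a direct loss signal at exactly the pixels where mask predictions are least reliable.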
Journal Article
Bifurcated Backbone Strategy for RGB-D Salient Object Detection
TL;DR: Wu et al. propose a Bifurcated Backbone Strategy Network (BBS-Net) for RGB-D salient object detection, which regroups the multi-level features into teacher and student features using a bifurcated backbone strategy (BBS).
Journal Article
Edge-Aware Multiscale Feature Integration Network for Salient Object Detection in Optical Remote Sensing Images
TL;DR: An edge-aware multiscale feature integration network (EMFI-Net) is proposed for salient object detection in optical remote sensing images (RSIs); it performs multiscale feature integration under the explicit and implicit guidance of salient edge cues, introducing edge information to detect salient objects precisely.
Journal Article
Deep Embedding Features for Salient Object Detection
Yunzhi Zhuge, Yu Zeng, Huchuan Lu, and 2 more
TL;DR: This paper proposes a novel approach that transforms prior information into an embedding space to select attentive features and filter out outliers for salient object detection, together with a Guided Filter Refinement Network (GFRN) that jointly optimizes the predicted results and the learnable guidance maps.
Proceedings Article
Feature Reintegration over Differential Treatment: A Top-down and Adaptive Fusion Network for RGB-D Salient Object Detection
TL;DR: This paper proposes a novel top-down multi-level fusion structure for RGB-D salient object detection, in which different fusion strategies are used to effectively exploit the low-level and high-level features.
References
Proceedings Article
Deep Residual Learning for Image Recognition
TL;DR: In this article, the authors proposed a residual learning framework to ease the training of networks that are substantially deeper than those used previously, which won the 1st place on the ILSVRC 2015 classification task.
Proceedings Article
Very Deep Convolutional Networks for Large-Scale Image Recognition
Karen Simonyan, Andrew Zisserman
TL;DR: This work investigates the effect of convolutional network depth on accuracy in the large-scale image recognition setting, using an architecture with very small convolution filters; it shows that a significant improvement over prior-art configurations can be achieved by pushing the depth to 16-19 weight layers.
Book Chapter
U-Net: Convolutional Networks for Biomedical Image Segmentation
TL;DR: Ronneberger et al. propose a network and training strategy that relies on strong use of data augmentation to exploit the available annotated samples more efficiently; the network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopy stacks.
Proceedings Article
Fully convolutional networks for semantic segmentation
TL;DR: The key insight is to build “fully convolutional” networks that take input of arbitrary size and produce correspondingly-sized output with efficient inference and learning.
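The "arbitrary input size" property comes from using only translation-covariant operations (convolutions, pooling) and no fixed-size fully connected layers, so output spatial size tracks input spatial size. A toy NumPy sketch of a same-padded convolution illustrates this; `conv2d_same` is a hypothetical helper for illustration, not code from the paper.

```python
import numpy as np

def conv2d_same(img, kernel):
    """'Same'-padded 2D cross-correlation: the output has the input's
    spatial size, so this layer places no constraint on input resolution."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)))
    h, w = img.shape
    out = np.empty((h, w), dtype=float)
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

k = np.full((3, 3), 1.0 / 9.0)          # 3x3 averaging filter
small = conv2d_same(np.ones((5, 7)), k)  # output is 5x7
large = conv2d_same(np.ones((12, 4)), k) # output is 12x4
```

Stacking only layers of this kind yields a network whose dense output is "correspondingly sized" to any input, which is exactly what makes end-to-end dense labeling practical.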