Open Access Proceedings ArticleDOI

Amulet: Aggregating Multi-level Convolutional Features for Salient Object Detection

TLDR
Amulet is presented: a generic framework that aggregates multi-level convolutional features for salient object detection. It provides accurate salient object labeling and performs favorably against state-of-the-art approaches on nearly all compared evaluation metrics.
Abstract
Fully convolutional neural networks (FCNs) have shown outstanding performance in many dense labeling problems. One key pillar of these successes is mining relevant information from features in convolutional layers. However, how to better aggregate multi-level convolutional feature maps for salient object detection is underexplored. In this work, we present Amulet, a generic aggregating multi-level convolutional feature framework for salient object detection. Our framework first integrates multi-level feature maps into multiple resolutions, which simultaneously incorporate coarse semantics and fine details. Then it adaptively learns to combine these feature maps at each resolution and predict saliency maps with the combined features. Finally, the predicted results are efficiently fused to generate the final saliency map. In addition, to achieve accurate boundary inference and semantic enhancement, edge-aware feature maps in low-level layers and the predicted results of low-resolution features are recursively embedded into the learning framework. By aggregating multi-level convolutional features in this efficient and flexible manner, the proposed saliency model provides accurate salient object labeling. Comprehensive experiments demonstrate that our method performs favorably against state-of-the-art approaches on nearly all compared evaluation metrics.
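The aggregation pipeline the abstract describes (resize multi-level features to a common resolution, adaptively combine them, predict saliency) can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the helpers `resize_nn` and `aggregate` are hypothetical names, nearest-neighbor resampling stands in for learned up/down-sampling, and random weights stand in for the learned combination.

```python
import numpy as np

def resize_nn(fmap, size):
    """Nearest-neighbor resize of a (C, H, W) feature map to (C, size, size)."""
    c, h, w = fmap.shape
    rows = np.arange(size) * h // size
    cols = np.arange(size) * w // size
    return fmap[:, rows][:, :, cols]

def aggregate(feature_maps, target_size):
    """Integrate multi-level feature maps at one resolution and fuse them.

    feature_maps: list of (C_i, H_i, W_i) arrays from different conv levels.
    Returns a (target_size, target_size) saliency map with values in [0, 1].
    """
    # bring every level to the target resolution, then merge channel-wise
    resized = [resize_nn(f, target_size) for f in feature_maps]
    combined = np.concatenate(resized, axis=0)
    # stand-in for adaptively learned combination weights
    weights = np.random.rand(combined.shape[0])
    weights /= weights.sum()
    fused = np.tensordot(weights, combined, axes=1)  # weighted channel sum
    return 1.0 / (1.0 + np.exp(-fused))              # sigmoid -> saliency

# toy multi-level features: coarse semantics (low res) to fine details (high res)
levels = [np.random.rand(8, 7, 7),
          np.random.rand(4, 14, 14),
          np.random.rand(2, 28, 28)]
saliency = aggregate(levels, 28)
print(saliency.shape)  # (28, 28)
```

In the actual model this per-resolution step is repeated at several target resolutions, and the resulting predictions are fused into the final saliency map.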


Citations
Posted Content

Unsupervised Semantic Segmentation by Contrasting Object Mask Proposals

TL;DR: In this article, a two-step framework adopts a predetermined mid-level prior in a contrastive optimization objective to learn pixel embeddings, which can be directly clustered into semantic groups using K-Means on PASCAL.
Journal ArticleDOI

Dynamic Feature Integration for Simultaneous Detection of Salient Object, Edge and Skeleton.

TL;DR: A unified framework is proposed for salient object segmentation, edge detection, and skeleton extraction that allows each task to dynamically choose features at different levels of the shared backbone based on its own characteristics; a task-adaptive attention module intelligently allocates information to the different tasks according to image content priors.
Journal ArticleDOI

Depth Quality Aware Salient Object Detection

TL;DR: This paper integrates a novel depth-quality-aware subnet into the classic bistream structure to assess depth quality before conducting the selective RGB-D fusion, achieving much better complementarity between the RGB and depth streams.
Book ChapterDOI

Unsupervised CNN-Based Co-saliency Detection with Graphical Optimization

TL;DR: This paper decomposes co-saliency detection into two sub-tasks, single-image saliency detection and cross-image co-occurrence region discovery, corresponding to two novel unsupervised losses: the single-image saliency (SIS) loss and the co-occurrence (COOC) loss.
Proceedings Article

Memory-oriented Decoder for Light Field Salient Object Detection

TL;DR: A deep-learning-based method in which a novel memory-oriented decoder is tailored for light field saliency detection; the internal correlation of focal slices is deeply explored and comprehensively exploited for accurate prediction through designed feature fusion and integration mechanisms.
References
Proceedings ArticleDOI

Deep Residual Learning for Image Recognition

TL;DR: In this article, the authors propose a residual learning framework that eases the training of networks substantially deeper than those used previously; it won first place in the ILSVRC 2015 classification task.
Proceedings Article

Very Deep Convolutional Networks for Large-Scale Image Recognition

TL;DR: This work investigates the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting using an architecture with very small convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers.
Book ChapterDOI

U-Net: Convolutional Networks for Biomedical Image Segmentation

TL;DR: Ronneberger et al. propose a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently; it can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks.
Proceedings ArticleDOI

Fully convolutional networks for semantic segmentation

TL;DR: The key insight is to build “fully convolutional” networks that take input of arbitrary size and produce correspondingly-sized output with efficient inference and learning.
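The "fully convolutional" property (arbitrary input size, correspondingly-sized output) follows from using only same-padded convolutions and no fully connected layers. A minimal NumPy sketch of that property, not the FCN paper's architecture, with a hypothetical helper `conv2d_same`:

```python
import numpy as np

def conv2d_same(x, kernel):
    """'Same'-padded 2-D convolution: the output keeps the input's spatial size."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))  # zero-pad so edges stay in-bounds
    out = np.zeros_like(x, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(xp[i:i + kh, j:j + kw] * kernel)
    return out

# a 3x3 mean filter applied to inputs of two arbitrary sizes
mean_k = np.ones((3, 3)) / 9.0
small = conv2d_same(np.random.rand(16, 16), mean_k)
large = conv2d_same(np.random.rand(33, 41), mean_k)
print(small.shape, large.shape)  # (16, 16) (33, 41)
```

Because every layer preserves (or predictably scales) spatial extent, the same learned weights apply to any input resolution, which is what enables dense per-pixel prediction.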