Open Access Proceedings ArticleDOI

Amulet: Aggregating Multi-level Convolutional Features for Salient Object Detection

TLDR
Amulet is presented, a generic framework for aggregating multi-level convolutional features for salient object detection that provides accurate salient object labeling and performs favorably against state-of-the-art approaches in terms of nearly all compared evaluation metrics.
Abstract
Fully convolutional neural networks (FCNs) have shown outstanding performance in many dense labeling problems. One key pillar of these successes is mining relevant information from features in convolutional layers. However, how to better aggregate multi-level convolutional feature maps for salient object detection is underexplored. In this work, we present Amulet, a generic aggregating multi-level convolutional feature framework for salient object detection. Our framework first integrates multi-level feature maps into multiple resolutions, which simultaneously incorporate coarse semantics and fine details. Then it adaptively learns to combine these feature maps at each resolution and predicts saliency maps with the combined features. Finally, the predicted results are efficiently fused to generate the final saliency map. In addition, to achieve accurate boundary inference and semantic enhancement, edge-aware feature maps in low-level layers and the predicted results of low-resolution features are recursively embedded into the learning framework. By aggregating multi-level convolutional features in this efficient and flexible manner, the proposed saliency model provides accurate salient object labeling. Comprehensive experiments demonstrate that our method performs favorably against state-of-the-art approaches in terms of nearly all compared evaluation metrics.
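The aggregation pipeline can be pictured with a minimal PyTorch-style sketch. Channel sizes, layer names, and the fusion step below are illustrative assumptions, not the released Amulet implementation; the paper's edge-aware low-level features and recursive embedding of low-resolution predictions are omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiLevelAggregation(nn.Module):
    """Toy sketch of resolution-based feature aggregation (not the official Amulet code).

    Multi-level backbone features are resized to one target resolution,
    combined by a learned 1x1 convolution, and used to predict a saliency map.
    """
    def __init__(self, in_channels=(64, 128, 256, 512), mid_channels=64):
        super().__init__()
        # Reduce each backbone level to a common channel width (assumed sizes).
        self.reduce = nn.ModuleList(
            [nn.Conv2d(c, mid_channels, kernel_size=1) for c in in_channels]
        )
        # Adaptively combine the resized levels.
        self.combine = nn.Conv2d(mid_channels * len(in_channels), mid_channels, kernel_size=1)
        self.predict = nn.Conv2d(mid_channels, 1, kernel_size=1)

    def forward(self, features, target_size):
        resized = [
            F.interpolate(r(f), size=target_size, mode="bilinear", align_corners=False)
            for r, f in zip(self.reduce, features)
        ]
        combined = F.relu(self.combine(torch.cat(resized, dim=1)))
        return torch.sigmoid(self.predict(combined))  # per-resolution saliency map

# Usage: fake multi-level features at strides 4/8/16/32 of a 256x256 input.
feats = [torch.randn(1, c, 256 // s, 256 // s)
         for c, s in zip((64, 128, 256, 512), (4, 8, 16, 32))]
sal = MultiLevelAggregation()(feats, target_size=(64, 64))
print(sal.shape)  # torch.Size([1, 1, 64, 64])
```

In the full framework one such aggregation is performed at each target resolution, and the per-resolution predictions are then fused into the final saliency map.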


Citations
Journal ArticleDOI

Hybrid deep learning architecture for rail surface segmentation and surface defect detection

TL;DR: A novel architecture is proposed to fully exploit the complementarity between the RS and the RE to accurately identify the RS with well-defined boundaries, and an innovative hybrid loss consisting of binary cross entropy, structural similarity index measure, and intersection-over-union is incorporated into the RBGNet.
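As a rough illustration of such a hybrid loss, here is a minimal PyTorch sketch assuming probability maps as predictions; the exact weighting and SSIM formulation in the RBGNet paper may differ.

```python
import torch
import torch.nn.functional as F

def hybrid_loss(pred, target, window=11):
    """Hedged sketch of a BCE + SSIM + IoU hybrid loss for binary segmentation.

    `pred` and `target` are (N, 1, H, W) tensors; `pred` holds probabilities in [0, 1].
    """
    # Binary cross-entropy term.
    bce = F.binary_cross_entropy(pred, target)

    # Window-based SSIM term (simplified, single scale).
    c1, c2 = 0.01 ** 2, 0.03 ** 2
    pad = window // 2
    mu_p = F.avg_pool2d(pred, window, stride=1, padding=pad)
    mu_t = F.avg_pool2d(target, window, stride=1, padding=pad)
    var_p = F.avg_pool2d(pred * pred, window, stride=1, padding=pad) - mu_p ** 2
    var_t = F.avg_pool2d(target * target, window, stride=1, padding=pad) - mu_t ** 2
    cov = F.avg_pool2d(pred * target, window, stride=1, padding=pad) - mu_p * mu_t
    ssim = ((2 * mu_p * mu_t + c1) * (2 * cov + c2)) / \
           ((mu_p ** 2 + mu_t ** 2 + c1) * (var_p + var_t + c2))
    ssim_loss = 1 - ssim.mean()

    # Soft intersection-over-union term.
    inter = (pred * target).sum(dim=(2, 3))
    union = pred.sum(dim=(2, 3)) + target.sum(dim=(2, 3)) - inter
    iou_loss = 1 - (inter / (union + 1e-6)).mean()

    return bce + ssim_loss + iou_loss

# Example call on random tensors.
p = torch.rand(2, 1, 64, 64)
t = (torch.rand(2, 1, 64, 64) > 0.5).float()
print(hybrid_loss(p, t).item())
```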
Journal ArticleDOI

Video object detection for autonomous driving: Motion-aid feature calibration

TL;DR: An end-to-end deep learning framework, termed the motion-aid feature calibration network (MFCN), is proposed for video object detection; it outperforms other competitive video object detectors and achieves a better trade-off between accuracy and runtime speed, demonstrating its potential for use in autonomous driving systems.
Posted Content

Revisiting Salient Object Detection: Simultaneous Detection, Ranking, and Subitizing of Multiple Salient Objects

TL;DR: In this paper, a hierarchical representation of relative saliency and stage-wise refinement is proposed to solve the problem of salient object subitizing, and the proposed approach exceeds the performance of prior work across all metrics considered.
Proceedings ArticleDOI

Deep Light-field-driven Saliency Detection from a Single View

TL;DR: This paper proposes a high-quality light field synthesis network to produce reliable 4D light field information, and a novel light-field-driven saliency detection network with two purposes: richer saliency features can be produced, and geometric information can be considered for the integration of multi-view saliency maps in a view-wise attention fashion.
Journal ArticleDOI

Residual Learning for Salient Object Detection

TL;DR: This paper proposes a residual learning strategy that gradually refines the coarse prediction scale by scale to remedy the errors between the coarse saliency map and scale-matching ground-truth masks, and designs a fully convolutional network that does not need any post-processing.
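The scale-by-scale refinement idea can be sketched as follows; this is illustrative PyTorch code with hypothetical module names and channel sizes, not the paper's implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResidualRefiner(nn.Module):
    """Sketch of scale-by-scale residual refinement (illustrative only).

    A coarse saliency map is repeatedly upsampled; at each finer scale a small
    branch predicts a residual from that scale's features and adds it to the
    upsampled prediction, instead of re-predicting saliency from scratch.
    """
    def __init__(self, channels=(256, 128, 64)):
        super().__init__()
        self.residual_heads = nn.ModuleList(
            [nn.Conv2d(c + 1, 1, kernel_size=3, padding=1) for c in channels]
        )

    def forward(self, coarse, features):
        pred = coarse  # lowest-resolution prediction, shape (N, 1, h, w)
        for head, feat in zip(self.residual_heads, features):
            pred = F.interpolate(pred, size=feat.shape[2:], mode="bilinear",
                                 align_corners=False)
            residual = head(torch.cat([feat, pred], dim=1))
            pred = pred + residual  # correct the coarse prediction at this scale
        return torch.sigmoid(pred)

# Usage with fake features at increasing resolution.
coarse = torch.randn(1, 1, 8, 8)
feats = [torch.randn(1, c, s, s) for c, s in zip((256, 128, 64), (16, 32, 64))]
print(ResidualRefiner()(coarse, feats).shape)  # torch.Size([1, 1, 64, 64])
```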
References
Proceedings ArticleDOI

Deep Residual Learning for Image Recognition

TL;DR: In this article, the authors propose a residual learning framework to ease the training of networks that are substantially deeper than those used previously; the framework won 1st place on the ILSVRC 2015 classification task.
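A minimal sketch of the residual learning idea: a block learns a residual mapping F(x) and outputs F(x) + x through an identity shortcut. Channel sizes below are illustrative, not the published ResNet configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BasicResidualBlock(nn.Module):
    """Minimal residual block: the stacked layers learn a residual F(x)
    and the block outputs F(x) + x via an identity shortcut."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)

    def forward(self, x):
        out = F.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return F.relu(out + x)  # the shortcut eases optimization of very deep nets

print(BasicResidualBlock(64)(torch.randn(1, 64, 32, 32)).shape)
```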
Proceedings Article

Very Deep Convolutional Networks for Large-Scale Image Recognition

TL;DR: This work investigates the effect of convolutional network depth on accuracy in the large-scale image recognition setting using an architecture with very small (3x3) convolution filters, and shows that a significant improvement over prior-art configurations can be achieved by pushing the depth to 16-19 weight layers.
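The design principle can be sketched as a stage of stacked 3x3 convolutions followed by pooling; this is an illustrative fragment, not one of the published VGG configurations.

```python
import torch
import torch.nn as nn

def vgg_style_stage(in_ch, out_ch, num_convs):
    """A VGG-style stage: a stack of 3x3 convolutions followed by 2x2 max pooling.
    Depth is built by stacking small filters rather than using large ones."""
    layers = []
    for i in range(num_convs):
        layers += [nn.Conv2d(in_ch if i == 0 else out_ch, out_ch, 3, padding=1),
                   nn.ReLU(inplace=True)]
    layers.append(nn.MaxPool2d(2))
    return nn.Sequential(*layers)

# Two stacked 3x3 convolutions cover the same receptive field as one 5x5
# convolution, with fewer parameters and an extra non-linearity.
stage = vgg_style_stage(3, 64, num_convs=2)
print(stage(torch.randn(1, 3, 224, 224)).shape)  # torch.Size([1, 64, 112, 112])
```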
Book ChapterDOI

U-Net: Convolutional Networks for Biomedical Image Segmentation

TL;DR: Ronneberger et al. propose a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently; it can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks.
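A toy encoder-decoder with a single skip connection conveys the general U-Net idea; this is heavily simplified, as the published network uses multiple levels, unpadded convolutions, and far more channels.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyUNet(nn.Module):
    """Toy one-level encoder-decoder with a skip connection, U-Net style."""
    def __init__(self, in_ch=1, base=16, num_classes=2):
        super().__init__()
        self.enc = nn.Conv2d(in_ch, base, 3, padding=1)
        self.down = nn.Conv2d(base, base * 2, 3, stride=2, padding=1)
        self.up = nn.ConvTranspose2d(base * 2, base, 2, stride=2)
        self.dec = nn.Conv2d(base * 2, base, 3, padding=1)  # after skip concat
        self.head = nn.Conv2d(base, num_classes, 1)

    def forward(self, x):
        e = F.relu(self.enc(x))                          # high-resolution encoder features
        b = F.relu(self.down(e))                         # bottleneck at half resolution
        u = F.relu(self.up(b))                           # upsample back
        d = F.relu(self.dec(torch.cat([u, e], dim=1)))   # skip connection
        return self.head(d)

print(TinyUNet()(torch.randn(1, 1, 64, 64)).shape)  # torch.Size([1, 2, 64, 64])
```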
Proceedings ArticleDOI

Fully convolutional networks for semantic segmentation

TL;DR: The key insight is to build “fully convolutional” networks that take input of arbitrary size and produce correspondingly-sized output with efficient inference and learning.
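The "arbitrary input size" property follows from using only convolutional and pooling layers; a 1x1 convolution replaces the fully connected classifier, and coarse scores are upsampled back to the input resolution. The sketch below is illustrative, not the paper's VGG-based FCN.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyFCN(nn.Module):
    """Sketch of a fully convolutional network: no fully connected layers, so it
    accepts inputs of arbitrary size and predicts a correspondingly sized map."""
    def __init__(self, num_classes=21):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(inplace=True), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(inplace=True), nn.MaxPool2d(2),
        )
        self.classifier = nn.Conv2d(64, num_classes, 1)  # 1x1 conv replaces the FC layer

    def forward(self, x):
        score = self.classifier(self.features(x))
        # Upsample coarse scores back to the input resolution.
        return F.interpolate(score, size=x.shape[2:], mode="bilinear", align_corners=False)

# Works for different input sizes without any change.
for h, w in [(128, 128), (200, 320)]:
    print(TinyFCN()(torch.randn(1, 3, h, w)).shape)
```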