Journal ArticleDOI

JWSAA: Joint Weak Saliency and Attention Aware for Person Re-identification

Xin Ning, +3 more
- 17 Sep 2021 - 
- Vol. 453, pp 801-811
TLDR
A model with joint weak saliency and attention awareness is proposed, which obtains more complete global features by weakening salient features and obtains diversified salient features via attention diversity, improving the performance of the model.
About
This article was published in Neurocomputing on 2021-09-17. It has received 99 citations to date. The article focuses on the topics: Salience (neuroscience) & Salient.

Citations
Journal ArticleDOI

Review of multi-view 3D object recognition methods based on deep learning

TL;DR: A comprehensive review and classification of the latest developments in deep learning methods for multi-view 3D object recognition is presented, which summarizes the results of these methods on a few mainstream datasets, provides an insightful summary, and puts forward enlightening future research directions.
Journal ArticleDOI

Voxel-based three-view hybrid parallel network for 3D object classification

TL;DR: Wang et al. as mentioned in this paper proposed a voxel-based three-view hybrid parallel network for 3D shape classification, which first obtains depth projection views of the three-dimensional model from the front, top, and side views, and then outputs a predicted probability value for the category of the 3D model.
Journal ArticleDOI

Quadratic polynomial guided fuzzy C-means and dual attention mechanism for medical image segmentation

TL;DR: Wang et al. as mentioned in this paper proposed a composite network architecture combining quadratic-polynomial-guided fuzzy C-means with a dual attention mechanism to address the high complexity and noise of medical images.
Journal ArticleDOI

Image inpainting based on deep learning: A review

TL;DR: This review summarizes the current problems of image inpainting and the different types of deep-learning-based neural network structures, and discusses future development trends and research directions.
Journal ArticleDOI

An Improved Encoder-Decoder Network Based on Strip Pool Method Applied to Segmentation of Farmland Vacancy Field

TL;DR: In this paper, a new method that is better suited to farmland vacancy segmentation is proposed, which uses an improved ResNet as the backbone and applies data augmentation to improve the performance and robustness of the model.
References
Proceedings ArticleDOI

Deep Residual Learning for Image Recognition

TL;DR: In this article, the authors proposed a residual learning framework to ease the training of networks that are substantially deeper than those used previously, which won the 1st place on the ILSVRC 2015 classification task.
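The core of residual learning is letting a stack of layers fit a residual function F(x) and adding the identity shortcut, so the block outputs y = F(x) + x. A minimal numpy sketch of that idea (the layer shapes and random weights here are illustrative, not ResNet's actual architecture):

```python
import numpy as np

def residual_block(x, weight_layers):
    """Apply a stack of weight layers F(x), then add the identity shortcut: y = F(x) + x."""
    out = x
    for layer in weight_layers:
        out = layer(out)
    return out + x

# Toy F: two linear maps with a ReLU in between (shapes chosen for illustration).
rng = np.random.default_rng(0)
W1, W2 = rng.standard_normal((4, 4)), rng.standard_normal((4, 4))
layers = [lambda v: np.maximum(W1 @ v, 0), lambda v: W2 @ v]

x = rng.standard_normal(4)
y = residual_block(x, layers)  # same shape as x
```

Because the shortcut carries the input through unchanged, gradients flow directly across the block, which is what makes very deep stacks trainable.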
Proceedings ArticleDOI

ImageNet: A large-scale hierarchical image database

TL;DR: A new database called “ImageNet” is introduced, a large-scale ontology of images built upon the backbone of the WordNet structure, much larger in scale and diversity and much more accurate than the current image datasets.
Proceedings ArticleDOI

Rethinking the Inception Architecture for Computer Vision

TL;DR: In this article, the authors explore ways to scale up networks that aim at utilizing the added computation as efficiently as possible, through suitably factorized convolutions and aggressive regularization.
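One concrete form of convolution factorization from that line of work is replacing a 5x5 convolution with two stacked 3x3 convolutions, which cover the same receptive field with fewer parameters. A quick parameter-count check (channel count chosen arbitrarily for illustration):

```python
def conv_params(k, c_in, c_out):
    """Parameter count of a k x k convolution with no bias term."""
    return k * k * c_in * c_out

c = 64
p_5x5 = conv_params(5, c, c)          # one 5x5 conv: 25 * c * c
p_two_3x3 = 2 * conv_params(3, c, c)  # two stacked 3x3 convs: 18 * c * c
savings = 1 - p_two_3x3 / p_5x5       # 28% fewer parameters, same receptive field
```

The 18/25 ratio holds for any channel count, and the extra nonlinearity between the two 3x3 convolutions is an added benefit of the factorization.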
Proceedings ArticleDOI

FaceNet: A unified embedding for face recognition and clustering

TL;DR: A system that directly learns a mapping from face images to a compact Euclidean space where distances directly correspond to a measure of face similarity, and achieves state-of-the-art face recognition performance using only 128 bytes per face.
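FaceNet trains that embedding with a triplet loss: squared distance from an anchor to a positive (same identity) should be smaller than to a negative (different identity) by at least a margin, with embeddings constrained to the unit hypersphere. A minimal numpy sketch of the loss on precomputed embedding vectors (the margin value here is illustrative):

```python
import numpy as np

def l2_normalize(v):
    """Project an embedding onto the unit hypersphere, as FaceNet constrains its outputs."""
    return v / np.linalg.norm(v)

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Hinge loss: pull anchor toward positive, push it from negative by at least `margin`."""
    a, p, n = map(l2_normalize, (anchor, positive, negative))
    d_ap = np.sum((a - p) ** 2)  # squared distance to same-identity example
    d_an = np.sum((a - n) ** 2)  # squared distance to different-identity example
    return max(d_ap - d_an + margin, 0.0)
```

When the negative is already farther than the positive by the margin, the loss is zero and the triplet contributes no gradient, which is why triplet selection matters in training.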
Book ChapterDOI

CBAM: Convolutional Block Attention Module

TL;DR: Convolutional Block Attention Module (CBAM), as discussed by the authors, is a simple yet effective attention module for feed-forward convolutional neural networks. Given an intermediate feature map, the module sequentially infers attention maps along two separate dimensions, channel and spatial; the attention maps are then multiplied with the input feature map for adaptive feature refinement.
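The sequential channel-then-spatial refinement can be sketched in a few lines of numpy. This is a simplified illustration only: real CBAM passes the pooled descriptors through a shared MLP (channel branch) and a 7x7 convolution (spatial branch), which are omitted here so the pooled statistics feed the sigmoid directly:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cbam(x):
    """CBAM-style refinement of a feature map x with shape (C, H, W).

    Channel attention rescales whole channels from pooled per-channel
    statistics; spatial attention then rescales positions from pooled
    per-position statistics. Learned weights are omitted in this sketch.
    """
    # Channel attention: (C,) weights from avg- and max-pooled channel descriptors.
    ch = sigmoid(x.mean(axis=(1, 2)) + x.max(axis=(1, 2)))
    x = x * ch[:, None, None]
    # Spatial attention: (H, W) weights from avg- and max-pooled spatial descriptors.
    sp = sigmoid(x.mean(axis=0) + x.max(axis=0))
    return x * sp[None, :, :]

x = np.random.default_rng(0).standard_normal((8, 16, 16))
refined = cbam(x)  # same shape as x, rescaled by the two attention maps
```

Both attention maps lie in (0, 1) after the sigmoid, so the module can only suppress or preserve features, never amplify them, and the output keeps the input's shape so it drops into any backbone.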