Loss Function Search for Person Re-identification
TLDR
Li et al. proposed an AutoML method for loss function search, named LFS-ReID, for person ReID within the framework of the margin-based softmax loss function.
About
This article was published in Pattern Recognition on 2021-11-17 and is currently open access. It has received 8 citations to date. The article focuses on the topics: Softmax function & Computer science.
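As background for the margin-based softmax family that LFS-ReID searches over, here is a minimal NumPy sketch of one common member, the additive-margin softmax (the margin and scale values are illustrative assumptions, not parameters from the paper):

```python
import numpy as np

def margin_softmax_loss(logits, labels, margin=0.35, scale=30.0):
    """Additive-margin softmax: subtract a margin from the target-class
    logit before scaled cross-entropy, tightening the decision
    boundary around each identity."""
    logits = np.asarray(logits, dtype=float)
    one_hot = np.eye(logits.shape[1])[labels]
    z = scale * (logits - margin * one_hot)   # penalize only the target class
    z = z - z.max(axis=1, keepdims=True)      # numerical stability
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -(one_hot * log_probs).sum(axis=1).mean()
```

With margin=0 and scale=1 this reduces to plain softmax cross-entropy; a positive margin makes even a correct prediction costlier, forcing larger inter-class gaps in the embedding space.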
Citations
Journal ArticleDOI
MPCCL: Multiview predictive coding with contrastive learning for person re-identification
TL;DR: Wang et al. proposed multi-view predictive coding to align different representations of the same person and maintain intra-class similarity, achieving state-of-the-art performance on several benchmark datasets.
Journal ArticleDOI
Specialized Re-Ranking: A Novel Retrieval-Verification Framework for Cloth Changing Person Re-Identification
TL;DR: Zhang et al. proposed a retrieval-verification framework that learns specialized features for discerning similar images, together with a well-designed verification network for comparing them.
Journal ArticleDOI
Learning discriminative features for person re-identification via multi-spectral channel attention
Journal ArticleDOI
Inception Convolution and Feature Fusion for Person Search
Hua Ouyang, Jiexian Zeng, Lu Leng +2 more
TL;DR: Ouyang et al. proposed a person search method based on an inception convolution and feature fusion module (IC-FFM), using Seq-Net (Sequential End-to-end Network) as the baseline.
References
Proceedings ArticleDOI
Deep Residual Learning for Image Recognition
TL;DR: In this article, the authors proposed a residual learning framework to ease the training of networks substantially deeper than those used previously; the resulting model won 1st place in the ILSVRC 2015 classification task.
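The core idea, learning a residual F(x) that is added to an identity shortcut, can be sketched in a few lines of NumPy (the shapes and the two-layer residual branch are illustrative):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, w1, w2):
    # Output is relu(x + F(x)): the shortcut carries x unchanged,
    # so the two weight layers only need to learn the residual F(x).
    return relu(x + w2 @ relu(w1 @ x))
```

With zero weights the block reduces to relu(x), i.e. the identity for nonnegative inputs, which is why very deep stacks of such blocks remain trainable.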
Posted Content
Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift
Sergey Ioffe, Christian Szegedy
TL;DR: Batch Normalization as mentioned in this paper normalizes layer inputs for each training mini-batch to reduce the internal covariate shift in deep neural networks, and achieves state-of-the-art performance on ImageNet.
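The normalization step the TL;DR describes reduces, per feature and per mini-batch, to the following sketch (training-mode statistics only; running averages and learned per-channel parameters are omitted):

```python
import numpy as np

def batch_norm(x, gamma=1.0, beta=0.0, eps=1e-5):
    # Normalize each feature over the mini-batch, then scale and shift.
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mean) / np.sqrt(var + eps)
    return gamma * x_hat + beta
```

After this transform each feature column has roughly zero mean and unit variance, which is what stabilizes the input distribution each layer sees during training.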
Proceedings ArticleDOI
Rethinking the Inception Architecture for Computer Vision
TL;DR: In this article, the authors explore ways to scale up networks so that the added computation is used as efficiently as possible, through suitably factorized convolutions and aggressive regularization.
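One concrete effect of the factorized convolutions mentioned above is a parameter saving: two stacked 3x3 convolutions cover the same 5x5 receptive field with fewer weights. A small counting sketch (bias terms omitted, channel counts illustrative):

```python
def conv_params(kernel_sizes, c_in, c_out):
    # Weight count for a stack of square convolutions; the first layer
    # maps c_in -> c_out and later layers map c_out -> c_out.
    total, cin = 0, c_in
    for k in kernel_sizes:
        total += k * k * cin * c_out
        cin = c_out
    return total
```

For c_in = c_out = C, a single 5x5 convolution costs 25·C·C weights while two stacked 3x3 convolutions cost 18·C·C, a 28% reduction at equal receptive field.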
Proceedings Article
PyTorch: An Imperative Style, High-Performance Deep Learning Library
Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Z. Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, Soumith Chintala +20 more
TL;DR: This paper details the principles that drove the implementation of PyTorch and how they are reflected in its architecture, and explains how the careful and pragmatic implementation of the key components of its runtime enables them to work together to achieve compelling performance.
Journal ArticleDOI
Simple Statistical Gradient-Following Algorithms for Connectionist Reinforcement Learning
TL;DR: This article presents a general class of associative reinforcement learning algorithms for connectionist networks containing stochastic units. These algorithms are shown to make weight adjustments in a direction that lies along the gradient of expected reinforcement, in both immediate-reinforcement tasks and certain limited forms of delayed-reinforcement tasks, without explicitly computing gradient estimates.
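The gradient-following update Williams describes can be illustrated with a single Bernoulli stochastic unit on a toy problem where the reward simply equals the action (the parameterization and learning rate are illustrative choices):

```python
import numpy as np

def reinforce_step(theta, rng, lr=0.5, batch=64):
    # Stochastic unit: P(a=1) = sigmoid(theta); toy reward r = a.
    p = 1.0 / (1.0 + np.exp(-theta))
    a = (rng.random(batch) < p).astype(float)
    r = a
    # For this unit, d/dtheta log pi(a) = a - p, so the update follows
    # an unbiased sample of the expected-reward gradient without ever
    # computing that gradient explicitly.
    return theta + lr * np.mean(r * (a - p))

theta = 0.0
rng = np.random.default_rng(0)
for _ in range(200):
    theta = reinforce_step(theta, rng)
# theta drifts upward, pushing P(a=1) toward the rewarded action
```

The same recipe is commonly used to train controllers in loss-function and architecture search, which is why this reference appears here.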