Open Access · Journal Article · DOI

A Discriminatively Learned CNN Embedding for Person Reidentification

TLDR
The paper proposes a Siamese network that simultaneously computes the identification loss and the verification loss, so that the network learns a discriminative embedding and a similarity measurement at the same time.
Abstract
In this article, we revisit two popular convolutional neural networks in person re-identification (re-ID): verification and identification models. The two models have their respective advantages and limitations due to different loss functions. Here, we shed light on how to combine the two models to learn more discriminative pedestrian descriptors. Specifically, we propose a Siamese network that simultaneously computes the identification loss and verification loss. Given a pair of training images, the network predicts the identities of the two input images and whether they belong to the same identity. Our network learns a discriminative embedding and a similarity measurement at the same time, thus taking full usage of the re-ID annotations. Our method can be easily applied on different pretrained networks. Albeit simple, the learned embedding improves the state-of-the-art performance on two public person re-ID benchmarks. Further, we show that our architecture can also be applied to image retrieval. The code is available at https://github.com/layumi/2016_person_re-ID.
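The abstract's core idea, combining an identification loss (which identity is this?) with a verification loss (are these two images the same identity?), can be sketched as a joint objective. The following is a minimal numpy sketch, not the paper's exact architecture: the names `W`, `Ws`, and the element-wise squared difference fed to a binary classifier are illustrative assumptions about how the two losses could share one embedding.

```python
import numpy as np

def softmax(z):
    # numerically stable softmax over a 1-D score vector
    e = np.exp(z - z.max())
    return e / e.sum()

def cross_entropy(probs, label):
    # negative log-likelihood of the true class
    return -np.log(probs[label])

def combined_loss(f1, f2, W, Ws, id1, id2):
    """Joint objective for one training pair (illustrative sketch).

    f1, f2 : embeddings of the two input images (dim,)
    W      : identity classifier weights (num_identities, dim)
    Ws     : same/different classifier weights (2, dim)
    id1, id2 : ground-truth identity labels
    """
    # identification loss: classify each embedding into an identity
    ident = (cross_entropy(softmax(W @ f1), id1)
             + cross_entropy(softmax(W @ f2), id2))
    # verification loss: binary same/different prediction on the
    # element-wise squared difference of the two embeddings
    fs = (f1 - f2) ** 2
    same = int(id1 == id2)
    verif = cross_entropy(softmax(Ws @ fs), same)
    return ident + verif
```

In a real network both classifiers would be trained jointly by backpropagation through a shared CNN backbone; the sketch only shows how the two supervision signals combine on one pair.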


Citations
Posted Content

In Defense of the Triplet Loss for Person Re-Identification

TL;DR: It is shown that, for models trained from scratch as well as pretrained ones, using a variant of the triplet loss for end-to-end deep metric learning outperforms most other published methods by a large margin.
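The triplet loss named above pulls an anchor toward a positive (same identity) and pushes it away from a negative (different identity) by at least a margin. A minimal sketch of the basic form follows; the cited work uses a batch-hard variant mined within each mini-batch, which this single-triplet function does not show.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.3):
    # hinge on the gap between the anchor-positive and
    # anchor-negative Euclidean distances
    d_ap = np.linalg.norm(anchor - positive)
    d_an = np.linalg.norm(anchor - negative)
    return max(0.0, d_ap - d_an + margin)
```

The loss is zero once the negative is farther from the anchor than the positive by at least `margin`, so well-separated triplets stop contributing gradient.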
Proceedings ArticleDOI

Unlabeled Samples Generated by GAN Improve the Person Re-identification Baseline in Vitro

TL;DR: A simple semi-supervised pipeline that uses only the original training set, without collecting extra data, effectively improves the discriminative ability of learned CNN embeddings; a label smoothing regularization for outliers (LSRO) is proposed to handle the unlabeled GAN-generated samples.
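The LSRO idea can be sketched as follows: real images keep their one-hot identity label, while GAN-generated images, which belong to no identity, are assigned a uniform target over all classes. This is a minimal numpy sketch of that loss under those assumptions, not the cited paper's exact implementation.

```python
import numpy as np

def lsro_loss(probs, label, is_generated, num_classes):
    """Cross-entropy with label smoothing regularization for outliers.

    probs        : predicted class probabilities (num_classes,)
    label        : identity label (ignored for generated samples)
    is_generated : True for unlabeled GAN-generated images
    """
    if is_generated:
        # uniform target 1/K over all identities:
        # -(1/K) * sum_k log p_k
        return -np.sum(np.log(probs)) / num_classes
    # real image: standard cross-entropy on its identity label
    return -np.log(probs[label])
```

Because the uniform target penalizes confident predictions on generated samples, they act as a regularizer rather than as extra labeled identities.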
Proceedings ArticleDOI

Person Transfer GAN to Bridge Domain Gap for Person Re-identification

TL;DR: A Person Transfer Generative Adversarial Network (PTGAN) is proposed to relieve the expensive cost of annotating new training samples; comprehensive experiments show that the domain gap can be substantially narrowed down by PTGAN.
Proceedings ArticleDOI

Bag of Tricks and a Strong Baseline for Deep Person Re-Identification

TL;DR: A simple and efficient baseline for person re-identification with deep neural networks that combines effective training tricks, achieving 94.5% rank-1 accuracy and 85.9% mAP on Market-1501 using only global features.
Journal ArticleDOI

Improving person re-identification by attribute and identity learning

TL;DR: An attribute-person recognition (APR) network is proposed: a multi-task network that learns a re-ID embedding and simultaneously predicts pedestrian attributes. By learning a more discriminative representation, APR achieves competitive re-ID performance compared with state-of-the-art methods.
References
Proceedings ArticleDOI

Deep Residual Learning for Image Recognition

TL;DR: The authors propose a residual learning framework to ease the training of networks that are substantially deeper than those used previously; it won 1st place on the ILSVRC 2015 classification task.
Proceedings Article

Very Deep Convolutional Networks for Large-Scale Image Recognition

TL;DR: This work investigates the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting using an architecture with very small convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers.
Journal Article

Dropout: a simple way to prevent neural networks from overfitting

TL;DR: It is shown that dropout improves the performance of neural networks on supervised learning tasks in vision, speech recognition, document classification and computational biology, obtaining state-of-the-art results on many benchmark data sets.
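Dropout, as summarized above, randomly zeroes units during training so the network cannot co-adapt on any single feature. A minimal sketch of the common "inverted dropout" formulation follows; the scaling convention is an implementation choice, not necessarily the one used in the cited paper.

```python
import numpy as np

def dropout(x, p=0.5, training=True, rng=None):
    # inverted dropout: zero each unit with probability p at training
    # time and scale survivors by 1/(1-p), so the expected activation
    # is unchanged and no rescaling is needed at test time
    if not training or p == 0.0:
        return x
    if rng is None:
        rng = np.random.default_rng()
    mask = rng.random(x.shape) >= p
    return x * mask / (1.0 - p)
```

At test time the function is the identity, which is what makes the train-time 1/(1-p) scaling convenient.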
Journal ArticleDOI

ImageNet classification with deep convolutional neural networks

TL;DR: A large, deep convolutional neural network was trained to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes and employed a recently developed regularization method called "dropout" that proved to be very effective.
Journal ArticleDOI

ImageNet Large Scale Visual Recognition Challenge

TL;DR: The ImageNet Large Scale Visual Recognition Challenge (ILSVRC) is a benchmark for object category classification and detection spanning hundreds of object categories and millions of images; it has been run annually from 2010 to the present, attracting participation from more than fifty institutions.