scispace - formally typeset

Shuo Wang

Publications - 12
Citations - 419

Shuo Wang is an academic researcher. The author has contributed to research in topics: Convolutional neural network & Discriminative model. The author has an h-index of 7 and has co-authored 10 publications receiving 214 citations.

Papers
Proceedings ArticleDOI

Co-Mining: Deep Face Recognition With Noisy Labels

TL;DR: A novel co-mining strategy that simultaneously uses loss values as the cue to detect noisy labels, exchanges the high-confidence clean faces to alleviate the error-accumulation issue caused by sample-selection bias, and re-weights the predicted clean faces so that they dominate the discriminative model training, all in a mini-batch fashion.
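The co-mining idea above can be sketched in a few lines: two peer networks each rank a mini-batch by per-sample loss, keep the lowest-loss fraction as likely-clean, and swap those selections so each network trains on the other's picks. This is a minimal illustration under stated assumptions, not the paper's implementation; `select_clean`, `co_mining_step`, and `keep_ratio` are hypothetical names.

```python
# Hedged sketch of the co-mining selection step. Two peer models each
# rank a mini-batch by per-sample loss, treat the smallest-loss fraction
# as clean, and hand those samples to the *other* model for training.
# All names and the `keep_ratio` parameter are illustrative.
import numpy as np

def select_clean(losses, keep_ratio=0.7):
    """Indices of the lowest-loss samples, assumed to carry clean labels."""
    k = max(1, int(round(keep_ratio * len(losses))))
    return np.argsort(losses)[:k]

def co_mining_step(losses_a, losses_b, keep_ratio=0.7):
    """Cross-update: net A trains on B's clean picks and vice versa,
    which limits the error a single biased network can accumulate."""
    clean_a = select_clean(losses_a, keep_ratio)  # picked by net A
    clean_b = select_clean(losses_b, keep_ratio)  # picked by net B
    return clean_b, clean_a  # (samples for net A, samples for net B)
```

The cross-update is the key design choice: if each network kept its own picks, its early mistakes would reinforce themselves batch after batch.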
Journal ArticleDOI

Mis-Classified Vector Guided Softmax Loss for Face Recognition

TL;DR: This paper develops a novel loss function, which adaptively emphasizes the mis-classified feature vectors to guide the discriminative feature learning and is the first attempt to inherit the advantages of feature margin and feature mining into a unified loss function.
Posted Content

Support Vector Guided Softmax Loss for Face Recognition

TL;DR: A novel loss function, namely support vector guided softmax loss (SV-Softmax), which adaptively emphasizes the mis-classified points (support vectors) to guide discriminative feature learning and results in more discriminative features.
Posted Content

Improved Selective Refinement Network for Face Detection

TL;DR: An improved SRN face detector is presented by combining several useful techniques, obtaining the best performance on the widely used WIDER FACE face detection benchmark.
Book ChapterDOI

Exclusivity-Consistency Regularized Knowledge Distillation for Face Recognition

TL;DR: A novel position-aware exclusivity is proposed to encourage large diversity among different filters of the same layer, alleviating the low capability of the student network; the effect of several prevailing kinds of knowledge for face recognition distillation is also investigated, concluding that feature-consistency knowledge is more flexible and preserves more information than the others.
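A generic flavor of the filter-diversity idea can be shown with a pairwise-cosine penalty over one layer's filters: the lower the penalty, the more exclusive (diverse) the filters. The position-aware weighting from the paper is not reproduced here, and `filter_diversity_penalty` is a hypothetical helper.

```python
# Hedged sketch: a plain filter-diversity regularizer in the spirit of
# the exclusivity idea. It penalizes pairwise cosine similarity between
# the filters of one layer; the paper's position-aware weighting is
# intentionally omitted.
import numpy as np

def filter_diversity_penalty(weights):
    """weights: (num_filters, fan_in) array of flattened filters.
    Returns the mean off-diagonal absolute cosine similarity;
    lower values mean more diverse filters."""
    norms = np.linalg.norm(weights, axis=1, keepdims=True)
    unit = weights / np.clip(norms, 1e-12, None)   # unit-normalize filters
    sim = unit @ unit.T                            # pairwise cosine similarity
    n = sim.shape[0]
    off_diag = np.abs(sim[~np.eye(n, dtype=bool)]) # ignore self-similarity
    return off_diag.mean()
```

Orthogonal filters give a penalty of 0 and identical filters give 1, so adding this term to a training loss pushes a (student) network's filters apart.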