Huanzhang Dou
Publications - 9
Citations - 29
Huanzhang Dou is an academic researcher whose work spans computer science and pattern recognition. The author has an h-index of 1 and has co-authored 2 publications receiving 3 citations.
Papers
Book Chapter
MetaGait: Learning to Learn an Omni Sample Adaptive Representation for Gait Recognition
TL;DR: MetaGait injects meta-knowledge into the calibration network of the attention mechanism to improve adaptiveness from omni-scale, omni-dimension, and omni-process perspectives.
Journal Article
GaitMPL: Gait Recognition with Memory-Augmented Progressive Learning
TL;DR: This work addresses the hard-sample issue with a Memory-augmented Progressive Learning network (GaitMPL), comprising a Dynamic Reweighting Progressive Learning module (DRPL) and a Global Structure-Aligned Memory bank (GSAM), which reduces the learning difficulty of hard samples through easy-to-hard progressive learning.
Book Chapter
Adaptive Cross-domain Learning for Generalizable Person Re-identification
TL;DR: This work presents an Adaptive Cross-domain Learning (ACL) framework equipped with a CrOss-Domain Embedding Block (CODE-Block) to maintain a common feature space that captures both domain-invariant and domain-specific features, while dynamically mining relations across different domains.
Posted Content
VersatileGait: A Large-Scale Synthetic Gait Dataset with Fine-Grained Attributes and Complicated Scenarios.
Huanzhang Dou, Wenhu Zhang, Pengyi Zhang, Yuhan Zhao, Songyuan Li, Zequn Qin, Fei Wu, Lin Dong, Xi Li +8 more
TL;DR: This paper automatically creates a large-scale synthetic gait dataset (called VersatileGait) with a game engine, consisting of around one million silhouette sequences of 11,000 subjects with fine-grained attributes in various complicated scenarios.
Journal Article
Language Adaptive Weight Generation for Multi-task Visual Grounding
TL;DR: This work proposes an active perception visual grounding framework based on language adaptive weights, called VG-LAW, which serves as an expression-specific feature extractor through dynamic weights generated for various expressions.