Zhanglin Peng

Researcher at SenseTime

Publications: 30
Citations: 647

Zhanglin Peng is an academic researcher from SenseTime. The author has contributed to research on topics including Normalization (statistics) and Artificial neural networks. The author has an h-index of 12 and has co-authored 27 publications receiving 485 citations. Previous affiliations of Zhanglin Peng include Sun Yat-sen University.

Papers
Proceedings Article

Deep Boosting: Layered Feature Mining for General Image Classification

TL;DR: A novel computational architecture for general image feature mining is proposed that assembles primitive filters into compositional features in a layer-wise manner; it generates expressive image representations while inducing highly discriminative functions for image classification.
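For a rough sense of the layer-wise mining idea, the sketch below composes a small bank of primitive filters into higher-level responses layer by layer and keeps only the strongest ones. The filter bank, the energy-based selection criterion, and the function names are illustrative assumptions, not the paper's actual boosting formulation.

```python
# Toy sketch of layer-wise feature mining (illustrative, not the paper's method).
import numpy as np
from scipy.signal import convolve2d

def primitive_bank():
    """A tiny bank of hand-crafted primitive filters (edge / blob detectors)."""
    sobel_x = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    laplace = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], dtype=float)
    return [sobel_x, sobel_x.T, laplace]

def mine_features(image, n_layers=2, keep=4):
    """Convolve, rectify, and keep the strongest responses as inputs to the
    next layer; the final representation pools each surviving map."""
    filters = primitive_bank()
    maps = [image.astype(float)]
    for _ in range(n_layers):
        responses = [np.maximum(convolve2d(m, f, mode="same"), 0.0)
                     for m in maps for f in filters]
        # Toy stand-in for boosting-based selection: rank by response energy.
        responses.sort(key=lambda r: -np.abs(r).sum())
        maps = responses[:keep]
    return np.array([m.mean() for m in maps])    # compact image representation

rep = mine_features(np.random.rand(64, 64))      # 4-dimensional toy descriptor
```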
Proceedings Article

Active Domain Adaptation with Multi-level Contrastive Units for Semantic Segmentation

TL;DR: A novel Active Domain Adaptation scheme with Multi-level Contrastive Units (ADA-MCU) is proposed for semantic image segmentation; a simple pixel selection strategy followed by the construction of multi-level contrastive units optimizes the model for both domain adaptation and active supervised learning.
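The two ingredients the summary mentions can be pictured with the PyTorch sketch below: an entropy-based pixel selection step (one plausible "simple pixel selection strategy") and a pixel-level contrastive loss over labelled embeddings. The function names, the entropy criterion, and the loss variant are assumptions for illustration, not the paper's exact ADA-MCU construction.

```python
# Illustrative sketch only; selection criterion and loss variant are assumptions.
import torch
import torch.nn.functional as F

def select_pixels_by_entropy(logits, budget):
    """Rank target-domain pixels by predictive entropy and return the flat
    indices of the `budget` most uncertain ones (toy active-selection step)."""
    probs = F.softmax(logits, dim=1)                              # (B, C, H, W)
    entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1)   # (B, H, W)
    return torch.topk(entropy.flatten(), k=budget).indices

def pixel_contrastive_loss(features, labels, temperature=0.1):
    """Minimal InfoNCE-style loss over labelled pixel embeddings: pixels of the
    same class are pulled together, all others pushed apart."""
    features = F.normalize(features, dim=1)                       # (N, D)
    sim = (features @ features.t() / temperature).exp()
    sim = sim * (1 - torch.eye(len(labels)))                      # drop self-pairs
    same = labels.unsqueeze(0).eq(labels.unsqueeze(1)).float()
    same.fill_diagonal_(0)
    pos, denom = (sim * same).sum(dim=1), sim.sum(dim=1)
    valid = same.sum(dim=1) > 0                                   # need >= 1 positive
    return -(pos[valid] / denom[valid]).clamp_min(1e-8).log().mean()

# idx = select_pixels_by_entropy(torch.randn(1, 19, 64, 64), budget=256)
```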
Journal Article

Multi-Stage Spatio-Temporal Aggregation Transformer for Video Person Re-identification

TL;DR: Wang et al. propose a Multi-Stage Spatio-Temporal Aggregation Transformer (MSTAT) with two newly designed proxy embedding modules to address video person re-identification.
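As a rough picture of how learnable proxy embeddings can aggregate a clip, the PyTorch sketch below prepends proxy tokens to flattened spatio-temporal frame tokens and runs a standard transformer encoder over them. The class name, token shapes, and single-stage structure are simplifying assumptions and do not reproduce MSTAT's multi-stage design.

```python
# Single-stage toy aggregator; names and sizes are illustrative assumptions.
import torch
import torch.nn as nn

class ProxyAggregator(nn.Module):
    """Learnable proxy tokens are prepended to the flattened clip tokens and
    attended over with a transformer encoder; the proxies then serve as the
    aggregated clip-level representation."""
    def __init__(self, dim=256, num_proxies=4, num_layers=2, num_heads=4):
        super().__init__()
        self.proxies = nn.Parameter(torch.randn(num_proxies, dim) * 0.02)
        layer = nn.TransformerEncoderLayer(dim, num_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)

    def forward(self, frame_tokens):             # (B, T * P, dim) clip tokens
        b = frame_tokens.size(0)
        proxies = self.proxies.unsqueeze(0).expand(b, -1, -1)
        x = self.encoder(torch.cat([proxies, frame_tokens], dim=1))
        return x[:, : self.proxies.size(0)].mean(dim=1)   # clip embedding

# emb = ProxyAggregator()(torch.randn(2, 8 * 49, 256))    # 8 frames x 49 patches
```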
Proceedings Article

Scheduling Large-scale Distributed Training via Reinforcement Learning

TL;DR: A policy scheduler is proposed that determines the learning-rate (lr) arguments via reinforcement learning, significantly reducing the cost of tuning them and achieving superior performance on various tasks and benchmarks.
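The gist of driving a learning-rate decision with reinforcement learning can be sketched as follows: a small policy network maps a summary of the training state to a distribution over lr multipliers and is updated with REINFORCE, using (for example) validation improvement as the reward. The state features, action set, and update rule are generic assumptions, not the paper's scheduler.

```python
# Generic REINFORCE-style sketch; state features, actions and reward are assumptions.
import torch
import torch.nn as nn

class LrPolicy(nn.Module):
    """Maps a small training-state summary (e.g. recent loss trend, current lr)
    to a distribution over learning-rate multipliers."""
    ACTIONS = (0.5, 1.0, 2.0)                        # shrink / keep / grow the lr

    def __init__(self, state_dim=3, hidden=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(state_dim, hidden), nn.Tanh(),
                                 nn.Linear(hidden, len(self.ACTIONS)))

    def act(self, state):
        dist = torch.distributions.Categorical(logits=self.net(state))
        action = dist.sample()
        return self.ACTIONS[action.item()], dist.log_prob(action)

def reinforce_update(optimizer, log_probs, rewards):
    """One REINFORCE step over several scheduling decisions; the reward could be
    the validation improvement observed after each chosen lr multiplier."""
    returns = torch.tensor(rewards)
    returns = (returns - returns.mean()) / (returns.std() + 1e-8)
    loss = -(torch.stack(log_probs) * returns).sum()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```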
Patent

Normalization method, apparatus and device for deep neural network, and storage medium

TL;DR: In this article, the authors propose a normalization method for deep neural networks in which normalization is carried out over at least one dimension and the statistical information of each dimension is covered, so that good robustness to the statistics of each dimension is achieved.
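To make the "normalization over at least one dimension" idea concrete, the sketch below computes statistics over a chosen dimension set of an NCHW activation tensor, and a second helper averages the statistics of several dimension sets so that each dimension's statistics are covered. The fixed equal weights and the specific dimension sets are illustrative assumptions, not the patent's claimed method.

```python
# Illustrative only; the dimension sets and equal weighting are assumptions.
import torch

def normalize_over(x, dims, eps=1e-5):
    """Standardize an NCHW activation tensor with statistics computed over the
    given dimensions: (0, 2, 3) gives batch-norm-style statistics, (2, 3)
    instance-norm-style, (1, 2, 3) layer-norm-style."""
    mean = x.mean(dim=dims, keepdim=True)
    var = x.var(dim=dims, unbiased=False, keepdim=True)
    return (x - mean) / torch.sqrt(var + eps)

def multi_dim_normalize(x, dim_sets=((0, 2, 3), (2, 3), (1, 2, 3)), eps=1e-5):
    """Average the statistics of several dimension sets so that every dimension's
    statistics contribute to the normalization (fixed equal weights here)."""
    mean = sum(x.mean(dim=d, keepdim=True) for d in dim_sets) / len(dim_sets)
    var = sum(x.var(dim=d, unbiased=False, keepdim=True) for d in dim_sets) / len(dim_sets)
    return (x - mean) / torch.sqrt(var + eps)

y = multi_dim_normalize(torch.randn(4, 8, 16, 16))   # toy NCHW activations
```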