Wangpeng An

Researcher at Tsinghua University

Publications - 21
Citations - 726

Wangpeng An is an academic researcher from Tsinghua University. The author has contributed to research in topics including Artificial neural network and Sparse approximation, has an h-index of 8, and has co-authored 19 publications receiving 413 citations. Previous affiliations of Wangpeng An include Hong Kong Polytechnic University.

Papers
Journal ArticleDOI

Detecting non-hardhat-use by a deep learning method from far-field surveillance videos

TL;DR: In this paper, the authors proposed a high-precision, high-recall, and widely applicable Faster R-CNN method to detect construction workers' non-hardhat-use (NHU) in far-field surveillance videos.
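
As a rough illustration of the detection approach (not the authors' trained model or dataset), the sketch below runs torchvision's off-the-shelf Faster R-CNN on a dummy frame; the 0.5 confidence threshold and the notion of hardhat-specific classes are assumptions for illustration.

```python
import torch
import torchvision

# Load a generic pretrained detector; the paper trains Faster R-CNN on
# construction-site imagery with hardhat-specific classes, which this
# off-the-shelf COCO model does not include.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

frame = torch.rand(3, 480, 640)              # stand-in for a surveillance frame
with torch.no_grad():
    det = model([frame])[0]                  # dict with boxes, labels, scores

keep = det["scores"] > 0.5                   # assumed confidence threshold
print(det["boxes"][keep], det["labels"][keep])
```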
Journal ArticleDOI

Sparse, collaborative, or nonnegative representation: Which helps pattern classification?

TL;DR: In this paper, the authors investigated the use of nonnegative representation (NR) for pattern classification, which had been largely ignored by previous work. They showed that NR can boost the representation power of homogeneous samples while limiting the representation power of heterogeneous samples, making the representation sparse and discriminative at the same time and thus providing a more effective solution to representation-based classification than SR/CR.
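
For intuition, here is a minimal sketch of representation-based classification under a nonnegativity constraint, using SciPy's NNLS solver; it is an assumed simplification, not the authors' exact formulation, and all names are illustrative.

```python
import numpy as np
from scipy.optimize import nnls

def nr_classify(X, labels, y):
    """Nonnegative-representation classification (illustrative sketch).

    Codes the test sample y over all training samples (columns of X)
    under a nonnegativity constraint, then assigns the class whose own
    samples reconstruct y with the smallest residual.
    """
    c, _ = nnls(X, y)                        # min ||y - Xc||_2  s.t.  c >= 0
    best_class, best_res = None, np.inf
    for cls in np.unique(labels):
        mask = labels == cls
        residual = np.linalg.norm(y - X[:, mask] @ c[mask])
        if residual < best_res:
            best_class, best_res = cls, residual
    return best_class

# Toy usage: two well-separated classes of 3-D points.
rng = np.random.default_rng(0)
X = np.column_stack([rng.normal(0, 0.1, (3, 5)), rng.normal(3, 0.1, (3, 5))])
labels = np.array([0] * 5 + [1] * 5)
print(nr_classify(X, labels, rng.normal(3, 0.1, 3)))  # expected: 1
```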
Journal ArticleDOI

A deep learning-based method for detecting non-certified work on construction sites

TL;DR: In this paper, the authors proposed a novel framework to check whether a site worker is working within the constraints of their certification; it comprises key video clip extraction, trade recognition, and worker competency evaluation.
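
A toy skeleton of such a three-stage pipeline is sketched below; every function, field name, and the certificate-lookup convention is a hypothetical placeholder, not the paper's implementation.

```python
def extract_key_clips(frames):
    """Stage 1 (toy stand-in): keep frames flagged as showing work activity."""
    return [f for f in frames if f["active"]]

def recognize_trade(clip):
    """Stage 2 (toy stand-in): a real system would run a video classifier."""
    return clip["worker_id"], clip["trade"]

def check_competency(frames, certificates):
    """Stage 3: flag workers performing trades they hold no certificate for."""
    violations = []
    for clip in extract_key_clips(frames):
        worker, trade = recognize_trade(clip)
        if trade not in certificates.get(worker, set()):
            violations.append((worker, trade))
    return violations

# Toy usage with hypothetical site records.
frames = [{"worker_id": "W1", "trade": "welding", "active": True}]
print(check_competency(frames, {"W1": {"scaffolding"}}))  # [('W1', 'welding')]
```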
Proceedings ArticleDOI

A PID Controller Approach for Stochastic Optimization of Deep Networks

TL;DR: The proposed PID method substantially reduces the overshoot phenomenon of SGD-Momentum and achieves up to 50% acceleration on popular deep network architectures with competitive accuracy, as verified by experiments on benchmark datasets including CIFAR10, CIFAR100, and Tiny-ImageNet.
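
For intuition, a minimal sketch of a PID-style update on a toy quadratic follows; the gains (kp, ki, kd) and the momentum-style integral term are illustrative assumptions, not the paper's tuned hyperparameters.

```python
import numpy as np

def pid_sgd(grad_fn, w0, lr=0.1, kp=0.1, ki=1.0, kd=0.5, momentum=0.9, steps=200):
    """Minimize a function with a PID-style update (illustrative gains).

    The gradient plays the role of the control error: P reacts to the
    current gradient, I to its momentum-like accumulation, and D to its
    change between steps, which is what damps overshoot.
    """
    w = np.asarray(w0, dtype=float)
    integral = np.zeros_like(w)
    prev_grad = np.zeros_like(w)
    for _ in range(steps):
        g = grad_fn(w)
        integral = momentum * integral + g    # I: accumulated gradient (momentum)
        derivative = g - prev_grad            # D: gradient change since last step
        w = w - lr * (kp * g + ki * integral + kd * derivative)
        prev_grad = g
    return w

# Toy usage: minimize f(w) = 0.5 * ||w||^2, whose gradient is w itself.
print(pid_sgd(lambda w: w, w0=[5.0, -3.0]))  # approaches [0, 0]
```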
Proceedings ArticleDOI

Exponential decay sine wave learning rate for fast deep neural network training

TL;DR: This paper proposes a simple yet effective exponentially decaying, sine-wave-like learning rate schedule for SGD that improves its convergence speed, accelerating neural network training tremendously.
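
One plausible form of such a schedule is sketched below; the exact formula, constants, and function name are assumptions for illustration rather than the paper's definition.

```python
import math

def sine_decay_lr(step, total_steps, base_lr=0.1, min_lr=1e-4,
                  cycles=4, decay_rate=5.0):
    """Exponentially decaying sine-wave learning rate (hypothetical form).

    The rate oscillates like a sine wave whose amplitude shrinks
    exponentially over training: early cycles take large exploratory
    steps, later cycles settle into fine-grained updates.
    """
    t = step / total_steps                                     # progress in [0, 1]
    envelope = math.exp(-decay_rate * t)                       # exponential decay
    wave = 0.5 * (1.0 + math.sin(2.0 * math.pi * cycles * t))  # oscillation in [0, 1]
    return min_lr + (base_lr - min_lr) * envelope * wave

# Example: query the schedule at a few points in a 10,000-step run.
for s in (0, 2500, 5000, 10000):
    print(s, round(sine_decay_lr(s, 10000), 5))
```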