Ping Liu
Researcher at University of Technology, Sydney
Publications - 55
Citations - 3836
Ping Liu is an academic researcher at the University of Technology, Sydney. His research focuses on feature extraction and facial expression recognition. He has an h-index of 16 and has co-authored 55 publications receiving 2,207 citations. His previous affiliations include the University of South Carolina and the Institute of High Performance Computing, Singapore.
Papers
Proceedings ArticleDOI
Filter Pruning via Geometric Median for Deep Convolutional Neural Networks Acceleration
TL;DR: Proposes Filter Pruning via Geometric Median (FPGM), a method that compresses CNN models by pruning redundant filters (those closest to the geometric median of each layer's filters) rather than filters judged to have relatively less importance.
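The geometric-median criterion above can be sketched in a few lines: a filter whose summed distance to all other filters in the layer is smallest lies closest to the geometric median, so it is the most replaceable. The function name and shapes below are illustrative assumptions, not the authors' released code.

```python
import numpy as np

def fpgm_prune_indices(weights, prune_ratio=0.3):
    """Select filter indices to prune via the geometric-median criterion.

    weights: array of shape (num_filters, ...) holding one conv layer's filters.
    Illustrative sketch of the FPGM idea: filters closest to the geometric
    median of all filters are treated as redundant and selected for pruning.
    """
    flat = weights.reshape(weights.shape[0], -1)
    # Sum of Euclidean distances from each filter to every other filter;
    # the geometric median minimizes this sum, so a small total means the
    # filter is near the median, i.e. most redundant.
    dists = np.linalg.norm(flat[:, None, :] - flat[None, :, :], axis=2)
    redundancy = dists.sum(axis=1)
    num_prune = int(prune_ratio * weights.shape[0])
    return np.argsort(redundancy)[:num_prune]
```

With four filters where three are identical and one is an outlier, the outlier is never selected: the near-duplicates sit at the geometric median and are pruned first.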
Proceedings ArticleDOI
Facial Expression Recognition via a Boosted Deep Belief Network
TL;DR: Proposes a novel Boosted Deep Belief Network (BDBN) that performs three training stages iteratively in a unified loopy framework; the BDBN framework yields dramatic improvements in facial expression analysis.
Proceedings ArticleDOI
Pose-Guided Feature Alignment for Occluded Person Re-Identification
TL;DR: Introduces a novel method named Pose-Guided Feature Alignment (PGFA), which exploits pose landmarks to disentangle useful information from occlusion noise; it largely outperforms existing person re-identification methods on three occlusion datasets while remaining top-performing on two holistic datasets.
Posted Content
Filter Pruning via Geometric Median for Deep Convolutional Neural Networks Acceleration
TL;DR: Unlike previous methods, FPGM compresses CNN models by pruning filters with redundancy rather than those with "relatively less" importance; when applied to two image classification benchmarks, the method validates its usefulness and strengths.
Proceedings ArticleDOI
Entangled Transformer for Image Captioning
TL;DR: A Transformer-based sequence modeling framework built only with attention and feedforward layers; it enables the Transformer to exploit semantic and visual information simultaneously and achieves state-of-the-art performance on the MSCOCO image captioning dataset.