
Zhenqi Xu

Researcher at Beijing University of Posts and Telecommunications

Publications -  6
Citations -  298

Zhenqi Xu is an academic researcher from Beijing University of Posts and Telecommunications. The author has contributed to research in the topics of convolutional neural networks and deep learning, has an h-index of 3, and has co-authored 4 publications receiving 213 citations. Previous affiliations of Zhenqi Xu include Peking University.

Papers
Proceedings ArticleDOI

Learning temporal features using LSTM-CNN architecture for face anti-spoofing

TL;DR: This work proposes a deep neural network architecture that combines Long Short-Term Memory (LSTM) units with Convolutional Neural Networks (CNNs) for face anti-spoofing. The architecture exploits the LSTM units' ability to capture long-range relations in input sequences while extracting local, dense features through convolution operations.
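The idea of feeding per-frame convolutional features into a recurrent unit can be illustrated with a minimal NumPy sketch. All shapes, kernel sizes, and the final spoof score head are illustrative assumptions, not details from the paper:

```python
import numpy as np

def conv2d_valid(img, kernel):
    # naive 2D valid convolution: local, dense feature extraction
    kH, kW = kernel.shape
    out = np.empty((img.shape[0] - kH + 1, img.shape[1] - kW + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kH, j:j + kW] * kernel)
    return out

def lstm_step(x, h, c, W, U, b):
    # one LSTM step; gates stacked as [input, forget, output, candidate]
    z = W @ x + U @ h + b
    n = h.size
    i, f, o = (1.0 / (1.0 + np.exp(-z[k * n:(k + 1) * n])) for k in range(3))
    g = np.tanh(z[3 * n:])
    c = f * c + i * g          # cell state carries temporal context
    h = o * np.tanh(c)
    return h, c

rng = np.random.default_rng(0)
frames = rng.standard_normal((5, 8, 8))      # toy 5-frame grayscale clip
kernel = rng.standard_normal((3, 3))
n_hid = 4
W = rng.standard_normal((4 * n_hid, 36)) * 0.1  # 6x6 conv map, flattened
U = rng.standard_normal((4 * n_hid, n_hid)) * 0.1
b = np.zeros(4 * n_hid)

h = np.zeros(n_hid)
c = np.zeros(n_hid)
for frame in frames:
    feat = conv2d_valid(frame, kernel).ravel()  # CNN features per frame
    h, c = lstm_step(feat, h, c, W, U, b)       # LSTM aggregates over time

score = 1.0 / (1.0 + np.exp(-h.sum()))  # toy live-vs-spoof probability
```

The convolution supplies spatial texture cues per frame; the LSTM state is what lets the classifier use temporal cues (e.g., motion) across the clip.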
Proceedings ArticleDOI

Recurrent convolutional neural network for video classification

TL;DR: A new deep learning architecture called the recurrent convolutional neural network (RCNN), which combines convolution operations with recurrent links, is proposed for video classification tasks. The network extracts local, dense features from image frames while learning the temporal dependencies between consecutive frames.
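A recurrent convolutional layer can be sketched as a hidden feature map updated by two convolutions, one over the current frame and one over the previous hidden map. This is a toy NumPy illustration of that recurrence; kernel sizes, the mean-pooled classifier head, and all shapes are assumptions for demonstration:

```python
import numpy as np

def conv_same(x, k):
    # naive 'same' 2D convolution with zero padding (odd-sized kernel)
    p = k.shape[0] // 2
    xp = np.pad(x, p)
    out = np.empty_like(x)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(xp[i:i + k.shape[0], j:j + k.shape[1]] * k)
    return out

rng = np.random.default_rng(1)
W_x = rng.standard_normal((3, 3)) * 0.1   # feed-forward kernel (frame -> hidden)
W_h = rng.standard_normal((3, 3)) * 0.1   # recurrent kernel (hidden -> hidden)
frames = rng.standard_normal((4, 6, 6))   # toy 4-frame clip

h = np.zeros((6, 6))
for x_t in frames:
    # recurrent convolution: h_t = relu(conv(x_t) + conv(h_{t-1}))
    h = np.maximum(0.0, conv_same(x_t, W_x) + conv_same(h, W_h))

# pool the final hidden map and score 3 hypothetical video classes
W_c = rng.standard_normal(3)
logits = W_c * h.mean()
probs = np.exp(logits - logits.max())
probs /= probs.sum()
```

Because the recurrent link is itself a convolution, the hidden state stays a spatial feature map, so spatial locality is preserved while temporal context accumulates.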
Journal ArticleDOI

Facial landmark localization by enhanced convolutional neural network

TL;DR: A new approach to localizing landmarks in facial images with a deep convolutional neural network (DCNN): landmark positions are taken as the expectations of the response maps of an enhanced CNN, and an auto-encoder model is applied to the global shape vector, which effectively rectifies outlier points using prior global landmark configurations.
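Taking the expectation of a response map (sometimes called a soft-argmax) yields a sub-pixel landmark estimate. A minimal NumPy sketch, with the softmax normalization and map size chosen here for illustration rather than taken from the paper:

```python
import numpy as np

def landmark_expectation(response_map):
    # normalize the response map into a distribution, then take the
    # expected (x, y) coordinate -- a differentiable, sub-pixel estimate
    p = np.exp(response_map - response_map.max())
    p /= p.sum()
    ys, xs = np.mgrid[0:response_map.shape[0], 0:response_map.shape[1]]
    return np.array([(p * xs).sum(), (p * ys).sum()])

# synthetic response map strongly peaked at (x=5, y=2)
resp = np.zeros((8, 8))
resp[2, 5] = 6.0
xy = landmark_expectation(resp)  # pulled toward the peak at (5, 2)
```

Unlike a hard argmax, the expectation degrades gracefully when the response is diffuse, which is why a shape prior (here, the auto-encoder over the global shape vector) is still useful to rectify outliers.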
Journal ArticleDOI

Learning Neural Volumetric Representations of Dynamic Humans in Minutes

TL;DR: Zhu et al. propose a part-based voxelized human representation that better distributes the network's representational power across different human parts, together with a 2D motion parameterization scheme that increases the convergence rate of deformation-field learning.

Animatable Implicit Neural Representations for Creating Realistic Avatars from Videos

TL;DR: A pose-driven deformation based on the linear blend skinning algorithm combines blend weights with the 3D human skeleton to produce observation-to-canonical correspondences; the approach outperforms recent human-modeling methods.
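Linear blend skinning, the classical algorithm the deformation builds on, moves each point by a weighted combination of per-bone rigid transforms. A self-contained NumPy sketch with a hypothetical two-bone toy rig (the rig and weights are illustrative, not from the paper):

```python
import numpy as np

def lbs(points, weights, transforms):
    """Linear blend skinning.

    points:     (P, 3) positions
    weights:    (P, B) blend weights, rows sum to 1 over bones
    transforms: (B, 4, 4) per-bone rigid transforms
    """
    pts_h = np.hstack([points, np.ones((len(points), 1))])   # homogeneous coords
    blended = np.einsum('pb,bij->pij', weights, transforms)  # per-point 4x4
    out = np.einsum('pij,pj->pi', blended, pts_h)
    return out[:, :3]

# toy rig: bone 0 is identity, bone 1 translates +1 along x
T0 = np.eye(4)
T1 = np.eye(4)
T1[0, 3] = 1.0
transforms = np.stack([T0, T1])

points = np.zeros((1, 3))
weights = np.array([[0.25, 0.75]])          # blend weights for the point
skinned = lbs(points, weights, transforms)  # [[0.75, 0.0, 0.0]]
```

Inverting this mapping is what yields observation-to-canonical correspondences: a point observed in a posed frame is carried back to the canonical space via the blended bone transforms.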