
Hao Li

Researcher at Alibaba Group

Publications: 225
Citations: 14,999

Hao Li is an academic researcher from Alibaba Group. The author has contributed to research in the topics of Deep learning and Computer science, has an h-index of 56, and has co-authored 221 publications receiving 10,232 citations. Previous affiliations of Hao Li include the University of Southern California and the Institute for Creative Technologies.

Papers
Posted Content

Real-Time Facial Segmentation and Performance Capture from RGB Input

TL;DR: A state-of-the-art regression-based facial tracking framework, trained on segmented face images, is adopted, and accurate, uninterrupted facial performance capture is demonstrated in the presence of extreme occlusion and even side views.
Proceedings ArticleDOI

Instant-Teaching: An End-to-End Semi-Supervised Object Detection Framework

TL;DR: In this paper, an effective end-to-end semi-supervised object detection framework called Instant-Teaching is proposed, which uses instant pseudo labeling with extended weak-strong data augmentations for teaching during each training iteration.
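The core training loop described above can be sketched as follows. This is a minimal illustration of confidence-filtered pseudo labeling with weak-strong augmentation, not the paper's implementation: `model` is a hypothetical callable returning class probabilities, and classification cross-entropy stands in for the detection loss used in the actual framework.

```python
import numpy as np

def instant_teaching_step(model, weak_batch, strong_batch, threshold=0.9):
    """One semi-supervised iteration in the style of Instant-Teaching.

    Pseudo labels are generated on weakly augmented images, only
    confident predictions are kept, and those labels supervise the
    strongly augmented views of the same images. All names here are
    illustrative assumptions, not the paper's API.
    """
    probs = model(weak_batch)            # teacher pass on weak views
    conf = probs.max(axis=1)             # per-example confidence
    pseudo = probs.argmax(axis=1)        # hard pseudo labels
    keep = conf >= threshold             # drop low-confidence labels
    # A real detector would compute a detection loss on boxes here;
    # cross-entropy on the retained examples stands in for it.
    kept_probs = model(strong_batch)[keep]
    loss = -np.log(kept_probs[np.arange(keep.sum()), pseudo[keep]] + 1e-8).mean()
    return loss, int(keep.sum())
```

In practice the threshold trades label noise against the number of usable pseudo labels; the paper's extended augmentations make the student's strong view harder than the teacher's weak view.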
Journal ArticleDOI

Temporally coherent completion of dynamic shapes

TL;DR: A novel shape completion technique for creating temporally coherent watertight surfaces from real-time captured dynamic performances is presented, which does not suffer from the error accumulation typically introduced by noise, large deformations, and drastic topological changes.
Posted Content

Extremely Low Bit Neural Network: Squeeze the Last Bit Out with ADMM

TL;DR: In this paper, the authors focus on compressing and accelerating deep models with network weights represented by very small numbers of bits, referred to as extremely low bit neural network, and model this problem as a discretely constrained optimization problem.
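The key subproblem in such a formulation is projecting real-valued weights onto a scaled discrete set. The sketch below, a hypothetical helper and not the authors' code, illustrates the projection step for ternary (2-bit) weights `alpha * {-1, 0, +1}` by alternating between level assignment and refitting the scale.

```python
import numpy as np

def project_low_bit(w, bits=2, iters=10):
    """Project weights w onto a scaled discrete set alpha * levels.

    For bits=2 the levels are [-1, 0, 1]. Alternates two closed-form
    steps: assign each weight to its nearest scaled level, then refit
    alpha to minimize ||w - alpha * q||^2 for the fixed assignment.
    Illustrative sketch of the discrete projection used inside an
    ADMM loop; names and defaults are assumptions.
    """
    levels = np.arange(-(2 ** (bits - 1) - 1), 2 ** (bits - 1)).astype(float)
    alpha = 1.0
    q = np.zeros_like(w)
    for _ in range(iters):
        # nearest scaled level for each weight
        q = levels[np.argmin(np.abs(w[:, None] - alpha * levels[None, :]), axis=1)]
        denom = np.dot(q, q)
        if denom == 0:
            break
        alpha = np.dot(w, q) / denom     # least-squares optimal scale
    return alpha * q
```

In the full ADMM scheme this projection is applied to an auxiliary variable while an unconstrained copy of the weights is trained with gradients, and a dual term pulls the two together.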
Proceedings Article

Auto-Conditioned Recurrent Networks for Extended Complex Human Motion Synthesis

Abstract: We present a real-time method for synthesizing highly complex human motions using a novel training regime we call the auto-conditioned Recurrent Neural Network (acRNN). Recently, researchers have attempted to synthesize new motion using autoregressive techniques, but existing methods tend to freeze or diverge after a couple of seconds due to an accumulation of errors that are fed back into the network. Furthermore, such methods have only been shown to be reliable for relatively simple human motions, such as walking or running. In contrast, our approach can synthesize arbitrary motions with highly complex styles, including dances and martial arts in addition to locomotion. The acRNN accomplishes this by explicitly accounting for autoregressive noise accumulation during training. To our knowledge, our work is the first to demonstrate the ability to generate over 18,000 continuous frames (300 seconds) of new complex human motion in different styles.
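The auto-conditioning schedule described in the abstract can be sketched as follows: during training, the network alternates between consuming ground-truth frames and its own previous outputs, so it learns to recover from its own autoregressive errors. This is an illustrative sketch under assumed names (`step_fn` is a hypothetical one-step predictor), not the paper's implementation.

```python
import numpy as np

def auto_conditioned_rollout(step_fn, seed_frame, length,
                             gt_len=5, cond_len=5, ground_truth=None):
    """Roll out a sequence, alternating input sources.

    Feeds ground-truth frames for gt_len steps, then the network's own
    previous output for cond_len steps, and repeats. At inference time
    (ground_truth=None) the model always conditions on its own output.
    """
    frames = [seed_frame]
    use_own = False     # start by conditioning on ground truth
    counter = 0
    for t in range(length):
        if ground_truth is not None and not use_own:
            inp = ground_truth[t]        # teacher-forced input
        else:
            inp = frames[-1]             # self-conditioned input
        frames.append(step_fn(inp))
        counter += 1
        # switch source when the current phase's length is reached
        if counter >= (cond_len if use_own else gt_len):
            use_own = not use_own
            counter = 0
    return np.stack(frames[1:])
```

Exposing the network to its own (noisy) outputs during training is what lets generation run for thousands of frames without freezing or diverging.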