
Hao Li

Researcher at Alibaba Group

Publications: 225
Citations: 14,999

Hao Li is an academic researcher at Alibaba Group. He has contributed to research in the areas of deep learning and computer science, has an h-index of 56, and has co-authored 221 publications receiving 10,232 citations. His previous affiliations include the University of Southern California and the Institute for Creative Technologies.

Papers
Journal ArticleDOI

3D hair synthesis using volumetric variational autoencoders

TL;DR: This work proposes to represent the manifold of 3D hairstyles implicitly through the compact latent space of a volumetric variational autoencoder (VAE). The approach is significantly more robust and handles a much wider range of hairstyles than state-of-the-art data-driven hair modeling techniques, even on challenging inputs.
Proceedings ArticleDOI

Transformable Bottleneck Networks

TL;DR: It is demonstrated that the bottlenecks produced by networks trained for this task contain meaningful spatial structure that allows them to intuitively perform a variety of image manipulations in 3D, well beyond the rigid transformations seen during training.
Posted Content

Auto-Conditioned Recurrent Networks for Extended Complex Human Motion Synthesis.

TL;DR: This paper presents a real-time method for synthesizing highly complex human motions using a novel training regime, the auto-conditioned recurrent neural network (acRNN).
Proceedings ArticleDOI

Mesoscopic Facial Geometry Inference Using Deep Neural Networks

TL;DR: This work proposes to encode fine details in high-resolution displacement maps which are learned through a hybrid network adopting the state-of-the-art image-to-image translation network and super resolution network, enabling the full range of facial detail to be modeled.
Posted Content

On the Effects of Batch and Weight Normalization in Generative Adversarial Networks

Sitao Xiang, +1 more
TL;DR: A weight normalization (WN) approach is introduced for GAN training that significantly improves the stability and efficiency of training and the quality of the generated samples. The work also introduces squared Euclidean reconstruction error on a test set as a new objective measure of training performance in terms of speed, stability, and sample quality.
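Weight normalization in general (not necessarily the specific variant studied in this paper) reparameterizes each weight vector as a learned scale times a unit direction, decoupling the magnitude of the weights from their orientation. A minimal NumPy sketch of that idea, with all names hypothetical:

```python
import numpy as np

def weight_norm(v, g):
    """Weight normalization: w = g * v / ||v||.

    v : unnormalized direction parameter (vector)
    g : learned scalar scale parameter

    Decoupling the norm (g) from the direction (v / ||v||) is the
    core reparameterization behind weight normalization.
    """
    return g * v / np.linalg.norm(v)

v = np.array([3.0, 4.0])  # direction parameter, ||v|| = 5
g = 2.0                   # scale parameter
w = weight_norm(v, g)     # resulting weight vector with ||w|| == g
```

In practice, gradient descent updates `v` and `g` separately, which is what stabilizes training relative to optimizing the raw weights directly.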