
Yingliang Zhang

Researcher at ShanghaiTech University

Publications: 6
Citations: 281

Yingliang Zhang is an academic researcher from ShanghaiTech University. The author has contributed to research in the topics of Rendering (computer graphics) and Computer science, has an h-index of 2, and has co-authored 2 publications receiving 176 citations.

Papers
Journal ArticleDOI

Single Sample Face Recognition via Learning Deep Supervised Autoencoders

TL;DR: Results demonstrate that the stacked supervised autoencoder-based face representation significantly outperforms commonly used image representations in single-sample-per-person face recognition, and that it achieves higher recognition accuracy than other deep learning models, including the deep Lambertian network.
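The core mechanism of a supervised autoencoder in this setting — training the network to map varied probe images back to the person's single gallery image, rather than reconstructing the input itself — can be sketched as follows. The toy data, layer sizes, and training loop below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: one "gallery" face vector per person (the single sample),
# plus noisy/varied probe versions used as training inputs.
n_persons, dim, hidden = 5, 32, 16
gallery = rng.normal(size=(n_persons, dim))                  # clean targets
probes = gallery + 0.3 * rng.normal(size=(n_persons, dim))   # varied inputs

# One-hidden-layer supervised autoencoder: encode a probe,
# decode it toward the corresponding gallery face.
W1 = rng.normal(scale=0.1, size=(dim, hidden))
W2 = rng.normal(scale=0.1, size=(hidden, dim))

def forward(x):
    h = np.tanh(x @ W1)        # encoder
    return h, h @ W2           # linear decoder

lr = 0.05
losses = []
for _ in range(500):
    h, recon = forward(probes)
    err = recon - gallery                  # supervised target: gallery face
    losses.append(float(np.mean(err ** 2)))
    # Backpropagate through the decoder and encoder
    gW2 = h.T @ err / n_persons
    gh = (err @ W2.T) * (1 - h ** 2)       # tanh derivative
    gW1 = probes.T @ gh / n_persons
    W2 -= lr * gW2
    W1 -= lr * gW1
```

After training, the hidden activation `h` serves as a variation-robust representation: probes of the same person map near that person's gallery code even though only one gallery sample exists.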
Proceedings ArticleDOI

Fourier PlenOctrees for Dynamic Radiance Field Rendering in Real-time

TL;DR: This paper presents a novel Fourier PlenOctree (FPO) technique for efficient neural modeling and real-time rendering of dynamic scenes captured under the free-view video (FVV) setting, and shows that the resulting FPO enables compact memory overhead for handling dynamic objects while supporting efficient fine-tuning.
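The underlying compression idea — storing a scene quantity's time variation as a truncated set of Fourier coefficients, so any frame can be reconstructed on the fly — can be illustrated with a toy 1-D version. The signal, coefficient count `K`, and function names below are illustrative assumptions, not the paper's actual per-leaf octree representation:

```python
import numpy as np

# A time-varying density sampled at T frames (toy band-limited signal).
T = 64
t = np.arange(T) / T
density = 1.0 + 0.5 * np.sin(2 * np.pi * 3 * t) + 0.2 * np.cos(2 * np.pi * 5 * t)

# Compress: keep only the first K complex Fourier coefficients
# instead of all T time samples.
K = 8
coeffs = np.fft.rfft(density)[:K]

def eval_at(time, coeffs, T):
    """Evaluate the truncated Fourier series at a fractional time in [0, 1)."""
    k = np.arange(len(coeffs))
    basis = np.exp(2j * np.pi * k * time)
    # rfft convention: the DC term is counted once, the others twice
    # (their conjugate halves were dropped by rfft).
    weights = np.where(k == 0, 1.0, 2.0)
    return np.real(np.sum(weights * coeffs * basis)) / T
```

Because the toy signal only contains frequencies below `K`, reconstruction from the 8 kept coefficients is exact at every frame; for real captures the truncation is lossy, trading temporal detail for memory.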
Journal ArticleDOI

Human Performance Modeling and Rendering via Neural Animated Mesh

TL;DR: This paper proposes a hybrid neural tracker to generate animated meshes, which combines explicit non-rigid tracking with implicit dynamic deformation in a self-supervised framework, and introduces a neural surface reconstructor for high-quality surface generation in minutes.
Journal ArticleDOI

Artemis: Articulated Neural Pets with Appearance and Motion synthesis

TL;DR: The core of ARTEMIS is a neural-generated (NGI) animal engine, which adopts an efficient octree-based representation for animal animation and fur rendering, and introduces an effective optimization scheme to reconstruct the skeletal motion of real animals captured by a multi-view RGB and Vicon camera array.
Journal ArticleDOI

Refocusable Gigapixel Panoramas for Immersive VR Experiences

TL;DR: A novel out-of-core rendering technique that supports not only classic panning, tilting, and zooming but also dynamic refocusing when viewing a gigapixel panorama (GPP) on a head-mounted display (HMD), inspired by the network packet transmission mechanisms used in distributed visualization.
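Out-of-core rendering in this style keeps only the tiles near the current view resident in memory, streaming the rest on demand. A minimal sketch of such a tile cache with LRU eviction follows; the `TileCache` class and its loader are hypothetical illustrations, not the paper's system:

```python
from collections import OrderedDict

class TileCache:
    """Keep at most `capacity` panorama tiles resident; fetch others on demand."""

    def __init__(self, loader, capacity=64):
        self.loader = loader          # loads one tile from disk/network
        self.capacity = capacity      # max resident tiles
        self.tiles = OrderedDict()    # (level, x, y) -> tile data
        self.loads = 0

    def get(self, level, x, y):
        key = (level, x, y)
        if key in self.tiles:
            self.tiles.move_to_end(key)     # mark as recently used
            return self.tiles[key]
        tile = self.loader(level, x, y)     # cache miss: stream the tile in
        self.loads += 1
        self.tiles[key] = tile
        if len(self.tiles) > self.capacity:
            self.tiles.popitem(last=False)  # evict least-recently-used tile
        return tile

# Usage: render repeated frames of one view; only the first frame pays I/O.
cache = TileCache(loader=lambda l, x, y: f"tile-{l}-{x}-{y}", capacity=4)
for _ in range(3):
    view = [cache.get(2, x, 0) for x in range(4)]
```

Because the panorama is addressed as (zoom level, tile) keys, the full gigapixel image never has to fit in memory; panning or refocusing simply changes which keys the renderer requests.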