SciSpace (formerly Typeset)

Lincheng Li

Researcher at Tsinghua University

Publications: 14
Citations: 180

Lincheng Li is an academic researcher from Tsinghua University. The author has contributed to research in the topics Computer science & Motion field. The author has an h-index of 2 and has co-authored 4 publications receiving 122 citations.

Papers
Journal ArticleDOI

PMSC: PatchMatch-Based Superpixel Cut for Accurate Stereo Matching

TL;DR: A novel algorithm, PatchMatch-based superpixel cut, is proposed to assign 3D labels of an image more accurately; it currently ranks first among all existing methods on the new, challenging Middlebury 3.0 benchmark.
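The TL;DR above compresses the method heavily; as a rough illustration of the PatchMatch idea it builds on (random label initialization, then alternating spatial-propagation sweeps with random resampling), here is a toy sketch on a synthetic rectified stereo pair. It uses integer fronto-parallel disparities rather than the paper's 3D plane labels, and no superpixel cut — the sizes, sweep count, and cost function are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic rectified pair: every pixel in the left image sits 3 px
# to the right of its match in the right image (true disparity = 3).
TRUE_D, H, W, MAX_D = 3, 24, 32, 6
right = rng.random((H, W))
left = np.empty_like(right)
left[:, TRUE_D:] = right[:, :-TRUE_D]
left[:, :TRUE_D] = rng.random((H, TRUE_D))

def cost(y, x, d):
    """Sum of absolute differences over a 3x3 patch for disparity d."""
    if x - d < 1 or x >= W - 1 or y < 1 or y >= H - 1:
        return np.inf
    pl = left[y - 1:y + 2, x - 1:x + 2]
    pr = right[y - 1:y + 2, x - d - 1:x - d + 2]
    return float(np.abs(pl - pr).sum())

# Random initialization, then alternating left/right propagation sweeps.
disp = rng.integers(0, MAX_D + 1, size=(H, W))
for it in range(4):
    xs = range(W) if it % 2 == 0 else range(W - 1, -1, -1)
    step = -1 if it % 2 == 0 else 1
    for y in range(H):
        for x in xs:
            # Candidates: current label, a random sample, and the labels
            # of already-visited neighbours (spatial propagation).
            cands = {int(disp[y, x]), int(rng.integers(0, MAX_D + 1))}
            if 0 <= x + step < W:
                cands.add(int(disp[y, x + step]))
            if y - 1 >= 0:
                cands.add(int(disp[y - 1, x]))
            disp[y, x] = min(cands, key=lambda d: cost(y, x, d))

# Fraction of interior pixels that recovered the true disparity.
inner = disp[2:-2, MAX_D:-2]
accuracy = (inner == TRUE_D).mean()
```

Because the correct disparity gives an exactly zero matching cost on this synthetic pair, a label that is right anywhere in a row spreads along it in one sweep — which is why PatchMatch-style search converges with so few iterations.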
Journal ArticleDOI

3D cost aggregation with multiple minimum spanning trees for stereo matching.

TL;DR: This work proposes a cost-aggregation method that embeds minimum spanning tree (MST)-based support-region filtering into the PatchMatch 3D label search, rather than aggregating over fixed-size patches, and develops multiple MST structures for cost aggregation over a large set of 3D labels.
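The paper's method is more involved, but its core ingredient — aggregating per-pixel matching costs along a minimum spanning tree of the image, using two tree passes in the spirit of non-local cost aggregation — can be sketched as follows. The toy image, `SIGMA`, and the random per-pixel costs are assumptions, and SciPy's `minimum_spanning_tree` / `breadth_first_order` stand in for a custom tree builder:

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import minimum_spanning_tree, breadth_first_order

rng = np.random.default_rng(1)
H, W, SIGMA = 6, 8, 0.1
img = rng.random((H, W))
N = H * W

# 4-connected grid graph; edge weight = intensity difference.
rows, cols, wts = [], [], []
for y in range(H):
    for x in range(W):
        i = y * W + x
        if x + 1 < W:
            rows.append(i); cols.append(i + 1)
            wts.append(abs(img[y, x] - img[y, x + 1]) + 1e-9)
        if y + 1 < H:
            rows.append(i); cols.append(i + W)
            wts.append(abs(img[y, x] - img[y + 1, x]) + 1e-9)
graph = csr_matrix((wts, (rows, cols)), shape=(N, N))
mst = minimum_spanning_tree(graph)

# BFS from node 0 yields a parent array and a root-to-leaf order.
order, parent = breadth_first_order(mst, 0, directed=False,
                                    return_predecessors=True)
dense = mst.toarray()
dense = dense + dense.T          # symmetric edge-weight lookup
sim = np.ones(N)                 # similarity to parent along the tree
for v in order[1:]:
    sim[v] = np.exp(-dense[v, parent[v]] / SIGMA)

raw = rng.random(N)              # per-pixel matching cost for one label
up = raw.copy()
for v in order[::-1]:            # leaf-to-root pass
    if parent[v] >= 0:
        up[parent[v]] += sim[v] * up[v]
agg = up.copy()
for v in order[1:]:              # root-to-leaf pass
    agg[v] = sim[v] * agg[parent[v]] + (1 - sim[v] ** 2) * up[v]
```

After the two passes, `agg[v]` equals the sum of every pixel's raw cost weighted by the product of edge similarities along its tree path to `v` — the whole image supports each pixel, with support falling off across strong intensity edges.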
Proceedings ArticleDOI

Audio2Head: Audio-driven One-shot Talking-head Generation with Natural Head Motion

TL;DR: This paper proposes an audio-driven talking-head method that generates photo-realistic talking-head videos from a single reference image, modeling rigid 6D head movements with a motion-aware recurrent neural network.
Proceedings ArticleDOI

GaitStrip: Gait Recognition via Effective Strip-based Feature Representations and Multi-Level Framework

TL;DR: This work introduces a novel StriP-Based feature extractor (SPB) that learns strip-based feature representations by directly taking each strip of the human body as the basic unit, and proposes a novel multi-branch structure, the Enhanced Convolution Module (ECM), to extract different representations of gaits.
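The SPB and ECM specifics are in the paper; the generic strip-pooling idea that strip-based gait descriptors start from — slicing a feature map into horizontal body strips and pooling each strip into its own vector — can be sketched in a few lines. The shapes and the max-plus-mean pooling recipe here are illustrative assumptions, not the paper's exact design:

```python
import numpy as np

rng = np.random.default_rng(2)
C, H, W, S = 4, 16, 8, 8            # channels, height, width, strips
feat = rng.random((C, H, W))        # feature map of one silhouette frame

# Split the height axis into S horizontal strips, then pool each strip
# into a single C-dim vector (max + mean is a common gait-feature recipe).
strips = feat.reshape(C, S, H // S, W)
strip_feats = strips.max(axis=(2, 3)) + strips.mean(axis=(2, 3))  # (C, S)
```

Treating each strip as the basic unit keeps body parts (head, torso, legs) in separate descriptors, so part-specific motion cues are not averaged away by a single global pooling.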
Journal ArticleDOI

Zero-Shot Text-to-Parameter Translation for Game Character Auto-Creation

TL;DR: This paper proposes a text-to-parameter translation method (T2P) to achieve zero-shot text-driven game character auto-creation.