
Jingyi Yu

Researcher at ShanghaiTech University

Publications: 274
Citations: 7,604

Jingyi Yu is an academic researcher at ShanghaiTech University. The author has contributed to research on light fields and rendering (computer graphics), has an h-index of 39, and has co-authored 260 publications receiving 5,794 citations. Previous affiliations of Jingyi Yu include Mitsubishi and University UCINF.

Papers
Posted Content

SportsCap: Monocular 3D Human Motion Capture and Fine-grained Understanding in Challenging Sports Videos

TL;DR: Zhang et al. propose a multi-stream spatial-temporal graph convolutional network (ST-GCN) to predict fine-grained semantic action attributes, together with a semantic attribute mapping block that assembles the correlated attributes into a high-level action label for detailed understanding of the whole sequence, enabling applications such as action assessment and motion scoring.
Posted Content

Generic Multiview Visual Tracking

TL;DR: A generic multiview tracking (GMT) framework is proposed that allows camera movement while requiring neither a specific object model nor camera calibration, and that addresses missing-target issues such as occlusion.
Proceedings ArticleDOI

4D Human Body Correspondences from Panoramic Depth Maps

TL;DR: An end-to-end deep learning scheme is proposed to establish dense shape correspondences and subsequently compress the data, achieving state-of-the-art performance on both public and newly captured datasets.
Posted Content

Semantic See-Through Rendering on Light Fields

TL;DR: This work combines deep learning and stereo matching to assign each ray a semantic label and designs tailored weighting schemes for blending the rays, which effectively removes foreground residues when focusing on the background.
Journal ArticleDOI

Accurate Line-Based Relative Pose Estimation With Camera Matrices

TL;DR: A novel stereo trifocal tensor solver is presented, and the camera matrix's ability to continuously and robustly bootstrap visual motion estimation is demonstrated by integrating it into a robust, purely line-based visual odometry pipeline.