Jingyi Yu
Researcher at ShanghaiTech University
Publications - 274
Citations - 7604
Jingyi Yu is an academic researcher at ShanghaiTech University. He has contributed to research on light fields and rendering (computer graphics). He has an h-index of 39 and has co-authored 260 publications receiving 5,794 citations. Previous affiliations of Jingyi Yu include Mitsubishi & University UCINF.
Papers
Patent
Method and system for three-dimensional model reconstruction
TL;DR: A method and system for generating a 3D model of an object from a plurality of views is described. The method is limited, however, to the case of a single camera.
Posted Content
Light Field Super-resolution via Attention-Guided Fusion of Hybrid Lenses
TL;DR: Zhang et al. propose an end-to-end learning-based approach that comprehensively utilizes the specific characteristics of the input from two complementary and parallel perspectives.
Journal ArticleDOI
Full-Volume 3D Fluid Flow Reconstruction With Light Field PIV
TL;DR: Wang et al. propose a low-cost particle imaging velocimetry (PIV) solution that uses compact lenslet-based light field cameras as the imaging device, estimating particle depths from a single viewpoint by exploiting the focal-stack symmetry of the light field.
Posted Content
GNeRF: GAN-based Neural Radiance Field without Posed Camera
TL;DR: Zhang et al. introduce GNeRF, a framework that marries Generative Adversarial Networks (GANs) with Neural Radiance Field (NeRF) reconstruction to handle complex scenarios with unknown, and even randomly initialized, camera poses.
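For context, the NeRF reconstruction that GNeRF builds on renders an image by alpha-compositing density and radiance samples along each camera ray. A minimal single-ray sketch of that compositing step (scalar radiance instead of RGB, illustrative values only; this is not the authors' code):

```python
import math

def render_ray(sigmas, colors, delta):
    """Discrete NeRF-style volume rendering along one ray.

    sigmas: per-sample volume densities
    colors: per-sample scalar radiance values (a real NeRF uses RGB)
    delta:  spacing between consecutive samples
    Returns the accumulated color and the per-sample weights.
    """
    transmittance = 1.0  # fraction of light surviving to the current sample
    out = 0.0
    weights = []
    for sigma, c in zip(sigmas, colors):
        alpha = 1.0 - math.exp(-sigma * delta)  # opacity of this segment
        w = transmittance * alpha               # this sample's contribution
        out += w * c
        weights.append(w)
        transmittance *= 1.0 - alpha            # attenuate remaining light
    return out, weights

# A near-opaque sample early along the ray dominates the rendered value.
color, weights = render_ray([0.0, 50.0, 0.1], [0.2, 0.9, 0.5], delta=0.5)
```

GNeRF's contribution is on top of this: instead of assuming the ray origins and directions come from known camera poses, it treats the poses as unknowns and uses an adversarial objective to estimate them jointly with the radiance field.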
Journal ArticleDOI
High-resolution tomographic reconstruction of optical absorbance through scattering media using neural fields
Wuwei Ren, Siyuan Shen, Linlin Li, S.K. Gao, Yuehan Wang, L. Gu, Shiying Li, Xingjun Zhu, Jiahua Jiang, Jingyi Yu, +9 more
TL;DR: NeuDOT uses neural fields to continuously encode the optical absorbance within the volume, bridging the gap between model accuracy and high resolution and achieving sub-millimetre lateral resolution at a depth of 14 mm.