scispace - formally typeset

Yinqiang Zheng

Researcher at University of Tokyo

Publications - 122
Citations - 2729

Yinqiang Zheng is an academic researcher from the University of Tokyo. The author has contributed to research on topics including hyperspectral imaging and computer science. The author has an h-index of 21, and has co-authored 101 publications receiving 1555 citations. Previous affiliations of Yinqiang Zheng include Shanghai Jiao Tong University and the National Institute of Informatics.

Papers
Proceedings ArticleDOI

Revisiting the PnP Problem: A Fast, General and Optimal Solution

TL;DR: In this paper, a non-iterative O(n) solution was proposed for the perspective-n-point (PnP) problem, which is fast, generally applicable and globally optimal, using the Gröbner basis technique.
Proceedings ArticleDOI

Learning to Reduce Dual-Level Discrepancy for Infrared-Visible Person Re-Identification

TL;DR: A novel Dual-level Discrepancy Reduction Learning (D²RL) scheme is proposed which handles the two discrepancies separately in infrared-visible person re-identification and outperforms the state-of-the-art methods.
Proceedings ArticleDOI

Practical low-rank matrix approximation under robust L1-norm

TL;DR: This work proposes to add a convex trace-norm regularization term to improve convergence, without introducing too much heterogeneous information, and customizes a scalable first-order optimization algorithm to solve the regularized formulation on the basis of the augmented Lagrange multiplier (ALM) method.
Book ChapterDOI

Efficient Spatio-Temporal Recurrent Neural Network for Video Deblurring

TL;DR: A novel dataset (BSD) is contributed to the community by collecting paired blurry/sharp video clips using a co-axis beam splitter acquisition system, and a global spatio-temporal attention module is proposed to fuse the effective hierarchical features from past and future frames.
Proceedings ArticleDOI

Learning to See Moving Objects in the Dark

TL;DR: A fully convolutional network with 3D and 2D miscellaneous operations is utilized to learn an enhancement mapping, with proper spatio-temporal transformation, from raw camera sensor data to bright RGB videos; it outperforms state-of-the-art low-light image/video enhancement algorithms.