Long Quan

Researcher at Hong Kong University of Science and Technology

Publications: 256
Citations: 13,289

Long Quan is an academic researcher at the Hong Kong University of Science and Technology. He has contributed to research topics including affine transformation and real images, has an h-index of 54, and has co-authored 253 publications receiving 10,825 citations. Previous affiliations of Long Quan include the University of Tsukuba and the Centre national de la recherche scientifique.

Papers
Proceedings Article

Image deblurring with blurred/noisy image pairs

TL;DR: This paper shows how to combine information extracted from a blurred/noisy image pair to produce a high-quality image that cannot be obtained by simply denoising the noisy image or deblurring the blurred image alone.
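The summary only hints at how the two inputs are combined. Below is a minimal sketch of the general idea, not the paper's actual algorithm: a crudely denoised version of the noisy image stands in for the latent sharp image, a blur kernel is estimated from the blurred image by regularized division in the Fourier domain, and the blurred image is then deconvolved with that kernel. It assumes grayscale float arrays of equal size; the function names and the `eps` constant are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def estimate_kernel(blurred, noisy, eps=1e-3):
    """Rough blur-kernel estimate from a blurred/noisy image pair (sketch only).

    The denoised noisy image serves as a stand-in for the latent sharp image;
    the kernel is recovered by regularized (Wiener-style) division in the
    Fourier domain.
    """
    latent_est = gaussian_filter(noisy, sigma=1.0)        # placeholder denoiser
    B = np.fft.fft2(blurred)
    L = np.fft.fft2(latent_est)
    K = B * np.conj(L) / (np.abs(L) ** 2 + eps)           # B = K * L  =>  solve for K
    return np.fft.fftshift(np.real(np.fft.ifft2(K)))       # center the kernel

def deconvolve(blurred, kernel, eps=1e-3):
    """Non-blind deconvolution of the blurred image with the estimated kernel."""
    B = np.fft.fft2(blurred)
    K = np.fft.fft2(np.fft.ifftshift(kernel))
    L = B * np.conj(K) / (np.abs(K) ** 2 + eps)
    return np.real(np.fft.ifft2(L))
```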
Book Chapter

MVSNet: Depth inference for unstructured multi-view stereo

TL;DR: This work presents an end-to-end deep learning architecture for depth map inference from multi-view images that flexibly adapts to arbitrary N-view inputs using a variance-based cost metric that maps multiple features into one cost feature.
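For reference, here is a minimal NumPy sketch of a variance-based cost metric of this kind, not the network's implementation; the (N, C, D, H, W) shape convention and the function name are assumptions.

```python
import numpy as np

def variance_cost_volume(feature_volumes):
    """Aggregate N warped feature volumes into a single cost volume.

    feature_volumes: array of shape (N, C, D, H, W) holding the N views'
    features warped to the reference camera's D depth hypotheses.
    The per-element variance across views serves as the cost: it is small
    where the views agree on a depth hypothesis.
    """
    mean = feature_volumes.mean(axis=0)                    # (C, D, H, W)
    var = (feature_volumes ** 2).mean(axis=0) - mean ** 2  # E[x^2] - E[x]^2
    return var
```

Because the statistics are taken across the view axis, the output shape does not depend on N, which is what allows an arbitrary number of input views.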
Journal Article

Linear N-point camera pose determination

TL;DR: This work proposes a family of linear methods that yield a unique solution to 4- and 5-point pose determination for generic reference points, and shows that they do not degenerate for coplanar configurations and even outperform the special linear algorithm for coplanar configurations in practice.
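The paper's specific 4- and 5-point linear solvers are not reproduced here; as a point of comparison, the sketch below shows the classical Direct Linear Transform (DLT), a standard linear projection estimate from n >= 6 point correspondences. All names are illustrative.

```python
import numpy as np

def dlt_projection_matrix(points_3d, points_2d):
    """Classical DLT: linearly estimate a 3x4 projection matrix P from
    n >= 6 correspondences between 3D points and their 2D images
    (a standard linear baseline, not the paper's 4/5-point method)."""
    if len(points_3d) < 6:
        raise ValueError("DLT needs at least 6 correspondences")
    A = []
    for (X, Y, Z), (u, v) in zip(points_3d, points_2d):
        Xh = [X, Y, Z, 1.0]                                  # homogeneous 3D point
        A.append([*Xh, 0, 0, 0, 0, *(-u * np.array(Xh))])    # u-equation row
        A.append([0, 0, 0, 0, *Xh, *(-v * np.array(Xh))])    # v-equation row
    # P is the right null vector of A (singular vector of the smallest singular value).
    _, _, Vt = np.linalg.svd(np.asarray(A))
    return Vt[-1].reshape(3, 4)
```

Recovering the rotation and translation from P would require a further decomposition (e.g. with known intrinsics); that step is omitted here.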
Journal Article

A quasi-dense approach to surface reconstruction from uncalibrated images

TL;DR: A completely automatic and practical system for 3D modeling, from raw images captured by hand-held cameras to a surface representation, is proposed, demonstrating the superior performance of the quasi-dense approach over the standard sparse approach in robustness, accuracy, and applicability.
Journal Article

Image-based plant modeling

TL;DR: A semi-automatic technique for modeling plants directly from images is presented, automating the process of shape recovery while relying on the user to provide simple hints for segmentation, so that the recovered model inherits the realistic shape and complexity of a real plant.