Giang Bui
Researcher at University of Missouri
Publications - 9
Citations - 1380
Giang Bui is an academic researcher from the University of Missouri. The author has contributed to research on topics including feature detection (computer vision) and point clouds. The author has an h-index of 6 and has co-authored 9 publications receiving 954 citations.
Papers
Proceedings ArticleDOI
NTIRE 2017 Challenge on Single Image Super-Resolution: Methods and Results
Radu Timofte, Eirikur Agustsson, Luc Van Gool, Ming-Hsuan Yang, Lei Zhang, Bee Oh Lim, Sanghyun Son, Heewon Kim, Seungjun Nah, Kyoung Mu Lee, Xintao Wang, Yapeng Tian, Ke Yu, Yulun Zhang, Shixiang Wu, Chao Dong, Liang Lin, Yu Qiao, Chen Change Loy, Woong Bae, Jaejun Yoo, Yoseob Han, Jong Chul Ye, Jae-Seok Choi, Munchurl Kim, Yuchen Fan, Jiahui Yu, Wei Han, Ding Liu, Haichao Yu, Zhangyang Wang, Honghui Shi, Xinchao Wang, Thomas S. Huang, Yunjin Chen, Kai Zhang, Wangmeng Zuo, Zhimin Tang, Linkai Luo, Shaohui Li, Min Fu, Lei Cao, Wen Heng, Giang Bui, Truc Le, Ye Duan, Dacheng Tao, Ruxin Wang, Xu Lin, Jianxin Pang, Xu Jinchang, Yu Zhao, Xiangyu Xu, Jinshan Pan, Deqing Sun, Yujin Zhang, Xibin Song, Yuchao Dai, Xueying Qin, Xuan-Phung Huynh, Tiantong Guo, Hojjat Seyed Mousavi, Tiep H. Vu, Vishal Monga, Cristóvão Cruz, Karen Egiazarian, Vladimir Katkovnik, Rakesh Mehta, Arnav Kumar Jain, Abhinav Agarwalla, Ch V. Sai Praveen, Ruofan Zhou, Hongdiao Wen, Che Zhu, Zhiqiang Xia, Zhengtao Wang, Qi Guo +76 more
TL;DR: This paper reviews the first challenge on single image super-resolution (restoration of rich details in a low-resolution image), with a focus on the proposed solutions and results, and gauges the state of the art in single image super-resolution.
Journal ArticleDOI
A multi-view recurrent neural network for 3D mesh segmentation
Truc Le, Giang Bui, Ye Duan +2 more
TL;DR: A multi-view recurrent neural network (MV-RNN) approach for 3D mesh segmentation is introduced that combines convolutional neural networks with a two-layer long short-term memory (LSTM) network to yield coherent segmentation of 3D shapes.
Journal ArticleDOI
Point-based rendering enhancement via deep learning
TL;DR: A novel deep learning-based approach is proposed that generates high-resolution, photo-realistic point renderings from low-resolution point clouds; the deep neural network for point-based rendering is trained using co-registered high-quality photographs as ground-truth data, and the approach outperforms state-of-the-art methods.
Journal ArticleDOI
An Ensemble Approach to Image Matching Using Contextual Features
TL;DR: It is shown that incorporating contextual information can provide complementary information for the scale-invariant feature transform (SIFT) and boost local keypoint matching performance, and that it can also be used to describe corner feature points.
Proceedings ArticleDOI
Integrating videos with LIDAR scans for virtual reality
TL;DR: This work demonstrates how to register a variety of 2D imagery with a range scan to construct photo-realistic models, and how to extract walking people captured in videos and model them in a 3D space.