Ming Cheng
Researcher at Xiamen University
Publications - 85
Citations - 2794
Ming Cheng is an academic researcher at Xiamen University. He has contributed to research on topics including point clouds and computer science. He has an h-index of 18 and has co-authored 73 publications receiving 2,196 citations. Previous affiliations of Ming Cheng include Tsinghua University.
Papers
Journal ArticleDOI
Design and implementation of a brain-computer interface with high transfer rates
TL;DR: A brain-computer interface based on the steady-state visual evoked potential (SSVEP) that helps users input phone numbers, offering noninvasive signal recording, little training required for use, and a high information transfer rate.
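The core of an SSVEP interface is that each on-screen target flickers at a distinct frequency, and the attended target's frequency dominates the occipital EEG spectrum. Below is a minimal, hypothetical sketch of that frequency-detection step (the function name, sampling rate, and frequency set are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def detect_ssvep_target(eeg, fs, stimulus_freqs):
    """Pick the stimulus frequency with the strongest spectral power.

    eeg: 1-D array of samples from one EEG channel (hypothetical input).
    fs: sampling rate in Hz.
    stimulus_freqs: flicker frequencies assigned to the on-screen targets.
    This is a generic FFT-based sketch, not the authors' exact algorithm.
    """
    spectrum = np.abs(np.fft.rfft(eeg))
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
    # Power at the frequency bin nearest each candidate stimulus frequency.
    powers = [spectrum[np.argmin(np.abs(freqs - f))] for f in stimulus_freqs]
    return stimulus_freqs[int(np.argmax(powers))]

# Simulated 2-second recording dominated by a 10 Hz flicker response.
fs = 250
t = np.arange(0, 2, 1.0 / fs)
eeg = np.sin(2 * np.pi * 10 * t) + 0.3 * np.random.randn(len(t))
print(detect_ssvep_target(eeg, fs, [8.0, 10.0, 12.0, 15.0]))
```

In practice, SSVEP systems improve on this with harmonics and multi-channel methods, but the spectral-peak idea is the same.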
Journal ArticleDOI
A BCI-based environmental controller for the motion-disabled
TL;DR: An environmental controller for the motion-disabled using a BCI technique based on the steady-state visual evoked potential; the system, composed of a stimulator, a digital signal processor, and a trainable infrared remote controller, has been successfully applied to controlling an electric apparatus.
Proceedings ArticleDOI
LO-Net: Deep Real-Time Lidar Odometry
TL;DR: A novel deep convolutional network pipeline, LO-Net, for real-time lidar odometry estimation that effectively learns feature representations for odometry estimation and implicitly exploits the sequential dependencies and dynamics in the data.
Journal ArticleDOI
Tree Classification in Complex Forest Point Clouds Based on Deep Learning
TL;DR: This letter proposes a new voxel-based deep learning method to classify tree species in 3-D point clouds collected from complex forest scenes, exhibiting performance superior to that of other 3-D tree species classification methods.
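A voxel-based method's first step is mapping an irregular point cloud onto a regular 3-D occupancy grid that a CNN can consume. The sketch below shows a generic voxelization under assumed conventions (uniform scaling into the grid, binary occupancy); it is not the letter's exact preprocessing pipeline:

```python
import numpy as np

def voxelize(points, grid_size=32):
    """Map an (N, 3) point cloud into a binary occupancy grid.

    Generic sketch: points are shifted to the origin, scaled uniformly so
    the largest axis extent fills the grid, then each point marks its voxel.
    """
    mins = points.min(axis=0)
    extent = max(np.ptp(points, axis=0).max(), 1e-9)  # uniform scale, guard zero
    idx = ((points - mins) / extent * grid_size).astype(int)
    idx = np.clip(idx, 0, grid_size - 1)  # points on the far boundary
    grid = np.zeros((grid_size,) * 3, dtype=np.uint8)
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = 1
    return grid

# A tiny synthetic "tree": random points along a thin vertical trunk.
pts = np.column_stack([np.random.rand(100) * 0.1,
                       np.random.rand(100) * 0.1,
                       np.random.rand(100)])
occ = voxelize(pts)
print(occ.shape, int(occ.sum()))
```

Uniform (rather than per-axis) scaling preserves the aspect ratio of the tree, which matters when shape is the classification cue.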
Journal ArticleDOI
NormalNet: A voxel-based CNN for 3D object classification and retrieval
TL;DR: NormalNet, a voxel-based convolutional neural network (CNN) designed for 3D object classification and retrieval, takes normal vectors of the object surfaces as input, which demonstrate stronger discrimination capability than binary voxels.
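When the input is a point cloud rather than a mesh, surface normals must first be estimated before they can populate the voxel grid. A standard way to do this is PCA over each point's local neighborhood; the sketch below illustrates that common technique and is not necessarily NormalNet's own preprocessing:

```python
import numpy as np

def estimate_normals(points, k=8):
    """Estimate a unit normal per point via PCA over its k nearest neighbors.

    The normal is the eigenvector associated with the smallest eigenvalue
    of the local covariance matrix (the direction of least variance).
    Brute-force k-NN; fine for small clouds, a KD-tree would scale better.
    """
    normals = np.empty_like(points)
    for i, p in enumerate(points):
        nbrs = points[np.argsort(np.linalg.norm(points - p, axis=1))[:k]]
        cov = np.cov(nbrs.T)
        eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
        normals[i] = eigvecs[:, 0]              # smallest-eigenvalue direction
    return normals

# Points on the z = 0 plane should get normals close to +/-(0, 0, 1).
plane = np.column_stack([np.random.rand(50), np.random.rand(50), np.zeros(50)])
n = estimate_normals(plane)
print(np.abs(n[:, 2]).min())  # close to 1.0 for a flat patch
```

Unlike a binary occupancy bit, each occupied voxel can then carry a 3-component normal, giving the CNN orientation information about the local surface.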