Zhu Liang Yu
Researcher at South China University of Technology
Publications - 192
Citations - 4629
Zhu Liang Yu is an academic researcher at South China University of Technology. His research focuses on topics including adaptive beamforming and robustness (computer science). He has an h-index of 31 and has co-authored 176 publications receiving 3,537 citations. His previous affiliations include Nanyang Technological University and China University of Technology.
Papers
Journal Article
A Hybrid BCI System Combining P300 and SSVEP and Its Application to Wheelchair Control
TL;DR: Combining P300 potential and steady-state visual evoked potential (SSVEP) significantly improved the performance of the BCI system in terms of detection accuracy and response time.
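The hybrid system detects the same target with two paradigms at once. One simple way to illustrate such a combination is score-level fusion: normalize the per-target detection scores of each paradigm and sum them. This is a minimal sketch of that idea, not the paper's exact decision rule; the function name and normalization scheme are illustrative assumptions.

```python
import numpy as np

def hybrid_score_fusion(p300_scores, ssvep_scores):
    """Fuse per-target detection scores from two BCI paradigms.

    Each input holds one score per candidate target. Scores are
    min-max normalized to [0, 1] so neither paradigm dominates,
    then summed; the target with the highest fused score wins.
    (Illustrative fusion rule, not the paper's exact method.)
    """
    def normalize(s):
        s = np.asarray(s, dtype=float)
        rng = s.max() - s.min()
        return (s - s.min()) / rng if rng > 0 else np.zeros_like(s)

    fused = normalize(p300_scores) + normalize(ssvep_scores)
    return int(np.argmax(fused)), fused

# Example: four candidate commands (e.g. wheelchair directions).
# Both paradigms weakly agree on target 1, so fusion picks it.
target, fused = hybrid_score_fusion([0.2, 0.9, 0.1, 0.3],
                                    [0.1, 0.8, 0.7, 0.2])
```

Fusing two independent detection scores is what lets the hybrid system outperform either paradigm alone in accuracy and response time.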
Journal Article
An EEG-Based BCI System for 2-D Cursor Control by Combining Mu/Beta Rhythm and P300 Potential
TL;DR: This work proposes a new approach to two-dimensional cursor control in EEG-based brain-computer interfaces by combining two brain signals: the Mu/Beta rhythm during motor imagery and the P300 potential.
Journal Article
Beampattern Synthesis for Linear and Planar Arrays With Antenna Selection by Convex Optimization
TL;DR: In this paper, a convex-optimization-based beampattern synthesis method with antenna selection is proposed for linear and planar arrays, which can achieve arbitrary sidelobe levels.
Patent
Adaptive noise cancelling microphone system
Zhu Liang Yu, Wee Ser, +1 more
TL;DR: An adaptive noise cancelling microphone system for extracting a desired signal, in particular a desired speech signal, was proposed in this paper; it comprises two microphones arranged at a predefined distance from each other.
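A two-microphone adaptive noise canceller of this kind is classically built around an LMS-adapted FIR filter: the reference microphone picks up (mostly) noise, a filter shapes it to match the noise component in the primary microphone, and the residual is the speech estimate. The sketch below shows that textbook LMS structure under simplifying assumptions (a perfectly noise-only reference, a short two-tap noise path); it is not the patented system itself.

```python
import numpy as np

def lms_noise_canceller(primary, reference, n_taps=8, mu=0.01):
    """Textbook LMS adaptive noise canceller (illustrative sketch).

    primary   : signal mic, desired speech + correlated noise
    reference : noise mic, assumed to pick up the noise only
    An FIR filter on the reference is adapted so its output tracks
    the noise in the primary; the error signal is the speech estimate.
    """
    w = np.zeros(n_taps)
    out = np.zeros(len(primary))
    for n in range(n_taps - 1, len(primary)):
        x = reference[n - n_taps + 1:n + 1][::-1]  # newest sample first
        noise_hat = w @ x
        e = primary[n] - noise_hat                 # error = cleaned signal
        w += 2 * mu * e * x                        # LMS weight update
        out[n] = e
    return out

# Example: sinusoidal "speech" corrupted by a filtered noise source.
rng = np.random.default_rng(0)
t = np.arange(4000)
speech = np.sin(2 * np.pi * 0.01 * t)
noise = rng.standard_normal(4000)
primary = speech + np.convolve(noise, [0.6, 0.3], mode="same")
cleaned = lms_noise_canceller(primary, noise)
```

After the filter converges, the residual error in `cleaned` tracks the speech much more closely than the raw primary signal does.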
Journal Article
Deep learning based on Batch Normalization for P300 signal detection
TL;DR: A novel CNN, termed BN3, is developed for detecting P300 signals; Batch Normalization is introduced in the input and convolutional layers to alleviate over-fitting, and the rectified linear unit (ReLU) is employed in the convolutional layers to accelerate training.
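The two ingredients named in the summary, Batch Normalization and ReLU, can be sketched in a few lines of NumPy. This is a minimal training-mode illustration of the operations BN3 applies per layer, not a reimplementation of the BN3 network; the array shapes and parameter defaults are assumptions for the example.

```python
import numpy as np

def batch_norm(x, gamma=1.0, beta=0.0, eps=1e-5):
    """Batch Normalization in training mode, over the batch axis.

    Each feature is normalized to zero mean and unit variance across
    the batch, then rescaled by the learnable gamma/beta parameters
    (held fixed at their defaults here for simplicity).
    """
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mean) / np.sqrt(var + eps)  # per-feature normalization
    return gamma * x_hat + beta              # learnable scale and shift

def relu(x):
    """Rectified linear unit: passes positives, zeroes out negatives."""
    return np.maximum(0.0, x)

# One layer's worth of activations: a batch of 32 samples, 16 features,
# deliberately off-center to show the effect of normalization.
rng = np.random.default_rng(42)
acts = rng.normal(loc=3.0, scale=2.0, size=(32, 16))
out = relu(batch_norm(acts))
```

Normalizing activations batch-by-batch keeps each layer's input distribution stable during training, which is what lets BN act as a regularizer against over-fitting, while ReLU's non-saturating gradient speeds up convergence.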