Author

Li Wang

Bio: Li Wang is an academic researcher from Guangzhou University. The author has contributed to research on topics including convolutional neural networks and time-domain analysis. The author has an h-index of 1 and has co-authored 4 publications receiving 4 citations.

Papers
Proceedings ArticleDOI
01 Jul 2020
TL;DR: This paper proposes a mixed-scale CNN architecture combined with a data augmentation method to classify motor imagery EEG, which addresses the limitations of existing CNN-based motor imagery classification methods and improves classification accuracy.
Abstract: The brain-computer interface (BCI) based on electroencephalography (EEG) converts a subject's intentions into control signals. Motor imagery has been widely studied for BCIs. In recent years, classification methods based on convolutional neural networks (CNNs) have been proposed. However, most existing methods use a single convolution scale in the CNN, and limited training data is another problem that affects the results. To solve these problems, we propose a mixed-scale CNN architecture combined with a data augmentation method to classify motor imagery EEG. On the BCI Competition IV dataset 2b, the average classification accuracy is 81.52%, a better result than the existing methods compared. The method effectively addresses the shortcomings of existing CNN-based motor imagery classification approaches and improves classification accuracy.
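The core idea of the paper, convolving the same EEG window with several kernel sizes in parallel and concatenating the per-scale outputs, can be sketched in plain NumPy. This is a minimal illustration, not the authors' implementation; the function names and kernel sizes are assumptions, and the averaging kernels stand in for learned filters:

```python
import numpy as np

def conv1d_valid(x, kernel):
    """Valid-mode 1-D convolution of a single-channel signal."""
    return np.convolve(x, kernel, mode="valid")

def mixed_scale_features(x, kernel_sizes=(3, 5, 11)):
    """Filter one EEG channel at several temporal scales and
    concatenate the per-scale outputs into one feature vector."""
    feats = []
    for k in kernel_sizes:
        kernel = np.ones(k) / k  # stand-in for a learned kernel
        feats.append(conv1d_valid(x, kernel))
    return np.concatenate(feats)

# A 250-sample window yields (250-3+1)+(250-5+1)+(250-11+1) = 734 features.
x = np.random.default_rng(0).standard_normal(250)
f = mixed_scale_features(x)
```

In a trained network each scale would use learned kernels and the concatenated maps would feed later layers; the point here is only the parallel multi-scale structure.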

7 citations

Patent
03 Nov 2020
TL;DR: In this article, a convolutional neural network training method, an electroencephalogram signal recognition method and device, and a medium are described. The method executes a plurality of acquisition processes, obtains an EEG signal in each process, applies time-domain and frequency-domain data enhancement to the signal, and trains a CNN with the enhanced EEG signals.
Abstract: The invention discloses a convolutional neural network training method, an electroencephalogram (EEG) signal recognition method and device, and a medium. The method executes a plurality of acquisition processes, obtains an EEG signal in each process, applies time-domain and frequency-domain data enhancement to the signal, and trains a convolutional neural network with the enhanced EEG signals. The network trained by this method is a multi-input, multi-convolution-scale, multi-convolution-type hybrid convolutional neural network; the sizes of the multi-input convolution layers and convolution kernels are reasonably designed, and the method achieves high recognition accuracy. The training set used to train the network is obtained by expanding the acquired EEG signals through time-domain and frequency-domain data enhancement, which increases the amount of training data, reduces over-fitting, effectively copes with noise interference in the EEG signals, and improves recognition performance. The method is widely applicable in the field of signal processing.
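The two enhancement steps the patent names, time-domain and frequency-domain data enhancement, can be sketched as follows. The specific transforms (additive noise in time, magnitude jitter of FFT bins in frequency) are illustrative assumptions; the patent does not disclose these exact operations here:

```python
import numpy as np

rng = np.random.default_rng(42)

def augment_time(x, noise_std=0.05):
    """Time-domain enhancement: add small Gaussian noise to the raw EEG."""
    return x + rng.normal(0.0, noise_std, size=x.shape)

def augment_freq(x, scale_std=0.05):
    """Frequency-domain enhancement: jitter the magnitude of each FFT bin,
    keep the phase, and transform back to the time domain."""
    spec = np.fft.rfft(x)
    spec *= 1.0 + rng.normal(0.0, scale_std, size=spec.shape)
    return np.fft.irfft(spec, n=len(x))

x = rng.standard_normal(256)
x_t = augment_time(x)   # same length, slightly perturbed samples
x_f = augment_freq(x)   # same length, slightly perturbed spectrum
```

Each augmented copy keeps the original trial's label, so the training set grows without any additional recording sessions.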

1 citation

Proceedings ArticleDOI
16 Oct 2020
TL;DR: In this paper, a temporal-spatial-frequency feature selection model based on binary quantum particle swarm optimization (BQPSO) is proposed to improve EEG signal recognition results.
Abstract: Electroencephalography (EEG) signals can be identified and translated into control commands by brain-computer interface (BCI) systems. To improve the recognition of EEG signals, a temporal-spatial-frequency feature selection model based on binary quantum particle swarm optimization (BQPSO) is proposed. The signals are first divided into six segments according to time, and each segment is then bandpass filtered into six different frequency ranges. Temporal-spatial-frequency features are extracted by common spatial pattern (CSP). After selection by BQPSO, the optimized features are classified by an extreme learning machine. Two different data sets are used to validate the proposed model, and their average classification results are 84.7% and 81.4%, respectively. Compared with other feature selection algorithms, the proposed model achieves the best results; an appropriate feature selection algorithm yields better classification results.
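The segmentation stage described above (six time segments, each filtered into six bands, giving a temporal-frequency grid of candidate feature ranges) can be sketched like this. The FFT-mask bandpass and the band edges are assumptions for illustration; the paper's actual filters and ranges may differ:

```python
import numpy as np

def fft_bandpass(x, lo, hi, fs):
    """Crude bandpass: zero the FFT bins outside [lo, hi] Hz."""
    spec = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    spec[(freqs < lo) | (freqs > hi)] = 0.0
    return np.fft.irfft(spec, n=len(x))

def time_freq_segments(x, fs, n_time=6,
                       bands=((4, 8), (8, 12), (12, 16),
                              (16, 20), (20, 26), (26, 32))):
    """Split one trial into n_time temporal segments and bandpass each
    into the given ranges -> n_time * len(bands) candidate chunks."""
    chunks = []
    for seg in np.array_split(x, n_time):
        for lo, hi in bands:
            chunks.append(fft_bandpass(seg, lo, hi, fs))
    return chunks

x = np.random.default_rng(1).standard_normal(750)  # e.g. 3 s at 250 Hz
chunks = time_freq_segments(x, fs=250)             # 6 x 6 = 36 chunks
```

In the full pipeline, CSP would then be applied per chunk and BQPSO would search over which of the 36 chunks' features to keep.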
Proceedings ArticleDOI
23 Jun 2018
TL;DR: A time-frequency-space range selection model based on neighborhood mutual information (NMI) is proposed; with automatic range selection and improved classification results, it is well suited to real-time optimization in online brain-computer interfaces.
Abstract: In order to increase the classification accuracy of mental tasks with speech imagery, a time-frequency-space range selection model based on neighborhood mutual information (NMI) is proposed. According to time, the electroencephalography (EEG) signals are divided into 7 distinct segments. These 7 sections of signals are filtered by 28 band-pass filters with different frequency ranges. The filtered signals are processed by common spatial pattern (CSP) to obtain spatial matrices. Then, the NMI values of these matrices are calculated. At last, the time-frequency-space range is optimized by the NMI values. The EEG signals are processed with the selected time-frequency-space range, and the eigenvalues are calculated and classified by variance and support vector machines, respectively. From the results of 10 subjects, the average classification accuracy is improved by 3.0% after optimization. The improvements for subjects S2 and S5 are the most pronounced; their results increase by 5.0% and 5.2%, respectively. With automatic range selection and improved classification results, the model is well suited to real-time optimization in online brain-computer interfaces.
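The selection criterion above ranks candidate ranges by how much information their features carry about the task label. As a simplified stand-in for neighborhood mutual information, a plain histogram estimate of mutual information already shows the ranking behavior (the function and data below are illustrative, not the paper's NMI definition):

```python
import numpy as np
from collections import Counter

def mutual_information(feature, labels, n_bins=8):
    """Histogram (plug-in) estimate of I(feature; label) in nats; a
    simplified stand-in for the paper's neighborhood mutual information."""
    bins = np.digitize(feature, np.histogram_bin_edges(feature, bins=n_bins))
    n = len(labels)
    joint = Counter(zip(bins, labels))
    px = Counter(bins)
    py = Counter(labels)
    mi = 0.0
    for (b, y), c in joint.items():
        pxy = c / n
        mi += pxy * np.log(pxy * n * n / (px[b] * py[y]))
    return mi

rng = np.random.default_rng(3)
y = rng.integers(0, 2, 500)
informative = y + 0.3 * rng.standard_normal(500)  # tracks the label
noise = rng.standard_normal(500)                  # independent of it
```

A feature from a good time-frequency-space range scores high and is kept; an uninformative range scores near zero and is discarded.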

Cited by
Journal ArticleDOI
Yuexing Han, Bing Wang, Jie Luo, Long Li, Xiaolong Li
TL;DR: In this paper, a parallel convolutional neural network (PCNN) architecture is proposed to classify motor imagery signals; it achieves 83.0 ± 3.4% on BCI Competition IV dataset 2b, outperforming the compared methods by at least 5.2%.

14 citations

Journal ArticleDOI
Wonjun Ko, Eunjin Jeon, Seungwoo Jeong, Jaeun Phyo, Heung-Il Suk
TL;DR: In this article, a review of DL-based short/zero-calibration methods for BCI is presented, covering data augmentation (DA), which increases the number of training samples without acquiring additional data, and transfer learning (TL), which leverages representative knowledge obtained from one dataset to address the data insufficiency problem in other datasets.
Abstract: Brain–computer interfaces (BCIs) utilizing machine learning techniques are an emerging technology that enables a communication pathway between a user and an external system such as a computer. Owing to its practicality, electroencephalography (EEG) is one of the most widely used measurements for BCI. However, EEG has complex patterns, and EEG-based BCIs mostly involve a costly, time-consuming calibration phase; thus, acquiring sufficient EEG data is rarely possible. Recently, deep learning (DL) has had a theoretical and practical impact on BCI research because of its use in learning representations of the complex patterns inherent in EEG. Moreover, algorithmic advances in DL facilitate short/zero-calibration in BCI, thereby shortening the data acquisition phase. Those advancements include data augmentation (DA), which increases the number of training samples without acquiring additional data, and transfer learning (TL), which takes advantage of representative knowledge obtained from one dataset to address the so-called data insufficiency problem in other datasets. In this study, we review DL-based short/zero-calibration methods for BCI. Further, we elaborate on methodological and algorithmic trends, highlight intriguing approaches in the literature, and discuss directions for further research. In particular, we survey generative model-based and geometric manipulation-based DA methods. Additionally, we categorize TL techniques in DL-based BCIs into explicit and implicit methods. Our systematization reveals advances in both DA and TL methods. Among the studies reviewed herein, approximately 45% of DA studies used generative model-based techniques, whereas approximately 45% of TL studies used an explicit knowledge-transfer strategy. Moreover, based on our literature review, we recommend an appropriate DA strategy for DL-based BCIs and discuss trends of TL used in DL-based BCIs.

14 citations

Journal ArticleDOI
TL;DR: In this paper, two one-dimensional convolutional embedding modules are proposed as deep feature extractors for single-channel and multichannel EEG signals, respectively, and a deep metric learning model is detailed along with a stage-wise training strategy.
Abstract: Electroencephalography (EEG) is a commonly used clinical approach for the diagnosis of epilepsy, a life-threatening neurological disorder. Many algorithms have been proposed for the automatic detection of epileptic seizures using traditional machine learning and deep learning. Although deep learning methods have achieved great success in many fields, their performance in EEG analysis and classification is still limited, mainly due to the relatively small sizes of available datasets. In this paper, we propose an automatic method for the detection of epileptic seizures based on deep metric learning, a strategy that tackles the few-shot problem by mitigating the demand for massive data. First, two one-dimensional convolutional embedding modules are proposed as deep feature extractors, for single-channel and multichannel EEG signals respectively. Then, a deep metric learning model is detailed along with a stage-wise training strategy. Experiments are conducted on the publicly available Bonn University dataset, a benchmark dataset, and the CHB-MIT dataset, which is larger and more realistic. An average accuracy of 98.60% and a specificity of 100% are achieved on the most difficult classification of the Bonn dataset, interictal (subset D) vs. ictal (subset E). On the CHB-MIT dataset, an average accuracy of 86.68% and a specificity of 93.71% are reached. With the proposed method, automatic and accurate detection of seizures can be performed in real time, effectively reducing the heavy burden on neurologists.
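Metric learning classifies by distance in an embedding space rather than by a fixed decision layer, which is what makes it attractive for small datasets. A minimal nearest-class-prototype sketch (the embeddings below are synthetic stand-ins for the output of the paper's convolutional embedding modules):

```python
import numpy as np

def class_prototypes(embeddings, labels):
    """Mean embedding per class, as in prototype-style metric learning."""
    classes = np.unique(labels)
    protos = np.stack([embeddings[labels == c].mean(axis=0) for c in classes])
    return classes, protos

def predict_nearest(query, classes, protos):
    """Assign each query embedding to the class with the closest prototype."""
    d = np.linalg.norm(query[:, None, :] - protos[None, :, :], axis=-1)
    return classes[d.argmin(axis=1)]

rng = np.random.default_rng(7)
# Two well-separated synthetic "classes" in a 4-D embedding space.
emb = np.concatenate([rng.normal(0, 0.3, (20, 4)), rng.normal(3, 0.3, (20, 4))])
lab = np.array([0] * 20 + [1] * 20)
classes, protos = class_prototypes(emb, lab)
pred = predict_nearest(emb, classes, protos)
```

Training the embedding (with a metric loss and the stage-wise strategy the paper describes) is what pulls same-class samples together so that this simple distance rule works.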

9 citations

Posted Content
TL;DR: In this article, a cross-subject EEG classification framework is proposed with a generative adversarial network (GAN) based method named common spatial GAN (CS-GAN), which uses adversarial training between a generator and a discriminator to obtain high-quality data for augmentation.
Abstract: The cross-subject application of EEG-based brain-computer interfaces (BCI) has always been limited by large individual differences and complex characteristics that are difficult to perceive. Therefore, it takes a long time to collect training data from each user for calibration. Even a transfer learning method pre-trained with large amounts of subject-independent data cannot decode different EEG signal categories without enough subject-specific data. Hence, we propose a cross-subject EEG classification framework with a generative adversarial network (GAN) based method named common spatial GAN (CS-GAN), which uses adversarial training between a generator and a discriminator to obtain high-quality data for augmentation. A particular module in the discriminator is employed to maintain the spatial features of the EEG signals and increase the difference between categories, with two losses for further enhancement. Through adaptive training with sufficient augmentation data, our cross-subject classification accuracy yields a significant improvement of 15.85% over the leave-one-subject-out (LOO) test and 8.57% over adapting just 100 original samples on dataset 2a of BCI Competition IV. Moreover, we design a convolutional neural network (CNN) based classification method as a benchmark with a similar spatial-enhancement idea, which achieves remarkable results in classifying motor imagery EEG data. In summary, our framework provides a promising way to deal with the cross-subject problem and promote the practical application of BCI.
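The leave-one-subject-out (LOO) protocol used as the baseline above trains on all subjects but one and tests on the held-out subject, cycling through every subject. A minimal sketch (subject IDs are illustrative):

```python
def leave_one_subject_out(subjects):
    """Yield (train_subjects, test_subject) pairs for a LOO evaluation:
    each subject is held out once while the rest form the training pool."""
    for held_out in subjects:
        train = [s for s in subjects if s != held_out]
        yield train, held_out

splits = list(leave_one_subject_out(["S1", "S2", "S3", "S4"]))
```

The CS-GAN framework then augments the training pool with generated trials before adapting to the held-out subject.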

1 citation

Proceedings ArticleDOI
15 Aug 2022
TL;DR: In this paper, a weighted, shared two-dimensional convolutional CNN-LSTM network is proposed, which shares convolution kernels across the feature maps of different EEG channels.
Abstract: Brain-computer interface technology enables the disabled to control external devices through motor imagery EEG. Due to the complex changes of EEG in the time domain and frequency domain, classifiers play an important role in EEG recognition. The convolutional neural network is an excellent deep learning method, but most papers use one-dimensional convolution to identify EEG and rarely consider comprehensive feature extraction and classification of time-frequency maps through a two-dimensional convolutional network. In this article, the time-frequency graphs of different EEG channels are superimposed, analogous to the color dimension of a picture. A weighted, shared two-dimensional convolutional CNN-LSTM network is proposed, which shares convolution kernels across the feature maps of different channels. Compared with CNN and CNN-LSTM, the weight-sharing CNN-LSTM reduces the amount of computation, speeds up network training, and improves classification performance, with a highest accuracy of 82.3%.
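The weight-sharing idea, one 2-D kernel applied to every channel's time-frequency map, with the channel outputs then combined, can be sketched as follows. The naive convolution loop and the fixed channel weights are illustrative assumptions; in the actual network both the kernel and the combination weights are learned:

```python
import numpy as np

def conv2d_valid(img, kernel):
    """Naive valid-mode 2-D cross-correlation of one time-frequency map."""
    kh, kw = kernel.shape
    h, w = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def shared_kernel_features(tf_maps, kernel, channel_weights):
    """Apply ONE shared kernel to every channel's time-frequency map,
    then combine the channel outputs with per-channel weights."""
    maps = np.stack([conv2d_valid(m, kernel) for m in tf_maps])
    return np.tensordot(channel_weights, maps, axes=1)

rng = np.random.default_rng(5)
tf_maps = rng.standard_normal((3, 16, 16))   # 3 EEG channels, 16x16 maps
kernel = rng.standard_normal((3, 3))         # the single shared kernel
out = shared_kernel_features(tf_maps, kernel, np.array([0.5, 0.3, 0.2]))
```

Because the kernel is stored once instead of once per channel, parameter count and computation drop, which is the speed-up the abstract reports.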