Journal ArticleDOI
Automatic ocular artifacts removal in EEG using deep learning
TLDR
This paper investigates the use of a deep learning network (DLN) to remove ocular artifacts (OAs) from EEG signals and compares the proposed method with classic independent component analysis (ICA), kurtosis-ICA (K-ICA), second-order blind identification (SOBI), and a shallow network method.
About
This article was published in Biomedical Signal Processing and Control on 2018-05-01. It has received 97 citations to date.
Citations
Journal ArticleDOI
Deep learning-based electroencephalography analysis: a systematic review.
Yannick Roy, Hubert Banville, Isabela Albuquerque, Alexandre Gramfort, Tiago H. Falk, Jocelyn Faubert +5 more
TL;DR: In this paper, the authors present a review of 154 studies that apply deep learning to EEG, published between 2010 and 2018, and spanning different application domains such as epilepsy, sleep, brain-computer interfacing, and cognitive and affective monitoring.
Journal ArticleDOI
A novel end-to-end 1D-ResCNN model to remove artifact from EEG signals
TL;DR: A one-dimensional residual convolutional neural network (1D-ResCNN) model for raw-waveform EEG denoising is proposed; it yields cleaner waveforms and achieves significant improvements in SNR and RMSE.
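The core idea of a residual denoising block is that the skip connection lets the network model only the correction to apply to the input signal. A minimal single-channel sketch in numpy (illustrative only; the kernels, sizes, and padding here are assumptions, not the paper's actual 1D-ResCNN architecture):

```python
import numpy as np

def conv1d_same(x, kernel):
    """'Same'-padded 1D convolution of a single-channel signal."""
    pad = len(kernel) // 2
    xp = np.pad(x, pad, mode="edge")
    return np.convolve(xp, kernel, mode="valid")[: len(x)]

def residual_block(x, k1, k2):
    """y = x + conv(relu(conv(x))): the skip connection means the
    convolutions only need to learn the residual (e.g. the artifact)."""
    h = np.maximum(conv1d_same(x, k1), 0.0)  # conv + ReLU
    return x + conv1d_same(h, k2)            # add skip connection

rng = np.random.default_rng(0)
eeg = rng.standard_normal(512)               # toy 512-sample EEG segment
k1 = rng.standard_normal(5) * 0.1            # hypothetical learned kernels
k2 = rng.standard_normal(5) * 0.1
out = residual_block(eeg, k1, k2)
print(out.shape)                             # same length as the input
```

Note that with zero kernels the block reduces to the identity, which is what makes residual networks easy to optimize for denoising tasks.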
Journal ArticleDOI
EEGdenoiseNet: a benchmark dataset for deep learning solutions of EEG denoising.
TL;DR: The EEGdenoiseNet as discussed by the authors is a benchmark EEG dataset that is suited for training and testing DL-based denoising models, as well as for performance comparisons across models.
Proceedings ArticleDOI
Deep Convolutional Autoencoder for EEG Noise Filtering
TL;DR: This work presents a denoising approach based on a deep convolutional autoencoder, which should reduce the effort of designing denoising filters for EEG and seems to open a promising scope of research for noise filtering in EEG.
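A denoising autoencoder is trained to map noisy segments to their clean counterparts. The following numpy sketch uses a single dense hidden layer rather than convolutions for brevity, and synthetic sinusoid segments rather than real EEG; the layer sizes, learning rate, and data are all assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: clean sinusoid segments as targets, noisy versions as inputs.
t = np.linspace(0, 1, 64, endpoint=False)
clean = np.stack([np.sin(2 * np.pi * f * t) for f in range(2, 34)])  # (32, 64)
noisy = clean + 0.3 * rng.standard_normal(clean.shape)

# One-hidden-layer autoencoder (dense, not convolutional, for brevity).
W1 = 0.1 * rng.standard_normal((16, 64))   # encoder weights
W2 = 0.1 * rng.standard_normal((64, 16))   # decoder weights

def forward(x):
    h = np.tanh(W1 @ x.T)                  # encoder activations: (16, batch)
    return (W2 @ h).T, h                   # reconstruction: (batch, 64)

lr = 0.01
losses = []
for _ in range(500):
    pred, h = forward(noisy)
    err = pred - clean                     # train against the CLEAN target
    losses.append(np.mean(err ** 2))
    # Backpropagate the MSE loss through both layers.
    gW2 = err.T @ h.T / len(noisy)
    dh = (W2.T @ err.T) * (1 - h ** 2)     # tanh derivative
    gW1 = dh @ noisy / len(noisy)
    W2 -= lr * gW2
    W1 -= lr * gW1

print(losses[0], losses[-1])               # reconstruction loss decreases
```

The key design choice, shared with the cited work, is supervising the reconstruction with the clean signal rather than the input, so the network learns a denoising mapping instead of the identity.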
References
Journal ArticleDOI
A fast learning algorithm for deep belief nets
TL;DR: A fast, greedy algorithm is derived that can learn deep, directed belief networks one layer at a time, provided the top two layers form an undirected associative memory.
Journal ArticleDOI
Removing electroencephalographic artifacts by blind source separation.
Tzyy-Ping Jung, Scott Makeig, Colin Humphries, Te-Won Lee, Martin J. McKeown, Vicente J. Iragui, Terrence J. Sejnowski +9 more
TL;DR: The results on EEG data collected from normal and autistic subjects show that ICA can effectively detect, separate, and remove contamination from a wide variety of artifactual sources in EEG records with results comparing favorably with those obtained using regression and PCA methods.
Journal ArticleDOI
A blind source separation technique using second-order statistics
TL;DR: A new source separation technique exploiting the time coherence of the source signals is introduced; it relies only on stationary second-order statistics and is based on a joint diagonalization of a set of covariance matrices.
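The second-order idea can be sketched with AMUSE, a single-lag special case of SOBI's joint diagonalization: whiten with the zero-lag covariance, then diagonalize one time-lagged covariance to recover the sources. The signals, lag, and sampling rate below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.arange(2000) / 250.0                      # 8 s at 250 Hz
S = np.stack([np.sin(2 * np.pi * 7 * t),         # two sources with
              np.sin(2 * np.pi * 19 * t)])       # distinct spectra
A = rng.standard_normal((2, 2))                  # unknown mixing matrix
X = A @ S                                        # observed mixtures

X = X - X.mean(axis=1, keepdims=True)
C0 = X @ X.T / X.shape[1]                        # zero-lag covariance
d, E = np.linalg.eigh(C0)
W = np.diag(d ** -0.5) @ E.T                     # whitening matrix
Z = W @ X

tau = 3                                          # one lag (SOBI uses many)
Ct = Z[:, :-tau] @ Z[:, tau:].T / (Z.shape[1] - tau)
Ct = (Ct + Ct.T) / 2                             # symmetrize lagged covariance
_, U = np.linalg.eigh(Ct)
S_hat = U.T @ Z                                  # sources, up to sign/order/scale

for s_hat in S_hat:                              # correlation with true sources
    print(max(abs(np.corrcoef(s_hat, s)[0, 1]) for s in S))
```

Separation works here because the two sources have distinct lagged autocorrelations, so the eigenvalues of the whitened lagged covariance are distinct; SOBI strengthens this by jointly diagonalizing many lags.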
Greedy Layer-Wise Training of Deep Networks
TL;DR: These experiments confirm the hypothesis that the greedy layer-wise unsupervised training strategy mostly helps the optimization, by initializing weights in a region near a good local minimum, giving rise to internal distributed representations that are high-level abstractions of the input, bringing better generalization.
Journal ArticleDOI
The non-invasive Berlin Brain-Computer Interface: fast acquisition of effective performance in untrained subjects.
TL;DR: It is proposed that the key to quick efficiency in the BBCI system is its flexibility, owing to complex but physiologically meaningful features, and its adaptivity, which accommodates the enormous inter-subject variability.