Author

Yannick Roy

Bio: Yannick Roy is an academic researcher at Université de Montréal who has contributed to research on deep learning and EEGLAB. The author has an h-index of 2 and has co-authored 3 publications receiving 342 citations.
Topics: Deep learning, EEGLAB, Computer science, Biology

Papers
Journal ArticleDOI
TL;DR: In this paper, the authors present a review of 154 studies that apply deep learning to EEG, published between 2010 and 2018, and spanning different application domains such as epilepsy, sleep, brain-computer interfacing, and cognitive and affective monitoring.
Abstract:
Context: Electroencephalography (EEG) is a complex signal and can require several years of training, as well as advanced signal processing and feature extraction methodologies, to be correctly interpreted. Recently, deep learning (DL) has shown great promise in helping make sense of EEG signals due to its capacity to learn good feature representations from raw data. Whether DL truly presents advantages as compared to more traditional EEG processing approaches, however, remains an open question.
Objective: In this work, we review 154 papers that apply DL to EEG, published between January 2010 and July 2018, and spanning different application domains such as epilepsy, sleep, brain-computer interfacing, and cognitive and affective monitoring. We extract trends and highlight interesting approaches from this large body of literature in order to inform future research and formulate recommendations.
Methods: Major databases spanning the fields of science and engineering were queried to identify relevant studies published in scientific journals, conferences, and electronic preprint repositories. Various data items were extracted for each study pertaining to (1) the data, (2) the preprocessing methodology, (3) the DL design choices, (4) the results, and (5) the reproducibility of the experiments. These items were then analyzed one by one to uncover trends.
Results: Our analysis reveals that the amount of EEG data used across studies varies from less than ten minutes to thousands of hours, while the number of samples seen during training by a network varies from a few dozen to several million, depending on how epochs are extracted. Interestingly, we saw that more than half the studies used publicly available data and that there has also been a clear shift from intra-subject to inter-subject approaches over the last few years. About 40% of the studies used convolutional neural networks (CNNs), while 14% used recurrent neural networks (RNNs), most often with a total of 3-10 layers. Moreover, almost one-half of the studies trained their models on raw or preprocessed EEG time series. Finally, the median gain in accuracy of DL approaches over traditional baselines was 5.4% across all relevant studies. More importantly, however, we noticed studies often suffer from poor reproducibility: a majority of papers would be hard or impossible to reproduce given the unavailability of their data and code.
Significance: To help the community progress and share work more effectively, we provide a list of recommendations for future studies and emphasize the need for more reproducible research. We also make our summary table of DL and EEG papers available and invite authors of published work to contribute to it directly. A planned follow-up to this work will be an online public benchmarking portal listing reproducible results.
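The review reports that most DL-EEG models are CNNs with 3-10 layers, often trained directly on raw or preprocessed EEG time series. As a hedged illustration only, and not a model taken from any of the reviewed papers, the following is a minimal PyTorch sketch of a shallow 1D CNN operating on EEG epochs; channel count, epoch length, and class count are hypothetical.

# Minimal sketch of a shallow 1D CNN for EEG epoch classification (PyTorch).
# Shapes and hyperparameters are hypothetical, chosen only for illustration.
import torch
import torch.nn as nn

class SmallEEGNet(nn.Module):
    def __init__(self, n_channels=32, n_samples=512, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 16, kernel_size=7, padding=3),  # temporal convolution
            nn.BatchNorm1d(16),
            nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=7, padding=3),
            nn.BatchNorm1d(32),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # collapse the time axis
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x):            # x: (batch, n_channels, n_samples)
        z = self.features(x).squeeze(-1)
        return self.classifier(z)    # logits: (batch, n_classes)

# Example forward pass on a fake batch of raw EEG epochs.
model = SmallEEGNet()
epochs = torch.randn(8, 32, 512)     # 8 epochs, 32 channels, 512 time samples
print(model(epochs).shape)           # torch.Size([8, 2])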

699 citations

Posted Content
TL;DR: In this paper, the authors reviewed 156 papers that apply deep learning to EEG, published between 2010 and 2018, and spanning different application domains such as epilepsy, sleep, brain-computer interfacing, and cognitive and affective monitoring.
Abstract: Electroencephalography (EEG) is a complex signal and can require several years of training to be correctly interpreted. Recently, deep learning (DL) has shown great promise in helping make sense of EEG signals due to its capacity to learn good feature representations from raw data. Whether DL truly presents advantages as compared to more traditional EEG processing approaches, however, remains an open question. In this work, we review 156 papers that apply DL to EEG, published between January 2010 and July 2018, and spanning different application domains such as epilepsy, sleep, brain-computer interfacing, and cognitive and affective monitoring. We extract trends and highlight interesting approaches in order to inform future research and formulate recommendations. Various data items were extracted for each study pertaining to 1) the data, 2) the preprocessing methodology, 3) the DL design choices, 4) the results, and 5) the reproducibility of the experiments. Our analysis reveals that the amount of EEG data used across studies varies from less than ten minutes to thousands of hours. As for the model, 40% of the studies used convolutional neural networks (CNNs), while 14% used recurrent neural networks (RNNs), most often with a total of 3 to 10 layers. Moreover, almost one-half of the studies trained their models on raw or preprocessed EEG time series. Finally, the median gain in accuracy of DL approaches over traditional baselines was 5.4% across all relevant studies. More importantly, however, we noticed studies often suffer from poor reproducibility: a majority of papers would be hard or impossible to reproduce given the unavailability of their data and code. To help the field progress, we provide a list of recommendations for future studies and we make our summary table of DL and EEG papers available and invite the community to contribute.

2 citations

Book ChapterDOI
01 Jan 2019
TL;DR: This chapter introduces StaR, an EEGLAB framework for performing MPT statistical analyses in R, which offers an intuitive user interface that integrates into EEGLAB’s menu.
Abstract: EEGLAB, a widely used toolbox in MATLAB (The Mathworks, Inc.), uses Independent Component Analysis (ICA) to decompose the EEG signal into sub-signals, and localizes brain sources of those sub-signals prior to independent component (IC) clustering for group study. In 2013, the Measure Projection Toolbox (MPT) was introduced as a new data-driven IC clustering toolbox for EEGLAB. Despite the numerous features and advantages offered by EEGLAB and the MPT, they both have limitations for statistical analyses with more than two independent variables. In order to work around those limitations, this paper introduces StaR, an EEGLAB framework for the MPT statistical analyses to be performed in R. StaR initially exports the data from different clusters generated by the MPT for different measures of interest (e.g., Event-Related Potentials (ERPs) and Event-Related Spectral Perturbations (ERSPs)) and formats the data such that further statistical analyses can be performed in R. Once in R, StaR uses linear mixed models as its default method to better handle missing values and intra-subject variability. Finally, StaR brings the results back into MATLAB to plot the results with the well-known and easy-to-interpret EEGLAB graphics. To make the whole process easy, StaR also offers an intuitive user interface that integrates into EEGLAB’s menu.
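StaR itself runs its statistics in R on data exported from EEGLAB/MPT. Purely as an illustration of the linear-mixed-model step it describes, and not of StaR's actual code, the sketch below fits an analogous mixed model in Python with statsmodels on a hypothetical long-format table of per-subject ERP measures; all column names and simulated effects are invented.

# Illustrative linear mixed model on long-format EEG measures (not StaR's R code).
# Column names ("erp_amp", "condition", "subject") are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_subjects, n_trials = 20, 30
df = pd.DataFrame({
    "subject": np.repeat(np.arange(n_subjects), n_trials),
    "condition": np.tile(["A", "B"], n_subjects * n_trials // 2),
})
# Simulate an ERP amplitude with a condition effect plus per-subject offsets.
subject_offset = rng.normal(0, 1.0, n_subjects)[df["subject"]]
df["erp_amp"] = (2.0 + 0.5 * (df["condition"] == "B")
                 + subject_offset + rng.normal(0, 0.8, len(df)))

# A random intercept per subject handles intra-subject variability,
# in the spirit of the mixed-model approach the chapter describes.
model = smf.mixedlm("erp_amp ~ condition", df, groups=df["subject"])
result = model.fit()
print(result.summary())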
Journal ArticleDOI
TL;DR: In the electroencephalography domain, a clear cognitive transition between the different phases of a 3D-MOT task is shown, suggesting a hand-off between attention and working memory.
Abstract: Our ability to track multiple objects in a dynamic environment enables us to perform everyday tasks such as driving, playing team sports, and walking in a crowded mall. Despite more than three decades of literature on multiple object tracking (MOT) tasks, the underlying and intertwined neural mechanisms remain poorly understood. Here we looked at the electroencephalography (EEG) neural correlates and their changes across the three phases of a 3D-MOT task, namely identification, tracking and recall. We recorded the EEG activity of 24 participants while they were performing a 3D-MOT task with either 1, 2 or 3 targets, where some trials were lateralized and some were not. We observed what seems to be a hand-off between focused attention and working memory processes when going from tracking to recall. Our findings revealed a strong inhibition in delta and theta frequencies from the frontal region during tracking, followed by a strong (re)activation of these same frequencies during recall. Our results also showed contralateral delay activity (CDA) for the lateralized trials, in both the identification and recall phases but not during tracking. Within the EEG domain, we thus show a clear cognitive transition between the different phases of a 3D-MOT task, suggesting a hand-off between attention and working memory.
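The findings center on delta and theta activity over frontal electrodes across the task phases. As a generic, hedged illustration of how such band power is commonly estimated, and not the authors' actual pipeline, the sketch below computes delta and theta power from a simulated frontal EEG segment using Welch's method in SciPy; sampling rate and signal content are hypothetical.

# Generic band-power estimate for the delta (1-4 Hz) and theta (4-8 Hz) bands.
# Simulated data; this is not the study's actual analysis pipeline.
import numpy as np
from scipy.signal import welch

fs = 256                                    # sampling rate in Hz (hypothetical)
t = np.arange(0, 10, 1 / fs)                # 10 s of simulated "frontal" EEG
eeg = (np.sin(2 * np.pi * 2 * t)            # delta component at 2 Hz
       + 0.5 * np.sin(2 * np.pi * 6 * t)    # theta component at 6 Hz
       + 0.3 * np.random.randn(t.size))     # broadband noise

freqs, psd = welch(eeg, fs=fs, nperseg=2 * fs)   # power spectral density

def band_power(freqs, psd, low, high):
    # Approximate the integral of the PSD over a frequency band.
    mask = (freqs >= low) & (freqs < high)
    return np.sum(psd[mask]) * (freqs[1] - freqs[0])

print("delta power:", band_power(freqs, psd, 1, 4))
print("theta power:", band_power(freqs, psd, 4, 8))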

Cited by
Journal ArticleDOI
TL;DR: NeuroKit2 is an open-source, community-driven, and user-centered Python package for neurophysiological signal processing; it includes high-level functions that enable data processing in a few lines of code using validated pipelines.
Abstract: NeuroKit2 is an open-source, community-driven, and user-centered Python package for neurophysiological signal processing. It provides a comprehensive suite of processing routines for a variety of bodily signals (e.g., ECG, PPG, EDA, EMG, RSP). These processing routines include high-level functions that enable data processing in a few lines of code using validated pipelines, which we illustrate in two examples covering the most typical scenarios, such as an event-related paradigm and an interval-related analysis. The package also includes tools for specific processing steps such as rate extraction and filtering methods, offering a trade-off between high-level convenience and fine-tuned control. Its goal is to improve transparency and reproducibility in neurophysiological research, as well as foster exploration and innovation. Its design philosophy is centred on user-experience and accessibility to both novice and advanced users.
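The abstract highlights high-level functions that process a signal in a few lines of code. The snippet below is a rough usage sketch based on NeuroKit2's documented ECG interface; exact function signatures and returned columns may differ between package versions.

# Rough NeuroKit2 usage sketch (ECG example); check the current docs for exact APIs.
import neurokit2 as nk

# Simulate 30 s of ECG instead of loading a recording, to keep the example self-contained.
ecg = nk.ecg_simulate(duration=30, sampling_rate=1000, heart_rate=70)

# One high-level call runs a validated cleaning and R-peak detection pipeline.
signals, info = nk.ecg_process(ecg, sampling_rate=1000)

print(signals.columns.tolist())               # cleaned signal, R-peaks, rate, quality, ...
print(len(info["ECG_R_Peaks"]), "R-peaks detected")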

215 citations

Journal ArticleDOI
TL;DR: Data augmentation (DA) is increasingly used and has considerably improved DL decoding accuracy on EEG; it holds transformative promise for EEG processing, much as DL revolutionized computer vision.
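No abstract is shown for this entry. Purely as a hedged illustration of what EEG data augmentation typically means, and not a method taken from this paper, the sketch below applies one of the simplest augmentations, additive Gaussian noise, to a batch of hypothetical EEG epochs.

# Simple EEG data augmentation by additive Gaussian noise (illustrative only).
import numpy as np

def augment_with_noise(epochs, noise_std=0.1, n_copies=2, seed=0):
    # epochs: array of shape (n_epochs, n_channels, n_samples).
    # Returns the original epochs plus n_copies noisy copies.
    rng = np.random.default_rng(seed)
    noisy = [epochs + rng.normal(0, noise_std, epochs.shape)
             for _ in range(n_copies)]
    return np.concatenate([epochs, *noisy], axis=0)

epochs = np.random.randn(16, 32, 512)         # hypothetical raw EEG epochs
augmented = augment_with_noise(epochs)
print(epochs.shape, "->", augmented.shape)    # (16, 32, 512) -> (48, 32, 512)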

156 citations

Journal ArticleDOI
TL;DR: First benchmarking results for the recently published, freely accessible clinical 12-lead ECG dataset PTB-XL are put forward, finding that convolutional neural networks, in particular resnet- and inception-based architectures, show the strongest performance across all tasks.
Abstract: Electrocardiography (ECG) is a very common, non-invasive diagnostic procedure and its interpretation is increasingly supported by algorithms. The progress in the field of automatic ECG analysis has up to now been hampered by a lack of appropriate datasets for training as well as a lack of well-defined evaluation procedures to ensure comparability of different algorithms. To alleviate these issues, we put forward first benchmarking results for the recently published, freely accessible clinical 12-lead ECG dataset PTB-XL, covering a variety of tasks from different ECG statement prediction tasks to age and sex prediction. Among the investigated deep-learning-based timeseries classification algorithms, we find that convolutional neural networks, in particular resnet- and inception-based architectures, show the strongest performance across all tasks. We find consistent results on the ICBEB2018 challenge ECG dataset and discuss prospects of transfer learning using classifiers pretrained on PTB-XL. These benchmarking results are complemented by deeper insights into the classification algorithm in terms of hidden stratification, model uncertainty and an exploratory interpretability analysis, which provide connecting points for future research on the dataset. Our results emphasize the prospects of deep-learning-based algorithms in the field of ECG analysis, not only in terms of quantitative accuracy but also in terms of clinically equally important further quality metrics such as uncertainty quantification and interpretability. With this resource, we aim to establish the PTB-XL dataset as a resource for structured benchmarking of ECG analysis algorithms and encourage other researchers in the field to join these efforts.
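The benchmark finds resnet- and inception-based 1D CNNs strongest across tasks. As a hedged illustration of what a resnet-style building block for 12-lead ECG time series can look like, and not the architecture actually benchmarked on PTB-XL, the sketch below shows one residual block in PyTorch with hypothetical sizes.

# One resnet-style residual block for 1D time series such as 12-lead ECG.
# Illustrative only; not the architecture benchmarked on PTB-XL.
import torch
import torch.nn as nn

class ResidualBlock1d(nn.Module):
    def __init__(self, in_ch, out_ch, kernel_size=7):
        super().__init__()
        pad = kernel_size // 2
        self.body = nn.Sequential(
            nn.Conv1d(in_ch, out_ch, kernel_size, padding=pad),
            nn.BatchNorm1d(out_ch),
            nn.ReLU(),
            nn.Conv1d(out_ch, out_ch, kernel_size, padding=pad),
            nn.BatchNorm1d(out_ch),
        )
        # A 1x1 convolution aligns channel counts for the skip connection.
        self.skip = (nn.Identity() if in_ch == out_ch
                     else nn.Conv1d(in_ch, out_ch, kernel_size=1))
        self.act = nn.ReLU()

    def forward(self, x):                     # x: (batch, channels, samples)
        return self.act(self.body(x) + self.skip(x))

# Example: a block mapping 12 ECG leads to 64 feature channels.
block = ResidualBlock1d(12, 64)
ecg = torch.randn(4, 12, 1000)                # 4 records, 12 leads, 10 s at 100 Hz
print(block(ecg).shape)                       # torch.Size([4, 64, 1000])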

143 citations

Journal ArticleDOI
TL;DR: This paper surveys various action recognition techniques along with HAR applications, namely content-based video summarization, human–computer interaction, education, healthcare, video surveillance, abnormal activity detection, sports, and entertainment.
Abstract: Human Action Recognition (HAR) involves human activity monitoring tasks in different areas such as medicine, education, entertainment, visual surveillance, video retrieval, and abnormal activity identification, to name a few. Due to an increase in the usage of cameras, automated systems are in demand for the classification of such activities using computationally intelligent techniques such as Machine Learning (ML) and Deep Learning (DL). In this survey, we discuss various ML and DL techniques for HAR for the years 2011–2019. The paper discusses the characteristics of public datasets used for HAR. It also surveys various action recognition techniques along with HAR applications, namely content-based video summarization, human–computer interaction, education, healthcare, video surveillance, abnormal activity detection, sports, and entertainment. The advantages and disadvantages of action representation, dimensionality reduction, and action analysis methods are also provided. The paper discusses challenges and future directions for HAR.

142 citations

Journal ArticleDOI
TL;DR: This paper makes a meticulous and systematic attempt at organizing and standardizing the methods of combining ML and MB models as hybrid learning methods, and sheds some light on the challenges of hybrid models.
Abstract: A multitude of cyber-physical system (CPS) applications, including design, control, diagnosis, prognostics, and a host of other problems, are predicated on the assumption of model availability. There are mainly two approaches to modeling: physics/equation-based modeling (Model-Based, MB) and Machine Learning (ML). Recently, there is a growing consensus that ML methodologies relying on data need to be coupled with prior scientific knowledge (or physics, MB) for modeling CPS. We refer to the paradigm that combines MB approaches with ML as hybrid learning methods. Hybrid modeling (HB) is a growing field within both the ML and scientific communities and is recognized as an important but still nascent area of research. Recently, several works have attempted to merge MB and ML models for the complete exploitation of their combined potential. However, the research literature is scattered and unorganized. We therefore make a meticulous and systematic attempt at organizing and standardizing the methods of combining ML and MB models. In addition, we outline five metrics for the comprehensive evaluation of hybrid models. Finally, we conclude by shedding some light on the challenges of hybrid models, which we, as a research community, should focus on for harnessing the full potential of hybrid models. An additional feature of this survey is that the hybrid modeling work is discussed with a focus on modeling cyber-physical systems.
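Among the many ways of combining MB and ML models that the survey organizes, one widely used pattern is residual (discrepancy) modeling: a physics-based model gives a first prediction and an ML model learns the remaining error from data. The sketch below illustrates only that single pattern with an invented toy physics model; it is not the survey's taxonomy or code.

# Toy residual-hybrid model: physics prediction plus ML correction (illustrative only).
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(42)
x = rng.uniform(0, 10, size=(500, 1))

def physics_model(x):
    # Simplified first-principles model (hypothetical): captures only a linear term.
    return 2.0 * x[:, 0]

# The "true" system has an extra nonlinear effect the physics model misses.
y_true = 2.0 * x[:, 0] + 1.5 * np.sin(x[:, 0]) + rng.normal(0, 0.1, 500)

# The ML model learns only the residual between observations and the physics model.
residual = y_true - physics_model(x)
ml_correction = RandomForestRegressor(n_estimators=100, random_state=0)
ml_correction.fit(x, residual)

def hybrid_predict(x_new):
    return physics_model(x_new) + ml_correction.predict(x_new)

x_test = rng.uniform(0, 10, size=(100, 1))
y_test = 2.0 * x_test[:, 0] + 1.5 * np.sin(x_test[:, 0])
print("physics-only MAE:", np.mean(np.abs(physics_model(x_test) - y_test)))
print("hybrid MAE:      ", np.mean(np.abs(hybrid_predict(x_test) - y_test)))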

128 citations