
Showing papers in "Journal of Neural Engineering in 2019"


Journal ArticleDOI
TL;DR: Practical suggestions on the selection of many hyperparameters are provided in the hope that they will promote or guide the deployment of deep learning to EEG datasets in future research.
Abstract: Objective Electroencephalography (EEG) analysis has been an important tool in neuroscience, with applications in basic research, neural engineering (e.g. brain-computer interfaces, BCIs), and even commercial applications. Many of the analytical tools used in EEG studies have used machine learning to uncover relevant information for neural classification and neuroimaging. Recently, the availability of large EEG data sets and advances in machine learning have both led to the deployment of deep learning architectures, especially in the analysis of EEG signals and in understanding the information they may contain about brain functionality. The robust automatic classification of these signals is an important step towards making the use of EEG more practical in many applications and less reliant on trained professionals. Towards this goal, a systematic review of the literature on deep learning applications to EEG classification was performed to address the following critical questions: (1) Which EEG classification tasks have been explored with deep learning? (2) What input formulations have been used for training the deep networks? (3) Are there specific deep learning network structures suitable for specific types of tasks? Approach A systematic literature review of EEG classification using deep learning was performed on the Web of Science and PubMed databases, resulting in 90 identified studies. Those studies were analyzed based on type of task, EEG preprocessing methods, input type, and deep learning architecture. Main results For EEG classification tasks, convolutional neural networks, recurrent neural networks, and deep belief networks outperform stacked auto-encoders and multi-layer perceptron neural networks in classification accuracy. The tasks that used deep learning fell into six general groups: emotion recognition, motor imagery, mental workload, seizure detection, event-related potential detection, and sleep scoring.
For each type of task, we describe the specific input formulation, major characteristics, and end classifier recommendations found through this review. Significance This review summarizes the current practices and performance outcomes in the use of deep learning for EEG classification. Practical suggestions on the selection of many hyperparameters are provided in the hope that they will promote or guide the deployment of deep learning to EEG datasets in future research.
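As a toy illustration of the input-formulation step this review discusses (segmenting continuous multichannel EEG into fixed-length epochs for a network), here is a minimal sketch; the function name, window lengths, and synthetic data are illustrative assumptions, not from the paper:

```python
import numpy as np

def extract_epochs(eeg, fs, win_s, step_s):
    """Segment continuous EEG (channels x samples) into overlapping
    fixed-length epochs of shape (n_epochs, channels, win_samples)."""
    win = int(win_s * fs)
    step = int(step_s * fs)
    n_channels, n_samples = eeg.shape
    starts = range(0, n_samples - win + 1, step)
    return np.stack([eeg[:, s:s + win] for s in starts])

# 8 channels, 10 s of synthetic EEG sampled at 250 Hz
rng = np.random.default_rng(0)
eeg = rng.standard_normal((8, 2500))
epochs = extract_epochs(eeg, fs=250, win_s=2.0, step_s=1.0)
print(epochs.shape)  # (9, 8, 500)
```

The resulting (epochs, channels, samples) array is the kind of raw time-series input many of the reviewed CNN and RNN studies train on.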

777 citations


Journal ArticleDOI
TL;DR: In this paper, the authors present a review of 154 studies that apply deep learning to EEG, published between 2010 and 2018, and spanning different application domains such as epilepsy, sleep, brain-computer interfacing, and cognitive and affective monitoring.
Abstract: Context Electroencephalography (EEG) is a complex signal and can require several years of training, as well as advanced signal processing and feature extraction methodologies to be correctly interpreted. Recently, deep learning (DL) has shown great promise in helping make sense of EEG signals due to its capacity to learn good feature representations from raw data. Whether DL truly presents advantages as compared to more traditional EEG processing approaches, however, remains an open question. Objective In this work, we review 154 papers that apply DL to EEG, published between January 2010 and July 2018, and spanning different application domains such as epilepsy, sleep, brain-computer interfacing, and cognitive and affective monitoring. We extract trends and highlight interesting approaches from this large body of literature in order to inform future research and formulate recommendations. Methods Major databases spanning the fields of science and engineering were queried to identify relevant studies published in scientific journals, conferences, and electronic preprint repositories. Various data items were extracted for each study pertaining to (1) the data, (2) the preprocessing methodology, (3) the DL design choices, (4) the results, and (5) the reproducibility of the experiments. These items were then analyzed one by one to uncover trends. Results Our analysis reveals that the amount of EEG data used across studies varies from less than ten minutes to thousands of hours, while the number of samples seen during training by a network varies from a few dozen to several million, depending on how epochs are extracted. Interestingly, we saw that more than half the studies used publicly available data and that there has also been a clear shift from intra-subject to inter-subject approaches over the last few years.
About [Formula: see text] of the studies used convolutional neural networks (CNNs), while [Formula: see text] used recurrent neural networks (RNNs), most often with a total of 3-10 layers. Moreover, almost one-half of the studies trained their models on raw or preprocessed EEG time series. Finally, the median gain in accuracy of DL approaches over traditional baselines was [Formula: see text] across all relevant studies. More importantly, however, we noticed studies often suffer from poor reproducibility: a majority of papers would be hard or impossible to reproduce given the unavailability of their data and code. Significance To help the community progress and share work more effectively, we provide a list of recommendations for future studies and emphasize the need for more reproducible research. We also make our summary table of DL and EEG papers available and invite authors of published work to contribute to it directly. A planned follow-up to this work will be an online public benchmarking portal listing reproducible results.
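The shift from intra-subject to inter-subject evaluation noted above is commonly implemented as a leave-one-subject-out split; a minimal sketch (function name and toy data are illustrative assumptions):

```python
import numpy as np

def leave_one_subject_out(subject_ids):
    """Yield (train_idx, test_idx) pairs where each fold holds out
    all trials from one subject (inter-subject evaluation)."""
    subject_ids = np.asarray(subject_ids)
    for subj in np.unique(subject_ids):
        test = np.flatnonzero(subject_ids == subj)
        train = np.flatnonzero(subject_ids != subj)
        yield train, test

# 3 subjects x 4 trials each
ids = [0, 0, 0, 0, 1, 1, 1, 1, 2, 2, 2, 2]
folds = list(leave_one_subject_out(ids))
print(len(folds))            # 3
print(folds[0][1].tolist())  # [0, 1, 2, 3]
```

Because the test subject's data never appears in training, accuracies from such splits estimate generalization to unseen people, which intra-subject splits cannot.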

699 citations


Journal ArticleDOI
TL;DR: The current review evaluates EEG-based BCI paradigms regarding their advantages and disadvantages from a variety of perspectives, and various EEG decoding algorithms and classification methods are evaluated.
Abstract: Advances in brain science and computer technology in the past decade have led to exciting developments in brain-computer interface (BCI), thereby making BCI a top research area in applied science. The renaissance of BCI opens new methods of neurorehabilitation for physically disabled people (e.g. paralyzed patients and amputees) and patients with brain injuries (e.g. stroke patients). Recent technological advances such as wireless recording, machine learning analysis, and real-time temporal resolution have increased interest in electroencephalographic (EEG) based BCI approaches. Many BCI studies have focused on decoding EEG signals associated with whole-body kinematics/kinetics, motor imagery, and various senses. Thus, there is a need to understand the various experimental paradigms used in EEG-based BCI systems. Moreover, given that there are many available options, it is essential to choose the most appropriate BCI application to properly manipulate a neuroprosthetic or neurorehabilitation device. The current review evaluates EEG-based BCI paradigms regarding their advantages and disadvantages from a variety of perspectives. For each paradigm, various EEG decoding algorithms and classification methods are evaluated. The applications of these paradigms with targeted patients are summarized. Finally, potential problems with EEG-based BCI systems are discussed, and possible solutions are proposed.

475 citations


Journal ArticleDOI
TL;DR: ROAST is released as an open-source, easy-to-install and fully-automated pipeline for individualized TES modeling and its performance with commercial FEM software, and SimNIBS, a well-established open- source modeling pipeline is compared.
Abstract: Objective Research in the area of transcranial electrical stimulation (TES) often relies on computational models of current flow in the brain. Models are built based on magnetic resonance images (MRI) of the human head to capture detailed individual anatomy. To simulate current flow on an individual, the subject's MRI is segmented, virtual electrodes are placed on this anatomical model, the volume is tessellated into a mesh, and a finite element model (FEM) is solved numerically to estimate the current flow. Various software tools are available for each of these steps, as well as processing pipelines that connect these tools for automated or semi-automated processing. The goal of the present tool-realistic volumetric-approach to simulate transcranial electric stimulation (ROAST)-is to provide an end-to-end pipeline that can automatically process individual heads with realistic volumetric anatomy leveraging open-source software and custom scripts to improve segmentation and execute electrode placement. Approach ROAST combines the segmentation algorithm of SPM12, a Matlab script for touch-up and automatic electrode placement, the finite element mesher iso2mesh and the solver getDP. We compared its performance with commercial FEM software, and with SimNIBS, a well-established open-source modeling pipeline. Main results The electric fields estimated with ROAST differ little from the results obtained with commercial meshing and FEM solving software. We also do not find large differences between the various automated segmentation methods used by ROAST and SimNIBS. We do find larger differences when volumetric segmentations are converted into surfaces in SimNIBS. However, evaluation on intracranial recordings from human subjects suggests that ROAST and SimNIBS are not significantly different in predicting field distribution, provided that users have detailed knowledge of SimNIBS.
Significance We hope that the detailed comparisons presented here of various choices in this modeling pipeline can provide guidance for future tool development. We released ROAST as an open-source, easy-to-install and fully-automated pipeline for individualized TES modeling.
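The current-flow principle behind such pipelines can be illustrated with a toy finite-difference solve of Laplace's equation on a homogeneous 2D grid with two fixed-potential "electrodes". This is a deliberately simplified sketch of the physics only; ROAST and SimNIBS use FEM on segmented, tissue-specific 3D head models, and nothing below reflects their implementation:

```python
import numpy as np

# Toy illustration: Laplace's equation on a homogeneous 2D grid with
# two Dirichlet electrode patches; the rest of the boundary is grounded.
n = 40
v = np.zeros((n, n))
anode = (slice(0, 1), slice(5, 10))     # +1 V patch on top edge
cathode = (slice(0, 1), slice(30, 35))  # -1 V patch on top edge

for _ in range(5000):                   # Jacobi relaxation
    v[anode], v[cathode] = 1.0, -1.0
    v[1:-1, 1:-1] = 0.25 * (v[:-2, 1:-1] + v[2:, 1:-1]
                            + v[1:-1, :-2] + v[1:-1, 2:])
    v[anode], v[cathode] = 1.0, -1.0

ey, ex = np.gradient(-v)                # E = -grad(V), the stimulating field
print(round(float(v.max()), 2), round(float(v.min()), 2))  # 1.0 -1.0
```

Real pipelines solve the same boundary-value problem, but on meshes with millions of elements and per-tissue conductivities, which is why dedicated FEM solvers such as getDP are used.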

190 citations


Journal ArticleDOI
TL;DR: This framework significantly improves attention detection accuracy with inter-subject classification and is capable of learning from raw data with the least amount of pre-processing, which in turn eliminates the extensive computational load of time-consuming data preparation and feature extraction.
Abstract: Objective Despite the effective application of deep learning (DL) in brain-computer interface (BCI) systems, the successful execution of this technique, especially for inter-subject classification, in cognitive BCI has not been accomplished yet. In this paper, we propose a framework based on the deep convolutional neural network (CNN) to detect the attentive mental state from single-channel raw electroencephalography (EEG) data. Approach We develop an end-to-end deep CNN to decode the attentional information from an EEG time series. We also explore the consequences of input representations on the performance of deep CNN by feeding three different EEG representations into the network. To ensure the practical application of the proposed framework and avoid time-consuming re-training, we perform inter-subject transfer learning techniques as a classification strategy. Finally, to interpret the learned attentional patterns, we visualize and analyse the network perception of the attention and non-attention classes. Main results The average classification accuracy is 79.26%, with only 15.83% of 120 subjects having an accuracy below 70% (a generally accepted threshold for BCI). This is notable because achieving high classification accuracy is particularly difficult with inter-subject approaches. This end-to-end classification framework surpasses conventional classification methods for attention detection. The visualization results demonstrate that the learned patterns from the raw data are meaningful. Significance This framework significantly improves attention detection accuracy with inter-subject classification. Moreover, this study sheds light on the research on end-to-end learning; the proposed network is capable of learning from raw data with the least amount of pre-processing, which in turn eliminates the extensive computational load of time-consuming data preparation and feature extraction.

160 citations


Journal ArticleDOI
TL;DR: This is the first time that high-quality speech has been reconstructed from neural recordings during speech production using deep neural networks, and uses a densely connected convolutional neural network topology which is well-suited to work with the small amount of data available from each participant.
Abstract: Objective Direct synthesis of speech from neural signals could provide a fast and natural way of communication to people with neurological diseases. Invasively-measured brain activity (electrocorticography; ECoG) supplies the necessary temporal and spatial resolution to decode fast and complex processes such as speech production. A number of impressive advances in speech decoding using neural signals have been achieved in recent years, but the complex dynamics are still not fully understood, and it is unlikely that simple linear models can capture the relation between neural activity and continuous spoken speech. Approach Here we show that deep neural networks can be used to map ECoG from speech production areas onto an intermediate representation of speech (logMel spectrogram). The proposed method uses a densely connected convolutional neural network topology which is well-suited to work with the small amount of data available from each participant. Main results In a study with six participants, we achieved correlations up to r = 0.69 between the reconstructed and original logMel spectrograms. We transferred our prediction back into an audible waveform by applying a WaveNet vocoder. The vocoder was conditioned on logMel features that harnessed a much larger, pre-existing data corpus to provide the most natural acoustic output. Significance To the best of our knowledge, this is the first time that high-quality speech has been reconstructed from neural recordings during speech production using deep neural networks.
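The reconstruction quality metric reported here (Pearson r between original and reconstructed logMel spectrograms) can be sketched as follows; the function name and synthetic "reconstruction" are illustrative assumptions, not the paper's code:

```python
import numpy as np

def mean_spectrogram_correlation(original, reconstructed):
    """Pearson r between original and reconstructed spectrograms,
    computed per (mel) bin over time frames, then averaged."""
    rs = [np.corrcoef(o, r)[0, 1] for o, r in zip(original, reconstructed)]
    return float(np.mean(rs))

rng = np.random.default_rng(1)
orig = rng.standard_normal((40, 200))                 # 40 mel bins x 200 frames
noisy = orig + 0.5 * rng.standard_normal(orig.shape)  # imperfect "reconstruction"
r = mean_spectrogram_correlation(orig, noisy)
print(0.0 < r < 1.0)  # True
```

A perfect reconstruction would give r = 1 in every bin; values like the reported r = 0.69 indicate substantial but incomplete recovery of the spectral envelope.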

140 citations


Journal ArticleDOI
TL;DR: A hybrid deep network framework to improve classification accuracy of four-class MI-EEG signal is proposed and could be of great interest for real-life brain-computer interfaces (BCIs).
Abstract: Objective Learning the structures and unknown correlations of a motor imagery electroencephalogram (MI-EEG) signal is important for its classification. It is also a major challenge to obtain good classification accuracy from the increased number of classes and increased variability from different people. In this study, a four-class MI task is investigated. Approach A novel end-to-end hybrid deep learning scheme is developed to decode the MI task from EEG data. The proposed algorithm consists of two parts: (a) a one-versus-rest filter bank common spatial pattern is adopted to preprocess and pre-extract the features of the four-class MI signal; (b) a hybrid deep network based on the convolutional neural network and long short-term memory (LSTM) network is proposed to extract and learn the spatial and temporal features of the MI signal simultaneously. Main results The main contribution of this paper is to propose a hybrid deep network framework to improve the classification accuracy of the four-class MI-EEG signal. The hybrid deep network is a subject-independent shared neural network, which means it can be trained by using the training data from all subjects to form one model. Significance The classification performance obtained by the proposed algorithm on brain-computer interface (BCI) competition IV dataset 2a in terms of accuracy is 83% and Cohen's kappa value is 0.80. Finally, the shared hybrid deep network is evaluated by every subject respectively, and the experimental results illustrate that the shared neural network has satisfactory accuracy. Thus, the proposed algorithm could be of great interest for real-life BCIs.
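Cohen's kappa, reported alongside accuracy above, corrects multi-class accuracy for chance agreement. A minimal sketch (the confusion matrix below is hypothetical, not the paper's results):

```python
import numpy as np

def cohens_kappa(confusion):
    """Cohen's kappa from a square confusion matrix:
    kappa = (p_o - p_e) / (1 - p_e)."""
    confusion = np.asarray(confusion, dtype=float)
    total = confusion.sum()
    p_o = np.trace(confusion) / total                             # observed agreement
    p_e = (confusion.sum(0) * confusion.sum(1)).sum() / total**2  # chance agreement
    return (p_o - p_e) / (1 - p_e)

# Hypothetical 4-class MI confusion matrix (rows: true, cols: predicted)
cm = [[45, 2, 2, 1],
      [3, 44, 2, 1],
      [2, 2, 45, 1],
      [1, 2, 2, 45]]
print(round(cohens_kappa(cm), 3))  # 0.86
```

For balanced four-class data, chance agreement is 0.25, so kappa rescales accuracy onto a 0-1 range where 0 is chance-level performance.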

116 citations


Journal ArticleDOI
TL;DR: Results indicate that the CNN model can extract underlying motor control information from EMG signals during single and multiple degree-of-freedom (DoF) tasks, due to higher regression accuracies especially with high EMG amplitudes.
Abstract: OBJECTIVE Deep learning models can learn representations of data that extract useful information in order to perform prediction without feature engineering. In this paper, an electromyography (EMG) control scheme with a regression convolutional neural network (CNN) is proposed as a substitute of conventional regression models that use purposefully designed features. APPROACH The usability of the regression CNN model is validated for the first time, using an online Fitts' law style test with both individual and simultaneous wrist motions. Results were compared to those of a support vector regression-based scheme with a group of widely used extracted features. MAIN RESULTS In spite of the proven efficiency of these well-known features, the CNN-based system outperformed the support vector machine (SVM) based scheme in throughput, due to higher regression accuracies especially with high EMG amplitudes. SIGNIFICANCE These results indicate that the CNN model can extract underlying motor control information from EMG signals during single and multiple degree-of-freedom (DoF) tasks. The advantage of regression CNN over classification CNN (studied previously) is that it allows independent and simultaneous control of motions.
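The throughput metric used in Fitts' law style tests divides a target's index of difficulty by the time taken to reach it. A minimal sketch (the trial values are hypothetical, not from the study):

```python
import numpy as np

def fitts_throughput(distance, width, movement_time_s):
    """Fitts' law throughput (bits/s): ID = log2(D/W + 1), TP = ID / MT."""
    index_of_difficulty = np.log2(distance / width + 1.0)
    return index_of_difficulty / movement_time_s

# Hypothetical cursor trial: target 8 units away, 2 units wide, reached in 1.5 s
tp = fitts_throughput(distance=8.0, width=2.0, movement_time_s=1.5)
print(round(float(tp), 3))  # 1.548
```

Because throughput folds task difficulty and completion time into one number, it lets control schemes be compared across targets of different sizes and distances.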

111 citations


Journal ArticleDOI
TL;DR: A new implementation of the Finite Element Method (FEM) for TMS and TES that is based on modern algorithms and libraries is presented and the convergence results suggest that accurately capturing the tissue geometry in addition to choosing a sufficiently accurate numerical method is of fundamental importance for accurate simulations.
Abstract: Objective Transcranial magnetic stimulation (TMS) and transcranial electric stimulation (TES) modulate brain activity non-invasively by generating electric fields either by electromagnetic induction or by injecting currents via skin electrodes. Numerical simulations based on anatomically detailed head models of the TMS and TES electric fields can help us to understand and optimize the spatial stimulation pattern in the brain. However, most realistic simulations are still slow, and the role of anatomical fidelity on simulation accuracy has not been evaluated in detail so far. Approach We present and validate a new implementation of the finite element method (FEM) for TMS and TES that is based on modern algorithms and libraries. We also evaluate the convergence of the simulations and estimate errors stemming from numerical and modelling aspects. Main results Comparisons with analytical solutions for spherical phantoms validate our new FEM implementation, which is three to six times faster than previous implementations. The convergence results suggest that accurately capturing the tissue geometry in addition to choosing a sufficiently accurate numerical method is of fundamental importance for accurate simulations. Significance The new implementation allows for a substantial increase in computational efficiency of FEM TMS and TES simulations. This is especially relevant for applications such as the systematic assessment of model uncertainty and the optimization of multi-electrode TES montages. The results of our systematic error analysis allow the user to select the best tradeoff between model resolution and simulation speed for a specific application. The new FEM code is openly available as a part of our open-source software SimNIBS 3.0.

88 citations


Journal ArticleDOI
TL;DR: The results demonstrated the feasibility and efficiency of combining high-frequency steady-state visual evoked potential-based BCI and computer vision-based object recognition to control robotic arms.
Abstract: Objective Recent attempts in developing brain-computer interface (BCI)-controlled robots have shown the potential of this area in the field of assistive robots. However, implementing the process of picking and placing objects using a BCI-controlled robotic arm still remains challenging. BCI performance, system portability, and user comfort need to be further improved. Approach In this study, a novel control approach, which combines high-frequency steady-state visual evoked potential (SSVEP)-based BCI and computer vision-based object recognition, is proposed to control a robotic arm for performing pick and place tasks that require control with multiple degrees of freedom. The computer vision can identify objects in the workspace and locate their positions, while the BCI allows the user to select one of these objects to be acted upon by the robotic arm. The robotic arm was programmed to be able to autonomously pick up and place the selected target object without moment-by-moment supervision by the user. Main results Online results obtained from ten healthy subjects indicated that a BCI command for the proposed system could be selected from four possible choices in 6.5 s (i.e. 2.25 s for visual stimulation and 4.25 s for gaze shifting) with 97.75% accuracy. All subjects could successfully complete the pick and place tasks using the proposed system. Significance These results demonstrated the feasibility and efficiency of combining high-frequency SSVEP-based BCI and computer vision-based object recognition to control robotic arms. The control strategy presented here could be extended to control robotic arms to perform other complicated tasks.
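A common way to summarize an operating point like "four choices in 6.5 s at 97.75% accuracy" is the Wolpaw information transfer rate (ITR). A minimal sketch, plugging in the numbers reported above (the function name is an illustrative assumption):

```python
import math

def itr_bits_per_min(n_classes, accuracy, selection_time_s):
    """Wolpaw information transfer rate for an N-choice BCI."""
    n, p = n_classes, accuracy
    bits = math.log2(n) + p * math.log2(p)
    if p < 1.0:
        bits += (1 - p) * math.log2((1 - p) / (n - 1))
    return bits * 60.0 / selection_time_s

# Reported operating point: 4 choices, 97.75% accuracy, 6.5 s per selection
print(round(itr_bits_per_min(4, 0.9775, 6.5), 1))  # 16.7 bits/min
```

ITR trades off accuracy against selection speed, so a faster but less accurate paradigm can still score higher than a slow, near-perfect one.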

85 citations


Journal ArticleDOI
TL;DR: These results provide a facile implantation method to apply ultraflexible neural probes in scalable neural recording and to control the surgical injury by reducing the microwire diameters to cellular scale.
Abstract: Objective Implanted microelectrodes provide a unique means to directly interface with the nervous system but have been limited by the lack of stable functionality. There is growing evidence suggesting that substantially reducing the mechanical rigidity of neural electrodes promotes tissue compatibility and improves their recording stability in both the short- and long-term. However, the miniaturized dimensions and ultraflexibility desired for mitigating tissue responses preclude the probe's self-supported penetration into the brain tissue. Approach Here we demonstrate the high-throughput implantation of multi-shank ultraflexible neural electrode arrays with surgical footprints as small as 200 µm² in a mouse model. This is achieved by using arrays of tungsten microwires as shuttle devices, and bio-dissolvable adhesive polyethylene glycol (PEG) to temporarily attach a shank onto each microwire. Main results We show the ability to simultaneously deliver electrode arrays in designed patterns, to adjust the implantation locations of the shanks by need, to target different brain structures, and to control the surgical injury by reducing the microwire diameters to cellular scale. Significance These results provide a facile implantation method to apply ultraflexible neural probes in scalable neural recording.

Journal ArticleDOI
TL;DR: This work investigates electroencephalogram (EEG) signal processing techniques, aiming to enhance the classification performance of multiple MI tasks in terms of tackling the challenges caused by the vast variety of subjects.
Abstract: Objective. A motor-imagery-based brain–computer interface (MI-BCI) provides an alternative way for people to interface with the outside world. However, the classification accuracy of MI signals remains challenging, especially with an increased number of classes and the presence of high variations with data from multiple individual people. This work investigates electroencephalogram (EEG) signal processing techniques, aiming to enhance the classification performance of multiple MI tasks while tackling the challenges posed by high variability across subjects. Approach. This work introduces a novel method to extract discriminative features by combining the features of functional brain networks with two other feature extraction algorithms: common spatial pattern (CSP) and local characteristic-scale decomposition (LCD). After functional brain networks are established from the MI EEG signals of the subjects, the measures of degree in the binary networks are extracted as additional features and fused with features in the frequency and spatial domains extracted by the CSP and LCD algorithms. A real-time BCI robot control system is designed and implemented with the proposed method. Subjects can control the movement of the robot through four classes of MI tasks. Both the BCI competition IV dataset 2a and real-time data acquired in our designed system are used to validate the performance of the proposed method. Main results. In the offline experiments, the average classification accuracy of the proposed method reaches 79.7%, outperforming the majority of popular algorithms. Experimental results with real-time data also demonstrate promising real-time performance. Significance. The experimental results show that our proposed method is robust in extracting discriminative brain activity features when performing different MI tasks, hence improving the classification accuracy in four-class MI tasks.
The high classification accuracy and low computational demand show a considerable practicality for real-time rehabilitation systems.
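The degree features described above come from binarizing a functional connectivity matrix and counting each channel's supra-threshold connections. A minimal sketch (the correlation matrix and threshold are hypothetical, not the paper's data):

```python
import numpy as np

def degree_features(connectivity, threshold):
    """Binarize a functional connectivity matrix and return each
    channel's degree (number of supra-threshold connections)."""
    adj = (np.abs(connectivity) > threshold).astype(int)
    np.fill_diagonal(adj, 0)            # ignore self-connections
    return adj.sum(axis=1)

# Hypothetical 4-channel correlation matrix
c = np.array([[1.0, 0.8, 0.2, 0.6],
              [0.8, 1.0, 0.1, 0.7],
              [0.2, 0.1, 1.0, 0.3],
              [0.6, 0.7, 0.3, 1.0]])
print(degree_features(c, threshold=0.5).tolist())  # [2, 2, 0, 2]
```

The resulting per-channel degree vector can then be concatenated with CSP and LCD feature vectors before classification, as the method describes.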

Journal ArticleDOI
TL;DR: It is hypothesized that fiber orientation influences activation thresholds and that fiber orientations can be selectively targeted with DBS waveforms, and anodic stimulation preferentially activates orthogonal fibers, approaching or leaving the electrode, at lower thresholds for similar therapeutic benefit in DBS with decreased power consumption.
Abstract: Objective: During deep brain stimulation (DBS), it is well understood that extracellular cathodic stimulation can cause activation of passing axons. Activation can be predicted from the second derivative of the electric potential along an axon, which depends on axonal orientation with respect to the stimulation source. We hypothesize that fiber orientation influences activation thresholds and that fiber orientations can be selectively targeted with DBS waveforms. Approach: We used bioelectric field and multicompartment NEURON models to explore preferential activation based on fiber orientation during monopolar or bipolar stimulation. Preferential fiber orientation was extracted from the principal eigenvectors and eigenvalues of the Hessian matrix of the electric potential. We tested cathodic, anodic, and charge-balanced pulses to target neurons based on fiber orientation in general and clinical scenarios. Main Results: Axons passing the DBS lead have positive second derivatives around a cathode, whereas orthogonal axons have positive second derivatives around an anode, as indicated by the Hessian. Multicompartment NEURON models confirm that passing fibers are activated by cathodic stimulation, and orthogonal fibers are activated by anodic stimulation. Additionally, orthogonal axons have lower thresholds compared to passing axons. In a clinical scenario, fiber pathways associated with therapeutic benefit can be targeted with anodic stimulation at 50% lower stimulation amplitudes. Significance: Fiber orientations can be selectively targeted with simple changes to the stimulus waveform. Anodic stimulation preferentially activates orthogonal fibers, approaching or leaving the electrode, at lower thresholds for similar therapeutic benefit in DBS with decreased power consumption.
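The sign argument above (positive second derivative of the potential around a cathode for passing fibers, and the reverse for anodic stimulation) can be checked with a toy point-source model in a homogeneous medium. This sketch uses an idealized point source, not the paper's bioelectric field or NEURON models; all parameter values are illustrative:

```python
import numpy as np

def activating_function(x_nodes, d, current, sigma=0.2):
    """Second spatial difference of the extracellular potential of a
    point source along a straight passing axon a distance d away
    (homogeneous, isotropic medium with conductivity sigma, S/m)."""
    r = np.sqrt(x_nodes**2 + d**2)
    v = current / (4.0 * np.pi * sigma * r)
    return np.diff(v, 2)                 # discrete d2V/dx2 at interior nodes

x = np.linspace(-5e-3, 5e-3, 201)        # node positions (m), axon 1 mm from source
af_cathodic = activating_function(x, d=1e-3, current=-1e-3)
af_anodic = activating_function(x, d=1e-3, current=+1e-3)
mid = len(af_cathodic) // 2              # node closest to the source
print(af_cathodic[mid] > 0, af_anodic[mid] < 0)  # True True
```

A positive activating function depolarizes the membrane, so the sign flip between cathodic and anodic stimulation is what allows orientation-selective targeting.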

Journal ArticleDOI
TL;DR: The deep learning method presented here surpassed the performance of previously reported methods using computationally expensive features with standard machine learning methods like logistic regression and support vector machine classifiers.
Abstract: OBJECTIVE This paper introduces a fully automated, subject-specific deep-learning convolutional neural network (CNN) system for forecasting seizures using ambulatory intracranial EEG (iEEG). The system was tested on a hand-held device (Mayo Epilepsy Assist Device) in a pseudo-prospective mode using iEEG from four canines with naturally occurring epilepsy. APPROACH The system was trained and tested on 75 seizures collected over 1608 days utilizing a genetic algorithm to optimize forecasting hyper-parameters (prediction horizon (PH), median filter window length, and probability threshold) for each subject-specific seizure forecasting model. The trained CNN models were deployed on a hand-held tablet computer and tested on testing iEEG datasets from four canines. The results from the iEEG testing datasets were compared with Monte Carlo simulations using a Poisson random predictor with equal time in warning to evaluate seizure forecasting performance. MAIN RESULTS The results show the CNN models forecasted seizures at rates significantly above chance in all four dogs (p < 0.01, with mean 0.79 sensitivity and 18% time in warning). The deep learning method presented here surpassed the performance of previously reported methods using computationally expensive features with standard machine learning methods like logistic regression and support vector machine classifiers. SIGNIFICANCE Our findings principally support the feasibility of deploying trained CNN models on a hand-held computational device (Mayo Epilepsy Assist Device) that analyzes streaming iEEG data for real-time seizure forecasting.
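The chance comparison described above rests on the idea that a random predictor spending a fraction t of the record in warning catches each seizure with probability roughly t. A minimal Monte Carlo sketch of that null distribution (a simplification of the Poisson-predictor simulation, with illustrative parameter values):

```python
import numpy as np

def chance_sensitivity(time_in_warning, n_seizures, n_trials=2000, seed=0):
    """Monte Carlo estimate of the sensitivity a random predictor achieves
    when a fraction `time_in_warning` of the record is spent in warning:
    each seizure is 'forecast' with that probability, independently."""
    rng = np.random.default_rng(seed)
    hits = rng.random((n_trials, n_seizures)) < time_in_warning
    return hits.mean(axis=1)             # per-trial sensitivity

sens = chance_sensitivity(time_in_warning=0.18, n_seizures=20)
# A model's observed sensitivity is compared against this chance distribution
print(round(float(sens.mean()), 2))      # ~0.18
```

A reported sensitivity of 0.79 at 18% time in warning sits far in the upper tail of this null distribution, which is what "significantly above chance" quantifies.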

Journal ArticleDOI
TL;DR: The new temporary-tattoo dry electrode system for sleep staging analysis may allow the identification of disorders associated with neurological disorders such as rapid eye movement (REM) sleep behavior disorder.
Abstract: Objective Circadian and sleep dysfunction have long been symptomatic hallmarks of a variety of devastating neurodegenerative conditions. The gold standard for sleep monitoring is overnight sleep in a polysomnography (PSG) laboratory. However, this method has several limitations such as availability, cost and being labour-intensive. In recent years there has been a heightened interest in home-based sleep monitoring via wearable sensors. Our objective was to demonstrate the use of printed electrode technology as a novel platform for sleep monitoring. Approach Printed electrode arrays offer exciting opportunities in the realm of wearable electrophysiology. In particular, soft electrodes can conform neatly to the wearer's skin, allowing user convenience and stable recordings. As such, soft skin-adhesive non-gel-based electrodes offer a unique opportunity to combine electroencephalography (EEG), electromyography (EMG), electrooculography (EOG) and facial EMG capabilities to capture neural and motor functions in comfortable non-laboratory settings. In this investigation temporary-tattoo dry electrode system for sleep staging analysis was designed, implemented and tested. Main results EMG, EOG and EEG were successfully recorded using a wireless system. Stable recordings were achieved both at a hospital environment and a home setting. Sleep monitoring during a 6 h session shows clear differentiation of sleep stages. Significance The new system has great potential in monitoring sleep disorders in the home environment. Specifically, it may allow the identification of disorders associated with neurological disorders such as rapid eye movement (REM) sleep behavior disorder.

Journal ArticleDOI
TL;DR: A computationally efficient method based on the activating function, AF-Max, reliably reproduces the VTAs generated by direct axon modeling, and is proposed as a potentially superior model for representing generic neural tissue activation.
Abstract: Objective: Computational models are a popular tool for predicting the effects of deep brain stimulation (DBS) on neural tissue. One commonly used model, the volume of tissue activated (VTA), is computed using multiple methodologies. We quantified differences in the VTAs generated by five methodologies: the traditional axon model method, the electric field norm, and three activating function based approaches — the activating function at each grid point in the tangential direction (AF-Tan) or in the maximally activating direction (AF-3D), and the maximum activating function along the entire length of a tangential fiber (AF-Max). Approach: We computed the VTA using each method across multiple stimulation settings. The resulting volumes were compared for similarity, and the methodologies were analyzed for their differences in behavior. Main Results: Activation threshold values for both the electric field norm and the activating function vary with regards to electrode configuration, pulse width, and frequency. All methods produced highly similar volumes for monopolar stimulation. For bipolar electrode configurations, only the maximum activating function along the tangential axon method, AF-Max, produced similar volumes to those produced by the axon model method. Further analysis revealed that both of these methods are biased by their exclusive use of tangential fiber orientations. In contrast, the activating function in the maximally activating direction method, AF-3D, produces a VTA that is free of axon orientation and projection bias. Significance: Simulating tangentially oriented axons, the standard approach of computing the VTA, is too computationally expensive for widespread implementation and yields results biased by the assumption of tangential fiber orientation. In this work, we show that a computationally efficient method based on the activating function, AF-Max, reliably reproduces the VTAs generated by direct axon modeling. Further, we propose another method, AF-3D, as a potentially superior model for representing generic neural tissue activation.
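The activating-function idea compared above can be illustrated with a minimal sketch: for a point current source in a homogeneous medium, the extracellular potential along a straight "tangential" fiber is computed and its discrete second spatial difference taken, with an AF-Max-style scalar summary over the whole fiber. All parameter values below (resistivity, current, node spacing, fiber distance) are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Assumed, illustrative parameters for a monopolar point source
rho = 3.0                                 # extracellular resistivity, ohm*m
I = -1e-3                                 # cathodic stimulus current, A
dx = 1e-3                                 # internodal spacing along the fiber, m

x = np.arange(-20, 21) * dx               # node positions along a straight fiber
r = np.sqrt(x**2 + (2e-3) ** 2)           # node distances to a source 2 mm away
V = rho * I / (4 * np.pi * r)             # point-source extracellular potential

# Activating function: second spatial difference of V along the fiber
af = (V[:-2] - 2 * V[1:-1] + V[2:]) / dx**2

# AF-Max-style summary: maximum activating function over the entire fiber
print(af.max() > 0)                       # depolarizing peak under the cathode
```

For cathodic stimulation the activating function peaks directly beneath the electrode, which is why a maximum taken along the whole fiber (AF-Max) tracks the axon-model VTA more closely than a pointwise evaluation.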

Journal ArticleDOI
TL;DR: It is shown for the first time that intraneural sensory feedback of the grip force improves the sensorimotor control of a transradial amputee controlling a myoelectric prosthesis and opens up new possibilities to improve the quality of life of amputees using a neural prosthesis.
Abstract: Objective Tactile afferents in the human hand provide fundamental information about hand-environment interactions, which is used by the brain to adapt the motor output to the physical properties of the object being manipulated. A hand amputation disrupts both afferent and efferent pathways from/to the hand, severely impairing the individual's motor repertoire. Although motor functions may be partially recovered by using a myoelectric prosthesis, providing functionally effective sensory feedback to users of prosthetics is a largely unsolved challenge. While past studies using invasive stimulation suggested that sensory feedback may help in handling fragile objects, none explored the underpinning, relearned, motor coordination during grasping. In this study, we aimed to show for the first time that intraneural sensory feedback of the grip force (GF) improves the sensorimotor control of a transradial amputee controlling a myoelectric prosthesis. Approach We performed a longitudinal study testing a single subject (clinical trial registration number NCT02848846). A stacking cups test (CUP) performed over two weeks aimed to measure the subject's ability to finely regulate the GF applied with the prosthesis. A pick and lift test (PLT), performed at the end of the study, measured the level of motor coordination, and whether the subject transferred the motor skills learned in the CUP to a novel, untrained task. Main results The results show that intraneural sensory feedback increases the subject's ability in regulating the GF and allows for improved performance over time. Additionally, the PLT demonstrated that the subject was able to generalize and transfer her manipulation skills to an unknown task and to improve her motor coordination. Significance Our findings suggest that intraneural sensory feedback holds the potential of restoring functionally effective tactile feedback. 
This opens up new possibilities to improve the quality of life of amputees using a neural prosthesis.

Journal ArticleDOI
TL;DR: The neural-drive method for real-time finger force estimation was more accurate over time than the conventional EMG-amplitude method during prolonged muscle contractions, and can potentially offer a more accurate and robust neural interface technique for reliable neural-machine interactions based on MU pool discharge information.
Abstract: Objective The goal of this study was to perform real-time estimation of isometric finger extension force using the discharge information of motor units (MUs). Approach A real-time electromyogram (EMG) decomposition method based on the fast independent component analysis (FastICA) algorithm was developed to extract MU discharge events from high-density (HD) EMG recordings. The decomposition was first performed offline during an initialization period, and the obtained separation matrix was then applied to new data samples in real-time. Since MU pool discharge probability reflects the neural drive to spinal motoneurons, individual finger forces were estimated based on a firing rate-force model established during the initialization, termed the neural-drive method. The conventional EMG amplitude-based method was used to estimate the forces as a comparison, termed the EMG-amplitude method. Simulated HD-EMG signals were first used to evaluate the accuracy of the real-time decomposition. Experimental EMG recordings of 5 min of isometric finger extension with pseudorandom force levels were used to assess the performance of force estimation over time. Main results The simulation results showed that the accuracy of real-time decomposition was 86%, compared with an offline accuracy of 94%. However, the real-time decomposition accuracy was stable over time. The experimental results showed that the neural-drive method had a significantly smaller root mean square error (RMSE) of the force estimation compared with the EMG-amplitude method, which was consistent across fingers. Additionally, the RMSE of the neural-drive method was stable until 230 s, while the RMSE of the EMG-amplitude method increased progressively over time. Significance The neural-drive method for real-time finger force estimation was more accurate over time compared with the conventional EMG-amplitude method during prolonged muscle contractions. 
The outcomes can potentially offer a more accurate and robust neural interface technique for reliable neural-machine interactions based on MU pool discharge information.
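The neural-drive idea can be sketched in a few lines: pooled motor-unit discharges are smoothed into a firing-rate estimate and mapped to force with a linear model fit during an initialization period. The spike simulation, gains, and smoothing window below are all illustrative assumptions, not the paper's decomposition pipeline or model.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 100                                  # Hz
t = np.arange(0.0, 20.0, 1 / fs)
force = 6 + 4 * np.sin(2 * np.pi * 0.15 * t)      # slowly varying target force

# Toy MU pool: pooled discharge probability scales with force
n_units, gain = 10, 2.0                   # assumed gain, spikes/s per unit force
p = np.clip(gain * force / fs, 0, 1)      # per-sample spike probability
pooled = (rng.random((n_units, t.size)) < p).sum(axis=0).astype(float)

# Smoothed pool firing rate = neural-drive estimate
win = np.hanning(int(0.4 * fs))
win /= win.sum()
rate = np.convolve(pooled, win, mode="same")

# "Initialization": fit a linear rate->force model on the first half,
# then predict force on the held-out second half
half = t.size // 2
a, b = np.polyfit(rate[:half], force[:half], 1)
rmse = np.sqrt(np.mean((a * rate[half:] + b - force[half:]) ** 2))
print(rmse < 2.0)
```

Because the pool firing rate reflects the common neural drive rather than interference EMG amplitude, a mapping of this form can remain stable during prolonged contractions, which is the property the study quantifies.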

Journal ArticleDOI
TL;DR: Findings validate the feasibility of the proposed NFT to improve sensorimotor cortical activations and BCI performance during motor imagery, and the approach is promising for optimizing conventional NFT protocols and evaluating the effectiveness of motor training.
Abstract: Objective We proposed a brain-computer interface (BCI) based visual-haptic neurofeedback training (NFT) paradigm that incorporates a synchronous visual scene and proprioceptive electrical stimulation feedback. The goal of this work was to improve sensorimotor cortical activations and classification performance during motor imagery (MI). In addition, their correlations and the underlying brain network patterns were investigated. Approach 64-channel electroencephalographic (EEG) data were recorded in nineteen healthy subjects during MI before and after NFT. During NFT sessions, the synchronous visual-haptic feedback was driven by real-time lateralized relative event-related desynchronization (lrERD). Main results Comparing the control sessions before and after the NFT, the cortical activations measured by multi-band (i.e. alpha_1: 8-10 Hz, alpha_2: 11-13 Hz, beta_1: 15-20 Hz and beta_2: 22-28 Hz) absolute ERD powers and lrERD patterns were significantly enhanced. The classification performance was also significantly improved, with mean classification accuracy improving by ~9%, from a relatively poor baseline to ~85%. Additionally, there were significant correlations between lrERD patterns and classification accuracies. The partial directed coherence based functional connectivity (FC) networks covering the sensorimotor area also showed an increase after the NFT. Significance These findings validate the feasibility of our proposed NFT to improve sensorimotor cortical activations and BCI performance during motor imagery. The approach also holds promise for optimizing conventional NFT protocols and for evaluating the effectiveness of motor training.
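A simplified version of the lateralized relative ERD (lrERD) feedback signal can be sketched as follows: relative ERD is the fractional band-power drop from rest to task, and its lateralization is the contralateral-minus-ipsilateral difference. The band edges, channel names (C3/C4), and synthetic signals are assumptions for illustration; the paper's real-time lrERD definition may differ in detail.

```python
import numpy as np

def bandpower(x, fs, lo, hi):
    """Mean periodogram power of x in the [lo, hi] Hz band."""
    freqs = np.fft.rfftfreq(x.size, 1 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2 / (fs * x.size)
    return psd[(freqs >= lo) & (freqs <= hi)].mean()

def relative_erd(rest, task, fs, lo=8, hi=13):
    """Relative ERD: positive when band power drops from rest to task."""
    p_rest = bandpower(rest, fs, lo, hi)
    return (p_rest - bandpower(task, fs, lo, hi)) / p_rest

fs = 250
rng = np.random.default_rng(1)
t = np.arange(2 * fs) / fs
mu = np.sin(2 * np.pi * 10 * t)                    # 10 Hz mu rhythm
rest_c3 = 2.0 * mu + rng.standard_normal(t.size)   # strong rhythm at rest
task_c3 = 0.5 * mu + rng.standard_normal(t.size)   # desynchronized during right-hand MI
task_c4 = 2.0 * mu + rng.standard_normal(t.size)   # ipsilateral channel unchanged

erd_c3 = relative_erd(rest_c3, task_c3, fs)
erd_c4 = relative_erd(rest_c3, task_c4, fs)        # reusing rest as a toy baseline
lr_erd = erd_c3 - erd_c4                           # lateralized relative ERD
print(lr_erd > 0.5)
```

A feedback signal of this form rewards lateralized desynchronization specifically, rather than any global power change, which is what makes it suitable for driving visual-haptic feedback during MI.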

Journal ArticleDOI
TL;DR: It is concluded that simultaneous recordings of the perceived sound and the corresponding EEG response may be a practical tool to assess speech intelligibility in the context of hearing aids.
Abstract: Objective Speech signals have a remarkable ability to entrain brain activity to the rapid fluctuations of speech sounds. For instance, one can readily measure a correlation of the sound amplitude with the evoked responses of the electroencephalogram (EEG), and the strength of this correlation is indicative of whether the listener is attending to the speech. In this study we asked whether this stimulus-response correlation is also predictive of speech intelligibility. Approach We hypothesized that when a listener fails to understand the speech in adverse hearing conditions, attention wanes and stimulus-response correlation also drops. To test this, we measure a listener's ability to detect words in noisy speech while recording their brain activity using EEG. We alter intelligibility without changing the acoustic stimulus by pairing it with congruent and incongruent visual speech. Main results For almost all subjects we found that an improvement in speech detection coincided with an increase in correlation between the noisy speech and the EEG measured over a period of 30 min. Significance We conclude that simultaneous recordings of the perceived sound and the corresponding EEG response may be a practical tool to assess speech intelligibility in the context of hearing aids.
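The stimulus-response correlation at the heart of this approach can be sketched by correlating a (synthetic) speech envelope with a delayed, noisy "EEG" at a range of lags and locating the peak. Sampling rate, response delay, and noise level are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
fs = 64                                    # Hz (both signals assumed downsampled)
n = fs * 60
env = np.convolve(rng.standard_normal(n), np.ones(4) / 4, mode="same")  # toy envelope

delay = int(0.1 * fs)                      # simulated ~100 ms neural response latency
eeg = np.roll(env, delay) + rng.standard_normal(n)   # attenuated, noisy response

def corr_at_lag(stim, resp, k):
    """Pearson correlation of the stimulus with the response delayed by k samples."""
    return np.corrcoef(stim[: n - k], resp[k:])[0, 1]

lags = np.arange(1, int(0.3 * fs))
r = np.array([corr_at_lag(env, eeg, k) for k in lags])
best = int(lags[np.argmax(r)])
print(best, round(float(r.max()), 2))      # peak lag near the simulated delay
```

In the study, a drop in this correlation under adverse hearing conditions is the proposed marker of reduced intelligibility; the lag profile additionally reflects the latency of the cortical response.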

Journal ArticleDOI
TL;DR: The results suggest that LSSMs with low-dimensional latent states can capture important dynamics in human large-scale ECoG power features, thus achieving dynamic modeling and dimensionality reduction.
Abstract: Objective. Developing dynamic network models for multisite electrocorticogram (ECoG) activity can help study neural representations and design neurotechnologies in humans given the clinical promise of ECoG. However, dynamic network models have so far largely focused on spike recordings rather than ECoG. A dynamic network model for ECoG recordings, which constitute a network, should describe their temporal dynamics while also achieving dimensionality reduction given the inherent spatial and temporal correlations. Approach. We devise both linear and nonlinear dynamic models for ECoG power features and comprehensively evaluate their accuracy in predicting feature dynamics. Linear state-space models (LSSMs) provide a general linear dynamic network model and can simultaneously achieve dimensionality reduction by describing high-dimensional signals in terms of a low-dimensional latent state. We thus study whether and how well LSSMs can predict ECoG dynamics and achieve dimensionality reduction. Further, we fit a general family of nonlinear dynamic models termed radial basis function (RBF) auto-regressive (AR) models for ECoG to study how the linear form of LSSMs affects the prediction of ECoG dynamics. Finally, we study the differences in dynamics and predictability of ECoG power features across different frequency bands. We use both numerical simulations and large-scale ECoG activity recorded from 10 human epilepsy subjects to evaluate the models. Results. First, we find that LSSMs can significantly predict the dynamics of ECoG power features using latent states with a much lower dimension compared to the number of features. Second, compared with LSSMs, nonlinear RBF-AR models do not improve the prediction of human ECoG power features, thus suggesting the usefulness of the linear assumption in describing ECoG dynamics. 
Finally, compared with other frequency bands, the dynamics of ECoG power features in 1-8 Hz (delta+theta) can be predicted significantly better and is more dominated by slow dynamics. Significance. Our results suggest that LSSMs with low-dimensional latent states can capture important dynamics in human large-scale ECoG power features, thus achieving dynamic modeling and dimensionality reduction. These results have significant implications for studying human brain function and dysfunction and for future design of closed-loop neurotechnologies for decoding and stimulation.
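A minimal LSSM one-step-ahead prediction can be sketched with a Kalman filter. Unlike the paper, which fits the model from data, the true parameters are assumed known here purely to illustrate how a low-dimensional (2D) latent state can predict many (10) observed features, i.e. simultaneous dynamic modeling and dimensionality reduction.

```python
import numpy as np

rng = np.random.default_rng(3)
nx, ny, T = 2, 10, 2000            # latent dimension, feature count, samples

# Toy LSSM: a slow, stable 2D rotation drives 10 observed "power features"
theta = 2 * np.pi * 0.01
A = 0.98 * np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])
C = rng.standard_normal((ny, nx))
Q, R = 0.01 * np.eye(nx), 0.1 * np.eye(ny)

x = np.zeros(nx)
Y = np.empty((T, ny))
for t in range(T):
    x = A @ x + rng.multivariate_normal(np.zeros(nx), Q)
    Y[t] = C @ x + rng.multivariate_normal(np.zeros(ny), R)

# One-step-ahead prediction with a standard Kalman filter
xh, P = np.zeros(nx), np.eye(nx)
pred = np.empty_like(Y)
for t in range(T):
    xh, P = A @ xh, A @ P @ A.T + Q            # predict
    pred[t] = C @ xh
    S = C @ P @ C.T + R                        # update with the new sample
    K = P @ C.T @ np.linalg.inv(S)
    xh = xh + K @ (Y[t] - C @ xh)
    P = P - K @ C @ P

r2 = 1 - np.sum((Y - pred) ** 2) / np.sum((Y - Y.mean(0)) ** 2)
print(r2 > 0.3)        # a 2D state predicts the 10 features well
```

The prediction R² stays well above zero even though the latent state has one fifth the dimension of the observations, mirroring the paper's finding that low-dimensional LSSM states capture much of the predictable ECoG feature dynamics.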

Journal ArticleDOI
TL;DR: A personalized closed-loop anesthetic delivery system that tracks both inter- and intra-subject variabilities in real time while simultaneously controlling the anesthetic in closed loop is developed in a rodent model; using it, the brain response to anesthetic infusion rate was found to vary during control.
Abstract: OBJECTIVE Personalized automatic control of medically-induced coma, a critical multi-day therapy in the intensive care unit, could greatly benefit clinical care and further provide a novel scientific tool for investigating how the brain response to anesthetic infusion rate changes during therapy. Personalized control would require real-time tracking of inter- and intra-subject variabilities in the brain response to anesthetic infusion rate while simultaneously delivering the therapy, which has not been achieved. Current control systems for medically-induced coma require a separate offline model fitting experiment to deal with inter-subject variabilities, which would lead to therapy interruption. Removing the need for these offline interruptions could help facilitate clinical feasibility. In addition, current systems do not track intra-subject variabilities. Tracking intra-subject variabilities is essential for studying whether or how the brain response to anesthetic infusion rate changes during therapy. Further, such tracking could enhance control precision and thus help facilitate clinical feasibility. APPROACH Here we develop a personalized closed-loop anesthetic delivery (CLAD) system in a rodent model that tracks both inter- and intra-subject variabilities in real time while simultaneously controlling the anesthetic in closed loop. We tested the CLAD in rats by administering propofol to control the electroencephalogram (EEG) burst suppression. We first examined whether the CLAD can remove the need for offline model fitting interruption. We then used the CLAD as a tool to study whether and how the brain response to anesthetic infusion rate changes as a function of changes in the depth of medically-induced coma. Finally, we studied whether the CLAD can enhance control compared with prior systems by tracking intra-subject variabilities. 
MAIN RESULTS The CLAD precisely controlled the EEG burst suppression in each rat without performing offline model fitting experiments. Further, using the CLAD, we discovered that the brain response to anesthetic infusion rate varied during control, and that these variations correlated with the depth of medically-induced coma in a consistent manner across individual rats. Finally, tracking these variations reduced control bias and error by more than 70% compared with prior systems. SIGNIFICANCE This personalized CLAD provides a new tool to study the dynamics of brain response to anesthetic infusion rate and has significant implications for enabling clinically-feasible automatic control of medically-induced coma.
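The closed-loop control problem can be sketched with a toy first-order plant and a PI controller tracking a target burst-suppression probability (BSP) while the "brain" sensitivity drifts, mimicking intra-subject variability. This illustrates the control loop only, not the paper's adaptive estimator; the plant model, time constant, drift, and PI gains are all assumptions.

```python
import numpy as np

dt, T = 1.0, 600                   # control period (s) and duration (steps)
target = 0.7                       # desired BSP
tau = 30.0                         # assumed first-order effect time constant, s
kp, ki = 0.5, 0.2                  # hand-tuned PI gains for this toy plant

bsp, integ, u = 0.0, 0.0, 0.0
err_hist = []
for step in range(T):
    g = 1.0 + 0.3 * np.sin(2 * np.pi * step / T)   # slowly drifting drug sensitivity
    bsp += dt / tau * (-bsp + min(1.0, g * u))     # plant: BSP follows drug effect
    err = target - bsp
    integ += err * dt
    u = max(0.0, kp * err + ki * integ)            # non-negative infusion rate
    err_hist.append(err)

print(round(float(np.mean(np.abs(err_hist[-100:]))), 3))  # steady tracking error
```

The integral term absorbs the slow gain drift, so tracking error stays small without re-fitting; the paper goes further by explicitly estimating the drifting response in real time, which is what enables both tighter control and the scientific observations about depth-dependent drug sensitivity.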

Journal ArticleDOI
TL;DR: SpindleNet is ultra-fast and scalable to multichannel EEG recordings, with an accuracy level comparable to human experts, making it appealing for long-term sleep monitoring and closed-loop neuroscience experiments.
Abstract: Objective Sleep spindles have been implicated in memory consolidation and synaptic plasticity during NREM sleep. Detection accuracy and latency in automatic spindle detection are critical for real-time applications. Approach Here we propose a novel deep learning strategy (SpindleNet) to detect sleep spindles based on a single EEG channel. While the majority of spindle detection methods are used for off-line applications, our method is well suited for online applications. Main results Compared with other spindle detection methods, SpindleNet achieves superior detection accuracy and speed, as demonstrated in two publicly available expert-validated EEG sleep spindle datasets. Our real-time detection of spindle onset achieves detection latencies of 150-350 ms (~two-three spindle cycles) and retains excellent performance under low EEG sampling frequencies and low signal-to-noise ratios. SpindleNet has good generalization across different sleep datasets from various subject groups of different ages and species. Significance SpindleNet is ultra-fast and scalable to multichannel EEG recordings, with an accuracy level comparable to human experts, making it appealing for long-term sleep monitoring and closed-loop neuroscience experiments.
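For contrast with SpindleNet's learned detector, a classical (non-deep-learning) spindle detection baseline can be sketched: sigma-band filtering followed by moving-RMS thresholding. The band edges, window length, and threshold factor below are conventional assumptions, not the paper's method.

```python
import numpy as np

def detect_spindles(eeg, fs, lo=11.0, hi=16.0, win=0.25, k=3.0):
    """Classical baseline (not SpindleNet): sigma-band RMS thresholding."""
    # Band-pass via FFT masking
    F = np.fft.rfft(eeg)
    f = np.fft.rfftfreq(eeg.size, 1 / fs)
    F[(f < lo) | (f > hi)] = 0
    sigma = np.fft.irfft(F, n=eeg.size)
    # Moving RMS envelope, thresholded relative to its median
    w = int(win * fs)
    rms = np.sqrt(np.convolve(sigma ** 2, np.ones(w) / w, mode="same"))
    return rms > k * np.median(rms)        # boolean mask of spindle samples

fs = 200
t = np.arange(0, 30, 1 / fs)
rng = np.random.default_rng(4)
eeg = rng.standard_normal(t.size)          # background activity
spindle = (t > 10) & (t < 11)              # one 1 s, 13 Hz spindle
eeg += 3 * np.sin(2 * np.pi * 13 * t) * spindle

mask = detect_spindles(eeg, fs)
print(mask[(t > 10.2) & (t < 10.8)].mean())  # fraction detected inside the spindle
```

Envelope-threshold detectors like this need most of a spindle to elapse before the envelope crosses threshold, which is exactly the latency limitation that motivates a learned onset detector such as SpindleNet.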

Journal ArticleDOI
TL;DR: The strong association between motor unit discharge behaviors and kinematics demonstrates the potential of the approach for the simultaneous and proportional control of prostheses and indicates the possibility of identifying individual motor unit behavior in dynamic natural contractions.
Abstract: Objective The aim of the study was to characterize the accuracy in the identification of motor unit discharges during natural movements using high-density electromyography (EMG) signals and to investigate their correlation with finger kinematics. Approach High-density EMG signals of forearm muscles and finger joint angles were recorded concurrently during hand movements of ten able-bodied subjects. EMG signals were decomposed into motor unit spike trains (MUSTs) with a blind-source separation method. The first principal component (FPC) of the low-pass filtered MUSTs was correlated with finger joint angles. Main results On average, [Formula: see text] motor units were identified during each individual finger task with an estimated decomposition accuracy [Formula: see text]85%. The FPC extracted from discharge rates was strongly associated with the joint angles ([Formula: see text]), and preceded the joint angles on average by [Formula: see text] ms. Moreover, the FPC outperformed two time-domain features (the EMG envelope and the root mean square of EMG) in estimating joint angles. Significance These results indicated the possibility of identifying individual motor unit behavior in dynamic natural contractions. Moreover, the strong association between motor unit discharge behaviors and kinematics demonstrates the potential of the approach for the simultaneous and proportional control of prostheses.
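The decomposition-to-kinematics pipeline can be sketched on synthetic data: Bernoulli spike trains driven by a common drive are low-pass filtered, and the first principal component (FPC) across units is correlated with the joint angle. All signal parameters (unit count, rates, window) are illustrative assumptions, and the real study obtains spike trains from blind-source separation of HD-EMG rather than simulation.

```python
import numpy as np

rng = np.random.default_rng(5)
fs = 100
t = np.arange(0, 20, 1 / fs)
angle = 30 + 20 * np.sin(2 * np.pi * 0.2 * t)     # toy finger joint angle, deg

# Toy motor-unit pool: discharge rates follow the common drive (the angle)
n_mu = 8
drive = (angle - angle.min()) / np.ptp(angle)
rates = 8 + 15 * drive[None, :] * (0.5 + rng.random((n_mu, 1)))   # spikes/s
spikes = rng.random((n_mu, t.size)) < rates / fs   # Bernoulli spike trains

# Low-pass filter each spike train, then take the first principal
# component across motor units (sign is arbitrary, hence abs below)
win = np.ones(int(0.4 * fs)) / (0.4 * fs)
smoothed = np.array([np.convolve(s, win, mode="same") for s in spikes])
centered = smoothed - smoothed.mean(axis=1, keepdims=True)
_, _, Vt = np.linalg.svd(centered, full_matrices=False)
fpc = Vt[0]

r = abs(np.corrcoef(fpc, angle)[0, 1])
print(r > 0.6)
```

Because the FPC pools the common drive across units, it is far less noisy than any single spike train, which is one reason it can outperform amplitude-based EMG features for proportional control.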

Journal ArticleDOI
TL;DR: A novel approach for improving motor intention detection by automatically selecting subject-specific spatio-temporal-spectral features, especially when MI has to be detected against rest condition is presented.
Abstract: Affiliation: Peterson, Victoria. Consejo Nacional de Investigaciones Cientificas y Tecnicas. Centro Cientifico Tecnologico Conicet - Santa Fe. Instituto de Matematica Aplicada del Litoral. Universidad Nacional del Litoral. Instituto de Matematica Aplicada del Litoral; Argentina

Journal ArticleDOI
TL;DR: This study provided a novel method for detecting personalized spatial-frequency abnormalities of children with ADHD at a precise spatial-frequency resolution and proposed a new form of representation of multichannel EEG data that is compatible with mainstream CNN architectures.
Abstract: OBJECTIVE Attention-deficit/hyperactivity disorder (ADHD) is one of the most prevalent neurobehavioral disorders. Studies have tried to find the neural correlates of ADHD with electroencephalography (EEG). Due to the heterogeneity in the ADHD population, a multivariate EEG profile is useful, and the detection of a personalized abnormality in EEG is urgently needed. Deep learning algorithms, especially convolutional neural networks (CNNs), have made tremendous progress recently, and are expected to solve the problem. APPROACH We adopted CNN techniques and a visualization technique named gradient-weighted class activation mapping (Grad-CAM) for detecting personalized spatial-frequency abnormalities in the EEGs of children with ADHD. A total of 50 children with ADHD (nine girls, mean age: 10.44 ± 0.75) and 57 controls who were matched for age and handedness were recruited. The power spectral density of the EEGs was used as input. We presented an intuitive representation of multichannel EEG data that can be used to train CNN models. Personalized abnormalities were derived from the children with ADHD and were compared to the distributions of relative powers in different frequency bands. MAIN RESULTS We demonstrated that applying CNN techniques to ADHD identification is feasible, with an accuracy of 90.29% ± 0.58%. There were major differences in personalized spatial-frequency abnormalities between individuals affected by ADHD. The abnormalities were consistent with the power distributions at both the group and individual levels. SIGNIFICANCE This study provided a novel method for detecting personalized spatial-frequency abnormalities of children with ADHD at a precise spatial-frequency resolution. We proposed a new representation of multichannel EEG data that is compatible with mainstream CNN architectures. We ensured that the CNN models were interpretable and reliable with respect to clinical practice by visualizing the decision-making process. 
We expect that detection of personalized abnormalities using deep learning techniques can facilitate the identification of potential neural pathways and the planning of targeted treatments for children with ADHD.
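The PSD-based input representation can be sketched with a numpy-only Welch-style estimator producing a channels × frequency-bins map of the kind a CNN could consume. The channel count, frequency band, and segment length below are assumptions for illustration, not the paper's exact preprocessing.

```python
import numpy as np

def welch_psd(x, fs, nper=256):
    """Hann-windowed averaged periodogram (Welch, 50% overlap), numpy only."""
    step = nper // 2
    win = np.hanning(nper)
    segs = [x[i:i + nper] * win for i in range(0, x.size - nper + 1, step)]
    scale = 1.0 / (fs * (win ** 2).sum())
    psd = np.mean([np.abs(np.fft.rfft(s)) ** 2 for s in segs], axis=0) * scale
    return np.fft.rfftfreq(nper, 1 / fs), psd

# Build a channels x frequency-bins "image" as a CNN-ready input
fs, n_ch = 128, 19
rng = np.random.default_rng(6)
eeg = rng.standard_normal((n_ch, fs * 60))
eeg[0] += 2 * np.sin(2 * np.pi * 6 * np.arange(fs * 60) / fs)  # excess theta on ch 0

rows = []
for ch in eeg:
    f, p = welch_psd(ch, fs)
    rows.append(10 * np.log10(p[(f >= 1) & (f <= 30)]))        # dB, 1-30 Hz
feature_map = np.array(rows)            # shape: (channels, frequency bins)
print(feature_map.shape)
```

A Grad-CAM heat map over such an input localizes a subject's abnormality jointly in channel (spatial) and frequency coordinates, which is what "spatial-frequency abnormality" refers to above.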

Journal ArticleDOI
TL;DR: This work computationally investigate the distributions and strength of the stimulation dosage during ctDCS with the aim of determining the targeted cerebellar regions of a group of subjects with different electrode montages.
Abstract: Objective Cerebellar transcranial direct current stimulation (ctDCS) is a neuromodulation scheme that delivers a small current to the cerebellum. In this work, we computationally investigate the distributions and strength of the stimulation dosage during ctDCS with the aim of determining the targeted cerebellar regions of a group of subjects with different electrode montages. Approach We used a new inter-individual registration method that permitted, for the first time, the projection of computed electric fields (EFs) from individual realistic head models (n = 18) to a standard cerebellar template. Main results Variations of the EF on the cerebellar surface were found to have standard deviations of up to 55% of the mean. The dominant factor, accounting for 62% of the variability of the maximum EFs, was the skin-cerebellum distance, whereas the cerebrospinal fluid volume explained 53% of the average EF distribution. Despite the inter-individual variations, a systematic tendency emerged in the group-level analysis: the EF hotspot lies beneath the active electrode. The hotspot can be adjusted via the electrode position so that the most effective stimulation is delivered to a group of subjects. Significance Targeting specific cerebellar structures with ctDCS is not straightforward, as neuromodulation depends not only on the placement/design of the electrode configuration but also on inter-individual variability due to anatomical differences. The proposed method permitted generalizing the EFs to a cerebellum atlas. The atlas is useful for studying the mechanisms of ctDCS, planning ctDCS and explaining findings of experimental studies.

Journal ArticleDOI
TL;DR: Thresholds were largely insensitive to the transverse endoneurial resistivity, but estimates of the bulk resistivity increased with extracellular resistivity and axonal area fraction; the numerical and analytical estimates were in strong agreement except at high axonal area fractions.
Abstract: Objective Computational modeling is an important tool for developing and optimizing implantable neural stimulation devices, but requires accurate electrical and geometrical parameter values to improve predictive value. We quantified the effects of perineurial (resistive sheath around each fascicle) and endoneurial (within each fascicle) parameter values for modeling peripheral nerve stimulation. Approach We implemented 3D finite element models of compound peripheral nerves and cuff electrodes to quantify activation and block thresholds of model axons. We also implemented a 2D finite element model of a bundle of axons to estimate the bulk transverse endoneurial resistivity; we compared numerical estimates to an analytical solution. Main results Since the perineurium is highly resistive, potentials were approximately constant over the cross section of a fascicle, and the perineurium resistivity, longitudinal endoneurial resistivity, and fascicle diameter had important effects on thresholds. Activation thresholds increased up to ~130% for higher perineurium resistivity (~400 versus 2200 Ω m) and by ~35%-250% for lower longitudinal endoneurial resistivity (3.5 versus 0.75 Ω m), with larger increases for smaller diameter axons and for axons in larger fascicles. Further, thresholds increased by ~30%-180% for larger fascicle radii, yielding a larger increase with higher perineurium resistivity. Thresholds were largely insensitive to the transverse endoneurial resistivity, but estimates of the bulk resistivity increased with extracellular resistivity and axonal area fraction; the numerical and analytical estimates were in strong agreement except at high axonal area fractions, where structured axon placements that achieved tighter packing produced electric field inhomogeneities. Significance We performed a systematic investigation of the effects of values and methods for modeling the perineurium and endoneurium on thresholds for neural stimulation and block. 
These results provide guidance for future modeling studies, including parameter selection, data interpretation, and comparison to experimental results.
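One classical analytical estimate of the bulk transverse resistivity of a bundle of insulating cylinders (axons) at area fraction f is the 2D Maxwell-Garnett result. Consistent with the trends reported above, it grows with both the extracellular resistivity and the area fraction, and it diverges as packing becomes tight, where agreement with numerics is expected to break down. This particular formula is an assumption for illustration; the paper's analytical solution may differ.

```python
import numpy as np

def bulk_transverse_resistivity(rho_extra, f):
    """2D Maxwell-Garnett estimate: insulating cylinders at area fraction f
    in a medium of resistivity rho_extra (an assumed classical formula)."""
    return rho_extra * (1 + f) / (1 - f)

rho_e = 1.75                        # ohm*m, assumed extracellular resistivity
estimates = {f: bulk_transverse_resistivity(rho_e, f) for f in (0.2, 0.4, 0.6, 0.8)}
print(estimates)                    # grows steeply as packing tightens
```

Mean-field estimates of this kind assume randomly placed, non-interacting inclusions, which is one reason structured, tightly packed axon placements in the numerical model can depart from the analytical curve at high area fractions.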

Journal ArticleDOI
TL;DR: Photovoltaic arrays implanted subretinally into rats with retinal degeneration were evaluated in vivo and may provide sufficient visual acuity for restoration of central vision in patients blinded by age-related macular degeneration.
Abstract: Objective. Retinal prostheses aim to restore sight by electrically stimulating the surviving retinal neurons. In clinical trials of the current retinal implants, prosthetic visual acuity does not exceed 20/550. However, to provide meaningful restoration of central vision in patients blinded by age-related macular degeneration (AMD), prosthetic acuity should be at least 20/200, necessitating a pixel pitch of about 50 µm or lower. With such small pixels, stimulation thresholds are high due to limited penetration of the electric field into tissue. Here, we address this challenge with our latest photovoltaic arrays and evaluate their performance in vivo. Approach. We fabricated photovoltaic arrays with 55 and 40 µm pixels (a) in flat geometry, and (b) with active electrodes on 10 µm tall pillars. The arrays were implanted subretinally into rats with degenerate retina. Stimulation thresholds and grating acuity were evaluated using measurements of the visually evoked potentials (VEP). Main Results. With 55 μm pixels, we measured a grating acuity of 48±11 μm, which matches the linear pixel pitch of the hexagonal array. This geometrically corresponds to a visual acuity of 20/192 in a human eye, matching the threshold of legal blindness in the US (20/200). With pillar electrodes, the irradiance threshold was nearly halved, and the duration threshold was reduced by more than 3-fold, compared to flat pixels. With 40 μm pixels, the VEP was too low for reliable measurements of the grating acuity, even with pillar electrodes. Significance. While helpful for treating a complete loss of sight, current prosthetic technologies are insufficient for addressing the leading cause of untreatable visual impairment, AMD. Subretinal photovoltaic arrays may provide sufficient visual acuity for restoration of central vision in patients blinded by AMD.

Journal ArticleDOI
TL;DR: The findings demonstrate the promise of closed-loop (CL) DBS therapy, whose therapeutic effect is at least as good as that of current continuous-stimulation paradigms, and highlight the importance of using subject-specific models in these systems.
Abstract: Objective Deep brain stimulation (DBS) is a well-established treatment for essential tremor, but may not be an optimal therapy, as it is always on, regardless of symptoms. A closed-loop (CL) DBS, which uses a biosignal to determine when stimulation should be given, may be better. Cortical activity is a promising biosignal for use in a closed-loop system because it contains features that are correlated with pathological and normal movements. However, neural signals are different across individuals, making it difficult to create a 'one size fits all' closed-loop system. Approach We used machine learning to create a patient-specific, CL DBS system. In this system, binary classifiers are used to extract patient-specific features from cortical signals and determine when volitional, tremor-evoking movement is occurring to alter stimulation voltage in real time. Main results This system is able to deliver stimulation up to 87%-100% of the time that subjects are moving. Additionally, we show that the therapeutic effect of the system is at least as good as that of current, continuous-stimulation paradigms. Significance These findings demonstrate the promise of CL DBS therapy and highlight the importance of using subject-specific models in these systems.