
Showing papers in "Frontiers in Neuroinformatics in 2018"



Journal ArticleDOI
TL;DR: A convolutional neural network based on raw EEG signals, rather than manual feature extraction, was used; effective identification of the three cases from time domain signals as input samples is achieved for only some patients, whereas the classification accuracies of frequency domain signals are significantly higher than those of time domain signals.
Abstract: Epilepsy is a neurological disorder that affects approximately fifty million people according to the World Health Organization. While electroencephalography (EEG) plays important roles in monitoring the brain activity of patients with epilepsy and diagnosing epilepsy, an expert is needed to analyze all EEG recordings to detect epileptic activity. This method is obviously time-consuming and tedious, and a timely and accurate diagnosis of epilepsy is essential to initiate antiepileptic drug therapy and subsequently reduce the risk of future seizures and seizure-related complications. In this study, a convolutional neural network (CNN) based on raw EEG signals instead of manual feature extraction was used to distinguish ictal, preictal, and interictal segments for epileptic seizure detection. We compared the performances of time and frequency domain signals in the detection of epileptic signals based on the intracranial Freiburg and scalp CHB-MIT databases to explore the potential of these parameters. Three types of experiments involving two binary classification problems (interictal vs. preictal and interictal vs. ictal) and one three-class problem (interictal vs. preictal vs. ictal) were conducted to explore the feasibility of this method. Using frequency domain signals in the Freiburg database, average accuracies of 96.7, 95.4, and 92.3% were obtained for the three experiments, while the average accuracies for detection in the CHB-MIT database were 95.6, 97.5, and 93% in the three experiments. Using time domain signals in the Freiburg database, the average accuracies were 91.1, 83.8, and 85.1% in the three experiments, while the signal detection accuracies in the CHB-MIT database were only 59.5, 62.3, and 47.9% in the three experiments. Based on these results, the three cases are effectively detected using frequency domain signals. However, the effective identification of the three cases using time domain signals as input samples is achieved for only some patients. Overall, the classification accuracies of frequency domain signals are significantly increased compared to time domain signals. In addition, frequency domain signals have greater potential than time domain signals for CNN applications.
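The abstract does not include implementation details; as a hedged sketch of the core idea, the PyTorch snippet below feeds FFT magnitudes of raw EEG segments (the frequency domain input) to a small 1D CNN for the three-class problem. The channel count, window length, and layer sizes are illustrative assumptions, not the authors' values.

# Minimal sketch: classify EEG segments from frequency domain input with a 1D CNN.
# Window length, channel count, and layer sizes are illustrative, not the paper's.
import torch
import torch.nn as nn

class FreqCNN(nn.Module):
    def __init__(self, n_channels=16, n_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(8),
        )
        self.classifier = nn.Linear(64 * 8, n_classes)  # interictal/preictal/ictal

    def forward(self, x):                        # x: (batch, channels, samples)
        spec = torch.fft.rfft(x, dim=-1).abs()   # time domain -> frequency domain
        return self.classifier(self.features(spec).flatten(1))

model = FreqCNN()
eeg = torch.randn(4, 16, 256)                    # four dummy EEG segments
logits = model(eeg)                              # (4, 3) class scores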

264 citations


Journal ArticleDOI
TL;DR: It is argued that this package facilitates the use of spiking networks for large-scale machine learning problems, and some simple examples of using BindsNET in practice are shown.
Abstract: The development of spiking neural network simulation software is a critical component enabling the modeling of neural systems and the development of biologically inspired algorithms. Existing software frameworks support a wide range of neural functionality, software abstraction levels, and hardware devices, yet are typically not suitable for rapid prototyping or application to problems in the domain of machine learning. In this paper, we describe a new Python package for the simulation of spiking neural networks, specifically geared toward machine learning and reinforcement learning. Our software, called BindsNET, enables rapid building and simulation of spiking networks and features user-friendly, concise syntax. BindsNET is built on the PyTorch deep neural networks library, facilitating the implementation of spiking neural networks on fast CPU and GPU computational platforms. Moreover, the BindsNET framework can be adjusted to utilize other existing computing and hardware backends; e.g., TensorFlow and SpiNNaker. We provide an interface with the OpenAI gym library, allowing for training and evaluation of spiking networks on reinforcement learning environments. We argue that this package facilitates the use of spiking networks for large-scale machine learning problems and show some simple examples by using BindsNET in practice.
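BindsNET's own API is not quoted here; instead, the sketch below shows the kind of tensor-based leaky integrate-and-fire (LIF) update that a PyTorch-backed spiking simulator vectorizes across neurons and time steps on CPU or GPU. All constants and sizes are illustrative assumptions.

# Sketch of a tensor-based leaky integrate-and-fire (LIF) update, the kind of
# operation a PyTorch-backed spiking simulator runs per time step. Constants
# are illustrative; this is not BindsNET's actual implementation.
import torch

def lif_step(v, spikes_in, w, decay=0.95, v_thresh=1.0, v_reset=0.0):
    """One time step for a population of LIF neurons.
    v: (n,) membrane potentials; spikes_in: (m,) presynaptic spikes; w: (m, n) weights."""
    v = decay * v + spikes_in @ w          # leak plus synaptic input
    spikes_out = (v >= v_thresh).float()   # threshold crossing emits a spike
    v = torch.where(spikes_out.bool(), torch.full_like(v, v_reset), v)
    return v, spikes_out

v = torch.zeros(100)                       # 100 postsynaptic neurons
w = torch.rand(10, 100) * 0.3              # 10 input neurons, random weights
for t in range(250):                       # 250 steps at 1 ms resolution
    pre = (torch.rand(10) < 0.1).float()   # ~100 Hz Poisson-like input spikes
    v, post = lif_step(v, pre, w)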

201 citations


Journal ArticleDOI
TL;DR: A first 3D cell atlas for the whole mouse brain is provided, showing cell positions constructed algorithmically from whole brain Nissl and gene expression stains, and compared against values from the literature.
Abstract: Despite vast numbers of studies of stained cells in the mouse brain, no current brain atlas provides region-by-region neuron counts. In fact, neuron numbers are only available for about 4% of brain regions and estimates often vary by as much as 3-fold. Here we provide a first 3D cell atlas for the whole mouse brain, showing cell positions constructed algorithmically from whole brain Nissl and gene expression stains, and compared against values from the literature. The atlas provides the densities and positions of all excitatory and inhibitory neurons, astrocytes, oligodendrocytes, and microglia in each of the 737 brain regions defined in the AMBA. The atlas is dynamic, allowing comparison with previously reported numbers, addition of cell types, and improvement of estimates as new data are integrated. The atlas also provides insights into cellular organization only possible at this whole brain scale, and is publicly available.

194 citations


Journal ArticleDOI
TL;DR: NeuroMatic is an open-source software toolkit that performs data acquisition, data analysis, and simulations of electrophysiological properties of the nervous system, and has the advantage of working within Igor Pro, a platform-independent environment that includes an extensive library of built-in functions.
Abstract: Acquisition, analysis and simulation of electrophysiological properties of the nervous system require multiple software packages. This makes it difficult to conserve experimental metadata and track the analysis performed. It also complicates certain experimental approaches such as online analysis. To address this, we developed NeuroMatic, an open-source software toolkit that performs data acquisition (episodic, continuous and triggered recordings), data analysis (spike rasters, spontaneous event detection, curve fitting, stationarity) and simulations (stochastic synaptic transmission, synaptic short-term plasticity, integrate-and-fire and Hodgkin-Huxley-like single-compartment models). The merging of a wide range of tools into a single package facilitates a more integrated style of research, from the development of online analysis functions during data acquisition, to the simulation of synaptic conductance trains during dynamic-clamp experiments. Moreover, NeuroMatic has the advantage of working within Igor Pro, a platform-independent environment that includes an extensive library of built-in functions, a history window for reviewing the user's workflow and the ability to produce publication-quality graphics. Since its original release, NeuroMatic has been used in a wide range of scientific studies and its user base has grown considerably. NeuroMatic version 3.0 can be found at http://www.neuromatic.thinkrandom.com and https://github.com/SilverLabUCL/NeuroMatic.

190 citations


Journal ArticleDOI
TL;DR: A new classification framework based on the combination of 2D convolutional neural networks (CNNs) and recurrent neural networks (RNNs) is proposed, which learns the intra-slice and inter-slice features for classification after decomposition of the 3D PET image into a sequence of 2D slices.
Abstract: Alzheimer's disease (AD) is a progressive and irreversible degenerative brain disorder that most often occurs in people older than 65 years. Currently, there is no effective cure for AD, but it is of great interest to develop treatments that can delay its progression. Accurate and early diagnosis of AD is vital for patient care and the development of future treatments. Positron Emission Tomography (PET) is a functional molecular imaging modality that has proven to be a powerful tool for understanding the anatomical and neural changes of the brain related to AD. Most existing methods extract handcrafted features from images, and then design a classifier to distinguish AD from other groups. The success of these computer-aided diagnosis methods depends highly on the preprocessing of brain images, including image rigid registration and segmentation. Motivated by the success of deep learning in image classification, this paper proposes a new classification framework based on the combination of 2D convolutional neural networks (CNNs) and recurrent neural networks (RNNs), which learns the intra-slice and inter-slice features for classification after decomposition of the 3D PET image into a sequence of 2D slices. The hierarchical 2D CNNs are built to capture the intra-slice features, while the gated recurrent unit (GRU) of the RNN is cascaded to learn and integrate the inter-slice features for final classification. No rigid image registration or segmentation is required for the PET images. Our method is evaluated on baseline FDG-PET images acquired from 339 subjects, including 93 AD patients, 146 with mild cognitive impairment (MCI) and 100 normal controls (NC), from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database. Experimental results show that the proposed method achieves an area under the receiver operating characteristic curve (AUC) of 95.3% for AD vs. NC classification and 83.9% for MCI vs. NC classification, demonstrating promising classification performance.
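As a rough illustration of the described pipeline, the hedged sketch below chains a per-slice 2D CNN with a GRU over the slice sequence; all sizes are placeholders, and the real model's hierarchy and hyperparameters are not reproduced.

# Sketch of the CNN+RNN idea: per-slice 2D CNN features, then a GRU integrates
# features across the slice sequence. Sizes are illustrative placeholders.
import torch
import torch.nn as nn

class SliceCNNGRU(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.cnn = nn.Sequential(                    # intra-slice features
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(4),
            nn.Flatten(),                            # -> 32 * 4 * 4 = 512 per slice
        )
        self.gru = nn.GRU(512, 128, batch_first=True)  # inter-slice integration
        self.fc = nn.Linear(128, n_classes)

    def forward(self, vol):                          # vol: (batch, n_slices, H, W)
        b, s, h, w = vol.shape
        feats = self.cnn(vol.reshape(b * s, 1, h, w)).reshape(b, s, -1)
        _, h_last = self.gru(feats)                  # final hidden state
        return self.fc(h_last.squeeze(0))

model = SliceCNNGRU()
pet = torch.randn(2, 60, 64, 64)                     # 2 dummy volumes of 60 slices
logits = model(pet)                                  # (2, 2) AD vs. NC scores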

175 citations


Journal ArticleDOI
TL;DR: This work presents a two-tier connection infrastructure and a framework for directed communication among compute nodes accounting for the sparsity of brain-scale networks and shows that the new data structures and communication scheme prepare the simulation kernel for post-petascale high-performance computing facilities without sacrificing performance in smaller systems.
Abstract: State-of-the-art software tools for neuronal network simulations scale to the largest computing systems available today and enable investigations of large-scale networks of up to 10% of the human cortex at a resolution of individual neurons and synapses. Due to an upper limit on the number of incoming connections of a single neuron, network connectivity becomes extremely sparse at this scale. To manage computational costs, simulation software ultimately targeting the brain scale needs to fully exploit this sparsity. Here we present a two-tier connection infrastructure and a framework for directed communication among compute nodes accounting for the sparsity of brain-scale networks. We demonstrate the feasibility of this approach by implementing the technology in the NEST simulation code and we investigate its performance in different scaling scenarios of typical network simulations. Our results show that the new data structures and communication scheme prepare the simulation kernel for post-petascale high-performance computing facilities without sacrificing performance in smaller systems.
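As a toy illustration of the idea (not the NEST implementation), the sketch below shows directed, sparsity-aware spike routing: the sending side keeps, per neuron, only the ranks hosting its targets, so spikes travel point-to-point rather than being broadcast to every compute node. All structures are hypothetical.

# Toy sketch of directed spike communication that exploits sparsity. This is a
# plain-Python illustration, not the paper's two-tier data structures.
from collections import defaultdict

# Tier 1, kept on the sending rank: for each local neuron, the set of ranks
# that host at least one of its targets (not the targets themselves).
targets_of = {0: {1, 3}, 1: {2}, 2: {1}}
# Tier 2 (not shown) lives on the receiving rank and resolves, per incoming
# source neuron, the local synapses it drives.

def route_spikes(fired):
    """Group fired neuron ids into per-rank send buffers (directed, sparse)."""
    buffers = defaultdict(list)
    for nid in fired:
        for rank in targets_of.get(nid, ()):
            buffers[rank].append(nid)
    return dict(buffers)   # e.g. {1: [0, 2], 3: [0]}; other ranks receive nothing

print(route_spikes([0, 2]))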

113 citations


Journal ArticleDOI
TL;DR: The TUH EEG Seizure Corpus (TUSZ) is introduced, which is the largest open source corpus of its type, and represents an accurate characterization of clinical conditions.
Abstract: The electroencephalogram (EEG), which has been in clinical use for over 70 years, is still an essential tool for diagnosis of neural functioning (Kennett, 2012). Well-known applications of EEGs include identification of epilepsy and epileptic seizures, anoxic and hypoxic damage to the brain, and identification of neural disorders such as hemorrhagic stroke, ischemia and toxic metabolic encephalopathy (Drury, 1988). More recently there has been interest in diagnosing Alzheimer's (Tsolaki et al., 2014), head trauma (Rapp et al., 2015), and sleep disorders (Younes, 2017). Many of these clinical applications now involve the collection of large amounts of data (e.g., 72-h continuous EEG recordings), which makes manual interpretation challenging. Similarly, the increased use of EEGs in critical care has created a significant demand for high-performance automatic interpretation software (e.g., real-time seizure detection). A critical obstacle in the development of machine learning (ML) technology for these applications is the lack of big data resources to support training of complex deep learning systems. One of the most popular transcribed seizure databases available to the research community, the CHB-MIT Corpus (Goldberger et al., 2000), only consists of 23 subjects. Though high performance has been achieved on this corpus (Shoeb and Guttag, 2010), these results have not been representative of clinical performance (Golmohammadi et al., 2018). Therefore, we introduce the TUH EEG Seizure Corpus (TUSZ), which is the largest open source corpus of its type and represents an accurate characterization of clinical conditions. Since seizures occur only a small fraction of the time in this type of data, and manual annotation of such low-yield data would be prohibitively expensive and unproductive, we developed a triage process for locating seizure recordings. We automatically selected data from the much larger TUH EEG Corpus (Obeid and Picone, 2016) that met certain selection criteria. Three approaches were used to identify files with a high probability that a seizure event occurred: (1) keyword search of EEG reports for sessions that were likely to contain seizures (e.g., reports containing phrases such as “seizure begins with” and “evolution”), (2) automatic detection of seizure events using commercially available software (Persyst Development Corporation., 2017), and (3) automatic detection using an experimental deep learning system (Golmohammadi et al., 2018). Data for which approaches (2) and (3) were in agreement were given highest priority. Accurate annotation of an EEG requires extensive training. For this reason, manual annotation of EEGs is usually done by board-certified neurologists with many years of post-medical school training. Consequently, it is difficult to transcribe large amounts of data because such expertise is in short supply and is most often focused on clinical practice. Previous attempts to employ panels of experts or use crowdsourcing strategies were not productive (Obeid et al., 2017). However, we have demonstrated that a viable alternative is to use a team of highly trained undergraduates at the Neural Engineering Data Consortium (NEDC) at Temple University. These students have been trained to transcribe data for seizure events (e.g., start/stop times; seizure type) at accuracy levels that rival expert neurologists at a fraction of the cost (Obeid et al., 2017; Shah et al. in review). 
In order to validate the team's work, a portion of their annotations were compared to those of expert neurologists and shown to have a high inter-rater agreement. In this paper, we describe the techniques used to develop TUSZ, evaluate their effectiveness, and present some descriptive statistics on the resulting corpus.
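As a hedged illustration of the triage logic described above (not the authors' actual code), the sketch below ranks sessions by keyword hits and by agreement between the two automatic detectors, with detector agreement given highest priority. The record fields and scoring are hypothetical.

# Hypothetical triage: keyword hits and two automatic detectors vote on which
# sessions likely contain seizures; detector agreement is prioritized.
SEIZURE_PHRASES = ("seizure begins with", "evolution")

def triage(sessions):
    """sessions: list of dicts with 'id', 'report', 'det_commercial', 'det_deep'."""
    ranked = []
    for s in sessions:
        keyword_hit = any(p in s["report"].lower() for p in SEIZURE_PHRASES)
        agree = s["det_commercial"] and s["det_deep"]        # both detectors fire
        priority = (2 if agree else 0) + (1 if keyword_hit else 0)
        if priority:
            ranked.append((priority, s["id"]))
    return [sid for _, sid in sorted(ranked, reverse=True)]  # highest priority first

print(triage([{"id": "s1", "report": "Seizure begins with rhythmic spiking...",
               "det_commercial": True, "det_deep": True},
              {"id": "s2", "report": "Normal study.",
               "det_commercial": True, "det_deep": False}]))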

97 citations


Journal ArticleDOI
TL;DR: The open-source software LFPy is extended to allow modeling of networks of multicompartment neurons with concurrent calculations of extracellular potentials and current dipole moments, and is shown to exhibit strong scaling performance with different numbers of message-passing interface (MPI) processes and for different network sizes and connection densities.
Abstract: Recordings of extracellular electrical, and later also magnetic, brain signals have been the dominant technique for measuring brain activity for decades. The interpretation of such signals is however nontrivial, as the measured signals result from both local and distant neuronal activity. In volume-conductor theory the extracellular potentials can be calculated from a distance-weighted sum of contributions from transmembrane currents of neurons. Given the same transmembrane currents, the contributions to the magnetic field recorded both inside and outside the brain can also be computed. This allows for the development of computational tools implementing forward models grounded in the biophysics underlying electrical and magnetic measurement modalities. LFPy (LFPy.readthedocs.io) incorporated a well-established scheme for predicting extracellular potentials of individual neurons with arbitrary levels of biological detail. It relies on NEURON (neuron.yale.edu) to compute transmembrane currents of multicompartment neurons which is then used in combination with an electrostatic forward model. Its functionality is now extended to allow for modeling of networks of multicompartment neurons with concurrent calculations of extracellular potentials and current dipole moments. The current dipole moments are then, in combination with suitable volume-conductor head models, used to compute non-invasive measures of neuronal activity, like scalp potentials (electroencephalographic recordings; EEG) and magnetic fields outside the head (magnetoencephalographic recordings; MEG). One such built-in head model is the four-sphere head model incorporating the different electric conductivities of brain, cerebrospinal fluid, skull and scalp. We demonstrate the new functionality of the software by constructing a network of biophysically detailed multicompartment neuron models from the Neocortical Microcircuit Collaboration (NMC) Portal (bbp.epfl.ch/nmc-portal) with corresponding statistics of connections and synapses, and compute in vivo-like extracellular potentials (local field potentials, LFP; electrocorticographical signals, ECoG) and corresponding current dipole moments. From the current dipole moments we estimate corresponding EEG and MEG signals using the four-sphere head model. We also show strong scaling performance of LFPy with different numbers of message-passing interface (MPI) processes, and for different network sizes with different density of connections. The open-source software LFPy is equally suitable for execution on laptops and in parallel on high-performance computing (HPC) facilities and is publicly available on GitHub.com.
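The point-source version of the volume-conductor forward model mentioned above can be written compactly as phi(r) = sum_n I_n / (4 * pi * sigma * |r - r_n|). The sketch below evaluates it with numpy; the conductivity value and geometry are illustrative, and LFPy's own line-source machinery is more elaborate.

# Point-source approximation of the electrostatic forward model: the
# extracellular potential is a conductivity- and distance-weighted sum of
# transmembrane currents. Values and units are illustrative.
import numpy as np

def extracellular_potential(r_electrode, r_sources, I_m, sigma=0.3):
    """phi(r) = sum_n I_n / (4*pi*sigma*|r - r_n|); sigma in S/m."""
    d = np.linalg.norm(r_sources - r_electrode, axis=1)   # source-electrode distances
    return np.sum(I_m / (4 * np.pi * sigma * d))

r_sources = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 50.0]])  # two compartments
I_m = np.array([1.0, -1.0])                                 # currents sum to zero
print(extracellular_potential(np.array([30.0, 0.0, 25.0]), r_sources, I_m))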

96 citations


Journal ArticleDOI
TL;DR: The analyses revealed that the collection of EEG data using a high-density montage is crucial for RSN detection by sICA, and that the use of appropriate methods for head modeling and source localization also has a substantial effect on RSN reconstruction.
Abstract: Resting state networks (RSNs) in the human brain were recently detected using high-density electroencephalography (hdEEG). This was done by using an advanced analysis workflow to estimate neural signals in the cortex and to assess functional connectivity (FC) between distant cortical regions. FC analyses were conducted either using temporal (tICA) or spatial independent component analysis (sICA). Notably, EEG-RSNs obtained with sICA were very similar to RSNs retrieved with sICA from functional magnetic resonance imaging data. It still remains to be clarified, however, what technological aspects of hdEEG acquisition and analysis primarily influence this correspondence. Here we examined to what extent the detection of EEG-RSN maps by sICA depends on the electrode density, the accuracy of the head model, and the source localization algorithm employed. Our analyses revealed that the collection of EEG data using a high-density montage is crucial for RSN detection by sICA, but also the use of appropriate methods for head modeling and source localization have a substantial effect on RSN reconstruction. Overall, our results confirm the potential of hdEEG for mapping the functional architecture of the human brain, and highlight at the same time the interplay between acquisition technology and innovative solutions in data analysis.

94 citations


Journal ArticleDOI
TL;DR: This study demonstrates how to utilize Bayesian inference, Bayesian second-level inference in particular, implemented in SPM 12 by analyzing fMRI data available to the public via NeuroVault, and provides practical guidelines on how to set the parameters for Bayesian inference and how to interpret the results, such as Bayes factors, from the inference.
Abstract: Recent debates about the conventional threshold used in the fields of neuroscience and psychology, namely P < .05, have spurred researchers to consider alternative ways to analyze fMRI data. A group of methodologists and statisticians have considered Bayesian inference as a candidate methodology. However, few previous studies have attempted to provide end users of fMRI analysis tools, such as SPM 12, with practical guidelines about how to conduct Bayesian inference. In the present study, we aim to demonstrate how to utilize Bayesian inference, Bayesian second-level inference in particular, implemented in SPM 12 by analyzing fMRI data available to the public via NeuroVault. In addition, to help end users understand how Bayesian inference actually works in SPM 12, we examine outcomes from Bayesian second-level inference implemented in SPM 12 by comparing them with those from classical second-level inference. Finally, we provide practical guidelines about how to set the parameters for Bayesian inference and how to interpret the results, such as Bayes factors, from the inference. We also discuss the practical and philosophical benefits of Bayesian inference and directions for future research.
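SPM 12's Bayesian machinery is not reproduced here; as a minimal numerical illustration of the kind of quantity being interpreted, the sketch below computes a Bayes factor via the BIC approximation, BF01 ~ exp((BIC_alt - BIC_null)/2), with hypothetical log-likelihoods. This is a cheap approximation, not SPM 12's actual computation.

# Back-of-envelope Bayes factor via the BIC approximation; NOT SPM 12's method,
# only an illustration of the quantity being interpreted. Inputs are hypothetical.
import numpy as np

def bic(log_likelihood, k, n):
    return k * np.log(n) - 2 * log_likelihood

def bf01(ll_null, k_null, ll_alt, k_alt, n):
    """Bayes factor in favor of the null over the alternative."""
    return np.exp((bic(ll_alt, k_alt, n) - bic(ll_null, k_null, n)) / 2)

# Hypothetical fits on n = 30 subjects: the alternative adds one parameter and
# improves the log-likelihood by 1.5; BF01 > 1 would mildly favor the null.
print(bf01(ll_null=-100.0, k_null=1, ll_alt=-98.5, k_alt=2, n=30))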

Journal ArticleDOI
TL;DR: This article reviews the various systems proposed over the past few years with a focus on the shortcomings that have prevented wide-scale implementation, including issues pertaining to temporal stability, psychological and physiological changes, protocol design, equipment and performance evaluation.
Abstract: The emergence of the digital world has greatly increased the number of accounts and passwords that users must remember. It has also increased the need for secure access to personal information in the cloud. Biometrics is one approach to person recognition, which can be used in identification as well as authentication. Among the various modalities that have been developed, electroencephalography (EEG)-based biometrics features unparalleled universality, distinctiveness and collectability, while minimizing the risk of circumvention. However, commercializing EEG-based person recognition poses a number of challenges. This article reviews the various systems proposed over the past few years with a focus on the shortcomings that have prevented wide-scale implementation, including issues pertaining to temporal stability, psychological and physiological changes, protocol design, equipment and performance evaluation. We also examine several directions for the further development of usable EEG-based recognition systems as well as the niche markets to which they could be applied. It is expected that rapid advancements in EEG instrumentation, on-device processing and machine learning techniques will lead to the emergence of commercialized person recognition systems in the near future.

Journal ArticleDOI
TL;DR: A newly developed standard for presenting results acquired during MIBCI experiments is proposed, designed to facilitate communication and comparison of essential information regarding the effects observed, based on the findings of descriptive analysis and meta-analysis.
Abstract: Brain-Computer Interfaces (BCI) constitute an alternative channel of communication between humans and their environment. There are a number of different technologies which enable the recording of brain activity. One of these is electroencephalography (EEG). The most common EEG methods include interfaces whose operation is based on changes in the activity of Sensorimotor Rhythms (SMR) during imagined movement, so-called Motor Imagery BCI (MIBCI). The present article is a review of 131 articles published from 1997 to 2017 discussing various procedures of data processing in MIBCI. The experiments described in these publications have been compared in terms of the methods used for data registration and analysis. Some of the studies (76 reports) were subjected to meta-analysis, which showed a corrected average classification accuracy achieved in these studies of 51.96%, a high degree of heterogeneity of results (Q = 1806577.61; df = 486; p < 0.001; I2 = 99.97%), as well as significant effects of the number of channels, the number of mental images, and the method of spatial filtering. On the other hand, the meta-regression failed to provide evidence that there was an increase in the effectiveness of the solutions proposed in the articles published in recent years. The authors have proposed a newly developed standard for presenting results acquired during MIBCI experiments, which is designed to facilitate communication and comparison of essential information regarding the effects observed. Also, based on the findings of the descriptive analysis and meta-analysis, the authors formulated recommendations regarding practices applied in research on signal processing in MIBCIs.
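The reported I2 follows directly from Q and its degrees of freedom via I2 = max(0, (Q - df) / Q) * 100; a two-line check reproduces the paper's figure.

# Heterogeneity statistic from the reported Q and df.
Q, df = 1806577.61, 486
I2 = max(0.0, (Q - df) / Q) * 100
print(f"I^2 = {I2:.2f}%")   # -> 99.97%, matching the reported value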

Journal ArticleDOI
TL;DR: Deep Learning Methods to Process fMRI Data and Their Application in the Diagnosis of Cognitive Impairment: A Brief Overview and The authors' Opinion.
Abstract: Opinion article; no abstract available. Full citation: Wen D, Wei Z, Zhou Y, Li G, Zhang X and Han W (2018) Deep Learning Methods to Process fMRI Data and Their Application in the Diagnosis of Cognitive Impairment: A Brief Overview and Our Opinion. Front. Neuroinform. 12:23. doi: 10.3389/fninf.2018.00023

Journal ArticleDOI
TL;DR: This paper uses conventional T1-weighted MRI to define morphological brain networks (MBNs), each quantifying the shape relationship between different cortical regions for a specific cortical attribute at both low-order and high-order levels, and proposes a high-order MBN which better captures complex brain interactions by modeling the morphological relationship between pairs of ROIs.
Abstract: Brain disorders, such as Autism Spectrum Disorder (ASD), alter brain functional (from fMRI) and structural (from diffusion MRI) connectivity at multiple levels and in varying degrees. While unraveling such alterations has been the focus of a large number of studies, morphological brain connectivity has remained outside the research scope. In particular, shape-to-shape relationships across brain regions of interest (ROIs) have rarely been investigated. As such, the use of networks based on morphological brain data in neurological disorder diagnosis, while leveraging the advent of machine learning, could complement our knowledge of brain wiring alterations in unprecedented ways. In this paper, we use conventional T1-weighted MRI to define morphological brain networks (MBNs), each quantifying the shape relationship between different cortical regions for a specific cortical attribute at both low-order and high-order levels. While typical brain connectomes investigate the relationship between two ROIs, we propose a high-order MBN which better captures complex brain interactions by modeling the morphological relationship between pairs of ROIs. For ASD identification, we present a connectomic manifold learning framework, which learns multiple kernels to estimate a similarity measure between ASD and normal control (NC) connectional features, to perform dimensionality reduction for clustering ASD and NC subjects. We benchmark our ASD identification method against both supervised and unsupervised state-of-the-art methods, while depicting the most discriminative high- and low-order relationships between morphological regions in the left and right hemispheres.
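A minimal sketch of the low-/high-order distinction, under the assumption that a low-order edge compares a cortical attribute between two ROIs and a high-order edge correlates the connectivity profiles of ROI pairs; the paper's exact definitions and normalizations may differ.

# Sketch: low-order edges compare an attribute between ROIs; high-order edges
# correlate ROI connectivity profiles. Data are random placeholders.
import numpy as np

rng = np.random.default_rng(0)
thickness = rng.random(35)                       # one attribute value per ROI

# Low-order MBN: pairwise absolute difference in the attribute.
low = np.abs(thickness[:, None] - thickness[None, :])

# High-order MBN: correlation between rows of the low-order network,
# i.e., how similarly two ROIs relate to all other ROIs.
high = np.corrcoef(low)
print(low.shape, high.shape)                     # (35, 35) (35, 35)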

Journal ArticleDOI
TL;DR: This paper proposes the use of an hBCI for the classification of three brain activation patterns elicited by mental arithmetic, motor imagery, and idle state, with the aim to elevate the information transfer rate (ITR) of h BCI by increasing the number of classes while minimizing the loss of accuracy.
Abstract: The performance of a brain-computer interface (BCI) can be enhanced by simultaneously using two or more modalities to record brain activity, which is generally referred to as a hybrid BCI. To date, many BCI researchers have tried to implement a hybrid BCI system by combining electroencephalography (EEG) and functional near-infrared spectroscopy (NIRS) to improve the overall accuracy of binary classification. However, since hybrid EEG-NIRS BCI, which will be denoted by hBCI in this paper, has not been applied to ternary classification problems, paradigms and classification strategies appropriate for ternary classification using hBCI are not well investigated. Here we propose the use of an hBCI for the classification of three brain activation patterns elicited by mental arithmetic, motor imagery, and idle state, with the aim to elevate the information transfer rate (ITR) of hBCI by increasing the number of classes while minimizing the loss of accuracy. EEG electrodes were placed over the prefrontal cortex and the motor cortex, and NIRS optodes were placed only on the forehead. The ternary classification problem was decomposed into three binary classification problems using the “one-versus-one” classification strategy to apply the multi-band common spatial patterns filter to EEG data. A 10 × 10-fold cross validation was performed using shrinkage linear discriminant analysis to evaluate the average classification accuracies for EEG-BCI, NIRS-BCI, and hBCI when the meta-classification method was adopted to enhance classification accuracy. The ternary classification accuracies for EEG-BCI, NIRS-BCI, and hBCI were 76.1 ± 12.8, 64.1 ± 9.7, and 82.2 ± 10.2%, respectively. The classification accuracy of the proposed hBCI was thus significantly higher than those of the other BCIs (p < 0.005). The average ITR for the proposed hBCI was calculated to be 4.70 ± 1.92 bits/minute, which was 34.3% higher than that reported for a previous binary hBCI study.
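A hedged sketch of the classification strategy (one-versus-one decomposition with shrinkage LDA, here via scikit-learn) on placeholder features; the CSP filtering and the meta-classifier described above are omitted.

# One-versus-one decomposition of the ternary problem with shrinkage LDA.
# Random features stand in for CSP/NIRS features, so accuracy is near chance.
import numpy as np
from sklearn.multiclass import OneVsOneClassifier
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.standard_normal((90, 12))          # 90 trials x 12 placeholder features
y = np.repeat([0, 1, 2], 30)               # arithmetic / motor imagery / idle

clf = OneVsOneClassifier(
    LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto"))
acc = cross_val_score(clf, X, y, cv=10).mean()
print(f"chance is ~33%; random features give ~{acc:.0%}")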

Journal ArticleDOI
TL;DR: Uncertainpy, an open-source Python toolbox, tailored to perform uncertainty quantification and sensitivity analysis of neuroscience models, is presented to the neuroscience community in a user-oriented manner.
Abstract: Computational models in neuroscience typically contain many parameters that are poorly constrained by experimental data. Uncertainty quantification and sensitivity analysis provide rigorous procedures to quantify how the model output depends on this parameter uncertainty. Unfortunately, the application of such methods is not yet standard within the field of neuroscience. Here we present Uncertainpy, an open-source Python toolbox, tailored to perform uncertainty quantification and sensitivity analysis of neuroscience models. Uncertainpy aims to make it quick and easy to get started with uncertainty analysis, without any need for detailed prior knowledge. The toolbox allows uncertainty quantification and sensitivity analysis to be performed on already existing models without needing to modify the model equations or model implementation. Uncertainpy bases its analysis on polynomial chaos expansions, which are more efficient than the more standard Monte Carlo-based approaches. Uncertainpy is tailored for neuroscience applications by its built-in capability for calculating characteristic features in the model output. The toolbox does not merely perform a point-to-point comparison of the "raw" model output (e.g., membrane voltage traces), but can also calculate the uncertainty and sensitivity of salient model response features such as spike timing, action potential width, average interspike interval, and other features relevant for various neural and neural network models. Uncertainpy comes with several common models and features built in, and including custom models and new features is easy. The aim of the current paper is to present Uncertainpy to the neuroscience community in a user-oriented manner. To demonstrate its broad applicability, we perform an uncertainty quantification and sensitivity analysis of three case studies relevant for neuroscience: the original Hodgkin-Huxley point-neuron model for action potential generation, a multi-compartmental model of a thalamic interneuron implemented in the NEURON simulator, and a sparsely connected recurrent network model implemented in the NEST simulator.
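Following Uncertainpy's documented usage pattern (exact details may vary by version), a minimal quantification of a toy model might look like the sketch below; the model function and its distributions are made up for illustration.

# Sketch of Uncertainpy usage: wrap a model function, attach chaospy
# distributions to uncertain parameters, run the polynomial-chaos analysis.
import uncertainpy as un
import chaospy as cp
import numpy as np

def decay(a=1.0, tau=5.0):
    """Toy model: exponential decay; returns (time, values)."""
    t = np.linspace(0, 20, 100)
    return t, a * np.exp(-t / tau)

model = un.Model(run=decay, labels=["time", "value"])
parameters = {"a": cp.Uniform(0.8, 1.2), "tau": cp.Uniform(3, 7)}
UQ = un.UncertaintyQuantification(model=model, parameters=parameters)
data = UQ.quantify()      # mean, variance, Sobol sensitivities of the output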

Journal ArticleDOI
TL;DR: In this paper, the authors proposed clustering coefficients tailored to correlation matrices to measure the strength of the association between the two neighboring nodes of a focal node relative to the amount of pseudo-correlation expected from indirect paths between the nodes.
Abstract: Graph theory is a useful tool for deciphering structural and functional networks of the brain on various spatial and temporal scales. The clustering coefficient quantifies the abundance of connected triangles in a network and is a major descriptive statistic of networks. For example, it finds an application in the assessment of small-worldness of brain networks, which is affected by attentional and cognitive conditions, age, psychiatric disorders and so forth. However, it remains unclear how the clustering coefficient should be measured in a correlation-based network, which is among the major representations of brain networks. In the present article, we propose clustering coefficients tailored to correlation matrices. The key idea is to use three-way partial correlation or partial mutual information to measure the strength of the association between the two neighbouring nodes of a focal node relative to the amount of pseudo-correlation expected from indirect paths between the nodes. Our method avoids the difficulties of previous applications of clustering coefficient (and other) measures in defining correlational networks, i.e., thresholding on the correlation value, discarding of negative correlation values, the pseudo-correlation problem and full partial correlation matrices whose estimation is computationally difficult. For proof of concept, we apply the proposed clustering coefficient measures to functional magnetic resonance imaging data obtained from healthy participants of various ages and compare them with conventional clustering coefficients. We show that the clustering coefficients decline with age. The proposed clustering coefficients are more strongly correlated with age than the conventional ones are. We also show that the local variants of the proposed clustering coefficients (i.e., abundance of triangles around a focal node) are useful in characterising individual nodes. In contrast, the conventional local clustering coefficients were strongly correlated with, and therefore may be confounded by, the node's connectivity. The proposed methods are expected to help us understand clustering and the lack thereof in correlational brain networks, such as those derived from functional time series and across-participant correlations in neuroanatomical properties.
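The key ingredient can be made concrete: the association between two neighbors j and k of a focal node i is measured after partialling out i, so the triangle is not inflated by the indirect path j-i-k. The per-node averaging in the sketch below is a simplification of the paper's definition.

# Three-way partial correlation rho_{jk.i} from a correlation matrix R, and a
# simplified per-node coefficient; the paper's exact weighting differs.
import numpy as np

def partial_corr(R, j, k, i):
    return (R[j, k] - R[i, j] * R[i, k]) / np.sqrt(
        (1 - R[i, j] ** 2) * (1 - R[i, k] ** 2))

def local_coefficient(R, i):
    n = R.shape[0]
    vals = [partial_corr(R, j, k, i)
            for j in range(n) for k in range(j + 1, n)
            if i not in (j, k)]
    return float(np.mean(vals))

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 6))          # 200 time points, 6 nodes
R = np.corrcoef(X, rowvar=False)
print([round(local_coefficient(R, i), 3) for i in range(6)])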

Journal ArticleDOI
TL;DR: PerAF reduced the influence of BOLD signal intensity and hence increased the inter-scanner reliability of ALFF; among the VWWB metrics, DC showed the worst reliability, while PerAF showed intra-scanner reliability similar to that of ALFF and the best reliability among all four metrics.
Abstract: As multi-center studies with resting-state functional magnetic resonance imaging (RS-fMRI) are increasingly applied in neuropsychiatric research, both the intra- and inter-scanner reliability of RS-fMRI are becoming increasingly important. The amplitude of low frequency fluctuation (ALFF), regional homogeneity (ReHo), and degree centrality (DC) are 3 main RS-fMRI metrics used in voxel-wise whole-brain (VWWB) analysis. Although the intra-scanner reliability (i.e., test-retest reliability) of these metrics has been widely investigated, few studies have investigated their inter-scanner reliability. In the current study, 21 healthy young subjects were enrolled and scanned with blood oxygenation level dependent (BOLD) RS-fMRI in 3 visits (V1-V3), with V1 and V2 scanned on a GE MR750 scanner and V3 on a Siemens Prisma. RS-fMRI data were collected under two conditions, eyes open (EO) and eyes closed (EC), each lasting 8 minutes. We first evaluated the intra- and inter-scanner reliability of ALFF, ReHo, and DC. Second, we measured the systematic difference between two scanning visits on the same scanner as well as between the two scanners. Third, to account for potential differences in intra- and inter-scanner local magnetic field inhomogeneity, we measured the difference in BOLD signal intensity relative to the mean BOLD signal intensity of the whole brain between each pair of visits. Last, we used the percent amplitude of fluctuation (PerAF) to correct the difference induced by relative BOLD signal intensity. The inter-scanner reliability was much worse than the intra-scanner reliability; among the VWWB metrics, DC showed the worst reliability (both intra- and inter-scanner). PerAF showed intra-scanner reliability similar to that of ALFF and the best reliability among all 4 metrics. PerAF reduced the influence of BOLD signal intensity and hence increased the inter-scanner reliability of ALFF. For multi-center studies, inter-scanner reliability should be taken into account.
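PerAF for one voxel's time series is simply the mean absolute deviation from the temporal mean, expressed as a percentage of that mean, which removes the dependence on raw BOLD intensity; a quick sketch:

# Percent amplitude of fluctuation (PerAF) for a single voxel's time series.
# Rescaling the whole series leaves PerAF unchanged, unlike raw ALFF.
import numpy as np

def peraf(ts):
    m = ts.mean()
    return np.mean(np.abs((ts - m) / m)) * 100.0

rng = np.random.default_rng(0)
bold = 1000 + 20 * rng.standard_normal(240)   # dummy 8-min series at TR = 2 s
print(f"PerAF = {peraf(bold):.2f}%")          # identical if bold is scaled x10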

Journal ArticleDOI
TL;DR: The results show the feasibility of incorporating recognizable driver's bioelectrical responses into advanced driver-assistance systems to carry out early detection of emergency braking situations which could be useful to reduce car accidents.
Abstract: The anticipatory recognition of braking is essential to prevent traffic accidents. For instance, driving assistance systems can be useful to respond properly to emergency braking situations. Moreover, the response time to emergency braking situations can be affected and even increased by different cognitive states of the driver caused by stress, fatigue, and extra workload. This work investigates the detection of emergency braking from the driver's electroencephalographic (EEG) signals that precede the brake pedal actuation. Bioelectrical signals were recorded while participants drove in a car simulator and avoided potential collisions by performing emergency braking. In addition, participants were subjected to stress, workload, and fatigue. EEG signals were classified using support vector machines (SVM) and convolutional neural networks (CNN) in order to discriminate between braking intention and normal driving. Results showed significant recognition of emergency braking intention, which was on average 71.1% for SVM and 71.8% for CNN. In addition, the classification accuracy for the best participant was 80.1 and 88.1% for SVM and CNN, respectively. These results show the feasibility of incorporating recognizable driver bioelectrical responses into advanced driver-assistance systems to carry out early detection of emergency braking situations, which could be useful to reduce car accidents.

Journal ArticleDOI
TL;DR: DynaSim is an open-source MATLAB/GNU Octave toolbox for rapid prototyping of neural models and batch simulation management, designed to speed up and simplify the process of generating, sharing, and exploring network models of neurons with one or more compartments.
Abstract: DynaSim is an open-source MATLAB/GNU Octave toolbox for rapid prototyping of neural models and batch simulation management. It is designed to speed up and simplify the process of generating, sharing, and exploring network models of neurons with one or more compartments. Models can be specified by equations directly (similar to XPP or the Brian simulator) or by lists of predefined or custom model components. The higher-level specification supports arbitrarily complex population models and networks of interconnected populations. DynaSim also includes a large set of features that simplify exploring model dynamics over parameter spaces, running simulations in parallel using both multicore processors and high-performance computer clusters, and analyzing and plotting large numbers of simulated data sets in parallel. It also includes a graphical user interface (DynaSim GUI) that supports full functionality without requiring user programming. The software has been implemented in MATLAB to enable advanced neural modeling using MATLAB, given its popularity and a growing interest in modeling neural systems. The design of DynaSim incorporates a novel schema for model specification to facilitate future interoperability with other specifications (e.g., NeuroML, SBML), simulators (e.g., NEURON, Brian, NEST), and web-based applications (e.g., Geppetto) outside MATLAB. DynaSim is freely available at http://dynasimtoolbox.org. This tool promises to reduce barriers for investigating dynamics in large neural models, facilitate collaborative modeling, and complement other tools being developed in the neuroinformatics community.

Journal ArticleDOI
TL;DR: A versatile and extendable MATLAB-based toolbox, BRANT (BRAinNetome fmri Toolkit), with a wide range of rs-fMRI data processing functions and code-generated GUIs, to facilitate data processing and alleviate the burden of manually drawing GUIs.
Abstract: Data processing toolboxes for resting-state functional MRI (rs-fMRI) have provided us with a variety of functions and user-friendly graphic user interfaces (GUIs). However, many toolboxes only cover a certain range of functions and use exclusively designed GUIs. To facilitate data processing and alleviate the burden of manually drawing GUIs, we have developed a versatile and extendable MATLAB-based toolbox, BRANT (BRAinNetome fmri Toolkit), with a wide range of rs-fMRI data processing functions and code-generated GUIs. During the implementation, we have also empowered the toolbox with parallel computing techniques, efficient file handling methods for compressed file formats, and one-line scripting. In BRANT, users can find rs-fMRI batch processing functions for preprocessing, brain spontaneous activity analysis, functional connectivity analysis, complex network analysis, statistical analysis, and results visualization, while developers can quickly publish scripts with code-generated GUIs.

Journal ArticleDOI
TL;DR: A freely available and open-source MATLAB graphical user interface toolbox, known as the Neuroscience Information Toolbox (NIT), for EEG–fMRI multimodal fusion analysis, which was designed to provide a convenient and easy-to-use toolbox for researchers, especially for novice users.
Abstract: Recently, scalp electroencephalography (EEG) and functional magnetic resonance imaging (fMRI) multimodal fusion has been pursued in an effort to study human brain function and dysfunction and to obtain more comprehensive information on brain activity with satisfactory spatial and temporal resolution. However, a flexible and easy-to-use toolbox for EEG-fMRI multimodal fusion is still lacking. In this study, we therefore developed a freely available and open-source MATLAB graphical user interface toolbox, known as the Neuroscience Information Toolbox (NIT), for EEG-fMRI multimodal fusion analysis. The NIT consists of three modules: 1) the fMRI module, which offers batch fMRI preprocessing, nuisance signal removal, bandpass filtering and calculation of resting-state measures; 2) the EEG module, which includes artifact removal, extraction of EEG features (event onset, power and amplitude), and marking of interesting events; and 3) the fusion module, which includes fMRI-informed EEG analysis and EEG-informed fMRI analysis. The NIT was designed to provide a convenient and easy-to-use toolbox for researchers, especially novice users. The NIT can be downloaded for free at http://www.neuro.uestc.edu.cn/NIT.html, and detailed information, including an introduction to NIT, a user's manual and example data sets, can also be found on this website. We hope that the NIT will be a useful toolbox for exploring brain information in various EEG and fMRI studies.

Journal ArticleDOI
TL;DR: Experimental results show that the fusion of low- and high-order FCs can generally help to improve the final classification performance, even though the high-order FC may contain less discriminative information than its low-order counterpart.
Abstract: Functional connectivity (FC) network has become an increasingly useful tool for understanding the cerebral working mechanism and mining sensitive biomarkers for neural/mental disease diagnosis. Currently, Pearson's Correlation (PC) is the simplest and most commonly used scheme in FC estimation. Despite its empirical effectiveness, PC only encodes the low-order (i.e., second-order) statistics by calculating the pairwise correlations between network nodes (brain regions), which fails to capture the high-order information involved in FC (e.g., the correlations among different edges in a network). To address this issue, we propose a novel FC estimation method based on Matrix Variate Normal Distribution (MVND), which can capture both low- and high-order correlations simultaneously with a clear mathematical interpretability. Specifically, we first generate a set of BOLD subseries by the sliding window scheme, and for each subseries we construct a temporal FC network by PC. Then, we employ the constructed FC networks as samples to estimate the final low- and high-order FC networks by maximizing the likelihood of MVND. To illustrate the effectiveness of the proposed method, we conduct experiments to identify subjects with Mild Cognitive Impairment (MCI) from Normal Controls (NCs). Experimental results show that the fusion of low- and high-order FCs can generally help to improve the final classification performance, even though the high-order FC may contain less discriminative information than its low-order counterpart. Importantly, the proposed method for simultaneous estimation of low- and high-order FCs can achieve better classification performance than the two baseline methods, i.e., the original PC method and a recent high-order FC estimation method.

Journal ArticleDOI
TL;DR: Five characteristics that a scientific code in computational science should possess are articulated: re-runnable, repeatable, reproducible, reusable, and replicable.
Abstract: Scientific code is different from production software. Scientific code, by producing results that are then analyzed and interpreted, participates in the elaboration of scientific conclusions. This imposes specific constraints on the code that are often overlooked in practice. We articulate, with a small example, five characteristics that a scientific code in computational science should possess: re-runnable, repeatable, reproducible, reusable, and replicable. The code should be executable (re-runnable) and produce the same result more than once (repeatable); it should allow an investigator to reobtain the published results (reproducible) while being easy to use, understand and modify (reusable), and it should act as an available reference for any ambiguity in the algorithmic descriptions of the article (replicable).

Journal ArticleDOI
TL;DR: Basic concepts that will be important in the development of credible clinical neuroscience models are introduced: reproducibility and replicability; verification and validation; model configuration; and procedures and processes for credible mechanistic multiscale modeling.
Abstract: Modeling and simulation in computational neuroscience is currently a research enterprise to better understand neural systems. It is not yet directly applicable to the problems of patients with brain disease. To be used for clinical applications, there must not only be considerable progress in the field but also a concerted effort to use best practices in order to demonstrate model credibility to regulatory bodies, to clinics and hospitals, to doctors, and to patients. In doing this for neuroscience, we can learn lessons from long-standing practices in other areas of simulation (aircraft, computer chips), from software engineering, and from other biomedical disciplines. In this manuscript, we introduce some basic concepts that will be important in the development of credible clinical neuroscience models: reproducibility and replicability; verification and validation; model configuration; and procedures and processes for credible mechanistic multiscale modeling. We also discuss how garnering strong community involvement can promote model credibility. Finally, in addition to direct usage with patients, we note the potential for simulation usage in the area of Simulation-Based Medical Education, an area which to date has been primarily reliant on physical models (mannequins) and scenario-based simulations rather than on numerical simulations.

Journal ArticleDOI
TL;DR: An Ensemble Classification model with Performance Weighting is presented that combines several linear-kernel Support-Vector-Machine classifiers for different groups of biomedical tests and selects the most discriminant features from neuroimages of PPMI database subjects.
Abstract: In recent years, several approaches to develop an effective Computer-Aided-Diagnosis (CAD) system for Parkinson's Disease (PD) have been proposed. Most of these methods have focused almost exclusively on brain images through the use of Machine-Learning algorithms suitable for characterizing structural or functional patterns. Those patterns provide enough information about the status and/or the progression of Parkinson's Disease at intermediate and advanced stages. Nevertheless, this information could be insufficient at early stages of the pathology. The Parkinson's Progression Markers Initiative (PPMI) database includes neurological images along with multiple biomedical tests. This information opens up the possibility of comparing different biomarker classification results. As the data come from heterogeneous sources, it is expected that some of these biomarkers could be included to obtain new information about the pathology. Based on that idea, this work presents an Ensemble Classification model with Performance Weighting. This proposal has been tested by comparing Healthy Control (HC) subjects vs. patients with PD (considering both PD- and SWEDD-labeled subjects as the same class). This model combines several Support-Vector-Machine (SVM) classifiers with linear kernels for different groups of biomedical tests, including CerebroSpinal Fluid (CSF), RNA, and Serum tests, and for pre-processed neuroimaging features (Voxels-As-Features and a list of defined Morphological Features) from PPMI database subjects. The proposed methodology makes use of all data sources and selects the most discriminant features (mainly from neuroimages). Using this performance-weighted ensemble classification model, classification results of up to 96% were obtained.
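A hedged sketch of performance weighting, under the assumption that each modality's SVM votes with a weight proportional to its own cross-validated accuracy; the paper's exact weighting scheme may differ, and the data here are placeholders.

# One linear SVM per feature group, each weighted by its cross-validated
# accuracy when votes are combined. Labels and feature blocks are dummies.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
y = rng.integers(0, 2, 120)                       # HC vs. PD (dummy labels)
groups = {"CSF": rng.standard_normal((120, 5)),   # one feature block per test
          "RNA": rng.standard_normal((120, 8)),
          "imaging": rng.standard_normal((120, 50))}

models, weights = {}, {}
for name, X in groups.items():
    clf = SVC(kernel="linear")
    weights[name] = cross_val_score(clf, X, y, cv=5).mean()  # performance weight
    models[name] = clf.fit(X, y)

def predict(samples):                             # samples: dict name -> features
    score = sum(weights[n] * (2 * models[n].predict(samples[n]) - 1)
                for n in models)                  # weighted signed votes
    return (score > 0).astype(int)

print(predict({n: X[:3] for n, X in groups.items()}))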

Journal ArticleDOI
TL;DR: By providing access to very large datasets on patients with different brain disorders and enabling linkages to provincial, national and international databases, Brain-CODE will help to generate new hypotheses about the biological bases of brain disorders, and ultimately promote new discoveries to improve patient care.
Abstract: Historically, research databases have existed in isolation with no practical avenue for sharing or pooling medical data into high dimensional datasets that can be efficiently compared across databases. To address this challenge, the Ontario Brain Institute's "Brain-CODE" is a large-scale neuroinformatics platform designed to support the collection, storage, federation, sharing and analysis of different data types across several brain disorders, as a means to understand common underlying causes of brain dysfunction and develop novel approaches to treatment. By providing researchers access to aggregated datasets that they otherwise could not obtain independently, Brain-CODE incentivizes data sharing and collaboration and facilitates analyses both within and across disorders and across a wide array of data types, including clinical, neuroimaging and molecular. The Brain-CODE system architecture provides the technical capabilities to support (1) consolidated data management to securely capture, monitor and curate data, (2) privacy and security best-practices, and (3) interoperable and extensible systems that support harmonization, integration, and query across diverse data modalities and linkages to external data sources. Brain-CODE currently supports collaborative research networks focused on various brain conditions, including neurodevelopmental disorders, cerebral palsy, neurodegenerative diseases, epilepsy and mood disorders. These programs are generating large volumes of data that are integrated within Brain-CODE to support scientific inquiry and analytics across multiple brain disorders and modalities. By providing access to very large datasets on patients with different brain disorders and enabling linkages to provincial, national and international databases, Brain-CODE will help to generate new hypotheses about the biological bases of brain disorders, and ultimately promote new discoveries to improve patient care.

Journal ArticleDOI
TL;DR: To facilitate group studies, software was developed that performs virtual electrode implantation in patients' neuroanatomy and overlays results of epileptic and functional mapping, as well as resection masks from surgery.
Abstract: In some cases of pharmaco-resistant and focal epilepsies, intracranial recordings performed epidurally (electrocorticography, ECoG) and/or in depth (stereoelectroencephalography, SEEG) can be required to locate the seizure onset zone and the eloquent cortex before surgical resection. In SEEG, each electrode contact records the brain's electrical activity in a spherical volume of approximately 3 mm diameter. The spatial coverage is around 1% of the brain and differs between patients because the implantation of electrodes is tailored to each case. Group studies thus need a large number of patients to reach a large spatial sampling, which can be achieved more easily using a multicentric approach such as that implemented in our F-TRACT project (f-tract.eu). To facilitate group studies, we developed a software tool, IntrAnat Electrodes, that allows users to perform virtual electrode implantation in patients' neuroanatomy and to overlay results of epileptic and functional mapping, as well as resection masks from the surgery. IntrAnat Electrodes is based on a patient database providing multiple search criteria to highlight various group features. For each patient, the anatomical processing is based on a series of publicly available software packages. Imaging modalities (Positron Emission Tomography (PET), anatomical MRI pre-implantation, post-implantation and post-resection, functional MRI, diffusion MRI, Computed Tomography (CT) with electrodes) are coregistered. The 3D T1 pre-implantation MRI gray/white matter is segmented and spatially normalized to obtain a series of cortical parcels using different neuroanatomical atlases. On post-implantation images, the user can position 3D models of electrodes defined by their geometry. Each electrode contact is then labeled according to its position in the anatomical atlases, the class of tissue (gray or white matter, cerebrospinal fluid) and its presence inside or outside the resection mask. Users can add more functionally informed labels to contacts, such as clinical responses after electrical stimulation, cortico-cortical evoked potentials, gamma band activity during cognitive tasks, or epileptogenicity. The IntrAnat Electrodes software thus provides a means to visualize multimodal data. The contact labels allow searching for patients in the database according to multiple criteria representing almost all available data, which is to our knowledge unique among current SEEG software. IntrAnat Electrodes will be available in the forthcoming release of the BrainVisa software, and tutorials can be found on the F-TRACT webpage.

Journal ArticleDOI
TL;DR: A fully automated algorithm, ATLAS, trained to match the core-lesion delineation of human experts, is presented; it may contribute to more optimal triaging of patients for active or supportive therapy.
Abstract: Stroke is the second most common cause of death worldwide, responsible for 6.24 million deaths in 2015 (about 11% of all deaths). Three out of four stroke survivors suffer long term disability, as many cannot return to their prior employment or live independently. Eighty-seven percent of strokes are ischemic. As an increasing volume of ischemic brain tissue proceeds to permanent infarction in the hours following onset, immediate treatment is pivotal to increase the likelihood of a good clinical outcome for the patient. Triaging stroke patients for active therapy requires assessment of the volumes of salvageable and irreversibly damaged tissue, respectively. With Magnetic Resonance Imaging (MRI), diffusion-weighted imaging is commonly used to assess the extent of permanently damaged tissue, the core lesion. To speed up and standardize decision-making in acute stroke management, we present a fully automated algorithm, ATLAS, for delineating the core lesion. We compare its performance to widely used threshold-based methodology, as well as a recently proposed state-of-the-art algorithm, COMBAT Stroke. ATLAS is a machine learning algorithm trained to match the lesion delineation of human experts. The algorithm utilizes decision trees along with spatial pre- and post-regularization to outline the lesion. As input data the algorithm takes images from 108 patients with acute anterior circulation stroke from the I-Know multicenter study. We divided the data into training and test data using leave-one-out cross validation to assess performance in independent patients. Performance was quantified by the Dice index. The median Dice coefficient of the ATLAS algorithm was 0.6122, which was significantly higher than that of COMBAT Stroke, with a median Dice coefficient of 0.5636 (p < 0.0001), and of the best-performing methods based on thresholding of the diffusion weighted images (median Dice coefficient: 0.3951) or the apparent diffusion coefficient (median Dice coefficient: 0.2839). Furthermore, the volume of the ATLAS segmentation was compared to the volume of the expert segmentation, yielding a standard deviation of the residuals of 10.25 ml, compared to 17.53 ml for COMBAT Stroke. Since accurate quantification of the volume of permanently damaged tissue is essential in acute stroke patients, ATLAS may contribute to more optimal patient triaging for active or supportive therapy.
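For reference, the Dice index used to quantify performance is 2 * |A intersect B| / (|A| + |B|); a quick implementation:

# Dice index between a predicted lesion mask and an expert mask.
import numpy as np

def dice(a, b):
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

pred = np.zeros((10, 10), dtype=bool); pred[2:6, 2:6] = True    # 16 voxels
truth = np.zeros((10, 10), dtype=bool); truth[3:7, 3:7] = True  # 16 voxels, 9 shared
print(f"Dice = {dice(pred, truth):.3f}")                        # -> 0.562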