
Showing papers in "Frontiers in Neuroinformatics in 2020"


Journal ArticleDOI
TL;DR: A systematic review of currently available, low-cost, consumer EEG-based drowsiness detection systems found that even basic features, such as the power spectra of EEG bands, were able to consistently detect drowsiness.
Abstract: Drowsiness is a leading cause of traffic and industrial accidents, costing lives and productivity. Electroencephalography (EEG) signals can reflect awareness and attentiveness, and low-cost consumer EEG headsets are available on the market. The use of these devices as drowsiness detectors could increase the accessibility of safety and productivity-enhancing devices for small businesses and developing countries. We conducted a systematic review of currently available, low-cost, consumer EEG-based drowsiness detection systems. We sought to determine whether consumer EEG headsets could be reliably utilized as rudimentary drowsiness detection systems. We included documented cases describing successful drowsiness detection using consumer EEG-based devices, including the NeuroSky MindWave, InteraXon Muse, Emotiv Epoc, Emotiv Insight, and OpenBCI. Of 46 relevant studies, ~27 reported an accuracy score. The lowest of these, a minimum of 31%, was reported for the NeuroSky MindWave. The second lowest accuracy reported was 79.4%, from an OpenBCI study. In many cases, algorithmic optimization remains necessary. Different methods for accuracy calculation, system calibration, and different definitions of drowsiness made direct comparisons problematic. However, even basic features, such as the power spectra of EEG bands, were able to consistently detect drowsiness. Each specific device has its own capabilities, tradeoffs, and limitations. Widely used spectral features can achieve successful drowsiness detection, even with low-cost consumer devices; however, reliability issues must still be addressed in an occupational context.

86 citations
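The spectral features the review highlights can be illustrated with a minimal numpy sketch: band power computed from a periodogram. The band edges and the theta/beta ratio below are common illustrative choices, not the pipeline of any specific study in the review.

```python
import numpy as np

def band_power(signal, fs, band):
    """Power of `signal` within a frequency band, from a simple periodogram."""
    freqs = np.fft.rfftfreq(signal.size, 1 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / (fs * signal.size)
    mask = (freqs >= band[0]) & (freqs < band[1])
    return psd[mask].sum()

# Synthetic "drowsy" EEG: a dominant 6 Hz theta rhythm plus background noise.
fs = 256
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(0)
eeg = np.sin(2 * np.pi * 6 * t) + 0.3 * rng.standard_normal(t.size)

theta = band_power(eeg, fs, (4, 8))    # 4-8 Hz
beta = band_power(eeg, fs, (13, 30))   # 13-30 Hz
print(theta / beta > 1.0)   # an elevated theta/beta ratio is a simple drowsiness marker
```

Real detectors add artifact rejection, calibration, and per-subject thresholds on top of such features.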


Journal ArticleDOI
TL;DR: Past and present trends within the animal fMRI community are explored, and critical aspects of study design, data acquisition, and post-processing that may affect the results and influence comparability between studies are highlighted.
Abstract: Animal whole-brain functional magnetic resonance imaging (fMRI) provides a non-invasive window into brain activity. A collection of associated methods aims to replicate observations made in humans and to identify the mechanisms underlying the distributed neuronal activity in the healthy and disordered brain. Animal fMRI studies have developed rapidly over the past years, fueled by the development of resting-state fMRI connectivity and genetically encoded neuromodulatory tools. Yet, comparisons between sites remain hampered by lack of standardization. Recently, we highlighted that mouse resting-state functional connectivity converges across centers, although large discrepancies in sensitivity and specificity remained. Here, we explore past and present trends within the animal fMRI community and highlight critical aspects in study design, data acquisition, and post-processing operations that may affect the results and influence the comparability between studies. We also suggest practices aimed at promoting the adoption of standards within the community and improving between-lab reproducibility. The implementation of standardized animal neuroimaging protocols will facilitate animal population imaging efforts as well as meta-analysis and replication studies, the gold standards in evidence-based science.

67 citations


Journal ArticleDOI
TL;DR: Compared with three widely used GAN-based MRI reconstruction methods, the proposed method can obtain a higher peak signal-to-noise ratio (PSNR) and structural similarity index measure (SSIM), and the details of the reconstructed image are more abundant and more realistic for further clinical scrutiny and diagnostic tasks.
Abstract: Research on undersampled magnetic resonance image (MRI) reconstruction can increase the speed of MRI imaging and reduce patient suffering. In this paper, an undersampled MRI reconstruction method based on Generative Adversarial Networks with the Self-Attention mechanism and the Relative Average discriminator (SARA-GAN) is proposed. In our SARA-GAN, the relative average discriminator theory is applied to make full use of the prior knowledge, in which half of the input data of the discriminator is true and half is fake. At the same time, a self-attention mechanism is incorporated into the high-layer of the generator to build long-range dependence of the image, which can overcome the problem of limited convolution kernel size. In addition, spectral normalization is employed to stabilize the training process. Compared with three widely used GAN-based MRI reconstruction methods, i.e., DAGAN, DAWGAN, and DAWGAN-GP, the proposed method can obtain a higher peak signal-to-noise ratio (PSNR) and structural similarity index measure (SSIM), and the details of the reconstructed image are more abundant and more realistic for further clinical scrutiny and diagnostic tasks.

58 citations
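The two image-quality metrics reported (PSNR and SSIM) are straightforward to compute. Below is a hedged numpy sketch; for brevity the SSIM uses a single global window rather than the sliding-window form used in practice, and the gradient "image" is synthetic.

```python
import numpy as np

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB between a reference and a test image."""
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    return 10 * np.log10(peak ** 2 / mse)

def ssim_global(ref, test, peak=255.0):
    """Simplified SSIM over one global window (no sliding window)."""
    x, y = ref.astype(float), test.astype(float)
    c1, c2 = (0.01 * peak) ** 2, (0.03 * peak) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

ref = np.tile(np.arange(256, dtype=float), (256, 1))  # simple gradient "image"
test = ref + 10.0                                     # uniform error, so MSE = 100
print(round(psnr(ref, test), 2))   # 28.13, i.e. 10*log10(255^2 / 100)
```

Higher PSNR/SSIM means the reconstruction is closer to the fully sampled reference.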


Journal ArticleDOI
TL;DR: The same approach showed potential for earlier prediction of the prevalent underlying disease in dementia patients whose clinical profile is uncertain between AD and VD, suggesting its usefulness in supporting physicians' diagnostic evaluations.
Abstract: Among dementia-like diseases, Alzheimer disease (AD) and vascular dementia (VD) are two of the most frequent. AD and VD may share multiple neurological symptoms that may lead to controversial diagnoses when using conventional clinical and MRI criteria. Therefore, other approaches are needed to overcome this issue. Machine learning (ML) combined with magnetic resonance imaging (MRI) has been shown to improve the diagnostic accuracy of several neurodegenerative diseases, including dementia. To this end, in this study, we investigated, first, whether different kinds of ML algorithms, combined with advanced MRI features, could be supportive in classifying VD from AD and, second, whether the developed approach might help in predicting the prevalent disease in subjects with an unclear profile of AD or VD. Three ML categories of algorithms were tested: artificial neural network (ANN), support vector machine (SVM), and adaptive neuro-fuzzy inference system (ANFIS). Multiple regional metrics from resting-state fMRI (rs-fMRI) and diffusion tensor imaging (DTI) of 60 subjects (33 AD, 27 VD) were used as input features to train the algorithms and find the best feature pattern to classify VD from AD. We then used the identified VD-AD discriminant feature pattern as input for the most performant ML algorithm to predict the disease prevalence in 15 dementia patients with a "mixed VD-AD dementia" (MXD) clinical profile using their baseline MRI data. ML predictions were compared with the diagnosis evidence from a 3-year clinical follow-up. ANFIS emerged as the most efficient algorithm in discriminating AD from VD, reaching a classification accuracy greater than 84% using a small feature pattern. Moreover, ANFIS showed improved classification accuracy when trained with a multimodal input feature data set (e.g., DTI + rs-fMRI metrics) rather than a unimodal feature data set. 
When applying the best discriminant pattern to the MXD group, ANFIS achieved a correct prediction rate of 77.33%. Overall, the results showed that our approach has high discriminant power for classifying AD and VD profiles. Moreover, the same approach showed potential for earlier prediction of the prevalent underlying disease in dementia patients whose clinical profile is uncertain between AD and VD, suggesting its usefulness in supporting physicians' diagnostic evaluations.

56 citations


Journal ArticleDOI
TL;DR: A systematic review of the literature in automated multiple sclerosis lesion segmentation based on deep learning and gives a quantitative comparison of the methods reviewed through two metrics: Dice Similarity Coefficient and Positive Predictive Value.
Abstract: In recent years, multiple literature reviews have covered methods for automatically segmenting multiple sclerosis (MS) lesions. However, none has systematically and individually reviewed deep learning-based MS lesion segmentation methods. Although previous reviews included some methods based on deep learning, others were omitted. In addition, their treatment of deep learning methods did not examine the specific categories of Convolutional Neural Networks (CNNs); methods were reviewed only in generalized terms, such as supervision strategy and input data handling strategy. This paper presents a systematic review of the literature on automated multiple sclerosis lesion segmentation based on deep learning. The deep learning algorithms reviewed are classified into two categories according to their CNN style, and their strengths and weaknesses are analyzed. We give a quantitative comparison of the reviewed methods using two metrics: Dice Similarity Coefficient (DSC) and Positive Predictive Value (PPV). Finally, future directions for the application of deep learning in MS lesion segmentation are discussed.

44 citations
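The two comparison metrics used by the review, DSC and PPV, can be sketched directly from a pair of binary masks. The tiny masks below are illustrative, not data from any reviewed paper.

```python
import numpy as np

def dice(pred, truth):
    """Dice Similarity Coefficient between two binary masks."""
    inter = np.logical_and(pred, truth).sum()
    return 2 * inter / (pred.sum() + truth.sum())

def ppv(pred, truth):
    """Positive Predictive Value: true positives / all predicted positives."""
    tp = np.logical_and(pred, truth).sum()
    return tp / pred.sum()

truth = np.zeros((8, 8), bool); truth[2:6, 2:6] = True   # 16 "lesion" voxels
pred = np.zeros((8, 8), bool); pred[3:7, 3:7] = True     # overlaps 9 of them
print(dice(pred, truth), ppv(pred, truth))   # 0.5625 0.5625
```

DSC rewards overlap symmetrically, while PPV penalizes only false positives, which is why papers often report both.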


Journal ArticleDOI
TL;DR: A novel reconstruction algorithm based on generative adversarial networks with the Wasserstein distance (WGAN) and a temporal-spatial-frequency (TSF-MSE) loss function is introduced; it meets the need for high classification performance at low cost and facilitates the design of high-performance brain-computer interface systems.
Abstract: Applications based on electroencephalography (EEG) signals suffer from the mutual contradiction of high classification performance vs. low cost. The nature of this contradiction makes EEG signal reconstruction with high sampling rates and sensitivity challenging. Conventional reconstruction algorithms lead to loss of the representative details of brain activity and suffer from remaining artifacts because such algorithms only aim to minimize the temporal mean-squared-error (MSE) under generic penalties. Instead of using temporal MSE according to conventional mathematical models, this paper introduces a novel reconstruction algorithm based on generative adversarial networks with the Wasserstein distance (WGAN) and a temporal-spatial-frequency (TSF-MSE) loss function. The carefully designed TSF-MSE-based loss function reconstructs signals by computing the MSE from time-series features, common spatial pattern features, and power spectral density features. Promising reconstruction and classification results are obtained from three motor-related EEG signal datasets with different sampling rates and sensitivities. Our proposed method significantly improves the classification performance of reconstructed EEG signals at the same sensitivity, and improves the average classification accuracy of EEG signal reconstructions across different sensitivities. By introducing the WGAN reconstruction model with the TSF-MSE loss function, the proposed method meets the need for high classification performance at low cost and facilitates the design of high-performance brain-computer interface systems.

34 citations
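The core idea of a composite loss like TSF-MSE is to sum MSE terms computed on different feature views of the same signal. This toy numpy sketch combines only a temporal and a spectral term; the paper's actual loss also includes a common-spatial-pattern term (omitted here), and the weights are arbitrary placeholders.

```python
import numpy as np

def tsf_style_loss(x, y, fs, w_time=1.0, w_freq=1.0):
    """Toy composite loss: temporal MSE plus MSE between power spectra.
    (A stand-in for the paper's TSF-MSE, which adds a CSP feature term.)"""
    mse_time = np.mean((x - y) ** 2)
    px = np.abs(np.fft.rfft(x)) ** 2 / (fs * x.size)
    py = np.abs(np.fft.rfft(y)) ** 2 / (fs * y.size)
    mse_freq = np.mean((px - py) ** 2)
    return w_time * mse_time + w_freq * mse_freq

t = np.linspace(0, 1, 256, endpoint=False)
x = np.sin(2 * np.pi * 10 * t)          # a 10 Hz "EEG" segment
print(tsf_style_loss(x, x, fs=256))     # 0.0 for identical signals
```

Training the generator against such a loss pushes the reconstruction to match the target in several domains at once, not just sample-by-sample.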


Journal ArticleDOI
TL;DR: Given the high inter-operator variance resulting from the manual delineation of reference regions, it is concluded that Magia can be reliably used to process brain PET data.
Abstract: Processing of positron emission tomography (PET) data typically involves manual work, causing inter-operator variance. Here we introduce Magia, a toolbox that enables processing of brain PET data with minimal user intervention. We investigated the accuracy of Magia with four tracers: [11C]carfentanil, [11C]raclopride, [11C]MADAM, and [11C]PiB. We used data from 30 control subjects for each tracer. Five operators manually delineated reference regions for each subject. The data were then processed with Magia using both the manually and the automatically generated reference regions. We first assessed inter-operator variance resulting from the manual delineation of reference regions. We then compared the differences between the manually and automatically produced reference regions and the subsequently obtained binding potentials and standardized uptake value ratios. The results show that manually produced reference regions can be remarkably different from each other, leading to substantial differences also in outcome measures. While the Magia-derived reference regions were anatomically different from the manual ones, Magia produced outcome measures highly consistent with the average of the manually obtained estimates. For [11C]carfentanil and [11C]PiB there was no bias, while for [11C]raclopride and [11C]MADAM Magia produced 3-5% higher binding potentials. Based on these results and considering the high inter-operator variance of the manual method, we conclude that Magia can be reliably used to process brain PET data.

31 citations
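One of the outcome measures compared, the standardized uptake value ratio (SUVR), is simple to sketch once target and reference masks exist; Magia's actual pipeline (and its binding-potential modeling) is far more involved. The image and masks below are synthetic.

```python
import numpy as np

def suvr(image, target_mask, reference_mask):
    """Standardized uptake value ratio: mean target uptake / mean reference uptake."""
    return image[target_mask].mean() / image[reference_mask].mean()

img = np.ones((4, 4))
img[:2, :2] = 2.0                                  # "target" region, doubled uptake
target = np.zeros((4, 4), bool); target[:2, :2] = True
reference = ~target                                # everything else as reference
print(suvr(img, target, reference))                # 2.0
```

Because SUVR divides by the reference-region mean, any inter-operator variability in delineating that region propagates directly into the outcome measure, which is the problem Magia's automation addresses.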


Journal ArticleDOI
TL;DR: This paper introduces and numerically evaluates a new, finite element-based numerical scheme for the KNP-EMI model, capable of efficiently and flexibly handling geometries of arbitrary dimension and arbitrary polynomial degree and studies ephaptic coupling induced in an unmyelinated axon bundle.
Abstract: Mathematical models for excitable cells are commonly based on cable theory, which considers a homogenized domain and spatially constant ionic concentrations. Although such models provide valuable insight, the effect of altered ion concentrations or detailed cell morphology on the electrical potentials cannot be captured. In this paper, we discuss an alternative approach to detailed modeling of electrodiffusion in neural tissue. The mathematical model describes the distribution and evolution of ion concentrations in a geometrically-explicit representation of the intra- and extracellular domains. As a combination of the electroneutral Kirchhoff-Nernst-Planck (KNP) model and the Extracellular-Membrane-Intracellular (EMI) framework, we refer to this model as the KNP-EMI model. Here, we introduce and numerically evaluate a new, finite element-based numerical scheme for the KNP-EMI model, capable of efficiently and flexibly handling geometries of arbitrary dimension and arbitrary polynomial degree. Moreover, we compare the electrical potentials predicted by the KNP-EMI and EMI models. Finally, we study ephaptic coupling induced in an unmyelinated axon bundle and demonstrate how the KNP-EMI framework can give new insights in this setting.

25 citations
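The electrodiffusion being modeled can be illustrated with a drastically simplified sketch: one explicit finite-difference step of the 1D Nernst-Planck equation with zero-flux boundaries. This is not the paper's finite element KNP-EMI scheme; domain, field, and parameters below are toy values chosen for stability.

```python
import numpy as np

def nernst_planck_step(c, phi, D, z, dx, dt, F_RT=1.0):
    """One explicit flux-form step of dc/dt = d/dx( D dc/dx + D z (F/RT) c dphi/dx ),
    with sealed (zero-flux) domain ends."""
    dc = np.diff(c) / dx
    dphi = np.diff(phi) / dx
    c_face = 0.5 * (c[:-1] + c[1:])                 # concentration at interior faces
    flux = -D * dc - D * z * F_RT * c_face * dphi   # diffusive + drift flux
    flux = np.concatenate(([0.0], flux, [0.0]))     # no flux through the boundaries
    return c - dt * np.diff(flux) / dx

x = np.linspace(0.0, 1.0, 101)
c0 = np.exp(-((x - 0.5) / 0.1) ** 2)   # initial ion concentration bump
phi = -0.5 * x                          # constant electric field drives drift
c = c0.copy()
for _ in range(200):
    c = nernst_planck_step(c, phi, D=1e-3, z=1, dx=x[1] - x[0], dt=1e-3)
print(abs(c.sum() - c0.sum()) < 1e-9)   # flux form conserves total ion content
```

The KNP-EMI model couples many such ion species across explicit intra- and extracellular geometries with electroneutrality constraints, which is what requires the finite element machinery described in the paper.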


Journal ArticleDOI
TL;DR: The advantages of both DNNs and HMRFs are combined to train a deep learning model without a large set of manual annotations, yielding a more effective cerebrovascular segmentation method.
Abstract: Automated cerebrovascular segmentation of time-of-flight magnetic resonance angiography (TOF-MRA) images is an important technique, which can be used to diagnose abnormalities in the cerebrovascular system, such as vascular stenosis and malformation. Automated cerebrovascular segmentation can directly show the shape, direction, and distribution of blood vessels. Although deep neural network (DNN)-based cerebrovascular segmentation methods have been shown to yield outstanding performance, they are limited by their dependence on huge training datasets. In this paper, we propose an unsupervised cerebrovascular segmentation method for TOF-MRA images based on a DNN and a hidden Markov random field (HMRF) model. Our DNN-based cerebrovascular segmentation model is trained on the labeling produced by the HMRF rather than on manual annotations. The proposed method was trained and tested using 100 TOF-MRA images. The results were evaluated using the dice similarity coefficient (DSC), which reached a value of 0.79. The trained model achieved better performance than the traditional HMRF-based cerebrovascular segmentation method in binary pixel classification. This paper combines the advantages of both DNN and HMRF to train the model without a large amount of manual annotation, which leads to a more effective cerebrovascular segmentation method.

24 citations


Journal ArticleDOI
TL;DR: In this article, the authors present a set of experiments in order to study three critical pitfalls encountered in the design of MCDS in the literature, namely, the number of simulated particles and time steps, simplifications in the intra-axonal substrate representation, and the impact of the substrate's size on the signal stemming from the extra-axonal space.
Abstract: Monte-Carlo Diffusion Simulations (MCDS) have been used extensively as a ground truth tool for the validation of microstructure models for Diffusion-Weighted MRI. However, methodological pitfalls in the design of the biomimicking geometrical configurations and the simulation parameters can lead to approximation biases. Such pitfalls affect the reliability of the estimated signal, as well as its validity and reproducibility as ground truth data. In this work, we first present a set of experiments in order to study three critical pitfalls encountered in the design of MCDS in the literature, namely, the number of simulated particles and time steps, simplifications in the intra-axonal substrate representation, and the impact of the substrate's size on the signal stemming from the extra-axonal space. The results obtained show important changes in the simulated signals and the recovered microstructure features when changes in those parameters are introduced. Thereupon, driven by our findings from the first studies, we outline a general framework able to generate complex substrates. We show the framework's capability to overcome the aforementioned simplifications by generating a complex crossing substrate, which preserves the volume in the crossing area and achieves a high packing density. The results presented in this work, along with the simulator developed, pave the way toward more realistic and reproducible Monte-Carlo simulations for Diffusion-Weighted MRI.

23 citations
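The first pitfall studied, the number of simulated particles, is easy to demonstrate with a toy Monte-Carlo experiment: in 1D free diffusion, the mean squared displacement (MSD) should converge to the analytic 2*D*t, and too few particles gives a noisy estimate. All values below are illustrative, not the paper's simulator settings.

```python
import numpy as np

def msd_ratio(n_particles, n_steps, D, dt, seed=42):
    """Ratio of the simulated mean squared displacement to the analytic 2*D*t
    for 1D free diffusion (Gaussian random-walk steps)."""
    rng = np.random.default_rng(seed)
    step_sd = np.sqrt(2 * D * dt)                 # per-step displacement std
    final = rng.normal(0.0, step_sd, (n_steps, n_particles)).sum(axis=0)
    return np.mean(final ** 2) / (2 * D * n_steps * dt)

# Few particles -> noisy estimate; many particles -> ratio converges toward 1.
for n in (100, 20_000):
    print(n, msd_ratio(n, n_steps=200, D=2.0e-9, dt=1.0e-5))
```

The relative error of the MSD estimate scales roughly as sqrt(2/N), which is why undersized particle counts bias the "ground truth" signal the paper warns about.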


Journal ArticleDOI
TL;DR: This study built a human-scale spiking network model of the cerebellum, composed of 68 billion spiking neurons, on the K computer and succeeded in reproducing plausible neuronal activity patterns that are observed experimentally in animals.
Abstract: Computer simulation of the human brain at an individual neuron resolution is an ultimate goal of computational neuroscience. The Japanese flagship supercomputer, K, provides unprecedented computational capability toward this goal. The cerebellum contains 80% of the neurons in the whole brain. Therefore, computer simulation of the human-scale cerebellum will be a challenge for modern supercomputers. In this study, we built a human-scale spiking network model of the cerebellum, composed of 68 billion spiking neurons, on the K computer. As a benchmark, we performed a computer simulation of a cerebellum-dependent eye movement task known as the optokinetic response. We succeeded in reproducing plausible neuronal activity patterns that are observed experimentally in animals. The model was built on dedicated neural network simulation software called MONET (Millefeuille-like Organization NEural neTwork), which calculates layered sheet types of neural networks with parallelization by tile partitioning. To examine the scalability of the MONET simulator, we repeatedly performed simulations while changing the number of compute nodes from 1,024 to 82,944 and measured the computational time. We observed a good weak-scaling property for our cerebellar network model. Using all 82,944 nodes, we succeeded in simulating a human-scale cerebellum for the first time, although the simulation ran 578 times slower than real time. These results suggest that the K computer is already capable of creating a simulation of a human-scale cerebellar model with the aid of the MONET simulator.

Journal ArticleDOI
TL;DR: The Nutil software is an open access and stand-alone executable software that enables automated transformations, post-processing, and analyses of 2D section images using multi-core processing (OpenMP).
Abstract: With recent technological advances in microscopy and image acquisition of tissue sections, further developments of tools are required for viewing, transforming, and analyzing the ever-increasing amounts of high-resolution data produced. In the field of neuroscience, histological images of whole rodent brain sections are commonly used for investigating brain connections as well as cellular and molecular organization in the normal and diseased brain, but present a problem for the typical neuroscientist with no or limited programming experience in terms of the pre- and post-processing steps needed for analysis. To meet this need we have designed Nutil, an open access and stand-alone executable software that enables automated transformations, post-processing, and analyses of 2D section images using multi-core processing (OpenMP). The software is written in C++ for efficiency, and provides the user with a clean and easy graphical user interface for specifying the input and output parameters. Nutil currently contains four separate tools: (1) A transformation toolchain named "Transform" that allows for rotation, mirroring and scaling, resizing, and renaming of very large tiled TIFF images. (2) "TiffCreator" enables the generation of tiled TIFF images from other image formats such as PNG and JPEG. (3) A "Resize" tool completes the preprocessing toolset and allows downscaling of PNG and JPEG images with output in PNG format. (4) The fourth tool is a post-processing method called "Quantifier" that enables the quantification of segmented objects in the context of regions defined by brain atlas maps generated with the QuickNII software based on a 3D reference atlas (mouse or rat). The output consists of a set of report files, point cloud coordinate files for visualization in reference atlas space, and reference atlas images superimposed with color-coded objects.
The Nutil software is made available by the Human Brain Project (https://www.humanbrainproject.eu) at https://www.nitrc.org/projects/nutil/.
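The core of a Quantifier-style analysis, counting segmented pixels per atlas region, reduces to a label lookup. This numpy sketch is a conceptual stand-in, not Nutil's C++ implementation; the atlas labels and segmentation below are invented.

```python
import numpy as np

def count_objects_per_region(segmentation, atlas):
    """Count segmented (nonzero) pixels falling in each atlas region label."""
    labels = atlas[segmentation > 0]   # region label under each object pixel
    return {int(k): int(v) for k, v in zip(*np.unique(labels, return_counts=True))}

atlas = np.array([[1, 1, 2, 2],
                  [1, 1, 2, 2],
                  [3, 3, 3, 3]])       # toy 3-region atlas map
seg = np.array([[1, 0, 1, 1],
                [0, 0, 0, 1],
                [1, 1, 0, 0]])         # toy binary segmentation
print(count_objects_per_region(seg, atlas))   # {1: 1, 2: 3, 3: 2}
```

Nutil additionally reports object-level (connected-component) statistics and exports coordinates into reference atlas space.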

Journal ArticleDOI
TL;DR: NeuroRA as discussed by the authors is a toolbox for representational analysis of multi-modal neural data (e.g., EEG, MEG, fNIRS, fMRI, and other sources of neuroelectrophysiological data).
Abstract: In studies of cognitive neuroscience, multivariate pattern analysis (MVPA) is widely used as it offers richer information than traditional univariate analysis. Representational similarity analysis (RSA), as one method of MVPA, has become an effective decoding method based on neural data by calculating the similarity between different representations in the brain under different conditions. Moreover, RSA is suitable for researchers to compare data from different modalities and even bridge data from different species. However, previous toolboxes have been made to fit specific datasets. Here, we develop NeuroRA, a novel and easy-to-use toolbox for representational analysis. Our toolbox aims at conducting cross-modal data analysis from multi-modal neural data (e.g., EEG, MEG, fNIRS, fMRI, and other sources of neuroelectrophysiological data), behavioral data, and computer-simulated data. Compared with previous software packages, our toolbox is more comprehensive and powerful. Using NeuroRA, users can not only calculate the representational dissimilarity matrix (RDM), which reflects the representational similarity among different task conditions, but also conduct a representational analysis among different RDMs to achieve a cross-modal comparison. In addition, users can calculate neural pattern similarity (NPS), spatiotemporal pattern similarity (STPS), and inter-subject correlation (ISC) with this toolbox. NeuroRA also provides users with functions performing statistical analysis, storage, and visualization of results. We introduce the structure, modules, features, and algorithms of NeuroRA in this paper, as well as examples applying the toolbox in published datasets.
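The central object of RSA, the RDM, can be sketched in a few lines: a conditions-by-conditions matrix of dissimilarities between activity patterns. This is a generic illustration using correlation distance (a common choice), not NeuroRA's API; the patterns are random stand-ins for real voxel or channel data.

```python
import numpy as np

def compute_rdm(patterns):
    """RDM from a (conditions x features) activity matrix, using
    correlation distance (1 - Pearson r) between condition patterns."""
    return 1.0 - np.corrcoef(patterns)

rng = np.random.default_rng(0)
patterns = rng.standard_normal((5, 100))   # 5 conditions, 100 voxels/channels
rdm = compute_rdm(patterns)
print(rdm.shape)                           # (5, 5)
print(np.allclose(np.diag(rdm), 0.0))      # each condition is zero-distant from itself
```

Cross-modal comparison then reduces to correlating two RDMs (e.g., one from fMRI, one from a model), which is what makes RSA modality-agnostic.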

Journal ArticleDOI
TL;DR: The present study demonstrates, for the first time, that it is possible to predict reaction times on the basis of EEG data and can serve as a foundation for a system that can, in the future, increase the safety of air traffic.
Abstract: The main hypothesis of this work is that the time of delay in reaction to an unexpected event can be predicted on the basis of the brain activity recorded prior to that event. Such mental activity can be represented by electroencephalographic data. To test this hypothesis, we conducted a novel experiment in which 19 participants each took part in a two-hour session of simulated aircraft flights. An EEG signal processing pipeline is proposed that consists of signal preprocessing, bandpass feature extraction, and regression to predict the reaction times. The prediction algorithms used in this study are the Least Absolute Shrinkage Operator and its Least Angle Regression modification, as well as Kernel Ridge and Radial Basis Support Vector Machine regression. The average Mean Absolute Error obtained across the 19 subjects was 114 ms. The present study demonstrates, for the first time, that it is possible to predict reaction times on the basis of EEG data. The presented solution can serve as a foundation for a system that can, in the future, increase the safety of air traffic.
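Kernel Ridge regression with a linear kernel reduces to ordinary ridge regression, which has a compact closed form and makes the prediction setup easy to sketch. The features and reaction times below are synthetic stand-ins for the paper's band-power features; this is not the authors' pipeline.

```python
import numpy as np

def ridge_fit(X, y, lam=1.0):
    """Closed-form ridge regression: w = (X'X + lam*I)^-1 X'y."""
    n_features = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ y)

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 8))                   # e.g. band-power features per trial
true_w = rng.standard_normal(8)
y = X @ true_w + 0.05 * rng.standard_normal(200)    # "reaction times" (arbitrary units)

w = ridge_fit(X, y, lam=0.1)
mae = np.mean(np.abs(X @ w - y))                    # Mean Absolute Error, as in the paper
print(mae < 0.1)   # True
```

In practice the model would be fit on pre-event EEG features and evaluated on held-out trials, with MAE reported in milliseconds.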

Journal ArticleDOI
TL;DR: A standardized QC protocol for brain registration, with minimal training overhead and no required knowledge of brain anatomy is proposed, to help standardize QC practices across laboratories, improve the consistency of reporting of QC in publications, and open the way for QC assessment of large datasets which could be used to train automated QC systems.
Abstract: Automatic alignment of brain anatomy in a standard space is a key step when processing magnetic resonance imaging for group analyses. Such brain registration is prone to failure, and the results are therefore typically reviewed visually to ensure quality. There is however no standard, validated protocol available to perform this visual quality control (QC). We propose here a standardized QC protocol for brain registration, with minimal training overhead and no required knowledge of brain anatomy. We validated the reliability of three-level QC ratings (OK, Maybe, Fail) across different raters. Nine experts each rated N = 100 validation images, and reached moderate to good agreement (kappa from 0.4 to 0.68, average of 0.54 ± 0.08), with the highest agreement for "Fail" images (Dice from 0.67 to 0.93, average of 0.8 ± 0.06). We then recruited volunteers through the Zooniverse crowdsourcing platform, and extracted a consensus panel rating for both the Zooniverse raters (N = 41) and the expert raters. The agreement between expert and Zooniverse panels was high (kappa = 0.76). Overall, our protocol achieved good reliability when performing a two-level assessment (Fail vs. OK/Maybe) by an individual rater, or when aggregating multiple three-level ratings (OK, Maybe, Fail) from a panel of experts (3 minimum) or non-experts (15 minimum). Our brain registration QC protocol will help standardize QC practices across laboratories, improve the consistency of QC reporting in publications, and open the way for QC assessment of large datasets, which could be used to train automated QC systems.
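The inter-rater agreement statistic reported throughout (Cohen's kappa) corrects observed agreement for chance. A minimal sketch over the protocol's three rating levels, with invented ratings:

```python
import numpy as np

def cohens_kappa(r1, r2, categories=("OK", "Maybe", "Fail")):
    """Cohen's kappa: chance-corrected agreement between two raters."""
    r1, r2 = np.asarray(r1), np.asarray(r2)
    p_obs = np.mean(r1 == r2)                                  # observed agreement
    p_exp = sum(np.mean(r1 == c) * np.mean(r2 == c) for c in categories)
    return (p_obs - p_exp) / (1 - p_exp)

rater_a = ["OK", "OK", "Fail", "Maybe", "Fail", "OK"]
rater_b = ["OK", "Maybe", "Fail", "Maybe", "OK", "OK"]
print(round(cohens_kappa(rater_a, rater_b), 3))   # 0.478
```

Values in the 0.4-0.68 range reported by the study correspond to "moderate to good" agreement on common interpretation scales.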

Journal ArticleDOI
TL;DR: To the best of the authors' knowledge, this is the first study combining both structured clinical data with non-structured NCCT imaging data for the diagnosis of LVO in the acute setting, with superior performance compared to previously reported approaches.
Abstract: Background The detection of large vessel occlusion (LVO) plays a critical role in the diagnosis and treatment of acute ischemic stroke (AIS). Identifying LVO in the pre-hospital setting or early stage of hospitalization would increase the patients' chance of receiving appropriate reperfusion therapy and thereby improve neurological recovery. Methods To enable rapid identification of LVO, we established an automated evaluation system based on all recorded AIS patients in Hong Kong Hospital Authority's hospitals in 2016. The 300 study samples were randomly selected based on a disproportionate sampling plan within the integrated electronic health record system, and then separated into a group of 200 patients for model training, and another group of 100 patients for model performance evaluation. The evaluation system contained three hierarchical models based on patients' demographic data, clinical data, and non-contrast CT (NCCT) scans. The first two levels of modeling utilized structured demographic and clinical data, while the third level involved additional NCCT imaging features obtained from a deep learning model. All three levels of modeling adopted multiple machine learning techniques, including logistic regression, random forest, support vector machine (SVM), and eXtreme Gradient Boosting (XGBoost). The optimal cut-off for the likelihood of LVO was determined by the maximal Youden index based on 10-fold cross-validation. Comparisons of performance on the testing group were made between these techniques. Results Among the 300 patients, there were 160 women and 140 men aged from 27 to 104 years (mean 76.0 with standard deviation 13.4). LVO was present in 130 (43.3%) patients. Together with clinical and imaging features, the XGBoost model at the third level of evaluation achieved the best model performance on the testing group.
The Youden index, accuracy, sensitivity, specificity, F1 score, and area under the curve (AUC) were 0.638, 0.800, 0.953, 0.684, 0.804, and 0.847, respectively. Conclusion To the best of our knowledge, this is the first study combining both structured clinical data with non-structured NCCT imaging data for the diagnosis of LVO in the acute setting, with superior performance compared to previously reported approaches. Our system is capable of automatically providing preliminary evaluations at different pre-hospital stages for potential AIS patients.
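The cutoff-selection rule used here, maximizing the Youden index J = sensitivity + specificity - 1, can be sketched with a brute-force scan over candidate thresholds. The scores and labels below are invented, and the real study selected the cutoff via 10-fold cross-validation rather than on one sample.

```python
import numpy as np

def best_cutoff_youden(scores, labels):
    """Pick the score cutoff maximizing Youden's J = sensitivity + specificity - 1."""
    scores, labels = np.asarray(scores, float), np.asarray(labels, bool)
    best_j, best_c = -1.0, None
    for c in np.unique(scores):
        pred = scores >= c
        sens = np.mean(pred[labels])       # true positive rate
        spec = np.mean(~pred[~labels])     # true negative rate
        j = sens + spec - 1
        if j > best_j:
            best_j, best_c = j, c
    return best_c, best_j

scores = [0.1, 0.2, 0.35, 0.4, 0.6, 0.7, 0.8, 0.9]   # predicted LVO likelihoods
labels = [0, 0, 0, 1, 0, 1, 1, 1]                    # 1 = LVO present
cutoff, j = best_cutoff_youden(scores, labels)
print(cutoff, j)   # 0.4 0.75
```

Unlike accuracy, the Youden index weights sensitivity and specificity equally, which suits triage settings where both missed LVOs and false alarms are costly.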

Journal ArticleDOI
TL;DR: A simple method in AFNI for determining the consistency of left and right within a pair of acquired volumes for a particular subject; the presence of EPI-anatomical inconsistency, for example, is a sign that dataset header information likely requires correction.
Abstract: Knowing the difference between left and right is generally assumed throughout the brain MRI research community. However, we note widespread occurrences of left-right orientation errors in MRI open database repositories where volumes have contained systematic left-right flips between subject EPIs and anatomicals, due to incorrect or missing file header information. Here we present a simple method in AFNI for determining the consistency of left and right within a pair of acquired volumes for a particular subject; the presence of EPI-anatomical inconsistency, for example, is a sign that dataset header information likely requires correction. The method contains both a quantitative evaluation as well as a visualizable verification. We test the functionality using publicly available datasets. Left-right flipping is not immediately obvious in most cases, so we also present visualization methods for looking at this problem (and other potential problems), using examples from both FMRI and DTI datasets.
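The underlying idea of such a consistency check can be sketched with numpy: compare how well the EPI matches the anatomical as-is versus after a left-right flip. This is a conceptual illustration on synthetic volumes, not AFNI's actual implementation.

```python
import numpy as np

def flip_consistency(epi, anat):
    """Correlate the EPI with the anatomical as acquired and after a left-right
    flip; a better match when flipped suggests a header orientation error."""
    def corr(a, b):
        return np.corrcoef(a.ravel(), b.ravel())[0, 1]
    return corr(epi, anat), corr(epi, anat[::-1])   # flip along first (x) axis

rng = np.random.default_rng(0)
anat = rng.random((16, 16, 16))
anat[:8] += 2.0                                 # build in a left-right asymmetry
epi = anat + 0.1 * rng.standard_normal(anat.shape)   # same orientation, noisier

same, flipped = flip_consistency(epi, anat)
print(same > flipped)   # True: the as-acquired orientation matches better
```

The check only works because the brain (like this toy volume) is not perfectly left-right symmetric, which is also why the errors are not immediately obvious by eye.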

Journal ArticleDOI
TL;DR: The Tomographic Quantitative Electroencephalography (qEEGt) toolbox is integrated with the Montreal Neurological Institute (MNI) Neuroinformatics Ecosystem as a Docker container in the Canadian Brain Imaging Research Platform (CBRAIN).
Abstract: The Tomographic Quantitative Electroencephalography (qEEGt) toolbox is integrated with the Montreal Neurological Institute (MNI) Neuroinformatics Ecosystem as a Docker container in the Canadian Brain Imaging Research Platform (CBRAIN). qEEGt produces age-corrected normative Statistical Parametric Maps of EEG log source spectra testing compliance to a normative database. This toolbox was developed at the Cuban Neuroscience Center as part of the first wave of the Cuban Human Brain Mapping Project (CHBMP) and has been validated and used in different health systems for several decades. Incorporation into the MNI ecosystem now provides CBRAIN registered users access to its full functionality and is accompanied by a public release of the source code on GitHub and Zenodo repositories. Among other features are the calculation of EEG scalp spectra, and the estimation of their source spectra using the Variable Resolution Electrical Tomography (VARETA) source imaging. Crucially, this is completed by the evaluation of z spectra by means of the built-in age regression equations obtained from the CHBMP database (ages 5-87) to provide normative Statistical Parametric Mapping of EEG log source spectra. Different scalp and source visualization tools are also provided for evaluation of individual subjects prior to further post-processing. Openly releasing this software in the CBRAIN platform will facilitate the use of standardized qEEGt methods in different research and clinical settings. An updated precis of the methods is provided in Appendix I as a reference for the toolbox. qEEGt/CBRAIN is the first installment of instruments developed by the neuroinformatic platform of the Cuba-Canada-China (CCC) project.

Journal ArticleDOI
TL;DR: The Python package Sammba-MRI (SmAll-MaMmal BrAin MRI in Python) allows flexible and efficient use of existing methods and enables fluent scriptable analysis workflows, from raw data conversion to multimodal processing.
Abstract: Small-mammal neuroimaging offers incredible opportunities to investigate structural and functional aspects of the brain. Many tools have been developed in the last decade to analyse small animal data, but current software is less mature than the tools available for processing human brain data. The Python package Sammba-MRI (SmAll-MaMmal BrAin MRI in Python; http://sammba-mri.github.io) allows flexible and efficient use of existing methods and enables fluent scriptable analysis workflows, from raw data conversion to multimodal processing.

Journal ArticleDOI
TL;DR: An alternative method based on the functional connectivity strength (FCS) derived from an individual channel provided a new approach to identify schizophrenia, improving the objective diagnosis of this mental disorder.
Abstract: Functional near-infrared spectroscopy (fNIRS) has been widely employed in the objective diagnosis of patients with schizophrenia during a verbal fluency task (VFT). Most available methods depend on time-domain features extracted from the data of single or multiple channels. The present study proposed an alternative method based on the functional connectivity strength (FCS) derived from an individual channel. Data measured from 100 patients with schizophrenia and 100 healthy controls were used to train the classifiers and to evaluate their performance. Different classifiers were evaluated, and the support vector machine achieved the best performance. To reduce the dimensional complexity of the feature domain, principal component analysis (PCA) was applied. The classification results using an individual channel, a combination of several channels, and the full ensemble of 52 channels, with and without dimensionality reduction, were compared. FCS from three channels on the medial prefrontal and left ventrolateral prefrontal cortices achieved accuracy as high as 84.67%, sensitivity of 92.00%, and specificity of 70%. The neurophysiological significance of the changes at these regions is consistent with the major syndromes of schizophrenia, providing a new approach to identifying schizophrenia and improving the objective diagnosis of this mental disorder.
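A PCA-plus-SVM pipeline of the kind described can be sketched with scikit-learn. The data below are synthetic stand-ins (the 52-channel dimensionality mirrors the abstract, but the feature values, class separation, and component count are illustrative assumptions, not the study's data or settings).

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic per-channel FCS features: 100 "controls" and 100 "patients",
# 52 channels each, with an artificial group difference for illustration.
rng = np.random.default_rng(42)
X = np.vstack([rng.normal(0.0, 1.0, size=(100, 52)),
               rng.normal(0.8, 1.0, size=(100, 52))])
y = np.array([0] * 100 + [1] * 100)

# Standardize, reduce dimensionality with PCA, then classify with an SVM,
# mirroring the processing chain the abstract describes.
clf = make_pipeline(StandardScaler(), PCA(n_components=10), SVC(kernel="rbf"))
scores = cross_val_score(clf, X, y, cv=5)
```

Cross-validation is essential here: with only 200 subjects and 52 channels, reporting accuracy on the training data would badly overestimate performance.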

Journal ArticleDOI
TL;DR: Findings suggested that a higher number of trials per minute optimizes the ITR of a non-invasive BBI, suggesting that the delays innate to each BCI protocol and CBI stimulation method must also be accounted for.
Abstract: A non-invasive, brain-to-brain interface (BBI) requires precision neuromodulation and high temporal resolution as well as portability to increase accessibility. A BBI is a combination of the brain-computer interface (BCI) and the computer-brain interface (CBI). The optimization of BCI parameters has been extensively researched, but CBI has not. Parameters taken from the BCI and CBI literature were used to simulate a two-class medical monitoring BBI system under a wide range of conditions. BBI function was assessed using the information transfer rate (ITR), measured in bits per trial and bits per minute. The BBI ITR was a function of classifier accuracy, window update rate, system latency, stimulation failure rate (SFR), and timeout threshold. The BCI parameters, including window length, update rate, and classifier accuracy, were kept constant to investigate the effects of varying the CBI parameters, including system latency, SFR, and timeout threshold. Based on passively monitoring BCI parameters, a base ITR of 1 bit/trial was used. The optimal latency was found to be 100 ms or less, with a threshold no more than twice its value. With the optimal latency and timeout parameters, the system was able to maintain near-maximum efficiency, even with a 25% SFR. When the CBI and BCI parameters are compared, the CBI's system latency and timeout threshold should be reflected in the BCI's update rate. This would maximize the number of trials, even at a high SFR. These findings suggested that a higher number of trials per minute optimizes the ITR of a non-invasive BBI. The delays innate to each BCI protocol and CBI stimulation method must also be accounted for. The high latencies in each are the primary constraints of non-invasive BBI for the foreseeable future.
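Bits per trial can be computed with the standard Wolpaw ITR formula, and bits per minute by scaling with the effective trial duration. The latency and failure-rate accounting below is a simplified sketch of that scaling, not the paper's exact simulation model; all parameter values are hypothetical.

```python
import math

def itr_bits_per_trial(n_classes, accuracy):
    """Wolpaw information transfer rate in bits per trial."""
    p = accuracy
    if p >= 1.0:
        return math.log2(n_classes)
    b = math.log2(n_classes) + p * math.log2(p)
    b += (1 - p) * math.log2((1 - p) / (n_classes - 1))
    return b

def itr_bits_per_minute(n_classes, accuracy, trial_s,
                        latency_s=0.1, failure_rate=0.0, retry_s=0.0):
    """Hedged sketch: scale bits/trial by the effective trial rate, charging
    each stimulation failure an extra expected retry delay."""
    eff_trial_s = trial_s + latency_s + failure_rate * retry_s
    return itr_bits_per_trial(n_classes, accuracy) * 60.0 / eff_trial_s

# A perfectly accurate two-class trial carries 1 bit, matching the
# abstract's base rate of 1 bit/trial.
```

This also shows why shorter trials and low latency dominate: bits/minute grows with the trial rate, so fixed per-trial delays (CBI latency, timeouts) directly cap the achievable ITR.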

Journal ArticleDOI
TL;DR: Stimulator improvement and precision positioning solutions promise opportunities for further studies of temporally interfering electrical stimulation.
Abstract: Methods to achieve non-invasive deep brain stimulation via temporally interfering electric fields have been proposed, but the precision of the positioning of the stimulation and the reliability and stability of the outputs require improvement. In this study, a temporally interfering electrical stimulator was developed based on a neuromodulation technique using the interference modulation waveform produced by several high-frequency electrical stimuli to treat neurodegenerative diseases. The device and auxiliary software constitute a non-invasive neuromodulation system. The technical problems related to the multichannel high-precision output of the device were solved by an analog phase accumulator and a special driving circuit to reduce crosstalk. The function of measuring bioimpedance in real time was integrated into the stimulator to improve effectiveness. Finite element simulation and phantom measurements were performed to find the functional relations among the target coordinates, current ratio, and electrode position in the simplified model. Then, an appropriate approach was proposed to find electrode configurations for desired target locations in a detailed and realistic mouse model. A mouse validation experiment was carried out under the guidance of a simulation, and the reliability and positioning accuracy of temporally interfering electric stimulators were verified. Stimulator improvement and precision positioning solutions promise opportunities for further studies of temporally interfering electrical stimulation.
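The principle behind temporal interference can be reproduced numerically: two currents at nearby kilohertz frequencies sum to a high-frequency carrier whose amplitude envelope beats at the difference frequency, which is the component neurons can follow. The frequencies below are illustrative, not the device's settings.

```python
import numpy as np

fs = 100_000                       # sampling rate (Hz)
t = np.arange(0, 1.0, 1 / fs)
f1, f2 = 2000.0, 2010.0            # two high-frequency stimuli

# Superposition of the two stimuli
summed = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)

# By the identity sin(a) + sin(b) = 2*cos((a-b)/2)*sin((a+b)/2), the
# envelope is |2*cos(pi*(f2-f1)*t)|, oscillating at f2 - f1 = 10 Hz.
envelope = np.abs(2 * np.cos(np.pi * (f2 - f1) * t))
```

The spatial targeting problem the abstract addresses is then about where in tissue the two fields overlap with comparable amplitude, since that is where the low-frequency envelope is deepest.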

Journal ArticleDOI
TL;DR: An open-source tool that enables online feedback during electrophysiology experiments and provides a Python interface for the widely used Open Ephys open source data acquisition system, called OPETH, which allows real-time identification of genetically defined neuron types or behaviorally responsive populations.
Abstract: Single cell electrophysiology remains one of the most widely used approaches of systems neuroscience. Decisions made by the experimenter during electrophysiology recording largely determine recording quality, duration of the project and value of the collected data. Therefore, online feedback aiding these decisions can lower monetary and time investment, and substantially speed up projects as well as allow novel studies otherwise not possible due to prohibitively low throughput. Real-time feedback is especially important in studies that involve optogenetic cell type identification by enabling a systematic search for neurons of interest. However, such tools are scarce and limited to costly commercial systems with a high degree of specialization, which has hitherto prevented wide-ranging benefits for the community. To address this, we present an open-source tool that enables online feedback during electrophysiology experiments and provides a Python interface for the widely used Open Ephys open source data acquisition system. Specifically, our software allows flexible online visualization of spike alignment to external events, called the online peri-event time histogram (OPETH). These external events, conveyed by digital logic signals, may indicate photostimulation time stamps for in vivo optogenetic cell type identification or the times of behaviorally relevant events during in vivo behavioral neurophysiology experiments. Therefore, OPETH allows real-time identification of genetically defined neuron types or behaviorally responsive populations. By allowing "hunting" for neurons of interest, OPETH significantly reduces experiment time and thus increases the efficiency of experiments that combine in vivo electrophysiology with behavior or optogenetic tagging of neurons.

Journal ArticleDOI
TL;DR: The results imply that predicting BMI from structural brain scans using DL represents a promising approach to investigate the relationship between brain morphological variability and individual differences in body weight and provide a new scope for future investigations regarding the potential clinical utility of brain-predicted BMI.
Abstract: In recent years, deep learning (DL) has become more widespread in the fields of cognitive and clinical neuroimaging. Using deep neural network models to process neuroimaging data is an efficient method to classify brain disorders and identify individuals who are at increased risk of age-related cognitive decline and neurodegenerative disease. Here we investigated, for the first time, whether structural brain imaging and DL can be used for predicting a physical trait that is of significant clinical relevance-the body mass index (BMI) of the individual. We show that individual BMI can be accurately predicted using a deep convolutional neural network (CNN) and a single structural magnetic resonance imaging (MRI) brain scan along with information about age and sex. Localization maps computed for the CNN highlighted several brain structures that strongly contributed to BMI prediction, including the caudate nucleus and the amygdala. Comparison to the results obtained via a standard automatic brain segmentation method revealed that the CNN-based visualization approach yielded complementary evidence regarding the relationship between brain structure and BMI. Taken together, our results imply that predicting BMI from structural brain scans using DL represents a promising approach to investigate the relationship between brain morphological variability and individual differences in body weight and provide a new scope for future investigations regarding the potential clinical utility of brain-predicted BMI.

Journal ArticleDOI
TL;DR: This paper demonstrates the feasibility of a multicenter animal study, sharing raw data from forty rats and processing pipelines between four imaging centers, and quantitatively reports about the variability observed across two MR data providers and evaluates the influence of image processing steps on the final maps.
Abstract: Similarly to human population imaging, there are several well-founded motivations for animal population imaging, the most notable being the improvement of the validity of statistical results by pooling a sufficient number of animal data provided by different imaging centers. In this paper, we demonstrate the feasibility of such a multicenter animal study, sharing raw data from forty rats and processing pipelines between four imaging centers. As a specific use case, we focused on T1 and T2 mapping of the healthy rat brain at 7T. We quantitatively report the variability observed across two MR data providers and evaluate the influence of image processing steps on the final maps, using three fitting algorithms from three centers. Finally, to derive relaxation times from different brain areas, two multi-atlas segmentation pipelines from different centers were performed on two different platforms. Differences between the two data providers were 2.21% for T1 and 9.52% for T2. Differences between processing pipelines were 1.04% for T1 and 3.33% for T2. These maps, obtained in healthy conditions, may be used in the future as reference when exploring alterations in animal models of pathology.

Journal ArticleDOI
TL;DR: This work integrates an existing framework for continuous interactions with a recently proposed directed communication scheme for spikes, allowing, for the first time, the efficient exploration of the interactions of chemical and electrical coupling in large-scale neuronal networks models with natural synapse density distributed across thousands of compute nodes.
Abstract: Investigating the dynamics and function of large-scale spiking neuronal networks with realistic numbers of synapses is made possible today by state-of-the-art simulation code that scales to the largest contemporary supercomputers. However, simulations that involve electrical interactions, also called gap junctions, besides chemical synapses scale only poorly due to a communication scheme that collects global data on each compute node. In comparison to chemical synapses, gap junctions are far less abundant. To improve scalability we exploit this sparsity by integrating an existing framework for continuous interactions with a recently proposed directed communication scheme for spikes. Using a reference implementation in the NEST simulator we demonstrate excellent scalability of the integrated framework, accelerating large-scale simulations with gap junctions by more than an order of magnitude. This allows, for the first time, the efficient exploration of the interactions of chemical and electrical coupling in large-scale neuronal network models with natural synapse density distributed across thousands of compute nodes.

Journal ArticleDOI
TL;DR: This article unifies neural modeling results that illustrate several basic design principles and mechanisms used by advanced brains to develop cortical maps with multiple psychological functions and concerns the role of Adaptive Resonance Theory top-down matching and attentional circuits in the dynamic stabilization of early development and adult learning.
Abstract: This article unifies neural modeling results that illustrate several basic design principles and mechanisms that are used by advanced brains to develop cortical maps with multiple psychological functions. One principle concerns how brains use a strip map that simultaneously enables one feature to be represented throughout its extent, as well as an ordered array of another feature at different positions of the strip. Strip maps include circuits to represent ocular dominance and orientation columns, place-value numbers, auditory streams, speaker-normalized speech, and cognitive working memories that can code repeated items. A second principle concerns how feature detectors for multiple functions develop in topographic maps, including maps for optic flow navigation, reinforcement learning, motion perception, and category learning at multiple organizational levels. A third principle concerns how brains exploit a spatial gradient of cells that respond at an ordered sequence of different rates. Such a rate gradient is found along the dorsoventral axis of the entorhinal cortex, whose lateral branch controls the development of time cells, and whose medial branch controls the development of grid cells. Populations of time cells can be used to learn how to adaptively time behaviors for which a time interval of hundreds of milliseconds, or several seconds, must be bridged, as occurs during trace conditioning. Populations of grid cells can be used to learn hippocampal place cells that represent the large spaces in which animals navigate. A fourth principle concerns how and why all neocortical circuits are organized into layers, and how functionally distinct columns develop in these circuits to enable map development. A final principle concerns the role of Adaptive Resonance Theory top-down matching and attentional circuits in the dynamic stabilization of early development and adult learning. Cortical maps are modeled in visual, auditory, temporal, parietal, prefrontal, entorhinal, and hippocampal cortices.

Journal ArticleDOI
TL;DR: A new Smart Region Growing algorithm (SmRG) is presented for the segmentation of single neurons in their intricate 3D arrangement within the brain, enabling an accurate reconstruction of complex 3D cellular structures from high-resolution images of neural tissue.
Abstract: Accurately digitizing the brain at the micro-scale is crucial for investigating brain structure-function relationships and documenting morphological alterations due to neuropathies. Here we present a new Smart Region Growing algorithm (SmRG) for the segmentation of single neurons in their intricate 3D arrangement within the brain. Its Region Growing procedure is based on a homogeneity predicate determined by describing the pixel intensity statistics of confocal acquisitions with a mixture model, enabling an accurate reconstruction of complex 3D cellular structures from high-resolution images of neural tissue. The algorithm's outcome is a 3D matrix of logical values identifying the voxels belonging to the segmented structure, thus providing additional useful volumetric information on neurons. To highlight the algorithm's full potential, we compared its performance in terms of accuracy, reproducibility, precision and robustness of 3D neuron reconstructions based on microscopic data from different brain locations and imaging protocols against both manual and state-of-the-art reconstruction tools.
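Generic region growing, the procedure SmRG builds on, can be sketched as a breadth-first flood fill gated by a homogeneity predicate. The example below is a minimal illustration of that idea only: SmRG's actual predicate is a statistic derived from a mixture model of pixel intensities, whereas this sketch uses a simple threshold.

```python
from collections import deque

import numpy as np

def region_grow_3d(vol, seed, predicate):
    """Grow a region of 6-connected voxels from `seed`, admitting each
    neighbor that satisfies `predicate(intensity)`. Returns a boolean mask."""
    mask = np.zeros(vol.shape, dtype=bool)
    if not predicate(vol[seed]):
        return mask
    mask[seed] = True
    queue = deque([seed])
    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
               (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    while queue:
        z, y, x = queue.popleft()
        for dz, dy, dx in offsets:
            n = (z + dz, y + dy, x + dx)
            if not all(0 <= c < s for c, s in zip(n, vol.shape)):
                continue                      # outside the volume
            if mask[n] or not predicate(vol[n]):
                continue                      # visited or inhomogeneous
            mask[n] = True
            queue.append(n)
    return mask

# Toy volume: a bright 3x3x3 cube in a dark background
vol = np.zeros((10, 10, 10))
vol[3:6, 3:6, 3:6] = 1.0
mask = region_grow_3d(vol, seed=(4, 4, 4), predicate=lambda v: v > 0.5)
```

The boolean-mask output corresponds to the volumetric representation the abstract mentions: unlike a skeletonized tracing, it retains every voxel of the segmented structure.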

Journal ArticleDOI
TL;DR: A novel semi-supervised learning algorithm to boost the performance of random forest under limited labeled data by exploiting the local structure of unlabeled data and replacing it with a graph-embedded entropy which is more reliable for insufficient labeled data scenario.
Abstract: One major challenge in medical imaging analysis is the lack of labels and annotations, which usually require medical knowledge and training. This issue is particularly serious in brain image analysis, such as the analysis of the retinal vasculature, which directly reflects the vascular condition of the Central Nervous System (CNS). In this paper, we present a novel semi-supervised learning algorithm that boosts the performance of random forests under limited labeled data by exploiting the local structure of unlabeled data. We identify the key bottleneck of the random forest to be the information-gain calculation and replace it with a graph-embedded entropy, which is more reliable when labeled data are insufficient. By properly modifying the training process of the standard random forest, our algorithm significantly improves performance while preserving the virtues of random forests, such as low computational burden and robustness to over-fitting. Our method shows superior performance on both medical imaging analysis and machine learning benchmarks.
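The idea of a graph-embedded entropy can be sketched as follows: instead of computing entropy from the few labeled points alone, let unlabeled points contribute soft votes through their nearest labeled neighbors, then take the entropy of the smoothed class distribution. This is a hedged illustration of the concept, not the paper's exact formulation; the k-NN voting scheme and all data are assumptions for demonstration.

```python
import numpy as np

def graph_embedded_entropy(X_labeled, y, X_unlabeled, k=3):
    """Entropy of a class distribution smoothed over labeled and unlabeled
    points: labeled points vote with one-hot labels, unlabeled points with
    the average label of their k nearest labeled neighbors (a sketch of
    the graph-embedded idea, not the paper's algorithm)."""
    classes = np.unique(y)
    n_all = len(X_labeled) + len(X_unlabeled)
    soft = np.zeros((n_all, len(classes)))
    for i, c in enumerate(classes):            # one-hot rows for labeled data
        soft[:len(y), i] = (y == c)
    for j, x in enumerate(X_unlabeled, start=len(y)):
        d = np.linalg.norm(X_labeled - x, axis=1)
        nn = np.argsort(d)[:k]                 # k nearest labeled neighbors
        for i, c in enumerate(classes):
            soft[j, i] = np.mean(y[nn] == c)
    p = soft.mean(axis=0)                      # smoothed class proportions
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# Toy two-cluster data: two labeled points per class, two unlabeled points
X_l = np.array([[0., 0.], [0., 1.], [10., 10.], [10., 11.]])
y_l = np.array([0, 0, 1, 1])
X_u = np.array([[0.5, 0.5], [10.5, 10.5]])
ent = graph_embedded_entropy(X_l, y_l, X_u, k=3)
```

In a tree-induction setting, such a smoothed entropy would replace the information-gain term when scoring candidate splits, so the unlabeled data's geometry influences where the trees cut.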