Journal ISSN: 1662-453X

Frontiers in Neuroscience 

Frontiers Media
About: Frontiers in Neuroscience is an open-access academic journal published by Frontiers Media. The journal publishes primarily in the areas of population studies and resting-state fMRI, and has the ISSN identifier 1662-453X. Over its lifetime, 9,242 publications have appeared in the journal, receiving 268,946 citations.


Papers
Journal Article
TL;DR: MNE-Python is an open-source software package providing state-of-the-art algorithms implemented in Python that cover data preprocessing, source localization, statistical analysis, and estimation of functional connectivity between distributed brain regions.
Abstract: Magnetoencephalography and electroencephalography (M/EEG) measure the weak electromagnetic signals generated by neuronal activity in the brain. Using these signals to characterize and locate neural activation in the brain is a challenge that requires expertise in physics, signal processing, statistics, and numerical methods. As part of the MNE software suite, MNE-Python is an open-source software package that addresses this challenge by providing state-of-the-art algorithms implemented in Python that cover multiple methods of data preprocessing, source localization, statistical analysis, and estimation of functional connectivity between distributed brain regions. All algorithms and utility functions are implemented in a consistent manner with well-documented interfaces, enabling users to create M/EEG data analysis pipelines by writing Python scripts. Moreover, MNE-Python is tightly integrated with the core Python libraries for scientific computation (NumPy, SciPy) and visualization (matplotlib and Mayavi), as well as the broader neuroimaging ecosystem in Python via the Nibabel package. The code is provided under the new BSD license allowing code reuse, even in commercial products. Although MNE-Python has only been under heavy development for a couple of years, it has rapidly evolved with expanded analysis capabilities and pedagogical tutorials because multiple labs have collaborated during code development to help share best practices. MNE-Python also gives easy access to preprocessed datasets, helping users to get started quickly and facilitating reproducibility of methods by other researchers. Full documentation, including dozens of examples, is available at http://martinos.org/mne.
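As an illustration of the scripting style the abstract describes, here is a minimal sketch of an M/EEG pipeline using the public MNE-Python API and its bundled "sample" dataset; exact paths, event codes, and the return type of data_path() follow the MNE tutorials and may differ slightly across versions:

```python
import mne
from mne.datasets import sample

# Download (if needed) and locate the bundled "sample" dataset.
data_path = sample.data_path()
raw_file = data_path / "MEG" / "sample" / "sample_audvis_raw.fif"

raw = mne.io.read_raw_fif(raw_file, preload=True)
raw.filter(l_freq=1.0, h_freq=40.0)  # band-pass to suppress drifts and line noise

# Epoch around left-auditory stimuli and average to an evoked response.
events = mne.find_events(raw, stim_channel="STI 014")
epochs = mne.Epochs(raw, events, event_id={"auditory/left": 1},
                    tmin=-0.2, tmax=0.5, baseline=(None, 0), preload=True)
evoked = epochs.average()
evoked.plot()
```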

1,723 citations

Journal Article
TL;DR: Describes the most common building blocks and techniques used to implement silicon neuron circuits and surveys a wide range of neuromorphic silicon neurons implementing different computational models, from biophysically realistic, conductance-based Hodgkin–Huxley models to bi-dimensional generalized adaptive integrate-and-fire models.
Abstract: Hardware implementations of spiking neurons can be extremely useful for a large variety of applications, ranging from high-speed modeling of large-scale neural systems to real-time behaving systems, to bidirectional brain-machine interfaces. The specific circuit solutions used to implement silicon neurons depend on the application requirements. In this paper we describe the most common building blocks and techniques used to implement these circuits, and present an overview of a wide range of neuromorphic silicon neurons, which implement different computational models, ranging from biophysically realistic and conductance-based Hodgkin-Huxley models to bi-dimensional generalized adaptive integrate and fire models. We compare the different design methodologies used for each silicon neuron design described, and demonstrate their features with experimental results, measured from a wide range of fabricated VLSI chips.
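For readers unfamiliar with the model families the survey covers, here is a minimal NumPy sketch of the simplest of them, a leaky integrate-and-fire neuron; all parameter values are illustrative, not taken from the paper:

```python
import numpy as np

def simulate_lif(i_input, dt=1e-4, tau_m=20e-3, r_m=10e6,
                 v_rest=-70e-3, v_thresh=-55e-3, v_reset=-75e-3):
    """Euler-integrate tau_m * dV/dt = -(V - V_rest) + R_m * I,
    emitting a spike and resetting whenever V crosses threshold."""
    v = v_rest
    trace, spike_times = [], []
    for step, i in enumerate(i_input):
        v += dt / tau_m * (-(v - v_rest) + r_m * i)
        if v >= v_thresh:
            spike_times.append(step * dt)
            v = v_reset
        trace.append(v)
    return np.array(trace), spike_times

# A constant 2 nA input current drives regular spiking.
trace, spikes = simulate_lif(np.full(5000, 2e-9))
print(f"{len(spikes)} spikes in 0.5 s")
```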

1,559 citations

Journal Article
TL;DR: Reviews mathematical concepts for the quantitative analysis of (hierarchical) modularity in brain networks and summarizes recent work investigating the modularity of structural and functional brain networks derived from human neuroimaging data.
Abstract: Brain networks are increasingly understood as one of a large class of information processing systems that share important organizational principles in common, including the property of a modular community structure. A module is topologically defined as a subset of highly inter-connected nodes which are relatively sparsely connected to nodes in other modules. In brain networks, topological modules are often made up of anatomically neighboring and/or functionally related cortical regions, and inter-modular connections tend to be relatively long distance. Moreover, brain networks and many other complex systems demonstrate the property of hierarchical modularity, or modularity on several topological scales: within each module there will be a set of sub-modules, and within each sub-module a set of sub-sub-modules, etc. There are several general advantages to modular and hierarchically modular network organization, including greater robustness, adaptivity, and evolvability of network function. In this context, we review some of the mathematical concepts available for quantitative analysis of (hierarchical) modularity in brain networks and we summarize some of the recent work investigating modularity of structural and functional brain networks derived from analysis of human neuroimaging data.
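A minimal sketch of the kind of modularity analysis the review covers, using NetworkX's implementation of Newman's modularity Q on a toy graph that stands in for a thresholded brain connectivity matrix:

```python
import networkx as nx
from networkx.algorithms import community

# Toy network standing in for a thresholded structural/functional connectome.
G = nx.karate_club_graph()

# Greedy agglomerative optimization of Newman's modularity Q.
modules = community.greedy_modularity_communities(G)
q = community.modularity(G, modules)
print(f"{len(modules)} modules, Q = {q:.3f}")

# Hierarchical modularity: recurse into one module's induced subgraph.
sub = G.subgraph(modules[0])
sub_modules = community.greedy_modularity_communities(sub)
print(f"module 0 splits into {len(sub_modules)} sub-modules")
```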

1,042 citations

Journal Article
TL;DR: The FBCSP algorithm performed best among the algorithms submitted to BCI Competition IV, yielding mean kappa values of 0.569 and 0.600 across all subjects on Datasets 2a and 2b, respectively.
Abstract: The Common Spatial Pattern (CSP) algorithm is an effective and popular method for classifying 2-class motor imagery electroencephalogram (EEG) data, but its effectiveness depends on the subject-specific frequency band. This paper presents the Filter Bank Common Spatial Pattern (FBCSP) algorithm to optimize the subject-specific frequency band for CSP on Datasets 2a and 2b of the Brain-Computer Interface (BCI) Competition IV. Dataset 2a comprised 4 classes of 22-channel EEG data from 9 subjects, and Dataset 2b comprised 2 classes of EEG data from 3 bipolar channels in 9 subjects. Multi-class extensions to FBCSP are also presented to handle the 4-class EEG data in Dataset 2a, namely, the Divide-and-Conquer (DC), Pair-Wise (PW), and One-Versus-Rest (OVR) approaches. Two feature selection algorithms are also presented to select discriminative CSP features on Dataset 2b, namely, the Mutual Information-based Best Individual Feature (MIBIF) algorithm and the Mutual Information-based Rough Set Reduction (MIRSR) algorithm. Single-trial classification accuracies are reported using 10×10-fold cross-validation on the training data and session-to-session transfer on the evaluation data from both datasets. Disclosure of the test data labels after the BCI Competition IV showed that the FBCSP algorithm performed best among the submitted algorithms, yielding mean kappa values of 0.569 and 0.600 across all subjects in Datasets 2a and 2b respectively.
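A minimal sketch of the core CSP step underlying FBCSP (the spatial-filter computation only, not the filter bank, feature selection, or classifier), via a generalized eigendecomposition of the two class covariance matrices; the array shapes are assumptions for illustration:

```python
import numpy as np
from scipy.linalg import eigh

def csp_filters(x_a, x_b, n_filters=4):
    """x_a, x_b: (n_trials, n_channels, n_samples) band-passed EEG for two classes.
    Returns (n_filters, n_channels) spatial filters that maximize the
    variance ratio between the classes."""
    cov_a = np.mean([np.cov(trial) for trial in x_a], axis=0)
    cov_b = np.mean([np.cov(trial) for trial in x_b], axis=0)
    # Generalized eigenproblem: cov_a w = lambda (cov_a + cov_b) w.
    # scipy's eigh returns eigenvalues in ascending order.
    eigvals, eigvecs = eigh(cov_a, cov_a + cov_b)
    # Filters from both ends of the spectrum discriminate best:
    # smallest eigenvalues favor class B, largest favor class A.
    picks = np.concatenate([np.arange(n_filters // 2),
                            np.arange(len(eigvals) - n_filters // 2, len(eigvals))])
    return eigvecs[:, picks].T
```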

862 citations

Journal Article
TL;DR: In this paper, the membrane potentials of spiking neurons are treated as differentiable signals, where discontinuities at spike times are considered as noise, which enables an error backpropagation mechanism for deep spiking neural networks.
Abstract: Deep spiking neural networks (SNNs) hold the potential for improving the latency and energy efficiency of deep neural networks through data-driven event-based computation. However, training such networks is difficult due to the non-differentiable nature of spike events. In this paper, we introduce a novel technique, which treats the membrane potentials of spiking neurons as differentiable signals, where discontinuities at spike times are considered as noise. This enables an error backpropagation mechanism for deep SNNs that follows the same principles as in conventional deep networks, but works directly on spike signals and membrane potentials. Compared with previous methods relying on indirect training and conversion, our technique has the potential to capture the statistics of spikes more precisely. We evaluate the proposed framework on artificially generated events from the original MNIST handwritten digit benchmark, and also on the N-MNIST benchmark recorded with an event-based dynamic vision sensor, in which the proposed method reduces the error rate by a factor of more than three compared to the best previous SNN, and also achieves a higher accuracy than a conventional convolutional neural network (CNN) trained and tested on the same data. We demonstrate in the context of the MNIST task that thanks to their event-driven operation, deep SNNs (both fully connected and convolutional) trained with our method achieve accuracy equivalent to conventional neural networks. In the N-MNIST example, equivalent accuracy is achieved with about five times fewer computational operations.
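To make the "spikes as differentiable signals" idea concrete, here is a minimal NumPy sketch of a surrogate-derivative threshold: a hard Heaviside forward pass paired with a smooth stand-in gradient. This illustrates the general family of techniques, not the paper's exact formulation:

```python
import numpy as np

V_THRESH = 1.0  # illustrative threshold, not a parameter from the paper

def spike_forward(v):
    """Forward pass: non-differentiable Heaviside step at threshold."""
    return (v >= V_THRESH).astype(float)

def spike_backward(v, beta=5.0):
    """Backward pass: derivative of a steep sigmoid used as a smooth
    surrogate for the Heaviside's delta-function derivative."""
    s = 1.0 / (1.0 + np.exp(-beta * (v - V_THRESH)))
    return beta * s * (1.0 - s)

v = np.linspace(0.0, 2.0, 5)
print(spike_forward(v))   # [0. 0. 1. 1. 1.]
print(spike_backward(v))  # gradient mass concentrated near the threshold
```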

818 citations

Performance Metrics

No. of papers from the journal in previous years:

Year    Papers
2023    95
2022    16
2021    1,642
2020    1,413
2019    1,438
2018    1,017