
Showing papers in "Frontiers in Neuroinformatics in 2015"


Journal ArticleDOI
TL;DR: It is demonstrated that ordinary average referencing improves the signal-to-noise ratio, but that noisy channels can contaminate the results and a multi-stage robust referencing scheme is introduced to deal with the noisy channel-reference interaction.
Abstract: The technology to collect brain imaging and physiological measures has become portable and ubiquitous, opening the possibility of large-scale analysis of real-world human imaging. By its nature, such data is large and complex, making automated processing essential. This paper shows how lack of attention to the very early stages of an EEG preprocessing pipeline can reduce the signal-to-noise ratio and introduce unwanted artifacts into the data, particularly for computations done in single precision. We demonstrate that ordinary average referencing improves the signal-to-noise ratio, but that noisy channels can contaminate the results. We also show that identification of noisy channels depends on the reference and examine the complex interaction of filtering, noisy channel identification, and referencing. We introduce a multi-stage robust referencing scheme to deal with the noisy channel-reference interaction. We propose a standardized early-stage EEG processing pipeline (PREP) and discuss the application of the pipeline to more than 600 EEG datasets. The pipeline includes an automatically generated report for each dataset processed. Users can download the PREP pipeline as a freely available MATLAB library from http://eegstudy.org/prepcode/.

701 citations
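The robust referencing idea described above — compute an average reference, flag channels whose residual amplitude is an outlier, and re-estimate the reference without them — can be sketched in a few lines. The following is an illustrative NumPy sketch of the general scheme, not the actual MATLAB PREP implementation; the MAD-based z-score and the threshold value are assumptions.

```python
import numpy as np

def robust_average_reference(data, z_thresh=5.0, max_iter=10):
    """Iteratively re-reference to the mean of non-noisy channels.

    data: (n_channels, n_samples) EEG. A channel is flagged as noisy when
    its residual amplitude deviates from the median channel by more than
    z_thresh robust z-units; the reference is then recomputed without it.
    """
    good = np.ones(data.shape[0], dtype=bool)
    for _ in range(max_iter):
        ref = data[good].mean(axis=0)           # average of good channels only
        amp = np.std(data - ref, axis=1)        # per-channel residual amplitude
        med = np.median(amp[good])
        mad = np.median(np.abs(amp[good] - med))
        z = (amp - med) / (1.4826 * mad + 1e-12)  # robust z-score via MAD
        new_good = z < z_thresh
        if np.array_equal(new_good, good):
            break
        good = new_good
    return data - ref, good
```

The iteration captures the noisy channel–reference interaction the paper highlights: a bad channel pulls the ordinary average reference toward itself, so noisy-channel detection and referencing must be solved jointly.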


Journal ArticleDOI
TL;DR: NeuroVault as discussed by the authors is a web-based repository that allows researchers to store, share, visualize, and decode statistical maps of the human brain without the need to install additional software.
Abstract: Here we present NeuroVault — a web-based repository that allows researchers to store, share, visualize, and decode statistical maps of the human brain. NeuroVault is easy to use and employs modern web technologies to provide informative visualization of data without the need to install additional software. In addition, it leverages the power of the Neurosynth database to provide cognitive decoding of deposited maps. The data are exposed through a public REST API enabling other services and tools to take advantage of it. NeuroVault is a new resource for researchers interested in conducting meta- and coactivation analyses.

495 citations
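A REST API like NeuroVault's returns deposited maps as paginated JSON that any client can consume. The snippet below parses a response of that general shape; the field names are illustrative stand-ins, not NeuroVault's exact schema.

```python
import json

# A paginated response in the common REST style the paper describes.
# Field names here are illustrative, not NeuroVault's actual schema.
sample = json.loads("""
{
  "count": 2,
  "next": null,
  "results": [
    {"id": 101, "name": "motor task > rest", "map_type": "T"},
    {"id": 102, "name": "faces > houses",    "map_type": "Z"}
  ]
}
""")

def map_names(page):
    """Collect statistical-map names from one page of results."""
    return [entry["name"] for entry in page["results"]]

names = map_names(sample)
```

In practice a client would fetch such pages over HTTP and follow the `next` link until it is null.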


Journal ArticleDOI
TL;DR: The Decoding Toolbox (TDT) is introduced which represents a user-friendly, powerful and flexible package for multivariate analysis of functional brain imaging data and offers a promising option for researchers who want to employ multivariate analyses of brain activity patterns.
Abstract: The multivariate analysis of brain signals has recently sparked a great amount of interest, yet accessible and versatile tools to carry out decoding analyses are scarce. Here we introduce The Decoding Toolbox (TDT) which represents a user-friendly, powerful and flexible package for multivariate analysis of functional brain imaging data. TDT is written in Matlab and equipped with an interface to the widely used brain data analysis package SPM. The toolbox allows running fast whole-brain analyses, region-of-interest analyses and searchlight analyses, using machine learning classifiers, pattern correlation analysis, or representational similarity analysis. It offers automatic creation and visualization of diverse cross-validation schemes, feature scaling, nested parameter selection, a variety of feature selection methods, multiclass capabilities, and pattern reconstruction from classifier weights. While basic users can implement a generic analysis in one line of code, advanced users can extend the toolbox to their needs or exploit the structure to combine it with external high-performance classification toolboxes. The toolbox comes with an example data set which can be used to try out the various analysis methods. Taken together, TDT offers a promising option for researchers who want to employ multivariate analyses of brain activity patterns.

348 citations
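A minimal decoding analysis of the kind TDT automates — leave-one-run-out cross-validation over multivoxel patterns — can be sketched with a nearest-centroid classifier. This is a simplified stand-in for the machine-learning classifiers TDT wraps, not the toolbox's own code.

```python
import numpy as np

def leave_one_run_out_accuracy(X, y, runs):
    """Cross-validated decoding accuracy with a nearest-centroid classifier.

    X: (n_samples, n_voxels) patterns; y: condition labels;
    runs: run index per sample. Each run is held out once while
    class centroids are estimated from the remaining runs.
    """
    correct = 0
    for run in np.unique(runs):
        train, test = runs != run, runs == run
        classes = np.unique(y[train])
        centroids = np.stack([X[train][y[train] == c].mean(axis=0)
                              for c in classes])
        # assign each held-out pattern to the nearest class centroid
        d = ((X[test][:, None, :] - centroids[None]) ** 2).sum(axis=2)
        pred = classes[d.argmin(axis=1)]
        correct += (pred == y[test]).sum()
    return correct / len(y)
```

Splitting by run rather than by sample respects the temporal structure of fMRI data, which is why toolboxes like TDT build cross-validation schemes around runs.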


Journal ArticleDOI
TL;DR: Bonsai is described, a modular, high-performance, open-source visual programming framework for the acquisition and online processing of data streams and demonstrated how it allows for the rapid and flexible prototyping of integrated experimental designs in neuroscience.
Abstract: The design of modern scientific experiments requires the control and monitoring of many different data streams. However, the serial execution of programming instructions in a computer makes it a challenge to develop software that can deal with the asynchronous, parallel nature of scientific data. Here we present Bonsai, a modular, high-performance, open-source visual programming framework for the acquisition and online processing of data streams. We describe Bonsai's core principles and architecture and demonstrate how it allows for the rapid and flexible prototyping of integrated experimental designs in neuroscience. We specifically highlight some applications that require the combination of many different hardware and software components, including video tracking of behavior, electrophysiology and closed-loop control of stimulation.

343 citations
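Bonsai's dataflow model — operators chained over asynchronous event streams — can be imitated with Python generators. This sketch is only an analogy for the reactive pattern; Bonsai itself is a visual language, and the operator names below are invented.

```python
def observe(source, *operators):
    """Chain stream operators over an event source, reactive-style."""
    stream = iter(source)
    for op in operators:
        stream = op(stream)
    return list(stream)

def moving_average(n):
    """Smooth a stream with a sliding window of the last n samples."""
    def op(stream):
        window = []
        for x in stream:
            window.append(x)
            window[:] = window[-n:]
            yield sum(window) / len(window)
    return op

def threshold(level):
    """Turn a numeric stream into boolean events (e.g. for closed-loop triggers)."""
    def op(stream):
        for x in stream:
            yield x > level
    return op

events = observe([0, 0, 9, 9, 0, 0], moving_average(2), threshold(4))
```

Each operator consumes events as they arrive and emits its own, which is the same composition idea Bonsai applies to video frames, electrophysiology samples, and stimulation commands.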


Journal ArticleDOI
TL;DR: Pycortex as mentioned in this paper is a toolbox for interactive surface mapping and visualization of fMRI images, which can be inflated and flattened interactively, aiding interpretation of the correspondence between the anatomical surface and the flattened cortical sheet.
Abstract: Surface visualizations of fMRI provide a comprehensive view of cortical activity. However, surface visualizations are difficult to generate and most common visualization techniques rely on unnecessary interpolation which limits the fidelity of the resulting maps. Furthermore, it is difficult to understand the relationship between flattened cortical surfaces and the underlying 3D anatomy using currently available tools. To address these problems we have developed pycortex, a Python toolbox for interactive surface mapping and visualization. Pycortex exploits the power of modern graphics cards to sample volumetric data on a per-pixel basis, allowing dense and accurate mapping of the voxel grid across the surface. Anatomical and functional information can be projected onto the cortical surface. The surface can be inflated and flattened interactively, aiding interpretation of the correspondence between the anatomical surface and the flattened cortical sheet. The output of pycortex can be viewed using WebGL, a technology compatible with modern web browsers. This allows complex fMRI surface maps to be distributed broadly online without requiring installation of complex software.

147 citations
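The mapping step pycortex accelerates on the GPU — sampling volumetric data at surface vertex positions — reduces, in its simplest nearest-neighbor form, to an index lookup. A NumPy sketch of that idea (not pycortex's per-pixel GPU code; the function name and affine handling are illustrative):

```python
import numpy as np

def sample_volume_at_vertices(volume, vertices, affine_inv=None):
    """Nearest-neighbor sampling of a 3D volume at surface vertex coordinates.

    volume: (X, Y, Z) array; vertices: (n, 3) coordinates in voxel space
    (pass an inverse affine to map scanner coordinates to voxel space first).
    """
    if affine_inv is not None:
        vertices = vertices @ affine_inv[:3, :3].T + affine_inv[:3, 3]
    idx = np.rint(vertices).astype(int)
    # clamp to the volume bounds so edge vertices remain valid
    for axis, size in enumerate(volume.shape):
        idx[:, axis] = np.clip(idx[:, axis], 0, size - 1)
    return volume[idx[:, 0], idx[:, 1], idx[:, 2]]
```

Sampling per vertex (or, as pycortex does, per screen pixel) avoids the interpolation passes that the abstract identifies as a source of lost fidelity.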


Journal ArticleDOI
TL;DR: An open-source framework, automatic analysis (aa), that can reduce the amount of time neuroimaging laboratories spend performing analyses and reduce errors, expanding the range of scientific questions it is practical to address.
Abstract: Recent years have seen neuroimaging data sets becoming richer, with larger cohorts of participants, a greater variety of acquisition techniques, and increasingly complex analyses. These advances have made data analysis pipelines complicated to set up and run (increasing the risk of human error) and time consuming to execute (restricting what analyses are attempted). Here we present an open-source framework, automatic analysis (aa), to address these concerns. Human efficiency is increased by making code modular and reusable, and managing its execution with a processing engine that tracks what has been completed and what needs to be (re)done. Analysis is accelerated by optional parallel processing of independent tasks on cluster or cloud computing resources. A pipeline comprises a series of modules that each perform a specific task. The processing engine keeps track of the data, calculating a map of upstream and downstream dependencies for each module. Existing modules are available for many analysis tasks, such as SPM-based fMRI preprocessing, individual and group level statistics, voxel-based morphometry, tractography, and multi-voxel pattern analyses (MVPA). However, aa also allows for full customization, and encourages efficient management of code: new modules may be written with only a small code overhead. aa has been used by more than 50 researchers in hundreds of neuroimaging studies comprising thousands of subjects. It has been found to be robust, fast, and efficient for simple single-subject studies up to multimodal pipelines on hundreds of subjects. It is attractive to both novice and experienced users.

144 citations
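The processing engine's bookkeeping — walk the dependency map, skip modules whose outputs are up to date, and (re)run the rest in order — can be sketched as a depth-first traversal. The module names, `deps` structure, and `completed` set below are invented for illustration; this is not aa's MATLAB engine.

```python
def run_pipeline(modules, deps, completed):
    """Execute pipeline modules in dependency order, skipping finished ones.

    modules: {name: callable}; deps: {name: [upstream names]};
    completed: set of module names whose outputs are already up to date.
    Returns the list of modules actually (re)executed.
    """
    executed, done = [], set(completed)

    def visit(name, stack=()):
        if name in stack:
            raise ValueError(f"dependency cycle at {name}")
        for up in deps.get(name, []):
            visit(up, stack + (name,))     # upstream modules run first
        if name not in done:
            modules[name]()                # run this module's task
            done.add(name)
            executed.append(name)

    for name in modules:
        visit(name)
    return executed

log = []
modules = {
    "realign": lambda: log.append("realign"),
    "preproc": lambda: log.append("preproc"),
    "stats": lambda: log.append("stats"),
}
deps = {"preproc": ["realign"], "stats": ["preproc"]}
executed = run_pipeline(modules, deps, completed={"realign"})
```

Tracking what is already done is what lets such an engine resume a large study after a change without redoing every upstream step.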


Journal ArticleDOI
TL;DR: It is found that FSL, Freesurfer and CIVET use mathematical functions based on single-precision floating-point arithmetic whose implementations in operating systems continue to evolve, which creates important differences in longer pipelines such as subcortical tissue classification, fMRI analysis, and cortical thickness extraction.
Abstract: Neuroimaging pipelines are known to generate different results depending on the computing platform where they are compiled and executed. We quantify these differences for brain tissue classification, fMRI analysis, and cortical thickness (CT) extraction, using three of the main neuroimaging packages (FSL, Freesurfer and CIVET) and different versions of GNU/Linux. We also identify some causes of these differences using library and system call interception. We find that these packages use mathematical functions based on single-precision floating-point arithmetic whose implementations in operating systems continue to evolve. While these differences have little or no impact on simple analysis pipelines such as brain extraction and cortical tissue classification, their accumulation creates important differences in longer pipelines such as subcortical tissue classification, fMRI analysis, and cortical thickness extraction. With FSL, most Dice coefficients between subcortical classifications obtained on different operating systems remain above 0.9, but values as low as 0.59 are observed. Independent component analyses (ICA) of fMRI data differ between operating systems in one third of the tested subjects, due to differences in motion correction. With Freesurfer and CIVET, in some brain regions we find an effect of build or operating system on cortical thickness. A first step to correct these reproducibility issues would be to use more precise representations of floating-point numbers in the critical sections of the pipelines. The numerical stability of pipelines should also be reviewed.

117 citations
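The Dice coefficient used above to compare subcortical classifications across operating systems is straightforward to compute; a drop from 1.0 toward the reported 0.59 means substantial voxel-level disagreement between two masks. A reference implementation:

```python
import numpy as np

def dice(a, b):
    """Dice coefficient between two binary segmentations (1.0 = identical)."""
    a, b = np.asarray(a, dtype=bool), np.asarray(b, dtype=bool)
    inter = np.logical_and(a, b).sum()
    denom = a.sum() + b.sum()
    return 2.0 * inter / denom if denom else 1.0
```

Even a one-voxel difference in a ten-voxel structure already drops the coefficient to 2*9/19 ≈ 0.947, which illustrates how sensitive the measure is in small subcortical regions.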


Journal ArticleDOI
TL;DR: The ANNarchy (Artificial Neural Networks architect) neural simulator is presented, which allows users to easily define and simulate rate-coded and spiking networks, as well as combinations of both, and is compared to existing solutions.

Abstract: Many modern neural simulators focus on the simulation of networks of spiking neurons on parallel hardware. Another important framework in computational neuroscience, rate-coded neural networks, is mostly difficult or impossible to implement using these simulators. We present here the ANNarchy (Artificial Neural Networks architect) neural simulator, which allows users to easily define and simulate rate-coded and spiking networks, as well as combinations of both. The interface in Python has been designed to be close to the PyNN interface, while the definition of neuron and synapse models can be specified using an equation-oriented mathematical description similar to the Brian neural simulator. This information is used to generate C++ code that will efficiently perform the simulation on the chosen parallel hardware (multi-core system or graphical processing unit). Several numerical methods are available to transform ordinary differential equations into an efficient C++ code. We compare the parallel performance of the simulator to existing solutions.

81 citations
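The rate-coded side of such a simulator ultimately integrates ODEs of the form tau * dr/dt = -r + W r + input; ANNarchy generates C++ for this from equation strings, but the numerics can be sketched with explicit Euler in NumPy (parameters and the specific model are illustrative, not ANNarchy's defaults):

```python
import numpy as np

def simulate_rate_network(W, inp, tau=10.0, dt=0.1, steps=1000):
    """Explicit Euler integration of a rate-coded network:
    tau * dr/dt = -r + W @ r + inp, starting from r = 0."""
    r = np.zeros(len(inp))
    for _ in range(steps):
        r = r + dt / tau * (-r + W @ r + inp)
    return r
```

For stable weight matrices the rates settle at the fixed point r = W r + inp, which is a quick sanity check for any integrator of this model class.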


Journal ArticleDOI
TL;DR: This work presents a general purpose data transformation system that is developed by leveraging the existing state-of-the-art in data integration and query rewriting, plus new execution methodologies to ensure efficient data transformation for disease datasets.
Abstract: This paper presents a system for declaratively transforming medical subjects' data into a common data model representation. Our work is part of the “GAAIN” project on Alzheimer's disease data federation across multiple data providers. We present a general purpose data transformation system that we have developed by leveraging the existing state-of-the-art in data integration and query rewriting. In this work we have further extended the current technology with new formalisms that facilitate expressing a broader range of data transformation tasks, plus new execution methodologies to ensure efficient data transformation for disease datasets.

79 citations


Journal ArticleDOI
TL;DR: Editorial: Python in neuroscience.
Abstract: Editorial, published 14 April 2015, doi: 10.3389/fninf.2015.00011. Edited and reviewed by: Sean L. Hill, International Neuroinformatics Coordinating Facility, Sweden. Correspondence: Andrew P. Davison, andrew.davison@unic.cnrs-gif.fr. Citation: Muller E, Bednar JA, Diesmann M, Gewaltig M-O, Hines M and Davison AP (2015) Python in neuroscience. Front. Neuroinform. 9:11.

63 citations


Journal ArticleDOI
TL;DR: BrainX3 can be used as a novel immersive platform for exploration and analysis of dynamical activity patterns in brain networks, both at rest and in a task-related state, for discovery of signaling pathways associated with brain function and/or dysfunction and as a tool for virtual neurosurgery.

Abstract: BrainX3 is a large-scale simulation of human brain activity with real-time interaction, rendered in 3D in a virtual reality environment, which combines computational power with human intuition for the exploration and analysis of complex dynamical networks. We ground this simulation on structural connectivity obtained from diffusion spectrum imaging data and model it on neuronal population dynamics. Users can interact with BrainX3 in real-time by perturbing brain regions with transient stimulations to observe reverberating network activity, simulate lesion dynamics or implement network analysis functions from a library of graph theoretic measures. BrainX3 can thus be used as a novel immersive platform for exploration and analysis of dynamical activity patterns in brain networks, both at rest and in a task-related state, for discovery of signaling pathways associated with brain function and/or dysfunction and as a tool for virtual neurosurgery. Our results demonstrate these functionalities and shed light on the dynamics of the resting-state attractor. Specifically, we found that a noisy network seems to favor a low firing attractor state. We also found that the dynamics of a noisy network is less resilient to lesions. Our simulations on TMS perturbations show that even though TMS inhibits most of the network, it also sparsely excites a few regions. This is presumably due to anti-correlations in the dynamics and suggests that even a lesioned network can show sparsely distributed increased activity compared to healthy resting-state, over specific brain areas.

Journal ArticleDOI
TL;DR: Overall, it is shown how the improved spatial resolution provided by high density, large scale MEAs can be reliably exploited to characterize activity from large neural populations and brain circuits.
Abstract: An emerging generation of high-density microelectrode arrays (MEAs) is now capable of recording spiking activity simultaneously from thousands of neurons with closely spaced electrodes. Reliable spike detection and analysis in such recordings is challenging due to the large amount of raw data and the dense sampling of spikes with closely spaced electrodes. Here, we present a highly efficient, online capable spike detection algorithm, and an offline method with improved detection rates, which enables estimation of spatial event locations at a resolution higher than that provided by the array by combining information from multiple electrodes. Data acquired with a 4096 channel MEA from neuronal cultures and the neonatal retina, as well as synthetic data, was used to test and validate these methods. We demonstrate that these algorithms outperform conventional methods due to a better noise estimate and an improved signal-to-noise ratio (SNR) through combining information from multiple electrodes. Finally, we present a new approach for analyzing population activity based on the characterization of the spatio-temporal event profile, which does not require the isolation of single units. Overall, we show how the improved spatial resolution provided by high density, large scale MEAs can be reliably exploited to characterize activity from large neural populations and brain circuits.
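A key ingredient the abstract credits for better performance is a better noise estimate. The standard median-based estimator, sigma ≈ median(|x|)/0.6745, is robust to the spikes themselves in a way that a plain standard deviation is not. Below is a single-channel sketch of threshold detection built on it; the multi-electrode combination and event-localization steps from the paper are not reproduced, and the threshold and refractory values are illustrative.

```python
import numpy as np

def detect_spikes(trace, thresh=5.0, refractory=30):
    """Threshold-crossing detection with a median-based noise estimate.

    Estimates the noise standard deviation as median(|x|)/0.6745, which
    is far less biased by the spikes themselves than np.std, then returns
    sample indices of negative-going threshold crossings, enforcing a
    minimum gap (in samples) between detected events.
    """
    sigma = np.median(np.abs(trace)) / 0.6745
    below = trace < -thresh * sigma
    spikes, last = [], -refractory
    for i in np.flatnonzero(below):
        if i - last >= refractory:
            spikes.append(i)
            last = i
    return spikes
```

On dense arrays the same detection is run per electrode, and the paper's contribution is to pool evidence across neighboring electrodes for a higher effective SNR and sub-electrode event localization.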

Journal ArticleDOI
TL;DR: BrainBrowser is a lightweight, high-performance JavaScript visualization library built to provide easy-to-use, powerful, on-demand visualization of remote datasets in this new research environment.
Abstract: Recent years have seen massive, distributed datasets become the norm in neuroimaging research, and the methodologies used to analyze them have, in response, become more collaborative and exploratory. Tools and infrastructure are continuously being developed and deployed to facilitate research in this context: grid computation platforms to process the data, distributed data stores to house and share them, high-speed networks to move them around and collaborative, often web-based, platforms to provide access to and sometimes manage the entire system. BrainBrowser is a lightweight, high-performance JavaScript visualization library built to provide easy-to-use, powerful, on-demand visualization of remote datasets in this new research environment. BrainBrowser leverages modern Web technologies, such as WebGL, HTML5 and Web Workers, to visualize 3D surface and volumetric neuroimaging data in any modern web browser without requiring any browser plugins. It is thus trivial to integrate BrainBrowser into any web-based platform. BrainBrowser is simple enough to produce a basic web-based visualization in a few lines of code, while at the same time being robust enough to create full-featured visualization applications. BrainBrowser can dynamically load the data required for a given visualization, so no network bandwidth needs to be wasted on data that will not be used. BrainBrowser's integration into the standardized web platform also allows users to consider using 3D data visualization in novel ways, such as for data distribution, data sharing and dynamic online publications. BrainBrowser is already being used in two major online platforms, CBRAIN and LORIS, and has been used to make the 1TB MACACC dataset openly accessible.

Journal ArticleDOI
TL;DR: A Matlab toolbox, FocusStack, for simple and efficient analysis of two-photon calcium imaging stacks on consumer-level hardware, with minimal memory footprint, and a StimServer for generation and sequencing of visual stimuli, designed to be triggered over a network link from a two- photon acquisition system.
Abstract: Two-photon calcium imaging of neuronal responses is an increasingly accessible technology for probing population responses in cortex at single cell resolution, and with reasonable and improving temporal resolution. However, analysis of two-photon data is usually performed using ad-hoc solutions. To date, no publicly available software exists for straightforward analysis of stimulus-triggered two-photon imaging experiments. In addition, the increasing data rates of two-photon acquisition systems imply increasing cost of computing hardware required for in-memory analysis. Here we present a Matlab toolbox, "FocusStack", for simple and efficient analysis of two-photon calcium imaging stacks on consumer-level hardware, with minimal memory footprint. We also present a Matlab toolbox, "StimServer", for generation and sequencing of visual stimuli, designed to be triggered over a network link from a two-photon acquisition system. "FocusStack" is compatible out of the box with several existing two-photon acquisition systems, and is simple to adapt to arbitrary binary file formats. Analysis tools such as stack alignment for movement correction, automated cell detection and peri-stimulus time histograms are already provided, and further tools can be easily incorporated. Both packages are available as publicly-accessible source-code repositories.
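The core quantity such a toolbox extracts from a calcium imaging stack is a per-ROI dF/F trace. A minimal NumPy sketch of that computation — not FocusStack's memory-efficient MATLAB implementation, and with the function signature invented for illustration:

```python
import numpy as np

def dff_traces(stack, rois, baseline_frames):
    """Compute ROI-averaged dF/F traces from an imaging stack.

    stack: (n_frames, height, width) fluorescence movie;
    rois: list of boolean pixel masks, one per cell;
    baseline_frames: slice of frames used to estimate resting fluorescence F0.
    Returns an (n_rois, n_frames) array of (F - F0) / F0 traces.
    """
    traces = np.stack([stack[:, m].mean(axis=1) for m in rois])
    f0 = traces[:, baseline_frames].mean(axis=1, keepdims=True)
    return (traces - f0) / f0
```

Averaging over an ROI mask and normalizing by a pre-stimulus baseline is the step that turns raw frames into the peri-stimulus responses the toolbox then aligns and histograms.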

Journal ArticleDOI
TL;DR: An algorithm is presented for parameter estimation of spiking neuron models which uses a spike train metric as a fitness function and applies this to parameter discovery in modeling two experimental data sets with spiking neurons.
Abstract: Spiking neuron models can accurately predict the response of neurons to somatically injected currents if the model parameters are carefully tuned. Predicting the response of in-vivo neurons responding to natural stimuli presents a far more challenging modeling problem. In this study, an algorithm is presented for parameter estimation of spiking neuron models. The algorithm is a hybrid evolutionary algorithm which uses a spike train metric as a fitness function. We apply this to parameter discovery in modeling two experimental data sets with spiking neurons; in-vitro current injection responses from a regular spiking pyramidal neuron are modeled using spiking neurons and in-vivo extracellular auditory data is modeled using a two stage model consisting of a stimulus filter and spiking neuron model.
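A classic spike train metric usable as such a fitness function is the Victor–Purpura distance, computed with edit-distance-style dynamic programming. The abstract does not specify which metric the authors used, so this is shown as a representative example rather than their method.

```python
import numpy as np

def victor_purpura(t1, t2, q=1.0):
    """Victor-Purpura spike train distance.

    Cost 1 to insert or delete a spike, q * |dt| to shift one in time;
    the minimum total cost of transforming train t1 into t2 is found by
    dynamic programming, exactly as in string edit distance.
    """
    n, m = len(t1), len(t2)
    D = np.zeros((n + 1, m + 1))
    D[:, 0] = np.arange(n + 1)     # delete all remaining spikes of t1
    D[0, :] = np.arange(m + 1)     # insert all remaining spikes of t2
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i, j] = min(D[i - 1, j] + 1,
                          D[i, j - 1] + 1,
                          D[i - 1, j - 1] + q * abs(t1[i - 1] - t2[j - 1]))
    return D[n, m]
```

Used as a fitness function, a smaller distance between the model's and the recorded train means a better parameter set, and the cost parameter q sets the temporal precision the fit rewards.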

Journal ArticleDOI
TL;DR: In this paper, the authors present a fully-automated image-to-graphs pipeline (i.e., a pipeline that begins with an imaged volume of neural tissue and produces a brain graph without any human interaction).
Abstract: Reconstructing a map of neuronal connectivity is a critical challenge in contemporary neuroscience. Recent advances in high-throughput serial section electron microscopy (EM) have produced massive 3D image volumes of nanoscale brain tissue for the first time. The resolution of EM allows for individual neurons and their synaptic connections to be directly observed. Recovering neuronal networks by manually tracing each neuronal process at this scale is unmanageable, and therefore researchers are developing automated image processing modules. Thus far, state-of-the-art algorithms focus only on the solution to a particular task (e.g., neuron segmentation or synapse identification). In this manuscript we present the first fully-automated images-to-graphs pipeline (i.e., a pipeline that begins with an imaged volume of neural tissue and produces a brain graph without any human interaction). To evaluate overall performance and select the best parameters and methods, we also develop a metric to assess the quality of the output graphs. We evaluate a set of algorithms and parameters, searching possible operating points to identify the best available brain graph for our assessment metric. Finally, we deploy a reference end-to-end version of the pipeline on a large, publicly available data set. This provides a baseline result and framework for community analysis and future algorithm development and testing. All code and data derivatives have been made publicly available in support of eventually unlocking new biofidelic computational primitives and understanding of neuropathologies.

Journal ArticleDOI
TL;DR: A numerical algorithm based on a waveform-relaxation technique which allows for network simulations with gap junctions in a way that is compatible with the delayed communication strategy, and demonstrates that the algorithm and the required data structures can be smoothly integrated with existing code such that they complement the infrastructure for spiking connections.
Abstract: Contemporary simulators for networks of point and few-compartment model neurons come with a plethora of ready-to-use neuron and synapse models and support complex network topologies. Recent technological advancements have broadened the spectrum of application further to the efficient simulation of brain-scale networks on supercomputers. In distributed network simulations the amount of spike data that accrues per millisecond and process is typically low, such that a common optimization strategy is to communicate spikes at relatively long intervals, where the upper limit is given by the shortest synaptic transmission delay in the network. This approach is well-suited for simulations that employ only chemical synapses but it has so far impeded the incorporation of gap-junction models, which require instantaneous neuronal interactions. Here, we present a numerical algorithm based on a waveform-relaxation technique which allows for network simulations with gap junctions in a way that is compatible with the delayed communication strategy. Using a reference implementation in the NEST simulator, we demonstrate that the algorithm and the required data structures can be smoothly integrated with existing code such that they complement the infrastructure for spiking connections. To show that the unified framework for gap-junction and spiking interactions achieves high performance and delivers high accuracy...
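Waveform relaxation exchanges whole waveforms over a communication interval and iterates to convergence, instead of exchanging membrane potentials at every time step — which is what makes it compatible with the delayed communication strategy. A toy Jacobi sketch for two gap-junction coupled leaky integrators (not NEST's implementation; the model and parameters are invented for illustration):

```python
import numpy as np

def waveform_relaxation(v0, g=0.5, T=1.0, dt=0.01, tol=1e-8, max_sweeps=50):
    """Jacobi waveform relaxation for two gap-junction coupled leaky neurons.

    dV_i/dt = -V_i + g * (V_j - V_i). Each sweep integrates both neurons
    over the whole interval using the other neuron's waveform from the
    previous sweep, so only complete waveforms (not per-step values) need
    to be communicated between the neurons' processes.
    """
    steps = int(round(T / dt))
    V = np.zeros((2, steps + 1))
    V[:, 0] = v0
    for _ in range(max_sweeps):
        prev = V.copy()
        for i in (0, 1):
            other = prev[1 - i]            # frozen waveform from last sweep
            for k in range(steps):
                dv = -V[i, k] + g * (other[k] - V[i, k])
                V[i, k + 1] = V[i, k] + dt * dv
        if np.max(np.abs(V - prev)) < tol:
            break
    return V
```

For a symmetric initial condition the coupling current vanishes and the converged waveforms reduce to plain leaky decay, a convenient correctness check.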

Journal ArticleDOI
TL;DR: A new version of the authors' manually curated corpus is described that adds 2,111 connectivity statements from 1,828 additional abstracts and cross-validation classification within the new corpus replicates results on the original corpus, recalling 67% of connectivity statements at 51% precision.
Abstract: We describe the WhiteText project, and its progress towards automatically extracting statements of neuroanatomical connectivity from text. We review progress to date on the three main steps of the project: recognition of brain region mentions, standardization of brain region mentions to neuroanatomical nomenclature, and connectivity statement extraction. We further describe a new version of our manually curated corpus that adds 2,111 connectivity statements from 1,828 additional abstracts. Cross-validation classification within the new corpus replicates results on our original corpus, recalling 67% of connectivity statements at 51% precision. The resulting merged corpus provides 5,208 connectivity statements that can be used to seed species-specific connectivity matrices and to better train automated techniques. Finally, we present a new web application that allows fast interactive browsing of the over 70,000 sentences indexed by the system, as a tool for accessing the data and assisting in further curation. Software and data are freely available at http://www.chibi.ubc.ca/WhiteText/.

Journal ArticleDOI
TL;DR: It is shown that 3D printed cells can be readily examined, manipulated, and compared with other neurons to gain insight into both the biology and the reconstruction process.
Abstract: Neurons come in a wide variety of shapes and sizes. In a quest to understand this neuronal diversity, researchers have three-dimensionally traced tens of thousands of neurons; many of these tracings are freely available through online repositories like NeuroMorpho.Org and ModelDB. Tracings can be visualized on the computer screen, used for statistical analysis of the properties of different cell types, used to simulate neuronal behavior, and more. We introduce the use of 3D printing as a technique for visualizing traced morphologies. Our method for generating printable versions of a cell or group of cells is to expand dendrite and axon diameters and then to transform the wireframe tracing into a 3D object with a neuronal surface generating algorithm like Constructive Tessellated Neuronal Geometry (CTNG). We show that 3D printed cells can be readily examined, manipulated, and compared with other neurons to gain insight into both the biology and the reconstruction process. We share our printable models in a new database, 3DModelDB, and encourage others to do the same with cells that they generate using our code or other methods. To provide additional context, 3DModelDB provides a simulatable version of each cell, links to papers that use or describe it, and links to associated entries in other databases.

Journal ArticleDOI
TL;DR: A software prototype is introduced that enables its users to add semantically richer expressions into a Java object-oriented code and the mapping that allows the transformation of the semantically enriched Java code into the Semantic Web language OWL was proposed and implemented in a library named the Semantic Framework.
Abstract: The article discusses two main approaches to building semantic structures for electrophysiological metadata. It is the use of conventional data structures, repositories, and programming languages on one hand and the use of formal representations of ontologies, known from knowledge representation, such as description logics or semantic web languages on the other hand. Although knowledge engineering offers languages supporting richer semantic means of expression and technologically advanced approaches, conventional data structures and repositories are still popular among developers, administrators and users because of their simplicity, overall intelligibility, and lower demands on technical equipment. The choice of conventional data resources and repositories, however, raises the question of how and where to add semantics that cannot be naturally expressed using them. As one of the possible solutions, this semantics can be added into the structures of the programming language that accesses and processes the underlying data. To support this idea we introduced a software prototype that enables its users to add semantically richer expressions into a Java object-oriented code. This approach does not burden users with additional demands on the programming environment since reflective Java annotations were used as an entry for these expressions. Moreover, additional semantics need not be written by the programmer directly to the code, but it can be collected from non-programmers using a graphic user interface. The mapping that allows the transformation of the semantically enriched Java code into the Semantic Web language OWL was proposed and implemented in a library named the Semantic Framework. This approach was validated by the integration of the Semantic Framework in the EEG/ERP Portal and by the subsequent registration of the EEG/ERP Portal in the Neuroscience Information Framework.

Journal ArticleDOI
TL;DR: An automatic multi-subject fiber clustering method that enables retrieval of group-wise WM fiber bundles and serves the final goal of detecting WM bundles at a population level, thus paving the way to the study of the WM organization across groups.
Abstract: Mapping of structural and functional connectivity may provide deeper understanding of brain function and dysfunction. Diffusion Magnetic Resonance Imaging (DMRI) is a powerful technique to non-invasively delineate white matter (WM) tracts and to obtain a three-dimensional description of the structural architecture of the brain. However, DMRI tractography methods produce highly multi-dimensional datasets whose interpretation requires advanced analytical tools. Indeed, manual identification of specific neuroanatomical tracts based on prior anatomical knowledge is time-consuming and prone to operator-induced bias. Here we propose an automatic multi-subject fiber clustering method that enables retrieval of group-wise WM fiber bundles. In order to account for variance across subjects, we developed a multi-subject approach based on a method known as Dominant Sets algorithm, via an intra- and cross-subject clustering. The intra-subject step allows us to reduce the complexity of the raw tractography data, thus obtaining homogeneous neuroanatomically-plausible bundles in each diffusion space. The cross-subject step, characterized by a proper space-invariant metric in the original diffusion space, enables the identification of the same WM bundles across multiple subjects without any prior neuroanatomical knowledge. Quantitative analysis was conducted comparing our algorithm with spectral clustering and affinity propagation methods on a synthetic dataset. We also performed qualitative analysis on mouse brain tractography retrieving significant WM structures. The approach serves the final goal of detecting WM bundles at a population level, thus paving the way to the study of the WM organization across groups.
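The Dominant Sets algorithm at the heart of the method extracts a cluster from a pairwise affinity matrix via replicator dynamics: support weights evolve on the simplex, and the members of the dominant set are the entries that keep positive weight. A toy sketch on a small affinity matrix (the fiber-to-fiber affinities and parameters are invented for illustration; this is not the paper's full intra-/cross-subject pipeline):

```python
import numpy as np

def dominant_set(A, tol=1e-10, max_iter=2000):
    """Extract a dominant set from an affinity matrix via replicator dynamics.

    A: symmetric nonnegative affinity matrix with zero diagonal.
    Returns simplex weights x; entries with x_i > 0 mark cluster members.
    """
    n = len(A)
    x = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        Ax = A @ x
        new = x * Ax / (x @ Ax)     # replicator update stays on the simplex
        if np.linalg.norm(new - x, 1) < tol:
            x = new
            break
        x = new
    return x
```

Items outside the coherent group see a low payoff A @ x and their weights decay toward zero, which is how weakly-affiliated fibers are excluded from a bundle.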

Journal ArticleDOI
TL;DR: The Cloudwave data flow is a template for developing highly scalable neuroscience data processing pipelines using MapReduce algorithms to support a variety of user applications and addresses the challenge of data heterogeneity.
Abstract: Data-driven neuroscience research is providing new insights into the progression of neurological disorders and supporting the development of improved treatment approaches. However, the volume, velocity, and variety of neuroscience data generated by sophisticated recording instruments and acquisition methods have exacerbated the limited scalability of existing neuroinformatics tools. This makes it difficult for neuroscience researchers to effectively leverage the growing multi-modal neuroscience data to advance research into serious neurological disorders, such as epilepsy. We describe the development of the Cloudwave data flow, which uses new data partitioning techniques to store and analyze electrophysiological signals in a distributed computing infrastructure. The Cloudwave data flow uses the MapReduce parallel programming model to implement an integrated signal data processing pipeline that scales with the large volumes of data generated at high velocity. Using an epilepsy domain ontology together with an epilepsy-focused, extensible data representation format called the Cloudwave Signal Format (CSF), the data flow addresses the challenge of data heterogeneity and is interoperable with existing neuroinformatics data representation formats, such as HDF5. The scalability of the Cloudwave data flow was evaluated on a 30-node cluster running the open source Hadoop software stack. The results demonstrate that the Cloudwave data flow can process increasing volumes of signal data by leveraging Hadoop DataNodes to reduce the total data processing time. The Cloudwave data flow is a template for developing highly scalable neuroscience data processing pipelines that use MapReduce algorithms to support a variety of user applications.
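The MapReduce pattern the abstract describes can be illustrated in miniature without Hadoop: a map step turns each signal partition into per-channel partial sums, and a reduce step combines them into channel statistics. The partition layout, channel names, and statistics below are illustrative, not Cloudwave's actual format.

```python
from collections import defaultdict

def map_segment(segment):
    """Map step: one signal partition -> per-channel partial sums
    (count, sum, sum of squares). Each partition can be processed
    independently, which is what lets the pipeline scale out."""
    out = []
    for channel, samples in segment.items():
        s = sum(samples)
        ss = sum(v * v for v in samples)
        out.append((channel, (len(samples), s, ss)))
    return out

def reduce_stats(pairs):
    """Reduce step: merge partial sums into per-channel mean and power."""
    acc = defaultdict(lambda: [0, 0.0, 0.0])
    for channel, (n, s, ss) in pairs:
        acc[channel][0] += n
        acc[channel][1] += s
        acc[channel][2] += ss
    return {ch: {"mean": s / n, "power": ss / n}
            for ch, (n, s, ss) in acc.items()}

# Two partitions of a two-channel recording.
segments = [
    {"C3": [1.0, 3.0], "C4": [2.0, 2.0]},
    {"C3": [5.0, 7.0], "C4": [2.0, 2.0]},
]
pairs = [kv for seg in segments for kv in map_segment(seg)]
stats = reduce_stats(pairs)
```

Because the partial sums are associative, the reduce step produces the same answer however the signal was partitioned across nodes; that algebraic property, not the framework, is what makes the approach scale.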

Journal ArticleDOI
TL;DR: The object-oriented approach provides flexibility to adapt to a variety of neuroscience simulators, simplifies the use of heterogeneous computational resources, from desktops to supercomputer clusters, and improves tracking of simulator/simulation evolution.
Abstract: We developed NeuroManager, an object-oriented simulation management software engine for computational neuroscience. NeuroManager automates the workflow of simulation job submissions when using heterogeneous computational resources, simulators, and simulation tasks. The object-oriented approach (1) provides flexibility to adapt to a variety of neuroscience simulators, (2) simplifies the use of heterogeneous computational resources, from desktops to supercomputer clusters, and (3) improves tracking of simulator/simulation evolution. We implemented NeuroManager in MATLAB, a widely used engineering and scientific language, for its signal and image processing tools, prevalence in electrophysiology analysis, and increasing use in college biology education. To design and develop NeuroManager, we analyzed the workflow of simulation submission for a variety of simulators, operating systems, and computational resources, including the handling of input parameters, data, models, results, and analyses. This analysis yielded a simulation submission workflow of 22 stages. The software incorporates progress notification; automatic organization, labeling, and time-stamping of data and results; and integrated access to MATLAB's analysis and visualization tools. NeuroManager provides users with the tools to automate daily tasks, and assists principal investigators in tracking and recreating the evolution of research projects performed by multiple people. Overall, NeuroManager provides the infrastructure needed to improve workflow, manage multiple simultaneous simulations, and maintain provenance of the potentially large amounts of data produced during the course of a research project.
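The object-oriented pattern the abstract describes — simulator adapters behind a common interface, plus a manager that labels and time-stamps every submission — can be sketched as follows. This is a toy in Python, not NeuroManager's actual MATLAB API; the class and field names are invented for illustration.

```python
import datetime

class Simulator:
    """Adapter base class: each concrete simulator overrides run().
    Swapping simulators then requires no change to the manager."""
    name = "base"

    def run(self, params):
        raise NotImplementedError

class SquareSim(Simulator):
    """Trivial stand-in for a real simulator backend."""
    name = "square"

    def run(self, params):
        return {"result": params["x"] ** 2}

class SimulationManager:
    """Bookkeeping sketch: every submission gets a unique label, a
    timestamp, and a record of its parameters and results, so a
    project's evolution can be reconstructed later."""
    def __init__(self):
        self.log = []

    def submit(self, simulator, params):
        record = {
            "label": f"{simulator.name}-{len(self.log):04d}",
            "submitted": datetime.datetime.now().isoformat(),
            "params": dict(params),
        }
        record.update(simulator.run(params))
        self.log.append(record)
        return record

mgr = SimulationManager()
rec = mgr.submit(SquareSim(), {"x": 3})
```

A real engine would additionally stage files to the remote resource and poll for completion between the `submit` bookkeeping and the result collection, which is where the 22 workflow stages come from.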

Journal ArticleDOI
TL;DR: The simulation tool described in this work considers scalar spatially homogeneous neural fields taking into account a finite axonal transmission speed and synaptic temporal derivatives of first and second order, and is presented with an extension use case.
Abstract: Neural Field models (NFM) play an important role in the understanding of neural population dynamics on a mesoscopic spatial and temporal scale. Their numerical simulation is an essential element in the analysis of their spatio-temporal dynamics. The simulation tool described in this work considers scalar, spatially homogeneous neural fields, taking into account a finite axonal transmission speed and synaptic temporal derivatives of first and second order. A text-based interface offers complete control of field parameters, and several approaches are used to accelerate simulations. A graphical output utilizes video hardware acceleration to display running output with reduced computational hindrance compared to simulators that are exclusively software-based. Diverse applications of the tool demonstrate breather oscillations, static and dynamic Turing patterns, and activity spreading with finite propagation speed. The simulator is open source to allow tailoring of the code, and an extension use case is presented.
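The class of models the abstract describes can be illustrated with a minimal 1-D Amari-type field on a ring, du/dt = −u + w∗f(u) + I, where the spatial convolution with a homogeneous kernel is computed via the FFT. This sketch uses a first-order synapse and omits the finite axonal delay that the simulator additionally supports; kernel, gain, and input parameters are illustrative.

```python
import numpy as np

def simulate_field(n=128, length=10.0, dt=0.01, steps=500):
    """Explicit-Euler simulation of du/dt = -u + w*f(u) + I on a ring.
    Spatial homogeneity makes the connectivity a convolution, so it can
    be evaluated in O(n log n) with the FFT."""
    x = np.linspace(-length / 2, length / 2, n, endpoint=False)
    dx = length / n
    # Mexican-hat kernel: local excitation, broader inhibition.
    w = 2.0 * np.exp(-x ** 2) - np.exp(-x ** 2 / 4.0)
    w_hat = np.fft.fft(np.fft.ifftshift(w))          # center kernel at index 0
    f = lambda u: 1.0 / (1.0 + np.exp(-5.0 * (u - 0.5)))  # sigmoid rate
    stim = 0.5 * np.exp(-x ** 2)                     # localized input
    u = np.zeros(n)
    for _ in range(steps):
        conv = dx * np.real(np.fft.ifft(w_hat * np.fft.fft(f(u))))
        u = u + dt * (-u + conv + stim)
    return x, u

x, u = simulate_field()   # a localized bump forms over the stimulus
```

Adding the finite transmission speed turns the convolution into one over delayed field values, u(y, t − |x−y|/v), which is why such simulators must buffer the field's recent history.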

Journal ArticleDOI
TL;DR: An approach based on MATLAB/Simulink is presented, exploiting the benefits of LEGO-like visual programming and configuration, combined with a small but easily extensible library of functional software components, hopefully extending the range of default techniques and protocols currently employed in experimental labs across the world.
Abstract: Most software platforms for cellular electrophysiology are limited in terms of flexibility, hardware support, ease of use, or re-configuration and adaptation for non-expert users. Moreover, advanced experimental protocols requiring real-time closed-loop operation to investigate excitability, plasticity, and dynamics are largely inaccessible to users without moderate to substantial computer proficiency. Here we present an approach based on MATLAB/Simulink, exploiting the benefits of LEGO-like visual programming and configuration, combined with a small but easily extensible library of functional software components. We provide and validate several examples, implementing conventional and more sophisticated experimental protocols such as dynamic-clamp or the combined use of intracellular and extracellular methods, involving closed-loop real-time control. The functionality of each of these examples is demonstrated with relevant experiments. These can be used as a starting point to create and support a larger variety of electrophysiological tools and methods, hopefully extending the range of default techniques and protocols currently employed in experimental labs across the world.
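The dynamic-clamp protocol mentioned above is the canonical closed-loop example: on every cycle the controller reads the membrane potential, computes the current of a virtual conductance, I = g·(E − V), and injects it back. The sketch below closes that loop around a simulated passive cell; the cell parameters are illustrative, and a real rig would replace the model neuron with amplifier I/O at a fixed sampling rate.

```python
def dynamic_clamp_step(v_m, g, e_rev):
    """One closed-loop cycle: given the measured membrane potential (mV)
    and a virtual conductance g (nS) with reversal e_rev (mV), return
    the current to inject (pA)."""
    return g * (e_rev - v_m)

# Passive model cell standing in for the real neuron (pF, nS, mV, ms).
c_m, g_leak, e_leak = 100.0, 5.0, -70.0
g_virtual, e_virtual = 10.0, 0.0      # virtual excitatory conductance
dt = 0.1                              # update interval (ms)

v = e_leak
for _ in range(5000):                 # 500 ms of closed-loop operation
    i_inj = dynamic_clamp_step(v, g_virtual, e_virtual)
    dvdt = (g_leak * (e_leak - v) + i_inj) / c_m
    v += dt * dvdt
# v settles at the conductance-weighted mean of the reversal potentials:
# (g_leak*e_leak + g_virtual*e_virtual) / (g_leak + g_virtual) = -70/3 mV
```

The essential real-time constraint is that each read-compute-inject cycle must complete within `dt`; the hard part addressed by Simulink-based implementations is guaranteeing that deadline, not the arithmetic itself.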

Journal ArticleDOI
TL;DR: A software architecture is developed that allows neuroscientists to integrate visualization tools more closely into the modeling tasks and forms the basis for semantic linking of different visualizations to reflect the current workflow.
Abstract: Modeling large-scale spiking neural networks showing realistic biological behavior in their dynamics is a complex and tedious task. Since these networks consist of millions of interconnected neurons, their simulation produces an immense amount of data. In recent years it has become possible to simulate even larger networks. However, solutions to assist researchers in understanding the simulation's complex emergent behavior by means of visualization are still lacking. While developing tools to partially fill this gap, we encountered the challenge of integrating these tools easily into the neuroscientists' daily workflow. To understand what makes this so challenging, we looked into the workflows of our collaborators and analyzed how they use visualizations to solve their daily problems. We identified two major issues: first, the analysis process can rapidly change focus, which requires switching the visualization tool that assists in the current problem domain. Second, because of the heterogeneous data that result from simulations, researchers want to relate different data modalities in order to investigate them effectively. Since a monolithic application, processing and visualizing all data modalities and reflecting all combinations of possible workflows in a holistic way, is most likely impossible to develop and maintain, a more feasible approach is a software architecture that offers specialized visualization tools which run simultaneously and can be linked together to reflect the current workflow. To this end, we have developed a software architecture that allows neuroscientists to integrate visualization tools more closely into their modeling tasks. In addition, it forms the basis for semantic linking of different visualizations to reflect the current workflow. In this paper, we present this architecture and substantiate the usefulness of our approach with common use cases we encountered in our collaborative work.
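The linking idea described above is commonly realized with a publish/subscribe bus: each specialized tool subscribes to shared topics (such as the current neuron selection) and stays synchronized without direct coupling to the other tools. The sketch below is a generic illustration of that pattern; the class and topic names are invented, not the paper's actual architecture.

```python
from collections import defaultdict

class MessageBus:
    """Minimal publish/subscribe hub. Tools register callbacks per topic;
    publishing fans the payload out to every subscriber."""
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def publish(self, topic, payload):
        for callback in self.subscribers[topic]:
            callback(payload)

class SpikeRasterView:
    """Stand-in visualization tool: reacts to selections made elsewhere."""
    def __init__(self, bus):
        self.selected = None
        bus.subscribe("selection", self.on_select)

    def on_select(self, neuron_ids):
        self.selected = neuron_ids   # a real tool would redraw here

bus = MessageBus()
raster = SpikeRasterView(bus)
# Another tool (say, a 3-D network view) publishes a neuron selection;
# the raster view updates without either tool knowing about the other.
bus.publish("selection", [17, 42])
```

Because tools only agree on topic names and payload shapes, new visualizations can join or leave the session without touching existing ones, which is exactly the flexibility a rapidly refocusing analysis workflow needs.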

Journal ArticleDOI
TL;DR: The implementation of TVB-EduPack is covered, which offers two educational functionalities that seamlessly integrate into TVB's graphical user interface (GUI) that allow flexible customization of the modeling process and self-defined batch- and post-processing applications while benefitting from the full power of the Python language and its toolboxes.
Abstract: The Virtual Brain (TVB; thevirtualbrain.org) is a neuroinformatics platform for full brain network simulation based on individual anatomical connectivity data. The framework addresses clinical and neuroscientific questions by simulating multi-scale neural dynamics that range from local population activity to large-scale brain function and related macroscopic signals like electroencephalography and functional magnetic resonance imaging. TVB is equipped with a graphical and a command-line interface to create models that capture the characteristic biological variability to predict the brain activity of individual subjects. To give researchers from various backgrounds a quick start into TVB and brain network modeling in general, we developed an educational module: TVB-EduPack. EduPack offers two educational functionalities that seamlessly integrate into TVB's graphical user interface (GUI): (i) interactive tutorials introduce GUI elements, guide users through the basic mechanics of software usage, and develop complex use-case scenarios; animations, videos, and textual descriptions convey essential principles of computational neuroscience and brain modeling; (ii) an automatic script generator records model parameters and produces input files for TVB's Python programming interface; thereby, simulation configurations can be exported as scripts that allow flexible customization of the modeling process and self-defined batch- and post-processing applications while benefiting from the full power of the Python language and its toolboxes. This article covers the implementation of TVB-EduPack and its integration into the TVB architecture. Like TVB, EduPack is an open source community project that thrives on the participation and contributions of its users. TVB-EduPack can be obtained as part of TVB from thevirtualbrain.org.
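The script-generator idea in (ii) boils down to serializing the parameters a user set in the GUI as executable source text. The toy below shows that round trip; the function name and parameter keys are invented for illustration and are not TVB's actual API, although the model name mimics a TVB population model.

```python
def generate_script(params):
    """Toy script generator: turn a dict of recorded GUI parameters
    into runnable Python source defining a `config` dict."""
    lines = ["# auto-generated simulation script", "config = {"]
    for key, value in sorted(params.items()):
        lines.append(f"    {key!r}: {value!r},")
    lines.append("}")
    return "\n".join(lines)

# Parameters as a GUI session might have recorded them (illustrative).
script = generate_script({
    "model": "Generic2dOscillator",
    "coupling_strength": 0.0042,
    "simulation_length": 1000.0,
})

# The generated text is itself valid Python, so a user can edit it,
# add batch loops or post-processing, and execute it directly.
namespace = {}
exec(script, namespace)
```

The key design point is that the export target is ordinary source code rather than an opaque settings file: anything the GUI can configure becomes a starting point for arbitrary scripted workflows.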

Journal ArticleDOI
TL;DR: A novel procedure based on machine learning allows the efficient and unbiased selection of a variety of spontaneous CDPs with different shapes and amplitudes that are assumed to represent the activation of functionally coupled sets of dorsal horn neurones that acquire different, structured configurations in response to nociceptive stimuli.
Abstract: Previous studies aimed at disclosing the functional organization of the neuronal networks involved in the generation of the spontaneous cord dorsum potentials (CDPs) in the lumbosacral spinal segments used predetermined templates to select specific classes of spontaneous CDPs. Since this procedure was time-consuming and required continuous supervision, it was limited to the analysis of two specific types of CDPs (negative CDPs and negative-positive CDPs), thus excluding potentials that may reflect the activation of other neuronal networks of presumed functional relevance. We now present a novel procedure based on machine learning that allows the efficient and unbiased selection of a variety of spontaneous CDPs with different shapes and amplitudes. The reliability and performance of the method are evaluated by analyzing the effects of the intradermal injection of small amounts of capsaicin, a procedure known to induce a state of central sensitization leading to allodynia and hyperalgesia, on the probabilities of generation of different classes of spontaneous CDPs in the anesthetized cat. The proposed selection method allowed detection of spontaneous CDPs with specific shapes and amplitudes that are assumed to represent the activation of functionally coupled sets of dorsal horn neurones that acquire different, structured configurations in response to nociceptive stimuli. These changes are considered responses that adapt the transmission of sensory information to specific functional requirements as part of homeostatic adjustments.
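The core of such an unsupervised selection step is grouping waveforms by shape rather than matching them against fixed templates. The sketch below clusters synthetic "negative" and "negative-positive" CDP-like waveforms with a bare-bones k-means; the waveform shapes, features, and algorithm are illustrative stand-ins, not the paper's actual pipeline.

```python
import numpy as np

def kmeans(X, k, iters=20):
    """Bare-bones k-means on waveform vectors; initial centers are
    spread deterministically over the dataset."""
    centers = X[np.linspace(0, len(X) - 1, k).astype(int)].copy()
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        # Squared Euclidean distance of every waveform to every center.
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

# Synthetic CDP-like waveforms: a purely negative deflection and a
# negative deflection followed by a positive one.
t = np.linspace(0.0, 1.0, 50)
negative = -np.exp(-((t - 0.3) ** 2) / 0.01)
neg_pos = negative + 1.5 * np.exp(-((t - 0.6) ** 2) / 0.01)
X = np.array([negative + 0.01 * i for i in range(5)]
             + [neg_pos + 0.01 * i for i in range(5)])
labels = kmeans(X, 2)   # recovers the two shape classes
```

Once waveforms are grouped this way, class probabilities before and after an intervention (such as capsaicin injection) can be compared without an operator pre-specifying which shapes to look for.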

Journal ArticleDOI
TL;DR: This paper argues that well-constructed software libraries are repositories of both practical and theoretical content that not only serve data analysis but also help educate current and future scientists, which substantially augments traditional publication.
Abstract: Open source software is a fundamental mechanism for storing and disseminating knowledge. This role is critical to science and is arguably equally, if not more, important than traditional publication venues in terms of practical and long-term impact. Well-constructed software libraries are repositories of both practical and theoretical content that not only serve data analysis but also help educate current and future scientists, which substantially augments traditional publication. Highly engineered software resources such as BLAS, Armadillo, Eigen, SciPy, R and its packages, Theano, and many more are critical to the science of the Large Hadron Collider, the Human Genome Project, and the Human Connectome Project, and to smaller projects conducted at universities throughout the world.

Journal ArticleDOI
TL;DR: It is demonstrated how Golgi can enhance connectomic literature searches with a case study investigating a thalamocortical circuit involving the Nucleus Accumbens and Golgi’s potential and future directions for growth in systems neuroscience and connectomics are explored.
Abstract: Golgi (http://www.usegolgi.com) is a prototype interactive map of the rat brain that helps researchers intuitively interact with neuroanatomy, connectomics, and cellular and chemical architecture. The flood of "-omic" data urges new ways to help researchers connect discrete findings to the larger context of the nervous system. Here we explore Golgi's underlying reasoning and techniques, and how our design decisions balance the constraints of building a tool that is both scientifically useful and usable. We demonstrate how Golgi can enhance connectomic literature searches with a case study investigating a thalamocortical circuit involving the Nucleus Accumbens, and we explore Golgi's potential and future directions for growth in systems neuroscience and connectomics.