Showing papers in "Neuroinformatics in 2015"


Journal ArticleDOI
TL;DR: This work introduces a framework for supervised segmentation based on multiple modality intensity, geometry, and asymmetry feature sets that interface the supervised learning capabilities of the random forest model with regularized probabilistic segmentation using the recently developed ANTsR package.
Abstract: Segmenting and quantifying gliomas from MRI is an important task for diagnosis, planning intervention, and for tracking tumor changes over time. However, this task is complicated by the lack of prior knowledge concerning tumor location, spatial extent, shape, possible displacement of normal tissue, and intensity signature. To accommodate such complications, we introduce a framework for supervised segmentation based on multiple modality intensity, geometry, and asymmetry feature sets. These features drive a supervised whole-brain and tumor segmentation approach based on random forest-derived probabilities. The asymmetry-related features (based on optimal symmetric multimodal templates) demonstrate excellent discriminative properties within this framework. We also gain performance by generating probability maps from random forest models and using these maps for a refining Markov random field regularized probabilistic segmentation. This strategy allows us to interface the supervised learning capabilities of the random forest model with regularized probabilistic segmentation using the recently developed ANTsR package--a comprehensive statistical and visualization interface between the popular Advanced Normalization Tools (ANTs) and the R statistical project. The reported algorithmic framework was the top-performing entry in the MICCAI 2013 Multimodal Brain Tumor Segmentation challenge. The challenge data varied widely, consisting of four-modality MRI of both high-grade and low-grade gliomas from five different institutions. Average Dice overlap measures for the final algorithmic assessment were 0.87, 0.78, and 0.74 for "complete", "core", and "enhanced" tumor components, respectively.
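A minimal sketch of the two-stage idea described above: a random forest produces voxelwise class-probability maps, which are then spatially regularized before the final labeling. Plain Gaussian smoothing stands in for the paper's Markov random field step (done there via ANTsR), and `train_feats`, `train_labels`, and `vol_shape` are hypothetical inputs, not the study's data.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from sklearn.ensemble import RandomForestClassifier

def segment_volume(train_feats, train_labels, test_feats, vol_shape, sigma=1.0):
    """train_feats/test_feats: (n_voxels, n_features); vol_shape: (x, y, z)."""
    rf = RandomForestClassifier(n_estimators=100, random_state=0)
    rf.fit(train_feats, train_labels)
    probs = rf.predict_proba(test_feats)        # (n_voxels, n_classes)
    maps = probs.T.reshape(-1, *vol_shape)      # one probability map per class
    # Spatial regularization: Gaussian smoothing as a simple stand-in for
    # the MRF-regularized probabilistic segmentation used in the paper.
    smoothed = np.stack([gaussian_filter(m, sigma) for m in maps])
    return smoothed.argmax(axis=0)              # final hard labels
```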

245 citations


Journal ArticleDOI
TL;DR: The Scalable Brain Atlas is an atlas viewer that displays brain atlas data as a stack of slices in which stereotaxic coordinates and brain regions can be selected, and supports plugins which run inside the viewer and respond when a new slice, coordinate or region is selected.
Abstract: The Scalable Brain Atlas (SBA) is a collection of web services that provide unified access to a large collection of brain atlas templates for different species. Its main component is an atlas viewer that displays brain atlas data as a stack of slices in which stereotaxic coordinates and brain regions can be selected. These are subsequently used to launch web queries to resources that require coordinates or region names as input. It supports plugins which run inside the viewer and respond when a new slice, coordinate or region is selected. It contains 20 atlas templates in six species, and plugins to compute coordinate transformations, display anatomical connectivity and fiducial points, and retrieve properties, descriptions, definitions and 3D reconstructions of brain regions. The ambition of SBA is to provide a unified representation of all publicly available brain atlases directly in the web browser, while remaining a responsive and lightweight resource that specializes in atlas comparisons, searches, coordinate transformations and interactive displays.

217 citations


Journal ArticleDOI
TL;DR: The results suggest that the biological footprint (effect size) has a dramatic influence on prediction performance, and cross-validation estimates of performance, while generally optimistic, correlate well with generalization accuracy on a new dataset.
Abstract: Multivariate pattern analysis (MVPA) methods have become an important tool in neuroimaging, revealing complex associations and yielding powerful prediction models. Despite methodological developments and novel application domains, there has been little effort to compile benchmark results that researchers can reference and compare against. This study takes a significant step in this direction. We employed three classes of state-of-the-art MVPA algorithms and common types of structural measurements from brain Magnetic Resonance Imaging (MRI) scans to predict an array of clinically relevant variables (diagnosis of Alzheimer's, schizophrenia, autism, and attention deficit and hyperactivity disorder; age, cerebrospinal fluid derived amyloid-β levels and mini-mental state exam score). We analyzed data from over 2,800 subjects, compiled from six publicly available datasets. The employed data and computational tools are freely distributed ( https://www.nmr.mgh.harvard.edu/lab/mripredict), making this the largest, most comprehensive, reproducible benchmark image-based prediction experiment to date in structural neuroimaging. Finally, we make several observations regarding the factors that influence prediction performance and point to future research directions. Unsurprisingly, our results suggest that the biological footprint (effect size) has a dramatic influence on prediction performance. Though the choice of image measurement and MVPA algorithm can impact the result, there was no universally optimal selection. Intriguingly, the choice of algorithm seemed to be less critical than the choice of measurement type. Finally, our results showed that cross-validation estimates of performance, while generally optimistic, correlate well with generalization accuracy on a new dataset.
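The closing observation, that cross-validation estimates are optimistic but track generalization accuracy, is easy to reproduce in outline. A hedged sketch with hypothetical arrays: `X_a, y_a` play the role of the benchmark dataset and `X_b, y_b` a new dataset; the study's actual measurements and MVPA algorithms differ.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
X_a, y_a = rng.standard_normal((200, 50)), rng.integers(0, 2, 200)  # "benchmark" data
X_b, y_b = rng.standard_normal((100, 50)), rng.integers(0, 2, 100)  # "new" dataset

clf = LinearSVC()
cv_estimate = cross_val_score(clf, X_a, y_a, cv=5).mean()  # within-dataset CV estimate
generalization = clf.fit(X_a, y_a).score(X_b, y_b)         # accuracy on the new dataset
print(cv_estimate, generalization)
```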

142 citations


Journal ArticleDOI
TL;DR: It is concluded that, in order to avoid artifacts and exclude the several sources of bias that may influence the analysis, an optimal method should comprise a careful preprocessing of the images, be based on multimodal, complementary data, take into account spatial information about the lesions and correct for false positives.
Abstract: White matter hyperintensities (WMH) are commonly seen in the brain of healthy elderly subjects and patients with several neurological and vascular disorders. A truly reliable and fully automated method for quantitative assessment of WMH on magnetic resonance imaging (MRI) has not yet been identified. In this paper, we review and compare the large number of automated approaches proposed for segmentation of WMH in the elderly and in patients with vascular risk factors. We conclude that, in order to avoid artifacts and exclude the several sources of bias that may influence the analysis, an optimal method should comprise a careful preprocessing of the images, be based on multimodal, complementary data, take into account spatial information about the lesions and correct for false positives. All these features should not exclude computational leanness and adaptability to available data.

140 citations


Journal ArticleDOI
TL;DR: The BigNeuron project aims to gather a worldwide community to define and advance the state-of-the-art of single neuron reconstruction by bench-testing as many automated neuron reconstruction methods as possible against as many neuron datasets as possible following standardized data protocols and evaluation methods.
Abstract: The three-dimensional (3D) morphology of axons and dendrites is important for many neuroscience studies. Common tasks such as distinguishing and characterizing neuron phenotypes, modeling projection and potential connectivity patterns, and simulating the electrophysiological behavior of single neurons and neuronal networks all depend on accurate knowledge of 3D neuronal morphology. In fact, such tasks often require the morphology to be explicitly and quantitatively described as opposed to simply illustrated by an image stack [1]. Therefore a critical first step in many studies is the digital reconstruction of the 3D morphology of neurons from image stacks. Neuron reconstruction methods have evolved over the last 100 years from the 2D hand drawings by Ramon y Cajal and his contemporaries to quantitative tracing of neuron morphologies in 3D with the help of computers. To this day, manual tracing is still the prevailing method even for 3D reconstruction [2]. However, manual approaches are prohibitively expensive for analyzing image data approaching the scale of terabytes and thousands of image stacks, let alone mining higher-order patterns in these data. The long-standing need to automate the laborious and subjective manual analysis of light-microscopic and other types of microscopic images has motivated a large number of bioimage informatics efforts [3]. The recent advance in imaging throughput, combined with the desire for large-scale computational modeling, has added a sense of urgency to this need. In 2010 a worldwide neuron reconstruction contest named DIADEM (short for "digital reconstruction of axonal and dendritic morphology") was organized by several major institutions as a way to stimulate progress and attract new computational researchers to join the technology development community [4]. The goal of DIADEM was to develop algorithms capable of automatically converting stacks of images visualizing the tree-like shape of neuronal axons and dendrites into faithful 3D digital reconstructions. The contest succeeded in stimulating a burst of progress. However, none of the algorithms presented at the finishing stage of DIADEM reached the originally projected goal of a 20-fold speed-up in the reconstruction process compared to manual reconstruction [5]. One practical limitation of DIADEM was that the reconstruction methods were implemented in different languages, ran on different platforms, and followed different protocols to load image data and export reconstructions. This hampered a direct comparison of the methods in terms of computational efficiency and has ever since been a stumbling block to further extending the experiment to big-data, high-throughput applications. In addition, several relatively successful methods recently used in various neuroinformatics projects were introduced [6] or continued to be developed [7] after the DIADEM contest. Current reconstruction techniques, both manual and automated, show tremendous variability in the attributes and completeness of the resulting morphology [8]. Yet, building a large library of high-quality 3D neuron morphologies is essential for comprehensively cataloging the types of cells in a nervous system. Furthermore, enabling comparisons of neuron morphologies across species will provide additional sources of insight into neural function.
It would be beneficial for neuron-reconstruction-related research and applications to aggregate and consolidate the collective progress on automated neuron tracing into a practically useful product for neuroscience applications. One strategy to overcome the difficulties in dealing with different tracing protocols, data formats, usability, and reproducibility is to port the available methods to a common, versatile software platform. This allows the methods to be bench-tested against very large-scale neuron datasets for effective validation. The BigNeuron project was formally launched in March 2015 to achieve such goals [9]. This project aims to gather a worldwide community to define and advance the state of the art of single-neuron reconstruction by bench-testing as many automated neuron reconstruction methods as possible against as many neuron datasets as possible, following standardized data protocols and evaluation methods. BigNeuron will durably benefit the neuroscience community by establishing a Big Data resource and a set of standardized novel tools for neuron morphologies. To make BigNeuron a success, tangible goals and feasible approaches have to be developed. While the vision for BigNeuron is to continue for a long time through multiple phases, the first phase will last about a year and a half. The goal of this first phase is to establish the basic infrastructure and release useful data, tools, and analyses. A series of events are being organized for 2015. The kick-off algorithm-porting hackathon was held in Beijing, China in March 2015, with more than 20 attendees from various research groups from Asia, Australia, and America. Follow-up hackathons and workshops will be held at several other locations in Europe and the USA. The bench-testing will start in the summer of 2015, followed by data analysis open to the world community. The project welcomes and encourages the participation of any individuals and organizations. Subsequent phases may add important layers of complexity such as time-lapse, multi-channel, and multi-neuron data. In the long run, BigNeuron may also enable a robust cloud-based automated service, where researchers could upload an image stack and receive back a digital morphological tracing. To advance the neuron reconstruction field, collaborative community projects such as BigNeuron can powerfully complement the competitive spirit of previous initiatives such as DIADEM. The suitability of various neuron reconstruction methods for specific neuron image datasets can still be quantified (and ranked) from the forthcoming BigNeuron results. However, the synergy of many research groups across the globe may provide a refreshing perspective on how various reconstruction methods and results might be integrated. Without a doubt, bench-testing many reconstruction methods against many neuron image datasets will produce very large-scale reconstruction databases. Consensus reconstructions created from all variants generated with each algorithm will also be deposited in NeuroMorpho.Org (ensuring availability in both the de facto standard SWC and NeuroML formats) as well as potentially other databases. Such Big Data will not only benefit method developers and image contributors, but also provide valuable new data resources for computational modelers and data analysts.
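Since the paragraph above names SWC as the de facto standard format for deposited reconstructions, a minimal reader is worth sketching. SWC is a plain-text format with one point per line and seven columns (id, structure type, x, y, z, radius, parent id; the root has parent id -1). The helper below assumes a well-formed file and uses hypothetical names.

```python
import numpy as np

def read_swc(path):
    """Return {id: (xyz, parent_id)} from an SWC reconstruction file."""
    rows = [line.split() for line in open(path)
            if line.strip() and not line.lstrip().startswith('#')]
    return {int(r[0]): (np.array(r[2:5], dtype=float), int(r[6])) for r in rows}

def total_cable_length(points):
    """Sum of parent-child segment lengths, a basic morphometric."""
    return sum(np.linalg.norm(xyz - points[parent][0])
               for xyz, parent in points.values()
               if parent != -1 and parent in points)
```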

85 citations


Journal ArticleDOI
TL;DR: This research presents a review of the evolution of automated methods for the segmentation of the hippocampus in MRI and an evaluation of those methods considering the degree of user intervention, computational cost, segmentation accuracy and feasibility of application in a clinical routine.
Abstract: The segmentation of the hippocampus in Magnetic Resonance Imaging (MRI) has been an important procedure to diagnose and monitor several clinical situations. The precise delineation of the borders of this brain structure makes it possible to obtain a measure of the volume and estimate its shape, which can be used to diagnose some diseases, such as Alzheimer's disease, schizophrenia and epilepsy. As the manual segmentation procedure in three-dimensional images is highly time consuming and the reproducibility is low, automated methods introduce substantial gains. On the other hand, the implementation of those methods is a challenge because of the low contrast of this structure in relation to the neighboring areas of the brain. Within this context, this research presents a review of the evolution of automated methods for the segmentation of the hippocampus in MRI. Many proposed methods for segmentation of the hippocampus have been published in leading journals in the medical image processing area. This paper describes these methods, presenting the techniques used and quantitatively comparing the methods based on the Dice Similarity Coefficient. Finally, we present an evaluation of those methods considering the degree of user intervention, computational cost, segmentation accuracy and feasibility of application in a clinical routine.
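The quantitative comparison in this review rests on the Dice Similarity Coefficient, DSC = 2|A ∩ B| / (|A| + |B|), where A is the automatic and B the manual hippocampus mask. A one-function sketch for binary arrays:

```python
import numpy as np

def dice(auto_mask, manual_mask):
    """Dice Similarity Coefficient between two binary segmentation masks."""
    a, b = auto_mask.astype(bool), manual_mask.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0
```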

79 citations


Journal ArticleDOI
TL;DR: A biophysical forward-modelling formalism based on the finite element method (FEM) is used to establish quantitatively accurate links between neural activity in the slice and potentials recorded in the MEA set-up, and methods for estimation of current-source density (CSD) from MEA potentials are explored.
Abstract: Microelectrode arrays (MEAs), substrate-integrated planar arrays of up to thousands of closely spaced metal electrode contacts, have long been used to record neuronal activity in in vitro brain slices with high spatial and temporal resolution. However, the analysis of the MEA potentials has generally been qualitative. Here we use a biophysical forward-modelling formalism based on the finite element method (FEM) to establish quantitatively accurate links between neural activity in the slice and potentials recorded in the MEA set-up. Then we develop a simpler approach based on the method of images (MoI) from electrostatics, which allows for computation of MEA potentials by simple formulas similar to what is used for homogeneous volume conductors. As we find MoI to give accurate results in most situations of practical interest, including anisotropic slices covered with highly conductive saline and MEA-electrode contacts of sizable physical extensions, a Python software package (ViMEAPy) has been developed to facilitate forward-modelling of MEA potentials generated by biophysically detailed multicompartmental neurons. We apply our scheme to investigate the influence of the MEA set-up on single-neuron spikes as well as on potentials generated by a cortical network comprising more than 3000 model neurons. The generated MEA potentials are substantially affected by both the saline bath covering the brain slice and a (putative) inadvertent saline layer at the interface between the MEA chip and the brain slice. We further explore methods for estimation of current-source density (CSD) from MEA potentials, and find the results to be much less sensitive to the experimental set-up.
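The method-of-images idea can be illustrated in a few lines for the simplest case: a point current source in tissue of conductivity σ above an insulating MEA plane at z = 0, where a single mirror source doubles the potential at the chip surface. The paper's full slice geometry (highly conductive saline on top) requires an infinite series of image sources, which ViMEAPy handles; this sketch, with hypothetical names, shows only the single-mirror case.

```python
import numpy as np

def moi_potential(elec_xy, src, I=1.0, sigma=0.3):
    """Potential (V) at MEA contacts on the z=0 plane from a point source.

    elec_xy: (n, 2) electrode positions; src: (x, y, z) source position, z > 0;
    I: source current (A); sigma: tissue conductivity (S/m).
    """
    src = np.asarray(src, dtype=float)
    elec = np.column_stack([elec_xy, np.zeros(len(elec_xy))])
    mirror = src * np.array([1.0, 1.0, -1.0])   # image source across the insulator
    r_real = np.linalg.norm(elec - src, axis=1)
    r_image = np.linalg.norm(elec - mirror, axis=1)
    return I / (4 * np.pi * sigma) * (1.0 / r_real + 1.0 / r_image)
```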

77 citations


Journal ArticleDOI
TL;DR: NeuroMorph bridges the gap that currently exists between rapid reconstruction techniques, offered by computer vision research, and the need to collect measurements of shape and form from segmented structures that is currently done using manual segmentation methods.
Abstract: Serial electron microscopy imaging is crucial for exploring the structure of cells and tissues. The development of block-face scanning electron microscopy methods and their ability to capture large image stacks, some with near-isotropic voxels, is proving particularly useful for the exploration of brain tissue. This has led to the creation of numerous algorithms and software for segmenting out different features from the image stacks. However, there are few tools available to view these results and make detailed morphometric analyses on all, or part, of these 3D models. We have addressed this issue by constructing a collection of software tools, called NeuroMorph, with which users can view the segmentation results, in conjunction with the original image stack, manipulate these objects in 3D, and make measurements of any region. This approach to collecting morphometric data provides a faster means of analysing the geometry of structures, such as dendritic spines and axonal boutons. It bridges the gap that currently exists between the rapid reconstruction techniques offered by computer vision research and the need to collect measurements of shape and form from segmented structures, which is currently done using manual methods.

66 citations


Journal ArticleDOI
TL;DR: The BlastNeuron approach was able to accurately and efficiently retrieve morphologically and functionally similar neuron structures from a large morphology database, identify the local common structures, and find clusters of neurons that share similarities in both morphology and molecular profiles.
Abstract: Characterizing the identity and types of neurons in the brain, as well as their associated function, requires a means of quantifying and comparing 3D neuron morphology. Presently, neuron comparison methods are based on statistics from neuronal morphology such as size and number of branches, which are not fully suitable for detecting local similarities and differences in the detailed structure. We developed BlastNeuron to compare neurons in terms of their global appearance, detailed arborization patterns, and topological similarity. BlastNeuron first compares and clusters 3D neuron reconstructions based on global morphology features and moment invariants, independent of their orientations, sizes, level of reconstruction and other variations. Subsequently, BlastNeuron performs local alignment between any pair of retrieved neurons via a tree-topology driven dynamic programming method. A 3D correspondence map can thus be generated at the resolution of single reconstruction nodes. We applied BlastNeuron to three datasets: (1) 10,000+ neuron reconstructions from a public morphology database, (2) 681 newly and manually reconstructed neurons, and (3) neuron reconstructions produced using several independent reconstruction methods. Our approach was able to accurately and efficiently retrieve morphologically and functionally similar neuron structures from a large morphology database, identify the local common structures, and find clusters of neurons that share similarities in both morphology and molecular profiles.

61 citations


Journal ArticleDOI
TL;DR: The NeuralAct package takes as input the 3D coordinates of the recording sensors, a cortical model in the same coordinate system, and the activation data to be visualized at each sensor, and renders the resulting activations in color on the cortical model.
Abstract: Electrocorticography (ECoG) records neural signals directly from the surface of the cortex. Due to its high temporal and favorable spatial resolution, ECoG has emerged as a valuable new tool in acquiring cortical activity in cognitive and systems neuroscience. Many studies using ECoG have visualized topographies of cortical activity or statistical tests on a three-dimensional model of the cortex, but a dedicated tool for this function has not yet been described. In this paper, we describe the NeuralAct package that serves this purpose. This package takes as input the 3D coordinates of the recording sensors, a cortical model in the same coordinate system (e.g., Talairach), and the activation data to be visualized at each sensor. It then aligns the sensor coordinates with the cortical model, convolves the activation data with a spatial kernel, and renders the resulting activations in color on the cortical model. The NeuralAct package can plot cortical activations of an individual subject as well as activations averaged over subjects. It can render single images as well as sequences of images. The software runs under Matlab and is stable and robust. We here provide the tool and describe its visualization capabilities and procedures. The provided package contains thoroughly documented code and includes a simple demo that guides the researcher through the functionality of the tool.
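The kernel-convolution step described above is conceptually simple: each sensor's activation is spread over nearby cortical vertices with a spatial kernel and the contributions are blended. A hedged NumPy sketch with illustrative names (NeuralAct itself is a Matlab package, and its exact kernel and blending rules may differ):

```python
import numpy as np

def activation_on_mesh(vertices, sensors, values, fwhm_mm=10.0):
    """vertices: (v, 3); sensors: (s, 3); values: (s,). Returns (v,) activations."""
    sigma = fwhm_mm / 2.3548                     # FWHM -> Gaussian sigma
    d2 = ((vertices[:, None, :] - sensors[None, :, :]) ** 2).sum(-1)  # (v, s)
    w = np.exp(-d2 / (2.0 * sigma ** 2))         # Gaussian spatial kernel
    return (w * values).sum(1) / np.maximum(w.sum(1), 1e-12)  # weighted blend
```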

51 citations


Journal ArticleDOI
TL;DR: A novel representation, the Minimum Shape-Cost (MSC) Tree, is introduced that approximates the dendrite centerline with sub-voxel accuracy; the uniqueness of this shape representation and its computational efficiency are demonstrated.
Abstract: The challenges faced in analyzing optical imaging data from neurons include a low signal-to-noise ratio of the acquired images and the multiscale nature of the tubular structures that range in size from hundreds of microns to hundreds of nanometers. In this paper, we address these challenges and present a computational framework for an automatic, three-dimensional (3D) morphological reconstruction of live nerve cells. The key aspects of this approach are: (i) detection of neuronal dendrites through learning 3D tubular models, and (ii) skeletonization by a new algorithm using a morphology-guided deformable model for extracting the dendritic centerline. To represent the neuron morphology, we introduce a novel representation, the Minimum Shape-Cost (MSC) Tree that approximates the dendrite centerline with sub-voxel accuracy and demonstrate the uniqueness of such a shape representation as well as its computational efficiency. We present extensive quantitative and qualitative results that demonstrate the accuracy and robustness of our method.

Journal ArticleDOI
TL;DR: A novel supervised discriminative group sparse representation method that can learn connectivity coefficients that are similar within the same class and distinct between classes, thus helping enhance the diagnostic accuracy, and that allows the learned common network structure to preserve network-specific and label-related characteristics.
Abstract: Research on an early detection of Mild Cognitive Impairment (MCI), a prodromal stage of Alzheimer's Disease (AD), with resting-state functional Magnetic Resonance Imaging (rs-fMRI) has been of great interest for the last decade. As witnessed by recent studies, functional connectivity is a useful concept in extracting brain network features and finding biomarkers for brain disease diagnosis. However, the estimation of functional connectivity from rs-fMRI remains challenging due to the inherently high dimensionality of the problem. In order to tackle this problem, we utilize a group sparse representation along with a structural equation model. Unlike the conventional group sparse representation method, which does not explicitly consider class-label information that can help enhance the diagnostic performance, in this paper we propose a novel supervised discriminative group sparse representation method that penalizes a large within-class variance and a small between-class variance of connectivity coefficients. Thanks to the newly devised penalization terms, we can learn connectivity coefficients that are similar within the same class and distinct between classes, thus helping enhance the diagnostic accuracy. The proposed method also allows the learned common network structure to preserve the network-specific and label-related characteristics. In our experiments on the rs-fMRI data of 37 subjects (12 MCI; 25 healthy normal controls) with a cross-validation technique, we demonstrated the validity and effectiveness of the proposed method, showing a diagnostic accuracy of 89.19 % and a sensitivity of 0.9167.
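The penalty structure described in words above (small within-class variance, large between-class variance of the connectivity coefficients) can be written schematically. The display below uses our own notation and is a generic sketch of such an objective, not necessarily the authors' exact formulation:

```latex
\min_{\mathbf{W}}\;
\|\mathbf{X}-\mathbf{D}\mathbf{W}\|_F^2
+\lambda_1\sum_{g}\|\mathbf{W}_g\|_2
+\lambda_2\sum_{c}\sum_{i\in c}\|\mathbf{w}_i-\bar{\mathbf{w}}_c\|_2^2
-\lambda_3\sum_{c}n_c\|\bar{\mathbf{w}}_c-\bar{\mathbf{w}}\|_2^2
```

Here the first term is the data fit, the second the usual group-sparsity penalty, the λ2 term shrinks each coefficient vector toward its class mean (small within-class variance), and the λ3 term pushes class means apart (large between-class variance).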

Journal ArticleDOI
TL;DR: An adaptive image enhancement method specifically designed to improve the signal-to-noise ratio of several types of individual neurons and brain vasculature images, based on detecting the salient features of fibrous structures combined with adaptive estimation of the optimal context windows where such saliency would be detected.
Abstract: It is important to digitally reconstruct the 3D morphology of neurons and brain vasculatures. A number of previous methods have been proposed to automate the reconstruction process. However, in many cases, noise and low signal contrast with respect to the image background still hamper our ability to use automation methods directly. Here, we propose an adaptive image enhancement method specifically designed to improve the signal-to-noise ratio of several types of individual neurons and brain vasculature images. Our method is based on detecting the salient features of fibrous structures, e.g. the axon and dendrites combined with adaptive estimation of the optimal context windows where such saliency would be detected. We tested this method for a range of brain image datasets and imaging modalities, including bright-field, confocal and multiphoton fluorescent images of neurons, and magnetic resonance angiograms. Applying our adaptive enhancement to these datasets led to improved accuracy and speed in automated tracing of complicated morphology of neurons and vasculatures.

Journal ArticleDOI
TL;DR: Tuning of the parameters in FSL and the use of a proper atlas in SPM showed a significant reduction in the systematic bias and the error in ICV estimation via these automated tools.
Abstract: Intracranial volume (ICV) is a standard measure often used in morphometric analyses to correct for head size in brain studies. Inaccurate ICV estimation could introduce bias in the outcome. The current study provides a decision aid for defining protocols for ICV estimation across different subject groups, in terms of the sampling frequencies that can be optimally used on the volumetric MRI data and the type of software most suitable for estimating the ICV measure. Four groups of 53 subjects are considered, including adult controls (AC), adults with Alzheimer's disease (AD), pediatric controls (PC) and a group of pediatric epilepsy subjects (PE). Reference measurements were calculated for each subject by manually tracing the intracranial cavity without sub-sampling. The reliability of the reference measurements was assured through intra- and inter-variation analyses. Three well-known, publicly available software packages (FreeSurfer Ver. 5.3.0, FSL Ver. 5.0, and SPM in versions SPM8 and SPM12) were examined in their ability to automatically estimate ICV across the groups. Results of the sub-sampling studies with 95 % confidence showed that, in order to keep the accuracy of the interleaved slice-sampling protocol above 99 %, the sampling period cannot exceed 20 mm for AC, 25 mm for PC, 15 mm for AD and 17 mm for the PE groups. The study assumes a priori knowledge about the population under study in the automated ICV estimation. Tuning of the parameters in FSL and the use of a proper atlas in SPM showed a significant reduction in the systematic bias and the error in ICV estimation via these automated tools. SPM12 with the use of a pediatric template is found to be the more suitable candidate for the PE group. SPM12 and FSL subjected to tuning are the more appropriate tools for the PC group. The random error is minimized for FreeSurfer in the AD group, while SPM8 showed less systematic bias. Across the AC group, both SPM12 and FreeSurfer performed well, but SPM12 reported a smaller systematic bias.
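The slice-sampling protocol evaluated above amounts to a Cavalieri-style volume estimate: trace every k-th slice and scale the summed traced areas by the sampling period. A sketch with a hypothetical binary mask volume:

```python
import numpy as np

def icv_subsampled(mask, voxel_mm, period_mm):
    """mask: (nz, ny, nx) binary intracranial mask; voxel_mm: (dz, dy, dx)."""
    dz, dy, dx = voxel_mm
    k = max(1, int(round(period_mm / dz)))        # keep every k-th slice
    slice_areas = mask[::k].sum(axis=(1, 2)) * dy * dx
    return slice_areas.sum() * dz * k             # Cavalieri estimate in mm^3
```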

Journal ArticleDOI
TL;DR: Studies on heterogeneous ensembles of real neuronal 3-D reconstructions drawn from the NeuroMorpho database show that the proposed method identifies meaningful grouping patterns among neurons based on arbor morphology, revealing the underlying morphological differences.
Abstract: This paper presents a robust unsupervised harmonic co-clustering method for profiling arbor morphologies for ensembles of reconstructed brain cells (e.g., neurons, microglia) based on quantitative measurements of the cellular arbors. Specifically, this method can identify groups and sub-groups of cells with similar arbor morphologies, and simultaneously identify the hierarchical grouping patterns among the quantitative arbor measurements. The robustness of the proposed algorithm derives from use of the diffusion distance measure for comparing multivariate data points, harmonic analysis theory, and a Haar-like wavelet basis for multivariate data smoothing. This algorithm is designed to be practically usable, and is embedded into the actively linked three-dimensional (3-D) visualization and analytics system in the free and open-source FARSIGHT image analysis toolkit for interactive exploratory population-scale neuroanatomic studies. Studies on synthetic datasets demonstrate its superiority in clustering data matrices compared to recent hierarchical clustering algorithms. Studies on heterogeneous ensembles of real neuronal 3-D reconstructions drawn from the NeuroMorpho database show that the proposed method identifies meaningful grouping patterns among neurons based on arbor morphology, revealing the underlying morphological differences.

Journal ArticleDOI
TL;DR: It has been known for over a decade that seemingly anonymized data can be related to publicly available information to identify specific individuals, and there have been increased efforts to share research data to enable scientific discovery and achieve cost efficiencies.
Abstract: Genetic data has provided valuable insights into disease cause and risk as well as drug discovery and development in neuroscience. For example, human genetics studies have provided insights into cognition (Glahn et al. 2013) and psychiatric disorders (Kao et al. 2010). The genetic basis of several inherited disorders such as Down's Syndrome and Tay-Sachs disease is well known, and other associations such as the role of APOE in Alzheimer's disease are still extensively studied. However, despite advances in understanding the human genome, there are concerns about the privacy of genetic data and potential discrimination resulting from its disclosure, and there has been incomplete oversight of genetic testing (Scheuner et al. 2008). At the same time, there have been increased efforts to share research data to enable scientific discovery and achieve cost efficiencies. It has become clear that no scientist can guarantee absolute privacy, and it is also increasingly recognized that research will work better if scientists have more information about the people they study and that being identifiable has some benefits (Angrist 2013). There are examples of pioneering efforts in neuroscience research. The fMRI Data Center is a leader in open-access data sharing in the functional neuroimaging community, overcoming logistical, cultural and funding barriers (Mennes et al. 2013). Similarly, the INCF Task Force on Neuroimaging Datasharing has started work on tools to ease and automate sharing of raw, processed, and derived neuroimaging data and metadata (Poline et al. 2012). In the United States, legislation such as the Health Insurance Portability and Accountability Act (HIPAA) (Gostin 2001) and the Genetic Information Nondiscrimination Act have attempted to limit access to sensitive data and discrimination related to health insurance and employment, but it has been known for over a decade that seemingly anonymized data can be related to publicly available information to identify specific individuals (Braun et al. 2009) using diagnosis codes (Tamersoy et al. 2010), rare visible disorders (Eguale et al. 2005), allele frequencies (Craig et al. 2011), place and date of birth (Acquisti and Gross 2009), a combination of a surname with age and state (Gymrek et al. 2013), and patient health location visit patterns (Malin 2007). Re-identification methods have included genotype-phenotype inferences, family structures, and dictionary attacks (Malin 2005). In total, …

Journal ArticleDOI
TL;DR: This work presents a procedure to automatically build a [123I]FP-CIT SPECT template in the standard Montreal Neurological Institute (MNI) space using Otsu's method, and designs a posterized version of a brain image in the MNI space.
Abstract: Spatial affine registration of brain images to a common template is usually performed as a preprocessing step in intersubject and intrasubject comparison studies, computer-aided diagnosis, region of interest selection and brain segmentation in tomography. Nevertheless, it is not straightforward to build a template of [123I]FP-CIT SPECT brain images because they exhibit very low intensity values outside the striatum. In this work, we present a procedure to automatically build a [123I]FP-CIT SPECT template in the standard Montreal Neurological Institute (MNI) space. The proposed methodology consists of a head voxel selection using Otsu's method, followed by a posterization of the source images to three different levels: background, head, and striatum. Analogously, we also design a posterized version of a brain image in the MNI space; subsequently, we perform a spatial affine registration of the posterized source images to this image. The intensity of the transformed images is normalized linearly, assuming that the histogram of the intensity values follows an alpha-stable distribution. Lastly, we build the [123I]FP-CIT SPECT template by means of the transformed and normalized images. The proposed methodology is a fully automatic procedure that has been shown to work accurately even when a high-resolution magnetic resonance image for each subject is not available.
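The posterization step above maps each voxel to one of three levels. With scikit-image, multi-level Otsu thresholding gets close in two lines; this is a sketch of the idea, not the authors' exact pipeline, which applies Otsu's method for head-voxel selection and then posterizes:

```python
import numpy as np
from skimage.filters import threshold_multiotsu

def posterize(volume, classes=3):
    """Map intensities to levels 0 (background), 1 (head), 2 (striatum)."""
    thresholds = threshold_multiotsu(volume, classes=classes)  # classes-1 cuts
    return np.digitize(volume, bins=thresholds)
```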

Journal ArticleDOI
TL;DR: Wyrm is presented, an open source BCI toolbox in Python that can be used as a toolbox for analysis and visualization of neurophysiological data and in real-time settings, like an online BCI application.
Abstract: In recent years, Python has gained more and more traction in the scientific community. Projects like NumPy, SciPy, and Matplotlib have created a strong foundation for scientific computing in Python, and machine learning packages like scikit-learn and data analysis packages like Pandas are building on top of it. In this paper we present Wyrm (https://github.com/bbci/wyrm), an open source BCI toolbox in Python. Wyrm is applicable to a broad range of neuroscientific problems. It can be used as a toolbox for analysis and visualization of neurophysiological data and in real-time settings, like an online BCI application. In order to prevent software defects, Wyrm makes extensive use of unit testing. We will explain the key aspects of Wyrm's software architecture and design decisions for its data structure, and demonstrate and validate the use of our toolbox by presenting our approach to the classification tasks of two different data sets from the BCI Competition III. Furthermore, we will give a brief analysis of the data sets using our toolbox, and demonstrate how we implemented an online experiment using Wyrm. With Wyrm we add the final piece to our ongoing effort to provide a complete, free and open source BCI system in Python.

Journal ArticleDOI
TL;DR: Using Bayesian network classifiers, the interneurons are accurately characterized and classified, useful predictor variables are identified, and new possibilities are opened up for an objective and pragmatic classification of interneuron subsets.
Abstract: An accepted classification of GABAergic interneurons of the cerebral cortex is a major goal in neuroscience. A recently proposed taxonomy based on patterns of axonal arborization promises to be a pragmatic method for achieving this goal. It involves characterizing interneurons according to five axonal arborization features, called F1-F5, and classifying them into a set of predefined types, most of which are established in the literature. Unfortunately, there is little consensus among expert neuroscientists regarding the morphological definitions of some of the proposed types. While supervised classifiers were able to categorize the interneurons in accordance with experts' assignments, their accuracy was limited because they were trained with disputed labels. Thus, here we automatically classify interneuron subsets with different label reliability thresholds (i.e., such that every cell's label is backed by at least a certain (threshold) number of experts). We quantify the cells with parameters of axonal and dendritic morphologies and, in order to predict the type, also with axonal features F1-F4 provided by the experts. Using Bayesian network classifiers, we accurately characterize and classify the interneurons and identify useful predictor variables. In particular, we discriminate among reliable examples of common basket, horse-tail, large basket, and Martinotti cells with up to 89.52% accuracy, and single out the number of branches at 180 μm from the soma, the convex hull 2D area, and the axonal features F1-F4 as especially useful predictors for distinguishing among these types. These results open up new possibilities for an objective and pragmatic classification of interneurons.

Journal ArticleDOI
TL;DR: ModelView is developed, a web application for ModelDB that presents a graphical view of model structure augmented with contextual information forNEURON and NEURON-runnable models, and key features of the user interface along with the data analysis, storage, and visualization algorithms are explained.
Abstract: ModelDB ( modeldb.yale.edu ), a searchable repository of source code of more than 950 published computational neuroscience models, seeks to promote model reuse and reproducibility. Code sharing is a first step; however, model source code is often large and not easily understood. To aid users, we have developed ModelView, a web application for ModelDB that presents a graphical view of model structure augmented with contextual information for NEURON and NEURON-runnable (e.g. NeuroML, PyNN) models. Web presentation provides a rich, simulator-independent environment for interacting with graphs. The necessary data is generated by combining manual curation, text-mining the source code, querying ModelDB, and simulator introspection. Key features of the user interface along with the data analysis, storage, and visualization algorithms are explained. With this tool, researchers can examine and assess the structure of hundreds of models in ModelDB in a standardized presentation without installing any software, downloading the model, or reading model source code.

Journal ArticleDOI
TL;DR: An automatic method for segmenting the cerebellar peduncles, including the dSCP, using volumetric segmentation concepts based on extracted DTI features is presented.
Abstract: The cerebellar peduncles, comprising the superior cerebellar peduncles (SCPs), the middle cerebellar peduncle (MCP), and the inferior cerebellar peduncles (ICPs), are white matter tracts that connect the cerebellum to other parts of the central nervous system. Methods for automatic segmentation and quantification of the cerebellar peduncles are needed for objectively and efficiently studying their structure and function. Diffusion tensor imaging (DTI) provides key information to support this goal, but it remains challenging because the tensors change dramatically in the decussation of the SCPs (dSCP), the region where the SCPs cross. This paper presents an automatic method for segmenting the cerebellar peduncles, including the dSCP. The method uses volumetric segmentation concepts based on extracted DTI features. The dSCP and noncrossing portions of the peduncles are modeled as separate objects, and are initially classified using a random forest classifier together with the DTI features. To obtain geometrically correct results, a multi-object geometric deformable model is used to refine the random forest classification. The method was evaluated using a leave-one-out cross-validation on five control subjects and four patients with spinocerebellar ataxia type 6 (SCA6). It was then used to evaluate group differences in the peduncles in a population of 32 controls and 11 SCA6 patients. In the SCA6 group, we have observed significant decreases in the volumes of the dSCP and the ICPs and significant increases in the mean diffusivity in the noncrossing SCPs, the MCP, and the ICPs. These results are consistent with a degeneration of the cerebellar peduncles in SCA6 patients.

Journal ArticleDOI
TL;DR: Following the open invitation of flycircuit.tw to copy, transform, and redistribute the material for non-commercial re-use, here the authors announce inclusion of this dataset in non-proprietary SWC format, along with additional metadata and morphometric measurements, under “Chiang archive” in NeuroMorpho.Org version 6.0.
Abstract: NeuroMorpho.Org is a centralized repository of neuronal reconstructions hosting data from a variety of species, brain regions, and experimental conditions [1]. This resource aims to provide dense coverage of available data by including all digital tracings described in peer-reviewed publications that the authors are willing to share [2]. Although most reconstructions to date are acquired manually or semi-manually [3], the transition to quasi-automated methods is widely considered as necessary for long-term progress [4]. The 2010 DIADEM competition (DiademChallenge.org) helped foster considerable advances towards tracing automation [5] and was followed one year later by the large-scale reconstruction of more than 16,000 Drosophila neurons [6]. The public posting of all image stacks and corresponding digital tracings on flycircuit.tw after an additional year [7] constituted the first (and so far only) success in high-throughput digital morphology. Although flycircuit.tw reconstructions are beginning to enable new analyses and discoveries by independent research groups [8], these data were posted in a commercial format (vsg3d.com/amira/skeletonization) and lacked useful information such as the somatic brain region. Following the open invitation of flycircuit.tw to copy, transform, and redistribute the material for non-commercial re-use, here we announce inclusion of this dataset in non-proprietary SWC format, along with additional metadata and morphometric measurements, under the “Chiang archive” in NeuroMorpho.Org version 6.0. With this major release, the number of NeuroMorpho.Org reconstructions more than doubles, from 11,335 to 27,385.

Journal ArticleDOI
TL;DR: This fine-grained analysis reveals significant differences mostly localized around the splenium areas between both blind groups and the sighted group and, importantly, specific dissimilarities between the LB and CB groups, illustrating the existence of a sensitive period for reorganization.
Abstract: Blindness represents a unique model to study how visual experience may shape the development of brain organization. Exploring how the structure of the corpus callosum (CC) reorganizes following visual deprivation is of particular interest due to its important functional implication in vision (e.g., via the splenium of the CC). Moreover, comparing early versus late visually deprived individuals has the potential to unravel the existence of a sensitive period for reshaping the CC structure. Here, we develop a novel framework to capture a complete set of shape differences in the CC between congenitally blind (CB), late blind (LB) and sighted control (SC) groups. The CCs were manually segmented from T1-weighted brain MRI and modeled by 3D tetrahedral meshes. We statistically compared the combination of local area and thickness at each point between subject groups. Differences in area are found using surface tensor-based morphometry; thickness is estimated by tracing the streamlines in the volumetric harmonic field. Group differences were assessed on this combined measure using Hotelling's T² test. Interestingly, we observed that the total callosal volume did not differ between the groups. However, our fine-grained analysis reveals significant differences mostly localized around the splenium areas between both blind groups and the sighted group (general effects of blindness) and, importantly, specific dissimilarities between the LB and CB groups, illustrating the existence of a sensitive period for reorganization. The new multivariate statistics also gave better effect sizes for detecting morphometric differences, relative to other statistics. They may boost statistical power for CC morphometric analyses.
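The group comparison above uses a two-sample Hotelling's T² test on the combined (area, thickness) measure at each surface point. The statistic itself is short to write down; a plain-NumPy sketch for two groups of multivariate samples:

```python
import numpy as np

def hotelling_t2(x, y):
    """Two-sample Hotelling's T^2; x: (n1, p), y: (n2, p)."""
    n1, n2 = len(x), len(y)
    diff = x.mean(axis=0) - y.mean(axis=0)
    # Pooled sample covariance of the two groups.
    pooled = ((n1 - 1) * np.cov(x, rowvar=False) +
              (n2 - 1) * np.cov(y, rowvar=False)) / (n1 + n2 - 2)
    return (n1 * n2) / (n1 + n2) * diff @ np.linalg.solve(pooled, diff)
```

Under the usual normality assumptions, T² scaled by (n1 + n2 - p - 1) / (p (n1 + n2 - 2)) follows an F distribution with (p, n1 + n2 - p - 1) degrees of freedom, which is how p-values are obtained.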

Journal ArticleDOI
TL;DR: A divide-and-conquer framework is employed that tackles the challenges of high-throughput images of neurons and enables the integration of multiple automatic algorithms, and a tool for quantifying dendritic length that is fundamental for analyzing the growth of neuronal networks is presented.
Abstract: High-throughput automated fluorescent imaging and screening are important for studying neuronal development, functions, and pathogenesis. An automatic approach to analyzing images acquired in automated fashion and quantifying dendritic characteristics is critical for making such screens high-throughput. However, automatic and effective algorithms and tools, especially for images of mature mammalian neurons with complex arbors, have been lacking. Here, we present algorithms and a tool for quantifying dendritic length, which is fundamental for analyzing the growth of neuronal networks. We employ a divide-and-conquer framework that tackles the challenges of high-throughput images of neurons and enables the integration of multiple automatic algorithms. Within this framework, we developed algorithms that adapt to local properties to detect faint branches. We also developed a path search that can preserve the curvature change to accurately measure dendritic length with arbor branches and turns. In addition, we proposed an ensemble strategy of three estimation algorithms to further improve the overall efficacy. We tested our tool on images of cultured mouse hippocampal neurons immunostained with a dendritic marker for a high-throughput screen. Results demonstrate the effectiveness of our proposed method when its accuracy is compared with that of previous methods. The software has been implemented as an ImageJ plugin and is available for use.

Journal ArticleDOI
TL;DR: The main novelties of the proposed method are the use of a small set of Multiscale Isotropic Laplacian filters, acting as self-steerable filters, for a quick and efficient binary segmentation of dendritic arbors and axons.
Abstract: Centerline tracing in dendritic structures acquired from confocal images of neurons is an essential tool for the construction of geometrical representations of a neuronal network from its coarse scale up to its fine scale structures. In this paper, we propose an algorithm for centerline extraction that is both highly accurate and computationally efficient. The main novelties of the proposed method are (1) the use of a small set of Multiscale Isotropic Laplacian filters, acting as self-steerable filters, for a quick and efficient binary segmentation of dendritic arbors and axons; (2) an automated centerline seed points detection method based on the application of a simple 3D finite-length filter. The performance of this algorithm, which is validated on data from the DIADEM set appears to be very competitive when compared with other state-of-the-art algorithms.
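Laplacian-of-Gaussian filtering at several scales is the standard building block behind multiscale isotropic Laplacian filters. The sketch below is a simplified stand-in for the paper's self-steerable filter set: it combines scale-normalized responses and thresholds them to a binary mask of the arbors.

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

def tubular_mask(volume, sigmas=(1.0, 2.0, 4.0), thresh=0.0):
    """Max over scale-normalized LoG responses; bright tubes give negative LoG."""
    vol = volume.astype(float)
    responses = [-(s ** 2) * gaussian_laplace(vol, s) for s in sigmas]
    return np.max(responses, axis=0) > thresh
```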

Journal ArticleDOI
TL;DR: This paper presents a method for the comparison of tree-like shapes that takes into account both topological and geometrical information, and presents the mean shape of each population.
Abstract: Trees are a special type of graph that can be found in various disciplines. In the field of biomedical imaging, trees have been widely studied as they can be used to describe structures such as neurons, blood vessels and lung airways. It has been shown that the morphological characteristics of these structures can provide information on their function, aiding the characterization of pathological states. Therefore, it is important to develop methods that analyze their shape and quantify differences between their structures. In this paper, we present a method for the comparison of tree-like shapes that takes into account both topological and geometrical information. This method, which is based on the Elastic Shape Analysis Framework, also computes the mean shape of a population of trees. As a first application, we have considered the comparison of axon morphology. The performance of our method has been evaluated on two sets of images. For the first set of images, we considered four different populations of neurons from different animals and brain sections from the NeuroMorpho.org open database. The second set was composed of a database of 3D confocal microscopy images of three populations of axonal trees (normal and two types of mutations) of the same type of neurons. We have calculated the inter- and intra-class distances between the populations and embedded the distances in a classification scheme. We have compared the performance of our method against three other state-of-the-art algorithms, and results showed that the proposed method better distinguishes between the populations. Furthermore, we present the mean shape of each population. These shapes present a more complete picture of the morphological characteristics of each population, compared to the average value of certain predefined features.

Journal ArticleDOI
TL;DR: This work reports on an extension of a multilayer ontology to the representation of instruments used to assess brain and cognitive functions and behavior in humans, and specifies how various scores used in the neurosciences are represented.
Abstract: Advances in neuroscience are underpinned by large, multicenter studies and a mass of heterogeneous datasets. When investigating the relationships between brain anatomy and brain functions under normal and pathological conditions, measurements obtained from a broad range of brain imaging techniques are correlated with the information on each subject’s neurologic states, cognitive assessments and behavioral scores derived from questionnaires and tests. The development of ontologies in neuroscience appears to be a valuable way of gathering and handling properly these heterogeneous data – particularly through the use of federated architectures. We recently proposed a multilayer ontology for sharing brain images and regions of interest in neuroimaging. Here, we report on an extension of this ontology to the representation of instruments used to assess brain and cognitive functions and behavior in humans. This extension consists of a ‘core’ ontology that accounts for the properties shared by all instruments supplemented by ‘domain’ ontologies that conceptualize standard instruments. We also specify how this core ontology has been refined to build domain ontologies dedicated to widely used instruments and how various scores used in the neurosciences are represented. Lastly, we discuss our design choices, the ontology’s limitations and planned extensions aimed at querying and reasoning across distributed data sources.

Journal ArticleDOI
TL;DR: This study demonstrates that quantitative image biomarkers such as intracranial and brain volume can be extracted from routinely acquired clinical imaging data, which enables secondary use of clinical images for research into quantitative biomarkers at a hitherto unprecedented scale.
Abstract: We propose an infrastructure for the automated anonymization, extraction and processing of image data stored in clinical data repositories to make routinely acquired imaging data available for research purposes. The automated system, which was tested in the context of analyzing routinely acquired MR brain imaging data, consists of four modules: subject selection using PACS query, anonymization of privacy sensitive information and removal of facial features, quality assurance on DICOM header and image information, and quantitative imaging biomarker extraction. In total, 1,616 examinations were selected based on the following MRI scanning protocols: dementia protocol (246), multiple sclerosis protocol (446) and open question protocol (924). We evaluated the effectiveness of the infrastructure in accessing and successfully extracting biomarkers from routinely acquired clinical imaging data. To examine the validity, we compared brain volumes between patient groups with positive and negative diagnosis, according to the patient reports. Overall, success rates of image data retrieval and automatic processing were 82.5 %, 82.3 % and 66.2 % for the three protocol groups respectively, indicating that a large percentage of routinely acquired clinical imaging data can be used for brain volumetry research, despite image heterogeneity. In line with the literature, brain volumes were found to be significantly smaller (p-value <0.001) in patients with a positive diagnosis of dementia (915 ml) compared to patients with a negative diagnosis (939 ml). This study demonstrates that quantitative image biomarkers such as intracranial and brain volume can be extracted from routinely acquired clinical imaging data. This enables secondary use of clinical images for research into quantitative biomarkers at a hitherto unprecedented scale.
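The anonymization module is the most reusable piece of such a pipeline. Below is a deliberately minimal pydicom sketch that blanks a few privacy-sensitive header fields; a real deployment de-identifies a much longer tag list and, as described above, also removes facial features from the image data.

```python
import pydicom

SENSITIVE_TAGS = ["PatientName", "PatientID", "PatientBirthDate", "PatientAddress"]

def anonymize(src_path, dst_path):
    ds = pydicom.dcmread(src_path)
    for tag in SENSITIVE_TAGS:
        if tag in ds:
            setattr(ds, tag, "")       # blank the identifying value
    ds.remove_private_tags()           # vendor-private elements may carry PHI
    ds.save_as(dst_path)
```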

Journal ArticleDOI
TL;DR: This paper evaluates a new statistical algorithm based on bootstrapping to determine if it can improve prediction accuracy using a subjects-left-out validation of a DTI analysis, and presents one statistical technique that might be used to extract diagnostically relevant information from a full-brain analysis.
Abstract: There is a compelling need for early, accurate diagnosis of Parkinson's disease (PD). Various magnetic resonance imaging modalities are being explored as an adjunct to diagnosis. A significant challenge in using MR imaging for diagnosis is developing appropriate algorithms for extracting diagnostically relevant information from brain images. In previous work, we have demonstrated that individual subject variability can have a substantial effect on identifying and determining the borders of regions of analysis, and that this variability may impact prediction accuracy. In this paper we evaluate a new statistical algorithm to determine if we can improve prediction accuracy using a subjects-left-out validation of a DTI analysis. Twenty subjects with PD and 22 healthy controls were imaged to evaluate whether a full-brain diffusion tensor imaging-fractional anisotropy (DTI-FA) map might be capable of segregating PD from controls. In this paper, we present a new statistical algorithm based on bootstrapping. We compare the capacity of this algorithm to classify the identity of subjects left out of the analysis with the accuracy of other statistical techniques, including standard cluster-thresholding. The bootstrapped analysis approach was able to correctly discriminate the 20 subjects with PD from the 22 healthy controls (area under the receiver operator curve, or AUROC, 0.90); however, the sensitivity and specificity of standard cluster-thresholding techniques at various voxel-specific thresholds were less effective (AUROC 0.72–0.75). Based on these results, sufficient information to generate diagnostically relevant statistical maps may already be collected by current MRI scanners. We present one statistical technique that might be used to extract diagnostically relevant information from a full-brain analysis.
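The reported AUROC is computed from per-subject decision values produced by the left-out validation. As a sketch of just the evaluation step, with stand-in scores rather than the paper's bootstrapped statistics:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

labels = np.array([1] * 20 + [0] * 22)                          # 20 PD, 22 controls
scores = np.random.default_rng(0).standard_normal(42) + labels  # hypothetical values
print("AUROC:", roc_auc_score(labels, scores))
```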

Journal ArticleDOI
TL;DR: This work proposes new implementations and parallelizations of the Synchronization Likelihood algorithm with significantly better performance in both time and memory use, allowing analyses that were previously not computationally feasible.
Abstract: Measures of functional connectivity are commonly employed in neuroimaging research. Among the most popular measures is the Synchronization Likelihood, which provides a non-linear estimate of the statistical dependencies between the activity time courses of different brain areas. One aspect that has limited a wider use of this algorithm is the fact that it is very computationally and memory demanding. In the present work we propose new implementations and parallelizations of the Synchronization Likelihood algorithm with significantly better performance in both time and memory use. As a result, the required computational time is reduced by three orders of magnitude and the amount of memory needed for the calculations by two orders of magnitude. This allows performing analyses that were not feasible before from a computational standpoint.
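For readers unfamiliar with the measure, here is a much-simplified synchronization likelihood sketch (after Stam and van Dijk's original formulation): delay-embed both signals, mark the closest fraction p_ref of time pairs in each channel, and measure how often closeness in one channel coincides with closeness in the other. The full algorithm's windowing and autocorrelation corrections, and the paper's parallelizations, are omitted.

```python
import numpy as np

def embed(x, m=5, lag=2):
    """Delay-embed a 1-D signal into m-dimensional state vectors."""
    n = len(x) - (m - 1) * lag
    return np.stack([x[i:i + n] for i in range(0, m * lag, lag)], axis=1)

def sync_likelihood(x, y, m=5, lag=2, p_ref=0.05):
    ex, ey = embed(x, m, lag), embed(y, m, lag)
    dx = np.linalg.norm(ex[:, None] - ex[None], axis=-1)   # pairwise distances in x
    dy = np.linalg.norm(ey[:, None] - ey[None], axis=-1)   # pairwise distances in y
    np.fill_diagonal(dx, np.inf)
    np.fill_diagonal(dy, np.inf)
    close_x = dx <= np.quantile(dx[np.isfinite(dx)], p_ref)
    close_y = dy <= np.quantile(dy[np.isfinite(dy)], p_ref)
    # P(close in y | close in x): ~p_ref if independent, -> 1 at full synchrony.
    return (close_x & close_y).sum() / max(close_x.sum(), 1)
```

The O(n²) distance matrices here illustrate why the naive algorithm is so memory-hungry, which is precisely the bottleneck the paper's implementations address.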