
Showing papers in "Neuroinformatics in 2018"


Journal ArticleDOI
TL;DR: A novel end-to-end adversarial critic network with a multi-scale L1 loss function to force the critic and segmentor to learn both global and local features that capture long- and short-range spatial relationships between pixels is proposed.
Abstract: Inspired by classic Generative Adversarial Networks (GANs), we propose a novel end-to-end adversarial neural network, called SegAN, for the task of medical image segmentation. Since image segmentation requires dense, pixel-level labeling, the single scalar real/fake output of a classic GAN's discriminator may be ineffective in producing stable and sufficient gradient feedback to the networks. Instead, we use a fully convolutional neural network as the segmentor to generate segmentation label maps, and propose a novel adversarial critic network with a multi-scale L1 loss function to force the critic and segmentor to learn both global and local features that capture long- and short-range spatial relationships between pixels. In our SegAN framework, the segmentor and critic networks are trained in an alternating fashion in a min-max game: The critic is trained by maximizing a multi-scale loss function, while the segmentor is trained with only gradients passed along by the critic, with the aim to minimize the multi-scale loss function. We show that such a SegAN framework is more effective and stable for the segmentation task, and it leads to better performance than the state-of-the-art U-net segmentation method. We tested our SegAN method using datasets from the MICCAI BRATS brain tumor segmentation challenge. Extensive experimental results demonstrate the effectiveness of the proposed SegAN with multi-scale loss: on BRATS 2013 SegAN gives performance comparable to the state-of-the-art for whole tumor and tumor core segmentation while achieving better precision and sensitivity for Gd-enhanced tumor core segmentation; on BRATS 2015 SegAN achieves better performance than the state-of-the-art in both Dice score and precision.
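The alternating min-max training hinges on the multi-scale L1 term. The sketch below (PyTorch, with a toy critic and random tensors, not the authors' code) illustrates one plausible reading of that loss: the image masked by the predicted map and the image masked by the ground-truth map are both passed through the critic, and the mean absolute difference of the features at every scale is accumulated.

```python
# Minimal sketch (not the SegAN implementation) of a multi-scale L1 critic loss.
import torch
import torch.nn as nn

class ToyCritic(nn.Module):
    def __init__(self, in_ch=1, width=16, n_scales=3):
        super().__init__()
        self.blocks = nn.ModuleList()
        ch = in_ch
        for s in range(n_scales):
            self.blocks.append(nn.Sequential(
                nn.Conv2d(ch, width * 2 ** s, 3, stride=2, padding=1),
                nn.LeakyReLU(0.2)))
            ch = width * 2 ** s

    def forward(self, x):
        feats = []
        for block in self.blocks:
            x = block(x)
            feats.append(x)
        return feats                       # one feature map per scale

def multiscale_l1(critic, image, pred_mask, true_mask):
    f_pred = critic(image * pred_mask)     # image masked by the segmentor output
    f_true = critic(image * true_mask)     # image masked by the ground truth
    return sum(torch.mean(torch.abs(fp - ft))
               for fp, ft in zip(f_pred, f_true)) / len(f_pred)

# toy usage: the critic maximizes and the segmentor minimizes this value
critic = ToyCritic()
image = torch.rand(2, 1, 64, 64)
pred = torch.rand(2, 1, 64, 64)            # stand-in for a segmentor's sigmoid output
true = (torch.rand(2, 1, 64, 64) > 0.5).float()
print(multiscale_l1(critic, image, pred, true))
```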

376 citations


Journal ArticleDOI
TL;DR: This paper proposes to construct cascaded convolutional neural networks (CNNs) to learn the multi-level and multimodal features of MRI and PET brain images for AD classification and achieves an accuracy of 93.26% for classification of AD vs. NC and 82.95% for classification of pMCI vs. NC, demonstrating promising classification performance.
Abstract: Accurate and early diagnosis of Alzheimer’s disease (AD) plays an important role in patient care and the development of future treatment. Structural and functional neuroimages, such as magnetic resonance images (MRI) and positron emission tomography (PET), provide powerful imaging modalities to help understand the anatomical and functional neural changes related to AD. In recent years, machine learning methods have been widely studied for the analysis of multi-modality neuroimages for quantitative evaluation and computer-aided diagnosis (CAD) of AD. Most existing methods extract hand-crafted imaging features after image preprocessing such as registration and segmentation, and then train a classifier to distinguish AD subjects from other groups. This paper proposes to construct cascaded convolutional neural networks (CNNs) to learn the multi-level and multimodal features of MRI and PET brain images for AD classification. First, multiple deep 3D-CNNs are constructed on different local image patches to transform the local brain image into more compact high-level features. Then, an upper high-level 2D-CNN followed by a softmax layer is cascaded to ensemble the high-level features learned from the multiple modalities and generate the latent multimodal correlation features of the corresponding image patches for the classification task. Finally, these learned features are combined by a fully connected layer followed by a softmax layer for AD classification. The proposed method can automatically learn the generic multi-level and multimodal features from multiple imaging modalities for classification, which are robust to scale and rotation variations to some extent. No image segmentation or rigid registration is required in pre-processing the brain images. Our method is evaluated on the baseline MRI and PET images of 397 subjects including 93 AD patients, 204 with mild cognitive impairment (MCI, 76 pMCI + 128 sMCI) and 100 normal controls (NC) from the Alzheimer’s Disease Neuroimaging Initiative (ADNI) database. Experimental results show that the proposed method achieves an accuracy of 93.26% for classification of AD vs. NC and 82.95% for classification of pMCI vs. NC, demonstrating promising classification performance.

245 citations


Journal ArticleDOI
TL;DR: The Topological Morphology Descriptor is invented, a method for encoding the spatial structure of any tree as a “barcode”, a unique topological signature that couples the topology of the branches with their spatial extents by tracking their topological evolution in 3-dimensional space.
Abstract: Many biological systems consist of branching structures that exhibit a wide variety of shapes. Our understanding of their systematic roles is hampered from the start by the lack of a fundamental means of standardizing the description of complex branching patterns, such as those of neuronal trees. To solve this problem, we have invented the Topological Morphology Descriptor (TMD), a method for encoding the spatial structure of any tree as a "barcode", a unique topological signature. As opposed to traditional morphometrics, the TMD couples the topology of the branches with their spatial extents by tracking their topological evolution in 3-dimensional space. We prove that neuronal trees, as well as stochastically generated trees, can be accurately categorized based on their TMD profiles. The TMD retains sufficient global and local information to create an unbiased benchmark test for their categorization and is able to quantify and characterize the structural differences between distinct morphological groups. The use of this mathematically rigorous method will advance our understanding of the anatomy and diversity of branching morphologies.
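The barcode idea can be illustrated with a short, simplified sketch: every leaf opens a component carrying its radial distance from the soma, and at each branch point only the component with the largest value survives while the others are closed, each emitting one bar. The tree encoding below (a children dictionary plus per-node distances) is an assumption for illustration and omits the refinements of the published TMD algorithm.

```python
# Simplified sketch of a TMD-style barcode for a rooted tree (illustrative only).
def tmd_barcode(children, dist, root):
    """children: node -> list of child nodes; dist: node -> distance from the soma."""
    def process(node):
        # returns (largest value carried upward, bars collected in this subtree)
        if not children.get(node):                 # leaf: a component is born here
            return dist[node], []
        carried, bars = [], []
        for c in children[node]:
            v, b = process(c)
            carried.append(v)
            bars.extend(b)
        carried.sort(reverse=True)
        survivor = carried[0]
        # every non-surviving component dies at this branch point
        bars.extend((dist[node], v) for v in carried[1:])
        return survivor, bars

    survivor, bars = process(root)
    bars.append((dist[root], survivor))            # the last surviving component
    return bars

# toy tree: root 0 with one short and one long branch
children = {0: [1, 2], 1: [3, 4], 2: []}
dist = {0: 0.0, 1: 5.0, 2: 12.0, 3: 9.0, 4: 20.0}
print(tmd_barcode(children, dist, 0))              # [(5.0, 9.0), (0.0, 12.0), (0.0, 20.0)]
```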

134 citations


Journal ArticleDOI
TL;DR: The obtained intra-consensus variability was substantially lower compared to the intra- and inter-rater variabilities, showing improved reliability of lesion segmentation by the proposed protocol, and may represent a more precise target to evaluate, compare against and also train, the automatic segmentations.
Abstract: Quantified volume and count of white-matter lesions based on magnetic resonance (MR) images are important biomarkers in several neurodegenerative diseases. For a routine extraction of these biomarkers an accurate and reliable automated lesion segmentation is required. To objectively and reliably determine a standard automated method, however, creation of standard validation datasets is of extremely high importance. Ideally, these datasets should be publicly available in conjunction with standardized evaluation methodology to enable objective validation of novel and existing methods. For validation purposes, we present a novel MR dataset of 30 multiple sclerosis patients and a novel protocol for creating reference white-matter lesion segmentations based on multi-rater consensus. On these datasets three expert raters individually segmented white-matter lesions, using in-house developed semi-automated lesion contouring tools. Later, the raters revised the segmentations in several joint sessions to reach a consensus on segmentation of lesions. To evaluate the variability, and as quality assurance, the protocol was executed twice on the same MR images, with a six months break. The obtained intra-consensus variability was substantially lower compared to the intra- and inter-rater variabilities, showing improved reliability of lesion segmentation by the proposed protocol. Hence, the obtained reference segmentations may represent a more precise target to evaluate, compare against and also train, the automatic segmentations. To encourage further use and research we will publicly disseminate on our website http://lit.fe.uni-lj.si/tools the tools used to create lesion segmentations, the original and preprocessed MR image datasets and the consensus lesion segmentations.

84 citations


Journal ArticleDOI
TL;DR: This work uses a sparse version of Multiple Kernel Learning (MKL) to simultaneously learn the contribution of each brain region, previously defined by an atlas, to the decision function and shows how this can lead to improved overall generalisation performance.
Abstract: Pattern recognition models have been increasingly applied to neuroimaging data over the last two decades. These applications have ranged from cognitive neuroscience to clinical problems. A common limitation of these approaches is that they do not incorporate previous knowledge about the brain structure and function into the models. Previous knowledge can be embedded into pattern recognition models by imposing a grouping structure based on anatomically or functionally defined brain regions. In this work, we present a novel approach that uses group sparsity to model the whole brain multivariate pattern as a combination of regional patterns. More specifically, we use a sparse version of Multiple Kernel Learning (MKL) to simultaneously learn the contribution of each brain region, previously defined by an atlas, to the decision function. Our application of MKL provides two beneficial features: (1) it can lead to improved overall generalisation performance when the grouping structure imposed by the atlas is consistent with the data; (2) it can identify a subset of relevant brain regions for the predictive model. In order to investigate the effect of the grouping in the proposed MKL approach we compared the results of three different atlases using three different datasets. The method has been implemented in the new version of the open-source Pattern Recognition for Neuroimaging Toolbox (PRoNTo).
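The core construction, one kernel per atlas region combined by a weight vector, can be sketched as follows. This is not PRoNTo's implementation: the weights are fixed to uniform values purely to show the combination and the use of a precomputed kernel in an SVM, whereas sparse MKL learns them under an l1 constraint so that many regions drop out.

```python
# Minimal sketch of the kernel structure behind atlas-based sparse MKL (toy data).
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_subjects, n_voxels, n_regions = 40, 300, 5
X = rng.normal(size=(n_subjects, n_voxels))
y = rng.integers(0, 2, size=n_subjects)
region_of_voxel = rng.integers(0, n_regions, size=n_voxels)    # toy "atlas"

# one linear kernel per brain region
kernels = np.stack([
    X[:, region_of_voxel == r] @ X[:, region_of_voxel == r].T
    for r in range(n_regions)
])

weights = np.full(n_regions, 1.0 / n_regions)   # sparse MKL would learn these (many -> 0)
K = np.tensordot(weights, kernels, axes=1)      # weighted sum of region kernels

clf = SVC(kernel="precomputed").fit(K, y)
print("training accuracy:", clf.score(K, y))
```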

58 citations


Journal ArticleDOI
TL;DR: A gait energy image (GEI) based Siamese neural network is proposed to automatically extract robust and discriminative spatial gait features for human identification and this framework impressively outperforms state-of-the-art methods.
Abstract: The integration of the latest breakthroughs in bioinformatics technology on one side and artificial intelligence on the other enables remarkable advances in the fields of intelligent security guard, computational biology, healthcare, and so on. Among them, biometrics-based automatic human identification is one of the most fundamental and significant research topics. Human gait, a biometric feature with unique capabilities, has gained significant attention for its remarkable characteristics of remote access, robustness and security in biometrics-based human identification. However, existing methods cannot handle well the indistinctive inter-class differences and large intra-class variations of human gait in real-world situations. In this paper, we develop efficient spatial-temporal gait features with deep learning for human identification. First, we propose a gait energy image (GEI) based Siamese neural network to automatically extract robust and discriminative spatial gait features for human identification. Furthermore, we exploit deep 3-dimensional convolutional networks to learn the human gait convolutional 3D (C3D) representation as the temporal gait features. Finally, the GEI and C3D gait features are embedded into the null space by the Null Foley-Sammon Transform (NFST). In the new space, the spatial-temporal features are combined with distance metric learning to drive the similarity metric to be small for pairs of gaits from the same person and large for pairs from different persons. Consequently, experiments on the world’s largest gait database show that our framework impressively outperforms state-of-the-art methods.
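A minimal sketch of the spatial branch, assuming a toy embedding CNN and the standard contrastive loss; the paper's exact architecture and the C3D/NFST stages are not reproduced here.

```python
# Sketch of a GEI-based Siamese network with a contrastive loss (PyTorch, toy sizes).
import torch
import torch.nn as nn
import torch.nn.functional as F

class GEIEmbedder(nn.Module):
    def __init__(self, emb_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(32 * 4 * 4, emb_dim))

    def forward(self, x):
        return self.net(x)

def contrastive_loss(z1, z2, same_person, margin=1.0):
    # pull same-identity pairs together, push different identities at least `margin` apart
    d = F.pairwise_distance(z1, z2)
    return torch.mean(same_person * d ** 2 +
                      (1 - same_person) * torch.clamp(margin - d, min=0) ** 2)

model = GEIEmbedder()
gei_a = torch.rand(8, 1, 64, 44)             # toy gait energy images
gei_b = torch.rand(8, 1, 64, 44)
labels = torch.randint(0, 2, (8,)).float()   # 1 = same identity, 0 = different
loss = contrastive_loss(model(gei_a), model(gei_b), labels)
loss.backward()
print(float(loss))
```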

55 citations


Journal ArticleDOI
Yatong Jiang, Bingtao Liu, Yu Linghui, Chenggang Yan, Hujun Bian
TL;DR: The novel improved collaborative filtering-based miRNA-disease association prediction (ICFMDA) approach is proposed, and it is hoped that ICFMDA will be useful in future miRNA and brain research, enabling a better understanding of the nervous system at the molecular and cellular levels and of cellular change processes, and thus supporting research on the human brain.
Abstract: The era of human brain science research is dawning. Researchers utilize multi-disciplinary knowledge, such as physiology and bioinformatics, to explore the human brain. The emerging disease association prediction technology can speed up the study of diseases, so as to better understand the structure and function of the human body. There is increasing evidence that miRNA plays a significant role in nervous system development, adult function, plasticity, and vulnerability to neurological disease states. In this paper, we propose the novel improved collaborative filtering-based miRNA-disease association prediction (ICFMDA) approach. Known miRNA-disease associations can be viewed as a bipartite network between diseases and miRNAs. ICFMDA defines a significance SIG between pairs of diseases or miRNAs to model the preference in the choices of other entities. The collaborative filtering algorithm is further improved by incorporating similarity matrices to enable prediction for new miRNAs or diseases without known associations. Potential miRNA-disease associations are scored by adding the bidirectional recommendation results at low computational cost. ICFMDA achieved an AUC of 0.9076 for the ROC curve in global leave-one-out cross validation, which outperformed the state-of-the-art models. ICFMDA is a compact and accurate tool for potential miRNA-disease association prediction. We hope that ICFMDA will be useful in future miRNA and brain research and will lead to a better understanding of the nervous system at the molecular and cellular levels and of cellular change processes, thus supporting research on the human brain.
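A generic sketch of the underlying collaborative-filtering idea, not the exact ICFMDA scoring: known associations are propagated through row-normalised miRNA and disease similarity matrices, and the two recommendation directions are added, so entities without any known association still receive scores through their similarity neighbours.

```python
# Generic similarity-based collaborative filtering on a miRNA-disease bipartite matrix.
import numpy as np

rng = np.random.default_rng(1)
n_mirna, n_disease = 6, 4
A = (rng.random((n_mirna, n_disease)) > 0.7).astype(float)      # known associations
S_m = rng.random((n_mirna, n_mirna)); S_m = (S_m + S_m.T) / 2    # toy miRNA similarity
S_d = rng.random((n_disease, n_disease)); S_d = (S_d + S_d.T) / 2  # toy disease similarity

def row_normalize(S):
    return S / np.maximum(S.sum(axis=1, keepdims=True), 1e-12)

# recommend diseases to a miRNA via similar miRNAs, and miRNAs to a disease
# via similar diseases, then add the two directions
score = row_normalize(S_m) @ A + A @ row_normalize(S_d).T
print(np.round(score, 2))
```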

36 citations


Journal ArticleDOI
TL;DR: The platform offers uncertainty estimation of any image derived statistic to facilitate robust tracking of subtle physiological changes in longitudinal studies and supports the development of new reconstruction and analysis algorithms through restricting the axial field of view to any set of rings covering a region of interest and thus performing fully 3D reconstruction and corrections using real data significantly faster.
Abstract: We present a standalone, scalable and high-throughput software platform for PET image reconstruction and analysis. We focus on high fidelity modelling of the acquisition processes to provide high accuracy and precision quantitative imaging, especially for large axial field of view scanners. All the core routines are implemented using parallel computing available from within the Python package NiftyPET, enabling easy access, manipulation and visualisation of data at any processing stage. The pipeline of the platform starts from MR and raw PET input data and is divided into the following processing stages: (1) list-mode data processing; (2) accurate attenuation coefficient map generation; (3) detector normalisation; (4) exact forward and back projection between sinogram and image space; (5) estimation of reduced-variance random events; (6) high accuracy fully 3D estimation of scatter events; (7) voxel-based partial volume correction; (8) region- and voxel-level image analysis. We demonstrate the advantages of this platform using an amyloid brain scan where all the processing is executed from a single and uniform computational environment in Python. The high accuracy acquisition modelling is achieved through span-1 (no axial compression) ray tracing for true, random and scatter events. Furthermore, the platform offers uncertainty estimation of any image derived statistic to facilitate robust tracking of subtle physiological changes in longitudinal studies. The platform also supports the development of new reconstruction and analysis algorithms through restricting the axial field of view to any set of rings covering a region of interest and thus performing fully 3D reconstruction and corrections using real data significantly faster. All the software is available as open source with the accompanying wiki-page and test data.

33 citations


Journal ArticleDOI
TL;DR: The results showed that with PLS, the SVM achieved slightly poorer accuracies than without PLS, though not significantly; the findings may support early clinical diagnosis or risk determination by identifying neurobiological markers to distinguish between ASD and healthy controls.
Abstract: Advances in neuroimaging methods reveal that resting-state functional MRI (rs-fMRI) connectivity measures can be potential diagnostic biomarkers for autism spectrum disorder (ASD). Recent data sharing projects help us replicate the robustness of these biomarkers across different acquisition conditions or preprocessing steps and across larger numbers of individuals or sites. It is necessary to validate the previous results using data from multiple sites while diminishing site variations. We investigated partial least squares regression (PLS), a domain adaptation method, to adjust for the effects of multicenter acquisition. A sparse multivariate pattern analysis (MVPA) framework in a leave-one-site-out cross validation (LOSOCV) setting has been proposed to discriminate ASD from healthy controls using data from six sites in the Autism Brain Imaging Data Exchange (ABIDE). Classification features were obtained using 42 bilateral Brodmann areas without presupposing any prior hypothesis. Our results showed that with PLS, the SVM achieved slightly poorer accuracies (highest accuracy 62%) than without PLS, though not significantly. The regions occurring in two or more informative connections are the Dorsolateral Prefrontal Cortex, Somatosensory Association Cortex, Primary Auditory Cortex, Inferior Temporal Gyrus and Temporopolar Area. These disrupted regions are involved in executive function, speech, visual perception, sensation and language, which are associated with ASD. Our findings may support early clinical diagnosis or risk determination by identifying neurobiological markers to distinguish between ASD and healthy controls.
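A minimal sketch of the evaluation setting, with random toy data standing in for the ABIDE connectivity features: PLS is used as a dimensionality-reducing adaptation step inside a leave-one-site-out loop before a linear SVM.

```python
# Leave-one-site-out cross-validation with PLS followed by a linear SVM (toy data).
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n, p = 120, 861                       # e.g. pairwise connections between 42 ROIs
X = rng.normal(size=(n, p))
y = rng.integers(0, 2, size=n)        # ASD vs control (toy labels)
site = rng.integers(0, 6, size=n)     # 6 acquisition sites

accs = []
for train, test in LeaveOneGroupOut().split(X, y, groups=site):
    pls = PLSRegression(n_components=10).fit(X[train], y[train])
    Xtr, Xte = pls.transform(X[train]), pls.transform(X[test])
    clf = SVC(kernel="linear").fit(Xtr, y[train])
    accs.append(clf.score(Xte, y[test]))
print("per-site accuracies:", np.round(accs, 2))
```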

30 citations


Journal ArticleDOI
TL;DR: A deep multiscales multitask learning network (DMML-Net) integrating a multiscale multi-output learning and a multitask regression learning into a fully convolutional network is proposed and achieves high performance on T1/T2-weighted MRI scans from 200 subjects, making it an efficient tool for clinical LNFS diagnosis.
Abstract: Pathogenesis-based diagnosis is a key step to prevent and control lumbar neural foraminal stenosis (LNFS). It conducts both early diagnosis and comprehensive assessment by drawing crucial pathological links between pathogenic factors and LNFS. Automated pathogenesis-based diagnosis would simultaneously localize and grade multiple spinal organs (neural foramina, vertebrae, intervertebral discs) to diagnose LNFS and discover pathogenic factors. The automated way facilitates planning optimal therapeutic schedules and relieving clinicians from laborious workloads. However, no successful work has been achieved yet due to its extreme challenges: 1) multiple targets: each lumbar spine has at least 17 target organs; 2) multiple scales: each type of target organ has structural complexity and various scales across subjects; and 3) multiple tasks: simultaneous localization and diagnosis of all lumbar organs is far more difficult than the individual tasks. To address these huge challenges, we propose a deep multiscale multitask learning network (DMML-Net) integrating a multiscale multi-output learning and a multitask regression learning into a fully convolutional network. 1) DMML-Net merges semantic representations to reinforce the salience of numerous target organs. 2) DMML-Net extends multiscale convolutional layers as multiple output layers to boost the scale-invariance for various organs. 3) DMML-Net joins a multitask regression module and a multitask loss module to prompt the mutual benefit between tasks. Extensive experimental results demonstrate that DMML-Net achieves high performance (0.845 mean average precision) on T1/T2-weighted MRI scans from 200 subjects. This makes our method an efficient tool for clinical LNFS diagnosis.

27 citations


Journal ArticleDOI
TL;DR: This paper shows that a generic FOV normalization approach is possible in multi-site diverse images, and improves skull stripping accuracy and consistency for multiple skull stripping algorithms.
Abstract: Multi-site brain MRI analysis is needed in big data neuroimaging studies, but it is challenging. The challenges lie in almost every analysis step including skull stripping. The diversities in multi-site brain MR images make it difficult to tune parameters specific to subjects or imaging protocols. Alternatively, using constant parameter settings often leads to inaccurate, inconsistent and even failed skull stripping results. One reason is that images scanned at different sites, under different scanners or protocols, and/or by different technicians often have very different fields of view (FOVs). Normalizing FOV is currently done manually or using ad hoc pre-processing steps, which do not always generalize well to multi-site diverse images. In this paper, we show that (a) a generic FOV normalization approach is possible in multi-site diverse images; we show experiments on images acquired from Philips, GE and Siemens scanners, at 1.0T, 1.5T and 3.0T field strengths, and from subjects 0–90 years of age; and (b) generic FOV normalization improves skull stripping accuracy and consistency for multiple skull stripping algorithms; we show this effect for 5 skull stripping algorithms including FSL’s BET, AFNI’s 3dSkullStrip, FreeSurfer’s HWA, BrainSuite’s BSE, and MASS. We have released our FOV normalization software at http://www.nitrc.org/projects/normalizefov.

Journal ArticleDOI
TL;DR: A novel computational method with Short Acyclic Connections in Heterogeneous Graph (SACMDA) that could be effectively applied to new diseases and new miRNAs without any known associations, which overcomes the limitations of many previous methods.
Abstract: MiRNA-disease association is important to disease diagnosis and treatment. Prediction of miRNA-disease associations is receiving increasing attention. Using the huge number of known databases to predict potential associations between miRNAs and diseases is an important topic in the field of biology and medicine. In this paper, we propose a novel computational method with Short Acyclic Connections in Heterogeneous Graph (SACMDA). SACMDA obtains AUCs of 0.8770 and 0.8368 during global and local leave-one-out cross validation, respectively. Furthermore, SACMDA has been applied to three important human cancers for performance evaluation. As a result, 92% (Colon Neoplasms), 96% (Carcinoma Hepatocellular) and 94% (Esophageal Neoplasms) of the top 50 predicted miRNAs are confirmed by recent experimental reports. What's more, SACMDA could be effectively applied to new diseases and new miRNAs without any known associations, which overcomes the limitations of many previous methods.
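A generic sketch of scoring pairs by short connections in a heterogeneous graph, not the exact SACMDA formulation: the miRNA similarity block, the disease similarity block and the known associations are glued into one adjacency matrix, and miRNA-disease pairs are scored by accumulating walks of length two and three between the two node types.

```python
# Generic short-connection scoring on a heterogeneous miRNA-disease graph (toy data).
import numpy as np

rng = np.random.default_rng(2)
n_m, n_d = 5, 4
A = (rng.random((n_m, n_d)) > 0.7).astype(float)          # known associations
S_m = rng.random((n_m, n_m)); S_m = (S_m + S_m.T) / 2      # miRNA similarities
S_d = rng.random((n_d, n_d)); S_d = (S_d + S_d.T) / 2      # disease similarities

# heterogeneous adjacency: [[S_m, A], [A.T, S_d]]
H = np.block([[S_m, A], [A.T, S_d]])
H2, H3 = H @ H, H @ H @ H

# miRNA-to-disease block of the length-2 and length-3 walk weights
scores = H2[:n_m, n_m:] + H3[:n_m, n_m:]
print(np.round(scores, 2))
```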

Journal ArticleDOI
TL;DR: The results suggest that BIANCA (Brain Intensity AbNormality Classification Algorithm) is a reliable and fast segmentation method to extract masks of WMH in patients with extensive lesions.
Abstract: White matter hyperintensities (WMH) are a hallmark of small vessel diseases (SVD). Yet, no automated segmentation method is readily and widely used, especially in patients with extensive WMH where lesions are close to the cerebral cortex. BIANCA (Brain Intensity AbNormality Classification Algorithm) is a new fully automated, supervised method for WMH segmentation. In this study, we optimized and compared BIANCA against a reference method with manual editing in a cohort of patients with extensive WMH. This was achieved in two datasets: a clinical protocol with 90 patients having 2-dimensional FLAIR and an advanced protocol with 66 patients having 3-dimensional FLAIR. We first determined simultaneously which input modalities (FLAIR alone or FLAIR + T1) and which training sets were better compared to the reference. Three strategies for the selection of the threshold that is applied to the probabilistic output of BIANCA were then evaluated: chosen at the group level, based on Fazekas score or determined individually. Accuracy of the segmentation was assessed through measures of spatial agreement and volumetric correspondence with respect to reference segmentation. Based on all our tests, we identified multimodal inputs (FLAIR + T1), mixed WMH load training set and individual threshold selection as the best conditions to automatically segment WMH in our cohort. A median Dice similarity index of 0.80 (0.80) and an intraclass correlation coefficient of 0.97 (0.98) were obtained for the clinical (advanced) protocol. However, Bland-Altman plots identified a difference with the reference method that was linearly related to the total burden of WMH. Our results suggest that BIANCA is a reliable and fast segmentation method to extract masks of WMH in patients with extensive lesions.

Journal ArticleDOI
TL;DR: This paper develops two age prediction models that were trained using healthy control data from the ABIDE, CoRR, DLBS and NKI Rockland neuroimaging datasets and provides access to predictive modeling software running on a persistent cloud-based Amazon Web Services (AWS) compute instance.
Abstract: The availability of cloud computing services has enabled the widespread adoption of the “software as a service” (SaaS) approach for software distribution, which utilizes network-based access to applications running on centralized servers. In this paper we apply the SaaS approach to neuroimaging-based age prediction. Our system, named “NAPR” (Neuroanatomical Age Prediction using R), provides access to predictive modeling software running on a persistent cloud-based Amazon Web Services (AWS) compute instance. The NAPR framework allows external users to estimate the age of individual subjects using cortical thickness maps derived from their own locally processed T1-weighted whole brain MRI scans. As a demonstration of the NAPR approach, we have developed two age prediction models that were trained using healthy control data from the ABIDE, CoRR, DLBS and NKI Rockland neuroimaging datasets (total N = 2367, age range 6–89 years). The provided age prediction models were trained using (i) relevance vector machines and (ii) Gaussian processes machine learning methods applied to cortical thickness surfaces obtained using Freesurfer v5.3. We believe that this transparent approach to out-of-sample evaluation and comparison of neuroimaging age prediction models will facilitate the development of improved age prediction models and allow for robust evaluation of the clinical utility of these methods.

Journal ArticleDOI
TL;DR: The multi-layer multi-target regression (MMR) is proposed which enables simultaneously modeling intrinsic inter-target correlations and nonlinear input-output relationships in a general compositional framework and has been evaluated by extensive experiments on the ADNI database with MRI data, and produced high accuracy surpassing previous regression models.
Abstract: Accurate and automatic prediction of cognitive assessment from multiple neuroimaging biomarkers is crucial for early detection of Alzheimer's disease. The major challenges arise from the nonlinear relationship between biomarkers and assessment scores and the inter-correlation among them, which have not yet been well addressed. In this paper, we propose multi-layer multi-target regression (MMR), which enables simultaneous modeling of intrinsic inter-target correlations and nonlinear input-output relationships in a general compositional framework. Specifically, by kernelized dictionary learning, the MMR can effectively handle the highly nonlinear relationship between biomarkers and assessment scores; by robust low-rank linear learning via matrix elastic nets, the MMR can explicitly encode inter-correlations among multiple assessment scores; moreover, the MMR is flexible and can work with the non-smooth l2,1-norm loss function, which enables calibration of multiple targets with disparate noise levels for more robust parameter estimation. The MMR can be efficiently solved by an alternating optimization algorithm via gradient descent with guaranteed convergence. The MMR has been evaluated by extensive experiments on the ADNI database with MRI data, and produced high accuracy surpassing previous regression models, which demonstrates its great effectiveness as a new multi-target regression model for clinical multivariate prediction.

Journal ArticleDOI
TL;DR: Estimating the model parameters from analytically computed spectral power, the proposed neural mass model fits very well to the observed EEG power spectra, particularly to the power spectral peaks within δ − (0 − 4 Hz) and α − (8 − 13 Hz) frequency ranges.
Abstract: Mathematical modeling is a powerful tool that enables researchers to describe the experimentally observed dynamics of complex systems. Starting with a robust model including model parameters, it is necessary to choose an appropriate set of model parameters to reproduce experimental data. However, estimating an optimal solution of the inverse problem, i.e., finding a set of model parameters that yields the best possible fit to the experimental data, is a very challenging problem. In the present work, we use different optimization algorithms based on a frequentist approach, as well as Markov Chain Monte Carlo methods based on Bayesian inference techniques, to solve the considered inverse problems. We first probe two case studies with synthetic data and study models described by a stochastic non-delayed linear second-order differential equation and a stochastic linear delay differential equation. In a third case study, a thalamo-cortical neural mass model is fitted to the EEG spectral power measured during general anesthesia induced by the anesthetics propofol and desflurane. We show that the proposed neural mass model fits very well to the observed EEG power spectra, particularly to the power spectral peaks within the δ (0 - 4 Hz) and α (8 - 13 Hz) frequency ranges. Furthermore, for each case study, we perform a practical identifiability analysis by estimating the confidence regions of the parameter estimates and interpret the corresponding correlation and sensitivity matrices. Our results indicate that, by estimating the model parameters from analytically computed spectral power, we are able to accurately estimate the unknown parameters while avoiding the computational costs due to numerical integration of the model equations.
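A minimal random-walk Metropolis sketch in the spirit of the Bayesian approach described above; the model here is a toy 1/f-plus-alpha-peak spectrum rather than the thalamo-cortical neural mass model, and the proposal scale and flat bounded priors are illustrative assumptions.

```python
# Random-walk Metropolis fit of a toy model spectrum to an observed power spectrum.
import numpy as np

rng = np.random.default_rng(0)
freqs = np.linspace(0.5, 30, 120)

def model_spectrum(f, amp, f0, width):
    # toy spectrum: 1/f background plus a Gaussian alpha peak
    return 1.0 / f + amp * np.exp(-0.5 * ((f - f0) / width) ** 2)

true = (2.0, 10.0, 1.5)
observed = model_spectrum(freqs, *true) * np.exp(0.1 * rng.normal(size=freqs.size))

def log_posterior(theta):
    amp, f0, width = theta
    if amp <= 0 or width <= 0 or not (0 < f0 < 30):      # flat priors with bounds
        return -np.inf
    resid = np.log(observed) - np.log(model_spectrum(freqs, amp, f0, width))
    return -0.5 * np.sum(resid ** 2) / 0.1 ** 2          # Gaussian likelihood on log power

theta = np.array([1.0, 8.0, 2.0])
logp = log_posterior(theta)
samples = []
for _ in range(5000):
    prop = theta + 0.05 * rng.normal(size=3)             # random-walk proposal
    logp_prop = log_posterior(prop)
    if np.log(rng.random()) < logp_prop - logp:          # Metropolis acceptance
        theta, logp = prop, logp_prop
    samples.append(theta.copy())

print("posterior mean:", np.mean(samples[2500:], axis=0), "true:", true)
```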

Journal ArticleDOI
TL;DR: This paper presents a novel method to segment the soma structures with complex geometry and shows that the proposed method can outperform the existing soma segmentation methods regarding the accuracy and can be used for enhancing the results of existing neuron tracing methods.
Abstract: Automatic neuron reconstruction is important since it accelerates the collection of 3D neuron models for neuronal morphological studies. The majority of previous neuron reconstruction methods only focused on tracing neuron fibres without considering the somatic surface. Thus, topological errors are often present around the soma area in the results obtained by these tracing methods. Segmentation of the soma structures can be embedded in the existing neuron tracing methods to reduce such topological errors. In this paper, we present a novel method to segment soma structures with complex geometry. It can be applied along with the existing methods in a fully automated pipeline. An approximate bounding block is first estimated based on a geodesic distance transform. Then the soma segmentation is obtained by evolving the surface with a set of morphological operators inside the initial bounding region. By evaluating the methods against the challenging images released by the BigNeuron project, we showed that the proposed method outperforms the existing soma segmentation methods in terms of accuracy. We also showed that the soma segmentation can be used for enhancing the results of existing neuron tracing methods.

Journal ArticleDOI
TL;DR: A sparse representation based decoding framework to explore the neural correlates between the computational audio features and functional brain activities under free listening conditions and shows that the auditory saliency feature can be well decoded from brain activity patterns by the methods.
Abstract: In recent years, natural stimuli such as audio excerpts or video streams have received increasing attention in neuroimaging studies. Compared with conventional simple, idealized and repeated artificial stimuli, natural stimuli contain more unrepeated, dynamic and complex information that is closer to real life. However, there is no direct correspondence between the stimuli and any sensory or cognitive functions of the brain, which makes it difficult to apply traditional hypothesis-driven analysis methods (e.g., the general linear model (GLM)). Moreover, traditional data-driven methods (e.g., independent component analysis (ICA)) lack quantitative modeling of stimuli, which may limit the power of analysis models. In this paper, we propose a sparse representation based decoding framework to explore the neural correlates between computational audio features and functional brain activities under free listening conditions. First, we adopt a biologically-plausible auditory saliency feature to quantitatively model the audio excerpts and meanwhile develop a sparse representation/dictionary learning method to learn an over-complete dictionary basis of brain activity patterns. Then, we reconstruct the auditory saliency features from the learned fMRI-derived dictionaries. After that, a group-wise analysis procedure is conducted to identify the associated brain regions and networks. Experiments showed that the auditory saliency feature can be well decoded from brain activity patterns by our methods, and the identified brain regions and networks are consistent and meaningful. Finally, our method is evaluated and compared with the ICA method, and the experimental results demonstrate the superiority of our approach.
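A minimal sketch of the decoding idea with toy data: a sparse dictionary of activity patterns is learned from the fMRI matrix, the per-time-point codes act as network time courses, and the saliency time series is regressed onto them. This is not the authors' pipeline; the matrix layout and sizes are illustrative assumptions.

```python
# Dictionary learning on fMRI time series and reconstruction of a saliency feature.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n_time, n_voxels = 200, 500
fmri = rng.normal(size=(n_time, n_voxels))            # toy time x voxel matrix
saliency = np.convolve(rng.normal(size=n_time), np.ones(5) / 5, mode="same")

dico = MiniBatchDictionaryLearning(n_components=30, alpha=1.0, random_state=0)
codes = dico.fit_transform(fmri)                      # sparse codes: time x atoms

reg = LinearRegression().fit(codes, saliency)         # saliency from atom time courses
print("in-sample R^2 of saliency reconstruction:", round(reg.score(codes, saliency), 3))
```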

Journal ArticleDOI
TL;DR: A novel robust reduced rank graph regression based method in a linear regression framework by considering correlations inherent in neuroimaging data and genetic data jointly, which could achieve competitive performance in terms of regression performance between brain structural measures and the Single Nucleotide Polymorphisms (SNPs).
Abstract: To characterize associations between genetic and neuroimaging data, a variety of analytic methods have been proposed in neuroimaging genetic studies. These methods have achieved promising performance by taking into account inherent correlation in either the neuroimaging data or the genetic data alone. In this study, we propose a novel robust reduced rank graph regression based method in a linear regression framework by considering correlations inherent in neuroimaging data and genetic data jointly. Particularly, we model the association analysis problem in a reduced rank regression framework with the genetic data as a feature matrix and the neuroimaging data as a response matrix by jointly considering correlations among the neuroimaging data as well as correlations between the genetic data and the neuroimaging data. A new graph representation of genetic data is adopted to exploit their inherent correlations, in addition to robust loss functions for both the regression and the data representation tasks, and a square-root-operator applied to the robust loss functions for achieving adaptive sample weighting. The resulting optimization problem is solved using an iterative optimization method whose convergence has been theoretically proved. Experimental results on the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset have demonstrated that our method could achieve competitive performance in terms of regression performance between brain structural measures and the Single Nucleotide Polymorphisms (SNPs), compared with state-of-the-art alternative methods.

Journal ArticleDOI
TL;DR: This editorial focuses on the question of correspondence between deep learning and how the brain works, and warns that any correspondences between deep learning methods and the brain may not generalize to all deep learning.
Abstract: Especially young colleagues are fascinated by the potential of deep learning for neuroscience. This was obvious at the recent Society for Neuroscience meeting in Washington DC, where the few posters that had the magical words in their title attracted large crowds of attendees who seemed almost exclusively in their twenties. The success of deep learning of data representation has led to impressive applications in image, video and speech processing. Compared to these, recent advances in applying reinforcement learning to playing games are outright mind blowing, with AlphaGo Zero achieving superhuman performance in just three days of training on a single machine with specialized hardware. It is, therefore, easy to predict that the interest in deep learning among young computational neuroscientists will only increase, but the reality may be more complex than they surmise. In this Editorial, I will focus on the question of correspondence between deep learning and how the brain works. I will not consider the many opportunities of applying deep learning as a supporting technology. The original breakthrough leading to the success of deep learning tested the method on an image recognition task, classifying handwritten digits. Correspondingly, most of the applications of deep learning to computational neuroscience are about understanding the visual system (including the posters at the recent Society for Neuroscience meeting). As pointed out in a recent review, one category of deep learning models, goal-driven hierarchical convolutional neural networks, has been very successful at predicting neural responses in several layers of primate visual cortex, including V1, V2, V4 and inferior temporal cortex (IT). But the authors also point out that this success is probably due to convolutional neural networks closely mimicking the overall architecture of cortex, in particular implementing features similar to receptive fields of increasing size across the hierarchy. This leads to a warning that any correspondences between deep learning methods and the brain may not generalize to all deep learning. In fact, though the field of machine learning has clearly been inspired by neuroscience, it has never seen this as a limitation on the methods it can use. For example, the breakthrough referred to earlier was a method to teach layers in a multilayer network one at a time, something that is hard to imagine occurring in a real brain. Deep learning networks also typically have many more layers than corresponding brain systems, and one of the current hypes is "very deep" models with tens of layers. A recent breakthrough, also used in AlphaGo Zero, is residual networks, where shortcut connections connect units in lower layers directly with units in higher layers. Residual networks are an example of deep learning methods that do not reflect real neural systems; this would be like V1 densely projecting directly to V4 or IT. Conversely, there are well known brain circuits that have clearly quite different architectures from visual cortex, like for example the olfactory system. Another difference between deep learning and human brains is the number of training examples required, with

Journal ArticleDOI
TL;DR: The method showed a consistent improvement as compared to other solutions, especially for subjects with enlarged lateral ventricles, and provided a superior inter-subject alignment in cortical regions, with the most marked improvement in the frontal lobe.
Abstract: During aging the brain undergoes a series of structural changes, in size, shape as well as tissue composition. In particular, cortical atrophy and ventricular enlargement are often present in the brain of elderly individuals. This poses serious challenges in the spatial registration of structural MR images. In this study, we addressed this open issue by proposing an enhanced framework for MR registration and segmentation. Our solution was compared with other approaches based on the tools available in SPM12, a widely used software package. Performance of the different methods was assessed on 229 T1-weighted images collected in healthy individuals, with age ranging between 55 and 90 years old. Our method showed a consistent improvement as compared to other solutions, especially for subjects with enlarged lateral ventricles. It also provided a superior inter-subject alignment in cortical regions, with the most marked improvement in the frontal lobe. We conclude that our method is a valid alternative to standard approaches based on SPM12, and is particularly suitable for the processing of structural MR images of brains with cortical atrophy and ventricular enlargement. The method is integrated in our software toolbox MRTool, which is freely available to the scientific community.

Journal ArticleDOI
TL;DR: An automated and efficient open-source software for the analysis of multi-site neuronal spike signals, specifically tailored to process data coming from different Multi-Electrode Array setups, guaranteeing, in those specific cases, automated processing.
Abstract: We implemented an automated and efficient open-source software for the analysis of multi-site neuronal spike signals. The software package, named SPICODYN, has been developed as a standalone Windows GUI application, using the C# programming language with Microsoft Visual Studio based on the .NET Framework 4.5 development environment. Accepted input data formats are HDF5, level 5 MAT and text files, containing recorded or generated time series spike signal data. SPICODYN processes such electrophysiological signals focusing on: spiking and bursting dynamics and functional-effective connectivity analysis. In particular, for inferring network connectivity, a new implementation of the transfer entropy method is presented dealing with multiple time delays (temporal extension) and with multiple binary patterns (high order extension). SPICODYN is specifically tailored to process data coming from different Multi-Electrode Array setups, guaranteeing, in those specific cases, automated processing. The optimized implementation of the Delayed Transfer Entropy and the High-Order Transfer Entropy algorithms allows performing accurate and rapid analysis on multiple spike trains from thousands of electrodes.
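As an illustration of the delayed transfer entropy idea (a re-implementation sketch, not the SPICODYN code), the snippet below estimates TE between two binary spike trains using one past bin per train and a single delay d:

```python
# Delayed transfer entropy between binary spike trains (single past bin, delay d).
import numpy as np

def delayed_transfer_entropy(x, y, d):
    """TE from x to y (bits) using the triplet (y_t, y_{t-1}, x_{t-d})."""
    t = np.arange(max(d, 1), len(y))
    states = np.stack([y[t], y[t - 1], x[t - d]], axis=1)     # (y_now, y_past, x_past)
    counts = np.zeros((2, 2, 2))
    for a, b, c in states:
        counts[a, b, c] += 1
    p = counts / counts.sum()
    p_yb_xc = p.sum(axis=0, keepdims=True)                    # p(y_past, x_past)
    p_ya_yb = p.sum(axis=2, keepdims=True)                    # p(y_now, y_past)
    p_yb = p.sum(axis=(0, 2), keepdims=True)                  # p(y_past)
    with np.errstate(divide="ignore", invalid="ignore"):
        ratio = (p * p_yb) / (p_ya_yb * p_yb_xc)              # p(ya|yb,xc) / p(ya|yb)
        return np.nansum(p * np.log2(ratio))

rng = np.random.default_rng(0)
x = (rng.random(5000) > 0.8).astype(int)
y = np.roll(x, 3) | (rng.random(5000) > 0.9).astype(int)      # y driven by x with lag 3
print({d: round(delayed_transfer_entropy(x, y, d), 4) for d in (1, 3, 5)})
```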

Journal ArticleDOI
TL;DR: A novel method for feature selection based on a single-layer neural network which incorporates cross-validation during feature selection and stability selection through iterative subsampling is presented which finds increased classifier accuracy, reduced computational cost and greater consistency with which relevant features are selected.
Abstract: Multi-voxel pattern analysis often necessitates feature selection due to the high dimensional nature of neuroimaging data. In this context, feature selection techniques serve the dual purpose of potentially increasing classification accuracy and revealing sets of features that best discriminate between classes. However, feature selection techniques in current, widespread use in the literature suffer from a number of deficits, including the need for extended computational time, lack of consistency in selecting features relevant to classification, and only marginal increases in classifier accuracy. In this paper we present a novel method for feature selection based on a single-layer neural network which incorporates cross-validation during feature selection and stability selection through iterative subsampling. Comparing our approach to popular alternative feature selection methods, we find increased classifier accuracy, reduced computational cost and greater consistency with which relevant features are selected. Furthermore, we demonstrate that importance mapping, a technique used to identify voxels relevant to classification, can lead to the selection of irrelevant voxels due to shared activation patterns across categories. Our method, owing to its relatively simple architecture, flexibility and speed, can provide a viable alternative for researchers to identify sets of features that best discriminate classes.
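A minimal sketch of the selection scheme: on repeated random subsamples a one-layer linear classifier is fitted (scikit-learn's LogisticRegression stands in here for the authors' single-layer network), the top-k features by absolute weight are recorded, and only features selected in a large fraction of rounds are kept.

```python
# Stability selection with a linear one-layer classifier over random subsamples.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, p, k, n_rounds = 100, 400, 20, 50
X = rng.normal(size=(n, p))
y = (X[:, :5].sum(axis=1) + 0.5 * rng.normal(size=n) > 0).astype(int)  # 5 informative voxels

selected = np.zeros(p)
for _ in range(n_rounds):
    idx = rng.choice(n, size=n // 2, replace=False)                  # random subsample
    clf = LogisticRegression(max_iter=1000).fit(X[idx], y[idx])
    top = np.argsort(np.abs(clf.coef_[0]))[-k:]                      # top-k weights
    selected[top] += 1

stable = np.where(selected / n_rounds >= 0.6)[0]                     # stability threshold
print("stable features:", stable)
```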

Journal ArticleDOI
TL;DR: A robust automatic soma detection method, developed based on machine learning techniques, that tries to identify all the somas in images of neurons from the FlyCircuit database.
Abstract: Computing and analyzing the neuronal structure is essential to studying the connectome. Two important tasks for such analysis are finding the soma and constructing the neuronal structure. Finding the soma is considered more important because it is required for some neuron tracing algorithms. We describe a robust automatic soma detection method developed based on machine learning techniques. The images of neurons were three-dimensional confocal microscopy images from the FlyCircuit database. The testing data were randomly selected raw images that contained noise and partial neuronal structures. The number of somas in the images was not known in advance. Our method tries to identify all the somas in the images. Experimental results showed that the method is efficient and robust.

Journal ArticleDOI
TL;DR: This work conducted a comprehensive survey of the coding ability of multiple cortical locations toward different stimulus attributes in V1, and quantified the decoding performance profile at different sub-areas and layers of V1.
Abstract: Visual cortex forms the basis of visual processing and plays important roles in visual encoding. By using the recently published Allen Brain Observatory dataset consisting of large-scale calcium imaging of mouse V1 activities under visual stimuli, we were able to obtain high-quality data capturing simultaneous neuronal activities at multiple sub-areas and cortical depths of V1. Using prediction models, we analyzed the activity profiles related to static and drifting grating stimuli. We conducted a comprehensive survey of the coding ability of multiple cortical locations toward different stimulus attributes. Specifically, we focused on orientations and spatial frequencies (for static stimuli), as well as moving directions and speed (for drifting stimuli). By using results produced from a prediction model, we quantified the decoding performance profile at different sub-areas and layers of V1. In addition, we analyzed the interactions and interference between different stimulus attributes. The insights obtained from these discoveries would contribute to more precise and quantitative understanding of V1 coding mechanisms.

Journal ArticleDOI
TL;DR: Both the qualitative and quantitative results show that the proposed patch-based label fusion with structured discriminant embedding method to automatically segment the hippocampal structure from the target image in a voxel-wise manner outperforms the conventional multi-atlas based segmentation methods.
Abstract: Automatic and accurate segmentation of hippocampal structures in medical images is of great importance in neuroscience studies. In multi-atlas based segmentation methods, to alleviate the misalignment when registering atlases to the target image, patch-based methods have been widely studied to improve the performance of label fusion. However, weights assigned to the fused labels are usually computed based on predefined features (e.g. image intensities) and are thus not necessarily optimal. Due to the lack of discriminating features, the original feature space defined by image intensities may limit the description accuracy. To solve this problem, we propose a patch-based label fusion with structured discriminant embedding method to automatically segment the hippocampal structure from the target image in a voxel-wise manner. Specifically, multi-scale intensity features and texture features are first extracted from the image patch for feature representation. Margin Fisher analysis (MFA) is then applied to the neighboring samples in the atlases for the target voxel, in order to learn a subspace in which the distance between intra-class samples is minimized and the distance between inter-class samples is simultaneously maximized. Finally, the k-nearest neighbor (kNN) classifier is employed in the learned subspace to determine the final label for the target voxel. In the experiments, we evaluate our proposed method by conducting hippocampus segmentation using the ADNI dataset. Both the qualitative and quantitative results show that our method outperforms the conventional multi-atlas based segmentation methods.
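A minimal sketch of the label-fusion step with toy patch features: a supervised embedding is learned from the neighbouring atlas samples and the target voxel's label is then taken by kNN in that space. NeighborhoodComponentsAnalysis is used here as a stand-in for the MFA step described in the paper, and all data are random placeholders.

```python
# Supervised embedding followed by kNN label fusion for one target voxel (toy data).
import numpy as np
from sklearn.neighbors import NeighborhoodComponentsAnalysis, KNeighborsClassifier

rng = np.random.default_rng(0)
n_atlas_patches, n_feat = 200, 75                      # e.g. intensity + texture features
X_atlas = rng.normal(size=(n_atlas_patches, n_feat))   # neighbouring atlas patches
y_atlas = rng.integers(0, 2, size=n_atlas_patches)     # hippocampus vs background labels
x_target = rng.normal(size=(1, n_feat))                # patch around the target voxel

nca = NeighborhoodComponentsAnalysis(n_components=10, random_state=0).fit(X_atlas, y_atlas)
knn = KNeighborsClassifier(n_neighbors=7).fit(nca.transform(X_atlas), y_atlas)
print("fused label for target voxel:", knn.predict(nca.transform(x_target))[0])
```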

Journal ArticleDOI
TL;DR: This paper develops a deep learning based feature representation method for neuron morphological data, in which the 3D neurons are first projected into binary images and features are then learned using an unsupervised deep neural network, i.e., stacked convolutional autoencoders (SCAEs).
Abstract: Recently released large-scale neuron morphological data has greatly facilitated the research in neuroinformatics. However, the sheer volume and complexity of these data pose significant challenges for efficient and accurate neuron exploration. In this paper, we propose an effective retrieval framework to address these problems, based on frontier techniques of deep learning and binary coding. For the first time, we develop a deep learning based feature representation method for neuron morphological data, in which the 3D neurons are first projected into binary images and features are then learned using an unsupervised deep neural network, i.e., stacked convolutional autoencoders (SCAEs). The deep features are subsequently fused with the hand-crafted features for more accurate representation. Considering that exhaustive search is usually very time-consuming in large-scale databases, we employ a novel binary coding method to compress feature vectors into short binary codes. Our framework is validated on a public data set including 58,000 neurons, showing promising retrieval precision and efficiency compared with state-of-the-art methods. In addition, we develop a novel neuron visualization program based on the techniques of augmented reality (AR), which can help users explore neuron morphologies in depth in an interactive and immersive manner.
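A minimal PyTorch sketch of the two ingredients, with illustrative sizes: a small convolutional autoencoder learns features from binary 2D projections in an unsupervised way, and the features are then compressed into short binary codes, here by simple random-projection hashing rather than the paper's dedicated binary coding method.

```python
# Convolutional autoencoder on toy binary projections, then binary codes by hashing.
import torch
import torch.nn as nn

class ConvAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(8, 16, 3, stride=2, padding=1), nn.ReLU())
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(16, 8, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(8, 1, 4, stride=2, padding=1), nn.Sigmoid())

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

model = ConvAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
images = (torch.rand(32, 1, 64, 64) > 0.9).float()     # toy binary neuron projections

for _ in range(5):                                      # a few unsupervised steps
    recon, _ = model(images)
    loss = nn.functional.binary_cross_entropy(recon, images)
    opt.zero_grad(); loss.backward(); opt.step()

# deep features -> short binary codes by random-projection hashing (illustrative)
with torch.no_grad():
    feats = model(images)[1].flatten(1)                 # (32, 16*16*16)
    codes = (feats @ torch.randn(feats.shape[1], 32) > 0).int()
print("binary codes shape:", tuple(codes.shape))
```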

Journal ArticleDOI
TL;DR: A manually annotated gold standard is provided for evaluation of the registration framework involved in template generation and mapping of the larval central nervous system (CNS) and spatial mapping of expression patterns from different larvae into a reference space defined by the standard template.
Abstract: The larval brain of the fruit fly Drosophila melanogaster is a small, tractable model system for neuroscience. Genes for fluorescent marker proteins can be expressed in defined, spatially restricted neuron populations. Here, we introduce the methods for 1) generating a standard template of the larval central nervous system (CNS), 2) spatial mapping of expression patterns from different larvae into a reference space defined by the standard template. We provide a manually annotated gold standard that serves for evaluation of the registration framework involved in template generation and mapping. A method for registration quality assessment enables the automatic detection of registration errors, and a semi-automatic registration method allows one to correct registrations, which is a prerequisite for a high-quality, curated database of expression patterns. All computational methods are available within the larvalign software package: https://github.com/larvalign/larvalign/releases/tag/v1.0.

Journal ArticleDOI
TL;DR: A new analysis method is proposed that computes one high-resolution average cortical profile per brain region, extracting myeloarchitectural information from T1-weighted MRI scans that are routinely acquired at a conventional field strength, and provides a step forward for studying cortical myeloarchitecture in vivo at conventional magnetic field strengths both in health and disease.
Abstract: Studies into cortical thickness in psychiatric diseases based on T1-weighted MRI frequently report on aberrations in the cerebral cortex. Due to limitations in image resolution for studies conducted at conventional MRI field strengths (e.g. 3 Tesla (T)) this information cannot be used to establish which of the cortical layers may be implicated. Here we propose a new analysis method that computes one high-resolution average cortical profile per brain region extracting myeloarchitectural information from T1-weighted MRI scans that are routinely acquired at a conventional field strength. To assess this new method, we acquired standard T1-weighted scans at 3 T and compared them with state-of-the-art ultra-high resolution T1-weighted scans optimised for intracortical myelin contrast acquired at 7 T. Average cortical profiles were computed for seven different brain regions. Besides a qualitative comparison between the 3 T scans, 7 T scans, and results from literature, we tested if the results from dynamic time warping-based clustering are similar for the cortical profiles computed from 7 T and 3 T data. In addition, we quantitatively compared cortical profiles computed for V1, V2 and V7 for both 7 T and 3 T data using a priori information on their relative myelin concentration. Although qualitative comparisons show that at an individual level average profiles computed for 7 T have more pronounced features than 3 T profiles the results from the quantitative analyses suggest that average cortical profiles computed from T1-weighted scans acquired at 3 T indeed contain myeloarchitectural information similar to profiles computed from the scans acquired at 7 T. The proposed method therefore provides a step forward to study cortical myeloarchitecture in vivo at conventional magnetic field strength both in health and disease.

Journal ArticleDOI
Peipeng Liang, Yachao Xu, Fei Lan, Daqing Ma, Kuncheng Li
TL;DR: Findings provide new evidence that midazolam-induced light sedation is related to the disruption of cortical functional integration, and have new implications for the neural basis of consciousness.
Abstract: While some previous work suggests that midazolam-induced light sedation results from functional disconnection within the resting state network, little is known about the underlying alterations of cerebral blood flow (CBF) associated with its effects. A randomized, double-blind, within-subject, cross-over design was adopted, in which 12 healthy young volunteers were scanned with arterial spin-labeling (ASL) perfusion MRI both before and after an injection of either saline or midazolam. The contrast of the MRI signal before and after midazolam administration revealed a CBF decrease in the bilateral mesial thalamus and precuneus/posterior cingulate cortex (PCC). These effects were confirmed after controlling for any effect of injection as well as head motion. These findings provide new evidence that midazolam-induced light sedation is related to the disruption of cortical functional integration, and have new implications for the neural basis of consciousness.