
Showing papers by "Erik C. B. Johnson published in 2019"


Posted ContentDOI
16 Oct 2019-bioRxiv
TL;DR: Overall, these brain-linked CSF biomarker panels represent a promising step toward a physiologically comprehensive tool that could meaningfully enhance the prognostic and therapeutic management of AD.
Abstract: Alzheimer’s disease (AD) features a complex web of pathological processes beyond amyloid accumulation and tau-mediated neuronal death. To meaningfully advance AD therapeutics, there is an urgent need for novel biomarkers that comprehensively reflect these disease mechanisms. Here we applied an integrative proteomics approach to identify cerebrospinal fluid (CSF) biomarkers linked to a diverse set of pathophysiological processes in the diseased brain. Using multiplex proteomics, we identified >3,500 proteins across 40 CSF samples from control and AD patients and >12,000 proteins across 48 postmortem brain tissues from control, asymptomatic AD (AsymAD), AD, and other neurodegenerative cases. Co-expression network analysis of the brain tissues resolved 44 protein modules, nearly half of which significantly correlated with AD neuropathology. Fifteen modules robustly overlapped with proteins quantified in the CSF, including 271 CSF markers highly altered in AD. These 15 overlapping modules were collapsed into five panels of brain-linked fluid markers representing a variety of cortical functions. Neuron-enriched synaptic and metabolic panels demonstrated decreased levels in the AD brain but increased levels in diseased CSF. Conversely, glial-enriched myelination and immunity panels were highly increased in both the brain and CSF. Using high-throughput proteomic analysis, proteins from these panels were validated in an independent CSF cohort of control, AsymAD, and AD samples. Remarkably, several validated markers were significantly altered in AsymAD CSF and appeared to stratify subpopulations within this cohort. Overall, these brain-linked CSF biomarker panels represent a promising step toward a physiologically comprehensive tool that could meaningfully enhance the prognostic and therapeutic management of AD.
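The abstract does not name the network algorithm or its parameters. As a point of reference, the sketch below illustrates a generic WGCNA-style co-expression analysis on synthetic data: a correlation-based adjacency matrix, hierarchical clustering into modules, and correlation of each module eigenprotein with a neuropathology trait. The matrix dimensions, soft-thresholding power, and module count are illustrative assumptions, not values taken from the study.

```python
# Hedged sketch of a WGCNA-style module analysis on synthetic data.
# None of the numbers below (dimensions, soft power, module count) come
# from the paper; they only mirror its general workflow.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n_samples, n_proteins = 48, 500                    # toy scale (paper: 48 tissues, >12,000 proteins)
expr = rng.normal(size=(n_samples, n_proteins))    # stand-in for log2 protein abundances
neuropath = rng.normal(size=n_samples)             # stand-in for a neuropathology score

# 1. Signed, soft-thresholded adjacency from pairwise Pearson correlation.
corr = np.corrcoef(expr, rowvar=False)
adjacency = ((1 + corr) / 2) ** 6                  # beta = 6 is a common illustrative choice

# 2. Hierarchical clustering of proteins on 1 - adjacency, cut into ~40 modules.
condensed = (1 - adjacency)[np.triu_indices(n_proteins, 1)]
modules = fcluster(linkage(condensed, method="average"), t=40, criterion="maxclust")

# 3. Module eigenprotein (first principal component) correlated with neuropathology.
for m in np.unique(modules):
    sub = expr[:, modules == m]
    if sub.shape[1] < 2:
        continue
    sub = (sub - sub.mean(axis=0)) / sub.std(axis=0)
    eigenprotein = np.linalg.svd(sub, full_matrices=False)[0][:, 0]
    r, p = pearsonr(eigenprotein, neuropath)
    if p < 0.05:
        print(f"module {m}: r = {r:.2f}, p = {p:.3f}")
```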

19 citations


Journal ArticleDOI
TL;DR: It is proposed that CompTryp standards can be generated for any protein of interest, providing an efficient method to improve the robustness and reproducibility of MS analysis of clinical and research samples.
Abstract: Here, we report a method for the generation of complementary tryptic (CompTryp) isotope-labeled peptide standards for the relative and absolute quantification of proteins by mass spectrometry (MS). These standards can be digested in parallel with either trypsin (Tryp-C) or trypsin-N (Tryp-N) to generate peptides that significantly overlap in primary sequence, with C- and N-terminal arginine and lysine residues, respectively. As a proof of concept, an isotope-labeled CompTryp standard was synthesized for Tau, a well-established biomarker in Alzheimer's disease (AD), which included both N- and C-terminal heavy isotope-labeled (15N and 13C) arginine residues and flanking amino acid sequences to monitor proteolytic digestion. Despite having exactly the same mass, the N- and C-terminal heavy Tau peptides are distinguishable by retention time and MS/MS fragmentation profiles. The isotope-labeled Tau CompTryp standard was added to human cerebrospinal fluid (CSF), followed by parallel digestion with Tryp-N and Tryp-C. The native and isotope-labeled peptide pairs were quantified by parallel reaction monitoring (PRM) in a single assay. Notably, both tryptic peptides were effective at quantifying Tau in human CSF, and both showed a significant difference in CSF Tau levels between AD cases and controls. Treating these CompTryp Tau peptide measurements as independent replicates also improved the coefficient of variation and the correlation with Tau immunoassays. More broadly, we propose that CompTryp standards can be generated for any protein of interest, providing an efficient method to improve the robustness and reproducibility of MS analysis of clinical and research samples.
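To make the quantification step concrete, here is a minimal sketch of how the two complementary peptide measurements could be combined into one protein estimate with a coefficient of variation, assuming the PRM peak areas have already been integrated upstream. The peptide names, peak areas, and spike amount are hypothetical placeholders, not values from the paper.

```python
# Hedged sketch: combining complementary CompTryp peptide measurements.
# All numbers are placeholders for illustration only.
import statistics

HEAVY_SPIKE_FMOL = 50.0                      # assumed amount of heavy standard spiked per sample

# Light (endogenous) / heavy (standard) PRM peak areas for each complementary peptide.
peptides = {
    "TrypN_peptide": {"light": 1.8e6, "heavy": 4.1e6},
    "TrypC_peptide": {"light": 2.0e6, "heavy": 4.5e6},
}

# Light/heavy ratio times the known spike amount gives an absolute estimate per peptide.
estimates = [v["light"] / v["heavy"] * HEAVY_SPIKE_FMOL for v in peptides.values()]

# Treating the two peptides as independent replicates, as the abstract describes.
mean_fmol = statistics.mean(estimates)
cv_percent = statistics.stdev(estimates) / mean_fmol * 100

print(f"Tau estimate: {mean_fmol:.1f} fmol, CV {cv_percent:.1f}%")
```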

12 citations


Posted ContentDOI
22 Apr 2019-bioRxiv
TL;DR: An ecosystem of neuroimaging data analysis pipelines is developed that utilizes open source algorithms to create standardized modules and end-to-end optimized approaches, built on a generalized processing framework that connects and extends existing open-source projects to provide large-scale data storage, reproducible algorithms, and workflow execution engines.
Abstract: Emerging neuroimaging datasets (collected through modalities such as Electron Microscopy, Calcium Imaging, or X-ray Microtomography) describe the location and properties of neurons and their connections at unprecedented scale, promising new ways of understanding the brain. These modern imaging techniques used to interrogate the brain can quickly accumulate gigabytes to petabytes of structural brain imaging data. Unfortunately, many neuroscience laboratories lack the computational expertise or resources to work with datasets of this size: computer vision tools are often not portable or scalable, and there is considerable difficulty in reproducing results or extending methods. We developed an ecosystem of neuroimaging data analysis pipelines that utilize open source algorithms to create standardized modules and end-to-end optimized approaches. As exemplars, we apply our tools to estimate synapse-level connectomes from electron microscopy data and cell distributions from X-ray microtomography data. To facilitate scientific discovery, we propose a generalized processing framework that connects and extends existing open-source projects to provide large-scale data storage, reproducible algorithms, and workflow execution engines. Our accessible methods and pipelines demonstrate that approaches across multiple neuroimaging experiments can be standardized and applied to diverse datasets. The techniques developed are demonstrated on neuroimaging datasets, but may be applied to similar problems in other domains.
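The abstract describes standardized, composable processing modules chained into end-to-end pipelines, but it does not expose the framework's actual API. The sketch below is a hypothetical illustration of that design pattern; the Step and Pipeline classes and the toy processing functions are assumptions made for this example, not the project's real interface.

```python
# Hedged sketch of composable pipeline modules with a standardized signature.
# The classes and functions here are hypothetical and only illustrate the
# "standardized modules, end-to-end pipeline" idea from the abstract.
from dataclasses import dataclass, field
from typing import Any, Callable, Dict, List

@dataclass
class Step:
    name: str
    run: Callable[[Any, Dict[str, Any]], Any]      # common signature: (data, params) -> data
    params: Dict[str, Any] = field(default_factory=dict)

@dataclass
class Pipeline:
    steps: List[Step]

    def execute(self, data: Any) -> Any:
        # Each step consumes the previous step's output, so modules developed
        # independently (e.g., membrane detection, synapse detection) compose cleanly.
        for step in self.steps:
            data = step.run(data, step.params)
            print(f"finished step: {step.name}")
        return data

# Toy usage with placeholder image data and parameters.
normalize = Step("normalize", lambda d, p: [x / p["scale"] for x in d], {"scale": 255})
threshold = Step("threshold", lambda d, p: [x > p["cutoff"] for x in d], {"cutoff": 0.5})
mask = Pipeline([normalize, threshold]).execute([12, 200, 90])
print(mask)
```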

7 citations


Posted ContentDOI
31 Oct 2019-bioRxiv
TL;DR: This work integrated functional genomics data from postmortem brain, including label-free quantitative proteomics and RNA-seq-based transcriptomics in an unprecedented dataset of over 1,000 individuals, to identify conserved, high-confidence proteomic changes during the progression of dementia that were absent in other neurodegenerative disorders.
Abstract: Data-driven analyses of the human brain across neurodegenerative diseases have the potential to identify disease-specific and shared biological processes. As a core analysis of the Accelerating Medicines Partnership – Alzheimer’s Disease (AMP-AD) consortium, we integrated functional genomics data from postmortem brain, including label-free quantitative proteomics and RNA-seq-based transcriptomics, in an unprecedented dataset of over 1,000 individuals across five cohorts representing Alzheimer’s disease (AD), asymptomatic AD, progressive supranuclear palsy (PSP), and controls. We identified conserved, high-confidence proteomic changes during the progression of dementia that were absent in other neurodegenerative disorders. We defined early changes in asymptomatic AD cases that included microglial, astrocyte, and immune response modules, and later changes related to synaptic processes and mitochondria; many, but not all, of these were conserved at the transcriptomic level. This included a novel module, C3, enriched in MAPK signaling and identified only in the proteomic networks. To understand the relationship of core molecular processes with causal genetic drivers, we identified glial, immune, and cell-cell interaction processes in modules C8 and C10, which were robustly preserved in multiple independent datasets, up-regulated early in the disease course, and enriched in AD common genetic risk. In contrast to AD, PSP genetic risk was enriched in module C1, which represented synaptic processes, clearly demonstrating that despite shared pathology such as synaptic loss and glial inflammatory changes, AD and PSP have distinct causal drivers. These conserved, high-confidence proteomic changes enriched in genetic risk represent new targets for drug discovery.
Highlights: We distinguish robust early and late proteomic changes in AD in multiple cohorts. We identify changes in dementias that are not preserved in other neurodegenerative diseases. AD genetic risk is enriched in early up-regulated glial-immune modules, and PSP genetic risk in synaptic modules. Almost half of the variance in protein expression reflects gene expression, but an equal fraction is post-transcriptional or post-translational.
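As context for the last highlight, the sketch below shows one simple way such a variance-explained figure can be computed: regress each protein on its cognate mRNA across individuals and summarize the per-gene R². The synthetic data are constructed so that roughly half the variance is shared, purely to illustrate the calculation; this is not the consortium's actual model or data.

```python
# Hedged sketch: fraction of protein-level variance explained by mRNA levels,
# computed per gene on synthetic data (not the AMP-AD data or model).
import numpy as np

rng = np.random.default_rng(1)
n_subjects, n_genes = 400, 200
mrna = rng.normal(size=(n_subjects, n_genes))
# Simulate proteins that track mRNA plus post-transcriptional/-translational noise,
# tuned so that about half the protein variance is shared with mRNA.
protein = 0.7 * mrna + rng.normal(scale=0.7, size=mrna.shape)

# R^2 of a univariate mRNA -> protein regression equals the squared correlation.
r2_per_gene = [np.corrcoef(mrna[:, g], protein[:, g])[0, 1] ** 2 for g in range(n_genes)]

print(f"median variance explained by mRNA: {np.median(r2_per_gene):.2f}")
```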

5 citations



Proceedings ArticleDOI
01 Nov 2019
TL;DR: This work investigates transfer learning for Electron Microscopy datasets, exploring transfer to different regions within a dataset, between datasets from different species, and between datasets collected with different image acquisition techniques, and it examines the impact of algorithm performance at different workflow stages.
Abstract: Neuroscientists are collecting Electron Microscopy (EM) datasets at increasingly faster rates. This modality offers an unprecedented map of brain structure at the resolution of individual neurons and their synaptic connections. Despite sophisticated image processing algorithms such as Flood Filling Networks, these huge datasets often require large amounts of hand-labeled data for algorithm training, followed by significant human proofreading. Many of these challenges are common across neuroscience modalities (and in other domains), but we use EM as a use case because the scale of this data emphasizes the opportunity and impact of rapidly transferring methods to new datasets. We investigate transfer learning for these workflows, exploring transfer to different regions within a dataset, between datasets from different species, and for datasets collected with different image acquisition techniques. For EM data, we investigate the impact of algorithm performance at different workflow stages. Finally, we assess the impact of candidate transfer learning strategies in environments with no training labels. This work provides a library of algorithms, pipelines, and baselines on established datasets. We enable rapid assessment and improvements to processing pipelines, and an opportunity to quickly and effectively analyze new datasets for the neuroscience community.
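The sketch below shows the generic shape of such a transfer-learning workflow in PyTorch: reuse an encoder treated as pretrained on a source dataset, freeze its weights, and fine-tune a new segmentation head on a small set of labeled target-domain patches. The tiny architecture, tensor shapes, and training loop are illustrative assumptions and do not reproduce the paper's models (such as Flood Filling Networks) or its pipelines.

```python
# Hedged sketch of fine-tuning on a new EM dataset: frozen "pretrained" encoder,
# trainable head, a few labeled target patches. Everything here is a placeholder.
import torch
import torch.nn as nn

# Stand-in for an encoder pretrained on a source EM dataset.
encoder = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                        nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
decoder = nn.Conv2d(32, 2, 1)                     # new head: 2 classes (e.g., membrane vs. background)

for p in encoder.parameters():                    # freeze transferred weights,
    p.requires_grad = False                       # adapt only the new head

model = nn.Sequential(encoder, decoder)
optimizer = torch.optim.Adam(decoder.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# A handful of labeled target-domain patches (random placeholders here).
images = torch.randn(8, 1, 64, 64)
labels = torch.randint(0, 2, (8, 64, 64))

for epoch in range(5):                            # short fine-tuning loop
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.3f}")
```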

1 citation