
Showing papers in "The Journal of Neuroscience" (2015)


Journal ArticleDOI
TL;DR: It is quantitatively shown that there indeed exists an explicit gradient for feature complexity in the ventral pathway of the human brain, and this provides strong support for the hypothesis that object categorization is a guiding principle in the functional organization of the primate ventral stream.
Abstract: Converging evidence suggests that the primate ventral visual pathway encodes increasingly complex stimulus features in downstream areas. We quantitatively show that there indeed exists an explicit gradient for feature complexity in the ventral pathway of the human brain. This was achieved by mapping thousands of stimulus features of increasing complexity across the cortical sheet using a deep neural network. Our approach also revealed a fine-grained functional specialization of downstream areas of the ventral stream. Furthermore, it allowed decoding of representations from human brain activity at an unsurpassed degree of accuracy, confirming the quality of the developed approach. Stimulus features that successfully explained neural responses indicate that population receptive fields were explicitly tuned for object categorization. This provides strong support for the hypothesis that object categorization is a guiding principle in the functional organization of the primate ventral stream.

879 citations
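As an illustration of the encoding-model idea described in the abstract above (predicting each voxel's response from stimulus features), here is a minimal, hedged Python sketch. A random feature matrix stands in for deep-network layer activations, and plain ridge regression maps features to voxel responses; all dimensions, the penalty value, and variable names are illustrative assumptions rather than the paper's actual pipeline.

```python
# Minimal sketch of a feature-to-voxel encoding model scored on held-out stimuli.
# The random "features" matrix is a stand-in for deep network layer activations;
# all names, sizes, and the ridge penalty are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(6)
n_stim, n_features, n_voxels = 200, 100, 10
features = rng.standard_normal((n_stim, n_features))           # stand-in for DNN activations
true_w = rng.standard_normal((n_features, n_voxels)) * 0.3
voxels = features @ true_w + rng.standard_normal((n_stim, n_voxels))

train, test = slice(0, 150), slice(150, 200)
lam = 10.0                                                      # ridge penalty, assumed
X = features[train]
W = np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ voxels[train])

pred = features[test] @ W
for v in range(3):
    r = np.corrcoef(pred[:, v], voxels[test, v])[0, 1]
    print(f"voxel {v}: held-out prediction r = {r:.2f}")
```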


Journal ArticleDOI
TL;DR: Granger causality (G-causality) analysis is used for the characterization of functional circuits underpinning perception, cognition, behavior, and consciousness in neuroscience.
Abstract: Introduction: A key challenge in neuroscience and, in particular, neuroimaging, is to move beyond identification of regional activations toward the characterization of functional circuits underpinning perception, cognition, behavior, and consciousness. Granger causality (G-causality) analysis…

657 citations
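Since the review above centers on Granger causality, the following minimal Python sketch shows the core idea for a single pair of signals: x "G-causes" y if adding x's past to an autoregressive model of y reduces the prediction error. The toy data, lag order, and function name are illustrative assumptions, not material from the article.

```python
# Minimal pairwise Granger-causality sketch using least-squares AR fits.
# F(x->y) = ln(var_restricted / var_full); positive values indicate that
# x's past improves prediction of y. Toy signals and lag order are assumed.
import numpy as np

def granger_xy(x, y, order=2):
    """Return ln(var_restricted / var_full) for the influence of x on y."""
    n = len(y)
    Y = y[order:]
    lags_y = np.column_stack([y[order - k:n - k] for k in range(1, order + 1)])
    lags_x = np.column_stack([x[order - k:n - k] for k in range(1, order + 1)])
    ones = np.ones((n - order, 1))

    # Restricted model: y predicted from its own past only.
    Xr = np.hstack([ones, lags_y])
    res_r = Y - Xr @ np.linalg.lstsq(Xr, Y, rcond=None)[0]

    # Full model: y predicted from the past of both y and x.
    Xf = np.hstack([ones, lags_y, lags_x])
    res_f = Y - Xf @ np.linalg.lstsq(Xf, Y, rcond=None)[0]

    return np.log(np.var(res_r) / np.var(res_f))

rng = np.random.default_rng(0)
x = rng.standard_normal(2000)
y = np.zeros(2000)
for t in range(2, 2000):            # y is driven by past x, so F(x->y) > 0
    y[t] = 0.4 * y[t - 1] + 0.5 * x[t - 1] + 0.1 * rng.standard_normal()

print(granger_xy(x, y))   # clearly positive
print(granger_xy(y, x))   # near zero
```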


Journal ArticleDOI
TL;DR: Attempts to understand the headache pain itself point to activation of the trigeminovascular pathway as a prerequisite for explaining why the pain is restricted to the head, often affecting the periorbital area and the eye, and intensifies when intracranial pressure increases.
Abstract: Migraine is a common, multifactorial, disabling, recurrent, hereditary neurovascular headache disorder. It usually strikes sufferers a few times per year in childhood and then progresses to a few times per week in adulthood, particularly in females. Attacks often begin with warning signs (prodromes) and aura (transient focal neurological symptoms) whose origin is thought to involve the hypothalamus, brainstem, and cortex. Once the headache develops, it typically throbs, intensifies with an increase in intracranial pressure, and presents itself in association with nausea, vomiting, and abnormal sensitivity to light, noise, and smell. It can also be accompanied by abnormal skin sensitivity (allodynia) and muscle tenderness. Collectively, the symptoms that accompany migraine from the prodromal stage through the headache phase suggest that multiple neuronal systems function abnormally. As a consequence of the disease itself or its genetic underpinnings, the migraine brain is altered structurally and functionally. These molecular, anatomical, and functional abnormalities provide a neuronal substrate for an extreme sensitivity to fluctuations in homeostasis, a decreased ability to adapt, and the recurrence of headache. Advances in understanding the genetic predisposition to migraine, and the discovery of multiple susceptible gene variants (many of which encode proteins that participate in the regulation of glutamate neurotransmission and proper formation of synaptic plasticity) define the most compelling hypothesis for the generalized neuronal hyperexcitability and the anatomical alterations seen in the migraine brain. Regarding the headache pain itself, attempts to understand its unique qualities point to activation of the trigeminovascular pathway as a prerequisite for explaining why the pain is restricted to the head, often affecting the periorbital area and the eye, and intensifies when intracranial pressure increases.

521 citations


Journal ArticleDOI
TL;DR: In this article, a longitudinal analysis was conducted to test whether individual differences in pubertal development and risk-taking behavior were contributors to longitudinal change in nucleus accumbens activity.
Abstract: Prior studies have highlighted adolescence as a period of increased risk-taking, which is postulated to result from an overactive reward system in the brain. Longitudinal studies are pivotal for testing these brain-behavior relations because individual slopes are more sensitive for detecting change. The aim of the current study was twofold: (1) to test patterns of age-related change (i.e., linear, quadratic, and cubic) in activity in the nucleus accumbens, a key reward region in the brain, in relation to change in puberty (self-report and testosterone levels), laboratory risk-taking and self-reported risk-taking tendency; and (2) to test whether individual differences in pubertal development and risk-taking behavior were contributors to longitudinal change in nucleus accumbens activity. We included 299 human participants at the first time point and 254 participants at the second time point, ranging between 8 and 27 years of age; time points were separated by a 2-year interval. Neural responses to rewards, pubertal development (self-report and testosterone levels), laboratory risk-taking (balloon analog risk task; BART), and self-reported risk-taking tendency (Behavior Inhibition System/Behavior Activation System questionnaire) were collected at both time points. The longitudinal analyses confirmed the quadratic age pattern for nucleus accumbens activity to rewards (peaking in adolescence), and the same quadratic pattern was found for laboratory risk-taking (BART). Nucleus accumbens activity change was further related to change in testosterone and self-reported reward-sensitivity (BAS Drive). Thus, this longitudinal analysis provides new insight into risk-taking and reward sensitivity in adolescence: (1) confirming an adolescent peak in nucleus accumbens activity, and (2) underlining a critical role for pubertal hormones and individual differences in risk-taking tendency.

437 citations
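To make the model-comparison step above concrete (testing linear versus quadratic versus cubic age effects), here is a deliberately simplified Python sketch on synthetic cross-sectional data using ordinary least squares and AIC; the actual study fit longitudinal models, so the data, parameters, and criterion here are all illustrative assumptions.

```python
# Toy comparison of linear, quadratic, and cubic age effects on a response
# measure (e.g., reward-related accumbens activity). The synthetic data and
# the plain OLS/AIC comparison are illustrative assumptions; the study itself
# used longitudinal (mixed-effects) models.
import numpy as np

rng = np.random.default_rng(2)
age = rng.uniform(8, 27, 300)
activity = -0.02 * (age - 16) ** 2 + rng.standard_normal(300) * 0.8  # peaks mid-adolescence

def aic(y, yhat, k):
    resid = y - yhat
    n = len(y)
    return n * np.log(np.sum(resid ** 2) / n) + 2 * k

for degree in (1, 2, 3):
    coeffs = np.polyfit(age, activity, degree)
    yhat = np.polyval(coeffs, age)
    print(degree, round(aic(activity, yhat, degree + 1), 1))  # quadratic should win
```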


Journal ArticleDOI
TL;DR: It is found, in two separate human studies, that 1/f electrophysiological noise increases with aging, and it is observed that this age-related 1/f noise statistically mediates age-related working memory decline.
Abstract: Aging is associated with performance decrements across multiple cognitive domains. The neural noise hypothesis, a dominant view of the basis of this decline, posits that aging is accompanied by an increase in spontaneous, noisy baseline neural activity. Here we analyze data from two different groups of human subjects: intracranial electrocorticography from 15 participants over a 38 year age range (15-53 years) and scalp EEG data from healthy younger (20-30 years) and older (60-70 years) adults to test the neural noise hypothesis from a 1/f noise perspective. Many natural phenomena, including electrophysiology, are characterized by 1/f noise. The defining characteristic of 1/f is that the power of the signal frequency content decreases rapidly as a function of the frequency (f) itself. The slope of this decay, the noise exponent (χ), is often <-1 for electrophysiological data and has been shown to approach white noise (defined as χ = 0) with increasing task difficulty. We observed, in both electrophysiological datasets, that aging is associated with a flatter (more noisy) 1/f power spectral density, even at rest, and that visual cortical 1/f noise statistically mediates age-related impairments in visual working memory. These results provide electrophysiological support for the neural noise hypothesis of aging. Significance statement: Understanding the neurobiological origins of age-related cognitive decline is of critical scientific, medical, and public health importance, especially considering the rapid aging of the world's population. We find, in two separate human studies, that 1/f electrophysiological noise increases with aging. In addition, we observe that this age-related 1/f noise statistically mediates age-related working memory decline. These results significantly add to this understanding and contextualize a long-standing problem in cognition by encapsulating age-related cognitive decline within a neurocomputational model of 1/f noise-induced deficits in neural communication.

424 citations
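For readers unfamiliar with the 1/f framing above, the sketch below shows one common way to estimate the noise exponent χ: fit a straight line to log power versus log frequency, so that a flatter (less negative) slope corresponds to "noisier" activity. The simulated signal, sampling rate, and fitting band are illustrative assumptions, not the study's analysis pipeline.

```python
# Minimal sketch of estimating the 1/f exponent chi from a power spectrum by a
# straight-line fit in log-log space. The simulated signal, frequency band,
# and sampling rate are illustrative assumptions.
import numpy as np
from scipy.signal import welch

fs = 1000.0                                   # sampling rate (Hz), assumed
rng = np.random.default_rng(1)
white = rng.standard_normal(60 * int(fs))
pink = np.cumsum(white)                       # integrated noise: steeper-than-white spectrum

f, pxx = welch(pink, fs=fs, nperseg=4096)
band = (f >= 2) & (f <= 50)                   # fit a broadband range, avoiding DC

# log10(P) ~ chi * log10(f) + c ; chi is the spectral slope ("noise exponent")
chi, intercept = np.polyfit(np.log10(f[band]), np.log10(pxx[band]), 1)
print(f"estimated exponent chi = {chi:.2f}")  # near -2 for integrated noise; flatter = "noisier"
```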


Journal ArticleDOI
TL;DR: This work underlines the contribution of dopamine to effort-based decision making and uncovers a specific role of noradrenaline in energizing behavior to face challenges.
Abstract: Motivation determines multiple aspects of behavior, including action selection and energization of behavior. Several components of the underlying neural systems have been examined closely, but the specific role of the different neuromodulatory systems in motivation remains unclear. Here, we compare directly the activity of dopaminergic neurons from the substantia nigra pars compacta and noradrenergic neurons from the locus coeruleus in monkeys performing a task manipulating the reward/effort trade-off. Consistent with previous reports, dopaminergic neurons encoded the expected reward, but we found that they also anticipated the upcoming effort cost in connection with its negative influence on action selection. Conversely, the firing of noradrenergic neurons increased with both pupil dilation and effort production in relation to the energization of behavior. Therefore, this work underlines the contribution of dopamine to effort-based decision making and uncovers a specific role of noradrenaline in energizing behavior to face challenges.

378 citations


Journal ArticleDOI
TL;DR: It is shown in a murine TBI model that CSF movement through the recently characterized glymphatic pathway transports biomarkers to blood via the cervical lymphatics; it is concluded that routine TBI patient management may limit the clinical utility of blood-based biomarkers because their brain-to-blood transport depends on glymphatic activity.
Abstract: The nonspecific and variable presentation of traumatic brain injury (TBI) has motivated an intense search for blood-based biomarkers that can objectively predict the severity of injury. However, it is not known how cytosolic proteins released from traumatized brain tissue reach the peripheral blood. Here we show in a murine TBI model that CSF movement through the recently characterized glymphatic pathway transports biomarkers to blood via the cervical lymphatics. Clinically relevant manipulation of glymphatic activity, including sleep deprivation and cisternotomy, suppressed or eliminated TBI-induced increases in serum S100β, GFAP, and neuron specific enolase. We conclude that routine TBI patient management may limit the clinical utility of blood-based biomarkers because their brain-to-blood transport depends on glymphatic activity.

363 citations


Journal ArticleDOI
TL;DR: The optical and protein engineering strategies that form the basis of this “all-optical” approach are now sufficiently advanced to enable single-neuron and single-action potential precision for simultaneous readout and manipulation from the same functionally defined neurons in the intact brain.
Abstract: There have been two recent revolutionary advances in neuroscience: First, genetically encoded activity sensors have brought the goal of optical detection of single action potentials in vivo within reach. Second, optogenetic actuators now allow the activity of neurons to be controlled with millisecond precision. These revolutions have now been combined, together with advanced microscopies, to allow “all-optical” readout and manipulation of activity in neural circuits with single-spike and single-neuron precision. This is a transformational advance that will open new frontiers in neuroscience research. Harnessing the power of light in the all-optical approach requires coexpression of genetically encoded activity sensors and optogenetic probes in the same neurons, as well as the ability to simultaneously target and record the light from the selected neurons. It has recently become possible to combine sensors and optical strategies that are sufficiently sensitive and cross talk free to enable single-action-potential sensitivity and precision for both readout and manipulation in the intact brain. The combination of simultaneous readout and manipulation from the same genetically defined cells will enable a wide range of new experiments as well as inspire new technologies for interacting with the brain. The advances described in this review herald a future where the traditional tools used for generations by physiologists to study and interact with the brain—stimulation and recording electrodes—can largely be replaced by light. We outline potential future developments in this field and discuss how the all-optical strategy can be applied to solve fundamental problems in neuroscience. SIGNIFICANCE STATEMENT This review describes the nexus of dramatic recent developments in optogenetic probes, genetically encoded activity sensors, and novel microscopies, which together allow the activity of neural circuits to be recorded and manipulated entirely using light. The optical and protein engineering strategies that form the basis of this “all-optical” approach are now sufficiently advanced to enable single-neuron and single-action potential precision for simultaneous readout and manipulation from the same functionally defined neurons in the intact brain. These advances promise to illuminate many fundamental challenges in neuroscience, including transforming our search for the neural code and the links between neural circuit activity and behavior.

323 citations


Journal ArticleDOI
TL;DR: Glycoprotein (G)-deleted rabies virus is used to identify the direct monosynaptic inputs to genetically targeted neurons, and this approach has been widely used for sophisticated circuit tracing.
Abstract: Introduction: Since the introduction of methods using glycoprotein (G)-deleted rabies virus (RVdG) to identify the direct monosynaptic inputs to genetically targeted neurons eight years ago (Wickersham et al., 2007b), this approach has been widely used for sophisticated circuit tracing…

320 citations


Journal ArticleDOI
TL;DR: The results suggest that a bilateral attentional control network comprising the intraparietal sulcus, precuneus, and dorsolateral prefrontal cortex is involved in selecting what dimensions are relevant to the task at hand, effectively updating the task representation through trial and error.
Abstract: In recent years, ideas from the computational field of reinforcement learning have revolutionized the study of learning in the brain, famously providing new, precise theories of how dopamine affects learning in the basal ganglia. However, reinforcement learning algorithms are notorious for not scaling well to multidimensional environments, as is required for real-world learning. We hypothesized that the brain naturally reduces the dimensionality of real-world problems to only those dimensions that are relevant to predicting reward, and conducted an experiment to assess by what algorithms and with what neural mechanisms this “representation learning” process is realized in humans. Our results suggest that a bilateral attentional control network comprising the intraparietal sulcus, precuneus, and dorsolateral prefrontal cortex is involved in selecting what dimensions are relevant to the task at hand, effectively updating the task representation through trial and error. In this way, cortical attention mechanisms interact with learning in the basal ganglia to solve the “curse of dimensionality” in reinforcement learning.

297 citations
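The abstract above argues that learners reduce a multidimensional problem to the reward-relevant dimensions. A minimal "feature-based" reinforcement-learning sketch of that idea is given below, in which value is carried by individual feature weights rather than by whole compound stimuli; the dimensions, learning rate, softmax temperature, and reward probabilities are illustrative assumptions, not the authors' fitted model.

```python
# Feature-based reinforcement learning toy: value is learned over feature
# values rather than whole multidimensional stimuli. All task parameters are
# illustrative assumptions.
import numpy as np

rng = np.random.default_rng(3)
dims = {"color": ["red", "green", "blue"],
        "shape": ["circle", "square", "triangle"]}
w = {d: {v: 0.0 for v in vals} for d, vals in dims.items()}  # one weight per feature value
alpha, beta = 0.2, 3.0
target = ("color", "red")                                    # only this feature predicts reward

for trial in range(500):
    # Three options, each a random conjunction of one value per dimension.
    options = [{d: rng.choice(vals) for d, vals in dims.items()} for _ in range(3)]
    values = np.array([sum(w[d][o[d]] for d in dims) for o in options])
    probs = np.exp(beta * values) / np.sum(np.exp(beta * values))   # softmax choice
    choice = options[rng.choice(3, p=probs)]
    reward = float(rng.random() < (0.75 if choice[target[0]] == target[1] else 0.25))
    delta = reward - sum(w[d][choice[d]] for d in dims)              # prediction error
    for d in dims:                                                   # credit each chosen feature
        w[d][choice[d]] += alpha * delta

print(w["color"])   # the rewarded feature value acquires the highest weight
```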


Journal ArticleDOI
TL;DR: It is reported that exposure to subchronic variable stress (SCVS) induces depression-associated behaviors in female mice, whereas males are resilient as they do not develop these behavioral abnormalities, and transcriptional analysis of nucleus accumbens revealed markedly different patterns of stress regulation of gene expression between the sexes.
Abstract: Depression and anxiety disorders are more prevalent in females, but the majority of research in animal models, the first step in finding new treatments, has focused predominantly on males. Here we report that exposure to subchronic variable stress (SCVS) induces depression-associated behaviors in female mice, whereas males are resilient as they do not develop these behavioral abnormalities. In concert with these different behavioral responses, transcriptional analysis of nucleus accumbens (NAc), a major brain reward region, by use of RNA sequencing (RNA-seq) revealed markedly different patterns of stress regulation of gene expression between the sexes. Among the genes displaying sex differences was DNA methyltransferase 3a (Dnmt3a), which shows a greater induction in females after SCVS. Interestingly, Dnmt3a expression levels were increased in the NAc of depressed humans, an effect seen in both males and females. Local overexpression of Dnmt3a in NAc rendered male mice more susceptible to SCVS, whereas Dnmt3a knock-out in this region rendered females more resilient, directly implicating this gene in stress responses. Associated with this enhanced resilience of female mice upon NAc knock-out of Dnmt3a was a partial shift of the NAc female transcriptome toward the male pattern after SCVS. These data indicate that males and females undergo different patterns of transcriptional regulation in response to stress and that a DNA methyltransferase in NAc contributes to sex differences in stress vulnerability. SIGNIFICANCE STATEMENT Women have a higher incidence of depression than men. However, preclinical models, the first step in developing new diagnostics and therapeutics, have been performed mainly on male subjects. Using a stress-based animal model of depression that causes behavioral effects in females but not males, we demonstrate a sex-specific transcriptional profile in brain reward circuitry. This transcriptional profile can be altered by removal of an epigenetic mechanism, which normally suppresses DNA transcription, creating a hybrid male/female transcriptional pattern. Removal of this epigenetic mechanism also induces behavioral resilience to stress in females. These findings shed new light onto molecular factors controlling sex differences in stress response.

Journal ArticleDOI
TL;DR: Robust evidence is provided for this network's active contribution to working memory by revealing dynamic reconfiguration in its interactions with other networks, and an explanation is offered within the global workspace theoretical framework.
Abstract: The default mode network (DMN) has been traditionally assumed to hinder behavioral performance in externally focused, goal-directed paradigms and to provide no active contribution to human cognition. However, recent evidence suggests greater DMN activity in an array of tasks, especially those that involve self-referential and memory-based processing. Although data that robustly demonstrate a comprehensive functional role for the DMN remain relatively scarce, the global workspace framework, which implicates the DMN in global information integration for conscious processing, can potentially provide an explanation for the broad range of higher-order paradigms that report DMN involvement. We used graph theoretical measures to assess the contribution of the DMN to global functional connectivity dynamics in 22 healthy volunteers during an fMRI-based n-back working-memory paradigm with parametric increases in difficulty. Our predominant finding is that brain modularity decreases with greater task demands, thus adopting a more global workspace configuration, in direct relation to increases in reaction times to correct responses. Flexible default mode regions dynamically switch community memberships and display significant changes in their nodal participation coefficient and strength, which may reflect the observed whole-brain changes in functional connectivity architecture. These findings have important implications for our understanding of healthy brain function, as they suggest a central role for the DMN in higher cognitive processing. SIGNIFICANCE STATEMENT The default mode network (DMN) has been shown to increase its activity during the absence of external stimulation, and hence was historically assumed to disengage during goal-directed tasks. Recent evidence, however, implicates the DMN in self-referential and memory-based processing. We provide robust evidence for this network's active contribution to working memory by revealing dynamic reconfiguration in its interactions with other networks and offer an explanation within the global workspace theoretical framework. These promising findings may help redefine our understanding of the exact DMN role in human cognition.
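One of the graph measures mentioned above, the nodal participation coefficient, quantifies how evenly a region's connections are distributed across network modules: it approaches 1 when connections are spread evenly across many modules and is 0 when they are confined to a single module. A small self-contained Python sketch follows; the toy connectivity matrix and module labels are illustrative assumptions.

```python
# Nodal participation coefficient for a weighted connectivity matrix:
# P_i = 1 - sum_s (k_is / k_i)^2, summed over modules s.
# The toy matrix and module assignment are illustrative assumptions.
import numpy as np

def participation_coefficient(W, modules):
    """Return P_i for each node of weighted adjacency matrix W."""
    k = W.sum(axis=1)                               # total nodal strength
    P = np.ones(len(W))
    for s in np.unique(modules):
        k_is = W[:, modules == s].sum(axis=1)       # strength into module s
        P -= (k_is / k) ** 2
    return P

# Toy 4-node network: node 0 connects to both modules, node 3 to only one.
W = np.array([[0, 1, 1, 1],
              [1, 0, 1, 0],
              [1, 1, 0, 0],
              [1, 0, 0, 0]], float)
modules = np.array([0, 0, 0, 1])
print(participation_coefficient(W, modules))        # node 3 gets 0 (single-module hub)
```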

Journal ArticleDOI
TL;DR: The two-state model of motor learning is a close approximation of sensorimotor learning, but it is unable to describe adequately the various implicit learning operations that forge the learning curve.
Abstract: A popular model of human sensorimotor learning suggests that a fast process and a slow process work in parallel to produce the canonical learning curve (Smith et al., 2006). Recent evidence supports the subdivision of sensorimotor learning into explicit and implicit processes that simultaneously subserve task performance (Taylor et al., 2014). We set out to test whether these two accounts of learning processes are homologous. Using a recently developed method to assay explicit and implicit learning directly in a sensorimotor task, along with a computational modeling analysis, we show that the fast process closely resembles explicit learning and the slow process approximates implicit learning. In addition, we provide evidence for a subdivision of the slow/implicit process into distinct manifestations of motor memory. We conclude that the two-state model of motor learning is a close approximation of sensorimotor learning, but it is unable to describe adequately the various implicit learning operations that forge the learning curve. Our results suggest that a wider net be cast in the search for the putative psychological mechanisms and neural substrates underlying the multiplicity of processes involved in motor learning.
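For reference, the two-state model discussed above (Smith et al., 2006) can be written as a pair of linear update equations: a fast process that learns quickly but forgets quickly, and a slow process that learns slowly but retains well, with their sum determining the motor output. The short simulation below illustrates this; the retention and learning-rate parameters and the perturbation schedule are illustrative assumptions, not fitted values from the study.

```python
# Minimal simulation of the two-state (fast/slow) model of motor adaptation.
# Parameter values and the perturbation schedule are illustrative assumptions.
import numpy as np

A_f, B_f = 0.60, 0.40          # fast process: forgets quickly, learns quickly
A_s, B_s = 0.99, 0.05          # slow process: retains well, learns slowly

n_trials = 300
perturbation = np.r_[np.zeros(50), np.ones(200), -np.ones(20), np.zeros(30)]

x_f = np.zeros(n_trials)
x_s = np.zeros(n_trials)
for n in range(n_trials - 1):
    error = perturbation[n] - (x_f[n] + x_s[n])          # e(n) = p(n) - x(n)
    x_f[n + 1] = A_f * x_f[n] + B_f * error              # fast ~ explicit component
    x_s[n + 1] = A_s * x_s[n] + B_s * error              # slow ~ implicit component

net = x_f + x_s
print(round(net[249], 2))   # near-complete adaptation by the end of the learning block
```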

Journal ArticleDOI
TL;DR: It is suggested that IL activity during extinction training likely facilitates storage of extinction in target structures, but contrary to current models, IL activity does not appear to be necessary for retrieval of extinction memory.
Abstract: Previous rodent studies have implicated the infralimbic (IL) subregion of the medial prefrontal cortex in extinction of auditory fear conditioning. However, these studies used pharmacological inactivation or electrical stimulation techniques, which lack temporal precision and neuronal specificity. Here, we used an optogenetic approach to either activate (with channelrhodopsin) or silence (with halorhodopsin) glutamatergic IL neurons during conditioned tones delivered in one of two phases: extinction training or extinction retrieval. Activating IL neurons during extinction training reduced fear expression and strengthened extinction memory the following day. Silencing IL neurons during extinction training had no effect on within-session extinction, but impaired the retrieval of extinction the following day, indicating that IL activity during extinction tones is necessary for the formation of extinction memory. Surprisingly, however, silencing IL neurons optogenetically or pharmacologically during the retrieval of extinction 1 day or 1 week following extinction training had no effect. Our findings suggest that IL activity during extinction training likely facilitates storage of extinction in target structures, but contrary to current models, IL activity does not appear to be necessary for retrieval of extinction memory.

Journal ArticleDOI
TL;DR: The results suggest that complement C3, or its downstream signaling, is detrimental to synapses during aging, and point to a novel and prominent role for complement protein C3 in mediating age-related and region-specific changes in synaptic function and plasticity in the aging brain.
Abstract: The complement system is part of the innate immune response responsible for removing pathogens and cellular debris, in addition to helping to refine CNS neuronal connections via microglia-mediated pruning of inappropriate synapses during brain development. However, less is known about the role of complement during normal aging. Here, we studied the role of the central complement component, C3, in synaptic health and aging. We examined behavior as well as electrophysiological, synaptic, and neuronal changes in the brains of C3-deficient male mice (C3 KO) compared with age-, strain-, and gender-matched C57BL/6J (wild-type, WT) control mice at postnatal day 30, 4 months, and 16 months of age. We found the following: (1) region-specific and age-dependent synapse loss in aged WT mice that was not observed in C3 KO mice; (2) age-dependent neuron loss in hippocampal CA3 (but not in CA1) that followed synapse loss in aged WT mice, neither of which were observed in aged C3 KO mice; and (3) significantly enhanced LTP and cognition and less anxiety in aged C3 KO mice compared with aged WT mice. Importantly, CA3 synaptic puncta were similar between WT and C3 KO mice at P30. Together, our results suggest a novel and prominent role for complement protein C3 in mediating age-related and region-specific changes in synaptic function and plasticity in the aging brain. SIGNIFICANCE STATEMENT The complement cascade, part of the innate immune response to remove pathogens, also plays a role in synaptic refinement during brain development by the removal of weak synapses. We investigated whether complement C3, a central component, affects synapse loss during aging. Wild-type (WT) and C3 knock-out (C3 KO) mice were examined at different ages. The mice were similar at 1 month of age. However, with aging, WT mice lost synapses in specific brain regions, especially in hippocampus, an area important for memory, whereas C3 KO mice were protected. Aged C3 KO mice also performed better on learning and memory tests than aged WT mice. Our results suggest that complement C3, or its downstream signaling, is detrimental to synapses during aging.

Journal ArticleDOI
TL;DR: It is proposed that the most popular sleep posture (lateral) has evolved to optimize waste removal during sleep and that posture must be considered in diagnostic imaging procedures developed in the future to assess CSF-ISF transport in humans.
Abstract: The glymphatic pathway expedites clearance of waste, including soluble amyloid β (Aβ) from the brain. Transport through this pathway is controlled by the brain's arousal level because, during sleep or anesthesia, the brain's interstitial space volume expands (compared with wakefulness), resulting in faster waste removal. Humans, as well as animals, exhibit different body postures during sleep, which may also affect waste removal. Therefore, not only the level of consciousness, but also body posture, might affect CSF–interstitial fluid (ISF) exchange efficiency. We used dynamic-contrast-enhanced MRI and kinetic modeling to quantify CSF-ISF exchange rates in anesthetized rodents' brains in supine, prone, or lateral positions. To validate the MRI data and to assess specifically the influence of body posture on clearance of Aβ, we used fluorescence microscopy and radioactive tracers, respectively. The analysis showed that glymphatic transport was most efficient in the lateral position compared with the supine or prone positions. In the prone position, in which the rat's head was in the most upright position (mimicking posture during the awake state), transport was characterized by “retention” of the tracer, slower clearance, and more CSF efflux along larger caliber cervical vessels. The optical imaging and radiotracer studies confirmed that glymphatic transport and Aβ clearance were superior in the lateral and supine positions. We propose that the most popular sleep posture (lateral) has evolved to optimize waste removal during sleep and that posture must be considered in diagnostic imaging procedures developed in the future to assess CSF-ISF transport in humans. SIGNIFICANCE STATEMENT The rodent brain removes waste better during sleep or anesthesia compared with the awake state. Animals exhibit different body posture during the awake and sleep states, which might affect the brain's waste removal efficiency. We investigated the influence of body posture on brainwide transport of inert tracers in anesthetized rodents. The major finding of our study was that waste, including Aβ, removal was most efficient in the lateral position (compared with the prone position), which mimics the natural resting/sleeping position of rodents. Although our finding awaits testing in humans, we speculate that the lateral position during sleep has an advantage with regard to the removal of waste products, including Aβ, because clinical studies have shown that sleep drives Aβ clearance from the brain.

Journal ArticleDOI
TL;DR: Interactions between noise and aging may require an acute synaptopathy, but a single synaptopathic exposure can accelerate cochlear aging.
Abstract: Cochlear synaptic loss, rather than hair cell death, is the earliest sign of damage in both noise- and age-related hearing impairment (Kujawa and Liberman, 2009; Sergeyenko et al., 2013). Here, we compare cochlear aging after two types of noise exposure: one producing permanent synaptic damage without hair cell loss and another producing neither synaptopathy nor hair cell loss. Adult mice were exposed (8–16 kHz, 100 or 91 dB SPL for 2 h) and then evaluated from 1 h to ∼20 months after exposure. Cochlear function was assessed via distortion product otoacoustic emissions and auditory brainstem responses (ABRs). Cochlear whole mounts and plastic sections were studied to quantify hair cells, cochlear neurons, and the synapses connecting them. The synaptopathic noise (100 dB) caused 35–50 dB threshold shifts at 24 h. By 2 weeks, thresholds had recovered, but synaptic counts and ABR amplitudes at high frequencies were reduced by up to ∼45%. As exposed animals aged, synaptopathy was exacerbated compared with controls and spread to lower frequencies. Proportional ganglion cell losses followed. Threshold shifts first appeared >1 year after exposure and, by ∼20 months, were up to 18 dB greater in the synaptopathic noise group. Outer hair cell losses were exacerbated in the same time frame (∼10% at 32 kHz). In contrast, the 91 dB exposure, producing transient threshold shift without acute synaptopathy, showed no acceleration of synaptic loss or cochlear dysfunction as animals aged, at least to ∼1 year after exposure. Therefore, interactions between noise and aging may require an acute synaptopathy, but a single synaptopathic exposure can accelerate cochlear aging.

Journal ArticleDOI
TL;DR: In this paper, the role of triggering receptor expressed on myeloid cells-2 (TREM2) during ischemic stroke was explored in both in vitro and in vivo stroke models and a potential endogenous TREM2 ligand was identified.
Abstract: Clearing cellular debris after brain injury represents an important mechanism in regaining tissue homeostasis and promoting functional recovery. Triggering receptor expressed on myeloid cells-2 (TREM2) is a newly identified receptor expressed on microglia and is thought to phagocytose damaged brain cells. The precise role of TREM2 during ischemic stroke has not been fully understood. We explore TREM2 in both in vitro and in vivo stroke models and identify a potential endogenous TREM2 ligand. TREM2 knockdown in microglia reduced microglial activation to an amoeboid phenotype and decreased the phagocytosis of injured neurons. Phagocytosis and infarcted brain tissue resorption was reduced in TREM2 knock-out (KO) mice compared with wild-type (WT) mice. TREM2 KO mice also had worsened neurological recovery and decreased viable brain tissue in the ipsilateral hemisphere. The numbers of activated microglia and phagocytes in TREM2 KO mice were decreased compared with WT mice, and foamy macrophages were nearly absent in the TREM2 KO mice. Postischemia, TREM2 was highly expressed on microglia and TREM2-Fc fusion protein (used as a probe to identify potential TREM2 binding partners) bound to an unknown TREM2 ligand that colocalized to neurons. Oxygen glucose deprivation-exposed neuronal media, or cellular fractions containing nuclei or purified DNA, but not cytosolic fractions, stimulated signaling through TREM2. TREM2-Fc fusion protein pulled down nucleic acids from ischemic brain lysate. These findings establish the relevance of TREM2 in the phagocytosis of the infarcted brain and emphasize its role in influencing neurological outcomes following stroke. Further, nucleic acids may be one potential ligand of TREM2 in brain ischemia.

Journal ArticleDOI
TL;DR: Evidence is reviewed that, in addition to spatial context, the hippocampus encodes a wide variety of information about temporal and situational context, about the systematic organization of events in abstract space, and about routes through maps of cognition and space.
Abstract: More than 50 years of research have led to the general agreement that the hippocampus contributes to memory, but there has been a major schism among theories of hippocampal function over this time. Some researchers argue that the hippocampus plays a broad role in episodic and declarative memory, whereas others argue for a specific role in the creation of spatial cognitive maps and navigation. Although both views have merit, neither provides a complete account of hippocampal function. Guided by recent reviews that attempt to bridge between these views, here we suggest that reconciliation can be accomplished by exploring hippocampal function from the perspective of Tolman's (1948) original conception of a cognitive map as organizing experience and guiding behavior across all domains of cognition. We emphasize recent studies in animals and humans showing that hippocampal networks support a broad range of domains of cognitive maps, that these networks organize specific experiences within the contextually relevant map, and that network activity patterns reflect behavior guided through cognitive maps. These results are consistent with a framework that bridges theories of hippocampal function by conceptualizing the hippocampus as organizing incoming information within the context of a multidimensional cognitive map of spatial, temporal, and associational context. SIGNIFICANCE STATEMENT Research of hippocampal function is dominated by two major views. The spatial view argues that the hippocampus tracks routes through space, whereas the memory view suggests a broad role in declarative memory. Both views rely on considerable evidence, but neither provides a complete account of hippocampal function. Here we review evidence that, in addition to spatial context, the hippocampus encodes a wide variety of information about temporal and situational context, about the systematic organization of events in abstract space, and about routes through maps of cognition and space. We argue that these findings cross the boundaries of the memory and spatial views and offer new insights into hippocampal function as a system supporting a broad range of cognitive maps.

Journal ArticleDOI
TL;DR: It is found that many of the known hereditary deafness genes are much more highly expressed in hair cells than in surrounding cells, suggesting that genes preferentially expressed in hair cells are good candidates for unknown deafness genes.
Abstract: Hair cells of the inner ear are essential for hearing and balance. As a consequence, pathogenic variants in genes specifically expressed in hair cells often cause hereditary deafness. Hair cells are few in number and not easily isolated from the adjacent supporting cells, so the biochemistry and molecular biology of hair cells can be difficult to study. To study gene expression in hair cells, we developed a protocol for hair cell isolation by FACS. With nearly pure hair cells and surrounding cells, from cochlea and utricle and from E16 to P7, we performed a comprehensive cell type-specific RNA-Seq study of gene expression during mouse inner ear development. Expression profiling revealed new hair cell genes with distinct expression patterns: some are specific for vestibular hair cells, others for cochlear hair cells, and some are expressed just before or after maturation of mechanosensitivity. We found that many of the known hereditary deafness genes are much more highly expressed in hair cells than surrounding cells, suggesting that genes preferentially expressed in hair cells are good candidates for unknown deafness genes.

Journal ArticleDOI
TL;DR: The results indicate that HMGB1 was released from the ischemic brain in the hyperacute phase of stroke in mice and patients, and HMGB1-RAGE signaling resulted in functional exhaustion of mature monocytes and lymphopenia, the hallmarks of immune suppression after extensive ischemia.
Abstract: Acute brain lesions induce profound alterations of the peripheral immune response comprising the opposing phenomena of early immune activation and subsequent immunosuppression. The mechanisms underlying this brain-immune signaling are largely unknown. We used animal models for experimental brain ischemia as a paradigm of acute brain lesions and additionally investigated a large cohort of stroke patients. We analyzed release of HMGB1 isoforms by mass spectrometry and investigated its inflammatory potency and signaling pathways by immunological in vivo and in vitro techniques. Features of the complex sickness behavior syndrome were characterized by homecage behavior analysis. HMGB1 downstream signaling, particularly with RAGE, was studied in various transgenic animal models and by pharmacological blockade. Our results indicate that the cytokine-inducing, fully reduced isoform of HMGB1 was released from the ischemic brain in the hyperacute phase of stroke in mice and patients. Cytokines secreted in the periphery in response to brain injury induced sickness behavior, which could be abrogated by inhibition of the HMGB1-RAGE pathway or direct cytokine neutralization. Subsequently, HMGB1-release induced bone marrow egress and splenic proliferation of bone marrow-derived suppressor cells, inhibiting the adaptive immune responses in vivo and in vitro. Furthermore, HMGB1-RAGE signaling resulted in functional exhaustion of mature monocytes and lymphopenia, the hallmarks of immune suppression after extensive ischemia. This study introduces the HMGB1-RAGE-mediated pathway as a key mechanism explaining the complex postischemic brain-immune interactions.

Journal ArticleDOI
TL;DR: The results suggest that hidden hearing deficits, likely originating at the level of the cochlear nerve, are part of “normal hearing,” affecting the fidelity with which subcortical areas encode the temporal structure of clearly audible sound.
Abstract: Clinical audiometry has long focused on determining the detection thresholds for pure tones, which depend on intact cochlear mechanics and hair cell function. Yet many listeners with normal hearing thresholds complain of communication difficulties, and the causes for such problems are not well understood. Here, we explore whether normal-hearing listeners exhibit such suprathreshold deficits, affecting the fidelity with which subcortical areas encode the temporal structure of clearly audible sound. Using an array of measures, we evaluated a cohort of young adults with thresholds in the normal range to assess both cochlear mechanical function and temporal coding of suprathreshold sounds. Listeners differed widely in both electrophysiological and behavioral measures of temporal coding fidelity. These measures correlated significantly with each other. Conversely, these differences were unrelated to the modest variation in otoacoustic emissions, cochlear tuning, or the residual differences in hearing threshold present in our cohort. Electroencephalography revealed that listeners with poor subcortical encoding had poor cortical sensitivity to changes in interaural time differences, which are critical for localizing sound sources and analyzing complex scenes. These listeners also performed poorly when asked to direct selective attention to one of two competing speech streams, a task that mimics the challenges of many everyday listening environments. Together with previous animal and computational models, our results suggest that hidden hearing deficits, likely originating at the level of the cochlear nerve, are part of “normal hearing.”

Journal ArticleDOI
TL;DR: Inspiration is identified as the most important driving force for CSF flow in humans, and inspiratory thoracic pressure reduction is expected to directly modulate the hydrostatic pressure conditions for the low-resistance paravenous, venous, and lymphatic clearance routes of CSF.
Abstract: The mechanisms behind CSF flow in humans are still not fully known. CSF circulates from its primary production sites at the choroid plexus through the brain ventricles to reach the outer surface of the brain in the subarachnoid spaces from where it drains into venous bloodstream and cervical lymphatics. According to a recent concept of brain fluid transport, established in rodents, CSF from the brain surface also enters the brain tissue along para-arterial routes and exits through paravenous spaces again into subarachnoid compartments. This unidirectional flow is mainly driven by arterial pulsation. To investigate how CSF flow is regulated in humans, we applied a novel real-time magnetic resonance imaging technique at high spatial (0.75 mm) and temporal (50 ms) resolution in healthy human subjects. We observed significant CSF flow exclusively with inspiration. In particular, during forced breathing, high CSF flow was elicited during every inspiration, whereas breath holding suppressed it. Only a minor flow component could be ascribed to cardiac pulsation. The present results unambiguously identify inspiration as the most important driving force for CSF flow in humans. Inspiratory thoracic pressure reduction is expected to directly modulate the hydrostatic pressure conditions for the low-resistance paravenous, venous, and lymphatic clearance routes of CSF. Furthermore, the experimental approach opens new clinical opportunities to study the pathophysiology of various forms of hydrocephalus and to design therapeutic strategies in relation to CSF flow alterations.

Journal ArticleDOI
TL;DR: It is discovered that, in mouse models, activated B-lymphocytes infiltrate infarcted tissue in the weeks after stroke, and immunostaining of human postmortem tissue revealed that a B-lymphocyte response to stroke also occurs in the brain of some people with stroke and dementia.
Abstract: Each year, 10 million people worldwide survive the neurologic injury associated with a stroke. Importantly, stroke survivors have more than twice the risk of subsequently developing dementia compared with people who have never had a stroke. The link between stroke and the later development of dementia is not understood. There are reports of oligoclonal bands in the CSF of stroke patients, suggesting that in some people a B-lymphocyte response to stroke may occur in the CNS. Therefore, we tested the hypothesis that a B-lymphocyte response to stroke could contribute to the onset of dementia. We discovered that, in mouse models, activated B-lymphocytes infiltrate infarcted tissue in the weeks after stroke. B-lymphocytes undergo isotype switching, and IgM, IgG, and IgA antibodies are found in the neuropil adjacent to the lesion. Concurrently, mice develop delayed deficits in LTP and cognition. Genetic deficiency, and the pharmacologic ablation of B-lymphocytes using an anti-CD20 antibody, prevents the appearance of delayed cognitive deficits. Furthermore, immunostaining of human postmortem tissue revealed that a B-lymphocyte response to stroke also occurs in the brain of some people with stroke and dementia. These data suggest that some stroke patients may develop a B-lymphocyte response to stroke that contributes to dementia, and is potentially treatable with FDA-approved drugs that target B cells.

Journal ArticleDOI
TL;DR: Astrocytic GLT-1 performs critical functions required for normal weight gain, resistance to epilepsy, and survival; however, the contribution of astrocytic GLT-1 to glutamate uptake into synaptosomes is less than expected, and the contribution of neuronal GLT-1 to synaptosomal glutamate uptake is greater than expected based on their relative protein expression.
Abstract: GLT-1 (EAAT2; slc1a2) is the major glutamate transporter in the brain, and is predominantly expressed in astrocytes, but at lower levels also in excitatory terminals. We generated a conditional GLT-1 knock-out mouse to uncover cell-type-specific functional roles of GLT-1. Inactivation of the GLT-1 gene was achieved in either neurons or astrocytes by expression of synapsin-Cre or inducible human GFAP-CreERT2. Elimination of GLT-1 from astrocytes resulted in loss of ∼80% of GLT-1 protein and of glutamate uptake activity that could be solubilized and reconstituted in liposomes. This loss was accompanied by excess mortality, lower body weight, and seizures, suggesting that astrocytic GLT-1 is of major importance. However, there was only a small (15%), nonsignificant reduction of glutamate uptake into crude forebrain synaptosomes. In contrast, when GLT-1 was deleted in neurons, both the GLT-1 protein and glutamate uptake activity that could be solubilized and reconstituted in liposomes were virtually unaffected. These mice showed normal survival, weight gain, and no seizures. However, the synaptosomal glutamate uptake capacity (Vmax) was reduced significantly (40%). In conclusion, astrocytic GLT-1 performs critical functions required for normal weight gain, resistance to epilepsy, and survival. However, the contribution of astrocytic GLT-1 to glutamate uptake into synaptosomes is less than expected, and the contribution of neuronal GLT-1 to synaptosomal glutamate uptake is greater than expected based on their relative protein expression. These results have important implications for the interpretation of the many previous studies assessing glutamate uptake capacity by measuring synaptosomal uptake.

Journal ArticleDOI
TL;DR: This study is the first to demonstrate that mindfulness-related pain relief is mechanistically distinct from placebo analgesia, and confirms the existence of multiple, cognitively driven, supraspinal mechanisms for pain modulation.
Abstract: Mindfulness meditation reduces pain in experimental and clinical settings. However, it remains unknown whether mindfulness meditation engages pain-relieving mechanisms other than those associated with the placebo effect (e.g., conditioning, psychosocial context, beliefs). To determine whether the analgesic mechanisms of mindfulness meditation are different from placebo, we randomly assigned 75 healthy, human volunteers to 4 d of the following: (1) mindfulness meditation, (2) placebo conditioning, (3) sham mindfulness meditation, or (4) book-listening control intervention. We assessed intervention efficacy using psychophysical evaluation of experimental pain and functional neuroimaging. Importantly, all cognitive manipulations (i.e., mindfulness meditation, placebo conditioning, sham mindfulness meditation) significantly attenuated pain intensity and unpleasantness ratings when compared to rest and the control condition. Mindfulness meditation reduced pain intensity (p = 0.032) and pain unpleasantness ratings more than placebo analgesia, and reduced pain intensity (p = 0.030) and pain unpleasantness (p = 0.043) ratings more than sham mindfulness meditation. Mindfulness-meditation-related pain relief was associated with greater activation in brain regions associated with the cognitive modulation of pain, including the orbitofrontal, subgenual anterior cingulate, and anterior insular cortex. In contrast, placebo analgesia was associated with activation of the dorsolateral prefrontal cortex and deactivation of sensory processing regions (secondary somatosensory cortex). Sham mindfulness meditation-induced analgesia was not correlated with significant neural activity, but rather with greater reductions in respiration rate. This study is the first to demonstrate that mindfulness-related pain relief is mechanistically distinct from placebo analgesia. The elucidation of this distinction confirms the existence of multiple, cognitively driven, supraspinal mechanisms for pain modulation. SIGNIFICANCE STATEMENT Recent findings have demonstrated that mindfulness meditation significantly reduces pain. Given that the “gold standard” for evaluating the efficacy of behavioral interventions is based on appropriate placebo comparisons, it is imperative that we establish whether there is an effect supporting meditation-related pain relief above and beyond the effects of placebo. Here, we provide novel evidence demonstrating that mindfulness meditation produces greater pain relief and employs distinct neural mechanisms than placebo cream and sham mindfulness meditation. Specifically, mindfulness meditation-induced pain relief activated higher-order brain regions, including the orbitofrontal and cingulate cortices. In contrast, placebo analgesia was associated with decreased pain-related brain activation. These findings demonstrate that mindfulness meditation reduces pain through unique mechanisms and may foster greater acceptance of meditation as an adjunct pain therapy.

Journal ArticleDOI
TL;DR: The response of neurons in sensory cortex to repeated stimulus presentations is highly variable; these findings highlight the importance of cortical state in controlling cortical operation and can help reconcile previous studies, which differed widely in their estimates of neuronal variability and pairwise correlations.
Abstract: The response of neurons in sensory cortex to repeated stimulus presentations is highly variable. To investigate the nature of this variability, we compared the spike activity of neurons in the primary visual cortex (V1) of cats with that of their afferents from lateral geniculate nucleus (LGN), in response to similar stimuli. We found variability to be much higher in V1 than in LGN. To investigate the sources of the additional variability, we measured the spiking activity of large V1 populations and found that much of the variability was shared across neurons: the variable portion of the responses of one neuron could be well predicted from the summed activity of the rest of the neurons. Variability thus mostly reflected global fluctuations affecting all neurons. The size and prevalence of these fluctuations, both in responses to stimuli and in ongoing activity, depended on cortical state, being larger in synchronized states than in more desynchronized states. Contrary to previous reports, these fluctuations invested the overall population, regardless of preferred orientation. The global fluctuations substantially increased variability in single neurons and correlations among pairs of neurons. Once this effect was removed, pairwise correlations were reduced and were similar regardless of cortical state. These results highlight the importance of cortical state in controlling cortical operation and can help reconcile previous studies, which differed widely in their estimate of neuronal variability and pairwise correlations.
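The population analysis described above (predicting the variable part of one neuron's response from the summed activity of the rest of the population) can be illustrated with a few lines of Python; the simulated population with a common multiplicative fluctuation is an illustrative assumption, not the recorded data.

```python
# How much of one neuron's trial-to-trial variability is shared with the rest
# of the population? Regress its residuals (response minus trial-averaged
# response) on the summed residuals of all other neurons. The simulated
# population with a common gain fluctuation is an illustrative assumption.
import numpy as np

rng = np.random.default_rng(4)
n_neurons, n_trials = 50, 400
tuning = rng.uniform(1, 5, n_neurons)                        # mean response per neuron
global_fluct = rng.standard_normal(n_trials) * 0.5           # shared, state-like fluctuation
responses = (tuning[:, None] * (1 + global_fluct[None, :])
             + rng.standard_normal((n_neurons, n_trials)))   # plus private noise

resid = responses - responses.mean(axis=1, keepdims=True)    # per-neuron residuals
for i in range(3):                                           # first few neurons as examples
    others = resid[np.arange(n_neurons) != i].sum(axis=0)
    r = np.corrcoef(resid[i], others)[0, 1]
    print(f"neuron {i}: shared-variability correlation r = {r:.2f}")
```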

Journal ArticleDOI
TL;DR: The results suggest that shared mechanisms underlie two forms of metacognitive evaluation that are often treated separately, with consequent implications for current theories of their neurocognitive basis.
Abstract: Empirical evidence indicates that people can provide accurate evaluations of their own thoughts and actions by means of both error detection and confidence judgments. This study investigates the foundations of these metacognitive abilities, specifically focusing on the relationship between confidence and error judgments in human perceptual decision making. Electroencephalography studies have identified the error positivity (Pe)—an event-related component observed following incorrect choices—as a robust neural index of participants' awareness of their errors in simple decision tasks. Here we assessed whether the Pe also varies in a graded way with participants' subjective ratings of decision confidence, as expressed on a 6-point scale after each trial of a dot count perceptual decision task. We observed clear, graded modulation of the Pe by confidence, with monotonic reduction in Pe amplitude associated with increasing confidence in the preceding choice. This effect was independent of objective accuracy. Multivariate decoding analyses indicated that neural markers of error detection were predictive of varying levels of confidence in correct decisions, including subtle shifts in high-confidence trials. These results suggest that shared mechanisms underlie two forms of metacognitive evaluation that are often treated separately, with consequent implications for current theories of their neurocognitive basis.

Journal ArticleDOI
TL;DR: Data suggest that a shift in the relative expression of neuronal NKCC1 and KCC2, similar to that observed in immature neurons during development, may contribute to astrogliosis-associated seizures.
Abstract: Epilepsy is one of the most common chronic neurologic diseases, yet approximately one-third of affected patients do not respond to anticonvulsive drugs that target neurons or neuronal circuits. Reactive astrocytes are commonly found in putative epileptic foci and have been hypothesized to be disease contributors because they lose essential homeostatic capabilities. However, since brain pathology induces astrocytes to become reactive, it is difficult to distinguish whether astrogliosis is a cause or a consequence of epileptogenesis. We now present a mouse model of genetically induced, widespread chronic astrogliosis after conditional deletion of β1-integrin (Itgβ1). In these mice, astrogliosis occurs in the absence of other pathologies and without BBB breach or significant inflammation. Electroencephalography with simultaneous video recording revealed that these mice develop spontaneous seizures during the first six postnatal weeks of life and brain slices show neuronal hyperexcitability. This was not observed in mice with neuronal-targeted β1-integrin deletion, supporting the hypothesis that astrogliosis is sufficient to induce epileptic seizures. Whole-cell patch-clamp recordings from astrocytes further suggest that the heightened excitability was associated with impaired astrocytic glutamate uptake. Moreover, the relative expression of the cation-chloride cotransporters (CCC) NKCC1 (Slc12a2) and KCC2 (Slc12a5), which are responsible for establishing the neuronal Cl− gradient that governs GABAergic inhibition were altered and the NKCC1 inhibitor bumetanide eliminated seizures in a subgroup of mice. These data suggest that a shift in the relative expression of neuronal NKCC1 and KCC2, similar to that observed in immature neurons during development, may contribute to astrogliosis-associated seizures.

Journal ArticleDOI
TL;DR: It is found that evidence for static or dynamic response boundaries may depend on specific paradigms or procedures, such as the extent of task practice, and the difficulty of selecting between collapsing and fixed bounds models has received insufficient attention in previous research.
Abstract: For nearly 50 years, the dominant account of decision-making holds that noisy information is accumulated until a fixed threshold is crossed. This account has been tested extensively against behavioral and neurophysiological data for decisions about consumer goods, perceptual stimuli, eyewitness testimony, memories, and dozens of other paradigms, with no systematic misfit between model and data. Recently, the standard model has been challenged by alternative accounts that assume that less evidence is required to trigger a decision as time passes. Such “collapsing boundaries” or “urgency signals” have gained popularity in some theoretical accounts of neurophysiology. Nevertheless, evidence in favor of these models is mixed, with support coming from only a narrow range of decision paradigms compared with a long history of support from dozens of paradigms for the standard theory. We conducted the first large-scale analysis of data from humans and nonhuman primates across three distinct paradigms using powerful model-selection methods to compare evidence for fixed versus collapsing bounds. Overall, we identified evidence in favor of the standard model with fixed decision boundaries. We further found that evidence for static or dynamic response boundaries may depend on specific paradigms or procedures, such as the extent of task practice. We conclude that the difficulty of selecting between collapsing and fixed bounds models has received insufficient attention in previous research, calling into question some previous results.
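To make the contrast above concrete, the sketch below simulates a simple evidence-accumulation (diffusion) process with either a fixed or a linearly collapsing decision bound; collapsing bounds trade accuracy for faster responses on difficult trials. The drift rate, bound height, collapse rate, and trial counts are illustrative assumptions, not the parameters or model-selection machinery used in the study.

```python
# Toy diffusion simulation contrasting a fixed decision bound with a linearly
# collapsing bound. All parameters are illustrative assumptions.
import numpy as np

def simulate(drift=0.8, bound=1.0, collapse=0.0, dt=0.001, noise=1.0,
             max_t=2.0, n_trials=500, seed=5):
    rng = np.random.default_rng(seed)
    rts, correct = [], []
    n_steps = int(max_t / dt)
    for _ in range(n_trials):
        x = 0.0
        for step in range(n_steps):
            t = step * dt
            b = max(bound - collapse * t, 0.05)          # collapse=0 -> fixed bound
            x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
            if abs(x) >= b:                              # decision when either bound is hit
                rts.append(t)
                correct.append(x > 0)
                break
    return np.mean(rts), np.mean(correct)

print("fixed bounds:      RT=%.3f  acc=%.2f" % simulate(collapse=0.0))
print("collapsing bounds: RT=%.3f  acc=%.2f" % simulate(collapse=0.3))
```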