
Showing papers on "Receptive field published in 2020"


Journal ArticleDOI
04 Dec 2020-Science
TL;DR: Electrical stimulation of the visual cortex with a neuroprosthetic device allows artificial vision with shape and motion perception, and demonstrates the potential of electrical stimulation to restore functional, life-enhancing vision in the blind.
Abstract: Blindness affects 40 million people across the world. A neuroprosthesis could one day restore functional vision in the blind. We implanted a 1024-channel prosthesis in areas V1 and V4 of the visual cortex of monkeys and used electrical stimulation to elicit percepts of dots of light (called phosphenes) on hundreds of electrodes, the locations of which matched the receptive fields of the stimulated neurons. Activity in area V4 predicted phosphene percepts that were elicited in V1. We simultaneously stimulated multiple electrodes to impose visible patterns composed of a number of phosphenes. The monkeys immediately recognized them as simple shapes, motions, or letters. These results demonstrate the potential of electrical stimulation to restore functional, life-enhancing vision in the blind.

120 citations


Journal ArticleDOI
20 May 2020-Nature
TL;DR: It is shown that feedback projections onto excitatory neurons in the mouse primary visual cortex generate a second receptive field that is driven by stimuli outside of the classical feedforward receptive field, with responses mediated by higher visual areas.
Abstract: Animals sense the environment through pathways that link sensory organs to the brain. In the visual system, these feedforward pathways define the classical feedforward receptive field (ffRF), the area in space in which visual stimuli excite a neuron1. The visual system also uses visual context-the visual scene surrounding a stimulus-to predict the content of the stimulus2, and accordingly, neurons have been identified that are excited by stimuli outside their ffRF3-8. However, the mechanisms that generate excitation to stimuli outside the ffRF are unclear. Here we show that feedback projections onto excitatory neurons in the mouse primary visual cortex generate a second receptive field that is driven by stimuli outside the ffRF. The stimulation of this feedback receptive field (fbRF) elicits responses that are slower and are delayed in comparison with those resulting from the stimulation of the ffRF. These responses are preferentially reduced by anaesthesia and by silencing higher visual areas. Feedback inputs from higher visual areas have scattered receptive fields relative to their putative targets in the primary visual cortex, which enables the generation of the fbRF. Neurons with fbRFs are located in cortical layers that receive strong feedback projections and are absent in the main input layer, which is consistent with a laminar processing hierarchy. The observation that large, uniform stimuli-which cover both the fbRF and the ffRF-suppress these responses indicates that the fbRF and the ffRF are mutually antagonistic. Whereas somatostatin-expressing inhibitory neurons are driven by these large stimuli, inhibitory neurons that express parvalbumin and vasoactive intestinal peptide have mutually antagonistic fbRF and ffRF, similar to excitatory neurons. Feedback projections may therefore enable neurons to use context to estimate information that is missing from the ffRF and to report differences in stimulus features across visual space, regardless of whether excitation occurs inside or outside the ffRF. By complementing the ffRF, the fbRF that we identify here could contribute to predictive processing.

105 citations


Journal ArticleDOI
TL;DR: It is reported that a large percentage of retrosplenial cortex neurons have spatial receptive fields that are active when environmental boundaries are positioned at a specific orientation and distance relative to the animal itself.
Abstract: The retrosplenial cortex is reciprocally connected with multiple structures implicated in spatial cognition, and damage to the region itself produces numerous spatial impairments. Here, we sought to characterize spatial correlates of neurons within the region during free exploration in two-dimensional environments. We report that a large percentage of retrosplenial cortex neurons have spatial receptive fields that are active when environmental boundaries are positioned at a specific orientation and distance relative to the animal itself. We demonstrate that this vector-based location signal is encoded in egocentric coordinates, is localized to the dysgranular retrosplenial subregion, is independent of self-motion, and is context invariant. Further, we identify a subpopulation of neurons with this response property that are synchronized with the hippocampal theta oscillation. Accordingly, the current work identifies a robust egocentric spatial code in retrosplenial cortex that can facilitate spatial coordinate system transformations and support the anchoring, generation, and utilization of allocentric representations.

100 citations


Journal ArticleDOI
04 Mar 2020-Nature
TL;DR: Computational modelling, imaging and single-cell ablation in layer 2/3 of the mouse vibrissal somatosensory cortex reveal that recurrent activity in cortical neurons can drive input-specific amplification during behaviour.
Abstract: Most cortical synapses are local and excitatory. Local recurrent circuits could implement amplification, allowing pattern completion and other computations1-4. Cortical circuits contain subnetworks that consist of neurons with similar receptive fields and increased connectivity relative to the network average5,6. Cortical neurons that encode different types of information are spatially intermingled and distributed over large brain volumes5-7, and this complexity has hindered attempts to probe the function of these subnetworks by perturbing them individually8. Here we use computational modelling, optical recordings and manipulations to probe the function of recurrent coupling in layer 2/3 of the mouse vibrissal somatosensory cortex during active tactile discrimination. A neural circuit model of layer 2/3 revealed that recurrent excitation enhances sensory signals by amplification, but only for subnetworks with increased connectivity. Model networks with high amplification were sensitive to damage: loss of a few members of the subnetwork degraded stimulus encoding. We tested this prediction by mapping neuronal selectivity7 and photoablating9,10 neurons with specific selectivity. Ablation of a small proportion of layer 2/3 neurons (10-20, less than 5% of the total) representing touch markedly reduced responses in the spared touch representation, but not in other representations. Ablations most strongly affected neurons with stimulus responses that were similar to those of the ablated population, which is also consistent with network models. Recurrence among cortical neurons with similar selectivity therefore drives input-specific amplification during behaviour.

86 citations


Journal ArticleDOI
TL;DR: A functional taxonomy of cells with vectorial receptive fields as reported in experiments and proposed in theoretical work is provided and their contribution to spatial cognition is described.
Abstract: Several types of neurons involved in spatial navigation and memory encode the distance and direction (that is, the vector) between an agent and items in its environment. Such vectorial information provides a powerful basis for spatial cognition by representing the geometric relationships between the self and the external world. Here, we review the explicit encoding of vectorial information by neurons in and around the hippocampal formation, far from the sensory periphery. The parahippocampal, retrosplenial and parietal cortices, as well as the hippocampal formation and striatum, provide a plethora of examples of vector coding at the single neuron level. We provide a functional taxonomy of cells with vectorial receptive fields as reported in experiments and proposed in theoretical work. The responses of these neurons may provide the fundamental neural basis for the (bottom-up) representation of environmental layout and (top-down) memory-guided generation of visuospatial imagery and navigational planning.

75 citations


Journal ArticleDOI
06 Mar 2020-eLife
TL;DR: Bidirectional activity modulations of LP or its projection to the primary auditory cortex (A1) in awake mice reveal that LP improves auditory processing in A1 supragranular-layer neurons by sharpening their receptive fields and frequency tuning, as well as increasing the signal-to-noise ratio (SNR).
Abstract: Lateral posterior nucleus (LP) of thalamus, the rodent homologue of primate pulvinar, projects extensively to sensory cortices. However, its functional role in sensory cortical processing remains largely unclear. Here, bidirectional activity modulations of LP or its projection to the primary auditory cortex (A1) in awake mice reveal that LP improves auditory processing in A1 supragranular-layer neurons by sharpening their receptive fields and frequency tuning, as well as increasing the signal-to-noise ratio (SNR). This is achieved through a subtractive-suppression mechanism, mediated largely by LP-to-A1 axons preferentially innervating specific inhibitory neurons in layer 1 and superficial layers. LP is strongly activated by specific sensory signals relayed from the superior colliculus (SC), contributing to the maintenance and enhancement of A1 processing in the presence of auditory background noise and threatening visual looming stimuli respectively. Thus, a multisensory bottom-up SC-pulvinar-A1 pathway plays a role in contextual and cross-modality modulation of auditory cortical processing.
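The subtractive-suppression mechanism described above can be illustrated with a toy tuning curve: subtracting a constant (and rectifying) narrows frequency tuning, whereas dividing by a constant mostly rescales it. The sketch below is a minimal illustration of that distinction; the Gaussian tuning curve, the constants, and the helper functions (`subtractive`, `divisive`, `half_width`) are illustrative assumptions, not the paper's model.

```python
import numpy as np

# Toy Gaussian frequency tuning curve (arbitrary units); purely illustrative.
freqs = np.linspace(0, 10, 201)                       # "frequency" axis
tuning = 10.0 * np.exp(-0.5 * ((freqs - 5.0) / 1.5) ** 2)

def subtractive(resp, c):
    """Subtractive suppression: shift the curve down and rectify.
    Weak flanks fall below threshold, so tuning bandwidth narrows."""
    return np.maximum(resp - c, 0.0)

def divisive(resp, g):
    """Divisive suppression: scale the curve down.
    Bandwidth is largely preserved; only the gain changes."""
    return resp / (1.0 + g)

def half_width(resp, axis):
    """Width of the tuning curve at half of its peak response."""
    above = axis[resp >= resp.max() / 2.0]
    return above.max() - above.min() if above.size else 0.0

sub = subtractive(tuning, c=3.0)
div = divisive(tuning, g=2.0)
print("half-width, original   :", round(half_width(tuning, freqs), 2))
print("half-width, subtractive:", round(half_width(sub, freqs), 2))
print("half-width, divisive   :", round(half_width(div, freqs), 2))
```

Running this prints a smaller half-width for the subtractive case and an unchanged half-width for the divisive case, which is the intuition behind the receptive-field sharpening attributed to LP here.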

73 citations


Journal ArticleDOI
TL;DR: It is shown that a small number of active cells reliably represent visual contents of a natural image across trials regardless of response variability, due to the diverse and partially overlapping representations of individual cells.
Abstract: Natural scenes sparsely activate neurons in the primary visual cortex (V1). However, how sparsely active neurons reliably represent complex natural images and how the information is optimally decoded from these representations have not been revealed. Using two-photon calcium imaging, we recorded visual responses to natural images from several hundred V1 neurons and reconstructed the images from neural activity in anesthetized and awake mice. A single natural image is linearly decodable from a surprisingly small number of highly responsive neurons, and the remaining neurons even degrade the decoding. Furthermore, these neurons reliably represent the image across trials, regardless of trial-to-trial response variability. Based on our results, diverse, partially overlapping receptive fields ensure sparse and reliable representation. We suggest that information is reliably represented while the corresponding neuronal patterns change across trials and collecting only the activity of highly responsive neurons is an optimal decoding strategy for the downstream neurons. Natural scenes sparsely activate V1 neurons. Here, the authors show that a small number of active cells reliably represent visual contents of a natural image across trials regardless of response variability, due to the diverse and partially overlapping representations of individual cells.

64 citations


Journal ArticleDOI
26 Jun 2020-eLife
TL;DR: Predicting the neural responses recorded invasively from the auditory cortex of neurosurgical patients as they listened to speech, this approach significantly improves the prediction accuracy of auditory cortical responses, particularly in nonprimary areas.
Abstract: Our understanding of nonlinear stimulus transformations by neural circuits is hindered by the lack of comprehensive yet interpretable computational modeling frameworks. Here, we propose a data-driven approach based on deep neural networks to directly model arbitrarily nonlinear stimulus-response mappings. Reformulating the exact function of a trained neural network as a collection of stimulus-dependent linear functions enables a locally linear receptive field interpretation of the neural network. Predicting the neural responses recorded invasively from the auditory cortex of neurosurgical patients as they listened to speech, this approach significantly improves the prediction accuracy of auditory cortical responses, particularly in nonprimary areas. Moreover, interpreting the functions learned by neural networks uncovered three distinct types of nonlinear transformations of speech that varied considerably from primary to nonprimary auditory regions. The ability of this framework to capture arbitrary stimulus-response mappings while maintaining model interpretability leads to a better understanding of cortical processing of sensory signals.
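As a rough illustration of the locally linear receptive field idea described above: for a network built from piecewise-linear units, the exact output at any given stimulus can be rewritten as a stimulus-dependent linear filter plus an offset. The numpy sketch below shows this for a toy one-hidden-layer ReLU network; the network, its weights, and the stimulus are invented for illustration and are unrelated to the models fitted in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy encoding model: stimulus features -> response of one neuron.
n_in, n_hid = 64, 32                     # input features, hidden units (arbitrary sizes)
W1 = rng.normal(scale=0.2, size=(n_hid, n_in))
b1 = rng.normal(scale=0.1, size=n_hid)
w2 = rng.normal(scale=0.2, size=n_hid)
b2 = 0.1

def forward(x):
    """One-hidden-layer ReLU network: f(x) = w2 . relu(W1 x + b1) + b2."""
    return w2 @ np.maximum(W1 @ x + b1, 0.0) + b2

def locally_linear_rf(x):
    """Stimulus-dependent linear filter w(x) such that, locally,
    f(x) = w(x) . x + c(x). For a ReLU net this is W1^T (w2 * mask),
    where mask marks the hidden units active at stimulus x."""
    mask = (W1 @ x + b1) > 0.0
    return W1.T @ (w2 * mask)

x = rng.normal(size=n_in)                # a hypothetical stimulus (e.g. a spectrogram patch)
w_local = locally_linear_rf(x)
c_local = forward(x) - w_local @ x       # stimulus-dependent offset

# The locally linear model reproduces the network output exactly at x.
assert np.isclose(forward(x), w_local @ x + c_local)
print("locally linear RF shape:", w_local.shape)
```

The assertion checks that the locally linear model reproduces the network output exactly at the chosen stimulus; inspecting how `w_local` changes across stimuli is what gives the interpretation described in the abstract.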

46 citations


Journal ArticleDOI
TL;DR: A visual spatial task for mice is developed that elicits behavioural improvements consistent with the effects of spatial attention, together with neural correlates spanning subthreshold responses in single cells, spiking activity, and network-level LFP, providing new insight into rapid cognitive enhancement of sensory signals in visual cortex.
Abstract: Internal brain states strongly modulate sensory processing during behaviour. Studies of visual processing in primates show that attention to space selectively improves behavioural and neural responses to stimuli at the attended locations. Here we develop a visual spatial task for mice that elicits behavioural improvements consistent with the effects of spatial attention, and simultaneously measure network, cellular, and subthreshold activity in primary visual cortex. During trial-by-trial behavioural improvements, local field potential (LFP) responses to stimuli detected inside the receptive field (RF) strengthen. Moreover, detection inside the RF selectively enhances excitatory and inhibitory neuron responses to task-irrelevant stimuli and suppresses noise correlations and low frequency LFP fluctuations. Whole-cell patch-clamp recordings reveal that detection inside the RF increases synaptic activity that depolarizes membrane potential responses at the behaviorally relevant location. Our study establishes that mice display fundamental signatures of visual spatial attention spanning behavioral, network, cellular, and synaptic levels, providing new insight into rapid cognitive enhancement of sensory signals in visual cortex.

42 citations


Journal ArticleDOI
TL;DR: The general anatomy, function, and neuronal diversity of cranial sensory ganglia are summarized, and an overview of the current knowledge of the transcriptional networks controlling neurogenesis and neuronal diversification in the developing sensory system is provided.
Abstract: Sensory fibers of the peripheral nervous system carry sensation from specific sense structures or use different tissues and organs as receptive fields, and convey this information to the central nervous system. In the head of vertebrates, each cranial sensory ganglion and its associated nerves perform specific functions. Sensory ganglia are composed of different types of specialized neurons, among which two broad categories can be distinguished: somatosensory neurons relaying all sensations that are felt, and visceral sensory neurons sensing the internal milieu and controlling body homeostasis. While in the trunk somatosensory neurons composing the dorsal root ganglia are derived exclusively from neural crest cells, somato- and visceral sensory neurons of cranial sensory ganglia have a dual origin, with contributions from both neural crest and placodes. As most studies on sensory neurogenesis have focused on dorsal root ganglia, our understanding of the molecular mechanisms underlying the embryonic development of the different cranial sensory ganglia remains rudimentary today. However, using single-cell RNA sequencing, recent studies have made significant advances in the characterization of the neuronal diversity of most sensory ganglia. Here we summarize the general anatomy, function and neuronal diversity of cranial sensory ganglia. We then provide an overview of our current knowledge of the transcriptional networks controlling neurogenesis and neuronal diversification in the developing sensory system, focusing on cranial sensory ganglia, highlighting specific aspects of their development and comparing it to that of trunk sensory ganglia.

37 citations


Journal ArticleDOI
TL;DR: The ability to estimate somatosensory pRFs in humans provides an unprecedented opportunity to examine the neural mechanisms underlying somatosensation and is critical for studying how the brain, body, and environment interact to inform perception and action.

Journal ArticleDOI
19 Aug 2020-Neuron
TL;DR: Large-scale multi-electrode array recordings from retinas of treatment-naive patients who underwent enucleation surgery for choroidal malignant melanomas identify robust differences in the function of midget and parasol ganglion cells, consistent asymmetries between their ON and OFF types, and divergence in the function of human versus non-human primate retinas.

Journal ArticleDOI
04 Nov 2020-eLife
TL;DR: Linear reconstructions of natural images from responses of the four numerically dominant macaque RGC types reveal each cell's visual message, which reflects natural image statistics and resembles the receptive field only when nearby same-type cells are included; spatiotemporal reconstructions exhibited similar spatial properties, suggesting that the results are relevant for natural vision.
Abstract: The visual message conveyed by a retinal ganglion cell (RGC) is often summarized by its spatial receptive field, but in principle also depends on the responses of other RGCs and natural image statistics. This possibility was explored by linear reconstruction of natural images from responses of the four numerically-dominant macaque RGC types. Reconstructions were highly consistent across retinas. The optimal reconstruction filter for each RGC - its visual message - reflected natural image statistics, and resembled the receptive field only when nearby, same-type cells were included. ON and OFF cells conveyed largely independent, complementary representations, and parasol and midget cells conveyed distinct features. Correlated activity and nonlinearities had statistically significant but minor effects on reconstruction. Simulated reconstructions, using linear-nonlinear cascade models of RGC light responses that incorporated measured spatial properties and nonlinearities, produced similar results. Spatiotemporal reconstructions exhibited similar spatial properties, suggesting that the results are relevant for natural vision.
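A rough numpy sketch of the linear reconstruction approach summarized above: stack responses and images over trials, solve a least-squares problem for the reconstruction filters, and read each cell's "visual message" off the corresponding row. The simulated RGC responses, receptive fields, and "images" below are placeholders, not the paper's data.

```python
import numpy as np

rng = np.random.default_rng(1)

n_trials, n_cells, n_pixels = 2000, 50, 20 * 20        # toy sizes
images = rng.normal(size=(n_trials, n_pixels))         # stand-in image patches

# Simulate RGC responses as noisy linear projections through random RFs.
rfs = rng.normal(size=(n_cells, n_pixels))
responses = images @ rfs.T + 0.5 * rng.normal(size=(n_trials, n_cells))

# Optimal linear reconstruction filters W, so that images ~= responses @ W.
# Row k of W is cell k's reconstruction filter, i.e. its "visual message".
W, *_ = np.linalg.lstsq(responses, images, rcond=None)

reconstructed = responses @ W
rel_error = np.mean((reconstructed - images) ** 2) / np.var(images)
print("relative reconstruction error:", round(float(rel_error), 3))
```

With white-noise stand-in images and weakly overlapping random RFs, the fitted filters come out roughly proportional to the simulated receptive fields; the paper's point is that natural-image correlations and overlapping same-type neighbours are what make a cell's visual message differ from its receptive field.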

Journal ArticleDOI
TL;DR: Receptive fields (RFs) of motion-sensitive neurons in the diencephalon and midbrain are characterized, showing that RFs of many pretectal neurons are large and sample the lower visual field, whereas RFs of tectal neurons are mostly small-size selective and sample the upper nasal visual field more densely.

Journal ArticleDOI
TL;DR: The results show that neurochemically distinct neuronal subtypes in the primary auditory cortex have different contributions to the integration of different frequency components of an acoustic stimulus, providing evidence of a common mechanism for cortical computations used for global integration of stimulus features.
Abstract: Sensory systems integrate multiple stimulus features to generate coherent percepts. Spectral surround suppression, the phenomenon by which sound-evoked responses of auditory neurons are suppressed by stimuli outside their receptive field, is an example of this integration taking place in the auditory system. While this form of global integration is commonly observed in auditory cortical neurons, and potentially used by the nervous system to separate signals from noise, the mechanisms that underlie this suppression of activity are not well understood. We evaluated the contributions to spectral surround suppression of the two most common inhibitory cell types in the cortex, parvalbumin-expressing (PV+) and somatostatin-expressing (SOM+) interneurons, in mice of both sexes. We found that inactivating SOM+ cells, but not PV+ cells, significantly reduces sustained spectral surround suppression in excitatory cells, indicating a dominant causal role for SOM+ cells in the integration of information across multiple frequencies. The similarity of these results to those from other sensory cortices provides evidence of common mechanisms across the cerebral cortex for generating global percepts from separate features.SIGNIFICANCE STATEMENT To generate coherent percepts, sensory systems integrate simultaneously occurring features of a stimulus, yet the mechanisms by which this integration occurs are not fully understood. Our results show that neurochemically distinct neuronal subtypes in the primary auditory cortex have different contributions to the integration of different frequency components of an acoustic stimulus. Together with findings from other sensory cortices, our results provide evidence of a common mechanism for cortical computations used for global integration of stimulus features.

Journal ArticleDOI
09 Mar 2020-eLife
TL;DR: A method for maximum likelihood estimation of subunits by soft-clustering spike-triggered stimuli is presented, and its effectiveness in visual neurons is demonstrated, including macaque V1 neurons.
Abstract: Responses of sensory neurons are often modeled using a weighted combination of rectified linear subunits. Since these subunits often cannot be measured directly, a flexible method is needed to infer their properties from the responses of downstream neurons. We present a method for maximum likelihood estimation of subunits by soft-clustering spike-triggered stimuli, and demonstrate its effectiveness in visual neurons. For parasol retinal ganglion cells in macaque retina, estimated subunits partitioned the receptive field into compact regions, likely representing aggregated bipolar cell inputs. Joint clustering revealed shared subunits between neighboring cells, producing a parsimonious population model. Closed-loop validation, using stimuli lying in the null space of the linear receptive field, revealed stronger nonlinearities in OFF cells than ON cells. Responses to natural images, jittered to emulate fixational eye movements, were accurately predicted by the subunit model. Finally, the generality of the approach was demonstrated in macaque V1 neurons.
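The core of the method summarized above is soft-clustering of spike-triggered stimuli. Below is a heavily simplified, EM-flavoured sketch of that idea (softmax responsibilities followed by responsibility-weighted means); the paper's actual estimator maximizes a spiking-model likelihood, so treat this only as an illustration of the clustering step, with all data simulated.

```python
import numpy as np

def soft_cluster_subunits(spike_stimuli, n_subunits, n_iters=50, seed=0):
    """Soft-cluster spike-triggered stimuli into candidate subunit filters.

    spike_stimuli : (n_spikes, n_pixels) stimuli preceding each spike.
    Returns (n_subunits, n_pixels) filters, each a responsibility-weighted
    mean of the spike-triggered stimuli (an EM-flavoured simplification).
    """
    rng = np.random.default_rng(seed)
    n_spikes, _ = spike_stimuli.shape
    # Initialise filters from randomly chosen spike-triggered stimuli.
    filters = spike_stimuli[rng.choice(n_spikes, n_subunits, replace=False)].copy()

    for _ in range(n_iters):
        # E-step: soft assignment of each spike to each subunit,
        # based on how strongly that stimulus drives each filter.
        drive = spike_stimuli @ filters.T                 # (n_spikes, n_subunits)
        drive -= drive.max(axis=1, keepdims=True)         # numerical stability
        resp = np.exp(drive)
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: each filter becomes the responsibility-weighted mean.
        filters = (resp.T @ spike_stimuli) / resp.sum(axis=0)[:, None]
    return filters

# Hypothetical usage with simulated data:
rng = np.random.default_rng(2)
stimuli = rng.normal(size=(5000, 100))
spike_stimuli = stimuli[rng.random(5000) < 0.2]           # stand-in spike-triggered ensemble
subunits = soft_cluster_subunits(spike_stimuli, n_subunits=4)
print(subunits.shape)
```

The paper builds this into a proper maximum-likelihood objective and extends it to joint clustering across neighbouring cells; the sketch only conveys the soft-assignment intuition.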

Journal ArticleDOI
TL;DR: Two-photon dendritic imaging with a genetically-encoded glutamate sensor in awake monkeys is performed, and excitatory synaptic inputs on dendrites of individual V1 superficial layer neurons are mapped with high spatial and temporal resolution.
Abstract: The integration of synaptic inputs onto dendrites provides the basis for neuronal computation. Whereas recent studies have begun to outline the spatial organization of synaptic inputs on individual neurons, the underlying principles related to the specific neural functions are not well understood. Here we perform two-photon dendritic imaging with a genetically-encoded glutamate sensor in awake monkeys, and map the excitatory synaptic inputs on dendrites of individual V1 superficial layer neurons with high spatial and temporal resolution. We find a functional integration and trade-off between orientation-selective and color-selective inputs in basal dendrites of individual V1 neurons. Synaptic inputs on dendrites are spatially clustered by stimulus feature, but functionally scattered in multidimensional feature space, providing a potential substrate of local feature integration on dendritic branches. Furthermore, apical dendrite inputs have larger receptive fields and longer response latencies than basal dendrite inputs, suggesting a dominant role for apical dendrites in integrating feedback in visual information processing. The integration of synaptic inputs onto dendrites provides the basis for neuronal computation. Here the authors perform two-photon dendritic imaging with a genetically-encoded glutamate sensor in awake monkeys, and map the excitatory synaptic inputs on dendrites of individual V1 superficial layer neurons with high spatial and temporal resolution.

Journal ArticleDOI
25 Nov 2020-Neuron
TL;DR: This work finds that high acuity stereopsis emerges during an early postnatal critical period when binocular neurons in the primary visual cortex sharpen their receptive field tuning properties and is achieved by dismantling the binocular circuit present at critical period onset and building it anew.

Journal ArticleDOI
TL;DR: Four morphologically distinct types of mouse retinal ganglion cells with overlapping excitatory synaptic input exhibit type-specific dendritic integration profiles, and it is shown that differences between cell types can likely be explained by differences in backpropagation efficiency, arising from the specific combinations of dendrite morphology and ion channel densities.
Abstract: Neural computation relies on the integration of synaptic inputs across a neuron’s dendritic arbour. However, it is far from understood how different cell types tune this process to establish cell-type specific computations. Here, using two-photon imaging of dendritic Ca2+ signals, electrical recordings of somatic voltage and biophysical modelling, we demonstrate that four morphologically distinct types of mouse retinal ganglion cells with overlapping excitatory synaptic input (transient Off alpha, transient Off mini, sustained Off, and F-mini Off) exhibit type-specific dendritic integration profiles: in contrast to the other types, dendrites of transient Off alpha cells were spatially independent, with little receptive field overlap. The temporal correlation of dendritic signals varied also extensively, with the highest and lowest correlation in transient Off mini and transient Off alpha cells, respectively. We show that differences between cell types can likely be explained by differences in backpropagation efficiency, arising from the specific combinations of dendritic morphology and ion channel densities. Neurons compute by integrating synaptic inputs across their dendritic arbor. Here, the authors show that distinct cell-types of mouse retinal ganglion cells that receive similar excitatory inputs have different biophysical mechanisms of input integration to generate their unique response tuning.

Journal ArticleDOI
TL;DR: It is found that surround suppression is not equally represented across mouse visual areas: primary visual cortex has substantially more surround suppression than higher visual areas, and one higher area has significantly less suppression than two others examined, suggesting that these areas have distinct functional roles.
Abstract: Neurons in the visual system integrate over a wide range of spatial scales. This diversity is thought to enable both local and global computations. To understand how spatial information is encoded across the mouse visual system, we use two-photon imaging to measure receptive fields (RFs) and size-tuning in primary visual cortex (V1) and three downstream higher visual areas (HVAs: LM (lateromedial), AL (anterolateral), and PM (posteromedial)) in mice of both sexes. Neurons in PM, compared with V1 or the other HVAs, have significantly larger RF sizes and less surround suppression, independent of stimulus eccentricity or contrast. To understand how this specialization of RFs arises in the HVAs, we measured the spatial properties of V1 inputs to each area. Spatial integration of V1 axons was remarkably similar across areas and significantly different from the tuning of neurons in their target HVAs. Thus, unlike other visual features studied in this system, specialization of spatial integration in PM cannot be explained by specific projections from V1 to the HVAs. Further, the differences in RF properties could not be explained by differences in convergence of V1 inputs to the HVAs. Instead, our data suggest that distinct inputs from other areas or connectivity within PM may support the area's unique ability to encode global features of the visual scene, whereas V1, LM, and AL may be more specialized for processing local features.SIGNIFICANCE STATEMENT Surround suppression is a common feature of visual processing whereby large stimuli are less effective at driving neuronal responses than smaller stimuli. This is thought to enhance efficiency in the population code and enable higher-order processing of visual information, such as figure-ground segregation. However, this comes at the expense of global computations. Here we find that surround suppression is not equally represented across mouse visual areas: primary visual cortex has substantially more surround suppression than higher visual areas, and one higher area has significantly less suppression than two others examined, suggesting that these areas have distinct functional roles. Thus, we have identified a novel dimension of specialization in the mouse visual cortex that may enable both local and global computations.
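Size tuning and surround suppression of the kind compared across areas above are usually quantified with a suppression index. Below is a minimal sketch with a made-up size-tuning curve; this particular index definition (drop from the preferred-size response to the largest-size response, normalized by the preferred-size response) is a common convention and may differ in detail from the one used in the study.

```python
import numpy as np

# Hypothetical size-tuning curve: mean response vs. stimulus diameter.
sizes = np.array([5, 10, 20, 30, 40, 60])                 # degrees of visual angle
responses = np.array([2.0, 6.5, 9.0, 7.0, 5.5, 4.0])      # spikes/s, illustrative

def suppression_index(resp):
    """SI = (R_pref - R_largest) / R_pref.
    0 means no surround suppression; values near 1 mean strong suppression."""
    r_pref = resp.max()
    r_largest = resp[-1]
    return (r_pref - r_largest) / r_pref

print("preferred size (deg):", sizes[np.argmax(responses)])
print("suppression index   :", round(suppression_index(responses), 2))
```

Under this convention, the finding that PM shows less surround suppression than V1 corresponds to lower index values for PM neurons.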

Journal ArticleDOI
TL;DR: This work documents an attentional modulation of pre-stimulus inter-trial phase coherence of low-frequency local field potentials (LFP) in visual area MT of macaque monkeys and reveals that phase coherence increases following a spatial cue deploying attention towards the receptive field of the recorded neural population.
Abstract: Attention selectively routes the most behaviorally relevant information from the stream of sensory inputs through the hierarchy of cortical areas. Previous studies have shown that visual attention depends on the phase of oscillatory brain activities. These studies mainly focused on the stimulus presentation period, rather than the pre-stimulus period. Here, we hypothesize that selective attention controls the phase of oscillatory neural activities to efficiently process relevant information. We document an attentional modulation of pre-stimulus inter-trial phase coherence (a measure of deviation between instantaneous phases of trials) of low frequency local field potentials (LFP) in visual area MT of macaque monkeys. Our data reveal that phase coherence increases following a spatial cue deploying attention towards the receptive field of the recorded neural population. We further show that the attentional enhancement of phase coherence is positively correlated with the modulation of the stimulus-induced firing rate, and importantly, a higher phase coherence is associated with a faster behavioral response. These results suggest a functional utilization of intrinsic neural oscillatory activities for an enhanced processing of upcoming stimuli.
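The key measure above is inter-trial phase coherence of the pre-stimulus, low-frequency LFP. Below is a minimal numpy sketch of the standard ITC definition, the magnitude of the trial-averaged unit phase vector, computed from per-trial FFT phases; the simulated LFP, sampling rate, frequency band, and the `inter_trial_phase_coherence` helper are stand-ins, not the study's pipeline.

```python
import numpy as np

def inter_trial_phase_coherence(lfp_trials, fs, freq):
    """ITC at one frequency: |mean over trials of exp(i * phase)|.

    lfp_trials : (n_trials, n_samples) pre-stimulus LFP segments.
    fs         : sampling rate in Hz.
    freq       : frequency of interest in Hz.
    Returns a value in [0, 1]; 1 = identical phase on every trial.
    """
    n_samples = lfp_trials.shape[1]
    freqs = np.fft.rfftfreq(n_samples, d=1.0 / fs)
    k = np.argmin(np.abs(freqs - freq))                   # nearest FFT bin
    spectra = np.fft.rfft(lfp_trials, axis=1)[:, k]       # complex value per trial
    phases = np.angle(spectra)
    return np.abs(np.mean(np.exp(1j * phases)))

# Hypothetical usage: 200 simulated trials, 1 kHz sampling, 500 ms pre-stimulus window.
rng = np.random.default_rng(3)
fs, dur, f0 = 1000, 0.5, 6.0                              # Hz, s, Hz (low-frequency band)
t = np.arange(0, dur, 1.0 / fs)
# Trials share a (noisy) 6 Hz phase, mimicking cue-locked phase alignment.
trials = np.sin(2 * np.pi * f0 * t + 0.3 * rng.normal(size=(200, 1))) \
         + rng.normal(scale=0.5, size=(200, t.size))
print("ITC at 6 Hz:", round(inter_trial_phase_coherence(trials, fs, f0), 2))
```

A value near 1 indicates that the 6 Hz phase is aligned across trials, which is the pre-stimulus phase alignment the study relates to attentional cueing, firing-rate modulation, and reaction times.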

Posted ContentDOI
07 Oct 2020-bioRxiv
TL;DR: This model with its novel readout sets a new state-of-the-art for neural response prediction in mouse visual cortex from natural images, generalizes between animals, and captures characteristic cortical features better than current task-driven pre-training approaches such as VGG16.
Abstract: Deep neural networks (DNN) have set new standards at predicting responses of neural populations to visual input. Most such DNNs consist of a convolutional network (core) shared across all neurons which learns a representation of neural computation in visual cortex and a neuron-specific readout that linearly combines the relevant features in this representation. The goal of this paper is to test whether such a representation is indeed generally characteristic for visual cortex, i.e. generalizes between animals of a species, and what factors contribute to obtaining such a generalizing core. To push all non-linear computations into the core where the generalizing cortical features should be learned, we devise a novel readout that reduces the number of parameters per neuron in the readout by up to two orders of magnitude compared to the previous state-of-the-art. It does so by taking advantage of retinotopy and learns a Gaussian distribution over the neuron's receptive field position. With this new readout we train our network on neural responses from mouse primary visual cortex (V1) and obtain a gain in performance of 7% compared to the previous state-of-the-art network. We then investigate whether the convolutional core indeed captures general cortical features by using the core in transfer learning to a different animal. When transferring a core trained on thousands of neurons from various animals and scans we exceed the performance of training directly on that animal by 12%, and outperform a commonly used VGG16 core pre-trained on imagenet by 33%. In addition, transfer learning with our data-driven core is more data-efficient than direct training, achieving the same performance with only 40% of the data. Our model with its novel readout thus sets a new state-of-the-art for neural response prediction in mouse visual cortex from natural images, generalizes between animals, and captures characteristic cortical features better than current task-driven pre-training approaches such as VGG16.
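To make the readout idea above concrete: instead of a full spatial mask per neuron, the readout keeps one learned receptive-field position per neuron and a single weight per core channel. The numpy sketch below shows only the forward pass of such a position-based readout (bilinear sampling at a fixed learned location); the actual model learns a Gaussian distribution over that position and is trained end-to-end with the convolutional core, so all arrays, shapes, and function names here are illustrative assumptions.

```python
import numpy as np

def bilinear_sample(features, x, y):
    """Sample a (C, H, W) feature map at a fractional location (x, y)."""
    c, h, w = features.shape
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    dx, dy = x - x0, y - y0
    return ((1 - dx) * (1 - dy) * features[:, y0, x0]
            + dx * (1 - dy) * features[:, y0, x1]
            + (1 - dx) * dy * features[:, y1, x0]
            + dx * dy * features[:, y1, x1])

def position_readout(core_features, positions, weights, biases):
    """Per-neuron readout: sample the shared core at that neuron's learned
    RF position (the mean of its position Gaussian) and apply one weight
    per feature channel -- far fewer parameters than a full spatial mask."""
    rates = []
    for (x, y), w, b in zip(positions, weights, biases):
        feat = bilinear_sample(core_features, x, y)       # (C,) feature vector
        rates.append(max(feat @ w + b, 0.0))              # rectified response
    return np.array(rates)

# Hypothetical usage: a 64-channel, 18x32 core output and 3 model neurons.
rng = np.random.default_rng(4)
core = rng.normal(size=(64, 18, 32))
positions = [(5.3, 10.2), (20.7, 3.1), (15.0, 9.9)]       # learned (x, y) per neuron
weights = rng.normal(scale=0.1, size=(3, 64))
biases = np.zeros(3)
print(position_readout(core, positions, weights, biases))
```

The parameter count per neuron is then roughly the number of core channels plus two coordinates, which is broadly how the reduction in readout parameters described in the abstract is achieved.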

Journal ArticleDOI
TL;DR: It is hypothesized that early exposure to background noise can improve signal-in-noise processing, and the resulting receptive field plasticity in the primary auditory cortex can reveal functional principles guiding that important task.

Journal ArticleDOI
TL;DR: These findings reveal antagonistic center-surround mechanisms for direction selectivity and demonstrate how context-dependent receptive field reorganization enables flexible computations.

Journal ArticleDOI
TL;DR: It is shown that optogenetically activated ganglion cells are each sensitive to a small region of visual space, and a simple model based on this small receptive field accurately predicted their responses to complex stimuli and estimated the maximal acuity expected by a patient.
Abstract: In many cases of inherited retinal degenerations, ganglion cells are spared despite photoreceptor cell death, making it possible to stimulate them to restore visual function. Several studies have shown that it is possible to express an optogenetic protein in ganglion cells and make them light sensitive, a promising strategy to restore vision. However, the spatial resolution of optogenetically-reactivated retinas has rarely been measured, especially in the primate. Since the optogenetic protein is also expressed in axons, it is unclear if these neurons will only be sensitive to the stimulation of a small region covering their somas and dendrites, or if they will also respond to any stimulation overlapping with their axon, dramatically impairing spatial resolution. Here we recorded responses of mouse and macaque retinas to random checkerboard patterns following an in vivo optogenetic therapy. We show that optogenetically activated ganglion cells are each sensitive to a small region of visual space. A simple model based on this small receptive field accurately predicted their responses to complex stimuli. From this model, we simulated how the entire population of light-sensitive ganglion cells would respond to letters of different sizes. We then estimated the maximal acuity expected by a patient, assuming the patient could make optimal use of the information delivered by this reactivated retina. The obtained acuity is above the limit of legal blindness. Our model also makes interesting predictions on how acuity might vary upon changing the therapeutic strategy, assuming an optimal use of the information present in the retinal activity. Optogenetic therapy could thus potentially lead to high-resolution vision, under conditions that our model helps to determine.

Journal ArticleDOI
TL;DR: It is found that mouse visual cortex contains a region in which pRFs are considerably smaller, the ‘focea’, which represents a location in space directly in front of, and slightly above, the mouse.
Abstract: The representation of space in mouse visual cortex was initially thought to be relatively uniform, with no strong biases towards any particular region of space. This contrasts with the primate visual cortex with its over-representation of the fovea, placing potential limits on the translation of research in mice to humans. Here we reveal a previously unsuspected organization of the visual cortex of mice that resembles the fovea-centric organization of human visual cortex. Using population receptive-field (pRF) mapping techniques, which allow estimates to be made of aggregate receptive field sizes, we found that mouse visual cortex contains a region in which pRFs are considerably smaller. This region, the 'focea', represents a location in space directly in front of, and slightly above, the mouse. Using two-photon imaging we show that the smaller pRFs are due to a more orderly representation of space and an over-representation of binocular regions of the visual scene. We also show that RFs of single neurons in areas LM, AL and AM are smaller at the focea. Mice have improved visual resolution in this region of space and freely-moving mice make compensatory eye-movements to hold this region in front of them. Our results indicate that the representation of space in mouse visual cortex is non-uniform and mice have spatial biases in their visual processing. The presence of a focea has important implications for the use of the mouse model of vision.
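Population receptive fields of the kind mapped here are commonly estimated by predicting each recording site's response as the overlap between the stimulus aperture and a candidate 2D Gaussian, then searching over position and size (the Dumoulin-and-Wandell-style approach). The sketch below is a minimal grid-search version with simulated bar apertures; the paper's actual stimuli, model, and fitting pipeline may differ.

```python
import numpy as np

def fit_prf(apertures, response, grid_pos, grid_sigma, xx, yy):
    """Grid-search pRF fit.

    apertures : (n_timepoints, n_pixels) binarized stimulus masks.
    response  : (n_timepoints,) measured response time course.
    Returns ((x, y, sigma), correlation) for the best-fitting Gaussian pRF.
    """
    best, best_r = None, -np.inf
    for x0 in grid_pos:
        for y0 in grid_pos:
            for s in grid_sigma:
                prf = np.exp(-((xx - x0) ** 2 + (yy - y0) ** 2) / (2 * s ** 2))
                pred = apertures @ prf.ravel()            # predicted time course
                r = np.corrcoef(pred, response)[0, 1]
                if r > best_r:
                    best, best_r = (x0, y0, s), r
    return best, best_r

# Hypothetical usage: 40x40 pixel stimulus grid swept by vertical and horizontal bars.
rng = np.random.default_rng(5)
xx, yy = np.meshgrid(np.arange(40), np.arange(40))
bars = [(np.abs(xx - p) < 3).astype(float).ravel() for p in range(0, 40, 2)] \
     + [(np.abs(yy - p) < 3).astype(float).ravel() for p in range(0, 40, 2)]
apertures = np.stack(bars)

true_prf = np.exp(-((xx - 24) ** 2 + (yy - 12) ** 2) / (2 * 4.0 ** 2))
response = apertures @ true_prf.ravel() + 0.5 * rng.normal(size=apertures.shape[0])

fit, r = fit_prf(apertures, response, np.arange(0, 40, 4), [2.0, 4.0, 8.0], xx, yy)
print("estimated (x, y, sigma):", fit, " correlation:", round(float(r), 2))
```

In practice, cross-validated variance explained rather than raw correlation is typically used to accept or reject fits, and the same machinery can be applied to single-neuron two-photon RFs as in the study.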

Journal ArticleDOI
TL;DR: How vLGN processes visual information is examined, with comparisons to dLGN and SC for perspective; the results suggest that vLGN enables rapid movement by releasing target motor structures from inhibition.
Abstract: Even though the lateral geniculate nucleus of the thalamus (LGN) is associated with form vision, that is not its sole role. Only the dorsal portion of LGN (dLGN) projects to V1. The ventral division (vLGN) connects subcortically, sending inhibitory projections to sensorimotor structures, including the superior colliculus (SC) and regions associated with certain behavioral states, such as fear (Monavarfeshani et al., 2017; Salay et al., 2018). We combined computational, physiological, and anatomical approaches to explore visual processing in vLGN of mice of both sexes, making comparisons to dLGN and SC for perspective. Compatible with past, qualitative descriptions, the receptive fields we quantified in vLGN were larger than those in dLGN, and most cells preferred bright versus dark stimuli (Harrington, 1997). Dendritic arbors spanned the length and/or width of vLGN and were often asymmetric, positioned to collect input from large but discrete territories. By contrast, arbors in dLGN are compact (Krahe et al., 2011). Consistent with spatially coarse receptive fields in vLGN, visually evoked changes in spike timing were less precise than for dLGN and SC. Notably, however, the membrane currents and spikes of some cells in vLGN displayed gamma oscillations whose phase and strength varied with stimulus pattern, as for SC (Stitt et al., 2013). Thus, vLGN can engage its targets using oscillation-based and conventional rate codes. Finally, dark shadows activate SC and drive escape responses, whereas vLGN prefers bright stimuli. Thus, one function of long-range inhibitory projections from vLGN might be to enable movement by releasing motor targets, such as SC, from suppression.SIGNIFICANCE STATEMENT Only the dorsal lateral geniculate nucleus (dLGN) connects to cortex to serve form vision; the ventral division (vLGN) projects subcortically to sensorimotor nuclei, including the superior colliculus (SC), via long-range inhibitory connections. Here, we asked how vLGN processes visual information, making comparisons with dLGN and SC for perspective. Cells in vLGN versus dLGN had wider dendritic arbors, larger receptive fields, and fired with lower temporal precision, consistent with a modulatory role. Like SC, but not dLGN, visual stimuli entrained oscillations in vLGN, perhaps reflecting shared strategies for visuomotor processing. Finally, most neurons in vLGN preferred bright shapes, whereas dark stimuli activate SC and drive escape behaviors, suggesting that vLGN enables rapid movement by releasing target motor structures from inhibition.

Posted ContentDOI
22 May 2020-bioRxiv
TL;DR: It is shown that in awake mice topographically organized cortical feedback modulates spatial integration in dLGN by sharpening receptive fields (RFs) and increasing surround suppression, likely via recruitment of neurons in visTRN.
Abstract: En route from retina to cortex, visual information travels through the dorsolateral geniculate nucleus of the thalamus (dLGN), where extensive cortico-thalamic (CT) feedback has been suggested to modulate spatial processing. How this modulation arises from direct excitatory and indirect inhibitory CT feedback components remains enigmatic. We show that in awake mice topographically organized cortical feedback modulates spatial integration in dLGN by sharpening receptive fields (RFs) and increasing surround suppression. Guided by a network model revealing wide-scale inhibitory CT feedback necessary to reproduce these effects, we targeted the visual sector of the thalamic reticular nucleus (visTRN) for recordings. We found that visTRN neurons have large receptive fields, show little surround suppression, and have strong feedback-dependent responses to large stimuli, making them an ideal candidate for mediating feedback-enhanced surround suppression in dLGN. We conclude that cortical feedback sculpts spatial integration in dLGN, likely via recruitment of neurons in visTRN.

Journal ArticleDOI
TL;DR: Investigating how binocular disparity is processed in the mouse visual system will not only help delineate the role of mouse higher areas in visual processing, but also shed light on how the mammalian brain computes stereopsis.
Abstract: Binocular disparity, the difference between the two eyes' images, is a powerful cue to generate the 3D depth percept known as stereopsis. In primates, binocular disparity is processed in multiple areas of the visual cortex, with distinct contributions of higher areas to specific aspects of depth perception. Mice, too, can perceive stereoscopic depth, and neurons in primary visual cortex (V1) and higher-order, lateromedial (LM) and rostrolateral (RL) areas were found to be sensitive to binocular disparity. A detailed characterization of disparity tuning across mouse visual areas is lacking, however, and acquiring such data might help clarify the role of higher areas for disparity processing and establish putative functional correspondences to primate areas. We used two-photon calcium imaging in female mice to characterize the disparity tuning properties of neurons in visual areas V1, LM, and RL in response to dichoptically presented binocular gratings, as well as random dot correlograms (RDC). In all three areas, many neurons were tuned to disparity, showing strong response facilitation or suppression at optimal or null disparity, respectively, even in neurons classified as monocular by conventional ocular dominance (OD) measurements. Neurons in higher areas exhibited broader and more asymmetric disparity tuning curves compared with V1, as observed in primate visual cortex. Finally, we probed neurons' sensitivity to true stereo correspondence by comparing responses to correlated RDC (cRDC) and anticorrelated RDC (aRDC). Area LM, akin to primate ventral visual stream areas, showed higher selectivity for correlated stimuli and reduced anticorrelated responses, indicating higher-level disparity processing in LM compared with V1 and RL. SIGNIFICANCE STATEMENT A major cue for inferring 3D depth is disparity between the two eyes' images. Investigating how binocular disparity is processed in the mouse visual system will not only help delineate the role of mouse higher areas for visual processing, but also shed light on how the mammalian brain computes stereopsis. We found that binocular integration is a prominent feature of mouse visual cortex, as many neurons are selectively and strongly modulated by binocular disparity. Comparison of responses to correlated and anticorrelated random dot correlograms (RDC) revealed that lateromedial area (LM) is more selective to correlated stimuli, while less sensitive to anticorrelated stimuli compared with primary visual cortex (V1) and rostrolateral area (RL), suggesting higher-level disparity processing in LM, resembling primate ventral visual stream areas.

Journal ArticleDOI
TL;DR: An extensive review of the pigeon visual system, focusing on the known cell types, receptive field characteristics, mechanisms of perception/visual attention, and projection profiles of neurons in the thalamofugal and tectofugal pathways, concludes with a discussion of object and face processing in birds.