
Showing papers by "Claudia Clopath published in 2020"


Journal ArticleDOI
TL;DR: This work provides a set of guidelines for establishing successful long-term collaborations between AI researchers and application-domain experts, relates them to existing AI4SG projects, and identifies key opportunities for future AI applications targeted towards social good.
Abstract: Advances in machine learning (ML) and artificial intelligence (AI) present an opportunity to build better tools and solutions to help address some of the world’s most pressing challenges, and deliver positive social impact in accordance with the priorities outlined in the United Nations’ 17 Sustainable Development Goals (SDGs). The AI for Social Good (AI4SG) movement aims to establish interdisciplinary partnerships centred around AI applications towards SDGs. We provide a set of guidelines for establishing successful long-term collaborations between AI researchers and application-domain experts, relate them to existing AI4SG projects and identify key opportunities for future AI applications targeted towards social good. The AI for Social Good movement aims to apply AI/ML tools to help in delivering on the United Nations’ sustainable development goals (SDGs). Here, the authors identify the challenges and propose guidelines for designing and implementing successful partnerships between AI researchers and application-domain experts.

122 citations


Journal ArticleDOI
TL;DR: It is shown that inhibitory synapses from parvalbumin- and somatostatin-expressing interneurons undergo long-term depression and potentiation, respectively (PV-iLTD and SST-iLTP), during physiological activity patterns.
Abstract: The formation and maintenance of spatial representations within hippocampal cell assemblies is strongly dictated by patterns of inhibition from diverse interneuron populations. Although it is known that inhibitory synaptic strength is malleable, induction of long-term plasticity at distinct inhibitory synapses and its regulation of hippocampal network activity is not well understood. Here, we show that inhibitory synapses from parvalbumin and somatostatin expressing interneurons undergo long-term depression and potentiation respectively (PV-iLTD and SST-iLTP) during physiological activity patterns. Both forms of plasticity rely on T-type calcium channel activation to confer synapse specificity but otherwise employ distinct mechanisms. Since parvalbumin and somatostatin interneurons preferentially target perisomatic and distal dendritic regions respectively of CA1 pyramidal cells, PV-iLTD and SST-iLTP coordinate a reprioritisation of excitatory inputs from entorhinal cortex and CA3. Furthermore, circuit-level modelling reveals that PV-iLTD and SST-iLTP cooperate to stabilise place cells while facilitating representation of multiple unique environments within the hippocampal network.

86 citations


Journal ArticleDOI
TL;DR: This work proposes a model where a spiking recurrent network of excitatory and inhibitory neurons drives a read-out layer: the dynamics of the driver recurrent network are trained to encode time, which is then mapped through the read-out neurons to encode another dimension, such as space or a phase.
Abstract: Learning to produce spatiotemporal sequences is a common task that the brain has to solve. The same neurons may be used to produce different sequential behaviours. The way the brain learns and encodes such tasks remains unknown as current computational models do not typically use realistic biologically-plausible learning. Here, we propose a model where a spiking recurrent network of excitatory and inhibitory spiking neurons drives a read-out layer: the dynamics of the driver recurrent network is trained to encode time which is then mapped through the read-out neurons to encode another dimension, such as space or a phase. Different spatiotemporal patterns can be learned and encoded through the synaptic weights to the read-out neurons that follow common Hebbian learning rules. We demonstrate that the model is able to learn spatiotemporal dynamics on time scales that are behaviourally relevant and we show that the learned sequences are robustly replayed during a regime of spontaneous activity.

41 citations
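The read-out scheme described in this abstract can be caricatured in a few lines of code. The sketch below is a rate-based simplification, not the paper's spiking model: the recurrent driver is abstracted as a bank of Gaussian time-tuned units, the read-out weights are trained with an error-gated Hebbian (delta) rule, and all names and constants (`T`, `N`, `eta`, the bump width) are illustrative assumptions.

```python
import math

T, N = 100, 50                      # time steps, driver units (illustrative)

def driver_activity(t):
    """Time-coding driver: unit i peaks at time i*T/N (Gaussian bumps)."""
    return [math.exp(-((t - i * T / N) ** 2) / (2 * 4.0 ** 2)) for i in range(N)]

target = [math.sin(2 * math.pi * t / T) for t in range(T)]   # desired read-out

w = [0.0] * N                       # read-out synaptic weights
eta = 0.05
for epoch in range(200):
    for t in range(T):
        r = driver_activity(t)
        y = sum(wi * ri for wi, ri in zip(w, r))
        # Error-gated Hebbian (delta-rule) update on the read-out synapses only;
        # the driver dynamics themselves are left untouched
        for i in range(N):
            w[i] += eta * (target[t] - y) * r[i]

# Mean squared error of the replayed sequence after learning
err = sum(
    (target[t] - sum(wi * ri for wi, ri in zip(w, driver_activity(t)))) ** 2
    for t in range(T)
) / T
```

Because the driver only encodes time, swapping the `target` array is enough to map the same temporal backbone onto a different spatial pattern, which is the separation the abstract describes.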


Journal ArticleDOI
19 Feb 2020-eLife
TL;DR: It is argued that patterned perturbation of neurons is in fact necessary to reveal the specific dynamics of inhibitory stabilization, emerging in cortical networks with strong excitatory and inhibitory functional subnetworks, as recently reported in mouse visual cortex.
Abstract: Perturbation of neuronal activity is key to understanding the brain's functional properties; however, intervention studies typically perturb neurons in a nonspecific manner. Recent optogenetic techniques have enabled patterned perturbations, in which specific patterns of activity can be evoked in identified target neurons to reveal more specific cortical function. Here, we argue that patterned perturbation of neurons is in fact necessary to reveal the specific dynamics of inhibitory stabilization, emerging in cortical networks with strong excitatory and inhibitory functional subnetworks, as recently reported in mouse visual cortex. We propose a specific perturbative signature of these networks and investigate how this can be measured under different experimental conditions. Functionally, rapid spontaneous transitions between selective ensembles of neurons emerge in such networks, consistent with experimental results. Our study outlines the dynamical and functional properties of feature-specific inhibitory-stabilized networks, and suggests experimental protocols that can be used to detect them in the intact cortex.

28 citations


Journal ArticleDOI
TL;DR: This study provides a general computational framework to investigate how single-neuron perturbations are linked to cortical connectivity and sensory coding, and paves the way to mapping the perturbome of neuronal networks in future studies.
Abstract: To unravel the functional properties of the brain, we need to untangle how neurons interact with each other and coordinate in large-scale recurrent networks. One way to address this question is to measure the functional influence of individual neurons on each other by perturbing them in vivo. Application of such single-neuron perturbations in mouse visual cortex has recently revealed feature-specific suppression between excitatory neurons, despite the presence of highly specific excitatory connectivity, which was deemed to underlie feature-specific amplification. Here, we studied which connectivity profiles are consistent with these seemingly contradictory observations, by modeling the effect of single-neuron perturbations in large-scale neuronal networks. Our numerical simulations and mathematical analysis revealed that, contrary to the prima facie assumption, neither inhibition dominance nor broad inhibition alone were sufficient to explain the experimental findings; instead, strong and functionally specific excitatory-inhibitory connectivity was necessary, consistent with recent findings in the primary visual cortex of rodents. Such networks had a higher capacity to encode and decode natural images, and this was accompanied by the emergence of response gain nonlinearities at the population level. Our study provides a general computational framework to investigate how single-neuron perturbations are linked to cortical connectivity and sensory coding, and paves the way to mapping the perturbome of neuronal networks in future studies.

27 citations


Journal ArticleDOI
21 Apr 2020-eLife
TL;DR: A recurrent network model predicts that neurons with high population coupling exhibit more long-term stimulus response variability than neurons with low population coupling, and substantiates this prediction using recordings from the Allen Brain Observatory, finding that a neuron’s population coupling is correlated with the plasticity of its orientation preference.
Abstract: Some neurons have stimulus responses that are stable over days, whereas other neurons have highly plastic stimulus responses. Using a recurrent network model, we explore whether this could be due to an underlying diversity in their synaptic plasticity. We find that, in a network with diverse learning rates, neurons with fast rates are more coupled to population activity than neurons with slow rates. This plasticity-coupling link predicts that neurons with high population coupling exhibit more long-term stimulus response variability than neurons with low population coupling. We substantiate this prediction using recordings from the Allen Brain Observatory, finding that a neuron's population coupling is correlated with the plasticity of its orientation preference. Simulations of a simple perceptual learning task suggest a particular functional architecture: a stable 'backbone' of stimulus representation formed by neurons with low population coupling, on top of which lies a flexible substrate of neurons with high population coupling.

23 citations


Journal ArticleDOI
TL;DR: A computational model of the hippocampal CA1 network is proposed, which describes the formation, dynamics and stabilization of place fields and suggests that different types of interneurons are essential to unravel the mechanisms underlying place field plasticity.
Abstract: During the exploration of novel environments, place fields are rapidly formed in hippocampal CA1 neurons. Place cell firing rate increases in early stages of exploration of novel environments but returns to baseline levels in familiar environments. Although similar in amplitude and width, place fields in familiar environments are more stable than in novel environments. We propose a computational model of the hippocampal CA1 network, which describes the formation, dynamics and stabilization of place fields. We show that although somatic disinhibition is sufficient to form place fields, dendritic inhibition along with synaptic plasticity is necessary for place field stabilization. Our model suggests that place cell stability can be attributed to strong excitatory synaptic weights and strong dendritic inhibition. We show that the interplay between somatic and dendritic inhibition balances the increased excitatory weights, such that place cells return to their baseline firing rate after exploration. Our model suggests that different types of interneurons are essential to unravel the mechanisms underlying place field plasticity. Finally, we predict that artificially induced dendritic events can shift place fields even after place field stabilization.

13 citations



Posted ContentDOI
24 Jan 2020-bioRxiv
TL;DR: Hippocampal pyramidal cells show branch-specific tuning for different place fields, and their coupling to their soma changes with experience of an environment, demonstrating that spatial representation is organized in a branch-specific manner within dendrites of hippocampal pyramidal cells.
Abstract: Dendrites of pyramidal neurons integrate different sensory inputs, and non-linear dendritic computations drive feature selective tuning and plasticity. Yet little is known about how dendrites themselves represent the environment, the degree to which they are coupled to their soma, and how that coupling is sculpted with experience. In order to answer these questions, we developed a novel preparation in which we image soma and connected dendrites in a single plane across days using in vivo two-photon microscopy. Using this preparation, we monitored spatially tuned activity in area CA3 of the hippocampus in head-fixed mice running on a linear track. We identified 'place dendrites', which can stably and precisely represent both familiar and novel spatial environments. Dendrites could display place tuning independent of their connected soma and even their sister dendritic branches, the first evidence for branch-specific tuning in the hippocampus. In a familiar environment, spatially tuned somata were more decoupled from their dendrites as compared to non-tuned somata. This relationship was absent in a novel environment, suggesting an experience dependent selective gating of dendritic spatial inputs. We then built a data-driven multicompartment computational model that could capture the experimentally observed correlations. Our model predicts that place cells exhibiting branch-specific tuning have more flexible place fields, while neurons with homogeneous or co-tuned dendritic branches have higher place field stability. These findings demonstrate that spatial representation is organized in a branch-specific manner within dendrites of hippocampal pyramidal cells. Further, spatial inputs from dendrites to soma are selectively and dynamically gated in an experience-dependent manner, endowing both flexibility and stability to the cognitive map of space.

9 citations


Posted Content
TL;DR: The proposed MTR buffer comprises a cascade of sub-buffers that accumulate experiences at different timescales, enabling the agent to improve the trade-off between adaptation to new data and retention of old knowledge.
Abstract: In this paper, we propose a multi-timescale replay (MTR) buffer for improving continual learning in RL agents faced with environments that are changing continuously over time at timescales that are unknown to the agent. The basic MTR buffer comprises a cascade of sub-buffers that accumulate experiences at different timescales, enabling the agent to improve the trade-off between adaptation to new data and retention of old knowledge. We also combine the MTR framework with invariant risk minimization, with the idea of encouraging the agent to learn a policy that is robust across the various environments it encounters over time. The MTR methods are evaluated in three different continual learning settings on two continuous control tasks and, in many cases, show improvement over the baselines.

8 citations
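The cascade of sub-buffers can be sketched as follows. This is an illustrative reconstruction based only on the description above, not the authors' implementation: the promotion-with-probability rule, the capacities, and the names (`MTRBuffer`, `p_promote`) are all assumptions.

```python
import random
from collections import deque

class MTRBuffer:
    """Sketch of a multi-timescale replay buffer. Each sub-buffer holds
    experiences at a progressively longer timescale: an item evicted from
    sub-buffer k is pushed into sub-buffer k+1 with probability p_promote,
    so later buffers retain a thinning sample of increasingly old data."""

    def __init__(self, n_buffers=3, capacity=100, p_promote=0.5, seed=0):
        self.buffers = [deque() for _ in range(n_buffers)]
        self.capacity = capacity
        self.p = p_promote
        self.rng = random.Random(seed)

    def add(self, experience):
        item = experience
        for buf in self.buffers:
            buf.append(item)
            if len(buf) <= self.capacity:
                return
            evicted = buf.popleft()          # oldest experience overflows
            if self.rng.random() >= self.p:  # dropped: not promoted downstream
                return
            item = evicted                   # else cascades to the slower buffer

    def sample(self, batch_size):
        # Uniform sampling over the union of sub-buffers mixes recent and old
        # experiences, trading off adaptation against retention
        pool = [x for buf in self.buffers for x in buf]
        return self.rng.sample(pool, min(batch_size, len(pool)))
```

With `p_promote=1.0` every evicted item survives one level down, so the three sub-buffers end up holding three consecutive, progressively older windows of the experience stream.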


Posted ContentDOI
09 Dec 2020-bioRxiv
TL;DR: A voltage-dependent inhibitory plasticity model is proposed in which synaptic updates depend on presynaptic spike arrival and postsynaptic membrane voltage and accounts for network homeostasis while allowing for diverse neuronal dynamics observed across brain regions.
Abstract: Neural networks are highly heterogeneous while homeostatic mechanisms ensure that this heterogeneity is kept within a physiologically safe range. One such homeostatic mechanism, inhibitory synaptic plasticity, has been observed across different brain regions. Computationally, however, inhibitory synaptic plasticity models often lead to a strong suppression of neuronal diversity. Here, we propose a model of inhibitory synaptic plasticity in which synaptic updates depend on presynaptic spike arrival and postsynaptic membrane voltage. Our plasticity rule regulates the network activity by setting a target value for the postsynaptic membrane potential over a long timescale. In a feedforward network, we show that our voltage-dependent inhibitory synaptic plasticity (vISP) model regulates the excitatory/inhibitory ratio while allowing for a broad range of postsynaptic firing rates and thus network diversity. In a feedforward network in which excitatory and inhibitory neurons receive correlated input, our plasticity model allows for the development of co-tuned excitation and inhibition, in agreement with recordings in rat auditory cortex. In recurrent networks, our model supports memory formation and retrieval while allowing for the development of heterogeneous neuronal activity. Finally, we implement our vISP rule in a model of the hippocampal CA1 region whose pyramidal cell excitability differs across cells. This model accounts for the experimentally observed variability in pyramidal cell features such as the number of place fields, the field sizes, and the portion of the environment covered by each cell. Importantly, our model supports a combination of sparse and dense coding in the hippocampus. Therefore, our voltage-dependent inhibitory plasticity model accounts for network homeostasis while allowing for diverse neuronal dynamics observed across brain regions.
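A single-synapse caricature of such a voltage-dependent rule is sketched below. Everything here (the leaky integrator, the Poisson inhibitory input, the use of a slow voltage average, and all constants) is an illustrative assumption rather than the paper's vISP model: on each presynaptic spike the inhibitory weight is nudged in proportion to how far the slowly averaged membrane potential sits from its target.

```python
import random

dt, tau_m = 1.0, 20.0          # ms; leaky-integrator membrane time constant
u_rest, u_target = -70.0, -60.0  # resting and target membrane potentials (mV)
eta = 0.01                     # inhibitory learning rate (illustrative)

rng = random.Random(1)
w_inh = 0.0                    # inhibitory weight (positive = stronger inhibition)
u = u_rest                     # membrane potential
u_avg = u_rest                 # slow running average of the membrane potential
tau_avg = 1000.0               # long homeostatic timescale (ms)

for step in range(200000):
    exc = 15.0                             # constant excitatory drive (mV)
    pre_spike = rng.random() < 0.05        # ~50 Hz Poisson inhibitory input
    inh = w_inh if pre_spike else 0.0
    u += dt / tau_m * (u_rest - u + exc) - inh
    u_avg += dt / tau_avg * (u - u_avg)
    if pre_spike:
        # Voltage-dependent update on presynaptic spike arrival: inhibition
        # potentiates while the slow voltage average is above target and
        # depresses while it is below, steering u_avg toward u_target
        w_inh += eta * (u_avg - u_target)
        w_inh = max(w_inh, 0.0)
```

Because the rule targets a membrane-potential average rather than a firing rate, cells with different excitabilities can settle at very different firing rates under the same rule, which is the diversity-preserving property the abstract emphasises.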

Posted ContentDOI
04 Aug 2020-bioRxiv
TL;DR: In a hippocampal CA1 spiking model, it is shown that IDIP in combination with place-tuned input can explain the formation of active and silent place cells, as well as place cell remapping following optogenetic silencing of active place cells, and can also stabilise recurrent network dynamics.
Abstract: Despite ongoing experiential change, neural activity maintains remarkable stability. Such stability is thought to be mediated by homeostatic plasticity and is deemed to be critical for normal neural function. However, what aspect of neural activity homeostatic plasticity conserves, and how it still maintains the flexibility necessary for learning and memory, is not fully understood. Homeostatic plasticity is often studied in the context of neuron-centered control, where the deviations from the target activity for each individual neuron are suppressed. However, experimental studies suggest that there are additional, network-centered mechanisms. These may act through the inhibitory neurons, due to their dense network connectivity. Here we use a computational framework to study a potential mechanism for such homeostasis, using experimentally inspired, input-dependent inhibitory plasticity (IDIP). In a hippocampal CA1 spiking model, we show that IDIP in combination with place-tuned input can explain the formation of active and silent place cells, as well as place cell remapping following optogenetic silencing of active place cells. Furthermore, we show that IDIP can also stabilise recurrent network dynamics, as well as preserve network firing rate heterogeneity and stimulus representation. Interestingly, in an associative memory task, IDIP facilitates persistent activity after memory encoding, in line with some experimental data. Hence, the establishment of global network balance with IDIP has diverse functional implications and may be able to explain experimental phenomena across different brain areas.

Posted ContentDOI
20 Feb 2020-bioRxiv
TL;DR: This study provides a general computational framework to investigate how single-neuron perturbations are linked to cortical connectivity and sensory coding, and paves the way to mapping the perturbome of neuronal networks in future studies.
Abstract: To unravel the functional properties of the brain, we need to untangle how neurons interact with each other and coordinate in large-scale recurrent networks. One way to address this question is to measure the functional influence of individual neurons on each other by perturbing them in vivo. Application of such single-neuron perturbations in mouse visual cortex has recently revealed feature-specific suppression between excitatory neurons, despite the presence of highly specific excitatory connectivity, which was deemed to underlie feature-specific amplification. Here, we studied which connectivity profiles are consistent with these seemingly contradictory observations, by modelling the effect of single-neuron perturbations in large-scale neuronal networks. Our numerical simulations and mathematical analysis revealed that, contrary to the prima facie assumption, neither inhibition dominance nor broad inhibition alone were sufficient to explain the experimental findings; instead, strong and functionally specific excitatory-inhibitory connectivity was necessary, consistent with recent findings in the primary visual cortex of rodents. Such networks in turn had a higher capacity to encode and decode natural images, which was accompanied by the emergence of response gain nonlinearities at the population level. Our study provides a general computational framework to investigate how single-neuron perturbations are linked to cortical connectivity and sensory coding, and paves the way to mapping the perturbome of neuronal networks in future studies.

Posted ContentDOI
03 Jun 2020-bioRxiv
TL;DR: The neuronal code that underlies inter-subject variability in fear specificity is identified using longitudinal imaging of neuronal activity before and after differential fear conditioning in the auditory cortex of mice.
Abstract: Learning to avoid dangerous signals while preserving normal behavioral responses to safe stimuli is essential for everyday behavior and survival. Like other forms of learning, fear learning has a high level of inter-subject variability. Following an identical fear conditioning protocol, different subjects exhibit a range of fear specificity. Under high specificity, subjects specialize fear to only the paired (dangerous) stimulus, whereas under low specificity, subjects generalize fear to other (safe) sensory stimuli. Pathological fear generalization underlies emotional disorders, such as post-traumatic stress disorder. Despite decades of work, the neuronal basis that determines fear specificity level remains unknown. We identified the neuronal code that underlies variability in fear specificity. We performed longitudinal imaging of the activity of neuronal ensembles in the auditory cortex of mice prior to and after the mice were subjected to differential fear conditioning. The neuronal code in the auditory cortex prior to learning predicted the level of specificity following fear learning across subjects. After fear learning, population neuronal responses were reorganized: the responses to the safe stimulus decreased, whereas the responses to the dangerous stimulus remained the same, rather than decreasing as in pseudo-conditioned subjects. The magnitude of these changes, however, did not correlate with learning specificity, suggesting that they did not reflect the fear memory. Together, our results identify a new, temporally restricted function for cortical activity in associative learning. These results reconcile seemingly conflicting previous findings and provide a neuronal code for determining individual patterns in learning.

Posted ContentDOI
08 Sep 2020-bioRxiv
TL;DR: This work introduces a network model of spiking neurons with a hierarchical organisation aimed at sequence learning on multiple time scales that displays faster learning, more flexible relearning, increased capacity, and higher robustness to perturbations.
Abstract: Sequential behaviour is often compositional and organised across multiple time scales: a set of individual elements developing on short time scales (motifs) are combined to form longer functional sequences (syntax). Such organisation leads to a natural hierarchy that can be used advantageously for learning, since the motifs and the syntax can be acquired independently. Despite mounting experimental evidence for hierarchical structures in neuroscience, models for temporal learning based on neuronal networks have mostly focused on serial methods. Here, we introduce a network model of spiking neurons with a hierarchical organisation aimed at sequence learning on multiple time scales. Using biophysically motivated neuron dynamics and local plasticity rules, the model can learn motifs and syntax independently. Furthermore, the model can relearn sequences efficiently and store multiple sequences. Compared to serial learning, the hierarchical model displays faster learning, more flexible relearning, increased capacity, and higher robustness to perturbations. The hierarchical model redistributes the variability: it achieves high motif fidelity at the cost of higher variability in the between-motif timings.
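The separation of motifs and syntax can be illustrated symbolically. The sketch below is a deliberately abstract stand-in for the spiking network: motifs (short timescale) and syntax (long timescale) are stored independently, so either can be relearned without touching the other; the class and method names are hypothetical.

```python
class HierarchicalSequencer:
    """Two-level sequence store: motifs are short element sequences, syntax is
    an ordered list of motif identifiers. Replay composes the two levels."""

    def __init__(self):
        self.motifs = {}       # motif id -> list of elements (short timescale)
        self.syntax = []       # ordered motif ids (long timescale)

    def learn_motif(self, name, elements):
        self.motifs[name] = list(elements)

    def learn_syntax(self, order):
        self.syntax = list(order)

    def replay(self):
        # Expanding the syntax through the motif store yields the full sequence
        return [e for m in self.syntax for e in self.motifs[m]]

seq = HierarchicalSequencer()
seq.learn_motif("A", [1, 2, 3])
seq.learn_motif("B", [4, 5])
seq.learn_syntax(["A", "B", "A"])
full = seq.replay()

# Relearning only the syntax reuses the stored motifs: this is the "flexible
# relearning" advantage over a serial learner, which would have to relearn
# the entire flat sequence from scratch
seq.learn_syntax(["B", "A"])
```

A serial model would store `[1, 2, 3, 4, 5, 1, 2, 3]` as one flat chain, so reordering the motifs would mean relearning every transition; here only the two-element syntax changes.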

Posted ContentDOI
21 Dec 2020-bioRxiv
TL;DR: An attractor network model of long-term memory endowed with firing rate adaptation and global inhibition is proposed, in which limit cycles constrain transitions between memories and reproduce the power laws governing recall capacity, as well as serial position, contiguity and semantic effects observed in free recall.
Abstract: Despite the complexity of human memory, paradigms like free recall have revealed robust qualitative and quantitative characteristics, such as power laws governing recall capacity. Although abstract random matrix models could explain such laws, the possibility of their implementation in large networks of interacting neurons has so far remained unexplored. We study an attractor network model of long-term memory endowed with firing rate adaptation and global inhibition. Under appropriate conditions, the transitioning behaviour of the network from memory to memory is constrained by limit cycles that prevent the network from recalling all memories, with scaling similar to what has been found in experiments. When the model is supplemented with a heteroassociative learning rule, complementing the standard autoassociative learning rule, as well as short-term synaptic facilitation, our model reproduces other key findings in the free recall literature, namely serial position effects, contiguity and forward asymmetry effects, as well as the semantic effects found to guide memory recall. The model is consistent with a broad series of manipulations aimed at gaining a better understanding of the variables that affect recall, such as the role of rehearsal, presentation rates and (continuous/end-of-list) distractor conditions. We predict that recall capacity may be increased with the addition of small amounts of noise, for example in the form of weak random stimuli during recall. Moreover, we predict that although the statistics of the encoded memories have a strong effect on the recall capacity, the power laws governing recall capacity may still be expected to hold.
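The core mechanism, adaptation destabilising the currently recalled attractor, can be sketched with binary units. This is a textbook-style Hopfield caricature under stated assumptions (±1 units, synchronous updates, adaptation as a slow per-unit threshold), not the paper's rate model; global inhibition is only implicit in the zero-mean ±1 coding, and all constants are illustrative.

```python
import random

random.seed(3)
N, P = 200, 4
patterns = [[random.choice((-1, 1)) for _ in range(N)] for _ in range(P)]

# Standard Hebbian (autoassociative) couplings storing the P memories
J = [[sum(p[i] * p[j] for p in patterns) / N for j in range(N)] for i in range(N)]

def overlap(s, p):
    return sum(si * pi for si, pi in zip(s, p)) / N

def run(adapt_strength, steps=60):
    """Cue memory 0 and track how far the state ever drifts from it."""
    s = patterns[0][:]
    theta = [0.0] * N                  # slow per-unit adaptation variable
    min_m0 = 1.0
    for _ in range(steps):
        # Adaptation is subtracted from the recurrent input, so a persistently
        # active configuration progressively undermines its own stability
        h = [sum(J[i][j] * s[j] for j in range(N)) - adapt_strength * theta[i]
             for i in range(N)]
        s = [1 if hi >= 0 else -1 for hi in h]
        theta = [th + 0.2 * (si - th) for th, si in zip(theta, s)]
        min_m0 = min(min_m0, abs(overlap(s, patterns[0])))
    return min_m0

stable = run(0.0)      # no adaptation: the cued memory is a fixed point
itinerant = run(2.0)   # strong adaptation: the network is forced to leave it
```

Without adaptation the cued pattern is recalled indefinitely; with strong adaptation the overlap with it collapses within a few steps, after which the state is free to visit other memories, which is the transitioning behaviour the abstract builds on.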

Posted ContentDOI
11 Sep 2020-bioRxiv
TL;DR: It is proposed that homeostatic time constants can be slow if plasticity is gated, and it is suggested that the striking compartmentalisation of pyramidal cells and their inhibitory inputs enable large synaptic changes at the dendrite while maintaining network stability.
Abstract: With Hebbian learning, in which neurons that ‘fire together wire together’, well-known problems arise. On the one hand, plasticity can lead to unstable network dynamics, manifesting as run-away activity or silence. On the other hand, plasticity can erase or overwrite stored memories. Unstable dynamics can partly be addressed with homeostatic plasticity mechanisms. Unfortunately, the time constants of homeostatic mechanisms required in network models are much shorter than what has been measured experimentally. Here, we propose that homeostatic time constants can be slow if plasticity is gated. We investigate how the gating of plasticity influences the stability of network activity and stored memories. We use plastic balanced spiking neural networks consisting of excitatory neurons with a somatic and a dendritic compartment (which resemble cortical pyramidal cells in their firing properties), and inhibitory neurons targeting those compartments. We compare how different factors such as excitability, learning rate, and inhibition can lift the requirements for the critical time constant of homeostatic plasticity. We specifically investigate how gating of dendritic versus somatic plasticity allows for different amounts of weight changes in networks with the same critical homeostatic time constant. We suggest that the striking compartmentalisation of pyramidal cells and their inhibitory inputs enable large synaptic changes at the dendrite while maintaining network stability. We additionally show that spatially restricted plasticity in a subpopulation of the network improves stability. Finally, we compare how the different gates affect the stability of memories in the network.

Posted ContentDOI
25 Apr 2020-bioRxiv
TL;DR: Investigating whether inhibition from parvalbumin (PV)-expressing neurons is altered in primary somatosensory cortex in mice trained in a whisker-based reward-association task indicates reduced PV inhibition in L2/3 selectively enables an increase in translaminar recurrent activity, observed during SAT.
Abstract: Sensory and motor learning reorganizes neocortical circuitry, particularly manifested in the strength of excitatory synapses. Prior studies suggest reduced inhibition can facilitate glutamatergic synapse plasticity during learning, but the role of specific inhibitory neurons in this process has not been well-documented. Here we investigate whether inhibition from parvalbumin (PV)-expressing neurons is altered in primary somatosensory cortex in mice trained in a whisker-based reward-association task. Anatomical and electrophysiological analyses show PV input to L2/3, but not L5, pyramidal (Pyr) neurons is rapidly suppressed during early stages of sensory training, effects that are reversed after longer training periods. Importantly, sensory stimulation without reward does not alter PV-mediated inhibition. Computational modeling indicates that reduced PV inhibition in L2/3 selectively enables an increase in translaminar recurrent activity, also observed during SAT. PV disinhibition in superficial layers of the neocortex may be one of the earliest changes in learning-dependent rewiring of the cortical column.

Impact statement: Tactile learning is associated with reduced PV inhibition in superficial layers of somatosensory cortex. Modeling studies suggest that PV disinhibition can support prolonged recurrent activity initiated by thalamic input.

Posted ContentDOI
24 Feb 2020-bioRxiv
TL;DR: It is shown in a computational model that modification of recurrent weights, driven by a learned feedback signal, can account for the observed behavioural difference between within- and outside-manifold learning.
Abstract: Neural activity is often low dimensional and dominated by only a few prominent neural covariation patterns. It has been hypothesised that these covariation patterns could form the building blocks used for fast and flexible motor control. Supporting this idea, recent experiments have shown that monkeys can learn to adapt their neural activity in motor cortex on a timescale of minutes, given that the change lies within the original low-dimensional subspace, also called neural manifold. However, the neural mechanism underlying this within-manifold adaptation remains unknown. Here, we show in a computational model that modification of recurrent weights, driven by a learned feedback signal, can account for the observed behavioural difference between within- and outside-manifold learning. Our findings give a new perspective, showing that recurrent weight changes do not necessarily lead to change in the neural manifold. On the contrary, successful learning is naturally constrained to a common subspace.