
Claudia Clopath

Researcher at Imperial College London

Publications - 166
Citations - 11996

Claudia Clopath is an academic researcher at Imperial College London. She has contributed to research in the topics of computer science and biology, has an h-index of 30, and has co-authored 134 publications receiving 7728 citations. Her previous affiliations include Columbia University and the Royal School of Mines.

Papers

Learning and Expression of Dopaminergic Reward Prediction Error via Plastic Representations of Time

TL;DR: A biophysically plausible, plastic network model of spiking neurons is presented that learns reward prediction errors (RPEs) and can replicate results observed in multiple experiments, allowing the model to reconcile seemingly inconsistent experiments and to make unique predictions that contrast with those of temporal-difference (TD) learning.
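
For reference on the temporal-difference (TD) learning that the model's predictions are contrasted with, here is a minimal tabular TD(0) sketch showing how a reward prediction error is classically computed and used to update value estimates. The chain of states, reward placement, and learning parameters are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Tabular TD(0) on a toy chain of states: reward arrives only in the final state.
# All sizes and rates here are assumed for illustration.
n_states = 10
alpha, gamma = 0.1, 0.95          # learning rate, discount factor
V = np.zeros(n_states)            # value estimate per state

for episode in range(500):
    for s in range(n_states - 1):
        s_next = s + 1
        r = 1.0 if s_next == n_states - 1 else 0.0
        rpe = r + gamma * V[s_next] - V[s]    # reward prediction error (the TD error)
        V[s] += alpha * rpe                   # value update driven by the RPE

print(np.round(V, 2))  # values ramp up toward the rewarded state
```
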
Posted Content

CCN GAC Workshop: Issues with learning in biological recurrent neural networks.

TL;DR: In this paper, the authors review common assumptions about biological learning and the corresponding findings from experimental neuroscience, and contrast them with the efficiency of gradient-based learning in the recurrent neural networks commonly used in artificial intelligence.
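
As context for the gradient-based learning in recurrent networks that the workshop summary refers to, the sketch below trains a small vanilla RNN with backpropagation through time on a toy leaky-integration task. The task, network size, and hyperparameters are assumptions made for illustration and do not come from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
T, H = 20, 8                                  # sequence length, hidden units
Wx = rng.normal(0, 0.5, (H, 1))               # input -> hidden weights
Wh = rng.normal(0, 0.5, (H, H)) / np.sqrt(H)  # recurrent weights
Wy = rng.normal(0, 0.5, (1, H))               # hidden -> output weights
lr = 0.01                                     # learning rate

for step in range(2001):
    x = rng.normal(size=(T, 1))
    # target: leaky integration (exponential moving average) of the input
    target, acc = np.zeros(T), 0.0
    for t in range(T):
        acc = 0.9 * acc + 0.1 * x[t, 0]
        target[t] = acc

    # forward pass through time
    h, y = np.zeros((T + 1, H)), np.zeros(T)
    for t in range(T):
        h[t + 1] = np.tanh(Wx @ x[t] + Wh @ h[t])
        y[t] = (Wy @ h[t + 1])[0]

    # backward pass: backpropagation through time, the non-local credit
    # assignment that is hard to map onto biological circuits
    dWx, dWh, dWy = np.zeros_like(Wx), np.zeros_like(Wh), np.zeros_like(Wy)
    dh_next = np.zeros(H)
    for t in reversed(range(T)):
        dy = 2.0 * (y[t] - target[t]) / T
        dWy += dy * h[t + 1][None, :]
        dh = dy * Wy[0] + dh_next
        dpre = dh * (1.0 - h[t + 1] ** 2)     # tanh derivative
        dWx += np.outer(dpre, x[t])
        dWh += np.outer(dpre, h[t])
        dh_next = Wh.T @ dpre

    Wx -= lr * dWx; Wh -= lr * dWh; Wy -= lr * dWy
    if step % 500 == 0:
        print(step, round(float(np.mean((y - target) ** 2)), 4))  # loss typically shrinks
```
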
Posted Content

Neural manifold under plasticity in a goal driven learning behaviour

TL;DR: It is shown in a computational model that modification of recurrent weights, driven by a learned feedback signal, can account for the observed behavioural difference between within- and outside-manifold learning.
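
As a loose, simplified illustration of recurrent weights being modified by a feedback signal, the sketch below nudges a small rate network's readout toward a target value using an error-gated update routed through the readout weights. It is a stand-in under assumed parameters, not the paper's model of within- and outside-manifold learning.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 50                                         # recurrent units
W = rng.normal(0, 1 / np.sqrt(N), (N, N))      # recurrent weights (plastic)
w_out = rng.normal(0, 1 / np.sqrt(N), N)       # fixed linear readout
x_in = rng.normal(0, 1.0, N)                   # constant external drive
lr, dt, target = 0.01, 0.1, 0.8                # learning rate, time step, target readout

r = np.zeros(N)
for step in range(3000):
    r = r + dt * (-r + np.tanh(W @ r + x_in))  # leaky rate dynamics
    err = target - w_out @ r                   # scalar readout error (the feedback signal)
    # error-gated Hebbian-style update of the recurrent weights
    W += lr * err * np.outer(w_out, r)

print(round(float(w_out @ r), 2))              # readout typically ends up near the target
```
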
Proceedings Article

A Unifying Framework for Neuro-Inspired, Data-Driven Detection of Low-Level Auditory Features

TL;DR: This work proposes an approach that combines neural modelling with machine learning to determine relevant low-level auditory features, and shows that the model can capture a variety of well-studied features and makes it possible to unify concepts from different areas of hearing research.
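
As a rough stand-in for data-driven detection of a low-level auditory feature, the snippet below recovers an onset-like spectro-temporal filter from synthetic responses using ridge regression. The feature shape, data, and regularisation are invented for illustration and are not the method used in the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
n_freq, n_lag, n_samples = 8, 5, 5000
# Ground-truth feature: responds to an energy increase in mid frequencies (an onset)
true_filter = np.zeros((n_freq, n_lag))
true_filter[3:5, -1] = 1.0     # current frame, mid frequencies
true_filter[3:5, -2] = -1.0    # previous frame (onset = now minus before)

X = rng.normal(size=(n_samples, n_freq * n_lag))                 # flattened spectrogram patches
y = X @ true_filter.ravel() + 0.1 * rng.normal(size=n_samples)   # noisy "neural" responses

lam = 1.0                                                        # ridge penalty
w = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)
est_filter = w.reshape(n_freq, n_lag)
print(np.round(est_filter[3:5, -2:], 2))                         # recovers the onset structure
```
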
Posted Content

Learning predictive signals within a local recurrent circuit

TL;DR: In this paper, the authors test whether local circuits alone can generate predictive signals by training a recurrent spiking network using local plasticity rules, and show that synaptic plasticity can shape prediction errors and enable the acquisition and updating of an internal model of sensory input within a recurrent neural network.
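
As a heavily simplified, rate-based illustration of the local-plasticity idea, the sketch below lets a prediction unit learn to anticipate the next sensory input using only presynaptic activity and its own prediction error. The signal, the encoding, and all parameters are assumptions for illustration and are not the paper's spiking model.

```python
import numpy as np

rng = np.random.default_rng(3)
N, lr = 50, 0.005
a = rng.normal(0, 1, N)                 # random encoding of the last input
b = rng.normal(0, 1, N)                 # random encoding of the input before that
w = np.zeros(N)                         # plastic weights onto the prediction unit

def x(t):
    return np.sin(0.1 * t)              # simple periodic "sensory" signal

errors = []
for t in range(2, 20000):
    r = np.tanh(a * x(t - 1) + b * x(t - 2))   # activity holding a short input history
    pred = w @ r                                # prediction of the upcoming input
    err = x(t) - pred                           # prediction error
    w += lr * err * r                           # local rule: pre-activity times post-error
    errors.append(abs(err))

# prediction error typically shrinks as the internal model is acquired
print(round(float(np.mean(errors[:100])), 3), round(float(np.mean(errors[-100:])), 3))
```
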