Open Access Posted Content

Ghost Units Yield Biologically Plausible Backprop in Deep Neural Networks

TL;DR
In this paper, the authors introduce a model that relies on a new role for neuronal inhibitory machinery, referred to as ghost units, which enables the network to backpropagate errors and perform efficient credit assignment in deep structures.
Abstract
In the past few years, deep learning has transformed artificial intelligence research and led to impressive performance on various difficult tasks. However, it is still unclear how the brain can perform credit assignment across many areas as efficiently as backpropagation does in deep neural networks. In this paper, we introduce a model that relies on a new role for neuronal inhibitory machinery, referred to as ghost units. By cancelling the feedback coming from the upper layer when no target signal is provided to the top layer, the ghost units enable the network to backpropagate errors and perform efficient credit assignment in deep structures. While using only one-compartment neurons and requiring very few biological assumptions, the model is able to approximate the error gradient and achieve good performance on classification tasks. Error backpropagation occurs through the recurrent dynamics of the network and relies on biologically plausible local learning rules. In particular, the model does not require separate feedforward and feedback circuits. Different mechanisms for cancelling the feedback were studied, ranging from complete duplication of the connectivity by long-term processes to online replication of the feedback activity. This reduced system combines the essential elements of a working, biologically abstracted analogue of backpropagation, with a simple formulation and proofs of the associated results. This model is therefore a step towards understanding how learning and memory are implemented in cortical multilayer structures, and it also raises interesting perspectives for neuromorphic hardware.
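A minimal sketch (not the authors' implementation) of the cancellation idea described in the abstract: an inhibitory "ghost" population learns, during a free phase with no target, to reproduce the top-down feedback driven by the network's own prediction, so that when a target is later supplied the uncancelled residual feedback approximates the error signal and drives local weight updates. The layer sizes, sigmoid nonlinearity, fixed random feedback weights, static two-phase update, and learning rates are illustrative assumptions; the paper itself works with recurrent network dynamics.

```python
# Hypothetical sketch of feedback cancellation by ghost units (not the paper's code).
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_out = 4, 8, 2

W1 = rng.normal(0, 0.5, (n_hid, n_in))    # feedforward input -> hidden
W2 = rng.normal(0, 0.5, (n_out, n_hid))   # feedforward hidden -> output
B  = rng.normal(0, 0.5, (n_hid, n_out))   # fixed top-down feedback output -> hidden (assumption)
G  = np.zeros((n_hid, n_out))             # ghost (inhibitory) weights, learned to cancel B

def f(x):  return 1.0 / (1.0 + np.exp(-x))
def df(x): return f(x) * (1.0 - f(x))

def step(x, y_target=None, lr=0.05):
    h_pre = W1 @ x
    h = f(h_pre)
    y = f(W2 @ h)

    # Free phase: ghost units learn to match the feedback driven by the network's
    # own prediction, so excitation B @ y and inhibition G @ y cancel each other.
    G_update = np.outer((B - G) @ y, y)

    if y_target is None:
        return y, G_update, None, None

    # Nudged phase: the output is pushed toward the target; the ghost units still
    # cancel the *predicted* feedback, so the residual carries only the error.
    residual = B @ y_target - G @ y          # ~ B @ (y_target - y) once G has learned B
    delta_h = residual * df(h_pre)           # local error signal at the hidden layer
    W1_update = lr * np.outer(delta_h, x)    # local, Hebbian-like weight change
    W2_update = lr * np.outer(y_target - y, h)
    return y, G_update, W1_update, W2_update

x = rng.normal(size=n_in)
t = np.array([1.0, 0.0])
for _ in range(200):                         # let the ghost units learn to cancel feedback
    _, dG, _, _ = step(x)
    G += 0.1 * dG
y, dG, dW1, dW2 = step(x, y_target=t)
W1 += dW1; W2 += dW2
```

In this toy version the feedback weights B are random and fixed, so the residual only approximates the true gradient (in the spirit of feedback alignment); the paper additionally studies mechanisms that replicate or duplicate the feedforward connectivity in the feedback path.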


Citations
Proceedings Article

Attention-Gated Brain Propagation: How the brain can implement reward-based error backpropagation

TL;DR: Attention-Gated Brain Propagation is a biologically plausible reinforcement learning scheme for deep networks with an arbitrary number of layers; it achieves accuracy equivalent to that of standard error backpropagation and better than state-of-the-art biologically inspired learning schemes.
Journal Article

Activation Relaxation: A Local Dynamical Approximation to Backpropagation in the Brain

TL;DR: A novel algorithm, Activation Relaxation (AR), is proposed, motivated by constructing the backpropagation gradient as the equilibrium point of a dynamical system. It converges rapidly and robustly to the correct backpropagation gradients, requires only a single type of computational unit, utilises only a single parallel backwards relaxation phase, and can operate on arbitrary computation graphs.
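A minimal numerical illustration (not the cited paper's exact algorithm) of the idea of recovering backpropagation's error vectors as the equilibrium of a dynamical system: auxiliary variables are relaxed under leaky dynamics whose drive is the locally available feedback term, with the top-layer variable clamped to the output error. The network sizes, tanh nonlinearity, squared-error loss, and step size are illustrative assumptions.

```python
# Hypothetical sketch: backprop error vectors as fixed points of a relaxation process.
import numpy as np

rng = np.random.default_rng(1)
sizes = [3, 5, 4, 2]
Ws = [rng.normal(0, 0.5, (sizes[i + 1], sizes[i])) for i in range(len(sizes) - 1)]

x0 = rng.normal(size=sizes[0])
target = rng.normal(size=sizes[-1])

# Forward pass: h_l are pre-activations, a_l = tanh(h_l).
a, hs = [x0], []
for W in Ws:
    h = W @ a[-1]
    hs.append(h)
    a.append(np.tanh(h))

# Exact backprop error vectors delta_l = dL/dh_l for L = 0.5 * ||a_L - target||^2.
deltas = [None] * len(Ws)
deltas[-1] = (a[-1] - target) * (1 - np.tanh(hs[-1]) ** 2)
for l in range(len(Ws) - 2, -1, -1):
    deltas[l] = (Ws[l + 1].T @ deltas[l + 1]) * (1 - np.tanh(hs[l]) ** 2)

# Relaxation: auxiliary variables x_l converge to delta_l under local leaky dynamics.
xs = [np.zeros(s) for s in sizes[1:]]
xs[-1] = deltas[-1].copy()                     # top layer clamped to the output error
eta = 0.2
for _ in range(300):
    for l in range(len(xs) - 2, -1, -1):
        drive = (Ws[l + 1].T @ xs[l + 1]) * (1 - np.tanh(hs[l]) ** 2)
        xs[l] += eta * (-xs[l] + drive)

print(max(np.max(np.abs(xs[l] - deltas[l])) for l in range(len(xs))))  # ~0
```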
Book Chapter

Tiefes Lernen kann komplexe Zusammenhänge erfassen (Deep learning can capture complex relationships)

Gerhard Paaß et al.
TL;DR: This chapter describes the properties of such deep neural networks and shows how the optimal parameters can be found using the backpropagation method.
References
Proceedings Article

ImageNet Classification with Deep Convolutional Neural Networks

TL;DR: A deep convolutional neural network consisting of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax achieves state-of-the-art image classification performance on ImageNet.
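A schematic of the architecture shape described in this reference (five convolutional layers, some followed by max-pooling, then three fully-connected layers ending in a 1000-way softmax). The exact channel counts, kernel sizes, strides, and the assumed 227×227 RGB input are illustrative, AlexNet-like guesses rather than the published configuration.

```python
# Hypothetical sketch of a five-conv / three-FC classifier for 227x227 RGB inputs.
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 96, kernel_size=11, stride=4), nn.ReLU(), nn.MaxPool2d(3, stride=2),
    nn.Conv2d(96, 256, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool2d(3, stride=2),
    nn.Conv2d(256, 384, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(384, 384, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(384, 256, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(3, stride=2),
    nn.Flatten(),
    nn.Linear(256 * 6 * 6, 4096), nn.ReLU(),
    nn.Linear(4096, 4096), nn.ReLU(),
    nn.Linear(4096, 1000),   # logits; a softmax over the 1000 classes follows at the loss
)
```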
Journal Article

Gradient-based learning applied to document recognition

TL;DR: A graph transformer network (GTN) is proposed for handwritten character recognition; gradient-based learning is used to synthesize a complex decision surface that can classify high-dimensional patterns such as handwritten characters.
Book

Deep Learning

TL;DR: Deep learning is a form of machine learning that enables computers to learn from experience and understand the world in terms of a hierarchy of concepts; it is used in many applications such as natural language processing, speech recognition, computer vision, online recommendation systems, bioinformatics, and video games.
Proceedings Article

Neural Machine Translation by Jointly Learning to Align and Translate

TL;DR: It is conjectured that the use of a fixed-length vector is a bottleneck in improving the performance of the basic encoder-decoder architecture, and it is proposed to extend it by allowing the model to automatically (soft-)search for parts of a source sentence that are relevant to predicting a target word, without having to form these parts as a hard segment explicitly.
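A minimal sketch (not the cited paper's full model) of the soft-search idea: for one decoding step, every source annotation is scored against the current decoder state, the scores are turned into a softmax distribution, and the context vector is the resulting weighted sum. The additive scoring form and the dimensions used here are illustrative assumptions.

```python
# Hypothetical sketch of soft alignment over source annotations for one target position.
import numpy as np

rng = np.random.default_rng(2)
src_len, d_h, d_s, d_a = 6, 8, 8, 10

H = rng.normal(size=(src_len, d_h))   # source annotations (encoder states), one per word
s = rng.normal(size=d_s)              # current decoder state

Wa = rng.normal(size=(d_a, d_s))      # attention parameters (illustrative shapes)
Ua = rng.normal(size=(d_a, d_h))
va = rng.normal(size=d_a)

scores = np.array([va @ np.tanh(Wa @ s + Ua @ h) for h in H])   # one score per source word
alphas = np.exp(scores - scores.max())
alphas /= alphas.sum()                                          # soft alignment weights
context = alphas @ H                                            # weighted sum of annotations
```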