Open Access Posted Content

Extreme sparsity gives rise to functional specialization.

TLDR
In this paper, the authors show that enforcing structural modularity via sparse connectivity between two dense sub-networks leads to functional specialization of the sub-networks, but only at extreme levels of sparsity.
Abstract
Modularity of neural networks -- both biological and artificial -- can be thought of either structurally or functionally, and the relationship between these is an open question. We show that enforcing structural modularity via sparse connectivity between two dense sub-networks which need to communicate to solve the task leads to functional specialization of the sub-networks, but only at extreme levels of sparsity. With even a moderate number of interconnections, the sub-networks become functionally entangled. Defining functional specialization is in itself a challenging problem without a universally agreed solution. To address this, we designed three different measures of specialization (based on weight masks, retraining and correlation) and found them to qualitatively agree. Our results have implications in both neuroscience and machine learning. For neuroscience, they show that we cannot conclude that there is functional modularity simply by observing moderate levels of structural modularity: knowing the brain's connectome is not sufficient for understanding how it breaks down into functional modules. For machine learning, using structure to promote functional modularity -- which may be important for robustness and generalization -- may require extremely narrow bottlenecks between modules.
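The architecture the abstract describes -- two dense sub-networks coupled by only a handful of interconnections -- can be sketched as a block-structured weight matrix. The sketch below is illustrative only (the sizes `n` and `k` and the initialization are assumptions, not the paper's settings); the point is that the off-diagonal blocks carry just `k` nonzero weights each, which is the "extreme sparsity" bottleneck.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 32  # units per dense sub-network (illustrative size, not from the paper)
k = 2   # interconnections per direction -- "extreme sparsity" means k is tiny

# Dense weights within each sub-network.
W1 = rng.normal(0, 1 / np.sqrt(n), (n, n))
W2 = rng.normal(0, 1 / np.sqrt(n), (n, n))

def sparse_block(k):
    """An n x n inter-block with only k nonzero cross-connections."""
    W = np.zeros((n, n))
    rows = rng.choice(n, size=k, replace=False)
    cols = rng.choice(n, size=k, replace=False)
    W[rows, cols] = rng.normal(0.0, 1.0, k)
    return W

W12 = sparse_block(k)  # sub-network 2 -> sub-network 1
W21 = sparse_block(k)  # sub-network 1 -> sub-network 2

# Full matrix: dense diagonal blocks, sparse off-diagonal blocks.
W = np.block([[W1, W12], [W21, W2]])

off_diag = np.count_nonzero(W12) + np.count_nonzero(W21)
print(off_diag)  # 2k nonzero interconnections out of 2*n*n possible
```

Varying `k` from a few connections up to a moderate fraction of `n*n` is the knob the paper's claim is about: specialization survives only at the low end.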


References
Proceedings Article

Adam: A Method for Stochastic Optimization

TL;DR: This work introduces Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments, and provides a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework.
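The "adaptive estimates of lower-order moments" in the Adam TLDR are a running mean of the gradient (first moment) and of its square (second moment), each with a bias correction for their zero initialization. A minimal single-parameter sketch (learning rate and test function are illustrative choices, not from the paper):

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update: moment estimates plus bias correction."""
    m = b1 * m + (1 - b1) * grad       # first moment (mean of gradients)
    v = b2 * v + (1 - b2) * grad**2    # second moment (uncentered variance)
    m_hat = m / (1 - b1**t)            # correct bias from zero initialization
    v_hat = v / (1 - b2**t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# Minimize f(x) = x^2, whose gradient is 2x.
theta = np.array([5.0])
m = np.zeros(1)
v = np.zeros(1)
for t in range(1, 2001):
    theta, m, v = adam_step(theta, 2 * theta, m, v, t, lr=0.1)
print(float(theta[0]))  # close to the minimum at 0
```

Note that the per-step displacement is roughly bounded by `lr` because `m_hat / sqrt(v_hat)` is approximately unit-scale, which is one of Adam's selling points.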
Journal ArticleDOI

Modularity and community structure in networks

TL;DR: In this article, the modularity of a network is expressed in terms of the eigenvectors of a characteristic matrix for the network, which is then used for community detection.
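The eigenvector method this TLDR refers to works on the modularity matrix B = A - k kᵀ / 2m (A the adjacency matrix, k the degree vector, m the edge count): the signs of the leading eigenvector of B split the network into two communities. A small sketch on an assumed toy graph (two triangles joined by one edge):

```python
import numpy as np

# Two triangles {0,1,2} and {3,4,5} joined by the single edge 2-3.
A = np.array([
    [0, 1, 1, 0, 0, 0],
    [1, 0, 1, 0, 0, 0],
    [1, 1, 0, 1, 0, 0],
    [0, 0, 1, 0, 1, 1],
    [0, 0, 0, 1, 0, 1],
    [0, 0, 0, 1, 1, 0],
], dtype=float)

k = A.sum(axis=1)                 # node degrees
two_m = k.sum()                   # 2m = twice the number of edges
B = A - np.outer(k, k) / two_m    # modularity matrix

eigvals, eigvecs = np.linalg.eigh(B)
leading = eigvecs[:, np.argmax(eigvals)]
labels = (leading > 0).astype(int)  # community = sign of leading eigenvector
print(labels)
```

For this graph the sign split recovers the two triangles as the two communities.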
Journal ArticleDOI

The Human Connectome: A Structural Description of the Human Brain

TL;DR: A research strategy to achieve the connection matrix of the human brain (the human “connectome”) is proposed, and its potential impact is discussed.
Journal ArticleDOI

The columnar organization of the neocortex.

V. B. Mountcastle, 01 Apr 1997
TL;DR: The modular organization of nervous systems is a widely documented principle of design for both vertebrate and invertebrate brains of which the columnar organization of the neocortex is an example.
Proceedings Article

The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks.

TL;DR: This work finds that dense, randomly-initialized, feed-forward networks contain subnetworks ("winning tickets") that - when trained in isolation - reach test accuracy comparable to the original network in a similar number of iterations, and articulate the "lottery ticket hypothesis".
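The winning-ticket recipe behind this TLDR is: record the initialization, train, prune the smallest-magnitude trained weights, then rewind the surviving weights to their initial values and retrain the sparse sub-network. A toy sketch of the pruning-and-rewind step (the matrix size, prune fraction, and the stand-in for training are all illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

W_init = rng.normal(0, 1, (8, 8))                 # 1) record initialization
W_trained = W_init + rng.normal(0, 0.1, (8, 8))   # 2) stand-in for training

prune_frac = 0.8
threshold = np.quantile(np.abs(W_trained), prune_frac)
mask = (np.abs(W_trained) >= threshold).astype(float)  # 3) keep top 20%

W_ticket = mask * W_init    # 4) rewind survivors to their initial values
print(mask.mean())          # fraction of weights kept, about 0.2
```

The "ticket" is the pair (mask, W_init): the claim is that retraining this sparse sub-network from its original initialization matches the dense network's accuracy.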