Open Access · Posted Content

Deep neural networks as nested dynamical systems.

TL;DR
In this article, the authors make the case that the analogy between deep neural networks and actual brains is structurally flawed: because the "neurons" in deep neural networks manage the changing weights, they are more akin to synapses, while it is the wires that are more like nerve cells, in that they are what cause information to flow.
Abstract
There is an analogy that is often made between deep neural networks and actual brains, suggested by the nomenclature itself: the "neurons" in deep neural networks should correspond to neurons (or nerve cells, to avoid confusion) in the brain. We claim, however, that this analogy doesn't even type check: it is structurally flawed. In agreement with the slightly glib summary of Hebbian learning as "cells that fire together wire together", this article makes the case that the analogy should be different. Since the "neurons" in deep neural networks are managing the changing weights, they are more akin to the synapses in the brain; instead, it is the wires in deep neural networks that are more like nerve cells, in that they are what cause the information to flow. An intuition that nerve cells seem like more than mere wires is exactly right, and is justified by a precise category-theoretic analogy which we will explore in this article. Throughout, we will continue to highlight the error in equating artificial neurons with nerve cells by leaving "neuron" in quotes or by calling them artificial neurons. We will first explain how to view deep neural networks as nested dynamical systems with a very restricted sort of interaction pattern, and then explain a more general sort of interaction for dynamical systems that is useful throughout engineering, but which fails to adapt to changing circumstances. As mentioned, an analogy is then forced upon us by the mathematical formalism in which they are both embedded. We call the resulting encompassing generalization deeply interacting learning systems: they have complex interaction as in control theory, but adaptation to circumstances as in deep neural networks.
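To make the re-drawn analogy concrete, here is a minimal sketch (our illustration, not the paper's category-theoretic formalism) of a single artificial "neuron" viewed as a discrete dynamical system: its internal state is the weight vector, which a learning rule updates over time, while the wires merely carry activations in and out. All class and function names below are illustrative assumptions.

```python
import numpy as np

class NeuronAsDynamicalSystem:
    """One artificial "neuron" whose *state* is its adapting weight vector."""

    def __init__(self, n_inputs: int, lr: float = 0.01):
        self.w = np.zeros(n_inputs)  # internal state: the adapting weights
        self.lr = lr

    def readout(self, x: np.ndarray) -> float:
        # What travels along the outgoing wire: an activation value.
        return float(np.tanh(self.w @ x))

    def update(self, x: np.ndarray, grad_out: float) -> np.ndarray:
        # State transition: the weights change in response to feedback, which
        # is why the "neuron" behaves like an adapting synapse, not a wire.
        y = np.tanh(self.w @ x)
        local = grad_out * (1.0 - y ** 2)  # derivative of tanh
        grad_in = local * self.w           # feedback to pass upstream
        self.w -= self.lr * local * x      # gradient step on the internal state
        return grad_in
```

On this reading, a forward pass is the wires doing their job (repeated calls to readout), and a training step is the nested dynamical systems adapting their states (calls to update).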


References
Journal Article

Never-ending learning

TL;DR: The Never-Ending Language Learner (NELL) as discussed by the authors is a case study of a machine learning system that has been learning to read the Web 24 hours a day since January 2010, and so far has acquired a knowledge base with 120 million diverse, confidence-weighted beliefs (e.g., servedWith(tea, biscuits)), while learning thousands of interrelated functions that continually improve its reading competence over time.
Journal Article

Algebras of open dynamical systems on the operad of wiring diagrams

TL;DR: In this paper, the syntactic architecture of open dynamical systems is encoded using the visual language of wiring diagrams, and the algebraic nature of assembling complex dynamical systems from an interconnection of simpler ones is studied.
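As a rough illustration of the idea in this TL;DR, the sketch below models an open discrete dynamical system as a state together with an update and a readout, and implements the simplest possible wiring diagram: feeding one system's output into another's input. The types and names are our own simplification, not the paper's operadic formalism.

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class OpenSystem:
    state: Any
    update: Callable[[Any, Any], Any]  # (state, input) -> new state
    readout: Callable[[Any], Any]      # state -> output

def wire_in_series(f: OpenSystem, g: OpenSystem) -> OpenSystem:
    """The simplest wiring diagram: f's output feeds g's input."""
    def update(state, x):
        sf, sg = state
        sf2 = f.update(sf, x)
        sg2 = g.update(sg, f.readout(sf2))
        return (sf2, sg2)
    return OpenSystem(
        state=(f.state, g.state),
        update=update,
        readout=lambda state: g.readout(state[1]),
    )

# Example: a running-sum system wired into a leaky accumulator.
acc = OpenSystem(0, lambda s, x: s + x, lambda s: s)
leaky = OpenSystem(0.0, lambda s, x: 0.9 * s + x, lambda s: s)
combo = wire_in_series(acc, leaky)
s = combo.state
for x in [1, 2, 3]:
    s = combo.update(s, x)
print(combo.readout(s))  # the composite is again an open dynamical system
```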
Posted Content

Categorical Foundations of Gradient-Based Learning

TL;DR: In this article, a categorical semantics of gradient-based machine learning algorithms in terms of lenses, parametrised maps, and reverse derivative categories is proposed, which encompasses a variety of gradient descent algorithms such as ADAM, AdaGrad, and Nesterov momentum, shedding new light on their similarities and differences.
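For intuition about the lens-based semantics this TL;DR describes, here is a hedged sketch in which a "lens" bundles a forward map with a backward pass, so that composing layers composes both passes at once. The types are deliberately simplified and are not the paper's exact categorical definitions.

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class Lens:
    forward: Callable[[Any], Any]        # input -> output
    backward: Callable[[Any, Any], Any]  # (input, output gradient) -> input gradient

def compose(l1: Lens, l2: Lens) -> Lens:
    """Lens composition: forwards chain left-to-right, backwards run in reverse."""
    def fwd(x):
        return l2.forward(l1.forward(x))
    def bwd(x, dy):
        mid = l1.forward(x)  # recompute the intermediate value
        return l1.backward(x, l2.backward(mid, dy))
    return Lens(fwd, bwd)

# Example: squaring composed with doubling; gradients chain as expected.
square = Lens(lambda x: x * x, lambda x, dy: 2 * x * dy)
double = Lens(lambda x: 2 * x, lambda x, dy: 2 * dy)
both = compose(square, double)
assert both.forward(3) == 18
assert both.backward(3, 1.0) == 12.0  # d/dx of 2x^2 at x = 3
```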
Trending Questions (1)
How does a neural network compare to a brain's neurons?

The paper argues that the analogy between neurons in deep neural networks and neurons in the brain is flawed. Instead, the "neurons" in deep neural networks are more like synapses, while the wires in deep neural networks are more like nerve cells.