Open Access Book Chapter

Turing Universality of Neural Nets (Revisited)

TL;DR: It is shown how to use recursive function theory to prove Turing universality of finite analog recurrent neural nets, with a piecewise linear sigmoid function as activation function.
Abstract
We show how to use recursive function theory to prove Turing universality of finite analog recurrent neural nets, with a piecewise linear sigmoid function as activation function. We emphasize the modular construction of nets within nets, a relevant issue from the software engineering point of view.
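The activation named in the abstract is the saturated-linear sigmoid standard in this literature. Below is a minimal sketch of that activation and one update step of an analog recurrent net, assuming the usual first-order dynamics x⁺ = σ(Wx + Uu + b); the weights are illustrative, not taken from the paper.

```python
import numpy as np

def sigma(x):
    """Piecewise linear (saturated-linear) sigmoid: clamp to [0, 1]."""
    return np.clip(x, 0.0, 1.0)

def step(W, U, b, x, u):
    """One update of an analog recurrent net: x' = sigma(W x + U u + b)."""
    return sigma(W @ x + U @ u + b)

# Illustrative 2-neuron net with one input line (weights are arbitrary).
W = np.array([[0.5, -1.0],
              [1.0,  0.0]])
U = np.array([[1.0],
              [0.0]])
b = np.array([0.0, 0.25])
x = np.zeros(2)      # initial state
u = np.array([1.0])  # input at this step
print(step(W, U, b, x, u))  # state after one step
```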



Citations
Proceedings Article

What graph neural networks cannot learn: depth vs width

TL;DR: GNNmp are shown to be Turing universal under sufficient conditions on their depth, width, node attributes, and layer expressiveness, and it is discovered that GNNmp can lose a significant portion of their power when their depth and width are restricted.
Book

Neural networks and analog computation

TL;DR: This book develops the theory of analog recurrent neural networks as a model of computation, showing that nets with rational weights are Turing-equivalent while nets with real weights can compute beyond the Turing limit.
Journal Article

An Attractor-Based Complexity Measurement for Boolean Recurrent Neural Networks

TL;DR: A refined attractor-based complexity measure for Boolean recurrent neural networks is provided, assessing their computational power in terms of the significance of their attractor dynamics.
Journal Article

Symbolic processing in neural networks

TL;DR: It is shown how resource bounds can be used to speed up computations over neural nets, through suitable data-type coding as in conventional programming languages.
Book Chapter

Recurrent Neural Networks and Super-Turing Interactive Computation

TL;DR: The results show that the computational powers of neural nets involved in a classical or in an interactive computational framework follow similar patterns of characterization, and suggest that some intrinsic computational capabilities of the brain might lie beyond the scope of Turing-equivalent models of computation.
References
Journal Article

On the Computational Power of Neural Nets

TL;DR: It is proved that such nets can simulate all Turing machines, and any multi-stack Turing machine in real time; in particular, there is a net of 886 processors that computes a universal partial-recursive function.
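The real-time simulation of multi-stack machines in this reference rests on encoding a binary stack as a rational number in [0, 1]. Below is a minimal sketch of the standard base-4 stack encoding, assuming the saturated-linear activation; the helper names are ours, and exact rational arithmetic stands in for the net's linear operations.

```python
from fractions import Fraction

def sigma(x):
    """Saturated-linear sigmoid on rationals: clamp to [0, 1]."""
    return min(max(x, Fraction(0)), Fraction(1))

def top(q):
    """Read the top bit of the encoded stack: 1 iff q >= 3/4."""
    return sigma(4 * q - 2)

def nonempty(q):
    """1 iff the stack is nonempty (q >= 1/4)."""
    return sigma(4 * q)

def push(q, b):
    """Push bit b: q' = q/4 + (2b + 1)/4."""
    return sigma(q / 4 + Fraction(2 * b + 1, 4))

def pop(q):
    """Pop the top bit: q' = 4q - 2*top(q) - 1."""
    return sigma(4 * q - 2 * top(q) - 1)

# Push 1, then 0; the top now reads 0, and popping recovers the 1.
q = Fraction(0)   # empty stack
q = push(q, 1)    # stack: 1
q = push(q, 0)    # stack: 0 1  (top first)
assert top(q) == 0 and nonempty(q) == 1
q = pop(q)        # stack: 1
assert top(q) == 1
```

Each operation is a single affine map followed by σ, so each fits in one saturated-linear neuron of the construction.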
Book

Computability and Logic

TL;DR: This book covers computability theory and first-order logic, together with further topics such as modal logic and provability.