
A compiler and simulator for
partial recursive functions over neural networks
João Neto¹, José Félix Costa², Paulo Carreira³, Miguel Rosa⁴

¹ Departamento de Informática, Faculdade de Ciências da Universidade de Lisboa, C5 Piso 1, 1700 Lisboa, PORTUGAL. jpn@di.fc.ul.pt
² Departamento de Matemática, Instituto Superior Técnico, Av. Rovisco Pais, 1049-001 Lisboa, PORTUGAL. fgc@math.ist.utl.pt
³ OBLOG Software S.A., Alameda António Sérgio 7-1A, 2795-023 Linda-a-Velha, PORTUGAL. pcarreira@oblog.pt
⁴ Fundação para a Computação Científica Nacional, Av. do Brasil, 101, 1700-066 Lisboa, PORTUGAL. mar@fccn.pt
Abstract. In [6] and [8] it was shown that Artificial Recurrent Neural Networks have the same computing power as Turing machines. A Turing machine can be programmed in a proper high-level language: the language of partial recursive functions. In this paper we present the implementation of a compiler that directly translates high-level Turing machine programs into Artificial Recurrent Neural Networks. The application contains a simulator that can be used to test the resulting networks. We also argue that these experiments provide clues for developing procedures for the automatic synthesis of Neural Networks from high-level descriptions.
1. Introduction
The field of Artificial Recurrent Neural Networks (ARNNs) is meeting a lot of excitement nowadays, both because of their achievements in solving real-world problems and because of the simplicity of the underlying principles that still allow them to mimic their biological counterparts. All this excitement attracts people from many different fields, such as Neurophysiology and Computer Science.
We introduce our subject from a Computer Science perspective. The view we are interested in is the one in which an ARNN can be seen as a computing mechanism able to perform some kind of computation based on a program coded as a specific arrangement of neural artifacts, like neurons and synapses. This work implements a compiler and a simulator based on the earlier paper Turing Universality of Neural Nets (Revisited) [4]. In [3], [7] and [5] similar ideas are presented, but they are based on higher-level languages.
We start by giving the underlying theoretical context on which this work is based. In section 2 we give a brief review of the concept of partial recursive function. In section 3 we present our approach for constructing neural networks from partial recursive functions. The explanation of how we adapted the design of [4] into a compiler is given in section 4. Section 5 refers to the simulator and usage examples, and section 6 concludes this paper. The simulator is freely available at http://www.di.fc.ul.pt/~jpn/netdef/nwb.html.
2. Partial Recursive Function theory
When speaking informally of a neural computer, one may wonder what the language to program such a machine would look like. The language that we will use is that of partial recursive functions (PRF). Although primitive when compared to modern computer languages, it is simple and yet powerful enough to program any mechanism with the same computing power as a Turing machine. Surely, building complex programs in this language would be very difficult, and more appropriate languages exist. For our purposes, however, this language is well suited.
The PRF theory identifies the set of computable functions with the set of partial recursive functions. We shall use a(x_1, …, x_n) ≃ b(x_1, …, x_n) to denote equality of the expressions a(x_1, …, x_n) and b(x_1, …, x_n): either both a(x_1, …, x_n) and b(x_1, …, x_n) are defined and equal for all (x_1, …, x_n), or both are undefined.
The axioms, also called primitive functions, are:

W, denoting the zero-ary constant 0;

S, denoting the unary successor function S(x) = x + 1;

U(i,n), which for fixed i and n, 1 ≤ i ≤ n, denotes the projection function U_{i,n}(x_1, …, x_n) = x_i.
The construction rules are:

C, denoting composition. If f_1, …, f_k are n-ary PRFs and g is a k-ary PRF, then the function h defined by composition, h(x_1, …, x_n) ≃ g(f_1(x_1, …, x_n), …, f_k(x_1, …, x_n)), is a PRF;

R, denoting recursion. If f is an n-ary PRF and g is an (n+2)-ary PRF, then the unique (n+1)-ary function h, defined by
1) h(x_1, …, x_n, 0) ≃ f(x_1, …, x_n) and
2) h(x_1, …, x_n, y+1) ≃ g(x_1, …, x_n, y, h(x_1, …, x_n, y)),
is a PRF;

M, denoting minimalisation. If f is an (n+1)-ary PRF, then h(x_1, …, x_n) ≃ μ_y(f(x_1, …, x_n, y) = 0) is also a PRF, where μ_y(f(x_1, …, x_n, y) = 0) is the least y such that f(x_1, …, x_n, y) = 0 and f(x_1, …, x_n, z) is defined for all z ≤ y, and is undefined otherwise.
For instance, f(x,y) = x + 1 is a PRF and is described by the expression C(U(1,2),S). The function f(x,y) = x + y is also a PRF, described by the expression R(U(1,1),C(U(3,3),S)). In fact, it can be shown that every Turing computable function is a PRF. More details on PRF theory can be found in [1] or [2].
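To make these constructors concrete, the following is a minimal interpreter sketch for PRF expression trees. It is our own Python illustration, not part of the compiler presented in this paper; expressions use W, S, U, C, R and M exactly as defined above, and since M is partial its search may diverge.

```python
# Minimal PRF interpreter sketch (illustrative only).
# Expressions: "W" | "S" | ("U", i, n) | ("C", f1, ..., fk, g)
#            | ("R", f, g) | ("M", f)

def ev(e, args):
    if e == "W":                         # zero-ary constant 0
        return 0
    if e == "S":                         # successor S(x) = x + 1
        return args[0] + 1
    op = e[0]
    if op == "U":                        # projection U(i,n)(x1..xn) = xi
        _, i, n = e
        return args[i - 1]
    if op == "C":                        # composition h = g(f1(..), ..., fk(..))
        *fs, g = e[1:]
        return ev(g, [ev(f, args) for f in fs])
    if op == "R":                        # recursion on the last argument
        _, f, g = e
        *xs, y = args
        acc = ev(f, xs)                  # h(x.., 0) = f(x..)
        for i in range(y):               # h(x.., i+1) = g(x.., i, h(x.., i))
            acc = ev(g, xs + [i, acc])
        return acc
    if op == "M":                        # minimalisation: least y with f(.., y) = 0
        _, f = e
        y = 0
        while ev(f, args + [y]) != 0:    # may diverge, as M is partial
            y += 1
        return y

# The worked examples from the text:
PLUS1 = ("C", ("U", 1, 2), "S")                       # f(x, y) = x + 1
ADD   = ("R", ("U", 1, 1), ("C", ("U", 3, 3), "S"))   # f(x, y) = x + y
assert ev(PLUS1, [3, 9]) == 4
assert ev(ADD, [3, 4]) == 7
```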

3. Coding PRF into ARNNs
Finding a systematic way of generating ARNNs from given descriptions of PRFs greatly simplifies the task of producing neural nets to perform specific tasks. Furthermore, it also gives a proof that neural nets can effectively compute all Turing computable functions, as treated in [4]. In this section we briefly describe the computing rule of each processing element, i.e., each neuron. We then present the coding strategy of natural numbers used to load the network. Finally, we will see how to code a PRF into an ARNN.
3.1 How do the processing elements work?
Like in [4], we make use of σ-processors. At each instant t, each neuron j updates its activity x_j in the following non-linear way:

  x_j(t+1) = σ( Σ_{i=1}^{N} a_{ji} x_i(t) + Σ_{k=1}^{M} b_{jk} u_k(t) + c_j )

where a_{ji}, b_{jk} and c_j are rational weights; N is the number of neurons, M the number of input streams u_k; and σ is the continuous function defined below:

  σ(x) = 1 if x ≥ 1;  σ(x) = x if 0 < x < 1;  σ(x) = 0 if x ≤ 0.
3.2 How can we represent the numbers?
We use a unary representation where each natural number n is represented as a rational number by means of a mapping α where, for each n, α(n) is given by

  α(n) = Σ_{i=0}^{n} 10^(-1-i)

so that, for example, α(0) = 0.1 and α(2) = 0.111.
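A sketch of this coding follows, together with a telescoping identity that is immediate from the sum, α(n+1) = α(n)/10 + 1/10, and is consistent with the successor schema of fig. 1 (ii) producing a new coded value from the previous one.

```python
from fractions import Fraction

def alpha(n):
    """Unary coding of section 3.2: alpha(n) = sum_{i=0}^{n} 10^(-1-i),
    e.g. alpha(0) = 1/10 (0.1) and alpha(2) = 111/1000 (0.111)."""
    return sum(Fraction(1, 10 ** (1 + i)) for i in range(n + 1))

# Telescoping identity behind the successor schema:
assert all(alpha(n + 1) == alpha(n) / 10 + Fraction(1, 10) for n in range(20))
```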
3.3 How to construct the ARNN?
The following three net schemata were implemented to compute the corresponding three axioms of recursive function theory. Changes were made with respect to [4]. First, the W axiom is provided with two additional neurons. Second, each U(i,n) axiom is constructed with only five neurons, making it more efficient.
fig. 1 – Axioms: (i) W; (ii) S; (iii) U(i,n).

The rules are illustrated by the net schemata of figures 2, 3 and 4, where grey-coloured circles represent repeated neurons. The Sg box represents a subnet that determines whether a given number is positive or zero (fig. 5):
fig. 2 – Composition.
fig. 3 – Recursion.
fig. 4 – Minimalisation.
fig. 5 – The signal (Sg) network.
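Behaviourally, and under the coding of section 3.2, the Sg subnet of fig. 5 only has to separate the code of zero, α(0) = 0.1, from the codes of positive numbers, which are at least 0.11. The following functional sketch captures that decision only; it is our own illustration and ignores the subnet's internal wiring and timing.

```python
from fractions import Fraction

def sg(coded):
    """Decide whether a coded natural is zero or positive.
    alpha(0) = 1/10; every positive number codes to at least 11/100."""
    return "zero" if coded == Fraction(1, 10) else "positive"
```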
For each PRF expression an ARNN is generated using the structure of the expression. The construction of the ARNNs is made in a top-down fashion, beginning with the outermost ARNN and then continuously instantiating ARNNs until reaching the axioms.
For an expression R(U(1,1),C(U(3,3),S)) we would first build the network for the recursion, then instantiate it with the projection axiom network and with the composition rule network, which in turn would accommodate the successor and projection axiom networks. This instantiation mechanism for the network schemata consists of replacing the boxes by compatible network schemata. A box is said to be compatible with a network schema if the number of inputs (respectively outputs) of the box is the same as the number of inputs (respectively outputs) of the network schema. The substitution of a box by a network schema consists of connecting the box inputs (respectively outputs) to the network inputs (respectively outputs).
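This recursive, top-down procedure can be sketched directly. The Net class below is a hypothetical, much-simplified stand-in for the compiler's actual schema library, and the compatibility check on box input/output counts is elided.

```python
from dataclasses import dataclass, field

@dataclass
class Net:
    """Hypothetical stand-in for an instantiated network schema: it records
    which schema it uses and the nets substituted into its boxes."""
    schema: object                              # "W", "S", ("U",i,n), "C", "R" or "M"
    inner: list = field(default_factory=list)   # nets wired into the schema's boxes

def build(e):
    """Top-down instantiation: build the outermost schema first, then
    recursively replace each box by the net of the corresponding
    subexpression, until the axioms are reached."""
    if e in ("W", "S") or e[0] == "U":
        return Net(e)                           # axiom schemata of fig. 1
    rule, *subs = e                             # rule schema of fig. 2, 3 or 4
    return Net(rule, [build(sub) for sub in subs])

# build(("R", ("U", 1, 1), ("C", ("U", 3, 3), "S"))) yields the recursion
# schema holding a projection net and a composition net, the latter holding
# a projection net and a successor net.
```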
4. Putting the pieces together and building the compiler
The tool we want to present to the user should be capable of handling more complex functions than the simple examples used in the previous sections. Specifying a more complex function implies writing a more complex expression. This motivates the use of a modular and yet simple language to increase readability.
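As a purely illustrative example of the gain in readability (reusing the ev interpreter sketched after section 2, not the tool's actual input language), named definitions let larger functions be built from smaller ones:

```python
# Named PRF definitions (illustrative; reuses ev from the earlier sketch).
ZERO = ("R", "W", ("U", 2, 2))                              # Z(x) = 0
ADD  = ("R", ("U", 1, 1), ("C", ("U", 3, 3), "S"))          # add(x, y) = x + y
MULT = ("R", ZERO, ("C", ("U", 1, 3), ("U", 3, 3), ADD))    # mult(x, y) = x * y
assert ev(MULT, [3, 4]) == 12
```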
References

Cutland, N. Computability: An Introduction to Recursive Function Theory.
Gruau, F., Ratajszczak, J.-Y., and Wiber, G. A Neural Compiler.
Siegelmann, H. T., and Sontag, E. D. On the Computational Power of Neural Nets.
Siegelmann, H. T. Neural Networks and Analog Computation: Beyond the Turing Limit.