Author

M.L. Recce

Bio: M.L. Recce is an academic researcher from University College London. The author has contributed to research in topics: Functional reactive programming & Programming language theory. The author has an h-index of 1, having co-authored 1 publication that has received 7 citations.

Papers
Book ChapterDOI
01 Jan 1992
TL;DR: The range of software environments available for programming neural network applications is reviewed, and a taxonomy of three classes is introduced: application-oriented, algorithm-oriented and general programming systems.
Abstract: This paper reviews the range of software environments available for programming neural network applications. We introduce a taxonomy of three classes: application-oriented, algorithm-oriented and general programming systems. Application-oriented systems are designed for specific market domains, such as finance, transportation and medicine. Algorithm-oriented systems support specific neural network models, with two major subclasses: algorithm-specific systems supporting a single neural network model, and algorithm libraries supporting many models. The third class, general programming systems, provides “tool-kits” comprising many algorithms and programming tools that can be used for a wide range of new algorithms and applications. Programming systems can be further subdivided into: general-purpose systems for programming any algorithm or application; open systems, where the user can modify any part of the system; and hardware-oriented systems for programming specific machines, such as parallel computers. We discuss examples of the numerous systems currently available and where they fit into the taxonomy.
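The three-way classification above can be sketched as a small lookup structure. A minimal, hypothetical illustration — the category and subclass names follow the abstract, but the code and example entries are invented:

```python
# Hypothetical sketch of the paper's three-class taxonomy of neural
# network programming environments. Category/subclass names follow the
# abstract; everything else here is invented for illustration.
TAXONOMY = {
    "application-oriented": ["finance", "transportation", "medicine"],
    "algorithm-oriented": ["algorithm-specific", "algorithm libraries"],
    "general programming": ["general-purpose", "open", "hardware-oriented"],
}

def classify(category, subclass):
    """Return a label if (category, subclass) is a valid point in the taxonomy."""
    if subclass in TAXONOMY.get(category, []):
        return f"{category} system ({subclass})"
    raise ValueError("not in the taxonomy")

print(classify("general programming", "open"))
# -> general programming system (open)
```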

7 citations


Cited by
Proceedings Article
01 Jan 1995
TL;DR: This paper describes how graph grammars may be used to grow neural networks; this has interesting applications in the evolution of neural nets, since it is possible to evolve all aspects of a network within a single unified paradigm.
Abstract: This paper describes how graph grammars may be used to grow neural networks. The grammar facilitates a very compact and declarative description of every aspect of a neural architecture; this is important from a software/neural engineering point of view, since the descriptions are much easier to write and maintain than programs written in a high-level language, such as C++, and do not require programming ability. The output of the growth process is a neural network that can be transformed into a PostScript representation for display purposes, simulated using a separate neural network simulation program, or, in some cases, mapped directly into hardware. In this approach, there is no separate learning algorithm; learning proceeds (if at all) as an intrinsic part of the network behaviour. This has interesting applications in the evolution of neural nets, since it is now possible to evolve all aspects of a network (including the learning algorithm) within a single unified paradigm. As an example, a grammar is given for growing a multi-layer perceptron with active weights that has the error back-propagation learning algorithm embedded in its structure.
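As a rough illustration of the idea — not the paper's actual grammar formalism — a node-rewriting grammar can grow a layered network from a single axiom node. The rules and labels below are invented for the sketch:

```python
# Minimal, hypothetical sketch of growing a network by graph-grammar
# rewriting: each production replaces a nonterminal node (uppercase
# label) with a set of child nodes connected to it by edges.
def grow(axiom, rules, max_steps=10):
    nodes = [axiom]          # node labels; nonterminals are rule keys
    edges = []               # (parent_index, child_index) pairs
    for _ in range(max_steps):
        try:
            i = next(k for k, n in enumerate(nodes) if n in rules)
        except StopIteration:
            break            # no nonterminals left: growth is finished
        children = rules[nodes[i]]
        nodes[i] = nodes[i].lower()   # rewrite nonterminal to terminal
        for child in children:
            nodes.append(child)
            edges.append((i, len(nodes) - 1))
    return nodes, edges

# Illustrative productions: NET expands to an input unit, a HIDDEN
# block and an output unit; HIDDEN expands to two hidden units.
rules = {"NET": ["in", "HIDDEN", "out"], "HIDDEN": ["h", "h"]}
nodes, edges = grow("NET", rules)
print(nodes)   # -> ['net', 'in', 'hidden', 'out', 'h', 'h']
```

The compact rule set stands in for a whole family of architectures: changing one production changes every layer it generates, which is what makes such descriptions attractive for evolutionary search.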

16 citations

Proceedings ArticleDOI
13 May 1991
TL;DR: The authors describe the PYGMALION programming environment and propose a design for the GALATEA neurocomputing system and neurocomputer architecture, and discuss the design of 'neural' chips for the neurocomputer (general-purpose circuits), and the silicon compiler (for special-purpose circuit design).
Abstract: The focus of European neural computing research is the ESPRIT II PYGMALION Project, and its successor GALATEA. The authors present their view of this general-purpose neurocomputing system. They describe the PYGMALION programming environment and propose a design for the GALATEA neurocomputing system and neurocomputer architecture. Lastly, they discuss the design of 'neural' chips for the neurocomputer (general-purpose circuits), and the silicon compiler (for special-purpose circuits), which uses PYGMALION's intermediate-level language nC.

5 citations

Book ChapterDOI
09 Jun 1993
TL;DR: A Neural Silicon Compiler (NSC) is presented which is dedicated to the generation of Application-Specific Neural Network Chips (ASNNCs) from a high-level C-based behavioural language, accomplished through a heuristic approach.
Abstract: In this paper we present a Neural Silicon Compiler (NSC), which is dedicated to the generation of Application-Specific Neural Network Chips (ASNNCs) from a high-level C-based behavioural language. The integration of this tool into a neural network programming environment permits the translation of a neural application specified in the C-based input language into either binary (for simulation) or silicon (for execution in hardware). The development of the NSC focuses on the high-level synthesis part of the silicon compilation process, where the output is a Register Transfer Level description of a circuit specified in VHDL. This is accomplished through a heuristic approach, which targets the generated hardware structure of the ASNNCs in an optimised digital VLSI architecture employing both phases of neural computing on-chip: recall and learning.

5 citations

Book ChapterDOI
18 Apr 1994
TL;DR: The design of PREENS allows neural networks to run on any high-performance MIMD machine, such as a transputer system, and can also be used for other applications, like image processing.
Abstract: PREENS — a Parallel Research Execution Environment for Neural Systems — is a distributed neurosimulator targeted at networks of workstations and transputer systems. As current applications of neural networks often contain large amounts of data, and as the neural networks involved in tasks such as vision are very large, high demands are placed on the memory and computational resources of the target execution platforms. PREENS can be executed in a distributed environment, i.e. tools and neural network simulation programs can run on any machine reachable via TCP/IP. Using this approach, larger tasks and more data can be examined using efficient coarse-grained parallelism. Furthermore, the design of PREENS allows neural networks to run on any high-performance MIMD machine, such as a transputer system. In this paper, the different features and design concepts of PREENS are discussed. These can also be used for other applications, like image processing.
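The coarse-grained parallelism described above can be sketched minimally: independent simulation tasks, one per data partition, are farmed out to workers. The toy `simulate` step is invented, and the real system distributes tasks to remote machines over TCP/IP rather than to local threads:

```python
# Hypothetical sketch of PREENS-style coarse-grained parallelism: the
# data is split into a few large partitions and each worker runs one
# whole simulation task. "simulate" is an invented stand-in.
from concurrent.futures import ThreadPoolExecutor

def simulate(partition):
    # stand-in for running one neural-network simulation on a data slice
    return sum(x * x for x in partition)

data = list(range(100))
partitions = [data[i::4] for i in range(4)]   # 4 coarse-grained tasks
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(simulate, partitions))
print(sum(results))   # same total as a sequential run: 328350
```

The point of the coarse granularity is that communication happens only at task boundaries, so the scheme tolerates the relatively slow links between workstations.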

4 citations

Book ChapterDOI
09 Jun 1993
TL;DR: This paper describes the design and implementation of a development environment for matrix based neurocomputers where a new virtual machine language provides a wide range of matrix operations and device-related input/output communications.
Abstract: This paper describes the design and implementation of a development environment for matrix based neurocomputers. A new virtual machine language provides a wide range of matrix operations and device-related input/output communications. Virtual machines may be implemented entirely on conventional workstations or may use matrix-based neurocomputer hardware. To assist in algorithm development and debugging, the virtual machine is able to generate monitoring messages. A graphical interface is used to view the workings of one or more virtual machines. The user interface allows a range of display techniques to be associated with VML scalar and matrix variables. Virtual machines and monitoring processes run under the control of a central scheduler. All communications are implemented using a message based protocol. This environment is currently being used to develop a wide range of applications.
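The shape of such a matrix virtual machine with monitoring hooks can be sketched as follows; the instruction set, register names and monitor callback below are invented for illustration and are not the paper's actual VML:

```python
# Hypothetical sketch of a tiny matrix virtual machine: instructions
# operate on named matrix registers, and an optional monitor callback
# reports each step, standing in for the monitoring messages described
# in the abstract. All names here are invented.
import numpy as np

def run(program, registers, monitor=None):
    ops = {
        "matmul": lambda a, b: a @ b,
        "add":    lambda a, b: a + b,
        "tanh":   lambda a: np.tanh(a),
    }
    for op, dst, *srcs in program:
        registers[dst] = ops[op](*(registers[s] for s in srcs))
        if monitor:
            monitor(op, dst, registers[dst])
    return registers

regs = {
    "x": np.array([[1.0, 2.0]]),                  # input row vector
    "W": np.array([[0.5, -0.5], [0.25, 0.75]]),   # weight matrix
    "b": np.array([[0.1, 0.1]]),                  # bias row vector
}
program = [
    ("matmul", "h", "x", "W"),   # h = x @ W
    ("add",    "h", "h", "b"),   # h = h + b
    ("tanh",   "y", "h"),        # y = tanh(h)
]
run(program, regs, monitor=lambda op, dst, v: print(op, dst, v.shape))
```

Expressing whole-matrix operations as single VM instructions is what lets the same program run either on a workstation or be dispatched to matrix-based neurocomputer hardware.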

4 citations