
Showing papers by "Lionel Tarassenko" published in 1989


Proceedings Article
01 Jan 1989
TL;DR: New CMOS synapse circuits using only three and four MOSFETs/synapse are announced and projections to networks of hundreds of neurons are made.
Abstract: We announce new CMOS synapse circuits using only three and four MOSFETs/synapse. Neural states are asynchronous pulse streams, upon which arithmetic is performed directly. Chips implementing over 100 fully programmable synapses are described and projections to networks of hundreds of neurons are made.
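As a rough software analogy (an illustrative assumption, not the circuit described in the paper), pulse-stream arithmetic can be sketched by encoding a neural state as the firing probability of a pulse train and letting each synapse gate incoming pulses with probability equal to its weight, so that the output rate approximates state times weight:

    import random

    def pulse_stream(state, n_slots, rng):
        # Encode a state in [0, 1] as an asynchronous pulse train:
        # in each time slot, a pulse fires with probability `state`.
        return [1 if rng.random() < state else 0 for _ in range(n_slots)]

    def synapse(pulses, weight, rng):
        # Multiply the stream by a weight in [0, 1] by gating:
        # each incoming pulse is passed with probability `weight`.
        return [p if rng.random() < weight else 0 for p in pulses]

    rng = random.Random(0)
    out = synapse(pulse_stream(0.8, 10000, rng), 0.5, rng)
    print(sum(out) / 10000)   # roughly 0.4, i.e. 0.8 * 0.5

In this toy model the multiplication emerges statistically from the gating; the actual chips perform the arithmetic directly on the pulse streams in analogue circuitry.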

27 citations


Book ChapterDOI
01 Jan 1989
TL;DR: A neural network is a massively parallel array of simple computational units (neurons) that models some of the functionality of the human nervous system and attempts to capture some of its computational strengths.
Abstract: A neural network is a massively parallel array of simple computational units (neurons) that models some of the functionality of the human nervous system and attempts to capture some of its computational strengths (see Grossberg 1968, Hopfield 1982, Lippmann 1987). The abilities that a synthetic neural net might aspire to mimic include the ability to consider many solutions simultaneously, the ability to work with corrupted or incomplete data without explicit error-correction, and a natural fault-tolerance. This latter attribute, which arises from the parallelism and the distributed knowledge representation, gives rise to graceful degradation as faults appear. This makes such networks attractive for VLSI implementation.

16 citations


Proceedings ArticleDOI
08 May 1989
TL;DR: A pulse-stream signaling mechanism is described that is analogous to that found in natural neural systems and allows relatively high levels of integration in comparison to other programmable silicon neural forms.
Abstract: A pulse-stream signaling mechanism is described that is analogous to that found in natural neural systems. Previous work has resulted in the development of synthetic neural networks implemented as VLSI devices using pulse streams to represent neural states and a time-chopping technique to implement multiplication by synaptic weights. Synaptic weights are stored on-chip in digital memory. An alternative method for representing synaptic weights is described which uses dynamic storage capacitors to hold the charge proportional to synaptic weight. The capacitive storage devices are refreshed from off-chip digital RAM via a digital-to-analog converter. The presence, absence, and rate of pulse firing of the neuron are used to represent its state. Multiplication of a neuron state by a synaptic weight is performed by modifying the width of individual pulses passing through the synapse. A circuit that performs this function is described. The small synaptic circuit that results allows relatively high levels of integration in comparison to other programmable silicon neural forms.
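A minimal numerical sketch of the pulse-width multiplication described above (the function names and parameter values are illustrative assumptions, not taken from the paper):

    def synapse_on_time(n_pulses, nominal_width, weight):
        # One synapse's contribution: each pulse's width is scaled in
        # proportion to the weight, so total on-time tracks rate * weight.
        return n_pulses * nominal_width * weight

    def postsynaptic_activity(pulse_counts, weights, nominal_width=1e-6):
        # The receiving neuron integrates the on-time over all afferent
        # synapses, approximating the sum of state_j * weight_j.
        return sum(synapse_on_time(n, nominal_width, w)
                   for n, w in zip(pulse_counts, weights))

    # 40 pulses through a weight of 0.25 inject the same total on-time
    # as 10 pulses through a weight of 1.0.
    print(postsynaptic_activity([40, 10], [0.25, 1.0]))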

15 citations


Proceedings Article
16 Oct 1989
TL;DR: An iterative algorithm is proposed to train fully-connected feedback networks and it is shown that the recall performance can be predicted without recourse to protracted simulation studies, confirming the vast superiority of Hamming-type networks for binary pattern classification.
Abstract: This paper identifies optimal strategies available for the computation of synaptic weights in both auto- and hetero-associative networks. An iterative algorithm is proposed to train fully-connected feedback networks, and it is shown that the recall performance can be predicted without recourse to protracted simulation studies. More importantly, the paper confirms the vast superiority of Hamming-type networks for binary pattern classification over the corresponding neural network algorithms that rely instead on the distributed storage of data.
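For comparison, a Hamming-type classifier of the kind the paper favours can be sketched in a few lines: exemplars are stored explicitly, and an input is assigned to the nearest one in Hamming distance (the +/-1 encoding and the patterns below are illustrative choices, not the paper's data):

    def hamming_classify(x, exemplars):
        # Return the index of the stored exemplar with the smallest
        # Hamming distance to the input pattern x.
        def dist(a, b):
            return sum(ai != bi for ai, bi in zip(a, b))
        return min(range(len(exemplars)), key=lambda k: dist(x, exemplars[k]))

    exemplars = [(+1, +1, +1, -1), (-1, -1, +1, +1)]
    noisy = (+1, -1, +1, -1)                   # exemplar 0 with one bit flipped
    print(hamming_classify(noisy, exemplars))  # -> 0

Unlike a feedback network with distributed storage, recall here can never settle into a spurious mixture state: the answer is always one of the stored exemplars.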

4 citations



18 May 1989
TL;DR: The paper discusses: general principles of VLSI implementation, analogue VLSI neural networks, and pulse-stream VLSI neural networks.
Abstract: One of the main contributions to the resurgence of interest in artificial neural networks in the early 1980s was the work of Hopfield (1982) on feedback architectures with nonlinear threshold elements. Since the development of VLSI devices for the implementation of neural networks was initially stimulated by this work, the principles behind the Hopfield model are reviewed briefly. The paper then discusses: general principles of VLSI implementation, analogue VLSI neural networks, and pulse-stream VLSI neural networks.
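For reference, the Hopfield model the paper reviews can be summarised in a short sketch (a standard textbook formulation with arbitrary sizes, not anything specific to this paper): Hebbian outer-product weights with zero self-connections, and asynchronous updates of +/-1 threshold units:

    import random

    def train(patterns):
        # Hebbian outer-product weights, zero diagonal (no self-connection).
        n = len(patterns[0])
        W = [[0.0] * n for _ in range(n)]
        for p in patterns:
            for i in range(n):
                for j in range(n):
                    if i != j:
                        W[i][j] += p[i] * p[j] / n
        return W

    def recall(W, x, rng, n_sweeps=10):
        # Asynchronous updates: each unit thresholds its weighted input.
        x, n = list(x), len(x)
        for _ in range(n_sweeps):
            for i in rng.sample(range(n), n):
                h = sum(W[i][j] * x[j] for j in range(n))
                x[i] = 1 if h >= 0 else -1
        return x

    rng = random.Random(1)
    stored = [1, -1, 1, -1, 1, -1]
    W = train([stored])
    print(recall(W, [-1] + stored[1:], rng))  # recovers the stored pattern

The feedback (every unit's output feeds every other unit's input) and the nonlinear threshold are the two architectural features the abstract highlights.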

1 citation