Showing papers in "Network: Computation In Neural Systems in 1992"


Journal ArticleDOI
TL;DR: The principle of efficient information representation is explored as a design principle for sensory processing; a preliminary discussion is given of how the principle could be applied in general to predict neural processing, and some neural systems where it has recently been shown to be successful are discussed.
Abstract: The sensory pathways of animals are well adapted to processing a special class of signals, namely stimuli from the animal's environment. An important fact about natural stimuli is that they are typically very redundant and hence the sampled representation of these signals formed by the array of sensory cells is inefficient. One could argue for some animals and pathways, as the author does in this review, that efficiency of information representation in the nervous system has several evolutionary advantages. Consequently, one might expect that much of the processing in the early levels of these sensory pathways could be dedicated towards recoding incoming signals into a more efficient form. The author explores the principle of efficiency of information representation as a design principle for sensory processing. He gives a preliminary discussion on how this principle could be applied in general to predict neural processing and then discusses concretely some neural systems where it recently has been shown to be successful.

723 citations
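
A minimal sketch of one common reading of "recoding into a more efficient form": decorrelating (whitening) the redundant outputs of an array of sensory cells. The correlated-sensor toy signal, the symmetric whitening transform and all constants below are illustrative assumptions, not the review's own formalism.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 5000
source = rng.standard_normal(T)
# a crude stand-in for a natural signal: neighbouring sensors see strongly correlated values
sensors = np.stack([source + 0.3 * rng.standard_normal(T) for _ in range(4)])

C = np.cov(sensors)                                      # redundancy shows up as large off-diagonal terms
eigval, eigvec = np.linalg.eigh(C)
W = eigvec @ np.diag(1.0 / np.sqrt(eigval)) @ eigvec.T   # symmetric whitening transform
recoded = W @ (sensors - sensors.mean(axis=1, keepdims=True))

off_diag = lambda M: np.abs(M - np.eye(len(M))).max()
print('largest off-diagonal correlation before recoding:', round(off_diag(np.corrcoef(sensors)), 3))
print('largest off-diagonal correlation after recoding :', round(off_diag(np.corrcoef(recoded)), 3))
```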


Journal ArticleDOI
TL;DR: In this paper, a neural network was used to analyse samples of natural images and text; for the natural images, the extracted components resemble derivatives of Gaussian operators, similar to those found in visual cortex and inferred from psychophysics.
Abstract: A neural net was used to analyse samples of natural images and text. For the natural images, components resemble derivatives of Gaussian operators, similar to those found in visual cortex and inferred from psychophysics. While the results from natural images do not depend on scale, those from text images are highly scale dependent. Convolution of one of the text components with an original image shows that it is sensitive to inter-word gaps.

264 citations
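
The components described above are essentially principal components of small image patches. The sketch below substitutes a direct eigendecomposition of the patch covariance for the paper's neural net (a Sanger/Oja-type learning rule would converge to the same subspace) and a synthetic 1/f image for the natural-image samples; both substitutions are assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
size, patch = 256, 8

# synthetic image with a roughly 1/f amplitude spectrum, a common stand-in for natural images
fy = np.fft.fftfreq(size)[:, None]
fx = np.fft.fftfreq(size)[None, :]
amplitude = 1.0 / np.maximum(np.hypot(fy, fx), 1.0 / size)
random_phase = np.exp(2j * np.pi * rng.random((size, size)))
image = np.real(np.fft.ifft2(amplitude * random_phase))

# sample patches and diagonalize their covariance
coords = rng.integers(0, size - patch, size=(2000, 2))
patches = np.array([image[i:i + patch, j:j + patch].ravel() for i, j in coords])
patches -= patches.mean(axis=0)
eigval, eigvec = np.linalg.eigh(np.cov(patches, rowvar=False))

components = eigvec[:, ::-1][:, :6].T.reshape(6, patch, patch)   # leading components as 8x8 filters
print('leading filters shape:', components.shape)
print('variance carried by the first component: %.1f%%' % (100 * eigval[-1] / eigval.sum()))
```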


Journal ArticleDOI
TL;DR: The authors address the question of what happens if formal neurons are replaced by a model of ‘spiking’ neurons, and show how to include refractoriness and noise into a simple threshold model of neuronal spiking.
Abstract: The Hopfield network provides a simple model of an associative memory in a neuronal structure. It is, however, based on highly artificial assumptions, especially the use of formal two-state neurons or graded-response neurons. The authors address the question of what happens if formal neurons are replaced by a model of ‘spiking’ neurons. They do so in two steps. First, they show how to include refractoriness and noise into a simple threshold model of neuronal spiking. The spike trains resulting from such a model reproduce the distribution of interspike intervals and gain functions found in real neurons. In a second step they connect the model neurons so as to form a large associative memory system. The spike transmission is described by a synaptic kernel which includes axonal delays, ‘Hebbian’ synaptic efficacies, and a realistic postsynaptic response. The collective behaviour of the system is predicted by a set of dynamical equations which are exact in the limit of a large and fully connected network that...

220 citations
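
A minimal sketch of the first step described above: a threshold neuron with escape noise and a decaying refractory after-potential. The exponential forms and every constant are illustrative assumptions, not the authors' kernels.

```python
import numpy as np

rng = np.random.default_rng(0)
dt, T = 1.0, 2000.0                 # time step and run length (ms)
eta0, tau_refr = 8.0, 4.0           # depth and time constant of the refractory after-potential
beta, theta = 2.0, 1.0              # escape-noise sharpness and firing threshold
drive = 1.2                         # constant synaptic drive (arbitrary units)

t_last, spikes = -np.inf, []
for step in range(int(T / dt)):
    t = step * dt
    # relative refractoriness: hyperpolarizing after-potential decaying after the last spike
    refractory = -eta0 * np.exp(-(t - t_last) / tau_refr) if np.isfinite(t_last) else 0.0
    u = drive + refractory
    # noisy threshold: firing probability per step grows with the distance above threshold
    p_fire = 1.0 - np.exp(-dt * np.exp(beta * (u - theta)))
    if rng.random() < p_fire:
        spikes.append(t)
        t_last = t

isi = np.diff(spikes)
print('spikes:', len(spikes), ' mean interspike interval (ms):', round(isi.mean(), 2))
```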


Journal ArticleDOI
Tariq Samad1, Steven A. Harp1
TL;DR: It is shown how the Kohonen self-organizing feature map model can be extended so that partial training data can be utilized, including an application to student modelling for intelligent tutoring systems in which data is inherently incomplete.
Abstract: We show how the Kohonen self-organizing feature map model can be extended so that partial training data can be utilized. Given input stimuli in which values for some elements or features are absent, the match computation and the weight updates are performed in the input subspace defined by the available values. Three examples, including an application to student modelling for intelligent tutoring systems in which data is inherently incomplete, demonstrate the effectiveness of the extension.

106 citations
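
A minimal sketch of the extension just described: both the winner search and the weight update are confined to the input subspace of available values (missing entries marked NaN). Map size, learning rate and neighbourhood width are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n_features, side = 4, 5
grid = np.array([(i, j) for i in range(side) for j in range(side)], dtype=float)
weights = rng.random((side * side, n_features))

def train_step(x, lr=0.1, sigma=1.0):
    mask = ~np.isnan(x)                      # features actually present in this stimulus
    if not mask.any():
        return
    # match computation restricted to the available subspace
    d = np.sum((weights[:, mask] - x[mask]) ** 2, axis=1)
    winner = np.argmin(d)
    # neighbourhood function on the 2D map
    h = np.exp(-np.sum((grid - grid[winner]) ** 2, axis=1) / (2 * sigma ** 2))
    # weight update, again only for the available features
    weights[:, mask] += lr * h[:, None] * (x[mask] - weights[:, mask])

# toy training data with roughly 30% of the entries missing at random
for _ in range(2000):
    x = rng.random(n_features)
    x[rng.random(n_features) < 0.3] = np.nan
    train_step(x)

print('weight range after training:', weights.min().round(2), weights.max().round(2))
```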


Journal ArticleDOI
TL;DR: A Lyapunov function is developed to help understand the combined activity and synapse dynamics for a class of adaptive neural networks.
Abstract: Two kinds of dynamic processes take place in neural networks. One involves the change with time of the activity of each neuron. The other involves the change in strength of the connections (synapses) between neurons. When a neural network is learning or developing, both processes simultaneously take place, and their dynamics interact. This interaction is particularly important in feedback networks. A Lyapunov function is developed to help understand the combined activity and synapse dynamics for a class of such adaptive networks. The methods and viewpoint are illustrated by using them to describe the development of columnar structure of orientation-selective cells in primary visual cortex. Within this model, the columnar structure originates from symmetry breaking in feedback pathways within an area of cortex, rather than feedforward pathways between areas.

80 citations


Journal ArticleDOI
TL;DR: In this article, the Frolov-Cowan point approximation for single-neuron spike dynamics in associative cortex has been derived, even in the presence of shunting inhibition, and the same type of logic is applied to the cable theory equation for the neuron.
Abstract: Single-neuron spike dynamics is reconsidered in a situation in which the neural afferent spike input, originating from non-specific spontaneous activity, is very large compared with the input produced by specific (task related) operation of a cortical module. This, the authors argue, is the situation prevailing in associative cortex. It is shown that the Frolov-Cowan 'point approximation' can be derived systematically in this case, even in the presence of shunting inhibition. The same type of logic is then applied to the cable theory equation for the neuron. Here too, under a low ratio of signal to spontaneous activity in the input, the dynamics linearize, leading to an integrate-and-fire behaviour for the effective neuron. This element sums its synaptic inputs linearly. Its parameters are the resting parameters of the bare neuron, renormalized by the heavy barrage of impinging spontaneous activity. The only remnant of the geometric structure of the dendritic tree is an effective weakening of the postsynaptic...

68 citations
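
A sketch of the effective neuron the analysis arrives at: a leaky integrate-and-fire element that sums its inputs linearly, with membrane parameters standing in for the bare ones renormalized by background activity. Every constant below (time constant, drive, noise level) is an illustrative assumption, not a value derived in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
dt, T = 0.1, 1000.0                 # ms
tau_eff = 5.0                       # effective (renormalized) membrane time constant
v_rest, v_thresh, v_reset = 0.0, 1.0, 0.0

v, spikes = v_rest, []
for step in range(int(T / dt)):
    # linear summation of inputs: a weak specific signal on top of a crude stand-in
    # for the heavy spontaneous background
    background = 0.25 * rng.standard_normal()
    signal = 0.22
    v += dt * (-(v - v_rest) / tau_eff + signal + background)
    if v >= v_thresh:
        spikes.append(step * dt)
        v = v_reset

print('firing rate of the effective neuron (Hz):', 1000.0 * len(spikes) / T)
```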


Journal ArticleDOI
TL;DR: The authors introduce the definition of information capacity which guarantees content addressability and is a stricter upper bound of the information really accessible in an autoassociation process.
Abstract: A new access to the asymptotic analysis of autoassociation properties in recurrent McCulloch-Pitts networks in the range of low activity is proposed. Using information theory, this method examines the static structure of stable states imprinted by a Hebbian storing process. In addition to the definition of critical pattern capacity usually considered in the analysis of the Hopfield model, the authors introduce the definition of information capacity which guarantees content addressability and is a stricter upper bound of the information really accessible in an autoassociation process. They calculate these two types of capacities for two types of local learning rules which are very effective for sparsely coded patterns: the Hebb rule and the clipped Hebb rule. It turns out that for both rules the information capacity is exactly half the pattern capacity.

65 citations
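
A toy version of the two local rules compared above, applied to sparsely coded binary patterns with one-step retrieval from a partial cue. Network size, coding level and the retrieval thresholds are hand-picked for illustration and are not the values used in the capacity analysis.

```python
import numpy as np

rng = np.random.default_rng(0)
N, P, p = 400, 40, 0.05                        # neurons, patterns, coding level
patterns = (rng.random((P, N)) < p).astype(float)

J_hebb = patterns.T @ patterns                 # additive Hebbian couplings
np.fill_diagonal(J_hebb, 0.0)
J_clip = (J_hebb > 0).astype(float)            # clipped (binary) couplings

target = patterns[0]
cue = target.copy()
active = np.flatnonzero(cue)
cue[rng.choice(active, size=len(active) // 2, replace=False)] = 0.0   # delete half the active bits

def recall(J, theta):
    return (J @ cue >= theta).astype(float)

for name, J, theta in (('Hebb rule   ', J_hebb, 0.5 * cue.sum()),
                       ('clipped Hebb', J_clip, cue.sum() - 1)):
    out = recall(J, theta)
    print(name, 'bits correct after one step:', int((out == target).sum()), '/', N)
```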


Journal ArticleDOI
TL;DR: Hebbian-type learning is discussed in a network whose synapses are analogue, dynamic variables, whose values have to be periodically refreshed due to possible exponential decay, or other instability of continuous synaptic efficacies, and the end product of learning is very sensitive to the relation between the rate of presentation of patterns and the size of the refresh time interval.
Abstract: Hebbian-type learning is discussed in a network whose synapses are analogue, dynamic variables, whose values have to be periodically refreshed due to possible exponential decay, or other instability of continuous synaptic efficacies. It is shown that the end product of learning in such networks is very sensitive to the relation between the rate of presentation of patterns and the size of the refresh time interval. It is shown that in the limit of slow presentation, the network can learn at most O(ln N) patterns in N neurons, and must learn each one in one shot, thus learning all errors present in a corrupt stimulus presented for retrieval. It is then shown that as the rate of presentation is increased the performance is increased rapidly. Another option we investigate is that in which the refresh mechanism is acting stochastically. In this case the rate of learning can be slowed down very significantly, but the number of stored patterns cannot surpass √N.

53 citations
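
A toy rendering of the setting described above: analogue Hebbian efficacies decay between presentations, and a periodic refresh pushes each synapse back to a discrete stable value. The refresh rule, the time constants and the recall test are all illustrative assumptions; the point is only to expose the interplay between presentation rate and refresh interval.

```python
import numpy as np

rng = np.random.default_rng(0)
N, P = 100, 20
patterns = rng.choice([-1.0, 1.0], size=(P, N))

tau_decay = 30.0           # decay time constant of the analogue efficacies
refresh_every = 10.0       # refresh interval
present_every = 5.0        # time between pattern presentations
eps = 1.0 / N              # Hebbian increment per presentation
J0 = eps                   # discrete level the refresh restores

J = np.zeros((N, N))
t, next_refresh = 0.0, refresh_every
for xi in patterns:
    J *= np.exp(-present_every / tau_decay)       # exponential decay since the last presentation
    J += eps * np.outer(xi, xi)                   # Hebbian imprint of the current pattern
    t += present_every
    while t >= next_refresh:
        J = J0 * np.sign(J)                       # refresh to the nearer discrete value
        next_refresh += refresh_every

def recall_quality(xi):
    return float(np.mean(np.sign(J @ xi) == xi))  # one-step overlap with the stored pattern

print('first pattern recalled:', recall_quality(patterns[0]))
print('last pattern recalled :', recall_quality(patterns[-1]))
```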


Journal ArticleDOI
TL;DR: It is shown that in some cases it may be more efficient (in information theoretic terms) to store many patterns that are each retrieved with a high error rate rather than fewer patterns which are each retrieved with high accuracy.
Abstract: The associative net is a fully connected feedforward associative memory, with one layer of input units, one layer of output units and binary valued weights. Its simple structure and the form of the weight modification rule used have led to several analyses of two measures of its performance: capacity and information efficiency. However, these have yielded only approximate expressions. In this paper we present a more precise treatment. Simulation results are presented to support the analysis and to deal with the cases where analysis is not possible. We extend previous work which showed that in some cases it may be more efficient (in information theoretic terms) to store many patterns that are each retrieved with a high error rate rather than fewer patterns which are each retrieved with high accuracy.

45 citations
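
A minimal sketch of the associative net described above: one input layer, one output layer, binary weights set by a clipped Hebbian rule, and retrieval by thresholding at the number of active cue lines. Sizes and coding level are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out, n_pairs, k = 256, 256, 100, 8      # layer sizes, stored pairs, active units per pattern

def sparse_patterns(n, count):
    mats = np.zeros((count, n))
    for row in mats:
        row[rng.choice(n, size=k, replace=False)] = 1.0
    return mats

inputs, outputs = sparse_patterns(n_in, n_pairs), sparse_patterns(n_out, n_pairs)

W = np.zeros((n_in, n_out))
for x, y in zip(inputs, outputs):
    W = np.maximum(W, np.outer(x, y))           # binary weights: switched on once, never incremented

# retrieval: an output unit fires only if every active input line reaching it is switched on
cue = inputs[0]
recalled = (cue @ W >= cue.sum()).astype(float)
print('wrong output bits for the first stored pair:', int(np.abs(recalled - outputs[0]).sum()))
```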


Journal ArticleDOI
TL;DR: The idea that long range cortico-cortical connections might be the substrate for an autoassociative memory mechanism, whereby features processed locally could be linked together over large portions of neocortex is discussed.
Abstract: We discuss the idea that long range cortico-cortical connections might be the substrate for an autoassociative memory mechanism, whereby features processed locally could be linked together over large portions of neocortex. The simplest version of this idea is shown to be implausibly inadequate in terms of storage capacity; although up to a fraction of a bit could be stored on each synapse, the number of global activity patterns that could be stored and individually retrieved would scale not with the size of the network but, effectively, only with the number of modifiable connections per cell.

38 citations


Journal ArticleDOI
TL;DR: Drawing on experience gained in the study of condensed matter, it is pointed out that some of the single-electrode recordings which are exposing such remarkable computational features could not possibly be of great relevance for the description of 'higher brain function'.
Abstract: As a physicist observing the empirical struggle of neurophysiology to penetrate the computational secrets of neo-cortex, one is struck by the coexistence of two extreme positions. One has it that the computations performed in neo-cortex are too complex to be approachable by single-electrode recordings. The other is continuously discovering single neurons which can perform such complex tasks as to recognize the individual faces of the senior staff in the laboratory. Drawing on experience gained in the study of condensed matter, I would like to reopen the discussion, pointing out that some of the single-electrode recordings which are exposing such remarkable computational features could not possibly be of great relevance for the description of 'higher brain function'. On the other hand, some single-electrode experiments, which detect rather mundane features of performance, display remarkable dynamical features, which must surely underlie cortical function. All this leads us to the conclusion that, though simu...

Journal ArticleDOI
TL;DR: In this article, four theories of the EEG are discussed: the Amsterdam group's model of alpha activity, the Nunez model of global resonance, Freeman's model of oscillation in the cortical minicolumn, and the New Zealand group's stochastic model of EEG at millimetric scale.
Abstract: The character of the EEG, its cellular sources, and its relationship to cognitive events are outlined. Then four theories of the EEG are discussed: the Amsterdam group's model of alpha activity, the Nunez model of global resonance, Freeman's model of oscillation in the cortical minicolumn, and the New Zealand group's stochastic model of EEG at millimetric scale. Experiments supporting these theories are outlined, including spatial and temporal characteristics of the alpha rhythm, velocities of EEG wave propagation, and phase relations of cell action potentials with EEG near 40 Hz. These theories are mutually consistent, differing only with regard to the scale of phenomenon accounted for. They imply that real cortical dynamic properties bear analogy to those of Hopfield networks, Boltzmann machines, and Amit probabilistic attractor networks. The cortex may be described as a system with a single instantaneous basin of attraction, the locus of the basin being subject to adiabatic control by brain-stem afferent...

Journal ArticleDOI
TL;DR: Two different implementations of conjugate gradient learning are described; one uses an exact line search to find the minimum of the error along the current search direction, while the other avoids the line search by controlling the positive indefiniteness of the Hessian matrix.
Abstract: Backward error propagation is a widely used procedure for computing the gradient of the error for a feed-forward network and thus allows the error to be minimized (learning). Simple gradient descent is ineffective unless the step size used is very small and it is then unacceptably slow. Conjugate gradient methods are now increasingly used as they allow second-derivative information to be used, thus improving learning. Two different implementations are described; one using an exact line search to find the minimum of the error along the current search direction, the other avoids the line search by controlling the positive indefiniteness of the Hessian matrix. The two implementations are compared and evaluated in the context of an image recognition problem using input bit-maps with a resolution of 128 by 128 pixels.
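
A compact sketch of the general approach: backpropagation supplies the error gradient of a small one-hidden-layer network, and SciPy's nonlinear conjugate-gradient routine (which performs its own line searches) does the minimization. The toy task, network size and the use of scipy.optimize are illustrative choices, not the two implementations compared in the paper.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 8))
y = (X[:, 0] * X[:, 1] > 0).astype(float).reshape(-1, 1)     # toy target

n_in, n_hid, n_out = 8, 6, 1

def unpack(w):
    i = 0
    W1 = w[i:i + n_in * n_hid].reshape(n_in, n_hid); i += n_in * n_hid
    b1 = w[i:i + n_hid]; i += n_hid
    W2 = w[i:i + n_hid * n_out].reshape(n_hid, n_out); i += n_hid * n_out
    return W1, b1, W2, w[i:]

def loss_and_grad(w):
    W1, b1, W2, b2 = unpack(w)
    h = np.tanh(X @ W1 + b1)                    # forward pass
    err = (h @ W2 + b2) - y
    loss = 0.5 * np.mean(err ** 2)
    d_out = err / len(X)                        # backward pass (error backpropagation)
    d_h = (d_out @ W2.T) * (1.0 - h ** 2)
    grad = np.concatenate([(X.T @ d_h).ravel(), d_h.sum(axis=0),
                           (h.T @ d_out).ravel(), d_out.sum(axis=0)])
    return loss, grad

n_w = n_in * n_hid + n_hid + n_hid * n_out + n_out
res = minimize(loss_and_grad, 0.1 * rng.standard_normal(n_w), jac=True, method='CG')
print('conjugate-gradient iterations:', res.nit, ' final loss:', round(res.fun, 4))
```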

Journal ArticleDOI
TL;DR: A feed-forward layered network is used to simulate the life cycle of a synthetic animal that moves in an environment and captures food objects and the network can learn the weights that solve the survival task only by means of its genetic evolution.
Abstract: Genetic algorithms have been successfully used for optimizing complex functions over multidimensional domains, such as the space of the connection weights in a neural network. A feed-forward layered network is used to simulate the life cycle of a synthetic animal that moves in an environment and captures food objects. The adaptation of the animal (i.e. of the network's weight matrix) to the environment can be measured by the number of food objects reached in a given lifetime. We consider this number as a fitness function to be optimized by a genetic algorithm over the space of the connection weights. The network can learn the weights that solve the survival task only by means of its genetic evolution. The recombination genetic operator (crossover) can be seen as a model of sexual recombination for the population, while mutation models agamic reproduction. The central problem in trying to apply crossover is the difficult mapping between the genetic code string (genotype) and the network's weight matrix (phenotype)...
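
A bare-bones sketch of evolving a feed-forward controller's weight vector with one-point crossover and Gaussian mutation. The fitness function here is a hand-picked surrogate (matching random sensor patterns to a target response), standing in for the paper's "food objects reached in a lifetime"; all sizes and rates are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# tiny feed-forward controller: sensory input -> motor output
N_IN, N_HID, N_OUT = 4, 6, 2
N_W = N_IN * N_HID + N_HID * N_OUT

def act(weights, x):
    W1 = weights[:N_IN * N_HID].reshape(N_IN, N_HID)
    W2 = weights[N_IN * N_HID:].reshape(N_HID, N_OUT)
    return np.tanh(np.tanh(x @ W1) @ W2)

def fitness(weights):
    # stand-in for 'food objects reached in a lifetime': score how well the controller
    # maps random sensor patterns onto a hand-picked target behaviour
    x = rng.standard_normal((50, N_IN))
    target = np.tanh(x[:, :N_OUT])
    return -np.mean((act(weights, x) - target) ** 2)

def crossover(a, b):
    point = rng.integers(1, N_W)                 # one-point crossover on the genotype string
    return np.concatenate([a[:point], b[point:]])

pop = rng.standard_normal((40, N_W))
for generation in range(100):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[::-1][:10]]            # truncation selection
    children = []
    while len(children) < len(pop):
        a, b = parents[rng.integers(0, len(parents), size=2)]
        children.append(crossover(a, b) + 0.05 * rng.standard_normal(N_W))   # mutation
    pop = np.array(children)

print('best fitness in the final population:', max(fitness(ind) for ind in pop))
```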

Journal ArticleDOI
TL;DR: Evidence is presented to suggest that information storage, in part, may be based on synapse connectivity changes mediated by a replay of neurodevelopmental events.
Abstract: This brief overview presents evidence to suggest that information storage, in part, may be based on synapse connectivity changes mediated by a replay of neurodevelopmental events. These considerations are based on studies which have monitored changes in neural cell adhesion molecule (NCAM) sialylation state during acquisition and consolidation of a passive avoidance response in the adult rat. The synapse-specific isoform of NCAM, NCAM180, is demonstrated to increase its sialylation state in the hippocampus between 12 and 24 h after training and to produce a novel sialylated form of 210 kDa. Interventive studies with specific antibodies showed NCAM to play a specific role at 6-8 h after training, the amnestic effect of which does not become apparent until the process of NCAM sialylation is complete.

Journal ArticleDOI
TL;DR: This work has used the hippocampal slice preparation to study cell properties, synaptic organization and synchronized population bursts and oscillations of the hippocampus to shed light on the mechanisms of certain epilepsy types and brain waves that occur in the whole animal.
Abstract: We have used the hippocampal slice preparation to study cell properties, synaptic organization and synchronized population bursts and oscillations. Our methods are those of electrophysiology and computer modelling. The results shed light on the mechanisms of certain epilepsy types and brain waves that occur in the whole animal. In addition, detailed information on cellular organization of the hippocampus is critical for the evaluation of computational brain theories. To illustrate this point, we shall discuss a particular computational theory of the CA3 region of the hippocampus. This theory concerns the way recurrent excitatory synaptic connections in the hippocampus might be used to define distance in a non-topographic map of the environment. If correct, the theory has specific physiological implications.

Journal ArticleDOI
TL;DR: A technique is described that permits the on-line construction and dynamic modification of parse trees during the processing of sentence-like input, using a simple recurrent network and a recursive auto-associative memory.
Abstract: A technique is described that permits the on-line construction and dynamic modification of parse trees during the processing of sentence-like input. The approach is a combination of a simple recurrent network (SRN) and a recursive auto-associative memory (RAAM). The parsing technique involves teaching the SRN to build RAAM representations as it processes its input item by item. The approach is a potential component of a larger connectionist natural language processing system, and could also be used as a tool in the cognitive modelling of language understanding. Unfortunately, the modified SRN demonstrates a limited capacity for generalization.

Journal ArticleDOI
TL;DR: A neural network approach is used to extract enough useful information from the patterns of ACF and PACF to identify an appropriate ARMA model for an unknown time series.
Abstract: This study presents an artificial neural network-based paradigm for automating the controversial identification stage of the Box-Jenkins method, in which a time series is classified into an autoregressive moving average (ARMA) model. The identification stage depends on interpreting the patterns of two statistics: the autocorrelation function (ACF) and the partial autocorrelation function (PACF). The interpretation, however, requires expertise in the Box-Jenkins method to be completed successfully. This operational drawback makes the method less practical despite its theoretical elaborateness. In this paper, a neural network approach is used to extract enough useful information from the patterns of the ACF and PACF to identify an appropriate ARMA model for an unknown time series. This paper suggests both the neural network architecture and the training strategy that are suitable for identifying the Box-Jenkins model. Promising results were obtained through extensive computer experiments with the artificially generated...
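
A sketch of the feature side of the scheme: sample autocorrelations and partial autocorrelations (via the Durbin-Levinson recursion) are computed for a toy AR(1) series and concatenated into the kind of input vector a classifier network would receive. Lag count, series length and the AR(1) example are illustrative choices, not the paper's setup.

```python
import numpy as np

def acf(x, nlags):
    # sample autocorrelation function up to nlags
    x = np.asarray(x, dtype=float) - np.mean(x)
    denom = np.dot(x, x)
    return np.array([1.0] + [np.dot(x[:-k], x[k:]) / denom for k in range(1, nlags + 1)])

def pacf(x, nlags):
    # partial autocorrelations via the Durbin-Levinson recursion
    r = acf(x, nlags)
    phi = np.zeros((nlags + 1, nlags + 1))
    pac = [1.0]
    for k in range(1, nlags + 1):
        if k == 1:
            phi[1, 1] = r[1]
        else:
            num = r[k] - np.dot(phi[k - 1, 1:k], r[1:k][::-1])
            den = 1.0 - np.dot(phi[k - 1, 1:k], r[1:k])
            phi[k, k] = num / den
            for j in range(1, k):
                phi[k, j] = phi[k - 1, j] - phi[k, k] * phi[k - 1, k - j]
        pac.append(phi[k, k])
    return np.array(pac)

# example: an AR(1) series shows geometric ACF decay and a PACF cut-off at lag 1
rng = np.random.default_rng(0)
x = np.zeros(500)
for t in range(1, 500):
    x[t] = 0.7 * x[t - 1] + rng.standard_normal()
features = np.concatenate([acf(x, 10), pacf(x, 10)])   # input vector for the classifier network
print(features.round(2))
```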

Journal ArticleDOI
TL;DR: It is demonstrated that formal neural network techniques allow us to build the simplest models compatible with a limited but systematic set of experimental data, and three components out of four can be described by linear multithreshold automata.
Abstract: We demonstrate that formal neural network techniques allow us to build the simplest models compatible with a limited but systematic set of experimental data. The experimental system under study is the growth of mouse macrophage-like cell lines under the combined influence of two ion channels, the growth factor receptor and adenylate cyclase. We conclude that three components out of four can be described by linear multithreshold automata. The remaining component's behaviour, being non-monotonic, necessitates the introduction of a fifth hidden variable, or of nonlinear interactions.

Journal ArticleDOI
TL;DR: The author describes work conducted using artificial neural networks, in the form of a multilayer perceptron, employing error backpropagation and trained on a database of features derived from 524 expertly classified single cervical cells and subsequently tested on a further 524 previously unseen cells.
Abstract: The author describes work conducted using artificial neural networks, in the form of a multilayer perceptron, employing error backpropagation and trained on a database of features derived from 524 expertly classified single cervical cells and subsequently tested on a further 524 previously unseen cells. Pre-processing of the data was used to achieve a data reduction of better than 99%. Each cell image was converted from its 256 × 256 pixel format to its frequency spectrum, from which 80 features containing texture and energy information were extracted. The artificial neural network was trained and tested using this compressed data representation. The performance of a number of different network arrangements was investigated. The best results were obtained using a network with 80 inputs, 4 processing elements in a single hidden layer and 1 processing element in the output layer. The network with this arrangement was able to correctly classify, as either normal or abnormal, 98% of the cell images in the training...
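
A sketch of the pipeline described above: each 256 × 256 image is reduced to 80 spectral energy features (here, a radially averaged power spectrum, one plausible reading of the "texture and energy" features but not necessarily the paper's exact recipe) and fed to an 80-4-1 multilayer perceptron. Random images and labels stand in for the cervical-cell database.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def spectral_features(image, n_features=80):
    # 2D power spectrum, then mean energy in n_features radial frequency bands:
    # a >99% reduction from 256*256 pixels to 80 numbers
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(image))) ** 2
    h, w = spectrum.shape
    yy, xx = np.indices((h, w))
    r = np.hypot(yy - h / 2, xx - w / 2)
    band = np.minimum((r / r.max() * n_features).astype(int), n_features - 1)
    energy = np.bincount(band.ravel(), weights=spectrum.ravel(), minlength=n_features)
    counts = np.bincount(band.ravel(), minlength=n_features)
    return np.log1p(energy / counts)

rng = np.random.default_rng(0)
images = rng.random((60, 256, 256))            # stand-ins for the single-cell images
labels = rng.integers(0, 2, size=60)           # 0 = normal, 1 = abnormal (dummy labels)
X = np.array([spectral_features(img) for img in images])

# 80 inputs, 4 hidden units, 1 sigmoid output unit, as in the best arrangement reported
clf = MLPClassifier(hidden_layer_sizes=(4,), max_iter=2000, random_state=0).fit(X, labels)
print('training accuracy on the dummy data:', clf.score(X, labels))
```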

Journal ArticleDOI
TL;DR: The authors present a new learning algorithm, defined in the continuous space, for neural networks with discrete synaptic couplings, and analyse its performance and features in detail.
Abstract: The authors present a new learning algorithm for neural networks with discrete synaptic couplings. The main difference with respect to previous algorithms is that it is defined in the continuous space. Its performance and features are analysed in detail.

Journal ArticleDOI
TL;DR: Phase locking of tectal unit activity with the EEG oscillations provides evidence that the EEG may contribute to modulation of neural responsiveness.
Abstract: The retinal input to the midbrain optic tectum and pretectal thalamus of frogs and toads synapses with intrinsic neurons in the surface layers. Processing of visual information occurs from surface to a depth involving intratectal connections, probably organized in a columnar fashion, and also connections derived from pretectal thalamus. The latter are involved in determining the specificity of responses of some tectal (T52) units for 'prey'-like stimuli. These T52 units also act as pre-motor neurons as they project to more posterior brainstem (bulbar) regions responsible for prey-catching behaviour. Novel stimuli presented to toads induce an increase in synchrony and amplitude of the tectal electroencephalogram (EEG) and a surface negative sustained potential shift (SPS). Phase locking of tectal unit activity with the EEG oscillations provides evidence that the EEG may contribute to modulation of neural responsiveness. The SPS representing extracellular potassium fluxes and depolarization of radially dist...

Journal ArticleDOI
TL;DR: A method is presented of interpreting the role of a hidden layer unit geometrically, when using feedforward neural networks for real-valued function approximation problems, in terms of regions of the input space over which it is most important in a particular sense.
Abstract: Feedforward neural networks with several input units, a single output unit and a single hidden layer of sigmoid transfer functions are considered. A method is presented of interpreting the role of a hidden layer unit geometrically, when using such networks for real-valued function approximation problems, in terms of regions of the input space over which it is most important in a particular sense. The relationship between this interpretation and the weight values of the network is highlighted. Then, for the case in which the approximation is of most interest over a bounded region of the input space, it is shown that this interpretation may then be used to check for redundancy among hidden units, and to remove any such units found. Finally, future research issues for which the interpretation may be useful are briefly discussed.

Journal ArticleDOI
TL;DR: The entropy of an architecture is defined, which quantifies the propensity of a machine to learn a rule by examples, and a possible prototype of intelligent behaviour is defined.
Abstract: In this paper we define the entropy of an architecture, which quantifies the propensity of a machine to learn a rule by examples. The distance in the learning propensities of two architectures is also defined. A possible prototype of intelligent behaviour is defined.

Journal ArticleDOI
TL;DR: The effect of a bimodal (fast rise, slow decay) outward conductance on a formal point neuron is studied by means of numerical simulation; increasing its magnitude appears to sharpen the network's ability to recall a preferred stable state.
Abstract: The effect of a bimodal (fast rise, slow decay) outward conductance on a formal point neuron is studied by means of numerical simulation. Its influence upon the model neuron is contrasted with that exercised by a generic, unimodal conductance. This is realized by treating the model neuron both in isolation and as the processing element of a standard neural network. Increasing the magnitude of the bimodal outward conductance appears to sharpen the network's ability to recall a preferred stable state. The neuron has also been embedded in a more complex model, inspired by the nigro-striato-cortical circuit. This system, only when endowed with the bimodal potassium conductance, is able to generate periodic bursts of neuronal discharges as a response to an unstructured input.

Journal ArticleDOI
TL;DR: Relationships between neural networks and a range of key ideas and findings in modern learning theory are examined, drawing on studies of both conditioning and perceptual learning.
Abstract: The origins of artificial neural networks are related to animal conditioning theory: both are forms of connectionist theory which in turn derives from the empiricist philosophers' principle of association. The parallel between animal learning and neural nets suggests that interaction between them should benefit both sides. The paper examines relationships between neural networks and a range of key ideas and findings in modern learning theory. It draws on studies of both conditioning and perceptual learning. The need to avoid simplistic comparisons is stressed. Not all issues which have aroused interest in learning theory are relevant to neural net research, because old and new connectionism diverge in some important ways. It is also necessary to recognize that many learning phenomena do not lend themselves to simulation by a single net. However, once these points are recognized, the findings of learning theory provide a range of well-defined challenges which are potentially important for those who are con...

Journal ArticleDOI
TL;DR: It is noted that the implicit way to describe threshold and refractory period is advantageous to the adaptive learning in neural networks and that molecular electronics probably provides an effective approach to implementing the above neuron model.
Abstract: A variant of the BvP model is proposed. The mechanisms of threshold and refractory period resulting from the double dynamical processes are qualitatively studied through computer simulation. The results show that the variant neuron model has the property that its threshold, refractory period and response amplitude are dynamically adjustable. This paper also discusses some problems relating to collective property, learning and implementation of the neural network based on the neuron model proposed. It is noted that the implicit way to describe threshold and refractory period is advantageous to the adaptive learning in neural networks and that molecular electronics probably provides an effective approach to implementing the above neuron model.
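
A minimal forward-Euler sketch of the classical BvP (Bonhoeffer-van der Pol / FitzHugh) equations that the proposed variant builds on, with two current pulses of different strength to probe the threshold behaviour. Parameter values are FitzHugh's textbook ones and the pulse amplitudes are hand-picked; none of this reproduces the paper's variant or its dynamically adjustable threshold.

```python
import numpy as np

a, b, c = 0.7, 0.8, 3.0        # classical BvP parameters
dt, steps = 0.01, 10000

def stimulus(t):
    # a strong pulse followed by a weak one (amplitudes are illustrative only)
    if 10.0 <= t < 11.0:
        return -0.8
    if 50.0 <= t < 51.0:
        return -0.1
    return 0.0

x, y = 1.2, -0.625             # approximate resting point for zero stimulus
trace = []
for i in range(steps):
    z = stimulus(i * dt)
    dx = c * (y + x - x ** 3 / 3.0 + z)
    dy = -(x - a + b * y) / c
    x, y = x + dt * dx, y + dt * dy
    trace.append(x)

# a full action potential drives x far below rest; a subthreshold pulse barely moves it
print('minimum of x over the run:', round(min(trace), 2))
```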

Journal ArticleDOI
TL;DR: Under certain simplifying assumptions, it is shown that the sizes of the amplifications of signals in the inhibitory neural field are inversely related to the size of the receptive fields of the excitatory and inhibitory neurons in the signal space.
Abstract: A mathematical approach to studying the properties of maps between a signal space and two neural fields, one of excitatory and the other of inhibitory neurons, is presented. It is based on a modified version of Amari's field-theoretic approach. Under certain simplifying assumptions, it is shown that the sizes of the amplifications of signals in the inhibitory neural field are inversely related to the sizes of the receptive fields of the excitatory and inhibitory neurons in the signal space. Other results concerning the relationships between the sizes of the amplifications of signals and the receptive fields of neurons, and the probabilities of occurrence of signals, are obtained and discussed.

Journal ArticleDOI
TL;DR: Simulation results confirm the effectiveness of the scaling factor proposed for the McCulloch-Pitts neuron model, which suffers from saturation when the fan-in is large, leading to network paralysis.
Abstract: The McCulloch-Pitts neuron model with semi-linear transducer function suffers from saturation when the fan-in is large. This leads to network paralysis. This paper suggests a scaling factor to the McCulloch-Pitts neuron model. Simulation results confirm the effectiveness of the proposed modification.
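
A sketch of the saturation problem and of one plausible fix: dividing the net input of a semi-linear (logistic) McCulloch-Pitts unit by its fan-in. The paper's actual scaling factor may differ; the division by fan-in here is an assumption made for illustration.

```python
import numpy as np

def unit_output(weights, inputs, scale_by_fanin=False):
    net = np.dot(weights, inputs)
    if scale_by_fanin:
        net /= len(inputs)                 # scaling factor tied to the fan-in
    return 1.0 / (1.0 + np.exp(-net))      # semi-linear (logistic) transfer function

rng = np.random.default_rng(0)
for fan_in in (10, 100, 1000):
    w = rng.uniform(-1, 1, fan_in)
    x = rng.uniform(0, 1, fan_in)
    print(fan_in,
          round(unit_output(w, x), 4),                        # tends toward 0 or 1 (saturation)
          round(unit_output(w, x, scale_by_fanin=True), 4))   # stays in the responsive range
```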

Journal ArticleDOI
TL;DR: The need for efficient implementation of neural networks in silicon is used to motivate the investigation of alternatives to the McCulloch-Pitts neuron; in particular, one which computes the norm of a difference rather than an inner product.
Abstract: The need for efficient implementation of neural networks in silicon is used to motivate the investigation of alternatives to the McCulloch-Pitts neuron; in particular, one which computes the norm of a difference rather than an inner product. Earlier work is reviewed briefly and formal relationships between the two types are provided. The two types are shown to be equivalent under certain circumstances. Hopfield-like networks and multilayer feedforward networks of the difference neuron are simulated and analysed by comparing them to conventional types. Outside the domain of equivalence, the difference networks are found to perform less well than networks of identical architecture but which incorporate the McCulloch-Pitts neuron.
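
A minimal illustration of the two unit types compared above: the standard inner-product (McCulloch-Pitts) unit and a "difference" unit that responds to the norm of the difference between input and weight vectors. The transfer functions chosen here (tanh, and a Gaussian of the distance) are illustrative assumptions, not the paper's formal construction.

```python
import numpy as np

def inner_product_unit(w, x, theta=0.0):
    return np.tanh(np.dot(w, x) - theta)                 # conventional McCulloch-Pitts response

def difference_unit(w, x, beta=1.0):
    return np.exp(-beta * np.linalg.norm(x - w) ** 2)    # peaks when the input matches the weights

w = np.array([0.5, -0.3, 0.8])
for x in (w, w + 0.1, np.array([1.0, 1.0, -1.0])):
    print(round(float(inner_product_unit(w, x)), 3), round(float(difference_unit(w, x)), 3))
```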