
Showing papers on "Artificial neural network" published in 1974


Journal ArticleDOI
A. Held
TL;DR: In this paper, a new formalism is proposed for the investigation of algebraically special metrics; the essential calculations are co-ordinate free, the equations are gauge invariant and hence easy to work with, and the approach is rich in possibilities not explored by previous techniques.
Abstract: A new formalism is proposed for the investigation of algebraically special metrics. Among its advantages are that the essential calculations are co-ordinate free and the equations are gauge invariant. The derived equations are simple in form, hence easy to work with, and the approach is rich in possibilities not explored by previous techniques.

53 citations


Journal ArticleDOI
01 Sep 1974
TL;DR: In this article, two new methods are presented for determining the behavior of a structure carrying moving masses: the first is analytic in nature and represents a modified asymptotic method in the theory of nonlinear phenomena; the second is an exact numerical technique general enough to solve exactly a set of differential equations with singular coefficients.
Abstract: Two new methods are presented in order to determine the behavior of a structure carrying moving masses. The first method is analytic in nature and represents a modified asymptotic method in the theory of nonlinear phenomena. The second method is an exact numerical technique general enough to be used for solving exactly a set of differential equations with singular coefficients.

53 citations


Journal ArticleDOI
TL;DR: Steady-state solutions and the stability of these solutions to small perturbations can be obtained, and simple mechanisms for memory storage, for the generation of oscillatory activity and for decision making in neural systems are suggested.
Abstract: Networks containing neuronal models of the type considered in the previous paper can be described by a set of first order differential equations. Steady-state solutions and the stability of these solutions to small perturbations can be obtained. Networks of physiological interest which give rise to second, third and fourth order linear equations are analysed in detail. Conditions are derived under which such networks can be condensed into a single neuron of similar order. Simple mechanisms for memory storage, for the generation of oscillatory activity and for decision making in neural systems are suggested.

42 citations
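
As a minimal sketch of the kind of analysis this abstract describes (assuming the common linear rate form dx/dt = -x + Wx + I_ext; the paper's actual equations are not reproduced here), the steady state solves a linear system and stability to small perturbations follows from the eigenvalues of the Jacobian:

```python
import numpy as np

W = np.array([[0.0, -0.8],    # hypothetical 2-neuron circuit:
              [0.9,  0.0]])   # cell 1 excites cell 2, cell 2 inhibits cell 1
I_ext = np.array([1.0, 0.5])  # constant external drive

J = -np.eye(2) + W                              # Jacobian of dx/dt = -x + W x + I_ext
x_star = np.linalg.solve(np.eye(2) - W, I_ext)  # steady state: (I - W) x* = I_ext
eigs = np.linalg.eigvals(J)

print("steady state:", x_star)
print("Jacobian eigenvalues:", eigs)
print("stable:", bool(np.all(eigs.real < 0)))   # all Re(lambda) < 0 => stable
```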



Journal ArticleDOI
TL;DR: A model of a neural network with recurrent inhibition has been studied, intended as a possible description of the cerebral cortex, although this interpretation is not necessary.
Abstract: A model of a neural network with recurrent inhibition has been studied. The model is intended as a possible description of the cerebral cortex, although this interpretation is not necessary. Using the corresponding neuroanatomical concepts, it can be described in the following way. The network consists of pyramidal cells and stellate cells. These are assumed to be of excitatory and inhibitory type, respectively. The input consists of excitatory signals and so-called unspecified signals. Both types of input are connected to the pyramidal cells. The output of the model is similarly formed by the output of these cells. However, the pyramidal cell output is also connected to the stellate cells. These are in their turn connected to the pyramidal cells, thus completing a closed circuit. All connections between cells are of random character. It is assumed that synapses can be facilitated as a result of simultaneous presynaptic and postsynaptic activity. This gives the model a capability of associative learning. The model's ability to retrieve information is investigated by studying the output in the absence of unspecified signals. It is shown that, under suitable conditions, the output pattern will become composed of just one major component even if the excitatory input pattern is a mixture of several patterns that were present during learning. This major component is a part of the specific output pattern that during learning became associated with the input pattern corresponding to the largest component of the pattern mixture. This behavior is obtained through a dynamic process in which the pattern separation properties of the feedback link play an important role. The model's operation can be viewed as pattern recognition, and this aspect, as well as some physiological and psychological interpretations, is discussed.

16 citations
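
The retrieval behavior described above can be caricatured in a few lines. This is a loose sketch under my own assumptions (binary units, Hebbian outer-product weights, and the stellate inhibitory feedback approximated as k-winners-take-all), not the paper's model:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 64, 12
# Three sparse binary patterns stored during "learning"
patterns = np.zeros((3, n))
for p in patterns:
    p[rng.choice(n, size=k, replace=False)] = 1.0

# Hebbian facilitation: simultaneous pre- and postsynaptic activity
W = sum(np.outer(p, p) for p in patterns)
np.fill_diagonal(W, 0.0)

def recall(x, steps=10):
    for _ in range(steps):
        drive = W @ x
        # recurrent inhibition, caricatured as k-winners-take-all:
        # only the k most strongly driven pyramidal cells stay active
        x = np.zeros(n)
        x[np.argsort(drive)[-k:]] = 1.0
    return x

mix = patterns[0] + 0.4 * patterns[1]   # mixture with pattern 0 dominant
out = recall(mix)
print("overlap with stored patterns:", (patterns @ out) / k)
```

Run as written, the output settles onto the stored pattern associated with the largest component of the mixture, which is the qualitative behavior the abstract reports.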


Journal ArticleDOI
TL;DR: A theory and corresponding model for the neural basis of language is described and processes demonstrated are verbally directed recall of visual experience; the dependence of recognition and understanding on contextual information; and elementary concepts of sentence generation.
Abstract: This report describes a theory and corresponding model for the neural basis of language. A detailed functional description will be given for the following elementary visual-linguistic processes: (1) the selection and neural encoding of patterns from the visual field; (2) the representation of visual experience in memory; (3) the mechanisms of association between different types of visual and verbal information including (a) naming of visual images, (b) naming of positional relationships between objects, (c) naming of size and shape attributes of objects, and (d) imaging of pictorial information which was previously stored in memory; (4) the neural representation of phrases and simple sentences; (5) the recognition of simple sentences and the concept of meaning; and (6) verbally directed recall of visual experience. Strengths and weaknesses of the model are discussed. Part 1 of this paper contains a complete set of operational definitions. The neural networks are described, and several alternate control strategies for these networks are considered. Part 2 gives a detailed description of computer-simulation studies of the proposed model. Processes demonstrated by the computer simulations are: (1) verbally directed recall of visual experience; (2) understanding of verbal information; (3) aspects of learning and forgetting; (4) the dependence of recognition and understanding on contextual information; and (5) elementary concepts of sentence generation. The simulation studies are based on one particular choice of control functions.

10 citations


Journal ArticleDOI
TL;DR: The neural networks investigated could produce output excitation patterns in which frequency and intensity information was coded by position, and such a system of speech coding, if not a true analogue of the auditory system, has immense potential in the field of speech recognition.

7 citations
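
As a rough illustration of position coding (my own toy construction, not the network from the paper): a tonotopic bank of Gaussian-tuned units maps a tone's frequency to the location of peak excitation and its intensity to the width of the supra-threshold region around that peak.

```python
import numpy as np

centers = np.linspace(100.0, 4000.0, 40)  # preferred frequencies, Hz
sigma = 150.0                             # tuning width, Hz

def excitation_pattern(freq_hz, amplitude):
    """Excitation of each unit in the bank for a pure tone."""
    return amplitude * np.exp(-0.5 * ((freq_hz - centers) / sigma) ** 2)

for amp in (0.5, 1.0, 2.0):
    r = excitation_pattern(1000.0, amp)
    print(f"amplitude {amp}: peak at unit {r.argmax()}, "
          f"{(r > 0.25).sum()} units above threshold")
```

The peak position stays fixed as amplitude grows while the count of supra-threshold units increases, so both frequency and intensity are read off from where excitation lies along the array.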


Journal ArticleDOI
TL;DR: A neural net model with random interconnections is shown capable of exhibiting the features of Parkinson’s disease, including cogwheel rigidity, resting tremor, and dysdiadochokinesis.
Abstract: A neural net model with random interconnections is shown capable of exhibiting the features of Parkinson’s disease, including cogwheel rigidity, resting tremor, and dysdiadochokinesis. These properties are simulated uniquely for extrapyramidal disease, with spasticity and hyperreflexia predicted for other upper motoneuron lesions.

3 citations
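
A toy demonstration of the general phenomenon (my own construction; the paper's model is not reproduced here): in a randomly connected linear rate network, raising the overall gain, loosely analogous to weakening compensating inhibition, pushes the leading eigenvalue across the imaginary axis; when that eigenvalue is complex, the resulting instability is an oscillation, a crude analogue of a resting tremor.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 30
W = rng.standard_normal((n, n)) / np.sqrt(n)  # random interconnections

for g in (0.5, 1.5):                          # overall coupling gain
    J = -np.eye(n) + g * W                    # Jacobian of dx/dt = -x + g W x
    lead = max(np.linalg.eigvals(J), key=lambda z: z.real)
    state = "unstable (oscillatory if complex)" if lead.real > 0 else "quiescent"
    print(f"g={g}: leading eigenvalue {lead:.3f} -> {state}")
```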


Proceedings Article
01 Jul 1974
TL;DR: Primitive computational concepts, expressed in terms of neural nets, are created as a basis for a model of visual motion perception; these primitives are explicitly derived within the context of a complete visual system.
Abstract: Primitive computational concepts, expressed in terms of neural nets, are created as a basis for a model of visual motion perception. These primitives are explicitly derived within the context of a complete visual system.

1 citation
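
One classic primitive in this family is a delay-and-correlate motion detector (a Reichardt-type unit). The sketch below is my illustration of that idea, not the specific primitives defined in the paper:

```python
import numpy as np

def motion_energy(frames):
    """frames: (T, N) array of a 1-D 'retina' sampled over time.
    Correlates each receptor with its delayed neighbour; the sign of
    the pooled response indicates rightward (+) vs leftward (-) motion."""
    prev, curr = frames[:-1], frames[1:]
    right = (curr[:, 1:] * prev[:, :-1]).sum()  # left neighbour fired one step ago
    left = (curr[:, :-1] * prev[:, 1:]).sum()   # right neighbour fired one step ago
    return right - left

# A bright bar stepping rightward across 8 receptors:
T, N = 8, 8
frames = np.zeros((T, N))
frames[np.arange(T), np.arange(N)] = 1.0
print(motion_energy(frames))           # positive => rightward
print(motion_energy(frames[:, ::-1]))  # mirrored stimulus => negative
```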


Journal ArticleDOI
TL;DR: The linearized McCulloch & Pitts equation for a neural network is shown to have localized oscillations, and the localization is shown to be preserved or enforced by the introduction of nonlinearity.

1 citation
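
One way to see how a linear network equation can carry a spatially localized oscillation (a toy construction under my own assumptions, not the paper's calculation): take dx/dt = -x + Wx on a chain with antisymmetric nearest-neighbour coupling and one strengthened link. The fastest-oscillating eigenmode then lies outside the band of extended modes and concentrates on that link.

```python
import numpy as np

n = 40
W = np.zeros((n, n))
for i in range(n - 1):                    # antisymmetric chain coupling
    W[i, i + 1], W[i + 1, i] = 1.0, -1.0
W[20, 21], W[21, 20] = 5.0, -5.0          # one locally strengthened link

vals, vecs = np.linalg.eig(-np.eye(n) + W)
k = np.argmax(vals.imag)                  # fastest-oscillating mode
mode = np.abs(vecs[:, k]) ** 2
mode /= mode.sum()
print("oscillation frequency:", vals[k].imag)
print("participation ratio:", 1.0 / np.sum(mode ** 2))  # ~2 => localized
print("weight on sites 20 and 21:", mode[20] + mode[21])
```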


Journal ArticleDOI
TL;DR: A fast method is presented for simulating, via high-speed convolution with the Fast Fourier Transform, a class of systems that includes certain regular neural networks built from neurons that perform a weighted spatial summation as part of their operation.
Abstract: A fast method is presented for simulating a class of systems that includes certain regular neural networks based on neurons that perform a weighted spatial summation as a part of their operation. The method employs high-speed convolution via the Fast Fourier Transform. Some important aspects are emphasized: first, even though the FFT is essential, the neurons do not need to be completely linear (they can have time-varying thresholds, for example); second, simulations of networks with very dense interconnections are encouraged (they take no more time than sparse ones using this method); and finally, the method is suggestive of similar but more general computational schemes.
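
A minimal sketch of the core trick as I read this abstract (sizes and names are my own): when the weights of a regular network depend only on the offset between neurons, one update's weighted spatial summation is a circular convolution, so the FFT computes it in O(n log n) even when every neuron connects to every other, and a nonlinearity such as a time-varying threshold can still follow the summation.

```python
import numpy as np

n = 512
rng = np.random.default_rng(1)
x = rng.random(n)                                    # current activity
kernel = np.exp(-0.5 * ((np.arange(n) - n // 2) / 20.0) ** 2)

# Direct form: a dense circulant weight matrix, one row per neuron.
W = np.empty((n, n))
for i in range(n):
    W[i] = np.roll(kernel, i - n // 2)
direct = W @ x                                       # O(n^2) summation

# FFT form: the same summation as a circular convolution, O(n log n).
kf = np.fft.rfft(np.fft.ifftshift(kernel))           # precompute once
fast = np.fft.irfft(np.fft.rfft(x) * kf, n)

print("max |direct - fast|:", np.abs(direct - fast).max())  # ~1e-13

# The neurons need not be completely linear: apply, e.g., a
# time-varying threshold after the FFT-based summation.
theta = 1.0 + 0.5 * np.sin(0.1 * 7)                  # threshold at step t = 7
fired = (fast > theta).astype(float)
```

Note that the dense kernel costs the FFT path nothing extra, which is the abstract's point about dense interconnections being no slower than sparse ones under this method.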