
Showing papers in "Neural Networks in 1990"


Journal ArticleDOI
TL;DR: A probabilistic neural network is formed that can compute nonlinear decision boundaries approaching the Bayes optimal, and a four-layer neural network of the type proposed can map any input pattern to any number of classifications.

3,772 citations
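The probabilistic neural network summarized above classifies by summing class-conditional Gaussian (Parzen) kernels and picking the class with the largest total activation. A minimal sketch, not Specht's exact four-layer formulation; the data, the function name, and the smoothing parameter `sigma` are illustrative assumptions:

```python
import numpy as np

def pnn_classify(x, train_X, train_y, sigma=0.5):
    """Classify x by summing a Gaussian kernel over each class's
    training points and returning the class with the largest sum."""
    classes = np.unique(train_y)
    scores = []
    for c in classes:
        pts = train_X[train_y == c]
        d2 = np.sum((pts - x) ** 2, axis=1)       # squared distances
        scores.append(np.sum(np.exp(-d2 / (2.0 * sigma ** 2))))
    return classes[int(np.argmax(scores))]

# Two well-separated 2-D clusters (toy data)
X = np.array([[0.0, 0.0], [0.1, 0.2], [2.0, 2.0], [2.1, 1.9]])
y = np.array([0, 0, 1, 1])
print(pnn_classify(np.array([0.05, 0.1]), X, y))  # → 0
```

The smoothing parameter trades off between a nearest-neighbour-like rule (small `sigma`) and a nearly linear boundary (large `sigma`).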



Journal ArticleDOI
TL;DR: A new competitive-learning algorithm based on the “conscience” learning method is introduced that is shown to be efficient and yields near-optimal results in vector quantization for data compression.

726 citations


Journal ArticleDOI
TL;DR: It is shown that sufficiently complex multilayer feedforward networks are capable of representing arbitrarily accurate approximations to arbitrary mappings by proving the consistency of a class of connectionist nonparametric regression estimators for arbitrary (square integrable) regression functions.

702 citations


Journal ArticleDOI
TL;DR: A translation-invariant back-propagation network is described that performs better than a sophisticated continuous acoustic parameter hidden Markov model on a noisy, 100-speaker confusable vocabulary isolated word recognition task.

635 citations


Journal ArticleDOI
TL;DR: A model to implement parallel search of compressed or distributed pattern recognition codes in a neural network hierarchy is introduced and is a form of hypothesis testing capable of discovering appropriate representations of a nonstationary input environment.

537 citations


Journal ArticleDOI
TL;DR: Computer simulation of the dynamics of a distributed model of the olfactory system, aimed at understanding the role of chaos in biological pattern recognition, shows that the adaptive behavior of the system is scaling-invariant and independent of the initial conditions at the transition from one wing to another.

403 citations


Journal ArticleDOI
TL;DR: It is shown that SuperSAB may converge orders of magnitude faster than the original back-propagation algorithm while being only slightly unstable; the algorithm is largely insensitive to the choice of parameter values and has excellent scaling properties.

387 citations
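SuperSAB maintains a separate learning rate per weight: the rate grows while the gradient keeps its sign and shrinks when the sign flips. A minimal sketch on a toy quadratic (the factors `up`/`down` and the objective are illustrative; full SuperSAB also reverts the previous weight step on a sign flip, which is omitted here):

```python
import numpy as np

def grad(w):                      # gradient of f(w) = 0.5*(10*w0^2 + w1^2)
    return np.array([10.0, 1.0]) * w

w = np.array([1.0, 1.0])
eta = np.full(2, 0.01)            # per-weight learning rates
prev_g = np.zeros(2)
up, down = 1.05, 0.5

for _ in range(200):
    g = grad(w)
    eta[g * prev_g > 0] *= up     # gradient sign kept: accelerate
    eta[g * prev_g < 0] *= down   # sign flip: overshoot, slow down
    w -= eta * g
    prev_g = g
```

The per-weight rates let the shallow direction (`w1`) accelerate while the steep direction (`w0`) is automatically throttled, which is where the speedup over a single global rate comes from.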


Journal ArticleDOI
TL;DR: A stochastic reinforcement learning algorithm for learning functions with continuous outputs using a connectionist network that learns to perform an underconstrained positioning task using a simulated 3-degree-of-freedom robot arm.

306 citations


Journal ArticleDOI
TL;DR: The neurophysiological studies of the mustached bat's auditory system provide an excellent database for computational models; the information-bearing parameters (IBPs) are mapped, and these maps depend greatly upon subcortical signal processing.

213 citations


Journal ArticleDOI
TL;DR: It is demonstrated that the SAS approach is much more accurate and efficient on the test examples tried than the FSA algorithm, and convenient statistical criteria are proposed and used for algorithm performance evaluation, together with test examples of controlled properties.

Journal ArticleDOI
TL;DR: This article addresses the issue of consistency in using Heuristic Dynamic Programming (HDP), a procedure for adapting a “critic” neural network, closely related to Sutton's method of temporal differences.
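HDP adapts a "critic" network toward the Bellman target, and in the simplest tabular case the update coincides with Sutton's TD(0) rule. A minimal sketch on an assumed 5-state chain with a reward only at the terminal state (the chain, rates, and reward are illustrative, not the paper's setting):

```python
# Tabular TD(0) critic on a 5-state chain: deterministic step right,
# reward 1.0 on reaching the terminal state 4.
gamma, alpha = 0.9, 0.1
V = [0.0] * 5                     # critic's value estimates

for _ in range(500):              # episodes
    s = 0
    while s < 4:
        s2 = s + 1
        r = 1.0 if s2 == 4 else 0.0
        # critic update toward the bootstrapped target r + gamma*V(s')
        V[s] += alpha * (r + gamma * V[s2] - V[s])
        s = s2
```

Here the estimates converge to the discounted returns `V[k] = 0.9**(3-k)` for states 0..3; the consistency question the paper raises is whether such bootstrapped targets converge to the true values in more general settings.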

Journal ArticleDOI
TL;DR: This paper illustrates why a nonlinear adaptive feed-forward layered network with linear output units can perform well as a pattern classification device and shows that minimising the error at the output of the network is equivalent to maximising a particular norm, the network discriminant function, at the output of the hidden units.

Journal ArticleDOI
TL;DR: The initial results are very promising: for the diagnostic category that it is most crucial to get right, the MLP produces correct classification more often than any of the three groups of doctors or the fuzzy logic system.

Journal ArticleDOI
TL;DR: Two distinct but complementary neural architectures for motion perception are presented, a directionally selective, local motion detector based on the fly visual system and a motion-sensitive network, insensitive to the direction of motion.

Journal ArticleDOI
TL;DR: A new approach to the aperture problem is presented, using an adaptive neural network model that accommodates its structure to long-term statistics of visual motion, but also simultaneously uses its acquired structure to assimilate, disambiguate, and represent visual motion events in real-time.

Journal ArticleDOI
Shigeru Tanaka1
TL;DR: From detailed consideration of this equation, it is found that the problem of cortical map formation is equivalent to the thermodynamics of a Potts spin system whose spin variables typically have complex internal states; use-dependent self-organization of a cortical map can therefore be interpreted as a kind of phase-transition phenomenon.

Journal ArticleDOI
TL;DR: It is shown that smooth variation of one of the parameters of the original map—learning rate—gives rise to period-doubling bifurcations of total coupling strength, suggesting the presence of a period-doubling route to chaos.

Journal ArticleDOI
TL;DR: This work exhibits a collection of algebraic transformations which reduce network cost and increase the set of objective functions that are neurally implementable, and applies them to simplify a number of structured neural networks.

Journal ArticleDOI
TL;DR: It is shown that a model of the saccadic system using the quaternion representation of eye rotations yields a spatiotemporal translation with all the experimentally observed properties: activation of a particular site in the SC generates a saccade of a specific amplitude and direction.

Journal ArticleDOI
TL;DR: This work presents a mathematical framework for describing the interaction of neural mappings and local image processing operations which allows functional interpretations and shows that neural maps are powerful tools for the parallel processing of visual information.

Journal ArticleDOI
TL;DR: For an auto-correlation-type associative memory network, the capacity is greater and the basins of attraction of memorized patterns are larger when the units have a hysteretic property than when they do not.
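An auto-correlation associative memory stores bipolar patterns in a Hebbian outer-product weight matrix and recalls by iterated thresholding. A minimal sketch of plain recall (the hysteretic units that the paper shows enlarge the basins of attraction are not modeled here; patterns and dimensions are illustrative):

```python
import numpy as np

# Store two orthogonal bipolar patterns in W = sum over patterns of p p^T
patterns = np.array([[1, -1, 1, -1, 1, -1, 1, -1],
                     [1, 1, 1, 1, -1, -1, -1, -1]])
W = patterns.T @ patterns
np.fill_diagonal(W, 0)            # no self-coupling

probe = patterns[0].copy()
probe[0] *= -1                    # corrupt one bit
for _ in range(5):
    probe = np.sign(W @ probe)    # iterate until a fixed point
```

Starting inside the basin of attraction of the first stored pattern, the iteration restores the flipped bit; hysteresis in the unit response is what the paper shows makes such basins larger.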

Journal ArticleDOI
TL;DR: It is shown how certain model parameters must be chosen appropriately to obtain approximate shift invariance, and how these parameters should be chosen to reach a compromise between invariance and classification sensitivity.

Journal ArticleDOI
TL;DR: This paper describes these networks as a way for learning feature extractors by constrained back-propagation and shows such a time-delay network to be capable of dealing with a near real-sized problem: French digit recognition.

Journal ArticleDOI
TL;DR: An adaptation of Kohonen's algorithm for self-organisation is used as it is well suited for implementation on massively parallel architectures and the placement problem is simplified by assuming cells to be of uniform width and the cost to be minimised as total weighted wire length.

Journal ArticleDOI
TL;DR: This network has been designed to be robust under noise while still maintaining enough flexibility to learn new patterns, and its use is demonstrated in the recognition of planar objects given edge vectors.

Journal ArticleDOI
TL;DR: Ultrasonic images of human eyes containing choroidal tumors were applied as input to a neural network utilizing the backpropagation algorithm; the network was capable of localizing and classifying the tumors and was tolerant of differences in size, conformation, and orientation.

Journal ArticleDOI
TL;DR: This study incorporated the model of the sensory neuron into a lateral inhibition-type network consisting of five elements and found that this network successfully simulated both second-order conditioning and blocking more readily than the three-cell network.

Journal ArticleDOI
TL;DR: Direct comparison of these generalizers with the neural net NETtalk, a back-propagated neural net also created for reading aloud, leads to the conclusion that NETtalk is in fact a poor generalizer.