Journal ArticleDOI

An introduction to computing with neural nets

Richard P. Lippmann
- 01 Mar 1988
- Vol. 16, Iss. 1, pp. 7-25
TLDR
This paper provides an introduction to the field of artificial neural nets by reviewing six important neural net models that can be used for pattern classification and exploring how some existing classification and clustering algorithms can be performed using simple neuron-like components.
Abstract
Artificial neural net models have been studied for many years in the hope of achieving human-like performance in the fields of speech and image recognition. These models are composed of many nonlinear computational elements operating in parallel and arranged in patterns reminiscent of biological neural nets. Computational elements or nodes are connected via weights that are typically adapted during use to improve performance. There has been a recent resurgence in the field of artificial neural nets caused by new net topologies and algorithms, analog VLSI implementation techniques, and the belief that massive parallelism is essential for high-performance speech and image recognition. This paper provides an introduction to the field of artificial neural nets by reviewing six important neural net models that can be used for pattern classification. These nets are highly parallel building blocks that illustrate neural net components and design principles and can be used to construct more complex systems. In addition to describing these nets, a major emphasis is placed on exploring how some existing classification and clustering algorithms can be performed using simple neuron-like components. Single-layer nets can implement algorithms required by Gaussian maximum-likelihood classifiers and optimum minimum-error classifiers for binary patterns corrupted by noise. More generally, the decision regions required by any classification algorithm can be generated in a straightforward manner by three-layer feed-forward nets.
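The abstract's claim that single-layer nets can implement Gaussian maximum-likelihood classifiers has a compact concrete form: for Gaussian classes sharing a covariance matrix, the ML decision reduces to linear discriminants, i.e., one layer of weighted sums followed by a maximum selector. The Python sketch below illustrates this reduction under that equal-covariance, equal-prior assumption; it is an illustration, not code from the paper, and the function names are mine.

```python
import numpy as np

def gaussian_ml_single_layer(means, shared_cov):
    """Weights of a single-layer net implementing a Gaussian
    maximum-likelihood classifier (equal-covariance, equal-prior case):
    g_i(x) = w_i . x + b_i with w_i = S^-1 mu_i and
    b_i = -0.5 * mu_i^T S^-1 mu_i."""
    inv = np.linalg.inv(shared_cov)
    W = np.array([inv @ m for m in means])             # one node per class
    b = np.array([-0.5 * m @ inv @ m for m in means])  # one bias per node
    return W, b

def classify(x, W, b):
    # Each node forms a weighted sum; a maximum-picking stage selects the class.
    return int(np.argmax(W @ x + b))

# Toy usage: two Gaussian classes in two dimensions, identity covariance.
means = [np.array([0.0, 0.0]), np.array([2.0, 2.0])]
W, b = gaussian_ml_single_layer(means, np.eye(2))
print(classify(np.array([1.8, 1.9]), W, b))            # -> 1
```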


Citations
Journal ArticleDOI

Approximation by superpositions of a sigmoidal function

TL;DR: It is demonstrated that finite linear combinations of compositions of a fixed, univariate function and a set of affine functionals can uniformly approximate any continuous function of n real variables with support in the unit hypercube.
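Reconstructed from the standard statement of this result (not quoted from the paper), the approximating family consists of finite sums

$$G(x) = \sum_{j=1}^{N} \alpha_j\, \sigma\!\left(w_j^{\mathsf{T}} x + \theta_j\right),$$

where $\sigma$ is a fixed sigmoidal function and each $w_j^{\mathsf{T}} x + \theta_j$ is an affine functional; such sums are dense in the continuous functions on the unit hypercube $[0,1]^n$.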
Journal ArticleDOI

Fast learning in networks of locally-tuned processing units

TL;DR: This work proposes a network architecture which uses a single internal layer of locally-tuned processing units to learn both classification tasks and real-valued function approximations (Moody and Darken 1988).
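A minimal sketch of a locally-tuned processing unit, under the usual reading of this architecture as Gaussian radial basis functions feeding a linear output layer; the helper names are illustrative, not the paper's.

```python
import numpy as np

def rbf_layer(x, centers, widths):
    """Each locally-tuned unit responds strongly only near its own
    center: a Gaussian of the distance between x and that center."""
    d2 = np.sum((centers - x) ** 2, axis=1)
    return np.exp(-d2 / (2.0 * widths ** 2))

def rbf_predict(x, centers, widths, output_weights):
    # A linear output layer combines the locally-tuned responses; only
    # these output weights need gradient learning, which is one source
    # of the fast training the paper emphasizes.
    return rbf_layer(x, centers, widths) @ output_weights
```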
Journal ArticleDOI

Artificial neural networks: a tutorial

TL;DR: The article discusses the motivations behind the development of ANNs, describes the basic biological neuron and the artificial computational model, outlines network architectures and learning processes, and presents some of the most commonly used ANN models.
Journal ArticleDOI

Neural network ensembles

TL;DR: It is shown that the residual generalization error remaining after training can be reduced by invoking ensembles of similar networks, improving the performance of neural networks for classification.
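One common realization of the ensemble idea is to average (or vote over) the outputs of several independently trained networks; the sketch below averages output activations, an illustrative assumption rather than the paper's exact scheme.

```python
import numpy as np

def ensemble_predict(member_outputs):
    """Average the class-score vectors of several similar networks and
    pick the winner; averaging tends to cancel the members'
    uncorrelated errors, reducing residual generalization error."""
    return int(np.argmax(np.mean(member_outputs, axis=0)))

# Three hypothetical member networks scoring a four-class input:
scores = np.array([[0.1, 0.6, 0.2, 0.1],
                   [0.2, 0.5, 0.2, 0.1],
                   [0.4, 0.3, 0.2, 0.1]])
print(ensemble_predict(scores))  # -> 1
```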
Journal ArticleDOI

Neural networks and the bias/variance dilemma

TL;DR: It is suggested that current-generation feedforward neural networks are largely inadequate for difficult problems in machine perception and machine learning, regardless of parallel-versus-serial hardware or other implementation issues.
References
Book ChapterDOI

Learning internal representations by error propagation

TL;DR: This chapter contains sections titled: The Problem, The Generalized Delta Rule, Simulation Results, Some Further Generalizations, Conclusion.
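For reference, the generalized delta rule named in those section titles is the familiar backpropagation update, reproduced here in its standard form rather than quoted from the chapter:

$$\Delta w_{ji} = \eta\, \delta_j\, o_i, \qquad \delta_j = \begin{cases} (t_j - o_j)\, f'(\mathrm{net}_j) & \text{for output units,} \\ f'(\mathrm{net}_j) \sum_k \delta_k w_{kj} & \text{for hidden units,} \end{cases}$$

where $o_i$ is the activation feeding weight $w_{ji}$, $t_j$ the target, $f$ the node nonlinearity, and $\eta$ the learning rate.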
Journal ArticleDOI

Neural networks and physical systems with emergent collective computational abilities

TL;DR: A model of a system having a large number of simple equivalent components, based on aspects of neurobiology but readily adapted to integrated circuits, produces a content-addressable memory which correctly yields an entire memory from any subpart of sufficient size.
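A minimal sketch of that content-addressable memory in the two-state case (outer-product storage, asynchronous threshold updates); the update budget and random update order are illustrative choices, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)

def store(patterns):
    """Hebbian (outer-product) weight matrix for +/-1 patterns,
    with the self-connections zeroed."""
    W = sum(np.outer(p, p) for p in patterns).astype(float)
    np.fill_diagonal(W, 0.0)
    return W

def recall(W, probe, steps=500):
    """Asynchronous threshold updates: a probe containing a sufficient
    subpart of a stored memory should settle to the full memory."""
    s = probe.copy()
    for i in rng.integers(0, len(s), size=steps):
        s[i] = 1 if W[i] @ s >= 0 else -1
    return s
```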
Journal ArticleDOI

Neurons with graded response have collective computational properties like those of two-state neurons

TL;DR: In this article, a model for a large network of "neurons" with a graded response (a sigmoid input-output relation) is studied and shown to have collective properties in very close correspondence with those of the earlier stochastic model based on McCulloch-Pitts neurons.
Journal ArticleDOI

Absolute stability of global pattern formation and parallel memory storage by competitive neural networks

TL;DR: It remains an open question whether the Lyapunov function approach, which requires a study of equilibrium points, or an alternative global approach, such as the Lyapunov functional approach, will ultimately handle all of the physically important cases.
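The Lyapunov function approach mentioned here has a standard form (reconstructed from the literature, not quoted from the paper): for competitive dynamics $\dot{x}_i = a_i(x_i)\big[b_i(x_i) - \sum_k c_{ik}\, d_k(x_k)\big]$ with symmetric $c_{ik}$, the function

$$V = -\sum_i \int_0^{x_i} b_i(\xi)\, d_i'(\xi)\, d\xi \;+\; \tfrac{1}{2} \sum_{j,k} c_{jk}\, d_j(x_j)\, d_k(x_k)$$

is nonincreasing along trajectories, which is what ties global pattern formation to a study of the equilibrium points.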