Book Chapter

Neural Networks for Pattern Recognition

Suresh Kothari, +1 more
01 Jan 1993 · Vol. 37, pp. 119-166
TLDR
The chapter discusses two important directions of research to improve learning algorithms: dynamic node generation, as used by the cascade-correlation algorithm, and the design of learning algorithms in which the choice of parameters is not an issue.
Abstract
Publisher Summary: This chapter provides an account of different neural network architectures for pattern recognition. A neural network consists of several simple processing elements called neurons. Each neuron is connected to some other neurons and possibly to the input nodes. Neural networks provide a simple computing paradigm to perform complex recognition tasks in real time. The chapter categorizes neural networks into three types: single-layer networks, multilayer feedforward networks, and feedback networks. It discusses gradient descent and the relaxation method as the two underlying mathematical themes for deriving learning algorithms. Much research activity centers on learning algorithms because of their fundamental importance in neural networks. The chapter discusses two important directions of research to improve learning algorithms: dynamic node generation, as used by the cascade-correlation algorithm, and the design of learning algorithms in which the choice of parameters is not an issue. It closes with a discussion of performance and implementation issues.
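To make the gradient-descent theme concrete, here is a minimal sketch of delta-rule training for a single sigmoid unit, assuming a toy AND dataset, a hand-picked learning rate, and a fixed random seed (all illustrative; this is not code from the chapter):

# Minimal sketch: gradient-descent (delta-rule) training of one sigmoid
# unit on a toy AND problem. Illustrative assumptions throughout.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy dataset (assumed): logical AND, with a constant bias input of 1.
X = np.array([[0, 0, 1], [0, 1, 1], [1, 0, 1], [1, 1, 1]], dtype=float)
t = np.array([0, 0, 0, 1], dtype=float)

rng = np.random.default_rng(0)
w = rng.normal(scale=0.1, size=3)   # weights, including the bias weight
eta = 0.5                           # learning rate, chosen by hand

for epoch in range(2000):
    y = sigmoid(X @ w)                      # forward pass
    grad = X.T @ ((y - t) * y * (1 - y))    # gradient of the squared error
    w -= eta * grad                         # gradient-descent step

print(np.round(sigmoid(X @ w), 3))          # outputs move toward [0, 0, 0, 1]

The hand-tuned learning rate eta is exactly the kind of parameter whose choice the second research direction above seeks to make a non-issue.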


Citations
Journal Article

A spatially constrained mixture model for image segmentation

TL;DR: A new methodology for the M-step of the EM algorithm, based on a novel constrained optimization formulation, shows superior performance in terms of the attained maximum value of the objective function and segmentation accuracy compared with previous implementations of this approach.
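For orientation, the sketch below runs plain EM on a one-dimensional two-component Gaussian mixture; the paper's actual contribution, a spatially constrained M-step, is not implemented here, and the data and starting values are assumptions for illustration:

# Plain EM for a 1-D two-component Gaussian mixture (no spatial
# constraints). All data and starting values are illustrative.
import numpy as np

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-2, 1, 200), rng.normal(3, 1, 200)])

pi = np.array([0.5, 0.5])            # mixing weights
mu = np.array([-1.0, 1.0])           # component means
sig = np.array([1.0, 1.0])           # component standard deviations

for _ in range(50):
    # E-step: posterior responsibility of each component for each point
    dens = pi * np.exp(-0.5 * ((x[:, None] - mu) / sig) ** 2) / (sig * np.sqrt(2 * np.pi))
    r = dens / dens.sum(axis=1, keepdims=True)
    # M-step: unconstrained maximum-likelihood updates (the cited paper
    # replaces this step with a constrained optimization)
    nk = r.sum(axis=0)
    pi = nk / len(x)
    mu = (r * x[:, None]).sum(axis=0) / nk
    sig = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk)

print(np.round(mu, 2), np.round(sig, 2), np.round(pi, 2))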
Proceedings Article

Performance and Scalability of GPU-Based Convolutional Neural Networks

TL;DR: This paper presents the implementation of a framework for accelerating training and classification of arbitrary Convolutional Neural Networks (CNNs) on the GPU, describes the basic parts of a CNN, and demonstrates the performance and scalability improvements that can be achieved by shifting the computation-intensive tasks of a CNN to the GPU.
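The "basic parts of a CNN" mentioned above center on the convolution operation; the naive NumPy sketch below shows the valid 2-D convolution (computed as cross-correlation, as CNNs do) that such a framework offloads to the GPU. The image and kernel are assumptions for illustration:

# Naive valid 2-D convolution (cross-correlation), the core computation
# a GPU-based CNN framework accelerates. CPU-only sketch for illustration.
import numpy as np

def conv2d_valid(image, kernel):
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

img = np.arange(25, dtype=float).reshape(5, 5)   # assumed 5x5 input
k = np.ones((3, 3)) / 9.0                        # mean filter as example kernel
print(conv2d_valid(img, k))                      # 3x3 feature map

The two nested loops are what a GPU implementation parallelizes, roughly one thread per output element.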
Journal Article

Approximation by fully complex multilayer perceptrons

TL;DR: Three proofs of the approximation capability of the fully complex MLP are provided, based on the singularity characteristics of elementary transcendental functions (ETFs); they show that the output of complex MLPs using ETFs with isolated and essential singularities converges uniformly to any nonlinear mapping in the deleted annulus of singularities nearest the origin.
Journal Article

Taking the bite out of automated naming of characters in TV video

TL;DR: It is demonstrated that high precision can be achieved by combining multiple sources of information, both visual and textual, with time-stamped character annotations generated automatically by aligning subtitles and transcripts.
Journal Article

Artificial neural networks and cluster analysis in landslide susceptibility zonation

TL;DR: The case study validates that a domain-specific distance measure in cluster formation makes it possible to introduce expert knowledge into the black-box modelling method implemented by ANNs, improving the predictive capability and robustness of the resulting models.
References
Journal Article

Neural networks and physical systems with emergent collective computational abilities

TL;DR: A model of a system having a large number of simple equivalent components, based on aspects of neurobiology but readily adapted to integrated circuits, produces a content-addressable memory which correctly yields an entire memory from any subpart of sufficient size.
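As a rough illustration of such content-addressable recall, the sketch below stores one bipolar pattern with the Hebb rule and recovers it from a corrupted subpart. The pattern is an arbitrary assumption, and the synchronous update loop is a simplification (Hopfield's model updates neurons asynchronously):

# Hopfield-style recall sketch: store one bipolar pattern with the Hebb
# rule, then recover it from a corrupted version by repeated updates.
import numpy as np

p = np.array([1, -1, 1, 1, -1, -1, 1, -1])  # stored pattern (assumed)
W = np.outer(p, p).astype(float)
np.fill_diagonal(W, 0)                      # no self-connections

s = p.copy()
s[:3] *= -1                                 # corrupt a subpart of the memory

for _ in range(5):                          # synchronous updates (simplified)
    s = np.sign(W @ s)

print(np.array_equal(s, p))                 # True: the full memory is recovered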
Journal Article

A logical calculus of the ideas immanent in nervous activity

TL;DR: In this article, it is shown that many particular choices among possible neurophysiological assumptions are equivalent, in the sense that for every net behaving under one assumption, there exists another net which behaves under another and gives the same results, although perhaps not in the same time.
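A McCulloch-Pitts unit is a weighted threshold element; the sketch below realizes AND and OR with such units (the weights and thresholds are illustrative choices, not taken from the paper):

# McCulloch-Pitts threshold units computing logic functions.
# Weights and thresholds are illustrative choices.
def mp_neuron(inputs, weights, threshold):
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total >= threshold else 0

for a in (0, 1):
    for b in (0, 1):
        and_out = mp_neuron((a, b), (1, 1), threshold=2)
        or_out = mp_neuron((a, b), (1, 1), threshold=1)
        print(a, b, "AND:", and_out, "OR:", or_out)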
Journal Article

An introduction to computing with neural nets

TL;DR: This paper provides an introduction to the field of artificial neural nets by reviewing six important neural net models that can be used for pattern classification and exploring how some existing classification and clustering algorithms can be performed using simple neuron-like components.
Journal Article

Neurons with graded response have collective computational properties like those of two-state neurons.

TL;DR: A model for a large network of "neurons" with a graded response (or sigmoid input-output relation) is studied; its collective properties are in very close correspondence with those of the earlier stochastic model based on McCulloch-Pitts neurons.
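The graded response can be pictured as a hard threshold softened into a sigmoid; in the sketch below (with gains chosen for illustration), the output saturates toward the two-state values ±1 as the gain grows, in line with the correspondence the paper establishes:

# A graded-response unit: tanh(gain * u) in place of a hard threshold.
# As the gain grows, it approaches two-state (McCulloch-Pitts) behaviour.
import numpy as np

def graded(u, gain):
    return np.tanh(gain * u)     # graded response in (-1, 1)

u = np.linspace(-1, 1, 5)
for gain in (1.0, 10.0, 100.0):
    print(gain, np.round(graded(u, gain), 3))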