Journal Article

Hidden neuron pruning of multilayer perceptrons using a quantified sensitivity measure

TLDR
This paper presents a pruning technique that uses a quantified sensitivity measure to remove as many of the least relevant neurons as possible from the hidden layer of a multilayer perceptron (MLP).
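As an illustration of the general idea only (the paper defines its own quantified sensitivity measure), the sketch below scores each hidden neuron of a toy single-hidden-layer MLP by how much the network output changes when that neuron is silenced, and prunes the least relevant one:

```python
# Illustrative sketch of sensitivity-based hidden-neuron pruning.
# The sensitivity proxy here (mean squared output change when a neuron
# is silenced) is a generic stand-in, not the paper's exact measure.
import numpy as np

rng = np.random.default_rng(0)

# Toy single-hidden-layer MLP: y = tanh(x W^T + b) V^T
n_in, n_hidden, n_out = 4, 10, 1
W = rng.normal(size=(n_hidden, n_in))
b = rng.normal(size=n_hidden)
V = rng.normal(size=(n_out, n_hidden))

X = rng.normal(size=(200, n_in))          # sample inputs

def forward(X, keep):
    """Forward pass using only the hidden neurons listed in `keep`."""
    H = np.tanh(X @ W[keep].T + b[keep])
    return H @ V[:, keep].T

keep = list(range(n_hidden))
baseline = forward(X, keep)

# Sensitivity of each hidden neuron: output deviation when it is removed.
sensitivity = []
for j in keep:
    reduced = [k for k in keep if k != j]
    sensitivity.append(np.mean((forward(X, reduced) - baseline) ** 2))

# Prune the least relevant neuron (smallest sensitivity); in practice this
# is repeated, with retraining, until accuracy starts to degrade.
least = keep[int(np.argmin(sensitivity))]
keep.remove(least)
print(f"pruned hidden neuron {least}; {len(keep)} remain")
```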
About
This article was published in Neurocomputing on 2006-03-01 and has received 90 citations to date. The article focuses on the topics: Pruning (decision trees) & Multilayer perceptron.


Citations
Journal Article

Review on Methods to Fix Number of Hidden Neurons in Neural Networks

TL;DR: Experimental results show that the proposed approach can be used for wind speed prediction in renewable energy systems with minimal error, and that the design of the neural network based on the proposed selection criteria is substantiated using a convergence theorem.
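In practice, most of the surveyed criteria come down to picking the hidden-layer size that minimizes error on held-out data. A minimal sketch of that generic sweep (not a specific rule from the review; scikit-learn's MLPRegressor is assumed available):

```python
# Minimal sketch of an empirical selection criterion for the number of
# hidden neurons: sweep candidate sizes, keep the lowest validation error.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(400, 3))
y = np.sin(X).sum(axis=1)                      # toy regression target

X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=0)

best_n, best_err = None, np.inf
for n in (2, 4, 8, 16, 32):
    net = MLPRegressor(hidden_layer_sizes=(n,), max_iter=2000, random_state=0)
    net.fit(X_tr, y_tr)
    err = mean_squared_error(y_val, net.predict(X_val))
    if err < best_err:
        best_n, best_err = n, err

print(f"selected {best_n} hidden neurons (validation MSE {best_err:.4f})")
```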
Journal Article

A neural network approach using multi-scale textural metrics from very high-resolution panchromatic imagery for urban land-use classification

TL;DR: Very high-resolution panchromatic images from QuickBird and WorldView-1 have been used to accurately classify the land use of four different urban environments; the results show that a multi-scale approach makes it possible to discriminate different asphalt surfaces thanks to their different textural information content.
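As a hedged sketch of what "multi-scale textural metrics" can mean in code (the paper defines its own metrics; local standard deviation over windows of several sizes is used here purely as a stand-in), each pixel gets one texture feature per scale, which can then feed a neural-network classifier:

```python
# Hedged stand-in for multi-scale textural features from a panchromatic
# image: local standard deviation at several window sizes, stacked into
# a per-pixel feature vector.
import numpy as np
from scipy.ndimage import uniform_filter

rng = np.random.default_rng(0)
image = rng.random((256, 256))                 # toy panchromatic image

def local_std(img, size):
    """Local standard deviation over a size x size window."""
    mean = uniform_filter(img, size)
    mean_sq = uniform_filter(img * img, size)
    return np.sqrt(np.maximum(mean_sq - mean * mean, 0.0))

scales = (3, 7, 15, 31)                        # multi-scale window sizes
features = np.stack([local_std(image, s) for s in scales], axis=-1)
print(features.shape)                          # (256, 256, 4): one vector per pixel
```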
Journal Article

Reverse Engineering the Neural Networks for Rule Extraction in Classification Problems

TL;DR: Experimental results show that the proposed RxREN algorithm is quite efficient, extracting a smaller set of rules with higher classification accuracy than those generated by other neural network rule extraction methods.
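A highly simplified sketch of the reverse-engineering idea (not the published RxREN algorithm): rank inputs by how much masking them hurts accuracy, then describe each class by interval rules over the significant inputs of correctly classified samples. scikit-learn and its iris dataset are assumed:

```python
# Simplified reverse-engineering sketch of rule extraction from a
# trained network; the real RxREN procedure is more involved.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.neural_network import MLPClassifier

X, y = load_iris(return_X_y=True)
net = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000,
                    random_state=0).fit(X, y)
base_acc = net.score(X, y)

# Significance of each input = accuracy drop when that input is zeroed out.
drops = []
for j in range(X.shape[1]):
    X_masked = X.copy()
    X_masked[:, j] = 0.0
    drops.append(base_acc - net.score(X_masked, y))
significant = [j for j, d in enumerate(drops) if d > 0.05]

# One interval rule per class, from the correctly classified samples.
correct = net.predict(X) == y
for c in np.unique(y):
    sel = correct & (y == c)
    bounds = [(X[sel, j].min(), X[sel, j].max()) for j in significant]
    clauses = " AND ".join(
        f"{lo:.2f} <= x{j} <= {hi:.2f}" for j, (lo, hi) in zip(significant, bounds))
    print(f"IF {clauses} THEN class {c}")
```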
Journal Article

Pruning algorithms of neural networks — a comparative study

TL;DR: A survey of existing pruning techniques that optimize the architecture of neural networks is provided, and their advantages and limitations are discussed.
Journal Article

Effective Neural Network Ensemble Approach for Improving Generalization Performance

TL;DR: Experimental results show that the proposed approach can construct a neural network ensemble with better generalization performance than each individual network in the ensemble and than ensembles formed by simply averaging weights.
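To ground the baseline the paper improves on, the sketch below trains a few independent member networks and combines them by output averaging, the standard ensemble prediction (the paper's specific selection and weighting scheme is not reproduced here):

```python
# Baseline ensemble sketch: independently trained members combined by
# averaging their outputs on held-out data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(400, 2))
y = np.sin(3 * X[:, 0]) * np.cos(2 * X[:, 1])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Five member networks, diversified only through their random seeds.
members = [MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000,
                        random_state=s).fit(X_tr, y_tr) for s in range(5)]

preds = np.array([m.predict(X_te) for m in members])
print("member test MSEs:", np.round(((preds - y_te) ** 2).mean(axis=1), 5))
print("ensemble test MSE:",
      round(float(((preds.mean(axis=0) - y_te) ** 2).mean()), 5))
```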
References
Journal Article

Multilayer feedforward networks are universal approximators

TL;DR: It is rigorously established that standard multilayer feedforward networks with as few as one hidden layer using arbitrary squashing functions are capable of approximating any Borel measurable function from one finite dimensional space to another to any desired degree of accuracy, provided sufficiently many hidden units are available.
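Stated compactly (a standard paraphrase: for continuous targets the approximation is uniform on compacta, while for general Borel measurable targets it holds in measure), the result says that for any tolerance there exist a width N, weights, and biases such that

```latex
% K \subset \mathbb{R}^n compact, \sigma any squashing function,
% f continuous on K, \varepsilon > 0:
\[
  \sup_{x \in K} \left| f(x) - \sum_{j=1}^{N} v_j \,
      \sigma\!\left(w_j^{\top} x + b_j\right) \right| < \varepsilon .
\]
```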
Proceedings Article

Optimal Brain Damage

TL;DR: A class of practical and nearly optimal schemes for adapting the size of a neural network by using second-derivative information to make a tradeoff between network complexity and training set error is derived.
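OBD's saliency for weight w_k is s_k = h_kk w_k^2 / 2, where h_kk is the k-th diagonal entry of the loss Hessian at a trained minimum. A minimal sketch with a toy stand-in loss, estimating the diagonal by finite differences rather than OBD's backprop-style recursion:

```python
# Optimal Brain Damage saliency sketch: s_k = H_kk * w_k^2 / 2.
import numpy as np

def loss(w):
    # Stand-in for a trained network's loss as a function of its weights.
    return np.sum(w ** 2 * np.array([0.1, 1.0, 5.0, 0.01]))

w = np.array([1.5, 0.8, 0.3, 2.0])             # "trained" weights
eps = 1e-4

# Central-difference estimate of the Hessian diagonal.
H_diag = np.empty_like(w)
for k in range(w.size):
    e = np.zeros_like(w); e[k] = eps
    H_diag[k] = (loss(w + e) - 2 * loss(w) + loss(w - e)) / eps ** 2

saliency = 0.5 * H_diag * w ** 2               # OBD saliency per weight
print("saliencies:", np.round(saliency, 4))
print("prune order:", np.argsort(saliency))    # delete smallest-saliency first
```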
Proceedings Article

Second order derivatives for network pruning: Optimal Brain Surgeon

TL;DR: Of OBS, Optimal Brain Damage, and magnitude-based methods, only OBS deletes the correct weights from a trained XOR network in every case, and thus yields better generalization on test data.
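OBS deletes the weight q that minimizes L_q = w_q^2 / (2 [H^-1]_qq) and compensates the surviving weights with the update dw = -(w_q / [H^-1]_qq) H^-1 e_q, which drives the deleted weight exactly to zero with minimal loss increase. A minimal sketch with a small synthetic positive-definite Hessian standing in for the real one:

```python
# Optimal Brain Surgeon sketch: saliency plus compensating weight update.
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4))
H = A @ A.T + 4 * np.eye(4)                   # synthetic Hessian (pos. def.)
w = np.array([1.5, 0.8, 0.3, 2.0])            # "trained" weights

H_inv = np.linalg.inv(H)
saliency = w ** 2 / (2 * np.diag(H_inv))      # OBS saliency per weight
q = int(np.argmin(saliency))                  # cheapest weight to delete

e_q = np.zeros_like(w); e_q[q] = 1.0
dw = -(w[q] / H_inv[q, q]) * (H_inv @ e_q)    # compensating update
w_new = w + dw
print(f"deleted weight {q}; new weights: {np.round(w_new, 4)}")
assert abs(w_new[q]) < 1e-12                  # deleted weight is exactly zero
```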
Journal Article

Pruning algorithms-a survey

TL;DR: The approach taken by the methods described here is to train a network that is larger than necessary and then remove the parts that are not needed.
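The simplest instance of that train-then-remove pattern is iterative magnitude pruning, the baseline criterion such surveys cover. A minimal sketch, with a boolean mask standing in for actual deletion and retraining elided:

```python
# Train-large-then-prune sketch: repeatedly drop the smallest-magnitude
# surviving weights; real methods use more refined relevance criteria.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 8))                    # oversized "trained" weights
mask = np.ones_like(W, dtype=bool)

for step in range(3):                          # prune 20% of survivors per round
    alive = np.abs(W[mask])
    threshold = np.quantile(alive, 0.2)
    mask &= np.abs(W) > threshold
    # ... in practice the network is retrained here before the next round
    print(f"round {step}: {mask.sum()} of {W.size} weights remain")

W_pruned = W * mask
```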