Book Chapter
Generalization capability of artificial neural network incorporated with pruning method
Siddhaling Urolagin, K. V. Prema, N. V. Subba Reddy, et al.
pp. 171-178
TL;DR: A method is developed in this research to evaluate the generalization capability, and the rate of convergence towards generalization, of a Multi-Layer Perceptron neural network with pruning incorporated, applied to handwritten numeral recognition.

Abstract:
In any real-world application, the performance of an Artificial Neural Network (ANN) depends mostly on its generalization capability. Generalization of the ANN is its ability to handle unseen data. The generalization capability of the network is largely determined by the system complexity and the training of the network. Poor generalization is observed when the network is over-trained, or when the system complexity (or degrees of freedom) is relatively greater than the training data. A smaller network which can fit the data will have good generalization ability. Network parameter pruning is one of the promising methods to reduce the degrees of freedom of a network and hence improve its generalization. In recent years various pruning methods have been developed and found effective in real-world applications. It is then important to estimate the improvement in generalization, and the rate of that improvement, as pruning is incorporated into the network. A method is developed in this research to evaluate generalization capability and the rate of convergence towards generalization. Using the proposed method, experiments have been conducted to evaluate a Multi-Layer Perceptron neural network with pruning incorporated for handwritten numeral recognition.
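The pruning idea summarized in the abstract can be sketched as magnitude-based weight pruning (an illustrative sketch only; the chapter does not state which pruning criterion it uses, and the layer sizes and 50% pruning fraction below are assumptions for the example):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer MLP weight matrices (hypothetical sizes).
W1 = rng.normal(0.0, 1.0, (8, 4))   # input -> hidden
W2 = rng.normal(0.0, 1.0, (4, 2))   # hidden -> output

def prune_by_magnitude(W, fraction):
    """Zero out the given fraction of smallest-magnitude weights,
    reducing the network's effective degrees of freedom."""
    k = int(np.floor(fraction * W.size))
    if k == 0:
        return W.copy()
    # k-th smallest absolute weight is the pruning threshold
    threshold = np.sort(np.abs(W), axis=None)[k - 1]
    pruned = W.copy()
    pruned[np.abs(pruned) <= threshold] = 0.0
    return pruned

W1_pruned = prune_by_magnitude(W1, 0.5)
remaining = np.count_nonzero(W1_pruned)
print(remaining, "of", W1.size, "weights remain")
```

Zeroing the smallest-magnitude weights reduces the network's degrees of freedom, which, per the abstract, is the route by which pruning can improve generalization on unseen data.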
Citations
Journal Article
Comparative analysis on hidden neurons estimation in multi layer perceptron neural networks for wind speed forecasting
M. Madhiarasan, S. N. Deepa, et al.
TL;DR: Simulation results show that the proposed methodology minimizes computational error and enhances prediction accuracy; the approach is compact, with reduced error and faster convergence.
Proceedings Article
Generalization of Deep Learning for Cyber-Physical System Security: A Survey
TL;DR: This paper provides a concise survey of the regularization methods for DL algorithms used in security-related applications in CPSs, and thus could help improve the generalization capability of DL-based security applications in cyber-physical systems.
Journal Article
A novel criterion to select hidden neuron numbers in improved back propagation networks for wind speed forecasting
M. Madhiarasan, S. N. Deepa, et al.
TL;DR: Simulation results show that the proposed approach reduces the error to a minimal value and enhances forecasting accuracy; the construction of improved back-propagation networks employing the proposed criterion is substantiated by the convergence theorem.
Journal Article
Pattern recognition for electroencephalographic signals based on continuous neural networks
TL;DR: In comparison with similar pattern recognition methods, even those that considered fewer classes, the proposed CNN achieved the same or better correct-classification results.
Proceedings Article
First-order-principles-based constructive network topologies: An application to robot inverse dynamics
TL;DR: The concept FOPnet is introduced, which uses first-order principles and system knowledge to determine topologies of parametrized operator networks that accurately model input-output mappings of physical systems; it achieved a generalization RMSE seven orders of magnitude smaller.
References
Book
Estimation of Dependences Based on Empirical Data
TL;DR: A foundational text on estimating dependences from empirical data; its retrospective afterword surveys the big picture of inference, arguing for direct inference instead of generalization.
Journal Article
Learnability and the Vapnik-Chervonenkis dimension
TL;DR: This paper shows that the essential condition for distribution-free learnability is finiteness of the Vapnik-Chervonenkis dimension, a simple combinatorial parameter of the class of concepts to be learned.
Proceedings Article
Second order derivatives for network pruning: Optimal Brain Surgeon
Babak Hassibi, David G. Stork, et al.
TL;DR: Of OBS, Optimal Brain Damage, and magnitude-based methods, only OBS deletes the correct weights from a trained XOR network in every case, and thus yields better generalization on test data.
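The Optimal Brain Surgeon criterion summarized above ranks each weight by a second-order saliency, L_q = w_q^2 / (2 [H^-1]_qq), deletes the least salient weight, and adjusts the remaining weights to compensate. A minimal numerical sketch (the weight vector and Hessian below are made-up illustrative values, not taken from the cited XOR experiments):

```python
import numpy as np

w = np.array([0.9, -0.05, 0.4])          # trained weights (illustrative)
H = np.array([[2.0, 0.1, 0.0],
              [0.1, 1.5, 0.2],
              [0.0, 0.2, 1.0]])          # Hessian of the training error (illustrative)

H_inv = np.linalg.inv(H)

# OBS saliency: estimated increase in error from deleting weight q,
# L_q = w_q**2 / (2 * [H^-1]_qq)
saliency = w**2 / (2.0 * np.diag(H_inv))
q = int(np.argmin(saliency))             # delete the least-salient weight

# OBS weight update: compensates the surviving weights for the deletion
# and drives w[q] itself to zero.
delta_w = -(w[q] / H_inv[q, q]) * H_inv[:, q]
w_new = w + delta_w
```

Unlike magnitude pruning, OBS uses the inverse Hessian both to choose which weight to remove and to retrain the rest in closed form, which is why the cited paper finds it deletes the correct weights where magnitude-based methods can fail.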
Journal Article
Pruning algorithms-a survey
TL;DR: The approach taken by the methods described here is to train a network that is larger than necessary and then remove the parts that are not needed.
Journal Article
A simple procedure for pruning back-propagation trained neural networks
TL;DR: Shadow arrays are introduced which keep track of the incremental changes to the synaptic weights during a single pass of back-propagation learning; weights are ordered by decreasing sensitivity numbers so that the network can be efficiently pruned by discarding the last items of the sorted list.