Nazri Mohd Nawi

Researcher at Universiti Tun Hussein Onn Malaysia

Publications: 155
Citations: 1806

Nazri Mohd Nawi is an academic researcher from Universiti Tun Hussein Onn Malaysia. The author has contributed to research in the topics of Backpropagation and Artificial neural network. The author has an h-index of 19 and has co-authored 142 publications receiving 1443 citations. Previous affiliations of Nazri Mohd Nawi include the University of Wales and Multimedia University.

Papers
Journal Article

The Effect of Data Pre-processing on Optimized Training of Artificial Neural Networks

TL;DR: Simulation results show that the computational efficiency of the ANN training process is highly enhanced when coupled with different pre-processing techniques, particularly the Min-Max, Z-Score, and Decimal Scaling normalization techniques.
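As an illustration of the three schemes named in the summary, the sketch below implements generic Min-Max, Z-Score, and Decimal Scaling normalization with NumPy. It is the standard textbook formulation, not code from the paper, and the sample data is invented for the example.

```python
import numpy as np

def min_max_normalize(x, new_min=0.0, new_max=1.0):
    """Rescale values linearly into [new_min, new_max]."""
    x = np.asarray(x, dtype=float)
    return (x - x.min()) / (x.max() - x.min()) * (new_max - new_min) + new_min

def z_score_normalize(x):
    """Center to zero mean and scale to unit standard deviation."""
    x = np.asarray(x, dtype=float)
    return (x - x.mean()) / x.std()

def decimal_scaling_normalize(x):
    """Divide by 10^j, where j is the smallest integer with max(|x| / 10^j) < 1."""
    x = np.asarray(x, dtype=float)
    j = int(np.floor(np.log10(np.abs(x).max()))) + 1
    return x / (10 ** j)

if __name__ == "__main__":
    data = np.array([120.0, 350.0, 980.0, 45.0])   # made-up sample values
    print(min_max_normalize(data))
    print(z_score_normalize(data))
    print(decimal_scaling_normalize(data))
```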
Book Chapter

A new back-propagation neural network optimized with cuckoo search algorithm

TL;DR: The simulation results show that the computational efficiency of the BP training process is highly enhanced when coupled with the proposed hybrid method; the performance of the proposed Cuckoo Search Back-Propagation (CSBP) algorithm is compared with the artificial bee colony back-propagation algorithm and other hybrid variants.
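To make the idea of the hybrid concrete, the minimal sketch below applies a basic cuckoo search (Lévy-flight moves plus abandonment of the worst nests) directly to the weight vector of a tiny feed-forward network on XOR data. The network size, hyperparameters, and data are assumptions for illustration; the paper's CSBP couples cuckoo search with gradient-based back-propagation rather than replacing it, so this is a generic sketch, not the authors' algorithm.

```python
import numpy as np
from math import gamma, sin, pi

rng = np.random.default_rng(0)

# Toy XOR problem (illustrative only)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

N_HIDDEN = 4
DIM = 2 * N_HIDDEN + N_HIDDEN + N_HIDDEN + 1   # W1, b1, W2, b2 flattened

def unpack(w):
    i = 0
    W1 = w[i:i + 2 * N_HIDDEN].reshape(2, N_HIDDEN); i += 2 * N_HIDDEN
    b1 = w[i:i + N_HIDDEN]; i += N_HIDDEN
    W2 = w[i:i + N_HIDDEN].reshape(N_HIDDEN, 1); i += N_HIDDEN
    b2 = w[i:i + 1]
    return W1, b1, W2, b2

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def mse(w):
    W1, b1, W2, b2 = unpack(w)
    out = sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2)
    return float(np.mean((out - y) ** 2))

def levy_step(size, beta=1.5):
    # Mantegna's algorithm for Levy-distributed step lengths
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0, sigma, size)
    v = rng.normal(0, 1, size)
    return u / np.abs(v) ** (1 / beta)

def cuckoo_search(n_nests=15, n_iter=500, pa=0.25, alpha=0.01):
    nests = rng.normal(0, 1, (n_nests, DIM))
    fitness = np.array([mse(n) for n in nests])
    best = nests[fitness.argmin()].copy()
    for _ in range(n_iter):
        # New candidate solutions via Levy flights biased toward the current best
        for i in range(n_nests):
            new = nests[i] + alpha * levy_step(DIM) * (nests[i] - best)
            f_new = mse(new)
            j = rng.integers(n_nests)
            if f_new < fitness[j]:
                nests[j], fitness[j] = new, f_new
        # Abandon a fraction pa of the worst nests and rebuild them randomly
        n_abandon = int(pa * n_nests)
        worst = fitness.argsort()[-n_abandon:]
        nests[worst] = rng.normal(0, 1, (n_abandon, DIM))
        fitness[worst] = [mse(n) for n in nests[worst]]
        best = nests[fitness.argmin()].copy()
    return best, fitness.min()

best_w, best_err = cuckoo_search()
print("best MSE on XOR:", best_err)
```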
Journal Article

A New Levenberg Marquardt Based Back Propagation Algorithm Trained with Cuckoo Search

TL;DR: An improved Levenberg-Marquardt (LM) based back-propagation (BP) algorithm trained with the Cuckoo Search algorithm is proposed for fast and improved convergence of the hybrid neural network learning method.
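For reference, the textbook Levenberg-Marquardt weight update that such hybrids build on is sketched below. This is the standard formulation of a single LM step, not the paper's cuckoo-search-assisted variant; the function name and damping value are illustrative.

```python
import numpy as np

def levenberg_marquardt_step(J, e, w, mu=1e-3):
    """One generic LM update: w_new = w - (J^T J + mu*I)^(-1) J^T e,
    where J is the Jacobian of the residual vector e with respect to
    the weights w and mu is the damping parameter."""
    JTJ = J.T @ J
    g = J.T @ e
    delta = np.linalg.solve(JTJ + mu * np.eye(JTJ.shape[0]), g)
    return w - delta
```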
Journal Article

Non-stationary and stationary prediction of financial time series using dynamic ridge polynomial neural network

TL;DR: Simulation results indicate that the DRPNN in most cases demonstrates advantages in capturing chaotic movements in the signals, with an improvement in profit return and more rapid convergence than other network models.
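As a rough picture of the model family, the sketch below gives a forward pass for a ridge-polynomial-style network built from pi-sigma blocks of increasing order, with the previous output fed back as an extra input to capture the "dynamic" part. This is an assumed, simplified formulation for illustration only, not the paper's exact DRPNN architecture or training procedure.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class SimpleDRPNN:
    """Forward pass only: pi-sigma blocks of order 1..order are summed into a
    single output, and the previous output is appended to the inputs at each
    time step (an assumption about the recurrent feedback)."""

    def __init__(self, n_inputs, order=3, seed=0):
        rng = np.random.default_rng(seed)
        # Block i has (i + 1) linear (ridge) units over inputs + feedback + bias
        self.W = [rng.normal(0, 0.1, (i + 1, n_inputs + 2)) for i in range(order)]

    def forward(self, x_seq):
        y_prev, outputs = 0.0, []
        for x in x_seq:
            z = np.concatenate([x, [y_prev, 1.0]])   # inputs + feedback + bias
            total = sum(np.prod(W_i @ z) for W_i in self.W)  # product of ridge units per block
            y_prev = sigmoid(total)
            outputs.append(y_prev)
        return np.array(outputs)

# Toy usage on a short univariate series (made-up data)
series = np.sin(np.linspace(0, 6, 30)).reshape(-1, 1)
model = SimpleDRPNN(n_inputs=1)
print(model.forward(series)[:5])
```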
Book Chapter

An Improved Back Propagation Neural Network Algorithm on Classification Problems

TL;DR: The proposed algorithm improves the performance of the back-propagation algorithm by introducing an adaptive gain for the activation function, which significantly improves the learning speed of conventional back-propagation.
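As a rough illustration of an adaptive gain, the sketch below trains a single logistic unit whose activation is f(a) = 1 / (1 + exp(-c*a)) and updates the gain c by gradient descent alongside the weights. The toy data, learning rate, and update rule are assumptions made for the example, not the paper's algorithm.

```python
import numpy as np

def sigmoid_gain(a, c):
    """Logistic activation with a trainable gain c."""
    return 1.0 / (1.0 + np.exp(-c * a))

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 0, 0, 1], dtype=float)      # simple AND target (illustrative)

w = rng.normal(0, 0.5, 3)   # two input weights + bias
c = 1.0                     # adaptive gain, starts as the standard sigmoid
lr = 0.5

for epoch in range(2000):
    a = X @ w[:2] + w[2]            # net input
    out = sigmoid_gain(a, c)
    err = out - y
    d_out = out * (1.0 - out)       # derivative of the logistic part
    # Chain rule: d(out)/dw = d_out * c * x,  d(out)/dc = d_out * a
    grad_w = (err * d_out * c) @ np.column_stack([X, np.ones(len(X))])
    grad_c = np.sum(err * d_out * a)
    w -= lr * grad_w / len(X)
    c -= lr * grad_c / len(X)

print("final gain:", c)
print("outputs:", sigmoid_gain(X @ w[:2] + w[2], c))
```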