AbdElRahman A. ElSaid

Researcher at Rochester Institute of Technology

Publications - 27
Citations - 294

AbdElRahman A. ElSaid is an academic researcher at the Rochester Institute of Technology. His research focuses on recurrent neural networks and artificial neural networks. He has an h-index of 7 and has co-authored 27 publications receiving 200 citations. His previous affiliations include the University of North Dakota.

Papers
Journal Article (DOI)

Optimizing long short-term memory recurrent neural networks using ant colony optimization to predict turbine engine vibration

TL;DR: In this article, an improved version of the ant colony optimization (ACO) algorithm has been used to develop a recurrent neural network (RNN) capable of predicting aircraft engine vibrations using LSTM neurons.
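
The ACO-based architecture search can be illustrated with a minimal sketch: ants probabilistically choose which candidate connections to include in a network, the resulting networks are evaluated, and pheromone is reinforced along the connections used by the best-performing ant. Everything below (connection count, ant count, the `evaluate` stand-in) is a hypothetical simplification, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: candidate connections an ant may include in a network.
n_connections = 20
n_ants = 10
n_iterations = 50
evaporation = 0.1            # pheromone decay per iteration

pheromone = np.ones(n_connections)   # start with uniform pheromone

def evaluate(mask):
    """Stand-in for training an LSTM with the chosen connections and
    returning its validation error (lower is better). Here we fake it:
    a hidden 'good' subset of connections yields lower error."""
    good = np.zeros(n_connections, dtype=bool)
    good[::3] = True
    return np.sum(mask != good) + rng.normal(0, 0.1)

best_mask, best_err = None, np.inf
for _ in range(n_iterations):
    # Each ant samples a connection mask with probability proportional to pheromone.
    prob = pheromone / pheromone.max()
    masks = rng.random((n_ants, n_connections)) < prob
    errors = np.array([evaluate(m) for m in masks])

    # Track the best network found so far.
    i = errors.argmin()
    if errors[i] < best_err:
        best_err, best_mask = errors[i], masks[i]

    # Evaporate, then reinforce connections used by this iteration's best ant.
    pheromone *= (1.0 - evaporation)
    pheromone[masks[i]] += 1.0 / (1.0 + errors[i])

print("best error:", round(float(best_err), 3))
```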
Proceedings Article (DOI)

Using LSTM recurrent neural networks to predict excess vibration events in aircraft engines

TL;DR: Recurrent neural networks using Long Short-Term Memory neurons to predict aircraft engine vibrations offer a promising basis for future warning systems, allowing suitable action to be taken before excess vibration occurs and unfavorable in-flight situations arise.
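
A minimal Keras sketch of the general setup, assuming a sliding window of flight-data features as input and a single vibration value as the regression target; the feature set, window length, and alert threshold are placeholders, not the paper's actual configuration.

```python
import numpy as np
import tensorflow as tf

# Hypothetical data: predict an engine vibration value from a sliding window
# of flight-data parameters (not the paper's exact features or horizon).
n_samples, window, n_features = 1000, 50, 8
X = np.random.rand(n_samples, window, n_features).astype("float32")
y = np.random.rand(n_samples, 1).astype("float32")   # vibration target

model = tf.keras.Sequential([
    tf.keras.Input(shape=(window, n_features)),
    tf.keras.layers.LSTM(32),    # LSTM layer summarizes the input window
    tf.keras.layers.Dense(1),    # regression output: predicted vibration
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=2, batch_size=32, verbose=0)

# Flag windows where the predicted vibration exceeds a chosen threshold,
# which is the kind of signal a warning system could act on.
threshold = 0.8
predictions = model.predict(X[:5], verbose=0)
print((predictions > threshold).ravel())
```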
Proceedings Article (DOI)

Using ant colony optimization to optimize long short-term memory recurrent neural networks

TL;DR: The ACO-optimized LSTM performed significantly better than traditional Nonlinear Output Error (NOE), Nonlinear AutoRegression with eXogenous inputs (NARX), and Nonlinear Box-Jenkins (NBJ) models, which only reached error rates of 11.47% and 9.77%.
Book Chapter (DOI)

Evolving Recurrent Neural Networks for Time Series Data Prediction of Coal Plant Parameters

TL;DR: The Evolutionary eXploration of Augmenting LSTM Topologies (EXALT) algorithm and its use in evolving recurrent neural networks (RNNs) for time series data prediction are presented, showing strong potential to beat traditional architectures given additional runtime.
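
A toy sketch of the kind of mutate-evaluate-select loop such topology-evolving algorithms use, where a genome is reduced to a list of recurrent-layer sizes and fitness is a stand-in for validation error after training; EXALT's actual genome, mutation operators, and distributed training are far richer than this.

```python
import random

random.seed(0)

def mutate(genome):
    """Return a copy of the genome with one structural change applied."""
    g = list(genome)
    op = random.choice(["add", "remove", "resize"])
    if op == "add" or not g:
        g.insert(random.randrange(len(g) + 1), random.randint(4, 64))
    elif op == "remove" and len(g) > 1:
        g.pop(random.randrange(len(g)))
    else:
        i = random.randrange(len(g))
        g[i] = max(4, g[i] + random.randint(-8, 8))
    return g

def fitness(genome):
    # Stand-in for "train this RNN and return validation error":
    # pretend moderately sized two-layer networks are ideal.
    size_penalty = abs(sum(genome) - 64) / 64
    depth_penalty = abs(len(genome) - 2)
    return size_penalty + depth_penalty

population = [[random.randint(4, 64)] for _ in range(10)]
for generation in range(30):
    # Rank by fitness (lower error is better) and keep the best half.
    population.sort(key=fitness)
    survivors = population[:5]
    # Refill the population with mutated copies of the survivors.
    population = survivors + [mutate(random.choice(survivors)) for _ in range(5)]

best = min(population, key=fitness)
print("best genome (layer sizes):", best, "error:", round(fitness(best), 3))
```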
Book Chapter (DOI)

An Experimental Study of Weight Initialization and Lamarckian Inheritance on Neuroevolution

TL;DR: In this article, the differences between the state-of-the-art Xavier and Kaiming weight initialization methods and a novel Lamarckian weight inheritance scheme applied during crossover and mutation operations were explored using the Evolutionary eXploration of Augmenting Memory Models (EXAMM) neuroevolution algorithm.
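
The two initialization schemes and the Lamarckian alternative can be sketched with NumPy: Xavier draws from a uniform range based on fan-in plus fan-out, Kaiming draws from a normal distribution based on fan-in, and Lamarckian inheritance starts the child from its trained parents' weights instead of re-initializing. The blend rule below is an illustrative choice, not EXAMM's exact scheme.

```python
import numpy as np

rng = np.random.default_rng(0)
fan_in, fan_out = 128, 64

# Xavier (Glorot) uniform initialization: bound = sqrt(6 / (fan_in + fan_out)).
bound = np.sqrt(6.0 / (fan_in + fan_out))
w_xavier = rng.uniform(-bound, bound, size=(fan_out, fan_in))

# Kaiming (He) normal initialization: std = sqrt(2 / fan_in).
w_kaiming = rng.normal(0.0, np.sqrt(2.0 / fan_in), size=(fan_out, fan_in))

# Lamarckian-style inheritance during crossover: rather than re-initializing,
# the child starts from a blend of its (already trained) parents' weights.
w_parent_a = w_xavier + rng.normal(0, 0.01, size=w_xavier.shape)    # "trained" parent A
w_parent_b = w_kaiming + rng.normal(0, 0.01, size=w_kaiming.shape)  # "trained" parent B
mix = rng.uniform(0.0, 1.0, size=w_parent_a.shape)
w_child = mix * w_parent_a + (1.0 - mix) * w_parent_b

# For a mutation (e.g. a newly added edge), only the new weights would be
# randomly initialized; inherited weights are kept from the parent.
print(w_xavier.std(), w_kaiming.std(), w_child.std())
```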