
Hammam A. Alshazly

Researcher at South Valley University

Publications: 37
Citations: 963

Hammam A. Alshazly is an academic researcher from South Valley University. The author has contributed to research on topics including computer science and deep learning. The author has an h-index of 7 and has co-authored 22 publications receiving 323 citations. Previous affiliations of Hammam A. Alshazly include the University of Lübeck and the University of Kansas.

Papers
Book Chapter

Image Features Detection, Description and Matching

TL;DR: This chapter introduces basic notation and mathematical concepts for detecting and describing image features, discusses the properties of ideal features, and gives an overview of existing detection and description methods.
Journal Article

Explainable COVID-19 Detection Using Chest CT Scans and Deep Learning.

TL;DR: Explores how well deep learning models trained on chest CT images can diagnose COVID-19-infected patients in a fast, automated process, and proposes a transfer learning strategy using custom-sized inputs tailored to each deep architecture to achieve the best performance.
Journal Article

Ear recognition using local binary patterns: A comparative experimental study

TL;DR: The results for both identification and verification indicate that current LBP texture descriptors are successful feature-extraction candidates for ear recognition systems under constrained imaging conditions, achieving recognition rates of up to 99%; however, their performance degrades as the level of distortion increases.
Journal Article

Diabetic Retinopathy Diagnosis From Fundus Images Using Stacked Generalization of Deep Models

TL;DR: In this article, the authors proposed a methodology that eliminates unnecessary reflectance properties of the images using a novel image-processing scheme and applies a stacked deep learning technique for diagnosing diabetic retinopathy.
Journal Article

Ensembles of Deep Learning Models and Transfer Learning for Ear Recognition.

TL;DR: A novel ear recognition system based on ensembles of deep CNN models, specifically Visual Geometry Group (VGG)-like network architectures, that extracts discriminative deep features from ear images and achieves significant improvements over recently published results.