
Abu Saleh Musa Miah

Researcher at University of Rajshahi

Publications: 25
Citations: 122

Abu Saleh Musa Miah is an academic researcher from the University of Rajshahi. The author has contributed to research in the topics of Computer science and Pattern recognition (psychology), has an h-index of 2, and has co-authored 13 publications receiving 13 citations. Previous affiliations of Abu Saleh Musa Miah include Bangladesh University.

Papers
Journal Article

BenSignNet: Bengali Sign Language Alphabet Recognition Using Concatenated Segmentation and Convolutional Neural Network

TL;DR: A novel method for recognizing Bengali sign language (BSL) alphabets is proposed to overcome the issue of generalization; it achieves a higher recognition rate than conventional methods and demonstrates the generalization property on all datasets in the BSL domain.
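For context, a minimal sketch of the kind of pipeline this summary describes: a small CNN classifier applied to segmented Bengali sign-alphabet images. The layer sizes, the 64x64 input, and the 38-class output are illustrative assumptions, not the BenSignNet architecture itself.

```python
# Sketch only: a generic CNN classifier for segmented sign-alphabet images.
# Architecture details below are assumptions, not the paper's network.
import tensorflow as tf

num_classes = 38                       # assumed number of BSL alphabet signs
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(64, 64, 3)),          # segmented hand image
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(num_classes, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_images, train_labels, epochs=...) on segmented BSL images
```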
Journal Article

Motor-Imagery Classification Using Riemannian Geometry with Median Absolute Deviation

TL;DR: A median absolute deviation (MAD) strategy that averages sample covariance matrices (SCMs) to select an accurate reference matrix for tangent space mapping (TSM) of MI-EEG data, and provides better accuracy than more sophisticated methods.
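As an illustration of the idea summarized above, the sketch below filters trial covariance matrices with a median-absolute-deviation rule before averaging them into a reference point for tangent space mapping. The data shapes, the MAD threshold, and the Euclidean mean of the retained SCMs are assumptions for illustration, not the paper's exact procedure.

```python
# Minimal sketch: MAD-based selection of SCMs before tangent space mapping.
import numpy as np
from scipy.linalg import logm, fractional_matrix_power

def tangent_space_features(trials, mad_k=2.5):
    """trials: array of shape (n_trials, n_channels, n_samples)."""
    # Sample covariance matrix (SCM) of each trial
    scms = np.array([np.cov(x) for x in trials])

    # Distance of each SCM to the elementwise median SCM (Frobenius norm)
    med = np.median(scms, axis=0)
    dists = np.array([np.linalg.norm(c - med) for c in scms])

    # MAD rule: keep SCMs whose distance to the median is not an outlier
    mad = np.median(np.abs(dists - np.median(dists)))
    keep = dists <= np.median(dists) + mad_k * mad

    # Reference matrix = mean of retained SCMs (Euclidean mean here;
    # a Riemannian mean could be substituted)
    c_ref = scms[keep].mean(axis=0)

    # Tangent space mapping: S_i -> log(C_ref^(-1/2) S_i C_ref^(-1/2))
    inv_sqrt = fractional_matrix_power(c_ref, -0.5)
    feats = [logm(inv_sqrt @ c @ inv_sqrt) for c in scms]

    # Vectorise the upper triangle of each mapped matrix as features
    return np.array([f[np.triu_indices(f.shape[0])] for f in feats]).real

# Random data standing in for band-pass-filtered MI-EEG trials
X = np.random.randn(40, 8, 250)        # 40 trials, 8 channels, 250 samples
features = tangent_space_features(X)
print(features.shape)                  # (40, 36) for 8 channels
```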
Proceedings Article

Hand Gesture Recognition Based on Optimal Segmentation in Human-Computer Interaction

TL;DR: In this article, the authors propose an optimal segmentation method for identifying hand gestures from input images, improving recognition performance by comparing the YCbCr, SkinMask, and HSV (hue, saturation, value) segmentation methods.
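To make the comparison concrete, here is a minimal OpenCV sketch of skin-based hand segmentation in the YCbCr and HSV colour spaces. The threshold ranges and the input path are common illustrative values, not the thresholds tuned in the paper, and the SkinMask variant is omitted.

```python
# Sketch only: skin segmentation masks in two colour spaces with OpenCV.
import cv2
import numpy as np

def skin_mask_ycbcr(bgr):
    ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)
    lower = np.array([0, 133, 77], dtype=np.uint8)     # Y, Cr, Cb (assumed)
    upper = np.array([255, 173, 127], dtype=np.uint8)
    return cv2.inRange(ycrcb, lower, upper)

def skin_mask_hsv(bgr):
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    lower = np.array([0, 40, 60], dtype=np.uint8)      # H, S, V (assumed)
    upper = np.array([25, 255, 255], dtype=np.uint8)
    return cv2.inRange(hsv, lower, upper)

img = cv2.imread("hand_gesture.jpg")                   # hypothetical input path
if img is None:                                        # fall back so the sketch runs
    img = np.random.randint(0, 256, (128, 128, 3), dtype=np.uint8)

for name, mask in [("ycbcr", skin_mask_ycbcr(img)), ("hsv", skin_mask_hsv(img))]:
    segmented = cv2.bitwise_and(img, img, mask=mask)   # keep only skin pixels
    cv2.imwrite(f"segmented_{name}.png", segmented)
```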
Book Chapter

Multiclass MI-Task Classification Using Logistic Regression and Filter Bank Common Spatial Patterns

TL;DR: This paper proposes a classification technique for EEG motor imagery signals that combines logistic regression with feature extraction by the filter bank common spatial pattern (FBCSP) algorithm, and shows that the proposed method is promising.
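A minimal sketch of the named pipeline follows: a band-pass filter bank, CSP spatial filters per band, log-variance features, and a logistic regression classifier. The band edges, the number of CSP components, and the scipy/scikit-learn implementation are assumptions; the feature-selection stage often used in FBCSP is omitted.

```python
# Sketch only: filter bank CSP features fed to logistic regression.
import numpy as np
from scipy.linalg import eigh
from scipy.signal import butter, filtfilt
from sklearn.linear_model import LogisticRegression

def bandpass(x, lo, hi, fs=250, order=4):
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x, axis=-1)

def csp_filters(X, y, n_comp=4):
    # Average class covariances, then solve the generalized eigenproblem
    c1 = np.mean([np.cov(x) for x in X[y == 0]], axis=0)
    c2 = np.mean([np.cov(x) for x in X[y == 1]], axis=0)
    _, W = eigh(c1, c1 + c2)
    # Keep the most discriminative filters from both ends of the spectrum
    return np.concatenate([W[:, :n_comp // 2], W[:, -n_comp // 2:]], axis=1).T

def fbcsp_features(X, y, bands=((8, 12), (12, 16), (16, 20), (20, 24), (24, 28), (28, 32))):
    feats = []
    for lo, hi in bands:
        Xb = np.array([bandpass(x, lo, hi) for x in X])
        W = csp_filters(Xb, y)
        Z = np.array([W @ x for x in Xb])
        feats.append(np.log(Z.var(axis=-1)))   # log-variance of CSP components
    return np.hstack(feats)

# Random data standing in for two-class MI-EEG trials (trials, channels, samples)
X = np.random.randn(60, 8, 500)
y = np.random.randint(0, 2, 60)
clf = LogisticRegression(max_iter=1000).fit(fbcsp_features(X, y), y)
```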
Journal Article

Rotation, Translation and Scale Invariant Sign Word Recognition Using Deep Learning

TL;DR: A rotation, translation, and scale-invariant sign word recognition system is presented, using a convolutional neural network (CNN) trained to classify both the hand gesture and the sign word.
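As a sketch of how such invariance is typically encouraged during training, the example below places random rotation, translation, and zoom augmentation in front of a small CNN; the augmentation ranges, layer sizes, and the 30-class output are illustrative assumptions, not the paper's settings.

```python
# Sketch only: random affine augmentation in front of a CNN classifier.
import tensorflow as tf

augment = tf.keras.Sequential([
    tf.keras.layers.RandomRotation(0.15),               # roughly +/- 54 degrees
    tf.keras.layers.RandomTranslation(0.1, 0.1),        # +/- 10% shift
    tf.keras.layers.RandomZoom(0.2),                    # +/- 20% scale
])

inputs = tf.keras.Input(shape=(64, 64, 3))
x = augment(inputs)                                      # active only during training
x = tf.keras.layers.Conv2D(32, 3, activation="relu")(x)
x = tf.keras.layers.MaxPooling2D()(x)
x = tf.keras.layers.Conv2D(64, 3, activation="relu")(x)
x = tf.keras.layers.GlobalAveragePooling2D()(x)
outputs = tf.keras.layers.Dense(30, activation="softmax")(x)  # assumed sign-word classes
model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```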