Open Access Journal Article

User-Independent American Sign Language Alphabet Recognition Based on Depth Image and PCANet Features

Walaa Aly, +2 more
- 01 Jan 2019
- Vol. 7, pp. 123138-123150
TLDR
Experimental results on a public dataset of real depth images captured from various users show that the proposed method outperforms state-of-the-art recognition accuracy under a leave-one-out evaluation strategy.
Abstract
Sign language is the most natural and effective way of communication between deaf and hearing people. American Sign Language (ASL) alphabet recognition (i.e., fingerspelling) using a marker-less vision sensor is a challenging task due to the difficulties of hand segmentation and appearance variations among signers. Existing color-based sign language recognition systems suffer from many challenges, such as complex backgrounds, hand segmentation, and large inter-class and intra-class variations. In this paper, we propose a new user-independent recognition system for the American Sign Language alphabet using depth images captured by the low-cost Microsoft Kinect depth sensor. Exploiting depth information instead of color images overcomes many problems, owing to its robustness against illumination and background variations. The hand region can be segmented by applying a simple preprocessing algorithm to the depth image. Feature learning using convolutional neural network architectures is applied instead of classical hand-crafted feature extraction methods. Local features extracted from the segmented hand are effectively learned using a simple unsupervised Principal Component Analysis Network (PCANet) deep learning architecture. Two strategies for learning the PCANet model are proposed: training a single PCANet model from the samples of all users, and training a separate PCANet model for each user. The extracted features are then classified using a linear Support Vector Machine (SVM). The performance of the proposed method is evaluated on a public dataset of real depth images captured from various users. Experimental results show that the proposed method outperforms state-of-the-art recognition accuracy under the leave-one-out evaluation strategy.
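The pipeline the abstract describes (depth-based hand segmentation, PCANet feature learning, linear SVM classification) can be illustrated with a minimal sketch. Everything below — the nearest-point threshold, image size, and synthetic data — is an assumption for illustration, not the paper's actual preprocessing; a sketch of PCANet filter learning itself appears with the PCANet reference below.

```python
import numpy as np
from sklearn.svm import LinearSVC

def segment_hand(depth, near_offset_mm=120):
    """Hypothetical depth-threshold segmentation: keep pixels within a
    fixed offset of the nearest valid point, assuming the hand is the
    object closest to the sensor, and zero out everything else."""
    nearest = depth[depth > 0].min()
    mask = (depth > 0) & (depth < nearest + near_offset_mm)
    return np.where(mask, depth, 0.0)

# Toy stand-ins for the real Kinect dataset: 64x64 depth maps (mm)
# and static ASL alphabet class labels.
rng = np.random.default_rng(0)
X = rng.integers(500, 2000, size=(200, 64, 64)).astype(np.float32)
y = rng.integers(0, 24, size=200)

# Segment each frame and flatten; in the paper, PCANet features would
# replace this raw-pixel representation before the linear SVM.
feats = np.stack([segment_hand(d).ravel() for d in X])
clf = LinearSVC(C=1.0, max_iter=5000).fit(feats, y)
print("training accuracy:", clf.score(feats, y))
```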



Citations
Journal Article

Vision-based hand gesture recognition using deep learning for the interpretation of sign language

TL;DR: A deep learning-based convolutional neural network model designed specifically for recognizing gesture-based sign language; it achieves better classification accuracy with fewer model parameters than other existing CNN architectures.
Journal Article

American sign language recognition and training method with recurrent neural network

TL;DR: A Long Short-Term Memory recurrent neural network combined with a k-Nearest-Neighbour classifier is adopted to handle sequences of input; experiments revealed that recognition of the 26 ASL alphabet letters yields an average accuracy of 99.44%.
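As a rough illustration of the sequence-based pipeline this summary describes (an LSTM encoder followed by k-NN classification), here is a minimal sketch; the sequence length, feature count, and untrained encoder are assumptions for illustration, not details from the cited paper.

```python
import numpy as np
import tensorflow as tf
from sklearn.neighbors import KNeighborsClassifier

# Toy stand-in: 200 gesture sequences of 30 frames x 42 hand features.
rng = np.random.default_rng(0)
X = rng.random((200, 30, 42)).astype("float32")
y = rng.integers(0, 26, size=200)  # 26 ASL alphabet classes

# LSTM encoder whose final hidden state embeds each input sequence;
# the cited paper trains its network, which this sketch omits.
encoder = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(30, 42)),
    tf.keras.layers.LSTM(64),
])
embeddings = encoder.predict(X, verbose=0)

# k-NN classification on the sequence embeddings.
knn = KNeighborsClassifier(n_neighbors=5).fit(embeddings, y)
print("training accuracy:", knn.score(embeddings, y))
```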
Journal Article

DeepArSLR: A Novel Signer-Independent Deep Learning Framework for Isolated Arabic Sign Language Gestures Recognition

TL;DR: Experimental results show that the proposed framework outperforms state-of-the-art methods by a large margin under a signer-independent testing strategy.
Journal Article

Deep Learning for Sign Language Recognition: Current Techniques, Benchmarks, and Open Issues

TL;DR: A comprehensive review of automated sign language recognition based on machine/deep learning methods and techniques published between 2014 and 2021 is presented in this article, which concludes that the current methods require conceptual classification to interpret all available data correctly.
Journal Article

FFT-based deep feature learning method for EEG classification

TL;DR: A new method for electroencephalogram (EEG) signal classification based on a deep learning model in which relevant features are learned automatically within a supervised learning framework; it exhibits better stability across different classification cases and patients, indicating its practical worth as a diagnostic reference in clinics.
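A minimal sketch of the FFT-based representation this summary refers to — magnitude spectra of EEG epochs feeding a classifier — is below; the epoch length, channel count, and the linear classifier standing in for the paper's deep model are all assumptions for illustration.

```python
import numpy as np
from sklearn.svm import LinearSVC

# Toy EEG stand-in: 100 single-channel epochs of 256 samples each.
rng = np.random.default_rng(0)
epochs = rng.standard_normal((100, 256))
labels = rng.integers(0, 2, size=100)

# FFT magnitude spectrum as the input representation; the cited paper
# feeds such spectra into a deep network rather than the linear
# classifier used here for brevity.
spectra = np.abs(np.fft.rfft(epochs, axis=1))
clf = LinearSVC(max_iter=5000).fit(spectra, labels)
print("training accuracy:", clf.score(spectra, labels))
```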
References
Journal Article

LIBLINEAR: A Library for Large Linear Classification

TL;DR: LIBLINEAR is an open source library for large-scale linear classification that supports logistic regression and linear support vector machines and provides easy-to-use command-line tools and library calls for users and developers.
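For context, LIBLINEAR is the solver behind scikit-learn's LinearSVC, so a minimal usage sketch of the library looks like this (the synthetic data is an assumption for illustration):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC  # backed by the LIBLINEAR library

# Synthetic multi-class problem standing in for real feature vectors.
X, y = make_classification(n_samples=1000, n_features=50, n_classes=3,
                           n_informative=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# L2-regularized squared-hinge loss, LIBLINEAR's default formulation.
clf = LinearSVC(C=1.0, max_iter=5000).fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```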
Proceedings Article

Training linear SVMs in linear time

TL;DR: A cutting-plane algorithm for training linear SVMs that provably has training time O(sn) for classification problems and O(sn log n) for ordinal regression problems, making it several orders of magnitude faster than decomposition methods such as SVM-light on large datasets.
Journal Article

Enhanced Computer Vision With Microsoft Kinect Sensor: A Review

TL;DR: A comprehensive review of recent Kinect-based computer vision algorithms and applications covering topics including preprocessing, object tracking and recognition, human activity analysis, hand gesture analysis, and indoor 3-D mapping.
Journal Article

Vision based hand gesture recognition for human computer interaction: a survey

TL;DR: Provides an analysis of comparative surveys in the field of gesture-based HCI and of the existing literature on gesture recognition systems for human-computer interaction, categorized under different key parameters.
Journal Article

PCANet: A Simple Deep Learning Baseline for Image Classification?

TL;DR: PCANet is a simple deep learning network for image classification comprising only very basic data-processing components: cascaded principal component analysis (PCA), binary hashing, and block-wise histograms.
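The first PCANet stage is simple enough to sketch directly: gather mean-removed patches and keep the leading principal components as convolution filters. The patch size and filter count below are illustrative assumptions, and the later stages (binary hashing, block-wise histograms) are omitted.

```python
import numpy as np

def pcanet_stage1_filters(images, k=7, n_filters=8):
    """Learn PCANet stage-1 filters: the top principal components of
    mean-removed k x k patches become a bank of convolution filters."""
    patches = []
    for img in images:
        H, W = img.shape
        # Patch extraction is strided here only for brevity;
        # PCANet itself collects all overlapping patches.
        for i in range(0, H - k + 1, 2):
            for j in range(0, W - k + 1, 2):
                p = img[i:i + k, j:j + k].ravel()
                patches.append(p - p.mean())  # remove the patch mean
    P = np.asarray(patches)  # shape: (n_patches, k*k)
    # Right singular vectors = eigenvectors of the patch covariance.
    _, _, Vt = np.linalg.svd(P, full_matrices=False)
    return Vt[:n_filters].reshape(n_filters, k, k)

imgs = np.random.default_rng(1).random((10, 64, 64))
filters = pcanet_stage1_filters(imgs)
print(filters.shape)  # (8, 7, 7)
```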