Proceedings ArticleDOI

Models for Hand Gesture Recognition using Deep Learning

TLDR
A hand gesture recognition system which works in 4 steps and acts as a mediator between deaf/mute people and others; ambiguity in the results is reduced by introducing variation in the background during testing.
Abstract
According to the World Health Organization, about 5% of the world's population, approximately 466 million people, is deaf and/or mute or has disabling hearing loss. There is often a communication barrier between people with disabilities and others. We communicate to share our thoughts, but for people who are deaf or mute, communication is difficult. Inability to speak is a significant form of disability. For such people, sign language or Braille is often the only means of communication. Sign language is a way of communicating using hand gestures. However, most people do not understand sign language, which makes it difficult for signers to communicate with others. Hence, we aim to bridge this communication gap between a deaf/mute person and others by developing a system that acts as a mediator between the two. We propose a hand gesture recognition system which works in 4 steps: 1) generate a live stream of hand gestures using a web-cam; 2) form images from the video frames; 3) preprocess these images; 4) recognize sign-language hand gestures and convert them into text/audio output. The system is implemented using image processing and neural networks. We have tested the proposed models on a Kaggle dataset, our own dataset, and a dataset formed by combining both. We reduce the ambiguity introduced in the results by including variation in the background. Most of the models give similar test accuracy for both plain and cluttered backgrounds.
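As a rough illustration, the sketch below wires these 4 steps together. It is a minimal example, not the authors' implementation, and assumes OpenCV for webcam capture, a hypothetical pre-trained Keras CNN saved as "gesture_cnn.h5" that takes 64x64 grayscale inputs, and a hypothetical LABELS list mapping class indices to signs.

```python
# Minimal sketch of the 4-step pipeline described in the abstract.
# Assumptions (not from the paper): OpenCV webcam capture, a pre-trained
# Keras CNN "gesture_cnn.h5" with 64x64 grayscale inputs, and LABELS
# mapping class indices to sign letters.
import cv2
import numpy as np
from tensorflow.keras.models import load_model

LABELS = [chr(c) for c in range(ord("A"), ord("Z") + 1)]  # hypothetical label set
model = load_model("gesture_cnn.h5")                      # hypothetical model file

cap = cv2.VideoCapture(0)              # Step 1: live stream from the web-cam
while True:
    ok, frame = cap.read()             # Step 2: one image per video frame
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)   # Step 3: preprocessing
    resized = cv2.resize(gray, (64, 64)) / 255.0
    x = resized.reshape(1, 64, 64, 1)
    probs = model.predict(x, verbose=0)              # Step 4: recognition
    letter = LABELS[int(np.argmax(probs))]
    cv2.putText(frame, letter, (10, 40),
                cv2.FONT_HERSHEY_SIMPLEX, 1.2, (0, 255, 0), 2)
    cv2.imshow("Sign recognition", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```

The audio half of step 4 could be layered on top of the predicted label with any text-to-speech library.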


Citations
Journal ArticleDOI

Hand Gesture Recognition Based on Auto-Landmark Localization and Reweighted Genetic Algorithm for Healthcare Muscle Activities

TL;DR: Experimental results proved that auto landmark localization with the proposed feature extraction technique is an efficient approach towards developing a robust HGR system.
Journal ArticleDOI

An Integrated Real-Time Hand Gesture Recognition Framework for Human–Robot Interaction in Agriculture

TL;DR: A real-time skeleton-based recognition system for five hand gestures using a depth camera and machine learning was developed and successfully tested in outdoor experimental sessions that included either one or two persons.
Book ChapterDOI

Gesture Interaction in Virtual Reality

TL;DR: In this article, hand and body gestures are detected via machine-learning-based human pose estimation on off-the-shelf optical camera images, yielding reliable gesture recognition without additional sensors.
Proceedings ArticleDOI

Use of Ensemble Machine Learning to Detect Depression in Social Media Posts

TL;DR: In this article, a system to detect depression using ensemble learning and Natural Language Processing (NLP) techniques was proposed; the best performing configuration gave an accuracy of 96.35%.

Adversarial Unsupervised Domain Adaptation for Hand Gesture Recognition Using Thermal Images

TL;DR: In this article, an adversarial UDA model was proposed to learn domain-invariant features across the RGB and thermal domains, leveraging the information from labeled RGB data to solve the hand gesture recognition task using thermal images.
References
Proceedings Article

Very Deep Convolutional Networks for Large-Scale Image Recognition

TL;DR: This work investigates the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting using an architecture with very small convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers.
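For context, the depth in that work comes from stacking blocks of very small 3x3 convolutions. The sketch below shows this pattern in Keras under standard VGG-style assumptions; it is not the cited paper's code, and the block sizes shown are illustrative rather than a full 16-19 weight-layer configuration.

```python
# Illustrative VGG-style pattern: stacked 3x3 convolutions followed by
# 2x2 max pooling. Depth is increased by repeating such blocks.
from tensorflow.keras import layers, models

def vgg_block(x, filters, convs):
    # 'convs' small 3x3 convolutions back to back, then downsample.
    for _ in range(convs):
        x = layers.Conv2D(filters, (3, 3), padding="same", activation="relu")(x)
    return layers.MaxPooling2D((2, 2))(x)

inputs = layers.Input(shape=(224, 224, 3))
x = vgg_block(inputs, 64, 2)
x = vgg_block(x, 128, 2)
x = vgg_block(x, 256, 3)
x = layers.Flatten()(x)
outputs = layers.Dense(1000, activation="softmax")(x)
model = models.Model(inputs, outputs)
```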
Proceedings ArticleDOI

American Sign Language Recognition using Deep Learning and Computer Vision

TL;DR: The focus of this work is to create a vision-based application which offers sign language translation to text, thus aiding communication between signers and non-signers.
Proceedings ArticleDOI

Hand gesture recognition using deep learning

TL;DR: A technique which commands a computer using six static and eight dynamic hand gestures; the three main steps are hand shape recognition, tracing of the detected hand, and converting the data into the required command.
Proceedings ArticleDOI

A Real-Time System for Recognition of American Sign Language by using Deep Learning

TL;DR: A real-time sign language recognition system is developed so that people who do not know sign language can communicate easily with hearing-impaired people.