Proceedings ArticleDOI

Hand Sign Recognition Based Communication System for Speech Disable People

TL;DR: A smart hand sign interpretation system using a smart glove is proposed to reduce the communication gap between speech-impaired and hearing people; it can convert sign language into voice in a very simple way.
Abstract: According to the 2011 census of India, 70 million people in India have some kind of disability, and 18% of them are speech and hearing impaired, making India one of the countries with the largest populations affected by this kind of disability. These people find it difficult to participate in society and to enjoy equal rights and opportunities, because they cannot express their feelings in words and sentences, and various techniques have been tried to deal with this problem. In this paper, a smart hand sign interpretation system using a smart glove is proposed to reduce the communication gap between speech-impaired and hearing people. The wearable system utilizes five flex sensors, a 3-axis accelerometer, a Bluetooth module and a 16×2 LCD display. The processor collects data from the five flex sensors and the accelerometer and compares the received readings with previously saved data. If a reading matches the saved data, the meaning assigned to it is shown on the LCD screen and also sent to an Android mobile phone through Bluetooth, where a mobile app converts it into voice. The system can therefore convert sign language into voice in a very simple way.
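The abstract describes a rule-based matching step but not the rule itself. A minimal Python sketch of one plausible implementation, assuming the processor compares an 8-channel reading (five flex values, three accelerometer axes) against stored templates within per-channel tolerances; all template values, tolerances and names below are hypothetical, not taken from the paper:

# Sketch of the template-matching step described in the abstract.
# Template values, tolerances and sign names are hypothetical.
TEMPLATES = {
    # sign -> [5 flex readings, 3 accelerometer axes]
    "hello":     [310, 450, 460, 440, 300, 0.1, 0.0, 0.9],
    "thank you": [620, 610, 580, 590, 600, 0.0, 0.8, 0.2],
}
TOLERANCES = [40] * 5 + [0.3] * 3   # max allowed deviation per channel

def match_sign(reading):
    """Return the sign whose template matches every channel of the
    reading within its tolerance, or None if nothing matches."""
    for sign, template in TEMPLATES.items():
        if all(abs(r - t) <= tol
               for r, t, tol in zip(reading, template, TOLERANCES)):
            return sign
    return None

# On a match, the real system would print the word to the 16x2 LCD and
# forward it over Bluetooth to the Android app for text-to-speech.
print(match_sign([305, 455, 470, 430, 310, 0.1, 0.1, 0.85]))  # "hello"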
Citations
Proceedings ArticleDOI
03 Dec 2020
TL;DR: The idea is to attach sensors to a glove and develop a machine learning model that predicts the gesture by reading the sensor values, and the predicted result can be given as output through an audio module or a display.
Abstract: Muteness or mutism is an inability to speak, often caused by a speech disorder or surgery. Regular people cannot understand sign language, and therefore a gesture recognizer and communicator can be a solution to this problem. The idea is to attach sensors (flex sensors and accelerometers) to a glove and develop a machine learning model that predicts the gesture by reading the sensor values. The predicted result can be given as output through an audio module or a display.
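The abstract names no particular model. A minimal sketch of the pipeline it describes, using scikit-learn; the file names, feature layout (five flex values plus three accelerometer axes) and model choice are assumptions, not details from the paper:

# Sketch: train a classifier that maps glove sensor readings to gestures.
# File names, the 8-channel feature layout and the model are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X = np.load("glove_samples.npy")   # shape (n_samples, 8), hypothetical file
y = np.load("glove_labels.npy")    # one gesture label per sample

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)

model = RandomForestClassifier(n_estimators=100)
model.fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))

# At run time a live reading is classified the same way and the predicted
# gesture is routed to the audio module or the display.
print("predicted gesture:", model.predict(X_test[:1])[0])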

4 citations


Cites background from "Hand Sign Recognition Based Communi..."

  • ...Similarly, almost all the existing research [6],[7],[8] does not use machine learning which always results in reduced accuracy....


Journal ArticleDOI
TL;DR: The proposed system produces speech and text output from hand gestures; the user interface here is a Raspberry Pi board, which relays the communication.

3 citations

Journal ArticleDOI
TL;DR: This paper uses a camera-to-PC interface that captures the movements, then applies computer vision techniques and machine learning algorithms to learn the underlying pattern and match the input against a pre-prepared dataset.
Abstract: This paper presents an intelligent human-computer interactive system. In this proposed work, artificial intelligence is utilized for home automation: the system recognizes human gestures with the assistance of a camera and performs tasks accordingly. Recognizing the gestures rests on three layers: detection, tracking, and recognition. We use a camera-to-PC interface that captures the movements, then use computer vision techniques and machine learning algorithms to learn the underlying pattern and match the input against a pre-prepared dataset. For safes, an extra layer of security is provided by face recognition, and the safe is opened only if the individual is recognized from the dataset.
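The abstract does not name its vision stack. As an illustration of the face-check gate it describes, here is a minimal sketch using OpenCV's bundled Haar cascade; note this only detects a face, while deciding whose face it is (as the safe requires) needs a recognizer trained on the enrolled dataset, e.g. the LBPH recognizer from opencv-contrib. Everything here is an assumption, not the paper's code:

# Sketch of the camera -> face-check gate described in the abstract.
# Detection only; recognizing the individual would additionally need,
# e.g., cv2.face.LBPHFaceRecognizer_create() from opencv-contrib.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)          # default camera
ok, frame = cap.read()
cap.release()

if ok:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) > 0:
        # A real system would now run the recognizer and compare the
        # result with the enrolled identity before opening the safe.
        print("face detected at:", faces[0])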

1 citation

Journal ArticleDOI
TL;DR: In this article, the authors reviewed and analyzed articles on sign language recognition based on sensor-based glove systems, in order to identify academic motivations, challenges, and recommendations related to this field.
Abstract: Sign language is the predominant mode of communication for the hearing-impaired community. For the millions of people around the world who suffer from hearing loss, interaction with hearing people is complicated. In line with this issue, technology is perceived as a crucial enabler of solutions that enhance the quality of life of the hearing impaired by increasing accessibility. This research aims to review and analyze articles on sign language recognition based on sensor-based glove systems, in order to identify academic motivations, challenges, and recommendations related to this field. The search for relevant review materials and articles, covering 2017 to 2022, was performed on four major databases: Science Direct, Web of Science, IEEE Xplore, and Scopus. The articles were chosen based on our inclusion and exclusion criteria. The literature findings indicate that dataset size is an open issue and challenge for hand gesture recognition. Furthermore, the majority of research on sign language recognition based on data gloves was performed on static, single-hand, isolated gestures. Recognition accuracy typically exceeded 90%, although most experiments were carried out with a limited number of gestures. Overall, it is hoped that this study will serve as a roadmap for future research and raise awareness among researchers in the field of sign language recognition.

1 citation

Journal ArticleDOI
TL;DR: In this paper, a systematic literature review (SLR) method was used to evaluate the quality of sign language mobile apps for normal-hearing and hearing-impaired users.
Abstract: Numerous nations have prioritised the inclusion of citizens with disabilities, such as hearing loss, in all aspects of social life. Sign language is used by this population, yet they still have trouble communicating with others. Thanks to technology advances enabled by the widespread use of smartphones, many sign language apps are being created to help bridge the communication gap, and they are widely used because they are accessible and inexpensive. The services and capabilities they offer and the quality of their content, however, differ greatly. If these applications are to have any real effect, the quality of their content must be evaluated; a thorough evaluation will also push developers toward better software and a better overall experience. This research used a systematic literature review (SLR) method, which is recognised for building a broad understanding of a study area whilst offering additional information for future investigations. The SLR was applied to smartphone-based sign language apps to map the area and the main aspects used in their assessment, and the studies were reviewed in terms of related work, main issues, discussions and methodological aspects. Results revealed that evaluation of sign language mobile apps is scarce, so a future direction for the quality assessment of these apps is proposed. The findings will benefit normal-hearing and hearing-impaired users, open up a new area of study, and pave the way for future collaboration between academicians and app developers on sign language mobile apps.

1 citation

References
Journal ArticleDOI
TL;DR: The proposed wearable system outperforms existing methods; for instance, background lighting and other factors that are crucial to vision-based processing do not affect it.
Abstract: Gesturing is an instinctive way of communicating to present a specific meaning or intent. Therefore, research into sign language interpretation using gestures has been explored progressively during recent decades to serve as an auxiliary tool for deaf and mute people to blend into society without barriers. In this paper, a smart sign language interpretation system using a wearable hand device is proposed to meet this purpose. This wearable system utilizes five flex sensors, two pressure sensors, and a three-axis inertial motion sensor to distinguish the characters of the American Sign Language alphabet. The entire system mainly consists of three modules: 1) a wearable device with a sensor module; 2) a processing module; and 3) a display unit mobile application module. Sensor data are collected and analyzed using a built-in embedded support vector machine classifier. Subsequently, the recognized letter is transmitted to a mobile device through Bluetooth Low Energy wireless communication. An Android-based mobile application was developed with a text-to-speech function that converts the received text into audible voice output. Experiment results indicate that a true sign language recognition accuracy rate of 65.7% can be achieved on average by the first version without pressure sensors. A second version of the proposed wearable system, with the fusion of pressure sensors on the middle finger, increased the recognition accuracy rate dramatically to 98.2%. The proposed wearable system outperforms existing methods; for instance, background lighting and other factors that are crucial to vision-based processing do not affect it.
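The paper's classifier is an embedded support vector machine over ten sensor channels (five flex, two pressure, three inertial axes). A minimal desktop-side sketch of that classification step with scikit-learn; file names and SVM parameters are assumptions, not values from the paper:

# Sketch of the SVM step the abstract describes: ten channels
# (5 flex + 2 pressure + 3 inertial) -> one ASL alphabet letter.
# File names and SVM parameters are assumptions.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

X = np.load("asl_features.npy")   # shape (n_samples, 10), hypothetical
y = np.load("asl_letters.npy")    # letter label per sample

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25)

clf = SVC(kernel="rbf", C=10.0, gamma="scale")
clf.fit(X_train, y_train)
print("accuracy:", clf.score(X_test, y_test))

# In the wearable, the trained classifier runs on the embedded processor;
# each recognized letter goes over Bluetooth Low Energy to the Android
# app, whose text-to-speech function voices it.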

123 citations

Proceedings ArticleDOI
01 Sep 2011
TL;DR: The results with test images are presented, which show that the proposed Sign Language Recognition System is able to recognize images with 98.125% accuracy when trained with 320 images and tested with 160 images.
Abstract: Sign language is a method of communication for deaf-dumb people. This paper proposes a method that provides a basis for the development of a sign language recognition system for one of the South Indian languages. In the proposed method, a set of 32 signs is defined, each representing the binary 'UP' and 'DOWN' positions of the five fingers. The images are of the palm side of the right hand and are loaded at runtime, i.e. dynamic loading. The method has been developed with respect to a single user in both the training and testing phases. The static images have been pre-processed using a feature point extraction method and trained with 10 images per sign. The images are converted into text by identifying the fingertip positions in the static images using image processing techniques. The proposed method is able to identify images of the signer captured dynamically during the testing phase. Results with test images are presented, which show that the proposed sign language recognition system recognizes images with 98.125% accuracy when trained with 320 images and tested with 160 images.
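The size of the sign set follows directly from the encoding: five fingers, each binary UP or DOWN, give 2^5 = 32 distinct combinations. A tiny sketch of that encoding (finger order and the sample pattern are illustrative only):

# Each of the 5 fingers is binary UP (1) or DOWN (0), so there are
# 2**5 = 32 possible signs; a sign is simply the 5-bit index of its
# finger pattern. Finger order and the example are illustrative.
FINGERS = ("thumb", "index", "middle", "ring", "little")

def sign_index(states):
    """Map a tuple of five UP/DOWN flags (thumb first) to a sign id 0..31."""
    assert len(states) == len(FINGERS)
    index = 0
    for bit in states:
        index = (index << 1) | bit
    return index

print(sign_index((0, 1, 1, 0, 0)))  # index and middle fingers up -> sign 12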

98 citations


"Hand Sign Recognition Based Communi..." refers background in this paper

  • ...[7], a bunch of 32 signs, each and every sign is not exactly similar to another....


Proceedings ArticleDOI
01 Dec 2016
TL;DR: Though the glove is intended for sign-language-to-speech conversion, it is a multipurpose glove and finds applications in gaming, robotics and the medical field.
Abstract: People with speech impairment find it difficult to communicate in a society where most people do not understand sign language. The idea proposed in this paper is a smart glove which can convert sign language to speech output. The glove is embedded with flex sensors and an Inertial Measurement Unit (IMU) to recognize the gesture. A novel method of state estimation has been developed to track the motion of the hand in three-dimensional space. The prototype was tested for its feasibility in converting Indian Sign Language to voice output. Though the glove is intended for sign-language-to-speech conversion, it is a multipurpose glove and finds applications in gaming, robotics and the medical field.
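The paper does not disclose its state-estimation method. As a generic illustration of the idea (fusing IMU channels into a drift-corrected orientation estimate), here is a minimal complementary-filter sketch; the filter constant, sample rate and sensor-access helpers are hypothetical:

# Minimal complementary-filter sketch for estimating hand pitch from an
# IMU: the gyro integral (smooth but drifting) is blended with the
# accelerometer tilt estimate (noisy but drift-free). This is a generic
# illustration, not the paper's undisclosed state-estimation method;
# read_gyro_pitch_rate() and read_accel() are hypothetical helpers.
import math

ALPHA = 0.98   # weight on the integrated gyro; 1 - ALPHA on the accel
DT = 0.01      # sample period in seconds (100 Hz, assumed)

def update_pitch(pitch, gyro_rate, ax, ay, az):
    """One filter step: blend integrated gyro rate with accel tilt."""
    accel_pitch = math.atan2(-ax, math.sqrt(ay * ay + az * az))
    return ALPHA * (pitch + gyro_rate * DT) + (1 - ALPHA) * accel_pitch

pitch = 0.0
# In the sampling loop of a real device:
#     pitch = update_pitch(pitch, read_gyro_pitch_rate(), *read_accel())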

43 citations


"Hand Sign Recognition Based Communi..." refers background in this paper

  • ...[12], in which he develops a smart glove which can convert sign language to speech output....


06 Mar 2015
TL;DR: This paper aims at eradicating the communication barrier between deaf-dumb and normal people by developing an embedded system which translates hand gestures into synthesized textual and vocal form without requiring a special sign language interpreter.
Abstract: Communication between a deaf-dumb person and a normal person has always been a challenging task. About 9 billion people in the world fall into this category, which is too large a number to be ignored. Deaf-dumb people use sign language for their communication, which is difficult for normal people to understand. This paper aims at eradicating the communication barrier between them by developing an embedded system which translates hand gestures into synthesized textual and vocal form without requiring a special sign language interpreter. The system consists of a glove worn by the mute person to facilitate communication with a normal person. It translates hand gestures into corresponding words using flex sensors and a 3-axis accelerometer. The signals are converted to digital data using comparator circuits and the ADC of an ARM LPC2138 microcontroller. The microcontroller matches the binary combinations against the lookup table in its database and produces the speech signal. The output of the system is presented through a speaker and an LCD.

38 citations

Proceedings ArticleDOI
14 Jul 2014
TL;DR: An embedded gesture recognition system which can interpret gestures into voice and text messages is designed and developed using a data glove, and the experimental results show that the recognition accuracy rate for trained gestures is above 91%.
Abstract: Gesture is a natural and intuitive mode of interpersonal communication, which can stand in for language and behaviour to express certain meanings and words. In this paper, an embedded gesture recognition system has been designed and developed using a data glove. The data glove is aimed at supplying an auxiliary communication tool for deaf people so that there is more social intercourse between them and hearing people. The gesture recognition system, which can interpret gestures into voice and text messages, is mainly composed of four parts: a data glove, an ARM processor, a display module and an audio module. The data glove comprises five unidirectional bend sensors (FLX-03) and a 3-axis accelerometer. The ARM processor receives the data transmitted from the glove's sensors through I/O ports; it analyzes and processes the data from distinct gestures, then compares the data with the templates to see if the gestures match. The recognition results are displayed on an LCD screen, and the converted voice is output through an external speaker. Alongside the hardware, this paper focuses on the template matching algorithm, including data collection, mining and processing. The experimental results show that the recognition accuracy rate for trained gestures is above 91%.

28 citations