Author

Sujin Krishnan

Bio: Sujin Krishnan is an academic researcher from Amrita Vishwa Vidyapeetham. The author has contributed to research in the topics of Gesture recognition & Gesture. The author has an h-index of 1 and has co-authored 1 publication, which has received 17 citations.

Papers
Journal ArticleDOI
TL;DR: With an average gesture-recognition success rate above 99.25% and a response time comparable to that of a commercially available joystick-controlled wheelchair, HanGes could be a viable alternative to existing systems.
Abstract: This paper presents a novel and simple hand gesture recognition method for the rehabilitation of people with mobility issues, particularly stroke patients and patients with spinal cord injury (SCI). Keeping in mind the reach of such a system for a wider community of people with mobility issues, the proposed low-cost control device, called gpaD (gesture pad), provides an alternative to joystick-based powered wheelchair control through hand gestures. In this method, IR sensors identify simple gestures that command the powered wheelchair to move in any direction. In the proposed prototype system, HanGes, a gesture pad comprising IR sensors, an MCU, and a power management circuit is designed for gesture recognition and identification, and a controller for driving the motors is implemented. HanGes's design, implementation, response time calculations, testing, and performance evaluation with stroke and SCI patients are discussed in detail. With an average gesture-recognition success rate above 99.25% and a response time comparable to that of a commercially available joystick-controlled wheelchair, HanGes could be a viable alternative to existing systems. Extensive experiments demonstrating the accuracy of the system, together with the user experience, testing with patients, and the implementation cost, indicate the superiority of our system.
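The abstract's IR-sensor gesture scheme can be illustrated with a minimal sketch. The sensor count, layout, ADC threshold, and gesture-to-command mapping below are all assumptions for illustration; the paper does not publish its firmware.

```python
# Hypothetical sketch of gesture-pad decoding. Sensor layout and the
# gesture-to-command mapping are assumptions, not the paper's design.

# Each gesture is a tuple of four IR sensor states (1 = hand detected).
GESTURE_MAP = {
    (1, 0, 0, 0): "FORWARD",
    (0, 1, 0, 0): "REVERSE",
    (0, 0, 1, 0): "LEFT",
    (0, 0, 0, 1): "RIGHT",
    (0, 0, 0, 0): "STOP",
}

def decode_gesture(ir_readings, threshold=512):
    """Binarize raw ADC readings from the IR sensors and look up the command."""
    pattern = tuple(1 if r > threshold else 0 for r in ir_readings)
    # Unrecognized patterns fail safe to STOP.
    return GESTURE_MAP.get(pattern, "STOP")
```

Failing safe to STOP on any unrecognized sensor pattern is the natural choice for a mobility device, since an ambiguous reading should never produce motion.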

18 citations


Cited by
Journal ArticleDOI
TL;DR: An overview is presented of mobile applications that may, from some perspective, be used to support people with disabilities in their tourist activities, along with preliminary recommendations for a collaborative and personalized system framework.
Abstract: In the Travel and Tourism sector, mobile applications could do much more than simply provide information about specific locations or recommend places and itineraries based on the user's location. They could leverage a wide range of technologies to become aware of the interests and specific needs of disabled tourists, providing them with appropriate and tailored information. This information should be presented using appropriate interaction mechanisms, able to help this specific, but large, group of the population in their everyday tourist activities, thus contributing to even more accessible tourism and travel. We present an overview of mobile applications that may, from some perspective, be used to support people with disabilities in their tourist activities. This overview allows us to explore the key challenges involved and some available alternatives, as well as to identify their positive aspects. However, it also underlines some issues that could be addressed more carefully and extensively in the application ecosystem. We also present some preliminary recommendations for a collaborative and personalized system framework to support people with disabilities in their tourist and travel activities. Ongoing work on a first prototype has already given us valuable insights into the identified challenges, and we expect it to be a major step towards a more formal specification of our platform, as well as its development and testing.

40 citations

Proceedings ArticleDOI
01 Sep 2017
TL;DR: An Android application is proposed that converts sign language to natural language and enables the deaf and dumb community to talk over mobile phones, allowing easy communication of the deaf and dumb with society.
Abstract: The communication barrier between the deaf and dumb community and society remains a matter of concern due to the lack of reliable sign language translators. Using mobile phones for communication remains a dream for the deaf and dumb community. We propose an Android application that converts sign language to natural language and enables the deaf and dumb community to talk over mobile phones. Developing sign recognition methods for mobile applications poses challenges such as the need for a lightweight method with low CPU and memory utilization. The application captures an image using the device camera, processes it, and determines the corresponding gesture. An initial comparison phase using histogram matching identifies those gestures that are close to the test sample, and only those samples are then subjected to a comparison based on Oriented FAST and Rotated BRIEF (ORB, where BRIEF stands for Binary Robust Independent Elementary Features), hence reducing CPU time. The user of the application can also add new gestures to the dataset. The application allows easy communication of the deaf and dumb with society. Though there are many computer-based applications for sign language recognition, development on the Android platform is comparatively limited.
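The two-stage pipeline described above, a cheap histogram prefilter followed by a costlier ORB feature match, can be sketched as follows. This sketch shows only the first, histogram-intersection stage in plain NumPy; the bin count, top-k cutoff, and data layout are assumptions, and the ORB stage is indicated only by a comment.

```python
import numpy as np

# Sketch of the first-stage histogram prefilter. Bin count, top-k cutoff,
# and the dataset layout (label -> grayscale image) are assumptions.

def gray_histogram(img, bins=32):
    """Normalized intensity histogram of a grayscale image (values 0-255)."""
    hist, _ = np.histogram(img, bins=bins, range=(0, 256))
    return hist / max(hist.sum(), 1)

def prefilter(test_img, dataset, top_k=3):
    """Keep only the top_k dataset gestures whose histograms best match."""
    h_test = gray_histogram(test_img)
    scores = []
    for label, img in dataset.items():
        # Histogram intersection: higher score means more similar.
        scores.append((np.minimum(h_test, gray_histogram(img)).sum(), label))
    scores.sort(reverse=True)
    # Only these surviving candidates would then be passed to the much
    # costlier ORB descriptor matching stage.
    return [label for _, label in scores[:top_k]]
```

The point of the prefilter is that a histogram comparison is a few array operations per candidate, so the expensive feature matching runs only on the handful of gestures that survive it.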

24 citations

Journal ArticleDOI
18 May 2018
TL;DR: The results show better performance of the handlebar in terms of error in following a trajectory, collisions with the surrounding furniture, and user feeling regarding ease of use, comfort, required training, usefulness, safety, and fatigue.
Abstract: Attendant joysticks for powered wheelchairs are devices intended to help caregivers. Diseases and disabilities such as dementia, spinal cord injuries, or blindness can leave the user unable to drive the chair on his or her own. However, the joystick is not intuitive to use, especially for older people. Proper processing of the information provided by two tactile sensors in the handlebar yields control signals that allow easy and intuitive driving. This is done in this paper, where the performance of this approach is evaluated against that of the joystick by means of objective measurements as well as questionnaires capturing the subjective perception of the participants in the experiments. The results show better performance of the handlebar in terms of error in following a trajectory, collisions with the surrounding furniture, and user feeling regarding ease of use, comfort, required training, usefulness, safety, and fatigue.

18 citations

Journal ArticleDOI
24 May 2020-Sensors
TL;DR: A dual-output deep learning model is proposed to enable simultaneous hand gesture classification and finger angle estimation; it could be used in applications related to human–computer interaction and in control environments with both discrete and continuous variables.
Abstract: Hand gesture classification and finger angle estimation are both critical for intuitive human–computer interaction. However, most approaches study them in isolation. We thus propose a dual-output deep learning model to enable simultaneous hand gesture classification and finger angle estimation. Data augmentation and deep learning were used to detect spatial-temporal features via a wristband with ten modified barometric sensors. Ten subjects performed experimental testing by flexing/extending each finger at the metacarpophalangeal joint while the proposed model was used to classify each hand gesture and estimate continuous finger angles simultaneously. A data glove was worn to record ground-truth finger angles. Overall hand gesture classification accuracy was 97.5% and the finger angle estimation R² was 0.922, both significantly higher than existing shallow learning approaches applied in isolation. The proposed method could be used in applications related to human–computer interaction and in control environments with both discrete and continuous variables.
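The dual-output idea, a shared trunk feeding both a classification head and a regression head, can be sketched as a toy forward pass in NumPy. The layer sizes, random weights, and single dense trunk are assumptions for illustration; the paper's actual model is a deep network trained on spatio-temporal sensor data.

```python
import numpy as np

# Toy dual-output model: one shared representation feeds two heads.
# Sizes (10 sensor channels, 16 hidden units, 5 gestures, 5 finger
# angles) and the random weights are illustrative assumptions.

rng = np.random.default_rng(0)
W_trunk = rng.normal(size=(10, 16))  # 10 barometric channels -> 16 features
W_cls   = rng.normal(size=(16, 5))   # classification head: 5 gesture classes
W_reg   = rng.normal(size=(16, 5))   # regression head: 5 finger angles

def forward(x):
    """Return (gesture class probabilities, estimated finger angles)."""
    h = np.tanh(x @ W_trunk)              # shared trunk representation
    logits = h @ W_cls
    probs = np.exp(logits - logits.max()) # numerically stable softmax
    probs /= probs.sum()
    angles = h @ W_reg                    # linear head for continuous angles
    return probs, angles
```

Sharing the trunk is what makes the two tasks mutually reinforcing: the same learned features must explain both the discrete gesture label and the continuous joint angles, which is the coupling the isolated approaches lack.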

16 citations

Proceedings ArticleDOI
01 Nov 2017
TL;DR: A concept of a human-in-the-loop system is proposed, applying several hands-free control interfaces, including electroencephalography, a myoelectric interface, and a gyroscope-plus-accelerometer interface; vibration actuators are proposed as a prospective form of wheelchair-to-user feedback.
Abstract: Motivated by the emerging need to improve the quality of life of elderly and disabled individuals who rely on wheelchairs for mobility, and who might have limited or no hand functionality at all, a new concept of a wheelchair human-in-the-loop interface is proposed in this report. The report begins with an analysis of information sources on the presented topic. Then, based on this analysis, a human-in-the-loop system concept is proposed, applying several hands-free control interfaces, including electroencephalography, a myoelectric interface, and a gyroscope-plus-accelerometer interface. At the same time, vibration actuators are proposed as a prospective form of wheelchair-to-user feedback.

9 citations