Author

Shyam Sundar

Bio: Shyam Sundar is an academic researcher. The author has contributed to research on the topic of eye movement, has an h-index of 1, and has co-authored 1 publication receiving 41 citations.
Topics: Eye movement

Papers
Journal Article
TL;DR: This paper uses an optical-type eye tracking system to control a powered wheelchair, translating the user's eye movements into commands that a microprocessor converts into signals that control the wheels and thus the overall movement.
Abstract: A powered wheelchair is a mobility-aid device for persons with moderate/severe physical disabilities or chronic diseases as well as the elderly. In order to cater for different disabilities, various kinds of interfaces have been developed for powered wheelchair control, such as joystick control, head control and sip-puff control. Many people with disabilities do not have the ability to control a powered wheelchair using the above-mentioned interfaces. The proposed model is a possible alternative. In this paper, we use an optical-type eye tracking system to control a powered wheelchair. The user's eye movements are translated to screen positions using the optical-type eye tracking system. When the user looks at an appropriate angle, the computer input system sends a command to the software based on the angle of rotation of the pupil, i.e., when the user moves his eyeballs up (move forward), left (move left), or right (move right); in all other cases the wheelchair stops. Once the image has been processed, it moves on to the second part, our microprocessor. The microprocessor takes a USB output from the laptop and converts the signal into signals that are sent to the wheelchair wheels for movement. Also, the pressure and object detection sensors are connected to our microprocessor to provide the necessary feedback for proper operation of the wheelchair system. The final part of the project is the wheelchair itself. The rear wheels provide forward motion. The front two wheels are used for steering left and right. All four wheels are connected to our microprocessor, which sends signals to control the wheels and thus the overall movement.
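
Below is a minimal sketch of the gaze-to-command mapping the abstract describes (eyes up means forward, left means left, right means right, anything else stops the chair). The normalised gaze coordinates, threshold, and command names are illustrative assumptions, not details from the paper.

# Minimal sketch of the gaze-to-command mapping described above. Assumes a
# hypothetical eye tracker that reports the pupil's horizontal/vertical
# offset from the screen centre, normalised to [-1, 1].

def gaze_to_command(dx: float, dy: float, threshold: float = 0.5) -> str:
    """Map a normalised gaze offset to a wheelchair command.

    dx > 0 means the user is looking right; dy > 0 means looking up.
    Any gaze inside the central dead zone stops the chair (fail-safe).
    """
    if dy > threshold:
        return "FORWARD"   # eyes up -> move forward
    if dx < -threshold:
        return "LEFT"      # eyes left -> turn left
    if dx > threshold:
        return "RIGHT"     # eyes right -> turn right
    return "STOP"          # all other cases -> stop

# Example: the microprocessor would receive these commands over USB.
print(gaze_to_command(0.1, 0.8))   # FORWARD
print(gaze_to_command(-0.7, 0.0))  # LEFT
print(gaze_to_command(0.0, 0.0))   # STOP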

46 citations


Cited by
Journal ArticleDOI
TL;DR: The results of this study indicate that ML and IoT are important aspects in evolving eye tracking applications owing to their ability to learn from existing data, make better decisions, be flexible, and eliminate the need to manually re-calibrate the tracker during the eye tracking process.
Abstract: Eye tracking is the process of measuring where one is looking (the point of gaze) or the motion of an eye relative to the head. Researchers have developed different algorithms and techniques to automatically track the gaze position and direction, which are helpful in different applications. Research on eye tracking is increasing owing to its ability to facilitate many different tasks, particularly for the elderly or users with special needs. This study aims to explore and review eye tracking concepts, methods, and techniques by further elaborating on efficient and effective modern approaches such as machine learning (ML), the Internet of Things (IoT), and cloud computing. These approaches have been in use for more than two decades and are heavily used in the development of recent eye tracking applications. The results of this study indicate that ML and IoT are important aspects in evolving eye tracking applications owing to their ability to learn from existing data, make better decisions, be flexible, and eliminate the need to manually re-calibrate the tracker during the eye tracking process. They also show that ML-based eye tracking techniques achieve more accurate detection results than traditional event-detection methods. Furthermore, various motives and factors in the use of a specific eye tracking technique or application are explored and recommended. Finally, some future directions related to the use of eye tracking in several developed applications are described.
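
To make the contrast with "traditional event-detection methods" concrete, here is a minimal sketch of the classic velocity-threshold (I-VT) detector, which labels each gaze sample as part of a fixation or a saccade; the sampling rate and velocity threshold are illustrative assumptions.

# Minimal sketch of a traditional velocity-threshold (I-VT) event detector:
# gaze samples whose angular velocity exceeds a fixed threshold are labelled
# saccades, the rest fixations. Sampling rate and threshold are illustrative.

def ivt_classify(x, y, sample_rate_hz=250.0, threshold_deg_s=30.0):
    """Label each gaze sample 'fixation' or 'saccade'.

    x, y: gaze positions in degrees of visual angle, one sample per element.
    """
    labels = ["fixation"]  # first sample has no velocity estimate
    for i in range(1, len(x)):
        dx, dy = x[i] - x[i - 1], y[i] - y[i - 1]
        velocity = (dx ** 2 + dy ** 2) ** 0.5 * sample_rate_hz  # deg/s
        labels.append("saccade" if velocity > threshold_deg_s else "fixation")
    return labels

# A steady gaze followed by a rapid jump: the jump is flagged as a saccade.
print(ivt_classify([0.0, 0.01, 0.02, 5.0, 5.01], [0.0] * 5))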

75 citations

Journal ArticleDOI
08 Apr 2021
TL;DR: This research article discusses how the proposed eye-tracking EWC improves accuracy and minimizes response delay, achieving an accuracy of about 90% and the shortest response time compared with existing methods.
Abstract: In this fast-paced world, it is very challenging for the elderly and disabled population to move independently to their desired places at any convenient time. Fortunately, some of these people have good eyesight and are physically strong enough to take care of themselves. Nevertheless, an electric wheelchair (EWC) can provide them a better lifestyle with commendable confidence. At the same time, hand, head and voice recognition-based EWCs meet many limitations. In contrast, the eye-tracking-based EWC provides better smartness in their lifestyle. This research article discusses better accuracy achievement and minimized response delay in the proposed system. The proposed eye-tracking EWC differs from other existing systems in the validation parameters of its controller, and it introduces edge detection to identify the eye pupil position in the face. The proposed method includes a PID controller to control the DC motor, which in turn controls the rotation of the wheels in the EWC. This research article is mainly focused on cost-effectiveness and improvement in system performance. The display system is mounted in front of the sitting position of EWC users. The camera captures the eye pupil position, which determines the direction of EWC movement by controlling the DC motor with the help of a PID controller. When derivative (D) control is used in the proposed system, the system response is considerably faster, reducing the delay between the user's action and the system's reaction. The eye pupil position is determined by a Canny edge detector, which provides good results compared with other edge detection approaches. Object detection in front of the EWC is an added advantage of the proposed system. The proposed article integrates all the activities and measures the system performance. The proposed model achieves an accuracy of about 90%, and its response time is the shortest compared with the existing methods.
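
As a rough illustration of the control path described above, the sketch below pairs OpenCV's Canny edge detector with a textbook PID update. Taking the centroid of the detected edges as the pupil position is a deliberate simplification, and all gains and thresholds are illustrative assumptions rather than values from the paper.

import cv2
import numpy as np

class PID:
    """Textbook PID update; gains are illustrative, not from the paper."""
    def __init__(self, kp=1.0, ki=0.05, kd=0.2):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error: float, dt: float) -> float:
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt  # D term speeds up response
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

def pupil_offset(eye_gray: np.ndarray) -> float:
    """Crude pupil localisation: centroid of Canny edges, standing in for
    the paper's edge-based pupil detection. Returns horizontal offset in px."""
    edges = cv2.Canny(eye_gray, 50, 150)
    ys, xs = np.nonzero(edges)
    if xs.size == 0:
        return 0.0  # no edges found: treat as centred (no steering)
    return float(xs.mean() - eye_gray.shape[1] / 2)

# One control step: the pupil's offset from the image centre is the error
# signal; the PID output would set the DC motor's steering speed.
pid = PID()
frame = np.zeros((60, 120), dtype=np.uint8)
cv2.circle(frame, (80, 30), 10, 255, -1)  # synthetic "pupil" right of centre
steer = pid.update(pupil_offset(frame), dt=1 / 30)
print(f"steering command: {steer:.1f}")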

46 citations

Proceedings ArticleDOI
20 May 2019
TL;DR: In this paper, a multi-modal system consisting of different sensing, decision-making and actuating modalities is presented to assist those with movement disabilities in reaching for objects of interest, grasping them, and using them to interact with other objects.
Abstract: Assistive robotic systems endeavour to support those with movement disabilities, enabling them to move again and regain functionality. The main issue with these systems is the complexity of their low-level control, and how to translate this into simpler, higher-level commands that are easy and intuitive for a human user to interact with. We have created a multi-modal system, consisting of different sensing, decision-making and actuating modalities, leading to intuitive, human-in-the-loop assistive robotics. The system takes its cue from the user's gaze to decode their intentions and implement low-level motion actions to achieve high-level tasks. This results in the user simply having to look at the objects of interest for the robotic system to assist them in reaching for those objects, grasping them, and using them to interact with other objects. We present our method for 3D gaze estimation, and a grammars-based implementation of sequences of action with the robotic system. The 3D gaze estimation is evaluated with 8 subjects, showing an overall accuracy of 4.68 ± 0.14 cm. The full system is tested with 5 subjects, showing successful implementation of 100% of reach-to-gaze-point actions, and full implementation of pick-and-place tasks in 96% and pick-and-pour tasks in 76% of cases. Finally, we present a discussion of our results and of the future work needed to improve the system.
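
The "grammars-based implementation of sequences of action" lends itself to a small sketch: a high-level task is expanded into an ordered sequence of low-level motion primitives, each executed at the user's current 3D gaze point. The task grammar and primitive names below are hypothetical, not taken from the paper.

# Hypothetical sketch of a grammar-based action sequencer: each high-level
# task expands into an ordered sequence of low-level motion primitives,
# parameterised by the user's 3D gaze point. Names are illustrative.

TASK_GRAMMAR = {
    "reach":          ["move_to_gaze"],
    "pick":           ["move_to_gaze", "open_gripper", "descend",
                       "close_gripper", "lift"],
    "pick_and_place": ["pick", "move_to_gaze", "descend",
                       "open_gripper", "retract"],
    "pick_and_pour":  ["pick", "move_to_gaze", "tilt_wrist", "untilt_wrist"],
}

PRIMITIVES = {"move_to_gaze", "open_gripper", "close_gripper",
              "descend", "lift", "tilt_wrist", "untilt_wrist", "retract"}

def expand(task: str) -> list[str]:
    """Recursively expand a task into primitive actions."""
    if task in PRIMITIVES:
        return [task]
    actions = []
    for step in TASK_GRAMMAR[task]:
        actions.extend(expand(step))
    return actions

# The robot would execute each primitive at the current 3D gaze estimate.
print(expand("pick_and_pour"))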

30 citations

Journal ArticleDOI
TL;DR: The results revealed that the prototype BCW could be operated in either of the proposed modes, and the proposed navigation system had a flexible design that could be interfaced with other assistive technologies.
Abstract: Currently, electric wheelchairs are commonly used to improve mobility in disabled people. In severe cases, the user is unable to control the wheelchair by themselves because their motor functions are disabled. To restore mobility, a brain-controlled wheelchair (BCW) would be a promising system that allows the patient to control the wheelchair with their thoughts. P300 is a reliable brain electrical signal, a component of visual event-related potentials (ERPs), that can be used for interpreting user commands. This research aimed to propose a prototype BCW to allow severely motor-disabled patients to practically control a wheelchair in their home environment. The users were able to select from 9 possible destination commands in the automatic mode and from 4 directional commands (forward, backward, turn left and turn right) in the shared-control mode. These commands were selected via the designed P300 processing system. The wheelchair was steered to the desired location by the implemented navigation system. The safety of the user was ensured during wheelchair navigation by the included obstacle detection and avoidance features. A combination of P300 and EOG was used as a hybrid BCW system. The user could fully operate the system (e.g., enabling the P300 detection system, shifting modes, and issuing stop/cancellation commands) by performing different numbers of consecutive blinks to generate eye-blinking patterns. The results revealed that the prototype BCW could be operated in either of the proposed modes. With the new design of the LED-based P300 stimulator, the average accuracies of the P300 detection algorithm in the shared-control and automatic modes were 95.31% and 83.42%, with 3.09 and 3.79 bits/min, respectively. The P300 classification error was acceptable, as the user could cancel an incorrect command by blinking twice. Moreover, the proposed navigation system had a flexible design that could be interfaced with other assistive technologies. This research developed 3 alternative input modules: an eye tracker module and chin and hand controller modules. The user can select the most suitable assistive technology based on his/her level of disability. Other existing assistive technologies could also be connected to the proposed system in the future using the same protocol.
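
As a rough illustration of the blink-pattern interface, the sketch below counts consecutive blinks within a short time window and maps the count to a system command; the window length and the count-to-command mapping are illustrative assumptions, not the paper's actual protocol.

# Hypothetical blink-pattern decoder: consecutive blinks detected (e.g. from
# EOG) within a short window are counted and mapped to a system command.
# Window length and the count-to-command mapping are illustrative.

BLINK_COMMANDS = {
    2: "CANCEL_LAST_COMMAND",   # e.g. cancel a misclassified P300 selection
    3: "TOGGLE_P300_DETECTION",
    4: "SHIFT_MODE",            # switch between shared-control and automatic
}

def decode_blinks(blink_times: list[float], window_s: float = 1.5) -> str | None:
    """Count blinks falling within `window_s` of the first blink and return
    the mapped command, or None if the pattern is unrecognised."""
    if not blink_times:
        return None
    count = sum(1 for t in blink_times if t - blink_times[0] <= window_s)
    return BLINK_COMMANDS.get(count)

# Two quick blinks cancel the last (possibly incorrect) P300 command.
print(decode_blinks([0.00, 0.45]))        # CANCEL_LAST_COMMAND
print(decode_blinks([0.00, 0.45, 0.90]))  # TOGGLE_P300_DETECTION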

30 citations

Journal ArticleDOI
TL;DR: A system that aids people with motor disabilities by restoring their ability to move effectively and effortlessly without having to rely on others, utilizing an eye-controlled electric wheelchair, and that provides low-cost tools for organizations assisting wheelchair users.
Abstract: In the 34 developed and 156 developing countries, there are about 132 million disabled people who need a wheelchair, constituting 1.86% of the world population. Moreover, millions of people suffer from diseases related to motor disabilities that cause an inability to produce controlled movement in any of the limbs or even the head. The paper proposes a system to aid people with motor disabilities by restoring their ability to move effectively and effortlessly, without having to rely on others, utilizing an eye-controlled electric wheelchair. The system input was images of the user's eye, which were processed to estimate the gaze direction, and the wheelchair was moved accordingly. To accomplish such a feat, four user-specific methods were developed, implemented and tested, all of which were based on a benchmark database created by the authors. The first three techniques were automatic, employed correlation and were variants of template matching, while the last one used convolutional neural networks (CNNs). Different metrics to quantitatively evaluate the performance of each algorithm in terms of accuracy and latency were computed, and an overall comparison is presented. The CNN exhibited the best performance (i.e., 99.3% classification accuracy), and thus it was the model of choice for the gaze estimator, which commands the wheelchair motion. The system was evaluated carefully on 8 subjects, achieving 99% accuracy under changing illumination conditions, both outdoors and indoors. This required modifying a motorized wheelchair to adapt it to the predictions output by the gaze estimation algorithm. The wheelchair controller can bypass any decision made by the gaze estimator and immediately halt its motion, with the help of an array of proximity sensors, if the measured distance goes below a well-defined safety margin.
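
The safety override described at the end of the abstract reduces to a simple rule that always takes priority over the gaze estimator's output. The sensor interface and the 0.5 m safety margin below are illustrative assumptions, not values from the paper.

# Hypothetical safety-override layer: the proximity-sensor check always takes
# priority over the gaze estimator's command. The sensor readings and the
# 0.5 m safety margin are illustrative assumptions.

SAFETY_MARGIN_M = 0.5

def safe_command(gaze_command: str, proximity_readings_m: list[float]) -> str:
    """Return the gaze command unless any proximity sensor reports an
    obstacle closer than the safety margin, in which case halt immediately."""
    if any(d < SAFETY_MARGIN_M for d in proximity_readings_m):
        return "STOP"
    return gaze_command

# The gaze estimator says FORWARD, but one sensor sees an obstacle at 0.3 m.
print(safe_command("FORWARD", [1.2, 0.3, 2.0]))  # STOP
print(safe_command("FORWARD", [1.2, 1.5, 2.0]))  # FORWARD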

25 citations