Book Chapter

Towards Intelligent Control of Electric Wheelchairs for Physically Challenged People

TL;DR: The obtained results show that the proposed intelligent wheelchair is feasible for the disabled and the elderly with severe mobility disabilities.
Abstract: The chapter deals with the use of soft computing techniques in solving the mobility problems of physically handicapped people using available signals such as face directional gesture, voice, brain and electromyogram (EMG) signals. These signals, depending on the type and degree of handicap, are used to classify commands required to drive a wheelchair. The user's intention is transferred to the wheelchair controller through the human-computer interface (HCI), and the wheelchair is then guided in the intended direction. Additionally, the wheelchair can perform safe and reliable motions by detecting and avoiding obstacles autonomously. Several detection methods and command classification algorithms will be discussed. For smooth and reliable operation, an intelligent controller will be proposed to drive the wheelchair motors, using an adaptive neuro-fuzzy inference system (ANFIS) technique. The chapter introduces a modified method to design a multiple-input, multiple-output (MIMO) ANFIS using only MATLAB. This controller relies on real data received from obstacle avoidance sensors and the HCI unit. The implemented wheelchair is equipped with path detection sensors, GPS tracking and battery level monitoring to guarantee greater safety for the user. It has been tested on 3D simulation software, and the results obtained from the wheelchair prototype and the 3D simulation model demonstrated the performance of the proposed real-time controller in dealing with user requirements and working environment constraints. The cost of the proposed smart wheelchair is appropriate for its intended users. By combining the concepts of soft computing and mechatronics, the implemented wheelchair becomes more sophisticated and gives users greater mobility. The obtained results show that the proposed intelligent wheelchair is feasible for the disabled and the elderly with severe mobility disabilities.
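The abstract above names an ANFIS-based controller but does not reproduce its design. As a rough illustration of the kind of inference an ANFIS learns, the following is a minimal zero-order Sugeno fuzzy sketch mapping one obstacle-distance input to a wheel speed; the membership functions, rule consequents and single-input form are illustrative assumptions, not values from the chapter:

```python
def trimf(x, a, b, c):
    """Triangular membership function with feet at a, c and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_speed(distance_m):
    """Zero-order Sugeno inference: obstacle distance -> normalized wheel speed.
    Rules and singleton consequents are illustrative values only."""
    rules = [
        (lambda d: trimf(d, -0.5, 0.0, 1.0), 0.0),  # near   -> stop
        (lambda d: trimf(d, 0.5, 1.5, 2.5), 0.4),   # medium -> slow
        (lambda d: trimf(d, 1.5, 3.0, 4.5), 1.0),   # far    -> full speed
    ]
    num = sum(mu(distance_m) * speed for mu, speed in rules)
    den = sum(mu(distance_m) for mu, speed in rules)
    return num / den if den else 0.0  # weighted average defuzzification
```

An ANFIS would tune the membership parameters and consequents from data rather than hand-pick them as done here.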
Citations
Journal Article
TL;DR: The obtained results support that the proposed framework can be used for BCI control applications, and preprocesses the non-stationary and non-linear EEG signals to finally use a Bidirectional Long Short-Term Memory (BiLSTM) to classify corresponding feature sequences.
Abstract: Brain-Computer Interface (BCI) paradigms based on Motor Imagery Electroencephalogram (MI-EEG) signals have been developed because the related signals can be generated voluntarily to control further applications. Studies using MI-EEG signals from strong, large limbs have reported significant classification rates for applied BCI systems. However, MI-EEG signals produced by imagined movements of small limbs present a real classification challenge for effective use in BCI systems, owing to a reduced signal level and increased noise and distortion. This study aims to decode individual right-hand fingers' imagined movements for BCI applications, using MI-EEG signals from the C3, Cz, P3, and Pz channels. For this purpose, Empirical Mode Decomposition (EMD) preprocesses the non-stationary and non-linear EEG signals, and a Bidirectional Long Short-Term Memory (BiLSTM) network then classifies the resulting feature sequences. An average accuracy of 98.8% was achieved for ring-finger movement decoding using k-fold cross-validation on a public dataset (Scientific Data). The obtained results support that the proposed framework can be used for BCI control applications.
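The EMD preprocessing mentioned above works by iteratively "sifting" a signal: interpolating envelopes through its local maxima and minima and subtracting their mean. A minimal single-pass sketch, using simple piecewise-linear envelopes instead of the cubic splines commonly used in full EMD implementations:

```python
def local_extrema(x):
    """Indices of strict local maxima and minima of a sequence."""
    maxima, minima = [], []
    for i in range(1, len(x) - 1):
        if x[i-1] < x[i] > x[i+1]:
            maxima.append(i)
        elif x[i-1] > x[i] < x[i+1]:
            minima.append(i)
    return maxima, minima

def interp(indices, values, n):
    """Piecewise-linear envelope through (index, value) points, padded to the ends."""
    pts = [(0, values[0])] + list(zip(indices, values)) + [(n - 1, values[-1])]
    env, j = [], 0
    for i in range(n):
        while j < len(pts) - 2 and pts[j+1][0] <= i:
            j += 1
        (x0, y0), (x1, y1) = pts[j], pts[j+1]
        t = (i - x0) / (x1 - x0) if x1 != x0 else 0.0
        env.append(y0 + t * (y1 - y0))
    return env

def sift_once(x):
    """One EMD sifting pass: subtract the mean of the upper/lower envelopes."""
    maxima, minima = local_extrema(x)
    if len(maxima) < 1 or len(minima) < 1:
        return list(x)  # monotone segment: nothing to sift
    upper = interp(maxima, [x[i] for i in maxima], len(x))
    lower = interp(minima, [x[i] for i in minima], len(x))
    return [xi - (u + l) / 2 for xi, u, l in zip(x, upper, lower)]
```

Full EMD repeats this pass until a stopping criterion is met, extracts the result as an intrinsic mode function, and recurses on the residue.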
Journal Article
TL;DR: In this paper, an imagined speech-based brain wave pattern recognition approach using deep learning was proposed, in which a wavelet scattering transformation was applied to extract the most stable features by passing the EEG dataset through a series of filtration processes.
Abstract: In this paper, we propose an imagined speech-based brain wave pattern recognition using deep learning. Multiple features were extracted concurrently from eight-channel electroencephalography (EEG) signals. To obtain classifiable EEG data with fewer sensors, we placed the EEG sensors on carefully selected spots on the scalp. To reduce the dimensions and complexity of the EEG dataset and to avoid overfitting during deep learning, we utilized the wavelet scattering transformation. A low-cost 8-channel EEG headset was used with MATLAB 2023a to acquire the EEG data. A long short-term memory recurrent neural network (LSTM-RNN) was used to decode the identified EEG signals into four audio commands: up, down, left, and right. The wavelet scattering transformation was applied to extract the most stable features by passing the EEG dataset through a series of filtration processes; filtration was implemented for each individual command in the EEG datasets. The proposed imagined speech-based brain wave pattern recognition approach achieved a 92.50% overall classification accuracy. This accuracy is promising for designing trustworthy imagined speech-based brain-computer interface (BCI) real-time systems in the future. For better evaluation of the classification performance, other metrics were considered, and we obtained 92.74%, 92.50%, and 92.62% for precision, recall, and F1-score, respectively.
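Wavelet scattering itself involves cascaded wavelet convolutions, modulus nonlinearities and averaging; as a much-simplified illustration of the idea of extracting stable multiscale features, the sketch below computes Haar detail-coefficient energies per decomposition level. It is a crude stand-in for intuition only, not the paper's actual pipeline:

```python
def haar_level(signal):
    """One level of the Haar transform: (approximation, detail) coefficients."""
    approx = [(signal[i] + signal[i+1]) / 2 for i in range(0, len(signal) - 1, 2)]
    detail = [(signal[i] - signal[i+1]) / 2 for i in range(0, len(signal) - 1, 2)]
    return approx, detail

def haar_energy_features(signal, levels=3):
    """Energy of the detail coefficients at each level: a small feature
    vector summarizing signal variation per scale."""
    feats = []
    current = list(signal)
    for _ in range(levels):
        current, detail = haar_level(current)
        feats.append(sum(d * d for d in detail))
        if len(current) < 2:
            break
    return feats
```

A constant signal yields zero energy at every scale, while a rapidly alternating one concentrates its energy at the finest level, which is the kind of scale-separation a scattering transform exploits far more robustly.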
Journal Article
TL;DR: In this article, the authors used a linear discriminant analysis (LDA) classifier to compute the decision boundary between left-hand and right-hand imagination for a BCI-controlled virtual reality system.
Abstract: The contribution of the brain-computer interface (BCI) ranges from prevention of disease to neuronal control for disabled people. A BCI-controlled virtual reality system is a potentially important new assistive technology to aid various physically disabled people (e.g., paralyzed people) by monitoring brain activity and translating desired signal features to operate external devices. This research used motor imagery obtained from EEG data, involving three main phases (preprocessing, feature extraction, and classification of brain signals), and applied a linear discriminant analysis (LDA) classifier to compute the decision boundary between left-hand and right-hand imagination. In this context, motor imagery-based EEG data was segmented and classified to be used as a controller for the BCI. Experimental results reflect the significant impact of the various classifiers, and the approach is expected to aid paralyzed people in converting their imagination into reality.
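Fisher's two-class LDA, as used above, projects features onto w = S_w^-1 (m1 - m0) and thresholds at the projected midpoint of the class means. A minimal sketch for 2-D feature vectors, with the dimension and data purely illustrative:

```python
def mean_vec(X):
    """Component-wise mean of a list of 2-D points."""
    n = len(X)
    return [sum(x[j] for x in X) / n for j in range(len(X[0]))]

def lda_train(X0, X1):
    """Fisher LDA for two classes of 2-D features:
    w = S_w^-1 (m1 - m0), threshold c at the projected midpoint."""
    m0, m1 = mean_vec(X0), mean_vec(X1)
    # Pooled within-class scatter matrix S_w (2x2)
    S = [[0.0, 0.0], [0.0, 0.0]]
    for X, m in ((X0, m0), (X1, m1)):
        for x in X:
            d = [x[0] - m[0], x[1] - m[1]]
            S[0][0] += d[0] * d[0]; S[0][1] += d[0] * d[1]
            S[1][0] += d[1] * d[0]; S[1][1] += d[1] * d[1]
    det = S[0][0] * S[1][1] - S[0][1] * S[1][0]  # 2x2 inverse by hand
    inv = [[ S[1][1] / det, -S[0][1] / det],
           [-S[1][0] / det,  S[0][0] / det]]
    dm = [m1[0] - m0[0], m1[1] - m0[1]]
    w = [inv[0][0] * dm[0] + inv[0][1] * dm[1],
         inv[1][0] * dm[0] + inv[1][1] * dm[1]]
    c = sum(w[j] * (m0[j] + m1[j]) / 2 for j in range(2))
    return w, c

def lda_predict(w, c, x):
    """Class 1 if the projection of x exceeds the midpoint threshold."""
    return 1 if w[0] * x[0] + w[1] * x[1] > c else 0
```

Real MI-EEG features (e.g., band-power vectors) would replace the toy 2-D points, but the decision rule is the same.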
References
Journal Article
TL;DR: The various methodologies and algorithms for EMG signal analysis are illustrated to provide efficient and effective ways of understanding the signal and its nature to help researchers develop more powerful, flexible, and efficient applications.
Abstract: Electromyography (EMG) signals can be used for clinical/biomedical applications, Evolvable Hardware Chip (EHW) development, and modern human computer interaction. EMG signals acquired from muscles require advanced methods for detection, decomposition, processing, and classification. The purpose of this paper is to illustrate the various methodologies and algorithms for EMG signal analysis to provide efficient and effective ways of understanding the signal and its nature. We further point out some of the hardware implementations using EMG, focusing on applications related to prosthetic hand control, grasp recognition, and human computer interaction. A comparison study is also given to show the performance of various EMG signal analysis methods. This paper provides researchers with a good understanding of the EMG signal and its analysis procedures. This knowledge will help them develop more powerful, flexible, and efficient applications.

1,195 citations
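A common first step in the EMG classification pipelines such surveys cover is extracting time-domain features from a windowed signal. A minimal sketch of four classic features; the zero-crossing threshold value is an illustrative assumption:

```python
def emg_features(x, zc_threshold=0.01):
    """Classic time-domain EMG features for one analysis window:
    mean absolute value (MAV), root mean square (RMS),
    waveform length (WL), and zero crossings (ZC)."""
    n = len(x)
    mav = sum(abs(v) for v in x) / n
    rms = (sum(v * v for v in x) / n) ** 0.5
    wl = sum(abs(x[i+1] - x[i]) for i in range(n - 1))
    # Count sign changes whose amplitude step exceeds a noise threshold
    zc = sum(1 for i in range(n - 1)
             if x[i] * x[i+1] < 0 and abs(x[i] - x[i+1]) >= zc_threshold)
    return {"MAV": mav, "RMS": rms, "WL": wl, "ZC": zc}
```

Feature vectors like this are what downstream classifiers (neural networks, fuzzy systems, etc.) typically consume for prosthetic or wheelchair command recognition.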

Journal Article
08 Mar 2013
TL;DR: Noninvasive brain-computer interfaces (BCIs) offer a promising solution to this interaction problem for people who are unable to use conventional controls due to severe motor disabilities.
Abstract: Independent mobility is central to being able to perform activities of daily living by oneself. However, power wheelchairs are not an option for many people who, due to severe motor disabilities, are unable to use conventional controls. For some of these people, noninvasive brain-computer interfaces (BCIs) offer a promising solution to this interaction problem.

386 citations

Journal Article
TL;DR: The speech recognition classification performance is investigated using two standard neural network structures as classifiers: a feed-forward neural network with a back-propagation algorithm and a radial basis function neural network.
Abstract: This paper presents an investigation of speech recognition classification performance. The investigation is performed using two standard neural network structures as classifiers. The utilized network types are a feed-forward neural network (NN) with a back-propagation algorithm and a radial basis function neural network.

101 citations
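The feed-forward network with back-propagation compared above can be illustrated with a deliberately tiny 2-2-1 sigmoid network. The architecture, learning rate and initialization below are illustrative choices, not the paper's configuration:

```python
import math, random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

class TinyNet:
    """Minimal 2-2-1 feed-forward network trained with plain back-propagation."""
    def __init__(self, seed=0):
        rng = random.Random(seed)
        self.w1 = [[rng.uniform(-1, 1) for _ in range(2)] for _ in range(2)]
        self.b1 = [0.0, 0.0]
        self.w2 = [rng.uniform(-1, 1) for _ in range(2)]
        self.b2 = 0.0

    def forward(self, x):
        h = [sigmoid(self.w1[j][0] * x[0] + self.w1[j][1] * x[1] + self.b1[j])
             for j in range(2)]
        y = sigmoid(self.w2[0] * h[0] + self.w2[1] * h[1] + self.b2)
        return h, y

    def train(self, data, epochs=1000, lr=0.5):
        """Stochastic gradient descent on squared error."""
        for _ in range(epochs):
            for x, t in data:
                h, y = self.forward(x)
                dy = (y - t) * y * (1 - y)  # output-layer delta
                for j in range(2):
                    dh = dy * self.w2[j] * h[j] * (1 - h[j])  # hidden delta
                    self.w2[j] -= lr * dy * h[j]
                    self.b1[j] -= lr * dh
                    for i in range(2):
                        self.w1[j][i] -= lr * dh * x[i]
                self.b2 -= lr * dy

def mse(net, data):
    """Mean squared error of the network over a dataset."""
    return sum((net.forward(x)[1] - t) ** 2 for x, t in data) / len(data)
```

A radial basis function network would replace the sigmoid hidden layer with distance-based Gaussian units, which is the comparison the paper draws.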

Journal Article
TL;DR: A dependent-user voice recognition system and ultrasonic and infrared sensor systems have been integrated into this wheelchair, which can be driven using voice commands while avoiding obstacles and detecting downstairs drops or holes.
Abstract: This paper describes a wheelchair for physically disabled people developed within the UMIDAM Project. A dependent-user voice recognition system and ultrasonic and infrared sensor systems have been integrated into this wheelchair. In this way we have obtained a wheelchair that can be driven using voice commands, with the ability to avoid obstacles and to detect downstairs drops or holes. The wheelchair has also been developed to allow autonomous driving (for example, following walls). The project, in which two prototypes have been produced, has been carried out entirely in the Electronics Department of the University of Alcala (Spain). It has been financed by the ONCE. Electronic system configuration, the sensor system, a mechanical model, control (low-level control, control by voice commands), voice recognition and autonomous control are considered. The results of the experiments carried out on the two prototypes are also given.

85 citations
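The combination of voice commands with ultrasonic/infrared safety overrides described above can be sketched as a simple command-arbitration layer. The command set, wheel-speed values and stop distance below are illustrative assumptions, not the UMIDAM design:

```python
STOP = (0.0, 0.0)

def arbitrate(voice_cmd, front_dist_m, floor_drop_detected, stop_dist_m=0.5):
    """Map a recognized voice command to (left, right) wheel speeds,
    letting the sensor layer override the user when danger is detected."""
    commands = {
        "forward": (1.0, 1.0),
        "back":    (-0.5, -0.5),
        "left":    (-0.3, 0.3),
        "right":   (0.3, -0.3),
        "stop":    STOP,
    }
    speeds = commands.get(voice_cmd, STOP)  # unrecognized command -> stop
    moving_forward = speeds[0] > 0 and speeds[1] > 0
    if moving_forward and (front_dist_m < stop_dist_m or floor_drop_detected):
        return STOP  # obstacle or downstairs/hole ahead: safety override
    return speeds
```

Turning in place remains allowed near an obstacle, which lets the user steer away from it rather than being locked out entirely.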

Proceedings Article
21 May 2006
TL;DR: A prototype system using off-the-shelf components was built and tested successfully by developing a graphical user interface (GUI) in the LabVIEW environment, and the effects of the position and orientation of the permanent magnet on the sensors were modeled in FEMLAB and experimentally measured.
Abstract: The "tongue drive" system is a tongue-operated assistive technology developed for people with severe disability to control their environment. The tongue is considered an excellent appendage in severely disabled people for operating an assistive device. Tongue Drive consists of an array of Hall-effect magnetic sensors mounted on a dental retainer on the outer side of the teeth to measure the magnetic field generated by a small permanent magnet secured on the tongue. The sensor signals are transmitted across a wireless link and processed to control the movements of a cursor on a computer screen or to operate a powered wheelchair, a phone, or other equipment. The principal advantage of this technology is the possibility of capturing a large variety of tongue movements by processing a combination of sensor outputs. This would provide the user with smooth proportional control as opposed to the switch-based on/off control that is the basis of most existing technologies. We modeled the effects of the position and orientation of the permanent magnet on the sensors in FEMLAB and experimentally measured them. We built a prototype system using off-the-shelf components and tested it successfully by developing a graphical user interface (GUI) in the LabVIEW environment. A small battery-powered wireless mouthpiece with no external components is under development.

56 citations
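The Tongue Drive abstract emphasizes smooth proportional control from combined sensor outputs; as a deliberately simplified, switch-style illustration of decoding a Hall-sensor array, one could map the largest deviation from a magnet-at-rest baseline to a discrete command. The sensor ordering and threshold here are invented for the sketch:

```python
def decode_tongue_command(sensor_mT, baseline_mT, threshold_mT=0.2):
    """Pick the sensor whose field reading deviates most from its rest
    baseline and map it to a discrete command; below the threshold,
    report no command (magnet near its rest position)."""
    labels = ["left", "right", "forward", "back"]  # illustrative sensor layout
    deltas = [abs(s - b) for s, b in zip(sensor_mT, baseline_mT)]
    best = max(range(len(deltas)), key=deltas.__getitem__)
    if deltas[best] < threshold_mT:
        return "neutral"
    return labels[best]
```

The actual system instead combines all sensor outputs into a continuous estimate of magnet pose, which is what enables the proportional control the paper highlights over this kind of on/off decoding.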