Author

Sananda Paul

Bio: Sananda Paul is an academic researcher from Jadavpur University. The author has contributed to research in topics: Gait analysis & Hjorth parameters. The author has an h-index of 2, and has co-authored 3 publications receiving 36 citations.

Papers
Proceedings ArticleDOI
30 Apr 2015
TL;DR: A Support Vector Machine (SVM) classifier is introduced to categorize the EEG feature space associated with different emotional states into their respective classes. The results reveal that the frontal, temporal and parietal regions of the brain are relevant to positive emotion recognition, while the frontal and parietal regions are activated during negative emotion identification.
Abstract: Emotion is a complex set of interactions among subjective and objective factors, governed by neural and hormonal systems, that produces the arousal of feelings, generates cognitive processes, and activates physiological changes such as behavior. Emotions can be reliably recognized from EEG signals. The electroencephalogram (EEG) is a direct reflection of the activity of the hundreds of millions of neurons residing within the brain. Different emotional states create distinct EEG signals in different brain regions, so EEG provides a reliable means of identifying the underlying emotional information. This paper proposes a novel approach to recognizing users' emotions from electroencephalogram (EEG) signals. Audio stimuli are used to elicit positive and negative emotions in subjects. For eight healthy subjects, EEG signals are acquired using seven channels of an EEG amplifier. The results reveal that the frontal, temporal and parietal regions of the brain are relevant to positive emotion recognition, while the frontal and parietal regions are activated during negative emotion identification. After proper signal processing of the raw EEG, features are extracted from each channel over the whole frequency band by the Multifractal Detrended Fluctuation Analysis (MFDFA) method. A Support Vector Machine (SVM) is introduced to categorize the EEG feature space associated with the various emotional states into their respective classes, and is compared with Linear Discriminant Analysis (LDA), Quadratic Discriminant Analysis (QDA) and K-Nearest Neighbor (KNN). For positive emotions, the average classification accuracy over the whole frequency band is 84.50% with SVM, 76.50% with QDA, 75.25% with LDA and only 69.625% with KNN; for negative emotions it is 82.50% with SVM, 72.375% with QDA, 65.125% with LDA and 70.50% with KNN.
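The MFDFA feature extraction mentioned above builds on the detrended fluctuation function. Below is a minimal pure-Python sketch of the q-order fluctuation F_q(s) at a single scale s, with linear detrending per segment; the function name and implementation details are illustrative assumptions, not the authors' code:

```python
def dfa_fluctuation(signal, scale, q=2.0):
    # Profile: cumulative sum of the mean-subtracted signal
    mean = sum(signal) / len(signal)
    profile, acc = [], 0.0
    for x in signal:
        acc += x - mean
        profile.append(acc)
    n_seg = len(profile) // scale
    f2 = []  # per-segment mean-squared residual around a linear trend
    for i in range(n_seg):
        seg = profile[i * scale:(i + 1) * scale]
        t = list(range(scale))
        tm, sm = sum(t) / scale, sum(seg) / scale
        slope = sum((a - tm) * (b - sm) for a, b in zip(t, seg)) / \
                sum((a - tm) ** 2 for a in t)
        resid = [b - (sm + slope * (a - tm)) for a, b in zip(t, seg)]
        f2.append(sum(r * r for r in resid) / scale)
    # q-order fluctuation function F_q(s); q=2 recovers ordinary DFA
    return (sum(f ** (q / 2) for f in f2) / n_seg) ** (1.0 / q)
```

In full MFDFA, F_q(s) is computed over a range of scales and q values, and the generalized Hurst exponents h(q) are obtained from the slopes of log F_q(s) versus log s; those per-channel exponents would then form the feature vector fed to the classifiers.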

34 citations

Journal ArticleDOI
TL;DR: The results show the best performance with the combinations DWT+SVM and Hjorth+NB for each of the emotions, indicating that the approach has the potential to be used as a real-time EOG-based emotion assessment system.
Abstract: In this study, to recognize positive, neutral and negative emotions from EOG signals, subjects were presented with audio-visual stimuli to elicit emotions. Hjorth parameters and the Discrete Wavelet Transform (DWT, with the Haar mother wavelet) were employed as feature extractors. A Support Vector Machine (SVM) and Naive Bayes (NB) were used to classify the emotions. The multiclass classification results show the best accuracy with the combinations DWT+SVM and Hjorth+NB for each of the emotions. With DWT features, the average SVM accuracies for the three emotions are 81%, 76.33% and 78.61% for horizontal eye movement, and 79.85%, 75.63% and 77.67% for vertical eye movement. When Naive Bayes is paired with the Hjorth parameters, the average recognition rates are 78.43%, 74.61% and 76.34% for horizontal and 77.11%, 74.03% and 75.84% for vertical eye movement. These results indicate that the approach has the potential to be used as a real-time EOG-based emotion assessment system.
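The two feature extractors named in the abstract have compact standard definitions. The sketch below shows the three Hjorth descriptors (activity, mobility, complexity) and a single level of the Haar DWT; it is a generic illustration of these textbook formulas, not the authors' implementation:

```python
def _var(x):
    m = sum(x) / len(x)
    return sum((v - m) ** 2 for v in x) / len(x)

def hjorth(signal):
    # Hjorth descriptors: activity (variance), mobility, complexity
    d1 = [b - a for a, b in zip(signal, signal[1:])]   # first difference
    d2 = [b - a for a, b in zip(d1, d1[1:])]           # second difference
    activity = _var(signal)
    mobility = (_var(d1) / activity) ** 0.5
    complexity = ((_var(d2) / _var(d1)) ** 0.5) / mobility
    return activity, mobility, complexity

def haar_dwt(signal):
    # One level of the Haar DWT: approximation (low-pass) and detail (high-pass)
    s = 2 ** -0.5
    approx = [(signal[i] + signal[i + 1]) * s for i in range(0, len(signal) - 1, 2)]
    detail = [(signal[i] - signal[i + 1]) * s for i in range(0, len(signal) - 1, 2)]
    return approx, detail
```

Per-band statistics of the DWT coefficients (e.g. energy) or the three Hjorth values per channel would then serve as the feature vectors passed to the SVM or Naive Bayes classifier.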

12 citations

Journal Article
TL;DR: The goal of this research is to develop and test an ultrasound-based gait tachography system to enable the doctors and physiotherapists to evaluate lower limb extremity problems.
Abstract: Gait analysis is the study of the structure and function of the foot, lower limb and body during walking or running. In rehabilitation centres its scope is much larger than that of a simple functional assessment tool, as it can help determine the complex relationships between impairment, functional limitation and disability. The goal of this research is to develop and test an ultrasound-based gait tachography system that enables doctors and physiotherapists to evaluate lower-limb problems. Gait tachography employs the Doppler frequency-shift principle to calculate the change in velocity of the body's centre of gravity during gait. The ultrasound-based gait tachograph consists of simple instrumentation, composed of a transmitter block and a receiver block. It is cost-efficient, comparatively small and lightweight, is unobtrusive to the wearer, and is suitable for use outside the laboratory.
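The Doppler frequency-shift principle the tachograph relies on reduces to a one-line conversion: for a wave reflected off a moving target, Δf = 2·v·f₀/c, so v = Δf·c/(2·f₀). The sketch below assumes an illustrative 40 kHz ultrasonic carrier and the speed of sound in air; the paper does not specify these values:

```python
def doppler_velocity(f_shift_hz, f_carrier_hz=40_000.0, c=343.0):
    """Velocity of a reflecting target from the measured Doppler shift.

    Reflected-wave Doppler: delta_f = 2 * v * f0 / c  =>  v = delta_f * c / (2 * f0).
    f_carrier_hz (40 kHz) and c (speed of sound in air, m/s) are assumed values.
    """
    return f_shift_hz * c / (2.0 * f_carrier_hz)
```

At a 40 kHz carrier, a walking-speed target of about 1 m/s produces a shift of roughly 233 Hz, which is comfortably within the bandwidth of simple analog demodulation hardware.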

3 citations


Cited by
Journal ArticleDOI
21 Jan 2020-Sensors
TL;DR: This paper covers several classes of sensors for human emotion detection and the measurement of emotion intensity, using contactless methods as well as contact and skin-penetrating electrodes, and proposes a classification of these sensors.
Abstract: Automated emotion recognition (AEE) is an important issue in various fields that use human emotional reactions as a signal for marketing, technical equipment, or human–robot interaction. This paper analyzes scientific research and technical papers to survey sensor use across the various methods implemented or researched. It covers several classes of sensors, using contactless methods as well as contact and skin-penetrating electrodes, for human emotion detection and the measurement of emotion intensity. The results of the analysis present applicable methods for each type of emotion and its intensity, and propose a classification of these methods. The classification of emotion sensors is presented to reveal the area of application and expected outcomes of each method, as well as its limitations. This paper should be relevant for researchers using human emotion evaluation and analysis who need to choose a proper method for their purposes or to find alternatives. Based on the analyzed human emotion recognition sensors and methods, we developed several practical applications for humanizing the Internet of Things (IoT) and affective computing systems.

227 citations

Journal ArticleDOI
TL;DR: A hybrid deep neural network is constructed to process the EEG MFI sequences and recognize human emotional states, combining Convolutional Neural Networks (CNN) with Long Short-Term Memory (LSTM) Recurrent Neural Networks (RNN).
Abstract: The aim of this study is to recognize human emotions from electroencephalographic (EEG) signals. The innovation of our research method involves two aspects. First, we integrate the spatial, frequency-domain, and temporal characteristics of the EEG signals and map them to a two-dimensional image; from these images we build EEG Multidimensional Feature Image (EEG MFI) sequences that represent emotion variation in the EEG signals. Second, we construct a hybrid deep neural network to process the EEG MFI sequences and recognize human emotional states, combining Convolutional Neural Networks (CNN) and Long Short-Term Memory (LSTM) Recurrent Neural Networks (RNN). Empirical research is carried out with the open-source dataset DEAP (a Dataset for Emotion Analysis using EEG, Physiological, and video signals), and the results demonstrate significant improvements over current state-of-the-art approaches in this field. The average per-subject emotion classification accuracy with CLRNN (the hybrid neural network proposed in this study) is 75.21%.
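The CNN-then-LSTM data flow described above can be illustrated at toy scale: a convolutional stage extracts per-frame features, which an LSTM then integrates across the frame sequence. The sketch below uses a 1-D convolution, mean pooling, and a hidden-size-1 LSTM cell in pure Python; all names, sizes, and weights are illustrative assumptions, far smaller than the CLRNN model in the paper:

```python
import math

def conv1d(x, k):
    # Valid 1-D convolution (no kernel flip), followed by ReLU
    out = [sum(x[i + j] * k[j] for j in range(len(k)))
           for i in range(len(x) - len(k) + 1)]
    return [max(0.0, v) for v in out]

def lstm_step(x, h, c, W, b):
    # One step of a hidden-size-1 LSTM; W has 4 rows (input, forget,
    # output, candidate gates) over the concatenated vector [h] + x
    z = [h] + x
    i_, f_, o_, g_ = [sum(w * v for w, v in zip(row, z)) + bk
                      for row, bk in zip(W, b)]
    sig = lambda t: 1.0 / (1.0 + math.exp(-t))
    i, f, o, g = sig(i_), sig(f_), sig(o_), math.tanh(g_)
    c = f * c + i * g                 # cell-state update
    return o * math.tanh(c), c       # new hidden state, new cell state

def cnn_lstm(frames, kernel, W, b):
    # CNN per frame -> mean pool -> LSTM across the frame sequence
    h, c = 0.0, 0.0
    for frame in frames:
        feats = conv1d(frame, kernel)
        pooled = sum(feats) / len(feats)
        h, c = lstm_step([pooled], h, c, W, b)
    return h  # sequence representation; a real model adds a softmax head
```

In the paper's setting, each "frame" would be a full 2-D EEG MFI processed by a 2-D CNN, and the final hidden state would feed a classification layer over the emotion labels.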

116 citations

Journal ArticleDOI
TL;DR: Automated classification of emotion-labeled EEG signals using nonlinear higher-order statistics and a deep learning algorithm has the potential for accurate and rapid recognition of human emotions.

107 citations

Journal ArticleDOI
22 Apr 2020-Sensors
TL;DR: This survey paper presents a review on emotion recognition using eye-tracking technology, including a brief introductory background on emotion modeling, eye-tracking devices and approaches, emotion stimulation methods, the emotion-relevant features extractable from eye-tracking data, and a categorical summary and taxonomy of the current literature on emotion recognition using eye-tracking.
Abstract: The ability to detect users' emotions for the purpose of emotion engineering is currently one of the main endeavors of machine learning in affective computing. Among the more common approaches to emotion detection are methods that rely on electroencephalography (EEG), facial image processing and speech inflections. Although eye-tracking is fast becoming one of the most commonly used sensor modalities in affective computing, it is still a relatively new approach for emotion detection, especially when used exclusively. In this survey paper, we present a review of emotion recognition using eye-tracking technology, including a brief introductory background on emotion modeling, eye-tracking devices and approaches, emotion stimulation methods, the emotion-relevant features extractable from eye-tracking data, and, most importantly, a categorical summary and taxonomy of the current literature on emotion recognition using eye-tracking. The review concludes with a discussion of current open research problems and prospective future research directions that will be beneficial for expanding the body of knowledge in emotion detection using eye-tracking as the primary sensor modality.

94 citations

Journal ArticleDOI
TL;DR: A subject-independent emotion recognition technique is proposed for EEG signals, using Variational Mode Decomposition (VMD) for feature extraction and a Deep Neural Network as the classifier, which performs better than state-of-the-art techniques for subject-independent emotion recognition from EEG.

91 citations