
Showing papers by "Anupam Agrawal published in 2022"


Journal ArticleDOI
TL;DR: In this paper, a CapsNet architecture is trained on the DEAP dataset to perform a cross-subject binary classification task, and tuning of its hyperparameters using Bayesian Optimization is analyzed.

14 citations
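
A minimal sketch of Bayesian hyperparameter optimization of the kind the TL;DR describes, using scikit-optimize's gp_minimize. A small MLP stands in for the paper's CapsNet and random arrays stand in for extracted DEAP features, so everything beyond the optimization loop itself is an assumption.

```python
import numpy as np
from skopt import gp_minimize
from skopt.space import Integer, Real
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 64))    # placeholder for extracted EEG features
y = rng.integers(0, 2, size=200)  # placeholder binary labels (e.g., high/low valence)

# Search space over two illustrative hyperparameters.
space = [
    Real(1e-4, 1e-1, prior="log-uniform", name="learning_rate"),
    Integer(16, 128, name="hidden_units"),
]

def objective(params):
    lr, hidden = params
    clf = MLPClassifier(hidden_layer_sizes=(int(hidden),),
                        learning_rate_init=float(lr), max_iter=300)
    # Bayesian optimization minimizes, so return negative CV accuracy.
    return -cross_val_score(clf, X, y, cv=3).mean()

result = gp_minimize(objective, space, n_calls=15, random_state=0)
print("best (learning_rate, hidden_units):", result.x)
print("best cross-validated accuracy:", -result.fun)
```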


Journal ArticleDOI
TL;DR: This work proposes a DWT-EMD feature-level fusion-based seizure detection approach over multi- and single-channel EEG signals, studying the usability of fused discrete wavelet transform (DWT) and empirical mode decomposition (EMD) features, relative to individual DWT and EMD features, with SVM, SVM with RBF kernel, decision tree, and bagging classifiers.
Abstract: Brain-Computer Interface technology enables a pathway for analyzing EEG signals for seizure detection. EEG signal decomposition, feature extraction, and machine learning techniques are well established in seizure detection; however, choosing a decomposition technique and concatenating its features remains an open problem. This work proposes a DWT-EMD feature-level fusion-based seizure detection approach over multi- and single-channel EEG signals and studies the usability of discrete wavelet transform (DWT) and empirical mode decomposition (EMD) feature fusion, relative to individual DWT and EMD features, with SVM, SVM with an RBF kernel, decision tree, and bagging classifiers. All classifiers achieved improved performance with DWT-EMD feature-level fusion on two benchmark seizure detection EEG datasets. Detailed quantitative results are given in the Results section.

8 citations
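
A minimal sketch of the feature-level fusion idea, assuming PyWavelets (pywt) for DWT and the PyEMD package for EMD; the statistical descriptors, window length, and toy labels are illustrative stand-ins for the paper's feature set and datasets.

```python
import numpy as np
import pywt
from PyEMD import EMD
from sklearn.svm import SVC

def band_stats(x):
    # Simple per-band descriptors standing in for the paper's features.
    return [np.mean(x), np.std(x), np.sum(x ** 2)]

def dwt_features(signal, wavelet="db4", level=4):
    # DWT sub-band coefficients -> statistical features.
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    return np.concatenate([band_stats(c) for c in coeffs])

def emd_features(signal, n_imfs=4):
    # EMD intrinsic mode functions -> statistical features
    # (zero-padded if the decomposition yields fewer than n_imfs IMFs).
    imfs = EMD()(signal, max_imf=n_imfs)
    feats = [band_stats(imf) for imf in imfs[:n_imfs]]
    while len(feats) < n_imfs:
        feats.append([0.0, 0.0, 0.0])
    return np.concatenate(feats)

def fused_features(signal):
    # Feature-level fusion: concatenate the DWT and EMD descriptors.
    return np.concatenate([dwt_features(signal), emd_features(signal)])

# Toy example on synthetic one-second windows (256 Hz sampling assumed).
rng = np.random.default_rng(0)
X = np.stack([fused_features(rng.normal(size=256)) for _ in range(40)])
y = rng.integers(0, 2, size=40)  # placeholder seizure / non-seizure labels
clf = SVC(kernel="rbf").fit(X, y)
print("training accuracy:", clf.score(X, y))
```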


Proceedings ArticleDOI
11 Aug 2022
TL;DR: The hateful memes dataset released by Facebook AI is used to test several unimodal approaches and a multimodal approach, establish baseline performance for these models, and highlight the challenges hateful memes pose to the community.
Abstract: This work sheds light on the challenges of hate speech detection in memes and demonstrates various machine learning models for automatically detecting hate in internet memes. Memes are visual content shared on social media, combining a picture with short textual phrases, usually to convey light humour or jokes. However, memes can also be used to convey misinformation and hate, so their early automatic detection is necessary to stop hate from spreading to a wide range of users, which may cause unrest and harm to human life and property. In this paper, the hateful memes dataset released by Facebook AI is used to test several unimodal approaches and a multimodal approach, establish baseline performance for these models, and highlight the challenges hateful memes pose to the community.

1 citation
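
A minimal sketch of one common multimodal baseline of the kind such papers benchmark: precomputed image and text features are concatenated (early fusion) and passed to a small binary classifier in PyTorch. The feature dimensions and the fusion head are assumptions, not the paper's exact models.

```python
import torch
import torch.nn as nn

class MemeFusionClassifier(nn.Module):
    def __init__(self, img_dim=2048, txt_dim=300, hidden=256):
        super().__init__()
        # img_dim / txt_dim are assumed sizes of precomputed features
        # (e.g., CNN pooled features and averaged word embeddings).
        self.fuse = nn.Sequential(
            nn.Linear(img_dim + txt_dim, hidden),
            nn.ReLU(),
            nn.Dropout(0.3),
            nn.Linear(hidden, 1),  # logit for hateful vs. not hateful
        )

    def forward(self, img_feat, txt_feat):
        # Early fusion: concatenate the two modalities along the feature axis.
        return self.fuse(torch.cat([img_feat, txt_feat], dim=-1))

# Toy forward pass on random features standing in for a batch of memes.
model = MemeFusionClassifier()
img = torch.randn(8, 2048)
txt = torch.randn(8, 300)
logits = model(img, txt)
print(logits.shape)  # torch.Size([8, 1])
```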


Proceedings ArticleDOI
11 Aug 2022
TL;DR: This work implemented various machine learning algorithms for hate speech detection on social media platforms and found that XGBoost with TF-IDF embeddings gave an accuracy of 94.43%, the maximum among the three models evaluated on the given benchmark dataset.
Abstract: The purpose of this work is to address the challenges of hate speech recognition on social media platforms, that is, to obtain a machine learning model that can detect hate speech with greater accuracy. As the reach of the internet and mobile phones has expanded rapidly in the past few years, everyone has the power to share their opinions, but some use it as an opportunity to spread hate. In this paper, we use the Davidson [10] dataset, the most popular Twitter dataset for hate speech detection, implement various machine learning algorithms, and compare them on metrics such as accuracy, precision, recall, and F1 score. We find that XGBoost with TF-IDF embeddings gives an accuracy of 94.43%, the maximum among the three models evaluated on the given benchmark dataset.
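
A minimal sketch of a TF-IDF + XGBoost pipeline of the kind described above, using scikit-learn and the xgboost package; the toy corpus and labels are placeholders for the Davidson dataset.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import Pipeline
from xgboost import XGBClassifier

# Toy corpus standing in for the Davidson Twitter dataset.
texts = ["example of an offensive tweet", "a perfectly normal message",
         "another hateful phrase", "friendly greetings to everyone"]
labels = [1, 0, 1, 0]  # 1 = hate speech, 0 = neither (toy labels)

pipe = Pipeline([
    # Unigram + bigram TF-IDF features.
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2), min_df=1)),
    ("xgb", XGBClassifier(n_estimators=200, max_depth=6,
                          eval_metric="logloss")),
])
pipe.fit(texts, labels)
print(pipe.predict(["some new tweet to score"]))
```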

Book ChapterDOI
TL;DR: In this paper, a method is proposed to classify autistic and non-autistic facial images using Model 1 (Xception) and Model 2 (augmentation + Xception); Model 2 achieved the higher accuracy of 98% and a minimum loss of 0.08.
Abstract: Autism spectrum disorder (ASD) is a neurodevelopmental disorder that severely affects the neurology of an autistic person. A child with autism may have difficulty responding to their name, avoid eye contact, and struggle to express emotions. Early diagnosis can help children with ASD enhance their intellectual abilities while reducing autistic symptoms. In computer vision, detecting developmental disorders from facial image data is a significant but largely unexplored challenge. This paper proposes a method to classify autistic and non-autistic facial images using Model 1 (Xception) and Model 2 (augmentation + Xception). Of the two, Model 2 achieved the higher accuracy of 98% and a minimum loss of 0.08.
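
A minimal sketch of Model 2's recipe (augmentation plus a pretrained Xception backbone) in Keras; the input size, augmentation settings, and training schedule are assumptions, and the facial-image dataset is not loaded here.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Illustrative augmentation stack; the paper's exact settings may differ.
augment = models.Sequential([
    layers.RandomFlip("horizontal"),
    layers.RandomRotation(0.1),
    layers.RandomZoom(0.1),
])

base = tf.keras.applications.Xception(
    include_top=False, weights="imagenet", input_shape=(299, 299, 3))
base.trainable = False  # freeze the pretrained backbone

inputs = layers.Input(shape=(299, 299, 3))
x = augment(inputs)
x = tf.keras.applications.xception.preprocess_input(x)
x = base(x, training=False)
x = layers.GlobalAveragePooling2D()(x)
outputs = layers.Dense(1, activation="sigmoid")(x)  # autistic vs. non-autistic

model = models.Model(inputs, outputs)
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.summary()
# model.fit(train_ds, validation_data=val_ds, epochs=20)  # with an image dataset
```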

Proceedings ArticleDOI
11 Aug 2022
TL;DR: A method for recognizing hand gestures and signs is proposed, using Background Subtraction, Skin Masking, and a Convolutional Neural Network for segmentation and classification of gestures into text.
Abstract: This paper aims at analyzing and recognizing American Sign Language (ASL) so that it can be converted to text, in order to facilitate communication with and between differently-abled people, which has long been a key challenge. Many approaches have been formulated to solve this problem, including Principal Component Analysis (PCA), finger peak and angle calculation, and Support Vector Machines (SVM), to name a few. Here we propose a method for recognizing hand gestures and signs using Background Subtraction, Skin Masking, and a Convolutional Neural Network (CNN) for segmentation and classification of gestures into text. The method achieves a training accuracy of approximately 98% under ideal conditions (simple background and good light intensity).
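
A minimal sketch of the segmentation stage, assuming OpenCV: background subtraction and an HSV skin mask isolate the hand before a CNN (omitted here) classifies the gesture. The threshold values are illustrative, not the paper's.

```python
import cv2
import numpy as np

bg_subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=False)

def segment_hand(frame_bgr):
    # Foreground mask from background subtraction.
    fg_mask = bg_subtractor.apply(frame_bgr)
    # Skin mask from an HSV colour-range threshold (illustrative bounds).
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    skin_mask = cv2.inRange(hsv, np.array([0, 30, 60]),
                            np.array([20, 150, 255]))
    # Keep pixels that are both moving foreground and skin-coloured.
    combined = cv2.bitwise_and(fg_mask, skin_mask)
    return cv2.bitwise_and(frame_bgr, frame_bgr, mask=combined)

# The segmented image would then be resized (e.g., to 64x64) and passed
# to a CNN trained on ASL letter classes; the network itself is omitted.
frame = np.zeros((240, 320, 3), dtype=np.uint8)  # placeholder camera frame
hand = segment_hand(frame)
print(hand.shape)
```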

Proceedings ArticleDOI
11 Aug 2022
TL;DR: After noise removal and data augmentation, ResNet50 outperforms the other three pretrained CNN models for the classification of MRI images on the "Cjdata" dataset (also called the "Brain Tumor" dataset).
Abstract: Following an MRI scan of a patient, radiologists delineate the tumor-bearing area of the brain based on their experience. The presence of cerebrospinal fluid and white matter in the brain makes it difficult to pinpoint the tumor's location, and human observation is prone to error, especially when the radiologist has less experience segmenting MRI images. After segmentation, tumors are manually classified based on their growth rate, area of origin, and harmfulness. Deep learning models can segment and classify such data efficiently. Classification can be done before or after segmentation; here, segmentation is performed using U-Net while classification uses some of the most efficient CNN models: VGG16, ResNet50, Inception V3, and SqueezeNet. After noise removal and data augmentation, ResNet50 outperforms the other three pretrained CNN models for the classification of MRI images on the "Cjdata" dataset (also called the "Brain Tumor" dataset).
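
A minimal sketch of the classification stage, assuming Keras: an ImageNet-pretrained ResNet50 backbone with a small head for the three tumor classes commonly associated with the Cjdata set (meningioma, glioma, pituitary). The U-Net segmentation stage, data loading, and augmentation are not reproduced.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

base = tf.keras.applications.ResNet50(
    include_top=False, weights="imagenet", input_shape=(224, 224, 3))
base.trainable = False  # freeze ImageNet weights for initial training

inputs = layers.Input(shape=(224, 224, 3))
x = tf.keras.applications.resnet50.preprocess_input(inputs)
x = base(x, training=False)
x = layers.GlobalAveragePooling2D()(x)
x = layers.Dropout(0.3)(x)
# Three tumor classes assumed: meningioma, glioma, pituitary.
outputs = layers.Dense(3, activation="softmax")(x)

model = models.Model(inputs, outputs)
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
# model.fit(train_ds, validation_data=val_ds, epochs=10)  # with MRI images
```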