Journal Article

Deep Learning for predicting neutralities in Offensive Language Identification Dataset

TL;DR
This work combines the power of deep learning with SVNS to represent a sample's sentiment as SVNS membership functions, yielding a novel framework that can integrate with any neural network model and quantify sentiments using SVNS.
Abstract
Deep learning is advancing rapidly; it has aided in solving problems that were once thought impossible. Natural language understanding is one such task that has evolved with the advancement of deep learning systems. There have been several attempts at sentiment analysis, but they aim to classify a sample as a single emotion. Human emotion in natural language is generally a complex combination of emotions, which may be indeterminate or neutral at times. Neutrosophy is a branch of philosophy that identifies neutralities and uses membership functions (positive, negative, neutral) to quantify a sample into Single Valued Neutrosophic Set (SVNS) values. Our work aims to combine the power of deep learning with SVNS to represent a sample's sentiment as SVNS membership functions. We have worked on the Offensive Language Identification Dataset (OLID). Combining state-of-the-art neural network techniques with neutrosophy allowed us to quantify the sentiments and identify the transition phase between positive and negative ones. We used the transition phase to capture neutral samples, which is beneficial when purely positive/negative samples are desired. We performed experiments using Bidirectional Long Short-Term Memory (BiLSTM) with attention, Bidirectional Encoder Representations from Transformers (BERT), A Lite BERT (ALBERT), A Robustly Optimized BERT Approach (RoBERTa), and MPNet. Our SVNS model performed on par with state-of-the-art neural network models on the OLID dataset. Here, we propose a novel framework that can integrate with any neural network model and quantify sentiments using SVNS.
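To make the idea concrete, here is a minimal NumPy sketch of how a classifier's output could be converted into SVNS memberships, with indeterminacy peaking in the transition phase between positive and negative. The abstract does not specify the paper's exact mapping, so the formula below is purely illustrative.

```python
import numpy as np

def svns_membership(p_pos: float) -> tuple[float, float, float]:
    """Map a model's positive-class probability to illustrative SVNS
    memberships (T, I, F). The indeterminacy term peaks in the
    transition phase between positive and negative, which is where
    neutral samples can be captured. This mapping is a guess for
    illustration only, not the paper's exact formulation."""
    t = p_pos                # truth membership ~ positivity
    f = 1.0 - p_pos          # falsity membership ~ negativity
    i = 1.0 - abs(t - f)     # indeterminacy: maximal at p_pos = 0.5
    return t, i, f

# Samples near the decision boundary get high indeterminacy and can be
# flagged as neutral rather than forced into positive/negative.
for p in (0.95, 0.55, 0.50, 0.10):
    t, i, f = svns_membership(p)
    print(f"p_pos={p:.2f} -> T={t:.2f}, I={i:.2f}, F={f:.2f}")
```

Note that in an SVNS the three memberships each lie in [0, 1] but need not sum to 1, which is what lets indeterminacy be expressed independently of the positive/negative split.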


Citations
Journal Article

Detecting offensive speech in conversational code-mixed dialogue on social media: A contextual dataset and benchmark experiments

TL;DR: This article presents the first dataset for conversation-based hate speech classification, the code-mixed Hindi (ICHCL) dataset, along with an approach for collecting context from long conversations.
Journal Article

Context-aware sentiment analysis with attention-enhanced features from bidirectional transformers

TL;DR: A transfer learning-based bidirectional transformer model is proposed that extracts deep contextual word representations from a review, which exhibit different patterns in different layers; these features are fed into a BGRU for better contextual classification.
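As a rough illustration of the described pipeline, the PyTorch sketch below feeds BERT's contextual token states into a bidirectional GRU. The checkpoint, pooling, and classification head here are assumptions for the sketch, not the authors' exact configuration.

```python
# A minimal sketch, assuming bert-base-uncased and mean pooling.
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class BertBGRU(nn.Module):
    def __init__(self, num_classes: int = 2, hidden: int = 128):
        super().__init__()
        self.bert = AutoModel.from_pretrained("bert-base-uncased")
        self.bgru = nn.GRU(self.bert.config.hidden_size, hidden,
                           batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, num_classes)

    def forward(self, input_ids, attention_mask):
        # Contextual token embeddings from the transformer layers.
        states = self.bert(input_ids=input_ids,
                           attention_mask=attention_mask).last_hidden_state
        out, _ = self.bgru(states)      # re-encode with the BGRU
        return self.head(out.mean(dim=1))  # mean-pool, then classify

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
batch = tok(["the movie was surprisingly good"], return_tensors="pt")
logits = BertBGRU()(batch["input_ids"], batch["attention_mask"])
```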
Proceedings Article

R2D2 at SemEval-2022 Task 5: Attention is only as good as its Values! A multimodal system for identifying misogynist memes

TL;DR: This paper describes the multimodal deep learning system proposed for SemEval 2022 Task 5: MAMI (Multimedia Automatic Misogyny Identification) and reports extensive experiments with combinations of different pre-trained models that can serve as baselines for future work.
References
Proceedings Article

Adam: A Method for Stochastic Optimization

TL;DR: This work introduces Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments, and provides a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework.
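For reference, a minimal NumPy sketch of a single Adam update following the paper's update rule, with the suggested default hyperparameters:

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    m = b1 * m + (1 - b1) * grad      # first-moment (mean) estimate
    v = b2 * v + (1 - b2) * grad**2   # second-moment (uncentered variance)
    m_hat = m / (1 - b1**t)           # bias correction for zero initialization
    v_hat = v / (1 - b2**t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# Usage: minimize f(x) = x^2 from x = 5 (gradient is 2x).
theta, m, v = np.array(5.0), 0.0, 0.0
for t in range(1, 201):
    theta, m, v = adam_step(theta, 2 * theta, m, v, t, lr=0.1)
print(theta)  # converges toward the minimum at 0
```

The bias-correction terms matter early in training: since m and v start at zero, the raw moment estimates are biased toward zero for small t.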
Journal ArticleDOI

Long Short-Term Memory

TL;DR: A novel, efficient, gradient based method called long short-term memory (LSTM) is introduced, which can learn to bridge minimal time lags in excess of 1000 discrete-time steps by enforcing constant error flow through constant error carousels within special units.
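The NumPy sketch below shows one step of the now-standard LSTM cell (including the forget gate, which was added after the original paper); the additive cell-state update is the constant error carousel that lets errors flow across long time lags.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, U, b):
    # W: (4H, D), U: (4H, H), b: (4H,) stacked for the i, f, g, o gates.
    z = W @ x + U @ h + b
    i, f, g, o = np.split(z, 4)
    i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
    g = np.tanh(g)
    c = f * c + i * g        # constant error carousel (additive memory)
    h = o * np.tanh(c)       # gated output of the cell state
    return h, c

D, H = 8, 16
rng = np.random.default_rng(0)
W, U, b = rng.normal(size=(4*H, D)), rng.normal(size=(4*H, H)), np.zeros(4*H)
h, c = np.zeros(H), np.zeros(H)
h, c = lstm_step(rng.normal(size=D), h, c, W, U, b)
```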
Journal Article

Dropout: a simple way to prevent neural networks from overfitting

TL;DR: It is shown that dropout improves the performance of neural networks on supervised learning tasks in vision, speech recognition, document classification and computational biology, obtaining state-of-the-art results on many benchmark data sets.
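A minimal sketch of the technique, using the common "inverted" dropout variant that rescales surviving units at training time so no change is needed at inference (the paper itself instead scales the weights at test time):

```python
import numpy as np

def dropout(x, p=0.5, training=True, rng=np.random.default_rng(0)):
    """Zero each unit with probability p during training; rescale the
    survivors by 1/(1-p) so the expected activation is unchanged."""
    if not training or p == 0.0:
        return x                        # identity at inference
    mask = rng.random(x.shape) >= p     # keep each unit with prob 1 - p
    return x * mask / (1.0 - p)

h = np.ones(10)
print(dropout(h, p=0.5))  # roughly half zeroed, the rest scaled to 2.0
```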
Proceedings Article

Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift

TL;DR: Applied to a state-of-the-art image classification model, Batch Normalization achieves the same accuracy with 14 times fewer training steps, and beats the original model by a significant margin.
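A minimal sketch of the batch normalization forward pass for one mini-batch: normalize each feature using batch statistics, then apply a learned scale (gamma) and shift (beta). The running statistics used at inference are omitted for brevity.

```python
import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    mu = x.mean(axis=0)                    # per-feature batch mean
    var = x.var(axis=0)                    # per-feature batch variance
    x_hat = (x - mu) / np.sqrt(var + eps)  # whitened activations
    return gamma * x_hat + beta            # restore representational power

x = np.random.default_rng(0).normal(loc=3.0, scale=2.0, size=(32, 4))
y = batch_norm(x, gamma=np.ones(4), beta=np.zeros(4))
print(y.mean(axis=0).round(6), y.std(axis=0).round(3))  # ~0 and ~1
```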
Proceedings ArticleDOI

GloVe: Global Vectors for Word Representation

TL;DR: A new global log-bilinear regression model is proposed that combines the advantages of the two major model families in the literature, global matrix factorization and local context window methods, and produces a vector space with meaningful substructure.
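For concreteness, a small NumPy sketch of the GloVe weighted least-squares objective, summed over nonzero co-occurrence counts and using the paper's default weighting parameters (x_max = 100, alpha = 0.75):

```python
import numpy as np

def glove_loss(X, W, W_ctx, b, b_ctx, x_max=100.0, alpha=0.75):
    """J = sum_ij f(X_ij) * (w_i . w~_j + b_i + b~_j - log X_ij)^2,
    where f clips the weight of very frequent co-occurrences."""
    i, j = np.nonzero(X)                            # co-occurring pairs only
    x = X[i, j]
    weight = np.minimum(1.0, (x / x_max) ** alpha)  # weighting function f
    pred = (W[i] * W_ctx[j]).sum(axis=1) + b[i] + b_ctx[j]
    return np.sum(weight * (pred - np.log(x)) ** 2)

V, D = 6, 5
rng = np.random.default_rng(0)
X = rng.integers(0, 4, size=(V, V)).astype(float)   # toy co-occurrence counts
loss = glove_loss(X, rng.normal(size=(V, D)), rng.normal(size=(V, D)),
                  np.zeros(V), np.zeros(V))
print(loss)
```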