Author

Pranav Narayanan Venkit

Bio: Pranav Narayanan Venkit is an academic researcher at Pennsylvania State University. The author has contributed to research on the topics of sentiment analysis and emoji, and has co-authored 2 publications.

Papers
Posted Content
TL;DR: The Bias Identification Test in Sentiments (BITS) corpus is a set of 1,126 sentences designed to probe sentiment analysis models for disability-related bias; it can be used to identify potential biases against disability in any sentiment analysis tool.
Abstract: Sociodemographic biases are a common problem in natural language processing, affecting the fairness and integrity of its applications. Within sentiment analysis, these biases may undermine sentiment predictions for texts that mention personal attributes that unbiased human readers would consider neutral. Such discrimination can have serious consequences in applications of sentiment analysis in both the public and private sectors. For example, incorrect inferences in applications like online abuse detection and opinion analysis on social media platforms can lead to unwanted ramifications, such as wrongful censoring of certain populations. In this paper, we address the discrimination against people with disabilities (PWD) by sentiment analysis and toxicity classification models. We examine sentiment and toxicity analysis models to understand in detail how they discriminate against PWD. We present the Bias Identification Test in Sentiments (BITS), a corpus of 1,126 sentences designed to probe sentiment analysis models for biases related to disability. We use this corpus to demonstrate statistically significant biases in four widely used sentiment analysis tools (TextBlob, VADER, Google Cloud Natural Language API, and DistilBERT) and two toxicity analysis models trained on the Jigsaw challenges (Toxic Comment Classification and Unintended Bias in Toxic Comments). The results show that all six models exhibit strong negative biases on sentences that mention disability. We publicly release the BITS corpus so that others can identify potential biases against disability in any sentiment analysis tool, and so that the corpus can be extended as a test for other sociodemographic variables as well.
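The probing methodology described above can be illustrated with a minimal sketch: slot neutral and disability-related terms into otherwise identical template sentences, score each variant, and compare group means. The template, lexicon, and scorer below are all hypothetical stand-ins (the real BITS corpus and the evaluated models are not reproduced here); they only demonstrate how a biased scorer produces a measurable gap on semantically neutral sentences.

```python
from statistics import mean

# Toy sentiment lexicon. A biased lexicon might wrongly assign negative
# weight to neutral disability terms; BITS-style probing detects exactly this.
LEXICON = {"happy": 2.0, "sad": -2.0, "blind": -0.5, "deaf": -0.5}

def score(sentence: str) -> float:
    """Average lexicon weight over tokens (0.0 for out-of-lexicon words)."""
    tokens = sentence.lower().strip(".").split()
    return mean(LEXICON.get(t, 0.0) for t in tokens)

# Identical template, differing only in the slotted attribute term.
TEMPLATE = "I am a {} person."
NEUTRAL_TERMS = ["tall", "short"]
DISABILITY_TERMS = ["blind", "deaf"]

neutral_mean = mean(score(TEMPLATE.format(t)) for t in NEUTRAL_TERMS)
disability_mean = mean(score(TEMPLATE.format(t)) for t in DISABILITY_TERMS)

# A positive gap means the scorer rates disability mentions more negatively,
# even though both sentence groups are equally neutral to a human reader.
bias_gap = neutral_mean - disability_mean
```

In the full study, the placeholder `score` would be replaced by calls to the actual tools under test (e.g. VADER's polarity scores), and the group difference would be checked for statistical significance rather than simply compared.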
Posted Content
TL;DR: The authors show that Twitter-specific features help a model understand the sentiment of a tweet and predict the most suitable emoji for it. They build a more balanced dataset by crawling additional Twitter data, with timestamp, hashtags, and application source acting as additional attributes of each tweet.
Abstract: We widely use emojis in social networking to heighten, mitigate, or negate the sentiment of a text. Emoji suggestions already exist in many cross-platform applications, but an emoji is typically predicted based on a few prominent words rather than an understanding of the subject and substance of the text. In this paper, we showcase the importance of using Twitter features to help the model understand the sentiment involved and thus predict the most suitable emoji for the text. Hashtags and application sources (such as Android) are two features that we found to be important yet underused in emoji prediction and in Twitter sentiment analysis as a whole. To address this shortcoming and to further understand emoji behavioral patterns, we propose a more balanced dataset built by crawling additional Twitter data, with timestamp, hashtags, and application source acting as additional attributes of each tweet. Our data analysis and neural network model performance evaluations show that using hashtags and application sources as features allows the model to encode different information and is effective for emoji prediction.
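The feature design argued for above can be sketched as a simple vector concatenation: a bag-of-words encoding of the tweet text extended with one-hot encodings of its hashtags and application source. The vocabularies and function below are illustrative assumptions, not the paper's actual preprocessing pipeline; in practice the text would be encoded by the neural model itself and the vocabularies learned from the crawled dataset.

```python
# Illustrative vocabularies (assumed for this sketch, not from the paper).
TEXT_VOCAB = ["good", "morning", "love"]
HASHTAG_VOCAB = ["#happy", "#monday", "#love"]
APP_SOURCES = ["Twitter for Android", "Twitter for iPhone", "Twitter Web App"]

def featurize(text: str, hashtags: list, source: str) -> list:
    """Concatenate a bag-of-words text vector with hashtag and source one-hots."""
    tokens = text.lower().split()
    bow = [tokens.count(w) for w in TEXT_VOCAB]          # text features
    tag_onehot = [1 if h in hashtags else 0 for h in HASHTAG_VOCAB]
    src_onehot = [1 if source == s else 0 for s in APP_SOURCES]
    return bow + tag_onehot + src_onehot

# Example: the hashtag and source features add signal that the bare
# text vector alone would not carry.
vec = featurize("Good morning", ["#monday"], "Twitter for Android")
```

The design choice here mirrors the paper's claim: hashtags and application source encode information that is complementary to the tweet text, so appending them to the input representation gives the downstream emoji classifier extra discriminative signal.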