Self-Harm Detection for Mental Health Chatbots
Saahil Deshpande, Jim Warren, et al.
Vol. 281, pp. 48–52
TL;DR: In this paper, a self-harm classifier was designed to predict whether a user's response to a chatbot indicates intent for self-harm, based on text input from the user.

Abstract:
Chatbots could address deficits in the availability of the traditional health workforce and help to stem concerning rates of youth mental health issues, including high suicide rates. While chatbots have shown some positive results in helping people cope with mental health issues, deep concerns remain about their ability to identify emergency situations and act accordingly. Risk of suicide/self-harm is one such concern, which we have addressed in this project. A chatbot decides its response based on the text input from the user and must correctly recognize the significance of a given input. We designed a self-harm classifier that uses the user's response to the chatbot to predict whether the response indicates intent for self-harm. Given the difficulty of accessing confidential counselling data, we looked for alternative data sources and found that Twitter and Reddit provide data similar to what we would expect from a chatbot user. We trained a sentiment analysis classifier on the Twitter data and a self-harm classifier on the Reddit data, and combined the results of the two models to improve performance. We obtained the best results from an LSTM-RNN classifier using BERT encoding, with a best model accuracy of 92.13%. Tested on new data from Reddit, the model achieved an impressive accuracy of 97%. Such a model is promising for future embedding in mental health chatbots to improve their safety through accurate detection of self-harm talk by users.
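The abstract says the two models' results were combined but does not state how. A minimal late-fusion sketch in Python, where the weighted average, the 0.5 threshold, and the function names are all illustrative assumptions rather than the authors' published method, might look like:

```python
def combine_scores(selfharm_prob: float, sentiment_prob: float,
                   weight: float = 0.7) -> float:
    """Weighted late fusion of the self-harm classifier's probability
    and the negated sentiment score (low sentiment -> higher risk)."""
    negative_sentiment = 1.0 - sentiment_prob
    return weight * selfharm_prob + (1.0 - weight) * negative_sentiment


def flag_for_escalation(selfharm_prob: float, sentiment_prob: float,
                        threshold: float = 0.5) -> bool:
    """Escalate to a human or crisis flow when the fused score is high."""
    return combine_scores(selfharm_prob, sentiment_prob) >= threshold
```

In a deployed chatbot, the two input probabilities would come from the BERT-encoded LSTM-RNN self-harm model and the Twitter-trained sentiment model described above.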
Citations
Journal Article
Natural language processing applied to mental illness detection: a narrative review
TL;DR: In this article, the authors provide a narrative review of mental illness detection using NLP over the past decade to understand methods, trends, challenges, and future directions, and offer recommendations for future studies, including the development of novel detection methods, deep learning paradigms, and interpretable models.
Journal Article
Exploring The Design of Prompts For Applying GPT-3 based Chatbots: A Mental Wellbeing Case Study on Mechanical Turk
Harsh Kumar, Ilya Musabirov, Jiakai Shi, Adele Lauzon, Kwan Kiu Choy, Ofek Gross, Dana Kulzhabayeva, J. J. Williams, et al.
TL;DR: In the problem-solving intent, GPT-3 tries to narrow in on the user's problems and help them brainstorm, identify, and implement an effective solution.
Journal Article
A Critical Review of Text Mining Applications for Suicide Research
Jennifer M. Boggs, Julie M. Kafka, et al.
TL;DR: This paper critically reviews the 2019–2021 literature on text mining projects that use electronic health records, social media data, and death records. Text mining has helped identify risk factors for suicide in general and specific populations (e.g., older adults), has been combined with structured variables in EHRs to predict suicide risk, and has been used to track trends in social media suicidal discourse following population-level events.
Journal Article
Should we agree to disagree about Twitter's bot problem?
TL;DR: It is argued that assumptions about bot-like behavior, the detection approach, and the population inspected can all affect estimates of the percentage of bots on Twitter.
Journal Article
Evaluation of Abstraction Capabilities and Detection of Discomfort with a Newscaster Chatbot for Entertaining Elderly Users
Francisco de Arriba-Pérez, Silvia García-Méndez, Francisco J. González-Castaño, Enrique Costa-Montenegro, et al.
TL;DR: In this paper, the authors propose an intelligent newscaster chatbot for digital inclusion in which user interest is estimated by analysing the sentiment of the user's answers; a differential feature of the approach is automatic and transparent monitoring of the target users' abstraction skills.
References
Proceedings Article
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
TL;DR: BERT pre-trains deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers; the pre-trained model can then be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks.
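For binary classification (as in the self-harm detector above), the "one additional output layer" this reference describes reduces to a linear-plus-sigmoid head over BERT's pooled representation. A toy, framework-free sketch of just that head — the vector sizes and names are illustrative, not from the cited paper:

```python
import math


def output_layer(pooled: list[float], weights: list[float],
                 bias: float) -> float:
    """The task-specific head added on top of a pre-trained encoder:
    a linear map over the pooled representation, then a sigmoid."""
    logit = sum(w * x for w, x in zip(weights, pooled)) + bias
    return 1.0 / (1.0 + math.exp(-logit))
```

During fine-tuning, only this head is new; the encoder weights are initialised from pre-training and updated jointly with the head.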
Journal Article
Chatbots and Conversational Agents in Mental Health: A Review of the Psychiatric Landscape
TL;DR: Preliminary evidence for psychiatric use of chatbots is favourable; however, given the heterogeneity of the reviewed studies, further research with standardized outcomes reporting is required to more thoroughly examine the effectiveness of conversational agents.
Journal Article
An Empathy-Driven, Conversational Artificial Intelligence Agent (Wysa) for Digital Mental Well-Being: Real-World Data Evaluation Mixed-Methods Study
TL;DR: A preliminary real-world data evaluation of the effectiveness and engagement levels of an AI-enabled, empathetic, text-based conversational mobile mental well-being app, Wysa, on users with self-reported symptoms of depression shows promise.