Conference

International Workshop on Rumours and Deception in Social Media

About: The International Workshop on Rumours and Deception in Social Media is an academic conference. The conference publishes mainly in the areas of Task (project management) and Social media. Over its lifetime, the conference has published 9 publications, which have received 28 citations.

Papers
Proceedings Article
01 Jan 2019
TL;DR: In this article, a new approach was proposed to predict user stance towards emerging rumours on Twitter (supporting, denying, querying, or commenting on the original rumour) by examining the conversation threads that the rumour originates.
Abstract: Analysing how people react to rumours associated with news in social media is an important task for preventing the spread of misinformation, which is now widely recognized as a dangerous tendency. In social media conversations, users show different stances and attitudes towards rumourous stories. Some users take a definite stance, supporting or denying the rumour at issue, while others just comment on it or ask for additional evidence about its veracity. Along this line, a new shared task was proposed at SemEval-2017 (Task 8, SubTask A), focused on rumour stance classification in English tweets. The goal is to predict user stance towards emerging rumours on Twitter, in terms of supporting, denying, querying, or commenting on the original rumour, by looking at the conversation threads that the rumour originates. This paper describes a new approach to this task that explores conversation-based and affective-based features covering different facets of affect. Our classification model outperforms the best-performing systems for stance classification at SemEval-2017 Task 8, showing the effectiveness of the proposed feature set.
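The conversation-based and affect-based cues mentioned in the abstract can be illustrated with a minimal sketch. The feature names and cue lexica below are invented for illustration and are not the authors' actual feature set:

```python
# Hypothetical sketch of conversation- and affect-based feature extraction
# for SDQC stance classification (support / deny / query / comment).
# The lexica and feature names are illustrative placeholders.

QUERY_CUES = {"really", "source", "proof", "true"}
DENY_CUES = {"false", "fake", "lie", "hoax", "not"}
SUPPORT_CUES = {"confirmed", "true", "agree", "exactly"}

def extract_features(tweet_text, is_reply, depth_in_thread):
    """Return a small feature dict for one tweet in a rumour thread."""
    tokens = [t.strip("?!.,").lower() for t in tweet_text.split()]
    return {
        "is_reply": int(is_reply),                    # conversation-based
        "thread_depth": depth_in_thread,              # conversation-based
        "has_question_mark": int("?" in tweet_text),  # query cue
        "n_deny_cues": sum(t in DENY_CUES for t in tokens),
        "n_support_cues": sum(t in SUPPORT_CUES for t in tokens),
        "n_query_cues": sum(t in QUERY_CUES for t in tokens),
    }

feats = extract_features("Is this really true? Any source?", True, 1)
```

Feature vectors of this kind would then feed a standard classifier; the point is that both the thread context and affective lexicon counts contribute signals.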

13 citations

Proceedings Article
01 Dec 2020
TL;DR: In this paper, the authors conducted several exploratory analyses to identify the linguistic properties of Arabic fake news with satirical content, and used these features to build a number of machine learning models capable of identifying satirical fake news.
Abstract: One very common type of fake news is satire, which comes in the form of a news website or an online platform that parodies reputable real news agencies to create a sarcastic version of reality. This type of fake news is often disseminated by individuals on their online platforms, as it delivers criticism much more forcefully than a straightforward message. However, when satirical text is disseminated via social media without mention of its source, it can be mistaken for real news. This study conducts several exploratory analyses to identify the linguistic properties of Arabic fake news with satirical content. It shows that although it parodies real news, Arabic satirical news has distinguishing features at the lexico-grammatical level. We exploit these features to build a number of machine learning models capable of identifying satirical fake news with an accuracy of up to 98.6%. The study introduces a new fake news dataset (3,185 articles) scraped from two Arabic satirical news websites (‘Al-Hudood’ and ‘Al-Ahram Al-Mexici’). The real news dataset consists of 3,710 articles collected from three official news sites: ‘BBC-Arabic’, ‘CNN-Arabic’ and ‘Al-Jazeera news’. Both datasets concern political issues related to the Middle East.

5 citations

Proceedings Article
01 Dec 2020
TL;DR: In this paper, a bi-directional recurrent neural network (RNN) classification model was trained on interpretable features derived from multi-disciplinary integrated approaches to language and applied to two benchmark datasets.
Abstract: ‘Fake news’ – succinctly defined as false or misleading information masquerading as legitimate news – is a ubiquitous phenomenon, and its dissemination weakens the fact-based reporting of the established news industry, making it harder for political actors, authorities, media and citizens to obtain a reliable picture. State-of-the-art language-based approaches to fake news detection that reach high classification accuracy typically rely on black-box models based on word embeddings. At the same time, there are increasing calls to move away from black-box models towards white-box (explainable) models in critical industries such as healthcare, finance, the military and the news industry. In this paper we perform a series of experiments in which bi-directional recurrent neural network classification models are trained on interpretable features derived from multi-disciplinary, integrated approaches to language. We apply our approach to two benchmark datasets and demonstrate that it is promising, achieving results similar to those of the best-performing black-box models reported in the literature. In a second step, we report on ablation experiments geared towards assessing the relative importance of the human-interpretable features in distinguishing fake news from real news.
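The ablation step described above can be sketched generically: remove one feature group at a time and measure the drop in score. The feature groups, weights and stand-in scorer below are hypothetical placeholders, not the paper's bi-RNN pipeline:

```python
# Illustrative feature-ablation loop. evaluate() is a stand-in for
# retraining and scoring a model on a chosen subset of feature groups;
# the groups and weights here are invented for the sketch.

def evaluate(feature_groups):
    """Pretend accuracy grows with each feature group that is kept."""
    weights = {"syntax": 0.05, "lexicon": 0.10, "readability": 0.03}
    return 0.70 + sum(weights[g] for g in feature_groups)

all_groups = ["syntax", "lexicon", "readability"]
full_score = evaluate(all_groups)

# ablation[g] = score drop when group g is removed -> its importance
ablation = {
    g: full_score - evaluate([h for h in all_groups if h != g])
    for g in all_groups
}
```

In a real setting, `evaluate` would retrain the classifier on the reduced feature set; the relative drops then rank the interpretable feature groups by importance.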

4 citations

Proceedings Article
01 Dec 2020
TL;DR: In this paper, the authors identify different types of unrealistic news (clickbait and fake news written for entertainment purposes) written in Hungarian on the basis of a rich feature set and with the help of machine learning methods.
Abstract: Online news does not always come from reliable sources, and it is not always even realistic. The constantly growing amount of online textual data has recently raised the need to detect deception and bias in texts from different domains. In this paper, we identify different types of unrealistic news written in Hungarian (clickbait and fake news written for entertainment purposes) on the basis of a rich feature set and with the help of machine learning methods. Our tool achieves competitive scores: it is able to classify clickbait, fake news written for entertainment purposes and real news with an accuracy of over 80%. We also highlight that morphological features perform best in this classification task.

3 citations

Proceedings Article
01 Dec 2020
TL;DR: In this paper, the authors used language features such as bag-of-n-grams and bag-of-RST (Rhetorical Structure Theory) features, as well as BERT embeddings, for fake news detection in Russian.
Abstract: In this paper, we trained and compared different models for fake news detection in Russian. For this task, we used language features such as bag-of-n-grams and bag-of-RST (Rhetorical Structure Theory) features, as well as BERT embeddings. We also compared our models' scores with the human score on this task and showed that our models handle fake news detection better. We investigated the nature of fake news by dividing it into two non-overlapping classes: satire and fake news. As a result, we obtained a set of models for fake news detection; the best of these achieved a 0.889 F1-score on the test set for the 2-class task and a 0.9076 F1-score for the 3-class task.
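A bag-of-n-grams representation, one of the feature families mentioned above, can be sketched in a few lines of plain Python (the paper's actual pipeline, the RST features and the BERT embeddings are not reproduced here):

```python
# Minimal pure-Python sketch of a bag-of-n-grams feature representation.
from collections import Counter

def bag_of_ngrams(text, n=2):
    """Count word n-grams in a whitespace-tokenised, lowercased text."""
    tokens = text.lower().split()
    ngrams = [" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
    return Counter(ngrams)

vec = bag_of_ngrams("breaking news breaking news today", n=2)
# vec counts each bigram, e.g. "breaking news" occurs twice
```

Such sparse count vectors are typically fed to a linear classifier, either on their own or concatenated with the other feature families.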

2 citations

Performance
Metrics
No. of papers from the Conference in previous years

Year  Papers
2020  7
2019  1
2018  1