Open Access · Proceedings Article · DOI

Separating Facts from Fiction: Linguistic Models to Classify Suspicious and Trusted News Posts on Twitter

TLDR
This work builds predictive models to classify 130 thousand news posts as suspicious or verified and to predict four sub-types of suspicious news (satire, hoaxes, clickbait and propaganda), and shows that neural network models trained on tweet content and social network interactions outperform lexical models.
Abstract
Pew Research polls report that 62 percent of U.S. adults get news on social media (Gottfried and Shearer, 2016). In a December poll, 64 percent of U.S. adults said that “made-up news” has caused a “great deal of confusion” about the facts of current events (Barthel et al., 2016). Fabricated stories in social media, ranging from deliberate propaganda to hoaxes and satire, contribute to this confusion in addition to having serious effects on global stability. In this work we build predictive models to classify 130 thousand news posts as suspicious or verified, and predict four sub-types of suspicious news – satire, hoaxes, clickbait and propaganda. We show that neural network models trained on tweet content and social network interactions outperform lexical models. Unlike previous work on deception detection, we find that adding syntax and grammar features to our models does not improve performance. Incorporating linguistic features improves classification results; however, social interaction features are most informative for finer-grained separation between the four types of suspicious news posts.
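To make the setup concrete, here is a minimal sketch (not the authors' code) of the kind of model the abstract describes: a neural classifier that encodes the tweet text and concatenates it with social-network interaction features before predicting suspicious vs. verified. The vocabulary size, feature count, and layer dimensions are illustrative assumptions.

# Minimal sketch of a classifier combining tweet text with
# social-interaction features; sizes and names are illustrative.
import torch
import torch.nn as nn

class SuspiciousNewsClassifier(nn.Module):
    def __init__(self, vocab_size=20000, embed_dim=100,
                 n_social_feats=12, n_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.text_encoder = nn.LSTM(embed_dim, 64, batch_first=True)
        self.classifier = nn.Sequential(
            nn.Linear(64 + n_social_feats, 32),
            nn.ReLU(),
            nn.Linear(32, n_classes),
        )

    def forward(self, token_ids, social_feats):
        # Encode the tweet text and keep the final LSTM hidden state.
        _, (h_n, _) = self.text_encoder(self.embed(token_ids))
        text_repr = h_n[-1]                        # (batch, 64)
        # Concatenate with social-interaction features
        # (e.g. retweet/reply counts) and classify.
        combined = torch.cat([text_repr, social_feats], dim=1)
        return self.classifier(combined)

# Dummy forward pass with random data, just to show the shapes.
model = SuspiciousNewsClassifier()
tokens = torch.randint(1, 20000, (8, 30))          # 8 tweets, 30 tokens each
social = torch.randn(8, 12)                        # 12 interaction features
logits = model(tokens, social)                     # (8, 2): suspicious vs. verified

A purely lexical baseline would drop the social_feats branch; the abstract reports that models combining content and interactions outperform such lexical models.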



Citations
Posted Content

Towards Understanding the Information Ecosystem Through the Lens of Multiple Web Communities

TL;DR: The analysis reveals that fringe Web communities like 4chan's /pol/ and The_Donald subreddit have a disproportionate influence on mainstream communities like Twitter with regard to the dissemination of news and memes, while Web archiving services can be misused to penalize ad revenue from news sources with a conflicting ideology.
Book Chapter · DOI

Fake News Detection Through Topic Modeling and Optimized Deep Learning with Multi-Domain Knowledge Sources

TL;DR: In this article, a two-step automatic fake news detection model is proposed that uses Bidirectional Encoder Representations from Transformers (BERT) with an optimized number of neurons and domain knowledge sources.
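As an illustration of the BERT step only, here is a minimal fine-tuning sketch using the Hugging Face transformers library; the topic-modeling and domain-knowledge components of the cited approach are not shown, and the model name, example texts and labels are placeholders.

# Minimal sketch of BERT fine-tuning for fake-news classification.
# Only the BERT step is illustrated; texts and labels are placeholders.
import torch
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)  # 0 = real, 1 = fake

texts = ["Scientists confirm water is wet.", "Aliens endorse local mayor."]
labels = torch.tensor([0, 1])

batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
outputs = model(**batch, labels=labels)

loss = outputs.loss                      # cross-entropy loss for one training step
preds = outputs.logits.argmax(dim=-1)    # predicted class per text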
Posted Content

Political Bias and Factualness in News Sharing across more than 100,000 Online Communities

TL;DR: The authors conducted the largest study of news sharing on Reddit to date, analyzing more than 550 million links spanning 4 years, and found that extremely biased and low-factualness content is highly concentrated, with 99% of such content being shared in only 0.5% of communities, giving credence to the recent strategy of community-wide bans and quarantines.
Book Chapter · DOI

Detecting Fake News with Machine Learning

TL;DR: This article used part-of-speech and sentiment-analysis features to detect fake news and found that using the top ten features, rather than all 43, gave an accuracy of 0.85 and an F-score of 0.87.
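A rough sketch of the underlying idea, keeping only the top-ranked hand-crafted features before classification, using scikit-learn on synthetic data; the cited article's actual 43 features and reported scores are not reproduced here.

# Rank hand-crafted features (e.g. part-of-speech and sentiment counts)
# and keep only the ten strongest before classification.
# The feature matrix below is synthetic, purely for illustration.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 43))       # 43 stand-in PoS/sentiment features
y = rng.integers(0, 2, size=500)     # fake (1) vs. real (0) labels

pipeline = make_pipeline(
    SelectKBest(score_func=f_classif, k=10),  # keep the ten strongest features
    LogisticRegression(max_iter=1000),
)
scores = cross_val_score(pipeline, X, y, cv=5, scoring="f1")
print("mean F1 on synthetic data:", scores.mean())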
Dissertation

Detection of automatically generated texts

TL;DR: This thesis first introduces different methods of generating free texts that resemble a certain topic and how those texts can be used, and then sheds light on multiple important research questions about the possibility of detecting automatically generated texts in different settings.
References
Proceedings Article

Adam: A Method for Stochastic Optimization

TL;DR: This work introduces Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments, and provides a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework.
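For reference, a compact NumPy sketch of a single Adam update following the rule described in the paper, using the commonly cited default hyperparameters.

# One Adam update step in NumPy; hyperparameters are the usual defaults.
import numpy as np

def adam_step(theta, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """Apply one Adam update to parameters theta given gradient grad.

    m, v : running first and second moment estimates (same shape as theta)
    t    : 1-based timestep, used for bias correction
    """
    m = beta1 * m + (1 - beta1) * grad          # first-moment estimate
    v = beta2 * v + (1 - beta2) * grad ** 2     # second-moment estimate
    m_hat = m / (1 - beta1 ** t)                # bias-corrected moments
    v_hat = v / (1 - beta2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# Example: minimize f(theta) = theta^2, whose gradient is 2 * theta.
theta = np.array([1.0])
m = np.zeros_like(theta)
v = np.zeros_like(theta)
for t in range(1, 2001):
    grad = 2 * theta
    theta, m, v = adam_step(theta, grad, m, v, t)
print(theta)  # approaches 0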
Proceedings Article · DOI

GloVe: Global Vectors for Word Representation

TL;DR: A new global log-bilinear regression model that combines the advantages of the two major model families in the literature (global matrix factorization and local context window methods) and produces a vector space with meaningful substructure.
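For reference, a small NumPy sketch of the GloVe weighted least-squares objective, J = Σ_ij f(X_ij)(w_i·w̃_j + b_i + b̃_j − log X_ij)², with the weighting function from the paper (x_max = 100, α = 0.75); the toy co-occurrence matrix and vectors below are illustrative only.

# GloVe weighted least-squares objective on a toy co-occurrence matrix.
import numpy as np

def weight(x, x_max=100.0, alpha=0.75):
    # Weighting function f(x) from the GloVe paper.
    return (x / x_max) ** alpha if x < x_max else 1.0

def glove_loss(W, W_tilde, b, b_tilde, X):
    loss = 0.0
    for i, j in zip(*np.nonzero(X)):             # only observed co-occurrences
        diff = W[i] @ W_tilde[j] + b[i] + b_tilde[j] - np.log(X[i, j])
        loss += weight(X[i, j]) * diff ** 2
    return loss

rng = np.random.default_rng(0)
V, d = 5, 10                                         # toy vocabulary and dimension
X = rng.integers(0, 50, size=(V, V)).astype(float)   # toy co-occurrence counts
W, W_tilde = rng.normal(size=(V, d)), rng.normal(size=(V, d))
b, b_tilde = np.zeros(V), np.zeros(V)
print(glove_loss(W, W_tilde, b, b_tilde, X))

Training minimizes this loss over the word and context vectors; the resulting vector space is what the TL;DR describes as having meaningful substructure.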
Proceedings Article · DOI

Large-Scale Video Classification with Convolutional Neural Networks

TL;DR: This work studies multiple approaches for extending the connectivity of a CNN in the time domain to take advantage of local spatio-temporal information, and suggests a multiresolution, foveated architecture as a promising way of speeding up training.
Journal Article · DOI

Liberals and Conservatives Rely on Different Sets of Moral Foundations

TL;DR: Across 4 studies using multiple methods, liberals consistently showed greater endorsement and use of the Harm/care and Fairness/reciprocity foundations compared to the other 3 foundations, whereas conservatives endorsed and used the 5 foundations more equally.