Red Bots Do It Beer:
Comparative Analysis of Social Bot Partisan Behavior
Luca Luceri
University of Applied Sciences and Arts of Southern
Switzerland, and University of Bern
Manno, Switzerland
luca.luceri@supsi.ch
Ashok Deb
USC Information Sciences Institute
Marina del Rey, CA
ashok@isi.edu
Adam Badawy
USC Information Sciences Institute
Marina del Rey, CA
badawy@isi.edu
Emilio Ferrara
USC Information Sciences Institute
Marina del Rey, CA
emiliofe@usc.edu
ABSTRACT
Recent research brought awareness of the issue of bots on social media and the significant risks of mass manipulation of public opinion in the context of political discussion. In this work, we leverage Twitter to study the discourse during the 2018 US midterm elections and analyze social bot activity and interactions with humans. We collected 2.6 million tweets for 42 days around election day from nearly 1 million users. We use the collected tweets to answer three research questions: (i) Do social bots lean and behave according to a political ideology? (ii) Can we observe different strategies among liberal and conservative bots? (iii) How effective are bot strategies in engaging humans?
We show that social bots can be accurately classified according to their political leaning and behave accordingly. Conservative bots share most of the topics of discussion with their human counterparts, while liberal bots show less overlap and a more inflammatory attitude. We studied bot interactions with humans and observed different strategies. Finally, we measured bots' embeddedness in the social network and the extent of human engagement with each group of bots. Results show that conservative bots are more deeply embedded in the social network and more effective than liberal bots at exerting influence on humans.
CCS CONCEPTS
• Networks → Social media networks; • Human-centered computing → Social network analysis.
KEYWORDS
social media; political elections; social bots; political manipulation
Also with USC Information Sciences Institute.
L. Luceri & A. Deb contributed equally to this work.
This paper is published under the Creative Commons Attribution 4.0 International
(CC-BY 4.0) license. Authors reserve their rights to disseminate the work on their
personal and corporate Web sites with the appropriate attribution.
WWW ’19 Companion, May 13–17, 2019, San Francisco, CA, USA
© 2019 IW3C2 (International World Wide Web Conference Committee), published under Creative Commons CC-BY 4.0 License.
ACM ISBN 978-1-4503-6675-5/19/05.
https://doi.org/10.1145/3308560.3316735
ACM Reference Format:
Luca Luceri, Ashok Deb, Adam Badawy, and Emilio Ferrara. 2019. Red
Bots Do It Better: Comparative Analysis of Social Bot Partisan Behavior. In
Companion Proceedings of the 2019 World Wide Web Conference (WWW ’19
Companion), May 13–17, 2019, San Francisco, CA, USA. ACM, New York, NY,
USA, 6 pages. https://doi.org/10.1145/3308560.3316735
1 INTRODUCTION
During the last decade, social media have become the conventional
communication channel to socialize, share opinions, and access
the news. Accuracy, truthfulness, and authenticity of the shared
content are necessary ingredients to maintain a healthy online
discussion. However, in recent times, social media have been dealing
with a considerable growth of false content and fake accounts. The
resulting wave of misinformation (and disinformation) highlights
the pitfalls of social media and their potential harms to several
constituents of our society, ranging from politics to public health.
In fact, social media networks have been used for malicious purposes to a great extent [11]. Various studies raised awareness about the risk of mass manipulation of public opinion, especially in the context of political discussion. Disinformation campaigns [2, 5, 12, 14-17, 22, 24, 26, 30] and social bots [3, 4, 21, 23, 25, 29, 31, 32] have been indicated as factors contributing to social media manipulation.
The 2016 US Presidential election represents a prime example of the significant perils of mass manipulation of political discourse. Badawy et al. [1] studied the Russian interference in the election and the activity of Russian trolls on Twitter. Im et al. [18] suggested that troll accounts are still active to this day. The presence of social bots does not show any sign of decline [10, 32] despite the attempts from social network providers to suspend suspected, malicious accounts. Various research efforts have been focusing on the analysis, detection, and development of countermeasures against social bots. Ferrara et al. [13] highlighted the consequences associated with bot activity in social media. The online conversation related to the 2016 US presidential election was further examined [3] to quantify the extent of social bot activity. More recently, Stella et al. [27] discussed bots' strategy of targeting influential humans to manipulate online conversation during the Catalan referendum for independence, whereas Shao et al. [25] analyzed the role of social bots in spreading articles from low-credibility sources. Deb et al. [10] focused on the 2018 US Midterm elections with the objective of finding instances of voter suppression.
In this work, we investigate social bots' behavior by analyzing their activity, strategy, and interactions with humans. We aim to answer the following research questions (RQs) regarding social bots' behavior during the 2018 US midterm elections.

RQ1: Do social bots lean and behave according to a political ideology? We investigate whether social bots can be classified based on their political inclination into liberal or conservative leaning. Further, we explore to what extent they act similarly to the corresponding human counterparts.

RQ2: Can we observe different strategies among liberal and conservative bots? We examine the differences between social bot strategies to mimic humans and infiltrate political discussion. For this purpose, we measure bot activity in terms of volume and frequency of posts, interactions with humans, and embeddedness in the social network.

RQ3: How effective are bot strategies in engaging humans? We introduce four metrics to estimate the effectiveness of bot strategies in involving humans in their conversation and to evaluate the degree of human interplay with social bots.
We leverage Twitter to capture the political discourse during the 2018 US midterm elections. We collected 2.6 million tweets for 42 days around election day from nearly 1 million users. We then explore the collected data and attain the following findings:

• We show that social bots are embedded in each political side and behave accordingly. Conservative bots abide by the topics discussed by the human counterpart more than liberal bots, which in turn exhibit a more provocative attitude.

• We examined bots' interactions with humans and observed different strategies. Conservative bots stand in a more central social network position and divide their interactions between humans and other conservative bots, whereas liberal bots focused mainly on the interplay with the human counterparts.

• We measured the extent of human engagement with bots and recognized the strategy of conservative bots as the most effective in terms of influence exerted on human users.
2 DATA
In this study, we use Twitter to investigate the partisan behavior of malicious accounts during the 2018 US midterm elections. For this purpose, we carried out a data collection from one month before (October 6, 2018) to two weeks after (November 19, 2018) the day of the election. We kept the collection running after election day as several races remained unresolved. We employed the Python module Twython to collect tweets through the Twitter Streaming API using the following keywords as a filter: 2018midtermelections, 2018midterms, elections, midterm, and midtermelections. As a result, we gathered 2.7 million tweets, whose IDs are publicly available for download.¹ From this set, we first removed any duplicate tweets, which may have been captured by accidental redundant queries to the Twitter API. Then, we filtered out all the tweets not written in the English language and those that were out of the context of this study.

¹ https://github.com/A-Deb/midterms
Table 1: Dataset statistics
Statistic Count
# of Tweets 452,288
# of Retweets 1,869,313
# of Replies 267,973
# of Users 997,406
Overall, we retain nearly 2.6 million tweets, whose aggregate statistics are reported in Table 1.
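As an illustration of this collection step, the sketch below shows a keyword-filtered Streaming API collector built on Twython; the credentials are placeholders and the output handling is a simplification, not the authors' actual pipeline.

```python
# Minimal sketch of a keyword-filtered Twitter stream collector (Twython).
# Credentials and output handling are placeholders.
from twython import TwythonStreamer

APP_KEY, APP_SECRET = "...", "..."
OAUTH_TOKEN, OAUTH_TOKEN_SECRET = "...", "..."
KEYWORDS = "2018midtermelections,2018midterms,elections,midterm,midtermelections"

class MidtermStreamer(TwythonStreamer):
    def on_success(self, data):
        # Keep only English tweets, mirroring the paper's filtering step.
        if data.get("lang") == "en" and "id_str" in data:
            with open("tweet_ids.txt", "a") as fh:
                fh.write(data["id_str"] + "\n")

    def on_error(self, status_code, data, headers=None):
        print("Stream error:", status_code)

stream = MidtermStreamer(APP_KEY, APP_SECRET, OAUTH_TOKEN, OAUTH_TOKEN_SECRET)
stream.statuses.filter(track=KEYWORDS)
```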
3 METHODOLOGY
3.1 Bot Detection
Nowadays, bot detection is a fundamental asset for understanding social media manipulation and, more specifically, for revealing malicious accounts. In the last few years, the problem of detecting automated accounts has gathered both attention and concern [13], also bringing a wide variety of approaches to the table [7, 8, 20, 28]. While increasingly sophisticated techniques keep emerging [20], in this study we employ the widely used Botometer.²

² https://botometer.iuni.iu.edu/
Botometer is a machine learning-based tool developed by Indiana University [9, 29] to detect social bots on Twitter. It is based on an ensemble classifier [6] that provides an indicator, namely the bot score, used to classify an account either as a bot or as a human. To feed the classifier, the Botometer API extracts about 1,200 features related to the Twitter account under analysis. These features fall into six broad categories and characterize the account's profile, friends, social network, temporal activity patterns, language, and sentiment. Botometer outputs a bot score: the lower the score, the higher the probability that the user is human. In this study, we use version v3 of Botometer, which brings some innovations, as detailed in [32]. Most importantly, the bot scores are now rescaled (and no longer centered around 0.5) through a non-linear re-calibration of the model.
Figure 1: Bot score distribution

In Figure 1, we depict the bot score distribution of the 997,406 distinct users in our dataset. The distribution exhibits a right skew: most of the probability mass is in the range [0, 0.2], and some peaks can be noticed around 0.3. Prior studies used a 0.5 threshold to separate humans from bots. However, according to the re-calibration introduced in Botometer v3 [32], along with the emergence of increasingly sophisticated bots, we here lower the bot score threshold to 0.3 (i.e., a user is labeled as a bot if the score is above 0.3). This threshold corresponds to the same sensitivity as a 0.5 setting in prior versions of Botometer (cf. Fig. 5 from [32]). According to this choice, we classified 21.1% of the accounts as bots, which in turn generated 30.6% of the tweets in our dataset. Overall, Botometer did not return a score for 35,029 users, which corresponds to 3.5% of the accounts. We used the Twitter API to further inspect them. Interestingly, 99.4% of these accounts were suspended by Twitter, whereas the remaining users protected their tweets by turning on the privacy settings of their accounts.
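To make the scoring procedure concrete, here is a sketch using the botometer Python package. The credentials are placeholders, and the result field layout (result["scores"]["english"]) is an assumption; field names differ across Botometer versions.

```python
# Sketch: score an account with Botometer and apply the paper's 0.3 threshold.
import botometer

twitter_app_auth = {
    "consumer_key": "...", "consumer_secret": "...",
    "access_token": "...", "access_token_secret": "...",
}
bom = botometer.Botometer(wait_on_ratelimit=True,
                          rapidapi_key="...",
                          **twitter_app_auth)

def label_account(screen_name, threshold=0.3):
    """Return 'bot', 'human', or None for unreachable accounts."""
    try:
        result = bom.check_account(screen_name)
    except Exception:
        # e.g., the account was suspended or its tweets are protected
        return None
    score = result["scores"]["english"]  # assumed field layout
    return "bot" if score > threshold else "human"
```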
3.2 Political Ideology Inference
In parallel to the bot detection analysis, we examine the political leaning of both bots and humans in our dataset. To classify users based on their political ideology, we rely on the political leaning of the media outlets they share. We make use of lists of partisan media outlets released by third-party organizations, such as AllSides³ and Media Bias/Fact Check.⁴ We combine liberal and liberal-center media outlets into one list (composed of 641 outlets) and conservative and conservative-center outlets into another (composed of 398 outlets).
To cross-reference these media URLs with the URLs in the Twitter dataset, we need the expanded URLs for most of the links in the dataset, as most of them are shortened. However, this process is quite time-consuming; thus, we decided to rank the top 5,000 URLs by popularity and retrieve the long version only for those. These top 5,000 URLs account for more than 254K occurrences, or more than one third of all the URLs in the dataset. After cross-referencing the 5,000 expanded URLs with the media URLs, we observe that 32,115 tweets in the dataset contain a URL pointing to one of the liberal media outlets and 25,273 tweets contain a URL pointing to one of the conservative media outlets.
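A sketch of the expansion and matching step follows, resolving shortened links by following HTTP redirects with requests; the two domain sets are hypothetical stand-ins for the AllSides and Media Bias/Fact Check lists.

```python
# Sketch: expand shortened URLs and match them against outlet domain lists.
from urllib.parse import urlparse
import requests

LIBERAL_DOMAINS = {"liberal-outlet.example"}            # 641 outlets in the paper
CONSERVATIVE_DOMAINS = {"conservative-outlet.example"}  # 398 outlets in the paper

def expand_url(short_url, timeout=5):
    """Follow redirects and return the final (expanded) URL, or None."""
    try:
        resp = requests.head(short_url, allow_redirects=True, timeout=timeout)
        return resp.url
    except requests.RequestException:
        return None

def outlet_leaning(url):
    """Return 'liberal', 'conservative', or None for unlisted domains."""
    domain = urlparse(url).netloc.lower()
    domain = domain[4:] if domain.startswith("www.") else domain
    if domain in LIBERAL_DOMAINS:
        return "liberal"
    if domain in CONSERVATIVE_DOMAINS:
        return "conservative"
    return None
```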
To label Twitter accounts as liberal or conservative, we use a polarity rule based on the number of tweets they produce with links to liberal or conservative sources. Thereby, if an account has more tweets with URLs pointing to liberal sources, it is labeled as liberal, and vice versa. Although the overwhelming majority of accounts include URLs that are either liberal or conservative, we remove any account that has an equal number of tweets from each side. Our final set of labeled accounts includes 38,920 users.
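The polarity rule reduces to a majority count over the outlet URLs an account tweeted; a minimal sketch, with a hypothetical per-account list of URL leanings, is shown below.

```python
# Sketch of the polarity rule: majority leaning wins, ties are discarded.
from collections import Counter

def label_user(url_leanings):
    """url_leanings: 'liberal'/'conservative' labels, one per tweeted URL."""
    counts = Counter(url_leanings)
    if counts["liberal"] > counts["conservative"]:
        return "liberal"
    if counts["conservative"] > counts["liberal"]:
        return "conservative"
    return None  # equal counts: account removed from the seed set

print(label_user(["liberal", "liberal", "conservative"]))  # -> 'liberal'
```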
Finally, we use label propagation to classify the remaining accounts, in a similar way to previous work (cf. [1]). For this purpose, we construct a social network based on the retweets exchanged between users. The nodes of the retweet network are the users, which are connected by a directed link if one user retweeted a post of another user. To validate the results of the label propagation algorithm, we apply stratified 5-fold cross-validation to the set of 38,920 seed accounts. We train the algorithm using 80% of the seeds and evaluate the performance on the remaining 20%. Finally, we compute precision and recall by reiterating the validation over the 5 folds. Overall, both precision and recall scores average around 0.89, with bounds from 0.88 to 0.90. For liberals, both scores are about 0.87 with 0.85-0.88 bounds, while for conservatives the scores are around 0.93 with 0.92-0.93 bounds. To further validate the proposed approach, we use as ground truth the political leaning of the media outlet that users shared in their profile, obtaining precision and recall scores in line with the previous approach.
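The paper does not name the exact propagation algorithm it borrows from [1]; as one plausible stand-in, the sketch below spreads the seed labels over the retweet graph with networkx's harmonic-function node classifier (which requires scipy).

```python
# Sketch: semi-supervised label propagation over the retweet network.
import networkx as nx
from networkx.algorithms import node_classification

G = nx.Graph()                       # undirected view of the retweet network
G.add_edge("user_a", "user_b")       # edge: one user retweeted the other

seed_labels = {"user_a": "liberal"}  # 38,920 seed accounts in the paper
for user, leaning in seed_labels.items():
    G.nodes[user]["label"] = leaning

# Returns one predicted label per node, in G.nodes iteration order.
predicted = node_classification.harmonic_function(G, label_name="label")
```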
³ https://www.allsides.com/media-bias/media-bias-ratings
⁴ https://mediabiasfactcheck.com/
Table 2: Users and tweets statistics

(a) Number (percentage) of users per group
          Liberal            Conservative
Humans    386,391 (38.7%)    122,761 (12.3%)
Bots      82,118 (8.2%)      49,488 (4.9%)

(b) Number (percentage) of tweets per group
          Liberal            Conservative
Humans    957,726 (37.0%)    476,231 (18.4%)
Bots      288,659 (11.1%)    364,727 (14.1%)
3.3 Human-Bot Interaction
We next introduce four metrics to estimate the effectiveness of bot actions in involving humans and, at the same time, to measure to what extent humans rely upon, and interact with, the content generated by social bots. Thereby, we propose the following metrics:
Retweet Pervasiveness (RTP) measures the intrusiveness of bot-generated content in human-generated retweets:

$$RTP = \frac{\text{no. of human retweets of bot tweets}}{\text{no. of human retweets}} \qquad (1)$$

Reply Rate (RR) measures the percentage of replies given by humans to social bots:

$$RR = \frac{\text{no. of human replies to bot tweets}}{\text{no. of human replies}} \qquad (2)$$

Human to Bot Rate (H2BR) quantifies human interaction with bots over all the human activities in the social network:

$$H2BR = \frac{\text{no. of human interactions with bots}}{\text{no. of human activities}}, \qquad (3)$$

where the numerator counts human replies to, and retweets of, bot-generated content, while the denominator is the sum of the number of human tweets, retweets, and replies.

Tweet Success Rate (TSR) is the percentage of tweets generated by bots that obtained at least one retweet by a human:

$$TSR = \frac{\text{no. of bot tweets retweeted at least once by a human}}{\text{no. of bot tweets}} \qquad (4)$$
4 RESULTS
Next, we address the research questions discussed in the Introduction. We examine social bot partisanship and, accordingly, we analyze bots' strategies and measure the effectiveness of their actions in terms of human engagement.
4.1 RQ1: Bot Political Leaning
The combination of the outcome of the bot detection algorithm and the political ideology inference allowed us to identify four groups of users, namely Liberal Humans, Conservative Humans, Liberal Bots, and Conservative Bots. In Table 2a, we show the percentage of users per group. Note that the percentages do not sum to 100 because either the political ideology inference was not able to classify every user, or Botometer did not return a score, as previously mentioned. In particular, we were able to assign a political leaning to 63% of bots and 67% of humans. We find that the liberal user population is almost three times larger than the conservative counterpart. This discrepancy is also present, but less evident, for the bot accounts, which exhibit an imbalance in favor of liberal bots.

Figure 2: Political discussion over (a) the 10-core, and (b) the 25-core decomposition of the retweet network. Each node represents a user, while links represent retweets. Links with weight (i.e., frequency of occurrence) less than 2 are hidden to minimize visual clutter. Blue nodes represent liberal accounts, while red nodes indicate conservative users. Darker tones (blue and red) depict bots, while lighter tones (cyan and pink) relate to humans; the few green nodes represent unclassified accounts. A link takes the same color as its source node (the author of the retweet), whereas node size is proportional to the in-degree of the user.
Table 3: Top 20 hashtags generated by liberal and conservative bots. Hashtags in bold are not present in the top 50 hashtags used by the corresponding human group.
Liberal Bots Conservative Bots
#MAGA #BrowardCounty
#NovemberisComing #MAGA
#TheResistance #StopTheSteal
#GOTV #WalkAway
#Florida #WednesdayWisdom
#ImpeachTrump #PalmBeachCounty
#Russia #Florida
#VoteThemOut #QAnon
#unhackthevote #KAG
#FlipTheHouse #IranRegime
#RegisterToVote #Tehran
#Resist #WWG1WGA
#ImpeachKavanaugh #Louisiana
#GOP #BayCounty
#MeToo #AmericaFirst
#AMJoy #DemocratsAreDangerous
#txlege #StopTheCaravan
#FlipTheSenate #Blexit
#CultureOfCorruption #VoteDemsOut
#TrumpTrain #VoterFraud
Further, we investigate the suspended accounts to inspect the consistency of this result. The inference algorithm attributed a political ideology to 63% of these accounts, which once again shows the liberal advantage over the conservative faction (45% vs. 18%).
Figure 2 shows two k-core decomposition graphs of the retweet network. In a k-core, each node is connected with at least k other nodes. Figures 2a and 2b capture the 10-core and 25-core decomposition, respectively. Here, nodes represent Twitter users and links represent retweets among them. We indicate as source the user that retweeted the tweet of a target user. Colors represent political ideology, with darker colors (red and blue) being bots and lighter colors (cyan and pink) being human users; size represents the in-degree. The graph is visualized using a force-directed layout [19], where nodes repulse each other, while edges attract their nodes. In our setting, this means that users are spatially distributed according to the amount of retweets between each other. The result is a network naturally split into two communities, where each side is almost entirely populated by users with the same political ideology. This polarization is also reflected by the bots, which are embedded, with humans, in each political side. Two facts are worth noting: (i) as k increases, the left k-core appears to disrupt, while the right k-core remains well connected; and (ii) as k increases, bots appear to outnumber humans, suggesting that bots may populate areas of the retweet network that are more central and better connected.
Next, we examine the topics discussed by social bots and compare them with the human counterparts. Table 3 shows the top 20 hashtags utilized by liberal and conservative bots. We highlight (in bold) the hashtags that are not present in the top 50 hashtags used by the corresponding human group, to point out the similarities and differences among the groups. In this table, we do not take into account hashtags related to the keywords used in the data collection (such as #elections, #midterms) or hashtags used to support a political group (such as #democrats, #liberals, #VoteRed(or Blue)ToSaveAmerica), as (i) the overlap between bot and human hashtags is noticeable when these terms are considered (in the interest of space, we do not show this result in Table 3), and (ii) we aim to narrow the analysis to specific topics and inflammatory content, inspired by [27]. Moreover, we used an enlarged subset of hashtags for the human groups to further strengthen the differences and, at the same time, to better understand the objective of social bots. Although bots and humans share the majority of

Table 4: Average network centrality measures

(a) Out-degree centrality
          Liberal        Conservative
Humans    2.66 × 10⁻⁶    4.14 × 10⁻⁶
Bots      3.70 × 10⁻⁶    7.81 × 10⁻⁶

(b) In-degree centrality
          Liberal        Conservative
Humans    2.52 × 10⁻⁶    4.24 × 10⁻⁶
Bots      2.53 × 10⁻⁶    6.22 × 10⁻⁶
hashtags, two main differences can be noticed. First, conservative bots abide by the corresponding human counterpart more than the liberal bots do. Second, liberal bots focus on more inflammatory and provocative content (e.g., #ImpeachTrump, #ImpeachKavanaugh, #FlipTheSenate) compared to conservative bots.
4.2 RQ2: Bot Activity and Strategies
In this section, we investigate social bot activity based on their political leaning. We explore their strategies in interacting with humans and their degree of embeddedness in the social network. Table 2b reports the number (and percentage) of tweets generated by each group. Although the group composed of conservative bots is the smallest in terms of number of accounts, it produced more tweets than liberal bots and closely approaches the number of tweets generated by the human counterpart. The resulting tweets-per-user ratio shows that conservative bots produce 7.4 tweets per account (364,727 tweets from 49,488 accounts), which is more than twice the ratio of the liberal bots (3.5), almost double that of the human counterpart (3.9), and nearly three times the ratio of liberal humans (2.5).
To investigate the interplay between bots and humans, we consider the previously described retweet network. Figure 3 shows the interactions among the four groups. We maintain the same color mapping described before, with darker colors (on the bottom) representing bots and lighter colors (on top) indicating humans. Node size is proportional to the percentage of accounts in each group, while edge size is proportional to the percentage of interactions between each pair of groups. In Figure 3a, this percentage is computed considering all the interactions in the retweet network, while in Figure 3b we consider each group separately; therefore, the edge size gives a measure of the group's propensity to interact with the other groups. Consistently with Figure 2, we observe that there is a limited amount of interaction between the two political sides. The majority of interactions are either intra-group or between groups of the same political leaning. From Figure 3b, we can observe that the two bot factions adopted different strategies. Conservative bots balanced their interactions by retweeting group members 43% of the time, and the human counterpart 52% of the time. On the other hand, liberal bots mainly retweeted liberal humans (71% of the time) and limited intra-group interactions to 22% of their retweet activity. Interestingly, conservative humans interacted with the conservative bots (28% of the time) much more than the liberal counterpart did (16%) with the liberal bots. To better understand these results and to measure the extent of human engagement with bots, in the next section we evaluate the four metrics introduced earlier in this paper.
Figure 3: Interactions according to political ideology. (a) Overall interactions; (b) group-based interactions.

Figure 4: k-core decomposition, liberal vs. conservative users
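The group-based percentages in Figure 3b amount to normalizing each source group's retweet counts by the total retweets that group authored; a sketch over a hypothetical edge list follows.

```python
# Sketch: group-to-group retweet shares (cf. Figure 3b).
from collections import Counter

def interaction_shares(edges):
    """edges: (retweeter_group, retweeted_group) pairs, one per retweet."""
    totals = Counter(src for src, _ in edges)   # retweets authored per group
    pairs = Counter(edges)
    return {(src, dst): n / totals[src] for (src, dst), n in pairs.items()}

shares = interaction_shares([
    ("conservative_bots", "conservative_humans"),
    ("conservative_bots", "conservative_bots"),
    ("liberal_bots", "liberal_humans"),
])
```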
Finally, we examine the degree of embeddedness of both humans and bots within the retweet network. For this purpose, we first compute different network centrality measures, and then we adopt the k-core decomposition technique to identify the most central nodes in the graph. In Table 4, we show the average out- and in-degree centrality for each group of users. Out-degree centrality measures the quantity of outgoing links, while in-degree centrality considers the number of incoming links. Both of these measures are normalized by the maximum possible degree of the graph. Overall, the conservative groups have higher centrality measures than the liberal ones. We can notice that conservative bots achieve the highest values for both out- and in-degree centrality.
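The quantities in Table 4 match networkx's degree-centrality definitions (degree divided by n - 1, hence values on the order of 10⁻⁶ for a graph of roughly one million nodes); a sketch with hypothetical group membership follows.

```python
# Sketch: average normalized in-/out-degree centrality per group (Table 4).
import networkx as nx

G = nx.DiGraph()                       # edge u -> v: u retweeted v
G.add_edge("retweeter_1", "author_1")  # placeholder edges; the real graph
G.add_edge("retweeter_2", "author_1")  # has ~1M nodes, hence values ~1e-6

out_c = nx.out_degree_centrality(G)    # out-degree / (n - 1)
in_c = nx.in_degree_centrality(G)      # in-degree / (n - 1)

def group_average(centrality, members):
    """Average centrality over one group's accounts (one Table 4 cell)."""
    vals = [centrality[u] for u in members if u in centrality]
    return sum(vals) / len(vals) if vals else 0.0

conservative_bots = ["retweeter_1"]    # hypothetical group membership
print(group_average(out_c, conservative_bots))
```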
To further investigate bots' embeddedness in the social network, we use the k-core decomposition. The objective of this technique is to determine the set of nodes deeply embedded in a graph. The k-core is a subgraph of the original graph in which every node has a degree equal to or greater than a given value k. We extracted the k-cores from the retweet network by varying k in the range between 0 and 30. Figure 4 depicts the percentage of liberal and conservative users as a function of k. We can notice that, as k grows, the fraction of conservative bots increases, while the percentage of liberal bots remains almost stationary. On the human side, the liberal fraction drops with k, whereas the conservative percentage remains approximately steady. Overall, conservative bots sit in a more central position in the social network and are more deeply connected compared to the liberal counterpart.
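The sweep behind Figure 4 can be reproduced with networkx's k_core routine (self-loops must be removed first, as the routine rejects them); group labels are assumed given.

```python
# Sketch: fraction of each group surviving in the k-core as k grows (Figure 4).
from collections import Counter
import networkx as nx

def group_fractions_by_k(G, group_of, k_max=30):
    """group_of: dict user -> group label. Returns {k: {group: fraction}}."""
    G = G.copy()
    G.remove_edges_from(nx.selfloop_edges(G))  # k_core rejects self-loops
    fractions = {}
    for k in range(k_max + 1):
        core = nx.k_core(G, k)
        counts = Counter(group_of.get(u, "unknown") for u in core)
        n = core.number_of_nodes()
        fractions[k] = {g: c / n for g, c in counts.items()} if n else {}
    return fractions
```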

REFERENCES
Leo Breiman. 2001. Random Forests. Machine Learning 45, 1.
Mathieu Jacomy, Tommaso Venturini, Sebastien Heymann, and Mathieu Bastian. 2014. ForceAtlas2, a Continuous Graph Layout Algorithm for Handy Network Visualization Designed for the Gephi Software. PLoS ONE 9, 6.
Kai Shu, Amy Sliva, Suhang Wang, Jiliang Tang, and Huan Liu. 2017. Fake News Detection on Social Media: A Data Mining Perspective. ACM SIGKDD Explorations Newsletter 19, 1.
Emilio Ferrara, Onur Varol, Clayton Davis, Filippo Menczer, and Alessandro Flammini. 2016. The Rise of Social Bots. Communications of the ACM 59, 7.
Clayton A. Davis, Onur Varol, Emilio Ferrara, Alessandro Flammini, and Filippo Menczer. 2016. BotOrNot: A System to Evaluate Social Bots. In Proceedings of the 25th International Conference Companion on World Wide Web.