Asymmetrical Perceptions of Partisan Political Bots
Authors: Harry Yaojun Yan 1,2, Kai-Cheng Yang 2, Filippo Menczer 2, James Shanahan 1
Affiliations: 1. The Media School, Indiana University-Bloomington; 2. Luddy School of Informatics, Computing, and Engineering
This manuscript has been accepted for publication at New Media & Society

Abstract
Political bots are social media algorithms that impersonate political actors and interact
with other users, aiming to influence public opinion. This study examines perceptions of bots
with partisan personas through an online experiment (N = 656) that tests the ability to
differentiate bots from humans on Twitter. We explore how characteristics of the experiment
participants and the profiles being evaluated bias recognition accuracy. Our analysis reveals
asymmetrical partisan-motivated reasoning, in that conservative profiles appear to be more
confusing and Republican participants perform less well in the recognition task. Moreover,
Republican users are more likely to confuse conservative bots with humans, whereas Democratic
users are more likely to confuse conservative human users with bots. We discuss implications for
how partisan identities affect motivated reasoning and how political bots exacerbate political
polarization.
Keywords: Political bots, Social media, Online human-bot interaction, Partisan-motivated
reasoning, Political polarization

Asymmetrical Perceptions of Partisan Political Bots
In January 2018, a New York Times investigation exposed the business of manufacturing
social media popularity. One Florida company called Devumi alone produced and sold social
bots accounting for over 200 million fake Twitter followers. Social bots are defined as computer
algorithms that use software to produce content automatically or semi-automatically, trying to
emulate and possibly alter human behavior (Ferrara, Varol, Davis, Menczer, & Flammini, 2016).
Although not all bots are created with malicious intent, bots designed to sway public opinion and
potentially change political behaviors have been found to be damaging and prevalent,
particularly since the 2016 U.S. presidential campaign (Badawy, Ferrara, & Lerman, 2018; Bessi
& Ferrara, 2016; Shao et al., 2018a).
The emergence of social bots, including political bots, has facilitated the distribution of
false information in at least two ways. First, evolving bot algorithms that mimic online human
behavior make even experienced users vulnerable (Wagner, Mitter, Körner, & Strohmaier,
2012). Bots can monitor user traffic flow and follow circadian rhythms to maximize the visibility
of their content (Ferrara et al., 2016). Second, network effects can inflate the popularity of
certain issues (Ferrara et al., 2016). So-called “botnets”—coordinated groups of inauthentic
accounts—can do everything regular users do to generate influence, but faster, at a more massive
scale, and at lower cost (Boshmaf, Muslukhov, Beznosov, & Ripeanu, 2013). When a user
observes a tweet or an account profile along with their social influence indicators (number of
retweets, replies, likes, friends, and followers), it can be virtually impossible to efficiently
discern if the indicators are genuine or inflated by bots, regardless of the legitimacy of the tweet
or account.

In comparison to generic social bots, political bots that have explicit partisan personas
can be even more deceptive in the contemporary political climate. First, the appearance of
political bots can be more strategically organized, and the attacks more targeted (Stella, Ferrara,
& De Domenico, 2018). The deployment of political bots has been observed during election
cycles (Bessi & Ferrara, 2016), swamping voters already overwhelmed by a dramatic increase in
political information. With a polarized ideological gap that is at its historical peak in the US
(Kiley, 2017), political bots are likely to exploit the biased perceptions of people with different
party identifications, making them more vulnerable to manipulation.
In light of a Pew Research survey showing an increased public awareness of social bots
(Stocking & Sumida, 2018), the main goal of this study is to explore and understand the human
capability to distinguish political bots from human users. This task presents both the challenge of
recognition (Gardiner, Ramponi, & Richardson-Klavehn, 2002) and that of decision under
uncertainty (Busemeyer, 1985; Busemeyer & Townsend, 1993). Here we seek to investigate
how new information, past experiences, and deliberation processes affect the accuracy of
people’s classifications. Accordingly, we explore the effects of three sets of factors on bot-
recognition. We focus on 1) new information about the profiles, such as their political personas;
2) past experiences and characteristics of the participants, such as their extant knowledge of
social bots and their party identifications; and 3) the cognitive factors involved in the
deliberation process, such as attention allocated to decision making.
To examine how these factors affect the ability to differentiate bots from humans, we
designed a two-by-two online experiment that included a recognition task involving 20 actual
Twitter profiles. The experimental conditions corresponded to test profiles with low vs. high
bot/human ambiguity and liberal vs. conservative personas. The profiles were sampled from

followers of US politicians on Twitter. The level of ambiguity was determined using a mix of
machine learning and manual coding to select test profiles in the four experimental conditions.
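As a purely illustrative sketch (not the study's actual selection procedure), the following Python snippet shows how a sampled profile could be bucketed into one of the four cells, assuming each profile already carries an automated bot score between 0 and 1 and a manually coded partisan persona; the cut-off value is an arbitrary placeholder.

def assign_condition(bot_score, persona, ambiguity_cut=0.15):
    """Map a profile to one of the four experimental cells.

    A bot score near the middle of the scale (|score - 0.5| < ambiguity_cut) is
    treated as high bot/human ambiguity; scores near 0 or 1 as low ambiguity.
    The cut-off value is illustrative, not the threshold used in the paper.
    """
    ambiguity = "high" if abs(bot_score - 0.5) < ambiguity_cut else "low"
    return (ambiguity, persona)

print(assign_condition(0.55, "conservative"))  # ('high', 'conservative')
print(assign_condition(0.05, "liberal"))       # ('low', 'liberal')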
By investigating bots with explicitly partisan personas, we explore whether partisan-
motivated reasoning (Redlawsk, 2002) is associated with higher susceptibility to bots among
certain individuals and groups. The results demonstrate how the recognition task performance
depends on both the partisan personas of the profiles being evaluated and the party affiliations of
the evaluators. We explain the discrepancies in performance through the theoretical lenses of
motivated reasoning (Kunda, 1990) and social identity theory (Tajfel, Turner, Austin, &
Worchel, 1979). The results have important implications for connecting the two theories (Lelkes
& Westwood, 2017) and explaining political polarization on Twitter (Conover et al., 2011).
Social Bots, Political Bots, and Detection
Some social bots appear easily identifiable because they are designed for a single
purpose, such as boosting an account's follower or retweet count (Yang et al., 2019). A user can
assess the authenticity of a profile by checking for suspicious profile information, such as
automatically generated usernames, unbalanced following-follower ratios, recent profile creation
times, missing or incoherent profile descriptions, weirdly distorted profile pictures, etc. (Davis,
Varol, Ferrara, Flammini, & Menczer, 2016). In addition, users can check patterns in activity
including lack of linguistic diversity, limited number of original tweets, a large number of
retweets of the same information, and repetitive interactions with other Twitter users (Davis et
al., 2016).
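As an illustration of the heuristic checks just described, the Python sketch below flags a few of the suspicious profile signals; it is not the authors' procedure, and the profile record, field names, and thresholds are hypothetical stand-ins for the kind of metadata the Twitter API exposes.

import re
from datetime import datetime, timezone

def heuristic_bot_signals(profile):
    """Collect suspicious signals for a single profile record (a plain dict here)."""
    signals = []
    # Auto-generated usernames often look like a name followed by a long digit string.
    if re.fullmatch(r"[A-Za-z]+\d{6,}", profile["screen_name"]):
        signals.append("auto-generated username")
    # Unbalanced following-follower ratio.
    if profile["friends_count"] / max(profile["followers_count"], 1) > 20:
        signals.append("unbalanced following-follower ratio")
    # Recently created account.
    if (datetime.now(timezone.utc) - profile["created_at"]).days < 30:
        signals.append("recent profile creation time")
    # Missing profile description.
    if not profile.get("description"):
        signals.append("missing profile description")
    return signals

# Hypothetical example profile.
example = {
    "screen_name": "Jane84731920",
    "followers_count": 12,
    "friends_count": 4800,
    "created_at": datetime(2024, 11, 2, tzinfo=timezone.utc),
    "description": "",
}
print(heuristic_bot_signals(example))  # prints the signals that fire for this record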
In comparison with generic social bots, the major damage political bots are designed to
do is to impersonate political actors with partisan personas, generating false support and

References
Brewer, M. B. (1999). The psychology of prejudice: Ingroup love or outgroup hate? Journal of Social Issues, 55(3), 429–444.
Busemeyer, J. R., & Townsend, J. T. (1993). Decision field theory: A dynamic-cognitive approach to decision making in an uncertain environment. Psychological Review, 100(3), 432–459.
Kruger, J., & Dunning, D. (1999). Unskilled and unaware of it: How difficulties in recognizing one's own incompetence lead to inflated self-assessments. Journal of Personality and Social Psychology, 77(6), 1121–1134.
Kunda, Z. (1990). The case for motivated reasoning. Psychological Bulletin, 108(3), 480–498.
Lazer, D. M. J., et al. (2018). The science of fake news. Science, 359(6380), 1094–1096.
Frequently Asked Questions (15)
Q1. What are the contributions in this paper?

This study examines perceptions of bots with partisan personas through an online experiment (N = 656) that tests the ability to differentiate bots from humans on Twitter. The authors discuss implications for how partisan identities affect motivated reasoning and how political bots exacerbate political polarization.

Future research could build experimental environments in which researchers have full control over the profiles. Considering the complex and diverse design of bots, future research will benefit from large-scale content analysis, where reliability of the coding is tested on an independent sample. In particular, the deceptive nature of conservative profiles needs to be further explored in studies with representative samples of partisan bots. Finally, to further test partisan differences in motivated reasoning, it would be of interest to replicate the study when the president is from the Democratic Party. 

When subjects are asked to perform more difficult tasks, the longer duration could indicate a cognitive-resource deficit due to a higher level of perceived difficulty, which might impede good performance.

The authors inspect skepticism rather than hostility, because skepticism is not an aggressive behavior and is therefore more likely to be observed.

While the authors explore the potential nonlinear relationship between time and task performance, they are inclined to believe that longer time implies lower accuracy in telling bots from humans, partly because their methodology screens out inattentive participants.

They use supervised Machine Learning (ML) classification to categorize bots based on individual account characteristics such as the number of friends, number of favorites, number of mentions, account age, etc. Botometer and other supervised ML algorithms use statistical methods (e.g., random forest) or neural networks and have demonstrated satisfactory results in identifying individual accounts (Kudugunta & Ferrara, 2018).
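For illustration only, the scikit-learn sketch below shows this kind of feature-based supervised classification with a random forest; the feature set, toy training rows, and labels are invented for the example and do not reproduce Botometer's actual model.

from sklearn.ensemble import RandomForestClassifier

# Each row: [friends_count, favourites_count, mentions_per_day, account_age_days]
X_train = [
    [4800, 3, 120, 20],    # bot-like: young account, heavy mentioning, few favourites
    [310, 950, 4, 2100],   # human-like: older account, organic activity
    [5200, 1, 200, 15],
    [150, 400, 2, 3300],
]
y_train = [1, 0, 1, 0]     # 1 = bot, 0 = human

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

# Estimated probability that a new, unseen account is a bot.
print(clf.predict_proba([[2600, 10, 80, 45]])[0][1])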

Some social bots appear easily identifiable because they are designed for a single purpose, such as boosting an account's follower or retweet count (Yang et al., 2019).

Deb, Badawy, and Ferrara (2019) analyzed approximately one million Twitter accounts and showed that conservative bots are much more active in terms of the number of tweets; they are twice as active as liberal bots or their conservative human counterparts, and nearly three times as active as liberal humans. 

Given the broad diversity of political bots, here the authors focus on two dimensions that may play key roles in affecting human perceptions: the ambiguity between human and bot accounts, and the partisanship of an account.

To further examine Wang et al.'s (2014) finding with insight from cognitive psychology, in the present bot-detection task the authors consider the role of two factors in recognition accuracy: time (as a proxy for attention) and perceived uncertainty in the deliberation process (H4).

Differentiating bots from human users relies mostly on individual capacity; it is essentially a deliberative process with considerable uncertainty, especially in an experimental setting. 

Their research showed that conservative bots are particularly effective compared to liberal bots at establishing following-follower relationships and interactions with humans; e.g., they receive more replies and are retweeted more often.

These observations could be explained by a combination of two phenomena: conservative bots being more effective, and/or conservative users being more vulnerable to manipulation by social bots (Grinberg et al. 2019). 

Usually in simple tasks, a longer reaction time could simply mean that more attention resources are allocated (Lang & Basil, 1998). 

To test the observation in a more controlled setting, researchers designed bot accounts with different combinations of realistic features and used them to send private messages to users in an experiment (Wald, Khoshgoftaar, Napolitano, & Sumner, 2013).