Asymmetrical perceptions of partisan political bots
Frequently Asked Questions (15)
Q2. What are the future works in this paper?
Future research could build experimental environments in which researchers have full control over the profiles. Considering the complex and diverse design of bots, future research will benefit from large-scale content analysis, where reliability of the coding is tested on an independent sample. In particular, the deceptive nature of conservative profiles needs to be further explored in studies with representative samples of partisan bots. Finally, to further test partisan differences in motivated reasoning, it would be of interest to replicate the study when the president is from the Democratic Party.
Q3. What is the effect of longer duration on the cognitive-resource deficit?
When subjects are asked to perform more difficult tasks, a longer duration could indicate a cognitive-resource deficit caused by a higher level of perceived difficulty, which might impede good performance.
Q4. Why is skepticism more likely to be observed?
The authors inspect skepticism rather than hostility because skepticism is not an aggressive behavior and is therefore more likely to be observed.
Q5. Why do the authors believe that longer time implies lower accuracy in telling bots from humans?
While the authors explore a potential nonlinear relationship between time and task performance, they are inclined to believe that longer time implies lower accuracy in telling bots from humans, partly because their methodology screens out inattentive participants.
Q6. What are the main features of supervised ML?
They use supervised Machine Learning (ML) classification to categorize bots based on individual account characteristics such as the number of friends, number of favorites, number of mentions, account age, etc. Botometer and other supervised ML algorithms use statistical methods (e.g., random forest) or neural networks and have demonstrated satisfactory results in identifying individual accounts (Kudugunta & Ferrara, 2018).
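The pipeline described above can be sketched in a few lines. This is a minimal illustration only: the feature set, labels, and data below are synthetic stand-ins, not the actual Botometer features or the paper's data.

```python
# Hypothetical sketch of supervised bot classification on account-level
# features (friends, favorites, mentions, account age), in the spirit of
# Botometer-style pipelines. All data here are simulated for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000

# Synthetic account-level features.
X = np.column_stack([
    rng.poisson(200, n),       # number of friends
    rng.poisson(500, n),       # number of favorites
    rng.poisson(50, n),        # number of mentions
    rng.uniform(1, 3000, n),   # account age in days
])
# Synthetic labels: pretend bots skew toward young, high-mention accounts.
y = ((X[:, 2] > 55) & (X[:, 3] < 1500)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
```

Real systems train on labeled bot/human accounts and use far richer features (timing, content, network structure); a random forest is shown here simply because the answer names it as a typical choice.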
Q7. Why do some social bots appear easily identifiable?
Some social bots appear easily identifiable because they are designed for a single purpose, such as boosting an account's follower or retweet count (Yang et al., 2019).
Q8. How many conservative bots are active on Twitter?
Deb, Badawy, and Ferrara (2019) analyzed approximately one million Twitter accounts and showed that conservative bots are much more active in terms of the number of tweets; they are twice as active as liberal bots or their conservative human counterparts, and nearly three times as active as liberal humans.
Q9. What are the dimensions of the ambiguity between bots and humans?
Given the broad diversity of political bots, here the authors focus on two dimensions that may play key roles in affecting human perceptions: the ambiguity between human and bot accounts, and the partisanship of an account.
Q10. What is the role of time in the recognition of bots?
To further examine Wang et al.'s (2014) finding with insight from cognitive psychology, in the present bot detection task the authors consider the role of two factors in recognition accuracy: time (as a proxy for attention) and perceived uncertainty in the deliberation process (H4).
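The time-accuracy relationship described above is the kind of question a simple logistic regression can probe: does longer deliberation time predict a lower probability of correctly labeling an account? The sketch below uses simulated data with an assumed negative time effect; it is not the paper's analysis or data.

```python
# Hedged sketch: regressing correctness of each bot/human judgment on the
# time spent deliberating. Data and the negative time effect are simulated
# purely to illustrate the modeling approach.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 500
time_spent = rng.uniform(1, 30, n)  # seconds per judgment (simulated)

# Simulate: probability of a correct call declines with time spent.
p_correct = 1 / (1 + np.exp(-(2.0 - 0.12 * time_spent)))
correct = rng.binomial(1, p_correct)

model = LogisticRegression()
model.fit(time_spent.reshape(-1, 1), correct)
# A negative coefficient means longer deliberation predicts lower accuracy.
print("time coefficient:", model.coef_[0, 0])
```

A nonlinear relationship, as the authors also consider, could be explored by adding a quadratic time term or a spline to the predictor matrix.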
Q11. What is the difference between a bot and a human?
Differentiating bots from human users relies mostly on individual capacity; it is essentially a deliberative process with considerable uncertainty, especially in an experimental setting.
Q12. What is the difference between bots and humans?
Their research showed that conservative bots are particularly effective compared to liberal bots at establishing following-follower relationships and interactions with humans; for example, they receive more replies and are retweeted more often.
Q13. What could be explained by the combination of two phenomena?
These observations could be explained by a combination of two phenomena: conservative bots being more effective, and/or conservative users being more vulnerable to manipulation by social bots (Grinberg et al. 2019).
Q14. What is the common explanation for the longer reaction time in a task?
In simple tasks, a longer reaction time usually just means that more attentional resources are being allocated (Lang & Basil, 1998).
Q15. What did the researchers use to test the observation in a more controlled setting?
To test the observation in a more controlled setting, researchers designed bot accounts with different combinations of realistic features and used them to send private messages to users in an experiment (Wald, Khoshgoftaar, Napolitano, & Sumner, 2013).