
Showing papers on "Social media published in 2017"


Journal ArticleDOI
TL;DR: The authors found that people are much more likely to believe stories that favor their preferred candidate, especially if they have ideologically segregated social media networks, and that the average American adult saw on the order of one or perhaps several fake news stories in the months around the 2016 U.S. presidential election, with just over half of those who recalled seeing them believing them.
Abstract: Following the 2016 U.S. presidential election, many have expressed concern about the effects of false stories (“fake news”), circulated largely through social media. We discuss the economics of fake news and present new data on its consumption prior to the election. Drawing on web browsing data, archives of fact-checking websites, and results from a new online survey, we find: (i) social media was an important but not dominant source of election news, with 14 percent of Americans calling social media their “most important” source; (ii) of the known false news stories that appeared in the three months before the election, those favoring Trump were shared a total of 30 million times on Facebook, while those favoring Clinton were shared 8 million times; (iii) the average American adult saw on the order of one or perhaps several fake news stories in the months around the election, with just over half of those who recalled seeing them believing them; and (iv) people are much more likely to believe stories that favor their preferred candidate, especially if they have ideologically segregated social media networks.

3,959 citations


Journal ArticleDOI
TL;DR: This survey presents a comprehensive review of fake news detection on social media, including fake news characterizations drawn from psychology and social theories, existing algorithms from a data mining perspective, evaluation metrics, and representative datasets.
Abstract: Social media for news consumption is a double-edged sword. On the one hand, its low cost, easy access, and rapid dissemination of information lead people to seek out and consume news from social media. On the other hand, it enables the wide spread of "fake news", i.e., low quality news with intentionally false information. The extensive spread of fake news has the potential for extremely negative impacts on individuals and society. Therefore, fake news detection on social media has recently become an emerging research topic that is attracting tremendous attention. Fake news detection on social media presents unique characteristics and challenges that make existing detection algorithms from traditional news media ineffective or not applicable. First, fake news is intentionally written to mislead readers to believe false information, which makes it difficult and nontrivial to detect based on news content; therefore, we need to include auxiliary information, such as user social engagements on social media, to help make a determination. Second, exploiting this auxiliary information is challenging in and of itself as users' social engagements with fake news produce data that is big, incomplete, unstructured, and noisy. Because the issue of fake news detection on social media is both challenging and relevant, we conducted this survey to further facilitate research on the problem. In this survey, we present a comprehensive review of detecting fake news on social media, including fake news characterizations on psychology and social theories, existing algorithms from a data mining perspective, evaluation metrics and representative datasets. We also discuss related research areas, open problems, and future research directions for fake news detection on social media.

1,891 citations


Proceedings Article
03 May 2017
TL;DR: This work uses a crowd-sourced hate speech lexicon to collect tweets containing hate speech keywords and labels a sample of these tweets into three categories: those containing hate speech, those containing only offensive language, and those with neither.
Abstract: A key challenge for automatic hate-speech detection on social media is the separation of hate speech from other instances of offensive language. Lexical detection methods tend to have low precision because they classify all messages containing particular terms as hate speech and previous work using supervised learning has failed to distinguish between the two categories. We used a crowd-sourced hate speech lexicon to collect tweets containing hate speech keywords. We use crowd-sourcing to label a sample of these tweets into three categories: those containing hate speech, only offensive language, and those with neither. We train a multi-class classifier to distinguish between these different categories. Close analysis of the predictions and the errors shows when we can reliably separate hate speech from other offensive language and when this differentiation is more difficult. We find that racist and homophobic tweets are more likely to be classified as hate speech but that sexist tweets are generally classified as offensive. Tweets without explicit hate keywords are also more difficult to classify.
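As a rough illustration of the pipeline described in the abstract above (keyword-based collection, crowd-sourced three-way labels, then a multi-class classifier), the sketch below trains a TF-IDF plus logistic-regression model on a hypothetical labeled file; the file and column names are assumptions, and the original work also used part-of-speech, sentiment, and readability features.

```python
# Minimal sketch of a three-way tweet classifier (hate speech / offensive / neither).
# File and column names are hypothetical; the published model used richer features.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.metrics import classification_report

df = pd.read_csv("labeled_tweets.csv")   # columns: text, label in {hate, offensive, neither}
X_train, X_test, y_train, y_test = train_test_split(
    df["text"], df["label"], test_size=0.2, stratify=df["label"], random_state=42)

clf = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), min_df=5),               # word uni- and bigrams
    LogisticRegression(max_iter=1000, class_weight="balanced"))  # balanced weights for skewed classes
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))
```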

1,425 citations


06 Sep 2017
TL;DR: A survey conducted by the Pew Research Center found that a majority of adults in the United States access news on social media, with 18% doing so often.
Abstract: As part of an ongoing examination of social media platforms and news, the Pew Research Center has found that a majority of adults in the United States – 62%, or just over six-in-ten – access their news on social media, with 18% doing so often. The researchers analysed the scope and characteristics of social media news consumers across nine social networking sites, with Facebook coming out on top. News plays a varying role across the social networking sites studied. The survey shows that two-thirds of Facebook users (66%) access news on the site, nearly six-in-ten Twitter users (59%) access news on Twitter, and seven-in-ten Reddit users get news on that platform. On Tumblr, the figure sits at 31%, while for the other five social networking sites it is true of only about one-fifth or less of their user bases. Addressing the issue of news audiences overlapping on social media platforms, the researchers found that of those who access news using at least one of the sites, a majority (64%) access news on just one – most commonly Facebook. About a quarter (26%) get news on two social media sites. Just one-in-ten access news on three or more sites. The study is based on a survey conducted between 12 January and 8 February 2016 with 4,654 members of the Pew Research Center's American Trends Panel.

966 citations


Posted Content
TL;DR: This survey presents a comprehensive review of detecting fake news on social media, including fake news characterizations on psychology and social theories, existing algorithms from a data mining perspective, evaluation metrics and representative datasets, and future research directions for fake news detection on social media.
Abstract: Social media for news consumption is a double-edged sword. On the one hand, its low cost, easy access, and rapid dissemination of information lead people to seek out and consume news from social media. On the other hand, it enables the wide spread of "fake news", i.e., low quality news with intentionally false information. The extensive spread of fake news has the potential for extremely negative impacts on individuals and society. Therefore, fake news detection on social media has recently become an emerging research topic that is attracting tremendous attention. Fake news detection on social media presents unique characteristics and challenges that make existing detection algorithms from traditional news media ineffective or not applicable. First, fake news is intentionally written to mislead readers to believe false information, which makes it difficult and nontrivial to detect based on news content; therefore, we need to include auxiliary information, such as user social engagements on social media, to help make a determination. Second, exploiting this auxiliary information is challenging in and of itself as users' social engagements with fake news produce data that is big, incomplete, unstructured, and noisy. Because the issue of fake news detection on social media is both challenging and relevant, we conducted this survey to further facilitate research on the problem. In this survey, we present a comprehensive review of detecting fake news on social media, including fake news characterizations on psychology and social theories, existing algorithms from a data mining perspective, evaluation metrics and representative datasets. We also discuss related research areas, open problems, and future research directions for fake news detection on social media.

887 citations


Journal ArticleDOI
TL;DR: The notion of self-branding, first popularised in a provocative piece in Fast Company, has drawn myriad academic responses over the last decade and has been criticised by some academic researchers.
Abstract: The notion of self-branding has drawn myriad academic responses over the last decade. First popularised in a provocative piece published in Fast Company, self-branding has been criticised by some o...

708 citations


Journal ArticleDOI
TL;DR: The findings supported the notion that addictive social media use reflects a need to feed the ego and an attempt to inhibit a negative self-evaluation; they also indicate that women may tend to develop more addictive use of activities involving social interaction than men.

671 citations


Journal ArticleDOI
TL;DR: The researchers provide an overview of the main themes and trends covered by the relevant literature, such as the role of social media in advertising, electronic word of mouth, customer relationship management, and firms' brands and performance.

602 citations


Journal ArticleDOI
TL;DR: Ten lessons concerning online social networking sites and addiction, based on insights derived from recent empirical research, are presented, and recommendations for research and clinical applications are provided.
Abstract: Online social networking sites (SNSs) have gained increasing popularity in the last decade, with individuals engaging in SNSs to connect with others who share similar interests. The perceived need to be online may result in compulsive use of SNSs, which in extreme cases may result in symptoms and consequences traditionally associated with substance-related addictions. In order to present new insights into online social networking and addiction, in this paper, 10 lessons learned concerning online social networking sites and addiction based on the insights derived from recent empirical research will be presented. These are: (i) social networking and social media use are not the same; (ii) social networking is eclectic; (iii) social networking is a way of being; (iv) individuals can become addicted to using social networking sites; (v) Facebook addiction is only one example of SNS addiction; (vi) fear of missing out (FOMO) may be part of SNS addiction; (vii) smartphone addiction may be part of SNS addiction; (viii) nomophobia may be part of SNS addiction; (ix) there are sociodemographic differences in SNS addiction; and (x) there are methodological problems with research to date. These are discussed in turn. Recommendations for research and clinical applications are provided.

596 citations


Journal ArticleDOI
TL;DR: A review of recent literature contextualises the findings of a fresh content analysis of news values within a range of UK media 15 years on from the last study, concluding that no taxonomy can ever explain everything.
Abstract: The deceptively simple question “What is news?” remains pertinent even as we ponder the future of journalism in the digital age. This article examines news values within mainstream journalism and considers the extent to which news values may be changing since earlier landmark studies were undertaken. Its starting point is Harcup and O’Neill’s widely-cited 2001 updating of Galtung and Ruge’s influential 1965 taxonomy of news values. Just as that study put Galtung and Ruge’s criteria to the test with an empirical content analysis of published news, this new study explores the extent to which Harcup and O’Neill’s revised list of news values remain relevant given the challenges (and opportunities) faced by journalism today, including the emergence of social media. A review of recent literature contextualises the findings of a fresh content analysis of news values within a range of UK media 15 years on from the last study. The article concludes by suggesting a revised and updated set of contemporary news values, whilst acknowledging that no taxonomy can ever explain everything.

589 citations


Journal ArticleDOI
TL;DR: In this paper, the secretive authors of so-called 50c party posts and the posts they write are identified and analyzed; contrary to the claim that such posts vociferously argue for the government's side in political and policy debates, the authors show that most of these posts involve cheerleading for China, the revolutionary history of the Communist Party, or other symbols of the regime.
Abstract: The Chinese government has long been suspected of hiring as many as 2 million people to surreptitiously insert huge numbers of pseudonymous and other deceptive writings into the stream of real social media posts, as if they were the genuine opinions of ordinary people. Many academics, and most journalists and activists, claim that these so-called 50c party posts vociferously argue for the government’s side in political and policy debates. As we show, this is also true of most posts openly accused on social media of being 50c. Yet almost no systematic empirical evidence exists for this claim or, more importantly, for the Chinese regime’s strategic objective in pursuing this activity. In the first large-scale empirical analysis of this operation, we show how to identify the secretive authors of these posts, the posts written by them, and their content. We estimate that the government fabricates and posts about 448 million social media comments a year. In contrast to prior claims, we show that the Chinese regime’s strategy is to avoid arguing with skeptics of the party and the government, and to not even discuss controversial issues. We show that the goal of this massive secretive operation is instead to distract the public and change the subject, as most of these posts involve cheerleading for China, the revolutionary history of the Communist Party, or other symbols of the regime. We discuss how these results fit with what is known about the Chinese censorship program and suggest how they may change our broader theoretical understanding of “common knowledge” and information control in authoritarian regimes.

Journal ArticleDOI
TL;DR: In this article, the authors examined three major online review platforms, TripAdvisor, Expedia, and Yelp, in terms of information quality related to online reviews about the entire hotel population in Manhattan, New York City.

Journal ArticleDOI
09 Jan 2017-PLOS ONE
TL;DR: It is concluded that adolescents at-risk of problematic social media use should be targeted by school-based prevention and intervention programs.
Abstract: Despite social media use being one of the most popular activities among adolescents, prevalence estimates among teenage samples of social media (problematic) use are lacking in the field. The present study surveyed a nationally representative Hungarian sample comprising 5,961 adolescents as part of the European School Survey Project on Alcohol and Other Drugs (ESPAD). Using the Bergen Social Media Addiction Scale (BSMAS) and based on latent profile analysis, 4.5% of the adolescents belonged to the at-risk group, and reported low self-esteem, high level of depression symptoms, and elevated social media use. Results also demonstrated that BSMAS has appropriate psychometric properties. It is concluded that adolescents at-risk of problematic social media use should be targeted by school-based prevention and intervention programs.
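The at-risk group described above was identified with latent profile analysis over BSMAS responses. A rough analogue of that step, under the assumption of a simple Gaussian mixture over the six item scores, is sketched below; the file name, column names, and selection of the number of profiles by BIC are illustrative rather than the study's exact specification.

```python
# Rough analogue of the latent-profile step: fit Gaussian mixtures over the six
# BSMAS item scores and choose the number of profiles by BIC. File and column
# names are hypothetical; the study's actual LPA specification may differ.
import pandas as pd
from sklearn.mixture import GaussianMixture

items = [f"bsmas_{i}" for i in range(1, 7)]               # six BSMAS items, scored 1-5
X = pd.read_csv("espad_bsmas.csv")[items].dropna()

fits = {k: GaussianMixture(n_components=k, n_init=5, random_state=0).fit(X)
        for k in range(1, 6)}
best_k = min(fits, key=lambda k: fits[k].bic(X))          # lowest BIC wins
profiles = fits[best_k].predict(X)                        # profile membership per adolescent
print(best_k, pd.Series(profiles).value_counts(normalize=True).round(3))
```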

Journal ArticleDOI
TL;DR: There is growing evidence to suggest that Facebook is a useful recruitment tool and its use, therefore, should be considered when implementing future health research.
Abstract: Background: Social media is a popular online tool that allows users to communicate and exchange information. It allows digital content such as pictures, videos and websites to be shared, discussed, republished and endorsed by its users, their friends and businesses. Adverts can be posted and promoted to specific target audiences by demographics such as region, age or gender. Recruiting for health research is complex with strict requirement criteria imposed on the participants. Traditional research recruitment relies on flyers, newspaper adverts, radio and television broadcasts, letters, emails, website listings, and word of mouth. These methods are potentially poor at recruiting hard-to-reach demographics and can be slow and expensive. Recruitment via social media, in particular Facebook, may be faster and cheaper. Objective: The aim of this study was to systematically review the literature regarding the current use and success of Facebook to recruit participants for health research purposes. Methods: A literature review was completed in March 2017 in the English language using MEDLINE, EMBASE, Web of Science, PubMed, PsycInfo, Google Scholar, and a hand search of article references. Papers from the past 12 years were included and number of participants, recruitment period, number of impressions, cost per click or participant, and conversion rate extracted. Results: A total of 35 studies were identified from the United States (n=22), Australia (n=9), Canada (n=2), Japan (n=1), and Germany (n=1) and appraised using the Critical Appraisal Skills Programme (CASP) checklist. All focused on the feasibility of recruitment via Facebook, with some (n=10) also testing interventions, such as smoking cessation and depression reduction. Most recruited young age groups (16-24 years), with the remaining targeting specific demographics, for example, military veterans. Information from the 35 studies was analyzed with median values being 264 recruited participants, a 3-month recruitment period, 3.3 million impressions, cost per click of US $0.51, conversion rate of 4% (range 0.06-29.50), eligibility of 61% (range 17-100), and cost per participant of US $14.41. The studies showed success in penetrating hard-to-reach populations, finding the results representative of their control or comparison demographic, except for an over-representation of young white women. Conclusions: There is growing evidence to suggest that Facebook is a useful recruitment tool and its use, therefore, should be considered when implementing future health research. When compared with traditional recruitment methods (print, radio, television, and email), benefits include reduced costs, shorter recruitment periods, better representation, and improved participant selection in young and hard-to-reach demographics. It remains limited, however, by Internet access and the over-representation of young white women. Future studies should recruit across all ages and explore recruitment via other forms of social media. [J Med Internet Res 2017;19(8):e290]
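The per-study metrics pooled in the review above (cost per click, conversion rate, cost per participant) follow from a campaign's basic advertising statistics; the figures in the sketch below are made up to sit near the reported medians, purely to show the arithmetic.

```python
# Arithmetic behind the recruitment metrics pooled above. The input figures are
# invented to land near the review's median values; they are not from any one study.
ad_spend_usd = 3804.00   # total Facebook advertising spend
clicks       = 7460      # clicks on the recruitment advert
participants = 264       # eligible participants who completed enrolment

cost_per_click       = ad_spend_usd / clicks          # ~US $0.51
conversion_rate_pct  = 100 * participants / clicks    # ~3.5% of clicks become participants
cost_per_participant = ad_spend_usd / participants    # ~US $14.41

print(f"CPC ${cost_per_click:.2f} | conversion {conversion_rate_pct:.1f}% | "
      f"cost per participant ${cost_per_participant:.2f}")
```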

Journal ArticleDOI
TL;DR: In this paper, the authors focus on another part of the hybrid media system and explore how politicians in four countries (AT, CH, IT, UK) use Facebook and Twitter for populist purposes.
Abstract: Populism is a relevant but contested concept in political communication research. It has been well-researched in political manifestos and the mass media. The present study focuses on another part of the hybrid media system and explores how politicians in four countries (AT, CH, IT, UK) use Facebook and Twitter for populist purposes. Five key elements of populism are derived from the literature: emphasizing the sovereignty of the people, advocating for the people, attacking the elite, ostracizing others, and invoking the ‘heartland’. A qualitative text analysis reveals that populism manifests itself in a fragmented form on social media. Populist statements can be found across countries, parties, and politicians’ status levels. While a broad range of politicians advocate for the people, attacks on the economic elite are preferred by left-wing populists. Attacks on the media elite and ostracism of others, however, are predominantly conducted by right-wing speakers. Overall, the paper provides an in-d...

Journal ArticleDOI
TL;DR: A new taxonomy with six categories is identified to describe Twitter use in health research; many data elements discernible from a user's Twitter profile are underreported in the literature and could provide new opportunities to characterize the users whose data are analyzed in these studies.
Abstract: Background. Researchers have used traditional databases to study public health for decades. Less is known about the use of social media data sources, such as Twitter, for this purpose.Objectives. To systematically review the use of Twitter in health research, define a taxonomy to describe Twitter use, and characterize the current state of Twitter in health research.Search methods. We performed a literature search in PubMed, Embase, Web of Science, Google Scholar, and CINAHL through September 2015.Selection criteria. We searched for peer-reviewed original research studies that primarily used Twitter for health research.Data collection and analysis. Two authors independently screened studies and abstracted data related to the approach to analysis of Twitter data, methodology used to study Twitter, and current state of Twitter research by evaluating time of publication, research topic, discussion of ethical concerns, and study funding source.Main results. Of 1110 unique health-related articles mentioning Twi...

Journal ArticleDOI
TL;DR: In this paper, the differences between Facebook, Twitter, Instagram, and Snapchat in terms of intensity of use, time spent daily on the platform, and use motivations are explored, and the study applies t...
Abstract: The current research explores differences between Facebook, Twitter, Instagram, and Snapchat in terms of intensity of use, time spent daily on the platform, and use motivations. The study applies t...

Journal ArticleDOI
TL;DR: Automated detection methods may help to identify depressed or otherwise at-risk individuals through the large-scale passive monitoring of social media, and in the future may complement existing screening procedures.
Abstract: Although rates of diagnosing mental illness have improved over the past few decades, many cases remain undetected. Symptoms associated with mental illness are observable on Twitter, Facebook, and web forums, and automated methods are increasingly able to detect depression and other mental illnesses. In this paper, recent studies that aimed to predict mental illness using social media are reviewed. Mentally ill users have been identified using screening surveys, their public sharing of a diagnosis on Twitter, or by their membership in an online forum, and they were distinguishable from control users by patterns in their language and online activity. Automated detection methods may help to identify depressed or otherwise at-risk individuals through the large-scale passive monitoring of social media, and in the future may complement existing screening procedures.

Journal ArticleDOI
TL;DR: It is found that inclusion of widely used content related to brand personality is associated with higher levels of consumer engagement (Likes, comments, shares) with a message, and that certain directly informative content, such as deals and promotions, drives consumers’ path to conversio...
Abstract: We describe the effects of social media advertising content on customer engagement using Facebook data. We content-code more than 100,000 messages across 800 companies using a combination of Amazon Mechanical Turk and state-of-the-art Natural Language Processing and machine learning algorithms. We use this large-scale dataset of content attributes to describe the association of various kinds of social media marketing content with user engagement - defined as Likes, comments, shares, and click-throughs - with the messages. We find that inclusion of widely used content related to brand-personality - like humor, emotion and brand’s philanthropic positioning - is associated with higher levels of consumer engagement (like, comment, share) with a message. We find that directly informative content - like mentions of prices and availability - is associated with lower levels of engagement when included in messages in isolation, but higher engagement levels when provided in combination with brand-personality content. We also find certain directly informative content such as the mention of deals and promotions drive consumers’ path-to-conversion (click-throughs). These results hold after correcting for the non-random targeting of Facebook’s EdgeRank (News Feed) algorithm, so reflect more closely user reaction to content, rather than Facebook’s behavioral targeting. Our results suggest therefore that there may be substantial gains from content engineering by combining informative characteristics associated with immediate leads (via improved click-throughs) with brand-personality related content that help maintain future reach and branding on the social media site (via improved engagement). These results inform content design strategies in social media. Separately, the methodology we apply to content-code large-scale textual data provides a framework for future studies on unstructured data such as advertising content or product reviews.
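The central association in the study above (engagement regressed on binary content attributes such as humor, emotion, or price mentions) can be sketched as a simple count regression; the data file and attribute columns below are hypothetical, and the published model is far richer, notably correcting for Facebook's EdgeRank targeting, which this sketch does not.

```python
# Minimal sketch: relate content-coded attributes of brand posts to engagement counts
# with a Poisson regression. Column and file names are hypothetical; the paper's model
# is richer and corrects for EdgeRank (News Feed) targeting, which this sketch omits.
import pandas as pd
import statsmodels.formula.api as smf

posts = pd.read_csv("brand_posts_coded.csv")   # likes plus 0/1 attribute codes per post
fit = smf.poisson(
    "likes ~ humor + emotion + philanthropic + price_mention + deal_mention",
    data=posts).fit()
print(fit.summary())   # positive, significant coefficients indicate higher engagement
```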

Journal ArticleDOI
TL;DR: This review provides an extensive account of the state of the art in both scholarly use of social media and altmetrics, reviewing the various functions these platforms have in the scholarly communication process and the factors that affect this use.
Abstract: Social media has become integrated into the fabric of the scholarly communication system in fundamental ways, principally through scholarly use of social media platforms and the promotion of new indicators on the basis of interactions with these platforms. Research and scholarship in this area has accelerated since the coining and subsequent advocacy for altmetrics—that is, research indicators based on social media activity. This review provides an extensive account of the state of the art in both scholarly use of social media and altmetrics. The review consists of 2 main parts: the first examines the use of social media in academia, reviewing the various functions these platforms have in the scholarly communication process and the factors that affect this use. The second part reviews empirical studies of altmetrics, discussing the various interpretations of altmetrics, data collection and methodological limitations, and differences according to platform. The review ends with a critical discussion of the implications of this transformation in the scholarly communication system.

Journal ArticleDOI
TL;DR: Examination of users of four social networking sites and their influence on online bridging and bonding social capital found that Twitter users had the highest bridging social capital, followed by Instagram, Facebook, and Snapchat, while Snapchat users had the highest bonding social capital, followed by Facebook, Instagram, and Twitter.

Book
14 Mar 2017
TL;DR: Sunstein argues that today's Internet is driving political fragmentation, polarization, and even extremism, and proposes practical and legal changes to make the Internet friendlier to democratic deliberation, showing that #Republic need not be an ironic term.
Abstract: From the New York Times bestselling author of Nudge and The World According to Star Wars, a revealing account of how today's Internet threatens democracy--and what can be done about it As the Internet grows more sophisticated, it is creating new threats to democracy. Social media companies such as Facebook can sort us ever more efficiently into groups of the like-minded, creating echo chambers that amplify our views. It's no accident that on some occasions, people of different political views cannot even understand one another. It's also no surprise that terrorist groups have been able to exploit social media to deadly effect. Welcome to the age of #Republic. In this revealing book, New York Times bestselling author Cass Sunstein shows how today's Internet is driving political fragmentation, polarization, and even extremism--and what can be done about it. He proposes practical and legal changes to make the Internet friendlier to democratic deliberation, showing that #Republic need not be an ironic term. Rather, it can be a rallying cry for the kind of democracy that citizens of diverse societies need most.

Proceedings ArticleDOI
Anbang Xu, Zhe Liu, Yufan Guo, Vibha Singhal Sinha, Rama Akkiraju
02 May 2017
TL;DR: A new conversational system automatically generates responses to users' requests on social media; it is integrated with state-of-the-art deep learning techniques and trained on nearly 1M Twitter conversations between users and agents from over 60 brands.
Abstract: Users are rapidly turning to social media to request and receive customer service; however, a majority of these requests are not addressed in a timely manner, or not addressed at all. To overcome the problem, we create a new conversational system to automatically generate responses to users' requests on social media. Our system is integrated with state-of-the-art deep learning techniques and is trained on nearly 1M Twitter conversations between users and agents from over 60 brands. The evaluation reveals that over 40% of the requests are emotional, and the system is about as good as human agents in showing empathy to help users cope with emotional situations. Results also show our system outperforms an information retrieval system based on both human judgments and an automatic evaluation metric.
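The information-retrieval baseline that the system above is evaluated against can be sketched as nearest-neighbour retrieval over historical request-reply pairs; the data file and field names below are assumptions, and the paper's own system is a deep-learning generator rather than this baseline.

```python
# Sketch of an information-retrieval baseline for customer-service replies: answer a
# new request with the agent reply attached to the most similar past request.
# File and column names are hypothetical; the paper's system is a trained generator.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

pairs = pd.read_csv("brand_conversations.csv")        # columns: user_request, agent_reply
vec = TfidfVectorizer(min_df=2)
request_matrix = vec.fit_transform(pairs["user_request"])

def retrieve_reply(new_request: str) -> str:
    sims = cosine_similarity(vec.transform([new_request]), request_matrix)
    return pairs["agent_reply"].iloc[sims.argmax()]    # reply tied to the closest past request

print(retrieve_reply("My order arrived damaged, can you help?"))
```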

Journal ArticleDOI
TL;DR: The study characterizes both the bots and the users who engaged with them, contrasting them with users who did not engage; anomalous account usage patterns suggest the possible existence of a black market for reusable political disinformation bots.
Abstract: Recent accounts from researchers, journalists, as well as federal investigators, reached a unanimous conclusion: social media are systematically exploited to manipulate and alter public opinion. Some disinformation campaigns have been coordinated by means of bots, social media accounts controlled by computer scripts that try to disguise themselves as legitimate human users. In this study, we describe one such operation that occurred in the run-up to the 2017 French presidential election. We collected a massive Twitter dataset of nearly 17 million posts, posted between 27 April and 7 May 2017 (Election Day). We then set out to study the MacronLeaks disinformation campaign: By leveraging a mix of machine learning and cognitive behavioral modeling techniques, we separated humans from bots, and then studied the activities of the two groups independently, as well as their interplay. We provide a characterization of both the bots and the users who engaged with them, and contrast it with that of the users who did not. The prior interests of disinformation adopters point to the reasons for this campaign's limited success: the users who engaged with MacronLeaks are mostly foreigners with a pre-existing interest in alt-right topics and alternative news media, rather than French users with diverse political views. In conclusion, anomalous account usage patterns suggest the possible existence of a black market for reusable political disinformation bots.

Journal ArticleDOI
Gunn Enli
TL;DR: In the 2016 US presidential election campaign, social media platforms were increasingly used as direct sources of news, bypassing the editorial media, and with the candidates’ millions of followers, Tw...
Abstract: In the 2016 US presidential election campaign, social media platforms were increasingly used as direct sources of news, bypassing the editorial media. With the candidates’ millions of followers, Tw...

Journal ArticleDOI
TL;DR: Attention to social comparison, SNS trust, tie strength, and homophily also significantly moderated the relationship between frequent use of each SNS to follow brands and brand community-related outcomes.

Proceedings ArticleDOI
25 Jun 2017
TL;DR: The authors propose a robust methodology for extracting text, user, and network-based attributes, studying the properties of bullies and aggressors and what features distinguish them from regular users, finding that bullies post less and are less popular than normal users, while aggressors are relatively popular and tend to include more negativity in their posts.
Abstract: In recent years, bullying and aggression against social media users have grown significantly, causing serious consequences to victims of all demographics. Nowadays, cyberbullying affects more than half of young social media users worldwide, who suffer from prolonged and/or coordinated digital harassment. Also, tools and technologies geared to understand and mitigate it are scarce and mostly ineffective. In this paper, we present a principled and scalable approach to detect bullying and aggressive behavior on Twitter. We propose a robust methodology for extracting text, user, and network-based attributes, studying the properties of bullies and aggressors, and what features distinguish them from regular users. We find that bullies post less, participate in fewer online communities, and are less popular than normal users. Aggressors are relatively popular and tend to include more negativity in their posts. We evaluate our methodology using a corpus of 1.6M tweets posted over 3 months, and show that machine learning classification algorithms can accurately detect users exhibiting bullying and aggressive behavior, with over 90% AUC.
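The final classification step above (predicting which users exhibit bullying or aggressive behaviour from aggregated text, user, and network attributes, scored by AUC) can be sketched as follows; the feature names and data file are illustrative rather than the authors' exact attribute set.

```python
# Sketch of the user-level classification step: predict abusive users from aggregated
# text, user, and network features, evaluated with AUC as in the paper. The feature
# and file names are illustrative, not the authors' exact attribute set.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

users = pd.read_csv("twitter_user_features.csv")   # one row per user, is_abusive in {0, 1}
features = ["tweet_count", "mean_sentiment", "hashtag_ratio", "followers",
            "friends", "clustering_coeff", "communities_joined"]
X_train, X_test, y_train, y_test = train_test_split(
    users[features], users["is_abusive"], test_size=0.3,
    stratify=users["is_abusive"], random_state=0)

clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_train, y_train)
print(f"AUC = {roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1]):.3f}")  # paper reports >0.90
```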

Posted Content
TL;DR: Analysis of 14 million messages spreading 400 thousand claims on Twitter during and following the 2016 U.S. presidential campaign and election suggests that curbing social bots may be an effective strategy for mitigating the spread of online misinformation.
Abstract: The massive spread of fake news has been identified as a major global risk and has been alleged to influence elections and threaten democracies. Communication, cognitive, social, and computer scientists are engaged in efforts to study the complex causes for the viral diffusion of digital misinformation and to develop solutions, while search and social media platforms are beginning to deploy countermeasures. However, to date, these efforts have been mainly informed by anecdotal evidence rather than systematic data. Here we analyze 14 million messages spreading 400 thousand claims on Twitter during and following the 2016 U.S. presidential campaign and election. We find evidence that social bots play a key role in the spread of fake news. Accounts that actively spread misinformation are significantly more likely to be bots. Automated accounts are particularly active in the early spreading phases of viral claims, and tend to target influential users. Humans are vulnerable to this manipulation, retweeting bots who post false news. Successful sources of false and biased claims are heavily supported by social bots. These results suggest that curbing social bots may be an effective strategy for mitigating the spread of online misinformation.


Journal ArticleDOI
TL;DR: It is found that bots play a major role in the spread of low-credibility content on Twitter, and control measures for limiting the spread of misinformation are suggested.
Abstract: The massive spread of digital misinformation has been identified as a major global risk and has been alleged to influence elections and threaten democracies. Communication, cognitive, social, and computer scientists are engaged in efforts to study the complex causes for the viral diffusion of misinformation online and to develop solutions, while search and social media platforms are beginning to deploy countermeasures. With few exceptions, these efforts have been mainly informed by anecdotal evidence rather than systematic data. Here we analyze 14 million messages spreading 400 thousand articles on Twitter during and following the 2016 U.S. presidential campaign and election. We find evidence that social bots played a disproportionate role in amplifying low-credibility content. Accounts that actively spread articles from low-credibility sources are significantly more likely to be bots. Automated accounts are particularly active in amplifying content in the very early spreading moments, before an article goes viral. Bots also target users with many followers through replies and mentions. Humans are vulnerable to this manipulation, retweeting bots who post links to low-credibility content. Successful low-credibility sources are heavily supported by social bots. These results suggest that curbing social bots may be an effective strategy for mitigating the spread of online misinformation.
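One measurement described above, whether bot-like accounts dominate the earliest moments of an article's spread, can be approximated by comparing the bot-score distribution of the first sharers with that of later sharers, as in the sketch below; the column names, the choice of the first 10 tweets as "early", and the 0.5 bot-score threshold are all assumptions.

```python
# Sketch of the early-amplification measurement: for each low-credibility article,
# compare how bot-like its first sharers are versus later sharers. Column names,
# the 10-tweet "early" window, and the 0.5 bot-score threshold are assumptions.
import pandas as pd

shares = pd.read_csv("lowcred_shares.csv", parse_dates=["created_at"])
# expected columns: article_id, user_id, created_at, bot_score (0-1)

def early_vs_late(group, n_early=10, threshold=0.5):
    g = group.sort_values("created_at")
    early, late = g.head(n_early), g.iloc[n_early:]
    return pd.Series({
        "early_bot_frac": (early["bot_score"] >= threshold).mean(),
        "late_bot_frac": (late["bot_score"] >= threshold).mean() if len(late) else float("nan"),
    })

summary = shares.groupby("article_id").apply(early_vs_late)
print(summary.mean())   # a higher early fraction suggests bots seed the spread
```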