Institution

YouGov

About: YouGov is an international online market research and data analytics firm headquartered in London, United Kingdom. It is known for research contributions in the topics of Politics and Voting. The organization has 23 authors who have published 42 publications receiving 888 citations. The organization is also known as YouGov plc and You Gov.
Topics: Politics, Voting, Population, Social media, Welfare

Papers
Journal ArticleDOI
TL;DR: The authors analyzed the structure and content of the political conversations that took place through the microblogging platform Twitter in the context of the 2011 Spanish legislative elections and the 2012 U.S. presidential elections and found that Twitter replicates most of the existing inequalities in public political exchanges.
Abstract: In this article, we analyze the structure and content of the political conversations that took place through the microblogging platform Twitter in the context of the 2011 Spanish legislative elections and the 2012 U.S. presidential elections. Using a unique database of nearly 70 million tweets collected during both election campaigns, we find that Twitter replicates most of the existing inequalities in public political exchanges. Twitter users who write about politics tend to be male, to live in urban areas, and to have extreme ideological preferences. Our results have important implications for future research on the relationship between social media and politics, since they highlight the need to correct for potential biases derived from these sources of inequality.
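The paper's closing point about correcting for bias can be made concrete with a small post-stratification sketch in Python. All of the cell shares and support rates below are invented for illustration; this is not the authors' actual procedure, and a real analysis would take population targets from census data and sample composition from the collected tweets.

# A minimal post-stratification sketch (pure Python, no external libraries).
# All numbers are hypothetical and chosen only to illustrate the mechanism.

# Hypothetical share of each gender x area cell in the population ...
population = {
    ("male", "urban"): 0.20,
    ("male", "rural"): 0.28,
    ("female", "urban"): 0.22,
    ("female", "rural"): 0.30,
}

# ... and in the (skewed) sample of politically active Twitter users.
sample = {
    ("male", "urban"): 0.45,
    ("male", "rural"): 0.15,
    ("female", "urban"): 0.25,
    ("female", "rural"): 0.15,
}

# Weight each cell by population share / sample share, so overrepresented
# groups (urban men here) count less and underrepresented groups count more.
weights = {cell: population[cell] / sample[cell] for cell in population}

# Hypothetical outcome measured per cell, e.g. share of users in the cell
# expressing support for a candidate.
support = {
    ("male", "urban"): 0.60,
    ("male", "rural"): 0.40,
    ("female", "urban"): 0.55,
    ("female", "rural"): 0.45,
}

raw = sum(sample[c] * support[c] for c in sample)
adjusted = sum(sample[c] * weights[c] * support[c] for c in sample)
print(f"unweighted: {raw:.3f}  post-stratified: {adjusted:.3f}")

Weighting each demographic cell by its population-to-sample ratio is the simplest form of post-stratification; more elaborate corrections (raking, model-based adjustment) follow the same logic.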

274 citations

Journal ArticleDOI
TL;DR: The authors examined several theories as to why many polls, particularly in the Upper Midwest, underestimated support for Donald Trump and found that voter turnout changed from 2012 to 2016 in ways that favored Trump, though there is only mixed evidence that misspecified likely voter models were a major cause of systematic polling error.
Abstract: The 2016 presidential election was a jarring event for polling in the United States. Preelection polls fueled high-profile predictions that Hillary Clinton's likelihood of winning the presidency was about 90 percent, with estimates ranging from 71 to over 99 percent. When Donald Trump was declared the winner of the presidency, there was a widespread perception that the polls failed. But did the polls fail? And if so, why? Those are among the central questions addressed by an American Association for Public Opinion Research (AAPOR) ad hoc committee. This paper presents the committee's analysis of the performance of preelection polls in 2016, how that performance compares to polling in prior elections, and the extent to which performance varied by poll design. In addition, the committee examined several theories as to why many polls, particularly in the Upper Midwest, underestimated support for Trump. The explanations for which the most evidence exists are a late swing in vote preference toward Trump and a pervasive failure to adjust for overrepresentation of college graduates (who favored Clinton). In addition, there is clear evidence that voter turnout changed from 2012 to 2016 in ways that favored Trump, though there is only mixed evidence that misspecified likely voter models were a major cause of the systematic polling error. Finally, there is little evidence that socially desirable (Shy Trump) responding was an important contributor to poll error.

Donald Trump's victory in the 2016 presidential election came as a shock to pollsters, political analysts, reporters, and pundits, including those inside Trump's own campaign (Jacobs and House 2016). Leading up to the election, three types of information widely discussed in the news media indicated that Democratic nominee Hillary Clinton was likely to win. First, polling data showed Clinton consistently leading the national popular vote, which is usually predictive of the winner (Erikson and Wlezien 2012), and leading, if narrowly, in Pennsylvania, Michigan, and Wisconsin, states that had voted Democratic for president six elections running. Second, early voting patterns in key states, particularly in Florida and North Carolina, were described in high-profile news stories as favorable for Clinton (Silver 2017a). Third, election forecasts from highly trained academics and data journalists declared that Clinton's probability of winning was about 90 percent, with estimates ranging from 71 to over 99 percent (Katz 2016). The day after the election, there was a palpable mix of surprise and outrage directed toward the polling community, as many felt that the industry had seriously misled the country about who would win (e.g., Byers 2016; Cillizza 2016; Easley 2016; Shepard 2016). The unexpected US outcome added to concerns about polling raised by errors in the 2014 referendum on Scottish independence, the 2015 UK general election, and the 2016 British referendum on European Union membership (Barnes 2016).

In the weeks after the 2016 US election, states certified their vote totals and researchers began assessing what happened with the polls. It became clear that a confluence of factors made the collective polling miss seem worse than it actually was, at least in some respects. The winner of the popular vote (Clinton) was different from the winner of the Electoral College (Trump). While such a split result is not without precedent, the full arc of US history suggests it is highly unlikely. With respect to polling, preelection estimates pointed to an Electoral College contest that was less certain than interpretations in the news media suggested (Trende 2016; Silver 2017b). Eight states with more than a third of the electoral votes needed to win the presidency had polls showing a lead of three points or less (Trende 2016). Trende noted that his organization's battleground-state poll averages had Clinton leading by a very slim margin in the Electoral College (272 to 266), putting Trump one state away from winning the election. Relatedly, the elections in the three states that broke unexpectedly for Trump (Pennsylvania, Michigan, and Wisconsin) were extremely close. More than 13.8 million people voted for president in those states, and Trump's combined margin of victory was 77,744 votes (0.56 percent). Even the most rigorously designed polls cannot reliably indicate the winner in contests with such razor-thin margins.

Even with these caveats about the election, a number of important questions surrounding polling remained. There was a systematic underestimation of support for Trump in state-level and, to a lesser extent, national polls. The causes of that pattern were not clear but potentially important for avoiding bias in future polls. Also, different types of polls (e.g., online versus live telephone) seemed to be producing somewhat different estimates. This raised questions about whether some types of polls were more accurate and why. More broadly, how did the performance of 2016 preelection polls compare to those of prior elections? These questions became the central foci for an ad hoc committee commissioned by the American Association for Public Opinion Research (AAPOR) in the spring of 2016. The committee was tasked with summarizing the accuracy of 2016 preelection polling, reviewing variation by different poll methodologies, and assessing performance through a historical lens. After the election, the committee decided to also investigate why polls, particularly in the Upper Midwest, underestimated support for Trump. The next section presents several of the main theories for why many polls underestimated Trump's support. This is followed by a discussion of the data and key metrics the committee used to perform its analyses. Subsequent sections of the paper present analyses motivated by the research questions posed here. The paper concludes with a discussion of the main findings and implications for the field.

Theories about Why Polls Underestimated Support for Trump

A number of theories were put forward as to why many polls missed in 2016. [1]

Nonresponse Bias and Deficient Weighting

Most preelection polls have single-digit response rates or feature an opt-in sample for which a response rate cannot be computed (Callegaro and DiSogra 2008; AAPOR 2016). While the link between low response rates and bias is not particularly strong (e.g., Merkle and Edelman 2002; Groves and Peytcheva 2008; Pew Research Center 2012, 2017a), such low rates do carry an increased risk of bias (e.g., Burden 2000). Of particular note, adults with weaker partisan strength (e.g., Keeter et al. 2006), lower educational levels (Battaglia, Frankel, and Link 2008; Chang and Krosnick 2009; Link et al. 2008; Pew Research Center 2012, 2017a), and anti-government views (U.S. Census Bureau 2015) are less likely to take part in surveys. Given the anti-elite themes of the Trump campaign, Trump voters may have been less likely than other voters to accept survey requests. If survey response was correlated with presidential vote and some factor not accounted for in the weighting, then a deficient weighting protocol could be one explanation for the polling errors.

[1] The original committee report (AAPOR 2017) also discussed ballot-order effects. That discussion has been dropped in this paper because there was not strong evidence that such effects were a major contributor to polling errors in 2016. There remains an important debate about the possibility that ballot order affected the outcome of the presidential race in several states, including Michigan, Wisconsin, and Florida.
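The education-weighting failure described above lends itself to a toy demonstration. The Python sketch below uses invented shares, not figures from the report, to show how weighting to population education shares shifts a poll estimate, and how signed error on the two-party margin is computed.

# A minimal sketch of the education-weighting issue (all figures invented).

# Suppose college graduates are 35% of the electorate but, because they
# respond to surveys at higher rates, 50% of an unweighted poll sample.
pop_share = {"college": 0.35, "no_college": 0.65}
samp_share = {"college": 0.50, "no_college": 0.50}

# Hypothetical two-party Clinton share within each education group.
clinton = {"college": 0.58, "no_college": 0.44}

# Unweighted estimate vs. an estimate weighted to the population shares.
unweighted = sum(samp_share[g] * clinton[g] for g in clinton)
weighted = sum(pop_share[g] * clinton[g] for g in clinton)

# Signed error on the two-party margin against a hypothetical outcome.
# If p is the two-party Clinton share, the margin is 2p - 1, so the error
# in the margin is twice the error in the share.
actual = 0.49
for name, est in [("unweighted", unweighted), ("education-weighted", weighted)]:
    margin_error = 2 * (est - actual)
    print(f"{name}: Clinton share {est:.3f}, margin error {margin_error:+.3f}")

With these invented numbers, the unweighted sample overstates the Clinton margin by four points while the education-weighted estimate is nearly unbiased, which is the shape of the problem the committee identified in many state polls.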

107 citations

Journal ArticleDOI
Joe Twyman
TL;DR: The author describes the historical development and current status of internet polling in Britain, focusing on the survey methods employed by YouGov, which has played a pioneering role in these developments, and discusses future innovations in online survey research.
Abstract: The past two decades have witnessed significant changes in how survey research is conducted in Britain. One of the most important innovations is the use of national internet surveys. Internet surveys are now used by the national media and the British Election Study to provide information on party support and the dynamics of public opinion on a wide variety of topics. The survey house YouGov has played a pioneering role in these developments. YouGov’s track record of “getting it right”, i.e., of providing accurate forecasts of the results of several major elections, has convinced many – not all – observers that online surveys will have a major role to play in future studies of voting and elections. This paper describes the historical development and current status of internet polling in Britain, focusing on the survey methods employed by YouGov. The paper concludes by discussing future innovations in online survey research.
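For readers unfamiliar with how quota-controlled online panels work in general, the sketch below illustrates the basic idea. It is a generic illustration, not YouGov's actual methodology; the panel, quota cells, and targets are all invented.

# A generic quota-sampling sketch for an online panel (pure Python).
import random

random.seed(42)

# A toy panel: each member has an age band and a gender.
panel = [
    {"id": i,
     "age": random.choice(["18-34", "35-54", "55+"]),
     "gender": random.choice(["male", "female"])}
    for i in range(10_000)
]

# Target number of completes per demographic cell, chosen so the achieved
# sample mirrors the population rather than the panel's own composition.
targets = {
    ("18-34", "male"): 140, ("18-34", "female"): 145,
    ("35-54", "male"): 165, ("35-54", "female"): 170,
    ("55+", "male"): 180, ("55+", "female"): 200,
}

# Invite panelists in random order and keep responses only until each
# cell's quota is filled, discarding surplus from over-supplied cells.
filled = {cell: 0 for cell in targets}
achieved = []
for member in random.sample(panel, len(panel)):
    cell = (member["age"], member["gender"])
    if filled[cell] < targets[cell]:
        filled[cell] += 1
        achieved.append(member)
    if len(achieved) == sum(targets.values()):
        break

print(f"achieved sample: {len(achieved)} of {sum(targets.values())} targeted")

In practice the achieved sample is then weighted on additional variables (e.g., past vote, region) before estimates are produced; the quota step only controls the variables it explicitly targets.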

103 citations

Journal ArticleDOI
TL;DR: There is considerable opportunity to improve the holistic nature of OA consultations, especially in the provision of information and the promotion of self-management strategies.
Abstract: Osteoarthritis (OA) is the fastest growing cause of disability worldwide. The aim of this study was to understand the impact of OA on individuals and to explore current treatment strategies. An online UK-wide survey of people with self-reported OA was conducted, composed of 52 questions exploring the impact of OA, diagnosis and treatment, the role of health professionals, and self-management. A total of 4,043 people were invited, with 2,001 responding (49% response; 56% women; mean age 65 years). Fifty-two percent reported that OA had a large impact on their lives. Fifteen percent of respondents had taken early retirement, on average 7.8 years earlier than planned. In consultations with general practitioners, only half reported a discussion of pain; fewer reported discussing their fears (21%) or management goals (15%). Nearly half (48%) reported not seeking medical help until pain was frequently unbearable. Oral analgesics (62%), topical therapies (47%), physiotherapy (38%), and steroid injections (28%) were commonly used. The majority (71%) reported varying degrees of persistent pain despite taking all prescribed medication. Although 64% knew that increasing exercise was important, only 36% acted on this knowledge; 87% of those who increased exercise found it beneficial. Over half had future concerns related to mobility (60%), maintaining independence (52%), and coping with everyday activities (51%). OA had significant individual economic impact, especially on employment. Current treatment strategies still leave most people in pain, with significant fears for the future. There is considerable opportunity to improve the holistic nature of OA consultations, especially in the provision of information and the promotion of self-management strategies.

74 citations


Network Information
Related Institutions (5)
London School of Economics and Political Science
35K papers, 1.4M citations

74% related

German University of Administrative Sciences, Speyer
250 papers, 4.3K citations

74% related

United States Postal Service
447 papers, 7.5K citations

72% related

Harvard University Press
51 papers, 2.7K citations

72% related

Performance Metrics
No. of papers from the Institution in previous years
Year    Papers
2021    5
2020    2
2019    2
2018    2
2017    2
2016    9