
Showing papers by "Frauke Kreuter published in 2019"


Journal ArticleDOI
TL;DR: Willingness to participate in passive mobile data collection is strongly influenced by the incentive promised for study participation but also by other study characteristics (sponsor, duration of data collection period, option to switch off the app) as well as respondent characteristics.
Abstract: The rising penetration of smartphones now gives researchers the chance to collect data from smartphone users through passive mobile data collection via apps. Examples of passively collected data include geolocation, physical movements, online behavior and browser history, and app usage. However, to passively collect data from smartphones, participants need to agree to download a research app to their smartphone. This leads to concerns about nonconsent and nonparticipation. In the current study, we assess the circumstances under which smartphone users are willing to participate in passive mobile data collection. We surveyed 1,947 members of a German nonprobability online panel who own a smartphone, using vignettes that described hypothetical studies where data are automatically collected by a research app on a participant's smartphone. The vignettes varied the levels of several dimensions of the hypothetical study, and respondents were asked to rate their willingness to participate in such a study. Willingness to participate in passive mobile data collection is strongly influenced by the incentive promised for study participation but also by other study characteristics (sponsor, duration of data collection period, option to switch off the app) as well as respondent characteristics (privacy and security concerns, smartphone experience).

83 citations


Journal ArticleDOI
TL;DR: An introduction to prominent tree-based machine learning methods is provided and the usage of these techniques in the context of modeling and predicting nonresponse in panel surveys is exemplified.
Abstract: Predictive modeling methods from the field of machine learning have become a popular tool across various disciplines for exploring and analyzing diverse data. These methods often do not require specific prior knowledge about the functional form of the relationship under study and are able to adapt to complex non-linear and non-additive interrelations between the outcome and its predictors while focusing specifically on prediction performance. This modeling perspective is beginning to be adopted by survey researchers in order to adjust or improve various aspects of data collection and/or survey management. To facilitate this strand of research, this paper (1) provides an introduction to prominent tree-based machine learning methods, (2) reviews and discusses previous and (potential) prospective applications of tree-based supervised learning in survey research, and (3) exemplifies the usage of these techniques in the context of modeling and predicting nonresponse in panel surveys.
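A minimal sketch of the kind of tree-based nonresponse modeling the paper discusses. This is not the authors' code: the features, data, and random-forest choice are illustrative assumptions, using synthetic data in place of real panel paradata.

```python
# Illustrative sketch: predicting panel nonresponse with a tree-based learner.
# All variables below are synthetic stand-ins for survey/paradata features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 2000
age = rng.integers(18, 80, n)            # respondent age
prior_contacts = rng.poisson(2, n)       # contact attempts in prior wave
urban = rng.integers(0, 2, n)            # urban vs. rural indicator

# Synthetic nonresponse propensity with a non-linear term and an interaction,
# the kind of structure tree-based methods can pick up without being specified.
logit = -1.5 + 0.03 * (age - 45) ** 2 / 100 + 0.4 * prior_contacts * urban
nonresponse = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = np.column_stack([age, prior_contacts, urban])
X_train, X_test, y_train, y_test = train_test_split(
    X, nonresponse, random_state=0
)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
auc = roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1])
print(f"Held-out AUC: {auc:.2f}")
```

The predicted propensities could then feed into targeted fieldwork interventions or nonresponse adjustments, which is the survey-management use case the paper reviews.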

59 citations


Journal ArticleDOI
TL;DR: The usability of samples with unknown selection probabilities for various research questions is discussed and research strategies developed to overcome sampling limitations are discussed.
Abstract: The long-standing approach of using probability samples in social science research has come under pressure through eroding survey response rates, advanced methodology, and easier access to large amounts of data.

34 citations


Journal ArticleDOI
TL;DR: A significant interaction between placement and framing of the linkage consent question on the consent rate is found and guidance is provided on the optimal administration of the linkage consent question.
Abstract: Numerous surveys link interview data to administrative records, conditional on respondent consent, in order to explore new and innovative research questions. Optimizing the linkage consent rate is a critical step toward realizing the scientific advantages of record linkage and minimizing the risk of linkage consent bias. Linkage consent rates have been shown to be particularly sensitive to certain design features, such as where the consent question is placed in the questionnaire and how the question is framed. However, the interaction of these design features and their relative contributions to the linkage consent rate have never been jointly studied, raising the practical question of which design feature (or combination of features) should be prioritized from a consent rate perspective. We address this knowledge gap by reporting the results of a placement and framing experiment embedded within separate telephone and Web surveys. We find a significant interaction between placement and framing of the linkage consent question on the consent rate. The effect of placement was larger than the effect of framing in both surveys, and the effect of framing was only evident in the Web survey when the consent question was placed at the end of the questionnaire. Both design features had negligible impact on linkage consent bias for a series of administrative variables available for consenters and non-consenters. We conclude this research note with guidance on the optimal administration of the linkage consent question.

28 citations


Journal ArticleDOI
TL;DR: The results show that overall mean estimates on the web are more biased compared to the telephone mode, and that the web does not consistently outperform the telephone mode for sensitive questions.
Abstract: More and more surveys are conducted online. While web surveys are generally cheaper and tend to have lower measurement error in comparison to other survey modes, especially for sensitive questions, potential advantages might be offset by larger nonresponse bias. This article compares the data quality in a web survey administration to another common mode of survey administration, the telephone. The unique feature of this study is the availability of administrative records for all sampled individuals in combination with a random assignment of survey mode. This specific design allows us to investigate and compare potential bias in survey statistics due to 1) nonresponse error, 2) measurement error, and 3) combined bias of these two error sources and hence, an overall assessment of data quality for two common modes of survey administration, telephone and web. Our results show that overall mean estimates on the web are more biased compared to the telephone mode. Nonresponse and measurement bias tend to reinforce each other in both modes, with nonresponse bias being somewhat more pronounced in the web mode. While measurement error bias tends to be smaller in the web survey implementation, interestingly, our results also show that the web does not consistently outperform the telephone mode for sensitive questions.

15 citations


Journal ArticleDOI
21 Aug 2019-PLOS ONE
TL;DR: It is found that there is no relationship between trust and cooperation; this non-relationship may be rationalized in different ways, which provides important lessons for the study of the trust-behavior nexus beyond the particular situation the authors study empirically.
Abstract: Trust is praised by many social scientists as the foundation of functioning social systems owing to its assumed connection to cooperative behavior. The existence of such a link is still subject to debate. In the present study, we first highlight important conceptual issues within this debate. Second, we examine previous evidence, highlighting several issues. Third, we present findings from an original experiment, in which we tried to identify a "real" situation that allowed us to measure both trust and cooperation. People's expectations and behavior when they decide to share (or not) their data represents such a situation, and we make use of corresponding data. We found that there is no relationship between trust and cooperation. This non-relationship may be rationalized in different ways which, in turn, provides important lessons for the study of the trust-behavior nexus beyond the particular situation we study empirically.

11 citations


Journal ArticleDOI
TL;DR: It is found that emphasizing linkage benefits related to "time savings" yielded a small, albeit statistically significant, improvement in the overall linkage consent rate, and this benefit argument was particularly effective among "busy" respondents.
Abstract: Survey researchers are increasingly seeking opportunities to link interview data with administrative records. However, obtaining consent from all survey respondents (or certain subgroups) remains a barrier to performing record linkage in many studies. We experimentally investigated whether emphasizing different benefits of record linkage to respondents in a telephone survey of employee working conditions improves respondents' willingness to consent to linkage of employment administrative records relative to a neutral consent request. We found that emphasizing linkage benefits related to "time savings" yielded a small, albeit statistically significant, improvement in the overall linkage consent rate (86.0 percent) relative to the neutral consent request (83.8 percent). The time savings argument was particularly effective among "busy" respondents. A second benefit argument related to "improved study value" did not yield a statistically significant improvement in the linkage consent rate (84.4 percent) relative to the neutral request. This benefit argument was also ineffective among the subgroup of respondents considered to be most likely to have a self-interest in the study outcomes. The article concludes with a brief discussion of the practical implications of these findings and offers suggestions for possible research extensions.

4 citations


Journal ArticleDOI
TL;DR: In this article, the authors identify a "real" situation that allowed them to measure both trust and cooperation, and they make use of corresponding data to yield insights that are relevant for the trust behavior nexus beyond the particular situation they study empirically.
Abstract: Trust is praised by many social scientists as the foundation of functioning social systems owing to its assumed connection to cooperative behavior. The existence of such a link is still subject to debate. In the present study, we first highlight important conceptual issues within this debate. Second, we examine previous evidence, highlighting several issues. Third, we present findings from an original experiment, in which we tried to identify a "real" situation that allowed us to measure both trust and cooperation. People's expectations and behavior when they decide to share (or not) their data represents such a situation, and we make use of corresponding data. Our study yields insights that are relevant for the trust-behavior nexus beyond the particular situation we study empirically.

4 citations


Journal ArticleDOI
TL;DR: This paper examined the influence of interviewers on the estimation of regression coefficients from survey data and found no evidence that interviewer effects on the response propensity have a large impact on the estimated regression parameters.
Abstract: This article examines the influence of interviewers on the estimation of regression coefficients from survey data. First, we present theoretical considerations with a focus on measurement errors and nonresponse errors due to interviewers. Then, we show via simulation which of several nonresponse and measurement error scenarios has the biggest impact on the estimate of a slope parameter from a simple linear regression model. When response propensity depends on the dependent variable in a linear regression model, bias in the estimated slope parameter is introduced. We find no evidence that interviewer effects on the response propensity have a large impact on the estimated regression parameters. We do find, however, that interviewer effects on the predictor variable of interest explain a large portion of the bias in the estimated regression coefficient.
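The mechanism described above, that selection on the dependent variable biases an OLS slope, can be demonstrated with a small simulation. This is an illustrative sketch, not the authors' simulation design: the selection model and parameter values below are assumptions chosen only to make the bias visible.

```python
# Illustrative simulation: if the probability of responding depends on the
# dependent variable y, the slope estimated from respondents alone is biased.
import numpy as np

rng = np.random.default_rng(0)
n, true_slope = 100_000, 1.0
x = rng.normal(size=n)
y = true_slope * x + rng.normal(size=n)

# Nonresponse mechanism: larger y -> lower probability of responding.
p_respond = 1 / (1 + np.exp(y))
respond = rng.random(n) < p_respond

full = np.polyfit(x, y, 1)[0]                        # slope, full sample
observed = np.polyfit(x[respond], y[respond], 1)[0]  # slope, respondents only
print(f"full-sample slope: {full:.2f}, respondents-only slope: {observed:.2f}")
```

The respondents-only slope is attenuated toward zero relative to the full-sample slope, which illustrates why response propensities that depend on the outcome are the problematic scenario the article highlights.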

3 citations


Journal ArticleDOI
01 Nov 2019
TL;DR: In this paper, the authors describe a training program that has been run for the last three years by the University of Maryland, New York University, and the University of Chicago, with partners such as Ohio State University, Indiana University-Purdue University Indianapolis, and the University of Missouri.
Abstract: From education to health to criminal justice, government regulation and policy decisions have important effects on social and individual experiences. New data science tools applied to data created by government agencies have the potential to enhance these meaningful decisions. However, certain institutional barriers limit the realization of this potential. First, we need to provide systematic training of government employees in data analytics. Second, we need a careful rethinking of the rules and technical systems that protect data in order to expand access to linked individual-level data across agencies and jurisdictions, while maintaining privacy. Here, we describe a program that has been run for the last three years by the University of Maryland, New York University, and the University of Chicago, with partners such as Ohio State University, Indiana University-Purdue University Indianapolis, and the University of Missouri. The program—which trains government employees on how to perform applied data analysis with confidential individual-level data generated through administrative processes, and extensive project-focused work—provides both online and onsite training components. Training takes place in a secure environment. The aim is to help agencies tackle important policy problems by using modern computational and data analysis methods and tools. We have found that this program accelerates the technical and analytical development of public sector employees. As such, it demonstrates the potential value of working with individual-level data across agency and jurisdictional lines. We plan to build on this initial success by creating a larger community of academic institutions, government agencies, and foundations that can work together to increase the capacity of governments to make more efficient and effective decisions.
Keywords: training programs, evidence-based policy, confidential data, administrative data research facility, government data

3 citations


Book ChapterDOI
01 Jan 2019
TL;DR: In this chapter, post-survey interviewer observations of respondents and their behaviors during the interviewing process are analyzed for the European Social Survey (ESS) and the National Survey of Family Growth (NSFG).
Abstract: This chapter focuses on a different type of paradata that could provide information about breakdowns of the survey response process: post-survey interviewer observations of respondents and their behaviors during the interviewing process. This chapter analyzes interviewer observations from two surveys – the European Social Survey (ESS) and the National Survey of Family Growth (NSFG). In the ESS, interviewers recorded the five post-survey interviewer observations on a five-point scale (ranging from “never” to “very often”). In the NSFG, the dependent variables measuring indirect indicators of data quality included one paradata variable and four proxy indicators of measurement error. In the NSFG, the types of observations contributed to defining the quality classes, suggesting that both respondent behaviors and the interviewing environment can affect response quality; this makes sense given the sensitive subject matter about sexual health. In the ESS, only the observations of respondent behaviors were found to vary across the derived quality classes.