
Showing papers by Annelies G. Blom published in 2022


Journal ArticleDOI
TL;DR: In this paper, the authors test whether measurement equivalence holds between multiple probability and nonprobability online panels in Australia and Germany, using equivalence testing in the Confirmatory Factor Analysis framework to assess six multi-item scales (three in each country).
Abstract: Nonprobability online panels are commonly used in the social sciences as a fast and inexpensive way of collecting data, in contrast to more expensive probability-based panels. Given their ubiquitous use in social science research, a great deal of research is being undertaken to assess the properties of nonprobability panels relative to probability ones. Much of this research focuses on selection bias; however, there is considerably less research assessing the comparability (or equivalence) of measurements collected from respondents in nonprobability and probability panels. This article contributes to addressing this research gap by testing whether measurement equivalence holds between multiple probability and nonprobability online panels in Australia and Germany. Using equivalence testing in the Confirmatory Factor Analysis framework, we assessed measurement equivalence in six multi-item scales (three in each country). We found significant measurement differences between probability and nonprobability panels and within them, even after weighting by demographic variables. These results suggest that combining or comparing multi-item scale data from different sources should be done with caution. We conclude with a discussion of the possible causes of these findings, their implications for survey research, and some guidance for data users.
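For orientation only, and not taken from the article: multi-group equivalence testing in the Confirmatory Factor Analysis framework is conventionally written as the measurement model below, with increasingly restrictive equality constraints imposed across groups g (here, the panels). The notation is generic textbook notation, not the authors'.

```latex
% Standard multi-group CFA measurement model for item vector x in group g
% (generic textbook notation, not the article's):
x^{(g)} = \tau^{(g)} + \Lambda^{(g)}\,\xi^{(g)} + \delta^{(g)}
% Nested invariance levels, each tested against the one before:
%   configural: same pattern of factor loadings in every group
%   metric:     \Lambda^{(g)} = \Lambda                      for all g
%   scalar:     \Lambda^{(g)} = \Lambda,\ \tau^{(g)} = \tau  for all g
```

Scalar invariance is the usual prerequisite for comparing latent means across groups, which is why its failure between (and within) the panels implies that combining or comparing multi-item scale data across sources is risky.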

3 citations


Journal ArticleDOI
TL;DR: The Mannheim Corona Study (MCS), as described in this paper, was a longitudinal probability-based online survey with a daily rotating panel design that ran from March 20 through July 10, 2020.
Abstract: The outbreak of the COVID-19 pandemic has led to a vast increase in the demand for fast, frequent, and multi-faceted data to study the impact of the pandemic on people's lives. Existing data collection infrastructures had to be adapted quickly during the early phase of the pandemic to meet this data demand. Our research group contributed to this by conducting the Mannheim Corona Study (MCS), a longitudinal probability-based online survey with a daily rotating panel design that took place from March 20 through July 10, 2020. The fast-and-frequent panel data collection design of the MCS had numerous consequences for designing its questionnaires and choosing its measurement instruments. This included designing new instruments on the fly in the ever-changing pandemic environment, making efficient use of limited questionnaire space, and deciding on measurement frequencies in a structured manner under uncertain external conditions. In this report, we document the MCS approach to choosing measurement instruments fit for the purpose of fast and frequent data collection during the early phase of COVID-19 in Germany. We particularly highlight three examples of measurement instruments in the MCS and reflect on their measurement properties.

3 citations


Journal ArticleDOI
TL;DR: In this paper, the authors show that response rates are significantly higher when offering early bird cash incentives and that fieldwork progresses considerably faster, leading to fewer reminders and greater cost-effectiveness.
Abstract: The literature on the effects of incentives in survey research is vast and covers a diversity of survey modes. The mode of probability-based online panels, however, is still young, and so is research into how best to recruit sample units into the panel. This paper sheds light on the effectiveness of a specific type of incentive in this context: a monetary incentive that is paid conditionally upon panel registration within two weeks of receiving the initial postal mail invitation. We tested early bird cash incentives in a large-scale recruitment experiment for the German Internet Panel (GIP) in 2018. We find that panel response rates are significantly higher when offering early bird cash incentives and that fieldwork progresses considerably faster, leading to fewer reminders and greater cost-effectiveness. Furthermore, sample representativeness is similarly high with or without early bird incentives.
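As a purely illustrative sketch (the counts and condition sizes below are invented, not results from the GIP experiment), this is how one might test whether panel registration rates differ between an early bird condition and a control condition:

```python
# Hypothetical sketch: compare panel registration rates between an early
# bird incentive condition and a control condition. All counts are
# invented for illustration; they are NOT the GIP experiment's results.
from statsmodels.stats.proportion import proportions_ztest

registered = [620, 540]   # panel registrations: [early bird, control]
invited = [2000, 2000]    # gross sample invited in each condition

# Two-sample z-test for the difference between two proportions.
stat, pvalue = proportions_ztest(count=registered, nobs=invited)

rate_early = registered[0] / invited[0]
rate_control = registered[1] / invited[1]
print(f"early bird: {rate_early:.1%}, control: {rate_control:.1%}")
print(f"z = {stat:.2f}, p = {pvalue:.4f}")
```

A cost-effectiveness comparison would additionally weigh the incentive payments against the reminder mailings saved, which the sketch above does not attempt.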

1 citation


Journal ArticleDOI
TL;DR: In this article, the authors show how an alternative parametrization of the random components in multilevel models, so-called separate coding, delivers valuable insights into differential interviewer effects for specific groups of sample members.
Abstract: Despite the importance of interviewer effects for survey participation, the literature is sparse on how face-to-face interviewers differentially affect specific groups of sample units. This paper demonstrates how an alternative parametrization of the random components in multilevel models, so-called separate coding, delivers valuable insights into differential interviewer effects for specific groups of sample members. In the example of a face-to-face recruitment interview for a probability-based online panel, we detect small interviewer effects regarding survey participation for non-Internet households, whereas we find sizable interviewer effects for Internet households. We derive practical guidance for survey practitioners to address differential interviewer effects based on the proposed variance decomposition.
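A hedged sketch of what such a separate-coding parametrization can look like in standard multilevel notation (the symbols are illustrative, not the paper's): rather than a common random intercept plus a random slope for household type, each household group receives its own random interviewer effect, so group-specific interviewer variances can be read off directly.

```latex
% Illustrative separate coding of random interviewer effects
% (interviewer j, sample unit i; y_{ij} = 1 for participation).
% N_{ij}, I_{ij} are 0/1 indicators for non-Internet / Internet households.
\mathrm{logit}\,\Pr(y_{ij} = 1)
  = \beta_N N_{ij} + \beta_I I_{ij} + u_{Nj} N_{ij} + u_{Ij} I_{ij},
\qquad
\begin{pmatrix} u_{Nj} \\ u_{Ij} \end{pmatrix}
  \sim \mathcal{N}\!\left(
    \begin{pmatrix} 0 \\ 0 \end{pmatrix},
    \begin{pmatrix} \sigma^2_{uN} & \sigma_{uNI} \\
                    \sigma_{uNI} & \sigma^2_{uI}  \end{pmatrix}
  \right)
```

Under this parametrization, \sigma^2_{uN} and \sigma^2_{uI} quantify interviewer variability in participation separately for non-Internet and Internet households, which is how findings like "small effects for non-Internet households, sizable effects for Internet households" can be read off a single model.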