
Showing papers in "Survey research methods in 2013"


Journal ArticleDOI
TL;DR: This paper reviews three key technology-related trends: big data, non-probability samples, and mobile data collection, focusing on the implications of these trends for survey research and the research profession.
Abstract: In this paper I review three key technology-related trends: 1) big data, 2) non-probability samples, and 3) mobile data collection. I focus on the implications of these trends for survey research and the research profession. With regard to big data, I review a number of concerns that need to be addressed, and argue for a balanced and careful evaluation of the role that big data can play in the future. I argue that these developments are unlikely to replace traditional survey data collection, but will supplement surveys and expand the range of research methods. I also argue for the need for the survey research profession to adapt to changing circumstances.

135 citations


Journal ArticleDOI
TL;DR: In this paper, the authors focused on the effects of different devices (PC or cell phone) in Web surveys on the respondents' willingness to report sensitive information and found significant differences in the reporting of alcohol consumption by mode, consistent with their hypothesis.
Abstract: A large number of findings in survey research suggest that misreporting in sensitive questions is situational and can vary in relation to context. The methodological literature demonstrates that social desirability biases are less prevalent in self-administered surveys, particularly in Web surveys, when there is no interviewer and less risk of presenting oneself in an unfavorable light. Since there is a growing number of users of mobile Web browsers, we focused our study on the effects of different devices (PC or cell phone) in Web surveys on the respondents’ willingness to report sensitive information. To reduce selection bias, we carried out a two-wave cross-over experiment using a volunteer online access-panel in Russia. Participants were asked to complete the questionnaire in both survey modes: PC and mobile Web survey. We hypothesized that features of mobile Web usage may affect response accuracy and lead to more socially desirable responses compared to the PC Web survey mode. We found significant differences in the reporting of alcohol consumption by mode, consistent with our hypothesis. But other sensitive questions did not show similar effects. We also found that the presence of familiar bystanders had an impact on the responses, while the presence of strangers did not have any significant effect in either survey mode. Contrary to expectations, we did not find evidence of a positive impact of completing the questionnaire at home and trust in data confidentiality on the level of reporting. These results could help survey practitioners to design and improve data quality in Web surveys completed on different devices.

81 citations


Journal ArticleDOI
TL;DR: In this article, the authors evaluated three factors that are under the control of the survey designer to assess whether they impact respondents' likelihood of linkage consent: 1) the wording of the consent question; 2) the placement of consent question and 3) interviewer attributes (e.g., age, education, experience, expectations).
Abstract: Record linkage is becoming more important as survey budgets are tightening while at the same time demands for more statistical information are rising. Not all respondents consent to linking their survey answers to administrative records, threatening inferences made from linked data sets. So far, several studies have identified respondent-level attributes that are correlated with the likelihood of providing consent (e.g., age, education), but these factors are outside the control of the survey designer. In the present study three factors that are under the control of the survey designer are evaluated to assess whether they impact respondents' likelihood of linkage consent: 1) the wording of the consent question; 2) the placement of the consent question; and 3) interviewer attributes (e.g., attitudes toward data sharing and consent, experience, expectations). Data from an experiment were used to assess the impact of the first two, and data from an interviewer survey that was administered prior to the start of data collection are used to examine the third. The results show that in a telephone setting: 1) indicating time savings in the wording of the consent question had no effect on the consent rate; 2) placement of the consent question at the beginning of the questionnaire achieved a higher consent rate than at the end; and 3) interviewers who themselves would be willing to consent to data linkage requests were more likely to obtain linkage consent from respondents.

56 citations


Journal ArticleDOI
TL;DR: In this article, a multilevel approach highlights the importance of the interviewer for the consent decision: the empty model shows an intra-class correlation of 55%, which can be reduced to 35% in a full model including interviewer variables.
Abstract: Linking survey data with administrative records is becoming more common in the social sciences in recent years. Regulatory frameworks require the respondent's consent to this procedure in most cases. Similar to non-response, non-consent may lead to selective samples and could pose a problem when using the combined data for analyses. Thus investigating the selectivity and the determinants of the consent decision is important in order to find ways to reduce non-consent. Adapting the survey participation model by Groves and Couper (1998), this paper identifies different areas influencing the respondents' willingness to consent. In addition to control variables at the individual and household level, two further areas of interest are included: the interview situation and the characteristics of the interviewer. A multilevel approach highlights the importance of the interviewer for the consent decision: the empty model shows an intra-class correlation of 55%, which can be reduced to 35% in a full model including interviewer variables. An additional analysis including measures of interviewer performance shows that there are further unobserved interviewer characteristics that influence the respondent's consent decision. The results suggest that although respondent and household characteristics are important for the consent decision, a large part of the variation in the data is explained by the interviewers. This finding stresses the importance of the interviewers not only as an integral part in data collection efforts, but also as the direct link to gain a respondent's consent for linking survey data with administrative records.
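The intra-class correlation reported above (the share of variance in the consent decision attributable to interviewers, before and after adding interviewer variables) comes from a multilevel "empty" model. A minimal sketch of the underlying idea, using a one-way ANOVA estimator on simulated data (all names and parameter values below are illustrative, not taken from the paper):

```python
import random
from statistics import mean, variance

def icc_oneway(groups):
    """One-way ANOVA estimate of the intra-class correlation (ICC).

    groups: list of lists, one inner list of observations per cluster
    (here: one list of respondent outcomes per interviewer).
    Assumes a balanced design (equal cluster sizes).
    """
    n = len(groups[0])                              # observations per cluster
    group_means = [mean(g) for g in groups]
    ms_between = n * variance(group_means)          # between-cluster mean square
    ms_within = mean(variance(g) for g in groups)   # within-cluster mean square
    return (ms_between - ms_within) / (ms_between + (n - 1) * ms_within)

# Simulate 50 interviewers with 20 respondents each; interviewer effects
# and residuals both have unit variance, so the true ICC is 0.5.
random.seed(42)
data = []
for _ in range(50):
    u = random.gauss(0, 1)  # interviewer-level effect
    data.append([u + random.gauss(0, 1) for _ in range(20)])

print(round(icc_oneway(data), 2))  # estimate should be near the true 0.5
```

In practice such ICCs are estimated with a random-intercept model (respondents nested in interviewers) rather than this ANOVA shortcut, but the interpretation is the same: an ICC of 0.55 means over half the outcome variance sits at the interviewer level.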

55 citations


Journal ArticleDOI
TL;DR: The authors examined the role of interviewers' experience, attitudes, personality traits and inter-personal skills in determining survey co-operation, conditional on contact, and found evidence of effects of experience, attitude, personality trait and interpersonal skills on cooperation rates.
Abstract: This paper examines the role of interviewers' experience, attitudes, personality traits and inter-personal skills in determining survey co-operation, conditional on contact. We take the perspective that these characteristics influence interviewers' behaviour and hence influence the doorstep interaction between interviewer and sample member. Previous studies of the association between doorstep behaviour and co-operation have not directly addressed the role of personality traits and inter-personal skills and most have been based on small samples of interviewers. We use a large sample of 842 face-to-face interviewers working for a major survey institute and analyse co-operation outcomes for over 100,000 cases contacted by those interviewers over a 13-month period. We find evidence of effects of experience, attitudes, personality traits and inter-personal skills on co-operation rates. Several of the effects of attitudes and inter-personal skills are explained by differences in experience, though some independent effects remain. The role of attitudes, personality and skills seems to be greatest for the least experienced interviewers.

46 citations


Journal ArticleDOI
TL;DR: This paper showed that the problems of the short version of the PVQ exist in the full 40-item PVQ as well and, based on SEM analyses of the items of the full PVQ, proposed measures of 15 more narrowly defined values with good discriminant validity.
Abstract: Schwartz's theory of human values, as operationalized using different instruments such as the Portrait Values Questionnaire (PVQ), was confirmed by multiple studies using Smallest Space Analysis (SSA). Because of its success, a short version of the PVQ was introduced in the European Social Survey (ESS). However, initial tests using Confirmatory Factor Analysis (CFA) pointed to low discriminant validity of the 10 basic values: The correlations between values next to each other in the two-dimensional space described by SSA were close to or greater than 1. In response, one research stream suggested combining the factors with low discriminant validity. Another stream suggested that the problem was not low discriminant validity but rather misspecifications in the model. Analyses of the short Portrait Values Questionnaire of the ESS confirmed the latter view. This paper demonstrates that the problems of the short version of the PVQ exist in the full 40-item PVQ as well. Based on SEM analyses of the items of the full PVQ, we propose that it can provide measures of 15 more narrowly defined values with good discriminant validity. Our proposal respects the conceptual complexity of the values theory while avoiding contamination of composite scores. It can be expected that the improved measurement of 15 values will increase their predictive power. The presence of some single items suggests the extension of the value theory and scales to encompass more than 15 values. Implications for further development of the scale are drawn.

33 citations


Journal ArticleDOI
TL;DR: Alternative ways of informing respondents about capture of paradata and seeking consent for their use are examined, showing that requiring such explicit consent may reduce survey participation without adequately informing survey respondents about what paradata are and why they are being used.
Abstract: Survey researchers are making increasing use of paradata - such as keystrokes, clicks, and timestamps - to evaluate and improve survey instruments but also to understand respondents and how they answer surveys. Since the introduction of paradata, researchers have been asking whether and how respondents should be informed about the capture and use of their paradata while completing a survey. In a series of three vignette-based experiments, we examine alternative ways of informing respondents about capture of paradata and seeking consent for their use. In all three experiments, any mention of paradata lowers stated willingness to participate in the hypothetical surveys. Even the condition where respondents were asked to consent to the use of paradata at the end of an actual survey resulted in a significant proportion declining. Our research shows that requiring such explicit consent may reduce survey participation without adequately informing survey respondents about what paradata are and why they are being used.

32 citations


Journal ArticleDOI
TL;DR: In this article, the authors investigate the impact of different interviewers, countries and some respondent characteristics on interview length in the fifth round of the European Social Survey, and show substantial differences between countries with regard to interview length and reinforce that differences among countries are based on much more than just the differences between languages.
Abstract: The question ‘How long will the interview take?’ is frequently asked by interviewers during training and by respondents during the initial doorstep interaction. In this paper, we investigate the impact of different interviewers, countries and some respondent characteristics on interview length in the fifth round of the European Social Survey. The results show substantial differences between countries with regard to interview length and reinforce that differences between countries are based on much more than just the differences between languages. The results support the obvious suggestion that fewer applicable questions reduce the interview length. Further, interviewing older respondents takes more time, and the duration also increases if a respondent more frequently asks for clarification. The huge impact of interviewers on interview length is the most remarkable result. In all countries, the difference between interviewers accounts for a significant and substantial part of the variance in interview length. More detailed fieldwork monitoring in each country is necessary in order to understand these differences. The results also clearly illustrate the necessity for investment in training, monitoring and follow-up of interviewers in each country participating in a cross-national survey.

26 citations


Journal ArticleDOI
TL;DR: This article found that respondents give different answers to attitude questions when the question is worded positively (X is good), negatively (X is bad), or on a bipolar scale.
Abstract: For decades, survey researchers have known that respondents give different answers to attitude questions worded positively (X is good. Agree-Disagree), negatively (X is bad. Agree-Disagree) or on a bipolar scale (X is bad-good). This makes survey answers hard to interpret, especially since findings on exactly how the answers are affected are conflicting. In the current paper, we present twelve studies in which the effect of question polarity was measured for a set of thirteen contrastive adjectives. In each study, the same adjectives were used so the generalizability of wording effects across studies could be examined for each word pair. Results show that for five of the word pairs an effect of question wording can be generalized. The direction of these effects is largely consistent: respondents generally give the same answers to positive and bipolar questions, but they are more likely to disagree with negative questions than to agree with positive questions or to choose the positive side of the bipolar scale. In other words, respondents express their opinions more positively when the question is worded negatively. Even though answers to the three wording alternatives sometimes differ, results also show that reliable answers can be obtained with all three wording alternatives. So, for survey practice, these results suggest that all three wording alternatives may be used for attitude measurement.

20 citations


Journal ArticleDOI
TL;DR: Studying data about trust and attitudes towards immigration, this paper shows that measurement equivalence holds across a face-to-face and a web survey done in the Netherlands (2008-2009).
Abstract: Measurement equivalence is a pre-requisite to be able to make comparisons across groups. In this paper we are interested in testing measurement equivalence across respondents answering surveys done using different modes of data collection. Indeed, different modes of data collection have specific characteristics that may create measurement non-equivalence across modes. If this is so, data collected in different modes cannot be compared. This would be problematic since, in order to respond to new challenges, like costs and time pressure, more and more often researchers choose to use different modes to collect their data across time, across surveys, and across countries. Studying data about trust and attitudes towards immigration, this paper shows that measurement equivalence holds across a face-to-face and a web survey done in the Netherlands (2008-2009). Moreover, the quality estimates of the Composite Scores are quite high and pretty similar in the two surveys for the four concepts considered.

18 citations


Journal ArticleDOI
TL;DR: In this article, a mixed-mode experiment parallel to the European Social Survey (ESS) fourth round (2008/2009) was conducted to compare data-quality of different data-collection modes.
Abstract: In order to compare data quality of different data-collection modes, multitrait-multimethod (MTMM) experiments have been implemented in a mixed-mode experiment parallel to the European Social Survey (ESS) fourth round (2008/2009). Special interest lies in measurement effects between the modes, which refer to the pure impact of a data-collection mode on the quality. Nevertheless, mere comparison between quality estimates of the different modes does not allow drawing conclusions about measurement effects. Indeed, measurement effects may be completely confounded with selection effects, which refer to differences in respondent compositions across the modes. However, by comparing the mixed-mode data with the main ESS data and treating the dataset of origin as an instrumental variable, some conditional measurement effects and selection effects can be disentangled. This paper provides a preliminary exploratory analysis of this approach. The results generally yield low to fair measurement effects, while the selection effects on some items are rather large.

Journal ArticleDOI
TL;DR: In 2011, the National Household Education Surveys Program Field Test used a two-phase ABS design with a mail screener to identify households with eligible children and a mail topical questionnaire administered to parents of sampled children to collect measures of interest, as discussed by the authors.
Abstract: Address-based sampling (ABS) with a two-phase data collection approach has emerged as a promising alternative to random digit dial (RDD) surveys for studying specific subpopulations in the United States. In 2011, the National Household Education Surveys Program Field Test used a two-phase ABS design with a mail screener to identify households with eligible children and a mail topical questionnaire administered to parents of sampled children to collect measures of interest. Experiments with prepaid cash incentives and special mail delivery methods were applied in both phases. For the screener, sampled addresses were randomly designated to receive either $2 or $5 in the initial mailing. During the topical phase, incentives (ranging from $0 to $20) and delivery methods (First Class Mail or Priority Mail) were assigned randomly but depended on how quickly the household had responded to the screener. The paper first evaluates the effects of incentives on response rates, and then examines incentive levels for attracting the hard-to-reach groups and improving sample composition. The impact of incentives on data collection cost is also examined.

Journal ArticleDOI
TL;DR: New paradata collected on telephone interview breakoffs are used to describe their prevalence, associated field effort, the instrument sections and questions on which they occur, their source - whether respondent-initiated, interviewer-initiated, or related to telephone problems - and associations with respondent and interviewer characteristics.
Abstract: Nearly 23% of all telephone interviews in the most recently completed wave of the Panel Study of Income Dynamics break off at least once, requiring multiple sessions to complete the interview. Given this high rate, a study was undertaken to better understand the causes and consequences of temporary breakoffs in a computer-assisted telephone interview setting. The majority of studies examining breakoffs have been conducted in the context of self-administered web surveys. The present study uses new paradata collected on telephone interview breakoffs to describe their prevalence, associated field effort, the instrument sections and questions on which they occur, their source - whether respondent-initiated, interviewer-initiated, or related to telephone problems - and associations with respondent and interviewer characteristics. The results provide information about the survey response process and suggest a set of recommendations for instrument design and interviewer training, as well as additional paradata that should be collected to provide more insight into the breakoff phenomenon.

Journal ArticleDOI
TL;DR: One way to overcome nonparticipation in surveys, and the non-response bias it entails, is to use access panels as a sampling frame, which promises higher response rates.
Abstract: Household and individual surveys are gaining importance in policy support and other areas. However, the rising number of surveys leads to reduced response rates. One way to overcome the problem of nonparticipation in surveys, and the non-response bias it entails, is to use access panels as a sampling frame. Though this promises higher response rates, the self-selection process at the recruitment stage creates the need for a bias correction. This can be done directly when extrapolating the estimates to the population of interest or by using response propensity scores. The latter requires a correct model specification at the recruitment stage.
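The propensity-score correction described above amounts to weighting each panel respondent by the inverse of their estimated probability of having been recruited. A deterministic toy sketch (every number below is invented for illustration; real applications estimate the propensities from a model):

```python
# Toy illustration of correcting self-selection bias with inverse
# response-propensity weights. All numbers are invented for illustration.

# Two population strata of equal size; their recruitment rates into the
# access panel differ, so panel respondents over-represent stratum A.
strata = {
    "A": {"value": 10.0, "respondents": 80, "propensity": 0.8},
    "B": {"value": 20.0, "respondents": 20, "propensity": 0.2},
}

true_mean = 15.0  # equal population shares: (10 + 20) / 2

# Naive (unweighted) estimate from the self-selected panel sample:
n = sum(s["respondents"] for s in strata.values())
naive = sum(s["respondents"] * s["value"] for s in strata.values()) / n

# Weight each respondent by the inverse of their recruitment propensity:
num = sum(s["respondents"] / s["propensity"] * s["value"] for s in strata.values())
den = sum(s["respondents"] / s["propensity"] for s in strata.values())
weighted = num / den

print(naive)     # 12.0 -- biased toward the over-represented stratum
print(weighted)  # 15.0 -- recovers the true population mean
```

The correction is exact here only because the propensities are known; as the abstract notes, with estimated propensities the adjustment is only as good as the recruitment-stage model specification.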

Journal ArticleDOI
TL;DR: In this paper, the authors explore classification of parental leave takers in EU-LFS and show that classification rules differ cross-nationally: in some countries, parents are considered inactive, in others they are employed but temporarily not working.
Abstract: In survey research the parental leave beneficiaries are usually coded as either employed or inactive. An exception is the European Labor Force Survey (EU-LFS), which includes parental leave among other forms of being employed but temporarily not working. This paper explores classification of parental leave takers in EU-LFS. We show that classification rules differ cross-nationally: in some countries parental leave takers are considered inactive, in others -- employed but temporarily not working. In particular in the Czech Republic, Estonia, Hungary and Slovakia the EU-LFS data classify the beneficiaries as inactive. We estimate the number of mothers on parental leave in these countries and show that EU-LFS employment rates of women aged 18-40 are biased downwards 2-7 percentage points; for mothers of children aged 0-2 the bias reaches 12-45 percentage points. Our study shows the limited comparability of EU-LFS employment rates and warns about possible bias in cross-national studies.

Journal ArticleDOI
TL;DR: In this paper, the authors examined the recruitment from the 2006 German microcensus (MC) using socio-economic and demographic characteristics available in both the access panel and the MC to explore the selectivity of the recruitment process.
Abstract: In 2004 Germany's Federal Statistical Office (Statistisches Bundesamt, Destatis) started the recruitment of an access panel (AP) from participants in the German microcensus (MC), a large household survey. This access panel, a pool of persons willing to take part in voluntary surveys, currently serves as the sampling frame for the DE-SILC, the German subsample of the European Union Statistics on Income and Living Conditions. Sampling from panelists rather than directly from the population promised lower survey costs due to easy access to the AP participants and higher response rates. While participation in the MC is mandatory by law, joining the AP is voluntary. Approx. 10 percent of the MC households agree to enter the panel. In this work we examine the recruitment from the 2006 MC using socio-economic and demographic characteristics available in both the AP and the MC to explore the selectivity of the recruitment process. We also discuss the implications of German privacy protection legislation for this analysis. Finally we consider the longitudinal use of the AP in a methodological discussion on the question of whether samples from the AP can be regarded as probability samples from the general population.