Journal ISSN: 2325-0984

Journal of Survey Statistics and Methodology 

Oxford University Press
About: Journal of Survey Statistics and Methodology is an academic journal published by Oxford University Press. The journal publishes mainly in the areas of computer science and population studies. Its ISSN identifier is 2325-0984. Over its lifetime, the journal has published 342 papers, which have received 3,872 citations.

Papers published on a yearly basis

Papers
Journal Article · DOI
TL;DR: A wide range of non-probability designs exist and are being used in various settings, including case control studies, clinical trials, evaluation research, and more.
Abstract: Survey researchers routinely conduct studies that use different methods of data collection and inference. But for at least the past 60 years, the probability-sampling framework has been used in most surveys. More recently, concerns about coverage and nonresponse, coupled with rising costs, have led some to wonder whether non-probability sampling methods might be an acceptable alternative, at least under some conditions (Groves 2006; Savage and Burrows 2007). A wide range of non-probability designs exist and are being used in various settings, including case control studies, clinical trials, evaluation research

539 citations

Journal Article · DOI
TL;DR: The authors used the Total Survey Error (TSE) framework to highlight important historical developments and advances in the study of interviewer effects on a variety of important survey process outcomes, including sample frame coverage, contact and recruitment of potential respondents, survey measurement, and data processing.
Abstract: A rich and diverse literature exists on the effects that human interviewers can have on different aspects of the survey data collection process. This research synthesis uses the Total Survey Error (TSE) framework to highlight important historical developments and advances in the study of interviewer effects on a variety of important survey process outcomes, including sample frame coverage, contact and recruitment of potential respondents, survey measurement, and data processing. Included in the scope of the synthesis is research literature that has focused on explaining variability among interviewers in these effects and the different types of variable errors that they can introduce, which can ultimately affect the efficiency of survey estimates. We first consider common tasks with which human interviewers are often charged and then use the TSE framework to organize and synthesize the literature discussing the variable errors that interviewers can introduce when attempting to execute each task. Based on our synthesis, we identify key gaps in knowledge and then use these gaps to motivate an organizing model for future research investigating explanations for interviewer effects on different aspects of the survey data collection process.

166 citations

Journal Article · DOI
TL;DR: The authors replicated and extended a 2008 meta-analysis which found, across 45 experimental comparisons, that web surveys had an 11-percentage-point lower response rate than other survey modes. Their updated analysis of 114 comparisons found almost no change (a 12-percentage-point gap), with prenotifications, sample recruitment strategy, the survey's solicitation mode, the type of target population, the number of contact attempts, and the country in which the survey was conducted moderating the magnitude of the difference.
Abstract: Do web surveys still yield lower response rates compared with other survey modes? To answer this question, we replicated and extended a meta-analysis done in 2008 which found that, based on 45 experimental comparisons, web surveys had an 11 percentage points lower response rate compared with other survey modes. Fundamental changes in internet accessibility and use since the publication of the original meta-analysis would suggest that people’s propensity to participate in web surveys has changed considerably in the meantime. However, in our replication and extension study, which comprised 114 experimental comparisons between web and other survey modes, we found almost no change: web surveys still yielded lower response rates than other modes (a difference of 12 percentage points in response rates). Furthermore, we found that prenotifications, the sample recruitment strategy, the survey’s solicitation mode, the type of target population, the number of contact attempts, and the country in which the survey was conducted moderated the magnitude of the response rate differences. These findings have substantial implications for web survey methodology and operations.

109 citations

Journal Article · DOI
TL;DR: The conditions under which nonprobability sample surveys may provide accurate results in theory and empirical evidence on which types of samples produce the highest accuracy in practice are described.
Abstract: There is an ongoing debate in the survey research literature about whether and when probability and nonprobability sample surveys produce accurate estimates of a larger population. Statistical theory provides a justification for confidence in probability sampling as a function of the survey design, whereas inferences based on nonprobability sampling are entirely dependent on models for validity. This article reviews the current debate about probability and nonprobability sample surveys. We describe the conditions under which nonprobability sample surveys may provide accurate results in theory and discuss empirical evidence on which types of samples produce the highest accuracy in practice. From these theoretical and empirical considerations, we derive best-practice recommendations and outline paths for future research.

106 citations

Journal Article · DOI
TL;DR: This paper shows how both the AIC and BIC criteria can be modified to remain valid under complex sampling, illustrated with data from NHANES and from a case–control study.
Abstract: Model-selection criteria such as AIC and BIC are widely used in applied statistics. In recent years, there has been a huge increase in modeling data from large complex surveys, and a resulting demand for versions of AIC and BIC that are valid under complex sampling. In this paper, we show how both criteria can be modified to handle complex samples. We illustrate with two examples, the first using data from NHANES and the second using data from a case–control study.

95 citations
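For reference, the classical criteria that the paper modifies have a simple closed form. The sketch below computes standard AIC and BIC, plus one illustrative design-effect adjustment; the `design_adjusted_aic` function and the `deff` penalty inflation are assumptions for illustration, not the paper's exact derivation.

```python
import numpy as np

def aic_bic(loglik, k, n):
    """Classical AIC and BIC for a model with k parameters fit to n observations."""
    aic = -2.0 * loglik + 2.0 * k
    bic = -2.0 * loglik + k * np.log(n)
    return aic, bic

def design_adjusted_aic(pseudo_loglik, k, deff):
    """A hypothetical survey-adjusted AIC: the ordinary log-likelihood is
    replaced by a survey-weighted pseudo-log-likelihood, and the parameter
    penalty is inflated by an estimated design effect `deff`. This mirrors
    the general idea of design-based information criteria, not the paper's
    specific formula."""
    return -2.0 * pseudo_loglik + 2.0 * k * deff

# Example: a model with log-likelihood -100, 3 parameters, 50 observations.
aic, bic = aic_bic(-100.0, 3, 50)   # aic = 206.0
adj = design_adjusted_aic(-100.0, 3, deff=1.5)  # adj = 209.0
```

Under simple random sampling `deff` is 1 and the adjusted criterion reduces to the classical AIC; clustering and unequal weighting typically push `deff` above 1, strengthening the penalty against overfitting.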

Performance Metrics

Number of papers published by the journal in previous years:
Year   Papers
2023   34
2022   49
2021   76
2020   65
2019   23
2018   25