Journal ArticleDOI

Evaluating survey quality in health services research: a decision framework for assessing nonresponse bias.

01 Jun 2013-Health Services Research (Health Research & Educational Trust)-Vol. 48, Iss: 3, pp 913-930
TL;DR: It is important that the quality of survey data be considered when assessing a given study's relative contribution to the literature, and the potential effects of nonresponse bias should be considered both before and after survey administration.
Abstract: Objective: To address the issue of nonresponse as problematic and offer appropriate strategies for assessing nonresponse bias. Study Design: A review of current strategies used to assess the quality of survey data and the challenges associated with these strategies is provided, along with appropriate post-data-collection techniques that researchers should consider. Principal Findings: Response rates are an incomplete assessment of survey data quality, and quick reactions to response rates should be avoided. Based on a five-question decision-making framework, we offer potential ways to assess nonresponse bias along with a description of the advantages and disadvantages of each. Conclusions: It is important that the quality of survey data be considered to assess the relative contribution to the literature of a given study. Authors and funding agencies should consider the potential effects of nonresponse bias both before and after survey administration and report the results of assessments of nonresponse bias in addition to response rates.


Citations
Journal ArticleDOI
TL;DR: In this paper, the authors collected data from 19 general-population studies from 13 European countries and investigated international variation in CKD prevalence by age, sex, and presence of diabetes, hypertension, and obesity.
Abstract: CKD prevalence estimation is central to CKD management and prevention planning at the population level. This study estimated CKD prevalence in the European adult general population and investigated international variation in CKD prevalence by age, sex, and presence of diabetes, hypertension, and obesity. We collected data from 19 general-population studies from 13 European countries. CKD stages 1-5 were defined as eGFR <60 ml/min per 1.73 m(2) or albuminuria >30 mg/g, and CKD stages 3-5 as eGFR <60 ml/min per 1.73 m(2). CKD prevalence was age- and sex-standardized to the population of the 27 Member States of the European Union (EU27). We found considerable differences in both CKD stages 1-5 and CKD stages 3-5 prevalence across European study populations. The adjusted CKD stages 1-5 prevalence varied between 3.31% (95% confidence interval [95% CI], 3.30% to 3.33%) in Norway and 17.3% (95% CI, 16.5% to 18.1%) in northeast Germany. The adjusted CKD stages 3-5 prevalence varied between 1.0% (95% CI, 0.7% to 1.3%) in central Italy and 5.9% (95% CI, 5.2% to 6.6%) in northeast Germany. The variation in CKD prevalence stratified by diabetes, hypertension, and obesity status followed the same pattern as the overall prevalence. In conclusion, this large-scale attempt to carefully characterize CKD prevalence in Europe identified substantial variation in CKD prevalence that appears to be due to factors other than the prevalence of diabetes, hypertension, and obesity.

387 citations

Journal ArticleDOI
TL;DR: This AMEE Guide explains response rate calculations and discusses methods for improving response rates to surveys as a whole (unit nonresponse) and to questions within a survey (item nonresponse).
Abstract: Robust response rates are essential for effective survey-based strategies. Researchers can improve survey validity by addressing both response rates and nonresponse bias. In this AMEE Guide, we explain response rate calculations and discuss methods for improving response rates to surveys as a whole (unit nonresponse) and to questions within a survey (item nonresponse). Finally, we introduce the concept of nonresponse bias and provide simple methods to measure it.
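The unit/item distinction in the abstract lends itself to a quick sketch (not taken from the cited Guide itself; all counts below are invented for illustration):

```python
# Sketch of the two rates the AMEE Guide distinguishes:
# unit nonresponse (whole surveys not returned) and item nonresponse
# (questions skipped within returned surveys). Counts are invented.

def unit_response_rate(completes: int, eligible: int) -> float:
    """Share of eligible sampled individuals who returned the survey."""
    return completes / eligible

def item_response_rate(answered: int, completes: int) -> float:
    """Share of returned surveys that answered a given item."""
    return answered / completes

print(f"unit RR = {unit_response_rate(390, 1000):.2f}")  # 0.39
print(f"item RR = {item_response_rate(351, 390):.2f}")   # 0.90
```

Real reporting standards define several variants of these rates (for example, how partial completes and unknown-eligibility cases are counted), so the simple ratios above are only the starting point.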

177 citations


Cites background or methods from "Evaluating survey quality in health..."

  • ...We now know that the number of nonrespondents and the probability of nonresponse bias are very poorly related (r = 0.3) (Groves 2006; Halbesleben & Whitman 2013)....


  • ...An alternative decision tree and examples of additional methods are also available (Groves 2006; Halbesleben & Whitman 2013)....


Journal ArticleDOI
TL;DR: It is nonresponse bias that is the focus of this editorial and it is also the subject of the paper by Halbesleben and Whitman (2013) that this editorial accompanies.
Abstract: Survey researchers are rightly concerned with measuring the level of potential bias in estimates generated from the surveys. Bias in estimates can result from measurement error, processing/editing error, coverage error, and nonresponse error (Federal Committee on Statistical Methodology [FCSM] 2001). It is nonresponse bias that is the focus of this editorial, and it is also the subject of the paper by Halbesleben and Whitman (2013) that this editorial accompanies. Nonresponse bias is a perennial concern for survey researchers, as not everyone we attempt to include in our surveys responds. And to the extent that nonrespondents are different from respondents on the key variables the survey was designed to study, these differences could bias the very estimates the survey was designed to make. Because we often have very little information about those who do not respond, survey researchers have long focused on the response rate as a key indicator of survey quality. The assumption is that the more nonresponse there is in a survey, the higher the potential for nonresponse bias. Unfortunately, this assumption has served as a problematic diversion for survey research from the real concern of how survey nonresponse potentially biases survey estimates.
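The editorial's point, that the response rate alone does not determine bias, follows from the standard deterministic expression for the bias of a respondent mean: the nonresponse rate times the gap between respondent and nonrespondent means. A minimal sketch with invented numbers:

```python
# Sketch of the standard deterministic nonresponse-bias expression for a
# respondent mean: bias = nonresponse rate * (respondent mean - nonrespondent
# mean). All numbers below are invented for illustration.

def nonresponse_bias(resp_mean: float, nonresp_mean: float,
                     n_resp: int, n_nonresp: int) -> float:
    """Bias of the respondent mean relative to the full-sample mean."""
    nonresp_rate = n_nonresp / (n_resp + n_nonresp)
    return nonresp_rate * (resp_mean - nonresp_mean)

# 30% response rate, but respondents resemble nonrespondents: small bias.
low_rr_bias = nonresponse_bias(4.1, 4.0, n_resp=300, n_nonresp=700)   # ~0.07
# 70% response rate, but a large respondent/nonrespondent gap: larger bias.
high_rr_bias = nonresponse_bias(4.1, 3.1, n_resp=700, n_nonresp=300)  # ~0.30
print(f"{low_rr_bias:.2f} vs {high_rr_bias:.2f}")
```

As the second call shows, a higher response rate does not by itself guarantee lower bias; the respondent/nonrespondent gap matters just as much.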

170 citations

Journal ArticleDOI
TL;DR: The results suggest that more proficient players or players more involved in the game may be more likely to participate in online surveys, and caution is needed in the interpretation of studies based on online surveys that used a self-selection recruitment procedure.
Abstract: Background: The number of medical studies performed through online surveys has increased dramatically in recent years. Despite their numerous advantages (eg, sample size, facilitated access to individuals presenting stigmatizing issues), selection bias may exist in online surveys. However, evidence on the representativeness of self-selected samples in online studies is patchy. Objective: Our objective was to explore the representativeness of a self-selected sample of online gamers using online players’ virtual characters (avatars). Methods: All avatars belonged to individuals playing World of Warcraft (WoW), currently the most widely used online game. Avatars’ characteristics were defined using various games’ scores, reported on the WoW’s official website, and two self-selected samples from previous studies were compared with a randomly selected sample of avatars. Results: We used scores linked to 1240 avatars (762 from the self-selected samples and 478 from the random sample). The two self-selected samples of avatars had higher scores on most of the assessed variables (except for guild membership and exploration). Furthermore, some guilds were overrepresented in the self-selected samples. Conclusions: Our results suggest that more proficient players or players more involved in the game may be more likely to participate in online surveys. Caution is needed in the interpretation of studies based on online surveys that used a self-selection recruitment procedure. Epidemiological evidence on the reduced representativeness of samples in online surveys is warranted. [J Med Internet Res 2014;16(7):e164]

158 citations


Cites background from "Evaluating survey quality in health..."

  • ...Furthermore, response bias is considered as an “individual” characteristic of a given sample; replication of the results on different samples could be considered as a way to increase the validity of a given study conclusion [26]....


Journal ArticleDOI
TL;DR: In the 2009/2010 English General Practice Patient Survey, the survey was mailed to 5.56 million randomly sampled adults registered with a National Health Service general practice (representing 99% of England's adult population). In all, 2,169,718 people responded (39% response rate), including 27,497 people who described themselves as gay, lesbian, or bisexual, as mentioned in this paper.
Abstract: The health and healthcare of sexual minorities have recently been identified as priorities for health research and policy. To compare the health and healthcare experiences of sexual minorities with heterosexual people of the same gender, adjusting for age, race/ethnicity, and socioeconomic status, we conducted multivariate analyses of observational data from the 2009/2010 English General Practice Patient Survey. The survey was mailed to 5.56 million randomly sampled adults registered with a National Health Service general practice (representing 99% of England's adult population). In all, 2,169,718 people responded (39% response rate), including 27,497 people who described themselves as gay, lesbian, or bisexual. Measures were two measures of health status (fair/poor overall self-rated health and self-reported presence of a longstanding psychological condition) and four measures of poor patient experiences (no trust or confidence in the doctor, poor/very poor doctor communication, poor/very poor nurse communication, fairly/very dissatisfied with care overall). Sexual minorities were two to three times more likely to report having a longstanding psychological or emotional problem than heterosexual counterparts (age-adjusted: 5.2% heterosexual, 10.9% gay, 15.0% bisexual for men; 6.0% heterosexual, 12.3% lesbian, and 18.8% bisexual for women; p < 0.001 for each). Sexual minorities were also more likely to report fair/poor health (adjusted: 19.6% heterosexual, 21.8% gay, 26.4% bisexual for men; 20.5% heterosexual, 24.9% lesbian, and 31.6% bisexual for women; p < 0.001 for each). Adjusted for sociodemographic characteristics and health status, sexual minorities were about one and one-half times more likely than heterosexual people to report unfavorable experiences with each of four aspects of primary care. Little of the overall disparity reflected concentration of sexual minorities in low-performing practices. Sexual minorities suffer both poorer health and worse healthcare experiences. Efforts should be made to recognize the needs and improve the experiences of sexual minorities. Examining patient experience disparities by sexual orientation can inform such efforts.

157 citations

References
Posted Content
TL;DR: Valid predictions for the direction of nonresponse bias were obtained from subjective estimates and extrapolations in an analysis of mail survey data from published studies and the use of extrapolation led to substantial improvements over a strategy of not using extrapolation.
Abstract: Valid predictions for the direction of nonresponse bias were obtained from subjective estimates and extrapolations in an analysis of mail survey data from published studies. For estimates of the magnitude of bias, the use of extrapolations led to substantial improvements over a strategy of not using extrapolations.

9,589 citations


"Evaluating survey quality in health..." refers background in this paper

  • ...Arguably the most common approach to assessing and addressing nonresponse bias has been to examine how one’s sample matches known characteristics of the population (Armstrong and Overton 1977; Beebe et al. 2011)....


  • ...Arguably the most common approach to assessing and addressing nonresponse bias has been to examine how one’s sample matches known characteristics of the population (Armstrong and Overton 1977; Beebe et al. 2011). There are two different approaches one could take for such comparisons. Most common is a comparison with general population data (e.g., Census data), typically focusing on demographics. For example, in their study of nurses in South Carolina, Ma, Samuels, and Alexander (2003) compared their sample to the population of registered nurses in the state and found no significant differences in gender, age, level of education, and geographic location....


  • ...Arguably the most common approach to assessing and addressing nonresponse bias has been to examine how one’s sample matches known characteristics of the population (Armstrong and Overton 1977; Beebe et al. 2011). There are two different approaches one could take for such comparisons. Most common is a comparison with general population data (e.g., Census data), typically focusing on demographics. For example, in their study of nurses in South Carolina, Ma, Samuels, and Alexander (2003) compared their sample to the population of registered nurses in the state and found no significant differences in gender, age, level of education, and geographic location. Alternatively, Beebe et al. (2011) compared their sample with another data source from the same population, matching the data at the participant (rather than group) level....

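The comparison-to-population approach quoted above can be sketched as a check of achieved sample shares against known population figures; all shares, and the 0.05 flagging threshold, are invented for illustration:

```python
# Sketch of comparing the achieved sample's demographic shares with known
# population figures (e.g., Census data). Shares and the ad hoc 0.05
# threshold are invented.

population = {"female": 0.52, "age_65_plus": 0.18, "urban": 0.80}
sample     = {"female": 0.58, "age_65_plus": 0.12, "urban": 0.81}

flags = {}
for key in population:
    gap = sample[key] - population[key]
    flags[key] = abs(gap) > 0.05         # flag cells that diverge notably
    print(f"{key:12s} gap {gap:+.2f}  {'check' if flags[key] else 'ok'}")
```

In practice a formal test (for example, a chi-square goodness-of-fit test against the population distribution) would replace the ad hoc threshold.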

BookDOI
26 Aug 2002

6,148 citations


"Evaluating survey quality in health..." refers background in this paper

  • ...One option is to weight data to account for differences in the sample and population to “push” the sample data closer to the population (Little and Rubin 2002)....

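The weighting option quoted above can be sketched as simple cell weighting, where each respondent's weight is the population share of their cell divided by the achieved sample share; all shares and cell means below are invented:

```python
# Sketch of simple cell weighting: weight = population share / sample share,
# which "pushes" weighted estimates toward the population mix. Shares and
# cell means are invented for illustration.

pop_share    = {"male": 0.49, "female": 0.51}   # known population mix
sample_share = {"male": 0.35, "female": 0.65}   # achieved sample mix

weights = {g: pop_share[g] / sample_share[g] for g in pop_share}

# Unweighted vs weighted mean of a key variable, given per-cell means.
cell_mean = {"male": 3.0, "female": 4.0}
unweighted = sum(sample_share[g] * cell_mean[g] for g in pop_share)
weighted = sum(sample_share[g] * weights[g] * cell_mean[g] for g in pop_share)
print(f"unweighted {unweighted:.2f}, weighted {weighted:.2f}")  # 3.65 -> 3.51
```

Note that weighting only corrects for differences on the variables used to build the weights; it cannot fix bias driven by unobserved differences between respondents and nonrespondents.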

Journal ArticleDOI
TL;DR: The authors showed that statistical expressions of nonresponse bias can be translated into causal models to guide hypotheses about when nonresponse causes bias, and that the linkage between nonresponse rates and nonresponse bias can be absent.
Abstract: Many surveys of the U.S. household population are experiencing higher refusal rates. Nonresponse can, but need not, induce nonresponse bias in survey estimates. Recent empirical findings illustrate cases when the linkage between nonresponse rates and nonresponse biases is absent. Despite this, professional standards continue to urge high response rates. Statistical expressions of nonresponse bias can be translated into causal models to guide hypotheses about when nonresponse causes bias. Alternative designs to measure nonresponse bias exist, providing different but incomplete information about the nature of the bias. A synthesis of research studies estimating nonresponse bias shows the bias often present. A logical question at this moment in history is what advantage probability sample surveys have if they suffer from high nonresponse rates. Since postsurvey adjustment for nonresponse requires auxiliary variables, the answer depends on the nature of the design and the quality of the auxiliary variables.

2,290 citations


"Evaluating survey quality in health..." refers background or methods or result in this paper

  • ...Groves (2006) presents an alternative equation that is mathematically different, yet would yield similar conclusions about bias....


  • ...Researchers commonly increase sample size to compensate for nonresponse bias; however, such action does not ensure a representative sample (Groves 2006; Groves and Peytcheva 2008)....


  • ...In addition, we echo Groves’ (2006) call for reporting multiple methods for assessing nonresponse bias in any given study....


  • ...Although there is a rich literature concerning nonresponse bias in the literature (Groves 2006; Rogelberg and Stanton 2007), we translate those works into a decision making framework to assist researchers in choosing a strategy for assessing nonresponse bias....


Journal ArticleDOI
TL;DR: Although several mail survey techniques are associated with higher response rates, response rates to published mail surveys tend to be moderate, and investigators, journal editors, and readers should devote more attention to assessments of bias, and less to specific response rate thresholds.

2,154 citations


"Evaluating survey quality in health..." refers background in this paper

  • ...As evidence of the perceived importance of response rates, authors have attempted to determine benchmarks for response rates by examining the average response rate across a body of research (Asch, Jedrziewski, and Christakis 1997; Sitzia and Wood 1998; Cummings, Savitz, and Konrad 2001)....
